Advances in Computers
Volume 56
Advances in Computers
EDITED BY
MARVIN V. ZELKOWITZ
Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, Maryland
VOLUME 56
ACADEMIC PRESS
An imprint of Elsevier Science
Amsterdam Boston London New York Oxford Paris San Diego San Francisco Singapore Sydney Tokyo
This book is printed on acid-free paper.

Copyright © 2002, Elsevier Science (USA), except Chapter 5. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

The appearance of the code at the bottom of the first page of a chapter in this book indicates the Publisher's consent that copies of the chapter may be made for personal or internal use of specific clients. This consent is given on the condition, however, that the copier pay the stated per copy fee through the Copyright Clearance Center, Inc. (222 Rosewood Drive, Danvers, Massachusetts 01923), for copying beyond that permitted by Sections 107 or 108 of the U.S. Copyright Law. This consent does not extend to other kinds of copying, such as copying for general distribution, for advertising or promotional purposes, for creating new collective works, or for resale. Copy fees for pre-2002 chapters are as shown on the title pages. If no fee code appears on the title page, the copy fee is the same as for current chapters. ISSN#/2002 $35.00.

Explicit permission from Academic Press is not required to reproduce a maximum of two figures or tables from an Academic Press chapter in another scientific or research publication provided that the material has not been credited to another source and that full credit to the Academic Press chapter is given.

Academic Press, An imprint of Elsevier Science, 84 Theobald's Road, London WC1X 8RR, http://www.academicpress.com
Academic Press, An imprint of Elsevier Science, 525 B Street, Suite 1900, San Diego, California 92101-4495, USA, http://www.academicpress.com

ISBN 0-12-012156-5

A catalogue record for this book is available from the British Library.

Typeset by Devi Information Systems, Chennai, India. Printed and bound in Great Britain by MPG Books Ltd, Bodmin, Cornwall.

02 03 04 05 06 07 MP 9 8 7 6 5 4 3 2 1
Contents

Contributors
Preface
Software Evolution and the Staged Model of the Software Lifecycle
Keith H. Bennett, Vaclav T. Rajlich, and Norman Wilde
1. Introduction
2. Initial Development
3. Evolution—The Key Stage
4. Servicing
5. Phase-Out and Closedown
6. Case Studies
7. Software Change and Comprehension
8. Sustaining Software Value
9. Future Directions: Ultra Rapid Software Evolution
10. Conclusions
Acknowledgments
References
Embedded Software
Edward A. Lee
1. What is Embedded Software?
2. Just Software on Small Computers?
3. Limitations of Prevailing Software Engineering Methods
4. Actor-Oriented Design
5. Examples of Models of Computation
6. Choosing a Model of Computation
7. Heterogeneous Models
8. Component Interfaces
9. Frameworks Supporting Models of Computation
10. Conclusions
Acknowledgments
References
Empirical Studies of Quality Models in Object-Oriented Systems
Lionel C. Briand and Jürgen Wüst
1. Introduction
2. Overview of Existing Studies
3. Data Analysis Methodology
4. Summary of Results
5. Conclusions
Appendix A
Appendix B: Glossary
References
Software Fault Prevention by Language Choice: Why C Is Not My Favorite Language
Richard J. Fateman
1. Introduction and Background
2. Why Use C?
3. Why Does Lisp Differ from C?
4. Root Causes of Flaws: A Lisp Perspective
5. Arguments against Lisp, and Responses
6. But Why Is C Used by Lisp Implementors?
7. Conclusion
Appendix 1: Cost of Garbage Collection
Appendix 2: Isn't C Free?
Acknowledgments and Disclaimers
References
Quantum Computing and Communication
Paul E. Black, D. Richard Kuhn, and Carl J. Williams
1. Introduction
2. The Surprising Quantum World
3. The Mathematics of Quantum Mechanics
4. Quantum Computing
5. Quantum Communication and Cryptography
6. Physical Implementations
7. Conclusions
Appendix
References
Exception Handling
Peter A. Buhr, Ashif Harji, and W. Y. Russell Mok
1. Introduction
2. EHM Objectives
3. Execution Environment
4. EHM Overview
5. Handling Models
6. EHM Features
7. Handler Context
8. Propagation Models
9. Propagation Mechanisms
10. Exception Partitioning
11. Matching
12. Handler Clause Selection
13. Preventing Recursive Resuming
14. Multiple Executions and Threads
15. Asynchronous Exception Events
16. Conclusions
Appendix: Glossary
References
Breaking the Robustness Barrier: Recent Progress on the Design of Robust Multimodal Systems
Sharon Oviatt
1. Introduction to Multimodal Systems
2. Robustness Issues in the Design of Recognition-Based Systems
3. Future Directions: Breaking the Robustness Barrier
4. Conclusion
Acknowledgments
References
Using Data Mining to Discover the Preferences of Computer Criminals
Donald E. Brown and Louise F. Gunderson
1. Introduction
2. The Target Selection Process of Criminals
3. Predictive Modeling of Crime
4. Discovering the Preferences of the Agents
5. Methodology
6. Testing with Synthetic Data
7. Conclusions
References
AUTHOR INDEX
SUBJECT INDEX
CONTENTS OF VOLUMES IN THIS SERIES
Contributors

Keith H. Bennett is a full professor and former chair in the Department of Computer Science at the University of Durham. His research interests include new software architectures that support evolution. Bennett received a Ph.D. in computer science from the University of Manchester. He is a chartered engineer and a Fellow of the British Computer Society and IEE. Contact him at [email protected].

Paul E. Black is a computer scientist in the Information Technology Laboratory of the National Institute of Standards and Technology (NIST). He has published papers on software configuration control, networks and queuing analysis, formal methods, testing, and software verification. He has nearly 20 years of industrial experience in developing software for IC design and verification, assuring software quality, and managing business data processing. Black earned an M.S. in Computer Science from the University of Utah, and a Ph.D. from Brigham Young University.

Lionel C. Briand is a professor with the Department of Systems and Computer Engineering, Carleton University, Ottawa, Canada, where he founded the Software Quality Engineering Laboratory (http://www.sce.carleton.ca/Squall/Squall.htm). Lionel has been on the program, steering, and organization committees of many international IEEE conferences and the editorial boards of several scientific journals. His research interests include object-oriented analysis and design, inspections and testing in the context of object-oriented development, quality assurance and control, project planning and risk analysis, and technology evaluation.

Donald Brown is a Professor and Chair of the Department of Systems Engineering, University of Virginia. Prior to joining the University of Virginia, Dr. Brown served as an officer in the U.S. Army and later worked at Vector Research, Inc. on projects in medical information processing and multisensor surveillance systems. He is currently a Fellow at the National Institute of Justice Crime Mapping Research Center. Dr. Brown is a Fellow of the IEEE and a past President of the IEEE Systems, Man, and Cybernetics Society. He is the recipient of the Outstanding Contribution Award from that society and the IEEE Millennium Medal. He is also a past Chairman of the Operations Research Society of America
Technical Section on Artificial Intelligence, and he is the recipient of the Outstanding Service Award from that society. Dr. Brown received his B.S. degree from the U.S. Military Academy, West Point, M.S. and M.Eng. degrees in operations research from the University of California—Berkeley, and a Ph.D. degree in industrial and operations engineering from the University of Michigan—Ann Arbor. His research focuses on data fusion, decision support, and predictive modeling with applications to security and safety. His email address is [email protected].

Peter A. Buhr received B.Sc. Hons., M.Sc., and Ph.D. degrees in computer science from the University of Manitoba in 1976, 1978, and 1985, respectively. He is currently an Associate Professor in the Department of Computer Science, University of Waterloo, Canada. His research interests include concurrency, concurrent profiling/debugging, persistence, and polymorphism. He is the principal designer and implementer for the μSystem project, a thread library for C, the μC++ project, extending C++ with threads, and the MVD project, a collection of software tools to monitor, visualize, and debug concurrent μC++ programs. Dr. Buhr is a member of the Association for Computing Machinery.

Richard J. Fateman received a B.S. degree in physics and mathematics from Union College, Schenectady, NY, and a Ph.D. degree in applied mathematics from Harvard University, Cambridge, MA, in 1966 and 1971, respectively. From 1971 to 1974 he taught in the Department of Mathematics at the Massachusetts Institute of Technology, where he also participated in research on symbolic computation and the Macsyma system. Since 1974 he has been at the University of California at Berkeley, where he served as Associate Chair for Computer Science of the Department of Electrical Engineering and Computer Sciences from 1987 to 1990. His research interests include the design and analysis of symbolic mathematical algorithms and systems, implementation of programming languages, and the design of computer environments for scientific programming. He has also done research and teaching in document image analysis. Further details may be found at http://www.cs.berkeley.edu/~fateman.

Louise Gunderson is a research assistant in the Department of Systems Engineering, University of Virginia. Prior to joining the University of Virginia, Ms. Gunderson worked for the U.S. Environmental Protection Agency as an Enforcement Specialist. Ms. Gunderson received a B.A. degree in chemistry from the University of California—Berkeley, a B.A. degree in biology from the University of Colorado—Denver, and an M.S. degree in environmental science from the University of Colorado—Denver. She is currently a Ph.D. candidate in the Department of Systems Engineering, University of Virginia. Her interests involve the
modeling and simulation of natural and artificial systems. Her email address is
[email protected].

Ashif Harji received BMath and MMath degrees in computer science from the University of Waterloo in 1997 and 1999, respectively. He is currently a Ph.D. student at the University of Waterloo, Waterloo, Canada. His research interests include concurrency, real-time, scheduling, and number theory.

D. Richard Kuhn is a computer scientist in the Information Technology Laboratory of the National Institute of Standards and Technology (NIST). His primary technical interests are in software testing and assurance, and information security. Before joining NIST in 1984, he worked as a systems analyst with NCR Corporation and the Johns Hopkins University Applied Physics Laboratory. He received an M.S. in computer science from the University of Maryland at College Park, and an M.B.A. from the College of William and Mary.

Edward A. Lee is a Professor in the Electrical Engineering and Computer Science Department at University of California—Berkeley. His research interests center on design, modeling, and simulation of embedded, real-time computational systems. He is director of the Ptolemy project at UC—Berkeley. He is co-author of four books and numerous papers. His B.S. is from Yale University (1979), his M.S. from MIT (1981), and his Ph.D. from UC—Berkeley (1986). From 1979 to 1982 he was a member of technical staff at Bell Telephone Laboratories in Holmdel, NJ, in the Advanced Data Communications Laboratory. He is a cofounder of BDTI, Inc., where he is currently a Senior Technical Advisor, is cofounder of Agile Design, Inc., and has consulted for a number of other companies. He is a Fellow of the IEEE, was an NSF Presidential Young Investigator, and won the 1997 Frederick Emmons Terman Award for Engineering Education.

W.Y. Russell Mok received the BCompSc and MMath degrees in computer science from Concordia University (Montreal) in 1994 and University of Waterloo in 1998, respectively. He is currently working at Algorithmics, Toronto. His research interests include object-oriented design, software patterns, and software engineering.

Sharon Oviatt is a Professor and Co-Director of the Center for Human-Computer Communication (CHCC) in the Department of Computer Science at Oregon Health and Sciences University. Previously she has taught and conducted research at the Artificial Intelligence Center at SRI International, and the Universities of Illinois, California, and Oregon State. Her research focuses on human-computer interaction, spoken language and multimodal interfaces, and mobile and highly interactive systems. Examples of recent work involve the development of novel design
concepts for multimodal and mobile interfaces, robust interfaces for real-world field environments and diverse users (children, accented speakers), and conversational interfaces with animated software "partners." This work is funded by grants and contracts from the National Science Foundation, DARPA, ONR, and corporate sources such as Intel, Motorola, Microsoft, and Boeing. She is an active member of the international HCI and speech communities, has published over 70 scientific articles, and has served on numerous government advisory panels, editorial boards, and program committees. Her work is featured in recent special issues of Communications of the ACM, Human-Computer Interaction, and IEEE Multimedia. Further information about Dr. Oviatt and CHCC is available at http://www.cse.ogi.edu/CHCC.

Vaclav T. Rajlich is a full professor and former chair in the Department of Computer Science at Wayne State University. His research interests include software change, evolution, comprehension, and maintenance. Rajlich received a Ph.D. in mathematics from Case Western Reserve University. Contact him at [email protected].

Norman Wilde is a full professor of computer science at the University of West Florida. His research interests include software maintenance and comprehension. Wilde received a Ph.D. in mathematics and operations research from the Massachusetts Institute of Technology. Contact him at [email protected].

Carl J. Williams is a research physicist in the Quantum Processes Group, Atomic Physics Division, Physics Laboratory of the National Institute of Standards and Technology (NIST). Before joining NIST in 1998, he worked as a systems analyst for the Institute for Defense Analyses and was a research scientist studying atomic and molecular scattering and molecular photodissociation at the James Franck Institute of the University of Chicago. He is an expert in the theory of ultra-cold atomic collisions, does research on the physics of quantum information processors, and coordinates quantum information activities at NIST. Williams received his Ph.D. from the University of Chicago in 1987.

Jürgen Wüst received the degree Diplom-Informatiker (M.S.) in computer science from the University of Kaiserslautern, Germany, in 1997. He is currently a researcher at the Fraunhofer Institute for Experimental Software Engineering (IESE) in Kaiserslautern, Germany. His research activities and industrial activities include software measurement, software product evaluation, and object-oriented development techniques.
Preface
Advances in Computers, continually published since 1960, is the oldest series still in publication to provide an annual update to the rapidly changing information technology scene. Each volume provides six to eight chapters describing how the software, hardware, or applications of computers are changing. In this volume, the 56th in the series, eight chapters describe many of the new technologies that are changing the use of computers during the early part of the 21st century.

In Chapter 1, "Software Evolution and the Staged Model of the Software Lifecycle" by K. H. Bennett, V. T. Rajlich, and N. Wilde, the authors describe a new view of software maintenance. It is well known that maintenance consumes a major part of the total lifecycle budget for a product; yet maintenance is usually considered an "end game" component of development. In this chapter, the authors view software maintenance as consisting of a number of stages, some of which can start during initial development. This provides a very different perspective on the lifecycle. The chapter introduces a new model of the lifecycle that partitions the conventional maintenance phase in a much more useful, relevant, and constructive way.

Chapter 2, "Embedded Software" by E. A. Lee, explains why embedded software is not just software on small computers, and why it therefore needs fundamentally new views of computation. Time, concurrency, liveness, robustness, continuums, reactivity, and resource management are all functions that are part of the computation of a program; yet prevailing abstractions of programs leave out these "nonfunctional" aspects.

Object-oriented system design is a current approach toward developing quality systems. However, how do we measure the quality of such systems? In order to measure a program's quality, quality models are needed that quantitatively describe how internal structural properties relate to relevant external system qualities, such as reliability or maintainability. In Chapter 3, "Empirical Studies of Quality Models in Object-Oriented Systems" by L. C. Briand and J. Wüst, the authors summarize the empirical results that have been reported so far on modeling external system quality based on structural design properties. They perform a critical review of existing work in order to identify lessons learned regarding the way these studies are performed and reported.
Chapter 4, "Software Fault Prevention by Language Choice: Why C is Not My Favorite Language" by R. Fateman, presents an opposing view of the prevailing sentiment in much of the software engineering world. Much of software design today is based on object-oriented architecture using C++ or Java as the implementation language. Both languages are derived from the C language. However, is C an appropriate base in which to write programs? Dr. Fateman argues that it is not and that a LISP structure is superior.

How fast can computers ultimately get? As this is being written, a clock speed of around 2 GHz is available. This speed seems to double about every 18 months ("Moore's Law"). However, are we reaching the limits on the underlying "silicon" structure the modern processor uses? One of the more exciting theoretical developments today is the concept of quantum computing, using the quantum states of atoms to create extremely fast computers, but quantum effects are extremely fragile. How can we create reliable machines using these effects? In Chapter 5, "Quantum Computing and Communication" by P. E. Black, D. R. Kuhn, and C. J. Williams, the authors describe the history of quantum computing and its applicability to solving modern problems. In particular, quantum computing looks like it has a home in modern cryptography: the ability to encode information so that some may not decipher its contents, or so that other codebreakers may decipher its contents.

In Chapter 6, "Exception Handling" by P. A. Buhr, A. Harji, and W. Y. R. Mok, the authors describe the current status of exception handling mechanisms in modern languages. Exception handling in languages like PL/I (e.g., the ON-condition) and C (e.g., the throw statement) can be viewed as add-on features. The authors argue it is no longer possible to consider exception handling as a secondary issue in a language's design. Exception handling is a primary feature in language design and must be integrated with other major features, including advanced control flow, objects, coroutines, concurrency, real-time, and polymorphism.

Chapter 7, "Breaking the Robustness Barrier: Recent Progress on the Design of Robust Multimodal Systems" by S. Oviatt, goes into the application of multimodal systems (two or more combined user input modes) in user interface design. Multimodal interfaces have developed rapidly during the past decade. This chapter specifically addresses a central performance issue of multimodal system design: techniques for optimizing robustness. It reviews recent demonstrations of multimodal system robustness that surpass that of unimodal recognition systems, and also discusses future directions for optimizing robustness further through the design of advanced multimodal systems.

The final chapter, "Using Data Mining to Discover the Preferences of Computer Criminals" by D. E. Brown and L. F. Gunderson, discusses the ability to predict a
new class of crime, the "cyber crime." With our increased dependence on global networks of computers, criminals are increasingly "hi-tech." The ability to detect illegal intrusion in a computer system makes it possible for law enforcement to both protect potential victims and apprehend perpetrators. However, warnings must be as specific as possible, so that systems that are not likely to be under attack do not shut off necessary services to their users. This chapter discusses a methodology for data-mining the output from intrusion detection systems to discover the preferences of attackers.

I hope that you find these articles of interest. If you have any suggestions for future chapters, I can be reached at [email protected].

MARVIN ZELKOWITZ
College Park, Maryland
Software Evolution and the Staged Model of the Software Lifecycle

K. H. BENNETT
Research Institute for Software Evolution
University of Durham
Durham DH1 3LE, United Kingdom
[email protected]

V. T. RAJLICH
Department of Computer Science
Wayne State University
Detroit, MI 48202, USA
[email protected]

N. WILDE
Department of Computer Science
University of West Florida
Pensacola, FL 32514, USA
[email protected]
Abstract
Software maintenance is concerned with modifying software once it has been delivered and has entered user service. Many studies have shown that maintenance is the dominant lifecycle activity for most practical systems; thus maintenance is of enormous industrial and commercial importance. Over the past 25 years or so, a conventional view of software development and maintenance has been accepted in which software is produced, delivered to the user, and then enters a maintenance stage. A review of this approach and the state of the art in research and practice is given at the start of the chapter. In most lifecycle models, software maintenance is lumped together as one phase at the end. In the experience of the authors, based on how maintenance is really undertaken (rather than how it might or should be done), software
maintenance actually consists of a number of stages, some of which can start during initial development. This provides a very different perspective on the lifecycle. In the chapter, we introduce a new model of the lifecycle that partitions the conventional maintenance phase in a much more useful, relevant, and constructive way. It is termed the staged model. There are five stages through which the software and the development team progress. A project starts with an initial development stage, and we then identify an explicit evolution stage. Next is a servicing stage, comprising simple tactical activities. Later still, the software moves to a phase-out stage in which no more work is done on the software other than to collect revenue from its use. Finally the software has a close-down stage. The key point is that software evolution is quite different and separate from servicing, from phase-out, and from close-down, and this distinction is crucial in clarifying both the technical and business consequences. We show how the new model can provide a coherent analytic approach to preserving software value. Finally, promising research areas are summarized.

1. Introduction
   1.1 Background
   1.2 Early Work
   1.3 Program Comprehension
   1.4 Standards
   1.5 Iterative Software Development
   1.6 The Laws of Software Evolution
   1.7 Stage Distinctions
   1.8 The Business Context
   1.9 Review
   1.10 The Stages of the Software Lifecycle
2. Initial Development
   2.1 Introduction
   2.2 Software Team Expertise
   2.3 System Architecture
   2.4 What Makes Architecture Evolvable?
3. Evolution—The Key Stage
   3.1 Introduction
   3.2 Software Releases
   3.3 Evolutionary Software Development
4. Servicing
   4.1 Software Decay
   4.2 Loss of Knowledge and Cultural Change
   4.3 Wrapping, Patching, Cloning, and Other "Kludges"
   4.4 Reengineering
5. Phase-Out and Closedown
6. Case Studies
   6.1 The Microsoft Corporation
   6.2 The VME Operating System
   6.3 The Y2K Experience
   6.4 A Major Billing System
   6.5 A Small Security Company
   6.6 A Long-Lived Defense System
   6.7 A Printed Circuits Program
   6.8 Project PET
   6.9 The FASTGEN Geometric Modeling Toolkit
   6.10 A Financial Management Application
7. Software Change and Comprehension
   7.1 The Miniprocess of Change
   7.2 Change Request and Planning
   7.3 Change Implementation
   7.4 Program Comprehension
8. Sustaining Software Value
   8.1 Staving off the Servicing Stage
   8.2 Strategies during Development
   8.3 Strategies during Evolution
   8.4 Strategies during Servicing
9. Future Directions: Ultra Rapid Software Evolution
10. Conclusions
Acknowledgments
References

1. Introduction
1.1 Background
What is software maintenance? Is it different from software evolution? Why isn't software designed to be easier to maintain? What should we do with legacy software? How do we make money out of maintenance? Many of our conventional ideas are based on analyses carried out in the 1970s, and it is time to rethink these for the modern software industry.
The origins of the term maintenance for software are not clear, but it has been used consistently over the past 25 years to refer to post-initial delivery work. This view is reflected in the IEEE definition of software maintenance [1], essentially as a post-delivery activity:
    The process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or adapt to a changed environment. [1, p. 46]
Implicit in this definition is the concept of the software lifecycle, which is defined as:
    The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software life cycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and, sometimes, retirement phase. Note: These phases may overlap or be performed iteratively. [1, p. 68]
A characteristic of established engineering disciplines is that they embody a structured, methodical approach to developing and maintaining artifacts [2, Chaps. 15 and 27]. Software lifecycle models are abstract descriptions of the structured methodical development and modification process, typically showing the main stages in producing and maintaining executable software. The idea began in the 1960s with the waterfall model [3]. A lifecycle model, implicit or explicit, is the primary abstraction that software professionals use for managing and controlling a software project, to meet budget, timescale, and quality objectives, with understood risks, and using appropriate resources. The model describes the production of deliverables such as specifications and user documentation, as well as the executable code. The model must be consistent with any legal or contractual constraints within a project's procurement strategy. Thus, it is not surprising that lifecycle models have been given primary attention within the software engineering community. A good overview of software lifecycle models is given in [2], and a very useful model is the spiral model of Boehm [4], which envisages software production as a continual iterative development process. However, crucially, this model does not address the loss of knowledge, which in the authors' experience accompanies the support of long-lived software systems and which vitally constrains the tasks which can be performed. Our aim was to create a lifecycle model that would be useful for the planning, budgeting, and delivery of evolving systems, and that would take into account this loss of knowledge. Our new model is called the staged model.
The aim of this chapter is to describe the new staged model [5]. We provide a broad overview of the state of the art in software maintenance and evolution. The emphasis is mainly on process and methods (rather than technology), since this is where the main developments have occurred, and is of most relevance to this chapter. There is much useful material available on software maintenance management, including very practical guides [6]. We start from the foundations established within the international standards community. We then briefly revisit previous research work, as an understanding of these results is essential. Program comprehension is identified as a key component; interestingly, very few
textbooks on software engineering and even on software maintenance mention the term, so our review of the state of the art addresses the field to include this perspective. The new model and our view of research areas are influenced by program comprehension more than other aspects. The staged model is presented, and evidence drawn from case studies. Practical implications are then described, and finally, research directions are presented.
1.2 Early Work
In a very influential study, Lientz and Swanson [7,8] undertook a questionnaire survey in the 1970s, in which they analyzed then-current maintenance practices. Maintenance changes to software were categorized into:
• perfective (changes to the functionality),
• adaptive (changes to the environment),
• corrective (the correction of errors), and
• preventive (improvements to avoid future problems).
This categorization has been reproduced in many software engineering textbooks and papers (e.g., Sommerville [9], McDermid [2], Pressman [10], Warren [11]), and the study has been repeated in different application domains, in other countries, and over a period of 20 years (see, for example, the Ph.D. thesis of Foster [12], who analyzes some 30 studies of this type). However, the basic analysis has remained essentially unchanged, and it is far from clear what benefits this view of maintenance actually brings.
Implicit in the Lientz and Swanson model are two concepts:
• That software undergoes initial development, it is delivered to its users, and then it enters a maintenance phase.
• That the maintenance phase is uniform over time in terms of the activities undertaken, the process and tools used, and the business consequences.
These concepts have also suggested a uniform set of research problems to improve maintenance, see for example [2, Chap. 20]. One common message emerging from all these surveys is the very substantial proportion of lifecycle costs that are consumed by software maintenance, compared to software development. The figures range from 50 to 90% of the complete lifecycle cost [7]. The proportion for any specific system clearly depends on the application domain and the successful deployment of the software (some long-lived software is now over 40 years old!).
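To make the use of this categorization concrete, the following sketch (our illustration, not part of the Lientz and Swanson study; the change descriptions and effort figures are invented) tags hypothetical change requests with one of the four categories and totals the recorded effort per category:

    from collections import defaultdict
    from dataclasses import dataclass
    from enum import Enum

    class ChangeCategory(Enum):
        PERFECTIVE = "change to functionality"
        ADAPTIVE = "change to the environment"
        CORRECTIVE = "correction of an error"
        PREVENTIVE = "improvement to avoid future problems"

    @dataclass
    class ChangeRecord:
        summary: str
        category: ChangeCategory
        effort_days: float

    def effort_by_category(records):
        # Sum the recorded effort per Lientz-and-Swanson category.
        totals = defaultdict(float)
        for record in records:
            totals[record.category.name] += record.effort_days
        return dict(totals)

    # Hypothetical change log for one release cycle (figures are invented).
    log = [
        ChangeRecord("add spreadsheet export", ChangeCategory.PERFECTIVE, 12.0),
        ChangeRecord("port to new database driver", ChangeCategory.ADAPTIVE, 8.5),
        ChangeRecord("fix rounding error in totals", ChangeCategory.CORRECTIVE, 1.5),
        ChangeRecord("restructure parser module", ChangeCategory.PREVENTIVE, 5.0),
    ]
    print(effort_by_category(log))

Tallies of this kind are what the surveys cited above report; as the text notes, the categorization itself says nothing about how the work changes over the life of the system.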
The balance of lifecycle costs is subject to commercial pressures. It may be possible to discount the purchase price of a new software system, if charges can be recovered later through higher maintenance fees. The vendor may be the only organization capable (i.e., having the knowledge and expertise) of maintaining the system. Depending on the contractual arrangement between the producer and the consumer, and on the expectations for the maintenance phase, the incentive during development to produce maintainable software may vary considerably. Software maintenance is thus not entirely a technical problem. In the 1990s, the practice of outsourcing software maintenance became widespread. A customer company subcontracts all the support and maintenance of a purchased system to a specialist subcontractor. This has raised a set of new commercial issues, for example the potential risk of subcontracting out key company systems; and the difficulties in the future of recalling maintenance back in house or to another subcontractor if the first does not perform acceptably. It is important to recall that it is not simply the software application that evolves. Long-lived software may well outlast the environment within which it was produced. In the military domain, software has sometimes lasted longer than the hardware on which it was cross compiled (presenting major problems if the software has to be modified). Software tools are often advocated for software maintenance, but these may also evolve (and disappear from the market) at a faster rate than the software application under maintenance.
1.3 Program Comprehension
Program comprehension is that activity by which software engineers come to an understanding of the behavior of a software system using the source code as the primary reference. Studies suggest that program comprehension is the major activity of maintenance, absorbing around 50% of the costs [2, Chap. 20; 13]. Program comprehension requires understanding of the user domain that the software serves as well as software engineering and programming knowledge of the program itself. Further details are given in Section 7. The authors believe that comprehension plays a major role in the software lifecycle. During the early stages, the development team builds group understanding, and the system architects have a strategic understanding of the construction and operation of the system at all levels. At later stages, this knowledge is lost as developers disperse and the complexity of the software increases, making it more difficult to understand. Knowledge appears impossible to replace, once lost, and this forms the basis for our new model.
1.4 Standards
Software maintenance has been included within more general software engineering standardization initiatives. For example, the IEEE has published a comprehensive set of standards [14], of which Std. 1219 on maintenance forms a coherent part. The IEEE standard defines seven steps in a software maintenance change:
• Problem/modification identification, classification, and prioritization,
• Analysis and understanding (including ripple effects),
• Design,
• Implementation,
• Regression/system testing,
• Acceptance testing,
• Delivery.
Underpinning the standard is a straightforward iterative perspective of software maintenance; a change request is reviewed and its cost estimated; it is implemented; and then validation is carried out. The International Standards Organization (ISO) has also published a software maintenance standard [15]. This is in the context of Std. ISO/IEC 12207, which addresses how an agreement should be drawn up between a software acquirer and supplier (in which maintenance is included). The standard places considerable emphasis on planning.
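As an informal illustration only (the standard itself defines no code), the sketch below represents the seven steps as an ordered pipeline and walks a hypothetical change request through them; the step names follow the list above, while the record structure and status tracking are our own invention:

    from dataclasses import dataclass, field

    # Step names follow the IEEE Std. 1219 list above; everything else is illustrative.
    MAINTENANCE_STEPS = (
        "problem/modification identification, classification, and prioritization",
        "analysis and understanding",
        "design",
        "implementation",
        "regression/system testing",
        "acceptance testing",
        "delivery",
    )

    @dataclass
    class MaintenanceChange:
        identifier: str
        description: str
        completed: list = field(default_factory=list)

        def advance(self):
            # Complete the next step of the process, strictly in order.
            step = MAINTENANCE_STEPS[len(self.completed)]
            self.completed.append(step)
            return step

        def delivered(self):
            return len(self.completed) == len(MAINTENANCE_STEPS)

    change = MaintenanceChange("CR-101", "correct rounding error in invoice totals")
    while not change.delivered():
        print(f"{change.identifier}: completed '{change.advance()}'")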
1.5 Iterative Software Development
The iterative nature of the software lifecycle was noted as early as the 1970s by several authors. Wirth [16] proposed Stepwise Refinement, where functionality is introduced into the program in successive iterations. Basili and Turner [17] described another process where the functionality is added to the program in successive iterative steps. Large software projects of that time already followed iterative scenarios. A notable project was the development of the IBM OS operating system. The experience of that project was described in [18,19]. These authors noted that a software lifecycle is inherently iterative, that a substantial proportion of the functionality is added iteratively, and that the initial development is simply the initialization stage of this process. See also [20].
1.6 The Laws of Software Evolution
The evolution of a software system conforms to laws, which are derived from empirical observations of several large systems [21-23]:
1. Continuing change. A program that is used and that, as an implementation of its specification, reflects some other reality undergoes continuing change or becomes progressively less useful.
2. Increasing complexity. As an evolving program is continuously changed, its complexity, reflecting deteriorating structure, increases unless work is done to maintain or reduce it.
SOFTWARE EVOLUTION
9
operation in the real world is inherently uncertain with the precise area of uncertainty also unknowable. This work to establish a firm scientific underpinning for software evolution continues in the FEAST project [25-28].
1.7
Stage Distinctions
Sneed [29] and Lehner [30,31] are among the few authors to have observed that the maintenance phase is not uniform. Sneed classified systems into three categories: throw-away systems that typically have a lifetime of less than two years and are neither evolved nor maintained. Then there are static systems that are implemented in a well-defined area and after being developed, their rate of change is less than 10% in a year, or are relatively static after development. Finally there are evolutionary systems that undergo substantial change after the initial development and last many years. The stages of these systems are initial development, evolution (called "further development" by Sneed), "maintenance" (i.e., servicing), and "demise" (i.e., phase-out). Lehner [30] used this model and investigated the lifecycle of 13 business systems from Upper Austria to confirm the existence of the stages. He found that some systems belong to the category of static systems, where there is a very short evolution after the initial development and then the maintenance work very dramatically decreases. Other systems consume substantial effort over many years. Lehner confirmed a clear distinction between evolution (called "growth" in his paper) and servicing (called "saturation") where the maintenance effort is substantially lower [31]. He thus refuted earlier opinions that the evolution and growth of software can continue indefinitely and confirmed Sneed's earlier observation about the distinctions between the several stages, through observation of long-term data from several systems.
1.8
The Business Context
Some of the problems of the traditional lifecycle model stem from recent trends in the business context of software development. Different categories of software application are subjected to radically different kinds of business pressures. Most software engineering techniques available today for software specification, design, and verification have been presented as conventional supply-side methods, driven by technological advance. Such methods may work well for systems with rigid boundaries of concern, such as embedded systems, which may be characterized as risk-averse. In such domains, users have become familiar with long periods between requests for new features and their release in new versions (the so-called "applications backlog").
10
K. H. BENNETT E7>^/..
However such techniques break down for appHcations where system boundaries are not fixed and are subject to constant urgent change. These applications are typically found in emergent organizations—"organizations in a state of continual process change, never arriving, always in transition" [32]. Examples include e-businesses as well as more traditional companies that continually need to reinvent themselves to gain competitive advantage [33]. A stockbroker, for example, may have a need to introduce a new service overnight; the service may only exist for another 24 hours before it is replaced by an updated version. In such organizations we have a demand-led approach to the provision of software services, addressing delivery mechanisms and processes which, when embedded in emergent organizations, give a software solution in emergent terms—one with continual change. The solution never ends and neither does the provision of software. The user demand is for change in 'Internet time" and the result is sometimes termed engineering for emergent solutions. Yet a third category is provided by so-called "legacy systems," which have been defined [34] as "systems which are essential to our organization but we don't know what to do with them." They pose the epitome of the maintenance challenge, because for the great majority, remedial action has never taken place, so whatever structure originally existed has long since disappeared. Legacy systems have been extensively addressed in the literature (see, e.g., [35,36]). The main conclusion is that there is no magic silver bullet; the future of each system needs to be analyzed, planned, and implemented based on both technical and business drivers, and taking into account existing and future staff expertise. Finally, as software systems become larger and more complex, organizations find that it does not make sense to develop in-house all the software they use. Commercial-off-the-shelf (COTS) components are becoming a larger part of most software projects. Selecting and managing such software represents a considerable challenge, since the user becomes dependent on the supplier, whose business and products may evolve in unexpected ways. Technical concerns about software's capabilities, performance, and reliability may become legal and contractual issues, and thus even more difficult to resolve. It appears that many organizations are now or soon will be running, at the same time, a mixture of embedded or legacy software, with added COTS components, and interfaced to new e-business applications. The processes and techniques used in each category clash, yet managers need somehow to make the whole work together to provide the services that clients demand.
1.9
Review
We began by drawing on currently available evidence. Many ideas are now sufficiently mature that process standards have been defined or are emerging, at
SOFTWARE EVOLUTION
11
least for large scale, embedded risk-averse systems. Empirical studies have shown that program comprehension is a crucial part of software maintenance, yet it is an activity that is difficult to automate and relies on human expertise. The increasing business stress on time to market in emergent organizations is increasing the diversity of software types that must be managed, with different processes being appropriate for each type. We conclude that: • Human expertise during the maintenance phase represents a crucial dimension that cannot be ignored. At the moment, some of the hardest software engineering evolution tasks (such as global ripple analysis) need senior engineers fully to comprehend the system and its business role. • We need to explore maintenance to reflect how it is actually done, rather than prescriptively how we would like it to be done. The major contribution of this chapter is to propose that maintenance is not a single uniform phase, the final stage of the conventional lifecycle, but is comprised of several distinct stages and is in turn distinct from evolution. The stages are not only technically distinct, but also require a different business perspective. Our work is motivated by the fact that the Lientz and Swanson approach does not accord with modem industrial practice, based on analysis of a number of case studies. Knowing the fraction of effort spent on various activities during the full lifecycle does not help a manager to plan those activities or make technical decisions about multisourced component-based software, or address the expertise requirements for a team. The conventional analysis has not, over many years, justified the production of more maintainable software despite the benefits that should logically accrue. Especially, the conventional model does not sensibly apply to the many modem projects, which are heavily based on COTS technology. Finally, it is only a technical model and does not include business issues. We also had other concems that the conventional Lientz and Swanson model did not elucidate. For example, there are few guidelines to help an organization assess if a reverse engineering project would be commercially successful, despite the large amount of research and development in this field, and it was not clear why this was the case. The skills of staff involved in post-delivery work seem very important, but the knowledge needed both by humans and in codified form has not been clearly defined, despite a large number of projects which set out to recapture such knowledge. Our motivation for defining a new perspective came from the very evident confusion in the area. A brief examination of a number of Web sites and papers concemed with software engineering, software evolution, and software maintenance also illustrated the confusion between terms such as maintenance and
12
K. H. BENNETT E 7 / \ L
evolution (with a completely uniform maintenance phase), and the almost universal acceptance of the Lientz and Swanson analysis. We found this situation inadequate for defining a clear research agenda that would be of benefit to industry. For these reason, we shall largely restrict our use of the term "software maintenance" from now on in this chapter to historical discussions. We have estabHshed a set of criteria forjudging success of our new perspective: • It should support modern industrial software development which stresses time to delivery and rapid change to meet new user requirements. • It should help with the analysis of COTS-type software. • It should be constructive and predictive—we can use it to help industry to recognize and plan for stage changes. • It should clarify the research agenda—each stage has very different activities and requires very different techniques to achieve improvement. • It should be analytic—we can use it to explain and clarify observed phenomena, e.g., that reverse engineering from code under servicing is very hard. • It should be used to model business activity as well as technical activity. • It should transcend as far as possible particular technologies and application domains (such as retail, defense, embedded, etc.) while being applicable to modem software engineering approaches. • It should also transcend detailed business models and support a variety of product types. On one hand we have the shrink-wrap model where time to market, etc., are important considerations; at the other extreme we have customer-tailored software where the emphasis may be on other attributes like security, reliability, and ease of evolution. • It should be supported by experimental results from the field. • It should help to predict and plan, rather than simply be descriptive. Our perspective is amplified below and is called the staged model.
1.10
The Stages of the Software Lifecycle
The basis of our perspective is that software undergoes several distinctive stages during its life. The following stages are: • Initial development—the first functioning version of the system is developed.
SOFTWARE EVOLUTION
13
• Evolution—if initial development is successful, the software enters the stage of evolution, where engineers extend its capabilities and functionality, possibly in major ways. Changes are made to meet new needs of its users, or because the requirements themselves had not been fully understood, and needed to be given precision through user experience and feedback. The managerial decision to be made during this stage is when and how software should be released to the users (alpha, beta, commercial releases, etc.). • Servicing—the software is subjected to minor defect repairs and very simple changes in function (we note that this term is used by Microsoft in referring to service packs for minor software updates). • Phase-out—no more servicing is being undertaken, and the software's owners seek to generate revenue for its use for as long as possible. Preparation for migration routes is made. • Closedown—the software is withdrawn from the market, and any users directed to a replacement system if this exists. The simplest variant of the staged software lifecycle is shown in Fig. 1. In the following sections, we describe each of these stages in more detail.
2.
Initial Development 2.1
Introduction
The first stage is initial development, when the first version of the software is developed from scratch. This stage has been well described in the software engineering literature and there are very many methods, tools, and textbooks that address it in detail (for example, see [2,9,10,37]). The stage is also addressed by a series of standards by IEEE and ISO, or by domain- or industry-specific standards (for example, in the aerospace sector). Such initial development very rarely now takes place starting from a "green field" situation since there may be an inheritance of old legacy software, as well as external suppliers of new COTS components. Over the past 30 years, since the recognition of software engineering as a discipline [38], a great deal of attention has been paid to the process of initial development of reliable software within budget and to predictable timescales. Software project managers welcomed the earliest process model, called the waterfall model, because it offered a means to make the initial development process more visible and auditable through identifiable deliverables. Since there is such an extensive
14
K. H.BENNETT E T ^ / . .
I Initial development first running version evolution changes
Evolution
loss of evolvability servicing patchies
Servicing
servicing discontinued
Phase-out
Switch-off Close-down
FIG. 1. The simple staged model.
literature dealing with initial development, we will cover only selected aspects of it.
2.2
Software Team Expertise
From the point of view of the future stages, several important foundations are laid during initial development. The first foundation is that the expertise of the software engineering team and in particular of the system architects is established. Initial development is the stage during which the team learns about the domain and the problem. No matter how much previous experience had been accumulated before the project started, new knowledge will be acquired during initial development. This experience is of indispensable value in that it will make future evolution of the software possible. So this aspect—the start of team learning— characterizes the first stage. Despite the many attempts to document and record such team learning, much of it is probably tacit—it is the sort of experience that is extremely difficult to record formally.
SOFTWARE EVOLUTION
2.3
15
System Architecture
Another important result and deliverable from initial development is the architecture of the system, i.e., the components from which the system is built, their interactions, and their properties. The architecture will either facilitate or hinder the changes that will occur during evolution and it will either withstand those changes or break down under their impact. It is certainly possible to document architecture, and standard approaches to architectures (e.g., [39]) provide a framework. In practice, one of the major problems for architectural integrity during initial development is "requirements creep." If the requirements of the software system are not clear, or if they change as the software is developed, then a single clear view of the architecture is very difficult to sustain. Numerous approaches to ameliorating this problem have been devised, such as rapid application development, prototyping, and various management solutions, such as the Chief Programmer team, and (more recently) extreme programming [40]. The approach chosen can be strongly influenced by the form of legal contract between the vendor and customer which may induce either a short- or long-term view of the trade-off between meeting a customer's immediate needs and maintaining a clean software architecture.
2.4
What Makes Architecture Evolvable?
Thus for software to be easily evolved, it must have an appropriate architecture, and the team of engineers must have the necessary expertise. For example, in long-lived systems such as the ICL VME operating system, almost all subcomponents have been replaced at some stage or another. Yet despite this, the overall system has retained much of its architectural integrity [41]. In our experience, the evolution of architecture needs individuals of very high expertise, ability, and leadership. There may be financial pressures to take technical shortcuts in order to deliver changes very quickly (ignoring the problem that these conflict with the architectural demands). Without the right level of human skill and understanding it may not be realized that changes are seriously degrading the software structure until it is too late. There is no easy answer or "prescription" to making an architecture easily evolvable. Inevitably there is a trade-off between gains now, and gains for the future, and the process is not infallible. A pragmatic analysis of software systems which have stood the test of time (e.g., VME, or UNIX) typically shows the original design was undertaken by one, or a few, highly talented individuals. Despite a number of attempts, it has proved very difficult to establish contractually what is meant by maintainable or evolvable software and to define processes
that will produce software with these characteristics. At a basic level, it is possible to insist on the adoption of a house style of programming, to use IEEE or ISO standards in the management and technical implementation, to use modern tools, to document the software, and so on. Where change can be foreseen at design time, it may be possible to parameterize functionality. These techniques may be necessary, but experience shows that they are not sufficient. The problem may be summarized easily: a successful software system will be subjected to changes over its lifetime that the original designers and architects cannot even conceive of. It is therefore not possible to plan for such change, and certainly not possible to create a design that will accommodate it. Thus, some software will be able to evolve, but other systems will have an architecture that is at cross-purposes with a required change. To force the change may introduce technical and business risks and create problems for the future.
3. Evolution—The Key Stage
3.1 Introduction
The evolution stage is characterized as an iterative addition, modification, or deletion of nontrivial software functionality (program features). This stage represents our first major difference from the traditional model. The usual view is that software is developed and then passed to the maintenance team. However, in many of the case studies described later, we find that this is not the case. Instead, the software is released to customers, and assuming it is successful, it begins to stimulate enthusiastic users. (If it is not successful, then the project is cancelled!) It also begins to generate income and market share. The users provide feedback and requests for new features. The project team is living in an environment of success, and this encourages the senior designers to stick around and support the system through a number of releases. In terms of team learning, it is usually the original design team that sees the new system through its buoyant early days. Of course, errors will be detected during this stage, but these are scheduled for correction in the next release. During the evolution stage, the continued availability of highly skilled staff makes it possible to sustain architectural integrity. Such personnel would seem to be essential. Unfortunately we note that making this form of expertise explicit (in a textbook, for example) has not been successful despite a number of projects concerned with "knowledge-based software engineering." The increase in size, complexity, and functionality of software is partly the result of the learning process in the software team. Cusumano and Selby reported that a feature set during each iteration may change by 30% or more, as a direct
result of the learning process during the iteration [42]. Brooks also comments that there is a substantial "learning curve" in building a successful new system [18]. Size and complexity increases are also caused by customers' requests for additional functionality, and market pressures add further to growth, since it may be necessary to match features of the competitor's product. In some domains, such as the public sector, legislative change can force major evolutionary changes, often at short notice, that were never anticipated when the software was first produced. There is often a continuous stream of such changes.
3.2 Software Releases
There are usually several releases to customers during the software evolution stage. The time of each release is based on both technical and business considerations. Managers must take into account various conflicting criteria, which include time to market or time to delivery, stability of software, fault rate reports, etc. Moreover, the release can consist of several steps, including alpha and beta releases. Hence the release, which is the traditional boundary between software development and software maintenance, can be a blurred and, to a certain degree, arbitrary milestone. For software with a large customer base, it is customary to produce a sequence of versions. These versions coexist among the users and are independently serviced, mostly to provide bug fixes. This servicing may take the form of patches or minor releases, so that a specific copy of the software in the hands of a user may have both a version number and a release number. The releases rarely implement substantial new functionality; that is left to the next version. This variant of the staged lifecycle model for this situation is shown in Fig. 2.
3.3 Evolutionary Software Development
The current trend in software engineering is to minimize the process of initial development, making it into only a preliminary development of a skeletal version or of a prototype of the application. Full development then consists of several iterations, each adding certain functionality or properties to the already existing software system. In this situation, software evolution largely replaces initial development, which then becomes nothing more than the first among several equal iterations. The purpose of evolutionary development is to minimize requirements risks. As observed earlier, software requirements are very often incomplete because of the difficulties in eliciting them. The users are responsible for providing a complete set of accurate requirements, but often provide less than that, because of the lack of knowledge or plain omissions. On top of that, the requirements change
FIG. 2. The versioned staged model.
during development, because the situation in which the software operates changes. There is also a process of learning by both users and implementers and that again contributes to changing requirements. Because of this, a complete set of requirements is impossible or unlikely in many situations so one-step implementation of large software carries a substantial risk. Evolutionary software development that is divided into incremental steps lessens the risk because it allows the users to see and experience the incomplete software after each iteration. One of the well-known and well-described processes of evolutionary software development is the Unified Software Development Process [43]. This process describes in detail how software is to be developed in incremental iterations. Each
incremental iteration adds new functionality or a new property (e.g., security, effectiveness) to the already existing software. This gradual increase in requirements lessens the risk involved, because each iteration provides fresh feedback about the progress of the project. The Unified Software Development Process describes a number of activities and specifies the documents to be produced during the iterations. However, Booch reports that a major criticism leveled at the Unified Software Development Process and similar approaches is that the resulting processes are rigid, require extensive documentation and many steps, and consequently are too expensive in time for many modern businesses [44].
An emerging alternate approach for systems that require rapid evolution is the agile method, an example of which is Extreme Programming (XP) [40]. XP almost abolishes the initial development phase. Instead, programmers work closely with customers to develop a set of "stories" describing desired features of the new software. Then a series of releases is implemented, with typically only a few weeks between releases. The customer defines the next release by choosing the stories to implement. Programmers take the stories and define more fine-grained tasks, with one programmer taking responsibility for each. Test cases are defined before programming begins. An interesting aspect of XP is that the responsible programmer signs up a partner for the task; all work is done in pairs, with both working at the same workstation. Thus, knowledge is shared between at least two programmers, and some self-checking is built in without requiring organized walkthroughs or inspections. Pairs are broken up and reformed for different tasks so experience can be distributed. There is little documentation of code or design, although considerable care is taken to maintain tests that can be rerun in the future.
Agile methods seem to discard all the software engineering experience of the past 20 years and place their reliance purely on the retention of expert team personnel for as long as the software needs to evolve. They thus gain valuable time, but perhaps at considerable risk. It remains to be seen whether this kind of methodology will be viable beyond the short term or whether managers and stockholders will instead discover that their critical applications have suddenly made unplanned and costly transitions to servicing.
4. Servicing
4.1 Software Decay
As previously mentioned, software enters the servicing stage as human expertise and/or architectural integrity are lost. Servicing has been alternatively called "saturation" [30,31], "aging software," "decayed software," "maintenance proper," and "legacy software." During this stage, it is difficult and expensive to make changes, and hence changes are usually limited to the minimum. At the same time, the software may still have a "mission critical" status; i.e., the user organization may rely on the software for services essential to its survival.
Code decay (or aging) was discussed in [45] and empirical evidence for it was summarized in [46]. The symptoms of code decay include:
— excessively complex (bloated) code, i.e., code that is more complex than it needs to be,
— vestigial code that supports features no longer used or required,
— frequent changes to the code,
— a history of faults in the code,
— frequent delocalized changes, i.e., changes that affect many parts of the code,
— use of "kludges," i.e., changes done in an inelegant or inefficient manner, for example, clones or patches,
— numerous dependencies in the code.
As the number of dependencies increases, the secondary effects of change become more frequent and the possibility of introducing an error into the software increases.
4.2 Loss of Knowledge and Cultural Change
In order to understand a software system, programmers need many kinds of knowledge. The programmers must understand the domain of the application in detail. They must understand the objects of the domain, their properties, and their relationships. They must understand the business process that the program supports, as well as all activities and events of that process. They also must understand the algorithms and data structures that implement the objects, events, and processes. They must understand the architecture of the program and all its strengths, weaknesses, and imperfections by which the program differs from an ideal. This knowledge may be partially recorded in program documentation, but usually it is of such a size and complexity that a complete recording is impractical. A great part of it is usually not recorded and takes the form of individuals' experiences or a group's oral tradition.
This knowledge is constantly at risk. Changes in the code make knowledge obsolete. As the symptoms of decay proliferate, the code becomes more and more complicated, and larger and deeper knowledge is necessary in order to understand it. At the same time, there is usually a turnover of programmers on the project. Turnover may have different causes, including the natural departure of programmers for personal reasons, or the needs of other projects that force
managers to reassign programmers to other work. Based on the success of the project, team members are promoted, moved to other projects, and generally disperse. The team expertise needed to support strategic changes and evolution of the software is thus lost; new staff members joining the team have a much more tactical perspective (e.g., at code level) of the software. Evolvability is lost and, accidentally or by design, the system slips into servicing. However, management that is aware of the decline may anticipate the eventual transition and plan for it. Typically, the current software is moved to the servicing stage, while the senior designers initiate a new project to release a radically new version (often with a new name, a new market approach, etc.).
A special instance of the loss of knowledge is cultural change in software engineering [47]. Software engineering has almost a half-century of tradition, and there are programs still in use that were created more than 40 years ago. These programs were created in a context of completely different properties of hardware, languages, and operating systems. Computers were slower and had much smaller memories, often requiring elaborate techniques to deal with these limitations. Program architectures in use were also different; modern architectures using techniques such as object orientation were rare at that time. The programmers who created these programs are very often unavailable. Current programmers who try to change these old programs face a double problem: not only must they recover the knowledge that is necessary for that specific program, but they also must recover the knowledge of the culture within which it and similar programs were created. Without that cultural understanding they may be unable to make the simplest changes in the program.
4.3 Wrapping, Patching, Cloning, and Other "Kludges"
During servicing, it is difficult and expensive to make changes, and hence changes are usually limited to the minimum. The programmers must also use unusual techniques for changes, the so-called "kludges." One such technique is wrapping: the existing software is treated as a black box, and a change is implemented as a wrapper that converts the original functionality into the new one by modifying the inputs to, and outputs from, the old software. Obviously, only changes of a limited kind can be implemented in this way. Moreover, each such change further degrades the architecture of the software and pushes it deeper into the servicing stage.
Another kind of change that is frequently employed during servicing is termed cloning. If programmers do not fully understand the program, then instead of finding where a specific piece of functionality is implemented, they create another implementation. Thus a program may end up having several implementations of
identical or nearly identical stacks or other data structures, several implementations of identical or almost identical algorithms, etc.
Sometimes programmers intentionally create clones out of fear of the secondary effects of a change. As an example, let us assume that function foo() requires a change, but foo() may be called from other parts of the code, so that a change in foo() may create secondary effects in those parts. Since knowledge of the program in the servicing stage is low, the programmers choose a "safe" technique: they copy-and-paste foo(), creating an intentional clone foo1(). Then they update foo1() so that it satisfies the new requirements, while the old foo() remains in use by the other parts of the program. Thus there are no secondary effects in the places where foo() is called. While programmers solve their immediate problem in this way, they negatively impact the program architecture and make future changes harder.
The presence of a growing number of clones in code is a significant symptom of code decay during servicing. Several authors have proposed methods of detecting clones automatically using substring matching [48] or subtree matching [49]. Software managers could consider tracking the growth of clones as a measure of code decay, and consider remedial action if the system seems to be decaying too rapidly [50,51].
Servicing patches are fragments of code, very often in binary form, that are used to distribute bug fixes in a widely distributed software system.
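To make the cloning scenario described above concrete, the following minimal C++ sketch shows how an intentional clone enters a code base. The function bodies, the price parameter, and the rates are invented purely for illustration; only the foo()/foo1() naming comes from the example above.

    #include <iostream>

    // Original function, still called from many parts of the program.
    // The maintainer is afraid to touch it because of secondary effects.
    double foo(double price) {
        if (price < 0.0) return 0.0;   // long-standing validation rule
        return price * 1.05;           // old rate, hard-coded
    }

    // Intentional clone: a copy-and-paste of foo(), updated for the new
    // requirement.  Only the callers affected by the change are redirected
    // here; all other callers continue to use the old foo().
    double foo1(double price) {
        if (price < 0.0) return 0.0;   // duplicated validation rule
        return price * 1.08;           // new rate
    }

    int main() {
        std::cout << foo(100.0) << " " << foo1(100.0) << "\n";  // 105 108
    }

The immediate change is safe, but the validation rule now lives in two places; any future fix must be applied to both copies, which is exactly the kind of delocalized change that characterizes code decay.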
4.4 Reengineering
In the servicing stage, it is difficult to reverse the situation and return to the stage of evolution. That would require regaining the expertise necessary for evolution, recapturing the architecture, restructuring the software, or all of these. Both restructuring and regaining expertise are slow and expensive processes, with many risks involved, and new staff may have to be recruited with appropriate and relevant skills. As analyzed by Olsem [52], the users of a legacy system build their work routines and expectations based on the services it provides and are thus very sensitive about any disruption of routine. Their tolerance of changes may be much smaller than the tolerance displayed by the users of brand new systems. Thus, user rigidity also makes reengineering a very risky proposition. In order to minimize the risk and the disruption of user routine, Olsem advocates incremental reengineering, where the system is reengineered one part at a time. The new parts temporarily coexist with the old parts and old parts are replaced one-by-one, without interruption of the service. A case study of such reengineering was published in [53].
This approach to reengineering avoids disruption of the user's routines, but it also preserves the interfaces among the parts and hence the overall architecture of the system. If the architecture is also obsolete, the process provides only partial relief. This impacts the business case for reengineering, since the benefits returned compared to the investment required may be difficult to justify. In the worst case, we are spending resources for little or no benefit.
A further difficulty with reengineering of widely used software is the problem of distribution. Getting the new version out to all users can be expensive or impossible, so the burden of servicing the old version may persist. This problem will surely become even greater as software is increasingly introduced into consumer goods such as mobile phones. Once object-level code has been released in such devices, it is all but impossible to go back to the evolutionary stage. Based on our experience, complete reengineering as a way of stepping back from servicing to evolution is very rare and expensive, so that entrance into the servicing stage is for all practical purposes irreversible.
5. Phase-Out and Closedown
At some stage the system is essentially frozen and no further changes are allowed. This stage, which we call phase-out, has also been called "decline" [31]. Help desk personnel may still be in place to assist users in running the system, but change requests are no longer honored. Users must work around any remaining defects instead of expecting them to be fixed. Finally, the system may be completely withdrawn from service and even this basic level of staffing is no longer provided. The exact course of phase-out and closedown will depend on the specific system and the contractual obligations in place. Sometimes a system in phase-out is still generating revenue, but in other cases (such as most shrink-wrap software) the user has already paid for it. In this second case, the software producer may be much less motivated to provide support.
Tamai and Torimitsu [54] surveyed the life span of software in Japan. The survey dealt with software from several application areas such as manufacturing, financial services, construction, and mass media. It found that for software larger than 1 million lines of code, the average life was 12.2 years with a standard deviation of 4.2 years. The lifetime varied more widely for smaller software. Tamai and Torimitsu's work also classified the causes of the closedowns in the following way. Hardware and/or system change caused the closedown in 18%
of the cases. New technology was the reason in 23.7% of the cases. A need to satisfy new user requirements (that the old system was unable to satisfy) was the cause in 32.8% of the cases. Finally, deterioration of software maintainability was the culprit in 25.4% of the cases. We can speculate that at the end of the lifetime, the software was in the phase-out stage and in most of the cases there was an event (hardware change, new technology, new requirements) that pushed the software into closedown. Only in 25.4% of the cases did closedown occur naturally as a free management decision, without any precipitating event from the outside.
There are a number of issues related to software shutdown. Contracts should define the legal responsibilities in this phase. In some cases, such as outsourced software in which one company has contracted with another to develop the system, the relationships may be quite complex. Final ownership and retention of the system, its source code, and its documentation should be clearly defined. Frequently system data must be archived and access must be provided to it. Examples of such data are student transcripts, birth certificates, and other long-lived data. The issues of data archiving and long-term access must be solved before the system is shut down.
6. Case Studies
Our new lifecycle model was derived from involvement with and observation of real industrial and commercial software development projects in a number of domains. We then abstracted from the particular experiences and practices of these projects, in order to draw our new perspective. Lehner [30,31] has provided empirical evidence that the activities of "maintenance" change during the lifecycle of a project. However, other than this, very little data have been collected, and our evidence is gleaned from published case studies and personal practical experience. The experience from these projects is summarized in the rest of this section.
6.1 The Microsoft Corporation
The description of the Microsoft development process, as given by Cusumano and Selby [42], illustrates the techniques and processes used for high-volume mass market shrink-wrapped software. In particular, we can draw on the following evidence:
1. At Microsoft, there is a substantial investment in the initial development stage, before revenue is generated from sales. This includes testing.
2. The division between initial development and evolution is not sharp; the technique of using beta releases to gain experience from customers is widely used.
3. Microsoft quite explicitly tries to avoid the traditional maintenance phase; it is realized that, with such a large user base, such maintenance is logistically impossible. Object code patches (service packs) are released to fix serious errors, but not for feature enhancement.
4. The development of the next release happens while the existing release is still achieving major market success. Thus Windows 98 was developed while Windows 95 was still on its rising curve of sales. Microsoft did not wait until Windows 95 sales started to decline to start development, and to do so would have been disastrous. Market strategy is based upon a rich (and expanding) set of features. As soon as Windows 98 reached the market, sales of Windows 95 declined very rapidly. Shortcomings and problems in Windows 95 were taken forward for rectification in Windows 98; they were not addressed by maintenance of Windows 95.
5. Microsoft does not support old versions of software that have been phased out, but it does provide transition routes from old to new versions.
6. Organizational learning is becoming evident through the use of shared software components. Interestingly, Microsoft has not felt the need for substantial documentation, indicating that tacit knowledge is retained effectively in design teams.
We conclude that evolution represents Microsoft's main activity, and servicing is by choice a very minor activity.
6.2 The VME Operating System
This system has been implemented on ICL (and other) machines for the past 30 years or so, and has been written up by International Computers Ltd. [41]. It tends to follow the classical X.Y release form, where X represents a major release (evolution) and Y represents minor changes (servicing). In a similar way to Microsoft, major releases tend to represent market-led developments incorporating new or better facilities. The remarkable property of VME is the way in which its original architectural attributes have remained over such a long period, despite the huge evolution in the facilities. It is likely that none of the original source code from the early 1970s is still present in the current version. Yet its architectural integrity is clearly preserved. We can deduce that:
1. There was a heavy investment in initial development, which resulted in a meticulous architectural design. The system has been evolved
by experts with many years of experience, but who also have been able to sustain architectural integrity.
2. Each major release is subject to servicing, and eventually that release is phased out and closed down.
3. Reverse engineering is not used from one major release to another; evolution is accomplished by team expertise and an excellent architecture.
6.3 The Y2K Experience
An excellent example of software in its servicing stage and its impact has been provided by the "Y2K" episode. It was caused by a widespread convention that limited the representation of a year in a date to the last two digits; for example, the year 1997 was represented by two digits "97." Based on this convention, when January 1, 2000 was reached, the year represented as "00" would be interpreted by computers as 1900, with all accompanying problems such misrepresentation could cause. The origin of the two-digit convention goes back to the early programs of the 1950s when the memory space was at a premium and hence to abbreviate the year to its two final digits seemed reasonable, while the problems this would cause seemed very remote. Even as the time was moving closer toward the fateful January 1, 2000, programmers continued to use the entrenched convention, perhaps out of inertia and habit. The extent of the problem became obvious in the late 1990s and a feverish attempt to remedy the problem became widespread. Articles in the popular press and by pessimists predicted that the programs would not be updated on time, that the impacts of this failure would be catastrophic, disrupting power supplies, goods distribution, financial markets, etc. At the height of the panic, the president of the USA appointed a "Y2K czar" with an office close to the White House whose role was to coordinate the efforts to fix the Y2K problem (and if this did not succeed, to deal with the ensuing chaos). Similar appointments were made in other countries, such as the UK. Fortunately, the dire Y2K predictions did not materialize. Many old programs were closed down. The programs that could not be closed down and needed to be repaired were indeed fixed, mostly by a technique called "windowing," which is a variant of wrapping. The two-digit dates are reinterpreted by moving the "window" from years 1900-1999 to a different period, for example, 1980-2080. In the new window, "99" is still interpreted as 1999, but "00", "01", ..., are now interpreted as 2000, 2001, etc. This worked well (for the time being) and has postponed the problem to the time when the new "window" will run out. The Y2K czar quietly closed his office and left town. There were, in fact, very few reported problems.
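The windowing repair mentioned above can be sketched in a few lines. The function name and the pivot value of 80 below are our own illustrative choices (real fixes were applied in whatever language each legacy system used), but the logic was essentially this:

    #include <iostream>

    // Reinterpret a two-digit year using a sliding "window".
    // With a pivot of 80, values 80-99 are read as 1980-1999 and
    // values 00-79 as 2000-2079.  The stored data still holds only
    // two digits, so the problem is postponed, not solved.
    int windowedYear(int twoDigitYear, int pivot = 80) {
        return (twoDigitYear >= pivot ? 1900 : 2000) + twoDigitYear;
    }

    int main() {
        std::cout << windowedYear(99) << "\n";  // 1999
        std::cout << windowedYear(1)  << "\n";  // 2001
        std::cout << windowedYear(79) << "\n";  // 2079: the window runs out here
    }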
However, the whole Y2K effort had a staggering cost: it is estimated that, worldwide, about 45% of all applications were modified and 20% were closed down, at a cost of between $375 and $750 billion [55]. From the viewpoint of our model, the Y2K problem arose because many legacy systems were in the servicing stage, and although Y2K rectification would have been only a routine change during the evolution stage, it was a hard or very hard change during the servicing stage. At heart, the problem was caused by a design decision (a key data representation choice), and changes in design are very hard to achieve successfully during servicing. The reason the Y2K problem caught so many managers by surprise is that the difference between the evolution and servicing stages was not well understood.
6.4 A Major Billing System
This 20-year-old system generates revenue for its organization, and is of strategic importance. However, the marketplace for the organization's products has changed rapidly in recent years, and the billing system can no longer keep up with market-led initiatives (such as new products). Analysis shows that this system has slid from evolution into servicing without management realizing it: the key designers have left, the architectural integrity has been lost, changes take far too long to implement, and revalidation is a nightmare. It is a classic legacy system. The only solution (at huge expense) is to replace it.
6.5 A Small Security Company
A small company has a niche market in specialized hardware security devices. The embedded software is based around Microsoft's latest products. The products must use rapidly changing hardware peripherals, and the company must work hard to keep ahead of the competition in terms of the sophistication of the product line. The software therefore consists of COTS components (e.g., special device drivers), locally written components, some legacy code, and glue written in a variety of languages (e.g., C, C++, BASIC). The system was not planned in this way, but has evolved into this form because of "happenstance." The software is the source of major problems. New components are bought, and must work with the old legacy code. Powerful components are linked via very low-level code. Support of locally written components is proving very hard.
From our perspective, we have a software system in which some parts are in initial development, some are in evolution, others are in servicing, while others are ready for phase-out. There is no sustained architectural design.
The existing (Lientz and Swanson) type of analysis sheds little light on this problem. Our model allows each component and connector to be assessed in terms of its stage. This should then allow the company to develop a support plan. For example, a component being serviced can have a dependency on another component being evolved.
6.6 A Long-Lived Defense System
A different type of case study is represented by a long-lived, embedded defense system which is safety related. This was developed initially many years ago (in Assembler) and needs to be continually updated to reflect changes in the supporting hardware. In classic terms, this system would be thought of as being in the maintenance phase, but according to our analysis, it is still being evolved, yet it is surely and inexorably slipping into servicing:
1. The software is still core to the organization, and will be for many years. Failure of the software in service would be a disaster.
2. Many experts with in-depth knowledge of the software (and hardware) are working on the system. They understand the architecture and are Assembler experts. The software is being changed to meet quite radical new requirements. It is free from ad hoc patches, and consistent documentation is being produced, although structurally it is decaying. Comprehensive test procedures are in place and are used rigorously. The system engineers understand the impact of local changes on global behavior. Mature, well-understood processes are employed.
3. Conversely, some experts have recently left the organization, and this loss of expertise, accompanied by the structural decay mentioned above, is a symptom of serious software decay. Reverse engineering is not considered feasible, partly because of the lack of key expertise. If the decay reaches a certain point, it is likely that the system will be developed again ab initio.
6.7 A Printed Circuits Program
One of the co-authors was for several years the manager of a software department. One of the projects in that department was a program that designed printed circuit boards and was used by customers within the same institution. Because of the critical nature of the product, it had to be constantly evolved as new requirements appeared and had to be satisfied. The original developers were evolving the program, but at the same time they were some of the most qualified developers in the department.
Since there was a backlog of other projects that required high expertise, and the difference between evolution and servicing was not understood at the time, the manager tried several times to transfer the evolution responsibility to other people. However, all attempts to train new programmers so that they could take over the evolution task and relieve the original developers were unsuccessful. In all instances, the new trainees were able to do only very limited tasks and were unable to make strategic changes in the program. At the time, this inability to transfer a "maintenance" task was baffling to the manager. In hindsight, the expertise needed for evolution was equivalent to, or perhaps even greater than, the expertise needed to create the whole program from scratch. It proved more cost effective to assign the new programmers to the new projects and to leave the experienced developers to evolve the printed circuit program.
6.8 Project PET
This case study is an example of an attempted reengineering project. PET is a CAD tool developed by a car company [56,57] to support the design of the mechanical components (transmission, engine, etc.) of a car. It is implemented in C++, and every mechanical component is modeled as a C++ class. The dependencies among the mechanical components are described by a set of equations that constitute a complex dependency network. Whenever a parameter value is changed, an inference algorithm traverses the entire network and recalculates the values of all dependent parameters. PET consists of 120,000 lines of C++ code and is interfaced with other CAD software, including 3-D modeling software.
After the initial implementation, there was a massive stage of evolution where, in our estimate, more than 70% of the current functionality was either radically changed or newly introduced. The evolution was driven mostly by user requests. All changes to PET were performed as quickly as possible in order to make the new functionality available. This situation prevented conceptual changes to the architecture, and the architecture progressively deteriorated. Moreover, the original architecture was not conceived for changes of this magnitude. As a result, the architecture has drastically deteriorated to the point where the requested evolutionary changes are becoming increasingly difficult. The symptoms of deterioration include the introduction of clones into the code and the misplacement of code into the wrong classes. During a code review we identified 10% of the PET code as clones. Because of code deterioration, the evolvability of the PET software has been decreasing and some evolutionary changes are becoming very hard. An example of a hard change is a modification to the inferencing algorithms. As mentioned above, the program uses inferencing by which the relationships between the mechanical components are maintained. The program would greatly benefit from
an introduction of a commercially available component for inferencing that contains more powerful inferencing algorithms, but the current architecture with misplaced code and clones does not make that change feasible. Because of this, the changes done to the software have the character of patches that further corrode the architecture. Recently a decision was made to move PET software into a servicing stage, with work performed by a different group of people, and to stop all evolutionary changes. While PET will be serviced and should meet the needs of the users in this situation, a new version of PET will be developed from scratch, embodying all the expertise gained from the old PET evolution. The attempt to reengineer the old version of PET has been abandoned.
6.9 The FASTGEN Geometric Modeling Toolkit
FASTGEN is a collection of Fortran programs used by the U.S. Department of Defense to model the interactions between weapons (such as bombs or missiles) and targets (such as tanks or airplanes). Targets are modeled as large collections of triangles, spheres, donuts, and other geometric figures, and ray-tracing programs compute the effects of the explosion of a weapon. FASTGEN was originally developed in the late 1970s by one contractor, and has since been modified many times by other agencies and contractors at different sites ranging from California to Florida. Originally developed primarily for mainframe computers, it has been ported to supercomputer platforms such as CDC and Cray, Digital Equipment VAX, and, in the 1990s to PC and Unix workstations. A study of CONVERT, one of the FASTGEN programs, illustrates the impact of the original architecture on program comprehension and evolvability [58]. The original code was poorly modularized with large, noncohesive subroutines and heavy use of global data. The program still contains several optimizations that were important for the original mainframe environment, but that now make comprehension very difficult. For example, records are read and written in arbitrary batches of 200 at a time; in the original environment input/output could cause the program to be swapped out of memory so it was much more efficient to read many records before doing computations. Current versions of the program preserve this complex batching logic that is now obscure and irrelevant. FASTGEN is now in a late servicing stage, bordering on phase-out.
6.10 A Financial Management Application
This application dates from the 1970s, when it was implemented on DEC PDP computers. Recently it has been ported to PC/Windows machines. It is
financially critical to its users. The software is modest in size (around 10,000 lines of code). Prior to the port, the software was stable and had evolved very little. In Lehman's terms this was an S-type system, with very little evolution, and it was very long-lived. During the port, it was decided to modify the code, preserving the original architecture as far as possible. Unfortunately, this had the following effects:
(a) On the PDP series, different peripheral drivers (magnetic tapes, paper tape, disks, etc.) had very different interfaces. These differences were not well hidden, and impacted much of the application code. In the PC implementation, Windows has a much cleaner unified view of access to disks, CDs, etc. (i.e., byte vectors). Yet the original PDP peripheral code structure was retained, because the designers of the port could not be sure of correctly handling all side effects if the structure were changed. As a result, the code is much longer than it needs to be, with much redundancy and unwarranted complexity.
(b) Even worse, the application needs to run in real time. The real time model employed in the original language has been retained in the port, yet the model for the new application language has been added. The result is a labyrinthine real time program structure that is extremely hard to comprehend.
This application has now slipped to the end of the servicing stage and only the simplest changes are possible. The expertise does not exist to reengineer it. If major changes are needed, the system will have to be rewritten.
7. Software Change and Comprehension
7.1 The Miniprocess of Change
During both the evolution and the servicing stages, a software system goes through a series of changes. In fact, both evolution and servicing consist of repeated change, and hence understanding the process of software change is the key to understanding these stages and the problems of the whole software lifecycle. Accordingly in this section we look at the process of change in more detail, decomposing change into its constituent tasks. A particularly important task is program comprehension, because it consumes most of the programmer's time, and its success dominates what can or cannot be accomplished by software change. The tasks comprising software change are listed in [14] (see the Introduction). They are summarized in the miniprocess of change. In order to emphasize tasks
that we consider important, we divide them differently from the standard and group them into the miniprocess in the following way:
• Change request: the new requirements for the system are proposed.
• Change planning: analyze the proposed changes.
  o Program comprehension: understand the target system.
  o Change impact analysis: analyze the potential change and its scope.
• Change implementation: the change is made and verified.
  o Restructuring (refactoring) for change.
  o Initial change.
  o Change propagation: make secondary changes to keep the entire system consistent.
  o Validation and verification: ensure that the system after the change meets the new requirements and that the old requirements have not been adversely impacted by the change.
  o Redocumentation: project the change into all documentation.
• Delivery.
These tasks are discussed in more detail in this section.
7.2 Change Request and Planning
The users of the system usually originate the change requests (or maintenance requests). These requests have the form of fault reports or requests for enhancements. Standard practice is to have a file of requests (backlog) that is regularly updated. There is a submission deadline for change requests for the next release. After the deadline, the managers decide which particular requests will be implemented in that release. All requests that are submitted after the deadline or the requests that did not make it into the release will have to wait for the following release. Even this superficial processing of change requests requires some understanding of the current system so that the effort required may be estimated. It is a common error to underestimate drastically the time required for a software change and thus the time to produce a release. In small changes, it suffices to find the appropriate location in the code and replace the old functionality with the new one. However large incremental changes
require implementation of new domain concepts. Consider a retail "point-of-sale" application for handling bar code scanning and customer checkout. The application would need to deal with several forms of payment, such as cash and credit cards. An enhancement to handle check payments would involve a new concept, related to the existing payment methods but sufficiently different to require additional data structures, processing for authorization, etc. There will be quite a lot of new code, and care is needed to maintain consistency with existing code to avoid degrading the system architecture.
Concepts that are dependent on each other must be implemented in the order of their dependency. For example, the concept "tax" is dependent on the concept "item" because different items may have different tax rates and tax without an item is meaningless. Therefore, the implementation of "item" must precede the implementation of "tax." If several concepts are mutually dependent, they must be implemented in the same incremental change. Mutually independent concepts can be introduced in arbitrary order, but it is advisable to introduce them in the order of importance to the user. For example, in the point-of-sale program it is more important to deal correctly with taxes than to support several cashiers. An application with correct support for taxes is already usable in stores with one cashier. The opposite order of incremental changes would postpone the usability of the program.
Change planning thus requires the selection of domain concepts to be introduced or further developed. It also requires finding in the old code the location where these concepts should be implemented so that they properly interact with the concepts already present. Obviously these tasks require a deep understanding of the software and of its problem domain.
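The item/tax dependency described above can be made concrete with a small sketch. The types and function names below are our own illustration of a hypothetical point-of-sale program, not part of any real product; the point is that the tax computation cannot even be written until the item concept exists, so "item" must be delivered in an earlier incremental change.

    #include <iostream>
    #include <string>
    #include <vector>

    // Incremental change 1: the "item" concept.
    struct Item {
        std::string name;
        double price;
        double taxRate;   // introduced with change 2; meaningless without an item
    };

    // Incremental change 2: the "tax" concept, which depends on Item.
    double taxFor(const Item& item) {
        return item.price * item.taxRate;
    }

    double totalDue(const std::vector<Item>& sale) {
        double total = 0.0;
        for (const Item& item : sale)
            total += item.price + taxFor(item);
        return total;
    }

    int main() {
        std::vector<Item> sale = {{"bread", 2.50, 0.0}, {"soap", 3.00, 0.07}};
        std::cout << totalDue(sale) << "\n";   // 5.71
    }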
7.3 Change Implementation
Implementation of the software change requires several tasks, often with some looping and repetition. If the proposed change has a large impact on the architecture, there may be a preliminary restructuring of the program to maintain a clean design. In an object-oriented program, for example, this may involve refactoring to move data or functions from one class to another [59]. The actual change may occur in any one of several ways. For small changes, obsolete code is replaced by new code. For large incremental changes, new code is written and then "plugged" into the existing system. Several new classes implementing a new concept may be written, tested, and interfaced with the old classes already in the code.
Very often the change will propagate; that is, it will require secondary changes. In order to explain change propagation, we must understand that software consists
of entities (classes, objects, functions, etc.) and their dependencies. A dependency between entities A and B means that entity B provides certain services which A requires. A function call is an example of a dependency among functions. Different programming languages or operating systems may provide different kinds of entities and dependencies. A dependency of A on B is consistent if the requirements of A are satisfied by what B provides.
Dependencies can be subtle and of many kinds. The effect may be at the code level; for example, a module under change may use a global variable in a new way, so all uses of the global variable must be analyzed (and so on). Dependencies can also occur via nonfunctional requirements or business rules. For example, in a real time system, alteration of code may affect the timing properties in subtle ways. For this reason, the analysis of a change and the determination of which code to alter often cannot easily be compartmentalized. Senior maintenance engineers need a deep understanding of the whole system and how it interacts with its environment to determine how a required change should be implemented while hopefully avoiding damage to the system architecture. The business rules may be extremely complex (e.g., the "business rules" that address the navigation and flight systems in an on-board safety critical flight control system); in an old system, any documentation on such rules has probably been lost, and determining the rules retrospectively can be an extremely time-consuming and expensive task (for example, when the domain expert is no longer available).
Implementation of a change in software thus starts with a change to a specific entity of the software. After the change, the entity may no longer fit with the other entities of the software, because it may no longer provide what the other entities require, or it may now require different services from the entities it depends on. The dependencies that no longer satisfy the require-provide relationships are called inconsistent dependencies (inconsistencies for short), and they may arise whenever a change is made in the software. In order to reintroduce consistency into the software, the programmer keeps track of the inconsistencies and the locations where the secondary changes are to be made. The secondary changes, however, may introduce new inconsistencies, etc. The process in which the change spreads through the software is sometimes called the ripple effect of the change [60,61]. The programmer must guarantee that the change is correctly propagated, and that no inconsistency is left in the software. An unforeseen and uncorrected inconsistency is one of the most common sources of errors in modified software.
A software system consists not just of code, but also of documentation. Requirements, designs, test plans, and user manuals can be quite extensive, and they are often also made inconsistent by the change. If the documentation is to be useful in the future, it must also be updated.
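Change propagation as described above can be viewed as a worklist traversal of the dependency graph: starting from the initially changed entity, the programmer (or a supporting tool) repeatedly examines the entities that depend on something just changed, fixes the inconsistent ones, and adds their own dependents to the worklist. The sketch below is our own simplification, not an algorithm from the cited literature, and it conservatively assumes that every dependent is affected.

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    // dependents["A"] lists the entities that require services from A,
    // i.e., the entities that must be re-examined when A changes.
    using DependencyGraph = std::map<std::string, std::vector<std::string>>;

    // Return every entity that may need a secondary change (the ripple
    // effect) when 'changed' is modified.
    std::set<std::string> rippleEffect(const DependencyGraph& dependents,
                                       const std::string& changed) {
        std::set<std::string> visited;
        std::vector<std::string> worklist = {changed};
        while (!worklist.empty()) {
            std::string entity = worklist.back();
            worklist.pop_back();
            auto it = dependents.find(entity);
            if (it == dependents.end()) continue;
            for (const std::string& dep : it->second)
                if (visited.insert(dep).second)   // not visited before
                    worklist.push_back(dep);
        }
        return visited;
    }

    int main() {
        DependencyGraph g = {
            {"Payment", {"Checkout", "Receipt"}},
            {"Checkout", {"Report"}},
        };
        for (const std::string& e : rippleEffect(g, "Payment"))
            std::cout << e << "\n";   // Checkout, Receipt, Report
    }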
Obviously, the modified software system needs to be validated and verified. The most commonly used technique is regression testing, in which a set of system tests is conserved and rerun on the modified system. The regression test set needs to have fairly good coverage of the existing system if it is to be effective. It grows over time as tests are added for each new concept and feature. The regression test set will also need to be rerun many times over the life of the project. Regression testing is thus not cheap, so it is highly desirable to automate the running of the tests and the checking of the output. Testing is, however, not enough to guarantee that consistency has been maintained. Inspections can be used at several points in the change miniprocess to confirm that the change is being introduced at the right point, that the resulting code meets standards, and that documentation has indeed been updated for consistency.
It is evident that a clear understanding of the software system is essential at all points during change implementation. Refactoring requires a vision of the architecture and of the division of responsibilities between modules or classes. Change propagation analysis requires tracing the dependencies of one entity on another, and may require knowledge of subtle timing or business rule dependencies. Documentation updating can be among the most knowledge-demanding tasks, since it requires an awareness of the multiple places in each document where any particular concept is described. Even testing requires an understanding of the test set, its coverage, and where different concepts are tested.
In the previous paragraphs we have described what should be done as part of each software change. Given the effort required to "do it right," it is not surprising to discover that, in practice, some of these tasks are skipped or slighted. In each such case, a trade-off is being made between the immediate cost or time and an eventual long-term benefit. As we will discuss in Section 8, it is not necessarily irrational to choose the immediate over the long term, but all such decisions need to be taken with full awareness of the potential consequences.
As we have described earlier, program comprehension is a key determinant of the lifecycle of any specific software product. To understand why this is so, we need to understand what it means to understand a program, why that understanding is difficult, and how it fits into the cycle of software change.
7.4 Program Comprehension
Program comprehension is carried out by human engineers with the aim of understanding the source code, documentation, test suite, design, etc. It is typically a gradual process of building up understanding, which can then be used to explain the construction and operation of the program. So program comprehension is the activity of understanding how a program is constructed and its
underlying intent. The engineer requires precise knowledge of the data items in the program, the way these items are created, and their relationships [62].
Various surveys have shown that the central activity in maintenance is understanding the source code. Chapin and Lau [63] describe program comprehension as the most skilled and labor-intensive part of software maintenance, while Oman [64] states that the key to effective software maintenance is program comprehension. Thus it is a human-intensive activity that incurs considerable cost. An early survey of the field is [13]; see also von Mayrhauser [65]. The understanding can then be used for:
• Maintenance and evolution (e.g., [66]),
• Reverse engineering (e.g., [13]),
• Learning and training,
• Redocumentation (e.g., [67,68]),
• Reuse (e.g., [69]),
• Testing and validation (e.g., [70]).
The field has prompted several theories derived from empirical investigation of the behavior of programmers. There are three fundamental views; see Storey [71]:
• Comprehension is undertaken in a top-down way, from requirements to implementation [72,73],
• Comprehension is undertaken in a bottom-up way, starting with the source code, and deducing what it does and how it does it [74], and
• Comprehension is undertaken opportunistically [75,76].
All three may be used at different times, even by a single engineer. It is encouraging to note that much work on comprehension has been supported by empirical work to gain understanding of what engineers actually do in practice (see, for example, [66,76,77]).
To support comprehension, a range of tools has been produced, and some of these present information about the program, such as variable usage, call graphs, etc., in a diagrammatic or graphical form. Tools divide into two types:
• Static analysis tools, which provide information to the engineer based only on the source code (and perhaps documentation),
• Dynamic analysis tools, which provide information as the program executes.
More recent work is using virtual reality and much more sophisticated visualization metaphors to help understanding [78].
The work on an integrated metamodel [65] has drawn together into a single framework the work on cognition of large software systems. It is based on four components:
• Top-down structures,
• Situation model,
• Program model, and
• The knowledge base.
It combines the top-down perspective with the bottom-up approach (i.e., the situation and program models). The knowledge base addresses information concerned with the comprehension task, and is incremented as new and inferred knowledge is determined. The model is not prescriptive, and different approaches to comprehension may be invoked during the comprehension activity.
All authors agree that program comprehension is a human-oriented and time-intensive process, requiring expertise in the programming language and environment, deep understanding of the specific code and its interactions, and also knowledge of the problem domain, the tasks the software performs in that domain, and the relationships between those tasks and the software structure.
As mentioned earlier, locating concepts in the code is a program comprehension task that is very important during the phase of change planning. Change requests are very often formulated as requests to change or introduce the implementation of specific domain concepts, and the very first task is to find where these concepts are implemented in the code. A usual assumption behind the concept location task is that the user does not have to understand the whole program, but only the part that is relevant to the concepts involved in the change.
In a widely cited paper, Biggerstaff et al. [79] presented a technique of concept location in the program based on the similarity of identifiers used in the program and the names of the domain concepts. When trying to locate a concept in the code, the programmer looks for the variables, functions, classes, etc., with a name similar to the name of the concept. For example, when trying to locate the implementation of breakpoints in a debugger, the programmer looks for variables with identifiers such as breakpoint, Breakpoint, break-point, brkpt, etc. Text pattern matching tools like "grep" are used for this purpose. Once an appropriate identifier is found, the programmer reads and comprehends the surrounding code in order to locate all code related to the concept being searched for.
Another technique of concept or feature location is based on analysis of program execution traces [80]. The technique requires instrumentation of the program so that it can be determined which program branches were executed for a given set of input data. Then the program is executed for two sets of data: data set A with the feature and data set B without the feature. The feature is most probably
located in the branches that were executed for data set A but were not executed for data set B. Another method of concept location is based on static search of code [81]. The search typically starts in function main() and the programmer tries to find the implementation of the concept there. If it cannot be located there, it must be implemented in one of the subfunctions called from main(); hence the programmer decides which subfunction is the most likely to implement the concept. This process is recursively repeated (with possible backtracks) until the concept is found. As remarked earlier, we believe that the comprehensibility of a program is a key part of software quality and evolvability, and that research in program comprehension is one of the key frontiers of research in software evolution and maintenance.
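The trace-based feature location technique described above amounts to a set difference over execution traces. The sketch below is our own illustration under simplifying assumptions: real tools instrument the program and record executed branches automatically, whereas here the two traces are simply given as sets of function names.

    #include <iostream>
    #include <set>
    #include <string>

    // Elements recorded while running the program on data set A (feature
    // exercised) and data set B (feature not exercised).
    std::set<std::string> featureCandidates(const std::set<std::string>& traceA,
                                            const std::set<std::string>& traceB) {
        std::set<std::string> candidates;
        for (const std::string& f : traceA)
            if (traceB.count(f) == 0)   // executed only when the feature is used
                candidates.insert(f);
        return candidates;
    }

    int main() {
        std::set<std::string> traceA = {"main", "parseInput", "setBreakpoint", "report"};
        std::set<std::string> traceB = {"main", "parseInput", "report"};
        for (const std::string& f : featureCandidates(traceA, traceB))
            std::cout << f << "\n";     // setBreakpoint
    }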
8. Sustaining Software Value

8.1 Staving off the Servicing Stage
One of the goals of our staged model of the software lifecycle is to aid software managers in thinking about the systems they control and in planning their futures. It is clear from our argument so far that a software system subtly loses much of its value to its owners when it makes the transition from the evolution to the servicing stage. A software system in the evolution stage is routinely adapted to changing organizational needs and can thus make a considerable contribution to the organization's mission and/or revenues. When the system transitions into servicing, only the simplest changes can be made; the software is a less valuable asset and may actually become a constraint on the organization's success. Thus, most software managers will want to stave off the servicing stage as long as possible.

There are a number of strategies that can be adopted to sustain software value, but unfortunately all of them produce their benefits in the long term while requiring an expenditure of effort or time in the short term. A software manager must seek an appropriate trade-off between the immediate budget and time pressures of doing business and the potential long-term benefits of increased software value. The appropriate choice of strategies will obviously not be the same for all companies. An e-business that must change almost daily to survive will focus on rapid change, whereas the owners of an embedded system with stable requirements but life-critical consequences of failure may be able to focus on long-term quality.
Thus this section is not prescriptive, but merely tries to identify some of the issues that a software manager or chief architect should consider. We list some of the strategies and techniques that have been proposed, categorizing them by their stage in the lifecycle. Unfortunately, there seems to be very little published analysis that would aid a software manager in estimating the costs and benefits. Research into the actual effectiveness of each would seem to be a priority.
8.2 Strategies during Development
The key decisions during development are those that determine the architecture of the new system and the team composition. These decisions are, of course, interrelated; as has been mentioned, many of the more famously evolvable systems such as Unix and VME were the product of a very few highly talented individuals. Advice to "hire a genius" is not misplaced, but is difficult to follow in practice.

In the current state of the art, there is probably little that can be done to design an architecture to permit any conceivable change. However, it is possible to address systematically those potential changes that can be anticipated, at least in general terms. For instance, it is well known that changes are extremely common in the user interfaces to systems, to operating systems, and to hardware, while the underlying data and algorithms may be relatively stable. During initial development a roughly prioritized list of the anticipated changes can be a very useful guide to architectural design [82].

Once possible changes are identified, the main architectural strategy to use is information hiding of those components or constructs most likely to change. Software modules are structured so that design decisions, such as the choice of a particular kind of user interface or a specific operating system, are concealed within one small part of the total system, a technique described by Parnas since the early 1970s [83]. If the anticipated change becomes necessary in the future, only a few modules would need to be modified.

The emergence of object-oriented languages in the 1990s has provided additional mechanisms for designing to cope with anticipated changes. These languages provide facilities such as abstract classes and interfaces, which can be subclassed to provide new kinds of object which are then used by the rest of the program without modification. Designers can also make use of object-oriented design patterns, many of which are intended to provide flexibility to allow for future software enhancements [84]. For example, the Abstract Factory pattern provides a scheme for constructing a family of related objects that interact, such as in a user interface toolkit. The pattern shows how new object classes can be added, say to provide an alternate look-and-feel, with minimal change to the existing code.
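To illustrate the Abstract Factory idea in the user-interface setting just described, here is a minimal sketch; the widget and factory names are hypothetical, and the example is only meant to show how a new look-and-feel becomes a new factory class rather than an edit to existing client code.

    /** Products: the rest of the program talks only to these interfaces. */
    interface Button    { void render(); }
    interface ScrollBar { void render(); }

    /** Abstract factory: one creation point for a whole family of widgets. */
    interface WidgetFactory {
        Button createButton();
        ScrollBar createScrollBar();
    }

    /** One concrete look-and-feel; another look-and-feel is a new class, not an edit. */
    class MotifWidgetFactory implements WidgetFactory {
        public Button createButton()       { return () -> System.out.println("Motif button"); }
        public ScrollBar createScrollBar() { return () -> System.out.println("Motif scroll bar"); }
    }

    class Application {
        private final WidgetFactory factory;

        Application(WidgetFactory factory) { this.factory = factory; }  // look-and-feel chosen once

        void buildUi() {
            factory.createButton().render();
            factory.createScrollBar().render();
        }

        public static void main(String[] args) {
            new Application(new MotifWidgetFactory()).buildUi();
        }
    }

Adding, say, a hypothetical OpenLookWidgetFactory would leave Application and its callers untouched, which is exactly the kind of anticipated change the pattern is designed to absorb.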
In both traditional and object-oriented systems, architectural clarity and consistency can greatly facilitate program comprehension. Similar features should always be implemented in the same way, and if possible by similarly named components. For example, one of the authors once studied a test coverage monitoring system that provided coverage of basic blocks, of decisions, and of several kinds of data flows [85]. The code for each kind of monitoring was carefully segregated into source files, with code to display blocks in a file called bdisp.c, that to display decisions in ddisp.c, and so on. Within each file, corresponding functions were named the same; for example, each file had a display() function to handle its particular kind of display. This consistency made it very easy for a maintainer to hypothesize where particular concepts were likely to be located, and greatly speeded up understanding of the program architecture, even in the absence of design documentation.

On the other hand, inconsistency in design and naming can lead future maintainers into errors in understanding. Polymorphic object-oriented systems are particularly susceptible to this kind of error, since many member functions may have the same name. They thus cannot easily be distinguished by a maintainer reading the code. If they do not perform precisely the same task, or if one version has side effects not shared by others, then the maintainer may be seriously misled in reading the code [86].

The best way to achieve architectural consistency is probably to leave the basic design in the hands of a very small team who work together very closely. Temptations to rotate personnel between projects should probably be strongly resisted at this stage. Other programmers may later maintain the consistent design if they are encouraged to study the structure of existing code before adding their own contributions. Consistency may then further be enforced by code inspections or walkthroughs.

If the project architecture depends on purchased COTS components, as so many modern projects do, then particular care is needed. First, it would obviously be dangerous to depend heavily on a component that is already in the servicing or phase-out stages. It is thus important to understand the true status of each component, which may require information that the component supplier is reluctant to give. Second, the impact of the possible changes on the COTS component should be considered. For instance, if changes to the hardware platform are anticipated, will the COTS supplier, at reasonable cost, evolve his product to use the new hardware? If not, information hiding design might again be advisable to facilitate the possible substitution of a different COTS component in the future.

Programming environments and software tools that generate code may create problems similar to those of COTS. Many such environments assume implicitly that all future changes will take place within the environment. The generated code
may be incomprehensible for all practical purposes. Unfortunately, experience indicates that environments, and the companies that produce them, often have much shorter lifetimes than the systems developed using them.

Finally, there are well-known coding practices that can greatly facilitate program comprehension and thus software change. In Section 2.4 we have mentioned the use of IEEE or ISO standards, the enforcement of a house coding style to guarantee uniform layout and commenting, and an appropriate level of documentation to match the criticality of the project. One coding technique that should probably be used more often is to insert instrumentation into the program to aid in debugging and future program comprehension. Experienced programmers have used such instrumentation for years to record key events, interprocess messages, etc. Unfortunately, instrumentation is rarely mandated, and is more often introduced ad hoc only after a project has fallen into trouble [87]. If used systematically, it can be a great aid to understanding the design of a complex system "as-built" [88].

We should mention again that all the above techniques involve a trade-off between evolvability and development time. The study of potential changes takes time and analysis. Design to accommodate change may require more complex code, which impacts both time and later program comprehension. (At least one project found it desirable to remove design patterns that had been introduced to provide unused flexibility [89].) COTS components and programming environments can greatly speed up initial development, but with serious consequences for future evolvability. Design consistency and coding standards are difficult to enforce unless almost all code is inspected, a time-consuming and thus far from universal practice. In the rush to get a product to market, a manager must be careful about decisions that sacrifice precious time against future benefit.
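As a concrete illustration of the instrumentation idea mentioned above, the sketch below logs timestamped events to a trace file; the class and event names are invented for illustration, and a real project would choose its own conventions (and, on embedded targets, a cheaper sink than file I/O).

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    /**
     * Minimal event-trace instrumentation: each significant event (message
     * sent, state change, task start/stop) is appended to a log with a
     * timestamp, so the as-built behavior can be reconstructed later.
     */
    final class EventTrace {
        private static PrintWriter out;

        static synchronized void open(String fileName) throws IOException {
            out = new PrintWriter(new FileWriter(fileName, true), true);  // append, auto-flush
        }

        static synchronized void record(String component, String event) {
            if (out != null) {
                out.printf("%d %s %s%n", System.currentTimeMillis(), component, event);
            }
        }
    }

    class Demo {
        public static void main(String[] args) throws IOException {
            EventTrace.open("trace.log");
            EventTrace.record("scheduler", "task START coverage-report");
            EventTrace.record("ipc", "message SENT queue=results");
            EventTrace.record("scheduler", "task END coverage-report");
        }
    }

The resulting trace is exactly the kind of "as-built" record that later maintainers can mine when the design documentation has drifted or disappeared.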
8.3 Strategies during Evolution
During the evolution phase, the goal must be to delay servicing as long as possible by preserving a clean architecture and by facilitating program comprehension. As previously mentioned, most modern projects transition to the evolution phase with at least some key members of the original development team in place. There are, of course, circumstances when this may not be possible and a transition to a new team is unavoidable. This is a strategy that risks an almost immediate slide into servicing due to the difficulties of program comprehension. If a new team is, however, absolutely essential, then there are some steps that can be taken, such as having developers on-site for some months to aid the transition. Software managers placed in this situation may want to consult some of the papers by Pigoski and others that discuss experiences in software transition [90-92].
If it has not already been done, the system should now be placed under configuration management control. From here on, different customers will have different versions, and it is essential to have a mechanism for tracking the revisions of each file that went into making up each version. Without such a mechanism it will be very difficult to interpret problem reports arriving from the field. Change control may be formal or informal, depending on the process used, but change control procedures should be well defined, and it is highly desirable to have one person designated as configuration manager, with responsibility for change control and version tracking.

At this stage, if not earlier, the project needs to have a clear strategy for program comprehension. That strategy can include combinations of at least three elements:

• Team knowledge, carried by team members who commit to long-term participation in the project,
• Written documentation of specifications, design, configurations, tests, etc., or its equivalent in the repository of a software tool, and
• Reverse engineering tools to recover design information from the system itself.

As evolution starts, team composition may change somewhat, often with some shrinkage. If the project manager intends to rely mainly on team knowledge for program comprehension, this is a good point to take inventory of the available knowledge and to try to avoid overconcentration in one or two team members. As previously mentioned, agile development methods such as Extreme Programming often use pair programming, in which two programmers work together on each task [40]. As indicated by the printed circuits program case described in Section 6.7, it is probably unrealistic to expect new programmers to undertake major changes alone, but it may be possible to get them to work in a pair with a more experienced programmer.

It is desirable to avoid the well-known "guru" phenomenon, in which one person is the only expert in an important part of the project. The guru often tends to adopt that part as his territory, and makes it difficult for anyone else to work with that code. Once this situation is established, it can be very difficult to manage. A guru problem is obviously dangerous for the future of the project and can also have a bad impact on team morale.

If the manager decides to emphasize written documentation for program comprehension, then there will be considerable overhead to revise and update all such documentation as the system evolves. It is likely that any design documentation available at the beginning of evolution will represent the system the developers intended to build, which may differ substantially from what was actually built.
A useful technique is incremental redocumentation, in which documentation is generated or updated as modules are modified. A trainee programmer may be assigned to do the write-up, based on notes or an interview with the experienced programmer who actually made the modification, thus reducing the cost and schedule impact of the redocumentation. One of the authors has given a case study showing how this strategy was applied to a large C++ system [93].

The manager needs to establish a tricky balance between programmers' desire to restructure or rewrite code and the economics of the project. Unless they wrote it themselves, programmers will almost always complain of the quality of the code they encounter! Some such complaints are certainly well founded, as restructuring and refactoring may be needed to maintain a clean architecture. However, if all such requests are approved, the burden of coding, inspecting, and especially testing will become unsustainable.

Key decisions in the evolution phase concern the creation of new versions and the transition to servicing. If the versioned staged model of Fig. 2 is followed, the new version will probably start development well before the old one passes into servicing. Management should make a conscious decision as to the paths to be followed, based on judgments about the state of the system and the demands of the market.
8.4 Strategies during Servicing

The transition into servicing implies that further change will be relatively minor, perhaps involving bug fixes and peripheral new features. It is important to understand that the transition is largely irreversible since essential knowledge and architectural integrity have probably been lost. If the product is to be reengineered, it is likely that the best strategy will be simply to try to reproduce its black-box behavior rather than to study and reuse its current code or design [94].

Often there may be a transition to a new maintenance team as servicing begins. Expectations about what can be accomplished by such a team should be kept modest to avoid impossible commitments. Configuration management must continue in place to be able to understand reports from the field. Strategies such as opportunistic redocumentation may still be desirable, but fixes that degrade the code may be tolerated since the basic strategy is to minimize cost while maintaining revenue in the short run.

Finally, the servicing stage is the time to make and implement a plan for phase-out. Migration paths to a new version in development should be provided. The main issue is often the need to recover and reformat vital organizational data so that it can be used in the new version.
9. Future Directions: Ultra Rapid Software Evolution
It is possible to inspect each activity of the staged software model and determine how it may be speeded up. Certainly, new technology to automate parts may be expected, supported by tools (for example, in program comprehension, testing, etc.). However, it is very difficult to see that such improvements will lead to a radical reduction in the time to evolve a large software system. This prompted us to believe that a new and different way is needed to achieve "ultra rapid evolution"; we term this "evolution in Internet time." It is important to stress that such ultra rapid evolution does not imply poor quality, or software that is simply hacked together without thought. The real challenge is to achieve very fast change yet provide very-high-quality software. Strategically, we plan to achieve this by bringing the evolution process much closer to the business process. The generic problem of ultra rapid evolution is seen as one of the grand challenges for software engineering (see [95,96]).

The staged model allows us to address a large system built out of many parts (and so on recursively). Each part may be in one of the five stages (although we would expect the main stress to be on the first three stages). This has been ignored in previous research. The integration mechanism is market-led, not simply a technical binding, and requires the representation of nontechnical and nonfunctional attributes of the parts. The new perspective offered by the staged model has been a crucial step in developing a serviceware approach.

For software evolution, it is useful to categorize contributing factors into those which can rapidly evolve and those which cannot; see Table I.

                              TABLE I
              CONTRIBUTING FACTORS OF SOFTWARE EVOLUTION

    Fast moving                     Slow moving
    Software requirements           Software functionality
    Marketplaces                    Skills bases
    Organizations                   Standards
    Emergent companies              Companies with rigid boundaries
    Demand led                      Supply led
    Competitive pressures           Long-term contracts
    Supply chain delivery           Software technology
    Risk taking                     Risk averse
    New business processes          Software process evolution
    Near-business software          Software infrastructure

We concluded that a "silver bullet," which would somehow transform software into something that could be changed (or could change itself) far more quickly than at present, was not viable. Instead, we take the view that software is actually
hard to change, and thus that change takes time to accomplish. We needed to look for other solutions.

Let us now consider a very different scenario. We assume that our software is structured into a large number of small components that exactly meet the user's needs and no more. Suppose now that a user requires an improved component C. The traditional approach would be to raise a change request with the vendor of the software, and wait for several months for this to be (possibly) implemented and the modified component integrated. In our solution, the user disengages component C, and searches the marketplace for a replacement D that meets the new needs. When this is found, it replaces C, and is used in the execution of the application. Of course, this assumes that the marketplace can provide the desired component. However, it is a well-established property of marketplaces that they can spot trends, and make new products available when they are needed. The rewards for doing so are very strong and the penalties for not doing so are severe. Note that any particular component supplier can (and probably will) use traditional software maintenance techniques to evolve their components. The new dimension is that they must work within a demand-led marketplace.

Therefore, if we can find ways to disengage an existing component and bind in a new one (with enhanced functionality and other attributes) ultra rapidly, we have the potential to achieve ultra rapid evolution in the target system. This concept led us to conclude that the fundamental problem with slow evolution was a result of software that is marketed as a product, in a supply-led marketplace. By removing the concept of ownership, we have instead a service, i.e., something that is used, not owned. Thus, we generalized the component-based solution to the much more generic service-based software in a demand-led marketplace [97].

This service-based model of software is one in which services are configured to meet a specific set of requirements at a point in time, executed, and disengaged—the vision of instant service. A service is used rather than owned [98]; it may usefully be considered to comprise a communications protocol together with a service behavior. Services are composed from smaller ones (and so on recursively), procured and paid for on demand. A service is not a mechanized process; it involves humans managing supplier-consumer relationships. This is a radically new industry model, which could function within markets ranging from a genuine open market (requiring software functional equivalence) to a keiretsu market, where there is only one supplier and consumer, both working together with access to each other's information systems to optimize the service to each other.

This strategy potentially enables users to create, compose, and assemble a service by bringing together a number of suppliers to meet needs at a specific
point in time. An analogy is selling cars: today manufacturers do not sell cars from a premanufactured stock with given color schemes, features, etc.; instead customers configure their desired car from a series of options and only then is the final product assembled. This is only possible because the technology of production has advanced to a state where assembly of the final car can be undertaken sufficiently quickly. Software vendors attempt to offer a similar model of provision by offering products with a series of configurable options. However, this offers extremely limited flexibility—consumers are not free to substitute functions with those from another supplier, since the software is subject to binding, which configures and links the component parts, making it very difficult to perform substitution.

The aim of this research is to develop the technology which will enable binding to be delayed until immediately before the point of execution of a system. This will enable consumers to select the most appropriate combination of services required at any point in time. However, late binding comes at a price, and for many consumers, issues of reliability, security, cost, and convenience may mean that they prefer to enter into contractual agreements to have some early binding for critical or stable parts of a system, leaving more volatile functions to late binding and thereby maximizing competitive advantage. The consequence is that any future approach to software development must be interdisciplinary, so that nontechnical issues, such as supply contracts, terms and conditions, and error recovery are addressed and built into the new technology.

A truly service-based role for software is far more radical than current approaches, in that it seeks to change the very nature of software. To meet users' needs of evolution, flexibility, and personalization, an open marketplace framework is necessary in which the most appropriate versions of software products come together, and are bound and executed as and when needed. At the extreme, the binding that takes place prior to execution is disengaged immediately after execution in order to permit the "system" to evolve for the next point of execution. Flexibility and personalization are achieved through a variety of service providers offering functionality through a competitive marketplace, with each software provision being accompanied by explicit properties of concern for binding (e.g., dependability, performance, quality, license details, etc.).

A component is simply a reusable software executable. Our serviceware clearly includes the software itself, but in addition has many nonfunctional attributes, such as cost and payment, trust, brand allegiance, legal status and redress, and security. Binding requires us to negotiate across all such attributes (as far as possible electronically) to establish a binding, at the extreme just before execution.
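The sketch below is one minimal way to picture this late-binding marketplace; it is not the authors' architecture, and the interface, supplier names, and attribute model (cost, reliability) are all invented for illustration.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    /** The consumer programs against a behavioral interface, never a product. */
    interface SpellCheckService {
        boolean check(String word);
    }

    /** Nonfunctional attributes advertised by a supplier (cost, dependability, ...). */
    record Offer(String supplier, double costPerCall, double reliability,
                 SpellCheckService implementation) { }

    /** A toy marketplace: suppliers register offers; consumers bind just before use. */
    class Marketplace {
        private final List<Offer> offers = new ArrayList<>();

        void advertise(Offer offer) { offers.add(offer); }

        /** Select the cheapest offer meeting a reliability threshold. */
        Optional<SpellCheckService> bind(double minReliability) {
            return offers.stream()
                    .filter(o -> o.reliability() >= minReliability)
                    .min(Comparator.comparingDouble(Offer::costPerCall))
                    .map(Offer::implementation);
        }
    }

    class Consumer {
        public static void main(String[] args) {
            Marketplace market = new Marketplace();
            market.advertise(new Offer("SupplierC", 0.02, 0.95, w -> w.length() > 1));
            market.advertise(new Offer("SupplierD", 0.01, 0.99, w -> !w.isEmpty()));

            // Binding happens here, immediately before execution; replacing a
            // supplier is a new advertise() call, not a change to Consumer.
            SpellCheckService svc = market.bind(0.98).orElseThrow();
            System.out.println(svc.check("evolution"));
        }
    }

The point of the sketch is that the consumer depends only on the behavioral interface plus the advertised attributes, so substituting a supplier is a marketplace event rather than a code change.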
Requirements for software need to be represented in such a way that an appropriate service can be discovered on the network. The requirements must therefore convey both the description and the intention of the desired service. Given the highly dynamic nature of software supplied as a service, the maintainability of the requirements representation becomes an important consideration. However, the aim of the architecture is not to prescribe such a representation, but to support whatever conventions users and service suppliers prefer.

Automated negotiation is another key issue for research, particularly in areas where nonnumeric terms are used, e.g., legal clauses. Such clauses do not lend themselves to offer/counteroffer and similar approaches. In relation to this, the structure and definition of profiles and terms need much work, particularly where terms are related in some way (e.g., performance and cost). Also, we need insight into the issue of when to select a service and when to enter negotiations for a service. It is in this area that multidisciplinary research is planned. We plan to concentrate research in these areas, and use as far as possible available commercial products for the software infrastructure. Finally, many issues need to be resolved concerning mutual performance monitoring and claims of legal redress should they arise.
10. Conclusions
We have presented a new staged model of the software lifecycle, motivated by the need to formulate an abstraction that is supported partly by empirical published evidence, and partly by the authors' field experiences. In particular, we have observed how the "project knowledge" has been progressively lost over the lifecycle, and the enormous implications for our ability successfully to support the software. We have argued that by understanding the staged model, a manager can better plan and resource a project, in particular to avoid it slipping into servicing irreversibly. We also indicated some future directions based on this approach.

Acknowledgments
Keith Bennett thanks members of the Pennine Research Group at Durham, UMIST, and Keele Universities for the collaborative research which has led to the author's input to this chapter (in particular Malcolm Munro, Paul Layzell, Linda Macauley, Nicolas Gold, Pearl Brereton, and David Budgen). He also thanks the Leverhulme Trust, BT, and EPSRC for generous support, and Deborah Norman for help in preparing the chapter. Vaclav Rajlich thanks Tony Mikulec from Ford Motor Co. for generous support of research in software maintenance. Also discussions with Franz Lehner while visiting University of Regensburg, and with Harry Sneed on several occasions influenced the author's thinking about this area.
Norman Wilde thanks the support of the Software Engineering Research Center (SERC) over the past 15 years; its industrial partners have taught him most of what he knows about software maintenance. More recently the US Air Force Office of Scientific Research under Grant F49620-99-1-0057 has provided an opportunity to study the FASTGEN system mentioned as one of the case studies.

References
[1] IEEE (1990). Standard Glossary of Software Engineering Terminology, standard IEEE Std 610.12-1990. IEEE, Los Alamitos, CA. Also in IEEE Software Engineering, IEEE Standards Collection. IEEE, New York, 1994.
[2] McDermid, J. A. (Ed.) (1991). The Software Engineer's Reference Book. Butterworth-Heinemann, London.
[3] Royce, W. W. (1970). "Managing the development of large software systems." Proc. IEEE WESCON 1970, pp. 1-9. IEEE, New York. [Reprinted in Thayer, R. H. (Ed.). IEEE Tutorial on Software Engineering Project Management.]
[4] Boehm, B. W. (1988). "A spiral model of software development and enhancement." IEEE Computer, May, 61-72.
[5] Rajlich, V. T., and Bennett, K. H. (2000). "A staged model for the software lifecycle." IEEE Computer, 33, 66-71.
[6] Pigoski, T. M. (1997). Practical Software Maintenance: Best Practices for Managing Your Software Investment. Wiley, New York.
[7] Lientz, B., and Swanson, E. B. (1980). Software Maintenance Management: A Study of the Maintenance of Computer Application Software in 487 Data Processing Organisations. Addison-Wesley, Reading, MA.
[8] Lientz, B., Swanson, E. B., and Tompkins, G. E. (1978). "Characteristics of applications software maintenance." Communications of the ACM, 21, 466-471.
[9] Sommerville, I. (1995). Software Engineering. Addison-Wesley, Reading, MA.
[10] Pressman, R. S. (1996). Software Engineering. McGraw-Hill, New York.
[11] Warren, I. (1999). The Renaissance of Legacy Systems. Springer-Verlag, London.
[12] Foster, J. R. (1993). Cost Factors in Software Maintenance. Ph.D. Thesis, Computer Science Department, University of Durham.
[13] Robson, D. J., Bennett, K. H., Munro, M., and Cornelius, B. J. (1991). "Approaches to program comprehension." Journal of Systems and Software, 14, 79-84. [Reprinted in Arnold, R. (Ed.) (1992). Software Re-engineering. IEEE Computer Society Press, Los Alamitos, CA.]
[14] IEEE. Standard for Software Maintenance, p. 56. IEEE, Los Alamitos, CA.
[15] International Standards Organisation (1999). International Standard Information Technology: Software Maintenance, ISO/IEC 14764:1999. International Standards Organisation.
[16] Wirth, N. (1971). "Program development by stepwise refinement." Communications of the ACM, 14.
[17] Basili, V. R., and Turner, A. J. (1975). "Iterative enhancement: A practical technique for software development." IEEE Transactions on Software Engineering, 1, 390-396. (An updated version was published as Auerbach Report 14-01-05, 1978, and in Tutorial on Software Maintenance, IEEE Computer Society Press, Los Alamitos, CA, 1982.)
[18] Brooks, F. The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley, Reading, MA.
[19] Lehman, M. M., and Belady, L. A. (1976). "A model of large program development." IBM Systems Journal, 3, 225-252.
[20] Burch, E., and Kung, H. (1997). "Modeling software maintenance requests: A case study." Proc. IEEE International Conference on Software Maintenance, pp. 40-47. IEEE Computer Society Press, Los Alamitos, CA.
[21] Lehman, M. M. (1980). "Programs, life cycles, and the laws of software evolution." Proceedings of the IEEE, 68, 1060-1076.
[22] Lehman, M. M. (1984). "Program evolution." Information Processing & Management, 20, 19-36.
[23] Lehman, M. M. (1985). Program Evolution. Academic Press, London.
[24] Lehman, M. M. (1989). "Uncertainty in computer application and its control through the engineering of software." Journal of Software Maintenance, 1, 3-28.
[25] Lehman, M. M., and Ramil, J. F. (1998). "Feedback, evolution and software technology—Some results from the FEAST project, keynote lecture." Proceedings 11th International Conference on Software Engineering and its Application, Vol. 1, Paris, 8-10 Dec., pp. 1-12.
[26] Ramil, J. F., Lehman, M. M., and Kahen, G. (2000). "The FEAST approach to quantitative process modelling of software evolution processes." Proceedings PROFES'2000 2nd International Conference on Product Focused Software Process Improvement, Oulu, Finland, 20-22 June (F. Bomarius and M. Oivo, Eds.), Lecture Notes in Computer Science 1840, pp. 311-325. Springer-Verlag, Berlin. This paper is a revised version of the report: Kahen, G., Lehman, M. M., and Ramil, J. F. (2000). "Model-based assessment of software evolution processes." Research Report 2000/4, Department of Computing, Imperial College.
[27] Lehman, M. M., Perry, D. E., and Ramil, J. F. (1998). "Implications of evolution metrics on software maintenance." International Conference on Software Maintenance (ICSM'98), Bethesda, Maryland, Nov. 16-24, pp. 208-217.
[28] Ramil, J. F., and Lehman, M. M. (1999). "Challenges facing data collection for support and study of software evolution processes," position paper. ICSE 99 Workshop on Empirical Studies of Software Development and Evolution, Los Angeles, May 18.
[29] Sneed, H. M. (1989). Software Engineering Management (I. Johnson, Transl.). Ellis Horwood, Chichester, West Sussex, pp. 20-21. Original German ed. Software Management, Rudolf Mueller Verlag, Koln, 1987.
[30] Lehner, F. (1989). "The software lifecycle in computer applications." Long Range Planning, Vol. 22, No. 5, pp. 38-50. Pergamon Press, Elmsford, NY.
[31] Lehner, F. (1991). "Software lifecycle management based on a phase distinction method." Microprocessing and Microprogramming, Vol. 32, pp. 603-608. North-Holland, Amsterdam.
[32] Truex, D. P., Baskerville, R., and Klein, H. (1999). "Growing systems in emergent organizations." Commun. ACM, 42, 117-123.
[33] Cusumano, M., and Yoffie, D. (1998). Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft. Free Press (Simon & Schuster), New York.
[34] Bennett, K. H. (1995). "Legacy systems: Coping with success." IEEE Software, 12, 19-23.
[35] Henderson, P. (Ed.) (2000). Systems Engineering for Business Process Change. Springer-Verlag, Berlin.
[36] UK EPSRC (1999). "Systems Engineering for Business Process Change." Available at http://www.staff.ecs.soton.ac.uk/~ph/sebpc.
[37] Pfleeger, S. L., and Menezes, W. (2000). "Technology transfer: Marketing technology to software practitioners." IEEE Software, 17, 27-33.
[38] Naur, P., and Randell, B. (Eds.) (1968). "Software engineering concepts and techniques," NATO Science Committee. Proc. NATO Conferences, Oct. 7-11, Garmisch, Germany. Petrocelli/Charter, New York.
[39] Shaw, M., and Garlan, D. (1996). Software Architectures. Prentice-Hall, Englewood Cliffs, NJ.
[40] Beck, K. (1999). "Embracing change with extreme programming." IEEE Computer, 32, 70-77.
[41] International Computers Ltd. (1994). "The Architecture of Open VME," ICL publication ref. 55480001. ICL, Stevenage, Herts, UK.
[42] Cusumano, M. A., and Selby, R. W. (1997). Microsoft Secrets. HarperCollins, New York.
[43] Jacobson, I., Booch, G., and Rumbaugh, J. (1999). The Unified Software Development Process. Addison-Wesley, Reading, MA.
[44] Booch, G. (2001). "Developing the future." Commun. ACM, 44, 119-121.
[45] Parnas, D. L. (1994). "Software aging." Proceedings 16th International Conference on Software Engineering, pp. 279-287. IEEE Computer Society Press, Los Alamitos, CA.
[46] Eick, S. G., Graves, T. L., Karr, A. F., Marron, J. S., and Mockus, A. (2001). "Does code decay? Assessing evidence from change management data." IEEE Transactions on Software Engineering, 27, 1-12.
[47] Rajlich, V., Wilde, N., Buckellew, M., and Page, H. (2001). "Software cultures and evolution." IEEE Computer, 34, 24-28.
[48] Johnson, J. H. (1994). "Substring matching for clone detection and change tracking." Proceedings IEEE International Conference on Software Maintenance, Victoria, Canada, Sept., pp. 120-126.
[49] Baxter, D., Yahin, A., Moura, L., Sant'Anna, M., and Bier, L. (1998). "Clone detection using abstract syntax trees." IEEE International Conference on Software Maintenance, pp. 368-377.
[50] Lague, B., Proulx, D., Mayrand, J., Merlo, E. M., and Hudepohl, J. (1997). "Assessing the benefits of incorporating function clone detection in a development process." IEEE International Conference on Software Maintenance, pp. 314-321.
[51] Burd, E., and Munro, M. (1997). "Investigating the maintenance implications of the replication of code." IEEE International Conference on Software Maintenance, pp. 322-329.
[52] Olsem, M. R. (1998). "An incremental approach to software systems reengineering." Software Maintenance: Research and Practice, 10, 181-202.
[53] Canfora, G., De Lucia, A., and Di Lucca, G. (1999). "An incremental object-oriented migration strategy for RPG legacy systems." International Journal of Software Engineering and Knowledge Engineering, 9, 5-25.
[54] Tamai, T., and Torimitsu, Y. (1992). "Software lifetime and its evolution process over generations." Proc. IEEE International Conference on Software Maintenance, pp. 63-69.
[55] Kappelman, L. A. (2000). "Some strategic Y2K blessings." IEEE Software, 17, 42-46.
[56] Fanta, R., and Rajlich, V. (1998). "Reengineering object-oriented code." Proc. IEEE International Conference on Software Maintenance, pp. 238-246.
[57] Fanta, R., and Rajlich, V. "Removing clones from the code." Journal of Software Maintenance, 1, 223-243.
[58] Wilde, N., Buckellew, M., Page, H., and Rajlich, V. (2001). "A case study of feature location in unstructured legacy Fortran code." Proceedings CSMR'01, pp. 68-76. IEEE Computer Society Press, Los Alamitos, CA.
[59] Fowler, M. (1999). Refactoring: Improving the Design of Existing Code. Addison-Wesley, Reading, MA.
[60] Yau, S. S., Collofello, J. S., and MacGregor, T. (1978). "Ripple effect analysis of software maintenance." Proc. IEEE COMPSAC, pp. 60-65.
[61] Rajlich, V. (2000). "Modeling software evolution by evolving interoperation graphs." Annals of Software Engineering, 9, 235-248.
[62] Ogando, R. M., Yau, S. S., Liu, S. S., and Wilde, N. (1994). "An object finder for program structure understanding in software maintenance." Journal of Software Maintenance: Research and Practice, 6, 261-283.
[63] Chapin, N., and Lau, T. S. (1996). "Effective size: An example of USE from legacy systems." Journal of Software Maintenance: Research and Practice, 8, 101-116.
[64] Oman, P. (1990). "Maintenance tools." IEEE Software, 7, 59-65.
[65] Von Mayrhauser, A., and Vans, A. M. (1995). "Program comprehension during software maintenance and evolution." IEEE Computer, 28, 44-55.
[66] Littman, D. C., Pinto, J., Letovsky, S., and Soloway, E. (1986). "Mental models and software maintenance." Empirical Studies of Programmers (E. Soloway and S. Iyengar, Eds.), pp. 80-98. Ablex, Norwood, NJ.
[67] Basili, V. R., and Mills, H. D. (1982). "Understanding and documenting programs." IEEE Transactions on Software Engineering, 8, 270-283.
[68] Younger, E. J., and Bennett, K. H. (1993). "Model-based tools to record program understanding." Proceedings of the IEEE 2nd International Workshop on Program Comprehension, July 8-9, Capri, Italy, pp. 87-95. IEEE Computer Society Press, Los Alamitos, CA.
[69] Standish, T. A. (1984). "An essay on software reuse." IEEE Transactions on Software Engineering, 10, 494-497.
[70] Weiser, M., and Lyle, J. (1986). "Experiments on slicing-based debugging aids." Empirical Studies of Programmers (E. Soloway and S. Iyengar, Eds.), pp. 187-197. Ablex, Norwood, NJ.
[71] Storey, M. A. D., Fracchia, F. D., and Muller, H. A. (1997). "Cognitive design elements to support the construction of a mental model during software visualization." Proceedings of the 5th IEEE International Workshop on Program Comprehension, May 28-30, pp. 17-28.
[72] Brooks, R. (1983). "Toward a theory of comprehension of computer programs." International Journal of Man-Machine Studies, 18, 542-554.
[73] Soloway, E., and Ehrlich, K. (1984). "Empirical studies of programming knowledge." IEEE Transactions on Software Engineering, 10, 595-609.
[74] Pennington, N. (1987). "Stimulus structures and mental representations in expert comprehension of computer programs." Cognitive Psychology, 19, 295-341.
[75] Letovsky, S. (1987). "Cognitive processes in program comprehension." Journal of Systems and Software, 7, 325-339.
[76] Von Mayrhauser, A., Vans, A. M., and Howe, A. E. (1997). "Program understanding behaviour during enhancement of large-scale software." Journal of Software Maintenance: Research and Practice, 9, 299-327.
[77] Shneiderman, B., and Mayer, R. (1979). "Syntactic/semantic interactions in programmer behaviour: A model and experimental results." International Journal of Computer and Information Sciences, 8, 219-238.
[78] Knight, C., and Munro, M. (1999). "Comprehension with[in] virtual environment visualisations." Proceedings IEEE 7th International Workshop on Program Comprehension, May 5-7, pp. 4-11.
[79] Biggerstaff, T., Mitbander, B., and Webster, D. (1994). "Program understanding and the concept assignment problem." Communications of the ACM, 37, 72-83.
[80] Wilde, N., and Scully, M. (1995). "Software reconnaissance: Mapping program features to code." Journal of Software Maintenance: Research and Practice, 7, 49-62.
[81] Chen, K., and Rajlich, V. (2000). "Case study of feature location using dependency graph." Proc. International Workshop on Program Comprehension, pp. 241-249. IEEE Computer Society Press, Los Alamitos, CA.
[82] Hager, J. A. (1989). "Developing maintainable systems: A full life-cycle approach." Proceedings Conference on Software Maintenance, Oct. 16-19, pp. 271-278. IEEE Computer Society Press, Los Alamitos, CA.
[83] Parnas, D. L. (1972). "On the criteria to be used in decomposing systems into modules." Communications of the ACM, 15, 1053-1058.
[84] Gamma, E., Helm, R., Johnson, R., and Vlissides, J. (1995). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA.
[85] Wilde, N., and Casey, C. (1996). "Early field experience with the software reconnaissance technique for program comprehension." Proceedings International Conference on Software Maintenance—ICSM'96, pp. 312-318. IEEE Computer Society Press, Los Alamitos, CA.
[86] Wilde, N., and Huitt, R. (1992). "Maintenance support for object-oriented programs." IEEE Transactions on Software Engineering, 18, 1038-1044.
[87] Wilde, N., and Knudson, D. (1999). "Understanding embedded software through instrumentation: Preliminary results from a survey of techniques," Report SERC-TR-85-F, Software Engineering Research Center, Purdue University. Available at http://www.cs.uwf.edu/~wilde/publications/TecRpt85F_ExSum.html.
[88] Wilde, N., Casey, C., Vandeville, J., Trio, G., and Hotz, D. (1998). "Reverse engineering of software threads: A design recovery technique for large multi-process systems." Journal of Systems and Software, 43, 11-17.
[89] Wendorff, P. (2001). "Assessment of design patterns during software reengineering: Lessons learned from a large commercial project." Proceedings Fifth European Conference on Software Maintenance and Reengineering—CSMR'01, pp. 77-84. IEEE Computer Society Press, Los Alamitos, CA.
[90] Pigoski, T. M., and Sexton, J. (1990). "Software transition: A case study." Proceedings Conference on Software Maintenance, pp. 200-204. IEEE Computer Society Press, Los Alamitos, CA.
[91] Vollman, T. (1990). "Transitioning from development to maintenance." Proceedings Conference on Software Maintenance, pp. 189-199. IEEE Computer Society Press, Los Alamitos, CA.
[92] Pigoski, T. M., and Cowden, C. A. (1992). "Software transition: Experience and lessons learned." Proceedings Conference on Software Maintenance, pp. 294-298. IEEE Computer Society Press, Los Alamitos, CA.
[93] Rajlich, V. (2000). "Incremental redocumentation using the web." IEEE Software, 17, 102-106.
[94] Bollig, S., and Xiao, D. (1998). "Throwing off the shackles of a legacy system." IEEE Computer, 31, 104-109.
[95] Bennett, K. H., Layzell, P. J., Budgen, D., Brereton, O. P., Macaulay, L., and Munro, M. (2000). "Service-based software: The future for flexible software." IEEE APSEC 2000, The Asia-Pacific Software Engineering Conference, Singapore, 5-8 December. IEEE Computer Society Press, Los Alamitos, CA.
[96] Bennett, K. H., Munro, M., Brereton, O. P., Budgen, D., Layzell, P. J., Macaulay, L., Griffiths, D. G., and Stannet, C. (1999). "The future of software." Communications of the ACM, 42, 78-84.
[97] Bennett, K. H., Munro, M., Gold, N. E., Layzell, P. J., Budgen, D., and Brereton, O. P. (2001). "An architectural model for service-based software with ultra rapid evolution." Proc. IEEE International Conference on Software Maintenance, Florence, to appear.
[98] Lovelock, C., Vandermerwe, S., and Lewis, B. (1996). Services Marketing. Prentice-Hall Europe, Englewood Cliffs, NJ. ISBN 013095991X.
Embedded Software

EDWARD A. LEE
Department of Electrical Engineering and Computer Science
University of California—Berkeley
518 Cory Hall
Berkeley, CA 94720-1770
USA
[email protected]
Abstract

The science of computation has systematically abstracted away the physical world. Embedded software systems, however, engage the physical world. Time, concurrency, liveness, robustness, continuums, reactivity, and resource management must be remarried to computation. Prevailing abstractions of computational systems leave out these "nonfunctional" aspects. This chapter explains why embedded software is not just software on small computers, and why it therefore needs fundamentally new views of computation. It suggests component architectures based on a principle called "actor-oriented design," where actors interact according to a model of computation, and describes some models of computation that are suitable for embedded software. It then suggests that actors can define interfaces that declare dynamic aspects that are essential to embedded software, such as temporal properties. These interfaces can be structured in a "system-level type system" that supports the sort of design-time and run-time type checking that conventional software benefits from.
1. What is Embedded Software?
2. Just Software on Small Computers?
   2.1 Timeliness
   2.2 Concurrency
   2.3 Liveness
   2.4 Interfaces
   2.5 Heterogeneity
   2.6 Reactivity
3. Limitations of Prevailing Software Engineering Methods
   3.1 Procedures and Object Orientation
   3.2 Hardware Design
   3.3 Real-Time Operating Systems
   3.4 Real-Time Object-Oriented Models
4. Actor-Oriented Design
   4.1 Abstract Syntax
   4.2 Concrete Syntaxes
   4.3 Semantics
   4.4 Models of Computation
5. Examples of Models of Computation
   5.1 Dataflow
   5.2 Time Triggered
   5.3 Synchronous/Reactive
   5.4 Discrete Events
   5.5 Process Networks
   5.6 Rendezvous
   5.7 Publish and Subscribe
   5.8 Continuous Time
   5.9 Finite State Machines
6. Choosing a Model of Computation
7. Heterogeneous Models
8. Component Interfaces
   8.1 On-line Type Systems
   8.2 Reflecting Program Dynamics
9. Frameworks Supporting Models of Computation
10. Conclusions
Acknowledgments
References
1. What is Embedded Software?
Deep in the intellectual roots of computation is the notion that software is the realization of mathematical functions as procedures. These functions map a body of input data into a body of output data. The mechanism used to carry out the procedure is not nearly as important as the abstract properties of the function. In fact, we can reduce the mechanism to seven operations on a machine (the famous Turing machine) with an infinite tape capable of storing zeros and ones [1]. This mechanism is, in theory, as good as any other mechanism, and therefore, the significance of the software is not affected by the mechanism. Embedded software is not like that. Its principal role is not the transformation of data, but rather the interaction with the physical world. It executes on machines that are not, first and foremost, computers. They are cars, airplanes, telephones, audio equipment, robots, appliances, toys, security systems, pacemakers, heart
monitors, weapons, television sets, printers, scanners, climate control systems, manufacturing systems, and so on. Software with a principal role of interacting with the physical world must, of necessity, acquire some properties of the physical world. It takes time. It consumes power. It does not terminate (unless it fails). It is not the idealized procedures of Alan Turing.

Computer science has tended to view this physicality of embedded software as messy. Consequently, the design of embedded software has not benefited from the richly developed abstractions of the 20th century. Instead of using object modeling, polymorphic type systems, and automated memory management, engineers write assembly code for idiosyncratic digital signal processors (DSPs) that can do finite impulse response filtering in one (deterministic) instruction cycle per tap.

The engineers that write embedded software are rarely computer scientists. They are experts in the application domain with a good understanding of the target architectures they work with. This is probably appropriate. The principal role of embedded software is interaction with the physical world. Consequently, the designer of that software should be the person who best understands that physical world. The challenge to computer scientists, should they choose to accept it, is to invent better abstractions for that domain expert to do her job.

Today's domain experts may resist such help. In fact, their skepticism is well warranted. They see Java programs stalling for one-third of a second to perform garbage collection and update the user interface, and they envision airplanes falling out of the sky. The fact is that the best-of-class methods offered by computer scientists today are, for the most part, a poor match to the requirements of embedded systems.

At the same time, however, these domain experts face a serious challenge. The complexity of their applications (and consequent size of their programs) is growing rapidly. Their devices now often sit on a network, wireless or wired. Even some programmable DSPs now run a TCP/IP protocol stack, and the applications are getting much more dynamic, with downloadable customization and migrating code. Meanwhile, reliability standards for embedded software remain very high, unlike general-purpose software. At a minimum, the methods used for general-purpose software require considerable adaptation for embedded software. At a maximum, entirely new abstractions that embrace physicality and deliver robustness are needed.
2. Just Software on Small Computers?
An arrogant view of embedded software is that it is just software on small computers. This view is naive. Timeliness, concurrency, liveness, reactivity, and
heterogeneity need to be an integral part of the programming abstractions. They are essential to the correctness of a program. It is not sufficient to realize the right mapping from input data to output data.
2.1 Timeliness
Time has been systematically removed from theories of computation. "Pure" computation does not take time, and has nothing to do with time. It is hard to overemphasize how deeply rooted this is in our culture. So-called "real-time" operating systems often reduce the characterization of a component (a process) to a single number, its priority. Even most "temporal" logics talk about "eventually" and "always," where time is not a quantifier, but rather a qualifier [2]. Attempts to imbue object-oriented design with real-time are far from satisfactory [3].

Much of the problem is that computation does take time. Computer architecture has been tending toward making things harder for the designers of embedded systems. A large part of the (architectural) performance gain in modern processors comes from statistical speedups such as elaborate caching schemes, speculative instruction execution, dynamic dispatch, and branch prediction. These techniques compromise the reliability of embedded systems. In fact, most embedded processors such as programmable DSPs and microcontrollers do not use these techniques. I believe that these techniques have such a big impact on average case performance that they are indispensable. However, software practitioners will have to find abstractions that regain control of time, or the embedded system designers will continue to refuse to use these processors.

The issue is not just that execution takes time. Even with infinitely fast computers, embedded software would still have to deal with time because the physical processes, with which it interacts, evolve over time.
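To make the remark about "eventually" and "always" concrete, here is a standard (not chapter-specific) temporal-logic illustration: the untimed formula guarantees only that a response eventually arrives, while a metric-temporal-logic variant is needed to quantify the deadline an embedded controller actually cares about.

    \Box\,(\mathit{request} \rightarrow \Diamond\,\mathit{response})
    \qquad\text{vs.}\qquad
    \Box\,(\mathit{request} \rightarrow \Diamond_{\leq 10\,\mathrm{ms}}\,\mathit{response})

The left formula is satisfied even if every response is arbitrarily late; only the bounded operator on the right expresses the 10 ms requirement, which is exactly the kind of quantified time that most temporal logics leave out.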
2.2 Concurrency
Embedded systems rarely interact with only a single physical process. They must simultaneously react to stimulus from a network and from a variety of sensors, and at the same time, retain timely control over actuators. This implies that embedded software is concurrent.

In general-purpose software practice, management of concurrency is primitive. Threads or processes, semaphores, and monitors [4] are the classic tools for managing concurrency, but I view them as comparable to assembly language in abstraction. They are very difficult to use reliably, except by operating system experts. Only trivial designs are completely comprehensible (to most engineers). Excessively conservative rules of thumb dominate (such as always grab locks in the same order [5]). Concurrency theory has much to offer that has not made its
way into widespread practice, but it probably needs adaptation for the embedded system context. For instance, many theories reduce concurrency to "interleavings," which trivialize time by asserting that all computations are equivalent to sequences of discrete timeless operations.

Embedded systems engage the physical world, where multiple things happen at once. Reconciling the sequentiality of software and the concurrency of the real world is a key challenge in the design of embedded systems. Classical approaches to concurrency in software (threads, processes, semaphore synchronization, monitors for mutual exclusion, rendezvous, and remote procedure calls) provide a good foundation, but are insufficient by themselves. Complex compositions are simply too hard to understand.

An alternative view of concurrency that seems much better suited to embedded systems is implemented in synchronous/reactive languages [6] such as Esterel [7], which are used in safety-critical real-time applications. In Esterel, concurrency is compiled away. Although this approach leads to highly reliable programs, it is too static for some networked embedded systems. It requires that mutations be handled more as incremental compilation than as process scheduling, and incremental compilation for these languages proves to be challenging. We need an approach somewhere in between that of Esterel and that of today's real-time operating systems, with the safety and predictability of Esterel and the adaptability of a real-time operating system.
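The "always grab locks in the same order" rule of thumb mentioned above can be made concrete with a small sketch (the Account example is invented for illustration): because every thread acquires the two locks in a fixed global order, the circular wait needed for deadlock cannot arise.

    /**
     * Lock-ordering discipline: transfers lock the two accounts by ascending
     * id, regardless of transfer direction, so no cycle of waiting threads
     * can form.
     */
    class Account {
        private final int id;          // the global ordering key
        private long balance;

        Account(int id, long balance) { this.id = id; this.balance = balance; }

        static void transfer(Account from, Account to, long amount) {
            Account first  = from.id < to.id ? from : to;
            Account second = from.id < to.id ? to : from;
            synchronized (first) {
                synchronized (second) {
                    from.balance -= amount;
                    to.balance   += amount;
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Account a = new Account(1, 100), b = new Account(2, 100);
            Thread t1 = new Thread(() -> transfer(a, b, 10));
            Thread t2 = new Thread(() -> transfer(b, a, 20));  // opposite direction, same lock order
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println(a.balance + " " + b.balance);
        }
    }

Even this tiny example hints at why such reasoning does not scale: the discipline lives in a programmer convention, not in anything the compiler or type system can check.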
2.3 Liveness
In embedded systems, liveness is a critical issue. Programs must not terminate or block waiting for events that will never occur. In the Turing view of computation, all nonterminating programs fall into an equivalence class that is implicitly deemed to be a class of defective programs. In embedded computing, however, terminating programs are defective. The term "deadlock" pejoratively describes premature termination of such systems. It is to be avoided at all costs.

In the Turing paradigm, given a sufficiently rich abstraction for expressing procedures, it is undecidable whether those procedures halt. This undecidability has been inconvenient because we cannot identify programs that fail to halt. Now it should be viewed as inconvenient because we cannot identify programs that fail to keep running. Moreover, correctness cannot be viewed as getting the right final answer. It must take into account the timeliness of a continuing stream of partial answers, as well as other "nonfunctional" properties.

A key part of the prevailing computation paradigm is that software is defined by the function it computes. The premise is that the function models everything interesting about the software. Even for the portions of embedded software that terminate (and hence have an
associated "computable function"), this model is a poor match. A key feature of embedded software is its interaction with physical processes, via sensors and actuators. Nonfunctional properties include timing, power consumption, fault recovery, security, and robustness.
2.4 Interfaces
Software engineering has experienced major improvements over the past decade or so through the widespread use of object-oriented design. Object-oriented design is a component technology, in the sense that a large complicated design is composed of pieces that expose interfaces that abstract their own complexity. The use of interfaces in software is not new. It is arguable that the most widely applied component technology based on interfaces is procedures. Procedures are finite computations that take predefined arguments and produce final results. Procedure libraries are marketable component repositories, and have provided an effective abstraction for complex functionality. Object-oriented design aggregates procedures with the data that they operate on (and renames the procedures "methods"). Procedures, however, are a poor match for many embedded system problems. Consider, for example, a speech coder for a cellular telephone. It is artificial to define the speech coder in terms of finite computations. It can be done of course. However, a speech coder is more like a process than a procedure. It is a nonterminating computation that transforms an unbounded stream of input data into an unbounded stream of output data. Indeed, a commercial speech coder component for cellular telephony is likely to be defined as a process that expects to execute on a dedicated signal processor. There is no widely accepted mechanism for packaging the speech coder in any way that it can safely share computing resources with other computations. Processes, and their cousin, threads, are widely used for concurrent software design. Processes can be viewed as a component technology, where a multitasking operating system or multithreaded execution engine provides the framework that coordinates the components. Process interaction mechanisms, such as monitors, semaphores, and remote procedure calls, are supported by the framework. In this context, a process can be viewed as a component that exposes at its interface an ordered sequence of external interactions. However, as a component technology, processes and threads are extremely weak. A composition of two processes is not a process (it no longer exposes at its interface an ordered sequence of external interactions). Worse, a composition of two processes is not a component of any sort that we can easily characterize. It is for this reason that concurrent programs built from processes or threads are so hard to get right. It is very difficult to talk about the properties of the aggregate
because we have no ontology for the aggregate. We don't know what it is. There is no (understandable) interface definition. Object-oriented interface definitions work well because of the type systems that support them. Type systems are one of the great practical triumphs of contemporary software. They do more than any other formal method to ensure correctness of (practical) software. Object-oriented languages, with their user-defined abstract data types, and their relationships between these types (inheritance, polymorphism) have had a big impact in both reusability of software (witness the Java class libraries) and the quality of software. Combined with design patterns [8] and object modeling [9], type systems give us a vocabulary for talking about larger structure in software than lines of code and procedures. However, object-oriented programming talks only about static structure. It is about the syntax of procedural programs, and says nothing about their concurrency or dynamics. For example, it is not part of the type signature of an object that the initialize() method must be called before the fire() method. Temporal properties of an object (method x() must be invoked every 10 ms) are also not part of the type signature. For embedded software to benefit from a component technology, that component technology will have to include dynamic properties in interface definitions.
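To make the point concrete, the following sketch (in Java, with hypothetical names) shows how a conventional interface records only method signatures: the requirements that initialize() precede fire(), or that fire() be invoked every 10 ms, are invisible to the type system and surface, at best, as run-time checks.

// Hypothetical component interface: the type system records only signatures,
// not the required calling order or timing.
interface Component {
    void initialize();   // must be called before fire(), but the type says nothing about ordering
    void fire();         // must be invoked, say, every 10 ms; also not expressible in the signature
}

class SensorFilter implements Component {
    private double state;
    private boolean initialized = false;

    public void initialize() {
        state = 0.0;
        initialized = true;
    }

    public void fire() {
        // The ordering constraint surfaces only as a run-time check.
        if (!initialized) {
            throw new IllegalStateException("fire() called before initialize()");
        }
        state = 0.9 * state + 0.1 * Math.random();   // stand-in for the real computation
    }
}

class InterfaceDemo {
    public static void main(String[] args) {
        Component c = new SensorFilter();
        c.initialize();   // nothing but convention enforces this ordering
        c.fire();
    }
}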
2.5 Heterogeneity
Heterogeneity is an intrinsic part of computation in embedded systems. They mix computational styles and implementation technologies. First, such systems are often a mixture of hardware and software designs, so that the embedded software interacts with hardware that is specifically designed to interact with it. Some of this hardware has continuous-time dynamics, which is a particularly poor match to prevailing computational abstractions. Embedded systems also mix heterogeneous event handling styles. They interact with events occurring irregularly in time (alarms, user commands, sensor triggers, etc.) and regularly in time (sampled sensor data and actuator control signals). These events have widely different tolerances for timeliness of reaction. Today, they are intermingled in real-time software in ad hoc ways; for example, they might be all abstracted as periodic events, and rate-monotonic principles [10] might be used to assign priorities. Perhaps because of the scientific training of most engineers and computer scientists, the tendency is to seek a grand-unified theory, the common model that subsumes everything as a special case, and that can, in principle, explain it all. We find it anathema to combine multiple programming languages, despite the fact that this occurs in practice all the time. Proponents of any one language are sure, absolutely sure, that their language is fully general. There is no need for
any other, and if only the rest of the world would understand its charms, they would switch to using it. This view will never work for embedded systems, since any given language inevitably fits some problems better than others.
2.6 Reactivity
Reactive systems are those that react continuously to their environment at the speed of the environment. Harel and Pnueli [11] and Berry [12] contrast them with interactive systems, which react with the environment at their own speed, and transformational systems, which simply take a body of input data and transform it into a body of output data. Reactive systems have real-time constraints, and are frequently safety-critical, to the point that failures could result in loss of human life. Unlike transformational systems, reactive systems typically do not terminate (unless they fail). Robust distributed networked reactive systems must be capable of adapting to changing conditions. Service demands, computing resources, and sensors may appear and disappear. Quality of service demands may change as conditions change. The system is therefore continuously being redesigned while it operates, and all the while it must not fail. A number of techniques have emerged that provide more robust support for reactive system design than real-time operating systems do. The synchronous languages, such as Esterel [7], Lustre [13], Signal [14], and Argos [15], are reactive and have been used for applications where validation is important, such as safety-critical control systems in aircraft and nuclear power plants. Lustre, for example, is used by Schneider Electric and Aerospatiale in France. Use of these languages is rapidly spreading in the automotive industry, and support for them is beginning to appear in commercial EDA (electronic design automation) software. Reactive systems must typically react simultaneously to multiple sources of stimulus. Thus, they are concurrent. The synchronous languages manage concurrency in a very different way than that found in real-time operating systems. Their mechanism makes much heavier use of static (compile-time) analysis of concurrency to guarantee behavior. However, compile-time analysis of concurrency has a serious drawback: it compromises modularity and precludes adaptive software architectures.
3. Limitations of Prevailing Software Engineering Methods
Construction of complex embedded software would benefit from component technology. Ideally, these components are reusable, and embody valuable expertise in one or more aspects of the problem domain. The composition must be meaningful, and ideally, a composition of components yields a new component that can be used to form other compositions. To work, these components need to be abstractions of the complex, domain-specific software that they encapsulate. They must hide the details, and expose only the essential external interfaces, with well-defined semantics.
3.1 Procedures and Object Orientation
A primary abstraction mechanism of this sort in software is the procedure (or in object-oriented culture, a method). Procedures are terminating computations. They take arguments, perform a finite computation, and return results. The real world, however, does not start, execute, complete, and return. Object orientation couples procedural abstraction with data to get data abstraction. Objects, however, are passive, requiring external invocation of their methods. So-called "active objects" are more like an afterthought, requiring still a model of computation to have any useful semantics. The real world is active, more like processes than objects, but with a clear and clean semantics that is firmly rooted in the physical world. So while object-oriented design has proven extremely effective in building large software systems, it has little to offer to address the specific problems of the embedded system designer. A sophisticated component technology for embedded software will talk more about processes than procedures, but we must find a way to make these processes compositional, and to control their real-time behavior in predictable and understandable ways. It will talk about concurrency and the models of computation used to regulate interaction between components. And it will talk about time.
3.2 Hardware Design
Hardware design, of course, is more constrained than software by the physical world. It is instructive to examine the abstractions that have worked for hardware, such as synchronous design. The synchronous abstraction is widely used in hardware to build large, complex, and modular designs, and has recently been applied to software [6], particularly for designing embedded software. Hardware models are conventionally constructed using hardware description languages such as Verilog and VHDL; these languages realize a discrete-event model of computation that makes time a first-class concept, information shared by all components. Synchronous design is done through a stylized use of these languages. Discrete-event models are often used for modeling complex systems,
particularly in the context of networking, but have not yet (to my knowledge) been deployed into embedded system design. Conceptually, the distinction between hardware and software, from the perspective of computation, has only to do with the degree of concurrency and the role of time. An application with a large amount of concurrency and a heavy temporal content might as well be thought of using hardware abstractions, regardless of how it is implemented. An application that is sequential and has no temporal behavior might as well be thought of using software abstractions, regardless of how it is implemented. The key problem becomes one of identifying the appropriate abstractions for representing the design.
3.3 Real-Time Operating Systems
Most embedded systems, as well as many emerging applications of desktop computers, involve real-time computations. Some of these have hard deadlines, typically involving streaming data and signal processing. Examples include communication subsystems, sensor and actuator interfaces, audio and speech processing subsystems, and video subsystems. Many of these require not just real-time throughput, but also low latency. In general-purpose computers, these tasks have been historically delegated to specialized hardware, such as SoundBlaster cards, video cards, and modems. In embedded systems, these tasks typically compete for resources. As embedded systems become networked, the situation gets much more complicated, because the combination of tasks competing for resources is not known at design time. Many such embedded systems incorporate a real-time operating system, which offers specialized scheduling services tuned to real-time needs, in addition to standard operating system services such as I/O. The schedules might be based on priorities, using for example the principles of rate-monotonic scheduling [10,16], or on deadlines. There remains much work to be done to improve the match between the assumptions of the scheduling principle (such as periodicity, in the case of rate-monotonic scheduling) and the realities of embedded systems. Because the match is not always good today, many real-time embedded systems contain hand-built, specialized microkernels for task scheduling. Such microkernels, however, are rarely sufficiently flexible to accommodate networked applications, and as the complexity of embedded applications grows, they will be increasingly difficult to design. The issues are not simple. Unfortunately, current practice often involves fine tuning priorities until a particular implementation seems to work. The result is fragile systems that fail when anything changes. A key problem in scheduling is that most techniques are not compositional. That is, even if assurances can be provided for an individual component, there are no systematic mechanisms for providing assurances to the aggregate of two
components, except in trivial cases. A chronic problem with priority-based scheduling, known as priority inversion, is one manifestation of this problem. Priority inversion occurs when processes interact, for example, by using a monitor to obtain exclusive access to a shared resource. Suppose that a low-priority process has access to the resource, and is preempted by a medium-priority process. Then a high-priority process preempts the medium-priority process and attempts to gain access to the resource. It is blocked by the low-priority process, but the low-priority process is blocked by the presence of an executable process with higher priority, the medium-priority process. By this mechanism, the high-priority process cannot execute until the medium-priority process completes and allows the low-priority process to relinquish the resource. Although there are ways to prevent priority inversion (priority inheritance and priority ceiling protocols, for example), the problem is symptomatic of a deeper failure. In a priority-based scheduling scheme, processes interact both through the scheduler and through the mutual exclusion mechanism (monitors) supported by the framework. These two interaction mechanisms together, however, have no coherent compositional semantics. It seems like a fruitful research goal to seek a better mechanism.
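The scenario is easy to set up in code, though not to control. The Java sketch below only makes the three roles and the shared lock explicit; whether inversion is actually observed depends on the JVM, the operating system scheduler, and the number of processors, since thread priorities are merely hints on most platforms.

// Minimal sketch of the priority-inversion scenario described above.
public class PriorityInversionSketch {
    private static final Object resource = new Object();

    public static void main(String[] args) {
        Thread low = new Thread(() -> {
            synchronized (resource) {          // low-priority thread holds the resource
                busyWork(500);
            }
        });
        Thread medium = new Thread(() -> busyWork(1000));  // never touches the resource
        Thread high = new Thread(() -> {
            synchronized (resource) {          // blocks until 'low' releases the lock
                System.out.println("high-priority work done");
            }
        });

        low.setPriority(Thread.MIN_PRIORITY);
        medium.setPriority(Thread.NORM_PRIORITY);
        high.setPriority(Thread.MAX_PRIORITY);

        low.start();
        pause(50);        // let 'low' acquire the lock first
        medium.start();   // may preempt 'low' on a single processor
        high.start();     // now indirectly waits on 'medium' as well
    }

    private static void busyWork(long ms) {
        long end = System.currentTimeMillis() + ms;
        while (System.currentTimeMillis() < end) { /* spin to consume the processor */ }
    }

    private static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}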
3.4 Real-Time Object-Oriented Models
Real-time practice has recently been extended to distributed component software in the form of real-time CORBA and related models [17] and real-time object-oriented modeling (ROOM) [18]. CORBA is fundamentally a distributed object-oriented approach based on remote procedure calls. Built upon this foundation of remote procedure calls are various services, including an event service that provides a publish-and-subscribe semantics. Real-time CORBA extends this further by associating priorities with event handling, and leveraging real-time scheduling for processing events in a timely manner. Real-time CORBA, however, is still based on prevailing software abstractions. Thus, for effective real-time performance, a programmer must specify various numbers, such as worst-case and typical execution times for procedures, cached and not. These numbers are hard to know precisely. Real-time scheduling is then driven by additional parameters such as periodicity, and then tweaked with semantically weak parameters called "importance" and "criticality." These parameters, taken together, amount to guesses, as their actual effect on system behavior is hard to predict except by experimentation.
4. Actor-Oriented Design
Object-oriented design emphasizes inheritance and procedural interfaces. We need an approach that, like object-oriented design, constructs complex applications by assembling components, but emphasizes concurrency and communication abstractions, and admits time as a first-class concept. I suggest the term actor-oriented design for a refactored software architecture, where instead of objects, the components are parameterized actors with ports. Ports and parameters define the interface of an actor. A port represents an interaction with other actors, but unlike a method, does not have call-return semantics. Its precise semantics depends on the model of computation, but conceptually it represents signaling between components.

There are many examples of actor-oriented frameworks, including Simulink (from The MathWorks), LabVIEW (from National Instruments), Easy 5x (from Boeing), SPW (the Signal Processing Worksystem, from Cadence), and Cocentric System Studio (from Synopsys). The approach has not been entirely ignored by the software engineering community, as evidenced by ROOM [18] and some architecture description languages (ADLs, such as Wright [19]). Hardware design languages, such as VHDL, Verilog, and SystemC, are all actor-oriented. In the academic community, active objects and actors [20,21], timed I/O automata [22], Polis and Metropolis [23], Giotto [24], and Ptolemy and Ptolemy II [25] all emphasize actor orientation.

Agha uses the term "actors," which he defines to extend the concept of objects to concurrent computation [26a]. Agha's actors encapsulate a thread of control and have interfaces for interacting with other actors. The protocols used for this interface are called interaction patterns, and are part of the model of computation. My use of the term "actors" is broader, in that I do not require the actors to encapsulate a thread of control, but I share with Agha the notion of interaction patterns, which I call the "model of computation." Agha argues that no model of concurrency can or should allow all communication abstractions to be directly expressed. He describes message passing as akin to "gotos" in their lack of structure. Instead, actors should be composed using an interaction policy. These more specialized interaction policies will form models of computation.
4.1 Abstract Syntax
It is useful to separate syntactic issues from semantic issues. An abstract syntax defines how a design can be decomposed into interconnected components, without being concerned with how a design is represented on paper or in a computer file (that is the concern of the concrete syntax). An abstract syntax is also not concerned with the meaning of the interconnections of components, nor even what a component is. A design is a set of components and relationships among them, where the relationships conform to this abstract syntax. Here, we describe the abstract syntax using informal diagrams that illustrate these sets and relations
by giving use cases, although formalizing the abstract syntax is necessary for precision. Consider the diagram in Fig. 1. This shows three components (actors), each with one port, and an interconnection between these ports mediated by a relation. This illustrates a basic abstract syntax. The abstract syntax says nothing about the meaning of the interconnection, but merely that it exists. To be useful, the abstract syntax is typically augmented with hierarchy, where an actor is itself an aggregate of actors. It can be further elaborated with such features as ports supporting multiple links and relations representing multiple connections. An elaborate abstract syntax of this type is described in [25].
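As a concrete illustration, the abstract syntax can be captured in a few small data structures. The Java sketch below uses hypothetical class names (it is not the Ptolemy II API) to show actors with ports and parameters, and relations that mediate links between ports, without assigning any meaning to the connections.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical data structures for the abstract syntax of Fig. 1.
class Actor {
    final String name;
    final Map<String, Port> ports = new HashMap<>();
    final Map<String, Object> parameters = new HashMap<>();
    Actor(String name) { this.name = name; }
    Port addPort(String portName) {
        Port p = new Port(this, portName);
        ports.put(portName, p);
        return p;
    }
}

class Port {
    final Actor container;
    final String name;
    final List<Relation> links = new ArrayList<>();   // a port may link to several relations
    Port(Actor container, String name) { this.container = container; this.name = name; }
}

class Relation {
    final List<Port> linkedPorts = new ArrayList<>();
    void link(Port p) {                               // the link records only that a connection exists
        linkedPorts.add(p);
        p.links.add(this);
    }
}

class AbstractSyntaxDemo {
    public static void main(String[] args) {
        Actor producer = new Actor("producer");
        Actor consumer = new Actor("consumer");
        Relation r = new Relation();
        r.link(producer.addPort("output"));
        r.link(consumer.addPort("input"));
        System.out.println("ports linked by the relation: " + r.linkedPorts.size());   // prints 2
    }
}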
4.2 Concrete Syntaxes
The abstract syntax may be associated with any number of concrete syntaxes. For instance, an XML schema might be used to provide a textual representation of a structure [26b]. A visual editor may provide a diagrammatic syntax, like that shown in Fig. 2. Actor-oriented design does not require visual syntaxes. However, visual depictions of systems have always held a strong human appeal, making them extremely effective in conveying information about a design. Many of the methods described in this chapter can use such depictions to completely and formally specify models. Visual syntaxes can be every bit as precise and complete as textual syntaxes, particularly when they are judiciously combined with textual syntaxes. Visual representations of models have a mixed history. In circuit design, schematic diagrams used to be routinely used to capture all of the essential information needed to implement some systems. Today, schematics are usually replaced by text in hardware description languages such as VHDL or Verilog. In other contexts, visual representations have largely failed, for example, flowcharts for
FIG. 1. Abstract syntax of actor-oriented designs.
FIG. 2. An example of a visual concrete syntax: the visual editor for Ptolemy II [25], called Vergil, designed by Steve Neuendorffer. The model shown composes a string and a sequence number into record tokens, passes them through a channel with a random (Rayleigh-distributed) delay that may reorder them, and then disassembles and resequences them, illustrating how types propagate through record composition and decomposition.
capturing the behavior of software. Recently, a number of innovative visual formalisms, including visual dataflow, hierarchical concurrent finite state machines, and object models, have been garnering support. The UML visual language for object modeling, for example, has been receiving a great deal of practical use [3,27].
4.3 Semantics
A semantics gives meaning to components and their interconnection. It states, for example, that a component is a process, and a connection represents communication between processes. Alternatively, a component may be a state, and a connection may represent a transition between states. In the former case, the semantics may restrict how the communication may occur. These semantic models can be viewed as architectural patterns [28], although for the purposes of this chapter, I will call them models of computation. One of my objectives here is to codify a few of the known models of computation that are useful for embedded software design. Consider a family of models of computation where components are producers or consumers of data (or both). In this case, the ports acquire the property of being inputs, outputs, or both. Consider for example the diagram in Fig. 3.
FIG. 3. Producer-consumer communication mechanism.
This diagram has two actors, one producer and one consumer. The diagram suggests a port that is an output by showing an outgoing arrow, and an input by showing an ingoing arrow. It also shows a simplified version of the Ptolemy II data transport mechanism [25]. The producer sends a token t (which encapsulates user data) via its port by calling a send() method on that port. This results in a call to the put() method of the receiver in the destination port. The destination actor retrieves the token by calling get() on the port. This mechanism, however, is polymorphic, in the sense that it does not specify what it means to call put() or get(). This depends on the model of computation.

A model of computation may be very broad or very specific. The more constraints there are, the more specific it is. Ideally, this specificity comes with benefits. For example, Unix pipes do not support feedback structures, and therefore cannot deadlock. Common practice in concurrent programming is that the components are threads that share memory and exchange objects using semaphores and monitors. This is a very broad model of computation with few benefits. In particular, it is hard to talk about the properties of an aggregate of components because an aggregate of components is not a component in the framework. Moreover, it is difficult to analyze a design in such a model of computation for deadlock or temporal behavior.

A model of computation is often deeply ingrained in the human culture of the designers that use it. It fades out of the domain of discourse. It can be argued that the Turing sequentiality of computation is so deeply ingrained in contemporary computer science culture that we no longer realize just how thoroughly we have banished time from computation. In a more domain-specific context, users of modeling languages such as Simulink rarely question the suitability of the semantics to their problem at hand. To such users, it does not "have semantics," it just "is." The key challenge in embedded software research is to invent or identify models of computation with properties that match the application domain well. One of the requirements is that time be central to the model.
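The polymorphism of the transport mechanism can be sketched in a few lines of Java. The names below are illustrative, not the actual Ptolemy II classes; the point is that send() delegates to a receiver supplied by the model of computation, so the same actor code acquires different communication semantics depending on which receiver the domain provides.

import java.util.ArrayDeque;
import java.util.Deque;

// The receiver is chosen by the domain; the actor only calls send() and get().
interface Receiver {
    void put(Object token);
    Object get();
}

// One possible semantics: FIFO buffering, as in process networks or dataflow.
class FifoReceiver implements Receiver {
    private final Deque<Object> queue = new ArrayDeque<>();
    public void put(Object token) { queue.addLast(token); }
    public Object get() { return queue.removeFirst(); }
}

// Another: "last value wins," closer to a sampled, continuous-time-style signal.
class MailboxReceiver implements Receiver {
    private Object current;
    public void put(Object token) { current = token; }
    public Object get() { return current; }
}

class OutputPort {
    private final Receiver destination;   // supplied by the domain, not by the actor
    OutputPort(Receiver destination) { this.destination = destination; }
    void send(Object token) { destination.put(token); }
}

class TransportDemo {
    public static void main(String[] args) {
        Receiver receiver = new FifoReceiver();     // swap in MailboxReceiver to change the semantics
        OutputPort port = new OutputPort(receiver);
        port.send("t1");
        port.send("t2");
        System.out.println(receiver.get());         // prints t1 under FIFO semantics
    }
}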
4.4 Models of Computation
A model of computation can be thought of as the "laws of physics" that govern component interactions. It is the programmer's model, or the conceptual framework within which larger designs are constructed by composing components. Design of embedded software will require models of computation that support concurrency. In practice, concurrency seriously complicates system design. No universal model of computation has yet emerged for concurrent computation (although some proponents of one approach or another will dispute this). By contrast, for sequential computation, Von Neumann provided a wildly successful universal abstraction. In this abstraction, a program consists of a sequence of transformations of the system state. In distributed systems, it is difficult to maintain a global notion of "system state," an essential part of the Von Neumann model, since many small state transformations are occurring simultaneously, in arbitrary order.

In networked embedded systems, communication bandwidth and latencies will vary over several orders of magnitude, even within the same system design. A model of computation that is well suited to small latencies (e.g., the synchronous hypothesis used in digital circuit design, where computation and communication take "zero" time) is usually poorly suited to large latencies, and vice versa. Thus, practical designs will almost certainly have to combine techniques.

It is well understood that effective design of concurrent systems requires one or more levels of abstraction above the hardware support. A hardware system with a shared memory model and transparent cache consistency, for example, still requires at least one more level of abstraction in order to achieve determinate distributed computation. A hardware system based on high-speed packet-switched networks could introduce a shared-memory abstraction above this hardware support, or it could be used directly as the basis for a higher level of abstraction. Abstractions that can be used include the event-based model of Java Beans, semaphores based on Dijkstra's P/V systems [29], guarded communication [30], rendezvous, synchronous message passing, active messages [31], asynchronous message passing, streams (as in Kahn process networks [32]), dataflow (commonly used in signal and image processing), synchronous/reactive systems [6], Linda [33], and many others. These abstractions partially or completely define a model of computation. Applications are built on a model of computation, whether the designer is aware of this or not. Each possibility has strengths and weaknesses. Some guarantee determinacy, some can execute in bounded memory, and some are provably free from deadlock. Different styles of concurrency are often dictated by the application, and the choice of model of computation can subtly affect the choice of algorithms. While dataflow is a good match for signal processing, for example, it is a poor match for transaction-based systems, control-intensive sequential decision making, and resource management.

It is fairly common to support models of computation with language extensions or entirely new languages.
Occam, for example, supports synchronous message passing based on guarded communication [30]. Esterel [7], Lustre [13], Signal [14], and Argos [15] support the synchronous/reactive model. These languages, however, have serious drawbacks. Acceptance is slow, platforms are limited, support software is limited, and legacy code must be translated or entirely rewritten.

An alternative approach is to explicitly use models of computation for coordination of modular programs written in standard, more widely used languages. The system-level specification language SystemC for hardware systems, for example, uses this approach (see http://systemc.org). In other words, one can decouple the choice of programming language from the choice of model of computation. This also enables mixing such standard languages in order to maximally leverage their strengths. Thus, for example, an embedded application could be described as an interconnection of modules, where modules are written in some combination of C, Java, and VHDL. Use of these languages permits exploiting their strengths. For example, VHDL provides FPGA targeting for reconfigurable hardware implementations. Java, in theory, provides portability, migratability, and a certain measure of security. C provides efficient execution.

The interaction between modules could follow any of several principles, e.g., those of Kahn process networks [32]. This abstraction provides a robust interaction layer with loosely synchronized communication and support for mutable systems (in which subsystems come and go). It is not directly built into any of the underlying languages, but rather interacts with them as an application interface. The programmer uses them as a design pattern [8] rather than as a language feature.

Larger applications may mix more than one model of computation. For example, the interaction of modules in a real-time, safety-critical subsystem might follow the synchronous/reactive model of computation, while the interaction of this subsystem with other subsystems follows a process networks model. Thus, domain-specific approaches can be combined.
5. Examples of Models of Computation
There are many models of computation, each dealing with concurrency and time in different ways. In this section, I outline some of the most useful models for embedded software. All of these will lend a semantics to the same abstract syntax shown in Fig. 1.
5.1 Dataflow
In dataflow models, actors are atomic (indivisible) computations that are triggered by the availability of input data. Connections between actors represent the flow of data from a producer actor to a consumer actor. Examples of commercial frameworks that use dataflow models are SPW (the signal processing worksystem, from Cadence) and LabVIEW (from National Instruments). Synchronous dataflow (SDF) is a particularly restricted special case with the extremely useful property that deadlock and boundedness are decidable [34-38]. Boolean dataflow (BDF) is a generalization that sometimes yields to deadlock and boundedness analysis, although fundamentally these questions remain undecidable [39]. Dynamic dataflow (DDF) uses only run-time analysis, and thus makes no attempt to statically answer questions about deadlock and boundedness [40-42]. A small but typical example of an embedded software application modeled using SDF is shown in Fig. 4. That example shows a sound synthesis algorithm that consists of four actors in a feedback loop. The algorithm synthesizes the sound of a plucked string instrument, such as a guitar, using the well-known Karplus-Strong algorithm.
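For readers unfamiliar with the algorithm, the following Java sketch shows the computation that the dataflow graph of Fig. 4 expresses, written sequentially rather than as SDF actors. The pitch, decay factor, and loop structure are illustrative assumptions; in the Ptolemy II model the samples would flow to an audio player actor.

// Rough sketch of the Karplus-Strong plucked-string loop of Fig. 4.
public class KarplusStrongSketch {
    public static void main(String[] args) {
        int sampleRate = 44100;
        double frequency = 440.0;                       // assumed pitch
        int delayLength = (int) (sampleRate / frequency);
        double[] delayLine = new double[delayLength];

        // Excite the string with noise (the "pluck").
        for (int i = 0; i < delayLength; i++) {
            delayLine[i] = Math.random() * 2.0 - 1.0;
        }

        // Feedback loop: delay -> lowpass (two-sample average) -> gain -> back into the delay.
        int index = 0;
        for (int n = 0; n < sampleRate; n++) {          // one second of samples
            int next = (index + 1) % delayLength;
            double sample = 0.5 * (delayLine[index] + delayLine[next]) * 0.996; // gain < 1 gives decay
            delayLine[index] = sample;
            index = next;
            // In the Ptolemy model the sample would be sent to the AudioPlayer actor here.
        }
        System.out.println("generated " + sampleRate + " samples");
    }
}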
5.2 Time Triggered
Some systems with timed events are driven by clocks, which are signals with events that are repeated indefinitely with a fixed period. A number of software frameworks and hardware architectures have evolved to support this highly regular style of computation.
FIG. 4. A synchronous dataflow model implemented in the SDF domain of Ptolemy II [25] (created by Stephen Neuendorffer). The model implements the Karplus-Strong algorithm for generating a plucked-string musical instrument sound, using a delay, a lowpass filter, an allpass filter, and a gain in a feedback loop, with the output sent to an audio player. It uses the audio library created by Brian Vogel.
The time-triggered architecture [43] is a hardware architecture supporting such models. The TTA takes advantage of this regularity by statically scheduling computations and communications among distributed components. In hardware design, cycle-driven simulators stimulate computations regularly according to the clock ticks. This strategy matches synchronous hardware design well, and yields highly efficient simulations for certain kinds of designs. In the Scenic system [44], for example, components are processes that run indefinitely, stall to wait for clock ticks, or stall to wait for some condition on the inputs (which are synchronous with clock ticks). Scenic also includes a clever mechanism for modeling preemption, an important feature of many embedded systems. Scenic has evolved into the SystemC specification language for system-level hardware design (see http://systemc.org).

The Giotto programming language [24] provides a time-triggered software abstraction which, unlike the TTA or cycle-driven simulation, is hardware independent. It is intended for embedded software systems where periodic events dominate. It combines with finite-state machines (see below) to yield modal models that can be quite expressive. An example of a helicopter controller in Giotto is described in [45].

Discrete-time models of computation are closely related. These are commonly used for digital signal processing, where there is an elaborate theory that handles the composition of subsystems. This model of computation can be generalized to support multiple sample rates. In either case, a global clock defines the discrete points at which signals have values (at the ticks).
5.3 Synchronous/Reactive
In the synchronous/reactive (SR) model of computation [6], connections between components represent data values that are aligned with global clock ticks, as with time-triggered approaches. However, unlike time-triggered and discretetime approaches, there is no assumption that all (or even most) signals have a value at each time tick. This model efficiently deals with concurrent models with irregular events. The components represent relations between input and output values at each tick, allowing for absences of value, and are usually partial functions with certain technical restrictions to ensure determinacy. Sophisticated compiler techniques yield extremely efficient execution that can reduce all concurrency to a sequential execution. Examples of languages that use the SR model of computation include Esterel [7], Signal [14], and Lustre [46]. An example of an application for which the synchronous reactive model is ideally suited is the management of a token-ring protocol for media access control, described in [9]. In this application, a token circulates in a round-robin fashion among users of a communication medium. When a user makes a request for
access, if the user has the token, access is granted immediately. If not, then access may still be granted if the current holder of the token does not require access. The SR realization of this protocol yields predictable, deterministic management of access. This application benefits from the SR semantics because it includes instantaneous dialog and convergence to a fixed point (which determines who gets access when there is contention). SR models are excellent for applications with concurrent and complex control logic. Because of the tight synchronization, safety-critical real-time applications are a good match. However, also because of the tight synchronization, some applications are overspecified in the SR model, which thus limits the implementation alternatives and makes distributed systems difficult to model. Moreover, in most realizations, modularity is compromised by the need to seek a global fixed point at each clock tick.
5.4 Discrete Events
In discrete-event (DE) models of computation, the connections represent sets of events placed on a time line. An event consists of a value and time stamp. This model of computation is popular for specifying hardware and for simulating telecommunications systems, and has been realized in a large number of simulation environments, simulation languages, and hardware description languages, including VHDL and Verilog. Like SR, there is a globally consistent notion of time, but unlike SR time has a metric, in that the time between events has significance. DE models are often used in the design of communication networks. Figure 2 above gives a very simple DE model that is typical of this usage. That example constructs packets and routes them through a channel model. In this case, the channel model has the feature that it may reorder the packets. A sequencer is used to reconstruct the original packet order. DE models are also excellent descriptions of concurrent hardware, although increasingly the globally consistent notion of time is problematic. In particular, it overspecifies (or overmodels) systems where maintaining such a globally consistent notion is difficult, including large VLSI chips with high clock rates, and networked distributed systems. A key weakness is that it is relatively expensive to implement in software, as evidenced by the relatively slow simulators.
5.5 Process Networks
A common way of handling concurrent software is where components are processes or threads that communicate by asynchronous, buffered message passing. The sender of the message need not wait for the receiver to be ready to receive
the message. There are several variants of this technique, but I focus on one that ensures determinate computation, namely Kahn process networks [32]. In a Kahn process network (PN) model of computation, the connections represent sequences of data values (tokens), and the components represent functions that map input sequences into output sequences. Certain technical restrictions on these functions are necessary to ensure determinacy, meaning that the sequences are fully specified. Dataflow models are a special case of process networks that construct processes as sequences of atomic actor firings [47]. PN models are excellent for signal processing [48]. They are loosely coupled, and hence relatively easy to parallelize or distribute. They can be implemented efficiently in both software and hardware, and hence leave implementation options open. A key weakness of PN models is that they are awkward for specifying complicated control logic. Control logic is specified by routing data values.
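The flavor of PN communication can be conveyed with ordinary Java threads and a blocking queue. This is only an illustration of the style under simplifying assumptions (a bounded buffer standing in for a conceptually unbounded channel, and no scheduling policy or termination detection); reads block until data arrive, which is what keeps the result determinate.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of Kahn-style process-network communication: actors are threads,
// channels are FIFOs, and reads block until a token is available.
public class ProcessNetworkSketch {
    public static void main(String[] args) {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(64);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    channel.put(i * i);          // asynchronous send; blocks only if the buffer is full
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    int token = channel.take();  // blocking read: the output sequence is independent of timing
                    System.out.println("received " + token);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
    }
}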
5.6 Rendezvous
In synchronous message passing, the components are processes, and processes communicate in atomic, instantaneous actions called rendezvous. If two processes are to communicate, and one reaches the point first at which it is ready to communicate, then it stalls until the other process is ready to communicate. "Atomic" means that the two processes are simultaneously involved in the exchange, and that the exchange is initiated and completed in a single uninterruptable step. Examples of rendezvous models include Hoare's communicating sequential processes (CSP) [30] and Milner's calculus of communicating systems (CCS) [49]. This model of computation has been realized in a number of concurrent programming languages, including Lotos and Occam.

Rendezvous models are particularly well matched to applications where resource sharing is a key element, such as client-server database models and multitasking or multiplexing of hardware resources. A key weakness of rendezvous-based models is that maintaining determinacy can be difficult. Proponents of the approach, of course, cite the ability to model nondeterminacy as a key strength.

Rendezvous models and PN both involve threads that communicate via message passing, synchronously in the former case and asynchronously in the latter. Neither model intrinsically includes a notion of time, which can make it difficult to interoperate with models that do include a notion of time. In fact, message events are partially ordered, rather than totally ordered as they would be were they placed on a time line. Both models of computation can be augmented with a notion of time to promote interoperability and to directly model temporal properties (see, for example, [50]). In the Pamela system [51], threads assume that time does not advance while they are active, but can advance when they stall on inputs, outputs, or explicitly indicate that time can advance. By this vehicle, additional constraints are imposed on the order of events, and determinate interoperability with timed models of computation becomes possible. This mechanism has the potential of supporting low-latency feedback and configurable hardware.
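A rendezvous can be approximated in Java with a SynchronousQueue, which has no internal capacity, so a put() and a take() complete only when both parties have arrived. The sketch below is illustrative only; real CSP-style languages additionally offer guarded choice among several possible rendezvous, which this example does not show.

import java.util.concurrent.SynchronousQueue;

// Sketch of rendezvous-style communication: sender and receiver must both be
// ready before the exchange completes.
public class RendezvousSketch {
    public static void main(String[] args) {
        SynchronousQueue<String> channel = new SynchronousQueue<>();

        Thread server = new Thread(() -> {
            try {
                String request = channel.take();   // blocks until a client is ready
                System.out.println("served: " + request);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread client = new Thread(() -> {
            try {
                channel.put("query");              // blocks until the server is ready
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        server.start();
        client.start();
    }
}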
5.7 Publish and Subscribe
In publish-and-subscribe models, connections between components are via named event streams. A component that is a consumer of such streams registers an interest in the stream. When a producer produces an event to such a stream, the consumer is notified that a new event is available. It then queries a server for the value of the event. Linda is a classic example of a fully elaborated publish-and-subscribe mechanism [52]. It has recently been reimplemented in JavaSpaces, from Sun Microsystems. An example of a distributed embedded software application using JavaSpaces is shown in Fig. 5.
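The coordination style can be sketched with a toy in-process broker, shown below in Java. It is not the JavaSpaces or Linda API (those store entries in a shared space for later lookup, and the consumer queries for the value after being notified); the sketch simplifies this by pushing the value directly, and only shows producers and consumers coupled solely through a named stream.

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal publish-and-subscribe broker: components never reference each other,
// only the names of the event streams.
class EventBroker {
    private final Map<String, List<Consumer<Object>>> subscribers = new HashMap<>();

    void subscribe(String stream, Consumer<Object> handler) {
        subscribers.computeIfAbsent(stream, k -> new ArrayList<>()).add(handler);
    }

    void publish(String stream, Object event) {
        for (Consumer<Object> handler : subscribers.getOrDefault(stream, Collections.emptyList())) {
            handler.accept(event);    // notify each registered consumer
        }
    }
}

class PubSubDemo {
    public static void main(String[] args) {
        EventBroker broker = new EventBroker();
        broker.subscribe("tilt", sample -> System.out.println("drive command for tilt " + sample));
        broker.publish("tilt", 0.37);   // a producer, e.g. the tilt sensor of Fig. 5
    }
}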
5.8 Continuous Time
Physical systems can often be modeled using coupled differential equations. These have a natural representation in the abstract syntax of Fig. 1, where the connections represent continuous-time signals (functions of the time continuum). The components represent relations between these signals. The job of an execution environment is to find a fixed point, i.e., a set of functions of time that satisfy all the relations. Differential equations are excellent for modeling the physical systems with which embedded software interacts. Joint modeling of these physical systems and the software that interacts with them is essential to developing confidence in a design of embedded software. Such joint modeling is supported by such actor-oriented modeling frameworks as Simulink, Saber, VHDL-AMS, and Ptolemy II. A Ptolemy II continuous-time model is shown in Fig. 6.
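To give a flavor of what a continuous-time domain computes, the Java sketch below integrates the Lorenz system of Fig. 6 with a naive forward-Euler solver. The parameter values and step size are illustrative assumptions; a real CT domain such as Ptolemy II's uses adaptive-step ODE solvers rather than this fixed-step loop.

// Fixed-step integration of the Lorenz equations (the model of Fig. 6).
public class LorenzSketch {
    public static void main(String[] args) {
        double sigma = 10.0, rho = 28.0, beta = 8.0 / 3.0;   // standard textbook parameters
        double x = 1.0, y = 1.0, z = 1.0;                    // initial state
        double dt = 0.001;                                   // fixed step size

        for (int step = 0; step < 10000; step++) {
            double dx = sigma * (y - x);          // the three coupled differential equations
            double dy = x * (rho - z) - y;
            double dz = x * y - beta * z;
            x += dt * dx;
            y += dt * dy;
            z += dt * dz;
        }
        System.out.printf("state after 10 units of model time: %.3f %.3f %.3f%n", x, y, z);
    }
}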
5.9 Finite State Machines
All of the models of computation considered so far are concurrent. It is often useful to combine these concurrent models hierarchically with finite-state machines (FSMs) to get modal models. FSMs are different from any of the models we have considered so far in that they are strictly sequential. A component in this model is called a state or mode, and exactly one state is active at a time. The connections between states represent transitions, or transfer of control between states. Execution is a strictly ordered sequence of state transitions. Transition
FIG. 5. A distributed embedded application using JavaSpaces combined with SDF to realize a publish-and-subscribe model of computation. The upper left model reads sensor data from a tilt sensor and publishes the data on the network. The lower model subscribes to the sensor data and uses it to drive the Lego robot at the upper right. This example was built by Jie Liu and Xiaojun Liu.
systems are a more general version, in that a given component may represent more than one system state (and there may be an infinite number of components). FSM models are excellent for describing control logic in embedded systems, particularly safety-critical systems. FSM models are amenable to in-depth formal analysis, using for example model checking, and thus can be used to avoid surprising behavior. Moreover, FSMs are easily mapped to either hardware or software implementations. FSM models have a number of key weaknesses. First, at a very fundamental level, they are not as expressive as the other models of computation described here. They are not sufficiently rich to describe all partial recursive functions. However, this weakness is acceptable in light of the formal analysis that becomes possible. Many questions about designs are decidable for FSMs and undecidable
FIG. 6. A nonlinear feedback system (a Lorenz attractor) modeled in the continuous-time (CT) domain in Ptolemy II. The CT director uses an ordinary differential equation solver to execute the model, which exhibits chaotic behavior. The model and the CT domain were created by Jie Liu.
for other models of computation. Another key weakness is that the number of states can get very large even in the face of only modest complexity. This makes the models unwieldy.

The latter problem can often be solved by using FSMs in combination with concurrent models of computation. This was first noted by Harel, who introduced the Statecharts formalism. Statecharts combine synchronous/reactive modeling with FSMs [53a]. Statecharts have been adopted by UML for modeling the dynamics of software [3,27]. FSMs have also been combined with differential equations, yielding the so-called hybrid systems model of computation [53b].

FSMs can be hierarchically combined with a huge variety of concurrent models of computation. We call the resulting formalism "*charts" (pronounced "starcharts"), where the star represents a wildcard [54]. Consider the model shown in Fig. 7. In that figure, component B is hierarchically refined by another model consisting of three components, c, d, and e. These latter three components are states of a state machine, and the connections between them are state transitions.
FIG. 7. Hierarchical composition of an FSM with concurrent models of computation.
States c and e are shown refined to concurrent models themselves. The interpretation is that while the FSM is in state c, component B is in fact defined by component H. While it is in state e, component B is defined by a composition of F and G. In the figure, square boxes depict components in a concurrent model of computation, while circles depict states in a state machine. Despite the different concrete syntax, the abstract syntax is the same: components with interconnections. If the concurrent model of computation is SR, then the combination has Statechart semantics. If it is continuous time, then the combination has hybrid systems semantics. If it is PN, then the combination is similar to the SDL language [55]. If it is DE, then the combination is similar to Polis [23]. A hybrid system example implemented in Ptolemy II is shown in Fig. 8.
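As a small illustration of the modal style, the Java sketch below encodes the two-mode controller suggested by the spring-mass example of Fig. 8 as a plain finite-state machine. The guard expression and mode names follow the figure; in a real *charts or hybrid-systems model each mode would be refined by a continuous-time submodel rather than left empty as here.

// Minimal sketch of a modal controller as an FSM; guards and names are illustrative.
public class ModalControllerSketch {
    enum Mode { SEPARATE, TOGETHER }

    private Mode mode = Mode.SEPARATE;

    // One reaction of the state machine: read inputs, possibly take a transition.
    void step(double force, double stickiness) {
        switch (mode) {
            case SEPARATE:
                if (Math.abs(force) > stickiness) {
                    mode = Mode.TOGETHER;   // in a hybrid model, the refinement would be (re)initialized here
                }
                break;
            case TOGETHER:
                if (Math.abs(force) <= stickiness) {
                    mode = Mode.SEPARATE;
                }
                break;
        }
    }

    public static void main(String[] args) {
        ModalControllerSketch fsm = new ModalControllerSketch();
        fsm.step(2.0, 1.0);
        System.out.println("mode after step: " + fsm.mode);   // TOGETHER
    }
}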
6. Choosing a Model of Computation
The rich variety of models of computation outlined above can be daunting to a designer faced with having to select among them. Most designers today do not face this choice because they get exposed to only one or two. This is changing, however, as the level of abstraction and domain-specificity of design practice both rise. We expect that sophisticated and highly visual user interfaces will be needed to enable designers to cope with this heterogeneity.
FIG. 8. Hybrid system model in Ptolemy II, showing a hierarchical composition of a finite state machine (FSM) model and two continuous-time (CT) models. This example models a physical spring-mass system with two modes of operation. In the Separate mode, it has two masses on springs oscillating independently. In the Together mode, the two masses are stuck together, and oscillate together with two springs. The model was created by Jie Liu and Xiaojun Liu.
An essential difference between concurrent models of computation is their modeling of time. Some are very explicit by taking time to be a real number that advances uniformly, and placing events on a time line or evolving continuous signals along the time line. Others are more abstract and take time to be discrete. Others are still more abstract and take time to be merely a constraint imposed by causality. This latter interpretation results in time that is partially ordered, and explains much of the expressiveness in process networks and rendezvous-based models of computation. Partially ordered time provides a mathematical framework for formally analyzing and comparing models of computation [56,57].

Many researchers have thought deeply about the role of time in computation. Benveniste and Le Guernic observe that in certain classes of systems, "the nature of time is by no means universal, but rather local to each subsystem, and consequently multiform" [14]. Lamport observes that a coordinated notion of time cannot be exactly maintained in distributed systems, and shows that a partial ordering is sufficient [58]. He gives a mechanism in which messages in an asynchronous system carry time stamps and processes manipulate these time stamps.
We can then talk about processes having information or knowledge at a consistent cut, rather than "simultaneously." Fidge gives a related mechanism in which processes that can fork and join increment a counter on each event [59]. A partial ordering relationship between these lists of times is determined by process creation, destruction, and communication. If the number of processes is fixed ahead of time, then Mattern gives a more efficient implementation by using "vector time" [60]. All of this work offers ideas for modeling time.

How can we reconcile this multiplicity of views? A grand unified approach to modeling would seek a concurrent model of computation that serves all purposes. This could be accomplished by creating a melange, a mixture of all of the above. For example, one might permit each connection between components to use a distinct protocol, where some are timed and some not, and some are synchronous and some not, as done for example in ROOM [18] and SystemC 2.0 (http://systemc.org). This offers rich expressiveness, but such a mixture may prove extremely complex and difficult to understand, and synthesis and validation tools would be difficult to design. In my opinion, such richly expressive formalisms are best used as foundations for more specialized models of computation. This, in fact, is the intent in SystemC 2.0 [61].

Another alternative would be to choose one concurrent model of computation, say the rendezvous model, and show that all the others are subsumed as special cases. This is relatively easy to do, in theory. Most of these models of computation are sufficiently expressive to be able to subsume most of the others. However, this fails to acknowledge the strengths and weaknesses of each model of computation. Process networks, for instance, are very good at describing the data dependencies in a signal processing system, but not as good at describing the associated control logic and resource management. Finite-state machines are good at modeling at least simple control logic, but inadequate for modeling data dependencies and numeric computation. Rendezvous-based models are good for resource management, but they overspecify data dependencies. Thus, to design interesting systems, designers need to use heterogeneous models.

Certain architecture description languages (ADLs), such as Wright [19] and Rapide [28], define a model of computation. The models are intended for describing the rich sorts of component interactions that commonly arise in software architecture. Indeed, such descriptions often yield good insights about design, but sometimes, the match is poor. Wright, for example, which is based on CSP, does not cleanly describe asynchronous message passing (it requires giving detailed descriptions of the mechanisms of message passing). I believe that what we really want are architecture design languages rather than architecture description languages. That is, their focus should not be on describing current practice, but rather on improving future practice. Wright, therefore, with its strong commitment to CSP, should not be concerned with whether it cleanly models asynchronous message passing. It should instead take the stand that asynchronous message passing is a bad idea for the designs it addresses.
7. Heterogeneous Models
Figure 7 shows a hierarchical heterogeneous combination of models of computation. A concurrent model at the top level has a component that is refined into a finite-state machine. The states in the state machine are further refined into a concurrent model of computation. Ideally, each concurrent model of computation can be designed in such a way that it composes transparently with FSMs, and, in fact, with other concurrent models of computation. In particular, when building a realization of a model of computation, it would be best if it did not need to be jointly designed with the realizations that it can compose with hierarchically. This is a challenging problem. It is not always obvious what the meaning should be of some particular hierarchical combination. The semantics of various combinations of FSMs with various concurrency models are described in [54]. In Ptolemy II [25], the composition is accomplished via a notion called domain polymorphism.

The term "domain polymorphism" requires some explanation. First, the term "domain" is used in the Ptolemy project to refer to an implementation of a model of computation. This implementation can be thought of as a "language," except that it does not (necessarily) have the traditional textual syntax of conventional programming languages. Instead, it abides by a common abstract syntax that underlies all Ptolemy models. The term "domain" is a fanciful one, coming from the speculative notion in astrophysics that there are regions of the universe where the laws of physics differ. Such regions are called "domains." The model of computation is analogous to the laws of physics.

In Ptolemy II, components (called actors) in a concurrent model of computation implement an interface consisting of a suite of action methods. These methods define the execution of the component. A component that can be executed under the direction of any of a number of models of computation is called a domain polymorphic component. The component is not defined to operate with a particular model of computation, but instead has a well-defined behavior in several, and can usefully be used in several. It is domain polymorphic, meaning specifically that it has a well-defined behavior in more than one domain, and that the behavior is not necessarily the same in different domains. For example, the AddSubtract actor (shown as a square with a + and -) appears in Fig. 8, where it adds or subtracts continuous-time signals, and in Fig. 5, where it adds or subtracts streams.
In Ptolemy II, an application (which is called a "model") is constructed by composing actors (most of which are domain polymorphic), connecting them, and assigning a domain. The domain governs the interaction between components and the flow of control. It provides the execution semantics to the assembly of components. The key to hierarchically composing multiple models of computation is that an aggregation of components under the control of a domain should itself define a domain polymorphic component. Thus, the aggregate can be used as a component within a different model of computation. In Ptolemy II, this is how finite-state machine models are hierarchically composed with other models to get hybrid systems, Statechart-like models, and SDL-like models. Domain polymorphic components in Ptolemy II simply need to implement a Java interface called Executable. This interface defines three phases of execution, an initialization phase, which is executed once, an iteration phase, which can be executed multiple times, and a termination phase, which is executed once. The iteration itself is divided into three phases also. The first phase, called prefire, can examine the status of the inputs and can abort the iteration or continue it. The prefire phase can also initiate some computation, if appropriate. The second phase, called fire, can also perform some computation, if appropriate, and can produce outputs. The third phase, called postfire, can commit any state changes for the component that might be appropriate. To get hierarchical mixtures of domains, a domain must itself implement the Executable interface to execute an aggregate of components. Thus, it must define an initialization, iteration, and termination phase, and within the iteration phase, it must define the same three phases of execution. The three-phase iteration has proven suitable for a huge variety of models of computation, including synchronous dataflow (SDF) [37], discrete events (DE) [62], discrete time (DT) [63], finite-state machines (FSM) [54], continuous-time (CT) [64], synchronous/reactive (SR), and Giotto (a time-triggered domain) [24]. All of these domains can be combined hierarchically. Some domains in Ptolemy II have fixed-point semantics, meaning that in each iteration, the domain may repeatedly fire the components until a fixed point is found. Two such domains are continuous time (CT) [64] and synchronous/ reactive (SR) [65,66]. The fact that a state update is committed only in the postfire phase of an iteration makes it easy to use domain-polymorphic components in such a domain. Ptolemy II also has domains for which this pattern does not work quite as well. In particular, in the process networks (PN) domain [67] and communicating sequential processes (CSP) domain, each component executes in its own thread. These domains have no difficulty executing domain polymorphic components. They simply wrap in a thread a (potentially) infinite sequence of iterations.
However, aggregates in such domains are harder to encapsulate as domain polymorphic components, because it is hard to define an iteration for the aggregate. Since each component in the aggregate has its own thread of execution, it can be tricky to define the boundary points between iterations. This is an open issue that the Ptolemy project continues to address, and to which there are several candidate solutions that are applicable for particular problems.
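To make the three-phase iteration pattern concrete, the following is a minimal sketch of an Executable-style interface and a domain-polymorphic adder component. It is an illustration only, not Ptolemy II's actual API: the interface shape, the method signatures, and the use of plain lists in place of ports and tokens are simplifying assumptions.

```java
import java.util.List;

// Hypothetical three-phase execution interface, loosely modeled on the
// initialization / iteration / termination pattern described in the text.
interface Executable {
    void initialize();   // executed once, before any iteration
    boolean prefire();   // may inspect inputs; returning false aborts this iteration
    void fire();         // may perform computation and produce outputs
    boolean postfire();  // commits state changes; returning false requests no further iterations
    void terminate();    // executed once, after the last iteration
}

// A domain-polymorphic component: it makes no assumption about which concurrency
// model (dataflow, discrete events, continuous time, ...) invokes its action methods.
class AddSubtract implements Executable {
    private final List<Double> plusInputs;
    private final List<Double> minusInputs;
    private double pendingOutput;   // computed in fire(), committed only in postfire()
    private double lastOutput;

    AddSubtract(List<Double> plusInputs, List<Double> minusInputs) {
        this.plusInputs = plusInputs;
        this.minusInputs = minusInputs;
    }

    public void initialize() { pendingOutput = 0.0; lastOutput = 0.0; }

    public boolean prefire() {
        // Continue the iteration only if there is at least one input token.
        return !plusInputs.isEmpty() || !minusInputs.isEmpty();
    }

    public void fire() {
        double sum = 0.0;
        for (double v : plusInputs)  sum += v;
        for (double v : minusInputs) sum -= v;
        pendingOutput = sum;         // no state is committed yet
    }

    public boolean postfire() {
        lastOutput = pendingOutput;  // commit the state change
        return true;
    }

    public void terminate() { }

    double lastOutput() { return lastOutput; }

    public static void main(String[] args) {
        AddSubtract adder = new AddSubtract(List.of(1.0, 2.0), List.of(0.5));
        adder.initialize();
        if (adder.prefire()) { adder.fire(); adder.postfire(); }
        System.out.println(adder.lastOutput());   // prints 2.5
        adder.terminate();
    }
}
```

In this sketch, a domain would drive any such component through initialize, repeated prefire/fire/postfire cycles, and terminate; an aggregate of components executed by a domain can expose the same interface to its enclosing domain, which is what enables the hierarchical mixing described above.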
8. Component Interfaces
The approach described in the previous section is fairly ad hoc. The Ptolemy project has constructed domains to implement various models of computation, most of which have entire research communities centered on them. It has then experimented with combinations of models of computation, and through trial and error, has identified a reasonable design for a domain polymorphic component interface definition. Can this ad hoc approach be made more systematic?

I believe that type system concepts can be extended to make this ad hoc approach more systematic. Type systems in modern programming languages, however, do not go far enough. Several researchers have proposed extending the type system to handle such issues as array bounds overruns, which are traditionally left to the run-time system [68]. However, many issues are still not dealt with. For example, the fact that prefire is executed before fire in a domain polymorphic component is not expressed in the type system.

At its root, a type system constrains what a component can say about its interface, and how compatibility is ensured when components are composed. Mathematically, type system methods depend on a partial order of types, typically defined by a subtyping relation (for user-defined types such as classes) or in more ad hoc ways (for primitive types such as double or int). They can be built from the robust mathematics of partial orders, leveraging, for example, fixed-point theorems to ensure convergence of type checking, type resolution, and type inference algorithms. With this very broad interpretation of type systems, all we need is that the properties of an interface be given as elements of a partial order, preferably a complete partial order (CPO) or a lattice [18].

I suggest first that dynamic properties of an interface, such as the conventions in domain polymorphic component design, can be described using nondeterministic automata, and that the pertinent partial ordering relation is the simulation relation between automata. Preliminary work in this direction is reported in [69], which uses a particular automaton model called interface automata [29]. The result is called a behavioral-type system. Behavioral-level types can be used without modifying the underlying languages, but rather by overlaying on standard languages design patterns that make
these types explicit. Domain polymorphic components are simply those whose behavioral-level types are polymorphic.

Note that there is considerable precedent for such augmentations of the type system. For example, Lucassen and Gifford introduce state into functions using the type system to declare whether functions are free of side effects [70]. Martin-Löf introduces dependent types, in which types are indexed by terms [71]. Xi uses dependent types to augment the type system to include array sizes, and uses type resolution to annotate programs that do not need dynamic array bounds checking [68]. The technique uses singleton types instead of general terms [72] to help avoid undecidability. While much of the fundamental work has been developed using functional languages (especially ML [73]), there is no reason that I can see that it cannot be applied to more widely accepted languages.
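As a small illustration of what a behavioral-level type might look like in practice, the sketch below encodes the prefire-before-fire-before-postfire convention as a finite automaton over action-method names and checks call sequences against it. This is only a toy rendering of the idea; it is far simpler than the interface automata of [29], and all class, state, and method names are invented for the example.

```java
import java.util.*;

// A minimal sketch of a "behavioral type" expressed as a finite automaton over
// action-method names, in the spirit of (but much simpler than) the interface
// automata mentioned in the text. Names and the API are illustrative only.
class BehavioralType {
    // transitions.get(state).get(event) -> next state
    private final Map<String, Map<String, String>> transitions = new HashMap<>();
    private final String initialState;

    BehavioralType(String initialState) { this.initialState = initialState; }

    BehavioralType allow(String from, String event, String to) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).put(event, to);
        return this;
    }

    // Checks whether a sequence of action-method calls is permitted by the type.
    boolean accepts(List<String> calls) {
        String state = initialState;
        for (String call : calls) {
            Map<String, String> out = transitions.getOrDefault(state, Map.of());
            if (!out.containsKey(call)) return false;   // ordering violation
            state = out.get(call);
        }
        return true;
    }

    public static void main(String[] args) {
        // Behavioral type for a domain-polymorphic component:
        // prefire must precede fire, which must precede postfire; iterations may repeat.
        BehavioralType iteration = new BehavioralType("idle")
            .allow("idle",     "initialize", "ready")
            .allow("ready",    "prefire",    "prefired")
            .allow("prefired", "fire",       "fired")
            .allow("fired",    "postfire",   "ready")
            .allow("ready",    "terminate",  "done");

        System.out.println(iteration.accepts(
            List.of("initialize", "prefire", "fire", "postfire", "terminate"))); // true
        System.out.println(iteration.accepts(
            List.of("initialize", "fire", "prefire")));                          // false
    }
}
```

A checker of this kind could be run at composition time (does one component's automaton simulate what another expects?) or, as in this sketch, at run time against the observed call sequence.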
8.1 On-line Type Systems
Static support for type systems gives the compiler responsibility for the robustness of software [74]. This is not adequate when the software architecture is dynamic. The software needs to take responsibility for its own robustness [75]. This means that algorithms that support the type system need to be adapted to be practically executable at run time.

ML is an early and well-known realization of a "modern type system" [1,76,77]. It was the first language to use type inference in an integrated way [78], where the types of variables are not declared, but are rather inferred from how they are used. The compile-time algorithms here are elegant, but it is not clear to me whether run-time adaptations are practical. Many modern languages, including Java and C++, use declared types rather than type inference, but their extensive use of polymorphism still implies a need for fairly sophisticated type checking and type resolution. Type resolution allows for automatic (lossless) type conversions and for optimized run-time code, where the overhead of late binding can be avoided.

Type inference and type checking can be reformulated as the problem of finding the fixed point of a monotonic function on a lattice, an approach due to Dana Scott [79]. The lattice describes a partial order of types, where the ordering relationship is the subtype relation. For example, Double is a subtype of Number in Java. A typical implementation reformulates the fixed point problem as the solution of a system of equations [49] or of inequalities [80]. Reasonably efficient algorithms have been identified for solving such systems of inequalities [81], although these algorithms are still primarily viewed as part of a compiler, and not part of a run-time system.

Iteration to a fixed point, at first glance, seems too costly for on-line real-time computation. However, there are several languages based on such iteration that
are used primarily in a real-time context. Esterel is one of these [7]. Esterel compilers synthesize run-time algorithms that converge to a fixed point at each clock of a synchronous system [14]. Such synthesis requires detailed static information about the structure of the application, but methods have been demonstrated that use less static information [65]. Although these techniques have not been proposed primarily in the context of a type system, I believe they can be adapted.
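The following sketch illustrates the fixed-point formulation on a deliberately tiny example: a four-element chain serves as the type lattice, constraints require a variable's type to be at least the join of other variables' types and a constant, and resolution iterates monotonically until nothing changes. The lattice, the constraint form, and all names are assumptions made for illustration; real type resolution algorithms, such as those in [80,81], are considerably more refined.

```java
import java.util.*;

// A minimal sketch of type resolution as a least-fixed-point computation over a
// small type lattice. The lattice here is just a chain
//     UNKNOWN < INT < DOUBLE < GENERAL
// and each constraint says that one variable's type must be at least the join of
// some other variables' types and a constant. Everything here is illustrative.
class TypeResolver {
    enum Type { UNKNOWN, INT, DOUBLE, GENERAL }   // ordinal order = lattice order

    static Type join(Type a, Type b) {            // least upper bound on the chain
        return a.ordinal() >= b.ordinal() ? a : b;
    }

    // A constraint of the form: type(target) >= join(type(sources...), lowerBound)
    record Constraint(String target, List<String> sources, Type lowerBound) {}

    static Map<String, Type> resolve(Collection<String> vars, List<Constraint> cs) {
        Map<String, Type> types = new HashMap<>();
        for (String v : vars) types.put(v, Type.UNKNOWN);   // start at the bottom element
        boolean changed = true;
        while (changed) {                                   // iterate to a fixed point;
            changed = false;                                // monotonicity guarantees termination
            for (Constraint c : cs) {
                Type required = c.lowerBound();
                for (String s : c.sources()) required = join(required, types.get(s));
                Type updated = join(types.get(c.target()), required);
                if (updated != types.get(c.target())) {
                    types.put(c.target(), updated);
                    changed = true;
                }
            }
        }
        return types;
    }

    public static void main(String[] args) {
        // x receives an INT constant, y depends on x and a DOUBLE constant, z depends on y.
        List<Constraint> cs = List.of(
            new Constraint("x", List.of(), Type.INT),
            new Constraint("y", List.of("x"), Type.DOUBLE),
            new Constraint("z", List.of("y"), Type.UNKNOWN));
        System.out.println(resolve(List.of("x", "y", "z"), cs));
        // e.g., {x=INT, y=DOUBLE, z=DOUBLE} (map printing order may vary)
    }
}
```

The point of the sketch is that each pass only moves types upward in the lattice, so the iteration converges; whether such an iteration is cheap enough to run on-line is exactly the practical question raised in the text.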
8.2 Reflecting Program Dynamics
Object-oriented programming promises software modularization, but has not completely delivered. The type system captures only static, structural aspects of software. It says little about the state trajectory of a program (its dynamics) and about its concurrency. Nonetheless, it has proved extremely useful, and through the use of reflection, is able to support distributed systems and mobile code.

Reflection, as applied in software, can be viewed as having an on-line model of the software within the software itself. In Java, for example, this is applied in a simple way. The static structure of objects is visible through the Class class and the classes in the reflection package, which includes Method, Constructor, and various others. These classes allow Java code to dynamically query objects for their methods, determine on-the-fly the arguments of the methods, and construct calls to those methods. Reflection is an integral part of Java Beans, mobile code, and CORBA support. It provides a run-time environment with the facilities for stitching together components with relatively intolerant interfaces.

However, static structure is not enough. The interfaces between components involve more than method templates, including such properties as communication protocols. To get adaptive software in the context of real-time applications, it will also be important to reflect the program state. Thus, we need reflection on the program dynamics. In embedded software, this could be used, for example, to systematically realize fault detection, isolation, and recovery (FDIR). That is, if the declared dynamic properties of a component are violated at run time, the run-time type checking can detect it. For example, suppose a component declares as part of its interface definition that it must execute at least once every 10 ms. Then a run-time type checker will detect a violation of this requirement.

The first question becomes at what granularity to do this. Reflection intrinsically refers to a particular abstracted representation of a program. For example, in the case of static structure, Java's reflection package does not include finer granularity than methods. Process-level reflection could include two critical facets, communication protocols and process state. The former would capture in a type system such properties as whether the process uses rendezvous, streams, or events to communicate with
other processes. By contrast, Java Beans defines this property universally for all applications using Java Beans. That is, the event model is the only interaction mechanism available. If a component needs rendezvous, it must implement that on top of events, and the type system provides no mechanism for the component to assert that it needs rendezvous. For this reason, Java Beans seems unlikely to be very useful in applications that need stronger synchronization between processes, and thus it is unlikely to be used much beyond user interface design.

Reflecting the process state could be done with an automaton that simulates the program. (We use the term "simulates" in the technical sense of automata theory.) That is, a component or its run-time environment can access the "state" of a process (much as an object accesses its own static structure in Java), but that state is not the detailed state of the process, but rather the state of a carefully chosen automaton that simulates the application. Designing that automaton is then similar (conceptually) to designing the static structure of an object-oriented program, but represents dynamics instead of static structure.

Just as we have object-oriented languages to help us develop object-oriented programs, we would need state-oriented languages to help us develop the reflection automaton. These could be based on Statecharts, but would be closer in spirit to UML's state diagrams in that they would not be intended to capture all aspects of behavior. This is analogous to the object model of a program, which does not capture all aspects of the program structure (associations between objects are only weakly described in UML's static structure diagrams). Analogous to object-oriented languages, which are primarily syntactic overlays on imperative languages, a state-oriented language would be a syntactic overlay on an object-oriented language. The syntax could be graphical, as is now becoming popular with object models (especially UML).

Well-chosen reflection automata would add value in a number of ways. First, an application may be asked, via the network, or based on sensor data, to make some change in its functionality. How can it tell whether that change is safe? The change may be safe when it is in certain states, and not safe in other states. It would query its reflection automaton, or the reflection automaton of some gatekeeper object, to determine how to react. This could be particularly important in real-time applications. Second, reflection automata could provide a basis for verification via such techniques as model checking. This complements what object-oriented languages offer. Their object model indicates safety of a change with respect to data layout, but they provide no mechanism for determining safety based on the state of the program.

When a reflection automaton is combined with concurrency, we get something akin to Statechart's concurrent, hierarchical FSMs, but with a twist. In Statecharts, the concurrency model is fixed. Here, any concurrency model can be used. We
call this generalization "*charts," pronounced "starcharts," where the star represents a wildcard suggesting the flexibility in concurrency models [54]. Some variations of Statecharts support concurrency using models that are different from those in the original Statecharts [15,82]. As with Statecharts, concurrent composition of reflection automata provides the benefit of compact representation of a product automaton that potentially has a very large number of states. In this sense, aggregates of components remain components, where the reflection automaton of the aggregate is the product automaton of the components, but the product automaton never needs to be explicitly represented. Ideally, reflection automata would also inherit cleanly. Interface theories are evolving that promise to explain exactly how to do this [29].

In addition to application components being reflective, it will probably be beneficial for components in the run-time environment to be reflective. The run-time environment is whatever portion of the system outlives all application components. It provides such services as process scheduling, storage management, and specialization of components for efficient execution. Because it outlives all application components, it provides a convenient place for reflecting aspects of the application that transcend a single component or an aggregate of closely related components.
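A minimal sketch of these ideas is given below: a component exposes a coarse reflection automaton (its abstract state and the actions that are safe in that state) together with a declared timing requirement, and a run-time monitor queries both before applying a change and when checking for deadline violations. The interface and all names are hypothetical; nothing here is part of Java's reflection package or of any existing framework.

```java
import java.util.*;

// A sketch of "reflecting program dynamics": a component exposes an abstract state
// (the state of its reflection automaton) plus a declared timing requirement, and a
// run-time monitor uses both. All names and the API shape are assumptions for
// illustration only.
interface DynamicReflection {
    String currentAbstractState();                    // state of the reflection automaton
    Set<String> safeActionsIn(String abstractState);  // which changes are safe in that state
    long maxIterationPeriodMillis();                  // declared dynamic requirement
}

class MotorController implements DynamicReflection {
    private String state = "IDLE";                    // coarse states: IDLE, RAMPING, STEADY

    public String currentAbstractState() { return state; }

    public Set<String> safeActionsIn(String s) {
        // Reconfiguration is only safe while the controller is idle or in steady state.
        return s.equals("RAMPING") ? Set.of() : Set.of("reconfigure", "update-gains");
    }

    public long maxIterationPeriodMillis() { return 10; }  // "must run at least every 10 ms"

    void step(String next) { state = next; }          // detailed behavior elided
}

class RuntimeMonitor {
    private long lastIteration = System.currentTimeMillis();

    // FDIR-style check: has the component met its declared iteration deadline?
    boolean deadlineViolated(DynamicReflection c, long now) {
        boolean late = (now - lastIteration) > c.maxIterationPeriodMillis();
        lastIteration = now;
        return late;
    }

    // Query the reflection automaton before applying a requested change.
    boolean changeIsSafe(DynamicReflection c, String action) {
        return c.safeActionsIn(c.currentAbstractState()).contains(action);
    }

    public static void main(String[] args) {
        MotorController m = new MotorController();
        RuntimeMonitor monitor = new RuntimeMonitor();
        System.out.println(monitor.changeIsSafe(m, "reconfigure"));  // true (IDLE)
        m.step("RAMPING");
        System.out.println(monitor.changeIsSafe(m, "reconfigure"));  // false
        System.out.println(monitor.deadlineViolated(m, System.currentTimeMillis() + 50)); // true
    }
}
```

The abstract state here plays the role of the carefully chosen automaton that simulates the application; a product of such automata over several components could, in principle, be queried the same way without ever being built explicitly.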
9. Frameworks Supporting Models of Computation
In this context, a framework is a set of constraints on components and their interaction, and a set of benefits that derive from those constraints. This is broader than, but consistent with the definition of frameworks in object-oriented design [83]. By this definition, there are a huge number of frameworks, some of which are purely conceptual, cultural, or even philosophical, and some of which are embodied in software. Operating systems are frameworks where the components are programs or processes. Programming languages are frameworks where the components are language primitives and aggregates of these primitives, and the possible interactions are defined by the grammar. Distributed component middleware such as CORBA [17] and DCOM are frameworks. Synchronous digital hardware design principles are a framework. Java Beans form a framework that is particularly tuned to user interface construction. A particular class library and policies for its use is a framework [83]. For any particular application domain, some frameworks are better than others. Operating systems with no real-time facilities have limited utility in embedded systems, for example. In order to obtain certain benefits, frameworks impose constraints. As a rule, stronger benefits come at the expense of stronger constraints. Thus, frameworks may become rather specialized as they seek these benefits.
The drawback with specialized frameworks is that they are unlikely to solve all the framework problems for any complex system. To avoid giving up the benefits of specialized frameworks, designers of these complex systems will have to mix frameworks heterogeneously. Of course, a framework within which to heterogeneously mix frameworks is needed. The design of such a framework is the purpose of the Ptolemy project [25]. Each domain, which implements a model of computation, offers the designer a specialized framework, but domains can be mixed hierarchically using the concept of domain polymorphism.

A few other research projects have also heterogeneously combined models of computation. The Gravity system and its visual editor Orbit, like Ptolemy, provide a framework for heterogeneous models [84]. A model in a domain is called a facet, and heterogeneous models are multifacetted designs [85]. Jourdan et al. have proposed a combination of Argos, a hierarchical finite-state machine language, with Lustre [13], which has a more dataflow flavor, albeit still within a synchronous/reactive concurrency framework [86]. Another interesting integration of diverse semantic models is done in Statemate [87], which combines activity charts with statecharts. This sort of integration has more recently become part of UML. The activity charts have some of the flavor of a process network.
10. Conclusions
Embedded software requires a view of computation that is significantly different from the prevailing abstractions in computation. Because such software engages the physical world, it has to embrace time and other nonfunctional properties. Suitable abstractions compose components according to a model of computation. Models of computation with stronger formal properties tend to be more specialized. This specialization limits their applicability, but this limitation can be ameliorated by hierarchically combining heterogeneous models of computation. System-level types capture key features of components and their interactions through a model of computation, and promise to provide robust and understandable composition technologies.

ACKNOWLEDGMENTS
This chapter distills the work of many people who have been involved in the Ptolemy Project at Berkeley. Most notably, the individuals who have directly contributed ideas are Shuvra S. Bhattacharyya, John Davis II, Johan Eker, Chamberlain Fong, Christopher Hylands, Joern Janneck, Jie Liu, Xiaojun Liu, Stephen Neuendorffer, John Reekie, Farhana Sheikh, Kees Vissers, Brian K. Vogel, Paul Whitaker, and Yuhong Xiong. The Ptolemy Project is supported by the Defense Advanced Research Projects Agency (DARPA), the MARCO/DARPA Gigascale Silicon Research Center (GSRC), the State of
California MICRO program, and the following companies: Agilent Technologies, Cadence Design Systems, Hitachi, and Philips.

REFERENCES
[1] Turing, A. M. (1936). "On computable numbers with an application to the Entscheidungsproblem." Proceedings of the London Mathematical Society, 42, 230-265.
[2] Manna, Z., and Pnueli, A. (1991). The Temporal Logic of Reactive and Concurrent Systems. Springer-Verlag, Berlin.
[3] Douglass, B. P. (1998). Real-Time UML. Addison-Wesley, Reading, MA.
[4] Dijkstra, E. (1968). "Cooperating sequential processes." Programming Languages (F. Genuys, Ed.). Academic Press, New York.
[5] Lea, D. (1997). Concurrent Programming in Java: Design Principles and Patterns. Addison-Wesley, Reading, MA.
[6] Benveniste, A., and Berry, G. (1991). "The synchronous approach to reactive and real-time systems." Proceedings of the IEEE, 79, 1270-1282.
[7] Berry, G., and Gonthier, G. (1992). "The Esterel synchronous programming language: Design, semantics, implementation." Science of Computer Programming, 19, 87-152.
[8] Gamma, E., Helm, R., Johnson, R., and Vlissides, J. (1994). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA.
[9] Edwards, S. A., and Lee, E. A. (2001). "The semantics and execution of a synchronous block-diagram language," Technical Memorandum UCB/ERL, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[10] Liu, C., and Layland, J. (1973). "Scheduling algorithms for multiprogramming in a hard-real-time environment." Journal of the ACM, 20, 46-61.
[11] Harel, D., and Pnueli, A. (1985). "On the development of reactive systems." Logic and Models for Verification and Specification of Concurrent Systems. Springer-Verlag, Berlin.
[12] Berry, G. (1989). "Real time programming: Special purpose or general purpose languages." Information Processing (G. Ritter, Ed.), Vol. 89, pp. 11-17. Elsevier Science, Amsterdam.
[13] Halbwachs, N., Caspi, P., Raymond, P., and Pilaud, D. (1991). "The synchronous data flow programming language LUSTRE." Proceedings of the IEEE, 79, 1305-1319.
[14] Benveniste, A., and Le Guernic, P. (1990). "Hybrid dynamical systems theory and the SIGNAL language." IEEE Transactions on Automatic Control, 35, 525-546.
[15] Maraninchi, F. (1991). "The Argos Language: Graphical representation of automata and description of reactive systems." Proceedings IEEE Workshop on Visual Languages, Kobe, Japan, Oct.
[16] Klein, M. H., Ralya, T., Pollak, B., Obenza, R., and Harbour, M. G. (1993). A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems. Kluwer Academic, Norwell, MA.
[17] Ben-Natan, R. (1995). CORBA: A Guide to Common Object Request Broker Architecture. McGraw-Hill, New York.
[18] Selic, B., Gullekson, G., and Ward, P. (1994). Real-Time Object-Oriented Modeling. Wiley, New York.
[19] Allen, R., and Garlan, D. (1994). "Formalizing architectural connection." Proceedings of the 16th International Conference on Software Engineering (ICSE 94), pp. 71-80. IEEE Computer Society Press, Los Alamitos, CA.
[20] Agha, G. A. (1990). "Concurrent object-oriented programming." Communications of the ACM, 33, 125-141.
[21] Agha, G. A. (1986). Actors: A Model of Concurrent Computation in Distributed Systems. MIT Press, Cambridge, MA.
[22] Lynch, N. A. (1996). Distributed Algorithms. Morgan Kaufmann, San Mateo, CA.
[23] Chiodo, M., Giusto, P., Hsieh, H., Jurecska, A., Lavagno, L., and Sangiovanni-Vincentelli, A. (1994). "A formal methodology for hardware/software co-design of embedded systems." IEEE Micro, 14, 26-36.
[24] Henzinger, T. A., Horowitz, B., and Kirsch, C. M. (2001). "Giotto: A time-triggered language for embedded programming." Proceedings of EMSOFT 2001, Tahoe City, CA, Lecture Notes in Computer Science 2211, pp. 166-184. Springer-Verlag, Berlin.
[25] Davis, J., II, Hylands, C., Kienhuis, B., Lee, E. A., Liu, J., Liu, X., Muliadi, L., Neuendorffer, S., Tsay, J., Vogel, B., and Xiong, Y. (2001). "Heterogeneous concurrent modeling and design in Java," Technical Memorandum UCB/ERL M01/12. Department of Electrical Engineering and Computer Science, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[26a] Agha, G. A. (1997). "Abstracting interaction patterns: A programming paradigm for open distributed systems." Formal Methods for Open Object-Based Distributed Systems, IFIP Transactions (E. Najm and J.-B. Stefani, Eds.). Chapman and Hall, London.
[26b] Lee, E. A., and Neuendorffer, S. (2000). "MoML—A modeling markup language in XML, Version 0.4," Technical Memorandum UCB/ERL M00/12. University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[27] Eriksson, H.-E., and Penker, M. (1998). UML Toolkit. Wiley, New York.
[28] Luckham, D. C., and Vera, J. (1995). "An event-based architecture definition language." IEEE Transactions on Software Engineering, 21, 717-734.
[29] de Alfaro, L., and Henzinger, T. A. (2001). "Interface theories for component-based design." Proceedings of EMSOFT 2001, Tahoe City, CA, Lecture Notes in Computer Science 2211, pp. 148-165. Springer-Verlag, Berlin.
[30] Hoare, C. A. R. (1978). "Communicating sequential processes." Communications of the ACM, 21, 666-677.
[31] von Eicken, T., Culler, D. E., Goldstein, S. C., and Schauser, K. E. (1992). "Active messages: A mechanism for integrated communications and computation." Proceedings of the 19th International Symposium on Computer Architecture, Gold Coast, Australia. Also available as Technical Report TR UCB/CSD 92/675, Computer Science Division, University of California—Berkeley.
[32] Kahn, G. (1974). "The semantics of a simple language for parallel programming." Proceedings of the IFIP Congress 74. North-Holland, Amsterdam.
[33] Carriero, N., and Gelernter, D. (1989). "Linda in context." Communications of the ACM, 32, 444-458.
[34] Bhattacharyya, S. S., Murthy, P. K., and Lee, E. A. (1996). Software Synthesis from Dataflow Graphs. Kluwer Academic, Norwell, MA.
[35] Karp, R. M., and Miller, R. E. (1966). "Properties of a model for parallel computations: Determinacy, termination, queueing." SIAM Journal, 14, 1390-1411.
[36] Lauwereins, R., Wauters, P., Ade, M., and Peperstraete, J. A. (1994). "Geometric parallelism and cyclo-static dataflow in GRAPE-II." Proceedings 5th International Workshop on Rapid System Prototyping, Grenoble, France.
[37] Lee, E. A., and Messerschmitt, D. G. (1987). "Synchronous data flow." Proceedings of the IEEE, 75, 1235-1245.
[38] Lee, E. A., and Messerschmitt, D. G. (1987). "Static scheduling of synchronous data flow programs for digital signal processing." IEEE Transactions on Computers, 36, 24-35.
[39] Buck, J. T. (1993). "Scheduling dynamic dataflow graphs with bounded memory using the token flow model," Tech. Report UCB/ERL 93/69, Ph.D. Dissertation. Department of Electrical Engineering and Computer Science, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[40] Jagannathan, R. (1992). "Parallel execution of GLU programs." Presented at 2nd International Workshop on Dataflow Computing, Hamilton Island, Queensland, Australia.
[41] Kaplan, D. J., et al. (1987). "Processing Graph Method Specification Version 1.0," unpublished memorandum. Naval Research Laboratory, Washington, DC.
[42] Parks, T. M. (1995). "Bounded scheduling of process networks," Technical Report UCB/ERL-95-105, Ph.D. Dissertation. Department of Electrical Engineering and Computer Science, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[43] Kopetz, H., Holzmann, M., and Elmenreich, W. (2000). "A universal smart transducer interface: TTP/A." 3rd IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC 2000).
[44] Liao, S., Tjiang, S., and Gupta, R. (1997). "An efficient implementation of reactivity for modeling hardware in the scenic design environment." Proceedings of the Design Automation Conference (DAC 97), Anaheim, CA.
[45] Koo, T. J., Liebman, J., Ma, C., and Sastry, S. S. (2001). "Hierarchical approach for design of multi-vehicle multi-modal embedded software." Proceedings of EMSOFT
2001, Tahoe City, CA, Lecture Notes in Computer Science 2211, pp. 344-360. Springer-Verlag, Berlin.
[46] Caspi, P., Pilaud, D., Halbwachs, N., and Plaice, J. A. (1987). "LUSTRE: A declarative language for programming synchronous systems." Conference Record of the 14th Annual ACM Symposium on Principles of Programming Languages, Munich, Germany.
[47] Lee, E. A., and Parks, T. M. (1995). "Dataflow process networks." Proceedings of the IEEE, 83, 773-801.
[48] Lieverse, P., Van Der Wolf, P., Deprettere, E., and Vissers, K. (2001). "A methodology for architecture exploration of heterogeneous signal processing systems." Journal of VLSI Signal Processing, 29, 197-207.
[49] Milner, R. (1978). "A theory of type polymorphism in programming." Journal of Computer and System Sciences, 17, 348-375.
[50] Reed, G. M., and Roscoe, A. W. (1988). "A timed model for communicating sequential processes." Theoretical Computer Science, 58, 249-261.
[51] van Gemund, A. J. C. (1993). "Performance prediction of parallel processing systems: The PAMELA methodology." Proceedings 7th International Conference on Supercomputing, Tokyo.
[52] Ahuja, S., Carriero, N., and Gelernter, D. (1986). "Linda and friends." Computer, 19, 26-34.
[53a] Harel, D. (1987). "Statecharts: A visual formalism for complex systems." Science of Computer Programming, 8, 231-274.
[53b] Henzinger, T. A. (1996). "The theory of hybrid automata." Proceedings of the 11th Annual Symposium on Logic in Computer Science, pp. 278-292. IEEE Computer Society Press, Los Alamitos, CA. Invited tutorial.
[54] Girault, A., Lee, B., and Lee, E. A. (1999). "Hierarchical finite state machines with multiple concurrency models." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 18, 742-760.
[55] Saracco, S., Smith, J. R. W., and Reed, R. (1989). Telecommunications Systems Engineering Using SDL. North-Holland-Elsevier, Amsterdam.
[56] Lee, E. A., and Sangiovanni-Vincentelli, A. (1998). "A framework for comparing models of computation." IEEE Transactions on Computer-Aided Design, 17, 1217-1229.
[57] Trotter, W. T. (1992). Combinatorics and Partially Ordered Sets. Johns Hopkins Univ. Press, Baltimore, MD.
[58] Lamport, L. (1978). "Time, clocks, and the ordering of events in a distributed system." Communications of the ACM, 21, 558-565.
[59] Fidge, C. J. (1991). "Logical time in distributed systems." Computer, 24, 28-33.
[60] Mattern, F. (1989). "Virtual time and global states of distributed systems." Parallel and Distributed Algorithms (M. Cosnard and P. Quinton, Eds.), pp. 215-226. North-Holland, Amsterdam.
[61] Swan, S. (2001). "An introduction to system level modeling in SystemC 2.0," draft report. Cadence Design Systems.
[62] Lee, E. A. (1999). "Modeling concurrent real-time processes using discrete events." Annals of Software Engineering, Special Volume on Real-Time Software Engineering, 7, 25-45.
[63] Fong, C. (2001). "Discrete-time dataflow models for visual simulation in Ptolemy II," Memorandum UCB/ERL M01/9. Electronics Research Laboratory, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[64] Liu, J. (1998). "Continuous time and mixed-signal simulation in Ptolemy II," UCB/ERL Memorandum M98/74. Department of Electrical Engineering and Computer Science, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[65] Edwards, S. A. (1997). "The specification and execution of heterogeneous synchronous reactive systems," Technical Report UCB/ERL M97/31, Ph.D. thesis. University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[66] Whitaker, P. (2001). "The simulation of synchronous reactive systems in Ptolemy II," Master's Report, Memorandum UCB/ERL M01/20. Electronics Research Laboratory, University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[67] Goel, M. (1998). "Process networks in Ptolemy II," UCB/ERL Memorandum M98/69. University of California—Berkeley. Available at http://ptolemy.eecs.berkeley.edu/publications.
[68] Xi, H., and Pfenning, F. (1998). "Eliminating array bound checking through dependent types." Proceedings of ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '98), Montreal, pp. 249-257.
[69] Lee, E. A., and Xiong, Y. (2001). "System-level types for component-based design." Proceedings of EMSOFT 2001, Tahoe City, CA, Lecture Notes in Computer Science 2211, pp. 237-253. Springer-Verlag, Berlin.
[70] Lucassen, J. M., and Gifford, D. K. (1988). "Polymorphic effect systems." Proceedings 15th ACM Symposium on Principles of Programming Languages, pp. 47-57.
[71] Martin-Löf, P. (1980). "Constructive mathematics and computer programming." Logic, Methodology, and Philosophy of Science VI, pp. 153-175. North-Holland, Amsterdam.
[72] Hayashi, S. (1991). "Singleton, union, and intersection types for program extraction." Proceedings of the International Conference on Theoretical Aspects of Computer Science (A. R. Meyer, Ed.), pp. 701-730.
[73] Ullman, J. D. (1994). Elements of ML Programming. Prentice-Hall, Englewood Cliffs, NJ.
[74] Cardelli, L., and Wegner, P. (1985). "On understanding types, data abstraction, and polymorphism." ACM Computing Surveys, 17, 471-522.
[75] Laddaga, R. (1998). "Active software." Position paper for the St. Thomas Workshop on Software Behavior Description.
[76] Gordon, M. J., Milner, R., Morris, L., Newey, M., and Wadsworth, C. P. (1978). "A metalanguage for interactive proof in LCF." Conference Record of the 5th Annual ACM Symposium on Principles of Programming Languages, pp. 119-130. Assoc. Comput. Mach., New York.
[77] Wikstrom, A. (1988). Standard ML. Prentice-Hall, Englewood Cliffs, NJ.
[78] Hudak, P. (1989). "Conception, evolution, and application of functional programming languages." ACM Computing Surveys, 21, 359-411.
[79] Scott, D. (1970). "Outline of a mathematical theory of computation." Proceedings of the 4th Annual Princeton Conference on Information Sciences and Systems, pp. 169-176.
[80] Xiong, Y., and Lee, E. A. (2000). "An extensible type system for component-based design." 6th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Berlin, Lecture Notes in Computer Science 1785, pp. 20-37. Springer-Verlag, Berlin.
[81] Rehof, J., and Mogensen, T. (1996). "Tractable constraints in finite semilattices." Third International Static Analysis Symposium, Lecture Notes in Computer Science 1145, pp. 285-301. Springer-Verlag, Berlin.
[82] von der Beeck, M. (1994). "A comparison of statecharts variants." Proceedings of Formal Techniques in Real Time and Fault Tolerant Systems, Lecture Notes in Computer Science 863, pp. 128-148. Springer-Verlag, Berlin.
[83] Johnson, R. E. (1997). "Frameworks = (Components + Patterns)." Communications of the ACM, 40, 39-42.
[84] Abu-Ghazaleh, N., Alexander, P., Dieckman, D., Murali, R., and Penix, J. (1998). "Orbit—A framework for high assurance system design and analysis," TR 211/01/98/ECECS. University of Cincinnati.
[85] Alexander, P. (1998). "Multi-facetted design: The key to systems engineering." Proceedings of Forum on Design Languages (FDL-98).
[86] Jourdan, M., Lagnier, F., Maraninchi, F., and Raymond, P. (1994). "A multiparadigm language for reactive systems." Proceedings of the 1994 International Conference on Computer Languages, Toulouse, France.
[87] Harel, D., Lachover, H., Naamad, A., Pnueli, A., Politi, M., Sherman, R., Shtull-Trauring, A., and Trakhtenbrot, M. (1990). "STATEMATE: A working environment for the development of complex reactive systems." IEEE Transactions on Software Engineering, 16, 403-414.
Empirical Studies of Quality Models in Object-Oriented Systems

LIONEL C. BRIAND
Software Quality Engineering Laboratory
Systems and Computer Engineering
Carleton University
1125 Colonel By Drive
Ottawa, K1S 5B6
Canada
[email protected]

JURGEN WUST
Fraunhofer IESE
Sauerwiesen 6
67661 Kaiserslautern
Germany
Abstract

Measuring structural design properties of a software system, such as coupling, cohesion, or complexity, is a promising approach toward early quality assessments. To use such measurement effectively, quality models are needed that quantitatively describe how these internal structural properties relate to relevant external system qualities such as reliability or maintainability. This chapter's objective is to summarize, in a structured and detailed fashion, the empirical results reported so far on modeling external system quality based on structural design properties in object-oriented systems. We perform a critical review of existing work in order to identify lessons learned regarding the way these studies are performed and reported. Constructive guidelines for future studies are also provided, thus facilitating the development of an empirical body of knowledge.
1. Introduction
2. Overview of Existing Studies
2.1 Classification of Studies
2.2 Measurement
2.3 Survey of Studies
2.4 Discussion
3. Data Analysis Methodology
3.1 Descriptive Statistics
3.2 Principal Component Analysis
3.3 Univariate Regression Analysis
3.4 Prediction Model Construction
3.5 Prediction Model Evaluation
4. Summary of Results
4.1 Correlational Studies
4.2 Controlled Experiments
5. Conclusions
5.1 Interrelationship between Design Measures
5.2 Indicators of Fault-Proneness
5.3 Indicators of Effort
5.4 Predictive Power of Models
5.5 Cross-System Application
5.6 Cost-Benefit Model
5.7 Advanced Data Analysis Techniques
5.8 Exploitation of Results
5.9 Future Research Directions
Appendix A
Appendix B: Glossary
References
1. Introduction
As object-oriented programming languages and development methodologies moved forward, a significant research effort was also dedicated to defining specific quality measures and building quality models based on those measures. Quality measures of object-oriented code or design artifacts usually involve analyzing the structure of these artifacts with respect to the interdependencies of classes and components as well as their internal elements (e.g., inner classes, data members, methods). The underlying assumption is that such measures can be used as objective measures to predict various external quality aspects of the code or design artifacts, e.g., maintainability and reliability. Such prediction models can then be used to help decision-making during development. For example, we may want to predict the fault-proneness of components in order to focus validation and verification effort, thus finding more defects for the same amount of effort. Furthermore, as predictive measures of fault-proneness, we may want to consider the coupling, or level of dependency, between classes.
A large number of quality measures have been defined in the literature. Most of them are based on plausible assumptions, but one key question is to determine whether they are actually useful, significant indicators of any relevant, external quality attribute [1a]. We also need to investigate how they can be applied in practice, and whether they lead to cost-effective models in a specific application context. Although numerous empirical studies have been performed and reported in order to address the above-mentioned questions, it is difficult to synthesize the current body of knowledge and identify future research directions. One of the main reasons is the large variety of measures investigated and the lack of consistency and rigor in the reporting of results.

This chapter's objective is to summarize, in a structured and detailed fashion, the results that have been reported so far. Overall, although not all the results are easy to interpret, there is enough consistency across studies to identify a number of strong conclusions. We also perform a critical review of existing work in order to identify lessons learned regarding the way these studies are performed and reported. Constructive guidelines for facilitating the work of future studies are also provided, thus facilitating the development of an empirical body of knowledge.

Section 2 summarizes existing studies and their main characteristics. Section 3 describes the most important principles and techniques regarding the analysis of software quality data and structural measurement. A recommended analysis procedure is also provided. Section 4 summarizes, in great detail, the results of the studies discussed in Section 2. These results are discussed and conclusions are provided in Section 5.
2. Overview of Existing Studies
This section presents a first overview of the existing studies relating OO design measurement and system quality, and highlights their commonalities and differences. A comparison of their results is performed in Section 4.
2.1 Classification of Studies

Despite a large number of papers regarding the quality measurement of object-oriented systems, the number of articles that empirically investigate the relationship between design properties and various external quality attributes is relatively small. These studies fall into two categories:

1. Correlational studies. These are studies which, by means of univariate or multivariate regression analysis, try to demonstrate a statistical relationship
between one or more measures of a system's structural properties (as independent variables) and an external system quality (as a dependent variable).

2. Controlled experiments. These are studies that control the structural properties of a set of systems (independent variables, mostly related to the use of the OO inheritance mechanism), and measure the performance of subjects undertaking software development tasks in order to demonstrate a causal relationship between the two. So far such studies have mostly been performed with students and have focused on the impact of inheritance on maintenance tasks.

Correlational studies are by far more numerous as they are usually the only option in industrial settings. Outside these two categories, published empirical work typically falls into two further categories:

3. Application of a set of design measures to one or more systems, with a discussion of the obtained distributions of the measures within one system, or a comparison of distributions across two or more systems, e.g., [1b-4]. For instance, [4] develop two versions of a brewery control system to identical specifications, one following a data-driven approach [5], the other a responsibility-driven approach [6]. They apply the set of design measures by Chidamber and Kemerer [7] to the two resulting systems. They find the system resulting from the responsibility-driven approach to display more desirable structural properties. They conclude the responsibility-driven approach to be more effective for the production of maintainable, extensible, and reusable software. Conclusions in such studies are of course only supported when a relationship of the design measures used with the aforementioned system qualities is established. Considered in isolation, such studies are not suitable for demonstrating the usefulness of the structural measures, or for drawing conclusions from their measurement.

4. Application of a set of design measures to one or more systems to investigate the relationships between these design measures, by computing pairwise correlations and performing factor analysis (e.g., [1,8,9]).

Besides empirical studies, the literature is concerned with the following topics:

5. Definition of new sets of measures (e.g., [3,7,10-17]).

6. Definition of measurement frameworks for one or more structural properties, which provide guidelines on how these properties can, in principle, be measured [13,18-20].

7. Criticism/theoretical analysis of existing measures and measurement frameworks; in particular, there is an interest in defining, for measures of various structural properties, necessary mathematical properties these
measures must possess in order for them to be valid for the properties involved [21-24]. Our discussions in this article will focus on categories (1) and (2), with a strong emphasis on the former as these studies are by far the most numerous.
2.2 Measurement
In this section, we provide some examples of measures for object-oriented designs, to give the reader new to the field an impression of what measures of OO structural properties usually are about. We summarize here the measures by Chidamber and Kemerer ([3], in the following referred to as C&K). As we will see, these are the measures having received the widest attention in empirical studies, and they will be frequently mentioned in subsequent sections.

Chidamber and Kemerer define a suite of six measures (CBO, RFC, LCOM, DIT, NOC, WMC) to quantify the coupling, cohesion, inheritance relationships, and complexity of a class in an OO system:

• CBO (Coupling between objects)—A count of the number of noninheritance-related couples to other classes. An object of a class is coupled to another, if methods of one class use methods or attributes of the other.

• RFC (Response for class)—RFC = |RS|, where RS is the response set for the class. The response set can be expressed as RS = {M} ∪ (∪i {Ri}), where {Ri} is the set of methods called by method i, and {M} is the set of all methods in the class. The response set of a class is a set of methods that can potentially be executed in response to a message received by an object of that class.

• LCOM (Lack of cohesion in methods)—Consider a class C with methods M1, M2, ..., Mn. Let {Ii} be the set of instance variables used by method Mi. There are n such sets {I1}, ..., {In}. Let P = {(Ii, Ij) | Ii ∩ Ij = ∅} and Q = {(Ii, Ij) | Ii ∩ Ij ≠ ∅}. If all n sets {I1}, ..., {In} are ∅, then let P = ∅. Then

LCOM = |P| − |Q| if |P| > |Q|, and 0 otherwise.
• DIT (Depth in inheritance tree)—The depth of a class in the inheritance tree is the maximum length from the node to the root of the tree.

• NOC (Number of children)—The number of classes derived from a given class.
• WMC (Weighted method complexity)—Consider a class C with methods M1, M2, ..., Mn. Let c1, c2, ..., cn be the complexities of the methods. Then

WMC = c1 + c2 + ... + cn.

The complexities ci were intentionally left undefined. Two versions of WMC were suggested and are frequently used:

• In [14,25], ci is defined as McCabe's cyclomatic complexity of method Mi [26].

• In [27], each ci is set to 1. In other words, this version of WMC counts the (noninherited) methods of the class.

The Appendix provides short definitions for all measures mentioned in this chapter.
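As a concrete illustration of these definitions, the sketch below computes LCOM from the sets of instance variables used by each method of a class, following the formula above, and shows that the simple version of WMC (each ci = 1) reduces to the method count. The input representation (a map from method names to attribute sets) is an assumption made for illustration; measurement tools would normally derive this information by static analysis of the code.

```java
import java.util.*;

// A minimal sketch of the LCOM computation defined above, given for each method of
// a class the set of instance variables it uses.
class LcomCalculator {
    static int lcom(Map<String, Set<String>> attrsUsedByMethod) {
        List<Set<String>> sets = new ArrayList<>(attrsUsedByMethod.values());
        if (sets.stream().allMatch(Set::isEmpty)) return 0;  // convention: P is empty in this case
        int p = 0, q = 0;
        for (int i = 0; i < sets.size(); i++) {
            for (int j = i + 1; j < sets.size(); j++) {
                Set<String> common = new HashSet<>(sets.get(i));
                common.retainAll(sets.get(j));
                if (common.isEmpty()) p++; else q++;   // disjoint pair -> P, sharing pair -> Q
            }
        }
        return Math.max(p - q, 0);   // LCOM = |P| - |Q| if |P| > |Q|, else 0
    }

    public static void main(String[] args) {
        // Methods m1 and m2 share attribute "a"; m3 uses only "c".
        Map<String, Set<String>> usage = Map.of(
            "m1", Set.of("a", "b"),
            "m2", Set.of("a"),
            "m3", Set.of("c"));
        System.out.println(lcom(usage));    // |P| = 2, |Q| = 1, so LCOM = 1
        // The simple version of WMC (each c_i = 1) is just the method count:
        System.out.println(usage.size());   // WMC = 3
    }
}
```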
2.3 Survey of Studies
This section is divided into two distinct subsections. The first presents correlational studies, whereas the second focuses on experiments. The studies are summarized in terms of their settings, dependent variable, independent variables, and analysis techniques.
2.3.1 Correlational Studies
Table I provides for each study a brief description of:

• The external system quality of interest that was used as the dependent variable of the study.

• The measures of structural properties used as independent variables of the study.

• A brief characterization of the system(s) from which the data sets were obtained (language, size, application domain). The systems are described the first time they are reported and are denoted by acronyms in the remainder of the table.

• The types of analyses that were performed. Most importantly, this includes:
• univariate analysis (what modeling technique was used, if any),
• multivariate analysis (what modeling technique was used, if any),
• what kind of validation of the multivariate model was performed, if any.
TABLE I

OVERVIEW OF CORRELATIONAL STUDIES

(Table I lists, for each of the surveyed correlational studies, the reference, the dependent variable, the independent variables, the data set, and the univariate analysis, multivariate analysis, and model evaluation techniques used.)
Without delving into the details of the studies in Table I, we can draw a number of observations:

• Choice of dependent variable. The dependent variables investigated are fault-proneness (probability of fault detection), the number of faults or changes in a class, effort for various development activities, or expert opinion/judgment about the psychological complexity of a class.

• Fault-proneness or the number of defects detected in a class is the most frequently investigated dependent variable. Sources of faults are either unit/system testing or field failures. This choice of dependent variable is by far the most common in the literature. One reason is that using fault-proneness as a dependent variable is an indirect way of looking at reliability, which is an important external quality to consider. Another explanation is that the collection of fault data (including the assignment of faults to classes) is less difficult than collecting other data related to classes (e.g., effort), and this makes it a convenient choice for investigating the impact of structural measures on the cognitive complexity of classes.

• Less frequently investigated is effort for various activities: either total development effort, rework effort/functional enhancements, or effort to fix faults. For some studies, effort for individual classes was collected, which, in practice, is a difficult undertaking. Other studies collected system/project-wide effort, which is easier to account for but leads to other practical problems. If systems become the unit of analysis, then it becomes difficult to obtain enough data to perform multivariate analysis.

• Two studies [40,58] used the likelihood or number of ripple effects on other classes when a change is performed to a class. The goal was to provide a model to support impact analysis. These studies are not described in the remainder of this chapter as they are the only ones of their kind and more studies are needed to confirm the trends observed.

• In the absence of hard quality data obtained from development projects, subjective data are sometimes used. For instance, the following have been used: expert opinion about the perceived complexity or cohesion of a class, and preference ranking between design alternatives. There are a number of problems associated with the use of subjective measurement. Determining what constitutes an "expert" is one. Moreover, it is a priori unclear to what degree experts' judgment correlates with any external system quality attribute. Eliciting expert opinion is a difficult undertaking and must be carefully
planned to provide meaningful results, and the procedures used must be properly reported. Although this is outside the scope of this article, abundant literature exists on the subject [60]. An interesting question that, to our knowledge, has not been investigated in depth to date is whether structural measures can perform as well as or better than experts in predicting quality attributes such as fault-proneness.

• Choice of independent variables. Existing measures receive a varying amount of attention in the empirical studies. The measures by Chidamber and Kemerer [3] were investigated the most. One reason is that this was one of the first publications on the measurement of object-oriented systems. The relative difficulty of collecting more complex measures through static analyzers may also be an explanation. Last, the profusion of papers proposing new measures, using different terminology and formalism, has made any selection of meaningful measures a difficult undertaking. Some recently published measurement frameworks [18,19] may help choose appropriate measures based on their properties. A careful selection of measures, based on a clear rationale, is indispensable to keep the complexity of the data analysis within reasonable limits and to lower the chances of finding significant relationships by chance [61]. However, in the early stage of investigation, it is common for studies to investigate large numbers of alternatives, as they tend to be exploratory.

• Building prediction models. Only about half of the studies employ some kind of multivariate analysis in order to build an accurate prediction model for the dependent variable. The remaining studies only investigate the impact of individual measures on system quality, but not their combined impact. Depending on the measurement scale of the dependent variable, different regression techniques are used. Furthermore, a number of detailed technical issues regarding the data analysis can be observed and are discussed in Section 3. One noticeable pattern is the number of studies that only investigate linear relationships between structural measures and the dependent variable of interest. Although there is no rationale to support this, data sets are often not large enough to investigate nonlinear relationships or interactions. In addition, because of the lack of supporting theory, it is often difficult to know what to search for. Exploratory techniques, such as regression trees or MARS [62], have been used in some studies to determine nonlinearities and variable interactions, with some degree of success [31,33].

• Evaluating prediction models. Of the studies that perform multivariate analysis, only half perform some kind of cross validation [63], in which the prediction performance of the multivariate prediction model in a
relevant application context is investigated. The other studies only provide a measure of the goodness-of-fit of the prediction model (e.g., R²). As a consequence, the potential benefits of using such prediction models are not always clear, especially from a practical standpoint. Very few studies attempt to build a model on one system and apply it to another one within the same environment. As studies move away from exploration and investigate the practical applications of measurement-based models, cross-system prediction will require more attention. One practical difficulty is obtaining consistent data from different projects of a comparable nature.
• Data sets. Data sets with fault or effort data at the class level are rare. As a consequence, these data sets tend to be used repeatedly in various studies, for example, investigating different sets of measures or using different modeling techniques. On the one hand, this allows for better comparison between studies, but it is also detrimental to building an increased body of knowledge, as the replication of individual studies in many different contexts rarely takes place. Instead, we find a large number of different studies using a small number of data sets.
2.3.2 Experiments
Table II provides an overview of the controlled experiments investigating the relationship between structural design properties and system quality in object-oriented systems. For each study, we state the literature source, a characterization of the dependent and independent variables investigated, the systems used for the experiment, and the participants involved. The rightmost column indicates the experimental design employed and the analysis techniques used to test the research hypotheses. For an introduction to experimental designs, see, e.g., [64]. Controlled experiments are far fewer in number than correlational studies. They mostly investigate aspects of system understandability and maintainability as dependent variables, and the usage of inheritance as the independent variable. We also see that the controlled experiments are usually performed in a university setting with student subjects. The qualitative results of these experiments will be summarized in Section 4.2.
2.4 Discussion
From Tables I and II, we can see that a large number of studies have already been reported. The great majority of them are correlational studies. One of the reasons is that it is difficult to perform controlled experiments in industrial settings. Moreover, preparing the material for such experiments (e.g., alternative functional designs) is usually costly.
TABLE II
OVERVIEW OF CONTROLLED EXPERIMENTS
[For each experiment, Table II lists the reference, the dependent variable (subjectively perceived reusability; understandability, correctness, completeness, and modification rate of impact analysis; maintainability; modifiability; "debugability"), the independent variables (C&K measures, LOC, numbers of methods and attributes, meaningfulness of variable names, procedural vs. OO design, adherence to common principles of good design, flat vs. deep inheritance structures/DIT), the systems and subjects (small systems of a few classes or a few hundred C++ LOC, mostly with student subjects), and the experimental design and analysis technique (2 x 2 factorial designs with ANOVA and paired t-tests, between-subject and within-subject designs, blocked designs with Wilcoxon signed rank and rank sum tests).]
With correlational studies, actual systems and design documents can be used. Another important observation is that the analysis procedures followed throughout the correlational studies vary a great deal. To some extent, some variation is to be expected, as alternative analysis procedures are possible, but many of the studies are actually not optimal in terms of the techniques used. For instance, [28] overfits the data and performs a great number of statistical tests without using appropriate techniques for repeated testing. We therefore need to facilitate the comparison of studies and to ensure that data analyses are complete and properly reported. Only then will it be possible to build upon every study and develop a body of knowledge that will allow us to determine how to use structural measurement to build quality models of object-oriented software. Section 3 provides a detailed procedure that was first used (with minor differences) in a number of articles [29-31,33]. Such a procedure makes the results of a study more interpretable (and thus easier to compare) and the analysis more likely to yield accurate prediction models.
3. Data Analysis Methodology
Recall that our focus here is to explain the relationships between structural measures of object-oriented designs and external quality measures of interest. In this section, because we focus on data analysis procedures and multivariate modeling, we will refer to these measures as independent and dependent variables, respectively. For the sake of brevity they will be denoted as IVs and DVs. Our goal here is not to paraphrase books on quantitative methods and statistics, but rather to map clearly the problems we face to the techniques that exist. We also provide clear, practical justifications for the techniques we suggest should be used. At a high level, the procedure we have used [29-31,33] consists of the following steps.
1. Descriptive statistics [70]. An analysis of the frequency distributions of the IVs will help to explain some of the results observed in subsequent steps and is also crucial for explaining differences across studies.
2. Principal component analysis (PCA) [71]. In the investigation of measures of structural properties, it is common to have much collinearity between measures capturing similar underlying phenomena. PCA is a standard technique for determining the dimensions captured by our IVs. PCA will help us better interpret the meaning of our results in subsequent steps.
3. Univariate analysis. Univariate regression analysis looks at the relationships between each of the IVs and the DV under study. This is a first step
to identify the types of IVs significantly related to the DV and thus potential predictors to be used in the next step.
4. Prediction model construction (multivariate analysis). Multivariate analysis also looks at the relationships between the IVs and the DV, but considers them in combination, as covariates in a multivariate model, in order to better explain the variance of the DV and ultimately obtain accurate predictions. To measure prediction accuracy, different modeling techniques (e.g., OLS [72], logistic regression, Poisson regression [73]) have specific measures of goodness-of-fit.
5. Prediction model evaluation. To get an estimate of the predictive power of the multivariate prediction models that is more realistic than goodness-of-fit, we need to apply the models to data sets other than those from which they were derived. A set of procedures known as cross-validation [63] should be carried out. Typically, such a procedure consists of dividing the data set into V pieces and using them in turn as test data sets, with the remainder of the data set used to fit the model. This is referred to as V-cross-validation and allows the analyst to obtain a realistic estimate of prediction accuracy even when only a data set of limited size is available. Based on the results of the cross-validation, the benefit of using the model in a usage scenario should then be demonstrated.
The above procedure is aimed at making studies and future replications repeatable and comparable across different environments. In the following, we describe and motivate each step in more detail.
3.1 Descriptive Statistics
Within each case study, the distribution (mean, median, and interquartile range) and variance (standard deviation) of each measure are examined. Low-variance measures do not differentiate classes very well and are therefore not likely to be useful predictors. The range and distribution of a measure determine the applicability of subsequent regression analysis techniques. Analyzing and presenting the distribution of measures is important for the comparison of different case studies.^ It allows us to determine whether the data collected across studies stem from similar populations. If not, this information will likely be helpful to explain different findings across studies. Also, this analysis will identify measures with potential outlying values, which will be important in the subsequent regression analyses. Univariate and multivariate outlier analyses are discussed in their respective sections.
^ Note that one strong conclusion that comes from our experience of analyzing data and building models is that we will only be able to draw credible conclusions regarding what design measures to use if we are able to replicate studies across a large number of environments and compare their results.
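As a small, hypothetical illustration of this step (a sketch in Python; the helper name, the example column names, and the variance cutoff are ours, not part of the studies described here), the following code computes the statistics discussed above for a table of class-level measures and flags low-variance measures:

    import pandas as pd

    def descriptive_statistics(measures: pd.DataFrame, min_std: float = 1e-6) -> pd.DataFrame:
        # Mean, median, interquartile range, and standard deviation of each measure.
        stats = pd.DataFrame({
            "mean": measures.mean(),
            "median": measures.median(),
            "iqr": measures.quantile(0.75) - measures.quantile(0.25),
            "std": measures.std(),
        })
        # Measures with (almost) no variance cannot differentiate classes
        # and are unlikely to be useful predictors.
        stats["low_variance"] = stats["std"] <= min_std
        return stats

    # Example usage with hypothetical class-level measures:
    # stats = descriptive_statistics(design_measures[["LOC", "CBO", "RFC", "LCOM5"]])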
3.2 Principal Component Analysis
It is common to see groups of variables in a data set that are strongly correlated. These variables are likely to capture the same underlying property of the object to be measured. PCA is a standard technique for identifying the underlying, orthogonal dimensions (which correspond to properties that are directly or indirectly measured) that explain the relations between the variables in the data set. For example, analyzing a data set using PCA may lead to the conclusion that all your measures come down to measuring some aspect of class size and import coupling.
Principal components (PCs) are linear combinations of the standardized IVs; the sum of the squares of the weights in each linear combination is equal to 1. PCs are calculated as follows. The first PC is the linear combination of all standardized variables that explains a maximum amount of variance in the data set. The second and subsequent PCs are linear combinations of all standardized variables, where each new PC is orthogonal to all previously calculated PCs and captures a maximum amount of variance under this constraint. Usually, only a subset of all variables has large weights and therefore contributes significantly to the variance of each PC. To better identify these variables, the loadings of the variables in a given PC can be considered. The loading of a variable is its correlation with the PC. The variables with high loadings help identify the dimension the PC is capturing, but this usually requires some degree of interpretation. In other words, one assigns a meaning or property to a PC based on the variables that show a high loading. For example, one may decide that a particular PC mostly seems to capture the size of a class.
To further ease the interpretation of the PCs, we consider the rotated components. This is a technique in which the PCs are subjected to an orthogonal rotation in the sample space. As a result, the rotated components show a clearer pattern of loadings, where each variable has either a very low or a very high loading, thus showing either a negligible or a significant impact on the PC. Several strategies exist to perform such a rotation, the varimax rotation being the one most frequently used in the literature.
For a set of n measures there are at most n orthogonal PCs, which are calculated in decreasing order of the variance they explain in the data set. Associated with each PC is its eigenvalue, which is a measure of the variance of the PC. Usually, only a subset of the PCs is selected for further analysis (interpretation, rotated components, etc.). A typical stopping rule, which we also use in our studies, is that only PCs whose eigenvalue is larger than 1.0 are selected. See [71] for more details on PCA and rotated components.
We do not consider the PCs themselves for use as independent variables in the prediction model. Although this is often done with ordinary least-squares
(OLS) regression, in the context of logistic regression this has been shown to result in models with a suboptimal goodness-of-fit (when compared to models built using the measures directly), and it is not current practice. In addition, principal components are always specific to the particular data set on which they have been computed, and may not be representative of other data sets. A model built using principal components is therefore not likely to be applicable across different systems. Still, it is interesting to interpret the results of the regression analyses (see the next sections) in the light of the PCA results, e.g., to determine from which PCs the measures found to be significant stem. This shows which dimensions are the main drivers of fault-proneness, and may help explain why this is the case. For replicated studies, it is interesting to see which dimensions are also observable in the PCA results of other systems, and to find possible explanations for differences in the results, e.g., a different design methodology. We would expect to see consistent trends across systems for the strong PCs that explain a large percentage of the data set variance and can be readily interpreted. From such observations, we can also derive recommendations regarding which measures appear to be redundant and need not be collected, without losing a significant amount of design information.
As an example of an application of PCA, and of the types of conclusions we can draw from it, Table III shows the rotated components obtained from the cohesion measures applied to the system in [33]. The measures mostly capture two orthogonal dimensions (the rotated components PC1 and PC2) in the sample space formed by all measures. Those two dimensions capture 81.5% of the variance in the data set. Analyzing the definitions of the measures with high loadings in PC1 and PC2 yields the following interpretations of the cohesion dimensions:

TABLE III
ROTATED COMPONENTS FOR COHESION MEASURES (FROM [33])

                 PC1       PC2
  Eigenvalue:    4.440     3.711
  Percent:       44.398    37.108
  CumPercent:    44.398    81.506
  LCOM1          0.084     0.980
  LCOM2          0.041     0.983
  LCOM3          -0.218    0.929
  LCOM4          -0.604    0.224
  LCOM5          -0.878    0.057
  Coh            0.872     -0.113
  Co             0.820     0.139
  LCC            0.869     0.320
  TCC            0.945     0.132
  ICH            0.148     0.927
• PC1: LCOM5, Coh, Co, LCC, and TCC are all normalized cohesion measures, i.e., measures that have a notion of maximum cohesion.
• PC2: LCOM1-LCOM3 and ICH are nonnormalized cohesion measures, which have no upper bound.
As discussed in [18], many of the cohesion measures are based on similar ideas and principles. Differences in the definitions are often intended to remedy shortcomings of other measures (e.g., the behavior of a measure in some pathological cases). The results show that these variations, based on careful theoretical consideration, do not make a substantial difference in practice. By and large, the measures investigated here capture either normalized or nonnormalized cohesion, and measures of the latter category have been shown to be related to the size of the class in past studies ([29,30]).
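To make the PCA step concrete, here is a minimal Python sketch (not the procedure actually used in the studies above): it computes the principal components of the correlation matrix of a set of design measures, keeps those with eigenvalues above 1.0, and applies a generic varimax rotation so that a loadings table in the style of Table III can be produced. All function names are ours.

    import numpy as np
    import pandas as pd

    def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
        # Orthogonally rotate a (measures x components) loading matrix.
        p, k = loadings.shape
        rotation = np.eye(k)
        last_objective = 0.0
        for _ in range(max_iter):
            rotated = loadings @ rotation
            target = rotated ** 3 - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))
            u, s, vt = np.linalg.svd(loadings.T @ target)
            rotation = u @ vt
            objective = s.sum()
            if objective - last_objective < tol:
                break
            last_objective = objective
        return loadings @ rotation

    def rotated_components(measures: pd.DataFrame) -> pd.DataFrame:
        # Principal components of the correlation matrix, keeping eigenvalues > 1.0.
        corr = np.corrcoef(measures.values, rowvar=False)
        eigenvalues, eigenvectors = np.linalg.eigh(corr)
        order = np.argsort(eigenvalues)[::-1]
        eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
        keep = eigenvalues > 1.0                      # typical stopping rule
        loadings = eigenvectors[:, keep] * np.sqrt(eigenvalues[keep])
        rotated = varimax(loadings)
        return pd.DataFrame(rotated, index=measures.columns,
                            columns=[f"PC{i + 1}" for i in range(rotated.shape[1])])

The returned data frame has one row per measure and one column per retained rotated component, with entries playing the role of the loadings shown in Table III.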
3.3 Univariate Regression Analysis
Univariate regression is performed for each individual IV against the DV, in order to determine whether the measure is a potentially useful predictor. Univariate regression analysis is conducted for two purposes:
• to test the hypotheses that the IVs have a significant statistical relationship with the DV, and
• to screen out measures that are not significantly related to the DV and are not likely to be significant predictors in multivariate models.
Only measures significant at some significance level, say α = 0.25 [74], should be considered for the subsequent multivariate analysis. Note that an IV may be significantly related to the DV for various reasons: it may capture a causal relationship or it may be the result of a confounding effect with another IV. Because of the repeated testing taking place during univariate analysis, there is a nonnegligible chance of obtaining a spurious relationship. Although a number of techniques exist to deal with repeated testing (e.g., Bonferroni [61]), this is not an issue here, as we are not trying to demonstrate or provide evidence for a causal relationship. Our goal is to preselect a number of potential predictors for multivariate analysis, which will tell us in turn which IVs seem to be useful predictors. Causality cannot really be demonstrated in this context; only a careful definition of the design measures used as IVs, along with plausible mechanisms to explain causality, can be provided. The choice of modeling technique for univariate analysis (and also for the multivariate analysis that follows) is mostly driven by the nature of the DV: its distribution, measurement scale, and whether it is continuous or discrete. Examples from the literature include:
• Logistic regression to predict the likelihood of an event occurring, e.g., fault detection [29,48].
• Ordinary least-squares regression, often combined with monotonic transformations (logarithmic, quadratic) of the IVs and/or DV, to predict interval/ratio scale DVs [33,43].
• Negative binomial regression (of which Poisson regression is a special case) to predict discrete DVs that have low averages and whose distribution is skewed to the right [75].
• Parametric and nonparametric measures of correlation (Spearman ρ, Pearson r) are sometimes used. However, they can only provide a rough picture, they do not account for nonlinearities, and they are not comparable to the multivariate modeling techniques we present below.
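As an illustration of the screening step described above, the following sketch fits one logistic regression per measure against a binary fault indicator and retains the measures significant at α = 0.25. It assumes the statsmodels package and is our own simplified helper, not code from the cited studies.

    import pandas as pd
    import statsmodels.api as sm

    def univariate_screening(measures: pd.DataFrame, faulty: pd.Series, alpha: float = 0.25):
        # Fit one univariate logistic regression per design measure and keep the
        # measures whose coefficient is significant at the given level.
        retained = []
        for name in measures.columns:
            covariate = sm.add_constant(measures[[name]])
            model = sm.Logit(faulty, covariate).fit(disp=0)
            p_value = model.pvalues[name]
            if p_value < alpha:
                retained.append((name, p_value))
        return sorted(retained, key=lambda item: item[1])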
3.3.1 Univariate Outliers
Outliers are data points located in an empty part of the sample space [76]. The inclusion or exclusion of outliers can have a large influence on the analysis results. It is important that the conclusions drawn are not solely dependent on a few outlying observations; otherwise, the resulting prediction models are unstable and cannot be reliably used. When comparing results across replicated studies, it is particularly crucial to ensure that differences in observed trends are not due to singular, outlying data points. For this reason it is necessary to identify outliers, test their influence, and possibly remove them to obtain stable results. For univariate analysis, all observations must be checked for outlying values in the distribution of any one of the measures used in the study. The influence of each identified observation is then tested: an outlier is influential if the significance of the relationship between the measure and the DV depends on the absence or presence of the outlier. Such influential outliers should not be considered in the univariate analysis results. Outliers may be detected from scatterplots, and their influence systematically tested. For many regression techniques, specific diagnostics for automatically identifying outliers have been proposed, e.g., Cook's distance for OLS [76] and the Pregibon beta for logistic regression [77].
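A small sketch of this check for OLS, using statsmodels' Cook's distance diagnostic; the 4/n cutoff is a common rule of thumb that we assume here, and the observations it flags would still have to be tested for influence as described above:

    import numpy as np
    import statsmodels.api as sm

    def candidate_outliers(x, y):
        # Univariate OLS fit followed by Cook's distance for each observation.
        model = sm.OLS(y, sm.add_constant(np.asarray(x, dtype=float))).fit()
        cooks_distance, _ = model.get_influence().cooks_distance
        cutoff = 4.0 / len(cooks_distance)   # rule-of-thumb threshold (assumption)
        return np.where(cooks_distance > cutoff)[0]

    # A relationship is reported only if it remains significant after refitting
    # the model without the observations flagged here.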
3.4 Prediction Model Construction
Multivariate regression is performed to build prediction models of the DV. This analysis is conducted to determine how well we can predict the DV when the design measures are used in combination. For the selection of measures to be used in the model, the following strategy must be employed:
• Select an appropriate number of independent variables for the model. Overfitting a model increases the standard error of the model's predictions, making the model more dependent on the data set it is based on and thus less generalizable.
• Reduce multicollinearity [78], i.e., independent variables that are highly correlated. High multicollinearity results in large standard errors for the regression coefficient estimates and may affect the predictive power of the model. It also makes the impact of one IV on the DV difficult to estimate from the model.
3.4.1 Stepwise Selection Process
Often, the validation studies described here are exploratory in nature; that is, we do not have a strong theory that tells us which variables should be included in the prediction model. In this situation, a stepwise selection process, in which prediction models are built in a stepwise manner and each step consists of one variable entering or leaving the model, can be used. The two major stepwise selection processes used for regression model fitting are forward selection and backward elimination [74]. The general forward selection procedure starts with a model that includes the intercept only. Based on certain statistical criteria, variables are selected one at a time for inclusion in the model, until a stopping criterion is fulfilled. Similarly, the general backward elimination procedure starts with a model that includes all independent variables. Variables are selected one at a time to be deleted from the model, until a stopping criterion is fulfilled. When investigating a large number of independent variables, the initial model in a backward selection process would contain too many variables and could not be interpreted in a meaningful way. In that case, we use a stepwise forward selection procedure to build the prediction models. In each step, all variables not already in the model are tested: the most significant variable is selected for inclusion in the model. If this causes a variable already in the model to become insignificant (at some significance level α_Exit), it is deleted from the model. The process stops when adding the best variable no longer improves the model significantly (at some significance level α_Enter < α_Exit). A procedure commonly used to reduce the number of independent variables, so as to make the use of a backward selection process possible, is to preselect variables using the results from principal component analysis: the highest-loading variables for each principal component are selected, and the backward selection process then runs on this reduced set of variables. In our studies [29,30], within the context of logistic regression, this strategy showed the goodness-of-fit of the models thus
obtained to be poorer than that of the models obtained from the forward stepwise procedure, hence favoring the use of the latter. The choice of significance levels for measures to enter and exit the model is an indirect means of controlling the number of variables in the final model. A rule of thumb for the number of covariates is to have at least 10 data points per independent variable.
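The following Python sketch illustrates the forward stepwise procedure for a binary DV and logistic regression. It is a simplification (Wald p-values, no re-entry of dropped variables, hypothetical default thresholds), not the stepwise implementation used in the studies cited above:

    import pandas as pd
    import statsmodels.api as sm

    def forward_stepwise_logit(measures: pd.DataFrame, faulty: pd.Series,
                               alpha_enter: float = 0.05, alpha_exit: float = 0.10):
        selected, remaining = [], list(measures.columns)
        while remaining:
            # Find the most significant candidate among the variables not yet in the model.
            best_name, best_p = None, None
            for name in remaining:
                fit = sm.Logit(faulty, sm.add_constant(measures[selected + [name]])).fit(disp=0)
                p_value = fit.pvalues[name]
                if best_p is None or p_value < best_p:
                    best_name, best_p = name, p_value
            if best_p is None or best_p >= alpha_enter:
                break   # adding the best variable no longer improves the model significantly
            selected.append(best_name)
            remaining.remove(best_name)
            # Drop variables that became insignificant after the new variable entered.
            fit = sm.Logit(faulty, sm.add_constant(measures[selected])).fit(disp=0)
            selected = [name for name in selected if fit.pvalues[name] < alpha_exit]
        return selected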
3.4.1.1 Criticism of stepwise selection heuristics. Stepwise selection procedures have been criticized for two main reasons: (1) the inclusion of noise variables in the presence of multicollinearity (clearly an issue with our design measures), and (2) the number of variables selected, which is a function of the sample size and is often too large. This casts doubt on the trustworthiness of a model built in such a fashion. In [36], the authors state that "variables selected through such a procedure cannot be construed as the best object-oriented metrics, nor even as good predictors of the DV." However, many IVs can typically be replaced by other related IVs (i.e., confounded measures, belonging to the same principal component in PCA) without a significant loss of fit. In addition, our studies show that trends between design measures and system quality frequently vary across systems [30,31], and a prediction model built from one system is likely to be representative of only a small number of systems developed in the same environment. The particular measures selected for a model are not of much general importance. Therefore, the goal of building multivariate models using stepwise heuristics is not to determine what the "best" metrics are, or whether they are the only or best predictors. The most we can hope for is that the properties/dimensions (i.e., principal components) captured by the measures are relevant, are frequently represented in the predictive models, and can explain most of the variance in the DV. In short, our aim here is only to obtain an optimal predictive model, as defined in Section 3.5. Stepwise variable selection is a standard technique frequently used in the literature. It is certainly true that the output of such a stepwise selection heuristic cannot be blindly relied upon. It is necessary to perform a number of sanity checks on the resulting model: (1) the number of covariates is reasonable considering the size of the data set, (2) the degree of collinearity among covariates is acceptable [78], and (3) no outlier is overinfluential with respect to the selection of covariates. If violations of these principles are detected, they can be remedied by (1) adjusting the inclusion/exclusion thresholds, (2) removing covariates, or (3) dismissing data points. We think the results obtained from a model that passes these checks, and that also performs reasonably well in the subsequent model evaluation (see Section 3.5), are trustworthy at least in that they indicate the order of magnitude of the benefits we can expect from a prediction model built in the same fashion in any given environment.
3.4.2 Capturing Nonlinear or Local Trends and Interactions
When analyzing and modeling the relationship between the IVs and the DV, one of the main issues is that the relationships between these variables can be complex (nonlinear) and can involve interaction effects (the effect of one variable depends on the value of one or more other variables). Because we currently know little about what to expect, and because such relationships are also expected to vary from one organization and family of systems to another, identifying nonlinear relationships and interaction effects is usually a rather complex, exploratory process. Data mining techniques such as CART regression tree analysis [33,79] make no assumption about the functional form of the relationship between the IVs and the DV. In addition, the tree construction process naturally explores interaction effects. Another recent technique, MARS (multivariate adaptive regression splines) [62], attempts to approximate complex relationships by a series of linear regressions on different intervals of the independent variable ranges and automatically searches for interactions. Both techniques can be combined with traditional regression modeling [33].
3.4.2.1 Hybrid models with regression trees. By adapting some of the recommendations in [80], traditional regression analysis and regression trees can be combined into a hybrid model as follows:
• Run a regression tree analysis, with some restriction on the minimum number of observations in each terminal node (in order to ensure that the samples will be large enough for the next steps to be useful).
• Add binary dummy variables to the data set by assigning observations to the terminal nodes of the regression tree, i.e., assign 1 to a dummy variable for the observations falling in its corresponding terminal node. There are as many dummy variables as terminal nodes in the tree.
• Together with the IVs based on design measures, the dummy variables can be used as additional covariates in the stepwise regression.
This procedure takes advantage of the modeling power of regression analysis while still exploiting the specific interaction structures that regression trees can uncover and model. As shown in [33], such hybrid models may significantly improve the predictive power of multivariate models.
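A sketch of this hybrid construction using scikit-learn's regression trees; the minimum leaf size of 10 and the helper name are our own choices, not values taken from [33] or [80]:

    import pandas as pd
    from sklearn.tree import DecisionTreeRegressor

    def add_terminal_node_dummies(measures: pd.DataFrame, dv, min_samples_leaf: int = 10):
        # Fit a regression tree on the design measures, then add one binary dummy
        # variable per terminal node, set to 1 for observations falling in that node.
        tree = DecisionTreeRegressor(min_samples_leaf=min_samples_leaf, random_state=0)
        tree.fit(measures, dv)
        node_ids = tree.apply(measures)   # terminal node index of each observation
        dummies = pd.get_dummies(pd.Series(node_ids, index=measures.index), prefix="node")
        # The dummies are then offered, together with the original measures,
        # as candidate covariates to the stepwise regression.
        return pd.concat([measures, dummies], axis=1)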
3.4.2.2 Multivariate adaptive regression splines (MARS). As previously discussed, building quality models based on structural design measures is an exploratory process. MARS is a novel statistical method that has been shown to be useful in helping the specification of appropriate regression models in an exploratory context. This technique is presented in [62] and is supported by a
recent tool developed by Salford Systems.^ At a high level, MARS attempts to approximate complex relationships by a series of linear regressions on different intervals of the independent variable ranges (i.e., subregions of the independent variable space). It is very flexible, as it can adapt to a wide variety of functional forms, and is thus well suited to exploratory data analysis. Search algorithms find the appropriate intervals on which to run independent linear regressions for each independent variable, and identify interactions while avoiding overfitting the data. Although these algorithms are complex and outside the scope of this chapter, MARS is based on a number of simple principles. MARS identifies optimal basis functions based on the IVs, and these basis functions are then used as candidate covariates to be included in the regression model. When building, for example, a classification model (such as a fault-proneness model), we use MARS in two steps: (1) use the MARS algorithms to identify relevant basis functions, and (2) refit the model with logistic regression, using the basis functions as covariates [33]. Our experience has shown that MARS is helpful in building more accurate predictive models [31,33].
^Available at www.salford-systems.com.
3.4.3 Multivariate Outliers
Just as univariate analysis results are susceptible to univariate outliers, multivariate models can be strongly influenced by the absence or presence of individual observations. Our set of n independent variables spans an n-dimensional sample space. To identify multivariate outliers in this sample space, we calculate, for each data point, the Mahalanobis Jackknife distance [81] from the sample space centroid. The Mahalanobis distance is a measure that takes correlations between the measures into account. Multivariate outliers are data points with a large distance from the sample space centroid. Again, a multivariate outlier may be overinfluential and should therefore be removed if the significance of any of the n variables in the model depends on the absence or presence of the outlier. A subtle point arises when dismissing an outlier causes one or more covariates in the model resulting from a stepwise selection heuristic to become insignificant. In that case, our strategy is to rerun the stepwise selection heuristic from scratch, excluding the outlier from the beginning. More detailed information on outlier analysis can be found in [76].
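A minimal sketch of this check, using plain Mahalanobis distances from the sample centroid (rather than the jackknifed variant) and a chi-square cutoff chosen as an assumption:

    import numpy as np
    from scipy import stats

    def multivariate_outliers(covariates: np.ndarray, quantile: float = 0.975):
        # Squared Mahalanobis distance of each observation from the centroid,
        # taking correlations between the measures into account.
        centered = covariates - covariates.mean(axis=0)
        inverse_cov = np.linalg.pinv(np.cov(covariates, rowvar=False))
        d_squared = np.einsum("ij,jk,ik->i", centered, inverse_cov, centered)
        cutoff = stats.chi2.ppf(quantile, df=covariates.shape[1])
        return np.where(d_squared > cutoff)[0]

    # Flagged observations are removed only if the significance of a covariate
    # in the model depends on their presence.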
3.4.4 Test for Multicollinearity
Multivariate models should be tested for multicollinearity. In severe cases, multicollinearity results in inflated standard errors for the estimated coefficients,
which renders the predicted values of the model unreliable. The presence of multicollinearity also makes the interpretation of the model difficult, as the impact of individual covariates on the dependent variable can no longer be judged independently of the other covariates. According to [74], the tests for multicollinearity used in least-squares regression are also applicable in the context of logistic regression. They recommend the test suggested by Belsley et al. [78], which is based on the condition number of the correlation matrix of the covariates in the model. This condition number can conveniently be defined in terms of the eigenvalues of the principal components introduced in Section 3.2. Let X_1, ..., X_n be the covariates of our model. We perform a principal component analysis on these variables, and let λ_max be the largest and λ_min the smallest eigenvalue of the principal components. The condition number is then defined as λ = sqrt(λ_max / λ_min). A large condition number (i.e., a large discrepancy between the minimum and maximum eigenvalues) indicates the presence of multicollinearity. A series of experiments showed that the degree of multicollinearity is harmful, and corrective actions should be taken, when the condition number exceeds 30 [78].
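The test is straightforward to implement; the following sketch (our own helper) computes the condition number directly from the eigenvalues of the covariates' correlation matrix:

    import numpy as np

    def condition_number(covariates: np.ndarray) -> float:
        # sqrt of the ratio of the largest to the smallest eigenvalue of the
        # correlation matrix of the model covariates, as defined in the text.
        eigenvalues = np.linalg.eigvalsh(np.corrcoef(covariates, rowvar=False))
        return float(np.sqrt(eigenvalues.max() / eigenvalues.min()))

    # Values above roughly 30 indicate a harmful degree of multicollinearity [78].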
3.4.5 Evaluating Goodness of Fit
The purpose of building multivariate models is to predict the DV as accurately as possible. Different regression techniques provide specific measures of a model's goodness-of-fit, for instance, R² for OLS, or measures based on maximum likelihood estimation for techniques such as logistic regression. While these allow, to some degree, for comparisons of accuracy between studies, such measures are abstract mathematical artifacts that do not illustrate very well the potential benefits of using the prediction model for decision-making. We provide below a quick summary of goodness-of-fit measures that users of prediction models tend to use to evaluate the practicality of a model. There are two main cases that must be dealt with in practice: (1) classification (such as classifying components as fault-prone or not), and (2) predicting a continuous DV on an interval or ratio scale. We will use an example of each category to illustrate practical measures of goodness-of-fit.
3.4.5.1 Classifying fault-proneness. To evaluate the model's goodness-of-fit, we can apply the prediction model to the classes of the data set from which we derived the model.^ A class is classified as fault-prone if its predicted
probability of containing a fault is higher than a certain threshold, p0. Assume we use this prediction to select classes to undergo inspection. Further assume that inspections are 100% effective, i.e., that all faults in a class are found during inspection. We then compare the predicted fault-proneness of classes to their actual fault-proneness, using the following measures of the goodness-of-fit of the prediction model:
• Completeness: Completeness, in this context, is defined as the number of faults in classes classified as fault-prone, divided by the total number of faults in the system. It measures the percentage of faults that would have been found if we had used the prediction model to drive inspections. Low completeness indicates that, despite the use of the classification model, many faults are not detected. These faults would then slip to subsequent development phases, where they are more expensive to correct. We can always increase the completeness of our prediction model by lowering the threshold p0 used to classify classes as fault-prone (π > p0). This causes more classes to be classified as fault-prone, and thus completeness increases. However, the number of classes incorrectly classified as fault-prone also increases. It is therefore important to also consider the correctness of the prediction model.
• Correctness: Correctness is the number of classes correctly classified as fault-prone, divided by the total number of classes classified as fault-prone. Low correctness means that a high percentage of the classes classified as fault-prone do not actually contain a fault. We want correctness to be high, as inspecting classes that do not contain faults is an inefficient use of resources.
These definitions of completeness and correctness have straightforward, practical interpretations. They can be used in other application contexts where a classification model is required. A drawback of these measures is that they depend on a particular classification threshold. The choice of threshold is system-dependent and, to a large degree, arbitrary. To achieve comparability between studies and models, we can, however, employ a consistent strategy for threshold selection, such as using the prior probability (proportion) of fault-prone classes, or selecting the threshold p0 so as to balance the number of actually faulty and predicted fault-prone classes. Plotting the correctness and completeness curves as a function of the selected threshold p0 is also a good, common practice [29], as shown in Fig. 1. As an example, we show in Table IV the fault-proneness classification results from a model (the "linear" logistic regression model) built in [31]. The model identifies 19 out of 144 classes as fault-prone (i.e., 13% of all classes). Of these, 14 are actually faulty (74% correctness), and they contain 82 out of 132 faults (62% completeness).
^This is, of course, an optimistic way to assess a model, which is why the term goodness-of-fit is used, as opposed to predictive power. This issue will be addressed in Section 3.5.
FIG. 1. Correctness/completeness graph (for the "linear" model in [31]).

TABLE IV
FAULT-PRONENESS CLASSIFICATION RESULTS (LINEAR MODEL IN [31])

                                   Predicted
  Actual          π < 0.5             π ≥ 0.5             Σ
  No fault        108                 5                   113
  Fault           17 (50 faults)      14 (82 faults)      31 (132 faults)
  Σ               125                 19                  144
The above figures are based on a cutoff value of π = 0.5 for classifying classes as fault-prone or not fault-prone, and the table only gives a partial picture, as other cutoff values are possible. Figure 1 shows the correctness and completeness values (vertical axis) as a function of the threshold π (horizontal axis). Standard measures of goodness-of-fit used in the context of logistic regression models are sensitivity, specificity, and the area under the receiver-operator curve (ROC) [82]. Sensitivity is the fraction of observed positive-outcome cases correctly classified (i.e., the fraction of faulty classes correctly classified as fault-prone, which is similar to completeness as defined above). Specificity is the fraction of observed negative-outcome cases correctly classified (i.e., the fraction of nonfaulty classes correctly classified as not fault-prone). Calculating sensitivity and specificity also requires the selection of a particular threshold p. The receiver-operator curve is a graph of sensitivity versus 1 − specificity as the threshold p is varied. The area under the ROC is a common measure of the goodness-of-fit of the model: a large area under the ROC indicates that high values for both
sensitivity and specificity can be achieved. The advantage of the area under the ROC is that this measure does not necessitate the selection of a particular threshold. The drawback is that its interpretation (the probability that a randomly selected faulty class has a higher predicted fault-proneness than a randomly selected nonfaulty class) does not translate immediately into the context of a practical application of the model.
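The following sketch computes the classification-oriented measures discussed in this section from a vector of predicted fault probabilities. The threshold handling follows the definitions above; the use of scikit-learn for the area under the ROC is an assumption about tooling, not part of the original procedure:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def classification_fit(predicted_prob, faults, p0: float = 0.5):
        predicted_prob, faults = np.asarray(predicted_prob), np.asarray(faults)
        flagged = predicted_prob >= p0                           # classes classified fault-prone
        faulty = faults > 0
        completeness = faults[flagged].sum() / faults.sum()      # share of faults covered
        correctness = (flagged & faulty).sum() / flagged.sum()   # share of flagged classes that are faulty
        auc = roc_auc_score(faulty, predicted_prob)              # threshold-independent summary
        return completeness, correctness, auc

    # For the linear model of Table IV at p0 = 0.5:
    # completeness = 82/132 (about 62%), correctness = 14/19 (about 74%).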
3.4.5.2 Predicting development effort. We use development effort here as an example of the prediction of continuous, interval/ratio scale DVs. In the area of effort estimation, the most commonly used measures of prediction accuracy are the absolute relative error (ARE) and the magnitude of relative error (MRE) of the effort prediction. If eff is the actual effort (e.g., for a class or system) and eff^ the predicted effort, then ARE = |eff - eff^| and MRE = |eff - eff^| / eff. The percentage (or absolute value in terms of person-hours) by which a predicted effort is on average off is immediately apparent to a practitioner and can be used to decide whether the model can be of any practical help. ARE and MRE measures can readily be used in contexts other than effort estimation.
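A small sketch of these accuracy measures over a set of effort predictions; the mean and median MRE summaries at the end are common practice rather than something prescribed by the text:

    import numpy as np

    def prediction_errors(actual_effort, predicted_effort):
        actual = np.asarray(actual_effort, dtype=float)
        predicted = np.asarray(predicted_effort, dtype=float)
        are = np.abs(actual - predicted)      # ARE: |eff - eff^| per observation
        mre = are / actual                    # MRE: |eff - eff^| / eff per observation
        return are, mre, mre.mean(), np.median(mre)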
3.4.6 The Impact of Design Size
The size of an artifact (e.g., a class design) is a necessary part of any model predicting a property (e.g., fault-proneness) of this artifact. This is mostly justified by the fact that size determines, to some extent, many of its external properties, such as fault-proneness or effort. On the one hand, we want our predictive models to account for size. On the other hand, in many cases, e.g., in the case of fault-proneness models, and for practical reasons, we need them to capture more than size effects. Using again our inspection example, a model that systematically identifies larger classes as more fault-prone would a priori be less useful: the predicted fault-prone classes are likely to cover a larger part of the system, and the model could not help focus inspection and testing efforts very well. In our studies [29,30,33], we compare (1) models built from size measures only and (2) models allowing all measures (size, coupling, cohesion, inheritance) to enter the model. With these models, we seek answers to the following questions:
• Are coupling, cohesion, and inheritance (CCI) measures complementary predictors of the DV, as compared to size measures alone?
• How much more accurate is a model that includes the more difficult to collect coupling, cohesion, and inheritance measures?^ If it is not significantly better, then the additional effort of calculating these more expensive measures, instead of some easily collected size measures, would not be justified.
^Such measures usually require the use of complex static analyzers.
When the DV is class fault-proneness and the measures are collected from design information, the results so far [30,31] have shown that:
• Models including CCI measures clearly outperform models based on size measures only. Even though they may be related to size, CCI measures therefore capture information related to fault-proneness that cannot be explained by size alone.
• There is no significant difference between models based on CCI measures only and models based on both CCI and size measures. This indicates that all size aspects that have a bearing on fault-proneness are also accounted for by the set of CCI measures investigated. In other words, the CCI measures are not just complementary to the size measures; they subsume them.
When the DV is effort [33], however, it appears that size accounts for most of the variation in effort, and the more sophisticated CCI measures do not help to substantially improve the model's predictive capability. In the model-building strategy proposed by El Emam et al. [36], a size measure is forced into the predictive model by default. Measures that are confounded by size are not considered for inclusion in the model. This is an alternative strategy, and which one to use depends on your purpose. If you want to build an optimal prediction model and determine which measures are useful predictors, then the procedure we outlined above is fine. If your goal is to demonstrate that a given measure is related to fault-proneness, or any other DV, and that this relationship cannot be explained by size effects, then the procedure in [36] is appropriate.
3.5 Prediction Model Evaluation
We discussed above the notion of goodness-of-fit and practical ways to measure it. However, although such measures are useful for comparing models built on a given data set, they present two limitations:
• They are optimistic, since we must expect the model's predictive accuracy to deteriorate when it is applied to data sets different from the one it was built on.
• They still do not provide information that can be used directly to assess whether a model can be useful in given circumstances.
These two issues are addressed in the next two subsections.
3.5.1 Cross Validation
One of the commonly encountered problems in software engineering is that our data sets are usually of limited size, i.e., a few hundred observations when we are lucky. Dividing the available data into a modeling set and a test set is usually difficult, as it implies that either the test set will be too small to obtain representative and reliable results or the modeling set will be too small to build a refined predictive model. One reasonable compromise is to use a cross-validation procedure. To get an impression of how well the model performs when applied to different data sets, i.e., of its prediction accuracy, a cross-validation should be carried out. Depending on the availability and size of the data set, various cross-validation techniques can be used:
• V-cross-validation [63] is what we used in our studies [29,30,33]. For V-cross-validation, the n data points of each data set are randomly split into V partitions of roughly equal size (n/V). For each partition, we refit the model using all data points not included in the partition, and then apply the resulting model to the data points in the partition. We thus obtain, for all n data points, a predicted probability of fault-proneness (or a predicted development effort).
• Leave-one-out cross-validation, a special case of V-cross-validation where V = n, is used for very small data sets.
• For larger data sets, one can randomly partition the data set into a fit/modeling partition (usually 2/3 of all observations) used to fit the model and a test partition (all remaining observations).
The ideal situation is one where separate data sets, derived from different systems stemming from similar environments, are available. The prediction model is then built from one system and used in turn to make predictions for the other system. This is the most effective demonstration of the practical use of a prediction model. Typically, models are built on past systems and used to predict properties of new systems or their components. System factors may affect the predictive power of a model and, therefore, it is important to validate the model under conditions that resemble its usage conditions as closely as possible. Reference [31] reports on such a study, where the authors introduce a cost-effectiveness model for fault-proneness models. This is described further in the next section.
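A generic sketch of V-cross-validation; fit and predict are placeholders for whatever modeling technique is used (e.g., logistic regression), X and y are assumed to be NumPy arrays, and the default V = 10 is our choice:

    import numpy as np

    def v_cross_validation(X, y, fit, predict, V: int = 10, seed: int = 0):
        # Randomly split the n observations into V partitions of roughly equal size;
        # each partition is used once as test data, with the model refit on the rest.
        n = len(y)
        folds = np.random.default_rng(seed).permutation(n) % V
        predictions = np.empty(n, dtype=float)
        for v in range(V):
            test = folds == v
            model = fit(X[~test], y[~test])
            predictions[test] = predict(model, X[test])
        return predictions    # out-of-sample prediction for every observation

    # Leave-one-out cross-validation corresponds to V = n.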
3.5.2 Cost-Benefit Model for Class Fault-Proneness Prediction
Goodness-of-fit or predictive power does not give the potential users of a model a direct means to assess whether the model can be practically useful to them.
We need to develop cost-benefit models that are based on realistic assumptions and that use parameters that can be either measured or estimated. Although it is difficult to give general rules for building such models in our context, we will use an example to illustrate the principles to follow: how can we determine whether a fault-proneness model would be economically viable if used to drive inspections? The first step is to identify all the parameters on which the model will be based and, at the same time, to list all the assumptions regarding these parameters. Such assumptions are usually necessary to help simplify the cost-benefit model. Some of these assumptions will inevitably be specific to an environment and can very well be unrealistic in others. What we present here is based on a study reported in [31]:
• All classes predicted as fault-prone are inspected.
• Usually, an inspection does not find all faults in a class. We assume an average inspection effectiveness e, 0 < e ≤ 1, where e = 1 means that all faults in inspected classes are detected.
• Faults not discovered during inspection (faults that slipped through, faults in classes not inspected) later cause costs for isolating and fixing them. The average cost of a fault not detected during inspection is fc.
• The cost of inspecting a class is assumed to be proportional to the size of the class.
In general, in order to estimate the benefits of a model, we need a comparison baseline that represents what could be achieved without the use of the model. In our example, we assume a simple model that ranks the classes by their size and selects the n largest classes for inspection. The number n is chosen so that the total size of the selected classes is roughly the same as the total size of the classes selected by the fault-proneness model based on design (size and CCI) measures. This ensures that we compare models for which the investment (the cost of inspections) is the same or similar and can be factored out.
For the specification of the model, we need some additional definitions. Let c_1, ..., c_N denote the N classes in the system. For i = 1, ..., N, let
• f_i be the number of actual faults in class i,
• p_i indicate whether class i is predicted fault-prone by the model, i.e., p_i = 1 if class i is predicted fault-prone and p_i = 0 otherwise, and
• s_i denote the size of class i (measured in terms of the number of methods, although other measures of size are possible).
The cost of inspecting class i is ic · s_i, where ic is the cost of inspecting one size unit.
The next step is to quantify the gains and losses due to using the model. In our example, they are all expressed below in effort units, i.e., the effort saved and the effort incurred, assuming inspections are performed on code.
Gain (effort saved):

    g_m = defects covered and found
    g_m = e · fc · Σ_i (f_i · p_i)

Cost (effort incurred):

    c_m = direct inspection cost + defects not covered + defects that escape
    c_m = ic · Σ_i (s_i · p_i) + fc · Σ_i (f_i · (1 - p_i)) + (1 - e) · fc · Σ_i (f_i · p_i)

In the same way, we express the cost and gains of using the size-ranking model (baseline) to select the n largest classes, so that their cumulative size is equal or close to Σ_i (s_i · p_i), the size of the classes selected by the predictive model.^ For i = 1, ..., N, let p'_i = 1 if class i is among those n largest classes, and p'_i = 0 otherwise:

    g_s = e · fc · Σ_i (f_i · p'_i)
    c_s = ic · Σ_i (s_i · p'_i) + fc · Σ_i (f_i · (1 - p'_i)) + (1 - e) · fc · Σ_i (f_i · p'_i)

We now want to assess the difference in cost and gain when using the fault-proneness model rather than the size-ranking model, which is our comparison baseline:

    Δgain = g_m - g_s = e · fc · (Σ_i (f_i · p_i) - Σ_i (f_i · p'_i))
    Δcost = c_m - c_s = ic · (Σ_i (s_i · p_i) - Σ_i (s_i · p'_i))
                        + fc · (Σ_i (f_i · (1 - p_i)) - Σ_i (f_i · (1 - p'_i)))
                        + (1 - e) · fc · (Σ_i (f_i · p_i) - Σ_i (f_i · p'_i))

We select n, and therefore p', so that Σ_i (s_i · p_i) - Σ_i (s_i · p'_i) ≈ 0 (the inspected classes are of roughly equal total size in both situations). We can thus, as an approximation, drop the first term from the Δcost equation. This also eliminates the inspection cost ic from the equation, and with it the need to make assumptions about the ratio of fc to ic when calculating values of Δcost. With this simplification, we have

    Δcost = fc · (Σ_i (f_i · (1 - p_i)) - Σ_i (f_i · (1 - p'_i))) + (1 - e) · fc · (Σ_i (f_i · p_i) - Σ_i (f_i · p'_i))
^We may not be able to get exactly the same size, but we should be sufficiently close that we can perform the forthcoming simplifications. This is usually not difficult, as the size of an individual class usually represents a small percentage of the system size. In practice, we can therefore make such an approximation and find an adequate set of n largest classes.
By expanding the products and collecting terms, it is easily shown that

    Δcost = -e · fc · (Σ_i (f_i · p_i) - Σ_i (f_i · p'_i)) = -Δgain.

The benefit of using the prediction model to select classes for inspection, instead of selecting them according to their size, is

    benefit = Δgain - Δcost = 2 · Δgain = 2 · e · fc · (Σ_i (f_i · p_i) - Σ_i (f_i · p'_i)).

Thus, the benefit of using the fault-proneness model is proportional to the number of faults it detects above what the size-based model can find (if the inspection effort is about equal to that of the baseline model, as is the case here). The factor 2 arises because the difference between not finding a fault and having to pay fc, and finding a fault and not having to pay fc, is 2fc. Once such a model is developed, the parameters e and fc are estimated in a given environment, and we can determine, for a given range of e values, the benefit (in effort units) of using a fault-proneness model as a function of fc, the cost of a defect slipping through inspections. Based on such information, one may decide whether using a predictive model for driving inspections can bring practical benefits. As an example, Fig. 2 shows the benefit graph for two models, referred to as the "linear" and "MARS" models. The benefit of using the linear or MARS model to select classes for inspection, over a simple size-based selection of classes, is plotted as a function of the number n of classes selected for inspection. The benefit is expressed in multiples of fc, assuming an inspection effectiveness e = 80%. Besides the economical viability of a model, such a figure effectively demonstrates
FIG. 2. Benefit graph for linear (thin line) and MARS (thick line) models from [31]: benefit, in multiples of fc, as a function of the number of classes inspected.
the advantages of one model over the other. It also helps to identify ranges for the number of selected classes, n, at which the model usage has its greatest payoff. To decide whether a prediction model is worth using, some additional costs and constraints may also be accounted for, such as, in our example, the cost of deploying the model: automation and training.
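To illustrate the cost-benefit model, the following sketch computes the benefit derived above, in multiples of fc, for given vectors of per-class fault counts, sizes, and model predictions. The construction of the size-based baseline follows the description in the text; the helper itself is ours:

    import numpy as np

    def inspection_benefit(faults, sizes, predicted_fault_prone, e: float = 0.8) -> float:
        # benefit = 2 * e * fc * (sum_i f_i*p_i - sum_i f_i*p'_i), returned in units of fc.
        faults = np.asarray(faults, dtype=float)
        sizes = np.asarray(sizes, dtype=float)
        p = np.asarray(predicted_fault_prone, dtype=int)
        target_size = sizes[p == 1].sum()
        # Baseline p': select the largest classes until their cumulative size
        # roughly matches the total size of the classes selected by the model.
        p_baseline = np.zeros_like(p)
        cumulative = 0.0
        for i in np.argsort(sizes)[::-1]:
            if cumulative >= target_size:
                break
            p_baseline[i] = 1
            cumulative += sizes[i]
        delta_gain = e * ((faults * p).sum() - (faults * p_baseline).sum())
        return 2.0 * delta_gain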
4. Summary of Results
This section presents a summary of the empirical results reported in the studies of Section 2. It attempts to identify consistent trends across the results reported and discuss inconsistencies when they arise.
4.1 Correlational Studies
We focus here on correlational studies where the dependent variable is related to some measure of fault-proneness. The reason is that this is the only dependent variable for which a large enough number of studies exists to make a cross-examination of results possible.
4.1.1 Univariate Analysis of Fault-Proneness
Tables V to VIII show the results from univariate analysis in studies using fault-proneness or the number of faults as a dependent variable, for size, coupling, cohesion, and inheritance measures, respectively. The table for size measures also includes measures presented as "complexity" measures, which in practice are often strongly correlated with simple size measures. The inheritance measures capture various properties of inheritance, such as its depth or level of overloading. Each column provides the results for one study, each row the results for one measure. The aim is to facilitate the comparison of results where the same measure is investigated in several studies. In each table, the row "Tech." indicates the modeling technique used in each study, which is either univariate logistic regression (denoted LR) or Spearman ρ (denoted rho). For studies that investigate more than one system, the row "System" identifies the name of the system each column pertains to. In the body of each table, the semantics of the entries is as follows:
• ++: the measure has a positive significant relationship at the 0.01 level
• +: the measure has a positive significant relationship at the 0.05 level
• O: the measure has no significant relationship at the 0.05 level
• -: the measure has a negative significant relationship at the 0.05 level
• --: the measure has a negative significant relationship at the 0.01 level
• na: the measure was considered, but showed no or little variation (the measure is not significant and not meaningful in the respective system)
TABLE V
SUMMARY OF UNIVARIATE ANALYSIS RESULTS FOR SIZE MEASURES
[Table V reports, per study, the significance of the size and complexity measures ATTRIB, STATES, EVNT, READS, WRITES, DELS, RWD, LOC, LOC_B, LOC_H, WMC-ss, WMC-1/NMA/NMImp, WMC-CC, NOMA, AMC, Stmts, NAImp, NMpub, NMNpub, NumPara, NAInh, NMInh, TotPrivAtrib, and TotMethod, for the studies [29], [30], [31], [43], [39], [36], [37], [56], and [57], using either univariate logistic regression or Spearman ρ. As summarized in the discussion below, these measures are consistently significant in the expected direction.]
Although it may be desirable to provide more detailed results (e.g., regression coefficients, exact p-values) in the tables, we chose this more compact summary for the following reasons:
• In isolation, the magnitude of a regression coefficient is not meaningful, since it depends on the range and variation of the measure in the data sample.
• These coefficients are not applicable in other environments and are not really useful to report here.
• Some studies do not provide detailed p-values, but only indicate whether they are below a certain threshold.
• It helps contain the size of the tables.
TABLE VI
SUMMARY OF UNIVARIATE ANALYSIS RESULTS FOR COUPLING MEASURES
[Table VI reports, per study, the significance of coupling measures such as CBO, RFC, RFC-1, OCAIC, OCMEC, OMMIC, and OMMEC with respect to fault-proneness; the mixed results it contains are summarized in the discussion below.]
TABLE VII SUMMARY OF UNIVARIATE ANALYSIS RESULTS FOR COHESION MEASURES
Tech. MHF AHF LCOMl- -[7] LC0M2--[3] LC0M3 LCOM4 LC0M5 Coh Co LCC TCC ICH
28
29
30
rho O 0
LR
LR
0 0
++ + +
+ 0 0
—
o o o
++
36 LR
37 LR
LR
0 0 0
o -1-
0
++
To further contain the size of the tables, we made a number of simplifications when presenting the results from several studies:
• In [29] and [30], coupling to library classes and coupling to nonlibrary classes were measured separately. We show here the results for coupling to nonlibrary classes. The results for coupling to library classes are mostly consistent, except that some coupling mechanisms occur less frequently there, resulting in more measures being "not applicable."
• In [36,37,46,48], univariate analysis controlled for size (see the subsequent discussion of "confounding effects") by including both a size measure and a design measure in the regression equation. We report here the significance of the design measure only.
• In [57], results for different types of faults (faults related and not related to OO features) are reported; we show here the results for the correlation against "all faults," which should be consistent with the nature of faults accounted for in the other studies.
• In [28], three dependent variables are considered, and we show here the results for defect density only. The results for the other DVs differ only marginally.
Tables V to VIII are sparsely populated: most measures have been used in the context of only one or two studies. The only set of measures that has received wider attention is the six measures by Chidamber and Kemerer [3]. This
[Table VIII: Summary of univariate analysis results for inheritance measures. The entries of this table could not be recovered from the source text.]
profusion of similar but different measures is of course a problem if we want to converge toward a stable body of knowledge. However, it was unavoidable in recent years, as the research was at an exploratory stage. The focus should now be on further investigating the use of measures that have shown significance in at least one study.
Measures of complexity and size are consistently associated with fault-proneness or the number of faults, in the expected direction: the larger or more complex the class, the more fault-prone. For coupling measures, we find a mixed picture:
• CBO is significant in only 3 out of 10 instances.
• RFC is significant in 6 out of 8 instances (the two insignificant cases are due to controlling for size). This is not surprising, as RFC has been shown to be a combination of size and import coupling [19].
• RFC-1, OCAIC, OCMEC, OMMIC, and OMMEC are mostly significant in the expected direction. Therefore, coupling due to parameter typing and method invocation is probably worth investigating further in the future.
Cohesion measures are rarely investigated empirically. An explanation may be that these measures are difficult to obtain from static analysis, as they require a detailed analysis of the usage of class attributes by methods. The results show that, overall, cohesion measures appear to have no significant relationship to fault-proneness, in particular normalized cohesion measures that have a notion of maximum cohesion [18].
For measures related to inheritance, the results related to the depth of a class (DIT) and the number of children are inconclusive. The use of inheritance mechanisms can increase the fault-proneness of deeper classes, decrease it, or have no effect on fault-proneness at all.
Overall, a number of size and coupling measures consistently show a significant relationship in the expected direction. Only for the inheritance-related measures DIT and NOC do we find entirely inconsistent trends, indicating that the use of inheritance has effects on fault-proneness that depend on other factors such as design methodology and experience [30]. In general, even assuming consistent data analysis procedures, we must expect some inconsistency in results across studies. This is due to the nature of the data sets, different programming practices causing certain measures to display little variance, or, in some cases, different interpretations and implementations of the same measures (e.g., LCOM and CBO).
4.1.1.1 Confounding effects. In [36], the authors discuss the notion of confounding effects of design measures with size.
The idea is that a given design measure (e.g., an import coupling measure) can show a significant relationship to, say, both class fault-proneness and class size. Class size is also an indicator of fault-proneness. The question then is whether the relationship between the design measure and fault-proneness is due to the measure's association with size, or to a causal relationship between the design property and fault-proneness. To test this possible confounding effect, the authors fitted models including both a measure of size and a design measure, and investigated the significance of the design measure in this model. When controlling for size, the significance of some measures, in particular import coupling measures, drops, indicating that their relationship to fault-proneness is due to these measures also being associated with size. This may be expected, as larger classes are likely to import more services, as has been shown in a number of previous papers [29,30]. Export coupling measures are shown not to be affected by this. The authors then concluded that these results cast doubt on empirical validity claims made in the past, as those empirical results were possibly biased by the confounding effect with size. However, it is interesting to note that controlling for size does not systematically "invalidate" measures in these studies. For example, OCMIC is found insignificant after controlling for size in [37], but significant after controlling for size in [48]. Furthermore, in the next section, where multivariate models are reported to predict fault-proneness, a number of studies have shown that models based on both coupling and size perform significantly better than size models alone [29,30]. However, again, we must expect variations across systems, and another important point to consider is whether one's objective is to build the most accurate prediction model possible or to demonstrate a causal relationship. In some circumstances, coupling measures may not have a main effect on, say, fault-proneness, but an interaction effect with size.
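A hedged sketch of how such a confounding check might be set up is given below: a logistic regression containing both a size measure and a design measure, where one inspects whether the design measure remains significant once size is in the model. The data and the variable names (loc, ocmic, faulty) are purely illustrative and are not taken from [36] or any of the other studies.

```python
# Hypothetical sketch of the "confounding with size" check described above.
import pandas as pd
import statsmodels.api as sm

data = pd.DataFrame({
    "loc":    [120, 340, 80, 510, 95, 410, 150, 150, 620, 280],  # class size
    "ocmic":  [2, 6, 1, 9, 2, 7, 3, 3, 11, 4],                    # a design (coupling) measure
    "faulty": [0, 1, 0, 1, 0, 0, 1, 0, 1, 1],
})

size_only = sm.Logit(data["faulty"], sm.add_constant(data[["loc"]])).fit(disp=False)
combined  = sm.Logit(data["faulty"], sm.add_constant(data[["loc", "ocmic"]])).fit(disp=False)

# If this p-value is no longer small, the measure's univariate relationship to
# fault-proneness may be explained by its association with size.
print("p-value of the design measure, controlling for size:", combined.pvalues["ocmic"])
print("Likelihood-ratio statistic vs. the size-only model:", 2 * (combined.llf - size_only.llf))
```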
4.1.1.2 OO measures and thresholds. One recurring suggestion is that product measures can be used to build simple quality benchmarks based on thresholds: if a product measure exceeds a certain threshold (e.g., size measured as the number of public methods), the class or module is either rejected and must be redesigned, or at least flagged as "critical." In [35] and [47], the authors conduct a refined univariate analysis to investigate whether this assumption is supported by empirical data. To do this, they modified the regular univariate logistic regression models to force the inclusion of thresholds and compared those models with typical logistic regression models. In each case, the authors found no significant difference in goodness-of-fit between the threshold models and the simpler no-threshold models. This result is rather intuitive, as it is difficult to imagine why a threshold effect would exist between, for example, size measures and fault-proneness: it would imply a sudden, steep increase in fault-proneness above a certain size value, something that would be difficult to explain. Note that none of the other empirical studies reported here assumes the existence of thresholds.
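The following sketch shows one simplified way such a threshold effect could be probed; it is not the exact formulation used in [35] and [47], merely an illustration. A standard univariate logistic regression is compared with a model that adds an indicator for exceeding a candidate cutoff, using a likelihood ratio test. The data, the measure name, and the cutoff are invented.

```python
# Hypothetical, simplified probe for a threshold effect (not the exact models of [35,47]).
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

data = pd.DataFrame({
    "nm_pub": [2, 15, 6, 22, 3, 18, 8, 8, 27, 11],   # e.g., number of public methods
    "faulty": [0, 1, 0, 1, 0, 0, 1, 0, 1, 1],
})

plain = sm.Logit(data["faulty"], sm.add_constant(data[["nm_pub"]])).fit(disp=False)

data["above_cutoff"] = (data["nm_pub"] > 10).astype(float)        # candidate threshold
threshold = sm.Logit(data["faulty"],
                     sm.add_constant(data[["nm_pub", "above_cutoff"]])).fit(disp=False)

# Likelihood ratio test: does forcing the threshold term improve the fit?
lr_stat = 2 * (threshold.llf - plain.llf)
print("LR statistic:", lr_stat, "p-value:", chi2.sf(lr_stat, df=1))
```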
4.1.2 Multivariate Prediction Models
This section reports on what are possibly the results of highest practical importance: the prediction models, e.g., for fault detection probability. Those models are usually multivariate models in the sense that they integrate the effect of more than one structural measure. As we will see, many different measures are used in the various multivariate models, and these measures do not necessarily all capture different aspects of a class. Therefore, we first identify the common dimensions captured by structural measures, and then investigate whether any of these dimensions are predominantly represented in the multivariate models.
4.1.2.1 Principal component analysis. As introduced in Section 3.2, PCA is a technique aimed, in our context, at identifying the dimensions captured by structural measures. For example, does a coupling measure really capture what it is intended to, or is it part of the dimension capturing the size of a class? We performed PCA for a large set of measures in the context of four different data sets [29-31,33]. Although there are differences from system to system in the dimensions identified, and in how measures are allocated to dimensions, some dimensions reoccurred in two, three, or all studies, and we summarize them here as follows:
1. Import coupling (through method invocation). This dimension includes measures such as MPC and OMMIC, which count invocations of methods of other classes. Note that it is important to distinguish between coupling to library classes and coupling to nonlibrary classes, which tend to be orthogonal dimensions. However, this distinction was made in only two studies, so we do not separate them here.
2. Class size, measured in terms of the number of methods (NMImp, WMC), method parameters (NumPar, etc.), class attributes, and executable/declaration statements.
3. Normalized cohesion measures (TCC, LCC, etc.), i.e., cohesion measures with an upper bound that represents maximal cohesiveness of methods and attributes.
4. Export coupling to other classes: the degree to which a class is used by other classes, be it as class parameter, class attribute, or via method invocation.
5. Depth of the inheritance tree below a class, i.e., the degree to which a class is extended (NOC, CLD, etc.).
6. Depth of a class in the inheritance tree, i.e., the number of ancestor classes (DIT, NOP, etc.).
7. Import coupling from ancestor classes. This is the degree to which the added methods of a derived class make use of inherited methods (AMMIC, IH-ICP, ...).
8. Inherited size. This is the number of attributes/methods a class inherits (NMInh, NAInh, etc.).
In the following discussion, we assign each measure to one of the above dimensions, in order to abstract from the individual measure and focus on the concept it captures. In cases where, based on the empirical results across the four studies, a measure cannot be uniquely assigned to a particular dimension, we assign it to the dimension whose interpretation best matches the definition of the measure.
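To make the PCA step concrete, here is a small hypothetical sketch: a handful of correlated class-level measures is generated, standardized, and decomposed, and the loadings are inspected to see which measures group into the same dimension. The measure names are borrowed from the text, but the data are synthetic, and the original studies may additionally have applied a rotation (e.g., varimax) that is omitted here.

```python
# Hypothetical PCA sketch on synthetic class-level measurement data.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 60
size = rng.poisson(20, n)                       # underlying "class size" dimension
imp  = 0.4 * size + rng.poisson(3, n)           # import coupling, partly driven by size
exp_ = rng.poisson(4, n)                        # export coupling, largely independent

measures = pd.DataFrame({
    "NMImp": size,
    "Stmts": size * 8 + rng.poisson(10, n),
    "OMMIC": imp,
    "MPC":   imp + rng.poisson(2, n),
    "OCMEC": exp_,
    "OCAEC": exp_ + rng.poisson(1, n),
})

pca = PCA().fit(StandardScaler().fit_transform(measures))
loadings = pd.DataFrame(pca.components_.T, index=measures.columns,
                        columns=[f"PC{i + 1}" for i in range(measures.shape[1])])
print(pca.explained_variance_ratio_.round(2))
print(loadings.round(2))   # measures loading on the same PC capture the same dimension
```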
4.1.2.2 Overview of multivariate models. Tables IX and X summarize the multivariate prediction models for fault-proneness or the number of faults (Table IX) and effort (Table X) as dependent variables. Each row provides the details for one prediction model. The first three columns indicate the literature source, the modeling technique, and the procedure used to arrive at the model. The following eight columns list the covariates of the model, each column representing one of the above structural dimensions; each covariate is listed in the column of the dimension it was assigned to. Column T (total) reports the total number of covariates in the model. The last three columns reproduce whatever measures of goodness-of-fit of the model were reported, what type of cross-validation was performed, if any, and how well the model performed in cross-validation (i.e., its predictive power). The tables contain 13 fault-proneness models and 14 effort models. From the tables, we can draw a number of observations.
4.1.2.3 Design measurement dimensions represented in the models. From the 13 fault-proneness models, size measures are represented in 10 models, export coupling in 9, import coupling in 7, and depth of class in 6. The other inheritance dimensions are represented once or twice, while normalized cohesion is not represented at all. For the 14 effort models, size is a contributor to all of them, import coupling and export coupling are represented in 6 models each, inheritance-based size and import coupling in 4 models each, inheritance depth above/below the class in 3 models each, and normalized cohesion in two. It should be noted that for the above models, authors typically investigated only a subset of measures, in which some dimensions were not represented at all or were deliberately left out (e.g., when building models based on size measures only). In those instances, the fact that a given dimension is not included in the model does not imply that it is not useful.
[Table IX: Summary of the multivariate fault-proneness prediction models (source, modeling technique, model-building procedure, covariates grouped by the eight structural dimensions above, total number of covariates, goodness-of-fit, type of cross-validation, and predictive power). Table X: The corresponding summary for the effort prediction models. The entries of both tables could not be recovered from the source text.]
That notwithstanding, the results seem to indicate that size, import coupling, and export coupling are the dimensions most likely to affect the DV and should always be considered when building models. Dimensions concerning inheritance sometimes play a role (especially in the effort models) and are worth trying out, whereas normalized cohesion measures appear not to be effective.
4.1.2.4 Goodness-of-fit achieved by fault-proneness models.
The LR fault-proneness models typically achieve R² values from 0.42 to 0.56, which is fairly high (unlike the R² known from OLS, the R² for maximum likelihood estimation techniques such as LR [74] seldom approaches 1; from our experience, values above 0.5 usually indicate a very good fit). The high goodness-of-fit is also visible in the correctness and completeness values, which are typically well above 80%. Given that these models stem from different environments (with respect to overall system/project size, application domain, programming languages, developers, etc.), these results suggest that highly accurate fault-proneness models based on design measures can likely be built in most development environments.
Impact of size. In [29] and [30], models based on size measures only and models based on design measures were assessed, to see whether the latter help to better predict fault-proneness over what is explained by size alone. This assumption is confirmed, as the size-only models had a significantly lower fit (R² < 0.16, completeness and correctness below 70%) than the models including design measures.
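For readers unfamiliar with the correctness and completeness figures quoted here, the sketch below shows the simple class-level version of these two statistics, computed from predicted fault-proneness probabilities and a classification cutoff. Note that some studies compute completeness over faults rather than faulty classes; the data and the cutoff below are invented.

```python
# Hypothetical sketch: class-level correctness (precision) and completeness (recall)
# of a fault-proneness classifier, given predicted probabilities and a cutoff.
import numpy as np

def correctness_completeness(y_true, prob, cutoff=0.5):
    predicted_fp = prob >= cutoff          # classes predicted fault-prone
    actual_fp = y_true == 1                # classes actually faulty
    tp = np.sum(predicted_fp & actual_fp)
    correctness = tp / predicted_fp.sum() if predicted_fp.any() else 0.0
    completeness = tp / actual_fp.sum() if actual_fp.any() else 0.0
    return correctness, completeness

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
prob   = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.1, 0.8, 0.3])
print(correctness_completeness(y_true, prob, cutoff=0.5))   # (0.75, 0.75)
```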
4.1.2.5 Goodness-of-fit achieved by effort models.
The OLS effort models aimed at predicting effort/code churn for individual classes achieve R² values between 60% and 87%. Although R² values approaching 90% usually indicate a good model fit, it is difficult to assess from this information to what degree reliable class-level effort estimates can be made; absolute and relative errors are more transparent measures of prediction error in the context of effort estimation. In [33], goodness-of-fit is expressed, among others, in terms of the mean MRE, which ranges from 0.7 to 1.7. In other words, on average the predicted effort is 70 to 170% off the actual effort. As such, these models are not suitable for predicting development effort for individual classes. It is shown, however, that when adding up estimates of class-level effort to the system level, reliable system effort estimates with MREs between 10 and 30% can be achieved. Also, the hybrid regression and CART models (see Section 3.4.2) have considerably lower MREs than the regular models, indicating that this approach for capturing nonlinear and local trends in the data can indeed help build better models.
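As a reminder of how these MRE figures are obtained, the following minimal sketch computes the magnitude of relative error per class and its mean (mMRE); the effort values are invented.

```python
# Minimal sketch of MRE and mean MRE (mMRE) for effort predictions.
import numpy as np

def mmre(actual, predicted):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mre = np.abs(actual - predicted) / actual    # magnitude of relative error per class
    return mre.mean()

actual_effort    = [10.0, 4.0, 25.0, 7.5]        # e.g., person-hours per class (invented)
predicted_effort = [18.0, 3.0, 40.0, 12.0]
print(f"mMRE = {mmre(actual_effort, predicted_effort):.2f}")   # 0.56 here
```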
Impact of size. Models based on size measures only, and on size and design measures, were built in [33] and [14]. In [14], the size-only models showed a significantly lower fit than the design models (R² of 62-65% vs. 85-87%), indicating that design measures carry additional information related to effort that is not contained in size measures. While this difference is statistically significant, it is not clear whether it is also of practical significance. Similar findings were made for the nonhybrid models in [33], where including design measures in the models decreased mMREs somewhat. For the hybrid models, however, the inclusion of design measures did not help to improve the goodness-of-fit.
4.1.2.6 Results from cross-validation. Where conducted, the results from cross-validation (Section 3.5.1) show a promising picture. In most cases, the performance of the models in cross-validation did not drop much compared to the goodness-of-fit achieved by the models. For the effort models in [33], the change in mMRE compared to the goodness-of-fit is statistically not significant. In [14] and [25], the authors calculate a correlation coefficient of actual effort vs. predicted effort (the prediction coming from a model built from another system). Depending on the model, they find correlation coefficients r between 0.44 and 0.67, significantly different from 0. Here, it is difficult to assess whether such a prediction can still be practically useful, as the figures provided have no meaningful interpretation in an application context. The absolute/relative errors of the prediction could provide more insight here.
The fault-proneness models typically achieve completeness and correctness levels (or sensitivity/specificity levels) of about 80% and above in cross-validation. Recently, cost-benefit models have been proposed in [31] and [48], investigating whether it is economically viable to use models of such accuracy, for instance, to focus inspection efforts on classes likely to contain faults:
• The cost-benefit model in [48] defines the savings of using a prediction model as the proportion of costs due to post-release faults that are saved by inspecting the classes predicted fault-prone (as opposed to performing no inspections at all). This model requires assumptions about the relative cost of post-release faults and class inspections.
• The cost-benefit model in [31], summarized in Section 3.5.2, expresses the benefit of using a prediction model to select classes for inspection over another class selection strategy used as a comparison baseline (e.g., a ranking based on class size). This model requires no assumptions about the relative cost of post-release faults and class inspections, thus making it practical to use. It assumes that one compares the benefits of making decisions (e.g., selecting a class to inspect) with the prediction model against making the same decisions without the model, following a procedure that is already available. To make the comparison meaningful in terms of benefits, the cost (e.g., the cumulative size of classes inspected) should be held constant for both the procedure using the model and the baseline procedure.
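A hedged sketch of the ranking comparison underlying the second cost-benefit model is shown below: classes are selected for inspection in decreasing order of predicted fault-proneness (or of size, as the baseline) until a fixed cumulative-size budget is spent, and the number of faults covered by each strategy is compared. All class data and the budget are invented, and the bookkeeping is deliberately simplified compared with [31].

```python
# Hypothetical, simplified comparison of model-based vs. size-based class selection
# under a fixed inspection budget (cumulative size of inspected classes).
import pandas as pd

classes = pd.DataFrame({
    "size":      [500, 120, 300, 80, 220, 400, 60, 150],
    "pred_prob": [0.2, 0.9, 0.7, 0.6, 0.3, 0.4, 0.8, 0.5],   # predicted fault-proneness
    "faults":    [0,   3,   2,   1,   0,   1,   1,   1],      # actual (post hoc) faults
})

def faults_covered(df, order_by, size_budget):
    ranked = df.sort_values(order_by, ascending=False)
    selected = ranked[ranked["size"].cumsum() <= size_budget]
    return int(selected["faults"].sum())

budget = 1000   # same inspection cost for both strategies
print("model-based selection:", faults_covered(classes, "pred_prob", budget))
print("size-based selection: ", faults_covered(classes, "size", budget))
```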
In Table IX and Table X, a number of published models were not considered because of methodological flaws in how these models were built. Including them would bias the results:
• The models in [28], based on 8 observations and 7 covariates, suffer from overfitting (as visible in the very high R² > 99.9%).
• The models in [32] and [55] contain covariates with p-values above 0.5. Such covariates can be removed from the model without significant loss of fit.
• In [42], a full model with all candidate measures is fitted. From this model, nonsignificant covariates were removed, and predicted values were calculated from the remaining significant covariates, but the regression coefficients from the full model were retained. Of course, the reduced model should have been refitted to obtain unbiased predicted values, or a proper backward elimination procedure could have been used.
4.2 Controlled Experiments
In the following, we briefly report on the qualitative results established in controlled experiments.
• In [65], students answered comprehension questions and performed change impact analysis on functionally equivalent systems with differing adherence to design principles (including low coupling, high cohesion, and simple, small classes). The time required to perform the tasks and the correctness of the students' performance were measured. The OO system following good object-oriented design principles was shown to be easier to understand and faster to modify than the version violating the design principles. A refined experiment in [66] confirmed these results. Note, however, that no statement can be made as to whether any of the design principles contributed more or less to the differences in understandability/maintainability.
• Although not a controlled experiment in a strict sense, in [2] two developers were asked to implement a certain piece of software. One was instructed to write code for a specific context and of poor reusability, whereas the other was to write the code in the most reusable manner possible. A set of design measures was applied to both resulting systems. The "reusable" system showed lower coupling and higher cohesion (as measured by CBO and LCOM) than the "poor reusability" one. Also, the "reusable" system made no use of inheritance, while the "poor reusability" one did. Note,
however, that the sample systems were too small to allow for any statistical testing of significant differences.
The remaining studies all focus on the impact of inheritance on understandability and maintainability.
• In [68], students answered comprehension questions and performed debugging and modification tasks on functionally equivalent systems with deep and shallow inheritance hierarchies. The time required to perform the tasks and the correctness and completeness of the students' performance were measured. The answers to the comprehension questions showed no difference in understandability between the deep and shallow systems. In the debugging tasks, faults were easier to identify and to correct in the shallow systems, while the tasks took the same time as for the deep versions. Modification tasks took less time for the deep versions, but were carried out with lower correctness than for the shallow versions.
• In [69], students performed modifications on three pairs of functionally equivalent systems. The time they required for the modification was measured. In two experimental runs, systems with three levels of inheritance were found to be easier to maintain than the equivalent flat versions. In a third experimental run using a larger system, no difference was found between a system using five levels of inheritance and an equivalent flat version.
• In [67], students performed modifications on two pairs of functionally equivalent systems that differ in their use of inheritance (no inheritance vs. three/five levels of inheritance). The correctness and completeness of their performance were observed, and the perceived subjective understandability of the systems was also measured. Results showed that the flat systems were easier to modify than the versions with three or five levels of inheritance. For one pair of systems, the flat system was easier to understand than the inheritance version, and no difference was found for the other pair of systems.
It is interesting to observe that the results from the controlled experiments mirror well the findings from the correlational studies. Coupling, cohesion, and size appear to consistently affect system quality. For inheritance, inconsistent results are found: the use of inheritance can have beneficial, detrimental, or no effects at all on system quality. Possible explanations were already discussed in Section 4.1.1.
5. Conclusions
Despite a large number of empirical studies and articles, a lot of work remains to be done regarding the construction and application of useful, measurement-based quality models for object-oriented systems.
The difficulty of developing an empirical body of knowledge stems from:
• A large number of proposed measures, many of them being similar.
• A large number of external quality attributes of interest (all the "ilities" of software development).
• The scarcity of reliable, complete data sets.
• The difficulty of integrating quality prediction models into realistic decision processes where their benefits can be assessed.
Despite such difficulties, the reported studies allow us to draw a number of important conclusions, which are discussed below.
5.1 Interrelationship between Design Measures
Many of the coupling, cohesion, and inheritance measures used in this study appear to capture similar dimensions in the data. In fact, the number of dimensions actually captured by the measures is much lower than the number of measures itself. Results from principal component analysis on numerous data sets showed that the measures listed in the Appendix can safely be reduced to a smaller set of about 15 measures, without losing important (i.e., potentially quality-related) design information. This simply reflects the fact that many of the measures proposed in the literature are based on comparable ideas and hypotheses, and are therefore somewhat redundant.
5.2 Indicators of Fault-Proneness
Measures of size, import coupling, and export coupling appear to be useful predictors of fault-proneness.
• If one intends to build quality models of OO designs, coupling will very likely be an important structural dimension to consider. More specifically, a strong emphasis should be put on method invocation import coupling, since it has been shown to be a strong, stable indicator of fault-proneness. We also recommend that the following aspects be measured separately, since they capture distinct dimensions in our data sets: import versus export coupling, coupling to library classes versus application classes, and method invocation versus aggregation coupling.
• As far as cohesion is concerned and measured today, it is very likely not a very good fault-proneness indicator. This is likely to reflect two facts: (1) the
weak understanding we currently have of what this attribute is supposed to capture, and (2) the difficulty of measuring such a concept through static analysis only. One illustration of this problem is that two distinct dimensions are captured by existing cohesion measures: normalized versus nonnormalized cohesion measures. As opposed to the various coupling dimensions, these do not look like components of a vector characterizing class cohesion, but rather like two fundamentally different ways of looking at cohesion.
• Inheritance measures appear not to be consistent indicators of class fault-proneness. Their significance as indicators strongly depends on the experience of the system developers and the inheritance strategy in use on the project.
• Size measures are, as expected, consistently good indicators of fault-proneness, and can be used in fault-proneness models. However, we have observed that the above-mentioned dimensions of coupling and inheritance, when combined with size, help explain fault-proneness further than size alone; models based on all three types of measures clearly outperform models based on size only.
5.3 Indicators of Effort
Size seems to be the main effort driver that explains most of the effort variance. While the more sophisticated coupling and inheritance measures also have a univariate relationship to effort, they do not bring substantial gains in terms of goodness-of-fit and cost estimation accuracy.
5.4 Predictive Power of Models
Results concerning the predictive power of fault-proneness and effort models are encouraging. When predicting fault-prone classes, using all the important fault-proneness indicators mentioned above, the best models have consistently obtained a percentage of correct classifications of about 80% and find more than 80% of the faults. Overall, the results suggest that design measurement-based models for fault-proneness prediction of classes may be very effective instruments for quality evaluation and control of OO systems. From the results presented in studies predicting development effort, we may conclude that there is a reasonable chance that useful cost estimation models can be built during the analysis and design of object-oriented systems. System effort prediction MREs below 30%, which is an acceptable level of accuracy for cost estimation models [83], seem realistic to achieve.
5.5 Cross-System Application
An important question concerning the usefulness of design measurement-based prediction models is whether they can be viable decision-making tools when applied from one object-oriented system to another in a given environment. This question has received very little attention in existing empirical studies. The most detailed investigation to date is reported in [31], where the authors applied a fault-proneness model built on one system to another system, developed by a nearly identical development team (with a different project manager), using a similar technology (OO analysis and design, and Java) but different design strategies and coding standards. We believe that the context of our study represents realistic conditions that are often encountered in practice: changes in personnel, learning effects, and evolution of technology. Our results suggest that applying the models across systems is far from straightforward. Even though the systems stem from the same development environment, the distributions of measures change and, more importantly, system factors (e.g., experience, design method) affect the applicability of the predicted fault detection probabilities. However, we have shown that the prediction model built on one system can be used to rank the classes of a second system (within the same environment) according to their predicted fault-proneness. When used in this manner, the model can in fact be helpful for focusing verification effort on faulty classes. Although the predicted defect detection probabilities are clearly not realistic when compared with the actual fault data, the fault-proneness class ranking is accurate. The model performs clearly better than chance and also outperforms a simple model based on class size, e.g., the number of methods. It can be argued that, in a more homogeneous environment, this effect might not be as strongly present as in the current study, but we doubt whether such environments exist or are in any case representative. It is likely, however, that the more stable the development practices and the better trained the developers, the more stable the fault-proneness models. For example, in [46] a fault-proneness model built from one version of a system was applied to later versions of the same system, and no shift in predicted probabilities was observed.
5.6 Cost-Benefit Model
To assess the economic viability of fault-proneness models when applied across systems, we proposed a specific cost-benefit model. This model is tailored to a number of specific assumptions regarding the use of the model, but its underlying principles are general and can be reused. The benefit of a fault-proneness model is expressed as a function of a number of parameters, such as the cost of a fault that slips through inspection and the defect detection effectiveness of inspections.
Continuing the system example we used in [31], with a test system containing 27 faults, the benefit of using a prediction model to select classes for inspection over a simple size-based class selection heuristic was shown to be as high as 17.6 fault-cost units (one fault-cost unit = the average cost of a fault not detected during inspection), thus demonstrating the usefulness of measurement-based fault-proneness models in the environment under study.
5.7 Advanced Data Analysis Techniques
Novel, exploratory analysis techniques (e.g., MARS and hybrid regression models with regression trees) have been applied to construct fault-proneness and effort models. Because we know little about what functional form to expect, such exploratory techniques that help find optimal model specifications may be very useful. Initial results support this premise and suggest that the fault-proneness models generated by MARS outperform a logistic regression model in which the relationship between the logit and the independent variables is linear (log-linear model). The combination of Poisson regression and regression trees has helped to significantly improve effort predictions, especially predictions based on size measures only, which can be performed early on during the design stages. This can be explained by the fact that regression trees tend to capture complementary structures in the data, since they help define new predictors capturing interactions between the original predictors. Although they will not systematically help, such hybrid models are worth trying.
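The following sketch illustrates the hybrid idea on synthetic data: a shallow regression tree is fitted to the measures, its leaf membership is turned into dummy covariates, and these are added to a Poisson regression. Measure names, data, and tree settings are all invented, and the exact procedure used in the original studies may differ.

```python
# Hypothetical sketch of a hybrid Poisson regression with regression-tree leaf dummies.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 80
X = pd.DataFrame({
    "NMImp":  rng.poisson(10, n),
    "OMMIC":  rng.poisson(5, n),
    "NumPar": rng.poisson(15, n),
})
y = rng.poisson(1 + 0.15 * X["NMImp"] + 0.2 * X["OMMIC"])    # e.g., fault or churn counts

# Step 1: a shallow regression tree captures local structure and interactions.
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=10).fit(X, y)
leaves = tree.apply(X)                                       # leaf id per class

# Step 2: Poisson regression on the original measures plus leaf-node dummies.
leaf_dummies = pd.get_dummies(leaves, prefix="leaf", drop_first=True).astype(float)
design = sm.add_constant(pd.concat([X.astype(float), leaf_dummies], axis=1))
hybrid = sm.GLM(y, design, family=sm.families.Poisson()).fit()
print(hybrid.summary().tables[1])
```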
5.8 Exploitation of Results
There are a number of typical applications where the results from the empirical studies we summarized here can be used to refine decision-making:
• One is building quality benchmarks to assess OO software products newly developed or under maintenance, e.g., in the context of large-scale software acquisition and outsourcing, but also of in-house development. For example, such experiences are reported in [84], where the code of acquired products is systematically measured to identify systems or subsystems that strongly depart from previously established measurement benchmarks. Such benchmarks can be built based on existing operational software, which has been shown to be of good quality. New software can then be analyzed by comparing, for example, the class distributions of import coupling with the established benchmark. Any system part that shows a strong departure from the benchmark distribution could, for example, be further inspected to investigate the
cause of the deviation. If no acceptable justification can be found, then the acquisition manager may decide to require some corrective actions before the final delivery or for future releases of the software. This is particularly important when systems will be maintained over a long period of time and new versions produced on a regular basis.
• Design measurement can be combined with more qualitative analyses of software systems to make the results of a qualitative analysis more objective. For example, [75] describes a study aimed at assessing the modifiability and reusability of a software system. To this end, a set of change scenarios that the system is likely to undergo in the future was identified, and the impact of change on the system classes was assessed for each scenario. The structural properties (size, coupling) of the system classes, which should be indicative of how difficult these changes are to implement, were also measured. The design measures and the scenario evaluation were then integrated by defining a change difficulty index (CDI), which incorporates both the extent of changes to classes and their associated coupling, complexity, and size. A comparison of the CDIs at the class level then provided insight into the maintainability and reusability of software systems. When static measurement is performed in isolation, large, highly coupled classes are generally considered difficult to maintain or reuse. With the scenario evaluation, we can put this into context. A class with high coupling may cause no problems if it is unlikely to be changed or reused in the future. On the other hand, for a class that is likely to undergo frequent changes, this is not a problem if the class is well designed for it (low size/coupling). Here, the two approaches were used together so that they can address each other's limitations.
In both of the above examples, it is important that the measures used are carefully selected, as they may affect the results and decisions. One important consideration is that these measures must be consistent indicators of software design problems, and their relationships to external quality attributes, e.g., fault-proneness, should be clearly demonstrated and stable across environments. In the current stage of knowledge, certain measures of import coupling and size appear to be particularly well suited for the above-mentioned purposes.
The overall results of the studies tell us that the validity of fault-proneness models may be very context-sensitive. We therefore recommend that fault-proneness models be built locally, in the environment where one intends to use them, and that their stability be assessed and analyzed across systems in that environment. Existing measurement tools do not support the use of design measures for this particular purpose. Based on the results summarized here, practitioners (quality assurance teams) can verify whether the set of measures provided by
their measurement tools covers all the dimensions of OO designs we identified as important quality indicators. Our data analysis procedure then provides practitioners with detailed guidance on how they can (1) reduce the set of measures provided by their measurement tools to a minimal, nonredundant set, (2) construct prediction models from these measures, (3) use these prediction models for decision-making during projects, and (4) evaluate the cost-effectiveness of prediction model usage.
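As a small illustration of the benchmarking application described at the beginning of this section, the sketch below compares the class-level import coupling distribution of a newly delivered system against a benchmark distribution built from trusted operational software, and counts the classes that exceed the benchmark's 95th percentile. The data, the chosen statistic, and the percentile are illustrative assumptions, not a prescription from the studies.

```python
# Hypothetical benchmarking sketch: compare a new system's import coupling
# distribution against an established benchmark and flag strong departures.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
benchmark_ommic  = rng.poisson(4, 400)   # import coupling per class, benchmark systems
new_system_ommic = rng.poisson(7, 120)   # same measure for the newly delivered system

stat, p_value = ks_2samp(benchmark_ommic, new_system_ommic)   # distribution comparison
threshold = np.percentile(benchmark_ommic, 95)
flagged = int((new_system_ommic > threshold).sum())
print(f"KS statistic = {stat:.2f}, p = {p_value:.3g}; classes above the 95th percentile: {flagged}")
```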
5.9 Future Research Directions
We have identified a number of open questions that are crucial to the betterment and wider acceptance of measurement-based OO design quality models:
• Most of the design measurement reported is actually based on measurement of source code. Early analysis and design artifacts are, by definition, not complete and only represent early models of the actual system to be developed. Using measures on such representations inevitably introduces a measurement error in terms of structural properties such as coupling or inheritance. The use of predictive models based on early artifacts, and their capability to predict the quality of the final system, still remains to be investigated.
• Future research needs to focus on collecting larger data sets, involving large numbers of systems from the same environment. In order to make the research on fault-proneness models of practical relevance, it is crucial that they be usable from project to project. Their applicability depends on their capability to predict accurately and precisely (fine-grained predictions) quality attributes (e.g., class fault-proneness) in new systems, based on the development experience accumulated in past systems. Although many studies report the construction of models, few report on their actual application in realistic settings. Our understanding has now reached a sufficient level so that such studies can be undertaken.
• Cost-benefit analyses of fault-proneness models have either used a simple class selection heuristic based on class size to focus verification as a comparison benchmark or, even worse, are compared against the case where no verification is performed at all. Neither solution is fully satisfactory, as experts may very well be accurate at predicting quality attributes such as fault-proneness, but little is known about how prediction models compare with experts' predictions. Furthermore, there might be a way to combine expert opinion with OO design quality models to obtain more reliable statements about system quality.
As may be expected, more studies are needed to reach a mature body of knowledge and experience. However, although this is a prerequisite, it is not enough. Empirical studies in software engineering need to be better performed, analyzed, and reported. For this purpose, standard procedures may be reused and adapted from other empirical fields, such as medicine, and should be used to evaluate empirical articles. We have provided in this chapter a set of procedures that may be used as a starting point for conducting higher quality research and obtaining more fruitful results when investigating measurement-based quality models.
Appendix A
Tables XI to XIV describe the coupling, cohesion, inheritance, and size measures mentioned in this chapter. We list the acronym used for each measure, informal definitions of the measures, and literature references where the measures were originally proposed. The informal natural language definitions should give the reader a quick insight into the measures. However, such definitions tend to be ambiguous. Formal definitions for most of the measures, using a uniform and unambiguous formalism, are provided in [18,19], where we also perform a systematic comparison of these measures and analyze their mathematical properties.
TABLE XI COUPLING MEASURES
CBO: Coupling between object classes. According to the definition of this measure, a class is coupled to another if methods of one class use methods or attributes of the other, or vice versa. CBO is then defined as the number of other classes to which a class is coupled. This includes inheritance-based coupling (coupling between classes related via inheritance). [3]
CBO': Same as CBO, except that inheritance-based coupling is not counted. [7]
RFC: Response set for class. The response set of a class consists of the set M of methods of the class, and the set of methods directly or indirectly invoked by methods in M. In other words, the response set is the set of methods that can potentially be executed in response to a message received by an object of that class. RFC is the number of methods in the response set of the class. [7]
RFC-1: Same as RFC, except that methods indirectly invoked by methods in M are not included in the response set. [3]
MPC: Message passing coupling. The number of method invocations in a class. [14]
DAC: Data abstraction coupling. The number of attributes in a class that have another class as their type. [14]
DAC': The number of different classes that are used as types of attributes in a class. [14]
ICP: Information-flow-based coupling. The number of method invocations in a class, weighted by the number of parameters of the methods invoked. [17]
IH-ICP: As ICP, but counts invocations of methods of ancestors of classes (i.e., inheritance-based coupling) only. [17]
TABLE XI —
Continued
Name
Definition
NIH-ICP IFCAIC ACAIC OCAIC FCAEC DCAEC OCAEC IFCMIC
As ICP, but counts invocations to classes not related through inheritance. [17] These coupling measures are counts of interactions between classes. The [10] measures distinguish the relationship between classes (friendship, inheritance, none), different types of interactions, and the locus of impact of the interaction. The acronyms for the measures indicate what interactions are counted: The first or first two letters indicate the relationship (A, coupling to ancestor classes; D, Descendents; F, Friend classes, IF, Inverse Friends (classes that declare a given class c as their friend); O, Others, i.e., none of the other relationships). The next two letters indicate the type of interaction: CA: There is a Class-Attribute interaction between classes c and d, if c has an attribute of type d. CM: There is a Class-Method interaction between classes c and d, if class c has a method with a parameter of type class d. MM: There is a Method-Method interaction between classes c and d, if c invokes a method of d, or if a method of class d is passed as parameter (function pointer) to a method of class c. The last two letters indicate the locus of impact: IC: Import coupling, the measure counts for a class c all interactions where c is using another class. EC: Export coupling: count interactions where class d is the used class.
ACMIC OCMIC FCMEC DCMEC OCMEC IFMMIC AMMIC OMMIC FMMEC DMMEC OMMEC IC CBM CC AMC NAS COF CDM
Fan-In Fan-Out
Inheritance coupling: Number of parent classes to which a class is coupled. Coupling between methods: Number of function dependency relationships between inherited and new/redefined methods (should be similar or the same as AMMIC). Class coupling—number of method couplings, i.e., variable references and/or method calls. Average method coupling—CC divided by number of methods in the class. Number of associations—count of the number of association lines emanating from a class in an OMT diagram. Coupling factor—percentage of pairs of classes that are coupled. Coupling dependency metric. Sum of (1) referential dependency (extent to which a program relies on its declaration dependencies remaining unchanged), (2) structural dependency (extent to which a program relies on its internal organization remaining unchanged, and (3) data integrity dependency (vulnerability of data elements in one module to change by other module) Count of modules (classes) that call a given class, plus the number of global data elements Count of modules (classes) called by a given module plus the number of global data elements altered by the module (class)
Source
[57] [57] [56] [56] [50] [28] [39]
[39] [39]
QUALITY MODELS IN OBJECT-ORIENTED SYSTEMS
159
TABLE XII COHESION MEASURES
Name
Definition
Source
LCOMl
Lack of cohesion in methods. The number of pairs of methods in the class using no attribute in common. LC0M2 is the number of pairs of methods in the class using no attributes in common, minus the number of pairs of methods that do. If this difference is negative, however, LC0M2 is set to 0. Consider an undirected graph G, where the vertices are the methods of a class, and there is an edge between two vertices if the corresponding methods use at least an attribute in common. LCOM3 is defined as the number of connected components of G. Like LC0M3, where graph G additionally has an edge between vertices representing methods m and n, if m invokes n or vice versa. Let V be the number of vertices of graph G from measure LC0M4, and E the number of its edges. Then Co = 2 i\E\ - {\V\ - 1)) / {{\V\ - 1) (|K| - 2)). Consider a set of methods {M,} (/ = 1, . . . , m) accessing a set of attributes {Aj} (j = I, ... ,a). Let //(Ay) be the number of methods which reference attribute Aj. Then
[7]
LC0M2 LCOM3
LCOM4 Co LC0M5
[3]
[13]
[13] [13] [12]
LC0M5 = « ( ( X , = , ^ i^j)) - '") / (1 - '«) • Coh TCC
LCC
ICH
MHF AHF
A variation on LC0M5: Coh = (Zy-i /^ (>^;)) / (^ ' «) Tight class cohesion. Besides methods using attributes directly (by referencing them), this measure considers attributes indirectly used by a method. Method m uses attribute a indirectly, if m directly or indirectly invokes a method that directly uses attribute a. Two methods are called connected, if they directly or indirectly use common attributes. TCC is defined as the percentage of pairs of public methods of the class that are connected, i.e., pairs of methods that directly or indirectly use common attributes. Loose class cohesion. Same as TCC, except that this measure also considers pairs of indirectly connected methods. If there are methods mi, . . . , m„, such that m, and m,+i are connected for / = 1, . . . , « - 1, then mi and m„ are indirectly connected. Measure LCC is the percentage of pairs of public methods of the class which are directly or indirectly connected. Information-flow-based cohesion. ICH for a method is defined as the number of invocations of other methods of the same class, weighted by the number of parameters of the invoked method (cf. coupling measure ICP above). The ICH of a class is the sum of the ICH values of its methods. MHF — method hiding factor, percentage of methods that have external visibility in the system (are not private to a class). Attribute hiding factor, percentage of methods that have external visibility in the system.
[18] [11]
[11]
[17]
[28] [28]
160
LIONEL C. BRIAND AND JURGEN WUST
TABLE XIII INHERITANCE MEASURES
Name DIT
Definition
Depth of inheritance Tree. The DIT of a class is the length of the longest path from the class to the root in the inheritance hierarchy. AID Average inheritance depth of a class. AID of a class without any ancestors is zero. For all other classes, AID of a class is the average AID of its parent classes, increased by one. CLD Class-to-leaf depth. CLD of a class is the maximum number of levels in the hierarchy that are below the class. NOC Number of children. The number of classes that directly inherit from a given class. NOP Number of parents. The number of classes that a given class directly inherits from. NOD Number of descendants. The number of classes that directly or indirecdy inherit from a class (i.e., its children, 'grand-children,' and so on) NOA Number of ancestors. The number of classes that a given class directly or indirectly inherits from. NMO Number of methods overridden. The number of methods in a class that override a method inherited from an ancestor class. NMINH Number of methods inherited. The number of methods in a class that the class inherits from its ancestors and does not override. NMA Number of methods added. The number of new methods in a class, not inherited, not overriding. SIX Specialization index. SIX is NMO * DIT / (NMO-hNMA-FNMINH). MIF Method inheritance factor, percentage of methods inherited in the system (sum of all inherited methods in all classes, divided by sum of inherited and noninherited methods in all classes). AIF Attribute inheritance factor, equivalent to MIF, for attributes. POF Polymorphism factor, percentage of possible opportunities for method overriding that are used. INHTS Dummy variable indicating whether a class partakes in an inheritance relationship. SPA Static polymorphism in ancestors. DPA Dynamic polymorphism in ancestors. SPD Static polymorphism in descendants. DPD Dynamic polymorphism in descendants. SP Static polymorphism in inheritance relations. SP = SPA -I- SPD. DP Dynamic polymorphism in inheritance relations. DP = DPA -f- DPD. NIP Polymorphism in noninheritance relations. OVO Overloading in stand-alone classes. CHNL Class hierarchy nesting level (likely identical with DIT). CACI Class attribute complexity/size, inherited. CI Class method complexity/size, inherited. CMICI Class method interface complexity/size, inherited.
Source [7] [12]
[85] [3,7] [9,16] [9,85] [85] [16] [16] [16] [16] [28]
[28] [28] [34] [32] [32] [32] [32] [32] [32] [32] [32] [39] [55] [55] [55]
QUALITY MODELS IN OBJECT-ORIENTED SYSTEMS
161
TABLE XIV
SIZE MEASURES

NMImp            The number of methods implemented in a class (noninherited or overriding methods).
NMInh            The number of inherited methods in a class, not overridden.
NM               The number of all (inherited, overriding, and noninherited) methods of a class. NM = NMImp + NMInh.
NAImp, Totattrib The number of attributes in a class (excluding inherited ones). Includes attributes of basic types such as strings, integers.
NAInh            The number of inherited attributes in a class.
NumPar           Number of parameters. The sum of the number of parameters of the methods implemented in a class.
Stmts            The number of declaration and executable statements in the methods of a class.
NMpub            The number of public methods implemented in a class.
NMNpub           The number of nonpublic (i.e., protected or private) methods implemented in a class.
Attrib           Count of attributes per class from the information model. [43]
States           Count of states per class from the information model. [43]
EVNT             Count of events per class from the information model. [43]
READS            Count of all read accesses by a class (contained in a case tool). [43]
WRITES           Count of all write accesses by a class (contained in a case tool). [43]
DELS             Count of all delete accesses by a class contained in the case tool. [43]
RWD              Count of synchronous accesses per class from the case tool. [43]
LOC              Lines of code. [43]
LOC_B            C++ body file lines of code per class. [43]
LOC_H            C++ header file lines of code per class. [43]
NOMA             Number of object/memory allocations: the number of statements that allocate new objects or memory in a class. [57]
AMC              Average method complexity: average method size for each class. [57]
CACL             Class attribute complexity/size, local. [55]
CL               Class method complexity/size, local. [55]
CMICL            Class method interface complexity/size, local. [55]

Note. Some of the size measures in Table XIV are frequently used in publications and available tools, and no definite source or author can be given for them.
Appendix B: Glossary

This glossary provides a list of all abbreviations used throughout the paper. This excludes acronyms used as names of design measures, which are listed in Appendix A.

ARE     Absolute relative error, see Section 3.4.5.
AROC    Area under receiver-operator curve, Section 3.4.5.
C&K     The measures defined by Chidamber and Kemerer [3,7].
CART    Classification and regression trees.
CCI     Coupling, cohesion, inheritance measures, i.e., measures of OO design properties, as opposed to (usually simple) size measures.
C-FOOD  Coupling measures for object-oriented designs—the measures defined in [10].
CV      Cross-validation, see Section 3.5.1.
DV      Dependent variable.
IV      Independent variable.
LL      Log likelihood.
LR      Logistic regression.
LS      Least-squares regression.
MARS    Multivariate adaptive regression splines.
ML      Maximum likelihood.
MOOD    Metrics for object-oriented designs—the measures introduced in [1].
MRE     Magnitude of relative error, Section 3.4.5.
OLS     Ordinary least-squares regression.
OO      Object-oriented.
PC      Principal component.
PCA     Principal component analysis, Section 3.2.
ROC     Receiver-operator curve, Section 3.4.5.

REFERENCES
[1a] ISO/IEC DIS 14598-1. "Information Technology - Product Evaluation." Part 1: General Overview.
[1b] Abreu, F., Goulao, M., and Esteves, R. (1995). "Toward the design quality evaluation of object-oriented software systems." 5th International Conference on Software Quality, Austin, Texas, Oct.
[2] Barnard, J. (1998). "A new reusability metric for object-oriented measures." Software Quality Journal, 7, 35-50.
[3] Chidamber, S. R., and Kemerer, C. F. (1994). "A metrics suite for object oriented design." IEEE Transactions on Software Engineering, 20, 476-493.
[4] Sharble, R., and Cohen, S. (1993). "The object-oriented brewery: A comparison of two object-oriented development methods." Software Engineering Notes, 18, 60-73.
[5] Shlaer, S., and Mellor, S. (1988). Object-Oriented Systems Analysis: Modeling the World in Data. Yourdon Press, Englewood Cliffs, NJ.
[6] Wirfs-Brock, R., Wilkerson, B., and Wiener, L. (1990). Designing Object-Oriented Software. Prentice-Hall, Englewood Cliffs, NJ.
[7] Chidamber, S. R., and Kemerer, C. F. (1991). "Towards a metrics suite for object oriented design." Proceedings Conference on Object-Oriented Programming: Systems, Languages and Applications (OOPSLA'91) (A. Paepcke, Ed.), Oct. Published in SIGPLAN Notices, 26, 197-211.
[8] Counsell, S., and Newson, P. (2000). "Use of friends in C++ software: An empirical investigation." Journal of Systems and Software, 53, 15-21.
[9] Lake, A., and Cook, C. (1994). "Use of factor analysis to develop OOP software complexity metrics." Proceedings 6th Annual Oregon Workshop on Software Metrics, Silver Falls, Oregon.
[10] Briand, L., Devanbu, P., and Melo, W. (1997). "An investigation into coupling measures for C++." Proceedings of ICSE '97, Boston.
[11] Bieman, J. M., and Kang, B.-K. (1995). "Cohesion and reuse in an object-oriented system." Proceedings ACM Symposium on Software Reusability (SSR'94), pp. 259-262.
[12] Henderson-Sellers, B. (1996). Software Metrics. Prentice-Hall, Hemel Hempstead, UK.
[13] Hitz, M., and Montazeri, B. (1995). "Measuring coupling and cohesion in object-oriented systems." Proceedings International Symposium on Applied Corporate Computing, Monterrey, Mexico, Oct.
[14] Li, W., and Henry, S. (1993). "Object-oriented metrics that predict maintainability." Journal of Systems and Software, 23, 111-122.
[15] Li, W. (1998). "Another metric suite for object-oriented programming." Journal of Systems and Software, 44, 155-162.
[16] Lorenz, M., and Kidd, J. (1994). Object-Oriented Software Metrics. Prentice-Hall Object-Oriented Series, Englewood Cliffs, NJ.
[17] Lee, Y.-S., Liang, B.-S., Wu, S.-F., and Wang, F.-J. (1995). "Measuring the coupling and cohesion of an object-oriented program based on information flow." Proceedings International Conference on Software Quality, Maribor, Slovenia.
[18] Briand, L., Daly, J., and Wüst, J. (1998). "A unified framework for cohesion measurement in object-oriented systems." Empirical Software Engineering Journal, 3, 65-117.
[19] Briand, L., Daly, J., and Wüst, J. (1999). "A unified framework for coupling measurement in object-oriented systems." IEEE Transactions on Software Engineering, 25, 91-121.
[20] Eder, J., Kappel, G., and Schrefl, M. (1994). "Coupling and cohesion in object-oriented systems," technical report, University of Klagenfurt.
[21] Briand, L., Morasca, S., and Basili, V. (1996). "Property-based software engineering measurement." IEEE Transactions on Software Engineering, 22, 68-86.
[22] Kitchenham, B., Pfleeger, S., and Fenton, N. (1995). "Towards a framework for software measurement validation: A measurement theory perspective." IEEE Transactions on Software Engineering, 21, 929-944.
[23] Whitmire, S. (1997). Object-Oriented Design Measurement. Wiley, New York.
[24] Zuse, H. (1998). A Framework of Software Measurement. de Gruyter, Berlin.
[25] Li, W., Henry, S., Kafura, D., and Schulman, R. (1995). "Measuring object-oriented design." Journal of Object-Oriented Programming, 8, 48-55.
[26] McCabe, T. J. (1976). "A complexity measure." IEEE Transactions on Software Engineering, SE-2, 308-320.
[27] Basili, V. R., Briand, L. C., and Melo, W. L. (1996). "A validation of object-oriented design metrics as quality indicators." IEEE Transactions on Software Engineering, 22, 751-761.
[28] Abreu, F., and Melo, W. (1996). "Evaluating the impact of object-oriented design on software quality." Proceedings of Metrics.
[29] Briand, L., Daly, J., Porter, V., and Wüst, J. (2000). "Exploring the relationships between design measures and software quality in object-oriented systems." Journal of Systems and Software, 51, 245-273.
[30] Briand, L., Wüst, J., and Lounis, H. (2001). "Replicated case studies for investigating quality factors in object-oriented designs." Empirical Software Engineering: An International Journal, 6, 11-58.
[31] Briand, L., Melo, W., and Wüst, J. (2001). "Assessing the applicability of fault-proneness models across object-oriented software projects." IEEE Transactions on Software Engineering, in press.
[32] Benlarbi, S., and Melo, W. (1999). "Polymorphism measures for early risk prediction." Proceedings of the 21st International Conference on Software Engineering, ICSE 99, Los Angeles, pp. 335-344.
[33] Briand, L., and Wüst, J. (2001). "The impact of design properties on development cost in object-oriented systems." IEEE Transactions on Software Engineering, 27(11), 963-986.
[34] Bansiya, J., Etzkorn, L., Davis, C., and Li, W. (1999). "A class cohesion metric for object-oriented designs." Journal of Object-Oriented Programming, 11, 47-52.
[35] Benlarbi, S., El Emam, K., Goel, N., and Rai, S. (2000). "Thresholds for object-oriented measures." Proceedings of ISSRE 2000, pp. 24-37.
[36] El Emam, K., Benlarbi, S., Goel, N., and Rai, S. (2001). "The confounding effect of class size on the validity of object-oriented metrics." IEEE Transactions on Software Engineering, 27, 630-650.
[37] El Emam, K., Benlarbi, S., Goel, N., and Rai, S. (1999). "A validation of object-oriented metrics." Technical Report ERB-1063, NRC. Available at www.object-oriented.org.
[38] Binkley, A., and Schach, R. (1996). "Impediments to the effective use of metrics within the object-oriented paradigm," technical report, Vanderbilt University.
[39] Binkley, A., and Schach, R. (1998). "Validation of the coupling dependency metric as a predictor of run-time failures and maintenance measures." Proceedings ICSE 98, pp. 452-455.
[40] Briand, L., Wüst, J., and Lounis, H. (1999). "Using coupling measurement for impact analysis in object-oriented systems." Proceedings of the IEEE International Conference on Software Maintenance (ICSM), Oxford, UK, pp. 475-482.
[41] Chidamber, S., Darcy, D., and Kemerer, C. (1998). "Managerial use of metrics for object-oriented software: An exploratory analysis." IEEE Transactions on Software Engineering, 24, 629-639.
[42] Chen, J.-Y., and Lu, J.-F. (1993). "A new metric for object-oriented design." Information and Software Technology, 35, 232-240.
[43] Cartwright, M., and Shepperd, M. (2000). "An empirical investigation of an object-oriented software system." IEEE Transactions on Software Engineering, 26, 786-796.
[44] Etzkorn, L., Bansiya, J., and Davis, C. (1999). "Design and code complexity metrics for OO classes." Journal of Object-Oriented Programming, 11, 35-40.
[45] Etzkorn, L., Davis, C., and Li, W. (1998). "A practical look at the lack of cohesion in methods metric." Journal of Object-Oriented Programming, 10, 27-34.
[46] El Emam, K., Melo, W., and Machado, J. (2001). "The prediction of faulty classes using object-oriented design metrics." Journal of Systems and Software, 56, 63-75.
[47] El Emam, K., Benlarbi, S., Melo, W., Lounis, H., and Rai, S. (2000). "The optimal class size for object-oriented software: A replicated case study." Technical Report ERB-1074, NRC. Available at www.object-oriented.org.
[48] Glasberg, D., El Emam, K., Melo, W., and Madhavji, N. (2000). "Validating object-oriented design metrics on a commercial Java application," TR ERB-1080, NRC.
[49] Harrison, R., and Counsell, S. (1998). "The role of inheritance in the maintainability of object-oriented systems." Proceedings of ESCOM '98, pp. 449-457.
[50] Harrison, R., Counsell, S., and Nithi, R. (1998). "Coupling metrics for object-oriented design." Proceedings of the 5th International Software Metrics Symposium, Bethesda, MD, pp. 150-157.
[51] Harrison, R., Samaraweera, L. G., Dobie, M. R., and Lewis, P. H. (1996). "An evaluation of code metrics for object-oriented programs," technical report.
[52] Harrison, R., and Nithi, R. (1996). "An empirical evaluation of object-oriented design metrics," technical report.
[53] Moser, S., Henderson-Sellers, B., and Misic, V. (1999). "Cost estimation based on business models." Journal of Systems and Software, 49, 33-42.
[54] Misic, V., and Tesic, D. (1998). "Estimation of effort and complexity: An object-oriented case study." Journal of Systems and Software, 41, 133-143.
[55] Nesi, P., and Querci, T. (1998). "Effort estimation and prediction of object-oriented systems." Journal of Systems and Software, 42, 89-102.
[56] Rajaraman, C., and Lyu, M. R. (1992). "Reliability and maintainability related software coupling metrics in C++ programs." Proceedings of ISSRE.
[57] Tang, M.-H., Kao, M.-H., and Chen, M.-H. (1999). "An empirical study on object-oriented metrics." Proceedings of Metrics, pp. 242-249.
[58] Wilkie, F. G., and Hylands, B. (1998). "Measuring complexity in C++ application software." Software Practice and Experience, 28, 513-546.
[59] Wilkie, F. G., and Kitchenham, B. (2000). "Coupling measures and change ripples in C++ applications." Journal of Systems and Software, 52, 157-164.
[60] Meyer, M., and Booker, J. (1991). Eliciting and Analyzing Expert Judgement: A Practical Guide. Academic Press, London.
[61] Miller, R., Jr. (1981). Simultaneous Statistical Inference, 2nd ed. Springer-Verlag, Berlin.
[62] Friedman, J. (1991). "Multivariate adaptive regression splines." Annals of Statistics, 19, 1-141.
[63] Stone, M. (1974). "Cross-validatory choice and assessment of statistical predictions." Journal of the Royal Statistical Society, Series B, 36, 111-147.
[64] Spector, P. (1981). Research Design, Quantitative Applications in the Social Sciences. Sage, Newbury Park, CA.
[65] Briand, L., Bunse, C., Daly, J., and Differding, C. (1997). "An experimental comparison of the maintainability of object-oriented and structured design documents." Empirical Software Engineering, 2, 291-312.
[66] Briand, L., Bunse, C., and Daly, J. (2001). "An experimental evaluation of quality guidelines on the maintainability of object-oriented design documents." IEEE Transactions on Software Engineering, 27, 513-530.
[67] Harrison, R., Counsell, S., and Nithi, R. (2000). "Experimental assessment of the effect of inheritance on the maintainability of object-oriented systems." Journal of Systems and Software, 52, 173-179.
[68] Lake, A., and Cook, C. (1992). "A software complexity metric for C++," Technical Report 92-60-03. Department of Computer Science, Oregon State University.
[69] Wood, M., Daly, J., Miller, J., and Roper, M. (1999). "Multi-method research: An empirical investigation of object-oriented technology." Journal of Systems and Software, 48, 13-26.
[70] Hayes, W. (1994). Statistics, fifth ed. Harcourt, San Diego.
[71] Dunteman, G. (1989). "Principal Component Analysis," Sage University Paper 07-69, Thousand Oaks, CA.
[72] Lewis-Beck, M. (1980). Applied Regression: An Introduction. Sage, Thousand Oaks, CA.
[73] Long, S. (1997). Regression Models for Categorical and Limited Dependent Variables, Advanced Quantitative Techniques in the Social Sciences Series. Sage, Thousand Oaks, CA.
[74] Hosmer, D. W., and Lemeshow, S. (1989). Applied Logistic Regression. Wiley, New York.
[75] Briand, L., and Wüst, J. (2001). "Integrating scenario-based and measurement-based software product assessment." Journal of Systems and Software, 59, 3-22.
[76] Barnett, V., and Price, T. (1995). Outliers in Statistical Data, 3rd ed. Wiley, New York.
[77] Pregibon, D. (1981). "Logistic regression diagnostics." Annals of Statistics, 9, 705-724.
[78] Belsley, D., Kuh, E., and Welsch, R. (1980). Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. Wiley, New York.
[79] Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. (1984). Classification and Regression Trees. Wadsworth, Belmont, CA.
[80] Steinberg, D., and Cardell, N. (1999). "The Hybrid CART-Logit model in classification and data mining." Salford Systems. Available at http://www.salfordsystems.com.
[81] Everitt, B. S. (1993). Cluster Analysis. Arnold, Sevenoaks, UK.
[82] Green, D., and Swets, J. (1974). Signal Detection Theory and Psychophysics, rev. ed. Krieger, Huntington, NY.
[83] Briand, L., El Emam, K., Maxwell, K., Surmann, D., and Wieczorek, I. (1999). "An assessment and comparison of common software cost estimation models." Proceedings of the 21st International Conference on Software Engineering, ICSE 99, Los Angeles, CA, pp. 313-322.
[84] Mayrand, J., and Coallier, F. (1996). "System acquisition based on software product assessment." Proceedings of ICSE '96, Berlin, Germany, pp. 210-219.
[85] Tegarden, D. P., Sheetz, S. D., and Monarchi, D. E. (1995). "A software complexity model of object-oriented systems." Decision Support Systems, 13, 241-262.
Software Fault Prevention by Language Choice: Why C is Not My Favorite Language

RICHARD FATEMAN
Computer Science Division
Electrical Engineering and Computer Sciences Department
University of California—Berkeley
Berkeley, California 94720-1776
USA
[email protected]
Abstract How much does the choice of a programming language influence the prevalence of bugs in the resulting code? It seems obvious that at the level at which individuals write new programs, a change of language can eliminate whole classes of errors, or make them possible. With few exceptions, recent literature on the engineering of large software systems seems to neglect language choice as a factor in overall quality metrics. As a point of comparison we review some interesting recent work which implicitly assumes a program must be written in C. We speculate on how reliability might be affected by changing the language, in particular if we were to use ANSI Common Lisp.
1. Introduction and Background                          168
2. Why Use C?                                           169
3. Why Does Lisp Differ from C?                         171
4. Root Causes of Flaws: A Lisp Perspective             173
   4.1 Logic Flaws                                      173
   4.2 Interface Flaws                                  178
   4.3 Maintainability Flaws                            179
5. Arguments against Lisp, and Responses                179
6. But Why is C Used by Lisp Implementors?              185
7. Conclusion                                           185
Appendix 1: Cost of Garbage Collection                  186
Appendix 2: Isn't C free?                               187
Acknowledgments and Disclaimers                         187
References                                              188
1. Introduction and Background
In a recent paper, Yu [1] describes the kinds of errors committed by coders working on Lucent Technologies' advanced 5ESS switching system. This system's reliability is now dependent on the correct functioning of several million lines of source code.^1 Yu not only categorizes the errors, but enumerates within some categories the technical guidelines developed to overcome problems. Yu's paper's advice mirrors, in some respects, the recommendations in Maguire's Writing Solid Code [2], a book brought to my attention several years ago for source material in a software engineering undergraduate course. This genial book explains techniques for avoiding pitfalls in programming in C, and contains valuable advice for intermediate or advanced C language programmers. It is reminiscent of (and acknowledges a debt to) Kernighan and Plauger's Elements of Programming Style [3]. Maguire's excellent lessons were gleaned from Microsoft's experience developing "bug-free C programs" and are provided as anecdotes and condensed into pithy good rules. The key emphasis in Yu's paper as well as Maguire's book is that many program problems are preventable by individual programmers or "development engineers" and that strengthening their design and programming capabilities will prevent errors in the first place. Yet the important question that Yu and his team, as well as Maguire, never address is this simple one: "Is the C programming language appropriate for the task at hand?" We, perhaps naively, assume that the task is not merely "write a program that does X." It should be something along the lines of: Write a correct, robust, readable, documented program to do X. The program should be written so that it can be modified, extended, or re-used in the future by the original author or others. It is good (and in some cases vital) that it demonstrate efficiency at run-time in time and space, machine independence, ease of debugging, etc. The task might also include incidental constraints like "Complete the program by Tuesday." For obvious reasons, for purposes of this paper we are assuming that the task constraints do not include "You have no choice: it must be written in C."

^1 It would be foolhardy to rely on the perfection of such a large and changing body of code. In fact, the code probably does not function correctly. A strategy to keep it running is to interrupt it perhaps 50 times a second. During these interruptions checks and repairs are made on the consistency of data structures before allowing the resumption of normal processing. Without such checks it is estimated that these systems would crash in a matter of hours.

It is unfortunate that this constraint is implicit in much of what has been
written, and that for many programmers and writers about programming it is nearly subconscious: so much so that problems that appear only in C are apparently thought to be inherent in programming. While the C programming language has many virtues, it seems that the forced selection of this language directly causes many of the problems cited by Yu, specifically when the goal is to produce reliable programs in a natural way. Many of us are well aware that the Department of Defense made the determination that for building reliable real-time embedded programs, C was not a suitable language. The resulting engineering process gave birth to the language Ada.^ Ada has not caught on in civilian programming for a variety of reasons. Rather than examining the C/Ada relationship, here we will look primarily at a comparison of C to Common Lisp, a language we think has many lessons for how to support software engineering in the large. While Common Lisp is widely used and highly regarded in certain niches, it is not a mainstream programming language.
2. Why Use C?
C evolved out of the expressed need to write programs to implement in a moderately high-level language the vast majority of operating systems functionality for the UNIX operating system for the 16-bit PDP-11 computer. It was in turn based on the language "B" used for UNIX on the PDP-7 computer. The intent, at least after the initial implementation, was expanded to try to make this code nearly machine independent, despite the numerous PDP idioms that show through. UNIX and C together have evolved and spread to many different computer architectures. C in particular has also generated successor languages in which one usually sees many of the original choices that were incorporated in C, combining ideas of data structuring (object oriented), economy of expression, and program control flow, with a particular syntactic style. The human/computer design balance in which C and UNIX originated probably made good sense in the early 1970s on many computers. C even looked avant garde in 1978 when Digital Equipment Corp's VAX 11/780 computer became a popular successor to the PDP-11. The manufacturer's operating system was written in a mixture of lower-level languages (Assembler, BLISS) and so C seemed "high level." In fact, DEC (now Compaq)'s Alpha OPEN-VMS software continues to include substantial BLISS source code. ^How much better would the situation be if 5ESS were written in Ada? That would be another paper, I think.
C worked well when computers were far more expensive than today: a standard configuration VAX of that time would be a 256-kB, 1-MIPS machine with optional (extra cost) floating-point arithmetic. In 1978 such a machine supported teams of programmers, a screen-oriented editor was a novelty, and at UC—Berkeley, much of the Computer Science research program was supported on just one machine. C has certainly endured, and this is a tribute to the positive qualities of the design: it continues to occupy a certain balance between programming effort and efficiency, portability versus substantial access to the underlying machine mechanisms. Even its strongest advocates must acknowledge that C is not "optimal": certainly smaller code could be provided with byte codes, and faster code by programming in assembler. A strong practical support for C is the fact that it is nearly universally implemented on computing platforms, being available on many of them in a refined development environment. Add to these rationales those provided by employers in choosing C: There is a relative abundance of C programmers coming from school. There is an expectation that established programmers will know C. In fact this contributed to the design of Java, whose syntax is based in part on the assumption that programmers would find a C-like syntax comfortable. However, times have changed. Today we expect a single programmer to command a machine 400 times larger in memory, and 400 times faster than that in 1978. Why should we expect a language design oriented to relatively small code size, oriented toward an environment in which simplicity of design dominates robustness, to continue to be an appropriate choice? Why is it used at Berkeley? Many faculty know C fairly well. We often use UNIX in some form, and even Microsoft Windows or Macintosh systems provide C. C is "good enough" for many student projects. It is at a low-enough level that the transition from C to assembler can be used easily in a tutorial fashion to demonstrate the relationship of higher-level language notions to their implementation at the level of machine architecture. By being the implementation language for the UNIX operating system, additional programming in C provides access to nearly every feature short of those few machine-dependent concepts available only to the assembly-language programmer. Unfortunately, class projects lead students to believe that this is the way it should be, even though nearly all aspects of the project violate real-world programming task requirements. How many real projects have perfectly defined and idealized requirements specified in "the assignment"? How many projects would be deemed complete and given a passing grade when they show first signs of producing a correct answer? A probable typical student project is unreliable, under-designed, under-documented "demoware." It's also written in C. While the real world leaves behind so many aspects of the student project, why should the programming language still be the same?
While C++ as well as Java and class libraries have changed the outlook of programmers in dealing with complexity through object orientation (and Java has taken a major positive step in automatic storage allocation), there are still areas of concern: these languages seem to be major sources of inefficiency in programming effort, ultimately reflected in the difficulty of using them in building correct large systems.
3. Why Does Lisp Differ from C?

Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified bug-ridden implementation of half of Common Lisp.
— Philip Greenspun, 10th rule of programming
Today's Common Lisp is descended from Lisp 1.5 of 1960, one of the oldest languages in use today,^ and yet Common Lisp is in some respects one of the newest languages. Today it is defined as a 1994 ANSI standard (X3J13). Most of the evolution since 1960 was driven by programmers optimizing their own productivity environment. Compared to commercial installations of the time, little emphasis was placed on efficient batch processing. Instead, memory and computation resources were deployed specifically for programmer support. This meant time-sharing when others were using batch. This meant single-user workstations when others were using time-sharing. This meant graphical interfaces when others were using text-line interfaces. In a typical development artificial intelligence project, one or a few programmers would set to the task of building a fast prototype to try out ideas. Often this required the building of a kind of new application-specific "language" on top of the Lisp foundation."^ The notion of reliability was rarely a goal, typically being less important than flexibility,^ but tools for debugging were always a very high priority. In academia and in industrial research laboratories, often the most advanced programming environments were developed on Lisp systems, including those at Xerox, BBN, Symbolics, MIT, Stanford, Carnegie-Mellon, and here at UC—Berkeley. ^Only the Fortran heritage is longer. "^The tradition of bottom-up programming in functional languages means that the components tend to be testable in relative isolation, they are more likely to be reusable, and this leads to a greater level of flexibility when the higher-level functionality is implemented. Often this is combined with a top-down design philosophy. ^The ease of prototyping in a language is key: in "Accelerating Hindsight, Lisp as a Vehicle for Rapid Prototyping" Lisp Pointers, 7, 1-2, Jan-Jun 1997, Kent Pitman articulates the reasons. In brief, early review and discovery of problems lead to a rapid realization of what needs to be fixed. Since hindsight is "20-20" this early feedback leads to better results. In the traditional, but now usually disregarded model of software development (the waterfall model) critical problems are discovered rather late in the development cycle.
In my opinion this evolution has matured to support the tasks of design and programming addressed professionally.^6 In our experience, a C programmer first writing in Lisp will use only that subset of tools already existing in C, and thus may initially write rather poor (nonidiomatic) Lisp. A fair comparison of programming languages requires somewhat more than finding the common subset of them. We believe that reaching a given level of productivity and proficiency can be aided by today's Lisp language design. This problem of writing in a familiar form can be observed more generally. In a Web-based tutorial on Lisp, Robert Strandh of the University of Bordeaux^7 expands upon the common observation that students (and indeed others) are often inefficient in their work. Instead of learning how to use tools properly, they flail ineffectively with what they already know. He suggests that people can be divided into perfection-oriented and performance-oriented groups:

The people in the category perfection-oriented have a natural intellectual curiosity. They are constantly searching for better ways of doing things, new methods, new tools. They search for perfection, but they take pleasure in the search itself, knowing perfectly well that perfection can not be accomplished. To the people in this category, failure is a normal part of the strive for perfection. In fact, failure gives a deeper understanding of why a particular path was unsuccessful, making it possible to avoid similar paths in the future. The people in the category performance-oriented, on the contrary, do not at all strive for perfection. Instead they have a need to achieve performance immediately. Such performance leaves no time for intellectual curiosity. Instead, techniques already known to them must be applied to solve problems. To these people, failure is a disaster whose sole feature is to harm instant performance. Similarly, learning represents the possibility of failure and must thus be avoided if possible. To the people in this category, knowledge in other people also represents a threat. As long as everybody around them use tools, techniques, and methods that they themselves know, they can count on outperforming these other people. But when the people around them start learning different, perhaps better, ways, they must defend themselves. Other people having other knowledge might require learning to keep up with performance, and learning, as we pointed out, increases the risk of failure. One possibility for these people is to discredit other people's knowledge. If done well, it would eliminate the need for the extra effort to learn, which would fit very well with their objectives.

^6 Lisp can also be used to great advantage by novices: for example, a simplified version of Lisp (Scheme) is a popular pedagogical language. This is not our concern here.
^7 Available at http://dept-info.labri.u-bordeaux.fr/~strandh/Teaching/MTP/Common/Strandh-Tutorial/Dir-symbolic.html.
Of course this is a simplification, and individuals normally contain aspects of each category; as an example, a perfectionist mathematician may be performance-oriented when it comes to computing.
4. Root Causes of Flaws: A Lisp Perspective
Our thesis is that the C programming language itself contributes to the pervasiveness and subtlety of programming flaws, and that the use of Common Lisp would benefit the program implementation and maintenance effort. Yu's paper [1] on problems in the 5ESS system indicates 10 major coding fault areas (and an extra "other" category) and gives proposed countermeasures. Not all the countermeasures are easily applied, regardless of language. In particular, how is one to achieve "better thinking" or "more time" or "better education"? Such sections we will not address here. We will look at the other coding fault areas given in each of the remaining major sections. We emphasize, along with Yu, three of these that account for more than 50% of the total. We spend most of our space on the first of these, partly to keep this paper from ballooning out of reasonable length.
4.1 Logic Flaws
The largest area was logic flaws, accounting for 19.8% of the faults encountered. These are errors that occur when the control logic causes a branch to an incorrect part of the program or logically computes an incorrect value. How many of these are easily (we are tempted to say, automatically) corrected by using a language better adapted than C to writing programs that are more often correct? (We give examples in Lisp when appropriate.)
4.1.1 L1. Initialize All Variables before Use
This is done automatically by Lisp for ordinary scalar local variables when created. Initial default values can be specified for every array. Declarations and initializations of global variables can be done via defvar, defconstant, or defparameter depending on how "constant" they are. Arrays can be initialized as well.
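As a minimal sketch of the point (the names +max-trunks+, *retry-limit*, *call-count*, and reset-table are invented for illustration), Common Lisp definitions carry their initial values with them:

    (defconstant +max-trunks+ 1024)      ; a true constant
    (defparameter *retry-limit* 3)       ; a global, reset on each reload
    (defvar *call-count* 0)              ; a global, initialized only once

    (defun reset-table ()
      (let ((count 0)                                             ; locals get explicit initial values
            (table (make-array +max-trunks+ :initial-element 0))) ; arrays get a default element
        (values count table)))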
4.1.2 L2. Control Flow of Break and Continue Statements
Conditional control flow with if, case, and cond is clearly indicated in correctly indented code, and Lisp code is correctly indented in the normal
development environment. The traditional complaint of non-Lisp programmers that there are too many parentheses is simply not an issue: A programmer types as many parentheses as necessary, watching a suitable editor "flashing" the balancing parenthesis of a construct, and indenting as necessary. Errors in structure are easily detected. Beyond this, one can do far better with proactive editor assistance, as suggested by Fry [4], in making sure that coding reflects the expected control flow. Presumably one of the C problems being cited by Yu is that break and continue statements can occur in expressions deeply nested inside the switch or for statements to which they refer. Thus you end up with what amounts to a goto statement but one whose target is not apparent. Worse yet someone editing the code may not see your break or continue statement and surround it with another switch or for statement, thus inadvertently changing the target. Lisp has a similar problem with the return form, which can appear inside various constructions (officially those that have a "block" body: let, let*, prog, do, do*, dotimes, dolist among others). With a deeply nested return you may not be able to tell which form it's returning from (especially with user-defined macros surrounding the form). It's good Lisp practice in any situation in which it is not entirely obvious what the target of a return is to use the named block statement and convert the return to an explicit return-from with the label of the block. With C if you want to be sure of getting to some place you must use the goto statement, with all the baggage that that might entail.
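A brief sketch of that practice (find-first-negative and the block name scan are made up): the named block makes the target of the early exit explicit, no matter how deeply the return is nested:

    (defun find-first-negative (rows)
      (block scan                          ; the explicit exit target
        (dolist (row rows)
          (dolist (x row)
            (when (minusp x)
              (return-from scan x))))      ; unambiguous even two loops deep
        nil))                              ; value when nothing is found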
4.1.3 L3. Check C operator associativity and precedence
The first example given in Yu's paper (simplified here) was if (x->y.z & r == s) ..., which should have been if ((x->y.z & r) == s) .... This would be expressed in Lisp approximately as

    (if (equal (logand (slot-value (slot-value x y) z) r) s) ...)

where we assume a corresponding encoding of structures in C and Lisp, and that x is an object of type y. There are neater ways of encoding structures and accessors that would look different from the use of slot-value, so this is only an approximation. Other examples in Yu's paper include bugs based on a programmer's misunderstanding of the order of various operations with respect to incrementation (and of course the implicit agreement of other programmers who have walked
through the code as to the misinterpretation): *n++, which should have been (*n)++.
Of course much of this is (he argues) bad practice in C coding: even if the programmer had gotten it right the first time, the next human reader of the code might misunderstand it. In fact, one could argue that in all possible places a pair of parentheses, even those that are unnecessary, should be inserted in properly engineered code. This is a particularly irksome language issue. Note that the K&R C programming language has 15 precedence levels, of which 3 classes of operator are right-to-left associative. The symbol * occurs in TWO levels, the characters + and > in various combinations each occur in THREE distinct levels, and the character - occurs in FOUR levels. By contrast, all operators in Lisp are delimited prefix operators with no associativity or precedence. Even C's a * b + c, which might not involve much mystery, is arguably clearer as Lisp's (+ (* a b) c). If you doubt such clarity helps, ask a C programmer to explain: a**b+++c. How sure?
4.1.4 L4. Ensure Loop Boundaries Are Correct and L5. Do Not Overindex Arrays
Lisp has no perfect solution because off-by-one errors cannot be removed syntactically in general. However, it is possible via standard looping constructs to make it clear that the number of iterations corresponds to the number of elements in a set or elements in an array (Common Lisp has the notion of a sequence that includes lists and arrays. Some constructs are available that work on either data structure.): (dotimes (i 5) (f i)) computes (f 0) through (f 4). If A is any sequence (list, array), then (dotimes (i (length A)) ..(elt A i)..) will refer to each element in A. For sets represented as lists, there are alternative forms of iteration such as (dolist (i '("hello" "goodbye")) (g i)). There is also the more recently introduced modern functional mapping construct (map), which takes one argument to specify the result type, a function f of n arguments to be applied, and n sequences. Thus (map 'vector #'+ #(1 2 3) #(4 5 6)) produces #(5 7 9). Numerous functions are provided to search, select, sort, and operate on sequences. The meaning of the operation does not require the decoding of a potentially unfamiliar and possibly erroneous C idiom. Instead it relies on the understanding of a function on sequences such as remove-duplicates. While we are talking about sequences, we should observe that other storage types are available in the language: there is a hash-table primitive data type.
Other kinds of logical termination conditions can be imposed by additional iteration constructs. There are several common macro packages that seek to make looping "easier" by interspersing key words like until or unless with accumulation operations like sum or collect.
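For instance, the loop facility that ANSI Common Lisp standardized reads almost as a description of the iteration; the particular list and clauses below are only an illustrative sketch:

    (loop for x in '(3 9 4 12 5)
          until (> x 10)                          ; logical termination condition
          when (oddp x)
            collect x into odds                   ; accumulation clauses
          sum x into total
          finally (return (values odds total)))   ; => (3 9) and 16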
4.1.5 L6. Ensure Value of Variables Is Not Truncated
In C if a wide value (say 16 bits) is assigned to a narrow storage spot, some bits are lost, apparently without being noticed. This cannot happen in Lisp in assigning values to variables since variables will ordinarily take on "any" values. That is, (setf x y) does not ever change or truncate y. If one stores a value in an object defined using CLOS,^8 then one has rather substantial freedom in checking any attributes of the value being deposited by the setf method, and if it matters, this should certainly be checked. In properly engineered code it is likely that one would not be satisfied with a type check, but plausible ranges or other assertions might be checked as well. This could be done (as they say, "transparently") because the process of setting values can be overloaded. Although setf can be compiled down to a single instruction in the simplest case, it is not confined to be such a simplistic implementation as "=" in C. At one time I would have felt compelled to defend some level of overhead in CLOS as being a reasonable price to pay for full-fledged object orientation. Given the advent of C++ and Java, it seems the battle has been fought elsewhere and apparently won.^9
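As a hedged illustration of such transparent checking (the class trunk-record, its slot channel, and the 16-bit range are all hypothetical), one can attach a :before method to the slot writer that defclass generates:

    (defclass trunk-record ()
      ((channel :accessor channel :initform 0)))

    (defmethod (setf channel) :before (new-value (r trunk-record))
      ;; refuse anything that would not fit the intended 16-bit field
      (unless (typep new-value '(unsigned-byte 16))
        (error "channel value ~S out of range" new-value)))

    ;; (setf (channel (make-instance 'trunk-record)) 70000)  ; signals an error rather than truncating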
4.1.6 L7. Reference Pointer Variables Correctly, L8. Check Pointer Arithmetic, and L9. Ensure Logical OR and AND Tests are Correct
Yu does not give an example, but many C programs have such bugs when first written, and detecting them is painful. Lisp does not have "pointer variables," and it does not do pointer arithmetic, so incorrectly incrementing pointers does not happen. Dereferencing pointers cannot be done incorrectly because it is not done at all.

^8 The Common Lisp object system.
^9 The object system in Common Lisp is more general than that in Java, C++, Smalltalk, and Simula. Among other features, CLOS has multiple dispatch, meaning that the operation being invoked can be selected using the types of all of its arguments. It also supports multiple inheritance, available in C++ but in Java only via interfaces. Some features of CLOS are surprising: dynamic class definition allows one to (for example) add slots and methods to a class after instantiating some elements! Common Lisp also has its meta object protocol (MOP), which can be used to build both more targeted and efficient or more elaborate and general object systems.
Logical operations on bitstrings are done using logand, logior, and logxor, and Lisp provides a full selection of logical bit operations. Truth-valued decisions can be made with and and or as well as not. These are all delimited prefix operators. They are unlikely to be confused with the masking operations, since they have substantially different names, not formed by stuttering one character. C's use of any nonzero value as a Boolean true has limited appeal if you are concerned with readability. In Lisp the value NIL is the only false value.
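A small sketch of the distinction (the operand values are arbitrary):

    (logand #b1100 #b1010)    ; => 8  (#b1000), a bitwise mask
    (logior #b1100 #b1010)    ; => 14 (#b1110)
    (logxor #b1100 #b1010)    ; => 6  (#b0110)
    (and 12 10)               ; => 10, truth-valued: all arguments non-NIL, returns the last
    (or nil 10)               ; => 10, the first non-NIL value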
4.1.7 L10. Assignment and Equal Operators
C uses the easily confused = and == syntax. Lisp uses the rather distinct setf and equal operations. In fact there are some alternatives to equal depending upon what is being compared. The nuances of eq and eql are relevant for optimization, but probably not of concern here.
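For example (the bindings here are purely illustrative):

    (let ((x (list 1 2 3)))
      (setf (first x) 1)            ; assignment is always setf, never "="
      (values (equal x '(1 2 3))    ; => T,   structural comparison
              (eq x (list 1 2 3))   ; => NIL, a freshly consed list is a different object
              (eql 3 3)))           ; => T,   same number of the same type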
4.1.8 L11. Ensure Bit Field Data Types Are Unsigned or Enum
Lisp has bit strings; an enumerated data type can be defined, but would probably be handled via abstraction. Small sets are often represented by lists, but could be stored in hash tables or trees or other structures, depending on efficiency criteria.
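One plausible rendering of an enumerated type (the type name and states are invented for this sketch) combines a member type with check-type rather than an unsigned bit field:

    (deftype call-state () '(member :idle :ringing :connected :released))

    (defun record-state (s)
      (check-type s call-state)     ; signals a correctable error on anything else
      s)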
4.1.9 L12. Use Logical AND and Mask Operators as Intended
This probably refers to the confusing syntactic notation for masking operations in C. In Lisp this is done by the usual parenthesized prefix. While this does not entirely prevent misunderstanding, prefix and and logand are more distinct than C's infix & and &&.
4.1.10 L13. Check Preprocessor Conditionals
There is no example of preprocessor conditional errors in Yu's paper, but we can imagine that this is partly an extension of C's confusing conditionals applied to the preprocessing stage. Conditional code expansion based on the environments at compile-time and source-file-read-time is provided in Lisp through various macro capabilities. The potential confusion of multiple configurations can be a source of errors in any case, and we're not sure Lisp has a lock on a fix here.
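A minimal sketch of read-time conditionalization (the :my-debug feature name is made up):

    ;; make the feature known so that #+my-debug sections are read and compiled in
    (eval-when (:compile-toplevel :load-toplevel :execute)
      (pushnew :my-debug *features*))

    (defun checked-div (a b)
      #+my-debug (assert (not (zerop b)))   ; present only when the feature is set
      (/ a b))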
4.1.11 L14. Check Comment Delimiters
Lisp has several kinds. Since my comments are displayed in the editor in a color different from that of program text, it is hard to confuse them on the screen. I do not understand why this elementary tool has somehow been lost in the 5ESS programmers' environment. Perhaps monochromatic hardcopy is the primary source code repository, and comments are not displayed in a distinct manner. One might think that the use of a particularly dull editor, one unable to tell that it was displaying comments or program, could be to blame. In any case, in C it's hard to see where a comment ends in large comments, and the comments in C don't nest—you can't easily comment out a function that itself contains a comment. Lisp has comments "to the end of the line" as well as bracketing comments.
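The two styles, shown briefly (the trailing function is only a placeholder):

    ;; a comment that runs to the end of the line
    #| a bracketing comment,
       #| which may itself nest, |#
       so a definition that contains comments can be commented out safely |#
    (defun placeholder (x) x)   ; code resumes here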
4.1.12 L15. Checking the Sign of Unsigned Variables
There are none in Lisp. Variables don't have signs. Numeric values have signs, but asking for the sign of a bitstring or some other encoding that is not a number is an error.
4.1.13 L16. Use 5ESS Switch Defined Variables Properly
There would likely be some variation of this issue in any implementation language.
4.1.14 L17. Use Cast Cautiously
Yu's paper describes bugs caused by number conversion/truncation using casts. Why use cast at all? Are we saving bits? Presumably the storage of data in records would be done by an assignment, or perhaps a write into a file. Basic data types in Lisp are manifest. One can ask of a value "are you an integer?" and then use it appropriately. One can also produce a new value by coercion: say of an integer to a character. One cannot refer to a primitive value of one type through storage equivalence as though it were another in legal code. If cast in C (to support untagged union types) is used to squeeze the most out of storage, it should make any programmer think twice: it's not a great idea in the first place, but at least one would hope that proper support of data abstractions as well as the use of explicit tags would reduce this source of error.
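A hedged sketch of this manifest-type style (describe-value is an invented name, and code-char assumes an ASCII-compatible character set):

    (defun describe-value (v)
      (typecase v
        (integer (format nil "integer ~D, as a character: ~A" v (code-char v)))
        (float   (format nil "float ~F" v))
        (t       (format nil "something else: ~S" v))))

    ;; (describe-value 65) => "integer 65, as a character: A"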
4.2 Interface Flaws
This class of flaws consists of apparent disagreements between function definitions and their uses. The caller assumes an argument is a pointer, but the function
disagrees. A consequence of some such disagreements can be that an erroneously passed copy of a large structure may overflow a stack. Many of these errors would not occur in Lisp, although there is still the possibility of using arguments in the wrong order, or simply calling the wrong function. Rather than insisting that functions with no return values be declared of return type void, it has been historically convenient in Lisp to decide that every function returns a value; if nothing else comes to mind, perhaps a condition code. Common Lisp allows multiple returned values (any number including 0 values), which removes the necessity for "in/out" or "output parameters" in argument lists. We discuss this "functional" orientation again when we provide arguments against Lisp, but for now, let us say that Lisp allows interfaces that are rather more versatile, allowing optional, keyword, and default arguments. Argument-count checking can be done at compile time and also enforced at runtime.
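For instance, a sketch built on the standard parse-integer, which already returns two values (the wrapper read-count and its defaults are invented):

    (defun read-count (line &key (start 0) (junk-allowed t))
      "Return the integer found in LINE and the position where parsing stopped."
      (parse-integer line :start start :junk-allowed junk-allowed))

    ;; (multiple-value-bind (n end) (read-count "42 trunks")
    ;;   (list n end))                                        ; => (42 2)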
4.3 Maintainability Flaws
Major flaws in maintainability seem to include insistence on extra parentheses and bracketing to guard against the case of insertion of statements breaking control flow. That is, in C one should write if (a) {b;} just in case a statement is later inserted before or after the statement b. The otherwise correct if (a) b; is not as easily maintained. The Lisp cond has no such problem.
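A brief sketch (line-ready-p, log-event, connect, and queue-call are hypothetical): each cond branch is already a sequence of forms, so inserting another action changes no delimiters elsewhere:

    (cond ((line-ready-p line)
           (log-event line)           ; this form was added later
           (connect line))            ; without touching any brackets around the branch
          (t (queue-call line)))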
5. Arguments against Lisp, and Responses
We have heard the argument that Lisp is slow because it is interpreted, or is bad because it uses a garbage collector (GC) for storage reallocation. This is hardly tenable when Java is being promoted as a substitute for C, or when heuristic garbage collectors are promoted for C or C++.^10 The pauses that plagued old Lisp systems during GC are no longer likely: a commercial Lisp garbage collector is likely to be based on a quite efficient "generational" scavenger. In an interactive environment, time-sharing delays, network transmission delays, and computation time are likely to be of the same general time scale as pauses for GC. Real-time collectors (say, restricted to 10-ms time slices) are perfectly feasible.^11 In long-running "batch" jobs, GC delays are not of concern in any case. Lisp is now smaller than some net browsers or editors, and fits in memory that costs a few dollars at your corner computer store. Some Lisp systems can produce run-time executable code packages trimmed to exclude most development

^10 Available at http://www.hpl.hp.com/personal/Hans-Boehm/gc/.
^11 See Appendix 1.
features, most particularly the compiler and debugging tools; further trimming can be done if it is possible to detect at "dump" time that eval and its friends cannot be used, and that the only functions used are those invoked explicitly or implicitly by user code. It is not always possible to eliminate every bit of code not needed in an application, and so these run-time systems are rarely as small as the "minimal C code" needed to perform a simple task. (One could eliminate the garbage collector if one knew that only a small amount of store was ever needed. Deducing this automatically would be rather difficult.) As one mark, the minimal run-time-only binary from a commercial Lisp vendor, Franz Inc., is about 750 kB. For typical commercially supported Lisp systems one may need to pay a license fee to redistribute run-time-only binaries. This is sometimes cited as a factor in academic software projects' decisions to avoid Lisp, although the rationale does not bear close scrutiny.^12 A license fee for redistribution of binaries is apparently not an issue in serious commercial Lisp-based software development where manpower and other costs dwarf the cost of buying such rights.^13 In fact if Lisp is properly considered not as a language, but as an "enabling technology," similar to say, a real-time OS (Wind River), or CORBA (Visibroker, etc.), or an object-oriented database (Poet or ODI), then fees or royalties are treated as an accepted norm related to the value added by the system. The reality is that availability and support on mission-critical issues (including updates as hardware and operating systems change) may simply be worth the price in the real world: the alternatives are limited or just as costly (i.e., building and maintaining a "free" implementation or purchasing from another vendor). While we may be used to a C compiler being free, it may actually be simply one that someone else nearby purchased. We address this further in Appendix 2. One might be concerned about error conditions—"What if the garbage collection procedure cannot find more memory?"—except that one must face (and in a bullet-proof program, solve) similar challenges about "What if malloc returns 0?" or for that matter "What if the run-time stack overflows?" Recovery from such situations inevitably is going to depend on features of the environment external to the language definition. Lisp as a system provides error-handling standards, and particular implementations may provide additional debugging or recovery tools. A system that has a simple description has just one advantage—namely simplicity—compared to a more sympathetic but more

^12 For fans of free software there is a GNU Common Lisp (GCL) as well as a CMU Common Lisp. Furthermore, the Lisp tradition is such that major vendors have "lite" Lisp packages free for the downloading.
^13 I am grateful for information on this topic from Franz Inc., J. Foderaro and Samantha Cichon, March 15, 1999.
complex system. This simplicity advantage rapidly disappears when the error handling must be written from scratch: simply crashing with "bus error" is not usually an adequate emergency action. While Lisp can be implemented interpretively, directly, or via a byte-code system, as can Java or C, today's Common Lisps are usually oriented to producing compiled machine code from user programs. Lisp speed in critical programs can be further optimized by advisory declarations. There is some evidence that execution time is comparable to compiled C [5]. Additionally, early compilation also provides extra checking on syntax, argument counts, semantic program analysis, etc. Functional programming is a perplexity in efficiency. In particular, the functional paradigm is favored by many Lisp programmers. While this leads to a kind of modularity that is helpful in debugging (in particular, tracing functions completely reveals the sequence of operations and operands), it can be wasteful. While programmers in C or other languages can use the same functional style, such a choice is somewhat less typical. Let us explain the situation. Assume that you have one instance of a complicated data structure denoted A. You write a loop that repeatedly updates A to be a new combination of the old A and the value of a variable i: say (dotimes (i n) (setf A (combine A i))). The ordinary interpretation of this would be to have Lisp construct a new object C where the value of C is (combine A i). Then A is set to "point to" the same structure as C. The old value of A then becomes garbage and is eventually reclaimed from memory. This happens n times, and so n versions of C are produced with n - 1 of them being discarded. By contrast, a state-oriented (not functional) style of programming would be to alter or update "in place" all the components of A, typically by "passing in A by reference." In this model there is never a "new" or an "old" A: just the single A. This appears to be economical in storage, and indeed unless the functional loop above is cleverly optimized or somehow finessed algorithmically, the functional applicative style of programming loses in terms of efficiency. There are three possible remedies in Lisp. The first is rarely useful: to declare that A is a dynamic-extent variable, and hope that the system will be clever enough to stack-allocate A. This is pretty hard to set up unless A is initialized to a constant: otherwise, it is not obvious that its initial value is unshared. The dynamic-extent declaration support seems to be most likely used for the processing of &rest arguments. More likely is that the compiler would not be able to make an effective optimization of such a declaration because the result of combine would be difficult to compute on the stack (unless it were perhaps a constant list). The second remedy, appropriate for management of a set of large objects, is to implement a kind of subset storage allocation method. For example, if one were
inclined to explicitly manage a collection of input-output buffers, one can set up a resource initialized to some number of fixed-length byte arrays, and use them one or more at a time via explicit allocation and deallocation. The payoff comes when a deallocated buffer is reallocated without being garbage collected. The mechanism can be implemented in standard Lisp in 18 lines of code in an example given by Norvig [6], and in another 10 lines, a with-resources macro is defined, regulating return of resources on exit from a dynamic scope. The final remedy is the most well-known historically among Lisp programmers, requiring attention to the concrete data-structure level. It lends itself to abuse and can contribute to debugging mysteries: using in-place alteration or so-called destructive operations.^14 Historically this was done by functions rplaca and rplacd but in Common Lisp these are more easily specified via the setf mechanism. Consider changing the second element of the list x = (R S T) to V. Here's how:

    (setf x '(R S T))        ==> (R S T)    ;; initialize
    (setf (second x) 'V)     ==> V
    ;; x ==> (R V T)

A functional program would create and return a NEW list (R V T) and leave the value of x alone. Any one of the lines below would do the job, returning as the value of y, the new list. The briefest is cryptic but no faster.

    (setf y (cons (first x) (cons 'v (rest (rest x)))))
    (setf y (cons (car x) (cons 'v (cddr x))))
    (setf y `(,(car x) v ,@(cddr x)))
Why use the functional version then? Changing the arguments to a function by a "side effect" is considered bad taste. It makes debugging more difficult: you can't fix a bug in function f and try out (f x) if x is broken by a bug in f. Thus, side effects are used by most Lisp programmers cautiously. Since C programmers may not be able to retry f so easily, this is really an indictment of the C (or any batch) programming environment. The C process includes "remaking" the world by recompiling f and perhaps other programs, reloading and reexecuting the whole test framework up to the point of the error. The Lisp programmer would edit f or make some other change, and type (f x). What about data types? Isn't it wasteful to store data in Lisp's linked lists? This depends on the alternatives, and how tight one is for space. Modern Lisp is not only about lists, but has arrays of small-numbers, single- or double-floats, bit-strings, 2-d bitmaps, character-strings, file handles, and a vast collection of

^14 This may sound dangerous, and it is. That is one reason that C is so error prone, because that is how virtually all C language programs with pointers are composed (that is, dangerously).
"objects" (including methods), etc. While C has some primitive raw objects, it is certainly possible that Lisp has the right mix of features at the right cost, and using its built-in data types can unleash a vast armamentum of program tools. Many Common Lisp implementations allow the definition, allocation, and manipulation of C structures directly, but this is used almost exclusively for communication with C libraries requiring such stuff, and rarely, if ever, for its own sake. With a sufficiently low-level approach one can build specialized datastructures that are more space-efficient than any higher-level language's normal structures, whether this is C or Lisp. We generally don't make much of such issues in comparisons: implementations of C typically waste some number of bits in each 32-bit pointer for machines that have an actual address space less than 4 GB.^^ The implementations also use 8-bit bytes for characters, when 7 or fewer bits^^ might be adequate. In almost all cases, the argument for space efficiency, even though proffered as a reason for using C, is rarely taken entirely seriously. If it were believed that a 10% improvement in speed or size were critical in competitive markets (say, in embedded systems where the vendor has control of all parameters: choice of CPU, etc.), then a strong argument exists in favor of assembly language, not C. In fact, critical components in Lisp implementations may be provided in assembly language, and the prospect exists for a programmer to write in assembly language within Lisp: after all, a typical commercial Lisp system has a compiler and assembler available even at runtime. The argument for assembly language programs where speed and size are truly critical still exists. We suspect that some C programmers, even though they will claim that C is "fast," fail to use the compiler's optimizer, and are therefore substantially slower than they could be! In such circumstances, any argument for speed is questionable. Norvig [6] attacks the common myth that Lisp is a "special purpose" language for artificial intelligence, whereas languages like Pascal [7] and C are "general purpose": Actually, just the reverse is true. Pascal and C are special-purpose languages The majority of their syntax is devoted to arithmetic and Boolean expressions, and while they provide some facilities for forming data structures, they have poor mechanisms for procedural abstraction or control abstraction. In addition, they are designed for the state-oriented style of programming: computing a result by changing the value of variables through assignment statements. [6, p. ix] ^^Even today, almost no programming systems have 2^^ bytes of RAM installed. Why do we not use 24-bit pointers, or even 16-bit "word-aligned" pointers? ^^If you can make do with upper-case letters and numbers you have 64 different values in a mere 6 bits.
Another point sometimes raised in justifying the use of C is its obvious compatibility with external libraries and programming interfaces supplied with an operating system. Since virtually all Lisps allow for the calling of "foreign" functions that may be in libraries (or, in extremis, written in assembler or C), this is not a serious barrier. Some Lisp systems come packaged with rather complete API setups, which are in effect the provision of the appropriately declared linkages from Lisp to the library. Programs requiring call-backs can also be handled. (A minimal sketch of such a foreign-function declaration appears at the end of this section.) A more significant issue may be the fact that the compilers directly supported by hardware manufacturers may evolve along with advances in the hardware, and these are likely to be compilers for C or (for scientific computing) Fortran. Thus, C-level access to MMX extensions is provided by Intel. Since those portions of the Lisp run-time system and library that need access to the hardware tend to be written in C, some of these improvements are incorporated in Lisp. We concede that user programs intended to directly access new hardware features as soon as they are released may need to be written in assembler or a language that has been extended in an appropriate way. That language today is likely to be C and/or Fortran.

A final issue is familiarity with languages. This has had entirely too much influence in language selection. All else being equal, it is sensible to use a programming language when there is a large market of relatively skilled programmers familiar with it. Are there Lisp programmers out there? All computer science graduates at UC Berkeley (as well as many nonmajors), about 900 per year, are introduced to the Lisp dialect Scheme. Many also learn C++ or Java. The most productive programmers may very well be those who find Lisp most attractive. We see companies that hire primarily on the basis of "experience in C programming" and quiz prospective hires on C-language obscurities. Such a strategy may fail to identify candidates with the key traits that eliminate the other causes of flaws: one would hope that companies wish to hire candidates of high intelligence who are capable of creative problem solving. Indeed, the strategy of quizzing on C obscurities may repel the very best and brightest.

As a variation on the theme of "we are writing in C because that is what more people know," we have heard anecdotally that it is difficult to assemble a high-quality team that can handle a mix of languages: if Lisp is introduced late into a project, or must interface to an existing library, then some percentage of the preexisting code (in C) must be "sucked in," requiring understanding of two languages. It is scary to think that some software producers view the key to productivity as targeting both their development system and their hiring practices at lower-quality programmers. While in some areas it may be advantageous to be able to hire in quantity, it has seemed fairly evident that overall programmer productivity favors quality.
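As promised above, here is a minimal sketch of a foreign-function declaration. It uses the portable CFFI library purely as one concrete example; vendor Lisps provide their own equivalent macros with different spellings, so treat the exact names as illustrative rather than canonical.

    ;; Load the CFFI foreign-function interface first, e.g.:
    ;; (ql:quickload "cffi")

    ;; Declare the C library routine strlen to Lisp: it takes a C string
    ;; and returns an int.  CFFI converts the Lisp string automatically.
    (cffi:defcfun ("strlen" c-strlen) :int
      (s :string))

    ;; Once declared, the C routine is called like any Lisp function:
    (c-strlen "hello world")   ;; => 11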
6. But Why is C Used by Lisp Implementors?
Some poking around shows that most, if not all, recent Lisp systems are implemented partly in C! Why? Because virtually all general-purpose hardware/operating system combinations offer C compilers and a way to interface to their operating system through C. Since one must "bootstrap" from something, C is more convenient and more easily portable than assembly code. Assembly language coding is, however, sometimes required to incorporate low-level machine descriptions when no other satisfactory method can be found, and usually a good compiler will need to know about the assembly-level operation codes of the machine it is compiling for. Above that minimal level, 95+% of Lisp is implemented in Lisp (or a Lisp subset). For example, we know of no instance in which a Lisp compiler is written in a language other than Lisp. In fact, we feel reasonably comfortable with the view that the C programming language, subject to the constraints of today's world, is a good vehicle for implementing that small kernel of a (presumably different and better!) programming language. The question we have addressed here can be reemphasized: once you or someone else has implemented that better language, why should you continue to write in C?
7. Conclusion
It is unfortunate that so much commercial programming has fallen into the trap of using an essentially low-productivity language, and of addressing its shortcomings by a combination of advice, exhortations, and maxims. While tools like version control and interactive development frameworks help to some extent, they do not correct language flaws. Would you consider undergoing surgery knowing that the tools in the operation included = and ==, and that the use of the wrong one would result in your death?

Significant complex applications have been programmed in Lisp, including Web-based commerce (stores and business-to-business), computer-aided design, document analysis, control and simulation systems, visual interfaces, and the traditional application areas including artificial intelligence, expert-system building, and programming language experimentation.

Since we are not aware of controlled experiments that demonstrate the cost-effectiveness of Lisp vs Java vs C, we are forced to rely primarily on anecdotal evidence, personal experience, and, most heavily, common sense. We expect that programming in Lisp will continue to be especially appropriate for time-critical delivery of reliable complex software. We also expect that when there is a full accounting of all costs for a project, it will be seen as cost-effective as well.
Appendix 1: Cost of Garbage Collection

For purposes of argument, let us make the hypothesis that a programmer could otherwise keep storage straight and do foolproof allocation and return of storage, without any programming overhead for recordkeeping (such as reference counts). It is certainly possible to do this with small programs, where we can get away with deferring all deallocations until the end of the run and letting the operating system free the storage at "no cost." If you do this right, you win. Winning is highly unlikely in the case of large, continuously running systems. In fact, such systems tend to be written with their own allocation programs (perhaps to keep a stock of particular sizes on hand and avoid running out when malloc fails), and they may use more storage, have more bugs, and be slower than a carefully crafted system. There is some evidence that rolling your own code will not be better than good implementations of "conservative garbage collectors" that heuristically guess at what might be collected: an attempt to partially mitigate the probability of storage leaks in C or C++. There are even Java GCs based on this idea. A comparison of these to the run-time cost of doing garbage collection properly requires a detailed analysis on particular benchmarks, quite beyond the scope of this paper. However, we will try to give some plausibility arguments to support our contention that the cost in all but highly unlikely scenarios will be quite small. We could even make an argument that GC will, for many realistic scenarios, be faster than direct use of malloc. We will, by hypothesis, assert that the GC algorithm is correct. The more sophisticated algorithms are not trivial, but these programs are reasonably mature, and have been beaten on mercilessly by many users for many years.

Let us discuss briefly the efficiency issues. There are two places to notice the cost. The historically obvious lumped cost of doing the garbage collection has already been mentioned, and is highly satisfactory. The generation-scavenging ideas that make possible a rather unobtrusive execution require that the system perform some recordkeeping so that the information needed for garbage collection is maintained in a consistent state. The technical requirement in modern generation-scavenging garbage-collection Lisp systems is that the programs must keep track of setf or other destructive changes to pointers in old space. In the case that a pointer from an old generation to new space is created, the system must make note of this garbage collection "root" that would otherwise not be known except by expensive scanning of old generations. No marking need be done for creating or modifying a pointer from new space. An important optimization is that no marking and therefore no checking is needed for the large percentage of variables that are stack allocated, local within
a function, and are naturally going to be used for marking if they are still on the stack when a GC is prompted. The added cost for a setf (from new space) is usually four instructions, most likely overlapped: a call (rather than an inline expansion, because adding to the bulk of the code weighs more heavily against performance than the call does; I am grateful to Duane Rettig of Franz Inc. for information on this matter), a load of the new-space border, a compare, and a conditional jump back. The less likely route is about 35 instructions (on a Pentium), when a pointer from old space must be renewed.
Appendix 2: Isn't C free?

It's not always the case that the free GCC (GNU C) compiler is the one you should use, but even so, an alternative C compiler is likely to have already been paid for. We have already mentioned the availability of open-source or GNU-licensed versions of Common Lisp systems (see the Association of Lisp Users home page for descriptions: www.elwood.com/alu/table/systems.htm). Does it make sense nevertheless to buy Lisp (and even buy new versions year after year)? We quote from a Lisp user (3/17/99) on the comp.lang.lisp newsgroup, L. Hunter, Ph.D., of the National Library of Medicine (Bethesda, MD, USA):

    I'd like to point out that it is equally important (or perhaps even more so) that someone be paid, and paid well, to make "industrial strength" versions of the language. Top notch programming language people are expensive, and I want as many as we can collectively afford to be working on LISP. Moving the language into the future, and even just keeping up with the onslaught of new platforms, standards, functions, etc., that we hardcore users need is not something that is likely to happen for free. Lisp is NOT Linux - there isn't nearly the motivation nor the broad need driving Lisp development.
ACKNOWLEDGMENTS AND DISCLAIMERS
Thanks for comments from John Foderaro and Duane Rettig of Franz Inc., as well as George Necula of UC Berkeley. Remaining errors of omission and commission are the author's own. The author also admits to not only liking Lisp, but to being one of the founders of Franz Inc., a vendor of Lisp systems and applications (www.franz.com). Although he has a potential to profit personally from the more widespread adoption of Common Lisp, he obviously thinks others have a potential to profit from using Lisp as well!
REFERENCES
[1] Yu, W. D. (1998). "A software fault prevention approach in coding and root cause analysis." Bell Labs Technical Journal, 3, 2, 3-21. Available at http://www.lucent.com/minds/techjournal/apr-jun1998/pdf/paper01.pdf. See also Yu, W. D., Barshefsky, A., and Huang, S. T. (1997). "An empirical study of software faults preventable at a personal level in a very large software development environment." Bell Labs Technical Journal, 2, 3, 221-232. Available at http://www.lucent.com/minds/techjournal/summer_97/pdf/paper15.pdf.
[2] Maguire, S. (1993). Writing Solid Code. Microsoft Press, Seattle, WA.
[3] Kernighan, B. W., and Plauger, P. J. (1974). The Elements of Programming Style. McGraw-Hill, New York.
[4] Fry, C. (1997). "Programming on an already full brain." Communications of the ACM, 40, 4, 55-64.
[5] Fateman, R., Broughan, K. A., Willcock, D. K., and Rettig, D. (1995). "Fast floating-point processing in Common Lisp." ACM Transactions on Mathematical Software, 21, 1, 26-62.
[6] Norvig, P. (1992). Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. Morgan Kaufmann, San Mateo, CA.
[7] Kernighan, B. W. (1981). "Why Pascal is not my favorite programming language." AT&T Bell Labs, Murray Hill, NJ. Available at http://www.lysator.liu.se/c/bwk-on-pascal.html.
Quantum Computing and Communication

PAUL E. BLACK, D. RICHARD KUHN, AND CARL J. WILLIAMS
National Institute of Standards and Technology
Gaithersburg, Maryland 20899 USA
[paul.black, kuhn, carl.williams]@nist.gov

Official contribution of the National Institute of Standards and Technology: not subject to copyright in the United States.

Abstract

A quantum computer, if built, will be to an ordinary computer as a hydrogen bomb is to gunpowder, at least for some types of computations. Today no quantum computer exists, beyond laboratory prototypes capable of solving only tiny problems, and many practical problems remain to be solved. Yet the theory of quantum computing has advanced significantly in the past decade, and is becoming a significant discipline in itself. This article explains the concepts and basic mathematics behind quantum computers and some of the promising approaches for building them. We also discuss quantum communication, an essential component of future quantum information processing, and quantum cryptography, widely expected to be the first practical application for quantum information technology.

1. Introduction
2. The Surprising Quantum World
2.1 Sidebar: Doing the Polarization Experiment Yourself
2.2 Returning to the Subject at Hand
2.3 The Four Postulates of Quantum Mechanics
2.4 Superposition
2.5 Randomness
2.6 Measurement
2.7 Entanglement
2.8 Reversibility
2.9 The Exact Measurement "Theorem"
2.10 The No-Cloning Theorem
3. The Mathematics of Quantum Mechanics
3.1 Dirac or Ket Notation
3.2 Superpositions and Measurements
3.3 The Polarization Experiment, Again
3.4 Expressing Entanglement
3.5 Unitary Transforms
3.6 Proof of No-Cloning Theorem
4. Quantum Computing
4.1 Quantum Gates and Quantum Computers
4.2 Quantum Algorithms
4.3 Quantum Error Correction
5. Quantum Communication and Cryptography
5.1 Why Quantum Cryptography Matters
5.2 Unbreakable Codes
5.3 Quantum Cryptography
5.4 Prospects and Practical Problems
5.5 Dense Coding
5.6 Quantum Teleportation
6. Physical Implementations
6.1 General Properties Required to Build a Quantum Computer
6.2 Realizations
7. Conclusions
Appendix
References

1. Introduction
Computer users have become accustomed to an exponential increase in computing speed and capacity over the past few decades. Gordon Moore observed in 1965 that chip capacity doubled every year. Although the growth rate has slowed to "only" doubling about every 18 months, the geometric increase predicted by "Moore's law," as it is called, has held for well over three decades. Today's high-end PCs have the same power as machines that were considered supercomputers not long ago.

Software advances have been equally dramatic, perhaps most familiar to the average user in the form of computer graphics. The crude colored dots and flat polygons in computer games of 20 years ago have been replaced by the near-photorealistic graphics of today's video games and movies.

An enormous amount of computing power is required for the complex software used in computer animations, molecular biology analyses, computational fluid dynamics, global climate and economic modeling, worldwide credit card processing, and a host of other sophisticated applications. The demands of these problem domains have led researchers to develop distributed computing systems harnessing the power of thousands, and in some cases more than a million, processors into clusters. Yet there are limits to this approach. Adding more processors increases the computing capacity of these clusters only linearly, yet many problems,
particularly in physics and computer science, increase exponentially with the size of their inputs. The computing demands of these problems seem to be inherent in the problems themselves; that is, the overwhelming consensus is that no possible algorithm executable on a Turing machine, the standard model of computing, can solve them with less than exponential resources in time, memory, and processors.

The doubling of computing power every 18 months has enabled scientists to tackle problems much larger than those in the past, but even Moore's law has limits. For each new chip generation, the doubling of capacity means that about half as many atoms are being used per bit of information; projected into the future, this trend reaches a limit of one atom per bit of information sometime between 2010 and 2020. Does this mean that improvements in computing will slow down at that point? Fortunately, the answer is "not necessarily." One new technology, quantum computing, has the potential not only to continue, but to dramatically increase, the rate of advances in computing power, at least for some problems. The key feature of a quantum computer is that it can go beyond the practical limits of the Turing machine model of computation. That is, there are problems that a quantum computer can solve efficiently that cannot, as far as anyone knows, be solved efficiently on a conventional computer (i.e., a classical Turing machine). This remarkable fact underlies the enormous power of quantum computing.

Why all the excitement now? In 1982 Richard Feynman pointed out [1] that simulating some quantum mechanical systems took a huge amount of classical resources. He also suggested the converse: if those quantum effects could be harnessed, they might be able to do a huge amount of computation. However, nobody had any idea how that might be done. At about the same time, David Deutsch tried to create the most powerful model of computation consistent with the laws of physics. In 1985 he developed the notion of a Universal Quantum Computer based on the laws of quantum mechanics. He also gave a simple example suggesting that quantum computers may be more powerful than classical computers. Many people improved on this work in the following decade. The next breakthrough came in 1994 when Peter Shor demonstrated [2] that quantum computers could factor large numbers efficiently. This was especially exciting since it is widely believed that no efficient factoring algorithm is possible for classical computers.

One limitation still dimmed the lure of quantum computing. Quantum effects are exceedingly fragile. Even at atomic sizes, noise tends to quickly distort quantum behavior and squelch nonclassical phenomena. How could a quantum computer undergo the hundreds or thousands of processing steps needed for even a single algorithm without some way to compensate for errors? Classical computers use millions and even billions of atoms or electrons
to smooth out random noise. Communication, storage, and processing systems measure and compare bits along the way to detect and correct small errors before they accumulate to cause incorrect results, distorted messages, or even system crashes. However, measuring a quantum mechanical system causes the quantum system to change. An important breakthrough came in 1996 when Andrew Steane, and independently Richard Calderbank and Peter Shor, discovered methods of encoding quantum bits, or "qubits," and measuring group properties so that small errors can be corrected. These ingenious methods use collective measurement to identify characteristics of a group of qubits, for example, parity. Thus, it is conceivable to compensate for an error in a single qubit while preserving the information encoded in the collective quantum state. Although a lot of research and engineering remain, today we see no theoretical obstacles to quantum computation and quantum communication.

In this article, we review quantum computing and communications: current status, algorithms, and problems that remain to be solved. Section 2 gives the reader a narrative tutorial on quantum effects and major theorems of quantum mechanics. Section 3 presents the "Dirac" or "ket" notation for quantum mechanics and mathematically restates many of the examples and results of the preceding section. Section 4 goes into more of the details of how a quantum computer might be built and explains some quantum computing algorithms, such as Shor's for factoring, Deutsch's for function characterization, and Grover's for searching, as well as error-correcting schemes. Section 5 treats quantum communication and cryptography. We end with an overview of physical implementations in Section 6.
2. The Surprising Quantum World
Subatomic particles act very differently from things in the everyday world. Particles can have a presence in several places at once. Also, two well-separated particles may have intertwined fates, and the observation of one of the particles will cause this remarkable behavior to vanish. Quantum mechanics describes these and other physical phenomena extraordinarily well.

We begin with a simple experiment that you can do with a few dollars' worth of equipment. Begin with a beam of light passing through a polarizer, as in Fig. 1. A typical beam, such as from the sun or a flashlight, has its intensity reduced by half. Suppose we add another polarizer after the first. As we rotate the polarizer, the beam brightens and dims until it is gone (real polarizers are not perfect, of course, so a little light always passes), as depicted in Fig. 2.
FIG. 1. Polarizer dims beam by half.
FIG. 2. Two orthogonal polarizers extinguish the beam.
Leaving the two polarizers at the minimum, add a third polarizer between them, as shown in Fig. 3. As we rotate it, we can get some light to pass through! How can adding another filter increase the light getting through? Although it takes extensive and elaborate experiments to prove that the following explanation is accurate, we assure you it is. Classical interpretations of these results are misleading at best. To begin the explanation, photons have a characteristic called "polarization." After passing through polarizer #1, all the photons of the light beam are polarized in the same direction as the polarizer. If a polarizer is set at right angles to polarizer #1, the chance of a photon getting through both polarizers is 0, that is, no light gets through. However, when the polarizer in the middle is diagonal to polarizer #1, half the photons pass through the first two polarizers. More importantly, the photons are now oriented diagonally. Half the diagonally oriented photons can now pass through the final polarizer. Because of their relative orientations, each polarizer lets half the photons through, so a total of 1/8 passes through all three polarizers.
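In round numbers (anticipating the cos²θ rule derived in Section 3.3), the transmitted fraction works out as

    1/2 (polarizer #1) × cos²45° (polarizer #3) × cos²45° (polarizer #2) = 1/2 × 1/2 × 1/2 = 1/8.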
2.1 Sidebar: Doing the Polarization Experiment Yourself
You can do the polarization experiment at home with commonly available materials costing a few dollars. You need a bright beam of light. This could be sunlight shining through a hole, a flashlight, or a laser pointer.
FIG. 3. A third polarizer can partially restore the beam!
For polarizers you can use the lenses from polarizing sunglasses. You can tell whether sunglasses are polarizing by holding two pairs, one behind the other, and looking through the left (or right) lenses in series. Rotate one pair of sunglasses relative to the other while keeping the lenses in line. If the scene viewed through the lenses darkens and lightens as one pair is rotated, they are polarizing. You can also buy gray light-polarizing glasses or plastic sheets on the World Wide Web.

Carefully free the lenses. One lens can be rigidly attached to a support, but the others must be able to rotate. Shine the light through polarizer #1. Put polarizer #2 in the beam well after polarizer #1. Rotate #2 until the least amount of light comes through. Now put polarizer #3 between #1 and #2. Rotate it until the final beam of light is its brightest. By trying different combinations of lenses and rotations, you can verify that the lenses are at 45° and 90° angles from each other.
2.2 Returning to the Subject at Hand
After we develop the mathematics, we will return to this example in Section 3.3 and show how the results can be derived. The mathematical tools we use are those of quantum mechanics. Quantum mechanics describes the interactions of electrons, photons, neutrons, etc. at atomic and subatomic scales. It does not explain general relativity, however. Quantum mechanics makes predictions on the atomic and subatomic scale that are found to be extremely accurate and precise. Experiments support this theory to better accuracy than any other physical theory in the history of science.

The effects we see at the quantum level are very different from those we see in the everyday world. So it should not come as a surprise that a different mathematics is used. This section presents fundamental quantum effects and describes some useful laws that follow from them.
2.3 The Four Postulates of Quantum Mechanics
Quantum mechanics is mathematically very well defined and is a framework for defining physical systems. This powerful framework defines what may and may not happen in quantum mechanical systems. Quantum mechanics itself does not give the details of any one particular physical system. Some analogies may help. Algebraic groups have well-defined properties, such as that operations are closed. Yet, the definition of a group does not detail the group of rotations in 3-space or addition on the integers. Likewise, the rules for a role-playing game limit what is and is not allowed, but don't describe individuals or scenarios. Quantum mechanics consists of four postulates [3, pp. 80-94].
Postulate 1. Any isolated quantum system can be completely mathematically characterized by a state vector in a Hilbert space. A Hilbert space is a complex vector space with an inner product. Experiments show there is no need for other descriptions, since all the interactions, such as momentum transfer, electric fields, and spin conservation, can be included within the framework. The postulates of quantum mechanics, by themselves, do not tell us what the appropriate Hilbert space is for a particular system. Rather, physicists work long and hard to determine the best approximate model for their system. Given this model, their experimental results can be described by a vector in this appropriate Hilbert space. The notation we will explain in Section 3 cannot express all possible situations, such as when we wish to track incomplete knowledge of a physical system, but it suffices for this paper. There are more elaborate mathematical schemes that can represent as much quantum information as we need.

Postulate 2. The time evolution of an isolated quantum system is described by a unitary transformation. Physicists use the term "time evolution" to express that the state of a system is changing solely due to the passage of time; for instance, particles are moving or interacting. If the quantum system is completely isolated from losses to the environment or influences from outside the system, any evolution can be captured by a unitary matrix expressing a transformation on the state vector. Again, pure quantum mechanics doesn't tell us what the transformation is, but provides the framework into which experimental results must fit. The corollary is that isolated quantum systems are reversible.

Postulate 3. Only certain sets of measurements can be done at any one time. Measuring projects the state vector of the system onto a new state vector. This is the so-called collapse of the system. From a mathematical description of the set of measurements, one can determine the probability of a state yielding each of the measurement outcomes. One powerful result is that arbitrary quantum states cannot be measured with arbitrary accuracy. No matter how delicately done, the very first measurement forever alters the state of the system. We discuss this in more detail in Section 2.6.

The measurements in a set, called a "basis," are a description of what can be observed. Often quantum systems can be described with many different, but related, bases. Analogously, positions in the geometric plane may be given as pairs of distances from the origin along orthogonal, or perpendicular, axes, such as X and Y. However, positions may also be given as pairs of distances along the diagonal lines X = Y and X = -Y, which form an equally valid set of orthogonal axes. A simple rotation transforms between coordinates in either basis.
Polar coordinates provide yet another alternative set of coordinates. Although it may be easier to work with one basis or another, it is misleading to think that coordinates in one basis are the coordinates of a position, to the exclusion of others.

Postulate 4. The state space of a composite system is the tensor product of the state spaces of the constituent systems. Herein lies a remarkable opportunity for quantum computing. In the everyday world, the composite state space is the product of the constituent spaces. However, quantum mechanical systems can become very complicated very fast. The negative view is to realize how much classical computation we need to simulate even simple systems of, say, 10 particles. The positive view is to wonder if this enormously rich state space might be harnessed for very powerful computations.
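To make the growth concrete (a standard dimension count, not specific to this article): the dimension of a tensor product is the product of the dimensions, so a system of n two-state particles (qubits) needs 2ⁿ complex amplitudes to describe.

    dim(H₁ ⊗ H₂ ⊗ ... ⊗ Hₙ) = dim(H₁) × dim(H₂) × ... × dim(Hₙ)
    n qubits:  2 × 2 × ... × 2 = 2ⁿ   (10 particles already require 2¹⁰ = 1024 amplitudes)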
2.4 Superposition
As can be seen from the polarization experiment above, very tiny entities may behave very differently from macroscopic things. An everyday solid object has a definite position, velocity, etc., but at the quantum scale, particle characteristics are best described as blends or superpositions of base values. When measured, we get a definite value. However, between measurement events, any consistent mathematical model must allow for the potential or amplitude of several states at once. Another example may provide a more intuitive grasp.
2.4.1 Young's Double-Slit Experiment
In 1801, using only a candle for a light source, Thomas Young performed an experiment whose results can only be explained if light acts as a wave. Young shined the light through two parallel slits onto a surface, as shown in Fig. 4, and saw a pattern of light and dark bands. The wavy line on the right graphs the result; light intensity is the horizontal axis, increasing to the right. This is the well-known interference effect: waves, which cancel and reinforce each other, produce this pattern.

Imagine, in contrast, a paintball gun pointing at a wall in which two holes have been drilled, beyond which is a barrier, as shown in Fig. 5. The holes are just big enough for a single paintball to get through, although the balls may ricochet from the sides of the holes. The gun shoots at random angles, so only a few of the paintballs get through. If one of the holes is covered up, the balls that get through will leave marks on the barrier, with most of the marks concentrated opposite the hole and others scattered in a bell curve (P1) to either side of the hole, as shown in the figure. If only the second hole is open, a similar pattern (P2) emerges
FIG. 4. Young's double-slit experiment.
FIG. 5. Paintballs fired at a wall.
on the barrier immediately beyond the hole. If both holes are open, the patterns simply add. The paint spots are especially dense where the two patterns overlap, resulting in a bimodal distribution curve that combines P1 and P2. No interference is evident.

What happens when electrons are fired at two small slits, as in Fig. 6? Surprisingly, they produce the same wave pattern as Fig. 4. That is, the probability of an electron hitting the barrier at a certain location varies in a pattern of alternating high and low, rather than a simple bimodal distribution. This occurs even when electrons are fired one at a time. Similar experiments have been done with atoms and even large molecules of carbon-60 ("buckyballs"), all demonstrating
FIG. 6. Double-slit experiment with electrons.
wave-like behavior of matter. So something "wave-like" must be happening at small scales.
2.4.2 Explaining the Double-Slit Experiment
How do we explain these results? If a wave passes through the slits, we can expect interference, canceling or reinforcing, resulting in a pattern of light and dark lines. But how can individual electrons, atoms, or molecules, fired one at a time, create interference patterns? A desperate classical explanation might be that the particles split, with one part passing through each hole, but this is not the case: if detectors are placed at H1 or at H2 or in front of the barrier, only one particle is ever registered at a time. (Remarkably, if a detector is placed at H1 or H2, the pattern follows Fig. 5. More about this effect later.)

The quantum mechanical explanation is that particles may be in a "superposition" of locations. That is, an electron is in a combination of the state "at H1" and the state "at H2." An everyday solid object has a definite position, mass, electric charge, velocity, etc., but at the quantum scale, particle characteristics are best described as blends or superpositions of base values. When measured, we always get a definite value. However, between measurement events, any consistent mathematical model must potentially allow for an arbitrary superposition of many states. This behavior is contrary to everyday experience, of course, but thousands of experiments have verified this fact: a particle can be in a superposition of several states at the same time. When measured, the superposition collapses into a single state, losing any information about the state before measurement. The photons in
the beam-and-filters experiment are in a superposition of polarizations. When polarizer #1 tests the photon for vertical or horizontal polarization, either the photon emerges polarized vertically or it doesn't emerge. No information about prior states is maintained. It is not possible to determine whether it had been vertical, diagonal, or somewhere in between. Since vertical polarization is a superposition, or combination, of diagonal polarizations, some of the vertically polarized photons pass through the middle polarizer and emerge polarized diagonally. Half of the now-diagonally polarized photons will pass through the final, horizontal polarizer.
2.5 Randomness
In the beam-and-filters experiment, randomly some photons emerge polarized while others do not emerge at all. This unpredictability is not a lack of knowledge. It is not that we are missing some full understanding of the state of the photons. The random behavior is truly a part of nature. We cannot, even in principle, predict which of the photons will emerge. This intrinsic randomness may be exploited to generate cryptographic keys or events that are not predictable, but it also means that the unpredictability of some measurements is not merely an annoying anomaly to be reduced by better equipment, but an inherent property in quantum computation and information. Even though an individual measurement may be arbitrary, the statistical properties are well defined. Therefore, we may take advantage of the randomness or unpredictability in individual outcomes. We can make larger or more energetic systems that are more predictable, but then the quantum properties, which may be so useful, disappear, too.
2.6 Measurement
As opposed to being an objective, external activity, in quantum mechanics measuring a system is a significant step. A measurement is always with regard to two or more base values. In the photon polarization experiment, the bases are orthogonal directions: vertical and horizontal, two diagonals, 15° and 105°, etc. The basis for other systems may be in terms of momentum, position, energy level, or other physical quantities. When a quantum system is measured, it collapses into one of the measurement bases. No information about previous superpositions remains. We cannot predict into which of the bases a system will collapse; however, given a known state of the system, we can predict the probability of measuring each basis.
2.7 Entanglement
Even more surprising than superposition, quantum theory predicts that entities may have correlated fates. That is, the result of a measurement on one photon or atom leads instantaneously to a correlated result when an entangled photon or atom is measured. For a more intuitive grasp of what we mean by "correlated results," imagine that two coins could be entangled (there is no known way of doing this with coins, of course). Imagine one is tossing a coin. Careful records show it comes up "heads" about half the time and "tails" half the time, but any one result is unpredictable. Tossing another coin has similar, random results, but surprisingly, the records of the coin tosses show a correlation! When one coin comes up heads, the other coin comes up tails and vice versa. We say that the state of the two coins is entangled. Before the measurement (the toss), the outcome is unknown, but we know the outcomes will be correlated. As soon as either coin is tossed (measured), the fate of tossing the other coin is sealed. We cannot predict in advance what an individual coin will do, but their results will be correlated: once one is tossed, there is no uncertainty about the other. This imaginary coin tossing is only to give the reader a sense of entanglement. Although one might come up with a classical explanation for these results, multitudes of ingenious experiments have confirmed the existence of entanglement and ruled out any possible classical explanation. Over several decades, physicists have continually refined these experiments to remove loopholes in measurement accuracy or subtle assumptions. All have confirmed the predictions of quantum mechanics. With actual particles any measurement collapses uncertainty in the state. A real experiment would manufacture entangled particles, say by bringing particles together and entangling them or by creating them with entangled properties. For instance, we can "downconvert" one higher energy photon into two lower energy photons which leave in directions not entirely predictable. Careful experiments show that the directions are actually a superposition, not merely a random, unknown direction. However, since the momentum of the higher energy photon is conserved, the directions of the two lower energy photons are entangled. Measuring one causes both photons to collapse into one of the measurement bases. However, once entangled, the photons can be separated by any distance, at any two points in the universe; yet measuring one will result in a perfectly correlated measurement for the other. Even though measurement brings about a synchronous collapse regardless of the separation, entanglement doesn't let us transmit information. We cannot force the result of a measurement any more than we can force the outcome of tossing a fair coin (without interference).
2.8 Reversibility
Postulate 2 of quantum mechanics says that the evolution of an isolated system is reversible. In other words, any condition leading to an action also may bring about the reverse action in time-reversed circumstances. If we watch a movie of a frictionless pendulum, we cannot tell whether the movie is being shown backwards. In either case, the pendulum moves according to the laws of momentum and gravity. If a beam of photons is likely to move an electron from a lower to a higher energy state, the beam is also likely to move an electron from the higher energy state to the lower one. (In fact, this is the "stimulated emission" of a laser.) This invertible procession of events is referred to as "unitary evolution." To preserve superposition and entanglement, we must use unitary evolutions. An important consequence is that operations should be reversible. Any operation that loses information or is not reversible cannot be unitary, and may lose superposition and entanglement. Thus, to guarantee that a quantum computation step preserves superposition and entanglement, it must be reversible. Finding the conjunction of A AND B is not reversible: if the result is false, we do not know whether A was false, B was false, or both A and B were false. Thus a standard conjunction destroys superpositions and entanglements. However, suppose we set another bit, C, previously set to false, to the conjunction of A AND B, and keep the values of both A and B. This computation is reversible. Given any resulting state of A, B, and C, we can determine the state before the computation. Likewise all standard computations can be done reversibly, albeit with some extra bits. We revisit reversible computations in Section 4.1.1.
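As a concrete check (our own tabulation, not the authors'), write the reversible conjunction as the map (A, B, C) → (A, B, C ⊕ (A ∧ B)). Every output triple comes from exactly one input triple, and applying the map twice returns the inputs unchanged; restricted to the C = 0 inputs the text assumes, the table is:

    (0, 0, 0) → (0, 0, 0)      (1, 0, 0) → (1, 0, 0)
    (0, 1, 0) → (0, 1, 0)      (1, 1, 0) → (1, 1, 1)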
2.9 The Exact Measurement "Theorem"
Although quantum mechanics seems strange, it is a very consistent theory. Seemingly reasonable operations are actually inconsistent with the theory as a whole. For instance, one might wish to harness entanglement for faster-than-light or even instantaneous communication. Unfortunately, any measurement or observation collapses the state. Also unfortunately, it is impossible to tell with local information whether the observation preceded or followed the collapse: the observation gives the same random result in either case. Communicating with the person holding the other entangled particle, to determine some correlation, can only be done classically, that is, no faster than the speed of light. So entanglement cannot be used to transmit information faster than light and violate relativity.

If we could exactly measure the entire quantum state of a particle, we could determine whether it were in a superposition. Alice and Bob could begin with two pairs of particles; let us call them the "T" pair, T1 and T2, and the "F" pair, F1 and F2. They manipulate them so T1 and T2 are entangled with each other
and F1 and F2 are entangled with each other. Bob then takes T1 and F1 far away from Alice. If exact measurement were possible, Bob could continuously measure his particle T1 to see if it has collapsed into a definite state. To instantly communicate a "1," Alice observes her member of the "T" pair, T2, causing it to collapse. Because the "T" pair was entangled, Bob's particle, T1, simultaneously collapses into a definite state. Bob detects the collapse of T1, and writes down a "1." Similarly, a "0" bit could be transmitted instantly using the "F" pair if, indeed, exact measurement were possible. In fact, if we were able to exactly measure an unknown quantum state, it would lead to many inconsistencies.
2.10 The No-Cloning Theorem

One might be tempted to evade the impossibility of exact measurement by making many exact copies of particles and measuring the copies. If we could somehow manage to have an unlimited supply of exact copies, we could measure them and experimentally build up an exact picture of the quantum state of the original particle. However, the "No-Cloning Theorem" proves we cannot make an exact copy of an unknown quantum state. In Section 3.6 we prove a slightly simplified version of the theorem.

What about setting up an apparatus, say with polarizers, laser beams, magnetic fields, etc., which produces an unlimited number of particles, all in the same quantum state? We could make unlimited measurements in various bases, and measure the state to arbitrary accuracy. Indeed, this is what experimental physicists do. But it is a measurement of the result of a process, not the measurement of a single, unknown state. Alternatively, if we could exactly measure an unknown quantum state, we could prepare as many particles as we wished in that state, effectively cloning. So the lack of exact measurement foils this attempt to clone, and the lack of cloning closes this route to measurement, maintaining the consistency of quantum mechanics.
3. The Mathematics of Quantum Mechanics
The ugly truth is that general relativity and quantum mechanics are not consistent. That is, our current formulations of general relativity and quantum mechanics give different predictions for extreme cases. We assume there is a "Theory of Everything" that reconciles the two, but it is still very much an area of thought and research. Since relativity is not needed in quantum computing, we ignore this problem. Let us emphasize that thousands of experiments that have been done throughout the world in the last 100 years are consistent with quantum mechanics.
We present a succinct notation and mathematics commonly used to formally express the notions of quantum mechanics. Although this formalization cannot express all the nuances, it is enough for this introductory article. More complete notations are given in various books on quantum mechanics.
3.1 Dirac or Ket Notation
We can represent the state of quantum systems in "Dirac" or "ket" notation. ("Ket" rhymes with "let." The name comes from "bracket": P. A. M. Dirac developed a shorthand "bracket" notation for the inner product of state vectors, ⟨Φ|Ψ⟩; in most cases the column vector, or right-hand side, can be used alone, and, being the second half of a bracket, it is called a ket.) A qubit is a quantum system with two discrete states. These two states can be expressed in ket notation as |0⟩ and |1⟩. An arbitrary quantum state is often written |Ψ⟩. State designations can be arbitrary symbols. For instance, we can refer to the polarization experiment in Section 2 using the bases |↑⟩ and |→⟩ for vertical and horizontal polarization and |↗⟩ and |↘⟩ for the two orthogonal diagonal polarizations. (Caution: although we use an up-arrow for vertical, "up" and "down" polarization are the same thing: they are both vertical polarization. Likewise be careful not to misinterpret the right or diagonal arrows.)

A quantum system consisting of two or more quantum states is the tensor product of the separate states in some fixed order. Suppose we have two photons, P1 and P2, where P1 has the state |P1⟩, and P2 has the state |P2⟩. We can express the state of the joint system as |P1⟩ ⊗ |P2⟩, or we can express it as |P2⟩ ⊗ |P1⟩. The particular order doesn't matter as long as it is used consistently. For brevity, the tensor product operator is implicit between adjacent states. The above two-photon system is often written |P1 P2⟩. Since the order is typically implicit, the ket is usually written without indices, thus, |PP⟩. Ket "grouping" is associative; therefore a single ket may be written as multiple kets for clarity: |0⟩|0⟩|0⟩, |0⟩|00⟩, and |00⟩|0⟩ all mean |0₁0₂0₃⟩. Bases are written in the same notation using kets. For example, the four orthogonal basis states of a two-qubit system are |00⟩, |01⟩, |10⟩, and |11⟩. Formally, a ket is just a column vector.
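For readers who want the column vectors spelled out (a standard convention, not unique to this article):

    |0⟩ = (1, 0)ᵀ,   |1⟩ = (0, 1)ᵀ
    |01⟩ = |0⟩ ⊗ |1⟩ = (1, 0)ᵀ ⊗ (0, 1)ᵀ = (0, 1, 0, 0)ᵀ

so an n-qubit ket is a column vector with 2ⁿ entries.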
3.2 Superpositions and Measurements
Superpositions are written as a sum of states, each with an "amplitude," which may be a complex number. For instance, if an electron has a greater probability of going through one slit than the other in Fig. 6, its position might be √(1/4)|H1⟩ + √(3/4)|H2⟩. The polarization of a photon in an equal superposition of vertical and horizontal polarizations may be written as 1/√2|↑⟩ + 1/√2|→⟩. In general, a two-qubit system is in the state a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩.
The norm squared of the amplitude of a state is the probability of measuring the system in that state. The general two-qubit system from above will be measured in state |00⟩ with probability |a|². Similarly, the system will be measured in states |01⟩, |10⟩, or |11⟩ with probabilities |b|², |c|², and |d|², respectively. Amplitudes must be used instead of probabilities to reflect quantum interference and other phenomena. Because a measurement always finds a system in one of the basis states, the probabilities sum to 1. (The requirement that they sum to 1 is a reflection of the basic conservation laws of physics.) Hence the sum of norm squared amplitudes must always be 1, too. Amplitudes that nominally do not sum to 1 are understood to be multiplied by an appropriate scaling factor to "normalize" them so they do sum to 1.

A measurement collapses the system into one of the bases of the measurement. The probability of measuring the system in, or equivalently, collapsing the system into, any one particular basis state is the norm squared of the amplitude. Hence, for the location distribution √(1/4)|H1⟩ + √(3/4)|H2⟩, the probability of finding an electron at location H1 is |√(1/4)|² = 1/4, and the probability of finding an electron at H2 is |√(3/4)|² = 3/4. After measurement, the electron is either in the state |H1⟩, that is, at H1, or in the state |H2⟩, that is, at H2, and there is no hint that the electron ever had any probability of being anywhere else. If measurements are done at H1 or H2, the interference disappears, resulting in the simple bimodal distribution shown in Fig. 5.
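As a small worked example of normalization (ours, not the authors'): the un-normalized superposition |0⟩ + 2|1⟩ has norm squared 1² + 2² = 5, so the scaling factor is 1/√5:

    |0⟩ + 2|1⟩  →  (1/√5)|0⟩ + (2/√5)|1⟩,   with probabilities 1/5 and 4/5, which sum to 1.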
3.3 The Polarization Experiment, Again
Just as geometric positions may be equally represented by different coordinate systems, quantum states may be expressed in different bases. A vertically polarized photon's state may be written as |↑⟩. It may just as well be written as a superposition of the two diagonal basis states, 1/√2|↗⟩ + 1/√2|↘⟩. Likewise a diagonally polarized photon |↘⟩ may be viewed as being in a superposition of vertical and horizontal polarizations, 1/√2|↑⟩ + 1/√2|→⟩. If the polarization is |↗⟩, the superposition is 1/√2|↑⟩ − 1/√2|→⟩; note the sign change. In both cases the amplitudes squared, |1/√2|² and |−1/√2|², still sum to 1.

We can now express the polarization experiment formally. The first polarizer "measures" in some basis, which we can call |↑⟩ and |→⟩. Regardless of previous polarization, the measurement leaves photons in either |↑⟩ or |→⟩, but only passes photons that are, say, |↑⟩. If the incoming beam is randomly polarized, half the photons collapse into, or are measured as, |↑⟩ and are passed, which agrees with the observation that the intensity is halved.
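As a quick consistency check (our arithmetic), substituting the diagonal expansions back into the superposition recovers the vertical state:

    1/√2|↗⟩ + 1/√2|↘⟩ = 1/√2(1/√2|↑⟩ − 1/√2|→⟩) + 1/√2(1/√2|↑⟩ + 1/√2|→⟩) = |↑⟩.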
A second polarizer, tilted at an angle θ to the first, "measures" in a tilted basis |↗θ⟩ and |↘θ⟩. Photons in state |↑⟩ can also be considered to be in the superposition cos θ|↗θ⟩ + sin θ|↘θ⟩. The second polarizer measures photons in the tilted basis, and passes only those collapsing into |↗θ⟩. Since the chance of a photon collapsing into that state is cos²θ (to double-check consistency, note that the probability of seeing one state or the other is cos²θ + sin²θ = 1), the intensity of the resultant beam decreases to 0 as the polarizer is rotated to 90°. With polarizer #2 set at a right angle, it measures with the same basis as polarizer #1, that is, |↑⟩ and |→⟩, but only passes photons with state |→⟩.

When polarizer #3 is inserted, it is rotated to a 45° angle. The vertically polarized, that is, |↑⟩, photons from polarizer #1 can be considered to be in the superposition cos 45°|↗⟩ + sin 45°|↘⟩ = 1/√2|↗⟩ + 1/√2|↘⟩. So they have a |1/√2|² = 1/2 chance of collapsing into state |↗⟩ and being passed. These photons encounter polarizer #2, where they can be considered to be in the superposition cos 45°|↑⟩ + sin 45°|→⟩ = 1/√2|↑⟩ + 1/√2|→⟩. So they again have a |1/√2|² = 1/2 chance of collapsing, now into state |→⟩, and being passed. Thus, the chance of an arbitrary photon passing through all three polarizers is 1/2 × 1/2 × 1/2 = 1/8, agreeing with our observation.
3.4 Expressing Entanglement
In the Dirac or ket notation, the tensor product, ⊗, distributes over addition, e.g., |0⟩ ⊗ (1/√2|0⟩ + 1/√2|1⟩) = 1/√2|00⟩ + 1/√2|01⟩. Another example is that the tensor product of equal superpositions is an equal superposition of the entire system:

    (1/√2|0⟩ + 1/√2|1⟩) ⊗ (1/√2|0⟩ + 1/√2|1⟩) = 1/2(|00⟩ + |01⟩ + |10⟩ + |11⟩).

Note that the square of each amplitude gives a 1/4 chance of each outcome, which is what we expect.

If a state cannot be factored into products of simpler states, it is "entangled." For instance, neither 1/2|00⟩ + √(3/4)|11⟩ nor 1/√2(|Heads Tails⟩ + |Tails Heads⟩) can be factored into a product of states. The latter state expresses the entangled coin tossing we discussed in Section 2.7. When we toss the coins (do a measurement), we have equal chances of getting |Heads Tails⟩ (heads on the first coin
and tails on the second) or |Tails Heads⟩ (tails on the first coin and heads on the second). If we observe the coins separately, they appear to be completely classical, fair coins: heads or tails appear randomly. However, the records of the two coins are correlated: when one comes up heads, the other comes up tails and vice versa.
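To see why such a state cannot be factored (a short argument of ours, in the same spirit as the proof in Section 3.6), suppose the coin state were a product of two one-coin states:

    (a|Heads⟩ + b|Tails⟩) ⊗ (c|Heads⟩ + d|Tails⟩)
        = ac|Heads Heads⟩ + ad|Heads Tails⟩ + bc|Tails Heads⟩ + bd|Tails Tails⟩.

Matching amplitudes with 1/√2(|Heads Tails⟩ + |Tails Heads⟩) would require ac = bd = 0 while ad = bc = 1/√2 ≠ 0, and no choice of a, b, c, d satisfies all four conditions; hence the state is entangled.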
3.5 Unitary Transforms
Postulate 2 states that all transformations of an isolated quantum system are unitary. In particular, they are linear. If a system undergoes decoherence or collapse because of some outside influence, the transformation is not necessarily unitary, but when an entire system is considered in isolation from any other influence, all transformations are unitary.
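As a concrete illustration (a standard example, not drawn from this article), the quantum NOT operation X and the Hadamard transform H are both unitary, satisfying U†U = I, and each is its own inverse:

    X = ( 0  1 )        H = 1/√2 ( 1   1 )
        ( 1  0 )                 ( 1  -1 )

    X(a|0⟩ + b|1⟩) = a|1⟩ + b|0⟩        H|0⟩ = 1/√2(|0⟩ + |1⟩)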
3.6 Proof of No-Cloning Theorem
With Postulate 2, we can prove a slightly simplified version of the No-Cloning Theorem. (A comprehensive version allows for arbitrary ancillary or "work" qubits.) We begin by formalizing the theorem. We hypothesize that there is some operation, U, which exactly copies an arbitrary quantum state, Ψ, onto another particle. Its operation would be written as
U|Ψ⟩|0⟩ = |Ψ⟩|Ψ⟩.
Does this hypothetical operator have a consistent definition for a state that is a superposition? In Dirac notation, what is the value of U(a|0⟩ + b|1⟩)|0⟩? Recall that the tensor product distributes over superposition. One derivation is to distribute the tensor product first, then distribute the clone operation, and finally perform the hypothetical clone operation:
U(a|0⟩ + b|1⟩)|0⟩ = U(a|0⟩|0⟩ + b|1⟩|0⟩)
= Ua|0⟩|0⟩ + Ub|1⟩|0⟩
= a|0⟩a|0⟩ + b|1⟩b|1⟩
= a²|00⟩ + b²|11⟩.
However, if we evaluate the clone operation first and then distribute, we get
U(a|0⟩ + b|1⟩)|0⟩ = (a|0⟩ + b|1⟩)(a|0⟩ + b|1⟩)
= a|0⟩a|0⟩ + a|0⟩b|1⟩ + b|1⟩a|0⟩ + b|1⟩b|1⟩
= a²|00⟩ + ab|01⟩ + ab|10⟩ + b²|11⟩.
The derivations are different! The mathematics should be consistent unless we're trying something impossible, like dividing by 0. Since the only questionable step was assuming the existence of a cloning operation, we conclude that a general cloning operation is inconsistent with the laws of quantum mechanics. Note that if a is 0 or b is 0, the two derivations do give the same result, but a and b are amplitudes (like probabilities) of states in the superposition. If one or the other is 0, there was actually no superposition to begin with, and this proof doesn't apply. In fact, in the absence of arbitrary superposition, we can clone. If we know that a particle is either in state |0⟩ or in state |1⟩, we can simply measure the particle. We then set any number of other particles to that same state, effectively copying the state of the particle. In this case we know something about the original state of the particle. So this "loophole" does not invalidate the theorem that we cannot clone a completely unknown state. In Section 5.6 we explain how we can move, or "teleport," an unknown state to a distant particle, but the state on the original particle is destroyed in the process. So we still end up with just one instance of a completely unknown state.
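The same contradiction can be seen numerically. The following sketch is an added illustration, not the chapter's own proof: it uses the CNOT matrix, which does copy the basis states |0⟩ and |1⟩ onto a |0⟩ ancilla, and shows that on a superposition it produces the entangled state a|00⟩ + b|11⟩ rather than the product state a true clone would require.

import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

a, b = 0.6, 0.8                      # an arbitrary normalized superposition
psi = a * ket0 + b * ket1

# "Cloning" attempt: attach a |0> ancilla and apply CNOT.
attempt = CNOT @ np.kron(psi, ket0)  # = a|00> + b|11>

# What a genuine clone would have to be: (a|0> + b|1>) tensor (a|0> + b|1>).
true_clone = np.kron(psi, psi)       # = a^2|00> + ab|01> + ab|10> + b^2|11>

print(attempt)      # [0.6, 0.0, 0.0, 0.8]
print(true_clone)   # [0.36, 0.48, 0.48, 0.64] -- the two disagree unless a or b is 0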
4. Quantum Computing
We have seen that phenomena and effects at quantum scales can be quite different from those we are used to. The richness of these effects tantalizes us with the possibility of far faster computing, if we can manage to harness them. But how can we turn these effects into gates and computers? How fast might they solve problems? Are these merely theoretical ideals, like a frictionless surface or noiseless measurement, or is there hope of building an actual device? This section discusses how quantum effects can be harnessed to create gates, assesses the potential for quantum algorithms, and outlines ways of dealing with imperfect operations and devices.
4.1 Quantum Gates and Quantum Computers
Digital computers, from microprocessors to supercomputers, from the tiny chips running your wristwatch or microwave to continent-spanning distributed systems that handle worldwide credit card transactions, are built of thousands or millions of simple gates. Each gate does a single logical operation, such as producing a 1 if all its inputs are 1 (otherwise, producing a 0 if any input is 0) or inverting a 1 to a 0 and a 0 to a 1. From these simple gates, engineers build more complex circuits that add or multiply two numbers, select a location in memory, or choose which instructions to do next depending on the result of an operation.
From these circuits, engineers create more and more complex modules until we have computers, CD players, aircraft navigation systems, laser printers, and cell phones. Although computer engineers still must deal with significant concerns, such as transmitting signals at gigahertz rates, getting a million gates to function in exact lockstep, or storing ten billion bits without losing a single one, conceptually once we can build simple gates, the rest is "merely" design. Quantum computing appears to be similar: we know how to use quantum effects to create quantum gates or operations, we have ideas about combining gates into meaningful modules, and we have a growing body of work about how to do quantum computations reliably, even with imperfect components. Researchers are optimistic because more work brings advances in both theory and practice. In classical computer design, one basic gate is the AND gate. However, as we described in Section 2.8, an AND gate is not reversible. A basic, reversible quantum gate is the "controlled-not" or CNOT gate. It is represented as in Fig. 7. The two horizontal lines, labeled |φ⟩ and |ψ⟩, represent two qubits. The gate is the vertical line with connections to the qubits. The top qubit, labeled |ψ⟩ and connected with the dot, is the gate's control. The bottom qubit, labeled |φ⟩ and connected with ⊕, is the "data." The data qubit is inverted if the control qubit is 1. If the control is 0, the data qubit is unchanged. Table I shows the operation of CNOT. Typically we consider the inputs to be on the left (the |φ⟩ and |ψ⟩), and the outputs to be on the right. Since CNOT is reversible, it is not unreasonable to consider the right-hand side (the |φ′⟩ and |ψ′⟩) the "inputs" and the left-hand side the "outputs"! That is, we can run the gate "backwards." The function is still completely determined: every possible "input" produces exactly one "output." So far, this is just the classical exclusive-OR gate. What happens when the control is a superposition? The resultant qubits are entangled. In the following, we apply a CNOT to the control qubit, an equal superposition of |0_ψ⟩ and |1_ψ⟩ (we use the subscript ψ to distinguish the control qubit), and the data qubit, |0⟩:
CNOT(1/√2(|0_ψ⟩ + |1_ψ⟩) ⊗ |0⟩) = 1/√2(CNOT|0_ψ0⟩ + CNOT|1_ψ0⟩)
= 1/√2(|0_ψ0⟩ + |1_ψ1⟩).
FIG. 7. A CNOT gate.
TABLE I. FUNCTION OF THE CNOT GATE

|ψ⟩    |φ⟩    |ψ′⟩    |φ′⟩
|0⟩    |0⟩    |0⟩     |0⟩
|0⟩    |1⟩    |0⟩     |1⟩
|1⟩    |0⟩    |1⟩     |1⟩
|1⟩    |1⟩    |1⟩     |0⟩
What does this mean? One way to understand it is to measure the control qubit. If the result of the measurement is 0, the state has collapsed to |0_ψ0⟩, so we will find the data qubit to be 0. If we measure a 1, the state collapsed to |1_ψ1⟩, and the data qubit is 1. We could measure the data qubit first and get much the same result. These results are consistent with Table I. So how might we build a CNOT gate? We review several possible implementations in Section 6, but sketch one here. Suppose we use the state of the outermost electron of a sodium atom as a qubit. An electron in the ground state is a logical 0, and an excited electron is a logical 1. An appropriate pulse of energy will flip the state of the qubit. That is, it will excite an outer electron in the ground state, and "discharge" an excited electron. To make a CNOT gate, we arrange a coupling between two atoms such that if the outer electron of the control atom is excited, the outer electron of the data atom flips when we apply a pulse. If the control atom is not excited, the pulse has no effect on the data atom. As can be guessed from this description, the notion of wires and gates, as represented schematically in Fig. 7, might not be used in an actual quantum computer. Instead, different tuned and selected minute energy pulses may cause qubits to interact and change their states. A more easily used quantum gate is the controlled-controlled-not or C²NOT gate. It has two control qubits and one data qubit, as represented schematically in Fig. 8. It is similar to the CNOT: the data qubit is inverted if both the control qubits are 1. If either is 0, the data qubit is unchanged. We can easily make a reversible version of the classical AND gate. To find A AND B, use A and B as
FIG. 8. A C²NOT gate.
the controls and use a constant 0 as the data. If A and B are both 1, the 0 is flipped to a 1. Otherwise, it remains 0. Many other basic quantum gates have been proposed [3, Chap. 4]. Using these gates as building blocks, useful functions and entire modules have been designed. In short, we can conceptually design complete quantum computing systems. In practice there are still enormous, and perhaps insurmountable, engineering tasks before realistic quantum computing is available. For instance, energy pulses are never perfect, electrons don't always flip when they are supposed to, and stray energy may corrupt data. Section 4.3 explains a possible approach to handling such errors.
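To make the gate algebra concrete, here is a short NumPy sketch (added for illustration; the qubit ordering, with the control qubit first, is an assumption of the sketch). It builds the CNOT and C²NOT matrices, reproduces the entangled output 1/√2(|00⟩ + |11⟩), and realizes the reversible AND described above:

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# CNOT on two qubits (control is the first qubit); basis order |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# C2NOT (Toffoli) on three qubits: flip the last qubit iff both controls are 1.
C2NOT = np.eye(8)
C2NOT[[6, 7], :] = C2NOT[[7, 6], :]   # swap the |110> and |111> rows

# Control in the superposition 1/sqrt(2)(|0> + |1>), data |0>.
control = (ket0 + ket1) / np.sqrt(2)
state = np.kron(control, ket0)        # two-qubit input state
print(CNOT @ state)                   # [0.707, 0, 0, 0.707] = 1/sqrt(2)(|00> + |11>)

# Reversible AND: controls A = 1, B = 1, data 0 -> amplitude moves to |111>.
A, B = ket1, ket1
print(C2NOT @ np.kron(np.kron(A, B), ket0))   # amplitude 1 at index 7 (|111>)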
4.2 Quantum Algorithms
The preceding section outlines plans to turn quantum effects into actual gates and, eventually, into quantum computers. But how much faster might quantum computers be? After all, last year's laptop computer seemed fast until this year's computer arrived. To succinctly address this, we introduce some complexity theory. To help answer whether one algorithm or computer is actually faster, we count the number of basic operations a program executes, not (necessarily) the execution, or elapsed "wall clock," time. Differences in elapsed time may be due to differences in the compiler, a neat programming trick, memory caching, or the presence of other running programs. We want to concentrate on fundamental differences, if any, rather than judging a programming competition. In measuring algorithm performance we must consider the size of the input. A computer running a program to factor a 10,000-digit number shouldn't be compared with a different computer that is only factoring a 10-digit number. So we will compare performance in terms of the size of the problem or input. We expect that larger problems take longer to solve than smaller instances of the same problem. Hence, we express performance as a function of the problem size, e.g., f(n). We will see that performance functions fall into theoretically neat and practically useful "complexity classes."
4.2.1 Complexity Classes
What are some typical complexity classes? Consider the problem of finding a name in a telephone book. If one uses the simple but straightforward method of checking every entry, one at a time, from the beginning, the expected average number of checks is n/2 for a telephone book with n names. (This is called "sequential search.") Interestingly, if one checks names completely at random,
even allowing accidental rechecks of names, the expected average number of checks is still n/2. Since telephone books are sorted by name, we can do much better. We estimate where the name will be, and open the book to that spot. Judging from the closeness to the name, we extrapolate again where the name will be and skip there. (This is called "extrapolation search.") This search is much faster, and on average takes some constant multiple of the logarithm of the number of names, or c log n. Although it takes a little longer to estimate and open the book than just checking the next name, as n gets large, those constant multipliers don't matter. For large values of n the logarithm is so much smaller than n itself, it is clear that extrapolation search is far faster than linear search. (When one is close to the name, that is, when n is small, one switches to linear searching, since the time to do a check and move to the next name is much smaller.) This is a mathematically clear distinction. Since almost any reasonable number times a logarithm is eventually smaller than another constant times the number, we'll ignore constant multiples (in most cases) and just indicate what "order" they are. We say that linear search is O(n), read "big-Oh of n," and extrapolation search is O(log n), read "big-Oh of log n." Since logarithms in different bases only differ by a constant multiple, we can (usually) ignore the detail of the logarithm's base. Other common problems take different amounts of time, even by these high-level comparisons. Consider the problem of finding duplicates in an unordered list of names. Comparing every name to every other name takes some multiple of n², or O(n²) time. Even if we optimize the algorithm and only compare each name to those after it, the time is still a multiple of n². Compared with O(log n) or even O(n), finding duplicates will be much slower than finding a name, especially for very large values of n. It turns out we can sort names in time proportional to n log n. Checking for duplicates in a sorted list only takes O(n), so sorting and then checking takes O(cn log n + dn), for some constants c and d. Since the n log n term is significantly larger than the n term for large n, we can ignore the lone n and say this method is O(n log n), much faster than the O(n²) time above. Although the difference between these methods is significant, both are still polynomial, meaning the run time is a polynomial of the size. That is, they are both O(n^k) for some constant k. We find a qualitative difference in performance between polynomial algorithms and algorithms with exponential run time, that is, algorithms that are O(k^n) for some constant k. Polynomial algorithms are generally practical to run; even for large problems, the run time doesn't increase too much, whereas exponential algorithms are generally intractable. Even seemingly minor increases in the problem size can make the computation completely impractical.
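The difference between O(n) and O(log n) searching is easy to see experimentally. The sketch below is an added illustration with an invented list of names; it uses a sorted binary search as a stand-in for extrapolation search, since both are logarithmic:

import bisect
import random

n = 1_000_000
names = sorted(f"name{i:07d}" for i in range(n))
target = random.choice(names)

# Sequential search: count comparisons until the name is found.
sequential_checks = next(i + 1 for i, name in enumerate(names) if name == target)

# Binary search on the sorted list finds the same name in about log2(n) comparisons.
assert names[bisect.bisect_left(names, target)] == target

print("sequential:", sequential_checks, "checks (average about n/2 =", n // 2, ")")
print("sorted search: about", n.bit_length(), "checks")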
There is a mathematical reason for separating polynomial from exponential algorithms. A polynomial of a polynomial is still a polynomial. Thus, even having polynomial algorithms use other polynomial algorithms still results in a polynomial algorithm. Moreover any exponential function always grows faster than any polynomial function.
4.2.2 A Formal Definition of Big-Oh Complexity
For the curious reader, we formally define big-Oh notation. We say f(n) = O(g(n)) if there are two positive constants k and n₀ such that |f(n)| ≤ k·g(n) for all n > n₀. The constants k and n₀ must not depend on n. Informally, there is a constant k such that for large values of n (beyond n₀), k·g(n) is greater than f(n). From this definition, we see that constant multipliers are absorbed into the k. Also lower order terms, such as dn, are eventually dominated by higher order terms, like cn log n or n². Because a "faster" algorithm may have such large constants or lower order terms, it may perform worse than a "slower" algorithm for realistic problems. If we are clearly only solving small problems, it may, in fact, be better to use the "slower" algorithm, especially if the slower algorithm is simpler. However, experience shows that big-Oh complexity is usually an excellent measure for comparing algorithms.
4.2.3 Shor's Factoring Algorithm
We can now succinctly compare the speed of computations. The security of a widely used encryption scheme, RSA, depends on the presumption that finding the factors of large numbers is intractable. After decades of work, the best classical factoring algorithm is the Number Field Sieve [4]. With it, factoring an n-digit number takes about e^(n^(1/3)) steps,* which is exponential in n^(1/3). What does this mean? Suppose you use RSA to encrypt messages, and your opponent buys fast computers to break your code. Multiplication by the Schönhage-Strassen algorithm [5] takes O(n log n log log n) steps. Using a key eight times longer means multiplications, and hence encrypting and decrypting time, take at most 24 times longer to run, for n > 16. However, the time for your opponent to factor the numbers, and hence break the code, increases to e^((8n)^(1/3)) = e^(2n^(1/3)) = (e^(n^(1/3)))². In other words, the time to factor is squared. It doesn't matter whether the time is in seconds or days: factoring is exponential. Without too much computational overhead you can increase the size of your key beyond the capability of any
' steps
conceivable computer your opponent could obtain. At least, that was the case until 1994. In 1994, Peter Shor invented a quantum algorithm for factoring numbers that takes O(n² log n log log n) steps [2]. This is polynomial, and, in fact, isn't too much longer than the naive time to multiply. So if you can encrypt, a determined opponent can break the code, as long as a quantum computer is available. With this breakthrough, the cryptography community in particular became very interested in quantum computing. Shor's algorithm, like most factoring algorithms, uses "a standard reduction of the factoring problem to the problem of finding the period of a function" [6]. What is the period of, say, the function 3^x mod 14? Values of the function for increasing exponents are 3¹ = 3, 3² = 9, 3³ = 27 ≡ 13 mod 14, 3⁴ ≡ 11 mod 14, 3⁵ ≡ 5 mod 14, and 3⁶ ≡ 1 mod 14. Since the function has the value 1 when x = 6, the period of 3^x mod 14 is 6. Shor's algorithm factors a composite number N in five main steps.
1. If N is even or there are integers a and b > 1 such that N = a^b, then 2 or a are factors.
2. Pick a positive integer, m, which is relatively prime to N.
3. Using a quantum computer, find the period of m^x mod N, that is, the smallest positive integer P such that m^P ≡ 1 mod N.
4. For number-theoretic reasons, if P is odd or if m^(P/2) + 1 ≡ 0 mod N, start over again with a new m at step 2.
5. Compute the greatest common divisor of m^(P/2) − 1 and N. This number is a divisor of N.
For a concrete example, let N = 323, which is 19 × 17. N is neither even nor the power of an integer. Suppose we choose 4 for m in step 2. Since 4 is relatively prime to 323, we continue to step 3. We find that the period, P, is 36, since 4³⁶ ≡ 1 mod 323. We do not need to repeat at step 4 since 36 is not odd and 4^(36/2) + 1 ≡ 306 ≢ 0 mod 323. In step 5 we compute the greatest common divisor of 4^(36/2) − 1 and 323, which is 19. Thus we have found a factor of 323. The heart of Shor's algorithm is quantum period finding, which also underlies the quantum Fourier transform and finding discrete logarithms. These are exponentially faster than their classical counterparts.
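The classical scaffolding around Shor's algorithm can be sketched directly; only step 3, period finding, requires a quantum computer, and the brute-force loop below merely stands in for it. This sketch is an added illustration (it omits the perfect-power test of step 1) and reproduces the N = 323, m = 4 example:

from math import gcd

def find_period_classically(m, N):
    """Stand-in for the quantum step: smallest P > 0 with m**P = 1 (mod N).

    This brute-force loop takes exponential time in the number of digits of N;
    Shor's contribution is doing exactly this step in polynomial time.
    """
    value, P = m % N, 1
    while value != 1:
        value = (value * m) % N
        P += 1
    return P

def shor_classical_part(N, m):
    if N % 2 == 0:
        return 2
    if gcd(m, N) != 1:
        return gcd(m, N)                  # lucky guess: m already shares a factor with N
    P = find_period_classically(m, N)
    if P % 2 == 1 or pow(m, P // 2, N) == N - 1:
        return None                       # bad m; pick another and retry (step 4)
    return gcd(pow(m, P // 2) - 1, N)     # step 5

print(shor_classical_part(323, 4))        # period 36, gcd(4**18 - 1, 323) = 19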
4.2.4 Deutsch's Function Characterization Problem
To more clearly illustrate quantum computing's potential speedup, let's examine a contrived, but simple problem first presented and solved by Deutsch [7].
Suppose we wish to find out whether an unknown Boolean unary function is constant, either 0 or 1, or not. Classically, we must apply the function twice, once with a 0 input and once with a 1. If the outputs are both 0 or both 1, it is constant; otherwise, it is not. A single classical application of the function, say applying a 1, can't give us enough information. However, a single quantum application of the function can. Using a superposition of 0 and 1 as the input, one quantum computation of the function yields the answer. The solution of Cleve et al. [8] to Deutsch's problem uses a common quantum computing operation, called a Hadamard,* which converts |0⟩ into the superposition 1/√2(|0⟩ + |1⟩) and |1⟩ into the superposition 1/√2(|0⟩ − |1⟩). The algorithm is shown schematically in Fig. 9. The Hadamard is represented as a box with an "H" in it. The function to be characterized is a box labeled "U_f." To begin, we apply a Hadamard to a |0⟩ and another Hadamard to a |1⟩:
H|0⟩H|1⟩ = 1/2(|0⟩ + |1⟩)(|0⟩ − |1⟩) = 1/2(|0⟩(|0⟩ − |1⟩) + |1⟩(|0⟩ − |1⟩)).
To be reversible, the function, U_f, takes a pair of qubits, |x⟩|y⟩, and produces the pair |x⟩|y ⊕ f(x)⟩. The second qubit is the original second qubit, y, exclusive-or'd with the function applied to the first qubit, f(x). We apply the function one time to the result of the Hadamards, and then apply another Hadamard to the first qubit, not the "result" qubit. Below, the "I" represents the identity; that is, we do nothing to the second qubit:
(H ⊗ I) U_f 1/2(|0⟩(|0⟩ − |1⟩) + |1⟩(|0⟩ − |1⟩))
= (H ⊗ I) 1/2(U_f|0⟩(|0⟩ − |1⟩) + U_f|1⟩(|0⟩ − |1⟩))
= (H ⊗ I) 1/2(|0⟩(|0 ⊕ f(0)⟩ − |1 ⊕ f(0)⟩) + |1⟩(|0 ⊕ f(1)⟩ − |1 ⊕ f(1)⟩))
FIG. 9. Solution to Deutsch's function characterization problem.
*Also called the Walsh transform, Walsh-Hadamard transform, or discrete Fourier transform over Z₂ⁿ.
= 1/2(H|0⟩|0 ⊕ f(0)⟩ − H|0⟩|1 ⊕ f(0)⟩ + H|1⟩|0 ⊕ f(1)⟩ − H|1⟩|1 ⊕ f(1)⟩)
= 1/(2√2)((|0⟩ + |1⟩)|0 ⊕ f(0)⟩ − (|0⟩ + |1⟩)|1 ⊕ f(0)⟩ + (|0⟩ − |1⟩)|0 ⊕ f(1)⟩ − (|0⟩ − |1⟩)|1 ⊕ f(1)⟩).
Case analysis and algebraic manipulations reduce this equation to (details are in the Appendix)
= 1/√2 |f(0) ⊕ f(1)⟩(|0⟩ − |1⟩) = |f(0) ⊕ f(1)⟩ ⊗ 1/√2(|0⟩ − |1⟩).
We now measure the first qubit. If it is 0, the function is a constant. If we measure a 1, the function is not a constant. Thus, we can compute a property of a function's range using only one function application. Although contrived, this example shows that using quantum entanglement and superposition, we can compute some properties faster than is possible with classical means. This is part of the lure of quantum computing research.
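The derivation can be checked numerically. The following sketch is an added illustration; it assumes the standard matrix for H, builds U_f as the permutation |x⟩|y⟩ → |x⟩|y ⊕ f(x)⟩, and runs Deutsch's circuit on all four unary Boolean functions:

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def U_f(f):
    """Oracle |x>|y> -> |x>|y XOR f(x)> as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    state = np.kron(H @ ket0, H @ ket1)           # Hadamard both input qubits
    state = np.kron(H, I2) @ (U_f(f) @ state)     # apply U_f, then H on the first qubit
    prob_first_is_0 = state[0]**2 + state[1]**2   # probability of measuring 0 on qubit 1
    return "constant" if np.isclose(prob_first_is_0, 1.0) else "not constant"

for f in (lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x):
    print(deutsch(f))   # constant, constant, not constant, not constant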
4.2.5 Grover's Search Algorithm
The requirement of searching for information is simple: find a certain value in a set of values with no ordering. For example, does the name "John Smith" occur in a set of 1000 names? Since there is no order to the names, the best classical solution is to examine each name, one at a time. For a set of N names, the expected run time is O(N/2): on average we must examine half the names. There are classical methods for speeding up the search, such as sorting the names and doing a binary search, or using parallel processing or associative memory. Sorting requires us to assign an order to the data, which may be hard if we are searching data such as pictures or audio recordings. To accommodate every possible search, say by last name or by first name, we would need to create separate sorted indices into the data, requiring O(N log N) preliminary computation and O(N) extra storage. Parallel processing and associative memory take O(N) resources. Thus these classical methods speed up query time by taking time earlier or using more resources. In 1996 Grover presented a quantum algorithm [9,10] to solve the general search problem in O(√N log N) time. The algorithm proceeds by repeatedly enhancing the amplitude of the position in which the name occurs. Database searching, especially for previously unindexed information, is becoming more important in business operations, such as data mining. However, Grover's algorithm might have an impact that reaches even farther. Although we
present the algorithm in terms of looking for names, the search can be adapted to any recognizable pattern. Solutions to problems currently thought to take more than polynomial time, that is, O(k^n), may be solvable in polynomial time. A typical problem in this group is the Traveling Salesman Problem: finding the shortest route that visits every point in a set. This problem occurs in situations such as finding the best routing of trucks between pick-up and drop-off points, airplanes between cities, and the fastest path of a drill head making holes in a printed circuit board. The search algorithm would initialize all possible solutions, and then repeatedly enhance the amplitude of the best solution. No published quantum algorithm solves the Traveling Salesman Problem, or any other NP-complete problem, in polynomial time. However, improvements like Grover's hint that it may be possible.
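The amplitude-enhancement idea behind Grover's algorithm can be imitated with a small state-vector simulation. This is an added toy example (the database size and marked position are arbitrary); it applies the oracle and the "inversion about the mean" roughly (π/4)√N times:

import numpy as np

N = 64                       # unsorted "database" of N items
marked = 42                  # position of the item we are searching for

# Start in the equal superposition of all N positions.
state = np.ones(N) / np.sqrt(N)

iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                       # oracle: flip the marked amplitude
    state = 2 * state.mean() - state          # inversion about the mean

print(iterations, state[marked] ** 2)         # ~6 iterations, probability ~0.997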
4.2.6 Quantum Simulation
Since a quantum system is the tensor product of its component systems, the amount of information needed to completely describe an arbitrary quantum system increases exponentially with the size. This means that classical simulation of quantum systems with even a few dozen qubits challenges the fastest supercomputers. Researching protein folding to discover new drugs, evaluating different physical models of the universe, understanding new superconductors, or designing quantum computers may take far more classical computer power than could reasonably be expected to exist on Earth in the next decade. However, since quantum computers can represent an exponential amount of information, they may make such investigations tractable.
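A rough, added calculation (assuming 16 bytes per complex amplitude) shows how quickly direct classical simulation becomes infeasible:

# Memory needed to store a full state vector of n qubits,
# at 16 bytes per complex amplitude.
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    print(n, "qubits:", amplitudes * 16 / 2**30, "GiB")
# 30 qubits already need 16 GiB; 50 qubits need roughly 16 million GiB.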
4.3 Quantum Error Correction
One of the most serious problems for quantum information processing is that of decoherence, the tendency for quantum superpositions to collapse into a single, definite, classical state. As we have seen, the power of quantum computing derives in large part from the ability to take advantage of the unique properties of quantum mechanics—superposition and entanglement. The qubits that compose a quantum computer must interact with other components of the system in order to perform a computation, and this interaction inevitably leads to errors. To prevent the state of qubits from degrading to the point that quantum computations fail requires that errors be either prevented or corrected. In classical systems, errors are prevented to some degree by making the ratio of system size to error deviation very large. Error correction methods are well known in conventional computing systems, and have been used for decades. Classical
error correction uses various types of redundancy to isolate and then correct errors. Multiple copies of a bit or signal can be compared, with the assumption that errors are sufficiently improbable that faulty bits or signals are never more likely than valid ones; e.g., if three bits are used to encode a one-bit value, and two of the three bits match, then the third is assumed to be faulty. In quantum systems it is not possible to measure qubit values without destroying the superposition that quantum computing needs, so at first there was doubt that quantum error correction would ever be feasible. This doubt is natural, especially considering the no-cloning theorem (Section 2.10): not only can qubits not be measured exactly, an arbitrary qubit state cannot even be copied by any conceivable scheme for detecting and correcting errors. It is perhaps surprising, then, that quantum error correction is not only possible, but also remarkably effective. The challenge in quantum error correction is to isolate and correct errors without disturbing the quantum state of the system. It is in fact possible to use some of the same ideas employed for classical error correction in a quantum system; the trick is to match the redundancy to the type of errors likely to occur in the system. Once we know what kinds of errors are most likely, it is possible to design effective quantum error correction mechanisms.
4.3.1 Single-Bit-Flip Errors
To see how this is possible, consider a simple class of errors: single-bit errors that affect qubits independently. (In reality, of course, more complex problems occur, but this example illustrates the basic technique.) Consider a single qubit, a two-state system with basis states |0⟩ and |1⟩. We will use a simple "repetition code"; that is, we represent a logical zero with three zero qubits, |0_L⟩ = |000⟩, and a logical one with three ones, |1_L⟩ = |111⟩. An arbitrary qubit in this system, written as a superposition a|0_L⟩ + b|1_L⟩, becomes a|000⟩ + b|111⟩ with repetition coding. Since we assume the system is stable in all ways except perhaps for single-bit flips, there may be either no error or one of three qubits flipped, as shown in Table II.
TABLE II. BIT-FLIP ERRORS, SYNDROMES, AND CORRECTIVES

Error             Error state           Syndrome   Correction
No error          a|000⟩ + b|111⟩       |000⟩      None
qubit 1 flipped   a|100⟩ + b|011⟩       |110⟩      X ⊗ I ⊗ I
qubit 2 flipped   a|010⟩ + b|101⟩       |101⟩      I ⊗ X ⊗ I
qubit 3 flipped   a|001⟩ + b|110⟩       |011⟩      I ⊗ I ⊗ X
The strategy for detecting errors is to add three "temporary" qubits |t₀t₁t₂⟩, set to |000⟩, which will hold "parity" results. We then XOR various bits together, putting the results in the added three qubits: t₀ is bit 0 XOR bit 1, t₁ is bit 0 XOR bit 2, and t₂ is bit 1 XOR bit 2. This leaves a unique pattern, called a syndrome, for each error. The third column of Table II shows the respective syndromes for each error. Measuring the added three qubits yields a syndrome, while maintaining the superpositions and entanglements we need. Depending on which syndrome we find, we apply one of the three corrective operations given in the last column to the original three repetition encoding qubits. The operation X flips a bit; that is, it changes a 0 to a 1 and a 1 to a 0. The identity operation is I. We illustrate this process in the following example.
4.3.2 An Error Correction Example
In fact, the repetition code can correct a superposition of errors. This is more realistic than depending on an error affecting only one qubit. It also illuminates some quantum behaviors. Like any other quantum state, the error may be a superposition, such as
(√0.8 X⊗I⊗I + √0.2 I⊗X⊗I)(a|000⟩ + b|111⟩).
Informally the first factor may be read as: if we measured the state, we would have an 80% chance of finding the first qubit flipped and a 20% chance of finding the second qubit flipped. Multiplying out, the error state is
|Ψ⟩ = (√0.8 X⊗I⊗I + √0.2 I⊗X⊗I)(a|000⟩ + b|111⟩)
= √0.8(a|100⟩ + b|011⟩) + √0.2(a|010⟩ + b|101⟩).
The error state is then augmented with |000⟩ and the syndrome extraction, S, applied:
S(|Ψ⟩ ⊗ |000⟩) = S(√0.8(a|100000⟩ + b|011000⟩) + √0.2(a|010000⟩ + b|101000⟩))
= √0.8(a|100110⟩ + b|011110⟩) + √0.2(a|010101⟩ + b|101101⟩)
= √0.8(a|100⟩ + b|011⟩) ⊗ |110⟩ + √0.2(a|010⟩ + b|101⟩) ⊗ |101⟩.
Now we measure the last three qubits. This measurement collapses them to |110⟩ with 80% probability or |101⟩ with 20% probability. Since they are entangled with
the repetition coding bits, the coding bits partially collapse, too. The final state is (a|100⟩ + b|011⟩) ⊗ |110⟩ with 80% probability or (a|010⟩ + b|101⟩) ⊗ |101⟩ with 20% probability. If we measured 1, 1, 0, the first collapse took place, and we apply X⊗I⊗I to a|100⟩ + b|011⟩, producing a|000⟩ + b|111⟩, the original coding. On the other hand, if we measured 1, 0, 1, we apply I⊗X⊗I to a|010⟩ + b|101⟩. In either case, the system is restored to the original condition, a|000⟩ + b|111⟩, without ever measuring (or disturbing) the repetition bits themselves. This error correction model works only if no more than one of the three qubits experiences an error. With an error probability of p, the chance of either no error or one error is (1 − p)³ + 3p(1 − p)² = 1 − 3p² + 2p³. This method improves system reliability if the chance of an uncorrectable error, which is 3p² − 2p³, is less than the chance of a single error, p; in other words, if p < 0.5.
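The bookkeeping of Table II can be exercised with a purely classical sketch, added here for illustration: it encodes a logical 0 as three bits, applies independent bit flips with probability p, decodes using the syndrome, and estimates the failure rate, which should approach 3p² − 2p³:

import random

SYNDROME_TO_FLIP = {(1, 1, 0): 0,   # qubit 1 flipped
                    (1, 0, 1): 1,   # qubit 2 flipped
                    (0, 1, 1): 2,   # qubit 3 flipped
                    (0, 0, 0): None}

def correct(block):
    t0, t1, t2 = block[0] ^ block[1], block[0] ^ block[2], block[1] ^ block[2]
    flip = SYNDROME_TO_FLIP[(t0, t1, t2)]
    if flip is not None:
        block[flip] ^= 1
    return block

p = 0.1                                   # per-bit error probability
trials, failures = 100_000, 0
for _ in range(trials):
    block = [0, 0, 0]                     # logical 0 encoded as 000
    block = [bit ^ (random.random() < p) for bit in block]
    if correct(block) != [0, 0, 0]:
        failures += 1

print(failures / trials)                  # ~3p^2 - 2p^3 = 0.028, versus p = 0.1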
4.3.3 From Error Correction to Quantum Fault Tolerance
The replication code given above is simple, but has disadvantages. First, it only corrects "bit flips," that is, errors in the state of a qubit. It cannot correct "phase errors," such as the change of sign in 1/√2(|0⟩ + |1⟩) to 1/√2(|0⟩ − |1⟩). Second, a replication code wastes resources. The code uses three actual qubits to encode one logical qubit. Further improvements in reliability take significantly more resources. More efficient codes can correct arbitrary bit or phase errors while using a sublinear number of additional qubits. One such coding scheme is group codes. Since the odds of a single qubit being corrupted must be low (or else error correction wouldn't work at all), we can economize by protecting a group of qubits at the same time, rather than protecting the qubits one at a time. In 1996 Ekert and Macchiavello pointed out [11] that such codes were possible and showed a lower bound. To protect l logical qubits from up to t errors, they must be encoded in the entangled state of at least n physical qubits, such that the following holds:
Σ_{i=0}^{t} 3^i (n choose i) ≤ 2^(n−l).
An especially promising approach is the use of "concatenated" error correcting codes [12,13]. In this scheme, a single logical qubit is encoded as several qubits, but in addition the code qubits themselves are also encoded, forming a hierarchy of encodings. The significance is that if the probability of error for an individual qubit can be reduced below a certain threshold, then quantum computations can be carried out to an arbitrary degree of accuracy. A new approach complements error correction. Fault tolerant quantum computing avoids the need to actively decode and correct errors by computing directly
on encoded quantum states. Instead of computing with gates and qubits, fault tolerant designs use procedures that execute encoded gates on encoded states that represent logical qubits. Although many problems remain to be solved in the physical implementation of fault tolerant quantum computing, this approach brings quantum computing a little closer to reality.
5. Quantum Communication and Cryptography
Quantum computing promises a revolutionary advance in computational power, but applications of quantum mechanics to communication and cryptography may have equally spectacular results, and practical implementations may be available much sooner. In addition, quantum communication is likely to be just as essential to quantum computing as networking is to today's computer systems. Most observers expect quantum cryptography to be the first practical application for quantum communications and computing.
5.1 Why Quantum Cryptography Matters
Cryptography has a long history of competition between code makers and code breakers. New encryption methods appear routinely, and many are quickly cracked through lines of attack that their creators never considered. During the first and second World Wars, both sides were breaking codes that the other side considered secure. More significantly, a code that is secure at one time may fall to advances in technology. The most famous example of this may be the World War II German Enigma code. Some key mathematical insights made it possible to break Enigma messages encrypted with poorly selected keys, but only with an immense amount of computation. By the middle of the war, Enigma messages were being broken using electromechanical computers developed first by Polish intelligence and later by faster British devices built under the direction of Alan Turing. Although the Germans improved their encryption machines, Joseph Desch, at NCR Corporation, developed code breaking devices 20 times faster than Turing's, enabling the US Navy's Op-20-G to continue cracking many Enigma messages. Today, an average personal computer can break Enigma encryption in seconds. A quantum computer would have the same impact on many existing encryption algorithms. Much of modern cryptography is based on exploiting extremely hard mathematical problems, for which there are no known efficient solutions. Many modern cipher methods are based on the difficulty of factoring (see Section 4.2.3) or computing discrete logarithms for large numbers (e.g., over 100 digits). The
best algorithms for solving these problems are exponential in the length of input, so a brute force attack would require literally billions of years, even on computers thousands of times faster than today's machines. Quantum computers factoring large numbers or solving discrete logarithms would make some of the most widely used encryption methods obsolete overnight. Although quantum computers are not expected to be available for at least the next decade, the very existence of a quantum factoring algorithm makes classical cryptography obsolete for some applications. It is generally accepted that a new encryption method should protect information for 20 to 30 years, given expected technological advances. Since it is conceivable that a quantum computer will be built within the next two to three decades, algorithms based on factoring or discrete logarithms are, in that sense, obsolete already. Quantum cryptography, however, offers a solution to the problem of securing codes against technological advances.
5.2 Unbreakable Codes
An encrypted message can always be cryptanalyzed by brute force methods—trying every key until the correct one is found. There is, however, one exception to this rule. A cipher developed in 1917 by Gilbert Vernam of AT&T is truly unbreakable. A Vernam cipher, or "one-time pad," uses a key with a random sequence of letters in the encryption alphabet, equal in length to the message to be encrypted. A message, M, is encrypted by adding, modulo the alphabet length, each letter of the key K to the corresponding letter of M, i.e., Cᵢ = Mᵢ ⊕ Kᵢ, where C is the encrypted message, or ciphertext, and ⊕ is modular addition (see Table III). To decrypt, the process is reversed. The Vernam cipher is unbreakable because there is no way to determine a unique match between encrypted message C and key K. Since the key is random and the same length as the message, an encrypted message can decrypt to any text at all, depending on the key that is tried. For example, consider the ciphertext "XEC." Since keys are completely random, all keys are equally probable. So it is just as likely that the key is "UDI," which decrypts to "CAT," or "TPW," which
TABLE III. ONE-TIME PAD

Text      Random key    Ciphertext
C (3)     ⊕ U (21)      X (24)
A (1)     ⊕ D (4)       E (5)
T (20)    ⊕ I (9)       C (3)
decrypts to "DOG." There is no way to prove which is the real key, and therefore no way to know the original message. Although it is completely secure, the Vernam cipher has serious disadvantages. Since the key must be the same length as the message, a huge volume of key material must be exchanged by sender and recipient. This makes it impractical for high-volume applications such as day-to-day military communication. However, Vernam ciphers may be used to transmit master keys for other encryption schemes. Historically, Vernam ciphers have been used by spies sending short messages, using pads of random keys that could be destroyed after each transmission, hence the common name "one-time pad." An equally serious problem is that if the key is ever reused, it becomes possible to decrypt two or more messages that were encrypted under the same key. A spectacular example of this problem is the post-war decryption of Soviet KGB and GRU messages by U.S. and British intelligence under the code name VENONA. Soviet intelligence had established a practice of reusing one-time pads after a period of years. British intelligence analysts noticed a few matches in ciphers from a large volume of intercepted Soviet communications [14]. Over a period of years, British and U.S. cryptanalysts working at Arlington Hall in Virginia gradually decrypted hundreds of Soviet messages, many of them critical in revealing Soviet espionage against U.S. atomic weapons research in the 1940s and early 1950s. Still another problem with implementing a Vernam cipher is that the key must be truly random. Using text from a book, for example, would not be secure. Similarly, using the output of a conventional cipher system, such as DES, results in an encryption that is only as secure as the cipher system, not an unbreakable one-time pad system. Pseudo-random number generator programs may produce sequences with correlations, or the entire generation algorithm may be discovered; both these attacks have been successfully used. Thus while the Vernam cipher is in theory unbreakable, in practice it becomes difficult and impractical for most applications. Conventional cryptosystems, on the other hand, can be broken but are much more efficient and easier to use.
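The arithmetic of Table III is easy to sketch in code. The following added example uses the table's A = 1, ..., Z = 26 convention and its own "CAT"/"UDI" values:

def to_num(ch):                # A = 1, ..., Z = 26, as in Table III
    return ord(ch) - ord('A') + 1

def to_ch(n):
    return chr((n - 1) % 26 + ord('A'))

def encrypt(message, key):     # C_i = M_i + K_i (mod 26)
    return "".join(to_ch(to_num(m) + to_num(k)) for m, k in zip(message, key))

def decrypt(cipher, key):      # M_i = C_i - K_i (mod 26)
    return "".join(to_ch(to_num(c) - to_num(k)) for c, k in zip(cipher, key))

print(encrypt("CAT", "UDI"))   # XEC, as in Table III
print(decrypt("XEC", "UDI"))   # CAT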
5.3 Quantum Cryptography
Quantum cryptography offers some potentially enormous advantages over conventional cryptosystems, and may also be the only way to secure communications against the power of quantum computers. With quantum methods, it becomes possible to exchange keys with the guarantee that any eavesdropping to intercept the key is detectable with arbitrarily high probability. If the keys are used as one-time pads, complete security is assured. Although special purpose classical
hardware can generate keys that are truly random, it is easy to use the collapse of quantum superpositions to generate truly random keys. This eliminates one of the major drawbacks to using one-time pads. The ability to detect the presence of an eavesdropper is in itself a huge advantage over conventional methods. With ordinary cryptography, there is always a risk that the key has been intercepted. Quantum key distribution eliminates this risk using properties of quantum mechanics to reveal the presence of any eavesdropping.
5.3.1 Quantum Key Distribution
The first significant communications application proposed using quantum effects is quantum key distribution, which solves the problem of communicating a shared cryptographic key between two parties with complete security. Classical solutions to the key distribution problem all carry a small, but real, risk that the encrypted communications used for sharing a key could be decrypted by an adversary. Quantum key distribution (QKD) can, in theory, make it impossible for the adversary to intercept the key communication without revealing his presence. The security of QKD relies on the physical effects that occur when photons are measured. As discussed in Section 3.3, a photon polarized in a given direction will not pass through a filter whose polarization is perpendicular to the photon's polarization. At any other angle than perpendicular, the photon may or may not pass through the filter, with a probability that depends on the difference between the direction of polarization of the photon and the filter. At 45°, the probability of passing through the filter is 50%. The filter is effectively a measuring device. According to the measurement postulate of quantum mechanics, measurements in a 2-dimensional system are made according to an orthonormal basis.* Measuring the state transforms it into one or the other of the basis vectors. In effect, the photon is forced to "choose" one of the basis vectors with a probability that depends on how far its angle of polarization is from the two basis vectors. For example, a diagonally polarized photon measured according to a vertical/horizontal basis will be in a state of either vertical or horizontal polarization after measurement. Furthermore, any polarization angle can be represented as a linear combination, a|↑⟩ + b|→⟩, of orthogonal (i.e., perpendicular) basis vectors. For QKD, two bases are used: rectilinear, with basis vectors ↑ and →, and diagonal, with basis vectors ↗ and ↘.
*Recall from linear algebra that a basis for a vector space is a set of vectors that can be used in linear combination to produce any vector in the space. A set of k vectors is necessary and sufficient to define a basis for a k-dimensional space. A commonly used basis for a 2-dimensional vector space is (1,0) and (0,1).
Measuring photons in these polarizations according to the basis vectors produces the results shown in Table IV. These results are the basis of a key distribution protocol, BB84, devised by Bennett and Brassard [15]. Many other QKD protocols have been devised since, using similar ideas. Suppose two parties, Alice and Bob, wish to establish a shared cryptographic key. An eavesdropper, Eve, is known to be attempting to observe their communication; see Fig. 10. How can the key be shared without Eve intercepting it? Traditional solutions require that the key be encrypted under a previously shared key, which carries the risk that the communication may be decrypted by cryptanalytic means, or that the previously shared key may have been compromised. Either way, Eve may read the message and learn Alice and Bob's new key. QKD provides a method for establishing the shared key that guarantees either that the key will be perfectly secure or that Alice and Bob will learn that Eve is listening and therefore not use the key. The BB84 QKD protocol takes advantage of the properties shown in Table IV. The protocol proceeds as follows: Alice and Bob agree in advance on a representation for 0 and 1 bits in each basis. For example, they may choose → and ↗ to represent 0 and ↑ and ↘ to represent 1. Alice sends Bob a stream of polarized photons, choosing randomly between ↑, →, ↗, and ↘ polarizations. When receiving a photon, Bob chooses randomly between + and × bases. When
TABLE IV. PHOTON MEASUREMENT WITH DIFFERENT BASES
Polarization   Basis   Result
↑              +       ↑
→              +       →
↗              +       ↑ or → (50/50)
↘              +       ↑ or → (50/50)
↑              ×       ↗ or ↘ (50/50)
→              ×       ↗ or ↘ (50/50)
↗              ×       ↗
↘              ×       ↘
FIG. 10. Quantum key distribution: Alice and Bob are linked by a public channel and a quantum channel, both of which Eve can access.
the transmission is complete, Bob sends Alice the sequence of bases he used to measure the photons. This communication can be completely public. Alice tells Bob which of the bases were the same ones she used. This communication can also be public. Alice and Bob discard the measurements for which Bob used a different basis than Alice. On average, Bob will guess the correct basis 50% of the time, and will therefore get the same polarization as Alice sent. The key is then the interpretation of the sequence of remaining photons as 0's and 1's. Consider the example in Table V. Eve can listen to the messages between Alice and Bob about the sequences of bases they use and learn the bases that Bob guessed correctly. However, this tells her nothing about the key, because Alice's polarizations were chosen randomly. If Bob guessed + as the correct basis, Eve does not know whether Alice sent a → (0) or a ↑ (1) polarized photon, and therefore knows nothing about the key bit the photon represents. What happens if Eve intercepts the stream sent by Alice and measures the photons? On average, Eve will guess the correct basis 50% of the time, and the wrong basis 50% of the time, just as Bob does. However, when Eve measures a photon, its state is altered to conform to the basis Eve used, so Bob will get the wrong result in approximately half of the cases where he and Alice have chosen the same basis. Since they chose the same basis half the time, Eve's measurement adds an error rate of 25%. Consider the elaborated example in Table VI. We describe the details of real systems, which have some intrinsic error rate, and how the error rate is determined, in Section 5.4.
TABLE V. DERIVING A NEW KEY
Sent by Alice   Basis used by Bob   Bob's result   Key
→               ×                   ↘
→               +                   →              0
↑               ×                   ↗
↑               ×                   ↘
↗               +                   →
↘               ×                   ↘              1
↗               +                   ↑
→               +                   →              0
→               +                   →              0
↑               ×                   ↘
↘               ×                   ↘              1
↘               +                   →
↘               ×                   ↘              1
↑               +                   ↑              1
→               +                   →              0
↗               ×                   ↗              0
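The sifting illustrated in Table V can be simulated classically. The sketch below is an added illustration whose random choices are not the specific values of the table; it keeps only the positions where Alice's and Bob's bases agree:

import random

n = 16
bases = ["+", "x"]

alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice(bases) for _ in range(n)]
bob_bases = [random.choice(bases) for _ in range(n)]

# If Bob's basis matches Alice's, he reads her bit; otherwise his result is random.
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: publicly compare bases (not bits) and keep only the matches.
key_alice = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
key_bob = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]

print(key_alice == key_bob, key_alice)   # True; on average about n/2 bits survive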
TABLE VI. QUANTUM KEY DISTRIBUTION WITH EAVESDROPPING
(Rows: Sent by Alice; Basis used by Eve; Eve's result; Basis used by Bob; Bob's result; Key. In this elaborated example Eve's intercept-and-resend measurements corrupt some of the positions where Alice's and Bob's bases agree, which appear as errors in the sifted key.)
5.3.2 Generating Random Keys
Properly implemented, the BB84 protocol guarantees that Alice and Bob share a key that can be used either as a one-time pad, or as a key for a conventional cryptographic algorithm. In either case, real security is only available if the key is truly random. Any source of nonrandomness is a potential weakness that might be exploited by a cryptanalyst. This is one reason that ordinary pseudorandom number generator programs, such as those used for simulations, are hard to use for cryptography. Some conventional cryptosystems rely on special purpose hardware to generate random bits, and elaborate tests [16] are used to ensure randomness. One of the interesting aspects of quantum cryptography is that it provides a way to ensure a truly random key as well as allowing for detection of eavesdropping. Recall from Section 3.2 that for a superposition a|ψ⟩ + b|φ⟩, the probability of a measurement result of ψ is a², and of φ is b². Therefore, when a series of qubits in the superposition 1/√2(|0⟩ + |1⟩) is measured, 0 and 1 occur with equal probability. Measuring a series of particles in this state therefore establishes a truly random binary sequence.
5.4 Prospects and Practical Problems
Although in theory the BB84 protocol can produce a guaranteed secure key, a number of practical problems remain to be solved before quantum cryptography can fulfill its promise. BB84 and other quantum protocols are idealized, but current technology is not yet close enough to the idealized description to implement quantum protocols as practical products. As of 2002, QKD has not been demonstrated over a distance of more than 50 km, but progress has been steady [17]. Commercial products using quantum protocols may be available by 2005, if problems in generating and detecting single photons can be overcome. Single-photon production is one of the greatest challenges for quantum communication. To prevent eavesdropping, transmission of one photon per time slot is needed. If multiple photons are produced in a time slot, it is possible for an adversary to count the number of photons without disturbing their quantum state. Then, if multiple photons are present, one can be measured while the others are allowed to pass, revealing key information without betraying the presence of the adversary. Current methods of generating single photons typically have an efficiency of less than 15%, leaving plenty of opportunity for Eve. One method of dealing with noise problems is to use privacy amplification techniques. Whenever noise is present, it must be assumed that Eve could obtain partial information on the key bits, since it is not possible for Alice and Bob to
know for certain whether the error rate results from ordinary noise or from Eve's intrusion. Privacy amplification distills a long key, about which Eve is assumed to have partial information, down to a much shorter key that reduces Eve's information to an arbitrarily low level. For privacy amplification, the first part of the protocol works exactly as before: Alice sends Bob qubits over a quantum channel, then the two exchange information over a public channel about which measurement bases they used. As before, they delete the qubits for which they used different measurement bases. Now, however, they also must delete bit slots in which Bob should have received a qubit, but didn't, either due to Eve's intrusion or dark counts at Bob's detector. Bob transmits the location of dark counts to Alice over the public channel. Next, Alice and Bob publicly compare small parts of their raw keys to estimate the error rate, then delete these publicly disclosed bits from their key, leaving the tentative final key. If the error rate exceeds a predetermined error threshold, indicating possible interception by Eve, they start over from the beginning to attempt a new key. If the error rate is below the threshold, they remove any remaining errors from the rest of the raw key to produce the reconciled key, using parity checks of subblocks of the tentative final key. To do this, they partition the key into blocks of length l such that each block is unlikely to contain more than one error. They each compute parity on all blocks and publicly compare results, throwing away the last bit of each compared block. If parity does not agree for a block, they divide the block into two, then compare parity on the subblocks, continuing in this binary search fashion until the faulty bit is found and deleted. This step is repeated with different random partitions until it is no longer efficient to continue. After this process, they select randomly chosen subsets of the remaining key, computing parity and discarding faulty bits and the last bit of each partition as before. This process continues for some fixed number of times to ensure with high probability that the key contains no error. Because physical imperfections are inevitable in any system, it must be assumed that Eve may be able to obtain at least partial information. Eavesdropping may occur, even with significantly improved hardware, either through multiple-photon splitting or by intercepting and resending some bits, but not enough to reveal the presence of the eavesdropper. To overcome this problem, Bennett et al. [18] developed a privacy amplification procedure that distills a secure key by removing Eve's information with arbitrarily high probability. The first step in privacy amplification is for Alice and Bob to use the error rate determined above to compute an upper bound, k, on the number of bits in the remaining key that could be known to Eve. Using the number of bits in the
remaining, reconciled key, n, and an adjustable security parameter s, they select n − k − s subsets of the reconciled key. The subset selection is done publicly, but the contents of the subsets are kept secret. Alice and Bob then compute parity on the subsets they selected, using the resulting parities as the final secret key. On average, Eve now has less than 2^(−s)/ln 2 bits of information about the final key. Even if a reliable method of single photon production is developed, errors in transmission are as inevitable with quantum as with classical communication. Because quantum protocols rely on measuring the error rate to detect the presence of an eavesdropper, it is critical that the transmission medium's contribution to the error rate be as small as possible. If transmission errors exceed 25%, secure communication is not possible, because a simple man-in-the-middle attack—measuring all bits and passing them on—will not be detected.
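The parity-based reconciliation described above can be sketched classically. This added example assumes a single flipped bit in a small block and binary-searches for it using only parity comparisons, which is the information that would be exchanged over the public channel:

def parity(bits):
    return sum(bits) % 2

def locate_error(alice_block, bob_block):
    """Binary search for a single flipped bit using only parity comparisons.

    Each compared parity leaks one bit to Eve, which is why the protocol
    discards a bit per comparison and later applies privacy amplification.
    """
    lo, hi = 0, len(alice_block)
    if parity(alice_block) == parity(bob_block):
        return None                           # no detectable error in this block
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice_block[lo:mid]) != parity(bob_block[lo:mid]):
            hi = mid                          # error is in the left half
        else:
            lo = mid                          # error is in the right half
    return lo

alice = [0, 1, 1, 0, 1, 0, 0, 1]
bob   = [0, 1, 1, 0, 0, 0, 0, 1]              # bit 4 corrupted in transit
print(locate_error(alice, bob))               # 4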
5.5 Dense Coding
As discussed in Section 2.6, a qubit can produce only one bit of classical information. Surprisingly, it is possible to communicate two bits of information using only one qubit and an EPR pair in a quantum technique known as dense coding. Dense coding takes advantage of entanglement to double the information content of the physically transmitted qubit. Initially, Alice and Bob must each have one of the entangled particles of an EPR pair:
Ψ₀ = 1/√2(|00⟩ + |11⟩).
To communicate two bits, Alice represents the possible bit combinations as 0 through 3. Using the qubit in her possession, she then executes one of the transformations in Table VII. After the transformation, Alice sends her qubit to Bob. Now that Bob has both qubits, he can use a controlled-NOT (prior to this, Alice and Bob could apply transformations only to their individual particles).
TABLE VII. DENSE CODING, PHASE 1
Bits   Transform            New state
00     Ψ₀ = (I ⊗ I)Ψ₀       1/√2(|00⟩ + |11⟩)
01     Ψ₁ = (X ⊗ I)Ψ₀       1/√2(|10⟩ + |01⟩)
10     Ψ₂ = (Z ⊗ I)Ψ₀       1/√2(|00⟩ − |11⟩)
11     Ψ₃ = (Y ⊗ I)Ψ₀       1/√2(|01⟩ − |10⟩)
The controlled-NOT makes it possible to factor out the second bit, while the first remains in a superposition, as shown in Table VIII. Note that after the controlled-NOT, it is possible to read off the values of the initial bits by treating 1/√2(|0⟩ + |1⟩) as 0 and 1/√2(|0⟩ − |1⟩) as 1. All that remains is to reduce the first qubit to a classical value by executing a Hadamard transform, as shown in Table IX. The dense coding concept can also be implemented using three qubits in an entangled state known as a GHZ state [19,20]. With this procedure, Alice can communicate three bits of classical information by sending two qubits. Using local operations on the two qubits, Alice is able to prepare the GHZ particles in any of the eight orthogonal GHZ states. Through entanglement, her operations affect the entire three-qubit system, just as her operation on one qubit of an entangled pair changes the state of the two qubits in the pair. Similar to two-qubit dense coding, Bob measures his qubit along with the qubits received from Alice to distinguish one of the eight possible states encoded by three bits.
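The full dense-coding round trip can be verified numerically. The following sketch is an added illustration assuming the usual matrices for X, Z, Y, CNOT, and H; it applies Alice's encoding from Table VII and Bob's decoding from Tables VIII and IX:

import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
Y = np.array([[0, -1j], [1j, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # Psi_0 = 1/sqrt(2)(|00> + |11>)
ENCODING = {"00": I2, "01": X, "10": Z, "11": Y}     # Alice's transform (Table VII)

def dense_coding(bits):
    state = np.kron(ENCODING[bits], I2) @ bell       # Alice acts on her qubit only
    state = np.kron(H, I2) @ (CNOT @ state)          # Bob: CNOT, then H on qubit 1
    probs = np.abs(state) ** 2                       # measure both qubits
    return format(int(np.argmax(probs)), "02b")

for bits in ("00", "01", "10", "11"):
    print(bits, "->", dense_coding(bits))            # each pair is recovered exactly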
5.6 Quantum Teleportation
As shown in Section 3.6, it is impossible to clone, or copy, an unknown quantum state. However, the quantum state can be moved to another location using classical communication. The original quantum state is reconstructed exactly at the receiver, but the original state is destroyed. The No-Cloning theorem thus
TABLE VIII. DERIVING THE SECOND BIT
State               C-NOT result        First bit          Second bit
1/√2(|00⟩ + |11⟩)   1/√2(|00⟩ + |11⟩)   1/√2(|0⟩ + |1⟩)    |0⟩
1/√2(|10⟩ + |01⟩)   1/√2(|11⟩ + |01⟩)   1/√2(|0⟩ + |1⟩)    |1⟩
1/√2(|00⟩ − |11⟩)   1/√2(|00⟩ − |10⟩)   1/√2(|0⟩ − |1⟩)    |0⟩
1/√2(|01⟩ − |10⟩)   1/√2(|01⟩ − |11⟩)   1/√2(|0⟩ − |1⟩)    |1⟩
TABLE IX. DERIVING FIRST BIT

First bit          H(First bit)
1/√2(|0⟩ + |1⟩)    1/√2(1/√2(|0⟩ + |1⟩) + 1/√2(|0⟩ − |1⟩)) = 1/2(|0⟩ + |1⟩ + |0⟩ − |1⟩) = |0⟩
1/√2(|0⟩ − |1⟩)    1/√2(1/√2(|0⟩ + |1⟩) − 1/√2(|0⟩ − |1⟩)) = 1/2(|0⟩ + |1⟩ − |0⟩ + |1⟩) = |1⟩
holds because in the end there is only one state. Quantum teleportation can be considered the dual of the dense coding procedure: dense coding maps a quantum state to two classical bits, while teleportation maps two classical bits to a quantum state. Neither process would be possible without entanglement. Initially, Alice has a qubit in an unknown state, i.e.,
Φ = a|0⟩ + b|1⟩.
As with dense coding, Alice and Bob each have one of an entangled pair of qubits: ^0 = -^(100) +111)). The combined state is O ^ ^ o = MO)0-^(|OO) + |ll))+/?|l)0-^(|OO) + |ll))j
= -L(^|o)|oo> + ^|o)|ii)) + -^(/>|i>|oo) + /)|i>|ii)) = -^(fl|000) + ^|011) + /?|100> + /?|lll)).
V2 At this point, Alice has the first two qubits and Bob has the third. Alice applies a controlled-NOT next: (CNOT (8) / ) (O 0 ^ ) = — (a\000) -f a\0\l) + b\ 110) + b\mi)). Applying a Hadamard, H (S> I <S) I, then produces i(fl(|000) + 1100) + 1011) + 1111)) + b{\OlO) - 1110) + 1001) - 1101))). The objective is to retrieve the original state, so we rewrite the state to move the amplitudes a and b through the terms. This shows it is possible to measure the first two bits, leaving the third in a superposition of a and b:
^(100)^10) +110)^10) +101)^11) +111)^11) + mb\o) - \n)b\o) +|00)/)|1)-|10)/)|1)) = ^(|00)(fl|0) + b\l)) + |01)(«|l) + b\0)) + \\0){a\0)-b |10)(^|0) - b\l)) + \\l){a\\)-b\0))).
What does this equation mean? When we measure the first two qubits, we get one of the four possibilities. Since the third qubit is entangled, it will be in the corresponding state. For instance, if we measure |00⟩, the system has collapsed to the first state, and the third qubit is in the state a|0⟩ + b|1⟩. Table X lists the four possible results. In each case, we can recover the original quantum state by applying a transform determined by the first two qubits, leaving the third qubit in the state Φ = a|0⟩ + b|1⟩. Just as quantum computing has required the development of new computing models beyond recursive function theory and Turing machines, quantum communication, teleportation, and dense coding show the need for new models of information beyond classical Shannon information theory.
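The teleportation algebra above can likewise be verified numerically. The sketch below is ours, not the chapter's; the amplitudes a and b are arbitrary values chosen for illustration, and each of the four measurement outcomes is checked against Table X.

    // Sketch (ours) verifying the teleportation derivation. Qubit order in
    // |q0 q1 q2>: q0 = Alice's unknown qubit, q1 = Alice's half of the EPR pair,
    // q2 = Bob's half; amplitude index = 4*q0 + 2*q1 + q2.
    #include <array>
    #include <cmath>
    #include <complex>
    #include <cstdio>
    #include <utility>

    using C = std::complex<double>;

    int main() {
        const double R = 1.0 / std::sqrt(2.0);
        const C a(0.6, 0.0), b(0.0, 0.8);                 // arbitrary unknown state a|0> + b|1>

        std::array<C, 8> s{};                             // (1/sqrt 2)(a|000> + a|011> + b|100> + b|111>)
        s[0] = a * R; s[3] = a * R; s[4] = b * R; s[7] = b * R;

        std::swap(s[4], s[6]);                            // CNOT: control q0, target q1
        std::swap(s[5], s[7]);

        for (int i = 0; i < 4; ++i) {                     // Hadamard on q0
            C lo = s[i], hi = s[4 + i];
            s[i] = (lo + hi) * R;
            s[4 + i] = (lo - hi) * R;
        }

        for (int m = 0; m < 4; ++m) {                     // each outcome of measuring q0 q1
            C c0 = s[2 * m], c1 = s[2 * m + 1];           // Bob's (unnormalized) qubit
            C d0, d1;                                     // after Bob's correction (Table X)
            switch (m) {
                case 0: d0 = c0; d1 = c1; break;                     // I
                case 1: d0 = c1; d1 = c0; break;                     // X
                case 2: d0 = c0; d1 = -c1; break;                    // Z
                case 3: d0 = C(0, -1) * c1; d1 = C(0, 1) * c0; break; // Y
            }
            double fid = std::abs(std::conj(a) * d0 + std::conj(b) * d1)
                       / std::sqrt(std::norm(d0) + std::norm(d1));
            std::printf("outcome %d%d: fidelity with a|0>+b|1> = %.3f\n", m >> 1, m & 1, fid);
        }
    }

Each branch reports fidelity 1.000: whatever Alice measures, Bob's corrected qubit is the original state up to a global phase.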
6. Physical Implementations
Until now, we have talked primarily about theoretical possibilities and in general physical and mathematical terms. To actually build a quantum computer a specific physical implementation will be required. Several experimental systems have demonstrated that they can manipulate a few qubits, and in some restrictive situations very simple quantum calculations with limited fidelity have been performed. However, a reliable, useful quantum computer is still far in the future. Moreover, we should recall that the first classical computers were large mechanical machines—not electronic computers based on transistors on silicon chips. Similarly, it is likely that the technology used to build the first useful quantum computer will be very different from the technology that eventually wins out. There are many different physical implementations that may satisfy all the necessary elements required for building a scalable quantum processor, but at the moment numerous technical and engineering constraints remain to be overcome. In this section we will list the required properties that a successful physical implementation must have along with several different physical implementations that may possess these properties.
TABLE X
QUANTUM TELEPORTATION

    Left qubits    3rd qubit state    Transform    3rd qubit new state
    |00⟩           a|0⟩ + b|1⟩        I            a|0⟩ + b|1⟩
    |01⟩           a|1⟩ + b|0⟩        X            a|0⟩ + b|1⟩
    |10⟩           a|0⟩ − b|1⟩        Z            a|0⟩ + b|1⟩
    |11⟩           a|1⟩ − b|0⟩        Y            a|0⟩ + b|1⟩
6.1 General Properties Required to Build a Quantum Computer
We will first describe briefly the general physical traits that any specific physical implementation must have. Currently, there is a big gap between demonstrations in the laboratory and generally useful devices. Moreover, most proposed laboratory implementations and the experiments carried out to date fail to completely satisfy all of the general characteristics. Which system ultimately satisfies all these constraints at a level required to build a true extensible quantum processor is not known. Here, we will comment on the general properties needed to build a useful quantum computing device.
6.1.1 Well-Characterized Qubits
The first requirement is that the qubits chosen must be well characterized. This requires that each individual qubit must have a well-defined set of quantum states that will make up the "qubit." So far, we have assumed that we had a two-level, or binary, quantum system. In reality, most quantum systems have more than two levels. In principle we could build a quantum computer whose individual elements or qubits consist of systems with four, five, ten, or any number of levels. The different levels we use may be a subset of the total number of levels in individual elements. Whatever the level structure of a qubit, we require that the levels being used have well-defined properties, such as energy. Moreover, a superposition of the levels of the qubit must minimize decoherence by hindering energy from moving into or out of the qubit. This general constraint requires that each individual qubit have the same internal level structure, regardless of its local external environment. This also requires that the qubit is well isolated from its environment to hinder energy flow between the environment and the qubit. Isolating information from the environment is easier for classical bits than for quantum bits. A classical bit or switch is in either the state "0" or the state "1," that is, on or off. Except in special cases, such as communication, CCDs, or magnetic disks, we engineer classical systems to be in one of these two possibilities, never in between. In those special cases, interactions are kept to a few tens of devices. A quantum system or qubit is inherently a much more delicate system. Although we may prepare the system in some excited state |1⟩, most quantum systems will decay to |0⟩ or an arbitrary superposition because of interactions with the environment. Interaction with the environment must be controllable to build a large quantum processor. Moreover, in some proposed physical implementations, individual qubits may have a slightly different internal level structure resulting from either the manufacturing process or the interaction of the qubit with its
environment. This slight difference in level structure must be compensated for and should not change during the computation. The physical nature of the qubit may be any one of a number of properties, such as electron spin, nuclear spin, photon polarization, the motional or trapping state of a neutral atom or ion, or the flux or charge in a superconducting quantum interference device (SQUID). For instance, in the particular case of ion traps, it is the ground state of the hyperfine states that result from coupling the electron and nuclear spin together. Further, it is only a specific pair of the magnetic sublevels of those hyperfine states. In a quantum dot one again uses the complex structure of the device to come up with two states to act as the qubit. In this case, it corresponds more to an excitation of an electron or an electron-hole pair.
6.1.2 Scalable Qubit Arrays and Gates
This requirement is a logical extension of the previous requirement. Since individual qubits must be able to interact to build quantum gates, they must be held in some type of replicated trap or matrix. In the very early days of computing, mechanical switches or tubes in racks held information instead of microscopic transistors etched in silicon and mounted in integrated circuits. Thus, the specific nature of the supporting infrastructure depends on the nature of the qubit. Regardless of the matrix or environment holding individual qubits, it is essential that we can add more qubits without modifying the properties of the previous qubits and having to reengineer the whole system. Because quantum systems are so sensitive to their environment, scalability is not trivial. Scalability also requires that qubits are stable over periods that are long compared to both single-qubit operations and two-qubit gates. In other words, the states of the qubits must not decohere on a time scale comparable to, or shorter than, that of one- and two-qubit operations. This is increasingly important in larger quantum computers where larger portions of the computer must wait for some parts to finish.
6.1.3 Stability and Speed
Physical implementations of a qubit are based on different underlying physical effects. In general these physical effects have very different decoherence times. For example, nuclear spin relaxation can be from one-tenth of a second to a year, whereas the decoherence time is more like 10^-3 s in the case of electron spin. It is approximately 10^-6 s for a quantum dot, and around 10^-8 s for an electron in certain solid state implementations. Although one might conclude that nuclear spins are the best, decoherence time is not the only concern.
A qubit must interact with external agencies so two qubits can interact. The stronger the external interactions, the faster two-qubit gates could operate. Because of the weak interaction of the nuclear spin with its environment, gates will likely take from 10^-3 to 10^-6 s to operate, giving "clock speeds" of from 1 kHz to 1 MHz. The interaction for electron spin is stronger, so gates based on electron spin qubits may operate in 10^-6 to 10^-8 s, or at 1 to 100 MHz. Thus, electron spin qubits are likely to be faster, but less stable. Since quantum error correction, presented in Section 4.3, requires gates like those used for computations, error correction only helps if we can expect more than about 10,000 operations before decoherence, that is, an error. Reaching this level of accuracy with a scalable method is the current milestone. Therefore, it is really the ratio of decoherence time to gate operation time, that is, the number of operations until decoherence, that is the near-term goal. Dividing decoherence times by operation times, we may have from 10^2 to 10^14 nuclear spin operations before a decoherence, or between 10^3 and 10^5 electron spin operations before a decoherence. Although decoherence and operation times are very different, the number of operations may be similar. This is not surprising since, in general, the weaker the underlying interactions, the slower the decoherence and the slower the one- and two-qubit operations. We see that many schemes offer the possibility of more than 10,000 operations between decoherence events. The primary problem is engineering the systems to get high gate speeds and enough operations before decoherence. It is not clear which physical implementation will provide the best qubits and gates. If we examine the state of the art in doing one-qubit operations while controlling decoherence, ions in an ion trap and single nuclear spins look the most promising. However, the weak interaction with the environment, and thus potentially with other qubits, makes two-qubit gates significantly slower than in some solid-state implementations.
6.1.4 Good Fidelity
Fidelity is a measure of the decoherence or decay of many qubits relative to one- and two-qubit gate times. Another aspect of fidelity is that when we perform an operation such as a CNOT, we do not expect to do it perfectly, but nearly perfectly. As an example, when we flip a classical bit from "0" to "1," it either succeeds or fails; there is no in between. Even at a detailed view, it is relatively straightforward to strengthen classical devices to add more charge, increase voltage, etc., driving the value to a clean "0" or "1," charged or uncharged, on or off state. When we flip a quantum bit, we intend to exchange the amplitudes of the "0" and "1" states: a|0⟩ + b|1⟩ → b|0⟩ + a|1⟩. Since most physical effects we use are continuous values, the result of the operation is likely to have a small
distortion: a|0⟩ + b|1⟩ → b′|0⟩ + e^(iε)a′|1⟩. The error ε is nearly zero, and the primed quantities are almost equal to the unprimed quantities, but they are not perfectly equal. In this case, we require that the overlap between our expected result and the actual result be such that the net effect is to have a probability of error for a gate operation on the order of 10^-4.
6.1.5 Universal Family of Unitary Transformations
In general, to build a true quantum computer it is only necessary to be able to perform an arbitrary one-qubit operation and almost any single two-qubit gate. If one can do arbitrary single-qubit operations and almost any single two-qubit gate, one can combine these operations to perform single-qubit operations, such as the Hadamard, and multiqubit operations, such as a CNOT or C²NOT gate (see Section 4.1) [21]. From these, we know we can construct any Boolean function.
6.1.6 Initialize Values
Another important operation is the initialization of all the qubits into a well-defined and well-characterized initial state. This is essential if one wishes to perform a specific algorithm, since the initial state of the system must typically be known and be unentangled. Initializing the qubits corresponds to putting a quantum system into a completely coherent state, which basically requires removing all thermal fluctuations and reduces the entropy (lack of order) of the system to 0. This is an extremely difficult task.
6.1.7 Readout
Another important requirement is the ability to reliably read resultant qubits. In many experimental situations, this is a technically challenging problem because one needs to detect a quantum state of a system that has been engineered to only weakly interact with its environment. However, this same system at "readout time" must interact sufficiently strongly that we can ascertain whether it is in the state |0⟩ or |1⟩, while simultaneously ensuring that the result is not limited by our measurement or detection efficiency. Readout, along with single-qubit gates, implies we need to be able to uniquely address each qubit.
6.1.8 Types of Qubits
We must be able to store quantum information for relatively long times: the equivalent of main memory, or RAM, in classical computers. There are two general possibilities: material qubits, such as atoms or electrons, and "flying"
qubits, or photons. Each has its own strengths and weaknesses. Material qubits can have decoherence times on the order of days, while photons move very fast and interact weakly with their environment. A quantum memory will likely be made of material systems consisting of neutral atoms or ions held in microtraps, solid state materials involving electron or nuclear spins, or artificial atoms like quantum dots. These material qubits are ideal for storing information if decoherence can be controlled. For example, single ions can be coherently stored for several days. However, manipulating individual photons or trying to build a two-qubit gate using photons appears quite difficult. A quantum processor is likely to use material qubits, too, to build the equivalent of registers and to interact well with the quantum memory.
6.1.9 Communication
When we wish to transmit or move quantum information, we typically want to use photons: they travel very fast and interact weakly with their environment. To build a successful quantum communication system will likely require the ability to move quantum information between material qubits and photons. This is another relatively difficult task, but several experiments have been successfully performed. However, different implementations of material qubits will likely need different solutions to moving entangled or superposed information from particles to photons and back.
6.2 Realizations
Just as there are many subatomic properties that may be exploited for quantum effects, realizations range from brand new technologies to decades-old technologies harnessed and adapted for quantum computing. These technologies can be categorized into two basic classes. The first is a top-down approach, in which existing technology from the materials science and solid state fields is adapted to produce quantum systems. This top-down approach involves creative ideas such as implementing single-ion impurities in silicon, designing very uniform quantum dots whose electronic properties are well characterized and controllable, using superconducting quantum interference devices, and several others. The second, contrasting approach is a bottom-up approach. The idea here is to start with a good, natural qubit, such as an atom or ion, and trap the particle in a benign environment. This latter concept provides very good single qubits but leaves open the question of scalability, especially when one begins to examine the mechanical limits of current traps. The benefit of this approach is excellent, uniform, decoherence-free qubits with great readout and initialization capabilities.
The hard problem will be scaling these systems and making the gate operations fast. The top-down approach suffers from decoherence in some cases or a dramatic failure of uniformity in the individual qubits: "identical qubits" are not truly uniform, decoherence-free individual qubits. The bottom-up approach has the benefit of good quality qubits and starting with a basic understanding of decoherence processes. Below we will briefly discuss some of these possible technologies.
6.2.1 Charged Atoms in an Ion Trap
The one system that has had great success is ions in an ion trap. Dave Wineland's group at NIST, Boulder has

• entangled four ions,
• shown exceedingly long coherence times for a single qubit,
• demonstrated high-efficiency readout,
• initialized four atoms into their ground state, and
• multiplexed atoms between two traps.

They have also shown violations of Bell's inequalities [22] and had many other successes. Their remarkable success and leadership of this effort blazes new frontiers in the experimental approaches to quantum computation, and their progress shows no signs of slowing. Ions of beryllium are held single file. Laser pulses flip individual ions. To implement a CNOT gate, the motion of the ions "sloshing" back and forth in the trap is coupled to the electron levels. That is, if the ions are in motion, the electron level is flipped; otherwise the electron level is unchanged. This is the operation described abstractly in Section 4.1.
6.2.2 Neutral Atoms in Optical Lattices or Microtraps
Several groups are attempting to repeat the ion trap success using neutral atoms in optical lattices, where a trapping potential results from intersecting standing waves of light from four or more laser beams in free space, or in micro-magnetic or micro-optical traps. These efforts are just getting seriously underway and appear to have many of the advantages of the ion scheme. One major difference is that neutral atoms interact less strongly with each other than ions do. This could lead to less decoherence, but is also likely to lead to slower two-qubit gate operations because of the weaker interactions. Much of the promise of the neutral atom
approach is based on the remarkable advances made in the past two decades in laser cooling of atoms and the formation of neutral atom Bose-Einstein condensates, where thousands of atoms are "condensed into" a single quantum state at temperatures of a few nanokelvins, or billionths of a degree above absolute zero. These advances have allowed scientists to manipulate large numbers of atoms in extremely controlled and exotic ways. The tools, techniques, and understanding developed over these past two decades may prove very useful in current attempts to create quantum gates with these systems.
6.2.3 Solid State

Many different approaches fall under the realm of solid state. One general approach is to use quantum dots, so-called artificial atoms, as qubits. If it can be done in a controlled and decoherence-free way, then one has the advantages of the atom and ion approach while having the controlled environment and assumed scalability that come with solid state material processing. Another variant is embedding single-atom impurities, such as ³¹P, in silicon. The ³¹P nuclear spin serves as a qubit while basic semiconductor technology is used to build the required scalable infrastructure. Alternative approaches based on excitons in quantum dots or electronic spins in semiconductors are also being investigated. The primary difficulty of these approaches is building the artificial atoms or implanting the ³¹P impurities precisely where required.
6.2.4 NMR

Nuclear magnetic resonance (NMR) has shown some remarkable achievements in quantum computing. However, it is widely believed that the current NMR approach will not scale to systems with more than 15 or 20 qubits. NMR uses ingenious series of radio-frequency pulses to manipulate the nuclei of atoms in molecules. Although all isolated atoms of a certain element resonate at the same frequency, their interactions with other atoms in a molecule cause slight changes in resonance. The NMR approach is extremely useful in coming up with a series of pulses to manipulate relatively complex systems of atoms in a molecule in situations where individual qubit rotations or gates might appear problematic. Thus, this work provides useful knowledge about how to manipulate complex quantum systems. Low-temperature solid state NMR is one possible way forward. As in the previous section, a single-atom impurity, such as ³¹P in silicon, is a qubit, but NMR attempts to perform single-site addressability, detection, and manipulation on nuclear spins.
6.2.5 Photon
Photons are clearly the best way to transmit information, since they move at the speed of light and do not interact strongly with their environment. This near-perfect characteristic for quantum communication makes photons problematic for quantum computation. In fact, early approaches to using photons for quantum computation suffered from a requirement of exponential numbers of optical elements and resources as one scaled the system. A second problem was that creating conditional logic for two-qubit gates appeared very difficult, since two photons do not interact strongly even in highly nonlinear materials; most nonlinear phenomena involving light fields appear only at high intensity. Recently, new approaches for doing quantum computation with photons have appeared that depend on using measurement in a "dual-rail" approach to create entanglement. This removes many of the constraints of early approaches and provides an alternative route to creating quantum logic. Experimental efforts using this approach are just beginning. The approach will still have to solve technically challenging problems caused by the high-speed motion of its qubits (a benefit in communication and possibly in computational speed) and by the lack of the highly efficient single-photon detectors essential to its success.
6.2.6 Optical Cavity Quantum Electrodynamics
Other atomic-type approaches involve strongly coupling atoms or ions to photons using high-finesse optical cavities. A similar type of approach may be possible using tailored quantum dots, ring resonators, or photonic materials. One advantage of these types of approaches is the ability to move quantum information from photons to material qubits and back. This type of technology appears to be essential anyway since material qubits (e.g., atoms, ions, electrons) are best for storing quantum information while photons, i.e., flying qubits, are best for transmitting quantum information. It is possible that these approaches may provide very fast quantum processors as well. Numerous efforts to investigate these schemes are underway.
6.2.7 Superconducting Qubits
Superconducting quantum interference devices (SQUIDs) can provide two types of qubits: flux-based qubits, corresponding to bulk quantum circulation, or charge-based qubits, corresponding to the charge carriers (Cooper pairs) responsible for superconductivity. SQUID-based science has been a field of investigation for several decades but has only recently shown an ability to observe Rabi flopping, a key experiment that shows the ability to
do single-qubit operations. This approach to quantum computation has great potential but also will have to overcome numerous technical difficulties. One major issue is the need to operate a bulk system at liquid helium temperatures. In summary, numerous physical approaches to quantum computing have been proposed and many are under serious research. Which of these approaches will ultimately be successful is not clear. In the near term, the ions and atomic systems will likely show the most progress, but the final winner will be the system that meets all of the technical requirements. This system may not even be among those listed above. What is important is that each of these approaches is providing us with an increased understanding of complex quantum systems and their coupling to the environment. This knowledge is essential to tackling the broad range of technical barriers that will have to be overcome to bring this exciting, perhaps revolutionary, field to fruition.
7. Conclusions
It will be at least a decade, and probably longer, before a practical quantum computer can be built. Yet the introduction of principles of quantum mechanics into computing theory has already produced remarkable results. Perhaps most significantly, it has been shown that there are functions that can be computed on a quantum computer that cannot be efficiently computed with a conventional computer (i.e., a classical Turing machine). This astonishing result has changed the understanding of computing theory that has been accepted for more than 50 years. Similarly, the application of quantum mechanics to information theory has shown that the accepted Shannon limit on the information-carrying capacity of a bit can be exceeded. The field of quantum computing has produced a few algorithms that vastly exceed the performance of any conventional computer algorithm, but it is unclear whether these algorithms will remain rare novelties, or whether quantum methods can be applied to a broad range of computing problems. The future of quantum communication is less uncertain, but a great deal of work is required before quantum networks can enter mainstream computing. Regardless of the future of practical quantum information systems, the union of quantum physics and computing theory has developed into a rich field that is changing our understanding of both computing and physics.
Appendix

The jump in the derivation of the answer to Deutsch's function characterization problem in Section 4.2.4 started after the application of the Hadamard, leaving us with the equation
    (1/(2√2))((|0⟩ + |1⟩)|0 ⊕ f(0)⟩ − (|0⟩ + |1⟩)|1 ⊕ f(0)⟩ + (|0⟩ − |1⟩)|0 ⊕ f(1)⟩ − (|0⟩ − |1⟩)|1 ⊕ f(1)⟩).

The easiest way to follow the result is to do case analysis. Here we have four cases: each possible result of f(0) and f(1). The exclusive-or operation is 0 ⊕ a = a and 1 ⊕ a = ā (the complement of a).

Case I: f(0) = 0, f(1) = 0

    (1/(2√2))((|0⟩ + |1⟩)|0 ⊕ f(0)⟩ − (|0⟩ + |1⟩)|1 ⊕ f(0)⟩ + (|0⟩ − |1⟩)|0 ⊕ f(1)⟩ − (|0⟩ − |1⟩)|1 ⊕ f(1)⟩)
      = (1/(2√2))((|0⟩ + |1⟩)|0⟩ − (|0⟩ + |1⟩)|1⟩ + (|0⟩ − |1⟩)|0⟩ − (|0⟩ − |1⟩)|1⟩).

Distributing the second qubit across the superposition of the first yields

      = (1/(2√2))(|0⟩|0⟩ + |1⟩|0⟩ − |0⟩|1⟩ − |1⟩|1⟩ + |0⟩|0⟩ − |1⟩|0⟩ − |0⟩|1⟩ + |1⟩|1⟩).

Collecting and canceling like terms, we get

      = (1/(2√2))(2|0⟩|0⟩ − 2|0⟩|1⟩).

We now factor out the 2 and the first qubit:

      = (1/√2)|0⟩(|0⟩ − |1⟩).

Generalizing, we get the final result for this case:

      = (1/√2)|f(0) ⊕ f(1)⟩(|0⟩ − |1⟩).

Case II: f(0) = 0, f(1) = 1

    (1/(2√2))((|0⟩ + |1⟩)|0 ⊕ f(0)⟩ − (|0⟩ + |1⟩)|1 ⊕ f(0)⟩ + (|0⟩ − |1⟩)|0 ⊕ f(1)⟩ − (|0⟩ − |1⟩)|1 ⊕ f(1)⟩)
      = (1/(2√2))((|0⟩ + |1⟩)|0⟩ − (|0⟩ + |1⟩)|1⟩ + (|0⟩ − |1⟩)|1⟩ − (|0⟩ − |1⟩)|0⟩)
      = (1/(2√2))(|0⟩|0⟩ + |1⟩|0⟩ − |0⟩|1⟩ − |1⟩|1⟩ + |0⟩|1⟩ − |1⟩|1⟩ − |0⟩|0⟩ + |1⟩|0⟩).

The reader can verify that collecting, canceling, and factoring gives

      = (1/√2)|1⟩(|0⟩ − |1⟩).

This generalizes to Case II's final result:

      = (1/√2)|f(0) ⊕ f(1)⟩(|0⟩ − |1⟩).
Case III: f(0) = 1, f(1) = 0

    (1/(2√2))((|0⟩ + |1⟩)|0 ⊕ f(0)⟩ − (|0⟩ + |1⟩)|1 ⊕ f(0)⟩ + (|0⟩ − |1⟩)|0 ⊕ f(1)⟩ − (|0⟩ − |1⟩)|1 ⊕ f(1)⟩)
      = (1/(2√2))((|0⟩ + |1⟩)|1⟩ − (|0⟩ + |1⟩)|0⟩ + (|0⟩ − |1⟩)|0⟩ − (|0⟩ − |1⟩)|1⟩)
      = (1/(2√2))(|0⟩|1⟩ + |1⟩|1⟩ − |0⟩|0⟩ − |1⟩|0⟩ + |0⟩|0⟩ − |1⟩|0⟩ − |0⟩|1⟩ + |1⟩|1⟩)
      = (1/√2)|1⟩(|1⟩ − |0⟩)
      = (1/√2)|f(0) ⊕ f(1)⟩(|1⟩ − |0⟩).

Case IV: f(0) = 1, f(1) = 1

    (1/(2√2))((|0⟩ + |1⟩)|0 ⊕ f(0)⟩ − (|0⟩ + |1⟩)|1 ⊕ f(0)⟩ + (|0⟩ − |1⟩)|0 ⊕ f(1)⟩ − (|0⟩ − |1⟩)|1 ⊕ f(1)⟩)
      = (1/(2√2))((|0⟩ + |1⟩)|1⟩ − (|0⟩ + |1⟩)|0⟩ + (|0⟩ − |1⟩)|1⟩ − (|0⟩ − |1⟩)|0⟩)
      = (1/(2√2))(|0⟩|1⟩ + |1⟩|1⟩ − |0⟩|0⟩ − |1⟩|0⟩ + |0⟩|1⟩ − |1⟩|1⟩ − |0⟩|0⟩ + |1⟩|0⟩)
      = (1/√2)|0⟩(|1⟩ − |0⟩)
      = (1/√2)|f(0) ⊕ f(1)⟩(|1⟩ − |0⟩).
We compute the second qubit to be |0⟩ − |1⟩ in Cases I and II, and |1⟩ − |0⟩ in Cases III and IV. This extra multiplication by −1 is called a "global phase." A global phase is akin to rotating a cube in purely empty space: without a reference, it is merely a mathematical artifact and has no physical meaning. Thus, all the cases result in

    (1/√2)|f(0) ⊕ f(1)⟩(|0⟩ − |1⟩).
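The four cases can also be confirmed mechanically. The following sketch is ours rather than the authors'; it builds the post-Hadamard state for each choice of f and checks that measuring the first qubit yields f(0) ⊕ f(1) with certainty.

    // Sketch (ours): numeric check of the case analysis above for Deutsch's problem.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double R2 = 1.0 / (2.0 * std::sqrt(2.0));    // the 1/(2*sqrt(2)) factor
        const int fs[4][2] = {{0,0}, {0,1}, {1,0}, {1,1}};  // f(0), f(1) for Cases I-IV

        for (int c = 0; c < 4; ++c) {
            int f0 = fs[c][0], f1 = fs[c][1];
            double amp[4] = {0, 0, 0, 0};                   // amplitudes of |00>, |01>, |10>, |11>

            // (|0>+|1>)|0 xor f(0)>  -  (|0>+|1>)|1 xor f(0)>
            // + (|0>-|1>)|0 xor f(1)> -  (|0>-|1>)|1 xor f(1)>
            amp[0 + (0 ^ f0)] += R2;  amp[2 + (0 ^ f0)] += R2;
            amp[0 + (1 ^ f0)] -= R2;  amp[2 + (1 ^ f0)] -= R2;
            amp[0 + (0 ^ f1)] += R2;  amp[2 + (0 ^ f1)] -= R2;
            amp[0 + (1 ^ f1)] -= R2;  amp[2 + (1 ^ f1)] += R2;

            // Probability that the first qubit measures 1.
            double p1 = amp[2] * amp[2] + amp[3] * amp[3];
            std::printf("f(0)=%d f(1)=%d: P(first qubit = 1) = %.1f, f(0) xor f(1) = %d\n",
                        f0, f1, p1, f0 ^ f1);
        }
    }

In every case the probability is 0.0 or 1.0 and agrees with f(0) ⊕ f(1), confirming that a single evaluation of f determines whether the function is constant or balanced.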
REFERENCES
Many papers and articles on quantum computing are archived by Los Alamos National Laboratory with support by the United States National Science Foundation and Department of Energy. The URL for the quantum physics archive is http://arXiv.org/archive/quant-ph/. References to it are "quant-ph/YYMMNNN" where the date first submitted is given as YY (year), MM (month), NNN (number within month).
[1] Feynman, R. (1982). "Simulating physics with computers." International Journal of Theoretical Physics, 21, 6&7, 467-488.
[2] Shor, P. W. (1997). "Polynomial time algorithms for prime factorization and discrete logarithms on a quantum computer." SIAM Journal on Computing, 26, 5, 1484-1509, quant-ph/9508027.
[3] Nielsen, M. A., and Chuang, I. L. (2000). Quantum Computation and Quantum Information. Cambridge Univ. Press, Cambridge, UK.
[4] Lenstra, A. K., and Lenstra, H. W., Jr. (Eds.) (1993). The Development of the Number Field Sieve, Lecture Notes in Mathematics, Vol. 1554. Springer-Verlag, Berlin.
[5] Schönhage, A. (1982). "Asymptotically fast algorithms for the numerical multiplication and division of polynomials with complex coefficients." In Computer Algebra EUROCAM '82, Lecture Notes in Computer Science, Vol. 144, pp. 3-15. Springer-Verlag, Berlin.
[6] Rieffel, E., and Polak, W. (2000). "An introduction to quantum computing for non-physicists." ACM Computing Surveys, 32, 3, 300-335, quant-ph/9809016.
[7] Deutsch, D. (1985). "Quantum theory, the Church-Turing principle and the universal quantum computer." Proc. of Royal Society London A, 400, 97-117.
[8] Cleve, R., Ekert, A. K., Macchiavello, C., and Mosca, M. (1998). "Quantum algorithms revisited." Proc. of Royal Society London A, 454, 339-354.
[9] Grover, L. K. (1996a). "A fast quantum mechanical algorithm for database search." In 28th Annual ACM Symposium on the Theory of Computing, pp. 212-219. ACM Press, New York.
[10] Grover, L. K. (1996b). "A fast quantum mechanical algorithm for database search," quant-ph/9605043.
[11] Ekert, A. K., and Macchiavello, C. (1996). "Quantum error correction for communication." Physical Review Letters, 77, 2585-2588, quant-ph/9602022.
[12] Knill, E., and Laflamme, R. (1996). "Concatenated quantum codes," quant-ph/9608012.
[13] Knill, E., and Laflamme, R. (1997). "A theory of quantum error-correcting codes." Phys. Rev. A, 55, 900.
[14] Wright, P. (1987). Spy Catcher: The Candid Autobiography of a Senior Intelligence Officer. Viking Penguin, New York.
[15] Bennett, C. H., and Brassard, G. (1984). In Proc. IEEE International Conf. on Computers, Systems and Signal Processing, Bangalore, India, p. 175. IEEE, New York.
[16] National Institute of Standards and Technology (2000). "A statistical test suite for random and pseudorandom number generators for cryptographic applications," SP 800-22.
[17] Brassard, G., and Crépeau, C. (1996). "25 years of quantum cryptography." SIGACT News, 27, 3, 13-24.
[18] Bennett, C. H., Brassard, G., Crépeau, C., and Maurer, U. M. (1995). "Generalized privacy amplification." IEEE Transactions on Information Theory, 41, 6, 1915-1923.
[19] Cereceda, J. L. (2001). "Quantum dense coding using three qubits." C/Alto del Leon 8, 4A, 28038 Madrid, Spain, May 21, quant-ph/0105096.
[20] Gorbachev, V. N., Zhiliba, A. I., Trubilko, A. I., and Yakovleva, E. S. "Teleportation of entangled states and dense coding using a multiparticle quantum channel," quant-ph/0011124.
[21] Deutsch, D., Barenco, A., and Ekert, A. (1995). "Universality in quantum computation." Proc. of Royal Society London A, 449, 669-677.
[22] Bell, J. S. (1964). "On the Einstein-Podolsky-Rosen Paradox." Physics, 1, 195-200.
Exception Handling¹

PETER A. BUHR, ASHIF HARJI, AND W. Y. RUSSELL MOK
Department of Computer Science
University of Waterloo
Waterloo, Ontario N2L 3G1
Canada
{pabuhr,asharji,wyrmok}@uwaterloo.ca

Abstract

It is no longer possible to consider exception handling as a secondary issue in a language's design, or even worse, as a mechanism added after the fact via a library approach. Exception handling is a primary feature in language design and must be integrated with other major features, including advanced control flow, objects, coroutines, concurrency, real-time, and polymorphism. Integration is crucial as there are both obvious and subtle interactions between exception handling and other language features. Unfortunately, many exception handling mechanisms work only with a subset of the language features and in the sequential domain. A comprehensive design analysis is presented for an easy-to-use and extensible exception-handling mechanism within a concurrent, object-oriented environment. The environment includes language constructs with separate execution stacks, e.g., coroutines and tasks, so the exception environment is significantly more complex than the normal single-stack situation. The pros and cons of various exception features are examined, along with feature interaction with other language mechanisms. Both exception termination and resumption models are examined in this environment, and previous criticisms of the resumption model, a feature commonly missing in modern languages, are addressed.

 1. Introduction  246
 2. EHM Objectives  248
 3. Execution Environment  249
 4. EHM Overview  253
 5. Handling Models  254
¹ This chapter is an extended version of the paper "Advanced Exception Handling Mechanisms" in IEEE Transactions on Software Engineering, 26(9), 820-836, September 2000. ©2000 IEEE. Portions reprinted with permission.
    5.1 Nonlocal Transfer  255
    5.2 Termination  257
    5.3 Retry  259
    5.4 Resumption  260
 6. EHM Features  263
    6.1 Catch-Any and Reraise  263
    6.2 Derived Exceptions  264
    6.3 Exception Parameters  265
    6.4 Bound Exceptions and Conditional Handling  267
    6.5 Exception List  269
 7. Handler Context  272
    7.1 Guarded Block  272
    7.2 Lexical Context  272
 8. Propagation Models  273
    8.1 Dynamic Propagation  274
    8.2 Static Propagation  275
 9. Propagation Mechanisms  277
10. Exception Partitioning  280
    10.1 Derived Exception Implications  281
11. Matching  282
12. Handler Clause Selection  283
13. Preventing Recursive Resuming  285
    13.1 Mesa Propagation  286
    13.2 VMS Propagation  287
14. Multiple Executions and Threads  290
    14.1 Coroutine Environment  290
    14.2 Concurrent Environment  291
    14.3 Real-Time Environment  291
15. Asynchronous Exception Events  292
    15.1 Communication  292
    15.2 Nonreentrant Problem  293
    15.3 Disabling Asynchronous Exceptions  294
    15.4 Multiple Pending Asynchronous Exceptions  296
    15.5 Converting Interrupts to Exceptions  297
16. Conclusions  297
Appendix: Glossary  298
References  301

1. Introduction
Substantial research has been done on exceptions but there is little agreement on what an exception is. Attempts have been made to define exceptions in terms
of errors but an error itself is also ill-defined. Instead of struggling to define what an exception is, this discussion examines the entire process as a control flow mechanism, and an exception is a component of an exception-handling mechanism (EHM) that specifies program behavior after an exception has been detected. The control flow generated by an EHM is supposed to make certain programming tasks easier, in particular, writing robust programs. Robustness results because exceptions are an active rather than a passive phenomenon, forcing programs to react immediately when exceptions occur. This dynamic redirection of control flow indirectly forces programmers to think about the consequences of exceptions when designing and writing programs. Nevertheless, exceptions are not a panacea and are only as good as the programmer using them. The strongest definition we are prepared to give for an exception is an event that is known to exist but which is ancillary to an algorithm or execution. Because it is ancillary, the exception may be forgotten or ignored without penalty in the majority of cases, e.g., an arithmetic overflow, which is a major source of errors in programs. In other situations, the exception always occurs but with a low frequency, e.g., encountering end-of-file when reading data. Essentially, a programmer must decide on the level of frequency that moves an event from the algorithmic norm to an exceptional case. Once this decision is made, the mechanism to deal with the exceptional event is best moved out of the normal algorithmic code and handled separately. It is this mechanism that constitutes an EHM. Even with the availability of EHMs, the common programming techniques used to handle exceptions are return codes and status flags. The return code technique requires each routine to return a value on its completion. Different values indicate whether a normal or rare condition has occurred during the execution of a routine. Alternatively, or in conjunction with return codes, is the status flag technique, which uses a shared variable to indicate the occurrence of a rare condition. Setting a status flag indicates a rare condition has occurred; the value remains as long as it is not overwritten by another condition. Both techniques have noticeable drawbacks. First, and foremost, the programmer is required to explicitly test the return values or status flags; hence, an error is discovered and subsequently handled only when checks are made. Without timely checking, a program is allowed to continue after an error, which can lead to wasted work at the very least, or an erroneous computation leading to failure at the very worst. Second, these tests are located throughout the program, reducing readability and maintainability. Third, as a routine can encounter many different errors, it may be difficult to determine if all the necessary error cases are handled. Finally, removing, changing, or adding return or status values is difficult as the testing is coded inline. The return code technique often encodes exception values among normal returned values, which artificially enlarges the range of valid values independent of the computation. Hence, changing a value representing
an exception into a normal return value or vice versa can result in interactions between the exception handling and normal code, where the two cases should be independent. The status flag technique uses a shared variable that precludes its use in a concurrent environment as it can change unpredictably. Fortunately, modern EHM techniques are slowly supplanting return codes and flags, even though EHMs have been available for more than two decades. A general framework is presented for exception handling, along with an attempt to compose an ideal EHM, with suggested solutions to some outstanding EHM problems. In constructing the framework, a partial survey of existing EHMs is necessary to compare and contrast approaches.
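To make the drawbacks of return codes concrete, the following sketch is ours, not the chapter's; the routine names and error codes are invented for illustration. It shows the explicit test that every call site must perform and the hand-written propagation that results.

    // Illustrative only: invented routines showing the return-code style the text
    // criticizes -- every call site must test, and a missed test silently loses an error.
    #include <cstdio>

    enum Status { OK, NOT_FOUND, IO_ERROR };       // error codes mixed into the interface

    static Status openFile(const char* name) { return name ? OK : NOT_FOUND; }
    static Status readRecord(int* value)     { *value = 42; return OK; }

    static Status processFile(const char* name) {
        Status rc = openFile(name);                // test #1
        if (rc != OK) return rc;                   // propagate by hand
        int v;
        rc = readRecord(&v);                       // test #2
        if (rc != OK) return rc;                   // forgetting this check loses the error
        std::printf("record = %d\n", v);
        return OK;
    }

    int main() {
        if (processFile("data.txt") != OK)         // test #3, at yet another level
            std::printf("processing failed\n");
    }

The tests are scattered across every level of the call chain, which is precisely the readability and maintainability problem the text describes.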
2. EHM Objectives
The failure of return codes and status flags as an informal EHM suggests the need for a formal EHM supported by the programming language, which must:

1. alleviate multiple testing for the occurrence of rare conditions throughout the program, and at the location where the test must occur, be able to change control flow without requiring additional testing and transfers,
2. provide a mechanism to prevent an incomplete operation from continuing, and
3. be extensible to allow adding, changing, and removing exceptions.

The first objective targets readability and programmability by eliminating checking of return codes and flags, and removing the need to pass fix-up routines or have complex control logic within normal code to deal with exceptional cases. The second objective provides a transfer from the exception point that disallows returning, directing control flow away from an operation where local information is possibly corrupted; i.e., the operation is nonresumable. The last objective targets extensibility, easily allowing change in the EHM, and these changes should have minimal effects on existing programs using them. Two existing EHMs illustrate the three objectives:

Unix signal mechanism. On encountering a rare condition, a signal (interrupt) is generated, which preempts execution and calls a handler routine to deal with the condition, suspending prior execution; when the handler routine returns, prior execution continues. This change of control flow does not require the programmer's involvement or testing any error codes, as there is (usually) no explicit call to the signal handler in the user program. Using a special jump facility, longjmp, the handler routine can prevent an incomplete operation from continuing, and possibly terminate multiple active blocks between the signal handler and the transfer point (see Section 5.1 for details of this mechanism).
Extensibility is quite limited, as most signals are predefined and unavailable to programmers. If a library uses one of the few user-available signals, all clients must agree on the signal's definition, which may be impossible.

Ada exception mechanism. On encountering a rare condition, an exception is raised, in Ada terminology, and control flow transfers to a sequence of statements to handle the exception. This change of control flow does not require the programmer's involvement or testing any error codes. The operation encountering the rare condition cannot be continued, and possibly multiple active blocks between the raise point and the statements handling the exception are terminated. A new exception can be declared as long as there is no name conflict in the flat exception name-space; hence the mechanism is reasonably extensible.

A good EHM should strive to be orthogonal with other language features; i.e., the EHM features should be able to be used in any reasonable context without obscure restrictions. Nevertheless, specific implementation and optimization techniques for some language constructs can impose restrictions on other constructs, particularly the EHM.
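As a concrete illustration of the Unix signal mechanism described above, the following sketch is ours, not the chapter's. It installs a handler, delivers a signal synchronously with raise for the purpose of the demonstration, and shows prior execution continuing when the handler returns; a longjmp from the handler, as discussed in Section 5.1, would instead abandon the interrupted operation.

    // Minimal sketch (ours) of the Unix signal mechanism: a handler is installed,
    // a signal preempts execution, and when the handler returns the interrupted
    // computation continues.
    #include <csignal>
    #include <cstdio>

    extern "C" void onSignal(int sig) {
        // Only async-signal-safe work belongs here; printf is used purely for illustration.
        std::printf("handler called for signal %d\n", sig);
    }

    int main() {
        std::signal(SIGINT, onSignal);   // register the handler; no tests at the call sites
        std::printf("before raise\n");
        std::raise(SIGINT);              // deliver the signal synchronously for the demo
        std::printf("after raise: prior execution continues\n");
    }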
3. Execution Environment
The structure of the execution environment has a significant effect on an EHM; e.g., a concurrent, object-oriented environment requires a more complex EHM than a sequential non-object-oriented environment. The execution model described in [1] is adopted for this discussion; it identifies three elementary execution properties: 1. Execution is the state information needed to permit independent execution. It includes local and global data, current execution location, and routine activation records (i.e., the runtime stack) of a sequential computation. From the exception perspective, an execution is the minimal language unit in which an exception can be raised. In simple sequential programming languages, there is only one execution, which makes exception handling simple and straightforward. More complex languages allow multiple executions but each is executed sequentially (called coroutines). In this case, only one exception can occur at a time but there are multiple independent units in which an exception can occur. Interaction among these units with respect to exceptions now becomes an issue. 2. Thread is execution of code that occurs independently of and possibly concurrently with another execution; thread execution is sequential as it changes an execution's state. Multiple threads provide concurrent execution; multiple CPUs provide parallel execution of threads. A context switch is a change in the execution/thread binding.
A thread performs normal and exceptional execution as it changes an execution's state. A thread may execute on its own execution or that of a coroutine, so any concurrent system is complex from an exception perspective. That is, multiple exceptions can be raised simultaneously within each execution being changed by different threads.

3. Mutual exclusion is serializing execution of an operation on a shared resource. Mutual exclusion is a concern only when there are multiple threads, as these threads may attempt to simultaneously change the same data. In this case, the data can become corrupted if the threads do not coordinate read and write operations on the data. From the exception perspective, the occurrence of simultaneous exceptions may result in simultaneous access of shared data, either in the implementation of exceptions in the runtime system or at the user level with respect to data that is part of the exception. In either case, the data must be protected from simultaneous access via mutual exclusion.

The first two properties are fundamental; i.e., it is impossible to create them from simpler constructs in a programming language. Only mutual exclusion can be generated using basic control structures and variables (e.g., Dekker's algorithm), but software algorithms are complex and inefficient. Thus, these three properties must be supplied via the programming language. Table I shows the different constructs possible when an object possesses different execution properties; each of the eight entries in the table is discussed below.

TABLE I
ELEMENTARY EXECUTION PROPERTIES

    Object properties            Object's member routine properties
    Thread    Execution-state    No mutual exclusion    Mutual exclusion
    no        no                 1 object               2 monitor
    no        yes                3 coroutine            4 coroutine-monitor
    yes       no                 5 (rejected)           6 (rejected)
    yes       yes                7 (rejected)           8 task
Case 1 is an object (or a routine not a member of an object) using the caller's execution and thread to change its state. For example,

    class foo {
        void mem( ... ) { ... }
    };
    foo f;
    f.mem( ... );        // caller's execution and thread
The call f.mem(...) creates an activation record (stack frame) on the runtime stack containing the local environment for member routine mem, i.e., local variables and state. This activation record is pushed on the stack of the execution associated with the thread performing the call. Since this kind of object provides no mutual exclusion, it is normally accessed only by a single thread.

Case 2 is like Case 1 but deals with concurrent access by ensuring mutual exclusion for the duration of each computation by a member routine, called a monitor [1a]. For example,

    monitor foo {
        void mem( ... ) { ... }
    };
    foo f;
    f.mem( ... );        // caller's execution/thread and mutual exclusion
The call f.mem(...) works as in Case 1, with the additional effect that only one thread can be active in the monitor at a time. The implicit mutual exclusion is a fundamental aspect of a monitor and is part of the programming language.

Case 3 is an object that has its own execution-state but no thread. Such an object uses its caller's thread to advance its own execution and usually, but not always, returns the thread back to the caller. This abstraction is a coroutine [1b]. For example,

    coroutine foo {
        void mem( ... ) { ... }
    };
    foo f;
    f.mem( ... );        // f's execution and caller's thread
The call f.mem(...) creates an activation record (stack frame) on f's runtime stack and the calling thread performs the call. In this case, the thread "context switches" from its execution to the execution of the coroutine. When the call returns, the thread context switches from the coroutine's execution back to its execution.

Case 4 is like Case 3 but deals with the concurrent access problem by ensuring mutual exclusion, called a coroutine-monitor. For example,

    cormonitor foo {
        void mem( ... ) { ... }
    };
    foo f;
    f.mem( ... );        // f's execution/caller's thread and mutual exclusion
The call f.mem(...) works as in Case 3, with the additional effect that only one thread can be active in the coroutine-monitor at a time.

Cases 5 and 6 are objects with a thread but no execution-state. Both cases are rejected because the thread cannot be used to provide additional concurrency. That is, the object's thread cannot execute on its own since it does not have an execution, so it cannot perform any independent actions.

Case 7 is an object that has its own execution and thread. Because it has both properties it is capable of executing on its own; however, it lacks mutual exclusion, so access to the object's data via calls to its member routines is unsafe, and therefore, this case is rejected.

Case 8 is like Case 7 but deals with the concurrent access problem by implicitly ensuring mutual exclusion, called a task. For example,

    task foo {
        void mem( ... ) { ... }
    };
    foo f;
    f.mem( ... );        // choice of execution/thread and mutual exclusion
The call f.mem(...) works as in Case 4, except there are two threads associated with the call, the caller's and the task's. Therefore, one of the two threads must block during the call, called a rendezvous. The key point is that an execution supplies a stack for routine activation records, and exceptional control-flow traverses this stack to locate a handler, often terminating activation records as it goes. When there is only one stack, it is straightforward to define consistent and well-formed semantics. However, when there are multiple stacks created by instances of coroutines and/or tasks, the EHM semantics can and should become more sophisticated, resulting in more complexity. For example, assume a simple environment composed of nested routine calls. When an exception is raised, the current stack is traversed up to its base activation-record looking for a handler. If no handler is found, it is reasonable to terminate the program, as no more handlers exist. Now, assume a complex environment composed of coroutines and/or tasks. When an exception is raised, the current coroutine/task stack is traversed up to its base activation-record looking for a handler. If no handler is found, it is possible to continue propagating the exception from the top of the current stack to another coroutine or task stack. The choice for selecting the point of continuation depends on the particular EHM
strategy. Hence, the complexity and design of the execution environment significantly affects the complexity and design of its EHM.
4. EHM Overview
An event is an exception instance, and is raised by executing a language or system operation, which need not be available to programmers; e.g., only the runtime system may raise predefined exceptions, such as hardware exceptions. Raising an exception indicates an abnormal condition the programmer cannot or does not want to handle via conventional control flow. As mentioned, what conditions are considered abnormal is programmer or system determined. The execution raising the event is the source execution. The execution that changes control flow due to a raised event is the faulting execution; its control flow is routed to a handler. With multiple executions, it is possible to have an exception raised in a source execution different from the faulting execution. Propagating an exception directs the control flow of the faulting execution to a handler, and requires a propagation mechanism to locate the handler. The chosen handler is said to have caught (catch) the event when execution transfers there; a handler may deal with one or more exceptions. The faulting execution handles an event by executing a handler associated with the raised exception. It is possible that another exception is raised or the current exception is reraised while executing the handler. A handler is said to have handled an event only if the handler returns. Unlike returning from a routine, there may be multiple return mechanisms for a handler (see Section 5). For a synchronous exception, the source and faulting execution are the same; i.e., the exception is raised and handled by the same execution. It is usually difficult to distinguish raising and propagating in the synchronous case, as both happen together. For an asynchronous exception, the source and faulting execution are usually different; e.g., raise E in Ex raises exception E from the current source execution to the faulting execution Ex. Unlike a synchronous exception, raising an asynchronous exception does not lead to the immediate propagation of the event in the faulting execution. In the Unix example, an asynchronous signal can be blocked, delaying propagation in the faulting execution. Rather, an asynchronous exception is more like a nonblocking direct communication from the source to the faulting execution. The change in control flow in the faulting execution is the result of delivering the exception event, which initiates the propagation of the event in the faulting execution. While the propagation in the faulting execution can be carried out by the source, faulting, or even another execution (see Section 14.1), for the moment assume the source raises the event and the faulting execution propagates and handles it.
Goodenough's seminal paper on exception handling suggests a handler can be associated with programming units as small as a subexpression and as large as a routine [2, pp. 686-687]. Between these extremes is associating a handler with a language's notion of a block, i.e., the facility that combines multiple statements into a single unit, as in Ada [3], Modula-3 [4], and C++ [5]. While the granularity of a block is coarser than an expression, our experience is that fine-grained handling is rare. As well, having handlers, which may contain arbitrarily complex code, in the middle of an expression can be difficult to read. In this situation, it is usually just as easy to subdivide the expression into multiple blocks with necessary handlers. Finally, handlers in expressions or for routines may need a mechanism to return results to allow execution to continue, which requires additional constructs [2, p. 690]. Therefore, this discussion assumes handlers are only associated with language blocks. In addition, a single handler can handle several different kinds of exceptions and multiple handlers can be bound to one block. Syntactically, the set of handlers bound to a particular block is the handler clause, and a block with handlers becomes a guarded block, e.g.,

    try {                    // introduce new block
        ...                  // guarded block
        raise E1;            // synchronous exception
        ...
    } catch( E1 ) {          // may handle multiple exceptions
        ...                  // handler1
    } catch( E2 ) {          // multiple handlers
        ...                  // handler2
    }
The propagation mechanism also determines the order in which the handler clauses bound to a guarded block are searched. A block with no handler clause is an unguarded block. An exception may propagate from any block. In summary, an EHM = Exceptions + Raise + Propagation + Handlers, where exceptions define the kinds of events that can be generated, raise generates the exception event and finds the faulting execution, propagation finds the handler in the faulting execution, and handlers catch the raised event during propagation.
5. Handling Models
Yemini and Berry [6, p. 218] identify five exception handling models: nonlocal transfer, two termination models, retry, and resumption. An EHM can provide multiple models.
5.1 Nonlocal Transfer
Local transfer exists in all programming languages, implicitly or explicitly (see Fig. 1). Implicit local-transfer occurs in selection and looping constructs in the form of hidden goto statements, which transfer to lexically fixed locations. Explicit local-transfer occurs with a goto statement, which also transfers to lexically fixed locations. In both cases, the transfer points are known at compile time, and hence, are referred to as statically or lexically scoped transfer points. Dynamically scoped transfer points are also possible, called nonlocal transfer. In Fig. 2, the label variable L contains both a point of transfer and a pointer to an activation record on the stack containing the transfer point; therefore, a label value is not static. The nonlocal transfer in f using the goto directs control flow first to the specified activation record and then to the location in the code associated with the activation record. A consequence of the transfer is that blocks activated between the goto and the label value are terminated; terminating these
Implicit Transfer:

    if (...) {          // false => transfer to else
        ...
    } else {            // transfer after else
        ...
    }
    while (...) {       // false => transfer after while
        ...
    }                   // transfer to start of while

Explicit Transfer:

        if (! ...) goto L1;
        ...
        goto L2;
    L1: ...
    L2:
    L3: if (! ...) goto L4;
        ...
        goto L3;
    L4:

FIG. 1. Statically scoped transfer.
    label L;
    void f() { goto L; }
    void g() { f(); }
    void h() {
        L = L1; f();
    L1: L = L2; g();
    L2: ;
    }

FIG. 2. Dynamically scoped transfer.
blocks is called stack unwinding. In the example, the first nonlocal transfer from f transfers to the static label L1 in the activation record for h, terminating the activation record for f. The second nonlocal transfer from f transfers to the static label L2 in the activation record for h, terminating the activation records for f and g. PL/I [7] is one of a small number of languages (Beta [8], C [9]) supporting nonlocal transfer among dynamic blocks through the use of label variables. The C routines setjmp and longjmp are a simplified nonlocal transfer, where setjmp sets up a dynamic label variable and longjmp performs the nonlocal transfer. An EHM can be constructed using a nonlocal transfer mechanism by labeling code to form handlers and terminating operations with nonlocal transfers to labels in prior blocks. For example, in Fig. 3, the PL/I program (left example) assigns to the label variable E1 a label in the scope of procedure TEST and then calls procedure F. Procedure F executes a transfer (GOTO) to the label variable, transferring out of F, through any number of additional scope levels, to the label L1 in TEST. Inside the handler, the same action is performed but the transfer point is changed to L2. Similarly, the C program (right example) uses setjmp to store the current execution context in variable E1, which is within the scope of the call to setjmp; setjmp returns a zero value, and a call is made to routine f. Routine f executes a transfer (longjmp) to the execution-context variable, transferring out of f, through any number of additional scope levels, back within the saved scope of setjmp, which returns a nonzero value. Inside the handler, the same action is performed but the transfer point is changed to the second call of setjmp. The key point is that the transfer point for the GOTO
PL/I:

    TEST: PROC OPTIONS(MAIN);
        DCL E1 LABEL;
        F: PROC;
            GOTO E1;
        END;
        E1 = L1; CALL F; RETURN;
     L1: /* HANDLER 1 */
        E1 = L2; CALL F; RETURN;
     L2: /* HANDLER 2 */
    END;

C:

    jmp_buf E1;
    void f(void) { longjmp(E1, 1); }
    int main() {
        if (setjmp(E1) == 0) {
            f();
        } else {                    /* handler 1 */
            if (setjmp(E1) == 0) {
                f();
            } else {                /* handler 2 */
            }
        }
    }

FIG. 3. Nonlocal transfer.
or longjmp is unknown statically; it is determined by the dynamic value of the label or execution-context variable. Unfortunately, nonlocal transfer is too general, allowing branching to almost anywhere (the structured programming problem). This lack of discipline makes programs less maintainable and error-prone [10, p. 102]. More importantly, an EHM is essential for sound and efficient code generation by a compiler (as for concurrency [11]). If a compiler is unaware of exception handling (e.g., setjmp/longjmp in C), it may perform code optimizations that invalidate the program, needing bizarre concepts like the volatile declaration qualifier. Because of these problems, nonlocal transfer is unacceptable as an EHM. However, nonlocal transfer is essential in an EHM; otherwise it is impossible to achieve the first two EHM objectives in Section 2, i.e., alleviating explicit testing and preventing return to a nonresumable operation. A restricted form of nonlocal transfer appears next.
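To illustrate the optimization problem just mentioned, here is a minimal sketch of our own (not from the chapter) using setjmp/longjmp from C++: the retry counter must be declared volatile, because the C standard leaves non-volatile locals that are modified between setjmp and longjmp with indeterminate values after the transfer.

    #include <csetjmp>
    #include <cstdio>

    static std::jmp_buf handler;                          // dynamic transfer point (execution context)

    static void operation( int attempts ) {
        if ( attempts < 3 ) std::longjmp( handler, 1 );   // "raise": nonlocal transfer back to setjmp
        std::printf( "succeeded after %d attempts\n", attempts );
    }

    int main() {
        volatile int attempts = 0;                        // without volatile the value may be cached in a register
        if ( std::setjmp( handler ) != 0 ) {
            // "handler": control returns here with a nonzero value after each longjmp
        }
        attempts = attempts + 1;
        operation( attempts );
        return 0;
    }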
5.2 Termination
In the termination model, control flow transfers from the raise point to a handler, terminating intervening blocks on the runtime stack (like nonlocal transfer). When the handler completes, control flow continues as if the incomplete operation in the guarded block terminated without encountering the exception. Hence, the handler acts as an alternative operation for its guarded block. This model is the most popular, appearing in Ada, C++, ML [12], Modula-3, and Java [13]. The difference between nonlocal transfer and termination is that termination is restricted in the following ways:

• Termination cannot be used to create a loop, i.e., cause a backward branch in the program, which means only looping constructs can be used to create a loop. This restriction is important to the programmer since all the situations that result in repeated execution of statements in a program are clearly delineated by the looping constructs.

• Since termination always transfers out of containing blocks, it cannot be used to branch into a block. This restriction is important for languages allowing declarations within the bodies of blocks. Branching into the middle of a block may not create the necessary local variables or initialize them properly.

Yemini and Berry (also in CLU [21, p. 547]) divide termination into one level and multiple levels. That is, control transfers from the signaller to the immediate caller (one level) or from the signaller to any nested caller (multiple levels). However, this artificial distinction largely stems from a desire to support exception lists (see Section 6.5).
Ada:

    procedure main is
        E1 : exception;
        procedure f is
        begin
            raise E1;
        end f;
    begin
        f;
    exception
        when E1 =>          -- handler
    end main;

C++:

    void f() { throw 0; }
    int main() {
        try {
            f();
        } catch( int ) {    // handler
        }
    }

FIG. 4. Termination.
For example, in Fig. 4, the Ada program (left example) declares an exception E1 in the scope of procedure main and then calls procedure f. Procedure f executes a raise of the exception, transferring out of f, through any number of additional scope levels, to the handler at the end of main. The C++ program (right example) does not declare an exception label; instead, an object type is used as the label, and the type is inferred from an object specified at the raise point; in this case, throw 0 implies an exception label of int. Routine f executes a raise (throw) of the exception, transferring out of f, through any number of additional scope levels, to the handler at the end of the try statement in main. Note that termination achieves the first two EHM objectives in Section 2, without the drawbacks of nonlocal transfer. (Interestingly, the C++ approach seems to provide additional generality because any type can be an exception; i.e., there is no special exception type in the language. However, in practice, this generality is almost never used. First, using a type like int as an exception is dangerous because there is no exceptional meaning for this type. That is, one library routine can raise int to mean one thing and another routine can raise int to mean another; a handler catching int may have no idea about the meaning of the exception. To prevent this ambiguity, users create specific types describing the exception, e.g., overflow, underflow, etc. Second, these specific types are very rarely used both in normal computations and for raising exceptions, so the sole purpose of these types is for raising unambiguous exceptions. In essence, C++ programmers ignore the generality available in the language and follow a convention of creating explicit exception types. Therefore, having a specific exception type in a programming language is not a restriction, and it provides additional documentation, discrimination among conventional and exception types, and provides the compiler with exact knowledge about type usage rather than having to infer it from the program. Hence, a specific exception type is used in this discussion.)
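The convention described above is easy to state in standard C++; the following sketch is ours, with hypothetical type names overflow and underflow whose sole purpose is to name the abnormal condition unambiguously rather than overloading a general type such as int.

    #include <iostream>

    struct overflow {};                   // specific exception types with no other use
    struct underflow {};

    void push() { throw overflow(); }     // the raise is unambiguous across libraries

    int main() {
        try {
            push();
        } catch( overflow ) {             // discriminated from any other library's int
            std::cout << "stack overflow" << std::endl;
        } catch( underflow ) {
            std::cout << "stack underflow" << std::endl;
        }
        return 0;
    }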
Termination is often likened to a reverse routine call, particularly when argument/parameters are available (see Section 6.3). Raise acts like a call and the handler acts like a routine, but control flows down the stack rather than up. In detail, the differences between routine call and termination are:

• A routine call is statically bound, whereas a termination handler is dynamically bound. (Routine pointers and virtual routines, which are just routine pointers, are dynamically bound.) That is, the routine name of a call selects its associated code at compile time based on the lexical scope of the program, whereas the handler name of a raise selects its associated handler code at runtime based on the dynamic blocks on the stack (as in many versions of LISP).

• A routine returns to the dynamic location of its call, whereas a termination handler returns to its associated lexical block. That is, a routine returns one stack frame to the caller's stack frame, whereas a handler returns to the lexical context of the guarded block it is associated with. A side effect of returning to a guarded-block's lexical context may be stack unwinding if a raise is in another stack frame.

Understanding the notions of static versus dynamic name binding and static versus dynamic transfer points is key to understanding exceptions.
5.3 Retry
The retry model combines the termination model with special handler semantics, i.e., restart the failed operation, creating an implicit loop in the control flow. There must be a clear beginning for the operation to be restarted. The beginning of the guarded block is usually the restart point and there is hardly any other sensible choice. The left example of Fig. 5 shows a retry handler by extending the C++ exception mechanism; the example calculates a normalized sum for a set of numbers, ignoring negative values. The exception is raised using termination semantics, and the retry handler completes by jumping to the start of the try block. The handler is supposed to remove the abnormal condition so the operation can complete during retry. Mesa [14], Exceptional C [15], and Eiffel [16] provide retry semantics through a retry statement only available in the handler body. As mentioned, establishing the operation restart point is essential; reversing the try and the initialization of sum in the figure (i.e., setting sum to 0 before entering the guarded block) generates a subtle error with respect to the exception but not normal execution, i.e., the sum counter is not reset on retry. This error can be difficult to discover because control flow involving propagation may occur infrequently. In addition, when multiple handlers exist in the handler clause, these handlers must use the same restart point, which may make retrying more difficult to use in some cases.
Retry:

    float nsum( int n, float a[ ] ) {
        float sum;
        int i, cnt = n;
        try {
            sum = 0;
            for ( i = 0; i < n; i += 1 ) {
                if ( a[i] < 0 ) throw i;
                sum += a[i] / cnt;
            }
        } retry( int i ) {
            a[i] = 0;
            cnt -= 1;
        }
        return sum;
    }

Simulation:

    float nsum( int n, float a[ ] ) {
        float sum;
        int i, cnt = n;
        while ( true ) {                     // infinite loop
            try {
                sum = 0;
                for ( i = 0; i < n; i += 1 ) {
                    if ( a[i] < 0 ) throw i;
                    sum += a[i] / cnt;
                }
                break;                       // terminate loop
            } catch( int i ) {
                a[i] = 0;
                cnt -= 1;
            }
        }
        return sum;
    }

FIG. 5. Retry.
Finally, Gehani [15, p. 834] shows the retrying model can be simulated with a loop and the termination model (see right example of Fig. 5). In general, the transformation is straightforward by nesting the guarded block in a loop with a loop-exit at the end of the guarded block. Often it is possible to rewrite the code to eliminate the retry. In the example, it is possible to construct an O(n) solution versus an O(n²) one, such as performing one pass over the data to eliminate the negative numbers and then performing another pass to generate the normalized sum. We believe simulation or rewriting is superior so all looping is the result of language looping constructs, not hidden in the EHM. Because of the above problems, and because retry can be simulated easily with termination and looping, retry seldom appears in an EHM.
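For completeness, the rewriting alternative mentioned above might look as follows; this is our own sketch, not part of Fig. 5, and it assumes the same nsum interface: one pass removes the negative values and a second pass computes the normalized sum, so no retry or hidden looping remains.

    float nsum( int n, float a[ ] ) {
        int cnt = n;
        for ( int i = 0; i < n; i += 1 ) {            // pass 1: eliminate negative values
            if ( a[i] < 0 ) { a[i] = 0; cnt -= 1; }
        }
        float sum = 0;
        for ( int i = 0; i < n; i += 1 ) {            // pass 2: generate the normalized sum
            sum += a[i] / cnt;
        }
        return sum;
    }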
5.4 Resumption
The previous three models are all based on nonlocal transfer to ensure control does not return to an incomplete operation. In the resuming model, control flow transfers from the raise point to a handler to correct an incomplete operation, and then back to the raise point to continue execution. Resumption is often likened to a normal routine call, particularly when argument/parameters are available, as both return to the dynamic location of the call [15,17]. However, a routine call is statically bound, whereas a resumption handler is dynamically bound (like a termination handler). Resumption is used in cases where fix-up and continuation are possible. For example, in large scientific applications, which run for hours or days, it is
unacceptable to terminate the program for many "error" situations, such as computational problems like zero divide, overflow/underflow, etc., and logical situations like a singular matrix. Instead, these problems call a fix-up routine, which logs the problem and performs a fix-up action allowing execution to continue. While a fix-up may not result in an exact result, an approximation may suffice and many hours of computer time are salvaged. For example, in Fig. 6, the PL/I-like program declares the builtin on-condition ZERODIVIDE in the scope of procedure TEST. (This example applies equally well to user-defined on-conditions.) An on-condition is like a routine name that is dynamically scoped, i.e., a call to ZERODIVIDE (by the hardware) selects the closest instance on the call stack rather than selecting an instance at compile time based on lexical scope. In detail, each on-condition has a stack, on which handler bodies are pushed, and a call to an on-condition executes the top handler (unless the condition is disabled). The effect of entering a resumption (try) block is achieved implicitly when flow of control enters a procedure or block within the lexical scope of condition variables by copying the top element of each stack and the copies are pushed onto their stacks. When an ON CONDITION statement is executed, it replaces the top element of the stack with a new handler routine. Replacement is used instead of associating resumption handlers with a block. When flow of control leaves a procedure or block that copied the top element, the stacks are popped so on-units set up inside these blocks disappear. Stepping through the code in Fig. 6:
    TEST: PROC OPTIONS(MAIN);
        DCL ZERODIVIDE CONDITION;               /* default handler D */
        F: PROC;
            A = B / 0;                          /* call H1 */
            ON CONDITION(ZERODIVIDE) RETURN 2;  /* H2 */
            A = B / 0;                          /* call H2 */
        END;
        ON CONDITION(ZERODIVIDE) RETURN 1;      /* H1 */
        A = B / 0;                              /* call H1 */
        CALL F();
    END;

FIG. 6. Resumption (the stack of ZERODIVIDE handlers after each numbered step: 1. declare, 2. on stmt, 3. call F, 4. on stmt, 5. return F).
1. The declaration of the builtin on-condition ZERODIVIDE in the body of TEST creates a stack with a system default handler. (This declaration already exists in the program preamble.)

2. The ON CONDITION statement for ZERODIVIDE replaces the default handler at the top of ZERODIVIDE's stack with a handler routine. (The mechanism for returning a result from the handler routine is simplified for this example.) Within TEST there is an implicit call to condition ZERODIVIDE via the hardware when the expression B / 0 is executed, which invokes the top handler, the handler executes, and the value 1 is returned for this erroneous expression. Therefore, A is assigned the value 1.

3. The call to routine F copies the top handler onto the top of ZERODIVIDE's stack. At the start of F, there is an implicit call to condition ZERODIVIDE via the hardware when the expression B / 0 is executed, which performs the same handler actions as before.

4. The ON CONDITION statement for ZERODIVIDE in F then replaces the handler at the top of ZERODIVIDE's stack, and this handler is signaled as before.

5. When F returns, the top (local) handler is popped from ZERODIVIDE's stack.

This facility allows a fixed name, ZERODIVIDE, to have different handlers associated with it based on the dynamic versus the static structure of the program. The often-cited alternatives to the resumption model are having pointers to fix-up routines or passing fix-up routines as arguments, either of which is called at the raise point. Pointers to fix-up routines are like status flags, and therefore, have similar problems. In particular, because the pointer values are changed in many places to perform different fix-up actions, users must follow a strict convention of saving and restoring the previous routine value to ensure no local computation changes the environment for a previous one. Relying on users to follow conventions is always extremely error prone. Passing fix-up routines as arguments eliminates most of the routine-pointer problems but prevents reuse (see Section 6.5), can substantially increase the number of parameters, and is impossible for legacy code where fix-up parameters do not currently exist. Liskov and Snyder [18, p. 549] and Mitchell and Stroustrup [19, p. 392] argue against the resumption model but the reasons seem anecdotal. Goodenough's resumption model is complex and Mesa's resumption is based on this model [6, pp. 235-240]. However, a resumption model can be as simple as a dynamic routine call, which is easy to implement in languages with nested routines. For languages without nested routines, like C/C++, it is still possible to construct a simple resumption model [15,20,21]. Given a simple resumption model, the only remaining problem is recursive resumption, which is discussed in Section 8.1.3.
Hence, while resumption has been rejected by many language designers, we argue that it is a viable and useful mechanism in an EHM.
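As an illustration of how simple such a resumption model can be, the following C++ sketch (ours, not the chapter's; Condition, zerodivide, and divide are hypothetical names) keeps a stack of handler routines per condition, and a resuming raise is just a dynamic call to the top handler, whose result is used as the fix-up value.

    #include <functional>
    #include <iostream>
    #include <vector>

    struct Condition {                                   // one stack of handler routines per condition
        std::vector< std::function<double( double )> > handlers;
        void push( std::function<double( double )> h ) { handlers.push_back( h ); }
        void pop() { handlers.pop_back(); }
        double raise( double arg ) {                     // resuming raise: call top handler, continue after return
            return handlers.back()( arg );
        }
    };

    Condition zerodivide;                                // plays the role of the ZERODIVIDE condition

    double divide( double b, double d ) {
        if ( d == 0 ) return zerodivide.raise( b );      // fix-up value allows the computation to continue
        return b / d;
    }

    int main() {
        zerodivide.push( []( double ) { return 1.0; } ); // handler returns 1 for the erroneous expression
        std::cout << divide( 5, 0 ) << std::endl;        // prints the fix-up value 1
        zerodivide.pop();
        return 0;
    }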
6. EHM Features
This section examines additional features that make an EHM easy to use (see also [2,17,20,22]), but more importantly, some of these features have an impact on the design of the EHM.
6.1 Catch-Any and Reraise

It is important to have a simple mechanism for catching any exception entering a guarded block, and to subsequently reraise the unknown exception, e.g.,

    try {
        ...
    } catch( ... ) {        // catch any exception
        ... raise;          // reraise unknown exception
    }
In this case, the semantics of raise are different from normal. A reraise does not start a new raise; it is a continuation of the current raise even though the current raise has been caught. As a result, the propagation mechanism continues searching from its current context, as if the exception was not caught. For termination, this capability allows cleanup of a guarded block when it does not handle an exception. For resumption, this capability allows a guarded block to gather or generate information about control flow passing through the guarded block but only in one direction, i.e., during the raise versus the resumption. Execution of reraise is often lexically confined to the body of a handler, i.e., a reraise cannot be executed from a routine called from the handler. The reason for this syntactic restriction is that a reraise in a routine called from the handler must unwind the stack to the handler to correctly restart the propagation. For termination, where stack unwinding is occurring during the raise, this semantics is consistent. For resumption, where stack unwinding is not occurring, this local unwinding is necessary but conceptually inconsistent with the global semantics. The syntactic restriction maintains global semantics by eliminating the potential for local inconsistency. If local inconsistency is not an issue, the syntactic restriction can be removed. Java block finalization, executed on both normal and exceptional termination, and C++-style object destructors also provide cleanup mechanisms. However,
Fig. 7 shows it is awkward to simulate block finalization; the main problem is preventing duplication of the cleanup code, which forces the use of a routine or class for the cleanup. Unfortunately, this approach makes accessing local variables in the block containing the try statement difficult from the cleanup routine or class. For systems with nested routines and classes, the references can be direct; otherwise, variables must be explicitly passed to the cleanup routine/class.
Finalization:

    int main() {
        int f = open(...);
        try {
            ...
        } finally {
            close(f);
        }
    }

Routine Simulation:

    void cleanup(int &f) {
        close(f);
    }
    int main() {
        int f = open(...);
        try {
            ...
            cleanup(f);
        } catch(...) {
            cleanup(f);
            ...
        }
    }

Destructor Simulation:

    class cleanup {
        int &f;
      public:
        cleanup(int &f) : f(f) {}
        ~cleanup() { close(f); }
    };
    int main() {
        int f = open(...);
        {
            cleanup v(f);
            ...
        }
    }

FIG. 7. Finalization.

6.2 Derived Exceptions

An exception hierarchy is useful to organize exceptions, similar to a class hierarchy in object-oriented languages. An exception can be derived from another exception, just like deriving a subclass from a class. A programmer can then choose to handle an exception at different degrees of specificity along the hierarchy; derived exceptions support a more flexible programming style, and hence, should be supported in an EHM. An important design question is whether to allow derivation from multiple exceptions, called multiple derivation, which is similar to multiple inheritance of classes. While Cargill [23] and others argue against multiple inheritance as a general programming facility, the focus here is on derived exceptions. Consider the following example of multiply deriving an exception [22, p. 19]:

    exception network_err, file_err;
    exception network_file_err : network_err, file_err;

which derives network_file_err from network_err and file_err. While this looks reasonable, there are subtle problems:
    try {
        ... raise network_file_err ...
    } catch( network_err ) ...       // close network connection
      catch( file_err ) ...          // close file
If network_file_err is raised, neither of the handlers may be appropriate for handling the raised event, but more importantly, which handler in the handler clause should be chosen because of the inheritance relationship? Executing both handlers may look legitimate, but indeed it is not. If a handler clause has a handler only for file_err, does it mean that it cannot handle network_file_err completely and should raise network_err afterward? The example shows that handling an exception having multiple parents may be inappropriate. If an exception cannot be caught by one of its parents, the derivation becomes moot. Therefore, multiple derivation is a questionable feature for derived exceptions as it introduces significant complications into the semantics with little benefit.
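The ambiguity is visible in real C++, where exception types may use multiple inheritance; in the following sketch (ours, with assumed type names), both parent handlers are applicable, but only the lexically first one is chosen, so the file cleanup never runs.

    #include <iostream>

    struct network_err {};
    struct file_err {};
    struct network_file_err : network_err, file_err {};

    int main() {
        try {
            throw network_file_err();
        } catch( network_err & ) {          // chosen: close network connection only
            std::cout << "close network connection" << std::endl;
        } catch( file_err & ) {             // never reached for a network_file_err event
            std::cout << "close file" << std::endl;
        }
        return 0;
    }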
6.3 Exception Parameters
The ability to pass data from the source to the faulting execution is essential for the handler to analyze why an exception is raised and how to deal with it. Exception parameters enable the source to transfer important information into and out of a handler, just like routine parameters and results. An exception parameter can be read-only, write-only, or read-write. While information could be passed through shared objects, exception parameters eliminate side effects and locking in a concurrent environment. Ada has no parameters, C (via setjmp/longjmp) has a single integer parameter, Modula-3 and C++ have a single general parameter, and ML and Mesa have multiple parameters. Parameter specification for an exception depends on the form of the exception declaration. In Mesa and Modula-3, a technique similar to routine parameters is used, as in

    exception E( int );     // exception declaration with parameter
    raise E( 7 ) ...        // integer argument supplied at raise
    catch( E( p ) ) ...     // integer parameter p received in handler
In C++, an object type is the exception and an object instance is created from it as the parameter, as in

    struct E {
        int i;
        E( int p ) { i = p; }
    };
    throw E( 7 );           // object argument supplied at raise
    catch( E p ) ...        // object parameter p received in handler
In all cases, it is possible to have parameters that are routines (or member routines), and these routines can perform special operations. For example, by convention or with special syntax, an argument or member routine can be used as a default handler, which is called if the faulting execution does not find a handler during propagation, as in

    void f(...) {...}
    exception E( ... ) default( f );              // default routine f

    struct E { ... void default() {...}; };       // named default member
Other specialized operations are conceivable. The arguments of an asynchronous exception may be accessible to the source after the event is raised, and to the faulting execution after the event is caught. For example, in

    Ex1 (source):

        raise E( p ) in Ex2
        ...
        p = 4;

    Ex2 (faulting):

        catch( E( p ) ) {
            p = 3;
        }
if the exception argument p is a pointer, executions Ex1 and Ex2 race to change what it is referencing. Therefore, access to these arguments must be properly synchronized in a concurrent environment if pointers are involved. The synchronization can be provided by the EHM or by the programmer. The former makes programming easier but can lead to unnecessary synchronization as it requires blocking the source or the faulting execution when the argument is accessed, which may be inappropriate in certain cases. The latter is more flexible as it can accommodate specific synchronization needs. With the use of condition variables, monitors, futures, and other facilities for synchronization, the synchronization required for accessing an exception argument can be easily implemented by a programmer. Hence, leaving synchronization to the programmer simplifies the EHM interface and hardly loses any capabilities. Finally, with derived exceptions, parameters to and results from a handler must be dealt with carefully, depending on the particular language. For example, in Fig. 8 exception D is derived from B with additional data fields for passing information into and out of a handler. When a D event is raised and caught by a handler for B, it is being treated as a B exception within the handler and the
    exception B {                    // base
        int i;
        void m() {...}
    };
    exception D : B {                // derived
        int j;
        void m() {...}
    };
    void f() {
        raise D();                   // derived
    }
    void g() {
        try {
            f();
        } catch( B b ) {             // cannot access D::j without down-cast
            b.m();                   // calls D::m or B::m ?
        }
    }

FIG. 8. Subtyping.
additional data field, j, cannot be accessed without a dynamic down-cast. Consequently, if the handler returns to the raise point, some data fields in D may be uninitialized. A similar problem occurs if static dispatch is used instead of dynamic (both Modula-3 and C++ support both forms of dispatch). The handler treating exception D as a B may call members in B with static dispatch rather than members in D. For termination, these problems do not exist because the handler parameters are the same or up-casts of arguments. For resumption, any result values returned from the handler to the raise point are the same or down-casts of arguments. However, the problem of down-casting is a subtyping issue, independent of the EHM, which programmers must be aware of when combining derived exceptions and exception parameters with resumption.
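In standard C++ the down-cast mentioned above is spelled dynamic_cast; the following sketch (ours, not Fig. 8) shows a handler for the base reaching the derived exception's extra field only through an explicit cast.

    #include <iostream>

    struct B { virtual ~B() {} int i = 0; };     // base exception type
    struct D : B { int j = 1; };                 // derived, with additional data field j

    void f() { throw D(); }

    int main() {
        try {
            f();
        } catch( B &b ) {                        // treated as a B within the handler
            if ( D *d = dynamic_cast<D*>( &b ) ) // dynamic down-cast needed to access D::j
                std::cout << d->j << std::endl;
        }
        return 0;
    }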
6.4 Bound Exceptions and Conditional Handling
In Ada, an exception declared in a generic package creates a new instance for each package instantiation, e.g.,

    generic
    package Stack is
        overflow : exception;
        ...
    end Stack;
    package S1 is new Stack;        -- new overflow
    package S2 is new Stack;        -- new overflow
    begin
        ... S1.push(...); ...
        ... S2.push(...); ...
    exception
        when S1.overflow => ...     -- catch overflow for S1
        when S2.overflow => ...     -- catch overflow for S2
Hence, it is possible to distinguish which stack raised the overflow without passing data from the raise to the exception. In object-oriented languages, the class is used as a unit of modularity for controlling scope and visibility. Similarly, it makes sense to associate exceptions with the class that raises them, as in

    class file {
        exception file_err;
        ...
    };
However, is the exception associated with the class or objects instantiated from it? As above, the answer affects the capabilities for catching the exception, as in

    file f;
    try {
        ... f.read(...); ...        // may raise file_err
    } catch( file.file_err ) ...    // option 1
      catch( f.file_err ) ...       // option 2
In option 1, only one file_err exception exists for all objects created by type file. Hence, this catch clause deals with file_err events regardless of which file object raises it. In option 2, each file object has its own file_err exception. Hence, this catch clause only deals with file_err events raised by object f; i.e., the handler is for an event bound to a particular object, called a bound exception. This specificity prevents the handler from catching the same exception bound to a different object. Both facilities are useful but the difference between them is substantial and leads to an important robustness issue. Finally, an exception among classes is simply handled by declaring the exception outside of the classes and referencing it within the classes. Bound exceptions cannot be trivially mimicked by other mechanisms. Deriving a new exception for each file object (e.g., f_file_err from file_err) results in an explosion in the total number of exceptions, and cannot handle dynamically allocated objects, which have no static name. Passing the associated object as an argument to the handler and checking whether the argument is the bound object, as in

    catch( file.file_err( file *fp ) ) {    // fp is passed from the raise
        if ( fp == &f ) ...                 // deal only with f
        else raise;                         // reraise event
    }
requires programmers to follow the coding convention of reraising the event if the bound object is inappropriate [20]. Such a coding convention is unreliable,
significantly reducing robustness. In addition, mimicking becomes infeasible for derived exceptions using the termination model, as in

    exception B( obj );             // base exception
    exception D( obj ) : B;         // derived exception
    obj o1, o2;
    try {
        ... raise D(...);           // bound form o1.D
    } catch( D( obj *o ) ) {
        if ( o == &o1 ) ...         // deal only with o1
        else raise;                 // reraise event
    } catch( B( obj *o ) ) {
        if ( o == &o2 ) ...         // deal only with o2
        else raise;                 // reraise event
    }
When exception D is raised, the problem occurs when the first handler catches the derived exception and reraises it if the object is inappropriate. The reraise for the termination model immediately terminates the current guarded block, which precludes the handler for the base exception in that guarded block from being considered. Therefore, the "catch first, then reraise" approach is an incomplete substitute for bound exceptions. Finally, it is possible to generalize the concept of the bound exception with conditional handling [24], as in

    catch( E( obj &o ) ) when ( o.f == 5 ) ...
where the when clause specifies a general conditional expression that must also be true before the handler is chosen. Conditional handling can mimic bound events simply by checking whether the object parameter is equal to the desired object. Also, the object in the conditional does not have to be the object containing the exception declaration as for bound exceptions. The problem with conditional handling is the necessity of passing the object as an argument or embedding it in the exception before it is raised. Furthermore, there is now only a coincidental connection between the exception and conditional object versus the statically nested exception in the bound object. While we have experience on the usefulness of bound exceptions [20], we have none on conditional handling.
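The "catch first, then reraise" convention criticized above can be written directly in standard C++; this sketch is ours (the file type and its nested file_err are assumptions) and shows both the object test and the rethrow that programmers must remember to supply.

    #include <iostream>

    struct file {
        struct file_err { file *who; };          // event carries the raising object
        void read() { throw file_err{ this }; }
    };

    int main() {
        file f, g;
        try {
            try {
                g.read();                        // raises g's event, not f's
            } catch( file::file_err &e ) {
                if ( e.who != &f ) throw;        // not bound to f: must remember to reraise
                std::cout << "handle f's error" << std::endl;
            }
        } catch( file::file_err & ) {            // an outer handler eventually deals with g's event
            std::cout << "handle g's error" << std::endl;
        }
        return 0;
    }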
6.5 Exception List
An exception list is part of a routine's signature and specifies which exceptions may propagate to its caller, e.g., in Goodenough, CLU, Modula-3, and Java (optional in C++). In essence, the exception list is precisely specifying the behavior of a routine. The exception list allows an exception to propagate through
many levels of routine call only as long as it is explicitly stated that propagation for that exception is allowed. This capability allows static detection of situations where a raised exception is not handled locally or by its caller, or runtime detection where the exception may be converted into a special failure exception or the program terminated. While specification of routine behavior is certainly essential, this feature is too restrictive [19, p. 394], having a significant feature interaction between the EHM and a language's type system. For example, consider the simplified C++ template routine sort

    template<class T> void sort( T items[ ] ) {
        ...                     // uses bool operator<( const T &a, const T &b )
    };
using the operator routine < in its definition. In general, it is impossible to know which exceptions may be propagated from the routine <, and subsequently those from sort, because sort takes many different < routines to support code reuse. Therefore, it is impossible to give an exception list on the template routine. An alternative is to add the specification at instantiation of the template routine, as in

    sort( v ) raises X, Y, Z;       // instantiate with exception list
This case works because a new sort routine is essentially generated for each instantiation and so there are actually multiple versions each with a different signature. However, if sort is precompiled so it cannot be expanded at each call, or if it is passed as a further argument, there is only one signature to match all calls. For example, in

    template<class T> void sort( T items[ ] );                      // precompiled
    template<class T> void f( T items[ ], void (*s)( T [ ] ) ) {    // cannot instantiate with exception list
        ...
    }
    main() {
        T a[ 10 ];
        sort( a );                                                  // cannot instantiate with exception list
        f( a, sort );
    }
if the < operation for type T raises an exception, the interface declaration for sort cannot be extended for the call in main because sort is already compiled, nor can the parameter declaration of s for routine f be extended for the call to s within it. As well, for arguments of routine pointers (functional style) and/or polymorphic methods or routines (object-oriented style), exception lists preclude reuse, e.g.,
Left example (routine pointer):

    int f( int (*g)(...) ) {
        ... (*g)(...) ...
    }
    int g(...) raises( E ) { raise E; }
    int h(...) {
        try {
            ... f( g ); ...
        } catch( E ) ...
    }

Right example (object-oriented dispatch):

    class B {
        virtual int g() {}
        int f() { ... g(); ... }
    };
    class D : public B {
        int g() raises( E ) { raise E; }
        int h() {
            try {
                ... f(); ...
            } catch( E ) ...
        }
    };
(Assume exception lists are required in this example.) The left example illustrates arguments of routine pointers, where routine h calls f passing argument g, and f calls g with the potential to raise exception E. Routine h is clearly capable of handling the exception because it has an appropriate try block and is aware the version of g it passes to f may raise the exception. However, this reasonable case is precluded because the signature of the argument routine g is less restrictive than the parameter variable g of f. This reasonable case only works if the exception is passed unchanged through the intermediate routine f. Similarly, the right example illustrates object-oriented dynamic dispatch, where the derived class replaces member g, which is called from member B::f. Member routine D::h calls B::f, which calls D::g with the potential to raise exception E. Member D::h is clearly capable of handling the exception because it has an appropriate try block and it created the version of g raising the exception. However, this reasonable case is precluded because the signature of D::g is less restrictive than B::g. This reasonable case only works if the exception is passed unchanged through the intermediate routine B::f. If f in the left example or B in the right example are precompiled in a library, there is no option to expand the signatures to allow this reuse scenario. Nor is it reasonable to expand the signature for every routine. In fact, doing so makes the program less robust because the signature now covers too broad a range of exceptions. Converting the specific raised exception to the failure exception at the boundary, where the specific exception does not appear in the exception list, precludes any chance of handling the specific event at a lower level and only complicates any recovery. The problem is exacerbated when a raised event has an argument because the argument is lost in the conversion. Finally, determining an exception list for a routine becomes difficult or impossible with the introduction of asynchronous exceptions because an asynchronous exception may be propagated at any time.
7. Handler Context
The static context of a handler is examined with respect to its guarded block and lexical context.
7.1 Guarded Block
The static context of handlers is different in Ada and C++. An Ada handler is nested inside its guarded block, and hence, can access variables declared in it, while a C++ handler executes in a scope outside its guarded block, making variables in the guarded block inaccessible, e.g.,

Ada:

    VAR X : INTEGER;        -- outer
    BEGIN
        VAR X : INTEGER;    -- inner
        ...
    EXCEPTION
        WHEN Others =>
            X := 0;         -- inner X
    END;

C++:

    int x;                  // outer
    try {
        int x;              // inner
        ...
    } catch( ... ) {
        x = 0;              // outer x
    }
By moving the handler and possibly adding another nested block, the same semantics can be accomplished in either language, as long as a handler can be associated with any nested block. According to [22, p. 31], the approach in C++ can lead to better code generation. Because one approach can mimic the other, local declarations in a guarded block are assumed to be invisible in the handler.
7.2 Lexical Context
Resuming a handler is like calling a nested routine, which requires the lexical context for the handler to access local variables in its static scope. In general, languages with nested routines (or classes) use lexical links among activation records, which are traversed dynamically for global references. Compilers often attempt to optimize out lexical links for performance reasons, which can complicate resumption. For termination, when the stack is unwound immediately, the issue of lexical context does not arise. However, if unwinding occurs during or after the handler is executed, it may be necessary to ignore extraneous stack frames to obtain correct references, which complicates propagation. The lexical context for resuming handlers has been cited as a source of confusion and complexity [15, p. 833; 19, pp. 391-392]. Confusion results from unexpected values being accessed due to differences between static and dynamic contexts, and
complexity from the need for lexical links. However, both these issues are related to nesting and not specific to an EHM. Figure 9 shows nonexception and exception examples that generate identical dynamic situations. The call to nested1 in the left example and the resuming handler in the right example both have a lexical context of f(true...), so both routines reference x with a value of true even though there is an instance of f(false...) (i.e., x is false) directly above them on the stack. As mentioned, this confusion and lexical links are an artifact of nesting not the resumption model. We believe the number of cases where this problem occurs is few. In languages without nested routines, e.g., C/C++, these issues do not exist, but the resuming handlers must then be specified separately from the guarded block, affecting readability [15,20,21].
Left (nonexception) example:

    void f( bool x, void (*r)() ) {
        void nested1() {...}
        void nested2() { f( !x, nested1 ); }
        if ( x ) nested2(); else r();
    }

Right (exception) example:

    void f( bool x ) {
        void nested() {
            try { f( !x );
            } catch( E ) ...
        }
        if ( x ) nested(); else resume E;
    }

FIG. 9. Lexical contexts.

8. Propagation Models

Most EHMs adopt dynamic propagation, which searches the dynamic call-stack to find a handler. The other propagation mechanism is static propagation, which searches the static (lexical) call-stack to find a handler. Static propagation was proposed by Knudsen [25,26], and his work has been largely ignored in the
EHM literature. As a result, dynamic propagation is often known as propagation. Both propagation models are discussed and analyzed.
8.1 Dynamic Propagation
Dynamic propagation allows the handler clause bound to the top block on the dynamic call-stack to handle the event, provided it has an appropriate handler. A consequence is that the event is handled by a handler closest to the block where propagation of the event starts, called closeness. Usually, operations higher on the stack are more specific while those lower on the call-stack are more general. Handling an exception at the highest level deals with the exception in a context that is more specific, without affecting the abstract operation at a lower level. Handling an exception is often easier in a specific context than in a general context. Dynamic propagation also minimizes the amount of stack searching and unwinding when raising an exception. However, there are criticisms against dynamic propagation: visibility, dynamic handler selection, and recursive resuming. These criticisms are discussed before looking at static propagation, a proposal intended to solve the problems of dynamic propagation.
8.1.1 Visibility
Dynamic propagation can propagate an exception into a block in a different lexical scope, as in the examples in Section 6.5. In this case, the exception is propagated through a scope where it is invisible and then back into a scope where it is visible. It has been suggested this semantics is undesirable because a routine is indirectly propagating an exception it does not know [27]. Some language designers believe an exception should never be propagated into a scope where it is invisible, or if allowed, the exception should lose its identity and be converted into a general failure exception. However, we have demonstrated the reuse restrictions resulting from complete prevention and loss of specific information for conversion when these semantics are adopted (see end of Section 6.5).
8.1.2 Dynamic Handler Selection
With dynamic propagation, the handler chosen for an exception cannot usually be determined statically, due to conditional code or calls to precompiled routines raising an event. Hence, a programmer seldom knows statically which handler may be selected, making the program more difficult to trace and the EHM harder to use [6,10,25,27]. However, when raising an exception it is rare to know what specific action is taken; otherwise, it is unnecessary to define the handler in a separate place,
i.e., bound to a guarded block lower on the call-stack. Therefore, the uncertainty of a handling action when an event is raised is not introduced by a specific EHM but by the nature of the problem and its solution. For example, a library normally declares exceptions and raises them without providing any handlers; the library client provides the specific handlers for the exception in their applications. Similarly, the return code technique does not allow the library writer to know the action taken by a client. When an EHM facility is used correctly, the control flow of propagation and the side effects of handlers should be understandable.
8.1.3 Recursive Resuming
Because resumption does not unwind the stack, handlers defined in previous scopes continue to be present during resuming propagation. In termination, the handlers in previous scopes disappear as the stack is unwound. The presence of resuming handlers in previous scopes can cause a situation called recursive resuming. The simplest situation where recursive resuming can occur is when a handler for a resuming exception resumes the same event, as in

    try {                           // T(H(R)) => try block with handler for R
        ... resume R; ...
    } catch( R ) resume R;          // H(R) => handler handles R
The try block resumes R. Handler H is called by the resume, and the blocks on the call-stack are

    ... -> T(H(R)) -> H(R)    (<- stack top)
Then H resumes exception R again, which finds the handler just above it at T(H(R)) and calls handler H(R) again, and this continues until the runtime stack overflows. Recursive resuming is similar to infinite recursion, and can be difficult to discover both at compile time and at runtime because of the dynamic choice of a handler. Asynchronous resuming compounds the difficulty because it can cause recursive resuming where it is impossible for synchronous resuming because the asynchronous event can be delivered at any time. MacLaren briefly discusses the recursive resuming problem in the context of PL/I [10, p. 101], and the problem exists in Exceptional C and μSystem. Mesa made an attempt to solve this problem but its solution is often criticized as incomprehensible. Two solutions are discussed in Section 13.
8.2 Static Propagation
Knudsen proposed a static propagation mechanism [25,26], with the intention of resolving the dynamic propagation problems, using a handler based on Tennent's sequel construct [28, p. 108]. A sequel is a routine, including parameters; however,
when a sequel terminates, execution continues at the end of the block in which the sequel is declared rather than after the sequel call. Thus, handling an exception with a sequel adheres to the termination model. However, propagation is along the lexical hierarchy, i.e., static propagation, because of static name-binding. Hence, for each sequel call, the handling action is known at compile time. As mentioned, a termination handler is essentially a sequel as it continues execution after the end of the guarded block; the difference is the dynamic name-binding for termination handlers. Finally, Knudsen augments the sequel with virtual and default sequels to deal with controlled cleanup, but points out that mechanisms suggested in Section 6.1 can also be used [26, p. 48]. Static propagation is feasible for monolithic programs (left example in Fig. 10). However, it fails for modular (library) code as the static context of the module and user code are disjoint; e.g., if stack is separately compiled, the sequel call in push no longer knows the static blocks containing calls to push. To overcome this problem, a sequel can be made a parameter of stack (right example in Fig. 10). In static propagation, every exception called during a routine's execution is known statically, i.e., the static context and/or sequel parameters form the equivalent of an exception list (see Section 6.5). However, when sequels become part of a class's or routine's type signature, reuse is inhibited, as for exception lists. Furthermore, declarations and calls now have potentially many additional arguments, even if parameter defaults are used, which results in additional execution cost on every call. Interestingly, the dynamic handler selection issue is resolved only for monolithic programs; when sequels are passed as arguments, the selection becomes dynamic; i.e., the call does not know statically which handler is chosen, but it does eliminate the propagation search. Finally, there is no recursive resuming because there is no special resumption capability; resumption is achieved by explicitly passing fix-up routines and using normal routine call,
Monolithic:

    {                                       // new block
        sequel StackOverflow(...) { ... }
        class stack {
            void push( int i ) { ... StackOverflow(...); ... }
            ...
        };
        stack s;
        ... s.push( 3 );                    // overflow ?
    }                                       // sequel transfers here

Separate Compilation:

    class stack {                           // separately compiled
        stack( sequel overflow(...) ) { ... }
        void push( int i ) { ... overflow(...); ... }
        ...
    };
    {                                       // separately compiled
        sequel StackOverflow(...) { ... }
        stack s( StackOverflow );
        ... s.push( 3 );                    // overflow ?
    }                                       // sequel transfers here

FIG. 10. Sequel compilation structure.
which is available in most languages. However, passing fix-up routines has the same problems as passing sequel routines. Essentially, if users are willing to explicitly pass sequel arguments, they are probably willing to pass fix-up routines. Finally, Knudsen shows several examples where static propagation provides syntax and semantics superior to traditional dynamic EHMs (e.g., CLU/Ada). However, with advanced language features, like generics and overloading, and advanced EHM features it is possible to achieve almost equivalent syntax and semantics in the dynamic case. For these reasons, static propagation seldom appears in an EHM; instead, most EHMs use the more powerful and expressive dynamic propagation.
9. Propagation Mechanisms
Propagating directs control flow of the faulting execution to a handler; the search for a handler proceeds through the blocks, guarded and unguarded, on the call (or lexical) stack. Different implementation actions occur during the search depending on the kind of propagation, where the kinds of propagation are terminating and resuming, and both forms can coexist in a single EHM. Terminating or throwing propagation means control does not return to the raise point. The unwinding associated with terminating normally occurs during propagation, although this is not required; unwinding can occur when the handler is found, during the handler's execution, or on its completion. However, there is no advantage to delaying unwinding for termination, and doing so results in problems (see Sections 7.2 and 13) and complicates most implementations. Resuming propagation means control returns to the point of the raise; hence, there is no stack unwinding. However, a handler may determine that control cannot return, and needs to unwind the stack, i.e., change the resume into a terminate. This capability is essential to prevent unsafe resumption, and mechanisms to accomplish it are discussed below. Three approaches for associating terminating or resuming propagation with an exception are possible:

1. At the declaration of the exception, as in

    terminate E1;                   // specific declaration
    resume E2;
    try {
        ... raise E1; ...           // generic raise
        ... raise E2; ...
    } catch( E1 ) ...               // generic handler
      catch( E2 ) ...
Associating the propagation mechanism at exception declaration means the raise and handler can be generic. With this form, there is a partitioning of exceptions, as in Goodenough [2] with ESCAPE and NOTIFY, μSystem [20] with exceptions and interventions, and Exceptional C [15] with exceptions and signals.

2. At the raise of the exception event, as in

    exception E;                    // generic declaration
    try {
        ... terminate E; ...        // specific raise
        ... resume E; ...
    } catch( E ) ...                // generic handler
Associating the propagation mechanism at the raise means the declaration and handler can be generic. With this form, an exception can be used in either form; i.e., exception E can imply termination or resumption depending on the raise. The generic handler catching the exception must behave according to the kind of handler model associated with the exception event. As a result, it is almost mandatory to have a facility in the handler to determine the kind of exception as different actions are usually taken for each.

3. At the handler, as in

    exception E;                    // generic declaration
    try {
        ... raise E; ...            // generic raise
    } terminate( E ) ...            // specific handler

    try {
        ... raise E; ...            // generic raise
    } resume( E ) ...               // specific handler
Associating the propagation mechanism at the handler means the declaration and raise can be generic. With this form, an exception can be used in either form; i.e., exception E can imply termination or resumption depending on the handler. However, it is ambiguous to have the two handlers appear in the same handler clause for the same exception. Interestingly, the choice of handling model can be further delayed using an unwind statement available only in the handler to trigger stack unwinding, as in

    exception E;                    // generic declaration
    try {
        ... raise E; ...            // generic raise
    } catch( E ) {                  // generic handler
        if (...) {
            ... unwind; ...         // => termination
        } else {
            ...                     // => resumption
        }
    }
In this form, a handler implies resumption unless an unwind is executed. The unwind capability in VMS [29, Chap. 4] and any language with nonlocal transfer can support this approach. Both schemes have implications with respect to the implementation because stack unwinding must be delayed, which can have an effect on other aspects of the EHM. Unfortunately, this approach violates the EHM objective of preventing an incomplete operation from continuing; i.e., it is impossible at the raise point to ensure control flow does not return. As a result, this particular approach is rejected. Continuing with the first two approaches, if an exception can be overloaded, i.e., be both a terminating and resuming exception, combinations of the first two forms of handler-model specification are possible, as in

    terminate E; resume E;          // overload declaration
    try {
        ... terminate E; ...
        ... resume E; ...
    } catch( E ) ...                // generic handler: either terminate or resume

    exception E;                    // generic declaration
    try {
        ... terminate E; ...
        ... resume E; ...
    } terminate( E ) ...            // overload handlers
      resume( E ) ...
In both cases, the kind of handler model for the exception is specified at the raise and fixed during the propagation. In the left example, exception E is overloaded at the declaration and the generic handler catching the exception must behave according to the kind of handler model associated with the exception event. As mentioned, it is almost mandatory to have a facility in the handler to determine the kind of exception. In general, it is better software engineering to partition the handler code for each kind of handler model. In the right example, the generic exception is made specific at the raise and the overloaded handlers choose the appropriate kind. In this form, the handler code is partitioned for each kind of
handler model. However, unlike the previous scheme, the exception declaration does not convey how the exception may be used in the program. Finally, it is possible to specify the handler model in all three locations, as in

    terminate E; resume E;          // overload declaration
    try {
        ... terminate E; ...
        ... resume E; ...
    } terminate( E ) ...            // overload handlers
      resume( E ) ...
The EHM in μSystem [20] uses all three locations to specify the handler model. While pedantic, the redundancy of this format helps in reading the code because the declaration specifies the kind of exception (especially when the exception declaration is part of an interface). As well, it is unnecessary to have a mechanism in the handler to determine the kind of raised exception. Finally, in an EHM where terminating and resuming coexist, it is possible to partially override their semantics by raising events within a handler, as in

Left example:

    try {
        ... resume E1; ...
    } catch( E1 ) terminate E2;

Right example:

    try {
        ... terminate E1; ...
    } catch( E1 ) resume E2;
In the left example, the terminate overrides the resuming and forces stack unwinding, starting with the stack frame of the handler (frame on the top of the stack), followed by the stack frame of the block that originally resumed the exception. In the right example, the resume cannot override the terminate because the stack frames are already unwound, so the new resume starts with the handler stack frame.
10. Exception Partitioning
As mentioned, associating the propagation mechanism at exception declaration results in exception partitioning into terminating and resuming exceptions. Without partitioning, i.e., generic exception declarations, every exception becomes dual as it can be raised with either form of handler model. However, an exception declaration should reflect the nature of the abnormal condition causing the event being raised. For example, Unix signals SIGBUS and SIGTERM always lead to termination of an operation, and hence, should be declared as termination-only. Indeed, having termination-only and resume-only exceptions removes the mistake of using the wrong kind of raise and/or handler.
However, having a dual exception is also useful. While overloading an exception name allows it to be treated as a dual, few languages allow overloading of variables in a block. Alternatively, an exception can be declared as dual. Both forms of making an exception dual have the following advantages. First, encountering an abnormal condition can lead to resuming or terminating an exception depending on the particular context. Without dual exceptions, two different exceptions must be declared, one being terminate-only and the other resume-only. These two exceptions are apparently unrelated without a naming convention; using a single dual exception is simpler. Second, using a dual exception instead of resume-only for some abnormal conditions allows a resumed event to be terminated when no resuming handler is found. This effect can be achieved through a default resuming handler that raises a termination exception. The problem is that terminate-only and resume-only exceptions lack the flexibility of dual, and flexibility improves reusability. This observation does not imply all exceptions should be dual, only that dual exceptions are useful.
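A minimal sketch of the second advantage, under the assumption of a hypothetical per-execution registry of resuming handlers: if no resuming handler is installed for a dual event, a default resuming handler simply terminates it with an ordinary throw. The names raiseDual and resumingHandlers are invented for illustration.

    #include <functional>
    #include <map>
    #include <stdexcept>
    #include <string>

    struct DualEvent : std::runtime_error {            // dual: can terminate ...
        using std::runtime_error::runtime_error;
    };

    // Hypothetical per-execution table of resuming handlers, keyed by event name.
    static std::map<std::string, std::function<void()>> resumingHandlers;

    void raiseDual(const std::string& event) {         // ... or resume
        auto it = resumingHandlers.find(event);
        if (it != resumingHandlers.end())
            it->second();                              // resume: repair and return
        else
            throw DualEvent(event);                    // default handler: terminate
    }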
10.1 Derived Exception Implications
With derived exceptions and partitioned exceptions, there is the issue of deriving one kind of exception from another, e.g., terminate from resume, called heterogeneous derivation. If the derivation is restricted to exceptions of the same kind it is called homogeneous derivation. Homogeneous derivation is straightforward and easy to understand. Heterogeneous derivation is complex but more flexible because it allows deriving from any kind of exception. With heterogeneous derivation, it is possible to have all exceptions in one hierarchy. The complexity with heterogeneous derivation comes from the following heterogeneous derivations:

    parent       derived      option
    terminate    resume         1
    resume       terminate      1
    dual         terminate      2
    dual         resume         2
    terminate    dual           3
    resume       dual           3
In option 1, the kind of exception is different when the derived exception is raised and the parent is caught. If a resume-only exception is caught by a terminate-only handler, it could unwind the stack, but that invalidates resumption at the raise point. If a terminate-only exception is caught by a resume-only handler, it could resume the event, but that invalidates the termination at the raise point. In option 2, problems occur when the dual exception attempts to perform an unwind or resume on an exception of the wrong kind, resulting in the option 1 problems. In option 3, there is neither an obvious problem nor an advantage if the dual exception is caught by the more specific parent. In most cases, it seems that heterogeneous
derivation does not simplify programming and may confuse programmers; hence, it is a questionable feature.
11. Matching
In Section 9, either the exception declaration or the raise fixes the handler model for an exception event. The propagation mechanism then finds a handler matching both the kind and exception. However, there is no requirement that the kind match; only the exception must match, which leads to four possible situations in an EHM:

                             termination       resumption
    terminating handler      1. matching       3. unmatching
    resuming handler         2. unmatching     4. matching
Up to now, matching has been assumed between handling model and propagation mechanism; i.e., termination matches with terminating and resumption with resuming. However, the other two possibilities (options 2 and 3) must be examined to determine whether there are useful semantics. In fact, this discussion parallels that for heterogeneous derivation. In option 2, when a termination exception is raised, the stack is immediately unwound and the operation cannot be resumed. Therefore, a resuming handler handling a termination exception cannot resume the terminated operation. This semantics is misleading and difficult to understand, possibly resulting in an error long after the handler returns, because an operation raising a termination exception expects a handler to provide an alternative for its guarded block, and a resuming handler catching an exception expects the operation raising it to continue. Therefore, unmatching semantics for a termination exception is largely an unsafe feature. In option 3, when an exception is resumed, the stack is not unwound so a terminating handler has four possibilities. First, the stack is not unwound and the exception is handled with the resumption model; i.e., the termination is ignored. Second, the stack is unwound only after the handler executes to completion. Third, the stack is unwound by executing a special statement during execution of the handler. Fourth, the stack is unwound after finding the terminating handler but before executing it. The first option is unsafe because the terminating handler does not intend to resume, and therefore, it does not correct the problem before returning to the raise point. The next two options afford no benefit as there is no advantage to delaying unwinding for termination, and doing so results in problems (see Sections 7.2 and 13) and complicates most implementations. These problems can be avoided by the fourth option, which unwinds the stack before executing the handler, essentially handling the resumed exception as a
termination exception. It also simplifies the task of writing a terminating handler because a programmer does not have to be concerned about unwinding the stack explicitly, or any esoteric issues if the stack is unwound inside or after the terminating handler. Because of its superiority over the other two options favoring termination, the last option is the best unmatching semantics for a resumption exception (but it is still questionable). With matching semantics, it is possible to determine what model is used to handle a raised exception (and the control flow) by knowing either how an exception is raised or which handler is chosen. Abstracting the resumption and the termination model is done in a symmetric fashion. The same cannot be said about unmatching semantics. In particular, it is impossible to tell whether a resumed exception is handled with the resumption model without knowing the handler catching it, but a termination exception is always handled with the termination model. Hence, terminating and resuming are asymmetric in unmatching semantics. Without knowing the handling model used for a resumed exception, it becomes more difficult to understand the resuming mechanism for unmatching semantics than to understand the terminating and resuming mechanism for matching semantics. Therefore, unmatching semantics is inferior to matching and a questionable feature in an EHM.
12. Handler Clause Selection
The propagation mechanism determines how handler clauses are searched to locate a handler. It does not specify which handler in a handler clause is chosen if there are multiple handlers capable of catching the exception. For example, a handler clause can handle both a derived and a base exception. This section discusses issues about two orthogonal criteria—matching and specificity—for choosing a handler among those capable of handling a raised exception in a handler clause. The matching criterion (see Section 11) selects a handler matching the propagation mechanism, e.g.,

    try {
        ... resume E; ...
    } terminate( E ) ...
      resume( E ) ...              // matching
Matching only applies for an EHM with the two distinct propagation mechanisms and handler partitioning. The specificity criterion selects the most specific eligible handler within a handler clause using the following ordering rules:
1. The exception is derived from another exception (see Section 6.2):

    terminate B;
    terminate D : B;
    try {
        ... terminate D; ...
    } terminate( D ) ...                    // more specific
      terminate( B ) ...
2. The exception is bound to an object rather than to a class (see Section 6.4):

    try {
        ... f.read(); ...
    } terminate( f.file_err ) ...           // more specific
      terminate( file.file_err ) ...
3. The exception is bound to the same object and derived from another exception:

    class foo {
        terminate B;
        terminate D : B;
        void m() { ... terminate D; ... }
    };
    foo f;
    try {
        ... f.m(); ...
    } terminate( f.D ) ...                  // more specific
      terminate( f.B ) ...
In this case, it may be infeasible to tell which handler in a handler clause is more specific:

    try {
        ...
    } terminate( D ) ...                    // equally specific
      terminate( f.B ) ...
Here, there is a choice between a derived exception and a bound, base exception, which could be said to be equally specific.
A language designer must set priorities among these orthogonal criteria. In addition, the priority of handling a termination exception is orthogonal to that of a resumed one. Dynamic propagation (see Section 8.1) uses closeness, i.e., selecting a handler closest to the raise point on the stack, to first locate a possible set of eligible handlers in a handler clause. Given a set of eligible handlers, matching should have the highest priority, when applicable, because matching semantics is safe, consistent, and comprehensible (see Section 11). A consequence of matching is a terminating handler hierarchy for termination exceptions and a resuming handler hierarchy for resumed ones. With separate handler hierarchies, it is reasonable for an exception to have both a default terminating and resuming handler (see Section 6.3 concerning default handlers). It is still possible for a default resuming handler to override resuming (see Section 9) and raise a termination exception in the terminating hierarchy. Overriding does not violate mandatory matching because of the explicit terminating raise in the handler. If there is no default handler in either case, the runtime system must take some appropriate action, usually terminating the execution. Specificity is good, but comes after matching; e.g., if specificity is selected before matching in

    try {
        ... terminate D; ...                // D is derived from B
    } terminate( B ) ...                    // matching
      resume( D ) ...                       // specificity
then the handler resume( D ) is chosen, not that for terminate( B ), which violates handler matching. The only exception to these rules is when two handlers in the same handler clause are equally specific, requiring an additional criterion to resolve the ambiguity. The most common one is the position of a handler in a handler clause, e.g., select the first equally matching handler found in the handler-clause list. Whatever this additional criterion is, it should be applied to resolve ambiguity only after using the other criteria.
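For contrast, standard C++ offers neither matching (it is termination-only) nor automatic specificity: handlers in a handler clause are tried strictly in positional order, so a more specific (derived) handler must be written before its base or it can never be selected. A minimal, self-contained example:

    #include <iostream>
    #include <stdexcept>

    struct B : std::exception {};
    struct D : B {};

    int main() {
        try {
            throw D();
        } catch (const D&) {            // more specific: must appear first
            std::cout << "caught D\n";
        } catch (const B&) {            // also matches D, but position decides
            std::cout << "caught B\n";
        }
    }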
13. Preventing Recursive Resuming
Recursive resuming (see Section 8.1.3) is the only legitimate criticism of resuming propagation. The mechanisms in Mesa [14, p. 143] and VMS [29, pp. 90-92] represent the two main approaches for solving this problem. The rest of this section looks at these two solutions.
13.1 Mesa Propagation
Mesa propagation prevents recursive resuming by not reusing an unhandled handler bound to a specific called block; i.e., once a handler for a block is entered, it is marked as unhandled and not used again. The propagation mechanism always starts from the top of the stack to find an unmarked handler for a resume exception.^ However, this unambiguous semantics is often described as confusing. The following program demonstrates how Mesa solves recursive resuming:

    void test() {
        try {                                   // T1(H1(R2))
            try {                               // T2(H2(R1))
                try {                           // T3(H3(R2))
                    resume R1;
                } catch( R2 ) resume R1;        // H3(R2)
            } catch( R1 ) resume R2;            // H2(R1)
        } catch( R2 ) ...                       // H1(R2)
    }
The following stack is generated at the point when exception R1 is resumed from the innermost try block:

    test -> T1(H1(R2)) -> T2(H2(R1)) -> T3(H3(R2)) -> H2(R1)
The potential infinite recursion occurs because H2(R1) resumes R2, and there is resuming handler H3(R2), which resumes R1, while handler H2(R1) is still on the stack. Hence, handler body H2(R1) calls handler body H3(R2) and vice versa with no case to stop the recursion. Mesa prevents the infinite recursion by marking an unhandled handler, i.e., a handler that has not returned, as ineligible (shown here in brackets), resulting in

    test -> T1(H1(R2)) -> T2([H2(R1)]) -> T3(H3(R2)) -> H2(R1)
Now, H2(R1) resumes R2, which is handled by H3(R2):

    test -> T1(H1(R2)) -> T2([H2(R1)]) -> T3([H3(R2)]) -> H2(R1) -> H3(R2)
Therefore, when H3(R2) resumes R1, no infinite recursion occurs as the handler for R1 in T2(H2(R1)) is marked ineligible.

^This semantics was determined with test programs and discussions with Michael Plass and Alan Freier at Xerox PARC.
However, the confusion with the Mesa semantics is that there is now no handler for R1, even though the nested try blocks appear to deal with this situation. In fact, looking at the static structure, a programmer might incorrectly assume there is an infinite recursion between handlers H2(R1) and H3(R2), as they resume one another. This confusion has resulted in a reticence by language designers to incorporate resuming facilities in new languages. In detail, the Mesa semantics has the following negative attributes:

• Resuming an exception in a block and in one of its handlers can call different handlers, even though the block and its handlers are in the same lexical scope. For instance, in the above example, an exception generated in a guarded block is handled by handlers at or below the block on the stack, but an exception generated in a handler body can be handled by handlers above it on the stack. Clearly, lexical scoping does not reflect the difference in semantics.

• Abstraction implies a routine should be treated as a client of routines it calls directly or indirectly, and have no access to the implementations it uses. However, if resuming from a resuming handler is a useful feature, some implementation knowledge about the handlers bound to the stack above it must be available to successfully understand how to make corrections, thereby violating abstraction.

• Finally, exceptions are designed for communicating abnormal conditions from callee to caller. However, resuming an exception inside a resuming handler is like an abnormal condition propagating from caller to callee because of the use of handlers above it on the stack.
13.2 VMS Propagation
The VMS propagation mechanism solves the recursive resuming problem, but without the Mesa problems. This mechanism is then extended to cover asynchronous exceptions, which neither Mesa nor VMS have. Before looking at the VMS mechanism, the concept of consequent events is defined, which helps to explain why the semantics of the VMS mechanism are desirable.
13.2.1 Consequent Events
Raising an exception synchronously implies an abnormal condition has been encountered. A handler can catch an event and then raise another synchronous event if it encounters another abnormal condition, resulting in a second synchronous exception. The second event is considered a consequent event of the first.
More precisely, every synchronous event is an immediate consequent event of the most recent exception being handled in the execution (if there is one). For example, in the previous Mesa resuming example, the consequence sequence is R1, R2, and R1. Therefore, a consequent event is either the immediate consequent event of an event or the immediate consequent event of another consequent event. The consequence relation is transitive, but not reflexive. Hence, synchronous events propagated when no other events are being handled are the only nonconsequent events. An asynchronous exception is not a consequent event of other exceptions propagated in the faulting execution because the condition resulting in the event is encountered by the source execution, and in general, not related to the faulting execution. Only a synchronous event raised after an asynchronous event is delivered can be a consequent event of the asynchronous event.
13.2.2 Consequential Propagation
The VMS propagation mechanism is referred to as consequential propagation, based on the premise that if a handler cannot handle an event, it should not handle its consequent events, either. Conceptually, the propagation searches the execution stack in the normal way to find a handler, but marks as ineligible all handlers inspected, including the chosen handler. Marks are cleared only when an event is handled, so any consequent event raised during handling also sees the marked handlers. Practically, all resuming handlers at each level are marked when resuming an event; however, stack unwinding eliminates the need for marking when raising a termination exception. Matching (see Section 11) eliminates the need to mark terminating handlers because only resuming handlers catch resume events. If the resuming handler overrides the propagation by raising a termination exception, the stack is unwound normally from the current handler frame. How does consequential propagation make a difference? Given the previous Mesa runtime stack

    test -> T1(H1(R2)) -> T2(H2(R1)) -> T3(H3(R2)) -> H2(R1)
consequential propagation marks all handlers from the raise of R1 in T3(H3(R2)) to T2(H2(R1)) as ineligible (shown in brackets):

    test -> T1(H1(R2)) -> T2([H2(R1)]) -> T3([H3(R2)]) -> H2(R1)
Now, H2(R1) resumes R2, which is handled by H1(R2) instead of H3(R2):

    test -> T1(H1(R2)) -> T2([H2(R1)]) -> T3([H3(R2)]) -> H2(R1) -> H1(R2)
Like Mesa, recursive resuming is eliminated, but consequential propagation does not result in the confusing resumption of R1 from H3(R2). In general, consequential propagation eliminates recursive resuming because a resuming handler marked for a particular event cannot be called to handle its consequent events. As well, propagating a synchronous resumption event out of a handler does not call a handler bound to a stack frame between the handler and the handler body, which is similar to a termination event propagated out of a guarded block because of stack unwinding. Consequential propagation does not preclude all infinite recursion with respect to propagation, as in

    void test() {
        try {                               // T(H(R))
            ... resume R; ...
        } catch( R ) test();                // H(R)
    }
Here, each call of test creates a new try block to handle the next recursion, resulting in an infinite number of handlers:

    test -> T(H(R)) -> H(R) -> test -> T(H(R)) -> H(R) -> test -> ...
As a result, there is always an eligible handler to catch the next event in the recursion. Consequential propagation is not supposed to handle this situation as it is considered an error with respect to recursion, not propagation. Finally, consequential propagation does not affect termination propagation because marked resuming handlers are simply removed during stack unwinding. Hence, the application of consequential propagation is consistent with either terminating or resuming. As well, because of handler partitioning, a terminating handler for the same event bound to a prior block of a resuming handler is still eligible, as in

    void test() {
        dual R;                             // terminate and resume
        try {                               // T(r(R),t(R))
            ... resume R; ...
        } terminate( R ) ...                // t(R)
          resume( R ) terminate R;          // r(R)
    }
Here, the resume of R in the try block is first handled by r(R), resulting in the call stack

    test -> T(r(R),t(R)) -> r(R)
While r(R) is marked ineligible, the terminating handler, t(R), for the same try block is still eligible. The handler r(R) then terminates the exception R, and the stack is unwound starting at the frame for handler r(R) to the try block where the exception is caught by handler t(R), resulting in the call stack

    test -> t(R)
The try block is effectively gone because the scope of the handler does not include the try block (see Section 7.1). All handlers are considered unmarked for a propagated asynchronous event because an asynchronous event is not a consequent event. Therefore, the propagation mechanism searches every handler on the runtime stack. Hence, a handler ineligible to handle an event and its consequent events can be chosen to handle a newly arrived asynchronous event, reflecting its lack of consequentiality. In summation, consequential propagation is better than other existing propagation mechanisms because:

• it supports terminating and resuming propagation, and the search for a handler occurs in a uniformly defined way,

• it prevents recursive resuming and handles synchronous and asynchronous exceptions according to a sensible consequence relation among exceptions, and

• the context of a handler closely resembles its guarded block with respect to lexical location; in effect, an event propagated out of a handler is handled as if the event is directly propagated out of its guarded block.
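The marking rule can be sketched in ordinary C++ by layering resuming handlers on an explicit handler stack. Everything below (Guard, resumeEvent, the string-keyed events) is invented for illustration and greatly simplifies the mechanism described above; it is a sketch, not an implementation of the chapter's EHM.

    #include <functional>
    #include <stdexcept>
    #include <string>
    #include <vector>

    struct ResumeHandler {
        std::string event;                        // event this handler resumes
        std::function<void()> body;               // handler body
        bool marked;                              // ineligible for consequent events
    };

    static std::vector<ResumeHandler*> handlerStack;    // back() is the stack top

    // RAII: install a resuming handler for the lifetime of a guarded block.
    struct Guard {
        ResumeHandler h;
        Guard(std::string e, std::function<void()> b)
            : h{std::move(e), std::move(b), false} { handlerStack.push_back(&h); }
        ~Guard() { handlerStack.pop_back(); }
    };

    // Consequential propagation: search top-down, mark every handler inspected
    // (including the chosen one), and clear the marks only once the event is handled.
    void resumeEvent(const std::string& event) {
        std::vector<ResumeHandler*> inspected;
        ResumeHandler* chosen = nullptr;
        for (auto it = handlerStack.rbegin(); it != handlerStack.rend(); ++it) {
            if ((*it)->marked) continue;          // already ineligible
            inspected.push_back(*it);
            if ((*it)->event == event) { chosen = *it; break; }
        }
        if (!chosen) throw std::runtime_error("unhandled resume: " + event);
        for (auto* h : inspected) h->marked = true;
        chosen->body();                           // consequent resumes see the marks
        for (auto* h : inspected) h->marked = false;
    }

    // The earlier scenario: the consequent resume of R2 inside H2(R1) is handled
    // by H1(R2), not by the marked H3(R2), and no recursion occurs.
    int main() {
        Guard h1("R2", []{ /* H1(R2) */ });
        Guard h2("R1", []{ resumeEvent("R2"); }); // H2(R1)
        Guard h3("R2", []{ resumeEvent("R1"); }); // H3(R2)
        resumeEvent("R1");
    }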
14. Multiple Executions and Threads
The presence of multiple executions and multiple threads has an impact on an EHM. In particular, each execution has its own stack on which threads execute, and different threads can carry out the various operations associated with handling an exception. For example, the thread of the source execution delivers an exception to the faulting execution; the thread of the faulting execution propagates and handles it.
14.1 Coroutine Environment
Coroutines represent the simplest execution environment where the source execution can be different from the faulting execution, but the thread of a single task executes both source and faulting execution. In theory, either execution can
propagate the event, but in practice, only the faulting execution is reasonable. Assume the source execution propagates the event in

    try {                           // T1(H2(E1))
        try {                       // T2(H1(E2))
            Ex1 (suspended)
        } catch( E2 ) ...           // H1(E2)
    } catch( E1 ) ...               // H2(E1)

and execution Ex1 is suspended in the guarded region of try block T2. While suspended, a source execution Ex2 raises and propagates an asynchronous exception E1 in Ex1, which directs control flow of Ex1 to handler H2(E1), unwinding the stack in the process. While Ex1 is still suspended, a third source execution Ex3 raises and propagates another asynchronous exception E2 (Ex2 and Ex3 do not have to be distinct). Hence, control flow of Ex1 goes to another handler determined in the dynamic context, further unwinding the stack. The net effect is that neither of the exceptions is handled by any handler in the program fragment. The alternative approach is for the faulting execution, Ex1, to propagate the exceptions. Regardless of which order Ex1 raises the two arriving events, at least a handler for one of the events is called. Therefore, only the faulting execution should propagate an exception in an environment with multiple executions.
14.2 Concurrent Environment

Concurrency represents the most complex execution environment, where the separate source and faulting executions are executed by threads of different tasks. In theory, either execution can propagate an event, but in practice, only the faulting execution is reasonable. If the source execution propagates the event, it must change the faulting execution, including the runtime stack and program counter. Consequently, the runtime stack and the program counter become shared resources between these tasks, making a task's execution dependent on another task's execution in a direct way, i.e., not through communication. To avoid corrupting an execution, locking is now required. Hence, an execution must lock and unlock its runtime stack before and after each execution time-slice. Obviously, this approach generates a large amount of superfluous locking to deal with a situation that occurs rarely. Therefore, it is reasonable to allow only the faulting execution to propagate an exception in an environment with multiple tasks.
14.3 Real-Time Environment

In the design and implementation of real-time programs, various timing constraints are guaranteed through the use of scheduling algorithms, as well as an EHM. Exceptions are extremely crucial in real-time systems, e.g., deadline expiry
or early/late starting exceptions, as they allow a system to react to abnormal situations in a timely fashion. Hecht and Hecht [30] demonstrated, through various empirical studies, that the introduction of even the most basic fault-tolerance mechanisms into a real-time system drastically improves its reliability. The main conflict between real-time and an EHM is the need for constant-time operations and the dynamic choice of a handler [31]. As pointed out in Section 8.1.2, the dynamic choice of a handler is crucial to an EHM, and therefore, it may be impossible to resolve this conflict. At best, exceptions may only be used in restricted ways in real-time systems when a bound can be established on call stack depth and the number of active handlers, which indirectly puts a bound on propagation.
15. Asynchronous Exception Events
The protocol for communicating asynchronous events among coroutines and tasks is examined.
15.1 Communication
Because only the faulting execution should propagate an event and directly alter control flow, the source execution only needs to deliver the event to the faulting execution. This requires a form of direct communication not involving shared objects. In essence, an event is transmitted from the source to the faulting execution. There are two major categories of direct communication: blocking and nonblocking. In the first, the sender blocks until the receiver is ready to receive the event; in the second, the sender does not block.
15.1.1 Source Execution Requirement
Using blocking communication, the source execution blocks until the faulting execution executes a complementary receive. However, an execution may infrequently (or never) check for incoming exception events. Hence, the source can be blocked for an extended period of time waiting for the faulting execution to receive the event. Therefore, blocking communication is rejected. Only nonblocking communication allows the source execution to raise an exception on one or more executions without suffering an extended delay.
15.1.2 Faulting Execution Requirement
Nonblocking communication for exceptions is different from ordinary nonblocking communication. In the latter case, a message is delivered only after
the receiver executes some form of receive. The former requires the receiver to receive an exception event without explicitly executing a receive because an EHM should preclude checking for an abnormal condition. The programmer is required to set up a handler only to handle the rare condition. From the programmer's perspective, the delivery of an asynchronous exception should be transparent. Therefore, the runtime system of the faulting execution must poll for the arrival of asynchronous exceptions, and propagate them on arrival. The delivery of asynchronous exceptions must be timely, but not necessarily immediate.

There are two polling strategies: implicit polling and explicit polling. Implicit polling is performed by the underlying system. (Hardware interrupts involve implicit polling because the CPU automatically polls for the event.) Explicit polling requires the programmer to insert explicit code to activate polling. Implicit polling relieves programmers of polling, and hence, provides an apparently easier interface to programmers. On the other hand, implicit polling has its drawbacks. First, infrequent implicit polling can delay the handling of asynchronous exceptions; polling too frequently can degrade the runtime efficiency. Without specific knowledge of a program, it is difficult to have the right frequency for implicit polling. Second, implicit polling suffers from the nonreentrant problem (see Section 15.2). Explicit polling gives a programmer control over when an asynchronous exception can be raised. Therefore, the programmer can delay or even completely ignore pending asynchronous exceptions. Delaying and ignoring asynchronous exceptions are both undesirable. The other drawback of explicit polling is that a programmer must worry about when to and when not to poll, which is equivalent to explicitly checking for exceptions. Unfortunately, an EHM with asynchronous exceptions needs to employ both implicit and explicit polling. Implicit polling simplifies using the EHM and reduces the damage a programmer can do by ignoring asynchronous exceptions. However, the frequency of implicit polling should be low to avoid unnecessary loss of efficiency. Explicit polling allows programmers to have additional polling when it is necessary. The combination of implicit and explicit polling gives a balance between programmability and efficiency. Finally, certain situations can require that implicit polling be turned off, possibly by a compiler or runtime switch, e.g., in low-level system code where execution efficiency is crucial, or in real-time programming to ensure deadlines.
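A sketch of the explicit-polling half, with invented names: the source execution only enqueues the event, and the faulting execution converts it into an ordinary synchronous throw at points it knows are safe. An implicit-polling runtime would simply insert the same poll() calls itself, for example at block entry or loop back-edges.

    #include <mutex>
    #include <optional>
    #include <queue>
    #include <stdexcept>
    #include <string>

    struct AsyncEvent { std::string name; };

    static std::mutex mtx;
    static std::queue<AsyncEvent> pending;     // delivered but not yet propagated

    void deliver(AsyncEvent e) {               // called by the source execution
        std::lock_guard<std::mutex> lock(mtx);
        pending.push(std::move(e));
    }

    void poll() {                              // called by the faulting execution
        std::optional<AsyncEvent> e;
        {
            std::lock_guard<std::mutex> lock(mtx);
            if (!pending.empty()) { e = std::move(pending.front()); pending.pop(); }
        }
        if (e) throw std::runtime_error("asynchronous event: " + e->name);
    }

    void longComputation() {
        for (int i = 0; i < 1000000; ++i) {
            // ... useful work ...
            if (i % 4096 == 0) poll();         // explicit poll at a known-safe point
        }
    }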
15.2 Nonreentrant Problem
Asynchronous events introduce a form of concurrency into sequential execution because delivery is nondeterministic with implicit polling. The event delivery can be considered as temporarily stealing a thread to execute the handler. As a
result, it is possible for a computation to be interrupted while in an inconsistent state, a handler to be found, and the handler to recursively call the inconsistent computation, called the nonreentrant problem. For example, while allocating memory, an execution is suspended by delivery of an asynchronous event, and the handler for the exception attempts to allocate memory. The recursive entry of the memory allocator may corrupt its data structures. The nonreentrant problem cannot be solved by locking the computation because either the recursive call deadlocks, or if recursive locks are used, reenters and corrupts the data. To ensure correctness of a nonreentrant routine, an execution must achieve the necessary mutual exclusion by blocking delivery, and consequently the propagation of asynchronous exceptions, hence temporarily precluding delivery. Hardware interrupts are also implicitly polled by the CPU. The nonreentrant problem can occur if the interrupt handler enables the interrupt and recursively calls the same computation as has been interrupted. However, because hardware interrupts can happen at times when asynchronous exceptions cannot, it is more difficult to control delivery.
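The allocator example can be made concrete with a toy free-list; the comment marks the window in which an asynchronously delivered handler that itself allocates would corrupt the list, which is exactly why delivery must be blocked around such regions.

    // Toy free-list allocator, neither thread-safe nor reentrant.
    struct Block { Block* next; };
    static Block* freeList = nullptr;

    Block* allocate() {
        Block* b = freeList;
        // <-- if an asynchronous event is propagated here and its handler calls
        //     allocate() again, both calls observe the same head block: the same
        //     block is handed out twice and the free list is corrupted.
        if (b != nullptr) freeList = b->next;
        return b;
    }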
15.3 Disabling Asynchronous Exceptions
Because of the nonreentrant problem, facilities to disable asynchronous exceptions must exist. There are two aspects to disabling: the specific event to be disabled and the duration of disabling. (This discussion is also applicable to hardware interrupts and interrupt handlers.)
15.3.1 Specific Event
Without derived exceptions, only the specified exception is disabled; with derived exceptions, the exception and all its descendants can be disabled. Disabling an individual exception but not its descendants, called individual disabling, is tedious, as a programmer must list all the exceptions being disabled, and it does not complement the exception hierarchy. If a new derived exception should be treated as an instance of its ancestors, the exception must be disabled wherever its ancestor is disabled. Individual disabling does not automatically disable the descendants of the specified exceptions, and therefore, introducing a new derived exception requires modifying existing code to prevent it from activating a handler bound to its ancestor. The alternative, hierarchical disabling, disables an exception and its descendants. The derivation becomes more restrictive because a derived exception also inherits the disabling characteristics of its parent. Compared to individual disabling, hierarchical disabling is more complex to implement and usually has a higher runtime cost. However, the improvement in programmability makes hierarchical disabling attractive.
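A sketch of the difference, assuming events form a class hierarchy: hierarchical disabling can be expressed as a per-execution list of predicates that match an event and, via dynamic_cast, all of its descendants, so a newly derived event needs no extra code. All names here are invented for illustration.

    #include <functional>
    #include <vector>

    struct AsyncEvent { virtual ~AsyncEvent() = default; };
    struct IOEvent   : AsyncEvent {};
    struct DiskFull  : IOEvent {};          // disabled automatically with IOEvent

    using Filter = std::function<bool(const AsyncEvent&)>;
    static std::vector<Filter> disabled;    // per-execution disable list

    template <class E>
    void disableHierarchy() {               // disables E and everything derived from it
        disabled.push_back([](const AsyncEvent& e) {
            return dynamic_cast<const E*>(&e) != nullptr;
        });
    }

    bool isDisabled(const AsyncEvent& e) {
        for (const Filter& f : disabled)
            if (f(e)) return true;
        return false;
    }

    // Individual disabling would instead compare exact types; a later derivation
    // such as DiskFull would then slip past a disable of IOEvent.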
A different approach is to use priorities instead of hierarchical disabling, allowing a derived exception to override its parent's priority when necessary. Selective disabling can be achieved by disabling exceptions of priority lower than or equal to a specified value. This selective disabling scheme trades off the programmability and extensibility of hierarchical disabling for lower implementation and runtime costs. However, the problem with priorities is assigning priority values. Introducing a new exception requires an understanding of its abnormal nature plus its priority compared to other exceptions. Hence, defining a new exception requires an extensive knowledge of the whole system with respect to priorities, which makes the system less maintainable and understandable. It is conceivable to combine priorities with hierarchical disabling; a programmer specifies both an exception and a priority to disable an asynchronous exception. However, the problem of maintaining consistent priorities throughout the exception hierarchy still exists. In general, priorities are an additional concept that increases the complexity of the overall system without significant benefit. Therefore, hierarchical disabling with derived exceptions seems the best approach in an extensible EHM. Note that multiple derivation (see Section 6.2) only complicates hierarchical disabling, and the same arguments can be used against hierarchical disabling with multiple derivation.
15.3.2 Duration
The duration for disabling could be specified as a length of time, but normally the disabling duration is specified by a region of code that cannot be interrupted. There are several mechanisms available for specifying the region of uninterruptable code. One approach is to supply explicit routines to turn the disabling on and off for particular asynchronous exceptions. However, the resulting programming style is like using a semaphore for locking and unlocking, which is a low-level abstraction. Programming errors result from forgetting a complementary call and are difficult to debug. An alternative is a new kind of block, called a protected block, which specifies a list of asynchronous events to be disabled across the associated region of code. On entering a protected block, the listed asynchronous events are added to the set of disabled events, and they are re-enabled when the block exits. The effect is like entering a guarded block, so disabling applies to the block and any code dynamically accessed via that block, e.g., called routines. An approach suggested for Java [32] associates the disabling semantics with an exception named AIE. If a member routine includes this exception in its exception list, interrupts are disabled during execution of the member; hence, the member body is the protected block. However, this approach is poor language
design because it associates important semantics with a name, AIE, and makes this name a hidden keyword. The protected block seems the simplest and most consistent in an imperative language with nested blocks. Regardless of how asynchronous exceptions are disabled, all events (except for special system events) should be disabled initially for an execution; otherwise, an execution cannot install handlers before asynchronous events begin arriving.
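In C++ the protected block maps naturally onto an RAII guard. This sketch disables all asynchronous delivery for the guard's scope rather than a listed set of events, which is a simplification of the semantics described above; the names are invented.

    static thread_local int disableDepth = 0;   // > 0: delivery suppressed

    struct Protected {                          // a protected block for one scope
        Protected()  { ++disableDepth; }
        ~Protected() { --disableDepth; }
        Protected(const Protected&) = delete;
        Protected& operator=(const Protected&) = delete;
    };

    bool deliveryEnabled() { return disableDepth == 0; }

    void allocateSafely() {
        Protected guard;    // disabling applies here and to anything called from here
        // ... manipulate the free list safely ...
    }                       // delivery re-enabled; pending events may now propagate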
15.4 Multiple Pending Asynchronous Exceptions
Since asynchronous events are not serviced immediately, there is the potential for multiple events to arrive between two polls for events. There are several options for dealing with these pending asynchronous events. If asynchronous events are not queued, there can be only one pending event. New events must be discarded after the first one arrives, or overwritten as new ones arrive, or overwritten only by higher priority events. However, the risk of losing an asynchronous event makes a system less robust; hence, queuing events is usually superior.

If asynchronous events are queued, there are multiple pending events and several options for servicing them. The order of arrival (first-in, first-out, FIFO) can be chosen to determine the service order for handling pending events. However, a strict FIFO delivery order may be unacceptable, e.g., an asynchronous event to stop an execution from continuing erroneous computation can be delayed for an extended period of time in a FIFO queue. A more flexible semantics for handling pending exceptions is user-defined priorities. However, Section 15.3 discusses how a priority scheme reduces extensibility, making it inappropriate in an environment emphasizing code reuse. Therefore, FIFO order seems acceptable for its simplicity in understanding and low implementation cost. However, allowing a pending event whose delivery is disabled to prevent delivering other pending events seems undesirable. Hence, an event should be able to be delivered before earlier events if the earlier events are disabled.

This out-of-order delivery has important implications for the programming model of asynchronous exceptions. A programmer must be aware of the fact that two exceptions having the same source and faulting execution may be delivered out of order (when the first is disabled but not the second). This approach may seem unreasonable, especially when causal ordering is proved to be beneficial in distributed programming. However, out-of-order delivery is necessary for urgent events. Currently, the most adequate delivery scheme remains an open problem, and the answer may only come with more experience.
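A sketch of the service order just described, reusing the hypothetical isDisabled() predicate from the earlier disabling sketch: delivery is FIFO except that a disabled event is skipped (but kept pending) so it cannot block later, deliverable events.

    #include <deque>
    #include <memory>

    struct AsyncEvent { virtual ~AsyncEvent() = default; };
    bool isDisabled(const AsyncEvent&);          // e.g., hierarchical disabling

    static std::deque<std::unique_ptr<AsyncEvent>> pendingQueue;

    // FIFO with out-of-order delivery: the first *deliverable* event is chosen.
    std::unique_ptr<AsyncEvent> nextDeliverable() {
        for (auto it = pendingQueue.begin(); it != pendingQueue.end(); ++it) {
            if (!isDisabled(**it)) {
                std::unique_ptr<AsyncEvent> e = std::move(*it);
                pendingQueue.erase(it);
                return e;                        // may precede earlier, disabled events
            }
        }
        return nullptr;                          // everything pending is disabled
    }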
15.5 Converting Interrupts to Exceptions

As mentioned, hardware interrupts can occur at any time, which significantly complicates the nonreentrant problem. One technique that mitigates the problem is to convert interrupts into language-level asynchronous events, which are then controlled by the runtime system. Some interrupts target the whole program, like abort execution, while some target individual executions that compose a program, like completion of a specific thread's I/O operation. Each interrupt handler raises an appropriate asynchronous exception to the particular faulting execution or to some system execution for program faults. However, interrupts must still be disabled when enqueueing and dequeuing the asynchronous events to avoid the possibility of corrupting the queue by another interrupt or the execution processing the asynchronous events. By delivering interrupts through the EHM, the nonreentrant problem is avoided and interrupts are disabled for the minimal time. Furthermore, interrupts do not usually have all the capabilities of an EHM, such as parameters; hence, interrupts are not a substitute for a general EHM. Finally, the conversion also simplifies the interface within the language. The interrupts can be completely hidden within the EHM, and programmers only need to handle abnormal conditions at the language level, which improves portability across systems. However, for critical interrupts and in hard real-time systems, it may still be necessary to have some control over interrupts if they require immediate service; i.e., software polling is inadequate.

One final point about programming interrupt handlers is that raising a synchronous exception within an interrupt handler is meaningful only if it does not propagate outside of the handler. The reason is that the handler executes on an arbitrary execution stack, and hence, there is usually no relationship between the interrupt handler and the execution. Indeed, Ada 95 specifies that propagating an event from an interrupt handler has no effect.
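A sketch of the conversion for a POSIX signal: the interrupt handler only records the interrupt using an operation that is safe inside a handler, and the runtime later turns it into a language-level asynchronous event at a poll point. The deliver() call refers to the hypothetical queueing function sketched earlier and is commented out as an assumption.

    #include <atomic>
    #include <signal.h>                     // POSIX sigaction

    static std::atomic<bool> interruptArrived{false};

    extern "C" void onSignal(int) {
        interruptArrived.store(true);       // no allocation, locking, or throwing here
    }

    void installHandler() {
        struct sigaction sa {};
        sa.sa_handler = onSignal;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGINT, &sa, nullptr);    // interrupts now funnel through onSignal
    }

    void runtimePoll() {                    // called at safe points by the runtime
        if (interruptArrived.exchange(false)) {
            // hand the interrupt to the EHM as a language-level asynchronous event,
            // e.g., deliver(AsyncEvent{"SIGINT"}) from the earlier polling sketch
        }
    }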
16. Conclusions
Static and dynamic name binding, and static and dynamic transfer points can be combined to form the following different language constructs:

                                      name binding
                                static            dynamic
    transfer point   static     sequel            termination
                     dynamic    routine call      resumption
These four constructs succinctly cover all the kinds of control flow associated with routines and exceptions.
Raising, propagating, and handling an exception are the three core control-flow mechanisms of an EHM. There are two useful handling models: termination and resumption. For safety, an EHM should provide matching propagation mechanisms: terminating and resuming. Handlers should be partitioned with respect to the handling models to provide better abstraction. Consequential propagation solves the recursive resuming problem and provides consistent propagation semantics with termination, making it the best choice for an EHM with resumption. As a result, the resumption model becomes attractive and can be introduced into existing termination-only EHMs. Exception parameters, homogeneous derivation of exceptions, and bound/conditional handling all improve programmability and extensibility. In a concurrent environment, an EHM must provide some disabling facilities to solve the nonreentrant problem. Hierarchical disabling is best in terms of programmability and extensibility. An EHM based on the ideas presented here has been implemented in μC++ [21], providing feedback on correctness.
Appendix: Glossary

asynchronous exception is when a source execution raises an exception event in a different faulting execution, e.g., raise E in Ex raises exception E from the current source execution to the faulting execution Ex.
bound exception is when an exception event is bound to a particular object, rather than to a class of objects or no object.
catch is the selection of a handler in a handler clause during propagation to deal with an exception event.
closeness is when an event is handled by a handler closest to the block where propagation of the event starts.
conditional handling is when the handler for an exception event also depends on a predicate for selection.
consequent event is when a handler catches an event and then raises another synchronous event due to an abnormal condition, so the second event is a consequence of the first.
consequential propagation assumes that if a handler cannot handle an event, it should not handle its consequent events.
default handler is a handler called if the faulting execution does not find a handler during propagation.
delivery is the arrival of an exception event at a faulting execution, which initiates propagation of the event within the faulting execution.
dual is a kind of exception that can be associated with both termination and resumption.
dynamic propagation is propagation that searches the dynamic scopes (call stack) to find a handler.
event is an exception instance generated at a raise and caught by a handler.
exception is an event that is known to exist but which is ancillary to an algorithm or execution.
exception list is part of a routine's signature specifying which exceptions may propagate out of a routine to its caller.
exception parameter is the ability to pass data from the raise in the source execution to the handler in the faulting execution so the handler can analyze why an exception is raised and how to deal with it.
exception partitioning occurs when exceptions are explicitly divided into different kinds, e.g., terminating and resuming.
execution is the state information needed to permit independent execution, and the minimal language unit in which an exception can be raised.
explicit polling is when arrival and propagation of asynchronous exceptions require the programmer to insert explicit code.
failure exception is a system exception raised if and only if an exception is raised that is not part of the routine's interface.
faulting execution is the execution (process, task, coroutine) affected by an exception event; its control flow is routed to a handler.
guarded block is a programming language block with handlers.
handled is when the handler for an exception returns.
handler is a sequence of statements dealing with one or more exceptions.
handler clause is the set of handlers bound to a guarded block.
handler hierarchies is when different kinds of handlers are organized into separate hierarchies for various purposes.
handles is the execution of a handler in a handler clause associated with a raised exception.
heterogeneous derivation is when different kinds of exceptions can be derived from one another, e.g., terminating from resuming or vice versa.
hierarchical disabling is when an individual exception is disabled and all of its hierarchical descendants are implicitly disabled.
homogeneous derivation is when different kinds of exceptions can only be derived from exceptions of the same kind, e.g., terminating from terminating or resuming from resuming.
implicit polling is when arrival and propagation of asynchronous exceptions are performed by the underlying system.
individual disabling is when an individual exception is disabled but not its hierarchical descendants.
marking is flagging handlers as ineligible during propagation so they cannot be considered again should propagation reencounter them.
matching is when the handling model and propagation mechanism are the same; i.e., termination matches with terminating and resumption with resuming.
multiple derivation is the ability to derive an exception from multiple exceptions, which is similar to multiple inheritance of classes.
mutual exclusion is serializing execution of an operation on a shared resource.
nonlocal transfer is a transfer, usually via a goto, to a dynamically scoped location, where any activation records on the call stack between the transfer and the specified label are terminated.
nonreentrant problem is when a computation is interrupted asynchronously while in an inconsistent state, and the handler for the asynchronous interrupt invokes the same computation.
nonresumable is an operation that cannot be restarted or continued; i.e., the operation must be terminated.
propagating is the directing of control flow within a faulting execution from the raise to a handler.
propagation mechanism is the algorithm used to locate an appropriate handler.
protected block is a lexical block specifying a list of asynchronous events to be disabled during execution of the block.
raise causes control flow to transfer up the lexical or dynamic scopes of the language until it is caught by a handler.
recursive resuming is the potential for infinite recursion resulting from the presence of resuming handlers in previous scopes during propagation.
resuming propagation is when propagation returns to the point of the raise.
return code is a value encoded among normal returned values or a separate value returned from a routine call indicating additional information about the routine's computation.
sequel is a routine, including parameters, which upon returning, continues execution at the end of the block in which the sequel is declared rather than after the sequel call.
source execution is the execution (process, task, coroutine) raising an exception event.
specificity selects the most specific eligible handler within a handler clause using ordering rules.
stack unwinding is the terminating of blocks, including activation records, between the raise point and the handler.
static propagation is propagation that searches the lexical scopes (static nesting) to find a handler.
status flag is a shared (global) variable indicating the occurrence of a rare condition, e.g., errno in UNIX. Setting a status flag indicates a rare condition has occurred; the value remains as long as it is not overwritten by another condition.
synchronous exception is when the source and faulting executions are the same; i.e., the exception is raised and handled by the same execution.
terminating propagation is when propagation does not return to the raise point.
thread is execution of code that occurs independently of and possibly concurrently with another execution; thread execution is sequential as it changes an execution's state.
throwing propagation see "terminating propagation."
unguarded block is a programming language block without handlers.
REFERENCES
[1] Buhr, P. A., Ditchfield, G., Stroobosscher, R. A., Younger, B. M., and Zarnke, C. R. (1992). "μC++: Concurrency in the object-oriented language C++." Software—Practice and Experience, 22(2), 137-172.
[1a] Hoare, C. A. R. (1974). "Monitors: An operating system structuring concept." Communications of the ACM, 17(10), 549-557.
[1b] Marlin, C. D. (1980). Coroutines: A Programming Methodology, a Language Design and an Implementation, volume 95 of Lecture Notes in Computer Science, Ed. by G. Goos and J. Hartmanis. Springer-Verlag, Berlin.
[2] Goodenough, J. B. (1975). "Exception handling: Issues and a proposed notation." Communications of the ACM, 18(12), 683-696.
[3] Intermetrics, Inc. (1995). Annotated Ada Reference Manual, international standard ISO/IEC 8652:1995(E) with COR.1:2000 ed. Language and Standards Libraries.
[4] Cardelli, L., Donahue, J., Glassman, L., Jordan, M., Kalsow, B., and Nelson, G. (1988). "Modula-3 report." Technical Report 31, Systems Research Center, 130 Lytton Avenue, Palo Alto, California 94301.
[5] Stroustrup, B. (1997). The C++ Programming Language, third ed. Addison-Wesley, Reading, MA.
[6] Yemini, S., and Berry, D. M. (1985). "A modular verifiable exception-handling mechanism." ACM Transactions on Programming Languages and Systems, 7(2), 214-243.
[7] International Business Machines (1981). OS and DOS PL/I Reference Manual, first ed. Manual GC26-3977-0.
[8] Madsen, O. L., Møller-Pedersen, B., and Nygaard, K. (1993). Object-Oriented Programming in the BETA Programming Language. Addison-Wesley, Reading, MA.
[9] Kernighan, B. W., and Ritchie, D. M. (1988). The C Programming Language, Prentice Hall Software Series, second ed. Prentice Hall, Englewood Cliffs, NJ.
[10] MacLaren, M. D. (1977). "Exception handling in PL/I." SIGPLAN Notices, 12(3), 101-104. Proceedings of an ACM Conference on Language Design for Reliable Software, March 28-30, 1977, Raleigh, NC.
[11] Buhr, P. A. (1995). "Are safe concurrency libraries possible?" Communications of the ACM, 38(2), 117-120.
[12] Milner, R., and Tofte, M. (1991). Commentary on Standard ML. MIT Press, Cambridge, MA.
[13] Gosling, J., Joy, B., and Steele, G. (1996). The Java Language Specification. Addison-Wesley, Reading, MA.
[14] Mitchell, J. G., Maybury, W., and Sweet, R. (1979). "Mesa language manual." Technical Report CSL-79-3, Xerox Palo Alto Research Center.
[15] Gehani, N. H. (1992). "Exceptional C or C with exceptions." Software—Practice and Experience, 22(10), 827-848.
[16] Meyer, B. (1992). Eiffel: The Language. Prentice Hall Object-Oriented Series. Prentice-Hall, Englewood Cliffs, NJ.
[17] Drew, S. J., and Gough, K. J. (1994). "Exception handling: Expecting the unexpected." Computer Languages, 20(2).
[18] Liskov, B. H., and Snyder, A. (1979). "Exception handling in CLU." IEEE Transactions on Software Engineering, 5(6), 546-558.
[19] Stroustrup, B. (1994). The Design and Evolution of C++. Addison-Wesley, Reading, MA.
[20] Buhr, P. A., Macdonald, H. I., and Zarnke, C. R. (1992). "Synchronous and asynchronous handling of abnormal events in the μSystem." Software—Practice and Experience, 22(9), 735-776.
[21] Buhr, P. A., and Stroobosscher, R. A. (2001). "μC++ annotated reference manual, version 4.9." Technical report, Department of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada. ftp://plg.uwaterloo.ca/pub/uSystem/uC++.ps.gz.
[22] Koenig, A., and Stroustrup, B. (1990). "Exception handling for C++." Journal of Object-Oriented Programming, 3(2), 16-33.
[23] Cargill, T. A. (1990). "Does C++ really need multiple inheritance?" In USENIX C++ Conference Proceedings, pp. 315-323, San Francisco, CA. USENIX Association.
[24] Mok, W. Y. R. (1997). "Concurrent abnormal event handling mechanisms." Master's thesis, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada. ftp://plg.uwaterloo.ca/pub/uSystem/MokThesis.ps.gz.
[25] Knudsen, J. L. (1984). "Exception handling—A static approach." Software—Practice and Experience, 14(5), 429-449.
[26] Knudsen, J. L. (1987). "Better exception handling in block structured systems." IEEE Software, 4(3), 40-49.
[27] Motet, G., Mapinard, A., and Geoffroy, J. C. (1996). Design of Dependable Ada Software. Prentice-Hall, Englewood Cliffs, NJ.
[28] Tennent, R. D. (1977). "Language design methods based on semantic principles." Acta Informatica, 8(2), 97-112. Reprinted in [33].
[29] Kenah, L. J., Goldenberg, R. E., and Bate, S. F. (1988). VAX/VMS Internals and Data Structures Version 4.4. Digital Press.
[30] Hecht, H., and Hecht, M. (1986). "Software reliability in the systems context." IEEE Transactions on Software Engineering, 12(1), 51-58.
[31] Lang, J., and Stewart, D. B. (1998). "A study of the applicability of existing exception-handling techniques to component-based real-time software technology." ACM Transactions on Programming Languages and Systems, 20(2), 274-301.
[32] Real Time for Java Experts Group (1999). http://www.rtj.org.
[33] Wasserman, A. I. (Ed.) (1980). Tutorial: Programming Language Design. Computer Society Press.
Breaking the Robustness Barrier: Recent Progress on the Design of Robust Multimodal Systems

SHARON OVIATT
Center for Human Computer Communication, Computer Science Department, Oregon Graduate Institute of Science and Technology, 20,000 N.W. Walker Road, Beaverton, Oregon 97006, USA
[email protected]
Abstract

Cumulative evidence now clarifies that a well-designed multimodal system that fuses two or more information sources can be an effective means of reducing recognition uncertainty. Performance advantages have been demonstrated for different modality combinations (speech and pen, speech and lip movements), for varied tasks (map-based simulation, speaker identification), and in different environments (noisy, quiet). Perhaps most importantly, the error suppression achievable with a multimodal system, compared with a unimodal spoken language one, can be in excess of 40%. Recent studies also have revealed that a multimodal system can perform in a more stable way than a unimodal one across varied real-world users (accented versus native speakers) and usage contexts (mobile versus stationary use). This chapter reviews these recent demonstrations of multimodal system robustness, distills general design strategies for optimizing robustness, and discusses future directions in the design of advanced multimodal systems. Finally, implications are discussed for the successful commercialization of promising but error-prone recognition-based technologies during the next decade.
1. Introduction to Multimodal Systems 306
   1.1 Types of Multimodal System 307
   1.2 Motivation for Multimodal System Design 309
   1.3 Long-Term Directions: Multimodal-Multisensor Systems That Model Biosensory Perception 312
2. Robustness Issues in the Design of Recognition-Based Systems 313
   2.1 Recognition Errors in Unimodal Speech Systems 314
   2.2 Research on Suppression of Recognition Errors in Multimodal Systems 316
   2.3 Multimodal Design Strategies for Optimizing Robustness 326
   2.4 Performance Metrics as Forcing Functions for Robustness 329
3. Future Directions: Breaking the Robustness Barrier 331
4. Conclusion 333
Acknowledgments 333
References 333
1. Introduction to Multimodal Systems
Multimodal systems process two or more combined user input modes—such as speech, pen, gaze, manual gestures, and body movements—in a coordinated manner with multimedia system output. This class of systems represents a new direction for computing, and a paradigm shift away from conventional windows-icons-menus-pointing device (WIMP) interfaces. Multimodal interfaces aim to recognize naturally occurring forms of human language and behavior, which incorporate at least one recognition-based technology (e.g., speech, pen, vision). The development of novel multimodal systems has been enabled by the myriad input and output technologies currently becoming available, including new devices and improvements in recognition-based technologies. Multimodal interfaces have developed rapidly during the past decade, with steady progress toward building more general and robust systems [1,2]. Major developments have occurred in the hardware and software needed to support key component technologies incorporated within multimodal systems, and in techniques for integrating parallel input streams. The array of multimodal applications also has expanded rapidly, and currently ranges from map-based and virtual reality systems for simulation and training, to person identification/verification systems for security purposes, to medical and Web-based transaction systems that eventually will transform our daily lives [2-4]. In addition, multimodal systems have diversified to include new modality combinations, including speech and pen input, speech and lip movements, speech and manual gesturing, and gaze tracking and manual input [5-9]. This chapter specifically addresses the central performance issue of multimodal system design techniques for optimizing robustness. It reviews recent demonstrations of multimodal system robustness that surpass that of unimodal recognition systems, and also discusses future directions for optimizing robustness further through the design of advanced multimodal systems. Currently, there are two types of system that are relatively mature within the field of multimodal research, ones capable of processing users' speech and pen-based input, and others based
on speech and lip movements. Both types of system process two recognition-based input modes that are semantically rich, and have received focused research and development attention. As we will learn in later sections, the presence of two semantically rich input modes is an important prerequisite for suppression of recognition errors. The present chapter will focus on a discussion of these two types of multimodal system.
1.1 Types of Multimodal System
Since the appearance of Bolt's "Put That There" [10] demonstration system, which processed speech in parallel with touch-pad pointing, a variety of new multimodal systems have emerged. Most of the early multimodal systems processed simple mouse or touch-pad pointing along with speech input [11-16]. However, contemporary multimodal systems that process two parallel input streams, each of which is capable of conveying rich semantic information, have now been developed. These multimodal systems recognize two natural forms of human language and behavior, for which two recognition-based technologies are incorporated within a more powerful bimodal user interface. To date, systems that combine either speech and pen input [2,17] or speech and lip movements [1,7,18] are the predominant examples of this new class of multimodal system. In both cases, the keyboard and mouse have been abandoned. For speech and pen systems, spoken language sometimes is processed along with complex pen-based gestural input involving hundreds of different symbolic interpretations beyond pointing [2]. For speech and lip movement systems, spoken language is processed along with corresponding human lip movement information during the natural audio-visual experience of spoken interaction. In both cases, considerable work has been directed toward quantitative modeling of the integration and synchronization characteristics of the two input modes being processed, and innovative new time-sensitive architectures have been developed to process these rich forms of patterned input in a robust manner. Recent reviews of the cognitive science underpinnings, natural language processing and integration techniques, and architectural features used in these two types of multimodal system have been summarized elsewhere (see Benoit et al. [1], Oviatt et al. [2], and Oviatt [19]). Multimodal systems designed to recognize speech and pen-based gestures first were prototyped and studied in the early 1990s [20], with the original QuickSet system prototype built in 1994. The QuickSet system is an agent-based, collaborative multimodal system that runs on a hand-held PC [6]. As an example of a multimodal pen/voice command, a user might add three air landing strips to a map by saying "airplane landing strips facing this way (draws arrow NW), facing this way (draws arrow NE), and facing this way (draws arrow SE)." Other systems
of this type were built in the late 1990s, with examples including the Human-centric Word Processor, Portable Voice Assistant, QuickDoc, and MVIEWS [2,21-23]. In most cases, these multimodal systems jointly interpreted speech and pen input based on a frame-based method of information fusion and a late semantic fusion approach, although QuickSet uses a statistically ranked unification process and a hybrid symbolic/statistical architecture [24]. Other very recent speech and pen multimodal systems also have begun to adopt unification-based multimodal fusion and hybrid processing approaches [25,26], although some of these newer systems still are limited to pen-based pointing. In comparison with the multimodal speech and lip movement literature, research and system building on multimodal speech and pen systems has focused more heavily on diversification of applications and near-term commercialization potential. In contrast, research on multimodal speech and lip movements has been driven largely by cognitive science interest in intersensory audio-visual perception, and the coordination of speech output with lip and facial movements [5,7,27-36]. Among the contributions of this literature has been a detailed classification of human lip movements (visemes), and the viseme-phoneme mappings that occur during articulated speech. Actual systems capable of processing combined speech and lip movements have been developed during the 1980s and 1990s, and include the classic work by Petajan [37], Brooke and Petajan [38], and others [39-43]. Additional examples of speech and lip movement systems and applications have been detailed elsewhere [1,7]. The quantitative modeling of synchronized phoneme/viseme patterns that has been central to this multimodal literature recently has been used to build animated characters that generate text-to-speech output with coordinated lip movements for new conversational interfaces [28,44]. In contrast with the multimodal speech and pen literature, which has adopted late integration and hybrid approaches to processing dual information, speech and lip movement systems sometimes have been based on an early feature-level fusion approach. Although very few existing multimodal interfaces currently include adaptive processing, researchers in this area have begun exploring adaptive techniques for improving system robustness during noise [45-47]. This is an important future research direction that will be discussed further in Section 2.2.2. As multimodal interfaces gradually evolve toward supporting more advanced recognition of users' natural activities in context, including the meaningful incorporation of vision technologies, they will begin to support innovative directions in pervasive interface design. New multimodal interfaces also will expand beyond rudimentary bimodal systems to ones that incorporate three or more input modes, qualitatively different modes, and more sophisticated models of multimodal interaction. This trend already has been initiated within biometrics research, which has combined recognition of multiple behavioral input modes (e.g., speech,
handwriting, gesturing, and body movement) with physiological ones (e.g., retinal scans, fingerprints) in an effort to achieve reliable person identification and verification in challenging field conditions [4,48].
1.2 Motivation for Multimodal System Design
The growing interest in multimodal interface design is inspired largely by the goal of supporting more flexible, transparent, and powerfully expressive means of human-computer interaction. Users have a strong preference to interact multimodally in many applications, and their performance is enhanced by it [2]. Multimodal interfaces likewise have the potential to expand computing to more challenging applications, to a broader spectrum of everyday users, and to accommodate more adverse usage conditions such as mobility. As this chapter will detail, multimodal interfaces also can function in a more robust and stable manner than unimodal systems involving a single recognition-based technology (e.g., speech, pen, vision).
1.2.1 Universal Access and Mobility
A major motivation for developing more flexible multimodal interfaces has been their potential to expand the accessibility of computing to more diverse and nonspecialist users. There are large individual differences in people's ability and preference to use different modes of communication, and multimodal interfaces are expected to increase the accessibility of computing for users of different ages, skill levels, cultures, and sensory, motor, and intellectual impairments. In part, an inherently flexible multimodal interface provides people with interaction choices that can be used to circumvent personal limitations. This is becoming increasingly important, since U.S. legislation effective June 2001 now requires that computer interfaces demonstrate accessibility in order to meet federal procurement regulations [49,50]. Such interfaces also permit users to alternate input modes, which can prevent overuse and damage to any individual modality during extended computing tasks (R. Markinson, University of California at San Francisco Medical School, 1993). Another increasingly important advantage of multimodal interfaces is that they can expand the usage contexts in which computing is viable, including natural field settings and during mobility. In particular, they permit users to switch between modes as needed during the changing conditions of mobile use. Since input modes can be complementary along many dimensions, their combination within a multimodal interface provides broader utility across varied and changing usage contexts. For example, a person with a multimodal pen/voice interface may use
hands-free speech input for voice dialing a car cell phone, but switch to pen input to avoid speaking a financial transaction in a public setting.
1.2.2 Error Avoidance and Resolution
Of special relevance to this chapter, multimodal interface design frequently manifests improved error handling, in terms of both error avoidance and graceful recovery from errors [43,51-55]. There are user- and system-centered reasons why multimodal systems facilitate error recovery, when compared with unimodal recognition-based interfaces. First, in a multimodal speech and pen interface, users will select the input mode that they judge less error prone for particular lexical content, which tends to lead to error avoidance [51]. For example, they may prefer speedy speech input, but will switch to pen input to communicate a foreign surname. Secondly, users' language often is simplified when interacting multimodally. In one study, a user added a boat dock to an interactive map by speaking "Place a boat dock on the east, no, west end of Reward Lake." When using multimodal pen/voice input, the same user completed the same action with [draws rectangle] "Add dock." Multimodal utterances generally were documented to be briefer, and to contain fewer disfluencies and complex locative descriptions, compared with a speech-only interface [56]. This can result in substantially reducing the complexity of natural language processing that is needed, thereby reducing recognition errors [57]. Thirdly, users have a strong tendency to switch modes after a system recognition error, which tends to prevent repeat errors and to facilitate error recovery. This error resolution occurs because the confusion matrices differ for any given lexical content for the two recognition technologies involved [52]. In addition to these user-centered reasons for better error avoidance and resolution, there also are system-centered reasons for superior error handling. A well-designed multimodal architecture with two semantically rich input modes can support mutual disambiguation of signals. For example, Fig. 1 illustrates mutual disambiguation from a user's log during an interaction with the QuickSet multimodal system. In this example, the user said "zoom out" and drew a checkmark. Although the lexical phrase "zoom out" only was ranked fourth on the speech n-best list, the checkmark was recognized correctly by the gesture recognizer, and the correct semantic interpretation "zoom out" was recovered successfully (i.e., ranked first) on the final multimodal n-best list. As a result, the map interface
FIG. 1. QuickSet user interface during multimodal command to "zoom out," illustrating mutual disambiguation with the correct speech interpretation pulled up on its n-best list to produce a correct final multimodal interpretation.
zoomed out correctly, and no errors were ever experienced by the user. This recovery of the correct interpretation was achievable within the multimodal architecture because inappropriate signal pieces are discarded or "weeded out" during the unification process, which imposes semantic, temporal, and other constraints on what can be considered "legal" multimodal interpretations [2,6]. In this particular example, the three alternatives ranked higher on the speech n-best list only could have integrated with circle or question mark gestures, which were not present on the n-best gesture list. As a result, these alternatives could not form a legal integration and were discarded. Using the QuickSet architecture, which involves late semantic integration and unification [2,6,24], it has been demonstrated empirically that a multimodal system can support mutual disambiguation of speech and pen input during semantic interpretation [53,58,59]. As a result, such a system yields a higher overall rate of correct utterance interpretations than spoken language processing alone. This performance improvement is the direct result of the disambiguation between signals that can occur in a well-designed multimodal system, because each mode provides context for interpreting the other during integration. To achieve optimal disambiguation of meaning, a multimodal interface ideally should be designed to include complementary input modes, and each mode should provide duplicate functionality such that users can accomplish their goals using either one.
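To make this fusion logic concrete, the Python sketch below enumerates the cross-product of a speech n-best list and a gesture n-best list, discards pairings that violate a table of semantic compatibility constraints, and ranks the surviving interpretations by joint score. It is only a minimal illustration of the late-fusion idea described above, not QuickSet's actual unification algorithm; apart from the "zoom out"/checkmark case of Fig. 1, the candidate commands, probabilities, and compatibility table are hypothetical.

from itertools import product

speech_nbest = [           # (interpretation, probability), best-ranked first
    ("pan left", 0.32),    # the commands above "zoom out" are invented placeholders
    ("zoom in", 0.28),
    ("move out", 0.22),
    ("zoom out", 0.18),    # correct hypothesis, ranked only fourth by the speech recognizer
]
gesture_nbest = [
    ("checkmark", 0.71),
    ("arrow", 0.29),
]

# Which spoken commands may legally unify with which gestures (hypothetical constraints).
compatible = {
    ("zoom out", "checkmark"),
    ("zoom in", "circle"),
    ("pan left", "circle"),
    ("move out", "question mark"),
}

def fuse(speech, gesture):
    """Return legal multimodal interpretations ranked by joint score."""
    legal = [(s, g, sp * gp)
             for (s, sp), (g, gp) in product(speech, gesture)
             if (s, g) in compatible]
    return sorted(legal, key=lambda item: item[2], reverse=True)

for s, g, score in fuse(speech_nbest, gesture_nbest):
    print(s, "+", g, "-> joint score", round(score, 3))
# Only ("zoom out", "checkmark") satisfies the constraints, so the correct speech
# hypothesis is pulled up from rank 4 to rank 1 on the final multimodal list.

In this toy run the three speech hypotheses ranked above "zoom out" have no legal gesture partner, so they are pruned and the correct interpretation rises to the top of the multimodal list, mirroring the behavior described for Fig. 1.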
Parallel error suppression also has been observed in multimodal speech and lip movement systems, although the primary focus has been on demonstrating improvements during noise. During the audio-visual perception of speech and lip movements, enhancement of multimodal speech recognition has been demonstrated over audio-only processing for human listeners [5,30,32,34,60] and also for multimodal speech and lip movement systems [3,39,43,45,61-65]. In this literature, key complementarities have been identified between acoustic speech and corresponding lip movements, which jointly supply unique information for accurately recognizing phonemes. More detailed research findings on the error suppression capabilities and mechanisms of multimodal systems will be reviewed in Section 2.2.
1.3 Long-Term Directions: Multimodal-Multisensor Systems That Model Biosensory Perception
The advent of multimodal interfaces based on recognition of human speech, gaze, gesture, and other natural behavior represents only the beginning of a progression toward computational interfaces capable of relatively human-like sensory perception. Such interfaces eventually will interpret continuous input from a large number of different visual, auditory, tactile, and other input modes, which will be recognized as users engage in everyday activities. The same system will track and incorporate information from multiple sensors on the user's interface and surrounding physical environment in order to support intelligent adaptation to the user, task, and usage environment. This type of advanced multimodal-multisensor interface will be integrated within a flexible architecture in which information from different input modes or sensors can be actively recruited when it is relevant to the accurate interpretation of an ongoing user activity. The flexible collection of information essentially will permit dynamic reconfiguration of future multimodal-multisensor interfaces, especially when key information is incomplete or discordant, or at points when the user's activity changes. Adaptive multimodal-multisensor interfaces that incorporate a broad range of information have the potential to achieve unparalleled robustness, and to support new functionality. They also have the potential to perform flexibly as multifunctional and personalized mobile interfaces. At their most evolved endpoint, this new class of interfaces will become capable of relatively human-like sensory-perceptual capabilities, including self-diagnostic functions. The long-term research direction of designing robust multimodal-multisensor interfaces will be guided in part by biological, neurophysiological, and psychological evidence on the organization of intelligent sensory perception [66]. Coordinated sensory perception in humans and animals is active, purposeful, and able to achieve remarkable robustness through multimodality [5,30,32,34,67,68]. In
fact, robustness generally is achieved by integrating information from many different sources, whether different input modes, or different kinds of data from the same mode (e.g., brightness, color). During fusion of perceptual information, for example, the primary benefits include improved robustness, the extraction of qualitatively new perceptions (e.g., binocular stereo, depth perception), and compensation for perceptual disturbance (e.g., eye movement correction of perturbations induced by head movement). In biological systems, input also is dynamically recruited from relevant sensory neurons in a way that is both sensitive to the organism's present context, and informed by prior experience [69-71]. When orienting to a new stimulus, the collection of input sampled by an organism can be reconfigured abruptly. Since numerous information sources are involved in natural sensory perception, discordant or potentially faulty information can be elegantly resolved by recalibration or temporary suppression of the "offending" sensor [72-74]. In designing future architectures for multimodal interfaces, important insights clearly can be gained from biological and cognitive principles of sensory integration, intersensory perception, and their adaptivity during purposeful activity. As a counterpoint, designing robust multimodal interfaces also requires a computational perspective that is informed by the implementation of past fusion-based systems. Historically, such systems often have involved conservative applications for which errors are considered costly and unacceptable, including biometrics, military, and aviation tasks [4,75,76]. However, fusion-based systems also have been common within the fields of robotics and speech recognition [3,7,18,24,39,43,47,77]. Although discussion of these many disparate literatures is beyond the scope of this chapter, nonetheless examination of the past application of fusion techniques can provide valuable guidance for the design of future multimodal-multisensor interfaces. In the present chapter, discussion will focus on research involving multimodal systems that incorporate speech recognition.
2. Robustness Issues in the Design of Recognition-Based Systems
As described in the Introduction, state-of-the-art multimodal systems now are capable of processing two parallel input streams that each convey rich semantic information. The two predominant types of such a system both incorporate speech processing, with one focusing on multimodal speech and pen input [2,17], and the other multimodal speech and lip movements [1,7,18]. To better understand the comparative robustness issues associated with unimodal versus multimodal system design, Section 2.1 will summarize the primary error handling problems with unimodal recognition of an acoustic speech stream. Although spoken
language systems support a natural and powerfully expressive means of interaction with a computer, it is still the case that high error rates and fragile error handling pose the main interface design challenge that limits the commercial potential of this technology. For comparison, Section 2.2 will review research on the relative robustness of multimodal systems that incorporate speech. Section 2.3 then will summarize multimodal design strategies for optimizing robustness, and Section 2.4 will discuss the performance metrics used as forcing functions for achieving robustness.
2.1 Recognition Errors in Unimodal Speech Systems
Spoken language systems involve recognition-based technology that by nature is probabilistic and therefore subject to misinterpretation. Benchmark error rates reported for speech recognition systems still are too high to support many applications [78], and the time that users spend resolving errors can be substantial and frustrating. Although speech technology often performs adequately for read speech, for adult native speakers of a language, or when speaking under idealized laboratory conditions, current estimates indicate a 20-50% decrease in recognition rates when speech is delivered during natural spontaneous interactions, by a realistic range of diverse speakers (e.g., accented, child), or in natural field environments. Word error rates (WERs) are well known to vary directly with speaking style, such that the more natural the speech delivery the higher the recognition system's WER. In a study by Weintraub et al. [79], speakers' WERs increased from 29% during carefully read dictation, to 38% during a more conversationally read delivery, to 53% during natural spontaneous interactive speech. During spontaneous interaction, speakers typically are engaged in real tasks, and this generates variability in their speech for several reasons. For example, frequent miscommunication during a difficult task can prompt a speaker to hyperarticulate during their repair attempts, which leads to durational and other signal adaptations [80]. Interpersonal tasks or stress also can be associated with fluctuating emotional states, giving rise to pitch adaptations [81]. Basically, the recognition rate degrades whenever a user's speech style departs in some way from the training data upon which a recognizer was developed. Some speech adaptations, like hyperarticulation, can be particularly difficult to process because the signal changes often begin and end very abruptly, and they may only affect part of a longer utterance [80]. In the case of speaker accents, a recognizer can be trained to recognize an individual accent, although it is far more difficult to recognize varied accents successfully (e.g., Asian, European, African, North American), as might be required for an automated public telephone service. In the case of heterogeneous accents, it can be infeasible to specifically tailor an
application to minimize highly confusable error patterns in a way that would assist in supporting robust recognition [53]. The problem of supporting adequate recognition rates for diverse speaker groups is due partly to the need for corpus collection, language modeling, and tailored interface design with different user groups. For example, recent research has estimated that children's speech is subject to recognition error rates that are two-to-five times higher than adult speech [82-85]. The language development literature indicates that there are specific reasons why children's speech is harder to process than that of adults. Not only is it less mature, children's speech production is inherently more variable at any given stage, and it also is changing dynamically as they develop [86,87]. In addition to the many difficulties presented by spontaneous speech, speaker stylistic adaptations, and diverse speaker groups, it is widely recognized that laboratory assessments overestimate the recognition rates that can be supported in natural field settings [88-90]. Field environments typically involve variable noise levels, social interchange, multitasking and interruption of tasks, increased cognitive load and human performance errors, and other sources of stress, which collectively produce 20-50% drops in speech recognition accuracy. In fact, environmental noise currently is viewed as one of the primary obstacles to widespread commercialization of spoken language technology [89,91]. During field use and mobility, there actually are two main problems that contribute to degradation in system accuracy. The first is that noise itself contaminates the speech signal, making it harder to process. Stationary noise sources often can be modeled and processed successfully, when they can be predicted (e.g., road noise in a moving car). However, many noises in natural field environments are nonstationary ones that either change abruptly or involve variable phase-in/phase-out noise as the user moves. Natural field environments also present qualitatively different sources of noise that cannot always be anticipated and modeled. Speech technology has special difficulty handling abrupt onset and nonstationary sources of environmental noise. The second key problem, which has been less well recognized and understood, is that people speak differently under noisy conditions in order to make themselves understood. During noise, speakers have an automatic normalization response called the "Lombard effect" [92], which causes systematic speech modifications that include increased volume, reduced speaking rate, and changes in articulation and pitch [58,91,93-95]. The Lombard effect not only occurs in human adults, but also in young children, primates, quail, and essentially all animals [96-98]. From an interface design standpoint, it is important to realize that the Lombard effect essentially is reflexive. As a result, it has not been possible to eliminate it through instruction or training, or to suppress it selectively when noise is introduced [99].
Although speech originally produced in noise actually is more intelligible to a human listener, a system's recognition accuracy instead degrades when it must process Lombard speech [91]. To summarize, current estimates indicate a 20-50% decrease in recognition rate performance when attempts are made to process natural spontaneous speech, or speech produced by a wider range of diverse speakers in real-world field environments. Unfortunately, this is precisely the kind of realistic speech that must be recognized successfully before widespread commercialization can occur. During the development of modern speech technology there generally has been an overreliance on hidden Markov modeling, and a relatively singular focus on recognizing the phonetic features of acoustic speech. Until very recently, the speech community also has focused quite narrowly on unimodal speech processing. Finally, speech recognition research has depended very heavily on the word error rate as a forcing function for advancing its technology. Alternative perspectives on the successful development of robust speech technology will be discussed throughout this chapter.
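For reference, the word error rate discussed above is conventionally computed as the minimum number of word substitutions, insertions, and deletions needed to turn the recognizer's hypothesis into the reference transcript, divided by the number of reference words. The short Python sketch below implements this standard calculation; the sample reference and hypothesis strings are invented for illustration and loosely echo the boat-dock example from Section 1.2.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = word-level edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("place a boat dock on the west end",
          "place the boat dog on west end"))         # 3 errors / 8 reference words = 0.375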
2.2 Research on Suppression of Recognition Errors in Multimodal Systems
A different approach to resolving the impasse created by recognition errors is to design a more flexible multimodal interface that incorporates speech as one of the input options. In the past, skeptics have claimed that a multimodal system incorporating two error-prone recognition technologies would simply compound errors and yield even greater unreliability. However, as introduced earlier, cumulative data now clarify that a system which fuses two or more input modes can be an effective means of reducing recognition uncertainty, thereby improving robustness [39,43,53,58]. Furthermore, performance advantages have been demonstrated for different modality combinations (speech and pen, speech and lip movements), for varied tasks (map-based simulation, speaker identification), and in different environments (noisy mobile, quiet stationary). Perhaps most importantly, the error suppression achievable with a multimodal system, compared with an acoustic-only speech system, can be very substantial in noisy environments [39,45,58,62,64,65]. Even in environments not degraded by noise, the error suppression in multimodal systems can exceed 40%, compared with a traditional speech system [53]. Recent studies also have revealed that a multimodal architecture can support mutual disambiguation of input signals, which stabilizes the system's performance in a way that can minimize or even close the recognition rate gap between nonnative and native speakers [53], and between mobile and stationary system use [58]. These results indicate that a well-designed multimodal system not only can
perform overall more robustly than a unimodal system, but also can perform in a more reliable way across varied real-world users and usage contexts. In the following sections, research findings that compare the robustness of multimodal speech processing with parallel unimodal speech processing will be summarized. Relevant studies will be reviewed on this topic from the multimodal literature on speech and pen systems and speech and lip movement systems.
2.2.1 Robustness of Multimodal Speech and Pen Systems

The literature on multimodal speech and pen systems recently has demonstrated error suppression ranging between 19 and 41% for speech processed within a multimodal architecture [53,58]. In two recent studies involving over 4600 multimodal commands, these robustness improvements also were documented to be greater for diverse user groups (e.g., accented versus native speakers) and challenging usage contexts (noisy mobile contexts versus quiet stationary use), as introduced above. That is, multimodal speech and pen systems typically show a larger performance advantage precisely for those users and usage contexts in which speech-only systems typically fail. Although recognition rates degrade sharply under the different kinds of conditions discussed in Section 2.1, nonetheless new multimodal pen/voice systems that improve robustness for many of these challenging forms of speech can be designed. Research on multimodal speech and pen systems also has introduced the concept of mutual disambiguation (see Section 1.2 for definition and illustration). This literature has documented that a well-integrated multimodal system that incorporates two semantically rich input modes can support significant levels of mutual disambiguation between incoming signals. That is, a synergistic multimodal system can be designed in which each input mode disambiguates partial or ambiguous information in the other mode during the recognition process. Due to this capacity for mutual disambiguation, the performance of each error-prone mode potentially can be stabilized by the alternate mode whenever challenging usage conditions arise.

2.2.1.1 Accented Speaker Study. In a recent study, eight native speakers of English and eight accented speakers who represented different native languages (e.g., Mandarin, Tamil, Spanish, Turkish, Yoruba) each communicated 100 commands multimodally to the QuickSet system while using a hand-held PC. Sections 1.1 and 1.2 described the basic QuickSet system, and Fig. 2 illustrates its interface. With QuickSet, all participants could use multimodal speech and pen input to complete map-based simulation exercises. During testing, users accomplished a variety of tasks such as adding objects to a map (e.g., "Backburn zone" (draws irregular rectangular area)), moving objects (e.g., "Jeep follow this route"
FIG. 2. Diverse speakers completing commands multimodally using speech and gesture, which often would fail for a speech system due to varied accents.
(draws line)), and so forth. Details of the QuickSet system's signal and language processing, integration methods, and symbolic/statistical hybrid architecture have been summarized elsewhere [2,6,24]. In this study, data were collected on over 2000 multimodal commands, and the system's performance was analyzed for the overall multimodal recognition rate, recognition errors occurring within each system component (i.e., speech versus gesture recognition), and the rate of mutual disambiguation between speech and pen input during the integration process. When examining the rate of mutual disambiguation, all cases were assessed in which one or both recognizers failed to determine the correct lexical interpretation of the users' input, although the correct choice effectively was "retrieved" from lower down on an individual recognizer's n-best list to produce a correct final multimodal interpretation. The rate of mutual disambiguation per subject (MD_j) was calculated as the percentage of all their scorable integrated commands (N_j) in which the rank of the correct lexical choice on the multimodal n-best list (R_i^MM) was lower than the average rank of the correct lexical choice on the speech and gesture n-best lists (R_i^S and R_i^G), minus the number of commands in which the rank of the correct choice on the multimodal n-best list was higher than its average rank on the speech and gesture n-best lists, or

MD_j = (100 / N_j) [ Σ_i I(R_i^MM < (R_i^S + R_i^G)/2) - Σ_i I(R_i^MM > (R_i^S + R_i^G)/2) ]

where I(·) equals 1 when its condition holds and 0 otherwise.
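The calculation above translates directly into code. The Python sketch below computes the per-subject MD rate from the ranks of the correct interpretation on the multimodal, speech, and gesture n-best lists; the sample rank triples are hypothetical, included only to show how pull-ups and push-downs enter the tally.

def md_rate(commands):
    """commands: (multimodal_rank, speech_rank, gesture_rank) of the correct choice per scorable command."""
    pulled_up = sum(1 for mm, s, g in commands if mm < (s + g) / 2)
    pushed_down = sum(1 for mm, s, g in commands if mm > (s + g) / 2)
    return 100.0 * (pulled_up - pushed_down) / len(commands)

sample = [
    (1, 4, 1),   # correct speech choice pulled up from rank 4, as in Fig. 1
    (1, 1, 1),   # both recognizers already correct; contributes nothing
    (1, 1, 3),   # failed gesture interpretation retrieved during integration
    (2, 1, 1),   # integration ranked the correct choice lower: counts against MD
]
print(md_rate(sample))   # (2 - 1) / 4 * 100 = 25.0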
MD was calculated both at the signal processing level (i.e., based on rankings in the speech and gesture signal n-best lists), and at the parse level after natural language processing (i.e., based on the spoken and gestural parse n-best lists). Scorable commands included all those that the system integrated successfully, and that contained the correct lexical information somewhere in the speech, gesture, and multimodal n-best lists. All significant MD results reported in this section (2.2.1) replicated across both signal and parse-level MD. The results of this study confirmed that a multimodal architecture can support significant levels of mutual disambiguation, with one in eight user commands recognized correctly due to mutual disambiguation. Table Ia confirms that the speech recognition rate was much poorer for accented speakers (-9.5%), as would be expected, although their gesture recognition rate averaged slightly but significantly better (+3.4%). Table Ib reveals that the rate of mutual disambiguation (MD) was significantly higher for accented speakers (+15%) than for native speakers of English (+8.5%)—by a substantial 76%. As a result, Table Ia shows that the final multimodal recognition rate for accented speakers no longer differed significantly from the performance of native speakers.

TABLE Ia
DIFFERENCE IN RECOGNITION RATE PERFORMANCE OF ACCENTED SPEAKERS, COMPARED WITH NATIVE ONES, DURING SPEECH, GESTURE, AND MULTIMODAL PROCESSING

Type of language processing    % Performance difference for accented speakers
Speech                         -9.5*
Gesture                        +3.4*
Multimodal                     —

* Significant difference present.

TABLE Ib
MUTUAL DISAMBIGUATION (MD) RATE AND RATIO OF MD INVOLVING SPEECH SIGNAL PULL-UPS FOR NATIVE AND ACCENTED SPEAKERS

Type of MD metric           Native speakers    Accented speakers
Signal MD rate              8.5%               15.0%*
Ratio of speech pull-ups    0.35               0.65*

* Significant difference present.

The main factor responsible
for closing this performance gap between groups was the higher rate of mutual disambiguation for accented speakers. Overall, a 41% reduction was revealed in the total error rate for spoken language processed within the multimodal architecture, compared with spoken language processed as a stand-alone [53]. Table Ib also reveals that speech recognition was the more fragile mode for accented speakers, with two-thirds of all mutual disambiguation involving pull-ups of their failed speech signals. However, the reverse was true for native speakers, with two-thirds of the mutual disambiguation in their case involving retrieval of failed ambiguous gesture signals. These data emphasize that there often are asymmetries during multimodal processing as to which input mode is more fragile in terms of reliable recognition. When one mode is expected to be less reliable, as is speech for accented speakers or during noise, then the most strategic multimodal design approach is to supplement the error-prone mode with an alternative one that can act as a natural complement and stabilizer by promoting mutual disambiguation. Table II reveals that although single-syllable words represented just 40% of users' multimodal commands in these data, they nonetheless accounted for 58.2% of speech recognition errors. Basically, these brief monosyllabic commands were especially error prone because of the minimal amount of acoustic signal information available for the speech recognizer to process. These relatively fragile monosyllabic commands also accounted for 84.6% of the cases in which a failed speech interpretation was pulled up during the mutual disambiguation process, which was significantly greater than the rate observed for multisyllabic utterances [53].

TABLE II
RELATION BETWEEN SPOKEN COMMAND LENGTH, THE PRESENCE OF SPEECH RECOGNITION ERRORS, AND THE PERCENTAGE OF MULTIMODAL COMMANDS WITH MUTUAL DISAMBIGUATION (MD) INVOLVING A SPEECH SIGNAL PULL-UP

                 % Total commands in corpus    % Speech recognition errors    % MD with speech pull-ups
1 syllable       40                            58.2                           84.6*
2-7 syllables    60                            41.8                           15.4

* Significant difference present between monosyllabic and multisyllabic commands.

2.2.1.2 Mobile Study. In a second study, 22 users interacted multimodally using the QuickSet system on a hand-held PC. Each user completed half of 100 commands in a quiet room (42 dB) while stationary, and the other half while mobile in a moderately noisy natural setting (40-60 dB), as illustrated in Fig. 3.
FIG. 3. Mobile user with a hand-held PC in a moderately noisy cafeteria, who is completing commands multimodally that often fail for a speech system.
Testing was replicated across microphones representing opposite quality, including a high-quality, close-talking, noise-canceling microphone, and also a low-quality, built-in microphone without noise cancellation. Over 2600 multimodal utterances were evaluated for the multimodal recognition rate, recognition errors occurring within each component recognizer, and the rate of mutual disambiguation between signals. The results indicated that one in seven utterances were recognized correctly because of mutual disambiguation occurring during multimodal processing, even though one or both of the component recognizers failed to interpret the user's intended meaning. Table IIIa shows that the speech recognition rate was degraded when speakers were mobile in a moderately noisy environment, compared with when they were stationary in a quiet setting (-10%). However, their gesture recognition rate did not decline significantly during mobility, perhaps because pen input involved brief one- to three-stroke gestures. Table IIIb reveals that the rate of mutual disambiguation in the mobile condition (+16%) also averaged substantially higher than the same user's stationary rate (+9.5%). As a result, Table IIIa confirms a significant narrowing of the gap between mobile and stationary recognition rates (to -8.0%) during multimodal processing, compared with spoken language processing alone. In fact, 19-35% relative reductions in the total error rate (for noise-canceling versus built-in microphones, respectively) were observed when speech was processed within the multimodal architecture [58]. Finally, the general pattern of results obtained in this mobile study replicated across opposite types of microphone technology.

TABLE IIIa
DIFFERENCE IN RECOGNITION RATE PERFORMANCE IN MOBILE ENVIRONMENT, COMPARED WITH STATIONARY ONE, FOR SPEECH, GESTURE, AND MULTIMODAL PROCESSING

Type of language processing    % Performance difference when mobile
Speech                         -10.0*
Gesture                        —
Multimodal                     -8.0*

* Significant difference present.

TABLE IIIb
MUTUAL DISAMBIGUATION (MD) RATE AND RATIO OF MD INVOLVING SPEECH SIGNAL PULL-UPS IN STATIONARY AND MOBILE ENVIRONMENTS

Type of MD metric           Stationary    Mobile
Signal MD rate              9.5%          16.0%*
Ratio of speech pull-ups    .26           .34*

* Significant difference present.

When systems must process speech in natural contexts that involve variable levels of noise, and qualitatively different types of noise (e.g., abrupt onset, phase-in/phase-out), the problem of supporting robust recognition is extremely difficult. Even when it is feasible to collect realistic mobile training data and to model many qualitatively different sources of noise, speech processing during abrupt shifts in noise (and the corresponding Lombard adaptations that users make) simply is a challenging problem. As a result, mobile speech processing remains an unsolved problem for traditional speech recognition. In the face of such challenges, a multimodal architecture that supports mutual disambiguation potentially can provide
greater stability and a more viable long-term avenue for managing errors in emerging mobile interfaces. This theme also is central to the performance advantages identified for multimodal speech and lip movement systems, which are described in Section 2.2.2. One unique aspect of this mobile study was its focus on testing during actual use of an implemented multimodal system while users were mobile in a natural field environment. Such performance testing was possible because of the state of development of multimodal speech and pen systems, which now are beginning to transition into commercial applications. It also was possible because of the emerging research infrastructure now becoming available for collecting mobile field data [58]. In addition, this mobile study was unique in its examination of performance during naturalistic noisy conditions, especially the inclusion of nonstationary noise. As a result, the present data provide information on the expected performance advantages of multimodal systems in moderately noisy field settings, with implications for the real-world commercialization of new mobile interfaces. In summary, in both of the studies described in this section, even though one or both of the component recognizers failed to identify users' intended meaning, the architectural constraints imposed by the multimodal system's unification process ruled out incompatible speech and pen signal integrations. These unification constraints effectively pruned recognition errors from the n-best lists of the component recognizers, which resulted in the retrieval of correct lexical information from lower down on their lists, producing a correct final multimodal interpretation. This process suppressed many errors that would have occurred, such that users never experienced them. It also had an especially large impact on reducing the speech recognition errors that otherwise were so prevalent for accented speakers and in noisy environments.
2.2.2 Robustness of Multimodal Speech and Lip Movement Systems

During the natural audio-visual perception of speech, human listeners typically observe a speaker's lip and facial movements while attending to speech. Furthermore, their accurate interpretation of speech is well known to be superior during multimodal speech perception, compared with acoustic-only speech processing [5,30,32,34]. In noisy environments, which include most natural field environments, visual information about a speaker's lip movements can be particularly valuable for the accurate interpretation of speech. However, there also are large individual and cultural differences in the information available in visible lip movements, as well as in people's ability and tendency to lip-read [7]. For example, the hearing impaired, elderly, and nonnative speakers all typically rely
more heavily on visual lip movements when they attend to speech, so for these populations accurate interpretation can depend critically on combined audio-visual processing [100,101]. The cognitive science literature generally has provided a good foundation for understanding many aspects of the design and expected value of multimodal speech and lip movement systems. In many of the multimodal speech and lip movement systems developed during the 1980s and 1990s, error suppression also has been observed [3,37,39,43,45,61-65,102]. This literature has investigated the use of visually derived information about a speaker's lip movements (visemes) to improve recognition of acoustic speech (phonemes). The primary focus of this research has been on demonstrating robustness improvement during the audio-visual processing of speech during noise, compared with acoustic-only speech processing, with demonstrations of a larger boost in robustness as the noise level increases and speech recognition errors rise. Robustness improvements for multimodal speech and lip movement systems that have been reported under noise-free conditions actually have been relatively small when they occur at all, typically with less than a 10% relative error reduction [102]. In fact, sometimes a performance penalty occurs during the audio-visual processing of noise-free speech, largely as a consequence of adopting approaches designed to handle speech in noise [103]. On the other hand, robustness improvements of over 50% relative error reduction frequently have been documented under noisy conditions [39,45,46,61,65,102].

2.2.2.1 Profile of Typical Study. In typical studies exploring performance enhancement in multimodal speech and lip movement systems, researchers have compared different approaches for audio-only, visual-only, and audio-visual system processing. Typically, testing has been done on a limited single-speaker corpus involving read materials such as nonsense words or digits [39,43]. Artificial stationary noise (e.g., white noise) then is added to generate conditions representing a range of different signal-to-noise ratio (SNR) decibel levels, for example, graduated intervals between -5 and +25 dB. Most assessments have been performed on isolated-word speaker-dependent speech systems [39], although more recent studies now are beginning to examine continuous speech recognition as well [45]. The most common goal in these studies has been a basic demonstration of whether word error rates for audio-visual speech processing are lower than those for audio-only and video-only processing, preferably at all levels of additive noise. Frequently, different integration strategies for audio-visual processing also are compared in detail. As described previously, the most common result has been to find the largest enhancements of audio-visual performance at the most degraded noise levels, and modest or no enhancement in a noise-free context. Unlike studies on multimodal speech and pen systems, research on the performance of multimodal speech and lip movement systems has not focused on the
mutual disambiguation of information that can occur between two rich input modes, but rather on the bootstrapping of speech recognition under noisy conditions. In addition, studies conducted in this area have not involved testing with fully implemented systems in actual noisy field settings. They likewise have been limited to testing on stationary noise sources (for a recent exception, see DuPont and Luettin's research [45]), rather than the more realistic and challenging nonstationary sources common during mobile use. Future research in this area will need to include more realistic test conditions before results can be generalized to situations of commercial import. More recent research in this area now is beginning to train systems and evaluate multimodal integration techniques on increasingly large multiparty corpora, and also to develop multimodal audio-visual systems for a variety of potential applications (e.g., speech recognition, speaker recognition, speech event detection) [3,45]. Currently, researchers in this area are striving to develop new integration techniques that can support general robustness advantages across the spectrum of noise conditions, from extremely adverse environments with SNR ranging from 0 to -22 dB, to quiet environments with SNR ranging from 20 to 30 dB. One goal is to develop multimodal integration techniques that yield generally superior robustness in widely varied and potentially changing environment conditions, such as those anticipated during mobile use. A second goal is to demonstrate larger improvements for audio-visual processing over audio-only in noise-free environments, which has been relatively elusive to date [104]. Late-integration fusion (i.e., "decision-level") and hybrid integration techniques, such as those used in multimodal speech and pen systems, generally have become viewed as good avenues for achieving these robustness goals [3,43,45,62,65,102]. Recent work also has begun to focus on audio-visual robustness gains achievable through adaptive processing, in particular various techniques for stream weight estimation [45,63,64]. For example, a recent experiment by Potamianos and Neti [64] of IBM-Watson reported over a 20% relative error reduction based on an n-best stream likelihood dispersion measure. Further work on adaptive multimodal processing is an important research direction in need of additional attention. Issues as basic as determining the key criteria and strategies needed to accomplish intelligent adaptation in natural field settings still are very poorly understood. In general, early attempts to adapt multimodal audio-visual processing based on simple engineering concepts will need to be superseded by empirically validated strategies. For example, automated dynamic weighting of the audio and visual input modes as a function of SNR estimates [46,47] is known to be problematic because it fails to take into account the impact of users' Lombard adaptations (for discussion, see Oviatt's research [105]).
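As a rough illustration of the kind of adaptive, decision-level processing discussed above, the Python sketch below combines audio and visual scores for the same candidate words using stream weights derived from the spread of each recognizer's n-best scores: a stream whose top hypothesis stands well clear of its competitors is treated as more reliable. This is only a simplified sketch in the spirit of dispersion-based weighting, not the published IBM-Watson algorithm; the dispersion measure, words, and scores are invented for illustration.

def dispersion(scores):
    # Average log-likelihood gap between the top hypothesis and the competitors;
    # a larger gap is taken as a crude sign that the stream is confident.
    best = max(scores.values())
    others = sorted(scores.values(), reverse=True)[1:]
    return sum(best - s for s in others) / max(len(others), 1)

def fuse(audio_scores, visual_scores):
    d_a, d_v = dispersion(audio_scores), dispersion(visual_scores)
    w_a = d_a / (d_a + d_v)              # stream weights normalized to sum to 1
    w_v = 1.0 - w_a
    combined = {w: w_a * audio_scores[w] + w_v * visual_scores[w] for w in audio_scores}
    return max(combined, key=combined.get), (w_a, w_v)

audio = {"bet": -11.3, "debt": -11.0, "get": -11.2}   # noisy audio: hypotheses nearly tied
visual = {"bet": -9.0, "debt": -14.0, "get": -15.0}   # visible lip closure strongly favors "bet"

word, (w_a, w_v) = fuse(audio, visual)
print(word, round(w_a, 2), round(w_v, 2))             # "bet" wins; the visual stream dominates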
Like the literature on multimodal speech and pen interaction, research in this area has identified key complementarities between the audio speech signal and corresponding visible speech movements [29,33,106]. For example, place of articulation is difficult to discriminate auditorally for consonants, but easy to distinguish visually from the position of the teeth, tongue, and lips. Natural feature-level complementarities also have been identified between visemes and phonemes for vowel articulation, with vowel rounding better conveyed visually, and vowel height and backness better revealed auditorally [29,33]. Some speech and lip movement systems have developed heuristic rules incorporating information about the relative confusability of different kinds of phonemes within their audio and visual processing components [107]. Future systems that incorporate phoneme-level information of this kind are considered a potentially promising avenue for improving robustness. In particular, research on the misclassification of consonants and vowels by audio-visual systems has emphasized the design recommendation that the visual component be weighted more heavily when discriminating place and manner of articulation, but less heavily when determining voicing [65]. Research by Silsbee and colleagues [65] has indicated that when consonant versus vowel classification tasks are considered separately, although no robustness enhancement occurs for audio-visual processing of consonants during noise-free conditions, an impressive 61% relative error reduction is obtained for vowels [65]. These results underscore the potential value of applying cognitive science findings to the design of future adaptive systems. Finally, like the literature on multimodal speech and pen systems, in this research area brief spoken monosyllables have been associated with larger magnitude robustness gains during audio-visual processing, compared to multisyllabic utterances [108]. This is largely because monosyllables contain relatively impoverished acoustic information, and therefore are subject to higher rates of speech recognition errors. This finding in the speech and lip movement literature basically is parallel to the higher rate of mutual disambiguation reported for monosyllables in the multimodal speech and pen literature [53]. As will be discussed in Section 2.3, this replicated finding suggests that monosyllables may represent one of the targets of opportunity for future multimodal system design.
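One way to read this design recommendation is as a set of feature-dependent stream weights. The fragment below is a hedged, purely illustrative Python sketch of that idea; the weight values and confidence scores are invented placeholders rather than figures from the cited studies.

VISUAL_WEIGHT = {"place": 0.7, "manner": 0.6, "voicing": 0.2}   # hypothetical weights

def combine(feature, audio_score, visual_score):
    # Linear combination of per-stream confidence scores for one phonetic feature.
    w_v = VISUAL_WEIGHT[feature]
    return (1.0 - w_v) * audio_score + w_v * visual_score

# Place of articulation leans on the visual evidence; voicing leans on the acoustics.
print(combine("place", audio_score=0.4, visual_score=0.9))      # approximately 0.75
print(combine("voicing", audio_score=0.8, visual_score=0.5))    # approximately 0.74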
2.3 Multimodal Design Strategies for Optimizing Robustness
From the emerging literature on multimodal system performance, especially the error suppression achievable with such systems, there are several key concepts that surface as important for their design. The following are examples of fertile
research strategies known to be relevant to improving the robustness of future multimodal systems:

• Increase the number of input modes interpreted within the multimodal system. This principle is effective because it supports effective supplementation and disambiguation of partial or conflicting information that may be present in any individual input mode. Current bimodal systems largely are successful due to their elementary fusion of information sources. However, according to this general principle, future multimodal systems could optimize robustness further by combining additional information sources—for example, three or more input modes. How much additional robustness gain can be expected as a function of incorporating additional sources of information is an issue that remains to be evaluated in future research.

• Combine input modes that represent semantically rich information sources. In order to design multimodal systems that support mutual disambiguation, a minimum of two semantically rich input modes is required. Both types of multimodal system discussed in this chapter process two semantically rich input modes, and both have demonstrated enhanced error suppression compared with unimodal processing. In contrast, multimodal systems that combine only one semantically rich input mode (e.g., speech) with a second that is limited in information content (e.g., mouse, touch, or pen input only for selection) cannot support mutual disambiguation. However, even these more primitive multimodal systems can support disambiguation of the rich input mode to some degree by the more limited one. For example, when pointing to select an interface entity or input field, the natural language processing can be constrained to a reduced set of viable interpretations, thereby improving the accuracy of spoken language recognition [109].

• Increase the heterogeneity of input modes combined within the multimodal system. In order to bootstrap the joint potential of two input modes for collecting the relevant information needed to achieve mutual disambiguation of partial or conflicting information during fusion, one strategy is to sample from a broad range of qualitatively different information sources. In the near term, the most likely candidates for new modes to incorporate within multimodal systems involve vision-based recognition technologies. Specific goals and strategies for achieving increased heterogeneity of information, and how successfully they may optimize overall multimodal system robustness, is a topic that needs to be addressed in future research. One specific strategy for achieving heterogeneity is described in the next section.

• Integrate maximally complementary input modes. One goal in the design of multimodal systems is to combine modes into a well-integrated system. If
designed opportunistically, such a system should integrate complementary modalities to yield a highly synergistic blend in which the strengths of each mode can be capitalized upon and used to overcome weaknesses in the other [11]. As discussed earlier, in the multimodal speech and lip movement literature, natural feature-level complementarities already have been identified between visemes and phonemes [29,33]. In multimodal speech and pen research, the main complementarity involves visual-spatial semantic content. Whereas visual-spatial information is uniquely and clearly indicated via pen input, the strong descriptive capabilities of speech are better suited for specifying temporal and other nonspatial information [56,110]. In general, this design approach promotes the philosophy of using modalities to their natural advantage, and it also represents a strategy for combining modes in a manner that can generate mutual disambiguation. In fact, achieving multimodal performance gains of the type described earlier in this chapter is well known to depend in part on successful identification of the unique semantic complementarities of a given pair of input modes. As discussed in Section 2.2.1, when one mode is expected to be less reliable (e.g., speech for accented speakers or during noise), then the most strategic multimodal design approach is to supplement the error-prone mode with a second one that can act as a natural complement and stabilizer in promoting mutual disambiguation. Future research needs to explore asymmetries in the reliability of different input modes, as well as the main complementarities that exist between modes that can be leveraged during multimodal system design.

• Develop multimodal processing techniques that retain information. In addition to the general design strategies outlined above, it also is important to develop multimodal signal processing, language processing, and architectural techniques that retain information and make it available during decision-level fusion. For example, alternative interpretations should not be pruned prematurely from each of the component recognizers' n-best lists. Excessive pruning of n-best list alternatives (i.e., by setting probability estimate thresholds too high) could result in eliminating the information needed for mutual disambiguation to occur. This is because the correct partial information must be present on each recognizer's n-best list in order for the correct final multimodal interpretation to be formed during unification.

The following are research strategies that are known to be relevant for successfully applying multimodal system design to targets of opportunity in which the
greatest enhancement of robustness is likely to be demonstrated over unimodal system design:

• Apply multimodal system design to brief information segments for which robust recognition is known to be unreliable. As outlined in Sections 2.2.1 and 2.2.2, brief segments of information are the most fragile and subject to error during recognition (e.g., monosyllabic acoustic content during speech recognition). They also are selectively improved during multimodal processing in which additional information sources are used to supplement interpretation.

• Apply multimodal system design to challenging user groups and usage environments for which robust recognition is known to be unreliable. When a recognition-based component technology is known to be selectively faulty for a given user group or usage environment, then a multimodal interface can be used to stabilize errors and improve the system's average recognition accuracy. As discussed earlier, accented speakers and noisy mobile environments are more prone to precipitate speech recognition errors. In such cases, a multimodal interface that processes additional information sources can be crucial in disambiguating the error-prone speech signal, sometimes recovering performance to levels that match the accuracy of nonrisk conditions. Further research needs to continue investigating other potential targets of opportunity that may benefit selectively from multimodal processing, including especially complex task applications, error-prone input devices (e.g., laser pointers), and so forth.

In discussing the above strategies, a central theme emerges. Whenever information is too scant or ambiguous to support accurate recognition, a multimodal interface can provide an especially opportune solution to fortify robustness. Furthermore, the key design strategies that contribute to the enhanced robustness of multimodal interfaces are those that add greater breadth and richness to the information sources that are integrated within a given multimodal system. Essentially, the broader the information collection net cast, the greater the likelihood that missing or conflicting information will be resolved, leading to successful disambiguation of user input during the recognition process.
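To make the point about n-best retention concrete, the following Python fragment sketches decision-level fusion over two n-best lists. It is purely illustrative: the hypotheses, scores, compatibility table, and product-of-scores fusion rule are assumptions, not a description of any system cited in this chapter.

    # Illustrative decision-level fusion of two n-best lists. Over-pruning either
    # list can remove the partial hypothesis needed for mutual disambiguation,
    # so no correct joint interpretation can be formed during unification.
    speech_nbest = [("deck", 0.40), ("ditch", 0.35), ("dish", 0.25)]
    gesture_nbest = [("line", 0.70), ("area", 0.30)]

    # Toy semantic compatibility between spoken commands and gestures (assumed).
    compatible = {("ditch", "line"), ("deck", "area"), ("dish", "area")}

    def fuse(nbest_a, nbest_b, threshold=0.0):
        """Prune each list at the threshold, then pick the best compatible pair."""
        a = [(h, p) for h, p in nbest_a if p >= threshold]
        b = [(h, p) for h, p in nbest_b if p >= threshold]
        joint = [((ha, hb), pa * pb)
                 for ha, pa in a for hb, pb in b if (ha, hb) in compatible]
        return max(joint, key=lambda pair: pair[1]) if joint else None

    print(fuse(speech_nbest, gesture_nbest))                 # ('ditch', 'line') wins
    print(fuse(speech_nbest, gesture_nbest, threshold=0.4))  # None: speech list over-pruned

With no pruning, the second-ranked spoken hypothesis is pulled up by the gesture evidence, which is the mutual disambiguation effect described above; with an aggressive pruning threshold, the information needed for that correction is no longer available at fusion time.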
2.4 Performance Metrics as Forcing Functions for Robustness
In the past, the speech community has relied almost exclusively on the assessment of WER to calibrate the performance accuracy of spoken language systems. This metric has served as the basic forcing function for comparing and iterating
spoken language systems. In particular, WER was used throughout the DARPA-funded Speech Grand Challenge research program [78] to compare speech systems at various funded sites. Toward the end of this research program, it was widely acknowledged that although metrics are needed as a forcing function, nonetheless reliance on any single metric can be risky and counterproductive to the promotion of high-quality research and system building. This is because a singular focus on developing technology to meet the demands of any specific metric essentially encourages the research community to adopt a narrow and conservative set of design goals. It also tends to encourage relatively minor iterative algorithmic adaptations during research and system development, rather than a broader and potentially more productive search for innovative solutions to the hardest problems. When innovative or even radically different strategies are required to circumvent a difficult technical barrier, then new performance metrics can act as a stimulus and guide in advancing research in the new direction. Finally, in the case of the speech community's overreliance on WER, one specific adverse consequence was the general disincentive to address many essential user-centered design issues that could have reduced errors and improved error handling in spoken language systems.

During the development of multimodal systems, one focus of early assessments clearly has been on the demonstration of improved robustness over unimodal speech systems. To track this, researchers have calculated an overall multimodal recognition rate, often summarized at the utterance level and accompanied by additional diagnostic information about the performance of the system's two component recognizers. This has provided a global assessment tool for indexing the average level of multimodal system accuracy, as well as the basic information needed for comparative analysis of multimodal versus unimodal system performance. However, as an alternative approach to traditional speech processing, multimodal research also has begun to adopt new and more specialized metrics, such as a given system's rate of mutual disambiguation. This concept has been valuable for assessing the degree of error suppression achievable in multimodal systems. It also has provided a tool for assessing each input mode's ability to disambiguate errors in the other mode. This latter information has assisted in clarifying the relative stability of each mode, and also in establishing how effectively two modes work together to supply the complementary information needed to stabilize system performance. In this respect, the mutual disambiguation metric has significant diagnostic capabilities beyond simply summarizing the average level of system accuracy. As part of exploratory research, the mutual disambiguation metric also is beginning to be used to define in what circumstances a particular input mode is effective
at stabilizing the performance of a more fragile mode. In this sense, it is playing an active role in exploring user-centered design issues relevant to the development of new multimodal systems. It also is elucidating the dynamics of error suppression. In the future, other new metrics that reflect concepts of central importance to the development of emerging multimodal systems will be needed.
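The two metrics discussed in this section can be stated compactly. Word error rate is conventionally computed from an alignment of the recognized and reference word strings, and a mutual disambiguation rate can be expressed as a signed proportion of commands in which fusion improves the rank of the correct interpretation. The second expression is an illustrative formalization consistent with the description above, not necessarily the exact scoring formula used in the studies cited:

    \mathrm{WER} = \frac{S + D + I}{N}

where S, D, and I are the numbers of substituted, deleted, and inserted words and N is the number of words in the reference transcript; and

    \mathrm{MD} = \frac{1}{N_c} \sum_{i=1}^{N_c} \operatorname{sgn}\left( R_i^{\mathrm{uni}} - R_i^{\mathrm{multi}} \right)

where, for each of the N_c scored commands, R_i^multi is the rank of the correct interpretation on the integrated n-best list and R_i^uni is its (average) rank on the component recognizers' n-best lists, so that commands in which fusion pulls the correct interpretation up count positively and commands in which fusion pushes it down count negatively.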
3. Future Directions: Breaking the Robustness Barrier
The computer science community is just beginning to understand how to design innovative, well-integrated, and robust multimodal systems. To date, most multimodal systems remain bimodal, and recognition technologies related to several human senses (e.g., haptics, smell) have yet to be well represented within multimodal interfaces. As with past multimodal systems, the design and development of new types of multimodal system that include such modes will not be achievable through intuition alone. Rather, it will depend on knowledge of the usage and natural integration patterns that typify people's combined use of various input modes. This means that the successful design of new multimodal systems will continue to require guidance from cognitive science on the coordinated human perception and production of natural modalities. In this respect, multimodal systems only can flourish through multidisciplinary cooperation and teamwork among those working on different component technologies. The multimodal research community also could benefit from far more cross-fertilization among researchers representing the main subareas of multimodal expertise, especially those working in the more active areas of speech and pen and speech and lip movement research. Finally, with multimodal research projects and funding expanding in Europe, Japan, and elsewhere, the time is ripe for more international collaboration in this research area. To achieve commercialization and widespread dissemination of multimodal interfaces, more general, robust, and scalable multimodal architectures will be needed, which now are beginning to emerge. Most multimodal systems have been built during the past decade, and they are research-level systems. However, in several cases they now have developed beyond the prototype stage, and are being integrated with other software at academic and federal sites, or are beginning to appear as newly shipped products [2,19]. Future research will need to focus on developing hybrid symbolic/statistical architectures based on large corpora and refined fusion techniques in order to optimize multimodal system robustness. Research also will need to develop new architectures capable of flexibly coordinating numerous multimodal-multisensor system components to support new directions in adaptive processing. To transcend the
robustness barrier, research likewise will need to explore new natural language, dialogue processing, and statistical techniques for optimizing mutual disambiguation among the input modes combined within new classes of multimodal system. As multimodal interfaces gradually progress toward supporting more robust and human-like perception of users' natural activities in context, they will need to expand beyond rudimentary bimodal systems to ones that incorporate three or more input modes. Like biological systems, they should be generalized to include input from qualitatively different and semantically rich information sources. This increase in the number and heterogeneity of input modes can effectively broaden the reach of advanced multimodal systems, and provide them with access to the discriminative information needed to reliably recognize and process users' language, actions, and intentions in a wide array of different situations. Advances of this kind are expected to contribute to a new level of robustness or hybrid vigor in multimodal system performance. This trend already has been initiated within the field of biometrics research, which is combining recognition of multiple behavioral modes with physiological ones to achieve reliable person identification and verification under challenging field conditions. To support increasingly pervasive multimodal interfaces, these combined information sources ideally must include data collected from a wide array of sensors as well as input modes, and from both active and passive forms of user input.

Very few existing multimodal systems that involve speech recognition currently include any adaptive processing. With respect to societal impact, the shift toward adaptive multimodal interfaces is expected to provide significantly enhanced usability for a diverse range of everyday users, including young and old, experienced and inexperienced, able-bodied and disabled. Such interfaces also will be far more personalized and appropriately responsive to the changing contexts induced by mobility than interfaces of the past. With respect to robustness, adaptivity to the user, ongoing task, dialogue, environmental context, and input modes will collectively generate constraints that can greatly improve system reliability. In the future, adaptive multimodal systems will require active tracking of potentially discriminative information, as well as flexible incorporation of additional information sources during the process of fusion and interpretation. In this respect, future multimodal interfaces and architectures will need to be able to engage in flexible reconfiguration, such that specific types of information can be integrated as needed when adverse conditions arise (e.g., noise), or if the confidence estimate for a given interpretation falls too low. The successful design of future adaptive multimodal systems could benefit from a thoughtful examination of the models already provided by biology and cognitive science on intelligent adaptation during perception, as well as from the literature on robotics.
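One simple way to picture the kind of confidence-triggered reconfiguration just described is sketched below. The mode names, confidence scores, weighting rule, and threshold are assumptions chosen for illustration; they do not describe any deployed multimodal architecture.

    # Illustrative adaptive fusion weights: a mode whose confidence estimate falls
    # below a floor (e.g., speech in noise) is dropped and the rest are renormalized.
    def adapt_weights(weights, confidences, floor=0.3):
        """weights and confidences are dicts keyed by mode name, with values in [0, 1]."""
        adjusted = {mode: w * confidences[mode] if confidences[mode] >= floor else 0.0
                    for mode, w in weights.items()}
        total = sum(adjusted.values())
        return {mode: (v / total if total > 0 else 0.0) for mode, v in adjusted.items()}

    weights = {"speech": 0.6, "gesture": 0.3, "vision": 0.1}
    noisy_confidences = {"speech": 0.2, "gesture": 0.9, "vision": 0.8}
    print(adapt_weights(weights, noisy_confidences))
    # Fusion now leans on gesture and vision until the speech signal recovers.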
4. Conclusion
In summary, a well-designed multimodal system that fuses two or more information sources can be an effective means of reducing recognition uncertainty. Performance advantages have been demonstrated for different modality combinations (speech and pen, speech and lip movements), as well as for varied tasks and different environments. Furthermore, the average error suppression achievable with a multimodal system, compared with a unimodal spoken language one, can be very substantial. These findings indicate that promising but error-prone recognition-based technologies are increasingly likely to be embedded within multimodal systems in order to achieve commercial viability during the next decade. Recent research also has demonstrated that multimodal systems can perform more stably for challenging real-world user groups and usage contexts. For this reason, they are expected to play an especially central role in the emergence of mobile interfaces, and in the design of interfaces for every-person universal access. In the long term, adaptive multimodal-multisensor interfaces are viewed as a key avenue for supporting far more pervasive interfaces with entirely new functionality not supported by computing of the past.

ACKNOWLEDGMENTS
I thank the National Science Foundation for their support over the past decade, which has enabled me to pursue basic exploratory research on many aspects of multimodal interaction, interface design, and system development. The preparation of this chapter has been supported by NSF Grant IRI-9530666 and NSF Special Extension for Creativity (SEC) Grant IIS-9530666. This work also has been supported by Contracts DABT63-95-C-007 and N66001-99-D-8503 from DARPA's Information Technology and Information Systems Office, and Grant N00014-99-1-0377 from ONR. I also thank Phil Cohen and others in the Center for Human-Computer Communication for many insightful discussions, and Dana Director, Rachel Coulston, and Kim Tice for expert assistance with manuscript preparation.

REFERENCES
[1] Benoit, C., Martin, J. C., Pelachaud, C., Schomaker, L., and Suhm, B. (2000). "Audio-visual and multimodal speech-based systems." Handbook of Multimodal and Spoken Dialogue Systems: Resources, Terminology and Product Evaluation (D. Gibbon, I. Mertins, and R. Moore, Eds.), pp. 102-203. Kluwer Academic, Boston.
[2] Oviatt, S. L., Cohen, P. R., Wu, L., Vergo, J., Duncan, L., Suhm, B., Bers, J., Holzman, T., Winograd, T., Landay, J., Larson, J., and Ferro, D. (2000). "Designing the user interface for multimodal speech and gesture applications: State-of-the-art systems and research directions." Human Computer Interaction, 15, 4, 263-322. [Reprinted in Human-Computer Interaction in the New Millennium (J. Carroll, Ed.), Chap. 19, pp. 421-456. Addison-Wesley, Reading, MA, 2001.]
[3] Neti, C., Iyengar, G., Potamianos, G., Senior, A., and Maison, B. (2000). "Perceptual interfaces for information interaction: Joint processing of audio and visual information for human-computer interaction." Proceedings of the International Conference on Spoken Language Processing, Beijing, 3, 11-14.
[4] Pankanti, S., Bolle, R. M., and Jain, A. (Eds.) (2000). "Biometrics: The future of identification." Computer, 33, 2, 46-80.
[5] Benoit, C., and Le Goff, B. (1998). "Audio-visual speech synthesis from French text: Eight years of models, designs and evaluation at the ICP." Speech Communication, 26, 117-129.
[6] Cohen, P. R., Johnston, M., McGee, D., Oviatt, S., Pittman, J., Smith, I., Chen, L., and Clow, J. (1997). "Quickset: Multimodal interaction for distributed applications." Proceedings of the Fifth ACM International Multimedia Conference, pp. 31-40. ACM Press, New York.
[7] Stork, D. G., and Hennecke, M. E. (Eds.) (1996). Speechreading by Humans and Machines. Springer-Verlag, New York.
[8] Turk, M., and Robertson, G. (Eds.) (2000). "Perceptual user interfaces." Communications of the ACM (special issue on Perceptual User Interfaces), 43, 3, 32-70.
[9] Zhai, S., Morimoto, C., and Ihde, S. (1999). "Manual and gaze input cascaded (MAGIC) pointing." Proceedings of the Conference on Human Factors in Computing Systems (CHI'99), pp. 246-253. ACM Press, New York.
[10] Bolt, R. A. (1980). "Put-that-there: Voice and gesture at the graphics interface." Computer Graphics, 14, 3, 262-270.
[11] Cohen, P. R., Dalrymple, M., Moran, D. B., Pereira, F. C. N., Sullivan, J. W., Gargan, R. A., Schlossberg, J. L., and Tyler, S. W. (1989). "Synergistic use of direct manipulation and natural language." Proceedings of the Conference on Human Factors in Computing Systems (CHI'89), pp. 227-234. ACM Press, New York. [Reprinted in Readings in Intelligent User Interfaces (Maybury and Wahlster, Eds.), pp. 29-37, Morgan Kaufmann, San Francisco.]
[12] Kobsa, A., Allgayer, J., Reddig, C., Reithinger, N., Schmauks, D., Harbusch, K., and Wahlster, W. (1986). "Combining deictic gestures and natural language for referent identification." Proceedings of the 11th International Conference on Computational Linguistics, Bonn, Germany, pp. 356-361.
[13] Neal, J. G., and Shapiro, S. C. (1991). "Intelligent multimedia interface technology." Intelligent User Interfaces (J. W. Sullivan and S. W. Tyler, Eds.), pp. 11-43. ACM Press, New York.
[14] Seneff, S., Goddeau, D., Pao, C., and Polifroni, J. (1996). "Multimodal discourse modeling in a multi-user multi-domain environment." Proceedings of the International Conference on Spoken Language Processing (T. Bunnell and W. Idsardi, Eds.), Vol. 1, pp. 192-195. University of Delaware and A. I. duPont Institute.
[15] Siroux, J., Guyomard, M., Multon, F., and Remondeau, C. (1995). "Modeling and processing of the oral and tactile activities in the Georal tactile system." Proceedings of the International Conference on Cooperative Multimodal Communication, Theory & Applications. Eindhoven, Netherlands.
[16] Wahlster, W. (1991). "User and discourse models for multimodal communication." Intelligent User Interfaces (J. W. Sullivan and S. W. Tyler, Eds.), Chap. 3, pp. 45-67. ACM Press, New York.
[17] Oviatt, S. L., and Cohen, P. R. (2000). "Multimodal systems that process what comes naturally." Communications of the ACM, 43, 3, 45-53.
[18] Rubin, P., Vatikiotis-Bateson, E., and Benoit, C. (Eds.) (1998). Speech Communication (special issue on audio-visual speech processing), 26, 1-2.
[19] Oviatt, S. L. (2002). "Multimodal interfaces." Handbook of Human-Computer Interaction (J. Jacko and A. Sears, Eds.). Lawrence Erlbaum, Mahwah, NJ.
[20] Oviatt, S. L., Cohen, P. R., Fong, M. W., and Frank, M. P. (1992). "A rapid semi-automatic simulation technique for investigating interactive speech and handwriting." Proceedings of the International Conference on Spoken Language Processing, University of Alberta, Vol. 2, pp. 1351-1354.
[21] Bers, J., Miller, S., and Makhoul, J. (1998). "Designing conversational interfaces with multimodal interaction." DARPA Workshop on Broadcast News Understanding Systems, pp. 319-321.
[22] Cheyer, A. (1998). "MVIEWS: Multimodal tools for the video analyst." Proceedings of the International Conference on Intelligent User Interfaces (IUI'98), pp. 55-62. ACM Press, New York.
[23] Waibel, A., Suhm, B., Vo, M. T., and Yang, J. (1997). "Multimodal interfaces for multimedia information agents." Proceedings of the International Conference on Acoustics, Speech and Signal Processing (IEEE-ICASSP), Vol. 1, pp. 167-170. IEEE Press, Menlo Park, CA.
[24] Wu, L., Oviatt, S., and Cohen, P. (1999). "Multimodal integration: A statistical view." IEEE Transactions on Multimedia, 1, 4, 334-342.
[25] Bangalore, S., and Johnston, M. (2000). "Integrating multimodal language processing with speech recognition." Proceedings of the International Conference on Spoken Language Processing (ICSLP'2000) (B. Yuan, T. Huang, and X. Tang, Eds.), Vol. 2, pp. 126-129. Chinese Friendship, Beijing.
[26] Denecke, M., and Yang, J. (2000). "Partial information in multimodal dialogue." Proceedings of the International Conference on Spoken Language Processing (ICSLP'2000) (B. Yuan, T. Huang, and X. Tang, Eds.), pp. 624-633. Chinese Friendship, Beijing.
[27] Bernstein, L., and Benoit, C. (1996). "For speech perception by humans or machines, three senses are better than one." Proceedings of the International Conference on Spoken Language Processing, 3, 1477-1480.
[28] Cohen, M. M., and Massaro, D. W. (1993). "Modeling coarticulation in synthetic visible speech." Models and Techniques in Computer Animation (N. M. Thalmann and D. Thalmann, Eds.), pp. 139-156. Springer-Verlag, Berlin.
[29] Massaro, D. W., and Stork, D. G. (1998). "Sensory integration and speechreading by humans and machines." American Scientist, 86, 236-244.
[30] McGrath, M., and Summerfield, Q. (1985). "Intermodal timing relations and audio-visual speech recognition by normal-hearing adults." Journal of the Acoustical Society of America, 77, 2, 678-685.
[31] McGurk, H., and MacDonald, J. (1976). "Hearing lips and seeing voices." Nature, 264, 746-748.
[32] McLeod, A., and Summerfield, Q. (1987). "Quantifying the contribution of vision to speech perception in noise." British Journal of Audiology, 21, 131-141.
[33] Robert-Ribes, J., Schwartz, J. L., Lallouache, T., and Escudier, P. (1998). "Complementarity and synergy in bimodal speech: Auditory, visual, and audio-visual identification of French oral vowels in noise." Journal of the Acoustical Society of America, 103, 6, 3677-3689.
[34] Sumby, W. H., and Pollack, I. (1954). "Visual contribution to speech intelligibility in noise." Journal of the Acoustical Society of America, 26, 212-215.
[35] Summerfield, A. Q. (1992). "Lipreading and audio-visual speech perception." Philosophical Transactions of the Royal Society of London, Series B, 335, 71-78.
[36] Vatikiotis-Bateson, E., Munhall, K. G., Hirayama, M., Lee, Y. V., and Terzopoulos, D. (1996). "The dynamics of audiovisual behavior in speech." Speechreading by Humans and Machines: Models, Systems and Applications (D. G. Stork and M. E. Hennecke, Eds.), NATO ASI Series, Series F: Computer and Systems Sciences 150, pp. 221-232. Springer-Verlag, Berlin.
[37] Petajan, E. D. (1984). Automatic Lipreading to Enhance Speech Recognition, Ph.D. thesis, University of Illinois at Urbana-Champaign.
[38] Brooke, N. M., and Petajan, E. D. (1986). "Seeing speech: Investigations into the synthesis and recognition of visible speech movements using automatic image processing and computer graphics." Proceedings of the International Conference on Speech Input and Output: Techniques and Applications, 258, 104-109.
[39] Adjoudani, A., and Benoit, C. (1995). "Audio-visual speech recognition compared across two architectures." Proceedings of the Eurospeech Conference, Madrid, Spain, Vol. 2, pp. 1563-1566.
[40] Bregler, C., and Konig, Y. (1994). "Eigenlips for robust speech recognition." Proceedings of the International Conference on Acoustics Speech and Signal Processing (IEEE-ICASSP), Vol. 2, pp. 669-672.
[41] Goldschen, A. J. (1993). Continuous Automatic Speech Recognition by Lipreading, Ph.D. thesis, Department of Electrical Engineering and Computer Science, George Washington University.
[42] Silsbee, P. L., and Su, Q. (1996). "Audiovisual sensory integration using Hidden Markov Models." Speechreading by Humans and Machines: Models, Systems and Applications (D. G. Stork and M. E. Hennecke, Eds.), NATO ASI Series, Series F: Computer and Systems Sciences 150, pp. 489-504. Springer-Verlag, Berlin.
[43] Tomlinson, M. J., Russell, M. J., and Brooke, N. M. (1996). "Integrating audio and visual information to provide highly robust speech recognition." Proceedings of the International Conference on Acoustics Speech and Signal Processing (IEEE-ICASSP), pp. 821-824.
[44] Cassell, J., Sullivan, J., Prevost, S., and Churchill, E. (Eds.) (2000). Embodied Conversational Agents. MIT Press, Cambridge, MA.
[45] Dupont, S., and Luettin, J. (2000). "Audio-visual speech modeling for continuous speech recognition." IEEE Transactions on Multimedia, 2, 3, 141-151.
[46] Meier, U., Hürst, W., and Duchnowski, P. (1996). "Adaptive bimodal sensor fusion for automatic speechreading." Proceedings of the International Conference on Acoustics, Speech and Signal Processing (IEEE-ICASSP), pp. 833-836. IEEE Press, Menlo Park, CA.
[47] Rogozan, A., and Deléglise, P. (1998). "Adaptive fusion of acoustic and visual sources for automatic speech recognition." Speech Communication, 26, 1-2, 149-161.
[48] Choudhury, T., Clarkson, B., Jebara, T., and Pentland, S. (1999). "Multimodal person recognition using unconstrained audio and video." Proceedings of the 2nd International Conference on Audio- and Video-based Biometric Person Authentication, Washington, DC, pp. 176-181.
[49] Lee, J. (2001). "Retooling products so all can use them." New York Times, June 21.
[50] Jorge, J., Heller, R., and Guedj, R. (Eds.) (2001). Proceedings of the NSF/EC Workshop on Universal Accessibility and Ubiquitous Computing: Providing for the Elderly, Alcacer do Sal, Portugal, 22-25 May. Available at http://immi.inesc.pt/alcacer01/procs/papers-list.html.
[51] Oviatt, S. L., and van Gent, R. (1996). "Error resolution during multimodal human-computer interaction." Proceedings of the International Conference on Spoken Language Processing, Vol. 2, pp. 204-207. University of Delaware Press.
[52] Oviatt, S. L., Bernard, J., and Levow, G. (1998). "Linguistic adaptation during error resolution with spoken and multimodal systems." Language and Speech (special issue on prosody and conversation), 41, 3-4, 419-442.
[53] Oviatt, S. L. (1999). "Mutual disambiguation of recognition errors in a multimodal architecture." Proceedings of the Conference on Human Factors in Computing Systems (CHI'99), pp. 576-583. ACM Press, New York.
[54] Rudnicky, A., and Hauptman, A. (1992). "Multimodal interactions in speech systems." Multimedia Interface Design, Frontier Series (M. Blattner and R. Dannenberg, Eds.), pp. 147-172. ACM Press, New York.
[55] Suhm, B. (1998). Multimodal Interactive Error Recovery for Non-conversational Speech User Interfaces, Ph.D. thesis, Karlsruhe University, Germany.
[56] Oviatt, S. L. (1997). "Multimodal interactive maps: Designing for human performance." Human-Computer Interaction (special issue on multimodal interfaces), 12, 93-129.
[57] Oviatt, S. L., and Kuhn, K. (1998). "Referential features and linguistic indirection in multimodal language." Proceedings of the International Conference on Spoken Language Processing, ASSTA Inc., Sydney, Australia, Vol. 6, pp. 2339-2342.
[58] Oviatt, S. L. (2000). "Multimodal system processing in mobile environments." Proceedings of the Thirteenth Annual ACM Symposium on User Interface Software and Technology (UIST 2000), pp. 21-30. ACM Press, New York.
[59] Oviatt, S. L. (2000). "Taming recognition errors with a multimodal architecture." Communications of the ACM (special issue on conversational interfaces), 43, 9, 45-51.
[60] Erber, N. P. (1975). "Auditory-visual perception of speech." Journal of Speech and Hearing Disorders, 40, 481-492.
[61] Bregler, C., Omohundro, S. M., Shi, J., and Konig, Y. (1996). "Towards a robust speechreading dialog system." Speechreading by Humans and Machines: Models, Systems and Applications (D. G. Stork and M. E. Hennecke, Eds.), NATO ASI Series, Series F: Computer and Systems Sciences 150, pp. 409-423. Springer-Verlag, Berlin.
[62] Brooke, M. (1996). "Using the visual component in automatic speech recognition." Proceedings of the International Conference on Spoken Language Processing, Vol. 3, pp. 1656-1659.
[63] Nakamura, S., Ito, H., and Shikano, K. (2000). "Stream weight optimization of speech and lip image sequence for audio-visual speech recognition." Proceedings of the International Conference on Spoken Language Processing (ICSLP 2000) (B. Yuan, T. Huang, and X. Tang, Eds.), Vol. 3, pp. 20-24. Chinese Friendship Publishers, Beijing.
[64] Potamianos, G., and Neti, C. (2000). "Stream confidence estimation for audiovisual speech recognition." Proceedings of the International Conference on Spoken Language Processing (ICSLP 2000) (B. Yuan, T. Huang, and X. Tang, Eds.), Vol. 3, pp. 746-749. Chinese Friendship Publishers, Beijing.
[65] Silsbee, P. L., and Bovik, A. C. (1996). "Computer lipreading for improved accuracy in automatic speech recognition." IEEE Transactions on Speech and Audio Processing, 4, 5, 337-351.
[66] Murphy, R. R. (1996). "Biological and cognitive foundations of intelligent sensor fusion." IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 26, 1, 42-51.
[67] Lee, D. (1978). "The functions of vision." Modes of Perceiving and Processing Information (H. L. Pick and E. Saltzman, Eds.), pp. 159-170. Wiley, New York.
[68] Pick, H. L., and Saltzman, E. (1978). "Modes of perceiving and processing information." Modes of Perceiving and Processing Information (H. L. Pick, Jr., and E. Saltzman, Eds.), pp. 1-20. Wiley, New York.
[69] Pick, H. (1987). "Information and effects of early perceptual experience." Contemporary Topics in Developmental Psychology (N. Eisenberg, Ed.), pp. 59-76. Wiley, New York.
[70] Stein, B., and Meredith, M. (1993). The Merging of the Senses. MIT Press, Cambridge, MA.
[71] Welch, R. B. (1978). Perceptual Modification: Adapting to Altered Sensory Environments. Academic Press, New York.
[72] Bower, T. G. R. (1974). "The evolution of sensory systems." Perception: Essays in Honor of James J. Gibson (R. B. MacLeod and H. L. Pick, Jr., Eds.), pp. 141-153. Cornell University Press, Ithaca, NY.
[73] Freedman, S. J., and Rekosh, J. H. (1968). "The functional integrity of spatial behavior." The Neuropsychology of Spatially-Oriented Behavior (S. J. Freedman, Ed.), pp. 153-162. Dorsey Press, Homewood, IL.
[74] Lackner, J. R. (1981). "Some aspects of sensory-motor control and adaptation in man." Intersensory Perception and Sensory Integration (R. D. Walk and H. L. Pick, Eds.), pp. 143-173. Plenum, New York.
[75] Hall, D. L. (1992). Mathematical Techniques in Multisensor Data Fusion. Artech House, Boston.
[76] Pavel, M., and Sharma, R. K. (1997). "Model-based sensor fusion for aviation." Proceedings of SPIE, 169-176.
[77] Hager, G. D. (1990). Task-Directed Sensor Fusion and Planning: A Computational Approach. Kluwer Academic, Boston.
[78] Martin, A., Fiscus, J., Fisher, B., Pallet, D., and Przybocki, M. (1997). "System descriptions and performance summary." Proceedings of the Conversational Speech Recognition Workshop/DARPA Hub-5E Evaluation. Morgan Kaufman, San Mateo, CA.
[79] Weintraub, M., Taussig, K., Hunicke, K., and Snodgrass, A. (1997). "Effect of speaking style on LVCSR performance." Proceedings of the Conversational Speech Recognition Workshop/DARPA Hub-5E Evaluation. Morgan Kaufman, San Mateo, CA.
[80] Oviatt, S. L., MacEachern, M., and Levow, G. (1998). "Predicting hyperarticulate speech during human-computer error resolution." Speech Communication, 24, 87-110.
[81] Banse, R., and Scherer, K. (1996). "Acoustic profiles in vocal emotion expression." Journal of Personality and Social Psychology, 70, 3, 614-636.
[82] Aist, G., Chan, P., Huang, X., Jiang, L., Kennedy, R., Latimer, D., Mostow, J., and Yeung, C. (1998). "How effective is unsupervised data collection for children's speech recognition?" Proceedings of the International Conference on Spoken Language Processing, ASSTA Inc., Sydney, Vol. 7, pp. 3171-3174.
[83] Das, S., Nix, D., and Picheny, M. (1998). "Improvements in children's speech recognition performance." Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Vol. 1, pp. 433-436. IEEE Press, Menlo Park, CA.
[84] Potamianos, A., Narayanan, S., and Lee, S. (1997). "Automatic speech recognition for children." European Conference on Speech Communication and Technology, 5, 2371-2374.
[85] Wilpon, J. G., and Jacobsen, C. N. (1996). "A study of speech recognition for children and the elderly." Proceedings of the International Conference on Acoustic, Speech and Signal Processing (ICASSP'96), pp. 349-352.
[86] Lee, S., Potamianos, A., and Narayanan, S. (1997). "Analysis of children's speech: Duration, pitch and formants." European Conference on Speech Communication and Technology, Vol. 1, pp. 473-476.
[87] Yeni-Komshian, G., Kavanaugh, J., and Ferguson, C. (Eds.) (1980). Child Phonology, Vol. I: Production. Academic Press, New York.
[88] Das, S., Bakis, R., Nadas, A., Nahamoo, D., and Picheny, M. (1993). "Influence of background noise and microphone on the performance of the IBM TANGORA speech recognition system." Proceedings of the IEEE International Conference on Acoustic Speech Signal Processing, Vol. 2, pp. 71-74.
[89] Gong, Y. (1995). "Speech recognition in noisy environments." Speech Communication, 16, 261-291.
[90] Lockwood, P., and Boudy, J. (1992). "Experiments with a non-linear spectral subtractor (NSS), Hidden Markov Models and the projection for robust speech recognition in cars." Speech Communication, 11, 2-3, 215-228.
[91] Junqua, J. C. (1993). "The Lombard reflex and its role on human listeners and automatic speech recognizers." Journal of the Acoustical Society of America, 93, 1, 510-524.
[92] Lombard, E. (1911). "Le signe de l'élévation de la voix." Annales des Maladies de l'Oreille, du Larynx, du Nez, du Pharynx, 37, 101-119.
[93] Hanley, T. D., and Steer, M. D. (1949). "Effect of level of distracting noise upon speaking rate, duration and intensity." Journal of Speech and Hearing Disorders, 14, 363-368.
[94] Schulman, R. (1989). "Articulatory dynamics of loud and normal speech." Journal of the Acoustical Society of America, 85, 295-312.
[95] van Summers, W. V., Pisoni, D. B., Bernacki, R. H., Pedlow, R. I., and Stokes, M. A. (1988). "Effects of noise on speech production: Acoustic and perceptual analyses." Journal of the Acoustical Society of America, 84, 917-928.
[96] Potash, L. M. (1972). "A signal detection problem and a possible solution in Japanese quail." Animal Behavior, 20, 192-195.
[97] Sinnott, J. M., Stebbins, W. C., and Moody, D. B. (1975). "Regulation of voice amplitude by the monkey." Journal of the Acoustical Society of America, 58, 412-414.
[98] Siegel, G. M., Pick, H. L., Olsen, M. G., and Sawin, L. (1976). "Auditory feedback in the regulation of vocal intensity of preschool children." Developmental Psychology, 12, 255-261.
[99] Pick, H. L., Siegel, G. M., Fox, P. W., Garber, S. R., and Kearney, J. K. (1989). "Inhibiting the Lombard effect." Journal of the Acoustical Society of America, 85, 2, 894-900.
[100] Fuster-Duran, A. (1996). "Perception of conflicting audio-visual speech: An examination across Spanish and German." Speechreading by Humans and Machines: Models, Systems and Applications (D. G. Stork and M. E. Hennecke, Eds.), NATO ASI Series, Series F: Computer and Systems Sciences 150, pp. 135-143. Springer-Verlag, Berlin.
[101] Massaro, D. W. (1996). "Bimodal speech perception: A progress report." Speechreading by Humans and Machines: Models, Systems and Applications (D. G. Stork and M. E. Hennecke, Eds.), NATO ASI Series, Series F: Computer and Systems Sciences 150, pp. 79-101. Springer-Verlag, Berlin.
[102] Hennecke, M. E., Stork, D. G., and Prasad, K. V. (1996). "Visionary speech: Looking ahead to practical speechreading systems." Speechreading by Humans and Machines: Models, Systems and Applications (D. G. Stork and M. E. Hennecke, Eds.), NATO ASI Series, Series F: Computer and Systems Sciences 150, pp. 331-349. Springer-Verlag, Berlin.
[103] Haton, J. P. (1993). "Automatic recognition in noisy speech." New Advances and Trends in Speech Recognition and Coding. NATO Advanced Study Institute.
[104] Senior, A., Neti, C. V., and Maison, B. (1999). "On the use of visual information for improving audio-based speaker recognition." Proceedings of Auditory-Visual Speech Processing (AVSP), 108-111.
[105] Oviatt, S. L. (2000). "Multimodal signal processing in naturalistic noisy environments." Proceedings of the International Conference on Spoken Language Processing (ICSLP'2000) (B. Yuan, T. Huang, and X. Tang, Eds.), Vol. 2, pp. 696-699. Chinese Friendship Publishers, Beijing.
[106] Summerfield, Q. (1987). "Some preliminaries to a comprehensive account of audio-visual speech perception." Hearing by Eye: The Psychology of Lip-reading (B. Dodd and R. Campbell, Eds.), pp. 3-51. Lawrence Erlbaum, London.
[107] Petajan, E. D. (1987). "An improved automatic lipreading system to enhance speech recognition." Tech. Rep. 11251-871012-11ITM, AT&T Bell Labs.
[108] Iverson, P., Bernstein, L., and Auer, E. (1998). "Modeling the interaction of phonemic intelligibility and lexical structure in audiovisual word recognition." Speech Communication, 26, 1-2, 45-63.
[109] Oviatt, S. L., Cohen, P. R., and Wang, M. Q. (1994). "Toward interface design for human language technology: Modality and structure as determinants of linguistic complexity." Speech Communication, 15, 3-4, 283-300.
[110] Oviatt, S. L., DeAngeli, A., and Kuhn, K. (1997). "Integration and synchronization of input modes during multimodal human-computer interaction." Proceedings of the Conference on Human Factors in Computing Systems (CHI'97), pp. 415-422. ACM Press, New York.
Using Data Mining to Discover the Preferences of Computer Criminals

DONALD E. BROWN AND LOUISE F. GUNDERSON
Department of Systems and Information Engineering
University of Virginia
Olsson 114A, 115 Engineer's Way
Charlottesville, Virginia 22904, USA
[email protected], [email protected]
Abstract

The ability to predict criminal incidents is vital for all types of law enforcement agencies. This ability makes it possible for law enforcement to both protect potential victims and apprehend perpetrators. However, for those in charge of preventing computer attacks, this ability has become even more important. While some responses are possible to these attacks, most of them require that warnings of possible attacks go out in "cyber time." However, it is also imperative that warnings be as specific as possible, so that systems that are not likely to be under attack do not shut off necessary services to their users. This chapter discusses a methodology for data-mining the output from intrusion detection systems to discover the preferences of attackers. These preferences can then be communicated to other systems, which have features similar to these discovered preferences. This approach has two theoretical bases. One is judgment analysis, which comes from the cognitive sciences arena, and the other is data mining and pattern recognition. Judgment analysis is used to construct a mathematical formulation for this decision to choose a specific target. This formulation allows clustering to be used to discover the preferences of the criminals from the data. One problem is posed by the fact that many criminals may have the same preferences or one criminal may have more than one set of preferences; thus, an individual criminal cannot be identified by this method. Instead we refer to the discovered preferences as representing agents. Another problem is that, while all of the agents are operating in the same event space, they may not all be using the same feature set to choose their targets. In order to discover these agents and their preferences, a salience weighting methodology has been developed. This method, when applied to the events caused
by attackers, allows for the discovery of the preferences for the features in the environment used by each of the discovered agents to select a target. Once the target preference of the agents has been discovered, this knowledge can be used to create a system for the prediction of future targets. In order to construct this system, one would use the output of existing intrusion detection systems. This data would be used by automated data-mining software to discover the preferences of the attackers and to warn machines with similar attributes. Because this entire process is automatic, the sites could be warned in "cyber time."
1. Introduction 344
2. The Target Selection Process of Criminals 346
2.1 Rational Choice Theory 346
2.2 Routine Activity Hypothesis 347
2.3 Victim Profiling 348
3. Predictive Modeling of Crime 348
3.1 Previous Models 348
3.2 Multiagent Modeling 350
4. Discovering the Preferences of the Agents 352
4.1 Clustering 352
4.2 Judgment Analysis 353
4.3 Applying the Judgment Analysis Model to Criminal Preference 356
5. Methodology 358
5.1 Cluster-Specific Salience Weighting 358
5.2 Using the Discovered Agents in a Multiagent Model 361
6. Testing with Synthetic Data 364
7. Conclusions 369
References 370

1. Introduction
In the past few years, computer networks, including those that constitute the Internet, have become vitally important to the world economy. While the number of users is difficult to estimate, the number of hosts has grown from 4 in 1969 to 72,398,092 in January 2000 [1]. According to one estimate, worldwide e-commerce generated $132 billion in revenues in 2000 [2]. The increased use of these networks has created a new venue for criminals. In addition, the proliferation of free hacking/cracking software has changed the nature of computer crime from an endeavor that required computer expertise to one that can be practised by a computer novice [3]. For these reasons, the number of computer crimes has increased dramatically. The CERT Coordination Center at Carnegie Mellon has
documented an exponential growth in the number of incidents reported to them in the past decade, from 252 in 1990 to 21,756 in 2000 [4]. For this discussion, the emphasis will be on denial of service (DOS) attacks. A DOS attack is an attempt to prevent legitimate users of a service from using that service. The most common type of attack is one that consumes (scarce) resources. This type of attack may include [5]:

• Consumption of bandwidth, for example, by generating a large number of packets directed at a network.
• Consumption of disk space, for example, by spamming or e-mail bombing.
• Consumption of network connections, for example, by sending requests for a connection with an incorrect (spoofed) IP address.

The nature of this type of attack places some constraints on the techniques that can be used to protect vulnerable systems. First is the speed with which the attack proceeds. This speed requires that warnings of possible attacks go out in "cyber time." Another constraint is that warnings be as specific as possible, so that systems not likely to be under attack do not shut off necessary services to their users. The methodology described in this chapter is based on the identification of the target preferences of the attackers, in order to predict the targets that they will subsequently attack. Fundamentally, this approach develops a model of the criminals' decision-making process. We discover these criminal preferences in much the same way that Internet businesses are discovering customer preferences: by observing and analyzing behavior on the Web. Figure 1 shows the basic components of this preference discovery approach.

FIG. 1. Graphical depiction of the preference discovery approach (the original figure shows criminal incidents arranged along a time axis and across the network topology, mapped into a feature space).

We observe criminal incidents in time and across the network topology. Each of these incidents is
mapped to a feature space that contains attributes of the sites attacked and the type of attack. We cluster these incidents in feature space. More formally, we develop a density estimate for the decision surfaces across feature space. This surface then becomes the basis for modeling the decision behavior of the criminals toward future attacks. Once the target preferences have been discovered, then a model of criminal behavior can be developed. In order to develop a method for discovering target preference, the existing criminological literature on preferences must be examined. This is done in Section 2. Three theories of criminal choice will be discussed: rational choice, routine activity, and victim profiling. Taken together, these theories show that the choice behavior of criminals is nonrandom. In Section 3, some of the types of models, including the multiagent model, used in modeling the prediction of criminal activity are discussed. In Section 4, the theoretical basis for the preference discovery methodology is discussed. In Section 5, the preference discovery methodology is discussed in more detail. Section 6 gives some of the results of the methodology with simulated data.
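Before turning to the criminological background, the Python sketch below makes the mapping from observed incidents to a feature space concrete by encoding each attacked site as a numeric vector. The particular attributes (market share, number of exposed services, firewall presence, attack type) are hypothetical stand-ins for whatever an intrusion detection system and a site inventory would actually supply.

    # Illustrative encoding of observed incidents as feature vectors for the
    # clustering and density-estimation steps described above.
    ATTACK_TYPES = ["syn_flood", "smurf", "mail_bomb"]

    def to_vector(incident):
        """incident: dict describing one observed attack (all field names are assumptions)."""
        return [
            incident["market_share"],                  # proxy for the commercial value of the site
            incident["num_public_services"],           # exposed services on the host
            1.0 if incident["has_firewall"] else 0.0,  # simple indicator of defensive measures
            float(ATTACK_TYPES.index(incident["attack_type"])),
        ]

    incidents = [
        {"market_share": 0.42, "num_public_services": 7, "has_firewall": False,
         "attack_type": "syn_flood"},
        {"market_share": 0.05, "num_public_services": 2, "has_firewall": True,
         "attack_type": "mail_bomb"},
    ]
    vectors = [to_vector(i) for i in incidents]  # input to the clustering step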
2. The Target Selection Process of Criminals

2.1 Rational Choice Theory
The rational choice theory developed by Cornish and Clarke suggests that the choice of a target is based on a rational decision process. This choice is based on a rough cost-benefit analysis of the costs and benefits posed by a specific target [6]. This model has been proposed for the target selection process of shoplifters [7], commercial burglars, and robbers [8]. It has also been extended to cover computer and network crime [9]. Let us take for example the choice to target a series of sites for a denial of service attack. In order to assess the value of these targets, the possible rewards of attacking these targets (the bragging rights of having disrupted a particular site or sites or the political value of the disruption) are balanced against the possible costs (the humiliation of getting caught or of not successfully disrupting the site (perhaps after bragging about one's ability to do so)). However, the criminal has only limited time and incomplete information with which to make this targeting decision [10]. This means that they must use proxies to assess the probability of the rewards or costs for a specific target. For example, let us consider a group that is firmly committed to disrupting the U.S. economy. The value of their reward will be related to the content of the site. For them, disrupting the "Mom's Little Apple Shop" site will result in less value than disrupting the "Big E-Commerce" site. Their proxies for the value of the site will be the "importance" of the site and the size of the market share that uses the site. This implies that the attacker
will have a specific preference for sites with features that indicate a small cost and a large reward. This analysis would also be used in the choice of method, for example, the use of a gun in committing a crime [11]. However, an individual's choices may be limited both by their economic and social situation and by the nature of the crime. For example, consider an individual with poor literacy skills and no high school diploma. This person may have trouble getting legal jobs that can compete with the illegal work available to them. So the educational attributes of the individual will limit their legal choices. However, the properties of the offense will also narrow their set of criminal choices. For example, this individual would have difficulty in committing an act of embezzlement at the local bank, but might have no difficulty in taking up burglary as a profession. Since these attributes will structure the decision-making process of the criminal, they are called choice-structuring properties [12]. In the case of computer crime, the type of tool that can be used will be determined by the choice-structuring property of the attacker's ability. An unskilled attacker would need to use a program written by another person, whereas skilled programmers could write their own attack tools. Each of these attack tools will have a set of site types that it works best with. Therefore, these tools will structure the set of targets that the attacker will choose from. The rational criminal hypothesis implies that criminals will have a strong preference for targets with certain features. These preferences will be determined by the weighted benefits of attacking the target, the cost of attacking the target, and the type of tools they have available for attacking the target.
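This weighted cost-benefit reading of target choice can be sketched in a few lines of Python. The linear scoring form, the particular proxies, and the weights are assumptions introduced only to illustrate the idea; they are not an estimated model of any real attacker.

    # Illustrative rational-choice score: weighted reward proxies minus weighted cost proxies.
    def target_score(site, w_reward=1.0, w_cost=1.0):
        reward = 0.7 * site["importance"] + 0.3 * site["market_share"]  # reward proxies
        cost = 0.6 * site["defenses"] + 0.4 * site["traceability"]      # cost proxies
        return w_reward * reward - w_cost * cost

    sites = {
        "Big E-Commerce": {"importance": 0.9, "market_share": 0.8,
                           "defenses": 0.7, "traceability": 0.6},
        "Mom's Little Apple Shop": {"importance": 0.2, "market_share": 0.1,
                                    "defenses": 0.1, "traceability": 0.2},
    }
    preferred = max(sites, key=lambda name: target_score(sites[name]))
    # For an attacker motivated by economic disruption, the high-value site is
    # preferred despite its stronger defenses.

Changing the weights is one way to represent different choice-structuring properties: an unskilled attacker relying on a borrowed tool might, in effect, weight the cost term far more heavily.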
2.2 Routine Activity Hypothesis
The routine activity hypothesis was developed in an effort to explain the fact that some occupations and settings have disproportionately high victimization rates [13,14]. According to the routine activity hypothesis, a criminal incident requires three things:

1. A motivated offender,
2. A suitable target (or victim), and
3. The absence of a motivated guardian [15].

It has since been demonstrated that some parameters can have a major effect on the probability of crime. For example, students who have dogs, jobs, and extra locks are significantly less likely to suffer a major larceny. On the other hand, students who live in noisy neighborhoods, belong to many organizations, or eat out often are significantly more likely to be victimized [16]. Some places also have higher crime levels. The crime rates near taverns, bars, liquor stores, and bus depots are higher than those in areas farther away [17,18].
While this theory has not been explicitly used for computer crime, let us consider how it would play out for the example discussed above. In this example we consider a group that is firmly committed to disrupting the U.S. economy. This group would be the motivated offender. A site, whose damage would cause major economic disruption, would be a suitable target. This would result in a preference dependent upon the features of the sites, where targets that are more "important" and have a larger market share are more suitable. For example, the "Mom's Little Apple Shop" site might be a less attractive target than the "Big E-Commerce" site. The presence of a motivated guardian would be represented by the presence of a firewall or a quick response to the attack. This would result in a preference for targets that appeared to be relatively undefended. The routine activity hypothesis again implies that criminals will have a strong preference for targets with certain features. These preferences will be determined by their motivation to attack the target, their perception of the suitability of the target, and the absence of a motivated guardian.
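Stated this way, the hypothesis is essentially a conjunction of three conditions, which the short sketch below makes explicit; the numeric thresholds and argument names are hypothetical.

    # Illustrative routine-activity test: a criminal incident becomes likely only when
    # a motivated offender meets a suitable target in the absence of a motivated guardian.
    def incident_likely(offender_motivation, target_suitability, guardian_present,
                        motivation_min=0.5, suitability_min=0.5):
        return (offender_motivation >= motivation_min
                and target_suitability >= suitability_min
                and not guardian_present)

    print(incident_likely(0.9, 0.8, guardian_present=False))  # True: valuable, undefended site
    print(incident_likely(0.9, 0.8, guardian_present=True))   # False: firewall or rapid response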
2.3 Victim Profiling
Criminal profiling is a method of answering the basic question "What kind of person committed this crime?" This may include a psychological assessment, a social assessment, and strategies for interviewing suspects [19]. Criminal profiling is generally used in the case of violent crimes. Victim profiling is one of the methods of criminal profiling. In victim profiling, the question changes from "What are the characteristics of the attacker?" to "What need in the attacker does this particular victim satisfy?" [20]. In the case of victims of violent crime, this can be a complex and time-intensive process. It involves analysis of the physical and lifestyle features of the victim and analysis of the crime scene to determine what features of the victim and/or scene made it an attractive target. In many cases this may not be determinable from the available evidence. However, in the case of computer crime, the characteristics of the attacked site are completely available to the investigator in a way not possible with other types of crime. In fact, most of the characteristics of the victim that the attacker observes are the same characteristics that the investigator can observe. While it is not possible to read the mind of the attacker, the features of the victim are plain to see.
3. Predictive Modeling of Crime

3.1 Previous Models
Recently a number of researchers have begun to expand existing approaches to predictive modeling of criminal activity. This section provides a brief overview of this work and shows its relation to our proposal.
Kelly [21] has explored the relationship between public order crime and more serious crime. This work attempts to discover the strength of the relationship in the "broken windows" hypothesis. The approach uses log-linear models on a lattice. The predictor variables are public order crimes at previous time instances and at specific lattice or areal locations. The response variables are the felony or serious crimes at the same locations. This work is designed to increase understanding and, based on the results, possibly to direct police activities toward public order crimes. This is clearly an important contribution. However, the method is not designed to predict criminal activity in space and time, as is our proposed research.

The work of Rengert [22] is similar in that it is designed to study the emergence of drug markets. His model explores factors of accessibility, susceptibility, and opportunity. As with the work by Kelly, Rengert's model is designed to increase our understanding of drug markets and inform policymakers. Again it is not intended as an operational tool.

Rogerson [23] has developed methods for detecting changes in crime rates between areas. The approach uses methods from statistical process control (cumulative sum statistic in a spatial context) with assumptions of independence between time instances. He is also interested in models of the displacement of crime from one location to another, particularly in response to police actions. This last concern is relevant to the work we propose here. His approach differs from ours in that he uses a priori models of displacement in response to police actions. For example, his models assume a known utility function for criminals and use inputs such as the probability of arrest given the police action. Again these contributors to the criminals' utility function are assumed known.

Olligschlaeger [24] developed an approach to forecasting calls for service using chaotic cellular forecasting. In this approach he organized the data into grid cells with monthly time units. He then used summary statistics on the calls for service in surrounding grid cells to predict calls for service in a center grid cell for the next month. He used a back-propagation neural network to actually perform the prediction at each cell. In tests of the method he showed forecasting accuracy better than that obtained from conventional statistical methods.

Gorr and Olligschlaeger [25] have explored more traditional time series methods for predicting crime in grid cells. In particular they have looked at Holt-Winters exponential smoothing and classical decomposition. These methods inherently look at the past crime data to predict future events. Both this work and the chaotic cellular forecasting work differ from our proposed approach in that they use only past criminal event data in their models. As a result they cannot directly address how changes affect the prediction of crime.

Liu and Brown [26] developed an approach to criminal event prediction using criminal preference discovery. As inputs to the process they take past data of
criminal activity, e.g., breaking and entering. They compute a density estimate in a high-dimensional feature space, where the features consist of all possible attributes relevant to the criminal in selecting his targets. Example features include distance to major highways, distance to schools, income, and type of housing. They then map this density back into geographic space to compute a threat surface, or regions with high probability for future criminal events. Testing shows this method outperforms extrapolation from previous hot spots. However, all of these methods make the underlying assumption that all criminals have the same preference for the features in an environment and that the environment is not changing. In the next section, multiagent modeling is considered. This method explicitly considers that different criminal agents in an environment have different preferences and that the environment may not be stable over time.
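A density estimate of the kind used by Liu and Brown can be sketched as follows. The Gaussian product kernel, the fixed bandwidth, and the four-dimensional feature vectors are simplifying assumptions for illustration, not the specific estimator used in that work.

    import math

    # Illustrative kernel density estimate over past incident feature vectors.
    # Higher values mark regions of feature space preferred by past attackers.
    def kde(point, incidents, bandwidth=1.0):
        total = 0.0
        for x in incidents:
            sq_dist = sum((p - xi) ** 2 for p, xi in zip(point, x))
            total += math.exp(-sq_dist / (2.0 * bandwidth ** 2))
        norm = len(incidents) * (bandwidth * math.sqrt(2.0 * math.pi)) ** len(point)
        return total / norm

    past_incidents = [[0.4, 7.0, 0.0, 0.0], [0.5, 6.0, 0.0, 0.0], [0.1, 2.0, 1.0, 2.0]]
    candidate_sites = [[0.45, 6.5, 0.0, 0.0], [0.05, 1.0, 1.0, 1.0]]
    threat = {tuple(site): kde(site, past_incidents) for site in candidate_sites}
    # Sites whose feature vectors fall in high-density regions would be warned first.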
3.2 Multiagent Modeling
While there are many approaches to simulating human societies, one of the most promising involves the use of multiagent models, also known as distributed artificial intelligence models [27]. In this type of model, agents (defined below) are created in an environment, generally spatial, that contains both other agents and objects with which the agents can interact [28]. While there is no universal definition of an agent, an agent is generally defined as having four properties [29]:

• Autonomy—the agent does not need to be externally directed. In general the agent has a set of rules to generate behaviors.
• Reactivity—the agent can perceive its surroundings and react to them.
• Social ability—the agent can interact with other agents in the model.
• Proactivity—the agent can initiate its own goal-directed behaviors.

There are major advantages to this distributed approach for the modeling of human criminal behavior:

• Criminals do not all have the same target preferences. A multiagent model allows for the construction of heterogeneous agents, with different frequencies of attack and target preferences. The proactivity of an agent allows the agents to interact with the existing targets in their environment.
• Criminals can communicate about methods of attack and possible targets. A multiagent model allows for the simulation of that communication process.
• Most criminal behavior takes place in a geographic setting. Because multiagent models have an environmental (physical) component, they can
explicitly simulate the physical distribution of a problem. While this is less important in computer crime, it is of paramount importance in the modeling of traditional criminal activity.

While multiagent modeling has not yet been used to simulate criminal activity, it has been used in a wide variety of simulations of human behavior. Below is a partial list:

• Simulation of the behavior of recreational users of forest lands [30],
• Simulation of changes in the Anasazi culture in northeast Arizona [31],
• Simulation of the movement of individuals in a city [32], and
• Simulation of the effects of the organization of a fishing society on natural resources [33].

These simulations demonstrate the power of this approach for the modeling of human activity. However, not all types of multiagent models have the same predictive ability. For the model to be accurate, the preferences of the agents must be assessed correctly. One distinction in this type of model is between "weak" and "strong" social simulations [27]. In a "weak" social simulation, the modeler has determined the relevant features in the model and the preferences and behaviors of the agents. While this type of model, if correctly constructed, can yield interesting insights about the behavior of cultures, it cannot be used as a predictive model. In a "strong" social simulation, the agent preferences and behaviors are derived from the preferences and behaviors of the humans in the environment being studied.

A form of the "strong" social simulation is one using a "calibrated agent." Gimblett et al. use these calibrated agents in their work on recreational simulation [34]. They collected survey data from recreational users of the area of study. The results of the surveys were then used to construct calibrated agents that have preferences and behaviors resembling those of the humans in the environment. However, in the case of computer criminals, the use of surveys is not possible. The first problem is that, since most computer criminals are not identified, the population of identified computer criminals would be a biased sample of the entire population. The second problem is that, even if one could find an unbiased population, there is little reason to expect a group of hackers to respond truthfully to a survey. Therefore it is necessary to find a way to discover the agents and their preferences from the event data, so as to correctly calibrate the multiagent model. This discovery method is discussed in the next section.
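A calibrated agent of the kind just described can be represented very compactly. The sketch below is a hypothetical illustration, not the authors' implementation: the class names, feature names, and preference weights are invented, and behavior is driven by preference weights that would, in practice, be estimated from event data rather than assumed.

```python
import random

class CalibratedAgent:
    """A minimal criminal-agent sketch: autonomous, reactive, social, proactive."""

    def __init__(self, name, preferences, attack_rate):
        self.name = name
        self.preferences = preferences      # feature -> (preferred value, salience weight)
        self.attack_rate = attack_rate      # attacks per simulated time step
        self.known_targets = []             # filled by communication with other agents

    def score(self, site):
        """Reactivity: evaluate a target from the cues it presents."""
        return sum(w * (1.0 - abs(site[f] - v)) for f, (v, w) in self.preferences.items())

    def tell(self, other, site):
        """Social ability: share a promising target with another agent."""
        other.known_targets.append(site)

    def act(self, sites):
        """Autonomy and proactivity: pick targets without external direction."""
        candidates = sites + self.known_targets
        ranked = sorted(candidates, key=self.score, reverse=True)
        return ranked[: self.attack_rate]

# Hypothetical usage: two heterogeneous agents in a tiny synthetic environment.
random.seed(0)
sites = [{"size": random.random(), "military": random.random()} for _ in range(20)]
a = CalibratedAgent("A", {"size": (0.9, 2.0)}, attack_rate=2)
b = CalibratedAgent("B", {"military": (0.8, 1.0), "size": (0.2, 0.5)}, attack_rate=1)
a.tell(b, sites[0])
print(a.act(sites), b.act(sites))
```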
4. Discovering the Preferences of the Agents

4.1 Clustering
The data-mining technique proposed for discovering the agents and identifying their target preferences is clustering. Clustering is the practice of grouping objects according to perceived similarities [35,36]. Clustering is generally used when no classes have been defined a priori for the data set [37, p. 287]. Clustering is often used in the analysis of social systems [38]. It has also been used in a wide array of classification problems, in fields as diverse as medicine, market research, archeology, and social services [36, pp. 8,9]. In this discussion the term algorithm will be used for a specific clustering method, while procedure will be used for the entire process, which may include standardization of the variables in the data set, selection of the appropriate number of clusters produced, or other associated manipulations of the data set.

Because clustering is a multiobjective methodology, there is no single clustering procedure that can be regarded as appropriate for most situations. Instead the applicability of a specific clustering procedure must be evaluated by the results that it produces in a specific situation [37, p. 311]. Many clustering algorithms have been created, and each variation has advantages and disadvantages when applied to different types of data or when searching for different cluster "shapes." A partial list of some possible clustering algorithms follows.

1. Methods in which the number of clusters is chosen a priori. In these methods, a criterion for measuring the adequacy of the partitioning of the objects into the selected number of disjoint classes is chosen. Then a set of transforms is selected to allow for the changing of one partition into another partition. The partitions are then modified until none of the transforms will improve the criterion chosen for measuring the adequacy of the partitioning. Some examples of this type of algorithm are k-means algorithms and simulated annealing algorithms [36, pp. 41-45]. A minimal k-means sketch is given after this list.
2. Methods based on mixture models. In these methods the data are considered as coming from a mixture of sources, where each source has a conditional density function [35]. These methods can be used for data sets in which the clusters overlap. Some examples of this type of algorithm include Bayesian classification systems and mode separation [37, pp. 316-318].
3. Hierarchical clustering methods. In hierarchical methods, the number of clusters is not predetermined. Rather, a series of partitions is created, starting with a single cluster containing all of the observations and ending with a single cluster for each observation. This series of partitions can be displayed in two dimensions as a dendrogram. The number of clusters is determined by the use of a heuristic, called a stopping rule.
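As a concrete illustration of the first family (a number of clusters chosen a priori, a partition-adequacy criterion, and transforms that move points between partitions), here is a minimal k-means sketch in Python. The data and the choice k = 3 are hypothetical; this is an illustration, not an implementation used in the study.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means: the adequacy criterion is the within-cluster sum of squares,
    and the 'transform' is reassigning each point to its nearest centroid."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centroid, then nearest-centroid assignment.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    # Final assignment and within-group sum of squares (the adequacy criterion).
    labels = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2).argmin(axis=1)
    wgss = ((X - centroids[labels]) ** 2).sum()
    return labels, centroids, wgss

# Hypothetical data: three groups of simulated events in a two-dimensional feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.05, size=(50, 2)) for loc in (0.2, 0.5, 0.8)])
labels, centroids, wgss = kmeans(X, k=3)
print(np.round(centroids, 2), round(wgss, 3))
```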
For all methods of clustering, the inclusion of irrelevant features can mask the true structure of the data [36, pp. 23-26]. This makes it important to select only the relevant features for determining the number of agents and their preferences. For the problem of identifying attacking agents, at first blush it seems that it would only be necessary to cluster the attacks along some major attributes, like number of pages at a site or political affiliation. However, different criminals can and do place different weights on the same proxies [39]. For clustering algorithms, these preference differences can be expressed as cluster-specific salience weightings, where for each cluster the features have a different salience for the individual [40]. Models that take into account these preference differences between individuals have been developed for recreational fishing [41] and transportation choice [42]. However, these papers use a classification method based on survey data of the individuals involved in the activity to determine their preferences. This is clearly not feasible for criminal activity. The discussion below describes a clustering process designed to discover these types of clusters from existing event data.

For any criminal event, the environment presents a large number of features that can be used directly, or as proxies for other hidden features, and each criminal will select his own feature set. The nature of these target preferences suggests a method for separating them. If a criminal cares about a specific feature, the values of that feature will be constrained, with the tightness of the constraint corresponding to the degree to which he cares about it. However, if he is indifferent to a specific feature, then the values of that feature will be unconstrained. Thus, the distribution of the events caused by a specific criminal will depend on the salience weighting of each feature to that criminal. This is shown graphically in Fig. 2. In this figure, Cluster A represents the attacks of an individual who cares about all three features of potential targets: size of the site, percentage of military business done at the site, and political affiliation of the site (measured on a spectrum from communist to democratic). The cluster formed by his attacks will form a spheroidal cluster in the space formed by these three attributes. Cluster B represents the attacks of an individual who only cares about two of these features: percentage of military business done at the site and political affiliation of the site. Since he does not care about the size of the site, his crimes are uniformly distributed across this feature. This results in a cluster that is cylindrical in the space formed by all three attributes, but is circular in the space formed by the two attributes the criminal cares about.
FIG. 2. Graphical depiction of two preferences (features: size of the site, percentage of military business, and political affiliation).

4.2 Judgment Analysis

One method of assessing human preferences that can be used for this problem is judgment analysis. Judgment analysis is an a posteriori method of assessing how a decision maker formed a judgment [43]. This theory comes from the field
of cognitive psychology and is based on the work of Egon Brunswik, who viewed the decision maker as being embedded in an ecology from which he received cues as to the true state of things [44]. These cues are probabilistically related to the actual state of events. Judgment theory is concerned with the weighting that the individual places on the cues in his environment. One of the major strengths of this theory, in the context of predicting computer crime, is that it does not require that the cognitive process of the criminal be modeled. Rather, the weights that the criminal places on the cues in his environment will be derived from the events caused by that criminal. This theory has also been used to construct a cognitive continuum between analytic and intuitive thought [45]. Brunswik's original theory and its extensions have been used in such domains as meteorological forecasting [46], social welfare judgments [47], the understanding of risk judgments [48], and medical decision making [49].

In judgment analysis, the judgment process is represented by the lens model [43]. To discuss this model, let us consider the simple example of estimating the distance to a child's building block lying on a table. In this model, the actual distance to the block is an environmental (distal) variable (y_e). The observer has a series of observable (proximal) cues (c_i) relating to this distal variable, such as the size of the retinal representation of the block, the differences in the image in the right and left eyes, and the blurring of the image. These cues have a correlation to the actual state (ecological validity). The subject weights the cues and uses a function of these weighted cues to make a judgment as to the true
state (y_s). This cue weighting has a correlation to the relationship of the cues to the actual state (cue utilization validity). The actual achievement (performance) in the judgment task can be used to update the weights placed on the cues in future judgment tasks. This model is described by

    y_s = \sum_{i=1}^{n} w_i x_i,

where y_s is the judgment of the condition of target s, y_e the actual environmental condition of the target, n the total number of cues available to the judgment maker, x_i the value represented by cue i, where i goes from 1 to n, and w_i the weighting of cue i. This model is shown graphically in Fig. 3.

FIG. 3. Lens model (distal environmental variable, proximal cues, subject judgment, achievement).

This model does not capture the motivation of the individual. In the case of crime analysis, if the motivation is not considered, then all of the possible cues available to the attacker must be considered. This significantly increases the difficulty involved in the construction of the model. However, this model can be extended in a way that allows for a smaller subset of cues to be considered. This extension uses the rational choice theory and the routine choice hypothesis discussed above. If these theories are used, then only the cues that could be considered to have a significant effect on the criminal's perception of the risks or the benefits of the crime, the suitability of the target, or the presence of a guardian must be considered. This is a subset significantly smaller than that of all the possible cues. While the computer criminal's venue is different from that
of the types of criminals for whom these models were developed, their decision process can be described using these models. Let us look again at the example of the computer terrorists discussed above. Some of the cues that they might use to determine the "importance" of the site might be:

• The number of hits that the site gets (a measure of its relative importance),
• For a commercial site, the total value of commodities sold at the site (a measure of the economic importance of the site),
• The number of sites that point to this site (a measure of relative importance),
• The type of firewalls employed by the host,
• The type and level of encryption used at the site, and
• The size of the company or government behind the site.

This extension results in the hierarchical judgment design model [43]. In this model, the attackers are using the weighted cues to assess the value of the risk or benefit (value), which can be considered either a first-order judgment or a second-order cue. These second-order cues are then used to make the second-order judgment as to the "best" target. This is shown graphically in Fig. 4.

FIG. 4. Hierarchical lens model (first-order cues, first-order judgment/second-order cues, second-order judgment, outcome, achievement).

4.3 Applying the Judgment Analysis Model to Criminal Preference
Let us consider the above-mentioned group of attackers. They have a number of sites to choose to attack, with their judgment of the value of each site represented by a weighted sum

    y_s = \sum_{j=1}^{m} w_j v_j,

where y_s is the judgment of the condition of target s, m the total number of risks/benefits perceived by the judgment maker, v_j the risk/benefit represented by value j, where j goes from 1 to m, and w_j the weighting of the risk/benefit represented by value j. The perceived risk/benefit of a target is derived from the weighted sum of the cues pertaining to that risk/benefit. This results in

    y_s = \sum_{i=1}^{m} \sum_{j=1}^{n_i} w_i w_j x_{ij},

where n_i is the number of cues for value i, x_{ij} the value of cue j for value i, and w_j the weighting of cue j. If we define w_{ij} = w_i w_j, then this equation can be rewritten as

    y_s = w_{11} x_{11} + w_{12} x_{12} + ... + w_{1 n_1} x_{1 n_1}
        + w_{21} x_{21} + w_{22} x_{22} + ... + w_{2 n_2} x_{2 n_2}
        + ...
        + w_{m1} x_{m1} + w_{m2} x_{m2} + ... + w_{m n_m} x_{m n_m}.
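A minimal numeric sketch of this two-level weighted sum follows. The cue names, cue values, cue weights, and risk/benefit weights are all invented for illustration; they are not taken from the study.

```python
# Hypothetical hierarchical judgment: first-order cues -> second-order values -> judgment.
cues = {
    "benefit_of_attack": {"hits_per_day": 0.9, "inbound_links": 0.7},
    "risk_of_attack":    {"firewall_strength": 0.3, "encryption_level": 0.2},
}
cue_weights = {
    "benefit_of_attack": {"hits_per_day": 0.6, "inbound_links": 0.4},
    "risk_of_attack":    {"firewall_strength": 0.7, "encryption_level": 0.3},
}
value_weights = {"benefit_of_attack": 1.0, "risk_of_attack": -0.8}  # risk lowers the judgment

# First-order judgment / second-order cue: v_i = sum_j w_j * x_ij
values = {i: sum(cue_weights[i][j] * x for j, x in cues[i].items()) for i in cues}

# Second-order judgment: y_s = sum_i w_i * v_i  (equivalently sum over w_ij * x_ij with w_ij = w_i * w_j)
y_s = sum(value_weights[i] * v for i, v in values.items())
print(values, y_s)
```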
The distribution of the values of the cues in the targets available to the criminal must be considered. First, there must be some divergence in the values of an attribute [50]. For example, all computer attackers attack sites that are hosted on computers. Therefore, the fact that a site is on a computer (as opposed to a book, or a sign, or a billboard) gives no information as to the cue weighting of the attacker. All of the cues used in this analysis must have some divergence. Second, the distribution of the values of the cues must be examined. Values that are relatively unusual, i.e., that represent a low-probability event, should carry a higher weight than values that are less unusual [51]. For example, a hacker who prefers to attack sites with Teletubbies would be more unusual than a hacker who prefers to hit stock brokerage sites, simply because there are more sites devoted to
the art of stock brokerage than to the art of the Teletubby. This means that before the analysis of real data can proceed, the data must be adjusted to reflect the prior probabilities.

Given that these two assumptions are met, for features for which the criminal shows no preference w_{ij} = 0, and for features for which the criminal shows a preference w_{ij} may be large. This term then becomes representative of the salience of the feature to the criminal, and is termed the salience weighting of the feature. For an interval feature, as the salience weighting approaches zero, the probability of any value becomes the same as the probability of any other value, so the distribution of the events in the feature space will be uniform. If the feature is categorical, then the events will be uniformly distributed among the categories. For a nonzero salience weighting, the events will be grouped around the maximum preference, with the tightness of the grouping proportional to the strength of the salience. This means that the events caused by a specific criminal will have a smaller variance along the feature axes for which that criminal has a relatively large salience weighting. So, prospect theory suggests that the events caused by a specific criminal should have the following characteristics:

• A relatively small variance along the axes for which the criminal has a relatively large salience weighting, and
• A relatively large variance along the axes for which the criminal has a relatively small salience weighting.

Since each of the criminals can have a different salience weighting for each of the features, it should be possible to discover the preferences of individual criminals.
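This claim is easy to check on simulated data. In the sketch below, the feature names and salience assumptions are hypothetical: a criminal with a strong preference on two features and no preference on a third produces events whose per-feature spread reflects those weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events = 400

# Hypothetical criminal: strong preference on 'military share' and 'political affiliation',
# indifferent to 'site size'. Salient features are drawn tightly around the preferred value;
# the indifferent feature is drawn uniformly.
events = np.column_stack([
    rng.uniform(0, 1, n_events),        # site size: no preference
    rng.normal(0.8, 0.03, n_events),    # military share: strong preference near 0.8
    rng.normal(0.2, 0.03, n_events),    # political affiliation: strong preference near 0.2
])

for name, std in zip(["site size", "military share", "political affiliation"],
                     events.std(axis=0)):
    print(f"{name:>22}: std = {std:.3f}")
# The indifferent feature shows a spread near that of U(0,1) (about 0.29);
# the salient features show a spread near 0.03.
```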
5. Methodology

5.1 Cluster-Specific Salience Weighting

5.1.1 Overview
As mentioned above, the point of the cluster-specific salience weighting (CSSW) methodology is to identify the attacking agents and their preferences. The cue data available are the features of the sites that are attacked. These features can include:

• The type of business done at the site,
• The number of pages at the site,
• The amount of business done at the site, and
• The political affiliations of the site.

The resulting clusters represent the attacking agents, and the salient features represent the preferences of the agents. CSSW is used to identify these clusters and the features salient to those clusters. In order to do this, the software clusters the events in a space defined by all possible features. If any resulting cluster has a variance in all of the features less than a predetermined "cutoff" level, then it is considered to represent an agent and is removed from the data set. The remaining events are clustered in all possible subsets of the features until either no events or no features remain.
5.1.2 Picking a Clustering Method and Stopping Rule
The first problem in constructing the cluster-specific salience weighting is to choose a clustering algorithm. Different clustering algorithms have different properties and problems. For this analysis, the following properties are important:

• The algorithm should not be biased against forming lenticular clusters. This is important because the elongation of the clusters yields valuable information about the cluster-specific salience weight.
• The algorithm should be fast. A data set with many possible features requires many analyses, so the algorithm must be fast.
• The clusters resulting from the algorithm should be independent of the order of the observations.

The need for the resulting clusters to be observation-order independent suggests an agglomerative hierarchical method. However, some agglomerative hierarchical methods, namely centroid clustering and Ward's method, tend to be biased against lenticular shapes [35], so single-linkage clustering (also called hierarchical nearest neighbor), which does not impose any shape on the resulting clusters, was chosen. However, it should be noted that other clustering methods, namely mixture models, could be used in the place of this hierarchical method.

After the selection of the appropriate clustering algorithm, the next problem is the selection of an appropriate number of clusters. Milligan and Cooper tested 30 stopping rules on nonoverlapping clusters [52]. They found that the best stopping criterion was the Calinski and Harabasz index [53]. This stopping rule uses the variance ratio criterion (VRC), which is the ratio of the between-group sum of squares (BGSS) and the within-group sum of squares (WGSS):

    VRC = \frac{BGSS/(k-1)}{WGSS/(n-k)},

where k is the number of clusters and n the number of observations.
The VRC is calculated for increasing numbers of clusters. The first number of clusters for which the VRC shows a local maximum (or at least a rapid rate of increase) is chosen as the appropriate number of clusters.
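A short sketch of this stopping rule is shown below, using single-linkage (nearest-neighbor) hierarchical clustering as in the methodology; the synthetic data and the range of k values tried are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def vrc(X, labels):
    """Variance ratio criterion (Calinski-Harabasz): (BGSS/(k-1)) / (WGSS/(n-k))."""
    n, k = len(X), len(np.unique(labels))
    grand_mean = X.mean(axis=0)
    bgss = sum((labels == c).sum() * ((X[labels == c].mean(axis=0) - grand_mean) ** 2).sum()
               for c in np.unique(labels))
    wgss = sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
               for c in np.unique(labels))
    return (bgss / (k - 1)) / (wgss / (n - k))

# Hypothetical events from three agents with different preferred feature values.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 0.03, size=(100, 2)) for m in (0.2, 0.5, 0.8)])

Z = linkage(X, method="single")              # single-linkage (hierarchical nearest neighbor)
for k in range(2, 7):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, round(vrc(X, labels), 1))
# The first local maximum of the VRC suggests the number of clusters (here, 3).
```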
5.1.3 Implementing the CSSW Methodology
Below is a brief description of the CSSW methodology. This method could be employed with many different clustering methods and stopping rules.

1. A cutoff variance (v) is chosen for all dimensions, where the number of dimensions is n.
2. A cutoff number (s) is chosen for the smallest number of points in a cluster.
3. A cutoff number (m) is chosen for the number of local maxima to be tested.
4. The observations are clustered in all dimensions, and the VRC is calculated for all possible numbers of clusters.
5. The first local maximum is chosen.
6. The within-cluster variance is calculated for each cluster with more than s points for all of the dimensions.
7. If a cluster is identified for which the variance is less than v for all n variables, this cluster is identified and removed from the data set.
8. If no such cluster is identified, then the next local maximum is investigated, until the number of maxima reaches m. All identified clusters are removed from the data set.
9. The remaining data are clustered in all possible subsets of n − 1 variables.
10. The process is repeated until the number of events is less than the smallest number of points allowed in a cluster or there are no remaining features to be tested.

This method is shown graphically below; a code sketch follows the figures. Figure 5 shows the events caused by three agents: Agent A has a preference in x1, x2, and x3; Agent B has a preference in x1 and x2; and Agent C has a preference in x2 and x3. If the events are clustered in x1, x2, and x3, the cluster that contains the events caused by Agent A can be removed. Then the remaining events are clustered in x1 and x2. The cluster that contains the events caused by Agent B can be removed (see Fig. 6). Then the remaining events are clustered in x1 and x3, but no cluster can be removed (see Fig. 7). Finally, the remaining events are clustered in x2 and x3. The cluster that contains the events caused by Agent C can be removed (see Fig. 8). This simple example shows how the CSSW can be used to separate the clusters and to determine the feature weighting for each of them.
FIG. 5. Results of first clustering (x1, volume of business; x2, value of highest priced item for sale; x3, distance from New York).

FIG. 6. Results of second clustering.

FIG. 7. Results of third clustering.

FIG. 8. Results of final clustering.
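The procedure described above can be sketched compactly in Python. This is an illustration under assumed settings (a standard-deviation cutoff in place of the variance cutoff, a maximum of the VRC instead of its first local maximum, and invented agent preferences), not the authors' implementation.

```python
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import calinski_harabasz_score

def best_k(X, k_max=6):
    """Cluster with single linkage and pick k by the variance ratio criterion."""
    Z = linkage(X, method="single")
    scores = []
    for k in range(2, k_max + 1):
        labels = fcluster(Z, t=k, criterion="maxclust")
        scores.append((calinski_harabasz_score(X, labels), labels))
    return max(scores, key=lambda t: t[0])[1]   # simplification of the "first local maximum" rule

def cssw(events, v=0.1, s=150, k_max=6):
    """Peel off low-spread clusters over successively smaller feature subsets.
    v is interpreted here as a standard-deviation cutoff (an assumption)."""
    agents, remaining = [], events.copy()
    n_features = events.shape[1]
    for size in range(n_features, 1, -1):
        for subset in combinations(range(n_features), size):
            cols = list(subset)
            found = True
            while found and len(remaining) >= max(s, k_max + 1):
                found = False
                labels = best_k(remaining[:, cols], k_max)
                for c in np.unique(labels):
                    members = remaining[labels == c]
                    if len(members) >= s and (members[:, cols].std(axis=0) < v).all():
                        agents.append((subset, np.round(members.mean(axis=0), 2)))
                        remaining = remaining[labels != c]
                        found = True
                        break
    return agents

# Hypothetical events: one agent with preferences in all three features,
# one with preferences in the first two features only.
rng = np.random.default_rng(3)
agent_a = np.column_stack([rng.normal(0.2, 0.03, 400),
                           rng.normal(0.8, 0.03, 400),
                           rng.normal(0.5, 0.03, 400)])
agent_b = np.column_stack([rng.normal(0.7, 0.03, 400),
                           rng.normal(0.3, 0.03, 400),
                           rng.uniform(0, 1, 400)])
events = np.vstack([agent_a, agent_b])
for subset, center in cssw(events):
    print("salient feature subset:", subset, "cluster centre:", center)
```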
5.2 Using the Discovered Agents in a Multiagent Model
In order to construct a multiagent model of the Internet, a simulation of the Internet must first be created. In order to create this simulation, the features that will be considered are chosen. As mentioned above, some features could be political affiliation of the site, size of the site, or type of business done at the site. Since it is clearly impossible to model the entire Internet, a subset of sites of interest to the modeler could be chosen. Each of the sites is identified by the value of the features of the site and a label, which is the address of the particular site. An interesting characteristic of the Internet is that multiple sites will have different addresses, but the same (or very similar) vectors of features. Then the agents and their preferences must be created. It is possible, if the modeler has no attack data, to create the agents a priori, by considering the
preferences of an imaginary attacker. However, for this case we assume that the modeler has previous attack data. The feature vectors of the sites that have been attacked will be extracted, and these feature vectors will be clustered using the CSSW method described above. This will result in the discovery of the number of agents and their preferences (or lack of them). Once the agents and their preferences have been identified, a new round of attacks can be simulated. This direct simulation gives the user the chance to experiment with potential changes, and to see the effects in a synthetic Internet. This is shown graphically in Fig. 9.

FIG. 9. Multiagent model (an attack database of events, feature vectors x1, ..., xp, and addresses feeding the derived agents 1 through n).

However, this methodology can also be used to create a protective system. Systems administrators do have some options open to them after an attack is started. One option is to shorten the "wait" time on a SYN connection. This decreases the severity of the attack, but it also decreases the accessibility of the system to clients, particularly those with slow servers. A more draconian approach is to limit the number of connections available. This will protect the server, but will limit, or stop, accessibility to clients. In both of these cases, hardening the system against a denial of service attack is expensive in terms of service to clients. This means that this hardening should only be used when it is needed. However, it has been very difficult to determine who the target of a denial of service attack will be, even after it is underway.

The approach to determining an attacker's preferences could be used to create an automated data-mining model that would interact with the security systems (firewalls, intrusion detection systems, etc.) of vulnerable computer systems. The proposed system would work in the following way:

• The attacker launches the first wave of a denial of service attack.
• The attacked machines send information about the attack (type of tool used, features of the site attacked, other relevant features) to the data-mining system.
• This system then uses the clustering methodology to automatically discover the features of interest to the attackers and their preferences in these features.
• The system then sends a warning to machines with similar attributes.
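As a rough sketch of the direct simulation described above, each discovered agent can draw its next targets from the synthetic site list according to its preferences. The feature names, preference values, and the shape of the attack-probability function below are all assumptions made for illustration.

```python
import random

# Hypothetical synthetic Internet: each site is a feature vector plus an address label.
random.seed(0)
sites = [{"address": f"site-{i}", "size": random.random(),
          "military": random.random(), "affiliation": random.random()}
         for i in range(500)]

# Derived agents as a CSSW-style analysis might report them: preferred value and
# spread per feature (None = no preference). Values here are invented.
agents = {
    "A": {"military": (0.9, 0.03), "affiliation": (0.1, 0.03), "size": None},
    "B": {"size": (0.8, 0.03), "military": None, "affiliation": None},
}

def attack_probability(agent, site):
    """Sites close to every salient preferred value are the most likely targets.
    The 10x spread window is an arbitrary design choice for the sketch."""
    p = 1.0
    for feature, pref in agent.items():
        if pref is not None:
            mean, spread = pref
            p *= max(0.0, 1.0 - abs(site[feature] - mean) / (10 * spread))
    return p

def simulate_round(agent, n_attacks=5):
    weights = [attack_probability(agent, s) for s in sites]
    return random.choices(sites, weights=weights, k=n_attacks)

for name, prefs in agents.items():
    print(name, [s["address"] for s in simulate_round(prefs)])
```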
Because this entire process is automatic, the sites could be warned to start hardening in "cyber time." This would allow sites to avoid unnecessary hardening, while still providing for fast hardening when similar sites are being attacked. A schematic of this system is shown in Fig. 10.
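The warning loop could look roughly like the following sketch. The feature names, the salience cutoff, and the warning radius are assumptions, and the simple centroid-and-spread profile stands in for the full CSSW analysis.

```python
import numpy as np

def warn_similar_sites(attacked_features, all_sites, radius=0.1):
    """After the first wave, estimate the attackers' preferred feature profile and
    flag unattacked sites whose features fall close to it."""
    # Stand-in for the CSSW step: the profile is the centroid and spread
    # of the attacked sites' feature vectors.
    profile_mean = attacked_features.mean(axis=0)
    profile_std = attacked_features.std(axis=0)

    warnings = []
    for address, features in all_sites.items():
        salient = profile_std < 0.2          # assumed salience cutoff: ignore indifferent features
        if salient.any() and (np.abs(features - profile_mean)[salient] < radius).all():
            warnings.append(address)
    return warnings

# Hypothetical first wave: attacked sites share a high 'military share' and low 'affiliation'.
rng = np.random.default_rng(4)
attacked = np.column_stack([rng.uniform(0, 1, 30),          # size: indifferent
                            rng.normal(0.9, 0.03, 30),      # military share
                            rng.normal(0.1, 0.03, 30)])     # affiliation
fleet = {f"host-{i}": rng.uniform(0, 1, 3) for i in range(200)}
fleet["host-target"] = np.array([0.5, 0.91, 0.12])          # similar to the attacked profile

print(warn_similar_sites(attacked, fleet))
```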
FIG. 10. Attack warning system (the data-mining system receives notice of an attack and information about the features of the attacked systems, and sends warnings of potential attacks to systems with similar features).

6. Testing with Synthetic Data

In order to construct an adequate synthetic data set, the nature of the Internet must be considered. One way of considering the Internet is as a series of available
sites. Each of the sites is identified by the vector of feature values that describes the site and a label, which is the address of the particular site. In order to simulate this, a 5-dimensional space was constructed. Each of the dimensions has a value from 0 to 1. The space was binned into intervals of 0.1 units, resulting in 10^5 possible hypercubes. Each hypercube was considered to have 100 different sites, which would correspond to 100 sites with the vector of feature values represented by that particular hypercube. For example, let us take a hypercube that represented sites with:

• 90-100% commercial,
• 90-100% U.S. government business,
• Between 25 and 50 pages at the site,
• An annual income of between $1,000,000 and $2,000,000, and
• 90-100% capitalist affiliation.

We would assume that many different companies would meet that description, providing at least 100 different sites that would correspond to this particular hypercube.

Each attack model contained four agents, each with its own set of preferences for the five features. The preferences were N(0.125, 0.03), N(0.375, 0.03), N(0.625, 0.03), N(0.875, 0.03), and U(0,1), where N(μ, σ) indicates a Gaussian distribution with mean μ and standard deviation σ, and U(0,1) indicates a uniform distribution between 0 and 1. The preferences were chosen randomly. An initial attack was simulated by creating 400 events per agent, with the events distributed according to the preferences of that agent. A site label was chosen randomly from among the 100 possible values. The attack was analyzed with the cluster-specific salience weighting software to discover the preferences of the attackers. The software settings were:

• The cutoff variance (v) = 0.2,
• The cutoff number (s) = 150, and
• The cutoff number (m) = 5.

These discovered preferences were used to create a prediction for the targets of a second attack. In order to determine the accuracy of the prediction, a second attack was simulated in the same way as the first attack. All of the events in the predicted attack and the second attack were classified by the hypercube in which the event occurred. The Pearson's correlation coefficient was calculated for the hypercubes. Note that only the hypercubes in which an attack actually occurred were considered.

This methodology was compared with a naive method of predicting future attacks, in which it was assumed that a site that had been attacked would be
attacked again. In this methodology the sites in the hypercubes were also considered, since without an analysis of the features, only the actual site attacked can be identified. Both of these procedures were performed on 105 models with increasing numbers of features for which the agents have no preference (U(0,1)). In all cases the CSSW forecast was more accurate than the naive forecast. Figure 11 shows the results of the CSSW and naive prediction for increasing levels of uncertainty. The y-axis shows the Pearson's correlation coefficient, and the x-axis shows the number of features for which one of the four agents shows no preference, out of a possible 20 features.

FIG. 11. Results of the CSSW and naive prediction (Pearson's correlation coefficient vs. number of features, out of 20, for which the agents had no preference).

Some of the variation in the results is due to the distribution of the "no preference" features between the agents. To take an extreme example, in the models where the agents had no preference for 9 of the 20 possible features, the highest correlation coefficients ranged from 0.9 to -0.36. In order to demonstrate some of the factors in the performance of the methodology, those two models will be discussed in more detail.

The preference structure for the case in which the methodology performed well is presented in Table I. The preference structure that the methodology discovered is shown in Table II. The agents in Table II are shown in the order in which they were discovered.

TABLE I
ACTUAL PREFERENCE STRUCTURE FOR A CASE WITH 9 OUT OF THE 20 POSSIBLE PREFERENCES IDENTIFIED, WHERE THE METHODOLOGY PERFORMED WELL

Agent   x1              x2              x3              x4              x5
1       U(0,1)          N(0.875,0.03)   U(0,1)          U(0,1)          N(0.625,0.03)
2       N(0.375,0.03)   N(0.875,0.03)   N(0.375,0.03)   N(0.375,0.03)   N(0.125,0.03)
3       U(0,1)          U(0,1)          N(0.875,0.03)   U(0,1)          N(0.875,0.03)
4       N(0.875,0.03)   U(0,1)          U(0,1)          N(0.375,0.03)   U(0,1)

Note. U(0,1) entries are features for which the agent has no preference.

TABLE II
DERIVED PREFERENCE STRUCTURE FOR A CASE WITH 9 OUT OF THE 20 POSSIBLE PREFERENCES IDENTIFIED, WHERE THE METHODOLOGY PERFORMED WELL

Derived agent   n     x1    x2    x3    x4    x5    std x1  std x2  std x3  std x4  std x5
A               300   0.37  0.88  0.37  0.38  0.13  0.03    0.03    0.03    0.03    0.03
B               313   0.50  0.47  0.88  0.48  0.88  0.30    0.28    0.03    0.28    0.03
C               386   0.87  0.60  0.48  0.41  0.51  0.05    0.29    0.27    0.16    0.25
D               196   0.33  0.87  0.50  0.50  0.63  0.20    0.03    0.26    0.27    0.03
E               5     0.29  0.95  0.55  0.34  0.60  0.27    0.01    0.18    0.25    0.03

Note. Derived agents are listed in the order of identification.

The discovery process went in this fashion:

• In the first clustering, agent 2, which had preferences in all 5 dimensions, was identified as derived agent A, and its events were removed from the data set.
• No agents were identified in subsequent clusterings until the remaining data were clustered in the subspace formed by the features x3 and x5. Then
agent 3, which had preferences in x3 and x5, was identified as derived agent B, and its events were removed from the data set.
• In the clustering done on the subspace formed by the features x1 and x4, agent 4 was identified, but because the cutoff variance was set at 0.2, portions of the remaining agent 1 were included in agent 4. These events were identified as belonging to derived agent C and were removed from the data set.
• Because some of the events caused by agent 4 were removed incorrectly, the final clustering on the subspace formed by the features x2 and x5 identified two agents, one on either side of the gap formed by the removal of the events forming derived agent C. These points became derived agents D and E.

Even though the methodology did not recover the actual preferences of the agents precisely, the final derived agent model was close enough to perform well as a predictive model. There were two major factors in this performance. First, since one of the agents had preferences for all features, it was removed early in the process. Second, the discovered agents had preferences for the same features as the actual agents.
The lowest correlation coefficient, -0.36, came from the following model of agent preferences. Its preference structure is presented in Table III. The preference structure that the methodology discovered is shown in Table IV.

TABLE III
ACTUAL PREFERENCE STRUCTURE FOR A CASE WITH 9 OUT OF THE 20 POSSIBLE PREFERENCES IDENTIFIED, WHERE THE METHODOLOGY PERFORMED POORLY

Agent   x1              x2              x3              x4              x5
1       U(0,1)          N(0.125,0.03)   U(0,1)          U(0,1)          U(0,1)
2       U(0,1)          N(0.625,0.03)   N(0.375,0.03)   N(0.875,0.03)   U(0,1)
3       N(0.875,0.03)   U(0,1)          N(0.625,0.03)   N(0.375,0.03)   U(0,1)
4       N(0.125,0.03)   N(0.125,0.03)   N(0.375,0.03)   U(0,1)          N(0.125,0.03)

Note. U(0,1) entries are features for which the agent has no preference.

TABLE IV
DERIVED PREFERENCE STRUCTURE FOR A CASE WITH 9 OUT OF THE 20 POSSIBLE PREFERENCES IDENTIFIED, WHERE THE METHODOLOGY PERFORMED POORLY

Derived agent   n     x1    x2    x3    x4    x5    std x1  std x2  std x3  std x4  std x5
A               300   0.52  0.63  0.37  0.88  0.54  0.28    0.03    0.03    0.03    0.30
B               397   0.84  0.40  0.63  0.41  0.49  0.10    0.30    0.09    0.17    0.28
C               502   0.23  0.12  0.40  0.52  0.27  0.22    0.03    0.20    0.29    0.26
D               1     0.14  0.07  0.71  0.07  0.66  NA      NA      NA      NA      NA

Note. Derived agents are listed in the order of identification.

For this model, the discovery process went in this fashion:

• No agents were identified until the data were clustered in the subspace formed by the features x2, x3, and x4. Then agent 2, which had preferences in x2, x3, and x4, was correctly identified as derived agent A, and its events were removed from the data set.
• In the clustering done on the subspace formed by the features x1, x3, and x4, agent 3 was identified as derived agent B; however, almost 100 of the events belonging to agents 1 and 4, which overlapped agent 3, were erroneously included in the derived agent.
• In the final clustering on the subspace formed by the features x2 and x3, the remaining events of agents 1 and 4 were identified as derived agent C.

In this case the confusion of agents 1 and 4 caused the prediction to be less than optimal, since agent 1 had a preference for only one feature while agent 4 had a preference for 4 of the 5 features. This confusion was caused by the creation of
one large cluster, rather than two smaller clusters. Using a different clustering algorithm, such as a density-based clustering methodology, might have prevented this.
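For concreteness, the evaluation loop described in this section can be sketched as follows. This is a simplified stand-in: the agent preferences are invented, the "prediction" is simply another draw from the agents' preference profiles rather than the output of the CSSW software, and only the hypercube binning and the comparison against the naive forecast follow the description above.

```python
import numpy as np

rng = np.random.default_rng(5)
BINS = 10                      # 0.1-unit bins per dimension, 10^5 hypercubes in 5-D

def simulate_attack(agent_prefs, n_events=400):
    """Draw events per agent: N(mu, 0.03) for preferred features, U(0,1) otherwise."""
    cols = [rng.uniform(0, 1, n_events) if mu is None else
            np.clip(rng.normal(mu, 0.03, n_events), 0, 1 - 1e-9)
            for mu in agent_prefs]
    return np.column_stack(cols)

def hypercube_counts(events):
    """Count events per occupied hypercube, keyed by the 5-D bin index."""
    bins = np.floor(events * BINS).astype(int)
    keys, counts = np.unique(bins, axis=0, return_counts=True)
    return {tuple(k): c for k, c in zip(keys, counts)}

# Four hypothetical agents (None = no preference, i.e., U(0,1)).
agents = [(0.125, 0.875, None, 0.375, None),
          (0.375, 0.875, 0.375, 0.375, 0.125),
          (None, None, 0.875, None, 0.875),
          (0.875, None, None, 0.375, None)]

first = np.vstack([simulate_attack(a) for a in agents])
second = np.vstack([simulate_attack(a) for a in agents])
predicted = np.vstack([simulate_attack(a) for a in agents])   # stand-in for the CSSW-based prediction

obs, pred, naive = hypercube_counts(second), hypercube_counts(predicted), hypercube_counts(first)
cubes = list(obs)                                             # only cubes where an attack occurred
y = np.array([obs[c] for c in cubes])
r_pred = np.corrcoef(y, [pred.get(c, 0) for c in cubes])[0, 1]
r_naive = np.corrcoef(y, [naive.get(c, 0) for c in cubes])[0, 1]
print(f"preference-based correlation {r_pred:.2f} vs naive {r_naive:.2f}")
```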
7. Conclusions

The new threats posed by the growth of computer networks require new tools for the protection of networks and the discovery of the attacker. This problem is made more complex by the ever-changing nature of the cyber environment. The use of a multiagent model allows attackers with different target preferences in a changing environment to be simulated. However, this modeling technique requires that the preferences of the agents be derived from the events. This paper has shown a methodology that can be used to discover these attack preferences.

A series of simulated attack sequences was used to test this methodology. In each of these tests, an attack by a specific set of agents with specific preferences was simulated. The CSSW methodology was used to analyze the attacks and discover the number of attacking agents and their preferences. These discovered preferences were used to make a prediction about the features that would be attacked if the same agents attacked again. In order to test this prediction, a second attack by the same agents was simulated. The correlation between the prediction and the second simulated attack was calculated. This prediction was compared to a naive prediction, in which the sites that were attacked in the first simulated attack were predicted to be attacked in the second simulated attack. These tests were done 105 times with different agent preferences.

The results of these tests showed that this method had the best predictive ability in cases where all of the agents had preferences for all of the features. Its predictive ability decreased with the number of features for which the agents had no preference (U(0,1)). Tests where even one agent had preferences in all of the features had greater predictive power than tests in which all of the agents had at least one feature for which they had no preference. This strongly suggests that further work needs to be done on the choice of clustering algorithms, which may yield better results. However, in all cases the CSSW prediction had a higher correlation to the second attack than the naive prediction.

While this methodology still needs to be tested with real attack data, it provides a significant improvement over the naive forecast in a simulated environment. Since this method can be automated, it can be run in cyber time, which allows for the creation of an automated warning system. However, the current implementation of the methodology represents only a "first cut." The impact of different clustering methodologies, and methods for determining the appropriate settings for the parameters of the method, remain to be explored. Even so, this methodology
does represent a significant improvement on current methodologies for the determination of the targets of a denial of service attack.

REFERENCES
[1] Tehan, R. (2000). "RL30435: Internet and e-commerce statistics: What they mean and where to find them on the web." Congressional Research Service Issue Brief, available at http://www.cnie.org/nle/st-36.html.
[2] ActivMedia (June 2000). "Real numbers behind 'net profits 2000," available at http://www.activmediaresearch.com/real_numbers_2000.html.
[3] Dion, D. (2001). "Script kiddies and packet monkeys—The new generation of 'hackers,'" available at http://www.sans.org/infosecFAQ/hackers/monkeys.htm.
[4] CERT (2001). "CERT/CC Statistics 1988-2001," available at http://www.cert.org/stats/cert_stats.html.
[5] CERT (2001). "CERT Coordination Center: Denial of service attacks," available at http://www.cert.org/tech_tips/denial_of_service.html.
[6] Clarke, R. V., and Cornish, D. B. (1985). "Modeling offenders' decisions: A framework for research and policy." Crime and Justice: An Annual Review of Research (M. Tonry and N. Morris, Eds.), Vol. 6. Univ. of Chicago Press, Chicago.
[7] Carroll, J., and Weaver, F. (1986). "Shoplifters' perceptions of crime opportunities: A process tracing study." The Reasoning Criminal: Rational Choice Perspectives on Offending (D. B. Cornish and R. V. Clarke, Eds.), pp. 129-155. Springer-Verlag, Berlin.
[8] Walsh, D. (1986). "Victim selection procedures among economic criminals: The rational choice perspective." The Reasoning Criminal: Rational Choice Perspectives on Offending (D. B. Cornish and R. V. Clarke, Eds.), pp. 129-155. Springer-Verlag, Berlin.
[9] Westland, C. (1996). "A rational choice model of computer and network crime." International Journal of Electronic Commerce, 1, 2, 109-126.
[10] Johnson, E., and Payne, J. (1986). "The decision to commit a crime: An information-processing analysis." The Reasoning Criminal: Rational Choice Perspectives on Offending (D. B. Cornish and R. V. Clarke, Eds.), pp. 170-185. Springer-Verlag, Berlin.
[11] Harding, R. W. (1993). "Gun use in crime, rational choice, and social learning theory." Routine Activity and Rational Choice (R. V. Clarke and M. Felson, Eds.). Transaction, New Brunswick.
[12] Cornish, D. B., and Clarke, R. V. (1987). "Understanding crime displacement: An application of rational choice theory." Criminology, 25, 4, 933-947.
[13] Block, R., Felson, M., and Block, C. (1985). "Crime victimization rates for incumbents of 246 occupations." Sociology and Social Research, 69, 442-451.
[14] Felson, M. (1987). "Routine activities and crime prevention in the developing metropolis." Criminology, 25, 4, 911-931.
[15] Cohen, L. E., and Felson, M. (1979). "Social change and crime rate trends: A routine activities approach." American Sociological Review, 44, 588-607.
[16] Mustaine, E. E., and Tewksbury, R. (1998). "Predicting risks of larceny theft victimization: A routine activity analysis using refined lifestyle measures." Criminology, 36, 4, 829-857.
[17] Roncek, D. W., and Maier, P. A. (1991). "Bars, blocks, and crimes revisited: Linking the theory of routine activities to the empiricism of 'hot spots.'" Criminology, 29, 4, 725-753.
[18] Sherman, L. W., Gartin, P. R., and Buerger, M. E. (1991). "Hot spots of predatory crime: Routine activities and the criminology of place." Criminology, 27, 1, 27-55.
[19] Holmes, R. M., and Holmes, S. T. (1996). Profiling Violent Crimes: An Investigative Tool. Sage, Thousand Oaks, CA.
[20] Turvey, B. (1999). Criminal Profiling: An Introduction to Behavioral Evidence Analysis. Academic Press, San Diego.
[21] Kelly, W. (1999). "A GIS analysis of the relationship between public order and more serious crime." Predictive Modeling Cluster Conference, National Institute of Justice, Crime Mapping Center, March 8, 1999.
[22] Rengert, G. (1999). "Evaluation of drug markets: An analysis of the geography of susceptibility, accessibility, opportunity and police action." Predictive Modeling Cluster Conference, National Institute of Justice, Crime Mapping Center, March 8.
[23] Rogerson, P. A. (1999). "Detection and prediction of geographical changes in crime rates." Predictive Modeling Cluster Conference, National Institute of Justice, Crime Mapping Center, March 8.
[24] Olligschlaeger, A. (1997). "Artificial neural networks and crime mapping." Crime Mapping and Crime Prevention (D. Weisburd and T. McEwen, Eds.). Criminal Justice Press, Monsey, NY.
[25] Gorr, W., and Olligschlaeger, A. (1999). "Crime hot spot forecasting: Modeling and comparative evaluation." Predictive Modeling Cluster Conference, National Institute of Justice, Crime Mapping Center, March 8.
[26] Liu, H., and Brown, D. E. (1999). "Spatial temporal event prediction: A new model." IEEE International Conference on Systems, Man, and Cybernetics, San Diego, CA, October.
[27] Kohler, T. (2000). "Putting social sciences together again: An introduction to the volume." Dynamics in Human and Primate Societies: Agent-Based Modeling of Social and Spatial Processes (T. A. Kohler and G. J. Gumerman, Eds.), pp. 1-19. Oxford Univ. Press, New York.
[28] Ferber, J. (1998). Multi-agent Systems: An Introduction to Distributed Artificial Intelligence. Addison-Wesley, Reading, MA.
[29] Gilbert, N., and Troitzsch, K. G. (1999). Simulation for the Social Scientist. Open Univ. Press, Buckingham.
[30] Gimblett, H. R., Merton, R. T., and Itami, R. M. (1998). "A complex systems approach to simulating human behavior using synthetic landscapes." Complexity International, 6. Available at http://life.csu.edu.au/complex/ci/vol6/gimblett/gimblett.html.
[31] Dean, J. S., Gumerman, G. J., Epstein, J. M., Axtell, R. L., Swedlund, A. C., Parker, M. T., and McCarroll, S. (2000). "Understanding Anasazi culture change through agent-based modeling." Dynamics in Human and Primate Societies: Agent-Based Modeling of Social and Spatial Processes (T. A. Kohler and G. J. Gumerman, Eds.). Oxford Univ. Press, New York.
[32] Penn, A., and Dalton, N. (1994). "The architecture of society: Stochastic simulation of urban movement." Simulating Societies (N. Gilbert and J. Doran, Eds.). Univ. College London Press, London.
[33] Bousquet, F., Cambier, C., Mullon, C., Morand, P., and Quensiere, J. (1994). "Simulating fishermen's society." Simulating Societies (N. Gilbert and J. Doran, Eds.). UCL Press, London.
[34] Gimblett, H. R., Durnota, B., and Itami, R. M. (1996). "Some practical issues in designing and calibrating artificial human-recreator agents in GIS-based simulated worlds: Workshop on comparing reactive (ALife-ish) and intentional agents." Complexity International, 3. Available at http://www.csu.edu.au/ci/vol03/bdurmota/bdurmota.html.
[35] Everitt, B. S. (1993). Cluster Analysis. Edward Arnold, London.
[36] Gordon, A. D. (1999). Classification, 2nd ed. Chapman & Hall, New York.
[37] Ripley, B. D. (1996). Pattern Recognition and Neural Networks. Cambridge Univ. Press, Cambridge, UK.
[38] Byrne, J. M. (1986). "Cities, citizens, and crime: The ecological/nonecological debate reconsidered." The Social Ecology of Crime (J. M. Byrne and R. J. Sampson, Eds.), pp. 1-22. Springer-Verlag, Berlin.
[39] Tuck, M., and Riley, D. (1986). "The theory of reasoned action: A decision theory of crime." The Reasoning Criminal: Rational Choice Perspectives on Offending (D. B. Cornish and R. V. Clarke, Eds.), pp. 129-155. Springer-Verlag, Berlin.
[40] Mirkin, B. (1999). "Concept learning and feature selection based on square-error clustering." Machine Learning, 35, 25-39.
[41] Train, K. (1998). "Recreation demand models with taste differences." Land Economics, 74, 230-239.
[42] Bhat, C. (2000). "Incorporating observed and unobserved heterogeneity in urban work travel mode choice modeling." Transportation Science, 34, 2, 228-238.
[43] Cooksey, R. W. (1996). Judgment Analysis. Academic Press, San Diego, CA.
[44] Brunswik, E. (1956). Perception and the Representative Design of Psychological Experiments. Univ. of California Press, Berkeley, CA.
[45] Hammond, K. R., Hamm, R. M., Grassia, J., and Pearson, T. (1987). "Direct comparisons of the efficacy of intuitive and analytical cognition in expert judgment." IEEE Transactions on Systems, Man, and Cybernetics, 17, 5, 753-770.
[46] Stewart, T. R., Moninger, W. R., Grassia, J., Brady, R. H., and Merrem, F. H. (1989). "Analysis of expert judgment in a hail forecasting experiment." Weather and Forecasting, 4, 24-34.
[47] Dalgleish, L. I. (1988). "Decision making in child abuse cases: Application of social judgment theory and signal detection theory." Human Judgment: The SJT View, Advances in Psychology, Vol. 54. Elsevier, Amsterdam.
[48] Bushell, H., and Dalgleish, L. I. (1993). "Assessment of risk by employees in hazardous workplaces." Safety Science Monitor, 3.
[49] Wigton, R. S. (1988). "Applications of judgment analysis and cognitive feedback to medicine." Human Judgment: The SJT View, Advances in Psychology, Vol. 54. Elsevier, Amsterdam.
[50] Zeleny, M. (1982). Multiple Criteria Decision Making. McGraw-Hill, New York.
[51] Brown, D., and Hagen, S. (1999). "Data association methods with application to law enforcement." Technical Report 001-1999, Department of Systems Engineering, University of Virginia.
[52] Milligan, G. W., and Cooper, M. C. (1985). "An examination of procedures for determining the number of clusters in a data set." Psychometrika, 50, 2, 159-179.
[53] Calinski, T., and Harabasz, J. (1974). "A dendrite method for cluster analysis." Communications in Statistics, 3, 1, 1-27.
This Page Intentionally Left Blank
Author Index Numbers in italics indicate the pages on which complete references are given.
Abreu,F., 100, 103, 112, 135, 149, 158, 159, 160, 161,162 Abu-Ghazaleh, N., 89, 90 Ade, M., 72, 93 Adjoudani, A., 308, 312, 313, 316, 324, 336 Agha, G.A., 66, 90 Ahuja, S., 76, 90 Aist,G., 315, 339 Alexander, P., 89, 90 Allen, R., 66, 81,90 Allgayer, J., 307, 334 Auer, E., 326, 341 Axtell,R.L., 351,372 B Bakis,R., 315,340 Bangalore, S., 308, 335 Banse,R., 314,539 Bansiya, J., 103, 105,162,164 Barenco, A., 235, 243 Barnard,!., 100, 111, 149,762 Bamett,V., 117, 121,763 Basili, V.R., 7, 36, 49, 52, 101, 102, 103, 762, 763 Baskerville, R., 10, 50 Bate, S.F., 279, 285, 302 Baxter, D., 22, 57 Beck, K., 15, 19,42,50 Beladay, L.A., 7, 49 Bell, J.S., 237, 243 Belsley,D., 118,119, 122,762 Ben-Natan, R., 65, 88, 90 Benlarbi, S., 103, 104, 105, 119, 126, 135, 137, 138, 141, 149, 160, 762,164 Bennett, C.H., 224, 227, 243 Bennett, K.H., 4, 6, 10, 36,44, 45,48, 50, 52, 53, 54 Benoit, C , 306, 307, 308, 312, 313, 316, 323, 333, 334, 335, 336 Benveniste, A., 59, 62, 63, 70, 71, 73, 80, 86, 90 Bemacki,R.H., 315,340 Bernard,!., 310,337
Bernstein, L., 308, 326, 335, 341 Berry, D.M., 255, 262, 274, 303 Berry, G., 59, 62, 63, 71, 73, 86, 90 Bers,!., 306, 307, 308, 309, 311, 313, 318, 331, 333, 335 Bhat, C., 353, 373 Bhattacharyya, S.S., 71,90 Bieman,!.M., 100, 159,762 Bier, L., 22, 57 Biggerstaff, T., 37, 52 Binkley,A., 104, 158,763 Block, C., 347, 377 Block, R., 347, 377 Boehm, B.W., 4, 48 Bolle, R.M., 306, 309, 313, 334 Bollig, S., 43, 53 Bolt, R.A., 307, 334 Booch,G., 18, 19,50 Booker,!., 109,765 Boudy,!., 315,340 Bousquet,F.,351,372 Bovik, A.C., 312, 316, 324, 325, 326, 338 Bower, T.G.R., 313,338 Brady, R.H., 354, 373 Brassard, G., 224, 226, 227, 243 Bregler, C., 308, 312, 324, 336, 338 Breiman, L., 120, 762 Brereton, O.R, 44, 45, 53, 54 Briand, L.C., 100, 101, 102, 103, 104, 108, 109, 111, 112, 115, 116, 117, 118, 119, 120, 121, 123, 124, 125, 126, 127, 128, 130, 135, 137, 138, 139, 142, 143, 145, 147, 148,149,152,153,155,157,159,762,763 Brooke, N.M., 308, 310, 312, 313, 316, 324, 325, 336, 338 Brooks, F, 7, 11,49,50 Brooks, R., 36, 52 Broughan, K.A., \^\,188 Brown, D., 357, 373 Brown, D.E., 349, 372 Brunswik, E., 354, 373 Buck, !.T., 72, 90 Buckellew,M., 21, 30, 50, 57 Budgen, D., 44, 45, 53, 54 Buerger, M.E., 347, 377
375
376
AUTHOR INDEX
Buhr, RA., 249, 257, 262, 263, 268, 269, 273, 278, 280, 298, 301 B u n s c C 111, 149,762 Burch, E., 7, 49 Burd, E., 22, 57 Bushell, H., 354, 373 Byrne, J.M., 352, 373
Cornelius. B.J., 6, 36, 48 Cornish, D.B., 346, 347, 370 Counsel], S., 100, 106, 111, 150, 158, 163, 164 Cowden, C.A.,41,5i Crepeau, C, 226, 227, 243 Culler, D.E., 70, 95 Cusumano, M.A., 10, 17, 24, 50 D
Calinski, T., 359, 373 Cambier,C., 351, 572 Canfora, G., 22, 57 Cardell, N., 120, 766 Cardelli, L., 85, 91, 254, 301 Cargill, T.A., 264, 301 Carreiro, N., 76, 90 Carriero, N., 70, 91 Carroll, J., 346, 370 Cartwright, M., 105, 117, 141, 160, 161, 163 Casey, C , 40, 41, 5 i Caspi,P.,62,71,73, 89, 97,92 Cassell, J., 308, 337 Cerceda, J.L., 229, 243 Chan, P., 315, 339 Chapin, N., 36, 57 Chen, J.-Y., 104, 149,163 Chen, K., 38, 52 Chen, L., 306, 307, 311, 318, J i 4 Chen, M.-H., 107, 135, 158, 161, 766 Cheyer, A., 308, 335 Chidamber, S.R., 100, 101, 103, 104, 109, 137, 144, 157, 158, 159, 160, 163 Chiodo, M., 66, 79, 97 Choudhury, T., 309, 337 Chuang,I.L., 194,210,243 Churchill, E., 308, 337 Clarke, R.V., 346, 347, 370 Clarkson, B., 309, 337 Cleve,R.,214,243 Clow, J., 306, 307, 311, 318, i J 4 Coallier, R, 154, 765 Cohen, L.E., 347, 371 Cohen, M.M., 308, 335 Cohen, P.R., 306, 307, 308, 309, 311, 313, 318, 327, 328, 331, 333, 334, 335, 341 Cohen, S., 100, 766 Collofello, J.S., 34, 57 Cook,C., 100, 111, 149, 160,765 Cooksey, R.W., 353, 354, 356, 373 Cooper, M.C., 359, 373
Dagleish, L.I., 354, 373 Dalrymple, M., 307, 328, 334 Dalton,N.,351,372 Daly, J., 100, 103, 109, 111, 112, 116, 117, 118, 123, 125, 127, 137, 138, 139, 142, 149, 150, 157, 159, 762, 763, 766 Darcy, D., 104, 144,763 Das, S., 315, 339,340 Davis, C , 103, 105,762,764 Davis, II. J., 65, 66, 67, 69, 72, 82, 89, 97 de Alfaro, L., 70, 84, 88, 97 De Lucia, A.. 22, 51 Dean, J.S., 351,372 DeAngeli, A., 328, 347 Deglise,R, 308, 313, 325, 337 Denecke, M.. 308, 335 Deprettere, E., 75, 94 Deutsch,D.,213,235, 243 Devanbu, R, 100, 103, 157, 159, 762 Di Lucca, G., 22, 57 Dieckman, D., 89, 90 Differding, C , 111, 149,762 Dijkstra, E., 58, 97 Ditchfield, G., 249, 301 Dixon, D., 344, 370 Dobie, M.R., 106, 165 Donahue, J., 254, 307 Douglass, B.R, 58, 68, 78, 97 Drew, S.J., 260, 263, 301 Duchnowski, R, 308, 324, 325, 337 Duncan, L., 306, 307, 308, 309, 311, 313, 318, 331,333 Dunteman, G., 112, 114,764 Dupont, S., 308, 312, 316, 324, 325, 337 Durnota,B.,351,372
Eder,J., 100,764 Edwards, S.A., 61, 73, 83, 86, 97 Ehrlich, K., 36, 52
AUTHOR INDEX
Eick, S.G., 20, 50 Ekert,A.K.,214,219,235,243 El Emam, K., 104, 105, 106, 117, 119, 126, 135, 137, 138, 141, 148, 152, 153, 762,164 Elmenreich, W., 73, 93 Epstein, J.M., 351, 372 Erber,N.R,312,338 Eriksson, H.-E., 68, 78, 91 Escudier, P., 308, 326, 328, 336 Esteves,R., 100, 161,762 Etkom, L., 103, 105, 762,164 Everitt, B.S., 121,164, 352, 359, 372
377
Gilford, D.K., 85, 94 Gilbert, N., 350, 372 Gimblett,H.R., 351,372 Girault, A., 78, 82, 88, 97 Giusto, R, 66, 79, 97 Glasberg, D., 106, 117, 135, 138, 141, 148, 764 Glassman, L., 254, 307 Goddeau, D., 307, 334 Goel, M., 83, 97 Goel, N., 104, 105, 119, 126, 135, 137, 138, 141, 762,164 Gold, N.E., 45, 54 Goldenberg, R.E., 279, 285, 302 Goldschen, A.J., 308, 336 Goldstein, S.C, 70, 95 Gong, Y., 315, 340 Fanta, R., 29, 57 Gonthier, G., 59, 62, 71, 73, 86, 90 Goodenough, J.B., 254, 263, 278, 307 Fateman,R., ISl, 188 Gorbachev, V.N., 229, 243 Felson, M., 347, 371 Gordon, A.D., 352, 372 Fenton,N., 101,765 Gordon, M.J., 85, 92 Ferber, J., 350, 372 Gorr, W., 349, 377 Ferguson, C , 315, i i 9 Ferro, D., 306, 307, 308, 309, 311, 313, 318, 331, Gosling, J., 257, 307 Gough, K.J., 260, 263, 307 333 Goulao, M., 100, 161,762 Feynman, R., 191,243 Grassia, J., 354, 373 Fidge,CJ.,81,97 Graves, T.L., 20, 50 Fiscus,J., 314, 330,ii9 Green, D., 124,164 Fisher, B., 314, 330,339 Griffiths, D.G., 44, 54 Fong, C , 83, 97 Grover,L.K., 215, 243 Fong, M.W., 307, 335 Guedj, R., 309, 337 Foster, J.R., 5,48 Gullekson, G., 65, 66, 81, 84, 95 Fowler, M., 33, 57 Gumerman, 351, 372 Fox,P.W.,315,340 Gupta, R., 73, 93 Fracchia, FD., 36, 52 Guyomard, M., 307, 334 Frank, M.R, 307, 335 Freedman,S J., 313,339 Friedman, J.H., 109, 120, 162, 164 H Fry, C , 174, 188 Hagen, S., 357, 373 Fuster-Duran, A., 324, 340 Hager,G.D., 313,339 Hager, J.A., 39, 53 Halbwachs, N., 62, 71, 73, 89, 97, 92 Hall, D.L., 313, 339 Gamma, E., 39, 53, 71, 97 Hamm, R.M., 354, 373 Garber,S.R., 315,340 Hammond, K.R., 354, 373 Gargan, R.A., 307, 328, 334 Hanley,T.D., 315,340 Garlan,D., 66, 81,90 Harabasz, J., 359, 373 Garland, D., 15,50 Harbour, M.G., 64, 93 Gartin, RR., 347, 377 Harbusch, K., 307, 334 Gehani, N.H., 259, 260, 262, 272, 273, 278, 307 Harding, R.W., 347, 370 Gelemter, D., 70, 76, 97 Harel, D., 62, 78, 89, 92 Harrison, R., 106, 111, 150, 158, 764, 765 Geoffroy, J.C., 274, 302
378
AUTHOR INDEX
Haton, J.R, 324, 341 Hauptman, A., 310,iJ7 Hayashi, 85, 92 Hayes, W., 112,764 Hecht, H., 292, 302 Hecht, M., 292, 302 Heller, R., 309, 337 Helm, R., 39, 5J, 71, 97 Henderson, R, 10, 50 Henderson-Sellers, B., 100, 106, 159, 160, 764, 765 Hennecke, M.E., 306, 307, 308, 313, 323, 324, 325,334, 340 Henry, S., 100, 102, 106, 144, 145, 147, 148, 157, 765 Henzinger, T.A., 66, 67, 73, 78, 83, 84, 88, 9], 92 Hirayama, M., 308, 336 Hitz,M., 100, 159,764 Hoare, C.A.R., 70, 71, 75, 92, 251, 302 Holmes, R.M., 348, 371 Holmes, ST., 348, 371 Holzman, T., 306, 307, 308, 309, 311, 313, 318. 331,333 Holzmann, M., 73, 93 Horowitz, B., 66, 73, 83, 92 Hosmer, D.W., 116, 118, 122, 764 Hotz,D.,41,55 Howe, A.E., 36, 52 Hsieh, H., 66, 79, 91 Humg,X., 315, 339 Hudak, R, 85, 92 Hudepohl, J., 22, 57 Huitt, R., 40, 53 Hunicke,K., 314,3J9 Hurst, W., 308, 324, 325, 337 Hylands, B., 107, 108, 166 Hylands, C , 65, 66, 67, 69, 72, 82, 89, 91
Ihde, S., 306, 334 Itami,R.M.,351,372 Ito,H., 312, 324, 325, JJ5 Iverson, R, 326, 341 Iyengar, G., 306, 312, 313, 324, 325, 334
Jacobsen,C.N., 315, J39 Jacobson, L, 18,50
Jagannathan, R., 72, 92 Jain, A., 306, 309, 313, iJ4 Jebara, T.. 309, 337 Jiang, L., 315, i J 9 Johnson, E., 346, 370 Johnson, J.H., 22, 50 Johnson, R., 39, 5J, 71,97 Johnson, R.E., 88, 92 Johnston, M., 306, 307, 308, 311,318, 334, 335 Jordan, M., 254, 301 Jorge, J., 309, 337 Jourdan, M., 89, 92 Joy, B., 257,i6>7 Junqua,J.C., 315, 316,540 Jurecska, A., 66, 79, 97 K Kafura, D., 102. 106, 145, 148, 765 Kahen, G., 9, 49 Kahn,G., 70, 71,75, 92 Kang, B.-K., 100, 159, 762 Kao, M.-H., 107, 135, 158, 161, 766 Kaplan, D.J.,^rfl/., 72, 92 Kappel, G., 100, 764 Kappelman, L.A., 27, 51 Karp, R.M., 72, 93 Karr, A.R, 20, 50 Kaslow, B., 254, 301 Kavanaugh,J., 315, 359 Kearney, J.K., 315,346' Kelly, W., 349, 377 Kemerer,C.R, 100, 101, 103, 104, 109, 137, 144, 157, 158, 159, 160,763 Kenah, L.J., 279, 285, 302 Kennedy, R., 315, 339 Kernighan, B.W., 168, 188, 255, 302 Kidd,l, 100, 160, 765 Kienhuis, B., 65, 66, 67, 69, 72, 82, 88, 97 Kirsch, CM., 66, 73, 83, 92 Kitchenham, B., 101, 107, 765, 766 Klein, H., 10,5(9 Klein, M.H., 64, 93 Knight, C, 36, 52 Knill,E.,219, 243 Knudsen, J.L., 273, 274, 275, 276, 302 Knudson, D.,41,53 Kobsa, A., 307, 334 Koenig, A., 263, 264, 272, 302 Kohler,T., 350, 351,372 Konig, Y., 308, 312, 324, 336, 338 Koo, T.J., 73, 93
Kopetz, H., 73, 93 Kuh,E., 118, 119, 122,762 Kuhn,K., 310, 328,337,347 Kunk, H., 7, 49
Lachover, H., 89, 92 Lackner,J.R., 313,359 Laddaga, R., 85, 93 LaFlamme,R., 219, 243 Lagnier, R, 89, 92 Lagu, B., 22, 51 Lake, A., 100, 111,149,160,765 Lallouache, T., 308, 326, 328, 336 Lamport, L., 80, 93 Landay, J., 306, 307, 308, 309, 311, 313, 318, 331,333 Lang, J., 292, 302 Larson, J., 306, 307, 308, 309, 311, 313, 318, 331,333 Latimer, D., 315, 339 Lau, T.S., 36, 57 Lauwereins, R., 72, 93 Lavagno, L., 66, 79, 97 Layland,J.,61,64, 94 Layzell, RJ., 44, 45, 53, 54 Le Golf, B., 306, 308, 312, 323, 334 Le Guemic, R, 62, 71, 73, 80, 86, 90 Lea, D., 58, 93 Lee, B., 78, 82, 88, 97 Lee, D., 312, 335 Lee, E.A., 61, 65, 66, 67, 69, 72, 73, 75, 78, 80, 82, 83, 84, 85, 88, 89, 90, 97,93, 95 Lee, J., 309, 337 Lee, S., 315,339 Lee, Y.-S., 100, 157, 159, 765 Lee, Y.V., 308, 336 Lehman, M.M., 7, 8, 9, 49 Lehner, R, 9, 20, 23, 24, 49, 50 Lemeshow, S., 116, 118, 122, 764 Lenstra,A.K., 212, 243 Lenstra,H.W., Jr., 212, 243 Letovsky, S., 36, 52 Levow,G., 310, 314, 337, 339 Lewis, B., 45, 54 Lewis, RH., 106, 765 Lewis-Beck, M., 113,765 Li, W., 100, 102, 103, 105, 106, 144, 145, 147, 148, 157, 762, 764, 765 Liang, B.-S., 100, 157, 159, 765
Liao, S., 73, 93 Liebman, J., 73, 93 Lientz, B., 5, 48 Lieverse, R, 75, 94 Liskov, B.H., 262, 269, 302 Littman, D.C., 36, 52 Liu, C., 61, 64, 94 Liu, J., 65, 66, 67, 69, 72, 82, 83, 89, 97, 94 Liu, S.S., 36, 57 Liu, X., 65, 66, 67, 69, 72, 82, 89, 97 Lockwood,R, 315, 34(? Lombard, £.,315,340 Long,S., 113,765 Lorenz,M., 100, 160,765 Lounis, H., 103, 104, 105, 108, 112, 116, 118, 119, 125, 126, 135, 137, 138, 139, 142, 147, 763, 764 Lovelock, C., 45, 54 Lu,J.-R, 104, 149,763 Lucassen, J.M., 85, 94 Luckham,D.C.,68,81, 94 Luettin, J., 308, 312, 316, 324, 325, 337 Lui, H., 349, 372 Lyle, J., 36, 52 Lynch, N.A., 66, 94 M Ma, C , 73, 93 Macaulay, L., 44, 53, 54 McCabe, T.J., 102, 765 McCarroll,S.,351,372 Macchiavello, C , 214, 219, 243 McDermid, J.A., 4, 5, 6, 13,45 Macdonald, H.I., 262, 263, 268, 269, 273, 278, 280, 307 MacDonald, J., 308, 336 MacEachem,M., 314,339 McGee, D., 306, 307, 311, 318, 334 McGrath, M., 308, 312, 323, 336 MacGregor, T., 34, 57 McGurk, H., 308, 334 Machado, J., 105, 135, 141, 153, 764 MacLaren, M.D., 257, 274, 275, 302 McLeod, A., 308, 312, 323, 336 Madhavji, N., 106, 117, 135, 138, 141, 148, 764 Maguire, S., 168,755 Maier, RA., 347, 377 Maison, B., 306, 312, 313, 324, 325, 334, 341 Makhoul, J., 308, 335 Manna, Z., 58, 94
Mapinard, A., 274, 302 Maraninchi, R, 62, 71, 88, 89, 92, 94 Marlin, CD., 251,302 Marron, J.S., 20, 50 Martin, A., 314, 330, i i 9 Martin, J.C, 306, 307, 308, 313, 333 Martin-Lof, P., 85, 94 Masden, O.L., 255, 302 Massaro, D.W., 308, 324, 326, 328, 335, 340 Mattem,F., 81,94 Maurer, U.M., 227, 243 Maxwell, K., 152,162 Maybury, W., 259, 285, 302 Mayer, R., 36, 52 Mayrand, J., 22, 57, 154,765 Meier, U., 308, 324, 325, 337 Mellor, S., 100,166 Melo, W.L., 100, 102, 103, 104, 105, 106, 109, 112, 117, 119, 121, 123, 124, 126, 127, 128, 130, 135, 138, 139, 141, 143, 148, 149, 152, 153, 157, 158, 159, 160, 162, 163,164 Menezes,W., 13,50 Meredith, M., 313,355 Merlo, E.M., 22, 57 Merrem, F.H., 354, 373 Merton,R.T., 351,372 Messerschmitt, D.G., 72, 83, 93 Meyer, B., 259, 302 Meyer, M., 109, 765 Miller,!., I l l , 150,766 Miller, R., Jr., 109, 116, 765 Miller, R.E., 72, 93 Miller, S., 308, 335 Milligan, G.W., 359, 373 Mills, H.D., 36, 52 Milner, R., 75, 85, 92, 94, 257, 302 Mirkin, B., 353, 373 Misic, v., 106, 107, 765 Mitbander, B., 37, 52 Mitchell, J.G., 259, 285, 302 Mockus, A., 20, 50 Mogensen, T., 85, 94 Mok, W.Y.R., 269, 302 M0ller-Pedersen, B., 255, 302 Monarchi, D.E., 160, 766 Moninger, W.R., 354, 373 Montazeri, B., 100, 159,764 Moody, D.B., 315,340 Moran, D.B., 307, 328, 334 Morand,R, 351,372 Morasca, S., 101,763
Morimoto, C , 306, 334 Morris, 85, 92 Mosca,M., 214, 243 Moser, S., 106, 165 Mostow,!., 315, 339 Motet, G., 274, 302 Moura, L., 22, 51 Muliadi, L., 65, 66, 67, 69, 72, 82, 89, 97 Muller, H.A., 36, 52 Mullon,C.,351, 372 Multon, ¥., 307, 334 Munhall, K.G., 308, 336 Munro, M., 6, 22, 36, 44, 45, 48, 51, 52, 53, 54 Murali, R., 89, 90 Murphy, R.R., 312, 33<S Murthy, RK., 72, 90 Mustaine, E.E., 347, 377 N Naamad, A., 89, 92 Nadas, A., 315,340 Nahamon,D.,315, 340 Nakamura, S., 312, 324, 325, 338 Narayanan, S.,315, 339 Naur, R, 13,50 Neal J.G., 307, 334 Nelson. G., 254, 307 Nesi, R, 107, 149, 160, 161, 766 Neti, C.V., 306, 312, 313, 316, 324, 325, 334, 338, 341 Neuendorffer, S., 65, 66, 67, 69, 72, 82, 89, 97, 93 Newey, M., 85, 92 Newson, R, 100,763 Nielsen, M.A., 194,210,243 Nithi, R., 106, 111, 150, 158, 764, 765 Nix, D.,315, 339 Norvig, R, \H3,J88 Nygaard, K., 255, 302 O Obenza, R., 64, 93 Ogando, R.M., 36, 57 Olligschaeger, O., 349, 377 Olligschlaeger, A., 349, 377 01sen,M.G.,315, 340 Olshen, R.A., 120,762 Oman, R, 36, 57
Omohundro, S.M., 312, 324, 338 Oslem, M.R., 22, 57 Oviatt, S.L., 306, 307, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 320, 322, 323, 325, 326, 327, 328, 331, 333, 334, 335, 337, 338, 339, 341
Page, H., 21, 30, 50, 57 Pallet,D., 314, 330,539 Pankanti, S., 306, 309, 313, 334 Pao, C , 307, 334 Parker, M.T., 351,372 Parks, T.M., 72, 75, 93, 94 Pamas, D.L., 20, 39, 50, 53 Pavel, M., 313, 339 Payne, J., 346, 370 Pearson, T., 354, 373 Pedlow,R.L, 315,340 Pelachaud, C , 306, 307, 308, 313, 333 Penix, J., 89, 90 Penker, M., 68, 78, 97 Penn, A., 351,372 Pennington, N., 36, 52 Pentland, S., 309, 337 Peperstraete, J.A., 72, 93 Pereira, F.C.N., 307, 328, 334 Perry, D.E., 9, 49 Petajan, E.D., 308, 324, 326, 336, 341 Pfenning, R, 84, 85, 95 Pfleeger, S.L., 13,50, 101,765 Picheny,M., 315,339,340 Pick, H.L., 312, 313, 315, 338, 340 Pigoski,T.M., 41, 53 Pigoski, T.R, 4, 48 Pilaud,D., 62, 71, 73, 89, 97, 92 Pinto, J., 36, 52 Pisoni,D.B., 315,340 Pittman, J., 306, 307, 311, 318, 334 Plaice, J.A., 73, 97 Plauger, PJ., 168,188 Pnueli, A., 58, 62, 89, 92, 94 Polak,W., 213, 243 Polifroni, J., 307, 334 Politi, M., 89, 92 Pollack,!., 308, 312, 323,336 Pollak, B., 64, 93 Porter, v., 103, 112,116, 117, 118, 123, 125, 127, 138, 139, 142, 763 Potamianos, A., 315,339
Potamianos, G., 306, 312, 313, 316, 324, 325, 334, 338 Potash, L.M., 315,340 Prasad, K.V., 324, 325, 340 Pregibon, D., 117,766 Pressman, R.S., 5, 13,45 Prevost, S., 308, 337 Price,!., 117, 121,763 Proulx, D., 22, 57 Przybocki,M., 314,330,339
Quensiere, J., 351, 372 Querci, T., 107, 149, 160, 161, 766 R Rai, S., 104, 105, 119, 126, 135, 137, 138, 141, 762, 764 Rajaraman, Lyu, 107, 158, 766 Rajlich, V.T., 4, 21, 29, 30, 38, 34,43,48, 50, 51, 52,53 Ralya, T., 64, 93 Ramil, J.F., 9, 49 Randall, B., 13,50 Raymond, P, 62, 71, 89, 92 Reddig, C , 307, 334 Reed, G.M., 75, 94 Reed, R., 79, 94 Rehof, J., 85, 94 Reithinger, N., 307, 334 Rekosh,J.H., 313,339 Remondeau, C , 307, 334 Rengert, G., 349, 377 Rettig, D., \S\,188 Rieffel,E., 213, 243 Riley, D., 353, 373 Ripley, B.D., 352, 353, 372 Ritchie, D.M., 255, 302 Robert-Ribes, 308, 326, 328, 336 Robertson, G., 306, 334 Robson, D.J., 6, 36, 48 Rogerson, PA., 349, 377 Rogozan, A., 308, 313, 325, 337 Roncek, D.W., 347, 377 Roper, M., I l l , 150,766 Roscoe, A.W., 75, 94 Royce, W.W., 4, 48 Rubin,P, 307, 313,335 Rudnicky, A., 310,337
Rumbaugh, J., 18,50 Russell, M.J., 308, 310, 312, 313, 316, 324, 325. 336
Saltzman, E., 312, 33^ Samaraweera, L.G., 106, 165 Sangiovanni-Vincentelli, A., 66, 79, 80, 9L 93 Sant'Anna, M., 22, 57 Saracco, S., 79, 94 Sastry, S.S., 73, 93 Sawin,L., 315,340 Schach, R., 104, 158,163 Schauser, K.E., 70, 95 Scherer,K., 314,339 Schlossberg, J.L., 307, 328, 334 Schmauks, D., 307, 334 Schomaker, L., 306, 307, 308, 313, 333 Schrefl, M., 100,164 Schulman, R., 102, 106, 145, 148, 765, 315, 340 Schwartz, J.L., 308, 326, 328, 336 Scott, D., 85, 94 Scully, M., 37, 52 Selby, R.W., 17, 24, 50 SelicB., 65, 66, 81,84, 95 Seneff, S., 307, 334 Senior, A., 306, 312, 313, 324, 325, 334, 341 Sexton, J., 41, 53 Shapiro, S.C., 307, 334 Sharble, R., 100, 766 Sharma,R.K., 313,339 Shaw, M., 15, 50 Sheetz, S.D., 160, 766 Shepperd, M., 105, 117, 141, 160, 161, 163 Sherman, L.W., 347, 371 Sherman, R., 89, 92 Shi, J., 312, 324, 338 Shikano, K., 312, 324, 325, 338 Shlaer, S., 100, 766 Shneiderman, B., 36, 52 Shor,RW., 191,213,244 Shtull-Trauring, A., 89, 92 Siegel,G.M., 315,340 Silsbee, RL., 308, 312, 316, 324, 325, 326, 336, 338 Sinott,J.M., 315,340 Siroux, J., 307, 334 Smith, L, 306, 307, 311, 318, 334 Smith, J.R.W., 79, 94 Sneed, H.M., 9, 49
Snodgrass,A., 314, 339 Snyder, A., 262, 269, 302 Soloway. E., 36, 52 Somerville, I., 5. 13,45 Specter, R, 110,766 Standish, T.A., 36, 52 Stannet, C , 44, 54 Stebbins,W.C., 315, 340 Steele, G., 257, 301 Steer, M.D., 315, 340 Stein, B.,313, 33(5 Steinberg, D., 120, 766 Stewart, D.B., 292, 302 Stewart, T.R., 354, 373 Stokes, M.A.,315, 340 Stone, C.J., 120,762 Stone, M., 109, 113, 127,766 Storey, M.A.D., 36, 52 Stork, D.G., 306, 307, 308, 313, 323, 324, 325, 326, 328, 334, 335, 340 Stroobosscher, R.A., 249, 262, 273, 298, 307 Stroustrup, B., 254, 262, 263, 264, 270, 272, 302, 303 Su, Q., 308, 336 Suhm, B., 306, 307, 308, 309, 310, 311, 313, 318, 331,333,335,337 Sullivan, J.W., 307, 308, 328, 334, 337 Sumby, W.H., 308, 312, 323, 336 Summerfield, A.Q., 308, 312, 323, 326, 336, 341 Surmann, D., 152, 162 Swan, S.,81, 95 Swanson, E.B., 5, 48 Swedlind,A.C..351,372 Sweet, R., 259, 285, 302 Swets,J., 124,764
Tamai, T., 23, 57 Tang, M.-H., 107, 135, 158, 161, 766 Taussig, K.,314, 339 Tegarden, D.R, 160,766 Tehan, R., 344, 370 Tennent, R.D., 275, 303 Terzopoulos, D., 308, 336 Tesic, D., 107, 765 Tewksbury, R., 347, 377 Tjiang, S., 73, 93 Tofte, M., 257, 302 Tomlinson, M.J., 308, 310, 312, 313, 316, 324, 325,336
Torimitsu, Y., 23, 57
Train, K., 353, 373
Trakhtenbrot, M., 89, 92
Trio, G., 41, 53
Troitzsch, K.G., 350, 372
Trotter, W.T., 80, 95
Trubilko, A.I., 229, 243
Truex, D.P., 10, 50
Tsay, J., 65, 66, 67, 69, 72, 82, 89, 91
Tuck, M., 353, 373
Turing, A.M., 56, 85, 95
Turk, M., 306, 334
Turner, A.J., 7, 49
Turvey, B., 348, 371
Tyler, S.W., 307, 328, 334
U
Ullman, J.D., 85, 95
V
Van Der Wolf, P., 75, 94
van Gemund, A.J.C., 75, 95
van Gent, R., 310, 337
van Summers, W.V., 315, 340
Vandermerwe, S., 45, 54
Vans, A.M., 36, 37, 57, 52
Vatikiotis-Bateson, E., 307, 308, 313, 335, 336
Vaudeville, J., 41, 53
Vera, J., 68, 81, 94
Vergo, J., 306, 307, 308, 309, 311, 313, 318, 331, 333
Vissers, K., 75, 94
Vlissides, J., 39, 53, 71, 97
Vo, M.T., 308, 335
Vogel, B., 65, 66, 67, 69, 72, 82, 89, 97
Vollman, T., 41, 53
von der Beeck, M., 88, 95
von Eicken, T., 70, 95
von Mayrhauser, A., 36, 37, 57, 52
W
Wadsworth, C.P., 85, 92
Wahlster, W., 307, 334, 335
Waibel, A., 308, 335
Walsh, D., 346, 370
Wang, F.-J., 100, 157, 159, 765
Wang, M.Q., 327, 341
Ward, P., 65, 66, 81, 84, 95
Warren, I., 5, 48
Wauters, P., 72, 93
Weaver, P., 346, 370
Webster, D., 37, 52
Wegner, P., 85, 97
Weintraub, M., 314, 339
Weiser, M., 36, 52
Welch, R.B., 313, 335
Welsch, R., 118, 119, 122, 762
Wendorff, P., 41, 53
Westland, C., 346, 370
Whitaker, P., 83, 95
Whitmire, S., 101, 766
Wieczorek, I., 152, 762
Wiener, L., 100, 766
Wigton, R.S., 354, 373
Wikstrom, A., 85, 95
Wilde, N., 21, 30, 36, 37, 40, 41, 50, 51, 52, 53
Wilkerson, B., 100, 766
Wilkie, F.G., 107, 108, 766
Willcock, D.K., 181, 188
Wilpon, J.G., 315, 339
Winograd, T., 306, 307, 308, 309, 311, 313, 318, 331, 333
Wirfs-Brock, R., 100, 766
Wirth, N., 7, 48
Wood, M., 111, 150, 766
Wright, P., 222, 244
Wu, L., 306, 307, 308, 309, 311, 313, 318, 331, 333, 335
Wu, S.-F., 100, 157, 159, 765
Wust, J., 100, 103, 104, 108, 109, 112, 115, 116, 117, 118, 119, 120, 121, 123, 124, 125, 126, 127, 128, 130, 135, 137, 138, 139, 142, 143, 145, 147, 148, 152, 153, 155, 157, 159, 762, 763
X
Xi, H., 84, 85, 95
Xiao, D., 43, 53
Xiong, Y., 65, 66, 67, 69, 72, 82, 84, 85, 89, 97, 93, 95
Yahin, A., 22, 57 Yakovleva, E.S., 229, 243 Yang, J., 308, 335
Yau, S.S., 34, 36, 57 Yemini, S., 255, 262, 274, 303 Yeni-Komshian, 315, 339 Yeung,C.,315,J39 Yoffe, D., 10, 50 Younger, B.M., 249, 301 Younger, E.J., 36, 52 Yu, W.D., 168, 173,188
Z Zarnke, C.R., 249, 262, 263, 268, 269, 273, 278, 280, 301 Zeleny, M., 357, 373 Zhai, S., 306, 334 Zhiliba, A.I., 229, 243 Zuse, H., 101, 166
Subject Index
5ESS, 173, 178
Absolute relative error (ARE), 125 Accented speakers, 314, 317-20, 329 Accessibility of computing, 309 Action methods, 82 Actor-oriented design, 65-71 abstract syntaxes, 66-7 concrete syntaxes, 66, 67-8 semantics, 68-9 Actors, 66, 82 Ada, 169 exception handling, 249, 254, 257-8, 265, 268, 297 handler context, 272 Adaptive processing, 308, 325, 326, 331, 332 Agents calibrated, 351 definition of, 350 discovering the preferences of, 352-8 methodology for. See CSSW methodology using clustering, 352-3 using judgment analysis, 353-8 Agile methods, 19,42 AND gate, 208, 209 Architecture description languages (ADLs), 66,81 Architecture design languages, 81 Argos,62,71,89 Arrays, 175 Assembly languages, 170, 183, 185 Assignment operators, 177 Asynchronous exceptions, 253-4, 292-7, 298 communication requirements, 292-3 converting interrupts to exceptions, 297 disabling, 294-6 multiple pending, 296 nonreentrant problem, 2 9 3 ^ Atoms, neutral, 237-8 Audio-visual perception, 308 Automata, reflection, 87-8
B Backward elimination, 118 Basis functions, 121 Basis vectors, 223 Beta releases, 25 Binding, 46 Biometrics research, 308-9, 332 Bit field data types, 177 BLISS, 169 Block finalization, 263-4 Boolean dataflow (BDF), 72 Bound exceptions, 268-9, 298 Boundedness, 72
C, 71, 255 arguments for, 179-84 compilers for, 187 diiferences from Lisp, 171-3 evolution of, 169-70 exception handling, 262, 265 flaws in, 173-9 GNUC, 187 in Lisp implementations, 185 reasons for use of, 170-1 C-h-h, 85, 171, 179 exception handling, 254,257-8, 259, 262,265, 269 handler context, 272 C2N0T gate, 209-10 Calculus of communicating systems (CCS), 75 CART, 120, 147 Cast, 178 Catch, 253, 298 Causality, 116, 138 Cavities, optical, 239 CBO, 101, 137, 149, 157 Change difficulty index (CDI), 155 Choice structuring properties, 347 Ciphertext, 221 Classes, 268 Clocks, 72 Clones, of quantum states, 202, 206-7
of software code, 21-2, 29 CLOS, 176 Closeness, 274, 285, 298 CLU, 269, 277 Clustering, 352-3 algorithms, 352, 359-60, 369 hierarchical methods, 352, 359 CNOT gate, 208-9, 237 Code analysis tools, 36 Code generators, 40-1 Codes, unbreakable, 221-2 Coding, dense, 228-9, 230 Cohesion measures, 115-16, 125-26, 135, 137, 151 definitions of, 159 LCC, 139, 159 LCOM, 101, 137, 149, 159 normalized, 116, 139, 140, 151 TCC, 139, 159 Comment delimiters, 178 Common Lisp. See Lisp, Common Communicating sequential processes (CSP), 75, 83 Communication networks, 74 Compilers, 85 Completeness, 123, 124, 148, 150 Complexity classes, 210-12 Complexity measures, 131, 137 Component interfaces, 60-1, 84-8 Component technology, 60, 61, 62-3 Components, domain polymorphic, 82, 83 Computation, science of, 56 Concept location, 37-8 Concurrency, 58-9, 63, 70 Concurrent execution, 249, 291 Concurrent programming languages, 75 Conditional handling, 269, 298 Configuration management, 42, 43 Confounding effects, 137-8 Confusion matrices, 310 Consequent events, 287-8, 298 immediate, 288 Consequential propagation, 288-90, 298 Consonants, 326 Continuous time (CT) models, 76, 83 Control flow, conditional, 173-4 Controlled experiments. See Experiments, controlled Context switch, 249 CORBA, 88 real-time, 65 Correctness, 123, 124, 148, 150
Correlational studies, 99-100, 102-10, 131^9 choice of dependent variable, 108-9 choice of independent variables, 109 data sets, 110 measures of correlation, 117 multivariate prediction models, 113, 139-49 overview of, 103-7 prediction models building, 109 evaluating, 109-10 univariate analysis of fault-proneness, 131-9 Coroutine-monitor, 251-2 Coroutines, 249, 251,290-1 COTS components, 10, 11, 27, 40, 41 Coupling measures, 125-6, 1 3 3 ^ , 135, 137, 151 AMMIC, 140, 158 CBO, 101, 137, 149, 157 definitions of, 157-8 export, 138, 139, 140, 151 IH-ICP, 140, 157 import, 138, 139, 140, 151, 155 MPC, 139, 157 OCAIC, 137, 157 OCMEC, 137, 158 OCMIC, 138, 157 OMMEC, 137, 158 OMMIC, 137, 139, 158 RFC, 101, 137, 157 RFC-1, 137, 157 Crime displacement of, 349 predictive modeling of, 348-50 multiagent modeling, 350-1 public order, 349 Criminals computer, 351, 355 discovery of preferences of, 349-50 profiling of, 348 target preferences of, 350 target selection process of, 346-8 see also Agents Cross-validation (CV), 113, 127, 148 Cryptography, 213, 220 CSSW methodology, 358-61, 362 test of, 364-9 Cue weighting, 354-5, 356-7 Cycle-driven simulators, 73
D Dark counts, 227 Data archiving, 24 Data mining model, automated, 363-4 Dataflow models, 71-2, 75 Deadlock, 59, 72 Decline, software, 23 Decoherence, 216, 233-4 Default handler, 266, 298 Defective programs, 59 Delivery, 254, 298 Denial of service (DOS) attacks, 345, 3 6 2 ^ Dense coding, 228-9, 230 Dependencies, 34 Derived exceptions, 264-5, 267, 281-2 Descriptive statistics, 112, 113 Design measures. See Measures for objectoriented designs Deutsch's function characterization problem, 213-15,240-2 Differential equations, 76 Digital signal processors (DSPs), 57 Dirac notation, 203 Disambiguation, mutual, 310-11, 316, 317-20,327,328,330-1 Discrete-event (DE) models, 6 3 ^ , 74, 79, 83 Discrete-time (DT) models, 73, 83 Distributed artificial intelligence models, 350 DIT, 101, 137, 140, 160 Documentation updating, 34, 35, 42-3 Domain concepts, 33 Domain polymorphism, 82, 89 Domains, 82, 89 Drug markets, 349 Dual exceptions, 280-1, 298 Dynamic dataflow (DDF), 72 Dynamic handler selection, 274-5 Dynamic propagation, 273, 274-5, 285, 299
e-business applications, 10 E-type software, 8 Eavesdropping, detection of, 223-5, 227 Eff-ort, 108 development, 125 indicators of, 152 multivariate prediction models for, 140, 144-6, 147-9 Eiff^el, 259
Embedded software, 55-89 facets of, 57-62 limitations of software engineering methods, 62-5 nature of, 56-7 see also Models of computation Entanglement, 200, 205 Entities, 34 Equal operators, 177 Error correction, quantum, 216-20, 234 Single-bit-flip errors, 217-18 Esterel,59,62,71,73, 86 Event, 253, 299 Exact measurement theorem, 201-2 Exception handling mechanisms (EHM), 246-98 asynchronous exception events, 2 5 3 ^ , 292-7, 298 exception partitioning, 278, 280-2, 299 execution environment 249-53 features, 263-71 bound exceptions, 268-9, 298 catch-any and reraise, 2 6 3 ^ derived exceptions, 264-5, 267, 281-2 exception list, 269-71, 299 exception parameters, 265-7, 299 handler clause selection, 283-5 handler context, 272-3 handling models, 255-63 nonlocal transfer, 255-7 resumption, 260-3 retry, 259-60 termination, 257-9, 276 matching, 282-3, 285, 300 multiple executions and threads, 290-2 objectives, 248-9 overview, 2 5 3 ^ preventing recursive resuming, 285-7 propagation mechanisms, 277-80 propagation models, 273-7 Exception parameters, 265-7, 299 Exception partitioning, 278, 280-2, 299 Exceptional C, 259, 275, 278 Exceptions, 247, 299 Executable, 83 Execution, 249, 299 properties, 249-50 Experiments, controlled, 100, 110, 111, 149-50 overview of. 111 summary of results, 149-50 Expert opinion (EO), 108
Explicit polling, 293, 299 Exponential algorithms, 211-12 Extreme Programming (XP), 19, 42
Factoring algorithms, 212-13 Failure exception, 270, 299 FASTGEN, 30 Fault detection, isolation and recovery (FDIR), 86 Fault-proneness, 98, 108, 122-4, 125-6 cost-benefit models for prediction of 127-31, 148, 152, 153, 156 future research directions, 156 indicators of, 151-2 multivariate models for, 139, 140, 141-3, 147-9 univariate analysis of, 131-9 Fault-tolerance mechanisms, 292 Faulting execution, 253, 299 Fidelity, 234-5 Finite-state machines (FSMs), 73, 76-9, 81, 83 Fix-up routines, 260-1, 262, 277 Flexibility, 171 Forward selection, 118 Frameworks, 88-9 Fusion techniques, 313, 325, 331 decision-level, 328
Garbage collection (GC), 179, 186-7 Gates, encoded, 220 GHZ state, 229 Giotto, 73, 83 Global phase, 242 Goodenough, 269, 278 Goodness-of-fit measures, 122-5, 147 limitations, 126 Gravity system, 89 Group codes, 219 Guarded block, 254, 272, 299
Handler hierarchies, 285, 299 Handlers, 248, 299 context of, 272-3 Handles, 253, 299 Hardware description languages, 63, 66, 67 Hardware design, 63-4 Hardware interrupts, 294, 297 Heterogeneity, 61-2, 82-4 Heterogeneous derivation, 281, 299 Hierarchical disabling, 294, 299 Hilbert spaces, 195 Homogeneous derivation, 281, 299 Human-centric Word Processor, 308 Hybrid regression models, 120, 154 Hybrid systems model, 78, 79 I ICL, 15,25 Implicit polling, 293, 299 Inconsistencies, 34, 40 Individual disabling, 294, 299 Information hiding, 39, 40 Inheritance measures, 125-6, 131, 136, 137, 140, 149-50, 152 CLD, 139, 160 definitions of, 160 DIT, 101, 137, 140, 160 NOC, 101, 137, 139, 160 NOP, 140, 160 Initialization of variables, 173 Instrumentation, 41 Interaction patterns, 66 Interactive systems, 62 Interface automata, 84 Interface flaws, 178-9 Interfaces, component 60-1, 84-8 Internet, 344, 364-5 multiagent model of, 3 6 1 ^ protective system for, 362-4 testing of methodology for predicting attacks on, 364-9 Interrupts, hardware, 294, 297 Ion traps, 237
H Hadamards, 214 Handled, 253, 299 Handler clause, 254, 299 selection, 283-5
Java, 71,85, 86, 170, 171 exception handling, 257, 263, 269, 295 Java Beans, 86, 87, 88 JavaSpaces, 77
Judgment analysis, 353-8 applied to criminal preference, 356-8 hierarchical judgment design model, 356 lens model, 354-5 K Kahn process networks, 71, 75 Ket notation, 203 Keys, reconciled, 227 Keys, tentative final, 227 "Kludges", 20, 21-2 Knowledge, loss of, 4, 20-1
Latencies, 70 Lattices, optical, 237 LCOM, 101, 137, 149, 159 Legacy systems, 10, 20, 22, 27 Lens model, 354-5 Lexical contexts, 272-3 Lifecycle models. See Software lifecycle models Linda, 76 Lip-reading, 3 2 3 ^ Lisp, Common, 169 arguments against, 179-84 arguments for, 173-9 compiled, 181 cost-effectiveness of, 185 datatypes, 182-3 differences from C, 171-3 license fees for, 180, 187 object system (CLOS), 176 use of C in, 185 Liveness, 59-60 Loadings, 114 Logarithms, discrete, 220-1 Logic flaws, 173-8 Logical bit operations, 177 Logistic regression (LR), 117, 118, 124, 131, 147, 154 Lombard effect, 315-16, 322, 325 Looping constructs, 175-6 Lustre, 62, 71, 73, 89 M McCabe's cyclomatic complexity, 102 Magnitude of relative error (MRE), 125, 147, 148
Maintainability, 110, 149, 155 flaws, 179 Marketplaces, software, 45 Marking, 286, 300 MARS, 109, 120-1, 130, 154, 161 Mask operators, 177 Matching, 282-3, 285, 300 Measurement in quantum mechanics, 195, 199 Measurement frameworks, definition of, 100 Measures for object-oriented designs, 100, 101-2 application of, 100 distribution of, 113 interrelationship between, 151 mathematical properties of, 100-1 quality benchmarks, 154 and thresholds, 138-9 Mesa, 259, 262, 265, 275 propagation mechanism, 286-7 Metamodel, integrated, 37 Methods, 60, 63, 86 C++, 298 Microphones, 321, 322 Microsoft Corporation, 13, 24-5 System, 275, 278, 280 Microtraps, 237-8 Mixture models, 352, 359 ML, 85, 257, 265 Mobility, 309-10, 320-3, 325 Models of computation, 66, 68-71 choice of, 79-82 component interfaces in, 84-8 examples of, 71-9 frameworks supporting, 88-9 heterogeneous, 82-^ Modes, 76 Modula-3, 254, 257, 265, 269 Monitor, 251 Monosyllables, 320, 326 Moore's law, 190, 191 Multicollinearity, 121-2 Multimodal-multisensor systems, 312-13, 331 Multimodal systems, 305-14, 316-33 design strategies, 326-9 error avoidance and resolution, 310-12 future directions, 308-9, 331-2 long-term, 312-13 motivation for, 309-12 research on recognition error suppression, 316-26 accented speaker study, 317-20
mobile study, 320-3 speech and lip movement studies, 324-6 speech and lip movement, 307, 308, 312, 328 robustness of, 323-6 speech and pen, 307-8, 310, 328 robustness of, 317-23 types of, 307-9 user-centred design issues, 330-1 Multiple derivation, 264-5, 295, 300 Multivariate prediction models, 113, 139-49 design measurement dimensions in, 140 for effort, 140, 144-6, 147-9 for fault-proneness, 139, 140, 141-3, 147-9 overview, 140 predictive power of, 152 Mutable systems, 71 Mutual disambiguation, 310-11,316,317-20, 327, 328, 330-1 Mutual exclusion, 250, 300 MVIEWS, 308 N Name binding, 259, 297 Negotiation, automated, 47 Neutral atoms, 237-8 No-cloning theorem, 202, 206-7, 230 NOC, 101, 137, 139, 160 Noise, 226, 308, 312, 315, 320-3, 324 nonstationary, 315, 323, 325 stationary, 324, 325 Nonlocal transfer, 255-7, 300 Nonreentrant problem, 2 9 3 ^ , 300 Nonresumable operations, 248, 300 Nuclear Magnetic Resonance (NMR), 238 Number Field Sieve, 212 O Object destructors, 263 Object-oriented systems, 39, 40, 60, 63, 86 interface definitions, 61 polymorphic, 40 see also Measures for object-oriented designs On-conditions, 261-2 One-time pads, 221, 222, 223 Operating systems, 88 Operator associativity and precedence, 174-5 Ordinary least-squares regression (OLS), 117 Outliers
influential, 117 multivariate, 121 univariate, 117
P-type software, 8 Pamela, 75-6 Parallel execution, 249 Pascal. 183 Patches. 22, 30 PCA. 112, 114-16, 139-40 Perfection-oriented groups, 172 Performance-oriented groups, 172 PET, 29-30 Phonemes, 324, 326, 328 Photons, 236, 239 PL/I, 255, 275 Pointer variables, 176 Polarization experiment, 192-4, 204-5 used in quantum cryptography, 223-5 Polling, 293 Polynomial algorithms, 211-12 Portable Voice Assistant, 308 Porting of applications, 30-1 Ports, 66, 67, 68-9 Prediction model construction, 113, 117-26 design size impact, 125-6 goodness-of-fit evaluation, 122-5 interaction effect identification, 120 multicollinearity tests, 121-2 multivariate outliers, 121 nonlinear relationship identification, 120 stepwise selection process, 118-19 Prediction model evaluation, 113, 126-31 cost-benefit models 127-31, 148, 152, 153, 156 cross-system application, 152-3 cross-validation, 113, 127, 148 Preprocessor conditionals, 177 Principal Component Analysis. See PCA Principal components (PCs), 114-16 Priorities, 295, 296 Priority inversion, 65 Privacy amplification, 226-8 Procedure calls, remote, 65 Procedures, 60, 63 Processes, 63 Process network (PN) model, 74-5, 79, 81, 83 Program comprehension, 6, 11, 35-8
Program execution traces, 37-8 Programming language choice as factor in fault prevention, 167, 168-9 familiarity issue, 184 see also C; Lisp, Common Propagating, 253, 300 Propagation mechanism, 253, 300 Protected block, 295-6, 300 Ptolemy II, 69, 72, 76, 79, 80, 82-4 Ptolemy project, 84, 89 Publish-and-subscribe models, 76
Quantum communication, 236, 240 see also Quantum cryptography Quantum computing, 189-192, 207-20, 231-40 fault tolerant, 219-20 physical implementations, 2 3 1 ^ 0 general properties, 232-6 realizations, 236-9 quantum algorithms, 210-16 quantum error correction, 216-20, 234 quantum gates, 207-10 quantum simulation, 216 Universal Quantum Computer, 191 Quantum cryptography, 213, 220, 222-8 problems with, 226-8 quantum key distribution (QKD), 223-5, 226 Quantum dots, 238, 239 Quantum electrodynamics, optical cavity, 239 Quantum mechanics, 194-207 mathematics of, 202-7 postulates of, 194-6 theorems of, 201-2 Quantum teleportation, 229-31 Qubits, 192, 203 electron spin, 233, 234, 236 fidelity of, 234-5 flying, 235-6, 239 initialisation of, 235 material, 235, 236, 239 nuclear spin, 233, 234, 236 scalable arrays, 233 speed of, 2 3 3 ^ superconducting, 239-40 types of, 235-6 well-characterized, 232-3 QuickDoc, 308 QuickSet, 307, 308, 310-11, 317-18, 320
R Raise, 249, 259 Randomness, 199, 226 Rapide, 81 Rational choice theory, 346-7 Reactive systems, 62 Reactivity, 62 Real-time environment, 291-2 Real-time object-oriented modeling (ROOM), 65,66,81 Real-time operating systems, 58, 59, 64-5 Receiver-operator curve (ROC), 124-5 Recursive resuming, 275, 300 preventing, 285-90 Redundancy, 217 Reengineering, 22-3, 29-30 Refactoring, 33, 35 Reflection, 86-8 Regression testing, 35 Regression trees, 109, 120, 154 Reliability, 108, 171,292 Rendezvous, 252 Rendezvous model, 75-6, 81, 86-7 Repetitioncode, 217, 218 Reraise, 263 Resuming propagation, 277, 300 Resumption, 260-3 Retry model, 259-60 Return codes, 247, 300 Reusability of code, 149, 155 Reverse engineering, 11, 12, 26 Reversibility, 195, 201 RFC, 101, 137, 157 Ripple effects, 108 Robustness, 85, 308 breaking the robustness barrier, 331-2 design strategies for optimizing, 326-9 of multimodal speech and lip movement systems, 323-6 of multimodal speech and pen systems, 317-23 performance metrics for, 329-31 through multimodality, 312-13, 316 Rotated components, 114, 115 Routine activity hypothesis, 347-8 Routine calls, 260 Run-time environment, 88
S-type software, 8, 31 Salience weighting, 353, 358
Cluster-specific (CSSW), 358-61, 362 test of methodology, 364-9 Saturation, 20 Scenic, 73 Scheduling, task, 64-5 Search algorithms, 210-11 extrapolation search, 211 Grover's, 215-16 sequential search, 210 Sensitivity, 124, 148 Sensory perception, 312-13 Sequel, 275-7, 300 Service packs, 13, 25 Serviceware, 44, 46 Shor's factoring algorithm, 212-13 Sign checking, 178 Signal, 62, 71, 73 Signal processing, 75 Size measures, 131, 132, 137, 147, 151, 152 definitions of, 161 NAInh, 140, 161 NMImp, 139, 161 NMInh, 140, 161 NumPar, 139, 161 Social simulation, 351 Software change, 31-8 change control, 42 change implementation, 33-5 change planning, 32-3, 37 change requests, 32, 37 miniprocess of, 31-2 Software decay, 19-20,28 Software development iterative, 7 strategies during, 3 9 ^ 1 Software engineering cultural change in, 21 knowledge-based, 16 Software evolution contributing factors, 44 laws, 8 ultra rapid, 44-7 Software life span, 23 Software lifecycle costs, 5-6 definition, 4 Software lifecycle models Service-based, 45-7 spiral model, 4 staged model, 4, 12-47 case studies, 24-31 close-down, 13, 2 3 ^
evolution, 13, 16-19,29,41-3 initial development, 12, 13-16, 3 9 ^ 1 phase-out, 13,23,24,43 servicing, 13, 19-23,43 ultra-rapid evolution, 44-7 waterfall model, 4, 13 Software maintenance categorization, 5 definition, 3 outsourcing, 6 stage distinctions, 9 standards, 7, 41 Software releases, 17, 25, 32 Software systems categorisation of, 9-10 Software team expertise, 14 Software transition, 41 Software types, 8 Software value, sustaining, 38-43 Sound synthesis algorithm, 72 Source execution, 253, 300 Spearman p (rho) modeling technique, 131 Specificity, 124, 148, 283-5, 300 Speech Grand Challenge program, 330 Speech systems, unimodal, 314-16 SQUID, 233, 239-40 Stack unwinding, 255, 300 ^charts (star-charts), 78, 88 State-oriented languages, 87 Statecharts, 78, 79, 87, 88 Statemate, 89 States, 76 Static propagation, 273-4, 275-7, 300 Status flags, 247, 248, 301 Stopping rules, 359, 360 Subjective data, 108 Superposition, 196-9,203-4 Synchronous dataflow (SDF), 72, 77, 83 Synchronous exception, 253, 301 Synchronous/reactive (SR) models, 59, 71, 73-^, 78, 79, 83, 89 Syndromes, 217-18 System architecture, 15-16 SystemC, 66, 71,73, 81
Task, 252 Tensor products, 203 Terminating propagation, 277, 301 Termination, 257-9, 276
Threads, 60, 249-50, 301 Thresholds, 138-9 Throwing propagation, 277, 301 Time, modeling of, 80-2 Time evolution, 195 Time-triggered architecture (TTA), 72-3 Timeliness, 58 Token-ring protocol, 73-4 Transfer points, 259, 297 Transformational systems, 62 Transformations, unitary, 195, 206 universal family of, 235 Transitions, 76 Transmission errors, 228 Traveling Salesman Problem, 216 Truncation of variables, 176 Turing machine, 56, 191 Turing paradigm, 59 Type inference, 86 Type systems, 61, 84-5 on-line, 85-6 system-level 84
U UML, 68, 78, 87, 89 Understandability, 110, 149 Unguarded block, 254, 301 Unified Software Development Process, 18-19 Unimodal speech systems, 314-16 Univariate regression analysis, 112-13, 116-17 Unix, 169, 170, 280 signal mechanism, 248-9
Variance ration criteria (VRC), 359-60 Varimax rotation, 114 Verilog, 63, 66, 74 Vemam cipher, 221-2 VHDL, 63,66,71,74 Victim profiHng, 348 Viseme-phoneme mappings, 308 Visemes, 308, 324, 326, 328 Visibility, 274 Vision-based recognition, 327 Visual syntaxes, 67-8 VME operating system, 15, 25-6 VMS, 279 propagation mechanism, 287-90 Vowels, 326 W WER, 314, 329-30 WIMP interfaces, 306 "Windowing", 26 WMC, 102 Word error rates (WERs), 314, 329-30 Wrapping, 21 Wright, 66, 81-2
XP, 19, 42
Y2K problem, 26-7 Young's double-slit experiment, 196-9
Contents of Volumes in This Series Volume 40 Program Understanding: Models and Experiments A. VON MAYRHAUSER AND A . M . VANS
Software Prototyping ALAN M . DAVIS
Rapid Prototyping of Microelectronic Systems APOSTOLOS DOLLAS AND J. D. STERLING BABCOCK
Cache Coherence in Multiprocessors: A Survey MAZIN S. YOUSIF, M . J. THAZHUTHAVEETIL, AND C . R. DAS
The Adequacy of Office Models CHANDRA S. AMARAVADI, JOEY R GEORGE, OLIVIA R. LIU SHENG, AND JAY F. NUNAMAKER
Volume 41 Directions in Software Process Research H. DIETER ROMBACH AND MARTIN VERLAGE
The Experience Factory and Its Relationship to Other Quality Approaches VICTOR R. BASILI
CASE Adoption: A Process, Not an Event JOCK A. RADER
On the Necessary Conditions for the Composition of Integrated Software Engineering Environments DAVID J. CARNEY AND ALAN W. BROWN
Software Quality, Software Process, and Software Testing DICK HAMLET
Advances in Benchmarking Techniques: New Standards and Quantitative Metrics THOMAS CONTE AND WEN-MEI W . HWU
An Evolutionary Path for Transaction Processing Systems CALTON PU, AVRAHAM LEFF, AND SHU-WEI F. CHEN
Volume 42 Nonfunctional Requirements of Real-Time Systems TEREZA G . KIRNER AND ALAN M . DAVIS
A Review of Software Inspections ADAM PORTER, HARVEY SIY, AND LAWRENCE VOTTA
Advances in Software Reliability Engineering JOHN D . M U S A AND WILLA EHRLICH
Network Interconnection and Protocol Conversion MING T. LIU
A Universal Model of Legged Locomotion Gaits S. T. VENKATARAMAN
Volume 43 Program Slicing DAVID W. BINKLEY AND KEITH BRIAN GALLAGHER
Language Features for the Interconnection of Software Components RENATE MOTSCHNIG-PITRIK AND ROLAND T. MITTERMEIR
Using Model Checking to Analyze Requirements and Designs JOANNE ATLEE, MARSHA CHECHIK, AND JOHN GANNON
Information Technology and Productivity: A Review of the Literature ERIK BRYNJOLFSSON AND SHINKYU YANG
The Complexity of Problems WILLIAM GASARCH
3-D Computer Vision Using Structured Light: Design, Calibration, and Implementation Issues FRED W . DEPIERO AND MOHAN M . TRIVEDI
Volume 44 Managing the Risks in Information Systems and Technology (IT) ROBERT N . CHARETTE
Software Cost Estimation: A Review of Models, Process and Practice FIONA WALKERDEN AND ROSS JEFFERY
Experimentation in Software Engineering SHARI LAWRENCE PFLEEGER
Parallel Computer Construction Outside the United States RALPH DUNCAN
Control of Information Distribution and Access RALF HAUSER
Asynchronous Transfer Mode: An Engineering Network Standard for High Speed Communications RONALD J. VETTER
Communication Complexity EYAL KUSHILEVITZ
Volume 45 Control in Multi-threaded Information Systems PABLO A. STRAUB AND CARLOS A. HURTADO
Parallelization of DOALL and DOACROSS Loops—a Survey A. R. HURSON, JOFORD T. LIM, KRISHNA M. KAVI, AND BEN LEE
Programming Irregular Applications: Runtime Support, Compilation and Tools JOEL SALTZ, GAGAN AGRAWAL, CHIALIN CHANG, RAJA DAS, GUY EDJLALI, PAUL HAVLAK, YUAN-SHIN HWANG, BONGKI MOON, RAVI PONNUSAMY, SHAMIK SHARMA, ALAN SUSSMAN AND MUSTAFA UYSAL
Optimization Via Evolutionary Processes SRILATA RAMAN AND L . M . PATNAIK
Software Reliability and Readiness Assessment Based on the Non-homogeneous Poisson Process AMRIT L . GOEL AND KUNE-ZANG YANG
Computer-supported Cooperative Work and Groupware JONATHAN GRUDIN AND STEVEN E . POLTROCK
Technology and Schools GLEN L . BULL
Volume 46 Software Process Appraisal and Improvement: Models and Standards MARK C . PAULK
A Software Process Engineering Framework JYRKI KONTIO
Gaining Business Value from IT Investments PAMELA SIMMONS
Reliability Measurement, Analysis, and Improvement for Large Software Systems JEFF TIAN
Role-based Access Control RAVI SANDHU
Multithreaded Systems KRISHNA M. KAVI, BEN LEE AND ALI R. HURSON
Coordination Models and Language GEORGE A. PAPADOPOULOS AND FARHAD ARBAB
Multidisciplinary Problem Solving Environments for Computational Science ELIAS N . HOUSTIS, JOHN R. RICE AND NAREN RAMAKRISHNAN
Volume 47 Natural Language Processing: A Human-Computer Interaction Perspective BILL MANARIS
Cognitive Adaptive Computer Help (COACH): A Case Study EDWIN J. SELKER
Cellular Automata Models of Self-replicating Systems JAMES A. REGGIA, HUI-HSIEN CHOU, AND JASON D . LOHN
Ultrasound Visualization THOMAS R. NELSON
Patterns and System Development BRANDON GOLDFEDDER
High Performance Digital Video Servers: Storage and Retrieval of Compressed Scalable Video SEUNGYUP PAEK AND SHIH-FU CHANG
Software Acquisition: The Custom/Package and Insource/Outsource Dimensions PAUL NELSON, ABRAHAM SEIDMANN, AND WILLIAM RICHMOND
Volume 48 Architectures and Patterns for Developing High-performance, Real-time ORB Endsystems DOUGLAS C . SCHMIDT, DAVID L . LEVINE AND CHRIS CLEELAND
Heterogeneous Data Access in a Mobile Environment - Issues and Solutions J. B. LIM AND A. R. HURSON
The World Wide Web HAL BERGHEL AND DOUGLAS BLANK
Progress in Internet Security RANDALL J. ATKINSON AND J. ERIC KLINKER
Digital Libraries: Social Issues and Technological Advances HSINCHUN CHEN AND ANDREA L . HOUSTON
Architectures for Mobile Robot Control JULIO K. ROSENBLATT AND JAMES A. HENDLER
Volume 49 A Survey of Current Paradigms in Machine Translation BONNIE J. DORR, PAMELA W. JORDAN AND JOHN W. BENOIT
Formality in Specification and Modeling: Developments in Software Engineering Practice J. S. FITZGERALD
3-D Visualization of Software Structure MATHEW L . STAPLES AND JAMES M . BIEMAN
Using Domain Models for System Testing A. VON MAYRHAUSER AND R. MRAZ
Exception-handling Design Patterns WILLIAM G . BAIL
Managing Control Asynchrony on SIMD Machines—a Survey NAEL B . ABU-GHAZALEH AND PHILIP A. WILSEY
A Taxonomy of Distributed Real-time Control Systems J. R. AGRE, L. P. CLARE AND S. SASTRY
Volume 50 Index Part I Subject Index, Volumes 1-49
Volume 51 Index Part II Author Index Cumulative list of Titles Table of Contents, Volumes 1-49
Volume 52 Eras of Business Computing ALAN R. HEVNER AND DONALD J. BERNDT
Numerical Weather Prediction FERDINAND BAER
Machine Translation SERGEI NIRENBURG AND YORICK WILKS
The Games Computers (and People) Play JONATHAN SCHAEFFER
From Single Word to Natural Dialogue NIELS OLE BERNSEN AND LAILA DYBKJAER
Embedded Microprocessors: Evolution, Trends and Challenges MANFRED SCHLETT
Volume 53 Shared-Memory Multiprocessing: Current State and Future Directions PER STENSTROM, ERIK HAGERSTEN, DAVID J. LILJA, MARGARET MARTONOSI AND MADAN VENUGOPAL
Shared Memory and Distributed Shared Memory Systems: A Survey KRISHNA KAVI, HYONG-SHIK KIM, BEN LEE AND A. R. HURSON
Resource-Aware Meta Computing JEFFREY K. HOLLINGSWORTH, PETER J. KELEHER AND KYUNG D. RYU
Knowledge Management WILLIAM W . AGRESTI
A Methodology for Evaluating Predictive Metrics JARRETT ROSENBERG
An Empirical Review of Software Process Assessments KHALED EL EMAM AND DENNIS R. GOLDENSON
State of the Art in Electronic Payment Systems N. ASOKAN, P. JANSON, M. STEINER AND M. WAIDNER
Defective Software: An Overview of Legal Remedies and Technical Measures Available to Consumers COLLEEN KOTYK VOSSLER AND JEFFREY VOAS
Volume 54 An Overview of Components and Component-Based Development ALAN W . BROWN
Working with UML: A Software Design Process Based on Inspections for the Unified Modeling Language GUILHERME H. TRAVASSOS, FORREST SHULL AND JEFFREY CARVER
Enterprise JavaBeans and Microsoft Transaction Server: Frameworks for Distributed Enterprise Components AVRAHAM LEFF, JOHN PROKOPEK, JAMES T. RAYFIELD AND IGNACIO SILVA-LEPE
Maintenance Process and Product Evaluation Using Reliability, Risk, and Test Metrics NORMAN F. SCHNEIDEWIND
Computer Technology Changes and Purchasing Strategies GERALD V. POST
Secure Outsourcing of Scientific Computations MIKHAIL J. ATALLAH, K.N. PANTAZOPOULOS, JOHN R . RICE AND EUGENE SPAFFORD
Volume 55 The Virtual University: A State of the Art LINDA HARASIM
The Net, the Web and the Children W. NEVILLE HOLMES
Source Selection and Ranking in the WebSemantics Architecture Using Quality of Data Metadata GEORGE A. MIHAILA, LOUIQA RASCHID, AND MARIA-ESTER VIDAL
Mining Scientific Data NAREN RAMAKRISHNAN AND ANANTH Y. GRAMA
History and Contributions of Theoretical Computer Science JOHN E. SAVAGE, ALAN L. SELMAN AND CARL SMITH
Security Policies ROSS ANDERSON, FRANK STAJANO AND JONG-HYEON LEE
Transistors and IC Design YUAN TAUR