SAVING HUMAN LIVES
Issues in Business Ethics VOLUME 21
Series Editors Henk van Luijk, Emeritus Professor of Business Ethics Patricia Werhane, University of Virginia, U.S.A.
Editorial Board Brenda Almond, University of Hull, Hull, U.K. Antonio Argandoña, IESE, Barcelona, Spain William C. Frederick, University of Pittsburgh, U.S.A. Georges Enderle, University of Notre Dame, U.S.A. Norman E. Bowie, University of Minnesota, U.S.A. Brian Harvey, Manchester Business School, U.K. Horst Steinmann, University of Erlangen-Nurnberg, Nurnberg, Germany
The titles published in this series are listed at the end of this volume.
Saving Human Lives Lessons in Management Ethics
by
ROBERT ELLIOTT ALLINSON
Soka University of America
The Chinese University of Hong Kong
A C.I.P. Catalogue record for this book is available from the Library of Congress.
ISBN-10: 1-4020-2905-5 (HB)
ISBN-10: 1-4020-2980-2 (e-book)
ISBN-13: 978-1-4020-2905-9 (HB)
ISBN-13: 978-1-4020-2980-6 (e-book)
Published by Springer, P.O. Box 17, 3300 AA Dordrecht, The Netherlands. www.springeronline.com
Printed on acid-free paper
All Rights Reserved © 2005 Springer No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Printed in the Netherlands.
To my wife, Iréna, whose unfailing support and brilliant suggestions inform this book and grace my existence. To the bravery of the late Justice Peter Mahon, Stuart Macfarlane and Captain Gordon Vette, who risked their careers and their lives to research and investigate the events that made up the disaster on Mt. Erebus, whose work may serve as an inspiration to us all for the courage that humanity needs to struggle for the cause of justice.
Saving Human Lives is an attempt to demonstrate how ethical management can save human lives. Its thesis is explored through a detailed investigation of case studies, all of which support the claim that greater attention to ethics in management would have saved human lives. To this author, the saving of human lives is the greatest possible contribution ethics can make to management, and it is to this end-goal that this book is dedicated.

As one reads through the chapters, one readily notices parallels to numerous global crises and disasters that have occurred since the preparation of this book. The Columbia space shuttle disaster is an ominous reminder that the lessons of the Challenger space shuttle disaster were not learned. The terrible events of September 11 in the United States are horrific reminders that the lessons of management ethics could also have prevented the needless loss of lives. In both of these recent cases, warnings of the impending disasters existed but were neglected or trivialized. The prioritization of ethics that is strongly argued for throughout this volume would not have tolerated such a neglect of the advance warnings of serious, potential threats to human lives.

In giving special thanks to those who have helped in the preparation of this volume, I wish to acknowledge the contributions of Stuart Macfarlane, Captain Gordon Vette, Mrs. Justice Mahon, Mrs. Jim Collins and Peter McErlane, all of whom gave valuable information which aided my investigations into the disaster on Mt. Erebus enormously. All of these individuals were kind enough to give of their personal time when I visited with them in New Zealand, and provided both materials and a focus to my investigation whose value was immeasurable. Without the opportunity for personal interviews with all of the aforementioned, there would have been no way of obtaining crucial information which has remained unpublished.
I also wish to thank Roger Boisjoly for keeping up an extensive personal correspondence with me that shed invaluable light on my investigations of the Challenger space shuttle disaster. I wish to thank the University of Canterbury, which offered me an Erskine Fellowship that enabled me to come to New Zealand, where I was able to deepen my investigations of the disaster on Mt. Erebus that I had originally undertaken over a decade earlier. Some of the material in this book originally appeared in my earlier volume, Global Disasters: Inquiries into Management Ethics.
The materials in the chapters on the Challenger disaster and the disaster on Mt. Erebus are revised, and new chapters have been added, including case studies of the Titanic disaster and the Vasa disaster. The chapters on the Challenger disaster and the disaster on Mt. Erebus contain key points never before published which cast great light on these disasters and provide powerful support for the thesis of the present book that ethical management could have prevented these disasters.
TABLE OF CONTENTS

CHAPTER 1  ACCIDENTS, TRAGEDIES AND DISASTERS .......... 1
  THE RULE OF ACCIDENTAL .......... 3
  THE EXPLANATION OF HUMAN ERROR .......... 3
  THE EXPLANATION OF A BREAKDOWN OF A MATERIAL OR TECHNICAL COMPONENT AND ITS COROLLARY, “RISKY TECHNOLOGY” .......... 3
  RISKY OR UNRULY TECHNOLOGY? .......... 4
  THE EXPLANATION OF ORGANIZATIONAL INERTIA OR BUREAUCRATIC DRIFT .......... 6
  ACCIDENTS WILL HAPPEN .......... 7
  THE WORD ‘ACCIDENT’ .......... 8
  THE BELIEF IN MONOCAUSALITY .......... 12
  MULTI-CAUSALITY AND MULTIPLE RESPONSIBILITY .......... 14
  FAULT FINDING AND THE SCAPEGOAT .......... 15
  WARNINGS AND ETHICS .......... 17
  FREEDOM AND ETHICS .......... 18
  NOTES .......... 19

CHAPTER 2  THE INTERNAL RELATIONSHIP BETWEEN ETHICS AND BUSINESS .......... 21
  ETHICS AS INVOLVED IN THE GOALS OF AN ORGANIZATION .......... 22
  ETHICS AND THE CONDUCT OF BUSINESS ENTERPRISE .......... 33
  ETHICS AND THE INFRASTRUCTURE OF A BUSINESS ORGANIZATION .......... 33
  THE WILL TO COMMUNICATE: FORMAL AND INFORMAL REPORTING CHANNELS .......... 34
  THE UNDERLYING PRINCIPLE OF ETHICAL COMMUNICATION: RESPECT FOR PERSONS .......... 34
  ETHICS AND INFORMAL CHANNELS OF COMMUNICATION .......... 36
  ETHICS AND FORMAL REPORTING CHANNELS .......... 37
  THE ARGUMENT FOR THE EQUIVALENCE IN EFFECT OF COMPETING NORMATIVE ETHICAL JUSTIFICATIONS .......... 39
  NOTES .......... 46

CHAPTER 3  THE BUCK STOPS HERE AND IT STOPS EVERYWHERE ELSE AS WELL .......... 48
  THE BUCK STOPS HERE .......... 48
  THE WILL TO COMMUNICATE .......... 50
  THE MANAGER’S TASK .......... 52
  THE MANAGER AS EDUCATOR AND FACILITATOR OF GOOD WILL .......... 53
  NOTES .......... 55

CHAPTER 4  CRISIS MANAGEMENT AND DISASTER PREVENTION MANAGEMENT .......... 60
  CRISIS MANAGEMENT: THE “BAND-AID” APPROACH .......... 60
  CONCEPTUAL PREPAREDNESS .......... 64
  THE EXPLICIT PRIORITIZATION OF A SAFETY ETHOS .......... 66
  THE INDEPENDENTLY FUNDED SAFETY BOARD WITH FULL VETO POWERS OVER OPERATIONAL DECISIONS .......... 67
  NOTES .......... 68

CHAPTER 5  THE VASA DISASTER .......... 71
  THE CONSTRUCTION OF THE VASA .......... 72
  THE STABILITY TEST .......... 73
  THE QUESTION OF BALLAST .......... 74
  THE WIND PRESSURE ON THE SAILS .......... 75
  THE VASA’S CENTER OF GRAVITY .......... 76
  WHY THE VASA CAPSIZED .......... 76
  CONCLUSIONS .......... 82
  NOTES .......... 82

CHAPTER 6  THE TITANIC DISASTER .......... 84
  METAPHYSICAL BELIEF SYSTEMS .......... 84
  LOSS OF LIFE .......... 87
  REPORT OF THE COURT (BRITISH REPORT) .......... 87
  THE COLLISION .......... 87
  CAUSES OF THE DISASTER .......... 88
  WARNINGS .......... 88
  WARNINGS TO PASSENGERS .......... 91
  SPEED OF THE SHIP .......... 92
  WEATHER .......... 92
  CAUSES OF DEATHS .......... 92
  HOW ELSE COULD THE TITANIC DISASTER HAVE BEEN PREVENTED? .......... 92
  THE SAILING ORDERS .......... 93
  RELEVANT DESIGN FEATURES .......... 94
  RIVETS .......... 94
  THE INADEQUACY OF THE HUMAN ERROR HYPOTHESIS .......... 96
  LIFEBOATS .......... 97
  THIRD-CLASS PASSENGERS .......... 98
  NEARBY RESCUE POSSIBILITIES .......... 99
  THE RESCUE BY THE S.S. “CARPATHIA” .......... 99
  S.S. “CALIFORNIAN” .......... 99
  FINDINGS OF THE COURT .......... 101
  LOOK-OUT .......... 101
  SPEED .......... 102
  RECOMMENDATIONS .......... 102
  NOTES .......... 104

CHAPTER 7  THE SPACE SHUTTLE CHALLENGER DISASTER .......... 107
  A BRIEF SYNOPSIS .......... 108
  KEY WORDS .......... 109
  THE WORD ‘ACCIDENT’ .......... 109
  CAUSE AND CONTRIBUTING CAUSE .......... 112
  DECISION MAKING .......... 115
  THE ATMOSPHERE OF THE DECISION MAKING PROCESS .......... 115
  THE DECISION MUST BE MADE UNDER PRESSURE .......... 117
  A FIXED DEADLINE MUST BE MET .......... 118
  A WRONG DECISION WILL HAVE GRAVE CONSEQUENCES .......... 119
  THE PRESENCE OF IRREGULARITIES .......... 120
  ABBREVIATED CHRONOLOGY OF EVENTS LEADING TO CHALLENGER LAUNCH .......... 121
  (a.) THE LACK OF ANY CLEAR UNIFORM GUIDELINES AS TO MORAL CRITERIA .......... 123
  (b.) A FORMULA FOR THE ATTRIBUTION OF THE RESPONSIBILITY FOR THE BURDEN OF PROOF IN AN ARGUMENT .......... 124
  (c.) WHAT WOULD COUNT AS SUFFICIENT JUSTIFICATION FOR CONCLUSIONS .......... 126
  (d.) THE LACK OF UNDERSTANDING OF WHAT INPUT IS RELEVANT TO A DECISION .......... 126
  THE LACK OF A SPELLED OUT DECISION MAKING MECHANISM .......... 130
  MANAGEMENT STRUCTURE .......... 131
  THE LANGUAGE OF COMMUNICATION .......... 134
  RESPONSIBILITY: BOTTOM UP .......... 137
  TOP DOWN RESPONSIBILITY .......... 139
  DORMANT STAGE .......... 139
  THE WILL TO COMMUNICATE .......... 141
  NOTES .......... 143
CHAPTER 8  POST-CHALLENGER INVESTIGATIONS .......... 154
  THE WORD ‘ACCIDENT’ .......... 154
  THE CONFLATION OF GENERAL UNKNOWN RISK OF SPACE TRAVEL WITH THE SPECIFICALLY FOREKNOWN RISK OF THE O-RINGS .......... 155
  SAFETY PRIORITY .......... 157
  DECISION MAKING .......... 157
  SAFETY FIRST? .......... 158
  IS THERE A GREATER SENSE OF RESPONSIBILITY NOW? .......... 162
  WERE MIDDLE MANAGERS SIMPLY FOLLOWING POLICY? .......... 163
  WERE THE MIDDLE MANAGERS MORAL? .......... 164
  WAS THE CHALLENGER DISASTER A MISTAKE? .......... 165
  WAS THE CHALLENGER DISASTER AN ACCIDENT? .......... 165
  WAS THE CHALLENGER DISASTER THE INEVITABLE OUTCOME OF CULTURAL DETERMINISM? AND, WAS MORAL CHOICE AND MORAL RESPONSIBILITY THEREFORE IRRELEVANT AS AN EXPLANATION? .......... 166
  THE HISTORY OF O-RING EROSION .......... 168
  NORMALIZED DECISIONS? .......... 170
  LINKS BETWEEN TEMPERATURE AND EROSION .......... 170
  TRUSTING ESTABLISHED PATTERNS OF DEVIANCE? .......... 173
  FAITH IN THE SECONDARY SEAL? .......... 173
  DID ENGINEERS BELIEVE THAT THE CHALLENGER WAS SAFE TO FLY? .......... 176
  WERE THE ENGINEERS UNAWARE OF WHAT WAS A PROBABLE OUTCOME? .......... 178
  THE QUESTION OF “HARD DATA” .......... 180
  ETHICAL DECISION MAKING .......... 181
  CONCLUSION .......... 182
  NOTES .......... 183

CHAPTER 9  THE HERALD OF FREE ENTERPRISE DISASTER .......... 198
  THE IMMEDIATE CAUSE OF THE DISASTER AND MULTI-RESPONSIBILITY .......... 199
  THE ORDERS .......... 202
  BOTTOM UP RESPONSIBILITY .......... 204
  TOP DOWN RESPONSIBILITY .......... 208
  A DYSFUNCTIONAL MANAGEMENT .......... 209
  THE LACK OF A CLEAR ATTRIBUTION OF DOMAINS OF RESPONSIBILITY FOR OFFICERS .......... 210
  THE LACK OF THE ISSUANCE OF CLEAR INSTRUCTIONS .......... 210
  TECHNICAL COMPONENT .......... 212
  THE CLOSING OF THE DOORS .......... 213
  THE WILL TO COMMUNICATE .......... 213
  CONCLUSIONS OF THE COURT .......... 218
  NOTES .......... 220
CHAPTER 10  THE KING’S CROSS UNDERGROUND FIRE .......... 223
  EPISTEMOLOGICAL FRAMEWORKS COMPARED .......... 224
  FENNELL’S EPISTEMOLOGICAL FRAMEWORK .......... 227
  THE USE OF WORDS .......... 228
  THE CAUSE OF THE FIRE .......... 230
  RESPONSIBILITY FOR THE FIRE: TOP DOWN .......... 233
  RESPONSIBILITY: BOTTOM UP .......... 235
  THE IMPORTANCE OF A SAFETY ETHOS .......... 246
  THE ROLE OF MANAGEMENT STRUCTURE IN BLOCKING THE FLOW OF INFORMATION .......... 247
  FENNELL’S RECOMMENDATIONS: THE PRIMACY OF SAFETY .......... 249
  PHILOSOPHICAL UNDERPINNINGS .......... 251
  NOTES .......... 252

CHAPTER 11  THE DISASTER ON MT. EREBUS .......... 256
  A SHORT HISTORY OF THE DISASTER .......... 258
  THE CHIPPINDALE REPORT .......... 258
  JUSTICE MAHON’S CRITIQUE OF THE CHIPPINDALE REPORT .......... 259
  THE EVIDENCE FROM THE FLIGHT-DECK TAPES .......... 260
  VETTE’S TEXT .......... 265
  MACFARLANE’S NOTES ON VETTE’S TEXT .......... 265
  MACFARLANE’S REPLY TO 1. (THAT THE CHANGE IN COMPUTER FLIGHT PATH DID NOT MISLEAD THE CREW) .......... 266
  MACFARLANE’S REPLY TO 2. (THAT PILOTS VIOLATED INSTRUCTIONS NOT TO DESCEND BELOW 16,000 FEET) .......... 268
  MACFARLANE’S REPLY TO 3. (THAT THE CREW WERE UNCERTAIN OF THEIR POSITION) .......... 270
  MACFARLANE’S REPLY TO 4. (THAT RADAR WOULD HAVE SHOWN EREBUS AHEAD) .......... 271
  TAKING A PHENOMENOLOGICAL VIEW .......... 272
  THE WHITEOUT PHENOMENON .......... 272
  PHENOMENOLOGICAL APPROACH: TAPES .......... 274
  THE COHERENCE THEORY OF TRUTH .......... 276
  MISMANAGEMENT .......... 278
  THE CAUSE OF THE DISASTER .......... 280
  DEFECTS IN ADMINISTRATIVE STRUCTURE .......... 283
  DEFECTS IN ADMINISTRATIVE COMMUNICATIONS SYSTEM .......... 284
  SUMMARY OF MANAGEMENT DEFECTS .......... 285
  THE LACK OF ANY SAFETY ETHOS .......... 287
  TOP DOWN RESPONSIBILITY .......... 287
  NOTES .......... 289

APPENDICES TO CHAPTER 11 .......... 313
  APPENDIX ONE .......... 313
  APPENDIX TWO .......... 316
  APPENDIX THREE .......... 320
  NOTES .......... 320
CHAPTER 12  MORAL RESPONSIBILITY AND TECHNO-ORGANIZATION .......... 322
  NOTES .......... 332

BIBLIOGRAPHY .......... 334
INDEX .......... 347
CHAPTER 1

ACCIDENTS, TRAGEDIES AND DISASTERS

A prime thesis of this book is that a leading cause of man-made, corporate or organizational disasters is a lack of accountability, especially on the part of top management. This lack of accountability can be traced to, or at least shown to be compatible with, a certain metaphysical view of the world: the view of determinism. The thesis of determinism is that no matter what human beings do, or do not do, certain outcomes are inevitable.

In ancient Greek times, to believe that certain outcomes were inevitable was to believe that one was a tool of the play of the gods, or of Fate. In terms of the highly consequential and unfortunate events of life, the view that one was ruled by Fate was linked with the idea that life was inevitably tragic. Tragedy as a form of drama came into being to justify the fated outcomes and, in the process, to ennoble the participants in the tragic events. The usage of such terms in these modern times has degenerated.1 Whenever a disaster is described as a ‘tragedy’, one does not need to inquire further into the cause or causes of the event, because it has now been lifted outside of human power into the domain of Greek drama and fate. As a tragedy, it was fated to be, and the only response is to accept it (and others of its kind) as part of the inescapable human situation. One may mourn it and sympathize briefly with the victims. But one is freed (by thinking of it as a tragedy) from the need to consider how one may prevent like disasters from happening in the future. While it may be objected by the reader that this is too purist a view of the everyday person’s understanding of the word ‘tragedy’, it is the view of the present author that one must be careful both of unconscious associations in one’s own mind and of the associations that such a word usage creates in the minds of future readers. The word ‘tragedy’ possesses a grand and influential history.
The tragic heroes and heroines of Greek and Shakespearean tragedy generally stood for higher values and it was because they stood for those higher values combined with some internal character flaws that they determined their own fate. They come down as heroes and
heroines to present readers precisely because of the combination of nobility and internal character flaws that brought about their own end. It is Oedipus’ desire to know, coupled with his arrogance and rash action, that is in the end the cause of his own tragic fate. It is Antigone’s placing of her ethical principles above the law of the land, coupled with her open defiance of her uncle, the king, that brings about her tragic fate. It is Macbeth’s ambition to be king, coupled with a susceptibility to be influenced by Lady Macbeth, that causes him to commit murder. In the case of disasters caused by a combination of a lack of sound management practises and a lack of ethics on the part of senior and line management, there are no noble heroes or heroines, and character flaws should play no role in a properly managed organization.

While few today would be willing to subscribe to the view that life is fated or tragic in the high sense of the term, it is the thesis of the present author that other forms of determinism rule today. The idea of chance or accident has overtaken the whimsical actions of the Greek gods, and the mysteries of Technology have overtaken the mysteries of the three Fates of Greek mythology. Such beliefs, which can be referred to as determinism by Accidents and determinism by Technological/Organizational complexity, are argued in the course of this work to be key factors which have influenced decision making that has led to disastrous outcomes. While these are certainly not the only factors, they can play a role in influencing or justifying additional key factors such as a lack of ethical and epistemological accountability. In his book, Normal Accidents: Living with High-Risk Technologies, Charles Perrow has given birth to the idea that certain technological disasters are inevitable due to the interlocking complexity of technology and bureaucratic organizations, and that accidents under such circumstances become normal and expected.
With such a thesis, cosmic and technological determinism are woven into the same fabric.
THE RULE OF ACCIDENTAL

The tacit thesis that the leading cause of disasters is the accidental typically manifests itself in three different forms. One form in which the accidental appears is in the guise of human error; the second is in the guise of a breakdown of a material or technical component; the third is through the inertia of compliance with the mechanisms of organizational routines. This is not intended as a complete list of the manifestations of the accident or the mistake; it is only intended to be illustrative and educational.
THE EXPLANATION OF HUMAN ERROR

Human error appears in a number of guises. It can appear simply as human error. It can appear as pilot error, operator error, faulty judgement, a judgement call, and so on. In one and all of these forms, it is ultimately ineliminable, because human error is accepted as a fundamental human condition. Human beings, being finite and imperfect, cannot avoid error. As there will always be human error, there is not much that can be done about it. The inevitability of human error is noted in such popular phrases as ‘we all make mistakes’; ‘nobody’s perfect’; even ‘it’s human nature’; or, as a justification for someone accused of an error, ‘I’m only human’. Since human error is ineliminable (under anyone’s view), if the disaster is caused by human error, then the disaster was not preventable and the matter can be put to rest.2
THE EXPLANATION OF A BREAKDOWN OF A MATERIAL OR TECHNICAL COMPONENT AND ITS COROLLARY, “RISKY TECHNOLOGY”

One often hears the phrase, ‘it was the part that was faulty.’ If the explanation offered as the cause of a disaster is not human error, it
will most frequently be a malfunctioning component part. A famous example of the faulty part explanation is the O-ring malfunction explanation of the Challenger disaster. Since ultimately all material parts are finite and imperfect and subject to breakdown, it is inevitable that eventually any and every part may malfunction. Hence, the explanation of the breakdown of a material component appears, like the explanation of human error, to be ineliminable. Sometimes the material part explanation is compounded with the use of the phrase ‘high risk technology’, so as to imply that there is an even greater likelihood of a parts breakdown the higher the technology that is in use. What is usually forgotten in the parts analysis are several key questions, such as why such technology was chosen in the first place (in the case of the Challenger, the malfunctioning O-ring design was the least safe of the four engineering designs submitted). Thus, it was not so much a case of ‘risky technology’ (that was bound to go wrong), but a case of risky choice on the part of the management that selected the design to be built. Other key questions that are usually neglected in the ‘malfunctioning part’ explanation are: Were there regular, recent and thorough inspections of the part in question? Did the part pass rigorous safety standards? Did any warnings exist that such a part was likely to malfunction with disastrous consequences? The answer to the third question was ‘yes’ in the case of the Challenger disaster, as there was an eight-year history of red-flagged warnings concerning the continued reliance on the O-rings.
RISKY OR UNRULY TECHNOLOGY?

From the title of Perrow’s book, Normal Accidents: Living with High-Risk Technologies (1984, 1999), to the title of Diane Vaughan’s 1996 book, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, one gains the impression from the post-Challenger literature that one is today a victim of the ever increasing advances in risky technology. Vaughan states in her preface that,
‘The practical lessons from the Challenger accident warn us about the hazards of living in this technological age’. In the last sentence of her preface she writes, ‘This case directs our attention to the relentless inevitability of mistake in organizations - and the irrevocable harm that can occur when those organizations deal in risky technology’. But there was no necessity to fly if the technology were that risky, or to choose that technology in the first place. The O-ring design of giant rubber gaskets keeping combustible hot gases from leaking out ranked fourth out of the four submitted engineering designs and, according to one important article co-written by Trudy Bell, Senior Editor of the engineering journal IEEE Spectrum, and Karl Esch, ‘The Space Shuttle: A case of subjective engineering’, Vol. 26, No. 6, June 1989, p. 44 [curiously omitted from Vaughan’s list of sources], was the chief cause of the Challenger disaster. (IEEE is the acronym for the Institute of Electrical and Electronics Engineers.) The real culprit was not the ‘risky technology’ of her subtitle, which locates the risk in the technology, but ‘risky assessment’, which locates the risk in the decision to employ the technology without basing that decision, and its continued use, on known safe designs and on performance data rather than on subjective judgements. The phrase ‘risky technology’ mistakes the effect for the cause. It is more appropriate to speak of risky assessment (not ‘faulty judgment’, for that could be an even more sophistic version of the fallacious inevitable human error hypothesis discussed above). ‘Risky assessment’ clearly locates the problem in the choice of unreliable and, in the case where life and death risks (of others) are being taken, unethical assessment strategies. Perhaps the best descriptive phrase to use is ‘ethically and epistemologically reckless assessment’.
What is lost in the pseudo-concept of technological hazard or risky technology is that there is no need to choose a risky or an unruly technology. Turner and Pidgeon are mistaken (as is Vaughan) when they say that ‘... Vaughan points out that the shuttle and its operation were at the cutting edge of high performance aerospace systems. [certainly not the O-rings!] As a consequence the engineers were dealing with the ongoing development of an “unruly” technology’3. Not so: there was no need to choose the poorest design available. Safer designs were available at the time; Aerojet’s design was safer. In fact, the solid rocket joints have actually been completely
redesigned. There are now three O-rings and, when the boosters are firing, the pressure from the booster makes for a tighter seal. The astronauts now have access to explosive bolt escape hatches and are now provided with both parachutes and space pressure suits. The astronauts are now informed of any potential problems with a launch in question4. As will be discussed in Chapter Eight below, the astronauts could in fact have been saved with the installation of an abort system, whereas Vaughan states that no abort system was possible (despite the fact that an abort system was indeed possible according to a number of sources which are not to be found in her bibliography, including Mike W. Martin and Roland Schinzinger, Ethics in Engineering, Second Edition, New York: McGraw Hill Company, 1989, and even a source that she frequently cites, Richard Lewis, Challenger: The Fateful Voyage, New York: Columbia University Press, 1988, which points out that it was an issue of policy, not possibility, that no abort system was installed), leaving her readers with the impression that the two civilians and five astronauts were helpless victims of risky technology. According to Schwartz, at the time of the Challenger launch, NASA was cutting its safety budget by half a billion dollars (New York Times, 24 April 1986) and its quality control staff by 70% (New York Times, 8 May 1986).5
THE EXPLANATION OF ORGANIZATIONAL INERTIA OR BUREAUCRATIC DRIFT

While this is not a common manifestation of the accidental cause, it can be cited as one. The explanation is that as bureaucracy in organizations increases, there is a tendency for human responsibility for decisions to be supplanted by bureaucratic processes. As a result, as the good poet says, ‘there is many a slip twixt cup and lip’. While large organizations inevitably breed inertia, what is neglected in this explanation is that if the decision making of the organization poses life and death consequences for certain individuals, then such decision making cannot be left to the vagaries of inertia.
ACCIDENTS, TRAGEDIES AND DISASTERS
Management is responsible for ensuring that life and death decisions are not allowed to be decided by bureaucratic drift.
ACCIDENTS WILL HAPPEN

One of the most prevalent beliefs that tends to undermine the project of disaster prevention management before it gets underway is the belief that accidents are inevitable given the cosmic design or lack of design of the universe. Just as the belief in the inevitability of human error can function as a strong disincentive to developing an orientation towards disaster prevention, the belief that accidents will happen and there is no way to prevent them also functions as a strong disincentive to building a comprehensive program of disaster prevention management. The important thing to focus on for the present moment is that both of these are beliefs; they are not established facts in any sense of the word. While no one would attempt to argue for the possibility of eliminating human error altogether, that is not the same as arguing against the possibility of reducing human error to an ineliminable minimum. That human error can be reduced in its occurrence and its effects is one of the by-products of disaster prevention management. How this occurs will be demonstrated in the empirical cases to be examined. But the first step is to demythologize the belief that nothing can be done about human error. The term ‘demythologize’ is preferable to the term ‘deconstruct’ because while deconstruction can take place to serve a number of purposes, the motive behind demythologizing is to attempt to take away the power of the belief in question to influence choices and behavior by showing that the belief is a myth. While human error cannot be eliminated, one can reduce the importance of this fact by replacing the phrase ‘there will always be human error’ with the phrase, ‘the incidence and the effects of human error can be greatly reduced’. Likewise with the thesis, ‘accidents will happen’: there is no need to argue that accidents will never happen.
In order to properly demythologize the phrase ‘accidents will happen’, it can be altered to read, ‘corporate disasters are largely not accidents at all, but nearly always functions of mismanagement’. Thus, one need not lock horns
with the view that “accidents are inevitable”. All one needs to do is to provide sound arguments that disasters of the sort that this book addresses are hardly accidental in their occurrence but arise from a series of bad management practises. These bad management practises can be said to arise from a lack of ethical sensitivity or of ethical consciousness.
THE WORD ‘ACCIDENT’

If the view that is presented in this book turns out to be correct, it is rather important that one avoids the use of the word ‘accident’ to refer to the kinds of disasters under discussion. According to its dictionary definition, the word ‘accidental’ carries with it both the connotations of something that occurs by chance and something nonessential or incidental.6 The use of the word ‘accident’ in this connection carries with it the connotation that whatever happened (as the Challenger “accident”) was inevitable (since accidents are inevitable and the Challenger disaster was an accident, it was inevitable), and therefore could not have been prevented. As will be discussed below, literature concerned with the Challenger disaster abounds with the term ‘accident’ to refer to it. Even the title of the Presidential Report claims to be the Report of the Presidential Commission ... on the Challenger Accident. It is indeed unfortunate that such a prestigious body adds its weight through its choice of language to the belief system that disasters such as the Challenger were not preventable and, also, as the word ‘accident’ also connotes, were somewhat trivial in nature. (One does not refer to an earthquake, for example, as an accident even though it was not preventable by known human means.) While the Presidential Commission might not have thought that the Challenger disaster was not preventable, their use of the term ‘accident’ in the title and throughout their report carried with it an influence all of its own. The thesis that “accidents will happen” and that therefore nothing can be done to prevent their occurrence reaches its logical fulfillment in the thesis of Charles Perrow that accidents are so inevitable and therefore so non-preventable that we are even justified in calling them
“normal”. Such a usage does extreme violence to the concept of an accident, which by definition would be an occurrence that was not a regular occurrence and therefore abnormal rather than normal. Perrow’s intention must be to create the impression that “accidents” are part of the normal sequence of events. But statistically this is not the case. Nevertheless, we find the main title of his book to be Normal Accidents, where he not only refers to such major disasters as Three Mile Island as accidents (here, he is not alone, as the Presidential Report also refers to the Three Mile Island Accident), but moreover claims that nothing at all can be done to prevent their occurrence. Nothing could be further from the thesis of this present book. Perrow’s view, however, cannot be simply dismissed. He argues that the high technology of today carries with it such interlocking complexity that it is impossible to eliminate the possibility of accidents. The problem with his analysis is that one must bear in mind that his thesis that “accidents are inevitable and normal” is a belief, and not a given fact. Some of the evidence that he provides in his book does not support his conclusion. For example, in his treatment of the Erebus disaster, which occupies three pages of discussion, he cites the complexity and the coupling of the system that resulted in the substitution of the wrong programming for the pilot to follow. In the chapter of this present book, ‘The Disaster on Mount Erebus’, it is argued that the fact that this substitution went undetected and/or was not communicated to the pilot was due to faulty management practises and a lack of an ethical imperative, both of which could have been otherwise.7 If one leaves it at simple “complexity and interactive coupling”, then management is not responsible.
The Erebus example, at the very least, does not support Perrow’s conclusion that this disaster was unpreventable, unless one simply wants to hold onto the implacable belief that the complexity of causes in and of itself is of such a nature that there is no place at all for the concept of responsibility. The ‘accident’ and ‘tragedy’ language pervades many sources. Claus Jensen’s book on the Challenger, published by Farrar, Straus and Giroux in 1996, bears the title, No Downlink, A Dramatic Narrative about the Challenger Accident and Our Time. In Rosa Lynn B. Pinkus, Larry J. Shuman, Norman P. Hummon, Harvey Wolfe,
Engineering Ethics, Balancing Cost, Schedule, and Risk, Lessons Learned from the Space Shuttle, published by Cambridge University Press in 1997, the term ‘accident’ appears on page 140 when describing the Challenger, ‘accident’ is used three times in the next two pages, ‘tragic accident’ on pp. 240 and 278 (a double whammy), and forms part of the title of chapter 14, thus appearing as a header of every page from p. 277 to p. 327 in the powerfully influencing phrase, ‘The Challenger [sic] Accident’. When the Challenger disaster is referred to as a tragedy, it summons up an aura of inevitability and nobility when, in fact, it was horrific and not tragic at all. To reference it as ‘tragic’ implies not only that it was noble and hence in some important sense worthy and redeeming but also that it was fated and thus unpreventable. The term ‘tragedy’ takes the responsibility for the decision out of the hands of the launch decision makers altogether and assigns the responsibility to a higher force. Today it is the gods of Technology instead of the gods of Fate. The meaning of ‘tragedy’ as mournful serves to focus attention on the victims and not on the responsible decision-makers. In Caroline Whitbeck’s Ethics in Engineering Practice and Research, published by Cambridge University Press in 1998, there is a mixed usage. The term ‘accident’ is freely interspersed with ‘disaster’, thus diluting the meaning of ‘disaster’. For example, the word ‘accident’ appears five times on p. 144 to three uses of ‘disaster’. On p. 145, ‘accident’ appears three more times. The very first page of Perrow’s new Afterword to his 1999 edition of Normal Accidents: Living with High-Risk Technologies features the word ‘accident’ no less than ten times, and the word occurs repeatedly throughout the Afterword.
Whatever arguments do appear conferring ethical responsibility on decision-makers, the incessant reference to the outcome as an ‘accident’ trivializes and neutralizes the ethical responsibility supposedly attributed to those decision-makers. The subliminal effect of this word usage in terms of influencing the possibility of future Challenger-like disasters cannot be ignored. At the same time it reveals metaphysical belief systems most likely shared by those who made the decision to launch: metaphysical beliefs that exempted them from responsibility in advance. It is important to take note of metaphysical belief systems that are ethically neutering and to address this issue.
The practise of employing the words ‘accident’ and ‘tragedy’ in literature on the Challenger and other corporate disasters is endemic. Diane Vaughan, whose later book on the Challenger was to attract much attention, made an early start as an arch user of the accident and tragedy language. On the first page of her article, ‘Autonomy, Interdependence, and Social Control: NASA and the Space Shuttle Challenger’, Administrative Science Quarterly, Vol. 35, No. 1, March 1990, the word ‘accident’ appears nine times, three times further buttressed by the modifier ‘technical system’ and once by the adjective ‘normal’. The word ‘accident’ is employed a total of forty-seven times in the course of her article. ‘Tragedy’ appears twice on page one and a total of nine times during the course of her article. In a later article, as mentioned above, she switches her preference to ‘tragedy’, as in the title, ‘The Trickle-Down Effect: Policy Decisions, Risky Work, the Challenger Tragedy’, California Management Review, Vol. 39, No. 2, Winter 1997. (Here the concept of “risky work” also makes an appearance – the real question is whose choice it is and whether it is necessary to work under risky conditions.) Such language implies a lack of responsibility, since accidents do happen and tragedies are of course due to Fate. Occasionally, there has been a change from the use of the word ‘accident’ to the word ‘incident’, as in the title of Joseph R. Herkert’s otherwise excellent article, ‘Management’s Hat Trick: Misuse of “Engineering Judgement” in the Challenger Incident’, in the Journal of Business Ethics, Vol. 10, No. 8, August 1991. Herkert’s usage in his article is evenly distributed between the two terms ‘accident’ and ‘incident’. In ‘Risk Reporting And The Management of Industrial Crises’, Dorothy Nelkin does raise the issue of the use of words and the importance of the proper word choice.
For example, she asks, ‘Was Three Mile Island an ‘accident’ or an ‘incident’?’ And she goes on to comment that, ‘Selective use of adjectives can trivialize an event or render it important ... define an issue as a problem or reduce it to a routine’. Cf. Journal of Management Studies, Vol. 25, No. 4, July 1988, p. 348. Nevertheless, the trend to use the accident and tragedy language carries on. An interesting new trend may be indicated in the Post-Modern mixture of the language of ‘disaster’, ‘accident’ and even ‘catastrophe’, ‘catastrophic accident’ and ‘calamitous accident’ at several junctures to describe the
Challenger disaster in Ann Larabee, Decade of Disaster, published by the University of Illinois Press in 2000. (While the present author originally thought that the ‘Post-Modern’ description was tongue in cheek, it was later discovered that Larabee’s work on the Challenger actually received the Postmodern Culture Journal’s Electronic text award.) The addition of the word ‘catastrophe’ to describe the Challenger disaster betokens the powerful hand of both Chance and Fate at once, since one tends to think of catastrophes, like a comet crashing into the earth, as something which is not due to man’s actions and at the same time is unpreventable. ‘Calamity’ is not much better. It emphasizes the woeful side of the matter and thus falls in more with the ‘tragedy’ language while still borrowing the association with Chance and Fate that is connoted by ‘catastrophe’. With such a variety of descriptions the Challenger disaster becomes more and more opaque in Larabee’s treatment, at one point even becoming a ‘necrodrama’, thus casting the Challenger disaster as an even more ghoulish spectacle than a ‘tragedy’. A ‘necrodrama’ presumably would be a kind of horror movie which would provide delectable appeal to those of certain theatrical and amoral (or immoral) palates. Perhaps the change from ‘tragedy’ to ‘necrodrama’ is a reflection of the changing sensibility of the times.
THE BELIEF IN MONOCAUSALITY

While it may be argued that no one takes the thesis of single causality seriously nowadays, there is a great deal of discussion of the concept of “human error” which seems to attempt to place blame for disasters on the shoulders of a particular individual or individuals. Such an approach, which tends to ignore systemic management factors, would seem to be subject to the belief that events take place in isolation from a network or system of interacting causes. Whenever there is a strong effort to fix blame on a certain individual or group, as is noticed in the case of the Herald of Free Enterprise disaster or the case of the disaster at Mt. Erebus, which was and still is argued by Air New Zealand to be a case of “pilot error”, there is frequently a behavior known as ‘scapegoating’. This
was especially true of the Presidential Commission’s findings in the case of the Three Mile Island disaster, where they argued that ‘the major cause of the accident was due to the inappropriate actions by those who were operating the plant’.8 Even in cases where multiple causes are considered, as in the Presidential Report on the Challenger disaster, the causes are separated out as if they operated in isolation from each other as single causes, divided into technical causes on the one hand and faulty management causes on the other. In this case, as will be seen below, it is not at all clear that the technical cause can properly speaking be considered a cause, or at least, not a primary cause as it is so designated in the Presidential Report. If the idea of causality is to be utilized, it must be borne in mind that no cause operates in splendid isolation from other causes. The belief in monocausality may lead the unwary thinker in the direction either of finding particular scapegoats on which to affix blame or of singling out technical factors on which to place major blame. A cause cannot operate singly: it always operates as an ingredient in a network of connections. In order to better appreciate the causes of a disaster, it is best to steer clear of the belief in single causes operating alone or in small clusters apart from some systemic background out of which they emerge. The reason why it is important to fully appreciate the systemic origin of any particular cause is that one way of initiating a change in how disasters may be approached is to look for the systemic causality in addition to the particular causation of any particular element in that system.
It is not that by paying attention to systemic factors one will ignore particular causes; rather, one will be in a better position to initiate a possible change in the causal pattern if the matter is approached both from the perspective of the individual causal agent and from the perspective of the systemic background out of which that causal agent arises. When the systemic background is fully considered, it should be discovered that no one cause operates or can operate in isolation from both a general system of causation and the cooperation of a series of co-contributing causes. Some so-called single causes, such as the technical defect in the O-ring design in the case of the Challenger disaster, are not, properly speaking, single causes at all, as by themselves they possess no capacity for action but are set
into motion by the direction for action given and the actions taken by top management. The series of co-operating causes exists by virtue of the fact that each individual particular cause contributes some element of support to that general direction.
MULTI-CAUSALITY AND MULTIPLE RESPONSIBILITY

In place of the concept of monocausality, one can think in terms of multi-causality, that is, that a number of causes can and in fact do contribute to the emergence of an event and that any one of these causes can effect the changing of this event. An illustration of the operation of multi-causality is the phenomenon of a mother awakening to the sound of her baby’s crying. The mother’s awakening is not an event that occurs in isolation, and it cannot occur in isolation. In order for the mother to awaken, air must exist to carry the sound waves; the mother’s hearing and anticipation must be in a state of receptivity and readiness to receive the sound even while asleep. One of the factors that predisposes the mother to be receptive is the fact that she is (and understands herself to be) responsible for the baby in virtue of the fact that she is the mother of the child. This is the systemic factor. Another ingredient in the emergence of the event is that there is air for the sound waves to travel through. If one transfers this example to an organizational context, the relationship of mother to child corresponds to an established hierarchy of authority, or a systemic basis for the causal action to occur; the existence of the air may be taken to represent the communication channels that must exist for the action to take place; and the mother’s awakening also requires a certain degree of assumption of responsibility on the part of the mother towards the child. Rather than there being no cause for the event, the example cited demonstrates that there are multiple causes for the event and that all causes must be co-present in order for the event to occur. The notion of multiple causality brings along with it the notion of multiple responsibility. Instead of asking, in this case, what is the cause of the mother’s awakening? (the question formulated in
this way assumes that there is a single cause for any event), one must learn to ask of each individual, what is her/his part in the emergence of this event? The notion of multiple causality does not reduce the responsibility of any one factor in the emergence of any event but extends the responsibility of all factors.
FAULT FINDING AND THE SCAPEGOAT

Finding who is at fault is to assume that there is one and only one individual in every situation who is responsible. This individual is the scapegoat: the one who takes the blame. The technique of looking for and finding someone on whom to blame a situation is called scapegoating. When scapegoating is successful, one may think that one has discovered the reason for the occurrence of a disaster. One may then think of appropriate punishment to dole out to the scapegoat. When this has been decided and the scapegoat (in some unusual incidents there may be several scapegoats, especially when diverse individuals can be seen or are seen to be acting as a group) has been punished, then the riddle of the occurrence of the disaster has been “solved”. That other disasters may arise is unfortunate and inevitable, as there will always be human error. However, one can always discover and root out the scapegoats, and the process goes on. The problem with the technique of scapegoating is not only that it may be unfair to the scapegoat but that, by thinking that one has discovered the cause of the disaster, one is relieved of the responsibility of searching further for the entire constellation of factors (which may or may not include the scapegoat). Whenever a scapegoat is searched for or cited as the cause of any catastrophe, one may be duly suspicious as to whether the true culprit or culprits have been found. Scapegoating is not a means of finding and assigning responsibility. It is a means of avoiding finding and assigning true responsibility. Wherever one finds the scapegoat mentality at work, responsibility has been abdicated, not shouldered. An analysis of causality leads one away from scapegoating and towards the discovery of the real
responsibility for any event, which is always, and in every case, multiple and never single. A good illustration of the awareness of multiple causality may be found in Churchill’s response to his horrified discovery that Singapore, rather than being impregnable, proved to be highly vulnerable to a Japanese land invasion. In his history of World War Two, Churchill comments: ‘I ought to have known. My advisers ought to have known and I ought to have been told, and I ought to have asked.’9 This multileveled response of Churchill’s beautifully reveals the multiple nature of causality. It is possible that the Japanese invasion of Singapore could have been prevented. This is not a mere matter of the 20-20 vision of hindsight. What is interesting, and well reflected in Churchill’s statement, is that it could have been prevented in a number of different ways. In other words, it could have been prevented by taking different actions or by different single causes. Churchill himself could have known. If he did not know, he could have asked those who did. There were others who should have known as well. Those others (his advisers) not only should have known: they should have informed him (without his asking for the information) if they had had the knowledge. If the others had not known, Churchill, by inquiring of them, may have stirred them to find out. If the others had known, Churchill, by inquiring of them, may have been able to obtain the information. However, it is equally true that the disaster might have been prevented by the expedient of Churchill questioning his advisers on this issue. On the other hand, it is also true that his advisers should have found this out by themselves. Having found out the information, they then could have informed Churchill on their own without waiting for him to inquire of them. Any one of these eventualities might have been enough to change the course of events.
The point being, there was no single cause for the success of the invasion, either. One could say that, as Commander in Chief, Churchill’s ignorance was the reason for the Japanese success. He did not take the steps which could have prevented the invasion. Or, one could say that his commanders in the field, in taking no action to inform him, were the reason for the Japanese success. This is all apart from the efficiency of the Japanese military operation, which obviously had something to do with the successful
outcome. Churchill’s statement reveals that the Japanese could possibly have been repelled in a number of different ways. Thus, there is no single cause for their success either. Their success was made possible by Churchill and his commanders having done nothing to defend Singapore from a land invasion, in addition to the Japanese’s own well-taken action. The final outcome of the taking of Singapore is a case in point of multiple causation.
WARNINGS AND ETHICS

It is amazing that in nearly every case study of disaster that follows, warnings existed that could have prevented the disasters. Only in one case, that of the disaster on Mt. Erebus, was there no warning. But in that case it was the actual lack of a warning, one that should have been present, that precipitated the disaster. In no case was there an example of a single warning. Warnings were multiple, and in many cases were marked by a sense of urgency, and in some cases were continuous over a long period of time. The question is, why were the warnings not heeded, or in the one case, not provided? It is the thesis of this book that the lack of attention to warnings is a most serious symptom of both a lack of the will to communication, which will be discussed in Chapter Three below, and a lack of ethical consciousness. That it is a lack of a will to communicate is self-evident. If one either does not provide warnings when they are crucially necessary or does not pay attention to warnings when they are of paramount importance, this is evidence of a lack of free-flowing communication. But what lies underneath the lack of communication? Of course, on one level a failure to provide communication results from a fear that the communication will not be listened to and, even worse, will single one out as a troublemaker. A failure to listen to communication, to take it seriously, is sometimes explained in terms of there always being warnings, most of which turn out to be false alarms. If one listened to every warning that was issued, one would never get anything done. While superficially such an explanation seems plausible, when one considers that the warnings that are discussed in this present volume
are warnings of life and death, a lack of attention to such warnings takes on a different significance. It appears that what lies behind a lack of attention to warnings is a poverty of moral imagination: an incapacity to draw out the consequences of not heeding warnings in terms of the loss of life that not heeding them will bring about. Such a poverty of moral imagination is in effect moral blindness. The question naturally arises, what is the reason for the existence of moral blindness? While multiple answers can be given to account for the existence of moral blindness, it is apparent that an ethical consciousness is not considered a significant perspective. One attempt at removing moral blindness is to attempt to restore the importance of an ethical consciousness. In order to do so, a rule of thumb is to consider whether providing or heeding a warning will save human lives. If providing or heeding a warning will have the consequence of saving lives, the injunction to provide and heed warnings will take on the significance of an ethical imperative. Making the providing and the heeding of warnings into an ethical imperative may have an effect in curing moral blindness.
FREEDOM AND ETHICS

The first step in preventing disasters is the recognition that disasters are, for the most part, preventable. This may sound like a very obvious step to take, but the omission of this step is one of the major obstacles that hinder successful disaster prevention.10 This chapter in particular has been devoted to laying the groundwork for the thesis that disasters are preventable in principle. What has been stressed in this opening chapter is that in a universe in which everything is determined by forces outside of human control, whether it be the Fates or the complexities of high-risk technology, ethical concern will have no place. If it is granted that one possesses a certain measure of freedom to influence the course of events, then having recourse to some ethical direction is of great assistance. Empty freedom without a sense of direction is blind; an ethical direction without the capacity to implement it meaningfully is lame.
So far this discussion has been devoted to the possibility of freedom. It is now time to turn to the possibility of ethics.
NOTES

1 Consider the use of the word ‘tragedy’ in the title of Corporate Tragedies: Product Tampering, Sabotage and Other Disasters (by I.I. Mitroff and R.H. Kilman), New York: Praeger Publishers, 1984. While the authors of the volume may not at all intend the reader to view the issues that they discuss as inevitable and beyond our control, the word ‘tragedy’ carries with it a power all of its own to influence readers in this direction. When it appears in a title, the word carries with it even greater influence. Shrivastava, for example, in the very first line of the preface to his very fine work, Bhopal, Anatomy Of A Crisis, Cambridge: Ballinger Publishing Co., 1987, refers to Bhopal as the Bhopal tragedy. Consider also the title of Romzek’s and Dubnick’s excellent article on the Challenger, ‘Accountability in the Public Sector: Lessons from the Challenger Tragedy’, Public Administration Review, May/June 1987, pp. 227-238. Consider Diane Vaughan’s use of the word tragedy in ‘The Trickle-Down Effect: Policy Decisions, Risky Work, the Challenger Tragedy’, California Management Review, Vol. 39, No. 2, Winter 1997, pp. 86-7. The notion of the Challenger as a tragedy ennobles the personalities of the crew and civilians, as if they were making a choice to fly in the face of great dangers and thus to become tragic heroes and heroines. In fact, they had no idea of the risk they were taking and were unfortunate victims, not tragic heroes or heroines.
2 Human error, or pilot error, has been a frequent explanation given by commercial airlines for commercial airline disasters and has had a strong effect on the popular mind. John Nance writes: ‘... throughout the history of commercial-airline accident investigation the pattern had been the same: The discovery that a human failure was the immediate cause of an accident almost always put an end to the investigation. Pilot error. Maintenance error. Procedural error. Controller error.
As if there were no need to look further - no need to see that such accidents are the effects of other causes, causes yet to be addressed’. Cf. John J. Nance, Blind Trust, William Morrow and Company, New York, 1986, p. 73.
3 Barry A. Turner and Nick F. Pidgeon, Man-Made Disasters, Second Edition, Oxford: Butterworth Heinemann, 1997, p. 182.
4 According to a private interview conducted by the author with a current astronaut at NASA. Cf. Robert E. Allinson, Review of The Challenger Launch Decision, Risky Technology, Culture and Deviance at NASA, Transactions/Society, November-December, 1997, pp. 98-102.
5 Cf. Howard S. Schwartz, Narcissistic Process and Corporate Decay, The Theory of the Organization Ideal, New York and London: New York University Press, 1990, p. 103.
6 Webster’s New International Dictionary, Latest Unabridged, 2nd Edition.
7 Charles Perrow, Normal Accidents: Living with High-Risk Technologies, New York: Basic Books, 1999, pp. 130-132.
8 As quoted in Charles Perrow, Normal Accidents: Living with High-Risk Technologies, New York: Basic Books, 1999, p. 26.
9 Winston Churchill, The Second World War, Vol. IV, Chapter III, London: Cassell, 1951, p. 43. The source of this quotation was A.L. Minkes, The Entrepreneurial Manager, Harmondsworth: Penguin Business Books, 1987, p. 25. The author is indebted to Professor Minkes for private discussions of Churchill’s view of multiple responsibility.
10 The thesis that disasters must first be conceived of as preventable plays a decisive role in the conceptual framework of Fennell in his Investigation into the King’s Cross Underground Fire, Her Majesty’s Stationery Office, 1988. Cf. Chapter Ten, below. That the concept that corporate disasters (in his vernacular, ‘corporate tragedies’) are not preventable is a necessary first block to be removed in preventing them is a claim that is also put forth by Mark Pastin in his book, The Hard Problems of Management, Gaining the Ethics Edge, San Francisco: Jossey-Bass Publishers, 1986. When Pastin asked a group of executives why they did not anticipate tragedies, ‘they responded that tragedies are not predictable’ (p. 130). Pastin goes on to state that, ‘Almost every corporate tragedy from thalidomide to Bhopal was forewarned. But in many cases, the warnings were so obvious and/or uncannily accurate that we can only conclude that managers refused to consider them’ (p. 175, emphasis his).
CHAPTER 2

THE INTERNAL RELATIONSHIP BETWEEN ETHICS AND BUSINESS

The orientation of this book is that business ethics is not a separate subject from business administration. This is not to say that business cannot be conducted with a very low priority given to ethics. It is only to say that there is no inherent conflict between the nature of business, properly conceived, and ethical behavior. Business can be conducted with a very high priority given to ethics. This does not mean that every business that gives a very high priority to ethics will be successful. But neither is it the case that a business that gives a high priority to ethics will necessarily be unsuccessful. There will always be situations in which choices must be made between profit and ethics. But this only means that ethics does possess a place in business, properly conceived. The danger of separating business from ethics is that one can easily justify unethical business practices by saying to oneself, or to others, ‘this is business’, as if business were a separate domain which wore an ethics-proof vest. Just as in the Mafia movies, when, before a contract is carried out, the contract killer says, ‘nothing personal’, so the businessman could always say, ‘after all, this is business’, or, ‘business is business’, which means that ethics is not considered. This attitude can be seen in such statements as, ‘when I go home to my family, I will put on my ethical hat. Now, I am wearing my business hat’. If one were to attempt to start up a business and only later were to consider whether to apply ethics to modify the results of that discipline, it would be entirely possible that conflicts would result between what were perceived as the proper pursuit of that discipline and ethical considerations. The argument presented here is that it is more efficient (in addition to being more true) to take ethical considerations into account in the construction of the definition of the discipline.
There are three fundamental ways in which ethics is involved in the definition of business enterprise. Firstly, ethics is
involved in terms of the setting of the goal or goals of the business enterprise. Secondly, ethics is involved in terms of the establishment and the maintenance of the inward ties and the expressive communicative infrastructure of the business organization. Thirdly, ethics is involved in the very conduct of business in terms of a trust between employers and employees, between trading partners and between businesses and customers. This last feature of ethics has received treatment by others and thus will be mentioned only in order to make the entire treatment of ethics more complete.
ETHICS AS INVOLVED IN THE GOALS OF AN ORGANIZATION

No organization, even if it were interested in profitability as a prime goal, could avoid producing some kind of social benefit or avoid intending, at least in some part, to achieve some kind of goal which is other than purely making money. Even a mint that mints coins or a printer that prints paper notes to be used as currency is literally making money, but both are also providing a service, that is, creating the means for everyone to have a medium of exchange, which is a utilitarian benefit. Every business enterprise that involves providing a service or making a product must provide a service or make a product which is something other than the profit that is to be generated from the service or the product. Even the businessperson who sets out simply to make a profit must have some idea as to how to go about doing this. ‘Making a profit’ does not give the businessperson starting up any idea as to what to do or how to do it. The profit motive is a non-user-friendly Pole-Star. The goal of simply ‘making money’ does not give one any direction as to how to start up or continue a business enterprise, or offer any suggestion as to what kind of business enterprise to start up in the first place. If something else were to be required in order to start up a business enterprise in addition to defining the essence of the goal of business as being ‘the business of business is to make money’, or ‘business is buying and selling’ (in Chinese the word which is translated as ‘businessperson’ is ‘buying and selling person’), then there must be
some other element which is the defining feature of business enterprise in addition to the profit margin. The directive to make profits is too empty: it does not define business. One must add in something else, such as to produce a product that is needed, or to provide a service that is needed. The notion of filling some kind of social need must be taken into account when one is starting up a business enterprise. What is really lacking here is a fuller definition of business enterprise than the technically correct definition of the ownership or use of capital investment, land or labour in order to generate revenue for the owners. A definition of business enterprise that includes the creation of social value or the fulfillment of social need is needed before one can really understand what is meant by the word ‘business’ in the first place. It is understood that the desire for the acquisition of personal or family wealth is a need that arises from and is fostered by the fact that one is born into a system in which one must gain wealth in order to survive. One is not born into a system in which one is entitled to a comfortable, guaranteed income unless one happens to be born into royalty. But such a need is common to all who are born into a system in which some become engineers, some become physicians and some lawyers. What then should the motivation be for entering into the world of business? It cannot simply be that one considers that one can make more money than one can make by being an engineer, because the desire to make ‘more money’ is a product of a social and an economic system; it is not inherent to the nature of trading or business. The question really is, what lies at the basis of business activities? What is being suggested here is that at the bottom of the motivation of the businessperson there is some desire to contribute to the overall social benefit.
Regardless of what kind of social and economic system one is born into, one is born into a greater society. In some form or another, one’s deeper desires are to be affiliated with, recognized by and contributory to that society. In its most primitive form, the desire to be an accepted member of society is actualized by the desire to obtain personal wealth. This is corroborated by the fact that the drive to obtain personal wealth cannot be actualized without taking social needs into account. While it could be argued that the fulfillment of social needs is not a self-conscious ethical act, it certainly can be self-conscious, as it was in the case of Henry Ford, to be cited below, and in
the case of many Japanese companies today. If it is not a self-conscious ethical act, it can become a self-conscious utilitarian choice. If a deontologist were to consider the satisfaction of social needs to be the right course of action, then engaging in business could be construed as an ethical activity from a deontological viewpoint as well. Whether or not one also derived a profit from one’s actions would be a separate question from whether or not one’s actions produced social value. If one’s actions produced social value, such actions can be construed as ethical whether or not they are also profitable. It is not only with respect to defining business enterprise that one must take into account the issue of social values. If one consults the typical kinds of definitions of economics one finds in an economics textbook, one can find implicit references to the creation of social value or the fulfillment of social needs. In order to show that ultimately human beings are producers, and producers of social value, one may examine some standard definitions of economics that appear in arguably the most famous and influential textbook of economics in the United States. For the purposes of this examination, one may make reference to the work of Paul A. Samuelson, the economist who revolutionized economics at MIT and, together with Robert Solow, turned MIT into the institution that best embodied mainstream economic thought for some three decades. The gifted Samuelson, though firmly in the Keynesian camp, was able to unite a century’s worth of economic insights into a single, coherent theory - the neo-classical synthesis - that dominated economic discourse from the 1950s through the 1980s. His textbook provided the basic education in economics of the present author, among many others.
In the Twelfth Edition of Samuelson’s classic textbook on economics published in 1980, one finds a list of definitions of economics that provide an opportunity to illustrate how definitions of economics cannot be complete without a reference to social values and social needs even though in the following definitions the references are not always fully obvious. The first definition that is given is: ‘Economics is the study of those activities that involve production and exchange among people.’ 1 The above definition possesses the advantage of referring to people as the obvious agents of economic transactions without whom
economic transactions would make no sense, as there would be no one to produce goods or services, no one to exchange them and no one to receive them. It all but comes out and states that economics cannot be defined without a reference to social values or needs. However, without an explicit reference to the fulfillment of social needs or the creation of social value, it is incomplete as a definition of economics since it does not refer to the motivation or end-purpose of economic endeavors. By referring to production and exchange, a glimmer of the mechanisms of economic activity is offered. The second definition offered is: ‘Economics analyzes movements in the overall economy - trends in prices, output, and unemployment. Once such phenomena are understood, economics helps develop the policies by which governments can affect the overall economy.’ 2 This definition loses some of the advantages of the first definition by making the reference to the ultimate providers and recipients of economic transactions nearly non-existent. However, its reference to unemployment is unwittingly people-oriented. Everyone psychologically fears unemployment and the use of this word in a sentence makes it crashingly clear that it is people who would be unemployed. It also possesses the unique advantage of referring to the behavior of governments since government monetary policy, for example, obviously affects human economic behavior. However, how and why movements in the economy take place remains shrouded in mystery, and hence this definition is perhaps even less adequate than the first definition, which at least makes reference to production and exchange. The third definition offered is: ‘Economics is the science of choice.
It studies how people choose to use scarce or limited productive resources (land, labor, equipment, technical knowledge) to produce various commodities (such as wheat, beef, overcoats, concerts, roads, missiles) and distribute these goods to various members of society for their consumption.’ 3 This definition is without doubt the best so far. Like the first definition, an explicit reference is made to people as both the agents and the ultimate recipients of economic actions. The mentioning of specific commodities such as overcoats makes it very obvious that economics is involved in providing value for people and by
extension possesses social value and for the same reason is involved in fulfilling social needs without which it would serve no purpose whatsoever and in fact would not possess any reason for being in the first place. By making explicit reference both to the examples of concerts and missiles this definition of economics makes it very clear that economics cannot be defined without reference to value and/or disvalue. It only lacks an explicit reference to the function of economics as a provider of social needs and a creator of social value to be more complete on the ethical side of economic activity and an explicit reference to the basic forms of economic activity such as capital investment, labour, rent and trade to be more complete on the business side of economic activity. The fourth definition of economics that is given is the following: ‘Economics is the study of how human beings go about the business of organizing consumption and production activities.’4 This definition possesses the advantages of the first and the third definitions of explicitly referring to people or human beings. It also possesses the distinct advantage of referring to business thus displaying the important feature of economics that economics cannot be defined without reference to buying and selling, renting or trading. The mention of consumption and production is advantageous because it only requires a moment’s thought to realize that it is human beings who need to consume and benefit from consumption and that consumption cannot take place without production (of either goods, services or labour). It is hardly complete since the mere mention of production does not offer a hint as to the mechanisms of production. 
The fifth definition that is given is the following: ‘Economics is the study of money, interest rates, capital, and wealth.’5 While this would seem to be the most technically correct definition so far and resembles in this way the technical though limited correctness of the fourth definition while possessing the obvious advantage of an economy of expression, it possesses the disadvantage of concealing that it is people who set pay scales or interest rates and people who risk or lose capital and people who accrue wealth. The reference to wealth, however, does possess the advantage of making it very obvious that economics is concerned with the creation of value. In this case it is monetary value which is indicated. Again, some of the
main instruments of the production of revenue, such as manufacturing, trade, labour, rent, and sales, are omitted from the definition. The sixth and last definition offered appears to be an effort to summarize the variations of definitions that can be offered: ‘Economics is the study of how people and society choose to employ scarce resources that could have alternative uses in order to produce various commodities and distribute them for consumption, now or in the future, among various persons and groups in society.’ 6 This summary definition possesses the advantages of comparative comprehensiveness and economy while making explicit reference to people and society, thus displaying that it is people and society who are the agents and the ultimate recipients of economic activities. It also possesses the advantage of explicitly stating that the purpose of economic activity is the production of commodities for people, thus making it abundantly evident that people are the end users of economic activities and that it is thus social needs and social values that are being filled. While none of the above definitions explicitly refer to social needs or social values, social needs and social values are implied by all the definitions of economics that are offered, thus suggesting that it is impossible in principle to define economics without taking into account social needs or social values in the first place. The means of production are not referred to in this definition, and thus how production and distribution take place, and why they take place, is not indicated. It is interesting to note that in the third and in the sixth definition offered, a reference to scarce or limited resources is indicated. This is most likely the influence of the tradition of economic thought which can be traced back to such figures as Malthus. Theoretically, one could approach economics as the study of abundant or over-abundant resources.
This would appear to be a question of circumstance, not principle. However, the reference to scarcity does seem to presuppose an ethical value, which is either thrift or the value of distributive justice. The question which is left unstated is: how does one manage production and distribution when the resources are limited? The question seems to imply that some attention in economics must be paid to making sure that resources either do not completely run out or that they are equitably distributed. Some concern is being shown for either the future of economic pursuit or
the equity of its distribution or both. In either case, the reference to scarcity seems to suggest that some ethical value is at stake, even if it is only meant that the ethical value is one’s egoistic survival. It seems to follow from this lengthy analysis that it would make sense to include the idea of social value and/or social need in the very definition of business or business enterprise, just as it would make a definition of economics more intelligible if the notions of the fulfillment of social needs or the providing of social values were included as part of its definition. It would also seem that any useful definition of business enterprise should include the major forms of the production, transmission and distribution of wealth, just as any complete definition of economics should make reference to the major mechanisms of wealth production. One might venture a definition of business which includes a direct reference to the how of production and consumption (thus satisfying the technical needs of the definition) and which at the same time refers directly to the motivation or the end-purpose of business activity in the first place. While it is certain that the following definition is by no means problem-free, it possesses the advantage of making the concept of social value and the means of creating that social value explicit and thus presents itself as a user-friendly definition for the business person. Business enterprise may be defined as the ownership or use of capital investment, labour or property to produce a product, or to provide a service, that fills some existent social need, or creates a new need to be filled, or creates some social value, and by so doing, it is usually hoped that an economic benefit or reward is generated for the owner or owners.
This definition of business is more user-friendly than the definitions of economics that appear above because it both explains how products or services are produced and states that such production fills social needs (whether pre-existent or created) and fills these social needs by creating social value. It also possesses the advantage of making a direct reference to the notion of an economic benefit or reward instead of profit, because such a notion carries the advantage of including economic behavior that was not aimed at profit as a net result, such as barter economics, non-profit business ventures and other trading relations. The main point which the above list of examples of definitions and the analysis of such definitions is designed to illustrate is that the very definition of business requires
input from ethics in order to make sense of what it means to start or operate a business in the first place. Whether one’s real desire were to fill some social need, or whether the social need were one which should be filled, is another story. The point is, one cannot have a business enterprise in the first place unless one takes social needs into account. While this seems to be implied or stated in definitions of economics, it does not always appear to be stated or implied in such definitions of business as one may find. Business has no definition if social needs and social values are excluded when one attempts to define business. Thus, it is more honest and more descriptive of the essence of business to take ethical values (e.g. producing goods or services for others) into account when attempting to define the essential subject matter of business itself. Even the choice to be a business person is a choice which cannot but depend upon or refer to human value. The making of money is not an end in itself. Money serves the purpose of storing up security against the future, thus implying that one must value the future in order to justify the saving of money. What sense can be made out of a life devoted to making money without a wanted outcome and a method for bringing the outcome into being? Even the stockbroker who were to claim to be a stockbroker simply to make money could not be a stockbroker without serving some social need, which in this case might be to assist other people in making money from the money that they have already made, but which was only earning bank interest. He or she need not have this social need in mind as his or her main motivation, but he or she cannot help but serve this social need if he or she were to make his or her own profits.
If it were to sound circular that the social need that is served is, in this case, also making money, one must keep in mind that in this case part of the money is being made for others, and thus social value is created for others. Thus, the stockbroker, even if her or his prime motive were to make a large profit for herself or himself, cannot avoid making money for others. She or he may lose money for others too, but in theory, at least, she or he cannot make money without some others making money, for if all her or his transactions were to result in losses for her or his customers, she or he would find herself or himself without a clientele. The directive to ‘make money’ is not sufficient to enable the stockbroker to conduct her or his business.
The stockbroker must know what to do and how to do it. She or he must know how and when, in this case, to buy and sell stocks, and which stocks to buy and sell and which stocks to hold. Thus, her or his business enterprise cannot be simply defined as a means of making money. Whether or not the provider of the service, or the manufacturer of the product, has in mind any social value to be gained from that service or product, the service or product must provide some social value. It may be argued that some services (such as prostitution), or some products (such as weapons), provide disvalue rather than value, but it would be difficult to provide an example of a good produced, or a service provided, that did not provide some social value. There may be debate as to whether the net effect of the product or service is a value or a disvalue to society, but this does not affect the point that every good produced, or every service provided, does produce some social value, however minimal, or however counterbalanced it is by some social disvalue that is produced. What follows from this is that there is no such activity as a business venture that is totally divorced from producing some social value. Whether or not the social value is intended by the provider of the service or the manufacturer of the product does not affect the truth of this essential fact. If it were true that every business enterprise must produce some social value, then the consideration of value as part of what is produced may be entertained at the very beginnings of a business enterprise. If one were to consider that ethics were relevant to business, then one could even take ethical considerations into account in large measure when one were contemplating the roles of business organizations. In 1984, Roger Smith, the CEO of G.M.
reportedly stated that the responsibility of CEOs and Boards of Directors needed to be expanded beyond the traditional concept of being responsible only to the shareholders of a company. Responsibility should be widened, according to Smith, to include the natural environment, the economic health of the entire country [this could be expanded beyond national boundaries], and the welfare of future generations.7 The notion that business is not there only to make a profit is, of course, not new to Smith. The original Henry Ford, the inventor of the Model T, also held the view that
business has other concerns than the interests of its stockholders when he said: ‘For a long time people believed that the only purpose of industry is to make a profit. They were wrong. Its purpose is to serve the general welfare’.8 The stockbroker may concentrate on her or his own profits to a large degree and focus only as needs be on the profits of her or his clientele. But the fact that she or he cannot make money without making money for some others, and thus creating social value for some others, means that social value or ethics is not something which is inherently in conflict with her or his desire to make money. In fact, she or he cannot make money without making money for at least some others (even if in some cases these others are not her or his clients), so that she or he cannot help but produce social value, even if she or he were to have no interest in producing social value. One must bear in mind that for a business to embrace ethics as part of its definition, it does not mean that a business should give up the value of making profits. Values which are non-profit values can serve as a motivator at the same time that one is motivated by profit values. Ethics need not be seen as something antagonistic to the profit motive; it can exist alongside the profit motive and even contribute to the profit motive. The separation of business and ethics is a part of a Western culture that considers that what is ethically good must belong to a non-body realm and that what is profitable cannot be ethical. For the Western mind, the notion that ethical and materialistic motivations can exist alongside each other and even enhance each other is a difficult notion to thoroughly accept. What is being said is that the relation between business and ethics may be perceived as an internal relation, not as a relationship between two external realms.
In the Protestant Ethic, if one were to earn a great deal of money, that would be a sign that one might have been favoured by the Deity. But in this case ethics and business are still perceived as belonging to two separate realms. What is being maintained here is that ethics and business can co-exist in the same realm. The position developed here is different from what Adam Smith was saying in his early formulations of the theory of wealth. For Smith,
while one was ethically minded, one’s main economic intent was to consider one’s private interest, and the “invisible hand” would benefit society as a whole. While Smith certainly believed that the entrepreneur must be ethical in her or his business behavior, ethics was built into business enterprise as a frequent side-effect of business enterprise. It was not the motivating cause behind business activity, and the individual was perceived as seeking her or his own self-interest. To quote Smith’s famous passage: ‘... every individual ... intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention ... By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it.’9 While this can, of course, be disputed, it is interesting to note that for Smith there not only was no necessary conflict between the demands of business and the demands of morality; there was a probable correlation. If one were to pursue one’s self-interest, the interests of morality were likely to be served. Unlike Smith, one need not relegate socially produced ethical values to the status of a side-product which is created as an indirect effect of following one’s own self-interest. Social ethics need not be seen as something extrinsic to or antagonistic to the profit motive; it can exist alongside the profit motive and even contribute to the profit motive.
The modern day counterpart to Adam Smith is Milton Friedman who has vigorously championed the idea that, ‘there is one and only one social responsibility of business - to use its resources and engage in activities to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception and fraud.’10 Friedman actually separates ethics from business to a greater degree than does Smith since Smith sees an ethical outcome as a likely side effect of pure business activity, while Friedman considers that possible unethical outcomes from business activities need to be redressed by government action and will not be taken care of by business pursuit alone. The position developed in the present work differs from Friedman’s, in that the
present author argues that ethics can be an ally to business, whereas for Friedman it is an impediment and an antagonist. The position of the present author resembles Smith’s, in that business and ethics are not perceived as being in conflict. It differs from Smith’s, in that the present work argues that ethical concerns are necessary parts of business enterprise, whereas Smith sees them as probable but gratuitous side effects. The position of the present author, unlike both Smith’s and Friedman’s, takes the view that self-conscious ethics can be a part of the goals of the business organization.11
ETHICS AND THE CONDUCT OF BUSINESS ENTERPRISE

There is another way in which ethics is already a part of business, which is at the basis of both the goals of business discussed above and the communicative infrastructure to be discussed below, and that is in terms of the moral relationship that lies at the very basis of agreements and contracts. The very idea of trade requires a moral relationship between at least two persons. One cannot conduct business without the trust required to keep agreements. It is impossible to eliminate the waiting time between the delivery of a product or service and the payment. Whichever comes first, there is always a time of trust. The relationship between buyer and seller requires morality. The moral relationship always underlies the communicative infrastructure between business partners and employer and employee relationships. Without an underlying moral relationship, business agreements between partners and contracts between employer and employee would not be possible.
ETHICS AND THE INFRASTRUCTURE OF A BUSINESS ORGANIZATION

Up until now this inquiry has been largely limited to the examination of what a business enterprise is attempting to accomplish externally,
or the outside goals of an organization. The inside of a business organization or its management structure can also be examined for ethical content. What is being claimed is that there is a fundamental congruence between the principles of sound managerial organization and fundamental ethical principles. It is surprising to discover that ethical considerations are already inherent in the basic rules of sound organization. Whether such ethical considerations are explicit is another matter, but ethical principles can be discovered to lie at the heart of the basic rules of how to manage organizations effectively.
THE WILL TO COMMUNICATE: FORMAL AND INFORMAL REPORTING CHANNELS

From the standpoint of management, one of the most pervasive and important factors related to the functioning of a business enterprise is that of communication. The key concerns of management, such as leadership, motivation, organization, interpersonal interaction, problem solving, and decision making, all involve communication. Unless one were to consider a one-person firm, the quality of communication is a very important factor in an internally well-managed organization. The quality of communication can in large measure be determined by what can be termed ‘the desire or the will to communicate’. Firstly, one can examine the will to communicate from an ethical perspective in general; secondly, one can examine the will to communicate from the perspective of informal channels of communication; thirdly, one can examine the will to communicate from the perspective of formal reporting channels.
THE INTERNAL RELATIONSHIP BETWEEN ETHICS AND BUSINESS

THE UNDERLYING PRINCIPLE OF ETHICAL COMMUNICATION: RESPECT FOR PERSONS
One perspective which cuts across most, if not all, ethical and religious perspectives can be defined as respect for persons. As one's most fundamental value, there would be few instances, if any, in which one would be willing to violate one's respect for persons. If the concept of 'respect for persons' is analyzed, it is observable that one's attitude is the same whether the source of the attitude of respect for persons is the value that this attitude is a reflection of what is right, regardless of consequences (deontological ethics), or the value that this attitude will conduce towards the greatest happiness for the greatest number of persons (utilitarian ethics), or the value that this attitude will conduce towards the greatest possibility of self-realization on the part of everyone (individual or general eudaemonism). The selection of this principle of ethics thus cuts across the false dichotomies between deontology, utilitarianism and eudaemonism, and allows one to be ethical without first having to decide whether deontology, utilitarianism or eudaemonism is correct. In addition, it is of interest that this principle appears to be both a cultural and a religious universal. One can trace this notion back to the Golden Rule of both the Judaeo-Christian and the Confucian heritage, which in its negative formulation in Confucius reads, 'Never do unto others what you would not like them to do unto you'.12 One can also find this most basic principle in early Buddhism.13 The classic philosophical source for the notion of treating persons with respect is Kant's Categorical Imperative, which in the most user-friendly of its three versions reads, 'Act in such a way that you treat humanity, both in your person and in the person of all others, never merely as a means, but also always as an end'.14 This concept of respect can be expanded to include the natural environment and future generations if one views 'respect for persons' as too anthropocentric or present-bound.
For simplicity's sake the expression 'respect for persons' is employed, since despoiling the environment and upsetting the natural balance inevitably implies a disrespect for living persons and future generations. It is also to be understood that the concept of 'respect for persons' does not imply a disrespect for animals or other 'non-persons' such as extra-terrestrial life that may be discovered in the future. The concept of a person may need to be expanded in the course of time. In a day-to-day business context, when one is treating another as a person, one relates to that other not merely as an employee, but as a full human being. While this might translate into many different fundamental modes of relating to the other, such as with politeness and recognition of the other's human rights and human needs, the ultimate translation of according respect to another as a person is to communicate with that person, whether directly or indirectly, as an equal. It is in this sense that the Founding Fathers of the United States considered that 'all men are created equal'. This did not mean that they were created equal in endowment, but it did mean that they were created equal in moral worth. In the Japanese Management System, this is shown in non-verbal forms of communication, such as the practice (honored in former times more than at present) of giving lifetime employment to employees, which makes the employee feel as valued and safe as the employer, and following the ringi system of decision making, which makes the employee feel that her or his opinion matters just as much as the opinion of her or his employer. This does not mean, of course, that there are no hierarchies of authority in a business context. But given those hierarchies, communication must be respectful. Respectful communication means both that one listens to the other, and that one talks to the other with an attitude that the other deserves to be both talked to and listened to. This might entail going out of one's way both to talk to the other, and to encourage the other to talk to one. Such an attitude springs from the fundamental ethical tenet that one's primary attitude towards others is one of respect.
ETHICS AND INFORMAL CHANNELS OF COMMUNICATION
The establishing of viable informal channels of communication is one extremely important method of ensuring that the will to communicate has some way of manifesting itself. It is not enough to possess the intention of respecting personhood; it is necessary to the concept of respecting persons to ensure that that respect will be actualized. Otherwise, respect for persons will remain a hollow attitude which possesses no outlet, and hence will not really constitute a genuine respectful relation to persons.
The setting up of informal channels such as common rooms, tea rooms and coffee shops, which are genuinely frequented by all members of an organization, thus giving access to all members by all members, is a non-verbal communication that one really does possess an attitude of respect for persons. The legendary stories of such figures as Tom Watson, Sr., Konosuke Matsushita and Soichiro Honda walking the factory floors, and Akio Morita eating in the cafeteria with his blue collar workers, are other examples of demonstrating that a genuine respect for persons does exist. IBM’s Open Door policy is another example of an informal channel, which comes close to being a formal reporting channel, and thus cuts across the strict line of formality and informality. While all of these can be seen as integral aspects of good business policy, they can also be seen as emanating from a basic ethical attitude. Once again, there is no fundamental conflict between good business policy and ethical values. In fact, it may be argued that the providing of these communication channels in the right spirit actually provides the conditions that make ethical relations possible. By the same token it may be argued that it is an ethical spirit that generates the concept of these channels in the first place. The division between sound management policy and the attitude of respect is a pseudo-distinction.
ETHICS AND FORMAL REPORTING CHANNELS
Under the category of formal reporting channels, one can include the explicit responsibility for communicating clearly those sets of tasks which each level within the hierarchy is responsible for carrying out. Without the notion of a clear-cut set of responsibilities belonging to each member in a vertical chain, it is difficult to understand on what basis each member of a chain reports to another, except in terms of pure authority. Thus, every level must carry with it a clear-cut set of responsibilities, so that it is clear what must be reported upon, in addition to whom one is responsible for reporting to. If what an individual is responsible for, and to whom an individual is responsible, is not communicated to the individual, then that individual has not been treated respectfully, for their role in the organization has not been defined. In addition to the possible neglect and/or conflict of responsibilities that might ensue, it is also not clear, from the reporting individual's standpoint, what matters that individual should report upon. It is also unclear to the individual who should receive the report what matters she or he should expect to receive reports upon, and from which individuals. The absence of clear-cut and clearly communicated sets of responsibilities to act upon, to report upon, and to receive reports about undermines the process of communication from the management standpoint, and also undermines the importance of the communication process from an ethical standpoint. If there were clear-cut formal reporting lines of communication without the accompanying designated content which is to be communicated, it would be difficult to understand on what basis each member of a chain would report to another, except in terms of pure authority. Now, it can be said that the notions of clear-cut divisions of responsibility and reporting channels are simply elements of sound management policy. If one did not know what one was responsible for, the notion of reporting channels leading to and from one's level in the hierarchy would be without efficacy. Likewise, if there were a clear-cut notion of what one is responsible for without the presence of reporting channels, one would function as an isolate within an organization, and this can lead to management mistakes of great magnitude. What is of interest to note here is that the presence of clearly defined roles of responsibility and formal reporting channels is not only the basis of organization in the first place: it is also the consequence of a fundamental ethical orientation. The organization which functions without a clear-cut set of reporting channels functions disrespectfully, since it is not according individual persons any means for significant communication.
An organization which does not set out both unique responsibilities for each level of power and a clear line of reporting channels treats all individuals within that organization with disrespect. Of course, the presence of these mechanisms would be but empty forms if they were not informed by a genuine spirit of the will to communicate. But the absence of such mechanisms is at the same time the trademark of an unethical organization, because no genuine means for communication are available, and therefore there is no way in which respect for persons can be made manifest. While it is certainly not a new idea that clear-cut and clearly assigned roles of responsibility and reporting channels are the very basis of sound management policy, it is of interest to note that these very same mechanisms are a key ingredient in the formation of an ethical organization. In the chapters to follow, the consequences of not possessing clear-cut formal reporting channels will be illustrated by the examples that are studied. It will be especially obvious in the cases of the Challenger disaster, the King's Cross fire and the Mt. Erebus disaster how the absence of clear-cut reporting channels was a major contributory cause of the ensuing disasters. In conclusion, the examination of sound mechanisms for communication and appropriate management structures and procedures shows that these mechanisms are, at the same time, forms of ethical communication. It seems to be the case not only that ethics is required for the formulation of the goals and thus the matter (goods or services) of business organizations, but that business organizations which are organized on the basis of sound management principles provide the forms without which ethical communication and ethical action within the firm cannot be actualized. Ethics is required for business; and in the sense that proper business organization is required for ethical business relations inside the firm, business is required for ethics.
THE ARGUMENT FOR THE EQUIVALENCE IN EFFECT OF COMPETING NORMATIVE ETHICAL JUSTIFICATIONS
In the end, the school of ethics to which one belongs is not ultimately significant in determining ethical outcomes. In other words, the same actions may be generated whether one is following deontological or utilitarian principles. Of course, this is not to say that there are no conflicts between outcomes generated from different ethical starting points, but such conflicts may be resolved by adopting a multi-positional ethical framework. Before examining a multi-positional ethical framework, it is interesting to become aware that the same behavior may be generated by opposing justification systems. To illustrate the harmony that exists between competing ethical frameworks, it is useful to briefly examine the fundamentals of the leading ethical schools. Deontological ethics may be defined as that version of ethics in which one embraces values from the standpoint of what one holds to be absolutely right or wrong regardless of consequences. While the word originates from the Greek for duty, its use has since come to refer more to the absoluteness of the choice. Sometimes, deontological ethics is also called formalist ethics, to emphasize that one's values are held regardless of the context. Utilitarian ethics may be defined as that version of ethics in which one embraces or disengages from values from the standpoint of what social benefits or harms may be produced. Nowadays, consequentialism is a term used interchangeably with utilitarianism. Eudaemonistic ethics, no longer so common, may be defined as that version of ethics in which one embraces values from the standpoint of how an individual achieves his or her greatest self-actualization. Egoistic ethics (such as Ayn Rand's ethics) is basically the pursuit of self-interest, and thus would not distinguish ethics from the typical picture of business as the pursuit of self-interest. As a result, such a definition of ethics adds nothing at all to the current conception of the non-ethical pursuit of business, and therefore offers no contrast for a fruitful discussion. Ayn Rand is like Adam Smith without the proviso that the greatest good for society will frequently come about through the device of the invisible hand, or like Friedman without the proviso that the greatest good for society will come about through the survival of the fittest. In other words, Rand seems to endorse self-interest without any regard for the public welfare.
Hedonism is the unbridled pursuit of pleasure, perhaps even at the expense of egoistic self-interest. Hedonism, like egoism, adds nothing substantial to the prevailing concept of the non-ethical pursuit of business and hence provides no contrast for a fruitful discussion. Religious ethics can by and large be perceived as a species of deontology, though the source of the deontology in this case may not be a matter of personal choice but may come about through religious upbringing. For practical purposes, religious ethics may still be regarded as a variation of deontology. Of course this is arguable, since it may be contended that one embraces one's religion out of egoistic motives, such as the motive to secure one's place in Heaven. Freud argued that religion was adopted to satisfy psychological needs, and hence could be said, in a way, to have maintained that religion serves a utilitarian purpose. For the sake of this discussion, however, it is the absolute rightness or wrongness of certain acts according to one's religious beliefs that is relevant, and hence for present purposes religion may be considered a branch of deontological ethics. Some writers prefer to designate religious ethics as command ethics. In this sense, one is commanded to perform certain acts or refrain from others regardless of whether one considers these acts to be right or wrong. From this perspective, religious ethics is not deontological. However, since one's consideration of what is absolutely right or wrong is identical with one's religious values, in another sense religious ethics is still deontological, although there is a specific source of what is considered right or wrong, such as written scriptures. Virtue ethics, which is growing in popularity, extols the merits of character rather than specific acts, but is still either deontologically or eudaemonistically based. Eudaemonism, or self-realization ethics, adds the concept of the realization of one's higher, fuller or complete nature to the egoistic concept of self-interest, and hence transcends egoism proper. While not popular in the present age, it may possess value despite its rather archaic appellation. In some ways, the three main schools of ethics cannot be absolutely distinguished from each other. While utilitarianism might seem to be favored by businessmen, one's absolute choice to be a utilitarian must be either a deontological one or an arbitrary one.
Kant, who is perceived as the arch-deontologist, tests his main ethical principle, the Categorical Imperative, in some of its formulations, by appealing to a consequentialist criterion. Plato's philosophy was certainly a blend of deontology, eudaemonism and utilitarianism. The entire deontologist versus utilitarian debate can be seen as a pseudo-conflict. Plato argued that one should be just both because of the intrinsic goodness of justice (a form, it may be argued, of deontology) and because of the results that it brings (a form of utilitarianism).
Plato adds an additional component to the deontological-utilitarian or formalist-consequentialist mix by saying that when one values justice both for its own sake and for its results, one thereby derives one's truest happiness as well. In this respect Plato's ethical philosophy was a blend of all three ethical ingredients. In the Republic, Plato has Socrates say that justice must be placed in the highest class of things: '... as a thing which anyone who is to gain happiness must value both for itself and for its results'. (Republic, V, 358) Within the mixture, Plato takes the side that justice is to be valued more for its own sake than for its results, but this does not detract from his view that it is to be valued for both reasons, and that it can be valued for both is what places it among the highest class of things. (Republic, V, 366) The reliance upon the deontological-consequentialist-eudaemonistic mix, with a slight built-in bias towards deontology, offers a way out of dilemmas that confront the pure deontologist, the pure consequentialist and the pure eudaemonist. For example, the classic critique of utilitarianism's 'greatest good for the greatest number' is that an injustice for a particular class could result, such as the slavery of a minority. The built-in deontological portion rules out this version of consequentialism. On the other hand, if following the directive of producing the greatest good for the greatest number runs the risk of producing a mediocre society, then the deontological portion or the eudaemonistic portion can act as a corrective. This was Plato's approach. While it may appear odd to classify Plato as a hybrid of deontological, consequentialist and eudaemonistic elements, he does not think that one must make a choice between the Right and the Good. While it is traditional to describe Plato's ethic as a form of eudaemonism, such a label does not take sufficient account of his emphasis on the intrinsic value of justice.
Most properly understood, valuing justice both for itself, and for its results, also brings about the greatest human fulfillment, so that Plato’s ethics is actually a blend of all three types. While Plato’s writings are not all consistent with each other on this point, in his key works, he did not see a conflict between the demands of happiness and the demands of morality. Indeed, if one understood Plato’s separate works as intending to emphasize different aspects of life, then the different works need not be interpreted as inconsistent with each other. If one considers that Plato’s vision of the Republic is for him the best and the happiest
society, and in this work he is emphasizing the focus of society as a whole, then the demands of deontology and utilitarianism can be seen to merge. If profit making can be seen to represent a wider social interest, then it need not be intrinsically incompatible with an ethical orientation. That Plato saw it this way is clear from his division of society to include a capitalist class for whom making money is the prime value. The presence of this class is, for Plato, one of the ingredients in what he considers to be the best or most just society. It is an encouraging sign that some writers today are seeing that deontological or formalistic ethics and utilitarian or consequentialist ethics need not be seen as competing ethical viewpoints, but may be regarded as complementary perspectives. All that is lacking, to some extent, is the incorporation of the eudaemonistic perspective as well. As a result, one can be a deontologist in some respects, a eudaemonist in others and a utilitarian in yet other respects. One’s choice of ethics can be context driven, that is, relative to the type of choice that one is making, without one becoming a relativist proper, that is, one whose ethics is wholly determined by the context or culture in which one finds oneself. While Plato’s ethics was a blend of all three types, one would hardly class Plato as a relativist. Within the limits of this present work, ethics is viewed from a utilitarian standpoint when one refers to ethics as the social values which no business enterprise can escape from creating. This is still not an absolute utilitarianism, since it remains one’s deontological choice that it is good to create social values. Ethics is viewed from both a deontological standpoint and a utilitarian standpoint when ethics is examined as a defining feature of the communicative infrastructure of a business organization. 
Ethics is viewed from a eudaemonistic standpoint when one considers that it is through business enterprise, understood in its proper sense, that some people can realize their full natures. It may be said that when business activities and internal business communication are both carried out in an ethical fashion, one's virtues (and the virtues of others) are enhanced. In this sense, this constitutes a virtue ethics as well. As one acts nobly, one builds one's character. As one creates a proper business structure, one provides the environment for others to develop their character as well. Just as Aristotle argued that one became good by performing good acts, one cannot separate right actions from virtue ethics. Properly speaking, they require each other. It may be argued that, from the standpoint of utilitarianism, ethics may be a component of business, but that, from the standpoint of deontology, ethics can become the antithesis of business whenever business violates certain absolute ethical principles. For example, if one were a drug dealer, then, while one would produce some social good (jobs), one would produce far greater social evil (drug addiction). From the standpoint of deontology, this is ethically wrong even if some extremely minor social benefits were to be produced from the standpoint of the utilitarianism of the drug dealer and his employees. This particular business (not business in general) would also be considered ethically wrong from the standpoint of a wider utilitarianism which took not only the interests of this business into account, but the interests of the entire society. Thus, it would appear that from the standpoint of deontology and wider utilitarianism, ethics and certain forms of business, to use Plato's metaphor from the Philebus, will not mix. Another solution to the problem of the appearance of certain kinds of business operations as unethical is simply to understand that when certain boundaries are crossed, an innocent or good activity can lose its goodness. Eating is good, but gluttony may lead to illness. Or, one may simply adopt the position that the subject of one's treatment is limited to legitimate businesses, not illegal ones. This, however, to a certain extent, begs the question. Even illegal and unethical businesses have their own internal ethics, such as the code of the Mafia. But for the purposes of this book, it is important to remember that the ties between ethics and business are being stressed only in order to show the relationship between social values and business enterprises, and not thereby to justify all business enterprises.
While even the drug dealer produces some good, however minimal and outweighed by the harm she or he produces, her or his business activities are still value related. It would be marvelous if one could put a stop to the drug business by adopting a eudaemonistic ethic and attempting to persuade dealers to understand that they can make money and realize their happiness in another way, without producing such terrible social harm. But since drug dealers cannot normally be so convinced, legislation and a police force are required. Not every social problem can be corrected by an understanding of ethics, whether deontological, utilitarian or eudaemonistic. Underlying the seemingly utilitarian approach to ethics presented here are fundamental deontological values, such as not condoning murder, criminal businesses, mob violence, treachery, deceit, false representation and other dishonest business practices. (To be fair to utilitarianism, the same values can be generated from a utilitarian base by thinking in terms of consequences: if everyone were a murderer, then the greatest good for the greatest number would not be achieved, and so on. But this is a more roundabout method.) The main point is that the database from which one starts is the database of ethical business. For the purposes of this work, the database is limited to that of legitimate business. This is not to say that illicit business may not possess an ethical justification, but this is a separate question altogether. Within the scope of this present work, by limiting oneself to legitimate business as a database, one builds in a deontology before setting out to examine the inherent connection between ethics and business enterprise from a utilitarian standpoint. One could also have an underlying eudaemonistic base as well, since part of what is being said here is that businesses can become more meaningful for their owners and workers as they become more ethically aware. The ethical business organization becomes a business community in which all members feel a sense of belonging. While in the ensuing discussion it may appear as if it is the utilitarian perspective which is awarded the most attention, it would not be fair to say that this is an exclusive ethical justification. With respect to the setting up of the definition of business, the built-in dimension of ethics appears to be utilitarian, but this choice of definition is itself deontologically weighted.
With respect to the ethical basis for agreements and contracts, the built-in dimension of ethics appears to be deontological, utilitarian and even eudaemonistic, although primarily utilitarian. With respect to the choice of the ethical goals of a business, while it can be said that these goals have a utilitarian (or eudaemonistic) base, the choice of such a base remains deontological. With respect to the role that ethical values play in the communicative infrastructure, it is clear that ethical values are chosen because of deontological, utilitarian and eudaemonistic considerations.
NOTES
1. Paul A. Samuelson and William D. Nordhaus, Economics, Twelfth Edition, New York: McGraw Hill, 1980, p. 4.
2. Ibid.
3. Ibid.
4. Ibid.
5. Ibid.
6. Ibid.
7. Donald P. Robin and R. Eric Reidenbach, Business Ethics, Where Profits Meet Value Systems, Englewood Cliffs: Prentice-Hall, 1989, p. 7.
8. Ibid., p. 161.
9. Adam Smith, An Inquiry Into the Nature and Causes of the Wealth of Nations, Edwin Cannan (ed.), New York: Modern Library, 1937, p. 423. Patricia Werhane has argued in her Adam Smith and His Legacy for Modern Capitalism, New York: Oxford University Press, 1991, p. viii, that even this quotation of Adam Smith's was intended to be understood as applicable only in the context of a level playing field and a well-defined framework of justice. Cf. also p. 130, et passim. Thus, Smith cannot be construed to be the patron of robber barons. Smith, it must be recalled, was a Professor of Moral Philosophy. In his book, The Theory of Moral Sentiments, he states that regarding the rules of justice: '... the most sacred regard is due to them'. He also states that 'Without this sacred regard to general rules, there is no man whose conduct can be much depended upon. It is this which constitutes the most essential difference between a man of principle and honor and a worthless fellow'. Cf. Knud Haakonssen (ed.), Adam Smith, The Theory of Moral Sentiments, Cambridge: Cambridge University Press, 2002, pp. 204, 189. But it is also true that Smith thought that, given a just society, the pursuit of self-interest frequently led in the direction of justice for society as a whole. Smith thus conceived of business interests conducted inside the framework of justice as not merely linked with ethical interests but as frequent causes of ethical outcomes.
10. Milton Friedman, 'The Social Responsibility of Business is to Increase Its Profits', reprinted in Raziel Abelson and Marie-Louise Friquegnon, Ethics for Modern Life, New York: St. Martin's Press, 1987, p. 287. [Originally taken from Friedman's Capitalism and Freedom, Chicago: University of Chicago Press, 1962, p. 133.]
11. For a view which considers Smith to be closer to ethics than Friedman, but for different reasons than those of the present author, cf. Thomas Donaldson, Corporations and Morality, Englewood Cliffs: Prentice-Hall, 1982, pp. 62-69.
12. Analects, XV, 23; Chung Yung, ch. 13. For a fuller discussion of the Golden Rule, cf. Robert E. Allinson, 'The Confucian Golden Rule: A Negative Formulation', Journal of Chinese Philosophy, 12, 1985, pp. 305-315.
13. Hajime Nakamura, 'The Significance of "Harmony" in Buddhist Thought', in Robert E. Allinson and Shu-hsien Liu (eds), Harmony and Strife: Contemporary Perspectives, East and West, Hong Kong: The Chinese University Press, 1988, p. 102. It is interesting in this regard to consider Thomas Donaldson's and Thomas W. Dunfee's intriguing notion of Hypernorms, developed in some detail in their persuasive book, Ties That Bind: A Social Contracts Approach to Business Ethics, Boston: Harvard Business School Press, 1999.
14. Immanuel Kant, Groundwork of the Metaphysics of Morals, p. 429. Cf. Kant's Critique of Practical Reason and Other Works on the Theory of Ethics, T. K. Abbott (trans.), London: Longmans, 6th Edition, 1963, p. 47.
CHAPTER 3
THE BUCK STOPS HERE AND IT STOPS EVERYWHERE ELSE AS WELL
'The Buck Stops Here and It Stops Everywhere Else As Well' is a new phrase which can serve as the cornerstone for the new epistemological and ethical foundations that are to be laid. The phrase reveals that responsibility for acts and decisions lies with everyone, at all times, and is not restricted to any one person, place or committee.1 While the lion's share of responsibility may belong to Top Management, in that Top Management carries with it the responsibility to educate every member of a business community to take this phrase seriously, this phrase is thoroughly democratic and is not limited to Top Management.2 This phrase is the cornerstone of effective disaster prevention management, and it emphasizes that moral responsibility is truly everybody's business.3 Before discussing further how the incorporation of this phrase functions in sound and ethically based decision making in an organization, it is opportune to take time out to find out what is wrong with its predecessor, 'The Buck Stops Here'.
THE BUCK STOPS HERE ‘The Buck Stops Here’ seems a harmless enough phrase. Why should a phrase like this one cause any trouble? It is the contention of this book that precisely this phrase lies at the very heart of communication impediments that prevent the very free flow of information and consequent action that makes disaster prevention a reality. The origin of the phrase lies of course with Harry Truman who displayed this phrase on his desk with the meaning that if a problem reached the Oval Office, it would not be passed over to be resolved
by someone else. In other words, Truman was not going to abdicate responsibility. He would not pass the buck. This would seem to be an honorable enough origin for a phrase. What began as a good intention, however, has unfortunately been subject to considerable abuse. In an organization made up of many levels of authority, every other level of authority can then pass the buck of responsibility to the highest level. If the buck stops at the highest level, then it does not stop anyplace else. The attitude cultivated by such attribution of responsibility only to the very highest level is one of non-responsibility. Such an attitude is a recipe for disaster. Of course, if Truman’s acceptance of the buck stopping at his door were accepted by every top executive, the effects of responsibility lying only with the top executive would be greatly ameliorated, because at least one person would be taking responsibility, and if that person were the chief executive, the beneficial effects would be enormous. However, if the chief executive also interpreted this phrase to mean that responsibility lay elsewhere, then the effects would be disastrous. The new conceptual foundation advanced in this work has as one of its major components the revision of the phrase ‘The Buck Stops Here’ to ‘The Buck Stops Here and It Stops Everywhere Else as Well’. The revised phrase carries with it the implication that responsibility exists at every level of authority, no matter how picayune and no matter how elevated. What this means is that responsibility is shared. In ensuing chapters it will be demonstrated how this can prevent disaster situations from arising.
In nearly every disaster situation that is analyzed it will be discovered that at some level or another information was available which could have prevented the disaster from happening, but the information was either not shared with the parties who possessed the capability or authority to act upon it or, if it were, it was not acted upon. The information flow was impeded. Either those with the information did not pass it on or those who did receive the information did not act upon it. In the case of those with the capability and authority to act on information received, their not acting upon it was a manner of not receiving it. They received the information but discounted its importance. Hence, they did not really receive it in a true sense of the word ‘receive’. In either case, there
was a communication jam. Either the information was not passed on or it did not get through. What the rest of this chapter will argue is that the unwillingness to assume responsibility for action or the lack of free flowing information is the source of most if not all disasters. This chapter focuses on the power of the phrase, ‘The Buck Stops Here’ to stop the free flow of information.
THE WILL TO COMMUNICATE The topic at the heart of this discussion is the will to communicate. In order to have successful communication both the parties who send the information and the parties who receive the information must have the belief that communication is desired and that it will bring about a positive benefit. If the sending party is of the belief that her or his information will not have any effect or that it is not desired, then the information will not be passed on. If the receiving party is of the belief that she or he already knows all of the answers, then she or he will neither be receptive to any information received nor will she or he request information. If the sending party knows or surmises that her or his information will not receive a hearing, then her or his will to communicate is destroyed. If the receiving party is not really interested in hearing information or soliciting information, then her or his will to communicate is destroyed since proper communication requires a sender, a receiver and an inquirer. The concept of feedback which is an important facet of communication is in this model understood to be part of reception. A true reception of information includes an acknowledgement of information received and a response. Receiving information is not simply a passive act. It involves letting the party who has sent the information know that the information has been received and in what manner the information that has been received is being considered, e.g., it will be considered at some time in the future, it will receive immediate attention, and so on. The notion of acting upon is also understood to be part of receiving information as again the model of reception is not one of simply passively observing and taking no action. If, for example, the only person who is in a position to act upon certain information is
given that information and does not act upon that information (and such information should be acted upon), then that person has not received the information in the full sense of the word. If all levels of an organization really believe that ‘The Buck Stops Here’ means that all effective decision making and all power resides at the top, then there is no communication between the levels of the organization. Where the will to communicate has been destroyed, there is no communication of any importance. In this condition, the organization is a disaster waiting to happen. If, on the other hand, all levels of an organization truly believe that the powers that be strongly desire communication and that there is a reasonable prospect that information will be listened to and acted upon, then within that organization it may be said that there is a will to communicate. When it is said that all levels of an organization must believe that communication is desirable, what is meant is that the very top levels of the organization must also possess this belief. The free flow of information can only exist when there is no point at which the information flow is dammed up. If the belief structure of an entire organization is such that ‘The Buck Stops Here and It Stops Everyplace Else as Well’, then each and every member of an organization has the responsibility to solicit information, to pass information on and to act on such information when it is appropriate. It is not merely desirable to pass information on in the interests of an efficient organization: it is the responsibility of each member of an organization to make sure that the information is passed on. What this means is that it is the responsibility of each member of an organization to ensure that the information is not only passed on but that it is received. By the same token, it is the responsibility of those who receive the information to attend seriously to the information that is received.
In order for this process to be effective, communication cannot be impeded at any point. If there is a structure or structures within an organization that make the free passage of information difficult or impossible, then such structure or structures must be altered. But this is not the complete answer. The most important element in the process is the will to communicate. If that will is present, then the necessary structures
will be altered. If that will is not present, then all the changes in organizational structure in the world will not amount to anything.4 The will to communicate can be strengthened by adherence to the rephrasing of ‘The Buck Stops Here’ as ‘The Buck Stops Here and It Stops Everyplace Else as Well’. This phrase implies that each level of an organization possesses a unique and an indispensable importance. Any level of an organization possesses the power to block the communication process. By the very same token, every level of an organization possesses the power to enhance the communication process. But the communication process is not likely to be enhanced unless it is truly believed by all participating parties that the buck stops everywhere.
THE MANAGER’S TASK It is the task of the manager to make sure that the passage of information in any organization be as complete, unrestricted and as speedy as possible. If there are any kinks in the bureaucratic communication line, then wherever those kinks exist, potential disaster areas exist. Crisis or disaster will strike at the weakest links of an organization, and the weakest links of an organization are those links where there is a stoppage of the flow of information. In Chinese medicine, there are two elements which are considered vital for the healthy functioning of the body. There is the material element and there is the energy component. The energy component is called the chi. In Chinese terms, for there to be good health, the chi must be able to flow smoothly. If the chi is not functioning, then the structure will collapse. If the chi is functioning well, then nearly any structure can work. In organizational terms, the flow of information is the chi of an organization. Where there is a blockage of the chi, the structure of an organization will be powerless. Where there is a free and unimpeded flow of chi, the structure of an organization will be able to function with maximum effect.
THE MANAGER AS EDUCATOR AND FACILITATOR OF GOOD WILL One of the vital tasks of the manager is education. One tends to think of the manager as an administrator or as a leader, but rarely as an educator. But good administration and good leadership cannot be carried out unless there is good will. The will to communicate is evidence of the presence of good will. If there is a will to communicate in a company, then good will is present in that company. If there is no will to communicate in a company, or a feeble will to communicate in a company, then good will is absent or nearly absent in that company. There has been much discussion of the concept of the empowerment of individuals in an organization. But the lack of a will to communicate results in the enfeeblement of the organization no matter how much individuals might feel empowered. How is good will produced? Good will cannot be manufactured. It is the natural result of all parts of a company following the precept, ‘The Buck Stops Here and It Stops Everyplace Else as Well’. The manager’s task is the task both of educating herself or himself and of educating the company. The ideal person to initiate this process of education, of course, would be the CEO. If it becomes clear that the CEO desires that this policy of the buck stopping everywhere be implemented, then the implementation will have a good beginning. But a beginning is not sufficient. All levels of the company or organization must themselves really believe that such a policy is a necessary one, and this, of course, requires education. Without a firm belief in the importance and the effectiveness of this policy, it cannot work. The willingness of management to perceive itself in this educative function is crucial to the operation of the system. The management must retain its own responsibility while at the same time ensuring that all levels in the hierarchy of responsibility shoulder their share of responsibility.
The willingness of management to participate actively in this approach is the backbone of the system. It is the conviction of this book that the establishment of both the will and the means of communication is the strongest possible method of disaster prevention available to an organization. The presence of both the will and the means of communication is evidence of the
presence of good will in an organization. The presence of good will can only be maintained by the continuing and pervasive existence of a two-way will to communicate at all levels. But such a will to communicate in turn depends upon the manager’s self-understanding of the importance of the existence of good will and her or his determination in seeing to it that the will to communicate is continuously actualized at all levels. The manager does not only require self-education; she or he must also be a continuous teacher. In addition, the manager must be a leader; she or he must take an active role not only in self-education and educating others; she or he must take an active role in implementing the will to communicate in her or his organization. One’s success in establishing the will to communicate in a company is directly proportional to one’s success in supplanting the notion of ‘The Buck Stops Here’ with the notion of ‘The Buck Stops Here and It Stops Everyplace Else as Well’. The notion of the buck stopping here and also everywhere else at once is the notion of a business organization as not only a network of communication but also as a business community. What is being discussed in large measure is the formation of a business ethic that is not only a formula for making business ethical but for making a business community possible. When a true business community is present, rather than there existing different levels of hierarchy, what exists are different members of a community. A member of a community will naturally not only feel responsible for spotting potential disasters, requesting information, passing information on and acting upon such information received, but moreover will feel a keen sense of responsibility for making the business work. The same conditions for effective disaster prevention are the conditions for the success of a company. This is not to say that these conditions, when present, will ensure success.
But it is very difficult to envision a successful company in which there is no sense whatsoever of belonging to a community.
NOTES 1. The idea of ‘The Buck Stops Here and It Stops Everywhere Else as Well’ initially appeared in Robert E. Allinson and Leonard Minkes, ‘Principles, Proverbs and Shibboleths of Administration’, International Journal of Technology Management, Vol. 5, No. 2, 1990, pp. 179-187. A similar argument that sharing responsibility does not diminish any particular person’s share is developed by Michael J. Zimmerman in his essay, ‘Sharing Responsibility’, collected in Peter A. French, The Spectrum of Responsibility, New York: St. Martin’s Press, 1990, pp. 275-289. The emphasis there is on the point that multiple causality extends the sense of responsibility, but the point is substantially the same. An important implication both of Zimmerman’s argument and of the argument of the present author is that the pseudo-problem of whether a corporation (which is manifestly not literally a person) can be considered morally responsible, or whether responsibility should only be attached to individuals within a corporation, is dissolved. If responsibility is something which can be shared without being diminished (Zimmerman) or, as the present author would prefer to put it, is something which is extended when there is more than one actor [the word ‘share’ carries with it for some readers the idea of dividing up responsibility such that if two actors share in the responsibility for something, then each share might be less than if one bore the responsibility alone], then both a corporation and an individual member acting on behalf of that corporation can be held accountable. This would appear to put paid to Velasquez’s argument that, ‘if we accept the view that moral responsibility for wrongful corporate acts rests with the corporation, we will tend to be satisfied with blaming or punishing only the corporate entity ... 
our blame and punishment must travel behind the corporate veil to lodge with those who knowingly and intentionally bring about the corporation’s acts’. Cf., Manuel G. Velasquez, S.J., ‘Why Corporations Are Not Morally Responsible for Anything They Do’, Business & Professional Ethics Journal, vol. 2, n. 3, Spring, 1983, pp. 1-18. Secondly and most importantly, Velasquez’s argument rests on a literal distinction between a corporation and its members such that a corporation cannot possess intent. Consider his statement, ‘Since the acts of a corporation are constituted or brought about wholly by bodily movements that are under the direct control of agents other than itself, it is inappropriate to blame or punish the corporation for those acts’. But this line of reasoning seemingly neglects the important fact that members of a corporation, and in particular directors of a corporation, are vested with the responsibility to make decisions on behalf of the corporation, and the corporation is in turn responsible for decisions made by such members since these decisions are not made in the personal capacities of such members but in their capacities as representatives of the corporation. If this argument is not made, then only the assets of individuals could be attached for damages and not the greater assets of a corporation. It would appear that Velasquez’s argument is vitiated on other grounds as well. On the one hand, there is the empirical fact that corporations are held accountable in practice for errors made by their employees, e.g., the Exxon Corporation was originally supposed to pay out around U.S.$5 billion in
government claims over the Exxon Valdez oil spill which took place in March, 1989. However, this decision has since been appealed and quashed. In any event, Velasquez could argue here that such an accountability was illicitly extracted. It is not wholly irrelevant to consider, however, that in U.S. law corporations are already considered as persons (in a legal sense and not in every sense) and are considered morally and legally responsible. Thus, the issue of whether philosophers must first decide that a corporation is a person in order to decide moral or legal responsibility has already been decided upon by courts. That corporations may not be persons in every sense is irrelevant since they have been considered by U.S. courts to be persons in the sense that they can be considered culpable even of corporate manslaughter. Cf., People v. Rochester Railway and Light Co. 195 N.Y. 102; (1909) 88 N.E. Reporter 22 and State v. Lehigh Valley Railway Co. 90 N.J. Law 372; 103 Atlantic Reporter 685. In the former case, the judgment of the Court of Appeals of New York was delivered by Hiscock, J., who stated that: ‘... we have no doubt that a definition of certain forms of manslaughter might have been formulated which would be applicable to a corporation, and make it criminally liable for various acts of misfeasance and nonfeasance when resulting in homicide ...’ Velasquez is well aware of French’s argument that a corporation may be considered to have intent if one takes into account the ‘CID’ structure (Corporate Internal Decision Structure) and quotes French’s point that the CID structure ‘accomplishes a subordination and synthesis of the intentions and acts of various biological persons into a corporate decision’.
But this is insufficient in Velasquez’s terms since in certain cases, at least, ‘to the extent that a corporate act is the unintentional result of the concatenated actions of several corporate members, the corporate act may be an act for which no one is morally responsible: it is an unintentional happening’. But this argument, however subtle, does not take account of the central notion of this chapter, which is that shared responsibility extends the concept of responsibility to every member of an organization, and to the extent to which there is an omission of a necessary action, that individual or those individuals and the corporation at large are nevertheless responsible. An extended discussion of this occurs in Chapter Nine of this book, ‘The Herald of Free Enterprise Disaster’, especially n. 14. Michael Keeley’s and David Ozar’s concern that if a corporation is construed along the model of a moral person, then corporate accountability might be lost is also addressed by the principle that ‘The Buck Stops Here and It Stops Everyplace Else as Well’. With the application of this principle both the corporation (as the body which is endowed with responsibility) and individuals within the corporation are jointly responsible. The notion of the joint and multiple attribution of responsibility is amply illustrated in the case of the British ferry disaster taken up in Chapter Nine. For Keeley’s and Ozar’s arguments, Cf., Michael Keeley, ‘Organizations as Non-Persons’, Journal of Value Inquiry, 15: 1981, pp. 149-154 and David T. Ozar, ‘The Moral Responsibility of Corporations’, in Thomas Donaldson and Patricia H. Werhane (eds.), Ethical Issues in Business, Englewood Cliffs, N.J.: Prentice-Hall, 1979, pp. 294-300. Ozar seems to acknowledge the notion of shared responsibility
among individuals within a corporation in his chapter; he still finds difficulty with the notion of finding a corporation morally responsible, though he plainly wavers on this point: ‘If a pollution control device malfunctions and the community suffers the consequences of a dangerous pollutant, we might not hold the corporation morally responsible for the injury. Here, too, however, the use of the excuse is qualified. Both the individuals and corporations are required to exercise a reasonable degree of care to avoid accidents. If they fail to do so, then they are held morally responsible for acting negligently and they are not excused in the same way for what happens as a result’. Cf., Ibid., p. 297. For a further discussion of Ozar, Cf., below, Chapter Eleven, n. 75. 2. In their joint article, ‘The Moral Dimension of Organizational Culture’, Waters and Bird attempt to find ways to ‘strengthen the moral dimension of an organization’s culture’, which include senior management articulating an ‘explicit managerial ideology’. Cf., James A. Waters and Frederick Bird, ‘The Moral Dimension of Organizational Culture’, Journal of Business Ethics, 6, 1987, pp. 15-22. While the present author would prefer the word ‘ethos’ to ‘ideology’, Waters and Bird do seem to stress that the CEO possesses an educative responsibility. It is the proposal of the present author that the phrase ‘The Buck Stops Here and It Stops Everywhere Else as Well’ be adopted as part of an explicit managerial ethos. This ethos would also include a safety first priority, as will be discussed in the sequel. It may be noted that the taking on of ethical imperatives as part of an overall managerial ethos would be considered to be the adoption of goals that were both goals for and of an organization, as is implicit in the arguments of both this and the following chapters.
There should be no conflict between goals which would be considered the goals of and the goals for an organization when one is considering the managerial ethos. Cf., Michael Keeley, ‘Organizations as Non-Persons’, Journal of Value Inquiry, 15: 1981, pp. 149-155 for a discussion of this distinction. The incorporation of ethical goals into an overall managerial ethos would also appear to solve the conflict perceived by Ladd to exist between the dictates of personal morality and management objectives in his article, ‘Morality and the Ideal of Rationality in Formal Organizations’. Cf., Thomas Donaldson and Patricia H. Werhane, Ethical Issues in Business: A Philosophical Approach, Englewood Cliffs: Prentice-Hall, 1983, pp. 125-136. For approaches that do not see an incompatibility in including ethical goals in organizational goals, Cf., Kenneth Goodpaster, ‘Morality in Organizations’, Ibid., pp. 137-145 and Thomas Donaldson, Corporations and Morality, Englewood Cliffs: Prentice-Hall, 1982, Chapter Two. With regard to the recommendation that safety be given top priority, Thomas Dunfee argues: ‘Although absolute safety is an unrealistic goal, managers must give the highest priority to safety. The principle is one of ethics rather than law, and therefore extends beyond the obligations imposed by government regulation’. Cf., Thomas W. Dunfee, ‘The Case For Professional Norms of Business Ethics’, American Business Law Journal, Vol. 25, 1987, p. 404. For arguments that Top Management bears a special responsibility for transmitting ethical goals throughout a corporation, Cf., Kenneth R. Andrews, ‘Can the best corporations be made moral?’, Harvard Business Review, May-June, 1973, pp. 57-64, Dinah Payne et al., ‘Corporate Codes of Conduct: A Collective Conscience and Continuum’, Journal of Business Ethics, Vol. 9, iss. 11,
November, 1990, pp. 879-889 and Michael Simmons, ‘Creating a New Leadership Initiative’, Industrial and Commercial Training, Vol. 22, iss. 5, 1990, pp. 3-7. 3. There is a subtle difference here between the emphasis in this book on taking moral responsibility for decision making at every level and the extrinsic adoption of business ethics by the setting down of explicit codes of conduct. The latter understanding of business ethics is afforded by such articles as Bodo B. Schlegelmilch, ‘The Ethics Gap between Britain and the United States: A Comparison of the State of Business Ethics in both Countries’, European Management Journal, Vol. 7, No. 1, 1989, pp. 57-64. The difference between the emphasis in this book and what is covered in explicit codes of ethics in companies is well illustrated both by the definition of and the specific examples cited of ethical codes by Bodo B. Schlegelmilch and Jane E. Houston. Their definition of a corporate code of ethics is: ‘A statement setting down corporate principles, ethics, rules of conduct, codes of practice or company philosophy concerning responsibility to employees, shareholders, consumers, the environment or any other aspects of society external to the company’. Cf., Schlegelmilch and Houston, ‘Corporate Codes of Ethics in Large U.K. Companies’, European Journal of Marketing, Vol. 23, No. 6, 1989, p. 11. Such a definition does not seem to include a concept of responsibility for managerial decision making inside the firm. It also does not specify any need for a prioritization of ethical policies such as a safety first priority. In the main, most ethical codes concentrate on employee conduct, with community, environment and customers coming in second. Ibid., p. 15. One solution to this problem is the formulation of general credos as opposed to formal codes. As Thomas W. 
Dunfee writes: ‘The credo can send a signal that managers are expected to incorporate ethical considerations into all aspects of their personal decision making. Individual responsibility is emphasized, each manager must apply the general credo in a manner appropriate to his/her responsibilities. The general credo employed by Johnson & Johnson is widely thought to have been a major factor in its decision to pull Tylenol off the market when cyanide was discovered in some capsules’. Cf., Thomas W. Dunfee, ‘The Case For Professional Norms of Business Ethics’, American Business Law Journal, Vol. 25, 1987, p. 391. 4. As Christopher Stone notes in Where the Law Ends, ‘We can restructure the corporation’s information processes so as to make it gather and channel vital data to those in a position to do something about it. But what is to guarantee that the person in authority, supplied the information, will act upon it? ... We can make companies install special officers in charge of particular problem areas. [one of Stone’s own solutions] But what is there to guarantee that, the special executive having been instituted, the other officers will not undermine him in all the subtle ways open to them?’ Cf., Where the Law Ends: The Social Control of Corporate Behavior, New York: Harper and Row, 1975, pp. 228-229. A striking example of a system of explicit reporting channels already in place which individuals both in the role of transmitter and receiver chose to ignore is the case of the Challenger disaster detailed in Chapter Seven, below. It is odd that so little attention has been given by major philosophers to the conceptual problems related to corporations. According to an insightful article by Walter Robert Goedecke, ‘... a major area of modern life and modern law is neglected by the philosophers. Austin did not deal with corporations,
nor has H.L.A. Hart in his works. John Rawls has nothing on corporations in his monumental Theory of Justice. Dworkin discusses the problems of social justice without reference to corporations’. [Hegel, Goedecke points out, is a major exception with respect to classical figures]. Cf., ‘Corporations and the Philosophy of Law’, The Journal of Value Inquiry, IX, Summer, 1976, p. 81.
CHAPTER 4
CRISIS MANAGEMENT AND DISASTER PREVENTION MANAGEMENT CRISIS MANAGEMENT: THE “BAND-AID” APPROACH In the literature that exists so far, the phrase ‘crisis management’ has been widely employed. But this terminology is ambiguous. ‘Crisis management’ can be taken to refer either to managing a crisis after it has arisen, that is, intervening in a crisis situation, or managing in such a way that a crisis does not arise in the first place. The blanket phrase ‘crisis management’ is thus a conceptual blanket that covers a multitude of sins. It is best to avoid the usage of such a label since the inclusion of the word ‘management’ in such a label implies that the process so labeled is envisioned as a solution to the problem of crises in general. This, however, is not really the case. At best, so-called crisis management addresses only crises that have already arisen and usually only when such crises have become either imminent or already actualized disasters. It is best therefore to avoid the use of the blanket phrase ‘crisis management’ altogether, and when it is used, to consider carefully if what is truly meant by such a phrase is ‘crisis intervention management’ or, more truly, since a so-called crisis is not recognized as such until it is an imminent or actual disaster, ‘disaster intervention management’. In the so-called ‘crisis management’ literature, the crisis is normally seen as that which eventuates as a consequence and is not usually conceived of as something other than the disaster itself. For Shrivastava, in his excellent book, Bhopal: Anatomy of a Crisis, the terms crisis and disaster are used more or less interchangeably. In some instances, his use is nearly opposite to that of this volume in that the disaster for Shrivastava is that which effects the crisis, whereas in the usage of
the present volume, the crisis is that which precedes and can in fact be part of the cause of the disaster.1 In the usage in this book, the word ‘crisis’ is used in conformance with its dictionary definition as a decisive or critical moment or turning point when things can take a dramatic turn, normally for the worse, although if caught at this stage they can also be modified.2 In the Chinese language, the term wei ji is a combination of two characters, ‘danger’ and ‘opportunity’. ‘Disaster’ is also used in conformance with its dictionary definition as a ‘sudden and extraordinary misfortune’ to signify the actual onset of the climactic event itself.3 As will be noted particularly in the analysis of the Challenger disaster to follow, crisis and disaster can be seen as forming a continuum in which a crisis, if not stopped, eventually leads to the disaster event. However, in nearly all of the literature on the subject, the words ‘crisis’ and ‘disaster’ are used interchangeably, which causes the reader much confusion. Since ‘crisis management’ is used in the literature to refer for the most part to either how one responds to an existent crisis or how one might anticipate crises and therefore be able to respond to them, crisis management most often connotes crisis intervention management, whether after the onset of the disaster or in anticipation of the disaster.4 In either of these two modes, it is nevertheless a band-aid approach since it either comes into effect after the wound or primarily addresses itself to having a band-aid ready to cover the wound immediately so that the wound does not bleed overly much. Mitroff, Shrivastava and Udwadia, writing jointly, while still using the language of crisis management, argue well for a proactive rather than a reactive model of crisis management. Corporations are not to passively await a crisis and then react when a crisis hits them. They are to be more prepared and actively anticipate crises. 
Prevention, however, is not the primary purpose of even this sophisticated model of crisis management: ‘Indeed, prevention of all crises is not the basic purpose of planning and crisis management’.5 In a later article, Mitroff argues that every organization should engage in crisis planning, which he defines as follows: ‘... crisis planning is a process of continual asking “what if” a set of crises hit us simultaneously? What are we prepared to do? Are we trained both intellectually and emotionally to handle a major crisis? In other words, the purpose of
crisis management is to teach an organization to confront, in advance, the stress that will arise when a crisis happens’.6 This notion of crisis management may perhaps best be understood as anticipatory crisis intervention rather than crisis prevention proper. While it shows an advance over simple reliance upon fixed contingency plans, it is nevertheless geared towards conceiving of reactions to various types of contemplated events.7 Mitroff does also speak of an entire organization as being proactive rather than reactive. Nevertheless, if crisis management does not involve a change in one’s conceptual foundations, as is argued above, it is not at all clear how an organization can become proactive. The presence of a special crisis management team or a crisis manager may even operate as a disincentive for the rest of the organization to take responsibility for being crisis prevention managers. Consider carefully the probable influence of ‘The Buck Stops Here’. In Mitroff’s writings, as in Shrivastava’s, the words ‘crises’ and ‘disasters’ are used virtually interchangeably; thus they seem not to share the view of this book, which is that crisis and disaster are better understood if differentiated as different phases of a crisis on a continuum, the disaster being the final or most actualized phase of the crisis. Mitroff certainly does share the view, however, that a crisis (for the present author, read ‘disaster’) does give off early warning signals before it occurs.8 Yet this is not quite the same as the argument presented in this book, which would urge that the real crisis lies in the prior blockage of communication channels and the absence of a sound epistemological and ethical foundation for multiresponsibility for action and decision making.
Mitroff’s earlier view tends to illustrate the current view of crisis management as that which operates for the most part after the fact; e.g., his recommendation that organizations should have a permanent, trained crisis management team.9 In the approach suggested in this volume, the crisis is perceived as the active stage on a crisis continuum which already exists in a dormant stage, the eventual outcome of which is all too frequently the disaster itself. While the approach in this book could also be labeled crisis prevention management, since the concern of the present author is primarily with the prevention of disasters, the label ‘Disaster Prevention Management’ is to be preferred. In a more
fundamental sense, the way to prevent disasters is to prevent the arising of crises, either dormant or active. Thus, the way to prevent disasters from arising is through a crisis prevention management style. A major aspect of the change in epistemological foundations that is being suggested in this book is the change from management seen as geared simply to profitability to management seen as geared to the prevention of crises. The prime focus of the model presented in this volume is not to wait for disasters to happen and then to react to them but to attempt to prevent them from happening in the first place. Of course, in certain instances a very thin line divides preventive designs from how those same designs operate when a disaster is imminent. However, it is still important to note that the focus is on prevention rather than response to disaster. This is not to say that crisis management, understood strictly as an approach designed primarily to deal with disasters after they occur and also as a means of anticipating possible disasters and preparing for them, is not an indispensable business function. A strong stand has been taken here mainly to differentiate such an approach from crisis or disaster prevention proper. While it would be suicidal to have no contingency plans ready to cope with conceivable disasters, the point being emphasized here is that one must be careful not to be thereby lulled into the point of view that disasters are inevitable and that one therefore need not exercise equal vigilance in their prevention. A good work that spells out the notion of crisis management clearly is Steven Fink’s rather unfortunately titled Crisis Management: Planning for the Inevitable, published by the American Management Association in 1986.
While the book is mainly a primer in how to deal with disasters after they happen, Fink does relate that, according to the Report of the President’s Commission on the Accident at Three Mile Island, a full thirteen months before the disaster (which he refers to as an ‘incident’), a senior engineer wrote a strongly worded safety warning describing an almost identical accident [sic] to the one that ultimately occurred at TMI, explaining how these same conditions could arise at TMI and urging that clear safety instructions therefore be passed on to operators. Nothing happened; the memorandum was bogged down in chains of command and channels of communication (an all too familiar story
as can be seen in the cases of the Challenger disaster and the Herald of Free Enterprise disaster) and was never even seen by the operators until after the accident [sic].10 Despite this virtually foreseen dimension of the disaster at TMI, the Presidential Commission finally resolved that ‘we are convinced that an accident like Three Mile Island was eventually inevitable’.11 Unlike the Presidential Commission, Fink is clearly aware of the prevention component of crisis management, but this component is not given the major emphasis in his book, which in the end seems to be an example of anticipatory crisis intervention management.12 But for even anticipatory crisis intervention management to be truly effective, it would require, as is argued in this present work, a change in one’s entire epistemological foundations. Despite his statement that this key memorandum was ‘sidetracked in chains of command and channels of communication’, there is no recommendation in his book for an overhauling of the epistemological foundations with special attention given to the need for clear chains of command and channels of communication. But without precisely this sort of overhaul, such disasters as TMI and the others treated in this volume will occur over and over again.
CONCEPTUAL PREPAREDNESS

Some readers may recall that the motto of the Boy Scouts of America is ‘Be Prepared’. Such a motto has a strong parallel with the concept of conceptual preparedness introduced here. Conceptual preparedness is equivalent neither to drawing up a contingency plan for various imagined disaster possibilities nor to delegating such a responsibility to a special task force. In the Boy Scouts, the idea was that each and every Scout had the personal responsibility of being prepared, and this had to do with a sense of continuous vigilance rather than a conjuring up of all the possible things that might happen to one. Conceptual preparedness is a frame of mind rather than any particular plan or organizational structure or set of questions to pose. Conceptual preparedness for an organization as a whole is a frame of mind which entails that an absolute freedom of
communication exists within every level of an organization and between one level and another, whether one is moving horizontally within the organization, vertically from one level to another, or outside the organization to make contact with or receive communication from outside organizations. The perspective this book argues must be put in place is a mentality that provides an ease of operation pervading an entire organization. It is the existence of this mentality that equips an organization to be sensitive to the prevention of disasters regardless of what particular kinds of disasters might happen to arise. Conceptual preparedness differs from ‘what if’ or crisis planning in that contingency management or crisis planning is limited by the particular questions that might be asked of the disasters that are imagined as possible occurrences. While such a questioning frame of mind must be incorporated into any optimal form of crisis prevention management, it would be a mistake to identify the core of crisis prevention management exclusively with ‘what if’ thinking. ‘What if’ thinking is still a form of the ‘band-aid’ approach to crisis management, despite its greater sophistication. The ‘what if’ approach still does not call for a complete change in epistemological foundations but maintains the same epistemological foundation and anticipates how to utilize that structure in dealing with various imagined possible concrete situations. Even the best intentioned ‘what if’ model is limited to being a kind of anticipatory form of crisis intervention management and is not really a crisis prevention model at all. The reason for this is that there has been no examination of the mentality and consequent management style that lies behind the arising of crises in the first place.
The ‘what if’ school of crisis management still maintains that there is nothing wrong with current fundamental epistemological foundations but concentrates on how to tactically employ current resources to their best effect. It therefore lacks any basic strategy for fundamentally effecting a genuine crisis prevention management objective. The ‘what if’ form of anticipatory crisis intervention differs from conceptual preparedness in general in that it is still finite in its possible courses of action. It resembles what may be considered possible tactics of intervention rather than an overall strategy of
prevention. The limits of the ‘what if’ approach show up when a possible disaster situation arises that has not been envisioned. But the objection is not only that the best ‘what if’ thinkers cannot possibly envision all future possibilities; indeed, ‘what if’ thinking must form a vital part of any crisis prevention management strategy. It is also that the ‘what if’ thought process is limited to specific courses of action as its modality; it has not fully embraced the general thesis that the entire mentality must be altered.
THE EXPLICIT PRIORITIZATION OF A SAFETY ETHOS

In the context of disaster prevention management, the logical culmination of an ethical consciousness which places respect for persons as its top priority is a consciousness that places matters of safety first in an organization, especially when safety concerns possible matters of life and death. It is therefore of great assistance to specify that the main thrust of an ethical consciousness within an organization is the attempt always to give first priority to the safety of life and limb of organizational members and of members of the general public who are served or affected by the organization. Such a prioritization may be called the establishment of a safety ethos for the organization. (Such a concern can be expanded to include an environmental concern as well, since a healthy environment is a condition for the long-term safety of the general public.) While the need for an explicit safety ethos may not be clear until one considers the case studies to follow in some detail, some general observations can be made. First of all, the establishing of a safety ethos for an organization empowers members of that organization with an ethical imperative. This both gives all members of an organization a greater sense of responsibility and focuses their ethical responsibility by prioritizing it. The explicit prioritization of a safety ethos can serve as a motivator for everyone within an organization to be on the look-out for potential danger signals. In addition, the establishing of a safety ethos also provides a certain degree of peer
pressure, which can serve to protect members of an organization so that they might feel safer speaking out without the possible loss of their job security. Without a strong safety ethos embraced by Top Management, those who might spot some potential danger might well be disinclined to speak up if by speaking up they could alienate some authority figure. A safety ethos can also serve as a source of authority for those in Top Management to appeal to when confronted with pressures from the outside, such as political or financial pressures. The CEO can always turn down appeals to cut corners by saying that his or her hands are tied: he or she is fully committed to a safety first prioritization.
THE INDEPENDENTLY FUNDED SAFETY BOARD WITH FULL VETO POWERS OVER OPERATIONAL DECISIONS

There is always the distinct possibility that cases may exist in which clear lines of communication, clear areas of authority and a genuine spirit of communication are present and yet certain individuals may still fear to speak out for fear of losing their jobs. Even the presence of a safety ethos may not bring with it sufficient protection if a particular CEO is not fully behind the safety ethos. In such a case, it is important to have a management structure that provides even more protection than can be provided by a safety ethos by itself. Such protection can be provided by an independently funded Safety Board, which can be a channel for receiving and sending warnings of possible disasters without the fear of endangering job security. If such a board is independently funded, it can pursue and announce its findings without fear of endangering its own employment. If such a board is given full veto power over operational decisions, such as, for example, the decision to launch the Challenger, then a safety first prioritization can be more assured of being truly actualized. If, for example, as is the case even with the most current management structure at NASA, the decisions of a Safety Board can be overridden, then a safety first prioritization is being given only lip service. The granting of both financial independence and full veto powers to a Safety Board is the
only assurance that safety first considerations will be given a strong possibility of being actualized. The granting of such powers to a Safety Board can also be seen to be in the interest of a CEO or of the Board of Directors of a firm. In the eventuality that outside pressures are brought to bear on an organization to take actions that might endanger safety, the CEO can always justify her or his refusal to bend under pressure by saying that it was not her or his decision to make. The granting of full veto powers to the Safety Board gives the CEO even more freedom to refuse to bend under pressure than the existence of a safety ethos would by itself. The CEO does not have to fear for her or his own job appointment if the decision not to take action does not come from her or his office. While this fear could conceivably still operate for the Safety Board itself, that board is a less likely channel to be subject to outside political or economic pressure. If the Safety Board is nonetheless subjected to such pressures despite its mandate to enforce the safety ethos, it may be necessary to consider how more protection can be given to those appointed to Safety Board seats in terms of job security, in the same fashion as the members of the Supreme Court of the United States are given lifetime appointments so that their opinions will not be subject to outside political or economic pressure.
NOTES

1. Paul Shrivastava, Bhopal: Anatomy of a Crisis, Cambridge: Ballinger Publishing Company, 1987, pp. xv-xvi.
2. Webster’s New International Dictionary, Unabridged, 2nd Edition.
3. Ibid. Of course, it could be argued that the very idea of a misfortune conjures up the picture of fortune’s wheel taking a sudden turn, and in this sense the choice of the word ‘disaster’ rather than ‘accident’, for example, is arbitrary. However, since ‘disaster’ is not nearly in as active use as ‘accident’ or ‘tragedy’, given the limitations of language choice, ‘disaster’, in terms of semantics, in the opinion of the present author, does allow for the sense of responsibility to be present more so than ‘accident’ or ‘tragedy’. It goes without saying that the types of disasters under consideration are those which are man-made or for which corporate bodies can be considered responsible, rather than natural disasters such as earthquakes, floods, typhoons, tidal waves and the like.
4. In an article which deals with the organizational dimensions of disaster (and develops a view of disaster prevention management more consonant with the view
developed above), when referring to the literature of disaster studies from post-World War Two to 1976, Barry Turner argues that, ‘Disaster planning literature aimed at management ... begins with the assumption that a catastrophe will occur, and directs attention to the forming of emergency committees and the organization of rescue and relief plans’. Cf. Barry A. Turner, ‘The Organizational and Interorganizational Development of Disasters’, Administrative Science Quarterly, Sept. 1976, Vol. 21, pp. 380-381.
5. Ian I. Mitroff, Paul Shrivastava and Firdaus E. Udwadia, ‘Effective Crisis Management’, The Academy of Management EXECUTIVE, Vol. 1, No. 3, p. 285. (Mitroff’s disclaimer that crisis management is not primarily aimed at the prevention of crises is all the more surprising considering the title of an earlier article of his, ‘Teaching Corporate America to Think About Crisis Prevention’, The Journal of Business Strategy, Spring, 1986, Vol. 6, No. 4, pp. 40-47.)
6. Ian I. Mitroff, ‘Crisis Management: Cutting Through the Confusion’, Sloan Management Review, Vol. 29, No. 2, Winter, 1988, p. 17.
7. A fairly standard view of crisis management as consisting of contingency plans can be found in Robert F. Hartley, Management Mistakes, New York: John Wiley & Sons, 1983, 1986, p. 284.
8. Op. cit., p. 189. Ibid., p. 19. The importance of a crisis team is emphasized by Littlejohn in his book Crisis Management: A Team Approach, where he argues that ‘... the designation of a select team of individuals both to plan for and to manage the crisis represents the pinnacle of effective crisis management’. New York, AMA Membership Publications Division, 1983, p. 11.
9. In more recent writings, Mitroff’s view seems to have evolved and comes much closer to the viewpoint of the present author. In particular, in his article ‘Crisis Leadership’ he writes, ‘The best form of crisis leadership is crisis prevention’.
However, what he means by this is to ‘put the proper CL programs and mechanisms in place before a major crisis. Conduct brain-storming sessions as to how you can better amplify early-warning signals that all impending crises send out. ... There is almost always someone who knows beforehand that the crisis is about to happen. When early warning signals are picked up and acted upon, crises might be prevented, or its ill effects greatly diminished’. Cf. Ian Mitroff, ‘Crisis Leadership’, Executive Excellence, Provo, August 2001. Crisis leadership is an approach that he advocates as a replacement for crisis management, and this innovation possesses much in common with the viewpoint of the present author. It still emphasizes ‘having mechanisms and structures in place’ rather than the entire organization being organized around the core principle of the prioritization of safety, as in the view taken in this volume, but the differences between Mitroff’s latest approach and that of the present author are shrinking.
10. Steven Fink, Crisis Management: Planning for the Inevitable, AMACOM, New York, 1986, pp. 11-12.
11. Ibid., p. 8. It is interesting to speculate whether Perrow was influenced by or influenced this Presidential Commission, since he wrote a report for it. He appears to make no mention of the warning memorandum in his work. Cf. Normal Accidents: Living
with High-Risk Technologies, p. vii, and his chapter, ‘Normal Accident at Three Mile Island’, pp. 15-31.
12. Another work which approaches crisis management from the angle of anticipatory crisis intervention is Gerald C. Meyers and John Holusha, When It Hits the Fan: Managing the Nine Crises of Business, New American Library, New York, 1986. (The title should provide a strong clue to the orientation of the book as crisis intervention rather than crisis prevention.)
CHAPTER 5

THE VASA DISASTER

While it may seem strange to reach so far back in time, it is important to consider a case from the more distant past so that it becomes evident that, although from the present perspective the case of the Vasa would be considered one of low technology, the causes of the disaster are not greatly different from those of disasters that occurred in high-technology environments. It is important to take note of this when one considers that one finds so many theses these days that connect man-made disasters with the riskiness and complexity of high technology. After a brief consideration of the case of the Vasa, it should be clear that the more things change, the more they remain the same. On the 10th of August, 1628, the warship Vasa, the newest ship in the Royal Swedish Navy, set out on her maiden voyage, capsized in a slight squall and sank in Stockholm harbor. While fortunately there were no victims in this disaster, nevertheless there are lessons to be learned. The same patterns of causality that are to be found in subsequent disasters are already to be seen in the Vasa disaster. On the 5th of September, 1628, a Naval Court of Inquiry was held at the Tre Kronor Palace in Stockholm. The court was composed of seventeen persons, six of whom were members of the Council of the Realm. The King’s half brother, Lord Admiral Karl Karlsson Gyllenhielm, was acting chairman. The inquiry found no one guilty, and no definitive conclusions were reached as to the cause or causes of the disaster. The parallel here with the verdict of the Privy Council (the British counterpart to the Supreme Court in the United States), discussed below in Chapter Eleven and the Appendices with regard to the Air New Zealand disaster on Mount Erebus three hundred and fifty-one years later, is rather stunning. No one was officially reprimanded or punished.
After the inquiry, a group of experts from the court assembled in order to analyze the records and attempt to find an explanation. There is a record of this meeting. It is difficult to
ascertain the causes of this disaster for a great number of reasons, not the least of which is that the testimonies of witnesses at the main inquiry held on 5th September 1628 were obtained under the threat of severe punishment, and the penalties were harsh. Immediately after the disaster the surviving captain, Söfring Hansson, was arrested and put into prison, and the day after, on the eleventh of August, a preliminary inquiry was held between him and the shipbuilder. This chapter is based on the book WHY VASA CAPSIZED, which was written by Captain (ret.) Curt Borgenstam, R.Sw.N. Engineer, and Anders Sandström.1 The wreck of the Vasa was located in 1956 and salvaged in 1961 by Anders Franzén. This has proved to be a great source for the determination of the proximate causes of the disaster. The Vasa now stands in the Vasa Museum in Stockholm and is a magnificent and awesome sight for the visitor.
THE CONSTRUCTION OF THE VASA

King Gustav Adolf commissioned the ship to be built by Henrik Hybertsson, who became ill and died in the course of the construction; the work was completed by his assistant, Hein Jacobsson. A probable key point brought out in the book WHY VASA CAPSIZED is that the shipbuilder had planned the Vasa to have one enclosed battery deck.2 The King, however, decided upon two gun decks, possibly inspired by a Danish ship, the Sancta Sophia, outfitted with two gun decks, which was being built at the same time as the planning and design of the Vasa (1624-1627). The hull, however, had already been built, and though it was modified, the modifications did not contribute to stability. The point is that the hull had not been designed for a ship that would carry two battery decks. The question which arises is: why was construction not stopped when it became obvious that the safety of the crew would be endangered? While it can be alleged that it was the wrath of the King that was feared, it can also be said that the King had not been informed of the importance of the priority of a safety ethic.
THE STABILITY TEST

In the presence of Admiral Klas Fleming, the Vasa was given a stability test while she was still being fitted out at the quay outside the Royal Castle. She was found to be unstable. Neither of the shipbuilders, Hein Jacobsson and Johan Isbrandsson, who were interrogated by the Court of Inquiry, was present at the test or was informed of its outcome. The question arises as to why they were not invited, why they were not informed of its results and, further, why they had not inquired as to the results of the test. It is equally surprising that they themselves did not give the Vasa a stability test. One might well ask why both Admiral Fleming and Captain Söfring Hansson agreed to take the Vasa to sea after the failure of the stability test. Of course, the answer is pressure from above. According to Borgenstam, ‘... the King had in writing ordered that both the Vasa and the Kronan [another ship commissioned at the time] shall be ready by the next Jacobi (that is, 25th July), and if not, those responsible would be subject to His Majesty’s disgrace’.3 Neither Admiral Fleming nor Captain Hansson said anything about the stability test in their testimony at the inquiry. At the hearing the boatswain Matsson testified that Hansson had told the Admiral that the ship was unstable. The Admiral and the Captain had ordered thirty men to run from one side of the ship to the other. According to Borgenstam, ‘The first time’, said boatswain Matsson, ‘it heeled over one plank, the second time two planks and the third time three planks’. (The similarity with the O-ring erosion in the space shuttle Challenger, to be discussed below in Chapters Seven and Eight, must be noted. Here, it was considered that the ship could stand a list of three planks without capsizing. In the case of the Challenger, it was considered that if the O-rings burned only one-third of the way through, there was a three-to-one safety margin.)
Then, he said, the Admiral ordered the testing to be stopped, saying that ‘if they had run over the ship one more time she would have capsized’. (The Admiral sounds a bit like Boisjoly in his warnings of disaster if the Challenger were to be launched.) The Admiral had also uttered a wish ‘that His Royal Majesty had been home’.4 The King, Gustav II Adolf, was leading the campaign overseas against
Poland. However, it took only one day for a letter to reach him on the twelfth of August, 1628, and inform him that his ship, the Vasa, had capsized. Thus, he could also have been informed of the stability tests prior to the sailing. Since the sailing was on the tenth of August and the King’s original deadline was the 25th of July, the ship was already delayed. What difference would one more day have made? (The similarities with the Challenger again come to mind.) While it can be said that the proximate cause of the disaster was the decision for the Vasa to set sail in spite of the negative results of the stability tests, the cause which underlay this decision was the absence of a safety first priority and hence a disregard for the lives of the crew. Underlying the lack of a safety first imperative was an ethical disregard for the potential loss of human life. The decision not to heed safety requirements can be said to be an unethical decision. Therefore, the underlying cause of the Vasa disaster was a lack of ethics.
THE QUESTION OF BALLAST

In the course of the inquiry, the interrogator stated that the shipbuilder (Hein Jacobsson) had said that if he had been informed that the ship was unstable, then he would have seen to it that it was loaded down one foot deeper. Why had he not been informed? Was Admiral Fleming too concerned about the military consideration that, if more ballast were added, the gun ports would have been too low to be effective? Is this why he did not choose to inform the shipbuilder? Why had the shipbuilder not performed his own stability test? Why had he not asked to be present at the stability test that he knew would be performed? Why had he not inquired as to the results of the test when it was completed? The Vasa had a limited space for ballast. She was so heavily built and armed that the available weight margin for the ballast was ‘... clearly insufficient to give her the much needed weight stability’.5 According to Borgenstam, in the section ‘Why the Vasa Capsized’, of the thirteen reasons cited, ballast is listed as the tenth. Borgenstam stated that about twice as much ballast as was carried was needed but that
amount could not have been accommodated and would have brought the gun ports too close to the water.6 Reasoning after the fact, Borgenstam concluded that a total of 130 additional tons of ballast was needed.7 On the other hand, while this view seems most plausible, Admiral Klas Fleming, whose view must have been very influential at the time, argued that less ballast was needed, not more.8 Of course, Admiral Fleming’s view may well have been influenced by purely military considerations, not nautical ones. In his exchange with Matsson, he stated: ‘You are carrying too much ballast; the gunports are too close to the water!’9 No one knew how much weight was needed for ballast, as calculations of that sort were not possible in the 17th century. However, by reasoning backwards from the failure of the stability test, it would have been obvious that more ballast was needed, and it could have been added until the ship passed the stability tests. Thus, it was not the lack of precise measurements that was really the problem. If the ship could not pass the stability tests, it should not have been allowed to sail. If the added ballast had brought the gun ports too low, the ship would have had to be brought back into port and not approved as a military vessel until some adjustments were made. The most obvious rectification would have been a considerable reduction in the top weight. Most 17th century warships had such low form stability that they were entirely dependent for weight stability on ballasting.10
THE WIND PRESSURE ON THE SAILS

There was very little wind. At the main inquiry, Lieutenant Petter Gierdsson testified that, ‘The weather was not strong enough to pull out the sheets ...’11 However, as Borgenstam pointed out, the process was probably quick and dynamic, as the squall came up suddenly.12 (Again one thinks of the Challenger and the wind shear!) Even though the wind was light, the low form stability of 17th century warships made them very sensitive to the position of the center of gravity. Even a small change in the center of gravity, such as might be occasioned by the sudden onset of the wind, would completely change their stability characteristics.13 This was shown in
the earlier stability tests where 30 sailors running side to side three times nearly capsized the Vasa.
THE VASA’S CENTER OF GRAVITY

During the inquiry the judge asked the shipbuilder Johan Isbrandsson ‘why the superstructure was heavier than the lower part?’14 In the analysis attempted after the inquiries (a most interesting procedure which might benefit inquiries in the present age), a number of those present testified that the ship was heavier above than below and that it could not carry the superstructure. By reasoning backwards from today’s calculations, Borgenstam concluded that if the center of gravity of the Vasa had been only about 5-10 centimeters lower, she would not have capsized in the harbour.
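The shape of Borgenstam’s backward reasoning can be sketched with the standard naval-architecture measure of initial stability, the metacentric height GM = KM − KG: a ship is stable upright only if the metacentre (KM, fixed by hull form) lies above the centre of gravity (KG). The figures below are invented purely for illustration; they are not the Vasa’s actual measurements, which are unknown.

```python
# Illustrative sketch of initial ship stability.  A ship is stable in the
# upright position only if its metacentric height GM = KM - KG is positive.
# All figures here are hypothetical, chosen only to show how a shift of a
# few centimetres in the centre of gravity can flip a ship from unstable
# to stable.

KM = 6.00  # assumed height of the metacentre above the keel, in metres


def metacentric_height(kg: float) -> float:
    """Initial metacentric height (m) for a given centre of gravity KG (m)."""
    return KM - kg


for kg in (6.05, 6.00, 5.95, 5.90):
    gm = metacentric_height(kg)
    verdict = "stable" if gm > 0 else "unstable"
    print(f"KG = {kg:.2f} m  ->  GM = {gm:+.2f} m  ({verdict})")
```

On these invented numbers, lowering the centre of gravity by ten centimetres (from 6.00 m to 5.90 m) turns a marginally unstable ship into a stable one, which is exactly the form of Borgenstam’s 5-10 centimeter conclusion.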
WHY THE VASA CAPSIZED

Borgenstam offers thirteen reasons why the Vasa capsized. In none of the thirteen reasons which are given is a lack of ethics cited as a consideration. While it might sound strange to posit the lack of something as a cause, it is no more strange than it is to say that malnutrition, or a lack of food, is a cause of death. It would be of interest to take note of the morally relevant considerations which lie at the heart of most of the reasons which are given by Borgenstam. The first reason given (these are not in any order of priority) is that originally the Vasa had been conceived of as a small ship and the plans were altered so that she was completed as a large ship.15 The new dimensions did not match the timber already cut.16 What is missing in Borgenstam’s analysis is that the shipbuilder should have informed the King of this problem at once and, if the problem compromised the safety of the vessel, the King also should have been so informed. In the part of the book, WHY VASA CAPSIZED, written by Anders Sandström and entitled ‘The Vasa and the King’s Specifications’, what is missing in Borgenstam’s analysis is supplied.
Sandström reports that the shipbuilder objected that the timber did not match the King’s specifications. The King replied, reasonably enough, that if the ships could not be built according to the specifications, the contract should be set up such that the larger ship would be completed first.17 Sandström further mentions that the King was influenced in his decision by the Domesnäs disaster. (Borgenstam earlier made reference to this disaster, in which ten Navy ships were lost in a storm in September of the same year.)18 Both of these points seem to indicate that the King was not as mad as he would have appeared if one judges from the ornateness of the adorning art work, which would appear to have made the Vasa a strange choice for a warship. His concern about the Navy shows a responsible mind at work. That he was agreeable to modifying his earlier decision to follow certain specifications when it became obvious that the material was unsuitable shows an accommodating mind. Why then was there such hesitation to approach him when the Vasa failed its stability tests? The second reason given is that the production of the Vasa was delayed and the work on the Vasa, in the end, was carried out in great haste. While this, too, appears to be a technical reason, it also veils a morally relevant consideration at its core. Upon deeper inspection, if a shipbuilder approves of building a ship in great haste, he again fails to take the future safety of the crew into consideration. The hasty construction is the result of a lack of ethical considerations. The hasty construction, while a secondary cause, is itself an effect. It is not a first cause. If a safety first ethic had been in force, hasty construction would not have been allowed. The third reason, which is often cited as a key factor, is that the Vasa had been originally designed to carry one gun deck but in the end was outfitted with two.
While this certainly appears to be a strong proximate cause, again it is the effect of a lack of an ethical prioritization. Surely, it would have been known by the shipbuilder that the ship’s design did not permit two gun decks. And, while attempts were made to strengthen the hull, this did little to increase the stability of the ship. Once again, a lack of ethics ruled the situation. The fourth reason given was that the Vasa was conceived of as an experimental ship aiming at a maximum of armament and thus little
attention was given to factors such as stability. This reason implicitly exposes the lack of a safety first priority. If the first priority is military experimentation, then the crew should certainly have been informed of the unsafe condition of the vessel. By not informing the crew, no consideration is given to the possible loss of the crew once the ship set out to sea. (Once more one thinks of the astronauts and the civilians on board the Challenger who were not informed of the dangerous condition of the O-rings and the hazard this posed for their lives.) While the ship fortunately capsized while it was still in the harbor, and hence all the crew was saved, that would not have been possible if the ship had made it further out to the open sea. Had the crew been lost at sea, they would have died as hapless victims of a military experiment. The analogy with the Challenger disaster, which was to occur three hundred and fifty-eight years later, is not lost. The fifth reason does not appear to possess any obvious lack of ethical considerations at its source. The fifth reason given is that Henrik Hybertsson became seriously ill and died a year before the Vasa was completed. As a result, the work had to be handed over to his assistant, Hein Jacobsson. Thus, there was a discontinuity in management. The sixth reason cited was the change in armament plan. This appears to repeat the third reason, although more detail is specified. One of the details is that from the beginning the first plan specified an armament heavier than the hull could carry. This was not a change in the armament plan, as it was part of the original plan. However, this is not the main point. The main point is that this, too, is at bottom the result of a lack of an ethical priority. If the armaments were known in advance to be too heavy for the hull, it was obvious from the beginning that a design deficient ship was being built.
It was the ethical responsibility of the shipbuilder to communicate this to the King. Safety was being compromised in the very design of the ship. Once again, one considers the choice of the O-ring design in the case of the Challenger. The seventh reason given was that the hull plus the armaments was much heavier than was customary at the time. The hull was also of exceptionally generous dimensions. Again, these seemingly technical considerations mask a more insidious cause. As it was known in advance that the hull was of exceptionally generous dimensions and
added its weight to the weight of the armament, ethical considerations were ignored. To ignore the future safety of the crew which would set out in an unsafe vessel was to be ethically irresponsible. An ethically responsible decision would have been not to build the hull in such a fashion. The prioritization of ethical decision making would have ruled out the construction of such a hull in the first place. The size and weight of the hull is a secondary cause, not a primary cause. The primary cause was the lack of a safety first priority, which in turn was due to the lack of a moral imperative. The eighth reason given was that there were no known methods for calculating stability. In the opinion of the present author, this reason is insufficient. While there were no advanced means such as would be available now, there certainly were ways of estimating stability, which were actually carried out at the time. If the lessons of those tests that were available had been heeded, the ship would not have been permitted to sail. Thus, it was not the lack of sophisticated engineering calculations that was a reason for the disaster. It was the failure to follow the known means of estimating stability that were practiced at the time. The failure to heed the lessons of the failed stability tests was due to the lack of an ethical imperative. The ninth reason given was that no methods for controlling the weight were practiced. This reason veils its morally relevant cause. That no methods for controlling the weight were practiced is an effect, although it became a secondary cause. The primary cause was that there was no overriding ethical priority. If there had been an overriding ethical priority, then weight control would have been practiced to ensure that safety standards were being met. In the absence of a safety first priority there was insufficient reason to practice weight control. The tenth reason given was the insufficient ballast.
While this reason has been explored above, it can be reiterated that the lack of ballast conceals a deeper reason as its cause. The lack of ballast is the result of a lack of a safety first prioritization. Of course, it can be argued that all of the reasons feed on one another such that no one reason stands by itself. The lack of sufficient ballast might in the end be traced back to the original design, which did not take into account two gun batteries. But what is important is that at every design stage and at
every decision, morally relevant considerations were suppressed. Thus, in the end, it was the lack of an ethical consciousness or an ethical sensibility that was the true and fundamental cause of the Vasa disaster. Of course, it can also be said that there was a failure to warn the King of the problem, and thus a fundamental failure of communication was present. There was foreknowledge of the disaster that would ensue. But that warning was not communicated to the person in charge. There was most obviously a lack of the will to communicate. The eleventh reason given was that the Vasa was allowed to sail even though the stability test carried out in the presence of the Admiral indicated that she was too weak and unstable. While here for the first time it would appear that Borgenstam is entertaining a lack of ethics as a cause of the disaster, he does not make this inference. He argues that, ‘The reason for this was surely shortage of time (she was already delayed, and the King had put great pressure on all responsible for her quick completion). Another reason for this was that nobody seemed able to suggest any way of curing the lack of stiffness.’19 While no one could deny the felt pressure to meet the King’s deadline, in order to meet the schedule, ethical considerations had to be sacrificed. In other words, it was not only a case of needing to finish a project on time; it was also a case in which an unethical decision had to be made in order to complete the project on time. If there had been a moral imperative, the tests would have been heeded regardless of the King’s displeasure. As pointed out above, the sailing of the vessel was already past the deadline. What difference would the delay of one day have made in notifying the King of the failure of the stability test? Of course, there is another reason that may be hidden here.
If the King had been so notified, he might well have asked why such a major problem had gone undetected and/or unreported until the very last moment. Fear of authority may well have been a decisive factor in not informing the King earlier. Perhaps it is not surprising that the writer of ‘The Emperor’s New Clothes’ was a Scandinavian, Hans Christian Andersen. Was King Gustav Adolf his model? But what is lacking then is the education of the person in authority in the primacy of the moral imperative. The reason offered, that no one could suggest any way of curing the lack of stiffness, is not a sufficient reason to permit the vessel to sail.
The twelfth reason offered appears to be due to a weakness in management rather than an ethical weakness. The twelfth reason was that there was no coordination between those responsible for the building of the hull, the decoration, the rigging, the ballast, and the armament. A sound management structure which offered both formal and informal channels for horizontal communication would probably have solved this problem. Of course, this is not strictly speaking a pure management weakness, since a lack of communicating the choice of armament, for example, to those building the hull would also possess ethical implications. The thirteenth reason offered was that the Vasa’s center of gravity was too high, such that the sudden squall of wind of eight knots was sufficient to make her capsize. However, as the authors implicitly recognize, this in a sense was a secondary cause, since the usual way of avoiding it would have been to increase the ballast, which would have brought the gun ports down too low to be effective. As argued above, the lack of ballast is itself a secondary cause, as the primary cause was the absence of an ethical sensibility which would have considered the lives of the crew above all else. At the end of his section entitled ‘The Vasa and the King’s Specifications’, Anders Sandström brings up a vexing thought. He mentions that it was brought out during the inquiry that, during the discussion of the construction of the Vasa, her captain, Söfring Hansson, said, ‘I fear that the same thing will happen to the ship which Hein Jacobsson is now building, for it is being built in the same way’. Of course it is only a hypothesis, though a reasonable one, that the ship to which he was referring was the Nyckeln, which was completed in the Naval Yard in 1630. But the Nyckeln served without incident in the Navy for many years. In the last sentence of the book, Sandström states that the Nyckeln was more fortunate than the Vasa.
But this analysis is insufficient. If the sister ship of the Vasa was indeed the Nyckeln, and it was built in the same fashion, then if it did not capsize, the implications are puzzling indeed. It cannot simply be that it was more fortunate. It would mean either that the entire previous analysis is mistaken or that some important changes were made in the construction of the Nyckeln. Since it is not likely that the previous analysis is wholly mistaken, it must follow that some significant alterations were made during the construction
of the Nyckeln. On this point, one must await further research on the details of the construction of the Nyckeln.
CONCLUSIONS

Though the Vasa was, compared to the present time, a construction of low technology, the primary causes of its capsizing are not different from the causes of the more recent disasters studied, such as the Challenger disaster or the disaster on Mount Erebus. In the case of the Vasa, information was available at the time that could have been communicated to the King. Had such information been communicated, and had the King been willing to act on such information, the ship could have been saved. The fate of the Vasa was predictable and was not unforeseen. While no lives were lost, this was due to the fact that the ship capsized before it left the harbor. That the King did not receive warnings in advance was certainly a key element in what occurred. Thus a will to communicate was definitely absent in the case of the Vasa. In addition to the lack of communication, there was an absence of a safety first prioritization that would have ensured that the King received the information that the Vasa was unsafe. Thus, behind the lack of communication was a lack of ethical priorities. One must look to the subsequent disasters to be studied to see if a pattern has already begun to emerge.
NOTES

1. Curt Borgenstam and Anders Sandström, WHY VASA CAPSIZED, trans. by Curt Borgenstam and Klas Melmerson, Stockholm: Vasa Museum, 1995.
2. Ibid., p. 24.
3. Ibid., p. 31.
4. Ibid., p. 31.
5. Ibid., p. 43.
6. Ibid., p. 56.
7. Ibid., p. 55.
8. Ibid., p. 51.
9. Ibid., p. 51.
10. Ibid., p. 43. It should be noted that one of the authors, Captain Curt Borgenstam, taught Warship Design to the Marine Engineering Cadets at the Royal Institute of Technology in Stockholm.
11. Ibid., p. 45.
12. Ibid., p. 45.
13. Ibid., p. 43.
14. Ibid., p. 49.
15. Ibid., p. 57.
16. Ibid., p. 21.
17. Anders Sandström, WHY VASA CAPSIZED, p. 74.
18. Curt Borgenstam, WHY VASA CAPSIZED, p. 21.
19. Ibid., p. 58.
CHAPTER 6

THE TITANIC DISASTER

METAPHYSICAL BELIEF SYSTEMS

It is apposite that this volume include the case of the most famous of all management disasters, that of the Titanic. The case of the Titanic well illustrates all of the weak metaphysical and ethical foundations that are the basis of the disasters studied in the course of this volume. One epistemological thesis that is well illustrated in the case of the Titanic is the curious mixture of an attitude of invincibility, that no disaster could befall the Titanic, and the strange accompanying bedfellow belief that disasters are inevitable, belong to the realm of Fate, and are in any case beyond the reach of human prevention. It is an odd conjunction of belief systems, but upon analysis it is perhaps not as odd as first appears to the eye. In a way, one could say that these belief systems complement and strengthen each other. Both beliefs are absolutes. The belief that one’s own system is invincible is an absolute. This absolute can survive alongside the absolute that if something did happen it would be impossible to prevent it. That one’s own system is invincible gives credence to the accompanying belief that this kind of invincibility can only be threatened by that which is beyond control. It is a kind of hubris that is akin to the ancient Greek hubris that no harm can befall one except that, of course, which is fated. By the same token, the notion that disasters are inevitable is a belief that strengthens the belief that one’s particular system is invincible. The inevitability of disaster is an inevitability that is so inevitable that it has no relevance. The invincibility thesis was well expressed by Captain Edward Smith: ‘I cannot imagine any condition which would cause a ship to founder. I cannot conceive of any vital disaster happening to this vessel. Modern shipbuilding has gone beyond that.’1 The disaster, similar to the Challenger disaster and the disaster on Mt.
Erebus, revealed two
complementary belief systems. One belief system was “nothing can happen to us”. Just as NASA operated from an invincibility thesis and Air New Zealand was famous for its slogan ‘Nobody Does it Better’, the Titanic was perceived of and even spoken about as unsinkable. The problem with this belief system is that it created an atmosphere of complacency. Precautions were not taken which could have prevented the collision or minimized the consequence of its effect. If an attitude of finitude and vulnerability had replaced one of invincibility, it would have resulted in greater care being taken when navigating in iceberg laden waters. Speed would have been reduced. Lifeboat drills would not have been ignored. Warning messages would have been heeded. A change in course to avoid the icebergs would most likely have been decided upon. All of these actions were not taken because of an attitude of heedlessness. Heedlessness or recklessness is a direct consequence of a belief in invincibility. It is only if one believes oneself to be invincible that one is mindless of dangers. While it seems self-contradictory, at the same time that one maintains the invincibility thesis, one maintains the thesis that ‘accidents will happen’ and when they do, ‘accidents are inevitable and thus unpreventable’; it is the very unpreventability of accidents that allows the thesis ‘accidents will happen’ to coexist with the thesis of invincibility. One is invincible under all normal conditions. If a condition is abnormal, then it is by definition outside of the boundaries of invincibility and therefore does not need to be taken into consideration. For example, of course the Titanic would be vulnerable to being sunk by a torpedo. But since it was not wartime, such a threat would not be considered one that one needed to be concerned about.
And, if it should occur, such an occurrence would be mentally filed under the concept of inevitable accidents, a thesis which can happily coexist with the thesis of invincibility. In fact, the twin theses are wedded together. For, when an “accident” happens and the invincible proves vulnerable, it is proof that ‘accidents will happen’. It is almost as if the self-image of invincibility were that of Zeus, but even Zeus was subject to the laws of Fate. If, on the other hand, the invincibility thesis is replaced with a vulnerability thesis, there is no great need to have recourse to the thesis of ‘accidents will happen’ when something goes wrong. When something does go wrong, it is expected that it will since one has already subscribed to
the vulnerability thesis. However, since one knows that one is vulnerable, one at the same time will take all possible precautions to prevent disasters and to minimize the consequences of their effects when they occur. It is important to note that both theses are abrogations of responsibility for what happens, and this is the ring that weds the theses together. If ‘nothing can happen to us’ is held, it follows that one is not responsible for what happens since one is protected, as it were, by the magic spell of invincibility. If an accident does occur, similarly one is not responsible for the occurrence of the accident since accidents by definition are random, inevitable, and unpredictable events. One can hold the two theses together because both theses operate via forces that are beyond the scope of human responsibility. The two theses are in a way two sides of the same coin. No responsibility is taken for either side of the coin. One simply flips the coin. If one is invincible, there is no need for precautions. If one suffers an unpreventable accident, there is also nothing that could have been done to avert it. While the theses appear on the surface to contradict each other, and logically are self-contradictory, there is no real self-contradiction between the two theses (which is why they can be maintained together) since invincibility is viable only during the normal course of events and not viable during inevitable accidents, and inevitable accidents, by definition not taking place during the normal course of events, are not factored into the original definition of invincibility. In the opinion of the present author, what is most notable about the Titanic disaster is that, exactly like all of the disasters studied in this volume, it could have been prevented. It was not an inevitable accident. And moreover, even had it occurred, many of the lives lost could have been saved.
There were many warnings of the imminent danger that the Titanic was in, but none of the warnings were heeded. A lack of respectful and thus ethical communication was perhaps the leading cause of the Titanic disaster. The Titanic represents a graphic illustration of how powerful metaphysical beliefs are. It can only have been the belief in the Titanic’s invincibility that overrode the need to communicate the messages to the Captain, together with the lack of ethical sensibility or moral imagination to foresee what the consequences would be if such messages were left unheeded. There is a
multi-causal analysis of the reasons behind the Titanic disaster to follow. But behind all the contributing causes, it can only have been the power of metaphysical belief that stifled the moral imagination. Of course, if safety had been the overriding priority of the ship, such an ethical consciousness would have overridden the metaphysical belief system. But metaphysics, not ethics, was the dominant force.
LOSS OF LIFE

According to Discovery, there were more than 2,000 passengers and crew who set out (2,201 persons on board).2 According to the American report, there were 2,223 persons on board.3 According to the British report, 1,490 lives were lost. According to the American report4, 1,517 lives were lost and 706 were saved.5 (Other reports mention the figure of 705.) According to an April 12, 1912 article in the Engineering News, W. W. Jeffries, General Passenger Agent of the White Star Line, counted the number of lives lost at 1,442.6 According to Discovery, 1,523 men, women, and children lost their lives when the “Titanic” sank.7 While in general most of the information in this chapter is based on the British Board of Trade Inquiry, “Report on the Loss of the Titanic”, observations from other sources, including the American report, are included.
REPORT OF THE COURT (BRITISH REPORT)

THE COLLISION

Three days into her maiden voyage, Captain E. J. Smith received several ice warnings from other ships. He altered his course further South to avoid danger but did not reduce his speed. At 11:40 PM Sunday, lookouts high in the crow’s nest spotted an iceberg and rang a warning bell three times to the Bridge. The ship turned hard a’port but it was too late.8 At a little before 11:40, one of the look-outs in the crow’s-nest struck three blows on the gong, which was the accepted warning for something ahead, following this immediately
afterwards by a telephone message to the bridge, “Iceberg straight ahead”.9 Mr. Murdoch, the officer of the watch, gave the order “Hard a-starboard,” and immediately telegraphed down to the engine-room “Stop. Full speed astern”. But it was too late. At 11:40 p.m. on 14th April, 1912 she struck an iceberg and at 2:20 a.m. on the next day, she foundered.
CAUSES OF THE DISASTER

Right Honourable Lord Mersey, Wreck Commissioner, stated: ‘The Court, having carefully enquired into the circumstances of the above mentioned shipping casualty, finds, for the reasons appearing in the Annex hereto, that the loss of the said ship was due to collision with an iceberg, brought about by the excessive speed at which the ship was being navigated.’10
WARNINGS

At 9 a.m. (“Titanic” time) on that day a wireless message from the s.s. “Caronia” was received by Captain Smith. It was as follows: “Captain, ‘Titanic’ - West-bound steamers report bergs growlers and field ice in 42 degrees N. from 49 degrees to 51 degrees W., 12th April ...” This message refers to bergs, growlers (icebergs of a small mass), and field ice (frozen sea water floating in a much looser form than pack ice) as sighted on the 12th of April, at least 48 hours before the time of collision. Captain Smith acknowledged the receipt of the message. At that time the “Titanic’s” position was about lat. 43
degrees 35 minutes N. and long. 43 degrees 50 minutes W. At 1:42 p.m. a wireless message from the s.s. “Baltic” was received by Captain Smith. An extract of this message was: “Greek steamer ‘Athenai’ reports passing icebergs and large quantities of ice today in lat. 41 degrees 51 minutes N., long. 49 degrees 52 minutes W....” Captain Smith acknowledged the receipt of this message also. At the time the “Titanic’s” position was about 42 degrees 35 minutes N., 45 degrees 50 minutes W. Mr. Ismay, the Chairman and Managing Director of the White Star Line, was on board the “Titanic” and it appeared that the Master handed the Baltic’s message to Mr. Ismay almost immediately after it was received. About 1:45 p.m. (“Titanic” time) on the 14th, a message was sent from the German steamer “Amerika” to the Hydrographic Office in Washington: “ ‘Amerika’ passed two large icebergs in 41 degrees 27 minutes N., 50 degrees 8 minutes W., on the 14th April.” This was a position south of the point of the “Titanic’s” disaster. It was passed to the “Titanic”. According to the report, “Being a message affecting navigation, it should in the ordinary course have been taken to the bridge. So far as can be ascertained, it was never heard by anyone on board the “Titanic” outside the Marconi room. There were two Marconi operators in the Marconi room, namely, Phillips, who perished, and Bride, who survived and gave evidence. Bride did not receive the “Amerika” message nor did Phillips mention it to him, though the two had much conversation together after it had been received. I am of opinion that when this message reached the Marconi room it was put aside by Phillips to wait until the “Titanic” would be within the call of Cape Race (at about 8 or 8:30 p.m.), and that it was never handed to any officer of the “Titanic”.”11 However, here, the conclusions drawn by Mr. Mersey appear unfounded.
It seems most unlikely, if Bride and Phillips had “much conversation together after it [the message] had been received”, that
Bride made no mention of such a significant message. In fact, presumably, since it is only Bride who survived, one has only Bride’s word on it that Phillips received the message and that Phillips failed to communicate the message to Bride. At 7:30 p.m. a fourth message was received and said by the Marconi operator Bride to have been delivered to the bridge. This message was from the s.s. “Californian” to the s.s. “Antillian,” but was picked up by the “Titanic.” It was as follows: “To Captain, ‘Antillian’, 6:30 p.m. apparent ship’s time; lat. 42 degrees 3 minutes N., long. 49 degrees 9 minutes W. Three large bergs five miles to southward of us. Regards, Lord.” According to the report, incredibly enough, “Bride does not remember to what officer he delivered this message”.12 Is it possible that Bride did not deliver this message at all? There was a fifth message received in the Marconi room of the “Titanic” at 9:40 p.m. This was from a steamer called the “Mesaba”: “From “Mesaba” to “Titanic” and all east-bound ships. Ice report in lat. 42 degrees N. to 41 degrees 25 minutes N., long. 49 degrees to long. 50 degrees 30 minutes W. Saw much heavy pack ice and great number of large icebergs. Also field ice. Weather good, clear.” According to the report, “This message clearly indicated the presence of ice in the immediate vicinity of the “Titanic”, and if it had reached the bridge would perhaps have affected the navigation of the vessel. Unfortunately, it does not appear to have been delivered to the Master or to any of the officers”.13 The explanation in the report, incredibly enough, was that the Marconi operator was very busy from 8 p.m. onward transmitting messages for passengers on board the “Titanic” and the probability is that “he failed to grasp the significance and importance of the message ...”14 According to Mr. Mersey, “I am satisfied it [this message] was not received by him [Captain Smith]”.
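The positional content of these warnings can be checked with simple arithmetic. The following is a back-of-the-envelope sketch, not part of either inquiry: it takes the “Titanic’s” position when the “Caronia” message arrived (about 43 degrees 35 minutes N., 43 degrees 50 minutes W.) and the nearest corner of the reported ice region (42 degrees N., 49 degrees W.), and computes the great-circle distance between them, assuming a spherical Earth and ignoring the ship’s actual track.

```python
import math


def great_circle_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles between two points given in
    decimal degrees (haversine formula, spherical Earth)."""
    r_nm = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))


# Positions quoted in the report (converted from degrees and minutes):
titanic_lat, titanic_lon = 43 + 35 / 60, -(43 + 50 / 60)  # at 9 a.m.
ice_lat, ice_lon = 42.0, -49.0  # nearest corner of the "Caronia" ice region

dist = great_circle_nm(titanic_lat, titanic_lon, ice_lat, ice_lon)
hours_at_22_knots = dist / 22.0
print(f"~{dist:.0f} nautical miles, about {hours_at_22_knots:.0f} hours at 22 knots")
```

On these figures the reported ice lay roughly 250 nautical miles ahead, around eleven hours’ steaming at the estimated 22 knots: the 9 a.m. warning pointed at precisely the waters the ship would enter that very night.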
According to the American report, there was no general discussion of these warnings among the officers, no conference called to discuss
these warnings, and no heed was given to them.15 According to the British report, Mr. Lightoller, the second officer, reported that both he and the Master knew that the danger of meeting ice still existed. Mr. Lightoller testified that the Master showed him the “Caronia” message about 12:45 p.m. and before going off watch had reported that “he made a rough calculation in his head which satisfied him that the “Titanic” would not reach the position mentioned in the message until he came on watch again at 6 p.m.”16 Mr. Lightoller testified that at 6 p.m. he does not recall having been told about the “Baltic” message which had been received at 1:42 p.m.17 The extent of the non-reporting of warning messages is extreme. Mr. Lightoller testified that he asked Mr. Moody, the sixth officer (dead), at what time they should reach the vicinity of the ice and says that he thinks that Mr. Moody reported about 11 o’clock.18 If this testimony is correct, then the second officer of the Titanic knew full well the approximately correct time to anticipate the ice collision. However, Mr. Lightoller testified further that this time did not agree with his own mental calculations, which showed an even earlier anticipation time (of 9:30 p.m., which in his testimony he even changed to between 7 and 8 o’clock). In any event the point is that both Mr. Moody and Mr. Lightoller expected that they would reach the vicinity of ice before midnight, which is when it was met. However, when questioned as to whether he was afraid that they might run across a growler (a low-lying berg), he answered that he judged that they would see it at a distance of probably two miles.19 At 9:30 p.m., Mr. Lightoller instructed the crew on the crow’s nest “to keep a sharp look-out for ice, particularly small ice and growlers”.20 At 10 o’clock, Mr. Lightoller handed over the watch to Mr. Murdoch, the first officer (dead), telling him that “we might be up around the ice any time now”. That Mr.
Murdoch knew of the danger of meeting ice appears from the evidence of Hemming, a lamp trimmer, who says that about 7:15 p.m., Mr. Murdoch told him that “... we are in the vicinity of ice”.
WARNINGS TO PASSENGERS

According to the American report, no general alarm was sounded and no systematic warning was given the passengers.21 Mrs. George
Stone, a first-class passenger reported, “Not a single warning was given in the part of the ship in which I was ... We could have died like rats in a trap for all the warning we received”.22
SPEED OF THE SHIP

"The entire passage had been made at high speed ... and this speed was never reduced until the collision was unavoidable."23 The fourth officer estimated the "Titanic's" speed at 22 knots from 7:30 p.m. to the time of the collision.
WEATHER

The weather from 6 p.m. onwards to the time of the collision was perfectly clear and fine. The stars were out. There was no moon to aid visibility. There was also a drop in temperature of 10 deg. in slightly less than two hours, and though this is not in itself an indication of ice, according to Sir Ernest Shackleton: "if there was no wind and the temperature fell abnormally for the time of the year, I would consider that I was approaching an area which might have ice in it".24
CAUSES OF DEATHS

According to Titanic, 1994, one could not have lasted more than 20-30 minutes in the ice-cold water, as it was 28 degrees F., or 4 degrees below freezing. Most of the "Titanic's" casualties died from exposure to the ice-cold water, not from drowning.
HOW ELSE COULD THE TITANIC DISASTER HAVE BEEN PREVENTED?

According to an April, 1912 issue of the Scientific American, "Had the "Titanic" been running under a slow bell, she would probably have been afloat today." [emphasis in original]25 According to the Scientific American, half speed, as she should have been travelling, would have saved the ship.26 According to Unger, "Had the lookouts been equipped with binoculars, they would almost certainly have seen the iceberg in time for the ship to have avoided it."27 The British Report takes exception to this and states that the provision of binoculars would have made no difference.
THE SAILING ORDERS

Besides the book of Ship's Rules, every master when first appointed to command a ship is addressed by special letter from the Company, of which the following passage is an extract: "You are to dismiss all idea of competitive passages with other vessels and to concentrate your attention upon a cautious, prudent and ever watchful system of navigation, which shall lose time or suffer any other temporary inconvenience rather than incur the slightest risk which can be avoided". Mr. Sanderson, one of the directors, says in his evidence with reference to the above letter: "We never fail to tell them in handing them these letters that we do not wish them to take it as a mere matter of form; that we wish them to read these letters, and to write an acknowledgement to us that they have read them, and that they will be influenced by what we have said in those letters".28 The Master could have turned southward instead of west; or he could have reduced speed. He did neither. According to the record, the reason why the Master continued on his course at his rate of speed is that for the past twenty-five years, "the practice of liners using this track when in the vicinity of ice at night had been in clear weather to keep the course, to maintain the speed and to trust to a sharp look-out to enable them to avoid the danger. This practice, it
was said, had been justified by experience, no casualties having resulted from it".29 As incredible as such a justification is, perhaps it does illustrate the power of a belief system to rule the human mind: if a course of action has proved reliable in the past, so the thinking goes, it may be considered justified. It stretches credulity, however, no matter how powerful this belief system was, that it could be said to override a safety-first priority ethic. It would seem that such a belief system could only flourish in an ethical vacuum. As such, the previous ships can be said to have been saved from disaster by chance and not by design. After the Titanic disaster, the routing was altered, which indicates that one counter-example was sufficient to set such reasoning aside. According to the report, the root of the rationale of following the course of past experience was "... probably to be found in competition and in the desire of the public for quick passages rather than in the judgment of navigators".30 This conclusion is more consistent with the absence of a safety ethic than with its presence. In the inquiry, several suggestions were made as to possible influences on Captain Smith. One was that he was influenced by the presence of Mr. Ismay on board and also by a conversation, of which he may have had knowledge, that transpired between Mr. Ismay and the Chief Engineer at Queenstown concerning the speed of the ship and the consumption of coal. Whereas Mr. Mersey does not credit either of these influences, it is the opinion of the present author that both of these influences may have contributed to the fatal negligence of Captain Smith.
RELEVANT DESIGN FEATURES
RIVETS

When the French oceanographic group IFREMER made their dive to the "Titanic" in 1996, Paul Matthias, president of Polaris Imaging, discovered that the exact cause of the sinking of the "Titanic" was that the rivets on the starboard side of the "Titanic" which held the plates to the hull's cross beams had popped open, creating splits the
size of a hardcover book. (There was no gash, as popularly supposed, resulting from the collision with the iceberg). The question then becomes, why did the rivets pop? While any conclusions require more evidence than is available from a sample of two rivets out of two to three million rivets, the scientific analysis of the two retrieved rivets from the "Titanic" reveals a most significant finding. Paul Foecke, the metallurgist who performed the microscopic examination of the "Titanic's" rivets, discovered that the slag content was a dangerously high 9.3%, a concentration that would have made the rivets extremely brittle and substantially weaker than they would have been with an appropriate amount of slag. (2% slag is added to wrought iron to give it strength.) Further, the slag streaks were supposed to run lengthwise along the rivet but instead made a 90 degree turn at the head end of the rivet. This, according to Foecke, was a major area of weakness.31 Foecke even found a 1906 metallurgical reference book which defined medium-quality wrought iron as containing 2% to 2.5% slag. However, this does not mean that that was the standard at the time. If it were the standard at the time, and if a subsequent examination of a statistically significant number of rivets revealed the same results, one would be forced to draw the conclusion that the choice of materials for the construction of the "Titanic" seriously compromised her safety. Unger, though in effect a spokesman for multi-causality (he does not utilize the term), maintains that "Neither the design nor the construction of the Titanic caused the disaster".32 He does, however, point to a host of other factors: "... the root cause was a lack of understanding of the limitations of the technology on the part of those who should have known better. 
The catastrophe was brought on by a combination of poor judgement (excessive speed, not posting extra lookouts), bad management (inadequate lifeboat capacity, no lifeboat drills, no binoculars for lookouts), and perhaps near criminal malfeasance (on the part of the “California’s” captain). But his reasoning for not including design as one of the multiple factors is shaky. He argues that the sister ship, the Olympic, operated reliably for several decades. But that is only because the Olympic did not hit an iceberg! It should be stressed that according to the report, “... if the “Titanic” had been subdivided in the longitudinal method, instead of in the transverse method only, she would have been able, if damaged as
supposed, to remain afloat ..." (36) [emphasis added] This one piece of information the present author finds astonishing. It should be noted that the "Titanic's" transverse compartments did have watertight bulkheads but, incredibly, no watertight tops. As a result, as each compartment filled up with water, the water spilled over into the adjacent compartment, much in the fashion of compartments in an ice tray which, when overfilled, spill over into the compartments next to them. The wreck commissioner, Mr. Mersey, makes the following comment relevant to naval design: "It seems to me that the Board [Board of Trade] should be empowered to require the production of the designs of all passenger steamers at an early period of their construction and to direct such alterations as may appear to them to be necessary and practicable for the purpose of securing proper watertight sub-division".33
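The slag figures reported in the rivet analysis can be put in rough proportion. The following is a minimal sketch, using only the values quoted above (a two-rivet sample, which, as noted, is not a statistically significant basis for conclusions):

```python
# Slag fractions as quoted in the text (two-rivet sample only).
measured_slag = 9.3             # per cent, retrieved "Titanic" rivets
spec_low, spec_high = 2.0, 2.5  # per cent, 1906 reference range for
                                # medium-quality wrought iron

# How far the measured value exceeds even the upper reference bound.
excess_factor = measured_slag / spec_high
print(f"Measured slag is about {excess_factor:.1f}x the upper reference value")
# about 3.7x
```

Even against the most generous end of the 1906 reference range, the measured concentration is nearly four times what the period literature describes as medium quality.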
THE INADEQUACY OF THE HUMAN ERROR HYPOTHESIS

It is commonplace to attribute disaster to the chance concatenation of a number of human errors. But there is no chance concatenation here. There is a multi-causal explanation because many factors contribute to the final outcome. But all of these factors are symptomatic of a pervasive neglect of safety precautions and thus a poverty of ethical imagination. The human error explanation omits the fact that in each area of a contributing cause, there is an accompanying human responsibility. For example, there is a responsibility for naval design; there is a responsibility for taking action to warn the passengers, and so on. The human error explanation is incomplete and inaccurate, for it emphasizes only the finiteness and fallibility of human beings and does not emphasize the lack of ethics, and the connection between the lack of ethics and the lack of a safety first priority. Once the connection between a high priority for safety and an ethical attitude has been made plain, it is to be hoped that safety will receive more consideration in the future. One advantage of philosophical investigations of what are normally considered the province of either naval architects, lawyers, or navigators is that a strong connection
between a safety priority and ethics can be clearly spelled out. This is why the expression 'safety ethic' is preferred, as it leaves no doubt as to the connection. It is not being maintained that there is no incidence of a disaster occurring which is the result of human error. What is being maintained is that great care must be taken not to conflate the concepts of "human error" and "ethical responsibility". Ethical errors are not the results of miscalculations such as the mistakes one might make by adding up some figures incorrectly. Ethical errors (in the context which is relevant) are the outcome of not placing safety first. Not placing safety first is a lack of ethics, not the result of inevitable human error. It is important that such a distinction be made. For example, in the Final Report of the Estonia, the Committee stated that in the case of the Herald, "... the need for practical and efficient transport seems to have taken priority over safety considerations".34 What can be added to this observation is that the lower priority given to safety considerations is a reflection of an ethical error. Ethical error, as opposed to "inevitable human error", can be altered. Both the "Titanic" and the "Estonia" could have slowed down! Excessive speed was pointed to as a fundamental cause of each one of these disasters.35 In the case of the Estonia, the clam door was pointed to as a preferable design choice (in the language of the present author, ethically superior)36, though it is not mentioned in the conclusions.
LIFEBOATS

According to the report, there had been neither a proper boat drill nor a boat muster. Preventative measures were thus conspicuous by their absence. At the time of the attempt to save the passengers, the ethical priorities were high. Captain Smith, the Master, Mr. Wilde, the Chief Officer, Mr. Murdoch, the First Officer and Mr. Moody, the Sixth Officer, all went down with the ship while performing their duties. The "Titanic" carried lifeboats in a number sufficient to accommodate approximately half of the total number aboard. She carried lifeboats adequate for 1,179 persons (1,176 according to the American report)
out of a total of 2,201 persons aboard. It must be noted that when the last lifeboat was boarded and away, a total of approximately 1,500 people remained stranded on deck, only to go down with the ship. According to the American report, there was a lack of preparation which was obvious at the time of the loading of the lifeboats. As a result of the confusion, inadequate use was made of the available lifeboats. The subcommittee stated that the failure to utilize all lifeboats in their recognized capacity for safety unquestionably resulted in the needless sacrifice of several hundred lives which might otherwise have been saved.37 One is forcibly reminded of the failure to supply "lifeboats" (explosive bolt hatches, parachutes and space pressure suits) which might well have saved the crew and passengers aboard the Challenger.
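The arithmetic of the lifeboat figures cited above makes the point starkly. A minimal sketch using the figures given in the text (2,201 aboard, rated capacity 1,179 per the British report, 712 eventually rescued by the Carpathia):

```python
aboard = 2201         # total persons aboard (British report figure)
boat_capacity = 1179  # rated lifeboat capacity (1,176 per the American report)
rescued = 712         # persons picked up by the "Carpathia"

# Persons for whom no lifeboat seat existed even in principle.
shortfall = aboard - boat_capacity
# Rated seats that nevertheless went unused in the confusion.
unused_seats = boat_capacity - rescued

print(f"No seat available for {shortfall} persons")  # 1022
print(f"Rated seats left unfilled: {unused_seats}")  # 467
```

The roughly 467 unfilled seats correspond to the subcommittee's finding that several hundred lives were needlessly sacrificed through the failure to load the boats to capacity.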
THIRD-CLASS PASSENGERS

According to the British report, 'They were not unfairly treated'.38 The report endeavors to explain the lower percentage of third-class passengers saved in terms of '... the greater reluctance of third class passengers to leave the ship, by their unwillingness to part with their luggage, by the difficulty in getting them up from their quarters, which were at the extreme ends of the ship ...'39 However, it must be noted that the very location of the third-class passengers indicated a lack of ethical considerations in the first place. The British report categorically denies any preferential treatment given to first- and second-class passengers in boarding the lifeboats.40 But 62.46 per cent of the 325 first-class passengers survived, in contrast to 37.94 per cent (25% according to the American report) of the 1,316 third-class passengers, while, of the women in the first-class accommodation, 97.22 per cent were saved, as opposed to 46.66 per cent of those in third class, and of the children, all those travelling first- and second-class were saved, while 65.38 per cent of those travelling third-class were killed. The report cites "the difficulty in getting them [third-class passengers] up from their quarters, which were at the extreme ends of the ship" as a reason for the disproportion. But it answers "No" to the question, "Was the
construction of the vessel and its arrangements such as to make it difficult for any class of passenger ... to take full advantage of any of the existing provisions for safety?"41 It is also the case that, according to certain reports, third-class passengers were held back until all first- and second-class passengers were given time to board the lifeboats.42
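The disproportion in the survival figures quoted above can be made concrete. A minimal sketch converting the chapter's reported rates into approximate head counts (figures as given in the text from the British report; the American report differs on some of them):

```python
# Survival rates by class as quoted from the British report in the text.
classes = {
    "first class": {"aboard": 325,  "survival_pct": 62.46},
    "third class": {"aboard": 1316, "survival_pct": 37.94},
}

for name, d in classes.items():
    # Approximate survivor head count implied by the reported rate.
    survivors = round(d["aboard"] * d["survival_pct"] / 100)
    print(f"{name}: ~{survivors} of {d['aboard']} saved "
          f"({d['survival_pct']} per cent)")
```

On these figures, roughly 203 of 325 first-class passengers survived against roughly 499 of 1,316 in third class, a gap the report attributes to location and reluctance rather than to preferential treatment.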
NEARBY RESCUE POSSIBILITIES
THE RESCUE BY THE S.S. "CARPATHIA"

As soon as the dangerous condition of the ship was realized, messages were sent by the Master's orders to all steamers within reach. The Cunard steamer "Carpathia", 58 miles away, announced that she was coming to the assistance of the "Titanic". Messages were sent out until a few minutes before the foundering of the "Titanic". Eight distress rockets were sent off at intervals and, in addition, Mr. Boxhall used a Morse light in the direction of a ship whose lights he saw at a distance of about five or six miles. Although since the time of the report there has been some dispute over which ship this was, according to the report, "There appears no doubt that the vessel whose lights he saw was the "Californian"".43 On receipt of the "Titanic's" first distress message, the Captain immediately ordered the ship to be turned round and driven at her highest speed (17 1/2 knots) through the dangerous waters in the direction of the "Titanic". He was able to arrive in four hours' time. Eventually, Captain Rostron of the "Carpathia" picked up 712 persons, one of whom died shortly afterwards. Captain Rostron's behavior was exemplary.
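The distance and speed just cited give a quick check on the Carpathia's response. A minimal sketch of the straight-line lower bound (the actual passage of roughly four hours was longer, since she had to maneuver through dangerous ice):

```python
distance_nmi = 58.0  # "Carpathia's" reported distance, in nautical miles
speed_knots = 17.5   # her forced top speed, in knots (nautical miles per hour)

# Idealized straight-line transit time; the real run through ice took longer.
hours = distance_nmi / speed_knots
print(f"Straight-line transit: about {hours:.1f} hours")  # about 3.3 hours
```

That even the idealized figure exceeds three hours underlines why no distant rescuer could have reached the Titanic before she foundered, and why the nearby Californian mattered so much.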
S.S. "CALIFORNIAN"

The evidence from the "Californian" speaks of eight rockets having been seen. The one Marconi operator on board the "Californian" was asleep. The story behind the sleeping Marconi operator, according to
Colonel Gracie's account, is that around 11 p.m., when the Captain of the "Californian" saw a ship approaching from the east, which he was advised was the "Titanic", he ordered this message sent: "We are stopped and surrounded by ice". To this the "Titanic's" wireless operator replied, "Shut up, I am busy. I am working Cape Race". The operator placed priority on sending the private messages of passengers despite the issues of navigation and life and death that were at stake. (Titanic, 1998) At 11:30 p.m., the wireless operator of the "Californian" went to sleep. While it appears to the present author that the sleeping operator displayed an appalling lack of responsibility, it is also appalling that eight distress rockets could have been seen without any action taken. Some of the evidence appears to be in dispute. Ernest Gill, a crewman aboard the "Californian", stated that he saw the lights of a very large steamship about 10 miles away and a white rocket, and thought that it must be a vessel in distress. He wondered "why the devil did they not wake the wireless man up?" He said that the crew were talking about the disregard of the rockets. On the other hand, Captain Lord of the "Californian" scoffed at the story and maintained that he knew nothing of the disaster until the next morning.44 According to Gracie, however, both the British report and the evidence of the American investigation concurred that it was gross negligence that sealed the fate of all who were lost. Gracie was indefatigable in his search for truth. He attended numerous court hearings in the USA and tracked down as many survivors as possible. Largely due to the effects of his exposure to the icy waters of the Atlantic, he succumbed in December of the fateful year of 1912. According to Gracie's account, the Captain and crew watched the lights of the "Titanic" from the deck of their ship and plainly saw the white rockets that were sent up. 
In Gracie's words, "Captain Lord [the Captain of the "Californian"] was completely in possession of the knowledge that he was in proximity to a ship in distress. He could have put himself into immediate communication with us by wireless had he desired information of the name of the ship and the disaster which had befallen it. His indifference is made apparent by his orders to 'go on Morseing', instead of utilizing the more modern method of the inventive genius and gentleman, Mr. Marconi, which eventually saved us all."45 What we do know is that at 6:30 p.m., Lord himself
sent the message that he knew of icebergs in the vicinity. If he had been aware of a ship in distress, why would he not have done everything possible to attempt to communicate with her? At one point in his testimony to the U.S. Senate subcommittee, he states that the Titanic was clearly visible and was only 4 miles away.46 The "Californian" saw a vessel stop at 11:40 p.m. This was the time of the collision of the "Titanic" with the iceberg. The "Californian" saw eight distress rockets between 12:45 and 1:45 a.m. She was only five to six miles away (it has been estimated at no more than ten miles). Why did the "Californian" not come to the rescue of the "Titanic"? According to the report, "... when she ["Californian"] first saw the rockets, [she] could have pushed through the [loose] ice without any serious risk ... Had she done so she might have saved many if not all of the lives that were lost".47 [emphasis added] According to the American report, officers of the "Californian" admit seeing rockets in the general direction of the "Titanic". The Senate subcommittee stated, "In our opinion such conduct, [replying to the distress rockets by signalling from a large white light] whether arising from indifference or gross carelessness, is most reprehensible, and places upon the commander of the "Californian" a grave responsibility ... Had assistance been promptly proffered, or had the wireless operator of the "Californian" remained a few minutes longer at his post on Sunday evening, that ship might have had the proud distinction of rescuing the lives of the passengers and crew of the Titanic".48
FINDINGS OF THE COURT

LOOK-OUT

"... in view of the night being moonless ... it is not considered that the look-out was sufficient. An extra look-out should, under the circumstances, have been placed at the stemhead, and a sharp look-out should have been kept from both sides of the bridge by an officer."49
SPEED

No directions were given to reduce speed.50
RECOMMENDATIONS

Of the recommendations made by the court, two stand out. One refers to decisions regarding naval design; the other to operational decision making. With regard to naval design, the court recommended that a newly appointed Bulkhead Committee investigate the possibility of providing a double skin carried up above the waterline; or a longitudinal, vertical watertight bulkhead on each side of the ship, and/or watertight transverse bulkheads. In addition, the Committee should investigate the practicability of providing decks above the waterline and of increasing the protection given by sub-division such that the ship could remain afloat.51 With regard to operational decision making, the court recommended that when ice is reported in or near the track of the ship, the ship should proceed in the dark hours at moderate speed or alter her course.52 The American report stated that "The openings through deck E were not designed for water-tight closing, as the evidence shows that flooding over deck E contributed largely to the sinking of the vessel".53 These two provisions seem very simple, although the first may be complicated to implement. That neither was adopted seems to imply that a less than high priority was given to safety considerations. This can only mean that ethical thinking was absent. The "solutions" are not unthinkable; they probably were thought about. That they were not taken seriously was due not to the difficulty of thinking them up but to the fact that safety was not given priority in thinking. Ethical considerations were not foremost. Had ethical considerations been foremost, one or both of these two recommendations, one for shipowners and shipbuilders and one for pilots, could have been implemented.
In the American report, the committee recommended that the laws be revised so that no ship might leave an American port unless it carried sufficient lifeboats to accommodate every passenger and every member of the crew. The committee further recommended that passengers and crew be assigned to lifeboats before sailing, that routes to the boats be posted, and that boat assignments be based on the convenience of the location of the boat to the room in question. The committee further recommended that steamships carrying more than 100 passengers be required to carry searchlights, and that a wireless operator be on duty 24 hours per day. While the British report did make recommendations of this nature, they were much weaker, in that they took the form of recommendations to the Board of Trade for its consideration, subject to such qualification as it saw fit, and not recommendations for implementation in law. No recommendations were made regarding the prior assignment of passengers to specific lifeboats. According to an August 15, 1912 article in the Engineering News, neither report strongly assigned responsibility to anyone for the disaster.54 It is the opinion of the present author, however, that both reports did emphasize deficiencies in naval design. The British report, unlike the American report, which seems silent on this issue, did strongly cite the speed of the ship as the prime if not the sole cause of the disaster. Both reports strongly cite the negligence of the captain of the "Californian". 
In the British report, to the question "What was the cause of the loss of the "Titanic", and of the loss of life which thereby ensued or occurred?", the answer given is simply "Collision with an iceberg and the subsequent foundering of the ship".55 On the other hand, at the very beginning of the report, the Wreck Commissioner does state that "The Court, having carefully enquired into the circumstances of the above mentioned shipping casualty [the "Titanic"], finds, for the reasons appearing in the Annex hereto, that the loss of the said ship was due to the collision with an iceberg, brought about by the excessive speed at which the ship was being navigated".56 It is, in the opinion of the present author, not the excessive speed of the ship which is the proximate cause of the disaster, but the general attitude of a lack of a safety ethic which is the initiating cause of the
excessive speed. The lack of a safety ethic is the general underlying cause which manifests itself in not listening to warnings given, not inspiring officers to communicate urgent warnings, not being cautious in dangerous waters, not having enough lifeboats on board for all passengers, and not having the proper design for maximum safety in case of impact. The lack of a safety ethic is in turn influenced by a twin metaphysical belief system: the belief that the Titanic was unsinkable, in this case explicitly articulated as such by its Captain, and the unarticulated metaphysical belief that disasters were in any case unpreventable. Clearly this disaster was preventable and was both foreseen and forewarned. The Captain was certainly negligent, but it is important to note that it was in the area of ethics that he and some members of his crew were negligent.
NOTES

1 Titanic: Fortune & Fate, Catalogue from the Mariners' Museum Exhibition, Beverly McMillan and Stanley Lehrer, The Mariners' Museum, Newport News, Virginia: Simon & Schuster, 1998, p. 16.
2 Titanic: The Investigation Begins; Titanic: The Anatomy of a Disaster, 1997, Discovery Communications, Inc., Videotape.
3 S. Res. 283, "Titanic" Disaster Hearing Before a Subcommittee of the Committee on Commerce of the United States Senate, Sixty-second Congress, Second Session, Directing the Committee on Commerce to Investigate the Causes Leading to the Wreck of the White Star Liner "Titanic", Part 15, Digest of Testimony, Washington: GPO, 1912. These figures are confirmed by the U.S. Senate subcommittee hearing into the Titanic disaster. Cf. Stephen J. Spignesi, The Complete Titanic, From the Ship's Earliest Blueprints to the Epic Film, Secaucus, N.J.: Carol Publishing Group, 1998, pp. 122, 161.
4 S. Res. 283, "Titanic" Disaster Hearing Before a Subcommittee of the Committee on Commerce of the United States Senate, Sixty-second Congress, Second Session, Directing the Committee on Commerce to Investigate the Causes Leading to the Wreck of the White Star Liner "Titanic", Part 15, Digest of Testimony, Washington: GPO, 1912.
5 Spignesi, 1998, p. 122.
6 Spignesi, 1998, p. 204.
7 Titanic: The Investigation Begins; Titanic: The Anatomy of a Disaster, 1997, Discovery Communications, Inc., Videotape.
8 Titanic, Vols. I-IV, New York: Greystone Communications, Inc., A & E Television Networks, Hearst, ABC, NBC, 1994.
9 Report of the Loss of the "Titanic" (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin's Press, 1998, p. 30.
10 Ibid., p. 1.
11 Ibid., p. 27.
12 Ibid., p. 27.
13 Ibid., p. 28.
14 Ibid., p. 28.
15 Spignesi, 1998, p. 123.
16 Report of the Loss of the "Titanic" (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin's Press, 1998, p. 28.
17 Ibid., p. 28.
18 Ibid., p. 28.
19 Ibid., p. 28.
20 Ibid., p. 29.
21 Spignesi, 1998, p. 125.
22 Titanic: Fortune & Fate, Catalogue from the Mariners' Museum Exhibition, Beverly McMillan and Stanley Lehrer, The Mariners' Museum, Newport News, Virginia: Simon & Schuster, 1998, p. 104.
23 Report of the Loss of the "Titanic" (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin's Press, 1998, p. 29.
24 Ibid., p. 29.
25 Spignesi, 1998, p. 192.
26 Spignesi, 1998, p. 193.
27 Stephen H. Unger, Controlling Technology: Ethics and the Responsible Engineer, 2nd Edition, New York: John Wiley & Sons, Inc., 1994, p. 64.
28 Report of the Loss of the "Titanic" (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin's Press, 1998, p. 24.
29 Ibid., p. 30.
30 Ibid., p. 30.
31 Spignesi, 1998, p. 266.
32 Stephen H. Unger, Controlling Technology: Ethics and the Responsible Engineer, 2nd Edition, New York: John Wiley & Sons, Inc., 1994, p. 66.
33 Report of the Loss of the "Titanic" (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin's Press, 1998, p. 61.
34 Final Report of the Estonia, pp. 140-141. [Add in full citation]
35 While speed as a cause is not mentioned in the conclusions, the Final Report of the Estonia mentions that speed was maintained right up until the time the list developed, p. 175. Similar to the "Titanic" - speed was maintained until the time of collision.
36 Final Report of the Estonia, p. 144.
37 Spignesi, 1998, p. 126.
38 Report of the Loss of the "Titanic" (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin's Press, 1998, p. 41. The controversy over the treatment of passengers in different classes calls attention to the importance of being able to consult different official reports of the disaster. One is especially strongly
reminded of the importance of possessing access to different official reports when one considers the disaster on Mt. Erebus.
39 Ibid., p. 40.
40 Ibid., pp. 40-41, 70.
41 Ibid., p. 71.
42 Titanic, Vols. I-IV, New York: Greystone Communications, Inc., A & E Television Networks, Hearst, ABC, NBC, 1994.
43 Report of the Loss of the "Titanic" (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin's Press, 1998, p. 41.
44 Titanic, Vols. I-IV, New York: Greystone Communications, Inc., A & E Television Networks, Hearst, ABC, NBC, 1994.
45 Colonel Archibald Gracie, Titanic, A Survivor's Story, Phoenix Mill-Thrupp-Stroud-Gloucestershire: Sutton Publishing, 1998, pp. 23-24.
46 Spignesi, 1998, p. 148.
47 Report of the Loss of the "Titanic" (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin's Press, 1998, p. 46.
48 Spignesi, 1998, pp. 125-6.
49 Report of the Loss of the "Titanic" (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin's Press, 1998, p. 64.
50 Report of the Loss of the "Titanic" (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin's Press, 1998, p. 64.
51 Ibid., p. 72.
52 Ibid., p. 73.
53 Spignesi, 1998, p. 124.
54 Spignesi, 1998, pp. 206-7. The thesis of multi-responsibility is obvious when one considers the different figures, the action of any one of whom could have changed the course of events: the Captains of the Titanic and the Californian come most forcibly to mind.
55 Report of the Loss of the "Titanic" (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin's Press, 1998, p. 71.
56 Report of the Loss of the "Titanic" (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin's Press, 1998, p. 1.
CHAPTER 7

THE SPACE SHUTTLE CHALLENGER DISASTER

In the case of the launch of the Space Shuttle Challenger, which resulted in the deaths of the five crew members and two civilian passengers aboard, the disaster will be treated as the final event in a linkage of events which led up to this moment. Those moments immediately preceding the launch of the Challenger are indicative of a crisis situation which is already fully actual. The events which precede in time, going back eight years, are indicative of a crisis situation which is dormant. The prevention of the Challenger disaster could have taken place at any time, either during the dormant or during the active stage of the crisis. The end result of the crisis was the disaster event. It is most apposite to begin with a brief synopsis of the basic elements of the launch. After this short synopsis, three stages of the crisis will be treated: (A) the aftermath; (B) the active stage; (C) the dormant stage. The categories under which the disaster will be subsumed are as follows: Words; Decision Making; Management Structure; The Language of Communication; Responsibility; The Will to Communicate. The category of Words will be applied to the aftermath and to the active stage; Decision Making will mainly be treated in the active stage; Management Structure spans both dormant and active stages; The Language of Communication will be treated in the active stage; Responsibility will be treated in the dormant and the active stages; The Will to Communicate will be treated in all three stages. As a result, instead of a chronological sequence of events, the analysis will cut across time, since the key explanatory concepts apply in different (and sometimes multiple) stages. For present purposes, the understanding of the power of the explanatory concepts is more significant than the chronology of events, so that apart from the brief synopsis, the structure of the exposition will follow the
categories of explanation. The reader should bear in mind that there is some overlap between the various categories so that issues treated under one category will sometimes reappear under another. This is because to some extent the categories are means for convenient classification but human action, being a whole, cuts across such classifications in many cases.
A BRIEF SYNOPSIS

On January 28, 1986 at 11:38 A.M., the Challenger was launched. Seventy-three seconds later the faultily and unethically designed space shuttle broke apart (contrary to popular belief, it did not explode: the crew compartment remained intact, and the crew and passengers were very much alive at the time it made impact with the ocean) in a fireball that has been referred to as an "accident". The myth that everyone was killed instantaneously in an explosion persists in the current literature. The problem with this myth is that it curtails inquiry into why the passengers and crew were not provided with an escape system. The inappropriateness of the term 'accident' is vividly illustrated by the quotation from Roger Boisjoly with which Patricia Werhane concludes the chapter on the Challenger in her fascinating study:

I can't characterize it as an "accident" at all. It was a horrible, terrible disaster. But not an accident. Because we could have stopped it. We had initially stopped it. And then the decision was made to go forward anyways.1

Aboard were seven human beings, including two civilians, one of whom was a junior high school and high school teacher who was to initiate the Teacher In Space program, Ms. S. Christa McAuliffe. According to the Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, the cause of the "accident" was a '... failure in the joint between the two lower segments of the right Solid Rocket Motor'.2 The conclusion of the Presidential Commission was that '... the cause of the Challenger accident [sic]
was the failure of the pressure seal in the aft field joint of the right Solid Rocket Motor’.3 At a later point in the report, the Commission also assigns a contributing cause of the “accident” which they attribute to a flaw in the decision making process to launch the Challenger.4
KEY WORDS

In the analysis of key words that are of importance in this instance, three in particular bear notice: (i) accident; (ii) cause; (iii) contributing cause. While all three of these words figure prominently in the aftermath stage, the two senses of cause are also important to examine as they appear in the active stage of the crisis.
THE WORD 'ACCIDENT'

The word 'accident' figures only in the aftermath stage. The use of the word does not appear until after the event, and it appears as a description of the kind of event that was supposed to have happened. It is not unimportant to begin with the aftermath, because the aftermath is the opening door to the event. It defines how the event examined has been interpreted, and hence the key words used in the aftermath period must be subjected to close scrutiny, for they will affect the entire reading of the case. When the word 'accident' is employed, the meaning that is conveyed in ordinary English is that something has happened that was not foreseen and was unpredictable and unavoidable. Thus, if the present author carries a pitcher of milk and, while carrying it, it slips out of his hand and breaks, spilling the milk, it can be said that it fell accidentally. Someone could say, 'you should be more careful'. But if someone says this, the implication is always that 'accidents will happen' and that in the best of all possible worlds, there will always be accidents.
If the above example is now escalated to a considerable degree to a traffic accident in which the present author unintentionally runs over a small child who has run out in front of his automobile, it still may be said that this was an 'accident', meaning that it was not an intentional act. If after examination it appears that such an event was unavoidable, it will be considered appropriate to call this an accident. In such a case, the word 'accident' carries with it the connotation that accidents to a certain extent are unavoidable. If, on the other hand, self-examination leads the present author to the reflection that he could have avoided running over the child, it would not be correct to describe the event as an accident. If, for example, the present author had been paying enough attention to the road and not at that moment been adjusting his seat-belt (which he both could and should have done before starting to drive his vehicle), he most likely could have seen the child run out into the street and could have taken evasive action. In this case, self-examination leads him to the conclusion that he could have foreseen what was going to happen and could have prevented it. The present author may then consider that he did not simply run over the child by an unavoidable meeting in space-time, but that he is guilty of manslaughter. In the first case, that of spilling the milk, a close examination might also have revealed that he could have watched his step more closely and hence could have avoided spilling the milk. But the consequences of this act are not so great as to make this self-examination very important or useful. After all, this is a good use of the word 'accident', since there will be other cases of this kind which will be unpreventable, in part because they are not considered important enough to attempt to prevent and in part because many of them are truly unpreventable.
However, in the case of the child whose life the present author has taken, it is clear that one would like to avoid such events in the future, and therefore it is prudent to be very careful in one's choice of words. For example, if one persists in the employment of the word 'accident' to describe one's running over the child, it is entirely possible that one might not take any steps to prevent such an 'accident' from recurring. After all, accidents will happen. What can you do about it? Since the loss of a life is not so minute a matter as the loss of milk, it is not entirely suitable to classify the running over of the child as an accident; it is more appropriate to consider it a case of manslaughter.
By now, the case in point should be crystal clear. If the Challenger disaster is referred to as an 'accident', it is being conceived of as an event that could not have been avoided, and by extension all further events of similar kinds should (and will) be considered as accidents. There is not a great deal one can do about accidents, since accidents will happen and accidents are inevitable. This kind of classification of the Challenger disaster thus puts one off from investigating it very closely. It also, to a certain extent, minimizes its importance, as the concept of an accident is closely tied up in one's mind with something rather trivial. One would not, for example, call an earthquake that claimed 7 lives an 'accident' even though it may well have been an event which was unavoidable. Perrow's proposal to utilize the term 'accident' for losses of life under 100 is not entirely suitable, since the designation of the Challenger disaster as an accident would blind one to the fact that the disastrous outcome was foreseen and could easily have been prevented. The public significance of the Challenger disaster also makes it natural to refer to it as a disaster despite the fact that a small number of lives were lost. But the primary reason why the term 'disaster' is preferable as a term of description is that it is so obvious that the Challenger disaster could have been prevented that it was not accidental in any sense of the term. The use of the word 'accident' to describe the Challenger disaster is a key to further interpretation of the event. Each time it is used in reference to the disaster, all of the associations that occur with the term 'accident' receive reinforcement. The continued use of the word 'accident' continually predisposes readers to consider that what happened could not have been avoided and, what is more, that it was rather trivial.
The occurrence of the word 'accident' takes place in the aftermath of the disaster, but it forms the overall impression of the entire sequence of events that led up to it as having been a series of unrelated circumstantial errors. As a result, its choice as a term of reference is extremely consequential. The leading source for the use of this term is the classical source of information concerning the Challenger disaster, the Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, Washington, D.C.: GPO Superintendent of Documents, 1986 (which went out of print very soon after publication). The choice of this term for the title of the work not only predisposes the reading of the report
with all the associations of the term 'accident', but in all subsequent references to this work the term 'accident' receives more and more reinforcement. In the ordinary person's mind, all that will be left of the report is most likely the impression left by the title, since the title is the most that anyone, except those directly concerned, would have heard of it. Even after an accurate analysis has been done, by summing up one's results under the word 'accident', one effectively neutralizes even an analysis which may have been aimed at arguing that this disaster could have been prevented. Such is the power of a simple, ordinary English word.
CAUSE AND CONTRIBUTING CAUSE

The reason why these two particular terms have been selected for analysis is that they appear in the Presidential Report as titles of chapters and as terms which describe why the Challenger disaster occurred. In the Presidential Report the primary cause of the accident is taken to be a technical fault, and the contributing cause is taken to be the flaw in the decision making process. In a chapter entitled 'The Cause of the Accident', it is stated that, 'The consensus of the Commission and participating investigative agencies is that the loss of the Space Shuttle Challenger was caused by a failure in the joint between the two lower segments of the right Solid Rocket Motor.'5 At the end of this chapter, under 'Conclusion', it is stated that, 'In view of these findings, the Commission concluded that the cause of the Challenger accident [note the continuing use of the term 'accident'] was the failure of the pressure seal in the aft field joint of the right Solid Rocket Motor.'6 In another chapter entitled 'The Contributing Cause of the Accident', it is stated that, 'The Commission concluded that there was a serious flaw in the decision making process leading up to the launch of flight 51-L.'7 What one is led to think from the Presidential Report's use of these words is that the technical defect is the main cause of the disaster. The problem with this impression is that if one's primary attention is centered upon the technical defect, one may not take sufficient pains to address the putatively minor cause of the flaw in the decision
making process. However, if there is a main cause and a contributing cause, the natural result is that one devotes the brunt of one's energies to taking care of the main cause and the remaining energies to dealing with the more minor cause. It is important, therefore, that one is clear as to which is the main cause and which the minor cause. In order to show that the Presidential Commission has the priorities reversed, it may be useful to consider an analogy. If a father, in his decision to punish his child by striking her with a belt, chooses a belt with a loose buckle, and in striking the child the buckle flies off and hits her in the eye, thereby blinding her, would it be correct to say that the loose buckle is the primary cause of the accident (the term 'accident' is correctly used here) and the father's decision and act of striking her the secondary cause? One's reaction is to find this sort of description unnatural. One would consider it far more natural to say that the father's decision to punish the child was the primary cause of the accident. Why? Because one realizes that the chain of events which resulted in the child's blindness was set into motion by the action of the father, which resulted from his decision. If the father had not decided to punish the child, the loose belt buckle would have harmed no one as it hung from the belt in the closet. The only way in which it came to harm anyone was by being an instrument of someone's will. As an inanimate object, it possessed no power on its own to initiate any sequence of events. If this analogy is carried over to the Challenger disaster, then the management decision to launch the Challenger becomes the primary cause of the disaster. The O-ring defect becomes the loose belt buckle. By itself, the O-ring defect was harmless. It only became harmful when it was used as an instrument in a launch. One can even extend the analogy a bit further.
The decision makers were aware, as was the father, that the O-ring had a defect (that the belt buckle was loose), but since the belt buckle had never come off before, the father was not concerned. One reason he was not concerned was that the safety of his child was not paramount in his mind. What he was intent upon was punishment. It could be argued that if the O-ring had been operative, one also would not have had a disaster. Therefore, we are entitled to consider the O-ring the cause of the disaster. The difficulty with this line of
reasoning is that it is entirely possible that not only the O-ring was defective but that there might have been some other technical malfunction further along down the line. The Challenger might have contained a series of booby traps, the O-ring simply being the first in line. In a direct interview between the author and a scientist employed at NASA at the time, for example, the scientist maintained that when the Challenger blew up [the scientist's wording], everyone initially thought that it was the main engine, because that had been suspect for some time. Because more than one technical malfunction was possible, the ultimate responsibility for the disaster lay with the overall decision that the Challenger was safe. The technical malfunction that did in fact cause the Challenger to break up into pieces and eventually cause the deaths of all of the astronauts and civilian passengers was one which the management did know about beforehand; it need not, therefore, have been allowed to cause the Challenger to disintegrate and ultimately cause the deaths of the astronauts and the civilian passengers. In other words, the management had control over the O-ring and not vice-versa. What is of assistance in the prevention of further disasters is the putting in place of a mechanism and a consciousness which will prevent decisions being carried out to launch future Challengers which are known to have safety problems. It is important to remember that the technical defect of the O-ring could not, by itself, have harmed anyone. It possessed no responsibility for blowing up the Challenger and causing the deaths of the astronauts and the civilians. The reason that it possessed no responsibility for the Challenger disaster is that it did not, by itself, decide to act. It seems more appropriate, then, to describe the technical defect of the Challenger with the term proximate cause, and the decision to launch the Challenger, the primary cause.
In the labeling system provided by the Presidential Report, even if the analysis within the report places a strong responsibility on the decision making procedure, the labels, as titles of chapters, carry an influence of their own. One may not remember the detailed arguments regarding the management structure, but what one can remember is that, according to the Presidential Report, the finding was that '... the cause of the accident was the O-ring'. One is far more inclined to remember the leading actress in a famous film that one saw years ago than the supporting
actress. And that is why one’s choice of words plays such an important role.
DECISION MAKING

To simplify the task, it is best to concentrate on the decision making in the active stage of the crisis only. In the Presidential Report, the flaw in the decision making process is a contributing cause of the Challenger disaster. This analysis must be extended. The decision making process can be seen to have four separate and co-contributing defects:

(I.) The Atmosphere of the Decision Making Process was Not Conducive to Good Decision Making;
(II.) The Lack of Standardized Criteria for Decision Making:
(a.) The Lack of any Clear Uniform Guidelines as to Moral Criteria;
(b.) The Lack of a Formula for the Attribution of the Responsibility of the Burden of Proof in an Argument;
(c.) The Lack of Understanding of What Would Count as Sufficient Justification for Conclusions;
(d.) The Lack of Understanding of What Input is Relevant to a Decision;
(III.) The Absence of a Chain of Command or the Lack of Exercise of a Chain of Command in the Final Decision Making;
(IV.) The Lack of a Spelled Out Decision Making Mechanism (such as unanimous vote, secret ballot, majority vote, consensus, etc.)
THE ATMOSPHERE OF THE DECISION MAKING PROCESS

The atmosphere of the decision to launch the Challenger was the atmosphere of a crisis. This is curious, because the crisis was, at this point in time, largely self-inflicted. In other words, the crisis was not forced upon those who were in the process of deciding to launch the Challenger. It was brought about by their own previous actions and decisions, a fact that will be examined later under the topic of responsibility. There was no external crisis at that point which needed action to resolve. And yet, the atmosphere of the decision
making process was one that one would expect to obtain during a crisis situation. The problem with the atmosphere of a crisis is that unless there is a crisis to resolve (and even then it is dubious whether a crisis atmosphere in the decision making process is helpful), the conditions of a crisis atmosphere make it more difficult to make a judicious decision, not less.8 What is a crisis atmosphere? It is not difficult to define. A crisis atmosphere is one in which a decision must be made under a great deal of pressure, usually to meet a fixed deadline, in which a wrong decision is very likely to have grave consequences, and which is characterized by time pressure, hurried decisions, sometimes at strange times of the night, and very possibly irregular or even unethical steps in the decision making process.9 All of the above conditions were present in the decision making atmosphere in which the decision to launch the Challenger was made. Obviously, these are not ideal conditions under which to make a judicious decision. What is interesting to observe is that normally, in crisis situations, such conditions are unavoidable because one is forced to make a decision under the conditions of a double or multiple bind. The conditions of a double or multiple bind are such that one is forced to make a decision, and no matter which decision one makes, the consequence requires one to take a serious risk. The concept of being forced to make a decision is an important one. For example, in the case of having to take some action to free hostages, if one does not decide to act, the hostages may be killed. Such a crisis situation impels an atmosphere of crisis which is difficult to dispel. In the case of the Challenger, if there had been no decision to launch, what would have been the consequences? No lives would have been lost, certainly.
The curious feature of the atmosphere which was present during the Challenger launch decision was that the decision was not a forced choice, yet it was made under all of the conditions of a forced choice. The crisis, thus, was self-imposed. In what follows it can be shown that all of the conditions of a crisis atmosphere were present.
THE DECISION MUST BE MADE UNDER PRESSURE

There is no question that the decision to launch the Challenger was made under highly pressurized conditions. This is indicated not only by the time constraints, which are discussed under the time deadline below, but by the fact that the participants felt under pressure. This, in itself, is not an ideal state in which to make a judicious decision. Even the Commission itself at several points acknowledges the existence of pressure. It does, curiously, reverse itself, so that its own findings remain self-contradictory on this point. The engineer, Boisjoly, whose early warning memo about the O-rings went ignored, stated, 'I felt personally that management was under a lot of pressure to launch'.10 Mr. Mulloy and Mr. Hardy used their positions of authority to apply informal pressure on the others to overcome any resistance to launching. Mr. Mulloy's influential statement was, 'My God, Thiokol, when do you want me to launch, next April?'11 Mr. Hardy's was, 'I'm appalled at your recommendation'12 (in response to the original recommendation of Thiokol against launching). In two places in the Presidential Report, the Commission acknowledges that pressure existed. In one chapter, significantly entitled 'Pressures on the System', it is stated that: 'Pressures developed because of the need to meet customer commitments, which translated into a requirement to launch a certain number of flights per year and to launch them on time.
Such considerations may occasionally have obscured engineering [read: ethical or safety] concerns.'13 In another place in the Report, it is stated that: 'The Commission concluded that the Thiokol Management reversed its position [not to launch] and recommended the launch of 51-L, at the urging of Marshall and contrary to the views of its engineers in order to accommodate a major customer.'14 Curiously, at another point in the Report, the Commission seems to have forgotten these statements when it states that: 'The Commission concluded that the decision to launch the Challenger was made solely by the appropriate NASA officials without any outside intervention or pressure.'15 There were a variety of external pressures. For example, it was thought important that the shuttle should be perceived as operating on a routine basis. That required a certain number of lifts per year
and not a number of delays.16 When one contemplates the phrase 'major customer', one imagines that there was a great deal of commercial pressure involved which, in the end, may well have been an extremely influential factor. One only wonders who the unnamed 'major customer' was. (Cf. n. 14; n. 30)
A FIXED DEADLINE MUST BE MET

Here, there was definitely the concept of a fixed deadline. But the date actually selected was not the only, the last, or even the most optimal available date among the choice of alternatives. It certainly was not the optimal date in terms of weather conditions. Here, because the deadline was in fact a fairly flexible one, and was chosen from a wide range of options, it is difficult to understand why it was treated as a fixed deadline. It is almost as if the crisis situation were self-imposed. It is obvious that one must select a time frame for a launch. But there is a difference between a schedule and a fixed deadline. What is not clear is the urgency of meeting that particular fatal deadline of January 28, 1986. The background to the launch of the Challenger was that it had already been postponed three times and scrubbed once. The only postponement that had to do with weather conditions was one that occurred on January 25, 1986. It is strange and unsettling that the even less ideal weather conditions of the 28th of January were not considered sufficient to warrant another postponement.17 The non-double-bind, free choice status of the deadline of the 28th of January must be considered carefully. There was no urgent reason why the flight had to be launched on that particular day or at that particular time. Nonetheless, the date and time did function as a deadline in terms of the creation of a crisis atmosphere. It is important to remember that there was no strong technical reason why the flight had to conform to this day and time. One may speculate on such reasoning as 'it had already been postponed so many times' and so on, but such reasoning is not very impressive when it comes to taking life and death risks. According to a direct interview between the author and a scientist working at NASA at the time, there were also rumors circulating at NASA at the time that
there had been a call from the White House indicating that the President was eager to talk to the Teacher In Space. However true or false such a rumor may be, what is of importance in one's analysis of a crisis is that the fixed deadline did not represent a forced choice (that is, it was not the case that no other day-date was available). There was no double-bind situation present. If there had been no launch on this day-date, there would have been no potential consequence as serious as the life and death risk that was being taken. For instance, on other flights, such as those to Venus, there were very few windows to space open. If one were passed up, there could be a considerable wait of a year or more. In this case, there was no technical window available only at this time. The self-imposed deadline was a free choice, not a forced choice. For all that, it played its role in creating a crisis situation.
A WRONG DECISION WILL HAVE GRAVE CONSEQUENCES

This bears little discussion. Seven lives were at stake. Again, the curious feature is that in a typical crisis situation, one is in a forced choice, double (or multiple) bind. Whichever way one turns, there is possibly a grave consequence, and one must make a choice. For example, in a decision to free hostages, if one decides to go in after them, the kidnappers may decide to murder the hostages. If one decides to take no action, the kidnappers may also decide to murder the hostages. There is no known choice which is free of risk. In the case of the Challenger launch decision, there is the typical crisis situation where a crisis atmosphere is present, except that there is neither double bind nor forced choice. There is a decision which will be risk free, namely, not to launch. But throughout the decision making period, this alternative is not given equal priority (see II. b.). Thus, a crisis atmosphere is provoked (the forced choice, double-bind decision) where a genuine double-bind situation does not obtain. What of the objection that anyone who goes up in space always takes such risks and is aware of them, or would not be there in the first place? This objection is sound only for the general situation of risk, of which all astronauts would certainly have been aware. What is
under discussion here is that there were certain known risks (such as the defective O-ring under certain temperatures) which placed this specific launch under an unwarranted risk of which the astronauts and civilians were not aware. For example, as to whether one could have known in advance that the O-rings would malfunction, according to the Presidential Report, 'Each of the [previous] launches below 61 degrees [ambient] Fahrenheit resulted in one or more O-rings showing signs of thermal distress.'18 Had Commander Francis R. (Dick) Scobee been aware of this fact, would he have risked the lives of his crew and two civilians aboard at an ambient launch temperature of 36 degrees Fahrenheit?19 The objection that there is always a general risk in volunteering for such a role as an astronaut does not apply to specific situations in which it might well appear that unnecessary and unwarranted risks are being taken. For example, if Commander Scobee had been offered the choice to go up now or wait for a time when weather conditions were known to be safe and had still decided to go up, then he (or the astronauts and civilian passengers for whom he was responsible) would have been taking a known risk. As it was, they took a risk which in their own minds was very probably a different sort of risk than the one under which they actually were. In fact, by the very presence of a civilian schoolteacher on board, they might even have been under the impression that this launch was relatively safe rather than relatively unsafe. If a crew is aware of the dangers and still decides to go up, its members qualify as heroes and heroines. If they are unaware of certain launch-specific and avoidable dangers, they are only victims. NASA's subsequent decision to give a former astronaut input into the launch decision process is really not to the point here. Relevant to a decision to launch would be the placement on the final decision making committee of an astronaut and/or a Safety Officer with veto power.20
THE PRESENCE OF IRREGULARITIES

Here, irregularities abound which could have alerted someone to the presence of a crisis atmosphere and the consequent question of
whether such a crisis atmosphere was a warning signal that ideal or even minimal decision making conditions were not present. Irregularities were present, such as five separate caucuses over the launch decision, meetings in such unlikely places as motel rooms, and three off-net caucuses, after the last of which the highly pressured Thiokol reversal of its original unanimous recommendation not to launch took place. What follows is an abbreviated chart of the run-up of events immediately prior to and influencing the final decision to launch the Challenger:
ABBREVIATED CHRONOLOGY OF EVENTS LEADING TO CHALLENGER LAUNCH

2:00 P.M. (EST), January 27: Discussion of weather conditions predicted for the launch on 28 January 1986. Those present included: Jesse W. Moore, Associate Administrator, Space Flight, NASA HQ, and Director, Johnson Space Center (JSC); Arnold D. Aldrich, Manager, Space Transportation Systems Program, JSC; Lawrence B. Mulloy, Manager, SRB Project, Marshall Space Flight Center (MSFC); Dr. William Lucas, Director, Marshall Space Flight Center.

5:15 P.M. (EST), Kennedy Space Center (KSC): McDonald calls Cecil Houston, informing him that Morton Thiokol engineering had concerns regarding O-ring temperatures. Those present included: Al McDonald, Director, Solid Rocket Motor (SRM) Project, Morton Thiokol, Inc.; Cecil Houston, MSFC Resident Manager at KSC.

6:30 P.M. (EST), Marshall Space Flight Center: Lovingood calls Reinartz and tells him that if Morton Thiokol persists, they should not launch. Lovingood also suggests advising Aldrich (Level II) of the teleconference to prepare him for a Level I meeting to inform him of a possible recommendation to delay the flight.
Those present included: Judson A. Lovingood, Deputy Manager, Shuttle Projects Office, MSFC; Stanley R. Reinartz, Manager, Shuttle Projects Office, MSFC.

7:00 P.M. (EST), Motel Room Visit: Mulloy and Reinartz visit Lucas and Kingsbury (Director, Science and Engineering, MSFC) in their motel rooms.

8:45 P.M. (EST), Morton Thiokol, Utah: Second teleconference. The recommendation by Morton Thiokol (Lund, Vice-President, Engineering, Morton Thiokol) is not to launch until the temperature of the O-ring reaches 53 degrees Fahrenheit, which was the lowest temperature of any previous flight. Kilminster (Vice-President, Space Booster Programs, Morton Thiokol) states that he cannot recommend launch. Hardy is reported by both McDonald and Boisjoly to have said he is "appalled" by Morton Thiokol's recommendation. Kilminster asks for five minutes off-net to caucus.

10:30 P.M. (EST): Caucus continues for 30-35 minutes off-net. Thompson (Supervisor, Structure Design, Morton Thiokol) and Boisjoly voice objections to launch. Lund is asked to put on his management hat by Mason (Senior Vice-President, Wasatch Operations and CEO, Morton Thiokol). Final management review with only Mason, Lund, Kilminster and Wiggins (Vice-President and General Manager, Space Division, Morton Thiokol).

Approximately 10:30 P.M.-11:00 P.M. (EST), Kennedy Space Center: McDonald continues to argue for delay. Kilminster reads the rationale for recommending launch and signs off for the launch.

11:45 P.M. (EST): Kilminster signs the facsimile and faxes Morton Thiokol's recommendation to launch. Aldrich is apparently not informed of the O-ring concerns.

It is important to bear in mind that the engineers had no part in writing the recommendation and refused to sign it. The engineers did not change their original recommendation not to launch.
While this is a greatly abbreviated chart of the events, two outstanding facts emerge: (a) there appears to be significant disagreement requiring urgent resolution; (b) there appear to be irregularities in the meetings (motel room visits, off-net caucuses). Both of these facts are indicators that the atmosphere of a crisis is present.21
THE SPACE SHUTTLE CHALLENGER DISASTER
123
If these events are abnormal, then it would appear that they would not only not be conducive to making a good decision, but they might also make one wonder whether it was a good idea to make any sort of decision under such circumstances. One criterion for decision making is that unless absolutely necessary (a forced choice is present) one should not decide in a crisis atmosphere. And, if a crisis atmosphere is present, one should on this ground alone be suspicious as to why one is being obliged to make a decision unless it is absolutely necessary to do so.
(a.) THE LACK OF ANY CLEAR UNIFORM GUIDELINES AS TO MORAL CRITERIA

It appears as if there were no spelled-out criteria for decision making of a moral nature. For example, if the safety of the astronauts and the civilians were of paramount importance, one could then say that a general guideline existed which would aid decision making. Consequently, if it were to become a subject of debate as to whether the launch were safe, and it were alleged that there appeared to be certain risks of certain probabilities, one could appeal to the explicit guideline. Suppose the standardized guideline stated that, unless it were highly probable that the launch was safe (with high probability defined as 8.5 out of 10), the shuttle would not be launched; then the participants in the decision-making process would have had to face this issue squarely. They would have to commit themselves to a launch under the clear understanding that they considered it 85% safe to launch. (The figure is not inviolable. It is chosen to reflect, for example, a teacher’s minimum confident recommendation of a student. If a teacher feels that a student is a B+ student, then this student should be able to handle all reasonable difficulties competently. Or, if one’s car is inspected and receives a B+ all-around rating, one may feel safe to drive it). Decision making in the absence of any criteria (even when meant simply as general guidelines) seems almost random. It is possible that the sorts of safety standards set by commercial airlines for their own airplanes could be used as some sort of guide. If one considers this possibility
as a guideline, however, one may be in for some sort of shock. According to former Apollo astronaut Frank Borman, later to be vice-chairman of Texas Air Corporation in Las Cruces, New Mexico, and for seventeen years president of Eastern Airlines, ‘The 727 airplanes that we fly are proven vehicles with levels of safety and redundancy built in’ - levels, he said, that the space shuttle comes nowhere near. If this is the case, then the concept of safety for the space program needs to be given a great deal more thought.22 It is interesting to reflect on the criterion for finding someone accused of a crime guilty in a court of law in the United States of America. In order that a juror vote guilty, he or she must be convinced ‘beyond the shadow of a doubt’. The rationale behind such a criterion is that there is always the risk that an innocent person may be incarcerated or executed. Thus, if there is a reasonable doubt that the person is guilty, it is recommended that one vote for acquittal. A criterion is proposed which takes into account the seriousness of making a wrong decision. According to the moral formula, it is better to allow ten guilty parties to escape than to imprison one innocent person. In the case of sending up the civilians and the astronauts, the risk is that of carrying out a death sentence. This was in fact what was behind the decision of the Morton Thiokol engineers and managers not to fly. They thought that there was a doubt. But the existence of a doubt was not considered to be adequate in the case of the launch of the Challenger.
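The explicit guideline discussed above - launch only when the assessed probability of safety meets a written, agreed-upon threshold, with the default otherwise being delay - can be sketched in code. This is a minimal illustration under stated assumptions, not any actual NASA procedure; the threshold value and function name are hypothetical, echoing the text’s illustrative ‘8.5 out of 10’.

```python
# Hypothetical sketch of a written launch guideline: a launch proceeds
# only if the assessed probability of a safe flight meets an explicit,
# agreed-upon threshold. Names and numbers are illustrative only.

SAFETY_THRESHOLD = 0.85  # the text's illustrative "8.5 out of 10"

def launch_permitted(assessed_safety: float) -> bool:
    """Return True only when the assessed probability of a safe flight
    meets the written threshold. The burden is on those who favor
    launching to establish this probability; absent such a showing,
    the default is not to launch."""
    if not 0.0 <= assessed_safety <= 1.0:
        raise ValueError("safety must be a probability in [0, 1]")
    return assessed_safety >= SAFETY_THRESHOLD

# A board that cannot establish a high probability of safety
# defaults to delay:
assert launch_permitted(0.9) is True
assert launch_permitted(0.5) is False
```

The point of such a rule is not the particular number but that all participants share one written criterion, so that a debate over safety has explicit common ground.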
(b.) A FORMULA FOR THE ATTRIBUTION OF THE RESPONSIBILITY FOR THE BURDEN OF PROOF IN AN ARGUMENT

The burden of proof in an argument should always rest on the one who is attempting to argue for a course of action which will constitute an alteration of the status quo. For example, if the status quo is that the Challenger is on the launching pad, there is no need to provide an argument that it should remain on the launching pad. If the Challenger is to be launched, however, one does need to provide
an argument to justify its being launched. If the question is safety (guideline a.), then it is up to those who are in favor of launch to prove that the Challenger is safe beyond a reasonable doubt. (One may use this formula instead of the 85% standard. The important thing is not the exact formula selected but having some standardized criterion rather than none). It is not up to those who are doubtful about the Challenger to prove that the Challenger is unsafe. It is impossible to prove that something (a car, a plane) is unsafe if by proof one expects to be shown that it is 100% certain that x is unsafe. However, it is equally difficult to prove that x is 100% safe and, if safety is the paramount criterion, this is what the reasoning should be designed to establish. At least, one should have written criteria to appeal to for general guidelines so that all the participants in a decision have some common ground on which to base their decisions. According to Boisjoly, the pressure in this case was opposite to the normal pressure where the burden of proof would be to prove that x was safe; here, the pressure was to prove that it was unsafe.23 What is strongly recommended here is that the criteria for a decision be spelled out explicitly so that such an incident could not repeat itself. While it may be common sense that the burden of proof should rest on the shoulders of those who wish to alter the status quo (in this case to launch, and hence to prove that it is safe to launch), apparently in this key case, the burden of proof was alleged to rest on the shoulders of those who opposed the launch, to prove that it was 100% unsafe to launch.
To avoid the possibility of a repetition of such a case, it would be wise to write into the guidelines that the burden of proof shall always fall on the shoulders of those who wish to take a risky action, namely to risk the lives of astronauts and civilians, to prove that the action that they are about to undertake has a high degree of probability of being safe. This burden must never be shifted around to rest on the shoulders of engineers to prove that it is 100% unsafe. The importance of written general guidelines is that engineers, like Boisjoly, might be able to smell a rat, to know that they are being asked to do something 180 degrees different from normal, but they may not as engineers realize the incongruity of what they are being asked to do - nor should they have to. This is the value of having written, formal guidelines for decision making.
(c.) WHAT WOULD COUNT AS SUFFICIENT JUSTIFICATION FOR CONCLUSIONS

It might be very prudent to consider carefully the sophism contained in what Commissioner Feynman termed Russian roulette reasoning. If a coin has come up tails 100 times in a row, this still does not alter the probability on the next toss: the probability that it will come up tails remains 50%. That something has worked in the past is not a guarantee that it will continue to work in the future, especially if certain conditions, such as temperature, are different now than they were in the past. As Commissioner Feynman observed in the Presidential Report, standards were lowered because nothing had happened so far. In his words, ‘a kind of Russian roulette ... [The Shuttle] flies [with O-ring erosion] and nothing happens. Then it is suggested, therefore, that the risk is no longer so high for the next flights. We can lower our standards a little bit because we got away with it last time ... You got away with it, but it shouldn’t be done over and over again like that.’24 With respect to what counts as a sufficient justification for a conclusion, one cannot appeal to what happened in the past to guarantee the future. For example, if one has driven one’s automobile two thousand miles with bad brakes, or brakes that are suspect, it does not follow that one can now drive further.
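Feynman’s point can be made concrete with elementary probability. Assuming, purely for illustration, a fixed per-flight failure probability that is independent across flights, past successes leave the next flight’s risk unchanged, while the cumulative chance of at least one failure grows with every repetition. The value of p below is hypothetical, not an estimate of actual shuttle risk.

```python
# Illustrative arithmetic for "Russian roulette" reasoning. The
# per-flight failure probability p is hypothetical and assumed
# fixed and independent across flights.

P_FAILURE = 0.01  # hypothetical per-flight failure probability

def next_flight_risk(past_successes: int) -> float:
    # Under independence, history is irrelevant: the next flight's
    # risk is p no matter how many times one has "got away with it".
    return P_FAILURE

def risk_of_at_least_one_failure(n_flights: int) -> float:
    # P(at least one failure in n flights) = 1 - (1 - p)^n
    return 1.0 - (1.0 - P_FAILURE) ** n_flights

# Past success does not lower the next flight's risk...
assert next_flight_risk(0) == next_flight_risk(100)
# ...while repeated exposure steadily raises the cumulative risk.
assert risk_of_at_least_one_failure(100) > risk_of_at_least_one_failure(1)
```

The sophism consists precisely in reading the first function as if past successes were an argument for lowering one’s standards, when repeated exposure in fact compounds the overall risk.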
(d.) THE LACK OF UNDERSTANDING OF WHAT INPUT IS RELEVANT TO A DECISION

A key illustration of the importance of stipulating criteria for what input is relevant to a decision appears in the case of the ill-conceived request made of Robert Lund to shift from an engineering to a “management” hat in order to make his decision whether to launch. Lund is requested (not ordered, but this may be a moot point) by his senior, Jerald Mason, who is the CEO of Morton Thiokol, to take off his engineering hat and make a pure management decision. Up to this point in time, Lund, on engineering grounds (he is both an engineer
and a manager), had opposed the launch. When asked to make a pure management decision (he is Morton Thiokol’s Vice-President for Engineering), he endorses the launch. The problem here is: since he is both a manager and an engineer, why should he not take engineering [read: ethics, safety] factors into account? What would a pure management decision be? It is hard to imagine what this is. But in the absence of criteria requiring that engineering considerations be taken into account, one may, as Lund was, be bullied into leaving them out.25
THE ABSENCE OF A CHAIN OF COMMAND OR THE LACK OF EXERCISE OF A CHAIN OF COMMAND IN THE FINAL DECISION MAKING PROCESS

During the active stage of the decision, according to the Presidential Report, it appears as if the decision was basically made by Kilminster, Vice-President, Space Booster Programs at Morton Thiokol. While even in the Report this fact is difficult to ferret out, there is one statement in the Report from which one could conclude that it was Kilminster who made the final decision: ‘... Mr. Kilminster reviewed rationale that supported proceeding with the launch and so recommended. Mr. Reinartz, Manager, Shuttle Projects Office at Marshall, asked if anyone in the loop had a different position or disagreed or something to that effect, with the Thiokol recommendation as presented by Mr. Kilminster. There were no dissenting responses.’26 Kilminster (aptly named) is the one who faxed Thiokol’s recommendation to launch to Kennedy and Marshall and whose signature appears on the fax.27 Kilminster also is the one who requested the off-line caucus which turned into the ill-fated 35-minute caucus during which the Morton Thiokol management (not the engineers) reversed their recommendation against launching. Kilminster had heard Lund’s report. Thus, he knows (as do Hardy and Mulloy) of the contractor’s recommendation. It is a moot point by that time as to whether he knew of the initial recommendation. It is hard to imagine he did not - but he does not really need to have, since he would have just heard Lund’s report. (See chronology of
events above). In any event the history of the problem was discussed in the off-net caucus.28 The problem with Kilminster’s actions is that they were not congruent with the actual chain of command. Both Mason, the Senior Vice-President for Wasatch Operations and, most importantly, the CEO of Morton Thiokol, and Wiggins, Vice-President and General Manager of the Space Division at Morton Thiokol, who were in positions of authority above Kilminster, were present and yet allowed Kilminster to run the show, and especially allowed Kilminster to make the decision to launch,29 with support from Reinartz, Manager, Shuttle Projects Office at Marshall.30 This Theseus sorely needs Ariadne’s thread to guide himself through this labyrinth of obfuscation! This Theseus fears that at the end of this labyrinth he, too, will discover a whole lot of bull. It would appear that both Mason and Wiggins have abdicated responsibility by allowing someone lower on the command chain to take official responsibility for such a crucial decision. A strict adherence to the proper chain of command would not allow Kilminster to take responsibility for the decision to launch. Two things are noticeable. First, since Kilminster’s superiors were there, it would appear that they would or should have been responsible for the decision rather than him. Second, from the Report of the Presidential Commission it appears as if both Wiggins and Mason played passive roles in the final decision-making process: the normal chain of command was not followed. What all of this means is that with respect to such an important decision as the launching of the Challenger, it is left unclear exactly who was responsible. While, on the one hand, from the standpoint of the chain of command, as the superior officer, Mason would appear to have to be responsible for the immediate decision, on the other hand, by allowing Kilminster to take official responsibility for the decision, Mason and Wiggins abdicated their own responsibility.
This makes it appear as if Kilminster is responsible. But this is not exactly true either, since he is not technically in command, although he has made a pro forma decision. Whether or not it is normal for Kilminster to sign off as Program Director is not relevant; if it is normal, it should not be. With his signature appearing, the impression is created that Kilminster is the responsible party. But he is not the superior officer! The end result is to leave it very unclear as to who is responsible. And this may indeed be the
very point of it all. If the chain of command were to be followed, at the very least one would possess a clear sense of the attribution of responsibility. Those in positions of power might be more inclined to make responsible decisions if their signatures had to appear. In addition, decisions would be made by those to whom authority for making such decisions was initially entrusted, based presumably upon the notion that they were the most qualified to make such decisions. But, by suspending the chain of command, those in command are not going to be held accountable for making the decision, so they may not take sufficient care that the decision is impeccable. The suspension of the chain of command leaves more open the possibility that the wrong decision will be reached, both from the standpoint of the consequent vagueness of the responsibility factor, which makes the decision less subject to self-scrutiny, and from the standpoint of the less expert making the decision. The first requirement, then, is that there must be a clear-cut, spelled-out chain of command which is strictly adhered to. There are at least four reasons for following an explicit command structure. The most important reason to specify and follow a command structure is that whoever is in a position of authority will then be obliged to accept responsibility for the authority that she or he is given. Secondly, if authority has been given on the basis of qualifications, then the person who possesses the skills and experience to be responsible for a final decision will make that decision. Thirdly, everyone who is a party to the decision making will have a clear understanding of to whom they are ultimately responsible, which should make the decision-making procedure more transparent. Transparency also ensures greater integrity and scrupulousness. The muted portion of the teleconference was a stunning example of a lack of transparency!
Fourthly, with a clear-cut chain of command that is adhered to, the flow of information is less likely to be cut off. If there is no clear-cut chain of command, it is more possible that a vital piece of information may not reach those who are responsible for making the final decision simply because in the absence of a chain of command, it may not be known to whom to transmit the information. With clear reporting channels, it is more likely that information will be passed up the chain to reach the highest authority while at the same time not missing any vital link on the way.
THE LACK OF A SPELLED-OUT DECISION-MAKING MECHANISM

It is not clear from the Presidential Report how the final decision was made to launch the Challenger. After the last off-net caucus, Thiokol management reversed itself and Kilminster asked if anyone was against launching. When no one raised any objection, Kilminster took this to mean that the decision to launch was to be made, and he so made it. If there had been some procedural mechanism for decision making, it is entirely possible that this decision would not have been made. For example, no off-net caucuses should have been permitted. It was a 35-minute off-net caucus that immediately preceded the Thiokol reversal. Being off-net, it is entirely likely that pressures could have been brought to bear that would not have been feasible to introduce during a monitored caucus. Even more importantly, it was never very clear exactly how a decision was to have been reached. For example, if a written decision-making procedure had called for a secret ballot and a unanimous vote, it is entirely possible that someone might have felt the courage to go against the launch and the launch would not have taken place. In the absence of any mechanism, it appears as if an open consensus mechanism was used, in which case intimidation can always have its highest efficacy. It is not necessary that a unanimous consensus by secret ballot would have to have been the procedure. It could, for example, have been a simple majority by secret ballot. But in the absence of any mechanism, decision making is at best a confused process and at worst a manipulated one. Neither of these options is optimal for a decision of the order of magnitude under discussion. If one considers the Japanese ringi system of decision making, unanimous consent would have to be achieved prior to the active stage.
In other words, at the very first dissent of one particular man, namely, Boisjoly, the decision to use the O-rings would not have been possible. In terms of the active stage of the crisis, if one were to appeal to the Japanese Management System, at the very least one would consider that strong dissent was a warning signal that all was not proceeding well in the direction of the decision that was under consideration. Strong dissent would indicate that one
would be well disposed to pause, as a good decision should be strongly based. While it may seem strange to consider subjective criteria, the presence of an atmosphere of uneasiness at making such a decision might indicate that possibly something was wrong with the decision that was to be made. This is not to say that such a feeling must be a decisive factor, but to ignore it entirely is to do so at one’s risk. One cautionary note that may be utilized as a warning signal is that one should not make or force a decision unless it is absolutely necessary and/or there is no significant dissent. A written guideline, then, which can serve as a very general formal guideline is that, if there is significant disagreement among members or strong dissent by one member, one should not make a decision unless absolutely necessary.31 This bears some resemblance to the legal concept of reasonable doubt. If there is reasonable doubt that someone is guilty, then one must acquit. Here, there was certainly reasonable doubt that the launch was safe.
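The written guideline just proposed - votes cast by secret ballot, with no decision to act under significant dissent unless the choice is absolutely forced - can be sketched as a simple procedure. Everything here is illustrative rather than an actual decision protocol; whether unanimity or a simple majority is required is a policy parameter.

```python
# A sketch of a spelled-out decision-making mechanism: anonymous
# ballots, an explicit approval rule, and a default of "do not act"
# under dissent unless the choice is absolutely forced.
# All names and rules are hypothetical illustrations.

from typing import List

def decide_to_launch(ballots: List[bool],
                     require_unanimity: bool = True,
                     forced_choice: bool = False) -> bool:
    """ballots: anonymous True (launch) / False (do not launch) votes."""
    yes = sum(ballots)
    dissent = yes < len(ballots)
    # Unless a decision is absolutely forced, any dissent blocks the
    # riskier course of action (launching).
    if dissent and not forced_choice:
        return False
    if require_unanimity:
        return yes == len(ballots)
    return yes > len(ballots) // 2

# A single dissenting secret ballot (a Boisjoly) blocks the launch:
assert decide_to_launch([True, True, True, False]) is False
assert decide_to_launch([True, True, True, True]) is True
```

The design point is not the particular rule but that the rule is written down in advance, so that the outcome cannot be produced by intimidation in an unmonitored caucus.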
MANAGEMENT STRUCTURE

The issue of the management structure is one which the Commission does address in general terms. For example, in the Report it is stated that, ‘The Commission is troubled by what appears to be a propensity of management at Marshall to contain potentially serious problems and to attempt to resolve them internally rather than communicate them forward.’32 The Launch Director stated that if he had had the information on the relationship of temperature to the seals, ‘we wouldn’t have launched if it hadn’t been 53 degrees’. While this statement of the Launch Director is questionable and will be questioned below, the Presidential Commission did recommend that, ‘NASA should take energetic steps to eliminate this tendency [to not pass on information bearing on safety] at Marshall Space Flight Center, whether by changes in personnel, organization, indoctrination or all three.’33 The term ‘education’ is preferable to ‘indoctrination’, but the point of the Presidential Commission is that
there needs to be attention paid to ethical responsibility in addition to altering the structure of the organization. The most powerful need for education appears to be in the area of ethics. Managers need to be educated that safety is the number one priority. When there are serious doubts raised as to safety and human lives are at stake, information cannot be withheld. When Lucas, MSFC Director, was asked if he reported what Reinartz, Manager of the Shuttle Projects Office at Marshall, told him [concerning problems with the O-ring] to Moore, Associate Administrator, Office of Space Flight, NASA, or to Aldrich (even though he talked with both of them), he says he did not because he did not “report” to Aldrich. He was responsible to Moore, whom he could have told, but he chose not to tell. Thus, even having a chain of command in place is not sufficient if members of the chain choose not to relay information up the chain.34 Mulloy did not think that certain information had to be passed up, so he refrained from passing the information along to Aldrich.35 Information regarding the O-ring problem was not conveyed by Mulloy in the Flight Readiness Review to Level I.36 According to the Presidential Report, there were reporting requirements which were not met: ‘The Problem Reporting and Corrective Action Document (JSC 08126A, paragraph 3.2d) requires project offices to inform Level II of launch constraints. That requirement was not met. Neither Level II nor Level I was informed.’37 It would appear that ethical responsibility would be a major component in the call for educational change. In terms of organizational structure, Reinartz seemed to wear two hats. Under one hat, as part of the Shuttle Projects Office at Marshall, he reported to Director Lucas.38 Under another hat, as Manager of the Shuttle Projects Office, he reported directly to Aldrich, Manager, Space Transportation Systems Program, Johnson Space Center.39 Reinartz reported directly to Aldrich and missed Lucas.
In the active phase of the final decision making process Reinartz is the one who unilaterally decided not to pass information (concerning the O-ring problem) on to Level II.40 The situation that exists is that there is a member of Marshall not responsible to the Director of Marshall insofar as he wears another hat. What the Presidential Report refers to as ‘a tendency at Marshall to contain information’ also has its
source in a missing link in the organizational chain of command. There seems to be a discrepancy between reporting channels and the organizational chart. One suggestion here is to require that reporting channels adhere to the organizational chart. On one hand, it can be said that Reinartz was simply unethical in his unilateral decision not to pass information to Lucas or to Aldrich. On the other hand, the organizational structure allows him this license because under one aspect of this structure he is not responsible to Lucas (according to Lucas’ testimony).41 This would still not account for his not passing the information on to Aldrich, but by bypassing Lucas, another possible channel for reporting information has been omitted. It is an organizational weakness for the Director of a program to have one member under him who is not responsible to him. Any other valuable information which he should know about may bypass him. If Reinartz, qua Manager of the Shuttle Projects Office, chooses not to relay information to Aldrich, he can bypass the Director of MSFC, Lucas, who would be responsible to Moore (not Aldrich). This creates a gap in the information chain. Lucas is in dialogue with Moore and thus misses Aldrich. Reinartz is in dialogue with Aldrich and hence misses both Lucas and Moore. Theoretically, Aldrich could pass information on to Moore, but the correct channel (the Director of Marshall to the Associate Administrator, Moore) lacks an important input from Reinartz. It appears that there need to be changes both in the structure of the organization, to remove any gaps in the chain of command, and in terms of inculcating a greater sense of ethical responsibility. From the standpoint of ethical responsibility, no one individual should possess the right to edit out input. This individualism on Mulloy’s and Reinartz’s parts is dangerous. In addition, a safety officer with full veto power over the launch should sit on the final decision-making committee.
According to a direct interview conducted by the author with a scientist at NASA, there is now a safety committee. One member of the final decision-making committee to launch sits on the safety committee. But this is not enough. What is needed is that one member of the safety committee sit on the final decision-making committee and possess a veto vote. Otherwise, any veto from the safety committee can be overridden by the final decision-making
committee to launch. While it is better to have a safety committee than not, it needs to be given some teeth.
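The ‘teeth’ called for above can be expressed directly: a safety representative sits on the final committee and holds a veto that the launch decision cannot override. This is a hedged sketch of the proposed arrangement only; it does not describe any actual NASA procedure, and the function and parameter names are hypothetical.

```python
# A sketch of a safety veto "with teeth": the safety representative's
# veto is absolute and cannot be overridden by the launch committee.
# All names are hypothetical.

def final_launch_decision(committee_approves: bool,
                          safety_rep_vetoes: bool) -> bool:
    if safety_rep_vetoes:
        return False  # the veto is decisive, whatever the committee says
    return committee_approves

# No degree of committee approval overrides the safety veto:
assert final_launch_decision(True, safety_rep_vetoes=True) is False
assert final_launch_decision(True, safety_rep_vetoes=False) is True
```

Contrast this with the arrangement the text criticizes, in which a safety-committee veto is merely advisory input that the launch committee may set aside.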
THE LANGUAGE OF COMMUNICATION

First of all, it is worthwhile to examine the role of language in the request/demand formulation of proofs of safety or lack of safety. Secondly, it is of note to examine carefully the role that language plays in the formulations of launch recommendation or non-recommendation statements. The language formulations in use reflect the unwillingness of most personnel involved to take responsibility for judgments. For example, consider Mr. Boisjoly’s testimony that it was ‘... Up to us to prove beyond a shadow of a doubt that it was not safe to launch.’42 While this has been discussed under the category of the attribution of the burden of proof (above), it falls under the category of linguistic formulation as well. It is impossible to prove that something is 100% unsafe. The reason is that it is always possible, however unlikely, that something may function without an incident even though it is flawed. For example, one may be able to drive a car without an accident even though one’s brakes are malfunctioning, because one may be able to drive a few blocks without having to use them. Or, one’s tie rods may be weak, but one’s driving does not subject them to sufficient stress to cause them to break down. There is never any way to prove that anything is 100% unsafe, and therefore this requirement does not in any way guarantee or even offer a reasonable safety guarantee. For example, if someone brings a vacuum cleaner with frayed wiring into the house of the present author to sell it and argues that it cannot be proved beyond a shadow of a doubt that it is unsafe, one would agree with him. It may in fact function without a problem for some time. But the present author still would not use it. It makes more sense to ask for a guarantee of safety than to ask for a proof of a lack of safety.
While one can never guarantee that anything is 100% safe, it is reasonable to require safety checks and inspections to offer some assurance that anything carries with it a reasonably high safety margin. For example, some kinds of safety standards must be used
and enforced in terms of deciding that commercial passenger planes are in a reasonably safe condition for flight. Suppose someone were to say that a commercial airliner need not meet any safety standards before take-off, an engineer complained, and the reply was: you prove that it is unsafe, and then and only then will we not take off. Obviously, this sounds suspect. The burden of establishing some reasonable safety standard rests on the airline before it exposes passengers to the risk of their lives. It is not therefore unreasonable that some kind of like standards could have been applied in the case of the Challenger. If it is the case that decision makers report that they were asked to prove that the Challenger was 100% unsafe to fly, then it should be part of the written criteria of decision making that the proper requirement to satisfy is that there be considerable (not 100%) evidence that the Challenger (and like flights) was highly safe to fly, no different from the way any commercial airliner would be required to meet rigorous safety standards before it was permitted to take off. No one would ask the ground crews to prove that the flight was 100% unsafe to fly and, when the crews could not prove that, then decide that it was safe to fly. A ground safety crew member might not feel very satisfied with the condition of an engine, but if he were asked if it were absolutely unsafe, he might have to say no. However, if he were asked if it were extremely safe (the proper question), he could then answer no. Thus, the phraseology of the question and the form to which the answer is made to conform are of absolutely crucial importance. Launch recommendation statements are particularly valuable to examine. Take, for example, the 8:45 P.M.
statement of Kilminster that, ‘He could not recommend launch’.43 Instead of saying, for example, it would be unsafe to launch (objective judgment), or, this shuttle is unsafe (forcing someone else to take sides against him), he is more interested in covering himself. While he could not recommend the launch, perhaps someone else could. He does not exactly go on record against launching, just against recommending launching. He might be neutral with respect to launching but just have no positive feeling for it - but he is not against it either. This could be likened to an academic situation in which a candidate is up for promotion and when the Chairperson is asked for his input, the Chairperson states that she or he cannot recommend the candidate.
This leaves it up to someone else to put the candidate forward, as the Chairperson has not vigorously opposed the candidate, but simply withheld support. Notice, the Chairperson has not said that she or he does not recommend the candidate, but that she or he cannot recommend the candidate. If the Chairperson does not recommend the candidate, then it appears as if the Chairperson has at least made a somewhat more definite statement. If the Chairperson merely cannot recommend the candidate, it might appear as if the Chairperson would like to recommend the candidate, but for some other reasons (the opinion of others, perhaps), she or he simply cannot. Whereas, if the Chairperson does not recommend the candidate, then the Chairperson has committed herself or himself to a greater extent. There is not a large difference here, but there is a subtle one. The choice of cannot implies that there is some reason behind the Chairperson’s not being able to do something that is not quite her or his responsibility. Something prevents the Chairperson from taking a stand. The choice of do not would entail that the Chairperson assumes full responsibility for not doing something. It is not something else that prevents the Chairperson from taking action. The Chairperson chooses not to take action. ‘Do not’ is avoided as a linguistic formulation because it is a univocal ascription of responsibility to the Chairperson for her or his actions. ‘Cannot’ to some extent lets the Chairperson off the hook. The Chairperson cannot do something even though she or he really would have liked to. The launch decision maker cannot recommend launching even though she or he really would prefer to launch. ‘I cannot recommend launch’ is logically compatible with ‘I cannot recommend not launching’. All of this is perfectly compatible both with logic and with the ordinary use of the English language. ‘I cannot recommend’ is equivocal. It can be taken to mean that the person is against something.
But logically it does not really mean this. Logically, it is equivalent to the person not taking any position on the subject at all. ‘I recommend not launching’ is clear and unequivocal. This formulation cannot be misconstrued. The person asserts a preference for not launching. The other formulation is ambiguous. It can mean that the person is unwilling to state a preference: ‘I cannot recommend’ means that the person is not in a position to take a position. The ‘cannot’ modifies the ‘recommend’. It is not possible for
the person to recommend. It may be equally not possible for the person not to recommend. Thus the person does not really commit herself or himself. With ‘I recommend not launching’, the person is firmly committed to a position. The issue here is not simply one of the use of language. It is an issue of ethics. It could be argued that the form, ‘I cannot recommend launching ...’ arises simply out of conservatism, but one must distinguish carefully between a desire to be conservative and a desire not to commit oneself and to hide behind the ambiguity of language. If one’s professional judgment is being called upon and no one else is in a position to make such a judgment, then such an individual is ethically obliged to state her or his conclusion in an unambiguous form. If it requires the services of a philosopher to point out the ambiguities and the ethical hiding places of certain language forms, then professional philosophers with an interest in language and ethics and who possess some understanding of management should be included in such discussions. In such a case, a statement form such as, ‘In my carefully considered opinion, I recommend against launching’ is to be preferred to a statement form such as, ‘I cannot recommend launching’, and by adding the ‘carefully considered’ one may also be said to be satisfying the demands of conservatism without hiding behind it.
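The logical point at issue can be stated schematically. In the following sketch, the notation is introduced purely for illustration and is not part of the Commission’s record: let R(p) abbreviate ‘the speaker recommends p’ and let L stand for the proposition that the shuttle is launched.

```latex
\begin{align*}
\text{`I cannot recommend launching'} &:\quad \neg \Diamond\, R(L) \\
\text{`I cannot recommend not launching'} &:\quad \neg \Diamond\, R(\neg L) \\
\text{`I recommend not launching'} &:\quad R(\neg L)
\end{align*}
% The first two formulas are jointly satisfiable: a speaker may be
% unable to recommend either course of action, and so commits to
% nothing about L itself. Only the third formula asserts a definite
% position on L.
```

On this reading, Kilminster’s formulation is compatible with complete neutrality about the launch, whereas ‘I recommend not launching’ is not.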
RESPONSIBILITY: BOTTOM UP DORMANT STAGE

According to the Report of the Presidential Commission, the Challenger disaster would not have occurred if a better system of management had been in place: ‘The Commission concluded that ... a well structured and managed system emphasizing safety would have flagged the rising doubts about the Solid Rocket Booster joint seal. Had these matters been clearly stated and emphasized in the flight readiness process ... it seems likely that the launch of 51-L [the space shuttle Challenger] might not have occurred when it did.’44 While this would have helped to a great extent, it does not quite tell the whole story. The problem is that no matter how well structured or
well managed the system is, there is no way to prevent acts of criminal negligence or irresponsibility in not passing information up the chain of command or in not paying sufficient heed to information that is passed up. It is evident from the memoranda that are included at the back of the Report of the Presidential Commission that problems were already flagged but that individuals either chose not to pass on certain information or chose not to act on such (well flagged) information received. Memoranda marked URGENT and in at least one case even making use of the red flag language were ignored or discounted.45 As early as 31 July 1985, R. M. Boisjoly, engineer at Morton Thiokol, wrote a memorandum to R. K. Lund, Vice-President, Engineering at Morton Thiokol, regarding the O-ring problem with a comment that, ‘The result would be a catastrophe of the highest order - loss of human life’. This memorandum was concurred in and signed by J. R. Kapp, the Manager of Applied Mechanics at Morton Thiokol.46 On October 1, 1985, R. V. Ebeling, Manager, SRM Ignition System, Final Assembly, Special Projects and Ground Test at Morton Thiokol, wrote a memorandum to A. J. McDonald, Director of the Solid Rocket Motor Project at Morton Thiokol, in which his language with respect to the O-ring problem was, ‘This is a red flag’. This memorandum was c.c.’d to 22 others including J. Kilminster.47 Mulloy, the Solid Rocket Booster Project Manager at Marshall, and Kilminster, Vice President, Space Booster Programs at Morton Thiokol, were very definitely aware of the O-ring problem. As far back as January 31, 1985, Mulloy even wrote a memorandum about it marked ‘certified urgent’ which revealed his own anxiety about it.
This memorandum was stamped Received with Kilminster’s stamp.48 Despite this knowledge, he (Mulloy) waived launch constraints.49 He possessed full information and did not convey the launch constraint action or six waivers to Moore (Level 1), Aldrich (Level 2) or Thomas.50 One cannot say that, due to the overwhelming quantity of memoranda, it was impossible to keep track of or to recall every memorandum, as he even wrote his own urgent memorandum regarding the problem. In Kilminster’s Certificate of Flight Readiness of January 9, 1986, there is no mention of O-ring problems in the Solid Rocket Booster joint.51 There is also no mention made of O-
ring problems in the certification endorsement signed on January 15, 1986 by both Kilminster and Mulloy.52 R. Thompson, Supervisor, Structures Design at Morton Thiokol, was also very much aware of the O-ring problem. As far back as 22 August 1985, he wrote a memorandum expressing his awareness that the problem was acute.53 On September 5, 1985, J. E. Kingsbury, Director of Science and Engineering, wrote a memorandum to Mulloy on the subject of O-ring Joint Seals expressing his being ‘most anxious’ about O-ring seals.54 It is not a case of not using a red flag system. While the system can no doubt be improved, this does not represent a total solution. Part of the problem lies with the ethics of the personnel and with the available channels of information. One can alter the system as much as possible, but if someone does not pass on key information, there is not very much that can be gained by altering the system. One can reduce the likelihood of one individual possessing information without others becoming aware of it by introducing horizontal communication and also by introducing the ringi system, so that someone, even on a low level, with a strong safety concern can block action. Horizontal communication would be a method for enhancing the possibility that information might be passed on early, before decisions reached a critical stage.
TOP DOWN RESPONSIBILITY DORMANT STAGE

As is discussed in Chapter Three, the buck stops everywhere. Communication is top down as well as bottom up. Senior management possesses a responsibility for asking for information, not simply passively awaiting information to reach them. When the Commission discusses the need for better propaganda (Recommendation V; education would be a better term here), the notion of top down responsibility must be included. The problem of responsibility does not lie exclusively with the structure of a system but with the sense
of responsibility that is present within a structure. Programme Directors at Levels I and II as well as the Launch Director for the 51-L mission claim to have been unaware of critical information regarding O-ring damage in earlier flights. They claim not to have heard of the argument between Thiokol engineers and members of NASA. Had they been informed, they say, their decisions about the launch would have been very different. The Launch Director stated that if they had had the information on the relationship of temperature to the seals, ‘we wouldn’t have launched if it hadn’t been 53 degrees’. The question is, what is responsible for the blockage in the flow of information? One can, as above, see the problem in terms of the failure of those below to channel information to those above. However, one can also inquire why those above do not ask if there are any problems which require attention. The top manager does not need to function as simply a blind receiver: there is a responsibility for the person in the highest management position to keep well informed, and this requires asking for information and not simply waiting for it to reach her or his desk. For example, any manager could have asked whether there were any launch constraints. To assume that because none were reported to them there were none is to ignore the possibility that waivers would not be reported. A senior management team with a strong concern for safety could have asked, and more importantly, should have asked, whether there had been any history of launch constraints. To utilize a prosaic example, if the present author were the owner of a racing car and were observing the start of the race from his box, would it not occur to the present author to turn to his manager (who is sitting next to him in the box) and ask, is this track (which is covered with ice at the time) under any constraint, or was it in the past? Or, will this icy weather affect the race in any way?
This is the element of active inquiry which should be part of any concerned senior management’s behavior. One can extend this analogy just a bit further when one considers the long history of the O-ring problem. If, once again, the present author were to own a racing car, would he know nothing about one of its most long-standing, tricky engine problems? Even if the present author’s knowledge were imperfect, would he not have heard that the overhead cam sometimes acts up when it gets cold? If the present author were to own a race horse, would he not know that this horse
has a propensity to jump at the starter’s gate when the weather turns cold? If the present author co-owned the horse with three others, would it not occur to one of the interested co-owners to raise the question to the trainer, did you cure that starting problem? The “Know-Nothings”, Smith, Thomas, Aldrich and Moore all claim to know nothing.55 But they could have asked. Aldrich points to the failure of communication. But he does not own any part of the failure. He thus abrogates his responsibility for not knowing. Aldrich says that there is something he ought to have known about. But what he does not say is that he also could have asked.56 Thomas said that he didn’t know about the Thiokol objections.57 But he could also have asked if there had been any objections. Contrast the notion of assumed responsibility of the top four management people at NASA with the assumed responsibilities of Churchill (Chapter One). While the Presidential Commission does seem to have something to say about bottom-up responsibility (‘Problem reporting requirements are not concise and fail to get critical information to the proper levels of management’),58 the Commission does not speak to the question of top management’s responsibility for asking questions.
THE WILL TO COMMUNICATE

What was operative at NASA prior to the Challenger disaster was what might be termed a will not to communicate. In order to meet the schedule of 10 routine lifts per year, section heads were told to cut corners; to go ahead and get it in the air.59 Such a concentration on scheduling, what might be the scientific equivalent of the tyranny of the bottom line, probably, more than anything else, was responsible for the disaster. The presence of the will to communicate, informed by a sense of ethical responsibility, might go far towards the prevention of another Challenger disaster. The question remains, as far as what the Commission terms “propaganda” is concerned, will personnel be urged to practise the will to communicate? Now, the channels of communication are more controlled. For example, any memorandum regarding safety matters marked ‘Urgent’ must be turned over to the newly formed safety committee, but the question remains, without
the will to communicate being present, will memoranda be passed on? And can the safety committee be given more teeth to see to it that its requirements are met? Can someone from the safety committee sit with veto power on the final decision making committee? The will to communicate must be strengthened by making the possibility of communication more effective. The will to communicate can be undermined if there are breaks in the command chain. Even with the best will to communicate, if the organizational structure allows one individual to skip one level in the hierarchy and report to an individual above his superior, and his superior to report to yet another individual, this cuts off the flow of information. It also creates the possibility of abuse by making it possible to withhold information from one’s superior, who by virtue of being one’s superior has been entrusted with the responsibility for deciding if that information needs to be passed along higher up. The will to communicate will be strengthened if there are clearly spelled out decision making procedures. In the absence of a clear-cut decision making procedure, personalities may dominate a caucus. Final authority for making a decision should follow the chain of command that is present so that there can be no avoidance of responsibility. Management education should include the concept of top down responsibility in addition to bottom up responsibility. Senior management also needs to be educated to make active inquiries into the history of launch constraints and questions regarding the safety of any launch in question. Top management cannot simply claim that ‘they did not know’. They have a responsibility for finding out. The will to communicate can be seriously hampered if language and argument forms are used to obfuscate opinion. Would it be possible to take greater care with the use of such words as ‘accident’ and ‘secondary cause’?
Will there be explanations of where the burden of proof in an argument should lie? Will there be explanations of how certain linguistic formulations are unambiguous and others are cloaks behind which one may shirk one’s responsibility for making a judgment? There may be the best will to communicate in the world, but if the tools of communication are counter-productive, then communication cannot take place.
THE SPACE SHUTTLE CHALLENGER DISASTER
How can the idea of “propaganda”/education be expanded? Would it be possible to borrow from the Japanese system the idea that anyone in the information chain can block a decision if that person feels strongly that a key safety consideration is being ignored? If the will to communicate can be given effective channels, many problems can be forestalled. When safety becomes a paramount consideration for everyone, when the buck truly stops everywhere, with such a united sense of responsibility, a disaster such as the Challenger disaster has an excellent chance of being prevented.
Notes

1. For the continuation of the myth of the instant death via explosion inside another book on myth, consider Richard Howells, The Myth of the Titanic, Houndmills, Basingstoke, Hampshire and London: Macmillan Press, 1999, p. 153. The death by explosion myth is perpetuated in Kenneth K. Humphreys, What Every Engineer Should Know About Ethics, New York, Basel: Marcel Dekker, 2001, pp. 135, 137. For the excellent quotation from Roger Boisjoly, the present author is indebted to Patricia Werhane, Moral Imagination & Management Decision-Making, New York, Oxford: Oxford University Press, 1999, p. 68. The original source from which the quotation is taken is Mark Maier, ‘Challenger: The Path to Disaster,’ Case Research Journal, 4:1-155. For earlier descriptions utilizing the term ‘accident’, note in particular the mode of reference to the Challenger in the wording of the title, Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, and in the titles of chapter III, The Accident, and chapter IV, The Cause of the Accident, and throughout the entire report. Cf., Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, Vol. I, Washington, D.C.: GPO, 1986. It should be pointed out that the use of the term ‘accident’ is not peculiar to this particular Presidential Commission. Consider the title of the Presidential Commission report on the disaster at Three Mile Island: Report of the Emergency Preparedness and Response Task Force, President’s Commission on the Accident at Three Mile Island, Washington, D.C.: GPO, 1979. The characterization of the Challenger disaster as an accident is perpetuated in literature which follows upon the choice of the Presidential Commission’s description, e.g., Investigation of the Challenger Accident, Report of the Committee on Science and Technology, House of Representatives, cited below, n. 49.
On page one of this report, the word ‘accident’ appears six times and the word ‘tragedy’ once, including the combined phrase ‘tragic accident’ in the first sentence of that report. How widespread the use of the term ‘accident’ has been to characterize the Challenger disaster is perhaps most vividly illustrated by the late Professor Richard Feynman, perhaps one of the farthest thinking and most liberal members of the Rogers Commission, who, in part two of his own book, What Do YOU Care What Other People Think?, refers to the Challenger disaster as both an accident and a tragedy (a double whammy, as is
pointed out below) and uses the word ‘accident’ four times in the first two pages of his chapter fortunately titled, ‘Mr. Feynman Goes to Washington: Investigating the Space Shuttle Challenger Disaster’. Cf., Richard P. Feynman, What Do YOU Care What Other People Think?, London: Unwin Hyman Limited, 1989, pp. 116-117. Even Shrivastava, who takes pains to say that industrial crises are not accidental, uses the term frequently in his own book to refer to Bhopal, presumably in accordance with his definition of accident as an ‘unfortunate event resulting from carelessness, unawareness, ignorance, or a combination of the three’. (p. 142) But since the term ‘accident’ also carries with it the meaning of that which was unforeseen, it is an unfortunate term to employ. Cf., Paul Shrivastava, Bhopal, Anatomy of a Crisis, Cambridge: Ballinger Publishing Company, 1987, pp. xvi, 3, 42, 49, 53, 142 et passim. The usage of ‘accident’ as a routine description for the Challenger disaster continues unabated. Cf., Charles B. Fleddermann, Engineering Ethics, Upper Saddle River, NJ: Prentice Hall, 2001, p. 8.
2. Ibid., p. 40.
3. Ibid., p. 72.
4. Ibid., p. 104. Voluntary agents chose to use the O-ring and to continue to use the O-ring. In Thierry C. Pauchant and Ian I. Mitroff’s view, ‘ ... it is absurd to state that ... a defect in an O ring “caused” the Challenger disaster – even though managers at ... NASA used these simplistic explanations to escape their responsibilities.’ Cf., Transforming the Crisis-Prone Organization, Preventing Individual, Organizational and Environmental Tragedies, San Francisco: Jossey-Bass Publishers, 1992, p. 22. Note the unfortunate use of language in the last word of the title. Sometimes in the course of this work, the word ‘disaster’ is used to describe the Challenger flight. However, sometimes it is referred to as an ‘accident’ and sometimes as a ‘tragedy’. (pp. 16-17)
5. Ibid., p. 40.
6. Ibid., p. 72.
7. Ibid., p. 104.
8. Stuart Slatter, Corporate Recovery, A Guide to Turnaround Management, Harmondsworth: Penguin Books, 1986, p. 61. That a high level of emotional stress interferes with judgments and gives rise to ill-considered actions is supported by a considerable body of psychological evidence. Cf., Irving L. Janis, CRUCIAL DECISIONS, Leadership in Policymaking and Crisis Management, New York: The Free Press, 1989, p. 77.
9. Op. cit., p. 61. The notions of strange times of night and irregular to unethical steps in the decision making process have been added to Slatter’s list of characteristics of a crisis. The definition of a crisis atmosphere given above is a fuller one than Slatter’s, although some of the elements he notes have been incorporated.
10. Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, p. 93.
11. Ibid., p. 96.
12. Ibid., p. 94.
13. Ibid., p. 165.
14. Ibid., p. 104. While the Rogers Commission leaves the major customer unidentified, Malcolm McConnell, in his Challenger, A Major Malfunction, London: Simon & Schuster, 1987, states that the major customer was ‘... in this case, NASA’. (p. 200) Cf., n. 30 below. The late Professor Feynman, member of the Rogers Commission, repeatedly asserts in his own book, What Do YOU Care What Other People Think?, London: Unwin Hyman Ltd., 1989, that Rogers in particular seemed to be overly protective of NASA. Cf., pp. 129-132, 139, et passim. Richard Cook, a NASA budget analyst, has taken the Rogers Commission to task in even stronger terms. Cf., ‘THE ROGERS COMMISSION FAILED, Questions it never asked, answers it didn’t listen to’, THE WASHINGTON MONTHLY, November 1986. One point he makes at the beginning of his article is that, ‘The commission’s final report absolves high NASA officials of any direct responsibility for the accident’. He correlates this with the fact that acting administrator William Graham nominated the commission members and seven of its thirteen members had direct ties to NASA. Cf., pp. 13-14. (This is tantamount to entrusting the welfare of the chickens to the fox.) One can contrast the findings of the Rogers Commission report with the far more personnel-specific recriminations of the House Report, which designates which high-level personnel are at fault from this perspective: ‘The Director of Marshall’s Shuttle Projects Office may have violated NASA’s Flight Readiness Review policy directive by failing to report the results of the January 27th teleconference to the Associate Administrator of Space Flight ... The Launch Director failed to place safety paramount in evaluating the launch readiness of STS 51-L’. Cf., House Report, p. 11.
15. Op. cit., p. 176. The House Committee Report from the Committee on Science and Technology takes a different view from that of this portion of the Rogers Report when it states in its conclusions that, ‘Pressures within NASA ... caused a realignment of priorities in the direction of productivity at the cost of safety’. (House Report, p. 3) And again: ‘The pressure on NASA to achieve planned flight rates was so pervasive that it undoubtedly adversely affected attitude regarding safety’. (House Report, p. 22) That this attitude continues is also part of the conclusions of the House Committee on Science and Technology: ‘The Committee believes that the pressure to push for an unrealistic number of flights continues to exist in some sectors of NASA and jeopardizes the promotion of a ‘safety first’ attitude throughout the Shuttle program’. (House Report, p. 3)
16. Cf., REPORT AT A GLANCE (Washington, D.C.: GPO, 1986), Chapter Eight, ‘Pressures on the System’.
17. Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, p. 17. There has been considerable discussion since as to the choice of that fateful day, ranging from attempting to make it possible that then Vice-President George (no W.) Bush could be present (why a previous cold day was skipped over) and scooping the Soviets on coverage of Halley’s Comet, to a desire to make the launch coincide with Reagan’s State of the Union Message, in which a direct reference was to be made to the Teacher-in-Space at that moment. Cf., Malcolm McConnell, CHALLENGER, A Major Malfunction,
London: Simon & Schuster, 1987, pp. 22-23, pp. 131-136, and Richard Cook, ‘THE ROGERS COMMISSION FAILED, Questions it never asked, answers it didn’t listen to’, THE WASHINGTON MONTHLY, November 1986, pp. 20-21. Both sources quote from a prepared speech to be given by Reagan in support of this. While the issue of political pressure will be treated more fully in the subsequent chapter, the point here remains that whatever the pressure, it was not a forced choice in which the launch had to be made on that day or an equivalent risk to loss of life would also be taken.
18. Ibid., p. 71.
19. Ibid., p. 19.
20. One of the recommendations of the Commission was to appoint an astronaut with recommendation powers but not with full veto powers over a launch. Cf., REPORT AT A GLANCE, Recommendations, V.
21. Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, pp. 104-111. Cf., Roger M. Boisjoly for a good, overall summary account in ‘The Challenger Disaster: Moral Responsibility and the Working Engineer,’ in Deborah G. Johnson, Ethical Issues in Engineering, Englewood Cliffs, New Jersey: Prentice Hall, 1991, pp. 6-14.
22. Trudy E. Bell and Karl Esch, ‘The Fatal Flaw in Flight 51-L’, IEEE Spectrum, February 1987, pp. 45-6.
23. Op. cit., p. 93.
24. Cf., Ibid., p. 148.
25. Cf., Ibid., p. 93. Charles E. Harris, Jr., Michael S. Pritchard and Michael J. Rabins argue that management and engineering concerns should be divided up, with safety belonging to the engineers’ concern. While they do insist that safety concerns can never be overridden, it is ethically questionable that there can be a proper management concern that does not include safety. Cf., Engineering Ethics, Concepts and Cases, Belmont: Wadsworth Publishing Company, 1996, pp. 278-279. In these authors’ discussion of the infamous Mason to Lund shuffle, they state that ‘ ... Mason may be correct that there was some difference of opinion among those most qualified to render judgment, even if this information was not confirmed until after the event. If engineers disagreed over the technical issues, the engineering considerations were perhaps not as compelling as they would have been if the engineers had been unanimous.’ Ibid., p. 284. But the engineers originally had been unanimous! In any case, the issue here rests on the phrase ‘may be correct’, and the only evidence provided for this is Mason’s account that he had conversations with individual engineers (who, one may well ask?) after the teleconference. At the teleconference Mason thought he was the only one who supported the launch. Such issues, however, are obviated if one has clear-cut criteria for decision making, such as a rule that it requires only one dissenting vote on the part of an engineer to block a decision to launch.
26. Cf., Trudy E. Bell and Karl Esch, ‘The Fatal Flaw in Flight 51-L’, IEEE Spectrum, February 1987, pp. 99-100, p. 109.
27. Ibid., pp. 96-97. What is not so clear in the Rogers Report is that regardless of who made the recommendation to launch, Kilminster was not ultimately responsible for the launch; that person would have been Jesse W. Moore, Associate Administrator for space flight, the Level 1 manager at NASA headquarters in Washington, D.C., with whom final approval for a launch rested. Cf., Trudy Bell and Karl Esch, ‘The Fatal Flaw in Flight 51-L’, in IEEE Spectrum, February 1987, p. 38. According to the House Report, ‘NASA managers delegated the responsibility for making technical judgments to lower level managers or assistants’. Cf., House Report, p. 30. It would appear that this delegation habit pervaded the higher levels of management as well, where senior management delegated decision making to middle management. (IEEE stands for the Institute of Electrical and Electronics Engineers.)
28. Rogers Report, p. 92.
29. Ibid., p. 99. It is a difficult matter indeed to work out who is responsible for launch decisions. Part of the problem is the Byzantine decision making structure. The distinguished ethicist Patricia Werhane considers that it is the middle manager Mulloy who made the decision to launch. Patricia Werhane writes, ‘Mulloy, a middle manager at NASA and the NASA person closest to the O-ring situation, made the final decision to launch without even communicating to his superiors Thiokol’s objections to launch.’ Cf., Patricia H. Werhane, Moral Imagination & Management Decision-Making, New York: Oxford University Press, 1999, p. 61. Immediately one begins to wonder, why are middle managers given this responsibility in the first place? Why would such a major decision as a launch decision not be the province of senior management? Does this not indicate both a lack of management sense and management ethical imagination? When the present author posed this question to Boisjoly, his answer in private correspondence on May 5, 2002 was the following.
‘The decision to launch does not vest in one person but rather a series of levels from one to four. Level four is the lowest level between the contractors and NASA and if everything is resolved at Level 4 Flight Readiness Review meetings, the launch is a GO unless something at the other three higher levels surfaces to change to a no-launch decision. Level 1 is the NASA Administrator level at the end of the decision chain. So with that in mind, Mulloy and Stanley Reinartz who were on the telecon are the two responsible because they did not notify JSC in Houston about the teleconference or the discussions. They say AFTER THE FACT that they were not required to do so because the issue was resolved by them but common sense should prevail and get everyone in the chain of command involved. Again, their actions can be traced to a fear of Dr. Lucas who could and probably would destroy their careers on such an issue’. The point of the present author is that if a proper chain of command structure had been in effect, Mulloy and Reinartz would not have had the power to make that decision on their own. A middle manager was given the responsibility for making a decision which, in the opinion of the present author, should only be vested in the most senior level of management. In this sense, the senior management was ultimately responsible for the launch decision because, in the proper management sense of responsibility, they were or should have been de jure responsible. In addition, the
will to communicate stressed above in Chapter Three of the present volume was not in effect.
30. Ibid., p. 100. According to Boisjoly, who was present at both the monitored and the unmonitored caucus, ‘The [unmonitored] caucus constituted the unethical decision-making forum resulting from intense customer intimidation’. Cf., http://onlineethics.org/essays/shuttle/telecon.html. Who was the customer? Charles E. Harris, Jr., Michael S. Pritchard and Michael J. Rabins write on the first page of their textbook: ‘Jerald Mason, Senior Vice-President at Morton Thiokol, knew that NASA badly needed a successful flight. He also knew that Morton Thiokol needed a new contract with NASA, and a recommendation against launch was probably not perceived as enhancing the prospects of obtaining that contract’. Cf., Charles E. Harris, Jr., Michael S. Pritchard and Michael J. Rabins, Engineering Ethics, Concepts and Cases, Belmont: Wadsworth Publishing Company, 1996, p. 1. In this case, one cannot perceive Mason, who is the most senior manager present among the decision making foursome, as a disinterested party. Is it a good management policy, then, to place a major customer in the decision making role in the management structure? While the Mason four are typically referred to as middle managers in the Challenger literature, all four members of the final decision making unit are senior managers of Morton Thiokol. (Is this carrying the notion of customer first too far?) While it could well be that there was more than one major customer and perhaps both NASA and Morton Thiokol could have been, at different times, the mysterious, unnamed major customer to which the Presidential Commission was referring (n. 14 above), one ponders both the ethics and the management policy of allowing the senior management of a major customer/contractor to make operational management decisions for the contracting agent.
(At times it works the other way around as well: from Morton Thiokol’s point of view, NASA is a major customer). According to a very interesting analysis by Boisjoly on May 3, 2002, in response to a question put to him by the present author as to how he would assign blame for the decision making in the case of the Challenger launch, ‘Who is to blame for the Challenger launch is a very simple question to answer in my opinion. First, I assign NASA 25% of the overall blame for creating such ridiculous pressure on the MTI managers to launch. I assign 100% of that apportioned blame to Mulloy alone because he knew better and had all the available information before the telecon meeting to stop the launch but he was a “company” man and chose to play the game instead of doing the correct thing. Second I assign 75% of the blame on the four MTI managers who changed the launch decision from No-Launch to Launch. Now those four managers must be sorted out for proportional blame. The Senior V.P., Mason is assigned 25% of the blame because he created the pressure on the V.P. Engineering, Lund to launch. Lund must be assigned 75% of the blame because he was the last ENGINEERING VOICE capable of stopping the launch but he caved in to the pressure from the senior V.P. (Mason). The other two MTI managers were simple pawns without authority or desire to do anything so as far as I am concerned they acted like the puppets they were. More senior managers at NASA were not told because the mentality at the MSFC facility, run by Dr. Lucas,
THE SPACE SHUTTLE CHALLENGER DISASTER
149
Center Director, was that MSFC would NEVER be responsible for delaying or canceling a launch. I know this sounds unbelievable but it is true. He was a tyrant’. Boisjoly’s analysis is of special interest since he places the brunt of the blame on Lund, who had an engineering voice capable of stopping the launch, and on Mulloy, who had the management power to stop it, but neither of whom did. In the approach taken in the present volume, the emphasis is not so much on fault finding but rather on what changes one must make both in ethical consciousness and in management structure so as to prevent a similar disaster from occurring. How can one learn from past disasters how to prevent them in the future? In Chapter Three of the present volume the concept of the will to communicate is argued to contain the concept of the will to receive information. It is clear that in the case of Marshall, there was an absence of the will to receive information. 31. For clear-cut evidence of such general significant disagreement and strong dissent, Cf. Ibid., pp. 92-3. According to the House Committee on Science and Technology, the dissent, especially that of Boisjoly, was significant enough to have been decisive: ‘The question remains: Should the engineering concerns, as expressed in the pre-launch conference, have been sufficient to stop the launch? The Committee concludes the answer is yes. Thiokol’s recognized expert on SRM seals had evidence he believed to be conclusive and sufficient. His opinion, in the absence of evidence to the contrary, should have been accepted until such time as better information became available’. Cf. House Report, p. 7. 32. Op. cit., p. 104. Curiously, the House Committee on Science and Technology did not take the view that communication blockages contributed to the problem: ‘No evidence was found to support a conclusion that the system inhibited communication or that it was difficult to surface problems’. Cf. House Report, p. 29. 
Later in that same report, however, the House Committee quotes Aldrich in his testimony to the Rogers Commission: ‘The second breakdown in communications ... was ... that that [the joint seal problem] was not brought through my office in either direction ... was not worked through the NASA Headquarters ... nor when the Marshall Organizations brought these concerns to be reported were we involved. And I believe that is a critical breakdown in process and I think it is also against the documented reporting channels that the program is supposed to operated to’. (Rogers Commission Report, Volume V, p. 1490), House Report, p. 214. The problem with management not communicating is not limited to Marshall. Boisjoly reports withholding of key information on the part of Morton Thiokol as well. For example, in private correspondence between Boisjoly and the present author on April 2, 2002, Boisjoly pointed out that ‘... the temperature data from the Presidential Commission Report ... was not available to me or my colleague engineers and managers prior to the Challenger launch. The temperature data existed within the bowels of Morton Thiokol but I did not have the power to force those that had it to give it to me. I asked for the data in September 1985 but did not receive any data’. Boisjoly also points out that the data from resiliency testing, indicating that seals would lift off their sealing surfaces for several seconds at 75 degrees Fahrenheit and in excess of 10 minutes at 50 degrees Fahrenheit, was thought to be too sensitive by Morton Thiokol management to be
released. Cf. Roger M. Boisjoly, ‘The Challenger Disaster: Moral Responsibility and the Working Engineer,’ in Deborah G. Johnson, Ethical Issues in Engineering, Englewood Cliffs: Prentice Hall, 1993, p. 7. One can only imagine the implications: a seal that lifted off its surface for more than ten minutes at 50 degrees Fahrenheit? It would only require a few milliseconds for the superhot combustible gases (which were over 5,000 degrees Fahrenheit) to escape and burst into flame. How could any ethical agent in possession of this fact authorize the launching of a manned flight at a low temperature? At the very least it must be said, in the distinguished ethicist Patricia Werhane’s marvelous phrase, they were woefully lacking in ‘moral imagination’. It seems that they must have been lacking in ordinary imagination as well. 33. REPORT AT A GLANCE, V. 34. Cf. Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, p. 104. 35. Ibid., p. 98. Despite this evidence the House Committee on Science and Technology makes the following statement: ‘There is no evidence that middle level managers suppressed information that they themselves deemed to be significant’. Cf. House Report, p. 30. It could conceivably be argued that Mulloy did not consider his information significant, but according to the House Report of the Committee on Science and Technology, as long ago as March 17, 1983 he himself had ‘... informed NASA Level 1 (meaning the Associate Administrator for Space Flight), of the pending change in criticality from 1R to 1, which meant that a single seal failure could result in the loss of the Shuttle and crew. That change was approved on March 28, 1983’. Cf. House Report, p. 53. 36. Op. cit., p. 85. 37. Another structural weakness is that Level 2 was not in line for another path of system reporting. Cf. Ibid., p. 85 and p. 159. 38. Ibid., p. 229. 39. Cf. Ibid., p. 229 and Lucas’ testimony, Ibid., p. 101. 40. Ibid., p. 88. Despite this, Cf. n. 34. 41. 
Ibid., p. 101. 42. Ibid., p. 93. 43. Ibid., p. 107. 44. Ibid., p. 104. 45. Cf. Internal correspondence and memoranda, Appendix D, Ibid., pp. 245-256. Despite this, Cf. n. 34, 39. 46. Cf. Memorandum, Ibid., p. 249. On the basis of this memorandum, Boisjoly finds it incredible that top NASA management denied knowing anything about the problems with the joint seals when questioned by the Presidential Commission. Cf. wysiwyg://partner.12/http://onlineethics.org/essays/shuttle/predis.html. This memorandum had been preceded by an earlier memorandum of Boisjoly’s on July 22, 1985 which warns of a horrifying flight failure before a solution is implemented to prevent O-ring erosion. Cf. Investigation of the Challenger Accident, Report of the Committee on Science and Technology,
House of Representatives, Ninety-Ninth Congress, 2nd Session, Union Calendar No. 600, House Report 99-1016, Washington, D.C., Government Printing Office, 1986, p. 285. This memorandum is curiously blotted out so as to be almost illegible. 47. Op. cit., p. 252; Cf. House Report, p. 287. From the technical side, suggestions have been put forth that there was redundancy, a technical engineering term which carries the meaning that if a primary part malfunctions, a secondary part takes over which can fulfill the functions of the primary part. In this case, the O-ring was designed so that redundancy was supposedly built in. However, the primary engineer, Boisjoly, pointed out that, ‘... if erosion penetrates the primary O-ring seal, there is a higher probability of no secondary seal capability ...’, Cf. House Report, p. 224. 48. Ibid., p. 247. 49. Ibid., p. 138. 50. Ibid., p. 84. The history may even be taken back further. One memorandum written in 1978 by John Q. Miller, Marshall’s Chief of the Solid Rocket Motor branch, to Project Manager George Hardy, referring to the Thiokol field joint design, stated that this design was so hazardous that it could produce ‘... hot leaks and resulting catastrophic failure’. Cf. Malcolm McConnell, Challenger: A Major Malfunction, London: Simon and Schuster, 1987, pp. 118-119. The O-ring design was thus known to management as so hazardous that it could produce a catastrophic failure eight years prior to the Challenger disaster. Mr. Miller continued his critical appraisal of the O-ring design in further memoranda, including one written on February 22, 1984 urging close surveillance of the problem of the heat-damaged O-ring and drawing attention to the fact that the sealant would not adhere to the surface to which it was being applied. This memorandum was sent out to Hardy and Mulloy amongst others. 
Cf. Investigation of the Challenger Accident, Report of the Committee on Science and Technology, House of Representatives, Ninety-Ninth Congress, Second Session, Union Calendar No. 600, House Report 99-1016, Washington, D.C., U.S. Government Printing Office, 1986, pp. 283-284. 51. Op. cit., p. 85. 52. Ibid., p. 85. 53. Ibid., p. 251. 54. Ibid., p. 256. 55. Ibid., p. 103. Serious questions have been asked about whether senior management truly knew nothing. Richard Cook points out that O-ring charring was a major agenda item on all Jesse Moore’s monthly staff reviews during 1985, according to documents released by NASA. Cf. Richard Cook, ‘THE ROGERS COMMISSION FAILED, Questions it never asked, answers it didn’t listen to’, WASHINGTON MONTHLY, November 1986, p. 16. According to the House Report of the Committee on Science and Technology: ‘Information on the flaws in the joint design and the problems encountered in missions prior to 51-L was widely available and had been presented to all levels of Shuttle management’. Cf. House Report, p. 5. Stephen Unger points out that Moore, Aldrich and other
high-level NASA officials paid little attention to the field joints even though they had been formally classified as critical. (Emphasis not in the original) He offers evidence that these high-level NASA officials knew about the problem. Cf. Stephen H. Unger, Controlling Technology: Ethics and the Responsible Engineer, New York: John Wiley & Sons, 1994, p. 100. That the senior administration knew about the O-ring dangers seems to be a fact that is well established. ‘By the summer of 1985 ... the dangers of flying the shuttle with a flawed seal joint were widely known within the organization and brought to the attention of senior officials.’ Cf. Maureen Hogan Casamayou, Bureaucracy in Crisis: Three Mile Island, the Shuttle Challenger, and Risk Assessment, Boulder, San Francisco, Oxford: Westview Press, 1993, p. 21. Lawrence Mulloy, the SRB project manager at Marshall, informed Level 1 (the Associate Administrator for Space Flight) during an FRR briefing that the secondary seal was unreliable and the joint was officially no longer redundant (March 17, 1983), p. 26. A comprehensive account of how detailed the knowledge of senior officials was on the O-rings appears on pp. 28-29. In spite of the fact that Claus Jensen lists Casamayou’s fine work in the bibliography of his own well-wrought work, he nonetheless states that by the end of 1985, NASA senior administrators were unaware that there was a problem with the solid-fuel rocket booster joints. Cf. Claus Jensen, No Downlink: A Dramatic Narrative about the Challenger Accident and Our Time, New York: Farrar, Straus and Giroux, 1996, p. 206. It is valuable to emphasize the moral responsibility of the high-level officials and not focus attention exclusively on the unethical Mason four who made the recommendation to launch. Unger is one of the very few sources to point out that neither NASA nor Morton Thiokol considered that the astronauts had a right to know they were being sent up in an unsafe vehicle. 
But Unger does not suggest that their right of choice had been abrogated. Unger does refer to codes of engineering ethics which point up the ethical responsibility of engineers to hold public safety, health and welfare paramount in the performance of their duties, including codes of ethics such as that of the IEEE (the Institute of Electrical and Electronics Engineers), which includes disclosure of factors which ‘might endanger the public or environment’, although it does not specify to whom this disclosure is made. (p. 283) Unger does refer to Baum’s notion of informed consent, which includes the duty to provide ‘... complete, accurate, and understandable information to all potentially affected parties ...’ but cancels this out by including the duty to carry out approved assignments ‘... even if the engineers believe that they may (or even will) have harmful consequences.’ (p. 114) While codes of engineering ethics are of course laudable, the problem is that whistle blowers face harsh reprisal. Aviation Week and Space Technology reported that Allan J. McDonald and Roger Boisjoly were transferred to inferior posts. Cf. Claus Jensen, No Downlink: A Dramatic Narrative about the Challenger Accident and Our Time, New York: Farrar, Straus and Giroux, 1996, p. 366. Incredibly, Boisjoly has been chided for not having done more to stop the launch. Boisjoly sought medical help for two years because of guilt that he had not done more to prevent the disaster. Cf. Ibid., pp. 346, 356. Hence, the reliance upon professional ethics outside of a management framework is not a fully
adequate position. Unger does not provide a corporate imperative that includes the passing of information on to risk takers as a fundamental part of a management-led safety first ethic, or management structures that mandate vetoing courses of action that place the lives of others at high risk. Unger does devote a section to outlining internal management procedures, including an open-door policy, memoranda, ombudspersons, and the like, but points out that in practice such measures are ineffective because of retaliation. Management procedures can only be effective in a context in which there is a corporate ethical ethos that is management activated, management led and management supported. 56. Ibid., p. 103. 57. Ibid., p. 103. 58. Ibid., p. 161. As reported to the author in a direct interview with a scientist at NASA. This is also part of the findings of the House Committee on Science and Technology: ‘Meeting flight schedules and cutting cost were given a higher priority than flight safety’. Cf. House Report, p. 9.
CHAPTER 8

POST-CHALLENGER INVESTIGATIONS

One primary source for post-Challenger investigations is the Post-Challenger Evaluation of Space Shuttle Risk Assessment and Management, prepared by the Committee on Shuttle Criticality Review and Hazard Analysis Audit of the Aeronautics and Space Engineering Board, chaired by Alton D. Slay, published by the National Academy Press in 1988, hereinafter referred to as the Slay report. Despite its title, by far the major focus of the Slay report is risk assessment rather than management, which it discusses only in a few key passages. The attention of the present volume is, in the main, directed at management issues. However, the Slay report is important for its use of the word ‘risk’. In the Slay report, the fallacy committed is the conflation of the general unknown risk of space travel with the specifically known (but not to the civilians and crew aboard the Challenger) hazardous design risk of the O-rings.
THE WORD ‘ACCIDENT’1

The notion that the Challenger disaster was an accident is reinforced by the Slay report. On the first page of the preface, when the Challenger is first discussed, the word ‘accident’ appears no less than five times in the first three paragraphs of the discussion, including four times in the second paragraph alone.2 Whatever the ensuing discussion, the subliminal tone of the report is already set at the outset that what happened was accidental, and naturally no one is responsible for accidents.
THE CONFLATION OF GENERAL UNKNOWN RISK OF SPACE TRAVEL WITH THE SPECIFICALLY FOREKNOWN RISK OF THE O-RINGS

Since the title of the report makes use of the word ‘risk’ twice, in terms of both ‘risk assessment’ and ‘risk management’, it would seem that the word ‘risk’ had a clear definition. Chapter four is entitled ‘Risk Assessment and Risk Management: The Committee’s View’. However, it would appear that the word ‘risk’ is anything but clearly defined. In the view of the present author, one must distinguish carefully between known, and therefore calculable, risks and unknown risks, which are incalculable except in a very broad sense. For example, since it is risky to drive an automobile, one must accept the fact that every time one operates a vehicle one is taking a certain risk. This risk, since it is open ended, is incalculable except in a very broad sense and therefore it is willingly accepted. If, on the other hand, someone points out to a driver that the tires on her or his automobile are extremely worn and that the driver is taking both her or his life and that of her or his passengers in her or his hands to drive the vehicle without replacing them, then this driver is taking a known and calculable risk. If the driver still decides to drive this car under such conditions it might be said that the driver is accepting a known risk. If, on the other hand, the driver begins to drive the car not knowing that during the night someone has mounted unsafe tires on the wheels, it cannot be said, when the driver and her or his passengers wind up as traffic fatalities, that the driver knew that driving was risky. To use the term ‘risk’ to cover both the general unknown risk and the ‘could be known but actually unknown specific risk’ is to use language in a misleading way. 
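The distinction drawn above can be put schematically. The following sketch is purely illustrative (the probability figure and the class names are this note's hypothetical constructions, not drawn from the text): a general, open-ended risk is accepted simply by choosing the activity, whereas a specific, calculable risk can ground informed consent only if it has actually been disclosed to the risk taker.

```python
# Illustrative sketch of the known/unknown risk distinction.
# All specifics here (names, the 0.05 figure) are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Risk:
    description: str
    probability: Optional[float]   # None = incalculable except "very broadly"
    disclosed_to_risk_taker: bool

def informed_consent_possible(risk: Risk) -> bool:
    # A general, incalculable risk is accepted by choosing the activity at all;
    # a specific, calculable risk can be consented to only if disclosed.
    if risk.probability is None:
        return True
    return risk.disclosed_to_risk_taker

background = Risk("general hazard of spaceflight", None, True)
o_ring = Risk("joint seal failure at low temperature", 0.05, False)

print(informed_consent_possible(background))  # True
print(informed_consent_possible(o_ring))      # False: known risk, never disclosed
```

On this sketch, the misleading usage the author criticizes amounts to treating the second case as if it were the first.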
In the very first paragraph of chapter four on ‘Risk Assessment and Risk Management’ under the heading ‘General Concept’, 4.1, the Committee compares the risk that the astronauts were taking to the general unknown risk and reasons that this was a chance that they were taking:
Almost lost in the strong public reaction to the Challenger failure was the inescapable fact that major advances in mankind’s capability to explore and operate in space - indeed, even in routine atmospheric flight - will only be accomplished in the face of risk.3 The risks of space flight must be accepted by those who are asked to participate in each flight ... ’4 The use of the term ‘risk’ in both of these sentences is obviously the general unknown risk that anyone must take in every moment of life, enhanced in this case in the more dangerous world of outer space exploration. But the risk that the Challenger crew and passengers were taking was not of this sort. The risk was very well known and highly calculable beforehand but not to the astronauts.5 It would only make sense to use ‘risk assessment and risk management’ in this sense if Captain Dick Scobee had been told of the risk factor created by the O-rings and then asked if he wished to take up his crew and passengers in spite of the known risk. In this case, Captain Scobee would have been asked to make a risk assessment on behalf of himself and his crew and civilian passengers. To say that all of the astronauts knew they were taking a risk when they strapped themselves in is to seriously confuse the issue. They did know they were taking a general, unknown risk. They did not know that they were taking a highly specific and calculable risk. 
According to Malcolm McConnell, who covered space science and the Space Shuttle programme for the Reader’s Digest and was at Cape Canaveral to report on Challenger’s Teacher-in-Space flight on the day of the disaster, ‘As far as anyone in the astronaut corps had been informed, there had never been a problem with the SRB field joints.’6 To say pointedly in the text of the report that astronauts must take risks is to ignore the important distinction between general unknown risks, which all astronauts would of course be willing to accept, and specific, calculable risks, which astronauts would be willing to accept only under the condition that they were made known to them and that they could thereby choose or not choose to accept those risks. The same mistake is repeated in an article of Diane Vaughan’s, ‘The Trickle-Down Effect: Policy Decisions, Risky Work, and The
Challenger Tragedy,’ where she switches the location of the risk from the technology, as in the sub-title of her book, The Challenger Launch Decision: Risky Technology ..., to ‘risky work’, blurring the all-important distinction between general, unknown risk and specifically foreknown, and therefore avoidable, risks.
SAFETY PRIORITY

It is of interest to note that while a new position of Associate Administrator for SRM7 has been formed, this administrator has appeal rights only to the Administrator of NASA on any decision relevant to the safety of the STS and its crew.8 This is in striking contrast to the recommendations of the Fennell report on the King’s Cross fire, examined in the sequel, and the general line of thought advanced in the first four chapters of this book, that one of the major solutions is the evocation and implementation of a safety ethos which is embraced by all members of the business community. How different this is from the attitude of Arnold Aldrich, of Level 1 top management at the time of the Challenger disaster, who in testimony before the Rogers Commission is quoted as stating that ‘... we had no concern for the performance or safety of the flight articles [orbiters, boosters, main engines, or fuel tanks] at that time, nor do I even at this time’.9 Up to the present day, engineers still do not possess the right to vote to launch or not to launch, and this decision is still made exclusively by management.
DECISION MAKING

As of July 22, 1987, almost exactly one and one-half years after the Challenger disaster, the Committee writes in a letter to James Fletcher that there is still a difficulty in discovering who is responsible for final decisions made at NASA.10 As the issue of personal responsibility for decision making is seen in this volume as a very key ingredient in sound decision making, without the
resolution of this problem, no amount of creation of new boards or other administrative shuffling will solve the problem of proper decision making within any and all boards. The Committee does have a strong sense of the importance of this problem, as it indicates in this same letter to Fletcher, though it is buried, unfortunately, in Appendix C of the Report: The Committee recommends that the Administrator of NASA periodically remind all of the NASA organization of the specific individuals by name and position who are responsible for final decisions (and the organizational relationships among them) based on the advice coming from each panel and board.11 (emphasis in original) While the above appears in a section of the Report entitled ‘Report to the President: Implementations of the Recommendations of the Presidential Commission on the Space Shuttle Challenger Accident’ (emphasis in original), it is clear that it is yet another recommendation and not yet an implementation. If an item as crucial to proper and safe operation as clear-cut knowledge of who is responsible for which decisions still has the status of a recommendation to implement, and is not yet established as an explicit, iron-clad policy, then on the all-important issue of responsibility for top management decision making, not very much essential has changed at NASA.12
SAFETY FIRST?

On the issue of risk management, it is NASA’s currently held belief that the management of risks of the STS must be the responsibility of the line management (i.e., NSTS Program Manager, the Associate Administrator for Space Flight, and, ultimately, the Administrator of NASA). The safety organizations, such as they are, are limited to risk assessment only. This is another way of saying that the safety organization can act only in an advisory capacity and has no veto power over actual operational decisions.13 The House Committee on
Science and Technology, in contrast, recommended in its Investigation of the Challenger Accident that the new Associate Administrator for Safety, Reliability and Quality Assurance must ‘... have the authority not only to stop a particular flight, e.g., at a Flight Readiness Review, but to stop the whole mission planning process if necessary’.14 To limit the role of the safety organization to advice and assessment is not to have a safety first priority. It would be similar to giving the owner of a car the right to override the safety check guidance contained in the annual inspection of her or his vehicle. This attitude, while showing some concern for safety by requiring an inspection, clearly cannot be construed as placing safety first. If one wished to go further, one could argue that the safety organization should be involved not only in the assessment of existing designs but in the initial selection of designs. For example, the McDonnell-Douglas proposal for an orbiter incorporated a practical abort capability that would have protected the shuttle crew during all phases of the mission. In fact, their system actually anticipated the technical cause of the Challenger disaster in that it provided for a burn-through wire that would have sensed O-ring leakage, then triggered booster thrust termination and the orbiter’s abort rocket escape system. 
As it turned out, it seems that at least three crew members were breathing during the two minutes and forty-five seconds the crew compartment fell toward the sea.15 But James Fletcher, undeterred by any advice from or veto by a safety organization, voted against the McDonnell-Douglas proposal that included an abort rocket escape system.16 In addition to arguments of cost-effectiveness, Fletcher apparently considered that since no escape system existed on airplanes, the shuttle did not need one either.17 Of course, this would assume that the shuttle was as safe as a commercial airliner, an assumption belied both by arguments presented in the previous chapter and by Fletcher’s own expectation of another Challenger-like disaster. Was NASA always like this? The answer is no. Robert R. Gilruth, coordinator of Project Mercury in 1959, would not concur with the idea of manned spaceflight unless there was abort capacity.18 Joseph J. Trento writes in his trenchantly titled Prescription For Disaster that, ‘No part of Mercury, Gemini, and later Apollo got more attention than escape systems and astronaut safety. Only in 1972,
with the approval of the Space Shuttle design, did NASA abandon that tradition’.19 Gilruth, who more than most people understood the risk of space flight, had great difficulty understanding why NASA had no escape system.20 For old-timers like von Braun, the decision to save money on the shuttle by using solid rockets was a very dangerous one. If anything went wrong in a flight, a solid rocket could not be shut down (unlike a liquid fuel rocket). It could be likened to sending people up riding on a Roman candle. But what illustrated the lack of safety concern the most was the decision not to provide for any escape system for passengers.21 The lack of an escape system was not only an isolated technological item. Its absence symbolized the general attitude towards safety that prevailed at NASA during the Challenger days. According to Trento, Gilruth saw the lack of an escape system as reflecting an absence of a safety ethos: ‘To Gilruth, the lack of an escape system violated everything NASA stood for.’22 What would an escape system have cost? According to Trento, the only system that would have worked for the entire crew was a crew compartment escape system that a study projected would cost $292 million.23 A price of 41.7 million dollars had been put on the head of each astronaut and civilian passenger. This was a small percentage of the entire cost of the shuttle, which has been estimated to have been $3 billion. Ironically, another proposal, from Lockheed, was also turned down, although this proposal involved shipping the boosters in a vertical position on barges instead of Thiokol’s rail-shipment system, which the Rogers Commission found to be one of the contributing causes of the Challenger disaster in that the lower segment of the right-hand solid rocket booster might have been damaged during shipment.24 Still another contractor, Aerojet, had proposed to eliminate the segmented casing altogether and to build the rocket booster as a single case. 
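The per-head figure follows from simple arithmetic on the numbers cited from Trento. A minimal sketch (the $292 million and $3 billion figures are from the text; dividing by the seven people aboard Challenger is this note's illustration):

```python
# Arithmetic behind the figures cited from Trento: a $292M crew-compartment
# escape system, spread over the seven people aboard Challenger, measured
# against the shuttle's estimated $3B cost.
escape_system_cost_m = 292.0   # projected escape system cost, millions of dollars
occupants = 7                  # crew and civilian passengers aboard Challenger
shuttle_cost_m = 3000.0        # estimated shuttle cost, millions of dollars

cost_per_person_m = escape_system_cost_m / occupants
share_of_shuttle = escape_system_cost_m / shuttle_cost_m

print(f"per person: ${cost_per_person_m:.1f} million")   # 41.7, as in the text
print(f"share of shuttle cost: {share_of_shuttle:.1%}")  # under ten percent
```

The share works out to under ten percent of the estimated shuttle cost, which is the sense in which the text calls it a small percentage.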
However, Dr. Fletcher was able to make a unilateral decision without the benefit of a safety committee’s advice or the hindrance of a safety veto of his ultimate choice of design. According to Malcolm McConnell, who covered the Space Shuttle program for Reader’s Digest, ‘In his formal Selection Statement, Administrator Fletcher singled out Thiokol’s casing field joint design as an example of [engineering] excellence’.25 While McConnell entertains some tightly reasoned political reasons for
Fletcher’s ultimate favoring of Thiokol’s design, if the choice of an original design had had to pass the approval of an independently funded safety organization (not line management, which would ultimately have been responsible to Fletcher), then it is highly unlikely either that this design would have been selected or that an orbiter without abort capacity would have been selected. The recommendations of the Slay report do not call for a safety organization to do anything but advise on the safety of existing designs. They do not call for a safety organization to provide any sort of input to initial design selection. It does not appear that the recommendations of the Slay report significantly enhance the posture of a safety first organizational policy. There is another way in which giving a safety organization veto power over operational decisions could have averted the Challenger disaster. If safety organizations are limited to assessment of existing designs and not to active risk management, and if there are strong commercial or political pressures to make a dubious operational decision (in this case, to launch the Challenger), then there is no iron-clad way to resist such pressure. For example, it has been argued that there was no strong reason why the Challenger had to be launched under such bad weather conditions unless there had been very strong commercial or political pressure applied. McConnell discusses a cogent list of reasons, including a potential visit from the then Vice-President George Bush, the possible benefit of timing the launch to coincide exactly with the State of the Union message of President Reagan, and the perceived need to keep on schedule so as to keep ahead of the Soviet program.26 There is no way to totally eliminate the possible presence of political and commercial pressures. Political pressure is not something new. 
In fact, one could well argue that in the early days of NASA, outside political pressure was even greater, in that those were the days of the Space Race with the Soviet Union. In his chapter entitled ‘The Kennedy Years’, Trento relates that NASA was under intense pressure to get John Glenn into orbit in the same year as the Soviets. NASA’s response to the media and to members of Congress was: ‘You like to have a man go up with everything as near perfect as possible. This business is risky’. After the eighth time Glenn’s flight was postponed, a reporter asked Kennedy about the delays. Kennedy’s response was: ‘ ... I think we ought to stick with the present group who are making the
judgement’.27 Perhaps one could argue that during the Kennedy years a safety ethos not only guided NASA but also appeared to set the standard for the Oval Office. When a safety-first priority is seen to be embraced not only by NASA’s top administration but by the top administration of the nation, the example not to ‘cut corners’ is a powerful one. In this respect one can say that a safety-first priority is not only essential for space administration, but that the responsibility for engendering such a priority can be traced right up to the White House. But even with a government that is either not safety minded or, in greatest probability, not familiar with detailed safety risks (one cannot imagine that Reagan was aware of the risk that the Challenger was taking with respect to the faulty O-rings), there is no need for space administration to kowtow to outside pressure. Space administration has, first of all, the responsibility to report to the White House what risks would be taken if there were to be a launch. And if the suggestions of this book are taken seriously, if there were a safety committee or officer with the power to veto a launch decision made at the highest level, this would provide a senior administrator a face-saving way out of being forced or pressured into ordering a launch under far less than ideal conditions. The senior administrator could then say that she or he was so constrained by her or his safety officer. An explicit safety-first ethos could also be appealed to simultaneously as the conceptual justification for the safety officer’s concrete decision to veto in this circumstance. It is thus even in the interest of senior administrators to have a safety-first ethos with teeth in it, to save them from being seemingly forced to make unwise decisions.
There is still no safety ethos that involves a safety-first priority with real teeth, that is, veto power over initial design and operational decisions.
IS THERE A GREATER SENSE OF RESPONSIBILITY NOW?

The fundamental problem of taking personal responsibility for final decision making remains an unresolved issue. Some works
commenting on the Challenger disaster create the impression that whether anyone in particular was responsible for the Challenger disaster is a debatable question. Such is the impression created in the opening paragraph of the conclusion to the 1997 publication Engineering Ethics, in which Pinkus, Shuman, Hummon and Wolfe state that ‘Our role in presenting this rather extensive, but important and instructive case, [that of the Challenger] is not to take sides’.28 In the last two sentences of their conclusion they attempt to portray the case as a toss-up: “Engineers who were very familiar with the problem [O-ring problem] said not to extrapolate or launch - we don’t [sic] understand enough about the mechanism. [Actually, they were more vehement in their opposition to launch than this summary description suggests, and documented evidence exists that at least two of them understood the mechanism well enough to know that it was defective] In contrast, managers said that our understanding of the mechanism, the data, and how they were presented justifies extrapolation and the launch.”29 [Even the managers themselves were not unanimous, and a certain amount of verbal manipulation by some was required to bring them all in line].
WERE MIDDLE MANAGERS SIMPLY FOLLOWING POLICY?

According to Diane Vaughan’s The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, the middle managers were simply following policy (a variation on following orders) in making their final choice. According to regulation, it was not necessary to report previous instances of O-ring erosion which fell within predictions. But these included red-flag instances. Regulations were observed. But senior managers gave sworn testimony that they should have been informed and would have intervened to stop the launch had they been informed (thus arguing that there could and should have been rule violation in this case). Vaughan may be bureaucratically, technically and legalistically correct in stating that the middle managers were not obligated to report the intense debate over the O-rings and
temperature concerns voiced at the teleconference to senior managers. Sometimes, however, one needs to do more than what is bureaucratically, technically or legalistically required. She repeats her thesis that managers were lawful rule followers in a later article, stating that ‘The image of rule-violating middle managers was unfounded. In the history of decision making on the Solid Rocket Boosters, 1977-1985, and on the eve of the launch, Marshall middle managers abided by every NASA launch decision rule’.30
WERE THE MIDDLE MANAGERS MORAL?

Diane Vaughan implies that deciding not to inform the risk takers of the red-flagged problem was a perfectly moral choice. She states that ‘locating responsibility for continuing to launch during this period with individual managers who intentionally suppressed information about O-ring problems is incorrect’. Then who was morally (if not legalistically) responsible for not informing senior management of the O-ring problems? For Vaughan, ‘The cause of disaster was a mistake embedded in the banality of organizational life ...’; it was ‘a normal accident’ [a contradictio in adjecto] and an ‘inevitable mistake’. This view corresponds to the reactions of senior NASA officials nearer to the time of the disaster. In a televised interview on July 1, 1990, Mr. William Lenoir, the then new NASA Director of Space Flight (Associate Administrator for Space Administration), responding to the two discoveries of hydrogen leaks which temporarily grounded the space program, stated that ‘it’s all in a day’s work’ and that it was ‘business as usual and I am not embarrassed by it’. Such statements seem to indicate strongly that the attitude of top administration is that potential unpredictable disasters may be uncovered at the last moment, even on the launching pad, and that such discoveries are routine and not symptomatic of any management dysfunction. This would appear to support the prevailing mental set that disasters or potential disasters are deus ex machina and will always occur, and that there is no special relationship between them and the possibility of mismanagement. In early July 1990, Mr. James Fletcher, re-appointed at that time as the Head of NASA, commented that there could be a recurrence of a
Challenger-like disaster. His comment, reported on the BBC in early July of 1990, also appears to indicate that disasters are the order of the day in the space world and that, therefore, presumably one should not be unduly alarmed at their prospect, nor are there any special steps to take to reduce seriously the possibility of their occurrence.
WAS THE CHALLENGER DISASTER A MISTAKE?

The Challenger disaster was not a mistake, a word implying a harmless and inevitable error to which all are prone. It was a decision made in light of full knowledge of both the dangerous risk and the horrifying consequences. The ‘mistake’ language is morally neutering and again carries the connotation of unavoidability. The choice of the word ‘mistake’ in this context is ludicrous. It conjures up the image of hitting the wrong button by error: ‘You mean that was the rocket launch button, Joe? I thought it was the toaster!’ The ‘mistake’ language trivializes what was a considered decision to place human lives at risk in opposition to expert counsel. It both makes the fateful launch decision seem frivolous and absolves the decision makers of responsibility. It cloaks the decision in the blanket of inevitable human error when in fact it was a decision made in full knowledge of the highly likely, red-flagged, horrific consequence.
WAS THE CHALLENGER DISASTER AN ACCIDENT?

Vaughan states that ‘The Challenger disaster was an accident, the result of a mistake’. But the evidence she offers in her own volume does not support the thesis that the Challenger disaster was an accident, that is, a random and thus unpreventable occurrence. The evidence that she provides in her book suggests that the Challenger disaster was
avoidable and that, on the contrary, purposeful action (not accidental action) was taken which may have paved the way for the disaster. In chapter seven, ‘Structural Secrecy’, she writes: ‘Following preestablished criteria for an operational system and ignoring the developmental nature of the shuttle’s unruly technology, [one wonders if NASA officials were attempting to cage a wild beast here] top NASA officials purposefully weakened the internal safety system’. She writes further: ‘Jack Macidull, FAA investigator working for the Presidential Commission, wrote, “Valid safety awareness was screened by first misdefining, and then not reevaluating, the capability of the system based on new data, which led to dangerous operational assumptions and procedures.”’ How can the Challenger disaster have been an inevitable mistake, and how can ‘... the Challenger launch decision [have been] a story about routine decisions in the workplace’, if the author is also stating that ‘to their great credit, both investigations [the House Committee and the Presidential Commission] made it clear that the disaster was not merely a technical failure; the NASA organization was implicated ... powerful elites, far removed from the hands-on risk assessment process ... made decisions and took actions that compromised both the shuttle design and the environment of technical decision making for work groups throughout the NASA-contractor system’? This description does not sound like ‘routine decisions in the workplace’. She cannot have it both ways. The shuttle design cannot have been both compromised and also justified. There cannot be both a systematic, long-term and comprehensive compromise of safety on the one hand and, on the other, a conclusion that ‘The Challenger disaster was an accident, the result of a mistake’.
WAS THE CHALLENGER DISASTER THE INEVITABLE OUTCOME OF CULTURAL DETERMINISM? AND WERE MORAL CHOICE AND MORAL RESPONSIBILITY THEREFORE IRRELEVANT AS AN EXPLANATION?

For Vaughan, ‘The teleconference was ... a situation of perhaps unparalleled uncertainty for those assembled, all participants’
behavior was scripted in advance by the triumvirate of cultural imperatives that shaped their previous choices’. And again: ‘Socially organized and history-dependent, it is unlikely that the decision they reached could have been otherwise, given the multilayered cultures to which they all belonged’. And, finally: ‘... when individual actions are embedded in an organizational context, evil becomes irrelevant as an explanation because the meaning of one’s actions and the locus of responsibility for one’s deeds become transformed, as this book will show ... [a horrifying prospect] ... managers ... considered the costs and benefits of launching ... No rules were violated. Following rules, doing their jobs, they made a mistake. With all procedural systems for risk assessment in place, they made a disastrous decision’. It is unfathomable how ‘the meaning of one’s actions and the locus of responsibility for one’s deeds become transformed’ unless one is referring to the actions of brainwashed members of a cult. But here the reference is to responsible professional human beings, trained engineers and managers charged with great public responsibilities. The decisions reached could have been otherwise than they were. They were not forced to authorize the launch of the Challenger. Cultural imperatives have, for Vaughan, usurped the role of moral imperatives. On this view, one cannot help but carry out the mandates of the cultural imperatives. Yet presumably the engineers and the senior managers (if one accepts the testimony of the senior managers) were capable of resisting the cultural imperatives. The middle managers alone had no choice but to succumb to them. It is surprising how selective the system of cultural determination is in its choice of helpless victims.
But more surprising yet, while evil becomes irrelevant as an explanation for those whose actions are embedded in organizations, it is equally unfathomable how the author can claim that both the House Committee and the Presidential Commission made it clear that ... the NASA organization was implicated ... powerful elites ... made decisions and took actions that compromised both the shuttle design and the environment of technical decision making for work groups throughout the NASA-contractor system. While these powerful elites are left unnamed, apparently they, too, are capable of transcending their cultural determinations, and evil is relevant as an explanation for their actions.
Some human beings are capable of transcending their organizational cultures, as these examples taken from her own book duly show. Even the middle managers themselves, as she states, ‘were, in fact, quite moral ... as they calculated risk’. It is not altogether clear, in the end, which statements in her book should be taken as representing her position. What is clear is that her statements do not all hang together. Either evil is irrelevant as an explanation and the managers (unlike their engineering counterparts) are but computers performing a calculation, or they are responsible agents (though immoral rather than moral, as argued above) in their calculations. If she allows that they can be moral, then they can be immoral as well. If she does not grant them the possibility of being moral, then they cannot be moral as they calculate risk. In the end, it is left to the reader to jettison the idea that evil is irrelevant for explaining action and agree with the author (against herself) that the managers were responsible but, in contrast to her statement, immoral as they calculated risk. Evil, or more accurately, moral responsibility, is relevant as a key consideration in explaining the Challenger launch decision.
THE HISTORY OF O-RING EROSION

According to Schwartz, NASA rejected Morton-Thiokol’s design for solid rocket booster seals because “joint rotation” prevented the secondary O-rings from sealing. Schwartz goes on to say that in May of 1982 NASA ‘accepted the conclusion that the secondary O-ring was no longer functional ... when the Solid Rocket Motor reached 40% of its maximum expected operating pressure’ and therefore ruled the seal nonredundant. Schwartz points out that in-flight erosion of the primary seal occurred as early as the second shuttle flight, in November of 1981, and that beginning in February 1984 it became a regular occurrence, with some primary O-rings not sealing at all. On flight 51-B, not only did a primary O-ring fail altogether to seal, but a secondary O-ring eroded. Therefore, Schwartz argues that Starbuck and Milliken’s claim that managers were trying to remove unnecessary safety factors on the basis of successful experience, what they called “fine-tuning”, was incorrect.31 This also makes one
question both Vaughan and Weick when one considers Weick’s restatement of Vaughan, when he writes that ‘Engineers learn that erosion recurs, is to be expected on future flights, lasts for only a very short time, is self-limiting, and proves that redundancy is working’.32 Which engineers thought that? Which engineers thought that expected erosion proves that redundancy is working? Certainly not the engineer who knew the most about the O-rings, Roger Boisjoly, who argued in sworn testimony before the House Committee that ‘if erosion penetrates the primary O-ring seal, there is a higher probability of no secondary seal capability ...’ Which engineers did think that erosion proves that redundancy is working? Some managers apparently did. Erosion might prove that redundancy is needed – not that it is working. The example of erosion is also given as the only example of a “Mixed Signal” (defined as a signal of potential danger followed by a signal that all is well) in Vaughan’s article, ‘The Trickle-Down Effect: Policy Decisions, Risky Work, and the Challenger Tragedy’.33 But again, for whom was this signal mixed? Certainly not for Boisjoly, according to documented evidence. It was clear enough to him. At the most relevant point at which to consider erosion, the Challenger launch decision, none of the fourteen Morton Thiokol engineers and managers considered it safe to fly. That a margin of erosion may have been considered safe by some engineers and managers under certain conditions in some of the launches prior to the Challenger does not prove that they were so seduced by an unbroken string of past successes despite limited erosion that they decided to go for broke in the case of the Challenger. They did not decide to go for broke in the case of the Challenger. They decided not to launch.
The argument that deviance had been normalized, so that the Challenger decision was simply another example of the toleration of deviance, is without grounds. There was no toleration of deviance in the case of the Challenger decision, because it was decided not to launch. There was no redecision; rather, four managers exercised executive privilege and reversed the decision not to launch. If anything, an unprecedented reversal of a unanimous decision not to launch shows an intolerance of deviance!
NORMALIZED DECISIONS?

One also wonders about Vaughan, Turner and Pidgeon when Turner and Pidgeon write: ‘Vaughan’s account shows how the significance of the available warnings concerning flight safety, and the norms through which risk was judged, were accordingly negotiated and renegotiated through the working practises of the teams of engineers ... for the O-ring seals a cycle of decision-making was set where signals of the ‘deviant’ behaviour of the system were successively ‘normalized’ as acceptable ...’34 But it would be more accurate to say that the norms were legislated by management, not negotiated, and that the signals were ‘imperialized’ rather than normalized, for it was the managers and not the engineers who found the safety standards “normal” and therefore acceptable. One need only recall senior engineer Boisjoly’s famous memorandum of July 31, 1985, with his eerie and prescient statement that ‘The result would be a catastrophe of the highest order - loss of human life’, and his earlier report of July 22, 1985, in which he warned of a horrifying flight failure unless a solution were implemented to prevent O-ring erosion. Both of Boisjoly’s frightening memoranda were written and delivered at nearly the same time that an unwitting and trusting 37-year-old mother of two, Christa McAuliffe, was being named at ceremonies at the White House as Teacher in Space (July 19, 1985). Did the left hand know what the right hand was doing?
LINKS BETWEEN TEMPERATURE AND EROSION

Why was the obvious link between temperature and erosion on previous launches not more strongly considered by management? Seven flights had shown evidence of damage. The most damage had occurred on the coldest flight – at a still mild 53 degrees Fahrenheit – but it is claimed that no absolute correlation could be seen between temperature and damage. Serious damage, as it was represented before the Presidential Commission, had occurred at 75 degrees, for example. Gleick points out that ‘... The error was to ignore the
flights on which no damage had occurred, on the basis that they were irrelevant. When these were plotted – seventeen flights at temperatures from 66 to 81 degrees – the effect of temperature stood out plainly. Damage was closely associated with cold. It was as if, to weigh the proposition that California cities tend to be in the Westernmost United States, someone made a map of California – omitting the non-California cities that would make the tendency apparent. A team of statisticians formed by the National Research Council to follow up the commission report analyzed the same data and estimated a “gambling probability” of 14 per cent for a catastrophic O-ring failure at a temperature of 31 degrees.’35 This is even higher than Feynman’s estimate (1 in 100) and much higher than the figure given by NASA management, who posited that the risk of a disaster involving the loss of the Challenger was 1 in 100,000. In any case, it should be noted that the fact that O-rings showed damage even in the absence of cold temperatures did not imply that cold had no effect on the possibility of failure. It only meant that the O-ring design was a faulty one and that it could fail at any temperature. That it was more likely to fail at cold temperatures only meant that this was the most dangerous time to fly, not that the design was safe to fly at higher temperatures. Perhaps scientists whose careers are dedicated to basic research, and ethically concerned philosophers, should be part of management decision-making bodies which expose risk takers to possible loss of life. This was the reason behind Professor Feynman’s celebrated gesture of dropping a piece of rubber O-ring into a glass of ice water at the televised hearings - to show that there was no resiliency to it at a freezing temperature and thus that it could not expand to contain the superhot inflammable gases. If a pure scientist had been on the decision-making body, someone could have performed his glass of ice water test.
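Gleick’s point about the omitted no-damage flights is, at bottom, a statistical one, and it can be illustrated with a short calculation. The sketch below uses hypothetical figures (seven damaged flights and seventeen clean ones, loosely echoing the counts mentioned above, not the actual flight record) to show how discarding the zero-damage flights weakens the apparent temperature–damage correlation:

```python
# Illustrative sketch: how omitting no-damage flights can hide a cold-damage link.
# The numbers below are hypothetical, chosen only to mirror the counts in the
# text (seven flights with damage, seventeen without); they are not NASA data.

temps_damaged = [53, 57, 58, 63, 70, 70, 75]   # launch temperatures, damaged flights
incidents = [3, 1, 1, 1, 1, 1, 2]              # damage incidents per flight
temps_clean = [66, 67, 67, 67, 68, 69, 70, 70, 72,
               73, 75, 76, 76, 78, 79, 80, 81]  # no-damage flights

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Looking only at flights that showed damage: the relationship is murky.
r_partial = pearson(temps_damaged, incidents)

# Including the no-damage flights (zero incidents each): cold stands out.
r_full = pearson(temps_damaged + temps_clean,
                 incidents + [0] * len(temps_clean))

print(f"damaged flights only: r = {r_partial:.2f}")
print(f"all flights:          r = {r_full:.2f}")
```

With these made-up numbers the negative correlation strengthens markedly once the clean flights are restored, which is the map-of-California error in miniature: the data that make the tendency apparent are precisely the data that were thrown away.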
The science writers John and Mary Gribbin say that any of the Thiokol engineers, had they thought the way Feynman did, could have done the experiment before the launch. But the engineers were applied scientists, not pure scientists. They would not have been thinking in the same way as a Richard Feynman! Diane Vaughan is dismissive of Professor Feynman’s famous gesture of dipping a piece of an O-ring into a glass of ice water during the televised Commission hearings, and of Professor Feynman’s astonishment over the concepts of ‘acceptable
risk’ and ‘acceptable erosion’. For her, Professor Feynman’s experiment ‘... greatly simplified the issue on the table on the eve of the launch. Managers and engineers alike knew that when rubber gets cold, it gets hard. [unlike what she says in other, key sections of her book] The issues discussed during the teleconference were about much more complicated interaction effects: joint dynamics that involved the timing (in milliseconds) of ignition pressure, rotation, resiliency, and redundancy.’ (emphasis added) Did she not think that the Nobel laureate, Einstein Award and Niels Bohr Gold Medal winning physicist, former Professor of Theoretical Physics at the California Institute of Technology, and Presidential Commission member Richard Feynman knew that? The whole point of Professor Feynman’s test was to prove that there was no resiliency in the rubber at freezing temperature for more than a few seconds. Of course Feynman was aware of resiliency – that was the entire point of his test. That the lack of resiliency lasted for more than a few seconds was proof to him that this was a far longer time than the milliseconds required for the superhot propellant gases to escape and burn the O-rings. Diane Vaughan’s Master’s Degree in Sociology does not qualify her to evaluate a scientific experiment. The experiment was performed by Richard Feynman, one of the truly great scientists of the twentieth century. In private correspondence with the present author on March 28, 2002, Roger Boisjoly stated, ‘... you did not need to be a Rocket scientist to understand the basic operation of O-Ring seals. [This is the first time the present author has heard this expression used by a Rocket scientist!] You are absolutely right in your assessment [that cold adversely affected the operation of the O-rings] for the following reason. Seals of all kinds are made from resilient rubber like materials because they seal very well.
However, when you place these same seals in a cold temperature they harden and are no longer able to seal as well. If the temperature is below freezing then you might as well use a seal made from steel rod instead of rubber but we all know that steel rods do not seal very well. A relatively soft rubber seal will quickly spring back from its compressed shape to its natural uncompressed shape at room temperature but will not move at all from its compressed shape when frozen. Yes, it is really as simple as that’. Whose opinion is more reliable? Diane Vaughan’s opinion that the idea that rubber
gets hard when it is cold is too simple, or Richard Feynman’s and Roger Boisjoly’s opinion that the failure to be aware of the simple fact that when rubber gets cold it gets hard was to have very disastrous consequences? Perhaps it is for this reason that the Departments of Philosophy at Harvard and Princeton encourage graduate students and even faculty to attend 100-level courses in philosophy. One is reminded of the critique a commentator made of Aristotle, that he only gave the reader dazzling glimpses of what was obvious. Would that everyone possessed the ability of a genius to see what was obvious.
TRUSTING ESTABLISHED PATTERNS OF DEVIANCE?

In any event, the engineers did not simply grow to trust the established pattern of deviance. They were time-pressured to submit reports. They were pressured to say what managers wanted to hear. Reports they wrote were altered by managers. They objected, both in the form of the red-flagged memos and in the initial unanimous decision not to launch, and in the final efforts of the two most senior experts (who knew the most about the O-rings) to prevent the launch.
FAITH IN THE SECONDARY SEAL?

What of the argument that the engineers were misled into thinking that the secondary seal would seal if the primary seal did not? Thus, they did not willingly sacrifice the astronauts and civilians; they made an honest mistake. Weick, Rensis Likert Professor of Organizational Behavior and Psychology at the School of Business Administration at Ann Arbor, in a review of Vaughan’s The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, makes reference to ‘the engineers’ beliefs that they are working with a redundant set of O-rings whose erosion is within their experience base and is a self-limiting phenomenon that
occurs within margins of safety.’ Scott D. Sagan, Associate Professor of Political Science at Stanford University, seconds this in his sequential review when he cites Vaughan quoting one [unnamed] engineer as saying, ‘The data said the primary would always push into the joint and seal ... And if we didn’t have a primary seal in a worst case scenario, we had faith in the secondary. (p. 105)’.37 Sagan continues quoting from Vaughan: ‘The engineers, Boisjoly and Thompson, had expressed some question about how long it would take that [primary] O-ring to move ... So, if their concern was a valid concern, what would happen? And the answer was, the secondary O-ring would seal … (p. 320) What the Challenger launch team failed to consider was the possibility that the freezing cold temperature would reduce the reliability of both the secondary O-ring and the primary O-ring.’38 What evidence is there for the key claim that all of the engineers had faith in the secondary seal? If they had such faith, why had they written red-flagged memoranda warning of the danger of the O-rings? In private correspondence on March 28, 2002 with the present author, Boisjoly stated, ‘There was never a vote by the engineers. We were at the meeting to present the technical facts about why we should not launch and the actual decision would be made by our Program Manager. However, of the 14 bodies (managers and engineers) in the room at MTI, [Morton Thiokol Incorporated] not one negative voice was ever heard. Everyone was in full agreement to stop the launch (managers and engineers alike)’. The unnamed lone engineer Vaughan and Sagan cite as evidence that the engineers had faith in the secondary seal was Leon Ray, a Marshall S&E (Science and Engineering Directorate) engineer, who is reported as saying this in a private telephone interview with Vaughan. Was Ray’s comment taken out of context? Later, he is reported to have stated that his opinion did not apply in the case of cold temperatures!
What team are Vaughan and Sagan referring to that was unaware of the effect of cold on the O-rings? The engineers were surely aware of this possibility. Casamayou, referring to the Presidential Commission Report (the Rogers Commission Report), writes: ‘Roger Boisjoly, the Thiokol expert on the seal joint, who had made at least one of the charts McDonald used in his presentation, had secondary sealing capability and temperature in mind when he constructed the chart.’39 The redundancy lasted for all of 2/10 of a
second.40 (If, as one recalls, the O-ring is stiff for more than a few seconds, then one is covered for only 2/10 of a second of those several seconds, and it only takes a few milliseconds for the hot gases to escape and burn up the O-rings.) Of course, if one piece of rubber is stiff, why would the other not also be stiff? Boisjoly recalls his own conversation with Mulloy during the teleconference on the eve of the Challenger launch. He ‘expressed deep concern about launching in cold temperature’ and presented his second chart ‘that showed the critical relation between temperature and the timing function of the O-rings.’ (emphasis added) In Boisjoly’s own account, ‘We would have low O-ring squeeze due to low temperature which I calculated earlier in the day.’41 Sagan’s quotation is taken from a private telephone interview of Vaughan with, of all people, Mason, the most senior and therefore the most culpable of the four middle managers who made the decision to launch in opposition to the engineers. Vaughan said earlier that ‘Managers and engineers alike knew that when rubber gets cold, it gets hard’. In any event, if Mason is reported to be recounting, in a third-hand account, what he understood an engineer (McDonald) to be saying in answer to Boisjoly’s and Thompson’s concern, it is nonetheless similar to asking the wolf if the chickens are safe. Should we trust Mason’s word that the “team” did not realize the possibility that the freezing cold temperature would reduce the reliability of both the secondary O-ring and the primary O-ring when, according to John and Mary Gribbin’s book on Feynman, an engineer from the Thiokol company, responsible for the seals, testified before the Rogers Commission (without being solicited) that the Thiokol engineers were so concerned about the possible effects of cold on the seals that the night before the launch they had advised NASA not to fly if the temperature fell below 53 degrees Fahrenheit?
McDonald, the engineer whom Feynman names, stated, ‘If something goes wrong with this flight, I wouldn’t want to stand up in front of a board of inquiry and say that I went ahead and told them to go ahead and fly this thing outside what it was qualified to do’. McDonald’s testimony was so stunning that Rogers asked him to repeat the whole story.42 This is the same McDonald to whom Mason attributes the view that the engineers had not considered the effect of freezing cold temperature on the O-rings. Should one trust McDonald’s unsolicited sworn
testimony before the Presidential Commission, or should one trust Vaughan's private telephone interview of Mason's recollection of what he reports McDonald to be saying? This is the problem with the sources upon which her reviewers rely and on the basis of which Vaughan's arguments, which form the crux of her book, stand or fall. It is best to consider Boisjoly's opinion, since he was the Senior scientist who knew the most about the O-rings. Senior Engineer Boisjoly, in sworn first hand testimony before the House Committee, stated that 'if erosion penetrates the primary O-ring seal, there is a higher probability of no secondary seal capability ...' Despite this documented evidence, Vaughan states that the cold temperature was a weak signal (one that is unclear or improbable), and it is the only example offered to support the thesis of "Weak Signals" in her article on the Challenger.43 If cold temperature is a strong signal – in the case of stiffening rubber it would seem to be a self-evident and therefore strong signal – and it is the only seemingly substantive example of a weak signal offered in her work to support the hypothesis of weak signals, then this would signal the collapse of the hypothesis of weak signals.
DID ENGINEERS BELIEVE THAT THE CHALLENGER WAS SAFE TO FLY?

What of the argument that the engineers were misled into thinking that, since the previous flights had been successful despite erosion, future flights were therefore safe? Consider Weick's ringing endorsement of Vaughan when he writes that 'Engineers learn that erosion recurs, is to be expected on future flights, lasts for only a very short time, is self-limiting, and proves that redundancy is working.'44 On this view, engineers considered that it was safe to fly even though they did possess some safety concerns. Weick echoes Vaughan when he writes that 'in 1985 there was both escalating concern about safety and belief that risk was acceptable (p. 172), an unusual combination of simultaneous belief and doubt that may be more common than we suspect'.45
POST-CHALLENGER INVESTIGATIONS
The engineers were not so easily fooled. That is why they originally unanimously recommended not launching the Challenger. When they could not convince their managers, the managers overruled their opposition and reversed their decision, although the two Senior engineers who knew the most about the O-rings continued to attempt to persuade the management until all such attempts were rebuffed to the point of hopelessness. The engineers did not vote to reverse their decision. It is thus difficult to understand Weick's above statement about the engineers. If one substitutes the words 'shuttle managers' for 'engineers', the statement might be more accurate.46 The question is, from what sources and in what manner was such information obtained? Boisjoly claims that Marshall managers altered the original Thiokol charts, deleting concerns with lower temperatures and O-ring problems. When Rogers asked the engineer Russell why he wrote a letter saying that they should close out the O-ring issue, Russell replied, 'Because I was asked to do it'. It is also important to consider that these escalating concerns about safety and belief that risk was acceptable were not necessarily held by the same people, and the concept of acceptable risk was not unqualified. An examination of the sources cited by Vaughan reveals that Mark Salita (in a private Vaughan-Salita interview transcript) was one who is reported to have believed in acceptable risk. Arnie Thompson was one who shared escalating concerns about O-ring resiliency.47 Most of the information stating that some select engineers did not find the risk unacceptable was gathered in private, personal interviews with Vaughan. Boisjoly's reported satisfaction with the O-rings was gathered in a private telephone interview with Vaughan. Were his statements taken out of context? They certainly contradict what he said in sworn testimony before the Presidential Commission.
In private correspondence with the present author on March 28, 2002, Boisjoly said that he thought that Vaughan had not understood the content of what she was told in interviews and that she had not checked back with him. As a result, he said, her book was filled with 'gross inaccuracies'. Even in these private telephone interviews held between Vaughan and those whom she interviewed, evidence is cited that one engineer (Leon Ray), who was reportedly satisfied with the O-rings, specifically excluded conditions of cold weather.
WERE THE ENGINEERS UNAWARE OF WHAT WAS A PROBABLE OUTCOME?

What of the argument that the engineers were no more prescient than anyone else that something so catastrophic could happen? Karl Weick states that in Vaughan's treatment, 'Roger Boisjoly ... emerges as a person who is no more prescient than the rest of us. After the decision was made to launch, Boisjoly stopped by Jerry Burns' office on his way home and asked him to be sure to document the damage in the post-flight analysis of the Challenger, since this would give them a new data point to analyze (p. 380).'48 The implication is that Boisjoly thus thought that nothing would happen to the Challenger. This is factually mistaken. A number of engineers predicted this possibility, the first of whom did so as far back as 1978 [eight years prior to the Challenger disaster, which occurred on January 28, 1986], and of course Boisjoly's two eerily prescient memoranda were written within one year of the Challenger disaster. John Q. Miller, Marshall's Chief of the Solid Rocket Motor branch, wrote in a memorandum to Project Manager George Hardy, referring to the Morton Thiokol, Inc., field joint design, that this design was so hazardous that it could produce 'hot leaks and resulting catastrophic failure'. Maureen Hogan Casamayou argues that it was known as far back as 1977.49 Seven more years passed prior to Senior Engineer Boisjoly's second famous memorandum of 1985, with its eerie and prescient statement that 'The result would be a catastrophe of the highest order - loss of human life', which itself had been preceded by his own earlier 1985 memorandum warning of a horrifying flight failure unless a solution were implemented to prevent O-ring erosion.50 In contrast to the evidence of black and white memoranda personally written and signed by Boisjoly, Weick's quotation of Boisjoly was taken from Vaughan's private interview with Burns, recounting a conversation Burns reports having had with Boisjoly.
In any case, what Boisjoly was interested in was data concerning the cold temperature. Obviously, this was very much on his mind (in contrast with the Vaughan/Weick/Mason view that the team had not considered the effect of cold on the rubber). (Vaughan has selective amnesia on this issue, as can be seen below.) In case the
Challenger was not lost on this particular flight, he wanted data about cold temperatures to prevent a disaster on a subsequent flight. That he asked for this data does not prove that he was confident that the Challenger was safe to fly.
WHAT OF THE 'WEAK, MIXED, EMBEDDED AND ROUTINE SIGNALS' ARGUMENT?

This argument states that the signals that something catastrophic would happen were weak signals. As a result, such signals were simply not heard. Consequently, no one was really responsible. It was 'a story of embedded signals that became clear only in hindsight.'51 For Weick, 'scant attention [is] paid to memos of warning, which are weak signals in a culture built around adversarial formal reviews.'52 Further, Weick trumpets Vaughan's opinion 'that signals seen in context tend to be weak, mixed, and routine.'53 What is a weak signal? Among the existing signals were two red flagged memoranda, including one which stated that 'The result would be a catastrophe of the highest order - loss of human life', the original unanimous decision on the part of Thiokol engineers not to launch, and the two Senior Scientists at the end frantically attempting to convince management not to launch. Can these signals be labeled as weak? A red flagged signal is a screaming signal. A red flagged memorandum, by definition, is neither weak, mixed, nor routine. The language of these memoranda is forceful and alarming. What would a strong signal be? As stated above, the only example of a "weak signal" Vaughan offers is that of the cold temperature. But according to her own statement in The Challenger Launch Decision: 'Managers and engineers alike knew that when rubber gets cold, it gets hard'. (Kindly note the contradiction between this statement and her claim, discussed above, that the engineering team failed to take cold temperatures into account.) How then could this be classified as a weak signal? Perhaps what she means by this (although she does not say so) is that the test which showed that there was no gas leakage at 30 degrees F indicated that cold was not always associated with leakage, and hence this fact made the signal "weak". If this were her
argument, it, too, would be flawed because, as pointed out in n. 35 below, the test which showed no leakage at 30 degrees F was not of a joint seal at all, since the test ring was a solid block of metal. And if this is a prime example offered of a weak signal, the hypothesis that weak signals contributed to the Challenger disaster is seriously (sorry) weakened. She chastised the Nobel Laureate physicist Richard Feynman for demonstrating the rigidity of rubber at freezing temperatures by performing this scientific experiment on television. O-ring erosion also proves to be the only example providing support for the inference that there were routine signals in the workplace. Here again she repeats what she claims in her book, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, that 'For Marshall and Thiokol working engineers assigned to the Solid Rocket Booster Project, multiple instances of erosion indicated not danger, but assurance that they correctly understood the problem'.54 But for which Marshall and Thiokol working engineers did the erosion indicate that there was no danger? Certainly not Boisjoly – was he not a working engineer? Documented proof is required to back up such sweeping conclusions. The assertions made do not support the conclusion that signals at NASA were mixed, weak and routine. 'Hot leaks and resulting catastrophic failure' ... 'horrifying flight failure unless a solution were implemented to prevent O-ring erosion' ... 'The result would be a catastrophe of the highest order - loss of human life'. Do these signals sound mixed, routine or weak?
THE QUESTION OF "HARD DATA"

After the no-launch recommendation of Morton Thiokol, the engineers were challenged to come up with hard data to support their no-go recommendation. There was no "hard data" or "solid engineering data", e.g., flights or tests at lower temperatures that showed blow-by. But they knew that when rubber gets cold, it gets hard. So why did they need tests to prove what they already knew? They already had some tests showing that the colder it got, the slower the squeeze. So, if by some hook or crook some of the
previous flights had escaped damage, did that mean that they had operated under safe conditions? The notion of requiring "hard data" in every situation one faces before making a judgment is questionable. What of a physician examining a patient with a serious flu who wishes to go outside without warm clothing in 0 degree F weather? The physician may know that, generally speaking, cold temperature aggravates flu. However, she has also had patients whose flu worsened in high temperatures. And she has no data base for patients going outside in 0 degree F temperatures. Would she say, 'I have no hard data for patients going outside in 0 degree temperatures. Since my data base is empty, I must say that I cannot advise against this. In the absence of a data base, it must be o.k. to go outside in the cold. After all, I did have one patient whose flu condition became worse even on a hot day'? This is recognizable nonsense. Why? Because physicians do not operate as strict positivists. They make clinical judgments based on whatever evidence is available. The absence of a data base is not grounds for endorsing risky action. But this is what the engineers were asked to do. They were pressured to endorse an action because they lacked "hard data". The physician advises sick patients not to go out in cold temperatures because this is a piece of general knowledge from which she advises patients even when there is no data base to rely upon. In fact, it is in the absence of a data base that many judgments of physicians are required to be made. This is why there is such a phrase as 'clinical judgment' in the first place. And when there is a life and death risk at stake, does it make sense to insist on "hard data" before a professional judgment is heeded? And does it make sense to take a life and death risk because there is an absence of "hard data"?
ETHICAL DECISION MAKING

It would appear that when one is making decisions that involve life and death risks for the risk takers (not the risk makers), one is charged with a grave ethical responsibility. To decide to take a life and death risk that should, in the professional opinion of those who
are expert in the subject matter, not be taken, is to treat casually the lives of others, which are in one's hands. Such a casual attitude towards the lives of others is at best, in the distinguished ethicist Patricia Werhane's vocabulary, reflective of a poverty of moral imagination, and at worst, not to put too fine a point on it, simply immoral. To be amoral in a situation where morality is required is to be immoral. No one in the teleconference ever brought up moral considerations. The engineering risks were discussed as if one were referring to the possibilities of mechanisms operating or not operating successfully, without any regard to the ethical consequences of what would happen if they did not operate according to expectations. It is astonishing, in the end, that moral consequences were not taken into account. The decision makers were certainly moral dullards. It was not the absence of hard data that was the problem; it was the absence of soft data.
CONCLUSION

A survey of the literature on the Challenger disaster shows a great deal of ambivalence. There appears to be a puzzling phenomenon whereby relatively recent literature on the Challenger disaster (with the exception of Jensen and of Schinzinger and Martin), such as that of Vaughan, Weick and Sagan, Rosa Lynn B. Pinkus, Larry J. Shuman, Norman P. Hummon, Harvey Wolfe, Barry A. Turner and Nick F. Pidgeon, tends to be exculpatory of NASA, though Vaughan's book on the issue seems inconsistent on this point. Earlier literature, such as that of Schwartz, McConnell, Trento, and Casamayou, seemed more critical of NASA. These are only some examples, but nevertheless there does seem to be some sort of trend towards an apologetics of the past. The arguments of Vaughan in particular seem to remove moral responsibility from the agents responsible for decision making and in this respect are philosophically troubling.
NOTES

1. In an otherwise brilliant book, Malcolm McConnell provides fuel to the "accident" hypothesis with the word 'malfunction' in the title of his work, Challenger, A Major Malfunction, London: Simon & Schuster, 1987. While this is by no means his intention, the word 'malfunction' does connote a technical or parts breakdown which could fall under the accident hypothesis.
2. Preface by Alton Slay, Chairman, Committee on Shuttle Criticality Review and Hazard Analysis Audit, Post-Challenger Evaluation of Space Shuttle Risk Assessment and Management, Washington: National Academy Press, 1988, p. v. This standard source is omitted from Diane Vaughan's The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA.
3. Ibid., p. 33. While Diane Vaughan argues in The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA that 'Managers were, in fact, quite moral and rule abiding as they calculated risk' and '... the launch decision was rational calculation but not amoral ...' and 'flying with acceptable risks was normative in NASA culture', that does not make the risk calculations morally acceptable. A significant number of enlightening scientific, philosophical and engineering books and key articles analyzing risk assessment and engineering and management judgments are omitted from her list of sources, including Eliot Marshall's discussion, 'Feynman issues his own shuttle report, attacking NASA's risk estimates', in Science (the journal of the American Association for the Advancement of Science), Vol. 232, June 1986, p. 1596, of the late distinguished physicist and Presidential Commission member Professor Feynman's discussion of risk calculation based on actual performance data. Professor Feynman was shocked to learn that NASA management claimed that the risk factor of a launch crash was 1 in 100,000, which translated into a shuttle launch every day for the next 280 years without one equipment-based failure. The figure of 1 in 100,000 was a subjective engineering judgment made without the use of actual performance data. Unless the risk estimates are based on some actual performance data, according to Feynman, 'it's all tomfoolery'. He calculated the odds at 1 in 100 when utilizing past performance data as a data base. Professor Feynman analyzed the 'safety margin of three' to which Ms. Vaughan refers. As he explains, 'If a bridge is built to withstand a certain load ... it may be designed for the materials used to actually stand up under three times the load ... But if the expected load comes on to the new bridge and a crack appears in a beam, this is a failure of the design. There was no safety factor at all, even though the bridge did not actually collapse because the crack only went one-third of the way through the beam. The O-rings of the solid rocket boosters were not designed to erode. Erosion was a clue that something was wrong. Erosion was not something from which safety could be inferred'. Major General Donald Kutyna, Director of Space Systems and Command Control, USAF, and Presidential Commission member, strongly agreed. For General Kutyna, the O-ring evidence was analogous to evidence that an airliner wing was about to fall off. For a discussion of Morton Thiokol's managers attempting to justify high levels of
risk by appropriating the expert role of risk measurement and representing it to others as engineering judgment, see Joseph R. Herkert, 'Management's hat trick; misuse of Engineering Judgment in the Challenger incident', Journal of Business Ethics, Vol. 10, No. 8, August 1991, pp. 617-20. (Yet another source not consulted by Ms. Vaughan.) Further sources omitted from her discussion include Report to the President, Report at a Glance, by the Presidential Commission on the Space Shuttle Challenger Accident, Washington, D.C.: GPO, 1986, Chapters III, IV, and Conclusion, and Patricia H. Werhane, 'Engineers and management: The challenge of the Challenger incident', Journal of Business Ethics, Vol. 10, No. 8, August 1991.
4. Ibid., p. 33. In a recent work, Caroline Whitbeck cites how, in a management training exercise which is a facsimile of the Morton Thiokol managers' decision call, two-thirds of predominantly male manager trainees decided to take the risk to launch, whereas most of the trainees in all-female management programs did not. She reports that in her trials with upper-level engineering students she obtained different results. Cf., Caroline Whitbeck, Ethics in Engineering Practice and Research, Cambridge: Cambridge University Press, 1998, pp. 145, 155.
5. Ibid., p. 6. Charles E. Harris, Jr., Michael S. Pritchard and Michael J. Rabins write that '... no one presented them [the astronauts] with the information about the O-ring behavior at low temperatures. Therefore, they did not give their consent to launch despite the O-ring risk, since they were unaware of the risk.' Cf., Engineering Ethics, Concepts and Cases, Belmont: Wadsworth Publishing Company, 1996, p. 193. Claus Jensen writes, 'At the beginning of April, the Rogers commission summoned John Young and a group of other space shuttle astronauts ... During this session, the astronauts reiterated that they had never been told about the problems with the solid rocket booster'.
Cf., Claus Jensen, No Downlink, A Dramatic Narrative about the Challenger Accident and Our Time, Barbara Haveland (trans.), New York: Farrar, Straus and Giroux, 1996, pp. 320-321. In private correspondence with the present author on March 28, 2002, Boisjoly wrote, 'I KNOW for a FACT that the astronauts on Challenger did NOT KNOW about the problem with the O-rings at temperatures below 50 degrees F. I also know they were kept in the dark about the specifics about the problems being experienced with the SRM [Solid Rocket Motor] joints and O-Rings during actual flights. I base the first statement on conversations I had with Mike Smith's widow. He was the pilot on Challenger. The second statement I base on the shocked look on Astronaut Sally Ride's face when she learned about how poorly the SRM joints were working during flights. She was the first woman astronaut in space and was a member of the Presidential Commission investigating the Challenger disaster.' Major General Donald Kutyna makes the only reference the present author has been able to discover in the literature that states that some astronauts did know about the O-rings, but he does not state whether these astronauts were ones who were aboard the Challenger or ones who shared this information with the astronauts and civilians on board the Challenger. Cf., Richard Sykes (ed.), No Ordinary Genius, New York and London: W.W. Norton & Co, 1994, pp. 206-7, 219.
Kathryn Cordell Thornton, the celebrated astronaut who was part of the 100-strong astronaut corps at the time of the Challenger, and now Director of the Center for Science, Mathematics and Engineering Education at the University of Virginia, in a direct interview with the present author, accompanied by the distinguished business ethicist Patricia Werhane, held at her office in Thornton Hall at the University of Virginia on 1 September 1999, reported that astronauts were present at briefings and thought that there was discussion of the O-rings, but did not think that Weitz, their representative, had committed perjury at the hearings. Did these astronauts know, but not inform Weitz? The question remains: if some astronauts did know, was this knowledge communicated to the seven people aboard the Challenger? It does not appear that Christa McAuliffe was aware of the O-ring dangers. Corrigan relates Christa McAuliffe's account of what the president and the pilot from the January flight told them in the White House: 'They were told about the dangers of the space program. She said that one could be intimidated thinking of all that he had said until you realize that NASA employed the most sophisticated safety features, and they would never take any chances with their equipment, much less an astronaut's life'. In pre-flight interviews she states that she has no fears about going on the space shuttle. Cf., Grace George Corrigan, A Journal for Christa, Christa McAuliffe, Teacher in Space, Lincoln and London: University of Nebraska Press, 1993, pp. 92, 121. When interviewed by Space News Roundup, a newspaper at the Johnson Space Center, she said that, 'When the Challenger had the problem back in the summer with the heat sensors on the engine ... and ... one of Boston's papers called me and asked me what I thought was wrong ... I said, "I have no idea. What has NASA said?"' With respect to the Challenger launch, Corrigan writes that '...
Christa felt no anxiety about the flight. "I don't see it as a dangerous thing to do," she said, pausing for a moment. "Well, I suppose it is, with all those rockets and fuel tanks. But if I saw it as a big risk, I'd feel differently."' (emphasis added, cf., A Journal for Christa, pp. 115, 118) This emphasized sentence is highly significant. It implies that if Christa had been told of the real risk, she would not have flown! Contrast this with Roger Boisjoly's private diary entry after the eve-of-launch teleconference: 'I sincerely hope this launch does not result in a catastrophe. I personally do not agree with some of the statements made in Joe Kilminster's written summary stating that SRM-25 is okay to fly'. Cf., No Downlink, A Dramatic Narrative about the Challenger Accident and Our Time, Barbara Haveland (trans.), New York: Farrar, Straus and Giroux, 1996, p. 310. It seems that Roger Boisjoly and Christa McAuliffe had very different information relevant to the safety of the mission. Schinzinger and Martin state that the Challenger astronauts were not informed of particular problems such as the field joints and that 'they were not asked for their consent to be launched under circumstances that experienced engineers had claimed to be unsafe'. Cf., Roland Schinzinger and Mike W. Martin, Introduction to Engineering Ethics, Boston, New York, Toronto: McGraw Hill, 2001, p. 101. Of course, it can well be argued that even if the actual risks had been made known to the astronauts and the civilians, their informed and voluntary consent to go up on the shuttle still would not have
been sufficient to justify their going up. John Stuart Mill stated in his justly famous essay, On Liberty, that 'Over himself, over his own body and mind, the individual is sovereign'. No one, therefore, possesses the right to harm the body or take the life of another individual or, by extension, to authorize conditions that jeopardize the body or life of another. Mill further argues that no one should be allowed to harm another or to sell himself, or allow himself to be sold, as a slave. Cf., John Stuart Mill, On Liberty, Gertrude Himmelfarb (ed.), Harmondsworth: Penguin Books, 1979, Part V, Applications, p. 173. In the case of selling oneself, or allowing oneself to be sold, as a slave, one revokes one's own future freedom, and one should not be allowed to do this, even voluntarily. One cannot be allowed to "sell" or volunteer one's body to an organization to do with that body what that organization pleases, especially if what is done is life threatening. If one grants that human life is a necessary condition for human freedom, it follows that one should not be allowed to consent to place one's own or others' lives at dangerous risk. In other words, the consent of the victim is not enough. According to James Childress, 'The consent of the victim should not eliminate our concern for the risk-benefit ratio. It is true that we allow persons to accept risks for themselves all the time – for example, in mountain climbing, skydiving, and other risky sports ...' So why not let persons take any risks that they are willing to accept in space travel? [Childress's discussion is aimed at decisions taken in medical research, but it is equally relevant to management decision making.]
'The answer is that it is one matter for people to accept certain risks for themselves; it is another matter for a specialized profession [here one can substitute 'managers' for 'medical research scientists'], acting on behalf of the entire society, to impose those risks on individuals even when those individuals are willing to accept them. The danger of exploitation is too great.' Cf., James F. Childress, Priorities in Biomedical Ethics, Philadelphia: The Westminster Press, 1981, p. 60. It should be pointed out that nowhere in her 575-page book on the Challenger launch decision does Ms. Vaughan speak to the ethical issue of whether the astronauts or the civilian passengers were informed of the problems with the O-rings. Sources were available to her at the time of her writing which do discuss the lack of consent on the part of the astronauts and civilians, such as Corrigan's A Journal for Christa, Christa McAuliffe, Teacher in Space (published in 1993); Schinzinger and Martin's excellent book, Introduction to Engineering Ethics (originally published in 1989); and Mark Maier and Roger Boisjoly, 'Roger Boisjoly and the Space Shuttle Challenger disaster', Binghamton: SUNY-Binghamton, School of Education and Human Development, Career and Interdisciplinary Studies Division, videotape instructional package. Ms. Vaughan makes no mention in her book of the opinions of these sources, which speak to the important issue that neither the astronauts nor the civilian passengers were informed of the O-ring dangers. With the exception of the Maier videotape package, these works are conspicuously absent from her seemingly comprehensive and thereby systematically misleading bibliography. That it is systematically misleading follows from the fact that the sources which are absent are those which present views which are absent from her discussion,
such as the lack of consent on the part of the astronauts and the civilians. As Alice says, 'it gets curiouser and curiouser'. The fact that the astronauts were not informed of the O-ring dangers, which is discussed in the Maier package, is not mentioned. More strangely yet, a book that is frequently cited and thus does form part of Ms. Vaughan's active data base, Richard S. Lewis' book, Challenger, The Final Voyage, New York: Columbia University Press, 1988, makes the point a number of times that the astronauts were not informed of the O-ring problem: 'Along with the general public, the astronauts who were flying the shuttle were unaware of the escalating danger of joint seal failure. So were the congressional committees charged with overseeing the shuttle program. NASA never told them that the shuttle had a problem'. (p. 76) Lewis also quotes the Presidential Commission report: 'Chairman Rogers raised the question of whether any astronaut office representative was aware [of the O-ring problem]'. Weitz [a representative] answered: 'We were not aware of any concern with the O-rings, let alone the effect of weather on the O-rings ...' (p. 183) It is indeed perplexing that Ms. Vaughan does not discuss the fact that the astronauts knew nothing of the risk they were taking - although she does mention (without commenting on the ethical implications) that they were not informed of the fateful teleconference. They died as sitting ducks, not as heroes and heroines. It is puzzling that she omits discussion of the fact that the astronauts and civilians were left unaware of their red flagged plight, depicted with explicit predictions of their horrifying fate, when this matter is actually discussed in detail in Lewis, a source that she repeatedly cites.
6. McConnell, in his aptly titled chapter, 'Rehearsal for Disaster', p. 6.
7. Slay Report, p. 17.
8. Slay Report, p. 19.
9.
Richard Cook, 'The Rogers Commission Failed, Questions it never asked, answers it didn't listen to', The Washington Monthly, November 1986, p. 14.
10. Slay Report, p. 108.
11. Slay Report, p. 111.
12. Slay Report, p. 104. This issue is also emphasized in the House Report, which states that 'The management of the Shuttle program is complex and diversified and it is not always clear who has authority or responsibility'. Cf., Investigation of the Challenger Accident, Report of the Committee on Science and Technology, House of Representatives, Ninety-Ninth Congress, Second Session, Union Calendar No. 600, House Report 99-1016, Washington, D.C.: U.S. Government Printing Office, 1986, p. 166. And again, General Stafford, a former Gemini and Apollo astronaut, also stated, '... I was never comfortable with the lead center type of management structure, after having seen how satisfactorily Apollo worked'. The Apollo program was managed out of headquarters. Cf., House Report, p. 167.
13. Slay Report, p. 3. In his excellent book, Where the Law Ends, The Social Control of Corporate Behavior, Christopher Stone argues, eleven years before the Challenger disaster, for the establishment of a Special Public Director in charge of safety in the case of product safety and proposes that such an officer
should have veto power over production. Cf., Where the Law Ends, New York: Harper and Row, 1975, p. 183. 14. Cf., Investigation of the Challenger Accident, Report of the Committee on Science and Technology, House of Representatives, Ninety-Ninth Congress, Second Session, Union Calendar No. 600, House Report 99-1016, Washington, D. C.: U.S. Government Printing Office, 1986, pp. 122-3. 15. Malcolm McConnell, Challenger, A Major Malfunction, London: Simon & Schuster, 1987, p. 251. 16. Ibid., p. 49. This appears to be a different conclusion from that drawn by the House Committee on Science and Technology, which states that, ‘Launch abort during SRB burn appears impossible ...’ Cf., Investigation of the Challenger Accident, Report of the Committee on Science and Technology, House of Representatives, Ninety-Ninth Congress, Second Session, Union Calendar No. 600, House Report 99-1016, Washington, D.C.: U.S. Government Printing Office, 1986, p. 24. It should be pointed out, however, that the House Committee based its opinion on the evidence taken from Captain Robert L. Crippen, who presented information to the Rogers Commission that he did not ‘ ... know of an escape system that would have saved the crew from the particular incident that we just went through. I don’t think it is possible to build such a system’. Cf., House Report, pp. 76-77. Should the limits of Crippen’s knowledge be taken as definitive of reality? According to Roland Schinzinger and Mike Martin, McDonnell Douglas had designed an abort module with its own thruster. ‘It would have allowed the separation of the orbiter, triggered (among other events) by a field-joint leak. But such a safety measure was rejected as too expensive because of an accompanying reduction in payload’. Cf., Roland Schinzinger and Mike W. Martin, Introduction to Engineering Ethics, Boston, New York, Toronto: McGraw Hill, 2001, p. 101.
It is important to recall that the crew and passengers were alive and conscious after the spectacular explosion. During their nearly three-minute descent, some crew members had activated and used their emergency air packs. Cf., Ann Larabee, Decade of Disaster, Urbana and Chicago: The University of Illinois Press, 2000, pp. 22-23. According to Lewis, given that ‘the probable cause of death was ocean impact ... a crew module equipped with a parachute descent system would have saved them - and averted the most horrendous result of the accident [sic]’. Cf., Challenger: The Final Voyage, p. 178. Oddly enough, a source which Ms. Vaughan does cite repeatedly (but not on this issue) and thus does form part of her active data base, Richard S. Lewis’ work published by Columbia University Press, Challenger, The Final Voyage, referred to above, states in one place that it was not an issue of possibility that the Challenger was not equipped with an abort system, but an issue of policy: ‘The question of whether such [abort] systems are feasible for the orbiter is not one of engineering, but of policy’. (p. 178) Earlier spacecraft had been equipped with launch escape systems, thus proving by fact that escape systems were not only possible, but were actual. Some of the confusion as to whether an abort system was possible may have arisen because of ambiguity as to whether one is referring to ejection when solid rocket fuel is utilized or liquid rocket fuel. But there was no necessity in the
POST-CHALLENGER INVESTIGATIONS
choice of solid fuel rockets, which von Braun likened to sending the astronauts and civilians up on a roman candle. (Lewis, p. 158) A roman candle, once lit, cannot be put out. Liquid fuel rockets (the Russians will use no other kind) can be shut off. According to NASA, ‘Crew escape and launch abort studies will be complete on October 1, 1986, with an implementation decision in December 1986’. Apparently, this would imply (if the word ‘implementation’ has any meaning at all) that an abort system is practical and feasible and was thought to be so by NASA. Cf., Recommendation 7, National Aeronautics and Space Administration Report to the President, Actions to Implement the Recommendations of the Presidential Commission on the Space Shuttle Challenger Accident, Executive Summary, Washington, D. C.: GPO, 1986, p. 4. (This is yet another official source which mysteriously is not referred to by Ms. Vaughan and is not to be found in her seemingly all-encompassing data base.) The inability of the O-rings to seal was what allowed the superhot combustible gases (which were over 5,000 degrees Fahrenheit, a simple fact never mentioned in Ms. Vaughan’s account) to escape and burst into flame. The flame plume was deflected rearward by the aerodynamic slipstream. The aft field joint of the right Solid Rocket Booster faces the External Tank. The External Tank and the right Solid Rocket Booster are connected by several struts, including one near the aft field joint that failed. [read: the combustion chamber rode on top of the fuel tank] These deflections directed the flame plume onto the surface of the External Tank and the swirling flame mixed with leaking hydrogen from the External Tank. No one, to the knowledge of the present author, has investigated why the External Tank was leaking hydrogen; Ms. Vaughan makes no mention of it at all. The External Tank exploded, separating the crew cabin from the Orbiter.
The force of the blast probably did not seriously injure any of the crew members. The cabin dropped 65,000 feet in two minutes and forty-five seconds. (A separate crew compartment parachute would have saved all aboard.) The Challenger’s cabin and crew slammed into eighty feet of water at 207 miles an hour. No one survived. Nowhere in her 575 page tome is it mentioned that there was no necessity to omit an escape system for the crew and passengers and that such an omission was ethically unpardonable. Vaughan’s omission of this discussion is also ethically unpardonable. The fact that a crew module equipped with explosive bolt hatches, space pressure suits (unlike the costumes they were given which were equivalent to the unpressurized space suits worn by actors and actresses on Star Trek) and a parachute descent system would probably have saved the lives of all civilians and astronauts is not to be found in the 575 pages. The fact is that the civilians and the astronauts did not have to die. Why these seven lives were lost is a question never raised in one sentence of her book. 17. Joseph J. Trento, Prescription For Disaster, New York: Crown Publishers, Inc., 1987, p. 239. Trento also authored (with Richard Hirsch) The National Aeronautics and Space Administration. According to the House Committee on Science and Technology, in the source selection statement, ‘Selection of Contractor for Space Shuttle Program, Solid Rocket Motors’, a statement was
included that Thiokol ranked fourth out of the four bidders in the design category (See Appendix V-A). Cf., House Report, p. 52. 18. Op. cit., p. 21. 19. Ibid., p. 22. 20. Ibid., p. 73. 21. Ibid., p. 107. 22. Ibid., p. 107. 23. Ibid., p. 139. 24. Op. cit., p. 58. 25. McConnell, pp. 58-9. 26. McConnell, pp. 131-5. 27. McConnell, pp. 131-5. 28. Trento, p. 45. 29. Rosa Lynn B. Pinkus, Larry J. Shuman, Norman P. Hummon, Harvey Wolfe, Engineering Ethics, Balancing Cost, Schedule, and Risk, Lessons Learned from the Space Shuttle, Cambridge: Cambridge University Press, 1997, p. 316. 30. Ibid., p. 324. With respect to whistleblowing policy, Schinzinger and Martin report that ‘Today NASA has a policy that allows aerospace workers with concerns to report them anonymously to the Battelle Memorial Institute in Columbus, Ohio, but open disagreement still invited harassment for a number of years’. Cf., Roland Schinzinger and Mike W. Martin, Introduction to Engineering Ethics, Boston, New York, Toronto: McGraw Hill, 2001, p. 104. Of course, common sense dictates that such whistleblowing must eventually identify the source for it to be effective. More importantly, the Institute would not possess any veto power over flights. 31. Cf., Diane Vaughan, ‘The Trickle-Down Effect: Policy Decisions, Risky Work, the Challenger Tragedy’, California Management Review, Vol. 39, No. 2, Winter 1997, pp. 83-4. ‘Risky Work’ is a phrase that covers over the reality, which is that senior managers decide that such work is to be carried out. The risk is in the decision to create such work, not in the work itself, which is the effect of the decision. ‘Risky assignments’ might be a more accurate descriptive phrase. 32. Howard S. Schwartz, Narcissistic Process and Corporate Decay, The Theory of the Organization Ideal, New York and London: New York University Press, 1990, p. 75. 33. Karl E.
Weick, Review of The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, ASQ, June, 1997, p. 399. Consider this astonishing account of erosion: ‘It should be pointed out that erosion of the O-rings is not necessarily a bad thing. Since the solid rocket boosters are only used for the first few minutes of the flight, it might be perfectly acceptable [to whom?] to design a joint in which O-rings erode in a controlled manner. [The present author did not make this up] As long as the O-rings don’t [sic] completely burn through before the solid boosters run out of fuel and are
jettisoned, this design should be fine’. Cf., Charles B. Fleddermann, Engineering Ethics, Upper Saddle River, NJ: Prentice-Hall, 2001, p. 8. 34. Diane Vaughan, ‘The Trickle-Down Effect: Policy Decisions, Risky Work, the Challenger Tragedy,’ California Management Review, Vol. 39, No. 2, Winter 1997, pp. 86-7. 35. Barry A. Turner and Nick F. Pidgeon, Man-Made Disasters, Second Edition, Oxford: Butterworth Heinemann, 1997, p. 182. One cannot make blanket statements about “acceptable erosion”. The question is, acceptable under what conditions? In private correspondence on May 9, 2002, Boisjoly wrote to the present author that, ‘We were all trapped in a GO FOR LAUNCH MINDSET AT NASA. Everyone on the program knew with 100% certainty that NO MANAGER AT EITHER NASA OR MTI would ever stop the flight to fix the joints so our energy was used to make the flights as safe as humanly possible while a series of subscale 10 inch diameter tests were run to characterize the erosion anomaly.’ (emphasis his) It is illegitimate to utilize hindsight to then argue that engineers thought that the previous flights were safe. 36. James Gleick, Genius: The Life and Science of Richard Feynman, NY: Random House, 1993. Gleick’s ingenious argument, however, need not be relied upon. The present author learned from private correspondence with Boisjoly on March 27, 2002 that the documentary evidence presented at the teleconference (test results of cold gas blow-by tests at 30 degrees F which showed no leakage, used by management to argue that the joints sealed at these temperatures) was not of seal tests at all but of a test ring which, as a solid block of metal, naturally possessed no deflection characteristics. Cf., wysiwyg://partner.12/http://onlineethics.org/essays/shuttle/telecon.html.
This is an electronic version of what Boisjoly originally presented at the ASME (American Society of Mechanical Engineers) Winter Annual Meeting in Boston, Massachusetts, December 13-15, 1987. This evidence, which also formed the basis of the opinion of the Presidential Commission, was seriously flawed because the documents were not of seal tests at all, which is how they were presented, but of tests which only showed blow-by prior to any joint deflection! Thus the statistics generated were based on a flawed data base! The test that showed no gas leakage at 30 degrees F, which appeared to be an argument against launching at low temperatures as a prime factor in the blow-up of the Challenger, was a test that was not of a joint seal! This data was used as an argument by management to say that the joints sealed at 30 degrees F. It is easy to be misled by statistical information and not to consider the actual basis of the tests that are utilized as the data base from which the statistical evidence is derived. In this case, the data base was completely different from the use to which it was put. Since the Rogers Commission Report is out of print, for a clear presentation of the flight data, the reader is directed to Roland Schinzinger and Mike W. Martin, Introduction to Engineering Ethics, Boston, New York, Toronto, et al.: McGraw Hill, 2001, p. 99. This work, put together by a senior engineer and a senior philosopher, is of special merit. When one sees that there is no incident of O-ring thermal distress in 17 flights at temperatures of above 65 degrees (and incidents of O-ring thermal distress in 3
flights at these temperatures), the picture becomes clear. In private correspondence between Boisjoly and the present author on April 2, 2002, Boisjoly emphasized that ‘... you must IGNORE all temperature data from the Presidential Commission Report. It is ... because that data was not available to me or my colleague engineers and managers prior to the Challenger launch. The temperature data existed within the bowels of Morton Thiokol but I did not have the power to force those that had it to give it to me. I asked for the data in September 1985 but did not receive any data. It is this very difference in what was known before the launch versus the data that was collected by the Presidential Commission after the disaster because they had both the Power and the Authority to get the data, that causes everyone to draw the wrong conclusions about the data. So just forget you ever heard about the Presidential Commission temperature data because it is not relevant to the Challenger launch decision. The ONLY Relevant temperature data is the preliminary data obtained after the 15th flight at temperatures of 100, 75 and 50 degrees plus the temperature of the 15th flight O-Rings at 53 degrees. This is the sum total of all the temperature data that is relevant to the launch decision. Anything else is bogus and does not warrant or belong in any truthful discussion. The relevancy of the data is very simple. When a piece of O-Ring material was tested at 100 degrees it remained in contact with both metal test plates but when tested at 75 degrees there was O-Ring separation from one of the plates for 2.4 seconds and 600 seconds at 50 degrees. CONCLUSION: As the temperature decreases the sealing performance of the O-Ring gets worse. At freezing temperature or below, it will get much worse. IT’S REALLY THAT SIMPLE.’ (emphasis his) The post-Challenger literature is replete with a misuse of data. For example, in an otherwise thoughtful book, Charles E. Harris, Jr., Michael S.
Pritchard and Michael J. Rabins write on page one of their 1996 textbook, Engineering Ethics, Concepts and Cases, published in Belmont by Wadsworth Publishing Company, that, ‘The technical evidence was incomplete but ominous: there appeared to be a correlation between temperature and resiliency. Although there was some leaking around the seal even at relatively high temperatures ...’ Such a statement lends support to the thesis that the launch decision was reasonable, since it would then appear that the engineers were aware that there was leakage at high temperatures and thus would not consider that cold temperatures represented a unique risk. But this implies that the engineers knew all that they needed to know concerning leakage at high temperatures. According to W. Robison, R. Boisjoly, D. Hoecker and S. Young, ‘The data necessary for a calculation of O-ring temperatures was thus not collected all along during the shuttle history. When Boisjoly asked for that data in September [1985], along with much other data, any one of which might have been the crucial missing piece to explain the anomalous cause, it was not supplied. In fact, the engineers received none of the data they requested. ... For example, an O-ring taken from storage for test at 100 degrees Fahrenheit would not be 100 degrees Fahrenheit until it warmed up. Similarly, even if one knows the ambient air temperature at the time of launch, one still needs to calculate the temperature of the O-ring’. Cf., ‘Representation and Misrepresentation: Tufte and the Morton Thiokol Engineers on the
Challenger’, Science and Engineering Ethics, Volume 8, Issue 1, 2002, p. 75. (One must also recall that the leakage at 75 degrees Fahrenheit was of a nozzle joint and not of a field joint. Cf., n. 43.) 37. Karl E. Weick, Review of The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, ASQ, June, 1997, p. 397. 38. Scott D. Sagan, Review of The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, ASQ, June, 1997, p. 403. 39. Scott D. Sagan, Review of The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, ASQ, June, 1997, p. 404. 40. Casamayou, pp. 30-31. (emphasis added) It should be noted that this book was published three years prior to Vaughan’s book and four years prior to the reviews cited. It seems to have miraculously escaped her misleadingly voluminous-appearing bibliography. 41. Casamayou, p. 47. 42. Casamayou, p. 49; Rogers Commission Report, 1: 88-89; 4: 790-791, sec. 1413-1414. 43. John and Mary Gribben, Richard Feynman, A Life in Science, New York, London: 1997, pp. 234-5. 44. Diane Vaughan, ‘The Trickle-Down Effect: Policy Decisions, Risky Work, the Challenger Tragedy’, California Management Review, Vol. 39, No. 2, Winter 1997, p. 87. Consider how in The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA Vaughan arrives at her hypothesis of weak signals with regard to the O-ring and temperature issue: ‘... the next flight sustained the greatest erosion in program history. With a calculated O-ring temperature of 75 degrees, STS 51-B contradicted the posited relationship between damage and cold temperature, further weakening its salience as a possible causal factor’. (p. 246) Her phrase ‘the greatest erosion in program history’ sounds powerful, but what does it mean? In response to this statement, Boisjoly in private correspondence with the present author on May 9, 2002 stated, ‘51-B, 17th flight, April 29, 1985 is a Nozzle Joint not a field joint.
The distinction is very important because a Nozzle joint CANNOT BURN THROUGH because the secondary O-ring is a FACE SEAL and not a bore seal, like the two field joint o-rings. She [Vaughan] does not have a clue about the difference between the two joints and neither does anyone else.’ (emphasis his) The point here is that there is no contradiction, because the breakdown of this type of seal at 75 degrees was not a danger: it could not burn through. As Boisjoly has communicated to the present author in private correspondence on May 9, 2002, Vaughan routinely intermixes nozzle joints and field joints and in so doing does everyone a disservice. According to Boisjoly, comparing nozzle joints with field joints is like comparing oranges with apples. In her The Challenger Launch Decision, Risky Technology, Culture and Deviance at NASA, Vaughan provides other examples of weak signals. A further example of a weak signal offered is any engineering analysis that did not accord with scientific positivism. (p. 355) One may consider the points raised in the section above entitled ‘The Question of “Hard Data”’. Weick echoes Vaughan’s idea of
warning memoranda as weak signals. (Cf., below n. 52) This, however, stretches credulity. A red-flagged warning making specific reference to strong probable loss of life cannot be construed as a weak signal. (Unless, of course, one arbitrarily defines weak as any message that does not include strictly positivistic data.) Most amazing of all is the statement found in Vaughan, The Challenger Launch Decision, Risky Technology, Culture and Deviance at NASA that the no-launch recommendation was a weak signal. (p. 353) With respect to hard data, her points on the previous page supporting the thesis that the no-launch recommendation was a weak signal contain serious errors and misleading statements. Point 1. (of 6) ignores the degree of difference in O-ring leakage in flights at different temperatures. Points 2. and 5. both conflate an absence of O-ring leakage from stationary tests with dynamic O-ring leakage in flights. Cf., n. 35 above. Points 3. and 6. do not refer to O-rings at all. Point 4. ignores the massive difference in O-ring leakage and in resiliency when correlated with temperature. (Cf., n. 46) As an illustration of her sophistic reasoning, in point 4. she states that ‘Although SRM-15 (STS 51-C), launched with an O-ring temperature of 53 degrees F, had the most blow-by, SRM-22 (STS 61-A), launched with an O-ring temperature of 75 degrees F, had experienced “the next worst blow-by”’. But this “next worst blow-by” is a statement taken out of context. There is a substantial difference between the worst case of blow-by and the “next worst”. In private correspondence with the present author on May 9, 2001, Roger Boisjoly wrote, ‘ ... the only two meaningful flights to compare for blow-by were flights 15 and 22. At low temperature (53 degrees F) we saw huge blow-by and at high temperature (75 degrees F) we saw minor blow-by by comparison.
Couple this with the resiliency data and it really doesn’t take a rocket scientist to make the attempt to stop the launch of Challenger. This was the correct and only data to compare, Period.’ According to W. Robison, R. Boisjoly, D. Hoecker and S. Young, ‘STS 22 was launched at an ambient temperature of 78 degrees F and a calculated O-ring temperature of 75 degrees F. It had experienced a small amount of blow-by. STS 15 was launched with a calculated O-ring temperature of 53 degrees F and experienced a substantially greater amount of blow-by. One conclusion to draw was that the lower the temperature of the O-rings, the greater the blow-by and the closer the booster joint approaches complete failure. A second conclusion was that the primary O-rings could not be depended upon to seal at what anyone would consider a “normal” temperature – 75 degrees F.’ Does this sound like a mixed signal? (This reminds one of the infamous case of the Pinto, ‘Unsafe at any speed’.) Cf., ‘Representation and Misrepresentation: Tufte and the Morton Thiokol Engineers on the Challenger’, Science and Engineering Ethics, Volume 8, Issue 1, 2002, p. 68. Her “hard data” does not support the conclusion that the no-launch recommendation was a weak signal. But she supplies the best argument of all that the no-launch recommendation was not a weak signal when she writes that ‘... in all the years of association between Thiokol and Marshall, this was the first time the contractor had come forward with a no-launch recommendation’. (p. 305) Such a stunning deviant signal, a first in history, cannot be interpreted as weak or mixed. But one must ask oneself, why would
one attempt to portray the no-launch decision as a weak signal in the first place? The only answer that comes to mind is that it is an effort to explain why this signal was not heard and thus to exonerate those who chose not to hear it. But the characterization of the no-launch decision as weak is preposterous. As Boisjoly has related in private correspondence with the present author on May 9, 2002, ‘Even the Presidential Commission concluded that there was sufficient information in the flawed charts and presentation to stop the launch and fix the joints’. The no-launch decision was unprecedented and went against obvious heavy pressure. It had to be very strong indeed to command the unanimous backing of 14 Morton Thiokol managers and engineers. The extremely intensive attempts of Boisjoly to convince Thiokol managers not to launch can hardly be perceived as “weak”. Indeed, the thesis of weak signals is so ludicrous that it belongs to the genre of opera buffa. 45. Casamayou, p. 43 and Rogers Commission Report, 2: H-41, 44. Karl E. Weick, Review of The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, ASQ, June, 1997, p. 399. In order to understand how Vaughan’s conclusions are not based on a valid evidence base, for illustrative purposes, one may take a specific example of her understanding of O-ring erosion. In The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, she writes, ‘The recommendation to fly after [Space Transport System] STS-2 [the second flight on November 12, 1981] was made first by Thiokol and Marshall working engineers and then conveyed to their managers in the work group. Consensus existed that erosion did not threaten the ability of either the primary or the secondary O-ring to seal the joint.
Conforming to the guidelines of NASA’s Acceptable Risk Process, engineers tested and did calculations to learn the cause of the problem and the capabilities and limits of the SRB [Solid Rocket Booster] joint design, producing numerical boundaries for “deviant” [unacceptable] and “normal” [acceptable] performance. Because the erosion on STS-2 fell within these mathematical limits, it was redefined as an ‘acceptable risk’. (p. 150) In private correspondence on May 9, 2002, Boisjoly responded to this description of STS-2. ‘The whole statement is wrong. There was NO acceptable erosion in the specifications. STS-2 was an R & D flight and by definition it’s a test flight and the erosion should have stopped the flights but it did not for reasons I have already stated [Cf., n. 34 above]. There were no recommendations by me or others to continue flying. We didn’t have a say in the process’. Examples of errors in Vaughan’s accounts concerning erosion could be multiplied. To take only one more as an illustration, consider another case where Vaughan refers to “the most extensive erosion” at moderate temperatures. Vaughan writes that ‘Ironically, STS 51-D sustained the most extensive erosion found on a primary O-ring to date. Despite moderate weather that put joint temperature at 67 degrees F prior to ignition ...’ (The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, p. 162) In response to this assertion, in private correspondence with the present author on May 9, 2002, Roger Boisjoly wrote, ‘Flight 51-D is the 16th flight on April 12 - 19, 1985 and Had NO Erosion On Field Joint, so I don’t
know what erosion she is talking about’. (emphasis his) At another point in the correspondence between the present author and Boisjoly, he wonders where Vaughan is getting her data. 46. Karl E. Weick, Review of The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, ASQ, June, 1997, p. 400. In private correspondence the present author asked Boisjoly whether it was right to infer that the escalating concern was about different issues (cold temperature, perhaps?) rather than the acceptable risk. Boisjoly answered on May 9, 2002 that this inference was indeed correct. Acceptable risk did not include cold conditions. He stressed, ‘We knew that cold caused the worst possible problem for the seals after the 15th flight because we now had resiliency test data. THIS ISN’T ROCKET SCIENCE. Warm seals work very well. Cold seals are very lucky to work at all at freezing temperature. Post Challenger testing also proved this.’ (emphasis his) Patricia Werhane points out that it was Mulloy (a manager, not an engineer) who ‘described the O-ring erosion as “accepted and indeed expected – and no longer considered an anomaly.”’ Cf., Patricia Werhane, Moral Imagination & Management Decision Making, New York and Oxford, Oxford University Press, 1999, p. 48. Such evidence cannot be utilized to support the conclusion that engineers thought that erosion was acceptable; managers obviously did. Werhane’s quotation is taken from Bell, T. E. and K. Esch, ‘The Fatal Flaw in Flight 51-L,’ IEEE Spectrum, vol. 24, pp. 36-51 and the Rogers Commission, 1: p. 33. (IEEE stands for the Institute of Electrical and Electronics Engineers) In private correspondence with the author on May 9, 2002, Boisjoly stated that ‘Without low temperature, flight 51-L, Challenger was an acceptable risk by past definition but with low temperature it was a no brainer to stop the flight’. This seems to the present author to be the decisive point.
If some engineers did think that a margin of erosion was acceptable, they did not think that such past experience justified the Challenger launch, and it is at this point that the “toleration of deviance” argument must be made. It was only the managers who voted for the launch and thus only the managers who ultimately thought that the possibility of erosion was safe even under cold temperatures. This “acceptable erosion” tale told by Vaughan and innocently perpetuated by Weick is a fairy tale. As Boisjoly stated in private correspondence with the present author on May 9, 2002, ‘ ... all changed in 1985 when the post flight data from the 15th and 17th flights were discovered. From that point on, I did not want to continue to fly. This is why I wrote my July 31, 1985 memo and others’. He also wrote that, ‘51-C, Flight 15, January 24, 1985 had MASSIVE BLOWBY on two field joints and we came as close to losing a flight as we ever wanted to experience. THIS WAS DUE TO VERY COLD WEATHER PRECEDING THE LAUNCH AND AN O-RING TEMPERATURE AT THE TIME OF LAUNCH OF 53 DEGREES F. This was all confirmed by resiliency testing after this flight’. (emphasis his) 47. Rogers Commission 5: 1568-9. 48. Karl E. Weick, Review of The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, ASQ, June, 1997, p. 397. 49. Maureen Hogan Casamayou, Bureaucracy in Crisis, Three Mile Island, the
Shuttle Challenger, and Risk Assessment, Boulder, San Francisco, Oxford: Westview Press, 1993, pp. 3, 26. 50. Cf., Malcolm McConnell, Challenger, A Major Malfunction, London: Simon and Schuster, 1987, pp. 118-9, and House Report, pp. 283-4. 51. Karl E. Weick, Review of The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, ASQ, June, 1997, p. 395. 52. Karl E. Weick, Review of The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, ASQ, June, 1997, p. 397. 53. Karl E. Weick, Review of The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, ASQ, June, 1997, p. 398. Cf., n. 43. 54. Cf., Diane Vaughan, ‘The Trickle-Down Effect: Policy Decisions, Risky Work, the Challenger Tragedy’, California Management Review, Vol. 39, No. 2, Winter 1997, p. 87. The problem is, to what stage of the problem is she referring? Obviously not after engineers became aware of the cold temperatures issue. In private correspondence with Boisjoly, on May 9, 2000, he pointed out, ‘She [Diane Vaughan] has no continuity or chronology in her book and that is also what makes it worthless’. 55. Cf., Presidential Report, 1, p. 249, Malcolm McConnell, Challenger, A Major Malfunction, London: Simon and Schuster, 1987, pp. 118-9 and House Report, pp. 283-4. 56. Grace George Corrigan, A Journal for Christa, Christa McAuliffe, Teacher in Space, Lincoln and London, University of Nebraska Press: 1993, p. 91.
CHAPTER 9

THE HERALD OF FREE ENTERPRISE DISASTER

The Herald of Free Enterprise disaster provides a lesson in how the lack of an adequate management system and a lack of individual ethical responsibility on the part of both line staff and senior management combined to cause a disaster in which 193 people lost their lives. The disaster provides a reverse lesson in what would constitute a good management system and what would constitute ethical action. What is of special interest in this case is that responsibility for the disaster is explicitly distributed among both senior management (including the Board of Directors) and junior superintendents. While there is not a crystal clear attribution of the major share of the responsibility, such an attribution can be inferred from an examination of the case. What is of interest is to take note of the very clear and explicit acknowledgement of the concept of multiple or shared responsibility for the disaster. ‘From top to bottom the body corporate was infected with the disease of sloppiness.’ (Sheen Report, 14.1)

On the 6th of March 1987 the Roll on/Roll off passenger and freight ferry Herald of Free Enterprise capsized and sank in five minutes off the coast of Zeebrugge, Belgium with the loss of 193 lives. A Formal Investigation was held at Church House, Westminster and at Alexandra House, Kingsway, W.C.1. between April 27, 1987 and June 12, 1987 before the Honourable Mr. Justice Sheen. The Zeebrugge disaster will be considered under the following set of categories: The Immediate Cause of the Disaster and Multi-Responsibility; The Orders; Bottom Up Responsibility; Top Down Responsibility; Dysfunctional Management; The Technical Component; The Will to Communicate; Conclusions of the Court.
THE IMMEDIATE CAUSE OF THE DISASTER AND MULTI-RESPONSIBILITY

According to the Formal Investigation, the immediate cause of the disaster was the fact that the Herald went to sea with her inner and outer bow doors open. Much has been made of the fact that Mr. Mark Victor Stanley, the Assistant Bosun, had opened the bow doors upon arrival in Zeebrugge, accepted that it was his duty to close the bow doors at the time of departure from Zeebrugge, and failed to carry out this duty. After being released from work by the Bosun, Mr. Ayling, Mr. Stanley went to his cabin where he fell asleep and was not awakened by the call ‘Harbour Stations’, which was given over the address system. He remained on his bunk until he was thrown out of it when the Herald began to capsize. However, the matter is not as simple as it appears and even the willing acceptance of responsibility on the part of the Assistant Bosun is not a full explanation of what occurred. The case of Mr. Stanley is a good illustration of the point that a voluntary acceptance of responsibility for an event is not sufficient to account for complete responsibility for the event. The matter becomes more complex when one considers the statement of Mr. David Steel, QC at the inquiry that, ‘Although he, Mr. Stanley, was nominally in charge of closing the doors there was no system whereby he was the only person responsible. He was not the only person who ever closed them and they were often closed by other crew members ... We must be careful not to allow the weight of this tragedy to fall upon the unsupported shoulder of the assistant bosun.’1

The system of checking whether the bow doors were closed fell to the Chief Officer, Mr. Leslie Sabel. But it is difficult to attribute proximate responsibility to the Chief Officer when we consider that, according to Mr. Steel, the regulations under which the ferry was operating required Mr.
Sabel to be in two places at once: checking the bow doors and taking up his position on the bridge for ‘Harbour Stations’.2 Obviously, orders which require someone to be in two places at the same time cannot be considered clear orders; they are definitely ambiguous. This is not to say that what occurred was simply the result of unclear orders. The officer to whom the
orders applied possessed the responsibility of questioning the orders and ensuring that the orders were changed if necessary. Obviously, management is responsible for writing up such unclear orders in the first place. Nonetheless, responsibility does belong to the officer to whom these orders applied. Despite the ambiguity of the regulations, the Report did find that, ‘Mr. Sabel failed to carry out his duty to ensure that the bow doors were closed. He was seriously negligent by reason of that failure. Of all the many faults which combined to lead directly or indirectly to this tragic failure that of Mr. Leslie Sabel was the most immediate.’3

The situation is further complicated by the fact that a third party, the Bosun, Mr. Terence Ayling, as the last man to leave the vicinity of the bow doors, was in a position to close the doors. Mr. Ayling could also have made sure that the doors were closed. As the last person to leave their vicinity, presumably he was in a position to witness that the doors were still open. Who did he think would then close the doors? It seems to the present author that Mr. Ayling certainly possessed a responsibility to close the doors even if it was not part of his job specifications. While it may not have been part of his normal duties, the clear danger that open doors represented should have been sufficient to prompt him to close them. Mr. Ayling, in this respect, resembles the ticket officer of the King’s Cross Underground who did not put out a burning tissue because it was not part of his assigned duties. While it is the responsibility of management to ensure that every person is given clear-cut responsibilities, it is equally the responsibility of line staff, or any person in the chain of command, to act when urgent action is called for, even when such action transcends the line of duty, since a situation left unacted upon may have unimaginable ethical consequences. Mr.
Ayling’s attitude, like that of the ticket officer of the King’s Cross Underground and Mr. Healy of Air New Zealand to be discussed below, reflects a vision of ‘The Buck Stops Here’, i.e., it stops with whoever was assigned this duty, and it falls within the province of no one else. When he was asked why he did not shut the doors he replied that ‘It has never been part of my duties to close the doors or make sure anybody is there to close the doors’.4 The matter of proximate causes does not end with these three figures. One may also inquire what the role of Captain David Lewry was in
all of this. On March 6, Captain Lewry saw the Chief Officer come to the Bridge. Captain Lewry did not ask the Chief Officer if the ship were secure and the Chief Officer also did not make a report. With respect to the three part model of communication, Captain Lewry did not communicate fully in that he should have inquired whether the bow doors were closed. The Chief Officer was equally responsible in that he made no report; the Chief Officer thus also did not communicate. Now, of course, all of this falls under the ill conceived and widely employed system of exception reporting, that is, one makes a report only when one knows that there is a problem. Captain Lewry, one assumes, did not know that the doors were not closed. And the Captain would not ask for a report since, if he did not receive a report, it would mean that the doors were closed. However, while the system of exception reporting is certainly at fault, the Captain and the Chief Officer were equally at fault for not communicating beyond the limits called for by the exception reporting system. The Formal Investigation makes the point that the Captain should have insisted upon receiving a report that the ship was secure.5 The Report does not, however, say by the same token that it was equally the responsibility of the Chief Officer to make the report. With respect to the three parties most directly involved as proximate causes of the disaster, the Court found that, ‘... the capsizing of the Herald of Free Enterprise was partly caused or contributed to by serious negligence in the discharge of their duties by Captain David Lewry (Master), Mr. Leslie Sabel (Chief Officer) and Mr. Mark Victor Stanley (Assistant Bosun) ...’6 While there are mitigating factors in the case of all three of these men, such mitigating factors do not entirely absolve them of all possible individual responsibility.
In the case of the Assistant Bosun, while it may well be true that he was not always the one responsible for seeing to it that the doors were closed, the sloppiness of a system which did not clearly assign this responsibility to any one person illustrates that this was not simply a chance concatenation of errors. Despite that sloppiness, the Assistant Bosun could and should have seen to it either that closing the doors was his responsibility or that, if it was not his responsibility, it was clearly someone else’s. It is important to underscore that this was not an example of a series of unrelated errors such as has been put forward as the leading cause of
disasters by such figures as Perrow. In fact, by investigating a case in detail, it can be discovered that it is not a series of unrelated errors at all, but rather a series of events all of which are generated by a lack of specific orders as to who is responsible for what, an inherent ambiguity in the reliance upon the exception reporting system, a common sense of a lack of responsibility, and a lack of a sense of an overriding safety ethos. One should be duly suspicious of a theory that attributes disasters to a chance concatenation of errors. At the very least, such a theory must be supported by examples taken from case studies. Until such examples are given to support the conclusion of the ‘chance concatenation of errors’ theory, it does not deserve much attention. Despite the ambiguity of the orders (which shall be discussed below), it is the position of the present author that the Chief Officer, Mr. Sabel, both could and should have made it his responsibility to see to it that the doors were closed. The Assistant Bosun, despite the fact that it was not part of his job description, should also have exercised more responsibility in making sure that the doors were closed. Likewise, the Captain, David Lewry, both could and should have made it his responsibility to inquire and make sure that the doors were closed. Whatever else is disclosed in the course of the investigation, each of these three shares a part of the responsibility for what occurred and it is a merit of the official Investigation that a clear sense of co-responsibility among the immediate staff concerned is explicitly stated.
THE ORDERS

According to the Formal Investigation, the Company had issued a set of standing orders which included the following:

01.09 Ready for Sea

Heads of Departments are to report to the Master immediately they are aware of any deficiency which is likely to cause their
departments to be unready for sea in any respect at the due sailing time. In the absence of any such report the Master will assume, at the due sailing time, that the vessel is ready for sea in all respects.7

This is a classic example of the exception reporting method. One reports only when there is something amiss. If nothing is amiss, no report is given. Upon inspection, it should be apparent that such an order is highly unsatisfactory. Winston Churchill was careful to attribute responsibility not only to his informants for not informing him that Singapore was vulnerable to invasion but equally to himself for not inquiring of them. Order 01.09 would seem to be worded in such a way as neither to require those in a position to know to make a report nor to require those to whom a report would be made to ask for its conclusions. The total absence of a report is taken as sufficient evidence that the ship is shipshape. Thus, this order is two steps away from Churchill’s requirements. It would not meet his requirements even if it were to state that a report must be made; it would further have to require a request for such a report in the case of a failure to report. But in this case, a failure to report is taken to mean that everything is in order. Such an order is dangerous because the absence of any report can have two meanings: it can mean that everything is O.K., but it can also mean that everything is not O.K. and that for one reason or another, no report was made. Many variables could intervene to prevent a report being made even when conditions are unsafe. Two crewmen could have had an argument culminating in fisticuffs, as a result of which Mr. Sabel might not have remembered that the doors had been left open.
The Assistant Bosun, instead of falling asleep, may have imbibed too much alcohol and have fallen overboard. And so on. While the Formal Investigation rightly blames Captain Lewry for not insisting upon receiving a report that all was secure, from the wording of Order ‘01.09’ one becomes keenly aware of the failure of management to have written up satisfactory orders to be followed.
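The contrast between exception reporting and a positive-confirmation alternative can be sketched in a few lines of code. The sketch is purely illustrative: the department names and function names are invented for the example, and the ‘Ready for Sea’ check is a simplification of the logic of Order 01.09, not anything the Report itself contains.

```python
# Illustrative sketch only: hypothetical department names. Under exception
# reporting, silence is read as "all clear"; under positive reporting,
# departure is blocked until every department has explicitly confirmed.

DEPARTMENTS = ["deck", "engine", "catering"]

def ready_exception_reporting(deficiency_reports):
    """Order 01.09 style: sail unless a deficiency was actively reported.
    A report that was never made, for whatever reason, counts as 'ready'."""
    return len(deficiency_reports) == 0

def ready_positive_reporting(confirmations):
    """Fail-safe style: sail only when every department has positively
    confirmed readiness; a missing confirmation blocks departure."""
    return all(confirmations.get(dept) is True for dept in DEPARTMENTS)

# The assistant bosun is asleep: no report of any kind comes from "deck".
reports = []                                        # no deficiency reported
confirmations = {"engine": True, "catering": True}  # "deck" never confirmed

print(ready_exception_reporting(reports))        # True: the ship sails
print(ready_positive_reporting(confirmations))   # False: departure blocked
```

The point of the sketch is that the two conventions read the same silence in opposite ways: the exception-reporting rule cannot distinguish ‘everything is in order’ from ‘no one reported’, whereas the positive rule treats the missing confirmation itself as a reason not to sail.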
This is not simply the failure of the management of Townsend Car Ferries Ltd., a subsidiary of P. & O., although it is also the failure of Townsend Car Ferries. It is the failure of management education in general, which has granted legitimacy, for the sake of convenience, to the exception reporting method.
BOTTOM UP RESPONSIBILITY

The sequence of proximate responsibilities does not end with the action or lack of action on the part of Captain Lewry, despite the fact that as Master of the ship, Captain Lewry was ultimately responsible for whatever were to happen on his ship. Above Captain Lewry was Captain John Michael Kirby, the Senior Master as from May, 1985. One of Captain Kirby’s functions as Senior Master was to act as coordinator between all the masters and officers of the ship in order to achieve uniformity in the practices operated by the different crews.8 Captain Kirby had written two memoranda to Mr. M. Ridley, Chief Superintendent, in which he had expressed a strong sense of concern that there was a need for a permanent set of officers. One of these memoranda was dated 22 November 1985; the other was dated 28 January 1987. In the latter memorandum, it is stated: ‘I wish to stress again that HERALD badly needs a permanent complement of good deck officers. ... Shipboard maintenance, safety gear checks, crew training and the overall smooth running of the vessel have all suffered ...’9 At first glance this would make it seem that Captain Kirby had exercised a strong sense of responsibility in the matter by attempting to bring to the attention of his superiors the nest of crew problems that could have directly or indirectly provided the conditions which contributed to the possibility and the eventual actuality of the disaster. On the other hand, in one highly specific instance Captain Kirby had interpreted the meaning of an Instruction in what was to prove to be a fatal manner.
The Instruction which related specifically to the closing of the bow doors was as follows:

The officer loading the main vehicle deck, G Deck, to ensure that the water tight and bow/stern doors are secured when leaving port.10

Captain Kirby did not enforce strict compliance with this instruction, in that the instruction had been complied with, in his interpretation, if the loading officer ensured that the Bosun was actually at the control position and (apparently) was going to operate the doors before the officer left to go to his own harbour station.11 But in the strict meaning of ensure, such an interpretation as Captain Kirby’s does not ensure that the doors will be closed. The present author differs from the Report in so far as, according to the Report, the other masters could not be expected to interpret this instruction any better since Captain Kirby had failed to do so (‘As the Senior Master did not enforce compliance with that very important instruction, he could not expect other masters to do so’.)12 Captain Kirby’s interpretation of this instruction was a highly negligent one. Since he was superior to the other captains, his interpretation was bound to be influential and thus it was consequential in light of his position of responsibility. On the other hand, the other captains were certainly not robots and could have interpreted the meaning of this instruction differently on their own ships. Thus, while Captain Kirby could not expect other masters to do differently than he did, this is not the point at issue. It is not important whether or not the other masters met Captain Kirby’s expectations; what is important is whether they acted responsibly on their own. In the case of Captain Lewry it is clear that he did not. One cannot say that he was simply following orders.
It is true both that Captain Kirby could have inquired into the vagueness of the orders and insisted upon a more fail-safe checking of the doors, and that Captain Lewry could have taken the initiative of doing so himself whether or not Captain Kirby had done so.
Captain Kirby was negligent. He did not introduce some form of a fail-safe system (to be discussed below) for the closing of the doors. He seemed content to accept the Ship’s Standing Orders as they stood despite their ambiguity (which is in part discussed above and which is discussed at greater length below). In the Report, Captain Kirby is said to have been one of the many masters who ‘... failed to apply their minds to those Orders and to take steps to have them clarified. Captain Kirby must bear his share of the responsibility for the disaster’.13 While it is of note that Captain Kirby is said to have been in part responsible for the disaster, it is highly curious that he is not included among those disciplined: Captain David Lewry (Master), Mr. Leslie Sabel (Chief Officer) and Mr. Mark Victor Stanley (Assistant Bosun). Captain Kirby was, however, included among the seven employees of P. & O. European Ferries who were committed for trial at the Old Bailey on manslaughter charges.14 There is, however, another aspect of what the Report is saying with respect to Captain Kirby and other masters which is important to note. The Report holds that they possessed an intellectual responsibility both to think about orders and to take steps to have them clarified. While such a statement may be at variance with the Report’s earlier statement that if Captain Kirby had interpreted the orders negligently then one could not blame the other captains for so doing, that is not the most telling feature of this concept of intellectual responsibility. The most telling feature of this concept of intellectual responsibility is that someone in the position of being a line officer is said to be responsible both for questioning orders and, what is more, going beyond the wording of the Report, for asking that they be changed if necessary.
In fact, although this is not said in so many words in the Report, the concept of intellectual responsibility may be expanded to include behaving otherwise than is indicated in an order. One can make a report even if, in the strict understanding of the exception reporting system, one is not obliged to make a report. Here, the importance of the correct use of language (words) and the importance of responsibility at every level of an organization are brought together in a dramatic illustration of the importance of both of these concerns. It is of special interest to note in this connection that a concept of intellectual responsibility is being put forward. Someone is being held responsible for how they think, interpret and
subsequently behave. The idea that someone in the chain of command merely puts into practice an order that is generated from someone above him in the chain of command is absolutely rejected in this reading of the Report. The notion of individual responsibility for interpreting and acting upon orders is one of the most distinctive features of this Report. The importance of the notion of questioning orders by those in a position to assess whether those orders are appropriate, given the specific circumstances in which they are to be exercised, is absolutely paramount. It is paramount not only in the obviously top priority context of the prevention of disasters in their dormant stage but also in the aftermath, when it comes to assigning responsibility to particular parties for what has occurred. For example, in his insightful article, ‘Can a company be criminal?’, Lord Wedderburn, professor of commercial law in the University of London, raises the question of liability for actions by those in positions of being senior managers or even masters of ships.15 But we may well consider extending this notion of interpretive responsibility even more widely to include all members of an organization. In this case, we can include not only Captains Kirby and Lewry under the notion of intellectual responsibility; we can also include the loading officer Mr. Ayling and the Assistant Bosun, Mr. Stanley. While Mr. Steel, QC (in the above quotation) would make light of Mr. Stanley’s falling asleep, because someone else sometimes did (and therefore in this case could have) closed the doors, it is equally the case that Mr. Stanley and/or Mr. Ayling could have questioned these orders or the lack of clarity of these orders.
The notion of intellectual responsibility for questioning orders which is raised for the first time in the Report is a notion which may be applied even more widely than it is applied in the Report (which restricts it to supervisory senior managers such as Captain Kirby) and than it is applied by Lord Wedderburn [who restricts it to senior management, supervisory senior managers and managers immediately beneath those (such as Captain Lewry)]: there is no reason not to include junior managers such as an Assistant Bosun as well. This is not to take the position that the Assistant Bosun is responsible for the disaster such that no one else is equally responsible. The buck stops
everywhere and thus it stops with the Assistant Bosun as well as the CEO of the company.
TOP DOWN RESPONSIBILITY

According to the Sheen Report, senior management is principally at fault for the occurrence of the disaster. While this is not said explicitly, it is stated that ‘... a full investigation into the circumstances of the disaster leads inexorably to the conclusion that the underlying or cardinal faults lay higher up in the company’.16 The phrase ‘underlying or cardinal faults’ seems to carry the meaning that the primary and originating cause of the disaster lay with the higher ups. Unfortunately, this is not conveyed in the actual decision of the Court, which is restricted to assigning fault to the three crew members (Lewry, Sabel and Stanley) and partial fault to the company, without making it at all clear which (if any) of these parties was principally at fault. Nonetheless, the language used here would seem to place the brunt of the responsibility on the company. In fact there is even direct reference to the Board of Directors: ‘The Board of Directors did not appreciate their responsibility for the safe management of their ships’.17 After having made it fairly clear that senior management was principally at fault, there is also a clear reference to the concept of shared responsibility which ran through every level of the company organization.
Such an explicit acknowledgement of the notion of shared responsibility is rare in the literature of Court Investigations and deserves to be quoted in full: ‘All concerned in management, from the members of the Board of Directors down to the junior superintendents, were guilty of fault in that all must be regarded as sharing responsibility for the failure of management.’18 It is indeed unfortunate that the decision of the Court does not reflect this concept of shared responsibility fully: in its findings it scapegoats, finding only three parties individually guilty, with only an amorphous reference to the Company (see decision above). This is reinforced by the assigning of penalties to the three crew members
without assigning any penalty to the Company. However misleading the actual decision of the Court is, it remains of interest to note that the seeds of the right decision are contained in the body of the Report.
A DYSFUNCTIONAL MANAGEMENT

Management can be viewed as dysfunctional when it fails to assign clear-cut roles for its directors and its officers and fails to issue clear-cut orders for its officers to follow. In the case of a major public transport system, an almost total absence of any safety concern for crew or passengers can also be viewed as a symptom of a dysfunctional management. In these two key respects, the management of P. & O. Ferries was dysfunctional. Since, in this instance, the relevant non-assignment of responsibilities and issuing of unclear orders can be understood as both a manifestation of a lack of an ethical consciousness and a lack of understanding of the responsibilities of a manager, we can treat both simultaneously as evidence of a lack of a safety priority and a lack of a proper management structure. There were two chief ways of implementing a safety first consciousness, both of which were absent: (i) the clear attribution of domains of responsibility to specified officers; (ii) the issuance of clear instructions. It is a credit to the Report that issues of good management are tied in with issues of safety such that poor management practices are cited as instances of a potentially disastrous operational procedure. It is not often that one finds a Report which so clearly ties good management practices to safety, such that one does not need to construe safety (or ethical considerations in general) as something externally added on to what otherwise could be good management policy. Here, it is clear that good management policy would automatically be conducive to a safe operation. Safety, from this standpoint, need not be seen as an extrabusiness consideration but as an intrinsic feature of good business management and an outcome of good business management. In this sense ethics is not something which is extrinsic to and having to be
added (perhaps distastefully) to business but is the natural consequence of a good business administration.
THE LACK OF A CLEAR ATTRIBUTION OF DOMAINS OF RESPONSIBILITY FOR OFFICERS

On the 18th of March, 1986 there was a meeting of senior Masters with management, at which Mr. Develin (a Director of the Company) was in the Chair. At this meeting, according to the Report, Mr. Develin said that it was preferable not to define the roles of officers. This attitude was described by Mr. Owen as an ‘abject abdication of responsibility’.19 Here, we can find an indictment of management as being irresponsible for not carrying out the role of management. Responsibility is not owed merely to a safety first ethic; it is also owed to the role of being a manager. In other words, there appears to be an ethic of management which is assumed as underlying management practices. It is management’s duty to assign roles for officers. A general conclusion we can draw from this is that corporate ethics does not concern itself only with safety as a paramount consideration; it is also concerned with management carrying out the proper duties of its own office as management.
THE LACK OF THE ISSUANCE OF CLEAR INSTRUCTIONS

The importance of clear instructions is emphasized strongly in the Sheen Report. In heavy black type it is stated that ‘CLEAR INSTRUCTIONS ARE THE FOUNDATION OF A SAFE SYSTEM OF OPERATION’.20 The meaning of ‘clarity’ is further unpacked in a surprisingly philosophical dictum of the Court, also placed in heavy black type in the Report: ‘ANY SET OF ORDERS MUST BE SO DRAFTED THAT EVERY EXPRESSION THEREIN HAS ONLY ONE MEANING THROUGHOUT THOSE ORDERS’.21 The implication of a lack of clear instructions for the
disaster is also plainly indicated in the Report: ‘It was the failure to give clear orders about the duties of the Officers on the Zeebrugge run which contributed so greatly to the causes of this disaster ... The Board of Directors must accept a heavy responsibility for their lamentable lack of directions. Individually and collectively they lacked a sense of responsibility’.22 It is of great interest that the Court in this instance sees no problem with assigning both an individual and a collective responsibility to the Board of Directors. Each member of the Board is responsible as an individual for her or his lack of responsibility as a Director. In addition, the Board as a whole, and by implication, it seems, the Company as a whole, is responsible as a Company for this lack of responsibility. While in the decision of the Court there is, most unfortunately, no reflection of the weight of responsibility resting on the senior management end (in fact the contrary impression is given by virtue of the fact that no penalty is assigned to senior management), once again in the body of the Report one can find strong language which would seem to imply that senior management bears a great responsibility for what occurred. Again, both the concepts of individual responsibility and collective responsibility are pointed to, such that one derives the undeniable impression both that each and every Director is personally responsible and that it is the responsibility of a Board of Directors to manage properly and with a sense of responsibility as a Board. What is of special interest to note in this connection is that the responsibility is spelled out clearly to consist in being responsible for issuing a clear set of directions. In short, management is responsible for directing or guiding, and this is accomplished by issuing clear instructions and clear indications for the delegation of specific areas of responsibility to specific personnel.
Responsibility is here clearly defined as the duty to define the roles of those entrusted with authority and to give them clear orders as to the implementation of those roles. This is the responsibility of senior management in very specific terms. One may think back to the case of the Challenger, in which there was no clear-cut policy as to who should be making the decision for launching. The management problem was the same in the case of the Challenger. The only difference is that in the Presidential Report there was no strong indictment of senior
management as having abrogated their responsibility for giving a clear set of directives.
TECHNICAL COMPONENT

It should be apparent to any intelligent observer that the system employed for checking that the doors were closed was severely deficient. A good corrective would be the installation of indicator lights on the bridge to confirm the closure of the doors, together with closed circuit television. In fact, just these measures have since been taken. A good deal of discussion has been devoted to the lack of a proper fail-safe means of checking that would take into account simple advances in technology, and such discussions are absolutely correct. However, it must be borne in mind throughout such a discussion that the lack of this or that advanced technological unit is not the question at issue. The reason for this is that, just as in the case of the Challenger, if one stresses the issue of technical deficiencies (such as the known limitations of the O-ring), one might imagine that disasters could be prevented once this particular problem was solved. But the major question at issue is the willingness of senior management to generate a safety ethos in the first place and to engage in sound management practices in the second place (which, as pointed out above, are mutually inclusive practices). If one were to limit one’s attention to the obvious limitations of the methods for verifying whether the doors were closed, one might think that by correcting this, the safety problems were solved. But the lack of attention to the door closing problem was only symptomatic of a wider problem: the lack of attention to safety and to sound management policy. An analysis of the problem of the closing of the doors should bear this in mind. If only the problem of the lack of a sound technological system for making sure that the doors were closed were solved, this would be similar to solving the problem of the space shuttle by solving the problem of the O-rings.
But both the problem with the O-rings and the problem with not having any clear system of closing the doors are only symptomatic of a wider problem: the lack of a sound and ethically responsible management system.
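The corrective discussed above, indicator lights on the bridge confirming door closure, amounts to a positive-confirmation interlock. The sketch below is hypothetical: the sensor names and the green/red convention are invented for illustration, and no such code appears in the Report; it shows only the logical shape of the fix, not a substitute for the management reform the chapter argues is primary.

```python
# Hypothetical sketch of a bridge indicator panel: door position sensors
# feed lights on the bridge, and departure is refused until every
# monitored door positively reads CLOSED. Sensor names are invented.

DOOR_SENSORS = {"bow_inner": "OPEN", "bow_outer": "OPEN", "stern": "CLOSED"}

def bridge_panel(sensors):
    """Return the indicator-light state for each door as seen on the bridge."""
    return {door: ("green" if state == "CLOSED" else "red")
            for door, state in sensors.items()}

def clear_to_sail(sensors):
    """Interlock: positive confirmation from every sensor is required."""
    return all(state == "CLOSED" for state in sensors.values())

print(bridge_panel(DOOR_SENSORS)["bow_outer"])  # red
print(clear_to_sail(DOOR_SENSORS))              # False: bow doors still open

DOOR_SENSORS["bow_inner"] = DOOR_SENSORS["bow_outer"] = "CLOSED"
print(clear_to_sail(DOOR_SENSORS))              # True
```

Note that the interlock inverts the exception-reporting default: where Order 01.09 read the absence of a report as readiness, here the absence of a positive CLOSED signal blocks departure, which is what makes the arrangement fail-safe.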
THE CLOSING OF THE DOORS

Townsend Ferries did acknowledge their responsibility for the poor system (if it is even fair to call it a system at all) of closing the doors, of reporting, of confirmation, and of monitoring, in an extremely revealing statement by Mr. Clarke in the course of the Investigation:

Townsend Ferries recognize that long before the 6th March 1987 both their sea and shore staff should have given proper consideration to the adequacy of the whole system relating to the closing of doors on this class of ship ... they should have improved the system by first improving their instructions ... by introducing an express instruction that the doors should be closed, secondly by introducing a reporting system, thirdly by ensuring that the closure of the doors was properly checked and fourthly, by introducing a monitoring or checking system.23

While this seems to acknowledge responsibility on the part of line staff for seeing to it that a proper system was put into place (though it is left vague as to how the instructions were to be improved), and it does take into account technical factors, there is no acknowledgement that the general directive to place safety as the first priority should have been there in the first place as a guideline set by senior management. This acknowledgement of responsibility seems to go so far as to place the responsibility on individual staff members without acknowledging that general guidelines, both as to a safety top priority and as to a clear-cut delegation of authority in particular cases (which again is lacking even in this acknowledgement, e.g., who should bell the cat or at least who is responsible for deciding who should bell the cat), are the responsibilities of senior management.
THE WILL TO COMMUNICATE

The Zeebrugge disaster was preventable. Not only was it preventable: it was to a large extent foreseen. It has already been noted that the
Standing Orders did not provide any method of positive reporting that the bow doors were closed. The absence of any report was taken to mean that the ship was ready to sail. However, different individual Senior Masters of vessels of the same class had sent memoranda to higher-ups to alert them to the need to install some electronic system of checking that the doors were closed. The management that received those memoranda decided to disregard them, despite the fact that those who wrote them (namely, the Senior Masters) would have been in the best position to assess their necessity. It may be said that communication was flawed essentially on the receiving end. Despite such memoranda being sent, their importance was disregarded. In the Sheen Report, among the various complaints alleged by Masters which were not heard by the Marine Department, the body to which the Masters reported, was included what turned out to be the key request:

The wish to have lights fitted on the bridge to indicate whether the bow and stern doors were open or closed.24

Memoranda were sent not only because of the prescience of ship captains but because a like incident (which fortunately had not caused a disaster) had already occurred prior to the Zeebrugge disaster for virtually the same set of reasons!

On the 29th of October 1983 the Assistant Bosun of the PRIDE had neglected to close both the bow and the stern doors on sailing from the No. 5 berth in Dover. It appeared that he had fallen asleep and, for that reason, he failed to carry out that duty.25

After this incident (which had occurred some four years prior to the Zeebrugge disaster), the Master of the PRIDE, Captain Blowers, sent a memorandum on the 28th of June 1985 to Mr. Develin. Part of that memorandum is reproduced below.

Mimic Panel - There is no indication on the bridge as to whether the most important watertight doors are closed or not.
With the very short distance between the berth and the open sea on both sides of the channel this can be a problem if the operator is delayed or having problems in closing the doors. Indicator lights on the very excellent
mimic panel could enable the bridge team to monitor the situation in such circumstances.26

What is so excellent about this memorandum is that it not only suggests a practical solution to a problem but explains why it is a problem, given the special knowledge of the circumstances that a ship’s captain would obviously have. What is astonishing are the responses of the managers to this memorandum. They are so astonishing that they deserve to be quoted in full:

From Mr. J.F. Alcindor, a Deputy Chief Superintendent: ‘Do they need an indicator to tell them whether the deck storekeeper is awake and sober? My goodness!’ From Mr. A.C. Reynolds: ‘Nice but don't we already pay someone?’ From Mr. R. Ellison: ‘Assume the guy who shuts the doors tells the bridge if there is a problem’. From Mr. D. R. Hamilton: ‘Nice!’ 27

The responses show that the problem was not taken seriously at all; one and all assume that there was apparently no real need for the memorandum to have been written in the first place. The first three responses all show no understanding of the reason given in the memorandum for the need for the lights; it is almost as if the respondents had not read the memorandum, which stated that a problem could arise with the current system of one man reporting. The last response does not bear comment. In the opinion of Justice Sheen, proper attention to just this memorandum would have been sufficient to prevent the Zeebrugge disaster: ‘If the sensible suggestion that indicator lights be installed had received, in 1985, the serious consideration which it deserved, it is at least possible that they would have been fitted in the early months of 1986 and this disaster might well have been prevented’.28
The matter of indicator lights was raised again in 1986 by Captain Kirby and Captain de Ste Croix. On the 17th of May 1986, Captain Kirby, the Senior Master of the Herald, wrote a memorandum to Mr. Alcindor which included the suggestion: ‘17. Bow and stern doors. Open/closed indication to be duplicated on the bridge’.29 On the 9th of October 1986, Captain de Ste Croix once again wrote a memorandum to the Senior Electrical Officer which is so detailed as to the reason for the need for the bridge indication that it deserves to be quoted verbatim and in full:

Another incident has occurred to remind me of my request of some time ago for bridge indication of the position of the bow and stern watertight doors. I still feel that although it is the duty of a crew member to check the position of the doors visually prior to proceeding to sea, it is so important to the safety of the ship that they are closed that we should have bridge indication. We have indicators for many pieces of equipment on the bridge, many of which should be checked visually in another part of the ship e.g. main engine bridge stands connected, bow thrusts on board, etc., and I feel that the bow and stern doors are every bit as important as these. Is the issue still being considered or has it been considered too difficult or expensive? 30

What is particularly important to note about this memorandum is that Captain de Ste Croix is seemingly attempting to address the issues raised by the managers. The point raised by Reynolds and Ellison, that there was already someone who was supposed to check the doors visually, is acknowledged, but it is argued that, on the grounds of the importance such a device would possess for the safety of the ship, there should be electronic indication as well. The safety of the ship is presented as the justification for the request.
On that memorandum was written (apparently by the Senior Electrical Officer, and apparently indicating his assent):

Please submit request to marine department on the usual application form. If it receives their blessing I will proceed with
the specification. It can be done, but will require a few deck and bulkhead penetrations.31

On the 18th of October 1986, Mr. R. W. King sent a memorandum to Mr. Alcindor in which he said:

I cannot see the purpose or the need for the stern door to be monitored on the bridge, as the seaman in charge of closing the doors is standing by the control panel watching them close.32

In this memorandum, the justification of the importance of the electronic indication for the safety of the ship is ignored. On the 21st of October 1986, Mr. Alcindor sent a memorandum to Captain de Ste Croix which included the statements:

Bow and stern door remote indication. I concur with Mr. King [that] the project is unnecessary and not the real answer to the problem. In short, if the bow and stern doors are left open, then the person responsible for closing them should be disciplined.33

What is most amazing about this response of Mr. Alcindor’s is that the concept of discipline as an answer could not possibly redress the problem if water entered through the open doors (as of course it did) and the ship were consequently sunk. There is again no attention paid to the ethical implication that the safety of the ship (and of those aboard) would be jeopardized. Poor Captain de Ste Croix did not give up. He tried once more, sending a memorandum on the 28th of October 1986 (one almost imagines him sending a memorandum out on the 6th of March 1987). But this last memorandum, which reiterated how important he thought the issue of knowing that the doors were closed actually was, was also sadly ignored. That the matter was a simple one to address is shown by the fact that within only a matter of days after the disaster, indicator lights were installed in the remaining ships of the fleet. What is of note is that the warning was definitely given; foreknowledge was present. But
the communication from the Masters was not taken seriously by the Marine Department. Part of the fault, of course, lies with the lack of any priority safety ethos, which should have been given by senior management as a directive to the Marine Department; part of the fault lies with the Marine Department, which on its own could have considered the safety issue more strongly. Part of the fault also lies with a concept of management as not really listening receptively to requests (of any kind) which come from those in positions beneath them. The concept of management as having the responsibility to listen, to receive communication, does not seem to have been a part of Townsend Ferries. For management to be effective, a will to communicate must be present. Communication does not consist only of the giving of information; it also consists in the receiving of information. In this case, the will to communicate was sadly deficient in that the will to receive communication was conspicuous by its absence. In the words of the Sheen Report:

... the shore management took very little notice of what they were told by their Masters. The Masters met only intermittently. There was one period of two and a half years during which there was no formal meeting between Management and Senior Masters ... the real complaint, which appears to the Court to be fully justified, was that the “Marine Department” did not listen to the complaints or suggestions or wishes of their Masters.34
CONCLUSIONS OF THE COURT

The Sheen investigation arrived at a set of recommendations for the future management of P.&O., the company which owned Townsend Ferries. What is especially interesting about the suggestions is that all of them reflect management fundamentals. In other words, if P.&O. had been properly managed, the disaster need not have occurred. While the decision of the Court (as mentioned above) does not fully reflect this point, the language in the body of the Report and the recommendations to P.&O. are clear indicators that the Sheen Investigation found that the Zeebrugge disaster was a
function of a dysfunctional management. A dysfunctional management is a management that basically does not function in its capacity to manage. There would appear to be no better way to describe the management of P.&O. than as dysfunctional. It is of interest to repeat the majority of the Court’s suggestions to management here:

This Court need say no more than stress the need for: Clear and concise orders; attention at all times to all matters affecting the safety of the ship and those on board. There must be no ‘cutting of corners’; the maintenance of proper channels of communication between ship and shore for the receipt and dissemination of information; a clear and firm management and command structure.35

In order to complete the suggestions of the Court, it might be valuable to add some general directives for a safety ethos and some specific modes of implementing it, such as a Director in charge of safety who would be empowered to insist that safety devices considered important be installed. The power to install safety devices or take appropriate safety measures should lie within the scope of the ship’s Captain or the Senior Master (assuming that one is within a reasonable cost limit, which the electronic indicators certainly fell within). There was no need in the first place to bring matters of this nature to the Board of Directors. There can only be a ‘cutting of corners’ in the absence of any top priority given to safety. Therefore, it is not enough to recommend against the ‘cutting of corners’. While it is certainly laudable to recommend that attention be paid at all times to safety, it would be stronger still if safety were made the number one priority, for it would then be crystal clear what kind of attention should be given to it. In any event, it is gratifying to see how much attention the Court has paid to the recognition of sound management principles.
It is a sorry day when private organizations of the enormous scale of P.&O. require to be instructed in basic management principles by justices of courts. But some solace may be taken from the fact that the corrective to disasters is, for the most part, attention to fundamental management principles.
NOTES

1. Judith Cook, An Accident Waiting To Happen, London: Unwin Hyman Limited, 1989, pp. 11-12.
2. Ibid., p. 12.
3. MV Herald of Free Enterprise, Report of Court No. 8074 Formal Investigation, London: Her Majesty’s Stationery Office, 1987, p. 10. (Hereafter referred to as the Sheen Report.)
4. Sheen Report, p. 8.
5. Sheen Report, p. 12.
6. Cf., Decision of the Court, Sheen Report.
7. Sheen Report, p. 12. As to the reference to ‘the Company’, at the time of the casualty, the Herald was owned by Townsend Car Ferries Limited, which was a subsidiary of the Peninsular and Oriental Steam Navigation Company (P. & O.).
8. Sheen Report, p. 13.
9. Sheen Report, p. 13.
10. Sheen Report, p. 14.
11. Sheen Report, p. 14.
12. Sheen Report, p. 14.
13. Sheen Report, p. 14.
14. While Justice Sheen could not seemingly commit himself clearly in his decision as to the share of responsibility that belonged to the Company, it is of interest to note his indirect ascription of responsibility in his asking them to pay the brunt of the court costs, and his accompanying comment in the final two sentences of his Report: ‘There being no other way in which this Court can mark its feelings about the conduct of Townsend Car Ferries Limited other than by an order that they should pay a substantial part of the costs of this investigation, I have ordered them to pay the sum of 350,000 pounds. That seems to me to meet the justice of the case’. Sheen Report, p. 75. There is, however, a sad postscript to this case. On October 19, 1990, corporate manslaughter charges against P. & O. European Ferries and seven of its employees were dropped by Mr. Justice Turner at the Central Criminal Court in London. The charges had alleged that there was an ‘obvious and serious risk that as a result of the defendants’ failure to do their duties properly the Herald would sail with open doors, capsize and cause death. But the judge said that there was no direct evidence that any of the five senior defendants would have perceived the risk was obvious’. Cf., The London Times, October 20, 1990, p. 1. While it seems grotesque to this author that this risk would not have been perceived as obvious, the difficulty may have hinged on what the Court was utilizing as standards of direct evidence. As in each case any of the defendants could have argued that he was under the impression that another one of the defendants was taking care of this problem, then singly none of the defendants would have appeared to have been directly or solely responsible. If this construction of the meaning of direct evidence was employed, it would appear to indicate that a concept of multiple causation and shared responsibility might have enabled a different verdict to have been rendered. However, despite the fact that it is commonly argued that no one takes the notion of monocausality seriously in these modern times, it is evident from the Court transcripts that the presiding Judge, Justice Turner, was operating from a concept of single causality and exclusive responsibility. In rendering his decision, he argues that ‘there is no direct evidence ... that any one of the defendants ... would themselves, or should themselves, have perceived that the risk of an open-door sailing was either obvious and/or serious. Nor yet, in my judgment, was there any evidence from which such an inference might safely be drawn’. Cf., Central Criminal Court, CCC No. 900160, Old Bailey, London, Friday, 19th October, 1990 (Newgate Reporters, Ltd., Official Court Reporters to the Central Criminal Court), Day 27, p. 8. And again, ‘there is no question of a corporation being rendered liable to conviction for manslaughter by aggregating the faults of a number of different individuals, whose separate faults would be insufficient to constitute the offence of manslaughter, and then saying that in totality they amounted to such a high degree of fault that the corporation should be convicted’. Cf., Ibid., CCC No. 900160, Day 26, p. 21. But why not? The separate individuals were all acting, not on their own, but as representatives of a company. And it was clear that the disaster would not have occurred had their behavior been different. This is as clear an example of a belief in monocausality as can be found. If no one individual’s actions is sufficient to cause an effect, then, on this view, a combination of such actions is also insufficient. According to the argument of Justice Turner, in each case, whether of management or of a line officer, any degree of negligence would appear to be mitigated by the fact that any one individual act or lack of action [or, in the case of management, for example, the issuance of unclear orders] would not have been sufficient to constitute being responsible for what occurred. One is reminded of Professor Revans’ account of the Butler-Sloss Report of the Hospital Internal Communication Project in Essex, England, in which the failure of communication was described by the wonderful phrase, which perhaps should be engraved on every Board Member’s door, ‘I thought THEY were supposed to be doing that’. Cf., R.W. Revans, The Golden Jubilee of Action Learning, Manchester Business School, 1988, p. 28. It is unfortunate, from the point of view of the present author, that a corporation cannot, for Justice Turner, be held accountable for its sloppy management procedures. At the very least it was instructive to discover that the ferry company itself and at least three company directors were included among those charged in the charges brought against P. & O. European Ferries and seven of its employees. Among the senior management defendants were Chief Marine Superintendent Jeffrey Develin, his Deputy John Alcindor, Technical Director Wallace Ayre, Senior Master John Kirby, and the Master of the Herald, David Lewry.
15. The London Times, October 10, 1987.
16. Sheen Report, p. 14.
17. Sheen Report, p. 15.
18. Sheen Report, p. 14.
19. Sheen Report, p. 15.
20. Ibid., p. 15.
21. Ibid., p. 16.
22. Ibid., p. 15.
23. Ibid., p. 15.
24. Ibid., p. 17.
25. Ibid., p. 23.
26. Ibid., p. 23.
27. Ibid., p. 24.
28. Ibid., p. 24.
29. Ibid., p. 24.
30. Ibid., p. 24.
31. Ibid., p. 24.
32. Ibid., p. 25.
33. Ibid., p. 25.
34. Ibid., p. 17.
35. Ibid., p. 58.
CHAPTER 10

THE KING’S CROSS UNDERGROUND FIRE

King’s Cross is one of England’s great landmarks. The area around the station was originally known as Battle Bridge and, according to tradition, it was here that Queen Boadicea routed the Roman legions before putting Roman London to the fire. Battle Bridge became King’s Cross in 1830 when a statue of George IV marked the area. To this day the British Rail station at King’s Cross is famous as the start of the East Coast route to Scotland and the North of England.1 By 1987 King’s Cross was not only a famous historical landmark; it had become the busiest station on the Underground network. On an average weekday over 250,000 passengers used the station, with 100,000 or so passing through in each peak period.2 Sometime soon after the evening rush hour had ended on Wednesday 18 November 1987, a fire of catastrophic proportions in the King’s Cross Underground station was to claim the lives of 31 persons and to cause serious injuries to many others. The Secretary of State for Transport in England (the Rt Hon. Paul Channon MP) appointed Desmond Fennell, Queen’s Counsel, to hold a formal Investigation of the fire. With the aid of four assessors, who included Professor Bernard Crossland, Pro-Vice-Chancellor of the Queen’s University, Belfast; Sir Peter Darby, H.M. Chief Inspector of Fire Services for England and Wales; Major Anthony King, an Inspecting Officer of Railways; and Dr. Alan Roberts, Director of the Explosion and Flame Laboratory, The Health and Safety Executive, Buxton, Desmond Fennell was to conduct an investigation of the King’s Cross Fire in which the Court heard 114 witnesses in Part One, which lasted 40 days, and 36 witnesses in Part Two, which lasted 51 days.
Over 80,000 documents, over 100 reports and 15 videos were submitted in evidence.3 The outcome of Desmond Fennell’s investigation was a report of 247 pages, entitled, Investigation into the King’s Cross Underground Fire, London: Her Majesty’s Stationery Office, 1988.
The chief concerns of this chapter will be as follows: Epistemological Frameworks Compared; Fennell’s Epistemological Framework; The Use of Words; The Cause of the Fire; Responsibility for the Fire; The Importance of a Safety Ethos; The Role of Management Structure in Blocking the Flow of Information; Fennell’s Recommendations; The Primacy of Safety; Philosophical Underpinnings. Many of the topics treated occur temporally in the aftermath stage of the disaster. One reason for this is that it is of particular interest to compare and contrast the investigation of this disaster and the investigation of the Challenger disaster. Both the similarities and the differences in the investigations might prompt further thought about the possibility of a more international forum for the investigation of future disasters.
EPISTEMOLOGICAL FRAMEWORKS COMPARED

It is of interest to note that at the very beginning of the Report Fennell makes a fundamental distinction between two questions: how did the fire start and why did 31 people die. In the Presidential Report concerning the Challenger, these two different issues are not distinguished. It could be argued, of course, that in the case of the Challenger it was a foregone conclusion that if the Challenger were to be destroyed all lives on board would be lost, so that there was no need to posit such a distinction. Such a conclusion, however, would have been seriously mistaken, as the lives of the crew members and the civilian passengers could have been saved with an explosive bolt hatch, space pressure suits and a parachute descent system. Fennell’s marking of this distinction makes another point. For Fennell, the investigation of the King’s Cross fire derives its importance from the death of 31 human beings. This is the first and foremost concern of his study. In the case of the Challenger, while there is some concern over the fact that five astronauts and two civilians met their deaths as a result of that ill-decided voyage, the primary concern seems to be with the cause of the disaster. In Fennell’s study, concern is equally distributed over the possible
cause of the fire and the death of the victims. What stands out from this is the priority of the ethical issue in Fennell’s study. This is an important distinction upon which to dwell. In the Presidential investigation and in later investigations such as Diane Vaughan’s flawed work, the question, why did the civilians and the astronauts aboard the Challenger have to die, is not raised. Fennell’s inquiry is of signal ethical importance. For the Challenger disaster a key question should be, why did seven people die? If the question is put in this fashion, the lack of ethical motivation to provide an explosive bolt hatch, space pressure suits and a parachute descent system looms large. The lack of ethical considerations in the decision to launch the Challenger takes on greater significance. As noted in previous discussions of the Challenger disaster in this volume, one could aver that the astronauts took a known risk and thus their possible deaths were not as innocent as those of the passengers or bystanders during the King’s Cross disaster. Apart from the issue of whether the risk was known, it is of interest to note that a loss of human life is considered of less importance in this argument if it is somehow collaborated in by the victim. Again, Fennell’s report gives human life, and the safety measures that he recommends, the highest priority. When one contrasts these measures with those recommended in the Presidential Report on the Challenger, it is apparent that the preservation of human life is the paramount issue in Fennell’s investigation, and the same cannot be said of the Presidential Report or of later investigations such as those conducted by Diane Vaughan. From an ethical point of view, the question can be raised, why should human life be the top priority in one set of circumstances and not in another?
The separation of the two questions, how did the fire start and why did 31 people die, at the beginning of Fennell’s investigation, shows the primacy given to human life in the fact that the question is raised at all: why did 31 people die? In the Presidential Report concerning the Challenger, such a question is never raised explicitly, and certainly not at the very beginning of the inquiry. Again, one could say that the investigation of a fire is very different from the investigation of a technical and man-made disaster. But it does occasion the opportunity to raise the very important question, why should there be a difference shown in the importance given to the
lives lost in one case and in the other. By separating the questions, Fennell makes one point very clear: the two issues are separate. Whatever the cause of the fire, there is another issue at stake: the lives of these 31, and possibly more people in the future if like disasters recur. In investigating the cause of the fire, even if the cause is discovered, the investigation is not finished. The further question is why these people had to die and how such deaths can be prevented in the future. If the issues are not separated, then, just as in the Presidential Report on the Challenger, once a cause or causes of the disaster can be identified, the investigation is complete. There is no further question concerning, nor decisive steps to be taken with respect to, how to save such lives in the future. Thus, the raising of such a distinction at the very beginning of the Fennell investigation has consequences both for the entire Report and for his conclusions. It is of great interest to note how the epistemological framework of an investigation influences the course and the outcome of that investigation. What are the “causes” of the difference between Fennell’s epistemological framework and that of the Presidential Commission? First of all, as above, one can point to the greater priority accorded to ethics in Fennell’s inquiry. Ethics occupies a more important role in Fennell’s philosophical framework than it does in the Presidential Commission’s framework. Secondly, in Fennell’s investigation, there is an absence of the presupposition that so guides the Presidential Report on the Challenger: that accidents will happen and there is nothing in the final analysis that one can do to prevent them. In fact, Fennell’s investigation seems to be guided by an almost opposite presupposition: accidents are by no means inevitable, and it is one’s proper function and one’s duty to prevent them.
These two presuppositions, that ethical questions are of top priority and that accidents are not inevitable but rather fall very much within one’s control, direct Fennell’s inquiry and consequent Report. More than anything else, the difference of Fennell’s epistemological framework from the Presidential Commission’s is responsible for the very different character both of his investigation and of its conclusions.
FENNELL’S EPISTEMOLOGICAL FRAMEWORK

Fennell’s investigation is characterized by a clear statement of purpose found near the beginning of his Report, to which the remainder of the Report addresses itself and with which the Report ultimately concludes. This statement of purpose is contained within items 5 and 14 of the items grouped under his ‘Introduction and Scope of the Investigation’, which forms chapter two of his investigation. In item 5 it is stated that, ‘This investigation was inquisitorial. It was an exercise designed to establish the cause of the disaster and to make recommendations which will make a recurrence less likely. Those who died deserved nothing less.’4 In item 14 it is stated that ‘This investigation had only one goal: to ascertain the cause of the tragedy and to try and ensure that it will never happen again’.5 It is of interest to examine the guiding presuppositions of these statements of purpose. It can be noted that the investigation is characterized as inquisitorial. Such a characterization might in some minds imply that someone or some group is at fault and that a primary task of the investigation is to find and assign blame. An investigation of this type would thus emphasize the fault-finding and consequent fault-correcting dimension. However, such a characterization is not completely accurate. While it is definitely true that, unlike a fact-finding investigation, an inquisitorial investigation does not proceed from a neutral starting point, it does not follow that its primary purpose is to assign blame. It begins with the implication that what is being looked for is something that is wrong and that the purpose of the investigation is to right that wrong. This is amplified in the subsequent sentence: the recommendations to be made have as their purpose to ‘... make a recurrence less likely’. Thus, the investigation will be characterized by its emphasis on the prevention of disasters in the future.
Finally, the ethical priorities in the investigation are clearly stressed in the final sentence in item 5, that ‘Those who died deserve nothing less’. All of this is neatly summed up in Item 14, ‘This Investigation had only one goal: to ascertain the cause of the tragedy and to try and ensure that it will never happen again’. The main goal of the investigation, nay, its sole goal, is to find out the cause of what happened for the purpose of ensuring that it does not happen again.
Not only is the ethical dimension the paramount motivation but there is the underlying belief that such an investigation can and should have efficacy. The disaster that occurred is one which in principle was capable of being prevented. The emphasis is on the prevention of disasters.
THE USE OF WORDS

It is of interest to note how the key designators of the event are largely, although not always, consonant with both the epistemological framework and the statements of purpose. While occasionally the King’s Cross Fire is referred to as an ‘accident’, by and large this term is not utilized. More commonly, the King’s Cross Fire is referred to as a ‘tragedy’ (as in the above quotation) and as a ‘disaster’. At the very end of the conclusion to the investigation, in item 9, it is stated that, ‘This has been a long and searching Investigation into a terrible disaster in which 31 people lost their lives and many more were injured’.6 The label of ‘disaster’ rather than ‘accident’ carries with it the implication not only that this was something terrible (here underscored by adding the word ‘terrible’ to the normal connotation of ‘disaster’), thus lifting it out of the realm of the trivial, but also that it could (unlike most accidents) have been prevented. Can one justify the label ‘disaster’ as opposed to ‘accident’ on the grounds that 31 lives were lost as against only 7 in the case of the Challenger disaster? The choice of words reflects not only a degree in the quantitative loss of lives; it also reflects the seriousness with which the loss of lives is taken. Moreover, it reflects the underlying presupposition that disasters are events which fall within one’s scope of concern and power to prevent, that in principle they may be prevented, and that one possesses an obligation to attempt to prevent them. The choice of words used in the aftermath of the disaster was not random. It is apparent from various sections of the investigation that Fennell and his co-assessors regarded the King’s Cross Fire as a disaster that could and should have been prevented. This is indicated not only in the statements of purpose of the investigation but in the
THE KING’S CROSS UNDERGROUND FIRE
229
insistence that one of the causes of the disaster in the first place was the outmoded belief that ‘accidents were inevitable and thus were bound to happen’. In this regard, Fennell’s Report appears not only to be unique in the literature of Court Investigations, but also bears an uncanny resemblance to one of the main theses of this present volume. One of the main theses of the present author is that a leading impediment to the prevention of disasters is the belief that disasters are inevitable and thus cannot be prevented. In a word, an epistemological mind block is one of the primary causes that pave the way to the occurrence of disasters. While the thesis that one’s beliefs play a decisive role in disaster prevention may not seem a surprising thesis for a philosopher or a psychologist to entertain, it is astounding to find such a thesis stated in the body of a Court Report and with particular reference to a cause of a disaster that has actually occurred. In item 1 of the conclusion of the investigation in chapter 21, it is stated that, ‘... London Underground, and its holding company London Regional Transport, had a blind spot - a belief that fires were inevitable, coupled with a belief that any fire on a wooden escalator, and there had been many, would never develop in a way which would endanger passengers’.7 This is also pointed out in an earlier and well-conceived chapter entitled, ‘The Ethos of London Underground’, in item 12, where Fennell states that, ‘It was ... a matter of some concern to me that the directors of London Underground should still subscribe to the received wisdom that fires were an occupational hazard on the Underground. Dr. Ridley [Chairman and Managing Director of London Underground] did not feel able to agree with the Court that fire should be regarded as an unacceptable hazard to be eliminated, since it was regarded that fires were a part of the nature of the oldest, most extensive underground railway in the world ... 
In effect he was advocating fire precaution rather than fire prevention’.8 And in item 13 of the same chapter, Fennell proceeds to state the extent of his disagreement with the concept of the inevitability of accidents: ‘It is my belief that this approach is seriously flawed because it fails to recognize the unpredictable nature of fire. A mass passenger transport service cannot tolerate the concept of an acceptable level of fire hazard. [One is reminded of Feynman’s critique of the concept of acceptable risk (for which he is much chided by Vaughan) in the case of the decision
230
SAVING HUMAN LIVES
to launch the Challenger] In my view what is needed from London Underground is an entirely new pro-active approach to safety management.’9 In his later important chapter 13, ‘The Management of Safety’, Fennell reiterates his belief in the importance of disengaging oneself from the presupposition that ‘accidents will happen’ in item 18: ‘I believe a philosophy which takes as its starting point the inevitability of fires is dangerously flawed’.10 He is supported in his belief by the testimony of Mr. R.M. Warburton, OBE, BA, the Director General of The Royal Society for the Prevention of Accidents, who also considered that the presupposition of the inevitability of accidents was an underlying cause of the disaster: ‘I think throughout the whole of the papers I have read there has been an assumption that fire is in fact an everyday happening and that, although such events could disrupt the service, and could cause damage, that in fact it was part of the everyday operation’.11 It is of special interest to note, in connection with the concept that fires were an everyday event, that in order to suppress the connotation of the dangers of fire, the word ‘smouldering’ was in common use to describe previous outbreaks rather than ‘fire’. Fennell reports that: ‘... the outbreak of fire was not regarded as something unusual; indeed it was regarded by senior management as inevitable with a system of this age. This attitude was no doubt increased by the insistence of London Underground management that a fire should never be referred to as a fire but by the euphemism ‘smouldering’.12 It is interesting for present purposes to take note that in this case the choice of words used is self-conscious and purposeful. The purpose of this choice of words is to direct attention away from the dangerous implications carried by the word ‘fire’. No better example of the importance of the words used can probably be found. 
Fortunately, London Underground has now agreed to stop using the word ‘smouldering’.13
THE CAUSE OF THE FIRE

Unlike the Challenger investigation, there is no separation of primary and secondary causes of the disaster. While the separation of
primary and secondary causes in the case of the Challenger was flawed in terms of which cause ought to have been labeled primary and which secondary (the technical cause was labeled as primary and the management cause as secondary), in the case of the King’s Cross disaster, flaws due to mismanagement are in the main not included under the list of causes of the fire. This is a weakness of the King’s Cross Investigation. By not including mismanagement contributions under the label of ‘causes’, attention may be misdirected away from the area of mismanagement. It would have been advantageous to include the area of mismanagement under the rubric of ‘causes of the disaster’ and to have singled it out as one of the primary causes of the disaster. In Fennell’s Report, the causes cited for the outbreak of the fire are in the main immediate physical causes. This may in part be due to the fact that in Fennell’s Report there is a distinction between the investigation of the causes of the fire and the investigation of the causes of the deaths of the 31 people who were its victims. In the investigation of the causes of the deaths, mismanagement factors are indicated. In the investigation of the causes of the fire, mismanagement factors are implied but not clearly stated. (While in Fennell’s report there is a further distinction between the causes of the outbreak of the fire and of the disastrous flashover, for purposes of this chapter these two may be conflated, as the reasons are closely related.) 
The first of the conclusions from the witness evidence was that, ‘The fire was initiated by smokers’ material, probably a carelessly discarded lighted match, which fell through the clearance between the steps and the skirting board on the right-hand side of escalator 4.’14 In terms of the spread of the fire, the presence of grease beneath the steps is cited as a cause: ‘It was possible for the grease to ignite easily because of the mixture of grease and fibrous materials which formed a wick. Without this wick effect, the grease was not very easy to ignite.’15 Mismanagement factors are implied along with the physical causes when it is added that, ‘The lift and escalator maintenance manager told the Investigation that he believed the accumulation of grease on escalator 4 at the time of the fire had probably been there for a number of years’.16 This in itself would serve as an indicator that there was not a safety first prioritization for London Underground. It would also indicate that a will to
communicate was not present. Why had the lift and escalator maintenance manager not called this to the attention of senior management? Or, why had senior management not instructed him to ensure that there were no safety hazards? Further causative factors for the physical emergence of the fire include the fact that people continued to smoke in the Underground in spite of a ban; gaps observed two weeks before the disaster between the treads and the skirting board on the Piccadilly Line escalator 4 at King’s Cross, through which a lighted match could pass; the absence of 30 per cent of the cleats, making it more possible for a match to fall through the gap; and the fact that the fire on the running track ignited the dry plywood skirting board, and so on to the escalation of the fire to disastrous proportions.17 What is never brought out clearly in the course of Fennell’s investigation is why the skirting board, balustrades, treads, decking and floor of the escalator had to have been constructed of wooden (and hence flammable) materials in the first place. While one could point to the date of the construction, the recognition of a fire hazard at a time when more appropriate fire-proof materials had become available would seem to have signaled the need for replacement prior to the outbreak of a major disaster. While the quantity of wooden materials is not cited as part of the cause of the fire, it is noted that in his recommendations Fennell does call for ‘... an early replacement of all wooden skirting boards, balustrades, decking and advertising panels by metal ones ...’18 Thus, by implication, the flammable materials utilized in the construction ought to have been included among the various cited causes. Mismanagement factors are more explicitly listed as causes under the question of why 31 people died. 
While mismanagement factors can be more comprehensively treated later on under the topic of responsibility, a few items can be noted here. After an alarm was raised by a passenger about 19:30, according to a procedure in the rule book, one of the staff went to inspect. The staff member had received no fire training and informed neither the station manager nor the line controller. Moreover, London Underground had no evacuation plan. Police officers, present by chance, who did not know the geography of the station, decided to evacuate passengers by way of the Victoria Line escalator, which turned out to be a
disastrous decision. By the time the London Fire Brigade did arrive, it was too late for them to do anything. Oddly enough, between 19:30 and 19:45, not a single drop of water had been applied to the fire.19
RESPONSIBILITY FOR THE FIRE: TOP DOWN

In contrast to the Presidential Report on the Challenger, which emphasized to a considerable degree the area of bottom up responsibility, in Fennell’s investigation, as was the case in Sheen’s investigation of the British Ferry disaster, the responsibility of senior management for the King’s Cross Fire is underscored. This is brought out in a number of places in Fennell’s investigation. In chapter 20 of Fennell’s Investigation, entitled ‘Recommendations’, with regard to the management of safety, he lists as the first of the most important recommendations that, ‘The recommendations of internal inquiries into accidents must be considered at the Director level’.20 While this recommendation has to do with the aftermath of a disaster, no such recommendation is to be found in the Presidential Report on the Challenger. In the all-important chapter 13, ‘The Management of Safety’, Fennell reports that many of the shortcomings in the physical and the human state of affairs at King’s Cross on 18 November 1987 had already been identified beforehand: ‘They were also highlighted in reports by the fire brigade, police, and Railway Fire Prevention and Fire Safety Standards Committee. The many recommendations had not been adequately considered by senior managers and there was no way to ensure they were circulated, considered and acted upon.’21 It is of interest to note that the responsibility for acting upon bottom up reports is clearly laid upon the shoulders of senior management. Such an attribution of responsibility is not present in the Presidential Report on the Challenger. Fennell emphasizes that the knowledge to prevent such a fire disaster was available beforehand: ‘The Engineering Director, Mr. Lawrence, recognized that London Underground had a blind spot to the hazard from fire on wooden escalators which was revealed by earlier incidents. Furthermore, Dr. Ridley recognized
that London Underground at its highest levels may not have given as high a priority to passenger safety in stations as it should have done.’22 He goes on to say that, ‘With the exception of the Oxford Circus station fire [previous], there was not sufficient interest at the highest level in the inquiries. There was no incentive for those conducting them to pursue their findings or recommendations, or by others to translate them into action.’23 It is also extremely interesting to note that there was no concept of a safety ethos that pervaded the organization. Such a concept of a safety ethos might well have circumvented certain bureaucratic obstacles. For example, with a clear safety ethos specified as the highest priority for the organization, it would have been very difficult for officers in charge of safety to have considered that their domains of responsibility were restricted, for example, to employees rather than passengers. Very specific charges of top down responsibility are cited: ‘... senior managers including Mr. Powell, the Safety Manager (Central Safety Unit), and Mr. Straker, the Personnel Director, charged with health and safety responsibilities, categorically stated that they did not see passenger safety as being part of their job.’24 Mr. Fennell is joined in his assigning of major responsibility to top management by the testimony of Mr. R.M. Warburton. In his examination by Mr. Henderson QC, Counsel to the Court, on 3 June 1988, the question is put by Mr. Henderson to Mr. Warburton: ‘... in your view there had been a collective failure from the most senior management downwards, over many years to minimize the outbreak of fire, and more importantly to foresee and to plan for an uncontrolled outbreak of fire at an underground station with a real potential for large-scale loss of life ... having the opportunity to read the transcripts of the evidence given by the witnesses ... have your views about that collective failure been fortified or reduced? 
[reply] I think fortified.’25 In further questions by Mr. Henderson, when he asks whether management had learned any lessons from past incidents, Mr. Warburton answers, ‘No, I am afraid I did not. I think one of the problems there - and that was reinforced in the evidence of the directors - was how limited was the distribution of its information’.26 Such an acknowledgement is a strong indicator that the will to communicate was lacking in London Underground. It is clear from all of the
foregoing that senior management is definitely cited as shouldering a major share of the responsibility for the disaster.
RESPONSIBILITY: BOTTOM UP

Fennell’s investigation does not ignore the co-responsibility for the outbreak of the fire on the part of employees and middle management. In the evidence of Mr. R.M. Warburton given to Mr. Henderson, equal responsibility is placed on the shoulders of subordinates for the lack of reporting of information upwards. In reply to Mr. Henderson’s statement that all the directors told the Court on oath that they genuinely believed that there was no risk to passengers because there would be both the opportunity and the system in place to control an outbreak, Mr. Warburton’s reply was that such a belief was not soundly based, and that the main reason it was not soundly based was ‘... the lack of incisive information that was going to directors.’27 Further, when cross-examined by Mr. Read, QC, Counsel for London Regional Transport and London Underground, Mr. Warburton testified that, ‘What concerned me was essentially that I could not find a great deal of evidence that that data if it existed was brought to the attention of the directors or senior management.’28 Of course, both top down and bottom up responsibility are pointed to and sometimes are co-involved. At one point Mr. Warburton volunteers that the evidence points to the fact that: ‘... one of the shortcomings that is unequivocally accepted is a failure to report adequately, to follow up reports, and indeed to analyze the implications of the reports.’29 This is as broad an indictment of the failure to communicate as can be found. In this statement, both the willingness to pass information forward and the willingness to consider the information once received are shown to have been lacking. The excellent point is added that there was a failure to follow up on information received. All that is missing from this analysis of the failure of communication in the London Underground is the need for active inquiry from senior management as to what problems actually existed. Mr. 
Warburton’s statement is as fine a statement as can be found which coincides with
one of the major theses of this present work with regard to responsibility, namely, that the buck stops everywhere and there is no single and exclusive locus of responsibility. In each instance of an action that occurred during the active phase of the disaster, one can point to both top down and bottom up responsibility. Unlike theories of the causes of disaster such as ‘systemic error’ theories, in which it is held that no single link in the chain of events leading to a disaster could have foreseen or did foresee the consequences of its contribution to the eventual outcome, with a multiple causation and multiple responsibility account of disaster, any number of actions by individuals or by the organization at large could have prevented this disastrous outcome. For example, in the chapter entitled ‘The Response of the London Underground Operating Staff’, Fennell points out four salient facts: the staff had not been adequately trained; there was no plan for evacuation of the station; communications equipment was poor or not used; and there was no supervision.30 While one could interpret these separate factors as unwitting links in a chain, this would be to ignore that there were self-conscious agents involved in each of these separate factors, all of whom possessed a moral responsibility. While all of these can be seen as, strictly speaking, failures on the part of top management, one can also point to a lack of initiative on the part of staff to act in the absence of directives. According to the London Underground rule book in force at the time of the fire, staff were required to deal with any outbreak of fire themselves and only to send for the London Fire Brigade when the fire was beyond their control. This policy was adhered to by London Underground despite the fact that two years prior to the disaster, the London Fire Brigade had urged that it be called immediately upon any suggestion of fire on the Underground: ‘... 
on 23 August 1985 (the day following a fire at Baker Street) the Chief Officer requested Deputy Assistant Chief Officer Kennedy to write officially to London Underground ... His letter to Mr. Cope, the Operations Director (Railways) [included this statement]: ‘... the Brigade made it quite clear that the Brigade should be called immediately to any fire on the underground railway network.’31 (Belatedly, London Underground’s new policy is that the London
Fire Brigade should be summoned immediately if there is any suggestion of fire.)32 After a burning tissue was reported by a passenger to Leading Railman Brickell, he extinguished it himself. He did act properly in accordance with the rule book. One may still ask whether or not he could have exercised more initiative on his own to call the London Fire Brigade immediately, which he did not do. The fire on escalator 4 was seen by Mr. Squire about 19:29 and reported to Booking Clerk Newman in the ticket office. Mr. Newman telephoned Relief Station Inspector Hayes, unfortunately describing the location of the fire incorrectly. Having received the message, Relief Inspector Hayes set off with Railman Farrell to the incorrect location without informing either the station manager or the line controller.33 Such action on the part of Relief Inspector Hayes and Railman Farrell shows that they were either not trained in the proper management procedures to report to their superiors in the event of such contingencies or acted in violation of such training. In either event, it shows that a will to communicate was absent. If there were no such rule to communicate with one’s superiors, then it was the responsibility of management to set forth a general rule that, in the light of emergencies that might develop, one should always communicate to a responsible party what action one is taking. If some circumstance developed in which it was not possible to have such a general rule in place, then employees should be encouraged to take initiative with regard to how they should act in emergency situations. It would be impossible to detail each and every course of action that one should take in a rule book. Thus, some responsibility for taking appropriate action must rest on the shoulders of the immediate staff concerned. 
In the meantime, Leading Railman Brickell received a similar report from other passengers but neither knew where the fire hydrant was nor was familiar with the water fog equipment. While top management can certainly be indicted for not seeing that better training was received, Brickell could also have taken his own initiative to learn where the fire hydrant was and how to operate the water fog equipment, in accordance with the thesis of this present work, ‘The Buck Stops Here and it Stops Everyplace Else as Well’. After receiving the first report, Mr. Newman was informed again by a second passenger. Not observing any fire from where he was in the
ticket office, he took no further action as he had no training in evacuation procedures and saw his duties as limited to the ticket office.34 This lack of further action on the part of Mr. Newman is unfathomable. Having been twice informed of a fire, whether or not he could personally see the fire, Mr. Newman possessed the moral responsibility as an individual to report the fire. Again, while top management can be criticized for not seeing to it that he had some training in evacuation, certainly Mr. Newman could have possessed a wider sense of responsibility for his position than simply acting as a ticket agent. Both individual responsibility and management responsibility are co-involved in this special case. While Mr. Newman possessed such a responsibility as a human being, management equally possessed the responsibility to set forth a safety first prioritization such that actions conducive to saving lives would always take precedence over one’s job specifications. Relief Inspector Hayes passed the water fog controls and saw both smoke and flame but did not know how to operate the water fog controls. He could not get close enough to the fire with a carbon dioxide fire extinguisher. Here again, top management may be cited for a failure to make sure that Relief Inspectors knew how to operate equipment, but it is also the case that Relief Inspector Hayes could have made it his business to become familiar with the use of such equipment in the case of an emergency. Once again, a safety first prioritization for London Underground would have prompted management to ensure that Inspectors knew how to use safety equipment and at the same time, such a safety ethos would have encouraged Inspector Hayes to have learned to use such equipment on his own initiative. Such a safety ethos combined with an organizational will to communicate would have prompted Hayes to have communicated to authorities above him the nature of what was occurring. 
It is not simply a matter of a concatenation of circumstantial errors; it is a matter of an organization operating without a genuine safety ethos. Relief Inspector Hayes comes under considerable rebuke by Mr. Fennell for his lack of action: ‘Relief Station Inspector Hayes was unprepared by training and experience to take charge of the incident. His failure to notify the station manager or line controller as soon as he received a report of a fire or to operate water fog equipment were serious omissions which may
have contributed to the disaster, although it is possible that the chain of events was too far advanced for any action on his part to have averted the development of the fire, but it might have delayed it.’35 While Mr. Hayes could certainly be faulted for his lack of action, since such lack of action was endemic throughout the London Underground it can only be surmised that there was neither a strong safety ethos nor a strong will to communicate present in the organization. And it was surely the responsibility of management to have ensured the presence of these two factors in their organization. Such a lack of responsibility on the part of individual agents in an organization leading to the outcome of a disaster bears a distinct and disturbing resemblance to the cases of both the British Ferry disaster and the disaster on Mount Erebus, in which the action of just one person could have prevented the disastrous outcomes. But it must be remembered that it was not only that individuals did not take appropriate action. It was also that the management involved in all three of these disasters failed to instill a safety first ethos and a will to communicate. There were several other incidents of staff irresponsibility that may well have been co-contributing causes of the disaster. Leading Railwoman Eusebe, Leading Railman Swaby and Leading Railwoman Ord were not at their posts and one railman’s post was unfilled. From the evidence of Leading Railwoman Ord, both she and Swaby were taking an extended meal break at the time of the flashover.36 Again, while top management can well be taken to task for allowing such laxity to exist without appropriate supervision, it may also be said that staff might have taken their own responsibilities more seriously and thus were equally derelict in their sense of duty. Once again, a safety first ethos would possibly have encouraged Ord and Swaby to end their meal break promptly even without the presence of supervision. 
In the case of the Metropolitan and Circle Lines staff there were equally numerous blunders by both senior and junior staff. Mr. Worrell, the most senior member of staff as the station manager of King’s Cross that night, was positioned not at the centre of the station but at the far end, where his office had been temporarily re-located during station building works. He had expressed his anxiety over this but had been overruled. This particular blunder was one the
consequences of which had been foreseen. While he may not have anticipated a fire, he was aware that he had no external telephone for any possible emergencies that might arise. In his temporary office the only means of communication available to him was an internal telephone. Mr. Worrell did attempt to communicate his concerns but his concerns had not been heeded. In this case, there had been a partial communication. Someone had spoken, but no one had listened. Mr. Worrell was not told of the fire until 19:42, twelve minutes after the report of the fire. After having been told of the fire, despite his earlier attempt to protest his office location, and notwithstanding his position as senior representative of the London Underground, Station Manager Worrell made no attempt to contact the London Fire Brigade, incredible as this may seem to the reader ...37 Once again, one can consider individual responsibility and management responsibility for not providing a safety first ethos and a will to communicate. It is difficult to understand Mr. Worrell’s failure to communicate such important information. In Mr. Worrell, one possesses the example of a senior staff member who did not take the initiative to notify the Fire Brigade throughout the entire disaster. The Relief Station Manager at King’s Cross that fateful evening was Mr. Pilgrim, who was present with Station Manager Worrell in his office when the call came through informing them of the fire. Pilgrim also did not take this call seriously and waited several minutes taking a refreshment break. Coming out later with Worrell he did aid in the evacuation of passengers. Once again one thinks of the lack of individual moral initiative on the part of staff as human beings with moral responsibility to others. A course in ethics would seem to be in order. On the other hand, it is surely the responsibility of the company to vigorously champion the cause of safety. 
What conclusions can be drawn with regard to the London Underground Staff? According to Mr. Fennell, ‘... the London Underground Staff ... were woefully ill-equipped to meet the emergency that arose. Those on duty did the best they could using their common sense in the absence of training and supervision. Had the water fog equipment been used there is reason to think that the progress of the fire would have been delayed and the London Fire Brigade might have been able to deal with it. In fact, not a drop of water was applied to the fire nor any fire extinguishers used by the
London Underground staff’.38 While Mr. Fennell points the finger at top management here as responsible for the lack of training and supervision, it can also be alleged that the staff concerned might have taken the responsibility onto themselves for familiarizing themselves with the equipment or making requests for such training in the event that such an emergency were to arise. Responsibility in an organization is not only top down; it is also bottom up. And this is an excellent case in point where staff might have taken matters into their own hands when top management had forsaken its proper responsibilities: to seek and insist upon receiving proper training in the use of fire-fighting equipment and, short of that, to teach themselves the rudiments of the proper use of fire extinguishers and water fog equipment. The number of staff who were irresponsible is so incredible as to stagger the imagination. When one contemplates the number of individuals involved who could have by more prompt action done much to ameliorate the effects of the fire, one must appreciate the importance of management’s trumpeting the overriding importance of safety and communication. The response or lack of proper response of the line controllers is detailed in Mr. Fennell’s report and contains many examples of negligence on the part of staff which may have contributed substantially to the loss of life by allowing trains to stop at King’s Cross and passengers to disembark. In the first case, Mr. Tumbridge, the HQ controller, was alerted by the Piccadilly Line controller at 19:39 of a fire and by the London Fire Brigade at 19:53 that a ‘full fire’ was reported at King’s Cross station. He called the Duty Officer, Divisional Operations Manager Green, at home at 19:53 (after alerting the London Underground fire department), who was thus alerted 18 minutes after the fire was first reported to the HQ control room.39 Mr. Fennell theorizes that if Mr. 
Tumbridge had been informed by a station supervisor when the fire was first detected, events might have been very different. Mr. Green took an additional full hour to reach the station. In the meantime, Mr. Tumbridge did not specifically order Northern Line trains not to stop or to verify whether trains were in fact stopping and these failures, in Mr. Fennell’s eyes, ‘... may have materially contributed to the disaster.’40 The Piccadilly Line controller on duty on the evening of 18 November 1987 was a Mr. R. Hanson. Mr. Hanson did learn of the
fire indirectly at 19:38 during a telephone conversation with the British Transport Police about an unrelated incident. He received a request to order trains on his line not to stop at King’s Cross which he put into effect by 19:45. It was mainly due to the lack of a direct line to the station manager’s office at King’s Cross that the controllers were unable to alert station managers any sooner. One must ask, of course, why such a line did not exist. Was it a cost factor? Or was it that safety did not receive any special importance? The Northern Line controller at the time of the fire was Mr. Goldfinch. It is likely, according to Mr. Fennell, that he received an instruction to order trains not to stop at about 19:44, although this and other calls received during this period were not logged. At 19:50 he received a direct instruction to order trains not to stop and he implemented it. Unfortunately, in the previous eight minutes, three southbound and two northbound Northern Line trains had stopped and passengers had been allowed to disembark. This controller has since left the service.41 Mr. Fennell concludes with the salient point that, ‘None of the controllers thought to check with the London Fire Brigade that it was safe to run trains through the station or indeed to call the station to see what was happening.’42 From this statement, it may be gathered that a share of the responsibility for the subsequent loss of life of the passengers lies with the lack of initiative shown by the line controllers. Again, it can also be said that management bears a part of this responsibility for not having stressed the importance of safety. What of the Senior Operating Staff? Since responsibility for King’s Cross station fell to the division which included the Metropolitan Line, Acting Traffic Manager Nelson was the most senior London Underground officer in the station until the arrival of Incident Officer Green after 21:00. 
However, Nelson construed his duties as having to do almost entirely with the Metropolitan side of the station.43 While such an interpretation of his duties might have been by the book, it does show a lack of initiative and a sense of responsibility for the disaster that was taking place which both could and should have been displayed by a line officer. Further, Nelson did not take the initiative to let those on the surface know he was available to the station nor did he offer his detailed knowledge of the layout of the station.44 Such a lack of initiative on the part of a line officer can only be
THE KING’S CROSS UNDERGROUND FIRE
243
construed as irresponsible and co-contributory to the extent of the disaster. At the same time one can only ask why such a lack of willingness to communicate existed in the organization. On the tube lines side, two area managers arrived independently by train shortly after 20:00. Amazingly enough, none of the managers attempted to contact the emergency services or London Underground personnel on the surface by telephone. Traffic Manager Weston, who came up shortly after 20:30, assumed that Acting Traffic Manager Nelson was in charge and did not make contact with the London Fire Brigade area control unit once he saw that Incident Officer Green had arrived.45 This appears to be a clear case of, 'I thought THEY were taking care of that'. With a strong sense of 'The Buck Stops Here and it Stops Everyplace Else at Once', a safety first ethos and a sincere will to communicate, each manager would have made an attempt to contact the emergency services or London Underground personnel on the surface. One can always ascribe such behavior to a lack of coordination among management; however, it is clear that there was a lack of initiative responsibility on the part of each and every member of senior management in this respect. The severity of the disaster finally became clear to Mr. Green and he reported to Assistant Chief Officer Kennedy of the London Fire Brigade at about 21:20. Mr. Green assumed that Kennedy was in full control of the situation and that firemen below ground were in contact with those above. He did not bother to find out whether either of these assumptions was correct. This would appear to suggest that senior management on the scene did not take sufficient responsibility for the disaster that was taking place under their hands. This attitude on the part of Mr. Green reminds one of the attitude of the various crew members of the Herald of Free Enterprise.
When one sees an alarming absence of proper organizational and ethical attitudes in one disaster after another, one becomes more and more convinced of the importance of proper management education. In Fennell's judgment, most of the above showed a lack of communication: 'It will be clear ... that there was no effective communication between those present on either side of the station and those outside, and that several opportunities for the exchange of vital information between London Underground and London Fire Brigade personnel were lost.'46 It appears as if there are material
reasons for the lack of communication. In Fennell's judgement, '... it was a material deficiency on the night of the disaster that there was no member of London Underground in the [control] room and much of the equipment was out of order.'47 In addition, 'All five of the [closed circuit television] cameras covering the Northern Line and some of the Piccadilly Line cameras were out of service, having been removed before the modernization work in the station. Neither the station manager nor the supervisory staff had been consulted about the removal of these cameras.'48 One must consider, however, that only a lack of proper management and of a safety first prioritization would permit a situation to exist in which no member of London Underground was in the control room and would tolerate equipment which did not work. It would also appear that a sense of initiative responsibility for the disaster was not displayed either by staff or by senior management present. Some of the share of responsibility clearly lies with top management for not providing proper training for staff immediately responsible for dealing with disasters. Some of the share of responsibility clearly lies with the lack of initiative shown by line personnel in either familiarizing themselves with equipment around them or even in using equipment with which they were familiar. With respect to communications equipment, it is very puzzling that no use whatsoever was made of the public address system at King's Cross throughout the fire and evacuation.49 In Fennell's chapter, 'Communication Systems', he states that: 'It is clear that the station staff, several of whom had a good knowledge of the communications equipment available, failed to make use of it. They did not call the London Fire Brigade upon discovery of the fire, inform the station manager or the line controller promptly, nor use the platform public address system to keep passengers informed during the emergency.
There was unacceptable delay in passing on and carrying out the police request that trains should non-stop.’50 What is to account for the general lack of training and initiative taken in this emergency situation? In Fennell’s appraisal, the underlying reason for this was the lack of a safety priority policy and instead a policy which laid total emphasis upon operational maximizability (the operational equivalent of the bottom line): ‘Their [London Underground] concern with accounting for staff and keeping trains running prevented them from making a
proper appraisal of the overall situation and ensuring that relevant information was passed to the emergency services and HQ controller.'51 This reminds one strongly of the pressure on the system of having to maintain a routine schedule to get the shuttle in the air that was cited as one of the pressures to launch the Challenger. When considering overall responsibility, one can, of course, also consider the role played by the response of both the police and the London Fire Brigade to the disaster. While space does not permit a comprehensive analysis of the response of the emergency services, a few points may be noted with respect to the issue of co-responsibility. As pointed out earlier, the London Fire Brigade had no information as to what was going on below. However, it is equally true that the London Fire Brigade did not attempt to obtain such information from the London Underground.52 A major issue discussed above is that they did not know the layout of the underground station. While above it is argued that it was the responsibility of London Underground to have supplied such information, it can also be said that the Fire Brigade could have asked for it. Fennell states that, '... although the occupier of property should invariably provide a guide to meet the Fire Brigade upon arrival, where such a guide is not provided and the Fire Brigade have no detailed knowledge of the geography, it is their duty to obtain details forthwith.'53 Once again communication is radically incomplete. London Underground should have informed the Fire Brigade that it had no maps. The Fire Brigade should have inquired if maps were available. Having made such inquiries (or having been informed), Fennell's point becomes relevant. The Fire Brigade could have inquired elsewhere as to how to obtain maps.
Fennell also points out that fire officers below the surface had not taken their radios with them and thus could not communicate with those on the surface.54 Finally, Fennell also suggests that there was no overall communication between the various emergency services at command level. With respect to the police, Fennell recommends that the British Transport Police review its arrangements for access to station keys, location information, and liaison arrangements with other emergency services.55
THE IMPORTANCE OF A SAFETY ETHOS

While it is clear from the above that both bottom up and top down responsibility were involved both in the outbreak and in the disastrous proportions of the fire, it is natural to consider that the ethos of an organization is and should be directed primarily by those ultimately responsible for the organization, namely, top management. What can be said to be lacking in the directives coming from top management was a safety first ethos. Fennell's investigation shows that there was also a fundamental confusion among management as to who was responsible for what. According to London Regional Transport, all operational matters including safety were a matter for the operating company, London Underground. Fennell reports that, 'The Chairman of London Regional Transport, Sir Keith Bright, told me that whereas financial matters were strictly monitored, safety was not strictly monitored by London Regional Transport. In my view he was mistaken as to his responsibility ...'56 With respect to London Underground, Fennell finds them equally blameworthy: 'In my judgement Dr. Ridley was correct to say that London Underground at its highest level may not have given as high a priority to passenger safety in stations as it should have done'.57 Fennell thinks that this confusion among top management arose because no one person was charged with overall responsibility for safety. Each director believed that he was responsible for safety in his division, but that it covered principally the safety of his staff.
For example, the operations director, who was responsible for the safe operation of the system, did not believe that he was responsible for the safety of lifts and escalators, which in his view came under the engineering director's department.58 In his evidence to the Investigation the Chairman of London Regional Transport, Sir Keith Bright, said that London Regional Transport did not interfere in the day-to-day operation of the railway. London Regional Transport's position of leaving operational matters to London Underground was underlined at every stage during Sir Keith's evidence.59 On the other hand, the Engineering Director did not concern himself with whether the operating staff were properly trained in fire safety and evacuation
procedures because he considered those matters to be the province of the Operations Directorate. But, according to Fennell, 'Such matters clearly had a bearing on the safety of passengers in stations for which he shared corporate responsibility and the security and maintenance of assets for which he was directly responsible.'60 What is clear from all of this is that there was little cross-fertilization between the Engineering and Operating Directorates and, as Fennell puts it, '... at the highest level one director was unlikely to trespass on the territory of another.'61 In passing, Fennell does suggest an intriguing proposal which deserves consideration with respect to disaster prevention management in general, that of the creation of a national department concerned with disaster inquiries: 'I raised the question during the hearings of whether there should be a national disaster plan. On reflection, it seems to me that an office or desk in a Government department which would coordinate the valuable information that exists relating to disasters and their consequences could serve as a focal point for sharing experience and knowledge. It is unsatisfactory that those coping with the consequences of major disasters should very often have to start from scratch, and that the lessons to be learned from earlier accidents involving deaths and injuries should not be as widely disseminated as possible.'62 While Mr. Fennell does not discuss this further, such a proposal has international significance in addition to national significance and merits further consideration.
THE ROLE OF MANAGEMENT STRUCTURE IN BLOCKING THE FLOW OF INFORMATION

It seems apparent that, in addition to the lack of the assumption of responsibility on the part of top and senior management and staff, there was indeed a communication problem which spread both between and within the Engineering and Operating Directorates. To what can one attribute such large breaks in the flow of information? To a certain extent one can look to the organizational changes that were taking place at the time, although this can by no means represent a total explanation. Mr. Styles, the lift and escalator
engineer from 1973 to 1987 told the Court that during 1985 and 1986 the staff were much occupied with getting the new management system running. More significantly, until 1984 his division had been part of the Operations Department and after the move to the Engineering Department, informal contact with operating staff had largely ceased and there was some confusion over areas of responsibility.63 In addition, the client/contractor split of areas of responsibility was not properly established at the time of the King’s Cross fire, and the lift and escalator engineer said that he did not succeed in monitoring escalator cleaning standards.64 With the lack of clarity as to who was properly responsible for what, communication naturally suffered a breakdown. For example, recommendations for action involving escalators made in internal inquiry reports of accidents did not always reach the Engineering Department.65 The problem with the compartmentalization approach to management is not only inefficiency but the potential unawareness of the building blocks of disaster and the incapacity to deal with these effectively when they erupt. In Fennell’s view, the situation would have been improved with the availability of up-to-date and comprehensive organizational charts which would have at least shown where the areas of responsibility lay: ‘... It may be an indication of the compartmental approach to management within London Underground that no up-to-date or complete chart showing the level of responsibility at which decisions were being taken was available. Such a management tool was, in my view, essential for senior managers to identify properly where decisions were being taken and where gaps in responsibility could occur.’66 What better proof could be offered of the signal importance of management fundamentals to the saving of lives than this? The connection between sound management procedures and ethics could not be better illustrated. 
This awareness of management tools on Fennell's part shows a greater degree of focus on the area of management than is shown, by contrast, in the Presidential Report on the Challenger, where problems in the chain of command pointed to earlier in this present volume were not picked up by the Presidential Commission.
FENNELL'S RECOMMENDATIONS: THE PRIMACY OF SAFETY

In the case of the King's Cross fire, the thesis of multiple causation introduced in Chapter One of this book seems more than confirmed. In fact, it seems so confirmed that one is overwhelmed by the sheer magnitude of co-contributing causes both to the outbreak of the fire and, more importantly, to the inefficient response to the containment of the fire and the inadequate protection given to the passengers, the victims of the fire. While one could, theoretically, attempt to patch up each and every defect individually, this might not materially alter things because what appears to have been missing is the element of coordination itself. In Fennell's eyes what is missing is precisely a safety ethos, a safety ethos which is an all-encompassing directive of the organization as a whole. In the opinion of the present author, such a safety ethos would provide the coordinating linkage if it is coupled with both a will to communicate and a proper attention to charting organizational responsibilities. Such a safety ethos, if given teeth and an efficient management structure in which to operate, might breathe life into an organization and inform its parts. Such appears to be the main thrust of Fennell's recommendations along with very specific, practical proposals such as the installation of public telephones in the Underground stations. But first of all a safety ethos must be present and must be present in an all-encompassing and authoritative form. Of course that safety ethos can only be effective if all of the members of an organization are truly behind it. But the first step, at least, is that the top management of an organization is responsible for the installation of such a safety ethos and further for attempting to make such a safety ethos a primary directive for all members of the organization.
At the very least this offers an umbrella under which to make organizational changes and a criterion to identify areas which need changing. Fennell is extremely clear that a safety ethos must be part and parcel of a corporate ethos. His approach thus does not confine itself simply to the appointment of a safety expert or the creation of a safety committee, which seems to have been more the approach of the Presidential Commission on the Challenger. Fennell's
recommendations require a philosophical change at the very top of the management chain: 'It is clearly in London Regional Transport's interest to take an independent view on the effectiveness and value for money of London Underground's safety programme. If London Regional Transport sets corporate safety objectives, they will be seen to be a part of the overall corporate objectives, and not in conflict with them.'67 Fennell does also suggest [a suggestion that has since been implemented] that a non-executive director whose specific responsibility is safety sit on its Board.68 This suggestion parallels what is suggested in this present work and is unlike the recommendations of the Presidential Commission on the Challenger. The Presidential Commission recommends setting up an independent safety board one of whose members would either be or report to a member of the Directorate. However, as pointed out earlier in this volume, it is more to the point that the safety expert sit on the Directorate and this is in fact the solution proposed [and subsequently taken up by London Underground] by Fennell. What is more, the safety expert is responsible, according to Fennell, for establishing an internal audit procedure by which performance standards are judged against safety standards. Financial performance is measured against safety standards.69 This is an especially intriguing point because it provides a precise mechanism for showing the priority that must be given to safety factors. This gives the safety ethos some teeth. The presence of the safety expert on the Board also gives him or her access to day to day decisions of the Board and the opportunity to give input early on in decision making. One could argue presumably that with a public service such as an underground railway it is natural that safety would be an important issue. However, the same priority cannot and should not exist in an experimental space program.
As soon as one has raised this point, however, one must consider why the loss of life is valued more in one instance than in another. And if safety is recognized as a priority by establishing a board of safety (as in the case of the Challenger), then why not establish a system for seeing to it that safety receives the priority it is recognized to possess? In the case of the King's Cross investigation one may consider that the concern for safety was greater due to the nature of the disaster where 31 innocent civilians were the
victims (in the case of the Challenger there were only two innocent civilians). Nevertheless, the lessons learned from one disaster can still be applied to the other. In the case of the King's Cross investigation, the Chief Safety Officer has some degree of power. In addition to the ingenious safety audit system, 'The Chief Safety Inspector shall review existing safety arrangements, identify hazards, recommend policies, objectives and systems to meet those hazards, and thereafter audit the effectiveness of the system. He should have direct access to the Chief Executive of London Underground and the power to call for any report, logs and correspondence relating to safety'.70
PHILOSOPHICAL UNDERPINNINGS

The priority accorded to safety and the provision of some system which can maintain its priority may be said to be due to a strong philosophical presupposition that disasters can be prevented and that such prevention requires continuous, active efforts. Without the presence of this assumption, no far reaching steps to implement and ensure safety would be taken. If one takes the Russian roulette attitude to safety alluded to by Professor Feynman in the Presidential Report, then the lack of previous serious disasters would be considered sufficient to guard against future ones. On the contrary, one cannot rely upon a previous record of the lack of disasters. Such an attitude reflects the view that 'accidents simply happen' and are not subject to our control. The premium placed on prevention by Mr. Fennell is a lesson well worth noting. In his investigation, Mr. Fennell quotes Mr. Adams, the Senior Personnel Manager (Operations), who wrote in a memorandum to the Operating Management Meetings in August 1987 prior to the fire that: A safe environment is not one in which there is an absence or a low number of serious injury incidents, but is the result of active participation by management and staff in identifying hazards and then doing something positive about them. In other words, the absence of accidents is a negative measure largely dependent on luck while the identification then prompt elimination or control of
hazards is a positive step and is essential to the discharge of our duties ...71 What if such a brilliant analysis of the significance of the absence of previous disasters had been available to those decision makers who decided to launch the Challenger? This philosophical attitude is a prerequisite to the conception of and viability of a managed safety program. One must not operate under the double illusion that a previous record of safety protects one from the future and that "accidents will happen" anyway and that nothing can be done to ultimately prevent them. In Mr. Fennell's overview of his report, he sums up this attitude: 'Since no one had been killed in the earlier fires they [the London Underground] genuinely believed that with passengers and staff acting as fire detectors there would be sufficient time to evacuate passengers safely.'72 It is interesting to note that during the investigation Mr. Warburton could find no evidence that anyone within the organization was questioning what the worst possible consequences of fire could be. Nobody had asked 'what if ...?' If they had, the aim, Fennell argues, '... would be to minimize the number of fires and thus to reduce the probability of one of them becoming a major incident. The ultimate objective must be the elimination of all fires'.73 The philosophical underpinnings require an attitude that disasters can - in Fennell's words, must - be prevented and that means that active and continuous human intervention is required. The possession of this philosophical base makes it possible to take a view towards the prevention of disasters that is both hopeful and active. When such a view as this can inform and guide corporate policy, the prospects of disaster can be retarded. It is of interest to note that such a view is not purely wishful thinking but is part and parcel of staid British Government policy.
NOTES

1. Desmond Fennell, Investigation into the King's Cross Underground Fire, London: Her Majesty's Stationery Office, 1988, p. 37.
2. Ibid., p. 38.
3. Ibid., p. 188.
4. Ibid., p. 22.
5. Ibid., p. 23.
6. Ibid., p. 184. Perrow suggests at one point that the term 'catastrophe' be reserved for accidents that kill more than 100 people with one blow. Cf. his new Afterword to Normal Accidents: Living with High Risk Technologies, Princeton: Princeton University Press, 1999, p. 357. However, Perrow still refers to the Challenger disaster as a disaster in which "only" 7 people lost their lives.
7. Op. cit., p. 183.
8. Ibid., p. 31.
9. Ibid., p. 32.
10. Ibid., p. 120.
11. Ibid., p. 233.
12. Ibid., p. 61.
13. Ibid., p. 61.
14. Ibid., p. 111.
15. Ibid., p. 104.
16. Ibid., p. 104.
17. Ibid., pp. 15-16.
18. Ibid., p. 114.
19. Ibid., pp. 16-17.
20. Ibid., p. 167.
21. Ibid., p. 117.
22. Ibid., p. 116.
23. Ibid., p. 117.
24. Ibid., p. 117.
25. Ibid., p. 230.
26. Ibid., p. 232.
27. Ibid., p. 233.
28. Ibid., p. 235.
29. Ibid., p. 236.
30. Ibid., p. 61. In Perrow's classic version of a systemic error thesis, he argues in his 1999 Afterword to his Normal Accidents: Living with High Risk Technologies that '... it takes the right combination of inevitable errors to produce an accident'. But this ignores the fact that conscious agents are involved, not inevitable errors. Charles Perrow, Normal Accidents: Living with High Risk Technologies, Princeton: Princeton University Press, 1999. Errors need not be inevitable. With a sound management structure, a will to communicate, and an ethical ethos making safety the highest priority, why would there be "inevitable errors"? In any case, such measures would certainly reduce the efficacy of such errors. And a proper
management system guided by an ethical imperative would make it less likely that there could be a "chance concatenation of errors".
31. Ibid., pp. 75-76.
32. Ibid., p. 61.
33. Ibid., p. 62.
34. Ibid., p. 62.
35. Ibid., p. 63.
36. Ibid., p. 63.
37. Ibid., p. 65.
38. Ibid., pp. 37-8.
39. Ibid., p. 68.
40. Ibid., p. 69.
41. Ibid., pp. 69-70. This behavior can be contrasted with that of Harish Bhurvey, the stationmaster of Bhopal railway station, who informed stations before and after Bhopal to stop all rail traffic on the outskirts of the city. Thus, trains did not pass through Bhopal and the lives of thousands of passengers were saved. But the stationmaster died from inhalation of the poisonous gas. Cf. Paul Shrivastava, Bhopal, Anatomy of a Crisis, Cambridge: Ballinger Publishing Co., 1988, p. 7.
42. Op. cit., p. 71.
43. Ibid., p. 72.
44. Ibid., p. 72.
45. Ibid., p. 73.
46. Ibid., p. 75.
47. Ibid., p. 19.
48. Ibid., p. 136.
49. Ibid., p. 135.
50. Ibid., p. 135.
51. Ibid., p. 74.
52. Ibid., p. 82.
53. Ibid., p. 83.
54. Ibid., p. 82.
55. Ibid., p. 89.
56. Ibid., p. 17.
57. Ibid., p. 18.
58. Ibid., p. 18.
59. Ibid., p. 26.
60. Ibid., p. 29.
61. Ibid., p. 29.
62. Ibid., p. 159.
63. Ibid., p. 35.
64. Ibid., p. 35.
65. Ibid., p. 35.
66. Ibid., p. 34.
67. Ibid., p. 123.
68. Ibid., p. 124.
69. Ibid., p. 124.
70. Ibid., p. 168.
71. Ibid., p. 120.
72. Ibid., p. 18.
73. Ibid., p. 120.
CHAPTER 11
THE DISASTER ON MT. EREBUS

'The job of a professionally conducted internal inquiry is to unearth a great mass of no evidence.'
Jonathan Lynn and Antony Jay (eds.), The Complete Yes Minister, The Diaries of a Cabinet Minister, by the Right Hon. James Hacker, MP, London: Guild Publishing Co., 1984, p. 178.

On 28 November 1979, Air New Zealand Flight 901, a DC10 airliner with 257 people aboard, plunged in broad daylight into the side of a 12,000-foot active volcano, Mt. Erebus, in the Antarctic, leaving no survivors. It is one of the best documented disasters and yet, for all that, the cause eluded investigators. Why did a skilled crew fly straight into a mountain wall in clear weather? While perhaps not internationally as well known as some of the other case studies of disasters investigated, the case of Erebus is of particular interest. It is of particular interest because, quite unlike the other cases studied, the first official report of this disaster turns out, upon close inspection, to be fraudulent with respect to key details. As a result, one of its unexpected lessons is that one cannot necessarily take the evidence offered or the conclusions reached by an official body at face value. Quite fortunately, in this case there is a further official investigation which disproves the results of the earlier investigation and other, independently gathered evidence which supports the second investigation's results. An investigation into this disaster thus requires one to study at least two reports: the first, fraudulent report and the second, corrective report. In so doing, this case study proves useful as an example of how to assess the validity of an investigation into a disaster and thus offers further tools for the investigation of future official reports. This case study is thus concentrated on the aftermath of a disaster as much as it is concentrated on an inquiry into the original disaster itself.
What is of special interest in the investigation of the investigation is that clear
philosophical grounds can be found for assessing which of the two reports is valid. Thus, this chapter will concern itself with examining the evidence offered in the two reports in addition to establishing the true cause of and the responsibility for the disaster. The case study of Erebus can teach one at least three things: first, there are cases of airline disasters that are almost entirely the fault of management error. Such a discovery might do much to redress the general picture in the public's eye that the majority if not all of the airline disasters that occur are the result of the much abused notion of 'pilot error'. Secondly, one must not be taken in by the fact that certain evidence is presented or that certain conclusions are reached in an official investigation. One must pay very careful attention to whether the evidence presented really supports certain conclusions. Thirdly, one can be made much more aware of what general criteria one may use in assessing whether the conclusions reached in reports are proper or not. In short, one can become more schooled in how to read the literature of official investigations. The starting point for the case study analysis of the Erebus disaster is the Report of the Royal Commission to inquire into The Crash on Mount Erebus, Antarctica of a DC10 Aircraft operated by Air New Zealand Limited, written by the Royal Commissioner, the Honourable P.T. Mahon, Judge of the High Court at Auckland, hereinafter to be referred to as the Mahon Report.1 The Royal Commission, made up of Justice Mahon, two barristers, David Baragwanath and Gary Harrison, and a retired Air Marshal of the RAF, Sir Rochford Hughes, heard evidence over a period of 75 days. The notes of evidence comprised 3083 pages, and 284 documentary exhibits were produced and studied. In addition to hearing evidence in Auckland, the Royal Commissioner, The Honourable P.T. Mahon, accompanied by Mr.
Baragwanath, Q.C., travelled overseas and spent over three weeks in the U.S.A., Canada and the U.K. interviewing experts and obtaining depositions from witnesses.2 The story of Justice Mahon's virtually single-handed investigation is remarkable in itself. Of special value for this chapter is Justice Mahon's own subsequently published book, which gives his account of the inquiry, entitled Verdict on Erebus.3
A SHORT HISTORY OF THE DISASTER

On November 28, 1979 a DC10 passenger jet airliner owned by Air New Zealand, carrying out a tourist flight from New Zealand to Antarctica and back, flew in broad daylight into the lower slopes of Mount Erebus in Antarctica. There were no survivors of the crash and 257 people lost their lives. The overriding question raised both during the course of Justice Mahon's Formal Inquiry and in his account of the inquiry given in his book, Verdict on Erebus, was why and how did this disaster occur. The pilot in command of the fatal flight, Captain Jim Collins, was known throughout Flight Operations Division as one of the most meticulous and conscientious pilots in the company. His experience with jet airliners was vast. His co-pilot, Greg Cassin, was equally conscientious, and was also highly experienced and highly skilled, as were the two flight engineers, Gordon Brooks and Nick Moloney. They were joined by an experienced Antarctic observer, Peter Mulgrew.4
THE CHIPPINDALE REPORT
The Chief Inspector’s Final Report, required by law to be carried out by the Chief Inspector of Air Accidents and reported to the Minister of Transport of New Zealand, was approved for release as a public document on 31 May 1980 and was published under the title of AIRCRAFT ACCIDENT REPORT No. 79-139 by the Office of Air Accidents Investigation. It was written up by the Chief Inspector of Air Accidents, one R. Chippindale, and is hereinafter referred to as the Chippindale Report. The Chippindale Report very clearly placed the responsibility for the accident upon the aircrew.5 In short, the Erebus disaster was a case of pilot error. What is fairly amazing is that were it not for the work of Justice Mahon and Captain Gordon Vette in particular, the Chippindale Report would likely have been believed in its entirety, and there the matter would have been laid to rest.
THE DISASTER ON MT. EREBUS
JUSTICE MAHON’S CRITIQUE OF THE CHIPPINDALE REPORT
According to Justice Mahon (a fact which can be borne out by an independent reading of the Chippindale Report), a major section of the Chief Inspector’s report consisted of the readout by overseas experts of the flight-deck recorder tapes (CVR). Mainly on the basis of the interpretation by overseas experts of certain key fragments of conversation recorded from the flight-deck recorder tapes, it is stated, tucked away on p. 53 of the report in an ordinary paragraph (3.37), that ‘The probable cause of this accident was the decision of the captain to continue the flight at low level toward an area of poor surface and horizon definition when the crew was not certain of their position ...’6 In short, Chippindale assigned the most likely cause of the disaster to a classic case of ‘pilot error’. The problems with Chippindale’s conclusion are manifold. First and foremost, upon a close examination of the primary evidence for his conclusion, the evidence taken from the tapes, it is clear that his conclusion is not warranted by that evidence. Secondly, there is countervailing evidence from the tapes which shows that the pilot and crew gave no indication of being uncertain of their position. Thirdly, there are more credible explanations of what did cause the disaster, which include a crucial operational error. Fourthly, when one compares the plausibility of the competing explanations: to explain the disaster as the result of management error, only one hypothesis needs to be assumed; to explain it as a result of pilot or crew error, approximately 54 ad hoc assumptions need to be made. Since the details of the Erebus disaster are extremely complex, in this chapter it will be necessary to concentrate primarily on the evidence taken from the interpretation of the tapes and to make only passing reference to some of the more complex, technical details.
When one discovers that the tapes in no way show that the crew were uncertain of their position, this pulls the props out from under the most important piece of evidence supporting the conclusions of the Chippindale Report. Thus, although in this chapter full justice cannot be done to the complexity of the set of problems that plagued the investigation of the Erebus disaster, the most fundamental features of the problem can be set to rights. First of all, it is best to examine the evidence proposed by Chippindale which was used to support the theory of ‘pilot error’.
THE EVIDENCE FROM THE FLIGHT-DECK TAPES
According to the Chippindale Report, the crew were uncertain of their position and were reacting to this sense of uncertainty with alarm. Just prior to the disaster, according to Chippindale, ‘... the two flight engineers on the flight deck had ... expressed their mounting alarm.’7 This notion that the crew was in a state of fear or alarm is reinforced by a further reference to the ‘... apprehension expressed by the flight engineers ...’8 That the crew was not in agreement with the captain’s decision is indicated in the Chippindale Report when it is stated that ‘... the flight engineers ... expressed their dissatisfaction with the descent toward a cloud covered area ...’9 When Judge Mahon first read the transcript of the contents of what was recorded onto the cockpit voice recorder system (CVR), he thought that the transcript was a record of exactly what was said by different voices.10 However, after he discovered that the CVR tapes had been taken to Washington for the purpose of transcription and that it had taken five days for a transcript to be prepared of a 30-minute tape, he realized that the quality of the tape recording must have been suspect.11 When he himself listened to the tape recording, he found it extremely difficult to distinguish what was being said: ‘... I found that the volume of conversations and cross-talk on the flight deck behind the two pilots made it difficult in the extreme to ascertain what exactly was being said. Conversations between different people tended to run together. A sentence uttered by someone would be interrupted mid-way through by a sentence spoken by someone else who was evidently closer to the microphone. Someone would give an answer to an indecipherable question.’12 Of greatest importance are the particular fragments which are adduced both to have been intelligible and to have been the strongest pieces of evidence that the crew were uncertain of their position, alarmed over this, and unhappy with the aeronautical decisions of their captain. Two such passages from the transcript are quoted below. The first of these is utilized as indicating that the crew is both concerned about visibility and also uncertain of their position:
‘Bit thick here eh Bert’
‘You’re really a long while on ... instruments at this time are you’13
When Justice Mahon, accompanied by QC Baragwanath, visited Washington, they discovered that it had been agreed by everyone present in Washington that these particular fragments were too unintelligible to be of any assistance, and they had not been included in the transcript agreed upon by all parties in Washington. Thereafter, however, it was discovered that the chief inspector had gone on to Farnborough in the U.K., and the Farnborough expert, Mr. Davis, had considered that the extracts were intelligible and so had included them.14 When Mr. Baragwanath and Justice Mahon listened to the tapes, they were satisfied that the two extracts quoted above did not represent what had been said. Neither Mr. Baragwanath, Justice Mahon nor Mr. Turner (who played the tapes back for them) was able to pick up the crucial word, ‘here’. There was doubt whether the word ‘thick’ could be heard. Most importantly, although this observation was supposed to have been answered by the flight engineer Moloney, his name was not ‘Bert’ and in fact it was undisputed that there was no one on the flight deck named ‘Bert’.15 It should be pointed out here that the CVR could also pick up conversation of passengers near to the flight deck. As to the second disputed sentence, while it was found that the word ‘instrument’ was used, the key phrase ‘at this time’ could not be made out with certainty. Thus, the first fragment might well have been a conversation between passengers seated near to the flight deck.
(Here, it would have been of interest to examine the passenger manifest to determine whether any of those seated near the cockpit had the first name ‘Bert’ or the first name or surname ‘Gilbert’. So far as is known, this has not been done.) The second fragment could conceivably have been a conversation between a crew member and a passenger about instrument flying in general, in which the passenger registers surprise, upon learning the fact from a member of the aircrew, that an aircraft is flown on instruments for most of the flight. At the very least, the facts that the tapes were extremely difficult to make out and that they could have picked up passenger conversation were vital elements left out of the Chippindale Report, and they call the evidence offered in the Report very much into question. Justice Mahon proceeded on to Farnborough where he listened to the same tapes over again in the company of Mr. Davis, Mr. Tench (Chief Inspector of Air Accidents for the United Kingdom), Mr. Shaddick and Mr. Baragwanath. Apart from Mr. Davis, who ventured no opinion whatsoever, the consensus among the other four was that the extracts were not intelligible enough to yield any reliable meaning.16 With respect to fragments which appear to have been relied upon by the chief inspector to indicate both dissatisfaction with the aeronautical decisions of the captain and mounting alarm, one such fragment was the following:
Captain Collins: Tell him we can make a visual descent descending
Unidentified: My G-d 17
The pertinent phrase, ‘My G-d’, presumably thought to have been uttered by a flight engineer, cannot in the opinion of Justice Mahon be interpreted as an expression of alarm at the decision of Captain Collins to advise McMurdo that he was about to make a visual descent. A flight engineer who was not in favour of such a decision would not content himself with an expostulation to the Deity but would presumably offer some reasoned objection. Another place in the transcript which is used as evidence for a measure of doubt on the part of the crew is the following:
? Where are we?
? About up to here now? [sound of rustling paper] 18
According to Justice Mahon there is no reason for the insertion of the question mark after the phrase ‘About up to here now’. It is the insertion of this question mark which would indicate uncertainty; its absence indicates the contrary. While there is no indication as to whom the flight engineer is speaking, it is clear that a map is being referred to. Thus, to whomever the flight engineer was speaking, he was pointing out where they were by reference to a map.19 Justice Mahon’s conclusion with reference to these statements found in the chief inspector’s report was that they were wholly unsubstantiated:
In my opinion none of these views expressed by the chief inspector in his report is substantiated, either by the transcript of the CVR, or by the process of listening to the tapes.20
If it is true, then, that the captain and the crew had no doubts at all as to where they were, why would they fly their airplane directly into the side of a mountain, thus killing everyone aboard? It would appear to have been an act of sheer lunacy. In fact, incredible though this may be, the Director of Civil Aviation, Captain E.T. Kippenberger, put forth exactly this claim. It is worth quoting verbatim the Director’s testimony as recounted by Justice Mahon: ‘The Director explained that ... he had formed the view that Captain Collins must have been suddenly afflicted by some medical or psychological malady which made him oblivious to the mortal danger looming in front of him. This was a most incautious remark.
Everyone in the courtroom immediately saw that if his theory were sound, the same insidious malady must simultaneously have affected not only the two pilots, but also Peter Mulgrew.’21 It is of interest to turn to the Mahon Report in this connection since if one does not credit the thesis of the false coordinates and the existence of the polar illusion (to be discussed below), then one is seemingly forced into adopting rather ludicrous explanations for what did occur:
It was stated by the Director of Civil Aviation that in his opinion the whiteout phenomenon did not exist in this case, or if it did exist, then it played no part in the accident. This of course required him to give some explanation as to why both pilots made coincidentally the same type of gross visual error. He suggested that each may have become afflicted by some mental or psychological defect which controlled their actions. This involved the startling proposition that a combination of physical and psychological malfunctions occurred simultaneously to each pilot. I was surprised to find that a person with the status of the director should advance a suggestion which is so palpably absurd.22
The answer to this dilemma is itself astonishing, and it is two-fold. The first piece of the puzzle is that the navigation staff of Air New Zealand had, on the eve of the flight, altered the computer track of the airplane 27 miles to the East of its original track, such that Flight TE 901 was electronically programmed on a collision course with Mt. Erebus.23 The second piece of the puzzle is that Mt. Erebus was completely invisible to the pilot and crew because of the whiteout phenomenon (discussed below). The whiteout phenomenon, however, cannot be taken as the primary cause of the disaster, because it would not have been a problem had the airplane been flying in accordance with its original computer track, which would have kept it 27 miles to the West of Mt. Erebus. In Justice Mahon’s analysis, ‘... what made the aircraft strike the mountain? In the end, it was the ocular illusion. But the weather would have meant nothing, and would not have affected the flight at all, had the computer track not been changed. For it was the alteration of the destination coordinates which diverted the DC10 from McMurdo Sound, where its onwards journey was harmless, to Lewis Bay where its onwards course could only lead to destruction.
To any ordinary mind, apprised of all the facts, the prime cause of the accident [sic] was clear. It was caused by the organisation and the planning at Auckland which changed the computer flight path without telling the crew’.(Erebus Papers, pp. 37-8)
VETTE’S TEXT
Captain Gordon Vette independently put together a book on the subject of the Erebus disaster in 1983 entitled Impact Erebus. Vette’s credentials for writing such a book are impeccable. According to the Hon. P.T. Mahon, Q.C., ‘Captain Vette ... had an impressive background not only as an international pilot flying DC10 aircraft but also as a specially selected training officer in the operation of such aircraft.’24 Captain Vette was a qualified navigator, an AA instructor, and was qualified to instruct both commercial and air force pilots to fly. He had, in fact, taught both Collins and R. T. Johnson (one of the briefing pilots) to fly.
MACFARLANE’S NOTES ON VETTE’S TEXT
In the context of the enormous complexities which attend this investigation, especially in light of conflicting testimonies, it is fortunate that a Senior Lecturer in Law at the University of Auckland (now retired), Mr. Stuart Macfarlane, has added an appendix to Gordon Vette’s Impact Erebus entitled ‘Notes on Text’. These notes were actually written independently of Vette’s Impact Erebus but nonetheless make an excellent summary of the entire case, and it is of help in clarifying the issues to follow Macfarlane’s breakdown of the main elements in the Chippindale Report in order to bring out his criticism, Justice Mahon’s critique and the present observations. Macfarlane’s approach to the Chippindale Report is to reduce it to five main propositions and to reply to each one in turn. For the purposes of this chapter, his outline will be followed in terms of the first four propositions; the fifth, the question of whiteout, will be treated in a separate section. For Macfarlane, the five propositions which make up the basis of the Chippindale Report are:
1. The change in the computer flight path from McMurdo Sound to Lewis Bay [which would take the aircraft over Mt. Erebus] did not mislead the crew
2. The crash was caused by the pilots descending beneath 16,000 feet contrary to the airline’s instructions
3. The crew was not certain as to their position
4. The aircraft’s radar would have depicted the mountainous terrain ahead
5. The captain headed the aircraft toward cloud-covered high ground appearing to the pilot as an area of limited visibility or whiteout.25
According to the Chippindale Report, in section 1.1.1, under the heading ‘Factual Information’, in preparation for Flight TE 901 the two ill-fated pilots were among those attending a pre-flight briefing: The briefing gave details of the instrument flight rules (IFR) route to McMurdo which passed almost directly over Mt. Erebus, a 12,450 ft high active volcano ...26
MACFARLANE’S REPLY TO 1. (THAT THE CHANGE IN COMPUTER FLIGHT PATH DID NOT MISLEAD THE CREW)
According to Mr. Macfarlane, ‘The paragraph is mistaken and mistaken in the vital matter which led to the crash. The briefing did not give details of an IFR [Instrument Flight Rules] route almost directly over Mt. Erebus. Quite the contrary. The briefing officer handed out a computer flight plan which showed the route passed down the centre of McMurdo Sound ... some 27 miles away from a Mt. Erebus route. Apart from the crew who were killed, [who obviously were not available for interview] the three surviving pilots [pilots who had been at the briefing but had not been aboard Flight TE 901] all had been given the impression by the briefing that the route lay down McMurdo Sound.’27 There is a later twist in this story which needs to be mentioned here. It proved to be impossible to recover any written documents relating to the content of the briefing. In the first place, the Chief Executive of Air New Zealand, Mr. M.R. Davis, ordered that all copies of the documents relating to this flight and all Antarctic flights were to be impounded, with the exception of one file to be kept in his office from which all irrelevant documents would be destroyed.28 Justice Mahon comments that since this was, at the time, the fourth worst disaster in aviation history, ‘... this direction on the part of the chief executive for the destruction of “irrelevant” documents was one of the most remarkable executive decisions ever to have been made in the corporate affairs of a large New Zealand company’.29 In the second chapter of this bizarre story, the briefing documents of First Officer Cassin (which were known, by the evidence of the flight despatch officer, to have been left at home) were recovered from his home on the day after the disaster by an employee of the airline and have never been seen since.30 In a further twist to this already twisted plot, the briefing documents of Captain Collins, which were known to have been taken by him on the flight, were also not recovered despite the recovery of his undamaged, empty flight bag.31 In fact, according to Mrs. Collins, her husband was an habitual note taker and he had a black ring-binder notebook which, upon her last sight of it, had all of its pages intact. However, upon separate recovery of this item, all the pages were missing.32 Furthermore, Mrs. Collins reported a burglary which occurred on March 29, 1980, in which someone had broken into her home and had taken only a file of documents she had kept relating to the disaster.33 Mrs. Collins also said that since the disaster she had not been able to locate her husband’s New Zealand Atlas and the collection of maps he kept on his bookshelf close by his atlas.34 Despite the overriding importance of the navigation procedures in this case, and in particular the change in the navigation co-ordinates (which altered the flight path), the inspector never interviewed the navigation staff of the airline.35 In the opinion of Justice Mahon, the alteration of the flight track is the crucial determining cause of the disaster:
The changing of the coordinates on the morning of the flight was the dominant cause of the disaster ...36
Consider, for a moment, the scenario envisioned by the chief inspector. The pilots are briefed that they are flying on a route which takes them directly over a mountain over 12,000 feet high. Their coordinates (as the pilots know) are programmed to take them directly over this mountain. This kind of Area Inertial Navigation System (AINS) is rarely off by more than 1 or 2 degrees on such a length of flight.37 Weather conditions are poor, at least in terms of what lies ahead. Given the length of time involved, the pilots would know that they must be directly in front of Mt. Erebus. At precisely this moment, without being able to see clearly ahead, a decision is made to descend, so as presumably to take them directly into where the mountain should be (although they cannot make it out). Such an explanation of what occurred makes the pilots out to be raving lunatics, and another explanation altogether is called for.
MACFARLANE’S REPLY TO 2. (THAT PILOTS VIOLATED INSTRUCTIONS NOT TO DESCEND BELOW 16,000 FEET)
Re: Descents beneath 16,000 feet
Among the seven reasons which Macfarlane adduces for why a descent under 16,000 feet was not strictly prohibited (thus concurring with Justice Mahon’s conclusions in his Report and in his book, Verdict on Erebus, and with Captain Vette), two are of immediate interest. First of all, an independent source, the McMurdo authorities, told the Inspector that previous flights flew at low levels, including as low as 500 feet.38 Secondly, eight line pilots (i.e., not members of the administration of Air New Zealand) said in evidence that they believed descent was permitted; one said that he did not believe he was so permitted. After consideration of all the evidence, Justice Mahon made the following comment in his Report:
In the first place the evidence makes it clear, in my opinion, that all Antarctica flights from and including 18 October 1977 involved a letdown in the McMurdo area to altitudes considerably less than 6,000 feet, and that in the main the flights down McMurdo Sound and across the Ross Ice Shelf to the south of Mt. Erebus were conducted at altitudes ranging from 1,500 to 3,000 feet.39
In addition, a full year prior to the disaster an article appeared in the airline’s newsletter, Air New Zealand News. The article recounted an earlier flight of TE 901, with Captain Keesing, at that time Director of Flight Operations International, aboard, cruising at an altitude of 2,000 feet past Antarctica’s Mount Erebus.40 During the hearings, it came out that while this newsletter was distributed to every staff member of the airline (and there were 8,700 at the time), both the Director of Flight Operations who had succeeded Captain Keesing and the Chief Executive denied having seen it.41 In fact, such low-flying was so common that it had been part of publicity material put out by the company itself and distributed to a million New Zealand homes.42 The airline had printed one million copies of the article, which the President of McDonnell Douglas had sent to the chief executive. Despite this, a series of executive pilots called by the company denied to a man that they had knowledge of low-flying. Justice Mahon’s response to this deserves to be given in full:
In the wealth of external evidence establishing that low-flying was not only common knowledge but common knowledge disseminated by the company itself, I believed myself compelled to reject the evidence of witnesses from Flight Operations Division that they never knew the actual operating altitudes.43
The reason for low-flying being so common was to facilitate the spectacular sight-seeing opportunities afforded by Mt. Erebus and to make photo taking a possibility. As Justice Mahon makes plain in Verdict on Erebus, ‘... any worthwhile photograph, taken without special equipment ... could only be effective at heights not exceeding about 2,000 feet’.44 The issue of the descent revolves around the navigation track question.
If the navigation track had not been altered and the flight had proceeded down McMurdo Sound, some 27 miles West of Mt. Erebus, such a descent would have been harmless. If the navigation track were altered and it was in the crew’s knowledge that they were in the immediate vicinity of Mt. Erebus, then such a descent would have been engaged in only as an act of madness or extreme folly. It stands to reason that the descent was made under the presumption that the navigation track was the one given at the briefing and that they were far away from Mt. Erebus at the time of the order for the descent.
MACFARLANE’S REPLY TO 3. (THAT THE CREW WERE UNCERTAIN OF THEIR POSITION)
As this has been dealt with extensively above, it is useful to interpose only one of Macfarlane’s points. He notes that it is clear from the American transcript of the CVR that, in the exchanges between the crew, Flight Engineer Moloney was monitoring the actions of First Officer Cassin as part of the normal crew loop procedure.45 This reinforces the view of the Mahon Report that the crew were not derelict in their responsibilities, and furthermore that, as they were monitoring, the decision to descend was apparently considered to be an inconsequential one (as if they were not in a position which would involve the aircraft and passengers in any danger). Furthermore, it lessens the probability of the psychological aberrance thesis of the Director of Civil Aviation, since in this case all five of the crew on the flight deck would have to have been suffering from severe psychological abnormalities. It should be noted that the normal procedure would have been for Captain Collins to have plotted on a topographical map the coordinates given to him at the briefing, and thus to have had a secondary means of monitoring his position rather than relying upon his navigation track alone. Both Captain Collins’ widow and his daughters testified that he spent the evening before the flight working on maps.46 Captain Collins, known as careful, conscientious and methodical in the performance of his duties, would hardly have overlooked the plotting of his coordinates the evening before his flight.47 Captain Vette thinks that the rustling of papers heard on the CVR must have been due to the need to keep refolding the charts as the flight progressed.48
In the trenchant phrase of Mr. Macfarlane, ‘Even his [Collins’] severest critics have not, to my knowledge, suggested Collins was reading a newspaper’.49 None of the items known to have been carried in Captain Collins’ flight bag was recovered.50 These items could have supported the proposition that Captain Collins had followed the wrong coordinates. Among the missing items listed by Justice Mahon are the following:
A map or maps he had been working on with plotting instruments the previous night
The atlas on which he had been plotting that night
The large topographical map issued to him by Flight Despatch the morning of the flight
The briefing documents
Diagrams which would have shown the flight path to be down McMurdo Sound [away from Erebus] 51
MACFARLANE’S REPLY TO 4. (THAT RADAR WOULD HAVE SHOWN EREBUS AHEAD)
According to the Chippindale Report, ‘The aircraft’s radar would have depicted the mountainous terrain ahead.’52 This statement is so powerful in its implications that it would seem to supersede all others made so far. Even if the navigation track had been altered so that the aircraft was on a collision course with Mt. Erebus, pilot error must at the very least have been co-involved, because of a failure to attend to the radar screen. Even if weather conditions (such as the whiteout phenomenon to be discussed below) were such as to make visual monitoring ineffective, the radar (were it functioning) would show the mountain ahead. What is so insidiously misleading about this key statement in the Chippindale Report is that the radar, whether functional or not, was not capable of picking up such mountainous terrain as Mt. Erebus. When the indefatigable Justice Mahon inquired of the Avionics Division of the Bendix Corporation, which manufactures the radar on DC10 aircraft, Bendix explained that since the slopes of Mt. Erebus were covered in dry snow and ice and the radar was designed to detect rain precipitation in cloud, the return from the mountain would have been nil.53
TAKING A PHENOMENOLOGICAL VIEW: THE WHITEOUT PHENOMENON
It is of especial interest for those interested in the relevance of applied philosophy to the investigation of disasters to take note of how, why, and when a phenomenological attitude might come into play and how it might be of use in sorting out such puzzles as arise in the investigation of a disaster case. In the case at hand, while the term ‘phenomenological’ (any more than ‘coherence’ or ‘simplicity’) does not appear in Justice Mahon’s works, it is clear from his account of the beginning of the consideration of the whiteout phenomenon that a phenomenological approach to the possibility of understanding experience is at work. While the term ‘phenomenological approach’ may be considered to have a wide range of meanings, in this case the most relevant meaning is that one considers exactly what one must be experiencing in order to act in a certain fashion, without presupposing any interpretation of the experience. Even before the Chippindale Report was published, according to Justice Mahon, Captain Vette began to inquire into why Captain Collins would have flown at low altitudes into the side of a mountain in broad daylight without even attempting to take evasive action. In his testimony before the court, Captain Vette explained the ‘fail-safe’ or ‘crew-cooperative’ training through which a crew as a whole (rather than a pilot) flies an aircraft. Any matter which is communicated to one member of a crew must be responded to. This is known as the ‘challenge’ procedure. Now, during the last few minutes of the flight, the two pilots, the flight engineers Brooks and Moloney, and Peter Mulgrew were all present on the flight deck. What is most amazing is that none of these individuals ever expressed any alarm or doubt as the aircraft closed in on the mountain which lay directly ahead of its flight path. To explain this one has only a few hypotheses: (i) the entire crew is completely insane or incapacitated; (ii) the entire crew is aware that there is possibly an obstacle in front of them but decides to take a terrific gamble with their own lives and those of their passengers; (iii) the entire crew falls victim to a complete mass hallucination and literally does not see either a mountain or the shadows of a mountain in front of them. Since hypotheses (i) and (ii) are unacceptable (they are mentioned here only for logical completeness; they are not considered by Captain Vette)54, the only possible solution is (iii). The only way to account for the behavior of a rational and highly skilled crew flying directly into a mountain is that not only do they not suspect that it is there; they do not see it at all. That one is a rational being, possesses the capacity to avoid a fatal obstacle, and nevertheless adopts a collision course with that obstacle only makes sense on the phenomenological approach to understanding the experience in question: the crew are acting exactly as a crew would act who truly perceive that no obstacle lies in front of them. While a total mass optical illusion is normally only a logical possibility, in the Antarctic it is in fact a very common occurrence. Such an occurrence is known as the whiteout phenomenon. In support of Captain Vette’s view an expert witness was called, Professor R.H. Day, Foundation Professor of Psychology at Monash University.
Professor Day, who possessed special qualifications with regard to human factors in aviation, emphasized that it was not the visual perception of one member of the flight crew that had failed but that of five persons.55 While a comprehensive explanation of the possibility of such an optical illusion might be overly technical, a succinct summary was encapsulated in the testimony of First Officer Rhodes, who was a qualified air accident investigator for ALPA:
The matt surface of the snow gives no depth perception even in conditions of fifty miles’ visibility and causes the wall of snow ahead to appear as a flat plateau with a distant horizon.56
274
SAVING HUMAN LIVES
The matter of the ocular illusion was brought up again by Justice Mahon and Mr. Baragwanath in their visit to the United States. They interviewed Captain Arthur P. Ginsberg of the U.S. Air Force, a visual perception expert in the field of military aviation, and Mr. G. W. Shannon, a Canadian airline executive and pilot with extensive polar experience; both confirmed the testimony of First Officer Rhodes. In fact, Mr. Shannon, having flown in the McMurdo area, added the interesting point that in his opinion Captain Collins had elected to fly away mainly because he could not see anything in front of him, whereas by this time he should have been able to see the buildings of McMurdo base (had he been where he thought he was).
PHENOMENOLOGICAL APPROACH: TAPES There is yet another occasion where a phenomenological approach seems to have been employed by Justice Mahon and that is in his listening to the tapes. As there was no element of urgency or anxiety in the discussion between the two pilots as to which way they should turn the DC 10, a phenomenological explanation of why no urgency or anxiety is detectable in their voices is simply that the two pilots saw no reason to be alarmed. An interpretive approach (such as that found in the Chippindale Report) would have it that the two pilots saw something ahead and therefore were becoming alarmed by what they saw. This requires that the sense of alarm and urgency be read into the tapes where none appears. If one accepts the evidence of the tapes (the parts that are intelligible) without attempting to interpret it to force it to meet some theory (in the Chippindale Report: to fit the theory of pilot error), then the pilots are not at all alarmed by anything in front of them. This means that either they see the mountain (or what could be a mountain) in front of them but are uncannily calm and in kamikaze fashion decide to plow right on into it or they are so busy playing cards (or whatever) that they neglect to look in front of them. The most plausible explanation of their lack of alarm is that they are looking in front of them but they literally do not see anything to
THE DISASTER ON MT. EREBUS
275
provoke their alarm and that is why no alarm is registered in their voices. This is the only hypothesis that makes sense as to why they are carrying on a casual conversation while the DC 10 flies on at 260 knots into the mountain slope 3,500 yards directly ahead.57 The hypothetico-deductive method of thought begins with an hypothesis and in its deficient mode attempts to make the data square with the hypothesis. The hypothesis is that the crew knew that they were in the Erebus vicinity and could not see clearly ahead of them. Speculation then provides possible reasons as to why they cannot see: they are not looking; they are suicidal. The phenomenological approach begins not with what the pilots do not see or why they do not see but rather with what they do see. What the pilots do see is an uninterrupted horizon in front of them. It is not a case of their not seeing what is there due to some inattention or incapacity of theirs or an obstruction which is present. They do see what is there in their field of perception. What is there in their perception (the phenomenological field) is an uninterrupted field of vision. They clearly see ahead what there is to be seen, namely, nothing. The conditions of perception are such that they do clearly see what is there to be seen. However, an optical illusion is present which completely blocks out what is there in reality. For a commonplace example of the phenomenological field of vision one may consider that in one’s perception one sees a straight branch as bent when it is under water. In this case, one is truly seeing what is available for one’s perceptual field. However, as the branch is not really bent, one is suffering from what is commonly referred to as an optical illusion. But one is not mistaken about what one is in fact seeing; it is only that what one is seeing is not what is in fact really there.
In the case of the polar illusion one also sees what is available to see in terms of one’s perceptual field; e.g., a clear horizon. It is only that in this case the horizon is not in fact clear but contains a mountain. This, however, is not due to a mistake in vision on the part of the crew or to faulty inspection of the environment. They do in fact see what is there to be seen. Thus, there is no question of any pilot or crew error being involved.
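To make vivid how little margin the illusion left the crew, the speed and distance figures quoted above (260 knots, a slope 3,500 yards ahead) can be turned into a closure time. This is a simple unit-conversion sketch using only the text's figures, not anything drawn from the accident record:

```python
# Closure-time check for the figures quoted in the text:
# an aircraft at 260 knots, with a mountain slope 3,500 yards directly ahead.
METERS_PER_NMI = 1852.0    # metres per nautical mile (exact)
METERS_PER_YARD = 0.9144   # metres per yard (exact)

def seconds_to_obstacle(speed_knots: float, distance_yards: float) -> float:
    """Seconds until an aircraft at the given speed covers the given distance."""
    speed_ms = speed_knots * METERS_PER_NMI / 3600.0  # knots -> metres per second
    return distance_yards * METERS_PER_YARD / speed_ms

print(round(seconds_to_obstacle(260, 3500)))  # about 24 seconds
```

On these figures, well under half a minute separated the quoted position from impact, which underscores why the complete absence of alarm in the crew's voices is so telling.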
THE COHERENCE THEORY OF TRUTH In this case study there has been a tendency to concentrate on the positive evidence in support of the thesis that management error lay behind the Erebus disaster and to take a critical view of the evidence that the disaster was the result of pilot error. This is natural in view of the fact that this seems overwhelmingly to be the correct view. Notwithstanding this, however, there was in the course of the Mahon investigation much conflicting testimony which, to avoid needless complexity, has not been discussed in great detail. It is of interest therefore to put forth a very general theory of how to account for conflicting evidence (even if suspect) in arriving at one’s final conclusions. It is a credit to Justice Mahon that in his book, Verdict on Erebus, he introduces a classical philosophical criterion of truth in order to sift through the conflicting testimony, namely, a combination of the criterion of simplicity and the coherence theory of truth. While he does not label his criterion as such, it is indubitable that the criterion he does employ is none other than a combination of simplicity and coherence. It is thus fascinating that an investigation into a disaster is supported by an application of philosophical techniques which in the end prove to be an indispensable feature of how one arrives at a sound final conclusion as to what happened.
In his chapter, ‘My final conclusions on the evidence’, Justice Mahon states, ‘When I came to appraise the whole of the altitude and navigation evidence, I calculated the total of the basic errors, whether of omission or commission, which on the basis of the airline’s case, had been made by various people in the airline who were all experts in their particular fields, I found that fifteen witnesses had made between them thirty-five errors, and I was quite unable to accept that all of these mistakes could possibly have been made’.58 This could be construed to be a version of the coherence theory of truth since what Justice Mahon is saying in effect is that in order for the airline’s case of pilot error to be made, one must assume a version of reality to hold in which thirty-five mistakes in testimony must have been made. While this is logically possible, Justice Mahon is assuming that it is more likely that the correct version of what happened did not require thirty-five mistakes to have been made. In other words, to accept the
reality version of the Chippindale Report, a more incoherent view of reality is required. This can also be seen as a simplicity theory of truth in that ceteris paribus the theory that requires the smaller number of ad hoc hypotheses is to be preferred. In order that the case put forward by the airline (that it was a case of pilot error) be considered correct, Justice Mahon refers to Macfarlane’s appendix to Vette’s Impact Erebus, part of which was employed above to structure the responses to the Chippindale Report, and counts a total of fifty-four errors which would have to have been made by Flight Operations personnel in connection with the navigation question alone in order that the case put forward by the airline retain its credibility. Macfarlane counts 177 errors that would have had to have been made concerning height alone.59 (Justice Mahon counts multiple “mistakes” by one witness as one mistake whereas Macfarlane counts each separate “mistake” as a mistake on its own.) If one adds to these lists the “mistakes” about not revealing what whiteout was, the number grows further. Finally, in assessing the general, overall credibility of the airline’s case for pilot error, Justice Mahon decided in the most controversial portion of his ruling that the airline’s case was composed of a tissue of lies: ... the palpably false sections of evidence which I heard could not have been the result of mistake, or faulty recollection. They originated, I am compelled to say, in a pre-determined plan of deception.
They were very clearly part of an attempt to conceal a series of disastrous administrative blunders and so, in regard to the particular items of evidence to which I have referred, I am forced reluctantly to say that I had to listen to an orchestrated litany of lies.60 Despite the brilliance of Justice Mahon’s application of the coherentist theory of truth, this part of his findings has been overturned by subsequent court findings which found his reasoning to be either in violation of certain legal technicalities or self-contradictory, or both. While these later court findings in no way took exception to Justice Mahon’s finding that pilot error was not
responsible for the disaster, they wished neither to indict management for its sloppy administrative procedures nor to accept his further interpretation of the false evidence as existing as part of a strategic plan of deception.61 It is, however, unfortunate that so much attention has been placed on the reversal of this aspect of Justice Mahon’s findings such that management’s culpability for the disaster has been lost sight of. From a philosophical point of view, the reasoning that Justice Mahon employed to find management guilty of a conspiracy to cover up what had happened was impeccable and it made sense out of the testimony that he heard. It did not detract from the power of his reasoning process and was in no wise self-contradictory. It is unfortunate, therefore, that later court findings did employ the adjective ‘self-contradictory’ to describe Justice Mahon’s reasoning.62 Mr. Macfarlane argues that Justice Mahon’s findings on credibility were in fact directly relevant to his causation finding since his causation finding was based on his credibility finding. By claiming that Justice Mahon had violated certain legal principles and engaged in self-contradictory reasoning in his finding of the lack of credibility on the part of certain witnesses, the later courts do not accept the finding of the lack of credibility. By not accepting the finding of the lack of credibility, however, they implicitly accept the evidence as trustworthy. But the acceptance of the evidence as trustworthy has a direct relevance to Justice Mahon’s causation verdict by making his causation verdict untenable. While it may not have been necessary to emphasize the ‘orchestrated litany of lies’ to prove management culpable, Justice Mahon did need that portion of his verdict holding that there was a ‘litany of lies’.64
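Justice Mahon's criterion can be stated almost algorithmically: of two rival accounts of the evidence, accept the one that obliges you to posit fewer independent errors in the testimony. The toy sketch below illustrates that comparison, using the count of thirty-five errors quoted above for the airline's pilot-error case; the function name and the zero for the rival account are illustrative assumptions made only to display the structure of the argument:

```python
# Simplicity/coherence criterion as a comparison: each rival account is
# paired with the number of independent witness errors it must posit in
# order to square with the evidence; the account positing fewest wins.
def preferred_account(posited_errors: dict) -> str:
    """Return the account requiring the fewest posited errors."""
    return min(posited_errors, key=posited_errors.get)

accounts = {
    "pilot error (airline's case)": 35,  # Mahon's count of basic errors required
    "management error": 0,               # illustrative contrast: no comparable errors posited
}
print(preferred_account(accounts))  # prefers the management-error account
```

The point is not the arithmetic but the form of the inference: a version of reality that requires thirty-five independent mistakes to hold together is, ceteris paribus, less coherent than one that requires none.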
MISMANAGEMENT The instructive thing about the Erebus disaster is that it was preventable. What is important is to know in detail how to prevent such a disaster from happening again. What can be learned from the investigation of the Erebus disaster is that it was due to a management blunder. But it is not enough to leave it at that. One
must go further to analyze what weaknesses in structure and communication allowed such a blunder to happen. Charles Perrow refers to the Erebus disaster in three places in Normal Accidents, and seems to arrive at three or four different conclusions as to the cause of the disaster, the second of which appears to be the ineptitude of Air New Zealand. But he does not go on to specify wherein the ineptitude consisted. In his first conclusion concerning the causation of the Erebus disaster, he states that, ‘In one awful accident [sic], the headings entered into the autopilot were changed at the last moment before a flight without informing the captain, and the airliner flew into a mountain in Antarctica’. This mode of reference, however, could leave one with the impression that this is but another case of human error - in this case possibly the error of the computer operator (rather than any management practice) who failed to inform the appropriate parties of the programming change. As a case of human error, this becomes a species of the general problem of the impossibility of eradicating all human error or the inevitability of error. This is precisely the wrong conclusion to reach since this particular error was symptomatic of a badly managed organization.65 What is necessary is to specify exactly what correct management procedures would have had to be in effect in order for such a mistake not to have been made. Or, since one cannot guarantee the nonexistence of mistakes, what is of greatest importance is to ask what set of self-corrective management procedural rules should be installed. What can be said is that if the correct set of management procedural rules had been in effect a few days before the Erebus disaster took place, it is highly unlikely that it would have occurred.
What is important for the purposes of this discussion is the recognition of two facts: first, that the Erebus disaster was due to mismanagement; secondly, that it must be spelled out exactly in what way mismanagement was at fault and how to correct the possibility of such mismanagement occurring in the future. It is important to note that in so doing what is being said here is that sound management practices could have prevented the Erebus disaster from occurring and can prevent disasters arising from like causes in the future.
THE CAUSE OF THE DISASTER As has been indicated above, according to Justice Mahon, the overriding cause of the disaster was the airline’s alteration of the computer track without notifying the aircrew. Justice Mahon states this conclusion of his in a number of places within the official report including the following: ‘... I am able to reach a decision as to whether or not there was a single cause of the disaster. In my opinion there was. The dominant cause of the disaster was the act of the airline in changing the computer track of the aircraft without telling the aircrew.’66 The polar whiteout condition cannot be taken as the primary cause according to the reasoning of Justice Mahon (which appears to be impeccable) because had it not been for the alteration in the navigation track it would have been completely irrelevant whether or not there was a whiteout phenomenon to be experienced at Mt. Erebus; without that alteration of the nav track, TE 901 would not have been anywhere near Mt. Erebus. From Justice Mahon’s reasoning, it is clear that, in the context of isolating the one dominant cause of the disaster, that cause was a failure of communication. This is unlike the third conclusion reached by Charles Perrow which is that ‘... It took the combination of these two independent “failures” [the whiteout and consequent failure to see what was there and the failure to communicate the change in the coordinates] to produce the frightful accident [sic] of flying into a mountain on clear day [sic].’67 By stating that there were two causes side by side and not distinguishing these causes in the order of importance, Perrow has given the reader the false impression that the weather was as much at fault in this affair as sloppy and unethical management. Such a lack of specification of the dominant cause not only demonstrates a lack of logic but obfuscates the real cause of the disaster by including the weather as a co-primary cause.
The whiteout that occurred would have been perfectly harmless had Captain Collins been flying where he thought he was flying, 27 miles safely away from Mt. Erebus. The changing of the coordinates was the initiating cause of the disaster. It could be argued, of course, that if there had been no Sector Whiteout, Captain Collins would have been able to see Mt. Erebus ahead. But this is not the point.
There could have been a temporary loss of oxygen in the cabin and in those moments the crew could have plowed into Erebus. But if they were safely out over McMurdo Sound, a temporary lack of oxygen would not have caused a crash. In addition, the lumping together of the weather and management’s failure to communicate as if they were of equal causal status also adds further fuel to Perrow’s thesis that “accidents” are unavoidable, since what could be more unpredictable than the weather? In fact, what is of special importance here is that the effects of whiteout were known about and could have been communicated to Captain Collins. On February 4, 1977, Captain Ian Gemmell, the Chief Pilot of Air NZ, had accompanied Captain Grundy and one of the Civil Aviation airline inspectors to an Antarctic briefing given by the U.S. Navy at the Christchurch headquarters of the Navy. (Verdict on Erebus, p. 82) It appears as if Gemmell told no one of the U.S. whiteout briefing. (Erebus Papers, pp. 113-114) Thus, once more, an adequate warning could have alerted the pilot to such an extent that the disaster might well have been averted. It should also be mentioned that whiteout is not simply the occurrence of a rare weather phenomenon. It is the occurrence of a very common weather phenomenon in the Antarctic. The identical weather conditions repeated themselves when Justice Mahon flew to Mt. Erebus to reconstruct Captain White’s flight. (Impact Erebus III Video) The problem with Perrow’s “unavoidable accident” thesis is not only that it is not well supported by solid reasoning in this case but also that it becomes a self-fulfilling prophecy, influencing those who are persuaded of it not to attempt to search for the real causes and thus not to attempt to prevent disasters, since once more nothing can be done to prevent disasters.
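The asymmetry the argument insists on can be put as a but-for (counterfactual) test: the unnotified track change is necessary for the crash in every scenario, whereas whiteout is only one of several conditions (oxygen loss, inattention) that could have kept the crew from seeing the mountain. The toy model below is a deliberate simplification built to display that asymmetry, not a causal analysis of the actual flight:

```python
# But-for test behind the dominant-cause argument: the aircraft can
# strike Erebus only if it is actually near Erebus (the unnotified
# track change) AND something prevents the crew from seeing the
# mountain in time (whiteout, oxygen loss, or any similar condition).
def crash(track_altered_unnotified: bool, view_obscured: bool) -> bool:
    return track_altered_unnotified and view_obscured

# Whiteout without the track change is harmless: the aircraft is
# 27 miles from the mountain, so no obscuring condition matters.
assert not crash(track_altered_unnotified=False, view_obscured=True)
# With the track change in place, any one of several obscuring
# conditions completes the disaster -- which is why the change,
# not the weather, is the dominant cause.
assert crash(track_altered_unnotified=True, view_obscured=True)
print("but-for test: the track change is necessary in every crash scenario")
```

In this schema the weather is substitutable (any obscuring failure would do) while the track change is not, which is precisely the sense in which Justice Mahon's single-dominant-cause finding resists Perrow's two-cause reading.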
If one takes the alteration of the navigation track of TE 901 to the wrong coordinates without communicating that change to the crew to have been the original cause of the disaster, there is a certain likeness between this case and that of the Challenger. Had it not been for the rushed decision to launch the Challenger at a time when temperatures were dangerous for the proper closure of the O-rings, the possibility of the O-rings malfunctioning would not have been pertinent. They came into play only because of the decision to launch at the wrong time. The decision to launch at the wrong time in turn came about in part because of a dysfunctional management
procedure for decision making. The two-cause (technical-organization) analysis of the Challenger is as flawed as Perrow’s two-cause (weather-organization) analysis of the disaster on Mt. Erebus. [Perrow’s two-way cause theory receives a new twist in Diane Vaughan’s 1997 treatment of the Challenger in which she argues that the O-rings were not the cause at all, but rather the culprit was the weather - wind shear that dislodged the O-rings that had not been sealed because of the charring. What she fails to realize is that the wind shear would have had no effect upon normally sealed O-rings, but had an effect on the O-rings only because of the fact that they had malfunctioned.] (Vaughan 1997, p. 83) And in both cases the two-cause analysis masks the real nature of management error as the single dominant cause. That this single dominant cause is itself comprised of many elements is a separate issue. Of course, many elements from within the management organization have contributed to the existence of the flawed management system. For example, a lack of self-understanding on the part of top management of the relationship between crew and passenger safety and sound management practices is a key element within the complex of causes that make up the single dominant cause of management error. But before one can analyze the makeup of the single dominant cause, one must first arrive at the conclusion that the single dominant cause is management error and not pilot error, weather conditions, operator error and the like. One can, of course, consider the case of the operator error in inserting the wrong coordinates as a major cause of the disaster. But such errors are not always avoidable.
If solid management practices had existed to minimize the effect of such errors - such as logging the alteration, immediately passing on the information to the pilots of the flight in question, making certain that a confirmation was received from the pilots in question that the alteration had been received before approval for the flight to depart could be given, and so on - the error of the computer operator would not have had any harmful effect. If the pilots had only been informed of the change in coordinates, this alone would have prevented the disaster. The crew themselves cannot be faulted for not checking the coordinates before they were typed into the onboard computer because they would have had no reason to think that there had been any change in the flight plan as there was no entry besides ‘flash ops’
which is the place for the crew to check for any late change in coordinates. Otherwise, as Justice Mahon states, ‘ ... one of [the crew] ... simply typed into the aircraft computer the long list of coordinates and headings (containing a total of 247 scattered digits) without noticing that the destination longitude had been altered by two digits. It would not have crossed the mind of any pilot or flight engineer that the flight plan, stored permanently in the airline’s ground computer, would be amended without the flight crew being notified in writing’. (Erebus Papers, pp. 37-8) In the case of the disaster on Mt. Erebus, the first question that one needs to ask at this time is, what faulty (or absent) management procedures led to the insertion of the wrong coordinates into the navigation track system, or, what absent or faulty procedures made such an insertion undetectable and hence unchangeable? In Justice Mahon’s language, the mistake (of the false programming of the aircraft) was, ‘... directly attributable, not so much to the persons who made it, but to the incompetent administrative airline procedures which made the mistake possible. In my opinion, neither Captain Collins nor First Officer Cassin nor the flight engineers made any error which contributed to the disaster, and were not responsible for its occurrence’.68 Can it be specified which management fundamentals were not being observed as evidence of a dysfunctional management system? For one thing, there was no organization chart clearly setting out defined areas of responsibility and authority (familiar story?). But the problems seem to have run far deeper than this. Mahon divides the administrative weaknesses into two areas: structure and communication. It was illuminating to discover that Justice Mahon selected two of the same categories for classifying management mistakes that are chosen in this present work as categories for classifying management weaknesses.
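The self-corrective procedures sketched above (log the alteration, transmit it to the crew, withhold departure approval until receipt is confirmed) can be rendered as a simple dispatch gate. All class and method names below are illustrative assumptions, not a description of any real airline dispatch system:

```python
# Sketch of a dispatch gate: every amendment to a stored flight plan is
# logged in writing, and clearance to depart is withheld until the crew
# has confirmed receipt of every logged amendment.
from dataclasses import dataclass, field

@dataclass
class FlightPlan:
    flight: str
    amendments: list = field(default_factory=list)   # written log of every change
    acknowledged: set = field(default_factory=set)   # log indices confirmed by crew

    def amend(self, change: str) -> int:
        """Record a change in writing; return its index in the log."""
        self.amendments.append(change)
        return len(self.amendments) - 1

    def crew_acknowledge(self, index: int) -> None:
        self.acknowledged.add(index)

    def cleared_for_dispatch(self) -> bool:
        """Depart only when every logged amendment has been confirmed."""
        return self.acknowledged == set(range(len(self.amendments)))

plan = FlightPlan("TE 901")
i = plan.amend("destination waypoint coordinates altered")
print(plan.cleared_for_dispatch())  # False: an unacknowledged change blocks departure
plan.crew_acknowledge(i)
print(plan.cleared_for_dispatch())  # True: the crew has confirmed the change
```

Under such a rule the coordinate change could not have gone unnoticed: the flight would simply not have been cleared until the crew had confirmed it.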
DEFECTS IN ADMINISTRATIVE STRUCTURE (i) Lack of a permanent executive pilot staff or an adequate accountability system in lieu of this. Operational and executive pilots
overlapped. As a result, while an executive pilot was off on operational flying, there was no official system of recording what had happened in his absence. For a graphic illustration of the Court’s concentration on management basics, it is worthwhile to quote verbatim from the Mahon Report that, ‘... there appeared to be no filing system which could tell an executive pilot exactly what had happened within his jurisdiction while he was away’.69 (ii) There were no written directives from the Flight Operations Division specifying the duties or responsibilities of any particular executive pilot (an almost identical problem lay at the seat of the Zeebrugge disaster, as has been noted in the discussion of the Herald).70 (iii) There was no written directive addressed to the Navigation Section or the Computer Section or the Flight Despatch Section specifying what steps must be taken to transmit adjustments to flight plans and navigation procedures. In other words, if a change were made in a flight plan, it would seem that it would have been especially prudent to inform a list of pertinent authorities of this alteration, in particular, of course, the Captain of the departing aircraft. What is of special interest to note with regard to the particulars of this disaster under study is that there was no direction given to the Flight Despatch Section to maintain a written record of what was contained in the ‘Antarctic Envelope’ or copies of what was in that envelope which was handed to the crew before flight.71
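Defect (i) above, the absence of any filing system telling a returning executive what had happened in his jurisdiction, amounts to the absence of a queryable written record. A minimal sketch of what such a record would provide follows; the class, the jurisdiction label, and the dates are illustrative assumptions only:

```python
# Sketch of the missing filing system: every decision is recorded with
# its date and jurisdiction, so an executive pilot returning from
# operational flying can retrieve everything decided in his absence.
from datetime import date

class JurisdictionLog:
    def __init__(self) -> None:
        self._entries = []  # (date, jurisdiction, decision) triples

    def record(self, when: date, jurisdiction: str, decision: str) -> None:
        self._entries.append((when, jurisdiction, decision))

    def while_away(self, jurisdiction: str, start: date, end: date) -> list:
        """Everything decided in a jurisdiction during a given absence."""
        return [d for (w, j, d) in self._entries
                if j == jurisdiction and start <= w <= end]

log = JurisdictionLog()
log.record(date(1979, 11, 14), "Antarctic operations",
           "route coordinates amended in ground computer")
decisions = log.while_away("Antarctic operations",
                           date(1979, 11, 1), date(1979, 11, 28))
print(decisions)  # the amendment is on record, not left to vague recollection
```

The contrast with the airline's practice is the point: a decision that exists only in someone's recollection cannot be audited, queried, or caught by a returning supervisor.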
DEFECTS IN ADMINISTRATIVE COMMUNICATIONS SYSTEM The Administrative Communications system would be best classified as dysfunctional, as it was as marked by the absence of any clear directives or checking procedures as was the administrative structure. In general it can be said that the most conspicuously absent feature of a sound communications system was the lack of any documentary evidence
as to either administrative decisions that were reached or communications that had been made. Justice Mahon specifies one case in particular which amply demonstrates this point. Captain Keesing, as Director of Flight Operations, had submitted an operational scheme for the initial Antarctic flights, including a minimum safe altitude, to the Civil Aviation Division, which approved his recommendations. Unknown to Captain Keesing, Captain Gemmell (his subordinate) made a separate arrangement with the Civil Aviation Division which involved a totally different minimum safe altitude (shades of the Challenger disaster!). Captain Keesing knew nothing about the separate agreement with the Civil Aviation Division until after the disaster.72 To avoid undue prolixity, it is wise to concentrate on one single instance of dysfunctional management communications. There seems to have been an almost total lack of any written records of management decisions and communications. In fact, when Justice Mahon called the CEO as witness and raised with him the issue of unrecorded communications, both intra- and inter-division, with respect to the safety of flying operations, ‘The chief executive said he controlled the airline on a verbal basis.’73 The CEO stated in his own defense that ‘... many large companies were controlled on this basis.’74 Of course, the relevant issue here is that an airline company takes in its hands the lives of the many people who are its passengers and its crew. There was not even a written report by the CEO to the company’s board, a fact that Justice Mahon could scarcely find credible.75
SUMMARY OF MANAGEMENT DEFECTS By this point, it was certainly clear to Justice Mahon, as it must be to anyone, that there was a virtual absence of any proper day-to-day administration of a flight management system at work with regard to the Flight Operations Division. To summarize, there was no organization chart setting out clearly defined areas of responsibility and authority; the communications system was largely verbal and virtually devoid of any written recording
methodology; and there was a lack of administrative continuity. As a result, Justice Mahon went further than his reflection that a management error was the chief cause of the Erebus disaster to state that it was essentially a systemic (i.e., not a single) management dysfunction which was the originating and continuing cause of the disaster. In other words, while the alteration of the coordinates was certainly the effective cause, that, too, was actually a symptom of the systematic absence of proper management guidelines. It is of utmost importance to notice the supervening place communications takes in Justice Mahon’s account of management behavior. It was not exactly the alteration of the coordinates that was the problem (people can, of course, make mistakes), but the failure to notify the flight crew of the alteration. One cannot, of course, prevent a possible error in calculation. But one can require that all changes be communicated for further checking. Thus, Justice Mahon has put his finger on the real cause, which is not simply the insertion of the incorrect coordinates, but a failure in communicating properly about it: ‘The omission to notify the flight crew of the change in the computer track was, of course, an appalling error. It was the original and dominating factor behind the disaster.’76 But even this gross error was only a symptom of a general lack of a communications system. In Justice Mahon’s words, ‘It is clear enough that the original and continuing cause of the accident was a breakdown of the systems organisation of the Flight Operations Division of the airline’.77 However, having said that, Justice Mahon does not consider that it would be proper to extend responsibility for this lack of system to the Board of Directors in general. In his words, ‘No board member could be expected to investigate the day-to-day administration of flight operations’.78 Here, one could demand more from board members than Justice Mahon demands.
While the Board could not oversee details of everyday operations, it seems that they could have raised pertinent questions of the CEO from time to time. For example, the Board would obviously have known of the CEO’s predilection for running the corporation entirely on verbal commands without any record keeping system. Such behavior should not have been countenanced in the case of a corporation which daily takes direct responsibility for many hundreds of lives. Apart from this area of disagreement with Justice Mahon, his conclusion that it was the
lack of communications which in the end was responsible for the total breakdown of the management system - or what was really the absence of a management system - is inescapable. In Mahon’s words, ‘... in looking into the communication lapses which led to the disastrous mistake over the co-ordinates, I have been confronted at every turn with the vague recollections of everyone concerned, unsupported by the slightest vestige of any system of recorded communication and of course it was this communications breakdown, which in turn amounts to a systems breakdown, which is the true cause of the disaster’.79
THE LACK OF ANY SAFETY ETHOS While Justice Mahon did not point to the need for a general safety ethos, one can also ask why this breakdown in communication and structure both existed and was allowed to continue to exist. One can simply fault management practices and let it go at that. But one can also tackle the issue from the standpoint that there was no organizing corporate safety ethos. If there had been a strong and pervasive safety ethos, it is very likely that greater attention would have been given to making sure that the absence of proper management practices would not have been allowed to exist in the first place. If there had been an organizing principle, then it would have been far more likely that rules would have been implemented to actualize the principle. In the absence of such a guiding principle, there was no driving force to establish the correct rules.
TOP DOWN RESPONSIBILITY It is evident that the Erebus disaster is a classic case of a disaster resulting from mismanagement. The notion of looking to human error, whether in the case of pilot error or of the computer operator who programmed in the wrong coordinates, is clearly shown to be wrongheaded. The real error lay in the nearly total lack of a proper
system of organization. One can point to an indeterminate number of acts which led to the wrong coordinates being fed into the computer and TE 901 being allowed to take off with its nav track programmed on a route unbeknownst to its pilot and crew. But behind all of these separate acts and non-acts lay the lack of any policy of coordination. In the words of Mr. Baragwanath: ‘While the accident had no single cause, the series of factors giving rise to the accident are overwhelmingly due to the absence of an adequate company organisation’.80 But the question still remains open: who is responsible for such an abysmal and dangerous lack of organization? While it can be said that anyone in a position of responsibility could have raised the issue, it certainly would seem to have been the responsibility of the CEO. The CEO was very anxious, understandably, in the course of the investigation to place the blame for the disaster on the pilot and crew and to attempt to avoid any inference that management procedures were at fault.81 This attempt to free management from any responsibility, which even went to the extreme of directing that all documents relating to this flight were to be impounded and the “irrelevant” ones were to be placed in the company’s shredder without copies, was extraordinary.82 But all of these actions, in spite of French’s extraordinary justification, only confirm the culpability of management, which in turn is ultimately the responsibility of at least the CEO.83 It is an open question whether top down responsibility ends simply there. Could the members of the Board have been completely ignorant of the management style of their CEO? Here, it would appear that they must also be held accountable, if not for the specific actions then at least for the fact that they had given responsibility to a CEO who managed without paying the right sort of attention either to sound management fundamentals or to a safety ethos.
It must be considered deeply whether the Board can be considered totally exempt from responsibility for the appointment of someone who took a completely carte blanche view of management. One must consider deeply the appointment and recall criteria for CEOs and remember the most basic adage of this book, that the buck stops everywhere, while keeping in mind that the issue here is not simply exposing who is at fault (in this case management); the issue remains what can be done to redress management mistakes.84
NOTES
1. Justice P.T. Mahon, Report of the Royal Commission to inquire into The Crash on MOUNT EREBUS, ANTARCTICA OF A DC10 AIRCRAFT operated by AIR NEW ZEALAND LIMITED, Wellington: P.D. Hasselberg, Government Printer, 1981. Judge Mahon’s Report was not tabled (recognized as an official government document) in the New Zealand Parliament until August of 1999. (This astounding fact was learned in private conversation with Maria Collins at her home in Auckland, New Zealand on 13 July 2001).
2. Gordon Vette with John Macdonald, IMPACT EREBUS TWO, Auckland: Aviation Consultants Ltd., 2000.
3. Justice P.T. Mahon, Verdict on EREBUS, Auckland: William Collins Publishers Ltd, 1984, p. 23.
4. Vette, p. 9. Captain Vette related in a private interview with the author that Collins, a personal friend of his, was so meticulous that he would not allow passengers to set sail in a boat without first checking all safety equipment. Mrs. Collins confirmed this in a later interview when she said that, ‘If the book said, thou shalt do so and so, he did so and so’. Mrs. Cassin, widow of the co-pilot Greg Cassin, has since gone blind and because of a very short-lived marriage has lost her accident compensation.
5. Mahon Report, p. 41. The view of pilot error as the cause found support in an article written by Captain William B. Mackley (Retd.), ‘Aftermath of Mt. Erebus’, in Flight Safety Digest, September, 1982, pp. 1-7. This appears to be the continuing view of Air New Zealand according to French, cf. ‘The Principle of Responsive Adjustment in Corporate Moral Responsibility: The Crash on Mount Erebus’, Journal of Business Ethics, 3, 1984, p. 111. This is supported by the fact that Air New Zealand has not, according to French, altered its management policies. Cf. ibid., pp. 109-110. (French’s claim is inconsistent with reports by Vette in his video, Impact Erebus Two, 2000). It is of interest to note that at one point Captain Mackley was flight safety advisor, international, for Air New Zealand. If Captain Mackley was taking a biased view favoring management, it would call into question the value of relying upon a safety officer’s intervening opinion in impending disaster situations. For this reason, no one recommendation proposed in this volume is sufficient, but it is hoped that if all of the recommendations proposed are adopted, a checks and balances system will exist such that disasters can, for the most part, be prevented even if one link in the preventive chain is weak. The point emphasized in this work is that an entire safety ethos must exist which is encouraged to manifest itself through the emphasis on the buck stopping
everywhere. A safety officer who viewed his role as rubber stamping management policy would not be participating in a general safety ethos. The appointment of such an officer in the first place would be the appointment of a management that did not really intend to honour a safety first priority.
6. R. Chippindale, AIRCRAFT ACCIDENT Report No. 79-139, Wellington: Office of Air Accidents Investigation, Ministry of Transport, 1980, p. 53.
7. Chippindale, p. 48.
8. Chippindale, p. 49.
9. Chippindale, p. 52. For further misrepresentations contained in the Chippindale Report, which, in the end, were also contained in the Woodhouse portion of the Court of Appeal judgement discussed in n. 61 below and the Privy Council judgement discussed in n. 62 below, cf. Stuart Macfarlane, The Erebus Papers, Auckland: Avon Press, Ltd., 1991, pp. 507-509.
10. Mahon Report, p. 36.
11. Mahon Report, p. 36.
12. Mahon Report, pp. 36-7.
13. Mahon Report, p. 37.
14. Mahon Report, p. 37.
15. Mahon Report, p. 38. In a private interview at the University of Canterbury in Christchurch, New Zealand with the present author on 3 May, 2001, Captain Vette, who had flown with Brooks and had heard the CVR tapes, made the following comments. “From his position as the flight engineer, Brooks would see the milk bottle effect where all looks whitish but you know you are seeing something – this is what is usually referred to as whiteout or polar whiteout, as opposed to sector whiteout, when you are unaware you are seeing an illusion. That is why he starts to say, ‘I don’t like the look of this’. Mulgrew, seeing Cape Tennyson, the perfect counterfeit for Cape Bird, says, ‘Bit thick near Cape Bird’, not ‘a bit thick here ... Bert’.” Justice Mahon and Baragwanath took the tapes overseas to listen to the relevant sections through the sophisticated listening devices available at the National Transportation Safety Board’s offices in Washington, D.C., where a Colonel Turner had a variety of filters which could be operated so as to suppress background noise to some extent. Justice Mahon reports that after repeated listening to the relevant sections, ‘As to the two alleged remarks about the weather, Colonel Turner was quite unwilling to accept that they had been made. ... Colonel Turner’s interpretation of [the infamous ‘Bit thick here eh Bert?’] was that a speaker was in fact saying, ‘This is Cape Bird’.’ (Verdict, p. 178) As to the ‘a long while on ... instruments’, ‘Colonel Turner was again definite that this remark about ‘instruments’ really comprised two sections of an interlocking conversation, and that no reference to the DC10 being flown on ‘instruments’ on its approach to McMurdo could be inferred. We eventually completed the process of playing and replaying the tapes. It seemed at this stage as if nothing had been said about the weather being ‘thick’. It was also clear that the remarks of the two pilots had never disclosed any element of concern. The ‘mounting alarm’ of the flight engineers was, in our view, not evident at all from what we could hear through Colonel Turner’s listening devices.’ (Verdict, pp. 178-9)
16. Mahon Report, p. 38.
17. Mahon Report, p. 39.
18. Mahon Report, p. 39.
19. Mahon Report, p. 39.
20. Mahon Report, p. 43.
21. Mahon Report, pp. 99-100.
22. Mahon Report, p. 114.
23. Vette, p. 8. The question of how and why the change was made and how and why it was not communicated to Captain Collins is a matter of incredible complexity and bizarreness. An extremely oversimplified summary, which is greatly indebted to private conversation and correspondence with Stuart Macfarlane between April and June of 2001, is as follows. Originally (for six flights), the Antarctic flights were programmed to fly over Mount Erebus (the waypoint was Williams Field behind Mt. Erebus). No pilot actually took the plane over Mt. Erebus as, according to Captain Vette who piloted planes on the original Antarctic flights, it was considered far too dangerous to fly directly over an active, 12,450 ft volcano. (According to a private interview of Captain Vette by the present author at the University of Canterbury, in Christchurch, New Zealand on 13 May, 2001). There were a total of three changes (two if one considers that the last change is a change back to the original waypoint) in coordinates from the original set of coordinates. The first change was to the NDB (also behind Erebus). The second change was from the NDB (a non directional beacon) to the Dailey Islands (behind McMurdo Sound), which Brian Hewitt, the Chief Navigator, said he had not intended to make but believed he was re-entering the Williams Field coordinates, not knowing the then current waypoint was the NDB. Brian Hewitt, according to his own account, altered the waypoint by accident to the Dailey Islands, but was unaware of this change. (Erebus Papers, p. 177, pp. 325-6) This mistake was made by typing the wrong figures.
In any event, this new routing proved to be beneficial since it safely took the planes down McMurdo Sound some 27 miles away from Mt. Erebus, and this remained the route for the next 14 months. The third change was from Dailey Islands to the TACAN. The last change took place in the following manner. After his November 14 flight to the Antarctic, Captain Simpson related to Captain Johnson that the plane was some 27 miles away from the beacons. On November 15, Captain R. T. Johnson, Flight Manager, ordered Mr. L. A. Lawton, Superintendent, Navigation Services, to make the change from Dailey Islands to the TACAN. (Erebus Papers, pp. 247, 258) Mr. Hewitt made a third alteration in the coordinates which he claimed amounted to only 2.1 miles. (According to Air New Zealand, this was not found out until ‘Mr. Dorday’s Discovery’, Erebus Papers, pp. 286-298). Given that the flight plans had been over McMurdo Sound for the past 14 months and that the coordinates that Captain Collins was given programmed his flight over Erebus, the alteration made by Mr. Hewitt must have been by some 27 miles
from the Dailey Islands to the TACAN. [Recently, Captain Wilson had flown to the Antarctic on 7 November on McWilliams’ flight while the McMurdo waypoint was the Dailey Islands (though the flight was actually diverted to the South Magnetic Pole) and claimed that he never discovered that fact even though he was the briefing officer and went on the flight in that capacity. (Erebus Papers, p. 577) The whole management of Air NZ, including the Navigation Section, the briefing officer Wilson, and the executive pilots swore that they never knew of a McMurdo flight path.] At this point, one can only speculate. Was Mr. Hewitt aware that he had in fact earlier made a change that amounted to a 27 mile change that had not been approved by the Civil Aviation Authority and now wished to make it appear that he was only making a slight correction from one approved waypoint behind Mt. Erebus to another nearby waypoint also behind Mt. Erebus? Mr. Hewitt claimed that the new alteration he was making only changed the waypoint from one waypoint behind Mt. Erebus, the NDB, to another, known as the TACAN. (The implication would be that all of the pilots knew that they had been flying over McMurdo Sound for the past 14 months while Mr. Hewitt thought that they had been flying over Mt. Erebus). When Hewitt was entering the data, he said that he retrieved the old Williams Field flight plan (the original flight plan from 1977 which took pilots over Erebus) instead of printing out the new data that he had been entering. The final disastrous change in coordinates was not a computer error of entering the wrong decimal point, but a retrieval of an old flight plan. This is important to take note of because it shows that it was not a matter of being a victim of technology beyond human control. Assuming he is telling the truth, he made a serious procedural blunder here.
Assuming otherwise, he knows that he is making a 27 mile change, but is retrieving an old flight plan because, he thinks, it will more effectively cover up the fact that he has made a change of that magnitude. The third change takes place in two stages. Firstly, Hewitt had Brown enter the amendment into the AIRS computer, but it could not go live until the update of the address book between 2 and 3 in the morning of Wednesday 28th, the morning of Collins’ flight. (Erebus Papers, p. 258) Mr. Hewitt apparently attempted to communicate that there was an alteration of coordinates to Mr. Kealey, the flight despatch officer of the next flight, the flight of Captain Jeff White. (Erebus Papers, p. 247) First, he telephoned Jarvis of the Air NZ computer section in Newton Gully, a suburb of Auckland. Hewitt told Jarvis of the data processing section that he (Hewitt) could not get the computer to accept the change in flight plan. (Erebus Papers, p. 248) Jarvis told Hewitt that he could not make the change. Hewitt telephoned D. T. Kealey, Flight Services Controller for Flight Despatch, with the coordinates since Hewitt was in downtown Auckland in the Air NZ Building and Kealey was at the airport. According to Hewitt, he instructed Kealey to hand amend the flight plan on 20 November. (Erebus Papers, p. 248) Kealey could not type in a change, so Hewitt asked him to print out the flight plan and then handwrite in the changed waypoint, which Kealey did not do. On the flight plan which Brown sent to Mac Centre, no coordinates were written in, but only the vague term ‘McMurdo’ was used. The use of such a vague term can be explained by the
theory that such a term could be taken to stand for either Dailey Islands or the NDB behind Erebus or the TACAN behind Erebus. Thus, no one could be sure precisely what kind of change Hewitt had been making. One powerful implication of this choice of vague terminology was that since each flight plan was also radioed to ATC at McMurdo, on this flight plan, while the coordinates were programmed to fly over Erebus, the American air traffic controller thought that the flight plan was to the Dailey Islands, since for the previous flights in 1978 and 1979 the destination coordinates had been ones referring to the Dailey Islands waypoint (behind McMurdo Sound) and this is where the US Navy was first searching after the crash. In Justice Mahon’s words, ‘If the actual coordinates for the new destination waypoint had been contained in the ATC flight plan, then the air traffic controllers would immediately have recognized those coordinates as representing the TACAN position and they would have radioed their Christchurch base and asked them to seek an explanation from Air New Zealand, for the simple reason that they would not have approved any proposed flight path over the top of Mount Erebus. This use of the word ‘McMurdo’ was therefore an event of great significance’. (Verdict, pp. 138-9, mistake 11) A vital opportunity for a warning to be given, in this case by the U.S. Navy, was obstructed. Initially, Hewitt considered the change to be very important. His words were something like this: ‘the change has to go in’. Afterwards, however, everyone in the nav section said that the change was believed to be only 2.1 miles and insignificant and not worth mentioning. If Mr. Kealey had made that alteration, it would not have mattered to Captain White’s flight since there was no sector whiteout on Captain White’s flight and even if Captain White had stayed on the nav track (which he did not), he would easily have seen Mt. Erebus in front of him.
According to Kealey, Hewitt told him to make the alteration on 21 November, after White’s flight had departed (but Kealey did not tell White by radio). The evidence conflicts on whether White’s flight had left or not. Kealey said (contrary to Hewitt’s evidence) that White’s flight had departed and said that ‘I wouldn’t take it to be our responsibility’ to radio White to tell him of the change. (Erebus Papers, p. 266) One is forcibly reminded here of the absence of the ethic, ‘The Buck Stops Here and it stops Everywhere Else as Well’. In any case, as the flight plans were only voluntary, Captain White disengaged his nav track at 70 miles out (Erebus Papers, p. 273). Pilots were not compelled to follow the flight path. They were entitled to fly anywhere in the McMurdo area subject to being satisfied as to safety. (In fact, Collins was one of the few pilots, if not the only pilot, who did follow the flight plan. He was exceedingly cautious and planned in advance exactly what he would do, plotting the night before precisely where the flight plan would take him). Nevertheless, had Kealey told White, news of the alteration in the flight coordinates would have reached Collins. In Stuart Macfarlane’s words, ‘Had the flight plan been hand amended, it would have provided clear warning to the crew that the flight path had been altered. The crew would have been alerted, and there can be no doubt that the meticulous Captain Collins would have carried out the correct checks to verify exactly what the change involved.’ (Erebus Papers, p. 249) After this, it is a case of going from the obscure to the
opaque. Despite the fact that Mr. Hewitt had seemed most anxious to ensure that Mr. Kealey communicate the alteration of the coordinates to Captain White, when Mr. W. K. Amies, Navigation Systems Specialist, was asked about the changed coordinates, he replied that since it was only a matter of 2.1 miles, he did not think it was of any significance and would only have mentioned it to Captain Collins in passing. Nothing happened until the change was finally transferred into the live section of the computer in the early hours of the morning of Collins’ flight the next week and Flight Despatch printed out the flight plan held in the computer. Flight Despatch had a terminal from which they could print out the current flight plan. For regular routes the flight plan was stored in a cassette which they handed over to the pilots and which was simply inserted into the onboard computer. Coordinates had to be keyboarded into the onboard navigation computer by someone on the flight deck. Mr. Hewitt did not attempt to communicate the alteration of the coordinates to Mr. Johnny Johnson, the Flight Despatch Officer of the week who was taking the new coordinates to Captain Collins. One can conjecture that he thought that Mr. Kealey would inform the Flight Despatch Officer of Captain Collins’ flight or that Mr. Kealey would take it upon himself to personally inform Captain Collins of the change in coordinates. On the other hand, since Kealey never informed Hewitt that he had not communicated the change, why did Hewitt not ask Kealey if he had communicated the change? Kealey never told Hewitt that he did not make the change. Why was a confirmation not required on such an important matter? Hewitt was aware that such a change, if pilots were not made aware of it, would be dangerous. When the present author asked Captain Vette, a qualified navigator, he answered that of course Mr.
Hewitt knew the significance of the change and that ‘their whole purpose [Navigation section] was to prevent us from bumping into mountains’. (In a private conversation with the present author in Auckland on 7 July, 2001). Mr. Kealey has testified that he was under the impression that the alteration of the coordinates to which Mr. Hewitt had referred related only to the Antarctic flight of the week previous (White’s flight). (For Kealey’s evidence, see Erebus Papers, pp. 261-267). Mr. Kealey did not communicate the fact that a change would be made to the Flight Despatch officer of Collins’ flight, I. A. (Johnny) Johnson. In effect, Captain Collins was given a new set of flight coordinates without being apprised of this crucial fact. What is more, on the flight plan that he was given, there was no ‘flash ops’ message which would have drawn attention to an important change. The whole point of the ‘flash ops’ section is to alert the crew to a late change. There is a serious question in the mind of the present author as to why Navigation section in the person of Hewitt did not ensure that Captain Collins would know of the alteration in coordinates by informing him and by making sure that he received a confirmation from Captain Collins. This would appear to be both a failure in professional duty and an ethical failure. Who else but Hewitt would have been more responsible for entering the fact that there had been a change in coordinates under the important heading, ‘flash ops’? The further question is why management procedures were not in place that would have ensured that such information would have been communicated regardless of the
indisposition of any officer. Navigation section was never informed of briefing content and briefing officers were never present during Navigation section changes. Navigation section should have communicated the change to Flight Despatch and to the briefing officer, Wilson, placed a flash ops message on the flight plan to alert the pilot, and notified McMurdo ATC. One can raise the question of both the absence of the will to communicate and the absence of adequate management safeguards in the absence of such communication. There were no written instructions as to who was responsible for what. Three figures can be said to have possessed great responsibility in the matter: Hewitt, Kealey and Lawton. (Erebus Papers, pp. 256-7) It is a terrifyingly graphic case of the fatal consequences of not considering that ‘The Buck Stops Here and It Stops Every Place Else as Well’. It makes it clear that the problem is not so much a lack of communication, for that is the effect, but a failure to assume responsibility for communication and the underlying failure of being sensitive to ethical consequences or, if you like, a failure of moral imagination, for that is the initiating cause. One cannot neglect the absence of an overriding ethical imperative which everyone in the organization would be informed about and motivated by, the presence of which is the responsibility of senior management to ensure. One must be very careful to avoid thinking that disasters are simply due to a massive series of coincidences. Finally, in the case of a communication breakdown, there must be some management system in place which would both catch the communications breakdown and ensure that communications reach those for whom the transmission of such information is crucial. Now, according to Captain Vette in a private conversation with the present author in Auckland on 9 July 2001, the system has been altered and pilots are provided with new dated charts at the same time new tapes of coordinates are entered. However, as Captain Vette points out, this system still depends on someone providing the pilots with the correct charts. In the end, management checking, cross-checking and confirmation systems must be thoroughgoing and, in addition, since there can always be some loophole, an ethical imperative must be widespread so that everyone clearly understands what the purpose is behind extensive management safeguards. (Nothing happened either to Kealey or to Hewitt).
24. Mahon, Verdict, p. 152. Captain Vette might be familiar to readers as the pilot described in ‘The mercy mission of Flight 103’ in Readers Digest, September 1982, pp. 27-33, who deviated from his commercial flight to guide a lone single-engine Cessna 188 to safety. In 1980, the Guild of Air Pilots and Air Navigators awarded Gordon Vette the Johnston Memorial Trophy for outstanding air navigation. The author was guided to such information through a direct interview with Vette’s son, a pilot with Cathay Pacific. In a macabre twist of fate, the flight engineer on Vette’s plane, Gordon Brooks, was later killed in the crash at Mount Erebus. Cf. Stanley Stewart, Emergency Crisis on the Flight Deck, Shrewsbury: Airlife Publishing, Ltd., 1989, chapter one.
25. Vette, p. 306.
26. Chippindale Report, p. 8.
27. Vette, p. 307. It should also be mentioned that the passenger maps which were handed out to every passenger clearly showed the flight path to be one which
ran down the middle of McMurdo Sound. While it could be argued that the publicity section had simply printed up maps with the incorrect route, the fact that the route marked on the map is one which runs down the middle of the Sound is extremely revelatory. Where else could they have found out what to mark on the map except by consulting the Navigation section? The present author is indebted to Peter McErlane who in Auckland, New Zealand, on 5 July 2001 showed him an original passenger map from flight 901.
28. Mahon Report, pp. 20-21.
29. Mahon Report, p. 23. Peter French, however, considers that such action on the part of Air New Zealand was justified on the grounds that ‘... the airline wished to avoid trial by the press that could affect future settlements with relatives of the victims’. He adds, moreover, that, ‘The Royal Commission, I think, is most unkind in its description of the post-accident behavior of the senior executives of Air New Zealand’. Cf. Peter French, ‘The Principle of Responsive Adjustment in Corporate Moral Responsibility: The Crash on Mount Erebus’, Journal of Business Ethics, 3, (1984), p. 109. For a penetrating account of the reasons why Justice Mahon placed emphasis upon the document shredding, cf. Stuart Macfarlane, ‘Destruction of Documents’, The Erebus Papers, Auckland: Avon Press, Ltd., 1991, pp. 537-55. As an example, Mr. Macfarlane (a now retired Senior Lecturer in the Law Faculty at The University of Auckland) writes: ‘Mr. Justice Mahon told me ... that he suspected that Air New Zealand was dishonestly attempting to attribute blame to the dead pilots and had concealed documents from him so as to conceal the evidence contained in those documents which would have proved that the pilots were not to blame ... The Judge also told me that ... he had become suspicious of the coincidences that all of the documents produced to him were photocopies; no originals came to light. He suspected that on certain documents undetectable alterations which favoured the airline case might have been made. An alteration made to an original document carries the risk of detection, but when photocopied the risk is minimised, if not eliminated. He thought it possible the originals had been destroyed to avoid detection’. The Erebus Papers, pp. 537-8. Mrs. Collins, widow of the pilot of the flight, Jim Collins, related to the present author in a private conversation held at her home in Auckland, New Zealand on 13 July 2001 that she was told at the time by a friend who worked on the same floor as the publicity office that the shredding was being carried out in such haste to get rid of the documents that someone’s tie was caught in the shredding machine.
30. Mahon Report, p. 23. For details on the missing Cassin documents, cf. Verdict, pp. 218-219.
31. Mahon Report, pp. 23, 142-3.
32. Verdict, p. 215.
33. Verdict, p. 273. In a private conversation held at her home in Auckland, New Zealand on 13 July, 2001 Mrs. Collins related that March 29th was her birthday and it was only a few days after the Mahon Report came out that her home was burgled. There were items of value there (such as jewelry) but nothing of any value appeared to be missing. The burglars appeared to have been on a fishing expedition. She said that they might have been looking for a draft of the
Chippindale Report which she was to comment upon before it was published. She had, however, given this to a pilot friend before the occurrence of the burglary.
34. Verdict, p. 215. Mrs. Collins revealed that a few days after the disaster Ron Chippindale came to her home and asked for her husband’s Atlas. She replied in all innocence that it was missing but that she could surely find him a New Zealand Atlas at a neighbor’s home. Mrs. Collins, a most charming person to interview, came to New Zealand with her parents from Austria, sponsored by no less a personage than Sir Karl Popper.
35. Verdict, p. 63. Cf. Vette, p. 309.
36. Mahon Report, p. 19. (emphasis in the original).
37. Mahon Report, pp. 34-35. According to private correspondence with Mr. Macfarlane on 3 May 2001, the AINS was much more accurate than this. This concurs with evidence on the accuracy of the AINS given by highly experienced command officers and first officers (pilots and co-pilots to lay readers) in direct interviews with the author. Cf. also, Erebus Papers, pp. 53-56.
38. Vette, p. 315.
39. Mahon Report, p. 75.
40. Verdict, p. 113.
41. Verdict, p. 113.
42. Verdict, p. 241; Mahon Report, p. 78; Vette, p. 338.
43. Verdict, p. 241.
44. Verdict, p. 118.
45. Vette, p. 319.
46. Mahon Report, pp. 95-96; Vette, pp. 327-328. According to Captain Vette, ‘His two elder daughters, Kathryn, 16 and Elizabeth, 14, were attentive observers. In reply to their queries, he laid the big chart out on the floor; Kathryn particularly remembers her father indicating the path the aircraft would follow - down the coast of Victoria Land and on to McMurdo Sound. It would fly back up the same coast on the homeward leg, he told her.’ IMPACT EREBUS, p. 105.
47. In Justice Mahon’s words, ‘The witnesses in the case who were asked to describe the personality and working methods of Captain Collins were unanimous in their opinion. It did not matter whether they were executive pilots or line pilots. They said he was careful, conscientious and methodical. The last adjective was particularly stressed’. Mahon Report, pp. 98-99.
48. Vette, p. 320.
49. Vette, pp. 320-321.
50. Mahon Report, pp. 140-141. Cf. Mahon Report, pp. 142-145 for a series of further details regarding the fate of the two key flight bags, including the strange detail that the two flight bags which had been recovered intact (but empty) and lodged in the Police store at McMurdo and would have been returned in due course to Mrs. Collins and Mrs. Cassin were taken away from the store by someone, never to be seen again.
51. Mahon Report, p. 141. 52. Chippindale Report, 3.36, p. 53. 53. Vette, p. 341 and Mahon Report, p. 125. It is also the case that Air New Zealand’s policy was that radar was not to be used as a primary navigation aid. Cff , Erebus Papers, p. 244. To consider that it might be turned to in case of emergency is to assume the scenario that the pilots thought that they were lost and could not see where they were. 54. Vette, pp 236-7. 55. Verdict, p. 154. 56. Verdict, p. 155. Some have raised the issue why the pilots and the experienced arctic observer aboard did not notice the difference in the landscape from the route they thought they were taking. The most credible hypothesis is that the dual deception of whiteout (on which the crew had never been briefed) and flight plan led the crew to mistake the terrain near to Erebus (two similarly appearing and geographically located capes and an island) for the terrain were expecting to see. Cff , The Erebus Papers, p. 455. Whiteout on its own was not enough to mislead the pilots for they could have been briefed on its existence and if Air New Zealand had followed the Royal New Zealand Air Force rules, they would never have been flying on Visual Flight Rules in an area where whiteout occurred. The ANZAF adopted the USAF drill. Every military flight went down McMurdo Sound on instruments with bearing from beacons and were talked in on radar from the ATC. The military knew what they were doing. They had been flying there since the 1950’s, for over thirty years. Air New Zealand did not follow safety as their highest priority with regard to their crew and their passengers. Why did they not follow standards which were given in the military briefing? The New Zealand Air Force had adopted the USAF briefings. Why had Air New Zealand done otherwise? The information they received at the military briefings was not made known either to the pilots or to the NAV section. Once more, one finds a lack of communication. 
Information which is received is not passed on. The briefing the pilots received from Air New Zealand referred to whiteout as caused by blown snow. Therefore, they thought they could avoid whiteout simply by avoiding blown snow. There are at least three kinds of whiteout. There is the whiteout due to blown snow; there is general whiteout, caused by optical illusion, on which the military briefed Gemmell and Civil Aviation on two occasions, one in the 1960’s which Kippenberger would have attended and another in 1977; and there is sector whiteout, the most insidious kind, discovered by Captain Vette, in which you can see texture in some places yet are in whiteout in other places where you think you can see. Brooks, the engineer, had lost side texture when he said ‘I don’t like this’. (The present author is indebted to a private conversation with Stuart Macfarlane in Auckland on 9 July 2001 for this information.) It was not a matter of poor visual conditions, as there is no evidence that the pilots were unsure of their location, and recovered passengers’ photographs taken up until the very moment of the crash show clear visual conditions. Cf. The Erebus Papers, p. 240. According to Vette, ‘... crew members were misled by a remarkable correspondence between the appearance of the features they were observing as
they flew ... and the appearance of the features they would have seen had the aircraft been following its planned course up the middle of McMurdo Sound’. Cf. David O’Hare, Stanley Roscoe, with contributions by Gordon Vette and Michael Young, Flight Deck Performance, The Human Factor, Ames: Iowa State University Press, 1990, p. 23. Had Air New Zealand followed the military flight path, this disaster never would have occurred in the first place. One can say that the root cause of the disaster was a lack of safety as the highest priority. There was no need to kill 257 people to learn that they should have followed the standards of the military briefing. Proper management and proper communication are in service of safety. If safety is followed as the highest priority, then one understands the reason behind management safeguards and the importance of passing information onwards. 57. Mahon Report, pp. 42-3; for the phenomenological fact that no anxiety or urgency was present in the voices, Cf. Verdict, p. 198. It is also the case that had Captain Collins been aware that he was about to crash into the side of a mountain, he would have called for ‘Firewall power’ (the maximum power used for life and death situations) and not have gone through a standard drill calling for ‘Go round power’, which is high but not critical power. (For this information the present author is grateful to Captain Vette for his conversation at the University of Canterbury, Christchurch, New Zealand on 13 May, 2001.) 58. Verdict, p. 247. 59. Verdict, p. 140 and Vette, pp. 329-334 for the summary of the 54 mistakes; pp. 338-340 for the summary of the 177 mistakes concerning height - all of these latter from Macfarlane’s Notes on Text; Cf. also Vette, p. 141. 60. Verdict, p. 247. (This is the notorious paragraph 377 of the Mahon report discussed below in note 62.) 61. Such a finding is highly unusual for a Court to make according to a Supreme Court Justice (Hong Kong) in a direct interview with the author.
According to the Woodhouse judgement of the New Zealand Court of Appeal in ‘Air New Zealand v. Mahon’ (No 2) (the ruling is divided into two separate judgements): ‘But lest there be any misunderstanding it is necessary to emphasize at the outset that no attack can be or indeed has been made upon the conclusions it [the Mahon Report] reaches as to the cause of the crash.’ (620 NZLR) [NZLR = New Zealand Law Reports] This phraseology would seemingly imply that the conclusions would or could be left intact. An even stronger statement appears later when it is concluded with respect to the Mahon Report that, ‘Thus the conclusions as to the cause of the crash must and do stand’. (630 NZLR) And again: ‘Already we have emphasized and we do so once again that what was said in the Royal Commission Report about the cause or causes of the accident must stand entirely unaffected by these proceedings’. (NZLR 649) If the case before the CA had been an appeal, the CA would have had in law the jurisdiction to rewrite the Mahon Report and come to findings of fact which differed from the Report. However, the case was not an appeal, but was an application for judicial review, and in judicial review proceedings, the court has no such power. This was believed to be the law before the Erebus case. Because of what the PC said (Cf. Appendices), one cannot be too sure just what
the law is. (Gratitude must be extended to Mr. Macfarlane for an extensive private correspondence which has clarified these distinctions.) According to Professor J. F. Northey, Dean of the Faculty of Law, Head of the Department of Law, and Professor of Public Law at the University of Auckland, writing in Recent Law, Legal Research Foundation, February, 1982, p. 31, in his explanation of the ruling of the New Zealand Court of Appeal in ‘Air New Zealand v. Mahon’ (No 2), reported in [1981] NZLR 618: ‘... the Commissioner’s [Justice Mahon’s] ... conclusion that administrative airline procedures and not pilot error were the cause is unaffected by the Court of Appeal decision [the CA made no finding on causation of crash] ... The Court of Appeal did decide: that the terms of reference of the Commissioner conferred no express [or] implied power to make a finding of guilt in respect of a crime ... that the Commissioner did not comply with natural justice before making the finding of a conspiracy to commit perjury’. (emphasis in original) Cf. The Erebus Papers, p. 505. The notion of not complying with natural justice amounts to the Judge not warning each witness at the time that the Judge did not find his testimony believable; had such a warning been given, each witness would have had the opportunity to call evidence and have submissions made in support of the witness’s statements. (Acknowledgement must be given to a private correspondence from Mr. Macfarlane for this straightforward explanation of this legal phrase.) It is interesting to note, however, that the Cooke judgement does acknowledge the sloppy work in the making of the change and the major error made in not notifying the aircrew of the change of coordinates, but does not take this acknowledgement into consideration in their conclusion of no conclusion on causation: ‘Beyond argument, it would seem, there was slipshod work within the airline in the making of the change and the failure to expressly notify flight crews’.
(661 NZLR) This despite their disclaimers to the contrary that they could make no causation finding. According to Andrew Beck, ‘The Cooke judgment stressed that the court could not itself make findings as to the causes of the disaster and that no appeal lies from a commission’s report’. Cf. Andrew Beck, ‘Trial of a High Court Judge for Defamation?’, The Law Quarterly Review, Vol. 103, July, 1987, p. 465. Beck goes on to argue that the review jurisdiction posited by the Woodhouse judgement is wider than that permitted by the Cooke judgment, so that the court could review the findings of a commission if those findings were not reached in accordance with natural justice, whereas the Cooke judgment would not allow this. According to Beck, the Woodhouse criterion would allow a court, even on judicial review, to rewrite the commission’s findings. 62. The Privy Council (the English equivalent of the U.S. Supreme Court) confirmed the N.Z.C.A., but went further than the CA to uphold the causation finding of Justice Mahon. Cf. ‘Mahon v. Air New Zealand, Ltd.’, [1984] AC 808 (AC = Appeal Cases), also reported in 3 WLR 884 (WLR = Weekly Law Reports) and in [1983] NZLR 662: ‘The Royal Commission Report convincingly clears Captain Collins and First Officer Cassin of any suggestion that negligence on their part had in any way contributed to the disaster. That is unchallenged ...’ (NZLR 684) ‘The Judge’s Report contains numerous examples
and criticisms of Air New Zealand’s slipshod system of administration and absence of liaison both between sections and between individual members of sections in the branch of management that was concerned with flight operations. Grave deficiencies are exposed in the briefing for Antarctic flights; and the explanation advanced by witnesses for the airline as to how it came about that Captain Collins and First Officer Cassin were briefed on a flight path that took the aircraft over the ice-covered waters of McMurdo Sound well to the west of Erebus but were issued, for use in the aircraft’s computer, the nav track of a flight path which went directly over Mt. Erebus itself, without the aircrew being told of the change, involved admissions of a whole succession of inexcusable blunders by individual members of the executive staff. None of this was challenged before their Lordships ... These appalling blunders and deficiencies ... had caused the loss of 257 lives.’ (685 NZLR) However, with respect to paragraph 377, the Privy Council also concluded that, in so far as Justice Mahon’s conclusions in paragraph 377 of the Mahon Report that there existed a ‘... pre-determined plan of deception’ and, in the grand phrase that has since become a part of New Zealand vocabulary, ‘... an orchestrated litany of lies’ on the part of management were based substantially on the issues surrounding the ‘destruction of documents’ [this is a smokescreen], and in so far as, in the opinion of the Privy Council, Justice Mahon had no positive evidence that supported such an inference (that organized deception existed), Justice Mahon was therefore not entitled to make such an inference.
The problem with this finding of the Privy Council is that Justice Mahon’s conclusions in paragraph 377 were not really based on the destruction of documents issue but on the incongruity of the otherwise inexplicable existence of a massive series of unrelated errors and the need to ignore the conflicting testimony of the surviving pilots [who were not on board Captain Collins’ flight] who attended the fateful briefing. With regard to allowing a massive series of unrelated errors to stand uncontested, Macfarlane aptly notes in one instance that: ‘By making that finding the Privy Council are saying in effect that the 54 mistakes as to flight path which were allegedly made by Air New Zealand could not even conceptually be regarded as conceivable untruths. Instead, they must, according to their Lordships’ reasoning, inevitably be no more than: ‘... a whole succession of inexcusable blunders by individual members of the executive staff’ (NZLR 685)’. Cf. The Erebus Papers, p. 589. Such a finding would appear to leave their Lordships with a view of reality as being a chance concatenation of a series of unrelated mistakes. Justice Mahon was not satisfied with this version of reality: ‘... the Judge considered the probability of so many mistakes by so many witnesses being genuine. He ... concluded that they could not have been accidental and that the likelihood of their being independent and with no collaboration was sufficiently low as to justify a finding of an orchestrated litany of lies, as opposed to a succession of evidence upon one issue which coincidentally happened to be untrue’. Ibid., p. 637. Cf. p. 636; pp. 703-708. But the ruling of the Privy Council has more than philosophical implications. It seems that, on one reading, while the causation finding of Justice Mahon is upheld, the causation only pertains to ‘... a whole succession of inexcusable
blunders by individual members of the executive staff ...’ and thus Air New Zealand is not responsible. In this fashion, the disaster is implicitly ascribed to ineliminable human error (in this case an amazing abundance of it), and thus with extraordinary deftness the Law Lords succeed in upholding both Justice Mahon and the Court of Appeal, and clearing both the pilots and Air New Zealand of any blame for the crash. The sole culprit in the end is the chance concatenation of a whole series of ineliminable human errors. The reasoning of the Privy Council, however, is not thoroughgoing. On the one hand, whether or not they have the legal right to do so, they clearly approve of Justice Mahon’s indictment of Air New Zealand for its ‘slipshod system of administration’, but they refrain from clearly approving of the indictment of Air New Zealand as being responsible, such that the only responsibility lies in the ‘whole succession of inexcusable blunders by individual members’, which, put this way, apparently has no connection to the shoddy management framework in which these blunders took place. It is a pity that, despite their Lordships’ strong indictment of management practices, management is left off the hook. This despite their Lordships’ obvious realization not only of the close connection between mismanagement and individual error (or a series of errors in this case) but of their Lordships’ acknowledgement that Justice Mahon’s attribution of responsibility to management rather than to individual error was both well supported and unchallenged by their Lordships: 393. ‘In my opinion therefore, the single dominant and effective cause of the disaster was the mistake made by those airline officials who programmed the aircraft to fly directly at Mt. Erebus and omitted to tell the aircrew. That mistake is directly attributable, not so much to the persons who made it, but to the incompetent administrative procedures which made the mistake possible’.
(664 NZLR) (re: 393) ‘These findings, which fall fairly and squarely within the Royal Commission’s terms of reference and for which there was ample supportive evidence at the inquiry before the Judge ... are not susceptible to challenge in proceedings of this kind’. (665 NZLR) In finding a previous finding well supported and then ignoring it in its own finding, the Privy Council seems to be guilty of slipshod reasoning, if not an outright self-contradiction. It could be argued, from a legal point of view, that the PC was not taking a position on Mahon’s causation finding and thus the approval of Mahon’s reasoning here was only by way of showing agreement with how he arrived at his conclusions, which were in this instance well founded but outside of their legal domain to comment upon. One could nonetheless trace a self-contradiction in their reasoning which was of the following sort: [Though we as a Court have no legal right to comment on this] Justice Mahon was right to conclude that management was responsible for the fateful mistake, and not the persons who made it. Justice Mahon was right to conclude that the persons (committing individual blunders) who made the fateful mistake were responsible and not management. In this fashion, the PC commits a self-contradiction in reasoning which is altogether outside of their legal domain to comment upon. In addition, they appear to ascribe this self-contradictory reasoning to Justice Mahon, making this a self-contradiction inside an ascribed [but fictional] self-contradiction.
That their Lordships are perfectly well aware of the causal connection between mismanagement and error is revealed in another place in their judgement: the findings of the Judge as to the cause of the disaster, viz., ‘... the mistake made by those airline officials who programmed the aircraft to fly directly at Mt. Erebus and omitted to tell the aircrew’, and as to the occurrence of a whole series of previous inexcusable blunders and slipshod administrative practices by the management of Air New Zealand, of which this mistake was the result [the double mistake of wrong programming and not informing the air crew of the change], ‘are for the most part also dealt with in one or other of the judgements of the Court of Appeal. That such blunders did occur has not been the subject of any challenge in the proceedings for judicial review of the Royal Commission Report’. (671-2 NZLR) It seems ironic, then, that while the Privy Council accused Judge Mahon of self-contradictory reasoning (681, 683 NZLR) (which manifestly does not seem to be the case), they do attribute a self-contradictory line of reasoning to him by showing agreement with a fantasy self-contradictory line of reasoning that Justice Mahon had never put forth, such that their own judgement is not only riddled with self-contradictory reasoning (of which this is only one example) but is based upon self-contradictory reasoning. For the basis of the Privy Council judgement as self-contradictory, cf. Appendix 1 and The Erebus Papers, pp. 593, 596, et passim. For mistakes in reasoning and misstatements of Justice Mahon’s points by the Law Lords in ascribing self-contradictory reasoning to Justice Mahon, cf. The Erebus Papers, pp. 571-573. For further self-contradictions in and mistakes made in the Privy Council ruling, cf. appendices one and two below and The Erebus Papers, pp. 702-708 (which contains a comprehensive list of the mistakes made by the Privy Council in their ruling). 63. The Erebus Papers, p. 593. 64. As will be described in detail in appendices one and two, in the case of the testimony of the briefing officers, the importance of the credibility finding was that it implied that if the briefing officers had lied when they said that they had briefed the pilots of Flight 901 that their flight plan was to take them over Erebus, then the briefing could not be construed to have been over Erebus. If the briefing was not over Erebus, this supports the causation finding that the pilots had been misled by the content of the briefing into believing that their route lay over harmless McMurdo Sound. Thus, the deceit of the briefing officers in this case is directly relevant to establishing Mahon’s causation finding. The consequences of the credibility finding for the case are enormous. If this credibility finding is disallowed, the causation verdict is seriously undermined. If the credibility finding is allowed and the briefing officers’ testimonies are reliable, then the pilots are responsible for the crash and there is no mistake in coordinates or failure to notify the pilots of such. If the credibility finding is allowed and the briefing officers’ testimonies are unreliable, then the airline management is responsible for the crash and the pilots are exonerated. 65. Charles Perrow, Normal Accidents, Living With High-Risk Technologies, New York: Basic Books, 1984, pp. 134, 132. 66. Mahon Report, p. 158.
67. Op. cit., p. 146. In this portion of Perrow’s discussion, it is not clear if he means that one of the twin causes is the alteration of the coordinates or the failure to notify the pilots of the alteration. Here, Perrow is understood as intending the latter, since this is his meaning on p. 132, where only one of the causes is mentioned. Perrow’s possible fourth conclusion as to the cause of the disaster on Mt. Erebus is only stated indirectly when he states immediately underneath the discussion of Erebus, but without any explicit attribution of this explanation to account for Erebus, that ‘... the complexity and the coupling of the system appear to account for a significant number of accidents’. (p. 134) While this is Perrow’s central thesis, and so one would expect that the purpose of introducing Erebus as an example would be to illustrate his thesis, his intended use of the example of Erebus is not altogether obvious. In any event, the example does not support his general conclusion. One wonders what examples do count to support his conclusion. In the absence of any examples which support his conclusion, his conclusion would be unwarranted. While the case of Erebus is certainly complex, it is not clear that it is “tightly coupled”. In fact, there are loosely coupled occasions at which, had agents acted otherwise than they did, the disaster would not have occurred. If Chief Pilot Gemmell or others from Civil Aviation who had attended the military briefings on whiteout had passed on the message to Air New Zealand that the Americans would not allow flights in the Antarctic to be conducted under Visual Flight Rules and had insisted upon a military flight path (only 2 miles from the Dailey Islands flight path), then Air New Zealand would never have allowed a flight path near Mt. Erebus in the first place. The Americans had been flying down to the Antarctic since the 1950’s, for some thirty years. They knew what was safe and what was unsafe.
If Mac Centre had not invited the aircraft to descend to 1,500 feet, the disaster would not have occurred. (Cf. Mahon Report, pp. 157-8) If the Air New Zealand briefing had been up to the quality of the military briefing, the disaster would not have occurred. With respect to the weather, massive coincidences conspired, but these too were not inevitable. If there had been wind to blow away the fog, the whiteout deception would not have occurred. If back eddies (wind currents) had not sucked up the fog and thus covered over the Ross sea ice, the disaster would not have occurred. If the sun had not been at the angle of 28 degrees, whiteout would not have occurred. (Cf. Impact Erebus, p. 163) These events, hardly tightly coupled, were by no means inevitable. Most importantly, the Royal New Zealand Air Force adopted the United States Air Force drill. Every military flight went down the middle of McMurdo Sound on instruments, with bearings from beacons, and was talked in on radar from the ATC. Air New Zealand was reckless. Both Gemmell and Civil Aviation had been exposed to full military briefings (there were two of these, one in the 1960’s and the other, Deep Freeze, held on 4 February, 1977) and did not pass on the content of the military briefings. (Cf. Erebus Papers, p. 372) In addition to the pilots not being told of the change in coordinates or of their location on radar, there were other crucial warnings, which would have prevented the disaster, that were not passed on. For example, ‘... the combination of low flying and the lack of white-out briefing meant that captains such as Captain Collins presumably
assumed that they could safely fly VMC below cloud, whereas this is simply not the case – as indeed Antarctic experience had established for many years’. (Erebus Papers, p. 364) Captain Simpson, for example, who was present at the same briefing as Captain Collins, testified that no mention of white-out was made at the briefing. (The signed text of Simpson’s testimony in his own handwriting is to be found in the Erebus Papers, pp. 672-678.) For some of the above information the author is indebted to a private conversation with Stuart Macfarlane in Auckland on 9 July 2001. 68. Mahon Report, p. 159. There is at least one qualified authority, Air Marshal Sir Rochford Hughes, who was technical advisor to QC Baragwanath, who takes the view that the pilots could not escape some 10 per cent of the responsibility for the disaster, while about 90 per cent of the disaster was due to organizational faults. Sir Rochford’s reasoning that the pilots possessed some responsibility was that ‘... the crew should have positively identified the location of Mt. Erebus, either visually or through some navigational aid ...’ Cf. The Erebus Papers, p. 404. But Sir Rochford’s analysis does not take into account that the pilot and crew were misled into thinking that they were on an entirely different flight path; it would not have occurred to them that they were in dangerous proximity to Erebus, and because of the polar whiteout phenomenon there was a misleadingly clear visibility with no apparent obstacle to be seen ahead. Justice Harold H. Greene in Beattie v. United States of America [1988] is less credible in his descriptions of the event in ruling that the staff at McMurdo Base were not responsible but that the pilots and Air New Zealand were responsible for the crash.
While his ruling does include pilot responsibility, his reasoning appears not to take into account that the pilots had no idea that they were passing across Erebus; it also appears to be based on the notion that they were lost and were making circles while wondering where they were and ran into Mount Erebus. Cf. The Erebus Papers, p. 439. According to Earl L. Wiener and David C. Nagel, ‘There had been no opportunity for intervention on the part of the crew, who unknowingly transferred an incorrect waypoint from one computer system to another’. Cf. Earl L. Wiener and David C. Nagel (eds.), Human Factors in Aviation, New York: Harcourt Brace Jovanovich, 1988, p. 440. The puzzle remains as to why the United States Navy McMurdo Station did not provide radar guidance to flight 901. Of course, one great difficulty was that they too thought that the flight track proceeded down the centre of the sound. (Verdict, p. 159) The US Navy did offer humanitarian assistance beyond their legal requirements to civilian aircraft operating in the vicinity of McMurdo, and ‘MAC Centre, the general communications centre at McMurdo Station did inform flight 901 by High Frequency radio of the possibility of radar assistance, as follows: ... “within a range of 40 miles of McMurdo we have a radar that will, if you desire, we can let you down to 1,500 feet on radar vectors” [,] over. The crew responded, “Roger New Zealand 901 that’s acceptable”.’ (Erebus Papers, p. 452) There are different interpretations of this exchange. Captain Gordon Vette testified that this communication established full radar coverage for the flight. (Erebus Papers, p. 452) He was present at the American trial. In a private conversation held on 8 July 2001 in Auckland, New Zealand with Captain Vette,
he indicated that if a radar signal from the Air Control Tower sets off the airplane’s transponder and a blue signal comes on, which it did, then this indicates that the Air Control Tower is aware that it has the plane on radar. The crew on the plane would thus have felt some degree of security that they were safe, from the fact that they thought they were being tracked on radar. However one construes the situation, it remains puzzling why the radar controllers would not have felt some ethical responsibility for guiding the plane. The other principal air traffic control facility at McMurdo, Ice Tower, communicates with aircraft on UHF and VHF frequencies. Ice Tower and flight 901 attempted to establish VHF communications but were successful for only one 30 second period, which proved that the aircraft was within VHF line-of-sight of Ice Tower. In the plaintiffs’ pretrial brief, it was stated that ‘The subsequent inability to reestablish contact should have alerted the tower personnel that something was amiss. Mac Centre ... should have known that, with the aircraft due overhead at any minute, the line-of-sight radio must be blocked by terrain obstructions ... The Navy controllers were trained on the whiteout phenomenon. On the date of the accident, the Centre and Tower personnel should have been aware that weather conditions were conducive to whiteout. ... The plaintiffs allege that ... the Navy air traffic controllers had the equipment to depict the aircraft’s position, the aircraft was within range of that equipment, and the controllers anticipated the imminent arrival of the aircraft.’ (Erebus Papers, pp. 40-44) The question of the present author is why would Ice Tower attempt to establish VHF communications with flight 901 unless it was attempting to follow the plane? [The control tower has only VHF frequencies, which are dependent upon line of sight but are free from static.] When Justice Mahon interviewed U.S.
Navy witnesses he learned that all transmissions to or from the McMurdo area are recorded on tape at Mac Centre. Justice Mahon reports that ‘It was on record that the tapes at Mac Centre, which had been recording transmissions both from Mac Centre and from the control tower to the DC10, had been silent for a period of four minutes 42 seconds prior to the moment when the aircraft struck the northern slopes of Mount Erebus. This had been due to the fact, so it was understood, that neither Mac Centre nor the control tower had spoken to the aircraft over that period of time’. ... However, Justice Mahon also learned that the investigator First Officer Rhodes ‘had inquired as to the reason for this silence and he had been told that this last portion of the tape prior to impact had been accidentally erased’. (Verdict, p. 160) Justice Mahon conjectured that some lawyer or hostile party might say that ‘... the radar operator at the control tower had seen the DC10 on his radar screen. He would have observed a line of ‘blips’ travelling right to left from behind the mountain before turning away to the right again and disappearing. He would have told the radio operator at the tower. The radio operator might then have spoken to the aircraft on VHF and told the crew that, instead of being in McMurdo Sound, as both the crew and the control tower thought, the aircraft was ... to the north of Mount Erebus.’ On this conjecture, when the radio operator repeatedly warned the aircraft that it was on a collision course with the mountain, no communication was received by the aircraft because the aircraft was behind the mountain. ‘During these
transmissions from the control tower, the radio operator at Mac Centre would have been listening carefully. ... In this case, it might be suggested, the radio operator at Mac Centre had made the fatal blunder of not repeating on HF radio the warning which he had just heard on VHF radio from the control tower. Had he only taken the precaution of repeating the warning, then Captain Collins would immediately have heard the warning on HF and would have applied power and put the DC10 into a turning climb and flown away.’ (Verdict, pp. 161-2) Justice Mahon goes on to say that ‘Baragwanath and I were acutely aware that we had not interviewed three particular witnesses who had been on duty at McMurdo Air Traffic Control on the day in question. They were the radar operator at the control tower, the radio operator at the tower, and the radio operator at Mac Centre who had been monitoring the transmissions from the tower and who had himself spoken to the DC10 on his HF transmitter’. (Verdict, p. 170) According to a private interview with Stuart Macfarlane in Auckland, New Zealand on 26 April 2001, when he was in the States the US attorneys for the claimants told him that the radar operator had had a mental breakdown shortly thereafter. According to a private interview with Captain Vette at the University of Canterbury, in Christchurch, New Zealand on 13 May 2001, the transponder on the plane had responded to radar, so that the plane was definitely being tracked by radar. According to Macfarlane, his recollection of the US trial is that while that fact did not necessarily prove that the signal from the transponder had returned to the transmitter, it suggested that it probably did. In Macfarlane’s account it probably did return to the transmitter. Cf. Erebus Papers, pp. 457-459. If the American Navy not only was tracking flight 901 on radar but was also aware that flight 901 thought that it was being tracked on radar, and therefore must have thought that if it were in danger it would certainly have been warned, this makes the matter even more morally pernicious. From the point of view of the present author, the main point would seem to be that whether or not the transmitter received a return signal, the plane was definitely being tracked on radar. This fact by itself summons up the question of moral responsibility even if one cannot establish legal responsibility. 69. Mahon Report, p. 146. 70. Mahon Report, p. 146. 71. Mahon Report, p. 147. 72. Mahon Report, p. 147. 73. Mahon Report, p. 148. 74. Mahon Report, p. 148. 75. Mahon Report, p. 148. French apparently finds this custom of strictly verbal communication inoffensive and likens it to family style or military style communication. Cf. French, p. 105. He goes so far as to say that, ‘The fact that the Board and the CEO adopted and encouraged the less formal communication system ... is surely an inadequate basis upon which to find them morally responsible for the disaster on Mount Erebus even though causal responsibility focuses on the organizational breakdown’. Cf. French, p. 106. The problem with this analysis is that it does not take into account the inherent relationship
that exists between a high moral concern such as a safety first priority and sound management fundamentals such as a communications system designed to notify a Captain of a change in flight plan. Such a design would obviously satisfy a criterion of respect for persons and at the same time be a vital safety precaution. An informal, verbal system with no written instructions or records to verify obviously leaves much to chance. In this case, it is not inoffensive; it is dangerous. Queen’s Counsel Baragwanath argued that: ‘The system of oral communication described by the Chief Executive is totally inappropriate [in] an aviation company where communication by written instructions is critical to flight safety’. Cf., The Erebus Papers, p. 389. French’s analysis, however, is difficult to follow. He goes on to say that, ‘Air New Zealand’s organizational and communication structure was causally responsible for the crash ... and the corporation can be held strictly and possibly objectively liable under the law, for damages. The matter of the corporation’s moral responsibility, however, cannot be resolved by either causal identification alone or by causal responsibility ...’ Cf., French, p. 106. It is difficult to understand why a corporation can be legally blameworthy and liable for damages for having a flawed organizational and communication structure and yet, on these grounds, cannot be found to be morally blameworthy. It would seem that it was only on account of moral responsibility (that the corporation should have been attentive to having a sound organizational and communications structure in place) that the corporation could be held to be legally accountable. (The notion that a corporation can be legally liable but not morally responsible is also advanced by David T. Ozar in ‘The Moral Responsibility of Corporations’, in Thomas Donaldson and Patricia H. Werhane (eds.), Ethical Issues in Business, Englewood Cliffs: Prentice-Hall, 1979, pp. 294-300.
Ozar, however, wavers considerably on this point). How French could argue that ‘Air New Zealand’s organizational and communication structure surely was causally responsible for the crash’ and yet at the same time that ‘The fact that the Board and the CEO adopted and encouraged the less formal communication system ... is surely an inadequate basis upon which to find them morally responsible....’ is unfathomable. One is reminded of F.H. Bradley’s argument that ‘we need make no distinction between responsibility ... and liability to punishment’ (Ethical Studies, Oxford: Oxford University Press, 1962, p. 4). It seems that having been responsible for having a certain organizational and communications structure in place which is causally responsible for a disaster, and being liable for punishment on that account, is also sufficient to be held morally responsible for that event. In any case, if French does consider that ‘Air New Zealand’s organizational and communication structure [or lack of it] was responsible for the crash’, then surely the system of non-written communications is part of that “structure”. In the case of the non-communication of the change of the flight coordinates, the lack of written communications and a specified formal reporting channel (which would manifestly include the pilot!) is a crucial element in the pilot not knowing of the change of coordinates. Why, then, does French find this “system” of informal communication inoffensive?
THE DISASTER ON MT. EREBUS
76. Mahon Report, p. 13. Thus, the error was not simply the error made in programming but in not informing the pilot of the change. This difference is glossed over by Captain William B. Mackley (Retd.) in his article on Erebus in Flight Safety Digest, when he quotes Sir Geoffrey Roberts (then Air New Zealand’s chief executive and later its chairman of directors) as saying, ‘The error in programming was a contributory cause. The basic cause was pilot error’. Cf., Flight Safety Digest, September 1982, p. 7. If it had been simply an error in programming, that could be, of course, attributable to ineliminable human error. But the problem was the lack of any system of communicating changes to the pilots and a related absence of proper management practices as detailed above. The absence of proper communications and management practices is something that is correctible; a certain proportion of ineliminable human error is not. Upon an examination of Justice Mahon’s private papers one can even find comments on specific reasons why written instructions are superior to reliance only upon different personnel double checking. In the all-important instance of the mistake in entering the coordinates into the computer, Justice Mahon writes: ‘Check that the co-ordinates recorded in the ALPHA plan were correctly entered into the AINS programme. The mistake made by Hewitt [in navigation section] is the kind of mistake which is plainly to be guarded against whenever computers are used. We have heard that there is now a second person required to check the information. But there is still no written instruction, and with a change in personnel the information obtained at such a price will disappear.’ Cf., The Erebus Papers, p. 364. Justice Mahon’s point is that with a change in personnel, or someone being ill one day, there is no way that the replacement would necessarily realize her or his responsibility.
Since Air New Zealand and commentators such as Peter French did not see anything amiss with verbal instructions, perhaps such an example would make the point rather obvious. It is also important to note that while a popular impression of this disaster might be that such accidents are inevitable given the nature of high technology, in this case a simple, basic management practice of keeping logs could have caught the error and averted the disaster. It is difficult to understand why French did not discern the absence of written instructions as a serious example of management malpractice. Justice Mahon comments on the ‘consequence of the failure to reduce communications to writing’. As Justice Mahon notes in his private papers, ‘The mix-up over Captain White’s flight and the failure to get the message through to him to make a manual change in his flight plan is attributable to a lack of clear written instructions as to the duties of the various persons’. (Erebus Papers, p. 366) Thus in these two instances alone, key to the actual occurrence of the disaster, written instructions and written logs could have prevented the disaster from occurring. As Justice Mahon notes, ‘The reliance on oral communication .... is simply not feasible in such an organisation, where flight safety is at risk ...’ (Erebus Papers, p. 364) Queen’s Counsel Baragwanath makes similar comments: ‘... there was no adequate system of recording at Flight Despatch, and for conveying to flight crews, the information received by Flight Despatch ... the regular shifts of personnel at Flight Despatch
and liaison (often during the course of the flight) could not be maintained purely verbally’. (Erebus Papers, p. 370)
77. Mahon Report, p. 148.
78. Mahon Report, p. 148.
79. Mahon Report, p. 149.
80. Mahon Report, p. 19. Mr. Macfarlane quotes Queen’s Counsel Baragwanath as saying: ‘The responsibilities of managers and senior executives must be clearly defined in writing and chains of responsibility established ... Perhaps the greatest of the many lessons learned from this tragedy related to flight safety is the vital need to pay the same careful attention to the operator’s organisation as has traditionally been paid to the aircraft’s soundness and the pilots’ fitness and skill’. Cf., The Erebus Papers, p. 390. That basic training in management was needed by Air New Zealand is a point which Baragwanath makes most trenchantly: ‘There has never been any real training in management (including systems management) brought to bear within the Operations Division ... they have simply been given no opportunity ... to acquire the management skills which are needed to carry out a significant part of their responsibilities. ... a further input of top-class management skills is required’. (Erebus Papers, p. 371) It is a point worth noting that management education is what is being called for. Baragwanath emphasizes how it was precisely this lack of training which, in his view, was the dominant cause of the disaster: ‘The system (described particularly by Captain R. T. Johnson), involving ... lack of real access to formal training, has resulted in a lack of management skills which has contributed to the shortcomings in the computer system already described. Is it not the dominant factor in the causation of [the] accident [sic]?’ (Erebus Papers, p.
371) Baragwanath distinguishes carefully between the failure to tell the flight crew of the change in coordinates and the lack of any management system which would have ensured that they were told: ‘Finally, and critically, was the failure to tell the crew of the change in the flight plan, caused by the lack of any system to ensure that they were told of the lethal change in the flight plan. This was again caused by the lack of a defined system to ensure they were told.’ (Erebus Papers, p. 383) Baragwanath lists the structural weaknesses of the management system of Air New Zealand as ‘Lack of adequate job specifications; Lack of up-to-date job hierarchy chart; Lack of any adequate “fail-safe” system for ensuring proper passing of information and safety checks; Lack of any adequate system for reviewing such arrangements; Lack of any adequate system to train executives; Lack of adequate procedures to carry on control while an executive pilot is away; Lack of any adequate system of supervision following delegation; Lack of any adequate filing system; Resort to oral instructions, which are not recorded in writing. These factors are interrelated; together they provide an atmosphere in which there must be continuing risk of breakdowns in communication and consequent danger to safety. ... The responsibility of managers and senior executives must be clearly defined in writing and chains of responsibility established’. (Erebus Papers, pp. 389-390) Is it not a sad state of affairs that basic lessons in management must be given by Justices such as Justice Mahon, Justice Sheen (in the Herald of Free Enterprise
disaster), Justice Fennell (in the King’s Cross Fire disaster) and Queen’s Counsel Baragwanath to corporations into whose hands human lives are entrusted for their safety.
81. Mahon Report, p. 21. In Mr. Davis’ cross-examination by Mr. Baragwanath, Mr. Davis (then CEO of Air New Zealand) stated: ‘I believe that the planning that went into the Antarctic operation was appropriate to the mission we had in mind. Some errors occurred, as they occurred previously elsewhere in the world, of which there are significant examples available. A series of events came together which resulted in a catastrophic magnitude. I can’t accept the Company planning activity in principle was inadequate’. Cf., The Erebus Papers, p. 390.
82. Mahon Report, pp. 20-21.
83. If, on the other hand, the responsibility for the disaster were to have been ascribed to pilot error, then a NZD 42,000 limit would apply to claims against the airline. Verdict, p. 102. The question, then, is hardly academic. French, however, employs a rather curious criterion to ascertain management culpability with particular reference to the Erebus example. For French, moral responsibility can be attributed to a company only if the company fails to make adjustments in its policies after its policies have been faulted as being responsible for a disaster. In the case of Erebus, if Air New Zealand has not changed its management policies, it can, according to French, be held accountable for the Erebus disaster. Cf., French, p. 110. The problem with French’s criterion (if this is a correct reading of French) is that if Air New Zealand had changed its management policies, thus admitting culpability, it would, according to French, not be culpable. It would seem more reasonable that the proof of moral culpability should not rest upon a company’s own internal assessment of its past actions. The seeming implication of French’s argument is that if a corporation acknowledges its responsibility, it was not responsible; if it does not acknowledge its own responsibility in its subsequent behavioral modifications, it is responsible. In any event, from the point of view taken here, a corporation’s tacit or explicit acknowledgement of its past behavior by future modifications is not an exoneration of its past behavior. Nor is its own non-acknowledgement, by refusing to modify its past behavior, a proof of its guilt. It is culpable because its proper actions at the time of the disaster could have prevented the disaster, and its subsequent actions have nothing to do with establishing or disestablishing its prior guilt.
84. Of course the issue runs deeper than this, since according to widespread corporate practice the Board of Directors is selected by management rather than vice-versa. Cf., Myles L. Mace, Directors: Myth and Reality, Harvard Business School Classics, Boston: Harvard Business School Press, 1971, 1986, pp. 66-67. In his 1986 preface to the Harvard Business School Classics Edition, Professor Mace emphasizes that, ‘As was the case ten years ago, C.E.O.’s still control board membership; determine what the board does and does not do ...’, p. vii. According to Gerald F. Cavanaugh and Arthur F. McGovern, while one principal responsibility of the board of directors is the selection and dismissal of the CEO, candidates for boards are most often suggested by the CEO and in elections for the board there is only one candidate for each position. In addition,
the successor of the CEO is almost invariably selected by the CEO (Cf., Mace, p. 190). While the other principal responsibility of the board of directors is the approval of major policies, one has to bear in mind that in U.S. firms the chairperson of the board, who generally determines the agenda of the meetings, is the CEO. In addition, one can imagine the difficulty a vice president would have in questioning a proposal made by the CEO, who will be determining his future employment prospects. Cf., Ethical Dilemmas in the Modern Corporation, Englewood Cliffs: Prentice-Hall, 1988, pp. 26-29. While an increase in the number of outside directors on boards might be of some assistance, one would still have to face the problem of their being initially selected by the CEO and the further problem of how to ensure their commitment to an active role as a corporate conscience if their affiliation is only part-time.
APPENDICES TO CHAPTER 11

APPENDIX ONE

Re Privy Council Judgement Re Erebus Royal Commission; Air New Zealand Ltd v Mahon NZLR [1983]

In addition to the self-contradiction contained in n. 62 to chapter eleven, it appears that the ruling of the Privy Council is based on a more central self-contradiction. This appears to be an appropriate place to examine this self-contradiction. Private correspondence with Mr. Stuart Macfarlane is the source of the recognition of this self-contradiction. In finding that the pilots were not to blame for the crash, the PC would appear to accept that they unwittingly flew into Mt. Erebus thinking that their aircraft was programmed in another direction when it was programmed to take them directly into Mt. Erebus. In addition, for them not to be to blame, they would have to have been briefed that their flight was over McMurdo Sound. Thus, if anyone had not been telling the truth, whoever had testified that the briefing had been of a flight path over Erebus would have to be that person or persons. Now, the briefing officers, Captains Wilson and R.T. Johnson, claim that they repeatedly told the pilots that the flight path crossed Erebus and that the pilots had disregarded their briefing instructions. (Erebus Papers, p. 210) Mahon believed the testimony of the surviving crew (Captains Simpson and Gabriel and First Officer Irvine, who attended the briefing but had not been on board Collins’ flight) that the briefing had been of a route over McMurdo Sound. (He also believed the confirming evidence of the pilots of previous flights who attended earlier briefings over the years.) Cf., Erebus Papers, p. 594. If the pilots are not to blame for the crash, then the briefing officers must not have been telling the truth when they testified that the flight path was over Erebus. If the briefing officers were telling the truth,
then the fault for the crash would plainly belong to the pilots, who would then have disregarded the briefing route given to them. But in the judgement of both the CA and the PC, the briefing officers are held to have been telling the truth. (Erebus Papers, p. 210) But this is impossible. If the briefing officers are held to have been telling the truth (that the computer flight path ran across Erebus), then the pilots must be responsible for the crash. Either the briefing officers are telling the truth and the pilots are responsible for the crash, or the briefing officers are not telling the truth and the pilots are not responsible for the crash. What the PC says is that the briefing officers are telling the truth and the pilots were not responsible for the crash. The Woodhouse judgment does not face the consequence that if the briefing officers told the truth then the pilots must be at fault in ignoring the repeated warnings given by the briefing officers. By summarizing the reasoning of the PC, the following self-contradiction may be generated:

If the briefing officers are telling the truth, the pilots must be responsible for the crash.
The briefing officers are telling the truth.
The pilots are not responsible for the crash.

That this contradiction actually forms the basis of the Privy Council’s judgement follows from the fact that the judgement of the Privy Council finds the pilots not to be to blame (and therefore the briefing officers must not have been telling the truth) and at the same time concurs with Justice Mahon in his causation ruling (although they attribute this to a series of errors rather than to management proper). If they concur with Mahon on causation, then the briefing officers must not have been telling the truth, as this is necessary for the pilots to be free of blame and it is also necessary for management (or a series of errors) to be responsible for causation.
For a crucial error was the briefing officers’ handing out a flight plan which showed a McMurdo Sound route, and verbally confirming that this was the route. That the briefing officers both told the truth and did not tell the truth is the basis of the Privy Council’s judgement that both the pilots are free from blame and that management (or a series of errors)
was responsible. Thus, the very basis of the Law Lords’ ruling is a blatant self-contradiction.1 The question then arises: how can five of the greatest legal minds of the Commonwealth (not to mention five Justices of the Court of Appeal in New Zealand) have made such a series of egregious blunders in their reasoning? The only explanation that makes sense of this is that the Law Lords were guided by the practical consideration of attempting to absolve the pilots of blame, to absolve Air New Zealand of blame, and to find fault with Justice Mahon’s verdict, which tended to place too much blame (in their opinion) on Air New Zealand, while at the same time upholding his finding on causation. Needless to say, one cannot reconcile all of these objectives (which are self-contradictory) without involving oneself in self-contradictory reasoning. It is only somewhat astounding that such self-contradictory reasoning - and there are other examples pointed out by Mr. Macfarlane in his masterly Erebus Papers and additional examples pointed out below - is now permanently recorded in print in the Law Reports.2
APPENDIX TWO

Because of the mass of obfuscations contained in the Chippindale report and the rulings of both the CA and the Privy Council, it is extremely difficult to make sense of the entire Erebus matter. In order to provide a thread to guide the reader through this labyrinth, it might be of assistance to provide a simplified outline of what actually occurred. While this outline may not be accurate in every detail, since it is a simplification of events, it proposes to offer a general sequence of events without including details that might obscure the main outline: In the first place, the routing of Air New Zealand was to have been over Mt. Erebus. There was a change in this routing to a route which did not pass over Erebus but which passed over McMurdo Sound. This change was most likely accidental and, when discovered later, was maintained - perhaps as a more judicious route since Erebus was bypassed. The crucial change was deliberate and involved the change back from McMurdo Sound to an Erebus flight path. It was this change, made after the briefing, that was not communicated to the pilots. In giving testimony, the briefing officers maintained that they had presented the route as being over Mt. Erebus. This was contested by the surviving pilots (those pilots present at the briefing who were not aboard Collins’ flight), who maintained that the route was presented as being over McMurdo Sound. The briefing officers did not wish to appear to have informed the pilots that the route was over McMurdo Sound because to have done so would have removed all blame from the pilots and at the same time placed it on the shoulders of either themselves or management or both. The Woodhouse ruling of the Court of Appeal and the Privy Council ruling both, by implication, accepted the version of events related by the briefing officers. In so doing they ignored completely the conflicting testimony (Cf., The Erebus Papers, pp.
589, 596 et passim) of the surviving pilots (those present at the briefing who did not take the fatal flight) and the confirming evidence of pilots from previous briefings. (Cf., Erebus Papers, p. 326). That the conflicting
evidence was ignored may be inferred from the oblique reference of the Privy Council that ‘ ... their Lordships have examined the evidence of primary facts [they must here be referring to the testimony of the surviving pilots, i.e., the pilots present at the briefing but who were not aboard Collins’ flight] relevant to this matter [the adoption of the McMurdo routing] that was given at the hearings. This they did, not for the purpose of assessing its reliability, but simply to see whether any positive evidence that supported such an inference [that there was a deliberate adoption of the McMurdo routing] existed; and none was to be found’. (683 NZLR) Why was the testimony of the surviving pilots (those who had not been on board Flight 901) ignored? By ignoring any conflicting evidence, the PC, by implication, accepted the version of events as presented in the testimony of the briefing officers. In this case the PC accepted that the briefing officers were telling the truth when they testified that they had briefed the pilots that their route was over Mt. Erebus. This then forms the self-contradiction that is at the basis of the judgement of the PC discussed above. The reason that Justice Mahon placed so much weight on the concept of a conspiracy was that it was by finding the evidence of the briefing pilots not credible that he was able to reach his verdict on causation. Cf., The Erebus Papers, pp. 593-596. The PC placed so much emphasis on Justice Mahon being out of line for his conspiracy finding because they did not wish to make it appear that deceiving evidence had been given by the Air New Zealand witnesses. If the conspiracy portion of Mahon’s ruling can be overturned, then it might appear more plausible that the evidence of the briefing officers was reliable.3 This is despite the fact that the Law Lords themselves state that they accept as correct that Justice Mahon found some evidence unreliable (682 NZLR).
They are careful not to say which evidence, as if to imply that it was the evidence of the surviving pilots (those who were not on board Flight 901); for if they were to agree that it was the evidence of the briefing officers that Justice Mahon was correct in finding unreliable, their entire case would have fallen apart.
It is very difficult to extract this outline of events from the PC report since it is, in Macfarlane’s fine phrase, ‘a masterpiece of obscurity’, but this outline is the only way to make sense of what occurred. As for some of the obfuscating techniques of the PC, one may consider the following: The intent of their ruling is to diminish the case of Justice Mahon against Air New Zealand and at the same time exonerate the pilots and attribute the blame to what amounts to Fate. Their primary means of diminishing Justice Mahon’s verdict was to accuse him of violating certain legal principles (such as gathering evidence after the hearing and violating natural justice) and of engaging in self-contradictory reasoning. (For the supposed self-contradictory reasoning engaged in by Justice Mahon, Cf., Erebus Papers, pp. 569-573). The PC further obfuscates the case of conspiracy by concentrating their attention on the unreliability of evidence given by witnesses on matters of permitted altitudes and the question of missing documents and the destruction of documents. While the unreliability of this evidence is indeed important, it does not compare in importance with considering the possible unreliability of the evidence given by either the briefing officers or the surviving pilots (those not on board Flight 901). The PC does not consider that either of these conflicting versions of what happened could have been unreliable. It only implicitly accepts the version of the briefing officers by obliquely referring to the fact that they found no other evidence. (Cf., above, paragraph five, this appendix.) By concentrating a great deal on what is not central and treating that as central, and by not speaking to crucial points, the crucial points are neglected and thus deemphasized. This obfuscating technique is the combination of a smokescreen and the non-addressing of main issues. In this case, the PC accomplishes its obfuscation as much by what it does not say as by what it does say.
An additional obfuscation occurs which has the effect of creating the impression that the evidence of the briefing officers is to be accepted when the PC finds that Justice Mahon did not have sufficient probative evidence to prove conspiracy. In finding Mahon’s evidence insufficient, they indirectly cast a favorable light on Captain Johnson, since he would be one of those involved in possibly giving false evidence regarding matters of permitted altitudes and communication
regarding coordinates. By so clearing Captain Johnson of deception here, the tacit impression is created that his evidence at the briefing would also be reliable. While this does not at all follow logically, nor is it said directly, the overturning of Justice Mahon’s conspiracy findings seems to confer a greater credibility on management witnesses. In the case of the briefing, that the briefing was falsely described to have been over Erebus could not have been a simple error. It had to have been a deliberate deception. By finding against a conspiracy ruling, there is a subtle implication that while there had been some errors, there was no deliberate deception. But in the case of the briefing officers, in order to obtain a consistency of testimony, there must have been a deliberate deception in their account of the events, and in all likelihood this must have been agreed between themselves to be how they would present the case. It is also of interest to note that since self-contradictory reasoning is one of the bases upon which the PC itself declares that a judgement can be set aside (681 NZLR), and since the PC is manifestly guilty of several crucial instances of self-contradictory reasoning - indeed the basis of their entire decision rests upon self-contradictory grounds - then upon their own stated criterion of a judgement being null and void, their judgement in the case of Re Erebus Royal Commission; Air New Zealand Ltd v Mahon is null and void.4
APPENDIX THREE

It is of interest, from a strictly legal point of view, that neither the CA nor the PC had the legal right to take any position whatsoever on Mahon’s causation findings. This is because the Air New Zealand motion for judicial review did not ask the court to take any action vis-à-vis Mahon’s causation findings. The CA do not express an opinion on Mahon’s causation findings not because they had no opinion on his findings, but because they had no right in law to express such an opinion. The PC also had no right in law to express such an opinion, but they go out of their way to praise Mahon’s causation findings. In this regard as well, the conclusions of the Law Lords with respect to Mahon’s causation findings would appear to be nullified. When Mahon appealed to the PC, the appeal related to whether the CA (on judicial review proceedings - not on appeal proceedings) had correctly stated the law which the CA claimed allowed the Court on judicial review to overturn the findings by Mahon that there had been an orchestrated litany of lies. Instead of keeping to stating the law, the PC made findings of fact which could be validly found, one would think from the earlier understanding of the law set out in the Cooke judgment, only on appeal proceedings proper, not on appeal proceedings which arise out of judicial review. The PC found as a fact, in favour of Air NZ, that there had been no orchestrated litany of lies (as opposed to simply finding that Mahon had no right to say there was an orchestrated litany of lies, as they were entitled to do). The Law Lords then found in favour of Mahon that Mahon was correct in clearing the pilots of blame. On the conventional view of the law as stated by the Cooke judgment, it would seem that the Law Lords had no authority to make either finding, because the proceedings were judicial review, not appeal.
NOTES

1. If the briefing officers are telling the truth, then they told the pilots that the computer flight path was programmed to take any aircraft locked onto its
navigation track across Erebus. When, after the briefing, the briefing officer, Captain R.T. Johnson, participated with officers of navigation section in shifting the flight path to a radio beacon behind Mt. Erebus (the TACAN), he believed, according to the Air NZ case, that the shift was merely from one location behind Erebus (Williams Field) to another (the TACAN), and did not involve a shift of navigation path from the West Dailey Island waypoint in McMurdo Sound. No evidence came out that anyone told Captain Wilson, the other briefing officer who had previously briefed the pilots.
2. There are additional hypotheses which can be contemplated for the existence of such outrageous sophisms. One interesting fictional account, which takes the conspiracy theory of Justice Mahon one step further, can be found in the novel by Michael Delahaye, Stalking Horse, New York: Charles Scribner’s Sons, 1988, which loosely models itself on the Erebus case. Personal correspondence with Captain Vette is the source of the referral to Delahaye’s novel.
3. To see the effect of later court rulings on Justice Mahon, Cf., Andrew Beck, ‘Trial of a High Court Judge for Defamation?’, The Law Quarterly Review, Vol. 103, July 1987, pp. 461-883. This article is of special interest for points of law. For example, when the PC says that Mahon’s causation findings ‘are not susceptible to challenge’, they are stating the Cooke judgment version of the law in that the proceedings in the CA which are being appealed from are judicial review proceedings, not an appeal. However, as Beck argues, the PC states the law in such a way as to challenge Mahon’s causation findings (by giving grounds, such as self-contradictory reasoning, under which a ruling may be overturned), which forms yet another self-contradiction. Or, one can also formulate the contradiction in a way different from Beck’s formulation, which is: if Mahon’s causation findings are challenged, then his entire ruling in effect has been overturned.
However, the PC upholds his ruling on causation. Thus, though pronouncing his ruling unchallengeable, they proceed to challenge his causation ruling but at the same time declare that it is to remain intact. Such further self-contradictory reasoning would provide additional support for the decision of the Privy Council to be set aside since it would have once more hoisted itself on its own petard. 4. The point made here is the same as in n. 3 except that it is noted as a separate issue from indulging in self-contradictory reasoning and hence rendering their own ruling nugatory that by saying anything at all about Mahon’s causation findings the Law Lords have overstepped their legal rights. Cff , Op. cit.
CHAPTER 12

MORAL RESPONSIBILITY AND TECHNO-ORGANIZATION

The title of this chapter was inspired by Diane Vaughan, who refers to the Challenger disaster as an 'organizational-technical system accident'.1 The problem with this description is that it consigns responsibility for the disaster to an abstract system, so that whose responsibility it is to prevent disasters cannot be clearly discerned. In addition, while this may not have been her intention, the effect of linking the words 'organization' and 'technical factors' in a hyphenated phrase creates the spectre of the Techno-Organization, the twenty-first century counterpart of the Greek Fates. Whatever responsibility either individual human beings or management might have had for a disaster now belongs to the abstract, dehumanized and deterministic Techno-Organization, an organization now gifted with all the mystique and power that is vested in the new god, Technology. What is not noted by the description of the Challenger disaster as an 'organizational-technical system failure' is that the choice of the technically flawed design in the first place and the decision to go on using it were the result of poor and unethical management practice. Vaughan concludes by saying that 'we safely can conclude that intra- and interorganizational relations are characterized by structurally engendered weaknesses that contribute to technical system accidents'.2 Where then, one may ask, does the responsibility lie? It would appear, virtually nowhere. This is truly a view from nowhere. On the other hand, are corporate disasters caused by 'complexity and tight coupling of the system' in accordance with the thesis of Charles Perrow and his adherents?
For example, Karl Weick groups the Challenger disaster under Perrow's 'tight coupling' rubric when, commenting favorably upon Perrow's thesis, he remarks, 'Perrow (1984) has, I think, correctly identified a new cause of human-made catastrophes, interactive complexity in the presence of tight coupling, producing a system accident ... Recent benchmark catastrophes such
as ... Challenger ... fit this recipe.'3 Not only does the Challenger disaster, as the extensive analysis offered earlier in this volume shows, stray far from being a result of 'interactive complexity and tight coupling', but there is a subtle danger in this form of classification. The emphasis on complexification and interaction among complex parts creates the impression that disasters are the results of mechanical reactions between the machine-like parts of a mechanistic organization. The problem with this kind of classification is that not only does it appear deterministic and thus beyond redress, but it effectively cloaks the human and responsible agents behind the organization: senior management and individual managers and workers. If, according to Perrow, complexity and tight coupling make for normal accidents, then all of the case studies which are examined in this volume are one and all abnormal disasters. However, the pattern of mistakes that has been revealed in the course of the study of these major disasters suggests that a close, in-depth and comprehensive investigation of all disasters in the manner that has been conducted in this volume would reveal that it is more likely an absence of ethics and dysfunctional management, and not tight coupling and complexity, which is at the root of the occurrence of man-made system disasters. In all of the cases studied in this volume, it is apparent both that there are faulty management practices and that managers are responsible for having installed those practices, for having allowed them to prevail, or in some cases for not having taken appropriate actions themselves. In the case of the disaster on Mt. Erebus it is evident that a major abuse was the absence of sound management practices such as the keeping of written logs and the requirement of reporting changes. Behind this, of course, one must consider the responsibility of senior management for not seeing to it that sound management practices were in place.
In other words, senior management has an ethical responsibility to perform its own duties responsibly. The notion that senior management has a professional and ethical responsibility to practice sound management and to see to it that sound management practices are followed is underscored by the events in every case studied. Building an ethical imperative into its corporate culture as the lead element would be a vital first step for the corporate policy of Air New Zealand. If the ethic of respect for persons had been a priority principle at Air New Zealand, then this could have led
to making sure that Captain Collins, the pilot of the Antarctica flight, was informed of the changing of the coordinates so that he knew that his flight was heading over Mt. Erebus. If the ethic of respect for persons had been a first priority principle at NASA, then Commander Dick Scobee would have been informed of the grave risks of flying with a defectively designed O-ring at cold temperatures. Indeed, if the ethic of respect for persons had been a first priority principle at NASA, such an unsafe, technologically risky design for the O-rings would never have been chosen in the first place. It is important to recognize that the case of the disaster on Mt. Erebus (as with all the other cases studied) was in no way due to a "tight coupling" of technology, the hypothesis put forth by Charles Perrow to account for disaster. The disaster was due to sloppy management practices and the lack of an overall ethical directive. Perrow, while acknowledging that this 'awful accident' [sic]4 was due to Air New Zealand's ineptitude, glosses over that ineptitude by referring once more in the same context to ... 'complexity and coupling of the system'.5 But the Erebus disaster was not due to complexity and coupling of the system: it was due to improper and unethical management practices. If anything, the Erebus disaster showed the absence of proper coupling. If the examples Perrow adduces to support his conclusion that disasters are the results of 'complexity and tight coupling' do not support his conclusion, then why should his conclusion be considered? To blame tight coupling and complexity for the occurrence of the disasters is analogous to blaming the O-ring for the Challenger disaster. Tight coupling and complexity are the "technical" components of large organizations. But on their own, they do not cause disasters. The cause of the disasters is the action or the lack of action of the voluntary agents within the organization.
In the case of the Challenger disaster, while it was taken for granted that there was a built-in risk in space exploration, the problem was precisely that it was taken for granted. The possibility of the occurrence of a disaster was not envisioned as linked with management practice, but was envisioned as linked with the gambling odds of a disaster occurring while operating in the casino of space exploration. Since the prospects of disaster were not seen as connected to anything in
particular that human beings could do or not do, the concept of the prevention of disasters was not a live concept, much less a high priority concept. Since there had been no past history of disasters (those that had occurred seemed to have been conveniently forgotten), there was no strong concept of taking strict safety precautions. A kind of gambler's fallacy was played out in the foreground against the background of the inevitability of disaster. While disaster was inevitable (the realm of space exploration was risky), its imminent possibility was removed by the favorably increasing gambling odds. In the case of the Challenger, the management at NASA were willing to gamble with human life. It seems that a missing factor in their mind set was the ethical concept that what would be lost was human life. Even if the probability of the loss of the crew were small, the problem was that the consequences were grave. The possibility of the loss of life was submerged under laws of mathematical probability. NASA's self-perceived invincibility would protect it, and if for some reason it failed, this was simply one of the chances that one had to take. In the case of the Challenger the notion of the gambler's fallacy was utilized to keep preventive measures at bay. The concept that disasters could, should, and had to be prevented was not operative. That near disasters had been possible, but had not occurred, seemed to strengthen the belief of management that each situation was disaster proof (while disasters were nevertheless held to be inevitable). In the case of the Challenger, the past history of O-ring erosion was not taken by management to be a sign that this was a danger spot that should be attended to before a disaster occurred. To the contrary, this was taken as a signal by management that one could get away with O-ring erosion without consequence!
The presence of O-ring erosion in previous NASA flights was taken by decision-making managers to be a sign that flights could take place without causing a disaster. The Challenger disaster was a classic case of a history of close calls not proving sufficient to puncture the belief balloon of each situation being considered immune to the general inevitability of disaster. The previous history of close calls without the end result of disasters seemed in each case to reinforce the idea that each situation was a "special case" and was somehow charmed and therefore impervious to disaster. In the case
of the Challenger disaster, the previous history of successful flirtations with fate was taken as a sign by management that there was no need to take preventive measures. Richard Feynman's 'Russian roulette' theory of management seemed to be the order of the day. Disaster brinksmanship rather than disaster prevention characterized management's thinking in the case of the Challenger disaster. One is responsible for whatever set of beliefs one holds. To simply describe a belief system without assigning responsibility for subscribing to that belief system in spite of its consequences is to create the impression that those who hold fallacious belief systems are victims of these belief systems and therefore are not responsible for the behavior that ensues from embracing these belief systems.6 Much has been made of the 'can do' attitude that prevailed at NASA. This 'can do' attitude may have been a contributing factor in the make-up of NASA's invincibility complex. But one must be careful in one's analysis to distinguish between one's being morally responsible for holding onto a fallacious belief system and appealing to an existing belief atmosphere to explain, account for or justify behavior. The distinguished ethicist, Patricia Werhane, to her credit, stresses moral responsibility in her article on the Challenger.7 A misplaced 'can do' attitude may be tantamount to moral irresponsibility. In the case of the Challenger and of the disaster on Mt. Erebus, safety was neglected. In both cases there was a lack of understanding of the relationship between human action and results. In the case of the Challenger the double illusion was that disasters were inevitable and therefore could not be prevented anyway, and at the same time there was essentially no need to take any steps to prevent them because NASA's enchanted aura of protection was such that no disaster could affect it.
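The probabilistic fallacy described above can be made concrete with a short calculation. The sketch below is my illustration, not drawn from the book: it uses Richard Feynman's well-known Rogers Commission figures (working engineers estimated roughly 1 failure in 100 flights, while management's official figure was about 1 in 100,000) purely as stand-in assumptions, and shows why a run of successful flights is weak evidence of safety rather than a charm against disaster.

```python
# Sketch (illustrative, not from the book): why a streak of successful
# flights does not make the next flight safer. The per-flight risk
# figures are Feynman's Rogers Commission estimates, used here only
# as stand-in assumptions.

def prob_at_least_one_failure(p_per_flight: float, n_flights: int) -> float:
    """Chance of at least one failure in n independent flights."""
    return 1.0 - (1.0 - p_per_flight) ** n_flights

engineers_estimate = 1 / 100       # working engineers' rough figure
management_estimate = 1 / 100_000  # management's official figure

# At p = 1/100, a streak of about two dozen successes still happens
# roughly 78% of the time, so observing it says little about safety,
# yet each success was read as proof that the system was "charmed".
print(prob_at_least_one_failure(engineers_estimate, 25))   # ~0.22
print(prob_at_least_one_failure(management_estimate, 25))  # ~0.00025
```

The point of the contrast is that the two estimates differ by a factor of a thousand, yet the observed run of successes was consistent with both: past luck could not discriminate between a safe system and a dangerous one.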
In all the cases studied, disasters were seen to be outside the realm of human preventability. In all the cases disasters were seen as part of Fate, apart from human prevention and control. In the case of the Challenger, the self-contradictory attitude that dominated NASA's thinking was that disaster was inevitable, but highly unlikely, because NASA was protected by the magic aura of its past successes. In all the cases studied, the disasters were effectively put outside the realm of human intervention and control.
In all of the cases studied, disasters were perceived as operating in a non human realm. They were out of sight, out of mind. In none of the cases was there a consciousness that disaster was an eventuality that strongly needed to be prevented. NASA’s management is joined by scholarly opinion such as that of William H. Starbuck and Frances J. Milliken who state in their joint conclusion, ‘Learning From Disasters’ to their article on the Challengerr that ‘Because some disasters do inevitably happen, we should strive to make disasters less costly and more beneficial.’8 Those now in management positions at NASA would find the conclusion of these scholars to be most supportive, comforting and congenial. All the cases studied have brought with them astonishing evidence of management malpractice and a lack of understanding of the sense of responsibility for decision making that should be part and parcel of the province of governance that is the duty of management. That management practices existed thatt were fundamentally unsound was evident; that the managers were responsible for the existence of these unsound practices was not realized in any of the cases studied. In every single case studied, there was no prioritization of a safety ethic. In all of the cases, there was and still is seemingly no understanding of the connection between an overall safety priority and the establishment of sound management procedures. In all of the cases, mismanagement and the lack of proper communication among managers and among managers and managed play key roles. But, behind management malpractice and communication snafus, given the obvious ultimate price that must be paid in human life, it seems that the dominant problem is a lack of an overall ethical consciousness that would place safety first. It is not simply a problem of communication. In a prescient comment, Professor De George stated in an article in the Journal of Business Ethics that, ‘Ethics ... 
might help prevent disasters'.9 In all of the disasters except for the disaster on Mt. Erebus there were urgent, extensive and long-standing warnings. In the case of the Vasa, there could have been warnings, since there was foreknowledge that the safety of the ship was compromised, but fear of authority prevented warnings from reaching the King. It is extremely likely that it was the same fear of authority that prevented even stronger warnings of problems with the Challenger launch from being
communicated to authorities. In the case of the Titanic, not only were warnings of imminent danger unheeded but distress signals were equally ignored. In the case of Erebus, there was a warning but a warning that was never communicated. The "warning" should have been of the crucial change in the coordinates. Thus, in the case of Erebus the failure to communicate - in this case the failure to send rather than to receive communications - was of equally disastrous proportions. The lack of attention to warnings or the failure to communicate them can only be taken as a signal that a respect for human life was not the ultimate concern of management. In the case of Erebus, while there had not been a history of warnings, communication was also crucially involved in that a warning that could have prevented the disaster was never issued. If the pilots had been warned that the coordinates had been changed, the disaster would never have occurred. Thus, in each and every case, a failure to pay attention to warnings, either a failure to heed warnings that were already sent or a failure to send a warning in the first place, was a crucial contributing cause to the disaster that eventually occurred. In the case of the Titanic, post-disaster signals of distress that could have saved many lives also went ignored. The failure to pay proper attention to communication, whether by not attending to communication already received or by not attending to the sending of communication, was in each and every case of disaster a crucial element in making it possible that that disaster happened. In the case of the Titanic, there were warnings of icebergs but they were not communicated to the Captain. In addition, a nearby ship could have communicated warnings to the Titanic, but it failed to do so. In the case of the Herald, the importance of a failsafe warning that the bow doors were open was ignored by senior management despite repeated requests for such a warning system.
In the case of the King's Cross Fire disaster, the failure to communicate warnings, whether to the London Fire Brigade or to those in senior positions, was endemic. In the case of the Challenger, red-flagged warnings of the possibility of just such a disaster as did occur because of the O-ring problems were systematically ignored. In this volume it has been suggested that behind the prevalent attitude of not communicating information and not heeding information received is a certain moral blindness. In Patricia Werhane's analysis, this could
be called a lack of moral imagination, i.e., a lack of consideration of what consequences could occur if warnings were not communicated or not listened to. It could also be perceived as a lack of responsibility, a lack of understanding of multiple causality and multiple responsibility, of how 'The Buck Stops Here and it Stops Everyplace Else as Well'. It can also be understood as a lack of understanding of how important it is for organizations to possess the will to communicate. It can also be understood as a lack of understanding of the importance of an overriding safety ethos, of the principle that ethical sensitivity is or should be the first and foremost priority in human thinking. If the spirit of this book is to be heeded, then the tendency to interpret "accidents" as the consequences of the increasing complexity of organizations and the incapacities of operators and managers to cope with this increasing complexity must be resisted.10 Such an analysis removes responsibility from the shoulders of those who should bear it and shuffles it off onto the intricacies of the system, in the Techno-Organization shuffle which is the contemporary version of the relegation of problems to the domain of the Greek Fates. While it is commonplace to consider the responsibilities that attend great power when one considers heads of state, it has not been so common to consider the responsibilities that attend the great power that is wielded by our corporate heads. If this book is to make some small headway towards the prevention of disasters, then the actions or lack of actions of corporate heads must be subjected to the same kind of scrutiny to which we subject political leaders.11 The tools that are available for scrupulous observation are the sharpened tools of language and ethical expectations.
The expectation should exist that heads of public companies and private companies alike should shoulder a strong sense of responsibility both for facing up to the possibility of disaster and for taking steps to ensure that the prospects of disaster are minimized. While this book cannot pretend to be a panacea for the prevention of disasters, it is a necessary first step to realize that disasters can be prevented. In order to do this, a metaphysical framework must be embraced in which disasters are not beyond the pale of human intervention.
Another lesson that can be learned from the study of these disasters is that one can seek ways in which intra-organizational adversarial relationships can be replaced by intra-organizational cooperative relationships. It would be advisable if managers and engineers could work together in a more cooperative spirit than seems to be practiced under the present system. For example, according to Starbuck and Milliken, managers tend to propose cuts in safety factors because in their role as managers they are expected to pursue cost reduction.12 Engineers, on the other hand, may emphasize safety over cost. This places engineers and managers over against each other. With this division of role responsibilities, managers, who favour cost reductions, make the ultimate decisions, with disastrous effects, as happened at Thiokol. If engineers and managers worked more as a team from the very beginning, with a joint dedication to a safety first priority, then the prospect of the choice of Thiokol's design in the first place would have been less likely and the prospect of its continuing use very unlikely. Also, when one considers the final stages of the decision making that led to the launch of the Challenger, what if the ringi system of decision making or its cultural equivalent had been in operation? Had it been in operation when Boisjoly registered his significant dissent in the actual decision-making process, the decision to launch could not have taken place. Of course, it could be argued that Boisjoly gave in. But he did so because it appeared that further dissent was useless. Boisjoly, under the ringi system of decision making, confident that even a single dissenting voice would be sufficient to stop a launch, would probably have persevered in his dissent under these conditions. And, in accordance with the ringi system of decision making, a single dissent would have been enough to stop the launch.13 Since 'The Buck Stops Everywhere', this includes senior management. Patricia H.
Werhane, in her article on the Challenger, focuses her examination on 'the engineers and managers directly responsible for the launch and NASA officials who signed off on the launch'.14 She includes Al McDonald, Roger Boisjoly, Robert Lund, Jerry Mason and Larry Mulloy. She stresses moral responsibility in the case of middle management and the engineers and, indirectly, in the case of senior management as well. Joseph R. Herkert, in his article on the Challenger, seems to stress the roles of middle
management and engineers to the neglect of the roles of senior management.15 In the spirit of this book, that the buck stops everywhere, of course the buck stops with both the Boisjolys and the Moores of the world.16 But it is also in the spirit of this book that the Moores and the Aldriches of the world have the added responsibility to communicate the sense of moral responsibility throughout the organization, both by acting on it themselves and by seeing to it that everyone else in the organization takes the sense of moral responsibility as seriously as they do. How can senior management be inspired to take on its role of moral responsibility more seriously? One possible suggestion would be to consider building an ethical imperative into the mission statement of the organization. Another suggestion would be to consider making it mandatory, or part of the corporate culture, that the executive take an executive oath upon assuming the role of a Director (as with the Hippocratic oath taken by the physician). Such a Director's Oath could consist of a set of statements which would reflect the Director's strong commitment to her or his employees, the top priority of safety, the high ethical importance of communication, the recognition of the tripartite nature of communication, and the protection of that communication. The Director's Oath might include statements that she or he as Director is morally responsible for seeing to it that both a will to communicate and a structure for communication are present in the organization. The Director's Oath might include statements which commit her or him to the issuing of clear directives, to assigning clear-cut job responsibilities to individuals and to ensuring that there are no missing links in a command chain. The Director's Oath might include statements which would commit her or him to a continuous search for existing problems.
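The contrast drawn a little earlier between the actual launch decision and a ringi-style decision, in which any single dissent blocks the action, can be sketched as a pair of decision rules. This is my illustration, not from the book; the vote values assigned to the named participants are a hypothetical reconstruction for the example only.

```python
# Illustrative sketch (not from the book): a ringi-style decision rule,
# where approval requires every participant's seal and any single
# dissent blocks the action, contrasted with a rule in which managers
# can override engineer dissent.

def ringi_decision(votes: dict[str, bool]) -> bool:
    """Proceed only if every participant affixes their seal (True)."""
    return all(votes.values())

def override_decision(votes: dict[str, bool], managers: set[str]) -> bool:
    """Proceed if the managers approve, regardless of engineer dissent."""
    return all(approve for name, approve in votes.items() if name in managers)

# Hypothetical vote assignments, for illustration only.
votes = {"Boisjoly": False, "Lund": True, "Mason": True, "Mulloy": True}

print(override_decision(votes, {"Lund", "Mason", "Mulloy"}))  # True: launch proceeds
print(ringi_decision(votes))  # False: one dissent stops the launch
```

The design point is that under the override rule a dissenting engineer's vote is simply filtered out of the decision, whereas under the ringi rule the dissent is structurally decisive, which is why, as argued above, a Boisjoly confident of his veto would have had reason to persevere.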
A further suggestion would be to give serious consideration to setting down moral responsibilities for the safety of employees, consumers and the general public as part of the legal duties of a Director of a company, just as fiduciary duties are considered part of the legal duties of company Directors. A final suggestion might be to consider the establishment of a tenured, independently funded Safety Board which would possess veto power over operational decisions which placed the lives of employees or the general public at risk. Perhaps all of the above suggestions could be
incorporated so that both an inner motivation and an external structure can exist, such that the moral responsibility for preventing disasters can become a normal function of the chief executive officer and a pervasive attitude of business enterprise in general. One may remember the words of Konosuke Matsushita: 'whenever something went wrong, I always chided myself – "it's your own fault, after all"'.17
NOTES

1. Diane Vaughan, 'Autonomy, Interdependence, and Social Control: NASA and the Space Shuttle Challenger', Administrative Science Quarterly, Vol. 35, No. 1, March 1990, p. 226.
2. Ibid., p. 256.
3. Karl E. Weick, 'Enacted Sensemaking in Crisis Situations', Journal of Management Studies, Vol. 25, No. 4, July 1988, p. 316. Diane Vaughan also states that, 'The Challenger disaster can justifiably be classed as a normal accident: an organizational-technical system failure that was the inevitable product of the two complex systems'. Cf. The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA, Chicago and London: The University of Chicago Press, 1996, p. 415. Perrow, ironically enough, distances himself from his followers by disclaiming the Challenger as a "normal accident" (double quotation marks added), Cf. Charles Perrow, 'The Limits of Safety: The Enhancement of a Theory of Accidents', Journal of Contingencies and Crisis Management, Vol. 2, No. 4, December 1994, p. 218. Despite his disclaimer, his name is frequently invoked to support the thesis that the Challenger disaster was the result of a tightly coupled system. In Ann Larabee, Decade of Disaster, Chicago and Urbana: The University of Illinois Press, 2000, the names of Perrow and Diane Vaughan are invoked together (Cf. pp. 29-39).
4. Charles Perrow, Normal Accidents: Living With High-Risk Technologies, New York: Basic Books, 1984, 1999, p. 132. Perrow argued the same thesis in Technology Studies, 1994, Volume 1. The idea of a "normal accident" is a double whammy (normalizing the irregular) and a contradiction in terms as well (since by definition an accident is what is abnormal).
5. Ibid., p. 134.
6. This appears to be the main line of argument advanced in Diane Vaughan, The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA, Chicago and London: The University of Chicago Press, 1996.
7. Patricia Werhane, 'Engineers and Management: The Challenge of the Challenger Incident', Journal of Business Ethics, Vol. 10, No. 8, August 1991, pp. 612-613.
8. W. H. Starbuck and F. J. Milliken, 'Challenger: Fine-Tuning The Odds Until Something Breaks', Journal of Management Studies, Vol. 25, No. 4, July 1988, p. 337. Vide, International Herald Tribune, January 29, 1996, pp. 1, 8.
9. Richard T. De George, 'The Status of Business Ethics: Past and Future', Journal of Business Ethics, Vol. 6, 1987, p. 209.
10. S. Prakash Sethi, 'Inhuman Errors and Industrial Crises', Columbia Journal of World Business, Vol. XXII, No. 1, Spring 1987, p. 103.
11. For an enlightening discussion of the moral responsibilities of public officials, cf. Dennis F. Thompson, 'The Moral Responsibility of Many Hands', in Political Ethics and Public Office, Cambridge: Harvard University Press, 1987, pp. 40-65. It is of interest to note that Thompson does not consider that the moral responsibility of any one individual is diminished by its being part of a wider or collective responsibility.
12. W. H. Starbuck and F. J. Milliken, 'Challenger: Fine-Tuning The Odds Until Something Breaks', Journal of Management Studies, Vol. 25, No. 4, July 1988, p. 333.
13. Needless to say, the ringi system of decision making cannot simply be transplanted to another culture. Appropriate safeguards for whistle blowers would have to be built in. Superb suggestions for the protection of whistle blowers are to be found in Joseph R. Herkert, 'Management's Hat Trick: Misuse of "Engineering Judgment" in the Challenger Incident [sic]', Journal of Business Ethics, Vol. 10, No. 8, August 1991, p. 619. These suggestions include the need for professional engineering societies to seek means of sanctioning employers who punish their engineering employees for acting in the public interest. Cf. also, S. H. Unger, Controlling Technology, New York: Holt, Rinehart and Winston, 1982.
14. Patricia Werhane, 'Engineers and Management: The Challenge of the Challenger Incident', Journal of Business Ethics, Vol. 10, No. 8, August 1991, pp. 605-616. Cf. p. 606.
15. Joseph R. Herkert, 'Management's Hat Trick: Misuse of "Engineering Judgment" in the Challenger Incident [sic]', Journal of Business Ethics, Vol. 10, No. 8, August 1991, pp. 617-620.
16. The sharing of responsibility on all levels is strongly alluded to by the question pattern found at the end of Joan C. Callahan's discussion of the Challenger in Joan C. Callahan, Ethical Issues in Professional Life, New York: Oxford University Press, 1988, p. 324.
BIBLIOGRAPHY

Allinson, Robert E. and Minkes, Leonard, 'Principles, Proverbs and Shibboleths of Administration', International Journal of Technology Management, Vol. 5, No. 2, 1990.
Allinson, Robert E., 'The Confucian Golden Rule: A Negative Formulation', Journal of Chinese Philosophy, 12, 1985.
Allinson, Robert E. and Liu, Shu-hsien, (eds.), Harmony and Strife: Contemporary Perspectives, East and West, Hong Kong: The Chinese University Press, 1988.
Allinson, Robert E., Review of The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA, Transactions/Society, November-December 1997.
Andrews, Kenneth R., 'Can the best corporations be made moral?', Harvard Business Review, May-June 1973.
Beck, Andrew, 'Trial of a High Court Judge for Defamation?', The Law Quarterly Review, Vol. 103, July 1987.
Bell, Trudy E. and Esch, Karl, 'The Fatal Flaw in Flight 51-L', IEEE Spectrum, February 1987.
Bell, Trudy E. and Esch, Karl, 'The Space Shuttle: A case of subjective engineering', IEEE Spectrum, Vol. 26, No. 6, June 1989.
Boisjoly, Roger Marshall, (December 13-15, 1987), ASME Winter Annual Meeting in Boston, Massachusetts. The Online Ethics Center for Engineering and Science. Retrieved from http://onlineethics.org/essays/shuttle/telecon.html.
Borgenstam, Curt and Sandström, Anders, Why Vasa Capsized, trans. by Curt Borgenstam and Klas Melmerson, Stockholm: Vasa Museum, 1995.
Bradley, F. H., Ethical Studies, Oxford: Oxford University Press, 1962.
Callahan, Joan C., Ethical Issues in Professional Life, New York: Oxford University Press, 1988.
Casamayou, Maureen Hogan, Bureaucracy in Crisis: Three Mile Island, the Shuttle Challenger, and Risk Assessment, Boulder, San Francisco, Oxford: Westview Press, 1993.
Cavanaugh, Gerald F. and McGovern, Arthur F., Ethical Dilemmas in the Modern Corporation, Englewood Cliffs: Prentice-Hall, 1988.
Central Criminal Court, CCC No. 900160, Old Bailey, London, Friday, 19th October, 1990.
Childress, James F., Priorities in Biomedical Ethics, Philadelphia: The Westminster Press, 1981.
Chippindale, R., Aircraft Accident Report No. 79-139, Wellington: Office of Air Accidents Investigation, Ministry of Transport, 1980.
Churchill, Winston, The Second World War, Vol. IV, Chapter III, London: Cassell, 1951.
Cook, Judith, An Accident Waiting To Happen, London: Unwin Hyman Limited, 1989.
Cook, Richard, 'The Rogers Commission Failed: Questions it never asked, answers it didn't listen to', The Washington Monthly, November 1986.
Corrigan, Grace George, A Journal for Christa: Christa McAuliffe, Teacher in Space, Lincoln and London: University of Nebraska Press, 1993.
De George, Richard T., 'Engineers and Management: The Challenge of the Challenger Incident', Journal of Business Ethics, Vol. 6, 1987.
Donaldson, Thomas and Dunfee, Thomas W., Ties That Bind, A Social Contracts Approach to Business Ethics, Boston: Harvard Business School Press, 1999.
Donaldson, Thomas and Werhane, Patricia H., (eds.), Ethical Issues in Business, Englewood Cliffs: Prentice-Hall, 1979.
Donaldson, Thomas and Werhane, Patricia H., (eds.), Ethical Issues in Business, Englewood Cliffs, N.J.: Prentice-Hall, 1983.
Donaldson, Thomas and Werhane, Patricia H., (eds.), Ethical Issues in Business, A Philosophical Approach, Englewood Cliffs: Prentice-Hall, 1996.
Donaldson, Thomas, Corporations and Morality, Englewood Cliffs: Prentice-Hall, 1982.
Dunfee, Thomas W., 'The Case For Professional Norms of Business Ethics', American Business Law Journal, Vol. 25, 1987.
Evans, R.W., The Golden Jubilee of Action Learning, Manchester Business School, 1988.
Fennell, Desmond, Investigation into the King's Cross Underground Fire, London: Her Majesty's Stationery Office, 1988.
Feynman, Richard P., What Do YOU Care What Other People Think?, London: Unwin Hyman Limited, 1989.
Fink, Sidney, Crisis Management, Planning for the Inevitable, New York: AMACOM, 1986.
Fleddermann, Charles B., Engineering Ethics, Upper Saddle River, NJ: Prentice Hall, 2001.
French, Peter A., 'Moral Dimension of Organizational Culture', Journal of Business Ethics, 3, 1984.
French, Peter A., The Spectrum of Responsibility, New York: St. Martin's Press, 1990.
French, Peter A., 'The Principle of Responsive Adjustment in Corporate Moral Responsibility: The Crash on Mount Erebus', Journal of Business Ethics, 3, 1984.
Friedman, Milton, Ethics for Modern Life, New York: St. Martin's Press, 1987.
Gleick, James, Genius, The Life and Science of Richard Feynman, New York: Random House, 1993.
Gracie, Colonel Archibald, Titanic, A Survivor's Story, Phoenix Mill, Thrupp, Stroud, Gloucestershire: Sutton Publishing, 1998.
Haakonssen, Knud, (ed.), Adam Smith, The Theory of Moral Sentiments, Cambridge: Cambridge University Press, 2002.
Harris, Jr., Charles E., Pritchard, Michael S. and Rabins, Michael J., Engineering Ethics, Concepts and Cases, Belmont: Wadsworth Publishing Company, 1996.
Hartley, Robert F., Management Mistakes, New York: John Wiley & Sons, 1983.
Herkert, Joseph R., 'Management's hat trick: misuse of Engineering Judgement in the Challenger incident', Journal of Business Ethics, Vol. 10, No. 8, August 1991.
Howells, Richard, The Myth of the Titanic, Houndmills, Basingstoke, Hampshire and London: Macmillan Press, 1999.
Humphreys, Kenneth K., What Every Engineer Should Know About Ethics, New York and Basel: Marcel Dekker, 2001.
Investigation of the Challenger Accident, Report of the Committee on Science and Technology, House of Representatives, Ninety-Ninth Congress, Second Session, Union Calendar No. 600, House Report 99-1016, Washington, D.C.: U.S. Government Printing Office, 1986.
Janis, Irving L., CRUCIAL DECISIONS, Leadership in Policymaking and Crisis Management, New York: The Free Press, 1989.
Jensen, Claus, No Downlink, A Dramatic Narrative about the Challenger Accident and Our Time, New York: Farrar, Straus and Giroux, 1996.
Johnson, Deborah G., Ethical Issues in Engineering, Englewood Cliffs, New Jersey: Prentice Hall, 1991.
Johnson, Deborah G., Ethical Issues in Engineering, Englewood Cliffs: Prentice Hall, 1993.
Kant, Immanuel, Kant's Critique of Practical Reason and Other Works on the Theory of Ethics, T. K. Abbott (trans.), London: Longmans, 6th Edition, 1963.
Keeley, Michael, 'Organizations as Non-Persons', Journal of Value Inquiry, 15, 1981.
Larabee, Ann, Decade of Disaster, Urbana and Chicago: The University of Illinois Press, 2000.
Littlejohn, Crisis Management, A Team Approach, New York: AMA Membership Publications Division, 1983.
Mace, Myles L., Directors: Myth and Reality, Harvard Business School Classics, Boston: Harvard Business School Press, 1971, 1986.
Macfarlane, Stuart, The Erebus Papers, Auckland: Avon Press, Ltd., 1991.
Mackley, William B., (Retd.), 'Aftermath of Mt. Erebus', Flight Safety Digest, September, 1982.
Mahon, Justice P.T., Report of the Royal Commission to inquire into The Crash on MOUNT EREBUS, ANTARCTICA OF A DC10 AIRCRAFT operated by AIR NEW ZEALAND LIMITED, Wellington: P.D. Hasselberg, Government Printer, 1981.
Mahon, Justice P.T., Verdict on EREBUS, Auckland: William Collins Publishers Ltd, 1984.
McConnell, Malcolm, CHALLENGER, A Major Malfunction, London: Simon & Schuster, 1987.
McMillan, Beverly and Lehrer, Stanley, Titanic: Fortune & Fate, Catalogue from the Mariners' Museum Exhibition, The Mariners' Museum, Newport News, Virginia: Simon & Schuster, 1998.
Meyers, Gerald C. and Holusha, John, When It Hits the Fan, Managing the Nine Crises of Business, New York: New American Library, 1986.
Mill, John Stuart, On Liberty, Gertrude Himmelfarb (ed.), Harmondsworth: Penguin Books, 1979.
Minkes, A.L., The Entrepreneurial Manager, Harmondsworth: Penguin Business Books, 1987.
Mitroff, Ian I., Shrivastava, Paul and Udwadia, Firdaus E., 'Effective Crisis Management', The Academy of Management EXECUTIVE, Vol. 1, No. 3.
Mitroff, Ian I., 'Crisis Management: Cutting Through the Confusion', Sloan Management Review, Vol. 29, No. 2, Winter 1988.
Mitroff, Ian I. and Kilmann, R. H., Corporate Tragedies: Product Tampering, Sabotage and Other Disasters, New York: Praeger Publishers, 1984.
Mitroff, Ian I., ‘Crisis Leadership’, Executive Excellence, Provo, August 2001. MV Herald of Free Enterprise, Report of Court No. 8074 Formal Investigation, London: Her Majesty’s Stationery Office, 1987. Nance, John J., Blind Trust, New York: William Morrow and Company, 1986. O’Hare, David and Roscoe, Stanley, Flight Deck Performance, The Human Factor, Ames: Iowa State University Press, 1990. Pastin, Mark, The Hard Problems of Management, Gaining the Ethics Edge, San Francisco: Jossey-Bass Publishers, 1986. Pauchant, Thierry C. and Mitroff, Ian I., Transforming the CrisisProne Organization, Preventing Individual, Organizational and Environmental Tragedies: San Francisco: Jossey-Bass Publishers, 1992. Payne, Dinah, et. al., ‘Corporate Codes of Conduct: A Collective Conscience and Continuum’, Journal of Business Ethics, Vol. 9, iss. 11, November 1990. Perrow, Charles, ‘The Limits of Safety: The Enhancement of a Theory of Accidents’, Journal of Contingencies and Crisis Management, Vol. 2, No. 4, December, 1994. Perrow, Charles, Normal Accidents: Living with High Risk Technologies, New York: Basic Books, 1999. Perrow, Charles, Technology Studies, Volume 1, 1994. Pinkus, Rosa Lynn B., Shuman, Larry J., Hummon, Norman P., Wolfe, Harvey, Engineering Ethics, Balancing Cost, Schedule, and
BIBLIOGRAPHY
341
Risk, Lessons Learned from the Space Shuttle, Cambridge: Cambridge University Press, 1997. Report of the Emergency Preparedness and Response Task Force, President’s Commission on the Accident at Three Mile Island, Washington, D.C.: GPO, 1979. Report of the Loss of the “Titanic” (S.S.), The Official Government Enquiry, 30 July 1912, New York: St. Martin’s Press, 1998. Report of the PRESIDENTIAL COMMISSION I on the Space Shuttle Challenger Accident, Vol. I, Washington, D.C.: GPO, 1986. Report to the President, REPORT AT A GLANCE, BY THE PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, Washington, D. C.: GPO, 1986. Robin, Donald P. and Reidenbach, R. Eric, Business Ethics, Where Profits Meet Value Systems, Englewood Cliffs: Prentice-Hall, 1989. Romzek and Dubnick, ‘Accountability in the Public Sector: Lessons from the Challengerr Tragedy’, Public Administration Review, May/June 1987. Samuelson, Paul A. and Nordhaus, William D., Economics, Twelfth Edition, New York: McGraw Hill, 1980. Schinzinger, Roland and Martin, Mike W., Introduction to Engineering Ethics, Boston, New York, Toronto: McGraw Hill, 2001. Schlegelmilch and Houston, ‘Corporate Codes of Ethics in Large U.K. Companies’, European Journal of Marketing, Vol. 23, No. 6, 1989. Schlegelmilch, Bodo B., ‘The Ethics Gap between Britain and the United States: A Comparison of the State of Business Ethics in both Countries’, European Management Journal, Vol. 7, No. l, 1989.
Schwartz, Howard S., Narcissistic Process and Corporate Decay, The Theory of the Organization Ideal, New York and London: New York University Press, 1990.
Sethi, S. Prakash, 'Inhuman Errors and Industrial Crises', Columbia Journal of World Business, Vol. XXII, Number 1, Spring 1987.
Shrivastava, Paul, Bhopal, Anatomy of a Crisis, Cambridge: Ballinger Publishing Company, 1987.
Simmons, Michael, 'Creating a New Leadership Initiative', Industrial and Commercial Training, Vol. 22, iss. 5, 1990.
Slatter, Stuart, Corporate Recovery, A Guide to Turnaround Management, Harmondsworth: Penguin Books, 1986.
Slay, Alton, Post-Challenger Evaluation of Space Shuttle Risk Assessment and Management, Washington: National Academy Press, 1988.
Smith, Adam, An Inquiry Into the Nature and Causes of the Wealth of Nations, Edwin Cannan, (ed.), New York: Modern Library, 1937.
Smith, Adam, The Theory of Moral Sentiments, Cambridge: Cambridge University Press, 2002.
Spignesi, Stephen J., The Complete Titanic, From the Ship's Earliest Blueprints to the Epic Film, St. Martin's Press, 1987.
Starbuck, W. H. and Milliken, F. J., 'Challenger: Fine-Tuning The Odds Until Something Breaks', Journal of Management Studies, Vol. 25, No. 4, July 1988.
Stewart, Stanley, Emergency Crisis on the Flight Deck, Shrewsbury: Airlife Publishing, Ltd., 1989.
Stone, Christopher, 'Corporations and the Philosophy of Law', The Journal of Value Inquiry, IX, Summer 1976.
Stone, Christopher, Where the Law Ends, The Social Control of Corporate Behavior, New York: Harper and Row, 1975.
Sykes, Richard (ed.), No Ordinary Genius, New York and London: W.W. Norton & Co, 1994.
The London Times, October 10, 1987.
The London Times, October 20, 1990.
Thompson, Dennis F., Political Ethics and Public Office, Cambridge: Harvard University Press, 1987.
Titanic, Vols. I-IV, New York: Greystone Communications, Inc., A & E Television Networks, Hearst, ABC, NBC, 1994.
Titanic: The Investigation Begins; Titanic: The Anatomy of a Disaster, Discovery Communications, Inc., Videotape, 1997.
Trento, Joseph J., Prescription For Disaster, New York: Crown Publishers, Inc., 1987.
Turner, Barry A. and Pidgeon, Nick F., Man-Made Disasters, Second Edition, Oxford: Butterworth Heinemann, 1997.
Turner, Barry A., 'The Organizational and Interorganizational Development of Disasters', Administrative Science Quarterly, Vol. 21, September 1976.
Unger, Stephen H., Controlling Technology, New York: Holt, Rinehart and Winston, 1982.
Unger, Stephen H., Controlling Technology, Ethics and the Responsible Engineer, 2nd Edition, New York: John Wiley & Sons, Inc., 1994.
Vaughan, Diane, 'Autonomy, Interdependence, and Social Control: NASA and the Space Shuttle Challenger', Administrative Science Quarterly, Vol. 35, Number 1, March 1990.
Vaughan, Diane, The Challenger Launch Decision, Risky Technology, Culture and Deviance at NASA, Chicago: The University of Chicago Press, 1996.
Vaughan, Diane, 'The Trickle-Down Effect: Policy Decisions, Risky Work, the Challenger Tragedy', California Management Review, Vol. 39, No. 2, Winter 1997.
Velasquez, Manuel G., S.J., 'Why Corporations Are Not Morally Responsible for Anything They Do', Business & Professional Ethics Journal, Vol. 2, No. 3, Spring 1983.
Vette, Gordon and Macdonald, John, IMPACT EREBUS TWO, Auckland: Aviation Consultants Ltd., 2000.
Vette, Gordon, Readers Digest, September 1982.
Waters, James A. and Bird, Frederick, 'Moral dimension of organizational culture', Journal of Business Ethics, 6, 1987.
Webster's New International Dictionary, Latest Unabridged, 2nd Edition.
Weick, Karl E., 'Enacted Sensemaking in Crisis Situations', Journal of Management Studies, Vol. 25, No. 4, July 1988.
Weick, Karl E., Review of The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, Administrative Science Quarterly, June 1997.
Werhane, Patricia H., 'Engineers and Management: The Challenge of the Challenger incident', Journal of Business Ethics, Vol. 10, No. 8, August 1991.
Werhane, Patricia H., Adam Smith and His Legacy for Modern Capitalism, New York: Oxford University Press, 1991.
Werhane, Patricia H., Moral Imagination & Management Decision-Making, New York, Oxford: Oxford University Press, 1999.
Whitbeck, Caroline, Ethics in Engineering Practice and Research, Cambridge: Cambridge University Press, 1998.
INDEX

51-B, 168, 193 Aerojet, 160 Aeronautics and Space Engineering Board, 154 AINS, 268, 297, 309, also see Internal Navigation System Air New Zealand, 12, 71, 85, 200, 256, 257, 258, 264, 266, 268, 269, 270, 279, 289, 291, 296, 298, 299, 300, 303, 304, 305, 308, 309, 310, 311, 313, 315, 316, 317, 318, 319, 320, 323, 324 Alcindor, 215, 216, 217, 222 Alexandra House, 198 Allinson, 19, 47 American Society for Mechanical Engineers (ASME), 191 Amies, 294 Anderson, 80 Antarctic, 256, 258, 268, 269, 273, 279, 281, 284, 285, 291, 301, 304, 311, 324 anticipatory crisis intervention, 62, 64, 65, 70 Antigone, 2 ANZAF, 298 Associate Administrator for Space Administration, 164 ATC, 293, 298, 304 ATC flight, 293 ‘Athenai’, 89 Avionics Division of the Bendix Corporation, 271 Ayling, 200 Ayre, 222 “Baltic”, 89, 91 Baragwanath, 257, 261, 262, 274, 288, 290, 305, 308, 309, 310, 311 Bay, 264, 265 Beck, 300, 321 Bell, 5, 146, 147, 196, 334 Bhopal, 19, 20, 60, 68, 144 Bhopal railway station, 254 Bhurvey, 254
Bird, 57 Board of Trade, 96, 103, also see British Board of Trade Inquiry Boisjoly, 73, 108, 122, 125, 130, 134, 138, 143, 146, 147, 148, 149, 150, 151, 152, 169, 170, 172, 174, 177, 178, 184, 191, 193, 195, 196, 197, 330, 331 Borgenstam, 72, 73, 74, 75, 76, 80, 82, 83 Bottom Up Responsibility, 142, 198, 204 Boy Scouts of America, 64 Bradley, 308 Bright, 246 British Board of Trade Inquiry, 87, also see Board of Trade British Rail, 223 British Transport Police, 242, 245 Brooks, 258, 272, 290, 295, 298 Buddhism, 35 Bulkhead Committee, 102 Bush, 145, 161 Butler-Sloss Report, 221 California Institute of Technology, 172 “Californian”, 90, 99, 101, 103 Callahan, 333 Cape Bird, 290 Cape Canaveral, 156 Cape Race, 89, 100 Cape Tennyson, 290 “Caronia”, 88, 91 “Carpathia”, 99 Carpathia, 99 carte blanche, 288 Casamayou, 152, 174, 178, 182, 197 Cassin, 258, 289, 296, 297 Categorical Imperative, 41 Cathay Pacific, 295 Cavanaugh, 311 Central Criminal Court, 220 Channon, 223 chi, 52, 145 Childress, 186
Chippindale Report, 258, 259, 262, 265, 266, 271, 272, 274, 277, 290, 295, 298 Churchill, 16, 20, 141, 203 ‘CID’ (Corporate Internal Decision Structure), 56 Collins, 258, 262, 263, 265, 267, 270, 272, 274, 280, 283, 289, 291, 297, 299, 300, 304, 307, 313, 316, 317, 324 Committee on Shuttle Criticality, 154, 183 Conceptual preparedness, 64, 65 Confucian, 35 Confucius, 35 consequentialism, 40, 42 consequentialist, 41 Cook, 145, 146, 151, 187, 300 Corrigan, 185, 197 Court of Appeal, 56, 290, 299, 302, 303, 315, 316 Crippen, 188 Crossland, 223 cultural imperatives, 167 CVR, 259, 260, 261, 263, 270, 290 Dailey Islands, 291, 304 Darby, 223 David C. Nagel, 305 Davis, 261, 262, 266, 311 Day, 273 DC 10, 272, 274 DC10 aircraft, 265 De George, 327, 333 de Ste Croix, 216, 217 Delahaye, 321 deontologist, 24, 41 deontology, 35, 40, 41, 44, 45 determinism, 1, 2, 166 deus ex machina, 164 Develin, 210, 214, 222 Director’s Oath, 331 Donaldson, 46, 47, 56, 57, 308, 336 Douglas, 159, 188, 269 Dunfee, 47, 57, 58, 336 Dworkin, 59 egoism, 40, 41
Einstein Award, 172 Ellison, 215, 216 epistemological, 2, 48, 62, 63, 64, 65, 84, 226, 228, 229, also see Epistemological Epistemological, 224, also see epistemological Esch, 5, 146, 147, 196, 334 “Estonia”, 97 ethical imperative, 9, 18, 57, 66, 79, 295, 323, 331 ethical priorities, 82, 97, 227 Eudaemonism, 41 eudaemonist, 41, 42, 43, 44, 45 Evans, 221 Exxon Corporation, 55 FAA, 166 Fahrenheit, 120, 122, 150, 170, 189 Farnborough, 261, 262 Fennell, 20, 157, 223, 224, 225, 226, 227, 228, 230, 231, 232, 233, 234, 235, 236, 238, 240, 241, 242, 243, 245, 246, 247, 248, 249, 250, 251, 252, 311 Feynman, 126, 143, 145, 171, 175, 180, 183, 191, 193, 229, 251, 326 Field, 196, 291, 292, 321 Final Report of the Estonia, 97, 105 Fink, 63, 69 Firewall power, 299 First Officer Cassin, 267, 270, 283, 300 flash ops, 282, 294 Fleddermann, 144, 191 Fleming, 73, 74 Fletcher, 157, 159, 160, 164 Flight Despatch, 271, 284, 292, 309 Flight TE 901, 264, 266 Foecke, 95 Ford, 23, 30 Franzén, 72 French, 55, 56, 94, 288, 289, 296, 307, 308, 309, 311 Freud, 41 Friedman, 32, 40, 46 Gemmell, 281, 285, 298, 304
George IV, 223 Gierdsson, 75 Gilruth, 159 Ginsberg, 274 Gleick, 171, 191 Glenn, 161 Go round power, 299 Goedecke, 58 Goodpaster, 57 Gordon Vette, 305 Gracie, 100, 106 Greek drama, 1 Greek Fates, 322, 329 Greek hubris, 84, also see hubris Greek mythology, 2 Greene, 305 Gribben, 171, 175, 193 Grundy, 281 Guild of Air Pilots, 295 Gustav Adolf, 72, 80, also see Gustav II Adolf Gustav II Adolf, 73, also see Gustav Adolf Gyllenhielm, 71 Hamilton, 215 Hansson, 72, 73, 81 Harbour Stations, 199 hard data, 180, 182, 194 Hardy, 117, 122, 127, 151, 178 Harris, 146, 148, 184, 192 Harrison, 257 Hartley, 69 Harvard, 57, 173, 333 Haveland, 184 Hazard Analysis Audit, 154, 183 Hedonism, 40 Hegel, 59 Herald of Free Enterprise, 12, 56, 64, 198, 201, 220, 243, 310 Herkert, 11, 184, 330, 333 Hewitt, 291, 309 hierarchy, 14, 37, 38, 142, 310 Himmelfarb, 186 Hirsch, 190 Hoecker, 192, 194 Holusha, 70
Honda, 37 House Committee, 145, 149, 166, 167, 169, 176, 188 House Committee on Science and Technology, 145, 149, 153, 159, 188, 190 Houston, 58 hubris, 84, also see Greek hubris Hughes, 257, 305 Hummon, 9, 163, 182, 190 Hybertsson, 72, 78 IBM, 37 Internal Navigation System (AINS), 268 Isbrandsson, 73, 76 Jacobsson, 72, 73, 74, 78, 81 Japanese Management System, 36, 130 Jeffries, 87 Jensen, 9, 152, 182, 184 Johnson, 265, 291, 310, 313, 318, 321 Johnson & Johnson, 58 Johnson Space Center, 121, 132, also see JSC Johnston Memorial Trophy, 295 JSC, 121, 132, 147, also see Johnson Space Center Judaeo-Christian, 35 Kealey, 292 Keeley, 56, 57 Keesing, 269, 285 Keynesian camp, 24 Kilminster, 122, 127, 130, 135, 138, 146, 185 King, 217 King’s Cross, 20, 39, 157, 200, 223, 225, 228, 231, 232, 233, 239, 240, 241, 242, 244, 248, 249, 250, 311, 328 King’s Cross Underground, 20, 200, 223 Kippenberger, 263 Kirby, 204, 205, 206, 207, 216, 222 Kutyna, 184 Lady Macbeth, 2 Larabee, 12, 188, 332
Law Lords, 302, 303, 315, 317, 320, 321 laws of Fate, 85 Lawton, 291 Lenoir, 164 Lewis, 6, 187, 188, 189 Lewry, 200, 201, 202, 203, 204, 205, 206, 207, 208, 222 Likert, 173 Liquid fuel rockets, 189 Littlejohn, 69 London Fire Brigade, 233, 236, 237, 240, 241, 242, 243, 245, 328 London Regional Transport, 229, 235, 246, 250 London Underground, 229, 230, 231, 232, 233, 234, 235, 236, 238, 240, 241, 242, 243, 245, 246, 248, 250, 251, 252 Lordships, 301, 303, 317 Lund, 122, 126, 127, 138, 146, 148, 330 Mac Centre, 292, 304, 306 MacAuliffe, 170, 185 Macbeth, 2 Macdonald, 289 Mace, 311 Macfarlane, 265, 266, 268, 270, 271, 277, 278, 298, 299, 300, 301, 305, 307, 310, 313, 315, 318, 338 Macidull, 166 Mackley, 289, 309 Mafia, 21, 44 Mahon, 257, 265, 289 Mahon Report, 257, 263, 284, 289, 290, 291, 296, 297, 298, 299, 301, 303, 304, 305, 307, 309, 310, 311 Maier Videotape, 187 Malcolm McConnell, 145 Malthus, 27 Marconi room, 89, 90 Marine Department, 214, 218 Marshall, 117, 121, 127, 131, 132, 133, 138, 145, 149, 151, 152, 164, 174, 177, 178, 180, 183, 195, 257, 305
Martin, 6, 46, 55, 105, 106, 182, 185, 188, 190, 192 Mason, 122, 126, 128, 146, 148, 152, 175, 179, 330 Matsushita, 37, 332 Matthias, 94 McAuliffe, 108, 185, 197 McConnell, 145, 151, 156, 160, 161, 182, 183, 187, 188, 190, 197 McDonald, 121, 122, 138, 152, 174, 330 McGovern, 311 McMurdo, 262, 264, 265, 266, 268, 269, 271, 274, 281, 290, 292, 293, 295, 297, 301, 303, 304, 305, 306, 313, 314, 316, 317, 321 McMurdo Air Traffic Control, 307 McMurdo Sound, 264, 265, 266, 269, 271, 281, 291, 297, 301, 304, 306, 313, 314, 316, 321 Mersey, 88, 89, 90, 94, 96 “Mesaba”, 90 metallurgical, 95 metallurgist, 95 metaphysical, 1, 10, 84, 86, 104, 329 Metaphysical, 84 metaphysics, 10, 87 Metropolitan Line, 242 Meyers, 70 Mill, 186 Milliken, 168, 327, 330, 333 Minister of Transport of New Zealand, 258 Minkes, 55 MIT, 24 Mitroff, 19 Mixed Signal, 169 Moloney, 258, 261 Monash University, 273 monocausality, 13, 14 moral imperative, 79, 80, 167 Morita, 37 Mount Erebus, 9, 71, 82, 239, 257, 258, 269, 289, 291, 295, 296, 305, 307
Mt. Erebus, 12, 17, 39, 256, 264, 265, 266, 268, 269, 271, 280, 282, 283, 289, 291, 301, 303, 304, 305, 313, 316, 317, 321, 323, 324, 326, 327 MTI, 148, 174, 191, also see Morton Thiokol Mulgrew, 258, 263, 272, 290 Mulloy, 117, 121, 122, 127, 132, 133, 138, 139, 147, 148, 150, 151, 152, 175, 196, 330 multi-causal, 87, 96, also see multi-causality, multiple causality multi-causality, 14, also see multi-causal, multiple causality multiple causality, 14, 16, 55, 329, also see multi-causality, multi-causal multiple responsibility, 14, 20, 236, 329, also see multi-responsibility multi-responsibility, 62, 106, also see multiple responsibility Nagel, 305 Nance, 19 NASA, 4, 6, 11, 19, 67, 85, 114, 117, 118, 120, 121, 131, 132, 133, 140, 141, 144, 145, 147, 148, 149, 150, 151, 152, 153, 157, 158, 159, 161, 163, 164, 166, 167, 168, 171, 173, 175, 180, 182, 183, 185, 189, 190, 191, 193, 195, 196, 197, 324, 325, 326, 327, 330, 332, 344, also see National Aeronautics and Space Administration National Aeronautics and Space Administration, 189, 190, also see NASA National Research Council, 171 National Transportation Safety Board, 290 New Zealand Parliament, 289 Niels Bohr Gold Medal, 172 Northern Line, 241, 242, 244 Northey, 300 Nozzle Joint, 193
NSTS, 158 "Nyckeln", 81, 82 O’Hare, 299 Oedipus, 2 Old Bailey, 206 Orbiter, 189 Organizational Behavior and Psychology, 173 organizational responsibilities, 249 O-ring, 4, 5, 13, 73, 113, 114, 117, 120, 121, 122, 126, 130, 132, 138, 139, 140, 144, 147, 150, 151, 154, 155, 156, 159, 162, 163, 164, 168, 170, 171, 173, 174, 177, 178, 180, 183, 184, 189, 190, 192, 193, 195, 196, 212, 281, 324, 325, 328 Oval Office, 48, 162 Ozar, 56, 308 P & O, 204, 206, 209, also see Peninsular and Oriental Steam Navigation Company Pastin, 20 Peninsular and Oriental Steam Navigation Company, 220, also see P&O People v. Rochester Railway and Light Co., 56 Perrow, 2, 4, 8, 9, 10, 20, 69, 111, 202, 253, 279, 280, 282, 303, 304, 322, 324, 332 phenomenological, 272, 273, 274, 275, 299 Philebus, 44 philosophical presupposition, 251 Philosophical Underpinnings, 224, 251 Piccadilly Line, 232, 241, 244 Pidgeon, 5, 19, 170, 182, 191 pilot error, 3, 12, 19, 257, 258, 259, 260, 271, 274, 276, 277, 282, 287, 289, 300, 309, 311 Pinkus, 9, 163, 182, 190 Pinto, 194 Plato, 41, 44 polar whiteout, 280, 305, also see sector whiteout, whiteout, whiteout phenomenon
Popper, 297 President Reagan, 161 Presidential Commission, 8, 13, 64, 108, 128, 131, 137, 141, 143, 148, 149, 150, 158, 166, 167, 171, 174, 177, 183, 184, 189, 191, 195, 226, 249, 250, also see Rogers Commission Presidential Report, 8, 9, 13, 112, 114, 115, 117, 120, 126, 132, 197, 211, 224, 225, 226, 233, 248 PRIDE, 214 Primacy of Safety, 224 prioritization, 58, 66, 67, 69, 77, 79, 82, 102, 231, 238, 244 Pritchard, 146, 148, 184, 192 Privy Council, 71, 290, 300, 303, 313, 314, 316, 321 Project Mercury, 159 Queen Boadicea, 223 Queen’s Counsel, 223, 308, 310 Rabins, 146, 148, 184, 192 Rand, 40, 191 Rawls, 59 Ray, 174, 178 relativist, 43 Republic, 42 Reynolds, 215, 216 Ride, 184 ringi, 29, 36, 41, 130, 139, 176, 330, 333 Robert Lund, 330 Roberts, 223, 309 Robison, 192, 194 Rocket scientist, 172 Rogers Commission, 145, 149, 157, 174, 187, 191, 193, 195, 196, 197, also see Presidential Commission Roscoe, 299 Ross Ice Shelf, 269 Ross sea, 304 Royal Castle, 73 Royal Commission, 257, 289, 296, 299, 300, 303, 313, 319 Royal New Zealand Air Force, 298, 304
Royal Society for the Prevention of Accidents, 230 Rule of the Accidental, 3 Russell, 177 Russian roulette, 126, 251, 326 Sabel, 199, 201, 202, 203, 206, 208 Sagan, 174, 182, 193 Salita, 177 "Sancta Sophia", 72 Sandström, 72, 76, 81, 82, 83 Schinzinger, 6, 182, 185, 188, 190, 192 Schlegelmilch, 58 Schwartz, 6, 19, 168, 182, 190 Scobee, 120, 156, 324 Sector Whiteout, 280, 290, 293, 298, also see polar whiteout, whiteout, whiteout phenomenon Sethi, 333 Shackleton, 92 Shakespearean tragedy, 1 Shannon, 274 Sheen, 198, 208, 210, 214, 215, 218, 220, 222, 233, 310 Sheen Report, 198, 208, 210, 214, 218 Ship’s Standing Orders, 206 Shrivastava, 19, 60, 61, 62, 68, 69, 144, 254 Shuman, 9, 163, 182, 190 Shuttle Criticality Review, 154 Simmons, 58 Simpson, 291, 305, 313 Slay, 154, 161, 183, 187, 188 Smith, 31, 32, 33, 40, 46, 337 Solid Rocket Booster Project, 138, 180 Solid Rocket Boosters, 164 Solid Rocket Motor, 108, 112, 138, 151, 168, 178, 190 SRM, 184 Solow, 24 sound management, 2, 37, 38, 39, 81, 212, 219, 248, 279, 282, 288, 308, 323, 327 Soviet Union, 161
Space Race, 161 Space Systems and Command Control, 184 SRM-15, 194, also see STS 51-C SRM-22, 194, also see STS 61-A Standing Orders, 214 Stanley, 199, 201, 206 Star Trek, 189 Starbuck, 168, 327, 330, 333 State v. Lehigh Valley Railway Co., 56 Steel, 199, 207 Stewart, 295 Structural Secrecy, 166 STS, 194 STS 51-C, 194, also see SRM-15 STS 61-A, 194, also see SRM-22 Stuart Macfarlane, 293, 305, 307 Supreme Court of the United States, 68 TACAN, 291, 321 TE 901, 269, 280, 281, 288 Techno-Organization, 322, 329 ‘The Emperor’s New Clothes’, 80 the Johnson Space Center, 185 Theoretical Physics, 172 Thiokol, 117, 121, 122, 124, 126, 127, 128, 130, 138, 139, 140, 141, 147, 148, 149, 151, 152, 160, 161, 168, 169, 171, 174, 175, 177, 178, 179, 180, 184, 190, 192, 193, 194, 195, 330, also see MTI Thompson, 174, 177 Thornton, 185 Thornton Hall, 185 Three Mile Island, 9, 11, 13, 63, 70, 143, 152, 197 Titanic, 84, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 97, 99, 101, 103, 104, 143, 328 TMI, 63 Top Down Responsibility, 139, 198, 208, 287 Top Management, 48, 57, 67 Townsend Car Ferries Ltd., 204 Townsend Ferries, 213, 218
Tre Kronor Palace, 71 Trento, 159, 161, 182, 190 Truman, 48, 49 Turner, 5, 19, 69, 170, 182, 191 Tylenol, 58 Udwadia, 61, 69 Unger, 93, 95, 105, 151, 152, 333 utilitarianism, 35, 40, 41, 43, 44, 45 Vasa, 3, 71, 72, 73, 74, 76, 77, 78, 80, 81, 82, 327 Vaughan, 4, 5, 11, 19, 156, 163, 164, 165, 166, 167, 169, 170, 171, 173, 174, 176, 177, 178, 179, 182, 183, 186, 188, 190, 191, 193, 195, 196, 197, 225, 282, 322, 332 Velasquez, 55, 56 Verdict on Erebus, 257, 258, 268, 269, 276, 281 Vette, 258, 265, 268, 270, 272, 273, 277, 289, 290, 291, 295, 297, 298, 299, 305, 321 Victoria Land, 297 Visual Flight Rules, 298, 304 voice recorder system (CVR), 260 von Braun, 160, 189 W. H. Starbuck, 333 Warburton, 230, 234, 235, 252 Warnings, 17, 88, 91 Waters, 57 Weak Signals, 176, 179 Wedderburn, 207 wei ji, 61 Weick, 169, 173, 176, 177, 178, 179, 190, 193, 195, 196, 197, 322, 332 Werhane, 46, 56, 57, 108, 143, 147, 150, 182, 184, 185, 196, 308, 326, 328, 330, 332, 333 Westminster, 186, 198 Whitbeck, 10, 184 White, 292 White House, 119, 162, 170, 185 White Star line, 87 whiteout, 264, 265, 266, 273, 277, 298, 304, also see polar whiteout, sector whiteout, whiteout phenomenon
Whiteout, 281, 290, 298, also see polar whiteout, sector whiteout, whiteout phenomenon whiteout phenomenon, 264, 271, 272, 273, 280, 306, also see whiteout, polar whiteout, sector whiteout Wiener, 305
Wolfe, 9, 163, 182, 190 Woodhouse, 290, 299, 314, 316 World War Two, 16, 69 Zeebrugge, 198, 199, 211, 213, 214, 215, 218, 284 Zeus, 85 Zimmerman, 55