Risk Dilemmas: Forced Choices and Survival

Mark Jablonowski
© Mark Jablonowski 2007

All rights reserved. No reproduction, copy or transmission of this publication may be made without written permission. No paragraph of this publication may be reproduced, copied or transmitted save with written permission or in accordance with the provisions of the Copyright, Designs and Patents Act 1988, or under the terms of any licence permitting limited copying issued by the Copyright Licensing Agency, 90 Tottenham Court Road, London W1T 4LP. Any person who does any unauthorized act in relation to this publication may be liable to criminal prosecution and civil claims for damages.

The author has asserted his right to be identified as the author of this work in accordance with the Copyright, Designs and Patents Act 1988.

First published 2007 by PALGRAVE MACMILLAN, Houndmills, Basingstoke, Hampshire RG21 6XS and 175 Fifth Avenue, New York, N.Y. 10010. Companies and representatives throughout the world.

PALGRAVE MACMILLAN is the global academic imprint of the Palgrave Macmillan division of St. Martin's Press, LLC and of Palgrave Macmillan Ltd. Macmillan is a registered trademark in the United States, United Kingdom and other countries. Palgrave is a registered trademark in the European Union and other countries.

ISBN-13: 978-0-230-53871-9 hardback
ISBN-10: 0-230-53871-1 hardback

This book is printed on paper suitable for recycling and made from fully managed and sustained forest sources. Logging, pulping and manufacturing processes are expected to conform to the environmental regulations of the country of origin.

A catalogue record for this book is available from the British Library.
A catalog record for this book is available from the Library of Congress.

10 9 8 7 6 5 4 3 2 1
16 15 14 13 12 11 10 09 08 07

Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham and Eastbourne
Contents

Introduction  vii

1 A Review of High-Stakes Decision Criteria  1
  1.1 Formalizing risky decisions  1
  1.2 The expected value criterion  3
  1.3 Decision criteria when probability is unknown or irrelevant  5
  1.4 Conditions for indifference between fatalism and precaution  7
  Appendix: A fuzzy representation of danger  10

2 Finding Alternatives to Risk  15
  2.1 The preactionary approach  16
  2.2 Identifying alternatives using backcasting  18
  2.3 Backcasting under uncertainty  22
  2.4 Backcasting versus backtracking  24
  2.5 Maintaining the balance of life  26
  2.6 Contrasting the "post-fact" approach  28
  2.7 Cost/benefit and post-fact risk management  29
  2.8 Avoiding mechanistic precaution  32
  2.9 Risk acceptance – risk avoidance – risk anticipation  34

3 Risk Avoidance: All or Nothing  36
  3.1 How risk grows  36
  3.2 Why prioritization fails  39
  3.3 Pragmatic arguments for not adding risks  40
  3.4 Satisfying the burden of proof  41
  3.5 A possibilistic model of catastrophic potentials  42
  3.6 Is there a "natural" level of risk?  45
  3.7 On the notion of "selective fatalism"  47
  3.8 Selective fatalism and dilemmas  50
  3.9 The "tolerability" compromise  52

4 Precaution in Context  56
  4.1 The hallmarks of precaution  56
  4.2 Context and risk acceptance criteria  58
  4.3 The problem of valuation  60
  4.4 Inter-contextual effects of precaution  61
  4.5 Alternatives assessment across contexts  65
  4.6 The need for coordinated goals  66

5 A Reassessment of Risk Assessment  68
  5.1 Using risk assessments the right way  69
  5.2 Identifying high-stakes risks and their mechanisms  70
  5.3 Decision theoretic models  75
  5.4 Integrating fuzzy risk thresholds  78

6 Can We Avoid Risk Dilemmas?  81
  6.1 The only two options  82
  6.2 Facing the paradox of progress  83
  6.3 Risk dilemmas and self-interest  85
  6.4 The prospect of "infinite disutility"  89
  6.5 The need for a wider approach to science  91
  6.6 Radical rethinking  93
  6.7 Science to the rescue?  96
  6.8 The dangers of giving up  100

7 Summary and Conclusion (of Sorts)  102
  7.1 Understanding high-stakes decision processes  103
  7.2 Making precaution work  104
  7.3 How do current regimes compare?  106
  7.4 Doing the right thing  109
  7.5 Who will lead the way?  113

Notes  118
References  127
Index  132
Introduction

This book is about risk. As it turns out, the basic concepts of risk are not difficult to grasp. What we might call the "technical" aspects are fairly straightforward. Risk, in all its forms, consists of the combination of likelihood and adverse consequences resulting from the randomness of events. Randomness is a form of uncertainty that follows from our inability to specify the initial conditions of some action, and hence its subsequent outcomes. We can only assess the variability inherent in randomness in terms of the long-run relative frequency we call probability. Probability is, fundamentally, a physical property of the world, not some mathematical construct. Consider a large bowl, or urn, filled with colored balls, some black, some white. If we mix the bowl well and draw without looking (assuring "randomness"), we draw either a black or a white ball. Over a large number of draws, the proportion of black draws will approach some limiting frequency. That is the property we refer to here as probability.

The word "risk," however, is used in several ways. It may, for example, simply be related to the occurrence of some untoward event: You risk indigestion by eating spicy food. Sometimes we use the word risk as a synonym for danger: Skydiving is risky. While all usages include some element of chance (probability) and adverse consequences, how we respond to risk varies with its probability/consequence characteristics. Understanding the differences is crucial to the management of risk.

Our focus here is high-stakes, catastrophic risks that threaten the existence of the entity under study. That entity might be an individual, a business, or even an entire society. As we will see, high-stakes risks have properties that make them unique. The most significant of these is that high-stakes risks are irreversible: We don't get a "second chance." The focus shifts from the statistical (probability) side to consequences. As a result, cost/benefit analysis based on probability-weighted results does not translate well from the statistical domain to the high-stakes world. This suggests a unique set of analytical techniques is needed to deal with high-stakes risk.
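To make the limiting-frequency idea concrete, here is a minimal simulation sketch of the urn example above (Python; the urn's true proportion of black balls, 30 percent, is an assumption chosen purely for illustration). The running relative frequency of black draws settles toward the urn's fixed proportion – the physical property we are calling probability.

```python
import random

def urn_frequencies(p_black: float = 0.3, n_draws: int = 100_000) -> None:
    """Simulate well-mixed draws from an urn and print the running
    relative frequency of black draws, which approaches p_black."""
    black = 0
    for i in range(1, n_draws + 1):
        if random.random() < p_black:  # one "random" draw from the urn
            black += 1
        if i in (100, 1_000, 10_000, 100_000):
            print(f"after {i:>6} draws: relative frequency = {black / i:.4f}")

urn_frequencies()
```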
When we get beyond the statistical realm and start dealing with existential issues related to high-stakes (catastrophic) risk, reference to some sort of philosophy of existence – at the individual, business, and societal level – is indispensable. Probability, possibility, and consequences are all technical qualities of risk. So are the decision criteria we apply to risk. Enumerating and understanding these technical qualities is essential to effective risk management. However, we can't develop the proper attitude toward risk without understanding how all aspects of risk fit into the wider context of existence, human and otherwise. Our analysis remains at all times empirical (i.e., driven by observation), but what we cannot observe is just as important as what we can. We have to adjust our empirical approach accordingly. True knowledge involves trying to understand, or at least cope with, what we don't know as well as what we do.

In assessing the properties of risk, especially high-stakes risk, we ultimately focus on their management. This requires that we go beyond technical matters and into the realm of purpose: Why study risk? And that gets us perilously close to the deep water of philosophy. A deeper understanding of risk, in the context of a "risk philosophy" or a metaphysics of existence (human and otherwise), is needed if the study of high-stakes risk is to be more than the satisfying of a technical curiosity.

We explore here a type of risk where the stakes change the game. The purely mathematical qualities of risk give way to the physical, both in terms of measurement and action. As we will see, real-world assessment and treatment of risk proceeds under a great deal of uncertainty. This uncertainty is different from randomness, as we might experience by sampling colored balls from a well-mixed urn or flipping a coin. It is due to knowledge imperfection, and it imparts a fuzziness to the analysis that can itself be treated using a well-defined theory.

The basic premise of this study is that in the world of complex, high-stakes risk management, risks may create dilemmas that involve our very existence. They arise from the fact that, once entrenched, high-stakes risks can become very expensive to eliminate, creating significant "counter-risks." Fortunately, these risk dilemmas may be resolved with proper foresight. This process of risk anticipation involves a thorough assessment of safe alternatives early on in the process of planning for progress. Identifying and implementing
alternatives to risk presents challenges of its own. We will review some of these challenges, including how to set appropriate thresholds for risk acceptance, and the effect of context on these thresholds. We will also suggest how some highly developed risk assessment methods, currently geared to the statistical cost/benefit domain, can be used to address high-stakes risks as well.

The enormity of the challenge of dealing with high-stakes risk and the associated dilemmas can lead us to wonder whether there really is any way out. We routinely face and deal with precautionary challenges on an individual and organizational level (e.g., in the environment of modern business). Once again, context is important. We could apply many of the ideas here to dealing with high-stakes risk on the individual or organizational level. The deeper challenge lies in figuring out how individual choices interact with choices on a wider level. In the wider social domain, the choice becomes one between doing nothing (fatalism) and taking some very serious actions "up front" (the implementation of alternatives). The choice is ultimately a matter of how seriously we take any current threats to our existence.

The level of these threats is not assessed in any great detail here. Instead, the analysis provides a general guide for assessing and dealing with high-stakes risks, the existence of which may lead to dilemmas of action. That said, it is obvious that certain dilemmas of risk, or at least potential dilemmas, currently exist on a variety of levels. In terms of the relevance of these discussions to real-world decisions today, we would say it is "high." Acceptance of that rating, and any actions it entails, is ultimately up to the reader, however. The ultimate solutions to risk dilemmas may not revolve around our coming up with sophisticated techniques for dealing with risk but may instead depend on our conviction in applying simple solutions, even if they require some degree of personal sacrifice.
Also by Mark Jablonowski

PRECAUTIONARY RISK MANAGEMENT: Dealing with Catastrophic Loss Potentials in Business, the Community and Society
1 A Review of High-Stakes Decision Criteria
Risky choices entail, by definition, the chance of bad outcomes.1 It is seldom recognized how crucial the magnitude of these outcomes is to the decision. Ultimately, the stakes may be very high: Catastrophic, ruinous, terminal. The entity under risk simply ceases to exist or continues to exist in some greatly diminished form. Unfortunately, dealing with high-stakes risks is not as straightforward as dealing with their more mundane counterparts. What follows is an introduction to some of these high-stakes issues and a description of ways that we might resolve them. The solutions are not simple and may entail serious issues of their own. This discussion presents these high-stakes decision criteria and sets the stage for a deeper discussion of the challenges to their implementation.

Choice under conditions of risk, especially high-stakes risk, is really a very intuitive process. We introduce a very basic formal framework to focus discussion and thinking about the critical points. At no point should the formal framework overwhelm our deeper understanding. That said, some of our intuitions may have themselves become clouded by both external and internal influences. High-stakes decisions can have "winners" and "losers." Undue influence, based on self-interest, needs to be avoided. The aim here is that open discussion of the problems and potential techniques for their solution will add transparency to these critical issues.
1.1 Formalizing risky decisions

To focus our discussion, we will utilize a formal framework that attempts to represent the fundamental characteristics of these decisions. Most basically, we can represent the process using a simple decision matrix, as shown in Figure 1.1.

Figure 1.1  A simple decision matrix.

                  No loss event    Loss event occurs
    Do nothing          0                 X
    Take action         Y                 Y

The matrix consists of two columns, which represent possible outcomes or states-of-the-world. Here, we limit ourselves to two possibilities: A loss event occurs or it does not. We also show two rows, representing two choices or decisions: We can either do nothing or choose to take action (to prevent the loss event). The individual cells of the matrix show all possible combinations of actions and resulting states or outcomes. If we choose to be passive in the face of potential loss, the outcome is based solely on the potential states of the world. If the loss event fails to occur, we of course lose nothing ("0"). However, if we do nothing and, by chance, the loss event occurs, we lose X (or $X), the cost (or negative value) of the impact. Our other alternative is to take action. We assume taking action entails some cost Y, often measured monetarily, and that the action is 100 percent effective. By taking action, we either avoid the exposure altogether (prevention) or avoid its impacts (protection). The cost of prevention is fixed: It does not matter what the state of the world is; the maximum and the minimum we lose if we choose prevention are both the cost of prevention itself, Y (or $Y).

We can use this simple matrix to represent how choices and outcomes are related by the causal structure of the world. Our specific concern here is with high-stakes, or catastrophic, impacts. This suggests ruin or irreversibility of results. The cost of loss, X, however we choose to measure it, is therefore some very large number. While irreversibility, ruin, failure, extinctions, and other
terms related to catastrophe admit shades of definition, we will use them synonymously here. The chief distinguishing feature of catastrophic losses is their terminality: We don't get a "second chance." The notion of terminality, however, is bound to the nature of the entity at risk. That entity may be an individual human being, a business enterprise, the community, society at large, or even our planet's ecosphere. As a result, the notion of the relevant perspective on the nature of high-stakes risk becomes important. Yet, to each individual entity, as a going concern, if you will, the prospect of catastrophe presents similar issues. So, while homicide may represent a catastrophic threat to an individual, it may be merely a statistic to the wider community. This does not change the fact that the risk is uniquely devastating from the perspective of the individual, and that individual will (in most cases) make decisions from that perspective.

Suffice it to assume for now that a separate theory of high-stakes risk need not be tailored to each individual level of the world which risk affects. We will assume each entity responds in generally the same fashion to the catastrophic risks it faces, with differences that can be addressed as specific circumstances (without harm to our general arguments about catastrophe and our responses to it). We return later to the effects of context on risk management in a wider perspective.
1.2 The expected value criterion
When risk manifests itself over a relatively short-term time horizon (say, 10–25 years or so) or when observations can be carried out under controlled conditions (i.e., sampling), we can use statistical averages to make decisions based on economic cost/benefit optimization. When making loss prevention decisions, we determine the average or expected value of loss by multiplying its probability by the potential impact. When outcomes can be measured in dollars and cents, we can determine an expected monetary value, EV, as

$EV_x = p_x · $X

Here, the symbol "·" denotes multiplication. The expected value of loss ($EV_x) is compared to the cost of prevention, $Y. If the expected cost of loss exceeds the prevention cost, we implement prevention and realize a net gain on average. When the cost of prevention exceeds the expected loss cost, we forego prevention. Over a relatively short time horizon, or number of experiments, we can gain relative confidence in our results through observation of the outcomes. We choose monetary units here to focus the discussion. We could use physical units, such as units of physical property lost, or even lives. In real-world decisions, monetization of results often causes its own problems. For the purpose of exposition, we will presently ignore these issues.

A problem arises when we try to apply statistical reasoning to catastrophic, irreversible events that are, concomitantly, infrequent. For one thing, their infrequency makes it difficult to assess the probability of these events with any degree of precision. At best, we can only assign intervals of uncertainty, or possibly fuzzy boundaries graded by credibility. A more serious difficulty is based on the fact that observation of our results cannot be achieved in any reasonable time period. Say that we somehow can magically determine the exact probability of a catastrophic hurricane that destroys New York City, with no less precision than we can determine the outcome of a coin toss. The annual probability is .001, or one chance in a thousand. The cost of the destruction is estimated at $300 billion (a made-up, but not unreasonable, number). The expected value of loss is $300 million, suggesting we would pay up to this amount to prevent the loss. But let's say it costs $350 million to prevent – what do we do then? Presumably, reject the protection, based on the unfavorable averaging of costs and benefits. Yet, what if the loss occurs – what comfort, then, have we achieved in saving $50 million (or in any "saving" from forgoing a prevention cost between $300 million and $300 billion)? Presumably, by using this rationale consistently, over many decisions, humankind will "come out ahead" in the end. Yet, how many $300 billion "statistical variations" can the human race afford?

A more mundane example suggests that expected value decision-making is complicated at many levels of application, including that of the individual business entity. Say a business depends on a $1,000,000 plant for its profit. The probability that the plant would be destroyed by fire in any year is given to us, with certainty (something rare even in the world of business risk). The annual probability of a fire causing loss of the $1,000,000 plant, and therefore ruining the company, is .01, or one in one hundred. Now say that company management
has the option of installing a sprinkler system (assumed 100 percent effective), at an annualized cost of $15,000. The expected value of loss is .01 · $1,000,000, or $10,000. As the cost of protection exceeds the expected value of loss, we would not install protection. Yet, once again, the $5,000 saving is cold comfort should a ruinous fire occur.2
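To make the arithmetic concrete, here is a minimal sketch of the expected value criterion applied to the two examples above (Python; the decision rule and numbers come from the text, while the function itself is just an illustrative wrapper).

```python
def expected_value_decision(p_loss: float, loss: float, prevention_cost: float) -> str:
    """Expected value criterion: take action only if p · X exceeds Y."""
    expected_loss = p_loss * loss  # probability-weighted cost of loss
    return "take action" if expected_loss > prevention_cost else "do nothing"

# Hurricane example: p = .001, X = $300 billion, Y = $350 million.
print(expected_value_decision(0.001, 300e9, 350e6))      # -> do nothing

# Factory fire example: p = .01, X = $1,000,000, Y = $15,000 (sprinklers).
print(expected_value_decision(0.01, 1_000_000, 15_000))  # -> do nothing
```

In both cases the criterion rejects protection, which is precisely the "cold comfort" problem the text describes.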
1.3 Decision criteria when probability is unknown or irrelevant3

Due to the finality of catastrophe, we do not get a second chance to make the decision right. Statistical decision-making in the face of high-stakes risk faces the catastrophe problem: In the long run, there may be no long run. As a result, high-stakes decision-makers turn to rational decision criteria that operate under the assumption that the probability of loss is unknown or irrelevant. Among these is the precautionary criterion. Precaution is based on minimizing the maximum possible loss, or minimax.4 We assume for now that the potential loss X is catastrophic and, in this example, much greater than (>>) the cost to prevent, Y. The cost of doing nothing remains zero (0). Under minimax, we first identify the maximum loss under each choice, regardless of potential states of the world. We then choose the action with the smallest of these maxima.

Conversely, we would not use precautionary criteria when we can adequately define an event statistically. If we identify the probability of an event as .1, or one chance in ten, with confidence, and can observe the outcomes of this probability over some number of trials, we can experimentally verify the result of our decision. Only in very special cases will the precautionary approach offer the optimal experimental result when it comes to risk prevention. In most cases, it will appear "conservative." While inappropriate in the statistical domain, we can fruitfully extrapolate this conservatism to events that are (a) uncertain due to natural or artificial limits on the amount of experimental trials we can observe (probabilities are unknown or very imperfectly known) or (b) subject to termination of the experiment at a random time, with irreversible results (probabilities are irrelevant), or both.

Based on condition (a), most naturally occurring events of a sufficiently risky magnitude (e.g., severe earthquakes or windstorms) do
not occur with enough frequency in any particular region for us to be able to get an accurate statistical record on which to base decisions. Hence the observation that we are unable to use statistics effectively when the number of trials, or years of exposure in this case, is less than 10 or 25 or so, and most certainly if the available trials don't exceed 50 to 100. This means the results of "experiments" involving probabilities of .1 (one in ten) to .01 (one in one hundred) start to become unobservable in a natural time span. Extrapolating cautions about the conservatism of precaution from the statistical domain to the high-stakes domain is not germane, because the conditions of a relevant statistical experiment simply do not exist.

In application, we need to identify a threshold for the practical "possibility" of catastrophic events. Using a strictly "zero" threshold for occurrence probability has the effect of making everything risky, as no event can be excluded with absolute certainty, and hence the minimax rule becomes, "avoid everything." Physics tells us that there is a probability, albeit tiny, that air molecules could act so as to reinflate a flat tire. No person would sensibly wait for this to happen in lieu of calling for a tow truck. Instead, we usually identify some threshold probability, albeit a very imperfectly specified one, to define possibility. Our uncertainty-modified minimax rule becomes, "avoid the possibility of danger." In the Appendix to this chapter, we show how uncertainty about both thresholds and the probability of occurrence can be formalized to give us a workable definition of "danger."

While an uncertainty-modified version of precaution suggests that not everything is risky, a lot of things still are. How do we deal with them on a consistent basis? The problem with applying minimax precaution is that we must theoretically be willing to spend up to the amount of the loss to prevent the loss. Quite simply, precaution may become expensive, either in terms of forgone benefits (opportunity costs) or in terms of direct costs of prevention. Precautionary action may in this way introduce serious "counter-risks."5 Once these counter-risks get serious enough, we face risk dilemmas: We are doomed if we do, doomed if we don't. The potential for these dilemmas often becomes the main impediment to effective precautionary risk management.6
Now, of course, not all precautionary actions are expensive. The business fire protection example is a case in point. Sprinkler installations in modern facilities often cost less per square foot than carpeting. The point is, however, that dilemmas can and do develop. When they do, they become problematic. This problem shapes the essence of our approach to seeking safety: How can we achieve safety without incurring potential risk dilemmas?

The other option for decisions when probability is unknown/irrelevant is the "opposite" of precautionary minimax. Known as the minimin, it suggests we minimize the minimum loss (costs) when outcomes are potentially severe but unknown. Most often, this low-cost solution implies that we do nothing. While idleness in the presence of potential danger sounds bad, we may come to this position simply because no other options "work." For example, we have shown that expected value decision-making runs into insurmountable logical difficulties when applied to the catastrophe problem. On the other hand, precaution, consistently applied, may lead to risk dilemmas. If it doesn't matter what we do, why waste resources? The minimin or "do nothing" criterion is what we might rightfully call a fatalistic one with respect to high-stakes risk.7 It implies a certain powerlessness, or at least an inevitability, with respect to high-stakes outcomes.
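A minimal sketch of the two criteria side by side may help (Python; the matrix layout follows Figure 1.1, with entries written as costs borne, and the X and Y values are illustrative assumptions with X >> Y).

```python
X = 1_000_000  # catastrophic loss (X >> Y)
Y = 15_000     # cost of preventive action

# Decision matrix from Figure 1.1: choice -> [cost if no loss event,
#                                             cost if loss event occurs]
matrix = {
    "do nothing":  [0, X],
    "take action": [Y, Y],
}

def minimax(m: dict) -> str:
    """Precaution: choose the action whose worst-case cost is smallest."""
    return min(m, key=lambda action: max(m[action]))

def minimin(m: dict) -> str:
    """Fatalism: choose the action whose best-case cost is smallest."""
    return min(m, key=lambda action: min(m[action]))

print(minimax(matrix))  # -> take action (worst case Y beats worst case X)
print(minimin(matrix))  # -> do nothing  (best case 0 beats best case Y)
```

The gap between the two recommendations is exactly the cost Y, a point taken up in the next section.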
1.4 Conditions for indifference between fatalism and precaution In terms of outcome and in the case where X is unequivocally “catastrophic,” how do the fatalist and precautionist compare? At first glance, the answer may identify X as the critical variable: Precaution avoids catastrophe, X. Yet, when Y grows large, approaching X, we face the dilemma of precaution. This would suggest rather that the emphasis be placed on Y, prevention cost. But the dilemma is really only a special case, isn’t it? The problem is that as long as the dilemma exists for one catastrophic risk, it exists for all. We can’t prioritize catastrophes. They are all “bad.” Preventing four out of five catastrophic threats really doesn’t get us anywhere, as statistical arguments don’t apply. The catastrophe problem is a global, not local, phenomenon. We will address the practical issues associated with this “all or nothing” approach later on. For now, we focus on
8 Risk Dilemmas
Figure 1.2  Decision matrix showing the difference between minimax and minimin criteria (X >> Y).

                  No loss event   Loss event occurs   Max   Min
    Do nothing          0                X             X     0
    Take action         Y                Y             Y     Y

    Minimum of column:                                 Y     0
    Difference (minimax – minimin):                    Y
prevention cost Y in assessing the difference between the fatalist and the precautionist. In Figure 1.2, we show the minimin (fatalistic) and minimax (precautionary) criteria applied to our simple decision matrix. The matrix is extended to show the row comparison operation, as well as the minimization of the values found by that operation (the columns of the max/min row comparison). Once again, the minimin in this case is "0", and the minimax is Y, assuming Y is less (usually, substantially less) than X. The difference between the minimax and minimin is given as Y – 0, or simply, Y. The difference between "doing nothing" and taking preventive action, therefore, is the cost of that action, Y. This means the fatalist is indifferent between doing nothing and precaution only when the cost of precaution is zero. We can also show this difference arithmetically, as Y increases from "0" to X (holding X constant), in Figure 1.3.

It seems an almost trivial result, yet it has profound implications for action. If the world (i.e., nature) behaves in such a fashion that we don't need to take precautions, fatalism "makes sense." O.K., but aren't we once again approaching triviality? Sure, we don't need to take precaution if no risk exists – but how do we get to that state in the first place? As we will argue in depth further on, if existence (nature) places us in that state, we might reason that
Figure 1.3  The difference between minimax and minimin, shown arithmetically. (The difference rises from 0 to X as prevention cost Y increases from 0 to X.)
doing what comes naturally is "cost free." A reasoned fatalism, therefore, amounts to doing what comes naturally, free from the worry of risk.

Given these conditions, we are able to define conditions for indifference between the fatalist and the precautionist. This means that, in terms of losses and costs, we can show when a precautionist behaves like a fatalist and vice versa. The fatalist, as the discussion above suggests, is indifferent to precaution when prevention and/or avoidance are essentially cost free. When is doing nothing "taking precaution"? When it becomes a natural part of our existence. On the other hand, it can be argued that the precautionist is indifferent only when it is too late. That is, when the cost of prevention Y equals the catastrophe X. In the extreme, when the cost of prevention equals or approaches the cost of loss, we encounter the essential dilemma of high-stakes risk. We show the indifference matrices of each respective approach in Figure 1.4(a) and (b). As we will see, precautionists may become fatalists simply out of frustration with the inability to do something, as when they face serious risk dilemmas. The fatalist's indifference matrix suggests that there can be a reconciliation between risk acceptance and precaution, under some very natural conditions of life, the result of which is the elimination of significant risk.
Figure 1.4  Indifference conditions between fatalism and precaution.

    (a) Fatalist's indifference matrix
                  No loss event   Loss event occurs
    Do nothing          0                X
    Take action        (0)              (0)

    (b) Precautionist's indifference matrix
                  No loss event   Loss event occurs
    Do nothing          0                X
    Take action        (X)              (X)
Appendix: A fuzzy representation of danger

Identifying danger is a fundamental component of high-stakes risk decision-making. The process is complicated by the fact that we must often do so in a complex and dynamic environment. As a result, considerable uncertainties due to knowledge imperfection enter the process. Knowledge imperfection is a form of uncertainty that is distinct from randomness.8 Rather than probability, it is measured in terms of possibility. We seek possible precise, or crisp, representations that are compatible with our knowledge. The more imperfect the knowledge, the more possibilities. In the case of perfect knowledge, only one representation is obtained: One plus one is two. On the other hand, under complete ignorance, anything is possible. The level of uncertainty in between is most simply defined by an interval: The temperature tomorrow morning should be between 60 degrees and 70 degrees. We do not establish such intervals by tabulating data
but rather instrumentally, based on how well they let us deal with an uncertain world. By making measurements that are too precise, we face the potential that they may very well be wrong: The temperature tomorrow morning will be 68.52 degrees. On the other hand, too wide an interval conveys no usable information: The temperature tomorrow morning will be between 0 and 120 degrees. The tradeoff is, therefore, between specificity (and hence information) and truth.

Applied to the assessment of danger, we see that our definitions are indeed imperfect, and necessarily so. To formalize this uncertainty, we will use the theory of fuzzy sets. Fuzzy sets are a generalization of intervals that include an assessment of degree of membership in a flexible set.9 Figure A1(a) is a representation of the fuzzy set "danger," in terms of exceeding the annual probability of an event. We will assume the consequences of the event, should it occur, are unequivocally catastrophic. (The fuzzy interpretation could also be extended to the consequence dimension.) Fuzzy membership is represented on the closed interval 0 to 1, with 0 representing no membership in the set of "dangerous" probabilities and 1 representing full membership. Numbers in between represent our unsureness about probabilities that lie between "fully possible" and "not possible" membership. These numbers may be taken to represent the fact that such probabilities have some of the properties of both "danger" and "not danger."10 We also show in the figure a possible precise, or crisp, representation of the threshold. Notice that any such number will be arbitrary in its precision. As a result, decisions will be very sensitive to exactly where we place this threshold. Applying precision in a naturally imprecise domain also leads to a variety of paradoxes of presentation. For example, if we choose a precise threshold of .00001, or one in one hundred thousand, for "danger," can we realistically justify a probability of .000009 (.000001 less) as not dangerous?

Our assessment of the probability of rare events will also be imperfect. As a result, we can use a fuzzy representation for probability as well, as shown in Figure A1(b). While a single probability may represent our "best guess," the uncertainty involved will certainly dictate a wider range of possibilities, again determined instrumentally. Also shown in the figure is a precise probability estimate for comparison.
Figure A1  Components of a fuzzy definition of danger: (a) precise and fuzzy definition of the "possibility" of catastrophe; (b) precise and fuzzy measurements of probability of loss; (c) the fuzzy detection of danger. (Each panel plots fuzzy membership, on the interval 0 to 1, against annual probability; pc marks a crisp danger threshold and po a crisp probability estimate.)
We can then combine the two to assess the danger associated with any particular exposure, as shown in Figure A1(c). Some exposures will be clearly dangerous, while others clearly not. As shown, our probability estimate has a fair degree of overlap with our fuzzy danger threshold. Under conditions of overlap, the possibility of danger may be measured by the membership value at the peak of the intersection of the two sets; the degree of confidence in an assessment of "no danger" is then 1 minus this value. A low degree of intersection indicates a low possibility of danger, while a high degree of intersection indicates a high possibility that the exposure is dangerous. Notice that the uncertainty involved in this fuzzy estimate suggests at least the limited possibility (with a degree of approximately .25 – the peak of the intersection of the probability measure and our risk criterion) that the exposure is dangerous. Using the crisp representations for this analysis would, on the other hand, have suggested that no danger exists, with complete certainty. Considering uncertainty, therefore, provides a more accurate (though less precise) representation of danger in realistic situations.

The concept of precaution based on the minimax and a wider articulation of uncertainty is included in statements of the "precautionary principle" for dealing with high-stakes risks. Versions of this principle have been applied to law, regulation, and community guidance in several countries, as well as in several global forums about risk.11 The language contained in the Bergen Ministerial Declaration on Sustainable Development, issued in 1990 with the cooperation of the Economic Commission for Europe, is typical. It states:

    In order to achieve sustainable development, policies must be based on the precautionary principle. Environmental measures must anticipate, prevent and attack the causes of environmental degradation. Where there are threats of serious or irreversible damage, lack of scientific certainty shall not be used as a reason for postponing measures to prevent environmental degradation.12

In statements of the precautionary principle, uncertainty or "lack of scientific certainty" is used to qualify simple application of the minimax. In practice, some risks will be well known and hence produce rather narrow membership functions. We can easily classify those as "dangerous" or "not dangerous" (e.g., prolonged asbestos
exposure and drinking water, respectively). On the other hand, uncertainty about “possibility” that bridges the risk threshold to at least some degree may require precautionary action as well (from a social standpoint, nuclear power and genetically modified foods are two commonly cited examples). While some criticize linguistic articulations of the principle as “vague,” we can see that this linguistic character represents essential components of minimax precaution, including avoidance and the effects of uncertainty. Using fuzzy sets, we can represent the principle formally without destroying its applicability. This makes for less controversy about what the principle really means and provides a suitable framework for further research.
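As a closing illustration of the appendix's construction, here is a minimal numerical sketch (Python; the trapezoidal membership shapes and all numbers are assumptions for illustration, not the book's calibration). It computes the fuzzy detection of danger as the peak of the intersection of a fuzzy danger threshold and a fuzzy probability estimate, as in Figure A1(c).

```python
def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
    """Trapezoidal membership: rises over a..b, full over b..c, falls over c..d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def danger(p: float) -> float:
    """Fuzzy set "danger": membership of annual probability p."""
    return trapezoid(p, 1e-6, 1e-4, 1.0, 1.0001)

def estimate(p: float) -> float:
    """Fuzzy probability estimate for the exposure (our imperfect best guess)."""
    return trapezoid(p, 1e-8, 1e-7, 1e-6, 1e-4)

# Possibility of danger = peak of the pointwise intersection (minimum)
# of the two sets, scanned over a grid of probability values.
grid = [10 ** (e / 10) for e in range(-90, 1)]  # 1e-9 .. 1
possibility = max(min(danger(p), estimate(p)) for p in grid)
print(f"possibility that the exposure is dangerous: {possibility:.2f}")
```

A crisp threshold compared against a crisp point estimate would instead return a flat "no danger" verdict whenever the estimate falls below the threshold; the fuzzy overlap preserves the residual possibility the text describes.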
2 Finding Alternatives to Risk
Life offers us a path to freedom from risks: Avoid them. Realistic difficulties enter, however, and what seems a rather simple adage turns out to be very complicated in application. It seems that our challenge in practical life is not so much avoiding risk as avoiding the dilemmas that such avoidance entails. These practical difficulties do not represent a defect in simple risk avoidance criteria. They are a result of the way our world is.

To lead the worry-free life, we need to be able to reduce the precautionist's decision matrix to that of the fatalist. This requires that we make the "costs" of precaution zero, or at least near enough to zero that they don't matter. Doing so in turn requires that we assess our path toward progress early on in its development, identifying alternative courses of action that avoid risk in a no-cost/low-cost manner. Costs, especially opportunity costs, are minimized when we assess early enough in the process. We don't entrench risky activities only to wrangle with the prospects of a costly retrenchment later. The process relates to the old saying: The best things in life are free.

We will examine the characteristics of alternatives assessment for avoiding risk dilemmas. Some very intuitive tools can provide formal guidance in this regard. Specific applications are, of course, a matter of our own ingenuity. Indeed, the proper goal of precautionary science is to achieve progress safely. The entire process of a more precautionary approach to risk requires a significant reappraisal of what costs and benefits progress entails and how we properly value these. Ultimately, the cost/benefit balance may reduce to a respect for the wider balance of life. The implication for each individual, organization, and society
at large is to understand how everyone contributes to this wider balance and how we may all do so while avoiding potential dangers.
2.1 The preactionary approach
It is often claimed that precaution is defective in that it ignores benefits. We have suggested that precaution is more a matter of eschewing tradeoffs of benefits against costs when the potential for irreversible catastrophic outcomes precludes any possibility of fully achieving these benefits. Quite simply, trading off the acceptance of high-stakes risk to achieve benefits doesn't make much sense if there may be no one left to enjoy the benefits. In this regard, precaution demands that we be willing to forego benefits, accrue direct costs, or incur some combination of these in order to prevent (i.e., eliminate the possibility of) the loss. The principle stops short, however, of requiring that we accept a greater risk to avoid another, as some critics suggest. Properly considered, application of the minimax cannot increase risk, due to the simple fact that the largest potential loss, including loss in terms of opportunity costs of foregone benefits, becomes the focal point (the "max") of the decision. The challenge here is ultimately one of adequate framing of decision alternatives, and it is not peculiar to precautionary applications of the minimax.

In the wider social context, consider the choice between the spread of insect-borne disease and the use of potentially hazardous insecticides such as DDT. DDT has a potential for ecological catastrophe. On this basis, precautionary avoidance is suggested and has indeed often been implemented. On the other hand, it may be argued that the spread of insect-borne disease presents the greater peril.1 That being the case, spread of disease becomes the maximum loss, and we use DDT to prevent it. A practical issue remains in that we are now faced with the option of two very bad alternatives. While we might somehow determine that the results of unprevented disease are marginally worse than the effects of DDT, how do we justify the choice in the face of potential (or actual) ecological disaster resulting from DDT usage? We are faced with the dilemma of precaution, regardless of the fact that disease is somewhat worse than DDT poisoning (or vice versa).
The difficulty is not in the technical construction of precaution. We apply precaution with no particular sense of distress when precautionary action is no cost/low cost: We see a precariously tilted manhole cover in our way as we cross the street, and we step around it. As has often been observed, the insurance purchase decision, for both individuals and businesses, has many of the hallmarks of precautionary decision-making. Insurance offers a very economical form of precaution against financial ruin.2 Likewise, precautionary decisions regarding health and safety at both the organizational and community level often involve low-cost implementations for avoiding risk. Nonetheless, truly significant decisions at the personal, business, community, and societal level may, and often do, involve precautionary dilemmas. One alternative is an uncomfortable fatalism based on acquiescence: We can't do anything about the truly significant risks of the world, so why try? We will suggest here that precautionary dilemmas may in fact be resolved, at least in theory, by changing the way we plan for progress. By looking at alternative, risk-free pathways in the early stages of planning for progress, dilemmas may be avoided.

The root of precautionary dilemmas lies in the dynamics of risk – in other words, the behavior of risk exposures over time and our response to them. If we examine risk on a forward-looking basis, at discrete points in time, precautionary dilemmas are likely to evolve upon us. At some point in the past, we may have faced a decision matrix suggesting either that the cost of avoidance/prevention is not the maximal alternative, in terms of precautionary action, or that the threat is not significant. At some future point, we may assess the decision matrix again, only to find that we now face one that calls for precaution but for which precaution entails dilemma. Such dynamics may not simply be intentionally caused but could instead stem from a bias toward the status quo in growth patterns. This bias creates an impetus to forge ahead on some given path until we are sure the path is inappropriate. By the time we realize the danger, it may be too late.

To overcome the bias inherent in mechanistic risk assessments performed at a single point in time, the process of alternatives assessment has been suggested. Alternatives assessment involves the analysis and selection of forward-looking choices for avoiding risk.3 In a sense,
alternatives assessment is “pre-precautionary,” or preactionary, in that alternatives eliminate future risk and hence alter all potential (future) decision structures. Alternatives assessment shifts the focus of risk management from “how do I reduce the catastrophic risk exposure of this activity?” to simply “how do I avoid risk?” The second question sounds a lot tougher, and it is. That’s no reason to avoid it. Once we engage in an activity, incurring the possibility of high “sunk costs,” seeking a post-fact solution carries with it the possibility that we may not find one or at least not find one at a reasonable cost (thereby incurring the precautionary dilemma). The cost of forced acceptance of risk can be high. On the other hand, under alternatives assessment, we always have the ability to forego further progress until the risk issues are resolved. Alternatives assessment, therefore, always fails “safe.”
2.2 Identifying alternatives using backcasting
A natural framework for alternatives assessment and preactionary risk treatment is based on the idea of backcasting. Backcasting is a form of scenario generation which attempts to extrapolate plausible paths backward from desired alternative futures.4 This is in contrast to forecasting, which looks forward to pathways that result in possible futures. Forecasting in this sense remains descriptive, while backcasting is more normative. Backcasting as an explicit technique of futures analysis, in fact, has its roots in the planning of national and world energy futures, with its emphasis on sustainability. The sustainability goal, matching resource outflows to renewals, can itself be viewed as a vital form of catastrophe avoidance. Potential pathways to sustainability today are constricted by global risk issues, such as global warming. Not surprisingly, the treatment of global warming in the face of world industrial progress presents us with one of our most troublesome examples of the potential for a full-blown precautionary dilemma.

More formally, the backcasting process works as suggested in Figure 2.1. We view the process here from the standpoint of high-stakes risk management. We find ourselves at a point in time, Tpresent, with our present state (xp) determined by some past pathway,
Figure 2.1  The backcasting process.5 (Impact, or cost to prevent, is plotted against time, from Tpast through Tpresent to Tfuture. A trend extrapolated from the past is "forecast" forward from the present state xp to a possible future xfp that exceeds the catastrophe level; a "backcast" works backward from a desired future xfd, which remains below the catastrophe level, to the present.)
from Tpast to now. Backcasting requires the postulation of a desired future destination, xfd, at time Tfuture. Here, the desired future represents our desire to minimize the costs of risk avoidance (in an absolute sense). For simplicity, we show one possible pathway, developed by working back from the desired Tfuture to now. Multiple pathways (i.e., strategies) may exist. In contrast, forecasting predicts a scenario (or scenarios) based on potentials. These potentials may themselves be determined by applying extrapolation techniques to the observed past, that is, the period from Tpast to now, in the hope of "predicting" the future. This possible future, xfp, may include exceedance of some catastrophe level (intentionally or unintentionally), both in terms of loss and avoidance cost, and hence entail precautionary dilemmas at time Tfuture.

It may be argued at this point that alternatives assessment based on the backcasting approach just pushes precautionary dilemmas "back in time." Alternatives are a current precautionary response that may entail future costs. The crucial difference is that the costs have not yet been realized. We have not yet incurred them, and as a result, there is no pre-determined sunk cost (the "fail-safe" feature of alternatives assessment). It is certainly less problematic to consider whether a city should be built near an earthquake fault zone than to move that city should we find the risk of catastrophic destruction via
earthquake unacceptable. The aim of science should be determining suitable, risk-free pathways of progress, not determining how we get ourselves out of precautionary jams once we are already in them. In the words of one climate scholar, with regard to the society-wide catastrophic threat of global warming, "If you don't know how to fix it, please stop breaking it!"6

Avoidance of anticipatory precautionary dilemmas requires that a wider philosophy of risk (and progress) be adopted and that alternatives, risk potentials, and rewards be treated holistically. What if no legitimate pathways exist that do not entail the potential of catastrophe (colloquially, "we can't get there from here")? It appears then that we have no choice but to acquiesce to risk: To make progress, we need to take risk – don't we? The paradox that results is based on the fact that progress under these conditions leads to eventual doom. The problem is compounded by the fact that as the number of credible hazards increases, so does the possibility that at least one will result in disaster. Why do we accept the notion that there can be no progress without risk (of the catastrophic variety), yet reject the possibility of eliminating the most worrisome exposures? More often than not, such resignation is adopted to disguise inequitable exposure to risk, in favor of special interests and commercial concerns. After all, it is a lot easier (and, hence, cheaper) to acquiesce to risk than to seek alternatives. The "cover" is perfected via clever manipulation of the expected value cost/benefit calculus. Potential failure is no excuse for not at least making the attempt. Once again, science is about solving these dilemmas, not about making "progress" only by increasing the potential for eventual disaster. What we gain immediately from adopting a stance that prefers progress through risk avoidance is hope.

Alternatives assessment is not about selecting alternative risk treatments; it is about selecting alternative pathways to progress. Backcasting can form the underlying backbone of alternatives assessment for risk control: Here is where we want to be – now how do we get there? Again, where we want to be is determined (indeed, pre-determined) by our wider construction of what it means to be "risky."

A natural approach to risk is not about taking actions that satisfy our purely material desires, stopping to predict our next steps along
the way, for better or for worse. Running our lives by trying to predict one step ahead can lead slowly, but inexorably, to eventual precautionary dilemmas. Instead, we need to work backward from a state that we all agree is the most natural.

Under the mechanistic approach, those who deal with high-stakes risk are expected to be well informed about probability, the long-run relative frequency of events, and how it behaves. This expectation is based on an extrapolation from the statistical realm. In the high-stakes realm, we don't have the primary matter required for an understanding of probability that is sufficient to make the formal theory of probability of any use. While something like randomness or chance may ultimately be afoot here, attempts to reduce the associated probabilities to precise mathematical reasoning are, in principle, doomed. The very nature of such rare events is that they defy definition in terms of exact probabilities. We are left with a propensity, a vague representation of how we expect these very low probabilities to behave. This translates to our rough, or fuzzy, view of possibility in our definition of risk as "the possibility of serious adverse consequences." It is then a small step from dealing with possibilities to dealing with fate.

High-stakes risk assessment is about how we respond to the possibilities. Avoiding precautionary dilemmas using alternatives assessment is about responding to risk naturally. We don't need to worry how to respond to actual or even potential cases of risk, as under our regime they are "outlawed." Risk cannot exist, so why worry about it? It does not make sense to ask what precautionary action we could take to make a risk-free situation risk free. It just is.

By making decisions based on forecasting from past data, we are preordaining the outcome. In the process, we end up making any outcome we can determine with precision inevitable. This is a result of the fact that forecasting only gets better when we have more and more data – in other words, as the past builds. We could look at this as the period between Tpast and Tpresent in our backcasting diagram. Obviously, our forecast for the outcome at Tfuture gets more accurate as we approach Tfuture. In seeking more accuracy about the outcome, we end up approaching it. By the time we are confident in our forecast, it may be too late. It is like the captain of a ship trying to get closer to a faint object in the dark of night to determine whether it is just the reflection of the moon or a rocky ledge. By the time he determines
it is in fact a ledge, it may be too late. In backcasting, we project backward from some desired future, so our actual past to this time is really of no direct concern.

With respect to the DDT example introduced above, the dilemma of DDT versus disease spread can be addressed via attention to the ultimate goals of environmental and human safety and a search for suitable alternatives. Indeed, such alternatives have long been proposed in the form of non-poisonous interventions. Both biological and behavioral solutions have been suggested and effectively implemented in many cases.7 Precaution, proactively applied, can result in effective and relatively low-cost solutions, once we recognize, in reverse as it were, suitable pathways to risk reduction and elimination.
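To make the contrast with forecasting concrete, here is a minimal sketch of the structure in Figure 2.1 (Python; the straight-line paths, the catastrophe level, and all numbers are illustrative assumptions). The forecast extrapolates the past trend forward, while the backcast interpolates one plausible path from the present state to a desired future below the catastrophe level.

```python
CATASTROPHE_LEVEL = 100.0

def forecast(x_present: float, trend: float, years: int) -> list:
    """Descriptive: extrapolate the observed past trend forward."""
    return [x_present + trend * t for t in range(years + 1)]

def backcast(x_present: float, x_desired: float, years: int) -> list:
    """Normative: work backward from a desired future x_fd to the present;
    one plausible pathway is the straight line between the two states."""
    step = (x_desired - x_present) / years
    return [x_present + step * t for t in range(years + 1)]

x_p = 60.0  # present impact / cost-to-prevent level
forecast_path = forecast(x_p, trend=3.0, years=20)       # heads toward x_fp
backcast_path = backcast(x_p, x_desired=20.0, years=20)  # heads toward x_fd

print(any(x >= CATASTROPHE_LEVEL for x in forecast_path))  # True: breaches
print(any(x >= CATASTROPHE_LEVEL for x in backcast_path))  # False: stays safe
```

In practice, of course, multiple backward pathways would be explored, a point taken up in the next section.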
2.3 Backcasting under uncertainty
Backcasting attempts to eliminate the potential for catastrophe, in a manner free of risk dilemmas. It is not strictly about reducing our uncertainty about risk. Reducing our uncertainty about the probability of loss associated with any exposure to catastrophic risk may of course help exclude an exposure from the category of "catastrophic" risk by placing it within the category of "practically impossible." However, narrowing the membership function can also place an exposure into the unequivocally "possible" category as well. Precautionary actions based on this uncertainty-modified view of the world suggest that we need not, should not, and cannot wait until the probability of loss is identified as "not risk" for sure. The mere (credible) possibility of loss, as formally defined in the manner suggested in the Appendix to Chapter 1, demands avoidance. Backcasting and the related search for alternatives is a process in which this avoidance can be carried out with the least amount of disruption to progress, however we choose to define it.

That said, we need to further address the uncertainty of outcomes inherent in any choice of alternative courses of action. Our main goal is avoiding risk, under the proviso that risk dilemmas are avoided as well. Backcasting suggests alternatives. How do we know the alternatives will be effective? Backcasting itself must proceed in a manner that takes into account knowledge imperfections.

Backcasting toward alternatives is essentially a process of modeling. We model systems that can successfully achieve an environment
free of risk dilemmas. Modeling under uncertainty due to knowledge imperfections must therefore be exploratory. Exploratory models are models of causation applied on the basis of multiple plausible alternatives.8 Exploration, in turn, is a natural reaction to uncertainty. Under uncertainty of the type attributable to knowledge imperfections, the best we can do is suggest an ensemble of models that gets us from point a to point b. The greater our uncertainty, the wider the selection of plausible models. Knowledge, in turn, tightens our models, with perfect knowledge resulting in a single selected model. Traditional models ignore uncertainty by combining the bundle of plausible observations into a single, consolidated model based on some sort of averaging criterion. In the process of consolidation, we lose most if not all of the information about the uncertainty involved. This also reduces our ability to respond to novel situations, by artificially reducing our flexibility of response (i.e., all responses are now based on the fixed, consolidated model).

Backcasting is by its very nature explorative. Each path back from a desirable future to the present is based on an ensemble of plausible models. In many cases, the desirable future will itself be fuzzy. In the case of high-stakes risk, it may just be about knowing (roughly) what we want to avoid. Flexibility is maintained as no plausible model is excluded from the process. General guidance can still be provided by suitably chosen summary measures, such as the simple average of plausible bundles, a modal estimate, or even interval-based "cuts" through the bundle based on various degrees of credibility (a small numerical sketch of such summary measures appears at the end of this section). The point is that the integrity of the original bundle is maintained throughout.

We approach backcasting and explorative modeling not from a sense of well-defined mathematical structures to be induced from data. The process is more creative, owing more to the logic of discovery than to the process of verification. As such, there are no set rules for exploration. In developing backcasting scenarios, therefore, experimentation and exploration proceed together.

A key aspect of the implementation of models defined by exploration is the notion of robustness. If practical results depend on admittedly uncertain models, what if our adopted course of action fails? Robust plans tend to minimize the effects of failures in the face of uncertainty. In alternatives assessment, if models should fail, we want
them to fail “safe.” In other words, if they are ineffective, this lack of effectiveness should not add to the potential for danger. Their failure therefore leaves us no better off, but also no worse off. The notion of robust fail-safe design is essential to backcasting alternatives under uncertainty. A natural approach to fail-safe alternatives design is to never venture too far from ideas that we can confidently identify as providing safety. It’s like sailing at sea while always maintaining visual contact with the shore. If something goes wrong, we can always navigate our way back to safe shores. A robust approach to alternatives in terms of fail-safe may seem timid, and indeed it is. A considerable degree of circumspection is necessary, early in the process, to avoid the creation of future risk dilemmas.9 Within the framework of exploratory backcasting, scientific discovery can be viewed as trying to find a way out of our risk dilemmas. Potential solutions are then subject to the process of verification. In terms of our risk model, that verification is provided by the degree of comfort we achieve in avoiding risk (to the degree possible). We will assume also that this avoidance is consistent with living some sort of fulfilling life. The most basic question for any risk-related backcasting is whether we can obtain our wider goals within the parameters of risk-free living. The process will be iterative, in the sense that our goals may themselves be altered based on feasibility. All in all, assessment of alternatives based on these goals becomes a process involving a considerable amount of feedback (and “feedforward”) along the way.
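By way of illustration only, the following minimal sketch (in Python; the ensemble, its credibility grades, and the cut level are our hypothetical inventions, not part of the text) shows how a bundle of plausible models can be summarized by an average or by credibility “cuts” without ever consolidating the bundle away:

```python
# Hypothetical ensemble: each plausible model maps an external driver
# to a projected impact; "credibility" in [0, 1] grades plausibility.
models = [
    {"credibility": 1.0, "impact": lambda z: 0.8 * z},
    {"credibility": 0.7, "impact": lambda z: 1.2 * z},
    {"credibility": 0.4, "impact": lambda z: 2.0 * z},  # pessimistic outlier
]

def summarize(z, alpha=0.0):
    """Summary measures over the ensemble at driver level z: a simple
    average of all projections, plus the interval spanned by models
    whose credibility is at least alpha (an interval-based 'cut')."""
    projections = [m["impact"](z) for m in models]
    cut = [m["impact"](z) for m in models if m["credibility"] >= alpha]
    return sum(projections) / len(projections), (min(cut), max(cut))

print(summarize(10.0, alpha=0.5))  # average ~13.3, cut interval (8.0, 12.0)
```

Raising alpha narrows the interval toward the most credible models, but the original bundle remains intact throughout; nothing is averaged out of existence.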
2.4 Backcasting versus backtracking
We have suggested that by basing risk treatment on a forecast-based approach, we invite dilemma. Forecasting is based on incremental change. The temptation is to postpone action until we get nearer to, and surer of, potential problems. However, at some point in time, we may not be able to regain our original position with respect to risk by simply retracing our steps, or what we might call backtracking. As the process develops, we may face discontinuities that suggest hysteresis in the dynamic course of risk.10 Dynamic hysteresis is a condition in which the path back from the initial state (in terms of internal parameters) may be different from the one that led us to the current (or future) state, in terms of manipulation of external parameters. In
the case of risk, hysteresis may arise from the accumulation of sunk costs or from the physical properties of complex systems.11 Under conditions of hysteresis, there is also the possibility that we may not be able to get back to the initial state. Changes may at some point become irreversible (or at least difficult to reverse). We may wade out into a swift river, only to find that the currents slowly shift the sand between us and the shore. Following our original path back now becomes perilous. Backtracking is therefore not the same as backcasting, which is all about understanding where we want to be and how to get there without getting in “over our heads” (literally). More formally, our dynamic internal parameter is impact or loss. Our external parameter(s) may be something like increasing atmospheric carbon, the result of excessive accumulation being catastrophic climate changes. We begin at state x_p. As we proceed, a forecasting/look-ahead approach may suggest that at some future level of the external parameter, Z, we reach a “critical state” x_c, beyond which further increases in Z forecast potential catastrophe. Backtracking is based on the premise that by decreasing Z, we can return to some more suitable (sustainable) level along a path that simply retraces (“backtracks”) the original path. The potential for hysteresis implies that the path of the internal parameter (impact) in response to changes in the external parameter may be different “backward” than “forward.” This implies impacts are path dependent. Hysteresis in these cases may lead to catastrophe, suggesting that either the catastrophic potentials are irreversible or that “going backward” imposes countervailing risk of a catastrophic nature. In the case of carbon emissions and the potential for global climate catastrophe, we may reach some point at which significantly reducing carbon emissions could cause economic collapse or other reverse effects of an equally catastrophic nature. So while increasing Z implies catastrophe once we reach some critical level, so does decreasing Z. The two pathways define the “horns” between which we are stuck when faced with risk dilemmas. We are unable to forestall catastrophe by manipulation of the external parameter once the critical state is reached – any way we turn. And under extreme uncertainty, when or whether we have reached the critical state is very imperfectly known. Understanding hysteresis and associated complex dynamics of systems is an important component of alternatives assessment based on backcasting. To the extent that we can identify complex dynamics
that result in hysteresis and other complex forms of dynamic behavior of systems, we can use this information to help plan against disaster (by implementing suitable alternatives early on in the process of planning). Complex dynamic structures must be considered as part of the explorative process of backcasting toward alternative futures.
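A stylized numerical sketch may help fix the idea (Python; the critical level and lock-in fraction are invented for illustration, not taken from the text). Impacts “locked in” past the critical level persist even as the external parameter Z is walked back down:

```python
Z_CRITICAL = 5.0      # hypothetical critical level of the external parameter
LOCK_FRACTION = 0.6   # hypothetical share of peak impact that persists

def simulate(path):
    """Track impact along a path of external-parameter values Z.

    Impact rises with Z, but once Z has exceeded Z_CRITICAL, a fraction
    of the peak impact is locked in (sunk costs, irreversible physical
    change): decreasing Z no longer retraces the forward path."""
    peak = 0.0
    impacts = []
    for z in path:
        peak = max(peak, z)
        if peak > Z_CRITICAL:
            # Path dependence: a floor set by locked-in effects keeps
            # impact above the simple forward relation.
            impacts.append(max(z, LOCK_FRACTION * peak))
        else:
            impacts.append(z)
    return impacts

forward = [1.0, 3.0, 5.0, 7.0, 9.0]
backward = [7.0, 5.0, 3.0, 1.0]
print(simulate(forward + backward))
# Going up, impact tracks Z. Coming back down past the critical level,
# impact stalls at 0.6 * 9.0 = 5.4 even as Z returns to 1.0: backtracking
# the external parameter does not backtrack the impact.
```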
2.5 Maintaining the balance of life
The concept of fate, or reasoned fatalism, with respect to achieving a suitably risk-free existence based on rational response to risk assumes an order or balance to life. Cost/benefit analysis ultimately aims at a balance. This balance suggests that, at a minimum, costs and benefits even out. When costs and benefits can be measured in monetary terms and there exists a sufficient time horizon over which “expected outcomes” can be realized, cost/benefit is a perfectly reasonable guide to life. Yet, the resources which we spend on prevention or avoidance of catastrophic loss represent a far different concept of cost than catastrophe itself. The possibility of catastrophe changes the whole meaning of exchange. The riches we gain from avoiding risk are based on continued existence, and how do we value that? In attempting to balance catastrophic risk potential against benefits, we find an irreducible conflict of terms. There is no proper balance in terms of dollars and cents. In purely monetary terms, the cost of catastrophic risk is infinite. The benefits are not. So what have we gained by introducing a life-saving drug whose interactions may ultimately prove catastrophic? A natural approach to risk also implies a balance. It is a balance based on natural qualities. We balance a regime of risk elimination goals against the need to live a life free from the threat of increasing the possibility of doom. Infinite penalties of doom imply that we should be willing to spend infinite amounts to prevent doom. The result is the dilemma of precaution. The natural life, on the other hand, requires no special resources. What is the “cost” of breathing or a heartbeat? What cost do you put on natural activities? We don’t decide to eat based on cost/benefit analysis, except in the wider sense of supporting our natural existence (although costs and benefits may figure into how much, or what, we eat). We eat to live. In the same way, the natural approach is a way of life. The idea of cost only makes sense when we are acquiring something ancillary to our basic needs. In turn, the concept of opportunity
cost only makes sense when we are giving up something valuable. A natural life presents us with a balance of existence with catastrophic risks. Fatalism, doing nothing, means doing nothing that will disrupt the natural flow of life. Now, if we can reduce the natural risks of life, by finding shelter, producing food, or curing disease, we should do it. We gain what is arguably a better life by doing so. To the extent any of these introduces the possibility of catastrophic risk, however, it is not worth it. It may seem like we have simply redefined risk and the cost to avoid it, suggesting that only the risks we don’t like have a legitimate cost of avoidance, which in turn leads to precautionary dilemmas. First of all, the risks we accept, and conversely those we avoid, have their roots in the concept of a natural background level of risk. How exactly we define such a level is a matter for the wider community of humans to decide, a process we describe in a later chapter. That we have at least an intuitive notion of “naturalness” is made plain by observation of how humans, and all other animals for that matter, respond to risk. We base our assessment of “low cost or no cost” on the observation that if certain risks are identified before they can entrench themselves, then we avoid at the very least the sunk cost of going back once we have made substantial investments. As for the cost of lost opportunities, that is, foregone benefits, the whole point of alternatives assessment is to find equal opportunities that do not entail risk. To declare that this cannot be done without incurring such risks doesn’t make sense. Realize also that the risks we are talking about are so severe as to overwhelm any possible benefits, should they occur. Unless we are prepared to say they can’t, we need to think of alternatives. Last but not least are the direct costs of prevention or avoidance. Do we properly count the cost of building a house to keep out the elements as a cost of prevention or simply a cost of life? Direct prevention costs are simply a matter of convention: How we choose to look at the problem. We would argue that such direct costs are not part of some risk/reward tradeoff but rather a part of the resources we expend on living. To the extent our civilization collectively believes that the costs of living exceed those of not, we have a more widespread teleological difficulty on our hands. We can at least take some comfort that ancient humans, living without the benefit
of modern invention, were not so collectively unhappy that they felt there was no point in “going on.” The notion of balance is contained in the concept of sustainable development.12 Sustainability suggests we balance our lives and our pursuit of progress against the availability of resources – most importantly, the natural (physical) ones. A sustainable life is one in which we balance natural resource use with replenishment, so as to achieve long-run survival. Sustainability is therefore a long-run risk management strategy. It follows that any conception of the “natural life” must also entail some concept of sustainability. Sometimes a noise becomes so loud we can’t really hear it anymore, or a light so bright, we can’t see it. A cost may also become so great that it becomes imperceptible. What would you pay to save your child from suffering? An infinite amount; but an amount you don’t have is the same as nothing (all the material wealth you do have being, perhaps, not enough). Cost is, therefore, not a consideration. Instead, we avoid unnatural increases in risk, accepting those we deem natural, including the risk of being wrong about which risk we can identify and then choose to prevent. The missing piece in this naturalistic view of risk is just what level of risk, if any, we find acceptable. As we have suggested early on, a strictly zero threshold for possibility of risk presents insurmountable practical difficulties. It is doubtful that any life could be truly sustainable based on the achievement of, or quest for achievement of, a truly zero level of risk. We turn our attention to a suitable definition of possibility of risk, in terms of probability, in the next chapter.
2.6 Contrasting the “post-fact” approach
Unlike the search for alternatives based on backcasting goals, the statistical model of risk is based on prediction. It can be viewed as essentially a simple short-run feedback loop that responds to statistical questions about the effectiveness of loss treatments. This approach is at the heart of what we might call the identify-assess-treat (I-A-T) model of the management of statistical risk. We identify risk characteristics, assess treatment options based on our mechanical criteria of decision (e.g., minimize expected monetary value), and then treat risk accordingly (Figure 2.2).
Figure 2.2 The identify-assess-treat (I-A-T) model behind post-fact risk management. (A feedback loop: identify risk → assess risk → treat risk, with results monitored and fed back into identification.)
The I-A-T model is arguably a manifestation of a single-period, look-ahead forecast, adjusted iteratively in response to statistical information. We take action and then predict the next step. Again, in a genuinely statistical environment, the approach is perfectly valid. However, when statistical prediction cannot be fulfilled in any meaningful time frame, as in the case of rare, catastrophic losses, the process can lead to disaster. It is post-fact’s inability to look forward through what is really a rather dense haze of uncertainty that promotes the creation of risk dilemmas. We move, somewhat blindly, ahead, guided only by perceived benefits at the end of the road. We approach in small steps what turns out to be a large enemy – the potential for catastrophic risk. Before we know it, we are overwhelmed. Incremental approaches to non-incremental problems don’t work. In the case of risk, this defect is directly related to the inability of a statistical mind-set to deal with catastrophic potentials.
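A toy simulation (Python; the probability and the observation window are hypothetical) illustrates the defect: for rare catastrophic exposures, the statistical feedback on which the I-A-T loop depends is almost always silent, so the loop keeps endorsing the exposure:

```python
import random

random.seed(1)
P_CATASTROPHE = 0.001   # hypothetical annual probability of ruin
YEARS_OBSERVED = 20     # the short statistical window available to us

losses = sum(random.random() < P_CATASTROPHE for _ in range(YEARS_OBSERVED))
estimated_p = losses / YEARS_OBSERVED   # almost always 0.0

# The "assess" step sees no losses, so the "treat" step retains the exposure.
print(f"observed losses: {losses}, estimated probability: {estimated_p}")
# The short-run statistics are silent; the long-run consequence is ruin.
```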
2.7 Cost/benefit and post-fact risk management
Mechanistic risk assessment provides the input to decision processes that are usually based on some sort of cost/benefit analysis. Expected value is calculated as probability of loss multiplied by monetary loss potential in an effort to place a long-run cost on risk. This probabilistic cost is then compared against benefits in a manner that tends to maximize the differential of costs and benefits in
any one case and cumulatively as well. This approach encounters huge, indeed insurmountable, difficulties when we go beyond the realm of monetary valuation. The wider scale of existential risks often defies simple monetary measures. What is the true measure of the value of a life or a habitat? In the theory of classical economics, the determination of value requires a market: A set mechanism of exchange in which the medium of exchange (“money”) allows us to measure this exchange value in common terms. What may at first glance seem like an attractive property – the ability to harmonize costs and benefits under one common mode of measurement – turns out to be a serious impediment to its rational use.13 This is not to say that economic matters should be ignored. It is simply that they are more suitable to analysis when random events can be assessed statistically. The existential quality of the catastrophe problem (in the long run, we may cease to exist) makes problems that rely on markets unworkable in this domain. Markets deal with either the deterministic or the statistical. They presume that outcomes will manifest themselves in the long run. The uncertainty and finality of catastrophe make reliance on these assumptions dangerous. The concept of economic exchange of goods and services simply does not serve as the proper basis for a theory of high-stakes risks. No suitable market exists for “survival.” It is a concept beyond tradeoffs, as we normally construe them.14 Safety, in the ultimate sense of survival, is rather a thing that must be assessed as part of our overall worldview and our view of how we fit into that world. It is defined by a lifestyle we choose, not based on cost/benefit but on the right things to do. The technical apparatus of cost/benefit in the wider social framework of what’s good and bad for us is flawed. We need not be economic experts to recognize this. The tip-off is when cost/benefit requires us to put an economic value on a human life, an animal life, or an ecosystem. How do we do that? That puts us squarely back in the region of complex tradeoff and indeed the dilemmas of high-stakes risk treatment that cost/benefit supposedly avoids. This does not mean we are without a means of guidance through life; it just means that this guidance will not come in terms of dollars and cents either gained or lost. Value propositions will more likely have to be determined qualitatively.
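The argument can be put in symbols (the symbols are ours, not the author’s): expected value prices risk as

$$ EV = p \cdot L, $$

which is meaningful only while both sides of the comparison are finite. For a catastrophic, existential loss, $L$ is effectively unbounded, so $p \cdot L \to \infty$ for any $p > 0$, and the comparison of $EV$ against any finite benefit $B$ collapses.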
When contrasted to the mechanistic approach of cost/benefit, we can begin to see why a cost-free approach of risk avoidance makes sense. The optimal approach is the cost-free approach. Our pathway is pure knowledge, itself, we might argue, a cost-free commodity. Natural responses to risk fit in to our natural lives and therefore create a balance. This balance is, once again, not a matter of balancing monetary cost against monetary benefits. Rather it is about achieving the proper balance among all aspects of our life with respect to the demands of nature. In dismissing a technological cost/benefit approach to nuclear power, Garrett Hardin has observed, “A society that cannot survive without atomic energy cannot survive with it.”15 Once again, “affordability” is measured in terms of balance. Living with nuclear power implies that we have disturbed our natural balance, in terms of sustainability via relatively safe means. We have to introduce the risks of nuclear power to offset an energy budget run amok. The result is that natural constraints in terms of risk acceptance may be exceeded. The situation, in turn, is cause for worry. All in all, the analysis suggests a failure to manage high-stakes risk associated with resource consumption. The distinction between direct costs and opportunity costs once again becomes important. Preserving a stand of pine trees in the middle of an industrial development costs more than preserving a stand of pines in the Maine wilderness. That is, until the stand in the wilderness presents an obstacle to industrialization. If we want to develop the land, to build a new factory or a housing community, the decision becomes one of opportunity cost, in terms of foregone business or living space. The difference in both cases is one of assessment of the need for development and its alternatives. Yet, we’re not prepared to spend infinite amounts to save one life, are we? Resources are limited. Say that it costs society some large sum, say $9 billion, to reduce the probability of loss in a population of 100,000 from one in ten thousand (.0001) to one in one hundred thousand (.00001), the one in a hundred thousand representing our risk threshold in this case. The cost “per life saved” is $1 billion. Questions like this are often assumed to raise the issue of proper valuation of life. The precautionary approach, we might assume, suggests a very high, perhaps an infinite value. Yet, we feel extreme trepidation in the $1 billion tradeoff. Why? It’s not a value issue so much
as the fact that the money could almost certainly be spent to save, let’s just say, at least as many lives elsewhere. We have in this case a risk dilemma. If the cost per life saved were $5,000, we would not face this dilemma. Now a cost/benefit assessment could probably come up with a more definitive decision to reject the $9 billion expenditure, but only based on cost-of-life tradeoffs, which we would suggest are arbitrary. There is no reason to introduce tradeoffs. The fact that a dilemma exists is much more informative. We either work toward alternatives to try to make the environment safe without jeopardizing, in terms of available monetary expenditures, some other safety goal, or fatalistically accept the risk. In accepting the risk, there is no comfort based on any tradeoff in lives saved, as no valid criterion for judging the appropriateness of this tradeoff exists. To avoid fatalism as a foregone conclusion, we need to avoid the idea of resource limitation as a foregone conclusion. That’s where science comes in, based on the proper expansion of precautionary alternatives.
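As a check on the arithmetic of the example above (a minimal sketch in Python, restating only figures already given in the text):

```python
population = 100_000
p_before, p_after = 1e-4, 1e-5          # one in 10,000 down to one in 100,000
expected_lives_saved = population * (p_before - p_after)   # = 9.0
cost = 9e9                              # the $9 billion expenditure
print(cost / expected_lives_saved)      # 1e9: $1 billion per life saved
```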
2.8 Avoiding mechanistic precaution
While expected value decision making is clearly mechanistic, purely technical versions of precaution based on minimax, and of fatalism based on minimin, can be as well. As the physical conditions present themselves, we take action (as the decision matrix determined for each respective decision criterion suggests). Naturalism enters only when we put these criteria within a wider universe of purpose. Precaution, applied mechanically, soon becomes a matter of accounting for multiple dilemmas as the costs of prevention approach the cost of the ultimate risk. We may resort then to alternatives assessment, but again it is on an individualized, mechanistic basis. The great benefit of precautionary alternatives assessment is that it is proactive, rather than reactive, or post hoc, as is the standard model of statistical decision. Mechanistic application of alternatives analysis as an adjunct to precaution is subject to pitfalls. Adopting alternatives assumes taking alternate courses of action to achieve certain ends. First of all, the ends themselves may be questionable, perhaps subject to alternatives themselves. Second, in a mechanistic setting, the status quo always represents the default if we can’t come up with any other
suitable alternatives. A mechanistic alternatives assessment can put us once again in the undesirable position of dilemma: All available alternatives suggest dilemmas with respect to the status quo. The intuitive approach suggests instead that caution becomes part of our approach to natural conditions of life, including our definition of “progress” and what is truly desirable. This is not to say that there will not be tradeoffs to consider. It is just that these tradeoffs will be based on a wider conception of what our course of action may or may not gain us. Once again, the mechanistic approach is geared to this process in reverse: We choose a course of action based on the pursuit of purely material gains and then do what it takes to get there. “Doing what it takes” may of course be tempered with an alternatives assessment. However, if no “safe” alternatives are found, we proceed anyway. In the mechanistic setting, this approach, once again, invites dilemma. Viewing alternatives assessment as a replacement for risk assessment is wrong. Doing so just drops alternatives assessment into the place of risk assessment within what is still a mechanistic framework. The mechanistic framework really needs to be replaced with a natural one, and then alternatives assessment follows naturally. Alternatives assessment must itself be suitably preactionary. The mechanistic model of risk is simply a byproduct of an equally mechanical model of progress. Goals are established, and then the means to the goal are determined. Risk becomes a mere technicality, to be dealt with via iterative application of the I-A-T model. When inconvenient feelings, or emotions, get in the way, as suggested by psychometric studies of risk behavior, they are dismissed as irrational. It then becomes a matter of convincing stakeholders, via “risk communication,” that the mechanistic approach is the only rational one, just like the Newtonian approach is the only right way to look at how celestial bodies interact with each other. A mechanistic fatalism, unsupported by any moral code, may in fact be the dominant risk philosophy of today. Such fatalism, often driven by actual or potential precautionary dilemmas, is more acquiescence than true risk acceptance. It is not true risk management in that it does not rationally reduce our worry over risk. This is how fatalism, the belief in fate or destiny, itself neutral as to futures, has acquired such a negative connotation. Applied mechanistically, it may in fact invite doom.
The challenge is to integrate precautionary alternatives assessment into the very fabric of living. The natural approach to progress must entail precautionary alternatives assessment as part of the process of defining and planning for progress. And by progress, we mean safe progress. A naturalistic application of alternatives assessment in support of risk avoidance must come early enough in the process to prevent the accumulation of risk and risk dilemmas.
2.9 Risk acceptance – risk avoidance – risk anticipation
In the statistical domain, we accept risk when the benefits of doing so exceed the probability-weighted costs. Our rewards show themselves over a relatively near-term horizon, offering us a reliable indicator of our success in this regard. When the stakes get high, losses are ruinous or otherwise irrecoverable. We cannot avail ourselves of statistical arguments here, because in the long run, there may be no long run. While every act or event has the potential for danger, we need to set some sort of practical possibility threshold below which we accept the theoretical potential for danger. Risks or probability/loss combinations that entail some very low, but not strictly zero, probability of catastrophe are also deemed “acceptable” on a pragmatic basis. When the stakes are high, it is only this type of acceptance which we deem a legitimate form of risk management. The response to high-stakes risks that present the genuine possibility of catastrophe is avoidance. Risk avoidance is not as simple as it may seem, however. The dynamic character of risk suggests we may face serious risk dilemmas by waiting too long to act. By considering only risk avoidance or acceptance, in a static framework, our risk management program may turn out to be less than effective against high-stakes risk. Proper risk management requires that we think ahead. Seeking alternatives early on in the process of planning can help alleviate risk dilemmas. Alternatives assessment in a preactionary fashion involves risk anticipation. We consider the possibilities early on, before they become entrenched, and hence costly to avoid in the future.
Figure 2.3 Extending the risk management repertoire to include anticipation:
Acceptance: Accept statistical risks based on cost/benefit comparison. Risks (probability/loss combinations) in which catastrophic impacts are highly improbable (though not strictly zero) are also acceptable.
Avoidance: Precautionary treatment based on the minimax suggests the “possibility” (with respect to some risk threshold) of catastrophe be avoided, either by eliminating the activity or reducing its associated likelihood of catastrophe below the possibility threshold.
Anticipation: The dynamic character of risk demands that we take a preactionary approach based on assessing potential risk-free paths to progress. Avoidance applied in a post hoc fashion leads to risk dilemmas of the “doomed if we do, doomed if we don’t” variety.
Our overall plan of risk management must therefore include risk avoidance (of high-stakes potentials), reasoned risk acceptance as necessary, and, as a prelude, preactionary planning based on risk anticipation (Figure 2.3).
3 Risk Avoidance: All or Nothing
Minimax based on suitable risk thresholds is a strong preventative against catastrophic risk. To that, we add an environment under which outcomes are known only very imperfectly. This strong prescription based on wide potential application increases the chances that our goals, values, and actions will in some way conflict. So, in order to achieve an adequate degree of power against risk, we make our risk criteria more prone to risk dilemmas. Risk dilemmas therefore follow from the “all or nothing” character of precaution: Partial solutions don’t properly deal with the catastrophe problem (“in the long run, there is no long run”). A critical determinant of the power of precaution to protect us from high-stakes risk is where and how we set our possibilistic thresholds for risk. Set the thresholds too low (in terms of probability of occurrence), and we may make them impracticable from the standpoint of living life even at its most basic level. Set them too high, and we increase the possibility of catastrophic risk. Let’s examine the wider implications of minimax based on some negligible, or de minimis, possibility thresholds.1
3.1 How risk grows
When we deal with catastrophic risk, we need to make sure we take the proper perspective, integrating all potential exposures into the analysis. Understanding the aggregation of risk from a variety of sources, across both the potential exposures themselves and across time, lets us monitor our risk management policy for dangerous accumulation.2 When we work in the statistical domain, the probability of loss that results from adding exposures increases in a continuous fashion. Figure 3.1 shows how the probability of loss increases as we add multiple exposures, each having a .1 (one in ten) chance of loss.

Figure 3.1 A mathematical representation of risk growth. (Probability of loss, on a scale of 0 to 1, plotted against the number of exposures, from 1 to 40.)

This representation is based on a simple application of the theory of probability.3 Intuitively, as we add more exposures, the chance that at least one of them will lead to a loss increases, approaching 1 (certainty) in the limit. A result such as this might permit us to set risk acceptance in terms of cost/benefit tradeoffs based on individual risk conditions, as well as the growing aggregation of risk. In the complex and dynamic high-stakes domain, we can rarely identify the conditions for risk growth with such precision.4 As a result, we need to apply sensible thresholds to define risk acceptability in terms of its relative “possibility” for causing great harm. Thresholds, in terms of either precise or fuzzy probability cut-offs, are much stricter. We avoid any and all exposures above the threshold, regardless of how little or how much they add to the exposure after that threshold is breached. Mathematically, the possibility that either risk 1 or risk 2 will be dangerous is calculated as the maximum (Max) of their respective possibilities (on a scale of 0–1), X1 and X2:

Poss(X1 or X2) = Max(X1, X2)
or more generally,

Poss(X1 or X2 or … or Xn) = Max(X1, X2, …, Xn)

As a result, the outcome will always be driven by that risk with the highest possibility for disaster. So, if only one risk has a high possibility (at or near “1”), the entire ensemble is “risky.” The all or nothing character of possibilistic thresholds follows from their inherent uncertainty and our desire to strictly avoid unpleasant surprises (as reflected in our minimax decision rule). This property of risk thresholds has significant implications for the management of risk. For example, if we find ourselves incurring some number of risks above the threshold, reducing these risks one at a time will not make us safe according to the threshold criterion. As long as one risk exists, the potential threat exists.5 Possibility thresholds suggest a modal approach to risk, rather than one based on continuous measurement, as in the statistical domain. This modality is a direct result of the uncertainties we face. To get a better feel for the character of these decision processes, let’s perform a hypothetical thought experiment. Say you insult a warlord at the airport, and as a result, the warlord tells you that upon return to his native land, he will dispatch five ninjas who, in devotion to their warlord, will not return until you have been “eliminated.” The police intercept and jail four of the five before they enter the country. Do you feel safer? Maybe. Do you feel safe? Probably not. The number of assassins assigned to your “case” makes a difference, just as the theory of probability suggests. Unless it becomes “impossible” that an assassin will slip through, you may not sleep very well at night. On the other hand, knowing that the warlord has vowed to send “as many assassins as it takes” would cause you great concern indeed. Your doom in that case might be considered pretty much assured. The feeling of safety here demands some threshold be met, just as in the case of setting precautionary thresholds. Possibility is a much weaker form of uncertainty than probability, in that it depends on a much lower volume of knowledge. The result is, however, a much stronger rule about risk taking. In effect, the weakness in our knowledge is compensated for by stronger decision criteria. This is the tradeoff that dealing with potentially ruinous results demands.
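The contrast between the two aggregation rules can be made concrete in a few lines (Python; the numbers are illustrative only). Statistical risk compounds continuously, as in Figure 3.1, while possibilistic risk is governed entirely by the worst member of the ensemble:

```python
def prob_at_least_one_loss(p, n):
    """Probability of at least one loss among n independent exposures,
    each with loss probability p (the relation plotted in Figure 3.1)."""
    return 1 - (1 - p) ** n

def possibility_of_danger(possibilities):
    """Possibilistic aggregation: Poss(X1 or ... or Xn) = Max(X1, ..., Xn)."""
    return max(possibilities)

print(prob_at_least_one_loss(0.1, 10))           # ~0.65: grows continuously
print(prob_at_least_one_loss(0.1, 40))           # ~0.99: nears certainty
print(possibility_of_danger([0.05, 0.2, 0.9]))   # 0.9: one risk dominates
```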
3.2 Why prioritization fails
While cardinal measures of risk in the face of catastrophe and uncertainty cannot be made with any degree of accuracy, what about prioritization? The existence of the catastrophe problem under extreme knowledge imperfection assures that prioritization does not make sense. There is simply no basis for thinking that addressing catastrophic potentials one-at-a-time can offer us any great comfort. On principle, it makes no sense to say one catastrophe is greater than the other. Whether we get run over by a 300-pound rock or a 3000-pound rock, the ultimate impact is the same: When we’re gone, we’re gone. The uncertainty inherent in risk accumulation means that we might not even be able to discern the direct effects of eliminating this risk or that risk (as discussed in more detail below). As in the application of cardinal expected value decisions, any comfort we gain from a prioritized attack on high-stakes loss potentials is strictly psychological, and that psychological peace comes only from self-delusion. The choice of prioritization targets therefore remains somewhat arbitrary. We might just as well decide among them by flipping a coin.6 The question remains whether we have chosen the right order, or even more significantly, does anything but complete elimination (in terms of a given threshold) even matter? Prioritization, like cost/benefit in general, fails on the basis that a partial solution to high-stakes risk does not work on logical grounds. It doesn’t solve the catastrophe problem. It is just a post-fact attempt at resolving risk dilemmas that is doomed to failure by the very nature of the catastrophic risks it is trying to avoid. Paradoxically, by giving us false comfort that dilemmas can be tackled one at a time, it may in fact increase the number of risks we are willing to accept. On more pragmatic grounds, it might seem that at least some sort of prioritization is necessary to be able to allocate scarce resources to the reduction and eventual elimination of risk. Unjustifiable means, however, cannot be used to satisfy impossible ends: Resource difficulties arise from a post-fact view of risk and not despite it. As proposed above, the preactionary approach to risk attempts to minimize costs of prevention by identifying and developing alternatives early on in the process of planning for progress. We circumvent resource difficulties by planning ahead, not by lagging behind.
When deterministic or even statistical knowledge is available, we can prioritize risks based on the fact that the payoff of such prioritization will be verifiable within the relatively short run. By allocating resources to the worst risks first, we achieve the maximum risk reduction at the lowest cost. When statistical information is not available or is very fuzzy, prioritization reduces to guesswork. It boils down to seeking solutions that may or may not get us anywhere and may even take us in the wrong direction. More insidious is the idea that we can rely on prioritization as an escape when things get bad. As we will show below, given the uncertain nature of risk and its accumulation in complex, high-stakes domains, by the time things get bad in this environment, it may be too late to tackle them piecemeal.
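A short sketch (Python; the possibilities and the threshold are invented for illustration) makes the point about one-at-a-time elimination: under the Max rule, every partial state remains “dangerous,” and the verdict flips only when no above-threshold risk is left:

```python
risks = [0.9, 0.8, 0.6, 0.3]   # hypothetical possibilities of danger
THRESHOLD = 0.1                # de minimis possibility threshold

while risks:
    verdict = "dangerous" if max(risks) > THRESHOLD else "safe"
    print(risks, "->", verdict)
    risks.remove(max(risks))   # prioritize: tackle the worst risk first
print([], "-> safe")           # only complete elimination changes the verdict
```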
3.3 Pragmatic arguments for not adding risks
The idea that risk response can be adaptive or achieved over some convenient period of time is of no comfort under the all or nothing approach. The threshold approach implies that it doesn’t do any good to reduce risks gradually from some number X toward zero. All it takes is one. In that case, does it matter how many risks we accumulate beyond the threshold? As in the case of our ninja assassins, comfort cannot be achieved as long as one still lurks. So with respect to reducing catastrophic risks that might exist, “It doesn’t matter if we eliminate a few, as long as one exists, we still face the potential for disaster.” Does the reverse argument, however, suggest a proliferation of risk on the same basis? “It doesn’t matter if we add a few, as long as one exists, we still face the potential for disaster.” The latter, it would seem, is a strong argument for fatalism in the face of the daunting task of reducing all risk below the level of possibility. Despite the logical issues, there are strong pragmatic reasons to avoid the accumulation of risks. One of the strongest is that once loss potentials build, it becomes very difficult to get ourselves out from under the catastrophe problem. Prioritization does not work on principle. Might we determine some other approach in the future? If in the future we determine that some more complete form of risk avoidance is the best way to go, by being cautious about adding risks now, we may have less infrastructure to dismantle then. This reduces future sunk costs that contribute to the risk dilemma. A pragmatic
approach would therefore suggest that the potential for risk dilemmas should drive avoidance of risk accumulation. Precautionary approaches deal with low-probability potentials for danger, which, lacking a sound observational basis for action, suggest we must deal with possibilities rather than probabilities. Exposures to risk in the past have in fact gone from potential to distinct possibility of threat. Examples include asbestos, cigarette smoking, and chlorofluorocarbons (CFCs). With all of these, the cost of dismantling the empire of dependencies and support mechanisms for these activities was huge, leading to pronounced precautionary dilemmas. In the case of asbestos and CFCs, dilemmas were ultimately solved using alternatives (with smoking subject to avoidance). Avoiding risk, now based on the avoidance of future dilemmas, lies at the root of preactionary strategy aimed at stopping problems before they occur. Why not wait until the evidence is in? If we accept the view that risk accumulation can only be assessed in very gross terms, we can appreciate how much more difficult the whole process will be once we admit accumulation. Potential risks may be less alarming if we know that we have the ability to easily avert them before they become serious.
3.4 Satisfying the burden of proof
The fundamental nature of precaution is that we require that an exposure to risk be proven reasonably safe (i.e., not possibly dangerous) before it is allowed. Establishing this burden suggests a risk-free path to progress. Any exposure that cannot satisfy this “burden of proof” is not allowed. Alternatives assessment based on backcasting is a formal attempt to identify and implement safe pathways to progress that can satisfy this burden of proof. On the other hand, we may adopt the assumption that an action or activity is safe until proven unsafe. Proof in this case requires going beyond a mere “possibility” of risk to statistical evidence. This is precisely the form of proof we rely on statistical analysis of risk to provide. In either case, the potential for a reasonable verdict suggests that no outside influences enter. That means that if the decision is ultimately made that an activity is in fact risky, eliminating it must not be hindered by sunk costs or other factors that make the eventual
resolution subject to dilemma. This presents a further pragmatic argument for not adding risk potentials. New risks that cannot be subject to subsequent reversal, based on sufficient proof, should be avoided. It is the only fair way to administer the burden of proof requirement. The reverse burden, to prove an activity unsafe rather than safe, is clearly fatalistic in the sense we have described here. It assumes we take the absolute least-cost approach: Proceed with the activity, not incurring costs (direct or opportunity) of avoidance, until the activity is proven unsafe. On this view, the benefits of, say, building an expanded nuclear power structure dictate that we proceed and do so until we have substantive evidence (including statistical data) that suggests that such an undertaking is unsafe. Likewise, we proceed with the accumulation of any other risks that may fit this criterion. Down the road, we may find evidence suggesting that one, some, or all of these endeavors are unsafe. Again, accumulation has created risks that might not be easily reversed, and we find ourselves in a risk dilemma. The bargain was struck based on a burden of proof, the reversal of which now cannot be carried out, and we are stuck with an obviously (statistically) dangerous action without recourse. Allowing difficult-to-reverse risks to accumulate under any burden of proof requirement proves unfair and ultimately dangerous. If we cannot reasonably assure ourselves that such dilemmas will not occur, the burden of proof condition is moot, and we should avoid the activity.
3.5 A possibilistic model of catastrophic potentials
In assessing the potential growth of catastrophic risk aggregated across exposures, we have shown that uncertainties about the measurement of catastrophic risk and the proper response to it combine in a complex fashion. While we may, perhaps, be able to specify the direction of change in probability of catastrophic loss as a result of adding risks, our assessment of the level of change will be fuzzy. Even the concept of “all or nothing” is not as straightforward as it seems, when applied to a very uncertain environment. This is simply the result of our having to deal with what are bound to be very fuzzy measurements of risk due to the degree of knowledge imperfection involved. How this risk grows will also be influenced by our decision criteria. As suggested in the Appendix to Chapter 1, these criteria
will be fuzzy as well due to the complexity and dynamics of any thresholds we set with regard to a possibilistic minimax treatment of high-stakes risk. As a result, the fuzziness of both can combine to increase our uncertainty even further. We suggest here that the best way to view such growth may be in terms of ultra-fuzzy sets. Ultra-fuzzy or type-2 fuzzy sets represent our uncertainty about the fuzzy nature of possibility itself.7 Expressing our “uncertainty about uncertainty,” they enter when multiple sources of uncertainty exist. We may simply not know enough about how these sources of uncertainty interact to be able to specify their combination in terms of a simple fuzzy membership function. The ultra-fuzzy approach assures that we properly capture the high degree of uncertainty that occurs when assessing high-stakes risk potentials. This enhanced uncertainty has significant implications for the way we deal with catastrophic risks on an individual, organizational, and societal level. A representation of the possibility of disaster, in terms of fuzzy membership, is shown in Figure 3.2. Here, potential acceptance of risk exposures ranges from accepting no risk to accepting all risks. The representation takes into consideration the fuzzy nature of the probability estimates of individual risks. It also reflects the fact that our decision criteria may interact. The growth of the possibility of disaster is therefore not just a matter of growth in probability. It also reflects the permissiveness of our risk decision criteria. This in turn depends on where and how we set our risk thresholds.
Figure 3.2 An ultra-fuzzy representation of how catastrophic risk grows. (Possibility of disaster, on a scale of 0 to 1, plotted over a range of acceptance running from accepting no risk to accepting all risk.)
As these risk
thresholds are themselves fuzzy, they add to the overall uncertainty inherent in any assessment of the possibility of doom arising from the accumulation of risk.8 With reference to the figure, we would obviously prefer to be closer to the point of minimizing our possibility of doom (the far left side of the figure) in our ultra-fuzzy characterization of disaster possibilities. How we get there, or even knowing how close we really are, is made much more complex by the introduction of extreme knowledge imperfection (i.e., fuzziness). For example, the fuzzy character of the growth of risk illustrates why prioritization is a very uncertain gambit. Prioritization assumes that we can continue to ignore the potential for risk, addressing it only after that potential somehow manifests itself. The time to start thinking about risk management is not when the possibility of doom becomes distinct in the fuzzy risk landscape (i.e., to the far right of the figure). At that stage, we are too close to the potential for trouble, and the options are just too uncertain. Given the ultra-fuzzy nature of the situation, which risks can we confidently characterize as worse than others? And how, given this fuzziness, do we even know we are going in the right direction? Note that the gray area within the ultra-fuzzy region supports a variety of possible pathways to and from risk, including some very chaotic ones. The ultra-fuzzy interpretation also suggests why discussion in terms of burden of proof may demand precautionary action sooner rather than later. In terms of our ultra-fuzzy interpretation of risk accumulation, the mere potential that we might find ourselves in a position where doom becomes a distinct possibility suggests that the burden of proof requirement may, at that point, be moot – especially if return to a point of significantly diminished possibility is problematic. In a highly uncertain atmosphere, a permissive attitude toward risk has the effect of obviating important burden of proof considerations, however we construct them and upon whomever we finally decide they should fall. Last but certainly not least, the ultra-fuzzy interpretation undercuts the whole idea of a mechanistic approach to high-stakes risk based on precise calculation of outcomes. Mechanism implies some orderly model upon which we can base our actions. Due to the complex uncertainties that exist, this simply cannot be the case with high-stakes risks. Uncertainty is a fundamental part of their nature. This,
combined with the finality of catastrophes once they occur, makes it imperative that we consider decision options that stop the spread of risk as early in the process of planning for progress as possible (i.e., while the possibility of disaster remains low). The ultra-fuzzy representation can be viewed as a somewhat stylized expression of our intuitions about real-world risks. Too many complex sources of uncertainty combine for us to make any accurate determination of potential pathways toward disaster. This does not mean that we cannot make rough, yet useful, assessments. We can also craft risk strategies based on those rough assessments. If we can assess, somewhat imprecisely, that we are at or nearer the “impossibility” end of the disaster spectrum, this should offer us the comfort that we are doing everything possible, within our abilities, to reduce the threat of high-stakes risk. The value of fuzzy models in inherently uncertain situations is that these models, unlike their precise counterparts, suggest further questions, not just answers.
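A crude stand-in for the ultra-fuzzy picture can be sketched with intervals (Python; all figures are our hypothetical inventions): each exposure’s possibility of danger is known only as a range, and the threshold is itself a range, so verdicts come in three flavors rather than two:

```python
# (lower, upper) bounds on the possibility of danger for each exposure,
# a rough stand-in for a type-2 fuzzy membership.
exposures = {
    "drinking water": (0.0, 0.02),
    "novel compound": (0.2, 0.7),   # wide interval: very imperfectly known
}

THRESHOLD = (0.05, 0.15)            # the de minimis level is itself fuzzy

def verdict(bounds):
    lo, hi = bounds
    if hi <= THRESHOLD[0]:
        return "practically impossible"
    if lo >= THRESHOLD[1]:
        return "possible: avoid"
    return "indeterminate: overlaps the threshold"

for name, bounds in exposures.items():
    print(name, "->", verdict(bounds))
# Intervals that reach into the forbidden region trigger avoidance under
# the uncertainty-modified minimax; overlap calls for further scrutiny.
```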
3.6 Is there a “natural” level of risk?
The idea of an all or nothing application of precaution has strong implications. All or nothing risk avoidance depends critically on how we define the region of possibility, or alternatively the dividing line between acceptable and unacceptable probability/loss combinations associated with any exposure to risk. As we have shown above, any workable definition of minimax avoidance of risk, on either a preactionary or precautionary basis, requires that we identify some non-zero risk level for exposures to high-stakes risk that provides a satisfying threshold for action. This de minimis level is complex, and as a result, it can only be imperfectly known. Setting this fuzzy level too low, then, has the effect of making all action risky, and hence prohibited. Set it too high, and we face the catastrophe problem. Practically, our definition of “possibility” entails some degree of acceptance or what we might consider a form of reasoned fatalism. Reasoned acceptance based on possibility is unlike the fatalism entailed in risk dilemmas, which we might more properly describe as mere acquiescence. Many rationales exist in support of a variety of risk acceptance criteria.9 The most reasonable of these appear to be based on some natural “background level” of risk.
Figure 3.3 Observed properties of the natural level of risk:
1. Sustainable (as evidenced by evolutionary history).
2. Irreducible (with respect to the “subsistence level”).
3. Low uncertainty (of the “knowledge imperfection” variety).
The strongest support for
this natural approach is a rather remarkable streak of evolutionary survival among both humans and the other species on this earth. As evolutionary survival implies at least some sort of minimum conditions for the continuation of life, we might define acceptable risk based on some subsistence level of human existence. The question becomes: What is the risk inherent in that level of human progress just necessary to satisfy the most basic requirements of life? At this subsistence level, we define a natural level of risk as sharing the observed properties outlined in Figure 3.3. Sustainability supports the notion of a probability of disaster sufficiently low as to define at least a practical impossibility. Empirical verification comes, once again, from an observed run of evolutionary survival of life forms on earth that goes back millions of years. Natural risks are something humans can live with, literally. When it comes to our existence, risk management in the high-stakes domain considers cost/benefit only in its widest sense. The balance is a natural one, which we have shown above as constituting the “balance of life.” This balancing of the powers of nature is suggested by the coexistence of points 1 and 2: a natural, sustainable risk level exists which supports life at some minimally “liveable” level. Of course, sustainability need not imply that risk avoidance requires that we live at subsistence level. It merely sets a tangible reference point for minimal risk. In fact, the true measure of progress is how much we can enrich our lives without exceeding the subsistence level of risk. The second point, irreducibility, is a function of the fact that subsistence defines a floor, or minimum level, below which life is simply not worth living. We need to identify risks that, once again, at the subsistence level cannot be reduced further without jeopardizing the most basic level of our immediate existence (the “balance of life”). Related is the idea that we should not add to the background level of risk. Not adding to risks represents the basic idea that, under extreme uncertainty, we need to be cautious about adding
risks, especially imperfectly known ones. By establishing a threshold definition of acceptability based on the rough notion of “possibility,” we protect against adding to the background level of risk, thereby assuring sustainability. Last but not least, natural risks are fairly well known to us. They are basic, and as such, their relatively uncomplicated nature allows us to identify their probability characteristics with some degree of precision. Again, this precision need not be absolute. We just need to be relatively confident that there is no significant overlap with our risk threshold. The property of being well known suggests that, given the wide natural history of such risks, the associated degree of fuzzy uncertainty is low. We can in this fashion say with relative confidence that, for example, drinking water is not risky. This last point suggests that many of the difficulties of assessing “unnatural” risks lie in the fact that these assessments are very imperfect. As imperfection implies an interval that may extend into the forbidden region of risk, such exposures are logically more prone to avoidance based on the uncertainty-modified minimax criterion that underlies precaution. The natural background level of risk to humans, for example, is made up of various threats to life from natural causes. These include lightning strikes, accidental drowning, falls, insect or animal bites, and the like. Generally, these natural occurrences present a fatality risk to humans of roughly 10⁻⁶ (one in a million) to around 10⁻⁵ (one in one hundred thousand). To reiterate, this natural level reflects a degree of possibility that can only be defined in an imperfect, fuzzy fashion (see the Appendix to Chapter 1).
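By way of a rough numerical screen (Python; the probabilities are order-of-magnitude illustrations of our own, not data from the text), exposures can be compared against this natural background band:

```python
NATURAL_BAND = (1e-6, 1e-5)    # annual fatality risk from natural causes

exposures = {                   # illustrative annual fatality probabilities
    "lightning": 1e-7,
    "accidental drowning": 9e-6,
    "motor vehicles": 1.3e-4,   # anticipates the example in the next section
}

for name, p in exposures.items():
    if p <= NATURAL_BAND[1]:
        print(f"{name}: {p:.1e} -> within the natural band")
    else:
        print(f"{name}: {p:.1e} -> {p / NATURAL_BAND[1]:.0f}x the band ceiling")
```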
3.7 On the notion of “selective fatalism”
The possibility threshold argument, based on natural criteria, is a logical blend of pragmatic and theoretical considerations that help insulate us from catastrophic risks. Yet, what do we make of the fact that, based on purely descriptive observation, people do accept certain activities and exposures that entail the distinct possibility of risk? This despite the fact that they are contrary to the dictates of minimax precaution. On a society-wide basis, consider the risk inherent in operating motor vehicles in the U.S. If we use a very basic calculation for the probability that a member of the U.S. population will perish in an automobile accident, we divide 40,000 fatalities a year
by the current U.S. population of 300,000,000 and get a number of around .00013, or 1.3 × 10⁻⁴.10 These numbers would suggest that driving entails a risk of individual catastrophe some 1000 times greater than what might be suggested as a reasonable “natural” background risk, such as being struck by lightning.11 Certainly we would consider individual risk probabilities that are so low to be “practically impossible” (despite the unfortunate few a year that succumb to the peril). These observations suggest a degree of what can be characterized as selective fatalism.12 We apply precaution (avoidance) to some risks and fatalism to others. That is, we seem to accept some risks of catastrophe that are distinctly possible while rejecting others. How does this observation coincide with the notion of a natural level of risk acceptance – or does it? A wide variety of theories have been used to explain, and justify, this hybrid approach to risk. Some researchers suggest that differential acceptance is simply a manifestation of the application of the cost/benefit calculus. We accept higher probabilities of catastrophe when the benefits are greater. Observations of human behavior in this regard simply reveal the cost/benefit preferences inherent in the actual choices we make about the world.13 Other suggestions have focused on observed human perceptions about high-stakes exposures that go beyond their likelihood. These include the extent to which we consider any such exposure to risk as equitable and fair, the degree to which an exposure is familiar to us, our sense of control over the exposure, and the extent to which the exposure is voluntary, as opposed to forced (see Figure 3.4).14 Attempts to reconcile fatalism with the precautionary approach suggest these properties define the “naturalness” of the exposure despite the fact that many of the risks we exhibit this selectivity toward are technological or otherwise “man made.” Indeed, what we consider acceptable man-made exposures often share many of the properties of their natural counterparts. Might driving a car have become a “natural” exposure in today’s world? This approach has the effect of raising the observed acceptance level in terms of probability/possibility thresholds. The distinction of how the exposure came to be is not crucial to the distinction of naturalness based on possibility. The question for all subjective acceptance criteria becomes, “what are the pragmatic implications of this differential acceptance?”
Figure 3.4 Beliefs that influence risk acceptance:
– Fairness/equity.
– Familiarity.
– Degree of control.
– Voluntary vs. involuntary.
– Degree of trust.
In terms of impacts
alone, the outcomes seem equivalent. What difference does it make whether we are severely injured or killed in the woods by a falling tree or in the street by a falling high-tech cellular telephone antenna tower? What, we might ask, do fairness, degree of control, and familiarity have to do with whether we survive or not? The deeper concern should be about the level of possibility these exposures represent. Does that level correspond to the natural background level of risk we deem generally suitable for the reasons outlined in Figure 3.3? Injury via a falling tree, it may be argued, is far more “natural” than that caused by a cellular telephone antenna tower erected by humans, but so what? The deeper problem with accepting or rejecting “possible” risks based on any of these differential factors is that in doing so we have not solved the catastrophe problem. The acceptance of exposures under these guidelines may still present the possibility of disaster. This does not mean that these perceived risk characteristics are meaningless. Rather than define naturalness, they may be an indicator of acceptable risk levels. The presence of these factors may signal a degree of naturalness within these exposures which relates directly to their relatively benign status with respect to risk. The implication is that exposures that are controllable, fair, understandable, in a word, “natural” are also less risky (in the possibilistic sense). For example, inequity in risk exposure may result in one group taking advantage of another, profiting from promoting a risky exposure that they may feel (rightly or wrongly) they will not fall victim to. In the same sense, controllability and an understanding of risk suggest that we may have adequate reason to consider the exposure truly benign, as opposed to wishful thinking. These characteristics may therefore serve as a reasonable heuristic adopted in lieu of a deeper investigation of the possibilities of danger. The final judgement of acceptability, however, must be based on the assessment of possibility with respect to background risk levels.
Lennart Sjoberg, of the Center for Risk Research at Stockholm University, has distilled these perceived risk factors into the idea he labels “tampering with nature.”15 Dr. Sjoberg has performed psychometric studies that suggest subjects base the acceptability of risk on the perceived degree to which an exposure reflects these characteristics. Among the qualities of exposures that tamper with nature is the potential for catastrophic impacts. This means that tampering with nature reflects manifestations of risk that exceed natural risk thresholds, as we have defined them. Such observations are valuable in that they suggest that humans may use qualitative indicators of the probability/possibility of events: Once again, exposures that suggest tampering with nature also suggest exceedance of the acceptability threshold.

These findings are in line with a body of psychometric assessment that finds consequences (losses) are a primary driver of risk decisions. The analyses confirm our logical assessment in terms of precaution: When the stakes are high, we avoid risk. Probabilities, except to the degree they might relate to credible possibilities (versus impossibilities), are ignored. We suggest further that practical impossibility, that is, the threshold of acceptance, is ultimately determined by the natural properties of the world. The relevance of explicit or implicit cost/benefit calculation to all this is non-existent. Related to this perception, and perhaps giving us a wider appreciation of what it takes to make a risk “natural,” psychologists have identified several related variables that people use to make decisions about high-stakes risk. We avoid tampering with nature, as reflected in these characteristics, because it increases the possibility of disaster. This makes the only legitimate applications of fatalism those based on naturally determined possibility thresholds.
3.8 Selective fatalism and dilemmas
The notion of a psychology of risk acceptance clouds the issue of the proper level of risk. In what sense can psychometric observations of the levels of risk people actually do accept serve as a criterion for those they rationally should accept? As shown above, while psychological reactions to risk may be an indication of the relative risk inherent in an exposure (in terms of what we have called its naturalness), they constitute neither justification nor final proof.
Genuinely voluntary activities in the face of risk are really irrelevant to the normative study of risk (i.e., the determination of what risks we should accept). They may be rational or irrational, even pathological. Examples include recreational risks (such as mountain climbing or skydiving), risky sexual behaviors, smoking, and various lifestyle habits. These self-imposed risks are functionally different from risks that are more properly considered external. They are very specific to the individual’s psychology. Pathological psychologies and behaviors may involve the potential for maliciously inflicted harm on either one’s self or others. To the extent we include abnormal behaviors in the study of risk acceptability, we severely distort the results. We would not, for example, consider suicides (roughly 30,000 a year in the U.S.) as constituting, to any degree, part of the foundation of a normative risk threshold. The aspects of risk perception studies that we can identify as voluntary are not properly part of the study of rational risk acceptance at all (except maybe in a deeply individualistic, psychological vein). These studies originated with psychologists, and they should be left to the psychologists. Our interests lie in the consequences of these behaviors.

What we are concerned with is risk that is not voluntary, in the sense of being “self-inflicted.” We would further propose that many of the risks people think of as voluntary are really not. Driving a car is often cited as a paradigmatic example of acceptable risk based on voluntary behaviors. But is it? We may not have much choice but to drive a car if we are to sustain ourselves with gainful employment in the modern world. We really have to ask ourselves how many of the risks we face are truly voluntary and how many are a matter of forced acquiescence, no matter how subtle.16 The issue becomes one of acquiescence to risk dilemmas. In the process of resignation, we may become desensitized to risk. This in turn leads to delusions of voluntary acceptance and control. To set our entire basis for the management of high-stakes risk purely on observed behavior is, therefore, not acceptable. To the degree people are able to get beyond forced acceptance, they rightly seek more justifiable criteria. The proper criteria are a product of the natural world and our place in it.

What is the real difference between cost/benefit and risk dilemmas in these situations? While acceptance of some positive level of risk is what makes cost/benefit satisfying, we gain no such satisfaction from accepting, or acquiescing to, risk under conditions of the
risk dilemma. Under risk dilemmas, differential thresholds of risk acceptance (“selective fatalism”) are driven by the strength of the dilemma, that is, how close the expected cost of avoidance (direct or opportunity cost), Y, comes to the loss potential, X (see Figure 1.3, Chapter 1). When the cost of resolving risk dilemmas is small, we can choose genuinely risk-free regimes. When the costs get high, acceptance is forced via potential dilemmas. We accept the risk of disaster because it is “too expensive” to do otherwise. Under a genuine cost/benefit, we achieve a balance, the ultimate result of which is freedom from worry. Under acquiescence based on risk dilemmas, we can achieve no such peace of mind, no matter how hard the cost/benefit apologists try to convince us.
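The strength relationship can be given a minimal numerical sketch. The ratio below is our own illustrative formalization of “how close Y comes to X,” not a measure proposed in the text.

```python
# Dilemma strength as the ratio of the expected cost of avoidance Y
# (direct or opportunity cost) to the loss potential X.

def dilemma_strength(Y, X):
    # Near 0: avoidance is cheap and a genuinely risk-free regime is open.
    # Near 1: avoidance costs approach the loss itself, forcing acquiescence.
    return Y / X

print(dilemma_strength(Y=1_000, X=1_000_000))    # 0.001 -> easy avoidance
print(dilemma_strength(Y=900_000, X=1_000_000))  # 0.9   -> deep dilemma
```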
3.9 The “tolerability” compromise
An attempt to reconcile acceptability based on “practical impossibility” with observed risk behavior underlies the concept of tolerability of risk. The most developed exposition of the concept guides regulatory policy with respect to high-stakes risks in the U.K., under the Health and Safety Executive’s (HSE) risk standards with regard to annual fatality risk to the individual.17 The HSE suggests that there exists a threshold below which high-stakes risks are clearly negligible (de minimis), and hence the exposures acceptable. This lower acceptability level corresponds to what we have identified as the natural risk level and is usually pegged at “one in a million,” or 10−6.18 An upper level is also pegged, based on observed (“voluntary”) risk-taking behavior. This level is usually set at an annual probability of around one in ten thousand (.0001 or 10−4), or sometimes as high as one in one thousand (.001 or 10−3). Risks that entail probabilities above this threshold are unacceptable under any circumstances. In between, the standard suggests a region of “tolerability.” In this regard, the HSE introduces the principle of making risk “as low as reasonably practicable,” or ALARP. The concept is shown in Figure 3.5.
– Unacceptable region: risk cannot be justified except in extraordinary circumstances.
– The ALARP or tolerability region: tolerable only if risk reduction is impracticable or its cost is grossly disproportionate to the improvement gained; risks should be periodically reviewed, for example, by ascertaining whether further or new control measures need to be introduced.
– Broadly acceptable region: negligible risk; need to maintain assurance that risk remains at this level.

Figure 3.5 Tolerability of risk criteria.19
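Treated (simplistically) as crisp numbers, the bands of Figure 3.5 can be sketched as follows; the edges used are the “one in a million” de minimis and “one in ten thousand” upper levels cited above.

```python
# HSE-style tolerability bands with crisp thresholds -- a simplification,
# since the text argues these levels are really fuzzy and contested.

DE_MINIMIS = 1e-6    # broadly acceptable / natural background level
UPPER_BOUND = 1e-4   # sometimes set as high as 1e-3

def tolerability_region(annual_individual_risk):
    if annual_individual_risk < DE_MINIMIS:
        return "broadly acceptable"
    if annual_individual_risk <= UPPER_BOUND:
        return "tolerable only if ALARP"   # reduce further as practicable
    return "unacceptable"

for p in (1e-7, 5e-6, 2e-4):
    print(f"{p:.0e}: {tolerability_region(p)}")
```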
At first glance, ALARP seems to be a reintroduction of cost/benefit, and it is in many cases interpreted as a loose version thereof. However, as closer examination of the principle suggests, its drafters were well aware of the problematic nature of accepting high-stakes risk on the basis of statistical reasoning. ALARP is therefore often cast in a dynamic framework. Risks in this region are tolerated, with immediate actions based on practicability, while longer term solutions are sought which reduce these risks below the lower (natural) threshold. This interpretation blends the theoretical goals, based on what may reasonably be interpreted as precautionary thresholds, with practicality. The question becomes whether the prospects for solutions to risk dilemmas are actively pursued. If not, the standards simply pander to the attractive potentialities of precaution while pursuing a “do what you will” (i.e., fatalistic) approach in practice. If applied without at least some degree of conviction to reduce risk to natural background levels, ALARP may be used as a justification of the status quo, where solutions are delayed indefinitely. ALARP formulations can therefore benefit by making the potential dilemma of risk acceptance more explicit.

Adoption of the notion of tolerability, in effect, recognizes that a region of risk dilemmas does exist. Beyond recognition, we need to establish whether the ALARP region perpetuates these dilemmas or actively attempts to solve them. To the extent we are left to make
the decision based on some sort of fuzzy cost/benefit that incorporates a vague idea of “reasonableness,” the whole concept offers no real guidance toward resolving the catastrophe problem at all. The result is fatalism based on acquiescence, in institutionalized garb. To the extent ALARP implies a temporary “holding pattern” that itself acts as a trigger to seek alternatives, it can promote active risk avoidance. The question of how long we stay in this pattern before acceptable alternatives materialize becomes an open issue that, in the context of an ultra-fuzzy view of risk accumulation, could have very immediate (and very bad) consequences. We have already reviewed the tricky dynamic aspects of risk accumulation. We return to the notion of when action may in fact be “too late” in a later chapter. Unfortunately, most applications seem to be more in the spirit of a wait-and-see attitude than of a very temporary approach aimed at spurring sound action.

This is further suggested by the fact that most allowable (i.e., tolerable) thresholds give tremendous leeway to the presence of risk. Suggestions that serious individual risks be allowed at a threshold approaching 10−4 (one in ten thousand) or even 10−3 (one in a thousand) are tantamount to no risk control at all. If we factor in uncertainty, which suggests that measurement errors and other factors could result in assessments of risk that run an order of magnitude higher, or perhaps more, such acceptability bounds verge on the territory of statistical risk. Thresholds that could conceivably approach one in a hundred (10−2) would suggest boundaries not at the limit of reasonable safety but at the point of utter outrage by those potentially affected. These limits might satisfy those who think “seeing is believing,” but allowing high-stakes risk at the level at which it becomes visible to the naked eye, so to speak, is extremely dangerous. It is brinksmanship for which the penalty is quite literally doom. Promulgating tolerability limits often seems more a matter of pushing the envelope of human tolerance – how much we can “get away with” – than of striving to do what is safe. The primary concern of those who promulgate tolerability limits must not be with how unachievable a genuinely zero level of risk is but instead with how far we really are from that level. While the exact level, or even a reasonable fuzzy level, of risk acceptance based on de minimis possibility is controversial, the fact that we currently acquiesce to risk considerably over that level is not. A properly precautionary approach
suggests that wherever we may find ourselves within the currently specified level of tolerability, we should at least start heading in the direction of zero risk and do so as fast as we can. Selecting some appropriate acceptance level of risk is something we can debate once we get nearer to it.
4 Precaution in Context
We have defined high-stakes or catastrophic risks as, roughly, those that entail the destruction of the entity. They are final and irreversible in their effects. It is this terminal nature from which the catastrophe problem arises: How do we approach decision when we don’t get a second chance to get things right? In the process, we have steered clear of context – that is, the fact that the notion of catastrophe is relative to the entity at risk. What is catastrophic to the individual may have little effect on society. Regional issues may themselves pale when compared to the fate of the world at large. One reason we have not emphasized context is that, to a great extent, the notion of high-stakes decision criteria can be applied generally. Avoidance of disaster makes sense whether that disaster affects the individual, a community, or even a business enterprise. Nonetheless, contextual interactions in high-stakes decisions can occur. We need to be wary of these when establishing a wider program of risk management. Fortunately, these interactions can be accommodated using the threshold approach to precautionary risk management. To do so in an effective and equitable manner, we need to take a coordinated view.
4.1 The hallmarks of precaution
When we reference context with respect to high-stakes risks, we consider the size of the body affected, usually in terms of human beings but also perhaps in terms of assets, the affected ecosphere (both plants and animals), or some combination of these.1 We must also
consider the composition, or type, of entity affected. Composition, for example, contrasts human with non-human, human with business or organization, business with non-human, and so on. Composition can, therefore, interact with sheer numbers in the final determination of context. These compositional interactions are important because they cause us to view precautionary action in each context in a wider way. For example, we may ask, is business survival more important than individual survival? Or, is ecological survival more important than that of humankind? These differences in population affected and composition can determine how we apply precaution.

We would argue, however, that certain aspects of precaution are valid across contexts. The hallmarks of precaution include the desire to avoid destruction of the entity, the irrelevance of cost/benefit in doing so, and the potential for precautionary dilemmas (Figure 4.1). These commonalities are applicable to individuals, organizations, and society at large. At the lowest level, the individual, we usually regard these hallmarks as intuitive, even instinctual. We have attempted to present these intuitions in a wider formal framework of high-stakes decision-making, above. We now examine their applicability across wider contexts.

Types of precautionary actions are easily identifiable across contexts. Individual precautions include looking both ways before we cross a street and vaccination against communicable diseases. Businesses take financial precautions by incorporating and buying insurance. They also spend money on physical protection in the form of sprinklers and fire alarms. Social precautions include the regulation of food safety and driving habits. When catastrophe must be accepted, cost/benefit comparisons often provide little comfort.
– Avoidance of potentially catastrophic, that is, terminal and irreversible, losses with respect to the entity.
– Irrelevance of cost/benefit comparisons.
– Pervasive uncertainty due to knowledge imperfections.
– The potential for risk dilemmas: We may find ourselves in the position of being “doomed if we do, doomed if we don’t”.

Figure 4.1 The hallmarks of precaution.
Risk dilemmas often surface: How do I protect myself from harm when my job involves hazardous activities? What can my business do when insurance becomes unaffordable? How does society respond to the threat of global climate catastrophe given our dependence on carbon-based energy sources? The problems and solutions are common among contexts. So while context does have an effect on how we respond to catastrophe, the commonalities remain stronger.
4.2 Context and risk acceptance criteria
In applying precaution, we need to be aware of contextual differences and adjust our decision-making processes accordingly. For example, risk acceptance thresholds may be set considering only individual exposures. Yet, while a one in a thousand annual chance of an adverse outcome may sound exceedingly remote in the individual context, it means that 300,000 people will be affected in a population of 300 million, and a million in a population of one billion. To prevent the widespread potential for catastrophic risk from “getting away from us,” we need to pay close attention to how any target risk criteria apply across contexts.

Contextually, in terms of population exposures, we may think in terms of maintaining a constant absolute degree of injury, fatality, or other damage – or, rather, of its avoidance. To do so, our risk acceptance criteria need to be adjusted to maintain this constant rate. For example, if we maintain a risk level of one in a million for the individual, this implies the potential for ten such incidents in a population of ten million. If we want to maintain some absolute possibility of only one potential loss, regardless of population size, then our probability of loss among a population of ten million needs to be adjusted down to 10−7 (or one in ten million).2 We show generally how risk acceptance criteria adjusted to maintain this risk parity might look in Figure 4.2.
Figure 4.2 Differential risk acceptance criteria (individual vs. social). (The figure plots probability, from 0 to 1, against consequences – the population imperiled, from the individual through the community to the world – with the boundary between the acceptable and unacceptable regions falling as the population imperiled grows.)
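The parity adjustment itself is mechanical. A minimal sketch follows; the aversion exponent in the second function is a hypothetical extension for the disproportionate “dread” of mass losses discussed below, not a parameter given in the text.

```python
# Hold the expected absolute number of losses constant by scaling the
# individual probability threshold down as the exposed population grows.

def parity_threshold(population, max_expected_losses=1.0):
    return max_expected_losses / population

print(parity_threshold(1_000_000))    # 1e-06, the "natural" level
print(parity_threshold(10_000_000))   # 1e-07, as in the example above

def averse_threshold(population, base=1e-6, ref=1_000_000, alpha=1.2):
    # alpha > 1 lowers the threshold disproportionately for mass losses.
    return base * (ref / population) ** alpha

print(averse_threshold(10_000_000))   # ~6.3e-08, below strict parity
```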
By using a single probability threshold among contexts, we introduce the possibility that a greater and greater absolute number of individuals will be affected. Assessing risk thresholds at the societal level emphasizes the potential incongruity between the notions of “possibility” and “acceptability” of risk. Expecting ten serious incidents among a population of ten million implies an individual risk of one in a million, yet it is obvious that the loss of some number of individuals will be accepted. “Practical impossibility” always implies some acceptability of the inevitable. The alternative is a strictly zero threshold, which is unworkable, both practically and theoretically.

The difference in risk thresholds is often made plainer by translating from individual risk to affected population. For example, the significance of the difference between an individual risk of one in ten thousand (.0001) and one in one hundred thousand (.00001) may be difficult to grasp from the probability numbers alone. In the context of an affected population of 100,000, it is the difference between ten estimated fatalities a year and one. In fact, the call to precautionary action is often based not on individual assessments of risk but rather on their impact on a wider population: Twenty thousand people in the U.S. perish each year due to . . . (implying an annual individual probability of one in ten thousand).

It is sometimes suggested that we display a greater aversion to losses that may involve a wider context. So, we may dread the loss of 1000 at a time more than 1000 occurrences each affecting only an individual. This implies that we apply a disproportionately lower probability threshold as the size of the population exposed grows. Such differentials raise the thorny problem of differential valuation. Do individuals become more “expendable” in this wider sense? The tradeoff is implied, as lowering the probability threshold for mass losses disproportionately has the effect of increasing the acceptable probability for
the individual (at least relatively). The idea is reflected in the notion that a few may need to be sacrificed for the sake of preserving the many. The problem enters due to the complex interactions among individuals and the wider (social) context.3 Considering individuals expendable in terms of this sort of argument may seem reasonable until the individuals themselves start to feel expendable. Then, individual behaviors could act to increase the risk to society. Should individuals take less care (i.e., incur greater risk) with respect to immunizations against communicable disease, for example? On the one hand, these immunizations reduce the risk to the individual. On the other, they affect a much wider group. Individual disease may lead to pandemic. This means that while we may be able to judge certain risks based on tradeoffs from a wider contextual perspective, the question becomes, should we?

This discussion suggests that all contexts are in some way interdependent, and risk acceptance has to be treated with this inter-dependence in mind. We would suggest that just as the balance of life dictates choices on an individual level, so it does on all levels. Differential risk acceptance, in terms of probability thresholds, is tuned among contexts with the balance of nature in mind, just as it is within the context of the single individual. We need to identify first how all the individual contexts fit together to make life “work,” and then protect that worth based on differential thresholds.
4.3 The problem of valuation
Context also has effects on the measurement of risk. What is the appropriate measure of consequences, both between contexts and within them? Monetary valuation is often eschewed when it comes to decisions on human safety. This might suggest that things that can be valued in money take a lower spot on the echelon of risk treatment. Yet, at some level, lives, health, and money become somewhat fungible, in a practical sense. A severe financial disruption, either on an individual or social level, can have physical repercussions on human health and safety. It is not a question of valuation at this level but, once again, of inter-dependencies. A flood that causes $10 billion of damages will have a serious, perhaps catastrophic, effect on human life, whether or not direct casualties result. A nuclear holocaust may leave life “livable,” in the purely physical sense, but at
what cost to the true value of life? Throughout the valuation process, both between and within contexts, it behooves the risk expert to recognize that there may be things worse than death. There is no point in eschewing the bargain at the moment of truth, when we can no longer do anything about it.

As we have shown, valuation complicates the assessment and treatment of risk among contexts related to size. Deeper issues surface when comparing the qualitative aspects of context. How do we properly value the loss of an ecosystem? And in what sense can we make comparisons between, say, the loss of an ecosystem and the loss to human populations? Valuation is crucial in that it dictates what is important to us. Reducing all valuations to a common measure makes losses commensurable, in terms of comparability. The problems of applying monetary valuations throughout contexts are obvious: Those things that cannot be conveniently valued are usually ignored. Valuation of loss remains one of the great challenges of inter-contextual decisions about risk.

Monetary valuation should not, however, be dismissed at face value. Above all, we need to realize that the fact that the worth of something is difficult to evaluate in monetary terms does not mean that it does not have value. Monetary losses are often adjusted based on the disutility of loss, a measure that attempts to reduce complex valuations to a single figure. Disutility can sometimes be assessed based on comparison of tradeoffs that are not necessarily monetary in nature. Both monetary and non-monetary decision criteria are combined in applications of multi-criteria decision analysis (MCDA).4 Development of MCDA, along with appropriate valuation methods in the field of catastrophic potentials, can help focus inter-contextual discussions of risk.
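As a rough illustration of how MCDA might combine monetary and non-monetary criteria, here is a toy weighted-sum sketch; real MCDA methods are considerably more elaborate, and the options, weights, and scores below are invented purely for illustration.

```python
# Toy weighted-sum MCDA: each criterion scored 0 (benign) to 1 (worst),
# so a lower aggregate score is better.

options = {
    "road transport": {"monetary cost": 0.4, "human risk": 0.7, "eco risk": 0.5},
    "rail transport": {"monetary cost": 0.6, "human risk": 0.3, "eco risk": 0.4},
}
weights = {"monetary cost": 0.2, "human risk": 0.5, "eco risk": 0.3}

def mcda_score(scores, weights):
    return sum(weights[c] * s for c, s in scores.items())

for name, scores in options.items():
    print(name, round(mcda_score(scores, weights), 2))
# road transport 0.58, rail transport 0.39 -> rail preferred here
```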
4.4 Inter-contextual effects of precaution
A proper view of risk in context takes into account the wider balance of life. It is needed to promote a sensible course of avoidance in the face of catastrophic risk. Catastrophic in what context? All of them! Interaction suggests that precaution is relevant to all significant endeavors of life, from commerce and individual lifestyles to the community and the world at large. One way of coordinating response to catastrophe among contexts might be the use of differential risk
acceptance criteria based on context. The practicality of the possibility of disaster is applied in the narrower context, in consideration of the wider context. Incongruities in the differential application of thresholds from an intra-contextual standpoint do not exist under this wider view, because some element of sacrifice (in terms of risk acceptance) for a wider goal (human survival) may be necessary to maintain this wider world balance.

The issue of prioritization enters once again, but now in a complex fashion. Does it make sense to give global risk issues priority for the reason that they are larger? The complexity enters because the global society consists of its parts – individuals. Can we segregate individual considerations from societal ones or vice versa? The comparison is often obscured by the fact that the tradeoffs may not be observed directly. When precautionary costs are expressed in direct monetary terms, differentials are usually assessed based on alternative uses of these monies. We may, for example, decide to spend $8 billion on highway guardrails to prevent fatal road accidents. That means $8 billion less to spend on other precautionary activities. To the extent all resources are finite at any point in time, failing to spend $8 billion on the other precautionary activity, say improved highway signage, may mean we have to live with some other risk generator. Finite resources, rather than technological challenges, may seem to be the root cause of this somewhat convoluted version of the precautionary dilemma: We are doomed if we do spend the money on guardrails (due to poor signage) and doomed if we don’t (due to the absence of guardrails). Perhaps, again, we should prioritize. Yet, to the extent some priorities remain unaddressed, the possibility of catastrophe remains. The answer, if there is one, would seem to be efficient innovation toward alternatives that are both inexpensive and safe. This may sound like an intolerable contradiction, but it seems no less ridiculous than continuing some activity until we perish, just because we don’t know how to fix it. The only countervailing consideration here is the low probability of the event. But that only really works if the probability of the event is low enough.

Now we can assume that, by definition, every risk a society faces collectively is faced by all its members. As shown above, we can adjust the individual probability of loss in terms of a societal threshold, by either maintaining a one-for-one threshold relationship based on the size of the group or building in some degree of risk aversion (i.e., the impact
of the loss of 100 people is perceived as having more than 100 times the impact of the loss of an individual). In principle, the adjustment for scale effects is straightforward. It does not follow, however, that this relationship holds in reverse. That is where the notion of “sacrifice” enters. A vaccine may have the potential to save millions, but known adverse reactions suggest that some proportion of those vaccinated may also perish. Implied in the notion of non-zero probability thresholds for risk acceptance is the acceptance that some proportion will be sacrificed in any risk acceptance decision that affects multitudes. While I might not like it very much if that affected person turns out to be me, it does not much matter if I accepted the bargain and it was fairly administered. It amounts to accepting one’s fate in such circumstances. The problem gets sticky when societal preservation entails higher individual risks. Technically at least, the differentials can be expressed in differential (i.e., risk averse) risk acceptance criteria. Moral acceptance is another question, though there is no indication that it in any way derails precaution as a basic principle of risk avoidance.

Individual risk decisions can also affect society. It may seem that a purely personal activity – my dietary decisions, for example – might only affect the individual. On the basis of pure numbers, however, individual decisions may reach proportions that imply epidemic levels. Unhealthy personal lifestyles may affect the productivity and even the long-run viability of a society. It is not unimaginable that society may take action, as a whole, to discourage such behaviors (i.e., eliminate any wider risks to the societal unit). Failing to take precautions on the individual level may result in greater potential for social catastrophe, even when the activity may at first glance appear very “personal.” There are also behaviors that may seem neutral from the standpoint of the individual. For example, producing excess household waste (i.e., garbage) may have an imperceptible direct impact on the individual, yet may have, through accumulation, serious adverse effects on the community. So, failing to take precautions on the individual level can affect the risks faced by society as a whole. While the interactions may at times be complex, they are at least theoretically amenable to treatment utilizing an integrated view of precautionary thresholds.

Can individually precautionary behaviors adversely affect societal risk? Selfish behavior with respect to risk can indeed have the effect
of increasing social risk. In times of crisis, the hoarding of food, an overtly precautionary measure by some individuals, can imperil the greater number. While precaution may be a very natural individual response to danger, being part of a wider community entails coordinating these goals with it – for the greater good of all. We sometimes need to control other instinctual urges to get along in society. The issue is related to the parable known as the “tragedy of the commons”: What appear to be individually optimal solutions can end up in disaster for the community.5 By the same token, precaution on a community level, perhaps driven by individual decisions, may increase risk potentials in another community. These wider equity impacts need to be considered in the societal decision-making process. Once again, nothing suggests that such allocation decisions are unsolvable or that there is a reason why precaution is, in principle, unworkable. Similar inefficiencies can, and do, arise from expected cost/benefit decisions (i.e., whether the costs and benefits are shared equally). There is also no reason to believe that such inefficiencies can only be redressed on a cost/benefit basis. Inter-contextual difficulties with precaution can be addressed by carefully analyzing the effects of precautionary regimes across all contexts and adjusting differential risk thresholds as needed.

As an example of inter-community effects, consider the precautionary attitude toward the pesticide DDT. It is sometimes argued that the industrialized world can afford to be precautionary toward DDT and other precautionary targets, such as genetically modified crops, as any opportunity costs affect it minimally. On the other hand, less industrialized, developing nations cannot afford the counterrisks of insect-borne disease and inadequate food supplies. Yet all we do by reversing the focus of precaution is replace one catastrophic loss potential with another. Developed nations are beyond dilemmas in these cases due to adequate food supplies (without the need for genetic modifications) and the relative eradication of insect-borne disease (without the need for pesticides with potentially catastrophic human effects). In other words, they have alternatives that less developed communities don’t. Rather than provide less developed nations with an option that is not really an option at all (rather, a risk dilemma), we should allow them to share the alternatives.
4.5 Alternatives assessment across contexts
Ultimately, inter-contextual risk tradeoffs in the context of precaution reflect the potential for risk dilemmas. If the trade, or sacrifice, seems reasonable on the basis of proportionality – that is, the cost is “low enough” – there is no problem. If, on the other hand, the sacrifice entails a cost that approaches the loss we are trying to avoid, we have a dilemma. In such cases, alternatives assessment is in order. In the face of a difficult political struggle, we may choose to go to war. That decision entails the sacrifice of some for the good of the many. Psychological, moral, or idealistic inducements aside, from the individual’s standpoint a strictly precautionary approach may be to avoid conscription. The social response may be forced conscription based on some sort of lottery system for selection (to assure fairness). There remains a deep risk dilemma. While we may be saving the lives of many, we may lose many as well. The favorable balance in terms of the pure numbers (roughly, costs vs. benefits) presents far too callous an assessment of the worth of any one life. Might not a negotiated settlement have been an alternative? A devout pacifist would surely suggest so.

Businesses are good at precautionary risk management when it comes to their own domain or context – which often does not extend beyond their owners. They buy insurance, they set up legal structures, they transfer risk contractually, and they take physical precautions with plant, equipment, and other physical resources. Likewise, individuals understand precaution. The ultimate precautionary risk manager may in fact be Mother Nature. There is no cost/benefit in nature – nature adapts to a wider balance that promotes survival while remaining at peace with itself. The Gaia Hypothesis suggests the earth acts so as to maintain its stability, and hence its own survival and that of its indigenous species. In this regard, preservation may include the mechanism of making life hard for, and eventually expelling (through extinction), those life forms that make it hard for the earth to do so. The process is not adaptive in the sense of projecting ahead and adjusting accordingly, or of incrementally weighing the costs/benefits of this course or that. It is driven by a mission or goal – ultimately a view or model that is held in its deeper psyche, for lack of a better word. It is a natural backcasting process, if you will. Alternatives are “chosen” based on what it takes to fulfill these goals (not that we
should pretend to fully understand even what those ultimate goals may be). We can go along, making these goals (as best as we understand them) our own. This is really what it means to be in harmony with the world. The process of course requires a good bit of trust. But ultimately the question is who knows better, this spiritual force that created and drives the natural world or us? Given the fact that even our most sophisticated creations pale by comparison to natural ones – for example, the electronic computer versus the human brain – we probably should be more apt than we are to respect nature. The fact that an unspoiled natural world may “know better” is reflected in a variety of natural and ecological philosophies geared toward increasing respect and preservation of the natural condition. These views suggest a deeper spiritual authority in nature. Our concern here is the fact that this deeper existence improves our possibility of survival within the system while making life a worthwhile experience in the bargain. In the end, it is unlikely that we can separate the technical conditions for survival from the spiritual ones.
4.6 The need for coordinated goals
When all members of the group face the same or similar potentials for loss, individualized choices about risk may adequately represent those of the group. In some cases, the “invisible hand” of self-interest can adequately guide the group as a whole. The individual’s decision to look both ways before crossing a busy street helps preserve that individual’s survival and, as a result, that of the group. Where groups are affected disparately, individual choice may have differential, possibly adverse, effects on the group. We may, for example, seek individualized economic benefits for survival while in the process diminishing the survival potential of others, as suggested in the classic commons problem described above. As a result, we may need to introduce some form of social coordination or coordination of action by a higher authority. To do so, we may need to commensurate the measurement of loss potentials between contexts. This is no different from the requirement that cost/benefit analysis commensurate cost and benefit measures among stakeholders. The challenge is determining a common basis for this measurement, which in the case of cost/benefit is almost universally taken as a monetary valuation. This approach
faces many issues, among them properly identifying and evaluating externalities.6 These external costs may fall outside the standard accounting processes and therefore result in imperfect comparisons of costs among stakeholders. Once again, we find challenges to effective decision-making that stem from the decision framework and are independent of the chosen decision criteria. The difficulty of balancing the potential for risk across contexts can affect cost/benefit balancing as well; it is not unique to precaution.7

Coordination of precautionary regimes in wider contextual settings can be achieved in this broader view of precautionary action. Government regulation, for example, can proceed on the basis of properly specified precautionary thresholds. The proper response to such thresholds, by those whose goal is to further progress, is a preactionary assessment of alternatives for achieving such goals. In this way, socially established precautionary thresholds become a source of common guidance among the population rather than a mechanism of strict regulation. Properly understood and articulated, any involvement of governmental authority in this regard itself becomes preactionary rather than post-fact. It is likely that by properly anticipating risk, much of the friction that follows from costly and often ineffective post-fact regulation of risk can be avoided.

Allocation of risk among contexts is not a work of detailed human intervention. We need not resort to some sort of complex input–output analysis carried out by a central authority, matching risk levels across social, economic, and environmental contexts. Ultimately, setting risk thresholds is a matter of natural law. It is that natural law which preserves a balance of risk acceptance in a wider sense, promoting survival for all. Our faith in the validity of such natural law is, once again, driven by the remarkable streak of evolutionary success that has come before us. Yet, even natural laws may need protection against those who, for whatever reason, would choose to violate them.
5 A Reassessment of Risk Assessment
In the analysis of high-stakes risk, as in the case of statistical risk, the distinction between risk assessment and risk management is important. Risk assessment involves identifying the probability/consequence characteristics of loss, or what we have considered the primary drivers of what makes something “risky”. Using this data to make decisions is what risk management is all about. How we use this information in the context of decision can, however, influence how and what data are collected. In the high-stakes domain, we find that the nature of risk data changes. As data becomes scarce, our knowledge about probability becomes imperfect. As we have shown, this imperfection becomes part of the decision process itself. If we fail to take this imprecision into consideration during the risk assessment process, our risk management actions will be misguided. On the other hand, we need not dismiss all risk assessment on the grounds that its primary use heretofore has been to support statistical decision. Many of the techniques can be suitably modified to provide essential guidance to the high-stakes decision-making process. Among the established risk assessment techniques we will examine here are probabilistic risk assessment (PRA) and those that have arisen within the modern theory of decision under uncertainty and risk. We will focus on the modifications necessary to make these techniques useful in the high-stakes domain. Some modifications are to the underlying assumptions, while others affect how we use them. Once modified, however, these risk assessments can provide us with systematic support for high-stakes risk management decisions.
5.1 Using risk assessments the right way
Applications of traditional risk assessment techniques in the high-stakes domain are often criticized.1 This criticism is not a result of defects in the basic premise of risk assessment. Instead, the issue is one of suiting our methods of analysis to the unique nature of high-stakes risk. Among the things that require special treatment are the existence of uncertainties due to knowledge imperfection, the need to adjust to the finality of the catastrophe problem, and the fact that high-stakes decision must be anticipatory rather than post-fact. Within their proper domain of applicability, that is, where their operation can be statistically verified, the traditional techniques work perfectly well. They allow the risk analyst, and subsequently the risk manager, to adequately support his or her findings and decisions based on rigorous, formal demonstrations. By adding structure to the data collection, assessment, and decision process, they can help us spot defects in our reasoning. The problem comes in when we automatically extend techniques in support of statistical decisions to the non-statistical, high-stakes domain. Because this error is not always obvious to the analyst, or to those who depend on the analyst to provide useable results, risk assessment techniques often find inappropriate application in the high-stakes domain. In these cases, they are irrelevant at best and can quite possibly lead to dangerously inadequate decisions at worst (as when we accept a catastrophic impact based on expected value cost/benefit alone). The strong rejection of the undisciplined use of these statistical techniques is therefore justified.

How can we modify traditional techniques to suit high-stakes decision-making? First of all, a respect for the unique characteristics of high-stakes risk with respect to complex uncertainties often means that we have to use existing tools qualitatively rather than quantitatively. Qualitative evaluation suggests the use of modalities, such as “possibility.” As we have shown above, possibility is a much looser form of uncertainty than probability. Modeling methods must be constructed so as to account for this “looseness.” Traditional risk assessment techniques also have to be adjusted to provide a broader feedback structure. We have to recognize that the assessment of alternatives under uncertainty requires that we explore multiple possibilities. Structured or semi-structured analyses
let us perform these explorations in a more methodical manner and use the results to modify processes iteratively, before they become entrenched. Linear application of traditional risk assessment methods often regresses to finding deeper and deeper layers of protection against problems that might have been avoided in the first place. We initiate a genuine alternatives assessment by making the evaluation of risk solutions part of the plan for evaluating the overall viability of the process or activity under study. So while there is nothing wrong with formal risk assessment methods used to develop statistical cost/benefit decisions, these methods can be used the wrong way. Simple modifications to existing risk analysis techniques allow them to realistically suit the conditions of high-stakes risk. Among the traditional risk assessment methods we will review here are event trees and devices for modeling the structure of risky decisions.
5.2 Identifying high-stakes risks and their mechanisms
Statistical techniques depend on data availability. Their basis is establishing a representative sampling of the population of interest. When events are rare, the natural dynamics and complexity of the world can affect our ability to obtain representative samples. The world may not hold still long enough for us to get an adequate picture of infrequent events in an environment that we can rarely control. In an attempt to extend our knowledge of the likelihood of rare, high-impact events, risk analysts have developed a variety of methods that blend knowledge of the structure of exposures in such a way as to make the available statistical properties of various sub-events relevant. Known generally as PRA, or probabilistic risk assessment, these techniques are based on the development of models of the failure modes of systems. PRA approaches attempt to logically determine probabilities of rare outcomes based on how sub-events combine in the process of generating adverse consequences. PRA techniques use knowledge of system structure, as well as data and expert judgment as to the probability of sub-events, to extend the analysis of event probabilities beyond available observational data. While these PRA models allow us to go beyond the statistical data, we need to realize the limitations of doing so (i.e., uncertainty), and we have to appreciate that the results are used differently. Treating
the outputs of these exercises either as point estimates or intervals within an expected value framework is misleading. PRA under high-stakes risk must therefore become a more qualitative and flexible exercise.

Perhaps the most useful PRA technique is the event tree.2 Event trees postulate scenarios of loss, as the loss proceeds from initiating event to final outcomes. The probabilities of final outcomes are based on the logical combination of assessments of probabilities of the various contributing events, traced along the “branches” of the tree. In this way, event trees can provide assessments of final outcomes in terms of probabilities that are so small they would be difficult if not impossible to address statistically. In this sense, they are thought to provide a more useful tool for the analysis of risk involving high stakes than the direct collection and interpretation of sampled data.

A simple event tree is shown in Figure 5.1. We trace the effects of a hazardous cargo accident through a series of processes that lead to various outcomes. Event trees usually begin with some initiating event. Here, it is some sort of truck transport accident (collision, overturn, etc.). We assume that the annual probability of an accident of this sort is one in five, or .2.
Transport accident (0.20)
  – No spill (0.90): scenario (a), probability 0.18
  – Spill (0.10)
      – No fire (0.95): scenario (b), probability 0.019
      – Fire (0.05)
          – No explosion (0.99): scenario (c), probability 0.00099
          – Explosion (0.01): scenario (d), probability 0.00001

Figure 5.1 A simple event tree.
Initiating event probabilities are usually developed from statistical data. Once an initiating event occurs, the accident progresses through time based on the branch structure of the tree. The potential interim events along the way are shown in the figure. For example, after an accident occurs, the next event is the possibility of a spill, or uncontrolled release of cargo. The tree at this point bifurcates, based on the binary possibilities that the spill occurs or it does not, with the probabilities of each shown on the branches.

Event trees usually follow the sequence of events that occur after initiation. We infer, therefore, that the probability that no accident occurs during the year, resulting in no adverse outcomes, is .8 (1 minus .2, the probability of occurrence). Following events after initiation, we find the first potential sequence involves the spill, or release, of hazardous cargo. The probability of an accident with no spill (physical damage to the truck only) is, by the laws of probability, .2 (the probability an accident occurs) multiplied by .9 (the probability there is no spill), or .18. We label this as scenario “a.” Following the logical progression of events should a spill occur, it can either catch fire or not. Again, the next branch shows the probability of each. The probability of a spill with no fire (scenario “b”) is .2 × .1 × .95, or .019. Spill probabilities can again be inferred from statistical data based on similar accidents. When a fire occurs, it can burn in a controlled fashion or explode. The scenario involving fire only (“c”) has a probability of .2 × .1 × .05 × .99 = .00099, or roughly one in a thousand. The “worst case” scenario here (“d”) involves an accident, with a spill of cargo, which catches on fire and ultimately explodes. The probability of this last sequence of events is calculated as .2 × .1 × .05 × .01 = .00001, or one chance in a hundred thousand per year. Added to the probability of no accident, the probabilities of these scenarios represent all possibilities for operating the transport within a 1-year period. They therefore sum to 1 (.8 + .18 + .019 + .00099 + .00001).

While the probabilities of the more frequent outcomes could be calculated from observed data with some confidence, those of the lower frequency, higher impact scenarios would be difficult if not impossible to determine directly. Though the ability to make inferences about the likelihood of rare events is one of the benefits of the event tree approach, the results still suffer from the inability of direct verification.
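The branch arithmetic is easy to reproduce mechanically. A minimal sketch, using only the branch probabilities given above:

```python
# Scenario probabilities as products along each path of the event tree.

P_ACCIDENT, P_SPILL, P_FIRE, P_EXPLOSION = 0.20, 0.10, 0.05, 0.01

scenarios = {
    "no accident":            1 - P_ACCIDENT,
    "(a) accident, no spill": P_ACCIDENT * (1 - P_SPILL),
    "(b) spill, no fire":     P_ACCIDENT * P_SPILL * (1 - P_FIRE),
    "(c) fire, no explosion": P_ACCIDENT * P_SPILL * P_FIRE * (1 - P_EXPLOSION),
    "(d) explosion":          P_ACCIDENT * P_SPILL * P_FIRE * P_EXPLOSION,
}

for name, p in scenarios.items():
    print(f"{name}: {p:.5f}")

print("total:", sum(scenarios.values()))  # 1.0, up to floating-point rounding
```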
Because of this lack of verifiability, while informative, the outcome of event tree analysis cannot be taken as definitive. Consequences are not shown here but can involve monetary losses, injury to individuals, or both. A consequence can be associated with each scenario by adding another column. Given the logical sequence of events from scenario “a” to “d,” we can infer that they result in progressively more serious consequences, of which we assume “d,” and possibly “c,” are “catastrophic.” Simple event trees can be extended using a variety of sophisticated event tree software. Trees can also be implemented in readily available desktop spreadsheet programs with a minimum of programming knowledge (often using direct spreadsheet commands and graphics alone). This allows very easy construction of some quite complex trees and permits the complex calculations needed to define them.

Event trees are often used to support a mechanistic application of risk management techniques based on expected value criteria. When monetary consequences are shown in event trees, the probability and consequence outcomes for each scenario can be multiplied to obtain the expected value of loss. Expected values are then used to adjust loss prevention efforts accordingly. Note, however, that there is no “built-in” process of reevaluation of safety actions and states involved in event tree construction and evaluation. We either accept or reject the protection, although rejection may suggest we go back to the drawing board and perhaps attempt to adjust the features of physical protection to decrease their failure probability. Note also that any increase of cost would have to be matched by a decrease in the expected value of loss. The cost effectiveness of further improvements is gauged by the incremental reduction in probability and/or loss exposure entailed. Unlike the case of precautionary dilemmas that may arise in high-stakes situations, rejection of loss prevention options viewed statistically presents no particular difficulty, as we assume things will “average out” over the long run. The event tree approach, when used solely as a foundation for expected value decision-making, cannot address the catastrophe problem.

Tools like event trees, used in statistical fashion, are at the heart of the “identify” component of the I-A-T model, discussed above. When used in conjunction with expected value criteria, they form the “assessment” component of the conventional model. If the expected
cost of prevention (often assessed along various branches of the event tree – say, installing safety sprinklers on the trucks in our example) is lower than the expected value of loss, we take preventive action; otherwise, we do not. Such tools are predictive, yet only in a very constrained sense. They are based on prediction of what would happen if we continue to follow along the given model. If we don’t like where that prediction takes us, we attempt alterations along the way. This is the only sense in which the I-A-T model considers “alternatives”: to the extent alternatives are dictated by the model or, more specifically, by alterations to it. We alter the model of, say, flood response based on physical protection methods. Alterations such as the height of protective levees, or the proper blend of levees and evacuation policy, are all based on the given underlying model. Deeper alternatives, such as land use policies, are not considered because they are not part of the model. Of course, we could require that a wider base of alternatives be considered, but such a recommendation comes from outside the I-A-T model, not within it. The tools of I-A-T are perfectly capable of handling alternative ideas once generated, but they are incapable of generating the alternatives themselves. The idea of generating alternatives based on some simple brainstorming exercise, and then applying I-A-T, only adds a random element to the I-A-T selection model. Without some sort of guidance on goals, a simplistic approach that depends on “idea generation” can soon be overwhelmed by its own massive dimensions. We need alternatives directed at some goal, and that is where backcasting comes in (see Chapter 2).

Event trees and other PRA techniques can be useful in precautionary and preactionary decision-making if we realize their limitations and account for them. When used in the high-stakes arena, event trees must be modified to include (a) uncertainty of probability estimates, (b) decision criteria based on the minimax, and, last but not least, (c) an understanding of where they fit into the wider scheme of goal-based alternatives analysis (i.e., backcasting). Qualitative event trees, including those based on fuzzy probabilities, can be an effective exploratory tool in the alternatives assessment process. The fuzzy probabilities of the various outcomes that result can be used directly in the precautionary risk assessment process (see Appendix, Chapter 1).3
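As a crude stand-in for the fuzzy treatment referenced above, branch estimates can be carried through the tree as intervals rather than point values. A minimal sketch, with the interval widths invented for illustration:

```python
# Propagate interval probabilities (lo, hi) through independent branches.
# For probabilities in [0, 1], multiplying endpoints bounds the product.

def interval_product(*intervals):
    lo = hi = 1.0
    for a, b in intervals:
        lo *= a
        hi *= b
    return lo, hi

# Suppose each branch estimate is only known to within a factor of 2-3:
accident = (0.1, 0.3)
spill = (0.05, 0.2)
fire = (0.02, 0.1)
explosion = (0.003, 0.03)

print(interval_product(accident, spill, fire, explosion))
# ~(3e-07, 1.8e-04): the "worst case" scenario spans orders of magnitude
# around the 1e-05 point estimate -- imprecision a minimax decision
# criterion must respect.
```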
Event trees can also be modified to suit the level of uncertainty involved and the need to backcast from ideal states. The technique of anticipatory failure determination (AFD) uses traditional risk assessment techniques such as event trees but tends to work them from outcome back to initiating pathways, in an attempt to consider a better response to risk based on alternatives.4 Most simply, AFD aids in the analysis of risk by suggesting ways in which the outcome may have been achieved and using this hypothesis to work backward through the traditional event tree. The hypothesis of causation is in this way tested as we work in reverse through the tree. On a predictive or positive basis, the conditions for an “ideal” state with respect to risk could be postulated in terms of desired outcomes and the technical details of how to get there (the tree “branches”) determined accordingly.

Event trees could also be used in a wider sense, by building trees for individual plausible options and identifying the risk potential of each. In addition to truck transport in our example, we may want to construct trees for rail or ship transport. A deeper analysis might question the need for transporting the substance itself. A tree could be constructed in this case to identify alternative risk–risk tradeoffs associated with the abandonment of the chemical and possible lower-risk substitutions.

Event trees, properly modified, can have a significant place in a fully precautionary process of risk management. Excluding a technique simply because it has been associated with previous failures of application, without identifying the potential for suitable modifications, can deprive us of valuable techniques for assessing the impacts of high-stakes risks. Given the complexity of these risk potentials, we could use all the help we can get to understand them better.
5.3 Decision theoretic models
The rise of decision theory parallels that of probabilistic risk assessment. While the systematic study of decision-making goes back at least to the time of John Stuart Mill and the rise of modern scientific philosophy, decision theory came into its own in the immediate post-World War II era. Its rise was driven, no doubt, by the increasing complexity of the world. Naturally, its first tools were those of what had by then become a well-developed mathematical theory of probability, as well
as advances in statistics.5 Decision theory began as, and for the most part remains, strongly tied to statistical decision-making. Indeed, its most dramatic and (not surprisingly) immediate effects were in the statistical domain. Offshoots, such as the rise of statistical quality control, had distinctly positive effects on industrial productivity. Since the beginnings of modern decision theory, the unique character of high-stakes decisions has at least been recognized. Such decisions were very clearly seen early on as subject to "uncertainty" (i.e., knowledge imperfection) as well as "risk." Whereas risk could be assessed using probability theory, the unsureness engendered by uncertainty ran deeper. It was not until at least some decades after the initial development of formal decision theory that uncertainty, or more properly knowledge imperfection (as distinct from randomness), started to receive formal analysis.6 In the process of risk assessment, it behooves us to recognize which tools developed within the decision theoretic domain can help us extend our understanding of high-stakes risk. Among the most important of these tools are those that give us the ability to formally structure our decisions in terms of tables or graphs. In Chapter 1, we introduced high-stakes decision techniques via a simple decision table that helps us identify and structure both decisions and potential outcomes. Decision tables can also be represented in extensive form using decision trees. These trees show the interaction of decisions and states of the world (outcomes) through time, or at least sequentially. Figure 5.2 is a simple decision tree based on the tables introduced in Chapter 1. Our decision is, once again, between the option of prevention/avoidance and doing nothing. The tree structure follows the conventions of decision tree construction, with squares representing decision points, or nodes, and circles representing the "state of the world." The tree begins with a decision node. The next step, sequentially or through time, is the interaction of this decision with the world. We identify two possibilities here, loss or no loss. Logically, the outcomes are the same as in the tabular, or matrix, form. Now, however, we can more clearly visualize causality, or at least temporal or sequential priority. While the difference in perception between table and tree is not dramatic in this simple case, some very advanced decision structures can be represented in the extensive format. As with event trees, advanced software is available to produce and analyze decision trees.
Figure 5.2 A decision tree. (Columns: Decision, State, Outcome. "Do not protect" leads to a loss state with outcome X or a no-loss state with outcome 0; "Protect" leads to outcome Y.)
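As a minimal sketch of how the minimax criterion of Chapter 1 evaluates this tree (in Python, with hypothetical placeholder values for the loss X and the protection cost Y):

```python
# Hypothetical outcome values; losses are expressed as positive costs.
X, Y = 1_000_000, 10_000   # X: unprotected catastrophic loss; Y: cost of protecting

decisions = {
    "do not protect": [X, 0],  # states of the world: loss / no loss
    "protect":        [Y],     # protection fixes the outcome at Y
}

# Minimax: judge each decision by its worst case, then choose the least bad.
worst_case = {d: max(outcomes) for d, outcomes in decisions.items()}
choice = min(worst_case, key=worst_case.get)
print(worst_case)                 # {'do not protect': 1000000, 'protect': 10000}
print("minimax choice:", choice)  # protect
```

Under minimax, the probabilities attached to the state node never enter the comparison; only the worst outcome of each branch matters.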
Note that the basic structure of the decision tree is the same as that of the event tree described above. In the case of decision trees, however, direct choices are introduced. The difference in appearance between the event tree and the decision tree shown here is strictly due to presentation conventions. Event tree software, for example, can be modified to represent decision trees and vice versa. Also, there is theoretically no reason why decision and event trees cannot be combined in the analysis of safety decisions. We can introduce uncertainty due to knowledge imperfection into decision trees in the form of interval and fuzzy representations of both probabilities and impacts. Decision trees can also be used qualitatively, in an exploratory fashion. Perhaps the greatest benefit of formal representations such as event trees and decision tables/trees, however, is that they allow us to more carefully identify, or frame, the decisions and events under study. As we have shown above, the ability to properly identify alternative choices, mechanisms, and outcomes (i.e., risk versus risk tradeoffs) is critical to effective decision-making – whether at the statistical or the high-stakes level. Identifying all choices relevant to an outcome within the decision theoretic framework can help alleviate the problems of misspecification. In the statistical domain, failure (or inability) to consider all alternatives is known as sample space ignorance. If we know a bowl is filled
with black and white balls, our sample space is defined in terms of these colored balls. We may not know whether the next draw will be black or white, but we know that it will be either black or white. The draw is complicated if we know only that there are black balls and balls of some other color (partial ignorance), and more so if we know only that the balls are of some color or other (complete ignorance). Not knowing, or failing to consider, all alternatives and potential outcomes could severely interfere with the real-world decision-making process. We described earlier the fact that precaution is often criticized unfairly based on a failure to consider alternatives. All decisions must consider counter-effects. In the case of DDT, for example, we need to consider possible counter-threats from insect-borne diseases.7 In the precautionary domain, this failure to consider alternatives can affect the decision by possibly distorting our focal point – the worst case. If the worst case is not fully identified, or if choices result in interactions that affect the outcome structure, then we may make a very imperfect decision based on these unspecified or poorly specified outcomes. The same misspecification can, of course, adversely affect cost/benefit decisions. The problem is wider than the eventual choice criteria, and is once again rooted in decision problem specification. By carefully identifying the structure of decisions in a formal framework, we can more adequately observe their complex structures and possibly spot defects in specification before they adversely affect the decision. Analysis in terms of decision tree construction can help us do so by formalizing the specification process. Once again, the proper framing of alternatives is essential to both statistical and precautionary decision-making.
5.4 Integrating fuzzy risk thresholds
Our extended view of risk assessment methods provides us with valuable information about the structure and outcomes of possible loss scenarios, as well as with ways to formally outline the decision process itself. Risk assessment methods do not make the decision for us, however. They provide the base materials to which logically constructed decision criteria apply. We will examine in this last section how decision criteria, specifically those based on minimax precaution, can be integrated with the results of an extended risk assessment.
Decisions about high-stakes risk follow from the application of decision criteria to measurements of an exposure's probability/consequence characteristics. We show how the process is integrated, based on the uncertainties inherent in both the measurement of risk and the definition of suitable risk thresholds, in the Appendix to Chapter 1. We will now further extend this analysis to a two-dimensional risk space, or map, created by the intersection of the probability and consequence (loss) dimensions of risk.8 A simple risk map is shown in Figure 5.3. Here, we roughly represent the fuzzy risk threshold in terms of both probability and consequences. The shaded danger zone represents "the possibility of significant impacts." By the rule of precautionary minimax, we avoid exposures that may fall into this region. To identify the probability characteristics of exposures to adverse consequences, we can use fuzzy or interval-based event trees. The output of such an exercise is shown on the risk map as well. Scenarios (a), (b), (c), and (d) are the hypothetical outcomes of a scenario-based risk assessment using event trees (such as the cargo spill tree, shown above), suitably extended to provide interval estimates of outcome probability. Applying our precautionary minimax criteria, we see that scenarios (c) and (d) fall into the danger zone [(c) unequivocally and (d) potentially]. This suggests that either the exposure be suitably modified to reduce or eliminate these potential outcomes or, if we are unable to do so, the activity be abandoned altogether. Once again, working the event tree in reverse may suggest ways of doing this, or a deeper alternatives analysis could proceed via event trees for multiple options.
Figure 5.3 Comparing interval-based probabilistic risk assessments to fuzzy thresholds for high-stakes risk decisions. (Axes: probability, 0 to 1, versus loss, 0 to N; scenarios (a)–(d) are plotted against the shaded "danger zone.")
In fact, the integration of the results of probabilistic risk assessments and risk acceptance criteria based on thresholds has a long history in the field of high-stakes risk assessment of advanced technological systems. Among its first uses were risk assessments of the emerging issue of nuclear power safety and other complex systems in the early 1960s.9 Even then, the enormity of potential consequences resulted in many risk analysts advocating a threshold approach to risk, above which the chances of catastrophe were simply unacceptable. Probabilistic risk assessments were perfectly compatible with this approach; once an assessment was complete, its output could be matched against risk thresholds. Known generally as "limit lines," these thresholds were a frequent feature of probability/loss graphs called "F/N diagrams." These charts typically portrayed probability in terms of frequency of potential occurrences ("F") and losses as number of lives lost ("N"). The rationale for just where to set the line varied, although there was considerable support near the "one in a million" threshold for risk to individuals, often based on variants of the natural occurrence or background level view of risk acceptance. Early approaches to safety in terms of limit lines were not fully precautionary in that they failed to consider uncertainty in both the definition of risk and, perhaps more significantly, in the assessment of rare event probabilities. As a result, pronouncements about the safety of exposures in terms of limit lines, based on exact estimates of probability that were often as low as 10⁻⁷ (one in ten million) or 10⁻⁸ (one in one hundred million), were simply seen as not credible by the potentially affected community.10 Once again, criticism focused not so much on the logical validity of these techniques as on their inappropriate usage in the high-stakes domain. We have shown in this chapter that, properly modified to suit the unique characteristics of the high-stakes domain, a variety of established risk assessment tools can be used to implement a properly precautionary approach to risk. This lets us suggest formal plans of action that can be rigorously specified, yet maintain proper respect for uncertainties within the process, as well as recognizing the need to suit decision criteria to the problem at hand.
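A minimal sketch of this matching step (in Python, with hypothetical threshold bounds and scenario intervals, and simplified to the probability dimension of the map):

```python
# Hypothetical fuzzy danger-zone threshold on the probability dimension.
DANGER_LO = 1e-6   # below this bound: unequivocally outside the danger zone
DANGER_HI = 1e-4   # above this bound: unequivocally inside it
                   # between the bounds, membership in "danger" is a matter of degree

def classify(p_lo: float, p_hi: float) -> str:
    """Match an interval probability estimate against the fuzzy threshold."""
    if p_lo >= DANGER_HI:
        return "unequivocally in the danger zone: avoid (minimax)"
    if p_hi <= DANGER_LO:
        return "outside the danger zone"
    return "potentially in the danger zone: precaution still applies"

# Hypothetical interval outputs of an event-tree assessment, as in Figure 5.3.
scenarios = {"a": (1e-8, 5e-8), "b": (1e-7, 5e-7),
             "c": (2e-4, 1e-3), "d": (5e-5, 5e-4)}

for name, (lo, hi) in scenarios.items():
    print(f"scenario ({name}): {classify(lo, hi)}")
```

Because an interval that merely overlaps the fuzzy zone still triggers precaution, the criterion fails safe: knowledge imperfection counts against acceptance, not for it.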
6 Can We Avoid Risk Dilemmas?
We have granted from the beginning the unrealism of seeking genuinely "zero" solutions to risk. The best we can hope for is some natural background level of risk consistent with the perpetuation of life on earth, achieving at least the subsistence level. When we accept risk above this level, we open ourselves up to risk dilemmas of the "doomed if we do, doomed if we don't" variety. The question is not, then, whether a strictly zero risk level can be achieved. It cannot. Effective precaution, however, is about preventing the dilemmas inherent in acquiescing to the possibility of catastrophe. Possibility here is defined in terms of fuzzy acceptance criteria based on natural characteristics – not absolute zero. The real question is, can risk dilemmas be avoided? Some suggest that the only practicable solution is prioritization of action based on expected cost/benefit analysis. In general, this type of analysis suggests we need to accept the possibility of disaster to achieve the benefits of life. A risk-free life, it appears, is just not worth living. And living that life to the maximum of satisfaction simply requires that we make prudent cost/benefit tradeoffs – despite the inherent potential for doom. The choice is deeper than one between ideologies. It involves survival. The choice, as we have reasonably framed it, is to survive under a preactionary application of precaution (avoid risk) or perish. The deeper issue becomes whether survival is achievable. Given the insidious nature of risk dilemmas, we can only achieve survival if we fight for it. Part of that struggle will certainly involve a radical rethinking of the way we live our lives.
6.1 The only two options
The uncertainty associated with high-stakes risk management leaves us with two, and only two, options: precautionary avoidance and fatalism ("we can't do anything about the truly significant risks, so why worry?").1 The decision about the treatment of high-stakes risk is not between precautionary principles and expected cost/benefit (and its variants). It is between precaution and fatalism. Yet the question remains: How can a genuinely precautionary regime be made workable? Risk management is about taking responsibility. If we want to be free from the worry of risk, we have to do something about it. When actions are deterministic (A follows from B), we can preclude A by not doing B. Similarly, we can use averages to identify the long-run behavior of statistical property A' when we change B'. When A and B occur in the realm of high-stakes risk, the reactions are a matter of fate: they will happen as they are meant to happen. We might therefore ask: Is risk management, that is, taking responsibility, consistent with fate? Ultimately, how we choose to respond to the technical aspects of risk relates to our philosophy of life. The only genuine freedom we have when faced with the unknown and unknowable aspects of high-stakes risk is in our character. Fate, as the philosophers tell us, is shaped by our character. Avoiding risk and accepting risk are outward actions that merely reflect that character. To the extent they are a reflection of this character, we have little control over them. The ancient Stoic philosophers framed the issue in terms of innate properties or characteristics.2 Consider a cylinder that lies at the top of a hill. If we push it, it will roll down the hill in predictable fashion. The path it takes follows from the innate shape of the cylinder – just as our actions follow from our innate character. If the same force were applied to a cone-shaped object, it would roll differently, again because of its innate properties. These physical objects behave differently under the same stimulus, based on their innate characters, just as we humans do. When faced with catastrophic loss potentials, do we reject them or accept them? As we have shown above, the answer can't be reduced to a simple equation, as in the case of cost/benefit analysis. The technical properties described here tell us a lot about the nature of high-stakes risk: here's what happens if we do A, and here's what happens if we do B. In this regard, we
have tried to be as complete as possible in describing how high-stakes risk "works." The choice of how we deal with risk, however, is up to us. And that "up to us" is fundamentally a product of our character. So responsibility toward risk depends on our character. But what shapes our character – our upbringing, our moral understanding, our experience, or perhaps fate itself? That question is far beyond the scope of this book. It does, however, bear close examination, as it is unlikely that we will ever be able to adopt any of the suggestions herein for avoiding risk dilemmas if it is not in our characters to do so, regardless of the technical merit of our arguments. Whatever it is that shapes our fate, we see that everything that follows from it is necessarily intertwined. Treating potentially catastrophic risk using precaution and preactionary avoidance attempts to manage risk by avoiding catastrophic loss potentials at their root. We try to control fate by eliminating the material it will work on. Obviously, this type of approach can entail its own risks. It is much different from trying to stay one step ahead of risk, technologically or otherwise. The ultimate risk dilemma involves the question, "Can we continue to live a beneficial life without incurring the threat of eventual disaster?" It is ultimately this dilemma that all others reduce to. If we can find that single risk-free starting point, there is hope that we can achieve this goal. We do so by suitably adjusting alternative pathways to achieve truly beneficial human progress without the threat of disastrous consequences to ourselves and our environment. The approach to high-stakes risk taken here is consequentialist. The outcomes of high-stakes risk decisions are real and serious. If we want to avoid them, the approach suggested here shows the way. How equitable the results may be, and any other side effects we incur, while important, are not amenable to technical solutions. This should not devalue the understanding of risk as part of our physical world, albeit a part that can only be understood intuitively.
6.2 Facing the paradox of progress
We have expressed from the outset our uneasiness about the idea of trading the possibility of disaster for the greater benefits of life. There is indeed a paradox inherent in all approaches that suggest the potential of catastrophic risk is the inevitable concomitant of progress.
Ultimately, the world we obtain is a very unhappy, or at least uneasy, one. We gain all the benefits of science, yet can only utilize them under the unknown threat of extinction. We can resolve this paradox either by dismissing it, perhaps believing that the solution to our problems is just around the corner (the optimistic approach), or by reassessing our goals. The only satisfying solution, we would argue here, is based on the elimination of risk. True utopia is a risk-free one. Likewise, a fail-safe approach assumes there is some certainty – certainty about avoiding risk – that we can retreat to. A theoretical safe harbor represents an ideal, or goal. Pronouncements to the effect that a naturally risk-free world is unachievable, and that we need to settle for partial solutions as a result, leave us stranded. Once we realize that partial solutions are logically defective, we are left with nothing. To treat axiomatically the premise that there is no such thing as freedom from risk is to invite despair. In fact, the notion that risk-free living can never be achieved undermines not only the precautionary approach but any approach that tries to actively manage risk: we merely accept the benefits because the costs are, ultimately, unknown. At best, we can only make rough comparisons of risk, some better, some worse, none truly safe. As we have shown above, any relative comparison of risk runs afoul of the catastrophe problem – how can we live in a world that is "somewhat" dangerous? As we have suggested above, there is nothing that says we cannot make progress beyond subsistence. Yet, to avoid the paradox of progress, we need to do so in a relatively risk-free fashion. Indeed, the true measure of our progress is how far we can get beyond subsistence without incurring risks that may eventually doom us. Deeper progress comes from reducing risk beyond the subsistence or background level. This, it would seem, is the more reasonable path to a perfect world. The twentieth-century philosopher Ernst Bloch had a view of utopia based on the notion of the "not-yet."3 Progress toward utopia is based on the notion of becoming. It is this level of hope that drives progress. The notion that risk is unavoidable, or that whatever action we take to reduce risk results in other, more dangerous risks, underlies a philosophy of resignation. If we can't do anything about risk, then what? Live life hedonistically. Seek as much pleasure as we can derive, which we have been indoctrinated to believe means feeding our desire
for material things. The idea that risk is irreducible, that the goal of a risk-free life is ridiculous or even counterproductive, promotes a materialistic parasitism based on our need for "more." The "not-yet" is replaced with "here it is," and hope is replaced with satiety. Instructive in this analysis of the paradox is who gains and who loses. A good deal of our approach to high-stakes risk is ultimately based on trust. That means understanding the motives of all interested parties and separating the honest ones from the less-than-honest ones.
6.3 Risk dilemmas and self-interest
On purely technical grounds, ignorance of risk can benefit self-interested parties. This ignorance is often promoted by clever superstitions based on faith in economic solutions to the problems of high-stakes risk. The confidence of many that economic solutions to the problem of high-stakes risk are always forthcoming is, however, highly delusional. Economics simply does not deal well with the discontinuities introduced by high-stakes risk. The effects, as we have shown, cannot be integrated into a simple cost/benefit framework (the traditional format of modern economics based on free enterprise). High-stakes risk is a cost of doing business, a potentially enormous one, that either escapes treatment altogether or is deemphasized by forcing it into the framework of statistical decision (in which its true nature is invariably misrepresented). The existence of economic influence also drives the bias of misplaced technological optimism. Powerful economic interests can buy the best (worst?) science available and make a convincing case to those at risk. Instilling optimism thus becomes more of a sophisticated "advertising campaign." Creating a sense of optimism in this fashion, based on the manipulation of scientific ideals, is misleading. Yet acquiescence to risk seems to be a phenomenon uniformly distributed among social and economic classes. If high-stakes risk assessments are so biased, wouldn't we expect at least the most severely disadvantaged to speak up? A large part of the unquestioned acceptance of risk is probably due to the acceptance of the disparity of wealth and economic power in general, based on the idea that the wealth of the few "trickles down" among the masses, making everyone better off than they would be under a more egalitarian
economic and social regime. In this way, the masses accept their fate with respect to high-stakes risk, much as they do with their given social and economic status. Clearly, this vested interest in the status quo does not bode well for alternatives assessment with respect to risk avoidance. Alternatives often require at least an honest evaluation of the current structure of risk generation, and that goes to the core of our techno-economic and even political structure. How can we make powerful interests with a huge vested economic stake in current methods of fossil fuel energy production think in terms of safer alternative energy sources, for example? The fact that we may be able to do so in the individual or limited organizational context does not necessarily translate to the greater good. These biases do not instill great confidence in our ability to reduce risk dilemmas. Optimism suggests that with enough time, money, and effort, the threat of dilemmas will go away. The intentional bias introduced by powerful self-interests, economic or otherwise, perhaps driven consciously or subconsciously to an honestly held optimism, downplays risk dilemmas. We try to dress them up as simple cost/benefit tradeoffs, despite the fact that the cost/benefit calculus does not apply to this domain. We muster scientific evidence to prove the safety of this activity or that, ignoring the underlying uncertainties. These biases, in turn, result in a strange incongruence between precautionary activity in its various contexts. On the individual level, we take precautions routinely. While optimism may exist, it is often kept in check. Some new innovation may look promising, but how will it affect me? Certain basic human traits, among them fear, are innately precautionary. The same self-interest that may suggest we dispense with restrictive precaution and ignore dilemmas on the wider socio-economic level urges us to embrace it at the personal level. We might rightly wonder, once again, when personal commitments to safety will overcome socially dictated mores that ultimately benefit the few at the expense of the many. At the organizational level, businesses are quick to protect their interests from catastrophe, in the financial sense. They purchase insurance, itself an activity that bears the hallmarks of precaution. Additional protections of individual and collective wealth are implemented via a variety of legal and economic institutions, such as bankruptcy protection, statutes of legal liability, and a variety
of government-sponsored safety valves, in the form of insurance or indemnification against widespread, high-stakes economic effects.4 On the non-economic social level, we practice precaution in terms of foreign policy, nuclear weapons policies, and even our decisions to go to war. Precaution, it would seem, is only bad when (a) special interests are at stake and (b) the effects of ignoring precaution can be spread unequally among society (and quite possibly among future generations). These considerations are mentioned because they can present severe impediments to alternatives assessments, and hence make it more difficult to eliminate risk dilemmas: we have met the impediment to a safer world, and it is us. The creation of risk dilemmas is not strictly a technical matter, nor is their elimination. We have shown here how the technical parameters of high-stakes risk can, under certain conditions, promote risk dilemmas. We have also suggested pathways out of these dilemmas. The efficacy of these solutions to risk dilemmas, which show us the way out of forced choice and toward the promotion of survival, depends on our willingness to apply them. Suggestions that we achieve some sort of parity or "oneness" with nature when it comes to risk have almost mystical overtones. The desire to get back to nature is similarly viewed as a moral goal, with no practical implications. In fact, these desires are often derided as patently impractical. On the other hand, we would suggest that some natural level of risk exists which promotes survival – a very practical matter indeed. This longing for a natural life is therefore a manifestation of what is ultimately a pragmatic instinct for survival. We would argue as well that maintaining risk at some subsistence level need not mean that our lives must return to, or remain at, this most basic level. As we have suggested, true progress is measured by how far we increase our standard of living while maintaining this natural or subsistence level of risk. There is no indication that natural risk can only be obtained by living at this otherwise unattractive subsistence level, as the detractors of the notion of natural risk levels suggest. Once again, a straw man has been set up only for the purpose of conveniently tearing him down. Under this view, criticisms based on the unobtainability of zero risk and the undesirability of a subsistence level are united: we don't really want to, or just can't, get there, so why try? The argument is a very weakly disguised form of lobbying for
the status quo. It is a status quo that may bring (very unequal) riches to some at the expense of potential disaster for all. The commitment to a naturally risk-free life must permeate the individual, organizational, institutional, and societal levels if it is to have its desired result: survival. We cannot practice precaution on the individual or organizational level, yet ignore it at a wider social or ecological level. This reflects a selfishness that ultimately promotes the paradox of progress. We move toward some desired utopia of self-satisfaction, ignoring the wider implications. In the end, this "utopia" contemplates nothing more than a constant state of worry over what might come next. By observing the characteristics of naturally acceptable risks, we note that the important differentiators come down to promoting one essential characteristic of the human spirit. That characteristic is freedom. Freedom depends on control, trust, and voluntariness. To the extent that any of these qualities is removed from a situation, risky or otherwise, we feel less free to make reasoned choices that suit us best. Unnatural sources of risk deny freedom. When unnatural risks are forced upon us, or we are somehow cajoled or intimidated into accepting them, we become less free, as individuals and as a people. Risk dilemmas reduce our freedom to choose, or at least artificially (i.e., unnaturally) reduce the impacts of that choice. To the extent these restrictions are a result of humankind's efforts, they amount to one person or group restricting the freedom of others. Unnatural risk dilemmas constitute a form of tyranny, which the average human might reasonably be expected to resist. Now, to some extent, freedom from risk will always be limited by the natural world. Once again, it is humankind's destiny to live within these restrictions. This in turn suggests that we cannot escape some degree of acceptance of our fate. As the Stoic philosopher Seneca suggested, if you can't live within these basic conditions of life (what we have previously referred to as risk at the subsistence level), "get out, any way you choose."5 The assumption, of course, is that our fate in the case of such natural calamity is somehow reasoned. On the other hand, the tyranny of imposed risk suggests a troubling, forced fatalism against which our wills rebel. The solution, as we have suggested, is actively promoting a risk-free life. Ultimately, freedom from risk suggests that we all live by the code of a risk "bill of rights" (Figure 6.1).
– Everyone has the right to live a risk-free life based on the natural risk that exists at the "subsistence level." The safety of progress needs to be judged against this natural background level.
– No one shall incur potentially catastrophic risks against their will.
– No one shall intentionally subject another to catastrophic risk without their knowledge, or via trick or artifice.
– No one shall subject future generations to catastrophic risk that is unacceptable to the present generation.
Figure 6.1 The Risk "Bill of Rights."
These rights include our own ability to live a relatively risk-free life, without others trying to take advantage for their own personal gain. More heinous is the imposition of risk without our knowledge, through some sort of trickery or false pronouncement with regard to our safety. The risk bill of rights has strong implications for just who should bear the burden of proof in assuring safety in any actions that promote self-interests. If someone is going to reduce our freedom, they need to tell us why, and what's in it for us. That some marginal side-benefits exist from potentially catastrophic exposures is immaterial. In terms of intergenerational fairness, we also need to consider the effects of high-stakes risk on future generations.
6.4 The prospect of "infinite disutility"
Self-interested criticism of the precautionary approach often suggests that precaution doesn't provide any usable guidance. In fact, the opposite is true – it usually provides too much guidance to assimilate within the status quo. Closer to the truth is the observation that precaution can "paralyze" decision. The more we accept the principle, the harder the decisions become. While true, this is an observation of the current facts of life with respect to risk, not a failure of precaution. Precaution did not make the world the way it is. Arguably, incautiousness did. For that reason, we can hardly blame precaution. The problem of course remains in precisely what we have identified as risk dilemmas. Expected value suffers no such difficulties – or does it? Properly adjusted for the "disutility" of catastrophe, expected cost/benefit decisions may themselves not be as
far off from precautionary ones as the detractors of precaution may suggest. The distinction between expected cost/benefit and precaution becomes moot if we assign an infinite, or otherwise very large, disutility to catastrophic loss. This disutility value itself reflects a valuation of the terrible, terminal consequences of catastrophe: it is infinitely "bad" in the sense that there is no recovery. On an expected disutility basis (probability of loss × disutility), even very small probabilities will yield overwhelming costs when multiplied by very large, possibly infinite, disutilities.6 Looking at it from another perspective, we might even suggest that when expected cost/benefit calculations recommend acceptance of exposures with possibly catastrophic outcomes, the disutility has been undervalued, prima facie. A logical result of the terminal nature of catastrophe (the catastrophe problem) is that either the exposure is not catastrophic, and can perhaps be made acceptable to some degree under the right (statistical) circumstances, or it is catastrophic and hence unacceptable (under any circumstances). Cost/benefit does not eliminate dilemmas, it just disguises them. To perfect the ruse, not only do we apply decision criteria that are inapplicable to the situation (high-stakes risk), we must also undervalue the true impact of the outcome (by assigning a less than infinite disutility to disaster). Once we properly value the disutility of disaster, along with recognizing the fact that we cannot identify the probability of rare events with any degree of precision, cost/benefit faces the same tough choices as precaution. Yet, in the hands of skillful analysts bent on manipulation, it remains a powerful emollient in the face of risk. On this somewhat shaky ground there exists a huge edifice designed to convince, cajole, and otherwise make happy a population that really should know better. Ultimately, cost/benefit analysis, properly construed, and precautionary approaches may not be that different in their final conclusions about the treatment of high-stakes risk. Precaution does not "ignore" benefits. It simply recognizes the fact that there is no way to commensurate benefits with the possibility of irreversible damage. In cost/benefit terms, even high benefits and low probabilities can be overwhelmed by infinite costs. Striving for benefits while ignoring the potential for disaster leads to risk dilemmas of the "doomed if we do, doomed if we don't" variety. These dilemmas loom large,
and they cannot be explained away. Alternatives assessment is about achieving benefits safely. Is safe progress too much to ask for? Whether from a properly framed cost/benefit perspective or a precautionary stance, high-stakes risks involve insurmountable problems. Logically, we cannot make those problems go away. We cannot select which outcomes we feel most comfortable with, or which seem to provide the greatest short-run gains, based on selective rationality. We need to face the facts about the accumulation of high-risk potentials head-on. That means we need a science that can help us solve these problems with a respect for the deeper goals of life.
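The arithmetic behind the "infinite disutility" argument is easy to exhibit. A minimal sketch in Python, with purely hypothetical numbers: as the disutility assigned to catastrophe grows without bound, the expected disutility (probability × disutility) eventually overwhelms any finite expected benefit, no matter how small the probability.

```python
# Hypothetical values: a rare catastrophic exposure with a large expected benefit.
p_catastrophe = 1e-7        # a "one in ten million" chance of catastrophe
expected_benefit = 1e9      # expected benefit of accepting the exposure

for disutility in (1e12, 1e15, 1e18):   # ever larger valuations of catastrophe
    expected_disutility = p_catastrophe * disutility
    verdict = "reject" if expected_disutility > expected_benefit else "accept"
    print(f"disutility {disutility:.0e}: "
          f"expected disutility {expected_disutility:.0e} -> {verdict}")
```

In the limit of a genuinely infinite disutility, no finite benefit and no nonzero probability can rescue the exposure, and the expected cost/benefit rule collapses into the minimax rule of precaution.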
6.5 The need for a wider approach to science
Seeking answers to the problems of high-stakes risk demands a science that recognizes the limitations of humans to fully know the world in which they live. It recognizes that what we don't know is just as important as what we do know. Science is based on measurement, yet measuring what and how much we don't know doesn't make sense. The best we can do is to treat knowledge imperfection instrumentally: we define intervals of uncertainty based on how well they let us deal with a complex and dynamic world. On the more mundane matters of life, the mere technicalities as it were, knowledge imperfection is incidental. Whatever defects in our thinking we incur because of it, we can fix incrementally. We don't have that option when the results are potentially catastrophic. As a result, science tends to ignore knowledge imperfections as irrelevant. Instead, modern science is based on the simple dichotomy of truth and falsehood. We identify false hypotheses through the existence of error. Verification of scientific ideas involves testing in an effort to find errors. The errors deemed most critical by scientists are those that represent "false positives," that is, those that involve laboring under untrue assumptions. In the testing of hypotheses subject to statistical error due to randomness, this type of error is referred to as type I error.7 Emphasis on reducing type I error is aimed at keeping out "junk science" and "mere speculation." It seeks to keep scientific knowledge pure by doing everything it can to keep out false ideas. Thoughts at the margins or "fringe" of science are dismissed as irrelevant to genuine progress. Emphasis on type I error places the burden of proof on those who propose the hypothesis.
When we are unsure of the truth or falsity of a hypothesis, another type of error enters. It is the error of regarding a true hypothesis as false based on the evidence at hand. We call this type II error in the jargon of hypothesis testing. Reducing type II error therefore reduces "false negatives": turning away a hypothesis that turns out to be true. In the high-stakes domain, type II error amounts to ignoring exposures that turn out to have the potential for catastrophe, based on the most currently available information (or lack thereof). Now, when our knowledge – or, to follow the statistical analogy, sample size – is fixed, we can only reduce type I error by increasing type II error, and vice versa. At any point in time, therefore, our knowledge base always remains the "most true," based on the specified tradeoff of type I and type II errors. As a result, ignorance exists outside of the traditional scientific process. We can, nonetheless, give it its due within the process by altering the mix between type I and type II errors. Very simply, when ignorance is important, as in the case of high-stakes risk, we should be willing to pay more attention to type II errors. Traditional science, either intentionally or unintentionally, remains resistant to emphasizing the potential for false negatives. One practical reason, from the standpoint of "progress," is that emphasis on type II error can slow the acceptance of scientific innovations. In terms of risk, type I error can be characterized as accepting the hypothesis of danger when none, in fact, exists. Being falsely alarmist obviously slows bringing scientific innovations "to market." Reducing that error, however, increases the potential that we falsely accept as safe innovations that are not. In the longer run, both type I and type II errors can be reduced by increasing our knowledge base. In the meantime, however, prudent respect for what we don't know when it comes to risk makes sense. The emphasis, or overemphasis, on avoiding type I error in risk assessment is evidenced by how quickly the traditional scientific establishment labels theories that entail the increasing possibility of risk as "crackpot." Those who raise too many issues concerning high-stakes risk potentials are often negatively labeled as "doomsayers." In fact, those who attempt to increase our awareness of these potentials are merely showing a respect for minimizing type II error when decisions are made under conditions of knowledge imperfection (where a wrong decision could mean disaster). While over time the accumulation of
knowledge will reduce both type I and type II errors, why take a chance in the meantime? Science may be further subdivided into the processes of discovery and verification. The formal process of error analysis pertains only to the latter. There is no formal process of discovery. It depends instead on complex, creative processes. Science only goes forward, however, in terms of discovery. It is the element of discovery that is entailed in the explorative analysis of alternatives to risk. It is only after these alternatives are postulated that they are subject to verification. Again, cautiousness with respect to risky outcomes requires that all potential alternatives be assessed relative to the potential for type II as well as type I errors. Given the uncertainty involved in exposures to high-stakes risk, it is obvious that arguments about the potential for high-stakes consequences should be taken very seriously. Downplaying an approach based on precaution, on the assumption that it introduces too much error, begs the question: error as to what? If we choose to take a view of science that demands attention only to that which can be proved beyond a reasonable doubt (i.e., the minimization of type I error), we do ourselves a disservice of potentially catastrophic proportions. A wider view of safe science requires us to entertain assessments even if they may seem somewhat extreme, or even radical, when contrasted with conservatism toward eliminating scientific errors based on false positives.
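The tradeoff at fixed knowledge can be made concrete with a minimal sketch in Python, using a hypothetical one-sided test on a normal mean (none of the numbers come from the text): H0, the exposure is "safe" (mean 0), against H1, it is "dangerous" (mean 0.5). Moving the decision threshold lowers one error only by raising the other; both shrink together only when the knowledge base, here the sample size, grows.

```python
from statistics import NormalDist

# Hypothetical test: "safe" mean mu0 vs "dangerous" mean mu1, noise sigma, n samples.
n, mu0, mu1, sigma = 25, 0.0, 0.5, 1.0
se = sigma / n ** 0.5          # standard error of the sample mean
z = NormalDist()

for threshold in (0.2, 0.3, 0.4):   # declare "danger" if the sample mean exceeds this
    # Type I: declaring danger when the exposure is in fact safe (false positive).
    alpha = 1 - z.cdf((threshold - mu0) / se)
    # Type II: declaring safety when the exposure is in fact dangerous (false negative).
    beta = z.cdf((threshold - mu1) / se)
    print(f"threshold {threshold:.1f}: type I = {alpha:.3f}, type II = {beta:.3f}")
```

Raising the threshold makes the test harder to alarm (type I falls) while letting more genuinely dangerous exposures slip through (type II rises) – exactly the mix that, the argument here suggests, should be shifted toward type II protection when the stakes are catastrophic.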
6.6 Radical rethinking
Increased attention to type II error, based on the uncertainties inherent in dealing with high-stakes risks, suggests that incremental solutions be replaced with a wider-reaching, perhaps even what we might consider radical, approach. As opposed to fatalistic acquiescence or a piecemeal "wait and see" attitude, the radical approaches to risk suggest deep change, sooner rather than later. These solutions involve significant changes in the way we think and act about progress and its potentials, both for good and bad. Changes in world views, it is argued, can only be supported by institutional changes. In this way, radical solutions address the all-or-nothing nature of precaution directly.
In many cases, the ability to deal effectively with risk on an individual basis may require radical adjustments in lifestyle. For example, if one is diagnosed with a serious illness, survival may depend on significant lifestyle changes. Businesses, depending on circumstances, often need to change their structures radically if they are to survive in a complex and dynamic world. This goes for the institutional development of technology as well. Innovation is often more a process of revolutionary change than of evolutionary change. In a wider, social context, radical suggestions for the reshaping of progress with respect to the large-scale risks we face have been around forever. The industrial revolution, however, signaled, if not a distinct acceleration in humankind's rate of change, at least a watershed. The pace of change in the modern world, driven to a great extent by innovations in technology and science spurred by the potential for increasing global conflicts, accelerated after World War II. Shortly thereafter, say in the late 1950s to mid-1960s, some observers were becoming more and more aware of the widespread catastrophic potentials that could accompany such sweeping change. This period marked the beginning of a modern surge in radical solutions in response to high-risk potentials on a global scale.8 While the radical solutions that emerged differ in their worldviews of risk, certain common characteristics emerge. We have summarized some of these in Figure 6.2. The commonalities suggest that there may indeed be at least some rational starting point for significant reform on a social level. These radical solutions to wider issues can help define the natural framework to which individual and organizational issues need to conform. Attention to contextual issues does not mean that we always have to start from the top down, that is, with the larger, social issues first. If anything, contexts must be integrated for wider radical solutions to work. A common characteristic of radical suggestions for change is the adoption of an explicitly precautionary stance toward high-stakes risk. Radical approaches also uniformly adopt the natural environment as both a spiritual and practical guide to acceptable risk. This reflects the notion developed here, that natural risk levels present the most logical criteria for risk acceptance, based on the history of evolution.
– Significant structural and cultural changes are needed to prevent global scale disaster.
– We need to work backward ("backcast") from desired states to assess potential alternatives to risk.
– The acceptable level of exposure to risk (if one even exists) should be determined by nature.
– Technological optimism is misplaced, or at least needs to be tempered by the facts.
– Our progress toward disaster is clouded by uncertainty: in such situations, it is better to be safe than sorry (i.e., minimize type II error!).
Figure 6.2 Characteristics of Radical Solutions to Risk in a Social Context.
Note that while technological solutions are discounted, and the link from technology to many of the high-stakes risk issues of today is duly noted, radical solutions are not uniformly anti-technology. Most do, however, include technology as part of the radical rethinking process. As a result of this circumspect approach to technology, radical solutions are often deemed to be against progress. However, before we declare any approach "anti-progress," we need to get a good idea of what progress really means. Measurement in terms of national economic accounts, like gross output (i.e., GDP), is a highly limited proxy for progress. Likewise, before we associate progress with our own happiness, it behooves us to really understand the conditions for happiness and what they mean in the wider world context. Can real happiness exist in a situation where progress implies ultimate doom (the "paradox of progress")? As these approaches attempt to address science beyond the margin of error, radical initiatives are often marginalized and their proponents labeled as "fatalists" or even "crackpots." All are, in a sense, fatalists, in that they prognosticate a complex series of interactions that account for existence in this world, many of them out of our control. This fatalism, based on a certain degree of predestination, does not, however, automatically imply doom. It is by ignoring fate and trying in some way to become its master, rather than vice versa, that humans get into trouble. As suggested by our discussion of error in science, when the stakes are high, we need to be more concerned with the potential for false negatives ("type II error") than false positives ("type I error"). This suggests a higher emphasis on what may be rather than on what will be for sure. And this in turn entails a greater
appreciation of ideas that may seem radical in both thought process and practical suggestion. Labeling radical solutions as the product of imperfect science or irresponsible doomsayers once again shows the traditionalists' intolerance for false positives, at the risk of serious, irreversible consequences. As we have shown above, high-stakes risks cannot be treated in contextual isolation. How we act as individuals affects the community. Our structure of production and distribution of goods and services – the business economy – likewise influences the survival of the community. We should not think of the economic structure as some disembodied machine for coordinating resources with needs. It too has become part of a wider natural environment in modern times. If anything, the personal and organizational understanding we gain of the proper treatment of high-stakes risk should be shared with the community, for the good of the community. Also, as suggested above, we need to resolve any conflicts that individual survival can have with the survival of the wider community, including the ecology. This means that radical solutions, as well, need to be considered across contexts. Radical reform at one level cannot be achieved unless the need for it is perceived, and reform achieved, at all levels.9 Throughout the history of the more radical solutions to risk, the depth and urgency of the solutions proposed has been roughly proportional to the degree to which it is believed we approach the possibility of doom. Recent radicalism based on the premises of so-called deep ecology rests on the absolute primacy of nature, not just as suggesting a livable risk goal but also as an ideal not to be tampered with.10 Rather than a guidepost of risk-free human development, nature is a standard that cannot be compromised. The penalty for any species that tries is extinction. Can human progress be achieved against such a strict backdrop? There is no unambiguous answer, short of "we must at least try."
6.7 Science to the rescue?
The dictionary defines dilemma as “a choice between equally balanced alternatives, most often unattractive ones.”11 In the case of risk dilemmas, that choice is between two potentially fatal results. One or more of those outcomes may depend on chance. As a result,
we may be tempted to weigh one or another option according to its probability of occurrence. Due to the terminal nature of the options, random or otherwise, this approach does not have much to recommend it. So, basically, this attempted resolution of a dilemma between risky choices "a" and "b" leads us back to where we started: doom(a) OR doom(b) = doom. The hope is, of course, that many of the high-stakes risk issues we face are not genuine dilemmas, at least not yet. We might suggest that the high-stakes choices that confront us are really just tough risk problems, or perhaps emerging dilemmas. In this sense, there may still be a chance to at least do something. That "something" may require the implementation of radical changes toward the elimination of risk – not just the elimination of risk on a selected level, for, as we have shown, the uncertainty involved in piecemeal approaches gets us nowhere. Rather, it will take a widespread implementation of a "no risk" strategy. We may in fact view this as a wider application of alternatives assessment. The difference is that, once we are in a dilemma, alternatives assessment does not fail safe. Its failure puts us back into the dilemma, and into harm's way. It is for this reason that we cannot rely on alternatives once we are already in a dilemma. Doing so relates directly to the notion of optimism: we are optimistic that if we find ourselves in a dilemma, we will find a way out. Entering a potential dilemma armed only with this confidence is a dangerous gambit. Just as in the case of incurring the possibility of risk, we are incurring the possibility that we may not be able to avoid that risk. The difficulties are not just based on chance but also on dynamics (see Chapter 2, Section 2.4). Reason, it would seem, demands a careful and realistic pre-assessment of the situation. Optimism with respect to solving tough risk problems or emerging dilemmas is driven by a variety of factors, not the least of which is our faith in science, technology, and our own ingenuity to get us out of these tough problems. The idea is that we can always find a way out, technologically or otherwise. The problem, of course, is that once the path toward catastrophic outcomes is set, this optimism in solutions to our problems must be taken on faith. Indeed, science has shown, and will continue to show, its ability to solve difficult problems. It can make our environment safer in many ways. However, our view of science is very deficient when it comes to
uncertainty due to knowledge imperfection. Our ability to know the true nature of things is limited. We need to understand what science can and can't do. Given this uncertainty, optimism is not reason enough to ignore the potential for high-stakes risk. To take actions without considering the possibilities of risk is foolhardy. To continue actions once we have a suspicion of risk is irresponsible. Some would argue, however, that due to this uncertainty, we only really know there is a problem when its bad effects start to become evident. Tough risk problems and emerging dilemmas suggest the need for action and, in fact, provide the impetus we need to take that action. The distinction is often posed as one of anticipating risk versus one of building our resistance to risk.12 The argument goes that we cannot build resistance if we are not challenged, and we can't rely on anticipation alone, as we just don't always know what to anticipate. This amounts once again to the argument that, to be able to handle risk, we need to demonstrate it statistically. As we have shown, by the time high-stakes risks manifest themselves over the statistical horizon, it may be too late. It is the scientific development of advanced risk assessment tools, as described in Chapter 5, that can help us identify risk potentials before it is too late. This wider view of science may result in some false positives. That in turn seems a small price to pay for survival. This stance is not meant to devalue science. To suggest technology will bring us out of risk dilemmas may, however, be asking too much of it, even given its remarkable record. In risk dilemmas, we are creating problems that are by definition difficult or impossible to solve. We are pushing technology to the brink and hoping that a solution will be forthcoming. It is precisely at this point that technology is most fragile. It is ultimately this type of technological optimism that handicaps technology itself in the face of true progress, not precaution. Precaution based on preaction means that technology is applied to risk problems up front, where it has its best chance of success. If technology has a role in resolving tough risk problems, it is in helping assure that we achieve a suitably risk-free baseline. So, how much faith should we put in science? We are suspicious of the future potential of technology to solve the problems it creates early on in its adoption, yet we are willing to place ourselves firmly in the hands of science and technology in the assessment of viable alternatives. The distinction falls in what we in fact commit ourselves
to, and how much trust we are willing to place behind that commitment. This speaks once again to the fail-safe nature of alternatives assessment. If we can't make that commitment to technology and science as a way out, we don't go ahead. This is different from placing all our faith in some future potential. If science can help, prove it, and prove it before we entrench the activity. The process is onerous only if we are committed to putting everything on the line for it. There is a difference between being confident and being irresponsibly optimistic. Many of us certainly spend enough time and effort trying to convince ourselves of the power of science to solve all future problems – or at least those with a vested interest in the results do. In most cases, we are fed what amounts today to a fantasy, and we are asked to have faith that it will become a reality (before it is too late). As those affected may well be future generations, those who make the claim may have little to lose themselves. What we are being asked to do, however, is to mortgage our future on the bet that these fantasies will become reality. The past is not much of a guide when it comes to forging a path into the novel and unknown. How much greater odds than, say, 50/50 are we supposed to give the proposition that technology will save the day, regardless of how recklessly we go into that future? Misplaced optimism about tough risk problems can also be driven by the inappropriate use of expected cost/benefit comparisons. Statistical arguments based on the preponderance of expected (probabilistic) benefits over expected costs can only work themselves out in the long run. The catastrophe problem suggests that the terminal effects of high-stakes risk make such long-run balancing problematic, if not impossible. Nonetheless, we tend to place some sort of mystic reliance on the statistics of loss, as if betting at a casino. We don't like to see bad things happen in our world without suitable resolution. So statistical optimism drives us to the belief that if they do, everything will average out for the best in the "long run." It doesn't take much coaxing for someone to take up this belief, as our general willingness to accept what amount to feeble cost/benefit arguments shows. The risk calculus is just seen as part of that same techno-structure that will assure survival, no matter what – you can bet on it. We would argue instead that it is not so much that the odds are so bad as that they are so unknown. Why be willing to bet so much on the unknown
when in the end you may not be able to collect? We need to expunge inapplicable extrapolations from the statistical domain, as they do more harm than good in these cases. The true bias we face in high-stakes risk assessment is the unsupported optimism that science can find answers to all our problems and that statistical methods can divulge the worth of such efforts. Allowing this bias to overcome us, we go headlong into what is ultimately a more dangerous world, with no way out.
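The catastrophe problem described here can be made concrete with a small simulation. The sketch below is our own illustration, not the author's: a Python toy model (all parameters invented) of an actor who accepts a small annual probability of terminal loss in exchange for a steady expected gain. The expected value reading is roughly break-even, but averaging is meaningless for the runs that end in ruin – in the long run, there may be no long run.

```python
import random

def lifetime_outcome(p_cat=0.001, annual_gain=1.0, years=100):
    """One actor accepting a small annual chance of terminal loss in
    exchange for a steady annual gain. Returns None if ruin occurs."""
    wealth = 0.0
    for _ in range(years):
        if random.random() < p_cat:
            return None  # irreversible catastrophe: no recovery, no averaging
        wealth += annual_gain
    return wealth

# With the terminal loss valued at 1000, the per-year expected value is
# 0.999 * 1.0 + 0.001 * (-1000.0), roughly break even, so an expected
# cost/benefit reading calls the exposure tolerable. Yet:
random.seed(1)
runs = [lifetime_outcome() for _ in range(10_000)]
ruined = sum(r is None for r in runs)
print(f"lifetimes ending in catastrophe: {ruined / len(runs):.1%}")  # about 9.5%
```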
6.8
The dangers of giving up
Mere acquiescence to potential dilemmas is giving up. We can't do anything about them, so we live with them. This approach, as we have shown, may bring us closer to the possibility of disaster – personally, organizationally, or as a society. The fine line of optimism with regard to dilemmas requires that we do whatever we possibly can to avoid them and that we try to resolve them should they occur. At the same time, we must not become so confident as to feel that dilemmas are always resolvable. The latter position suggests that a resolution may come from faith in technological, scientific, or other progress, or as a result of ignoring the true potential for disaster using cost/benefit analysis.

Either way, while precaution holds strong intuitive sway, we often believe that any lapses in our precautionary attitude, intentional or unintentional, can otherwise be fixed. Precaution is seen as the primitive solution. It works well, but in a brute force fashion. Clumsy application may stand in the way of more refined (i.e., materialistic) goals. Technological solutions, on the other hand, offer a precision approach to surgically removing our ills. We tolerate the primitive (as long as it remains "cost effective") but fall back on the technological. Surrender to risk and dilemma may not be that complete after all, since we have a solution standing by.

In this regard, we would consider defaulting to scientific and technological optimism on the brink of catastrophe as just another form of acquiescence. Putting our fate in the hands of science at this critical juncture is, as we have argued above, doing so too late. We have in effect given over our control of the situation to some unknown, unknowable, perhaps mystical power. The superstition of olden times is simply replaced with a mystical belief in the power of science.
The other path to resignation is based on the futility of it all. Risk avoidance is properly an all or nothing thing. Once risk dilemmas come into existence, they are by their very nature irreducible. So why do anything? Under this view, we should base our progress on some social, economic, or other ideology that makes most people feel good about the most number of things (roughly, the utilitarian stance of modern economics) and forget about risk.

We have shown that a considerable amount of high-stakes decision making proceeds on the basis of possibility, and hence fate. Fatalism, however, is neutral with respect to doom. It may happen, or it may not. At the very least, a rational belief in fate would suggest a neutral position with respect to the potential for doom. At first glance, this approach seems to present difficulties for some very obvious decisions: If I take ill, my recovery or demise is fated, so why see a doctor? The fact is that the world is very complex in its interconnections. As the Stoic would argue, many actions are co-fated.13 If you want to get well, then you need to see a doctor. If we want to avoid doom, we need to develop an appropriate attitude toward risk.

That attitude is not as simple as taking post-fact action to reduce risk once the exposure has been set. The conviction to seek alternatives to risk needs to be built into our character and is in turn based on our wider philosophy of life. Some may choose to be optimistic, based on a faith in science and technology. Others may feel it is simply futile to do anything. We would suggest that none of these approaches to risk acquiescence, whether perceived as temporary or not, is ultimately justified, either factually or logically. The pragmatic approach we suggest seeks elimination of risk as its goal. If a trace of fatalism remains, it is based on the reasoned acceptance of some degree of residual (non-zero) risk at the natural level. The hope is that, by accepting the logical superiority of the risk-free approach, the question of feasibility may in fact resolve itself.
7 Summary and Conclusion (of Sorts)
All other things being equal, we obviously prefer survival to doom. We achieve survival by avoiding risk. Avoidance, however, can create its own set of challenges (i.e., risk dilemmas), which is why effective precaution requires the preactionary development of risk-free alternatives. To the extent that the bleak choices inherent in risk dilemmas are forced upon us, they negatively impact the possibility of our survival, as individuals, in productive organizations, as a society, or even as a wider ecosystem. By pursuing a policy of alternatives assessment, early on in the process of planning for progress, we can avoid risk dilemmas.

The devil is, of course, in the fine points of execution of any program that purports to deal with high-stakes risk. This in turn assumes that from the very beginning of our efforts, we need some sort of understanding, both at an intuitive and a formal level, of high-stakes risk. The unique characteristics of high-stakes risks are often ignored in suggestions for their treatment. High-stakes risks are by their very nature complex, dynamic, and messy. We don't like messy things. When we encounter them, we try to clean them up. In the case of high-stakes risk, that means using the well-established theory of statistical decision applied to cost/benefit optimization in the classic tradition of economics. Cleaning is not about sweeping things under the rug. In applying simple theories of statistical decision to high-stakes risk, we sacrifice a cogent approach in exchange for tractability. Those who would use this ease of application as a proxy for truthfulness, either out of their own ignorance or to further self-interested agendas, are making a very serious mistake. From the standpoint
of action, we conclude that a significant rethinking of the way we interact with our world is in order.
7.1
Understanding high-stakes decision processes
Despite their critical importance, high-stakes risk decision criteria get remarkably little pedagogical attention in standard texts on the subject. Look at most any educational treatment of the theory and practice of decision-making under conditions of chance, and it is likely that very early on in the discussion, the criteria for decision when probabilities are unknown (or irrelevant) are described. These criteria are then quickly dismissed, based on impracticability or on the premise that we always know something about probabilities. The examples that follow, however, are firmly lodged in the statistical domain. Rarely do we see any demonstrations of results or case studies where the exposition goes into truly rare events. The exception is cases where the study can be carefully controlled and replicated – product quality control and reliability decisions, for example.

When statistical results are extended to the low-probability/high-impact domain, we feel an intuitive discomfort in applying statistical methods designed for verification in the long run. The problem with catastrophe is, in the long run, there may be no long run. The crucial difference – the fact that the stakes themselves impact the decision process – requires a different set of decision principles to adequately deal with the intellectual and practical challenge.

Equally egregious is the failure of most mainstream studies of risk to understand and account for types of uncertainty other than randomness (as expressed by probability measures). What we don't know can be as important as what we do know. Uncertainty due to knowledge imperfection is an absolutely critical part of understanding, and being able to deal with, high-stakes risks. We cannot develop a cogent, useable theory of risk without it. The Appendix to Chapter 1, a review and application of uncertainty modeling using fuzzy sets, attempts to put our intuitive perceptions of danger and the proper response to it in more formal terms. Further development of the concept of precaution along these lines may help make both its theoretical and practical aspects clearer.

To date, the approach to high-stakes decision has been a combination of extrapolating mathematical and economic techniques used
in the study of statistical risk to the high-stakes domain, supplemented by the study of human psychology with respect to such risks. Expected value/utility decision-making remains a completely valid approach to decisions when results can be verified statistically (i.e., in the short run). It is inapplicable to the high-stakes domain simply because rare, catastrophic events cannot be dealt with by averaging results over time. Using the expected value criterion and economic optimization based on cost/benefit tradeoffs as the touchstone of "rationality," psychologists then attempt to determine why humans respond so differently from what this rational theory projects. As discussed above, deviations or "biases" are carefully cataloged and reconciliations attempted. The consideration seldom enters that it is the underlying theory of rationality with respect to high-stakes risk that is flawed or otherwise biased, not intuitive human responses to these risks.

Based on principles that can be easily verified in the statistical domain, expected value cost/benefit is and will remain an integral part of the overall risk management process. Precautionary techniques for handling non-statistical, high-stakes risk will continue to be used as well. From the business perspective, fire can have catastrophic potentials. The installation of sprinkler protection is a fundamentally precautionary response to the potential for catastrophic fire risk. The reliability of sprinkler protection, in turn, is based on controlled trials that allow us to determine this reliability statistically, and hence apply it prudently within a precautionary regime. We can, for example, assess the reliability of sprinklers to operate under a variety of conditions that can be repeated experimentally and permit a large number of trials to be carried out within a reasonable time span. Nothing in the argument for precautionary or preactionary treatment of risks suggests that statistical analysis is useless. Its power just has to be focused on the proper domain.
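The contrast between the two decision criteria can be stated compactly. The following Python sketch is our illustration only, with invented payoffs: the expected value criterion averages over outcomes, while the minimax (loss) criterion attends only to the worst case, and the two can point in opposite directions when one outcome is catastrophic.

```python
# Two options facing a rare catastrophic failure mode (numbers invented).
options = {
    "proceed unprotected": [(0.999, +10.0), (0.001, -10_000.0)],
    "install protection":  [(1.0, -15.0)],  # certain net cost of precaution
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

def worst_case(outcomes):
    return min(v for _, v in outcomes)

ev_choice = max(options, key=lambda k: expected_value(options[k]))
mm_choice = max(options, key=lambda k: worst_case(options[k]))  # minimax loss

print("expected value picks:", ev_choice)  # proceed unprotected (EV ~ 0 vs -15)
print("minimax picks:       ", mm_choice)  # install protection (-15 vs -10,000)
```

Note that the divergence is driven entirely by the size of the worst case, not by its (small) probability – exactly the feature that makes the stakes themselves part of the decision.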
7.2
Making precaution work
Any way we look at it, precaution based on the minimax is strong medicine. It assures us, in theory at least, that high-stakes losses won't happen, because we won't let them happen. We either avoid the exposure altogether or take reliable protective measures. On the other hand, such a strong approach can be expensive. It requires us
to spend up to the amount of loss to prevent it. The "all or nothing" basis, however, is not something that can be trifled with if we want to avoid the catastrophe problem. Of course, we often do find low-cost/no-cost applications of precaution, even when the parameters of risk exposure are set and we ignore the dynamics of risk. Dilemmas in these cases don't exist. To the extent that costs remain reasonably proportional to benefits, the application of precaution might lead us to reasoned choices that are not that much different from those suggested by a traditional cost/benefit analysis. On the other hand, the post-fact application of precaution can entail great costs, either directly or in terms of opportunity costs (i.e., forgone benefits). The key is avoiding risk before the costs of avoidance build to unmanageable proportions.

Risk dilemmas can, in principle, be defused by consideration of safe alternatives early on in the process of planning for progress. Doing so requires a formal approach to backcasting, or planning in reverse, from risk goals to current actions for their achievement. A complex process, alternatives assessment ultimately depends on how strong our science and technology really are, or can be. Science in this sense should not be about getting us out of precautionary difficulties once we get into them but rather about preventing them in the first place. We call this approach preactionary, in that it requires thinking ahead. It suggests that in addition to risk avoidance and acceptance, we include risk anticipation as an integral part of the risk management process.

Essential to the practical application of precaution via the assessment of alternatives, given its all or nothing character, is the ability to establish some reasonable probability threshold value against which we can determine the possibility of random catastrophic risk, and hence, its acceptability. This concept forms the essence of our "risk goal." Clearly, a strictly zero threshold is too strict. Under a zero threshold, everything becomes risky, and hence, according to the minimax, everything must be avoided. Just where the threshold is set, however, is of crucial importance to the effectiveness of our approach. The whole process is complicated by uncertainties due to imperfect knowledge and at least partial ignorance of the relevant parameters. Fuzzy representations of both the assessment of probability of potential losses and the threshold of risk itself result in an
ultra-fuzzy representation of the possibility that our risk regime will lead to ultimate disaster. We have suggested here that the threshold of “acceptable” risk be based on some natural level of background risk. We can define (roughly) this threshold by reference to that level of risk that might exist at some human “subsistence level.” This does not mean that humans must forever exist at the level of progress suggested by mere subsistence. Instead, we judge safe progress by what we can achieve while maintaining this same (low) level of risk.
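One crude way to operationalize such a risk goal is sketched below. This is our own stand-in, not the book's formal fuzzy machinery: knowledge imperfection is reduced to a simple interval of supportable probability estimates, the natural background level is an invented placeholder value, and the precautionary reading accepts an exposure only when the entire interval sits below the threshold.

```python
# A crude interval stand-in for a fuzzy probability estimate: all we claim
# to know is that the annual probability of catastrophe lies in [lo, hi].
NATURAL_BACKGROUND = 1e-6   # illustrative placeholder for the "natural" risk level

def acceptable(lo, hi, threshold=NATURAL_BACKGROUND):
    """Precautionary reading of the risk goal: accept the exposure only if
    even the most pessimistic supportable estimate stays below threshold."""
    assert 0.0 <= lo <= hi <= 1.0
    return hi < threshold

print(acceptable(1e-9, 5e-7))   # True: the whole range lies below background
print(acceptable(1e-9, 1e-4))   # False: knowledge imperfection spans the threshold
```

Widening the interval (greater knowledge imperfection) can only move a decision from acceptance toward avoidance, never the reverse – the fail-safe behavior the text asks of a risk goal.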
7.3
How do current regimes compare?
How does our current regime of risk management, be it on an individual, organizational, or social basis, compare to what we have suggested in terms of our outline of precautionary treatment of risk? In Figure 7.1, we contrast potential "real-world" risk criteria to proactive precaution, based on preaction, and show how their actual results may be tested against precautionary suggestions. To the extent that our current approach fails the indicated test, we deem that its results contradict precaution via risk anticipation. As a result, we might rightly question whether our current regime has provided guidance that is suitably robust in the face of high-stakes potentials. Take, for example, the fatalistic attitude. As we have suggested, the foundation of our fatalism is key.
Precaution (preaction) versus:       Test:
Fatalism                             Is acceptance based on mere acquiescence in the face of risk dilemmas?
Cost/benefit                         Does the decision change when the disutility of high-stakes, irreversible loss is properly adjusted?
Mechanistic, post-fact precaution    Does the decision lead to risk dilemmas?

Figure 7.1 Comparing Decision Criteria in Practice.
If it is based on acceptance of risk (any risk) as a result of mere acquiescence, it is very far off from what a genuinely precautionary approach would suggest. More often than not, this sort of fatalism is a result of simply accepting risk dilemmas. We choose to live with them. The problem is, on a possibilistic basis, we can't. Our ultra-fuzzy perspective on catastrophic risk suggests that a tolerant attitude toward risk promotes the increasing potential for disaster. On the other hand, a reasoned fatalism, based on the adoption of alternatives that sufficiently reduce our likelihood of doom, is perfectly in tune with a wider precautionary stance based on the notion of preaction. We take action early on in the process, assuring a safe path toward progress.

We have suggested that a possible reconciliation between precaution and expected cost/benefit exists when we place a sufficiently large (possibly infinite) value on significant, terminal losses. To contrast a formal cost/benefit analysis with precaution, we can simply adjust the disutility estimates on all significant, irreversible outcomes strongly upward and note the results: Turning the disutility "knob" on the analysis to "full blast," if you will. Our comparison is then between the model based on cost/benefit and this pseudo-precaution based on external adjustment of its parameters. The results may very well coincide, as when we apply precaution based on proportionality of costs (i.e., the costs of precaution are relatively "inexpensive"). In other cases they won't. In these cases, we would argue that the contrasts are disguised in the form of inadequate representations of the true cost of loss.

Viewed from a different perspective, expected cost/benefit models that suggest acceptability of catastrophic exposures seek to adjust the disutility of loss to benefits rather than the other way around. It is the benefits that are to be justified, not the losses avoided. To achieve this forced balance we resort to questionable valuation of both costs and benefits, often devaluing the former while overstating the latter. Attempts to measure the "value" of human life, an ecosystem, or even severe economic disruptions are manifestations of this search for numbers that help justify what are properly dilemmas in need of elimination, not validation. The resistance to valuations in these terms arises because they usually don't make sense, not because they are so difficult to determine. If we are going to value life, or an ecosystem, or an economic system, we need to do so from the wider perspective, specifically, that of the balance of life at its most basic level. This process proceeds from
basic principles upward (on the basis of safe progress), not down from presumed benefits that carry unknown risks. In doing so, we accept and reject risks depending on how hard they are to fix, not on their potential for disaster. The paradox here is that we introduce a bias toward acceptability for the sake of gaining benefits that ultimately makes serious risks harder to fix.

Yet doesn't the exercise of maximizing disutility with respect to all catastrophic loss potentials simply show the folly of a widespread precautionary approach? Adjusting all disutilities upward in this fashion would soon bankrupt us in terms of the direct and opportunity costs of avoiding these risks, wouldn't it? First of all, the difference only shows potential costs we would be willing to pay. The aggregate problem is just a collection of individual dilemmas writ large: We would be willing to pay a very large sum to prevent infinite disaster. The focus should not be the potential maximum cost but rather what we can do to reduce potential costs to manageable (i.e., natural) proportions. This is where alternatives assessment comes in. To call the problem insurmountable based on the maximum we might be willing to spend invites needless pessimism. While we have suggested that risk dilemmas should not be approached with an undue degree of optimism, we would argue that if we have any degree of faith in science and technology, it should not produce undue pessimism either. In this sense, a precautionary approach based on preaction is about realism: What can we really do to avoid risk dilemmas?

Which brings us to the precautionary approach: Avoid risk. Precautionary approaches based on the minimax are indeed becoming more widely heralded today as alternatives to expected value decision. As we have shown, however, not all precautionary approaches are equal. Taking what we have described as a mechanistic, post-fact approach to precaution can lead to risk dilemmas. And this may lead to fatalism by default (risk acquiescence). Of course, this does not mean precaution caused the problem. Applied mechanistically, however, it will be unable to solve the problem. Only precaution that can be achieved on a low-cost/no-cost basis, via a careful comparison and implementation of anticipatory risk management based on alternatives assessment, can help assure survival on a basis that is free of dilemma.

By these simple tests, if our current decision methods give us suggestions that differ from precaution, they may end up being very misleading. We can circumvent the possibility that these methods
will go wrong by adopting a preaction-based view of precaution from the beginning. Let a suitable precautionary scheme guide us, and we won't have to worry about making excuses for defective regimes when something goes wrong.
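The disutility "knob" test from Figure 7.1 can be put in miniature. In the sketch below (our illustration; all numbers invented), a single risky option is compared against a safe alternative while the assessed disutility of the irreversible outcome is cranked progressively toward negative infinity; the expected cost/benefit recommendation converges on the minimax (avoid) recommendation.

```python
import math

# One risky activity versus a safe alternative (illustrative numbers).
P_CAT = 1e-4      # assessed probability of the irreversible outcome
GAIN = 100.0      # benefit of the risky activity if nothing goes wrong
SAFE_NET = 60.0   # certain net benefit of the risk-free alternative

def ev_choice(catastrophe_disutility):
    ev_risky = (1 - P_CAT) * GAIN + P_CAT * catastrophe_disutility
    return "risky activity" if ev_risky > SAFE_NET else "safe alternative"

# Turning the disutility "knob" toward "full blast":
for d in (-1e3, -1e5, -1e7, -math.inf):
    print(f"disutility {d:>12g}: choose {ev_choice(d)}")
# the recommendation flips from "risky activity" to "safe alternative"
```

When the knob disagrees with the unadjusted analysis, the argument in the text is that the original disutility figure, not the precautionary result, was the suspect number.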
7.4
Doing the right thing
Safe progress entails responsibility. That means doing the right thing. What is the "right thing," and how do we achieve it? In terms of high-stakes risk, we set some very simple criteria: It's about survival, the continuation of important things. If we consider some basic quality of life as the starting point, it is reasonable that we can build on the principle to assure that the progress we achieve is risk-free. In this way, we avoid the paradox that our ultimate utopia might be one in which we live in fear of terminal risk – even though we may have sufficiently tamed the more mundane risks.

Yet, doing the right thing with respect to high-stakes exposures is made more difficult by the perceived remoteness of the whole thing. Risk dilemmas are often seen not as a call to action but as a justification of largess. So what do we do about, say, global warming? If we decide to curtail the production of industrial waste gases, we could cause a global economic collapse, which could imperil billions of humans. And after all, what is the chance that global warming (or any other low-probability potential for disaster) will affect me?

The trigger for action of course remains our assessment of the possibility of doom if we don't do otherwise. And that, as we have pointed out, is very fuzzy. In terms of our ultra-fuzzy assessment of the possibility of doom (Chapter 3, Section 3.5), the question from either a personal, organizational, or societal perspective is "where do we lie on this spectrum of possibility?" Uncertainty suggests we cannot know the answer with any degree of precision, and the catastrophe problem implies that once we get sufficient statistical evidence, it may be too late to do anything about it. The prudent course under such circumstances would seem to be rather safe than sorry. Might we infer anything further, no matter how vague, that may suggest where we are now with respect to risk? While the background is fuzzy, can we perceive signals or factors that differentiate our current status in any way? In terms of our ultra-fuzzy universe, reproduced in Figure 7.2, are we closer to point "a" or point "b"?
[Figure: the ultra-fuzzy spectrum of risk acceptance, with the possibility of disaster (0 to 1) rising from "accept no risk" to "accept all risk"; point "a" lies near the no-risk end, point "b" near the all-risk end.]

Figure 7.2 Where are we now?
How, might we ask first of all, could anyone in their right mind approach point "b"? Why would we tempt fate? Nonetheless, the signs of a generally permissive attitude toward risk are all around us. Our science emphasizes the minimization of false positives (type I error) while all but ignoring false negatives. Science, it is argued, can't proceed if we are "too cautious." Reasonably high levels of both individual and societal risk – 10−4 (one chance in ten thousand) or even 10−3 (one chance in a thousand) – are often supported today by government regulation based on "tolerability" compromises, as we suggested in Chapter 3 (Section 3.8). Given the uncertainties due to knowledge imperfection associated with so many of our potentially high-risk endeavors, such high tolerances give us no room for error.

This all may satisfy our need for definitive evidence ("seeing is believing"), but it does so at a tremendous potential cost. When uncertainty is high, the chasm is wide between evidence assuring us that an exposure definitely cannot harm us and evidence that it surely can. This has implications for where the burden of proof lies: With those that deem the exposure safe until proven harmful, or those that suggest the exposure should be avoided until proven safe. While we have argued that the uncertainty and finality of potential disaster demand the burden of proof be placed on those that would promulgate potentially risky actions, our social laws and regulations, for the most part, make no such demands. The burden of proof remains the other way around: Assume it is safe until proven otherwise. Requiring too much proof of harm, or definitive
proof that may never be forthcoming, is itself dangerous. At the very least, it allows risk dilemmas to develop. In our search for definitive answers, we are all too willing to accept extrapolations from the statistical domain based on simplistic applications of cost/benefit analysis. When precaution is applied, it is often as a last resort, when its effectiveness is difficult to achieve. In turn, we become daunted by the resulting dilemmas. Defective decision-making regimes intensify the potential for risk by allowing any exposure we deem marginally beneficial to slip by.

Last but not least, there is little evidence that we pay any attention to the accumulation of risk. As catastrophic loss potentials are very sensitive to accumulation (the "all or nothing" factor), ignoring accumulation is a prescription for disaster. As accumulation is fuzzy (indeed, ultra-fuzzy), it may catch us off-guard. The result is that risk may unnoticeably, but undeniably, work its way in, representing the phenomenon of creeping risk. We may not be particularly concerned about global warming affecting us and thereby choose to do nothing (ostensibly, avoiding a more immediate counter-risk). Arguably, the creep of other risks we face is more immediate. The general attitude of inattention can, therefore, bring about peril.

Are there signs we face such "creep" today? In fact, statistics show the rates of fatal cancer, heart disease, and even automobile accidents are alarmingly high (Figure 7.3).
Cause of Death                 Annual Probability
Heart Disease                  .0022, or 2.2 × 10−3
Cancer                         .0018, or 1.8 × 10−3
Work Injury (High Hazard)      .0017, or 1.7 × 10−3
Stroke                         .00052, or 5.2 × 10−4
Automobile Accident            .00013, or 1.3 × 10−4
Suicide                        .00011, or 1.1 × 10−4
Work Injury (Typical)          .000028, or 2.8 × 10−5
Drowning                       .000011, or 1.1 × 10−5
Electrocution                  .0000013, or 1.3 × 10−6
Insect Bite                    .00000032, or 3.2 × 10−7
Lightning Strike               .00000016, or 1.6 × 10−7

Figure 7.3 Annual Probability of Fatality from Various Causes.1
Are these risks merely a product of "natural causes"? Or might they themselves be signs of creeping exposure to risk from a variety of potentially controllable factors? One might reasonably argue that they are the products of risk dilemmas, with ugly resolutions. That such risk dilemmas may be caused by our actions (or inactions), and do not spring into this world fully formed, is evidenced by the fact that one hundred years ago the mix of risks the individual faced was very different. Leading health risks, though undoubtedly serious, were simpler: Pneumonia, tuberculosis, and enteritis. Why did we eradicate these just to replace them with more complex and widespread causes?2

The point is that if we acquiesce on one or a few potential high-stakes risks, we might as well give up on all. Looked at from the wider perspective, the need to heed precautionary warnings is not about some obscure future potential that may or may not happen. Proper risk management cannot be achieved by taking a wait-and-see attitude. It is about recognizing what is going on around us, right now.

Doing the right thing hinges not only on fear of the unknown. Many positive factors suggest we know how to, and should, pursue the risk-free alternative. We know that by avoiding exposures to risk, we avoid its perils. We also may have some idea of the possible counter-effects of taking or not taking action (i.e., we are not completely ignorant as to the existence of risk–risk tradeoffs). We may even have some pretty good knowledge about how we might modify exposures to reduce or eliminate risk (in terms of loss prevention or control initiatives). These are all fairly reasonable assumptions to make about the world. These assumptions then give us the tools to reasonably identify some state of safety, though as we have seen, achieving (and maintaining) it may be more difficult. That is all we need for the hope that some relatively risk-free (i.e., natural risk only) utopia might in fact be achievable. We would further argue that this ideal may be achievable even under generalized conditions of (perhaps) irreducible uncertainty due to our inability to completely understand the world. We don't know if this world is perfectly safe, but we have genuinely done all we can to make it so. We would suggest that such a world, with all its irreducible faults, would nonetheless be a "happier" one than the one we live in now. We would also suggest that, as of now, we may be very far away from that world.
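The sensitivity to accumulation noted above has simple arithmetic behind it. The notes to Chapter 3 give the chance of at least one loss among n independent exposures, each with probability p, as 1 − (1 − p)^n; the sketch below (the exposure counts are our own illustrative assumptions) shows how individually "tolerable" exposures creep toward a sizeable aggregate danger.

```python
def chance_of_at_least_one(p, n):
    """Chance of at least one catastrophe among n independent exposures,
    each with annual probability p: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

# Each exposure alone sits at a "tolerable" 10^-4, the kind of level
# supported by tolerability compromises. Acceptance, however, accumulates:
for n in (1, 10, 100, 1000):
    print(f"{n:>4} exposures at 1e-4: {chance_of_at_least_one(1e-4, n):.4f}")
# 1 -> 0.0001, 10 -> 0.0010, 100 -> 0.0100, 1000 -> 0.0952
```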
7.5
Who will lead the way?
That leads to perhaps the most significant implication of treating risk dilemmas. It needs to be done at a wide, social level, with community involvement, and it must entail widespread, wholesale change – not incremental adjustments. It is in this sense that the simple notion of precaution has incredibly strong implications for the way we lead our lives within the wider realm of nature. Resistance to precaution is not primarily based on any theoretical defects in a precautionary approach to risk or even in the inherent challenge of risk dilemmas (we would argue that they are preventable, in theory at least). The bigger challenge lies in making these wholesale changes acceptable. So, who can we depend on to lead the way in taking the required actions to resolve the challenge of risk dilemmas?

Individuals? It is difficult for those in the midst of dilemmas to see and act on their role in a wider context. Individuals have issues of value, and there is clearly a psychology of self-interest at work as well. Though they understand the hallmarks of precaution, individuals may not be able to appreciate precaution on a wider contextual basis. We all know how to be precautionary ourselves, but how about toward others? We have suggested that the principle of precaution transcends contexts. Individually, we base a great deal of our behavior toward high-stakes risk on prudent avoidance. To do so, it makes sense that we think ahead (i.e., anticipate risk). To take precaution beyond homily, we as individuals need to integrate it into a wider philosophy of life. That means taking a closer look at our interactions with the world around us (both human and non-human). Treating precaution in isolation may ultimately have realizable individual benefits for those that lead an isolated existence. It is an existence that makes their individual decisions a closed system, or at least a closely knit system, based on their immediate environment. The extent to which this philosophical system promotes getting along with a wider world is an open question for those who choose to live in this world.

Can we depend on business? Businesses, as we have suggested above, often act in a precautionary manner, yet only with regard to their own financial self-interests. So how do we get business to care beyond the preservation of their productive assets? Precaution in the wider sense is not adequately captured in the simple profit-maximizing paradigm of modern business that relies so fundamentally on
cost/benefit analysis, either deterministic or statistical. This is a result of the failure of traditional economic models of business to capture the externalities of risk. So, why not expand the paradigm of economic thought? As we have argued throughout, the mainstays of traditional economics, optimization and decision based on cost/benefit analysis (either deterministic or statistical), are not suitable to address the catastrophe problem. A symbiosis between traditional economic methods and high-stakes decision criteria may be achieved through the application of a sort of meta-rule: We apply traditional economic methods to risk until the impacts approach the catastrophic, then we switch to a precautionary regime.3 We could of course also more accurately value catastrophic externalities, assuming we can properly identify them within the traditional framework, by assigning them a suitably high disutility.

This suggests that the business commitment must include, at a minimum, some acceptance of precaution beyond its immediate context. Unlike vague commitments to "sustainability," precaution and the associated preaction (i.e., risk anticipation) require palpable, verifiable commitments. This commitment to precaution also suggests a reversal in the traditional scientific and legal view of the burden of proof with respect to harm. Specifically, under precaution, the burden of proof falls on those who would propose the exposure – in the case of the business enterprise, some product, service, or related operations. In the wider regulatory arena, a precautionary commitment may need to be further enforced by the imposition of strict liability for activities that fall under its purview. Strict liability essentially imposes a policy of "no excuses" for any harm that might have been forestalled by prior application of a precautionary standard, regardless of negligence. To put it another way, a strict penalty is imposed for the existence of "type II" errors. The dilemma this entails from the business standpoint is that the business must be willing to give up one of its own contextual precautionary fallbacks: Reliance on limitations of liability beyond those attributable to simple negligence (i.e., failure to exercise reasonable care under traditional burden of proof requirements). We would argue, however, that what business gives up in terms of self-serving precaution within a limited context promotes its own growth as a factor within a modern system of genuinely sustainable progress.
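The meta-rule lends itself to a compact statement. The sketch below is our reading of it, not a formal proposal from the text; the catastrophe cutoff and the option payoffs are invented for illustration.

```python
CATASTROPHE_CUTOFF = 1_000_000.0   # illustrative: losses at this scale are terminal

def decide(options):
    """Meta-rule sketch: optimize expected value while all impacts stay
    sub-catastrophic; switch to minimax once any outcome approaches the
    catastrophic. options maps names to [(probability, payoff), ...]."""
    def worst(outcomes):
        return min(v for _, v in outcomes)
    if any(worst(o) <= -CATASTROPHE_CUTOFF for o in options.values()):
        # precautionary regime: minimize the maximum loss
        return max(options, key=lambda k: worst(options[k]))
    # statistical regime: maximize expected value
    return max(options, key=lambda k: sum(p * v for p, v in options[k]))

routine = {"repair now": [(1.0, -5.0)], "defer": [(0.9, 0.0), (0.1, -30.0)]}
print(decide(routine))        # "defer": ordinary stakes, expected value applies

high_stakes = {"proceed": [(0.999, 50.0), (0.001, -2_000_000.0)],
               "avoid":   [(1.0, -100.0)]}
print(decide(high_stakes))    # "avoid": a catastrophic outcome is in range
```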
Can we depend on society and its coordinative and regulative mechanisms to somehow come together to alleviate precautionary dilemmas? That depends on society's beliefs and values. If society as a whole is committed to acquiescence, perhaps based on some degree of technological optimism, then it is doubtful we will get very far in solving the issue of risk. That there is collective wisdom in society cannot be doubted. This does not mean that, as history has shown us, whole societies cannot go wrong, with disastrous results. Society may perhaps provide an overview and a way to collect the thoughts, and possibly stir the actions, of its members. The widespread dissemination of knowledge on how high-stakes risks really work, which shows a respect for people's intuitive understanding of the value of precaution in the face of catastrophe, may help. That society can do this without some sort of coordinated vision is doubtful. That vision may be of a brighter, safer future, or it may be the prospect of disaster. The problem with the latter is that it may force ultimate resignation. If, instead, an optimism is to be built around our future, it will need to focus on what we must do now, not on what might "save us" in the future.

With regard to catastrophic risk, can the natural physical environment that surrounds us tell us anything about how we should be managing high-stakes risk? Do other life forms practice precaution – animals, insects, or vegetation? Evolutionary history suggests they do. The evolutionary history of many natural life forms is longer than that of humans, though it too has been dotted by irreversible, unpreventable, though quite rare, events, such as asteroid impacts or natural changes in the climate. The problem is that as we alter or even extinguish physical nature, ostensibly to further our own progress, we are precluding the chance of ever learning anything about risk management and survival from it. If we destroy nature, how will we learn anything from it? Chances are it does hold important messages. Understanding how nature works within its boundaries has practical implications for high-stakes risk, and in turn survival, for both humans and nature.

Traditionally, as individuals, organizations, and societies, we have looked to science – the organized acquisition of knowledge – for answers to the tough questions of life. Science, to a great extent, proceeds experimentally. Can we ultimately understand our fate
experimentally? Experiment implies incremental progress. Unfortunately, when it comes to high-stakes decisions, we have shown that incremental solutions are not the answer. They do not avoid the catastrophe problem. And the time which we have to act with respect to risk is, by virtue of the overall uncertainty in high-risk situations, unknown. Every day we face a coin toss: Heads we win, tails we lose. The other issue brought about by uncertainty is that small changes may get lost in the enormity of the uncertainties involved: By taking small steps in this very uncertain environment, how do we even know what direction we are going in? Real impacts require big changes, those whose effects can be observed against a fuzzy background.

Yet, isn't "taking it slow" a sign of precaution? It depends on your vantage point. Being almost fully into a situation that bodes no good, and that may be fraught with irreversibility, is different from being in a benign safe harbor. We are arguably in the former spot. Take action, or things will get worse. You don't slowly dab away at decay, hoping to forestall its growth. You have to remove it completely – and clean out the affected area to prevent its regrowth.

The most valuable form of science for dealing with high-stakes risk is informed by an advanced form of induction that relies on the interconnectedness of knowledge. To a great extent, this interconnectedness is based on analogy. Analogy underlies the notion of concatenation in science.4 We observe that all swans we have seen so far are white. Do we infer that all swans are white? Concatenation of knowledge suggests that while our limited observation may suggest all swans are white, there is considerable variation in bird color in other species. This suggests that our white swan hypothesis be treated with caution. In the study of high-stakes risk, we are limited in the inferences we can make through observation. Yet, we can concatenate inferences based on how the world works, extrapolating from more immediate experience. Applying the universal hallmarks of precaution among contexts is an example. We can also make inferences based on a wider history of human existence (such as when we recognize the preservative value of natural risk levels).

Science can help lead the way in thinking about risk. It is a science, however, that recognizes its own limitations and the need to base inference on a variety of experience, not just the quantity of experience. Natural limitations on what we can know need to be dealt with accordingly. The science of uncertainty in the face of high-stakes risk
is in need of further development, hopefully along the lines we have suggested here.

The issue of who will lead the way is inextricably linked to what ideas will lead the way. We would suggest that these ideas will come from an honest appraisal (reappraisal?) of science and technology and what they can do for us in the framework of preactionary risk management. Setting sustainable, safe guidelines for progress will require the development of knowledge in this regard, again in a proactive rather than reactive manner. Last but not least, the development of these ideas must proceed on a democratic, participatory basis. Ideas developed in a closed process will always be met with some suspicion. This means that leadership, in both thought and action, will require building a degree of trust among all involved. Under natural conditions, we can be sure that we are not being taken advantage of. Mother Nature gains nothing from deceiving us. On the other hand, we might rightly wonder, what is the impetus behind imposed risk? We might not be concerned with the way in which possible disaster may come about (be it natural or "human made"), but we should definitely care why it comes about. Above all, we need to be sure that everything that could have been done to prevent it was. In this way, what might be considered subjective moral or ethical beliefs about trust, control, and the voluntariness of risk have very tangible consequences for our future.

The question then becomes how we make all these issues relevant to decision makers, whoever they may be. The economics of production and progress will undoubtedly fit into the process but will not, unlike today, dictate it. Relevance may be achieved autonomously or through authority. That part is yet to be determined. Given the contextual differences, at least some degree of coordination will be required among the various entities affected. The overall mission is, however, clear: Achieving freedom from high-stakes risks and their associated dilemmas. Our reward? Surviving long enough to enjoy the fruits of our progress.
Notes

Chapter 1: A review of high-stakes decision criteria
1. On the underlying notion of chance and probability in risk, see Lowry (1989). For an introductory discussion of how the basic notions of chance fit into decision processes about risk, see Jablonowski (2006), Chapter 1. It is critical at the outset that the reader recognizes the difference between the underlying process of chance and its manifestation in terms of observed statistical data. As we will describe later, the true nature of the underlying random phenomenon of risk may be very imperfectly known, especially in the high-stakes domain. We need to go beyond statistical data in these cases, in fact structuring our decision processes to properly respond to these complex uncertainties.
2. For a deeper introduction to expected value decision-making within the context of risky decisions and its potential defects when applied to the high-stakes domain, see Rescher (1984). The application of expected value in the high-stakes domain is also criticized, in the wider context of risk management, in Haimes (2004), Chapter 8 ("The Fallacy of the Expected Value").
3. This discussion draws upon the wider treatment of high-stakes decision in Jablonowski (2006), Chapter 2. See also Jablonowski (2005).
4. The minimax (loss) criterion is described, within the context of modern decision theory, in Luce and Raiffa (1957). See also Jablonowski (2006, Chapter 2).
5. The importance of identifying the counter-risk associated with any preventive action, the so-called risk–risk tradeoffs, is discussed in Graham and Weiner (1997). Note that in general the presence of counter-risks only becomes problematic once they become large enough. Once they do, both precautionary and cost/benefit analyses face tough problems. The best way to solve such problems is what this book is about. The idea that no risk can be reduced without increasing another, or that any such tradeoffs cannot be otherwise managed, is unsupported by the facts.
6. The notion that the potential for dilemma is a key factor in precautionary decisions about high-stakes risk is identified in Rescher (1984) and more recently in Haller (2002).
7. The minimin criterion is contrasted with minimax in Luce and Raiffa (1957). We refer to this choice as fatalistic, following the classical discussion of fatalism in philosophy, which implies something that is destined and which it is futile to try to change [see Taylor (1991)].
8. The types of uncertainty and the notion of imperfectly known probabilities (the combination of randomness and epistemic uncertainty), with
implications for the support of decision-making, are discussed in Walker et al. (2003). The wider typology of uncertainty is also the subject of Smithson (1989).
9. Fuzzy sets, their representation, and logic are reviewed in Zimmermann (1991). Perhaps the best introduction to fuzzy sets is the early papers by fuzzy logic pioneer, Lotfi Zadeh, collected in Yager et al. (1987). For a good, nontechnical introduction, see Kosko (1994).
10. Appropriate thresholds for action in terms of confidence are a matter for further investigation. On different notions of "possibility" in modern science, see Barrow (1998). While we have postulated a probabilistic threshold here, non-probabilistic interpretations are possible as well.
11. The precautionary principle, as a principle of law and regulatory guidance, is discussed in the variety of essays appearing in O'Riordan and Cameron (1994), Raffensperger et al. (1999), and Tickner (2003). Worldwide development of the principle, with application to human health, safety, and the environment, is discussed in Whiteside (2006). For more on the interpretation of the precautionary principle in terms of minimax decision, see Gardiner (2006).
12. Ministry of the Environment (Norway) (2002).
Chapter 2: Finding alternatives to risk
1. Potential counter-risks associated with banning DDT are described in Goklany (2001). While Goklany points out that such interactions must be considered in a properly comprehensive risk assessment, he goes beyond simply assessing the merits of competing decision criteria by suggesting that the risk of disease spread by banning DDT is greater than the threat of DDT. His further implication that applications of the precautionary approach consistently, or systematically, either ignore or undervalue potential counter-risks is simply unfounded.
2. See Jablonowski (2007). It is estimated that the average firm pays an amount equal to roughly 1 percent of assets in premiums each year – a relatively small price to pay to assure financial survival. Examples from our individual lives and in society are easy to multiply [see Jablonowski (2006), especially Chapter 3 on "Practical Precaution"].
3. Alternatives assessment as an adjunct to precautionary action is discussed in detail in O'Brien (2000).
4. For a more detailed description of backcasting and applications, see Dreborg (1996) and Robinson (1982, 1990). While Robinson was the first to codify the process, he credits the earliest formal applications of the approach to the work of Amory Lovins on the determination of sustainable energy alternatives (Lovins, 1977).
5. Adapted from Bannister and Snead (2004), where the authors use "sustainability level" instead of "catastrophe level," with similar implications.
6. Martin (1997).
7. The risk–risk tradeoffs inherent in DDT and disease prevention were identified and explored as early as Rachel Carson's seminal exposition of the danger of DDT in 1962 (Carson, 1962). Ms. Carson's review of potential solutions was very much in the spirit of the preactionary alternative assessment. Recent developments in alternatives to DDT are described in McGinn (2000).
8. See Bankes (1993). As he points out, exploratory analysis is often more computationally intensive than consolidative models. Exploration of complex models has become feasible with the advent of the electronic computer and more widely accessible with the distribution of more powerful "desktop" computing solutions.
9. For engineered systems, uncertainties due to knowledge imperfection should become part of the design process. See Antonsson and Otto (1995) and Nikolaidis et al. (2004). The notion of uncertainty-aware design suggests that we actively design while accounting for uncertainty, reliability, and safety metrics (Adams, 2006). This approach sets the stage for precautionary design with safety constraints in mind. In this regard, we discuss further modifications to traditional risk assessment techniques in Chapter 5.
10. For a description of the hysteresis phenomenon and dynamic physical models in general, see Cook (1994). Complex dynamic phenomena, including discontinuities, are described and applied to the study of ecosystem risks in Dasgupta and Mäler (2004).
11. While the enormous sunk costs of the fossil fuel economy are undoubtedly a huge factor in the risk dilemmas associated with global warming, scientists suggest that the dynamic behavior of associated physical systems may also contribute to hysteresis, and hence potential irreversibility, of accumulated greenhouse effects (Schneider, 2004).
12. The concept of sustainable development is presented and discussed, including technical, economic, and social perspectives, in Beder (1993).
13. The ideas of "cost" and "risk" in the natural domain can perhaps be made more obvious if we think in terms of resource utilization rather than monetary exchange. Bradley (2002) represents safety as a function of resource usage. While monetary costs are often used to valuate the scarcity of these resources, a more direct exposition of tradeoffs in terms of resources places risk more directly in the picture, as a "negative" force on resources in need of a wider balance within the system under study (be it of individual, organizational, or social importance). Once again, this balance comes not in the form of individualized cost/benefit tradeoffs but instead forms the basis of how we define safe progress. Safety is therefore not seen so much as a cost but rather as a component of a properly functioning system. Proper system design considers safety from the beginning.
14. See Ackerman and Heinzerling (2002) on the deeper problems of valuation in the high-stakes setting.
15. See Hardin (1978), Chapter 17, "Is Civilization Ready for Nuclear Power?"
Chapter 3: Risk avoidance: All or nothing
1. Note that in the discussions that follow, we will be using the concepts of de minimis risk, natural risk, acceptable risk, and the possibility/impossibility threshold roughly synonymously.
2. The notions of additive growth in high-stakes risk have been extensively explored with respect to the issue of the proliferation of nuclear weapons, for example (Schell, 1982). The discussions of exposures to risk from nuclear proliferation could just as easily apply to the proliferation of high-stakes risk in general.
3. The probability of at least one loss among independent risks, all with a probability of loss p, is given by the expression: 1 − (1 − p)^n
4. 5.
6.
7. 8.
9. 10.
A complete derivation and discussion of this expression, with applications to the risk of proliferation of nuclear weapons, can be found in Lyttle (1983). On the complexities of determining the mathematical aggregation of risk through time, see Campbell (2005). We should caution at this point that while thresholds offer a very conservative approach to risk once exceeded, they can be less conservative than probabilities when the potential for significant accumulation of probabilities just below the threshold exists. That is, while any single exposure may not exceed the threshold, the accumulation of large number of “sub-critical” exposures can. See Nikolaidis et al. (2004) and also Whipple (1987). We can alleviate this potential by setting suitably low thresholds in the face of potential sub-critical aggregation. Elster (1989) argues that a coin toss is a good a decision mechanism as any other, and probably better than most, when rational means of decision escape us. Type-2 fuzzy sets are described in Mendel and John (2002). With respect to issues in the proliferation of nuclear arms, Goodin (1985) argues that only modal approaches, as opposed to probabilistic ones, can be used in high-risk situations that entail so many unknowns. Risk acceptance criteria are reviewed in Philipson (1983). See also Fischhoff et al. (1984). We introduce scientific or exponential notation here to more compactly express small probability numbers. For example, a chance of one in one hundred can be written as the fraction 1/100 or using the exponential for 100, 1/102 . A negative sign in front of the exponent means “inverse,” so we can also write 1/100 as 10−2 . In this way, we can present very small probabilities in less space. For example, one in a million or 1/1,000,000, which in decimal notation is .000001, reduces to 10−6 . We can express non- decimal numbers using multiplication: So, .0000065 becomes 65 × 10−6 .
11. With roughly 50 people killed a year, the annual fatality rate from lightning is around one in six million. Source: National Safety Council. 12. On the notion of selective fatalism, see Sunstein (1998). While Sunstein’s discussion relies heavily on the psychometric literature of risk assessment, he does suggest that some degree of high-stakes risk acceptance may be based on desensitization to the genuine perils of many “everyday” risks. 13. Cost/benefit assessments based on the observed behavior of humans in taking risk, the so-called perceived preference approach, is described and applied in Starr (1969). Critiques are cited in Philipson (1983). 14. See Slovic (1987). The assessment of acceptable risk based on psychometric studies are woven into a wider theory of high-stakes risk decision in Fischhoff et al. (1984). 15. See Sjoberg (2000), as well as Sjoberg (1999). 16. While driving a car is a case whose “voluntariness” may be questionable, the issue has far more ominous and far reaching implications for the acceptance of risk with regard to occupational health. In many industrialized countries, the level of annual fatality risk among workers that is accepted may be 100 times that for other risks of day-to-day life (sometimes as high as one chance in thousand) based on the observed characteristics of the general working population to tolerate such levels (Rimington et al., 2003). The implication here, as with driving, is that the acceptance of risk in this case is voluntary. As in the case of driving, and perhaps even more so, we can question the degree to which the exposure is genuinely voluntary or indeed necessary to life. The situation is complicated by the fact that occupational exposure to risk is far from uniform. Certain lower risk occupations may in fact exhibit risk levels that blend with the natural background level. Others exhibit risk exposures many times higher. While differences in wages in terms of a “risk premium” are sometimes noted, most higher risk occupations are hardly what we would consider high paying. Once again, the degree to which high-risk occupation is voluntary is arguable, as is the notion that those that accept such higher risk can be adequately compensated in terms of money. 17. The tolerability concept was extensively developed in the U.K. as a background for nuclear safety (HSE, 1992). The concept was later more fully broadened to include the regulation of a variety of high-stakes risks (HSE, 2001). Tolerability has since entered into a variety of worldwide regulatory structures, especially those in the European Union (EU), Trbojevic (2005). 18. The relatively uncontroversial nature of ALARP constraints based on identifiable levels of intolerable (on the high end) and de minimis (on the low end) risk suggests that self-interested resistance to de minimis levels is not so much a matter of being able to suitably identify those levels, as it is of being required to meet them. 19. This exhibit follows HSE (2001).
Chapter 4: Precaution in context
1. We would suggest that the context of preserving our natural ecological environment, including the animal world, falls near the high end of this continuum. Our emphasis on humans and their institutions is based on the fact that these may actually be the most frail inhabitants of this world. In terms of extinguishment of the human race, the natural world has existed far before humankind and will undoubtedly continue far after. This notion is represented in the Gaia hypothesis, described in Lovelock (1979).
2. In general, to maintain constant risk in terms of absolute number of accidents, we need to scale probability against number of individuals by a factor of –1 on the log–log scale. This means that in order to maintain an absolute risk level as the population increases, we need to decrease the probability by a factor of ten for each factor of ten increase in population (as demonstrated in the example above). See Menkes and Frey (1987).
3. On the further challenges of setting acceptable societal risk levels, see Stallen et al. (1996). The authors discuss differential criteria in terms of the tradeoffs implied by the slope of our risk acceptance criteria curve and the implications of an aversion to large aggregations of loss among the population.
4. Multi-criteria decision analysis is applied to the valuation of potentials for ecological disaster in O'Connor (1998). See also Munda (1995).
5. See Hardin (1968).
6. On externalities and economic theory, see Pearce and Turner (1990, Chapter 4). On a statistical basis, externalities can be treated by widening our perspective on what properly constitutes a business "cost." High-stakes externalities have a different character, which suggests that partial solutions based on economic optimization (however wide we choose to extend its reach) won't work. Extending the decision to non-monetary criteria is discussed in Section 4.3.
7. See, for example, the general discussion of risk acceptance across contexts in Pate (1983).
Chapter 5: A reassessment of risk assessment
1. For criticisms of traditional risk assessment methods applied to the high-stakes domain, see Jablonowski (2002), Shrader-Frechette (1991), and O'Brien (2000).
2. PRA tools and techniques, including event trees, are detailed in NASA (2002). (A minimal numerical sketch of the event-tree calculation follows these notes.)
3. Fuzzy event trees are described in Kenarangui (1991). A more detailed analysis, including an application to nuclear power safety, is given in Chun and Ahn (1992).
4. For a fuller description of anticipatory failure determination, see Kaplan et al. (2005).
5. See Raiffa (1970).
6. See Smithson (1989).
7. In this case, we should also properly consider the failure of DDT to prevent disease spread through the development of pesticide-resistant insect strains, as suggested in Carson (1962).
8. Note that risk maps resemble the risk compendia presented in Chapter 3, Section 3.7. Compendia are usually based solely on the probability dimension, with the consequence dimension held uniformly "catastrophic."
9. The earliest systematic integration of formal risk assessment results and threshold-based decision criteria for safety in the nuclear domain was given by Farmer (1967). Farmer proposed crisp (i.e., precise) acceptance criteria based on a de minimis risk level, subsequently known as "Farmer's lines."
10. See Webb (1976) for a discussion of the wide uncertainties present in many studies of U.S. reactor safety, both industry and government sponsored. This failure to account for uncertainty was immediately seized upon, and rightly so, by an aggressive anti-nuclear lobby (e.g., Nader and Abbotts, 1977), undermining many of the scientific pronouncements that nuclear power was unequivocally "safe."
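To make the event-tree arithmetic of notes 2 and 9 concrete, the following is a minimal sketch (our own toy structure and numbers – not the NASA procedure, and not Farmer's actual criterion): an initiating-event frequency is multiplied along each branch path to give accident-sequence frequencies, and the catastrophic sequence is then compared against an illustrative de minimis line.

```python
# Toy event-tree calculation. All names and numbers are illustrative
# assumptions, chosen only to show the mechanics of the technique.

INITIATOR_FREQ = 1e-3   # initiating events per year (assumed)
P_FAIL_COOLING = 1e-2   # probability cooling fails on demand (assumed)
P_FAIL_CONTAIN = 1e-3   # probability containment fails, given cooling failure (assumed)
DE_MINIMIS = 1e-6       # illustrative "negligible" frequency threshold, per year

# A sequence's frequency is the initiator frequency times the probabilities
# of the branches taken along its path through the tree.
sequences = {
    "cooling works (no release)":       INITIATOR_FREQ * (1 - P_FAIL_COOLING),
    "cooling fails, containment holds": INITIATOR_FREQ * P_FAIL_COOLING * (1 - P_FAIL_CONTAIN),
    "cooling fails, containment fails": INITIATOR_FREQ * P_FAIL_COOLING * P_FAIL_CONTAIN,
}

for name, freq in sequences.items():
    print(f"{name}: {freq:.2e} per year")

catastrophic = sequences["cooling fails, containment fails"]  # 1e-8/yr with these numbers
print("catastrophic sequence lies",
      "below" if catastrophic < DE_MINIMIS else "at or above",
      "the de minimis line")
```

Note how precise the output looks; the point of notes 3 and 10 is that the branch probabilities themselves are rarely known with anything like this precision, which is what fuzzy event trees attempt to acknowledge.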
Chapter 6: Can we avoid risk dilemmas?
1. When probabilities are unknown (or irrelevant), we can choose based only on the best and worst consequences of each action. The result is shown mathematically in Arrow and Hurwicz (1972), following the arguments presented in a widely circulated early paper by Hurwicz (1951). The ideas are discussed, with examples, in Luce and Raiffa (1957) and Elster (1989). In response to the extreme nature of solutions under ignorance, Hurwicz, an early pioneer of modern decision theory, developed a "balancing" between the two extremes of what we have called fatalism and precaution. Hurwicz's "α" is used to weight the decision on the continuum between what Hurwicz called optimism (the minimin of loss) and pessimism (the minimax); a formal statement is given following these notes. Unfortunately, the choice of α remains completely subjective. Ultimately, any such partial solutions do not solve the catastrophe problem.
2. The Stoics held a remarkably refined view of responsibility under conditions of fate (in which they strongly believed), the gist of which may be applicable to what we have termed the "natural" acceptance of risk. See Bobzien (1998).
3. See Bloch (1986).
4. Government backstops against financial risk are documented in Moss (2002). Big businesses seeing themselves as "risk takers" is surely more a reflection of false bravado meant to enhance their take-charge image – something that many potential shareholders admire (to a degree). In practice, no sensible businessperson gambles with the life of the enterprise. This is not to say that taking calculated statistical risk doesn't result in genuine, verifiable gains in profitability.
5. This quotation is from Seneca's letters (Seneca, 1969, Letter XCI), this one attempting to console a friend disquieted by the fire that completely destroyed the thriving city of Lyon (around the turn of the first century A.D.). The Stoics viewed such natural risks as something we must accept with a reasoned fatalism.
6. See Jablonowski (2006), p. 95 ff.
7. This typology of errors as it applies to precautionary science is further developed in Jablonowski (2006). See also Shrader-Frechette (1991, Chapter 9). While the analogy between errors in statistical hypothesis testing and the accumulation of knowledge is often used, we need to be careful not to stretch it too far. Hypothesis testing in the statistical sense only serves as a rough proxy for a much more complicated process of knowledge acquisition in general.
8. The beginnings of this new radicalism with respect to the risks facing us, particularly in the ecological domain, go back at least as far as Barry Commoner's Science and Survival, published in 1966 (Commoner, 1966). Other early examples of radical rethinking include Taylor (1970), developed more fully in Taylor (1973), Goldsmith et al. (1972), and the further work of Commoner (1976, 1990).
9. Many early radical approaches based on community recognition and action were criticized on grounds of political workability: recognition of the need for change and the pursuit of viable directions could not occur independently of the wider social and political (i.e., institutional) structures (Ashby, 1978). Some of the more radical rethinking was more definite as to the social and economic structures that support change, as in, for example, the work of Bookchin (1982). Bookchin advocated a "social ecology" suggesting that social risks are increasing due to the exercise of power within social hierarchies, which entails concomitant excesses. He recommends instead a socially anarchic system, with emphasis on communal groups. Others, like Jonas (1981), suggest, on the other hand, a solution in a more authoritarian social system (based on a sort of "parent–child" paradigm). Contrasting the abilities of capitalist and socialist systems to support such change, Jonas gives the slight edge to socialism on the basis of coordination, though he strongly criticizes social utopianism.
10. Deep ecology suggests that humans do not pay enough (any?) respect to the primacy of natural systems. With respect to risk goals, deep ecologists do not see the preservation of humans as necessarily the primary goal of risk reduction, but rather the preservation of (physical) nature. Otherwise, their platform is similar to that of other radicals. For a review of the current state of radical theories with respect to risk on a global scale, see Merchant (2005). A new radicalism that takes its cues from the extreme eco-centrism of deep ecology is becoming best known for its disruptive tactics ("eco-sabotage") aimed at promoting the wider agenda of preserving nature from human onslaught. The new radicalism based on deep ecology is described in Scarce (2006) and Best and Nocella (2006). Most incidents of eco-sabotage, to date, have been more symbolic than significantly disruptive to the systems
they oppose. These acts, however, do express the strong conviction that at least something must be done.
11. See American Heritage (1985).
12. See Wildavsky (1988), Chapter 4.
13. On the Stoics' notion of co-fated events, see Bobzien (1998).
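A formal statement of the Hurwicz criterion mentioned in note 1 above (a standard textbook rendering, in our notation rather than the book's). Writing $\ell(a,s)$ for the loss from action $a$ in state $s$, and reading $\alpha \in [0,1]$ as the decision maker's degree of pessimism, choose the action minimizing

\[
H_{\alpha}(a) \;=\; \alpha \, \max_{s} \ell(a,s) \;+\; (1-\alpha) \, \min_{s} \ell(a,s).
\]

Setting $\alpha = 1$ recovers the minimax of loss (pure pessimism, i.e., precaution); $\alpha = 0$ recovers the minimin of loss (pure optimism). Any intermediate $\alpha$ is a subjective compromise between the two, which is exactly the difficulty the note identifies.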
Chapter 7: Summary and conclusion (of sorts)
1. The numbers are based on the "average" person in the U.S., of which there are roughly 300,000,000 as of this writing. The occupational number is based on employed workers. Data were obtained from statistics compiled by the National Safety Council, the U.S. National Center for Health Statistics, and the Bureau of Labor Statistics.
2. The argument for complacency in light of these figures is that, despite them, we are living much longer than we did 100 years ago. The problem with this argument is that it is wrong. Arguments like this are usually based on average life expectancy, a figure which has increased dramatically in the last 100 years. The large increase in this average is due to the drastic reduction in infant mortality, not to increases in longevity. Consider a group of four individuals, each with an observed age at death of 70; the average life expectancy of this group is, obviously, 70. Add a single infant mortality (death at age "0"), and the average life expectancy of the resulting group of five drops to 56 (the arithmetic is worked following these notes). A U.S. male aged 60 today can expect to live 4 years longer than his counterpart of 1920 – a significant increase, but not as dramatic as the implication drawn from the fact that the "average" person died at 50 a century ago. In fact, the maximum duration of life has increased little, if at all (Nesse and Williams, 1994).
3. A similar proposal is contained in Peterson (2001).
4. See Reichenbach (1938).
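The arithmetic behind the life-expectancy example in note 2 (our check):

\[
\frac{4 \times 70}{4} = 70, \qquad \frac{4 \times 70 + 1 \times 0}{5} = \frac{280}{5} = 56.
\]

A single death in infancy drags the group average down by 14 years even though no surviving individual lives a day longer – which is why a rising average life expectancy, by itself, says little about longevity.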
References
Ackerman, F. and L. Heinzerling, Pricing the Priceless: Cost-Benefit Analysis of Environmental Protection, Georgetown Environmental Law and Policy Institute, Georgetown University Law Center, 2002.
Adams, B. M., Uncertainty Quantification for Credible Simulation and Risk Analysis: Methods, Software Tools and Research Needs, paper presented at ACE Workshop on Sensitivity Analysis and Uncertainty Quantification for the Nuclear Fuel Cycle, North Carolina State University, 2006.
American Heritage, The American Heritage Dictionary, Houghton Mifflin, 1985.
Antonsson, E. K. and K. N. Otto, "Imprecision in Engineering Design," Journal of Mechanical Design (ASME), 117, 1995. Reprinted in Antonsson, E. K. (ed.), Imprecision in Engineering Design, Engineering Design Research Laboratory, California Institute of Technology, 2001.
Arrow, K. and L. Hurwicz, "An Optimality Criterion for Decision-Making Under Uncertainty," in Carter, C. F. and J. L. Ford, eds., Uncertainty and Expectations in Economics: Essays in Honor of G.L.S. Shackle, Blackwell, 1972.
Ashby, E., Reconciling Man with the Environment, Stanford University Press, 1978.
Bannister, D. and D. Snead, Visioning and Backcasting: Desirable Futures and Key Decisions, presentation, Stella Focus Group 4 Meeting, TU Delft, March 2004.
Bankes, S., "Exploratory Modeling for Policy Analysis," Operations Research, 41 (3), 1993.
Barrow, J. D., Impossibility: The Limits of Science and the Science of Limits, Oxford University Press, 1998.
Beder, S., The Nature of Sustainable Development, Scribe, 1993.
Best, S. and A. J. Nocella, II, Igniting a Revolution: Voices in Defense of the Earth, AK Press, 2006.
Bloch, E., The Principle of Hope, MIT Press, 1986.
Bobzien, S., Determinism and Freedom in Stoic Philosophy, Oxford University Press, 1998.
Bookchin, M., The Ecology of Freedom: The Emergence and Dissolution of Hierarchy, Cheshire Books, 1982.
Bradley, J., Elimination of Risk in Systems: Practical Principles for Eliminating and Reducing Risk in Complex Systems, Tharsis, 2002.
Campbell, S., "Determining Overall Risk," Journal of Risk Research, 8 (7–8), 2005.
Carson, R., Silent Spring, Houghton Mifflin, 1962.
Chun, M.-H. and K. Ahn, "Assessment of the Potential Applicability of Fuzzy Set Theory to Accident Progression Event Trees with Phenomenological Uncertainties," Reliability Engineering and System Safety, 37, 1992.
Commoner, B., Science and Survival, Viking Press, 1966.
Commoner, B., The Closing Circle, Knopf, 1976.
Commoner, B., Making Peace With the Planet, Pantheon Books, 1990.
Cook, P. A., Nonlinear Dynamical Systems, Prentice Hall, 1994.
Dasgupta, P. and K.-G. Mäler (eds), The Economics of Non-Convex Ecosystems, Kluwer Academic Publishers, 2004.
Dreborg, K. H., "Essence of Backcasting," Futures, 28 (9), 1996.
Elster, J., Solomonic Judgements: Studies in the Limitations of Rationality, Cambridge University Press, 1989.
Farmer, F. R., "Reactor Safety and Siting: A Proposed Risk Criterion," Nuclear Safety, 8, 1967.
Fischhoff, B., R. L. Keeney, P. Slovic, S. Lichtenstein, and S. L. Derby, Acceptable Risk: A Critical Guide, Cambridge University Press, 1984.
Gardiner, S. M., "A Core Precautionary Principle," Journal of Political Philosophy, 14 (1), 2006.
Goklany, I. M., The Precautionary Principle: A Critical Appraisal of Environmental Risk Assessment, Cato Institute, 2001.
Goldsmith, E., R. Allen, M. Allaby, J. Davoll, and S. Lawrence, A Blueprint for Survival, Pelican, 1972.
Goodin, R. E., "Nuclear Disarmament as a Moral Certainty," Ethics, 95 (3), 1985.
Graham, J. D. and J. B. Wiener, Risk vs. Risk: Tradeoffs in Protecting Health and the Environment, Harvard University Press, 1997.
Haimes, Y. Y., Risk Modeling, Analysis and Management, Wiley-Interscience, 2004.
Haller, S., Apocalypse Soon?: Wagering on Warnings of Global Catastrophe, McGill-Queen's University Press, 2002.
Hardin, G., "The Tragedy of the Commons," Science, 162, 1968.
Hardin, G., Stalking the Wild Taboo, Kaufmann, 1978.
Health and Safety Executive (HSE), The Tolerability of Risk from Nuclear Power Stations, HSE Books, 1992.
Health and Safety Executive (HSE), Reducing Risk, Protecting People, HSE Books, 2001.
Hurwicz, L., "Optimality Criteria for Decision Making Under Ignorance," Cowles Commission Discussion Paper: Statistics No. 370, December 1951.
Jablonowski, M., "Are Formal Risk Assessments of Any Use?," Risk Management, 49 (8), 2002.
Jablonowski, M., "High-Risk Decisions When Probabilities Are Unknown (or Irrelevant)," Risk Management: An International Journal, 7 (3), 2005.
Jablonowski, M., Precautionary Risk Management: Dealing with Catastrophic Loss Potentials in Business, the Community and Society, Palgrave Macmillan, 2006.
Jablonowski, M., "Insurance as 'Financial Precaution'," The John Liner Review – Quarterly Review of Advanced Risk Management, Fall 2007.
Jonas, H., The Imperative of Responsibility: In Search of an Ethics for the Technological Age, University of Chicago Press, 1981.
Kaplan, S., S. Visnepolschi, B. Zlotin, and A. Zusman, New Tools for Failure Risk and Analysis: Anticipatory Failure Determination (AFD) and the Theory of Scenario Structuring, Ideation International, 2005.
Kenarangui, R., "Event-Tree Analysis by Fuzzy Probability," IEEE Transactions on Reliability, 40 (1), 1991.
Kosko, B., The New Science of Fuzzy Logic, Hyperion, 1994.
Lovelock, J., Gaia: A New Look at Life on Earth, Oxford University Press, 1979.
Lovins, A. B., Soft Energy Paths: Toward a Durable Peace, Harper, 1977.
Lowry, R., The Architecture of Chance, Oxford University Press, 1989.
Luce, R. D. and H. Raiffa, Games and Decisions: Introduction and Critical Survey, John Wiley, 1957.
Lyttle, B., The Flaw in Deterrence, Midwest Pacifist Publishing Center, 1983.
Martin, P. H., "'If You Don't Know How to Fix It, Please Stop Breaking It': The Precautionary Principle and Climate Change," Foundations of Science, 2, 1997.
McGinn, A. P., Why Poison Ourselves? A Precautionary Approach to Synthetic Chemicals, Worldwatch Paper #153, Worldwatch Institute, 2000.
Mendel, J. M. and R. J. John, "Type-2 Fuzzy Sets Made Simple," IEEE Transactions on Fuzzy Systems, 10 (2), 2002.
Menkes, J. and R. S. Frey, "De Minimis Risk as a Regulatory Tool," in C. Whipple, ed., De Minimis Risk, Plenum, 1987.
Merchant, C., Radical Ecology: The Search for a Viable World, Routledge, 2005.
Ministry of the Environment (Norway), The Bergen Declaration, Fifth International Conference on the Protection of the North Sea, Bergen, Norway, 2002.
Moss, D. A., When All Else Fails: Government as the Ultimate Risk Manager, Harvard University Press, 2002.
Munda, G., Multi-criteria Evaluation in a Fuzzy Environment – Theory and Applications in Ecological Economics, Physica-Verlag, 1995.
Nader, R. and J. Abbotts, The Menace of Atomic Energy, Norton, 1977.
National Aeronautics and Space Administration (NASA), Probabilistic Risk Assessment: Procedures Guide for NASA Managers and Practitioners, Office of Safety and Mission Assurance, 2002.
Nesse, R. M. and G. C. Williams, Why We Get Sick: The New Science of Darwinian Medicine, Random House, 1994.
Nikolaidis, E., S. Chen, H. Cudney, R. Haftka, and R. Rosca, "Comparison of Probability and Possibility for Design Against Catastrophic Failure Under Uncertainty," Journal of Mechanical Design (ASME), 126, 2004.
O'Brien, M., Making Better Environmental Decisions: An Alternative to Risk Assessment, MIT Press, 2000.
O'Connor, M. (ed.), The VALSE Project (Valuation for Sustainable Environments), Final Report, European Commission, Environment and Climate Research Program, 1998.
O'Riordan, T. and J. Cameron (eds.), Interpreting the Precautionary Principle, Earthscan/James & James, 1994.
Pate, M. E., "Acceptable Decision Processes and Acceptable Risks in Public Sector Regulations," IEEE Transactions on Systems, Man and Cybernetics, 13 (3), 1983.
Pearce, D. W. and R. K. Turner, Economics of Natural Resources and the Environment, Johns Hopkins University Press, 1990.
Peterson, M., "New Technologies and the Ethics of Extreme Risks," Ends and Means, 5 (2), Autumn 2001.
Philipson, L. C., "Risk Acceptance Criteria and Their Development," Journal of Medical Systems, 7 (5), 1983.
Raffensperger, C. J., J. Tickner, and W. Jackson (eds.), Protecting Public Health and the Environment: Implementing the Precautionary Principle, Island Press, 1999.
Raiffa, H., Decision Analysis, Addison-Wesley, 1970.
Reichenbach, H., Experience and Prediction: An Analysis of the Foundations and the Structures of Knowledge, University of Chicago Press, 1938.
Rescher, N., Risk: A Philosophical Introduction to the Theory of Risk Evaluation and Management, University Press of America, 1984.
Rimington, J., J. McQuaid, and V. M. Trbojevic, Application of Risk Based Strategies to Workers' Health and Safety Protection – UK Experience, Ministry of Social Affairs and Employment (Netherlands), 2003.
Robinson, J., "Energy Backcasting: A Proposed Method of Policy Analysis," Energy Policy, 10 (4), 1982.
Robinson, J., "Futures Under Glass: A Recipe for People Who Hate to Predict," Futures, 22 (9), 1990.
Scarce, R., Eco-Warriors: Understanding the Radical Environmental Movement, Left Coast Press, 2006.
Schell, J., The Fate of the Earth, Knopf, 1982.
Schneider, S. H., "Abrupt Non-Linear Climate Change, Irreversibility and Surprise," Global Environmental Change, 14, 2004.
Seneca (R. Campbell, trans.), Seneca: Letters From a Stoic, Penguin, 1969.
Shrader-Frechette, K. S., Risk and Rationality: Philosophical Foundations for Populist Reforms, University of California Press, 1991.
Sjoberg, L., "Consequences of Perceived Risk: Demand for Mitigation," Journal of Risk Research, 2 (2), 1999.
Sjoberg, L., "Perceived Risk and Tampering with Nature," Journal of Risk Research, 3 (4), 2000.
Slovic, P., "The Perception of Risk," Science, 236, 1987.
Smithson, M., Ignorance and Uncertainty: Emerging Paradigms, Springer, 1989.
Stallen, P. J. M., R. Geerts, and H. K. Vrijling, "Three Conceptions of Quantified Societal Risk," Risk Analysis, 16 (5), 1996.
Starr, C., "Social Benefits versus Technological Risks: What is Our Society Willing to Pay for Safety?," Science, 165, 1969.
Sunstein, C. R., "Selective Fatalism," Journal of Legal Studies, 62, 1998.
Taylor, G. R., The Doomsday Book, Thames and Hudson, 1970.
Taylor, G. R., Rethink: A Paraprimitive Solution, Dutton, 1973.
Taylor, R., Metaphysics, Prentice Hall, 1991.
Tickner, J. (ed.), Precaution, Environmental Science and Preventive Public Policy, Island Press, 2003.
Trbojevic, V. M., Risk Criteria in EU, paper presented at ESREL '05, Poland, June 2005.
Walker, W. E., P. Harremoes, J. Rotmans, J. P. van der Sluijs, M. B. A. van Asselt, P. Janssen, and M. K. von Krauss, "Defining Uncertainty: A Conceptual Basis
for Uncertainty Management in Model-Based Decision Support," Integrated Assessment, 4 (1), 2003.
Webb, R. E., The Accident Hazards of Nuclear Power Plants, University of Massachusetts Press, 1976.
Whipple, C., "Application of the De Minimis Concept in Risk Management," in C. Whipple, ed., De Minimis Risk, Plenum, 1987.
Whiteside, K. H., Precautionary Politics: Principle and Practice in Confronting Environmental Risk, MIT Press, 2006.
Wildavsky, A., Searching for Safety, Transaction Publishers, 1988.
Yager, R. R., H. T. Nguyen, and R. M. Tong (eds.), Fuzzy Sets and Applications: Selected Papers by L. A. Zadeh, John Wiley, 1987.
Zimmermann, H.-J., Fuzzy Set Theory and Its Applications, Kluwer Academic Publishers, 1991.
Index
acceptance, see fatalism
accumulation of risk, see growth in (aggregate) risk potentials
acquiescence, 17, 33, 45, 51, 52, 54, 85, 93, 100, 101, 106, 108, 115
  defined, 17
  as “giving up”, 100
  and peace of mind, 52
  and selective fatalism, 51
  and tolerability of risk, 54
  versus reasoned acceptance, 17, 33, 34–5, 45, 51, 52, 93, 106
ALARP (as low as reasonably practicable), 52–3
“all-or-nothing”, 7, 36, 40, 42, 45, 93, 101
  and growth of aggregate risk, 36–8
  and radical rethinking, 93
alternatives assessment, 17, 19, 24, 27, 32, 34, 41, 70, 86, 91, 97, 99, 102, 108
  across contexts, 65
  defined, 17–8
  as “fail-safe”, 18–19
  inhibited by special interests, 86–7
  and probabilistic risk assessment, 69–70
  see also backcasting
anticipation, see risk anticipation
anticipatory failure determination (AFD), 75
average, see expected value
backcasting, 18–26
  defined, 18
  under uncertainty, 22–4
  versus “backtracking”, 24–6
  versus forecasting, 18
“backtracking”, 24
(the) balance of life, 26–28, 46, 107
  as cost/benefit, 26
  and natural risk levels, 46
beliefs, 49, 115
  influence on risk acceptance, 49
Bergen Ministerial Declaration, 13
bias, 17, 85, 100, 104, 108
  driven by self-interest, 85–6
  toward the status quo, 17
“bill of rights” (with respect to risk), 89
  see also freedom (from risk)
burden of proof, 41–2, 44, 89, 110, 114
  and precaution, 44
  reversed, 110, 114
business, 3, 4, 7, 17, 56, 57, 65, 85, 86, 96, 113
  as an impetus for change, 113–114
  precaution in, 17, 54, 65, 86, 104
  and radical change, 94
  survival of, 57, 86, 94
(the) catastrophe problem, 5, 7, 30, 36, 39, 40, 45, 49, 54, 56, 69, 73, 84, 90, 99, 105, 109, 114, 116
  defined, 5
  and prioritization, 39, 84
  and risk avoidance, 36
  see also “all-or-nothing”
catastrophic risk, 1, 2, 3, 5, 7, 11, 18, 22, 26, 27, 29, 36, 39, 40, 42, 47, 56, 58, 61, 83, 89, 105, 115
  avoiding, 2, 6, 7, 15–34, 36
  and balance of life, 26–7
  in context, 58, 61–4
  growth of (aggregate), 42–3
  and infinite disutility, 89–91
  making decisions under, 5–6
  possibilistic model of, 10–14
  statistics inapplicable to, 4–5
co-fated events, 101
concatenation, 116
consequentialism (consequentialist), 83
context (and risk), 56–66
cost/benefit, 3, 5, 7, 15, 20, 26, 29–32, 35, 37, 39, 46, 52, 57, 64, 65, 66, 70, 78, 81, 82, 85, 89, 100, 102, 104, 105, 107, 111, 114
  and catastrophe problem, 5–7
  defined, 3
  and forced choice, 107
  and infinite disutility, 89–91
  not applicable to catastrophe, 3–5
  as post-fact risk management, 29–32
  and prioritization, 39–40
  and risk acceptance, 3–4
  versus precaution, 5–7
  see also expected value
counter risks (risk-risk tradeoffs), 6, 64, 78
  elimination through proper decision framing, 77–8
creeping risk, 111
danger, see catastrophic risk
“danger zone”, 79
  see also risk threshold, possibility (of risk)
de minimis, 36, 45–6, 52, 54
  see also risk threshold, possibility (of risk)
decision criteria (high-stakes), 1–2, 5, 37–8, 42, 62–3, 67, 74, 78, 87, 90, 102, 106
  when probability is unknown (or irrelevant), 5
  see also precaution, fatalism, cost/benefit
decision framing, 77–8
decision matrix (table), 2, 14–15, 76
  see also decision tree
decision theory, 75–8
decision tree, 76
  relation to decision matrix (table), 76
(the) dilemma of precaution, see risk dilemmas
discontinuities, 24, 85
disutility, 60–1, 89, 107, 108, 114
  infinite, 89–91
“doomed if we do, doomed if we don’t”, see risk dilemmas
dynamics of risk, 17, 25, 43, 70, 97, 105
  and risk dilemmas, 17
economics, 3, 30, 85–9, 101, 102, 104, 114
  as basis of cost/benefit, 3, 102
  and self-interest, 85–7
empirical knowledge and risk, 46, 91
error, 91–3, 95
  see also type I / type II error in science
event tree, 70–5
  qualitative versus quantitative, 74
evolution, 45–6, 67, 94
  and natural risk levels, 46, 115
expected value, 3–5, 7, 20, 29, 32, 39, 69, 71, 73, 89, 102, 104, 108
  adjusted for “disutility”, 89–91
  defined, 29
  inapplicable to high-stakes risk, 4–5
  manipulation of decision via, 20, 85–90, 102
  see also cost/benefit
exploratory modeling, 22–4
externalities (economic), 67, 114
extinction, 2, 65, 84, 96
F/N diagrams, 80
“fail-safe” nature of alternatives assessment, 18–9, 23–4, 84, 97, 99
fatalism, 7–9, 14, 17, 26, 32, 33, 40, 45, 50, 52, 54, 82, 88, 101
  conditions for indifference between precaution and, 7–10
  defined, 14
  does not imply doom, 95
  and natural risk levels, 45
  as reasoned acceptance versus acquiescence, 14–17
  selective, 47–8
  see also minimum
finality, 5, 30, 44, 69
  and the catastrophe problem, 5, 69
forced choice, 18, 48, 51, 52, 87, 88, 102, 107
freedom (from risk), 15, 52, 82, 88–9, 117
fuzzy risk thresholds, 78–80
  see also “danger zone”
fuzzy sets, 10–4, 103
  see also ultra-fuzzy sets, knowledge imperfection
Gaia hypothesis, 65
giving up, dangers of, 100–1
growth in (aggregate) risk potentials, 36–8, 109–110
  probabilistic model, 37
  ultra-fuzzy (possibilistic) model, 42–5
  where are we now?, 109–12
high-stakes risk, see catastrophic risk
hope, 20, 84–5, 112
  and progress, 84
identify-assess-treat (I-A-T) model, 28–9, 33, 73, 74
  purely statistical basis of, 28
imposed risk, 51, 88–89, 117
  see also forced choice, freedom (from risk)
impossibility, 22, 38, 48
  fuzzy definition of, 10–12
individual risk, 3, 15, 36, 43, 47, 52, 56–7, 80, 86, 96, 107, 110, 113
  and precaution, 56–7, 86, 107, 113
  versus social risk, 58–60
insurance, 17, 57, 65, 86, 87
  as financial precaution, 17, 65, 86
irreversibility, see (the) catastrophe problem
knowledge, 10, 22, 38, 44, 69, 70, 103, 112, 115
  growth through concatenation, 116
  imperfect, 10–11, 22, 42, 57, 68, 91–3
  see also error
leadership, 113–7
membership (in a fuzzy set), 10–13, 22, 42–5
  ultra-fuzzy (“type-2”), 42–5
minimax, 5, 7, 11, 16, 36, 45, 74, 79, 104
  defined, 5
  as the basis for precaution, 5
  “uncertainty-modified”, 10–14
  see also risk avoidance
minimum, 7, 32
  defined, 7
  see also fatalism
multi-criteria decision analysis (MCDA), 61
natural risk level, 45–7, 49, 87–8, 94–5, 112, 116–17
  see also subsistence level (of risk), risk threshold
optimism, 85–6, 95, 97–8, 99, 100, 108, 115
  technological, 85, 95, 98, 108, 115
opportunity cost (as forgone benefits), 6, 15, 16, 26, 31, 42, 52, 64, 105, 108
optimization / optimum, 3, 31, 102, 104, 114
  cost/benefit as, 3
  as the basis of traditional economics, 102, 104, 114
(the) paradox of progress, 83–5, 88, 95, 109
philosophy of risk, 20, 33, 82, 113
possibility (as knowledge imperfection), 38
possibility (of risk), 6, 10, 16, 21, 22, 26, 27, 28, 35, 43, 44, 45, 48, 50, 62, 69, 78, 81, 90, 97, 100, 105, 109–110
  definition (fuzzy), 10, 22
  and the natural level of risk, 45–7
  and precaution, 6
  see also possibility threshold
possibility threshold, 6, 11, 34, 36, 38, 47, 48, 50
  graphical representation of, 78–80
  in terms of probability of loss, 10–14, 21, 45–7, 78–80
  versus “zero risk”, 6, 28
  see also natural risk level, “danger zone”
preaction, 16–18, 33, 34, 35, 39, 41, 45, 67, 74, 81, 83, 98, 102, 104, 105, 106, 109, 114, 117
  alternatives assessment as, 16–18
  and science, 15, 20, 32, 91–3, 98, 99, 105, 116, 117
precaution, 5, 6, 7, 15, 22, 25, 27, 32, 36, 41, 45, 47, 48, 50, 53, 56, 73, 78, 81, 82, 86, 87, 89, 90, 93, 100, 102, 103, 104–5, 106, 107, 109, 111, 113, 115–17
  as “all or nothing”, 36, 38, 40, 42, 45, 93, 101, 105, 111
  avoiding mechanistic, 32–4
  and the burden of proof, 41–2, 44
  conditions for indifference between precaution and fatalism, 7–10
  in context, 56–67
  fuzzy (possibilistic) interpretation of, 10–14
  hallmarks of, 56–8
  and infinite disutility, 89–91
  as low cost/no cost, 15, 17, 26–8, 105, 108
  and the “precautionary principle”, 13
  and radical rethinking, 93–6
  versus fatalism, 7–10
precautionary dilemmas, see risk dilemmas
precautionary principle, 13–14
prioritization, 7–8, 39–40, 62
  not applicable to catastrophic risk, 39–40
  see also “all-or-nothing”
probabilistic risk assessment (PRA), 70–1
probability, 3, 4, 5, 6, 11, 21, 29, 34, 36, 37, 42, 46, 52, 59, 68, 70–5, 76, 90, 103, 105, 111
  decisions when unknown (or irrelevant), 5–7, 68
  versus possibility, 10–11, 38
  see also statistics
progress, 15, 16–18, 20, 28, 33–4, 35, 39, 41, 45, 83, 84, 89, 91, 92, 95, 96, 101, 102, 105, 106, 107, 109, 115, 117
  risk free, 83–4, 91, 96, 106, 107, 109, 117
  a true measure of, 46, 84, 87
  see also (the) paradox of progress
proportionality (of precautionary costs), 65
psychology (of risk), 48, 50
  see also beliefs
radical rethinking, 93–103
  common characteristics of, 95
randomness, 5, 10, 21, 76, 91, 103, 105
  versus knowledge imperfection, 10, 76, 103
responsibility, 82–3, 109
  consistent with reasoned fatalism, 82
risk acceptance criteria, see risk threshold
risk acceptance, see selective fatalism
risk accumulation, see growth in (aggregate) risk potentials
risk anticipation, 34–5
  see also alternatives assessment
risk assessment, 17, 21, 29, 33, 74–80, 85, 92, 98, 100
  mechanistic, 32–4
  recognizing knowledge imperfection in, 73–5
  see also identify-assess-treat (I-A-T) model
risk avoidance, see precaution
risk dilemmas, 6–7, 9, 15, 17, 19, 22, 24, 29, 32, 35, 36, 39, 41, 45, 50, 52, 53, 57, 65, 73, 81, 102, 105, 106, 108, 111, 112, 113, 115, 117
  can we avoid them?, 81–101
  defined, 6–7
  emerging, 97
  and progress, 17
  resolving, using alternatives assessment, 15–18
  and “selective fatalism”, 50–2
risk growth, see growth in (aggregate) risk potentials
risk management, 3, 6, 18, 28, 29, 34, 36, 44, 46, 56, 65, 68, 73, 75, 82, 104, 105, 106, 108, 112, 115, 117
  compared to risk assessment, 68
  preactive versus “post-fact”, 28–9
  as reducing worry, 33
  and responsibility, 82
risk map, 79
risk-risk tradeoffs, see counter risks
risk threshold, see possibility threshold
sample space ignorance, see decision framing
science, 15, 20, 32, 84, 85, 91, 94, 95, 96, 101, 105, 110, 115, 116, 117
  critical to alternatives assessment, 20, 32, 91
  and error, 91–3
  faith in, 96–100
  goal of precautionary science, 15
  need for a wider approach to, 91–3
  and radical rethinking, 95–6
“seeing is believing”, 54, 110
selective fatalism, 47–50
self-interest, 1, 66, 85, 102, 113
  economic basis of, 86–7
  and risk dilemmas, 85–90
societal risk, 17, 43, 58, 62, 63, 64, 88, 110
  versus individual risk, 58, 62–3
statistical risk, see statistics, expected value
statistics, 3, 6, 7, 21, 28, 32, 35, 37, 38, 41, 52, 54, 68, 69, 70, 73, 75–6, 77, 82, 90, 92, 98, 99, 102, 103–4, 109, 111, 114
  not applicable to catastrophic risk, 3–6
  and probabilistic risk assessment (PRA), 70–2
  and verification, 4, 69, 70
  see also expected value, identify-assess-treat (I-A-T) model
Stoic philosophy, 82, 88, 101
  on fate and responsibility, 82
strict liability, 114
subsistence level (of risk), 45–7, 81, 84, 87, 88–9, 106
  see also natural risk level, progress
survival, 28, 30, 46, 57, 62, 65, 66, 67, 81, 87, 88, 94, 102, 108, 109, 115
  evolution as, 45–6
sustainable development, 13, 28
technology, 94, 95–8, 105, 108, 117
  see also science
tolerability of risk criteria, 52–5
trust, 49, 66, 85, 88, 99, 117
  influences risk acceptance, 49
type-2 fuzzy set, see ultra-fuzzy sets
type I / type II error in science, 91–3, 110, 114
ultra-fuzzy sets, 43–5, 54, 106, 107, 109, 111
  see also growth in (aggregate) risk potentials
utility, see disutility
utopia, 83–5, 109, 112
  see also (the) paradox of progress
valuation, 30, 31, 59, 60, 66, 90, 107
  among contexts, 60–1
  infinite, 89–91
verification, 23, 24, 46, 91, 93, 103
voluntary risk acceptance, 48, 50–2
  “desensitization”, 51
zero risk, 6, 28, 34, 35, 45, 54, 55, 59, 81, 87, 105