LIST OF CONTRIBUTORS Richard M. Adams
Department of Agricultural and Resource Economics, Oregon State University, Corvallis, OR 97331, USA
Keyvan Amir-Atefi
Department of Economics, University of California, Santa Barbara, CA 93106, USA
Duane Chapman
Department of Agricultural, Resource, and Managerial Economics, Cornell University, Ithaca, NY 14853-7801, USA
C. C. Chen
Department of Agricultural Economics, Texas A&M University, College Station, TX 77843-4228, USA
Stephen J. DeCanio
Department of Economics, University of California, Santa Barbara, CA 93106, USA
Catherine Dibble
Department of Geography, University of Maryland, College Park, MD 20742-7215, USA
Reyer Gerlagh
Institute for Environmental Studies, Vrije Universiteit, De Boelelaan 1115, 1081 HV Amsterdam, The Netherlands
Eban Goodstein
Department of Economics, Lewis and Clark College, Portland, OR 97219, USA
Brent M. Haddad
Department of Environmental Studies, University of California, Santa Cruz, CA 95064, USA
Darwin C. Hall
Department of Economics, Environmental Science and Policy Programs, California State University Long Beach, CA 90840-4607, USA
Richard B. Howarth
Environmental Studies Program, Dartmouth College, 6182 Fairchild Hall, Hanover, NH 03755, USA
Larry Karp
Department of Agricultural and Resource Economics, University of California, Berkeley, CA 94720, USA
Kristin Kuntz-Duriseti
Department of Political Science, University of Michigan, Ann Arbor, MI 48109-1220, USA
Xuemei Liu
Department of Agricultural and Resource Economics, University of California, Berkeley, CA 94720, USA
Neha Khanna
Department of Economics, Binghamton University, Vestal Pky E, Binghamton, NY 13902-6000, USA
Jonathan G. Koomey
Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Building 90-4000, Berkeley, CA 94720, USA
John A. "Skip" Laitner
Senior Economist for Technology Policy, U.S. EPA, Office of Atmospheric Programs, 1200 Pennsylvania Avenue NW, MS-6201J, Washington, DC 20460, USA
Robert J. Markel
Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Building 90-4000, Berkeley, CA 94720, USA
Chris Marnay
Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Building 90-4000, Berkeley, CA 94720, USA
Bruce A. McCarl
Department of Agricultural Economics, Texas A&M University, College Station, TX 77843-4228, USA
Kimberly Merritt
Department of Environmental Studies, University of California, Santa Cruz, CA 95064, USA
Glenn Mitchell
Economic Analysis LLC, 2049 Century Park East, Suite 2310, Los Angeles, CA 90067, USA
R. Cooper Richey
Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Building 90-4000, Berkeley, CA 94720, USA
David E. Schimmelpfennig
Economic Research Service, United States Department of Agriculture, 1800 M Street NW, Washington, DC 20036-5831, USA
Stephen H. Schneider
Department of Biological Sciences and Institute of International Studies, Stanford University, Stanford, CA 94305-6015, USA
William E. Watkins
UCSB Economic Forecast Project, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
Bob van der Zwaan
Institute for Environmental Studies, Vrije Universiteit, De Boelelaan 1115, 1081 HV Amsterdam, The Netherlands
BEYOND A DOUBLING: ISSUES IN THE LONG-TERM ECONOMICS OF CLIMATE CHANGE

Richard B. Howarth and Darwin C. Hall

The Long-Term Economics of Climate Change, pages 1-9. © 2001 by Elsevier Science B.V. ISBN: 0-7623-0305-0

INTRODUCTION

The basic science of climate change has been understood since the seminal work of Svante Arrhenius (1896) in the late 19th century. In short, confirmed physical principles and supporting empirical measurements imply that greenhouse gases (GHGs) - notably carbon dioxide, chlorofluorocarbons, methane, nitrous oxide, and water vapor - allow short-wavelength sunlight to penetrate the Earth's atmosphere while impeding the transfer of long-wavelength radiation from the planet's surface to outer space. As GHG concentrations increase due to anthropogenic emissions, the mean temperature of the planet should therefore increase. According to the IPCC (1990, 1996b), a doubling of GHG concentrations will lead to an equilibrium temperature increase of 1.5-4.5°C. This range is quantitatively in line with Arrhenius' early calculations.

From the perspective of economists, however, climate change is sometimes viewed as a new and untested hypothesis. Despite decades of scientific research, climate change received little attention amongst policy analysts and decision-makers until the late 1980s, when a hot Washington summer combined with increasing concerns about the global environment to put this issue on the agenda. The response from economists was rapid and influential. Since GHG emissions are dominated by the combustion of fossil fuels, it was natural to extend the energy demand models developed in response to the 1970s
energy crises to the consideration of this new policy issue. The early literature suggested that reducing GHG emissions might entail high economic costs. A review by Weyant (1993), for example, concluded that capping carbon dioxide emissions at 1990 levels would lead to long-term costs equivalent to 4% of gross world output.

While some economists focused on the costs of GHG emissions abatement, others studied the potential costs that climate change would impose on future society. From the beginning, this literature emphasized the quantifiable impacts associated with a doubling of atmospheric GHG concentrations. Although scientists speculate that climate change might substantially transform the structure and functioning of ecosystems, formal impact assessments suggest that, given the human potential to adapt to changing conditions, a doubling of GHG concentrations might lead to rather small economic costs. Nordhaus (1994), for example, estimated that the costs associated with changes in sea level, world agriculture, water supply systems, and the energy sector might constitute no more than 1.33% of economic activity.

As quantitative estimates of the costs and benefits of GHG emissions abatement became available, economists sought to identify optimal climate change response strategies using the techniques of applied welfare economics. Although some researchers employed the partial equilibrium methods of cost-benefit analysis (Cline, 1991), the use of optimal growth models has come to dominate the literature. In Nordhaus' (1994) Dynamic Integrated model of Climate and the Economy (DICE), for example, the goal is to maximize the discounted sum of present and future utility. With a pure rate of time preference of 3% per annum, the impacts that climate change will impose a century or more into the future receive essentially no weight in short-term decision making.
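The force of this point is easy to verify with a short calculation. The sketch below is illustrative only (it is not drawn from DICE itself); it simply shows the weight that exponential discounting at a 3% annual rate attaches to utility accruing one and two centuries ahead:

```python
def discount_factor(rate, years):
    """Weight given today to one unit of utility accruing 'years' ahead
    under exponential discounting at the stated annual rate."""
    return 1.0 / (1.0 + rate) ** years

# At a 3% pure rate of time preference, damages a century away carry
# roughly a 5% weight; two centuries away, well under 1%:
w100 = discount_factor(0.03, 100)
w200 = discount_factor(0.03, 200)
```

Since climate damages are concentrated a century or more in the future while abatement costs fall on the present, this near-vanishing weight largely drives the "modest steps" conclusion.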
As a result, Nordhaus concludes that relatively modest steps towards emissions control are economically warranted.

The chapters included in the present volume examine these points of the common wisdom from three main perspectives. First, they challenge the notion that analysts may identify "optimal" climate change response strategies by considering the impacts of a doubling of greenhouse gas concentrations on human and environmental systems. Since a GHG doubling is likely to occur by the mid-21st century, with much larger changes possible in the long-term future, impact assessment must consider the consequences of much larger (and potentially more severe) climatic alterations.

Second, the common wisdom appeals to a model in which GHG emissions abatement costs are measured in a first-best world in which technological developments are independent of public policies and market conditions. A wealth of evidence, however, suggests that significant emissions reductions
might be achieved at a negative economic cost through the full adoption of cost-effective, energy-efficient technologies. In addition, there is reason to believe that short-run efforts to reduce emissions might provide experience and incentives that would foster the development of low-cost abatement technologies. These possibilities are not well integrated in the prevailing set of climate-economy models.

Third, the notion that policy makers should seek to maximize the discounted sum of present and future utility has been sharply criticized since Frank Ramsey (1928) argued that utility discounting was an "ethically indefensible" practice that arose "from the weakness of the imagination." Yet although a range of authors have developed alternative approaches to intergenerational social choice, the discounted utility model remains dominant in the economics of climate change.

The details of these issues must naturally be considered through a careful reading of each chapter. It is useful, however, to describe the main points that are developed by the various authors - a task that we take up in the following sections.

IMPACT ASSESSMENT

In Chapter 2 of this volume, Stephen H. Schneider and Kristin Kuntz-Duriseti consider the question of impact assessment from a perspective that integrates insights from the natural and social sciences. According to the authors, there is little debate amongst scientists regarding the question of whether or not climate change will occur. There is substantial uncertainty, however, regarding the impacts of climate change on human and natural systems. While scientists worry about poorly understood "catastrophe scenarios" in which climate change would impose major costs on future society, impact assessment studies typically abstract away from this uncertainty, emphasizing the best quantified impacts of a doubling of GHG concentrations.
Schneider and Kuntz-Duriseti argue that questions of uncertainty are poorly handled in the current generation of climate-economy models. Their chapter also examines recent research on induced technical change in the context of emissions abatement.

In Chapter 3, Brent M. Haddad and Kimberly Merritt examine the impacts of climate change on water resource management in the state of California. While aggregate impact assessments have emphasized macro-level variables such as total precipitation and surface runoff, Haddad and Merritt argue that analysts must consider the detailed aspects of hydrological cycles and management regimes to accurately gauge the consequences of a generally warmer world. In California, for example, increases in total precipitation would
likely be accompanied by reductions in the winter snow pack that is essential to dry season water supply. In conjunction with changes in climatic variability that might exacerbate the flood/drought cycle that affects the state, these factors suggest that the impacts of climate change on the California water system might be quite substantial. While adaptation of existing management patterns certainly will occur, it may not be "least cost," given disagreements over water management priorities, decision making lag times, and challenges associated with reallocating water from historical uses.

In Chapter 4, Richard M. Adams, C. C. Chen, Bruce A. McCarl and David E. Schimmelpfennig consider the potential impacts that climate change would impose on the U.S. agricultural sector. As the authors note, crop simulation models, which gauge agricultural production as a function of climate conditions and a range of other variables, play a key role in the impact assessment literature. In the typical case, analysts use the results of general circulation models (GCMs) to estimate changes in mean temperature and precipitation at the regional scale. Adams et al. argue that this approach fails to consider the impacts of weather variability and extreme weather events on agricultural production. They describe how crop simulation models can be extended to consider these effects, drawing on a case study of the El Niño-Southern Oscillation phenomenon as it affects crop production.

In Chapter 5, Darwin C. Hall analyzes the comparative dynamics of the impacts of climate change on U.S. agriculture, presenting the sensitivity of the results to key variables: (1) how the rate of GHG emissions varies with economic activity; (2) the degree of mean global temperature increases from GHG concentrations; (3) the amount of precipitation corresponding to global warming; and (4) the extent to which technological change will reduce the discovery and extraction costs of fossil fuels.
His analysis extends beyond the doubling of GHG concentrations analyzed by Nordhaus (1994), allowing economically available fossil fuels to be exhausted, causing between a 9- and 16-fold increase in GHG concentrations.

Hall emphasizes that Nordhaus (1994) admits two reasons why his DICE model is not applicable to more than a doubling: (i) beyond a doubling, carbon uptake by plants is limited, so that the atmospheric lifetime of carbon increases from the 120 years in Nordhaus' model to between 380 and 700 years; and (ii) for a 4-fold increase, the earth's ocean circulation changes to a new equilibrium, whereas in Nordhaus' model the ocean temperature eventually returns to that of today. Equally troubling, Hall points out that Nordhaus (1994, p. 40) made assumptions about "the two least important parameters" in his ocean circulation model that turn out to be both extremely important and inconsistent with the more recent findings of oceanographers (Levitus et al., 2000). Hall
formalizes the climate model used by Cline (1992) and shows that it is consistent with recent work by physical and biological scientists who are modeling increases in GHG concentrations beyond a doubling (McElwain, Beerling & Woodward, 1999). He modifies Cline's model to account for temperature increases found to have occurred over the last 50 years in the deep ocean. By introducing an ocean thermal lag in the transfer of heat from the atmosphere through the ocean and back to the atmosphere, Hall shows that if we delay policies to slow global warming until we detect damage to the agricultural sector of the economy, we could suffer undesirable consequences.
EMISSIONS ABATEMENT COSTS

The second main topic considered in this volume is the cost of reducing greenhouse gas emissions. As we noted above, there are good reasons to believe that the prevailing literature overstates the costs of emissions control. Stephen J. DeCanio, William E. Watkins, Glenn Mitchell, Keyvan Amir-Atefi, and Catherine Dibble begin the discussion of this issue in Chapter 6, which explores how questions of organizational complexity can hinder the adoption of cost-effective, energy-efficient technologies. According to the IPCC (1996a), energy productivity might be improved by some 10-30% through the full adoption of least-cost technologies. DeCanio et al. employ a network model of complex organizations to explain the institutional factors that impede the achievement of this potential. According to the authors, businesses and nonprofit organizations can evolve through time in ways that generally do not support the achievement of cost minimization, even when the behavior of individuals within those organizations is rational. In a world of organizational imperfections, well-designed policies can spur organizational innovation and hence improved performance. DeCanio et al. highlight the importance of this issue and its links to climate-economy modeling.

Chapter 7 presents an empirical assessment of the technical potential for carbon dioxide emissions abatement written by Jonathan G. Koomey, R. Cooper Richey, Skip Laitner, Robert J. Markel and Chris Marnay. Focusing on the U.S. economy, the authors describe a simulation exercise that expands a well-known model of national energy demand - the National Energy Modeling System - to include policies and measures that promote enhanced energy efficiency. Significantly, Koomey et al. embed the insights from careful micro-level studies of specific technology options in an economy-wide model of consumer and producer behavior.
They conclude that energy efficiency measures could provide net cost savings of $50 billion per year along with half of the GHG emissions reductions stipulated under the Kyoto Protocol.
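Bottom-up assessments of this kind commonly rest on a "cost of conserved energy" comparison: an efficiency measure yields net savings when its annualized extra capital cost per unit of energy saved falls below the price of that energy. The sketch below illustrates the arithmetic; the numbers are hypothetical and are not taken from the chapter:

```python
def cost_of_conserved_energy(extra_capital, lifetime_years, discount_rate,
                             annual_energy_saved):
    """Levelized cost ($/kWh) of saving energy via an efficiency measure.

    Annualizes the extra up-front capital with a capital recovery factor;
    when the result is below the energy price, the measure yields net
    cost savings over its lifetime.
    """
    crf = discount_rate / (1 - (1 + discount_rate) ** -lifetime_years)
    return extra_capital * crf / annual_energy_saved

# Hypothetical example: $40 of extra capital for an efficient appliance
# lasting 10 years and saving 200 kWh/yr, at a 5% discount rate:
cce = cost_of_conserved_energy(40.0, 10, 0.05, 200.0)
# cce is about $0.026/kWh - below typical electricity prices, so the
# measure saves money on net.
```

Summing such measures across the economy, net of program and implementation costs, is roughly how studies of this type arrive at aggregate savings estimates like the one above.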
In Chapter 8, Eban Goodstein examines the role of renewable energy technologies - in particular wind power - in reducing GHG emissions. In conventional models, the cost of environmentally benign energy sources is determined by autonomous technological change, and technologies are adopted at the point that they become cost-competitive with fossil fuels. Goodstein, however, notes that learning-by-doing implies that the costs of wind power are in fact endogenously determined by patterns of investment and technology adoption. Since technological change yields spillover effects that cannot be fully captured by private sector investors, Goodstein argues that it would be socially efficient to provide public sector subsidies to accelerate the adoption of wind technologies.

Chapter 9 presents an analysis by Neha Khanna and Duane Chapman of potential long-term developments in energy supply. While many studies assume that autonomous technological trends will substantially reduce carbon dioxide emissions per unit of energy use, Khanna and Chapman argue that, in the absence of countervailing policies, scarce supplies of crude oil and natural gas might be supplanted by far more plentiful - and carbon-intensive - coal and/or shale oil. The authors revise Nordhaus' (1994) DICE model to include an explicit representation of available energy resources, arguing that Nordhaus substantially underestimates baseline GHG emissions and, therefore, long-term rates of climate change. Along with Goodstein, Khanna and Chapman emphasize the importance of policy measures to promote the availability of low-carbon energy technologies.

In concluding the volume's treatment of GHG emissions abatement, Chapter 10 provides a careful analysis of the Clean Development Mechanism (CDM) written by Larry Karp and Xuemei Liu.
The CDM is a mechanism defined by the Kyoto Protocol in which industrialized nations may obtain emissions abatement credits for sponsoring GHG emissions control projects in low-income countries. The authors argue that the CDM provides a means to exploit the potential for low-cost emissions reductions in the developing world, but is constrained by institutional imperfections and transaction costs. From this perspective, the CDM is an important but limited tool for achieving least-cost emissions abatement. These points are backed up by an interesting and insightful empirical analysis.

INTERGENERATIONAL SOCIAL CHOICE
The final chapters included in this volume are premised on the notion that the standard practice of utility discounting offers a contestable and possibly inadequate approach to evaluating the questions of intergenerational fairness
that are central to climate stabilization policy. In Chapter 11, Reyer Gerlagh and Bob van der Zwaan employ an overlapping generations model to explore an alternative approach to questions of intergenerational social choice. In particular, the authors consider how the distribution of rights to pollute and/or to enjoy the benefits of sustained climatic stability affects the distribution of economic welfare between present and future generations. Although the Coase Theorem suggests that questions of economic efficiency and distributional fairness can sometimes be decoupled in environmental policy analysis, Gerlagh and van der Zwaan show that this suggestion is misleading in the context of climate change. The question of whether future generations must compensate polluters for GHG emissions abatement, or whether polluters must compensate future generations for climatic damages, has important and quite substantial distributional implications.

The concluding chapter by Richard B. Howarth presents an analysis that is closely related to the study by Gerlagh and van der Zwaan. Howarth examines an overlapping generations model of climate-economy interactions that is numerically calibrated based on the assumptions of Nordhaus' (1994) DICE model. Although the common wisdom holds that the stabilization of current climatic conditions might jeopardize both short-term welfare and long-term economic growth, Howarth reaches a rather different conclusion. In this model, climate stabilization reduces short-term consumption but results in enhanced levels of long-term productivity and economic output in comparison with a laissez-faire scenario in which emissions remain unregulated.
Although economists have heavily emphasized the criterion of Pareto efficiency in climate change policy analysis, Howarth shows that the efficiency gains attained by balancing the costs and benefits of GHG emissions abatement are small in comparison with the distributional disparities between climate stabilization and laissez-faire.

Together, Chapters 11 and 12 show how economists' concern for economic efficiency may be reconciled with a moral concern for respecting the perceived rights of future generations. Analysts need not fall back on the discounted utility criterion in evaluating long-term policy decisions.
CONCLUSIONS

A critical issue for economic policy analysis is the selection of the appropriate baseline against which the consequences of policy options may be compared. In the economics of climate change, analysts commonly assume that greenhouse gas emissions abatement would entail quite substantial short-term costs that would provide relatively modest environmental benefits some
decades into the future. In conjunction with the use of conventional discounting procedures, these assumptions suggest that it might be decidedly inefficient to impose stringent emissions control policies (Nordhaus, 1994).

This volume advances the economic analysis of climate change by focusing on key assumptions that are implicit in most existing studies and making explicit - rather than implicit - connections between policy choices and economic outcomes. Some assumptions are strictly geophysical, while others more subtly connect the economic and geophysical systems. These assumptions concern the time frame for analysis; the amount of fossil fuels that are economically recoverable; potential irreversibilities in the atmospheric accumulation of GHGs; the spatial distribution of precipitation and regional droughts; fluctuations of ocean circulation among multiple equilibria; the potential for catastrophic damages from warming; potential carbon sequestration options; the state-dependence of technology and technological change for conventional and alternative sources of energy and energy efficiency; and complexities in economic organization that provide opportunities for effective policy intervention. The volume also directly addresses the issue of intergenerational fairness, rather than implicitly making an ethical judgment and burying it within the technical choice of the discount rate.

Taken as a whole, the analyses presented in this volume suggest that plausible sets of economic assumptions can support aggressive steps towards GHG emissions abatement in the short-term future. The results of climate-economy models are strongly sensitive to changes in baseline assumptions that are themselves uncertain and hence open to debate. The pursuit of these issues stands to enrich both economic analysis and real-world debates over climate stabilization.
ACKNOWLEDGMENTS

A work of this kind depends critically on the distinct contributions of numerous individuals. The editors thank the authors and the reviewers whose anonymous comments have improved the quality of this volume.
REFERENCES

Arrhenius, S. (1896). On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground. Philosophical Magazine, 41, 237-276.
Cline, W. R. (1992). The Economics of Global Warming. Washington: Institute for International Economics.
Intergovernmental Panel on Climate Change (IPCC) (1990). Climate Change: The IPCC Scientific Assessment. New York: Cambridge University Press.
Intergovernmental Panel on Climate Change (IPCC) (1996a). Climate Change 1995: Economic and Social Dimensions of Climate Change. New York: Cambridge University Press.
Intergovernmental Panel on Climate Change (IPCC) (1996b). Climate Change 1995: The Science of Climate Change. New York: Cambridge University Press.
Levitus, S., Antonov, J. I., Boyer, T. P., & Stephens, C. (2000). Warming of the World Ocean. Science, 287, 2225-2229.
McElwain, J. C., Beerling, D. J., & Woodward, F. I. (1999). Fossil Plants and Global Warming at the Triassic-Jurassic Boundary. Science, 285, 1386-1390.
Nordhaus, W. D. (1994). Managing the Global Commons: The Economics of Climate Change. Cambridge, Massachusetts: MIT Press.
Ramsey, F. P. (1928). A Mathematical Theory of Saving. Economic Journal, 38, 543-559.
Weyant, J. P. (1993). Costs of Reducing Global Carbon Emissions. Journal of Economic Perspectives, 7(4), 27-46.
INTEGRATED ASSESSMENT MODELS OF CLIMATE CHANGE: BEYOND A DOUBLING OF CO2

Stephen H. Schneider and Kristin Kuntz-Duriseti

The Long-Term Economics of Climate Change, pages 11-64. © 2001 by Elsevier Science B.V. ISBN: 0-7623-0305-0

ABSTRACT

One of the principal tools in analyzing climate change control policies is integrated assessment modeling. While indispensable for asking logical "what if" questions, such as the cost-effectiveness of alternative policies or the economic efficiency of carbon taxes versus R&D subsidies, integrated assessment models (IAMs) can only produce "answers" that are as good as their underlying assumptions and structural fidelity to a very complex multi-component system. However, due to the complexity of the models, the assumptions underlying the models are often obscured. It is especially important to identify how IAMs treat uncertainty and the value-laden assumptions underlying the analysis. In particular, IAMs have difficulty adequately addressing the issue of uncertainty inherent to the study of climate change, its impacts, and appropriate policy responses. In this chapter, we discuss how uncertainty about climate damages influences the conclusions from IAMs and the policy implications. Specifically, estimating climate damages using information from extreme events, contemporary spatial climate analogs and subjective probability assessments, transients, "imaginable" surprises, adaptation, market distortions, and technological change are given as examples of problematic areas that IA modelers need to explicitly address and make transparent if IAMs are to enlighten more than they conceal.

INTRODUCTION
It is often asserted that human societies are "better off" as we enter the 21st century than were all previous generations. There are many more of us enjoying increasing material standards of living and increasing life expectancy as a result of technological developments and social organization in the wake of the industrial revolution. However, while most medical studies point toward increasing (though not necessarily sustainable) human health status now compared to centuries ago, few conservation biologists would accept a comparable claim that natural ecosystems are "better off" today, given the many human-induced disturbances to nature that have compounded over the centuries, including the "discernible" and growing human impact on climate (e.g. IPCC, 1996a). Although our principal focus will be on global climate change, its potential synergism with the other global change disturbances (such as atmospheric ozone depletion, habitat destruction, pesticide runoff, chemical releases, or exotic species invasion) should always be borne in mind.

When discussion is limited to a doubling of atmospheric CO2 concentrations, it is conceivable to talk about adaptation as a reasonable response to many impacts of climate change. However, when we consider the trajectory of CO2 concentrations into the 22nd century and acknowledge that there is the potential to double CO2 concentrations several times over, passive adaptation becomes an increasingly difficult prospect - particularly for natural systems (IPCC, 2001b). If we are "better off" in the future, will future generations be sufficiently endowed to deal with CO2 concentrations beyond a doubling?

One of the principal tools in analyzing climate change control policies is integrated assessment modeling. In integrated assessment models (IAMs) of climate change, modelers typically "combine scientific and economic aspects of climate change in order to assess policy options for climate change control" (Kelly & Kolstad, 1999).
The role of IAMs is to help elucidate how certain policy choices could alter the likelihood or costs of various options and/or consequences. While indispensable for asking logical "what if" questions, such as the cost-effectiveness of alternative policies or the economic efficiency of carbon taxes versus R&D subsidies, IAMs can only produce "answers" that are as good as their underlying assumptions and structural fidelity to a very complex multi-component system. IAMs can explore the behavior of complex systems more reliably and consistently than mental models or intuition, provided the assumptions embedded in the IAMs are clearly recognized and
understood. It is especially important to identify how IAMs treat uncertainty and the value-laden assumptions underlying the analysis. However, due to the complexity of the models, the assumptions underlying the models are often obscured. Not all potential users of IAM results will be aware of hidden values or assumptions inherent in such tools. If the assumptions and values embedded in such topics are not made explicit to users, then IAMs may obscure and confuse the debate rather than provide useful insights. Because of their social and political purpose to provide insights into value-laden decision-making processes, IAMs should be as transparent as possible to facilitate use by a variety of users with greatly varying analytical skills (e.g. see the discussion of "good practice" in IAM by Ravetz, 1997). In the end, policy making is an intuitive judgment about how to manage risks or make investments to deal with a wide array of possible consequences. Thus, incorporation of decision makers into all stages of development and use of IAMs is one safeguard against misunderstanding or misrepresentation of IAM results by lay audiences. In particular, IAMs have difficulty adequately addressing the issue of uncertainty inherent to the study of climate change, its impacts, and appropriate policy responses. In this chapter, we discuss how uncertainty about climate damages influences the conclusions from IAMs and the policy implications. Specifically, estimating climate damage using information from extreme events, contemporary spatial climate analogs and subjective probability assessments, transients, "imaginable" surprises, adaptation, market distortions and technological change are given as examples of problematic areas that IA modelers need to explicitly address and make transparent if IAMs are to enlighten more than they conceal. 
Disclaimers on the Use of lAMs The discipline of economics, since it has the best developed formalism and empiricism for cost/benefit analyses, is in a particularly advantaged position to contribute to IAMs. However, some have challenged the cost/benefit technique in particular, and the utilitarian principle upon which it rests in general, as incommensurate with the full spectrum of social values. 2 Even though expanding on specific cost/benefit paradigm-challenging arguments such as these are beyond the scope of this discussion, it must be kept in mind, nonetheless, that when applying IAM results to actual decision making these philosophical underpinnings of analytic methods do influence the outcomes-what has often been labeled as "framing" the problem in the sociology of scientific knowledge literature (Jasanoff & Wynne, 1998). To the extent that IAMs inform the value-laden process of decision making, they can educate our
14
STEPHEN H. SCHNEIDER AND KRISTIN KUNTZ-DURISETI
intuitions and aid decision making. To the extent that, in a haze of analytic complexity, IAMs obscure values or make implicit cultural assumptions about how nature or society works (or the modelers' beliefs about how they "should" work), IAMs can thus diminish the openness of the decision-making process. The cultural differences across professional or other social groups must also be explicitly accounted for in IAMs. Ecologists typically argue that it is neither responsible stewardship nor good economics to mortgage our environmental future and leave the burden of finding solutions to our posterity. Economists typically retort that we're leaving future generations greater flexibility, through increased wealth, to deal more cost-effectively with these burdens. In order for the political system to find a balance between these opposing viewpoints, we must first recognize the assumptions and belief systems embedded into any of the analytical tools that are designed to inform the process. IAMs can help by describing quantitatively the logical consequences of an explicit set of assumptions, including values and beliefs. Decision makers, hopefully more aware and better informed thanks to insights from IAMs, make the value judgments that are their franchise. It is the responsibility of IAM builders and users to make such values and beliefs transparent and accessible in their products. To do less is to make IAMs at best irrelevant to policy-makers, and at worst, misleading. Space does not permit an exhaustive catalogue of every imaginable strength and weakness of using IAMs for developing climate change policy. Fortunately, several authors have attempted to diagnose and debate this topic in considerable depth (e.g. Rothman & Robinson, 1997; Parson, 1996; Rotmans & van Asselt, 1996; Morgan & Dowlatabadi, 1996; Risbey et al., 1996 - from which scores of additional and earlier references can be found). 
For example, Wynne and Shackley (1994) assert that IAMs are primarily tools for integrated assessment, used to generate insights into the decision-making process; they are not "truth machines". Modeling complex physical, biological and social factors cannot produce credible "answers" to current policy dilemmas, but it can put decision making on a stronger analytical basis (Rotmans & van Asselt, 1996). Furthermore, Rotmans and van Asselt suggest that policy recommendations in response to projected climate change also depend on the underlying cultural view of development and nature. Risbey et al. (1996) address the value-laden analysis of the differential monetary value of human life, typically determined from the discrepancy between how much poor and rich societies are willing to invest to prevent the loss of a "statistical person." Although analytically convenient, since this method permits risks to be put into a common metric (i.e. the dollar), it values the losses of poor countries from climatic damages that include loss of life much below (in absolute dollar terms) that of
rich countries in an integrated assessment. Understanding the strengths and weaknesses of any complex analytic tool is essential to rational policy making, even if quantifying the costs and benefits of specific activities is controversial (e.g. Schneider, 1997a).

Predictability Limits

Quantitatively separating cause and effect linkages from among the many complex, interacting processes, sub-systems and forcings within the climate system is extremely difficult and controversial. These difficulties are compounded by the challenge of identifying a trend when there is large variation around the trend, let alone the possibility of trends in that variability as well. It is understandable that debates about the adequacy of models often erupt. Complex systems inherently suffer from a limit on their predictability, an issue that has received inadequate attention. This is due, in part, to the difficulty of predicting structural change, such as the effect of OPEC's oil embargo on energy prices in 1974 or the USSR's decision in 1972 to massively buy grain on the world market (see Liverman, 1987). Despite the likely unpredictability of such salient "surprises", IAMs can be used for sensitivity analyses of how certain policies can reduce the risks of a number of plausible "surprise" events, such as the collapse of the thermohaline circulation in the North Atlantic Ocean (see Mastrandrea & Schneider, 2001). Few state variables of complex systems would enjoy much "predictability" if predictability means a reliable forecast of the time series of the state variable. Accuracy in the time-evolving projection of a multi-component IAM would very likely degrade as unpredictable events - exogenous or endogenous - occurred (see the "cascade of uncertainty" in Schneider, 1983, or "the uncertainty explosion" in Henderson-Sellers, 1993, or the "chain of causality" in Jones, 2000).
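The "cascade of uncertainty" can be made concrete with a toy calculation: when each link in the causal chain carries only a modest low-to-high bracket, the end-to-end range of outcomes multiplies. All of the bracket values below are invented for illustration; they are not estimates from the text or from any assessment.

```python
import itertools

# Hypothetical low/high brackets for three links in the causal chain
# (all numbers are illustrative placeholders):
emissions_factor = (1.5, 3.0)    # ratio of future to present CO2 forcing
warming_per_factor = (0.7, 2.0)  # deg C of warming per unit forcing ratio
damage_per_degree = (0.5, 2.5)   # % GDP lost per deg C of warming

# The "cascade": combine every low/high choice along the chain.
outcomes = [e * w * d
            for e, w, d in itertools.product(emissions_factor,
                                             warming_per_factor,
                                             damage_per_degree)]
low, high = min(outcomes), max(outcomes)

# Each link's high/low ratio is modest (2x, ~2.9x, 5x), but the
# end-to-end ratio compounds to roughly 29x.
print(f"damage range: {low:.3f}% to {high:.1f}% of GDP "
      f"(ratio {high / low:.1f}x)")
```

The point of the sketch is only the compounding of ranges, not the particular numbers.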
However, a forecast of the sensitivity of the system to specific exogenous factors, for example, could carry a high degree of precision even if many unpredictable events caused the system variables to drift from their projected states. By analogy, even though no individual weather events can be reliably predicted past a week or two owing to the chaotic internal dynamics of the atmosphere (Lorenz, 1975), the effects of a volcanic dust veil on the climate of the few years following the eruption are likely to be highly predictable. In other words, the system might be predictably different from what it otherwise would have been because of the well-modeled response to an exogenous factor, even if the absolute state of the system over time is largely unpredictable. Currently we have a much more developed conception of the boundary condition changes that could plausibly disturb the climatic system than of how social
systems might evolve. Thus, we need to focus on which social and environmental sub-systems are most sensitive to global change disturbances rather than overemphasize attempts to forecast changing social conditions and norms over long periods, even though such changing conditions can very much alter societal vulnerability to global change disturbances (e.g. IPCC, 2001b). Because of the complex interactions between climate, ecology and society, it is likely to be tougher to provide credible projections of state variables in the coupled model than in any individual sub-model. Modelers must therefore be aware of the danger of "lamp-posting", which is Ravetz's (1997) term for the cliché that warns of the tendency to look for a lost set of keys under the lighted lamppost, rather than in the dark field where they were probably dropped. Salience must compete with tractability in model design, which is one reason why it is critical to involve model users, most notably decision makers and stakeholders, at very early stages of model design.
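The distinction drawn above - unpredictable individual trajectories versus a well-determined ensemble response to an exogenous forcing - can be sketched with a toy chaotic system. Here the logistic map stands in for the unpredictable system and its parameter r for an exogenous "forcing"; nothing in this sketch models actual climate dynamics.

```python
import random

def ensemble_mean(r, seed, n=1000, burn=300, steps=300):
    """Long-run average of the chaotic logistic map x <- r*x*(1-x),
    taken over an ensemble of random initial conditions."""
    rng = random.Random(seed)
    xs = [rng.uniform(0.1, 0.9) for _ in range(n)]
    for _ in range(burn):                      # discard the transient
        xs = [r * x * (1.0 - x) for x in xs]
    total = 0.0
    for _ in range(steps):
        xs = [r * x * (1.0 - x) for x in xs]
        total += sum(xs) / n
    return total / steps

# 1) Individual trajectories are unpredictable: two starts differing by
#    1e-10 decorrelate completely within a few dozen iterations.
a, b = 0.3, 0.3 + 1e-10
diverged = False
for _ in range(200):
    a, b = 3.9 * a * (1 - a), 3.9 * b * (1 - b)
    diverged = diverged or abs(a - b) > 0.1
print("nearby trajectories diverged:", diverged)

# 2) Yet the shift in the ensemble mean caused by an exogenous change in
#    the "forcing" parameter r is stable across independent ensembles.
shift_1 = ensemble_mean(3.9, seed=0) - ensemble_mean(3.7, seed=0)
shift_2 = ensemble_mean(3.9, seed=1) - ensemble_mean(3.7, seed=1)
print(f"ensemble-mean response to r change: {shift_1:+.4f} and {shift_2:+.4f}")
```

The time series of any one trajectory is unforecastable, but the statistical response to the parameter change is reproducible - the analogue of the volcanic dust veil example.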
UNCERTAINTY AND MODEL ASSUMPTIONS

Estimating Climate Damages Under Uncertainty

The inherent uncertainty in predicting climate change and its implications creates problems for IAMs. Moss and Schneider (2000) note that the term "uncertainty" implies anything from confidence just short of certainty to informed guesses or speculation. Lack of information obviously results in uncertainty, but often disagreement about what is known or even knowable is also a source of uncertainty. Some categories of uncertainty are amenable to quantification, while other kinds cannot be expressed sensibly in terms of probabilities (see Schneider et al., 1998, for a survey of the recent literature on characterizations of uncertainty). Uncertainties arise from such factors as linguistic imprecision, statistical variation, measurement error, variability, approximation, and subjective judgment. These problems are compounded by the global scale of climate change but local scales of impacts, long time lags between forcing and response, low frequency variability with characteristic times greater than the length of most instrumental records, and the impossibility of before-the-fact experimental controls. Moreover, it is important to recognize that even good data and thoughtful analysis may be insufficient to dispel some aspects of uncertainty associated with differing standards of evidence. The combination of increasing population and increasing energy consumption per capita is expected to contribute to increasing CO2 and sulfate emissions over the next century, but projections of the extent of the increase are very
uncertain. Central estimates of emissions suggest a doubling of current CO2 concentrations by the middle of the twenty-first century, leading to projected warming ranging from one degree Celsius to nearly six degrees Celsius (if aerosol effects are controlled) by the end of the twenty-first century (IPCC, 2001a). Warming at the low end of the uncertainty range could still have significant implications for a number of "unique and valuable" assets such as species adaptation (e.g. IPCC, 2001b, Chapter 19), whereas warming of five degrees or more could have catastrophic effects on natural and human ecosystems, including hydrological extremes and serious coastal flooding. The overall cost of these impacts in "market sectors" of the economy could easily run into many tens of billions of dollars annually (Smith & Tirpak, 1988; IPCC, 1996b). Although fossil fuel use contributes substantially to such impacts, the associated costs are rarely included in the price of conventional fuels; they are externalized. Internalizing these environmental externalities (see Nordhaus, 1992; IPCC, 1996c; Goulder & Kennedy, 1997) is a principal goal of international climate policy analyses. Uncertainties are compounded when projections of climatic impacts are considered. The extent of the human imprint on the environment is unprecedented: human-induced climate change is projected to occur at a very rapid rate, natural habitat is fragmented by agriculture, settlements, and other development activities, "exotic" species are imported across natural biogeographic barriers, and our environment is assaulted with a host of chemical agents (e.g. Root & Schneider, 1993). For these reasons it is essential to understand not only how much climate change is likely, but also how to characterize and analyze the value of the ecosystem services that might be disrupted. How the biosphere will respond to human-induced climate change is fraught with uncertainty.
However, it is clear that life, biogeochemical cycles, and climate are linked components of a highly interactive system. The assumptions made about how to model this uncertainty affect the conclusions and policy implications. Nowhere is this more evident than in estimating the damages, or costs, of climate change. The overall cost of climate change involves the cost of mitigation, the cost of adaptation and the cost of the remaining damages. Uncertainty and the possibility of surprises surround each of these components and have a profound effect on each of them. We highlight issues that are crucial when costing climatic impacts, particularly when the possibility of non-linearities, surprises and irreversible events is allowed for. The assumptions made when carrying out such exercises largely explain why different authors obtain different policy conclusions. (See Schneider, Kuntz-Duriseti & Azar, 2000, for discussion of similar problems and solutions in the costing of mitigation activities.)
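One reason the treatment of uncertainty matters so much for damage estimates is Jensen's inequality: if damages are a convex function of warming, expected damages exceed the damages evaluated at the expected warming, so a "best guess" temperature systematically understates expected losses. A minimal sketch, with an assumed quadratic damage function and the one-to-six degree warming range treated as uniform (both simplifying assumptions, not claims from the text):

```python
# Convex damage function D(T) = k * T**2; warming T uniform on [1, 6] degC,
# echoing the 1-6 degC range quoted above. The coefficient k and the
# quadratic form are illustrative assumptions.
k = 0.3
a, b = 1.0, 6.0

mean_T = (a + b) / 2                       # 3.5 degC
mean_T2 = (b**3 - a**3) / (3 * (b - a))    # E[T^2] for a uniform variable

damage_at_mean = k * mean_T**2             # damages of the "best guess"
expected_damage = k * mean_T2              # probability-weighted damages

print(f"D(E[T]) = {damage_at_mean:.3f}% GDP")
print(f"E[D(T)] = {expected_damage:.3f}% GDP")
# With a convex D, E[D(T)] > D(E[T]): evaluating damages at the central
# warming estimate understates the expected damages.
```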
In this section, we review and comment on alternative methods for estimating uncertain climate damages, including the use of current variation in climate conditions as a proxy for future climate, the use of empirical data on extreme weather events, and the use of subjective probability assessments to estimate damage costs. We conclude that a point estimate, or "best guess", can be misleading by obscuring the wide range of possibilities in the underlying probability distribution; probability distributions more honestly convey the state of knowledge when uncertainties are inherent.
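The hazard of a single "best guess" can be illustrated with a hypothetical right-skewed damage distribution: the median says little about the mean or about the upper tail that typically drives risk-management decisions. The lognormal parameters below are invented for illustration:

```python
import math

# Hypothetical right-skewed damage distribution: lognormal with a median
# of $10 billion and log-scale sigma = 1.0 (both numbers illustrative).
median = 10.0
sigma = 1.0
mu = math.log(median)

mean = math.exp(mu + sigma ** 2 / 2)     # probability-weighted average
p95 = math.exp(mu + 1.645 * sigma)       # ~95th percentile (z ~ 1.645)

print(f"median ('best guess'): ${median:.1f} billion")
print(f"mean:                  ${mean:.1f} billion")
print(f"~95th percentile:      ${p95:.1f} billion")
# The point estimate sits well below both the expected damages and the
# tail outcomes - exactly the information a distribution conveys and a
# point estimate hides.
```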
The Use of "Ergodic Economics" to Model Climate Change Over Time

The assumptions underlying climate change scenarios determine to a large degree the impacts that specific climatic change scenarios are predicted to have on agriculture, coastlines or forestry. Some analysts (e.g. Mendelsohn et al., 1996) have suggested a shortcut around explicitly modeling the complex, coupled physical, biological, and social dynamics that determine the profitability of agriculture or forestry. They argue that cross-sectional analyses can estimate empirically the adaptation responses of real farmers to changes in climate (e.g. how yields would change, adaptation responses, etc.) by simply comparing these bio-economic activities in warm places like the U.S. Southeast and colder places like the Northeast. This spatial difference in climate provides a proxy for how temperature changes in each place might affect these segments of the bio-economy. Climatic model simulations of CO2-induced climate changes are used to determine regional annual temperature and precipitation changes (Mendelsohn et al., 1999) and to drive regional numerical values of associated damages (typically net market benefits in cold regions and net costs in warm places). Schneider (1997a) objects that fundamental assumptions are invoked both implicitly and explicitly by the use of such techniques. These assumptions are not universally accepted, are not always transparent, and are such that plausible alternatives could radically change the "answer." The method is controversial (e.g. see Hanemann, 2000; Darwin, 1999; Adams, 1999), since it ignores time-evolving or transient changes in temperature and other variables, not to mention surprises. In essence, these methods assume a perfect substitutability of changes at one place over time (i.e.
the climate impact we seek to cost) with changes across space at the same time - a debatable assumption that is tantamount to the ergodic hypothesis in mathematical statistics. A system is "ergodic" if an ensemble of replicates averaged at one instant of time produces the same statistical results as an infinite time average of one member of the ensemble. Time and space are, in essence, substitutable - the
system is ergodic. This result will only occur if the system has a unique steady-state response to any exogenous forcing. In other words, an ergodic system's single equilibrium state has no memory of its evolutionary path, only of its boundary conditions; i.e. it is a "transitive" system (e.g. Lorenz, 1968, 1970). However, the reliability of these methods rests on three quite fundamental assumptions that need to be made explicit in order to assess the merit of the conclusions:

(1) Ergodic Economic Substitutability (static and dynamic systems are equivalent): Variations over time and space are equivalent (e.g. long-term averaged climate and/or economic differences between two separate places are equivalent to changes of comparable magnitude occurring over time in one place). The underlying processes governing a dynamic system's response to disturbances produce transient pathways that may not resemble the equilibrium response to that disturbance. Thus, when cross-sectional models are derived from a system already in equilibrium, it is implicitly assumed that the dynamic processes that govern transient behavior have been fully captured in that static cross-sectional structure.

(2) Transitivity: Only one steady state occurs per set of exogenous conditions (i.e. the same path-independent, long-term impacts occur for all possible transient scenarios). In other words, surprises and synergisms, which are non-linear and likely to depend on the path of system changes, pose no qualitative threats to the credibility of the results. Although non-linearity and "surprises" do not necessarily imply intransitivity (i.e. multiple equilibria), they certainly alter transient responses, which is of particular relevance to the policy response to climate change occurring over decades.

(3) Higher Moments are Invariant: A primary variable used to assess climate change is annually averaged surface temperature. However, annual mean surface temperature may not be a good proxy for actual climatic changes occurring either in equilibrium or over time, because annual means do not capture higher moments such as daily or seasonal cycles or variability (see e.g. Mearns et al., 1984, or Overpeck et al., 1992). For example, if much of the anthropogenic warming were to occur at night (as some climate models project), this could have very different ecological or agricultural effects than if there were no change in the diurnal cycle. Or, if seasonality were altered, then even the same annual mean surface air temperature difference today across space would likely be a poor analogy for the impact either in equilibrium or over time for a future climate change that included altered seasonality. Or, if between now and a specified future time, precipitation increases by 10%, but
more than half this annually averaged increase were distributed in the top decile of rainfall intensity (as has happened in the U.S. since 1910; see Karl & Knight, 1998), then using the annual precipitation (let alone just annual temperature) difference between two regions today as a proxy for the effects of a 10% precipitation increase in the future in the drier location could well be a very poor representation of what would happen, even given the same annually averaged difference.

While these methods can inform one's intuition about possible market-variable impacts of certain climate changes under specified assumptions, caution must be exercised in taking the conclusions literally, as the implicit assumptions are unlikely to be appropriate for many climate change scenarios and/or applications. Furthermore, these methods probably underestimate the costs of transition and adaptation, since the current, known, relatively stable climate substitutes as a proxy for an uncertain, rapidly changing one.
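The ergodic hypothesis invoked above can be checked directly in simple stochastic systems: for an ergodic process the time average of one realization converges to the ensemble average, while for a path-dependent (non-ergodic) process it does not. A sketch using a mean-reverting AR(1) process versus a random walk (toy processes, not models of climate or the economy):

```python
import random
import statistics

def simulate(phi, steps, rng):
    """AR(1) process x_t = phi * x_{t-1} + e_t. phi < 1 is mean-reverting
    (ergodic); phi = 1 is a random walk (path-dependent, non-ergodic)."""
    x, path = 0.0, []
    for _ in range(steps):
        x = phi * x + rng.gauss(0.0, 1.0)
        path.append(x)
    return path

rng = random.Random(42)

# Ergodic case: the time average of ONE long realization agrees with the
# ensemble average across MANY realizations at a fixed (late) time.
time_avg = statistics.fmean(simulate(0.5, 4000, rng))
ensemble_avg = statistics.fmean(simulate(0.5, 200, rng)[-1]
                                for _ in range(2000))
print(f"ergodic AR(1): time avg {time_avg:+.3f}, "
      f"ensemble avg {ensemble_avg:+.3f}")

# Non-ergodic case: each random walk wanders off on its own path, so the
# time average differs wildly from one realization to the next.
walk_avgs = [statistics.fmean(simulate(1.0, 4000, rng)) for _ in range(5)]
print("random-walk time averages:", [round(w, 1) for w in walk_avgs])
```

In the ergodic case "time and space are substitutable"; in the path-dependent case a cross-section taken at one instant reveals little about any single trajectory over time - the crux of the objection to the spatial-analog shortcut.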
Costing of Extreme Event Climate Damages

Subjective probability assessments of potential climate change impacts provide a crude metric for assigning dollar values to certain aspects of ecosystem services. We can anticipate costs associated with global change and place a preliminary value on some of the ecosystem services that could be affected. One way to assess the costs of climate change is to evaluate the losses from extreme climatic events, such as floods, droughts, and hurricanes (see Alexander et al., 1997, and IPCC, 2001b, Chapter 8). Catastrophic floods and droughts are cautiously projected to increase in both frequency and intensity with a warmer climate and the influence of human activities such as urbanization, deforestation, depletion of aquifers, contamination of ground water, and poor irrigation practices (IPCC, 1996a; IPCC, 2001b, Table 3-10). Humanity remains vulnerable to extreme weather events. For example, between 1965 and 1985 floods in the United States claimed 1,767 lives and caused more than $1.7 billion in property damage; the effects of these floods are felt across a wide range of economic sectors. Alexander et al. (1997) estimate federal response and recovery costs of the 1993 Midwest flood, including $4.2 billion in direct federal expenditures, $1.3 billion in payments from federal insurance programs, and more than $621 million in federal loans to individuals, businesses, and communities (see Table 1).

Table 1. Summary of federal expenditures for the Midwest flood of 1993 (in millions of U.S. dollars). From Alexander et al., 1997.

                      Missouri    Iowa   Minnesota   Illinois   Other States(a)     Total
USDA                     141.6   376.2       446.2       63.3             512.2   1,699.9
FEMA                     291.5   189.8        62.9      197.5             290.9   1,098.0
HUD                      152.1   107.7        29.8       94.9              75.1     500.0
Commerce                  51.9    48.5         7.9        0.4              23.8     201.3
USACE                    128.7     9.7         0.3       70.3              12.0     253.1
HHS                       19.3    22.8         4.0        7.4              15.2      75.0
Education                  4.5    11.1         0.8        1.4               2.2     100.0
Labor                     15.0    15.0         5.0       10.0              19.6      64.6
National Community         1.0     1.2         0.7        0.4               0.7       4.0
DOT                       73.5    22.1         7.3       33.3              36.9     146.7
EPA                        7.6     4.6         2.2        5.3              12.4      34.0
DOI                        5.1     2.1         6.0       11.8               8.3      41.2
Total                    891.8   810.8       573.1      504.0           1,009.3   4,217.8

(a) Combined costs for Kansas, Nebraska, North Dakota, South Dakota, and Wisconsin.
Abbreviations: USDA, United States Department of Agriculture; FEMA, Federal Emergency Management Agency; HUD, Housing and Urban Development; USACE, United States Army Corps of Engineers; HHS, Department of Health and Human Services; DOT, Department of Transportation; EPA, Environmental Protection Agency; DOI, Department of the Interior.
Source: Interagency Floodplain Management Review Committee report to the Administrative Floodplain Management Task Force, 1994.

Other effects of the flooding are still largely unknown, including cumulative effects of releases of hazardous materials such as pesticides, herbicides, and other toxics; effects on groundwater hydrology and groundwater quality; distribution of contaminated river sediments; and alteration of forest canopy and sub-canopy structure. In addition, the loss of tax revenue has not been quantified for the Midwest flood. It is important to note that while not all costs of the 1993 flood can be directly calculated in monetary terms, both quantifiable and non-quantifiable costs were significant in magnitude and importance. While we are not claiming that this event was directly caused by anthropogenic climate change, it does allow a rough estimate of the magnitude of costs should such climate change cause increases in extreme weather events. Moreover, similar events in less developed parts of the world (e.g. flooding from Hurricane Mitch in Central America) may have caused less absolute monetary damage but much greater losses in terms of human life,
infrastructure and the social fabric of whole communities, not to mention the much higher percentage loss of GDP - sometimes estimated to be greater than 50%! Clearly, it is important to be explicit about what units of cost (i.e. what "numeraire") are being considered in each specific case. Like floods, severe droughts of the twentieth century have affected both the biophysical and socioeconomic systems of many regions. Drought analyses indicate that even reasonably small changes in annual streamflows due to climatic change can have dramatic impacts on drought severity and duration. For example, changes in the mean annual streamflow of a region of only +/-10% can cause changes in drought severity of 30 to 115% (Dracup & Kendall, 1990). Damage estimates from the 1988 drought in the Midwestern United States show a reduction in agricultural output of approximately one-third, as well as billions of dollars in property damage. Hurricanes can also cause devastation in the tens of billions of dollars. Warmer ocean surface waters produce stronger hurricanes (which is why they are warm-season phenomena). Other meteorological factors are involved, though, that may act to increase or decrease the intensity of hurricanes with climate change. An increase in the intensity of hurricanes with warmer waters is plausible (e.g. Knutson, 1998), yet still speculative given the number of other uncontrolled factors involved. Most recently, IPCC (2001a) has estimated that increases in hurricane intensities of up to 10% seem likely for most 21st century climate change scenarios. There is little doubt, however, of the heightened damages that arise from more intense hurricanes. Damage assessment is one possible way in which we can relate the cost of more inland and coastal flooding, droughts, and hurricanes to the value of preventing the disruption of climate stability. In the 1993 Midwest flood example, Alexander et al. (1997) delineate the costs of a single event.
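One crude way to connect such per-event costs to changing event frequencies is an expected-annual-damage calculation. The sketch below uses the roughly $4.2 billion direct federal expenditure from Table 1 as the per-event cost; the assumed return periods and the hypothesized doubling of flood frequency are illustrative assumptions, not estimates from the text.

```python
# Expected annual damages (EAD) = per-event cost / return period. The
# per-event cost echoes the ~$4.2 billion Table 1 grand total; the
# 100-year baseline return period and its halving under climate change
# are hypothetical assumptions for illustration only.
cost_per_event = 4.2178e9               # dollars, direct expenditures

ead_baseline = cost_per_event / 100.0   # assumed 100-year event
ead_shifted = cost_per_event / 50.0     # assumed doubled frequency

print(f"baseline EAD:         ${ead_baseline / 1e6:.1f} million/yr")
print(f"EAD at doubled freq.: ${ead_shifted / 1e6:.1f} million/yr")
print(f"implied annual cost of the frequency shift: "
      f"${(ead_shifted - ead_baseline) / 1e6:.1f} million/yr")
```

The arithmetic is trivial; the hard (and uncertain) parts are the return periods themselves and the non-monetized losses the text emphasizes.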
It is also possible to perform a more integrated analysis, such as a cost assessment of future sea level rise along U.S. coasts associated with possible ice-cap melting or with ocean warming and the resulting thermal expansion of the waters. In a probability distribution of future sea level rise by 2100, changes range from slightly negative values to a rise of a meter or more, with the midpoint of the distribution at approximately half a meter (Titus & Narayanan, 1996). A number of studies have assessed the potential economic costs of sea level rise along the developed coastline of the United States. For a 50 cm rise in sea level by the year 2100, depending on associated assumptions about the level of adaptation, estimates of potential costs range from $20.4 billion (Yohe et al., 1996) to $138 billion (Yohe, 1989) in lost property. Adaptive capacity in places like Bangladesh, by contrast, is low, and thus vulnerability is higher.
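The Titus and Narayanan style of analysis - propagating subjective parameter distributions through a model via Monte Carlo sampling to obtain a distribution of sea level rise - can be sketched as follows. The component model and every distribution here are invented placeholders, not the EPA experts' actual elicitations.

```python
import random

# Toy Monte Carlo in the spirit of Titus & Narayanan: sea level rise by
# 2100 = thermal expansion + ice melt, with each uncertain quantity drawn
# from a subjective distribution (all distributions are illustrative).
rng = random.Random(0)

def one_draw():
    warming = rng.triangular(1.0, 6.0, 3.0)       # degC by 2100
    expansion_per_degc = rng.gauss(10.0, 3.0)     # cm of rise per degC
    ice_melt = rng.lognormvariate(2.0, 0.8)       # cm, right-skewed
    return warming * expansion_per_degc + ice_melt

draws = sorted(one_draw() for _ in range(20000))

def pct(p):
    """p-th percentile of the sampled distribution."""
    return draws[int(p / 100 * len(draws))]

print(f" 5th percentile: {pct(5):6.1f} cm")
print(f"median:          {pct(50):6.1f} cm")
print(f"95th percentile: {pct(95):6.1f} cm")
```

The output of such an exercise is the whole percentile table, not a single number - which is precisely the argument made in the next section.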
Characterizing Uncertainty and the Need for Probability Distributions in Climate Damage Assessment

A U.S. Environmental Protection Agency study used an unusual approach to assess the effects of climate change, combining climatic models with the subjective opinions of many scientists on the values of uncertain elements in the models to help bracket the uncertainties inherent in this issue. Titus and Narayanan (1996) - drawing on teams of experts of all persuasions on the issue - calculated the final product of their impact assessment as a statistical distribution of future sea level rise, ranging from negligible change as a low probability outcome to a rise of a meter or more, also with a low probability (see Fig. 1). The midpoint of the probability distribution is roughly a half-meter sea level rise by the end of the next century. Since the EPA analysis stopped there, it is by no means a complete assessment. In order to take integrated assessment to its logical conclusion, we need to ask what the economic costs of various control strategies might be and how the costs of abatement compare to the economic or environmental losses (i.e. impacts or "damages", as they are called) from sea level rises. That means putting a value - typically a dollar value - on climate change, coastal wetlands, fisheries, environmental refugees, etc. Hadi Dowlatabadi at Carnegie Mellon University led a team of integrated assessors who, like Titus, combined a wide range of scenarios of climatic changes and impacts but, unlike the EPA studies, added a wide range of abatement cost estimates. Their integrated assessment was presented in statistical form as a probability that investments in CO2 emissions controls would either cost more than the losses from averted climate change or the reverse (e.g. Morgan & Dowlatabadi, 1996). Since their results do not include estimates for all conceivable costs (e.g.
the human or political consequences of persons displaced by coastal flooding), the Carnegie Mellon group offered its results only as illustrative of the capability of integrated assessment techniques. Its numerical results have meaning only after the range of physical, biological and social outcomes and their costs and benefits have been quantified - a Herculean task. Similar studies include a Dutch effort to produce integrated assessments for policy makers (Rotmans & van Asselt, 1996). Attempts to achieve more consistency in assessing and reporting on uncertainties are beginning to receive increased attention. Some researchers express concern that it is difficult even to know how to assign a distribution of probabilities for outcomes or processes that are laced with different types of uncertainties. However, the scientific complexity of the
[Fig. 1. Plots showing the probability of various rises of sea level in the years 2030, 2100, and 2200, calculated on the basis of the "Monte Carlo" estimation technique, combining experts' probability distributions for model parameters. From Titus and Narayanan, 1994.]
climate change issue and the need for information that is useful for policy formulation present a large challenge to researchers and policymakers alike - it requires both groups to work together towards improved communication of uncertainties. The research community must also bear in mind that when authors do not specify a distribution of probabilities, readers often assume one for themselves based on what they think the authors believe. For example, integrated assessment specialists may have to assign probabilities to alternative outcomes (even if only qualitatively specified by natural scientists), since many integrated assessment tools require estimates of the likelihood of a range of events in order to calculate efficient policy responses. Moss and Schneider (2000) argue in their IPCC uncertainties guidance paper that it is more rational for experts to provide their best estimates of probability distributions and possible outliers than to have users make their own determinations. The first step in developing an estimate of a probability distribution is to document ranges and distributions in the literature, including sources of information on the key causes of uncertainty, describe how the ranges and distributions are constructed, and clearly specify what they signify. This is not simply a reporting of values available in the literature, but rather an assessment of the relative likelihood that different values in the literature represent accurate estimates or descriptions. It is important to guard against the potential for "gaming" or strategic behavior, in which estimates of outliers might be selected to compensate for what the assessors consider to be over- or under-estimates in the literature. As part of the process of assessing the literature and drafting an assessment, it is critical to characterize not a single estimate, but rather a range of estimates and associated probability distributions.
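The documentation step described above - collecting literature values, stating the range, the central tendency, and any outliers, and saying explicitly what the range represents - can be sketched as follows. The "estimates" are invented for illustration:

```python
import statistics

# Hypothetical set of published climate-sensitivity estimates (degC per
# CO2 doubling) from a notional literature survey; the values are
# invented for illustration.
estimates = [1.9, 2.1, 2.4, 2.5, 2.6, 2.8, 3.0, 3.1, 3.3, 3.6, 4.1, 6.0]

low, high = min(estimates), max(estimates)
median = statistics.median(estimates)

# Flag outliers crudely: beyond 1.5 * IQR from the quartiles.
q1, _, q3 = statistics.quantiles(estimates, n=4)
iqr = q3 - q1
outliers = [e for e in estimates
            if e < q1 - 1.5 * iqr or e > q3 + 1.5 * iqr]

print(f"range of published values: {low}-{high} degC; median {median} degC")
print(f"values flagged as outliers: {outliers}")
# State explicitly what the range IS: here it is the span of collected
# estimates, NOT a confidence interval on the true sensitivity.
```

The final comment is the substantive point: "the range of published values" and "a 90% confidence interval" are very different statements, and the guidance asks authors to say which one they mean.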
This should include attention not only to the central tendency, but also to the end points of the range of outcomes, possible outliers, the likelihood that outcomes beyond the end points of the range might occur, and the type of distribution of potential outcomes (e.g. normal, bimodal, etc.). The next step might be to quantitatively or qualitatively characterize the distribution of values that a parameter, variable, or outcome may take. First, identify the end points of the range, and/or any high consequence, low probability outcomes or "outliers." Particular care needs to be taken to specify what portion of the range is included in the estimate (e.g. "this is a 90% confidence interval") and what the range is based on. It should be clear what sort of range and confidence interval is being constructed, or what sort of possible outcomes are included in the range. For example, do the range endpoints (or outliers beyond them) include potential known or imaginable non-linear rapid events? Does the "true" value fall into the specified range with a two out of three chance (or some other probability)? Or is the range defined to be one that includes two thirds of modeled outcomes available in the literature? These are very different statements, with different implications, and care should be taken to clarify exactly what is meant. Finally, an assessment of the central tendency of the distribution (if appropriate) should be provided. In developing a best estimate, authors need to guard against aggregation of results (spatial, temporal, or across scenarios) if it hides important regional or inter-temporal differences. It is important not to combine different distributions automatically into one summary distribution. For example, most participants or available studies might believe that the possible outcomes are normally distributed, but one group might cluster its mean far from the mean of another group, resulting in a bimodal aggregate distribution. In this case, it is inappropriate to combine these into one summary distribution unless it is also indicated that there are two (or more) "schools of thought." Climate sensitivity is an example (see Fig. 2). Here scientists 2 and 4 offer very different estimates of range outliers (i.e. values below the 5th percentile or above the 95th percentile) for imaginable abrupt events. But the means and variances of scientists 2 and 4 are quite similar to those of most of the remaining scientists in this decision analytic survey, except scientist 5. This is a case where it would likely be inappropriate to aggregate all respondents' distributions into a single composite estimate of uncertainty, since scientist 5 has a radically different mean and variance than the other 15 scientists. Rather than aggregating such "schools of thought" into a single distribution, it is more appropriate to show the two "paradigms" and mention the amount of support expressed for each distribution.
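The aggregation pitfall can be illustrated with a toy calculation (the weights, means, and spreads below are hypothetical, not values from any survey): pooling two "schools of thought" with well-separated means yields a bimodal mixture whose pooled mean falls in a low-probability "valley" that neither school considers likely.

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Two hypothetical "schools of thought" about climate sensitivity (K).
# Weights, means, and spreads are illustrative only.
w_a, mu_a, sd_a = 0.6, 2.8, 0.5   # majority school
w_b, mu_b, sd_b = 0.4, 0.5, 0.2   # dissenting school

def mixture_pdf(x):
    """Density of the pooled ("aggregated") distribution."""
    return w_a * normal_pdf(x, mu_a, sd_a) + w_b * normal_pdf(x, mu_b, sd_b)

pooled_mean = w_a * mu_a + w_b * mu_b   # 1.88 K

# The pooled mean lands in the valley between the two schools: the
# mixture density there is lower than at either school's center, so a
# single summary distribution misrepresents both groups.
print(pooled_mean)
print(mixture_pdf(pooled_mean) < mixture_pdf(mu_a))  # → True
print(mixture_pdf(pooled_mean) < mixture_pdf(mu_b))  # → True
```

A reader handed only the pooled mean of 1.88 K would treat as the "best estimate" a value that both schools regard as unlikely, which is exactly why the text recommends reporting the separate paradigms.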
Truncating the probability distribution narrows the range of outcomes described and means excluding outliers that may include "surprises." It is important to note that by providing only a truncated estimate of the full range of outcomes, one is not conveying to potential users a representation of the full range of uncertainty associated with the estimate. This has important implications regarding the extent to which the report accurately conveys uncertainties. Moss and Schneider (2000) acknowledge that some authors are likely to feel uncomfortable with the full range of uncertainty, because the likelihood of a "surprise" or events at the tails of the distribution may be extremely remote or essentially impossible to gauge, and the range implied could be extremely large. Thus, there may be a case for providing a truncated range in addition to outliers for a specific case, provided that it is clearly explained what the provided range includes and/or excludes. It should be stressed that if a truncated range is provided, it is important that authors specify how likely it is that the answer could lie outside the truncated distribution, and what the basis was for specifying such possibilities.

Fig. 2. Box plots of elicited probability distributions of climate sensitivity, the change in globally averaged surface temperature for a doubling of CO2 (2 x [CO2] forcing). The horizontal line denotes the range from minimum (1%) to maximum (99%) assessed possible values; vertical tick marks indicate the locations of the lower (5th) and upper (95th) percentiles; the box indicates the interval spanned by the 50% confidence interval; the solid dot is the mean and the open dot the median. The two columns of numbers on the right-hand side of the figure report the mean and standard deviation of each distribution. From Morgan and Keith, 1995.

Note that our estimates are situated within a universe of outcomes that cannot be fully identified (termed "knowable" and "unknowable" uncertainties by Morgan & Henrion, 1990). The limits of this total range of uncertainty are unknown, but may be estimated subjectively (e.g. Morgan & Keith, 1995). The inner range represents the "well-calibrated" range of
uncertainty, which consists of well-documented and quantified findings (see Fig. 3). The "judged" range of uncertainty incorporates the broader assessments of uncertainty, due to differences in opinion, based on expert judgments (e.g. the survey of expert opinion by Morgan and Keith in Fig. 2; see also the following section). Given the finding in the cognitive psychology literature that experts tend to define subjective probability distributions too narrowly due to overconfidence, the "judged" range may not encompass the "full" range of uncertainty. The "full" range of uncertainty is not fully identified, much less directly quantified, by existing theoretical or empirical evidence. Although the general point remains that there is always a much wider uncertainty range than the envelope developed by sets of existing model runs, it is also true that there is no distinct line between "knowable" and "unknowable" uncertainties; rather, it is a continuum. The actual situation depends on how well our knowledge (and lack thereof) has been integrated into the assessment models. Moreover, new information, particularly empirical data if judged reliable and comprehensive, may eventually narrow the range of uncertainty to well inside the well-calibrated range by falsifying certain outlier values.

Fig. 3. Schematic depiction of the relationship between "well-calibrated" scenarios, the wider range of "judged" uncertainty that might be elicited through decision analytic techniques, and the "full" range of uncertainty, which is drawn wider to represent overconfidence in human judgments. M1 to M4 represent scenarios produced by four models (e.g. globally averaged temperature increases from an equilibrium response to doubled CO2 concentrations). Modified from Jones, 2000.
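The recommendation to pair any truncated range with an explicit statement of the probability mass it excludes can be sketched as follows (the Gaussian shape and its parameters are illustrative assumptions, not an elicited distribution):

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of a Gaussian."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Hypothetical elicited distribution for climate sensitivity (K);
# mean and spread are illustrative, not from any survey.
mu, sigma = 2.5, 1.0

# Report a range truncated at roughly the 5th/95th percentiles...
lo, hi = mu - 1.645 * sigma, mu + 1.645 * sigma

# ...and then state explicitly how much probability lies OUTSIDE it,
# so users are not misled into reading the range as the full spread.
p_outside = normal_cdf(lo, mu, sigma) + (1.0 - normal_cdf(hi, mu, sigma))
print(f"truncated range: {lo:.2f} to {hi:.2f} K, P(outside) = {p_outside:.3f}")
```

Here about 10% of the probability mass lies outside the reported range; the text argues that this number, and the basis for it, should travel with the range itself.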
Using Subjective Assessments of Probability Distributions to Evaluate Climate Damages

Several studies suggest that climatic change will have only minor economic impacts, and that an optimal policy would therefore incorporate only modest
controls on greenhouse gas emissions (Kolstad, 1993; Nordhaus, 1992; Peck & Teisberg, 1992). For instance, Nordhaus (1992) estimates the climate damage for several degrees Celsius warming at about a one-percent reduction in gross world product (GWP), based primarily on market sector losses for a central estimate of slowly changing climate. However, many of these "modest controls" conclusions are based on point estimates - that is, results derived from a series of "best guesses". This point estimate method fails to account for the wide range of plausible values for many parameters. Policy making in the business, health and security sectors is often based on hedging against low probability but high consequence outcomes. Thus, any climate policy analysis that presents best guess point values or limited (i.e. "truncated") ranges of outcomes restricts the ability of policy makers to make strategic hedges against such risky outlier events. Nordhaus (1992) has been criticized for considering only a single damage function and not accounting for abrupt climate "surprise" scenarios. Since such surprises defy simple quantitative treatment, Nordhaus (1994a) took an alternative approach: he used decision analytic techniques to sample the opinions of a wide range of experts - conventional economists, environmental economists, atmospheric scientists, and ecologists - who have looked at climatic impacts. He asked them to provide their subjective probabilities as to what they thought the costs to the world economy would be from several climate-warming scenarios. Their median estimates of the loss of gross world product resulting from a three-degree Celsius warming by 2090 vary between 0 and 21% of GWP, with a mean of 1.9% (Nordhaus, 1994a). Even a 2% loss of GWP in 1994, however, represented annual climate damage of hundreds of billions of dollars.
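The order of magnitude of that last claim is easy to check (the gross world product figure below is a rough assumption for illustration, not a number from the chapter):

```python
# Back-of-envelope check that a 2% GWP loss is "hundreds of billions"
# of dollars per year. The ~$25 trillion figure for 1994 gross world
# product is an approximate assumption.
gwp_1994 = 25e12        # dollars (assumed)
damage_fraction = 0.02  # 2% of GWP
annual_damage = gwp_1994 * damage_fraction
print(f"${annual_damage / 1e9:.0f} billion per year")  # → $500 billion per year
```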
For a six-degree Celsius warming scenario, the respondents' median estimates of the loss to the world economy ranged from 0.8 to 62%, with a mean estimate of 5.5%. The Nordhaus (1994a) decision analytic survey is an example of how estimates of probability distributions can inform. Although the numbers themselves are revealing, what is really interesting is the cultural divide between natural and social scientists in his study. The most striking difference is that the social scientists - predominantly conventional economists - believe, virtually as a group, that even extreme climate change (i.e. 6°C warming by 2090) would not impose severe economic losses. Although this scenario is usually considered to be a low probability event (e.g. see Fig. 4), it is equivalent in magnitude to the change from an ice age to an inter-glacial epoch, but occurring in a hundred years rather than in thousands of years.
STEPHEN H. SCHNEIDER AND KRISTIN KUNTZ-DURISETI
Fig. 4. Probability distributions (f(x)) of climate damages (market and non-market components combined) from an expert survey in which respondents were asked to estimate tenth, fiftieth, and ninetieth percentiles for the two climate change scenarios shown. From Roughgarden and Schneider, 1999; data from Nordhaus, 1994a.

Although they express a wide range of uncertainty, most conventional economists surveyed still think climatic change even this radical would, on average, have only a several percent impact on the world economy in 2100. In essence, they accept the paradigm that society is almost independent of nature. In their opinion, most natural services (e.g. Daily, 1997) associated with the current climate are either not likely to be significantly altered or could be substituted for with only modest harm to the economy. On the other hand, natural scientists' estimates of the economic impact of extreme climate change are twenty to thirty times higher than conventional economists' (Nordhaus, 1994a; Roughgarden & Schneider, 1999). Nordhaus suggests that the ones who know the most about the economy are less concerned. Schneider (1997b) counters that the ones who know the most about the environment are more worried. The natural scientists, in essence, are less sanguine that human ingenuity could substitute for ecological services. Also, as Roughgarden and Schneider (1999) show, there is a positive correlation between the absolute amount of damage each respondent estimates and the percentage of total damages each assigns outside of standard national accounts (i.e. the natural scientists assign higher percentages of their losses to the non-market sectors). Regardless, either judgment involves both economic and ecological assessments, not single-disciplinary expertise. Clearly, the evolution of interdisciplinary communities cognizant of both economic and ecological
knowledge and belief systems will be needed to make these subjective opinions more credible - and to produce cost estimates that span a reasonable range of currently imaginable outcomes. Note, however, that despite the magnitude of the difference in damage estimates between economists and ecologists, the shape of the damage estimate curve was similar: the respondents indicated accelerating costs with higher climate changes. This stands in marked contrast to the flat willingness-to-pay (WTP) curve revealed by survey respondents in a contingent valuation exercise (Berk & Schulman, 1995). Respondents were surveyed to determine how much they would be willing to pay to prevent a given global climate change scenario from happening. Predicted probabilities were determined from the respondents' willingness to pay for the abatement of different mean high temperatures. In these scenarios, respondents were willing to pay an average of $140 to offset a mean high temperature of 100 degrees Fahrenheit, while abating a mean high temperature of 80 degrees was worth approximately $100 - a 40% increase in willingness to pay for a 20-degree rise in temperature, holding other scenario characteristics constant. Note, however, that the respondents reached a plateau in their willingness to pay at about 100 degrees Fahrenheit: they were not willing to pay much more to prevent 120-degree Fahrenheit mean high temperatures than to prevent 110-degree mean high temperatures. This is very dissimilar to the pattern exhibited by the "expert" respondents, who, in general, are at least aware of non-linearities in climate change damages, unlike the lay public respondents. The differences in various respondents' estimates of climate damages are cast into subjective probability distributions by Roughgarden and Schneider (1999) and then used to recalculate the optimal carbon tax rate, using the DICE model (see Fig. 5).
The natural scientists' damage estimates processed by DICE produce optimal carbon taxes several times higher than either the original Nordhaus estimate (Nordhaus, 1994b) or those of his surveyed economists. However, most respondents, economists and natural scientists alike, offered subjective probability distributions that were "right-skewed"; that is, most of the respondents considered the probability of severe climate damage ("nasty surprises") to be higher than the probability of "pleasant surprises". Because of this right skewness, even though the economists' best guess for climate damages is comparable to the original DICE estimate, the median optimal carbon tax DICE computes when the full distribution of economists is used is somewhat larger than either the original DICE tax or the tax calculated using the economists' 50th percentile climate damage estimates. Including the opinions of natural scientists in the construction of the damage distributions yields an
Fig. 5. Probability distributions (f(x)) of optimal carbon taxes in the years 1995, 2055, and 2105 from Monte Carlo simulations. Points showing the optimal carbon taxes calculated by the DICE model are shown for comparison. From Roughgarden and Schneider, 1999.
Integrated Assessment Models of Climate Change: Beyond a Doubling of C02
Table 2. Comparison of Monte Carlo simulation results with the standard DICE model. "Surprise" values are 95th percentile results. From Roughgarden and Schneider, 1999.

             Optimal Carbon Tax ($/ton C)
Input        1995       2055       2105
DICE         5.24       15.04      21.73
Median       22.85      51.72      66.98
Mean         40.42      84.10      109.73
"Surprise"   193.29     383.39     517.09
increased asymmetry in the probability density functions and higher expected damages for a given temperature increase. This in turn increases the expected value of both optimal control rates and optimal carbon taxes. Clearly, the use of probabilistic information, even if subjective, provides a much more representative picture of the broad views of the experts as well as a fairer representation of costs, which in turn allows better potential policy insights from an IAM. Several comparisons between the optimal carbon tax distributions from Roughgarden and Schneider (1999) and the original DICE model can be made, using the data summarized in Table 2 and Fig. 5. Comparing the mode (the most frequent value) of the Roughgarden and Schneider (RS) distributions with the results of the original DICE model, it seems that DICE is a good representation of the expert opinion expressed in Nordhaus' survey: the modes of the optimal carbon tax distributions are slightly above zero, close to DICE's recommendation of a relatively light carbon tax. However, the other properties of the RS distributions justify very different policies. The medians and means of the optimal carbon tax distributions range from three to eight times as high as those featured in the original DICE run.
The differences between the modes of the RS distributions and their medians and means can be attributed to the preponderance of right-skewness in the opinions given in Nordhaus' survey, discussed earlier (e.g. Figs 4 and 5). These long, heavy tails (which Roughgarden and Schneider label "Surprise" in Table 2) pull the medians and means of the distributions away from the modes. The "surprise" estimates (95th percentile) for optimal carbon taxes in Table 2 are at least twenty times the level of those projected by DICE for the three dates calculated (1995, 2055, and 2105).
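Why right skewness separates the mode, median, and mean can be seen with any right-skewed distribution. A lognormal with hypothetical parameters (not fit to the survey data) reproduces the qualitative ordering behind Table 2:

```python
import math

# A right-skewed (heavy right tail) distribution illustrates how the
# mode, median, and mean of the elicited carbon-tax distributions
# separate. Parameters are illustrative only.
mu, sigma = math.log(10.0), 0.8   # lognormal parameters (assumed)

mode = math.exp(mu - sigma ** 2)      # most frequent value
median = math.exp(mu)                 # 50th percentile
mean = math.exp(mu + sigma ** 2 / 2)  # expected value

# The heavy right tail pulls the median and mean above the mode, just
# as the "surprise" tails pull the RS medians and means above DICE's
# mode-like point estimate.
print(f"mode={mode:.1f}, median={median:.1f}, mean={mean:.1f}")
```

With these assumed parameters the mode sits near 5.3, the median at 10, and the mean near 13.8: the same mode < median < mean ordering the text describes for the optimal carbon tax distributions.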
Two different effects cause these differences. First, the means of these distributions (4.04% and 11.22% of GWP damage for 3°C warming and 6°C warming, respectively) are much higher than the damage estimates used in DICE (1.33% and 5.32%). Thus, the simulation study of Roughgarden and Schneider uses more pessimistic damage functions than the original DICE model. Second, the non-linearities of the model will, on average, push optimal carbon taxes even higher. Intuitively, damage functions derived from these damage distributions will never produce far more optimistic results than the original DICE damage function, but they will occasionally result in far more pessimistic outcomes. These occasional "catastrophic" damage functions lead to a relatively pessimistic expected value of output. In other words, the significant chance of a "surprise" causes a much higher level of "optimal" abatement, relative to the original DICE formulation. In addition, Roughgarden and Schneider analyze the effects of the relative severity of the average survey damage estimate versus those of the non-linearities of the DICE model in a probabilistic analysis. Approximately one third of the difference between the optimal carbon taxes of DICE and the means of their optimal carbon tax distributions is accounted for by the relatively high survey damage estimates; the remaining two-thirds of the difference can be attributed to the non-linearities in the model. In a sense, the original DICE carbon tax may be regarded as a point estimate between the mode and median of the distribution of expert opinion. However, output from a single model run does not display all the information available, nor does it offer sufficient information to provide the insights needed for well-informed policy decisions. One cannot simply look at a recommendation for a "five dollars per ton carbon tax" and claim that higher carbon taxes are necessarily less economically efficient.
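The second effect, non-linearity amplifying expected outcomes, is Jensen's inequality at work: for a convex response, the mean of the outputs exceeds the output at the mean input. A minimal sketch, with an assumed quadratic damage function and an assumed right-skewed warming distribution (neither taken from DICE or the survey):

```python
import math
import random

random.seed(0)

# Illustrative convex damage function, loosely in the spirit of
# DICE's non-linear damages; the 1.5%-at-3-degrees scaling is assumed.
def damage_fraction(warming_c):
    """Fraction of gross world product lost at a given warming (deg C)."""
    return 0.015 * (warming_c / 3.0) ** 2

# Uncertain warming: right-skewed, median near 2.5 C (illustrative).
warmings = [random.lognormvariate(math.log(2.5), 0.4) for _ in range(50_000)]

# Damage evaluated at the mean input understates the mean of the
# damages (Jensen's inequality for a convex function) - one reason
# probabilistic runs yield higher optimal taxes than a best-guess run.
mean_warming = sum(warmings) / len(warmings)
damage_of_mean = damage_fraction(mean_warming)
mean_damage = sum(map(damage_fraction, warmings)) / len(warmings)
print(mean_damage > damage_of_mean)  # → True
```

Averaging the uncertain input first and then running the model once, as a single best-guess run effectively does, systematically understates expected damages whenever the damage function is convex.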
In particular, strategic hedging policies to deal with the 95th percentile, high damage outcome may well be chosen by policy makers, just as individuals or firms purchase insurance against low probability catastrophic outcomes. Regardless of how risk-prone or risk-averse the individual decision maker is, the characterization and range of uncertainties of the information provided by decision analysis tools must be made explicit and transparent to policy-makers (Moss & Schneider, 1997). This range of uncertainty should also include estimates of the subjective probability of varying climatic effects (e.g. Morgan & Keith, 1995; Nordhaus, 1994a), damage estimates, discount rates (e.g. Cline, 1992; Chapman et al., 1995; Azar & Sterner, 1996), carbon cycle effects on CO2 uptake (e.g. Kaufmann, 1997; Schultz & Kasting, 1997), and the sensitivity of the economy to structural changes such as induced technological change (e.g. Grubb et al., 1994; Repetto & Austin, 1997; Goulder & Schneider, 1999; Azar & Dowlatabadi,
1999). The end result of any set of integrated assessment modeling exercises will be, as always, the subjective choice of a decision-maker (Schneider, 1997b), but a more comprehensive analysis, with uncertainties in all major components explicitly categorized and displayed, will hopefully lead to a better-informed choice. We will not easily resolve the paradigm gulf between the optimistic and pessimistic views of these specialists with different training, traditions and world views; one thing that is clear from the Nordhaus studies is that the vast bulk of knowledgeable experts from a variety of fields admit to a wide range of plausible outcomes - including both mild benefits and catastrophic losses - in the area of global environmental change. This is a condition ripe for misinterpretation by those who are unfamiliar with the wide range of probabilities most scientists attach to aspects of global climate change. That wide range of probabilities follows from a recognition of the many uncertainties in data and assumptions still inherent in earth systems models, climatic impact models, economic models, and their syntheses via integrated assessment models (Schneider, 1997a, b). In a highly interdisciplinary enterprise like the costing of climatic impacts or mitigation policies for use as inputs to integrated assessment of global change problems, it is necessary that a wide range of possible outcomes be considered, along with a representative sample of the subjective probabilities that knowledgeable assessment groups like the IPCC believe accompany each of those possible outcomes.
ASSUMPTIONS ABOUT STRUCTURAL CHANGE IN IAMs

Current analyses of climate policy assume smoothly varying (usually monotonic) scenarios when estimating the costs of climate change. However, both paleoclimatic data and modeling simulations suggest that even smooth forcings can induce rapid, non-linear responses, such as deglaciation of ice sheets, more intense hurricanes, El Niño events, and, perhaps most dramatically, the collapse of the thermohaline circulation. Impact analyses often fail to consider potential non-linear events that constitute low probability/high consequence climate change scenarios. Since the adaptive capacity of human societies often depends on the ability to anticipate and respond to such non-linear events, it is critical to include such responses in integrated assessment models of climate change. Furthermore, most IAMs model an equilibrium
response to a one-time doubling of CO2 atmospheric concentrations, which ignores the transient effects of climate change (i.e. the effects of a changing rather than a changed climate).
Imaginable Conditions for Surprise

Strictly speaking, a surprise is an unanticipated outcome. However, in the IPCC Second Assessment Report (SAR), "surprises" are defined as rapid, non-linear responses of the climatic system to anthropogenic forcing, and analogies to paleoclimatic abrupt events were cited to demonstrate the plausibility of such a possibility. Moreover, specific examples of such non-linear behaviors that the authors could already envision as plausible are given (e.g. reorganization of the thermohaline circulation, rapid deglaciation, fast changes to the carbon cycle). In particular: "Future unexpected, large and rapid climate system changes (as have occurred in the past) are, by their nature, difficult to predict. This implies that future climate changes may also involve 'surprises.' In particular these arise from the non-linear nature of the climate system; when rapidly forced, non-linear systems are especially subject to unexpected behavior. Progress can be made by investigating non-linear processes and sub-components of the climatic system. Examples of such non-linear behavior include rapid circulation changes in the North Atlantic and feedbacks associated with terrestrial ecosystem changes" (IPCC, 1996a: 7). Strictly speaking, it would be better to define these as imaginable abrupt events. Note that the Working Group I (Climate Effects) SAR concludes its Summary for Policymakers (see the above quote) with the statement that non-linear systems, when rapidly forced, are particularly subject to unexpected behavior. This is an example of unknown outcomes (i.e. true surprises) but imaginable conditions for surprise. Of course, the system would be less "rapidly forced" if decision makers chose as a matter of policy to slow down the rate at which human activities modify the atmosphere. Whether the risks of such imaginable surprises justify investments in abatement activities is the question that IAMs are designed to inform (IPCC, 1996c).
However, to deal with such questions the policy community needs to understand both the potential for surprises and how difficult it is for integrated assessment models (IAMs) to credibly evaluate the probabilities of currently imaginable "surprises," let alone those not currently envisioned (e.g. see Schneider et al., 1998 for discussions and a review of the literature). Few modeling groups explicitly include "surprise" scenarios, although some models (e.g. Dowlatabadi & Kandlikar, 1995; Roughgarden & Schneider,
1999) do formally treat uncertainties via probability distributions whose outlier values are, in some sense, "imaginable surprises" (e.g. see Schneider et al., 1998). Even the most comprehensive models of such a very complicated coupled system are likely to produce unanticipated results when forced to change very rapidly by external disturbances like CO2 and aerosols. Indeed, some of the transient coupled atmosphere-ocean models run out for hundreds of years exhibit dramatic changes to the basic climate state, such as radical change in global ocean currents (e.g. see Manabe & Stouffer, 1993; Haywood, 1997; or Rahmstorf, 1999). Stocker and Schmittner (1997) have argued that rapid alterations to oceanic currents could be induced by faster forcing rates. In this connection, preliminary analyses (Mastrandrea & Schneider, 2001), in which a climate model capable of simulating the collapse of the thermohaline circulation in the North Atlantic as a function of the rate and amount of CO2 concentration increase is coupled to the DICE model, show that abrupt climate changes can occur as an emergent property of the coupled climate-economy system in the 22nd century. In this study, agents with infinite foresight would adjust their current optimal CO2 emissions control rates based on the potential severity of these far-off abrupt changes triggered by nearer-term emissions policies. Of course, very high discount rates cause little additional near-term policy response to a 22nd century thermohaline collapse relative to lower discount rates, but the choice of discount rate is not merely a technical option; it is a normative judgment about the value of present versus future interests. Thompson and Schneider (1982) used very simplified transient models to investigate the question of whether the time evolving patterns of climate change might depend on the rate at which CO2 concentrations increased.
For slowly increasing CO2 buildup scenarios, the model predicted the standard outcome: the temperature at the poles warmed more than the tropics. Any change in the equator-to-pole temperature difference helps to create altered regional climates, since temperature differences influence large-scale atmospheric wind and ocean current patterns. However, for very rapid increases in CO2 concentrations, Thompson and Schneider found that a reversal of the equator-to-pole difference occurred in the Southern Hemisphere over many decades during and after the rapid buildup of CO2. If sustained over time, this would imply unexpected climatic conditions during the century or so over which the climate adjusts toward its new equilibrium state. In other words, the faster and harder we push on nature, the greater the chances for surprises - some of which are likely to be damaging.
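The discount-rate sensitivity noted above can be made concrete with a simple present-value calculation (the $10 trillion damage figure and the 100-year horizon are hypothetical, chosen only to illustrate the effect):

```python
# Present value of a far-off abrupt-change damage under different
# discount rates. The $10 trillion damage ~100 years out is assumed.
damage = 10e12   # dollars, incurred a century from now (hypothetical)
years = 100

for rate in (0.01, 0.03, 0.05):
    present_value = damage / (1 + rate) ** years
    print(f"rate {rate:.0%}: PV = ${present_value / 1e9:,.0f} billion")
```

At a 1% discount rate the far-off catastrophe is worth trillions today and drives near-term abatement; at 5% its present value shrinks to under $100 billion, which is why high discount rates produce little additional near-term policy response.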
Transient Effects of Climate Change

Until recently, climate modeling groups did not have access to sufficient computing power to routinely calculate time evolving runs of climatic change given several alternative future histories of greenhouse gas and aerosol concentrations. That is, they did not perform so-called "transient climate change scenarios." Rather, the models typically estimated how the Earth's climate would eventually look (i.e. after a long-term transition to equilibrium) after CO2 was artificially doubled and held fixed indefinitely, rather than increased incrementally over time as it has in reality or in more realistic transient model scenarios. Transient model simulations exhibit less immediate warming than equilibrium simulations because of the slowly building radiative forcings combined with the high heat-holding capacity of the thermally massive oceans. In other words, some of the warming is not realized immediately (e.g. Hansen et al., 1984). However, that unrealized warming eventually expresses itself many decades later. This thermal delay, which can lull us into underestimating the long-term amount of climate change, is now being accounted for by coupling models of the atmosphere to models of the oceans, ice, soils, and biosphere (so-called earth system models, or ESMs, which are essential components of any IAM effort). Early generations of such transient calculations with ESMs give much better agreement with observed climate changes on Earth. When the transient models at the Hadley Centre in the United Kingdom (Mitchell et al., 1995) and the Max Planck Institute in Hamburg, Germany (Hasselmann et al., 1995) were also driven by both greenhouse gases and sulfate aerosols, these time evolving simulations yielded much more realistic fingerprints of human effects on climate (e.g. Santer et al., 1996). More such computer simulations are needed (e.g.
Haywood et al., 1997) to provide greater confidence in the models, but many more scientists are now beginning to express growing confidence in current projections (IPCC, 1996a, Chap. 8; IPCC, 2001a). This discussion of transients and surprises can be connected to the earlier discussion of the third assumption inherent in "ergodic economics": invariance of higher moments. Clearly, rapid transients or non-linear events are likely to alter higher statistical moments of the climate (e.g. week-to-week variability, seasonal amplitudes, day-to-night temperature differences, etc.). Such rapid or unexpected events would likely contradict the assumed "invariance of higher moments". Thus, resultant environmental or societal impacts are likely to be quite different from those that would occur with smoother, slower changes. The long-term impact of climate change may not be predictable solely from a single steady-state outcome, but may well depend on the characteristics of the transient path. In other words, the outcome may be path dependent. Any exercise which
neglects surprises or assumes transitivity of the earth system - i.e. a path independent response - is indeed questionable, and should carry a clear warning to users about the fundamental assumptions implicit in any technique dependent on steady-state results.
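The thermal delay behind this path dependence can be sketched with a zero-dimensional energy balance model, C dT/dt = F - lambda*T (all parameter values below are illustrative assumptions; the single effective heat capacity crudely mimics deep-ocean heat uptake):

```python
# Minimal zero-dimensional energy-balance sketch of the "thermal
# delay": C dT/dt = F - lambda*T. Parameter values are illustrative.
heat_capacity = 50.0   # effective heat capacity, W yr m^-2 K^-1 (assumed)
feedback = 1.25        # climate feedback parameter, W m^-2 K^-1 (assumed)
forcing = 3.7          # approx. radiative forcing for doubled CO2, W m^-2

equilibrium_warming = forcing / feedback   # warming once fully adjusted

# Integrate forward with a simple Euler step after an abrupt doubling.
dt, temp = 0.1, 0.0
for _ in range(int(50 / dt)):   # 50 years of simulated time
    temp += dt * (forcing - feedback * temp) / heat_capacity

# After 50 years the transient response still falls well short of
# equilibrium: the "unrealized warming" the text describes.
print(f"transient: {temp:.2f} K of {equilibrium_warming:.2f} K equilibrium")
```

With these assumed numbers only about 70% of the equilibrium warming is realized after half a century, which is the sense in which a snapshot of the transient path understates the eventual change.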
MODEL ASSUMPTIONS AFFECT CONCLUSIONS ABOUT THE ABILITY TO ADAPT TO CLIMATE CHANGE

IAMs based on assumptions of smooth, gradual change probably overestimate the adaptive capacity of human society. Adaptation is nonetheless a powerful response to climate change: a complex process involving adjustment at many different levels of society, from the individual to the national and international levels. Many early-generation climatic impact assessments (e.g. Schneider & Chen, 1980) did not explicitly attempt to account for adaptive responses, and thus have been criticized for neglecting adaptation potential (e.g. Yohe, 1990). While second-generation IAMs do include adaptive responses, assessments typically assume smoothly varying climatic change trends. However, adaptation will occur as a response to climatic change trends embedded in a very noisy background of natural climatic variability (see West et al., 2001, for adaptive responses of coastal residents, and Schneider, 1996, and Schneider, Easterling & Mearns, 2000, for adaptive responses of farmers). Variability can, of course, mask slow trends and thus delay adaptive responses (but see Kolstad et al., 1999). It may also prompt false starts leading to maladaptation. In addition, unforeseen non-linear events can lead to unwarranted complacency.

Adaptation as a Policy Response

Schneider and Thompson (1985), in an intercomparison of climate change, ozone depletion and acid rain problems, differentiated passive adaptation (e.g. buying more water rights to offset impacts of a drying climate) from "anticipatory" adaptation. They suggest investing, as a hedging strategy, in a vigorous research and development program for low-carbon energy systems in anticipation of the possibility of needing to reduce CO2 emissions in the decades ahead. The idea was that it would be cheaper to switch to systems which were better developed as a result of such anticipatory investments made in advance. Such active (i.e.
anticipatory) forms of adaptation (e.g. building a dam a few meters higher in anticipation of an altered future climate) have been prominent in most subsequent formal assessments of anthropogenic climate change (e.g. NAS, 1991). Nearly all modern integrated assessments explicitly
STEPHEN H. SCHNEIDER AND KRISTIN KUNTZ-DURISETI
(e.g. Rosenberg, 1993; Rosenzweig et al., 1994; Reilly et al., 1996), or implicitly (e.g. Mendelsohn et al., 1996, 2000) attempt to incorporate (mostly passive) adaptation. While these studies should be applauded for attempting to recognize and quantitatively evaluate the implications of adaptive responses on the impact costs of climate change scenarios, serious problems with data, theory and method remain. It will be argued that a wide range of assumptions should be part of any attempted quantification of adaptation (e.g. as recommended by Carter et al., 1994). Moreover, as repeatedly argued earlier, both costs and benefits of climate change scenarios treated by any integrated assessment activity should be presented in the form of statistical distributions based on a wide range of subjective probability estimates of each step in the assessment process (e.g. as advocated by Yohe, 1991, Morgan & Dowlatabadi, 1996, or Schneider, 1997a).
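The recommendation to present costs and benefits as statistical distributions, rather than point estimates, can be sketched with a simple Monte Carlo exercise. The damage function, parameter ranges and sampling scheme below are all invented for illustration and do not come from any of the assessments cited:

```python
# Hedged sketch (our construction, not a model from the chapter): propagate
# subjective probability estimates for uncertain inputs through a toy damage
# function and report the resulting cost distribution, not a single number.
import random

random.seed(0)

def damage_cost(sensitivity, damage_coeff):
    """Toy damage function: cost (% of GDP) grows with warming squared."""
    warming = sensitivity          # equilibrium warming for 2xCO2, deg C
    return damage_coeff * warming ** 2

samples = []
for _ in range(10_000):
    # Illustrative subjective distributions for the uncertain inputs.
    sensitivity = random.uniform(1.5, 4.5)     # warming range, deg C
    damage_coeff = random.uniform(0.1, 0.5)    # % GDP per (deg C)^2
    samples.append(damage_cost(sensitivity, damage_coeff))

samples.sort()
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
print(f"median cost: {median:.1f}% GDP, 95th percentile: {p95:.1f}% GDP")
```

The point is the shape of the output, not the numbers: a decision maker sees the long upper tail that a single "best guess" hides.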
Natural Variability Masks Trends, Delays Adaptation

One of the major differences in estimates of climatic impacts across different studies is how the impact assessment model treats the adaptation of the sector under study (e.g. coastline retreat, agriculture, forestry, etc.). For example, it has often been assumed that agriculture is the economic market sector most vulnerable to climate change. For decades agronomists have calculated potential changes to crop yields from various climate change scenarios, suggesting some regions now too hot would sustain heavy losses from warming whereas others, now too cold, could gain (e.g. see references in Rosenzweig et al., 1994 or Smith & Tirpak, 1988). But Norman Rosenberg (e.g. Rosenberg & Scott, 1994) has long argued that such agricultural impact studies implicitly invoke the "dumb farmer assumption." That is, they neglect the fact that farmers do adapt to changing market, technology and climatic conditions. Agricultural economists like John Reilly (e.g. Reilly et al., 1996) argue that such adaptations will dramatically reduce the climate impact costs to market sectors like farming, transportation, coastal protection or energy use. Ecologists and social scientists, however, often dispute this optimism, since it neglects such real-world problems as people's resistance to trying unfamiliar practices, problems with new technologies, unexpected pest outbreaks (e.g. Ehrlich et al., 1995), or the high degree of natural variability of weather. The latter will likely mask the slowly evolving human-induced climatic signal and discourage farmers from risking anticipatory adaptation strategies based on climate model projections. Clairvoyant adaptation is seriously challenged by the very noisy nature of the climatic system. It is doubtful that those in agriculture or situated along the coast will invest heavily in order to adapt their practices so as to preempt
before-the-fact climate model projections, rather than react to actual events. We can only speculate on whether or not agricultural support institutions, the research establishment particularly, will be influenced by such projections. The high natural variability of climate likely will mask any slowly evolving anthropogenically induced trends - real or forecasted. Therefore, adaptations to slowly evolving trends embedded in a noisy background of inherent variability are likely to be delayed by decades behind changes in the trends (e.g. Kaiser et al., 1993; Schneider, 1996; Morgan & Dowlatabadi, 1996; Kolstad et al., 1999). Moreover, were agents to mistake background variability for trend or vice versa, the possibility arises of adaptation following the wrong set of climatic cues, setting up a major system malfunction. In particular, agents might be more influenced by regional anomalies of the recent past in projecting future trends. They may be unaware of the likelihood that very recent anomalous experience in one region may well be largely uncorrelated with slowly building long-term trends at a global scale, or may be part of a transient response that will reverse later on. In addition, unwarranted complacency may result from the inability to foresee non-linear events. The case of coastal flooding is a good example of how incorporating climatic variability can significantly reduce the damage-reduction potential that adaptive activities would otherwise have offered had high levels of natural variability not obscured climatic change trends. West and Dowlatabadi (2000) devise a set of decision rules by which coastal dwellers would choose to rebuild, remain in place or abandon coastal structures, based on the random occurrence of storm surges superimposed on a slowly rising sea level trend.
The "noise" of such random storm surge events substantially alters the adaptive behavior of coastal dwellers relative to that of clairvoyant agents whose decision rules do not include the masking effects of climatic variability. (Of course, other masking effects from social uncertainties could arise as well: if new sets of decision rules were imposed by coastal zone planners in the form of set-back requirements, or by insurance regulators insisting on new actuarial accounting schemes for premium rates, etc.)

Foresight and Adaptation - the "Realistic Farmer"
For a number of years there has been debate between some agricultural economists (who assert that modern farmers and their supporting institutions could overcome virtually any plausible climatic change scenario) and other analysts who counter that such an efficient response would require farmers to be plugged into the electronic superhighway in real time, to be aware of the probability distributions of integrated assessments and to be financially and intellectually capable of instant response to a bewildering array of changing
pest, crop, weather, technology, policy and long- and short-term climatic conditions (e.g. see Schneider, 1997a, adaptation section). The adaptation optimists had simply replaced the unrealistic "dumb farmer" assumption of the past with the equally unrealistic "genius farmer." Yohe (1992) contrasts a "dumb" farmer with a "smart" farmer, noting that it is as inappropriate to analyze the impacts of climate change assuming all "dumb" (i.e. non-adaptive) farmers as it is "to fill a model of the future with 'clairvoyant farmers', who are too smart." Rothman and Robinson (1997, p. 30), in a conceptual synthesis of IA, also contrast the "dumb farmer" to a "clairvoyant farmer", and, borrowing from Smit et al. (1996), suggest that "the next step in the evolution of IAs is to assume a 'realistic farmer'." Real farmers, we agree, are likely to behave somewhere in between. Toward the positive side of the spectrum, in developed countries, land grant universities with their Research and Extension centers continually monitor environmental trends and develop adaptive strategies for farmers, thus providing a passive early warning system. Toward the negative side in developing countries, problems with agricultural pests, extreme weather events and lack of capital to invest in adaptive strategies and infrastructure will be a serious impediment to reducing climatic impacts on agriculture for a long time (e.g. Kates et al., 1985), even for a "genius farmer" or one possessed with clairvoyance. Schneider, Easterling and Mearns (2000) show for the case of agricultural agents how natural variability (which masks slowly evolving climatic trends) could affect farmers' capacity to adapt to the advent or prospect of slowly evolving climatic change. 
Using the Erosion Productivity Impact Calculator (EPIC) crop model, they consider the implications for different adaptation assumptions: no adaptation (the "dumb farmer"); perfect adaptation (the "genius farmer" who foresees future climate change trends perfectly and makes adjustments to maximize yields and revenues); and a more lagged adaptation behavior (a "realistic farmer" who, because of the masking effects of climatic noise, waits twenty years - an assumption to represent learning - before acting on the slowly emerging CO2-induced climatic signal). Lagged adaptation is identical to no adaptation in the first third of climate change. This follows from the assumption that the farmer in the lagged adaptation case has not yet detected a signal of climate change. Hence the first steps toward adaptation are not invoked until the second third of climate change. Adaptations tested in EPIC include adjustments to planting dates and to crop varietal traits regulating the length of time from germination to physiological maturity. Warmer temperatures allow planting to proceed earlier in the spring, thus avoiding risk of damaging mid-summer heat during the critical reproductive periods. The longer growing season (frost-free period) enables
farmers to plant varieties that take longer to reach maturity, which enables longer grain-filling periods and thus higher yields. These two adaptations are always simulated together in EPIC. Perfect adaptation - as we would of course expect - always improves the yield change relative to no adaptation (see Table 3). Lagged adaptation, which is intended to simulate crudely the masking effects of natural variability on farmers' perceptions of climatic trends, is also an improvement over no adaptation, but is inferior to perfect adaptation. However, if stochastic weather variations had been included instead of fixed, delayed adaptations, then at least some of the calculations would have led to maladaptations, including adaptations contrary to the emerging long-term climate change, when the noise-to-signal ratio in the weather was large. A more realistic set of adaptation rules could have farmers adapt to a scenario of a smooth climatic trend
Table 3. Percentage differences between corn yields simulated with baseline observed climate (1984-1993) and corn yields simulated with 1/3, 2/3, and 3/3 of 2×CO2 climate change for three levels of adaptation: (1) no adaptation ("dumb farmer"), (2) perfect adaptation ("clairvoyant farmer"), and (3) adaptation lagged 20 years behind climate changes ("realistic farmer"). From Schneider, Easterling, and Mearns, 2000.

A. Central Iowa

Climate Change     No Adaptation   Perfect Adaptation   Lagged Adaptation
1/3 of 2×CO2            -3                  2                  -3
2/3 of 2×CO2            -8                  2                   2
3/3 of 2×CO2           -17               -0.3                  -3
Mean of Thirds         -10                  1                  -1

B. South Central Minnesota

Climate Change     No Adaptation   Perfect Adaptation   Lagged Adaptation
1/3 of 2×CO2             8                 13                   8
2/3 of 2×CO2            12                 23                  17
3/3 of 2×CO2            10                 24                  22
Mean of Thirds          10                 20                  16
embedded in a realistic, stochastically-varying weather noise background in which the farmer-adapter places greater weights on the yields of the recent past years in choosing future cropping strategies (see, e.g. Yohe 1992, which is an early attempt at probabilistic analyses of adaptation decisions, though not in the context of climatic noise). Although the specific numbers in Table 3 should be viewed as model-dependent results, and thus, should not be taken literally, the relative differences for the alternative decision rules representing the various degrees of adaptation are likely to be more robust across different models, for different crops and for different locations. The "bottom-up" approach we have shown on Table 3 (in which we explicitly model farmer decisions to adapt their practices based on their perceptions of time-evolving climatic changes) suffers from the difficulty encountered by any process-based modeling technique: trying to aggregate all the complex factors that govern real decision makers into a few simple, explicit decision rules. An alternative approach to such bottom-up modeling would be to search for "top-down" relationships that implicitly aggregate the complexity of farmers' decisions into already measured behaviors (e.g. see Root & Schneider, 1995 for a discussion of scaling issues involved in cycling between top-down and bottom-up approaches).
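A minimal version of such a bottom-up decision-rule simulation can be sketched in a few lines. The warming trend, noise level, lag and quadratic loss function below are our own illustrative assumptions, not values from EPIC or from Schneider, Easterling and Mearns (2000):

```python
# Illustrative toy simulation (ours, inspired by the Table 3 comparison):
# cumulative yield losses under a noisy warming trend for a "dumb", a
# "clairvoyant", and a "lagged" (realistic) farmer. All numbers invented.
import random

random.seed(1)

YEARS = 60
TREND = 0.05   # warming per year (deg C), illustrative
NOISE = 0.6    # interannual variability amplitude (deg C), illustrative
LAG = 20       # years the realistic farmer waits before adapting

def yield_loss(climate, adapted_to):
    """Loss grows with the mismatch between the actual climate and the
    climate the farmer's practices are tuned to."""
    return (climate - adapted_to) ** 2

losses = {"dumb": 0.0, "clairvoyant": 0.0, "lagged": 0.0}
for year in range(YEARS):
    trend_climate = TREND * year
    actual = trend_climate + random.uniform(-NOISE, NOISE)
    losses["dumb"] += yield_loss(actual, 0.0)                  # never adapts
    losses["clairvoyant"] += yield_loss(actual, trend_climate)  # tracks trend
    # The lagged farmer adapts to the trend as it stood LAG years ago,
    # because the noisy record delays detection of the signal.
    lagged_target = TREND * max(0, year - LAG)
    losses["lagged"] += yield_loss(actual, lagged_target)

print({k: round(v, 1) for k, v in losses.items()})
```

Running this reproduces the qualitative ordering of Table 3: perfect adaptation beats lagged adaptation, which beats no adaptation; even the clairvoyant farmer still absorbs the irreducible losses from year-to-year noise.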
MARKET ASSUMPTIONS

The optimization paradigm prevalent in IAMs makes a number of simplifying assumptions, including:
• all markets are perfectly competitive (i.e. there are no market failures or distortions);
• markets adapt without transaction costs;
• property rights are well-defined and protected;
• prices reflect the true cost of goods, i.e. there are no externalities;
• markets are complete and comprehensive, i.e. there are no "non-market" goods;
• there is complete and perfect information about the market structure.

If one assumes that markets operate optimally, then there is no room for "no regrets" policies to mitigate climate change, since "top-down" optimization assumes that any and all "no regrets" options have already been pursued and adopted. However, "bottom-up" models demonstrate that many emission-reducing, currently available technologies have not been fully exploited, which suggests at least some abatement is possible at low cost. For example, a 1991 NAS study shows a supply curve with an intercept below zero, which implies
Integrated Assessment Models of Climate Change: Beyond a Doubling of C02
45
that markets were not using the most efficient technologies in the energy sector; this is not consistent with the assumption of optimal growth models that abatement costs are necessarily greater than zero. Critics of the bottom-up approach point out that transaction and administrative costs, which tend to increase the costs of implementing an emission abatement policy, are ignored in bottom-up models. The most likely conclusion is that the perfectly operating market system represented in the optimal growth models simply doesn't exist in the real world. To criticize the optimal growth models for making simplifications is unfair; simplifying assumptions are necessary to study and illuminate areas of interest. Our objection is that the assumptions may distort the conclusions and have tremendous implications for the debate on climate policy. The best way to minimize this concern is to run models with a range of assumptions - including pre-existing market failures - to determine the sensitivity of their policy conclusions to these various assumptions (e.g. see Schneider & Goulder, 1997).

Marginal Dollar

The "marginal dollar" represents the marginal cost of the foregone opportunity to invest a dollar in an alternative activity. At the optimal level of carbon emissions identified by IAMs, the marginal opportunity cost of abatement is equal to the marginal benefit; economic efficiency is ensured, provided the underlying assumptions are valid. In our context, this means that, given all the complexity of interconnected physical, biological and social systems, climate abatement may not be perceived as the best place to invest the next available dollar so as to bring the maximum social benefit. It is a great mistake to be trapped by the false logic of the mythical "marginal dollar"; it is not necessary that every penny of the next available dollar go exclusively to the highest-priority problem (i.e.
the lowest cost on a supply curve) with the highest social return while competing priorities (particularly problems with surprise potential and the possibility of irreversible damages) remain unaddressed until priority one is fully achieved. This is particularly relevant to countries with more pressing concerns, such as reducing poverty, increasing nutrition, raising literacy levels, lowering morbidity and mortality rates, increasing life expectancy, reducing local air and water pollution, increasing access to health care and providing employment opportunities. In the context of these concerns, addressing climate change is simply, and understandably, a low priority. However, climate change can exacerbate all of these problems. Thus, investments that both reduce the risks of climate change (or mitigate its effects) and promote economic development (e.g. transfer of efficient technologies) are crucial. The first step is to get that "marginal dollar" cashed into "small change", so that many
interlinked priority problems can all be at least partially addressed. Given the large state of uncertainty surrounding both the costs and benefits of many human and natural events, it seems most prudent to address many issues simultaneously and to constantly reassess which investments are working and which problems - including climate change - are growing more or less serious.
Technological Change

The costs to the global economy of carbon abatement policies depend dramatically on the rate of technological improvements in non-fossil-fuel-powered (so-called "non-conventional") energy supply systems and the rate of improvement of energy end-use efficiency (Tol, 2000). The Stanford Energy Modeling Forum (EMF-12) compared the costs to the economy of a given carbon tax for a standard case and one with "accelerated technologies" in which non-conventional energy systems and greater efficiency in general are available sooner and cheaper (Gaskins & Weyant, 1993). They concluded that tremendous reductions in the costs of carbon dioxide emissions abatement could be enjoyed if technological development were accelerated. The EMF-12 studies also showed that the emissions paths and costs of abatement depend directly on the rate of energy efficiency improvements. The authors assumed that the rate of energy efficiency improvement varied only with time and was thus exogenous to the economic systems simulated in the study (the so-called AEEI - the "autonomous energy efficiency improvement" parameter - typically around a 0.5-1% annual reduction in the amount of energy it takes to produce a unit of GDP). However, in the actual economy neither the cost of non-conventional energy supply systems nor the rate of energy efficiency improvement (EEI) is fully "autonomous". On the contrary, as Grubb et al. (1994) have argued, EEI is endogenous to the economic system. Treating technology change as independent of carbon policies in economic models is analogous to treating the "cloud feedback" problem (e.g.
IPCC, 1996a) in climate models by making changes in cloudiness vary exogenously with time rather than endogenously with, say, internally calculated humidity and atmospheric stability (e.g. as argued in Schneider, 1997a). Standard economic theory would suggest that both the price of non-conventional energy and the rate of EEI would adjust favorably as conventional energy prices increased in response to carbon abatement policies such as a carbon tax. Similarly, a climate policy such as a
subsidy to non-conventional energy research and development (R&D) would also accelerate EEI or decrease the long-term price of non-conventional energy below its projected baseline path.
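The compounding implied by the AEEI parameter is easy to check directly. The calculation below is just the arithmetic of a constant 0.5-1% annual decline in energy intensity, not output from any of the models discussed:

```python
# Quick arithmetic on the AEEI parameter: an "autonomous" 0.5-1% annual
# decline in energy use per unit of GDP compounds substantially over a
# century. Values here are pure compounding math, not model results.

def energy_intensity(aeei_rate, years, initial=1.0):
    """Energy per unit GDP after `years` of a constant AEEI decline."""
    return initial * (1.0 - aeei_rate) ** years

for rate in (0.005, 0.01):
    remaining = energy_intensity(rate, 100)
    print(f"AEEI = {rate:.1%}: after 100 years, energy/GDP falls to "
          f"{remaining:.2f} of its initial value")
```

At 0.5% per year, energy intensity falls to roughly 0.61 of its initial value after a century; at 1% per year, to roughly 0.37. The gap between those two trajectories is one reason the assumed AEEI value so strongly drives projected abatement costs.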
Induced Technological Change and Market Distortions

By allowing energy R&D to compete with other economic sectors in a highly aggregated general equilibrium model of the U.S. economy, Goulder and Schneider (1999) postulate that a noticeable carbon tax would likely dramatically redistribute energy R&D investments from conventional to non-conventional sectors, thereby producing induced technological change (ITC) that lowers long-term abatement costs. Unfortunately, most integrated assessment models (IAMs) to date do not include any endogenous ITC formulation (or if they do, it is included in a very ad hoc manner). Thus insights about the costs or timing of abatement policies derived from IAMs should be viewed as tentative. However, even simple treatments of ITC (e.g. Grubb et al., 1994; Goulder & Schneider, 1999; Dowlatabadi, 1998; Goulder & Mathai, 1999; Nakicenovic, 1996) can provide qualitative insights that can inform the policymaking process, provided the results of individual model runs are not taken literally, given the still ad hoc nature of the assumptions that underlie endogenous treatments of ITC in IAMs. Goulder and Schneider (1999) (hereafter GS) develop analytical and numerical general equilibrium models to investigate the implications of ITC for the costs of a specified carbon tax, with the costs of investments in R&D explicitly recognized. Each model characterizes technological change as a result of optimizing decisions to invest in various R&D sectors of the economy. The basic principles are demonstrated by the U.S. case, even though the specific quantitative results may not be generalized, especially to countries with structurally different economic and political systems. They demonstrate that there may be an opportunity cost from ITC.
Even if a carbon tax were to induce increased investment in non-carbon technologies (which, indeed, does happen in the GS simulations), this imposes an opportunity cost on the economy by crowding out investments in conventional energy systems R&D. The key variable in determining the opportunity cost is the fungibility of human resources. If all knowledge-generating labor is fully employed, then increased R&D in non-carbon technologies will necessarily come at the cost of reduced labor in conventional energy R&D. In other words, there would be a loss of productivity in conventional energy industries relative to the non-carbon-policy baseline case. This imposes a cost that is paid early in the simulation, while the benefits from lowered costs in non-conventional energy systems are enjoyed decades later. With conventional discounting, that means the early costs from
the crowding out are likely to have a greater effect on present-value calculations than the later benefits, which are heavily discounted because they occur many decades hence. A similar effect might be realized even when knowledge-generating labor is not fully employed, simply due to transition costs. For example, engineers cannot switch from one industry to another - say, from oil to solar power - without incurring a cost; in general, they require retraining. On the other hand, if there were a surplus of knowledge-generating workers available in the economy, then the opportunity costs of such transitions could be dramatically reduced. Similarly, if the carbon policy were announced sufficiently far in advance, industries could anticipate training workers to have the necessary skills in non-carbon energy systems. This would offset much of the opportunity cost that GS calculate with the assumptions of fully employed R&D workers and no advance notice of the carbon policy. The gross costs (i.e. the costs before accounting for environment-related benefits of abated CO2) of a specified carbon tax are higher with ITC than without ITC (Fig. 6a). This result, which appears to contradict earlier studies of ITC (e.g. Grubb et al., 1994), is due to the explicit inclusion of the opportunity costs of R&D. This comparison assumes no prior subsidies for R&D in any industry, no knowledge spillovers, and that all prior inefficiencies in R&D markets (in particular, subsidies) are absent. In general, these "efficiency" or "optimality" assumptions are not met in any real economy. Assuming an absence of prior R&D subsidies neglects historic inequities in which past subsidies to various energy systems have given them an "unfair" competitive boost. Investment in R&D may also be sub-optimal out of concern by some firms that some of their investment will not be appropriable and thus could "spill over" to competitors.
This R&D market failure suggests that private firms would likely under-invest in R&D activities relative to the social optimum for the efficiency of the economy as a whole. Under such conditions, R&D subsidies from the public sector to correct the spillover market failure would be economically efficient (see Schneider & Goulder, 1997). Finally, if there were, as noted above, adequate pools of R&D providers available to the non-conventional energy industries without causing a shift of knowledge producers from conventional energy industries, or if there were serious prior inefficiencies in R&D markets such that the marginal benefit of R&D is much higher in alternative energy sectors than in conventional, carbon-based sectors, then ITC can imply lower gross costs than would occur in its absence. As a note of comparison, Fig. 6a also shows the GDP path with ITC and climate policy when there is no opportunity cost of R&D - the upper curve on the figure. The assumption of no opportunity cost of R&D implies that there is a surplus of R&D resources in the economy that can be transferred without cost to the creation
[Figure 6 not reproduced: panel (a) plots the percentage change in GDP against years from the policy change for the cases "ITC at No Cost", "ITC", and "No ITC"; panel (b) plots GDP loss against cumulative abatement with and without ITC.]

Fig. 6. (a) Impacts of carbon tax on GDP, with and without ITC. (b) GDP loss as a function of abatement. From Goulder and Schneider, 1999.
of non-carbon-based technologies. In this scenario, ITC with a carbon tax positively affects GDP - ITC makes the carbon policy efficiency-improving. However, under the idealized assumptions of (1) perfectly functioning R&D markets and (2) a scarcity of knowledge-generating resources (e.g. all capable engineers already fully employed) at the time the carbon tax is imposed, the presence of ITC by itself appears unable to make carbon abatement a zero-cost option, and in the GS model can actually increase the gross costs to the economy of any specific, given carbon tax. Clearly, these idealized assumptions are very restrictive and fail to account for likely market failures. Taken together, however, the dashed and solid lines for ITC in Fig. 6a can be thought of as giving plausible bounds on the cost of a carbon tax under ITC. Under more realistic assumptions, the effects would likely fall somewhere in the middle, with ITC quite possibly offering net advantages over a carbon tax implemented without ITC. Moreover, GS use a 5% discount rate, which favors the immediate opportunity costs of the loss of productivity in conventional energy industries over the eventual gains in non-conventional industries from ITC. Thus, lower discount rates would alter the opportunity-cost effects relative to the ITC long-term gains. GS show that even though ITC with full-employment opportunity costs leads to higher gross costs, the presence of ITC implies reduced costs of achieving a given carbon abatement target (or greater abatement per unit carbon tax). This result supports previous studies on ITC (Grubb et al., 1994; Dowlatabadi, 1998). ITC raises the attractiveness of CO2 abatement policies by reducing the costs to the economy per unit of CO2 abated. One way to observe this is to compare the costs of achieving given reductions in emissions in the presence and absence of ITC. Figure 6b, which plots emissions reductions against the present value of GDP losses, illustrates this point.
At every point, the cost curve with ITC lies below the cost curve without ITC; the GDP loss associated with achieving any given level of abatement is lower with ITC. Thus, the carbon tax necessary to achieve a given level of abatement is lower with ITC than without. Even with an accounting for maximum opportunity costs, the presence of ITC implies larger net benefits from a given carbon tax. For example, the carbon tax induces more investment in R&D in non-conventional energy industries, which leads to more rapid discoveries which, in turn, lower the costs of future energy services generated by non-conventional energy systems. Thus, more abatement can be brought about per unit carbon tax with ITC than without ITC. Put another way, the benefits in the form of averted climate damages from augmented abatement could more than compensate for the higher gross costs of ITC for a given carbon tax.
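The logic of the Fig. 6b comparison can be put in schematic form. The linear marginal-cost curve and the ITC cost-reduction factor below are invented for illustration and are not the GS model:

```python
# Schematic sketch (our construction): with ITC, the marginal cost of
# abatement is lower, so a given abatement target is reached at a lower
# carbon tax. The cost curve and ITC factor are invented numbers.

def marginal_cost(abatement, itc_factor=1.0):
    """Toy rising marginal abatement cost ($/ton); ITC scales it down."""
    return itc_factor * 2.0 * abatement

def tax_needed(target_abatement, itc_factor=1.0):
    """Emitters abate until marginal cost equals the tax, so the tax
    required to induce a target equals the marginal cost at that target."""
    return marginal_cost(target_abatement, itc_factor)

target = 50.0  # arbitrary abatement units
print("tax without ITC:", tax_needed(target))
print("tax with ITC:   ", tax_needed(target, itc_factor=0.7))
```

With the illustrative 30% cost reduction from ITC, the same abatement target is induced by a tax 30% lower; equivalently, the same tax buys more abatement.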
Alternatively, a given abatement target could be reached at lower cost (e.g. a lower value of the carbon tax) because of ITC. As a general note of caution, policy-makers need to be aware of underlying and/or simplifying assumptions when interpreting IAM results with or without reduced-form treatments of ITC. More specifically, many caveats to the GS model are needed: (1) the questionable generality of the U.S.-economy-oriented GS model for developing-country economies; (2) the returns on investment in energy R&D in GS are based on data from a decade past, which might not be valid very far into the future; (3) the extent to which R&D knowledge generators (e.g. under-employed or not-yet-trained engineers) can be quickly made available to non-conventional energy sectors, so that the opportunity costs of a redeployment of technologists from conventional energy sectors would be lessened, is uncertain; (4) the degree and kinds of R&D market failures present can radically alter the cost estimates relative to the assumption of perfectly functioning R&D markets; (5) there is the possibility of multiple equilibria in which the quantity of energy provided may or may not be price sensitive during transitions to alternative equilibrium states; and (6) even the very paradigm of maximizing utility or consumption inherent in the general equilibrium cost/benefit optimizing approach can be challenged on technical and philosophical grounds (e.g. Repetto & Austin, 1997; Munasinghe, 2000). Nevertheless, the added insights this early type of ITC analysis brings to the integrated assessment of climate change policy options are instructive, provided users of these model results are aware of the many implicit technical and philosophical assumptions. Schneider and Goulder (1997) also address the question of how to induce technical change. Specifically, they consider whether a carbon tax or a subsidy to R&D is most cost-effective.
A general economic principle is that governments should apply the policy instrument most closely related to a particular "market failure." In this case, the central market failure is the climate damage associated with carbon emissions. A carbon tax is the most direct way to address this externality, altering the price of carbon-based fuels to account for the social cost of climate damage. If there were no other market failures, a subsidy to R&D would be efficiency reducing. However, if there is a second market failure in the market for R&D, then a research subsidy might well be justified. Because of the inherent difficulty of keeping knowledge acquired through R&D private, R&D can generate knowledge spillover benefits that are captured by "free riders." For this reason, as noted above, firms tend to under-invest in R&D even when the social benefits exceed the private benefits. A government research subsidy can counteract this effect and bring public plus private R&D closer to the socially optimal level of investment.
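The under-investment logic can be made concrete with a toy numerical sketch. All numbers and the square-root benefit function below are made-up illustrative assumptions, not Goulder and Schneider's model: a firm that captures only part of the return on its R&D stops investing well short of the socially optimal level.

```python
# Toy illustration of R&D under-investment with knowledge spillovers.
# Assumed benefit function B(x) = 10*sqrt(x), so marginal benefit B'(x) = 5/sqrt(x);
# these are stylized choices for illustration only.

def optimal_rd(marginal_cost, captured_share):
    """R&D level where the investor's captured marginal benefit equals
    marginal cost: captured_share * 5/sqrt(x) = marginal_cost."""
    return (captured_share * 5.0 / marginal_cost) ** 2

mc = 1.0
private = optimal_rd(mc, captured_share=0.6)  # firm keeps 60% of the benefit
social = optimal_rd(mc, captured_share=1.0)   # planner counts spillovers too
print(private, social)  # the firm invests less than the social optimum
```

With 40% of the benefit leaking to free riders, the firm invests 9 units where the social optimum is 25; a research subsidy narrows exactly this gap.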
STEPHEN H. SCHNEIDER AND KRISTIN KUNTZ-DURISETI
[Figure 7 appeared here as two panels plotting values from roughly +0.4 to -0.8 against years from policy change (0 to 60). Panel (a) compares cases with prior subsidies in EC, no prior subsidies, and prior subsidies in EA; panel (b) compares spillovers in EA only, no spillovers (central case), spillovers in EC and EA, and spillovers in EC only.]

Fig. 7. (a) Effects of R&D subsidies, without knowledge spillovers. (b) Effects of R&D subsidies, with knowledge spillovers in alternative energy sector. From Goulder and Schneider, 1999.
Thus, the answer to the policy question of whether it costs more to abate carbon through carbon taxes or through direct subsidies to carbon-reducing activities appears to hinge on the critical issue of market imperfections such as knowledge spillovers. In GS, subsidies to the alternative energy sector improve efficiency (i.e. have negative cost) only when that sector enjoys knowledge spillovers. If there are no knowledge spillovers (see Fig. 7a), the costs of achieving emissions reductions are lower without a subsidy on R&D; subsidies in fact raise the costs of achieving a targeted emission reduction. If there are knowledge spillovers in the non-carbon energy sector only (see Fig. 7b), a moderate subsidy to R&D can reduce the costs of achieving a given emissions target. These results indicate that induced technological change per se is not a rationale for subsidies to R&D in non-carbon technology. It is knowledge spillovers - the external benefit from R&D - that provide the rationale.

R&D and Knowledge Spillovers

These issues are explored further in Schneider and Goulder (1997) (hereafter SG), which takes into account incentives to invest in research and development, knowledge spillovers, and the functioning of R&D markets in order to estimate the costs of reducing cumulative CO2 emissions (see Table 4). Given the uncertainty about the model parameters, these results should not be taken literally; however, qualitative conclusions may be inferred from the general pattern.

Table 4. Costs of 15% reduction in CO2 emissions, 1995-2095. From Schneider and Goulder, 1997.

Model scenario                        Carbon tax   Targeted R&D    Carbon tax plus   Carbon tax plus
                                      alone        subsidy alone   targeted R&D      broad R&D
                                                                   subsidy of 10%    subsidy of 10%
No spillovers from R&D                0.94*        8.52            1.02              1.18
Spillovers from R&D only in the
  alternative energy sector           0.66         5.98            0.60*             0.78
Spillovers from R&D investment
  in all industries                   1.03         9.55            1.09              0.81*

Figures are percentage reductions in the present value of GDP. All simulations involve carbon tax rates that increase at a rate of 5% annually until the year 2075 and remain constant thereafter. The carbon tax profile is the lowest path of (rising) tax rates that leads to the 15% reduction in cumulative emissions relative to the baseline model. The most cost-effective policy for each model scenario is marked with an asterisk.

A research subsidy alone is unlikely to be the cheapest way to meet the target reduction in cumulative emissions, and indeed can be many times more costly than the other policies. When there are knowledge spillovers, however, a combination of a carbon tax and a research subsidy is optimal in order to correct both the climate damage externality and the under-investment in R&D. SG conclude that a carbon tax (or its equivalent in cap-and-trade policies, in which an increased price for carbon emerges) is essential for cost-effective reductions of CO2 emissions, and that this tax should be accompanied by a research subsidy only when there are knowledge spillover benefits from R&D (which is likely). Some have objected to carbon taxes, regardless of efficiency, as being regressive. Others have responded that revenue recycling could accompany a carbon tax and be used to further improve efficiency by offsetting a less efficient tax and/or a more regressive tax (see the debate on the so-called "double dividend": Jorgenson & Wilcoxen, 1995; Goulder, 1995, 1996; Hamond et al., 1997). Finally, given the wide-ranging assumptions inherent in the analytical tools used to estimate the mitigation costs of carbon policies, it is misleading (as argued in Schneider, 1997a, and Schneider, Kuntz-Duriseti & Azar, 2000) to present cost estimates as a single value or one curve. Rather, it is preferable that alternative estimates be given with explicitly stated structural assumptions or estimated probability distributions, with the method for determining the distributions clearly explained (see Moss, 2000). This discussion once again provides an example of the use of models for insights that can inform the policy-making debate, provided decision makers are aware of the many assumptions embedded in the modeling exercises (e.g. that R&D resources are scarce and shifts in R&D priorities have opportunity costs, or that rapid rates of climate change might increase the chances of surprises).
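The pattern in Table 4 can be checked mechanically. The sketch below transcribes the table's cost figures (the scenario and policy labels are shorthand of our own) and picks out the cheapest policy in each spillover scenario:

```python
# Table 4 cost figures (percent reduction in present-value GDP) for a 15%
# cut in cumulative CO2 emissions; labels are shorthand for the table's rows
# and columns.
COSTS = {
    "no spillovers": {
        "carbon tax alone": 0.94,
        "R&D subsidy alone": 8.52,
        "tax + targeted subsidy": 1.02,
        "tax + broad subsidy": 1.18,
    },
    "spillovers in alternative energy only": {
        "carbon tax alone": 0.66,
        "R&D subsidy alone": 5.98,
        "tax + targeted subsidy": 0.60,
        "tax + broad subsidy": 0.78,
    },
    "spillovers in all industries": {
        "carbon tax alone": 1.03,
        "R&D subsidy alone": 9.55,
        "tax + targeted subsidy": 1.09,
        "tax + broad subsidy": 0.81,
    },
}

for scenario, by_policy in COSTS.items():
    cheapest = min(by_policy, key=by_policy.get)
    # In every scenario, the subsidy-only column is by far the most costly.
    print(f"{scenario}: cheapest is {cheapest} ({by_policy[cheapest]}% of PV GDP)")
```

The minimum shifts from the tax alone (no spillovers) to tax-plus-targeted-subsidy (spillovers in alternative energy only) to tax-plus-broad-subsidy (spillovers everywhere), matching the conclusions drawn in the text.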
Decision makers also need to be aware of the limited context of many IAMs: economic efficiency optimization based on "best guess" climate damages for a market economy similar to that of the U.S., with little attention to non-market economies, equity considerations, or hedging strategies against low-probability, catastrophic outcomes. Certainly, correcting pre-existing market failures does not require a climate policy; but if a climate mitigation policy also corrects existing market failures, then these cost savings should be credited to it. In that case, the costs of the market failures would be lower with a climate policy than without one.
BEYOND A DOUBLING OF CO2

Current IAMs ignore the transient effects of increasing CO2 concentrations. Rather, most IAMs assume that atmospheric CO2 concentrations double all at once and remain at that level, with no further change in atmospheric concentrations. This scenario is clearly unrealistic and ignores two potentially critical factors in the accumulation of carbon in the atmosphere. The first is the feedback factor. When CO2 concentrations are in equilibrium, the rate of decay of CO2 offsets the rate of carbon emissions. With a one-time injection of CO2, the atmosphere can dissipate the excess CO2 over time. However, we are currently adding carbon at rates higher than CO2 is dissipated; thus, we are increasing the level of concentrations over time. When CO2 is continuously being pumped into the atmosphere, we may slow the capacity of the atmosphere/ocean/biosphere system to dissipate the excess CO2 over time. The second consideration is that rates of change in CO2 emissions and CO2 concentrations may affect the climate response. The climate response may be very different when CO2 concentrations change rapidly than when an equivalent amount of CO2 is added to the atmosphere over a longer period of time. Furthermore, and most importantly for this discussion, most IAMs do not consider carbon dioxide concentrations greater than a doubling. Schelling (1992) notes that CO2 doubling is a convenient benchmark: "Doubling, like a half-life in reverse, is a natural unit if it is within the range of practical interest, and it is. A doubling is expected sometime in the next century, so it is temporally relevant; and a doubling is estimated to make a substantial but not cataclysmic difference. If a fixation on a doubling seems to imply an upper limit on any expected increase, the implication is unfortunate: enough fossil fuel exists to support several doublings" (p. 2). Doubling is often referred to as a "safe upper limit," but it is by no means a given. What level constitutes a "safe" limit is a value judgment based on an assessment of the acceptability of a range of plausible impacts (see IPCC, 2001b).
Furthermore, most assessments assume that very large increases in CO2 beyond a doubling would be much more damaging than a doubling (e.g. Nordhaus, 1994a, b). If CO2 emissions remain above current levels through the end of the 21st century, which is typical of most emissions scenarios (e.g. Nakicenovic & Swart, 2000), then CO2 concentrations are likely to triple or quadruple before the end of the 22nd century. Although it is difficult to imagine social and technological systems in the 22nd century, bio-geophysical timescales are already well known. Thus, an increase in CO2 concentrations of several times over in the period beyond 2100 is a highly plausible result of presently projected emissions (as noted by Cline, 1992).
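The contrast drawn above between a one-time injection and continuous emissions can be sketched with a toy one-box carbon model. All parameter values here (uptake timescale, emissions rate, GtC-to-ppm conversion) are round-number assumptions for illustration, not any IAM's carbon cycle:

```python
# Toy one-box model of atmospheric CO2 (illustrative assumptions throughout).
PREINDUSTRIAL = 280.0     # ppm
UPTAKE_TIME = 120.0       # years; assumed e-folding time for removing excess CO2
PPM_PER_GTC = 1.0 / 2.12  # roughly 2.12 GtC of carbon per ppm of atmospheric CO2

def project(c0_ppm, emissions_gtc_per_yr, years):
    """Step concentration forward one year at a time: add emissions,
    then remove a fixed fraction of the excess over the preindustrial level."""
    c = c0_ppm
    for _ in range(years):
        c += emissions_gtc_per_yr * PPM_PER_GTC
        c -= (c - PREINDUSTRIAL) / UPTAKE_TIME
    return c

# A one-time doubling with zero further emissions decays back toward 280 ppm...
print(round(project(560.0, 0.0, 200)))
# ...while steady emissions near the present rate carry concentrations past a
# doubling (560 ppm) and keep them climbing toward triple preindustrial levels.
print(round(project(370.0, 8.0, 100)), round(project(370.0, 8.0, 200)))
```

Even this crude sketch reproduces the qualitative point: with continuous emissions the system never settles at a doubled concentration, which is the regime most IAMs do not examine.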
A Perspective on the Costs of Meeting the Climate Target

There is a widespread concern that CO2 control will impose catastrophic economic costs. In an article in the Economist, Nordhaus (1990) warns "that a vague premonition of some potential disaster is, however, insufficient grounds to plunge the world into depression". Nordhaus and other "top-down" modelers often find the costs of meeting stringent CO2 control targets to run into trillions of dollars. Manne and Richels (1997), for example, estimate the global present-value costs (using a 5% per year discount rate) of meeting a 450 ppm target to be as high as 4-14 trillion U.S. dollars. Other top-down modelers report similar cost estimates. In absolute terms, this certainly appears to be a considerable cost and thus may create the impression that we cannot afford to reduce CO2 emissions. However, viewed from another perspective a different picture emerges. This admittedly huge cost nevertheless has only a minor impact on the overall growth rates and income levels in the very models used to estimate the costs. In a survey of top-down studies, global per capita income by the year 2100 is assumed to be 5.4 times higher than at present if no carbon abatement occurs. If carbon emissions are kept at two thirds of the present level for the next century, per capita income would be 5.1 times higher (see Azar, 1996, for details of this review). Given the assumed growth rates, attainment of the higher income level would simply be delayed by a couple of years (see also Azar & Schneider, 2001; Schneider, 1993; Grubb et al., 1993; and Anderson & Bird, 1992, for similar observations). It is interesting to note that there is near consensus even among top-down modelers that this is the case. Note also that the full range of potential environmental benefits from reducing the emissions has not been included in these estimates (e.g. as Roughgarden & Schneider, 1999, showed, a wide distribution of damage costs produces a very wide distribution of "optimal" carbon taxes). In what sense will this information be useful for policy makers?
In order to answer this question, it is important to remember the context in which these modeling exercises are performed. The threat of climatic change is increasingly being recognized as one of the most important challenges for the next century. There is mounting pressure from scientists and many different stakeholder groups to take action to reduce emissions, but the pace of action is still fairly slow. Some politicians and representatives of certain business sectors continue to oppose measures to reduce CO2 emissions. Perhaps even more importantly, there is a genuine public concern that emissions reductions might reduce the material standard of living (in absolute terms), force people into unemployment, or, in the words of President Bush during the UNCED meeting in Rio de Janeiro in 1992, "threaten the American way of life". Thus, although actual numbers are uncertain, as we have repeatedly argued in this article, top-down models clearly do find that stringent CO2 constraints are compatible with a significantly increased material standard of living and do not threaten to plunge the world into depression. This way of presenting modeling results (i.e. showing that the relative paths of per capita GDP or consumption over the next century with and without carbon policies are almost identical) deserves more attention, since there is a widespread impression among policy makers and the general public that the opposite holds true. If most people realized that the bulk of global warming could be mitigated and the "cost" of this "insurance policy" were recognized to be only a delay of a few years to a decade in achieving some 400% increase in per capita economic growth, then perhaps climate policy making would be much less controversial.
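The "delay of a few years" framing can be reproduced with back-of-envelope arithmetic. The 5.4x and 5.1x factors are the survey figures cited above; the 100-year horizon and constant exponential growth are simplifying assumptions of this sketch:

```python
import math

# Survey figures cited in the text: per capita income multiplies 5.4x by 2100
# without abatement, 5.1x with stringent abatement. Assume constant
# exponential growth over a 100-year horizon (a simplifying assumption).
YEARS = 100
growth = 5.4 ** (1.0 / YEARS) - 1.0  # implied annual per capita growth rate
delay = math.log(5.4 / 5.1) / math.log(1.0 + growth)

print(f"implied annual per capita growth: {growth:.2%}")
print(f"extra years needed to reach 5.4x instead of 5.1x: {delay:.1f}")
```

Under these assumptions the implied growth rate is about 1.7% per year, and the abated economy reaches the unabated income level roughly three and a half years later, consistent with the "couple of years" characterization in the survey.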
ACKNOWLEDGMENTS

Kristin Kuntz-Duriseti is also associated with the Institute for International Studies and Biological Sciences at Stanford University. She gratefully acknowledges support from the Winslow Foundation for part of this effort.
NOTES

1. Modified after Schneider, 1997a.
2. For example, see Ehrlich, 1989, for undervaluing nature; Brown, 1997, for neglecting our fiduciary responsibility to nature or the future, which requires a "stewardship" paradigm as the operating principle; Jenkins, 1996, for equating economic efficiency with social good rather than recognizing that the "invisible hand" of the market system "disregards the moral and cultural problems raised by its concentration on individual self-interest and competitiveness and produces values which seem to over-reward greed, aggression and irresponsibility" (pp. 228-229); or Schneider et al., 2000, for neglecting other "numeraires" such as monetary loss, loss of life, quality of life (including coerced migration, conflict over resources, cultural diversity, loss of cultural heritage sites, etc.), species or biodiversity loss, and distribution/equity.
REFERENCES

Adams, R. M. (1999). On the search for the correct economic assessment of agricultural effects of climatic change. Climatic Change, 41(3/4), 363-370.
Alexander, S. E., Schneider, S. H., & Lagerquist, K. (1997). The interaction of climate and life. In: G. C. Daily (Ed.), Nature's Services: Societal Dependence on Natural Ecosystems (pp. 71-92). Washington, DC: Island Press.
Anderson, D., & Bird, C. D. (1992). Carbon accumulations and technical progress - a simulation of costs. Oxford Bulletin of Economics and Statistics, 54, 1-29.
Arrow, K. J., Cline, W., Mäler, K. G., Munasinghe, M., Squitieri, R., & Stiglitz, J. (1996). Intertemporal equity, discounting and economic efficiency. In: J. P. Bruce, H. Lee & E. F.
Haites (Eds), Climate Change 1995: Economic and Social Dimensions of Climate Change. Second Assessment of the Intergovernmental Panel on Climate Change (pp. 125-144). Cambridge, U.K.: Cambridge University Press.
Azar, C. (1996). Technological change and the long-run cost of reducing CO2 emissions. Working Paper, INSEAD, France.
Azar, C., & Dowlatabadi, H. (1999). A review of technical change in assessments of climate change policy. Annual Review of Energy and the Environment, 24, 513-544.
Azar, C., & Schneider, S. H. (2001). Are the economic costs of stabilizing the atmosphere prohibitive? Science (submitted).
Azar, C., & Sterner, T. (1996). Discounting and distributional considerations in the context of climate change. Ecological Economics, 19, 169-185.
Berk, R. A., & Schulman, D. (1995). Public perceptions of global warming. Climatic Change, 29, 1-33.
Brown, P. G. (1997). Stewardship of Climate. An Editorial Comment. Climatic Change, 37(2), 329-334.
Carter, T. R., Parry, M. L., Harasawa, H., & Nishioka, S. (1994). IPCC Technical Guidelines for Assessing Climate Change Impacts and Adaptations. London, U.K.: University College.
Chapman, D., Suri, V., & Hall, S. G. (1995). Rolling dice for the future of the planet. Contemporary Economic Policy, 13, 1-9.
Cline, W. (1992). The Economics of Global Warming. Washington, DC: Institute of International Economics.
Daily, G. C. (1997). Nature's Services: Societal Dependence on Natural Ecosystems. Washington, DC: Island Press.
Darwin, R. (1999). A farmer's view of the Ricardian approach to measuring agricultural effects of climate change. Climatic Change, 41(3/4), 371-411.
Dowlatabadi, H. (1998). Sensitivity of climate change mitigation estimates to assumptions about technical change. Energy Economics, 20, 473-493.
Dowlatabadi, H., & Kandlikar, M. (1995). Key Uncertainties in Climate Change Policy: Results from ICAM-2. The 6th Global Warming Conference, San Francisco, CA.
Dracup, J. A., & Kendall, D. R. (1990).
Floods and droughts. In: P. E. Waggoner (Ed.), Climate Change and U.S. Water Resources. New York: John Wiley.
Ehrlich, P. R. (1989). The Limits to Substitution: Meta-Resource Depletion and a New Economic-Ecological Paradigm. Ecological Economics, 1, 9-16.
Ehrlich, P. R., Ehrlich, A. H., & Daily, G. (1995). The Stork and the Plow. New York: Putnam.
Gaskins, D., & Weyant, J. (1993). EMF-12: Model comparisons of the costs of reducing CO2 emissions. American Economic Review, 83, 318-323.
Goulder, L. H. (1995). Effects of Carbon Taxes in an Economy with Prior Tax Distortions: An Inter-temporal General Equilibrium Analysis. Journal of Environmental Economics and Management, 29, 271-297.
Goulder, L. H. (1996). Environmental Taxation and the Double Dividend: A Reader's Guide. International Tax and Public Finance, 2(2), 155-182.
Goulder, L. H., & Kennedy, D. (1997). Valuing ecosystems: Philosophical bases and empirical methods. In: G. C. Daily (Ed.), Nature's Services: Societal Dependence on Natural Ecosystems (pp. 23-48). Washington, DC: Island Press.
Goulder, L. H., & Mathai, K. (1999). Optimal CO2 abatement in the presence of induced technological change. Journal of Environmental Economics and Management, 39, 1-38.
Goulder, L. H., & Schneider, S. H. (1999). Induced technological change and the attractiveness of CO2 emissions abatement policies. Resource and Energy Economics, 21, 211-253.
Grubb, M., Edmonds, J., ten Brink, P., & Morrison, M. (1993). The cost of limiting fossil-fuel CO2 emissions: A survey and an analysis. Annual Review of Energy and the Environment, 18, 397-478.
Grubb, M., Ha-Duong, M., & Chapuis, T. (1994). Optimizing climate change abatement responses: On inertia and induced technology development. In: N. Nakicenovic, W. D. Nordhaus, R. Richels & F. L. Toth (Eds), Integrative Assessment of Mitigation, Impacts, and Adaptation to Climate Change (pp. 513-534). Laxenburg, Austria: International Institute for Applied Systems Analysis.
Hamond, M. J., DeCanio, S. J., Duxbury, P., Sanstad, A. H., & Stinson, C. H. (1997). Tax Waste, Not Work: How Changing What We Tax Can Lead to a Stronger Economy and a Cleaner Environment. Redefining Progress.
Hanemann, W. M. (2000). Adaptation and its measurement. An editorial comment. Climatic Change, 45(3/4), 571-581.
Hansen, J., Lacis, A., Rind, D., Russell, G., Stone, P., Fung, I., Ruedy, R., & Lerner, J. (1984). Climate Sensitivity: Analysis of Feedback Mechanisms. In: J. Hansen & T. Takahashi (Eds), Climate Processes and Climate Sensitivity, Geophysical Monograph 29, Maurice Ewing (Vol. 5, pp. 130-163). Washington, DC: American Geophysical Union.
Hasselmann, K., Bengtsson, L., Cubasch, U., Hegerl, G. C., Rodhe, H., Roeckner, E., von Storch, H., & Voss, R. (1995). Detection of Anthropogenic Climate Change Using a Fingerprint Method. Max-Planck-Institut für Meteorologie Report No. 168.
Haywood, J. M., Stouffer, R. J., Wetherald, R. T., Manabe, S., & Ramaswamy, V. (1997). Transient response of a coupled model to estimated changes in greenhouse gas and sulfate concentrations. Geophysical Research Letters, 24(11), 1335-1338.
Henderson-Sellers, A. (1993). An Antipodean climate of uncertainty. Climatic Change, 25, 203-224.
Intergovernmental Panel on Climatic Change (IPCC) (1996a). Climate Change 1995.
The Science of Climate Change: Contribution of Working Group I to the Second Assessment Report of the Intergovernmental Panel on Climate Change. J. T. Houghton, L. G. Meira Filho, B. A. Callander, N. Harris, A. Kattenberg, & K. Maskell (Eds). Cambridge: Cambridge University Press.
Intergovernmental Panel on Climatic Change (IPCC) (1996b). Climate Change 1995. Impacts, Adaptations and Mitigation of Climate Change: Scientific-Technical Analyses. Contribution of Working Group II to the Second Assessment Report of the Intergovernmental Panel on Climate Change. R. T. Watson, M. C. Zinyowera, & R. H. Moss (Eds). Cambridge: Cambridge University Press.
Intergovernmental Panel on Climatic Change (IPCC) (1996c). Climate Change 1995. Economic and Social Dimensions of Climate Change. Contribution of Working Group III to the Second Assessment Report of the Intergovernmental Panel on Climate Change. J. P. Bruce, H. Lee, & E. F. Haites (Eds). Cambridge: Cambridge University Press.
Intergovernmental Panel on Climatic Change (IPCC) (2001a). Third Assessment Report of Working Group I: The Science of Climate Change. Cambridge: Cambridge University Press (in press).
Intergovernmental Panel on Climatic Change (IPCC) (2001b). Third Assessment Report of Working Group II: Impacts, Adaptation and Vulnerability. Cambridge: Cambridge University Press (in press).
Jasanoff, S., & Wynne, B. (1998). Science and Decisionmaking. In: S. Rayner & E. L. Malone (Eds), Human Choice and Climate Change, Vol. 1 (pp. 1-87). Ohio: Battelle Press.
Jenkins, T. N. (1996). Democratising the Global Economy by Ecologicalising Economics: The Example of Global Warming. Ecological Economics, 16, 227-238.
Jones, R. N. (2000). Managing uncertainty in climate change projections: Issues for impact assessment. An editorial comment. Climatic Change, 45(3/4), 403-419.
Jorgenson, D. W., & Wilcoxen, P. J. (1995). Reducing U.S. Carbon Emissions: An Econometric General Equilibrium Assessment. In: D. Gaskins & J. Weyant (Eds), Reducing Global Carbon Dioxide Emissions: Costs and Policy Options. Stanford University Press.
Kaiser, H. M., Riha, S., Wilks, D., Rossiter, D., & Sampath, R. (1993). A farm-level analysis of economic and agronomic impacts of gradual climate warming. American Journal of Agricultural Economics, 75, 387-398.
Karl, T. R., & Knight, R. W. (1998). Secular trends of precipitation amount, frequency, and intensity in the U.S.A. Bulletin of the American Meteorological Society, 79(2), 231-241.
Kates, R. W., Ausubel, J. H., & Berberian, M. (Eds) (1985). Climate Impact Assessment: Studies of the Interaction of Climate and Society. SCOPE 27. Chichester: Wiley.
Kaufmann, R. K. (1997). Assessing the DICE Model: Uncertainty Associated With the Emission and Retention of Greenhouse Gases. Climatic Change, 33, 139-143.
Kelly, D. L., & Kolstad, C. D. (1999). Integrated Assessment Models for Climate Change Control. In: H. Folmer & T. Tietenberg (Eds), International Yearbook of Environmental and Resource Economics 1999/2000: A Survey of Current Issues (pp. 171-197). Cheltenham, U.K.: Edward Elgar.
Knutson, T. R. (1998). Simulated increase of hurricane intensities in a CO2-warmed climate. Science, 279(5353), 1018-1020.
Kolstad, C. (1993). Looking vs. leaping: The timing of CO2 control in the face of uncertainty and learning. In: Y. Kaya et al. (Eds), Costs, Impacts, and Benefits of CO2 Mitigation. CP-93-2 (pp. 63-82). Laxenburg, Austria: International Institute for Applied Systems Analysis.
Kolstad, C. D., Kelly, D.
L., & Mitchell, G. (1999). Adjustment costs from environmental change induced by incomplete information and learning. Department of Economics, UCSB working paper.
Lempert, R. J., Schlesinger, M. E., & Bankes, S. C. (1996). When we don't know the costs or the benefits: Adaptive strategies for abating climate change. Climatic Change, 33(2), 235-274.
Liverman, D. M. (1987). Forecasting the Impact of Climate on Food Systems: Model Testing and Model Linkage. Climatic Change, 11(1/2), 267-285.
Lorenz, E. (1968). Climatic determinism in causes of climatic change. Meteorological Monographs, 8, 1-3.
Lorenz, E. (1970). Climatic change as a mathematical problem. Journal of Applied Meteorology, 9, 325-329.
Lorenz, E. (1975). Climate Predictability. In: The Physical Basis of Climate and Climate Modeling. Report of the GARP Study Conference, Stockholm, 29 July-10 August. GARP Publication Series No. 16.
Manabe, S., & Stouffer, R. J. (1993). Century-scale effects of increased atmospheric CO2 on the ocean-atmosphere system. Nature, 364, 215-218.
Manne, A. S., Mendelsohn, R., & Richels, R. G. (1995). MERGE: A model for evaluating regional and global effects of GHG reduction policies. Energy Policy, 23, 17-34.
Manne, A., & Richels, R. (1997). On stabilizing CO2 concentrations - cost-effective emission reduction strategies. Environmental Modeling and Assessment, 2, 251-265.
Mastrandrea, M., & Schneider, S. H. (2001). Integrated assessment of abrupt climatic changes. Science (submitted).
Mearns, L. O., Katz, R. W., & Schneider, S. H. (1984). Extreme high temperature events: Changes in their probabilities and changes in mean temperature. Journal of Climate and Applied Meteorology, 23, 1601-1613.
Mendelsohn, R., Morrison, W., Schlesinger, M., & Andronova, N. (2000). Country-specific market impacts of climate change. Climatic Change, 45(3/4), 553-569.
Mendelsohn, R., Nordhaus, W., & Shaw, D. (1996). Climate impacts on aggregate farm value: Accounting for adaptation. Agricultural and Forest Meteorology, 80, 55-66.
Mitchell, J. F. B., Johns, T. C., Gregory, J. M., & Tett, S. F. B. (1995). Transient Climate Response to Increasing Sulphate Aerosols and Greenhouse Gases. Nature, 376, 501-504.
Morgan, G., & Dowlatabadi, H. (1996). Learning from integrated assessment of climate change. Climatic Change, 34(3/4), 337-368.
Morgan, M. G., & Keith, D. W. (1995). Subjective judgments by climate experts. Environmental Science and Technology, 29, 468A-476A.
Moss, R. H., & Schneider, S. H. (1997). Characterizing and communicating scientific uncertainty: Building on the IPCC second assessment. In: S. J. Hassol & J. Katzenberger (Eds), Elements of Change (pp. 90-135). Aspen, CO: AGCI.
Moss, R. H., & Schneider, S. H. (2000). Uncertainties in the IPCC TAR: Recommendations to lead authors for more consistent assessment and reporting. In: R. Pachauri, T. Taniguchi & K. Tanaka (Eds), Guidance Papers on the Cross Cutting Issues of the Third Assessment Report of the IPCC (pp. 33-51). Intergovernmental Panel on Climatic Change, Geneva; available from the Global Industrial and Social Progress Research Institute: http://www.gispri.or.jp
Moss, R. H. (2000). How to deal with uncertainty for cost estimation. Pacific Asia Journal of Energy (in press).
Munasinghe, M. (2000). Development, equity and sustainability (DES) in the context of climate change. In: R. Pachauri, T. Taniguchi & K. Tanaka (Eds), Guidance Papers on the Cross Cutting Issues of the Third Assessment Report of the IPCC (pp.
69-110). Intergovernmental Panel on Climate Change, Geneva; available from the Global Industrial and Social Progress Research Institute: http://www.gispri.or.jp
Nakicenovic, N. (1996). Technological change and learning. In: N. Nakicenovic et al. (Eds), Climate Change: Integrating Science, Economics, and Policy. CP-96-1. Laxenburg, Austria: International Institute for Applied Systems Analysis.
Nakicenovic, N., & Swart, R. (2000). Special Report of the Intergovernmental Panel on Climatic Change (IPCC) on Emissions Scenarios (SRES). Cambridge, U.K.: Cambridge University Press. Summary for Policymakers available online at http://www.ipcc.ch/
National Academy of Sciences (1991). Policy Implications of Greenhouse Warming. Washington, DC: National Academy Press.
Nordhaus, W. D. (1990). Count before you leap. The Economist (July 7), 19-22.
Nordhaus, W. D. (1992). An optimal transition path for controlling greenhouse gases. Science, 258, 1315-1319.
Nordhaus, W. D. (1994a). Expert opinion on climatic change. American Scientist, 82, 45-52.
Nordhaus, W. D. (1994b). Managing the Global Commons: The Economics of Climate Change. Cambridge, MA: MIT Press.
Overpeck, J. T., Webb, R. S., & Webb, T., III (1992). Mapping eastern North American vegetation change over the past 18,000 years: No analogs and the future. Geology, 20, 1071-1074.
Parson, E. A. (1996). Three Dilemmas in the Integrated Assessment of Climate Change. An Editorial Comment. Climatic Change, 34(3/4), 315-326.
Peck, S. C., & Teisberg, T. J. (1992). CETA: A model of carbon emissions trajectory assessment. The Energy Journal, 13(1), 55-77.
Rahmstorf, S. (1999). Shifting seas in the greenhouse? Nature, 399, 523-524.
Ravetz, J. R. (1997). Integrated Environmental Assessment Forum: Developing Guidelines for "Good Practice". Working Paper ULYSSES, Darmstadt University of Technology.
Reilly, J., Baethgen, W., Chege, E. E., van de Geijn, S. C., Erda, L., Iglesias, A., Kenny, G., Patterson, D., Rogasik, J., Rötter, R., Rosenzweig, C., Sombroek, W., Westbrook, J., Bachelet, D., Brklacich, M., Dämmgen, U., Howden, M., Joyce, R. J. V., Lingren, E. D., Schimmelpfennig, D., Singh, U., Sirotenko, O., & Wheaton, E. (1996). Agriculture in a changing climate: Impacts and adaptation. In: R. T. Watson, M. C. Zinyowera & R. H. Moss (Eds), Intergovernmental Panel on Climatic Change (IPCC) 1996: Climate Change 1995. Impacts, Adaptations and Mitigation of Climate Change: Scientific-Technical Analyses (pp. 427-467). Contribution of Working Group II to the Second Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press.
Repetto, R., & Austin, D. (1997). The Costs of Climate Protection: A Guide for the Perplexed. Washington, DC: World Resources Institute.
Risbey, J., Kandlikar, M., & Patwardhan, A. (1996). Assessing Integrated Assessments. Climatic Change, 34(3/4), 369-395.
Root, T. L., & Schneider, S. H. (1993). Can large-scale climatic models be linked with multiscale ecological studies? Conservation Biology, 7(2), 256-270.
Root, T. L., & Schneider, S. H. (1995). Ecology and climate: Research strategies and implications. Science, 269, 331-341.
Rosenberg, N. J. (Ed.) (1993). Towards an integrated impact assessment of climate change: The MINK study. Climatic Change (Special Issues), 24(1/2), 1-173.
Rosenberg, N. J., & Scott, M. J. (1994). Implications of policies to prevent climate change for future food security. Global Environmental Change, 4(1), 49-62.
Rosenberg, N. J. (1992). Adaptation of agriculture to climate change. Climatic Change, 21, 385-405.
Rosenzweig, C.
M., Parry, M. L., & Fischer, G. (1994). Potential impact of climate change on world food supply. Nature, 367, 133-138.
Rothman, D. S., & Robinson, J. B. (1997). Growing Pains: A Conceptual Framework for Considering Integrated Assessments. Environmental Monitoring and Assessment, 46, 23-43.
Rotmans, J., & van Asselt, M. (1996). Integrated assessment: a growing child on its way to maturity - an editorial. Climatic Change, 34(3/4), 327-336.
Roughgarden, T., & Schneider, S. H. (1999). Climate change policy: quantifying uncertainties for damages and optimal carbon taxes. Energy Policy, 27(7), 415-429.
Santer, B. D., Taylor, K. E., Wigley, T. M. L., Johns, T. C., Jones, P. D., Karoly, D. J., Mitchell, J. F. B., Oort, A. H., Penner, J. E., Ramaswamy, V., Schwarzkopf, M. D., Stouffer, R. J., & Tett, S. (1996). A search for human influences on the thermal structure of the atmosphere. Nature, 382, 39-46.
Schelling, T. C. (1992). Some economics of global warming. The American Economic Review, 82(1), 1-14.
Schneider, S. H. (1983). CO2, climate and society: A brief overview. In: R. Chen, E. Boulding & S. H. Schneider (Eds), Social Science Research and Climate Change: An Interdisciplinary Appraisal (pp. 9-15). Boston: D. Reidel.
Schneider, S. H. (1993). Pondering Greenhouse Policy. Science, 259, 1381.
Integrated Assessment Models of Climate Change: Beyond a Doubling of C02
63
Schneider, S. H. (1996). The future of climate: Potential for interaction and surprises. In: T. E. Downing (Ed.) Climate Change and WorldFood Security. (pp. 77-113). Heidelberg, NATO ASI Series 137: Springer-Verlag. Schneider, S. H. (1997a). Integrated assessment modeling of global climate change: transparent rational tool for policy making or opaque screen hiding value-laden assumptions? Environmental Modeling and Assessment, 2(4), 229-248. Schneider, S. H. (1997b). Laboratory Earth: The Planetary Gamble We Can't Afford to Lose. New York: Basic Books. Schneider, S. H. (2000). Accounting for induced technological change and pre-existing market inefficiencies: Commentary on costing methods paper by Richard Tol. Schneider, S. H., & C h e n , R. S. (1980). Carbon Dioxide Warming and Coastline Flooding: Physical Factors and Climatic Impact. In: J. M. Hollander, M. K. Simmons, & D. O. Wood (Eds), Ann. Rev. Energy, (pp. 107-140). Schneider, S. H., Easterling, W. E., & Mearns, L. O. (2000). Adaptation: Sensitivity to Natural Variability, Agent Assumptions and Dynamic Climate Changes. Climatic Change, 45(1), 203-221. Schneider, S. H., & Goulder, L. H. (1997). Achieving low-cost emissions targets. Nature, 389, 13-14. Schneider, S. H., Kuntz-Duriseti, K., & Azar, C. (2000). Costing Nonlinearities, Surprises and Irreversible Events, Pacific Asia Journal of Energy (in press). Schneider, S. H., & Thompson, S. L. (1985). Future changes in the global atmosphere. In: R. Repetto (Ed.), The Global Possible (pp. 397-430). New Haven, CT: Yale University Press. Schneider, S. H., Turner, B. L., & Morehouse, G. H. (1998). Imaginable surprise in global change science. Journal of Risk Research, 1(2), 165-185. Schultz, E A., & Kasting, J. E (1997). Optimal reductions in CO2-emissions. Energy Policy, 25, 491-500. Stair, B., McNabb, D., Smithers, J., Swanson, E., Blain, R., & Keddie, E (1996). Fanning Adaptations to Climatic Variation. In: L. Mortsch, & B. Mills (Eds), Great Lakes - St.
Lawrence Basin Project on Adapting to the Impacts of Climate Change and Variability: Progress Report One (pp. 125-135). Smith, J., & Tirpak, D. (Eds) (1988). The potential effects of global climate change on the United States: Draft report to congress. Vols. 1 and 2. U.S. Environmental Protection Agency,
Office of Policy Planning and Evaluation, Office of Research and Development. Washington, DC: U.S. Government Printing Office. Stocker, T. E, & Schmittner, A. (1997). Influence of CO 2 emission rates on the stability of the thermohaline circulation. Nature, 388, 862-864. Thompson, S. L., & Schneider, S. H. (1982). CO z and climate: The importance of realistic geography in estimating the transient response. Science, 217, 1031- 1033. Titus, J., & Narayanan,V. (1996). The risk of sea level rise: A delphic monte carlo analysis in which twenty researchers specify subjective probability distributions for model coefficients within their respective areas of expertise. Climatic Change, 33(2), 151-212. Tol, R. S. J. (2000). Modeling the costs of emission reduction: Different approaches. Pacific Asia Journal of Energy (in review). West, J. J., Small, M. J., & Dowlatabadi, H. (2001). Storms, investor decisions and the economic impacts of sea level rise. Climatic Change, 48(2/3), 317-341. Williams, J. R., Jones, C. A., & Dyke, P. T. (1990). The EPIC model. EPIC-Erosion/Productivity Impact Calculator. 1. Model Documentation. U.S.DA-ARS Tech. Bull. No. 1768, 3-92.
64
STEPHEN H. SCHNEIDER A N D KRISTIN K U N T Z - D U R I S E T I
Wynne, B., & Shackley, S. (1994). Environmental Models: Truth Machines or Social Heuristics. The Globe, 21, 6-8. Yohe, G., Neumann, J., Marshall, P., & Ameden, H. (1996). The economic cost of greenhouse induced sea level rise for developed property in the United States. Climatic Change, 32(4), 387-410. Yohe, G. (1989). The cost of not holding back the sea - economic vulnerability. Ocean and Shoreline Management, 15, 233-255. Yohe, G. (1991). Uncertainty, climate change, and the economic value of information. Policy Science, 24, 245-269. Yohe, G. (1992). Imbedding dynamic responses with imperfect information into static portraits of the regional impact of climate change. In: J. M. Reilly, & M. Anderson (Eds), Economic Issues in Global Climate Change: Agriculture, Forestry, and Natural Resources (pp. 200214). Boulder, CO: Westview Press. Yohe, G. (1990). The Cost of Not Holding Back the Sea: Toward a National Sample of Economic Vulnerability. Coastal Management, 18, 403-432.
EVALUATING REGIONAL ADAPTATION TO CLIMATE CHANGE: THE CASE OF CALIFORNIA WATER

Brent M. Haddad and Kimberly Merritt

The Long-Term Economics of Climate Change, pages 65-93.
Copyright © 2001 by Elsevier Science B.V. All rights of reproduction in any form reserved.
ISBN: 0-7623-0305-0

ABSTRACT

This chapter contributes to efforts to improve the accuracy of estimating damages resulting from climate change. It examines potential hydrological impacts on California, and how the state might adapt. For a doubled-CO2 scenario, general circulation models coupled with California hydrological data predict increased winter precipitation and drier summers, elevated snowlines with correspondingly reduced snowpack, shifts in seasonal peak runoff patterns, increased numbers and intensity of extreme weather events, increased evapotranspiration, and declining soil moisture. Adaptations by water managers could include de-emphasizing the role multi-purpose reservoirs play in flood control in order to enhance their water-storage capabilities, making firm long-term commitments to provide water to wetlands and other ecologically-sensitive areas, and increasing the management flexibility available to local water agencies through intraregional contracting and mergers. In its conclusion, the chapter notes that while the water sector is accustomed to adapting to climatic variation, adaptations may not be consistent with an integrated assessment model's least-cost path. A region's gain or loss of overall water supplies should be evaluated in the context of its ongoing reallocation of water among competing uses. And in order to capture an appropriate level of detail, the
scale of impact studies needs to be reduced to the national or sub-national level.
THE CHALLENGE OF MEASURING DAMAGES

Economists interested in measuring the costs and benefits of policy responses to climate change have identified monetary quantification of impacts as a particular challenge. Fankhauser (1995) calls damage estimation a "daunting task" that is "still in its infancy," while Nordhaus (1998) describes damage estimation as the "most difficult and controversial" of all areas of the climate debate. Weyant et al. (1996), writing for the Second Assessment Report of the Intergovernmental Panel on Climate Change (IPCC),¹ list "developing a credible way to represent and value the impacts of climate change" as the first of the five biggest challenges facing integrated assessment modeling. And Reilly (1998) characterizes the challenge as multi-faceted, involving assumptions about ease of adaptability, scientific ability to detect change, political and economic capacity to react to climate signals, and indecision over who bears responsibility for impacts in the absence of adaptation.

Economists are attempting to improve cost assessments of impacts and adaptations from both top-down and bottom-up perspectives. Recognizing the inherent weaknesses of sector-by-sector approaches to impact estimation, economists are developing and refining integrated assessment models (IAMs) that include interfaces not just between economic sectors but also with physical parameters supplied by general circulation models (GCMs). These models should capture adaptation behaviors, as well as both positive and negative feedbacks, as sectors adjust to climate-induced impacts. At the same time, economists and others are undertaking regional studies for which evaluations of impacts and responses are more tractable than on a global level. The IPCC, for example, commissioned a volume (Watson et al., 1998) that divides the Earth into ten regions, primarily by continent.

The meaning of adaptation in a climate-change context started coming into focus in the mid-1990s.
A 1995 International Conference on Climate Change Adaptation defined adaptation to climate change as "all adjustments in behavior or economic structure that reduce the vulnerability of society to changes in the climate system" (Smith et al., 1996). Consistent with this definition, a volume on engineering responses to climate change discusses both social and technical approaches (Watts, 1997). Stakhiv (1996) points out that the level and cost of adaptation are linked to a society's willingness to accept increased vulnerability. There is no fixed criterion for social risk-taking, he notes, so adaptation choices (and therefore costs) will be different for different
societies. Shriner and Street (1998) also point out that adaptation is not cost-free, and note that the cost of adaptation could be positively correlated with the rate of climate change. Further, unforeseen barriers to adaptation could arise, and adaptive strategies could have unanticipated secondary effects. Cline (1992) characterizes the cost of adaptation as society diverting its resources from other potential advances. Apropos of this chapter's topic is an "early" volume edited by Knox and Scheuring (1991) that addresses climate-change impacts and responses for California. It includes a chapter on water-resource management (Vaux, 1991) that highlights the importance of ongoing research and advanced planning for effective adaptation.

The challenges of impact assessment are nowhere clearer than in the case of water-resource management. At least two issues arise. Fankhauser (1995) describes how water-sector impacts are typically estimated in climate-change benefit/cost calculations. As developed by Cline (1992) for the United States, an estimate of reduced availability of runoff is made: 7% of total U.S. water withdrawals. It is then multiplied by the nationwide average cost of water ($0.42/cubic meter). The result, $13.7 billion, represents nearly one quarter of Fankhauser's estimate of the total costs of climate change to the U.S.² Adaptation within the water sector plays no role in arriving at this damage figure, yet it would be reasonable to assume that water managers would take major steps to adapt if changing climatic conditions reduced water supply.

A second issue involves the separation of water-resource-management issues into separate subsections (typically chapters) of impact studies. Table 1 provides the categories of impacts found in four studies. Each study segregates water-resource management into its own section, yet water plays a central role in almost every category listed.
How, for example, can water-resource management be treated separately from ecosystems, agriculture, forestry, fisheries, human health, human settlement, and wetland loss, among other categories?³ Within the text of these various chapters or subsections, water issues receive significant attention, and in their IPCC Second Assessment Report chapter on water resources management, Kaczmarek et al. (1996) utilize an integrating framework. But overall, water impacts and adaptations are broken into separate pieces in these major impact studies. Shriner and Street (1998), writing about North America for the IPCC, recognize this flaw. They describe water as a "lynchpin that integrates many subregions and sectors," and insist that impact studies "must account for the inherent competition for water supplies. . . ." The current organization of impact studies does not facilitate such an accounting.
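The Cline-style calculation summarized above is simple enough to reproduce. The sketch below is a hedged reconstruction: the total-withdrawals figure is back-solved from the reported $13.7 billion result rather than quoted from Cline (1992), so treat it as an assumption.

```python
# Back-of-envelope reconstruction of the water-sector damage estimate
# described above (Cline 1992, as reported by Fankhauser 1995).
# TOTAL_WITHDRAWALS_M3 is back-solved from the $13.7 billion result,
# not quoted from the original source -- treat it as an assumption.

LOST_FRACTION = 0.07           # assumed reduction in runoff availability
PRICE_PER_M3 = 0.42            # nationwide average cost of water, $/m^3
TOTAL_WITHDRAWALS_M3 = 466e9   # implied total U.S. withdrawals, m^3/yr

lost_volume_m3 = LOST_FRACTION * TOTAL_WITHDRAWALS_M3
damage_usd = lost_volume_m3 * PRICE_PER_M3
print(f"Estimated annual water-sector damage: ${damage_usd / 1e9:.1f} billion")
```

Note that no adaptation term appears anywhere in this arithmetic, which is precisely the weakness identified in the text.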
Table 1. How Impacts Are Categorized in Climate Change Impact and Mitigation Studies.

Fankhauser: Agriculture; Air Pollution; Amenity; Coastal defense; Dryland Loss; Ecosystem Loss; Energy; Fishery; Forestry; Life/Morbidity; Migration; Water; Wetland Loss.

Watson et al. (1996): Agriculture; Coastal Zones/Islands; Cryosphere (i.e. frozen regions); Deserts; Desertification; Financial Services; Fisheries; Forests; Human Population Health; Human Settlement; Hydrology/Freshwater Ecology; Industry/Energy/Transportation; Mountain Regions; Oceans; Rangelands; Water Resources Management; Wetlands; Wood Production.

Watson et al. (1998): Coastal Systems; Ecosystems; Food and Fiber Production; Human Health; Human Settlements; Hydrology and Water Resources.

Smith et al. (1996): Agriculture; Coastal Resources; Ecosystems and Forests; Fisheries; Natural Hazards; Other Sectors; Water Resources.

Sources: Fankhauser, S. (1995), Table 3.15; Watson, R., M. Zinyowera & R. Moss (1996 and 1998), tables of contents; Smith, J., et al. (1996), table of contents.
In this chapter, we examine the potential hydrological impacts on California as a result of climate change. California can provide valuable insight to a discussion of regional impacts and adaptation in industrialized regions of the world. Though smaller than a continent, California can hardly be considered a "small" region. It comprises 4% of the land area of the United States and receives 4% of its average annual precipitation. It is home to 12% of the U.S. population and accounts for 12% of its economic output. California has a number of geologically-based borders, including tall mountain ranges, deserts, and the Pacific Ocean, that provide regional and climatic definition. These factors combine with the state's heavily-engineered water-supply system and its status as an individual political unit within the United States to yield a discrete portrait of the potential impacts of and adaptations to climate change with respect to water management.

This chapter now turns to a review of the hydrological predictions of GCMs and regional hydrological models. Focusing on the western U.S. and California, modeled hydrological implications of a doubled-CO2 world are presented.⁴ A brief section then describes empirical data on precipitation and runoff scenarios in light of modeled predictions. The discussion then turns to water-management challenges and potential adaptations implied by doubling scenarios. Trade-offs between flood control and dry-season water supply are described, as well as water-supply challenges to managing wetland-based biodiversity. The centrality of focusing on water reallocation in a context of evolving water institutions is then emphasized, followed by conclusions.
GCMS AND HYDROLOGICAL PREDICTIONS

GCMs are mathematical, computer-driven representations of global atmospheric climate systems. These models do not render precise predictions of global warming, but describe the feedbacks associated with various warming trend scenarios. GCMs have been able to describe past climate with acceptable accuracy. GCM results track evidence from a variety of sources, including tree-ring samples, ice cores, and isotopic analysis, suggesting that the period from 1400 to 1900 experienced an average temperature increase of 0.125°C per century, while the last century (1900 to present) has experienced a temperature increase of 0.5°C (Wilkinson & Rounds, 1998).

GCMs model climate by superimposing a large grid over the earth's surface. Chen et al. (1996) separate the spatial scales for most GCMs into large scale, which ranges from 100,000 km² to planetary, and regional scale, which ranges from 100 to 10,000 km². For example, NASA's large-scale GISS Model II GCM uses about 45 grid boxes, each measuring 4° NS by 5° EW, to
characterize weather over the United States. This lack of resolution obscures the impact of varied topography (e.g. mountain ranges) on hydrology within each grid box. Riley et al. (1996) note, therefore, that GCMs are currently unable to provide realistic meteorological input variables to regional hydrological models at the basin scale. Errors in these models may critically affect temperature and precipitation forecasts, and interactions between precipitation, surface runoff, and evapotranspiration may be the most poorly represented aspects of GCMs (Rind et al., 1997). Ward and Proesmans (1996) assert that the "state of the art of present GCMs is that water-substance transport can be modeled with fair capability . . . but rather primitively from a hydrological viewpoint." The recent introduction of Atmosphere-Ocean GCMs and new regionalization techniques represents an intermediate step toward more accurate predictions of regional hydrological impacts of increased atmospheric concentrations of CO2.
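The coarseness described above can be made concrete with a little spherical geometry. The sketch below is purely illustrative (it is not part of any cited study) and approximates the area of a single 4° x 5° grid box at a mid-latitude of 40°N:

```python
import math

# Approximate area of one 4-degree (lat) x 5-degree (lon) GCM grid box
# at 40 N. Illustrative only; assumes a spherical Earth of radius 6371 km.
EARTH_RADIUS_KM = 6371.0
KM_PER_DEG_LAT = 2 * math.pi * EARTH_RADIUS_KM / 360  # ~111 km per degree

lat_extent_km = 4 * KM_PER_DEG_LAT
# east-west extent shrinks with the cosine of latitude
lon_extent_km = 5 * KM_PER_DEG_LAT * math.cos(math.radians(40))

area_km2 = lat_extent_km * lon_extent_km
print(f"One grid box covers roughly {area_km2:,.0f} km^2")
```

At roughly 190,000 km² per box, an entire mountain range such as the Sierra Nevada fits inside a single cell, which is why basin-scale hydrology is so poorly resolved.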
CHANGING PATTERNS OF PRECIPITATION AND HYDROLOGY

The most important prediction of GCMs is a general temperature increase as a result of rising atmospheric concentrations of CO2. This said, GCMs vary in their predictions regarding associated feedbacks such as precipitation. Models such as the United Kingdom's Hadley Centre Climate Model (HadCM2), the Canadian Global Climate Model (CGCM), the Goddard Institute for Space Studies (GISS) GCM, and the National Center for Atmospheric Research (NCAR) GCM have been used to predict California's future climate under increased greenhouse gas concentrations. Given that GCMs generally are poor at describing the direct linkages between CO2 increase and hydrological changes, researchers typically address these linkages by describing the impacts of temperature increase on precipitation and surface runoff (Rind et al., 1997; Bardossy, 1997).
Wetter Winters, Drier Summers

Watson et al. (1998) report that when greenhouse gas levels increase, most GCMs predict an increase in global mean precipitation, including increased precipitation over North America. Filipo et al. (1994) link a regional climate model to GCM outputs. For the southwestern U.S.,⁵ their model suggests a 20-100% increase in precipitation in the winter and a 0-60% decrease in precipitation in the summer, putting some regions at zero summer precipitation. Regardless of whether models suggest increased, decreased, or stable levels of
annual precipitation on a national level as a result of increased CO2, most models predict wetter winters and drier summers for California. The U.S. Environmental Protection Agency (USEPA) describes such a scenario in its 1997 paper entitled "Climate Change and California." Here the HadCM2 model estimates that California could see an increase in annual precipitation averaging 20% to 30%: 10% to 50% in the spring and fall, with possibly larger increases in winter (USEPA, 1997). The figures relating to this combination of precipitation changes strongly suggest decreased summer precipitation. But even with decreases in summer-season precipitation, Frederick and Gleick (1999) find that by the year 2090, there could be an average increase in daily precipitation of 5-7 millimeters across the western United States.

Reduced Snowpack and Earlier Snowmelt

Models indicate that warmer temperatures could reduce the volume of California snowpack, as well as the quantity of water stored as ice and snow (Shriner & Street, 1998; USEPA, 1997). The California Department of Water Resources (DWR, 1994) reports that a 3°C rise in temperature is predicted to raise California's historical snowlines by approximately 460 meters. This would reduce April snowpack from the current 32,900 square kilometers to 15,000 square kilometers, leaving about 45% of the current area. As Fig. 1 shows, the southern Sierra snowpack around the San Joaquin-Tulare Lake drainage basin would decrease by 33% while the snowpack above the Sacramento River drainage basin in the northern Sierra would decrease by 75% (DWR, 1994; CEC, 1989). The accumulation of snowpack in high mountains is an important freshwater storage mechanism. For example, California's peak snowpack runoff usually occurs in May, when springtime temperatures rise sufficiently to release the frozen water into streams and waterways. Heavy precipitation and snowpack runoff are usually offset by several weeks.
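The DWR figures quoted above can be cross-checked: given the statewide totals and the two regional reduction percentages, the implied split of April snowpack between the southern and northern Sierra can be back-solved. The sketch below is our own consistency check, not a DWR calculation.

```python
# Consistency check of the DWR (1994) snowpack figures quoted above.
# Back-solves the implied south/north Sierra split from the statewide
# totals and the two regional reduction percentages. Our own sketch,
# not a DWR calculation.

TOTAL_BEFORE_KM2 = 32_900
TOTAL_AFTER_KM2 = 15_000
SOUTH_LOSS = 0.33   # San Joaquin-Tulare basin reduction
NORTH_LOSS = 0.75   # Sacramento River basin reduction

total_loss_km2 = TOTAL_BEFORE_KM2 - TOTAL_AFTER_KM2
# Solve SOUTH_LOSS*s + NORTH_LOSS*(TOTAL_BEFORE_KM2 - s) = total_loss_km2
south_km2 = (NORTH_LOSS * TOTAL_BEFORE_KM2 - total_loss_km2) / (NORTH_LOSS - SOUTH_LOSS)
north_km2 = TOTAL_BEFORE_KM2 - south_km2

print(f"Implied southern Sierra snowpack: {south_km2:,.0f} km^2")
print(f"Implied northern Sierra snowpack: {north_km2:,.0f} km^2")
print(f"Overall reduction: {total_loss_km2 / TOTAL_BEFORE_KM2:.0%}")
```

The implied split (roughly 16,100 km² south, 16,800 km² north) is physically plausible, and the overall reduction works out to about 54%, i.e. roughly 45% of the April snowpack area remains.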
Runoff that occurs during California's high-precipitation periods, winter and early spring, may lead to increased flooding by overloading drainages and reservoirs already filled with runoff from rain events. Riley et al. (1996) find that temperature increases of 1.1°C, 2.2°C, and 3.3°C, alone and in combination with other scenarios, shift the peak runoff from May to April, while a temperature increase of 4.4°C shifts the peak runoff from May to March. In summer, when drainage basins are particularly dry in California, reduced flows of snowmelt may simply seep into soil. Projected changes in snowfall and snowmelt - as well as increases in warm-period rainfall intensity - could shift the periodicity of the flood regime in North America. Increased flooding is more likely in arid regions, agricultural regions with exposed winter soils, and in urban areas with high levels of
[Figure 1: bar chart comparing percentage snowpack decreases in the South Sierra (San Joaquin-Tulare Basin) and the North Sierra (Sacramento River Basin).]
Fig. 1. Decreases to California Snowpack with an Average Temperature Rise of 3°C.
impermeable surfaces. This shift in peak runoff is widely predicted to render California more vulnerable to flooding (Shriner & Street, 1998; DWR, 1994; and Sandberg & Manza, 1991). In the western U.S., small changes in precipitation can lead to relatively large changes in runoff. Recent climate simulations have found that average runoff in California may increase by as much as 26% by 2030 and as much as 139% by 2090 - most of which is likely to be conveyed in winter and spring (Frederick & Gleick, 1999). The NCAR GCM2 and NCAR regional climate models considering a doubled-CO2 scenario reach similar conclusions and suggest that California can expect "very high" winter runoff conditions (Wilkinson & Rounds, 1998). Shriner and Street (1998) find that climate projections suggest increased runoff in winter and early spring but reduced flows during summer in regions in which hydrology is dominated by snowmelt. California matches this profile with the heaviest rains falling in winter and early spring. Drainage basins will be saturated with rain water and convey snowmelt more quickly, resulting in rapid flows of water that flood-prevention systems will deflect to the ocean. Thus, increases in winter and spring runoff
will not necessarily be available for human use and may increase ecological disturbance.

Increased Number and Magnitude of Extreme Weather Events

Shriner and Street (1998) recognize that increases in North American hydrological variability (e.g. larger floods and longer droughts) are likely to result from a doubled-CO2 scenario. In addition to variation in seasonal precipitation levels, some studies also suggest increasing annual variability. Gleick et al. (1995) point out that the last quarter of the 20th century produced new records for dry periods as well as the wettest years in recorded history. Thus, they note, while average runoff remained about the same, both drought and flood years became more common. As stated previously, several models suggest drier summer periods for California, with correspondingly reduced runoff. Moreover, the greatest impact of reduced runoff is expected in arid and semi-arid regions, with already-high ratios of water use to renewable supply. Shriner and Street (1998), the USEPA (1997), and Lane et al. (1999) assert that California's water supply may be particularly vulnerable to decreases in summer runoff. Figure 2 illustrates two predictions of how temperature increases could reduce runoff due to increased evapotranspiration.

Several agencies, including the USEPA (1997), the California Department of Water Resources (DWR, 1994), and the Federal Bureau of Reclamation (Leverson, 1997; Riley et al., 1996; Sandberg & Manza, 1991), report that an increase in extreme weather events may be a feedback associated with atmospheric CO2 increases. Average temperature increase begins the feedback scenario. The HadCM2 model, for example, finds that by the year 2100, temperatures in California may rise by 2.8°C (USEPA, 1997). Many models have found that increased temperatures lead to increased evaporation from oceans, lakes, streams, soils, etc.
A warmer atmosphere has relatively greater capacity to carry that moisture than a cooler atmosphere (Moran & Morgan, 1997). Leverson (1997) notes that GCMs and theoretical arguments suggest that the hydrological cycle will be enhanced by increases in the moisture content of the warmed atmosphere. That is, for a given intensity, individual storms should produce more precipitation than they presently do. Another suggested consequence of increased temperature, not mutually exclusive to extreme-weather predictions, is that global storm tracks will rise 1-2° in latitude (Dennis, 1991; Riley et al., 1996; Leverson, 1997). Dennis (1991) suggests that a general warming of the earth could result in a reduction of the temperature gradient between tropical and polar latitudes. It is this
temperature gradient that drives the weather patterns, including the familiar "jet stream." Because the majority of temperature increases would be concentrated in the mid-latitudes rather than at the equator, storm tracks would shift poleward, approximately to the U.S.-Canadian border. Following up on this hypothesis, Leverson (1997) chose several "global warming analog" months over 42 years that approximated Dennis' global-warming storm-track scenario. Leverson found that shifting the storm track 1-2° north consequently shifted precipitation toward Canada and away from the intermountain regions of Utah and Arizona. Table 2 illustrates Leverson's prediction for winter precipitation in Montana, Utah, and Arizona. While precipitation predictions for Montana, the northernmost state in the study, are only slightly less than for periods of normal storm-track patterns, Arizona could receive less than half of its normal amount of precipitation if storm tracks shift northward. Leverson asserts that if Dennis' hypothesis is correct and
[Figure 2: chart of projected percentage decrease in runoff (0% to -20%) against temperature increase (0-6°C); one of the model curves is labeled "Riley Model."]
Fig. 2. Projected Decreases in Runoff Rates with Increased Temperature.
Table 2. Percentage of Normal Precipitation under the North-Shifted Storm Track Scenario.

            December    January    February
Montana       100%        93%        93%
Utah           61%        64%        74%
Arizona       36%        40%        47%

Source: Leverson, 1997.
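Table 2's monthly figures can be collapsed into a single winter-season average per state. The short sketch below does so; it is our own summary of Leverson's numbers, not a figure he reports.

```python
# Winter-average (Dec-Feb) percentage of normal precipitation per state
# under the north-shifted storm track scenario, computed from Table 2.
# A summary of Leverson's (1997) monthly figures, not a figure he reports.

table2 = {
    "Montana": [100, 93, 93],   # Dec, Jan, Feb, % of normal
    "Utah":    [61, 64, 74],
    "Arizona": [36, 40, 47],
}

winter_avg = {state: sum(months) / len(months) for state, months in table2.items()}
for state, pct in winter_avg.items():
    print(f"{state}: {pct:.0f}% of normal winter precipitation")
```

The north-to-south gradient (roughly 95%, 66%, and 41% of normal) makes the poleward drift of the storm track immediately visible.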
storm tracks drift northward as temperatures increase, parts of the intermountain western U.S. will experience significant decreases in winter precipitation.

Do These Predictions Match Observations?

Models predicting a combination of temporally-redistributed streamflow with increased temperature match recent observations fairly closely. The U.S. Geological Survey has found that in the Sierras, mean monthly streamflow during December through March was substantially greater for water years 1965-1990 compared to water years 1939-1964 (Pupako, 1993). This shift in peak streamflow from snowmelt is attributed to small increases in temperature and is consistent with the climate-change models discussed above. Precipitation patterns have changed as well. As predicted by the model proposed by Dennis (1991) and applied to regional hydrological conditions by Leverson (1997), northward shifts in the storm track have produced an increase in precipitation in higher latitudes (35°N-75°N), while the mid-latitudes have seen reduced precipitation. In addition, Shriner and Street (1998) describe a trend toward higher frequencies of extreme (greater than 50.8 mm) one-day rainfalls over the United States between 1911 and 1992, due mainly to heavier warm-season rainfall. There is a chance of less than 1/1000 that this trend could arise in a quasi-stationary climate.

Water Availability and Plant Performance

Shriner and Street (1998) and Rind et al. (1997) report that when models hold precipitation constant, a rise in temperature leads to greater rates of evaporation
of soil moisture and transpiration from plants. Atmospheric warming and altered water resources are expected to affect plant performance in a number of ways. With a predicted early onset of snowmelt in western U.S. montane ecosystems (Harte et al., 1995), plants will be exposed to an earlier onset and longer duration of drought conditions. Although certain species can undergo physiological changes to continue extracting water from drying soils, other species may not be able to make such adjustments (Loik & Harte, 1997). The ability of some plants to conduct photosynthesis is unaffected by the warming-induced drying, whereas others show signs of photosynthetic dysfunction (Loik et al., 1999). These results suggest that some plant species may be more successful at survival, growth, and reproduction in a warmer and drier future, which could lead to changes in species composition for certain ecosystems (Brown et al., 1997; Allen & Breshears, 1998).

A key aspect of photosynthesis is the opening of tiny pores ("stomata") on the surface of the leaf, which allows the entry of carbon dioxide and a countercurrent (and unavoidable) loss of water. Such water loss from plant surfaces during photosynthesis comprises a large fraction of the water that returns to the hydrological cycle via evapotranspiration. Water flux from soils to plants to the atmosphere is accompanied by large fluxes of heat energy (Nobel, 1991). Taken together, the water and heat fluxes through ecosystems have an important influence on local patterns of weather (Ahrens, 1991). Moreover, if the species composition of a particular location changes (such as to more drought-tolerant species), then the water and energy flux from soils to air will be further altered. The effects of warming on vegetation changes, as well as water and heat fluxes, are linked to one another and not well understood.
However, they are likely to produce as yet unknown consequences for local and regional climates and therefore surface-water availability.

Increases in atmospheric temperatures are only one aspect of global climate change that may alter water resources via effects on photosynthesis (Dukes & Mooney, 1999). The aforementioned stomata are sensitive to carbon dioxide concentrations; when the carbon dioxide content within a leaf is relatively high, stomata close to prevent further water loss (Taiz & Zeiger, 1998). As a result, plants exposed to elevated levels of carbon dioxide are able to conduct photosynthesis with somewhat less water loss. This should lead to a water savings for a particular plant and extra soil water available for the roots of neighboring plants (Field et al., 1995). Indeed, an increase in photosynthetic productivity and growth is expected for different ecosystems due to the effect of elevated carbon dioxide on stomatal opening (Melillo et al., 1993). Yet for certain regions (such as deserts), the water savings
due to elevated carbon dioxide may only occur for short periods of each year (Huxman et al., 1998).

In summary, the climate-change models and impact studies examined in this chapter find some agreement with respect to hydrological impacts of climate change on California:

• Increased winter precipitation and drier summers;
• Elevated snowlines with correspondingly reduced snowpack;
• A shift in seasonal peak runoff patterns;
• Increased number and intensity of extreme weather events;
• Increased evapotranspiration; and
• Declining soil moisture, with a combination of adaptation and spatial reconfiguration of plant species.
ADAPTATION IS ALREADY PART OF WATER MANAGEMENT

The possibility of significant change in established hydrological patterns presents additional, but not necessarily new, challenges to water managers, some of whom are already considering and framing the challenges (see Boland, 1998; Steiner, 1998). In its findings and recommendations related to water-resource management and climate change, the Public Advisory Forum of the American Water Works Association (AWWA) has called upon water professionals to undertake a sweeping review of design assumptions, practices, and contingency planning for water systems (Public Advisory Forum, 1997; see similar recommendations in McAnally et al., 1997). The review should encompass structural and nonstructural aspects of water systems and should include a reevaluation of legal, technical, and economic approaches to water management. Further, water managers should explore establishing partnerships with other water agencies and other public agencies to help reduce greenhouse gas loading, work with scientific organizations to better understand the potential impacts of climate change on water resources, and improve communication between themselves and climate-change scientists. These encompassing recommendations should not come as a surprise given that water managers regularly plan for and deal with the variability of the hydrocycle; responding to climate change-induced changes largely involves adjustments in scope and scale to existing patterns of behavior. Stakhiv (1996) notes that many of the early adaptation strategies suggested by the IPCC and U.S. National Academy of Sciences were derived from conventional practices of water-resource managers in the U.S. and European Community.
BRENT M. HADDAD AND KIMBERLY MERRITT
Kaczmarek and Napiórkowski (1996) identify three approaches to adaptation:

• postponement of decision-making until more data are available;
• minimizing regrets, or preparing water systems for potential shocks; and
• applying optimality rules to a range of climate-change scenarios.

Each approach has its strengths and weaknesses as well as its own implications for data requirements, cost, and public policy action. The following two examples illustrate the adaptation challenges California water managers could face.

WATER STORAGE VS. FLOOD MANAGEMENT: CHANGING PRIORITIES

Shriner and Street (1998) recognize potential "critical supply-demand mismatches" in regions like California that combine long dry seasons with high dry-season water demands. Flood-control concerns could exacerbate this problem. Climate models suggest stronger and more frequent flood episodes as a result both of stronger precipitation events and of shorter periods between snow deposition and snowmelt. Water storage and flood management are linked because key facilities used for flood management, dam/reservoir systems, typically also serve water-storage goals. A conflict in traditional dam/reservoir management priorities potentially arises. To provide a flood-protection role, reservoirs are drawn down (or not completely filled) in anticipation of heavy inflows. But with declines in the water-storage period provided by snowpack, reservoirs will have to be filled sooner (corresponding with earlier snowmelt), and greater emphasis will have to be placed on maintaining full reservoirs in anticipation of longer dry seasons, even if flood events remain a possibility before the dry months of summer arrive. In short, the flood-control role played by multi-purpose dam/reservoir systems during mid- to late-spring may have to be de-emphasized in comparison with the water-storage role.
This perspective on reprioritizing the different roles of multifunction dam/reservoir systems differs from the California Energy Commission's (CEC, 1989) expectation that flood control will continue to be emphasized with a resulting increase in the risk of late-summer water shortages in California, as well as from McAnally et al. (1997), who generally call for keeping reservoir water-levels lower in response to increasing vulnerability to flooding. A de-emphasizing of the flood-control role played by major water-storage facilities, given the potential for more frequent flood events, suggests that alternative flood-management methods will have to be pursued. By the late
1980s, roughly 75% of California communities contained land that lies within Special Flood Hazard Areas or floodplains vulnerable to 100-year floods (CEC, 1989). Among the alternatives (and complements) to reservoir capture of flood water are:6
Enhancing watershed management. This includes re-establishment or preservation of upland forests, reconnecting stream channels with extended riparian zones (wetlands), and avoiding the dredging or channelizing of streams.
Pre-designating flood zones. Such zones, typically in agricultural or openspace/park regions, would be diversion regions for flood waters before they reach urban lands. Ideally, flood zones would also serve as aquifer-recharge zones, but a region's topography and geology may not align effectively.
Restricting flood-plain development. This intervention in local economic development has historically been resisted since flood plains appear to present low-cost opportunities for new residential and business development. Enforcement of existing flood-plain building restrictions often lapses during extended periods of normal and below-normal flows. However, since such intervals between major floods may become shorter in the future, existing restrictions on flood-plain development could be re-emphasized and new restrictions added.
Requiring flood insurance. Flood insurance serves to spread the economic impacts of flood-related disasters to large numbers of homes and businesses and allows affected areas to recover more quickly. Homes can be rebuilt and businesses reopened more quickly with less localized economic loss if they are insured against floods. Currently, flood insurance is not profitable for private-sector insurance companies, so it is offered instead by the federal government. If the government cannot find ways to induce the private sector to provide this service, it should nevertheless continue to offer it and, in some cases, require it for individuals and businesses located in Special Flood Hazard Areas.
Preparing in advance for flood-emergency management. Upstream gauging stations linked electronically to emergency-service providers (rescue crews, etc.), combined with up-to-date evacuation plans and a prepared populace, can greatly diminish losses of human life during a flood event, reduce discomfort in the flood's aftermath, and accelerate the recovery process. Numerous federal and state agencies and numerous information systems, such as ALERT (Automated Local Evaluation in Real Time), already exist and can be utilized or deployed in regions whose flood risks increase in the coming decades (Water Education Foundation, 1998). Interest in these and other similar measures is likely to increase should climate change result in the combination of earlier snowmelt and increased
frequency and magnitude of flood events. Instead of the current philosophy of
flood control, one can imagine a new philosophy (actually, a return to the older philosophy) of flood management. Costs involved would combine structural enhancements and non-structural adaptations, but would depend in their details and magnitude on how Californians balance the conflicting objectives of flood control and late-summer water supply.
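The choice just described, between drawing reservoirs down for flood protection and keeping them full for late-summer supply, is exactly the kind of decision that the "minimizing regrets" approach identified by Kaczmarek and Napiórkowski (1996) addresses. A minimal sketch, with entirely hypothetical policies, scenarios, and cost figures, of selecting a spring drawdown policy by minimax regret:

```python
# Toy minimax-regret calculation for choosing a spring reservoir policy.
# All policies, scenarios, and cost figures below are hypothetical.

# costs[policy][scenario]: flood damages plus shortage costs, in $ millions/yr
costs = {
    "deep drawdown (flood priority)": {"wet": 10, "median": 30, "dry": 80},
    "partial drawdown (balanced)":    {"wet": 25, "median": 20, "dry": 40},
    "keep full (storage priority)":   {"wet": 70, "median": 35, "dry": 15},
}
scenarios = ["wet", "median", "dry"]

# Best achievable cost in each scenario, i.e. with perfect foresight.
best = {s: min(costs[p][s] for p in costs) for s in scenarios}

# A policy's regret in a scenario is its cost minus the best achievable cost;
# minimax regret picks the policy whose worst-case regret is smallest.
max_regret = {p: max(costs[p][s] - best[s] for s in scenarios) for p in costs}
chosen = min(max_regret, key=max_regret.get)

print(chosen, max_regret)
```

Under these invented numbers the balanced policy wins: it is never best, but it is never badly wrong in any scenario. The point is only to show the mechanics of the decision rule, not to recommend a policy.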
MANAGING WATER SUPPLY FOR FRESHWATER ECOSYSTEMS

For their size, freshwater ecosystems contain a disproportionately large number of the world's species. More than 40% of the world's fish species and roughly 12% of the animal species reside in freshwater habitats, which themselves cover only about 1% of the earth's surface (World Resources Institute et al., 1998). With respect to wetlands, numerous social benefits (or "ecosystem services") of wetlands have been identified in recent years. In a table adapted from Kusler (1983), the National Research Council (1992) presents fifteen separate services provided by wetlands (reproduced as Table 3). Among them are flood conveyance and storage, sediment control, and recreation. On a local level, smaller towns (~10,000 population) can rely on wetlands for tertiary wastewater treatment at half the cost or even less compared to technology-based advanced treatment methods (Ewel, 1997). Additional benefits of wetlands include the return of nitrogen to the atmosphere (denitrification), which counter-balances to some extent human introductions of nitrogen in fertilizers, as well as the reduction of sulfates into insoluble complexes, which partially mitigates human introductions of sulfur via acid rain. Wetlands have the potential to be a net sink for carbon (through accumulation of peat), but in recent years, due to extensive wetland conversions, they have become a net source (Mitsch & Gosselink, 1993).

Profound impacts on freshwater ecosystems could result from a doubling of atmospheric CO2. In addition to increases in mean ambient temperature, climate change models predict declining levels of soil moisture, changes in timing and intensity of rainfall, shifting of storm tracks, and increasing frequency and intensity of drought periods.
Shriner and Street (1998) suggest that North American non-forested ecosystems could experience losses of migratory waterfowl and mammal breeding and forage habitats, invasions of exotic species, and increased sediment loading into rivers and lakes. Novel assemblages of plant and animal species could result as the ranges of some species expand while other ranges decline (McKinney & Lockwood, 1999; Lockwood & Duncan, 1999).
Table 3. Wetland Functions.
Flood conveyance - Riverine wetlands and adjacent floodplain lands often form natural floodways that convey floodwaters from upstream to downstream areas.
Protection from storm waves and erosion - Coastal wetlands and inland wetlands adjoining larger lakes and rivers reduce the impacts of storm tides and waves before they reach upland areas.
Flood storage - Inland wetlands may store water during floods and slowly release it to downstream areas, lowering flood peaks.
Sediment control - Wetlands reduce flood flows and the velocity of floodwaters, reducing erosion and causing floodwaters to release sediment.
Habitat for fish and shellfish - Wetlands are important spawning and nursery areas and provide sources of nutrients for commercial and recreational fin and shellfish industries, particularly in coastal areas.
Habitat for waterfowl and other wildlife - Both coastal and inland wetlands provide essential breeding, nesting, feeding, and refuge habitats for many forms of waterfowl, other birds, mammals, and reptiles.
Habitat for rare and endangered species - Almost 35% of all rare and endangered animal species either are located in wetland areas or are dependent on them, although wetlands constitute only about 5% of the nation's lands.
Recreation - Wetlands serve as recreation sites for fishing, hunting, and observing wildlife.
Source of water supply - Wetlands are becoming increasingly important as sources of ground and surface water with the growth of urban centers and dwindling ground and surface water supplies.
Food production - Because of their high natural productivity, both tidal and inland wetlands have unrealized potential for food production from harvesting of marsh vegetation and aquaculture.
Timber production - Under proper management, forested wetlands are an important source of timber, despite the physical problems of timber removal.
Preservation of historic, archaeological values - Some wetlands are of archaeological interest. Indian reservations were located in coastal and inland wetlands, which served as sources of fish and shellfish.
Education and research - Tidal, coastal, and inland wetlands provide education opportunities for nature observation and scientific study.
Source of open land and contribution to aesthetic values - Both tidal and inland wetlands are areas of great biological diversity and beauty, and provide open space for recreational and visual enjoyment.
Water quality improvement - Wetlands contribute to improving water quality by removing excess nutrients and many chemical contaminants. They are sometimes used in tertiary treatment of urban wastewater.
Source: National Research Council, 1992, Table 6.1, adapted from Kusler (1983).
In addition to solar energy and wind, the most important driving forces for wetlands are hydrologic, including tides, streamflow, surface runoff, and groundwater flow (Mitsch & Gosselink, 1993). Wetland management involves active attention to a project or region over time. The time dimension is critical to preserving a wetland's biodiversity and ecosystem services because a region's rate of environmental change may directly affect the qualitative biological outcome of such change (Kingsolver et al., 1993). California's San Luis National Wildlife Refuge Complex, commonly known as "the Grasslands," offers insight into the management and coordination of water supply necessary for wetlands to remain viable over time. The Grasslands is located along California's San Joaquin River and is the state's largest remaining inland wetland. It encompasses more than 59,000 hectares straddling Merced and Fresno Counties in the San Joaquin Valley. It is a critical stopover for migrating and wintering waterfowl, with more than 50% of all San Joaquin Valley shorebirds in residence during the peak spring migration season. The Grasslands provides habitat to 46 plants and animals that are endangered, threatened, or candidates for listing as special-status species. Ownership and management of the Grasslands involve a complex mosaic of state, federal, and private entities. The U.S. Fish and Wildlife Service owns and/or manages 32,100 hectares, part of which includes perpetual conservation easements donated by private parties. Another 13,600 hectares is owned and managed by the California Department of Fish and Game as the Los Banos, Mendota, Mud Slough, North Grasslands, and Volta Wildlife Areas, and Grasslands State Park. Private duck hunting clubs add to the mosaic, owning and managing 13,900 hectares. Three phases in the evolving institutional structure of water supply to the Grasslands can be identified.
The first was simply the historical pattern of capturing unclaimed flood flows from the San Joaquin River. The second phase began when the Grasslands was cut off from its historical source of San Joaquin River water in 1944 with the completion of the Friant Dam, part of the federal Central Valley Project. Water managers for the Grasslands began securing water delivery through multiple contracts with irrigation districts and the federal government. In an average year, 37 million cubic meters are delivered to the Grasslands from the San Luis Canal Company (a neighboring irrigation district), which receives its water via the Delta-Mendota Canal and the California Aqueduct. The Merced Irrigation District is contracted to supply groundwater to the Grasslands, but due to this water's high cost, it is reserved for dry years. In addition, private duck clubs purchase approximately 18.5 million cubic meters per year from the Grassland Water District, whose sole purpose is to deliver water to the wetlands.
The third phase was launched when revisions to the water-delivery system for the Grasslands were mandated by the 1992 Central Valley Project Improvement Act. This Act, which seeks in part to mitigate the long-term environmental impacts of the Friant Dam and other federally-owned water facilities in California, will, when fully implemented, assure firm water supplies to the Grasslands for the first time in its history. The federally-guaranteed supplies will provide a basic quantity to the Grasslands while other existing sources will continue to provide supplemental supplies. The Grasslands' water supply has thus evolved from its original (unmanaged) linkage to flood flows of the San Joaquin River, through a mosaic of uncoordinated and often insecure water agreements involving state, federal, and local water agencies as well as private organizations, to a structure that now includes a basic level of supply guaranteed by federal legislation. The intermediate phase of multiple uncoordinated agreements may not have been effective at preserving biodiversity and ecological services in a post-CO2-doubled era. For example, demand for dry-year groundwater supplies from the Merced Irrigation District is likely to intensify in the coming decades. Water supplies for wetlands or other ecologically-important areas may not remain financially competitive with demand from urban and agricultural regions. State or federal legislation may be required to guarantee available funding to purchase water supplies for wetlands and other regions of ecological value. To the extent Californians make wetland preservation and restoration a priority, costs would involve water procurement, development of conveyance infrastructure, and more intensive research into and oversight of ecologically-important areas to evaluate the effectiveness of existing management practices.7 Benefits would include those provided by the wetlands themselves (Table 3) as well as existence values.
WATER REALLOCATION IN A CONTEXT OF EVOLVING WATER-MANAGEMENT INSTITUTIONS

As long as the underlying demographic, ecological, economic, and physical conditions in a region do not vary significantly, there is little reason to examine the existing boundaries and authorities of local water agencies. In the coming decades, however, all of these conditions are likely to change and in most cases already are changing. Vaux (1991) has described California water institutions as inflexible, established in bygone eras, and poorly suited for today's water-management challenges. Yet water institutions do have a history of incremental adaptation to changing hydrological, demographic, economic, cultural, environmental, and other trends. Even property rights to water evolve as state and
federal judicial decisions are rendered and as legislatures revise the water code. Property rights to water in California, for example, have evolved throughout the 20th century away from a more market-friendly form to one that encompasses a wide range of social values and inhibits market-based transfers. Environmental rights, area-of-origin rights, state Public Trust duties, and other abridgements to private property rights have been recognized by courts and have undermined the "single decision-maker" principle necessary for low-transaction-cost markets (Haddad, 2000). How society is organized to manage its water resources also is evolving. At the local level, water procurement, treatment, and delivery are managed by private, mutual, and public agencies. These agencies commonly exercise monopoly power within their service territories. Historically, water agency8 boundaries have been set for many reasons. They may have been defined by the service territories of earlier private water companies, according to existing municipal or county boundaries, in alignment with a one-time expectation of a region's long-term economic growth, or according to watershed boundaries. Territories typically are not large. As of mid-1998, the Association of California Water Agencies had more than 430 member-agencies. Steps toward local realignments are already occurring. In terms of changing authorities, and with the growth in the use of recycled water, agencies that once focused on water treatment are now entering into water-supply activities. And with respect to changing boundaries, neighboring or nearby agencies that have identified potential benefits from closely-coordinated management are discussing or have created avenues for coordination. Three examples illustrate this trend toward contractual integration of water agencies seeking to enhance supply reliability. Along California's central coast, the County of Santa Cruz is reviewing its role in county-wide water management.
California's second smallest county, Santa Cruz is experiencing rapid economic and demographic growth, as well as continuing overdraft of key aquifers, reduction in surface-water supplies, declining water quality, and degradation of fish habitat. There are nine city and autonomous special-district water agencies, as well as over 150 smaller private and mutual water providers in the county. Each operates fairly independently, even though surface water and aquifers are shared. In 1998, the county established a pilot Water Resources Management Program with the initial goals of identifying existing data on water monitoring, supply and demand, and conservation within county limits. This effort is intended to identify information gaps, encourage sharing of information among water agencies, and, ultimately, to arrive at regional solutions to shared problems. For example, agreements may be reached
between surface-water-dependent and groundwater-dependent agencies on mutually-beneficial sharing during droughts and wet periods. Similarly, the Metropolitan Water District of Southern California (MWD) has entered into long-term water-storage agreements with two California agricultural water districts whose territories overlie confined aquifers: the Semitropic Water Storage District and the Arvin-Edison Water Storage District. MWD will ship surplus wet-year surface supplies to these districts for groundwater storage purposes and then recall the water in dry years. MWD will pay the districts for the services of groundwater storage and delivery. And third, in an effort to secure additional State Water Project contractual water rights, the Castaic Lake Water Agency, also located in southern California, purchased an entire agricultural water district, the Devil's Den Water District, and then formed a joint powers authority to facilitate the agriculture-to-urban water transfer. All of these examples involve or envision contractual relations between separate agencies; one involves a merger. As the importance and complexity of water management grows over time, closer operational relationships may become necessary. Contractual commitments could eventually turn into de facto or actual mergers. As the institutional structure of water management evolves, so too will the vocabulary that describes a central activity of current and future water management: water reallocation. Weyant et al. (1996) recognize the importance of accounting for water allocation "among competing ends" when discussing the interactive nature of IAMs. The language of water reallocation used today is dependent upon the existing boundaries and roles of water-management authorities (local water agencies as well as state and federal agencies). 
Table 4 links four common water-reallocation techniques with their connections to prevailing water-management authorities and the flow of money related to the water reallocation taking place. The techniques are: Water conservation. Water conservation describes a number of practices undertaken by industrial, residential, and agricultural end-users to reduce consumption. Saved water is then reallocated by the region's water agency to other/new users within the same agency or held in anticipation of drought periods or future growth. Because there is no direct connection between original users and new users, the financial resources of potential new users cannot be tapped directly to increase the incentive to conserve. Money flows from new users to agencies in the form of hook-up fees and monthly water fees. Money sometimes also flows from the agency to conservers in the form of
Table 4. Links between Water-reallocation Vocabulary, Water-agency Boundaries, and Flows of Money.
Management Technique | Relationship to Existing Water Agencies | Money Flow
Water Conservation | Intra-agency | New user to agency; some rebate money from agency to former user
Water Recycling* | Inter- and Intra-agency | New users to treatment plant or water-supply authority
Water Marketing | Inter- and Intra-agency | New user to former user and former water agency
Conjunctive Use/Groundwater Substitution | Inter- and Intra-agency | Same as water marketing

* Recycling creates an opportunity for reallocation unless treatment-plant effluent proceeds directly to an ocean outfall or saline sink.
Note: The term water agency is used to mean water agencies, irrigation districts, and any other authorities that administer or oversee the delivery of fresh water within a service territory.
rebates on conserving technologies, but conservation is also encouraged on a voluntary basis. Individual agencies oversee conservation programs.
Water recycling. Water recycling involves the advanced treatment of post-consumer flows and their reallocation to urban, suburban, and agricultural uses. Unless the original wastewater outfalls are located along a coastline or above an unusable saline aquifer, recycling represents a reallocation from instream flows, groundwater recharge, and/or downstream uses to other (typically local) uses. Money flows from new users to agencies involved in wastewater treatment or water supply. Numerous authorities oversee recycling-based reallocation, including water supply, environmental protection, and public health agencies (Haddad, 2000).

Water marketing. Water marketing involves the voluntary, compensated reduction or cessation of use of water, typically by agricultural users, and its reallocation to other farmers, urban regions, and environmental or other public purposes. Decision-makers typically include the original end-user, the original end-user's water agency, and a purchasing water agency. Money flows from purchasing agencies to the selling end-user(s) and their water agencies. Oversight mechanisms typically are unique to each agreement.
Conjunctive use/groundwater substitution. The coordinated use of surface and ground waters available to a single water user is generally known as conjunctive use. One form of conjunctive use is groundwater substitution for surface water. Here, a water user or water agency transfers a surface-water right to a new user and then pumps groundwater to replace the surface supply. Money flows from the purchasing agency to the original user and water agency. Guy and Cordua (1997) note that "conjunctive use programs can silently change priorities for water use . . . and lead to a reallocation of agricultural water to other uses." In fact, two reallocations occur: one from the original surface-water user to the new user, and one from those who share the aquifer to the former surface-water user. Oversight is provided by the water agency where the conjunctive use is occurring, as well as state water authorities. The four reallocation techniques described above are dependent for their meaning in large part on the current institutional structure of water management. If the boundaries and authorities of water agencies change, so too will their roles in reallocation and the language we use to describe those roles. With climate change, a very different list of leading options for reallocating water may emerge.
CONCLUSIONS: ESTIMATING THE COST OF WATER-SECTOR ADAPTATION TO CLIMATE CHANGE

Current approaches to evaluating potential water-related adaptation to climate change offer an incomplete picture that may at times result in over-estimations and at other times in under-estimations of actual costs. With respect to over-estimation, both the methods and the "mindset" necessary to adapt effectively to climate change exist in the water-management sector. Structural and institutional adaptations are available to water managers, in some cases as complements to each other and in other cases as substitutes. The costs of structural adaptation (flood-control projects, for example) are easier to estimate than those of institutional adaptation but are typically more expensive. If one focuses on structural adaptations, one's cost estimates will likely be higher than actual costs. In terms of under-estimation, authors discussing adaptation to climate change almost universally call for increasingly-efficient distributions of water, typically through pricing/marketing systems (e.g. Frederick & Gleick, 1999; Watson et al., 1998; Shriner & Street, 1998; McAnally et al., 1997; Kaczmarek et al., 1996). Examples of integrative modeling efforts utilizing least-cost
economic adaptation are found in papers by Kochendorfer and Ramirez (1996) and Hurd et al. (1996). But efficiency criteria are just one set of criteria and may in some cases present infeasible policy options. The social values that inform water reallocation may differ from region to region and may not consistently call for least-cost ordering. For example, disagreements over adaptation priorities, such as enhanced flood control vs. late-summer supply reliability, are inevitable, will extend negotiation times, and could result in outcomes that are not least-cost. Models that utilize least-cost ordering of adaptations could therefore underestimate actual adaptation costs. New approaches to estimating the cost of adaptation to climate change could reduce the controversy and increase the reliability of such studies. Regional impact studies should engage climatic, hydrological, and economic details as well as real-world decision-making patterns. The level of detail needed cannot be achieved at a continental level, and in some cases not even at a national scale. This suggests that a significant reduction in the geographic scope of such studies is needed. IAMs should include adaptation lag times and algorithms that do not always track the least-cost adaptation. They should also include financial impacts associated with inter-regional and inter-sectoral water reallocation. One can account for a region's net water loss, the current focus of damage estimations, in the larger context of regional reallocation by relaxing the input-output assumption that one sub-region's (or economic sector's) loss of water is another sub-region's (or sector's) gain. Impacts of net losses (or gains) of available water can then be evaluated in the larger context of reallocation of existing water resources. This approach could also help to reintegrate water-related discussions that currently reside in separate chapters or sub-sections of existing impact/adaptation studies.
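The underestimation argument can be sketched numerically: a model that always adopts the cheapest adaptation first reports a lower total cost than one in which negotiated priorities remove an option from the ordering. All options, yields, and unit costs below are invented for illustration:

```python
# Toy comparison of least-cost vs. negotiated adaptation orderings.
# All options, yields (million m^3/yr), and unit costs ($/m^3) are invented.

options = [
    ("urban conservation",       40, 0.05),
    ("water marketing",          60, 0.10),
    ("groundwater substitution", 50, 0.20),
    ("new surface storage",      80, 0.45),
]
TARGET = 120  # million m^3/yr of supply to be replaced

def total_cost(ordering, target=TARGET):
    """Adopt options in the given order until the target is met; return $."""
    cost, supplied = 0.0, 0.0
    for name, yield_mm3, unit_cost in ordering:
        take = min(yield_mm3, target - supplied)
        cost += take * 1e6 * unit_cost
        supplied += take
        if supplied >= target:
            break
    return cost

# What an IAM with least-cost ordering assumes: cheapest options first.
least_cost = total_cost(sorted(options, key=lambda o: o[2]))

# A negotiated outcome: suppose water marketing is politically blocked,
# so managers move directly from conservation to costlier options.
negotiated = total_cost([o for o in options if o[0] != "water marketing"])

print(least_cost, negotiated)
```

Under these invented numbers the negotiated path costs roughly twice as much as the least-cost path to reach the same supply target, which is the sense in which least-cost ordering can understate actual adaptation costs.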
NOTES

1. The IPCC was jointly established in 1988 by the World Meteorological Organization and the United Nations Environment Programme to assess the scientific and technical literature on climate change, the potential impacts of changes in climate, and options for the adaptation to and mitigation of climate change.
2. In a footnote to his discussion, Fankhauser notes that this specific methodology results in an over-estimation, but does not critique the general approach. By way of contrast, to estimate the cost to an urban region of reduced water consumption during a drought, Fisher et al. (1995) measure lost consumer surplus. Estimated costs of reduced consumption from studies using this and similar approaches range from $0.04/cubic meter to $0.15/cubic meter, which are lower than Fankhauser's cost of $0.42/cubic meter.
Regional Adaptation to Climate Change
89
3. Even the less obvious categories, such as air pollution and financial services, have water-management components to them. Air quality can be improved by transferring potentially airborne pollutants, such as toxics, to a liquid medium prior to disposal, which could create water-related disposal challenges; financing for development projects may be contingent in part upon long-term availability of water resources.
4. For the sake of economy, all references to CO2 are intended to mean all greenhouse gases measured in CO2-equivalent form.
5. Filipo et al.'s model includes six states: Arizona, California, Colorado, Nevada, New Mexico, and Utah.
6. The capital-intensive (both financial and social) and technology-intensive nature of many of these recommendations means that they are appropriate primarily for industrialized nations. Flood-management challenges in developing nations would be significantly different. They might emphasize, for example, emergency relief and post-flood provision of sanitation services and disease control.
7. For wetlands that are linked to flood-control programs, climate change could result in additional flows during flood events.
8. The term water agency is used to mean water agencies, water districts, and any other authorities that administer or oversee the delivery of fresh water within a service territory.
REFERENCES

Ahrens, C. (1991). Meteorology Today: An Introduction to Weather, Climate, and the Environment. St. Paul, Minn.: West Publishing.
Allen, C., & Breshears, D. (1998). Drought-induced Shift of a Forest-Woodland Ecotone: Rapid Landscape Response to Climate Variation. Proceedings of the National Academy of Sciences, 95, 14839-14842.
Bardossy, A. (1997). Downscaling from GCMs to Local Climate Through Stochastic Linkages. Journal of Environmental Management, 49, 7-17.
Boland, J. (1998). Water Supply and Uncertainty. In: K. Schilling & E. Stakhiv (Eds), Global Change and Water Resources Management. Carbondale, Ill.: Universities Council on Water Resources.
Brown, J., Valone, T., & Curtin, C. (1997). Reorganization of an Arid Ecosystem in Response to Recent Climate Change. Proceedings of the National Academy of Sciences, USA, 94, 9729-9733.
California Department of Water Resources (DWR). (1994). The California Water Plan Update, Bulletin 160-93.
California Energy Commission (CEC). (1989). The Impacts of Global Warming on California. Interim Report.
Chen, Z., Levent Kavvas, M., Tan, L., & Soong, S. (1996). Development of a Regional Atmospheric-Hydrological Model for the Study of Climate Change in California. Proceedings, North American Water and Environment Congress. Anaheim, Ca.: American Society of Civil Engineers.
Cline, W. (1992). The Economics of Global Warming. Washington, D.C.: Institute for International Economics.
Dennis, A. (1991). Initial Climate Change Scenario for the Western United States. Denver: Bureau of Reclamation Global Climate Change Response Program.
90
BRENT M. HADDAD AND KIMBERLY MERRITT
Dukes, J., & Mooney, H. (1999). Does Global Change Increase the Success of Biological Invaders? Trends in Ecology & Evolution, 14(4), 135-139.
Ewel, K. (1997). Water Quality Improvement by Wetlands. In: G. C. Daily (Ed.), Nature's Services. Washington, D.C.: Island Press.
Fankhauser, S. (1995). Valuing Climate Change: The Economics of the Greenhouse. London: Earthscan Publications Ltd.
Field, C., Jackson, R., & Mooney, H. (1995). Stomatal Responses to Increased CO2: Implications from the Plant to the Global Scale. Plant, Cell & Environment, 18(10), 1214-1225.
Filipo, G., Broder, C., & Bates, G. (1994). Regional Climate Change Scenarios over the United States Produced with a Nested Regional Climate Model. Journal of Climate, 7, 375-399.
Fisher, A., Fullerton, D., Hatch, N., & Reinelt, P. (1995). Alternatives for Managing Drought: A Comparative Cost Analysis. Journal of Environmental Economics and Management, 29, 304-320.
Frederick, K., & Gleick, P. (1999). Water and Global Climate Change: Potential Impacts on the U.S. Water Resources. Arlington, Va.: The Pew Center on Global Climate Change.
Gleick, P., Loh, P., Gomez, S., & Morrison, J. (1995). California Water 2020: A Sustainable Vision. Oakland, Ca.: Pacific Institute for Studies in Development, Environment, and Security.
Guy, D., & Cordua, J. (1997). Conjunctive Use from the Ground Up: The Need to Protect Landowners' Rights to Groundwater. Proceedings of the 21st Biennial Groundwater Conference. Sacramento, Ca.: Groundwater Resources Association.
Haddad, B. (2000). Rivers of Gold. Washington, D.C.: Island Press.
Haddad, B. (2000). The Monterey County Water Recycling Project: An Institutional Study. Journal of Water Resources Planning and Management, forthcoming.
Harte, J., Torn, M., Chang, F., Feifarek, B., Kinzig, A., Shaw, R., & Shen, K. (1995). Global Warming and Soil Microclimate: Results from a Meadow Warming Experiment. Ecological Applications, 5, 132-150.
Hurd, B., Kirshen, P., & Callaway, M. (1996). Modelling Climate Change Impacts on Water Resources. Proceedings, North American Water and Environment Congress. Anaheim, Ca.: American Society of Civil Engineers.
Huxman, T., Hamerlynck, E., Moore, B., Smith, S., Jordan, D., Zitzer, S., Nowak, R., Coleman, J., & Seemann, J. (1998). Photosynthetic Down-Regulation in Larrea tridentata Exposed to Elevated Atmospheric CO2: Interaction with Drought under Glasshouse and Field (FACE) Exposure. Plant, Cell & Environment, 21, 1153-1161.
Kaczmarek, Z., Arnell, N., Stakhiv, E., Hanaki, K., Mailu, G., Somlyódy, L., Strzepek, K., Askew, A., Bultot, J., Kindler, J., Kundzewicz, Z., Lettenmaier, D., Liebscher, H., Lins, H., Major, D., Pittock, A., Rutashobya, D., Savenije, H., Somorowski, C., & Szesztay, K. (1996). Water Resources Management. In: R. Watson, M. Zinyowera & R. Moss (Eds), Climate Change 1995: Impacts, Adaptations and Mitigation of Climate Change: Scientific-Technical Analysis. Contribution of Working Group II to the Second Assessment Report of the Intergovernmental Panel on Climate Change. New York: Cambridge University Press.
Kaczmarek, Z., & Napiórkowski, J. (1996). Water Resources Adaptation Strategy in an Uncertain Environment. In: J. Smith, N. Bhatti, G. Menzhulin, R. Benioff, M. Budyko, M. Campos, B. Jallow, & F. Rijsberman (Eds), Adapting to Climate Change: Assessments and Issues. New York: Springer.
Kingsolver, J., Huey, R., & Kareiva, P. (1993). An Agenda for Population and Community Research on Global Change. In: P. Kareiva, J. Kingsolver, & R. Huey (Eds), Biotic Interactions and Global Change. Sunderland, Mass.: Sinauer Associates, Inc.
Knox, J., & Scheuring, A. (Eds) (1991). Global Climate Change and California. Berkeley: University of California Press.
Kochendorfer, J., & Ramirez, J. (1996). Integrated Hydrological/Ecological/Economic Modeling for Examining the Vulnerability of Water Resources to Climate Change. Proceedings, North American Water and Environment Congress. Anaheim, Ca.: American Society of Civil Engineers.
Kusler, J. (1983). Our National Wetland Heritage: A Protection Guidebook. Washington, D.C.: Environmental Law Institute.
Lane, M., Kirshen, P., & Vogel, R. (1999). Indicators of Impacts of Global Climate Change on U.S. Water Resources. Journal of Water Resources Planning and Management, 125(4), 194-204.
Leverson, V. (1997). Potential Regional Impacts of Global Warming on Precipitation in the Western United States. Denver: Bureau of Reclamation Global Climate Change Response Program.
Lockwood, J., & Duncan, J. (1999). The Anthropogenic Homogenization of the World's Fish Fauna. Unpublished manuscript.
Loik, M., & Harte, J. (1997). Changes in Water Relations for Leaves Exposed to a Climate-Warming Manipulation in the Rocky Mountains of Colorado. Environmental and Experimental Botany, 37, 115-123.
Loik, M., Redar, S., & Harte, J. (1999). Photosynthetic Responses to a Climate Warming Manipulation for Contrasting Meadow Species in the Rocky Mountains, U.S.A. Unpublished manuscript.
McAnally, W., Burgi, P., Calkins, D., French, R., Holland, J., Hsieh, B., Miller, B., & Thomas, J. (1997). Water Resources. In: R. Watts (Ed.), Engineering Response to Global Climate Change. New York: Lewis Publishers.
McKinney, M., & Lockwood, J. (1999). Biotic Homogenization: A Few Winners Replacing Many Losers in the Next Mass Extinction. Trends in Ecology and Evolution, 14(11), 450-453.
Melillo, J., McGuire, A., Kicklighter, D., Moore, B., Vorosmarty, C., & Schloss, A. (1993). Global Climate Change and Terrestrial Net Primary Production. Nature, 363, 234-240.
Mitsch, W., & Gosselink, J. (1993). Wetlands. New York: John Wiley & Sons, Inc.
Moran, J., & Morgan, M. (1997). Meteorology: The Atmosphere and the Science of Weather (5th ed.). Upper Saddle River, N.J.: Prentice-Hall.
National Research Council. (1992). Restoration of Aquatic Ecosystems: Science, Technology, and Public Policy. Washington, D.C.: National Academy Press.
Nobel, P. (1991). Physicochemical and Environmental Plant Physiology. San Diego, Ca.: Academic Press.
Nordhaus, W. (1998). Assessing the Economics of Climate Change. In: W. Nordhaus (Ed.), Economics and Policy Issues in Climate Change. Washington, D.C.: Resources for the Future.
Public Advisory Forum. (1997). Climate Change and Water Resources. Journal of the American Water Works Association, 89(11), 107-110.
Pupako, A. (1993). Variations in Northern Sierra Nevada Streamflow: Implications of Climate Change. Water Resources Bulletin, 29(2), 283-290. U.S. Geological Survey, Carson City, Nv.
Reilly, J. (1998). Comments: Climate Change Damages. In: W. Nordhaus (Ed.), Economics and Policy Issues in Climate Change. Washington, D.C.: Resources for the Future.
Riley, J., Sikka, A., Limaye, A., Gunderson, R., Bingham, G., & Hansen, R. (1996). Water Yield in Semiarid Environments under Projected Climate Change. Provo, Utah: Bureau of Reclamation Global Climate Change Response Program.
Rind, D., Rosenzweig, C., & Stieglitz, M. (1997). The Role of Moisture Transport between Ground and Atmosphere in Global Change. Annual Review of Energy and the Environment, 22, 47-74.
Sandberg, J., & Manza, P. (1991). Evaluation of Central Valley Project Water Supply and Delivery System. Sacramento, Ca.: Bureau of Reclamation Global Climate Change Response Program.
Shriner, D., & Street, R. (1998). North America. In: R. Watson, M. Zinyowera, & R. Moss (Eds), The Regional Impacts of Climate Change: An Assessment of Vulnerability. New York: Cambridge University Press.
Smith, J., Bhatti, N., Menzhulin, G., Benioff, R., Budyko, M., Campos, M., Jallow, B., & Rijsberman, F. (1996). Adapting to Climate Change: Assessments and Issues. New York: Springer.
Stakhiv, E. (1996). Managing Water Resources for Climate Change Adaptation. In: J. Smith, N. Bhatti, G. Menzhulin, R. Benioff, M. Budyko, M. Campos, B. Jallow, & F. Rijsberman (Eds), Adapting to Climate Change: Assessments and Issues. New York: Springer.
Steiner, R. (1998). Climate Change and Municipal Water Use. In: K. Schilling & E. Stakhiv (Eds), Global Change and Water Resources Management. Carbondale, Ill.: Universities Council on Water Resources.
Taiz, L., & Zeiger, E. (1998). Plant Physiology (2nd ed.). Sunderland, Mass.: Sinauer Associates.
U.S. Environmental Protection Agency (1997). Climate Change and California. EPA 230-F-97-008e. Washington, D.C.: U.S. Government Printing Office.
Vaux, H., Jr. (1991). Global Climate Change and California's Water Resources. In: J. Knox & A. Scheuring (Eds), Global Climate Change and California. Berkeley: University of California Press.
Ward, G., & Proesmans, P. (1996). Hydrological Predictions for Climate Change Modeling. Denver, Co.: Bureau of Reclamation Global Climate Change Response Program.
Water Education Foundation (1998). Layperson's Guide to Flood Management. Sacramento, Ca.: Water Education Foundation.
Watson, R., Zinyowera, M., & Moss, R. (Eds) (1996). Climate Change 1995: Impacts, Adaptations and Mitigation of Climate Change: Scientific-Technical Analysis. Contribution of Working Group II to the Second Assessment Report of the Intergovernmental Panel on Climate Change. New York: Cambridge University Press.
Watson, R., Zinyowera, M., & Moss, R. (1998). Summary for Policymakers. In: The Regional Impacts of Climate Change: An Assessment of Vulnerability. New York: Cambridge University Press.
Watts, R. (Ed.) (1997). Engineering Response to Global Climate Change. New York: Lewis Publishers.
Weyant, J., Davidson, O., Dowlatabadi, H., Edmonds, J., Grubb, M., Parson, E., Richels, R., Rotmans, J., Shukla, P., Tol, R., Cline, W., & Fankhauser, S. (1996). Integrated Assessment of Climate Change: An Overview and Comparison of Approaches and Results. In: J. Bruce, H. Lee, & E. Haites (Eds), Climate Change 1995: Economic and Social Dimensions of Climate Change. Contribution of Working Group III to the Second Assessment Report of the Intergovernmental Panel on Climate Change. New York: Cambridge University Press.
Wilkinson, R., & Rounds, T. (Eds) (1998). Potential Impacts of Climate Change and Variability for the California Region. Santa Barbara, Ca.: California Regional Workshop Report.
World Resources Institute, United Nations Environment Programme, United Nations Development Programme, and World Bank. (1998). World Resources, 1998-99. New York: Oxford University Press.
CLIMATE VARIABILITY AND CLIMATE CHANGE: IMPLICATIONS FOR AGRICULTURE

Richard M. Adams, C. C. Chen, Bruce A. McCarl and David E. Schimmelpfennig

ABSTRACT

Crop yield variability is a defining characteristic of agriculture. Variations in yield and production are strongly influenced by fluctuations in weather. Concern has been expressed about the consequences of the buildup of greenhouse gases (GHGs) in the atmosphere on long-term climate patterns, including the frequency of extreme events, and the subsequent effect on crop yields and yield variability. In this chapter we present background on the variability issue, including a review of the physical and human dimensions of climate change as related to agricultural production. We also present the results of two recent studies; the first focuses on the effects of climatic variability on yields and the second on the effects of increases in extreme weather events on agriculture. The first study shows that temperature and precipitation changes affect both the means and variances of crop yields, usually in opposite ways, e.g. under increasing temperatures, corn yields decrease and yield variance increases, while increases in precipitation increase corn yields and reduce variability. In the second study, increases in the frequency and strength of one type of extreme event, the El Niño-Southern Oscillation or ENSO, result in
The Long-Term Economics of Climate Change, pages 95-113. Copyright © 2001 by Elsevier Science B.V. All rights of reproduction in any form reserved. ISBN: 0-7623-0305-0 95
96
R.M. ADAMS ET AL.
economic damages to agriculture. These damages can be averted by using forecasts of such events in agricultural planting decisions.
INTRODUCTION

Crop yield variability is a defining characteristic of agriculture. Variations in yield and production are strongly influenced by fluctuations in weather, both in terms of overall seasonal weather characteristics and extreme events. There has been substantial public policy interest concerning the consequences of the buildup of greenhouse gases (GHGs) in the atmosphere on long-term climate patterns and associated crop yield effects (Adams et al., 1990; Mendelsohn et al., 1994; Rosenzweig & Hillel, 1995; IPCC, 1996). Identification and prediction of seasonal-to-interannual climate phenomena like the El Niño-Southern Oscillation (ENSO) have also brought attention to possible short-term impacts of climate changes on agriculture. A range of global crop yield effects has been attributed to ENSO and other ocean circulation patterns (Cane et al., 1994). These long- and short-term climatic phenomena are expected to alter the mean and variance of crop yields. Variability also arises because of the influence of changing production practices such as the introduction of new tools, new hybrids and varieties or cultivars, development of new diseases and pests, and government policy. While some feel that agricultural production is likely to become more variable because of climatic shifts (Mearns et al., 1997), others argue variability is increasing because of increased use of fertilizer and other managed inputs (Roumasset et al., 1987; Tollini & Seagraves, 1970). Greater correlation among regional production within and between countries, caused by standardization of varieties, adoption of common varieties, more uniform planting practices, and timing, is also believed to contribute to greater variability in production. In this chapter we present background on the variability issue, including a review of the physical and human dimensions of climate change as related to agricultural production.
We also present the results of two recent empirical studies on climatic change and variability. We organize our discussion around three questions:

1. What hypotheses have been advanced about climate change and variability?
2. Do current data on yields suggest climate change will increase variability?
3. What are the economic consequences of extreme events becoming more common?
Climate Variability and Climate Change: Implications for Agriculture
97
The exploration of these three questions draws on research and literature reviews from both the topic of long-term climate change and the issue of shorter term climatic variations as exemplified by ENSO-type events.
HYPOTHESES CONCERNING CLIMATE CHANGE, CLIMATIC VARIABILITY AND AGRICULTURE

Plant systems, and hence crop yields, are influenced by many environmental factors, and these factors, such as moisture and temperature, may act either synergistically or antagonistically with other factors in determining yields (Waggoner, 1983). Plant scientists explore these effects on yields using two general approaches, controlled experiments and simulation models. Controlled field experiments can generate information on how the yield of a specific crop variety responds to a given stimulus, such as water or temperature. Such experiments are useful in isolating the influences of a specific factor. However, most quantitative estimates of climate change effects on crop yields are derived from crop simulation models (e.g. Rosenzweig & Parry, 1994) because climate change is likely to cut across a host of environmental factors. Plant scientists also use crop simulators to assess the influence of climate variability on the variability of yields (Riha et al., 1996). While the use of crop simulation models makes tractable the assessment of climate effects across a range of crops, such models are sensitive to the variability of weather conditions that affect production in the field. Thus, it is important to simulate how climate change will affect weather patterns in the field. A number of arguments have been made which relate climate change to changes in the weather that agriculture is likely to face in the future.¹ First, it has been argued that an increase in mean (maximum and/or minimum) temperature will increase the likelihood of extreme daily temperature events; i.e. a small change in mean temperature will produce a relatively large change in the probability of extremes occurring, since the frequency of such events is nonlinear with the change in mean temperature (Mearns et al., 1984; Katz & Brown, 1992).
Second, a number of simulations performed with General Circulation Models (GCMs) show seasonal weather patterns change in selected regions or latitude bands. For example, in the northern and mid-latitudes, the daily variance of temperature increases in summer but tends to decrease in winter. In turn, the frequency of extreme high temperature events rises due both to the mean shift in temperature and the greater variance. Simultaneously, there are decreases in low extremes in winter due to warmer overall mean conditions and the decrease in variance (Meehl et al., 1999).
There is a third weather effect suggested by GCM model simulations. Specifically, GCM studies have found a tendency for increased precipitation intensities, and this result continues to be found in recent studies. For example, Zwiers and Kharin (1998) found that mean precipitation increased by about 4% and extreme rainfall values increased by 11% over North America in a doubled CO2 experiment. Other GCM studies have found a tendency toward mid-continental drying in summers under increases in CO2, which results from increases in temperature and decreases in summer precipitation (e.g. Wetherald & Manabe, 1999). Some studies have shown increased intensity of tropical cyclones, but the models are still too coarse to resolve many important features of such storms (e.g. the eyes of hurricanes). Similarly, several studies suggest that with a warmer base condition, precipitation extremes associated with El Niño events may become more extreme, i.e. more intense droughts and flooding conditions may be found (e.g. Meehl, 1996; Timmermann et al., 1999). This literature suggests that weather patterns may become more variable under a changing (warming) climate. From the standpoint of understanding effects on economic welfare, these possible changes in frequencies of weather events must be linked to things people value. In agriculture, this is the provision of food and fiber commodities. How food and fiber production (crop yields) will be affected by climatic variability depends on a number of factors, including both biophysical and economic responses to climate change. These factors are reviewed below.
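The claim that a small shift in mean temperature (or a small increase in its variance) produces a disproportionately large change in the frequency of extremes can be illustrated with a back-of-the-envelope calculation. The figures below are illustrative assumptions only (a normal distribution of daily maximum temperature with arbitrary parameters), not values drawn from the studies cited:

```python
from statistics import NormalDist

# Illustrative assumptions: daily max temperature ~ Normal(30°C, 3°C),
# with 38°C treated as the "extreme" threshold.
base = NormalDist(mu=30.0, sigma=3.0)
warmed = NormalDist(mu=31.0, sigma=3.0)        # +1°C shift in the mean
more_variable = NormalDist(mu=30.0, sigma=3.5)  # wider daily spread, same mean

threshold = 38.0
p_base = 1 - base.cdf(threshold)
p_warm = 1 - warmed.cdf(threshold)
p_var = 1 - more_variable.cdf(threshold)

print(f"P(T > {threshold}) baseline:       {p_base:.4f}")
print(f"P(T > {threshold}) +1°C mean:      {p_warm:.4f} ({p_warm / p_base:.1f}x baseline)")
print(f"P(T > {threshold}) +0.5°C spread:  {p_var:.4f} ({p_var / p_base:.1f}x baseline)")
```

A roughly 3% change in the mean more than doubles the exceedance probability in this toy setup, which is the nonlinearity Mearns et al. (1984) and Katz and Brown (1992) emphasize.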
The Biophysical Dimensions The agroecosystem is a complex system of interactions between atmosphere and climate, nutrients and soils, and biological factors such as plant type and pressures from biological stressors (i.e. weed competition, insects, and diseases). Both crop and livestock systems are influenced by many climatic and environmental factors, many of which work in concert either synergistically or antagonistically (Rosenzweig & Hillel, 1995). Crops, for example, respond directly to changes in temperature, moisture, and carbon dioxide. Livestock production may be affected by heat-induced appetite suppression, changes in the supply of feed crops, and changes in the extent and productivity of pasture and grassland (Hanson, et al., 1993). Factors such as temperature, rainfall, and carbon dioxide levels are discussed below.
Temperature. Temperature affects the rate of photosynthesis, and hence the rate at which plants absorb (and respire) carbon dioxide from (and to) the atmosphere. Temperature increases lead to higher respiration rates, can reduce
crop yields, and can lead to lower-quality grain, since the higher temperatures result in a shorter grain-filling period and, hence, smaller and lighter grains (Rosenzweig & Hillel, 1995). Optimum temperature ranges vary for different crops and crop varieties. For example, the optimal range for many C3 plants is 15°C-20°C, and for C4 plants it is 25°C-30°C. Climatic changes can alter the suitable geographic range of crops, leading to possible changes in the types and extent of crops in some areas. Temperatures in many low latitude countries are often close to the thermal tolerances of many crops grown in these countries. Temperature changes can interact very closely with changes in the availability of water and nutrients. For example, elevated temperatures lead to increased evaporation and transpiration rates and, hence, diminished soil moisture (depending, of course, on changes in rainfall patterns and other climatic variables). Soil nutrient levels could be affected by increased rates of decomposition induced by higher temperature.
Rainfall. Without increases in rainfall, soil moisture will decrease as a result of higher temperatures. Averaged across the globe, rainfall is expected to increase. The changes, however, will not be uniform; some areas may experience decreases while others may receive increases. In addition, some evidence suggests that more rain will fall in heavy rainfall events, and dry periods between such events may increase in some areas. The combined effect of rainfall and temperature changes on soil moisture will vary by location and by season. In areas where dryland crop yields are currently limited by soil moisture, increases in soil moisture during critical development stages would decrease water stress and increase yields (ignoring the direct effect of temperature on plant physiology); decreases in soil moisture in these areas would decrease yields.
CO2 Concentrations. Increasing atmospheric CO2 concentrations generally increase the rate of photosynthesis, and can also increase plant water-use efficiency. This is sometimes referred to as the CO2 'fertilization effect', and it can partially mitigate the adverse effects of higher temperatures (Allen et al., 1987). However, the extent of this effect is uncertain and depends on which factors are most limiting to plant growth and development. Some studies have estimated yield increases of 30% and 7% for many C3 and C4 crops, respectively. However, there is concern about possible feedback effects that might further contribute to higher temperatures. The increase in water-use efficiency occurs because CO2 fertilization allows plants to reduce the rate of gas exchange in leaves (Kimball, 1983). Less water, therefore, is transpired across leaves, resulting in higher leaf temperatures that in turn may contribute to climate change by reducing precipitation and warming the surrounding
atmosphere. Hence, water-use efficiency attributed to CO2 concentrations may be offset somewhat by reductions in soil moisture. Estimates of fertilization effects are primarily based on greenhouse experiments in which water and nutrients are not limiting factors on plant growth. The experiments also do not address competition from weeds, which will also benefit from CO2, or changes in feeding of insects on crops, which may increase in a CO2-enriched world. Under field conditions, such factors may reduce the benefits of CO2 from those that have been achieved in experimental settings (Wolfe & Erickson, 1993).

Climate variability and extreme events. Crop and livestock systems are influenced by variation in climate and extreme events. There is significant value in understanding the sensitivity of agriculture to (and possibly projecting) changes in variability. For example, recent advances linking long-run weather forecasts to El Niño-Southern Oscillation (ENSO) events have the potential to benefit agriculture by providing valuable information about precipitation and temperature (Solow et al., 1998). Climate variability affects agricultural crops mainly through the frequency of climate extremes, which in many cases are more strongly affected by changes in variability than by changes in average climate. Climate variability is likely to change as radiative forcing increases average temperatures. Small changes in climate variability, as well as climate means, can produce relatively large changes in the frequency of extreme events. Some evidence indicates that the hydrologic cycle will be intensified such that droughts and floods will become more severe in some places. For example, the United States and other low- to mid-latitude regions are likely to experience increased rates of evapotranspiration as a result of climate change (Rosenzweig & Hillel, 1995). Where droughts and floods become more severe or frequent, agricultural losses would increase.
Effects of changes in climate variability or extreme events are only now beginning to be investigated. Indirect effects. In addition to the direct effects of climate change on agriculture, there are important indirect effects that can negatively affect production; with few exceptions, these have been largely ignored in assessments of climate change impacts. For example, sea level rise can inundate agricultural areas in low lying countries, such as Bangladesh, or at least require mitigation efforts along low-lying coastal regions. Indirect effects may also arise from changes in the incidence and distribution of pests and pathogens, rates of soil erosion and degradation, ozone levels, UV-B radiation, changes in runoff and groundwater recharge rates, and changes in capital or
technological requirements such as surface water storage and irrigation methods.
Role of Human Response and Adaptation to Climate Change

Humans have adapted agricultural systems and practices to changing economic and physical conditions by adopting new technologies (including investments in genetic improvements), changing crop mixes and cultivated acreages, and changing institutional arrangements. Such flexibility suggests significant human potential to adapt to climate change. For example, farm-level adaptations can be made in planting and harvest dates, crop rotations, selection of crops and crop varieties for cultivation, water consumption for irrigation, use of fertilizers, and tillage practices. These adaptations are the natural consequence of producers' goals of maximizing returns to their land resource. Each adaptation can lessen potential yield losses from climate change and improve yields where climate change is beneficial. At the market level, price and other changes can signal further opportunities to adapt. Trade, both international and intranational, can reallocate supplies of agricultural commodities from areas of relative surplus to areas of relative scarcity. In the longer term, anticipatory adaptation might include the development and use of new crop varieties that offer advantages under future climates, or investments in new water management and irrigation infrastructure as insurance against potentially less reliable rainfall. The consistent pattern of growth in global yields over the past fifty years (of approximately 2% per annum) suggests that crop yields will be higher in the future, with or without climate change. This growth is, in part, due to adoption of new technologies (Reilly & Hohmann, 1993). A fundamental question with regard to climate change is whether agriculture can adapt quickly and autonomously or whether the response will be slow and dependent on structural policies and programs.
Failure of assessments to account for adaptations will overstate the potential negative impacts, or understate potential positive gains associated with climate change.
Importance of Adaptation Assumptions in Economic Assessments Several studies describe substantial opportunities for adaptation to offset negative effects of climate change, but adaptation is not without costs (Schimmelpfennig et al., 1996). Changes in technology imply research and development costs, along with the costs of farm-level adoption, including possible physical and human capital investments. Changes in climate may add
stress to local and regional agricultural economies already dealing with longterm economic changes in agriculture. In addition, there may be barriers to adaptation that limit responses, such as the availability and access to financial resources and technical assistance, as well as the availability of other inputs, such as water and fertilizer. Uncertainty about the timing and rate of climate change also limits adaptation and, if expectations are incorrect, could contribute to the costs associated with transition and disequilibrium. Because explicit adaptation responses are difficult to project, no assessment of the agricultural effects of climate change can account for the full range of adaptation options likely to arise over the next century. Conversely, adaptation options incorporated into recent assessments may not be technically or economically feasible in some cases or regions. Generally, the capacity for adaptation is less in developing countries as a result of limited access to markets for crop inputs or outputs, and limited infrastructure development.
WILL CLIMATE CHANGE INCREASE YIELD VARIABILITY?

Almost all studies of the effects of climate change on agriculture assess climate change in the form of changes in mean temperature or precipitation (for example, Rosenzweig & Parry, 1994; Adams et al., 1990, 1999; or Mendelsohn et al., 1994). Thus, changes in the distribution (frequency) of weather events and the associated changes in crop yields have not been investigated. In this section we begin to explore the consequences of climate change on actual (field-level) crop yields. This discussion is based on a recent study that examines how year-to-year and region-to-region climate variation alters the distribution of crop yields.² In this analysis, variability influences of climate are investigated using state-level yields and acreage harvested for 25 years (1973 to 1997) drawn from U.S.D.A.-NASS (1999) Agricultural Statistics. These crop yield data are associated with state-level climate data from the NOAA Internet home page (1999), which includes time series observations for thousands of weather stations. The temperature data used are predominantly April to November averages, in turn averaged across all the weather stations in the NOAA data for a state. For regions growing predominantly winter wheat, however, the November to March average temperatures are used. Rainfall data are state annual totals, reflecting both precipitation falling directly on a crop, as well as inter-seasonal water accumulation. The methodology used to assess climate effects on yield variability is based on work by Just and Pope (1979), who developed a stochastic production function that allows examination of how factors such as climate influence the
Climate Variability and Climate Change: Implications for Agriculture
mean and variability of yields. Following Just and Pope (1979), Chen et al. (1999b) estimated production functions of the form y = f(X, β) + h(X, α)ε, where y is crop yield, f(·) is an average production function, and X is a set of independent explanatory variables (climate, location, and time period). The function h(·) is an explicit form for heteroscedastic errors, allowing estimation of variance effects. Estimates of the parameters of f(·) give the average effect of the independent variables on yield, while h²(·) gives the effect of each independent variable on the variance of yield. Chen et al. (1999a) present the results of the estimation. The elasticity estimates show how a 1% change in temperature or precipitation affects yields in percentage terms, and are reported in Table 1 for two functional forms, the linear and the Cobb-Douglas. The precipitation effect on corn, cotton, and sorghum is positive; the temperature effect is negative for these crops. This indicates that yields of these crops increase with more rainfall and decrease with higher temperatures, as expected. Elasticities for soybeans and wheat are mixed. Sorghum showed the highest elasticities for both rainfall and temperature. In terms of variability, the clearest results are obtained for corn, cotton, and sorghum, and do not depend on functional form (Table 2). Increases in rainfall decrease the variability of corn, cotton, and wheat yields. Corn yields are predictably more variable with higher temperatures. The variability effects of rainfall on cotton and sorghum are small, with a 1% increase in rainfall changing yield variability by half of 1% or less. Cotton and sorghum yield variance exhibits higher sensitivity to temperature, with a 1%
Table 1. Percentage Change in Average Crop Yield for a 1% Change in Climate.

                            Linear                        Cobb-Douglas
Production Function   Precipitation  Temperature   Precipitation  Temperature
Corn                     0.3273        -0.2433         1.5148       -2.9792
Cotton                   0.0371        -1.5334         0.4075       -0.7476
Sorghum                  2.8844        -2.0866         1.8977       -2.6070
Soybean                 -0.2068         0.0002         0.3464        N.S.
Wheat                   -0.1309        -0.5076         1.4178       -0.3721

Note: N.S. = not significant.
R. M. ADAMS ET AL.
Table 2. Percentage Change in Variance in Crop Yield for a 1% Change in Climate.

                                 Linear                       Cobb-Douglas
Yield Variability Function  Precipitation  Temperature  Precipitation  Temperature
Corn                          -9.7187        7.5058       -1.4461        0.8923
Cotton                        -0.3028      -10.9386       -0.0212       -3.5800
Sorghum                        0.5230       -5.3517        0.4802       -2.5633
Soybean                       -0.7932       -0.2739        0.8194        0.0586
Wheat                         -2.1572       -0.1035       -1.6473        5.0875
increase in temperature producing up to an 11% decrease in yield variability. Similarly large elasticities are obtained for rainfall effects on corn and wheat yield variability. These results are consistent across both functional forms. Soybean elasticities are all less than one, but sign inconsistency across functional forms confounds interpretation of these results. Finally, for perspective, the yield functions were evaluated using a different source of climate change data: the regional estimates of climate change arising under the Canadian and Hadley climate models generated for the U.S. Global Climate Change Research Program's National Synthesis, using the 2090 climate projections. This generates projections of the effects of these GCM-based climate forecasts on crop yield variance. Specifically, the projected precipitation and temperature changes from the GCMs for the selected regions were used to compute the projected yield changes. The results are given in Table 3 for the Cobb-Douglas functional form and show uniform decreases in corn and cotton yield variability, with mixed results for the other crops.
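The Just and Pope two-step estimation described above can be sketched in a few lines. The following is an illustrative implementation on synthetic data (all variable names and parameter values are invented for the example, not taken from Chen et al.): the mean function f is fit by least squares, and the variance function h² is recovered by regressing the log of the squared residuals on the same explanatory variables.

```python
import numpy as np

def just_pope_two_step(X, y):
    """Two-step Just-Pope estimation for y = f(X, b) + h(X, a) * e.

    Step 1: least squares for the mean function f (linear in X here).
    Step 2: regress log squared residuals on X to estimate the
    variance (heteroscedasticity) function h^2.
    X must include a leading column of ones.
    """
    # Step 1: mean function, f(X, b) = X @ b
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    # Step 2: variance function, log(e^2) regressed on X
    log_e2 = np.log(resid**2 + 1e-12)
    a, *_ = np.linalg.lstsq(X, log_e2, rcond=None)
    return b, a

# Synthetic example: rainfall raises mean yield; temperature lowers the
# mean and raises the variance (as the corn results above suggest).
rng = np.random.default_rng(0)
n = 5000
rain = rng.uniform(0.5, 1.5, n)
temp = rng.uniform(0.0, 1.0, n)
X = np.column_stack([np.ones(n), rain, temp])
y = (2.0 + 1.0 * rain - 0.5 * temp
     + np.exp(0.5 * (-1.0 + 2.0 * temp)) * rng.standard_normal(n))
b, a = just_pope_two_step(X, y)
print(b)  # mean-function coefficients, close to [2.0, 1.0, -0.5]
print(a)  # variance-function coefficients; positive temperature term
```

The elasticities in Tables 1 and 2 are then obtained by evaluating the fitted mean and variance functions at a 1% change in each climate variable.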
WHAT ARE THE ECONOMIC CONSEQUENCES OF EXTREME EVENTS BECOMING MORE COMMON?

The preceding analysis lends some support to hypotheses concerning effects of climate change on crop yield variability. A number of atmospheric scientists have also hypothesized that global climate change may alter the frequency and strength of extreme events. One example of an extreme event that has recently received considerable public attention is the ENSO climatic phenomenon. Timmermann et al. (1999) recently presented results from a modeling study
Table 3. Percentage Increase in Crop Variability for 2090, by GCM Scenario.

[Table 3 reports state-level (CA, CO, GA, IL, IN, IA, KS, LA, MN, MT, MS, NE, OK, SD, TX) percentage changes in corn, soybean, cotton, wheat, and sorghum yield variability under the Canadian and Hadley climate change scenarios; the individual entries are not legible in this reproduction.]
implying that global climate change would alter ENSO characteristics and cause:

• the mean climate in the tropical Pacific region to change towards a state corresponding to present-day El Niño conditions;
• stronger inter-annual variability with more extreme year-to-year climate variations;
• more skewed inter-annual variability, with strong cold events becoming more frequent.

ENSO events have been found to influence regional weather and, in turn, crop yields. Changes in crop yields have obvious economic implications. Several studies have estimated the value of farmers adapting to ENSO events. Results indicate that there is economic value to the agricultural sector from information on ENSO events (Adams et al., 1995; Solow et al., 1998). In terms of aggregate U.S. and world economic welfare, estimates of the value of using ENSO information in agricultural decision making have been in excess of $300 million annually. Such estimates imply that a shift in ENSO event frequency or strength may carry substantial economic consequences.
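The value-of-information estimates cited above rest on a simple comparison: expected returns when producers plant the best crop mix for the unconditional phase distribution versus when they condition the mix on the forecast phase. A stylized sketch follows; the crop-plan names and payoffs are entirely hypothetical, and only the phase probabilities are taken from the text.

```python
# Stylized value-of-ENSO-information calculation; payoffs are illustrative only.
phase_prob = {"el_nino": 0.238, "la_nina": 0.250, "neutral": 0.512}

# Hypothetical profit (per acre) of each crop plan under each ENSO phase.
payoff = {
    "drought_tolerant": {"el_nino": 90, "la_nina": 60, "neutral": 70},
    "high_yield":       {"el_nino": 40, "la_nina": 85, "neutral": 95},
}

def expected(plan):
    """Unconditional expected profit of committing to one crop plan."""
    return sum(phase_prob[p] * payoff[plan][p] for p in phase_prob)

# Without a forecast: pick the single plan with the best unconditional mean.
without_info = max(expected(plan) for plan in payoff)

# With a forecast: pick the best plan for each announced phase.
with_info = sum(
    phase_prob[p] * max(payoff[plan][p] for plan in payoff) for p in phase_prob
)

value_of_information = with_info - without_info
print(round(without_info, 2), round(with_info, 2), round(value_of_information, 2))
```

Because conditioning can never do worse than the best unconditional plan, the value of information is always non-negative; studies such as Solow et al. (1998) compute this quantity inside a full sector model rather than a payoff table.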
According to Timmermann et al. (1999), the current probability of ENSO event occurrence (with present-day concentrations of greenhouse gases) is 0.238 for the El Niño phase, 0.250 for the La Niña phase, and 0.512 for the Neutral (non-El Niño, non-La Niña) phase. They then project that the probabilities for these three phases will change under increasing levels of greenhouse gases. Under such a scenario, ENSO event frequencies are forecast to become 0.339, 0.351, and 0.310 for El Niño, La Niña, and Neutral, respectively. Thus, the frequencies of the two extreme phases, El Niño and La Niña, are expected to increase, while the Neutral phase frequency would be reduced. While not offering specific evidence, they argued that such a frequency change could be expected to have strong ecological and economic effects. The implications of such a shift were explored in an assessment by Chen et al. (1999a) using a model of the U.S. agricultural sector that allows for changes in production and consumption in the rest of the world. This model, known as the Agricultural Sector Model or ASM, has been frequently used in climate change assessments. The ASM represents production and consumption of primary agricultural products, including both crop and livestock products. Processing of agricultural products into secondary commodities is also included. The production and consumption sectors are assumed to be composed of a large number of individuals, each of whom operates under competitive market conditions. This leads to a model that maximizes the area under the demand curves less the area under the supply curves. The area between baseline supply and demand curves equals baseline economic welfare. Similarly, the area between supply and demand curves after a posited climate change equals the new economic welfare.
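For linear demand and supply curves, these surplus areas reduce to triangle areas. A minimal sketch, with illustrative parameters rather than anything drawn from the ASM:

```python
# Consumer + producer surplus for linear demand P = a - b*Q and supply P = c + d*Q.
# All parameter values are illustrative, not taken from the ASM.
def total_surplus(a, b, c, d):
    """Area between demand and supply up to the competitive equilibrium."""
    q_star = (a - c) / (b + d)      # equilibrium quantity
    p_star = a - b * q_star         # equilibrium price
    consumer = 0.5 * (a - p_star) * q_star   # triangle under demand, above price
    producer = 0.5 * (p_star - c) * q_star   # triangle above supply, below price
    return consumer + producer

baseline = total_surplus(a=100, b=1.0, c=10, d=0.5)
# A posited climate change that raises production costs, shifting supply up:
after = total_surplus(a=100, b=1.0, c=25, d=0.5)
print(baseline, after, baseline - after)  # the last value is the welfare loss
```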
The difference between these two areas equals the change in economic welfare, equivalent to the annual net income lost or gained by agricultural producers and consumers as a consequence of global climate change. Both domestic and foreign consumption (exports) are included. The model integrates a set of micro- or farm-level crop enterprises for multiple production regions which capture agronomic and economic conditions with a national (sector) model. Specifically, producer-level behavior is captured in a series of technical coefficients that portray the physical and economic environment of agricultural producers in each of the 63 homogeneous production regions in the model, encompassing the 48 contiguous states. These regions are then aggregated to two macro regions. Irrigated and non-irrigated crop production and water supply relationships are included in the ASM. Availability of land, labor, and irrigation water is determined by supply curves defined at the regional level. Farm-level supply responses generated from the 63 individual regions are linked to national demand through the objective
function of the sector model, which features demand relationships for various market outlets for the included commodities (see Chang & McCarl, 1993, for details of the ASM). The situations evaluated here with the ASM may be viewed as a set of 'experiments' within the ASM modeling framework. In this case, the experiments involve prospective ENSO conditions. Specifically, two fundamentally different situations are simulated within the economic framework described above.

• Producers are assumed to operate without any information concerning the ENSO phase and thus choose a crop plan (the set of crops to be planted on their land base) that is the most profitable crop mix across a uniform distribution of the full spectrum of the 22 years of events. Hereafter this is called the 'Without use of ENSO Phase Information' scenario.
• Producers are assumed to incorporate information regarding the pending ENSO phase and thus choose the set of crops that performs best economically for that individual phase. Thus, crop mixes optimized for El Niño events are selected across a distribution of the five El Niño states, and likewise for the other phases. Initially, the strengths of each El Niño are assumed to be equally likely. This is called the 'With use of ENSO Phase Information' scenario.

In addition to varying the response of farmers to ENSO information, a second key component is varied in the model experimentation. In particular, three ENSO phase probability conditions are evaluated.

• The first represents current conditions with respect to the probability of each phase. Specifically, we assume El Niño phases occur 0.238 of the time, La Niña phases with a probability of 0.250, and Neutral phases 0.512 of the time.
Within an El Niño phase, we assume that the individual crop yields for the five El Niño weather years contained in our data set are each equally likely (i.e., of the same strength), with comparable assumptions for the four La Niña events and the 13 Neutral yield states.
• The second incorporates the frequency shifts suggested by Timmermann et al. (1999). Here the El Niño phase occurs with a frequency of 0.339, the La Niña phase 0.351, and the Neutral phase 0.310. Within each phase we again assume the cropping yield data states are equally likely.
• The third represents shifts in both event frequency and event strength. The frequency shifts are those from Timmermann et al. (1999) as above. To evaluate event strength shifts, we assume that stronger El Niño and La Niña events occur with a 10% higher frequency. Specifically, if the
1982-1983 and 1986-1987 El Niños each occur with a 0.20 probability within the set of five El Niño events observed in the data set (assuming a uniform distribution across the five observed El Niños), we shift those probabilities to 0.25 and reduce the probabilities of the three other El Niño years to 0.167. Similarly, the two strongest (in terms of yield effects) La Niña states have their probabilities raised from 0.25 to 0.30, while the two weaker La Niñas have their probabilities reduced to 0.20. Table 4 contains estimates of aggregate annual economic welfare before and after the ENSO probability shifts. Table 5 contains a more disaggregated picture of these economic effects. These economic consequences are evaluated for both situations regarding producer decision making (ignore or use the ENSO forecasts). The welfare measure consists of annual global consumers' welfare plus the welfare change for producers. As noted earlier, these welfare measures are in terms of consumers' and producers' surplus. Economic 'surplus' is a concept commonly used in applied economics to approximate changes in the welfare of individuals or groups. It is a monetary measure captured as geometric areas below demand curves and above supply curves. While the sum of economic surplus is frequently used to measure the economic efficiency of alternative policies, the individual components (consumers, producers) of economic surplus can be compared to see which groups gain and which lose under alternative states of nature. Three major insights regarding phase shifts and producers' reactions can be drawn from the results of the model experimentation.
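The three probability scenarios can be written down directly as joint distributions over the 22 weather-year states. A small sketch, assuming the year-level structure described above (the phase frequencies and within-phase weights are those given in the text; the data handling is illustrative), checks that each scenario's probabilities sum to one:

```python
# Joint probabilities over the 22 weather-year states: P(phase) * P(year | phase).
phases = {"el_nino": 5, "la_nina": 4, "neutral": 13}  # weather years per phase

def joint(phase_probs, within):
    """Combine phase probabilities with within-phase year probabilities."""
    return {ph: [phase_probs[ph] * w for w in within[ph]] for ph in phase_probs}

uniform_within = {ph: [1.0 / n] * n for ph, n in phases.items()}

# Scenario 1: current phase frequencies, uniform within each phase.
current = joint({"el_nino": 0.238, "la_nina": 0.250, "neutral": 0.512},
                uniform_within)

# Scenario 2: Timmermann et al. frequency shift, still uniform within phases.
shifted = joint({"el_nino": 0.339, "la_nina": 0.351, "neutral": 0.310},
                uniform_within)

# Scenario 3: frequency shift plus strength shift. The two strongest El Nino
# years rise from 0.20 to 0.25 (the other three drop to roughly 0.167), and
# the two strongest La Nina years rise from 0.25 to 0.30 (the others to 0.20).
strength_within = {
    "el_nino": [0.25, 0.25, 0.167, 0.167, 0.167],
    "la_nina": [0.30, 0.30, 0.20, 0.20],
    "neutral": uniform_within["neutral"],
}
strong = joint({"el_nino": 0.339, "la_nina": 0.351, "neutral": 0.310},
               strength_within)

for scenario in (current, shifted, strong):
    total = sum(sum(v) for v in scenario.values())
    print(round(total, 3))  # each joint distribution totals (approximately) 1
```

In the model experiments, each of these joint distributions weights the 22 state-level yield outcomes when computing expected welfare.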
Table 4. Annual Aggregate Economic Welfare Comparisons under Shifts in ENSO Frequencies (millions of U.S. dollars).

                                     Without use of      With use of        Gain from use of
                                     ENSO information    ENSO information   ENSO information
Current probabilities                1,458,947           1,459,400            453
Phase frequency shift                1,458,533 (-414)    1,459,077 (-323)     544
Phase frequency and strength shift   1,457,939 (-1,008)  1,458,495 (-905)     556

Note: Values in parentheses are the differences with respect to current probabilities due to the ENSO frequency (and, where applicable, strength) shift.
Table 5. Annual Welfare, by Component, With Use of ENSO Information (millions of U.S. dollars).

                    Current          Phase frequency    Phase frequency and
                    probabilities    shift              strength shift
Producers              35,883        35,576 (-307)      35,562 (-321)
Consumers           1,175,699        1,176,290 (591)    1,176,025 (326)
Foreign interests     247,818        247,211 (-607)     246,908 (-910)
Total               1,459,400       1,459,077 (-323)    1,458,495 (-905)

Note: Values in parentheses are the differences with respect to current probabilities due to the ENSO frequency (and, where applicable, strength) shift.
• First, the effects of frequency shifts are measured as the difference between the first two rows in Table 4 (current ENSO frequency vs. the new frequency). The values in parentheses indicate that there are economic damages arising from the ENSO event frequency shift. Specifically, the annual welfare loss due to the frequency shift (comparing the first and second rows) ranges from $323 to $414 million. When both frequency and strength shifts are considered (i.e. comparing the first and third rows), the annual welfare loss increases to a range of $905 to $1,008 million. This is about 5% of typical U.S. agricultural net income, or about 0.15% of total food expenditures in the U.S. The strength shift, if more substantial than the one assumed here, could have substantially larger effects.
• Second, the potential value of ENSO monitoring and early warning can be assessed by comparing the 'with and without ENSO information' columns of Table 4. As can be seen from the first row, the use of ENSO forecasts under current ENSO frequency and strength results in a net welfare gain of approximately $453 million. This value is consistent with the value of information noted in Solow et al. (1998). Incorporating ENSO information also reduces the negative effects of ENSO phase shifts or increases in strength. Specifically, incorporating ENSO information causes annual welfare to increase by approximately $544 million under a phase shift and $556
million under both a phase and an intensity shift. The gains from ENSO information are about the same under these two scenarios. These gains are greater than under the current ENSO frequency and strength, but they do not offset the losses due to the ENSO shifts. Thus, the use of ENSO forecasts in producer decision making helps mitigate some of the negative economic effects of the shift.
• Third, the results reported in Table 5 show that there may be gainers and losers in these outcomes. For example, the shift in ENSO frequencies results in welfare losses for both domestic producers and foreign countries but gains to domestic consumers. Most of the welfare losses occur in foreign markets. These differences across groups arise from changes in U.S. and world prices for the traded commodities. For the commodities evaluated here, there are price declines due to slight increases in world-wide trade when phase frequency shifts. The price declines result in losses to producers and exporting countries but gains to consumers.

CONCLUDING COMMENTS
The importance of extreme events in the context of the impacts of climatic change and variability on agriculture has received increased attention in recent years. However, our knowledge regarding possible shifts in the frequencies of extreme events under a new climate regime is limited. It is also much more difficult to incorporate many types of extreme weather events into climate change scenarios for use in economic assessments. It is important to distinguish among the relevant time scales and spatial scales of extreme events important to agriculture. In general, crop models adequately handle extreme events that are longer than their time scale of operation. For example, crop models operating on a daily time scale can simulate fairly well the effects of a seasonal drought (lasting a month or more), but they will have more difficulty properly simulating responses to very short-term extreme events, such as daily temperature or precipitation extremes. Crop models also have difficulty properly representing composite extreme events, such as a series of days with high temperatures combined with precipitation extremes. Therefore, in considering the possible effects of extremes and climate variability on crops from a policy point of view, caution must be exercised both in interpreting climate model analyses of what types of changes in extremes might occur in the future and in interpreting the responses of crop models to extreme climate events. However, research in these areas is expected to continue to develop rapidly.
Temperature and precipitation changes affect both the means and variances of crop yields, usually in opposite ways. With increasing temperature, corn yields decrease and yield variance increases. Thus, a warmer future climate in the corn growing regions could result in reduced yields and greater year-to-year fluctuation in corn yields. The mean and variance of cotton and sorghum yields both decrease with increasing temperature, indicating that a warmer climate in the cotton and sorghum growing regions could see reduced yields with less year-to-year yield variation. Conversely, increased precipitation could result in higher corn and cotton yields with reduced year-to-year variability. In sorghum growing regions, increased precipitation could result in increased yields and greater year-to-year yield variability. While it is impossible to predict future climate, the analyses presented in this chapter provide some indication of the most and least favorable future climates. For corn, a wetter and cooler climate is the most favorable, while a hotter and drier climate is the least favorable, resulting in decreased yield and greater year-to-year yield variability. For cotton, a wetter and warmer climate would result in the greatest decrease in year-to-year yield variability, while a drier and cooler climate would increase year-to-year yield variability. Sorghum year-to-year yield variability would be reduced most by a drier and warmer climate. United States consumers win both under a shift in ENSO phase frequency alone and under a frequency shift combined with a change in the strength of the phases. Agricultural producers, on the other hand, lose due to lower prices for their crops. Foreign interests also lose. The United States is overall a winner when both producers and consumers are considered.
NOTES 1. This section draws heavily upon literature reviews in Hollinger et al., 2000 and Adams et al., 1998. 2. Procedures and findings are discussed in more detail in Chen et al., 1999.
REFERENCES

Adams, R. M., Rosenzweig, C., Ritchie, J., Peart, R., Glyer, J. D., McCarl, B. A., Curry, B., & Jones, J. (1990). Global Climate Change and U.S. Agriculture. Nature, 345, 219-224.
Adams, R. M., Hurd, B. H., Lenhart, S., & Leary, N. (1998). Effects of Global Climate Change on Agriculture: An Interpretative Review. Climate Research, 11, 19-30.
Adams, R., McCarl, B. A., Solow, A., Bryant, K., Legler, D., & O'Brien, J. (1995). Value of Improved Long Range Weather Information. Contemporary Economic Policy, July, 10-19.
Adams, R. M., McCarl, B. A., Segerson, K., Rosenzweig, C., Bryant, K. J., Dixon, B. L., Conner, R., Evenson, R. E., & Ojima, D. (1999). The Economic Effects of Climate Change on U.S. Agriculture. In: R. Mendelsohn & J. Neumann (Eds), The Economics of Climate Change (Chapter 2). Cambridge University Press.
Allen, L. H. Jr., Boote, K. J., Jones, J. W., Jones, P. H., Valle, R. R., Acock, B., Rogers, H. H., & Dahlman, R. C. (1987). Response of Vegetation to Rising Carbon Dioxide: Photosynthesis, Biomass, and Seed Yield of Soybean. Global Biogeochemical Cycles, 1, 1-14.
Cane, M. A., Eshel, G., & Buckland, R. W. (1994). Forecasting Zimbabwean Maize Yield Using Eastern Equatorial Pacific Sea Surface Temperature. Nature, 370, 204-205.
Chen, C. C., McCarl, B. A., & Adams, R. M. (1999). Economic Implications of Potential Climate Change Induced ENSO Frequency and Strength Shifts. Draft manuscript created as part of this report, Department of Agricultural Economics, Texas A&M University and Oregon State University.
Chen, C. C., McCarl, B. A., & Schimmelpfennig, D. E. (1999). Yield Variability as Influenced by Climate: A Statistical Investigation. Draft manuscript created as backup to U.S. National Assessment Agriculture Sector Report. Department of Agricultural Economics, Texas A&M University and Economic Research Service, USDA.
Hanson, J. D., Baker, B. B., & Bourdon, R. M. (1993). Comparison of the Effects of Different Climate Change Scenarios on Rangeland Livestock Production. Agricultural Systems, 41, 487-502.
Hollinger, S. E., McCarl, B. A., Mearns, L. O., Adams, R. M., Chen, C. C., Riha, S. J., & Schimmelpfennig, D. E. (2000). Impacts of Variability on Agriculture. In: J. Reilly (Ed.), U.S. National Assessment Agriculture Sector Report (Chapter 4).
IPCC (1996). Climate Change 1995: The IPCC Second Assessment Report, Volume 2: Scientific-Technical Analyses of Impacts, Adaptations, and Mitigation of Climate Change (Chapters 13 and 23). R. T. Watson, M. C. Zinyowera, & R. H.
Moss (Eds). Cambridge and New York: Cambridge University Press.
Just, R., & Pope, R. M. (1979). Production Function Estimation and Related Risk Considerations. American Journal of Agricultural Economics, 61, 277-284.
Katz, R. W., & Brown, B. G. (1992). Extreme Events in a Changing Climate: Variability is More Important Than Averages. Climatic Change, 21, 289-302.
Kimball, B. A. (1983). Carbon Dioxide and Agricultural Yields: An Assemblage and Analyses of 430 Prior Observations. Agronomy Journal, 75, 779-788.
Mearns, L. O., Katz, R. W., & Schneider, S. H. (1984). Extreme High Temperature Events: Changes in their Probabilities with Changes in Mean Temperature. Journal of Climate and Applied Meteorology, 23(12), 1601-1613.
Mearns, L. O., Rosenzweig, C., & Goldberg, R. (1997). Mean and Variance Change in Climate Scenarios: Methods, Agricultural Applications, and Measures of Uncertainty. Climatic Change, 35, 367-396.
Meehl, G. A. (1996). Vulnerability of Fresh Water Resources to Climate Change in the Tropical Pacific Region. Water, Air, and Soil Pollution, 92, 203-213.
Meehl, G. A., Zwiers, F., Evans, J., Knutson, T., Mearns, L. O., & Whetton, P. (1999). Trends in Extreme Weather and Climate Events: Issues Related to Modeling Extremes in Projections of Future Climate Change. Bulletin of the American Meteorological Society (in press).
Mendelsohn, R., Nordhaus, W., & Shaw, D. (1994). The Impact of Global Warming on Agriculture: A Ricardian Analysis. American Economic Review, 84, 753-771.
NOAA (1999). Home page: http://ftp.ncdc.noaa.gov/pub/ushcrd.
Reilly, J., & Hohmann, N. (1993). Climate Change and Agriculture: The Role of International Trade. American Economic Association Papers and Proceedings, 83, 306-312.
Riha, S. J., Wilks, D. S., & Simons, P. (1996). Impact of Temperature and Precipitation Variability on Crop Model Predictions. Climatic Change, 35, 397-414.
Rosenzweig, C., & Hillel, D. (1995). Potential Impacts of Climate Change on Agriculture and World Food Supply. Consequences, Summer, 24-32.
Rosenzweig, C., & Parry, M. L. (1994). Potential Impacts of Climate Change on World Food Supply. Nature, 367, 133-138.
Roumasset, J., Rosegrant, M., Chakravorty, U., & Anderson, J. (1987). Fertilizer and Crop Yield Variability: A Review. In: Variability in Grain Yields: Implications for Agricultural Research and Policy in Developing Countries. Johns Hopkins University Press.
Schimmelpfennig, D., Lewandrowski, J., Reilly, J., Tsigas, M., & Parry, I. (1996). Agricultural Adaptation to Climate Change: Issues of Long Run Sustainability. Agricultural Economic Report No. 740. U.S. Department of Agriculture, Natural Resources and Environment Division, Economic Research Service, Washington, DC.
Solow, A. R., Adams, R. M., Bryant, K. J., Legler, D. M., O'Brien, J. J., McCarl, B. A., & Nayda, W. I. (1998). The Value of Improved ENSO Prediction to U.S. Agriculture. Climatic Change, 39, 47-60.
Timmermann, A., Oberhuber, J., Bacher, A., Esch, M., Latif, M., & Roeckner, E. (1999). ENSO Response to Greenhouse Warming. Nature, 398, 694-697.
Tollini, H., & Seagraves, J. A. (1970). Actual and Optimal Use of Fertilizer: The Case of Nitrogen on Corn. Economic Research Report, Department of Economics, North Carolina State University, Raleigh, NC.
U.S.D.A.-NASS (1999). Agricultural Statistics. http://www.usda.gov/nass/pubs/agr99/acro99.htm
Waggoner, P. E. (1983). Agriculture and a Climate Changed by More Carbon Dioxide. In: Changing Climate (pp. 383-418). Washington, DC: National Academy Press.
Wetherald, R. T., & Manabe, S. (1999).
Detectability of Summer Dryness Caused by Greenhouse Warming. Climatic Change (in press).
Wolfe, D. W., & Erickson, J. D. (1993). Carbon Dioxide Effects on Plants: Uncertainties and Implications for Modeling Crop Response to Climate Change. In: H. M. Kaiser & T. E. Drennen (Eds), Agricultural Dimensions of Global Climate Change. Delray Beach, FL: St. Lucie Press.
Zwiers, F. W., & Kharin, V. V. (1998). Changes in the Extremes of the Climate Simulated by CCC GCM2 under CO2 Doubling. Journal of Climate, 11, 2200-2222.
OCEAN THERMAL LAG AND COMPARATIVE DYNAMICS OF DAMAGE TO AGRICULTURE FROM GLOBAL WARMING

Darwin C. Hall
ABSTRACT

As CO2 equivalent gases increase beyond a doubling, there will likely be unavoidable damage to U.S. agriculture. In equatorial regions of the world, damage from global warming will occur earlier than in the U.S. Biogeophysical lags, including deep-ocean mixing with warmer surface waters, can delay the warming caused by CO2 emissions. In this chapter, comparative dynamics trace the path of damage to U.S. agriculture from climate change, after considering adaptation to climate change, technological change that will occur both with and without climate change, and ocean thermal lag.
The Long-Term Economics of Climate Change, pages 115-148.
Copyright © 2001 by Elsevier Science B.V.
All rights of reproduction in any form reserved.
ISBN: 0-7623-0305-0

INTRODUCTION

In order to understand the effect of human activity on climate, we cannot perform controlled experiments. A useful alternative is to perform a thought experiment to answer the question, "What would happen to the earth's climate if a pulse of greenhouse gases were injected into the atmosphere, doubling the concentration of those gases emitted by human activity, especially economic
production and consumption?" That type of computer modeling exercise is familiar to economists as an example of comparative statics. Most of the modeling done by climate experts is based on the idea of a doubling of anthropogenic greenhouse gases. Economists have borrowed the physical science models, as well as the limitations of those models. For example, the model by Nordhaus (1994) relies on a model of climate change that may be a reasonable approximation for a doubling, but not reasonable for a tripling or quadrupling, as discussed in detail in the next section. A comparative statics approach to the economic benefits and costs of global warming and policies to slow it can lead to faulty analysis. One comparative static analysis of the damage from a rising sea-level found a very low cost. In the new equilibrium, while the former coast will be submerged, new valuable coastal property will be available so the author valued loss using less valuable interior land (Nordhaus, 1993). To avoid that type of error, Nordhaus (1994) calculates the time path of the impact of climate. But if the climate model used by Nordhaus is not applicable to increases in greenhouse gases beyond a doubling, what is the potential that the arbitrary selection of the terminal time affects the results? Economists argue that future benefits and costs beyond 50 to 100 years will not greatly affect the analysis of benefits and costs of policy today. One reason is the discount rate used to convert future dollars into present dollars. Another reason is that policy can be adjusted over time as we learn more about the effects of climate change. Consider these two reasons in turn. For a population that is stable over time, the social rate of time preference is the sum of two parts. One part is the pure rate of time preference to consume today rather than in the future. 
The second part depends on whether or not the economy is growing, and weights the value of consumption according to whether future generations have relatively more or less to consume. Economists disagree over the values that should be placed on each part, separately and jointly. Arrow et al. (1996) present the arguments between two approaches. Khanna and Chapman (1996) derive the equation that underlies the debate, and review the fundamental issues of economic efficiency and intergenerational equity. Howarth and Norgaard (1995) argue that discounting based solely on efficiency can lead to policy inaction that leaves future generations worse off than the present, and argue for policies that directly account for intergenerational transfers of wealth and poverty through damage to the environment. Among economic analyses of climate change, discount rates differ significantly. Nordhaus (1994) suggests using a rate of social time preference equal to 3% to discount consumption. His corresponding discount rate for capital investments required by policy starts at 6% and falls over time to 3% as
Agricultural Damage from Climate Change
economic growth slows, roughly equivalent to a constant annual discount rate of 4.6% (Nordhaus, 1994, p. 131). Cline (1992, p. 255) argues that intergenerational discounting is indefensible, and recommends a social rate of time preference for consumption equal to 1.5%. In a previous paper (Hall, 1999), I analyze two climate scenarios that eventually damage between one-third and two-thirds of U.S. agricultural consumer and producer surplus. A third climate scenario leads to eventual collapse of agriculture. I then compare two discount rates, 5% and 1% (Hall, 1999, Fig. 10, p. 207). With a discount rate of 5%, the present value of the impact on economic surplus from climate change is slightly positive or equal to zero in the three climate scenarios, because of slight gains in the early years as climate warms. In one case, the damage begins in year 2025, yet with a discount rate of 5% there is no value to avoiding damage that occurs later. With a discount rate of 1%, the result is that for all three climate scenarios there is substantial damage that justifies equally large payments for policies that would avoid climate change. Just splitting the difference, a 3% discount rate gives results virtually indistinguishable from a 5% discount rate. The reason is that for a 5% discount rate the first 100 years of analysis determine 80% of the present value, compared to a 1% discount rate, where the present value depends on when the analysis ends. Consequently, the argument that social discounting justifies ending the economic analysis at a doubling of greenhouse gases is unreasonable.
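The sensitivity of present value to the discount rate is easy to illustrate with a constant annual damage stream. The sketch below is only a stylized illustration; the chapter's 80% figure reflects its particular growing damage paths, not a constant stream.

```python
# Present value of a constant $1-per-year damage stream at discount rate r.
def pv(r, years):
    return sum(1.0 / (1.0 + r) ** t for t in range(1, years + 1))

# Share of a 300-year present value that accrues in the first 100 years.
for r in (0.05, 0.03, 0.01):
    share = pv(r, 100) / pv(r, 300)
    print(f"r = {r:.0%}: first 100 years carry {share:.1%} of the 300-year PV")
```

At a 5% rate nearly all of the present value accrues within the first century, so damages beyond a doubling barely register; at 1%, a third or more of the present value depends on the later centuries, and the terminal date of the analysis matters.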
There are several components essential to this analysis. First is a clear consideration of the amount of economically available fossil fuels. This point should be obvious. Work by most economists, however, either just considers a doubling of greenhouse gases, selects an arbitrary time frame, or obscures this consideration among myriad assumptions about autonomous increases in energy efficiency or endogenous improvements in renewable and alternative energy technology, without justification or concomitant improvements in fossil fuel technology. Second is a climate model that can analyze emissions of greenhouse gases beyond a doubling. Third is a dynamic representation of the impacts of global warming and associated climate change on an important sector of the economy. The remaining portion of this introduction summarizes these components. The terminology "demonstrated recoverable reserves" refers to the amounts measured and indicated that can be extracted at today's prices and with today's
DARWIN C. HALL
Table 1.  Recoverable Reserves.

                        EIA (1995)                          Edmonds & Reilly (1985)
Identified Coal^a       1,145,000^d mst, WEC (1992)         693,270^g mmtce, WEC (1980)
Demonstrated Oil^b      1,000^e bbl, Oil & Gas J. (1993)    610^h bbo, Oil & Gas J. (1980)
Demonstrated Gas^c      5,000^f tcf, Oil & Gas J. (1993)    2,670.403^i tcf, Oil & Gas J. (1980)

^a Identified = demonstrated and inferred.
^b Demonstrated = measured and indicated.
^c Demonstrated = measured and indicated.
^d mst = million short tons; Table 11.16, p. 315.
^e bbl = barrels; rounded from Table 11.3, p. 289.
^f tcf = trillion cubic feet; rounded from Table 11.3, p. 289.
^g mmtce = million metric tons of coal equivalent; Table 11-2, p. 156.
^h bbo = billions of barrels of oil; Oil and Gas Journal estimates adjusted by Edmonds and Reilly, Table 7-4, p. 81.
^i tcf = trillion cubic feet; Table 9-3, p. 122.
technology. "Reserves" also includes amounts inferred from existing deposits, extractable at today's prices and with existing technology. Over time, new discoveries, changing prices, and changes in technology have increased recoverable reserves (Table 1). As fossil fuels are used up, prices will rise, adding reserves. Over time, we expect that technology will continue to improve the fraction that is recoverable. We will also discover deposits that are now considered hypothetical and speculative. As prices rise, there will be substitution among fossil fuels, taking into account costs to convert among solid, liquid, and gaseous forms, and the uses of those forms for heating, transportation, and electricity. Eventually, fossil fuel prices will rise to the point where alternative energy sources make fossil fuels uneconomic. Edmonds and Reilly (1985) estimate "recoverable resources," which they define to include identified and undiscovered deposits that will be recoverable with future technology at future prices, accounting for substitution among fossil fuels. Edmonds and Reilly (1985) survey the literature on estimates of recoverable resources for all forms of fossil fuels and recovery technologies. While their review is dated, it remains the only summary that adjusts the estimates to make them consistent across fossil fuels and definitions (including undiscovered hypothetical in known districts and speculative). Of all the sources, shale oil is both the largest resource and the resource for which the least is known about the range of values that will become economic. Heavily discounting their estimate for shale oil, Table 2 reproduces their estimates, and with the
Table 2.  Long Term Economic Resources and Cumulative CO2 Emissions: Maximum Coal Price at $85/metric-ton.

                          Recoverable resources        Maximum cumulative emissions
                          (exajoules)                  (metric gigatons carbon)
                          Low       Best      High     Low        Best       High
Coal                      146500^a  330000    527400^a 3567.774   8036.625   12843.99
Oil-conventional          44600     13400     15200^b  815.7981   245.1053   278.0299
Enhanced oil recovery     1500      3500^c    5500     27.43716   64.02003   100.6029
Tar sands                 700       4100^c    7500     12.80401   74.9949    137.1858
Shale                     4400^d    6100^d    91800^d  80.48233   111.5778   1679.154
Gas-conventional          6300      11400     13500    84.94966   153.7184   182.035
Gas in tar sands          40^e      320^e     600^e    0.539363   4.314903   8.090444
Gas in coal seams         30^e      40^c      50^e     0.404522   0.539363   0.674204
Gas in shale              30^e      40^e      50^e     0.404522   0.539363   0.674204
Total                     204100    368900    661600   4590.594   8691.435   15230.43

Source: Edmonds and Reilly (1985), Table 1-3, p. 8, unless otherwise indicated. Prices (1979$) up to: $10/mcf, gas; $40/bbl, oil; $85/metric-ton, U.S. coal. mcf = thousand cubic feet; bbl = barrels.
^a Converted (multiply by 29.3, round) from p. 160: "5,000 to 18,000 GT of coal are available for exploitation at costs less than $85/ton (1979 dollars)."
^b Equals best estimate plus Δ, where Δ = best - low.
^c Average of high and low estimates.
^d Converted (multiply by 5.8 × 1.055056, round) from Table 8-3, p. 98. Resource grade: 25 to 100 gallons of oil per ton of shale. Low: measured and indicated; Best: measured, indicated, and inferred; High: identified and undiscovered.
^e Converted (multiply by 1.055056, round) from Table 10-3, p. 146.
exceptions noted in the table the values correspond to those given in Edmonds and Reilly (1985, Table 1-3, p. 8). Given the numbers in Table 2, the ultimate fossil fuel to provide energy in solid, liquid, and gaseous forms is coal. Edmonds and Reilly (1985) estimate that global economically available coal equals between 5,000 and 18,000 metric gigatons (Gt), at an eventual price of $85/metric-ton (1979 prices). Cline (1992) adjusts upward their estimate for coal to the range of 10,000 to 20,000 Gt. Cline's adjustment is to account for the estimate by Manne and Richels (1990) that the cost of the backstop technology for fossil fuels would require a tax of $250 per metric ton of carbon emissions from coal (1988 prices). Deflating to 1979 dollars, Cline calculates the price of coal equal to $118 per metric ton of coal, at which coal eventually becomes uneconomic.
Table 3 replaces the coal estimates by Edmonds and Reilly with those of Cline. Cline converts tons of coal to tons of carbon emissions from coal, and he cites Nordhaus and Yohe (1983) for the conversion factor. Based on the emission rates given in Table 4, Table 3 shows that the maximum cumulative emissions from coal vary between 7 and 14 metric teratons, the amounts Cline uses for his analysis. Accounting for all fossil fuels, Table 3 presents a range that varies between 8 and 17 metric teratons. Thus, the world's economically available fossil fuels contain between some 8 and 17 metric teratons of carbon. The rate of emissions over time is based upon three macro-economic models
Table 3.  Long Term Economic Resources and Cumulative CO2 Emissions: Maximum Coal Price at $118/metric-ton.

                          Recoverable resources        Maximum cumulative emissions
                          (exajoules)                  (metric gigatons carbon)
                          Low       Best      High     Low        Best       High
Coal                      293000^a  439500    586000^a 7135.549   10703.32   14271.1
Oil-conventional          44600     13400     15200^b  815.7981   245.1053   278.0299
Enhanced oil recovery     1500      3500^c    5500     27.43716   64.02003   100.6029
Tar sands                 700       4100^c    7500     12.80401   74.9949    137.1858
Shale                     4400^d    6100^d    91800^d  80.48233   111.5778   1679.154
Gas-conventional          6300      11400     13500    84.94966   153.7184   182.035
Gas in tar sands          40^e      320^e     600^e    0.539363   4.314903   8.090444
Gas in coal seams         30^e      40^c      50^e     0.404522   0.539363   0.674204
Gas in shale              30^e      40^e      50^e     0.404522   0.539363   0.674204
Total                     350600    478400    720200   8158.368   11358.13   16657.54

Sources: coal, Cline (1992); all other fossil fuels, Edmonds and Reilly (1985), Table 1-3, p. 8, unless otherwise indicated. Prices (1979$) up to: $10/mcf, gas; $40/bbl, oil; $118/metric-ton, U.S. coal. mcf = thousand cubic feet; bbl = barrels.
^a Cline increases coal estimates to allow for U.S. prices up to $118 per metric ton (1979$). This price corresponds to Manne and Richels' (1990) estimate of a carbon tax of $250 per metric ton (1988 prices) to induce switching to the backstop technology and stabilize carbon emissions at 80% of the 1990 rate (Cline, p. 45, note 5).
^b Equals best estimate plus Δ, where Δ = best - low.
^c Average of high and low estimates.
^d Converted (multiply by 5.8 × 1.055056, round) from Table 8-3, p. 98. Resource grade: 25 to 100 gallons of oil per ton of shale. Low: measured and indicated; Best: measured, indicated, and inferred; High: identified and undiscovered.
^e Converted (multiply by 1.055056, round) from Table 10-3, p. 146.
Table 4.  Emission Rates (lbs CO2 per MMBtu).

Coal    207.7
Oil     156
Gas     115

Sources: oil and gas, Hall (1990, Table 1, notes, p. 287); coal, Energy Information Agency (1995, p. 363). To convert from CO2 to ambient carbon, multiply by 12/44, the molecular weight of carbon divided by the molecular weight of CO2.
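The conversion noted in the table can be checked with a few lines of code (an illustrative sketch; the dictionary and variable names are invented for the example):

```python
# Converting the Table 4 emission rates (lbs CO2 per MMBtu) to pounds of carbon,
# using the 12/44 molecular-weight ratio noted in the table.
EMISSION_RATES_CO2 = {"coal": 207.7, "oil": 156.0, "gas": 115.0}  # lbs CO2 per MMBtu
C_PER_CO2 = 12.0 / 44.0  # mass of carbon per unit mass of CO2

emission_rates_carbon = {fuel: rate * C_PER_CO2 for fuel, rate in EMISSION_RATES_CO2.items()}
for fuel, rate in emission_rates_carbon.items():
    print(f"{fuel}: {rate:.1f} lbs C per MMBtu")
```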
that link forecasts of future economic activity with the potential use of fossil fuels (Nordhaus & Yohe, 1983; Reilly et al., 1987; Manne & Richels, 1990). Cline (1992) extrapolates three economic forecasts to provide a sensitivity analysis of nine future emission paths of CO2, based upon three macro-models and three alternative amounts of economically available fossil fuels. Cline calculates the cumulative addition to atmospheric CO2, the atmospheric stock, the atmospheric concentration, and the radiative forcing. He then adjusts the radiative forcing to account for other greenhouse warming gases. Finally, Cline considers three alternative increases in global mean temperature, based upon the IPCC (1992) forecast, giving the range of warming from 1.5 to 4.5 degrees Celsius for a doubling of greenhouse gases from the pre-industrial level. The ratio of ambient CO2 to the pre-industrial level of 280 parts per million volume (ppmv), RCO2, is a benchmark for general circulation models (GCMs) of the atmosphere and oceans. For a doubling of greenhouse gases (RCO2 equal to 2), a warming of 1.5, 2.5, or 4.5 degrees Celsius corresponds to a climate sensitivity of 0.375, 0.625, or 1.125 degrees Celsius per watt per meter squared (per unit of Earth's surface). Given these alternative rates of global warming, Cline has a total of 27 scenarios (three amounts of economically available fossil fuels, three macro-models, three values for climate warming sensitivity). Cline pares this down to nine scenarios by terminating the analysis at year 2275, when the three macro-models predict total emissions at 7,201, 5,992, and 10,141 metric gigatons (M-Gt) of cumulative carbon emissions. Skeptics of anthropogenic global warming point to the discrepancy between historic warming since the pre-industrial era and the predictions by GCMs, given the increase in atmospheric CO2. The GCM predictions are higher than the actual temperature increase up to 1990.
The IPCC (1994) presents a lower range of 1.0 to 3.5 degrees Celsius for RCO2 equal to 2, based
upon possible transient effects of aerosols (IPCC, 1996, Working Group I, p. 39; Working Group III, p. 188). Hall (1996, 1999) adjusts downward Cline's warming forecasts, making them consistent with the IPCC (1996) adjustment. Recent compilation of data, measuring temperatures in the oceans to a depth of 3000 meters, makes manifest a rising ocean temperature over the last 50 years that is equivalent to a radiative forcing of 0.3 watt per meter squared (Levitus et al., 2000). By itself, this finding does not reconcile predictions by global climate models with actual warming between the pre-industrial era and today. The reason is that both the upper and deep ocean warmed significantly since the mid-1980s, a period when ambient temperatures also significantly increased. While the heat is coming from somewhere, data do not exist that provide time series of ocean temperature in the very deep ocean, at depths below 3000 meters. Hansen (1999) interprets the recent rapid ambient temperature increase in the 1990s as consistent with predictions by GCMs. Those models were the basis for the original range of warming from 1.5 to 4.5 degrees Celsius for a 2 × CO2 (IPCC, 1990, 1992). Also consistent with Hansen's interpretation, ocean thermal lag is on the order of a half century. The analysis here is based upon the range of global warming of 1.5, 3.0, and 4.5 degrees Celsius for each 2 × CO2 equivalent gases above the pre-industrial level, a warming rate consistent with GCMs. A reasonable assumption is that the ocean thermal lag lasts for a half-century: the oceans capture 50% of the radiative forcing potential from emitted anthropogenic sources of CO2 and release the heat 50 years later. That lag is modeled in the analysis that follows. By coupling ocean and air general circulation models of the globe, GCMs project regional climates, based upon RCO2 equal to 2 causing the radiative forcing of the atmosphere to increase (IPCC, 1996).
Output from the GCMs includes forecast temperature and precipitation; these predictions are then used as inputs for crop simulation models (CSMs). Rosenzweig and Parry (1994) and Adams et al. (1988, 1990, 1995, 1999) use CSMs to project the changes in potential crop yield and product for each region in the U.S. with RCO2 equal to 2; Adams et al. (1999) include wheat, corn, soybeans, oranges, tomatoes, pasture, range land, and livestock. Using crop yield and product forecasts from CSMs as input to non-linear programming models of the U.S. and models of international agricultural trade, they go on to estimate changes in the U.S. net producer and consumer surpluses from a doubling of CO2 equivalent gases. In their latest work, Adams et al. (1999) start with 64 combinations of temperature, precipitation, and ambient CO2. In essence, the CSMs allow for computer experiments of climate change, accounting for technical efficiencies that capture some adaptation to climate change. They estimate rates of
technological change using the past 50 years of data, and adjust agricultural output from the CSMs to account for technological change. Since farmers would select crop combinations as a further adaptation to climate change, they account for economic efficiencies by using quadratic programming and trade models to estimate economic surplus in the U.S. agricultural sector. For each of the 64 climates, Adams et al. estimate economic surplus, with and without technological change. Below, I estimate a generalized power function (GPF) from the data generated by Adams et al. (1999). The GPF estimates aggregate agricultural surplus as a function of climate and technological change. Technological change takes two forms: embodied in climate variables, capturing the effect of adaptation to climate through specific research and development; and disembodied, capturing general improvements in technology. With the estimated GPF, I predict a time path for agricultural surplus, conditional on the following: Cline's (1992) time paths that forecast ambient CO2, assumptions about precipitation based on GCMs, and mean global temperature forecasts that incorporate a 50-year ocean thermal lag. The ocean thermal lag is consistent with the results by Levitus et al. (2000) and ambient temperature increases over the last decade, discussed above. The recoverable resources, macro-models, and values for climate warming and precipitation all combine to allow for a sensitivity analysis. In an earlier paper (Hall, 1999), I performed a similar analysis. There are several new contributions here. The next section presents a formal representation of the climate model modified to incorporate ocean thermal lag. This is the first time an ocean thermal lag, rather than an ocean thermal sink2 (Nordhaus, 1994), has been considered in a comparative dynamic analysis of the economic impact of climate.
Also new is an adjustment for the difference between mean global temperature and temperature in the U.S., to account for the latitude of the U.S. The temperature data are updated, initialized at year 2000. The forecast of ambient CO2 is presented in the context of "geoeconomic time". 3 I re-estimate the GPF, improving on the earlier estimation. Finally, the precipitation assumptions are based upon results from GCMs, improving on the sensitivity analysis.
FUTURE GREENHOUSE GAS EMISSIONS AND LAGGED RADIATIVE FORCING

Cline (1992) extrapolates the macro models developed by Reilly et al. (1987), Nordhaus and Yohe (1983), and Manne and Richels (1990) to forecast metric gigatons of carbon emissions, respectively, approximately4 as follows:

RE:

Ct = (1 + rt)C(t-1),    (1)

where rt = 0.013643 for 2000 < t <= 2025 and 0.008311 for 2025 < t.

NY:

Ct = (1 + rt)C(t-1),    (2)

where rt = 0.025413 for 2000 < t <= 2025; 0.01027 for 2025 < t <= 2050; 0.011057 for 2050 < t <= 2075; and 0.005354 for 2075 < t; and C2000 = 5.5.

MR:

Ct = (1 + rt)C(t-1),    (3)

where rt = 0.019755 for 2000 < t <= ...

Cumulative emissions from fossil fuels are

Et = E1990s + Σ(s=2000 to t) Cs,    (4)

where E1990s is the amount of emissions from fossil fuels used in the 1990s, equal to 55.5, 62.0, and 65.0 metric gigatons of carbon (MgtC) for the three macro models. The constraint on the model is that

ET = 8,000, 11,000, or 17,000,    (5)

the cumulative emissions from fossil fuels. The estimates of economically available fossil fuels by Edmonds and Reilly (1985) reflect the process of technological change in the recovery of fossil fuels, future discoveries, and higher prices, all extending recoverable reserves. The IPCC uses 120 years as the e-folding atmospheric residence time for a pulse increase of CO2 sufficient for RCO2 equal to 2. The e-folding time5 is the number of years at which the biosphere and ocean remove 63% of the increase from pre-industrial levels. For increases greater than RCO2 equal to 2, the e-folding time increases dramatically. Carbon uptake by plants is limited, as biological material decays. As discussed below, at the Triassic-Jurassic boundary, CO2 was so high that climate effects led to the disappearance of most species. "Kasting and Walker (1992) point out that the assumption of linearity may seriously understate the atmospheric retention of carbon. Kasting
suggests that a pulse of CO2 emissions that is three times pre-industrial concentrations would have an atmospheric lifetime (e-folding time) between 380 and 700 years" (Nordhaus, 1994, note 4, p. 26). To model the atmosphere beyond a doubling of CO2, the model proposed by Nordhaus is mis-specified. For a tripling (RCO2 = 3) or larger, as in Cline's model, a reasonable assumption is that 50% of the emissions remain in the atmosphere, so the cumulative additions, A, to atmospheric carbon are given by

At = Et/2.    (6)

The atmospheric stock, S, of carbon in any year equals the stock in the initial year (750 in 1990) plus the cumulative additions:

St = 750 + At.    (7)

The atmospheric concentration of CO2, CO2t in parts per million volume (ppmv), is found by multiplying by the 1990 ratio of concentration (353 ppmv) to stock (750 MgtC):

CO2t = 353 St/750.    (8a)

The cumulative atmospheric concentration ultimately depends on the amount of economically available fossil fuels. For the three values given in equation (5), the corresponding atmospheric concentration of CO2 equals 2,400, 3,200, and 4,500 ppmv. Define RCO2t, the ratio of CO2 to the pre-industrial level, by:

RCO2t = CO2t/CO20,    (8b)

where the denominator is the ambient concentration in the late 1800s. The three models predict RCO2 equal to 9, 11, and 16. To get a sense of the scale involved, Fig. 1 from Berner (1997) shows atmospheric concentrations measured in RCO2 over the past 600 million years. The macro models forecast that economic forces will cause, within the next 325 to 350 years, the atmosphere to revert to an era never experienced by most plants and animals living today, levels not experienced by earth at any time during the last 375 to 425 million years. Figure 2 gives the predicted values over the next 325 to 350 years. Cline (1992, p. 25) calculates CO2-related radiative forcing, RC (in watt per meter squared), above pre-industrial levels as follows:

RCt = 6.3 log(CO2t/CO20),    (9)

where CO20 is the pre-industrial level (279 ppmv). A convention in the literature is to refer to "CO2 equivalent gases" to account for other greenhouse warming gases humans emit due to economic activity.
Fig. 1. Atmospheric CO2 vs. time for the Phanerozoic (past 550 million years). The parameter RCO2 is defined as the ratio of the mass of CO2 in the atmosphere at some time in the past to that at present (with a pre-industrial value of 300 parts per million). The heavier line joining small squares represents the best estimate from GEOCARB II modeling (10), updated to have the effect of land plants on weathering introduced 380 to 350 million years ago. The shaded area encloses the approximate range of error of the modeling based on sensitivity analysis (10). Vertical bars represent independent estimates of CO2 level based on the study of paleosols. Reprinted with permission from Berner, R. A. (1997). "The Rise of Plants and Their Effect on Weathering and Atmospheric CO2." Science, 276(25), 544-546. April 25. Copyright American Association for the Advancement of Science.

Cline (1992) adjusts CO2 radiative forcing upward to estimate radiative forcing, RE, from all anthropogenic greenhouse gases. Cline's adjustment is based upon data from IPCC (1992) for values of RC up to 6.8 W/m2. For the years 1990 and 2000, Cline gives the values for RE equal to 2.5 and 2.8 watt per meter squared, respectively. Keeping in mind that the Montreal Protocol and related treaties reduce CFCs and other gases by 2000 and 2005, a good approximation to Cline's value for RE after the year 2000 can be found by:

REt/RCt = 1.447 exp[k(6.8 - RCt)]  for RCt < 6.8 and t >= 2000.    (10a)

The value for k can be found by using the values in Cline for RE and RC, when RC < 6.8 and t > 2000, in the following:
Fig. 2. RCO2: Ratio of Atmospheric Concentration of CO2 to Pre-industrial Level.
k = [log(REt/1.447RCt)]/(6.8 - RCt).    (10b)

The value for k is approximately 0.0277503. For values above 6.8 W/m2, Cline multiplies radiative forcing from CO2 by 1.447 to obtain CO2 equivalent radiative forcing, RE, accounting for other greenhouse gases:

REt/RCt = 1.447  for RCt > 6.8.    (10c)

Cline (1992) calculates the change in mean global temperature in degrees Celsius for three alternative climate sensitivities: a 1.5-degree increase for a doubling of radiative forcing, a 2.5-degree increase, and a 4.5-degree increase. The pre-industrial value for radiative forcing equals 2 W/m2, so a doubling is 4 W/m2. For example, for a 2.5-degree sensitivity, the change in temperature is given by:

ΔTt = 2.5REt/4.    (11)

To compare Cline's model with other models that extend beyond a doubling, note that by substituting equations (8b) and (9) into (11), the temperature change due to CO2 equals 3.9 log(RCO2), which corresponds to the "formulation from Z. Kothavala, R. J. Oglesby, and B. Saltzman [Geophys. Res. Lett. 26, 209 (1999)]: ΔT = 4.0 log(RCO2)" (McElwain, Beerling & Woodward, 1999, footnote 24, p. 1389).
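A minimal sketch of the greenhouse-gas adjustment and temperature response, equations (10a), (10c), and (11), with k = 0.0277503 as given in the text (the function names are invented for the example):

```python
import math

K = 0.0277503  # derived from eq (10b), as given in the text

def total_forcing(rc):
    """CO2-equivalent forcing RE from CO2-only forcing RC, eqs (10a) and (10c)."""
    if rc < 6.8:
        return rc * 1.447 * math.exp(K * (6.8 - rc))
    return rc * 1.447

def delta_T(rc, sensitivity=2.5):
    """Mean global warming, eq (11): `sensitivity` degrees per 4 W/m2 doubling."""
    return sensitivity * total_forcing(rc) / 4
```

The two branches meet continuously at RC = 6.8 W/m2, since the exponential factor equals one there.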
For 1990, the change in temperature is initialized at zero, and for the year 2000 the change in temperature is initialized at 0.6 degrees Celsius (Hansen, 1999). Nordhaus presents a two-equation exponential decay model from early work by Schneider and Thompson (1981) to describe the process by which the ocean captures heat. The model allows for heat to be released eventually, but the Nordhaus model is more accurately described as an ocean thermal sink rather than an ocean thermal lag. Nordhaus specifies the e-folding time equal to 500 years, so increases in deep ocean temperature decay back toward the pre-industrial level outside the time frame of any model. His specification is inconsistent with historical data.6 More problematic for analysis of RCO2 greater than 2, "the striking finding of the 4 × CO2 run is that the atmosphere-ocean system settles into a second, locally stable equilibrium with a different ocean circulation" (Nordhaus, 1994, note 8, pp. 35-36). Consequently, an exponential decay model mis-specifies the geophysical system for models that extend to a quadrupling (RCO2 = 4) or beyond. An exponential decay process presumes a single equilibrium for biogeophysical processes. Moreover, values for the parameters can, for all practical purposes, treat biogeophysical processes as a heat sink. It is more reasonable to incorporate a thermal lag into climate models. Next I introduce a simple way of capturing the idea of ocean thermal lag in the model, where one half of the radiative forcing is stored in the ocean for 50 years and then released. This is consistent with the finding by Levitus et al. (2000). They find that from 1948 to 1998 the heat content of the top 3000 meters of the oceans increased by 2 × 10^23 joules, corresponding to a warming rate of 0.3 watt/m2, or about one half of the warming predicted by GCMs for the last century. Moreover, in the last decade of the analysis by Levitus et al., from the late 1980s to 1998, ambient temperatures increased by about one degree Fahrenheit. To account for this lag, adjust equation (11) as follows, dropping the superscript on radiative forcing, R:

ΔTt = 2.5Rt/8,  for t < 2040, and    (12a)

ΔTt = 2.5(Rt + R(t-50))/8,  for t > 2040.    (12b)
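The 50-year ocean thermal lag of equations (12a)-(12b) can be implemented directly; a sketch, with an invented mapping that looks up CO2-equivalent forcing by year:

```python
# A minimal implementation of the 50-year ocean thermal lag, eqs (12a)-(12b):
# half of each year's CO2-equivalent forcing warms immediately (hence /8 rather
# than /4), and the half stored in the ocean returns 50 years later.

def lagged_delta_T(forcing_by_year, year, sensitivity=2.5):
    """forcing_by_year: mapping year -> CO2-equivalent forcing RE (W/m2).
    Returns the change in mean global temperature (degrees C) in `year`."""
    current = forcing_by_year[year]
    if year < 2040:                        # eq (12a): stored heat not yet returning
        return sensitivity * current / 8
    released = forcing_by_year[year - 50]  # eq (12b): heat stored 50 years earlier
    return sensitivity * (current + released) / 8
```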
For a doubling of CO2 equivalent gases, Table 5 presents a comparison of estimates of mean global temperature to U.S. average temperature forecasts. The comparison is among three GCMs that forecast temperature changes for a 2 × CO2. Because the U.S. is at higher latitude, the models predict greater warming for the U.S. than the global mean. The ratio of U.S. temperature change to mean global temperature change, averaged over the GCMs, equals:
Table 5.  Comparison of GCMs for a Doubling of CO2 Equivalent Gases.

General Circulation Model (GCM)                Δ°C global   %Δ precipitation   Δ°C U.S. average      %Δ precipitation U.S. average
                                               mean^1       global mean^1      (winter, summer)^2    (winter, summer)^2
Goddard Institute for Space Studies (GISS)     4.20         11%                4.32 (5.46, 3.50)     20% (13, 24)
Geophysical Fluid Dynamics Laboratory (GFDL)   4.00         8.3%               5.09 (5.25, 4.95)     9% (19, -8)
Oregon State University (OSU)                  2.84         7.8%               2.95 (2.95, 3.10)     17% (24, 11)

^1 Source: Williams et al. (1996).
^2 Source: Adams et al. (1988).
ΔUST/ΔT = 1.12.    (13)

The adjustment for latitude allows us to obtain average U.S. temperatures:

ΔUSTt = 1.12[2.5Rt/8],  for t < 2040, and    (14a)

ΔUSTt = 1.12[2.5(Rt + R(t-50))/8],  for t > 2040.    (14b)

Averaged over the 30-year period from 1951 to 1980, the mean global temperature is estimated at about 15 degrees Celsius (table, p. xxxvii, IPCC, 1990). Consistent with equation (13), the average U.S. temperature for 1990 is initialized at 12% greater, or 16.8 degrees. The future temperatures for each scenario are calculated by adding the change to the initial temperature:

Tt = 16.8 + ΔUSTt.    (15)
The IPCC (1990) assessment, based on a doubling, projects U.S. precipitation to increase from 0 to 15% in winter and decrease from 5 to 10% in summer. In a review of 16 GCMs, Williams, Shaw and Mendelsohn (1996) present mean global temperature increases and precipitation. The temperature increase averaged across GCMs is about 3.5 degrees Celsius, with an increase in precipitation equal to 7%. But there are duplicate numbers for mean global temperature and precipitation increases, presumably because some of the models are closely related in their construction, close derivatives of one another. Mendelsohn, Nordhaus and Shaw (1994) state they are following the IPCC with an 8% increase in precipitation corresponding to a 3 degree Celsius warming for a doubling of CO2 equivalent gases.
Averaging across the three GCMs in Table 5, the percentage increase in U.S. precipitation equals 15.33%, with a corresponding average across GCMs of U.S. temperature increases equal to 4.12 degrees Celsius. The change in summer rainfall varies across the models, from a low of an 8% decrease to a high of a 24% increase. This is the basis for the sensitivity analysis presented below. Let 50 inches annually be the initial value for average U.S. precipitation. In the mid-case scenario, precipitation increases proportionately by 15.33% per increase in temperature of 4.12 degrees Celsius. For the years from now until a temperature increase of that amount, precipitation, P, equals:

Pt = 50[1 + 0.1533(Tt - 16.8)/4.12].    (16a)

The dry scenario is an 8% decrease in precipitation for an increase of 4.12 degrees:

Pt = 50[1 - 0.08(Tt - 16.8)/4.12].    (16b)

The wet scenario is a 24% increase in precipitation for an increase of 4.12 degrees:

Pt = 50[1 + 0.24(Tt - 16.8)/4.12].    (16c)

Equations (16a-c) are valid for the first incremental temperature increase of 4.12 degrees, but should be compounded for each additional increment of 4.12 degrees. Without compounding, for example, precipitation could become negative in the dry scenario. To generalize for compounding, precipitation is given by

Pt = PNt[1 + r(Tt - 16.8 - NtΔT)/ΔT]  for T1990 + NtΔT <= Tt < T1990 + (Nt + 1)ΔT,    (17a)

where

PNt = 50(1 + r)^Nt.    (17b)

In equations (17a) and (17b), T1990 = 16.8, ΔT = 4.12, and r = %ΔP. Nt is the number of times temperature has increased from 1990 by an increment of ΔT = 4.12. With an Excel IF statement, Nt can be written as follows:

Nt = IF(D > 7, 7, IF(D > 6, 6, IF(D > 5, 5, IF(D > 4, 4, IF(D > 3, 3, IF(D > 2, 2, IF(D > 1, 1, 0))))))),    (17c)

where D = (Tt - T1990)/ΔT. Equations (8), (15), and (17) give a forecast that includes the combination of the following variables: ambient CO2 (equation 8), average U.S. temperature (equation 15), and average U.S. precipitation. If we had an aggregate agricultural surplus function that predicts agricultural surplus as dependent upon these three variables, we could forecast the time path of the surplus.
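The compounding rule of equations (17a)-(17c) translates directly into code; the nested Excel IF cascade for Nt is simply the floor of D capped at 7. A sketch (the function name and argument names are choices for the example):

```python
# The compounded precipitation rule, eqs (17a)-(17c). The nested Excel IF
# cascade for N_t is just the floor of D = (T_t - 16.8)/4.12, capped at 7.
T1990, DT = 16.8, 4.12

def precipitation(temp, r):
    """U.S. average precipitation (inches) at temperature `temp` (degrees C),
    for scenario rate r: 0.1533 (mid), -0.08 (dry), or 0.24 (wet)."""
    D = (temp - T1990) / DT
    N = min(int(D), 7) if D > 0 else 0   # eq (17c): completed 4.12-degree increments
    P_N = 50 * (1 + r) ** N              # eq (17b): compounded base for increment N
    return P_N * (1 + r * (D - N))       # eq (17a): scaling within current increment
```

Compounding keeps the dry scenario bounded above zero: each 4.12-degree increment removes 8% of the then-current level rather than 8% of the 1990 level.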
ADAMS ET AL. (1999) DATA AND ESTIMATION OF AGGREGATE AGRICULTURAL SURPLUS

Adams et al. (1999) generate data from a combination of crop simulation models, non-linear programming models, and agricultural trade models. Their unique approach includes adaptation by farmers to climate change, and allows for technological change. I use their data to estimate aggregate agricultural surplus as a function of ambient CO2, temperature, precipitation, and the rate of technological change. Using the estimated aggregate agricultural surplus function, in the next section I predict future surplus conditional on the climate forecasts from equations (8), (15), and (17). By altering the forecasts, the comparative dynamics show the impact on U.S. agriculture from climate change.

A. Adams et al. Data7
The publication by Adams et al. (1999) is the culmination to date of their previous work (Rosenzweig & Parry, 1994; Adams et al., 1988, 1990, 1995). Their approach is to simulate crop yields using dynamic growth Crop Simulation Models (CSMs). They do so for various combinations of temperature, ambient CO2, and precipitation, letting the computerized CSMs simulate global climate experiments. Then they input changes in crop yields into an economic quadratic programming model of the agricultural sector. The economic model allows farmers to adapt the mix of crops to maximize profit, given the changes in yield and prices that result from global warming and the demand for food. The CSMs originally projected the impact of warming on soybeans, corn and wheat (Adams et al., 1990); later (Adams et al., 1999) they added cotton, potatoes, tomatoes and citrus, forage and livestock. The CSMs account for solar radiation, precipitation, temperature, soil properties that capture moisture, and the enhanced yield "fertilizing effect" of increased CO2 in the atmosphere. The results from the CSMs are extrapolated to other crops in the economic model. Adams et al. (1999) consider 64 climate configurations: precipitation changes (-10, 0, 7, and 15%), temperature changes (0, 1.5, 2.5 and 5.0°C), and ambient CO2 fertilization (at 355, 440, 530, and 600 ppmv). They assume that each configuration is spread uniformly across the U.S. They calibrate the crop simulation models for each region in the U.S., changing present climate data by these amounts.
132
DARWIN C. HALL
The speed of wheat, corn, and soybean crop development increases with temperature, causing yield decreases and higher water demand. Increases in ambient CO2 decrease water demand by increasing the efficiency of water use. Cotton has decreased yield from temperature, since it reaches maturity in fewer days, but increased yield from precipitation. For irrigated areas, no change in cotton yield is expected from changing precipitation. Similarly, potatoes, tomatoes, and citrus are modeled to have no effect from precipitation since they are irrigated. Adams et al. (1999) assume increases in citrus yield from CO2, although the reason is "poorly substantiated in the present literature". Increases in temperature cause the loss of a suitable dormant period, decreasing citrus yield in the south and potentially increasing yield in the north, but potential migration is constrained because sandy soils don't exist in the north. Potato yields fall with temperature, and rise with CO2. Tomato yields increase with CO2 and increase with temperature up to a 1.5 to 2.5 degree rise, after which yields fall. Adams et al. (1999) use two CSMs for forage production and livestock, one for the more arid west of the U.S., and another for the east. These models were calibrated for various locations, using existing weather data to get a baseline prediction, and then modifying the amounts of precipitation and temperature. For example, changes in precipitation were "applied uniformly to each monthly value". The impacts of temperature, precipitation, and CO2 fertilization varied by location, with increases in precipitation generally increasing yields, CO2 fertilization increasing yields, and the effect of temperature mixed, depending on the existing temperature and the size of the increase.
Averaged across regions, they predict increases in yields; where predicted yields fall, the amounts are small, but other locations are projected to have rather large increases in yields, depending on the climate configuration. Direct effects on livestock include appetite-suppressing temperature increases, and decreased energy needed in the winter to stay warm. Averaged across regions and effects, they predict falling livestock production. Adams et al. (1999) account for changes in technology and adaptation to global warming. There will be adjustments to warming - R&D will help crops to migrate, and will develop heat-tolerant and drought-tolerant varieties. Farmers will adjust inputs, and the timing of planting and harvesting. Crop migration is constrained by soil barriers, with significant yield losses. Adams et al. rely on time-series regression to capture improvements in yields over time, and crop migration. They use cross-section regression to account for adjustments and adaptation of farmers to regional temperature differences. They compare the crop simulation results to results from regression of county yield on temperature and precipitation. Yields do not fall as much with
increases in temperature (but that could be due to correlation with solar radiation). For some regions, wheat yields rise and then fall with rising temperature. Regressions show yields rising with precipitation, but not by as much as projected by the CSMs. Yields fall with April precipitation, reflecting the monsoon effect: intense precipitation damages crops. Based on their examination of these results (not presented but discussed), they assume that at least 50% of the CSM-projected damage from 2.5 degrees Celsius of mean global warming can be mitigated through soil amendments, irrigation, crop migration, and technological change. For 5 degrees, they assume 25% mitigation of yield losses. The economic model (Adams et al., 1999) accounts for differences in crop demand, the impact of precipitation on surface and ground water supply, costs of surface and ground water, crop selection to maximize consumer and producer surplus, costs of feed for livestock as a secondary industry, and regional and international trade. The model allows for future trends of basic variables, based upon trends over the past 40 years, to account for increases in demand through population growth, quantities of inputs, and import levels and supplies. Inputs are adjusted to account for changes in yields over time. Forecasts of economic surplus are developed for the years 1990 and 2060, with and without the effects of climate change. Adams et al. (1999) calculate net consumer and producer surplus for each region of the U.S., as given in the economic model, and sum the impacts to obtain the aggregate impact on the U.S. (including foreign consumer surplus for exports). This is repeated for each of the 64 climate combinations. For each year, 1990 and 2060, they then regress the economic value against precipitation, temperature and ambient CO2, using a quadratic form, and also a simple analysis of variance. The result is a climate change response function.
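The mitigation assumptions above can be sketched as a simple adjustment function. This is illustrative only: Adams et al. (1999) apply these offsets inside their modeling framework, and the linear interpolation between the two stated anchor points is my assumption, not theirs.

```python
def mitigated_damage(gross_damage, warming_c):
    """Apply the Adams et al. (1999) mitigation assumptions: 50% of
    CSM-projected damage is offset at 2.5 C of mean global warming,
    25% at 5 C.  Interpolation between the anchors is an assumption."""
    if warming_c <= 2.5:
        offset = 0.50
    elif warming_c >= 5.0:
        offset = 0.25
    else:
        # linear interpolation between the two stated anchors (my assumption)
        offset = 0.50 + (warming_c - 2.5) * (0.25 - 0.50) / (5.0 - 2.5)
    return gross_damage * (1.0 - offset)

print(mitigated_damage(100.0, 2.5))  # 50.0
print(mitigated_damage(100.0, 5.0))  # 75.0
```

Note that the mitigated share falls as warming rises: larger temperature increases leave less of the projected damage offset by adaptation.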
For each climate combination, they compare the predicted net surplus with the prediction conditioned on 355 ppmv of CO2 (today's ambient concentration), 0% change in precipitation, and 0% change in temperature. The 1990 regression results perform a comparative static experiment, predicting the impact of climate change if it were imposed on agriculture today and the agricultural industry could instantaneously respond. The 2060 regression results perform the comparative static experiment of imposing climate change over time, with research and development to adapt, comparing agricultural surplus in 2060 to the 2060 surplus if there were no warming but research and development were to continue to improve yields. The comparative static results for 1990 conditions are smaller than the comparative static results for the year 2060. The impact of global warming is larger given the adjustments for technology and economic conditions. Whether
the impact of global warming is positive or negative depends on the climate configuration. Overall, the results by Adams et al. (1999) indicate that the impact is positive, but there are some climate combinations they consider where the opposite result is obtained. For further details about their approach and some additional results, see Adams et al. (this volume).

B. Aggregate Agricultural Surplus
I estimate a generalized power function (GPF) with the data generated by Adams et al. (1999) from both years, 1990 and 2060. The GPF is of the form:

S = β₀X^β exp(φX + ε)    (18)
where β₀ is a scalar, β and φ are vectors, and X is a matrix of explanatory variables. This function is quite general (de Janvry, 1972). The Cobb-Douglas is a special case. Since a power function cannot include zero values for the explanatory variables, the Adams et al. (1999) data cannot be used directly. Adams et al. present the change in surplus from a base case, given changes in temperature, ambient CO2, and precipitation. The 1990 values for surplus, temperature, and precipitation must be added to the data in Adams et al. The initial values for temperature and precipitation are 16.8 degrees Celsius (equation 15 above) and 50 inches, respectively. Adams et al. use a linear-dummy variable specification to estimate the impact of temperature, precipitation, and the CO2 fertilization effect on total surplus (consumer, producer, foreign) for U.S. agriculture. The intercept coefficient in Adams et al. (1999, Appendix, Table 3) is 1239.412, which is the value of surplus in millions of 1990 $ for U.S. agriculture. Similarly, the intercept coefficient in year 2060 is 1750.594, which is the total surplus for the year 2060 in millions of 1990 $ in the absence of climate change. To generate the aggregate surplus data, for each of the 64 climate combinations, I add the Adams et al. estimated changes in 1990 to 1239.412, and the estimated 2060 changes to 1750.594. A simple version of the GPF that captures the effect of technological change is given by:

S = β₀ P^(β₁+β₂Y) T^(β₃+β₄Y) C^(β₅+β₆Y) exp(β₇P + β₈T + β₉C + β₁₀Y + ε)    (19)

where S = producer plus consumer plus foreign surplus, P = precipitation, T = temperature in degrees Celsius, C = ambient CO2 in ppmv of carbon, and Y = number of years (set to zero for 1990, increasing by one for each 5-year period of the analysis). This function has the desirable property that the marginal surplus (with respect to the climate-input variables) can take on 15 shapes (see Hall, 1999),
Table 6. GPF Equation (19).
Dependent Variable: LNS
Method: Least Squares
Date: 04/03/00   Time: 17:29
Sample: 1 127
Included observations: 127

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C             2.933413     1.174671      2.497220     0.0139
LNP           0.412327     0.325362      1.267287     0.2076
Y*LNP         0.001703     0.001774      0.959949     0.3391
LNT           1.196930     0.282567      4.235920     0.0000
Y*LNT        -0.001613     0.001696     -0.950998     0.3436
LNCO2         0.122423     0.075481      1.621892     0.1075
Y*LNCO2       0.001416     0.000808      1.752045     0.0824
P            -0.006399     0.006387     -1.001853     0.3185
T            -0.069016     0.014651     -4.710636     0.0000
CO2          -0.000136     0.000162     -0.839087     0.4031
Y             0.014576     0.009875      1.476111     0.1426

R-squared            0.995498    Mean dependent var      7.299164
Adjusted R-squared   0.995110    S.D. dependent var      0.179658
S.E. of regression   0.012564    Akaike info criterion  -5.833392
Sum squared resid    0.018310    Schwarz criterion      -5.587045
Log likelihood       381.4204    F-statistic             2564.912
Durbin-Watson stat   1.796943    Prob(F-statistic)       0.000000
only four of which are consistent with theory. These four cases have the desirable property of concavity: diminishing marginal surplus. We should expect that total surplus will rise to some maximum with increases in any of the climate variables (temperature, precipitation, and ambient CO2), and then fall thereafter. A good test of the model is that the results are consistent with one of these four cases. If the hypotheses that the model takes on one of the four shapes are rejected, then we can reject the functional form.⁸ After taking logs of both sides, I use ordinary least squares to estimate the parameters in equation (19), and present the results in Table 6. The alternative shapes of the functions are consistent with the hypothesized impacts of precipitation, temperature and ambient CO2 on agricultural surplus. Technological change can be both embodied and disembodied. Disembodied technological change (β₁₀ ≠ 0) allows the marginal surplus curve to increase irrespective of the climate variables, while embodied technological change (β₂, β₄, β₆ ≠ 0) has its influence through one or more of the climate variables.
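Because taking logs of equation (19) yields a specification that is linear in the parameters, ordinary least squares applies directly. A minimal sketch on synthetic data (the coefficients and data ranges below are illustrative, chosen near the Table 6 estimates; this is not the Adams et al. dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
P = rng.uniform(45.0, 58.0, n)            # precipitation (inches)
T = rng.uniform(16.8, 21.8, n)            # temperature (degrees C)
C = rng.uniform(355.0, 600.0, n)          # ambient CO2 (ppmv)
Y = rng.integers(0, 15, n).astype(float)  # 5-year periods since 1990

# Design matrix for the log of equation (19):
# ln S = b0 + (b1 + b2 Y) ln P + (b3 + b4 Y) ln T + (b5 + b6 Y) ln C
#        + b7 P + b8 T + b9 C + b10 Y
X = np.column_stack([np.ones(n),
                     np.log(P), Y * np.log(P),
                     np.log(T), Y * np.log(T),
                     np.log(C), Y * np.log(C),
                     P, T, C, Y])

true_beta = np.array([2.9, 0.41, 0.002, 1.20, -0.002, 0.12, 0.001,
                      -0.006, -0.069, -0.0001, 0.015])
lnS = X @ true_beta  # noiseless synthetic dependent variable

beta_hat, *_ = np.linalg.lstsq(X, lnS, rcond=None)
print(np.allclose(beta_hat, true_beta, atol=1e-6))
```

With noiseless data the regression recovers the coefficients exactly (up to numerical precision); with the Adams et al. data the fit instead reflects sampling variation, as in Table 6.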
Table 7. GPF Equation (20).
Dependent Variable: LNS
Method: Least Squares
Date: 04/03/00   Time: 17:48
Sample: 1 127
Included observations: 127

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C             2.961494     1.171963      2.526953     0.0128
LNP           0.416651     0.324943      1.282230     0.2023
LNT           1.179967     0.281908      4.185643     0.0001
LNCO2         0.120769     0.075405      1.601608     0.1119
Y*LNCO2       0.001387     0.000807      1.719232     0.0882
P            -0.006258     0.006380     -0.980820     0.3287
T            -0.068699     0.014636     -4.693771     0.0000
CO2          -0.000132     0.000161     -0.817886     0.4151
Y             0.016712     0.004973      3.360335     0.0010

R-squared            0.995428    Mean dependent var      7.299164
Adjusted R-squared   0.995118    S.D. dependent var      0.179658
S.E. of regression   0.012553    Akaike info criterion  -5.849557
Sum squared resid    0.018593    Schwarz criterion      -5.648001
Log likelihood       380.4469    F-statistic             3211.577
Durbin-Watson stat   1.777927    Prob(F-statistic)       0.000000
Embodied technological change can be interpreted as an adaptation to global warming - technological change embodied in precipitation, ambient CO2, and temperature. Disembodied technological change can be interpreted as due to general improvements in agricultural productivity. Testing the hypothesis that there is no technological change, I reject the hypothesis that there is no disembodied technological change, and the hypothesis that there is no technological change embodied in ambient CO2. The hypotheses of no technological change embodied in temperature or precipitation are not rejected (Table 6). Accepting the null hypothesis that there is no technological change embodied in precipitation, and similarly for temperature, the model is re-specified. In this specification, there is technological change embodied in CO2 and disembodied technological change:

S = β₀ P^β₁ T^β₂ C^(β₃+β₄Y) exp(β₅P + β₆T + β₇C + β₈Y + ε)    (20)
Table 7 presents the results. From Table 7 and equation (20), it is possible to estimate the optimal values for precipitation, temperature, and ambient CO2.
These values are found by maximizing (20) with respect to P, T, and C. The optimal values are given by:

P* = β₁/(-β₅) = 66.58    (21a)

T* = β₂/(-β₆) = 17.18    (21b)

C* = (β₃ + β₄Y)/(-β₇) = 914.92 + 10.51Y    (21c)
where Y increases by one for every five years. From equation (21b), we can conclude that it is already almost as warm as is optimal. In fact, with another increase of one-half degree Celsius, global warming will begin to reduce agricultural surplus in the U.S. The potential benefits from global warming to U.S. agriculture are not from the change in temperature, but from possible increases in precipitation and from the CO2 fertilization that will occur simultaneously with global warming. In addition, over time we will see an increase in the marginal surplus from CO2 fertilization as we learn how to better take advantage of ambient CO2, and adapt to climate change.
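The first-order conditions behind equations (21a-c) can be checked directly from the Table 7 point estimates (the arithmetic below is mine, as are the variable names):

```python
# Table 7 point estimates for equation (20)
b1, b2 = 0.416651, 1.179967                   # coefficients on ln P, ln T
b3, b4 = 0.120769, 0.001387                   # coefficients on ln C, Y*ln C
b5, b6, b7 = -0.006258, -0.068699, -0.000132  # coefficients on P, T, C

# Maximizing ln S in each variable: d(ln S)/dP = b1/P + b5 = 0, and so on
P_star = b1 / -b5        # optimal precipitation, inches
T_star = b2 / -b6        # optimal temperature, degrees C
C_star_1990 = b3 / -b7   # optimal ambient CO2 in 1990 (Y = 0), ppmv
C_star_slope = b4 / -b7  # increase in optimal CO2 per 5-year period

print(round(P_star, 2), round(T_star, 2),
      round(C_star_1990, 2), round(C_star_slope, 2))
# 66.58 17.18 914.92 10.51
```

The computed optima match equations (21a-c) to two decimal places.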
COMPARATIVE DYNAMICS OF DAMAGE TO AGRICULTURE FROM CLIMATE CHANGE

In this section, the estimated agricultural surplus function given by equation (20) predicts future surplus, conditional on the climate forecasts from equations (8), (15), and (17), and the comparative dynamics show the impact on agriculture from climate change. For the case of no climate change (NC) and the three sets of geoeconomic assumptions given in Table 8, MR, RE, and NY, the predicted surplus for each year Y is given by:

S_NC = exp[β₀ + β₁ log(50) + β₂ log(16.8) + β₃ log(353) + β₄Y log(353) + β₅(50) + β₆(16.8) + β₇(353) + β₈Y]    (22)
S_RE = exp[β₀ + β₁ log(P_RE) + β₂ log(T_RE) + β₃ log(C_RE) + β₄Y log(C_RE) + β₅P_RE + β₆T_RE + β₇C_RE + β₈Y]    (23a)

S_MR = exp[β₀ + β₁ log(P_MR) + β₂ log(T_MR) + β₃ log(C_MR) + β₄Y log(C_MR) + β₅P_MR + β₆T_MR + β₇C_MR + β₈Y]    (23b)

S_NY = exp[β₀ + β₁ log(P_NY) + β₂ log(T_NY) + β₃ log(C_NY) + β₄Y log(C_NY) + β₅P_NY + β₆T_NY + β₇C_NY + β₈Y]    (23c)

The forecast given by equation (22) presents the effect of technological change over time on the time path of agricultural surplus.
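As an internal consistency check, equation (22) evaluated with the Table 7 point estimates should roughly reproduce the no-climate-change surplus levels (the 1239.412 intercept for 1990 and the 1750.594 intercept for 2060). A sketch (the scenario paths in equations (23a-c) would additionally require the P, T, and C forecasts from equations (8), (15), and (17), which are not reproduced here):

```python
import math

# Table 7 point estimates for equation (20), in the order of the table
b0, b1, b2 = 2.961494, 0.416651, 1.179967
b3, b4 = 0.120769, 0.001387
b5, b6, b7, b8 = -0.006258, -0.068699, -0.000132, 0.016712

def surplus_no_change(Y, P=50.0, T=16.8, C=353.0):
    """Equation (22): the no-climate-change surplus path, with climate
    held at its 1990 values and Y counted in 5-year periods from 1990.
    Returns millions of 1990 dollars."""
    lnS = (b0 + b1 * math.log(P) + b2 * math.log(T)
           + (b3 + b4 * Y) * math.log(C)
           + b5 * P + b6 * T + b7 * C + b8 * Y)
    return math.exp(lnS)

s1990 = surplus_no_change(0)   # roughly 1,230: near the 1,239.412 intercept
s2060 = surplus_no_change(14)  # roughly 1,740: near the 1,750.594 intercept
print(round(s1990), round(s2060))
```

That the fitted GPF lands close to both of the Adams et al. intercepts suggests the estimated baseline tracks their no-warming surplus path.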
Table 8. Geoeconomic Assumptions

                                     Macro      Temperature increase    Economically available
                                     Economic   for a doubling of CO2   fossil fuels in metric
                                     Model      equivalent gases        gigatons of carbon

Dry climate: -8% precipitation       RE         1.5°C                    8,000
change per 4.12°C increase           MR         2.5°C                   11,500
                                     NY         4.5°C                   17,000

Ave. climate: +15% precipitation     RE         1.5°C                    8,000
change per 4.12°C increase           MR         2.5°C                   11,500
                                     NY         4.5°C                   17,000

Wet climate: +24% precipitation      RE         1.5°C                    8,000
change per 4.12°C increase           MR         2.5°C                   11,500
                                     NY         4.5°C                   17,000

RE: Reilly, Edmonds, Gardner, and Brenkert (1987); MR: Manne-Richels (1990); NY: Nordhaus-Yohe (1983)
Each of the three macro models in equations (23a-c) has a separate forecast for carbon emissions. The ambient concentration of CO2 is from equation (8) and the changes in average U.S. temperature are from equation (14), forecast for the next 300 to 325 years. There are three alternative temperature forecasts summarized in Table 8. The time path of carbon emissions is lowest for the RE macro-model; the mid-range is the MR macro-model; the highest is the NY macro-model. Since the carbon emissions determine the temperature, I couple the most optimistic macro-model (RE) with the most optimistic temperature sensitivity assumption (a 1.5 degrees Celsius global increase for a CO2 equivalent doubling), and the most optimistic assumption about cumulative carbon emissions (7,000 metric gigatons). Similarly, I couple the mid-range MR model with the mid-range temperature sensitivity assumption (a 3.0 degrees Celsius global increase for a CO2 equivalent doubling), and the mid-range assumption about the extent of economically available fossil fuels. I couple the most pessimistic macro model, NY, with the most pessimistic temperature sensitivity assumption (a 4.5 degrees Celsius global increase for a CO2 equivalent doubling), and the most pessimistic assumption about the extent of economically available fossil fuels. In this fashion, I summarize and encompass 27 possible combinations with three combinations of geoeconomic assumptions.
The effect of precipitation on the results is complex because, from equation (21a), the optimal amount of precipitation is significantly higher than at present. Consequently, Figs 3-5 present a total of nine results, based upon the assumptions in Table 8. Figures 3-5 show the time paths of agricultural surplus. Figures 6-8 show the change in surplus from the case of no climate change. All of the temperature forecasts include the lagged effect of one half of the radiative forcing for 50 years. The calculations are for the years 1990, 2000, 2025, etc., in 25-year intervals, until fossil fuels are economically exhausted. The switch to alternative energy sources occurs between the years 2300 and 2325, depending on the macro-model and the economically available fossil fuels. In some respects, these results confirm analyses by Adams et al. (1999) and Mendelsohn, Nordhaus and Shaw (1994), but in other respects the results provide new insights. Consistent with earlier studies, if temperature increases are small, there may be a small near-term benefit to U.S. agricultural surplus. If temperature rises between 3 to 4.5 degrees Celsius per RCO2, however, then eventually there is a significant loss. Moreover, the loss rapidly grows to substantial proportions. At the high end of temperature increases, agricultural surplus stops growing and may even collapse altogether. If climate change is drier or wetter than expected (Figs 3 and 5), then for all cases, there is an eventual loss to U.S. agricultural surplus. All the greater the
Fig. 3. Agricultural Surplus for Dry Climate. (Series: Reilly-Edmonds at 1.5 degrees; Nordhaus-Yohe at 3 degrees; Manne-Richels at 4.5 degrees; No Warming.)
Fig. 4. Agricultural Surplus for Average GCM Precipitation. (Series: Reilly-Edmonds at 1.5 degrees; Nordhaus-Yohe at 3 degrees; Manne-Richels at 4.5 degrees.)
Fig. 5. Agricultural Surplus for Wet Climate. (Series: Reilly-Edmonds at 1.5 degrees; Nordhaus-Yohe at 3 degrees; Manne-Richels at 4.5 degrees; No Warming.)
Fig. 6. Change in Surplus for Dry Climate. (Series: Reilly-Edmonds at 1.5 degrees; Nordhaus-Yohe at 3 degrees; Manne-Richels at 4.5 degrees.)
Fig. 7. Change in Surplus for Average GCM Climate. (Series: Reilly-Edmonds at 1.5 degrees; Nordhaus-Yohe at 3 degrees; Manne-Richels at 4.5 degrees.)
Fig. 8. Change in Surplus for Wet Climate.
temperature increase for a doubling of CO2 equivalent gases, all the sooner the losses begin. These results can be contrasted with my earlier work (Hall, 1999). With a 50-year ocean thermal lag, damage does not occur as soon as without the lag. On the other hand, if we delay any significant policy intervention until we detect damage, these results show that it is too late to prevent significant losses in agricultural surplus. Under all scenarios, once damage begins, within the following 50 years the damage becomes pronounced. There may be an equivalent lag due to political-economic inertia that delays policy intervention well after damage is detected. The ownership of fossil fuels is the source of vested interests with the political ability to stop or delay effective policy. In addition, once we decide to subsidize research and development of alternative energy technologies, the transition will take decades because the economic comparison will be between the long-run cost of alternatives and the short-run cost of existing energy sources. If political-economic inertia plus the transition together add another 50 years, then the damage to agricultural surplus could be catastrophic (Figs 6-8). There are a number of reasons to believe that losses to U.S. agricultural surplus will occur sooner than indicated by the results; these reasons are given below.
IMPLICATIONS

Many economists conclude that we should delay implementation of any significant new policies to substantially slow emissions that cause global warming, arguing that we can always implement such policies later if related climate change proves to cause serious damage. They argue that, instead, we can adapt to warming and perhaps benefit from it; meanwhile, by putting off action, we may be able to avoid the cost of reducing emissions altogether, or at least delay the cost. The same economists argue that serious damage - if it occurs at all - will most likely occur in the agricultural sector, so I have focused here on that sector as a bellwether. This analysis introduces biogeophysical lags, formally represented by ocean thermal lags of a half-century. The evidence shows that the earth is now experiencing these lags (Levitus et al., 2000; Hansen, 1999). It is precisely because of such lags that comparative static economic analysis based on a doubling of CO2 equivalent gases uses the wrong counterfactual. Cline (1992) developed this argument a decade ago, but most of the subsequent publications by economists have ignored this point. Mendelsohn, Nordhaus and Shaw (1994) performed a comparative static analysis, estimating the difference between agriculture today as if a doubling of greenhouse gases occurred instantaneously and adaptation also instantaneously occurred, compared to agriculture today without climate change. The correct counterfactuals for comparative static analysis would be a future agriculture with climate change, compared to a future agriculture without climate change. We must expect that technological change would increase future agricultural surplus in the absence of climate change. Consequently, their comparative static analysis understates the losses from climate change. Improving on Mendelsohn, Nordhaus and Shaw (1994), Adams et al.
(1999) perform the latter comparative static analysis, accounting for the technological change that would occur in the absence of climate change. Comparative dynamic analysis is a further improvement because it avoids arbitrarily selecting a doubling of greenhouse gases as the point for comparison. To get the counterfactuals right, we need to specify the dynamic time paths with and without climate change. A comparative dynamic analysis considers the difference between the time paths for agricultural surplus with and without climate change. In the absence of climate change we can expect future technological improvements. In the absence of significant policy intervention, climate change will continue until the economically available fossil fuels are gone. The correct time length for analysis, as Cline (1992) argues, extends until fossil fuels are no longer economic. The time path with climate change should include adaptation to climate change. Both time paths
should incorporate technological change. But the technological change will differ between the two time paths. With climate change, there is the opportunity cost of diverting research and development to climate adaptation. For a number of reasons, the forecasts with climate change in equations (23a-c) are optimistic. Some of the reasons are given in the literature. Adams et al. (1990) acknowledge several critical omissions in their work and caution that their main contribution is "highlighting uncertainties" (p. 219). The GCMs do not "include changes in the space and time distributions of climate events. Therefore many significant climate and biophysical features are ignored." They further caution (p. 220) that they do not account for changes in climate variability, such as the frequency of droughts, "mesoscale convection complex" rainfall, and hail damage. Adams et al. (this volume) consider these variations. The forecasts in equations (23a-c) do not capture the impact of a long-term drought, which could be harsh, turning the interior Great Plains to desert. For water supply, as long as the annual constraint is not exceeded, the CSMs allow for the optimal amount of water to optimize crop growth over the growing season. Implicitly, the assumption is that the Army Corps of Engineers will build dams along the Mississippi River, canals throughout the Great Plains, and divert the headwaters of the Missouri River to the other side of the Great Divide to flow southwest instead of to the Mississippi. The forecasts do not capture the impact of torrential rains stripping the land of topsoil. Nor do these forecasts capture the possibility that fluctuations between floods and droughts could amplify, both washing away the topsoil and baking into laterite (McNeil, 1964) what soil remains. The crop simulation models assume no limits to soil nutrients, and no pests that limit crop growth. Adams et al.
(1999) explain that the CSMs allow amounts of fertilizer to vary for optimal results, so the results do not admit damage to soil. The CSMs for corn, soybeans and wheat assume optimal pest management, and no nutritional limits in the soil that could limit CO2 fertilization. See Erickson (1993) for a more complete set of reasons to be concerned that the forecasts with climate change are optimistic. There is another reason, not noted in the literature, to be concerned. The forecasts in equations (23a-c) assume that technological change will continue into the future at rates equal to those of the past. McElwain, Beerling and Woodward (1999) examine the period at the Triassic-Jurassic boundary when 90% of all plants became extinct. They estimate "that ambient CO2 increased from 600 to 2100-2400 ppm across the T-J boundary" (p. 1387), so that RCO2 equaled 2 and then increased until RCO2 equaled between 7.5 and 8.5. Their estimate is consistent with the range of error in Berner's (1997) biogeological
model (see Fig. 1). The forecasts presented above admit the possibility that technological change will allow us to survive an event that destroyed most life on the planet. What if such improvements are not possible? And even if they are, who would want to live in those conditions? All three macro-models predict RCO2 equal to 2 by 2075. The Reilly-Edmonds model predicts RCO2 equals 8 before 2325. The Nordhaus-Yohe macro-model predicts this occurs before 2300. The Manne-Richels macro-model predicts this occurs before 2250. Technological change may be unable to meet this challenge. With a 50-year ocean thermal lag, to avoid the consequences the world economy would have to complete the transition to alternative energy sources at least a half century prior to RCO2 equal to 8. Based upon the emissions forecast from the Manne-Richels model, we would have to complete the transition within the 21st century. This raises the question: when do we have to begin implementing significant policies in order to complete the transition this century? Goodstein (this volume) explores some issues that affect the answer to that question, such as state dependency for technological change in the case of renewable energy sources. The forecasts presented above are especially optimistic when comparisons are made between the impacts of global warming on U.S. agriculture and on the developing countries. There are two important reasons. The first is that the U.S. has a greater ability to adapt. The expectation of adaptation in the U.S. is predicated on a continuation of the present government policy of subsidizing research and development. The world-view held by many economists is antagonistic to government intervention in the economy. Yet agriculture in the U.S. has the best possibility to adapt because of the Agricultural Experiment Stations and Extension Service. The institutional structure for adaptation in most of the developing parts of the world is minimal to non-existent.
NOTES

1. I thank Jane Hall and Richard Howarth for helpful comments on earlier drafts.
2. As discussed in the next section, in the DICE model by Nordhaus (1994) there is only one climate equilibrium, and all economic valuation of climate impacts occurs prior to any feedback effects in the model of heat re-emerging from the ocean.
3. For a definition, see Hall (1996).
4. The approximation is in Hall (1996); compare the tables there with those in Cline (1992).
5. E-folding time is for exponential decay what doubling time is for exponential growth. Suppose that a stock can be described over time by exponential decay to zero. At the initial time, the amount is X₀ and over time the amount is given by Xₜ = X₀ exp(-rt). The time derivative is given by dX/dt = -rXₜ. At t = 1/r, Xₜ = X₀/e, where e is the exponential number, approximately 2.71828. The ratio 1/e equals approximately 37%,
so the e-folding time is the time at which the stock falls by 63% of the original value X₀. For an annual decay rate, for example, if r equals 2%, the e-folding time is 50 years.
6. Nordhaus (1994, p. 40) states, "There is insufficient variation in the data (output from GCMs simulating warming from the pre-industrial period to 1990) to allow us to estimate more than two of the parameters, so we used physical data from the models to calibrate the two least important parameters." The "physical data" are based on a single number, the 500-year e-folding time for deep oceans (p. 37). Other physical parameters are values for the heat capacity of the top 133.5 meters of ocean plus land and air, and the heat capacity of the deep ocean, between 133.5 meters and 1500 meters in Nordhaus's analysis. This is inconsistent with the historical record compiled and analyzed by Levitus et al. (2000). They show substantial changes in the surface to 300 meters, from 300 to 1000 meters, and from 1000 to 3000 meters, all interacting within a 50-year time scale. Obviously, the "two least important parameters" are important.
7. This sub-section is from Hall (1999).
8. Adams et al. use CSM models for which crop yields are hill-shaped with respect to climate variables. The producer and consumer surplus is derived from models in which producers maximize profit and consumers maximize utility. One would expect the results for the aggregate surplus to show theory-consistency. There is a well-known perversity for agriculture, however. When weather destroys yield, producer surplus may increase substantially because of inelastic demand. Another reason for testing hypotheses regarding the coefficients is that technological change varies across crops in the CSMs. Hypothesis testing helps to decide upon a specific version of the aggregate function that may reflect technological change in more than one fashion. This point is developed next.
REFERENCES Adams, R. M., McCarl, B. A., Dudek, K. J., & Glyer, J. D. (1988). Implications of Global Climate Change for Western Agriculture. Western Journal of Agricultural Economics, 13(2), 348-356. Adams, R. M., Rosenzweig, C., Peart, R. M,, Ritchie, J. T., McCarl, B. A,, Glyer, J. D., Curry, R. B., Jones, J. W., Boote, K. J., & Allen, L. H. Jr. (1990). Global Climate Change and U.S. Agriculture. Nature, 345(6272), 17 May, 219-224. Adams, R. M., Fleming, R. A., Chang, C. C., McCarl, B. A., & Rosenzweig, C. (1995). A Reassessment of the Economic Effects of Global Climate Change on U.S. Agriculture. Climate Change, 30, 147-167. Adams, R. M., McCarl, B. A., Sergerson, K., Rosenzweig, C., Bryant, K. J., Dixon, B. L., Conner, R., Evenson, R. E., & Ojima, D. (1999). The Economic Effects of Climate Change on U.S. Agriculture. In: R. Mendelsohn, & J. Neumann (Eds), Chapter 2 The Economics of Climate Change, Cambridge University Press. Adams, R. M., Chen, C. C., McCarl, B. A., & Schimmelpfennig, D. E. This volume, Climate Variability and Climate Change: Implications for Agriculture.
Arrow, K. J., Cline, W. R., Mäler, K.-G., Munasinghe, M., Squitieri, R., & Stiglitz, J. E. (1996b). Intertemporal Equity, Discounting, and Economic Efficiency. In: IPCC (1996b), Climate Change 1995: Economic and Social Dimensions of Climate Change (Chapter 4). Cambridge University Press, 125-144.
Agricultural Damage from Climate Change
147
Berner, R. A. (1997). The Rise of Plants and Their Effect on Weathering and Atmospheric CO2. Science, 276, 25 April, 544-546.
Cline, W. (1992). Global Warming: The Economic Stakes. Washington, D.C.: Institute for International Economics.
de Janvry, A. (1972). The Class of Generalized Power Production Functions. American Journal of Agricultural Economics, 54(2), May, 234-237.
Edmonds, J., & Reilly, J. (1985). Global Energy: Assessing the Future. New York: Oxford University Press.
Energy Information Agency (1995). Annual Energy Review 1994. Department of Energy. Pittsburgh, Pennsylvania: U.S. Government Printing Office, DOE/EIA-0384(94). July.
Erickson, J. D. (1993). From Ecology to Economics: The Case Against CO2 Fertilization. Ecological Economics, 8(2), October, 157-175.
Hall, D. C. (1990). Preliminary Estimates of Cumulative Private and External Costs of Energy. Contemporary Policy Issues, 8(3), 283-307.
Hall, D. C. (1996). Geoeconomic Time and Global Warming: Limits to Economic Analysis. International Journal of Social Economics, 23(4/5/6), 64-87.
Hall, D. C. (1999). Impacts of Global Warming on Agriculture. In: G. H. Peters & J. von Braun (Eds), Food Security, Diversification and Resource Management: Refocusing the Role of Agriculture. Aldershot, United Kingdom: Ashgate Publishing Limited, pp. 186-211.
Hansen, J. M. (1999). Quoted in Pumping Up the Greenhouse. Science, 285, 24 September, 2057b.
Howarth, R. B., & Norgaard, R. B. (1995). Intergenerational Choices under Global Environmental Change. In: D. W. Bromley (Ed.), Handbook of Environmental Economics. Oxford: Blackwell.
IPCC (1990). Climate Change: The IPCC Scientific Assessment. J. T. Houghton, G. J. Jenkins & J. J. Ephraums (Eds). Cambridge and New York: Cambridge University Press.
IPCC (1992). Climate Change 1992: The Supplementary Report to the IPCC Scientific Assessment. J. Houghton, B. Callander & S. Varney (Eds). Cambridge and New York: Cambridge University Press.
IPCC (1994). Schimel, D., Enting, I. G., Heimann, M., Wigley, T. M. L., Raynaud, D., Alves, D., Siegenthaler, U., et al. CO2 and the Carbon Cycle. In: J. T. Houghton, L. Meira Filho, J. Bruce, H. Lee, B. Callander, E. Haites, N. Harris & K. Maskell (Eds), Climate Change 1994: Radiative Forcing of Climate Change and An Evaluation of the IPCC IS92 Emission Scenarios. Cambridge and New York: Cambridge University Press.
IPCC (1996a). Climate Change 1995: The Science of Climate Change. J. T. Houghton, L. G. Meira Filho, B. A. Callander, N. Harris, A. Kattenberg & K. Maskell (Eds). Cambridge and New York: Cambridge University Press.
IPCC (1996b). Climate Change 1995: Economic and Social Dimensions of Climate Change. J. P. Bruce, H. Lee & E. F. Haites (Eds). Cambridge University Press.
Khanna, N., & Chapman, D. (1996). Time Preference, Abatement Costs, and International Climate Policy: An Appraisal of IPCC 1995. Contemporary Economic Policy, 14(2), 56-66.
Levitus, S., Antonov, J. I., Boyer, T. P., & Stephens, C. (2000). Warming of the World Ocean. Science, 287(5461), 24 March, 2225-2229.
Manne, A., & Richels, R. (1990). CO2 Emission Limits: An Economic Cost Analysis for the USA. The Energy Journal, 11(2), 51-74.
McElwain, J. C., Beerling, D. J., & Woodward, F. I. (1999). Fossil Plants and Global Warming at the Triassic-Jurassic Boundary. Science, 285, 27 August, 1386-1390.
148
DARWIN C. HALL
McNeil, M. (1964). Lateritic Soils. Scientific American, 211(5), 96-117.
Mendelsohn, R., Nordhaus, W., & Shaw, D. (1994). The Impact of Global Warming on Agriculture: A Ricardian Analysis. American Economic Review, 84(4), 753-771.
Nordhaus, W. D. (1993). Reflections on the Economics of Climate Change. The Journal of Economic Perspectives, 7(4), 11-26.
Nordhaus, W. D. (1994). Managing the Global Commons: The Economics of Climate Change. Cambridge, Massachusetts: MIT Press.
Nordhaus, W., & Yohe, G. (1983). Future Carbon Dioxide Emissions from Fossil Fuels. In: Changing Climate. Washington, D.C.: National Research Council, National Academy Press, 87-153.
Reilly, J., Edmonds, J., Gardner, R., & Brenkert, A. (1987). Uncertainty Analysis of the IEA/ORAU CO2 Emissions Model. The Energy Journal, 8(3), 1-29.
Rosenzweig, C., & Parry, M. L. (1994). Potential Impact of Climate Change on World Food Supply. Nature, 367, 13 January, 133-138.
Schneider, S. H., & Thompson, S. L. (1981). Atmospheric CO2 and Climate: Importance of the Transient Response. Journal of Geophysical Research, 86(C4), 20 April, 3135-3147.
Williams, L. J., Shaw, D., & Mendelsohn, R. (1996). Evaluating GCM Output with Impact Models. Palo Alto, California: Electric Power Research Institute.
COMPLEXITY IN ORGANIZATIONS: CONSEQUENCES FOR CLIMATE POLICY ANALYSIS

Stephen J. DeCanio, William E. Watkins, Glenn Mitchell, Keyvan Amir-Atefi and Catherine Dibble

ABSTRACT

Algorithmic models specifying the kinds of computations carried out by economic organizations have the potential to account for the serious discrepancies between the real-world behavior of firms and the predictions of conventional maximization models. The algorithmic approach uncovers a surprising degree of complexity in organizational structure and performance. The fact that firms are composed of networks of individual agents drastically raises the complexity of the firm's optimization problem. Even in very simple network models, a large number of organizational characteristics, including some for which no polynomial time computational algorithm is known, appear to influence economic performance. We explore these effects using regression analysis, and through application of standard search heuristics. The calculations show that discovering optimal network structures can be extraordinarily difficult, even when a single clear organizational objective exists and the agents belonging to the firms are homogeneous. One implication is that firms are likely to operate at local rather than global optima. In addition, if organizational fitness is a function of the ability to solve multiple problems, the structure that
The Long-Term Economics of Climate Change, pages 149-174. Copyright © 2001 by Elsevier Science B.V. All rights of reproduction in any form reserved. ISBN: 0-7623-0305-0
evolves may not solve any of the individual problems optimally. These results raise the possibility that externally-driven objectives, such as energy efficiency or pollution control, may shift the firm to a new structural compromise that advances other objectives of the firm as well, rather than necessarily imposing economic losses.

INTRODUCTION

As we shall see ... the parallels between branching evolution in the tree of life and branching evolution in the tree of technology bespeak a common theme: both the evolution of complex organisms and the evolution of complex artifacts confront conflicting "design criteria." Heavier bones are stronger, but may make agile flight harder to achieve. Heavier beams are stronger, but may make agile fighter aircraft harder to achieve as well. Conflicting design criteria, in organism or artifact, create extremely difficult "optimization" problems - juggling acts in which the aim is to find the best array of compromises (Kauffman, 1995, p. 14).

Computational or algorithmic models of organizational behavior offer an alternative to traditional economic thinking about organizations. The standard theory specifies an "objective function" for the firm 1 and proceeds to maximize that objective function subject to various constraints embodying technological possibilities, availability of information, and the firm's expectations about the state of the world and the reactions of other actors. The conventional modeling approach carries with it important consequences for climate policy analysis. In particular, if firms are optimized and located on their production-possibilities frontiers, reductions in greenhouse gas emissions (which had previously been discharged costlessly into the atmosphere) can only be purchased at the expense of net profitability. The marginal cost of greenhouse gas reductions will be everywhere positive, and, under standard assumptions, upward sloping in the quantity of emissions reduced.
These features of the cost curve for carbon emissions reductions are exhibited, for example, by the state-of-the-art models involved in the recent Energy Modeling Forum study that estimated the costs of the Kyoto Protocol under various policy scenarios (Weyant & Hill, 1999; Weyant, 1999). However, the very act of writing down the objective function and the constraints subsumes a great deal of implicit theorizing. In particular, it abstracts from the ubiquitous structural interconnections that characterize all human organizations. 2 Requiring that organizational behavior be described in terms of algorithmic computation is a natural alternative approach for modeling organizations. Defining the processes and rules of procedure followed by individuals who are members of an organization, together with their patterns of communications and lines of authority, is equivalent in a very real sense to
defining the organization. Making explicit the "task" facing the organization, as well as how the agents interact to accomplish the task, amounts to specification of an algorithm that describes the functioning of the organization. Instead of assuming ab initio that the organization sets out to maximize an objective function, this algorithmic approach allows a richer description of the process by which the choices and behavior of individuals result in collective action (see Olson, 1971 for a classic treatment of the collective action problem). In turn, better understanding of the internal functioning of organizations has consequences for key issues in environmental economics such as resolving the "efficiency paradox" and estimating the costs (and potential benefits) to firms of emissions reductions. In the models developed in this chapter, we will focus on two kinds of organizational tasks: (1) the building up of shareholder value through adoption of a profitable innovation; and (2) performance of an "associative task," specifically, adding up a set of numbers. 3 In both cases, the key enhancement to the modeling framework is the addition of the requirement that information can flow only through defined channels of communication. The problem of optimization is then one of finding the organizational structure that most effectively carries out the organization's task or tasks. The information-processing capability of the individual agents is defined very simply, to maintain focus on the structural issues that are usually ignored. Explicit modeling of the algorithm by which organizations carry out their tasks also enables us to perform experiments using search heuristics characterized by simulated selection, mutation, and reproduction of model organizations in a population over time.
Our approach allows us to examine quantitatively such questions as: (a) how different organizations compare in their fitness to carry out particular tasks, (b) the computational complexity of the firm's optimization problem, and (c) how market forces and selection pressures shape the development of organizations over time. It bears emphasis that we do not claim that our stylized models are in any sense "descriptive." Our goal is not to mimic the behavior of real-world organizations, but rather to illustrate by example the kinds of complexities that appear once the importance of a firm's network characteristics is acknowledged. Nor do we argue that our computational models are in any sense "better" than others that have been proposed in the literature. The point is that the structural features that enhance or detract from performance depend on the organization's rules of procedure and its environment, and that finding optimal structures is difficult even in very simple cases. This can most clearly be shown with models that are as sparse as possible, so that the conclusions about rule-dependence and complexity follow a fortiori for more elaborate and realistic cases.
THE MODEL

Mathematically, the organization is defined as a digraph, with the agents as vertices and the actual or potential interactions as edges. 4 The edges may be thought of as channels of communication between the members of the organization. Formally, an organization will be defined as a digraph G consisting of n vertices (members, agents, or individuals) denoted by v_i, i = 1, ..., n, and m edges (v_i, v_j), where the order of the vertices in an edge means that agent i can receive information from agent j, or that agent i "sees" agent j. These directed edges distinguish a digraph (directed graph) from an ordinary undirected graph, in which the edge (v_i, v_j) is the same as the edge (v_j, v_i). Of course, it is possible for both (v_i, v_j) and (v_j, v_i) to be edges in a particular digraph. 5 It is often convenient to represent the digraph G by its adjacency matrix A_G = [a_ij], where a_ij = 1 if agent i sees agent j, and a_ij = 0 if agent i does not see agent j. It is known that the number of non-isomorphic digraphs with a given number of vertices n grows very rapidly with n; no simple formula exists for the relationship, but the number of digraphs for any given number of vertices can be calculated using the Pólya Enumeration Theorem (Trudeau, 1976; Pólya & Read, 1987 [1937]; Harary, 1955). For the remainder of this chapter, we will sometimes refer to digraphs simply as "graphs" with the understanding that we are discussing only directed graphs unless we specify otherwise.
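The adjacency-matrix representation can be sketched in a few lines of Python; the example digraph below is hypothetical, chosen only to illustrate the "sees" relation and the row sums H_i used later.

```python
# A small labeled digraph on 4 agents, stored as an adjacency matrix.
# A[i][j] = 1 means agent i "sees" (can receive information from) agent j.
A = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
]

# H_i, the number of agents seen by agent i, is the sum of row i.
H = [sum(row) for row in A]
print(H)  # [2, 1, 2, 1]

# The graph is directed: agent 0 sees agent 1, but not conversely.
assert A[0][1] == 1 and A[1][0] == 0
```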
A. Adopting a Profitable Innovation

The first of the tasks modeled is the adoption of a profitable innovation. This task may be thought of as undertaking a profitable energy-saving investment, but the model applies to any type of investment with positive net present value. It is assumed that at any given time, each agent in the firm can be in one of two states, designated by 0 and 1, representing non-adoption and adoption of the innovation. The first innovator is selected at random from the members of the organization at time zero. In each successive time period, the agents "see" the others to whom they are connected ("seeing" being the directional linkage) and adopt the innovation with probability P_i, where

P_i(1|0) = f[(Σ_j Y_ij)/H_i]    (1)

Here, Y_ij = 1 if agent j is connected to agent i (in the sense that i sees j's state) and agent j is in state 1, while Y_ij = 0 if agent j is seen by agent i and agent j is in state 0. H_i is the total number of agents seen by agent i and does not change over time. (H_i is the sum of the elements of row i of the adjacency matrix, that is, H_i = Σ_j a_ij.) The time subscript is suppressed to simplify the notation, but it should be borne in mind that P_i is recalculated at each discrete time step, and that the state of non-adoption or adoption of the innovation is evolving over time for each agent. Thus, Σ_j Y_ij is the number of agents seen by agent i that are in state 1 at the time step for which P_i is being evaluated. Once an agent switches to state 1, the agent remains in that state permanently. When an agent adopts the innovation, the agent receives a payoff A. The dollar value of this reward is the same for all agents, and can be thought of as the discounted value of a perpetual cash flow beginning at the point in time the innovation is adopted. From the standpoint of the organization, the shareholder value associated with a particular adoption sequence is the present value of the rewards summed over all the members of the organization. That is,

Π_1 = Σ_{i=1}^{n} A/(1 + r)^{T_i}    (2)

where T_i is the time period in which agent i adopts the innovation, n is the number of agents in the organization, and r is the discount rate. Thus, the sooner the agents adopt the innovation, the greater the shareholder value of the organization. The last element of the model is specification of the probability function f. If this function is S-shaped and depends on the information processing capability of the agents, then there will be a tension between having a high degree of connectedness within the organization (which would tend to reduce the number of "steps" the innovation must go through as it diffuses within the organization, but which has the downside of exposing the agents to "information overload") versus having only sparse connectedness (which reduces the information overload problem but increases the number of steps). A suitable functional form for f is
f(x_i) = 1/{1 + e^{-[x_i - (a/c)]/(b/c)}} - 1/[1 + e^{a/b}]    (3)

where x_i is the same as the argument of f on the right-hand side of equation (1). Here, a and b are scaling parameters, and c is the parameter that specifies the information processing capability of the agents in the organization, given exogenously. The c parameter could be related to agents' human capital, or to the amount of computing power available to them. For low c, the probability of an agent's adopting the innovation is low unless a relatively large proportion of the others seen have already adopted the innovation; for high c, the agents will adopt with high probability even if only a relatively small fraction of those seen have adopted. Given c, the optimization problem for the firm reduces to finding the best structure of linkages between agents. Thus, for high c, a completely connected organization will be best, while for very low c, the best organization will be one in which each agent sees only one other. The interesting cases are the intermediate ones in which it is not clear how best to connect the members of the organization to maximize the speed of diffusion of the innovation. 6
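The adoption process of equations (1)-(3) can be sketched as a simulation. The parameter values for a, b, c, the payoff, and the discount rate below are illustrative assumptions, not the values used in the chapter's experiments.

```python
import math
import random

def f(x, a=1.0, b=0.5, c=1.0):
    # S-shaped adoption probability of equation (3); a and b are scaling
    # parameters, c the information-processing capability. Note f(0) = 0,
    # so an agent who sees no adopters never adopts.
    return (1.0 / (1.0 + math.exp(-(x - a / c) / (b / c)))
            - 1.0 / (1.0 + math.exp(a / b)))

def adoptive_fitness(A, payoff=1.0, r=0.05, c=1.0, max_t=200, seed=None):
    """One simulated adoption run on adjacency matrix A; returns Pi_1 of eq. (2)."""
    rng = random.Random(seed)
    n = len(A)
    H = [sum(row) for row in A]
    state = [0] * n
    first = rng.randrange(n)            # first innovator, chosen at random
    state[first] = 1
    adopted_at = {first: 0}
    for t in range(1, max_t + 1):
        adopters = []
        for i in range(n):
            if state[i] == 1 or H[i] == 0:
                continue
            x = sum(state[j] for j in range(n) if A[i][j]) / H[i]
            if rng.random() < f(x, c=c):
                adopters.append(i)
        for i in adopters:              # adoption is permanent
            state[i] = 1
            adopted_at[i] = t
        if all(state):
            break
    # Agents that never adopt within the horizon contribute nothing.
    return sum(payoff / (1 + r) ** t for t in adopted_at.values())

# A completely connected organization of 4 agents:
K4 = [[int(i != j) for j in range(4)] for i in range(4)]
pi1 = adoptive_fitness(K4, seed=1)
```

Averaging `adoptive_fitness` over many runs and seeds gives the mean fitness of a structure, which is the quantity regressed on graph characteristics later in the chapter.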
B. Solving an Associative Problem

The second kind of task we set for the model organizations is to find the sum of a set of n numbers (where n is the size of the organization), denoted by {z_1, z_2, ..., z_n}. The numbers are assigned at random to the members of the organization, and the process by which they find the sum is as follows:

(1) At each discrete time step t, pick one of the nodes of the digraph at random and denote it by i. (The time subscript is suppressed in describing the algorithm to simplify the notation.) Let {z_i1, z_i2, ..., z_ik} denote the set of numbers assigned to the k nodes that are seen by node i. (Note that k is the same as H_i in the notation used to describe the adoptive task.)
(2) Replace z_i with z_i + Σ_{j=1}^{k} z_ij.
(3) Reset all the elements of {z_i1, z_i2, ..., z_ik} to zero.
(4) Pick another node at random and repeat the process (for time step t + 1). (The random selection of nodes is done with replacement, so that a node can be picked more than once.)
(5) Stop when only one node has a non-zero value assigned to it.

To give an economic value to the solution of this adding-up problem, it is necessary to specify a cost for each step and to assign a reward for solution of the problem. (If steps (1)-(5) could be performed costlessly, the optimal organization would simply be the completely connected digraph and the first iteration would yield the sum.) The cost function for the tth iteration (the iteration occurring at time t) is defined as

C_t = e^{k(t)}/(1 + r)^t    (4)

where e is the cost parameter, k(t) is the number of nodes seen by the node being sampled at the tth iteration, and r is the discount rate. The total cost of solving the problem is the sum of the C_t over all time periods. The payoff for solving the problem is B, so if the problem is solved at time t*, the present value for finding the solution is given by
Π_2 = B/(1 + r)^{t*} - Σ_{t=1}^{t*} C_t    (5)
There is no guarantee that any particular organization will eventually solve the associative problem, so we arbitrarily give the organizations in the simulations 100 time periods or iterations to succeed. 7 With organizations of size 8, this is a sufficient number of periods for the process described by steps (1)-(5) to solve the problem for most trials, provided the organizational network structure is capable of solving it. Because of the discounting, organizations that can solve the problem relatively quickly have an advantage relative to those that take longer. The organizational tension here is between a high degree of connectedness (which tends to lead to a quicker solution) versus the cost of each iteration (which goes up non-linearly with the number of other agents seen by the individual sampled). 8 The same kind of associative problem (viz., adding up a string of numbers) has been explored by Radner (1992, 1993), Van Zandt (1997, 1998), and Miller (1996), who examine tree-like structures for solving the problem. The primary difference between our setup and these is the rules of procedure by which the organization performs the task. In Radner's model, for example, processing takes place in parallel by all the agents in the organization, and optimality is achieved by minimizing the amount of "idle time" experienced by the members of the organization. In our model, processing takes place sequentially (with the sequence determined at random) and costs are incurred at each step depending on the computational "load," that is, how many numbers the particular individual selected has to add up at once. All of these models are highly stylized representations of an algorithm that will accomplish the task; our algorithm for solving the associative task is not better, just different. Our objective is to show how the structural characteristics of well-adapted organizations differ depending on the rules of procedure, not to design efficient procedures.
In the real world, the computational routines followed by organizations will be far more complex and multifaceted than the rules of any of these model organizations, but the simple models of the type we are examining can yield insights into the more general cases.
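The adding-up procedure of steps (1)-(5), together with the cost and payoff of equations (4) and (5), can be sketched as follows. The per-step cost is taken as e**k, one reading of equation (4); the values of B, e, and r are illustrative assumptions.

```python
import random

def associative_fitness(A, numbers, B=50.0, e=2.0, r=0.0, max_t=100, seed=0):
    """One run of the random sequential adding-up procedure (steps 1-5),
    returning Pi_2 of equation (5)."""
    rng = random.Random(seed)
    n = len(A)
    z = list(numbers)
    total_cost = 0.0
    for t in range(1, max_t + 1):
        i = rng.randrange(n)                       # step 1: sample a node
        seen = [j for j in range(n) if A[i][j]]
        total_cost += e ** len(seen) / (1 + r) ** t
        z[i] += sum(z[j] for j in seen)            # step 2: absorb the numbers
        for j in seen:                             # step 3: reset to zero
            z[j] = 0.0
        if sum(1 for v in z if v != 0.0) <= 1:     # step 5: one non-zero node left
            return B / (1 + r) ** t - total_cost   # equation (5)
    return -total_cost                             # unsolved within max_t

# A completely connected organization solves the problem in one step,
# but pays the full cost e**(n-1) for that step.
K4 = [[int(i != j) for j in range(4)] for i in range(4)]
print(associative_fitness(K4, [1.0, 2.0, 3.0, 4.0]))   # 50 - 2**3 = 42.0
```

The trade-off described in the text is visible here: denser graphs terminate sooner but pay exponentially more per iteration, so the best structure depends on the cost parameter e.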
PERFORMANCE DEPENDS ON STRUCTURE

To examine the ways in which organizational structure influences performance, we first generated a sample of small organizations (size 8) and regressed the profitability or "fitness" measures Π_1 and Π_2 on a variety of characteristics of the network structure. A separate regression was performed for each of a
number of values of the c parameter (corresponding to different information-processing capabilities of the agents) for the adoption task, and for various values of the e parameter (representing different levels of cost) for the associative task. Table 1 presents the network characteristics used as regressors, brief descriptions of these variables, and their abbreviated names. More detailed definitions can be found in Garey and Johnson (1979). The Combinatorica (Wolfram Research, 1996) add-on package for Mathematica (Wolfram, 1996) was used to compute the graph characteristics.

Table 1. Variable Names and Definitions.

avec: Average eccentricity. The eccentricity of a vertex is defined as follows: find the shortest path from the given vertex to each other vertex in the graph; the eccentricity of the vertex is the longest of these shortest paths. The average eccentricity for a graph is the average over the vertices of their eccentricities.
diam: Diameter. The diameter of a graph is the maximum eccentricity.
rad: Radius. The radius of a graph is the minimum eccentricity.
aved: Average number of edges. The number of other vertices each agent sees, averaged over all the vertices of the graph.
arvr: Number of articulation vertices. An articulation vertex of a graph G is a vertex whose deletion disconnects G.
brdg: Bridges. A bridge is an edge such that removing it increases the number of disconnected components in a graph.
edcn: Edge connectivity. The minimum number of edges whose deletion would disconnect the graph.
vrcn: Vertex connectivity. The minimum number of vertices whose deletion would disconnect the graph.
mxcl: Maximum clique. A clique is a subgraph of G that is completely connected. The maximum clique is the number of vertices in the largest clique.
mnvr: Minimum vertex cover. A vertex cover X of size K is a subset of vertices in G such that for each edge in G, at least one of its endpoints is in X. The minimum vertex cover is the number of vertices in the smallest vertex cover.

Sources: See text.

Some discussion of the population from which the sample of digraphs was drawn is required. It would be possible to pick a random sample from the population of all labeled digraphs of size 8 simply by allowing each possible edge to be present or absent with equal probability. (A labeled digraph affixes
a different "name" to each vertex.) Thus, if there are eight vertices, there would be 2 × C(8, 2) = 56 possible edges, and each digraph in the random sample would be constructed by giving each of these 56 possible edges a 0.5 probability of being present. This would yield 2^56 possible structures. However, the labeling of vertices has no intrinsic significance, and most graph characteristics are independent of any particular assignment of names to vertices. Two digraphs of the same size are isomorphic if one can be transformed to the other (so that they have the same adjacency matrix) simply by renaming the vertices while preserving adjacency. If the complete population of digraphs of size 8 is thought of as the population of non-isomorphic digraphs (that is, structures that are intrinsically different), then the population size is smaller than the 2^56 (≈ 7 × 10^16) labeled digraphs; it is known that there are approximately 2 × 10^12 non-isomorphic digraphs of size 8 (Wilson, 1985). If we were to sample from the population of labeled digraphs, a large proportion of the picks would be isomorphic, so that there would be little variation in the graph characteristics that differ only for non-isomorphic digraphs. This lack of variation in the independent variables would make it difficult to discern any effects of the graph characteristics on organizational performance. For this reason, we constructed what amounts to a stratified sample as follows. For each digraph in the sample, we first chose a random variable p from the uniform distribution over [0, 1]. Then for each of the 56 possible edges, another random variable q from [0, 1] was picked, and the edge was set to being present if q > p and absent if q < p. The effect of this stratification was to "spread out" the variety of digraphs in the sample. (No two digraphs can be isomorphic if they have a different number of edges.) It is still possible that some isomorphic digraphs would be selected using this procedure, but the fraction of isomorphisms would be smaller than if the edges had been inserted at random with equal probability for all digraphs picked. 9 This procedure was repeated until a sample of 2,000 digraphs was drawn. Then for each graph, the average fitness (profitability) measures Π_1 and Π_2 were computed over a series of 2,500 simulations. For the adoption task, the innovation was introduced to a randomly chosen vertex in each run. The fitness measures were averaged to control for random variation due to the initial site of the innovation and the realizations of f (in the case of the adoption model), or the randomness of the sequence of nodes chosen to perform the associative task. Next, the graph characteristics of Table 1 were computed for each of the digraphs. Finally, average fitness was regressed on the set of graph characteristics, including linear and quadratic terms in the regression. Thus, the regression equation estimated (with distinct observations denoted by the subscript s) was
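The two-stage stratified sampling scheme can be sketched as follows; the sample size matches the text, while the seed is an illustrative assumption.

```python
import random

def stratified_digraph(n, rng):
    """Draw one digraph by the two-stage procedure described in the text:
    first draw p ~ U[0, 1], then include each of the n*(n-1) possible
    directed edges when an independent q ~ U[0, 1] exceeds p."""
    p = rng.random()
    return [[1 if (i != j and rng.random() > p) else 0 for j in range(n)]
            for i in range(n)]

rng = random.Random(42)
sample = [stratified_digraph(8, rng) for _ in range(2000)]
edge_counts = [sum(map(sum, A)) for A in sample]

# Edge counts now range widely, from near-empty to near-complete digraphs,
# which is the point of the stratification: digraphs with different edge
# counts cannot be isomorphic.
print(min(edge_counts), max(edge_counts))
```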
y_s = β_0 + Σ_j β_j x_js + Σ_j θ_j x_js²    + u_s    (6)

where y_s is the average profitability of organization (graph) s, the β_j and θ_j are the coefficients of the independent variables x_j and their squares, and u_s is the error term. Equation (6) was estimated by ordinary least squares. It should be noted before presenting the results that two of the variables, maximum clique and minimum vertex cover, are NP-complete to compute. This means that the time required for any known algorithm to compute either of these two variables rises faster than any polynomial function of the number of vertices. 10 It is possible to calculate the maximum clique and minimum vertex cover values for the sample of digraphs we selected only because the digraphs were small. Even problems whose computation time increases exponentially with problem size can be solved in finite time, and the finite time computation can feasibly be implemented on existing hardware if the problem is small enough. Restricting ourselves to organizations of size 8 enables us to test for the effect of these "NP-complete variables" on the performance of the organizations. Two versions of the regressions were performed in addition to the results reported in Tables 2 and 3. Three of the variables whose coefficients are shown in these tables - the radius, diameter, and average eccentricity - can take on infinite values for randomly generated digraphs. (If there are two vertices in the digraph such that there is no path from one to the other, then the eccentricity of the "starting point" vertex is infinite.) The regression results reported in Tables 2 and 3 exclude all such digraphs, so the sample sizes are less than 2,000. We ran the same regressions without these three variables, with essentially the same results as those reported in Tables 2 and 3. Similar results were also obtained when the regressions were run for organizations of size 16.
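Equation (6) is an ordinary least-squares regression of fitness on each graph characteristic and its square. A minimal sketch with NumPy follows; the data here are synthetic, generated from an assumed quadratic relationship rather than taken from the chapter's digraph sample.

```python
import numpy as np

def fit_quadratic_ols(X, y):
    """Estimate equation (6) by OLS: regress y on an intercept, each
    characteristic, and its square. X has shape (observations, characteristics)."""
    X = np.asarray(X, dtype=float)
    design = np.column_stack([np.ones(len(X)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef  # [beta_0, beta_1..beta_J, theta_1..theta_J]

# Hypothetical data: fitness depends quadratically on two characteristics.
rng = np.random.default_rng(0)
X = rng.uniform(0, 4, size=(500, 2))
y = (6.4 - 1.3 * X[:, 0] + 0.2 * X[:, 0] ** 2 + 0.14 * X[:, 1]
     + rng.normal(0, 0.01, 500))
coef = fit_quadratic_ols(X, y)
```

With low noise, the fitted coefficients recover the assumed values, illustrating how the signs of the linear and quadratic terms in Tables 2 and 3 can be read as hill- or U-shaped effects of a characteristic on fitness.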
The first notable feature of Tables 2 and 3 is that the graph characteristics explain a very large fraction of the variance in profitability across the organizations in our sample. The adjusted R² is above 0.94 in all the regressions. Second, it is clear that the coefficients vary across the regressions. That is, the coefficients describing the influence of graph characteristics on performance differ depending on the task, and depending on the capabilities of the agents (as measured either by the agents' processing capacity c or by the cost parameter e). 11 For example, in the regression for associative fitness, the linear coefficient of average eccentricity is positive and statistically significant for the high-cost case (e = 10) but negative and significant for the low-cost case (e = 3). In the regressions with adoptive fitness as the dependent
Table 2. Regressions Explaining Adoptive Task Fitness, Various Processing Capability Parameters. Dependent Variable Π_1 of Equation (2).

c = 0.6 (R² = 0.962, N = 1235)

Explanatory Variable   Coefficient   Std. Error         t    P > |t|
avec                      -1.2812       0.1609      -7.961      0.000
avec2                      0.2031       0.0342       5.939      0.000
diam                       0.0482       0.0500       0.964      0.335
diam2                     -0.0200       0.0082      -2.428      0.015
rad                       -0.2936       0.0959      -3.061      0.002
rad2                       0.0964       0.0294       3.277      0.001
aved                       0.1386       0.0746       1.857      0.064
aved2                     -0.1108       0.0082     -13.510      0.000
arvr                      -0.0236       0.0768      -0.307      0.759
arvr2                     -0.0176       0.0317      -0.555      0.579
brdg                       0.0031       0.0799       0.039      0.969
brdg2                     -0.0030       0.0330      -0.092      0.927
edcn                       0.1410       0.0237       5.948      0.000
edcn2                     -0.0146       0.0034      -4.298      0.000
vrcn                       0.0281       0.0157       1.794      0.073
vrcn2                     -0.0023       0.0019      -1.250      0.211
mxcl                      -0.0797       0.0305      -2.613      0.009
mxcl2                      0.0061       0.0035       1.740      0.082
mnvr                       0.7950       0.1227       6.481      0.000
mnvr2                     -0.0600       0.0104      -5.718      0.000
constant                   6.4073       0.3627      17.667      0.000

c = 3 (R² = 0.995, N = 1235)

Explanatory Variable   Coefficient   Std. Error         t    P > |t|
avec                       0.0677       0.0104       6.511      0.000
avec2                     -0.0301       0.0022     -13.618      0.000
diam                       0.0185       0.0032       5.722      0.000
diam2                     -0.0041       0.0005      -7.768      0.000
rad                       -0.0155       0.0062      -2.510      0.012
rad2                       0.0079       0.0019       4.157      0.000
aved                       0.1907       0.0048      39.558      0.000
aved2                     -0.0111       0.0005     -21.005      0.000
arvr                      -0.0003       0.0050      -0.061      0.951
arvr2                     -0.0011       0.0020      -0.521      0.602
brdg                       0.0042       0.0052       0.817      0.414
brdg2                     -0.0021       0.0021      -1.001      0.317
edcn                      -0.0049       0.0015      -3.169      0.002
edcn2                      0.0008       0.0002       3.799      0.000
vrcn                       0.0004       0.0010       0.409      0.683
vrcn2                      0.00001      0.0001       0.094      0.925
mxcl                      -0.0107       0.0020      -5.439      0.000
mxcl2                      0.0014       0.0002       6.121      0.000
mnvr                      -0.0219       0.0079      -2.762      0.006
mnvr2                      0.0018       0.0007       2.721      0.007
constant                   6.5807       0.0234     280.957      0.000
S. J. DECANIO ET AL.

Table 3. Regressions Explaining Associative Task Fitness, Various Cost Parameters. Dependent Variable Πs of Equation (5).

e = 10, R² = 0.975, N = 1235 (Πs + 1000)

Explanatory    Coefficient    Std. Error          t     P > |t|
Variable
avec               10.3843        0.5183     20.037       0.000
avec2              -1.5985        0.1102    -14.512       0.000
diam               -2.1328        0.1611    -13.241       0.000
diam2               0.2413        0.0265      9.100       0.000
rad                 7.1397        0.3088     23.120       0.000
rad2               -1.3227        0.0948    -13.960       0.000
aved                2.5292        0.2404     10.521       0.000
aved2              -0.3510        0.0264    -13.287       0.000
arvr               -0.3103        0.2474     -1.254       0.210
arvr2               0.1941        0.1019      1.904       0.057
brdg                0.2483        0.2573      0.965       0.335
brdg2              -0.2233        0.1061     -2.104       0.036
edcn                0.2530        0.0763      3.316       0.001
edcn2              -0.0093        0.0110     -0.845       0.398
vrcn               -0.0268        0.0550     -0.531       0.596
vrcn2               0.0132        0.0060      2.186       0.029
mxcl               -0.5264        0.0983     -5.355       0.000
mxcl2               0.0596        0.0113      5.292       0.000
mnvr               -0.3134        0.3950     -0.793       0.428
mnvr2               0.0240        0.0338      0.711       0.477
constant          -23.6204        1.1679    -20.224       0.000

e = 3, R² = 0.940, N = 1235

avec               -1.0769        0.2528     -4.261       0.000
avec2               0.0099        0.0537      0.184       0.854
diam               -0.3489        0.0786     -4.441       0.000
diam2               0.0676        0.0129      5.228       0.000
rad                 1.0485        0.1506      6.961       0.000
rad2               -0.2749        0.0462     -5.949       0.000
aved                1.8301        0.1172     15.610       0.000
aved2              -0.1691        0.0129    -13.127       0.000
arvr               -0.0118        0.1207     -0.098       0.922
arvr2               0.0065        0.0497      0.131       0.896
brdg               -0.0030        0.1255     -0.024       0.981
brdg2              -0.0093        0.1518     -0.180       0.857
edcn                0.5836        0.0372     15.679       0.000
edcn2              -0.0507        0.0053     -9.483       0.000
vrcn               -0.0454        0.0246     -1.843       0.066
vrcn2               0.0053        0.0029      1.809       0.071
mxcl               -0.2763        0.0479     -5.763       0.000
mxcl2               0.0222        0.0055      4.035       0.000
mnvr               -0.2798        0.1926     -1.452       0.147
mnvr2               0.0263        0.0165      1.594       0.111
constant            2.8122        0.5696      4.937       0.000
variable, the coefficients of the linear terms for average eccentricity, edge connectivity, and minimum vertex cover all reverse signs from the case of c = 0.6 (low processing capacity) to the case of c = 3 (higher processing capacity). Next, it is clear that the NP-complete variables, maximum clique and minimum vertex cover, influence performance of both the adoptive and associative tasks. Most of the individual coefficients of the NP-complete variables and their squares are statistically significant in Tables 2 and 3, as well as in other regressions (not reported here) for different values of e and c. The linear hypotheses that the coefficients of the NP-complete variables were all zero could be rejected by an F-test in every regression we ran. These results are not the same thing as a proof that the problem of finding the optimally performing firm is NP-complete. Indeed, for some values of the parameters, the optimal structure for each of the tasks can be derived deductively. If e = 1 for the associative task, for example, the cost of each time step's operation is a constant no matter how many other agents are seen, so a completely connected organization will have the highest present value. (Similarly, for very high c in the adoptive task, the completely connected organization will be optimal.) For low c or high e, a sparse and/or hierarchical organization must be best (if e is high enough, the "do nothing" organizational form will minimize cost, because all structures that can solve the problem have even more negative net present values). The total number of possible organizational configurations is finite, so for intermediate values of c or e, some degree of connectedness between complete and sparse will work best.
There does not appear to be any continuous mapping of the alternative graph structures onto fitness values, however; the regression results show that graphs with the same degree of "connectedness" can have very different performance (otherwise, the only variable that would show up with a statistically significant coefficient in the regressions would be the average number of edges).¹² Finding that optimum may be computationally complex. We conjecture that both tasks are NP-complete or harder for some range of parameter values, but we do not know how to delimit those ranges. It appears that the computational complexity of both tasks exhibits phase transitions depending on the values of the c and e parameters, much as other NP-complete problems show phase transitions depending on the value of an "order parameter" (Cheeseman, Kanefsky & Taylor, 1991).¹³ We do know that brute-force search methods will not solve the optimization problem in polynomial time, because the number of non-isomorphic digraphs grows faster than polynomially in the number of members of the organization. An asymptotic approximation for the number of non-isomorphic digraphs is
known from published and unpublished work of Pólya (see Harary and Palmer, 1973, Chapter 9). In particular, the number d_n of digraphs with n vertices satisfies

d_n = [2^{n(n-1)}/n!][1 + 4n(n-1)/2^{2n} + O(n^3/2^{3n})]    (7)

The terms in the second pair of brackets on the right side of (7) are additive, so it is necessary only to show that the part arising from the first one grows faster than polynomially. Applying Stirling's formula to the n! in the denominator of (7), one sees that

d_n ≈ (1/√(2π)) 2^{n(n-1) - (n + 1/2) log₂ n} e^n    (8)
from which it is clear that d_n grows faster than polynomially in n. The substantive economic point is that there may be no practical way to find the "best" organizational form. The performance of any particular structure depends on the task the organization is trying to perform, the cost of its computational procedures, and the processing capabilities of the agents comprising it. In addition, it does not appear to be possible to approximate an optimal solution by continuous techniques. It may be that a structure well-suited to carry out either task is one in which the organization is segmented into subgroups that are connected by a small number of edges. Removal of these connections would split the organization into non-communicating groups, making it impossible to adopt the innovation fully or to solve the associative problem at all. In the regressions, the variables measuring edge connectivity and vertex connectivity are statistically significant for some values of c and e. The problem of designing the optimal organizational structure is intrinsically a "discrete" problem and, as such, its solution may be subject to the limits imposed by computational complexity.¹⁴ Exogenous changes in the environment (or new demands on the firm) also will change the optimal structure. Furthermore, if the firm is required to perform multiple tasks, the structure that emerges may not be ideally suited for any of the tasks individually. If a firm is focused on its performance of the associative task, it may adopt a structure that is less suited for adopting profitable innovations. These results all follow from the extremely simplified and stylized computations expressed in our model; it can safely be inferred that the same considerations will extend to real-world organizations of much larger size, which are faced with far more difficult tasks, and which have potential structures of great variety.¹⁵
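The growth in (7)-(8) can be checked numerically. A small sketch follows; the exact count of 1,793,359,192,848 non-isomorphic digraphs on 8 vertices is taken from the enumeration literature and is used here as an assumed reference value.

```python
import math

def digraph_count_approx(n):
    """Approximate number of non-isomorphic digraphs on n vertices,
    keeping the first two terms of the bracketed series in Eq. (7)."""
    leading = 2 ** (n * (n - 1)) / math.factorial(n)
    correction = 1 + 4 * n * (n - 1) / 2 ** (2 * n)
    return leading * correction
```

For n = 8 this gives roughly 1.79 × 10^12 distinct structures, so even at the modest organization size used here, enumerating every digraph is out of the question.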
FINDING BETTER ORGANIZATIONAL STRUCTURES

If the problem of optimizing organizational structure is indeed NP-complete or harder, then no known algorithm can guarantee finding the structure with maximal fitness in polynomial time (as a function of organization size), and the actual processes of organizational change can only be expected to lead to improvements, not a global optimum. In the real world, both selection pressures and conscious efforts to restructure organizations contribute to the evolution of performance. Without claiming that we can describe the historical course of organizational change, it is possible to specify computable methods that produce organizations with improved fitness within our model framework. These search heuristics are sometimes called "evolutionary algorithms," although it should be kept in mind that they are not meant to describe the actual mechanisms of organizational evolution.¹⁶ We employ both a Hill Climbing Algorithm (HCA)¹⁷ and a Genetic Algorithm (GA)¹⁸ to compute the results reported below for organizations of size 8.
A. Search by a Hill-Climbing Algorithm

Hill-climbing algorithms can be set up in various ways. We selected a particularly direct method. Starting with a random graph (constructed with each potential edge being "on" or "off"), we simply picked an edge at random, changed it from its current status, and then compared the fitness of the new graph to that of the previous one. If the average fitness of the new graph over a large number of trials was greater than the fitness of the original graph, the new graph was substituted, and the random change of an edge was repeated. This process was continued for a specified number of "generations," and the graph remaining after that process was considered to be the product of that particular run of the HCA. In order to determine whether this search process was effective, the entire algorithm was repeated, thereby generating a sample of "adapted" graphs. The characteristics of graphs in this sample were then compared to the graph characteristics of the original random graphs. The results of these comparisons are presented in Table 4. Several inferences can be drawn. Consider first the associative task: for all three cost parameters represented, the graphs resulting from 100 steps of the hill climb show clear differences from the sample of random graphs representing the starting points. The low value of the cost parameter (e = 1) corresponds to the case in which we know that the optimal graph is completely connected, and indeed the search process moves in that direction. The average
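The HCA loop just described can be sketched schematically as follows (an illustrative reimplementation, not the authors' code; the `fitness` argument stands in for the trial-averaged task fitness of the model):

```python
import random

def hill_climb(fitness, n=8, generations=100, seed=None):
    """One run of the HCA: start from a random digraph, flip one
    randomly chosen potential edge per generation, and keep the
    change only if fitness improves.

    `fitness` is any user-supplied function mapping an n x n 0/1
    adjacency matrix (no self-loops) to a number."""
    if seed is not None:
        random.seed(seed)
    # random starting digraph: each potential edge independently on or off
    adj = [[random.randint(0, 1) if i != j else 0 for j in range(n)]
           for i in range(n)]
    current = fitness(adj)
    for _ in range(generations):
        i, j = random.sample(range(n), 2)   # a random ordered pair, no loops
        adj[i][j] ^= 1                      # flip the edge's status
        candidate = fitness(adj)
        if candidate > current:
            current = candidate             # keep the improvement
        else:
            adj[i][j] ^= 1                  # revert the flip
    return adj, current
```

Because only one edge changes per step, the procedure can only climb to a local maximum of whatever fitness function it is given, which is exactly the limitation discussed below.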
number of edges in the adapted sample equals 5.84, vs. 3.58 in the initial random sample. It is interesting to note, however, that even after 100 generations, the HCA has not (on average) reached the optimal completely connected graph, which has an average number of edges equal to 7.

Table 4.
Comparison of Sample Statistics for Random and Adapted Populations, Hill Climbing Algorithm.

For each panel, the columns give the random population's sample mean and sample standard deviation, the adapted population's sample mean and sample standard deviation, and the Z-statistic for H0: μR = μA. The associative task panels (e = 1, 3, 10) are paired with the adoptive task panels (c = 9, 3, 0.6).

Associative task, e = 1:

         Random Population      Adapted Population     Z for H0: μR = μA
         Mean     Std. Dev.     Mean     Std. Dev.
arvr     0.55     0.87          0.02     0.14            6.01**
aved     3.58     1.98          5.84     0.58          -11.0**
brdg     1.10     2.04          0.04     0.28            5.15**
mxcl     2.93     1.49          4.60     1.07           -9.10**
mnvr     5.14     1.58          6.48     0.56           -7.99**
edcn     2.22     1.96          4.66     1.09          -10.9**
vrcn     2.26     2.12          4.44     1.39           -8.60**

Remaining panels, with each seven-value line giving the statistics for arvr, aved, brdg, mxcl, mnvr, edcn, and vrcn, in that order:

0.54  3.74  1.10  3.16  5.15  2.35  2.54

e = 3:
0.58  3.36  1.26  2.91  4.87  1.93  2.09
0.89  2.16  2.02  1.66  1.86  2.07  2.18
0.61  3.42  1.62  2.97  4.85  2.06  2.25
0.90  2.22  2.14  1.91  1.79  2.24  2.37

0.35  4.00  0.10  2.88  4.91  3.29  2.96
0.59  2.03  0.82  1.96  4.22  1.41  1.09
0.96  2.11  2.21  1.79  1.72  2.15  2.31
0.00  6.45  0.00  5.80  6.82  5.35  5.73
0.00  0.41  0.00  1.23  0.39  1.06  1.43
5.63**  -12.6**  4.98**  -12.2**  -9.47**  -12.5**  -11.7**
0.00  6.41  0.00  5.77  6.81  5.42  5.89
0.00  0.46  0.00  1.36  0.39  1.04  1.43
7.01**  -13.5**  6.56**  -17.3**  -11.0**  -13.9**  -13.7**
0.26  2.29  0.44  1.93  4.99  1.27  1.01
0.46  0.29  0.83  0.29  0.36  0.45  0.66
2.92**  6.34**  3.58**  6.00**  1.06  4.08**  5.14**

c = 3:
0.76  2.11  0.44  1.49  2.42  1.85  1.76
1.97*  -2.12*  5.61**  0.13  -0.13  4.90**  -3.11**
0.68  3.55  1.62  3.13  5.12  2.16  2.17

e = 10:

c = 9:
0.97  2.07  2.47  0.69  1.49  2.11  2.31

c = 0.6:
0.82  1.46  1.78  1.47  1.93  1.49  1.73
0.16  5.23*  2.87**  4.19**  2.39*  2.42*  3.95**
0.56  3.54  1.40  2.97  5.15  2.13  2.18
0.92  1.95  2.55  1.71  1.46  2.06  2.18

* Probability-value < 0.05 under H0 of no difference in population means.
** Probability-value < 0.01 under H0 of no difference in population means.
It is clear, however, that the HCA search process tends to move in the right direction. For the low value of e, the organizations evolve towards greater connectedness, while for the high-cost value of e, evolution moves towards sparser connectedness. But more is involved in determining organizational fitness than the average number of edges. The other graph characteristics listed in Table 4 are also different in the adapted samples than in the original samples of randomly constructed graphs. The signs of the Z-statistics corresponding to the tests of equality of means are almost always opposite when e = 1 and when e = 10, especially in the cases in which the difference in means between the original and final populations is statistically significant. An exception is that in all cases, the number of bridges falls in the adapted population. (This is likely to be due to the fact that the brdg variable is computed as if the graphs were undirected. See footnote 11.) Graphs with improved performance are characterized by features other than simply the average number of edges. Except for two cases (both involving the minimum vertex cover), the variances of the characteristics of the adapted graphs are lower than the variances of the characteristics of the randomly selected initial graphs. This indicates not only that the adapted graphs are different from the initial population, but also that the "good" organizational forms produced by the HCA are more alike than those in the general population. The same sort of results holds true for the adoptive task. When the agents have high processing capacity (c = 9), the search process finds structures of greater connectivity (an increase in average edges), and when agents' processing capacity is poor (c = 0.6), adaptation proceeds in the opposite direction. In this case, however, the number of articulation vertices and bridges moves in the same direction for the c = 9 case and the c = 0.6 case.
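The Z-statistics in Table 4 are ordinary two-sample tests of the equality of means. A sketch of the computation (the sign convention, μR − μA in the numerator, and samples of 100 graphs per population are assumptions consistent with the reported values):

```python
import math

def z_two_sample(mean_r, sd_r, mean_a, sd_a, n_r=100, n_a=100):
    """Z-statistic for H0: mu_R = mu_A, from sample means and std. devs."""
    se = math.sqrt(sd_r ** 2 / n_r + sd_a ** 2 / n_a)
    return (mean_r - mean_a) / se
```

Plugging in the e = 1 values for aved quoted in the text (random mean 3.58, adapted mean 5.84, with the tabulated standard deviations 1.98 and 0.58) reproduces the reported Z of about -11.0.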
For most parameter values in our model, it seems unlikely that the HCA will be able to find a globally optimal structure. In models exhibiting rugged fitness landscapes, the HCA is quite likely to get stuck on a local maximum. The search algorithm can only switch one edge "on" or "off" at a time, and hence will never be able to make a "jump" to a higher local maximum that differs in structure by more than two edges. Thus, this search procedure can in general only be expected to find structural improvements, not the globally optimal organizational form.
B. Search by a Genetic Algorithm

Genetic algorithms operate in a manner that imitates sexual reproduction in biological populations. In addition to allowing for the possibility of random mutations (which are akin to the bit flips occurring in the HCA), a GA allows
entire sets of genes to be exchanged between members of a population, so that offspring are created having a mixture of the genes of their parents. If the fitter members of the population are differentially selected for reproduction, it is possible to search over a wider range of the space of genotypes than can be reached by relying on random mutation alone. GAs are preferred to HCAs in situations where non-local jumps are required to reach higher points on the fitness landscape (Goldberg, 1989). In our model, the structure of a particular organization is naturally encoded by specifying its chromosome simply as the row vector obtained by concatenating the successive rows of its adjacency matrix. Fitness is calculated as in Section II above, depending on whether the adoption problem or the associative problem is being solved. We used a GA with settings for reproduction, crossover, and mutation in the range of magnitudes suggested by Grefenstette (1986). The GA behaved normally, with rapid improvement in the early generations followed by a leveling off of both the population average fitness and the fitness of the best-performing organization. Organizational innovations after the early rapid increases in fitness appear as discrete "jumps" in fitness levels that are otherwise largely flat over time. We applied the GA repeatedly for each of the values of the cost or processing capacity parameters that were used in Table 4. Each GA run was begun with a size-100 "uniformly distributed" stratified random population of the type described in Section III. In each run, we let the GA operate for 100 generations, then selected the "champion" structure with the highest fitness as the outcome of that search. This procedure was carried out 100 times for each separate value of c or e, to obtain a population of 100 champion organizations for each of the parameter values. We then computed statistics on the fitness scores of the members of the adapted populations of champions.
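A minimal GA of this kind might look as follows (a sketch only: the encoding follows the chapter - a chromosome is the concatenation of the off-diagonal rows of the adjacency matrix - but the roulette selection scheme and the parameter defaults are illustrative, not the exact Grefenstette (1986) settings):

```python
import random

def genetic_search(fitness, n=8, pop_size=100, generations=100,
                   crossover_rate=0.6, mutation_rate=0.001, seed=None):
    """Sketch of a GA over organizational structures: chromosomes are
    bit strings of length n*(n-1) (one bit per potential directed edge,
    self-loops excluded), with fitness-proportional selection,
    one-point crossover, and bit-flip mutation."""
    if seed is not None:
        random.seed(seed)
    length = n * (n - 1)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def select_parents(scored):
        # fitness-proportional (roulette) selection; shift so weights > 0
        lo = min(f for _, f in scored)
        weights = [f - lo + 1e-9 for _, f in scored]
        return random.choices([c for c, _ in scored], weights=weights, k=2)

    champion, champ_fit = None, float("-inf")
    for _ in range(generations):
        scored = [(c, fitness(c)) for c in pop]
        for c, f in scored:
            if f > champ_fit:                 # track the best structure seen
                champion, champ_fit = c[:], f
        next_pop = []
        while len(next_pop) < pop_size:
            a, b = select_parents(scored)
            child = a[:]
            if random.random() < crossover_rate:
                cut = random.randrange(1, length)
                child = a[:cut] + b[cut:]     # one-point crossover
            for i in range(length):
                if random.random() < mutation_rate:
                    child[i] ^= 1             # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    return champion, champ_fit
```

A call such as `genetic_search(fitness, n=8, pop_size=100, generations=100)` mirrors the experiment described in the text: 100 generations on a size-100 population, returning the champion structure found along the way.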
Table 5 compares the fitness statistics for the original and adapted populations for both search procedures. Several features of these results are interesting. First, application of either search algorithm increases population fitness considerably. For example, for low processing capability in the adoptive task (c = 0.6), the GA increases average population fitness by 53%, while the HCA increases average population fitness by 45%. Second, both the GA and the HCA "converge" in the sense that the variance in fitness of the adapted populations is much smaller than the variance of the original populations. Third, the search algorithms are in some cases able to find the structure yielding the theoretical maximum present value of the firm. For example, the minimum number of steps for complete adoption of the profitable innovation is 2, with an associated present value of 7.3636. For c = 3 and c = 9, the best
organizations are at or quite near this maximum value after 100 generations, although the population averages fall slightly short of this maximum. In the case of c = 0.6, the best of the GA champions is 7.2% below the theoretical maximum and the average of the champions is 7.5% below the theoretical maximum. The best organization from 100 runs of the HCA is 7.5% below the theoretical optimum, while the population average is 8.6% below the optimum. For the associative task, the theoretical maximum present value if e = 1 is 9,090, which is achieved by the best organizations in both the random and adapted populations. However, the theoretical optimum depends on the value of the cost parameter if e > 1. This optimum is not known exactly, but Table 5 shows that, for example, in the case of the high cost parameter (e = 10) the GA
Table 5. Fitness Statistics for Random and Adapted Populations.

Associative Task
                     Genetic Algorithm          Hill Climbing Algorithm
Fitness Statistic    Gen. 1       Gen. 100      Gen. 1       Gen. 100

e = 1
mean                 4,563        9,079         4,801        8,357
median               5,370        9,090         5,926        8,394
std. dev.            3,317        37.26         3,256        420
max                  9,090        9,090         9,090        9,090
min                  -9,274       8,851         -9,274       7,283

e = 3
mean                 3,760        7,083         3,496        5,354
median               4,750        7,103         4,428        6,750
std. dev.            2,680        39.93         2,825        2,785
max                  7,103        7,103         7,103        7,148
min                  -890         6,923         -165.7       -16.76

e = 10
mean                 -2,133,431   2,063         -2,368,339   -453,780
median               -166,244     2,204         -102,130     624
std. dev.            3,056,876    600           3,209,361    1,989,975
max                  1,242        2,898         -9           2,232
min                  -9,130,480   -31.85        -9,093,000   -9,081,820

Adoptive Task
                     Genetic Algorithm          Hill Climbing Algorithm
Fitness Statistic    Gen. 1       Gen. 100      Gen. 1       Gen. 100

c = 9
mean                 5.8202       7.3635        5.9358       7.3182
median               7.0351       7.3636        7.0620       7.3220
std. dev.            2.0737       0.000045      2.0678       0.0341
max                  7.3636       7.3636        7.3636       7.3636
min                  1.0000       7.3634        1.0000       7.2083

c = 3
mean                 5.7755       7.3635        5.8573       7.3146
median               7.0274       7.3635        7.0090       7.3219
std. dev.            2.1102       0.000055      1.9380       0.0380
max                  7.3636       7.3636        7.3635       7.3635
min                  1.0000       7.3634        1.0000       7.1647

c = 0.6
mean                 4.4496       6.8105        4.6316       6.7289
median               4.8021       6.8122        4.7535       6.7345
std. dev.            1.5043       0.0123        1.3370       0.0569
max                  6.7105       6.8346        6.4944       6.8109
min                  1.0000       6.7762        1.0000       6.4998
champions are consistently able to show a positive present value, despite the fact that the average fitness for the initial set of populations is quite negative. The substantive economic point is that adapted populations will in general show a range of fitness values, even if selection pressure has been operating for a considerable period of time in a stable environment. Not all or even many of the surviving firms achieve the highest attainable level of fitness (nor is there any reason to expect them to). The GA champions seem to generally have a higher average level of fitness than the population of HCA survivors (although the best HCA result for the associative task with e = 3 was better than any of the GA champions), but this has no clear economic meaning because much more computation is involved in the GA searches than in application of the HCA. (The total "initial population" for all the GA runs for one of the parameter values consists of 10,000 organizations, while the HCA initial population has only 100.) The performance of the GA relative to the HCA may be a reflection of the fact that, as in biological populations, diversity of genotypes provides resiliency and the "genetic raw material" for a variety of new organizational forms that may have fitness advantages.
CONCLUSIONS AND POLICY IMPLICATIONS

Organizations exhibit a degree of complexity that is not incorporated into standard economic models. Recognizing the network structure of communication within a firm, and requiring that performance of the firm's productive tasks be carried out by well-specified computational algorithms, projects the theory of the firm into the realm of combinatorial optimization and computational complexity. This conclusion is important in itself, but it also raises a set of issues related to the way economic phenomena are conceptualized. Economic theorizing and policy analysis are informed by what characteristics one does or does not expect idealized model firms to exhibit. In the conventional neoclassical approach, firms are expected to maximize profits subject to their market and technological constraints. The firms' productive configurations are presumed to be efficient in the sense that one kind of output can be increased only if another is reduced. Firms should be identical unless specific differences in endowments or access to technology can be identified. In contrast, models whose firms' optimal organizational structures are too complex to be determined exactly will have different properties. In such models, the heterogeneity of firms and measurable differences in their performance are a natural consequence of whatever imperfect search heuristics are used to improve performance. Organizational structure will influence profitability, sometimes in non-intuitive ways. Path dependence will be the
rule, as firms' computational algorithms are built up from components that have been developed and tested in past circumstances. As in the case of biological organisms, selection, not optimization, would be the driving force behind the emergence of order in this view. Selection (at least in market economies) creates powerful pressures for improvements in organizational performance, but even if optimal network structures could be found for coping with the economic environment at a particular moment in time, that environment shifts so rapidly that the useful adaptations of yesterday can become liabilities today. From this perspective, it is possible to make theoretical sense of phenomena such as the "efficiency paradox" (DeCanio, 1998a, and the literature cited therein; DeCanio & Watkins, 1998a) or the organizational change required to realize the productivity benefits of computerization (Brynjolfsson & Hitt, 1996). If firms are faced with multiple tasks that are computationally complex, there is no reason to expect that a perfectly profit-maximizing set of investments (in energy efficiency or any other technologies) will be made. Giving greater explanatory weight to the structural characteristics of organizations also has policy implications. The efficiency paradox is important because it matters a great deal whether, for example, substantial reductions in greenhouse gas emissions can be accomplished without large cost (in terms of conventionally measured output) to the economy. Charging a price for the emissions through a carbon tax or emissions permits would internalize the greenhouse externality, but might do so only at the expense of a reduction in profits or income as conventionally measured. But what if maximization of profits or utility with respect to energy usage or technology choices is not to be expected? In that case, a whole range of unconventional policy options to reduce greenhouse gas emissions could become feasible and attractive.
Policies that would influence the network structures of firms and markets, or that would induce jumps from the current points on the firms' fitness landscapes (possibly at local optima) to other local optima (perhaps ones that are more energy efficient), might be possible. Moving to a local optimum that is environmentally more benign may have no impact, or even a positive impact, on the other dimensions of a firm's performance, as suggested by the Porter Hypothesis (Porter, 1990, 1991; Porter & van der Linde, 1995a, 1995b). Such possibilities remain conjectural. What has been demonstrated is that the introduction of network structure adds a relatively unexplored layer of complexity to the theory of firms and other economic organizations. Search heuristics that are beginning to see widespread application in economics and other fields offer one promising approach to examining how the performance of complex organizational structures might be improved. Further characterization
of the properties of more or less efficient structural forms may be possible through application of measures that have been developed in mathematical graph theory and the study of networks. But even if better static descriptions of the functioning of organizations can be developed with such methods, an important area for further research is to develop algorithms that model the actual processes of organizational evolution and change.
ACKNOWLEDGMENTS

The authors are current or former members of the Computational Laboratories Group, University of California, Santa Barbara. The research was supported in part by a grant from the United States Environmental Protection Agency, and time on the Cray T3E supercomputer at the Lawrence Berkeley National Laboratory was provided by the National Energy Research Scientific Computing Center (NERSC), funded by the U.S. Department of Energy, Office of Energy Research. We acknowledge helpful comments from Jeffrey Williams, Skip Laitner, John Gliedman, participants in seminars at the Energy and Resources Group at the University of California, Berkeley, the Lawrence Berkeley National Laboratory, the Environmental Sciences Department at the University of California, Santa Cruz, and the Tellus Institute, Boston, and from an anonymous referee. The views expressed in the chapter are those of the authors alone, and are not those of the University of California, the U.S. Environmental Protection Agency, the U.S. Department of Energy, the UCSB Forecasting Project, Economic Analysis LLC, or the University of Maryland.
NOTES

1. We will use the "firm" as the typical example of an economic organization. Firms are central to the study of market economies, and profit maximization is the archetype of the traditional economic approach. Nevertheless, it should be kept in mind that our approach carries over to other human organizations (government bureaucracies, nonprofits, etc.) with modification only of the criteria by which the organizations survive and evolve. 2. The specialized literature on the theory of the firm goes beyond the simple specification and maximization of an objective function for the firm, and focuses on principal-agent problems and other manifestations of the non-unitary nature of organizations (Williamson, 1985; Alchian & Woodward, 1988). Considerations of this type are not our central concern, however, because they do not involve modeling the network structure of the firm. 3. The associative task is akin to production in which a number of parts have to be brought together to produce a finished whole but in which the assembly can take place in any order, hence the task is "associative."
4. The type of model to be developed here was introduced by DeCanio and Watkins (1998b) with a fitness function based on the time taken for an organization to adopt an innovation completely. Hägerstrand (1967) developed a Monte Carlo agent-based model of innovation diffusion within a population. In that model, diffusion was a stochastic process structured by the a priori spatial locations of agents. We use a similar approach to evaluate network structures as a function of their effective diffusion properties. 5. For reasons that will become apparent below, we rule out (vi, vi) "loops" in which agents see themselves. 6. For a more extensive discussion of this probability function and its properties, see DeCanio and Watkins (1998b). 7. Indeed, for high enough values of the cost parameter relative to the reward, it can be optimal for the organization to do nothing. If there are no connections (so that nothing ever happens), the problem will never be solved, but cost will be low. This situation might be characterized as the "procrastination equilibrium." 8. To facilitate comparisons, we set a time limit of 100 iterations for organizations to solve the adoption of technology problem as well. Discounting the rewards for individuals' adoption of the technology provides the positive incentive for early adoption in the solution of that task, as it does in the case of the associative problem. 9. This procedure is similar to what might be done if one were interested in assessing the influence of people's heights on some personal characteristic such as income. Rather than selecting individuals at random from the population (which would yield a high proportion of individuals close to the mean height) and regressing income on height, one would select a stratified sample having equal numbers of individuals of each different height (or height interval). This would increase the variation in the explanatory variable and result in a better test of the null hypothesis. 10.
For a full discussion of NP-completeness and computational complexity, see Garey and Johnson (1979) and Papadimitriou (1995). The decision problem of determining whether a particular graph has a clique larger than a particular number (of vertices) is known to be NP-complete, and for problems of this type (in which the cost function is relatively easy to evaluate), "the decision problem can be no harder than the corresponding optimization problem" (Garey & Johnson, 1979, p. 19). Whether polynomial-time algorithms will some day be found that solve problems in the NP-complete class is considered to be one of the most important open questions in theoretical computer science. 11. Note that Combinatorica computes arvr and brdg for undirected graphs only. The software assumes that every edge is undirected when computing these two variables. Thus, it is not surprising that these variables do not perform well in the regressions. 12. Recall that in all the models we are considering, the agents making up the organization are identical. This imposes a great deal of symmetry on the problem that is unlikely to correspond to real-world situations. The presence of explanatory variables other than average edges is indicative of the existence of deeper structural determinants of performance. Comparison runs not reported here also show that simple connected graphs (such as the hypercube or k-th order rings, in which each agent i sees agents i + 1, i + 2, . . . , i + k) are not optimal performers for ranges of values of the cost and processing power parameters. 13. There is a growing body of literature that finds problems of computational complexity at the heart of standard economic models. See, for example, Deng and
S.J. DECANIO ET AL.
Papadimitriou (1994) for the solution concepts of game theory, Papadimitriou and Tsitsiklis (1986) for problems in control theory, Spear (1989) for rational expectations, and Rust (1997) and DeCanio (1999) for general surveys. 14. Combinatorial optimization problems are not the only ones subject to the limits of computational complexity. In addition to the paper by Papadimitriou and Tsitsiklis (1986) cited above, see also Ko (1991). 15. Only the simplest forms of communication and information processing are represented. In the real world, additional layers of complexity are present. For example, instead of the network linkages being simply "on" or "off" (and represented as 0's or 1's), real linkages could be functions with the range [0, 1], where the value of the function indicates the frequency or intensity of communication. 16. Biological or evolutionary descriptions of economic dynamics have been offered by Alchian (1950), Rothschild (1990), and perhaps most extensively by Nelson and Winter (1982) and their followers. De Vany (1997) describes a model of evolution leading to emergent order that has similarities to ours. It is worth noting that the biological/evolutionary metaphor for economic life has long been an alternative to the rational choice models that are most frequently invoked by economic theory. Krugman (1996) provides a very nice discussion of the similarities between conventional economic models and biological models. 17. Simulated Annealing, an algorithm having some similarities to both Hill Climbing and Genetic Algorithms, is not used in this paper. For a recent application of Simulated Annealing to a question of political economy, see Kollman, Miller and Page (1997). 18. See Goldberg (1989) and Holland (1992) for an introduction to Genetic Algorithms.
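The mechanics sketched in notes 4-8 can be illustrated with a toy simulation. The sketch below is ours, not the authors' code: the function name and all parameters (p, reward, edge_cost, discount) are hypothetical stand-ins. Agent 0 starts as the adopter, adoption spreads stochastically along directed "sees" links, and fitness is the discounted adoption reward net of a per-edge cost, so an organization with no connections earns exactly zero, echoing the "procrastination equilibrium" of note 7.

```python
import random

def diffusion_fitness(edges, n, p=0.5, reward=1.0, edge_cost=0.05,
                      discount=0.95, horizon=100, seed=0):
    """Toy fitness: discounted adoption rewards minus a per-edge cost.

    edges: set of directed (i, j) pairs meaning agent i 'sees' agent j.
    Agent 0 starts as the adopter; self-loops are excluded (note 5).
    """
    rng = random.Random(seed)
    adopted = {0}
    fitness = -edge_cost * len(edges)
    for t in range(horizon):  # 100-iteration limit, as in note 8
        newly = set()
        for i in range(n):
            if i in adopted:
                continue
            # i may adopt only if some agent it sees has already adopted
            if any((i, j) in edges and j in adopted for j in range(n)):
                if rng.random() < p:
                    newly.add(i)
        adopted |= newly
        fitness += discount ** t * reward * len(newly)
        if len(adopted) == n:
            break
    return fitness

# A 4-agent ring (each agent sees its successor) vs. no connections:
ring = {(i, (i + 1) % 4) for i in range(4)}
print(diffusion_fitness(ring, 4) > diffusion_fitness(set(), 4))  # → True
```

At these illustrative parameter values, connectivity pays for itself: the ring's discounted adoption rewards exceed its edge costs, while the empty network never solves the problem.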
REFERENCES
Alchian, A. A. (1950). Uncertainty, Evolution, and Economic Theory. Journal of Political Economy, 58(3), June, 211-221.
Alchian, A. A., & Woodward, S. (1988). The Firm is Dead; Long Live the Firm: A Review of Oliver E. Williamson's The Economic Institutions of Capitalism. Journal of Economic Literature, 26(1), March, 65-79.
Brynjolfsson, E., & Hitt, L. (1996). Paradox Lost? Firm-level Evidence on the Returns to Information Systems Spending. Management Science, 42(4), April, 541-558.
Buckles, B. P., & Petry, F. E. (1992). Genetic Algorithms. Los Alamitos, CA: IEEE Computer Society Press.
Cheeseman, P., Kanefsky, B., & Taylor, W. M. (1991). Where the Really Hard Problems Are. In: J. Mylopoulos & R. Reiter (Eds), Proceedings of IJCAI-91. San Mateo, CA: Morgan Kaufmann.
DeCanio, S. J. (1997). Economic Modeling and the False Tradeoff Between Environmental Protection and Economic Growth. Contemporary Economic Policy, 15(4), October, 10-27.
DeCanio, S. J. (1998a). The Efficiency Paradox: Bureaucratic and Organizational Barriers to Profitable Energy-Saving Investments. Energy Policy, 26(5), April, 441-454.
DeCanio, S. J. (1999). Estimating the Non-Environmental Consequences of Greenhouse Gas Reductions is Harder Than You Think. Contemporary Economic Policy, 17(3), 279-295.
Complexity in Organizations
DeCanio, S. J., & Watkins, W. E. (1998a). Investment in Energy Efficiency: Do the Characteristics of Firms Matter? The Review of Economics and Statistics, 80(1), 1-13.
DeCanio, S. J., & Watkins, W. E. (1998b). Information Processing and Organizational Structure. Journal of Economic Behavior and Organization, 36(3), August, 275-294.
De Vany, A. (1997). Information, Chance, and Evolution. In: J. R. Lott, Jr. (Ed.), Uncertainty and Economic Evolution: Essays in Honor of Armen A. Alchian. London: Routledge.
Garey, M. R., & Johnson, D. S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman and Company.
Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley.
Grefenstette, J. J. (1986). Optimization of Control Parameters for Genetic Algorithms. IEEE Transactions on Systems, Man, and Cybernetics, 16(1), 122-128. Reprinted in Buckles and Petry (1992).
Hägerstrand, T. (1967). Innovation Diffusion as a Spatial Process. Chicago: University of Chicago Press.
Harary, F. (1955). The number of linear, directed, rooted, and connected graphs. Transactions of the American Mathematical Society, 78(2), March, 445-463.
Harary, F., & Palmer, E. M. (1973). Graphical Enumeration. New York: Academic Press.
Holland, J. H. (1992). Adaptation in Natural and Artificial Systems. Cambridge, MA: MIT Press.
Kauffman, S. A. (1995). At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. New York: Oxford University Press.
Ko, K.-I. (1991). Complexity Theory of Real Functions. Boston: Birkhäuser.
Kollman, K., Miller, J. H., & Page, S. E. (1997). Political Institutions and Sorting in a Tiebout Model. American Economic Review, 87(5), December, 977-992.
Krugman, P. (1996). What Economists Can Learn from Evolutionary Theorists. Talk given to the European Association for Evolutionary Political Economy.
Available at http://web.mit.edu/krugman/www/evolute.html.
Miller, J. H. (1996). Evolving Information Processing Organizations. Manuscript, Department of Decision and Social Sciences, Carnegie Mellon University.
Nelson, R. R., & Winter, S. G. (1982). An Evolutionary Theory of Economic Change. Cambridge, MA: Harvard University Press.
Olson, M. (1971). The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge: Harvard University Press.
Papadimitriou, C. H. (1995). Computational Complexity. Reading, MA: Addison-Wesley Publishing Company.
Papadimitriou, C. H., & Deng, X. (1994). On the Complexity of Cooperative Solution Concepts. Mathematics of Operations Research, 19(2), May, 257-266.
Papadimitriou, C. H., & Tsitsiklis, J. (1986). Intractable Problems in Control Theory. SIAM Journal on Control and Optimization, 24(4), July, 639-654.
Pólya, G., & Read, R. C. (1987). Combinatorial Enumeration of Groups, Graphs, and Chemical Compounds. New York: Springer-Verlag.
Porter, M. E. (1990). The Competitive Advantage of Nations. New York: Free Press.
Porter, M. E. (1991). America's Green Strategy: Environmental Standards and Competitiveness. Scientific American, 264(4), April, 168.
Porter, M. E., & van der Linde, C. (1995a). Toward a New Conception of the Environment-Competitiveness Relationship. Journal of Economic Perspectives, 9(4), Fall, 97-118.
Porter, M. E., & van der Linde, C. (1995b). Green and Competitive: Breaking the Stalemate. Harvard Business Review, 73(5), September-October, 120-134.
Radner, R. (1992). Hierarchy: The Economics of Managing. Journal of Economic Literature, 30, September, 1382-1415.
Radner, R. (1993). The Organization of Decentralized Information Processing. Econometrica, 61(5), September, 1109-1146.
Rothschild, M. (1990). Bionomics: Economy as Ecosystem. New York: Henry Holt and Company.
Rust, J. (1997). Dealing with the Complexity of Economic Calculations. Paper for the workshop on Fundamental Limits to Knowledge in Economics, Santa Fe Institute, August 3, 1996.
Spear, S. E. (1989). Learning Rational Expectations Under Computability Constraints. Econometrica, 57(4), July, 889-910.
Trudeau, R. J. (1976). Dots and Lines. The Kent State University Press.
Van Zandt, T. (1997). The scheduling and organization of periodic associative computation: Essential networks. Review of Economic Design, 3, 15-27.
Van Zandt, T. (1998). The scheduling and organization of periodic associative computation: Efficient networks. Review of Economic Design, 3, 93-127.
Weyant, J. P., & Hill, J. N. (1999). Introduction and Overview. In: J. P. Weyant (Ed.), The Energy Journal (Special Issue, The Costs of the Kyoto Protocol: A Multi-Model Evaluation), vii-xliv.
Weyant, J. P. (Ed.) (1999). The Energy Journal (Special Issue, The Costs of the Kyoto Protocol: A Multi-Model Evaluation).
Williamson, O. E. (1985). The Economic Institutions of Capitalism: Firms, Markets, Relational Contracting. New York: Free Press.
Wilson, R. J. (1985). Introduction to Graph Theory (3rd ed.). New York: Longman Group Limited.
Wolfram, S. (1996). The Mathematica Book (3rd ed.). Champaign, IL: Wolfram Media/Cambridge University Press.
Wolfram Research (1990). Mathematica 3.0 Standard Add-on Packages. Champaign, IL: Wolfram Media/Cambridge University Press.
TECHNOLOGY AND GREENHOUSE GAS EMISSIONS: AN INTEGRATED SCENARIO ANALYSIS USING THE LBNL-NEMS MODEL

Jonathan G. Koomey, R. Cooper Richey, Skip Laitner, Robert J. Markel, and Chris Marnay

ABSTRACT

This report describes an analysis of possible technology-based scenarios for the U.S. energy system that would result in both carbon savings and net economic benefits. We use a modified version of the Energy Information Administration's National Energy Modeling System (LBNL-NEMS) to assess the potential energy, carbon, and bill savings from a portfolio of carbon saving options. This analysis is based on technology resource potentials estimated in previous bottom-up studies, but it uses the integrated LBNL-NEMS framework to assess interactions and synergies among these options. The High-Efficiency Low Carbon scenario analyzed in this study would result in significant annual net savings to the U.S. economy, even after accounting for all relevant investment costs and program implementation costs. This strategy would result in roughly half of the carbon reductions needed to meet the Kyoto target being achieved from domestic U.S. investments, and net savings of more than $50B per year for the U.S. in 2010.
The Long-Term Economics of Climate Change, pages 175-219. Copyright © 2001 by Elsevier Science B.V. All rights of reproduction in any form reserved. ISBN: 0-7623-0305-0
J.G. KOOMEY ET AL.
INTRODUCTION

The Kyoto protocol was a watershed event in the history of environmental policy. For only the second time in history, national governments have agreed to seek binding targets for pollutants that have global effects. If the U.S. decides to ratify the treaty and meet these targets, it will need to achieve aggressive reductions of greenhouse gas emissions in the 2008 to 2012 time frame. While many analyses have attempted to assess the potential costs of such a commitment, the debate is now shifting in a subtle way. An increasing number of business leaders and policy analysts are instead asking the question: "What are the key policy choices that might actually enhance our industrial competitiveness and still lead to a significant reduction of greenhouse gas emissions?" Critical to this policy perspective is an investment-led deployment of cost-effective technology that can both save money and reduce such emissions. Unfortunately, none of the existing policy models has successfully incorporated a full range of technological change within its analytical framework. In other words, such models understate the opportunity for widely available but underutilized and cost-effective technologies to reduce greenhouse gas emissions. Nor do they anticipate or allow new technologies to emerge in response to changing conditions in the marketplace. Hence, much of the modeling response to any given climate change strategy reflects a more limited technical capacity to respond positively to changing price signals, non-price policy initiatives, and economic conditions. For that reason, there is a need: (a) to identify the broader range of possible technological change through off-line analysis; and (b) to capture the magnitude and direction of technological change within existing policy models to evaluate the potential economic impacts, using a scenario-based approach.
Developing a wide range of alternative technology scenarios can help us better understand the ordinary business of making better choices with respect to both the environment and the economy. In the words of Kenneth Boulding, "Images of the future are the keys to choice-oriented behavior" (Boulding & Boulding, 1995). For this reason, the analysis in this chapter builds on previous estimates of possible "technology paths" to investigate four major components of an aggressive greenhouse gas reduction strategy:
(1) the large-scale implementation of demand-side efficiency, comparable in scale to that presented in two recent policy studies on this topic;
(2) a variety of "alternative" electricity supply-side options, including biomass cofiring, extension of the renewables production tax credit for wind, increased industrial cogeneration, and hydropower refurbishment;
(3) the economic retirement of older and less efficient existing fossil-fired power plants; and
(4) a permit charge of $23 per metric ton of carbon (1996$/t),1 assuming that carbon trading is implemented in the U.S., and that the carbon permit charge equilibrates at this level.
This level of carbon permit charge, as discussed later in the chapter, is in the likely range for the Clinton Administration's position on this topic. These four options are important contributors to the large carbon reductions identified in the so-called "Five Lab study" released in the fall of 1997. The extensive engineering and economic analysis contained in the 1997 study suggested that cost-effective technologies could reduce carbon emissions by as much as 390 million metric tons (MtC) by 2010 at a permit price of $50/tC (Interlaboratory Working Group, 1997). Perhaps more important for this analysis is that both the Five Lab study assumptions and each of the four options referenced above can be represented in the U.S. Department of Energy's (U.S. DOE's) National Energy Modeling System (NEMS) in an accurate and conceptually clear way. Integrating the various technology assumptions within the NEMS framework allows us to capture dynamic feedbacks, particularly those between energy demand and prices. NEMS is an all-sector, integrating model of the U.S. energy system that is exemplary for its comprehensive treatment of supply-side technologies (particularly in the electricity sector) and its detailed treatment of energy demand at the end-use level (see Fig. 1). We rely on a modified version of the NEMS model2 not only as an accounting tool, but also to capture the income, price, and intersectoral effects on both energy demand and fuel prices.
Such interactions are treated exogenously in most bottom-up studies. We set out to determine whether an endogenous treatment within the LBNL-NEMS framework would alter the main conclusions of such studies.
Analytical Objectives of This Study

The goals of this analysis are twofold. The first is to generate scenarios using LBNL-NEMS to explore the effects of an aggressive but cost-effective U.S. effort to implement certain demand- and supply-side technologies that reduce greenhouse gas emissions. The second is to investigate synergies and interactions between demand- and supply-side options to derive lessons for
Fig. 1. Schematic representation of the NEMS modeling system. Adapted from U.S. DOE (1998b).
policy. An integrating model such as LBNL-NEMS is particularly useful for such explorations. Our analysis parallels at least two previous efforts undertaken by the Energy Information Administration (EIA) itself. In both 1996 and 1997, EIA analysts examined the effect of accelerated technological change on the U.S. energy markets (Boedecker et al., 1996; Kydes, 1997). While the two previous EIA reports offer an important step forward with respect to understanding the role of technological progress within an energy and economic framework, the scenarios in this study differ in two ways. First, in these earlier studies, the EIA analysts made less-detailed assumptions about technological progress than was done for the Five Lab study, for example. For instance, within the industrial model, EIA assumed that energy intensity would decline by about 1.4% annually compared to the reference case forecast of a 1.0%/year decline. In our analysis we relied on the Long-Term Industrial Energy Forecast (LIEF) model (Ross et al., 1993) and the Five Lab study to define cost-effective reductions in energy intensity (Interlaboratory
Working Group, 1997). Second, the EIA analyses were independent of any real policy scenarios, while our analysis builds upon the kind of technological progress and behavioral changes one might expect to see in a post-Kyoto world. Indeed, many of the policy options that might drive the kind of technological changes envisioned in the Five Lab study are now in various stages of review by the Administration and Congress (Laitner, 1998).
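The gap between a 1.0%/year and a 1.4%/year intensity decline compounds substantially over the forecast horizon. A minimal back-of-the-envelope sketch (our own illustration, not EIA or LIEF output; the function name is ours):

```python
def intensity_index(rate, years):
    """Energy-intensity index after `years` of a constant annual decline
    at fractional rate `rate` (1.0 = starting-year intensity)."""
    return (1 - rate) ** years

# Reference case (1.0%/yr) vs. accelerated case (1.4%/yr), 1998-2020:
ref = intensity_index(0.010, 22)
accel = intensity_index(0.014, 22)
print(round(ref, 3), round(accel, 3))  # → 0.802 0.733
```

That is, the faster decline leaves industrial energy intensity roughly 7 percentage points lower by 2020, a sizable wedge when applied to total industrial energy use.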
Incomplete Technology Portfolio

This analysis, because of time constraints, implemented an incomplete portfolio of carbon reduction options. For example, the Five Lab study included (but we did not) carbon savings from fuel cells, biomass and black liquor gasification, cement clinker replacement, industrial aluminum efficiency technologies, ethanol in light duty vehicles, repowering of coal plants with natural gas, life extension of nuclear power plants, and fossil power-plant efficiency improvements. Our analysis also includes less than one-third of the carbon savings from wind generation that is tapped in the Five Lab study. It further omits carbon savings from the use of photovoltaics, landfill gas, combined heat and power in non-industrial space heating applications, and advanced efficiency options in the building sector. Because we include none of these carbon savings options, the total carbon savings calculated in our High Efficiency/Low Carbon scenario should be viewed as significantly less than the full potential.

METHODOLOGY
The Scenario Approach

None of the existing policy models captures the full effects of policy-induced technological, institutional, and behavioral changes, especially as they relate to climate change strategies. For this reason we use the scenario approach in our analysis. The purpose of scenario analysis, as explained by Peter Schwartz in his now classic book, The Art of the Long View, is to explore several possible futures in a systematic way (Schwartz, 1996). Schwartz builds on the work of Pierre Wack, a planner in the London offices of Royal Dutch/Shell whose own scenario analysis helped the international petroleum enterprise respond quickly and successfully to the Arab oil embargo following the Yom Kippur war in 1973 (Wack, 1985a, 1985b). Schwartz notes that scenarios are tools "for ordering one's perceptions about alternative future environments. The end result," he says, "is not an accurate
picture of tomorrow, but better decisions about the future." No matter how things might actually turn out, both the analyst and the policy maker will have "on the shelf" a scenario (or story) that resembles a given future and that will have helped them think through both the opportunities and the consequences of that future. Such a story "resonates with what [people] already know, and leads them from that resonance to re-perceive the world". Scenarios, in other words, "open up your mind to policies [and choices] that you might not otherwise consider." Most of the current thinking about how the United States might cope with climate change is built around a reference case (the Annual Energy Outlook 1998, or AEO98) that projects both energy use and carbon emissions through the year 2020. Under a business-as-usual strategy, the nation's economy is projected to grow by nearly 50% between 1998 and 2020. Reflecting some improvements in overall energy efficiency, carbon emissions are projected to grow slightly more than half as fast, increasing about 28% between 1998 and 2020 (U.S. DOE, 1997a). This scenario, which we use as our baseline, might appropriately be labeled "The Official Future." A number of studies suggest that enforcing the so-called Kyoto Protocol will force American businesses and consumers to cut their energy use drastically to reduce their carbon emissions. Such huge cuts, they assert, will greatly weaken overall economic activity (Novak, 1998). These analyses are generally based on modeling methodologies that ignore the potential for energy efficiency technologies to concurrently save money and reduce pollution, and they rely on inadequate characterizations of carbon saving energy supply options (Krause et al., 1993). In contrast, our analysis investigates scenarios where programs and policies promote the adoption of new efficiency and low carbon supply technologies in an aggressive way.
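The AEO98 baseline figures quoted above imply modest compound rates. As a quick check (our own arithmetic, assuming constant annual growth over 1998-2020):

```python
def implied_annual_growth(total_growth, years):
    """Constant annual rate implied by cumulative growth over a period."""
    return (1 + total_growth) ** (1 / years) - 1

# AEO98 baseline: ~50% GDP growth and ~28% carbon growth over 22 years
gdp = implied_annual_growth(0.50, 22)
carbon = implied_annual_growth(0.28, 22)
print(round(gdp, 3), round(carbon, 3))  # → 0.019 0.011
```

Carbon emissions thus grow at roughly 1.1% per year against roughly 1.9% per year for the economy, consistent with the "slightly more than half as fast" characterization.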
The emissions reductions contained in our scenarios assume that the appropriate investments are actually made as a result of effective policies adopted within the United States. Our chapter does not lay out the details of such policies, but explores the implications of a scenario in which they are assumed to take effect. We believe that such policies are capable of promoting the adoption of carbon saving technologies at the levels contained in our scenario, as shown in other analyses. For example, an analysis of programs either now in place or now under consideration - including such programs as the Climate Change Action Plan (CCAP), the President's Climate Change Technology Initiative (CCTI), and the Administration's proposed plan for electric utility restructuring - indicates that the proposed programs and funding levels might obtain as much as 250 MtC of carbon reductions. These are reductions that are not fully
reflected in other analytical scenarios, including the reference case forecast. Hence, a reasonable extension of such programs might allow the nation to secure the balance of the reductions suggested in this chapter (Laitner, 1998). Similarly, a recent analysis by the American Council for an Energy-Efficient Economy (Geller et al., 1998) indicates that just five major policy strategies could obtain more than 330 MtC of carbon reductions. We base many of our assumptions for the low carbon resource potentials on the recently published study, Scenarios of U.S. Carbon Reductions: Potential Impacts of Energy Technologies by 2010 and Beyond (Interlaboratory Working Group, 1997), but we rely on other sources as well. The Interlaboratory report is exemplary for its in-depth examination of technological options to reduce carbon emissions. The study uses a bottom-up modeling methodology to assess the carbon saving resource potentials in the U.S. residential, commercial, industrial, transportation, and utility sectors, relying on the technical expertise of staff at five national laboratories. It finds that, although significant investments and program spending are necessary to support a climate change mitigation strategy, large carbon reductions are possible at zero or negative total net costs to society. We do not limit our analysis to those options contained in the Interlaboratory report, nor do we address all the options considered in that report. Rather, the scenarios we create are intended to show the carbon savings potential for some important options without claiming to be comprehensive. We believe that scenarios implemented in the LBNL-NEMS modeling framework (even those that are not comprehensive in scope) can help explore important policy issues related to creating a low-carbon world.
Demand Side

The first step in building our scenarios was to create energy demand scenarios that are roughly comparable to those in the Five Lab study's High Efficiency Low Carbon case (HELC) or in the Energy Innovations study (ASE et al., 1997; Interlaboratory Working Group, 1997). In this analysis we made changes in discount rates, technology costs, and growth trends (where necessary). We based these changes on our past experience of end-use demand modeling and associated data. These changes reflect a low-carbon world in which aggressive policies and programs accelerate the development of new efficiency technologies, reduce their cost, and make it more likely that people and institutions will adopt them. Appendix A in Koomey et al. (1998) describes in detail how the NEMS input files were modified to implement these changes. Once the changes are implemented, NEMS then evaluates the opportunity for technology
improvements given normal capital stock turnover within each sector of the economy. On the demand side, NEMS interprets a series of "hurdle rates" (sometimes referred to as "implicit discount rates") as a proxy for all the various reasons why people don't purchase apparently cost-effective efficiency technologies. They include constraints for both the consumer (purchasing) and for the supplier (product manufacturing and distribution). Among the constraints are transaction costs, manufacturer aversion to innovation, information-gathering costs, hassle costs, misinformation, and information processing costs. The hurdle rates embody the consumers' time value of money, plus all of the other factors that prevent the purchase of the more efficient technologies. In this regard, the NEMS modeling framework follows a long and rich history in the economics of energy efficient technology adoption (DeCanio, 1998; Howarth & Sanstad, 1995; Koomey et al., 1996; Meier & Whittier, 1983; Ruderman et al., 1987; Sanstad et al., 1993; Train, 1985). See Ruderman et al. (1987) for a discussion of how implicit discount rates differ from the more standard use of the term "discount rate" with which most economists are familiar. In the residential and commercial sectors, for example, the financial component of the reference case hurdle rate is about 15% (in real terms), with the other institutional and market factors pushing such rates to well above 100% for some end-uses. Because our scenario embodies an emissions trading program that is coupled with a variety of non-price policies to eliminate many of the barriers to investing in cost-effective efficiency technologies, we reduce the hurdle rates for many end-uses in our High Efficiency/Low Carbon case.

Residential and Commercial
For the residential and commercial demand sectors, we roughly matched site energy consumption to the Five Lab study's HELC case results by end-use and fuel type. For major end-uses, logit parameters were modified such that the implicit discount rate was reduced to 15% for all residential technologies after the year 1999 and to roughly 18% for all commercial technologies after the year 1999. For minor end-uses that are treated in lesser detail in NEMS (such as miscellaneous electricity and residential lighting), basic input assumptions (such as energy consumption growth rates and lighting efficiencies and market shares) were modified to match the Five Lab study efficiency potentials.

Industrial

We started with the Energy Information Administration's (EIA) Hi-Technology Case in the Annual Energy Outlook (U.S. DOE, 1997a). We then modified parameters so that total electricity and fuel savings matched the results (in
percentage terms) of the High Efficiency runs of Argonne National Laboratory's Long-Term Industrial Energy Forecasting (LIEF) model (Ross et al., 1993). In particular, we changed parameters in the Hi-Technology Case that characterize the rate of efficiency improvements over time and the rate of equipment turnover for all the NEMS industrial subsectors, which include:
(1) Agricultural Production - Crops
(2) Other Agriculture including Livestock
(3) Coal Mining
(4) Oil and Gas Mining
(5) Metal and Other Non-metallic Mining
(6) Construction
(7) Food and Kindred Products
(8) Paper and Allied Products
(9) Bulk Chemicals
(10) Glass and Glass Products
(11) Hydraulic Cement
(12) Blast Furnaces and Basic Steel
(13) Primary Aluminum
(14) Metals-Based Durables
(15) Other Manufacturing.
All parameters other than equipment turnover and the rate of efficiency improvements are identical to those found in the AEO98 reference case.

Transportation

For the transportation sector, the NEMS input files and source code modifications used in the Five Lab study were used as the basis for our LBNL-NEMS analysis. The Five Lab study NEMS technology input file was designed for the AEO97. We updated the technology input file by extending the trends implicit in the Five Lab study from 2015 to 2020 and accounting for changes in the model structure from AEO97 to AEO98. The Five Lab study modifications to the NEMS input files and source code included changes to behavioral variables (discount rates, payback periods, load factors, and the tradeoff between horsepower and efficiency) and technological parameters (capital costs, entry years, penetration rates, and efficiency trends).

Supply Side

On the supply side, we include carbon permit trading, forced retirements of old fossil-fired electric generators, hydroelectric refurbishments, extension of the renewables production tax credit for wind, biomass cofiring, and expansion of industrial cogeneration. We implement those measures as described below.
Carbon Permit Trading

We assume that the emissions trading system is implemented by giving away as many permits within the U.S. as the Kyoto cap would allow. Any transactions in the trading system to allow emissions up to the Kyoto cap will therefore constitute transfer payments between people and institutions within the U.S., while any such transactions to exceed the Kyoto cap are a real cost to the U.S. (these latter transactions are a transfer payment from the global perspective, however). We treat these two components of the carbon trading separately in the aggregate cost-benefit analysis below. We implemented the carbon trading system as a Carbon Charge early in the forecast period to give the model time to adjust before 2010. This assumption reflects the reality that consumers and companies will act with foresight, knowing that an emissions trading régime is soon to be implemented. The Carbon Charge values (in 1996 dollars) were increased linearly by year from $0/t in 2000 to $23/t in 2004, and kept constant in real terms at that level after 2004. The size of the carbon permit trading fee is based on the recent consensus within the Clinton Administration about what equilibrium price might result from international emissions trading. We included this fee in our analysis to see what additional contribution might come from domestic carbon saving options when the alternative was purchasing carbon permits at $23/tC. As a sensitivity case, we also estimated carbon savings from a permit trading fee of $50/tC, which was the main fee level included in the Five Lab study's analysis (we include this estimate in the Appendix, but do not discuss it in this chapter). In the face of the carbon trading system, low or zero carbon emitting technologies, such as natural gas thermal or renewables, will be favored in utility dispatch and capacity expansion, resulting in a shift in the electricity generation fuel mix in favor of these options and against others, notably coal.
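The Carbon Charge schedule described above is simple enough to state as a function. This is a sketch of our reading of the text (the function name and linear interpolation between whole years are our own assumptions):

```python
def carbon_charge(year):
    """Carbon permit charge in 1996$/tC: $0 in 2000, rising linearly
    to $23 in 2004, then constant in real terms thereafter."""
    if year <= 2000:
        return 0.0
    if year >= 2004:
        return 23.0
    return 23.0 * (year - 2000) / 4

print([carbon_charge(y) for y in range(2000, 2006)])
# → [0.0, 5.75, 11.5, 17.25, 23.0, 23.0]
```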
Economic Retirement of Fossil-Fired Plants

Existing fossil-fired generation is an important source of criteria air pollutants (particularly SO2, NOx, and particulates). These plants have largely been "grandfathered" by existing clean air regulations, so they are much dirtier than new fossil-fired plants being built today. They are also generally less efficient, so their carbon emissions per kWh are also greater than those of new fossil-fired plants. Because they are relatively expensive to operate, they are relegated to peaking and intermediate operation, so they generate fewer kWh than a new baseload plant would. The AEO98 version of NEMS does not allow endogenous premature retirement of existing plants, although the AEO99 and AEO 2000 versions do.
We found in previous analysis that some level of retirements beyond those considered in the AEO98 base case will actually reduce total energy bills below those of AEO98 levels (or below that of our high efficiency/low carbon case without the retirements). We therefore implemented cost-effective capacity retirements for old, inefficient fossil-fired plants within LBNL-NEMS. When we evaluate the costs of retirements of existing fossil-fuel fired generating capacity compared to the baseline, we base our decision rule for how many plants to retire on total energy bills, not just electricity bills. An analyst narrowly focused on the optimal level of retirements in a restructured electricity sector would examine the effect of such retirements by limiting her assessment to electricity bills. But one concerned with the total societal cost of carbon reductions associated with such retirements must focus on total energy bills, because fuel price changes and fuel switching will affect the overall results. In this first phase of our analysis, we explore one level of coal retirements (16 GW), which corresponds to retiring all coal steam plants built before 1955 that still exist in 2020 in the AEO98 reference case. We removed these plants over the years 2000 to 2008 by changing the "retirement year" field of the plant data file. We also added retirements of all oil- and gas-fired steam power plants (about 100 GW relative to the AEO98 reference case). We found that this level of retirements always saves money for society and reduces both criteria pollutant and carbon emissions, so we included it in our retirement scenarios. The monetary savings result from the relatively high fixed O&M costs for these plants combined with their low efficiency and low capacity factors.
Hydroelectric Refurbishments

Refurbishing existing hydro facilities is one of the most cost-effective options for expanding renewable power generation. Studies in both Europe and the U.S. show that refitting old dams with bigger and more efficient turbines is inexpensive and has small environmental effects (Krause et al., 1995a; SERI, 1990). We model such refurbishments by increasing hydroelectric capacity by 13 GW by the year 2008, the same amount analyzed in the Five Lab study. This capacity is distributed in equal parts across all 13 NEMS electricity market module regions.

Extension of the Renewables Production Tax Credit for Wind

The current renewables production tax credit of 1.5¢/kWh will expire on January 1, 1999. We extended this credit for wind through 2020 (it is
186
J.G. KOOMEY ET AL.
implemented as a negative variable O&M charge). This policy change is distinct from the assessment of the wind generation potential in the Five Lab study, and it results in only about one third as much wind generation being implemented as was included in that study.
Biomass Cofiring

We converted 10 GW of coal-fired capacity to combust biomass by 2010 (scaling up linearly from 0 GW/year in 2000). This level of cofiring is the same amount analyzed in the Five Lab study. We ensured that no plants affected by our retirement scenarios would be converted to burn biomass.
Expansion of Industrial Cogeneration

We roughly doubled current levels of gas-fired cogeneration by adding gas-fired industrial cogeneration capacity, increasing it by 35 GW by 2010 (ramped up linearly starting at 0 GW/year in 2000). This capacity generates 214 TWh per year (70% electrical capacity factor), while also supplying heat for process use. This level of capacity additions is based on the analysis in U.S. DOE (1997b), but is less than the roughly 50 GW potential for advanced turbine cogeneration in the HELC case of the Five Lab study.
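As a sanity check on the figures above, annual generation is simply capacity times capacity factor times hours per year; the variable names in this sketch are ours.

```python
# 35 GW of added gas-fired cogeneration at a 70% electrical capacity factor.
capacity_gw = 35.0
capacity_factor = 0.70
hours_per_year = 8760

generation_twh = capacity_gw * capacity_factor * hours_per_year / 1000.0
# 35 * 0.7 * 8760 / 1000 = 214.62 TWh/year, consistent with the ~214 TWh cited above
```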
Combined Scenarios

We combine these options in different scenarios to explore relevant dimensions of uncertainty. Our hypothesis is that some of these options will work together to achieve carbon reductions greater than the sum of the carbon reductions attributable to each option separately (we call this situation one with "positive synergy"). Others will work against each other, yielding carbon reductions less than the sum of the carbon reductions attributable to each option separately ("negative synergy"). We define the "synergy index" to describe these effects in a quantitative way. Given a set of distinct options to reduce carbon emissions, the synergy index (SI) is

SI = (carbon savings when options are implemented together) / (sum of carbon savings for each option implemented separately).

For example, if the carbon savings in 2010 for High Efficiency implemented alone is 179 MtC, the savings for the Carbon Charge is 36 MtC, and the total savings when these policies are implemented together are 207 MtC, the synergy index is

SI = 207 / (179 + 36) = 0.96,
which indicates negative synergy because SI < 1.0. When High Efficiency, coal retirements, and gas/oil retirements are implemented together, the SI in 2010 is

SI = 205 / (179 + 12 + 8) = 1.03.
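The synergy index computations above reduce to a one-line function; this sketch is our own restatement of the definition given in the text, using the two worked examples.

```python
def synergy_index(savings_together, savings_separate):
    """Ratio of carbon savings when options are implemented together
    to the sum of savings when each option is implemented separately.
    SI > 1 indicates positive synergy; SI < 1, negative synergy."""
    return savings_together / sum(savings_separate)

# The two worked examples from the text (2010 carbon savings, MtC):
si_charge = synergy_index(207, [179, 36])      # High Efficiency + Carbon Charge
si_retire = synergy_index(205, [179, 12, 8])   # High Efficiency + retirements
```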
The High Efficiency and retirements scenarios work together to create positive synergy. When exploring the synergy between different policies in the HELC case, we focus on the 16 GW coal/100 GW oil-gas steam retirement case, because that is the level of retirements that is cost effective, based on our analysis of the present value of energy bills over the analysis period. When investigating the uncertainties surrounding retirements, we use all the different retirement levels plus the demand reductions, supply-side options, and carbon permit charge.

Investment Costs

Currently, LBNL-NEMS uses capital cost/efficiency curves to choose the appropriate efficiency level for new purchases for some end-uses in the building and transport sectors, but does not pass the total investment costs to the macroeconomic module. For end-uses that are "hard-wired" as well as for the industrial module, there is no investment cost accounting whatsoever. We estimated investment costs for demand-side efficiency and cogeneration options in our HELC case using a spreadsheet that multiplies the costs of conserved energy by end-use from the Five Lab study by the energy savings. This calculation, which is identical in process to the one used in the Five Lab study's cost-benefit analysis, yields the total annualized investment costs for efficiency improvements (Interlaboratory Working Group, 1997). The energy and bill savings we calculate are somewhat different from those of the Five Lab study, as are the total investment costs, in part because of fuel price and other interactions not captured in the Five Lab study's bottom-up analysis, and in part because the technology portfolios differ between the two studies. For cogeneration, we used capital costs of $940/kW (1998$), a lifetime of 20 years, and discount rates of 12.5% and 20% for the optimistic and pessimistic cases, respectively (the discount rates are the same as those used for the cost-benefit calculations in the Five Lab study).
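The chapter does not spell out how the cogeneration capital cost is annualized; the standard capital recovery factor, sketched below under that assumption, turns the stated $940/kW, 20-year life, and 12.5%/20% discount rates into annual figures. The function names are ours.

```python
def capital_recovery_factor(discount_rate, lifetime_years):
    """Convert a capital cost into an equivalent constant annual payment."""
    r, n = discount_rate, lifetime_years
    return r / (1 - (1 + r) ** -n)

def annualized_cost_per_kw(capital_cost, discount_rate, lifetime_years=20):
    return capital_cost * capital_recovery_factor(discount_rate, lifetime_years)

# Cogeneration assumptions from the text: $940/kW (1998$), 20-year life,
# 12.5% and 20% discount rates for the optimistic and pessimistic cases.
optimistic = annualized_cost_per_kw(940, 0.125)   # ≈ $130/kW-yr
pessimistic = annualized_cost_per_kw(940, 0.20)   # ≈ $193/kW-yr
```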
The current cost treatment of LBNL-NEMS on the supply side is largely satisfactory, so with the additional calculation for demand-side costs and
cogeneration investments, we can, in a simplified way, properly estimate the total costs of energy services for our HELC case.
The Programs-Based Perspective

Our scenario assumes that programs and policies exist that can capture the energy savings potentials identified in the Five Lab study. We assign implementation costs to those programs based on real-world program experience, and express these costs as a percentage of the investment costs. The Five Lab study used a range of 7% to 15% of investment costs for their optimistic and pessimistic cases, respectively, and we follow that convention here. These costs represent a crude estimate of weighted average implementation costs for a mix of voluntary programs (like ENERGY STAR), efficiency standards, tax credits, government procurement, golden carrots, and other non-energy-price components of a carbon reduction strategy. For a detailed example of how to create such a programs-based scenario, see Krause et al. (1995b). For electricity supply-side options, we also include program costs of 7% and 15% of investment costs for the optimistic and pessimistic cases, respectively (the Five Lab study used program costs of 1% and 3% of total net costs for these investments). For purposes of estimating program costs for these investments, we made rough estimates of the investment costs related to these supply-side changes (the LBNL-NEMS model correctly accounts for these investment costs, but does not report them separately in a convenient way, so a back-of-the-envelope calculation was required to compute program costs). For renewable electricity generation, we calculated program costs based on an assumed average capital cost of $1500/kW. For retirements, we assumed capital costs of $200/kW to shut the plants down. For all these options, we assumed a lifetime of 20 years, and real discount rates of 7% and 15% for the optimistic and pessimistic cases, respectively.
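Combining the stated assumptions, the back-of-the-envelope program-cost calculation can be sketched as a fixed share of annualized investment. The use of a capital recovery factor and all variable names are our own illustration, applied here to the $1500/kW renewables figure.

```python
def crf(rate, years=20):
    # Capital recovery factor: converts $/kW of capital into $/kW-year.
    return rate / (1 - (1 + rate) ** -years)

capital_cost = 1500.0  # $/kW, assumed average for renewables (from the text)

# Optimistic case: 7% real discount rate, program costs at 7% of investment.
# Pessimistic case: 15% real discount rate, program costs at 15% of investment.
annualized_opt = capital_cost * crf(0.07)    # ≈ $142/kW-yr
annualized_pes = capital_cost * crf(0.15)    # ≈ $240/kW-yr
program_opt = 0.07 * annualized_opt          # ≈ $10/kW-yr
program_pes = 0.15 * annualized_pes          # ≈ $36/kW-yr
```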
RESULTS

This section describes key analysis results, including savings in primary energy, carbon, and energy bills, as well as the investment costs and program costs needed to implement the High Efficiency Low Carbon case. It also explicitly treats the transfer payments associated with the carbon permit trading system. The High Efficiency case reduces demand in each of the end-use sectors, and in response, the electricity capacity expansion model in LBNL-NEMS builds 30 GW less coal capacity, 77 GW fewer gas-fired combined cycle plants, and 30 GW fewer combustion turbines by 2020. The high efficiency case also
reduces natural gas demand, thus reducing gas prices and making gas more competitive with coal. The Carbon Charge promotes fuel switching towards less carbon intensive fuels, favoring natural gas and renewables over coal and oil. The Retirements Case forces old fossil-fired plants out of the fuel mix, and LBNL-NEMS chooses new plants to replace them. The supply-side options include adding additional capacity for hydroelectric refurbishments, biomass cofiring, cogeneration, and wind.

Primary Energy Use
Tables 1a and 1b show the energy use and carbon emissions results for 2010 and 2020 from a subset of our modeling runs. All options affect primary energy use, with the High Efficiency Case reducing primary energy demand the most, and the Carbon Charge, other electricity supply-side, and retirements options following far behind. Total primary energy demand, when all options are combined, is reduced by about 13% in 2010 and 22% in 2020. Energy synergy is negative in 2010 and 2020 for the Carbon Charge + High Efficiency and HELC cases with varying retirement levels, while it is neutral in 2010 and slightly positive in 2020 for the Retirements + High Efficiency Case. The negative synergy in the Carbon Charge + High Efficiency Case results because the High Efficiency Case reduces the number of highly efficient natural gas power plants that would be built by 2020, but does not affect the number of gas/oil steam plants. When the Carbon Charge is put in place, the inefficient gas/oil steam plants are run in preference to coal plants. As a result, primary energy use goes up. In the Retirements + High Efficiency Case, synergy is slightly positive in 2020. The High Efficiency Only Case prevents more coal plants from being built, but the existing coal plants are not retired, because the AEO98 version of NEMS does not have an endogenous retirement function. Instead, highly efficient advanced combined cycle plants and some combustion turbines, which are largely built after 2000, are displaced in the High Efficiency run. When coal plants are retired in the coal retirements only case, many of them are built back as coal plants. When the retirements are combined with High Efficiency, many of the retirements are not built back as coal plants. Instead, they are built back as high efficiency gas combined cycles, in large part because the High Efficiency technologies keep gas prices low and make gas-fired generation relatively more attractive than coal.
These higher efficiency power plants increase the efficiency of electricity generation and reduce the primary energy used per kWh generated.
Figure 2 shows how our HELC scenario compares to the AEO98 baseline and historical trends in energy/GDP ratios. The AEO98 reference case projects a decline in the energy/GDP ratio of about 0.9% per year, which is comparable to the historical rate of decline from 1960 to 1997 (1.1% per year). Our HELC scenario projects a decline in the energy/GDP ratio of about 1.9% per year, which is somewhat faster than that experienced in the 1970 to 1995 period, but significantly slower than that experienced from 1976 to 1986. It is clear from history that the economy's energy intensity can improve at least as fast as postulated in our scenario, given the right incentives and policy changes.

Fig. 2. Energy/GDP ratio over time, historical and projected. Historical AAGR: -1.1% (1960-1997), -1.5% (1970-1997); projected AAGR (1999-2020): -0.9% (AEO98 Reference Case), -1.9% (LBNL-NEMS HELC Case). Historical GDP data source: Bureau of Economic Analysis, 1997 Survey of Current Business; historical energy consumption data source: Energy Information Administration, 1997 Annual Energy Review.

Carbon Emissions
Total carbon emissions decline 15% in 2010 and 23% in 2020 when all options are combined, as shown in Tables 1a and 1b (above). The savings are larger in percentage terms than for primary energy because carbon emissions are also affected by fuel switching. The High Efficiency Case is the most efficacious in terms of absolute carbon savings, but savings from the Carbon Charge and other supply-side options are also significant. The sum of carbon savings from coal and oil and gas retirements is relatively small (about 5% of the savings in the High Efficiency Only Case). In 2010, carbon synergy is positive for the Carbon Charge/High Efficiency Case, while it can be either positive or negative for the various HELC cases with various combinations of retirements. Carbon synergy is negative for all cases in 2020. In 2010, total carbon savings for the HELC case are about 274 MtC, which brings total emissions to 1530 MtC, about 14% above 1990 levels (1346 MtC). The Kyoto Protocol specifies that, in the years 2008-2012, total U.S. greenhouse gas emissions should average 7% below 1990 levels (about 1252 MtC). If this standard were applied only to carbon emissions for the year 2010, the U.S. would need to reduce total emissions by 554 MtC in that year. According to this scenario, then, about half of the savings needed to meet the Kyoto target are achieved domestically. The balance of the reductions, about 280 MtC, would have to come from the international trading of greenhouse gas permits. This result depends on the assumptions detailed above, including a $23/tC carbon permit price. As noted elsewhere, a number of other studies suggest that an even larger potential exists for domestic reductions (ASE et al., 1997; Brown et al., 1998; Interlaboratory Working Group, 1997; Krause, 1996; Laitner et al., 1998). However, the resources needed to achieve these additional reductions are not incorporated into the scenarios described in this chapter.

Power Plant Retirements
The AEO98 Reference Case appears to contain at least a few GW of coal-fired power plants that would be cost effective to retire (from a societal perspective), regardless of any greenhouse gas strategy. It becomes progressively more cost effective to retire more fossil capacity as electricity supply-side options, carbon charges, and high efficiency technologies are implemented. The AEO98
version of NEMS does not currently retire such capacity automatically, so any modeling runs using NEMS to simulate a low carbon world must contain some exogenous retirements to account for this effect. Calculating the exact amount is laborious because it involves iteration, but it is an essential part of such analysis. Our own retirement cases provide a useful example. Relative to the AEO98 reference case, retiring 16 GW of coal and 100 GW of gas and oil steam plants costs about $9B in present value terms by 2020 (calculated using total energy bills at a 7% real discount rate, base year 1999). Relative to the HELC case without retirements, however, adding the retirements saves about $6B in present value terms by 2020 and saves carbon. Any high efficiency low carbon scenario that did not include retirements beyond those found in the AEO98 reference case would therefore be omitting an important carbon- and money-saving option.
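The present-value comparison described above can be sketched as follows. The discount convention matches the stated 7% real rate and 1999 base year, but the annual bill-change stream here is purely hypothetical, included only to illustrate the mechanics; the chapter's $9B and $6B figures come from full LBNL-NEMS runs.

```python
def present_value(annual_changes, rate=0.07, base_year=1999):
    """Discount a stream of annual energy-bill changes (year -> $B)
    back to the base year at a real discount rate."""
    return sum(delta / (1 + rate) ** (year - base_year)
               for year, delta in annual_changes.items())

# Hypothetical bill-change stream ($B/year), for illustration only:
example = {2005: 0.5, 2010: 0.8, 2015: 1.0, 2020: 1.2}
pv = present_value(example)
```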
Energy Bill Savings

As shown in Tables 2a and 2b, the impact of the "Climate Change Investment Strategy" appears to be highly positive in terms of the nation's overall energy expenditures. Total energy costs for all sectors in 2010, for example, are decreased by $89 billion (in 1996 dollars). This result is made possible by the cost-effective technology investments outlined in the scenario analysis. When the carbon permit charge transfer payments are removed, the total energy bill falls even further, by an additional $35 billion. The price response to the reduced demand in this scenario more than offsets the increase in energy prices created by a $23/tC cost of carbon. Indeed, the reduced demand for petroleum and natural gas lowers energy prices by $0.52 and $0.35 per MBtu, respectively. Although coal and electricity prices are up $0.53 and $0.92 per MBtu, respectively, the weighted price for all energy resources actually falls by about $0.15 per MBtu. The significantly lower energy consumption, when coupled with the lower energy prices, reduces the nation's energy bill by about 13% relative to the reference case.
Investment and Program Implementation Costs

The results of our simplified calculation of additional annualized investment costs for our HELC case are shown in Tables 3a and 3b. Supply-side investment costs (with the exception of cogeneration) are already captured in the bill savings calculations, while the energy efficiency investment costs are not tracked in the AEO98 version of the NEMS model.
We applied costs of conserved energy from the Five Lab study to the energy savings calculated from our scenarios in each of the demand sectors. We also estimated the cost of cogeneration investments. The optimistic and pessimistic assumptions for discount rates and program costs correspond to those used in the Five Lab study cost analysis. Total additional investment costs in 2010 are $40 to $60B/year, and program implementation costs range from $3 to $9B/year. In 2020, total additional investment costs are between $80 and $100B/year, while program implementation costs range from $5 to $17B/year.

Impact on the Nation's Economy

As summarized in Tables 3a and 3b, total net savings in 2010 (after accounting for investment costs and program implementation costs) are between $80 and $60 billion per year for the optimistic and pessimistic cases, respectively. While these investment cost estimates are, at best, of "one significant figure" accuracy, they convey a qualitative picture that is similar to the results of the Five Lab study. The energy bill savings from the incremental efficiency options more than offset the sum of incremental investment and program costs for those options. Even if the carbon charge transfer payments are treated as a cost and not subtracted from energy bills, total net savings would still be in the tens of billions of dollars every year. The results do not match the Five Lab results exactly because of differences between the options included in our scenario and those in the Five Lab study, and because of feedbacks captured in the NEMS model that were not included in the Five Lab study. The results in 2020 contain larger net savings than in 2010. We also estimate the additional cost of permits needed to meet the Kyoto goal. These permits are in excess of those assumed to be distributed in the U.S. when the emissions trading system is first created. They represent a cost to the U.S.
(although from the global perspective, they are just a transfer payment). The costs are the difference in carbon emissions between the Kyoto goal for the U.S. and those in the scenario, multiplied by $23/tC. In the AEO98 reference case in 2010, the U.S. would have to purchase about $13B/year worth of emissions allowances to meet the Kyoto goal.3 The reduced emissions in the HELC case would result in international emissions credit purchases of about $6B/year, a savings of more than $6B/year in these permit costs compared to the reference case. The HELC scenario would result in significant annual net savings to the U.S. economy, even after accounting for all relevant investment costs and program implementation costs. Not pursuing this technology-led investment strategy would have an opportunity cost of more than $50B per year for the U.S. in
2010 and more than $100B per year by 2020. In any case, this scenario identifies significant "no-regrets" options that make sense to implement even if climate change is not a concern. If a scenario shows cost-effective investments, that is, investments that generate net savings over a reasonable period of time, the nation's Gross Domestic Product (GDP) should also increase. However, we do not report the NEMS GDP estimates, since the NEMS model does not adequately track changes in investment or capital to provide a reliable estimate of the economic impacts of our investment scenario. At the same time, we can report on a modeling exercise using the AMIGA model (a general equilibrium model with rich sectoral detail).4 The runs conducted with the AMIGA model were also based upon the technology assumptions of the Five Lab study, but with a more complete accounting for investment costs than that of the LBNL-NEMS framework. Within the AMIGA system, the level of technology associated with the Five Lab study, together with a $50/tC permit price, generated a much higher level of domestic carbon reductions: 366 MtC in the year 2010. In that analysis, GDP increased by $63 billion in 2010, or about 0.6% higher than in the reference case. Employment shows a small net gain of about 35,000 jobs by 2010 (Hanson, 1998). Hence, the AMIGA evaluation of a Climate Change Investment Strategy, similar in scope to that analyzed here, provides a clear indication that the United States can achieve a significant level of domestic carbon reductions and still maintain a competitive momentum to the benefit of the nation's economy.

DISCUSSION
The Importance of 'Hard-Wired' Specification of End-Uses and Technologies in the NEMS Framework

Our analysis explicitly treats carbon savings from end-uses and technologies that are currently "hard-wired" in the NEMS framework, including miscellaneous electricity and residential lighting. Any policy study using NEMS that merely uses carbon charges or other price instruments to achieve carbon reductions essentially assumes that no savings can be achieved in these building sector end-uses. Since miscellaneous electricity and residential lighting are responsible for a large fraction of growth in the buildings sector, it is inappropriate to ignore them. Similarly, carbon charges do not affect the growth in industrial cogeneration in the NEMS AEO98 framework, so any policy study that does not
exogenously alter the adoption of industrial cogeneration in response to a changing policy context is ignoring a potentially large source of cost effective carbon reductions.
Implementing Technological Change

Many top-down modelers, in assessing the costs of reducing carbon emissions, fail to consider technological change that may be induced by climate change mitigation policies. Part of this failure stems from the current generation of models, which cannot adequately capture such changes. Any policy case that involves significant price changes or aggressive non-price policies will lead to changes in behavior and in technology costs. For example, it is a fundamental oversight to model a large carbon tax, on the one hand, and fail to change the discount rates and the technology costs associated with behavioral changes and learning curve effects for low carbon and efficiency technologies on the other. The size of these changes in behavior is uncertain, so it is difficult to ascribe exact changes in discount rates to different policies, but it is inappropriate to make no such changes in the face of massive policy shifts like carbon permit trading and aggressive efficiency programs.
Understanding Power Plant Retirements

Many studies of options for reducing carbon emissions focus on new gas-fired power plants, efficiency technologies, and non-fossil resources, but neglect to incorporate power plant retirements. Our analysis shows that such retirements work together with energy efficiency and Carbon Charges to achieve higher carbon savings than could be achieved by any of these policies in isolation. The costs of these retirements do not include the reduced damages from criteria air pollutant emissions that are the direct result of the retirements. The older plants that are retired contribute disproportionately to emissions of these pollutants, in part because they are extremely dirty, but in part because they tend to be located closer to urban areas than do the more modern plants.
Effect of Integrated Analysis

There does appear to be an effect of using an integrated modeling approach, though it is smaller than we initially expected. When all the options in our HELC case (with the $23/tC charge) are implemented separately in the LBNL-NEMS framework, we find total savings of 280 MtC in 2010. When implemented together in the LBNL-NEMS model, however, we found total
savings of only 274 MtC, about a 2% reduction from the non-integrated assessment. This relatively small effect indicates that sectoral studies that do not conduct integrated all-energy-sector analyses may not be missing much, though it is not possible to generalize this finding without significant future work. At least some of the important changes we implemented on the demand side are independent of prices (e.g. savings in miscellaneous electricity and residential lighting), but many other important changes (e.g. the reductions in discount rates) increase the model's sensitivity to price changes. This area is clearly one that warrants further investigation.
FUTURE WORK

Additional Carbon Saving Options

Several carbon saving options have not been included in this analysis, including:

(1) combined heat and power in non-industrial space heating applications;
(2) photovoltaics;
(3) fuel cells;
(4) landfill gas;
(5) repowering of old coal plants with natural gas;
(6) advanced efficiency options in the building sector; and
(7) ethanol in the transportation sector.
These other options are potentially important, and in each case will increase the carbon savings from the high efficiency low carbon case. A comprehensive assessment for Europe that included these options uncovered significant cost effective carbon reduction potentials associated with these technologies (Krause et al., 1994, 1995a, b), and we expect the same conclusions to hold for the U.S.
Better Treatment of Currently Analyzed Carbon Saving Options

Industrial cogeneration, biomass, hydroelectric refurbishments, and wind generation have been treated in a relatively cursory manner in this work. The resource potentials for these options are large. A more careful assessment of the geographic variations in resource availability and costs could pay dividends in terms of further carbon reductions. Wind, in particular, could contribute many times more power than in our HELC case,
based on its rapidly declining costs and the large absolute size of the resource. Detailed analysis of these options for Europe has demonstrated large unrealized cost-effective potentials (Krause et al., 1994, 1995a).
Improvements in Synergy Index

In the current version of this analysis, the synergy index is calculated based on annual energy use or emissions. In future work, we expect to convert the synergy index to use cumulative energy use or emissions, thus giving a more accurate picture of the total response over time to our policy excursions. We also will explicitly attempt to assess synergies in costs and benefits for different scenarios.
Implementation Roadmap for Programs and Policies

Our scenario analysis is predicated on the existence of successful programs and policies to promote cost-effective energy efficiency technologies. Further work is clearly needed to lay out an implementation path for such policies. This roadmap would be similar to that created by Brown (1993, 1994) for the residential sector, but would cover policies and programs in all sectors. Logistic constraints in ramping up programs would be incorporated in this roadmap. For example, it takes years to complete all the legal requirements to create a new standard, and it often takes years for a new ENERGY STAR program to achieve significant market penetration. These logistic constraints should be characterized based on recent evaluations of the experience of the programs in question. Implementation costs needed to achieve the level of reductions described above would also be incorporated explicitly in such a roadmap. According to the Five Lab study, these costs are on the order of 7-15% of the level of technology investments needed to reduce carbon emissions, but these costs should be calculated explicitly, based on the cost of each policy or program and its expected contribution to future energy savings.
Ancillary Benefits of Power Plant Retirements

One variable that we have not explicitly treated in our retirements analysis is the interaction between criteria pollutants (e.g. SO2, NOx, and particulates), energy efficiency, and retirement levels. Efficiency improvements and retirements both reduce criteria pollutant emissions, and within many urban airsheds, reducing these pollutants has a value to society that is known or can be
approximated. An analysis using Geographic Information Systems that assesses the economic characteristics of specific power plants and the local value of reducing urban air pollution could determine which additional power plants it might be cost effective to retire given those ancillary benefits.
Other Aspects of Power Plant Retirements A regional analysis of retirements would also help us better understand the interactions between retirements, utility deregulation, and regional fuel prices. The AEO98 electricity supply module assumes regulated electricity markets in all States except California, New York and New England, and this representation affects the potential benefits from retirements in a complicated way. More investigation is needed on this point.
Power Plant Efficiency Although the average heat rates of existing power plants improve over time in the AEO98 reference case forecast (due to retirements of less efficient units), studies indicate that policies to promote restructuring within the electric utility sector may encourage further efficiency improvements at many existing plants (U.S. DOE, 1998a). Those opportunities should be examined carefully in future work. However, we anticipate that the resulting carbon savings can, to a first approximation, be added to the results presented in this analysis (assuming that the benefits of these improvements apply only to those existing plants that are not retired in our analysis).
An Integrated Scenario Analysis Using the LBNL-NEMS Model

Macroeconomic Effects in NEMS The AEO98 version of NEMS used for this analysis relies on a very simple reduced-form version of the DRI macroeconomic model. When EIA runs NEMS, it uses the full DRI model to capture macroeconomic effects. It is important to understand, however, that even the full model cannot capture macroeconomic investment effects if the capital expenditures on cogeneration and end-use efficiency investments are not tracked and passed on to the macro model. Fully tracking these investments and reporting them to the macro model would be a major improvement to the NEMS modeling framework. In addition, it is clear that the macroeconomic forecast generated by the reduced-form version of the DRI model depends solely on energy prices passed from the integrating module of NEMS. It is therefore impossible for the AEO98 version of the NEMS modeling framework using the reduced-form macro model to reflect accurately the effect of a reduction in energy bills. This problem limits the usefulness of this tool in assessing macroeconomic impacts from programs and policies that are designed to reduce energy bills. We do not know whether the EIA version of NEMS using the full DRI model suffers from this limitation, but if it does, it is a fundamental one. There are, of course, other well-known limitations of the DRI model for such long-term forecasting, so results from that model should be interpreted with extreme caution (Jorgenson & Nordhaus, 1998).
Review of Gas Supply and Demand Interactions Because of the importance of gas demand and prices to our results, and because our scenario incorporates relatively large perturbations of reference case gas consumption, we believe that more attention to the feedback between demand-side efficiency and natural gas prices would yield important lessons. This feedback is clearly one of the critical factors affecting the costs of reducing carbon emissions, and a more detailed assessment of its effects is needed.
Regional Distribution of Hydroelectric Refurbishment Capacity More research is needed on how hydroelectric refurbishment opportunities are distributed geographically. Future work should estimate the potential by region, because this distribution will affect carbon savings from this option.

POLICY IMPLICATIONS
The LBNL-NEMS analysis highlights some key policy implications:

(1) The domestic U.S. options analyzed here achieve about half of the carbon reductions needed to meet the Kyoto commitment. This is a significant fraction of the total commitment that actually reduces the nation's total cost of energy services. The rest of the savings will need to come from international emissions trading and other options we did not analyze. This result is dependent on our assumption of a $23/ton carbon permit price, as well as the other assumptions detailed above.

(2) Many carbon savings options are not included in our analysis. These include fuel cells, biomass and black liquor gasification, cement clinker replacement, industrial aluminum efficiency technologies, ethanol in light duty vehicles, repowering of coal plants with natural gas, life extension of nuclear power plants, fossil power-plant efficiency improvements, photovoltaics, landfill gas, combined heat and power in non-industrial space heating applications, and advanced efficiency options in the building sector. The potential of some other options (such as wind generation and industrial cogeneration) is probably underestimated in our study. For these reasons, we believe the total carbon savings calculated here are substantially less than the full potential.

(3) Power plant retirements are a key carbon reduction option that has been inadequately explored heretofore. They combine ancillary benefits from criteria air pollutant reductions with positive carbon synergy when implemented in combination with energy efficiency.

(4) Demand- and supply-side options can work together to create positive synergy. Switching power plants to natural gas will increase demand for that fuel and drive up its price. If large amounts of energy efficiency options are implemented in concert with the switch to gas-fired power plants, the price of natural gas can be kept below reference case levels, which allows these power plants to compete more effectively with coal.

(5) NEMS is not a complete cost accounting framework (particularly for demand-side investments and cogeneration), so any estimates of the GDP effects of various energy policy options must be viewed with extreme caution. This problem is not unique to NEMS, but it is an important issue for anyone attempting to interpret the results of any NEMS analysis.

(6) Our analysis explicitly incorporates carbon savings from miscellaneous electricity, residential lighting, and industrial cogeneration, which are currently "hard-wired" in the NEMS framework. Any NEMS analysis that implements large shifts in policy (such as carbon permit trading and non-price policies) but does not exogenously treat these hard-wired items is not correctly assessing the costs of reducing carbon emissions.
SUMMARY AND CONCLUSIONS The common perception among many policy makers and industry leaders is that the twin objectives of reducing greenhouse gas emissions and promoting a more competitive economy are inherently contradictory. Many believe that anything done to lower such emissions will necessarily restrict economic activity. Others argue that if the economy moves forward at current levels of efficiency, growth in greenhouse gas emissions will be inevitable and the global climate will be seriously damaged. Because of the "unavoidable tradeoff" between these two objectives, the various industry, government and environmental groups wage a constant policy battle over which objective merits the greater support. From a perspective of cost-effective investments in technology, however, it becomes increasingly clear that these two goals are not at all
contradictory. The reason is that the U.S. economy falls short of an optimal level of overall carbon efficiency. Figure 3 illustrates the different points of view in a schematic way. The curves on this graph represent different "Production Possibility Frontiers" that characterize the relationship between carbon emissions mitigation and economic activity. The frontier defines the outer boundary of what is feasible given a set of technologies and economic activity levels. Most modeling of the costs of reducing carbon emissions assumes that the reference case carbon intensity is on the frontier, and that any increase in carbon mitigation must also result in a decrease in Gross Domestic Product (this point of view corresponds to the curve labeled "Apparent Year 2010 Business-As-Usual Case Frontier"). Our analysis demonstrates that the "Actual Year 2010 Business-As-Usual Case Frontier" is further out than the apparent frontier, which means that both carbon mitigation and GDP can be increased at the same time, given the right set of policies and programs. In addition, since the frontier is a function of technology, and the cost of that technology is a function of policy choices made between now and 2010, taking aggressive actions now to reduce carbon emissions can actually move the frontier further out than it would be given the technologies that exist in the reference case. This possibility is represented by the curve labeled "Year 2010 Aggressive Implementation Case Frontier".

[Fig. 3. Schematic production possibility frontiers. The figure plots the level of carbon mitigation against Gross Domestic Product, showing an apparent year 2010 reference case frontier, the actual year 2010 reference case frontier, and a further-out year 2010 HELC case frontier, and marks the 2010 EIA AEO98 Reference Case (~170 g/1996$) and the 2010 LBNL-NEMS HELC Case (~147 g/1996$).]

This chapter describes an analysis of possible technology-based scenarios for the U.S. energy system that would result in both carbon savings and net economic benefits. We use a modified version of the Energy Information Administration's National Energy Modeling System (LBNL-NEMS) to assess the potential energy, carbon, and bill savings from a portfolio of carbon-saving options. This analysis is based on technology resource potentials estimated in previous bottom-up studies, but it uses the integrated LBNL-NEMS framework to assess interactions and synergies among these options. The U.S. economy now emits 192 grams of carbon for each dollar of value added (measured as GDP in constant 1996 dollars) that it produces. With a "normal" rate of improvement in the Business-As-Usual case, it appears that by the year 2010, the nation would reduce this emissions rate to about 170 grams per dollar. Despite this improvement in the emissions rate, however, the anticipated growth in the economy will increase total carbon emissions to 1803 MtC in 2010, or to about 23% above 1996 levels.
The LBNL-NEMS analysis conducted in this study suggests that implementing a set of policies to encourage the development and deployment of energy-efficient and low-carbon technologies can close this gap - to the benefit of both the climate and the economy. In this study, we find a cost-effective path that can reduce the rate of carbon emissions to 147 grams per dollar of GDP. This would reduce carbon emissions to about 1530 MtC by 2010. Other studies suggest that with the right mix of policies and technologies, the frontier might actually extend well beyond that described in this chapter (ASE et al., 1997; Brown et al., 1998; Interlaboratory Working Group, 1997; Krause, 1996; Laitner et al., 1998). The High-Efficiency Low Carbon scenario analyzed in this study would result in significant annual net savings to the U.S. economy, even after accounting for all relevant investment costs and program implementation costs. This strategy would result in roughly half of the carbon reductions needed to meet the Kyoto target being achieved from domestic U.S. investments. Not pursuing this technology-led investment strategy would have an opportunity cost of more than $50B per year for the U.S. in 2010 and more than $100B per year by 2020.
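The carbon-intensity figures above can be checked with simple arithmetic. The sketch below (Python) infers GDP from the stated Business-As-Usual numbers; the small gap between the ~1559 MtC it yields and the ~1530 MtC in the text reflects rounding of the intensity figures and the slightly different GDP path in the HELC scenario.

```python
# Back-of-the-envelope check of the carbon-intensity arithmetic in the text.
# Figures are from the chapter; the implied GDP is a derived approximation.

BAU_INTENSITY = 170      # g carbon per 1996$ of GDP, 2010 Business-As-Usual
HELC_INTENSITY = 147     # g carbon per 1996$, 2010 High-Efficiency Low Carbon
BAU_EMISSIONS = 1803     # MtC in 2010, Business-As-Usual

# 1 MtC = 1e12 g, so GDP ($) = emissions (g) / intensity (g/$)
implied_gdp = BAU_EMISSIONS * 1e12 / BAU_INTENSITY

# Applying the HELC intensity to the same (approximate) GDP:
helc_emissions = HELC_INTENSITY * implied_gdp / 1e12   # MtC

print(round(implied_gdp / 1e12, 1))   # ~10.6 trillion 1996$
print(round(helc_emissions))          # ~1559 MtC, vs. ~1530 MtC in the text

# 2010 BAU emissions relative to 1996 (text: about 23% above 1996 levels)
emissions_1996 = BAU_EMISSIONS / 1.23
print(round(emissions_1996))          # ~1466 MtC
```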
ACKNOWLEDGMENTS The work reviewed in this chapter and the associated original background report (LBNL-42054, downloadable from http://enduse.lbl.gov/Projects/GHGcosts.html) was made possible through EPA funding of the E. O. Lawrence Berkeley National Laboratory (LBNL). The analysis was primarily conducted by Jonathan G. Koomey and Cooper Richey (LBNL) under the overall guidance of EPA's Skip Laitner (who also supplied significant amounts of text). In addition, Robert Markel and Chris Marnay of LBNL contributed to the research leading up to the publication of this report. For a recent study that builds upon the work described in this chapter and that addresses some of the future analysis needs identified here, go to http://enduse.lbl.gov/Projects/CEF.html. The original report from which this chapter was derived benefited from the comments of many colleagues. Their extensive comments are summarized in the appendix of the original report, together with an indication of how we incorporated their suggestions into our analysis. The colleagues (in alphabetical order) who generously shared their insights with us include: Dr. Stephen Bernow, Tellus Institute; Dr. Stephen DeCanio, University of California-Santa Barbara; Dr. Neal Elliott, American Council for an Energy-Efficient Economy; Dr. Eban Goodstein, Lewis and Clark College; Dr. Julie Fox Gorte, Northeast-Midwest Institute, Washington, DC; Dr. Lorna Greening, Economic Consultant, Boulder, CO; Dr. Bruce Hutton, University of Denver; Dr. Florentin Krause, International Project for Sustainable Energy Paths, El Cerrito, CA; Dr. Andy Kydes, EIA Office of Integrated Analysis and Forecasting; Dr. James E. McMahon, Lawrence Berkeley National Laboratory; Dr. Alan H. Sanstad, Lawrence Berkeley National Laboratory; Dr. Thomas Tietenberg, Colby College; Dr. Michael Toman, Resources for the Future; and Dr. Frances Wood, On Location, Inc., Dunn Loring, VA. The listing of individual affiliation for each of our reviewers is for identification purposes only. Although we gladly acknowledge their involvement, we do not mean to imply their endorsement of the report. The final responsibility for the results of the analysis and the content of the report lies solely with the authors.
This work was supported by the Office of Atmospheric Programs of the U.S. Environmental Protection Agency. Prepared for the U.S. Department of Energy under Contract No. DE-AC03-76SF00098.
NOTES

1. In this chapter, all monetary figures are in constant 1996 dollars, unless otherwise specified. All references to tons of carbon are to metric tons.

2. Hereafter we use the term LBNL-NEMS to refer to our version of NEMS, to denote that we make substantial modifications to the NEMS input data and some code changes to model the scenarios of interest. When we refer to generic characteristics of the model that apply to both the standard AEO98 version and to our version, we still use the term NEMS.

3. This simplified calculation ignores the effect the $23/t permit trading fee would have on emissions.

4. AMIGA is the All Modular Industry Growth Assessment system developed by Argonne National Laboratory. It has been developed with the capability to represent and evaluate many of the specific policy options now under discussion for reducing energy-related carbon emissions.
REFERENCES

ASE, ACEEE, NRDC, Tellus Institute, and UCS (1997). Energy Innovations: A Prosperous Path to a Clean Environment. Washington, DC: Alliance to Save Energy, American Council for an Energy-Efficient Economy, Natural Resources Defense Council, Tellus Institute, and Union of Concerned Scientists. June.

Boedecker, E., Cymbalsky, J., Honeycutt, C., Jones, J., Kydes, A. S., & Duc Le (1996). The Potential Impact of Technological Progress on U.S. Energy Markets. In: Issues in Midterm Analysis and Forecasting. Washington, DC: Energy Information Administration, U.S. Department of Energy. DOE/EIA-0607(96). September.

Boulding, K. E., & Boulding, E. (1995). The Future: Images and Processes. Thousand Oaks, CA: Sage Publications.

Brown, M. A., Levine, M. D., Romm, J. P., Rosenfeld, A. H., & Koomey, J. G. (1998). Engineering-Economic Studies of Energy Technologies to Reduce Greenhouse Gas Emissions: Opportunities and Challenges. Palo Alto, CA: Annual Reviews, Inc.

Brown, R. E. (1993). Estimates of the Achievable Potential for Electricity Efficiency in U.S. Residences. M.S. Thesis, Energy and Resources Group, University of California, Berkeley.

Brown, R. E. (1994). Estimates of the Achievable Potential for Electricity Efficiency Improvements in U.S. Residences. 1994 ACEEE Summer Study on Energy Efficiency in Buildings. Asilomar, CA.

DeCanio, S. J. (1998). The efficiency paradox: bureaucratic and organizational barriers to profitable energy-saving investments. Energy Policy, 26(5), April, 441-454.

Geller, H., Nadel, S., Elliott, R. N., Thomas, M., & DeCicco, J. (1998). Approaching the Kyoto Targets: Five Key Strategies for the United States. Washington, DC: American Council for an Energy-Efficient Economy. August.
Howarth, R. B., & Sanstad, A. H. (1995). Discount Rates and Energy Efficiency. Contemporary Economic Policy, 13(3), 101.

Interlaboratory Working Group (1997). Scenarios of U.S. Carbon Reductions: Potential Impacts of Energy-Efficient and Low-Carbon Technologies by 2010 and Beyond. Oak Ridge, TN and Berkeley, CA: Oak Ridge National Laboratory and Lawrence Berkeley National Laboratory. ORNL-444 and LBNL-40533. September.

Jorgenson, D., & Nordhaus, W. D. (1998). Memo on the limitations of the DRI model, submitted to the Clinton Administration's Interagency Task Force in April 1997, and published in Volume II of Countdown to Kyoto, Parts I-III. Washington, DC: Subcommittee on Energy and Environment of the Committee on Science, U.S. House of Representatives. Published by the U.S. Government Printing Office. October 7, 9, and November 6, 1997.

Koomey, J. G., Richey, C., Laitner, S., Markel, R. J., & Marnay, C. (1998). Technology and greenhouse gas emissions: An integrated analysis using the LBNL-NEMS model. Berkeley, CA: Ernest Orlando Lawrence Berkeley National Laboratory. LBNL-42054. September.

Koomey, J., Sanstad, A. H., & Shown, L. J. (1996). Energy-Efficient Lighting: Market Data, Market Imperfections, and Policy Success. Contemporary Economic Policy, XIV(3), 98-111. July (Also LBL-37702).

Krause, F. (1996). The Costs of Mitigating Carbon Emissions: A Review of Methods and Findings from European Studies. Energy Policy, 24(10/11), October/November, 899-915.

Krause, F., Haites, E., Howarth, R., & Koomey, J. (1993). Cutting Carbon Emissions - Burden or Benefit?: The Economics of Energy-Tax and Non-Price Policies. Energy Policy in the Greenhouse. Volume II, Part 1. El Cerrito, CA: International Project for Sustainable Energy Paths.

Krause, F., Koomey, J., Becht, H., Olivier, D., Onufrio, G., & Radanne, P. (1994). Fossil Generation: The Cost and Potential of Low-Carbon Resource Options in Western Europe. Energy Policy in the Greenhouse. Volume II, Part 3C. El Cerrito, CA: International Project for Sustainable Energy Paths.

Krause, F., Koomey, J., & Olivier, D. (1995a). Renewable Power: The Cost and Potential of Low-Carbon Resource Options in Western Europe. Energy Policy in the Greenhouse. Volume II, Part 3D. El Cerrito, CA: International Project for Sustainable Energy Paths.

Krause, F., Olivier, D., & Koomey, J. (1995b). Negawatt Power: The Cost and Potential of Low-Carbon Resource Options in Western Europe. Energy Policy in the Greenhouse. Volume II, Part 3B. El Cerrito, CA: International Project for Sustainable Energy Paths.

Kydes, A. S. (1997). Sensitivity of Energy Intensity in U.S. Energy Markets to Technological Change and Adoption. In: Issues in Midterm Analysis and Forecasting 1997. Washington, DC: Energy Information Administration, U.S. Department of Energy. DOE/EIA-0607(97). July.

Laitner, S. (1998). Working Memo on Estimating the Carbon Reduction Benefits of Proposed Policy Initiatives. Washington, DC: EPA's Office of Atmospheric Programs. July.

Laitner, S., Bernow, S., & DeCicco, J. (1998). Employment and Other Macroeconomic Benefits of an Innovation-Led Climate Strategy for the United States. Energy Policy, 26(5), April, 425-432.

Meier, A., & Whittier, J. (1983). Consumer Discount Rates Implied by Purchases of Energy-Efficient Refrigerators. Energy, 8(12), 957-962.

Novak, M. H. (1998). Implementing the Kyoto Protocol: Severe Economic Consequences. Washington, DC: Testimony before the National Economic Growth, Natural Resources, and Regulatory Affairs Subcommittee of the House Government Reform and Oversight Committee, U.S. House of Representatives. April 23.
Ross, M., Thimmapuram, P., Fisher, R., & Maciorowski, W. (1993). Long-Term Industrial Energy Forecasting (LIEF) Model (18-Sector Version). Argonne, IL: Argonne National Laboratory. ANL/EAIS/TM-95.

Ruderman, H., Levine, M. D., & McMahon, J. E. (1987). The Behavior of the Market for Energy Efficiency in Residential Appliances Including Heating and Cooling Equipment. The Energy Journal, 8(1), 101-124.

Sanstad, A. H., Koomey, J. G., & Levine, M. D. (1993). On the Economic Analysis of Problems in Energy Efficiency: Market Barriers, Market Failures, and Policy Implications. Lawrence Berkeley Laboratory. LBL-32652. January.

Schwartz, P. (1996). The Art of the Long View: Planning for the Future in an Uncertain World. New York, NY: Doubleday.

SERI (1990). The Potential of Renewable Energy: An Interlaboratory White Paper. Idaho National Engineering Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Solar Energy Research Institute. SERI/TP-260-3674. March 1990.

Train, K. (1985). Discount Rates in Consumers' Energy-Related Decisions: A Review of the Literature. Energy, 10(12), 1243-1253.

U.S. DOE (1997a). Annual Energy Outlook 1998, with Projections to 2020. Energy Information Administration, U.S. Department of Energy. DOE/EIA-0383(98). December.

U.S. DOE (1997b). Combined Heat and Power: The Potential to Reduce Emissions of Greenhouse Gases. Washington, DC: Office of Energy Efficiency and Renewable Energy, U.S. Department of Energy. Working paper.

U.S. DOE (1998a). Comprehensive Electricity Competition Act: Supporting Analysis. Washington, DC: Office of Economic, Electricity and Natural Gas Analysis, Office of Policy and International Affairs, U.S. Department of Energy. DOE/PO-0057. July.

U.S. DOE (1998b). The National Energy Modeling System: An Overview 1998. Washington, DC: Energy Information Administration, U.S. Department of Energy. DOE/EIA-0581(98). February.

Wack, P. (1985a). The Gentle Art of Reperceiving - Scenarios: Uncharted Waters Ahead (part 1 of a two-part article). In: Harvard Business Review. September-October, 73-89.

Wack, P. (1985b). The Gentle Art of Reperceiving - Scenarios: Shooting the Rapids (part 2 of a two-part article). In: Harvard Business Review. November-December, 2-14.
APPENDIX: Key outputs from selected modeling runs The following tables summarize key results from our runs. Each column represents a scenario that combines demand-side and supply-side options in different ways. The first two tables show results in 2010, and the second two show results in 2020. The last table shows fuel use by fuel type and sector for selected scenarios in 2010 and 2020. To download the full set of detailed output tables from this analysis, go to <http://enduse.lbl.gov/Projects/GHGcosts.html>.
[The appendix tables could not be reproduced here; the full set of detailed output tables is available at http://enduse.lbl.gov/Projects/GHGcosts.html.]
PRICES VERSUS POLICY: WHICH PATH TO CLEAN TECHNOLOGY?

Eban Goodstein

ABSTRACT

The conventional economic perspective on long run resource limitations is that short run scarcity will lead to price increases, which will induce innovation, which will, in turn, overcome scarcity. In the global warming case, if we are convinced that cost-effective low carbon technologies will in fact emerge as carbon prices rise, why wait? Given that the investment dollars will be spent regardless, would it not be more efficient to invest in the new technologies today? In that way, we might avoid several decades of carbon emissions and consequent environmental damage. Moreover, such an approach is attractive for its likely impacts on both the size and composition of national R&D spending, as well as for its insurance function. On the other hand, informational constraints may argue against a technology policy strategy. This chapter explores these issues. In an application to the wind industry, I conclude that if wind power continues down its experience curve at its historical pace, early investment in wind would be socially efficient.
The Long-Term Economics of Climate Change, pages 221-237. © 2001 by Elsevier Science B.V. ISBN: 0-7623-0305-0
222
EBAN GOODSTEIN
INTRODUCTION With a few prominent exceptions, economists have tended to be relatively unconcerned about any absolute scarcity of marketed natural resources. The standard model argues that in the face of any shortage, rising prices will lead to the development of new technologies which are either able to tap lower grade resources, or provide substitute materials. While this perspective reflects a much debated technological optimism, Krautkraemer (1998) shows that in most natural resource markets, innovation has kept pace with scarcity, leading to no secular increase in resource prices. Given this, the standard prescription to address global warming is to turn the carbon-absorption capacity of the atmosphere into a marketed resource, either via emission taxes or a marketable permit system. This kind of incentive-based regulation would then provide the proper market signals; in the long run, as carbon emission prices rose, technological innovation would lead to less carbon intensive production and consumption patterns. This chapter accepts as a starting point this "no long run scarcity for marketed resources" argument, and asks the following question: If we are convinced that low carbon technologies will in fact emerge as carbon prices rise, why wait? Given that the investment dollars will be spent regardless, would it not be more efficient to invest in the new technologies today? In that way, we might avoid several decades of carbon emissions and consequent environmental damage. Moreover, such an approach is attractive for its likely impacts on both the size and composition of national R&D spending, as well as for its insurance function. On the other hand, informational constraints may argue against a technology policy strategy. To explore these questions, Section 2 develops a simple model contrasting incentive-based regulatory approaches with a clean technology approach. Section 3 addresses issues of R&D spillovers. 
Section 4 considers the informational and insurance aspects surrounding technology policy. Section 5 looks at the wind power industry to illustrate the main issues. I conclude that if wind power continues down its experience curve at its historical pace, early investment in wind would be socially efficient.
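The experience-curve logic invoked for wind power can be made concrete. In the sketch below (Python), the 0.85 progress ratio (a 15% cost reduction per doubling of cumulative capacity) is an illustrative figure of the kind commonly cited for wind power in this period, not a parameter taken from this chapter:

```python
# Sketch of the experience-curve relation: unit cost falls by a fixed
# fraction with each doubling of cumulative capacity. The 0.85 progress
# ratio below is illustrative, not a value from the chapter.
import math

def experience_cost(cum_capacity, c0, cap0, progress_ratio):
    """Unit cost at cumulative capacity cum_capacity, given cost c0 at cap0.

    progress_ratio is the cost multiplier per doubling (0.85 means each
    doubling of cumulative capacity cuts unit cost by 15%).
    """
    b = math.log(progress_ratio) / math.log(2)  # elasticity of cost
    return c0 * (cum_capacity / cap0) ** b

# Four doublings at a 0.85 progress ratio: cost falls to 0.85**4, about 52%
c = experience_cost(16.0, 1.0, 1.0, 0.85)
print(round(c, 3))  # 0.522
```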
REGULATION VERSUS INVESTMENT IN TECHNOLOGY Consider a pollutant (say, carbon) with a standard annual marginal benefit and annual marginal cost of clean-up schedule, as illustrated in Fig. 1. The costs are recurrent, and marginal costs rise, because firms adjust to a tax or permit system in a marginal fashion - they tend not to reinvent their production processes, instead seeking marginal, typically end-of-the-pipe solutions. In the carbon case specifically, we might find compliance investment to be concentrated around more efficient fossil fuel combustion for electric power plants and vehicles.

[Fig. 1. Marginal costs and benefits of carbon reduction. Dollars per unit of reduction are plotted against the percentage of carbon reduced (0-100%); the declining marginal benefit curve and rising marginal cost curve intersect at the efficient tax t* and clean-up level c*.]

Under these circumstances, efficient regulation would require either a carbon tax of t* dollars per ton, or a cap-and-trade system with c* permits issued. The total costs of this kind of incentive-based regulation would be, on an annual basis, area Y; total benefits would be X + Y, while the net benefits would be X. The area Z represents "residual" benefits of further clean-up beyond the efficient level, which are not captured by the regulatory policy. The present value of this approach would thus be:

$\sum_{t=1}^{\infty} X\, r^t$   (1)
where $r = 1/(1 + \rho)$, with $\rho$ equaling the discount rate. By contrast, consider a clean-technology option. For a cost of D, a carbon-free technology can be developed and installed. Expenditures of D may come from the private sector voluntarily, or they could be induced via government subsidies (procurement policies, R&D competitions, or direct R&D spending) or technology-forcing regulation (CAFE or ZEV standards for vehicles, or utility portfolio standards for renewable energy). In that case, the net benefits would be:
$\sum_{t=1}^{\infty} (X + Y + Z)\, r^t - D$   (2)
From a social perspective, the clean technology approach will be preferred if (2) is greater than (1). This in turn will be true if:
$\sum_{t=1}^{\infty} Z\, r^t > D - \sum_{t=1}^{\infty} Y\, r^t$   (3)
In other words, society should require investment in the carbon-free, clean technology if the present value of the residual benefits not captured by regulation is greater than the cost differential between the two strategies. Inequality (3) will always hold as long as $D < \sum_{t=1}^{\infty} Y\, r^t$, i.e. it is simply less costly to invest in the clean technology. Here society should clearly choose the clean technology option, since it is cheaper. (Recognize, however, that private firms will not necessarily make this choice if expenditures of D generate positive learning externalities, an issue addressed below.) But inequality (3) also highlights that a more expensive clean technology option may be justified by the presence of the residual benefits. Let us now complicate the model by assuming short run resource scarcity, modeled as a marginal damage function that shifts upwards over time. This tightens the efficient regulatory target, leading to higher taxes or fewer permits, and generating the total benefit area X(t) + Y(t) + Z(t), with X'(t), Y'(t), Z'(t) > 0. Let us further add a backstop condition: at some time t = τ, carbon control costs become so high that firms, of their own accord, choose to spend D dollars to develop and install the carbon-free technology. In this case, the net benefits of the regulatory strategy and the clean technology strategy are, respectively:

$\sum_{t=1}^{\tau} X(t)\, r^t + \sum_{t=\tau+1}^{\infty} (X(t) + Y(t) + Z(t))\, r^t - D r^{\tau}$   (4)

$\sum_{t=1}^{\infty} (X(t) + Y(t) + Z(t))\, r^t - D$   (5)
In this new model, developing the clean technology in the first place will be more efficient as long as (5) > (4), or:

$\sum_{t=1}^{\tau} Z(t)\, r^t > (D - D r^{\tau}) - \sum_{t=1}^{\tau} Y(t)\, r^t$   (6)
This condition is similar to that of inequality (3): the clean technology option is again preferred as long as the stream of residual environmental benefits is greater than the cost differential - now characterized by the difference between D and a discounted investment of D at time τ, plus the stream of annual abatement expenditures up to that point. There is, however, a critical point illustrated here. The conventional assumption is that the backstop technology will be developed sooner or later: thus the social costs of investing today must be reduced by the present value of that future investment.
Moreover, there are two additional features here, both of which favor the clean technology investment. First, because of the carbon scarcity, the residual benefits grow over time (Z'(t) > 0); in addition, as the efficient clean-up standard tightens, the total costs of control (Y'(t)) also rise. This model captures several arguments for a clean technology strategy. First, it avoids ongoing residual damage from carbon emissions while society waits for the new technology to be conjured into existence by price signals of scarcity. Moreover, the eventual cost of investment in clean technology must be factored into a decision about whether to invest today. Finally, technology policy will be preferred both because the residual damages are rising, and because marginal (and total) control costs are rising. However, clean technology promotion is not always the more efficient strategy; that judgment depends on the relative cost of the two options, including the opportunity cost of the up-front investment required by the carbon-free approach.
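Condition (6) is easy to evaluate numerically. The sketch below (Python) uses entirely hypothetical values of D, ρ, τ, and the Y(t) and Z(t) schedules (none are from the chapter) to show how the present value of residual benefits is compared against the incremental cost of investing now rather than at the backstop date τ:

```python
# Numerical sketch of condition (6). All parameter values are hypothetical,
# chosen only to illustrate the comparison; none come from the chapter.

def pv(stream, rho):
    """Present value of an annual stream indexed t = 1, 2, ..."""
    r = 1.0 / (1.0 + rho)
    return sum(x * r ** t for t, x in enumerate(stream, start=1))

rho = 0.05   # discount rate, so r = 1/(1 + rho)
tau = 20     # backstop date: firms adopt the clean technology on their own
D = 400.0    # up-front cost of developing/installing the clean technology

# Rising abatement costs Y(t) and residual benefits Z(t), i.e. Y'(t), Z'(t) > 0
Y = [10.0 + 0.5 * t for t in range(1, tau + 1)]
Z = [8.0 + 0.4 * t for t in range(1, tau + 1)]

r = 1.0 / (1.0 + rho)
lhs = pv(Z, rho)                        # PV of residual benefits through tau
rhs = (D - D * r ** tau) - pv(Y, rho)   # extra cost of investing now vs. at tau

print(lhs > rhs)  # True for these values: early investment is socially efficient
```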
R&D AND SPILLOVER EFFECTS

There is a second line of argument speaking for early investment in clean technology. Such policies can boost overall R&D expenditures as a share of GDP. There is a substantial body of evidence, both theoretical and empirical, supporting the claim that because research and development is a public good, market actors generically underinvest in R&D [Arrow (1962); Scherer (1999)]. DeCanio (1997), in a survey of the literature, shows that the average social rate of return to R&D across the studies was 63.8%, while the private rate of return averaged 31.8%. Jones and Williams (1998: 1119), working in a growth theory framework, conclude that "optimal R&D is at least 2 to 4 times actual investment". If clean technology policy increases overall R&D spending, broad spillover effects would increase efficiency economy-wide.

Of course, climate policy might not lead to more efficient R&D spending; it might instead merely divert it from more productive uses.1 Goulder and Schneider (1996) show this will be true if total R&D spending in the economy is fixed, and the allocation between carbon reducing R&D and other R&D is already efficient. But as DeCanio (1997: 25) notes:

Neither of these conditions would appear to hold now. Indeed, we know that it is possible to increase aggregate R&D substantially; this is a policy decision having mainly to do with the funding of graduate education for scientists and engineers, and with the availability of jobs and equipment for those researchers upon completion of their degrees . . . We have the experience of the post-Sputnik push that demonstrates the feasibility (and benefits) of an increase in society-wide R&D. Nor is the national allocation of R&D effort optimal. Public research dollars are not allocated on the basis of their expected rate of return, even excluding the very large expenditures on the military.
226
EBAN GOODSTEIN
Indeed, a technology-based climate strategy would directly boost spending on fundamentally new technologies with broad spillover potential: solar photovoltaic cells, fuel cells, energy storage systems, hydrogen fuels. By contrast, a regulatory strategy only indirectly affects R&D spending, and concentrates it around marginal changes in existing technology: for example, improving the efficiency of coal and natural gas fired power plants, and of internal combustion engines.

INFORMATION ISSUES
The primary objection to technology policy is that governments have a hard time "picking winners". Indeed, in the presence of technology subsidies, as the nuclear power and ethanol industries illustrate, policy can even lock a society onto a losing path (Goodstein (1995)). By contrast, the argument goes, if technology is left to respond in a decentralized fashion to rising prices, then more efficient winners will emerge "naturally". In the language of Section 2 above, early government investment in technology (formerly D) should be written as F = D + E, where E is a premium reflecting the possibility that government technology policy will be more costly than private sector investment. Moreover, general technical progress may lower the cost of any new future technology: thus we could write D = D(t), with D'(t) < 0. Both of these features would argue against early action via technology policy.

However, limited information also speaks in favor of technology policy, via the insurance function it provides. The smoothly rising marginal damage function assumed in Section 2 is, at best, an approximation to environmental costs that may be inflicted by nonlinear ecological processes as atmospheric carbon sinks are depleted. For example, Britain's Hadley Centre (1998) has recently predicted major forest die-backs beginning mid-century in northern Brazil, the eastern and southern U.S., southern Europe, and northern Australia. As the trees die and are replaced by shrubs and grassland, they will release the CO2 stored in their leaves and wood, leading to a significant increase in carbon emissions and an acceleration of the greenhouse-induced warming. (The specific temperature consequences of this kind of positive feedback loop have not yet been modeled.) Since people are generally risk averse, and since the variance of the expected future damages appears to be quite large, buying a zero emission technology today makes more sense.
Moreover, the model in Section 2 assumes that the realization of a clean technology via expenditure of D is itself certain and instantaneous. However, if
technical progress is not general, but is instead path dependent, early investment acquires an option value. Given that catastrophic environmental damages are a distinct possibility, society will value flexibility in technological choice as new information about the expected realization of damages develops.
EMPIRICAL ANALYSIS
This section develops some rough estimates for the decision rule represented by equation (6), for the wind energy industry. The key parameters to be estimated include:

D: the total value of the investment needed to render the renewable technologies competitive with fossil fuel generated power;

Σ_{t=1}^{τ} Z(t)r^t: the net value of the residual environmental benefits gained from implementing the clean technology τ years earlier than under the regulatory strategy; and

Σ_{t=1}^{τ} Y(t)r^t: the avoided compliance costs from implementing the clean technology τ years earlier than under the regulatory strategy.2

Investment
To estimate D, I adopt a very simple experience curve approach. Experience curves relate production costs to total output; costs fall with output due to both economies of scale and learning effects [Spence (1982); Princeton Economic Research (1995)]. Empirically, experience curves have been found to fit the following form:

C = aV^b

where C equals marginal production cost, V equals cumulative output for the firm, a is a constant (> 0) equal to the marginal cost of the first unit produced, and b is the experience elasticity, with 0 > b > -1. The "progress ratio" for an experience curve relates output doublings to cost reductions, and is defined as h, where h = 2^b. For each doubling of output, costs fall by (1 - h). For example, a progress ratio of 0.85 indicates that unit costs fall by 0.15 each time output doubles. Progress ratios in manufacturing tend to range from 70% to 95% [Cody and Tiedje (1996); Henderson and Kalejs (1995); Princeton Economic Research (1995)]. Experience curves are justified theoretically at the firm level, but due to data limitations, they are typically estimated for entire industries [Gruber (1992)]. Figure 2 graphs the experience curve, in log-log form, for the wind industry.
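To make the algebra concrete, here is a small sketch of these relationships. The constant a, the cumulative outputs, and the 53% target cost ratio are hypothetical; only the 0.81 progress ratio (estimated for wind below) comes from the chapter:

```python
import math

def experience_elasticity(progress_ratio):
    """b in C = a*V**b, recovered from the progress ratio h = 2**b."""
    return math.log2(progress_ratio)

def unit_cost(a, v, progress_ratio):
    """Experience curve C = a * V**b evaluated at cumulative output V."""
    return a * v ** experience_elasticity(progress_ratio)

def doublings_needed(cost_ratio, progress_ratio):
    """Output doublings required to cut unit cost to cost_ratio of today's."""
    return math.log(cost_ratio) / math.log(progress_ratio)

# With an 81% progress ratio, each doubling of cumulative output
# cuts unit cost by 19%.
print(unit_cost(1.0, 2.0, 0.81) / unit_cost(1.0, 1.0, 0.81))  # ≈ 0.81
# Cutting costs to, say, 53% of today's level takes about three doublings.
print(doublings_needed(0.53, 0.81))  # ≈ 3.0
```

The "two or three doublings" claim for wind in the next paragraph follows the same arithmetic: at h = 0.81, three doublings cut unit costs to roughly 0.81³ ≈ 53% of their starting level.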
[Figure 2 appears here: a log-log plot of wind turbine cost per peak watt against cumulative installed capacity (MW), with a fitted trend line showing an 81% progress ratio and a horizontal reference line marking the cost level at which wind is competitive in the bulk power market.]
Fig. 2. Experience Curve for the Global Wind-Power industry. Sources: Installed capacity from Worldwatch (1999); cost figures are Enron's factory prices for wind turbines from Robertson (1999).
The costs are presented in terms of capital costs per peak watt. Assuming O&M costs of $0.01 per kWh for renewables (EIA, 1998b: 59), a cost of approximately $0.51 per peak watt translates into electricity production costs of $0.03 per kWh. This is just below the EIA's (1998a: 63) projected cost for natural gas powered electricity in the next decade.3 The estimated progress ratio is 0.81 for wind, a bit lower than the 0.85 figure assumed by Princeton Economic Research (1995). (Note, however, that omitting the first observation in Fig. 2 implies an even lower progress ratio.) The data suggest that wind energy will become competitive with new natural gas plants when worldwide production has doubled two or three times above 1998 levels, an addition of about 68,000 MW.

The validity of this forecast hinges on two assumptions. First, the progress ratio must remain constant as production volume increases. This assumption seems reasonable on several grounds. Across industries, the progress ratio has been found to be a reliable management tool for cost forecasting in manufacturing. In addition, as the figure suggests, the experience curve fitted to the historical data on the industry provides a very good fit (adjusted R2 = 0.90).
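The conversion from capital cost per peak watt to cents per kWh is described in note 3. As a rough sketch - using a standard capital recovery factor with the note's assumptions (30% capacity factor, 6% discount rate, 25-year life, $0.01/kWh O&M) rather than the exact formula from Cavallo (1993) - the conversion looks like this:

```python
def crf(i, n):
    """Capital recovery factor: annualizes a lump-sum cost over n years at rate i."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def lcoe(capital_per_wp, capacity_factor=0.30, discount=0.06,
         life_yrs=25, om_per_kwh=0.010):
    """Approximate levelized electricity cost ($/kWh) from capital cost in $/Wp."""
    kwh_per_wp_yr = 8760 * capacity_factor / 1000   # annual kWh per peak watt
    capital_charge = capital_per_wp * crf(discount, life_yrs) / kwh_per_wp_yr
    return capital_charge + om_per_kwh

print(round(lcoe(0.51), 4))  # roughly 2.5-3 cents/kWh under these assumptions
```

Under these simplified assumptions a $0.51 per peak watt turbine comes out at about 2.5 cents per kWh, the same order as the $0.03 figure in the text; the residual gap presumably reflects details of the Cavallo formula not reproduced here.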
Finally, Princeton Economic Research (1995) argues that since many wind components are custom built, and the industry still relies on job shop as opposed to mass production assembly, there remains room for significant continued cost reduction.

While renewable costs must continue to decline for the analysis to hold, the analysis also assumes that the cost of natural gas powered electricity will not fall as well. In the past, renewables have indeed been chasing a moving target, as fossil fuel prices have declined due to deregulation and the breakdown of the OPEC cartel [McVeigh et al. (1999)]. However, further significant reductions in the cost of natural gas-fired electricity are not foreseen. The EIA (1998a: 63) projects a slight drop in capital costs and improved efficiency for combined cycle natural gas, offset, however, by gradually rising fuel prices. The net impact will be a rise in generating costs from 3.06 cents per kWh in 2005 to 3.25 cents per kWh in 2020. Natural gas will clearly not see experience curve effects like wind for two reasons: production volume is already quite high, meaning anything close to a doubling is infeasible, and capital costs are only 25% of total generation costs.

Given these conditions, as a first approximation to D, we can use the price premium paid for renewables up to their break-even output levels. For competition with new bulk power gas, this would be area M in Fig. 1 above. This works out to around $9.1 billion. (By comparison, simply extending the $0.015 per kWh U.S. subsidy currently provided to wind power to cover all the new capacity would be more expensive. Carrying that subsidy forward for 10 years on 68,000 MW of wind power, assuming 33% efficiency, a 10 year phase-in, and a 3% discount rate, would cost about $24 billion.) This figure of $9.1 billion is a first approximation only, for several reasons. Prominent among these is that the wind power market is a global one.
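The $24 billion subsidy comparison above can be sketched as follows. The linear phase-in profile and the assumption that each vintage of capacity collects the subsidy for ten years are my guesses at details the text does not state, so the result only approximates the cited figure:

```python
def subsidy_pv(total_mw=68_000, subsidy_per_kwh=0.015, capacity_factor=0.33,
               phase_in_yrs=10, subsidy_yrs=10, r=0.03):
    """PV of a per-kWh production subsidy on capacity phased in linearly,
    with each vintage collecting the subsidy for subsidy_yrs."""
    mw_per_yr = total_mw / phase_in_yrs
    kwh_per_mw_yr = 1000 * 8760 * capacity_factor      # annual kWh per MW
    annual_subsidy_per_mw = kwh_per_mw_yr * subsidy_per_kwh
    pv = 0.0
    for vintage in range(phase_in_yrs):                # year capacity comes on line
        for t in range(vintage + 1, vintage + subsidy_yrs + 1):
            pv += mw_per_yr * annual_subsidy_per_mw / (1 + r) ** t
    return pv

print(f"${subsidy_pv() / 1e9:.1f} billion")  # in the neighborhood of the $24 billion cited
```

With these assumptions the present value comes out near $22 billion - the same order of magnitude as, and more than double, the $9.1 billion premium estimate.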
Wind installation in Europe, where fossil fuel electricity rates are higher than in the U.S., will often come at no cost penalty. Since a significant proportion of new capacity will come on line outside of the U.S., the cost premium for wind power identified in Fig. 1 is too high. On the other hand, there may also be administrative costs to subsidy policies not captured in the $9.1 billion estimate.

Avoided Compliance Costs

For purposes of exposition, this analysis monetizes two major control costs: for reducing CO2 and SO2 emissions. The underlying counterfactual assumption of the study is that carbon emissions are controlled so that carbon prices rise. I will assume that, consistent with the Kyoto process, a cap and trade system is put into place by 2010, and that carbon permits trade at $50 per ton after that
date. SO2 permit prices are assumed to be $200 per ton throughout the analysis. For coal, these prices translate into an average cost increase in 2010 of approximately 13 mills, and for natural gas, about 5 mills [EIA (1998a: Table 8)]. In the simulations below, I assume conservatively that wind power displaces coal and natural gas equally, and so average the figures, yielding a cost estimate of 9.4 mills per kWh. EIA (1998a: 209) assumes, by contrast, that a high renewables scenario would mostly displace coal generation. The average avoided cost of sulfur dioxide emissions can be determined from EIA (1998a: Table 8) for the fossil fuel powered electric sector. The figure employed here, based on projections for 2010, is 0.6 mills per kWh.4
Residual Benefits

In this analysis, the benefits from installing renewable power will include reduced carbon emissions through 2010, and reductions in nitrogen oxide and particulate emissions throughout the entire period. (Because both sulfur dioxide emissions, and carbon emissions after 2010, are capped under trading programs, renewables investment does not reduce net emissions of these pollutants.) The damages from nitrogen oxide and particulate emissions are location specific. Krupnick and Burtraw (1996) nevertheless provide per kWh estimates of pollution damages, excluding SO2 and CO2, for natural gas and coal of 0.3 mills and 2.2 mills, respectively. Again, I average the figures to yield a damage estimate of 1.25 mills per kWh.

Before CO2 emissions are capped, early investment in renewables will also provide residual benefits in the form of carbon reduction. Burtraw et al. provide a summary of damage estimates for coal power from three studies, which range from a high of 22.9 mills per kWh at a discount rate of 1%, to a low of 0.5 mills at a 10% discount rate. I use the 3% discount rate estimate from Cline (1992) of 2.8 mills for coal. Adjusting for the fact that natural gas emits about 40% of the CO2 that coal does, this generates an average damage estimate from fossil fuels of 1.96 mills per kWh. This figure should be treated as very conservative, since it includes only damages per kWh that occur within the United States, and of course, emissions from the U.S. will cause global damage.
Simulation

The policy scenario underlying this simulation is the imposition of a cap and trade system, beginning in 2010. I examine benefits and costs through 2030. For ease of exposition, I assume the following:
• Carbon prices are zero up until 2010, rising to $50 per ton and then holding constant beginning that year.

• For a present value cost of D, society can purchase, over a ten-year period, the peak capacity volume for a given renewable technology needed to ensure that the technology becomes cost effective. In other words, by investing a premium of $9.1 billion from, say, 2000 to 2010, the world could install the 68,000 MW of wind power needed to drive costs down to a level competitive with bulk power. This assumption implies that new capacity comes on line at a 19.1% growth rate over the ten years. For comparison, wind power grew at an annual average rate of 26% from 1995 to 1998. The cost, D, is distributed across the years in proportion to new capacity installation.

• Once wind power is competitive in the bulk power market, capacity is assumed to grow at a rate of 10% per year for the next 10 years, and 5% per year for the following 10 years. This would imply an addition, worldwide, of about 125,000 MW of wind power over the first decade, and 127,000 MW the second. This is a feasible scenario. For comparison, EIA (1998a) forecasts a net addition of 379,000 MW of power generation capacity in the U.S. alone over the decade 2010 to 2020, and a growth rate of 11.8% for net additions of combined cycle natural gas from 2000 to 2020.5

Given this set up, there are three relevant choices:

(A) Spend D between 2000 and 2010.

(B) Spend D between 2010 and 2020. (Note that since damages and avoided control costs per kWh stay constant after 2010, there is no advantage to delaying expenditure of D beyond that date: either it is efficient to begin investing in 2010 or it is never efficient.)

(C) Never spend D.

The simulations indicate that it is always optimal to choose the first option, and begin investing in 2000. Table 1 illustrates the value of the model variables for comparing option A with option B.
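The post-2010 capacity additions assumed above can be checked with a simple compound-growth sketch. The 9,600 MW figure for the 1998 installed base is my assumption, drawn from Worldwatch-era data, not a number stated in the text:

```python
def added(base_mw, rate, years):
    """Capacity added over `years` of compound growth at annual `rate`."""
    return base_mw * ((1 + rate) ** years - 1)

base_2010 = 9_600 + 68_000           # assumed ~1998 base plus the 68,000 MW build-out
add_2010s = added(base_2010, 0.10, 10)
base_2020 = base_2010 + add_2010s
add_2020s = added(base_2020, 0.05, 10)
print(round(add_2010s), round(add_2020s))  # ~125,000 and ~127,000 MW
```

The computed additions, roughly 124,000 and 127,000 MW, line up with the 125,000 and 127,000 MW figures quoted in the bullets.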
Three discount rates are presented: 1% and 3% for evaluating socially optimal behavior, and 20%, reflecting corporate hurdle rates for investment in new technology. [Scherer (1988)] The first row illustrates the net increase in environmental benefits from early investment in wind, as compared to investment in 2010. The value of improved environmental quality ranges from $9.2 billion at a 1% discount rate to $1.2 billion at a 20% discount rate. Rows 2-4 present the data on wind power spending. Recall that the estimated investment in wind power needed to render it competitive with bulk power was $9.1 billion. The model spreads that out over 10 years: rows 2 and 3 illustrate the present discounted values of wind spending beginning in 2000 and 2010 respectively. Row 4 then highlights a
Table 1. Simulation Results.

                                                            Discount Rate
Costs and Benefits                                          1%          3%          20%

Σ_{t=1}^{τ} Z(t)r^t: Net environmental benefits             $9.2E+09    $6.7E+09    $1.2E+09
D: Yr 2000 wind investment (gross)                          $8.5E+09    $7.3E+09    $2.6E+09
Dr^τ: Yr 2010 wind investment (gross)                       $7.7E+09    $5.4E+09    $4.3E+08
D - Dr^τ: Yr 2000 wind investment (net)                     $8.0E+08    $1.9E+09    $2.2E+09
Σ_{t=1}^{τ} Y(t)r^t: Avoided compliance costs               $3.7E+10    $2.5E+10    $1.9E+09
Σ_{t=1}^{τ} Z(t)r^t + Σ_{t=1}^{τ} Y(t)r^t - (D - Dr^τ):
    Net benefits of early investment                        $4.5E+10    $3.0E+10    $9.0E+08
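Rows 2-4 of Table 1 embody a simple mechanism: today's gross outlay D, net of the present value of the same outlay made ten years later. A sketch (treating D as a single present-value sum rather than the model's ten-year spending stream, so it reproduces the pattern of the table rather than its exact entries):

```python
def net_cost_of_early_investment(d_pv, r, delay_yrs=10):
    """Net cost of investing now when the same outlay would otherwise be
    made delay_yrs later: D - D*(1+r)**(-delay_yrs)."""
    return d_pv * (1 - (1 + r) ** -delay_yrs)

for r in (0.01, 0.03, 0.20):
    share = net_cost_of_early_investment(1.0, r)   # per dollar of gross outlay
    print(f"r = {r:.0%}: net cost is {share:.0%} of the gross outlay")
```

At 1% the net cost falls to about 9% of the gross outlay - the order-of-magnitude reduction discussed below - while at 20% it remains about 84%, i.e. the outlay is cut by only roughly 1/6.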
central point of the model: if investment in wind happens at some point regardless, the net cost of wind investment today needs to reflect that reality. At a discount rate of 1%, the cost of today's investment is reduced by an order of magnitude; at a 20% discount rate it is cut by only 1/6. Row 5 shows the avoided compliance costs associated with early investment in wind. At the 1% and 3% discount rates, these values are quite large relative to the other costs and benefits. Since avoided compliance costs grow exponentially over the modeling period with the increase in renewable capacity, however, a 20% discount rate reduces these avoided costs to a level comparable in size to the other variables.

Finally, the model simulation is consistent with the assumed scenario. Evaluated from a year 2000 perspective, at a 20% discount rate, D > Σ_{t=1}^{τ} Y(t)r^t, so that firms would have little incentive to invest in wind technology of their own accord. (This holds a fortiori, considering the difficulty firms face in capturing the full value of their investment due to the experience curve effect.) However, by the time the year 2010 rolls around, Σ_{t=1}^{τ} Y(t)r^t would equal $1.1E+10, even at a 20% discount rate. This is 4 times greater than the discounted 10 year investment cost for wind of $2.6E+09. This suggests that, with a $50 per ton carbon price, market forces would clearly be signaling the go-ahead for wind power by the year 2010.

In terms of net benefits, at the 1% and 3% discount rates, investment in wind is justified by the environmental benefits alone, that is, Σ_{t=1}^{τ} Z(t)r^t > (D - Dr^τ). Add in the very large savings from avoided compliance costs, and the net benefits from early investment rise to $45 and $30 billion respectively. Even at
a 20% discount rate, investing in wind in 2000 generates positive net benefits for society as a whole of $900 million.

Discussion

There are two factors driving these results. First, the cost of early investment is reduced significantly by the fact that the investment occurs regardless 10 years down the road. Second, once wind power becomes competitive in the bulk power market, both environmental benefits and avoided compliance costs grow exponentially. This is evident from Fig. 3. In the early investment case, by 2030, 870 billion kWh of wind power are generated, compared to only 530 billion if investment is delayed until 2010. In the former case, all wind power investments after 2010 come at a zero cost premium, so they generate a stream of pure benefits (reduced environmental damages and avoided compliance costs) for the rest of the simulation. The environmental benefits before 2010 arise from both reductions in carbon dioxide and criteria air pollutants (excluding SO2) and are not large, 3.2 mills per kWh. (As noted above, however, the CO2 damage figure should be considered a lower bound, since it includes only U.S., not global, benefits.) After 2010 the benefits arise predominantly from reduced criteria air pollutants and are even smaller, just over a tenth of a mill per kWh. By contrast, the avoided compliance costs after 2010 are large: about a penny a kWh.

The simulation points to large net benefits from early investment in wind power, if indeed wind power follows the cost reductions dictated by an 81%
[Figure 3 appears here: annual generation (billion kWh), 1990-2040, comparing investment beginning in 2000 with investment beginning in 2010.]
Fig. 3. Annual Generation of Wind Power Under Two Scenarios.
progress ratio. This conclusion is robust to one of the main objections to technology policy identified above: that the government will waste some money in the development of wind power. Doubling, or even tripling, the initial investment cost D would still yield large net social benefits at the 1% and 3% discount rates.

If the pace of learning were to drop off, the market growth rate would need to accelerate to achieve the necessary cost reductions. An 85% progress ratio beginning in the year 2000 implies the need for a worldwide market penetration volume of 143,300 MW for wind to be competitive in the bulk power market by 2010. This is almost double the penetration level needed with an 81% ratio, and would require a 26% annual growth rate over the 10 year investment period. (As noted above, this was the actual average growth figure from 1995-1998.) If such a growth rate could be sustained, wind power would still be an excellent buy. Under this scenario, the total lump sum, gross cost of wind investment rises only from $9.1 to $15 billion, while the benefits and avoided costs also rise above those in Table 1. Progress ratios much higher than 85% imply the need for a lead time longer than 10 years for the full commercialization of wind power, a scenario not modeled here.

Current subsidy policy in the United States, combined with aggressive development of wind power resources in Europe, has been driving rapid growth rates in the wind market [Montague (1998)]. An experience curve analysis suggests that if these historical growth rates are maintained, then wind will achieve cost parity with natural gas in about 2010. Further, subsidy dollars directed towards this goal appear to be wise investments. With a $50 per ton carbon price in 2010, private market actors will have the incentive to invest heavily in wind at that time. Given that these investment dollars will be spent eventually, the model presented here indicates that society is significantly better off spending them now.
Why is the market failing to deliver an optimal path of technological development? There are two reasons. First, learning and R&D externalities prevent firms from capturing the full value of any wind investment. Second, high private discount rates make private firms wary of investments generating only long-term payoffs. Because of their perceived risk, market actors develop new technologies only if lured to do so by very high expected profit rates. Wind, however, will not generate these types of returns until carbon prices rise. The argument in this work thus hinges on one critical, and I have argued defensible, assumption: that wind power is not, in fact, a particularly risky investment for society as a whole. Put another way, I assume that wind will
continue down the experience curve at its historical pace. If this is true, then it is socially inefficient to wait for rising carbon prices to generate the high rates of return needed to generate large-scale, private wind investment.
CONCLUSION

This chapter has explored a tension between the conventional perspective on resource scarcity on the one hand, and the costs of pollution reduction on the other. In the first instance, economists have an empirically grounded faith in the ability of technology to overcome resource limitations, implying no long run price increases. In the second instance, the conventional view is that the marginal costs of pollution control are always increasing. The contradiction lies in the fact that pollution itself reflects a resource limitation: the exhaustion of the natural absorptive capacity of air and water. If we put a price on that resource, via taxes or marketable permit systems, then the standard story suggests that in the long run, innovation should yield flat or falling - not rising - marginal control costs. Given this, the question becomes: why wait for the long run?

Over the last two decades, wind power has achieved impressive cost reductions, and there is no reason to believe that those cost reductions will tail off significantly. This study has shown that, for progress ratios below 85%, investment on a feasible scale in wind power - a decade earlier than the market would dictate - will yield large net social benefits. There are two reasons for this. First, investment in wind is coming sooner or later regardless, and probably sooner. A $50 per ton carbon price means that wind power will be quite close to competitive with fossil fuels, and if introduced by 2010, it will stimulate the investment at that time needed to make wind competitive. Given this, the simulation results find that the net cost of early wind investment in the year 2000 is reduced 4-fold at a 3% discount rate. Second, like all new technology adoptions, wind power will follow an exponential growth path. In my simulation, I assume the growth rate drops from 19% to 10% to 5%; but the power of compounding nevertheless remains.
Investing 10 years earlier means that by the year 2030, the installed wind power base is 60% higher than it otherwise would be. And each of those peak watts of wind power installed after the first 10 years generates a stream of pure benefits in the form of reduced environmental damage and avoided control costs.

By construction, the benefits of early investment grow over time, while the costs are confined to the initial outlay. This means, first, that examining only a
30 year time horizon stacks the deck against wind power. Even with this limitation, as well as the other conservative assumptions in the model presented here, however, early investment in wind power is clearly socially efficient. Beyond that, initiating the exponential growth process a decade earlier will make a tremendous difference in the installed wind base by mid-century.
ACKNOWLEDGMENT

I would like to acknowledge the contributions of Matthias Fripp, who supplied both conceptual and research assistance for this chapter.
NOTES

1. But note also that this problem will be minimized if tax dollars supporting higher R&D spending come primarily out of consumption [Cline (1992)].

2. Note that these formulations differ slightly from the way they are presented in equation (6). This is because in the simulation, the clean technology is not introduced discretely at time τ, but instead is phased in over a ten-year period.

3. Capital costs per peak watt were converted to $/kWh using the formula found in Cavallo (1993: 147) and assuming a 30% capacity factor, a 6% discount rate and a 25 year life span. Further reduction in wind capital costs to $0.27 per peak watt would imply generation costs of $0.02 per kWh; at about this point, renewable power falls below the variable fuel costs for coal and gas plants, implying cost savings from shutting down existing fossil fuel plants and replacing them with renewables. What would it take to achieve cost levels of 2 cents per kWh - the variable cost of coal and gas plants? With a progress ratio of 0.81 for wind energy, about six doublings would be required, which is roughly equivalent to the industry growth since 1980. This implies an installed base worldwide of 540,000 MW.

4. Note these costs might fall somewhat as efficiencies are induced by the permit fees. However, the general results of this study are robust to moderate changes in these parameters.

5. Even at these growth rates, wind power by no means replaces fossil fuel production by 2030. For example, from 2010-2020 the addition to worldwide wind generating capacity is less than one third of the predicted increase in U.S. generating capacity alone.
REFERENCES

Arrow, K. (1962). Economic Welfare and the Allocation of Resources for Invention. In: The Rate and Direction of Inventive Activity. Princeton, NJ: Princeton University Press.

Cavallo, A., Hock, S., & Smith, D. (1993). Wind Energy: Technology and Economics. In: T. Johansson et al. (Eds), Renewable Energy: Sources for Fuels and Electricity. Washington, DC: Island Press.

Cline, W. R. (1992). The Economics of Global Warming. Washington, DC: Institute for International Economics.
Cody, G., & Tiedje, T. (1996). A Learning Curve Approach to Projecting Cost and Performance in Thin Film Photovoltaics. Conference Record of the 25th IEEE Photovoltaic Specialists Conference. Salem, MA: Institute of Electrical and Electronics Engineers.

DeCanio, S. (1997). The Economics of Climate Change. San Francisco: Redefining Progress.

EIA (1998a). Annual Energy Outlook 1999. Washington, DC: Energy Information Administration.

EIA (1998b). Assumptions to the Annual Energy Outlook 1999. Washington, DC: Energy Information Administration.

Goodstein, E. (1995). The Economic Roots of Environmental Decline: Property Rights versus Path Dependence. Journal of Economic Issues, 29(4), 1029-1053.

Goulder, L., & Schneider, S. (1996). Induced Technological Change, Crowding Out, and the Attractiveness of CO2 Emissions. Unpublished manuscript, Stanford University.

Gruber, H. (1992). The Learning Curve in the Production of Semiconductor Memory Chips. Applied Economics, 24, 885-894.

Hadley Centre (1998). Climate Change and its Impacts. U.K. Meteorological Office: http://www.meto.gov.uk/sec5/CR_div/Brochure98

Henderson, E. J., & Kalejs, J. P. (1996). The Road to Commercialization in the PV Industry: A Case Study of EFG Technology. Conference Record of the 25th IEEE Photovoltaic Specialists Conference. Salem, MA: Institute of Electrical and Electronics Engineers.

Jones, C., & Williams, J. (1998). Measuring the Social Return to R&D. Quarterly Journal of Economics, 113(4), 1119-1136.

Krautkraemer, J. (1998). Non-renewable Resource Scarcity. Journal of Economic Literature, 36(4), 2065-2107.

Krupnick, A., & Burtraw, D. (1996). The Social Costs of Electricity: Do the Numbers Add Up? Resource and Energy Economics, 18(4), 423-466.

McVeigh, J., Burtraw, D., Darmstadter, J., & Palmer, K. (1998). Renewable Energy: Winner, Loser or Innocent Victim? Research Report No. 7. Washington, DC: Renewable Energy Policy Project.

Montague, M. (1998). Wind Energy and Climate Change: A Strategic Initiative. The Ecological Economics Bulletin, 3(1), 21-25.

Princeton Economic Research (1995). The Effects of Increased Production on Wind Turbine Costs. Golden, CO: NREL.

Robertson, K. (1999). American Wind Energy Association, personal communication. March 9.

Scherer, F. M. (1988). Corporate Takeovers: The Efficiency Arguments. Journal of Economic Perspectives, 2(1), 69-82.

Scherer, F. M. (1999). New Perspectives on Economic Growth and Technological Innovation. Washington, DC: Brookings Institution.

Worldwatch (1999). Worldwatch Database Set - Energy. Washington, DC: Worldwatch Institute.
ENERGY EFFICIENCY AND PETROLEUM DEPLETION IN CLIMATE CHANGE POLICY

Neha Khanna and Duane Chapman

ABSTRACT

This chapter examines the validity of standard technology assumptions used in climate economy models, and explores the policy consequences of changing them to reflect actual as opposed to postulated trends. In this analysis, global oil production is determined by an augmented Hotelling model in which demand functions incorporate growth in world income and population. The equilibrium production trajectory rises in the near term, peaks, and then declines as the resource approaches depletion. Contrary to most other work, oil is replaced by an even more carbon intensive but proven energy form, such as coal or shale based synthetic fuel, for an appreciable length of time. At the same time, our econometric model projects the energy intensity of the global economy stabilizing around the current level. This alternative arises from an analysis of historical data from the early 1970s to the present. While the scenarios explored here might be interpreted as pessimistic, we consider them highly plausible. The significant policy conclusion that emerges is the need for earlier and more aggressive climate policies than typically found in other work: the optimal control rate for carbon emissions is significantly higher. With existing and known alternative technologies, significant reductions in carbon emissions are very expensive, as evidenced by the very high tax rates needed to achieve these reductions. We believe these results underline the desirability of policies with increased emphasis on research on low cost, efficient substitutes for current technologies.

The Long-Term Economics of Climate Change, pages 239-264.
Copyright © 2001 by Elsevier Science B.V. All rights of reproduction in any form reserved.
ISBN: 0-7623-0305-0
I. INTRODUCTION

The standard practice for determining the empirical structure of economic models is to draw upon recent empirical history. This provides good insight into the near future. However, as the analysis extends further in time, and especially into the far future, the variability around empirical and, perhaps, even structural assumptions necessarily increases. The researcher is thus forced to invoke some tangible representation of his or her particular world view and expectations regarding the evolution of economic societies. In the case of climate change, there are several economic assumptions that are extremely tentative and yet vital to the results of the models currently in use.1 An important subset of these relates to the evolution of technology. How will the global economy's use of energy as a factor input in the production process change over time? When will global oil resources be depleted? What energy source will replace them, and for how long? The answers to these and related questions directly determine the future trajectory of greenhouse gases, particularly carbon dioxide, and hence the magnitude of the climate change problem and the economic response to it. This chapter examines the validity of standard technology assumptions commonly used in climate-economy models for the far future, and explores the consequences of changing them to reflect actual as opposed to postulated trends. While the alternatives explored here might be interpreted as pessimistic, we consider them highly plausible. The significant policy conclusion is the need for an earlier and more aggressive implementation of climate policies than typically found in other work.
II. ENERGY USE AND THE EVOLUTION OF TECHNOLOGY Economic models typically incorporate a simple representation of technology. Technology and technical change are represented by the form of the production function and the changes in the numerical values of exogenously specified parameters. For climate-economy models a key parameter is one that represents the efficiency with which the economic system uses energy as a factor in the production process. The evolution of energy technologies is typically represented through autonomous energy efficiency improvements (AEEI)
which are exogenous to these models (Manne & Richels, 1992, 1999; Kurosawa et al., 1999; Peck & Teisberg, 1995, 1999; among others; Nordhaus, 1994, Nordhaus & Yang, 1996, and Nordhaus & Boyer, 1999, use autonomous decarbonization in place of AEEI). Improvements in energy efficiency are assumed to be quite rapid. For instance, Manne and Richels (1999) assume the annual AEEI rate to be 40% of the annual GDP growth rate. This translates into a continuous improvement in the efficiency of energy use in the global production process, though the growth rate declines from approximately 0.98% per year in 2000 to about 0.77% per year in 2100. This trend is assumed to hold not only for the world as a whole, but also for every region, including developing countries such as India and those in Africa and South East Asia.2 Comparable assumptions are found in other integrated assessment models of climate change. Note that this reduction in energy demand is independent of the impact of rising energy prices: it is an assumed pure technology effect at the global level.3 The implication of this assumption is that welfare as measured by gross economic product can be increased in the future without a corresponding increase in energy use and carbon dioxide emissions. That is, the ratios of energy use and CO2 to GDP decline steadily over time, regardless of prices, income, and population changes. Data from the 1970s onwards do not support this assumption unambiguously. Figure 1 shows that global energy use per unit of economic output was, on average, higher in the early 1970s than in 1998, the latest year for which data are available. A similar trend is observed for high income countries (as defined by the World Bank, 2000). For the remaining countries, the trend is mixed. During the 1980s, energy intensity in the rest of the world showed a rising trend.
However, the collapse of the Soviet Union and the subsequent economic crisis in South East Asia led to a decline in the energy intensity for this group of countries in the first half of the 1990s. From 1995 onwards, energy intensity has increased for both groups of countries, as well as for the world as a whole. Based on world data, we estimated a simple two-parameter econometric model. This asymptotic curve, shown in Fig. 1, has a better fit than a simple exponential decay. The estimated growth rate falls from about 0.43% per year in 2000 to around 0.02% per year in 2100.4 A priori, it is difficult to predict the outcome for the next century or so. Manne and Richels (1992, p. 34) ascribe the common assumption of a positive and high AEEI to the optimistic outlook of energy technologists. A more pessimistic outlook might be based on the generally slow spread of efficient production technologies to developing countries, coupled with the lack of basic
Fig. 1. World Energy Intensity (1000 BTU per 1996 US$).
infrastructure, especially in rural areas where the bulk of the world's population lives. This view might conclude that, at least for the next 100 years or so, the energy intensity in these countries might rise slowly. Then, it is not unlikely that global energy intensity would stabilize somewhere around the current level over the course of the next century, as predicted by our econometric model. We consider this possibility to be as likely as the commonly posited decline in the ratio of global energy use to gross economic product.
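The contrast between an AEEI-style exponential decline and the asymptotic behavior described above can be sketched numerically. This is a minimal illustration, not the chapter's estimated model: the chapter does not print its functional form, so the simple curve I(t) = a + b/t and every numeric value below are assumptions chosen only to reproduce the approximate growth rates quoted in the text (about 0.43% per year in 2000, falling to about 0.02% per year in 2100).

```python
import numpy as np

years = np.arange(2000, 2101)
s = years - 2000

# AEEI-style view: energy intensity decays exponentially, here at an
# assumed 1% per year from an assumed initial level of 12.
exp_path = 12.0 * np.exp(-0.01 * s)

# Asymptotic view: I(t) = a + b/t flattens toward the level a.
# a and b are placeholders tuned to the growth rates quoted in the text.
a, b = 11.0, 50.0
t = years - 1970                       # time since the start of the sample
asym_path = a + b / t

growth = np.diff(np.log(asym_path))    # annual growth rates of intensity
```

With these placeholder values the asymptotic path declines by roughly 0.4% per year around 2000 but is nearly flat by 2100, while the exponential path keeps falling at a constant rate; this is the qualitative difference the text draws between the two views.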
III. OPTIMAL PETROLEUM DEPLETION

Data from the 1950s show world oil production rising steadily (EIA, 2000). Yet throughout the last decade, economic models of climate change have typically projected a monotonically declining oil production trajectory. Underlying this assumption is usually some type of resource depletion model loosely based on the Hotelling (1931) model for exhaustible resources. This traditional depletion model is typically based on static demand curves which ignore the growth in income and population, often assumes rising marginal extraction costs, and usually does not reflect the important geological concept of undiscovered resources. Consequently, this conventional model yields a monotonically declining equilibrium production trajectory, a result that was clearly discordant with 20th century global reality. Chapman (1993, and with Khanna, 2000a) augmented the traditional Hotelling model to reflect the growth in income and population. In the absence of a backstop technology, this model yields an optimal equilibrium production trajectory that increases in the near term, peaks, and then declines as the resource approaches exhaustion. The historic trend of rising oil production is likely to continue over the next few decades. However, as oil resources are depleted and the rising user cost of extraction begins to dominate the positive impact of shifting demand curves, it is not unlikely that the oil production trajectory would eventually turn downward. Alternatively, in the presence of a backstop, the global production trajectory might continue its monotonically increasing trend until prices rise to the level of the backstop, a result that is also yielded by the Chapman model. Another conspicuous feature of many current integrated models is the omission of cross-price effects as a direct determinant of fossil fuel use (Khanna & Chapman, 1997). From a policy perspective, this could have serious implications.
Market based CO2 abatement instruments typically operate by changing the relative prices for different fuels based on the differential carbon content of each fuel. As relative fuel prices change, there are cross-substitution
effects which would affect the ability of an instrument to achieve any given emissions trajectory. Here we extend the augmented Chapman model to include multiple exhaustible resources whose markets are linked through cross-price effects. This more complex depletion model yields a parabolic production trajectory (rather than an always-declining path), and an equilibrium price trajectory which may show stable prices in the near term (rather than always-rising prices).5

Suppose there are M fossil fuels, m ∈ M, each of which has a finite stock of remaining resources, S^m. Each faces a marginal cost of extraction, C^m(t), that varies over time. Suppose also linear demand curves that shift over time in response to a growing world population, L_t, rising per capita incomes, y_t, and the price of the substitute fuel, P_t^{subs,m}.6 For exhaustible fuels, the price of the backstop, P^{bk,m}, sets the upper bound on their respective price trajectories. Producers in each market maximize the net present value (NPV^m) of competitive profits by choosing the optimal duration of production, T^m, and the quantity produced in each time period, q_t^m, given the demand and cost schedules, and remaining resources. This can be written as:

Maximize NPV^m with respect to {q_t^m, T^m}, where

NPV^m = \sum_{t=1}^{T^m} \frac{[P_t^m(q_t^m, L_t, y_t, P_t^{subs,m}) - C^m(t)]\, q_t^m}{\prod_{\tau=1}^{t}(1 + r_\tau)}    (1)

subject to

P_t^m = \beta_1^m L_t^{\eta_1} y_t^{\eta_2} (P_t^{subs,m})^{\eta_3} - \beta_2^m q_t^m
C^m(t) = C_0^m (1 + \phi^m)^t
S^m \ge \sum_{t=1}^{T^m} q_t^m
P_t^m,\ q_t^m \ge 0

where:

P_t^m: price of fuel m at time t
C^m(t): marginal extraction cost for fuel m at time t
y_t: per capita income at time t
S^m: stock of remaining resources
\beta_1^m: calibration constant
\eta_1: population sensitivity parameter
\eta_2: income sensitivity parameter
\eta_3: cross-price sensitivity parameter
q_t^m: production of fuel m at time t
L_t: population at time t
r_\tau: real interest rate at time \tau
\beta_2^m: slope of the demand curve
\phi^m: growth rate of extraction cost

Under perfectly competitive markets the Hamiltonian for the above problem is:

H_t^m = \frac{[P_t^m(\cdot) - C^m(t)]\, q_t^m}{\prod_{\tau=1}^{t}(1 + r_\tau)} - \lambda^m q_t^m, \qquad \frac{\partial H_t^m}{\partial q_t^m} = 0    (2)

where \lambda^m is the costate variable representing the change in the discounted NPV^m due to a small change in the quantity of remaining resources for fuel m. The optimal production trajectory, q_t^{m*}, is found by solving the first order conditions and the constraints simultaneously. The solution is:

q_t^{m*} = \beta_{3t}^m + \frac{\prod_{\tau=1}^{t}(1 + r_\tau)}{X^m(r_\tau)}\,(S^m - \beta_4^m)    (3)

where

\beta_{3t}^m = \frac{\beta_1^m L_t^{\eta_1} y_t^{\eta_2} (P_t^{subs,m})^{\eta_3} - C^m(t)}{\beta_2^m}, \qquad \beta_4^m = \sum_{t=1}^{T^m} \beta_{3t}^m, \qquad X^m(r_\tau) = \sum_{t=1}^{T^m} \prod_{\tau=1}^{t}(1 + r_\tau)

and X^m(r_\tau) is the compound discount factor. The optimal production trajectory is made up of two components. The first, \beta_{3t}^m, is the equilibrium production trajectory in the absence of a resource constraint. It is the intertemporal locus of the intersection between the shifting demand curves and the changing marginal extraction cost. The second part represents the impact of scarcity arising due to the finite stock of resources. It defines the distance between the actual equilibrium, q_t^{m*}, and the hypothetically unconstrained equilibrium, \beta_{3t}^m, and is based on the difference between remaining resources, S^m, and the cumulative production in the absence of a resource constraint, \beta_4^m. The optimal production horizon, T^m, is the minimum of T_1^m and T_2^m:

T_1^m = T^m \ni \beta_1^m L_{T^m}^{\eta_1} y_{T^m}^{\eta_2} (P_{T^m}^{subs,m})^{\eta_3} = C^m(T^m)
T_2^m = T^m \ni P_{T^m}^m = P^{bk,m}    (4)
where T_1^m is defined as the period when the marginal cost of extraction rises to the level of the intercept of the demand curve, and T_2^m is the period when the equilibrium price of the exhaustible fuel rises to the price of the backstop. An obvious extension of the above framework is to assume reserve-dependent extraction costs. In the present context, however, this might not be appropriate since it would yield rapidly rising marginal costs. In fact, there is some evidence to indicate that extraction costs for petroleum have been slowly declining (Fagan, 1997; Adelman, 1992, 1994). The current model structure allows the flexibility of determining the growth rate of extraction costs exogenously, and of testing the sensitivity of model results to the numerical values assumed. Despite the evidence of stable or even declining extraction costs over the last few decades, we believe it is reasonable to assume a positive though slow growth in extraction costs in the context of a climate change model whose horizon extends up to 400 years into the future. This would incorporate the interaction of technological improvement and depleting resources. Two issues remain. The first relates to the geological concept of remaining resources. Remaining resources refers to the total conventional crude oil available for recovery. It is the sum of both undiscovered resources and identified reserves (Masters, 1991; Chapman, 1993). The undiscovered resources concept is adapted from geology: it is probabilistic, based upon geological extrapolation from known formations and petroleum occurrence. Identified reserves are similar to an inventory concept: they refer to the economically recoverable crude oil at known reservoirs and fields with expected technology. Over time, the USGS has provided a shifting probability distribution of the world's original endowment of oil resources.7 Between 1983 and 1991, there was a greater shift in the distribution at the higher probability levels, while the distribution remained almost unchanged at the lower tail (Masters, 1991, summarized in Chapman, 1993). At the median 50% probability, the growth in the estimates of original endowment has exceeded the growth in cumulative production over this period. In both 1983 and 1991, there was only a 5% probability that the original resource endowment exceeded 2600 billion barrels. According to Manne and Richels (1992; see pp. 38-39 for discussion), the 95th percentile constitutes the practical upper bound for undiscovered resources as it allows for technological improvements such as those that might affect the costs of deep drilling. Chapman (1993) is also in favor of this approach. Thus, we use the 5% probability estimates as the preferred economic guideline for remaining resources.
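The two-part solution in equation (3) can be sketched numerically. The sketch below is illustrative only: the interest rate, demand and cost parameters, and the population and income paths are placeholder assumptions, not the chapter's calibrated values, and the horizon T is fixed rather than optimized via equation (4).

```python
import numpy as np

T = 40                                  # trial production horizon (fixed here; optimized in the chapter)
t = np.arange(1, T + 1)
r = np.full(T, 0.05)                    # real interest rate r_tau (assumed)
L = 5.7 * 1.012 ** t                    # population path (assumed growth)
y = 4.5 * 1.016 ** t                    # per capita income path (assumed growth)
P_subs = np.full(T, 30.0)               # substitute-fuel price (assumed constant)

beta1, eta1, eta2, eta3 = 0.5, 1.0, 1.0, 0.1   # demand shift parameters (assumed)
beta2 = 0.3                                     # demand slope (assumed)
C0, phi = 11.0, 0.0161                          # initial cost (assumed) and base-case growth rate
S = 2150.0                                      # remaining resources, billion barrels (50th pct.)

compound = np.cumprod(1 + r)            # prod over tau = 1..t of (1 + r_tau)
C = C0 * (1 + phi) ** t                 # C(t): rising marginal extraction cost
intercept = beta1 * L**eta1 * y**eta2 * P_subs**eta3
beta3 = (intercept - C) / beta2         # unconstrained equilibrium path, beta_3t
beta4 = beta3.sum()                     # cumulative unconstrained production
X = compound.sum()                      # compound discount factor X(r)
q_star = beta3 + compound / X * (S - beta4)   # equation (3)
# When S < beta4, the scarcity term is negative and grows at the interest
# rate, pulling production below the unconstrained path in every period.
```

Only the qualitative structure is meaningful here: the first term traces the shifting demand/cost intersection, and the discounted scarcity term widens over time as the rent compounds at the interest rate.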
A final issue pertaining to the resource depletion model relates to the backstop technology. Many climate-economy models assume that petroleum would be replaced by a liquid synthetic fuel, such as coal or shale oil, which is quickly replaced by a carbon-free energy form such as solar or ethanol from biomass. Often, the dates of introduction and maximum rates of expansion or decline are specified exogenously, based on the researchers' expectations of the future development of these technologies and their ability to penetrate energy markets (see, for instance, Manne & Richels, 1992, 1999, and Peck & Teisberg, 1992, 1995, 1999). Based on cost assumptions and other considerations, Manne and Richels point out that in the absence of a global carbon constraint, a highly carbon-intensive liquid fuel would place an upper bound on the future cost of nonelectric energy. In our perspective, this is not unreasonable. There are approximately 15 trillion tons of remaining coal resources (Chapman, 2000). At current rates of consumption, this implies enough coal to meet demand for the next 3000 years or so. What, then, provides the incentive for a shift to a carbon-free alternative whose cost per energy unit might be an order of magnitude higher? In the analysis that follows, we explore the consequences of the possibility that a coal-based liquid fuel replaces petroleum as the backstop. Realistic options are coal-powered rail transport and liquefied coal as a personal vehicle transportation fuel. Given the huge remaining resources, we assume that the cost of this energy source increases very slowly over time, such that carbon-free alternatives to liquid fuels do not become economically attractive over our model horizon.
IV. INTEGRATED ASSESSMENT: DISCUSSION AND RESULTS

In this section, we examine the implications for climate policy of the preceding discussion regarding the evolution of energy use and technologies. In order to do so, we incorporate the resource depletion model and other assumptions into an existing model and examine the changes in the results obtained. Nordhaus (1999, 1996, 1994) provides a convenient framework and starting point. The advantage of this model is its compact representation of a fairly detailed, global climate-economy model, accompanied by a candid discussion of model structure, assumptions, and results. The logic of this model is summarized in the Appendix. Further details are available in the references cited therein.
In the present analysis, there are four carbon-based fuels - coal, oil, natural gas, and a coal-based synthetic fuel that is the backstop. It is assumed that the oil market is, in the near future, the driving force of the energy economy and the first resource that may reflect economic scarcity in the future. The resource depletion model is, therefore, operated for oil only. The demands for coal and natural gas are determined by population, per capita income, own prices, and the prices of all other fuels. Oil is ultimately replaced by the synfuel, whose demand is also determined through a similar function of prices and income.8 The substitutability between fuels is captured by the cross-price elasticities. Based on the discussion in section II, no exogenous improvements in energy or carbon intensity are imposed. Changes in carbon and energy intensity are determined by the model in response to relative price changes. CO2 emissions are determined through exogenously specified coefficients, \nu^n, that translate energy units to billions of tons of carbon. That is:

E_t = \sum_{n} \nu^n q_t^n    (5)

where n ∈ N indexes fuels, M ⊂ N is the set of fossil fuels, and q_t^n refers to the aggregate consumption of each fossil fuel, including the exhaustible fuels and the backstop, at time t. The macro-geoeconomic model and the optimal resource depletion model operate iteratively until they converge to a solution. Per capita income and the interest rate from the economic growth model serve as inputs to the energy module, which determines the optimal trajectory of oil and other fossil fuels.9 CO2 emissions are based on equation (5), and feed back into the climate-economy model via changes in equilibrium temperature and the resulting loss in global economic output. The energy module parameter values are summarized in the Appendix (see Table A). All other parameter values are consistent with the DICE model. The model is operated under two scenarios: the base case with no CO2 control, and the case where the control rate for CO2 is optimized (the "optimal case"). The basic results are shown in Figures 2-6, with the latter four showing comparative paths with the Nordhaus work.10 In Figure 2, the global transition to synthetic fuel takes place towards the first quarter of the next century. Since the synthetic fuel releases a much higher amount of carbon per unit of energy than either oil or conventional coal, carbon emissions shift upward and accelerate relative to the DICE projections (see Fig. 3).11 This figure highlights the implications of assumptions regarding the backstop technology. In the case of a carbon-free alternative, there would be a decline in emissions of a similar magnitude, which would have the opposite impact on the optimal carbon control rates shown in Fig. 6.
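The emissions identity in equation (5) and the iterative coupling between the growth model and the energy module can be sketched as below. The carbon coefficients, demand rules, and damage feedback are placeholder assumptions, not the calibrated DICE-based values; only the fixed-point structure of the iteration mirrors the text.

```python
import numpy as np

# Illustrative carbon coefficients nu^n (tons C per energy unit); the
# ordering synfuel > coal > oil > gas mirrors the text, the numbers do not.
nu = {"coal": 0.025, "oil": 0.020, "gas": 0.014, "synfuel": 0.040}

def emissions(q):
    """Equation (5): E_t = sum over fossil fuels n of nu^n * q_t^n."""
    return sum(nu[n] * np.asarray(path) for n, path in q.items())

def energy_module(y):
    # Placeholder demand rule: consumption of each fuel scales with income.
    return {n: 10.0 * y for n in nu}

def growth_model(E):
    # Placeholder: climate damage from emissions slightly lowers income.
    return 4.5 * np.ones_like(E) * (1 - 1e-4 * E)

# Iterate the two modules until the income path converges, as in the text.
y = 4.5 * np.ones(10)
for _ in range(50):
    q = energy_module(y)
    E = emissions(q)
    y_new = growth_model(E)
    if np.max(np.abs(y_new - y)) < 1e-8:
        break
    y = y_new
```

In the chapter the feedback runs through equilibrium temperature and output loss rather than this one-line damage rule, but the convergence logic between the two modules is the same.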
Fig. 2. Per Capita Energy Consumption (base case).

Fig. 3. Carbon Emissions (base case).
The exogenous decarbonization imposed in the Nordhaus model and other similar analyses implies that the carbon intensity declines steadily. In our analysis, this is not the case. Initially, the carbon intensity increases, rising sharply when the synfuel replaces crude oil. Thereafter, the ratio remains more or less stable (see Fig. 4). This is an intuitively appealing result. For the next few decades, while a large proportion of the world's population in developing countries strives to meet its basic energy needs, the global energy and carbon intensity is likely to rise. Once these nations have acquired some minimum level of energy consumption, and as energy prices rise world-wide, there may be an increased effort to reduce energy consumption per unit of economic output, resulting in the subsequent stabilization of energy and carbon intensities. The paramount importance of the AEEI assumption, and of future oil depletion, is evident in Fig. 5, with a higher trajectory for global mean surface temperature. Note that because of the lags in the transfer of heat between the layers of the atmosphere and the ocean, the difference in temperature becomes much greater after the mid-21st century. As a consequence, our optimal control rates for carbon emissions (Fig. 6) are much higher than the Nordhaus projections.
Fig. 4. Projected Energy and Carbon Intensity (base case).
Fig. 5. Rise in Mean Surface Temperature (base case).
Fig. 6. Optimal Control Rate for Carbon Emissions.
Our conclusion is that a less optimistic assumption regarding energy use in developing countries, together with continued growth in oil use followed by the use of synthetic liquid fuel, can result in much higher carbon emissions trajectories and global temperatures than are typically found in similar work.
V. SENSITIVITY ANALYSIS

How dependent are our results on the petroleum-linked parameter values? We investigate this question with a sensitivity analysis. In the first sensitivity case, A1, we allow the marginal extraction cost to grow very slowly at 0.5% per year, as compared to 1.61% per year in the base case. In the high growth case, A2, we shift in the opposite direction, with the marginal cost of extraction growing rapidly at 2.5% per year. This might partially reflect higher production costs associated with environmental protection. The implications for the optimal oil production trajectory are shown in Fig. 7. In the base case, we assume that oil resources are at the 95th percentile of the frequency distribution. This implies that there is a 5% probability that
Fig. 7. Equilibrium Oil Production Trajectory Under Alternative Scenarios.
Scenario A1: Growth rate of marginal cost of extraction for oil = 0.5% per year.
Scenario A2: Growth rate of marginal cost of extraction for oil = 2.5% per year.
Scenario B1: Remaining oil resources estimated using the 50th percentile on the frequency distribution for original resources.
Scenario B2: Remaining oil resources estimated using the 97.5th percentile on the frequency distribution for original resources.
Note: In all cases, the optimal oil production trajectory terminates in the decade of 2025.
resources exceed the estimated amount. In scenario B1, we use the 50th percentile of the frequency distribution for petroleum resources. The remaining resources corresponding to this level are 2150 billion barrels. In the more optimistic case, B2, remaining resources are 2650 billion barrels, corresponding to the 97.5th percentile. This case allows for breakthrough technological developments that might increase the amount of economically recoverable reserves in the future. The sensitivity results in Fig. 8 have an obvious interpretation. The optimal carbon emissions trajectory as well as the optimal control rate are not very sensitive to the assumptions used in the current analysis. However, note that scenario B1 (lesser remaining oil resources) results in visibly higher CO2 emissions and optimal control rates.
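The A1/A2 cases amount to varying the growth rate φ in the extraction cost schedule C(t) = C0(1 + φ)^t from equation (1). A small sketch, with C0 an assumed placeholder value rather than the chapter's calibrated cost:

```python
import numpy as np

C0 = 11.0                                 # assumed initial marginal cost, $/bl
phis = {"A1": 0.005, "base": 0.0161, "A2": 0.025}   # growth rates from the text
t = np.arange(0, 41)                      # roughly four decades ahead

# Extraction cost path C(t) = C0 (1 + phi)^t under each scenario.
paths = {name: C0 * (1 + phi) ** t for name, phi in phis.items()}

# After 40 years the spread is large: the A2 path is roughly double A1.
ratio = paths["A2"][-1] / paths["A1"][-1]
```

Even so, Fig. 8 shows the optimal emissions and control-rate paths are not very sensitive to this spread, which is why the resource-stock scenarios (B1, B2) matter relatively more.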
VI. A TAX POLICY SIMULATION Since the introduction of the Buenos Aires Action Plan in 1998, climate economists have focused their research on the potential carbon permit prices under various emission trading scenarios. The optimal carbon tax rates furnished by our model correspond exactly to the optimal permit price under a global emissions trading regime. However, even under such a regime, there is likely to be some differential impact on fossil fuel prices due to their different carbon contents. In this section we simulate the effectiveness of changing relative fossil fuel prices in lowering the emissions trajectory towards the optimal level. 12 Under the first two scenarios, we impose taxes at rates that are ranked according to the relative carbon intensities of the fossil fuels, with the tax rates in the second case being twice as high as in the first case. The third scenario is designed such that the resulting emissions trajectory approximately tracks the optimal emissions trajectory obtained earlier. The tax rates used for the analysis are shown in Table 1. Note that the tax on oil is levied on the marginal cost of extraction. The impact of an energy tax on the emissions trajectory depends on the simultaneous interplay of several forces. First, as the marginal cost of oil extraction increases due to the imposition of an exogenous tax, the optimal production horizon changes, and therefore, the optimal price and quantity trajectories change. Second, the introduction of synthetic fuels, the most carbon intensive of all the fuels considered, depends on the optimal production horizon for oil. Third, there are substitution possibilities between the various fuels. As the price of a fuel rises there is not only the decline in emissions due to the negative own price effect on demand, but also a partially offsetting increase in
Fig. 8. Sensitivity Analysis: CO2 Emissions Under Alternative Scenarios (upper panel) and Control Rate Under Alternative Scenarios (lower panel).
Scenario A1: Growth rate of marginal cost of extraction for oil = 0.5% per year.
Scenario A2: Growth rate of marginal cost of extraction for oil = 2.5% per year.
Scenario B1: Remaining oil resources estimated using the 50th percentile on the frequency distribution for original resources.
Scenario B2: Remaining oil resources estimated using the 97.5th percentile on the frequency distribution for original resources.
Table 1. Tax Rates and Levels Under Alternative Tax Scenarios.

                        Scenario 1 (Low Tax)     Scenario 2 (Medium Tax)   Scenario 3 (Optimal Control Rate)
                        Rate   Tax Level         Rate   Tax Level          Rate   Tax Level
                        (%)    1995    2015      (%)    1995    2015       (%)    1995    2015
Oil ($/bl) A             20     2.2               40     4.3               100    10.8
Coal ($/ton)             30     6.8     7.6       60    13.5    15.1       200    45.1    50.4
Nat. Gas ($/1000 cf)     10     0.3     0.4       20     0.6     0.7       100     3.1     3.5
Synfuel ($/bl) B         40      -     23.6       80      -     47.2       300      -     177

The tax rate refers to the percentage by which energy prices are raised. The tax level is the absolute level of the energy tax (units are shown in the first column).
A: The tax is levied on extraction.
B: The tax is levied once synfuel production begins in the decade of 2005.
the emissions level due to the positive cross-price effect on the demand for substitute fuels. As evident from Fig. 9, the first two scenarios have limited success in reducing the emission levels to the optimal trajectory. For this to be achieved, extremely high tax rates are required, an example of which is shown in scenario three, which raises energy prices by as much as 300%.
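The offsetting own- and cross-price effects described above can be sketched with the linearly homogeneous Cobb-Douglas demand form mentioned in note 8. The elasticities, prices, and scale factor below are illustrative assumptions, not the model's calibrated values:

```python
def demand(own_price, subs_price, own_elast=-0.9, cross_elast=0.3, scale=100.0):
    """Per capita demand as a Cobb-Douglas function of own and substitute prices."""
    return scale * own_price ** own_elast * subs_price ** cross_elast

# Untaxed baseline: two substitute fuels at illustrative prices.
coal0 = demand(25.0, 2.5)
gas0 = demand(2.5, 25.0)

# A tax raising the coal price 30% cuts coal demand (own-price effect) but
# raises gas demand (cross-price effect), partly offsetting the cut.
coal1 = demand(25.0 * 1.3, 2.5)
gas1 = demand(2.5, 25.0 * 1.3)
```

This is why, in Fig. 9, a tax ranked by carbon content reduces net emissions by less than the own-price effect alone would suggest, and why reaching the optimal trajectory requires the very high rates of scenario three.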
VII. CONCLUSIONS

The modeling of the far future necessary for the analysis of climate change raises challenging economic issues. Ultimately, the empirical structure of a climate-economy model is based on an expectation regarding the development and spread of technologies and their impact on the economic system. This necessarily involves some amount of informed and educated guessing. The current paper examines the implications for climate policy of some technology-related assumptions regarding future energy use. One can only conjecture what will happen when oil becomes relatively scarce. A common approach is to assume that a carbon-free backstop, such as hydrogen produced by electrolysis, or solar and nuclear power, will take its place.13 The result is the presumption of a concave carbon trajectory: in the near term CO2 emissions rise, but then continuously decline as the carbon-free
Fig. 9. CO2 Emissions Under Alternative Tax Scenarios.
backstop replaces a larger and larger fraction of the traditional fossil fuels (Chapman & Khanna, 2000b). This presumption is important in the global warming context because it is, in a sense, the "don't worry, be happy" approach: if you wait long enough, the problem will solve itself because the very source of the problem will begin to disappear through the postulated AEEI and the shift to carbon-free fuels. In this analysis, we consider the problem from a different, more pessimistic perspective, where oil may be replaced by an even more carbon-intensive but proven energy form, such as a coal- or shale-based synthetic fuel, for an appreciable length of time. This is accompanied by a rise in energy intensity in presently developing countries such that the frequently posited decline in global energy and carbon intensity is not realized. Our model yields a much higher level of carbon emissions accompanied by a higher optimal control rate relative to those obtained in other work. The bottom line is the need for greater, quicker, and consequently, more expensive abatement efforts. Furthermore, we find that in the presence of cross-price effects between fossil fuels, high levels of energy taxation would be required to reduce carbon emissions to their optimal level. (There may be opportunity cost equivalents such as CAFE fuel economy standards, and building insulation
code requirements.) In the current economic and political setting it seems unrealistic to expect these to be implemented. Yet, any delay in their implementation might warrant even higher taxation in the future.
NOTES 1. For an overview of recent literature, see Weyant and Hill (1999) and the 1999 Special Issue of the Energy Journal. 2. In the case of India, Manne and Richels (1999) assume a much lower AEEI in the early years of 2000 and 2010 due to the shift from non-commercial to commercial energy use. 3. At the national level, energy consumption may also decline due to the effect of policies, and the shift from manufacturing to services that occurs as the economy matures. 4. Details of the econometric model and data are provided in the appendix. 5. Chapman (1993) also developed the model under cartel monopoly-like assumptions and a combination of competition and monopoly, both with and without a backstop technology. 6. Note that in the case of a linear demand curve, the shifting intercept implies that the responsiveness of demand to own-price varies from period to period. The expression for the own-price elasticity corresponding to the demand function in equation (1) is: E=
E_t = -p_t / (β1·y_t^β2·(p_bs,t)^β3 - p_t)
7. Original endowment is defined as the sum of undiscovered resources, identified reserves, and cumulative production (Masters et al., 1991).
8. Computationally, per capita demands for coal, natural gas, and the synfuel are modeled as linearly homogeneous Cobb-Douglas functions of per capita income and prices. Aggregate demand is the product of rising per capita demand and population.
9. The real interest rate in equation (1) is determined from the optimal economic growth model as the real rate of return on capital, and varies from period to period. At the steady state equilibrium, it is numerically equal to the discount rate: r = μ + θg, where μ is the pure rate of time preference, θ is the elasticity of marginal utility with respect to per capita consumption, and g is the growth rate of per capita consumption (Khanna & Chapman, 1996). This relationship is modified if climate change impacts have a direct negative impact on utility. In that case, the pure rate of time preference might be lower than the 3% per year assumed in this analysis. (We thank an anonymous reviewer for pointing this out.) Khanna and Chapman (1996) have argued for a zero pure rate of time preference in the case of climate change models. However, the original Nordhaus assumption of 3% per year is retained here so as to focus on the impacts of technology assumptions on climate policy. Chapman et al. (1995) explore the sensitivity of optimal climate policy to changes in the pure rate of time preference.
10. The model generates results for a 400 year horizon. However, for ease of presentation, we present results through the year 2110 only. The only exception is Figure 5.
NEHA KHANNA AND DUANE CHAPMAN
11. In Nordhaus (1994), "emissions" refer to the sum of CO2 and CO2-equivalent CFC emissions. Our model considers only the former, as does the later work by Nordhaus and Yang (1996) and Nordhaus and Boyer (1999).
12. Note that this is a simulation and not an optimization exercise. The base case trajectory of per capita income is treated as an exogenous variable for this section of the analysis.
13. A contrasting view is presented by Drennen et al. (1996). They argue that even after including externality costs, solar photovoltaics are unlikely to be competitive and available for widespread adoption without significant technological breakthroughs.
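The steady-state relation in note 9 between the discount rate and consumption growth can be illustrated directly. A minimal sketch, assuming a unit elasticity of marginal utility and 1.5% per year per capita consumption growth (both illustrative values, not the chapter's calibration):

```python
# Steady-state discount rate from the Ramsey rule, r = mu + theta * g,
# as described in note 9. Only mu = 3%/year follows the text; theta and g
# below are illustrative assumptions.

def ramsey_rate(mu: float, theta: float, g: float) -> float:
    """Real rate of return on capital at the steady state equilibrium."""
    return mu + theta * g

# With the 3% pure rate of time preference retained in the analysis, an
# assumed unit elasticity, and 1.5%/yr consumption growth:
r = ramsey_rate(mu=0.03, theta=1.0, g=0.015)
print(f"steady-state discount rate: {r:.3%}")
```

With a zero pure rate of time preference, as Khanna and Chapman (1996) argue for, the same formula gives a rate driven entirely by consumption growth.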
REFERENCES

Adelman, M. A. (1992). Finding and Development Costs in the United States, 1945-1986. In: J. R. Moroney (Ed.), Advances in the Economics of Energy and Resources. Greenwich, CT: JAI Press.
Adelman, M. A. (1994). The World Oil Market: Past and Future. Energy Journal, Special Issue: The Changing World Petroleum Market, 15, 3-11.
AGA. (1981). Gas Facts: 1981 Data. American Gas Association, Arlington, VA.
BEA. (1996). Survey of Current Business. Bureau of Economic Analysis, U.S. Department of Commerce, Economics and Statistics Administration, 76(12). January/February.
Brown, L. R., Lenssen, N., & Kane, H. (1995). Vital Signs 1995: The Trends that are Shaping Our Future. Worldwatch Institute. New York and London: W. W. Norton and Company.
Chapman, D. (1993). World Oil: Hotelling Depletion or Accelerating Use? Nonrenewable Resources: Journal of the International Association for Mathematical Geology, 2(4), 331-339.
Chapman, D. (2000). Environmental Economics: Theory, Application, and Policy. Addison Wesley Longman.
Chapman, D., & Khanna, N. (2000a). World Oil: The Growing Case for International Policy. Contemporary Economic Policy, 18(1), 1-13. January.
Chapman, D., & Khanna, N. (2000b). Crying No Wolf: Why Economists Don't Worry About Climate Change, and Should. Climatic Change. Forthcoming.
Chapman, D., Suri, V., & Hall, S. G. (1995). Rolling DICE for the Future of the Planet. Contemporary Economic Policy, XIII(3), July, 1-9.
Drennen, T. E. (1993). Economic Development and Climate Change: Analyzing the International Response. Ph.D. dissertation, Cornell University, Ithaca, NY. January.
Drennen, T. E., Erickson, J. D., & Chapman, D. (1996). Solar Power and Climate Change Policy in Developing Countries. Energy Policy, 24(1), 9-16.
EIA. (1979). International Energy Annual. Energy Information Administration, U.S. Department of Energy, Washington, DC.
EIA. (1987). International Energy Annual. Energy Information Administration, U.S. Department of Energy, Washington, DC.
EIA. (1994). Annual Energy Review 1993. Energy Information Administration, U.S. Department of Energy, Washington, DC. January.
EIA. (1996). Monthly Energy Review. Energy Information Administration, U.S. Department of Energy, Washington, DC. July.
EIA. (1997). International Energy Annual. Energy Information Administration, U.S. Department of Energy, Washington, DC.
EIA. (2000). Monthly Energy Review. Energy Information Administration, U.S. Department of Energy, Washington, DC. February.
ERP. (2000). Economic Report of the President. U.S. Council of Economic Advisors, Washington, DC. February.
Fagan, M. N. (1997). Resource Depletion and Technical Change: Effects on U.S. Crude Oil Finding Costs from 1977 to 1994. Energy Journal, 18(4), 91-106.
Hall, D. C. (1996). Geoeconomic Time and Global Warming: Renewable Energy and Conservation Policy. International Journal of Social Economics, 23(4/5/6), 63-87.
Hotelling, H. (1931). The Economics of Exhaustible Resources. The Journal of Political Economy, 39(2), 137-175.
Howarth, R. B. (1996). Climate Change and Overlapping Generations. Contemporary Economic Policy, 14(4), 100-111. October.
Khanna, N., & Chapman, D. (1996). Time Preference, Abatement Cost, and International Climate Policy. Contemporary Economic Policy, XIV, April, 56-66.
Khanna, N., & Chapman, D. (1997). A Critical Overview of the Economic Structure of Integrated Assessment Models of Climate Change. Working paper 97-23, Department of Agricultural, Resource, and Managerial Economics, Cornell University, Ithaca, NY.
Kurosawa, A., Yagita, H., Zhou, W., Tokimatsu, K., & Yanagisawa, Y. (1999). Analysis of Carbon Emission Stabilization Targets and Adaptation by Integrated Assessment Model. Energy Journal, Special Issue: The Costs of the Kyoto Protocol - A Multi-Model Evaluation, 157-175.
Manne, A. S., & Richels, R. G. (1992). Buying Greenhouse Insurance: The Economic Costs of CO2 Emission Limits. Cambridge, MA: MIT Press.
Manne, A. S., & Richels, R. G. (1999). The Kyoto Protocol: A Cost-Effective Strategy for Meeting Environmental Objectives? Energy Journal, Special Issue: The Costs of the Kyoto Protocol - A Multi-Model Evaluation, 1-23.
Masters, C. D., Root, D. H., & Attanasi, E. D. (1991). World Resources of Crude Oil and Natural Gas. U.S. Geological Survey. Presented at the 13th World Petroleum Congress, Buenos Aires.
Nordhaus, W. D. (1994). Managing the Global Commons: The Economics of Climate Change. Cambridge, MA: MIT Press.
Nordhaus, W. D., & Yang, Z. (1996). A Regional Dynamic General-Equilibrium Model of Alternative Climate-Change Strategies. American Economic Review, 86(4), September, 741-765.
Nordhaus, W. D., & Boyer, J. (1999). Requiem for Kyoto: An Economic Analysis of the Kyoto Protocol. Energy Journal, Special Issue: The Costs of the Kyoto Protocol - A Multi-Model Evaluation, 93-130.
Peck, S. C., & Teisberg, T. J. (1992). CETA: A Model for Carbon Emissions Trajectory Assessment. Energy Journal, 13(1), 55-77.
Peck, S. C., & Teisberg, T. J. (1995). International CO2 Control: An Analysis Using CETA. Energy Policy, 23(4/5), April/May, 297-308.
Peck, S. C., & Teisberg, T. J. (1999). CO2 Emissions Control Agreements: Incentives for Regional Participation. Energy Journal, Special Issue: The Costs of the Kyoto Protocol - A Multi-Model Evaluation, 367-390.
Ramsey, F. P. (1928). A Mathematical Theory of Saving. The Economic Journal, XXXVIII(152), 543-559.
Suri, V. (1997). Environment, Trade, and Economic Growth: An Analysis of National and Global Linkages. Ph.D. dissertation, Cornell University. August.
Weyant, J. P., & Hill, J. N. (1999). Introduction and Overview. Energy Journal, Special Issue: The Costs of the Kyoto Protocol - A Multi-Model Evaluation, vii-xliv.
World Bank. (2000). World Development Indicators. CD-ROM.
APPENDIX

A. Estimating Future World Energy Intensity

The econometric model underlying the predicted energy intensity shown in Fig. 1 is Model 1. All estimated coefficients are significant at the 1% level.

Model 1 (figures in parentheses are standard errors):

E_t = 1199 + 465( ) - 2965( ) + error
     (0.44)    (4.07)     (4.18)

R² = 0.9273

where

E_t = energy intensity in year t
trend = trend variable, measured in terms of calendar years.

An alternative model, Model 2, was also estimated.

Model 2:

E_t = E_0(exp(a·trend + b·trend²)) + error

where E_0 refers to the observed energy intensity in year 0 (1973 in our case).

Econometrically, it is hard to distinguish between the two models. Both models fit the data equally well, with all coefficients statistically significant at the 1% level. The significant difference between the two models in our context is that Model 2 predicts a u-shaped curve for global energy intensity. However, given the discussion in the text, Model 1 has greater intuitive appeal.

Both models were estimated using energy data from the EIA (1979, 1987, 1997). Economic data were obtained from the World Bank (2000). World GNP in current U.S.$ was converted to constant 1996 U.S.$ using the U.S. GDP deflator (ERP, 2000). This methodology is preferred over the standard GNP in constant U.S.$ series provided by the World Bank due to the known under-reporting of inflation rates by national governments for political reasons. See Suri (1997) for details.
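A functional form of Model 2's type, E_t = E_0·exp(a·trend + b·trend²), can be estimated by ordinary least squares after a log transformation, since ln(E_t/E_0) = a·trend + b·trend². A minimal sketch on synthetic data (the chapter's actual estimates use the EIA and World Bank series cited above):

```python
# Illustrative fit of a Model-2-style form, E_t = E_0 * exp(a*t + b*t^2),
# by OLS on ln(E_t / E_0) = a*t + b*t^2. The data below are synthetic.
import math

def fit_model2(years, intensity):
    e0 = intensity[0]
    t = [y - years[0] for y in years]          # trend measured from year 0
    z = [math.log(e / e0) for e in intensity]  # log intensity ratio
    # Normal equations for z = a*t + b*t^2 (no intercept: z = 0 at t = 0).
    s11 = sum(x * x for x in t)
    s12 = sum(x ** 3 for x in t)
    s22 = sum(x ** 4 for x in t)
    r1 = sum(x * y for x, y in zip(t, z))
    r2 = sum(x * x * y for x, y in zip(t, z))
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b

# Synthetic series generated with a = -0.02, b = 0.0003 (u-shaped in logs):
years = list(range(1973, 1998))
intensity = [20.0 * math.exp(-0.02 * t + 0.0003 * t * t) for t in range(25)]
a, b = fit_model2(years, intensity)
```

A negative a with a positive b reproduces the u-shaped intensity path that the text notes as the distinguishing feature of Model 2.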
B. An Overview of the DICE Model, and Some New Developments

The DICE model comprises a representative agent, optimal growth model with an intertemporal objective function which maximizes the present value of utility. The decision variables are the rate of investment and the fraction by which GHG emissions are reduced. The model has a 400 year horizon starting from 1965, and operates in time steps of one decade. The pure rate of time preference is 3% per year.

The world economy produces a composite economic product using a constant returns to scale, Cobb-Douglas production function in capital and labor with Hicks-neutral technical change. Production is associated with the emission of GHGs. The model assumes that only CO2 and chlorofluorocarbons (CFCs) are controlled. Other GHGs are determined exogenously. The uncontrolled level of emissions in any period is proportional to the level of output. The transformation parameter is assumed to decline over time, according to the growth in total factor productivity.

The accumulation of GHGs in the atmosphere depends not only on the emission levels, but also on the rate at which carbon diffuses into the deep ocean. The ambient atmospheric carbon level in any period, therefore, depends on two time-invariant parameters - the atmospheric retention ratio and the rate of transfer to the deep ocean. The accumulation of GHGs results in the rise of global mean surface temperature. The relation between GHG emissions and increased radiative forcing has been derived from empirical studies and climate models. The link between increased radiative forcing and climate change is established by another geophysical relation that incorporates the lags in the warming of the various layers in the climate system, such that a doubling of the ambient CO2 concentration increases radiative forcing by 4.1 Wm-2.

The economic impact of climate change, represented by the fraction of output lost, is a quadratic function of the increase in atmospheric temperature. The cost of reducing emissions is also assumed to increase with the rise in temperature through an empirically determined relationship. Damage and cost relations come together through an additional shift parameter in the production function.

Details of the model, including the GAMS code, are available in Nordhaus (1994). The model has also been developed at a regionally disaggregated level (Nordhaus & Yang, 1996). Howarth (1996) has interpreted the representative agent model in the context of an overlapping generations framework. Hall (1996) questions the use of such models to obtain optimal climate policies. He argues that the correct framework for analyzing climate policy is based on the concept of geoeconomic time, which incorporates the interaction between the economic system and the earth's geophysical system.

Recently, both the 1994 and 1996 Nordhaus models were updated. The most significant change is a multi-reservoir climate model calibrated to existing climate models (Nordhaus & Boyer, 1999). In addition, the regionally disaggregated RICE model incorporates carbon-energy as a factor input in the production process. While this is an improvement over the simple two-factor production function used in the earlier DICE and RICE models, the formulation is subject to the same criticisms applicable to other models mentioned in the text. The new RICE-98 model assumes a long-run carbon supply curve. The supply price of carbon is determined by the ratio of cumulative consumption to remaining resources. The model does not distinguish between the different fossil fuels. A generic carbon-free backstop is assumed to be available at a high cost. None of these changes would affect the qualitative results of our analysis.
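The accounting chain described above (emissions, atmospheric accumulation, radiative forcing, damages) can be sketched in a few lines. The parameter values below are illustrative stand-ins, not Nordhaus's calibration:

```python
# A compressed, illustrative step of the DICE-style accounting: emissions
# accumulate in the atmosphere subject to a retention ratio and transfer to
# the deep ocean, forcing is logarithmic in the carbon stock, and damages
# are quadratic in warming. All numbers here are assumed for illustration,
# except the 4.1 W/m^2 forcing from a CO2 doubling cited in the text.
import math

def carbon_step(stock, emissions, retention=0.64, transfer=0.008,
                preindustrial=590.0, forcing_2x=4.1):
    """Advance atmospheric carbon one decade; return (new stock, forcing)."""
    new_stock = stock + retention * emissions - transfer * (stock - preindustrial)
    # Logarithmic forcing: a doubling of concentration yields forcing_2x.
    forcing = forcing_2x * math.log(new_stock / preindustrial) / math.log(2.0)
    return new_stock, forcing

def damage_fraction(temp_rise, coeff=0.0035):
    """Quadratic damage function: fraction of output lost (coeff assumed)."""
    return coeff * temp_rise ** 2

stock, forcing = carbon_step(750.0, 100.0)
loss = damage_fraction(2.0)
```

The multi-reservoir climate model of Nordhaus and Boyer (1999) replaces the single atmospheric stock above with several linked reservoirs, but the bookkeeping is of the same kind.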
C. Integrating the Energy Module with the DICE Model

The integrated energy-DICE model is run in three steps. (Copies of the corresponding GAMS code are available from the authors.)

(1) The pure Ramsey growth model is run to obtain initial values for the gross economic output and the discount rate. The Ramsey model is obtained as a subset of the DICE model, excluding the emissions, concentrations, forcings, temperature change, damage, and cost equations. Since there are no climate effects in this model, the damage coefficient in the production function is set at 1. The discount rate is obtained as the rate of return on capital.

(2) The equilibrium per capita economic output and the discount rate obtained in step 1 are used as the starting values for the energy model, which simulates the demand for oil, coal, natural gas, and the synfuel. The per capita demands for coal, natural gas, and the synfuel are modeled as Cobb-Douglas functions of per capita output, own prices, and prices of substitute fuels. The optimal production horizon for oil as well as the equilibrium production trajectory are obtained using equations (3) and (4) in the text.

(3) The total demand for each energy type determines CO2 emissions through emission coefficients (see equation (5) in the text, and Table A in this Appendix). The emissions are then used to determine optimal temperature change and its economic impacts using the climate change equations of the DICE model. In turn, these determine the optimal net economic output, i.e. economic output after taking account of climate impacts.

Finally, steps 2 and 3 are repeated using the net economic output and the corresponding discount rate till the integrated energy-economy model converges to a steady state.
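The three-step procedure reduces to a fixed-point iteration on net output. A stylized sketch (the chapter's actual implementation is in GAMS; the functional forms and numbers below are placeholders, not the model's equations):

```python
# Stylized version of the loop in Appendix C: start from gross Ramsey output,
# simulate energy demand and the resulting climate damages, and iterate on
# net output until it converges. All forms and numbers are stand-ins.

def energy_demand(income):
    """Step 2 stand-in: energy demand rises with income."""
    return 0.5 * income

def climate_damage(energy):
    """Step 3 stand-in: damages rise with energy use (hence emissions)."""
    return 0.001 * energy ** 2

def integrated_model(gross_output, tol=1e-8, max_iter=200):
    output = gross_output                  # step 1: pure Ramsey starting value
    for _ in range(max_iter):
        e = energy_demand(output)
        net = gross_output - climate_damage(e)
        if abs(net - output) < tol:        # steady state reached
            return net
        output = net                       # repeat steps 2 and 3
    raise RuntimeError("did not converge")

net_output = integrated_model(100.0)
```

With these placeholder numbers the iteration contracts quickly: net output settles a little below the gross Ramsey value, which is the qualitative behavior the appendix describes.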
D. Parameter Values for the Energy Module

Table A.

Parameter                                           Value           Data Sources

Energy demand elasticities                                          Drennen (1993)
  Own-price                                         -0.5
  Cross-price                                        0.25
  Income                                             1

Energy prices: 1965, 1995
  Coal (price to utilities, $/ton)                  21.82, 22.56    EIA (1994 and 1996, respectively)
  Natural gas ($/1000 cf)                            2.27, 3.14     AGA (1981), EIA (1996)
  Growth rate (% per year)                           0.1

Per capita energy consumption: 1965                                 Based on energy data from Brown et al. (1995) and population data from Nordhaus (1994)
  Coal (mbtu)                                       15.58
  Natural gas (mbtu)                                 7.14

Cost of backstop                                                    Based on Chapman (1993)
  Initial value ($/bl)                              55
  Growth rate (% per year)                           0.1

Cost of extraction                                                  Based on Chapman (1993)
  1965 value ($/bl)                                  6.71
  Growth rate (% per year)                           1.61

Remaining oil resources: 1965 (trillion barrels)     2.5            Based on 95th percentile of distribution for 1990 and cumulative production from 1965-1990. See text also.

Carbon coefficients (BTC/quad)                                      Based on Manne and Richels (1992)
  Coal                                               0.0254
  Oil                                                0.0210
  Natural gas                                        0.0144
  Synfuel                                            0.0421

Notes: 1. Data are in 1989 $ where applicable. The base year was changed using the implicit GDP deflators obtained from EIA (1994) and BEA (1996). 2. Natural gas price is the volume-weighted average for all consumers.
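The role of the carbon coefficients in Table A can be illustrated with a short calculation in the spirit of equation (5): total emissions are fuel consumption in quads times the coefficients in BTC per quad, summed over fuels. The consumption figures below are invented for illustration:

```python
# Carbon emissions from energy use, using the Table A coefficients
# (billion tons of carbon per quad). The fuel-mix quantities are
# hypothetical, not the model's simulated demands.
CARBON_COEFF = {
    "coal": 0.0254,
    "oil": 0.0210,
    "natural_gas": 0.0144,
    "synfuel": 0.0421,
}

def carbon_emissions(quads_by_fuel):
    """Total carbon emissions (BTC) from fuel use measured in quads."""
    return sum(CARBON_COEFF[f] * q for f, q in quads_by_fuel.items())

# Hypothetical global fuel mix, in quads:
demand = {"coal": 90.0, "oil": 150.0, "natural_gas": 80.0, "synfuel": 0.0}
emissions = carbon_emissions(demand)
```

Shifting quads from oil into the synfuel roughly doubles the carbon content of that energy (0.0421 versus 0.0210 BTC/quad), which is the mechanism behind the chapter's pessimistic emissions path.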
THE CLEAN DEVELOPMENT MECHANISM AND ITS CONTROVERSIES*

Larry Karp and Xuemei Liu

ABSTRACT

The Clean Development Mechanism (CDM) has been proposed as a means of reducing the costs of abating greenhouse gases, and as a means of assisting developing countries. Although the CDM offers apparent environmental benefits, in addition to benefiting both investors and developing country hosts, it has generated considerable controversy. We review and evaluate the arguments surrounding the CDM, and we provide new empirical evidence concerning its potential benefits.
The Long-Term Economics of Climate Change, pages 265-285. Copyright © 2001 by Elsevier Science B.V. ISBN: 0-7623-0305-0

INTRODUCTION

It is important that developing countries participate in efforts to limit greenhouse gas (GHG) emissions. Developing countries contribute a substantial and growing share of total emissions. In addition, their participation in plans to control the global stock may be essential in overcoming the resistance within developed countries to implementing the Kyoto Protocol. There is no ethical basis and little political will for coercing developing countries into agreeing to a limit. There is little prospect that they will voluntarily limit emissions without compensation. Such a limit may not be in their self-interest, even if they were able to overcome the problems of free-riding. The Clean Development Mechanism (CDM) has been proposed as a method for obtaining
the cooperation of developing countries in controlling GHG emissions. This proposal has been controversial and its efficacy is uncertain. We have two objectives in this study: to attempt to shed light on the controversy, and to provide empirical evidence of the potential benefits of the CDM.

The growth rate of GHG emissions in developing countries is increasing, and their aggregate emissions are projected to exceed those of developed countries within a few decades (although developing countries' per capita emissions will still be lower). Even if developed countries manage to control their emissions, global stabilization of GHG concentrations requires reductions in developing countries. The CDM reallocates reductions from developed to developing countries and therefore does not directly lead to the additional aggregate reductions that are needed for stabilization. However, by participating in the CDM, developing countries may help to defuse political opposition within the developed countries to the Kyoto Agreement. Thus, the CDM might be part of the solution to the problem of excessive emissions, although it is certainly not the entire solution.

Developing countries were not willing to commit to making reductions under the Kyoto Protocol. Developed countries were not willing to impose sanctions, such as trade or credit restrictions, to induce participation. Since developed countries are responsible for two-thirds of current emissions and for three-quarters of historical GHG emissions, they are largely responsible for the current anthropogenic stock. On ethical grounds it is therefore hard to argue that developing countries should bear the cost of limiting GHG stocks. Developing countries may be the greatest victims of global warming, because of geography and because of their limited ability to adapt to a changing climate (Goldemberg, 1997).
Nevertheless, on the grounds of rational self-interest, it is debatable whether they should be willing to incur costs to control the stock of GHGs. The relation between GHG stocks and global warming, and the resulting damages from global warming, are uncertain and lie in the future. In the meantime, developing countries have urgent needs involving food supplies, health and education, and local environmental problems. Their reluctance to divert resources from these needs in order to target a global environmental problem is understandable.

The Clean Development Mechanism (CDM) was proposed as a means of efficiently reducing GHGs. The rationale for the CDM is the assumption that it is cheaper to reduce emissions in developing countries. Under the CDM, industrialized countries (or firms in those countries) which pay for abatement in developing countries would receive credits. These credits would be used to offset the developed country's agreed reductions. If it functioned properly, the
CDM would be similar to a market for tradeable permits, since abatement would tend to occur where it is cheapest. The opportunity to abate cheaply in a developing country rather than expensively at home would be valuable to the developed country - the value is the difference in the abatement costs. Provided that the developing countries are able to capture some of this surplus, they would be better off than under the status quo. In this sense, the CDM is similar to a market for tradeable permits in which the initial allocation treats the developing countries generously.

Although the assumption that abatement is cheaper in developing countries is plausible, and the CDM has characteristics that offer potential benefits to all parties, the idea has attracted considerable controversy. In the next section we describe the CDM and identify the protagonists in the debate. The following section considers several of the arguments against the CDM. Next we present econometric evidence regarding the assumption that abatement is cheaper in developing countries.

FUNDAMENTALS OF THE CLEAN DEVELOPMENT MECHANISM
We begin with a clarification of terminology, and then discuss the relation between the CDM and a market for permits. We then identify the chief actors in the debate.

Joint Implementation and the CDM
The 1992 UN Framework Convention on Climate Change considered the use of Joint Implementation (JI), a general term which describes cooperative agreements for reducing emissions. In such an agreement, a developed country receives credits for "jointly implementing" an abatement project in a host country. The Europeans proposed JI as a system for allowing emissions trading within Annex I countries. The U.S. wanted to allow Annex I countries to receive credits for financing reductions in developing countries. The U.S. position was controversial because the Framework Convention on Climate Change had called for North-to-South technology transfers in addition to - not as a substitute for - reductions in the North's emissions.

In much of the literature, JI has been used to refer to the mechanism for promoting cooperation between developed countries and developing country hosts. For example, the UNEP glossary defines JI as "a controversial concept whereby a developed country would receive some type of credit for emissions reductions it helps to finance in a developing country."
As a means of finessing the disagreement between the U.S. and European positions, and in recognition of the asymmetry between developed and developing countries, the Kyoto Protocol distinguished between cooperative agreements among developed countries and agreements in which the host is a developing country. The former type of agreement was denoted "Joint Implementation", and the latter the "Clean Development Mechanism". We follow this usage, and we focus on the CDM. Most of the earlier literature does not recognize the distinction, and refers only to JI. Some of the issues surrounding the two types of proposals are the same. However, the asymmetry between developed and developing countries leads to important differences.

The CDM and a Tradeable Permits Market
The CDM, like a tradeable permits market, promotes efficiency by directing abatement to the time and place where it is cheaper. Both the CDM and tradeable permits offer developing countries the possibility of capturing some of the surplus created by the savings in abatement costs.

There is, however, an important distinction between the CDM and a market in permits. A permit to emit a given quantity of pollution over a given time period is clearly defined, whereas the CDM could take many forms. This adaptability is one of its most important characteristics. Buying and selling a permit is a straightforward transaction, but entering into an agreement under the CDM requires negotiating a contract. The developed country and the host need to agree on the terms, including the details of the technology and the financing, and the responsibilities of the host. The most significant transactions costs arise from the need for a third party to decide what it is that the developed country is buying, i.e. the abatement credits it obtains. The transactions costs appear to be higher under the CDM than in a market for tradeable permits.

Nevertheless, the CDM has some practical advantages over a market for tradeable permits. In order for developing countries to participate in a market for tradeable permits, they must either have permits to sell or an incentive to buy them. Both of these conditions require that the country has agreed to an allocation of permits (a ceiling on emissions). However, the developing countries do not feel obliged to incur costs to reduce GHG stocks, and are unwilling to agree to an allocation that might lead to such costs. Since the Kyoto Agreement did not constrain the developing countries in any manner, there apparently existed the potential for a mutually advantageous deal between the developed and developing countries. In this deal the developing countries would receive a generous allocation and be entitled to sell permits. Both developed and
developing countries would benefit from reduced global emissions, and the developing countries would benefit from permit sales. Even in the absence of uncertainty it would have been very difficult to strike this kind of deal. The developed countries are reluctant to make the large transfers to developing countries that would have been implicit in this arrangement. Given the uncertainty about future emissions and abatement costs, and thus the uncertainty about what constitutes a generous allocation, there was no chance for such a deal.

The CDM is a compromise that achieves some of the benefits of a market for tradeable permits, without requiring an assignment of property rights to a commodity whose value is highly uncertain. The CDM commits agents to nothing, and therefore involves no politically visible risk. (It does, of course, involve the environmental risk associated with doing nothing.) The CDM is merely an agreement to allow certain types of contracts in the future, not a division of property rights. It leaves open the possibility that some of the gains from trade can be realized.

Some of the transactions costs that arise under a CDM would also occur under tradeable permits. A CDM agreement requires monitoring at the level directly related to the investment project, whereas tradeable permits require monitoring a country's aggregate emissions. This aggregate monitoring, verification of the results, and enforcement of ceilings may be especially difficult in developing countries. It may be difficult to measure aggregate emissions, or to determine whether a target has been exceeded because of reasons beyond the control of the governing authorities. These reasons might include acts of nature such as forest fires, or an undeveloped regulatory structure. Similarly, enforcement may be difficult because of the problem of making credible commitments to impose sanctions against an entire nation, especially a poor one.
Monitoring and enforcement of the CDM present their own set of problems, but these may be more tractable because of their greater specificity. The target of a CDM is likely to be easier to define and to measure (compared to an economy-wide target). In many cases it may be possible to determine whether the failure to reach the target is due to actions under the control of the signatories. The punishment of non-compliance (e.g. withholding future credits) is more credible, because it is possible to focus the punishment on a narrow group of responsible individuals.

Superficially, GHG pollution permits appear to be a homogeneous commodity, but trade in permits between developed and developing countries is unlike trade in grain, or even trade in SO2 permits within the United States. It would be difficult to use the judicial system and public pressure to enforce
a commitment to reduce emissions within developing countries. Although an actual reduction in GHG emissions anywhere in the world provides approximately the same environmental benefits, the environmental value of a promise to reduce emissions may differ widely. Under the CDM it is easier, relative to a market in permits, to take these factors into account.

Finally, many groups - particularly environmentalists - are skeptical of the benefits of markets in general, and markets for pollution permits in particular. The CDM requires negotiation between the investor and the host, and verification by a third party. In these respects it resembles a political process more than a market transaction. It therefore may face less opposition from environmentalists and other NGOs than would a market in tradeable permits.
The Protagonists

The members of different groups have overlapping interests. Environmentalists' ability to lobby for the environment depends to a great extent on their (relative) wealth. Most environmentalists also care about poverty in developing countries. Average citizens - producers and consumers - in developed countries are becoming increasingly aware of global environmental risks, but are reluctant to decrease their consumption of material goods. Many people in developing countries also care about global environmental problems, although these do not lead their list of priorities. Despite their overlapping interests, it is useful to identify distinct protagonists as an aid in sorting through the arguments about the CDM.

Environmentalists want to reduce the stock of GHGs. Their concerns for equity or economic efficiency are secondary to protecting the environment.

Investors from the developed countries want to achieve exogenous abatement targets as cheaply as possible. In a competitive economy, lower costs of abatement benefit consumers by lowering commodity prices, and benefit (most) owners of factors of production (workers, capital owners) by increasing factor prices. To the extent that these benefits are widely shared - i.e. to the extent that markets are competitive and factor ownership is widely dispersed - the self-interest of investors is aligned with that of consumers and producers in the developed countries. In addition to the immediate pecuniary benefits, investors may also be interested in promoting exports, expanding investment opportunities in developing countries, improving goodwill, and increasing standing in international negotiations (Parson & Fisher-Vanden, 1997).

Host governments want to capture surplus from a CDM transaction. In some cases this surplus may be a monetary transfer, e.g. if the host is paid to maintain forests as a carbon sink rather than harvesting them. In other cases the benefits
of the transaction may be in the form of job opportunities, technology transfer, biodiversity and habitat protection, or improvement of local air and water quality. The host government is also concerned that the transaction is consistent with development goals, and that it does not foreclose future development opportunities.

This description of the protagonists' objectives makes the CDM appear universally beneficial. By providing direct benefits to investors and host governments, the CDM makes it cheaper to achieve the environmentalists' objective, thus decreasing resistance to that objective. However, there is considerable dispute about the merits of the CDM.
THE DEBATE OVER THE CDM

In order for the CDM to be successful, it must enable developed countries to reduce their costs of abating GHG emissions and it must benefit (or at least not harm) developing countries. Both of these conditions have been met with skepticism, although most of the opposition to the CDM has centered on the second criterion. Here we attempt to disentangle and assess the plausibility of the main arguments against the CDM.

Transactions costs are larger than the difference in abatement costs. Harvey and Bush (1997) summarize UNEP studies on seven developing and three developed countries, and additional studies using estimates of emission reduction costs for Poland, the United Kingdom, Denmark and Zimbabwe. Based on these reports they conclude that monitoring and verification costs could exceed the abatement cost savings. This empirical evidence is valuable, but the magnitudes of the difference in abatement costs and of transactions costs are speculative. Even if the transactions costs are initially high, they may decrease with experience. The empirical evidence should not be construed as an argument against the CDM, but as a warning not to exaggerate its potential benefits.

The CDM would interfere with national sovereignty (Goldemberg, 1997). This objection is based on developing countries' distrust of developed nations. Without denying the historical basis for this attitude, we emphasize that the CDM is a voluntary arrangement. The objection to the CDM on the basis of distrust is no more rational than is the objection to any form of foreign investment - which developing countries generally pursue. Nevertheless, distrust is a key feature of several arguments against the CDM.

Developing countries lack the technical expertise to negotiate complex CDMs and would be exploited. Even if the CDM would generate a net surplus,
272
LARRY KARP AND XUEMEI LIU
developing countries would achieve little or nothing, thus violating the second condition for its success. This kind of objection also surfaces in the arena of general trade negotiations at the World Trade Organization. It is another manifestation of the distrust developing countries feel toward developed countries, and taken to its logical conclusion is an argument for minimizing relations.

The lack of technical sophistication coupled with naivete is likely to be a serious disadvantage in a negotiating situation. However, lack of technical sophistication in a skeptical bargainer may be as damaging to the rival. For example, suppose that the technically sophisticated investor knows the true value of a particular project (i.e. he knows the savings in abatement costs). The unsophisticated but skeptical developing country negotiator cannot accurately assess the value, but assumes that his country (the host) will be cheated. In order to close the deal, the investor may have to offer the host a large share of the surplus; in some cases where there is a positive surplus, the deal is not completed because the investor cannot offer the host enough to overcome his suspicion. In this example, the host may benefit from his lack of information because it can lead to an increased share of surplus. Nevertheless, imperfect information reduces aggregate expected surplus.

This example suggests that imperfect information resulting from the lack of technical expertise should be viewed as a form of transactions cost. It reduces aggregate surplus and may harm either or both parties in the negotiation. The investor may have as much (or more) incentive as does the host to improve information. Like other types of transactions costs associated with the CDM, it is likely to decrease with experience. Rather than providing an argument against the CDM, the fear of being duped is an argument in favor of providing a public good: public information.
The CDM may distort host countries' development priorities. This objection could be viewed as a repetition of the fear regarding the erosion of national sovereignty. However, a fundamentally different kind of argument can be made: the CDM may increase the scope for the abuse of national sovereignty. The analogy between debt and the CDM is useful here. The ability to borrow on international markets has left developing nations with tremendous debt, in some cases without offsetting benefits. This outcome may be partly due to bad luck, but in part is a consequence of incompetence and corruption on both sides of the debt contract. There is a temptation for borrowers to saddle future generations with debt in order to enjoy the freedom to misspend current loans. The debt contract provides a method for the current government of the developing country, in complicity with lenders, to
appropriate future national earnings. Individuals currently in power are not able to sell national assets because they do not own them; they can, however, incur debt, which is a method of selling the future returns to these assets. Unfortunately, this "sale" does not merely represent a redistribution, but may entail enormous real costs. National income may be diverted from useful development projects.

A CDM is likely to involve an intertemporal exchange, as does debt. Instead of an intertemporal exchange of dollars, the CDM may involve the exchange of technology or development assistance today in return for the promise of using a forest as a carbon sink in the future. If the current benefits are squandered or stolen by the governing elite, the host country may lose from the transaction even if the global environment benefits. It is conceivable that bribes might be paid to smooth a CDM, or that a bogus project might be invented to launder development assistance. In spite of the extensive regulation of the banking sector, there appears to have been considerable abuse of debt contracts between developed and developing countries. In view of the near absence of regulatory experience for CDMs, the danger of abuse here seems even greater. However, the abuse of debt contracts was exacerbated by the moral hazard problem resulting from lenders' belief that they would be bailed out. Investors in CDMs may lack a similar incentive to push bad investments.

The possibility that the CDM would enlarge the scope for abuse by governing elites means that the third party that monitors each agreement needs to be concerned not only that the environmental benefit is achieved, but also that the developing country's objective is met. In order that the second type of monitoring not be construed as paternalism, the third party needs to be truly international.
CDM investors would choose the most lucrative projects; if, in the future, the developing country is obliged to reduce emissions, it would be left with only high-cost options (Parson & Fisher-Vanden, 1997; Goldemberg, 1997). Rather than being a mistake, beginning with the most lucrative projects is efficient. The theory of non-renewable resources provides a useful analogy here: it is efficient to first extract low-cost deposits before mining more expensive deposits. The real fear may not be that investors undertake the most lucrative projects first, but that the host country receives inadequate compensation. This point was addressed above.

Long-term commitments, such as carbon sinks, may foreclose future development opportunities, such as increased agricultural output (Goldemberg, 1997; Parson & Fisher-Vanden, 1997). This objection is also a variant
of the fear that the host country will be inadequately compensated, in this case for the loss of an option. The value of this option, for the developing country, must be included in the calculation of that country's abatement cost. Attempts to measure relative abatement costs may neglect option values; in that case, they would be likely to underestimate abatement costs in developing countries.

By lowering abatement costs, the CDM would discourage developed countries from improving abatement technologies, or would reduce their incentive to alter domestic policies to reduce emissions (Parson & Fisher-Vanden, 1997). Implicit in this argument is the belief that higher abatement costs contribute to lower emissions. There are two parts to the argument, both implausible. One part asserts that higher abatement costs promote policy-induced conservation, and thus lower emissions. However, the primary environmental objective is (presumably) lower GHG emissions, regardless of whether these are obtained by domestic or foreign reductions. By making abatement cheaper, it becomes more likely that abatement targets will be met.

The second part of the argument relies on the existence of a market failure, such as the inability of firms to capture the benefits of cost reductions, e.g. those achieved through learning-by-doing. The knowledge used to reduce abatement costs may be a public good, in which case firms invest too little (from society's perspective) in developing this knowledge. One way to induce more investment is to provide firms with a (limited) monopoly on future sales of abatement services, e.g. by disallowing the CDM. Here, disallowing the CDM is a quasi-trade restriction - the prohibition against the "import" of abatement services from developing countries. This trade restriction is an inefficient remedy to the market failure, since it requires the country to forgo the use of cheaper abatement services in developing countries, available under the CDM.
In addition, this policy may be ineffective because of time-consistency problems of the type discussed in Karp and Paul (1994). The production of cheaper abatement technologies is an investment decision. Firms make current research investments because of the expectation of future rewards. If a quasi-trade restriction (banning CDMs) is used to induce innovation, the policy needs to affect future abatement costs. Therefore, it is necessary that the trade restriction be maintained in the future. If the real reason for banning the CDM is the desire to induce technological innovation, this reason vanishes as soon as the innovation has been produced. When this innovation occurs, the government has an incentive to begin allowing trade (i.e. to begin using the CDM). 1 Recognizing this, rational forward-looking firms do not take the current trade restriction as a signal of
future policy. The current ban on CDMs therefore does not succeed in inducing investment, but it does result in lost opportunities to reduce abatement costs in the current period. In short, if the desire to induce innovation is the real reason for rejecting the use of the CDM, the rejection is likely to be ineffective because of time-consistency problems. Even if these problems can be overcome, i.e. if the government can make a credible commitment to maintain the prohibition, the restriction is an inefficient means of inducing innovation. The correct policy is to subsidize research, so that the private and social returns to this research are equal.

Investors would transfer obsolete technologies to developing countries, locking them into a dependent role (Karekezi, 1997). This claim surfaces in general complaints about foreign investment in developing countries, and is also raised in the context of the CDM. The argument is part conspiracy theory, and in part it is an example of the claim that developing countries get a bad deal in negotiations - an issue which we discussed above.

In summary, many of the objections to the CDM are based on the concern that it would be detrimental to developing countries, because of their weak bargaining position relative to developed countries, or because of corruption in governing elites. This concern is not specific to the CDM but also arises in discussions about liberalization of trade and capital markets. One could accept that the exchange between developed and developing countries has been unjust, without concluding that developing countries should seek to reduce exchange. However, the recognition of the possibility of unequal exchange can be useful if it helps in constructing a mechanism that does benefit developing countries. Developing countries should participate in the creation of the CDM framework to ensure that it serves their goals.
Another basis for rejecting the CDM is that it might harm the environment, via its induced changes in policy and technology. Although it is possible to rationalize this position, the rationalization is implausible. Environmentalists should encourage the formation of the CDM.

A third basis for skepticism about the CDM is that the potential benefits are small. The transactions costs may be greater than the difference in abatement costs, and the abatement costs in developing countries may be underestimated, e.g. by ignoring option values. The best way to test this conjecture is to experiment with the CDM. This experimentation may help to reduce transactions costs and lead to better estimates of the actual abatement costs. The construction of a framework for the CDM, and obtaining the information needed to improve its operation, are public goods. The developed countries should be willing to underwrite the costs of providing these public goods.
ASSESSING THE POTENTIAL GAINS OF THE CDM

It is widely believed that the costs of abating GHGs are lower in developing countries, and that the potential gains of the CDM are large. Above we discussed empirical evidence that questions this assumption. Here we describe a simple econometric model that provides a different way of examining the question. We treat carbon dioxide emissions as a proxy for GHGs, and we estimate the relation between these emissions and GDP for developing countries. Using these estimates, we calculate the marginal reduction in GDP caused by a reduction in a country's emissions. This marginal change provides a measure of the country's marginal abatement cost. We compare these estimates of developing country marginal costs to an estimate of the equilibrium price of permits when carbon trade is allowed amongst OECD countries that are required to reduce their emissions to 1990 levels. That estimated equilibrium price was derived in Karp and Liu (1998).
The model and the estimates

We use data from developing countries to estimate a two-equation system adapted from Karp and Liu (1998). One equation, the revenue function, explains GDP as a function of carbon emissions and other factors, and the second equation explains the level of emissions. We have data for 37 developing countries: 15 low-income countries, 16 lower-middle-income countries, and 6 upper-middle-income countries. These 37 countries account for 61% of the total CO2 emissions and 40% of the total GDP of the 158 developing countries (defined as countries in which 1996 per capita GNP was $9,635 or less). Thus, the countries in our sample have a high intensity of emissions relative to their income.

We assume that there exists a GDP-pollution trade-off frontier that depends on the country's factors of production (e.g. labor and capital). This frontier is the graph of the maximum level of GDP for a given level of emissions and for given factors of production. Denoting Y, E and Z as, respectively, GDP, CO2 emissions and the vector of exogenous factors of production, the implicit form of this trade-off frontier is G(Y, E) = F(Z), for some functions G and F. The variables Y and E are endogenous. Inverting the function G, we write the trade-off as Y = H(Z, E). This equation is the revenue function. We can think of E as a proxy for "environmental services"; these services play a role in production similar to other factors such as labor and capital.
The level of emissions is determined by the country's level of income, its economic structure (e.g. manufacturing as a share of output) and regulatory decisions. This emissions function is E = M(Y, X), where M is some function and X is a vector of exogenous explanatory variables. In the absence of data about many of the variables which should ideally be included in the vector X, we include only the quantity of energy consumption for commercial use. We consider energy consumption as a proxy for the country's economic structure (i.e. as an alternative to share of manufacturing in GDP). 2

Our data consist of annual observations from 1975 to 1990 for the 37 countries: Y is GDP (measured in constant 1987 US$); E is Industrial CO2 Emissions (in kt, i.e. thousands of metric tons); K is Physical Capital Stock (in constant 1987 US$); L is Labor Force; H is Human Capital Education (general pupils); N is Commercial Energy Use (kt of oil equivalent); 3 Pop is the country population. We include a time trend, t, in the revenue function to account for exogenous changes that we cannot measure, and we include country-specific constants in both the revenue and emissions equations to account for factors such as land and culture. We divide all variables (except time and the country dummies) by country population and take logs to obtain the following loglinear per capita relations:

    y_it = c_i + α1 k_it + α2 l_it + α3 t + α4 h_it + α5 e_it + ε1,it        (1)

    e_it = d_i + β1 y_it + β2 n_it + ε2,it        (2)
Lower case variables y, k, l, h, e, and n are the logs of the per capita values of the corresponding upper case variables. The subscript i identifies the country and the subscript t identifies the time period; ε_it is the error associated with equation i in period t. Equation (1) is the revenue function and equation (2) is the emissions function. Since y and e are endogenous, we estimate this system using Three-Stage Least Squares.

Table 1 contains the coefficient estimates and t-statistics. All parameters except that on ln y in the second equation are highly significant. The second equation implies that energy consumption and emissions are approximately proportional. 4 The sum of all the coefficients in the first equation is 1.2, which implies increasing returns to scale in capital, labor, human capital and "environmental services". This estimation is comparable to that for OECD countries (Karp & Liu, 1998), although the magnitudes of the elasticities are different. The point estimate of the elasticity of GDP with respect to capital is 0.3 for developing countries, compared to 0.52 for OECD countries. The corresponding
Table 1. Estimation Results.

                          Coefficient     t-ratio
Revenue function
  ln(K per capita)            0.300         7.871
  ln(L per capita)            0.497         3.878
  t                          -0.005        -3.605
  ln(H per capita)            0.056         3.371
  ln(E per capita)            0.353         8.841
  Constant                    8.072        13.950

Emissions function
  ln(Y per capita)           -0.029        -1.427
  ln(N per capita)            1.069        60.580
  Constant                    1.793         6.717
elasticities of labor are 0.497 (developing countries) and 0.29 (OECD), and the elasticities of emissions are 0.35 (developing countries) and 0.11 (OECD).

The negative coefficient on the time trend implies that if the inputs for which we have data (K, L, H and E) had been held constant, per capita GDP would have declined by approximately half a percent per year (e^{-0.005} - 1 = -4.988 × 10^{-3}). A number of economists have expressed the opinion that growth in developing (particularly in Asian) countries has not been associated with substantial increases in factor productivity, but instead has been a consequence of increases in factors of production. If this opinion is correct we would expect the coefficient on the time trend to be small and possibly statistically insignificant. We were surprised that it was negative and significant. When we included a time trend in the second equation (also negative and significant), the trend in the first equation falls slightly in magnitude (by about 10%) but remains negative and significant. This change in model specification leads to very small changes in other coefficients, and virtually no change in the estimates of marginal costs that we report in the next section.

We failed to reject the hypothesis of constant returns to scale in production; the p-value of the test is 0.12. When we impose the restriction of constant returns to scale, the elasticity of output with respect to labor falls from 0.497 to 0.304 and the elasticity with respect to skilled labor falls from 0.056 to 0.047. The other coefficients are virtually unchanged. More important for our purposes, our estimates of the marginal costs are virtually unchanged.
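The two numerical claims above - the returns-to-scale sum and the implied productivity decline - can be checked directly from the point estimates in Table 1; the following short sketch does exactly that.

```python
import math

# Revenue-function point estimates from Table 1
alpha_k, alpha_l, alpha_h, alpha_e = 0.300, 0.497, 0.056, 0.353
alpha_t = -0.005  # coefficient on the time trend

# Returns to scale: sum of the elasticities of K, L, H and E
returns_to_scale = alpha_k + alpha_l + alpha_h + alpha_e
print(round(returns_to_scale, 3))  # 1.206 -> increasing returns to scale

# Implied annual change in per capita GDP with measured inputs held fixed:
# e^{alpha_t} - 1
trend_effect = math.exp(alpha_t) - 1
print(round(trend_effect, 6))  # -0.004988, about half a percent per year
```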
Marginal product of emissions
The key premise of the CDM is that the marginal cost of abating CO2 emissions is significantly lower in developing countries than in developed countries. The evidence reviewed by Harvey and Bush (1997), described above, provides mixed support - at best - for this premise. Here we provide a different perspective on the relative abatement costs in developing and in OECD countries. We suppose that initially OECD countries are able to trade CO2 emissions permits amongst themselves, and that each country is given an allocation of permits equal to its 1990 level of emissions. Thus, aggregate OECD emissions are constrained by the Kyoto Agreement. In Karp and Liu (1998) we obtained an estimated equilibrium price of $157 (in 1990 dollars) for a tonne of CO2, using the OECD production function coefficients described in the previous subsection. Thus, under an efficient (i.e. a competitive equilibrium) allocation of emissions within the OECD, the marginal OECD abatement cost is $157 per tonne. 5

Now suppose that in addition to being able to trade permits amongst themselves, the OECD countries are able to use the CDM to purchase emissions reductions from developing countries. These CDM transactions are efficient if and only if a developing country has a marginal abatement cost of less than $157. We use our estimates of equation (1) to calculate the marginal product of emissions in developing country i:
    MP_i = ∂Y_i/∂E_i = α5 A_i E_i^{α5 - 1}        (3)

where A_i ≡ e^{c_i} K_i^{α1} L_i^{α2} e^{α3 t} H_i^{α4} Pop_i^{1 - α1 - α2 - α4 - α5}. This marginal product is the opportunity cost of a unit of emissions, so it can be interpreted as the marginal abatement cost in country i. In calculating MP_i we treat A_i as a constant, equal to its estimated 1990 level. We made the same assumption for the OECD countries in estimating the equilibrium price under intra-OECD trade. Thus, our estimates of the developing and OECD marginal abatement costs are comparable. The magnitude of A_i obviously affects the level of MP_i, and we know that it is not literally true that A_i will remain constant. However, we are not really concerned with the absolute levels of marginal abatement costs. We care about the level of these costs relative to the OECD costs, and about the variation of costs amongst the developing countries.

Suppose that over the 1990-2010 period the growth in factors of production in country i is such that A_i increases by a factor of λ_i. In this case, the right
side of equation (3) should be multiplied by λ_i. If we had a consistent set of projections for the increases in all the factors for all of the countries (including the OECD countries), we could use these to obtain an estimate of λ_i for each i. Since we can only find projected increases for some factors for some countries, any attempt to estimate λ_i would involve considerable guesswork on our part. Since MP_i is proportional to λ_i, our results would be largely determined by this guesswork. Therefore, we adopt the simpler (in our view, "neutral") assumption that λ_i is the same for all countries. 6 If we interpret all of the results described below as indications of relative, rather than absolute, abatement costs, there is no additional loss of generality in assuming that λ = 1 for the developing countries. 7 If, for example, the reader thinks that the index of factors of production, A_i, for a particular developing country i (or for a group of developing countries) will grow more quickly or slowly than the average OECD growth, the abatement costs for that country (or that group of countries) should be increased or decreased accordingly.

Our comparisons assume that the CDM begins in the year 2010, the time at which OECD countries have (tentatively) agreed to reach their Kyoto targets. We choose the value of E_i in equation (3) by assuming that developing country emissions continue rising from their 1990 level to the year 2010, at an annual rate equal to the average rate over the 1975-1990 period. We adopt this assumption because of the lack of a consistent set of estimates for the rate of increase of emissions for all developing countries. There are estimates for some countries, and it is worth comparing these with the averages we compute.
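To make the procedure concrete, the sketch below implements equation (3) together with the emissions projection just described. The elasticity α5 = 0.353 is taken from Table 1; the country-specific scale A_i, the 1990 emissions level, and the growth rate are invented for illustration only and are not estimates from this chapter.

```python
ALPHA_5 = 0.353  # elasticity of GDP with respect to emissions (Table 1)

def project_emissions(e_1990: float, annual_growth: float, years: int = 20) -> float:
    """Project 1990 emissions to 2010 at the 1975-1990 average growth rate."""
    return e_1990 * (1.0 + annual_growth) ** years

def marginal_product(a_i: float, e_i: float) -> float:
    """Equation (3): MP_i = alpha5 * A_i * E_i**(alpha5 - 1), the opportunity
    cost (marginal abatement cost) of one unit of emissions in country i."""
    return ALPHA_5 * a_i * e_i ** (ALPHA_5 - 1.0)

# Hypothetical country: 100,000 kt of CO2 in 1990, growing 5% per year
e_2010 = project_emissions(100_000, 0.05)

# Hypothetical scale index A_i, held at its (estimated) 1990 level
mp_2010 = marginal_product(a_i=5.0e6, e_i=e_2010)

# A CDM purchase from this country is efficient only if its marginal
# abatement cost falls below the estimated OECD permit price of $157/tonne
print(round(e_2010), round(mp_2010, 2), mp_2010 < 157.0)
```

Because α5 < 1, the marginal product declines as emissions grow, so faster projected emissions growth lowers a country's estimated marginal abatement cost.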
For example, the Department of Energy's Annual Energy Outlook for 2000 (http://www.eia.doe.gov/oiaf/aeo) projects annual increases of 6.5% for China and India, two of the largest emitters amongst the developing countries, while the estimates we use are 5% for China and 7% for India.

Our estimated price of $157 for a tonne of CO2 implies a price of $575 for a tonne of carbon. 8 This price is much higher than the range found in the literature, reviewed by Karp and Liu (1998) and by Tol. Tol reports that most estimates of abatement costs are in the range of $1.4 to $35 per tonne of CO2 (i.e. from $5 to $125 per tonne of carbon - see the previous footnote), with most estimates at the low end of the range. Our model may exaggerate the cost of abatement, and thus exaggerate the price of permits. The fact that we use the same econometric model to estimate costs in both regions at least means that the two sets of estimates are comparable. Since we are interested only in relative abatement costs, any absolute bias in the estimation of abatement costs is unimportant. We have no way of knowing whether the (possible) bias is larger in one or the other region.
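The CO2-to-carbon price conversion used here (and spelled out in note 8) is simple molecular-weight arithmetic:

```python
# Molecular weights: C = 12, O = 16, so CO2 = 12 + 2*16 = 44.
# One tonne of carbon is contained in 44/12 ≈ 3.6667 tonnes of CO2.
CO2_PER_TONNE_CARBON = 44.0 / 12.0

def per_tonne_co2_to_per_tonne_carbon(price: float) -> float:
    """Convert a price per tonne of CO2 into a price per tonne of carbon."""
    return price * CO2_PER_TONNE_CARBON

# $157 per tonne of CO2 corresponds to roughly $575 per tonne of carbon
print(round(per_tonne_co2_to_per_tonne_carbon(157.0), 2))  # 575.67

# The $1.4-$35 per tonne CO2 range reported by Tol maps to roughly
# $5-$125 per tonne of carbon
print(round(per_tonne_co2_to_per_tonne_carbon(1.4)),
      round(per_tonne_co2_to_per_tonne_carbon(35.0)))
```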
Our estimates of marginal products of emissions for the developing countries might be biased upward because we use industrial carbon emissions as a proxy for total carbon emissions. Industrial emissions include only emissions arising from burning fossil fuels and manufacturing cement, and contributions from other solid, liquid and gas fuels and gas flaring. In some developing countries, CO2 emissions arising from burning fossil fuels exclude the majority of total emissions. For example, 60% of urban and almost all rural households in sub-Saharan Africa still rely on biomass energy for household energy needs (Davidson, 1992; Harvey & Bush, 1997). Therefore, some developing countries might have a high marginal product of industrial CO2 emissions (and thus a high opportunity cost of abating these emissions); nevertheless, their cost of abating non-industrial emissions might be much lower. Since our model does not include these non-industrial emissions, our estimates of the marginal product of emissions might be too high for developing countries. On the other hand, our exclusion (because of the lack of data) of the option value of being allowed to emit may bias our estimates downward.

Figure 1 compares the estimated OECD price (= marginal abatement cost) of $157 with our estimates of the developing countries' marginal abatement costs, obtained using equation (3), our parameter estimates, and the assumptions regarding A_i and E_i described above. The figure shows considerable variation in the estimated costs; the estimated cost is lower than $157 for 19 of the 37 countries in our sample. For several of those 19 countries the difference is small, and is likely to be less than the transactions costs associated with the CDM. Only the countries with the lowest estimates of abatement costs, China and Indonesia, have estimates in the range reported by Tol. Unless transactions costs are very large, these countries appear to be amongst the best candidates for the CDM.
CONCLUSION

We addressed two questions in this chapter. First, we asked whether the Clean Development Mechanism is a good idea in principle. Second, we asked whether there is likely to be a large difference in abatement costs between OECD and developing countries. In order for the CDM to be useful in practice, such a difference must exist.

The major reason for favoring the CDM is its potential to generate savings in abatement costs. In this respect, the CDM is similar to an international market in emissions permits. Despite these similarities, there are important differences between the two institutions. The transactions costs associated with the CDM are more obvious than are the costs of a market in permits. However,
[Fig. 1. Estimated marginal abatement costs ($ per tonne of CO2) for the 37 developing countries in the sample, compared with the estimated OECD equilibrium permit price of $157.]
it may be impossible to avoid those transactions costs, regardless of how the reallocation (between OECD and developing countries) of emissions is achieved. Also, the CDM is more feasible politically, because it does not require an explicit division of property rights, and it does not trigger the same visceral distaste that some environmentalists feel toward markets.

Despite its apparent advantages, some people have opposed the CDM on principle, worrying that it may hurt developing countries or reduce abatement efforts in OECD countries. In our view, neither of these objections is compelling, but the first is the more important. In order for CDMs to become sufficiently widely used to play a significant role in reducing emissions, informational asymmetries - both real and perceived - must be overcome. The developing countries must be able to negotiate with confidence. The costs of acquiring information about relative abatement costs must be underwritten by OECD countries.

Anecdote and casual empiricism suggest that abatement costs are much lower in developing countries. Previous research questions this view. We provided another perspective by comparing the marginal opportunity cost of emissions in developing and OECD countries. We estimated these marginal costs using country panel data. These estimates incorporate several (implausible) assumptions which we regard as neutral, since they do not obviously bias our conclusions in one direction or the other. Our estimates also neglect non-industrial emissions, which are likely to be important in developing countries. This neglect is likely to lead to a downward bias in the estimated difference in abatement costs between OECD and developing countries, and therefore to an underestimate of the true benefits of the CDM. Nevertheless, our estimates are useful because they suggest that there is considerable variation in abatement costs across developing countries.
The results also support previous research suggesting that we should be guarded in our optimism about the potential cost savings that can be achieved by the CDM. Some developing countries appear to be poor choices for CDMs, although in others the savings can be substantial. Our highly aggregated model cannot be used to identify (definitively) which developing countries belong in which group, but only to suggest candidates.

In summary, we suspect that both the advantages and the disadvantages of the CDM have been exaggerated. At this stage, it seems worthwhile to pursue the development of the CDM, in order to learn about relative abatement costs and to reduce transactions costs. The CDM may become an important means of reducing the costs of controlling GHGs, although it does not seem likely that it will lead to a wholesale transfer of abatement activities from OECD countries towards developing countries.
NOTES

* This research was supported by a grant from IGCC.

1. This description treats innovation as a one-shot event rather than as a continuous process. Our argument holds in a much more general setting, however. For example, suppose that the level of innovation is a continuous variable. This variable is the "state" in a game between agents and the policy-maker. The latter chooses the extent to which CDMs are restricted. That is, both the government's choice variable and the consequence of agents' actions are continuous, rather than discrete as in the text. The same kinds of time-consistency issues arise in this setting. We adopt the simpler description in the text merely for reasons of exposition.

2. In order for the linear model to be identified, we need at least one variable in the vector Z to be excluded from the vector X, and vice versa. Dean (1999) estimates a similar two-equation model for China, using water pollution as the emissions variable. She uses this model to decompose the environmental effects of trade liberalization into an income and a composition effect.

3. The GDP data, industrial CO2 data, commercial energy use, population and labor force data are taken from the World Development Indicators 1998 CD-ROM. The physical capital stock (constant 1987 local price) and human capital stock data are drawn from the Nehru and Dhareshwar data set (1998). We convert the physical capital stock data from local prices to US$ using the exchange rate data from the World Development Indicators 1998 CD-ROM.

4. Our estimates of equation (1) would be very similar if we had used a single-equation model with energy consumption rather than emissions on the right-hand side. We present the results of the systems estimator because we use the estimate of the price of tradeable carbon emissions permits obtained in Karp and Liu (1998). That paper found that income was significant in the emissions equation.
We are blending the results of two models (the first for OECD countries and the second for developing countries). We want those models to be as similar as possible, and therefore use a systems estimator in this chapter.

5. We used data on OECD countries to estimate equations (1) and (2). Using the parameter estimates from equation (1) we constructed the inverse demand function for each OECD country by setting the price it would pay for a permit equal to its marginal product of emissions, equation (3) below. Adding the individual countries' demands gives us aggregate OECD demand, which we set equal to the aggregate level of emissions allowed under the Kyoto Agreement, to obtain an estimate of the equilibrium price.

6. This assumption does not mean that all of the factors grow at the same rate in all countries. It means that the index A grows at the same rate.

7. In constructing the estimate of the equilibrium price when trade takes place only amongst OECD countries, it is important to assume that k_j = k^OECD (a constant, where j is an index of OECD countries). A change in this constant would lead to a proportional change in the estimated OECD price. However, if the values of k_j were not the same for all OECD countries, the estimated price would depend on all values of k_j, not merely on their average value. In this paper we compare abatement costs in developing countries with an equilibrium price which we take as fixed. We are not solving for a new equilibrium price. Consequently, our estimate of the marginal abatement costs in a
The Clean Development Mechanism and Its Controversies
specific developing country i can be raised or lowered, depending on our view of how the value of k_i compares with the fixed OECD value of k^OECD.

8. CO2 has a molecular weight of 12 + 2(16) = 44. Thus the ratio of the weight of CO2 to carbon is 44/12 = 3.6667. That is, 3.6667 tonnes of CO2 contain one tonne of carbon. We use this factor in converting tonnes of carbon to tonnes of CO2, and in converting prices. If a country has an abatement cost of $100 per tonne of CO2 emissions, that country would be willing to pay $366.67 for the right to emit one tonne of carbon. In some of the literature it is not clear whether the authors have in mind the price of carbon or the price of CO2.
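The conversion in note 8 amounts to a single multiplicative factor; a minimal sketch (the $100 figure is the text's own illustration):

```python
# Conversion between tonnes of carbon and tonnes of CO2 (note 8).
# CO2 molecular weight: 12 (carbon) + 2 * 16 (oxygen) = 44; carbon weight: 12.
CO2_PER_TONNE_CARBON = 44.0 / 12.0  # about 3.6667 tonnes of CO2 per tonne of carbon

def price_per_tonne_carbon(price_per_tonne_co2):
    """Convert a price quoted per tonne of CO2 into a price per tonne of carbon."""
    return price_per_tonne_co2 * CO2_PER_TONNE_CARBON

# An abatement cost of $100 per tonne of CO2 is equivalent to about
# $366.67 per tonne of carbon.
print(round(price_per_tonne_carbon(100.0), 2))  # 366.67
```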
REFERENCES

Davidson, O. R. (1992). Energy issues in sub-Saharan Africa: Future directions. Annual Review of Energy and the Environment, 17, 359-403.
Dean, J. M. (1999). Testing the impact of trade liberalization on the environment: Theory and evidence. In: P. G. Fredriksson (Ed.), Trade, Global Policy and the Environment. World Bank Discussion Paper No. 401, The World Bank, Washington, D.C.
Global Warming Campaign, Sierra Club (1998). Risky business: Trading away our responsibilities. http://webm333d.ntx.net/.
Goldemberg, J. (1997). Is joint implementation a realistic option? Environment, 39, 44-59.
Harvey, L. D. D., & Bush, E. J. (1997). Joint implementation: An effective strategy for combating global warming? Environment, 39(8), 14-20, 36-43.
Karekezi, S. (1997). Certification and registration of joint implementation projects. http://www.ji.org.
Karp, L., & Liu, X. (1998). Valuing tradeable CO2 permits for OECD countries. Working Paper No. 872, Department of Agricultural and Resource Economics, University of California at Berkeley; http://are.Berkeley.EDU/karp/.
Karp, L., & Paul, T. (1994). Phasing in and phasing out protectionism with costly adjustment of labor. The Economic Journal, 104, 1379-1393.
Nehru, V., & Dhareshwar, A. (1998). Data set for a new database on physical capital stock: Sources, methodology and results. http://www.worldbank.org.
Parson, E., & Fisher-Vanden, K. (1997). Joint implementation and its alternatives: Choosing systems to distribute global emissions abatement and finance. BCSIA Discussion Paper 97-02, ENRP Discussion Paper E-97-02, Kennedy School of Government, Harvard University.
United Nations Framework Convention on Climate Change (1997). Secretariat, a glossary of climate change acronyms and jargon. http://www.unep.ch.
World Bank (1998). World Development Indicators 1998 CD-ROM.
OVERLAPPING GENERATIONS VERSUS INFINITELY-LIVED AGENT: THE CASE OF GLOBAL WARMING

R. Gerlagh and B. C. C. van der Zwaan

ABSTRACT

This chapter demonstrates that results from climate change models using the OLG approach can depend significantly on various economic and social conditions. As a result, policy recommendations derived from OLG models can prove rather different from those resulting from conventional ILA models. This chapter presents ALICE 1.2, an integrated assessment OLG model for the analysis of global warming, which allows for modeling a flexible interest rate and for incorporating various assumptions on demographic change and on public institutions designed for the protection of the environment. ALICE 1.2 is thus particularly appropriate for providing policy makers with quantitative figures about the desirable and feasible reduction levels of carbon dioxide emissions.
1. INTRODUCTION

The infinitely-lived agent (ILA) and overlapping generations (OLG) approaches towards climate change modeling have different views on intergenerational equity. In ILA models, the presence of an "immortal" representative agent, responsible for optimizing the (discounted) utility sum of both present and future generations, provides altruism between current
The Long-Term Economics of Climate Change, pages 287-313. Copyright © 2001 by Elsevier Science B.V. All rights of reproduction in any form reserved. ISBN: 0-7623-0305-0
consumers and their descendants, even if the latter are born in the far future. In OLG models, motives to leave bequests or behave altruistically are usually absent, especially regarding generations living in times distant from now. This chapter presents the OLG model ALICE 1.2. It is pointed out that the two approaches towards climate change modeling can in various circumstances, contrary to current claims in the literature, lead to rather different policy recommendations. Until recently, climate change modelers have mainly used the ILA approach for assessing climate change policies that aim at reducing the anthropogenic emissions of carbon dioxide. Among various existing ILA models are notably those of Peck and Teisberg (1992), Nordhaus (1994) and Manne et al. (1995). In the ILA approach, also named after Ramsey, it is assumed that future generations can be represented by a single consumer living over an infinite period of time. This immortal agent acts as a representative on behalf of all future generations, by possessing the rights to decide on the amounts of investment and savings of the entire present and future population. The agent governs both physical and environmental capital endowments, of both present-day and future generations. In many respects, the Ramsey approach is appropriate for studying questions like the abatement of carbon dioxide emissions and the mitigation of the enhanced greenhouse effect. More generally, if used for the analysis of the allocation of assets and resources across generations, this approach achieves equity between generations by letting each generation's utility depend only on its own consumption, and adding the resulting utility levels by means of a weighting procedure using a properly chosen discount factor. As Solow (1986) indicated, the discounted sum of utility levels constitutes a proper measure of social performance. The Ramsey modeling framework thus allows for incorporating concerns of intergenerational equity.
On the other hand, good reasons exist for not adopting the ILA method, but for using an OLG framework instead. In an OLG model, consumers do not, in principle, behave altruistically, since agents are normally assumed merely to save during working years and consume all their savings from the moment they retire. The life-cycle of consumers and the demographic structure of a society thus play an important role in an OLG analysis, while the standard ILA approach neglects possible changes in the composition of a population. Given the demographic shifts that will occur during the 21st century (World Bank, 1994), this is the first argument in favor of an OLG analysis. Second, an OLG analysis reflects analytically, in a realistic way, the fact that human beings live through a finite time span, in a world which to good approximation can be considered as open-ended. Marini and Scaramozzino (1995) underline the need
for the use of a framework reflecting a certain level of disconnection across generations, as ascertained in OLG models, since only then can the effects of different policy options on the trade-off between capital accumulation and environmental quality be determined appropriately. Third, an OLG model circumvents the rather abstract assumption that a representative agent exists who accomplishes establishing a social optimum for society as a whole, including both the present and the distant future. Schelling (1995) points out, correctly, that it is hard to assume the existence of a leading agent who considers the well-being of future generations as beneficial to his own utility. Of a different character is the argument, against the ILA and in favor of the OLG approach, regarding the way in which discounting is accounted for. As Howarth and Norgaard (1992) and Gerlagh and van der Zwaan (1999) point out, the ILA approach is unrealistic because the discount rate is imposed exogenously, whereas it ought realistically to depend on a range of variables and phenomena, such as the intergenerational distribution of assets and resources, changes in the demographic composition and evolution of the population considered, and the possibility to implement policies designed to establish a sustainable economic development. The discount rate should therefore be integrated endogenously in economic models, all the more so when one analyses climate change or the protection of the global environment, since the periods studied in these cases involve large time spans. The endogenization of the discount rate is accounted for in OLG models.¹ Stephan, Müller-Fürstenberger and Previdoli (1997) and Manne (1999) question whether one can conclude from the arguments made above that OLG models are superior to ILA models. They present a concise and computable general equilibrium model of climate change, which allows for a comparison of these two approaches.
They conclude that OLG and ILA models do not differ significantly in their implications for policy making on greenhouse gas abatement. The authors suggest that both models lead to practically the same results with respect to future carbon prices, future shares of fossil fuels in energy consumption, and economic damages resulting from climate change. Stephan et al. (1997) find that only slight differences occur with respect to some macro-economic variables, when the modeler supposes that carbon emissions are taxed and the resulting revenues redistributed either entirely to the young or to the old generation. Stephan et al. (1997) and Manne (1999) conclude that the OLG approach does not differ fundamentally from the ILA method and is very similar in most respects. They state that OLG and ILA are thereby not competing, but rather complementary, approaches to the economic analysis of climate change. In our opinion, both modeling methods have merits and are indeed, as pointed out by these authors, useful complements for the
analysis of global warming. Unlike these authors, however, we do not conclude that the results of the two approaches are rather similar. Below, we demonstrate that, depending on the assumptions one makes on various economic conditions, such as vis-à-vis the specific nature of the public institutions designed for the protection of the environment, the OLG modeling results on climate change control can be subject to substantial variations. As a result, climate change policy recommendations derived from OLG models can prove rather different from those resulting from the more traditional ILA models. Our numerical results confirm the formal analysis of Gerlagh and Keyzer (forthcoming). Furthermore, it is shown that in an OLG model many phenomena can be reflected in a more realistic way. The OLG approach is often more flexible, and allows readily for the simulation of, for instance, the ageing of the population, the grandfathering of a natural resource, or the implementation of a trust fund. Section 2 of this chapter analyses the use of property rights for the protection of environmental resources against over-exploitation. It describes two possible public institutions: grandfathering and the set-up of a trust fund. Section 3 introduces the OLG method used for our analysis and describes the ALICE 1.2 model in a concise manner. Section 4 displays the various scenarios analyzed. Section 5 shows the main scenario results, and presents the time evolution of some important variables. Section 6 concludes.
2. PROPERTY RIGHTS AND THE PROTECTION OF ENVIRONMENTAL RESOURCES

For a long time, Pigouvian taxes have figured prominently in environmental economics as a means to protect environmental resources and to overcome inefficiencies created by strictly conservationist measures. Recently, the attribution of property rights over natural resources has been receiving growing interest, presumably emanating both from the perception that markets can only function if private agents themselves have an incentive to prevent the use of natural resources without due payment, and from the understanding that the sums at stake might be substantial. Indeed, under appropriate pricing, the sustainable management of the natural environment could potentially become a profitable venture, given the vital role played by environmental services in the world economy. Costanza et al. (1997) estimate the present value of the (possibly indefinite) stream of environmental services at up to about 54 trillion U.S. dollars in 1997 at world level. Although such calculations are obviously debatable, they give some insight into the relative importance of the natural environment.
In establishing new property rights over environmental resources, previously treated as public goods, there is a tendency to "grandfather" these resources, that is to endow the generation that is currently alive with their ownership. This implies that both man-made and natural capital is given in exclusive property to the present generation. The environmental resources become for the owners similar to man-made capital. The generation receiving the property rights sells the capital when it is old to the succeeding generation, in order to provide for its old age (pension). All following generations behave in a similar way. However, grandfathering is not necessarily an ideal choice for environmental protection. From an ethical perspective, it can be argued that present generations ought not to be entitled to natural resources, since grandfathering capitalizes arbitrarily, such that present generations are allowed to deplete these resources before future generations can do so (Sen, 1982). Under grandfathering, future generations have to pay in order to prevent the present use of natural resources, so that de facto it implements the "victims pay" principle. Not only from an ex-ante ethical perspective, but also from an ex-post equilibrium allocation perspective, grandfathering is not an evident option. Distributional issues matter and privatization of the environment does not safeguard its conservation. Pezzey (1992) shows with a simple ILA general equilibrium model that even if competitive markets are established for all natural resources, serious environmental degradation can persist. Mourmouras (1993) encounters the same problem with an OLG model, in which a full system of property rights is introduced through grandfathering. He shows that such a mechanism might be insufficient to prevent a gradual reduction of overall welfare. 
As an alternative for grandfathering, we propose a more equitable, and possibly more sustainable, mechanism in which the ownership of resources is shared between current and future generations. Sharing property rights over natural resources between present and future generations requires an institution that acts as trustee for future generations, since only immediately succeeding generations are able to communicate directly. Conceptually, the sequence of steps to establish such an institution could be as follows (see also Gerlagh and Keyzer, forthcoming). First, some public authority attributes the ownership of all previously free natural resources to a trust fund. Second, this authority rules that all consumers and firms should henceforth pay for the natural resources they use. Third, it allows the private sector to open trade with the trust fund, exchanging shares in the environment for shares in private-sector enterprises. Finally, the trust fund entitles every current and future consumer to an income claim, expressed as a share in the value of the trust fund's asset portfolio.
To determine the size of this income claim, the trust fund calculates the maximal level of production of environmental services that can be sustained forever, e.g. those provided by clean air and water, or stocks of fish, timber and the like. This output level is referred to as the basic consumption bundle. It might differ from the output level in a steady state of the environment, since substitution is allowed in the former. This maximally sustainable output defines the total claim to be shared among consumers. The trust fund's management maintains sufficient financial assets to meet this claim. The trust fund does not need to own environmental resources, but it is instructed to keep its asset value equal to the current value of the environmental stock that would be needed to generate the sustainable output. In an OLG model, it can be shown that if the instruction is followed during a given period, the trust fund will be able to pay the value of the basic consumption bundle to the consumers for that period and to follow the instruction for the next period, and so forth (Gerlagh, 1998, Section 4.3.3). If environmental degradation persists, future generations can no longer consume the quantity of environmental services on which the claim was based. In that case, their income from the trust fund will exceed actual expenditures on the environment. This revenue can be spent on other commodities. The trust fund operates, as a compensation mechanism, on the "polluters pay" principle, since a generation that uses more of the resource than its entitlement will have to compensate future generations for the degraded environment in which the latter have to live. Early generations pay for the use of the environment insofar as this exceeds the regeneration capacity on which the claims are based. Through its stock holdings, the trust fund transfers the resulting revenues to the future generations who suffer environmental degradation. 
If future generations judge the preservation of natural resources essential, a trust fund provides for a mechanism which prevents environmental degradation, because it enables future generations to send the appropriate price signal to their predecessors.
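As a stylized illustration of the bookkeeping behind such a trust fund, consider a fund whose asset value equals the capitalized value of the basic consumption bundle; the numbers, the interest rate and the function below are hypothetical, and not taken from ALICE 1.2:

```python
# Stylized trust-fund bookkeeping (all numbers hypothetical, for illustration).
# Each period the fund earns the market return on its assets and pays out the
# value of the basic consumption bundle to the living consumers.

def run_trust_fund(asset_value, bundle_value, periods, rate=0.03):
    """Roll the fund forward and report its asset value at the start of each period."""
    path = []
    for _ in range(periods):
        path.append(asset_value)
        asset_value = asset_value * (1.0 + rate) - bundle_value
    return path

# If the initial asset value equals the capitalized claim, bundle_value / rate,
# the fund can pay the claim forever while its asset value stays constant:
path = run_trust_fund(asset_value=100.0, bundle_value=3.0, periods=5, rate=0.03)
# path stays at 100.0 in every period (3.0 / 0.03 = 100.0)
```

A fund endowed with less than the capitalized claim would shrink over time, which is the sense in which the fund's instruction to maintain its asset value matters.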
3. MODEL SPECIFICATION

The model described in this section extends the usual integrated assessment modeling (IAM) of climate change in three respects: it contains a demographic transition, it specifies environmental damages as a loss of an environmental amenity associated with an environmental resource, and it specifies a transfer mechanism that distributes the value of this resource to consumers. This section focuses on a description of the demographic transition, and of consumer and
producer behavior. The scenario description in the next section is complemented by a specification of the transfer mechanisms. The model distinguishes discrete time steps, t ∈ T = {1, …, ∞}, each representing periods of 20 years. To solve the model numerically, it is truncated after a period T.² The first step corresponds to the interval 2000-2020, and in every interval a new generation is born. The model only describes the adult part of the life-cycle, i.e. from the age of 20 onwards. This implies that a two-period life represents an individual who reaches the age of 60, and a three-period life an individual reaching the age of 80. Consumption of children, at ages between 0 and 20, is accounted for by the consumption of their parents. A generation is called young when its members have an age between 20 and 40; middle-aged are those individuals between 40 and 60, and old is the generation with members between 60 and 80. Each generation is denoted by the date t on which it starts consumption; it then enters the model. The generation denoted t is born at time t − 1.³ Generations are of different size, denoted by n^i, with index i the first interval in which the generation consumes. The life-cycle lengths of generations are not identical. A demographic change is specified to represent increasing life expectancy, modeled as a transition from a life-cycle of two periods to one of three periods. This transition is assumed to take place entirely during the 21st century, that is, during the first five intervals considered in the model. In the first interval, only a young and a middle-aged cohort coexist, without the presence of an old-aged generation. The middle-aged in the first interval die at the end of that interval. Twenty percent of the young of the first interval live for a third period, i.e. twenty percent of the middle-aged in the second interval survive into old age. Hence, in the third interval there is a small group of old consumers. Of the young generation in the second interval, i.e.
the middle-aged of the third interval, forty percent live three periods, while the others live for only two periods. Life expectancy continues to increase linearly until all members of the generation that is young in the 2080-2100 interval live through a life-cycle of three periods. The life expectancy transition is then complete. Now let n^i_t denote the size of generation i in interval t, so that n^i_i denotes the size of generation i when it starts consumption. We assume that no member of a generation dies before the start of the second consumption period of its life: n^t_{t+1} = n^t_t = n^t, for all generations t. As noted, the increase in life expectancy implies that the number of people living the full three periods increases linearly: n^1_3 = 0.2 n^1, n^2_4 = 0.4 n^2, and so forth, until n^5_7 = n^5. The time evolution of the size of a generation is defined recursively according to a logistic growth curve
n^{t+1} = (a^n − (a^n − 1)(n^t/n̄)) n^t,    (1)
where n̄ is the maximal size of a cohort, and a^n the growth factor if n^t is small with respect to n̄. In interval t, the population size N_t is given by:

N_t = n^{t+1}_t + n^t_t + n^{t-1}_t + n^{t-2}_t,    (2)
where n^{t+1}_t represents the children who enter the model, in terms of consumption, in period t + 1. The other terms represent the young, the middle-aged and the old, respectively. The population at the beginning of a period can be thought of as the average of the population in the previous and the current period:

N̄_t = ½(N_{t−1} + N_t).    (3)
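A minimal numerical sketch of the cohort recursion (1) and the population aggregate (2); the parameter values below are illustrative placeholders, not the calibrated ALICE 1.2 values:

```python
# Logistic cohort growth, eq. (1): n^{t+1} = (a - (a - 1) * n^t / n_bar) * n^t,
# with n_bar the maximal cohort size and a the growth factor when n^t is small.
# Parameter values are illustrative, not the ALICE 1.2 calibration.

def cohort_path(n0, a, n_bar, periods):
    path = [n0]
    for _ in range(periods - 1):
        n = path[-1]
        path.append((a - (a - 1.0) * n / n_bar) * n)
    return path

cohorts = cohort_path(n0=1.0, a=1.4, n_bar=3.0, periods=12)
# cohorts increases monotonically and approaches n_bar = 3.0.

# Population in interval t, eq. (2): children, young, middle-aged and old.
# Valid here only after the demographic transition, when every cohort
# lives through all three adult periods.
def population(cohorts, t):
    return sum(cohorts[t - 2:t + 2])
```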
The series N_t (for t = 1, …, ∞) depends on the parameters a^n and n̄ via formulas (1) and (2). It also depends on the size of the generation which starts consumption in 1960 and dies at the beginning of the first model interval of 2000-2020. These parameters are calibrated such that the series approximates World Bank (1994) data on global population. The results are shown in Table 1. The table also shows the modeled increase in life expectancy used in ALICE 1.2, which reasonably captures the main characteristics of the World Bank forecasts on life expectancy. Generations maximize their lifetime utility u(c^t, b^t), derived from rival consumption of the consumer goods during the life-cycle, c^t = (c^t_t, c^t_{t+1}, c^t_{t+2}), and non-rival consumption of the resource amenity, for convenience referred to as "environmental services", b^t = (b_t, b_{t+1}, b_{t+2}). We omit the superscript t in the
Table 1.  Population and Life Expectancy in ALICE 1.2.

                                       1960  1980  2000  2020  2040  2060  2080  2100  2200
Population, WB¹                        n.a.  n.a.   6.1   7.7   9.0   9.9  10.6  11.0  n.a.
Population, ALICE 1.2²                 n.a.  n.a.   6.1   7.7   9.0  10.0  10.7  11.1  11.1
Life expectancy at birth, WB¹          n.a.  63.5  67.4  71.2  74.7  77.9  80.3  82.6  n.a.
Life expectancy at birth, ALICE 1.2    60.0  64.0  68.0  72.0  76.0  80.0  80.0  80.0  80.0

¹ Population in billion people and life expectancy in years, calculated from World Bank data (WB, 1994).
² The population at the beginning of period t is taken to be the average population of periods t − 1 and t. The population in period t includes the children that enter the model one period later.
consumption of the resource amenity to stress that the amenity consumption is the same for all generations. By this definition of utility, an extension is made with respect to conventional IAMs, since the latter treat climate change damages as if they constituted merely a decrease of man-made consumer goods. We find this assumption misleading, because global warming damages can be more realistically understood as a decrease in the quality and quantity of environmental functions, rather than as a reduction in the flow of man-made goods. Indeed, the IPCC (1996b, Section 1.3.2) recognizes that a decrease in biodiversity may be one of the major consequences of climate change. Our explicit specification of a resource amenity provides a means to incorporate this insight into a competitive equilibrium framework. The consumption behavior of generations for which not all members live three periods is based on some further assumptions. For any member of a generation, until the beginning of the third period, only the probability of living three periods is known. At the beginning of the third period, a given member is either alive or not. Each member is supposed to maximize expected lifetime utility subject to life-span uncertainty. There are no unintended bequests to future generations resulting from the uncertainty in lifetime (Hurd, 1989), because there is an intra-generational life-insurance company to which all members of a generation pay their savings in the second period of life. The insurance company repays the savings in the form of annuities to the living members of the generation in the third period, that is, to those people who live into old age. Under this condition, the generation can be described by one representative consumer that maximizes aggregate utility, subject to one budget constraint, and as if there were perfect foresight.⁴ The utility function u(·) employed is a nested CES function of the form:
U(c^i, b^i) = [ Σ_{t=i,…,i+2} ρ^{t−i} n^i_t ( (c^i_t/n^i_t)^{1/(1+v)} (b_t)^{v/(1+v)} )^{(σ−1)/σ} ]^{σ/(σ−1)},    (4)
where σ = 0.67 is the intertemporal elasticity of substitution, ρ = 1 is the consumers' time preference factor, and v = 0.1 is the constant share of the expenditures on the resource amenity relative to the man-made consumer good:

φ^i_t b_t = v c^i_t,    (5)
where φ^i_t are the so-called Lindahl prices for generation i in period t for non-rival consumption of the resource amenity, relative to the price of the man-made consumer good. The first generation i = 0 maximizes utility subject to its budget constraint:

max { u(c^0, b^0) | c^0_1 + φ^0_1 b_1 ≤ w_1 l^0_1 + ψ^k_1 k_1 + Π^0_1 },    (6)
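Before turning to the remaining generations, the lifetime utility function (4) can be evaluated numerically. A sketch, using the parameter values given in the text (elasticity of substitution 0.67, time preference factor 1, amenity share 0.1) but made-up consumption, amenity and cohort paths:

```python
# Nested CES lifetime utility, eq. (4). sigma is the intertemporal elasticity
# of substitution, rho the time preference factor, and v the amenity expenditure
# share. The input paths below are made-up illustrative numbers.

def lifetime_utility(c, b, n, sigma=0.67, rho=1.0, v=0.1):
    """c, b, n: per-period consumption, amenity index and cohort size over
    the (up to) three periods of a generation's life."""
    e = (sigma - 1.0) / sigma  # CES exponent
    total = sum(
        rho ** s * n_t * ((c_t / n_t) ** (1.0 / (1.0 + v)) * b_t ** (v / (1.0 + v))) ** e
        for s, (c_t, b_t, n_t) in enumerate(zip(c, b, n))
    )
    return total ** (1.0 / e)

u = lifetime_utility(c=[1.0, 1.2, 0.8], b=[1.0, 0.9, 0.8], n=[1.0, 1.0, 0.6])
# Doubling consumption and amenities doubles utility: the function is
# homogeneous of degree one in (c, b), since the inner exponents sum to one.
```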
while generations i = 1, …, ∞ solve:

max { u(c^i, b^i) | Σ_{t=i,…,i+2} β^i_t (c^i_t + φ^i_t b_t) ≤ Σ_{t=i,…,i+2} β^i_t (w_t l^i_t + Π^i_t) },    (7)
where prices are normalized so that the consumer good has price unity, β^i_t is the price depreciation factor from period i to period t (note that β^i_i = 1, and we also use the notation (1 + r_t) = (β_t)^{-1} = (β^t_{t+1})^{-1} whenever convenient, where r_t is the interest rate at period t), w_t denotes the price of labor, l^i_t denotes the labor endowment, ψ^k_1 is the price of the man-made capital stock k_1 in the first period owned by the first generation, and Π^i_t is the income which generation i receives in period t as its share in the value of the environmental resource. For generation i, utility maximization gives the following first-order conditions:

∂u(·)/∂c^i = λ^i (1, β^i_{i+1}, β^i_{i+2}),    (8)

and

∂u(·)/∂b^i = λ^i (φ^i_i, β^i_{i+1} φ^i_{i+1}, β^i_{i+2} φ^i_{i+2}),    (9)
for a scalar Lagrange multiplier λ^i > 0, such that the budget in (7) holds.⁵ ALICE 1.2 includes a simple production sector for a man-made consumer good y_t, consisting of one private firm. It uses labor units l_t, emission units e_t, and a man-made capital stock k_t as production factors. The capital stock is itself produced by this production sector. We therefore assume that the capital stock is made up of the consumer good, and that it has to be replaced after use in one period. The production structure is based on a nested function, in which the intermediate good ỹ_t is made of a Cobb-Douglas composite of capital and labor:

ỹ_t = A_t k_t^α l_t^{1−α},    (10)
where α = 0.216 is the capital share, and A_t is a productivity coefficient.⁶ The composite good ỹ_t is used, together with emission units, in a quadratic production function:

y_t + k_{t+1} = ỹ_t + ½ (η/ε̄_t) e_t (2 ε̄_t − e_t/ỹ_t),    (11)

where η is the maximal CO2 tax at which net emissions become zero, and ε̄_t is the maximum emission intensity when no carbon taxes are imposed. This becomes clear from the first-order condition for emissions:

p^e_t ≥ η (1 − e_t/(ε̄_t ỹ_t)) ⊥ e_t ≥ 0,    (12)

where p^e_t is the price of emission units in period t and the complementarity sign '⊥' denotes that the left-hand inequality is binding (i.e. becomes an equality)
if emissions are positive. The parameters η and ε̄_t are chosen such that the optimal emission levels decrease by 1% for every 4 U.S.$/tC price increase of emission units, and such that the maximum emission levels follow the IS92a scenario (IPCC, 1992). Because of constant returns to scale, the value of inputs equals the value of outputs:

y_t + k_{t+1} = w_t l_t + ψ^k_t k_t + p^e_t e_t,    (13)
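The emission rule implied by the first-order condition (12) can be sketched directly. The value η = 400 $/tC below is inferred from the calibration just described (a 1% emission reduction for every 4 $/tC) and should be read as an illustration rather than the model's exact parameterization; the remaining inputs are made up:

```python
# Optimal emissions from the quadratic technology, eqs. (11)-(12): the marginal
# product of emissions is eta * (1 - e / (eps_bar * y_m)), so the firm emits
# e = eps_bar * y_m * (1 - p / eta) when p < eta, and nothing otherwise.
# eta = 400 $/tC is inferred from the calibration in the text (emissions fall
# by 1% per 4 $/tC); eps_bar and y_m below are made-up inputs.

ETA = 400.0  # maximal CO2 tax at which net emissions become zero, in $/tC

def optimal_emissions(price, eps_bar, y_m, eta=ETA):
    return max(0.0, eps_bar * y_m * (1.0 - price / eta))

baseline = optimal_emissions(0.0, eps_bar=0.5, y_m=10.0)  # unpriced: e = eps_bar * y_m
taxed = optimal_emissions(4.0, eps_bar=0.5, y_m=10.0)     # 4 $/tC permit price
# taxed is 1% below baseline, matching the stated calibration; at any price
# at or above eta, emissions are driven to zero.
```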
where ψ^k_t is the price for capital in period t. Since capital is produced in the previous period, we have:

ψ^k_{t+1} = 1/β_t,    (14)

for t = 1, …, ∞. After substitution of the value equation, we have the following first-order conditions for labor and capital:

w_t l_t = (1 − α)(y_t + k_{t+1} − p^e_t e_t),    (15)

and

ψ^k_t k_t = α (y_t + k_{t+1} − p^e_t e_t).    (16)
The fact that ALICE 1.2 extends existing IAMs by specifying an explicit resource amenity is in accordance with the environmental concerns underlying the issue of climate change. Peck and Teisberg (1992) and Nordhaus (1994) have contributed much to the development of stylized economic IAMs by providing highly simplified representations of biogeochemical interactions which are usable in macro-economic models. The typical simplified aggregate representation employed links emissions to concentrations, concentrations to temperatures, and temperatures to damages. However, regarding the calculation of impacts of climate change, the scientific understanding is grossly insufficient to warrant even something like a "best guess" (IPCC, 1996a, Section 6.2.13). In general, it is assumed that damages caused by climate change will outweigh its benefits. The lack of knowledge is unmistakably revealed by sensitivity analyses carried out with a variety of different damage functions. These damage functions are supposed to provide a reduced form of many complex damages associated with climate change, such as the loss of coastal zones due to sea level rise, the loss of biodiversity, the spread of vector-borne diseases, and the occurrence of extreme climate events. Some damage functions take the global temperature as an argument, others take the rate of increase of global temperature as an argument. Some damage functions are quadratic, others are of higher or lower order (cf. Tol, 1995). The lack of understanding of damage functions is recognized by the IPCC (1996a, Section 6.2.13). In our model, we
298
R. GERLAGH AND B. C. C. VAN DER ZWAAN
therefore restrict the resource specification to a simple linear relationship between emissions and resource amenities, and choose parameters resembling the damage estimates listed by the IPCC (1996a, Table 6.4). Let s t be the resource stock from which e t units are subtracted each period: s,+l = st - e,.
(17)
The exhaustible resource has amenity value b_t. We follow Krautkraemer (1985) and assume that the amenity value is proportional to the stock level:

b_t = s_t / s_1.   (18)

Thus, b_t is measured as an index, with maximum output b_t = 1. The environmental firm maximizes the value of its output,

\sum_{t=1}^{\infty} \beta^t_1 (p^e_t e_t + p^b_t b_t),
subject to (17) and (18), and given the initial resource stock s_1. Let p^b_t be the price of the environmental amenity b_t, and \Psi_t the price of the resource stock at the beginning of period t, so that \beta_t \Psi_{t+1} is the dual variable associated with (17) under profit maximization, and p^b_t is the dual variable of (18). The first-order conditions read:

p^e_t \le \beta_t \Psi_{t+1}, \quad 0 \le e_t,   (19)

\beta_t \Psi_{t+1} + p^b_t / s_1 = \Psi_t,   (20)

where the \le-sign in (19) refers again to complementarity conditions: the constraint on the left is binding if the right-hand side is a strict inequality. Because of constant returns to scale in (17) and (18), the zero profit condition holds for every period:

\Psi_t s_t = p^e_t e_t + p^b_t b_t + \beta_t \Psi_{t+1} s_{t+1}.   (21)
This equation states that the value of the resource, \Psi_t s_t, is equal to the value of its output. Written out for the entire time horizon, this becomes:

\Psi_t s_t = \sum_{\tau=t}^{\infty} \beta^{\tau}_t (p^e_{\tau} e_{\tau} + p^b_{\tau} b_{\tau}).   (22)
It follows from the first-order conditions (12), (19), and (20) that if the future value of the resource amenity is sufficiently high relative to the maximal productivity \eta of the extracted resource, that is, if \eta < \beta_t \Psi_{t+1}, no extraction will take place. For zero extraction, the extraction price can take any value on the interval [\eta, \beta_t \Psi_{t+1}]; the particular choice, however, has no effect on the real variables. This completes the description of producer behavior.
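The resource block above, depletion (17), the amenity index (18), and the extraction threshold, can be summarized in a short sketch. This is an illustration with assumed numbers and names, not part of ALICE 1.2:

```python
# Sketch of the resource block: stock depletion (17), amenity index (18),
# and the zero-extraction condition. All numbers are illustrative.

def deplete(stock, extraction, s1):
    """One period of (17)-(18): subtract extraction, return stock and amenity."""
    new_stock = stock - extraction
    return new_stock, new_stock / s1   # b_t = s_t / s_1, at most 1

def extraction_occurs(eta, beta, psi_next):
    """No extraction when eta < beta * Psi_{t+1}: the discounted future
    resource value exceeds the maximal productivity of extraction."""
    return eta >= beta * psi_next

s1 = 100.0
stock, amenity = deplete(s1, 10.0, s1)   # stock 90.0, amenity index 0.9
```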
To close the markets, we have the commodity balance for labor,

l_t = l^{t-2}_t + l^{t-1}_t + l^t_t,   (23)

as well as for the consumer good,

c^{t-2}_t + c^{t-1}_t + c^t_t = y_t.   (24)

Non-rivalry of the demand for the resource amenity implies that consumers should agree about the amenity level, so that Lindahl prices should add up to the production price:

p^b_t = \varphi^{t-2}_t + \varphi^{t-1}_t + \varphi^t_t.   (25)

The regulatory mechanism for controlling resource extraction and distributing income from the natural resource, as well as the corresponding regulations, are specified in Section 4. Here, it suffices to note that the Lindahl equilibrium represents an economy governed by a mixture of competitive markets and policies to achieve collective action. We impose the requirement that the income H_t, which is distributed among consumers, should balance with the value of the natural resource:

\Psi_1 s_1 = F_1,   (26)
where F_t measures the value of total assets to be reserved in period t for meeting future claims. This value can be defined as:

F_t = \sum_{\tau=t}^{\infty} \beta^{\tau}_t H_{\tau} = \sum_{\tau=t}^{\infty} \beta^{\tau}_t (H^{\tau-2}_{\tau} + H^{\tau-1}_{\tau} + H^{\tau}_{\tau}),   (27)

which recursively is written as:

F_t = H^{t-2}_t + H^{t-1}_t + H^t_t + \beta_t F_{t+1}.   (28)
The income claim is differentiated by date of accrual using the super- and subscripts as H^t = H^t_t + H^t_{t+1} + H^t_{t+2}; this allows us to define the period-specific claim as H_t = H^{t-2}_t + H^{t-1}_t + H^t_t. Note that generation t = 0 only has a claim H^0 to the resource. We are now in a position to specify the savings-capital balance, which has a central role in the scenario analysis. Let S^t_{t+1} and S^t_{t+2} represent the savings of generation t at the beginning of period t+1 and t+2, respectively. These are defined by the expenditure budgets when young,

\beta_t S^t_{t+1} = w_t l^t_t + H^t_t - c^t_t - \varphi^t_t b_t,   (29)

and when middle-aged,

\beta_{t+1} S^t_{t+2} = S^t_{t+1} + w_{t+1} l^t_{t+1} + H^t_{t+1} - c^t_{t+1} - \varphi^t_{t+1} b_{t+1}.   (30)
The life-cycle budget constraint in equation (7) ensures that savings are exhausted when old:

0 = S^t_{t+2} + w_{t+2} l^t_{t+2} + H^t_{t+2} - c^t_{t+2} - \varphi^t_{t+2} b_{t+2}.   (31)

The capital-savings balance equates total savings, which consist of private savings plus the assets held by the trust fund, with the value of capital, which consists of man-made capital and the resource value:

S^{t-1}_t + S^{t-2}_t + F_t = \Psi^k_t k_t + \Psi_t s_t.   (32)

Validity of the savings-capital equation for t = 2 is ensured by summation of (6), (21), (25) multiplied by the amenity value b_1, and (29), and subtracting (13), (23) multiplied by wages w_1, (24), (26) and (28). After using (14) for substitution, we have:

\beta_1 (S_2 + F_2) = \beta_1 (\Psi^k_2 k_2 + \Psi_2 s_2),   (33)

which is the second period savings-capital balance multiplied by the price factor \beta_1. For t = 3, ..., \infty, validity follows from forward induction. This completes the model description. We can now define the equilibrium:

DEFINITION 1. A competitive equilibrium of model (6)-(32) is an intertemporal allocation supported by prices w_t, p^e_t, p^b_t, \varphi^{t-2}_t, \varphi^{t-1}_t, \varphi^t_t, \Psi_t, \Psi^k_t, \beta_t, for t = 1, ..., \infty, that satisfy the production identities (10), (11), (17), (18), the first-order conditions (8), (9), (12), (14), (15), (16), (19), (20), the commodity balances (23), (24), (25), the savings identities (29), (30), and the savings-capital balance (32), for a given regulatory policy that satisfies (26) and (27).
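The trust-fund accounting in (26)-(28) can be illustrated with a finite-horizon sketch: the fund value F_t is the discounted sum of future claims, as in (27), and obeys the recursion (28). Horizon, claims, and discount factors below are assumptions for illustration only:

```python
# Backward recursion (28): F_t = H_t + beta_t * F_{t+1}, with F = 0 beyond
# the (truncated) horizon. Claims and discount factors are illustrative.

def fund_values(claims, betas):
    """Fund values F_t for t = 0..T-1, given period claims H_t and factors beta_t."""
    F = [0.0] * (len(claims) + 1)
    for t in range(len(claims) - 1, -1, -1):
        F[t] = claims[t] + betas[t] * F[t + 1]
    return F[:-1]

F = fund_values([1.0, 1.0, 1.0], [0.5, 0.5, 0.5])
# F[0] equals the discounted sum (27): 1 + 0.5 + 0.25 = 1.75
```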
4. THE SCENARIOS

We define five scenarios. Production and consumption parameters are calibrated such that a reference 'Business as Usual' scenario (BAU) resembles the IPCC (1995) IS92a scenario. The second scenario (SUST) resembles a strict conservationist policy of minimizing resource extraction, even imposing zero extraction, even if this is inefficient at the resource prices prevailing in equilibrium. The third scenario (GRANDF) restores efficiency by grandfathering the environmental resource to the first generation that is alive at the moment of the institutional set-up. In the fourth scenario (FUND), a trust fund is established to share property rights over the environmental resource with future generations. All these scenarios are based on the OLG concept in which savings balance with the capital stock; see equation (32). The fifth scenario (ILA) abstracts from the savings-capital budget. It instead invokes the Ramsey rule, which links the interest rate with consumption growth.
We abstain here from an extensive description of the BAU scenario, since it is rather conventional. The three alternative regulatory mechanisms of 'zero extraction', 'grandfathering' and 'trust fund' involve specific rules for controlling natural resource extraction and distributing the resource value, while meeting the intertemporal budget constraint (26).

SUST scenario
The second scenario (SUST) directly regulates resource use by abandoning all resource extraction. This amounts to including the restriction:

e_t = 0.   (34)

The level of the resource amenity is now maximal: b_t = 1. This scenario exempts all generations from paying for the non-rival consumption of the resource amenity. This can be represented through an income claim that is equal to the value of non-rival consumption,

H^t_t = \varphi^t_t,   (35)

and

H^t_{t+1} = \varphi^t_{t+1},   (36)

so that the budget equation becomes:

c^t_t + \beta^{t+1}_t c^t_{t+1} + \beta^{t+2}_t c^t_{t+2} \le w_t l^t_t + \beta^{t+1}_t w_{t+1} l^t_{t+1} + \beta^{t+2}_t w_{t+2} l^t_{t+2}.   (37)

The zero extraction policy treats the natural resource as an exogenous factor, and reduces the economy to a production economy with one consumer good and one capital stock. In this economy, however, the interest rate may become negative, leading to a dynamically inefficient equilibrium. Indeed, a numerical analysis reveals that in our economy, the demographic transition induces an increase in savings to account for the longer retirement period. This, in turn, decreases the interest rate to a negative value. In theory, it is possible to restore dynamic efficiency by introducing negative fiat money into the economy (Gale, 1973). However, the required amount of fiat money cannot be calculated in advance without solving the equilibrium, and thus cannot be treated as a fixed endowment. Alternatively, we introduce a 'non-negligible claim' that is given to the first generation, and which in the long term acts as negative fiat money (see Gerlagh, 1998, Sections 3.2.5 and 3.3.1 for a full discussion). We assume that there is a public authority that can levy taxes, both now and in the future. This authority issues a freely tradable claim and pays the owner a real interest. In ALICE 1.2, this interest amounts to a fixed share 0 < \gamma \ll 1 of the value of labor endowments. The payments by the public authority are balanced with taxes levied during the same period, so that the claim induces an income transfer from all owners of labor endowments to the owner of the claim. Let the value of the claim be denoted by \chi_t, given by:

\chi_t = \gamma \sum_{\tau=t}^{\infty} \beta^{\tau}_t w_{\tau} (l^{\tau}_{\tau} + l^{\tau-1}_{\tau} + l^{\tau-2}_{\tau}).   (38)
The first generation receives the claim for free, so that its budget (6) becomes:

c^0_1 + \varphi^0_1 b_1 \le w_1 l^0_1 + \Psi^k_1 k_1 + \chi_1.   (39)

Future generations pay a tax that enables the public agent to meet its obligations. Their budget (7) becomes:

\sum_{t=i}^{i+2} \beta^t_i (c^i_t + \varphi^i_t b_t) \le \sum_{t=i}^{i+2} \beta^t_i (1 - \gamma) w_t l^i_t.   (40)
The claim ensures that the first generation owns a non-negligible part of the total endowment value, and thus dynamic efficiency is restored.

GRANDF scenario

The third scenario (GRANDF) extends private ownership to environmental resources. It grandfathers the entire natural resource to the first generation.7 Future generations will have to pay to prevent pollution. This can be interpreted as applying a "victims pay" principle. Formally, the scenario is defined by the income distribution rule

H^0_1 = \Psi_1 s_1,   (41)

and

H^t = 0,   (42)

for t = 1, ..., \infty. There is no further intergenerational transfer: F_t = 0. The natural resource becomes a 'normal' capital good, whose value is equal to life-cycle savings:

S^{t-1}_t + S^{t-2}_t = \Psi^k_t k_t + \Psi_t s_t.   (43)
Unlike the 'zero extraction' policy, the extracted resource can now be bought by firms. All generations have to pay for their non-rival use of the resource amenity. The budget constraint for all generations i = 1, ..., \infty becomes:

\sum_{t=i}^{i+2} \beta^t_i (c^i_t + \varphi^i_t b_t) \le \sum_{t=i}^{i+2} \beta^t_i w_t l^i_t.   (44)
FUND scenario

The fourth scenario (FUND) involves a trust fund that entitles every generation to the same income claim as in the zero extraction policy case, i.e. to one unit of the resource amenity:

H^{t-1}_t = \varphi^{t-1}_t,   (45)

and

H^t_t = \varphi^t_t,   (46)

for t = 1, ..., \infty. The trust fund is endowed with the initial value of the biogeochemical system,

F_1 = \Psi_1 s_1,   (47)

which is exactly sufficient to meet the commitments expressed in equation (26). In every period, the transfers paid are subtracted:

F_{t+1} = F_t - H_t.   (48)

Consequently, the trust fund holds sufficient assets to meet future obligations, in accordance with equation (27). However, in contrast to the zero-extraction policy case, the income claims are not identical to the output of the biogeochemical system. We must show that these income claims sum to the initial value of the biogeochemical system. Multiplying (20) by s_1 gives:

\Psi_t s_1 = p^b_t + \beta_t \Psi_{t+1} s_1.   (49)

Therefore, the trust fund can meet its commitments in every period if it holds assets of value F_t = \Psi_t s_1, starting from F_1 = \Psi_1 s_1. This ensures that the distribution rule satisfies equation (26) and that, although all generations have to pay for their non-rival use of the resource amenity, their real income claim exceeds the actual value of the resource amenity. The budget constraint now reads:

\sum_{t=i}^{i+2} \beta^t_i c^i_t \le \sum_{t=i}^{i+2} \beta^t_i (w_t l^i_t + \varphi^i_t (1 - b_t)).   (50)

The last term on the right-hand side is a compensation for reductions in environmental quality, relative to the undepleted level (which has b_t = 1). The three regulatory scenarios (SUST, GRANDF, and FUND) can be characterized by two parameters, \eta and \gamma, which describe the claims of future generations for a given natural resource level, and the use of a non-negligible claim to ensure dynamic efficiency. Thus, we can rewrite the budgets (29) and (30) as:
\beta_t S^t_{t+1} = (1 - \gamma) w_t l^t_t - (1 - (\eta / b_t)(\nu / (1 - \nu))) c^t_t,   (51)

and

\beta_{t+1} S^t_{t+2} = S^t_{t+1} + (1 - \gamma) w_{t+1} l^t_{t+1} - (1 - (\eta / b_{t+1})(\nu / (1 - \nu))) c^t_{t+1},   (52)

where we can represent the first generation via:

S^0_1 = \Psi^k_1 k_1 + (1 - \eta) \Psi_1 s_1 + \chi_1,   (53)

and the capital-savings balance (32) becomes:

S^{t-1}_t + S^{t-2}_t = \Psi^k_t k_t + \Psi_t (s_t - \eta s_1) + \chi_t.   (54)

ILA scenario

The fifth scenario (ILA) has a different character. It is based on a dynastic perspective rather than on an OLG approach. The ILA scenario is not subject to the capital-savings budget constraint expressed by equation (54). Instead, it assumes the Ramsey rule, which directly links the interest rate to per capita consumption growth:

\beta_t = \sigma^P (c_t / N_t) / (c_{t+1} / N_{t+1}),   (55)

where 0 < \sigma^P < 1 is the fictitious planner's time preference factor, set to 0.44 for the analysis in this paper, representing a pure rate of time preference of 4% per year.8
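The Ramsey rule (55) can be sketched as follows; the second computation checks the statement that a time-preference factor of 0.44 per 20-year period corresponds to roughly 4% per year. Function names and numbers are our illustrative assumptions, not the model's:

```python
# Ramsey rule (55): the discount factor equals the planner's time-preference
# factor times the inverse growth of per capita consumption.

def ramsey_beta(sigma_p, c_t, n_t, c_next, n_next):
    """beta_t = sigma_p * (c_t / N_t) / (c_{t+1} / N_{t+1})."""
    return sigma_p * (c_t / n_t) / (c_next / n_next)

# With 10% per capita consumption growth per period, beta_t < sigma_p.
beta = ramsey_beta(0.44, 100.0, 10.0, 110.0, 10.0)

# 0.44 per 20-year period is about 0.96 per year, i.e. roughly a 4% pure
# rate of time preference, consistent with the text.
per_year = 0.44 ** (1.0 / 20.0)
```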
5. SCENARIO RESULTS

The BAU and SUST scenarios are inefficient, since they do not set a price for current resource use based on future benefits of resource conservation. The three efficient scenarios (GRANDF, FUND, ILA) differ from one another with respect to the distribution of welfare. This difference can be characterized through the evolution of the interest rate over time, as shown in Fig. 1.

Fig. 1. Interest rates.

The ILA scenario shows a pattern that is most comparable with existing IAMs, most of which are based on the ILA approach. The interest rate slowly decreases because of the assumed decrease in economic growth; see equation (55). The OLG scenarios show a much sharper fall in the interest rate, indicating a significant difference between ILA and OLG modeling. In the OLG approach, the savings rate increases during the 21st century because of the increased need for pensions in an ageing society. In the long run, the increased savings press the interest rate downwards, a phenomenon not captured in the ILA approach. Within the series of OLG scenarios, we find a difference between the GRANDF scenario, with relatively high interest rates, and the FUND scenario,
with relatively low interest rates. This can be understood as follows. Under the GRANDF scenario, the resource is capitalized and becomes private property. The total value of the capital stock increases, requiring increased savings to restore the balance. This causes an increase in the interest rate of about 1% per year as compared to the BAU scenario. The distance between the two scenarios persists throughout the dynamic path. If the environmental resource is treated as public property (scenario SUST), public savings increase by the same amount as the increase of the capital stock, so that the interest rate remains invariant with respect to the BAU scenario (to which the model is calibrated).9 If a trust fund is established (scenario FUND), the environmental resource is also treated as public property. The FUND scenario, however, differs from the SUST scenario in an important respect: a shortfall of the resource level with respect to its initial level leads to extra public savings.10 In the long run, the additional public savings further decrease the interest rate from 1% per year to 0.5% per year. The distribution of property rights over the environmental resource turns out to have major implications for the interest rate. As shown below, this affects resource use as well.

The assumption that carbon dioxide emissions decrease by one percent for each 4 US$/tC increase in the imposed tax is an average of values appearing in the literature, which range from 1 to 6 US$/tC (see Cline, 1992). This value implies that 100% CO2 emission abatement corresponds to a tax level of 400 US$/tC.

Fig. 2. Carbon taxes.

The
SUST scenario assumes a tax of 400 US$/tC from the year 2000 onwards, as can be seen in Fig. 2. The only mechanism in our model responsible for reductions in carbon dioxide emissions is the imposition of carbon taxes. We abstract from endogenous technological learning, as well as from transition costs associated with a shift towards a carbon-free energy technology. In the GRANDF scenario, the carbon emission price increases slowly from nearly zero in 2000 to 50 US$/tC in 2050, after which it starts increasing more rapidly, reaching values close to 400 US$/tC shortly after 2100. The CO2 emissions (see Fig. 3) in this scenario increase slowly up to about the year 2050, after which they fall rapidly to levels close to zero shortly after 2100. In the FUND scenario, the carbon emission price increases rapidly from the very start of the simulation in 2000, where it already has a value close to 50 US$/tC. By 2050 it reaches values nearing 400 US$/tC. The CO2 emissions (see Fig. 3) in this scenario are rather stable in the first modeling period, but decrease rapidly to levels close to zero around the year 2050.

Fig. 3. Carbon dioxide emissions.

From a comparison between GRANDF and FUND, one concludes that altering the analysis from a "victims pay" to a "polluters pay" perspective, through the introduction of a trust fund, shifts the carbon dioxide emission tax curve substantially upwards. As usual in a general equilibrium model, and contrary to the notion commonly referred to as the Coase Theorem (Coase, 1960), the intergenerational redistribution of property rights through the trust fund affects
the allocation of resources and hence the level of resource extraction. A formal proof for a simpler economy is given in Gerlagh and Keyzer (forthcoming). Apart from rigid conservation measures such as those under the SUST scenario, the use of a trust fund (FUND) achieves the most radical CO2 emission reductions. Grandfathering (GRANDF) is less favorable for climate change reduction, but still leads to more stringent emission reductions than are found under the assumption of a fictitious infinitely-lived agent that takes care of the welfare distribution over all generations. Of course, the ILA approach is a significant improvement as compared to Business as Usual (BAU). From this comparison, one sees that, at the numerical level, OLG models provide substantially different results from ILA models. At the theoretical level, additional insights into mechanisms and policy instruments to reduce anthropogenic climate change can be gained.

Finally, we compare the results of the five scenarios for the consumption of the man-made good and the induced temperature change. In Fig. 4, consumption of the man-made good is shown relative to the BAU scenario.

Fig. 4. Consumption of the man-made good, relative to BAU.

The stringent zero emission measures under a strong sustainability policy (SUST) cut consumption levels by nearly 6% during the first decades. Over time, as the dependence of the economy on fossil fuels decreases, the reduction of consumption decreases as well. The trust fund (FUND) scenario has less stringent emission reductions, particularly during the first periods, which is
reflected in the consumption path that stays closer to the BAU path. Grandfathering (GRANDF) capitalizes the value of the environmental resource and gives it to the first generation, which uses it to increase its consumption at the cost of investments in man-made capital. As a result, consumption is rather high in the first period, but soon decreases after that. The dynastic (ILA) scenario is not directly comparable with the other scenarios, since in this case the intergenerational and intertemporal distribution of consumption is subject to the Ramsey rule, which does not directly relate to OLG equilibria.

Figure 5 shows the calculated temperature increase, relative to pre-industrial levels, consistent with the emission paths of Fig. 3. We used the one-box carbon model as employed in DICE (Nordhaus, 1994), linking CO2 emissions to concentrations and to temperatures, based on a climate sensitivity parameter of 3 degrees Celsius for a doubling of the atmospheric CO2 concentration. Under business as usual (BAU), temperature increases approximately linearly by nearly 0.2 degrees Celsius per decade. Furthermore, Fig. 5 shows that changes in the paths for temperature change lag behind the emission reductions shown in Fig. 3 by at least fifty years. Policy making has to be performed on a very long time horizon in order to realize effective climate change control. Even under zero emissions (SUST), temperature continues to increase for the next 50 years, before it slowly returns to its pre-industrial level. The immediate and radical emission reductions under the trust fund (FUND), as compared to business as usual, only lead to a deviation of the temperature path after 2050. Grandfathering the resource (GRANDF) or using a dynastic model (ILA) for calculating optimal emission reduction paths brings about no significant decrease in the temperature change in the next century, as compared to business as usual.

Fig. 5. Calculated temperature change.

6. CONCLUSION

In this chapter, we come to the following conclusions. In our opinion, the use of OLG models for the analysis of climate change control differs more fundamentally from that of ILA models than has been suggested in the literature. In particular, the results we obtain with the OLG model ALICE 1.2 are notably different from those of the OLG models by Stephan et al. (1997) and Manne (1999). One of the reasons is that they do not take into account the consequences of possible changes in demographic composition and population size. In addition, Stephan et al. (1997) assume that generations live for two periods only. The model ALICE 1.2 presented in this chapter treats demography in a quite different way. First, the fact that life is considered to consist of three periods renders the model more realistic. Second, and more importantly, the population is allowed to age throughout the next century, a phenomenon that will undoubtedly reveal itself over the forthcoming decades.
Another reason for the difference between our results and those of Stephan et al. (1997) and Manne (1999) is that their models do not properly assume an intergenerational distribution of carbon tax revenues. Stephan et al. allow only a distribution of tax revenues to the young or the old, or a combination of these two, within the same period. Our model allows for a distribution of carbon tax revenues over all present and future generations, via the use of a trust fund. Our model introduces two tools: grandfathering and the creation of a trust fund. The latter can truly allow for an equitable redistribution of natural resources, in this case a carbon-poor atmosphere, over an infinite sequence of generations.

ACKNOWLEDGEMENTS

Reyer Gerlagh is grateful to Michiel Keyzer for the supervision when developing the ALICE model (several versions) as part of his PhD research. Both authors thank Richard Howarth and an anonymous referee for their constructive and useful suggestions on the presentation and contents of this chapter, as well as John Finn for his comments on the editing of the text. The authors are, of course, entirely responsible for all remaining errors.

NOTES

1. It is possible to construct an OLG model in which altruism between generations drives the savings decisions, rather than concern for their own retirement. This leads to an OLG model that is much like an ILA model in its treatment of discounting (Barro, 1974).

2. A full description of the technical elements behind this truncation procedure is beyond the scope of this chapter. ALICE 1.2 uses first-order conditions, current prices, and a price deflator, and has beneficial solvability properties; numerical results are obtained in a relatively straightforward way. The model runs show that long time horizons, of e.g. 50 periods of 20 years each, can easily be dealt with.
This is quite contrary to models that are solved by optimization, such as DICE (Nordhaus, 1994), which run into accuracy problems when the time horizon is extended too far. Whereas many optimization models can only be used up to about the year 2200, ALICE 1.2 allows simulations over at least several additional centuries. A model with an extended time horizon is a valuable instrument, since the environmental impacts of greenhouse gas emissions may last for centuries.

3. Quantities referring to a particular generation are indexed by the superscript t. Alternatively, the label i is used where convenient.

4. The life-insurance concept is known from the Blanchard-Yaari-Weil model of 'perpetual youth' (see e.g. Blanchard, 1985). Marini and Scaramozzino (1995) use that model for the intergenerational analysis of environmental issues. In contrast to the
perpetual youth model, we have discrete time periods and finite lifetimes. Therefore, we give a more elaborate discussion of the uncertainty-certainty equivalence in the appendix.

5. Note that c^t = (c^t_t, c^t_{t+1}, c^t_{t+2}) and b^t = (b_t, b_{t+1}, b_{t+2}), so that both the left-hand side and the right-hand side of these first-order conditions represent 3-dimensional vectors.

6. The value of \alpha is calibrated such that the value of man-made capital constitutes 3 times the value of annual output. The value of 0.216 may seem rather low, but one has to keep in mind that it reflects the value share of current output that is attributed to investments made in the previous period. Because of the long time interval per period (20 years), a large part of capital is produced in the same period in which it is used and is thus not accounted for in k_t. This effect decreases the value of \alpha relative to models that have shorter time periods. The value of A_t is calibrated such that in the first period gross output is equal to 38.4 trillion US$/yr (1990 market prices), and increases by 3% per year. For a detailed calibration analysis, see Gerlagh (1998b, Section 6.2).

7. In actual practice, emission permits are often grandfathered to the firms that are currently polluting. Since these firms are owned by the current old generation, this amounts in our model to awarding the property rights over the environment to the first generation.

8. The planner's rate of pure time preference is at the high end of the values found in the literature. This choice maintains consistency with OLG models, where interest rates can reach relatively high levels in early periods as compared to most IAMs.

9. In relation (54) we have \eta = 1 and s_t = s_1. For an elaboration of this reasoning, including an explanatory picture, see Gerlagh and van der Zwaan (1999).

10. In relation (54) we now have \eta = 1 and s_t < s_1.
REFERENCES

Barro, R. J. (1974). Are government bonds net wealth? Journal of Political Economy, 82, 1095-1117.
Blanchard, O. J. (1985). Debt, deficits, and finite horizons. Journal of Political Economy, 93, 223-247.
Cline, W. R. (1992). The economics of global warming. Washington, D.C.: Institute for International Economics.
Coase, R. H. (1960). The problem of social cost. Journal of Law and Economics, III, 1-44.
Costanza, R., d'Arge, R., de Groot, R. S. et al. (1997). The value of the world's ecosystem services and natural capital. Nature, 387, 253-260.
Gale, D. (1973). Pure exchange equilibrium of dynamic economic models. Journal of Economic Theory, 6, 12-36.
Gerlagh, R. (1998a). The efficient and sustainable use of environmental resource systems. Thela Thesis, Amsterdam.
Gerlagh, R. (1998b). ALICE 1; an Applied Long-term Integrated Competitive Equilibrium model. W98-21, IVM, Institute for Environmental Studies, Amsterdam.
Gerlagh, R., & Keyzer, M. A. (forthcoming). Sustainability and the intergenerational distribution of natural resource entitlements. Journal of Public Economics.
Gerlagh, R., & van der Zwaan, B. C. C. (1999). Sustainability and discounting in Integrated Assessment Models. Nota di Lavoro 63.99, Fondazione Eni Enrico Mattei (FEEM), Milan, Italy.
Howarth, R. B., & Norgaard, R. B. (1992). Environmental valuation under sustainable development. American Economic Review, 82, 473-477.
Hurd, M. D. (1989). Mortality risk and bequests. Econometrica, 57(4), 779-813.
IPCC (Intergovernmental Panel on Climate Change), Houghton, J. T., Callander, B. A., & Varney, S. K. (Eds). (1992). Climate change 1992; The supplementary report to the IPCC scientific assessment. Cambridge: Cambridge University Press.
IPCC, Houghton, J. T., Meira Filho, L. G., Bruce, J. P. et al. (Eds). (1995). Climate change 1994; Radiative forcing of climate change and an evaluation of the IPCC IS92 emission scenarios. Cambridge: Cambridge University Press.
IPCC, Bruce, J. P., Lee, H., & Haites, E. F. (Eds). (1996a). Climate change 1995; Economic and social dimensions of climate change. Cambridge: Cambridge University Press.
IPCC, Watson, R. T., Zinyowera, M. C., Moss, R. H., & Dokken, D. J. (Eds). (1996b). Climate change 1995; Impacts, adaptations and mitigation of climate change: scientific-technical analyses. Cambridge: Cambridge University Press.
Krautkraemer, J. A. (1985). Optimal growth, resource amenities and the preservation of natural environments. Review of Economic Studies, LII, 153-170.
Manne, A. S. (1999). Equity, efficiency, and discounting. In: P. R. Portney & J. P. Weyant (Eds), Discounting and intergenerational equity, Ch. 12. Washington, D.C.: Resources for the Future.
Manne, A. S., Mendelsohn, R., & Richels, R. (1995). MERGE, A model for evaluating regional and global effects of GHG reduction policies. Energy Policy, 23, 17-34.
Marini, G., & Scaramozzino, P. (1995). Overlapping generations and environmental control. Journal of Environmental Economics and Management, 29, 64-77.
Mourmouras, A. (1993). Conservationist government policies and intergenerational equity in an overlapping generations model with renewable resources. Journal of Public Economics, 51, 249-268.
Nordhaus, W. D. (1994). Managing the global commons. Cambridge, Massachusetts: MIT Press.
Peck, S. C., & Teisberg, T. J. (1992). CETA: a model for carbon emissions trajectory assessment. Energy Journal, 13, 55-77.
Pezzey, J. (1992). Sustainable development concepts, An economic analysis. Washington, D.C.: World Bank.
Schelling, T. C. (1995). Intergenerational discounting. Energy Policy, 23, 395-401.
Sen, A. K. (1982). Approaches to the choice of discount rates for social benefit-cost analysis. In: R. C. Lind (Ed.), Discounting for time and risk in energy policy, Ch. 9. Washington, D.C.: Resources for the Future.
Solow, R. M. (1986). On the intergenerational allocation of natural resources. Scandinavian Journal of Economics, 88, 141-149.
Stephan, G., Müller-Fürstenberger, G., & Previdoli, P. (1997). Overlapping generations or infinitely-lived agents: intergenerational altruism and the economics of global warming. Environmental and Resource Economics, 10, 27-40.
Tol, R. S. J. (1995). The damage costs of climate change toward more comprehensive calculations. Environmental and Resource Economics, 5, 353-374.
World Bank, Bos, E., Vu, M. T., Massiah, E., & Bulatao, R. A. (Eds). (1994). World population projections 1994-95 edition; estimates and projections with related demographic statistics. Baltimore, Maryland: Johns Hopkins University Press.
Overlapping Generations Versus Infinitely-lived Agent
APPENDIX

Utility Maximization Under Uncertain Life-Time
In this appendix, equivalence is shown between utility maximization under uncertain lifetime and utility maximization of a representative agent as described by equation (7). Under uncertain life-time, every individual consumer of generation t maximizes expected utility, E[u(.)]:

E[u(c^t, b)] = [ Σ_{s=t}^{t+2} (n^t_s/n^t_t) α^{s-t} ((c^t_s/n^t_s)^{1/(1+v)} (b_s)^{v/(1+v)})^ρ ]^{1/ρ},   (56)
where n^t_s/n^t_t is the probability of being alive in period s, and c^t_s/n^t_s is per capita consumption of the rival man-made consumer good. For the young and middle-aged of generation t, the per capita budgets are given by:

c^t_t/n^t_t + q^Y_t b_t/n^t_t + β_t S^t_{t+1}/n^t_t = w_t l^t_t/n^t_t + H^t_t/n^t_t, and   (57)
c^t_{t+1}/n^t_{t+1} + q^Y_{t+1} b_{t+1}/n^t_{t+1} + β_{t+1} S^t_{t+2}/n^t_{t+1} = S^t_{t+1}/n^t_{t+1} + w_{t+1} l^t_{t+1}/n^t_{t+1} + H^t_{t+1}/n^t_{t+1}.   (58)

If an individual is uncertain about his life-time - that is, uncertain as to whether he will live during the third period or not - insurance is bought from the savings of the second period of life, S^t_{t+2}/n^t_{t+1}, to buy an annuity for the third period. Those that live until the end of the third period receive the annuity as income, S^t_{t+2}/n^t_{t+2}. Note that, thereby, the insurance mechanism produces a transfer from those that live two periods to those that live three periods; the deposit paid by those who die is distributed over those that live,
S^t_{t+2}/n^t_{t+1} < S^t_{t+2}/n^t_{t+2},   (59)
if n^t_{t+2} < n^t_{t+1}. The budget for the third period becomes:

c^t_{t+2}/n^t_{t+2} + q^Y_{t+2} b_{t+2}/n^t_{t+2} = S^t_{t+2}/n^t_{t+2} + w_{t+2} l^t_{t+2}/n^t_{t+2} + H^t_{t+2}/n^t_{t+2}.   (60)
Every individual maximizes utility (56) subject to the budget constraints (57), (58), and (60). It is now readily shown that the consumption-savings decision can be described by one representative consumer. Multiply the objective function (56) by n^t_t, sum the constraints (57), (58), and (60) after multiplication by n^t_t, β_{t+1} n^t_{t+1}, and β_{t+2} n^t_{t+2}, respectively, and one arrives at the representative program (7).
CLIMATE RIGHTS AND ECONOMIC MODELING

Richard B. Howarth

ABSTRACT

Conservationists claim that future generations are morally entitled to enjoy the benefits of stable climatic conditions. Libertarians argue that polluters are entitled to emit greenhouse gases in the absence of undue regulation. This chapter explores the implications of these competing value judgements in a numerically calibrated overlapping generations model. Although short-term welfare is significantly higher under a laissez faire scenario in which greenhouse gas emissions remain unregulated, the stabilization of current climatic conditions confers substantial benefits on future generations that augment long-run economic growth. The fine-tuning of greenhouse gas emissions to achieve Pareto efficiency generates net gains that are small in comparison with the welfare differences between the laissez faire and climate stabilization paths.
INTRODUCTION

The design of policies to address global climate change involves pervasive questions of intergenerational fairness. The costs of greenhouse gas emissions abatement will fall predominantly on current producers and consumers, while the benefits will accrue to individuals living decades and perhaps centuries into
The Long-Term Economics of Climate Change, pages 315-336. Copyright © 2001 by Elsevier Science B.V. All rights of reproduction in any form reserved. ISBN: 0-7623-0305-0
the future. Sharp differences of opinion exist regarding the social choice rules to apply in mediating this conflict. According to environmental conservationists, future generations are morally entitled to enjoy the benefits of sustained climatic stability, and present decision-makers hold a duty to protect and defend this right (Brown, 1998). In this perspective, unconstrained greenhouse gas emissions impose uncompensated harms that cannot be reconciled with the fair treatment of posterity. Advocates of this view note that emissions abatement costs could be limited through carefully designed policies. The impacts of climate change, in contrast, are poorly understood, involving the potential for irreversible, catastrophic outcomes. Under these premises, it is seen as morally wrong to impose risks to future lives and livelihoods to avoid short-term changes in technologies and lifestyles to which human beings might readily adapt.

Market libertarians, in contrast, proceed from the premise that limiting government intrusion on economic freedom is of foundational importance (Gray & Rivkin, 1991). In this view, it would be morally wrong to impose costs on present society so that future generations could avoid adaptation to altered climatic conditions. Advocates of this perspective note that future generations will likely enjoy living standards far better than those available today and that the benefits of climate stabilization might turn out to be quite limited. Under this interpretation, there is no legitimate basis for imposing policies with short-run costs that run into the hundreds of billions of dollars.

The tension between these views is of direct relevance to the politics of climate-change response strategies.
The Framework Convention on Climate Change, for example, strongly echoes the conservationist perspective, calling for the "stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system." The implementation of this goal through the Kyoto Protocol, however, faces strong opposition in the U.S. Congress, where libertarian arguments are entertained with sympathy. Economists have contributed to this debate through attempts to identify the costs and benefits of greenhouse gas mitigation measures (IPCC, 1996a). Standard economic models, however, shed little light on the conflict between the respective rights of present and future generations. Formal models of climate-economy interactions typically invoke the fiction of an infinitely-lived representative agent to balance the costs and benefits of greenhouse gas emissions abatement (Nordhaus, 1994). In this setting, the benefits of consumption and environmental quality are compressed into an aggregate welfare index that abstracts away from the distribution of impacts between social groups and world regions. Future welfare is then discounted to
account for the presumed time preference of decision-makers. This strategy is useful in linking economic models to the analytical and computational methods of optimal control theory. The normative interpretation of such models, however, is a matter of long-standing controversy. Under one interpretation, infinitely-lived agent models capture intuitive notions of intergenerational altruism. If society aimed to maximize the preference satisfaction of present decision-makers who attached declining weight to the welfare of their descendants, then the infinitely-lived agent framework would be sufficient to this task under standard technical assumptions (Barro & Sala-i-Martin, 1995). Basing intergenerational choices on the altruistic preferences of present decision-makers, however, is strongly criticized by some analysts; classical utilitarians reject the practice of utility discounting as morally repugnant (Broome, 1992). Defenders of discounting respond that actual choices regarding macroeconomic variables are made as if they maximized the discounted sum of social utility through time. In this sense, utility discounting is defended as a description of social preferences that stands apart from prescriptive notions of fairness and equity (Manne, 1995).

The empirical adequacy of the infinitely-lived agent model is not universally accepted. Kotlikoff and Summers (1981) view intergenerational altruism as an essential determinant of economic behavior. Hurd (1987), in contrast, rejects this model as inconsistent with observed patterns of savings and investment. According to Hurd, the life-cycle hypothesis, in which individuals spread consumption over their finite life spans with no concern for their descendants, dominates the infinitely-lived agent model as a behavioral description. If Hurd is right, then the parameters of the infinitely-lived agent model are without normative significance.
Accepting this conclusion would imply that debates over the so-called "pure rate of time preference" (IPCC, 1996a) are empirically misplaced. For the present purposes, however, the essential point is that infinitely-lived agent models are conceptually inadequate for considering the questions of rights and correlated duties that drive debates over climate change. Neither the market liberties of polluters nor the right of future generations to enjoy an undiminished natural environment can be represented in models where a benevolent planner is free to impose uncompensated costs on some individuals so that others may enjoy higher degrees of preference satisfaction. Rectifying this shortcoming requires the use of revised analytical methods. In recent years, significant progress has been made in the analysis of computable general equilibrium models that incorporate overlapping cohorts of finite-lived agents to explore questions in macroeconomics and public finance
(Auerbach & Kotlikoff, 1987). Overlapping generations models also play an increasing role in the analysis of natural resource and environmental management (Howarth & Norgaard, 1995). This chapter employs a numerically calibrated overlapping generations model to examine the distributional impacts of alternative climate-change policies. Age-structured models are valuable in this context because they differentiate between two logically distinct notions - individual time preference and moral rules for distributing costs and benefits between successive age cohorts - that are essential to policy choices and yet suppressed in the infinitely-lived agent framework. In particular, the study explores the impacts of two policy regimes - a laissez faire case in which greenhouse gas emissions remain uncontrolled, and a climate stabilization case in which greenhouse gas concentrations are frozen at current levels. As we shall see, the distinction between these cases - which turns on the assignment of climate rights between polluters and posterity - has quite substantial implications for the evolution of living standards through time. Conferring climate rights on future generations leads to enhanced levels of long-term economic welfare while reducing the benefits enjoyed by today's producers and consumers. Granting rights to unconstrained emissions benefits present producers and consumers while curtailing long-term economic growth. Neither of these scenarios balances the costs and benefits of greenhouse gas emissions abatement, so each is susceptible to criticism as inconsistent with the achievement of economic efficiency. Relative to the laissez faire path, both present and future generations would benefit if greenhouse gas emissions abatement were accompanied by intergenerational transfers that compensated polluters for the costs of environmental compliance.
And relative to the climate stabilization case, both present and future generations would benefit from the relaxation of emissions limits with compensating investments in produced capital. The analysis, however, finds that the welfare gains generated by efficiency-improving policies are relatively modest under standard assumptions concerning the costs and benefits of climate change. Under the maintained assumptions of the model, questions of rights and fairness have major welfare implications. Attempts to balance the costs and benefits of emissions, in contrast, are of second-order significance. This finding does not diminish the importance of efficiency as a goal of environmental policy. It suggests, however, that economic models that emphasize efficiency while downplaying questions of intergenerational fairness are of limited value in the analysis of climate-change mitigation strategies.
THE MODEL

The model employed in this analysis was developed by Howarth (1998) to explore the links between intertemporal efficiency and intergenerational fairness in climate-change policy analysis. Infinitely-lived agent models typically give rise to a unique social optimum that maximizes the preference satisfaction of a representative member of society. Overlapping generations models, in contrast, support a continuum of efficient allocations that correspond to alternative sets of capital transfers between successive age cohorts. Howarth shows that short-run rates of greenhouse gas emissions control ranging from 16% to 48% may be justified under alternative rules for aggregating the net benefits of climate stabilization policies - in particular, the Kaldor-Hicks criterion, in which policy-makers seek to attain intertemporal efficiency in the absence of intergenerational transfers; and classical utilitarianism, in which they maximize the summed utility of all present and future persons. The present work applies this model to investigate the implications of alternative assignments of climate rights between polluters and future generations. It thus considers social choice rules that were not examined in Howarth's initial paper. The analysis considers an overlapping generations model of a decentralized, competitive economy that is numerically calibrated to match the core empirical assumptions developed in Nordhaus' (1994) Managing the Global Commons. Although one might reasonably question Nordhaus' assumptions regarding the costs and benefits of greenhouse gas emissions and the future growth rate of the world economy (see Howarth & Monahan, 1996), Nordhaus' Dynamic Integrated model of Climate and the Economy (DICE) has emerged as a standard point of comparison in the climate-change literature.
Parameterizing the model on this basis is useful because it highlights the insights that emerge when one shifts from the infinitely-lived agent approach to an overlapping generations framework.

Consumer Behavior
The model considers a sequence of dates t = 0, 1, 2, .... Time is measured in 35-year increments, and the date t = 0 denotes the period beginning in the year 2000. At each point in time a new generation consisting of:

n_t = 5.27 - 1.42(0.587)^t
(1)
billion persons is born who live at dates t and t + 1. The total population, which consists of both old and young individuals, is thus N_t = n_{t-1} + n_t. Under this
formulation, population grows from 5.9 billion persons in the year 2000 to 10.5 billion in the long-term future. Most of this growth occurs in the course of the 21st century.

The economy produces a homogeneous consumption-investment good that is the sole object of consumer preferences. A typical individual enjoys the consumption levels c_yt when young and c_{o,t+1} in old age. Her preferences are represented by the life-cycle utility function:

u_t = log(c_yt) + 0.84 log(c_{o,t+1}).
(2)
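The population path in equation (1) can be sketched numerically. One caveat: the size of the initial old cohort n_{-1} is not stated in the text, so in the sketch below it is backed out from the quoted year-2000 total of 5.9 billion - an assumption, not a value from the chapter.

```python
# Sketch of the demographic block, equation (1). Values in billions of
# persons; one period = 35 years, t = 0 is the period beginning in 2000.

def n(t):
    """Size of the generation born at date t: n_t = 5.27 - 1.42(0.587)^t."""
    return 5.27 - 1.42 * 0.587 ** t

# Assumption: the initial old cohort n_{-1} is calibrated so that total
# population in 2000 equals the 5.9 billion quoted in the text.
n_initial_old = 5.9 - n(0)

def N(t):
    """Total population N_t = n_{t-1} + n_t (old plus young)."""
    return (n_initial_old if t == 0 else n(t - 1)) + n(t)

print(round(N(0), 1))   # 5.9 by construction (year 2000)
print(round(N(12), 1))  # 10.5 -> approaches 2 * 5.27 in the long run
```

Each new cohort converges to 5.27 billion within a few 35-year periods, which is why the total plateaus at roughly 10.5 billion, with most of the increase in the 21st century.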
The utility discount factor (0.84) is chosen to replicate the savings and investment behavior observed in DICE in baseline simulations. Individuals are endowed with one unit of labor that they supply inelastically to producers in each period of their lives. An individual saves k_{t+1} units of capital in youth that are rented to producers in old age. In addition, she receives the lump-sum income transfers π_yt and π_{o,t+1} from an exogenous agency (the "government") in successive periods of her life. Policy-makers may use these transfers to release the revenues generated through greenhouse gas emissions taxes and to achieve a desired distribution of welfare between generations. With w_t as the wage rate and r_t as the interest rate or rental price of capital, the individual's budget constraints may be written:

c_yt + k_{t+1} = w_t + π_yt   (3)

c_{o,t+1} = w_{t+1} + (1 + r_{t+1}) k_{t+1} + π_{o,t+1}.   (4)
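With the log utility of equation (2), the household's problem formed by the budgets above has a closed-form saving rule. A minimal sketch; the prices used in the example are illustrative round numbers, not output of the calibrated model:

```python
# Saving behavior implied by equations (2)-(5). Substituting the budgets
# (3)-(4) into the first-order condition (5), which for log utility reads
# c_o / (0.84 * c_y) = 1 + r, and solving for k gives a closed form.
BETA = 0.84  # utility discount factor over one 35-year period

def savings(w_t, w_next, r_next, pi_y=0.0, pi_o=0.0):
    """Optimal capital k_{t+1} carried from youth into old age."""
    R = 1.0 + r_next
    return (BETA * R * (w_t + pi_y) - (w_next + pi_o)) / (R * (1.0 + BETA))

# Hypothetical prices (not from the chapter): wages 10 and 12, and a 150%
# interest rate over the 35-year period.
k = savings(10.0, 12.0, 1.5)
c_y = 10.0 - k                # budget (3), no transfers
c_o = 12.0 + (1.0 + 1.5) * k  # budget (4), no transfers
# The first-order condition (5) holds at the solution:
assert abs(c_o / (BETA * c_y) - (1.0 + 1.5)) < 1e-9
```

Higher old-age wages or transfers reduce saving, while a higher interest rate raises it - the mechanism through which the model's carbon taxes and lump-sum transfers shift capital accumulation.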
Taking prices and income transfers as fixed by market conditions and public policies, a typical person seeks to maximize her life-cycle utility subject to these constraints. This problem yields the first-order condition:

(∂u_t/∂c_yt) / (∂u_t/∂c_{o,t+1}) = 1 + r_{t+1}   (5)
that equates the marginal rate of intertemporal substitution with the gross return on capital investment.

Producer Behavior
Production is managed by a large number of competitive firms having common access to the technology described by the production function:

Y_t = A_t K_t^0.25 N_t^0.75 = C_t + K_{t+1} - 0.025 K_t   (6)
In this formulation, Y_t measures gross output, measured in trillion dollars. A_t is an index of total factor productivity. K_t = n_{t-1} k_t is the total capital stock, or the
summed asset holdings of all old individuals. N_t is the total input of labor, which is proportional to population under the maintained assumptions of the model. And C_t = n_{t-1} c_ot + n_t c_yt is the aggregate level of consumption by both old and young individuals. The production function exhibits the familiar Cobb-Douglas form with output elasticities of 0.25 for capital and 0.75 for labor. The capital stock depreciates at the rate of 10%/year, and the initial capital stock is set equal to $56 trillion. Total factor productivity is defined by the equation:

A_t = (235 - 142(0.739)^t) × (1 - 0.0133(T_t/3)^2) × (1 - 0.0686 μ_t^2.887)
(7)
where T_t is the increase in mean global temperature relative to the pre-industrial norm, measured in °C. The variable μ_t measures the percentage rate at which carbon dioxide emissions are reduced relative to unconstrained levels. The model allows for a long-term rise in total factor productivity that is driven by autonomous technological change. Following the assumptions of Nordhaus, the rate of technological progress falls from 1.4%/year in the present to zero in the long-term future.

Under the assumptions of the model, both increased global temperatures and efforts to reduce carbon dioxide emissions have negative impacts on total factor productivity. The damages imposed by climate change increase with the square of the increase in mean global temperature relative to the pre-industrial norm. A 3°C temperature rise leads to a 1.33% productivity loss. A 25% reduction in carbon dioxide emissions imposes costs equivalent to 0.1% of gross world output, while costs rise to 0.9% for a 50% emissions reduction. These assumptions correspond exactly to those embodied in Nordhaus' DICE model. Following Nordhaus, the model assumes that emissions of carbon dioxide (E_t, measured in billion tonnes of carbon) are, ceteris paribus, proportional to the level of economic activity:

E_t = (1 - μ_t)(0.181 + 0.189(0.622)^t) Y_t.
(8)
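Equations (6)-(8) can be checked directly. One caution: the abatement-cost exponent (2.887) used below is an assumption - the printed equation is illegible at that point - inferred from the cost figures quoted above (0.1% of output at a 25% cut, 0.9% at a 50% cut), which it reproduces, and matching the DICE specification.

```python
# The three multiplicative pieces of total factor productivity, equation (7),
# and the production and emissions equations (6) and (8).

def damage_factor(T):
    """Share of output retained after climate damages at temperature rise T."""
    return 1.0 - 0.0133 * (T / 3.0) ** 2

def abatement_factor(mu):
    """Share of output retained after abatement costs at control rate mu."""
    return 1.0 - 0.0686 * mu ** 2.887  # exponent inferred; see lead-in

def tfp(t, T, mu):
    return (235.0 - 142.0 * 0.739 ** t) * damage_factor(T) * abatement_factor(mu)

def gross_output(t, K, N, T=0.0, mu=0.0):
    """Cobb-Douglas production, equation (6): Y = A K^0.25 N^0.75."""
    return tfp(t, T, mu) * K ** 0.25 * N ** 0.75

def emissions(t, Y, mu):
    """Industrial carbon emissions, equation (8), GtC per 35-year period."""
    return (1.0 - mu) * (0.181 + 0.189 * 0.622 ** t) * Y

# Reproduce the cost figures quoted in the text:
print(round(1.0 - damage_factor(3.0), 4))      # 0.0133 -> 1.33% loss at +3 C
print(round(1.0 - abatement_factor(0.25), 3))  # 0.001  -> ~0.1% at a 25% cut
print(round(1.0 - abatement_factor(0.5), 3))   # 0.009  -> ~0.9% at a 50% cut
```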
The emissions/output ratio falls according to the rate of autonomous technological change. As noted above, emissions levels may be reduced through conscious choices to conserve environmental quality. In this equation, μ_t measures the stringency of emissions abatement efforts. The model assumes that firms regard mean global temperature and prevailing market prices as independent of their private decisions. The government, however, imposes a unit tax v_t on carbon dioxide emissions that provides an
incentive to maintain emissions at socially desired levels. Firms seek to maximize their profits through the choice of capital and labor inputs and carbon dioxide emissions, equating the price of each factor with its marginal productivity:

r_t = ∂Y_t/∂K_t - 0.975   (9)

w_t = ∂Y_t/∂N_t   (10)

v_t = ∂Y_t/∂E_t.   (11)
In equation (9), the rental price of capital is set equal to the gross return on investment minus the depreciation rate, which is almost unity over generational time scales. Profits are zero because the production function exhibits constant returns to scale.

The Global Environment
Completing the model requires a description of the links between greenhouse gas emissions and the long-term evolution of climatic conditions. The model assumes that 64% of carbon dioxide emissions remain airborne while 36% are immediately absorbed by the surface waters of the ocean. Once in the atmosphere, carbon dioxide is removed at an annual rate of 0.833%, so that the atmospheric stock of carbon (Q_t, measured in billion tonnes) obeys the difference equation:

Q_{t+1} - 590 = 0.64 E_t + 0.75(Q_t - 590).   (12)
The initial carbon stock, as determined by historical carbon dioxide emissions, is 784 billion tonnes. Mean global temperature increases logarithmically with respect to carbon dioxide concentrations and linearly with respect to forcing by non-carbon greenhouse gases:

T_t = (5.92 log(Q_t/590) + F_t)/1.41.   (13)
Following Nordhaus, the model assumes that the radiative forcing caused by chlorofluorocarbons, methane, nitrous oxide, and water vapor (F_t, measured in watts/m^2) is exogenously determined by the equation:

F_t = 1.42 - 0.764(0.523)^t.   (14)
This specification allows for the phase-out of chlorofluorocarbon production and use under the Montreal Protocol while assuming that emissions of other trace gases will remain unregulated.
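The climate block (12)-(14) is then a short recursion. A minimal sketch; note that log in (13) is read here as the natural logarithm, which implies roughly 4.1 W/m^2 of forcing per doubling of CO2 (5.92 × ln 2), consistent with the DICE assumptions the model follows:

```python
import math

def next_carbon(Q, E):
    """Equation (12): Q_{t+1} - 590 = 0.64 E_t + 0.75 (Q_t - 590).
    0.75 is roughly (1 - 0.00833)^35, the 35-year survival of airborne CO2."""
    return 590.0 + 0.64 * E + 0.75 * (Q - 590.0)

def forcing(t):
    """Equation (14): exogenous non-CO2 forcing, watts per square metre."""
    return 1.42 - 0.764 * 0.523 ** t

def temperature(Q, t):
    """Equation (13): mean temperature rise in degrees C (natural log)."""
    return (5.92 * math.log(Q / 590.0) + forcing(t)) / 1.41

# With the initial stock of 784 GtC stated in the text:
print(round(temperature(784.0, 0), 1))  # 1.7, the 2000 row of Table 1
```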
POLICY SCENARIOS

It may readily be shown that competitive equilibria exist for the model described above provided that the government selects and enforces a feasible set of policy variables (Howarth & Norgaard, 1995). Although it is not strictly necessary, it is useful to limit attention to the case where the government maintains a balanced budget so that n_{t-1} π_ot + n_t π_yt = v_t E_t. This condition implies that the lump-sum transfers paid to old and young individuals sum up to the amount of revenue generated through taxing carbon dioxide emissions. Marini and Scaramozzino (1995) describe a related setup that allows for the accumulation of government debt.

The analysis considers the welfare implications of four separate policy regimes. In the first scenario, the laissez faire baseline, decision-makers are guided by libertarian political norms, in which policy interventions to control environmental externalities or redistribute income between successive age cohorts are seen as morally illegitimate. Although the government defends property rights to capital and labor and provides the institutional structures necessary to support competitive markets, it assumes that households and firms have the right to render decisions in the absence of environmental regulations and redistributive policies. Hence v_t = 0 and π_ot = π_yt = 0 at each and every date.

In the second scenario, the climate stabilization case, decision-makers invoke the conservationist principle that future generations are morally entitled to inherit an undiminished natural environment. In particular, the time path for the carbon dioxide emissions tax is chosen so that climatic conditions, as indexed by the mean global temperature, are stabilized at current levels. In this scenario, questions of intergenerational fairness are addressed through the public management of environmental resources, while decisions regarding the time paths of consumption and investment are left to private individuals.
The revenues generated by carbon taxes are thus distributed in equal lump sums to all living persons so that π_ot = π_yt = v_t E_t/N_t.

A critic might justifiably point out that neither the laissez faire baseline nor the climate stabilization path achieves an efficient allocation of resources since greenhouse gas emissions abatement rates are based purely on questions of rights and entitlements with no balancing of costs and benefits. In the model under consideration, an equilibrium path is Pareto efficient if it is impossible to increase the life-cycle utility of one generation without making another worse off. As Howarth (1996) shows, an allocation is efficient in this sense if it equates the marginal costs and present-value marginal benefits of greenhouse gas emissions abatement so that:
∂Y_t/∂E_t = - Σ_{i=1}^{∞} (∂Y_{t+i}/∂T_{t+i})(∂T_{t+i}/∂E_t) / Π_{j=1}^{i} (1 + r_{t+j}).   (15)
In this setting, ∂Y_t/∂E_t captures the marginal contribution that greenhouse gas emissions make to current production, while (∂Y_{t+i}/∂T_{t+i})(∂T_{t+i}/∂E_t) measures the incremental cost that current emissions impose on future production through the negative impacts caused by increased global temperatures. The discount rate is set equal to the market rate of interest, which reflects both the marginal productivity of capital and the marginal time preferences of individual consumers.

Although analysts sometimes invoke efficiency conditions such as equation (15) to define "optimal" policies without explicit attention to the distribution of costs and benefits between stakeholder groups, the present discussion considers stronger notions of optimal resource allocation. Relative to the laissez faire path, efforts to reduce greenhouse gas emissions in the absence of intergenerational transfers would benefit future generations at the expense of today's producers and consumers. If one assumes that economic actors have a right to regulatory freedom, then the imposition of such policies could not be understood as a "welfare gain." A true improvement in social welfare would require the just sharing of costs and benefits between all affected individuals. With this in mind, the analysis focuses on efficient paths in which polluters (or future generations) are fully compensated for invasions of their rights. In the face of such compensation, all present and future persons would agree that the path in question constituted an improvement relative to the laissez faire baseline or the climate stabilization path.

In formal terms, suppose that the consumption of old and young persons along the laissez faire path is given by c^lf_ot and c^lf_yt. Then the pollution rights optimum is defined as the solution to the problem:

Maximize ε^pr = c_{o0} - c^lf_{o0}
(16)
subject to:

U(c^pr_yt, c^pr_{o,t+1}) ≥ U(c^lf_yt + ε^pr, c^lf_{o,t+1} + ε^pr)
(17)
plus the technical constraints of the model. In this setting, ε^pr measures the increase in consumption achieved by a typical old person in the first period of the model as compared against the laissez faire path. The goal is to maximize this benefit while ensuring that each future generation achieves a welfare gain that is equivalent to a uniform increase in per capita consumption of similar magnitude at all points in time.
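The present-value damage sum in the efficiency condition (15) can be approximated by tracing a one-gigatonne emissions pulse through equations (12), (13), and the damage term of (7). In the rough sketch below, the baseline carbon stock, output, temperature, and interest rate are illustrative round numbers (assumptions, not values from Table 1):

```python
# Rough present value of the damages from one extra GtC of emissions.
Q_BASE = 900.0   # GtC, assumed baseline atmospheric carbon stock
Y_BASE = 1500.0  # trillion 1989 $ per 35-year period, assumed gross output
T_BASE = 3.0     # degrees C, assumed baseline temperature rise
R = 1.5          # assumed interest rate per 35-year period

pv_damage = 0.0
dQ = 0.64  # airborne share of the pulse, equation (12); decays 25%/period
for i in range(1, 40):
    dT = 5.92 / (1.41 * Q_BASE) * dQ  # dT/dE via equation (13) at Q_BASE
    # dY/dT from the damage factor in (7): d/dT [0.0133 (T/3)^2] equals
    # 2*0.0133*T/9, ignoring the small (1 - damage) denominator.
    dY_loss = Y_BASE * 2.0 * 0.0133 * T_BASE / 9.0 * dT
    pv_damage += dY_loss / (1.0 + R) ** i  # discount back to date t
    dQ *= 0.75                             # airborne pulse decays

# trillion $ per GtC equals thousands of $ per tonne of carbon:
print(round(pv_damage * 1000))  # about 23 $/tC under these assumed values
```

This back-of-the-envelope marginal damage sits in the same range as the $16-$69 per tonne carbon taxes reported for the efficient paths in Table 1.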
It is readily shown that the solution to problem (16) is Pareto efficient so that it obeys the first-order condition specified by equation (15). In intuitive terms, the pollution rights optimum rests on a commitment to two basic normative principles. It:

(a) Maximizes the efficiency gains generated by greenhouse gas emissions abatement in comparison with the laissez faire path.

(b) Distributes the resulting net benefits between generations so that each present and future person achieves a welfare gain that is commensurate with a uniform increase in consumption.

By standard arguments (Howarth, 1996), the pollution rights optimum may be supported as a competitive equilibrium given some set of carbon taxes and lump-sum intergenerational transfers. The reasoning that supports this result is a straightforward extension of the Second Theorem of Welfare Economics. The pollution rights optimum involves the reduction of greenhouse gas emissions relative to the laissez faire baseline with the fair compensation of those bearing the costs of pollution abatement.

The next scenario considered in this analysis - the climate rights optimum - rests on the assumption that future generations hold an ex ante right to enjoy the benefits of an undiminished natural environment. It recognizes, however, that both present and future generations would benefit from policies that (relative to the climate stabilization path) permitted increased levels of greenhouse gas emissions that were accompanied by transfers of income from present to future generations. Specifically, let c^cs_ot and c^cs_yt represent the consumption of old and young individuals along the climate stabilization path. Then the climate rights optimum is defined as the solution to the problem:

Maximize ε^cr = c_{o0} - c^cs_{o0}
(18)
subject to:

U(c^cr_yt, c^cr_{o,t+1}) ≥ U(c^cs_yt + ε^cr, c^cs_{o,t+1} + ε^cr)
(19)
plus the structural constraints of the model. Here ε^cr measures the increase in consumption achieved by a typical old person in the first period of the model in comparison with the climate stabilization path. Relation (19) is a "fair sharing" rule that requires that future generations enjoy net benefits that are equivalent to a uniform consumption increase of ε^cr per period. The result is an optimal path for which:

(a) The present-value net benefits of greenhouse gas emissions are optimized according to the efficiency condition embodied in equation (15).
(b) These net benefits are equitably distributed so that each generation achieves a commensurate welfare gain in comparison with the climate stabilization path. The climate rights optimum, like the pollution rights optimum, can be supported as a competitive equilibrium given an appropriate set of carbon taxes and intergenerational transfers. As is shown below, however, the two paths have significantly different implications for the development of the economy through time.
BASE SIMULATIONS

The core results of the analysis described above are summarized in Table 1. Under the laissez faire path, average consumption in the world economy increases from $4,058 per person per year to $14,664 in the year 2420, which is usefully described as the "long-run future" for purposes of discussion. This consumption rise is supported by a 10-fold increase in the capital stock and a substantial increase in carbon dioxide emissions. In this scenario, emissions rise from 10.2 billion tonnes per year in the present to 30.9 in the long-run future. This leads in turn to a dramatic increase in mean global temperature of 8.0°C. Although living standards rise significantly in this scenario, current greenhouse gas emissions impose heavy economic costs on future generations. Climate change damages impose a long-term loss of over 9% of economic output - a cost that amounts to some $16 trillion per year, or $1,531 per capita.

The common wisdom holds that the stabilization of current climatic conditions would impose substantial costs on both current consumption and long-run economic welfare. The results presented here suggest that this perception is only partially correct. Because emissions of non-carbon greenhouse gases - principally methane and nitrous oxide - are expected to rise during the 21st century, quite stringent limitations on carbon dioxide emissions are required to stabilize mean global temperature. In the climate stabilization scenario, carbon emissions rise from only 0.1 billion tonnes per year in the year 2000 to 1.1 in the long-term future. Relative to the laissez faire path, these figures constitute emissions reductions of over 95%. Emissions control leads to a 7% reduction in gross world output and a consumption loss of $267 per person in the year 2000. In the long run, however, the climate stabilization path provides a significant net gain in economic performance.
Although future generations must themselves bear the costs of carbon dioxide emissions abatement, they also enjoy the benefits of sustained climatic stability. On balance, this scenario leads
to a long-term increase in per capita consumption of $607 in comparison with the laissez faire path. In intuitive terms, reductions in greenhouse gas emissions conserve an important form of capital - environmental quality - that
Table 1.  Simulation Results - Base Parameter Values.

Year                                     2000    2105    2210    2315    2420

Population (billion persons)              5.9     9.7    10.4    10.5    10.5

Consumption (1989 $/person/year)
  Laissez faire                         4,058  10,467  13,298  14,304  14,664
  Climate stabilization                 3,791  10,027  13,341  14,714  15,271
  Pollution rights optimum              4,063  10,506  13,417  14,468  14,848
  Climate rights optimum                4,054  10,488  13,628  14,867  15,348

Capital Stock (trillion 1989 $)
  Laissez faire                            56     307     478     540     561
  Climate stabilization                    56     288     473     550     581
  Pollution rights optimum                 56     299     458     513     531
  Climate rights optimum                   56     302     509     606     648

Net Income Transfers (1989 $/year)
  Laissez faire                             0       0       0       0       0
  Climate stabilization                     0       0       0       0       0
  Pollution rights optimum                -10    -174    -356    -447    -487
  Climate rights optimum                   56      20     415     707     870

Carbon Dioxide Emissions (billion tonnes-carbon/year)
  Laissez faire                          10.2    25.5    29.2    30.4    30.9
  Climate stabilization                   0.1     1.0     1.1     1.1     1.1
  Pollution rights optimum                8.6    19.7    22.4    23.4    23.8
  Climate rights optimum                  8.5    19.6    22.3    23.4    23.8

Carbon Tax (1989 $/tonne)
  Laissez faire                             0       0       0       0       0
  Climate stabilization                   560     855   1,016   1,066   1,081
  Pollution rights optimum                 16      53      66      69      69
  Climate rights optimum                   16      57      79      87      90

Temperature Increase (°C)
  Laissez faire                           1.7     5.1     6.9     7.7     8.0
  Climate stabilization                   1.7     1.7     1.7     1.7     1.7
  Pollution rights optimum                1.7     4.6     6.2     6.8     7.1
  Climate rights optimum                  1.7     4.6     6.1     6.8     7.1
328
RICHARD B. HOWARTH
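As a quick arithmetic check (illustrative only, not part of the original analysis), the headline comparisons quoted in the text can be recomputed directly from the Table 1 entries:

```python
# Recomputing headline figures quoted in the text from Table 1 entries.
# All monetary values are in constant 1989 U.S. dollars.

# Long-run (year 2420) carbon emissions, billion tonnes-carbon/year:
laissez_faire, stabilization = 30.9, 1.1
reduction = 1 - stabilization / laissez_faire
assert reduction > 0.95  # "emissions reductions of over 95%"

# Long-run climate damages of "some $16 trillion per year" spread over
# the 2420 population of 10.5 billion persons:
per_capita_damage = 16e12 / 10.5e9  # roughly $1,524; the text's $1,531
                                    # reflects the unrounded damage total

# Long-run per capita consumption gains relative to laissez faire:
gain_stabilization = 15_271 - 14_664   # $607 (climate stabilization)
gain_climate_rights = 15_348 - 14_664  # $684 (climate rights optimum)
print(reduction, per_capita_damage, gain_stabilization, gain_climate_rights)
```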
contributes significantly to production and consumption. Under the normative assumptions of this scenario, future generations are morally entitled to enjoy these benefits, even if a reduction in short-term economic welfare is required to support this objective. Of course, market libertarians would reject this reasoning as ethically unwarranted. The analysis presented here suggests that the distinction between these views has significant implications for the long-term development of the world economy. As noted above, neither the laissez faire path nor the climate stabilization scenario achieves an efficient allocation of resources over time. The pollution rights optimum internalizes the externalities imposed by carbon dioxide emissions and distributes the net gains, as defined in relation to the laissez faire path, equitably between all present and future persons. As the results make clear, the pollution rights optimum gives rise to quite insubstantial net benefits that are equivalent to a permanent consumption increase of only $5 per person per year. Relative to the laissez faire path, carbon dioxide emissions are reduced by 16% in the year 2000 and 23% in the long-term future. To support this outcome, a carbon tax that rises from $16 to $69 per tonne must be implemented and enforced by the government. The pollution rights optimum, like the laissez faire path, allows for a substantial increase in mean global temperature over the course of the coming centuries. The temperature increase is limited to 7.1°C (as opposed to 8.0°C) in exchange for compensatory payments from the young to the old that rise from $10 to $487 per person per year over the period under discussion. These figures are based on the "net income transfers" estimates presented in Table 1, which represent the net payment that a typical young person receives from the government after subtracting her per capita share of carbon tax revenues (i.e. as tr_t - v_tE_t/N_t).
Since these transfers are negative in the case under consideration, they represent net payments to individuals in old age. The numerical results that surround the climate rights optimum, in which carbon dioxide emissions limits are relaxed relative to the climate stabilization case with the equal sharing of net benefits between generations, are generally consistent with the Coase Theorem, under which efficient levels of pollution abatement are insensitive to the distribution of property rights between stakeholders (see Coase, 1960). As Table 1 makes clear, both the climate rights optimum and the pollution rights optimum give rise to carbon emissions that rise from about 8.5 billion tonnes in the year 2000 to 23.8 in the long-term future. The climate rights optimum supports a carbon tax that rises from $16 per tonne to $90 over the period under consideration, with a long-run temperature increase of 7.1°C.
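The net-income-transfer bookkeeping behind Table 1 can be sketched in a few lines; the gross transfer backed out here is purely illustrative and is not a figure reported in the chapter:

```python
# Sketch of the net transfer accounting described in the text: the "net
# income transfer" to a typical young person equals her gross transfer tr_t
# minus her per capita share of carbon tax revenues, v_t * E_t / N_t.
# Values are taken from Table 1 (pollution rights optimum, year 2000).

v_t = 16.0            # carbon tax, 1989 $/tonne-carbon
E_t = 8.6e9           # carbon emissions, tonnes-carbon/year
N_t = 5.9e9           # total population; the model's N_t may count only
                      # the young cohort, so this is a rough proxy
net_transfer = -10.0  # 1989 $/person/year (negative: young pay old)

tax_share = v_t * E_t / N_t                      # roughly $23 per person
implied_gross_transfer = net_transfer + tax_share  # illustrative back-out
print(round(tax_share, 1), round(implied_gross_transfer, 1))
```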
It is important to note, however, that the climate rights optimum differs substantially from the pollution rights optimum with respect to the distribution of economic welfare between present and future generations. For the case at hand, young generations receive net income transfers from their elders that increase from $56 per person in the year 2000 to $870 in 2420. These transfers constitute compensation for reductions in environmental quality to which future generations hold an ex ante entitlement. In comparison with the climate stabilization path, the climate rights optimum generates net economic benefits that are equivalent to an across-the-board consumption increase of $337 per person per year. In comparison with the laissez faire path, the climate rights optimum confers roughly equivalent benefits on today's producers and consumers, while supporting a consumption gain of $684 per year to individuals living in the year 2420.
HIGH DAMAGE SCENARIOS

The results outlined above are based on the core empirical assumptions of Nordhaus' (1994) DICE model. These assumptions are controversial in many important respects. For starters, the model's representation of technological change implies that global per capita consumption will approach a long-run equilibrium that is considerably below the level already achieved in the world's richest economies. In addition, the model rules out the possibility that the adoption of cost-effective energy-efficient technologies will yield significant zero-cost greenhouse gas emissions reductions. Yet the IPCC (1996a) concludes that emissions reductions of 10-30% are achievable at no cost through accelerated technology adoption. These issues are important and have in fact been addressed in part through the work of Nordhaus himself and a range of other authors. The present analysis has no light to shed on these particular issues. A third aspect of Nordhaus' analysis, however, is of direct concern to the present discussion - namely the assumption that climate change will likely impose relatively modest impacts on the future economy. Environmental conservationists warn that climate change may have unforeseen impacts on future generations that might impose rather devastating damages (IPCC, 1996b). A shutdown of the North Atlantic conveyor belt circulation, for example, might transform Europe's temperate climate into conditions more reminiscent of the Arctic. More prosaically, climate change might enhance the frequency and severity of extreme weather events - floods, droughts, hurricanes, and typhoons - that impose catastrophic risks on vulnerable populations. And finally, climate change might lead to substantial reductions in
biodiversity and ecosystem services. Efforts to evaluate the monetary costs of such impacts remain crude and unsatisfactory. A long-standing literature urges caution in rendering decisions that might involve the irreversible loss of unique natural environments (Krutilla & Fisher, 1985). To illustrate the sensitivity of the model to assumptions regarding the economic costs of climate change, the analysis considers a set of "high damage" scenarios in which a 3°C increase in mean global temperature leads to a 13.3% loss in gross world output - a level of damage that is ten times more severe than is assumed in the model's base specification. In formal terms, this change is implemented by altering the formula that defines total factor productivity so that:

A_t = (235 - 142(0.739)^t) × (1 - 0.133(T_t/3)^2) × (1 - 0.0686μ_t^2.89).    (20)

This assumption is invoked for strictly illustrative purposes and is not grounded in rigorous empirical evidence. It is worth noting, however, that Nordhaus (1994) himself develops a "catastrophic climate change" scenario in which a 3°C temperature rise will lead to a 24% output loss. This scenario is based in part on expert surveys of climate scientists and policy analysts. The results that arise in this revised version of the model are outlined in Table 2. As the table makes clear, the assignment of climate rights between polluters and future generations is of crucial importance in a world where greenhouse gas emissions impose major economic costs. In a laissez faire path where carbon dioxide emissions remain uncontrolled, per capita consumption rises from $3,909 in the year 2000 to just $6,790 in the long-term future. A temperature increase of 5.8°C imposes costs equivalent to a 49% reduction in future gross world output. The climate stabilization scenario, in contrast, gives rise to consumption levels that are more reminiscent of the low-damage cases considered in the preceding section.
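Equation (20) can be sketched in code as follows. The coefficients reflect a DICE-style reading of the garbled printed formula - quadratic damages normalized at 3°C and an abatement cost term 0.0686μ^2.89, where μ_t is the emissions control rate - so the exact exponents should be treated as assumptions rather than verified model code:

```python
# Illustrative sketch of the "high damage" total factor productivity in
# Eq. (20). T is the mean temperature increase (deg C) and mu the emissions
# control rate in [0, 1]; the base specification would use damage_coef=0.0133.

def total_factor_productivity(t, T, mu, damage_coef=0.133):
    technology = 235.0 - 142.0 * 0.739 ** t          # exogenous technology trend
    climate_damage = 1.0 - damage_coef * (T / 3.0) ** 2
    abatement_cost = 1.0 - 0.0686 * mu ** 2.89
    return technology * climate_damage * abatement_cost

# A 3 deg C warming trims output by the damage coefficient (13.3% here,
# one order of magnitude above the base case):
loss_3C = 1 - total_factor_productivity(0, 3.0, 0.0) / total_factor_productivity(0, 0.0, 0.0)

# The 5.8 deg C laissez faire warming implies roughly the 49% output loss
# cited in the text:
loss_58C = 1 - total_factor_productivity(0, 5.8, 0.0) / total_factor_productivity(0, 0.0, 0.0)
print(round(loss_3C, 3), round(loss_58C, 3))
```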
Consumption rises from $3,652 per person in the year 2000 to $14,533 in 2420. Relative to the laissez faire path, a 7% reduction in short-run consumption gives rise to a 114% increase in long-term living standards. In the model, these gains are reported in concrete, measurable units. In reality, climate stabilization provides a hedge against uncertain (and perhaps uncharacterizable) risks (Woodward & Bishop, 1997). The "high damage" scenarios confirm the previous finding that efforts to improve the efficiency of resource allocation are empirically less important than the assignment of climate rights in governing the evolution of living standards through time. Relative to the laissez faire path, the pollution rights optimum supports welfare gains that are equivalent to a uniform consumption rise of $129 per person per year. Relative to the climate stabilization path, the
Table 2. Simulation Results - "High Damage" Scenarios.

                                             2000     2105     2210     2315     2420

Population (billion persons)                  5.9      9.7     10.4     10.5     10.5

Consumption (1989 $/person/year)
  Laissez faire                             3,909    6,883    6,608    6,719    6,790
  Climate stabilization                     3,652    9,545   12,698   14,004   14,533
  Pollution rights optimum                  3,942    7,954    8,275    8,442    8,538
  Climate rights optimum                    3,824    9,307   12,328   13,652   14,201

Capital Stock (trillion 1989 $)
  Laissez faire                                56      237      251      256      261
  Climate stabilization                        56      275      450      524      553
  Pollution rights optimum                     56      157      147      150      152
  Climate rights optimum                       56      332      533      607      634

Net Income Transfers (1989 $/person/year)
  Laissez faire                                 0        0        0        0        0
  Climate stabilization                         0        0        0        0        0
  Pollution rights optimum                   -125   -2,150   -2,687   -2,759   -2,801
  Climate rights optimum                       96    1,084    1,295    1,257    1,218

Carbon Dioxide Emissions (billion tonnes-carbon/year)
  Laissez faire                               9.8     16.8     14.5     14.3     14.3
  Climate stabilization                       0.1      1.0      1.1      1.1      1.1
  Pollution rights optimum                    5.2      9.1      9.1      8.9      8.9
  Climate rights optimum                      4.3      3.2      2.7      2.7      2.6

Carbon Tax (1989 $/tonne)
  Laissez faire                                 0        0        0        0        0
  Climate stabilization                       560      851    1,011    1,062    1,076
  Pollution rights optimum                    125      239      248      261      267
  Climate rights optimum                      174      677      881      939      956

Temperature Increase (°C)
  Laissez faire                               1.7      4.7      5.6      5.7      5.8
  Climate stabilization                       1.7      1.7      1.7      1.7      1.7
  Pollution rights optimum                    1.7      3.4      4.2      4.4      4.5
  Climate rights optimum                      1.7      2.6      2.6      2.5      2.4
climate rights optimum generates effective annualized gains of $119 per person. These efficiency gains, while significant, are quite insubstantial when compared with the $7,321 difference in long-run per capita consumption between the climate stabilization and laissez faire scenarios. In a similar vein, the climate rights optimum supports long-run consumption levels that are two-thirds higher than those that arise in the pollution rights optimum. The model runs outlined in Table 2 also show that the common wisdom contained in the Coase Theorem breaks down in the case where climate change imposes major damages on economic activity. In the pollution rights optimum - a Pareto efficient path in which polluters are compensated for the costs of carbon dioxide emissions abatement - emissions rise from 5.2 billion tonnes per year in the present to 8.9 in the year 2420, with a long-term temperature increase of 4.5°C. This path is supported by a carbon tax that rises from $125 to $267 per tonne over the period under discussion. In contrast, the climate rights optimum - in which future generations are compensated for reductions in environmental quality - restricts emissions to 4.3 billion tonnes per year in the present and 2.6 in the year 2420. This scenario, in which the carbon tax rises from $174 to $956 per tonne over the next four centuries, gives rise to a long-term increase in mean global temperature of only 2.4°C. Under the "high damage" assumption embodied in equation (20), the assignment of climate rights between polluters and future generations has important implications for both the distribution of economic welfare and the specifics of environmental policy. A final point concerning the numerical results contained in Table 2 is warranted at this stage. Why does the climate stabilization path support higher levels of long-term consumption than the climate rights optimum that (by definition) Pareto dominates it?
The answer is that, relative to the climate rights optimum, the climate stabilization path involves excess consumption by old persons and underconsumption by the young. The intergenerational transfers that accompany the fine-tuning of carbon dioxide emissions lead agents to reallocate consumption over their finite life spans. Although total consumption decreases in this transaction, a greater portion of consumption occurs in youth, when satisfaction remains undiminished by the ravages of discounting. In a similar vein, the substantial differences in the consumption levels that arise in the laissez faire path and pollution rights optimum correspond to relatively modest differences in life-cycle utility. In this case, individuals shift consumption from youth to old age under the policies used to internalize the costs of carbon dioxide emissions. These examples suggest that average consumption levels provide an approximate but sometimes misleading gauge of consumer welfare. In overlapping generations models, both the level of
consumption and its allocation between age cohorts must be considered in rendering welfare comparisons.

SUMMARY AND CONCLUSIONS

Standard models of climate-economy interactions identify "optimal" rates of greenhouse gas emissions through appeals to a social welfare function in which the utility of future generations is discounted relative to the present. The politics of climate change, in contrast, focuses largely on the conflict between the right of future generations to enjoy an undiminished natural environment and the right of polluters to regulatory freedom. Since conventional models rest on a framework that departs significantly from decision-makers' expressed views regarding the normative principles that are relevant to policy formulation, an important mismatch exists between the tools of policy analysis and the information requirements of decision-makers. The present paper addresses this gap through the consideration of a numerically calibrated overlapping generations model. Its principal conclusions may be summarized as follows. First, the assignment of climate rights between polluters and future generations has quite significant implications for the evolution of living standards in the world economy. Granting unconstrained pollution rights to present economic actors favors high rates of short-term consumption while imposing significant costs on future generations. By way of comparison, stabilizing current climatic conditions reduces short-term consumption by 7% but increases long-term consumption by 4% under standard assumptions concerning the costs and benefits of climate change. This finding suggests that the familiar claim that excess rates of greenhouse gas emissions control thwart the interests of both present and future generations is only half true. Indeed, climate stabilization conserves an important form of environmental capital that contributes to long-run economic vitality.
Second, the analysis finds that efforts to fine-tune greenhouse gas emissions to achieve economic efficiency can enhance the welfare of both present and future society if they are accompanied by intergenerational transfers that ensure the fair sharing of net benefits. The efficiency gains generated by such policies, however, are small in comparison with the distributional impacts of alternative assignments of climate rights. Present producers and consumers are better off in a laissez faire economy than in a world in which the costs and benefits of greenhouse gas emissions are carefully balanced and in which future generations are duly compensated for the adverse impacts of climate change. Future generations would prefer the stabilization of climatic conditions to the
case where greenhouse gas emissions are set at efficient levels and where polluters are compensated for the costs of emissions abatement. This finding is sharpened by arguments that effective compensation schemes may be institutionally infeasible over the very long time horizons relevant to climate change policy. According to Lind (1995), the efficiency reasoning that supports cost-benefit analysis is undercut by the fact that actual climate change policies will most likely impose costs on some groups so that others may benefit. Such pure tradeoffs are more appropriately addressed through the language of rights and entitlements than through the conventional approach of maximizing net social benefits while abstracting away from questions of distributional fairness. Finally, the study sheds light on the applicability of the Coase Theorem to climate stabilization policy. Under standard assumptions regarding the costs and benefits of climate change - i.e. those developed in Nordhaus' (1994) Managing the Global Commons - efficient rates of greenhouse gas emissions control are not greatly affected by the distribution of climate rights between polluters and posterity. In both the pollution rights optimum and the climate rights optimum, carbon dioxide emissions rise from about 8.5 to 23.8 billion tonnes-carbon per year between 2000 and 2420 with a long-run increase of 7.1°C. Quite different results emerge, however, in a "high damage" scenario in which a 3°C temperature rise leads to a 13% loss in gross world output - a figure that is one order of magnitude larger than Nordhaus' base estimate. Under this assumption, granting climate rights to current polluters leads to an efficient path in which carbon emissions rise from 5.2 billion tonnes in the year 2000 to 8.9 in 2420, with a long-run temperature increase of 4.5°C. 
In contrast, emissions fall from 4.3 to 2.6 billion tonnes per year in the case where future generations are compensated for the costs imposed by greenhouse gas emissions. In this latter case, the long-run increase in mean global temperature is limited to 2.4°C. These two scenarios differ profoundly in terms of their implications for the welfare of future generations. In the pollution rights optimum, per capita consumption increases from $3,942 to $8,538 over the period of analysis. Relative to this baseline, the climate rights optimum involves a 3% decrease in short-run consumption (to $3,824) and a 66% increase in the long-term future (to $14,201). The empirical salience of the "high damage" scenario is of course largely beyond the scope of this chapter. The parameters that surround this scenario, however, are within the range of respected expert opinion. This fact suggests that the assignment of climate rights between polluters and posterity is crucial to debates over the interplay between climate change and the world
economy. The analysis presented here provides a framework for analyzing this important policy issue.

NOTES

1. Paper presented to the Pew Workshop on the Economics and Integrated Assessment of Climate Change, Washington, July 21-22, 1999. The author thanks Stephen DeCanio, Robert Lind, Alan Sanstad, Gary Wolff, and two anonymous reviewers for stimulating comments. Financial support was provided by the Pew Center on Global Climate Change.
2. Monetary units are denominated in constant-value 1989 U.S. dollars throughout this analysis.
REFERENCES

Auerbach, A. J., & Kotlikoff, L. J. (1987). Dynamic Fiscal Policy. New York: Cambridge University Press.
Barro, R. J., & Sala-i-Martin, X. (1995). Economic Growth. New York: McGraw-Hill.
Broome, J. (1992). Counting the Cost of Global Warming. Cambridge: White Horse Press.
Brown, P. G. (1998). Toward an Economics of Stewardship: The Case of Climate. Ecological Economics, 26, 11-21.
Coase, R. H. (1960). The Problem of Social Cost. Journal of Law and Economics, 3, 1-44.
Gray, C. B., & Rivkin, D. B. (1991). A No Regrets Environmental Policy. Foreign Policy, 83, 47-64.
Howarth, R. B. (1996). Climate Change and Overlapping Generations. Contemporary Economic Policy, 14, 100-111.
Howarth, R. B. (1998). An Overlapping Generations Model of Climate-Economy Interactions. Scandinavian Journal of Economics, 100, 575-591.
Howarth, R. B., & Monahan, P. A. (1996). Economics, Ethics, and Climate Policy: Framing the Debate. Global Planetary Change, 11, 100-111.
Howarth, R. B., & Norgaard, R. B. (1995). Intergenerational Choices under Global Environmental Change. In: D. W. Bromley (Ed.), Handbook of Environmental Economics. Oxford: Blackwell.
Hurd, M. D. (1987). Savings of the Elderly and Desired Bequests. American Economic Review, 77, 298-312.
Intergovernmental Panel on Climate Change (IPCC) (1996a). Climate Change 1995: Economic and Social Dimensions of Climate Change. New York: Cambridge University Press.
Intergovernmental Panel on Climate Change (IPCC) (1996b). Climate Change 1995: The Science of Climate Change. New York: Cambridge University Press.
Kotlikoff, L. J., & Summers, L. H. (1981). The Role of Intergenerational Transfers in Aggregate Capital Accumulation. Journal of Political Economy, 89, 706-732.
Krutilla, J. V., & Fisher, A. C. (1985). The Economics of Natural Environments. Washington: Johns Hopkins University Press.
Lind, R. C. (1995). Intergenerational Equity, Discounting, and the Role of Cost-benefit Analysis in Evaluating Global Climate Policy. Energy Policy, 23, 379-389.
Manne, A. S. (1995). The Rate of Time Preference: Implications for the Greenhouse Debate. Energy Policy, 23, 391-394.
Marini, G., & Scaramozzino, P. (1995). Overlapping Generations and Environmental Control. Journal of Environmental Economics and Management, 29, 64-77.
Nordhaus, W. D. (1994). Managing the Global Commons: The Economics of Climate Change. Cambridge, Massachusetts: MIT Press.
Woodward, R. T., & Bishop, R. C. (1997). How to Decide When Experts Disagree: Uncertainty-Based Choice Rules in Environmental Policy. Land Economics, 73, 492-507.