INTERNATIONAL SEMINAR ON
NUCLEAR WAR AND PLANETARY EMERGENCIES 42nd Session: World Energy Crisis - Energy and Pollution: Essential Technologies for Managing the Coupled Challenges of Climate Change and Energy Security, Energy, Water, Climate, Pollution and Limits of Development in Asian Countries; Global Monitoring of the Planet - Sensitivity of Climate to Additional CO2 as Indicated by Water Cycle Feedback Issues, Climate Uncertainties Addressed by Satellites, The Basic Mathematics Needed for All Models; Pollution and Medicine - The Revolution in the Environmental Health Sciences and the Emergence of Green Chemistry; Information Security - Cyber Conflict and Cyber Stability: Finding A Path to Cyber Peace; Cultural Pollution - The Erice Science for Peace Award Scientific Session
THE SCIENCE AND CULTURE SERIES Nuclear Strategy and Peace Technology Series Editor: Antonino Zichichi
1981 -
International Seminar on Nuclear War - 1st Session: The World-wide Implications of Nuclear War
1982
International Seminar on Nuclear War - 2nd Session: How to Avoid a Nuclear War
1983
International Seminar on Nuclear War - 3rd Session: The Technical Basis for Peace
1984
International Seminar on Nuclear War - 4th Session: The Nuclear Winter and the New Defence Systems: Problems and Perspectives
1985
International Seminar on Nuclear War - 5th Session: SDI, Computer Simulation, New Proposals to Stop the Arms Race
1986
International Seminar on Nuclear War - 6th Session: International Cooperation: The Alternatives
1987
International Seminar on Nuclear War - 7th Session: The Great Projects for Scientific Collaboration East-West-North-South
1988 -
International Seminar on Nuclear War - 8th Session: The New Threats: Space and Chemical Weapons - What Can be Done with the Retired I.N.F. Missiles-Laser Technology
1989
International Seminar on Nuclear War - 9th Session: The New Emergencies
1990
International Seminar on Nuclear War - 10th Session: The New Role of Science
1991
International Seminar on Nuclear War - 11th Session: Planetary Emergencies
1991
International Seminar on Nuclear War - 12th Session: Science Confronted with War (unpublished)
1991
International Seminar on Nuclear War and Planetary Emergencies - 13th Session: Satellite Monitoring of the Global Environment (unpublished)
1992
International Seminar on Nuclear War and Planetary Emergencies - 14th Session: Innovative Technologies for Cleaning the Environment
1992
International Seminar on Nuclear War and Planetary Emergencies - 15th Session (1st Seminar after Rio): Science and Technology to Save the Earth (unpublished)
1992
International Seminar on Nuclear War and Planetary Emergencies - 16th Session (2nd Seminar after Rio): Proliferation of Weapons for Mass Destruction and Cooperation on Defence Systems
1993
International Seminar on Planetary Emergencies - 17th Workshop: The Collision of an Asteroid or Comet with the Earth (unpublished)
1993
International Seminar on Nuclear War and Planetary Emergencies - 18th Session (4th Seminar after Rio): Global Stability Through Disarmament
1994
International Seminar on Nuclear War and Planetary Emergencies - 19th Session (5th Seminar after Rio): Science after the Cold War
1995
International Seminar on Nuclear War and Planetary Emergencies - 20th Session (6th Seminar after Rio): The Role of Science in the Third Millennium
1996
International Seminar on Nuclear War and Planetary Emergencies - 21st Session (7th Seminar after Rio): New Epidemics, Second Cold War, Decommissioning, Terrorism and Proliferation
THE SCIENCE AND CULTURE SERIES Nuclear Strategy and Peace Technology
INTERNATIONAL SEMINAR ON
NUCLEAR WAR AND PLANETARY EMERGENCIES 42nd Session: World Energy Crisis - Energy and Pollution: Essential Technologies for Managing the Coupled Challenges of Climate Change and Energy Security, Energy, Water, Climate, Pollution and Limits of Development in Asian Countries; Global Monitoring of the Planet - Sensitivity of Climate to Additional CO2 as Indicated by Water Cycle Feedback Issues, Climate Uncertainties Addressed by Satellites, The Basic Mathematics Needed for All Models; Pollution and Medicine - The Revolution in the Environmental Health Sciences and the Emergence of Green Chemistry; Information Security - Cyber Conflict and Cyber Stability: Finding A Path to Cyber Peace; Cultural Pollution - The Erice Science for Peace Award Scientific Session
UE. Majorana" Centre for Scientific Culture Erice, Italy, 19-24 August 2009
Series Editor and Chairman: A. Zichichi
Edited by R. Ragaini
World Scientific
NEW JERSEY · LONDON · SINGAPORE · BEIJING · SHANGHAI · HONG KONG · TAIPEI · CHENNAI
Published by World Scientific Publishing Co. Pte. Ltd. 5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.
INTERNATIONAL SEMINAR ON PLANETARY EMERGENCIES - 42ND SESSION: WORLD ENERGY CRISIS - ENERGY & POLLUTION: ESSENTIAL TECHNOLOGIES FOR MANAGING THE COUPLED CHALLENGES OF CLIMATE CHANGE AND ENERGY SECURITY, ENERGY, WATER, CLIMATE, POLLUTION & LIMITS OF DEVELOPMENT IN ASIAN COUNTRIES; GLOBAL MONITORING OF THE PLANET - SENSITIVITY OF CLIMATE TO ADDITIONAL CO2 AS INDICATED BY WATER CYCLE FEEDBACK ISSUES, CLIMATE UNCERTAINTIES ADDRESSED BY SATELLITES, THE BASIC MATHEMATICS NEEDED FOR ALL MODELS; POLLUTION AND MEDICINE - THE REVOLUTION IN THE ENVIRONMENTAL HEALTH SCIENCES AND THE EMERGENCE OF GREEN CHEMISTRY; INFORMATION SECURITY - CYBER CONFLICT AND CYBER STABILITY: FINDING A PATH TO CYBER PEACE; CULTURAL POLLUTION - THE ERICE SCIENCE FOR PEACE AWARD SCIENTIFIC SESSION
Copyright © 2010 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN-13 978-981-4327-19-0 ISBN-10 981-4327-19-0
Printed in Singapore by B & Jo Enterprise Pte Ltd
1997 -
International Seminar on Nuclear War and Planetary Emergencies - 22nd Session (8th Seminar after Rio): Nuclear Submarine Decontamination, Chemical Stockpiled Weapons, New Epidemics, Cloning of Genes, New Military Threats, Global Planetary Changes, Cosmic Objects & Energy
1998 -
International Seminar on Nuclear War and Planetary Emergencies - 23rd Session (9th Seminar after Rio): Medicine & Biotechnologies, Proliferation & Weapons of Mass Destruction, Climatology & El Niño, Desertification, Defence Against Cosmic Objects, Water & Pollution, Food, Energy, Limits of Development, The Role of Permanent Monitoring Panels
1999 -
International Seminar on Nuclear War and Planetary Emergencies - 24th Session: HIV/AIDS Vaccine Needs, Biotechnology, Neuropathologies, Development Sustainability - Focus Africa, Climate and Weather Predictions, Energy, Water, Weapons of Mass Destruction, The Role of Permanent Monitoring Panels, HIV Think Tank Workshop, Fertility Problems Workshop
2000 -
International Seminar on Nuclear War and Planetary Emergencies - 25th Session: Water - Pollution, Biotechnology - Transgenic Plant Vaccine, Energy, Black Sea Pollution, Aids - Mother-Infant HIV Transmission, Transmissible Spongiform Encephalopathy, Limits of Development - Megacities, Missile Proliferation and Defense, Information Security, Cosmic Objects, Desertification, Carbon Sequestration and Sustainability, Climatic Changes, Global Monitoring of Planet, Mathematics and Democracy, Science and Journalism, Permanent Monitoring Panel Reports, Water for Megacities Workshop, Black Sea Workshop, Transgenic Plants Workshop, Research Resources Workshop, Mother-Infant HIV Transmission Workshop, Sequestration and Desertification Workshop, Focus Africa Workshop
2001 -
International Seminar on Nuclear War and Planetary Emergencies - 26th Session: AIDS and Infectious Diseases - Medication or Vaccination for Developing Countries; Missile Proliferation and Defense; Tchernobyl - Mathematics and Democracy; Transmissible Spongiform Encephalopathy; Floods and Extreme Weather Events - Coastal Zone Problems; Science and Technology for Developing Countries; Water - Transboundary Water Conflicts; Climatic Changes - Global Monitoring of the Planet; Information Security; Pollution in the Caspian Sea; Permanent Monitoring Panels Reports; Transmissible Spongiform Encephalopathy Workshop; AIDS and Infectious Diseases Workshop; Pollution Workshop
2002 -
International Seminar on Nuclear War and Planetary Emergencies - 27th Session: Society and Structures: Historical Perspectives - Culture and Ideology; National and Regional Geopolitical Issues; Globalization - Economy and Culture; Human Rights - Freedom and Democracy Debate; Confrontations and Countermeasures: Present and Future Confrontations; Psychology of Terrorism; Defensive Countermeasures; Preventive Countermeasures; General Debate; Science and Technology: Emergencies; Pollution, Climate - Greenhouse Effect; Desertification, Water Pollution, Algal Bloom; Brain and Behaviour Diseases; The Cultural Emergency: General Debate and Conclusions; Permanent Monitoring Panel Reports; Information Security Workshop; Kangaroo Mother's Care Workshop; Brain and Behaviour Diseases Workshop
2003 -
International Seminar on Nuclear War and Planetary Emergencies - 29th Session: Society and Structures: Culture and Ideology - Equity - Territorial and Economics - Psychology - Tools and Countermeasures - Worldwide Stability - Risk Analysis for Terrorism - The Asymmetric Threat - America's New "Exceptionalism" - Militant Islamist Groups Motives and Mindsets - Analysing the New Approach - The Psychology of Crowds - Cultural Relativism - Economic and Socio-economic Causes and Consequences - The Problems of American Foreign Policy - Understanding Biological Risk - Chemical Threats and Responses - Bioterrorism - Nuclear Survival Criticalities - Responding to the Threats - National Security and Scientific Openness - Working Groups Reports and Recommendations
2003 -
International Seminar on Nuclear War and Planetary Emergencies - 30th Session: Anniversary Celebrations: The Pontifical Academy of Sciences 400th - The 'Ettore Majorana' Foundation and Centre for Scientific Culture 40th - H.H. John Paul II Apostolate 25th - Climate/Global Warming: The Cosmic Ray Effect; Effects on Species and Biodiversity; Human Effects; Paleoclimate Implications; Evidence for Global Warming - Pollution: Endocrine Disrupting Chemicals; Hazardous Material; Legacy Wastes and Radioactive Waste Management in USA, Europe, Southeast Asia and Japan - The Cultural Planetary Emergency: Role of the Media; Intolerance; Terrorism; Iraqi Perspective; Open Forum Debate - AIDS and Infectious Diseases: Ethics in Medicine; AIDS Vaccine Strategies - Water: Water Conflicts in the Middle East - Energy: Developing Countries; Mitigation of Greenhouse Warming - Permanent Monitoring Panels Reports - Workshops: Long-Term Stewardship of Hazardous Material; AIDS Vaccine Strategies and Ethics
2004 -
International Seminar on Nuclear War and Planetary Emergencies - 31st Session: Multidisciplinary Global Approach of Governments and International Structures: Societal Response - Scientific Contributions to Policy - Economics - Human Rights - Communication - Conflict Resolution - Cross-Disciplinary Responses to CBRN Threats: Chemical and Biological Terrorism - Co-Operation Between Russia and the West - Asymmetrical Conflicts - CBW Impact - Cross-Disciplinary Challenges to Emergency Management, Media Information and Communication: Role of Media in Global Emergencies - Emergency Responders - Working Groups' Reports and Recommendations
2004 -
International Seminar on Nuclear War and Planetary Emergencies - 32nd Session: Limits of Development: Migration and Cyberspace; in Europe; Synoptic European Overview; From and Within Asia; Globalization - Climate: Global Warming; a Chronology; Simple Climate Models; Energy and Electricity Considerations - T.S.E.: CJD and Blood Transfusion; BSE in North America; Gerstmann-Sträussler-Scheinker Disease - The Cultural Emergency: Innovations in Communications and IT - Cosmic Objects: Impact Hazard; Close Approaches; Asteroid Deflection; Risk Assessment and Hazard Reduction; Hayabusa and Follow Up - Aids and Infectious Diseases: Ethics in Medicine; International Co-operation; Laboratory Biosecurity Guidelines; Georgian Legislation; Biosecurity Norms and International Organizations, Legal Measures Against Biocrimes - Water and Pollution: Cycle Overview; Beyond Cost and Price; Requirements in Rural Iran; Isotope Techniques; Clean and Reliable Water for the 21st Century - Permanent Monitoring Panels Reports - Workshops: Global Biosecurity; Cosmic Objects
2005 -
International Seminar on Nuclear War and Planetary Emergencies - 34th Session: Energy: Nuclear and Renewable Energy; Energy Technologies for the 21st Century; Repositories Development; Nuclear Power in Europe and in Asia; The Future of Nuclear Fusion - Climate: Global Warming; Celestial Climate Driver; Natural and Anthropogenic Contributions; Climate Data and Comparison with Models; Understanding Common Climate Claims - AIDS and Infectious Diseases: New Threats from Infectious Agents - SARS Epidemic; Vaccines Development; Transmissible Spongiform Encephalopathies Update - Limits of Development: International Points of View on Migration - Pollution: Science and Technology; Subsurface Laser Drilling - Desertification: A Global Perspective; Integrated Approaches - Disarmament and Cultural Emergencies: A WFS Achievement in China; Non-Proliferation - Permanent Monitoring Panel Reports - Workshops: Energy; Information Security; Building Resilience Associated with the Third Meeting on Terrorism
2006 -
International Seminar on Nuclear War and Planetary Emergencies - 36th Session: Energy: Global Nuclear Power Future; Global Monitoring of the Planet - Proliferation: Nuclear Weapons; AIDS and Infectious Diseases: Avian Flu - Global Health; Climatology: Global Warming/Aerosols and Satellites; Pollution: Plastic Contaminants in Water; Information Security: Relevance of Cyber Security; Limits of Development: Development of Sustainability; Defence Against Cosmic Objects; WFS General Meeting: Cultural Emergency - Focus: Terrorism; Permanent Monitoring Panel Reports; Limits of Development Permanent Monitoring Panel Meeting; World Energy Monitoring Workshop.
2007 -
International Seminar on Nuclear War and Planetary Emergencies - 38th Session: World Energy Crisis; Managing Climate Change; Mitigation of Greenhouse Gases; Geoengineering & Adaptation; Theoretical Alternatives to Climate Modelling; US Missile Defence Shield; Global Monitoring of the Planet; Life Cycle Nuclear Energy Environmental Issues; The Epidemic of Alzheimer; Infectious Agents and Cancer
2008 -
Energy: Nuclear Power Present and Future; Sustainability of Biofuels; Resolving the Nuclear Waste - Climatology Model and Statistics; Ozone and Climate Change Interaction; Spatio-Temporal Field of Atmospheric CO2; Forest Policies - Medicine: Vector-Borne Diseases; Screening Technology - Pollution: Air-Borne Particulates - Global Monitoring of the Planet: Disarmament and Non-Proliferation Regime; The Crisis in Internet Security; The Northern Sea Route - The Erice Science for Peace Award Scientific Session
2009 -
World Energy Crisis - Energy & Pollution: Essential Technologies for Managing the Coupled Challenges of Climate Change and Energy Security, Energy, Water, Climate, Pollution & Limits of Development in Asian Countries; Global Monitoring of the Planet - Sensitivity of Climate to Additional CO2 as Indicated by Water Cycle Feedback Issues, Climate Uncertainties Addressed by Satellites, The Basic Mathematics Needed for all Models; Pollution and Medicine - The Revolution in the Environmental Health Sciences and the Emergence of Green Chemistry; Information Security - Cyber Conflict and Cyber Stability: Finding a Path to Cyber Peace; Cultural Pollution - The Erice Science for Peace Award Scientific Session
CONTENTS
1. OPENING SESSION Antonino Zichichi Why Science is Needed for the Culture of the Third Millennium-The Motor for Progress
3
Nicholas P. Samios Acceptance Remarks on Receiving the 2009 Gian Carlo Wick Gold Medal Award
37
Honglie Sun Glacial Retreat and Its Impact in Tibetan Plateau Under Global Warming
39
Yuri Antonovitch Izrael Climate Stabilization on the Basis of Geo-Engineering Technologies
49
Herman H. Shugart Modeling Forest Ecosystems, Their Response to and Interaction with Global Climate Change
57
Jan Szyszko Forest Policies, Carbon Sequestration and Biodiversity Protection
67
Henning Wegener and William Barletta Avoiding Disaster: Book Presentation
81
2. INFORMATION SECURITY FOCUS: CYBER CONFLICTS AND CYBER STABILITY-FINDING A PATH TO CYBER PEACE Henning Wegener Cyber Conflict vs. Cyber Stability: Finding a Path to Cyber Peace
85
Hamadoun I. Touré Advancing the Global Cybersecurity Agenda and Promoting Cyberstability Globally
87
Mohd Noor Amin Bridging the Global Gaps in Cyber Security
91
Jody R. Westby Cyber War vs. Cyber Stability
97
John G. Grimes Cyber Conflict vs. Cyber Security: Finding a Path to Peace
105
Rick Wesson Information Security, Ensembles of Experts
109
Jacques Bus Cyber Conflict vs. Cyber Stability: EU and Multi-National Collaboration
115
Jody Westby and William Barletta Erice Declaration on Principles for Cyber Stability and Cyber Peace
119
3. POLLUTION FOCUS: INTEGRATING ENVIRONMENTAL HEALTH RESEARCH AND CHEMICAL INNOVATION
John Peterson Myers Fomenting New Opportunities to Protect Human Health
123
John C. Warner Green Chemistry: A Necessary Step to a Sustainable Future
129
Jerrold J. Heindel Health Impact of Environmental Chemicals: Need for Green Chemistry
135
Terry Collins Moving the Chemical Enterprise Toward Sustainability: Key Issues
143
4. ENERGY & CLIMATE FOCUS: ESSENTIAL TECHNOLOGIES FOR MODERATING CLIMATE CHANGE AND IMPROVING ENERGY SECURITY CarlO. Bauer Balancing Perspectives on Energy Supply, Economics, and the Environment
151
Edward S. Rubin The Outlook for Power Plant CO2 Capture
157
Wolfgang Eichhammer Making Rapid Transition to an Energy System Centered on Energy Efficiency and Renewables Possible
175
Giorgio Simbolotti Beyond Emerging Low-Carbon Technologies to Face Climate Change?
197
Lee Lane, W. David Montgomery and Anne E. Smith Institutions for Developing New Climate Solutions
205
Michael C. MacCracken Moderating Climate Change by Limiting Emissions of Both Short- and Long-Lived Greenhouse Gases
225
Masao Tamada Current Status of Technology for Collection of Uranium from Seawater
243
Roger W. Bentley An Explanation of Oil Peaking
253
Peter Jackson The Future of Global Oil Supply: Understanding the Building Blocks
271
Rodney F. Nelson The Importance of Technology-The Constant Wild Card
283
Maw-Kuen Wu Recent Scientific Development in Taiwan in Response to Global Climate Change
305
5. CLIMATE FOCUS: GLOBAL WARMING AND GREENHOUSE GASES
Mikhail I. Antonovsky Exponential Analysis in the Problem of the Assessment of the Contribution of Greenhouse Gases in Global Warming
313
6. ENERGY, CLIMATE, POLLUTION AND LIMITS OF DEVELOPMENT FOCUS: ADVANCED TECHNOLOGIES AND STRATEGIES IN CHINA FOR MEETING THE ENERGY, ENVIRONMENT AND ECONOMY PREDICAMENT IN A GREENHOUSE CONSTRAINED SOCIETY Mark D. Levine Myths and Realities about Energy and Energy-Related CO2 Emissions in China
329
Zhang Xiliang Technologies and Policies for the Transition to Low Carbon Energy System in China
335
Mingyuan Li Assessment of CO2 Storage Potential in Oil/Gas-Bearing Reservoirs in Songliao Basin of China
357
Yuan Daoxian Carbon Cycle in Karst Processes
369
Jie Zhuang and Gui-Rui Yu Bioenergy in China: A Grand Challenge for Economic and Environmental Sustainability
387
Jun Xia Screening for Climate Change Adaptation: Water Problem, Impact and Challenges in China
397
7. CLIMATE & DATA FOCUS: SIGNIFICANT CLIMATE UNCERTAINTIES ADDRESSED BY SATELLITES John A. Haynes NASA Satellite Observations for Climate Research and Applications for Public Health
407
Judit M. Pap Climate Insights from Monitoring Solar Energy Output
415
8. CLIMATE & CLOUDS FOCUS: SENSITIVITY OF CLIMATE TO ADDITIONAL CO 2 AS INDICATED BY WATER CYCLE FEEDBACK ISSUES William Kininmonth A Natural Limit to Anthropogenic Global Warming
431
Richard S. Lindzen and Yong-Sang Choi On the Observational Determination of Climate Sensitivity and Its Implications
445
Garth W. Paltridge Two Basic Problems of Simulating Climate Feedbacks
463
9. CLIMATE WITHOUT COMPUTER SIMULATION FOCUS: MATHEMATICS, PHYSICS, AND CLIMATE Kyle L. Swanson What is the Climate Change Signal?
471
Christopher Essex A Key Open Question of Climate Forecasting
481
10. CLIMATE AND HEALTH FOCUS: WINDBLOWN DUST Mark B. Lyles Medical Geology: Dust Exposure and Potential Health Risks in the Middle East
497
Dale Griffin Climate Change and Climate Systems Influence and Control the Atmospheric Dispersion of Desert Dust: Implications for Human Health
503
11. SCIENCE & TECHNOLOGY FOCUS: WMD PROLIFERATION-ENERGY OF THE FUTUREMATHEMATICS & DEMOCRACY Gregory Canavan Remote Detection with Particle Beams
511
Lowell Wood Exploring the Italian Navigator's New World: Toward Economic, Full-Scale, Low-Carbon, Conveniently-Available, Proliferation-Robust, Renewable Energy Resources
523
K. C. Sivaramakrishnan The Mathematics of Democracy in South Asia
543
12. WFS GENERAL MEETING PMP REPORTS-DEBATE AND CONCLUSIONS Lord John Alderdice Permanent Monitoring Panel on Motivations for Terrorism
551
Franco M. Buonaguro AIDS and Infectious Diseases PMP
555
Nathalie Charpak Mother and Child PMP
559
Christopher D. Ellis Permanent Monitoring Panel on Limits of Development
573
Lorne Everett Pollution Permanent Monitoring Panel: Annual Report
579
Charles McCombie Multinational Repositories: Recent Developments and 2010 Session and Workshop Proposals
583
William Fulkerson, Carmen Difiglio, Bruce Stram and Mark Levine Energy PMP Report
589
Sally Leivesley Report of the Permanent Monitoring Panel for the Mitigation of Terrorist Acts: PMP-MTA
599
William A. Sprigg Permanent Monitoring Panel on Climate Activity Report
605
Henning Wegener and Jody R. Westby Permanent Monitoring Panel on Information Security Report from the Co-Chairs
609
13. INFORMATION SECURITY PANEL MEETING World Federation of Scientists: Permanent Monitoring Panel on Information Security Erice Declaration on Principles for Cyber Stability and Cyber Peace
613
World Federation of Scientists: Permanent Monitoring Panel on Information Security Top Cyber Security Problems that need Resolution to Address Communications
615
World Federation of Scientists: Permanent Monitoring Panel on Information Security Quest for Cyber Peace
621
14. LIMITS OF DEVELOPMENT PANEL MEETING Juan Manuel Borthagaray and Andres Borthagaray About Questions to be Discussed on Occasion of the 2009 Erice Meeting of the PMP Limits of Development: The Situation in Argentina
627
Alberto Gonzalez-Pozo Sustainable Development in Mexico: Facing the Multi-Headed Hydra
639
15. MITIGATION OF TERRORIST ATTACKS MEETING Richard Wilson Permanent Monitoring Panel - Mitigation of Terrorist Acts (PMP-MTA) Workshop Agenda
647
Friedrich Steinhäusler Development of CBRN Event Mitigation
649
Annette L. Sobel One Science for CBRN Mitigation
657
Richard Wilson The Need for a Corps of Radiation Workers for Immediate Assignment
661
Ramamurti Rajaraman India's Response to the Prospect of WMD Terrorism
669
Vasily Krivokhizha Politization in the Process of International Cooperation to Mitigate Nuclear Terrorism: Some Dubious Results
677
Robert V. Duncan Immediate Communications in the CBRN Environment
691
Richard L. Garwin Immediate Evaluation of Radiological and Nuclear Attacks
693
Richard Wilson Establishment of a Scientifically-Informed Rapid Response System
705
16. ENERGY PANEL MEETING Akira Miyahara Status of ITER Broader Approach Activities
711
Akira Miyahara Topics of Energy Research in Japan
713
Hisham Khatib Impact of the Financial Crisis of 2008 on World Energy
715
17. GREEN CHEMISTRY WORKSHOP Evan S. Beach and Paul T. Anastas Plastics Additives and Green Chemistry
721
Nicolas Olea Plastics, Plasticizers and Consumer Products
729
Bruce Blumberg, Felix Grün and Severine Kirchner Organotins are Potent Inducers of Vertebrate Adipogenesis: The Case for Obesogens
737
Wim Thielemans Bio-Based Polymers: A Green Chemistry Perspective
747
Karen Peabody O'Brien Revolutionary Sciences: Green Chemistry and Environmental Health
757
Frederick S. vom Saal, Julia A. Taylor, Paola Palanza and Stefano Parmigiani The High-Volume Hormonally Active Chemical Bisphenol A: Human Exposure, Health Hazards and Need to Find Alternatives
763
18. AIDS AND INFECTIOUS DISEASES Franco M. Buonaguro 2009 Progress Report of the MCD-217 Project and 2010 Research Project, East-Africa AIDS Research Center at the Uganda Virus Research Institute (Uvri), Entebbe, Uganda
775
19. SEMINAR PARTICIPANTS Seminar Participants
783
20. ETTORE MAJORANA ERICE SCIENCE FOR PEACE PRIZE - SCIENTIFIC SESSION Why Science is Needed for the Culture of the Third Millennium Antonio M. Battro The Impact of Digital Technologies Among Children of Developing Countries
797
Richard Wilson The Crucial Role of Science (and Scientists) in Public Affairs: A Suggestion for Coping with Terrorism
799
Christopher Essex When Scientific Technicalities Matter
807
Anastasios Tsonis The Use and Misuse of Science-An Example
813
Robert Huber Innovation Cannot Be Planned
817
Henning Wegener Why Science is Needed for the Culture of the Third Millennium
821
Albert Arking Global Warming and the Energy Crisis: How Science Can Solve Both Problems
825
Carmen Difiglio Co-Benefits of Climate Policies: The Role of Science
835
Zenonas Rokus Rudzikas Why Science is Needed for the Culture of the Third Millennium: Historical Experience of a Small Country (Lithuania)
839
Maw-Kuen Wu Means to Propagate our Ideas in Scientific and Decision-Making Circles
845
Bruno Maraviglia The Human Brain Function Investigated by New Physical Methods
855
Jan Szyszko Quality of Life-How to use Ecological Science for Sustained Development
861
M.J. Tannenbaum Fundamental Science and Improvement of the Quality of Life - Space Quantization to MRI
865
Frank L. Parker Improving the Chances for Peace by Providing Almost Limitless Energy
877
Lord John Alderdice A Science of the Irrational Can Help Protect Science from Irrational Attacks
889
SESSION 1 OPENING SESSION
THE INTERNATIONAL SEMINARS ON PLANETARY EMERGENCIES AND ASSOCIATED MEETINGS - 42nd SESSION
PROFESSOR ANTONINO ZICHICHI
CERN, Geneva, Switzerland; University of Bologna, Italy; and Centro Enrico Fermi, Italy
OPENING SESSION: WHY SCIENCE IS NEEDED FOR THE CULTURE OF THE THIRD MILLENNIUM - THE MOTOR FOR PROGRESS

Dear Colleagues, Ladies and Gentlemen, I welcome you all to this 42nd Session of the International Seminars on Nuclear War and Planetary Emergencies and declare the Session to be open. Why is this distinguished group of Interdisciplinary Scientists at the Ettore Majorana Foundation and Centre for Scientific Culture (EMFCSC) in Erice? Because we care about the consequences of Environmental and Cultural Pollution for the future of the human race. We want to overcome the danger of an Environmental Holocaust: spending enormous resources - billions of Dollars/Euros - on the solution of problems whose origin is believed to be known but is not. In our action we have the support of distinguished members of the Italian Government:

• The Minister for Foreign Affairs, On. Franco Frattini
• The Minister of Culture, On. Sandro Bondi
• The Minister of Science and University, On. Maria Stella Gelmini

and of the Parliament:

• The President of the Senate, On. Renato Schifani
• The President of the Senate Environment Commission, On. Antonio D'Ali
• The President of the Government, On. Silvio Berlusconi and his deputy Dr. Gianni Letta
As you probably know, Dr. Gianni Letta has long been engaged in modern culture. He had the courage to create in Italy a scientific page in the newspaper he was directing, "Il Tempo" (The Times). The success of this initiative induced the most popular newspapers to open their doors to scientific culture. I have been working with him for two decades. Let me go back to the Environmental and Cultural Pollution. We are confronted with a very difficult task since, at the origin of the Environmental and Cultural Pollution, there is a lack of Knowledge, which in our days means a lack of Scientific Culture. This is why we have to convince the great public that Science is needed in the Culture of the third millennium.
We scientists cannot remain silent when the great public shows a vivid interest in topics such as:

• Global warming
• The energy crisis
• Information security
• The environment
• Intelligent Design
• Evolution

We have to convince the great public that the solution to all these problems requires clarity and rigour.

EXAMPLES OF INTERVENTIONS IN SOME TOPICS OF VIVID INTEREST TO THE GREAT PUBLIC
• Rigorous Logic in the Theory of Evolution, A. Zichichi, Pontificia Academia Scientiarum, Plenary Session on "Scientific Insights into the Evolution of the Universe and of Life", Vatican City (2008).
• Big Bangs and Galilean Science, A. Zichichi, Il Nuovo Cimento, Vol. 124 B, N. 2, Italian Physical Society (January 2009).
• Why Science is Needed for the Culture of the Third Millennium: The Motor for Progress, A. Zichichi, published in Public Service Review European Union, 18, UK (2009).
• Lettere agli inglesi dall'Italia (The Cradle of Democracy and the Truth about Italy), A. Zichichi, Il Giornale (29 July 2009), translated by Barbara Zichichi.
• Language, Logic and Science, A. Zichichi, Proceedings of the 27th Session of the International Seminar on Nuclear War and Planetary Emergencies - 2002, The Science and Culture Series, World Scientific (2003).
• The Logic of Nature and Complexity, A. Zichichi, in Proceedings of the International Conference on "Quantum [un]speakables" in Commemoration of John S. Bell, International Erwin Schrödinger Institut (ESI), Universität Wien (Austria), 10-14 November 2000.
• Complexity at the Fundamental Level, A. Zichichi, DESY, Hamburg, November 2005.
• Complexity and Planetary Emergencies, A. Zichichi, in Proceedings of the 36th Session of the International Seminars on Planetary Emergencies, Erice (Italy), August 2006.
• Complexity Exists at the Fundamental Level, A. Zichichi, in Proceedings of the 2004 Erice Subnuclear Physics School "How and Where to go Beyond the Standard Model", The Subnuclear Series Vol. 42, page 251, World Scientific (2007).
• Science and Society, A. Zichichi, MIUR, Rome 2003.
These interventions are all in the direction of convincing people that the best way to study a problem, with clarity and rigour, is through Science.

EXAMPLES OF RESULTS OBTAINED

1. A few months ago, a jewel of world physics, CERN in Geneva, ran the risk of losing the support of some countries. The Italian Government - thanks to Berlusconi and Frattini - took immediate steps to avoid the onset of a negative phase that would have affected the greatest laboratory of high energy subnuclear physics existing in the world.
2. One of our flags is Science without secrets or frontiers. Berlusconi has proposed Erice for the peace negotiations between Palestinians and Israelis.

In the past, during the Cold War, we contributed to overcoming the danger of a Nuclear Holocaust in the USA-URSS confrontation. How? With a great alliance between Science and Cultural-Political Leaders such as the most beloved President of the Italian Republic, Sandro Pertini, and the most beloved Pope in the History of the Catholic Church, John Paul II.
The President of the Italian Republic, Sandro Pertini, a strong supporter of the Ettore Majorana Foundation and Centre for Scientific Culture, receiving the Erice Statement.
30 March 1979.
Our community of Interdisciplinary Scientists has been able to contribute to overcoming the danger of the Nuclear Holocaust, whose worldwide-known symbol is the fall of the Berlin Wall. Here follows a sequence of scientific leaders to whom we have dedicated our buildings:
• Patrick M.S. Blackett, which is the starting point (my youth).
• Isidor I. Rabi, a decisive step towards the creation of the Interdisciplinary Scientific Community.
• Eugene P. Wigner, a witness of the crucial steps towards the fall of the Berlin Wall.
• Victor F. Weisskopf, whose support was decisive when this Institution was created at CERN-Geneva.

What we have been able to do in the past decades are our credentials.
ETTORE MAJORANA FOUNDATION AND CENTRE FOR SCIENTIFIC CULTURE
DATA ON ACTIVITIES SINCE 1963: 123 SCHOOLS, 1,497 COURSES, 103,484 PARTICIPANTS (124 OF WHICH NOBEL LAUREATES) COMING FROM 932 UNIVERSITIES AND LABORATORIES OF 140 NATIONS.
And now the 42nd Session of the International Seminars on Nuclear War and Planetary Emergencies. Clarity and Rigour are needed to fight cultural pollution. As said before, the best source of Clarity and Rigour is Science.
THIS IS WHY SCIENCE IS NEEDED FOR THE CULTURE OF THE THIRD MILLENNIUM. MOREOVER, SCIENCE IS THE MOTOR FOR PROGRESS
[Slide: SCIENCE & POLITICAL VIOLENCE - Unification of all Forces of Nature; 15 Classes of Planetary Emergencies, Total number: 63.]
I am pleased to let you know that our actions have given interesting results, as testified by the letter from the President of the Italian Senate Environment Commission, Dr. Antonio D'Ali.
[Facsimile of the original letter, in Italian, from Sen. Antonio d'Ali, President of the XIII Commission (Territory, Environment, Environmental Assets) of the Senate of the Italian Republic, Rome, 20 August 2009, addressed to Professor Antonino Zichichi, President of the Ettore Majorana Centre for Scientific Culture, Erice. A translation of its key statements follows.]
Let me read a translation of a few key statements by President D'Ali.
"Professor Antonino Zichichi President of the Ettore Majorana Foundation and Centre for Scientific Cuiture-EMFCSC, Erice As you know, I consider the interaction between the political world and the scientific one to be essential. I have largely based my first year of work as president of the Environment and Territory Commission of the Senate of the Italian Republic on this principle. Following the indications of the IPCC, many citizens' resources the world over would be soon spent to fight more against the causes rather than the consequences of a "global warming" estimated at a planetary catastrophe scale, with the risk, in my opinion, to take decisions that could be among the most senseless in the history of humankind. Therefore today, at the eve of COPlS, the involvement and the contribution of free science is more than ever needed, with its serene, but rigorous analyses of the climate dynamics of the planet. I then hope that your seminar will particularly focus this year also on the correct actions to be suggested to Governments . I am not asking for an interference from your side but for an assumption of responsibility. Let me personally stress the determination of the Parliamentary Commission that I have the honour to chair to carryon in all institutional seats the courageous debate initiated few months ago and to formally start a phase of review and discussion of the document that you will hopefully issue at the end of your work. Thank you for your attention. With my deep esteem and gratitude, Antonio d'Ali" WHY SCIENCE IS NEEDED FOR THE CULTURE OF THE THIRD MILLENNIUM: THE MOTOR FOR PROGRESS The Culture of our time is based on the first great achievement of our intellect: Language. Rigorous Logic and Science must be brought into the cultural patrimony of the third millennium, since the number one enemy of humanity is Ignorance. Language means Poetry, Literature, Music, Arts, Theatre, Economy, Politics and other manifestations of the human intellect such as Philosophy; all intellectual activities that could exist even if neither rigorous Logic nor Science had ever been discovered. The first monumental construction of rigorous Logic is Euclidean geometrymore than 2,000 years ago. Rigorous Logic includes arithmetic (theory of numbers), algebra (theory of variables), analysis (theory of functions), and topology (theory of domains where the functions exist). None of these activities could be discovered if our intellectual activities were limited in the domain of Language. The next point to realize is that all activities in the domain of rigorous Logic would exist even if Science had never been discovered. In fact, Science is the only rigorous Logic which has been used in order to create the world as it is.
There are many logical theoretical structures which are not found in Nature; e.g., a space with infinite dimensions does not lead to any self-contradiction. It therefore exists from a rigorous logic (mathematical) point of view, but the space where we live has a finite number of dimensions: apparently four (three of space and one of time), but probably 43, if the Superworld exists.
Fig. 1: The unification of the fundamental forces needs the theoretical existence of the Superworld.
The convergence of the Fundamental Forces of Nature is reported by the three lines of Figure 1. For this convergence, the hypothesis that the Superworld exists is necessary. We see that the "Universe" illustrated in this figure consists of many important details. The "Universe outside" is the one which comes after the decoupling of protons, electrons and photons, when atoms started their formation, 380 thousand years after Big Bang-1. The mathematical basis, which includes the hypothesis of the Superworld, is reported in Figure 2.
Figure 2: The lines are the result of calculations executed with a supercomputer using the following system of three weakly coupled non-linear differential equations:

\mu \frac{d\alpha_i}{d\mu} = \frac{b_i}{2\pi}\,\alpha_i^2 + \sum_j \frac{b_{ij}}{8\pi^2}\,\alpha_i^2\,\alpha_j

This system describes the evolution of all phenomena, including the Superworld, from the maximum level of energy, E_GUT, to our world, which is at the minimum energy level.
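As a purely illustrative aid, and not part of the proceedings, the following minimal Python sketch integrates only the one-loop part of the quoted system, d(alpha_i)/d(ln mu) = (b_i / 2 pi) alpha_i^2, with assumed MSSM-like coefficients b = (33/5, 1, -3) and rough electroweak-scale starting values; it shows how the three inverse couplings approach a common value at high energy, which is the kind of convergence drawn by the three lines of Figures 1 and 2.

# Illustrative sketch (assumptions: one-loop running only, MSSM-like b_i,
# approximate couplings at the Z mass); not taken from the proceedings.
import numpy as np

b = np.array([33.0 / 5.0, 1.0, -3.0])         # assumed one-loop coefficients b_1, b_2, b_3
alpha_mz = np.array([0.0169, 0.0338, 0.118])  # approximate alpha_1, alpha_2, alpha_3 at M_Z
m_z = 91.19                                   # Z boson mass in GeV

def alpha(mu_gev):
    # One-loop solution: 1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i / 2 pi) ln(mu / M_Z)
    t = np.log(mu_gev / m_z)
    return 1.0 / (1.0 / alpha_mz - b / (2.0 * np.pi) * t)

for mu in (1e3, 1e8, 1e13, 2e16):
    inv = 1.0 / alpha(mu)
    print(f"mu = {mu:9.2e} GeV   1/alpha_1,2,3 = {inv[0]:6.1f} {inv[1]:6.1f} {inv[2]:6.1f}")
# Near mu ~ 2e16 GeV the three inverse couplings come close to a common value,
# which is the convergence the figure's three lines illustrate.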
The reason why Science can be an instrument for peace is the fact that Science allows one to distinguish a great achievement of the human intellect from its use, i.e., the "application" of a great achievement in real life. Science is neither good nor evil. People currently use the word "science" to mean the "use of science" (i.e., technology), which is no longer "science", just as the "use of language" is no longer "language" and the "use of logic" is no longer "logic". Let me elaborate further on this point.

An example of great achievement in Science is the discovery of the "Standard Model", i.e., the superb synthesis that can explain all phenomena of our world in terms of three Fundamental Structures (called the three families of elementary particles) and three Fundamental Forces of Nature (the electroweak, the strong subnuclear and the gravitational). An example of great achievement in Logic is the invention of the rigorous and formidable logical structure called the Infinite. Examples of great achievement in Language are the Pietà by Michelangelo, the Primavera by Botticelli, the Ninth Symphony by Beethoven, the Fifth Symphony by Mahler, the Divina Commedia by Dante - all these achievements are for mankind.

Examples of Applied Science, Logic and Language do exist. These "applications" can be for or against mankind; for example, the use of Science against mankind corresponds to war technology. The use of Logic against mankind corresponds to the invention of computer technology for the control of billions of people, thus destroying individual freedom. The use of Language against mankind corresponds to the invention of ideologies like Stalinism and Nazism. Examples that are for mankind do indeed exist in the domain of Applied Science, Language and Logic. Peace technology, and instruments to help mankind in Medicine, Agriculture and Meteorology, all correspond to Applied Science for mankind. Applied Logic for mankind corresponds to the invention of new robots to avoid dangerous and unpleasant work for man, and of specialized computer-based applications for medicine, for weather forecasting, etc. Applied Language for mankind corresponds to the performance of pieces of poetry, of music and of art, plus all other activities designed to
help increase the quality of everyday life in all sectors that would exist even if Logic and/or Science had never been discovered. The components of Applied Logic for mankind and Applied Science for mankind contribute enormously to improving the quality of life. The three pillars (Language, Logic, Science), in their applied part for man, have a common task and reach the same goal, i.e., to improve the quality of everyday life. Science, in the Immanent (therefore without appealing to any existential topic connected with the Sphere of the Transcendent), is the source of a new hope, well rooted in the Fundamental Laws of Nature, which all aim at good and never at evil. But for this to happen also in Applied Science, it is necessary that the technological applications of the great scientific discoveries be entrusted not to political bodies, but to the scientific community itself. This has never been the case because Applied Science implies decision-making actions, and this means political activity, which belongs to Applied Language. This can be for mankind (Democracy) or against mankind (Dictatorship). It is Dictatorship that produces weapons and war technology; nevertheless, everybody considers us scientists responsible for Applied Science against mankind. Worse still, people think that Science was born of Technology. Some would even say that we scientists would not be here if it were not for technological development.

The claim, by the fellows responsible for the Culture of our time, that Technology precedes Basic Science is due to the fact that the "fire" and the "wheel" were invented before Galilei discovered Science: i.e., the Logic of Nature. They never say that the "fire" and the "wheel" were understood after Basic Science had been discovered by Galilei. To "invent" a new instrument does not necessarily mean understanding "why" it works. The spectacular successes in the construction of Pyramids and other masterpieces of Architecture, all over the world, did not give rise to the discovery of the first pieces of the Logic of Nature, such as the Principle of Inertia and the other two laws of Mechanics. A discovery in Basic Science corresponds to "understanding" all possible instruments that can be invented. For example, the discovery of a Fundamental Force of Nature, the electromagnetic force, has enabled us to understand that all our senses (sight, hearing, smell, taste and touch) are manifestations of the same fundamental force. This force originates from a unique entity, called "the electric charge". If we could switch off this "charge", our five senses would cease to exist. Thus all present and future "inventions" connected with sight, hearing, smell, taste and touch are understood even before they are really implemented.

Before Basic Science was discovered, thanks to Galileo Galilei, it was the other way around. Since the Fundamental Laws of Nature had not been discovered, the technological inventions were always rotating around the same two original ones, the "fire" and the "wheel". Since neither the "fire" nor the "wheel" were "understood", technological development could not produce anything really new. And this went on until Galilei in the 17th Century started to discover the first Fundamental Laws of Nature.
This is how, in just four centuries, we have implemented the enormous number of technological inventions that are a part of our daily life: telephone, TV, computers, internet, and all such instruments that would take pages and pages to list. This list is the proof that the motor of progress is scientific discovery. If the quality of life in the industrialized world has reached a level never realized before, this is due to the enormous number of scientific discoveries. If fundamental Science stopped discovering new phenomena, the quality of life would stop improving. Language, Logic and Science all contribute to improve the quality of our life, but the decisive role is due to Science, since Applied Science for mankind is the real motor for progress.

An example: when Galilei started to discover the Logic of Nature, mankind was measuring Time using the sundial, with errors of a few seconds per day. Galilei, having discovered the laws governing the oscillation of the pendulum, made great advancements in the precision of time measurement, and this enabled us in just four centuries to reach the present-day precision of one second per Universe lifetime, i.e., 20 billion years. During the many millennia preceding the discovery of Science, all civilizations were measuring time with the same error of one second per day.

If Science had never been discovered we could never have reached the vision of the Universe as it looks through our instruments, from the structure of the proton to the farthest Galaxy. This is synthetically represented in Figure 1 shown before. To achieve this superb synthesis of human knowledge an immense effort has been necessary. To this have contributed many distinguished scientists, some of whom are here: T.D. Lee, with the discovery of Parity and Charge conjugation invariance in the Weak Forces; Nick Samios, with the discovery of the Ω⁻, which at the time was believed to be the last particle in the long list of particles discovered; Dick Garwin and many other scientists who have been here in Erice during many decades at the Subnuclear Physics Courses. To these great achievements of Science correspond the 63 Planetary Emergencies, identified 23 years ago by the World Federation of Scientists.
[Diagrams, reproduced over several pages in the printed volume, of the 15 classes of Planetary Emergencies identified by the World Federation of Scientists (63 emergencies in total): I Water; II Soil; III Food; IV Energy; V Pollution; VI Limits of Development; VII Climatic Changes; VIII Global Monitoring of the Planet; IX New Military Threats in the Multipolar World; X Science and Technology for Developing Countries to Avoid a North-South Environmental Holocaust; XI The Problem of Organ Substitution; XII Infectious Diseases; XIII Cultural Pollution; XIV Common Defense Against Cosmic Objects; XV The Huge Military Investments.]
The Planetary Emergencies have been at the centre of our attention since 1986 and here we are again, one hundred and eighteen scientists from 31 countries and 111 institutes and laboratories, gathered in Erice to analyse a series of crucial
multidisciplinary scientific issues. They are all part of the 63 Planetary Emergencies identified by the World Federation of Scientists 23 years ago. The main topics of this 42nd Session are:

• Cultural Pollution, which is of great importance to us scientists, as mentioned before.
• The World Energy Crisis, with a focus on Energy and Pollution, which is about Essential Technologies for Managing the Coupled Challenges of Climate Change and Energy Security. These problems are strictly correlated with Energy, Water, Climate, Pollution and Limits of Development the world over, including Asian Countries.
• Global Monitoring of the Planet, with a focus on: firstly, Climate Uncertainties Addressed by Satellites; secondly, the Sensitivity of Climate to Additional CO2 as Indicated by Water Cycle Feedback Issues; thirdly, the Basic Mathematics Needed for All Models, which will be about Physics, Mathematics and Climate; and finally, in conjunction with medicine for the problem of windblown dust, Climate and Health.
• Pollution and Medicine, Integrating Environmental Health Research and Chemical Innovation: we will have a session on The Revolution in the Environmental Health Sciences and the Emergence of Green Chemistry.
• Information Security, Cyber Conflict and Cyber Stability - Finding a Path to Cyber Peace.

And now let me give you some good news. I have received from the Mayor of San Vito Lo Capo a very interesting letter. This letter is an example of the attention the Erice Seminars have received from many people in Italy. The letter recalls a very interesting episode when, during the Cold War, fellows with very high responsibility in the two superpowers, such as Professor Teller (USA) and Professor Velikhov (URSS), were discussing hot topics in the beautiful transparent waters of San Vito Lo Capo. The episode is at the origin of the famous statement by the father of the time-reversal invariance theorem, the great Professor Eugene Wigner, who said: "The Berlin Wall started to fall down in the San Vito waters, much before it fell down in Berlin". Mr. Matteo Rizzo was a young fellow at the time; now, being the Mayor of San Vito, he would like to establish, in a very beautiful area of San Vito, a Museum dedicated to all possible records that we could collect before it is too late. The Mayor is here with us and I would like to give the floor to Mr. Matteo Rizzo in order to read out what he says in his letter.
COMUNB DISANVlTO La CAPO PROVINCTA DI'rRAPANT UiJk(tJ del 8indtu;o "'ll's.",... ..M!'- P;I. Jd'clh 1'd(!')<:$<>llZlI r'lnj)<)'>.$f.lf2U; CF~i#,11l 1i'-·MIi<<;j,.",,¢;.m,~~~~
Prot. N.
-(:Jj f{ Chiar.mo I'rof. Amollmo Zichichi Prl',idcntc "Ellorc Maiorana Foundation And Centre for Scientific Culture" VIa Guamotta 26 91016 E RIC E ('rp)
E. p.c. Spett.le World Federation of Scientists CH-I2I1 GENEVA 23 - SWITZERLAND Chiarissimo Professore, Come Lei sa c da molti anni che scguo Ie attivita del Centro dl Cultura Seicnrifiea di Ence e, cun parlicolarc interesse. tulto qudlo ehe Lei c riuseito a fare per superarc II pericolo di olocauslo nuclearc nel corso della lunga guerra fredda USA URSS. Ero vent'anni PIU giovanc quando Lei ml feee eonosecre aleuni dei masslmi esponenti scicntitici delle due supl'rpolenze, grazic aile atlivita dei seminar! sulle gucrrc nucleari: ricordo quando Lei porto a San \'ito il Professor.: Tellcr con il Professore Velikhov c Ie vostrc lunghe 50ste nellc nostrc splendide acquc. Ricardo quando ci fu la calorosa stretta di mana nell'aula magna "Dirac" di Erice Ira i consiglkri scientifici del Prcsidcntc Reagan (Tclkr). dd Presidente Gorbachcl' (Velikhov) e del Presidente Ocng Xiao Ping (Zhau Guung Zhao). tre anni prima "he i:follassl: il muro di &:rlino. Rlcordo anche quando LeI racconlo I"episodio I'issuto negli anni della Sua giovcntit scicntifica in Inghilterra con il Professorc Blackett, al qualc Lei ha volulo dcdlcarc lIno dci tre conventi di Erice (il San DOlllcnko). E' passato molto tempo ma sOno viw nella mia mcmona Ie parole del Professorc Wigncr ehe destarono in me grande cntllsiasmo. Disse il Prof. \\iigncr: "II crollo del muro di Berlino i: i1l1zialO ncl marc di San Vilo. molto prima del suo crollo a Berlino". Adesso ehe sono sindaco tli questa splendid.~ citladellll sidliana, vorrel che non SI perdcsse la memoria sionea di quanto Lei e riuscilo a fare per 13 Scienza senza segreti C scllza frontl.:rc ncando u Ericc una isliluzionc chc. comc ho kilo nclle ultime noli;£ic. conta 123 scuolc intemanonali, cui hanno pre so parte nitre 100 mila scienziati ( di cui 124 Nobel) provenienli da 932 Universitii c Laboratori di 140 Naziol1l. Sono stato ineoraggiato a scrivcrLc dopa aver letto il Suo bellissimo al1icolo, pubblicato il 29 luglio scorso su "1\ Giomale" in cui spicga al grande pubblico la verila sulle aziol1l del Govcrno I3crlusconi, con Ie sccllc dei Ministri Fraltini. Bondi. Ge1mini e di Gianni Leila. Molti scienziati che
come to Erice often visit my San Vito, and I find them enthusiastic admirers of Italy and of the values of Democracy and Liberty, as you said in your article. The City Council, of which I am in charge, has decided - upon my proposal - to create a Museum, in one of the very few unpolluted Sicilian coasts, the Macari Gulf, which you have saved from the installation of a big power station thanks to the intervention of a team of outstanding scientists. This Museum will be devoted to what you have so far realized in Erice during these past 43 years and, especially, to the precious testimony which the international scientific community has left, and continues to leave, in this corner of the Mediterranean from where the breakdown of the Berlin Wall started. In your article I have seized the extremely important role which the struggle against the "Cultural Berlin Wall" may represent in our Country, Italy, where this "Wall" has practically remained uninjured. Dear Professor Zichichi, the message you have launched is welcomed with enthusiasm by this Administration, and we would like to propose to you the creation here in San Vito, in dedicated premises, of a museum through which the young people of Italy and the scientists from all over the world who take part in the activities of your Erice Centre can learn about the role of Italy in the international scientific world, including the study of the 63 planetary emergencies, the fall of the Berlin Wall and the role of true great Science in the culture of the third millennium (and here I refer to your recent article "The motor for Progress", published in the English journal Public Service Review). I await your reactions to this proposal of mine and thank you with all my heart for whatever you will be willing to do. Cordially.
Let me translate the part of the letter where the Mayor proposes to create a Museum devoted to what our community of scientists has done. "The City Council, of which I am in charge, has decided - upon my proposal - to create a Museum, in one of the very few unpolluted Sicilian coasts, the Macari Gulf, which you have saved from the installation of a big power station, thanks to the intervention of a team of outstanding scientists. This Museum will be devoted to what you have so far realized in Erice during these past 43 years and, especially, to the precious testimony which the international community has left, and will continue to leave, in this Mediterranean corner from where the breakdown of the Berlin Wall started. In your article I have seized the extremely important role which the struggle against the "Cultural Berlin Wall" may represent in our Country, Italy, where this "Wall" has practically remained uninjured". Let me close with a testimony of how much we believe in the Fermi Statement
Neither science nor civilization could exist without memory.
A few words on the WFS for the newcomers.
THE WORLD FEDERATION OF SCIENTISTS (WFS) 1966-2006
1966-1974   Victor F. WEISSKOPF   1st President WFS
1974-1982   Isidor I. RABI        2nd President WFS
1982-1990   Tsung Dao LEE         3rd President WFS
1990-1998   Kai M.S. SIEGBAHN     4th President WFS
1998-2006   Antonino ZICHICHI     5th President WFS
20 Agreements signed with Governments. 65 Scientific Collaboration Agreements signed with Governmental Institutions and Research Institutes worldwide. 43 Research Centres established in Developing Countries.
"VFS PRESIDENT COUNCIL
a Group of Interdisciplinary Scientists
F
WFS Scholarship Program
TIle Ettore Majorana Prize Erice - Science for Peace
SCHOLARSHIPS PROGRAMME
(~
PR_~
J
_________E_R_IC_E__ __ E_________
In order to promote the values of scientific culture worldwide and following a proposal by the WFS, a special law was voted unanimously by the Sicilian Parliament, in 1988, to establish the 'Ettore Majorana Prize-Erice-Science for Peace'. Every year, the Prize is awarded to distinguished scientists and world leaders who have contributed to the promotion of the values of scientific culture. In the last 20 years (1988-2007), 52 prizes were awarded to: P.A.M. Dirac, P.L. Kapitza, A.D. Sakharov, E. Teller, V.F. Weisskopf, J.B.G. Dausset, S.D. Drell, M. Gell-Mann, H.W. Kendall, L.C. Pauling, A. Salam, C. Villi, R. Doll, J.C. Eccles, T.D. Lee, L. Montagnier, Qian Jiadong, J.S. Schwinger, U. Veronesi, G.M.C. Duby, R.L. Garwin, S.L. Glashow, D.C. Hodgkin, R.Z. Sagdeev, K.M.B. Siegbahn, Y.P. Velikhov, J. Karle, J.-M.P. Lehn, A. Magneli, N.F. Ramsey, H. Rieben, J.J. van Rood, C.S. Wu, R.L. Mossbauer, A. Muller, H. Kohl, M.S. Gorbachev, H.H. John Paul II, R. Clark, M. Cosandey, A. Peterman, R. Wilson, Lord J. Alderdice, J.I. Friedman, M. Koshiba, S. Coleman. The 2007 prize has been awarded to A.N. Chilingarov, P.C.W. Chu, L. Esaki, W.N. Lipscomb Jr., J. Szyszko and M.-K. Wu.
Giancarlo Wick was a great physicist and strong supporter of the Erice Centre. This is the reason why the Giancarlo Wick Gold Medal Award has been established by the WFS.
It is a great pleasure for me to let you know that the Selection Committee for the Gian Carlo Wick Gold Medal Award has unanimously decided that the recipient for 2009 should be Professor Nicholas Samios of Brookhaven National Laboratory, with the following citation: "For his visionary role in the successful construction of the Relativistic Heavy Ion Collider (RHIC), and for his intellectual leadership in a series of remarkable experimental discoveries which established the existence of Quark Gluon Plasma, a new phase of strongly interacting nuclear matter. " We are pleased to have with us today the laureate of the 2009 Gian Carlo Wick Gold Medal Prize, Professor Nicholas Samios of Brookhaven National Laboratory.
ACCEPTANCE REMARKS ON RECEIVING THE 2009 GIAN CARLO WICK GOLD MEDAL AWARD
NICHOLAS P. SAMIOS
Brookhaven National Laboratory, Department of Physics, Upton, New York, USA
It is indeed an honor and a pleasure to be the recipient of the 2009 G.C. Wick Gold Medal Award. I am particularly pleased because it is named after G.C. Wick, a world renowned theoretical physicist of the 20th century, a gentleman, a humanist and a person of high ethical standards. I was fortunate to interact with Gian Carlo at Brookhaven during his tenure there from 1957-1964. This was the exciting era when we were discovering new particles such as Λ's, Σ's, the
emergence of lattice gauge theory calculations which indicated the probability of a phase transition into a Quark Gluon Plasma at center of mass energies of approximately 50-100 GeV/A. These observations dovetailed very neatly with building upon the existing BNL facilities (AGS) and infrastructure (3.8 km tunnel and 20 MW refrigerator) at BNL. The resultant design consisted of two separate concentric superconducting rings of magnets, six possible intersecting regions for collisions and detectors, the capability of accelerating nearly all species of ions from protons to gold with energies up to 100 GeV/A per ring, as well as polarized protons with energies up to 250 GeV per ring. Two very large detectors, STAR and PHENIX, and two small detectors, PHOBOS and BRAHMS, were approved and built to unravel the complexities of the RHIC collisions. Construction of this RHIC complex began in 1991 and was completed in 2000, and nine successful runs (2001-2009) have been completed to date. The exciting physics that has emerged has indeed occurred in the anticipated domain of creating hot matter with a very large energy density, 3-10 GeV/Fermi, more than 10 times that of ordinary nuclear matter. A new form of matter has also been created, the so-called strongly interacting quark gluon plasma (SQGP), not a gas as had been predicted, but a perfect fluid with very low viscosity. There is intense theoretical and experimental activity attempting to understand this phenomenon, its creation and the interaction of quarks and gluons as they pass through this dense matter, and amazingly one of the conjectures involves string theory, a theory unifying gravity and the strong and electromagnetic forces. In the Spin Sector, experiments at RHIC have added to the puzzle as to the origin of the proton spin. It has been known for some time that the quarks can only account for 20% of the spin, the remainder expected to be in the gluons. Not so. RHIC experiments indicate that only a small fraction is contributed by the gluons and therefore the spin must come from elsewhere, possibly angular momentum. These are just a few of the astonishing experimental results emanating from RHIC. Time and your patience preclude me from continuing on this exciting subject. Suffice it to say there are a plethora of experimental findings emanating from RHIC, many of which will be presented at the ERICE International School of Subnuclear Physics later this month. It has indeed been my pleasure to address you this day and again I am honored to have received this award.
GLACIAL RETREAT AND ITS IMPACT IN TIBETAN PLATEAU UNDER GLOBAL WARMING
HONGLIE SUN Geographic Sciences and Natural Resources Research Institute, Dep. Head Chinese Academy of Sciences, Beijing, China
SIGNIFICANCE OF GLACIERS IN TIBETAN PLATEAU
Water tower of the big rivers in Asia: as much as 40% of runoff is from glaciers in some of the upper reaches of these rivers.
MAJOR CHANGES TAKING PLACE IN THE TIBETAN PLATEAU
• Glacial fluctuations
• Lake variations
• Changes of wetland
• Grassland degradation
• Glacier process is among the major changes
GLACIERS IN TIBETAN PLATEAU
Region            Number   Area (km²)   Volume (km³)
China             46377    59425        5600
Tibetan Plateau   36918    49903        4572
GLACIERS IN THE PLATEAU ARE EXTENSIVELY RETREATING
• Retreating glaciers reach 80-95% of the total glaciers.
• Glacial area retreated by 4.5% in the past 20 years and by 7% in the past 40 years.
• Glacial retreat is accelerating in the past decade!
GLACIERS WHICH WERE ADVANCING ARE NOW RETREATING WITH LARGER AND LARGER RETREATING AMPLITUDE
AMPLITUDE OF GLACIAL RETREAT IS LARGEST IN MT. KARAKORUM AND SOUTHEASTERN TIBET, WHILST SMALLEST IN CENTRAL PLATEAU
[Figure: regional statistics of the amplitude of glacial retreat]
OBSERVED SINGLE GLACIERS ALSO SHOW THE LARGEST RETREAT IN MT. KARAKORUM AND SOUTHEASTERN TIBET, AND THE SMALLEST RETREAT IN THE CENTRAL PLATEAU
THE ANNUAL RETREAT OF ATA GLACIER WAS 30-40 M BEFORE 1980, AND INCREASED TO 50 M AFTERWARDS
[Photographs: Ata Glacier in 1933 and 2006]
THE MAGNITUDE OF GLACIAL RETREAT FROM 1997-2006 IS LARGER THAN THAT FROM 1987-1996
MAJOR CAUSE
The direct cause of glacial fluctuations is glacial mass balance, which is mainly controlled by temperature; temperature therefore dominates glacial fluctuation in the long run.
MAJOR IMPACT
Glacial-Water-Supplied Lake Expansion Flood (GLEF) and Glacial-Terminus Lake Outburst Flood (GLOF) induced by glacial retreat are a serious problem on the Tibetan Plateau and its surrounding regions.
IN THE TIBETAN PLATEAU, THERE ARE MORE THAN 1000 GLACIAL-WATER-SUPPLIED LAKES AND MORE THAN 3000 GLACIAL-TERMINUS LAKES
GLEF IS DEVASTATING PASTURE NEAR THE LARGE LAKES IN THE TIBETAN PLATEAU
WITH CLIMATIC WARMING AND GLACIAL RETREAT, GLACIAL-TERMINUS LAKES APPEAR OR ENLARGE AND CAUSE GLOF
THE LAIGU GLACIAL-TERMINUS LAKE IN SOUTHEAST TIBETAN PLATEAU IS RAPIDLY EXPANDING
THE LAIGU GLACIAL LAKE
POSSIBLE ADAPTATION MEASURES
• A complete catalogue of the hazard distribution caused by glacial retreat, GLEF and GLOF.
• Early warning systems at the most dangerous sites of GLEF and GLOF.
• Engineering measures, including water pipes to drain water from the most dangerous GLOF and GLEF sites.
CONCLUSIONS
1. Glaciers are retreating extensively in the Tibetan Plateau under global warming.
2. Glacial retreat has been accelerating in the past decade.
3. Temperature controls the general trend of retreat and precipitation controls the regional differences.
4. Glacial retreat causes more GLOF and GLEF in the Tibetan Plateau.
5. Adaptation measures are required to deal with the impact of glacial retreat in the Tibetan Plateau.
CLIMATE STABILIZATION ON THE BASIS OF GEO-ENGINEERING TECHNOLOGIES
YURI ANTONOVITCH IZRAEL
Institute of Global Climate and Ecology Director, Moscow, Russia
CO2 EMISSIONS AND EQUILIBRIUM TEMPERATURE INCREASES FOR A RANGE OF STABILIZATION LEVELS (IPCC)
ppm        Global ΔT, °C    Change, %
445-490    2.0-2.4          -85 to -50
490-535    2.4-2.8          -60 to -30
RADIATIVE FORCING COMPONENTS
l"'~_
~''''''~_\
~~~ .;~
9Q''''~'''i!l w~ '~rf7l:"'CH.
~tt..
sur~_
~':~;::Sil
0>:",..w
_~,-e.r~d
lll!;!i
~
i
,,-
AS~lOQt~~
Co---"t.i'NWetti t;:!. ~ ~..."ltd'~
t .Q.W
M..l
w..
t~-
'It,X r~
,
:...
~~ tAI!.'
~
~~~
"lOt'%!
L""
-~ .~~,
b
,_<$ ,,'1(l,.~!1&
-1
{I
RaDIative I"'oro!f\g
1
eN m-2)
"There is also an opportunity to promote research on approaches which may contribute towards maintaining a stable climate (including so-called geo-engineering technologies and reforestation), which would complement our greenhouse gas reduction strategies, The G8 academies intend to organize a conference to discuss these technologies, " "We note a possibility to encourage studies in the field of additional technologies, which may promote the climate stabilization on the planet"
[Figures: clouds; the Pinatubo eruption (1991), McCormick et al., 1995]
Pattern of the Pinatubo plume (1991) (McCormick et al., 1995). Aerosol microphysical parameters were measured with a photoelectric aerosol counter, which has the following characteristics:
• the range of measured sizes (in diameter) from 0.3 to 10³ μ;
• the upper limit of measured concentrations 6×10³ cm⁻³;
• the lower limit of measured concentrations 1 cm⁻³;
• the diameter measurement error 20%;
• the concentration measurement error 10%;
• angle of vision 2°;
• minimally required angular altitude of the solar disk 20°;
• the Sun tracking error for six hours of operation 1°.
• the linear rate of the substance removal from the generator w0 = 200 m/s;
• the nozzle radius R0 = 0.3 m;
• the inclination of the generator nozzle to the horizon α = 12°;
• the air overheat at the generator output compared to the environmental air temperature ΔT = 425°C.
[Figure: scheme of the field experiment, showing the aerosol generator and the photometer]
For the first time, data were obtained on the attenuation of solar radiation by artificially injected aerosol layers. With a number aerosol concentration of about 10²-10³ cm⁻³, which corresponds to an aerosol density in the deposited layer of about 1-10 mg/m² with a layer thickness (along the ray path) of about 100 m, the solar radiation attenuation by the artificial aerosol layers ranges from 1 to 10%.
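A simple Beer-Lambert estimate reproduces the order of magnitude of this result. The short Python sketch below is illustrative only; the extinction cross-section of about 1e-8 cm² per particle is an assumed value for particles of the order of one micrometre and is not taken from the experiment.

import math

def attenuation(number_conc_cm3, path_m, sigma_cm2=1e-8):
    """Fraction of solar radiation removed by an aerosol layer (Beer-Lambert law)."""
    tau = sigma_cm2 * number_conc_cm3 * (path_m * 100.0)  # optical depth along the ray path
    return 1.0 - math.exp(-tau)

for n in (1e2, 1e3):  # number concentrations quoted in the text, cm^-3
    print(f"N = {n:.0e} cm^-3 -> attenuation of about {attenuation(n, 100.0):.0%}")

With these assumptions the estimate gives roughly 1% and 10% for the two quoted concentrations, consistent with the measured range.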
REFERENCES
1. Yu. A. Izrael (2005) "An Efficient Way to Regulate the Global Climate is the Main Objective of the Solution of the Climate Problem," Meteorol. Gidrol., No. 10 [Russ. Meteorol. Hydrol., No. 10 (2005)].
2. Yu. A. Izrael, I.I. Borzenkova, and D.A. Severov (2007) "Role of Stratospheric Aerosols in Maintenance of the Present-day Climate," Meteorol. Gidrol., No. 1 [Russ. Meteorol. Hydrol., No. 1, 32 (2007)].
3. P.J. Crutzen (2006) "Albedo Enhancement by Stratospheric Sulfur Injections: A Contribution to Resolve a Policy Dilemma?" Climatic Change, 77.
4. Yu. A. Izrael, V.M. Zakharov, N.N. Petrov, A.G. Ryaboshapko, V.N. Ivanov, A.V. Savchenko, Yu. V. Andreev, Yu. A. Puzov, B.G. Danelyan, and V.P. Kulyapin (2009) "Field Experiment on Studying Solar Radiation Passing through Aerosol Layers," Russ. Meteorol. Hydrol., No. 5, p. 5-15.
MODELING FOREST ECOSYSTEMS, THEIR RESPONSE AND INTERACTION WITH GLOBAL CLIMATE CHANGE*
HERMAN H. SHUGART AND JACQUELYN K. SHUMAN
Center for Regional Environmental Studies, The University of Virginia, Charlottesville, Virginia, USA
* We would like to acknowledge support by grants to H.H. Shugart under NASA grants #NNX07A063G and #NNG05GN69G, NASA Carbon/04-0231-0148 and to the Ettore Majorana Foundation and Centre for Scientific Culture for inviting us to present at the 42nd Session of the Erice International Seminars on Planetary Emergencies (Erice, Italy).
The rapid change in the concentration of CO2 in the atmosphere from human activities and the consequent possibility of an altered planetary mean climate has inspired policy and international protocols to lower the emissions of CO2. The Kyoto Protocol,1 developed in 1997 to control the emissions of greenhouse gases, came on the heels of the highly successful Montreal Protocol on Substances that Deplete the Ozone Layer.2 The Montreal Protocol essentially treated industrially produced chemicals with discrete points of production and emission; the Kyoto Protocol involved a much more complicated intrinsic accounting that, along with human production and emissions, involved sources and sinks of chemicals from "natural" processes such as decomposition of leaves (a source) or growth of trees (a sink). The Kyoto Protocol also considered other radiatively active "greenhouse gases", often expressed in CO2-equivalents vis-a-vis potential warming. One logical reason for including natural processes is that the fluxes of CO2 from these sources are large compared to the human contribution. For example, if one considers the fluxes of CO2 into the atmosphere for the decade of the 1990s,3 it is estimated that diffusion from ocean waters represents around 92.8 PgC yr-1, about 122.2 PgC yr-1 comes from terrestrial ecosystems, and 8.0 PgC yr-1 from human activities (1.6 PgC yr-1 from land use change plus 6.4 PgC yr-1 from industrial processes). This makes the human contribution about 4% of the annual flux of CO2 to the atmosphere. The ocean and terrestrial ecosystems also take up very large amounts of CO2 that are thought to roughly balance their CO2 releases. The human component is important over longer time scales because it augments the other CO2 sources and is responsible for the systematic increase in atmospheric CO2 observed for the past 50 years.4 Several lines of evidence (notably involving the ratios of 13C and 14C isotopes)5 strongly imply that human activities are the source for the increase. The consequences of these human alterations in greenhouse gas budgets at the global level are seen in the unbalanced nature of current accounting of where the anthropogenic input to the atmosphere actually goes. Over the past decade, 7.2 PgC yr-1 are emitted into the atmosphere annually from burning fossil fuels and another 1.5 PgC yr-1 are similarly emitted from processes involving human land use change (clearing forests, etc.). Of these anthropogenic emissions, 4.2 GtC yr-1 remain in the atmosphere and 2.2 GtC yr-1 are taken up by the oceans. The atmospheric carbon is directly measured and the ocean uptake processes are controlled by diffusion processes which are thought to be relatively well understood.
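The "about 4%" figure follows directly from the fluxes just quoted; the short Python sketch below simply restates that arithmetic and makes no independent estimate.

# Arithmetic check of the flux figures quoted above (PgC per year, 1990s).
ocean_outgassing = 92.8        # diffusion from ocean waters
terrestrial_release = 122.2    # release from terrestrial ecosystems
human = 1.6 + 6.4              # land-use change plus industrial processes = 8.0

total_flux_to_atmosphere = ocean_outgassing + terrestrial_release + human
share = human / total_flux_to_atmosphere
print(f"human share of the annual CO2 flux: {share:.1%}")   # prints 3.6%, i.e. roughly the 4% quoted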
When the carbon remaining in the atmosphere and that stored in the ocean is subtracted from the carbon emitted by human activities, about 2.3 GtC yr-1 are unaccounted for in the budget. This "missing carbon" must be stored somehow and somewhere on the terrestrial surface. This missing terrestrial carbon sink and the carbon released by land use change are both attended by considerable uncertainty. The uncertainty of the terrestrial fluxes of CO2 is a strong motivation to develop ecological models capable of accounting for the kinetics of terrestrial carbon emission and storage. To understand the storage and dynamics of global carbon, one needs to understand the functioning of forests. Globally, forests and their associated soils represent a large fraction of the carbon stored in terrestrial systems. Of the estimated 2300 Pg of carbon stored in terrestrial ecosystems, about 52% is stored in boreal, temperate and tropical forests.6 This fraction would only increase if one includes the wooded savanna and other similar systems as some sort of sparse forests. Of the carbon stored in forests, about 57% is stored in boreal forests of the northern latitudes. Russia contains the largest forest on Earth inside its national boundaries. Much of this forest carbon is held in forest soil, which is conventionally calculated for the soil to 1 m depth or to bedrock, whichever is shallower. If soils were computed to their full depth, the deep, carbon-rich soils of the boreal forest would make their absolute and relative contribution to the global carbon storage even greater.
The Forest as a Dynamic Mechanism
Disturbance and recovery of forest ecosystems is essential to the understanding of the temporal dynamics of the terrestrial forest-storage component of the global carbon cycle. A central issue involves the nature and structure of a mature forest system. This concept has deep roots (and occasional rediscovery) in the ecological literature. A.S. Watt in 1947 developed a classic paper7 that is the wellspring for subsequent ideas and extensions of the basic concept that the structure of a mature forest (at the scale of several hectares) is a heterogeneous mixture of patches in different phases or stages of gap-phase replacement. The mature forest should have patches with all stages of gap-phase dynamics and the proportions of each should reflect the proportional duration of the different gap-replacement stages. This has significant implications for the apparent dynamics of forests when viewed at different spatial resolution. The biomass dynamics for a single-canopy-sized piece of a forest (Figure 1) is quasi-cyclical, in the form of a saw-toothed curve.8 The spaces between the "teeth" in the saw-toothed, small-scale biomass curve are determined by how long a particular tree lives and how much time is required for a new tree to grow to dominate a canopy gap. After a clear-cutting or forest fire (for example), several of these biomass curves can be summed to predict the biomass change for a forest landscape. The result is the expected change from deforested land being restored to a forest condition in an effort to increase the regional storage of organic carbon. This landscape-scale biomass dynamic (Figure 1) is a simple statistical consequence of summing the dynamics of the parts of the mosaic.
If there has been a synchronising event, such as a clear-cutting, one would expect the mosaic biomass curve to rise as all of the parts are simultaneously covered with growing trees (I in Figure 1b). Eventually, some patches have trees of sufficient size to dominate the local area and there is a point in the forest development when the local drops in biomass are balanced by the continued growth of large trees at other locations and the mosaic biomass curve levels out (II in Figure 1b). If the trees over the area have relatively similar longevities, there is also a subsequent period when several (perhaps the majority) of the pieces that comprise the forest mosaic all have deaths of the canopy dominant trees (III in Figure 1b). Over time, the local biomass dynamics become desynchronised and the biomass curve varies about an equilibrium biomass value (IV in Figure 1b).
[Figure 1]
Fig. 1. Biomass dynamics for an idealized landscape. The response is from a relatively large, homogeneous area composed of small patches with gap-phase biomass dynamics. The upper part of the figure indicates the individual dynamics of the patches that are summed to produce the landscape biomass dynamics. The landscape biomass dynamics curve has 4 sections indicated: I. Increasing landscape biomass curve rising as all of the patches are simultaneously covered with growing trees. II. Local drops in biomass are balanced by the continued growth of large trees at other locations; the landscape biomass curve levels out. III. If the trees have relatively similar longevities, there is a period when several (perhaps the majority) of the patches that comprise the forest mosaic all have deaths of the canopy dominant trees. IV. The local biomass dynamics become desynchronized and the landscape biomass curve varies about an equilibrium biomass value.
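The statistical argument summarized in Figure 1 can be made concrete with a toy calculation. The Python sketch below is not the authors' model; the saw-toothed patch curve and the spread of tree longevities are assumptions made only to illustrate how summing desynchronising patches yields the rise, dip and eventual equilibrium of the landscape curve.

import random

random.seed(0)
n_patches = 500
# Canopy-tree longevities (years); the 80-120 year spread is an illustrative assumption.
lifespans = [random.randint(80, 120) for _ in range(n_patches)]

def patch_biomass(t, lifespan):
    # Saw-tooth gap-phase dynamics: relative biomass builds until the canopy tree dies, then resets.
    return (t % lifespan) / lifespan

# All patches start together at zero biomass (a synchronising event such as a clear-cut).
landscape = [sum(patch_biomass(t, L) for L in lifespans) / n_patches for t in range(600)]

# Sample points along the landscape curve: early synchronized rise, post-mortality dip,
# and long-run fluctuation about an equilibrium value.
print([round(landscape[t], 2) for t in (50, 110, 300, 550)])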
The occurrence of such patterns has been documented for several different mature forest systems. For example, the presence of shade-intolerant trees in patches in mature undisturbed forest is but one observation consistent with the mosaic dynamics of mature forests.9 The scale of the mosaics in many natural forests is somewhat larger than one would expect from the filling of single-tree gaps, indicating the importance of phenomena that cause multiple tree replacements. Also, the relatively long records (ca. 40 years in most cases) that are available for forests indicate a tendency for the forest composition to fluctuate, with species showing periods of relatively weak recruitment of individuals to replace large trees and strong recruitment in other periods.10 The carbon storage dynamic (Figure 1) implies that carbon taken up by reforestation and growth of trees may be partially released back into the atmosphere in the future.
Individual-tree-based Computer Models of Forest Dynamics
We have been working on forest models called "gap models"11 that can simulate the local non-equilibrium dynamics of forests mentioned above. Gap models are a class of individual-based models that simulate the establishment, diameter growth, and mortality of each tree on an area on the order of 0.10 ha. Simulation calculations in these models are on a weekly to annual time step. Gap models feature relatively simple protocols for estimating the model parameters (Shugart, 1998). For many of the more common temperate and boreal forest trees, there is a considerable body of information on the performance of individual trees (growth rates, establishment requirements and height/diameter relations) that can be used directly in estimating the parameters of such models. The models have simple rules for interactions among individuals (e.g., shading, competition for limiting resources, etc.) and equally simple rules for birth, death and growth of individuals. The simplicity of the functional relations in the models has positive and negative consequences. The positive aspects are largely involved in the ease of estimating model parameters for a large number of species; the negative aspects with a desire for more physiologically or empirically "correct" functions. Gap models differ in their inclusion of processes which may be important in the dynamics of particular sites being simulated (e.g., hurricane disturbance, flooding, formation of permafrost, etc.), but share a common set of characteristics. Each individual tree is simulated as an independent entity with respect to the processes of establishment, growth and mortality. This feature is common to most individual-tree-based forest models and provides sufficient information to allow computation of species- and size-specific demographic effects. Because the models have stochastic birth and death functions, one normally uses a Monte-Carlo simulation of the dynamics of a forest landscape as an ensemble of simulated forest plots. The model structure of gap models emphasizes two features important to a dynamic description of vegetation pattern: 1. The response of the individual plant to the prevailing environmental conditions, and 2. The modification of those environmental conditions by the individual tree. Gap models are hierarchical in that the higher-level patterns observed (i.e., population, community, and ecosystem) are the integration of plant responses to the environmental constraints defined at the level of the individual.
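To make this structure tangible, the following Python sketch implements a toy individual-based plot model in the spirit of a gap model. It is not the FAREAST code or any published gap model: the establishment probability, growth and allometric constants, shading rule and mortality rates are all placeholder assumptions chosen only so that the sketch runs.

import random

class Tree:
    def __init__(self):
        self.dbh = 1.0                        # diameter at breast height, cm

    def grow(self, shading):
        self.dbh += 0.5 * (1.0 - shading)     # annual diameter growth reduced by competition for light

    def biomass(self):
        return 0.1 * self.dbh ** 2.4          # simple allometric biomass (arbitrary units)

def simulate_plot(years=200, rng=random):
    """One ~0.1 ha plot: stochastic establishment and mortality, annual time step."""
    trees = []
    for _ in range(years):
        if rng.random() < 0.3:                # stochastic establishment of a new sapling
            trees.append(Tree())
        shading = min(1.0, sum(t.biomass() for t in trees) / 500.0)
        for t in trees:
            t.grow(shading)
        # stochastic mortality, higher on heavily shaded (crowded) plots
        trees = [t for t in trees if rng.random() > 0.02 + 0.03 * shading]
    return sum(t.biomass() for t in trees)

rng = random.Random(42)
ensemble = [simulate_plot(rng=rng) for _ in range(50)]    # Monte-Carlo ensemble of plots
print("mean plot biomass over the ensemble:", round(sum(ensemble) / len(ensemble), 1))

Averaging the stochastic plots in this way is exactly the Monte-Carlo ensemble idea described above; a full gap model adds species-specific parameters, climate response functions and site processes on top of this skeleton.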
Boreal Forest/Climate Interactions
The Northern Hemisphere's boreal forests and, in particular, the Siberian boreal forest zone, may have a particularly strong effect on the Earth's climate through mechanisms involving changes in the regional surface albedo. Gordon Bonan and his colleagues in 199212 altered the surface albedo appropriate to a boreal forest clearing in the National (USA) Center for Atmospheric Research CCM1 model. This resulted in a predicted cooler Earth not only in the boreal zone but across the entire Northern Hemisphere. Richard Betts13 used the Hadley CM to simulate the climate consequences of albedo changes from growing more trees. He found that the surface albedo changes from the growing of coniferous evergreen trees in Siberia had strong leverage to warm the Earth, so much so that these temperature increases were large enough to overshadow the effect of the carbon storage that occurred as a result of growing evergreen forest in that region. In Siberia, and unlike the rest of the boreal forest zone, Larch forest (species: Larix sibirica and L. gmelinii) covers extensive regions. Conversion of Siberia's Larch forests to "Dark Conifer Forests" (Spruce and Fir Forests) is a direct analogue to Betts' model experiment with respect to albedo changes. This albedo change with forest cover change implies a potential positive feedback cycle: a warmer climate can convert regional land cover from Larch to Dark-Conifer forest; the resultant albedo change then can promote additional climate warming. This climate/cover feedback motivates development of dynamic models simulating the composition of Siberian forest. Working with colleagues in the Russian Academy of Sciences, we applied a forest-stand composition and biomass model simulating birth, growth and death of the interacting individual trees in a forest stand. The FAREAST model14 was originally developed to simulate the forests near the border of the People's Republic of China and the Democratic People's Republic of Korea at Changbai Mountain, famous for its rich tree species diversity and forest type diversity associated with different altitudes; almost all of the typical forest types of Eastern Eurasia are represented along the north slope of the mountain, from low-altitude mixed forests to high-altitude dwarf-birch forests. The initial tests of the FAREAST model were spatially hierarchical: first, simulating the composition and basal area of forest at different elevations on Changbai Mountain; then, inspecting the model's predictive capability on other Chinese mountains; finally, testing based on simulated composition of mature forests across the Russian Far East.15 Continuing work focuses on improving model performance and inspecting large-area climate change effects on vegetation in the Eastern Eurasian area as part of NEESPI, the Northern Eurasia Earth Science Partnership Initiative, an iLEAPS project (see http://neespi.org for details). The ability of the FAREAST model to reproduce vegetation patterns from Changbai Mountain and for other elevational gradients over Northern China, plus the application of the model in the Russian Far East, suggested applying the model to inspect the possible effects of climate change in the Eastern Eurasian region.
Because the model simulates the species composition of the elements of a forest landscape responding to succession and climate change, these results increase the significance of the observations of Richard Betts16 that surface albedo change in this region has considerable leverage in feedbacks with the Earth's climate. The change in climate appears to have a significant potential for positive feedback with the forest condition. Ningning Zhang and his colleagues17 used climate change data from the Intergovernmental Panel on Climate Change Third
Report18 for the CMIP2 and IS92a climate change scenarios to drive the forest dynamics of the FAREAST model. The CMIP2 and IS92a scenarios combine the outcome of 18 GCM models to simulate the future global climate. These climate changes not only influenced each site's forest structure but also the tree distribution across the East Eurasia Forest. FAREAST simulations indicated that, under the climate change conditions with rising temperature and precipitation, the total biomass of the forests will not significantly increase across the study region and may even drop in some western sites currently dominated by Larix. While the area covered by the deciduous needle-leaf tree Larix is reduced, the biomass and the distributional range of the deciduous broad-leaf tree genera should increase. Fraxinus, Ulmus, Quercus, Tilia and other deciduous trees may extend their boundary not only to the northwest but also to the southeast of the East Eurasian region, which currently is too cold and dry for them. These trees also extend their range to higher latitudes under the climate change. Essentially, the major forest types of this region, boreal forests dominated by coniferous trees, become deciduous forests and mixed forests. With respect to how these changes interact with the albedo effects that potentially feed back onto the climate, one finds a quite complex regional response. A sensitivity study19 inspected the response of the boreal region of Russia and other former member nations of the USSR (Figure 2). Regional trends of differences in biomass between the precipitation (plus 10% or minus 10% changes) and temperature (no change or plus 2°C warming) cases were compared to the current simulated regional biomass for 2083 points. When the responses to change of temperature and precipitation are separated, the results are as one might expect: increasing precipitation induces increasing biomass (Figure 2a), decreasing precipitation creates decreasing biomass (Figure 2b), and warming causes decreases in biomass for certain regions; though in Siberia, where temperatures tend towards extreme cold, warming induces biomass increases (Figure 2c). In addition to total biomass changes there are shifts in the biomass for the genera represented. Specifically, there are different patterns of change for Larix and Pinus in response to temperature warming. Both genera display a decrease in biomass in western and southwestern Russia and the Russian Far East. The number of sites that experience a biomass decrease for Larix is larger than the number of sites that show a decline in Pinus. In particular, the sites that show a decline in Larix extend further northward in both European Russia and the Russian Far East. A more detailed analysis is required to determine whether these patterns are the result of a replacement of Larix with Pinus, but given the results of Zhang et al.20 and documented shifts of larch in the field,21 replacement of the deciduous Larix is predicted.
CONCLUDING COMMENTS
In this chapter we strove to provide a straightforward application of one of a large class of ecological models capable of simulating the change in biomass, physical structure and biological composition over a large region, Russia's forests. What we find is that the application of the FAREAST model to conditions over more than 2000 locations across Russia, under a sensitivity analysis using temperature and precipitation conditions that are below those indicated in the recent IPCC report,22 produces significant changes in the character of the forests. There is also a change in forest cover over what appears to be a region in which cover conditions (and associated albedo changes) have disproportional
leverage on global climate. We feel that this sensitivity warrants exploration and also is important to consider when designing protocols to limit greenhouse gas emissions. Certainly, over a significant region in Russia, the growing of forests (and the associated storage of carbon dioxide from the atmosphere) has other associated changes that can cancel the intended reduction of planetary warming. This is true in the case of increased evergreen forests in this region, which results in a further increase in global temperature. As a whole, the Russian boreal forest also seems to be responsive to what superficially might seem to be small climatic shifts (2°C warming and 10% positive or negative precipitation shifts).
[Figure 2: maps of change in biomass; legend classes: -217 to -150, -150 to -100, -100 to -50, -50 to -0.01, 0, 0.01 to 50, 50 to 100 tC ha-1]
Fig. 2. (a) Difference in magnitude of total forest biomass (tC ha-1) for successional age 200 years comparing a climate scenario with +10% precipitation to a baseline scenario with no change in climate. Decreasing biomass is shown in black and increasing biomass is shown in white.23 Overall, the pattern is towards increasing biomass. (b) Difference for the -10% precipitation case compared to baseline. The pattern with precipitation loss is decreasing biomass. (c) Difference for the 2°C increase case compared to baseline. General decreasing biomass in response to increasing temperature is seen at western and southwestern sites, but no consistent regional response with warming.
REFERENCES
1. United Nations Framework Convention on Climate Change (1997) Kyoto Protocol to the United Nations Framework Convention on Climate Change. Secretary-General of the United Nations, New York.
2. Ozone Secretariat, United Nations Environment Programme (2006) Handbook for the Montreal Protocol on Substances that Deplete the Ozone Layer. Seventh Edition. Secretariat of The Vienna Convention for the Protection of the Ozone Layer and The Montreal Protocol on Substances that Deplete the Ozone Layer, United Nations Environment Programme, PO Box 30552, Nairobi, Kenya.
3. Sarmiento, J.L. and N. Gruber (2006) Ocean Biogeochemical Dynamics. Princeton University Press, Princeton, NJ. 503 pp.
4. Keeling, C.D. and T.P. Whorf (2005) Atmospheric CO2 records from sites in the SIO air sampling network. In: Trends: A Compendium of Data on Global Change. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, TN. http://cdiac.esd.ornl.gov/trends/co2/sio-keel-flask/sio-keel-flask.html.
5. Forster, P., V. Ramaswamy, P. Artaxo, T. Berntsen, R. Betts, D.W. Fahey, J. Haywood, J. Lean, D.C. Lowe, G. Myhre, J. Nganga, R. Prinn, G. Raga, M. Schulz and R. Van Dorland (2007) Changes in Atmospheric Constituents and in Radiative Forcing. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
6. Sarmiento, J.L. and N. Gruber (2006) Ocean Biogeochemical Dynamics. Princeton University Press, Princeton, NJ. 503 pp.
7. Watt, A.S. (1947) "Pattern and process in the plant community." J. Ecol. 35:1-22.
8. Shugart, H.H. (1998) Terrestrial Ecosystems in Changing Environments. Cambridge University Press, Cambridge. 537 pp.
9. Whitmore, T.C. (1982) On pattern and process in forests (pp. 45-59). In: E.I. Newman (ed.), The Plant Community as a Working Mechanism. Special Publ. No. 1, British Ecological Society. Blackwell Scientific Publ., Oxford.
10. Jones, E.W. (1945) "The structure and reproduction of the virgin forests of the north temperate zone." New Phytologist 44:130-148; Rackham, O. (1992) Mixtures, mosaics and clones: The distribution of trees within European woods and forests (pp. 1-20). In: M.G.R. Cannell, D.C. Malcolm and P.A. Robertson (eds.), The Ecology of Mixed-Species Stands of Trees. Blackwell Scientific Publications, Oxford.
11. Shugart, H.H. and D.C. West (1980) "Forest succession models." BioScience 30:308-313.
12. Bonan, G.B., D. Pollard and S.L. Thompson (1992) "Effects of boreal forest vegetation on global climate." Nature 359:716-718.
13. Betts, R.A. (2000) "Offset of the potential carbon sink from boreal forestation by decreases in surface albedo." Nature 408:187-190.
14. Yan, X. and H.H. Shugart (2005) "A forest gap model to simulate dynamics and patterns of Eastern Eurasian forests." Journal of Biogeography 32:1641-1658.
15. Ibid.
16. Betts, 2000; Betts, A.K. and J.H. Ball (1997) "Albedo over the boreal forest." Journal of Geophysical Research-Atmospheres 102:28901-28909.
17. Zhang, N., H.H. Shugart and X. Yan (2009) "Simulating the effects of climate changes on Eastern Eurasia forests." Climatic Change: DOI 10.1007/s10584-009-9568-4.
18. IPCC (2001) Climate Change 2001: The Scientific Basis. Cambridge University Press, New York.
19. Shuman, J.K. and H.H. Shugart (2009) "Evaluating the sensitivity of Eurasian forest biomass to climate change using a dynamic vegetation model." Environmental Research Letters (in press).
20. Zhang et al., 2009.
21. Kharuk, V., K. Ranson and M. Dvinskaya (2007) "Evidence of evergreen conifer invasion into larch dominated forests during recent decades in central Siberia." Eurasian Journal of Forest Research 10:163-171; Kharuk, V.I., K.J. Ranson, T.I. Sergey and M.L. Dvinskaya (2009) "Response of Pinus sibirica and Larix sibirica to climate change in southern Siberian alpine forest-tundra ecotone." Scandinavian Journal of Forest Research 24:130-139.
22. Solomon, S., D. Qin, M. Manning, R.B. Alley, T. Berntsen, N.L. Bindoff, Z. Chen, A. Chidthaisong, J.M. Gregory, G.C. Hegerl, M. Heimann, B. Hewitson, B.J. Hoskins, F. Joos, J. Jouzel, V. Kattsov, U. Lohmann, T. Matsuno, M. Molina, N. Nicholls, J. Overpeck, G. Raga, V. Ramaswamy, J. Ren, M. Rusticucci, R. Somerville, T.F. Stocker, P. Whetton, R.A. Wood and D. Wratt (2007) Technical Summary (pages 19-91). In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)). Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
23. Shuman and Shugart, 2009.
24. Stolbovoi, V. and I. McCallum (eds.) (2002) CD-ROM Land Resources of Russia. International Institute for Applied Systems Analysis and the Russian Academy of Science, Laxenburg, Austria.
FOREST POLICIES, CARBON SEQUESTRATION AND BIODIVERSITY PROTECTION
JAN SZYSZKO
University of Warsaw, Warsaw, Republic of Poland
INTRODUCTION
Three basic problems occur in central European forestry which are very important from the economic and sustainable development point of view. These are: the disappearance of species (populations) over extensive areas; the expansion of species, i.e., an increasing number of forests predisposed to mass outbreaks (high fluctuation) of organisms noxious to tree stands, causing the necessity of using increasing amounts of chemicals; and the decreasing production of wood. These three problems seem to be connected with each other (Szyszko et al., in press). Our questions are: How should one counteract the extinction of species (populations), would it be rational to do so and concerning what species, how can mass outbreaks of organisms noxious to forest stands be controlled, how can high production of the stands be controlled, and what forest policy should be adopted for the solution of these problems? Recognition of these problems and answers to our questions require a short historical analysis.
BIODIVERSITY WITHOUT MAN (PART I)
Let us carry out the analysis on the example of Europe of the temperate zone. Let us think what would happen if no man and his economic activity existed in this area of Europe. Let us analyze this problem from the latest history perspective, i.e., the last 16 thousand years. It was 16 thousand years ago when the last glaciation began to recede and the soils appearing under the ice were inhabited by living organisms: plants, fungi and animals. Hence the succession process began, and the final stage of that process in the territory of the lowlands of Central Europe would consist of forest plant communities with dominating deciduous trees such as the beech, oak and hornbeam. Such communities are currently extremely rare in this geographic zone and can only be encountered in small fragments of economic forests and in the majority of Polish national parks. They have a great quantity of collected organic matter. It is here that over 300 tons of accumulated carbon in the dead and living organic matter can be found on one hectare, in the soil and on its surface. It is here that we can find thousands of plant, spore and animal species characteristic for old succession stages, also on one hectare. Thanks to the richness of species and its specificity, these communities protect "capital" resources, i.e., the accumulated organic matter (Van Cleve et al. 1993; Viereck et al. 1993). That matter is functional, i.e., used by various organisms, but not wasted, i.e., not given away uselessly to other environmental systems. Thanks to that fact, natural forests are rich in organic matter and produce good air and water, also for human beings. This is why open waters (lakes, rivers) in the impact area of natural forests show the characteristic biodiversity, with species for which clear and well-oxygenated water is necessary to live. These waters are where we can fish for trout, bull trout and salmon (Szyszko, 2002).
BIODIVERSITY WITH MAN (PART I)
However, there exists man and his economic activity, which surely had an impact on the outlook and condition of environmental resources. It seems obvious to each objective observer that the key related areas were: agriculture, industry, infrastructure and water melioration. Agriculture dominated in the territories of the states of Central Europe, with fauna and flora much different from those of natural forests and mostly foreign to that geographic region. Instead of the aurochs, lynxes, beavers and wolves occurring in large areas of natural forests, we have two dominating species relating to potato and rape cultivation, i.e., the Colorado beetle and the pollen beetle. Agriculture also entails the demand for artificial fertilizers and a wide range of plant protection chemicals, which nearly in whole penetrate, directly or indirectly, groundwater and water reservoirs in a relatively short time. The more effective the agriculture, the higher is the consumption of such agents, and hence their more intense flow to water. Agricultural development entails industry development that directly pollutes the atmosphere. In turn, industry development entails infrastructure development, including river regulation, construction of highways, pipelines and, of course, cities and settlements. Such activity not only forces many native species out of the modified habitats but also permanently isolates many of their groups. Such isolation is the key factor causing the extinction of groups of individual species. The less numerous a group of isolated specimens, the more complete the genetic information exchange among the specimens and the higher the likelihood of that group's extinction. It is worth stressing here that each specimen or group of specimens belonging to individual species requires a properly modeled living space. Its occurrence area needs to have a place for reproduction, a place of rest and a place for feeding. The size of such areas varies depending on the species. Wolves and lynxes require different areas (Jędrzejewska, Jędrzejewski, 2001) than a couple of buzzards (Naumow, 1961) or a Carabus beetle living in the forest litter (Grum, 1962 and 1971; Rijnsdorp, 1980). Depriving each of these species of the possibility to use a place of reproduction, rest or feeding has to result in their extinction. There are also species that, due to their biology, have to migrate a lot. The white stork, which feeds on invertebrates and small vertebrates, has to migrate for the wintertime from its breeding grounds in Poland to wintering grounds in Central and South Africa. This is where it finds an appropriate feeding base when the activity of small animals decreases in Poland due to the winter. When migrating, it also has to feed and rest, for which it needs appropriate places at such a distance from each other that the stork can cover it after a rest and feeding. The destruction of such places, enforcing longer travel (flights), has to entail the disappearance of the species. Hence, the infrastructure can have an extremely material influence on the occurrence and disappearance of species through the isolation of groups of species or the disorganization of their living space. Water meliorations of the past usually involved the drainage and permanent lowering of the groundwater level, which clearly promoted the modification of the content of existing species of plants and animals.
They also had a negative impact on the functioning of forests, in particular, those that are dependent on high groundwater levels. As we talk about the forests, it is worth mentioning that the forestry itself has had an influence on the condition of the natural environment. We should be aware that even though natural forests constitute a great collection of the efficiently functioning organic matter, they have low reserves of the
high-quality wood as a raw material. The majority of trees in a natural forest are too young or too old to be of any use other than as a fuel. With the decreasing areas of natural forests and the growing demand for high-quality wood, new wood production was based on complete felling and the planting of single-species forest stands, mainly pine and spruce ones. Initial effects were extremely promising. In natural forests, it was possible to acquire a few dozen m3 of the raw material from 1 ha, while a few hundred m3 of an excellent raw material were obtained in single-species forest stands after 100 years of cultivation. However, the euphoria ended relatively quickly. It turned out that the next planting of trees after the removal of the forest stand planted by man provided no such good accrual of wood mass within 100 years (Figure 1). The emerging forest stands also turned out to be hardly resistant to pathogenic organisms. More and more chemicals were being used to preserve these forest stands. Hence, the result included not only a drop in wood mass production but also, due to the reduction of the reserves of organic substances (carbon compounds) (Figure 1), the disappearance of many species characteristic for old stages of the succession.
Fig. 1:
Forest condition from the historical point of view at the same place, in comparison with hypothetical resources of high-quality wood and carbon content (from Szyszko, 2002b).
The role of forests in water and air protection was reduced. However, it is worth mentioning here that it was the foresters who have been criticizing the adopted forest management methods for nearly 200 years, predicting negative consequences. However, they were powerless when faced with the need to satisfy man's ad hoc needs relating to the economic development. Summing it up, we can say that the condition of the natural environment, as seen from the perspective of the economic development history, is correlated with the condition of the economy. Better developed economies entail more modifications to the natural environment. Due to our economic activity, we
replaced natural forest systems, with their characteristic fauna and flora, self-dependent, very economical in their management, providing good water and good air, with systems unable to function without man (agriculture), systems that require support (forestry, in part), which also have their characteristic but different fauna and flora, are wasteful, and also provide water and air but of such a quality that they become dangerous for man. Hence, we can say that the more inconsiderate and brutal the economic activity, the fewer native species of plants, fungi and animals characteristic for the natural forests originating from succession stages. Consequently, more species foreign to these forests, correlated with the intense economic activity of man, show a tendency to mass occurrences such as the mad cow disease, foot-and-mouth disease or avian flu.
BIODIVERSITY WITHOUT MAN (PART II)
Having read such a harsh statement, a careful reader would ask what would happen had there been no man and his economic activity, and where such species characteristic for our country as the white stork, nightjar, skylark (Kruszewicz, 2007) or a full range of butterflies (Szyszko, 2007) and bumblebees (Skrok, 2003) would occur in that case. These species surely do not occur in natural forests and cannot be found in the shadowy strict forest reserves of the Białowieski National Park. However, they occur in the fields and meadows of all parts of Poland. Fields and meadows also see the hunting peregrine falcons and lesser spotted eagle (Aquila pomarina) that, in turn, nest in the strict reserves. The answer is simple and can be found when studying the latest history of the Earth. It is here, without man, that succession processes were from time to time checked by such "ecological disasters" as fires, windfalls or great floods. These disasters were destroying old stages of succession containing much carbon, with a high value of MIB (Mean Individual Biomass of Carabids; Szyszko, 1990; Szyszko et al. 2000), thus creating room for the species characteristic for early succession stages that require a low content of carbon and a low value of MIB (Szyszko, 1990; Szyszko et al. 2000) (Figure 2). It is there that bird species such as the nightjar (Caprimulgus europaeus) and woodlark (Lullula arborea), the northern dune tiger beetle (Cicindela hybrida) and, with the appearing pine wildings, the sticky bun (Suillus luteus) occurred (Figure 2). The created open areas were also an excellent place for landscape species, i.e., those which need to use varied succession stages to survive (Szyszko, 2002a; Skrok, 2003), e.g., breeding in places at advanced stages of the succession (high carbon content) and hunting in open areas (early succession stages with low carbon content). Bird species such as the buzzard (Buteo buteo) or the majority of Falconids are typical examples of such species. They nest on old trees in forests and hunt where the visibility is good, i.e., in systems at early succession stages, with low carbon content. "Ecological catastrophes" also created the opportunities for a reconstruction of destroyed environmental resources, i.e., succession changes relating to a change of occurring species and the enrichment of environmental systems in carbon (Figures 3, 4).
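For readers unfamiliar with the index, the short Python sketch below computes MIB as we understand it from the cited work, i.e., the mean individual biomass of the carabid beetles caught in a sample; the species counts and individual masses used here are hypothetical illustrations, not field data.

def mean_individual_biomass(catch):
    """catch maps species name -> (number of individuals, mean individual mass in mg)."""
    total_mass = sum(n * mass for n, mass in catch.values())
    total_individuals = sum(n for n, _ in catch.values())
    return total_mass / total_individuals

# Hypothetical samples: many small carabids on an early succession stage,
# a few large Carabus on an old stage, giving a low and a high MIB respectively.
early_stage = {"Calathus erratus": (40, 30.0), "Harpalus rufitarsis": (25, 45.0)}
old_stage = {"Carabus coriaceus": (10, 650.0), "Carabus hortensis": (15, 250.0)}
print(round(mean_individual_biomass(early_stage), 1), "mg",
      round(mean_individual_biomass(old_stage), 1), "mg")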
[Figure 2]
Fig. 2: Scheme of the succession changes of the Carabidae fauna after the destruction by fire of a natural forest representing old succession stages with high carbon content and high MIB value: from an MIB of about 50 mg in a plantation with Cicindela hybrida and the sticky bun (Suillus luteus) up to an MIB of about 300 mg in a timber stand with Carabus intricatus and the dotted stem bolete (Boletus erythropus). Broken line: number of Carabid species. (From Szyszko, 2007.) Full explanation in the text.
[Figure 3: scatter plot of litter thickness (cm) against age of stand, with fitted regression (p = 0.011)]
Fig. 3: Relationship between the age of the stand and the thickness of the litter in cm (from Szyszko et al. 2003).
[Figure 4]
Fig. 4: Relationship between the thickness of the litter in cm and the weight of carbon in grams per 1 m² (from Szyszko et al. 2003).
BIODIVERSITY WITH MAN (PART II)
It turns out that man can and has to play a similar role. There are two reasons for that. Firstly, "ecological catastrophes" are currently mainly an anthropogenic factor (caused by man) and occur mainly in those places where they destroy the effects of man's economic activities and pose a danger to human safety. Hence, man needs to try to eliminate them. Secondly, the lack of "ecological catastrophes" entails no opportunity for species of the early succession stages and landscape species to occur. It is the "ecological catastrophes" destroying the carbon reserves collected at advanced stages of the succession in forests that made it possible for the sticky bun (Suillus luteus), woodlark (Lullula arborea) and nightjar (Caprimulgus europaeus) to appear, and that also provided hunting possibilities for the majority of birds of prey. Protecting advanced succession stages from destruction due to "ecological catastrophes", man himself has to replace the forces of nature and play their "destructive role". Let us discuss the problem on the example of forests. If it were not for the rough interference with succession processes due to clear-cuts (Figure 5) and the reduction of the carbon content due to clear-cuts down to a few dozen tons (Figure 6), forests would not contain the nightjar (Caprimulgus europaeus) or the woodlark (Lullula arborea), and the majority of birds of prey nesting on old trees would have had no place to hunt. It would not be possible for the MIB figure to attain the level of 50 mg; hence, there would be no species of such Carabid beetles as Carabus nitens, Bembidion nigricorne, Pterostichus lepidus, Calathus erratus, Masoreus wetterhali and Harpalus rufitarsis. Finally, the mass appearance of fungi species such as sticky buns (Suillus luteus), sulfur tufts (Hypholoma fasciculare) or, slightly later, chanterelles (Cantharellus cibarius) and porcini (Boletus edulis) would not be possible (Figure 6).
Fig. 5:
Clear-cut forest.
Fig. 6:
The occurrence of characteristic species of birds, Carabid beetles and fungi, as well as the structure of the carbon content in tons per 1 ha in the forest stand, litter and mineral soil down to 10 cm depth, in a young pine plantation with an MIB of Carabidae of about 50 mg, created after the clear-cut of a timber pine stand (more than 100 years old), compared with the annual accumulation of carbon in that young stand and the value of the whole carbon content (stand + litter + mineral soil) expressed as carbon dioxide at euro prices according to the European emission trading system on 15.08.2008 (2071.2 €) (from Szyszko, 2007). Full explanation in the text.
* Price of 1 t of carbon dioxide on 15.08.2008: 23.15 euro (www.pointcarbon.com)

Fig. 7:
The occurrence of characteristic species of birds, Carabid beetles and fungi, as well as the structure of the carbon content in tons per 1 ha in the forest stand, litter and mineral soil down to 10 cm depth, in a ca. sixty-year-old pine stand with an MIB figure of about 250 mg, compared with the annual accumulation of carbon in that stand and the value of the whole carbon content (stand + litter + mineral soil) expressed as carbon dioxide at euro prices according to the European emission trading system on 15.08.2008 (10525.6 €) (from Szyszko, 2007). Full explanation in the text.
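The euro figures attached to Figures 6 and 7 appear to follow from a straightforward conversion: the carbon stock (t C per ha) is multiplied by 44/12 to express it as carbon dioxide and then by the allowance price quoted above (23.15 € per t CO2 on 15.08.2008). The sketch below is an illustrative reconstruction, not taken from the source; using 124 t C per ha, the timber-stand value mentioned with Figure 9, it reproduces the 10,525.6 € shown with Figure 7 almost exactly.

```python
# Hedged reconstruction of the euro values attached to Figs. 6 and 7:
# carbon (t C/ha) -> carbon dioxide (x 44/12) -> euro (x allowance price).
# The price is the one quoted in the text for 15.08.2008.

CO2_PER_C = 44.0 / 12.0           # t CO2 per t C (molar-mass ratio)
PRICE_EUR_PER_T_CO2 = 23.15       # quoted price on 15.08.2008

def carbon_stock_value_eur(tons_carbon_per_ha):
    return tons_carbon_per_ha * CO2_PER_C * PRICE_EUR_PER_T_CO2

print(round(carbon_stock_value_eur(124), 1))  # -> 10525.5, vs. 10525.6 EUR in Fig. 7
# Back-calculating, the 2071.2 EUR shown with Fig. 6 would correspond to
# roughly 24-25 t C per ha in the young plantation (an inference, not a
# figure stated in the text).
```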
The destruction of the natural environment is also a chance for the regeneration of those environmental resources with time, i.e., for an increase in the carbon content and a modification of the occurring species, which is reflected in an increase in the MIB value as the synthetic measure relating to epigeic Carabids (Szyszko, 1990; Szyszko et al. 2000). In a ca. sixty-year-old pine stand (Figure 7), the carbon content and the MIB value have increased in comparison with a young pine stand, and the most numerous species of birds, Carabids and fungi observed there are completely different. Birds characteristic for that stage of the succession include the chaffinch (Fringilla coelebs), great tit (Parus major) and coal tit (Parus ater); characteristic Carabids include Carabus arcensis, C. nemoralis and Pterostichus niger; and, as far as fungi are concerned, the sickener (Russula emetica), brown roll-rim (Paxillus involutus), false morel (Gyromitra esculenta) and the cauliflower mushroom (Sparassis crispa).
Fig. 8:
The occurrence of characteristic species of birds, Carabid beetles and fungi, as well as the structure of the carbon content in tons per 1 ha in the forest stand, litter and mineral soil down to 10 cm depth, in a ca. eighty-year-old beech stand created from the undergrowth after the clear-cut of the pine stand above it, with an MIB figure of about 350 mg, compared with the annual accumulation of carbon in that stand and the value of the whole carbon content (stand + litter + mineral soil) expressed as carbon dioxide at euro prices according to the European emission trading system on 15.08.2008 (from Szyszko, 2007). Full explanation in the text.
The planting of beech as an undergrowth in ca. sixty-year-old pine stands, and then the removal of the pine stand after 40 years with only the beeches left, resulted in the creation of a beech stand about eighty years old (Rylke and Szyszko, 2002; Figure 8). When compared with a sixty-year-old pine stand, a higher carbon content per 1 hectare can be observed in that beech stand, and the MIB value exceeds 350 mg. Of course, the characteristic species of birds, Carabids and fungi occurring there are also different than in the succession stages (forest stands) presented previously. Characteristic birds are the black woodpecker (Dryocopus martius), stock dove (Columba oenas) and chaffinch (Fringilla coelebs); characteristic Carabids: Carabus coriaceus, C. hortensis and C. intricatus; and characteristic fungi: the dotted stem bolete (Boletus erythropus), fleecy milk-cap (Lactarius vellereus) and the death cap (Amanita phalloides). The data presented above suggest that a greater differentiation of the carbon content in space within living environmental systems, i.e., a greater differentiation of succession stages measured with the MIB value, entails greater biodiversity (Figure 9) (Szyszko, 2002).
Fig. 9:
Heterogeneous landscape. Top left: a natural forest with a carbon content of 325 tons per 1 ha, with Carabus coriaceus; top right: arable land with a carbon content of 20 tons per 1 ha, with Cicindela campestris. In the middle: a peat bog with a very high content of carbon per 1 ha, with Panagaeus bipustulatus. Bottom left: a clear-cut with a carbon content of ca. 90 tons, with Harpalus rufitarsis; bottom right: a timber stand with a carbon content of 124 tons, with Carabus nemoralis (from Szyszko, 2007). Full explanation in the text.
Hence, the MIB figure can be adopted as a measure of assessment of the landscape value (Rylke and Szyszko, 2001). However, the evaluation can be complete only if we also take into account the occurrence of those species that use varied succession stages defined by Szyszko (2002a) as landscape species (Figure 10).
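Purely as an illustration of this idea, and emphatically not as the evaluation procedure of Rylke and Szyszko (2001), the hypothetical sketch below scores a landscape higher when its patches span a wide range of MIB values (many succession stages side by side) and when landscape species are present; all weights and thresholds are invented for the example.

```python
# Hypothetical illustration of the idea in the text, NOT the procedure of
# Rylke and Szyszko (2001): a landscape is valued more highly when its
# patches span a wide range of succession stages (wide spread of MIB values)
# and when landscape species, which need several stages side by side, occur.

def landscape_score(patch_mib_values_mg, landscape_species_present):
    spread = max(patch_mib_values_mg) - min(patch_mib_values_mg)
    stage_diversity = min(spread / 300.0, 1.0)   # ~300 mg spans plantation -> old stand
    species_bonus = 0.5 if landscape_species_present else 0.0
    return stage_diversity + species_bonus       # 0 .. 1.5, higher = richer mosaic

# a mosaic of clear-cut, mid-aged pine and old beech patches with landscape
# species present scores higher than a uniform mid-aged forest without them
print(landscape_score([50, 250, 350], True))    # -> 1.5
print(landscape_score([240, 250, 260], False))  # -> ~0.07
```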
Fig. 10: Landscape species in a functional ecological landscape. The lesser spotted eagle (Aquila pomarina), nesting on very old trees in natural and cultivated forests and hunting in abandoned arable land; the crane (Grus grus), nesting in peat bogs and hunting in abandoned land; the kestrel (Falco tinnunculus), nesting on old trees in natural and cultivated forests and hunting in clear fellings; the black stork (Ciconia nigra), nesting on old trees in natural and cultivated forests and hunting in peat bogs (from Szyszko, 2007). Full explanation in the text.

More succession stages and a higher diversity of the carbon content in environmental space entail higher biodiversity (Figure 2). On the one hand, various succession stages existing simultaneously side by side guarantee great richness of species and, on the other hand, the possibility of occurrence of landscape species, for whom living space needs to include the close proximity of various stages of the succession, i.e., meadows, fields, young forest plantations, natural forests and peat bogs (Figure 3). Species that require such areas are, for example, the already mentioned lesser spotted eagle (Aquila pomarina) and the crane (Grus grus) (Szyszko, 2005). The former prefers nesting on old trees in natural forests, finding good hunting grounds on meadows and arable fields. In turn, the latter selects wet open areas, preferably peat bogs, for nesting and likes to feed on arable fields and meadows. Afforestation or the natural development of the succession unavoidably causes the disappearance of species characteristic for meadows and fields as well as the disappearance of landscape species. A similar effect could be obtained for the lesser spotted eagle (Aquila pomarina) by felling the old trees in a natural forest and, for the crane (Grus grus), by draining wet areas. Briefly speaking, one can cause the disappearance of species characteristic for individual stages of the succession, as well as of landscape species, through the strict protection of
succession change processes. For example, by protecting forests from fires and ceasing all economic activities, we would with time cause the disappearance of all species characteristic for open areas and of most landscape species. Hence, the economic activity of man, carried out on the basis of forest policy, not only can but also has to guarantee biodiversity protection.

FOREST POLICY, BIODIVERSITY AND MAN

Forest policy, however, needs to be conducted in line with the sustained development concept, with native biodiversity, i.e., the complete set of native species of plants, fungi and animals, as the measure of such development. Where a full range of native species exists, sustained development only has to correspond with the control of their occurrence, while in those regions where we have caused the extinction of native species through our economic activity, sustained development has to be measured by the return of these species (Szyszko, 2008). The UN Climate Change Convention (1992) with its appendix, the Kyoto Protocol (1997), and the UN Convention on Biological Diversity (1992) provide an excellent instrument and opportunity in that area. Such an opportunity comes from the absorption of atmospheric carbon dioxide through the afforestation of degraded arable lands and through sustained forest management focused on increasing that absorption for wood production and biodiversity protection (Szyszko, 2004). Polish rural areas contain over 2 million hectares of poor soils that do not guarantee profitable farming. According to experts, each hectare of such soils is able to absorb 10-14 tons of CO2 annually for 100 years after being afforested. One ton of absorbed carbon dioxide represents a specific amount of money that can currently be defined according to the prices of the European emission trading system, where the price for one ton can amount to a few dozen euro. It is estimated that a forest cultivation of 10 ha could significantly support one family and guarantee a subsistence that not only gives a chance to survive but also to develop further. Hence, forest planting on poor arable lands would create jobs, thus reducing unemployment (Kubacz, 2008; Wójcik, 2008), would protect and further improve the quality of our environmental resources, and would at the same time multiply renewable energy sources in the form of wood (Stasiak, 2008). The UN Climate Change Convention (1992) and the UN Convention on Biological Diversity (1992) provide an opportunity to implement the sustained development concept, entailing the rational use of environmental resources for human needs by way of an appropriate management of carbon in environmental space (landscape), where forest policy has to play the main role (Szyszko et al., in print).
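A rough back-of-the-envelope check of this argument, using only figures quoted in the text (10-14 t CO2 absorbed per hectare per year, and an allowance price of the order of the 23.15 € per ton quoted with Figure 7), suggests an annual carbon value of a few thousand euro for a 10-hectare plantation, before any revenue from wood:

```python
# Rough order-of-magnitude check using only figures quoted in the text:
# 10-14 t CO2 absorbed per ha per year on afforested poor soils, an example
# allowance price of 23.15 EUR per t CO2 (15.08.2008), and a 10 ha holding.

HECTARES = 10
ABSORPTION_T_CO2_PER_HA_YR = (10, 14)   # range given in the text
PRICE_EUR_PER_T_CO2 = 23.15             # example price from the text

low, high = (HECTARES * a * PRICE_EUR_PER_T_CO2 for a in ABSORPTION_T_CO2_PER_HA_YR)
print(f"annual value of absorbed CO2: {low:.0f}-{high:.0f} EUR")
# -> roughly 2,300-3,200 EUR per year at that price; higher allowance prices
#    would raise the figure proportionally, and wood production comes on top.
```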
REFERENCES

1. Grum, L. (1962) "Horizontal distribution of larvae and imagines of some species of Carabidae." Ekol. Pol. 10: 73-84.
2. Grum, L. (1971) "Spatial differentiation of the Carabus L. (Carabidae, Coleoptera) mobility." Ekol. Pol. 19: 1-34.
3. Jędrzejewska, B., Jędrzejewski, W. (2001) Ekologia zwierząt drapieżnych Puszczy Białowieskiej. Warsaw, PWN Press.
4. Kruszewicz, A.G. (2007) Ptaki Polski. Encyklopedia ilustrowana. Multico Oficyna Wydawnicza Press, Warsaw. 312 pp.
5. Kubacz, B. (2008) Konwencja Klimatyczna, Protokół z Kioto i lasy, szansą zrównoważonego rozwoju Gminy Czerwonka. Typescript of the master thesis. SGGW - Laboratory of Evaluation and Assessment of Natural Resources, Warsaw Agricultural University. 82 pp.
6. Naumow, N. (1961) Ekologia zwierząt. PWRiL.
7. Rijnsdorp, A.D. (1980) "Pattern of movement and dispersal from a Dutch forest of Carabus problematicus Hbst. (Coleoptera, Carabidae)." Oecologia 45: 274-281.
8. Rylke, J., Szyszko, J. (2001) "Evaluation of landscape value." Ann. Warsaw Agricult. Univ.-SGGW, Horticult. Landsc. Architect. 22: 89-100.
9. Rylke, J., Szyszko, J. (2002) Didactic trails for field classes on evaluation and assessment of natural resources. Rylke J. and Szyszko J. (eds.). Warsaw Agricultural University Press. 166 pp.
10. Skrok, A. (2003) Occurrence of some selected species of bumblebees (Bombus Latr.) in the research object "Krzywda". In: Szyszko J., Abs M. (eds.) Landscape architecture and spatial planning as the basic element in the protection of native species - modeling of succession stages. Warsaw Agricultural University Press: 116-124.
11. Stasiak, P. (2008) Program zrównoważonego rozwoju gminy Tuczno w oparciu o wykorzystanie odnawialnych źródeł energii. Typescript of the master thesis. SGGW - Laboratory of Evaluation and Assessment of Natural Resources, Warsaw Agricultural University. 96 pp.
12. Szyszko, J. (1990) Planning of prophylaxis in threatened pine forest biocenoses based on an analysis of the fauna of epigeic Carabidae. Warsaw Agricultural University Press, Warsaw. 96 pp.
13. Szyszko, J. (2002a) Determinants of the occurrence of chosen animal species. In: Szyszko J. (ed.) Landscape architecture as the basic element in the protection of native species. Fundacja Rozwój SGGW Press: 28-37.
14. Szyszko, J. (2002b) Zarys stanu środowiska naturalnego (przyczyny, perspektywy, szanse i trudności). W: Ocena i Wycena Zasobów Przyrodniczych. Wydawnictwo SGGW. 338 pp.
15. Szyszko, J., Platek, K., Dyjak, R., Michalski, A., Salek, P. (2003) Określenie modelowego projektu w dziedzinie wzrostu pochłaniania gazów cieplarnianych przez zalesienie nizinnych terenów nieleśnych na obszarze kraju. SGGW - Laboratory of Evaluation and Assessment of Natural Resources, Warsaw Agricultural University. Manuscript. 48 pp.
16. Szyszko, J. (2004) Foundations of Poland's cultural landscape protection - conservation policy. In: M. Dieterich, J. van der Straaten (eds.) Cultural landscapes and land use. Kluwer Academic Publishers, The Netherlands: 95-110.
17. Szyszko, J. (2007) Combating climate change: Land use and biodiversity - Poland's point of view. In: R. Ragaini (ed.) International Seminar on Nuclear War and Planetary Emergencies, 38th Session: 5-12.
18. Szyszko, K. (2003) Characteristic of occurrence of diurnal butterflies (Rhopalocera) on the research object "Krzywda". In: Szyszko J., Abs M. (eds.) Landscape architecture and spatial planning as the basic element in the protection of native species - modeling of succession stages. Warsaw Agricultural University Press: 125-132.
19. Szyszko, J., Platek, K., Dyjak, R., Michalski, A., Salek, P. (2003) Określenie modelowego projektu w dziedzinie pochłaniania gazów cieplarnianych przez zalesienie nizinnych terenów nieleśnych na obszarze kraju. Typescript. Independent Studio for the Valuation and Estimation of Environmental Resources SGGW, Warszawa-Tuczno.
20. Szyszko, J., Schwerk, A. (in print) Zwierzęta miarą oceny i wyceny krajobrazu.
21. Szyszko, J., Schwerk, A., Dymitryszyn, I., Szyszko, K., Jojczyk, A. (in print) UN Climate Convention and the UN Convention on Biodiversity Protection as a chance of sustained development of non-urbanized field-and-forest landscapes.
22. Szyszko, J., Vermeulen, H.J.W., Klimaszewski, K., Abs, M., Schwerk, A. (2000) Mean Individual Biomass (MIB) of Carabidae as an indicator of the state of the environment. In: Brandmayr P., Lövei G., Zetto Brandmayr T., Casale A., Vigna Taglianti A. (eds.) Natural history and applied ecology of carabid beetles. Pensoft Publishers, Sofia, Moscow: 289-294.
23. The Kyoto Protocol to the Convention on Climate Change. Published by the Climate Change Secretariat, 1997: 1-34.
24. UNFCCC. United Nations Framework Convention on Climate Change. Published by UNEP/WMO, 1992: 1-29.
25. UNCBD. United Nations Convention on Biological Diversity (1992). Warszawa, Dz.U. 2002 nr 184 poz. 1532.
26. Van Cleve, K., Viereck, L.A., Marion, G.M. (1993) "Introduction and overview of a study dealing with the role of salt-affected soils in primary succession on the Tanana River floodplain, interior Alaska." Can. Journ. Forest Research 23.
27. Viereck, L.A., Dyrness, C.T., Foote, M.J. (1993) "An overview of the vegetation and soils of the floodplain ecosystem of the Tanana River, interior Alaska." Can. Journ. Forest Research 23.
28. Wójcik, R. (2008) Konwencja Klimatyczna i Protokół z Kioto - możliwości wykorzystania dla zrównoważonego rozwoju Gminy Tuczno. Typescript of the master thesis. SGGW - Laboratory of Evaluation and Assessment of Natural Resources, Warsaw Agricultural University. 95 pp.
29. Communication no 257 of the Laboratory of Evaluation and Assessment of Natural Resources, Warsaw Agricultural University, and the Association of Sustained Development of Poland.
AVOIDING DISASTER: BOOK PRESENTATION

HENNING WEGENER
Ambassador of Germany (ret.), Information Security Permanent Monitoring Panel, World Federation of Scientists, Madrid, Spain

WILLIAM BARLETTA
U.S. Particle Accelerator School, Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA

As the International Seminars have entered, indeed are far advanced into, their third decade of achievement, the need to fix their history and accomplishments for memory has been felt ever more acutely. Human memory is frail, but the passing of time, while it cannot be arrested, can at least be overcome by a deliberate attempt to remember. Many of you will recall that at the August Seminar two years ago, and then confirmed last year, the idea was tossed around to produce a written memoir of Erice history. The result of the effort that was thus set in motion is now before you. As you can see, two of the more ancient members of the Erice community, witnesses of their time and now almost venerable figures along with Professor Zichichi and a few others, have undertaken to assemble the materials and to package them by means of an appropriate analytical and historical introduction. The main content of the book, however, consists of testimonials from those who have lived all or part of the Erice experience, about 50 in all. This is the moment to thank all of those who have contributed by recalling their own experience. Perusing the book you will see that these personal recollections cover a great deal of ground. They are all different. It is even surprising to what extent they are different: everybody has seen his or her own Erice in his or her personal light, individually, individualistically, as scientists need to be. They surely do not present the case of the two pupils in school who, asked to write an essay on "My Dog", present identical texts with the explanation that they have described the same dog! We have respected the individuality of the authors, and have reprinted all texts as handed in, without any futile editorial attempts at censorship. Adding to the efforts of all, Prof. T.D. Lee, co-chairman of the International Seminars for many years, has provided a most apposite foreword for the book. As set out in the Introduction, the editors set themselves a triple goal: first, to provide, through a synthesis and through individual perceptions, a narrative on the goings-on and on the world-wide political and scientific fallout of the Seminars in an especially interesting and perilous age; then, to preserve personal recollections of the "Erice Spirit", and also to fix the memory of those great scientists who have put their indelible mark on the Erice history but have now disappeared; and, thirdly, to pay a debt of gratitude, a tribute to Professor Zichichi, to Nino, who, undoubtedly, is the hero of this book. Professor Antonino Zichichi, founder and chairman of the Seminars, has been the beacon and propelling force of the work accomplished year after year. No member of the Erice community has ever left a Seminar session without an intense feeling of gratitude to him, convinced that the relentless adherence to scientific rigor and, equally, the unique
atmosphere of common purpose and comradeship so characteristic of the Seminars are owed to him more than to anybody else. From the inception of the International Seminar series in 1981, the dominant themes have been the prevention of war and the perils of the nuclear age. The early sessions demonstrate this with particular force. While the potential for nuclear holocaust was pre-eminent in the early years, it did not completely dominate our attention, nor was the connection of nuclear technology with wider issues of human welfare ignored. The consequences of human activity on the global climate and nuclear energy to fuel global development were discussed then. Both issues persist to the present. Despite the focus of the International Seminars on the liberating potential of Science, the overwhelming impression of first-time participants, and even of many "old hands", is that offered by the intellectual giants that have occupied the centre of the stage at Erice. Scientists like Paul Dirac, Eugene Wigner, Piotr Kapitza, T.D. Lee, Kai Siegbahn, Edward Teller, John Eccles, Evgeny Velikhov, Richard Garwin, Richard Wilson and our host Antonino Zichichi drove the dynamics of the Seminars. They spoke frequently and passionately about a wide range of topics, even when, as Edward Teller explained to us, "I don't know. But I will tell you." Their example has inspired the rest of us to make the International Seminars on Nuclear War and Planetary Emergencies a transformative human experience. We thank the Secretary General, Dr. Claude Manoli, and the staffs of the International Seminars and the Ettore Majorana Centre. Without their effective and generous help, the Seminars would have been unthinkable through all these years. They have also provided invaluable assistance with the preparation of this book. While Nino leads us masterfully into yet another unforgettable experience, this year's Seminar, we must also be mindful that he will be celebrating, before long, a very important birthday. Nino, please accept, on our behalf and on behalf of all, this small tribute to you and your work also as a birthday present. It is offered with gratitude and affection. Our life would be so much poorer, immensely poorer, without you. May your good health, strength of character, creativeness, scientific rigor, and indomitable spirit be with you, and with us, for many more years.
SESSION 2 INFORMATION SECURITY
FOCUS: CYBER CONFLICTS AND CYBER STABILITY-FINDING A PATH TO CYBER PEACE
CYBER CONFLICT VS. CYBER STABILITY: FINDING A PATH TO CYBERPEACE

HENNING WEGENER
Ambassador of Germany (ret.), Information Security Permanent Monitoring Panel, World Federation of Scientists, Madrid, Spain

INTRODUCTORY REMARKS

Since the inception of our work on Information Security (the PMP was established in 2001) it has become ever clearer that the increasing introduction of digital technologies into every aspect of civilized life has led to a paradigm shift and has ushered in a new era of human endeavor. At the same time, our well-nigh total dependence on ICT confers vital importance upon the stability, security and reliability of digital systems and networks, upon confidence in their functioning and integrity, and upon the protection of privacy. These increasingly become prerequisites for the functioning of society as such. Information security thus needs to be ranked as an overarching societal challenge of global proportions, a planetary emergency. And year after year, while we work on the subject, the threats are growing, posing greater challenges to scientists, politicians and, indeed, all stakeholders in our digital world. Commensurate with the threat, information security has become more prominent in our work in Erice. Today, again, we are devoting a plenary session with outstanding speakers to the alarming perspectives of cyber insecurity. I am grateful to the WFS and to Professor Zichichi that we can so clearly focus on the issue. With this brief introduction, allow me to set the stage by ticking off those recent trends that have brought about yet another quality jump in the information emergency. With 1.6 billion computers on-line, billions of microprocessors employed in embedded systems, RFID, mobile devices, the ultra-miniaturization of digital circuits and the resulting ubiquity of new miniaturized computing elements leading to different and novel structures of processing configurations in digital nets, the steady progress towards an "Internet of Things" with miniature computers inserted in clothes or the frames of eyeglasses, the development of minute computers with self-organizing potential, able to communicate autonomously with other digital devices, and new human mind-machine communications (to name just some of the "next generation" trends), we are witnessing an explosive growth of digital actors and an exponential growth curve of interconnectivities, an all-pervasiveness that automatically spells a parallel increase in vulnerabilities. The phenomena of migration (migration of fixed-line telephony to mobile systems and to VoIP, migration of computing processes, software management and data storage from individual and business computers to huge server farms with petabyte capacity) and convergence (resulting in an undistinguishable mesh of mobile and fixed systems) add up to a huge integrated network structure with a universe of connectivities, and vulnerabilities, that defies quantification. It includes a myriad of important components that lie totally open to attack.
Momentous changes, thus, on the "supply side" of cyber insecurity. But even more momentous are those on the "demand" side. The romantic era of the individual, playful hacker is past. We have entered the epoch of mega-cyberfraud through huge cybercrime syndicates with technical supremacy, unlimited financial resources, and an implacable thirst for fraudulent money grabbing. Attack techniques have entered a new phase of sophistication and resilience, with breathtaking speed. Any potent buyer, hostile governments and terrorists included, can unscrupulously avail himself of these potentials. The balance of attack vs. defense is tilting. We will be told about the mischief and threat potential of this new class of invisible enemies during our session. The new key word of cyber insecurity is cyber conflict. We have crossed the threshold to an increasingly interdependent digital world in which the economy, critical infrastructures and national security can be attacked simultaneously and massively by data theft and data manipulation, DDoS, and logic bombs. The fragility of our societies becomes more evident than ever before. Not only the stability of the Internet but the stability of society itself is at stake. Cyberwar is a real threat. Trivializing it will demand a high price. This situation calls for a new level of strategic responses; they are the topic of today's session. Moving beyond wailing and deploring, and beyond remaining at the level of concerned analysis, we must move towards the establishment of a positive order of the digital world: towards the requirements of stability, and cyberpeace. The Erice Declaration which will be proposed today is designed to commit the scientists assembled at Erice, and through them the world at large, to work on dedicated responses. These are difficult tasks that need hard work, courage and imagination. And even more ambitious tasks await us: delegitimizing the use of cyber technology for offensive military operations and strategic planning to that effect; they cannot be omitted from our action agenda. Science as an instrument for peace has always been the underlying ethics of our gatherings. Once again, Erice can make a major contribution to a more peaceful world, to creating a "global culture of cybersecurity", by showing the way to cyber stability and cyberpeace.
ADVANCING THE GLOBAL CYBERSECURITY AGENDA AND PROMOTING CYBERSTABILITY GLOBALLY
DR. HAMADOUN I. TOURE
Secretary General, International Telecommunication Union, Geneva, Switzerland

OPENING ADDRESS

Professor Zichichi, distinguished colleagues, ladies and gentlemen, it is my great pleasure to be here with you in Erice today to discuss cybersecurity, a subject which is not just topical, but also very close to my heart, and a core part of the work we are doing at ITU. Information and communication technologies have become the keystone of modern society and are as essential to development and prosperity as conventional networks like transport, power and water. High-speed, always-on broadband access is an increasingly critical platform for business activity of all kinds, as well as for the delivery of services ranging from e-health, e-education and e-government to entertainment and interpersonal interaction. It is therefore a great irony, as Barack Obama has recently pointed out, that "the very technologies that empower us to create and to build also empower those who would disrupt and destroy." As we approach the end of the first decade of the 21st century, we increasingly rely on the Internet in every part of our lives: to shop, to do research, to bank, and to participate in the global society. Cyberspace is now very much part of our common reality, and so are the risks which come with it. Indeed, in confronting cybercrime we have all had to learn a whole new language: a language of malware and spyware, of viruses and Trojan horses, of phishing and botnets. It is estimated that cybercriminals stole up to a trillion dollars worth of intellectual property from businesses worldwide in 2008, and many millions of individuals have had their privacy violated, have suffered identity theft and have had their hard-earned savings stolen from them. Governments constantly face cyberattacks, and terrorists increasingly rely not just on their weapons, but on the power of cyberspace technologies like GPS and VoIP to sow destruction. And increasingly, women and children are being targeted online by traffickers and paedophiles. Ladies and gentlemen, given the scale of the threat, and the phenomenal harm that can be caused by even a single cyber attack, we cannot rely on ad hoc solutions or hope to survive by strengthening our defences only after attacks have occurred. No, we must work together to ensure a coordinated response. This was clearly recognized by the World Summit on the Information Society in 2005. As a result, ITU, as the facilitator of WSIS Action Line C5 on Building Confidence and Security in the use of ICTs, took the important step of launching the Global Cybersecurity Agenda, or GCA, in 2007.
Remarkably, given the scale and the global nature of the problem, the GCA is the first truly international strategy to counter cybercrime. Designed as a framework for cooperation and response, it focuses on building partnerships and effective collaboration between all relevant parties. One of ITU's greatest strengths is this ability to bring key decision-makers together, on an equitable footing, to share expertise and build consensus around critical issues such as these. And we are most privileged in our endeavours to have the support of global leaders including Nobel Peace Laureate Dr. Oscar Arias Sanchez, President of the Republic of Costa Rica, and President Blaise Compaore of Burkina Faso. We are also proud to have forged a strong and highly supportive relationship with IMPACT, the International Multilateral Partnership Against Cyber-Threats, which last year culminated in a Memorandum of Understanding that has seen IMPACT's headquarters in Cyberjaya, Kuala Lumpur, become the physical home of the GCA. The collaboration between the GCA and IMPACT is the world's first global public-private initiative against cyberthreats, and provides ITU's 191 Member States with expertise, facilities, information, and rapid access to resources which allows them to effectively address actual and potential cyberthreats. The strategic global alliance includes key industry players such as Microsoft and Symantec, as well as F-Secure, Kaspersky Lab and TrendMicro. It also includes world-renowned training institutions such as the EC-Council and the SANS Institute, and more than 20 universities globally have agreed to share their research skills and technical know-how. IMPACT's state-of-the-art Global Response Centre has been designed to serve as the world's foremost cyberthreat resource centre, providing a real-time aggregated early warning system that helps countries quickly identify cyberthreats, and offering expert guidance on effective counter-measures. It also provides governments with a unique electronic tool to enable authorized cyber-experts in different countries to pool resources and collaborate with each other remotely and securely, helping the global community respond immediately to cyberthreats. Distinguished colleagues, the first phase of physical deployment has already been launched in some 30 countries, with further deployment in another 20 countries this year, for a total of 50 countries by the end of 2009. [Briefing note to SG: the countries formally agreed are Andorra, Brazil, Bulgaria, Burkina Faso, Costa Rica, Cote D'Ivoire, Democratic Republic of Congo, Egypt, Gabon, Ghana, Israel, Indonesia, Iraq, Kenya, Malaysia, Mauritius, Morocco, Nepal, Nigeria, Philippines, Rwanda, Saudi Arabia, Serbia, Seychelles, Tunisia, Uganda, United Arab Emirates, and Zambia.] ITU is also facilitating the establishment of Computer Incident/Computer Emergency Response Teams (known as CIRT/CERT) which are linked to the IMPACT Global Response Centre, in order to provide all 191 ITU Member States with a truly global and interoperable cybersecurity capability. To promote capacity building, IMPACT also conducts training and skills development programmes delivered in collaboration with leading ICT companies and institutions. At the same time, the organization's Centre for Security Assurance and
Research is working with leading ICT experts to develop global best practice guidelines, creating international benchmarks and acting as an independent, internationally recognized, voluntary certification body for cybersecurity. Finally, under ITU leadership, IMPACT's Centre for Policy and International Cooperation is working with partners including governments, UN agencies, regional and international organizations and others to formulate new policies on cybersecurity and help promote the harmonization of national laws relating to cyberthreats and cybercrime. Complementing IMPACT's Malaysia-based facilities, ITU also hosts a 'virtual showcase' in Geneva, profiling the new early warning system, crisis management capabilities and real-time analysis of global cyberthreats. And I would like to encourage you all to come to Geneva in the first week of October to see this for yourselves, and to participate in the ITU Telecom World 2009 event, which is the defining event for the global ICT industry. This year we will be welcoming a number of heads of state and UN agencies as well as global industry leaders, and I am looking forward to seeing them address and debate vital issues such as the role of ICTs in the economic recovery and how the ICT industry will chart the way forward, as well, of course, as cybersecurity. Ladies and gentlemen, we are making great progress towards a globally coordinated approach to cybersecurity. But we also need to recognize the very real dangers being faced by children and young people online, who are often sent out into cyberspace alone and unprotected, simply because their guardians do not fully understand the risks. That's why ITU launched the Child Online Protection initiative, a multistakeholder coalition under the GCA framework, which was endorsed by Heads of State, Ministers and heads of international organizations from around the world, including Ban Ki-moon, the UN Secretary-General, at the High Level Segment of ITU Council 2008 last November. To throw the global spotlight on this issue, ITU Members chose 'Protecting Children in Cyberspace' as the theme of this year's World Telecommunication and Information Society Day, which marked the founding of ITU on 17 May 1865. At the WTISD Awards Ceremony and launch of a year-long Child Online Protection campaign with Interpol, I was pleased to be able to salute the three laureates: former FCC Commissioner Deborah Tate; President Lula of Brazil; and Rob Conway, CEO of the GSM Association. Distinguished guests, the Internet cannot continue to flourish as a facilitator of learning, as a platform for telemedicine, as a tool for more efficient and accountable government, or as a key driver of trade and commerce, or as a global communications channel, or as a vital research tool, if users lack faith in its security. So we need to be sure we have an environment where criminals cannot hide behind legal loopholes and regulatory inconsistencies. Nations with less well-developed ICT legislation should no longer find themselves host to villainous online activities. And all states, from the most prosperous to the most disadvantaged, need to have an effective shield with which to safeguard themselves. The Global Cybersecurity Agenda aims to create and maintain this environment, and I appreciate the wide support, encouragement and participation we have enjoyed so far.
But there is much still to be done, and we must continue to cooperate and to work together, against the common enemy and for the common good. Thank you.
BRIDGING THE GLOBAL GAPS IN CYBER SECURITY
DR. MOHD NOOR AMIN
International Multilateral Partnership Against Cyber Threats (IMPACT), Cyberjaya, Malaysia

WE LIVE IN A HIGHLY CONNECTED WORLD
KEY CHALLENGES
1. The more connected we are, the more vulnerable we become: critical infrastructure, SCADA systems.
2. Evolving cyber threats: the threat landscape has evolved: a) focused attacks becoming the norm; b) organised crime; c) terrorist groups.
3. Lack of concerted effort: governments, businesses and academia need to work together to effectively combat cyber threats.
4. Cyber security is a dynamic field: constant need for upgrading skill sets.
CYBER THREATS - INCIDENTS
• Estonia: Estonia, one of Europe's most wired nations, found itself fighting the world's first cyber-war in 2007.
• Queensland, Australia: a disgruntled, unsuccessful job applicant to a waste management plant in Queensland sabotaged the plant's computerised sewage control system.
• Japan: malicious code targeted mobile phones to inundate the emergency hotline.
• Georgia: major attack on Georgia's IT infrastructure.
COMBATING CYBER THREATS - STRATEGIES
• Providing effective information to empowered resources: disseminating the right information to the right people at the right time.
• Collaboration across borders at all levels: individuals, law enforcement agencies, regulators and governments working together to effectively combat cyber threats.
• Capacity building: harnessing knowledge from industry and academia to develop skilled individuals with the knowledge to combat cyber threats.
• Involvement of all stakeholders: industry and academia can play a vital role in helping governments secure national cyber infrastructure.
IMPACT: international platform for governments + industry + academia to collaborate in cybersecurity, and home of the International Telecommunication Union's Global Cybersecurity Agenda.
Introduction to IMPACT: characteristics
• Non-profit organisation: funded by grants, contributions, etc.
• Focused on the 'upper end of cyber threats', including cyber-terrorism.
• International and multilateral in nature: collectively 'owned' by the global community of partner nations.
• Public-private partnership: private sector, international institutions and academia are partners assisting partner countries to secure their IT infrastructure.

IMPACT Activities
• Global Response Center
• Center for Training & Skills Development
• Center for Security Assurance and Research
• Center for Policy & International Cooperation

IMPACT's Partners
[Partner logos shown in the original slide, including (ISC)².]
IMPACT Advisory Board
IMPACT's International Advisory Board comprises a distinguished list of renowned experts from industry and academia. Chaired by the Prime Minister of Malaysia (2008-2011).
DR. VINTON CERF Chief Internet Evangelist of Google, "Father of the Internet"
STEVE CHANG Founder and Chairman of TrendMicro
AYMAN HARIRI Chairman of Oger Systems
MIKKO HYPPONEN Chief Research Officer of F-Secure
EUGENE KASPERSKY Founder and CEO of Kaspersky Lab
PROF. FRED PIPER Professor of Mathematics at University of London and Founder of Codes & Ciphers, Ltd.
PROF. HOWARD SCHMIDT Former White House Security Advisor Former Chief Security Officer of Microsoft and eBay
JOHN W. THOMPSON Chairman Symantec Corp.
DR. HAMADOUN TOURE Secretary-General of International Telecommunication Union (ITU)
CYBER WAR VS. CYBER STABILITY
JODY R. WESTBY, ESQ.*
Global Cyber Risk LLC, CEO, Washington, DC, USA

Cyber war has become the drumbeat of the day. Nation states are developing national strategies, standing up offensive and defensive cyber war capabilities, and actually conducting cyber reconnaissance missions and engaging in cyber attacks, with alarming frequency. What is blatantly apparent is that far more financial resources and intellectual capital are being spent on figuring out how to conduct cyber warfare than are being spent on figuring out how to prevent it. The lack of international dialogue and activity with respect to the containment of cyber warfare is stunning. As Winston Churchill famously noted, "It is better to jaw-jaw than to war-war." It is time for governments to begin discussions aimed at assuring an agreed-upon level of geo-cyber stability through mutual cooperation and international law. "Geo-cyber stability" is defined by the author as the ability of all countries to utilize the Internet for both national security purposes and economic, political, and social benefit while refraining from activities that could cause unnecessary suffering and destruction. With 1.6 billion online users in 266 countries and territories connected to the Internet, cyber attacks have become so commonplace, and the capabilities to exploit the full range of information and communication technologies (ICTs) so great, that government systems, military networks, and business operations are in a continual state of risk.

Recent Attacks That Undermined Geo-cyber Stability

Although cyber attacks have been commonplace for the past decade, the frequency and sophistication of the attacks over the past two years have caused a shift in the stability of the Internet and created uncertainty whether nations will be able to secure and control their infrastructure, systems, and information. The 2007 attacks on government and private sector networks in Estonia were the watershed event that served as a government wake-up call. The attacks quickly escalated, seriously impacting government Web sites and systems and shutting down newspaper and financial networks. The attacks demonstrated the rapid pace at which a cyber attack can become a national security issue, involve other nation states, and raise the issue of collective defense. Even though Estonia is one of the most "wired" countries in the world, the attacks were also significant because Estonia quickly had to call for help in tracking and blocking suspicious Internet addresses and traffic. Before the attacks ended, computer security experts from the U.S., Israel, the EU, and NATO were assisting Estonia, and learning its lessons.

* Jody R. Westby is CEO of Global Cyber Risk LLC, located in Washington, DC, and serves as Adjunct Distinguished Fellow to Carnegie Mellon CyLab. She also chairs the American Bar Association's Privacy & Computer Crime Committee and is a member of the World Federation of Scientists' Permanent Monitoring Panel on Information Security.
The Estonian government was forced to close large parts of the country's network to outside traffic to gain control of the situation. Estonia blamed the attacks on Russia and claimed that it had tracked some communications to an Internet address belonging to a Kremlin official. Notably, Russia refused to cooperate in the investigation of the attacks even though it strongly denied any responsibility for them. The attacks highlighted the global nature of cybercrime and the difficulty of tracking and tracing cyber activities. Traffic involved in the attacks was traced to countries as diverse as the U.S., China, Vietnam, Egypt, and Peru. The Estonian attacks also may have represented a situation in which rogue actors, such as botherders or organized cyber criminals, were aligned with a nation state in conducting and concealing the attacks, though this has not been proven. (Botherders are persons who control thousands to millions of computers (a botnet) on which they have surreptitiously planted software that can be activated to cause the infected computers to take certain actions, such as sending repeated communications to a network as part of a denial of service attack.) A few months after the Estonia attacks, U.S. Pentagon computer networks were allegedly hacked by the Chinese military in what has been called "the most successful cyber attack on the U.S. defense department,"² shutting down parts of the Pentagon's systems for more than a week. Chinese hackers have also been blamed for attacks that compromised German government systems and cyber espionage incidents against the United Kingdom's (UK) government systems. The Director-General of the UK's counterintelligence and security agency, MI5, posted a confidential letter to 300 CEOs and security officers on the Web site of the Centre for the Protection of National Infrastructure, warning them that their infrastructure was being targeted by "Chinese state organizations" and that the attacks were designed to defeat security best practices. Like the Estonian events, these attacks raised profound legal questions with respect to nation state use of cyber mercenaries to conduct intelligence or military activities. The 2008 attacks on Georgian systems during the Russia-Georgia conflict over South Ossetia were a more obvious example of cyber warfare that demonstrated the degree to which governments are dependent upon computers and communications networks, especially during crisis management. A sequence of distributed denial of service (DDoS) attacks against Georgian government Web sites essentially shut down government communications. The Georgian government quickly obtained assistance from other countries, and companies. Estonia sent cyber security experts to Georgia and took over the hosting of the Georgian Ministry of Foreign Affairs Web site. The Polish government made space on its Web site for Georgian updates on its conflict with Russia, and U.S. companies, such as Google and Tulip Systems, helped the Georgian government move some of its Web content to the U.S. where it would be protected. While the Estonia attacks raised questions whether cyber attacks could trigger NATO's Article V protections of collective defense, the Georgian attacks raised issues regarding other aspects of international law. Stephen Korns and Joshua Kastenberg have analyzed the assistance provided to Georgia and pondered whether Georgia violated the United States' right of neutrality under the Hague Convention when it took the "unorthodox step of seeking cyber refuge" in the U.S. without first seeking the permission of the U.S. government.
² Demetri Sevastopulo, "China 'hacked' into Pentagon defence system," Financial Times, Sept. 6, 2007, at 1.
Tulip Systems' CEO, a Georgian who happened to be visiting in Georgia at the time of the attacks, called the Georgian government and volunteered Tulip's services. Korns and Kastenberg note that: "During a cyber conflict, the unregulated actions of third-party actors have the potential of unintentionally impacting U.S. cyber policy, including U.S. cyber neutrality. There is little, if any, modern legal precedent." The Estonia and Georgia cyber attacks serve as excellent examples of the havoc caused by cyber attacks and the uncertainty surrounding the legal frameworks that govern actions taken during such events. Theory gives way to reality in the chaos of such crises: neither NATO nor the countries that came to the assistance of Estonia had clear legal authority to engage in defensive measures to aid Estonia. The Estonian and Georgian attacks highlight the need to revise the doctrines and laws that traditionally support diplomatic, policy, and military decisions in order to address cyber threats that often link national and economic security. More recent cyber attacks highlight the interconnected nature of cyber vulnerabilities and accentuate the need for an agreed-upon level of geo-cyber stability. Researchers at the Munk Center for International Studies at the University of Toronto conducted a 10-month investigation into allegations of a Chinese computer network exploitation against Tibetans. The Information Warfare Monitor's March 2009 report on this investigation, Tracking GhostNet, indicated that the researchers uncovered a network of 1,295 infected computers in 103 countries that were controlled from commercial Internet accounts in China. According to the report, the GhostNet system commanded computers from ministries, embassies, news organizations, and NATO across Europe and Asia to download malware that enabled the attackers to "gain complete, real-time control" that included searching and downloading files and operating devices attached to the computers, such as microphones and Web cameras. In early 2009, cyber researchers from 300 organizations and 110 countries joined together to fight the Conficker worm, which has infected at least five million systems in 211 countries. Conficker is contained for the moment, but not eradicated. The threat looms that those behind the worm could break through and take control of these systems. SRI International reported that Conficker first appeared in September 2008, and Chinese hackers were the first to market it. According to Rick Wesson, CEO of Support Intelligence and one of the researchers deeply involved in this effort, the sophistication of this worm is unprecedented and targets the infrastructure of the Internet. In part, Conficker has relied upon the inability of infected parties to collaborate, one of the gravest weaknesses in the international legal framework, yet one of the easiest to fix through international agreement. As recently as July 2009, at least 35 government and commercial Web sites in South Korea and the U.S., including the Nasdaq and New York stock exchange, suffered denial of service attacks. South Korean intelligence officials have unofficially blamed North Korea. Former U.S. officials have publicly named North Korea among nations perfecting cyber warfare capabilities. In 1996, U.S. Government officials estimated that more than 120 countries either had or were developing computer attack capabilities that could seriously impact the
nation's ability to deploy and sustain military operations. Countries certainly need to be able to protect their infrastructure, systems, and information from intrusion, attack, espionage, sabotage, unauthorized access or disclosure, or other forms of negative or criminal activity that could undermine national and economic security. They also, however, need some certainty regarding everyday operations and a legal framework upon which to rely in making decisions regarding national and economic security and the safety of their people. This is lacking in the cyber realm. The political and economic shifts caused by the Internet and globalization have introduced considerations that impact traditional approaches to national security based on geo-political interests, spheres of influence, and correlation of forces. Foreign policy is far more complex in an interconnected world where cyberspace knows no borders, packets hop from country to country, and laws governing collective assistance and armed conflict were intended for traditional warfare, not cyber conflict. Although geo-political considerations still must be afforded great weight, threats to critical infrastructure must be evaluated in a broader policy paradigm that is based on maintaining global cyber stability. Today, all countries need the certainty of a minimum level of cyber stability that is assured through international agreement. At its core, this minimum level of cyber stability means that a country's critical infrastructure shall not be disrupted in a manner inconsistent with the laws of armed conflict and other applicable treaties and conventions, such as the Hague Convention, which requires nations at war to respect the neutrality of other nations, and the Geneva Convention.

Legal and Policy Issues

The laws of armed conflict regulate the conduct of armed hostilities and are intended to prevent unnecessary suffering and destruction. Under the laws of armed conflict, combat forces can engage in only those actions necessary to achieve legitimate military objectives (principle of necessity), and they must distinguish between lawful and unlawful targets, such as civilians, civilian property, and the wounded and sick (principle of distinction). The amount of force cannot exceed that needed to accomplish military objectives (principle of proportionality). Lawful combatants are those authorized by the government to engage in military actions, and they must bear distinctive emblems and be recognizable at a distance. Unlawful combatants are those who participate in hostilities without authorization by government authority or under international law. In a cyber context, the first obvious issue is: what constitutes an act of cyber warfare? Other issues concern the attack of communication systems and other critical infrastructures owned by the private sector that support civilian life, including hospitals and treatment for the sick, wounded, elderly, and very young. Should these and the systems of targets protected by the Geneva Convention be off limits? Are attacks on these systems really necessary to achieve military objectives? Is the damage to the networks proportional to the military objective? When an attack occurs, no one knows who is attacking until it can be tracked and attribution can be determined. Legitimate cyber soldiers are indistinguishable from script kiddies or any rogue actor on the Internet. How does one determine whether attackers are military combatants? What international cooperation is required?
Likewise, how is it to be known if third parties are acting at the behest of a nation state? They certainly do not have distinctive emblems, nor are they
recognizable from a distance. Do cyber soldiers and engaged third parties need to wear cyber uniforms or have recognizable characteristics? What is excessive force in cyberspace? These and numerous other legal and policy questions arise in the context of cyber warfare. The two principal legal instruments that govern nation state action in a conflict situation are the NATO Treaty and the UN Charter. Each document is more than 50 years old and their provisions do not accommodate cyber scenarios. They both use similar language and are equally ambiguous regarding cyber attacks. The NATO Treaty uses terms such as "armed attack," "territorial integrity and political independence," and "territory, forces, vessels, and aircraft." The terms self-help, mutual assistance, and collective assistance are used only in the context of an "armed attack." Estonia's defense minister, Jaak Aaviksoo, pinpointed the gaps in the NATO Treaty with respect to cyber attacks by stating, "Not a single NATO defense minister would define a cyber-attack as a clear military action at present." Article 12 of the NATO Treaty allows for consultation of NATO members for the purpose of reviewing the Treaty with respect to "factors then affecting peace and security." Thus, this Article could be used as the mechanism by which cyber attacks, collective defense, and geo-cyber security are considered by NATO nations. The UN Charter serves as the foundation in international law for state conduct, including armed conflict. The language in the UN Charter is closely aligned with that in the NATO Treaty, using terms such as "territorial integrity and political independence," "the use of armed force," "action by air, sea, or land forces," and "armed attack." The self-defense provisions confuse more than clarify. Article 51 states that nothing shall block a nation or group of nations from engaging in collective self-defense if an armed attack occurs, raising the question of whether a cyber attack could be deemed to be an "armed attack." Even if the attack came from a branch of the armed forces, Article 41 cuts against that interpretation because it specifically lists actions that are deemed not to be armed force and may be taken to enforce Security Council decisions. The allowed actions specifically include the complete or partial interruption of communications, which could apply to cyber attack scenarios. Quite simply, the UN Charter and NATO Treaty do not accommodate the electronic capabilities of the 21st century. The need to update these legal instruments to govern the actions of nation states with respect to cyber warfare and attack capabilities has never been more urgent. The rule of law is already in a precarious state due to the disruptions caused by terrorist activities. The ominous threat of cyber attacks by nation states and rogue actors has become a reality, and this issue can no longer be ignored by countries that find it more desirable to war-war than to jaw-jaw. Governments, the private sector, and multinational organizations must begin an international dialogue in this area to accommodate new military capabilities, collective action, and geo-cyber considerations. If left unattended, by 2015 cyber instability will pose a significant threat to the national and economic security interests of all countries. Although some action has been taken by NATO, it falls woefully short of assuring any sort of geo-cyber stability.
Following Estonia, NATO adopted a Cyber Defence Policy and created a Cyber Defence Management Authority to coordinate cyber defense among NATO allies. NATO's Cyber Defence Policy does not address whether a cyber attack can trigger collective defense
under Article V. Response centers are necessary, but they are soft options. The steps taken by NATO make an important contribution, but they do not help define what level of cyber stability is sacrosanct and how cyber actions fit within the NATO framework.

Where to Begin

Countries need to begin the dialogue on global cyber stability by addressing international cooperation. Such cooperation is almost always needed in tracking and tracing cyber communications, simply due to the interconnected nature of the Internet and the manner in which the Internet Protocol breaks a communication into packets and routes them across many networks, and countries, before reassembling them at their destination point. Assistance from other nation states is also needed in defending against cyber attacks. The Council of Europe Convention on Cybercrime, which contains excellent provisions regarding mutual cooperation and assistance, was originally believed to be the best vehicle for reaching such agreement. However, it has only been signed by 46 countries and ratified by 26 since it opened for signature in 2001. Considering that over 200 countries are connected to the Internet, the CoE Convention hardly appears to be the answer.

The UN clearly needs to take the lead in working toward an international agreement on cooperation and containment of cyber conflict. Although the U.S. invented the Internet, it is unlikely that it will step up to take a leading role at the UN in any such effort. The U.S. has openly criticized the ITU for addressing cybercrime in its Global Cybersecurity Agenda and has refused to support the ITU Toolkit for Cybercrime Legislation, which contains sample language for cybercrime laws and provisions for mutual cooperation and assistance (consistent with the CoE Convention). U.S. opposition to UN activity in the cyber realm has gone on for over a decade, with U.S. delegates continuing to push the CoE Convention and arguing that defensive action and cybercrime laws are the solution.

Ironically, Russia, one of the most active countries engaging in cyber warfare, has shown the greatest leadership in this area. Since 1998, Russia has introduced an annual UN resolution concerning "Developments in the field of information and telecommunications in the context of international security," calling for multilateral consideration of threats emerging in the field of cyber security, the definition of basic notions related to the unauthorized interference with information and telecommunication systems, and consideration of international principles to help combat cybercrime and terrorism. The 1999 resolution included the military potential of ICTs. These resolutions have regularly been adopted by the General Assembly, and the U.S. has regularly voted against them. Russia's 2008 resolution was adopted by both the UN's First Committee and the General Assembly, over the sole objection of the United States.

Conclusion

The international community must come together and realize that the enormous benefits of the Internet are at risk if it is used as an instrument of harm outside the rule of law. Governments have an obligation to help protect the Internet and the systems that support their economies, enrich the lives of their citizens, and support government and military operations. They also have an obligation to assist in tracking and tracing cyber activities.
A legal framework applicable to cyber conflict that assures a minimum level of geo-cyber stability must be developed, lest the Wild Wild Web become the 21st century tool of destruction and impinge on the rule of law regarding armed conflict, human rights, and friendly relations among nation states.
CYBER CONFLICT VS. CYBER SECURITY: FINDING A PATH TO PEACE

JOHN G. GRIMES
Former Assistant Secretary and Chief Information Officer, U.S. Department of Defense, Washington, DC, USA

THREATS IN CYBERSPACE ARE A GLOBAL PROBLEM

I wish to thank the WFS for their invitation to participate on the Permanent Monitoring Panel to address the critical issue of Information Security. I was asked to discuss the matter of "containment of cyber warfare," which is a sensitive topic since we have seen numerous events over the past five years that appear to be associated with military operations. The bottom line up front is that it is very difficult to determine the attribution of a cyber attack in the global, borderless digital net that provides internet services to government, commercial users and the public, including criminals and terrorists alike.

Before I discuss containment and deterrence for cyberspace warfare, I would like to share some examples of cyber attack events that are taking place in the wild jungles of global cyberspace. Over two years ago the small country of Estonia was attacked by what appeared to be professional hackers who disabled the web sites of essential functions of the government and private sector such as banks, newspapers, political parties and companies. A coincidence? I think not. Did NATO articles apply to the cyber attack on Estonia? A year ago this month, the nation of Georgia was under cyber attack by hackers prior to Russian troop movement into that country. Is this a precursor to future warfare? The cyber attacks on U.S. Government and military web sites over the U.S. Independence Day, 4 July, came at a time when diplomacy had failed on nuclear issues between two nations. Was it an act of war under the UN Charter? During the Israeli-Hamas Gaza conflict last year, Israeli web sites were under cyber attack. Was this a terrorist attack, since Palestine is not an official state? The most recent cyber event that some call "cyberwar" is the distributed denial-of-service attacks on Twitter, YouTube, Facebook and LiveJournal web sites, followed by another global attack a week later on Twitter. A coincidence? I think not.

These are only a few examples of the cyber attacks on nation states, corporations and the public using the internet. When you look at it in the context of cost to government, businesses and the public, it is mind boggling. A study presented by McAfee at the World Economic Forum in Davos-Klosters, Switzerland shows that global companies may have lost over $1 trillion worth of intellectual property to data theft in 2008. It is estimated that computer worms and viruses such as "I Love You," "SQL Slammer" and "Sasser" cost companies $55 billion to clean up the damage caused by the attacks. These events are only the tip of the iceberg of the cyber attacks and costs that occur every moment of every day in cyberspace, causing irreparable damage to people's lives and to governments' and nations' economies.
THE CYBER ENVIRONMENT

As I stated before, cyberspace is like a wild jungle, and nations and international bodies have been unable to establish and enforce the rules of the road for how hackers, state actors and non-state actors, including criminals and terrorists, behave. Some say that the cyberspace domain is not recognized as a global common, as are space and the high seas. The internet, a global medium, flows freely across borders and territories with little or no international agreements or regulations. Where there are national laws or regulations, they do not extend into other nations' sovereign territories. Containment and deterrence of cyber warfare is almost an impossibility and will take many years to achieve. After the 4th of July cyber attack on U.S. Government agencies, including the Department of Defense, the U.S. Senate drafted legislation that requires the U.S. State Department to work for cooperation in the international community.

There are a number of international organizations and bodies that are addressing cyberspace/internet technologies, laws, cultures/ethics, and policy. Dr. Tom Wingfield of the U.S. has developed an excellent framework for international cyber security, which he calls "The Cube," that puts law, technology and policy into context. However, this framework does not go far enough; it must also take into consideration cultures and ethics, which differ across regions and nations and in many cases are based on religion and tribal norms.

CONTAINMENT AND DETERRENCE OF CYBERWAR

Who to deter?
• Nation states.
• State tolerated/encouraged/sponsored organizations.
• Terrorists.
• Criminals.
• Individual hackers or hacker groups.

There are significant challenges in attempting to deter state tolerated/encouraged organizations, in large measure due to attribution problems and the lack of retaliatory options that can be brought to bear. The use of the Internet by violent non-state actors for recruitment, propaganda and command and control will increase and will require continual and increased emphasis. Deterrence of criminal behavior and other hackers should be done through law enforcement regimes.

Cyberspace is the newest of the borderless global commons, lacking both consensual behavioral norms and a formal international governance regime. This fact challenges the nation state construct of sovereignty and territorial integrity.
• Relatively low barriers to entry.
• Broad range of actors can develop disruptive and/or espionage cyber capabilities.
• Difficulty in distinguishing among the malicious actors and their intent.
• The Internet is largely a private enterprise; neither nations nor governments, more generally, define or control its standards.
• Few international governance structures exist.
Developing an explicit cyber attack declaratory policy, including potential retaliatory responses, is complicated by the lack of precise attribution, clear thresholds, and a credible and demonstrated retaliatory capability.
• Explicit vs. ambiguous commitments.
• Unilateral vs. multilateral declarations and commitments.
• One size does not fit all attacks.
• Establishing credibility.
• Demonstrating political will.
• Compliance with international laws and treaties.

Deterrence cannot be achieved without political will and credible response options.
• All options and elements of national power must be considered, both law enforcement and military.
• Must be willing to demonstrate potential responses.
• Proportionality, precision/collateral damage/fratricide.
• Escalation/de-escalation control.
• Counter-value vs. counter-force.

SUMMARY

• The West has a huge number of intelligence and law enforcement assets dedicated to stopping the proliferation of weapons of mass destruction but does not have the same type of watchdog systems in place to prevent cyber enablement.
• Terrorists are very good and getting better at using the internet for propaganda and fund-raising. They are reaching ever-increasing audiences. Terrorists will recognize the opportunity the cyber world offers and that they need help to exploit it.
• The highly developed cyber criminal networks want money and care little about the source. Unless we get cyber crime under control, it will mutate into a very real national security issue with potentially catastrophic ramifications.
• Terrorism enabled by cyber criminals is our most likely major cyber threat.
• The conceptual framework that underpins the U.N. Charter on the use of force and armed attack and today's law of armed conflict provides a reasonable starting point for an international legal regime to govern cyberattack. However, those legal constructs fail to account for non-state actors (criminals and terrorists) and for technical characteristics of some cyberattacks.
• Begin a dialog with international/global partners to establish a cyber governance construct and acceptable norms for behavior, with the creation of international "rules of the road" and a more formalized governance regime.
• Establish international/global forums to influence future network architectures, standards and protocols that promote authentication and help facilitate attribution.
INFORMATION SECURITY, ENSEMBLES OF EXPERTS

DR. RICK WESSON
Support Intelligence Inc., CEO, San Francisco, California, USA

THE CROWD IS INDEED WISER THAN THE INDIVIDUAL

Ensembles are becoming a favored technique to address difficult information classification problems. Ensemble Theory was recently used to address several data classification challenges: the Netflix and GitHub challenges. These are two examples where Ensemble Theory proved to be the winning strategy for solving complex, real-world data classification problems. A recent Internet worm named Conficker was contained by a diverse group of researchers and information security professionals in much the same way as ensembles are composed to solve machine-learning problems. This paper describes the Conficker effort as an Ensemble of Experts.
MACHINE LEARNING ENSEMBLE

Ensemble learning is defined by Dr. Rob Buck as the process by which multiple models, such as classifiers or experts, are strategically generated and combined to solve a particular computational intelligence problem. Ensemble learning is primarily used to improve the classification, prediction or function approximation performance of a model, or to reduce the likelihood of an unfortunate selection of a poor one. Other applications of ensemble learning include assigning a confidence to the decision made by the model, selecting optimal or near-optimal features, data fusion, incremental learning, non-stationary learning and error correcting. A commonly used ensemble learning algorithm is described as a mixture of experts, which generates several expert classifiers whose outputs are combined through a generalized linear rule. The weights of this combination are determined by a gating network, typically trained using an EM algorithm. Both the experts themselves and the gating network offer the template for training. Several mixtures of expert models can also be further combined to obtain a hierarchical mixture of experts.
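As a concrete illustration of the general idea, the following is a minimal sketch of an ensemble of simple "experts" whose predictions are combined, here by soft voting rather than by the EM-trained gating network described above. It assumes scikit-learn is available; the data set and model choices are arbitrary examples, not the models used in the contests discussed below.

```python
# Illustrative ensemble sketch: several independently trained experts,
# combined by averaging their predicted probabilities ("soft" voting).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a real classification problem.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

experts = [
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("logit", LogisticRegression(max_iter=1000)),
    ("bayes", GaussianNB()),
]

# Each expert on its own.
for name, model in experts:
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))

# The ensemble: a crude analogue of the weighted combination described above.
ensemble = VotingClassifier(estimators=experts, voting="soft")
ensemble.fit(X_train, y_train)
print("ensemble", accuracy_score(y_test, ensemble.predict(X_test)))
```

The combined classifier typically matches or beats its weakest members, which is the property the rest of this paper leans on.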
ENSEMBLE OF EXPERTS

Netflix is a DVD rental and online movie subscription service hosted in the United States. Three years ago Netflix started a competition to see if they could improve the accuracy of predictions about how much someone is going to enjoy a movie based on the movie watcher's preferences. The winner of the Netflix prize would improve Netflix's ability to connect people to movies they love at a fixed cost. Qualifying algorithms were submitted for one week of judging and the contestants had to give Netflix a royalty-free, worldwide, nonexclusive license for the copyrights or patents and intellectual property. The grand prize was $1 million, the progress prize was for $50,000 and, until recently, no
team had won. On July 26, 2009, Netflix stopped gathering submissions for the contest as there were two teams that had met the requirements for the grand prize. Netflix is currently verifying the winner of the grand prize, and it has been reported that one of the teams that met the minimum requirements for the grand prize used an ensemble-of-experts machine learning algorithm. The ensemble team is composed of over 30 organizations that originally competed against each other, only to coalesce around a group named "the ensemble". One of the most interesting insights from the results of the Netflix challenge is that it was ultimately the cross-team collaboration that ended the contest.

Another collaborative filtering effort is the GitHub contest. The contest's goal was to produce an open-source recommendation engine in several computer languages. The projects that entered the contest were open source and participation required the posting of one's source code to the open source repository at GitHub. The GitHub challenge allowed users to follow and discover new open source projects. As a challenger you got access to 56,000 users, 120,000 repositories and 144,000 relationships between them, an excellent data set. There was also an immediate feedback loop via post-commit hooks, and all of the results and code were open to everyone.

Numerous ensemble technologies were submitted and it did not take long for the winning methodology to develop. It developed as follows: scrape the contest winners from the leader-board, find all of their top performing results, download the best results, clean up missing or incorrect data and build an ensemble. Once this very short piece of code was in place, any new learning algorithms submitted were instantly incorporated into the leading position. Shortly after the piece of code described above was posted, the game was over and the competition concluded. Observations from these two research projects into machine learning, crowd sourcing, and ensemble theories quickly show the value of the collective over the individual contribution.
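A minimal sketch of the "blend the leaders" approach described above follows. It assumes, purely for illustration, that each downloaded submission maps a user ID to a ranked list of recommended repository IDs; the data format, IDs and weighting scheme are invented here and are not the actual contest code or data.

```python
# Blend several recommendation submissions by weighted voting.
from collections import Counter

def blend(submissions, top_k=10):
    """submissions: list of dicts {user_id: [repo_id, ...]}, ordered best-first,
    so earlier (higher-ranked) submissions receive slightly more weight."""
    blended = {}
    users = set().union(*(s.keys() for s in submissions))
    for user in users:
        votes = Counter()
        for rank, sub in enumerate(submissions):
            weight = len(submissions) - rank          # better submissions count more
            for pos, repo in enumerate(sub.get(user, [])):
                votes[repo] += weight / (pos + 1)     # earlier picks count more
        blended[user] = [repo for repo, _ in votes.most_common(top_k)]
    return blended

# Toy usage with two invented "leaderboard" submissions.
sub_a = {"u1": ["r1", "r2", "r3"], "u2": ["r9"]}
sub_b = {"u1": ["r2", "r4"], "u2": ["r9", "r7"]}
print(blend([sub_a, sub_b], top_k=3))
```

Because any new high-scoring submission can simply be appended to the input list, the blend automatically absorbs every improvement posted by competitors, which is why the competition effectively ended once such a blender was published.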
CONFICKER WORM

The Conficker worm created one of the most extensive and threatening botnets that the Internet community had ever observed. The origins of the Conficker Working Group (CWG) and its work have stimulated much thinking in trying to understand how the group came together and how it performed its activity across such a diverse and global set of first responders. Much thought has gone into trying to understand how the CWG worked and many questions have arisen. Without leadership, tasking, goal setting or hierarchical command and control, how can you quickly form a group to interpret and defend thousands of networks over the course of a few weeks? I propose that the group was actually performing as an ensemble of experts. Looking at machine learning algorithms as a template for management of a diverse group towards a single goal is a viable methodology for meeting global security threats. The concept is as simple as the winning GitHub ensemble, which required easy access to information and the results of previous algorithmic success.
CONFICKER EXPERTS
Within the CWG there were several subgroups that performed distinct tasks. Several organizations that normally compete in the Internet security marketplace looked at reverse engineering malware. These organizations had teams located on every continent in multiple time zones, enabling the CWG to have a constant set of eyes looking at every new variant, developing software, providing analysis, and writing about what they were doing without any intellectual property issues being raised. Every organization was focused on developing new understanding and sharing it as rapidly as possible within the subgroup. The "subgroup leaders" did not provide tasking or control of the subgroup but tried to observe important facts and push those to other communities within the CWG.

The group that performed the sinkholing of domain names had to coordinate with policy oversight committees such as ICANN and, in many cases, their local government. The organizations that had to register domains or hold domains to sinkhole them had to make decisions in a matter of days, and those decisions were based on information that came through Jabber chat rooms, e-mail lists and teleconferences. In many cases decisions had to be made by Top Level Domain (TLD) managers that had no relationship to the CWG. The chaos that developed as organizations were added to the CWG actually helped many of the technical people participating make decisions. Organizations that had to engage public relations had a more difficult time, as there was very little information that could be communicated easily to the general public. The anti-Conficker efforts the CWG was engaged in were very complicated, affected every country of the world, were not well understood (even by the experts) and were very difficult to "dumb down" to something that could be printed in a daily publication.
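To make the sinkholing step described above more concrete, the following is a purely illustrative sketch. The domain generation function is an invented toy, not Conficker's real algorithm, and register_for_sinkhole() is a hypothetical placeholder for the registrar, TLD and ICANN coordination that the CWG actually had to perform by hand.

```python
# Toy model of pre-registering a worm's rendezvous domains ("sinkholing").
import hashlib
from datetime import date

TLDS = ["com", "net", "org"]          # toy list; the real worm spanned many TLDs

def candidate_domains(day: date, count: int = 5):
    """Deterministically derive the day's candidate rendezvous domains from the date."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{day.isoformat()}-{i}".encode()).hexdigest()
        domains.append(f"{digest[:12]}.{TLDS[i % len(TLDS)]}")
    return domains

def register_for_sinkhole(domain: str):
    # Placeholder: in practice this required registrars and TLD operators to
    # hold the names and point them at defenders' monitoring servers.
    print(f"would register and sinkhole {domain}")

for d in candidate_domains(date.today()):
    register_for_sinkhole(d)
```

The point of the sketch is only that defenders who can reproduce the worm's daily domain list can claim those names before the worm's operators do, which is why registration decisions had to be made in days rather than months.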
IMPROVISATION

Improvisation is the practice of acting, singing, talking and reacting, or making and creating in the moment and in response to the stimulus of one's immediate environment or inner feelings. In Malcolm Gladwell's book "Blink" he describes war games where improvisation can defeat planning, strategy and analysis. Conceptually, improvisation in dynamic environments such as cyber security is a capability that has yet to be tapped. In 2008, Dick O'Neill hosted a Highlands Information Engagement Forum conference in Half Moon Bay, California; the themes were cooperation, competition and conflict. At this conference we discussed numerous strategies by which loose-knit groups could engage a cyber threat and, conceptually, the idea was based on improvisation and ensembles of experts. The skills of improvisation can apply to many different abilities or forms of communication and expression across all kinds of activities: artistic, scientific and physical. Specifically, I believe the cyber domain requires incredible improvisational flexibility for those performing an offensive action.

For the last 10 years our cyber efforts have been primarily defensive. Defensive activities lend themselves to clear command and control structures, hierarchical command and static analysis. The majority of activities today performed by cyber deterrence
devices are purely defensive in nature. Intrusion detection systems, intrusion prevention systems, firewalls and anti-virus solutions currently compose these static defensive capabilities. All of these systems require a threat to be previously analyzed and described by a set of "fingerprints" or a behavioral model. In the last two years the threats have multiplied to such a level that every day there are numerous never-seen-before (zero-day) malicious exploitations. The reason antivirus solutions are not working is because they cannot acquire the malicious software, analyze it and create fingerprints, or distribute the fingerprints or behavior descriptions fast enough to protect the systems before the systems are infected. Once infected, the infection disables the antivirus software.

In the context of an OODA loop (for observe, orient, decide and act), our adversaries' capabilities are literally measured in microseconds, whereas our own capabilities are measured in days; in the context of enterprise systems, capabilities are measured in months to years; in the context of government, well, little or no action may be taken at all. There are literally seven orders of magnitude between government capabilities in actively thwarting the elements that create malware and the capabilities of organizations that create and distribute worms like Conficker. Our adversaries' OODA loops are so tight that the idea that we could enter and subvert them is nearly unfathomable.
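A minimal sketch of the static "fingerprint" matching described above may help show why it is too slow against zero-day malware: a defender can only flag files whose hashes already appear in a previously distributed signature set. The signature set below is hypothetical (it happens to contain the SHA-256 of the byte string "test"), and real antivirus products also use richer heuristics than a plain hash lookup.

```python
# Static signature matching: anything not already in the list slips through.
import hashlib

KNOWN_BAD_SHA256 = {
    # hypothetical signature distributed by an antivirus vendor
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def is_known_malware(data: bytes) -> bool:
    return fingerprint(data) in KNOWN_BAD_SHA256

print(is_known_malware(b"test"))         # True: matches the stored signature
print(is_known_malware(b"new variant"))  # False: a zero-day goes undetected
```

Every new variant forces the whole acquire-analyze-distribute cycle to run again, which is the days-versus-microseconds gap the OODA comparison above describes.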
Improvisation is intended to solve a problem on a temporary basis, the "proper" solution being unavailable at the time. In my experience I see the defensive effort as futile, albeit necessary. That hundreds of companies can be penetrated on a regular basis and have that penetration exist for months, if not years, is evidence of the significance of the problem before us. Improvisation often focuses on bringing one's personal awareness into "the moment" and developing a profound understanding of one's actions. This fusion of "awareness" and "understanding" brings the practitioner to the point where he or she can act within a range of options that best fit the situation, even if he or she has never experienced a similar situation.

When we began fighting the Conficker worm there were no directions and no "command intent" to provide guidance to the individuals and organizations that were attempting to contain or disrupt the worm. Many of the original participants in the Conficker Working Group (CWG) had wanted to actively engage a global threat. It was easy to encourage individuals and organizations to perform a function that would help the community in a fight against this threat. I saw this situation as an opportunity to test some of the ideas developed at the Highlands Conference in Half Moon Bay. The organizations and individuals that originally engaged the Conficker worm worked from an established basis of trust. Several of them had previously worked together and all had a basic respect for one another. Generally, each organization understood how it could contribute and what it could do to affect the situation in a positive light. Each understood how to make positive choices that kept doors open. Though most of the individuals did not understand that they were in an improvisational petri dish, they played their parts well, working to bring some element of their expertise to bear on the threat.

Each organization offered a capability or an endowment to the effort, and when a conflict arose it was quickly resolved, because this was believed by some to be "God's work". The improvisational process in theater or music is defined by actors who work together responsibly to define the parameters and actions of the scene in a process of co-
creation. Within each activity in the scene the actor makes an offer, meaning that he or she defines some element of the reality of the scene. All successful cyber-related offensive efforts have had an element of this improvisational process. The activity known as an endowment is the responsibility of the actors to accept the offers their fellow performers make; to not do so is known as blocking, negation, or denial, which usually prevents the scene from developing. Improvisers frown upon blocking, and the magic that made the CWG work is that the offers that were made were accepted. Accepting an offer is usually accompanied by adding a new offer, often building on an earlier one; this is a process improvisers refer to as "yes, and...", which is considered the cornerstone of improvisational technique. Every new piece of information added helps the performers to refine their capability and make progress in their engagement. The unscripted nature of improvisation also implies that no predetermined knowledge is required, only capability.

When we decided to engage in an effort to contain the worm, organizations liberally offered their resources such as malware researchers, reverse engineers, large volumes of data, hardware and bandwidth. In several instances we had multiple groups offering similar capabilities; Symantec and SRI offered reverse engineering even though both organizations compete vigorously in the marketplace; despite this, however, they cooperated and collaborated, each contributing significantly to understanding the capabilities of our common adversary. As these two organizations competed to explain the worm in detail, each gained greater respect for the other as well as learned how to specialize in complementary research.

The lubricant in this effort was information: access to data, complete transparency of data and access to the analysis contributed by all parties. The amount of information produced, accumulated and shared among the participants of the CWG is unprecedented. No contracts, no NDAs and no marketing agreements were executed between the parties. Today several terabytes of data have been accumulated and shared among hundreds of organizations because they all decided that it was in their best interest and within their capability to "do the right thing". If there was ever a point that we lost momentum, it was when a participant did not accept the "yes, and..." rule, which is the cornerstone of improvisation.

IMPROVISATION, SECURITY AND SECRECY
This finally brings me to my objection to secrecy. Secrecy and its main tenets of classification and compartmentalization do not allow for the "yes, and..." cornerstone of improvisation. For the first time in my experience we set off to battle a peer-to-peer network with a robust, suitable and flexible command structure based on improvisation. I suggest the reasons for the CWG's success are the lack of a command hierarchy, CWG peer relationships and the (unconscious) acceptance of the cornerstone of improvisation, the "yes, and..." rule. Improvisation requires the liberal flow of information in all directions, which is why I believe true information dominance will leverage ideas completely orthogonal to the life experience of many in the DOD.
In the cyber realm we were required to collaborate with many organizations and, in the end, nation-states with whom we do not have strong trust relationships. Leveraging personal relationships to encourage participation in the global effort was facilitated by open and transparent knowledge. Only when we tried to implement compartmentalization of data sharing did the trust relationships break down, to the detriment of the entire effort.

Improvisation in engineering is solving a problem with the tools and materials immediately at hand. A classic example was the creation of carbon dioxide scrubbers with the materials on hand during the Apollo 13 space mission. Another improvisation is the IED, or improvised explosive device, which the Department of Defense has spent the last several years fighting in the Middle East. We need the opportunity to develop experience in creating and honing a team's ability to improvise in cyberspace. Fundamentally, the requirements for the team to work within current security regimes would effectively limit the team's ability to improvise and succeed. Therefore, I strongly recommend that we create an environment to navigate these issues.

DECIDING WHEN TO QUIT
Ensembles of experts are uniquely suited to developing environments that favor innovation; however, politics can overcome momentum once a promising solution is found. Disbanding the collection of experts and moving the effort to a management team to maintain operational efforts is critical to allowing the participants to move on. The unique benefit of an ensemble of experts is in the innovation, not the ongoing maintenance or operations of the effort; thus the value is in the quick startup and shutdown of the ensemble, not the long term operations of any effort.

NATIONAL CYBER SECURITY R&D
I hope that this short note has illuminated several issues in cyber security and will encourage the use of machine learning algorithms as frameworks for managing groups of experts across diverse fields to meet threats in cyberspace. We have ample opportunity to explore these methods in daily interactions with cyber threats. The number of threats we see on a daily basis is experiencing exponential growth and the opportunities to engage them through various techniques are unbounded. I look forward to working within the Internet security community, the government and those individuals that can affect the security and stability of the network on behalf of those that rely on its communication fabric.

PEACE IN CYBERSPACE
Meeting the growing threatscape in cyberspace begins with individually accepting that energy invested in peace is ultimately more valuable for the global network community than any escalation in offensive computing. Peace is the only de-escalatory tactic available to slow the geometric scaling that is currently overwhelming global network defense.
CYBER CONFLICT VS. CYBER STABILITY: EU AND MULTINATIONAL COLLABORATION

JACQUES BUS1
European Commission, Information Society and Media Directorate-General, Brussels, Belgium
1 Head of Unit Trust and Security, DG Information Society and Media, European Commission.

INTRODUCTION AND PROBLEM ANALYSIS

Since the Web was invented, its use on the Internet has quickly permeated our lives and societies. It now forms the basis of our information collection, has become an important tool for social communication and for numerous public and private services, and control systems in our critical infrastructures will often not work without it anymore. But with the Internet (a term used further to refer to the combination of the physical Internet together with Web technologies) moving to the centre of our society, its many weaknesses are also exposed. Cyber criminals are increasingly exploiting network vulnerabilities, terrorists use the Web for their illegal activities, identity theft and data loss are reported almost daily in the press, and cyber attacks are reported as part of war actions or threats in international conflicts. As vital services in society increasingly depend on digital systems and the Internet, disruptions of this digital infrastructure, whether through natural and accidental incidents or deliberate attack, can cause major economic and social damage. Unfortunately, the risks are often not fully comprehended by citizens and their political representatives.

The global nature of the Internet is evident, and its initial design for a community whose members trust each other has led to the neglect of aspects of security, authentication, identification and location, which are important instruments used to date for implementing state jurisdiction, trust and accountability. The current political order is still, and likely will be for some time to come, based on state sovereignty and law enforcement at state level. The citizen expects protection from his state and respect for culture and local habits. This dichotomy between globalisation through the Internet and local culture and jurisdiction requires urgent attention, as it risks leading to chaos and lawlessness.

Developing security, privacy and trust in the Internet is a daunting task. It can only be addressed with some effectiveness if all aspects are taken into account: the technological developments and opportunities; social acceptance and citizen awareness; and policy, regulation and law, including law enforcement. Society and government should understand technology trends to enable anticipating changes, appreciate their potential and design law and regulation appropriately. But technology must also be developed in support of law implementation and enforcement, and with respect to privacy as well as public interests. Building trust and security in the digital world requires a balanced approach of technology, socio-economic developments and international cooperation.

The EU is well placed to play a leading role in this process. It is based on democratic institutions respecting freedom and human rights; it has relatively strong
social protection mechanisms, including its legal framework for data protection and privacy; it has a strong research and technology base in Information and Communication Technologies, as well as in other relevant disciplines; it has industrial strength in mobile communication, services, consumer industry and the smart card industry; and, above all, it has a long history of diplomacy, consensus building and cultural diversity, focusing on the use of soft power in the last 50 years in particular.

EU ACTIONS

We can distinguish two categories of action: policy actions, and research and innovation. Both are addressed in several policy areas of the European Commission. Here I can only make a limited selection, focusing on some aspects that received attention in the last year in the Directorate-General for the Information Society and Media and that are particularly relevant to multinational collaboration.

Some policy actions and international cooperation

In 2008, the Commission adopted the Communication on Critical Information Infrastructure Protection (CIIP), "Protecting Europe from large scale cyber-attacks and disruptions: enhancing preparedness, security and resilience".2 The main action lines taken in this Communication are:
1. Preparedness and Prevention: to develop tighter EU cooperation of computer emergency response teams (CERT); to develop public-private partnerships to build resilience in information infrastructures; and to establish a Forum of National (Member State) Authorities to share information and good practices in the area.
2. Detection and Response: the further development of the EU information sharing and alert system, in particular with respect to information to the public and small and medium enterprises.
3. Damage Limitation and Recovery: improvement of national contingency plans and the development of exercises for large-scale incident response and disaster recovery; development of pan-European exercises on large-scale network security incidents; and tighter cooperation between national and governmental response teams in the EU.

In April 2009, an EU ministerial conference on CIIP was held in Tallinn (Estonia). The conference recognised the relevance of CIIP for society and the economy and the need for urgent action. It focused on enhancing security and resilience of the infrastructures. The Conference also recognised the need for better cooperation within the EU and wider, improvement of wide awareness and joint responsibility of all stakeholders involved, and the need for an early warning capability. And it made a plea to strengthen the global dialogue. The EU strategy aims at:
• Establishing EU priorities for the long term.
• Developing principles and guidelines at EU level.
• Promoting such guidelines globally.
• Establishing strategic cooperation with third countries.
• Developing global exercises on recovery and mitigation of large scale incidents.

2 COM(2009)149 of 30 March 2009.
The agenda for research and innovation in Security and Trust

In the European framework for Research and Technology Development, two programmes are of relevance to security and trust. The first is the "Security Programme", which is focusing on integration of systems for security of citizens, borders and infrastructures through multi-disciplinary research. Its budget over 2007-2013 is 1.4 B€. The second is the "ICT Programme" and in particular the Objective on Trustworthy ICT. The budget of this objective is around 50 M€ per year and the research focuses on cyber security and the development of trust in the digital environment. It is this programme to which I will give some more attention, as it is the more relevant in the context of cyber conflict. Its research focuses on:
1. Trustworthy networks: infrastructures and tools for the security, resilience and trustworthiness of networks for communication, storage and computing. It includes highly distributed networked process control systems, understanding threat patterns, software assurance, cryptography and virtualisation.
2. Trustworthy services: secure infrastructures and architectures as part of the future Internet; their management, adaptability and scalability; and trustworthy dynamic services composition. Interoperable frameworks for e-ID management.

Research on these topics gives particular attention to user-centric and privacy-preserving technology for data processing in compliance with the EU legal framework. Moreover, a large scale action is in preparation that addresses a common EU e-ID and authentication framework, including current e-ID card systems at member state level. To ensure social acceptance and legal enforceability when appropriate, research will aim at developing accountability and auditing mechanisms for data services, transparency of data processing and the development of trust architectures and metrics to support this.

International cooperation in the field of information and network security is part of the research actions. In particular in this field, regular workshops are organised between researchers. This started in November 2006 and April 2007 between mostly U.S. and EU researchers. It has since been extended, in 2009, to Japan, Australia, Canada and Korea, and further expansion is expected in the coming years.
ERICE DECLARATION ON PRINCIPLES FOR CYBER STABILITY AND CYBER PEACE
DR. JODY WESTBY
Global Cyber Risk LLC, CEO, Washington, DC, USA

PROFESSOR WILLIAM A. BARLETTA
U.S. Particle Accelerator School, Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA

PREAMBLE

It is an unprecedented triumph of science that mankind, through the use of modern information and communication technologies (ICTs), now has the means to expand economic resources for all countries, to enhance the intellectual capabilities of their citizens, and to develop their culture and trust in other societies. The Internet, like science itself, is fundamentally transnational and ubiquitous in character. The Internet, and its attendant information tools, is the indispensable channel of scientific discourse nationally and internationally, offering to all the benefits of open science, without secrecy and without borders.

In the twenty-first century, the Internet and other interconnected networks (cyberspace) have become critical to human well-being and the political independence and territorial integrity of nation states. The danger is that the world has become so interconnected and the risks and threats so sophisticated and pervasive that they have grown exponentially in comparison to the ability to counter them. There is now the capability for nation states or rogue actors to significantly disrupt life and society in all countries; cybercrime and its offspring, cyber conflict, threaten the peaceful existence of mankind and the beneficial use of cyberspace.

Information and communication systems and networks underpin national and economic security for all countries and serve as a central nervous system for response capabilities, business and government operations, human services, public health, and individual enrichment. Information infrastructures and systems are becoming crucial to human health, safety, and well-being, especially for the elderly, the disabled, the infirm, and the very young. Significant disruptions of cyberspace can cause unnecessary suffering and destruction.

ICTs support tenets of human rights guaranteed under international law, including the Universal Declaration of Human Rights (Articles 12, 18 and 19) and the International Covenant on Civil and Political Rights (Articles 17, 18, and 19). Disruption of cyberspace (a) impairs the individual's right to privacy, family, home, and correspondence without interference or attacks, (b) interferes with the right to freedom of thought, conscience, and religion, (c) abridges the right to freedom of opinion and expression, and (d) limits the right to receive and impart information and ideas to any media and regardless of frontiers.
ICTs can be a means for beneficence or harm, hence also an instrument for peace or for conflict. Reaping the benefits of the information age requires that information networks and systems be stable, reliable, available, and trusted. Assuring the integrity, security, and stability of cyberspace in general requires concerted international action. THEREFORE, we advocate the following principles for achieving and maintaining cyber stability and peace:
1. All governments should recognize that international law guarantees individuals the free flow of information and ideas; these guarantees also apply to cyberspace. Restrictions should only be as necessary and accompanied by a process for legal review.
2. All countries should work together to develop a common code of cyber conduct and harmonized global legal framework, including procedural provisions regarding investigative assistance and cooperation that respects privacy and human rights. All governments, service providers, and users should support international law enforcement efforts against cyber criminals.
3. All users, service providers, and governments should work to ensure that cyberspace is not used in any way that would result in the exploitation of users, particularly the young and defenseless, through violence or degradation.
4. Governments, organizations, and the private sector, including individuals, should implement and maintain comprehensive security programs based upon internationally accepted best practices and standards and utilizing privacy and security technologies.
5. Software and hardware developers should strive to develop secure technologies that promote resiliency and resist vulnerabilities.
6. Governments should actively participate in United Nations' efforts to promote global cyber security and cyber peace and to avoid the use of cyberspace for conflict.
SESSION 3
POLLUTION FOCUS: INTEGRATING ENVIRONMENTAL HEALTH RESEARCH AND CHEMICAL INNOVATION
FOMENTING NEW OPPORTUNITIES TO PROTECT HUMAN HEALTH
JOHN PETERSON MYERS, PH.D.
Environmental Health Sciences, Charlottesville, Virginia, USA

The environmental health sciences are mid-stream in a revolution of Kuhnian proportions (Kuhn, 1962). It promises to change how we evaluate the health and ecological risks of chemical exposure, and it holds the potential of significant reductions in the human disease burden. It may even be of a magnitude to help reduce health care costs. At the core of this revolution, made possible by the engagement of a wide array of scientific disciplines that includes genetics, reproduction, molecular biology, developmental biology, immunology, etc., in addition to toxicology, is the discovery that some common contaminants, well within the range of common human exposures, can alter genetic, epigenetic and non-genomic pathways that are central to basic developmental and physiological processes. Data, methodologies and insights from these disciplines outside of traditional toxicology have been the principal source of the anomalies that are now forcing major shifts in thinking on how to assess the risks of exposure.

To date, most of the evidence driving this revolution is from studies of laboratory animals and experiments with cells, including human cells. As this evidence deepens and as the mechanistic understanding of contaminant-induced disease etiology expands, epidemiologists are creating and testing models that reflect the underlying biology, and finding results often consistent with predictions from the laboratory science (e.g., Swan, 2008). The combination of extensive mechanistic understanding, based on laboratory experiments, coupled with robust associations between human health effects and exposures documented through epidemiology, provides extensive new guidance about the limits of current approaches to regulation, and identifies many opportunities where human health can be protected. Four overarching themes have emerged from scientific advances over the past 10 years:

1. Extremely low doses can cause serious adverse effects. Some contaminants can alter cellular processes at extremely low concentrations, concentrations that would have been dismissed as largely irrelevant only 15 years ago. Many laboratory studies now show this repeatedly and reliably. The mechanistic pathways include direct interference with the control of gene expression through interaction with hormone receptors, alteration of epigenetic programming mechanisms so that genes respond differently to biological control mechanisms, and action through "non-genomic" pathways via, for example, binding with recently discovered cell surface membrane receptors (e.g., Wozniak et al. 2005). With the advent of extensive biomonitoring programs, particularly that of the U.S. Centers for Disease Control (the National Health and Nutrition Examination Survey, NHANES; CDC, 2003), epidemiologists now have extensive data sets to mine that can be explored for associations between
exposures experienced by the general public and different health endpoints. The NHANES surveys are carefully structured to be representative of the general public. They now assay chemical concentrations in appropriate biological samples (urine or serum) for over 200 chemicals and gather data on a wide array of health endpoints. For most of the chemicals monitored, the prevailing levels of contamination revealed by the NHANES surveys are well beneath the concentrations in animal fluids and tissues normally induced by standard toxicity testing, which is structured around high doses so that adverse effects, if they occur, are more likely to be detected with manageable sample sizes of animals. Most are well below the levels these standard regulatory tests suggest are hazardous, often thousands of times lower. These toxicological standards would predict that few, if any, associations should be found in the NHANES data set between contamination and disease. Yet as the data began to be mined, epidemiologists began to report significant associations (e.g., Lee et al. 2005; Blount et al. 2006; Lang et al. 2008). The same is true for an ever-growing number of case-control studies that have explored associations predicted by the recent animal/cell literature (for example, Swan, 2008). The fact that biomonitoring studies increasingly find associations at levels far beneath exposure levels that classical toxicology identifies as problematic constitutes a significant Kuhnian anomaly, but it is consistent with the new generation of experiments focusing on adverse effects of low doses.

2. While mixtures are ubiquitous, the vast majority of research into chemical toxicity has focused on experiments with single chemicals. And with only a few exceptions, safety standards for chemical exposure do not take into account the ubiquity of multiple simultaneous and sequential exposures. Major advances are now underway in the effort to understand how chemical contaminants interact in biological systems. Numerous studies have now shown that chemicals can work together when present as mixtures to cause effects greater than the effect of the chemicals by themselves (Cory-Schlechta et al. 2008; Christiansen et al. 2009). Indeed, several rigorous experiments have demonstrated that mixtures composed of chemicals, each at a concentration too low to cause a detectable response, will induce measurable effects (e.g., Rajapakse et al. 2002; Christiansen et al. 2008; Howdeshell et al. 2008); a simple formulation of this mixture arithmetic is sketched after this list. This is of considerable concern (Kortenkamp et al. 2007), because biomonitoring efforts like NHANES (CDC, 2003) indicate that people are likely to be carrying measurable amounts of hundreds, or more, of chemical contaminants in their tissues and fluids at any given time. The potential for interaction among these is quite large.

3. Data increasingly support the hypothesis that exposure to chemical contaminants during fetal and early life is contributing to adult disease, including multiple diseases that have become epidemic in humans today. The first convincing evidence for the fetal origins of adult disease arose from research on the pharmaceutical estrogen diethylstilbestrol (DES), which was
administered to millions of pregnant women in the belief that it helped to manage difficult pregnancies (Colborn et al. 1996). In 1971, an alert physician discovered that DES was causing a cancer, adenocarcinoma of the vagina (normally rare and restricted to elderly women), to develop in women in their late teenage years following exposure in the womb (Herbst, 1971). Extensive experimentation with rodents provided mechanistic insights and extended the impacts to many additional endpoints (Bern, 1996), which were then confirmed through epidemiological studies of people (Herbst and Bern, 1981). While DES offers the strongest example of human harm caused by exposure to endocrine-disrupting contaminants (EDCs) during fetal development, a robust animal literature, supported by substantial but still incomplete understanding of underlying molecular mechanisms, combined with a small but growing pool of epidemiological studies indicates the DES model will be the rule for EDCs, not the exception. Other EDCs include contaminants that, like DES, are estrogenic; they also include chemicals that disrupt androgen- and thyroid-mediated processes, glucose regulation, neuroendocrine signaling mechanisms, etc. Virtually every hormonal signaling system that has been studied carefully has been found vulnerable to endocrine disruption.

Increased focus on the fetal origins of adult disease has revealed that traditional epidemiological approaches to assessing the risk of chemical exposures are highly likely to have led to false negatives, i.e., false assurances of safety (Birnbaum and Fenton, 2003). For example, extensive research on the association between several persistent organic pollutants, specifically DDT, DDE and PCBs, typically fails to find any consistent relationship with risk of breast cancer (e.g., Gammon et al. 2002). These studies, however, analyze the association between measures of contamination from samples obtained after cancer diagnosis, often in the fifth decade or later of a woman's life, long after her breast tissues passed through the periods of development that animal experiments suggest would be most vulnerable to exposures, in the womb and during puberty. In contrast, the first study of the association between developmental exposure to DDT and adult breast cancer (Cohn et al. 2007) found a strong elevation of risk. Women exposed to elevated levels of DDT prior to puberty were over 5 times more likely to develop breast cancer. No elevation of risk occurred in women whose exposure occurred only after puberty. This is precisely what animal experiments predict.

4. High doses don't predict low-dose effects. The core assumption of regulatory toxicology is that high dose testing can be used to assess the hazards of low-dose exposures (White et al. 2009). This is based upon a 16th Century observation by Paracelsus that has become paraphrased as "the dose makes the poison" (Gallo, 1996). While underpinning all regulatory testing for setting exposure safety standards, this assumption is directly contradicted by decades of basic research in endocrinology (Myers et al. 2009): Hormones, and by implication contaminants that act like hormones (i.e., EDCs), can have effects at low doses that are completely unpredictable by high dose testing.
This is because the dose-response curves for hormones and EDCs often are not monotonic; instead they are biphasic (non-monotonic). Different mechanisms contribute to these non-monotonic dose-response curves. For example, at high levels a hormone can be overtly toxic, shutting down gene expression, whereas at low levels, many orders of magnitude beneath the toxic dose, it will upregulate gene expression. As endocrinologists have studied the consequences of exposures to EDCs, many examples of non-monotonicity have been documented (Myers et al. 2009). Yet the standard procedures for testing chemical safety have not been changed to reflect these findings.
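To make points 2 and 4 above more concrete, two illustrative formulations follow. They are sketches of standard textbook forms, not models taken from the studies cited above.

For mixtures (point 2), the concentration-addition (dose-addition) model predicts that a mixture of $n$ components produces effect level $x$ when

\[ \sum_{i=1}^{n} \frac{c_i}{EC_{x,i}} = 1, \]

where $c_i$ is the concentration of component $i$ in the mixture and $EC_{x,i}$ is the concentration at which component $i$ alone produces effect $x$. Under this rule, components each present far below their individual effect thresholds can still sum to a measurable joint effect, which is the pattern reported in the mixture experiments cited above.

For non-monotonic dose response (point 4), one simple illustrative form, assumed here only for exposition, multiplies a low-dose activation term by a high-dose inhibition term:

\[ R(d) = R_{\max}\,\frac{d}{d + K_a}\cdot\frac{K_i^{h}}{d^{h} + K_i^{h}}, \qquad K_a \ll K_i, \]

so that the response $R(d)$ rises with dose $d$ while receptor-mediated activation dominates, then falls again as toxicity or receptor down-regulation takes over, producing the inverted-U (biphasic) shape described in the text. High-dose testing samples only the descending limb of such a curve and therefore cannot predict the low-dose behavior.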
1. Anastas, P.T. and J.C. Warner. (1998) Green Chemistry: Theory and Practice. Oxford University Press.
2. Bern, H.A. (1992) "The fragile fetus." Pp 9-15 in Colborn, T. and C. Clement (eds), Chemically-induced Alterations in Sexual and Functional Development: The Wildlife/Human Connection. Advances in Modern Environmental Toxicology, Volume XXI (ed. M.A. Mehlman). Princeton Scientific Publishing Co., Princeton, NJ.
3. Birnbaum, L.S. and S.E. Fenton. (2003) "Cancer and Developmental Exposure to Endocrine Disruptors." Environmental Health Perspectives 111:389-394.
4. Blount, B.C., J.L. Pirkle, J.D. Osterloh, L. Valentin-Blasini and K.L. Caldwell. (2006) "Urinary perchlorate and thyroid hormone levels in adolescent and adult men and women living in the United States." Environmental Health Perspectives 114:1865-1871.
5. CDC (Centers for Disease Control and Prevention). (2003) About the National Health and Nutrition Examination Survey. National Center for Health Statistics. Available: http://www.cdc.gov/nchs/nhanes/about_nhanes.htm [accessed 7 August 2009].
6. Christiansen, S., M. Scholze, M. Axelstad, J. Boberg, A. Kortenkamp and U. Hass. (2008) "Combined exposure to anti-androgens causes markedly increased frequencies of hypospadias in the rat." Int. J. Androl. 31(2):241-248.
7. Christiansen, S., M. Scholze, M. Dalgaard, A.M. Vinggaard, M. Axelstad, A. Kortenkamp and U. Hass. (2009) "Synergistic disruption of external male sex organ development by a mixture of four anti-androgens." Environmental Health Perspectives doi:10.1289/ehp.0900689.
8. Cohn, B.A., M.S. Wolff, P.M. Cirillo and R.I. Sholtz. (2007) "DDT and breast cancer in young women: New data on the significance of age at exposure." Environmental Health Perspectives 115:1406-1414.
9. Colborn, T., D. Dumanoski and J.P. Myers. (1996) Our Stolen Future. Dutton, NY.
10. Cory-Slechta, D. et al. (2008) "Phthalates and cumulative risk assessment: The task ahead." National Research Council. Available: http://www.nap.edu/catalog/12528.html [accessed 13 August 2009].
11. Finkelstein, E.A., J.G. Trogdon, J.W. Cohen and W. Dietz. (2009) "Annual medical spending attributable to obesity: Payer- and service-specific estimates." Health Affairs 28:w822-w831.
12. Gallo, M.A. (1996) "History and Scope of Toxicology." In: Casarett & Doull's Toxicology (Klaassen, C.D., ed). New York: McGraw-Hill, 4-11.
13. Gammon, M.D., M.S. Wolff, A.I. Neugut, S.M. Eng, S.L. Teitelbaum, J.A. Britton, M.B. Terry, B. Levin, S.D. Stellman, G.C. Kabat, M. Hatch, R. Senie, G. Berkowitz, H.L. Bradlow, G. Garbowski, C. Maffeo, P. Montalvan, M. Kemeny, M. Citron, F. Schnabel, A. Schuss, S. Hajdu, V. Vinceguerra, N. Niguidula, K. Ireland and R.M. Santella. (2002) "Environmental Toxins and Breast Cancer on Long Island. II. Organochlorine Compound Levels in Blood." Cancer Epidemiology Biomarkers & Prevention 11:686-697.
14. Herbst, A.L., H. Ulfelder and D.C. Poskanzer. (1971) "Adenocarcinoma of the vagina: association of maternal stilbestrol therapy with tumor appearance in young women." NEJM 284:878-881.
15. Herbst, A.L. and H.A. Bern. (1981) Developmental Effects of Diethylstilbestrol (DES) in Pregnancy. New York: Thieme-Stratton.
16. Howdeshell, K.L., V.S. Wilson, J. Furr, C.R. Lambright, C.V. Rider, C.R. Blystone, A.K. Hotchkiss and L.E. Gray, Jr. (2008) "A mixture of five phthalate esters inhibits fetal testicular testosterone production in the Sprague-Dawley rat in a cumulative, dose-additive manner." Toxicological Sciences 105(1):153-165.
17. Kortenkamp, A., M. Faust, M. Scholze and T. Backhaus. (2007) "Low level exposure to multiple chemicals: Reason for human health concerns?" Environmental Health Perspectives 115:106-114.
18. Kuhn, T.S. (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press. ISBN 0-226-45808-3.
19. Myers, J.P., R.T. Zoeller and F. vom Saal. (2009) "A clash of old and new concepts in toxicity, with important implications for public health." Environmental Health Perspectives doi:10.1289/ehp (online 30 July).
20. Rajapakse, N., E. Silva and A. Kortenkamp. (2002) "Combining xenoestrogens at levels below individual no-observed-effect concentrations dramatically enhances steroid hormone action." Environmental Health Perspectives 110:917-921.
21. Swan, S.H. (2008) "Environmental phthalate exposure in relation to reproductive outcomes and other health endpoints in humans." Environ. Res. 108(2):177-184.
22. White, R.H., I. Cote, L. Zeise, M. Fox, F. Dominici, T.A. Burke, et al. (2009) "State-of-the-science workshop report: issues and approaches in low-dose-response extrapolation for environmental health risk assessment." Environmental Health Perspectives 117(2):283-287.
23. Wozniak, A.L., N.N. Bulayeva and C.S. Watson. (2005) "Xenoestrogens at Picomolar to Nanomolar Concentrations Trigger Membrane Estrogen Receptor-alpha-Mediated Ca2+ Fluxes and Prolactin Release in GH3/B6 Pituitary Tumor Cells." Environmental Health Perspectives.
GREEN CHEMISTRY: A NECESSARY STEP TO A SUSTAINABLE FUTURE
JOHN C. WARNER
Warner Babcock Institute for Green Chemistry
Wilmington, Massachusetts, USA

The pursuit of knowledge and the development of technology have had an interesting and complicated relationship throughout history. The dependency of one upon the other, and the simultaneous independence of one from the other, can be seen as paradoxical. Human society has developed institutions for purposes of facilitation, creation, dissemination or control (in one form or another) with respect to knowledge and technology. Academia, government and industry have historically played important roles in the development and use of, and thus the relationship between, knowledge and technology. More recently, non-governmental organizations (NGOs) have begun to have significant impact as well.1 Through the input, support and participation of the four sectors (academia, government, industry and NGOs), knowledge in the form of scientific models, policies, laws, regulations and other forms has been emerging. Technological advances in many different forms, including materials, medicinal agents and energy sources, have been developed as well and continue to develop. These four sectors have also at times had a negative impact on development through various forms of control and suppression. This four-way dynamic equilibrium of "pushes" and "pulls" provides an adaptive system that can change over time in response to various circumstances.

The specific requirements and consequences of the four sectors' interactions with the knowledge/technology dichotomy have produced countless profound benefits to society. Yet it must be acknowledged that unintended consequences have also occurred that have, or have the potential for, grave consequences for the planet and its inhabitants. An unintended consequence is, by definition of the word "unintended", a result of the deployment of a technology while being ignorant or unaware of an impact that will manifest in an unanticipated scenario. If the individual or organization deploying the technology is aware of negative consequences, and exposes the planet and its inhabitants while in possession of this knowledge, then the word "unintended" is inappropriate and raises major ethical and legal issues that fall outside the scope of this discussion. When a consequence is truly unintended, however, there are two possible origins of such ignorance. In one case, the knowledge of potential unintended consequences might be unknown to the scientific community, and so has no chance of being anticipated by anyone. In the second case, the knowledge of potential unintended consequences might be known somewhere in the scientific community, but the deployer of the technology is unaware of this knowledge. Addressing these two cases requires different approaches. In the case of the knowledge being unknown to the scientific community, a rational, well-planned research program is required. Creation of this knowledge, however, is not sufficient. Communication of this knowledge to the designers of technology is necessary.
There are many issues facing society regarding the unintended consequences of technology. Social equity, warfare, human rights, and economic disparity are among many problems that revolve around the knowledge/technology relationship. This paper specifically deals with the sustainability of the chemical enterprise with respect to chemicals and chemical processes.

In the past several years there has been growing concern in society about the impacts of anthropogenic materials on human health and the environment. In recent months, concerns over phthalate plasticizers,2 bisphenol A polycarbonate monomers,3 brominated flame retardants4 and others have been widely discussed. While there remains some debate over the interpretation of some specific reports, there is general agreement that several environmental health issues in the human population are initiated or exacerbated by industrial materials in the environment. There is widespread concern about the unintended consequences of chemicals in the environment.

The field of mechanistic toxicology is far from mature. Recent discoveries are challenging age-old paradigms of dose response, particularly in the realm of endocrine-active materials.5 Structure-activity relationships are constantly being elucidated for hazardous processes ranging from human toxicity through atmospheric accumulation. While this knowledge is being generated, there exists a failure in our current scientific systems to effectively transfer it to the designers and deployers of technology. Our current training of chemists and materials scientists around the world does not include any significant information regarding toxicological or environmental mechanisms of harm. The opportunity to avoid unintended consequences relies on providing designers of materials and processes with ample information to anticipate negative impacts. If the scientists designing materials and processes are unaware and ignorant of the knowledge of mechanisms of harm, it is essentially impossible for them to design products that avoid hazard.

A simple review of the courses that chemistry undergraduate and graduate students world-wide are required to take will reveal an alarming lack of any such courses. Somewhat universally required courses include General Chemistry, Structural Chemistry, Organic Chemistry, Analytical Chemistry, Thermodynamics, Kinetics, Quantum Chemistry, Inorganic Chemistry, Biochemistry and Instrumentation. Most universities require some combination of classes that include these courses. Classes in mechanistic toxicology and environmental mechanisms of harm are absent from virtually all required curricula (unless the student is specializing specifically in an environmental or toxicological program). Issues of toxicity and environmental protection are addressed at most universities from the perspective of compliance. Students are trained in the proper handling and labeling of chemicals, and in the use of appropriate safety and protection equipment. Often some governmental oversight process mandates this training. But molecular-level information on what makes materials hazardous, and on how one can design materials to avoid the use or generation of hazard, is simply not provided to students.

Green Chemistry is the proactive molecular-level science designed to fill this gap.6 Green Chemistry is the design of chemical products and processes that reduce or eliminate the use and/or generation of hazardous substances. Critical is the existence of intent.
What is currently absent from the chemical endeavor is the deliberate design to avoid or eliminate hazard. While much research has occurred in the past that led to technologies that are serendipitously less hazardous, Green Chemistry requires the purposeful intent to integrate safety into the design process.

It is important to understand that Green Chemistry is active pollution prevention. It is not an aspirational philosophy or a theoretical exercise. For a technology to be truly Green Chemistry it must actively reduce pollution. While the reduction of impacts on human health and the environment is requisite, in order to accomplish the goals of pollution prevention a technology must be successfully deployed in the real world. Thus superior product performance and competitive cost are also necessary for a material to be considered an example of Green Chemistry. This is of course quite difficult, but an increasing number of illustrative examples of successful implementation exist.

The twelve principles of Green Chemistry serve as a set of guidelines for how a designer of a material or process can anticipate and avoid negative impacts.6 The twelve principles, listed below, create a basis from which sustainability can be achieved at the molecular level. Extended descriptions of these principles exist elsewhere.7 A quick glance shows specific, objective, molecular-level approaches. Addressed are opportunities to integrate knowledge of energy usage, catalysis, solvents, renewable feedstocks, environmental hazard, workplace safety, and atmospheric disruption. This is all accomplished at the design bench in the chemistry lab.

Green Chemistry does not signal the emergence of ethics and responsibility in the chemical sciences. Chemists have always cared deeply about the impacts of their work on human health and society. The revolution that is Green Chemistry is a shift in focus from exposure-based mitigation of risk to one that focuses on the intrinsic hazard of a material. Traditionally chemists have worn gloves to protect against dermal exposure, worn masks to protect against inhalation exposure, and worn goggles to protect their eyes from accidents. Scrubbers and filters have been installed in factories to protect the land, the air and the sea. These exposure control technologies have served society well, but they accept as inevitable that the materials of concern must be hazardous. Green Chemistry attempts to address the issue of hazard at the molecular level by designing a material or process that is intrinsically safe in the first place, rather than relying on a second technology to protect human health and the environment.

At present the vast majority of materials in commerce have some non-sustainable component with respect to the properties of the material itself or the manufacturing process leading to the material. Unreasonable energy use, reliance on non-renewable feedstocks, excessive generation of waste, and lack of degradation after use are all likely negative impacts of existing technologies. While specific quantitative assessments would be difficult, it is reasonably safe to estimate that the majority of materials and processes (perhaps as many as 90%) have some aspect of their life-cycle that has a negative impact on human health and the environment. It is critical that new technologies invented in the future avoid as much of this hazard as possible.
Because Green Chemistry recognizes the necessity for superior product performance and cost, in addition to reduced impact on human health and the environment, it is likely that once discovered, new technologies will be adopted and successful in the marketplace. The costs associated with the use of hazardous materials are increasing every year as various government agencies increase restrictions and
regulations. Society cannot wait for government bans of hazardous materials to keep people safe. It is morally and ethically required that scientists take responsibility for their materials at the design stage. The only barrier preventing the development of Green Chemistry from happening at a greater rate is the lack of training of scientists to address these issues. Once proper training is integrated across the curriculum, significant gains will become a reality.

This revolution will not happen overnight. There is much work to do. It will take generations for chemistry to make significant progress. Incremental advances will be required for a long time to come. And we must accept that no technology can have absolutely no impact on human health and the environment. Green Chemistry itself is an elusive goal of perfection. The incremental pursuit of Green Chemistries will be required, and we cannot allow perfection to stand in the way of excellence.

Industrial pollution is one of many problems facing society today. Like all of these problems, the issues surrounding industrial pollution are numerous and complex. But unlike many of these problems, there is a real first step that can be taken. If the void in our academic model can be filled, and chemists given some introductory information regarding mechanisms of toxicity and environmental harm, immediate benefit to society will take place. There is a moral and ethical necessity for academic institutions that grant degrees in chemistry to update their curricula. In order to adequately prepare students to work in industry, to send them off on their careers to invent society's next generation of materials, they must have this information. The scientific community should find a way to help facilitate this process.

THE TWELVE PRINCIPLES OF GREEN CHEMISTRY

1. Prevention. It is better to prevent waste than to treat or clean up waste after it is formed.
2. Atom Economy. Synthetic methods should be designed to maximize the incorporation of all materials used in the process into the final product (a worked example of how this is quantified follows the list).
3. Less Hazardous Chemical Synthesis. Whenever practicable, synthetic methodologies should be designed to use and generate substances that possess little or no toxicity to human health and the environment.
4. Designing Safer Chemicals. Chemical products should be designed to preserve efficacy of function while reducing toxicity.
5. Safer Solvents and Auxiliaries. The use of auxiliary substances (solvents, separation agents, etc.) should be made unnecessary whenever possible and, when used, innocuous.
6. Design for Energy Efficiency. Energy requirements should be recognized for their environmental and economic impacts and should be minimized. Synthetic methods should be conducted at ambient temperature and pressure.
7. Use of Renewable Feedstocks. A raw material or feedstock should be renewable rather than depleting whenever technically and economically practical.
8. Reduce Derivatives. Unnecessary derivatization (blocking groups, protection/deprotection, temporary modification of physical/chemical processes) should be avoided whenever possible.
9. Catalysis. Catalytic reagents (as selective as possible) are superior to stoichiometric reagents.
10. Design for Degradation. Chemical products should be designed so that at the end of their function they do not persist in the environment and instead break down into innocuous degradation products.
11. Real-time Analysis for Pollution Prevention. Analytical methodologies need to be further developed to allow for real-time, in-process monitoring and control prior to the formation of hazardous substances.
12. Inherently Safer Chemistry for Accident Prevention. Substances, and the forms of substances, used in a chemical process should be chosen so as to minimize the potential for chemical accidents, including releases, explosions, and fires.
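To make Principle 2 concrete, atom economy is commonly quantified as the share of the reactants' combined molar mass that ends up in the desired product; this is the standard textbook definition rather than a formula given in the text above:

\[
\text{atom economy} \;=\; \frac{M_{\text{desired product}}}{\sum_{i} M_{\text{reactant }i}} \times 100\% .
\]

As a purely hypothetical illustration, if reactants totalling 180 g/mol combine in an addition reaction to give a single 180 g/mol product, the atom economy is 100%; if instead a substitution expels an 80 g/mol leaving group, only 100/180, roughly 56%, of the input mass is retained in the product, and the remainder is inherently waste.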
REFERENCES
1. Hawken, P. (2007) Blessed Unrest. Viking Press, NY.
2. Sabbieti, M.G., D. Agas, G. Santoni, S. Materazzi, G. Menghi and L. Marchetti. (2009) "Involvement of p53 in phthalate effects on mouse and rat osteoblasts." J. Cell. Biochem. 107(2):316-327.
3. Carwile, J.L., H.T. Luu, L.S. Bassett, D.A. Driscoll, C. Yuan, J.Y. Chang, X. Ye, A.M. Calafat and K.B. Michels. (2009) "Polycarbonate Bottle Use and Urinary Bisphenol A Concentrations." Environ. Health Persp. 117(9):1368-1372.
4. Birnbaum, L.S. and D.F. Staskal. (2004) "Brominated Flame Retardants: Cause for Concern?" Environ. Health Persp. 112:9-17.
5. Myers, J.P., R.T. Zoeller and F.S. vom Saal. (2009) "A Clash of Old and New Scientific Concepts in Toxicity, with Important Implications for Public Health." Environ. Health Persp. xx, xxx-xxx.
6. Anastas, P.T. and J.C. Warner. (1998) Green Chemistry: Theory and Practice. Oxford University Press.
7. Warner, J.C., A.S. Cannon and K. Dye. (2004) "Green Chemistry." J. Env. Impact Assess. 24:775-799.
HEALTH IMPACT OF ENVIRONMENTAL CHEMICALS: NEED FOR GREEN CHEMISTRY
JERROLD J. HEINDEL, PH.D.
National Institute of Environmental Health Sciences
Research Triangle Park, North Carolina, USA

It is now clear that all complex diseases are the result of gene-environment interactions. Thus, to truly understand the etiology of disease it is critical to understand not just the genetics of the disease but also the role of environmental exposures. Indeed, there are tens of thousands of chemicals in commerce today, and many have been shown to be toxic in animal studies. This means it is critical to understand what chemicals humans are exposed to and when, and how and why these exposures can interact with an organism's genetics to cause or exacerbate disease and dysfunction.

Over the past decades a growing body of evidence suggests that a subset of the chemicals in commerce today, both man-made and natural, can interfere with the endocrine system. These chemicals, called endocrine disrupting chemicals (EDCs), interfere with the production or activity of hormones of the endocrine system, leading to adverse health effects. These chemicals can mimic, partly mimic, or antagonize naturally occurring hormones in the body like estrogens (female sex hormones), androgens (male sex hormones) and thyroid hormones, potentially producing over- or under-stimulation. Environmental chemicals with estrogenic activity are probably the best studied. However, chemicals with anti-estrogen, progesterone, anti-androgen or anti-thyroid-like activity have also been identified (Diamanti-Kandarakis et al. 2009).

It is not clear how many chemicals are endocrine disruptors, but we do know that there are hundreds, including herbicides (atrazine, linuron, nitrofen), fungicides (benomyl, maneb, vinclozolin), insecticides (chlordane, DDT, endosulfan, methoxychlor, parathion, toxaphene), metals (cadmium, mercury, lead, manganese), and numerous industrial chemicals (dioxin, bisphenol A, polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs), perfluorooctanoic acids (PFOAs), phthalates, styrene, perchlorate). Some of these chemicals, especially dioxin, PBDEs, and PCBs, are persistent, lasting in human tissues for many years, even decades, after the initial exposure. Others, while not persistent, appear to be so ubiquitous that there is constant exposure (bisphenol A, phthalates).

Humans are exposed to mixtures of these and other EDCs every day. Exposure is ubiquitous, from plastic containers, food can linings, pipes and tubing, to cosmetics, recycled paper, suntan lotions and disinfectant soaps. The CDC has reported that humans are exposed to over 200 chemicals that can be measured in their urine (CDC, 2005). While human exposures are to very low levels, laboratory experiments show that these EDCs can act at the very low doses within the range of exposures experienced commonly by wildlife and humans. They interact with receptor systems that are sensitive to parts-per-billion (and lower) concentrations of hormones and lead to alterations in the signaling pathways that disrupt cellular function, which animal studies have shown can lead to disease and dysfunction. The Chapel Hill Consensus Statement, signed by 38 environmental scientists, noted that the commonly reported circulating levels of the
chemical bisphenol A in humans exceed the circulating levels extrapolated from acute exposure studies that caused a wide range of adverse effects in laboratory animals (vom Saal et al. 2007).

Another important aspect of EDC action is that their dose-response curves are not linear but are non-monotonic (Myers et al. 2009). This is due to the fact that they act via specific receptors which bind to response elements on the DNA and stimulate transcription of new proteins. These receptors are sensitive to low levels of hormones (and thus of EDCs), and at high levels there is a feedback system that shuts down the receptor system. At low levels, different for each EDC, the EDC can bind to the receptor and either stimulate or inhibit its actions, resulting in an alteration of physiology and leading to some form of toxicity or sensitivity to disease. At higher doses there may be very different effects, including non-specific effects (not mediated via hormone receptors) of the EDCs. Thus, one cannot extrapolate low-dose effects of EDCs (within the human-exposure range) from high-dose studies. It is necessary to run detailed dose responses to look for low-dose effects not seen at high doses (i.e., non-monotonic dose-response curves). This has not previously been done, which greatly limits confidence in the safety standards derived from studies that only examined very high doses of EDCs.

Sensitivity to EDCs varies extensively with life stage, indicating that there are specific windows of increased sensitivity at multiple life stages. Thus, it is essential to assess the impact of EDCs across the lifespan. The most sensitive life stage is early development (in utero and the first few years of postnatal life). A complex series of events is involved in the development of the mammalian fetus and neonate; in order to go from a single cell to a fully developed organism containing over one trillion cells composed of over 300 different cell types at birth, a number of well-orchestrated events are required. Processes including cell division, proliferation, differentiation, and migration are all involved and are closely regulated by hormonally active substances that communicate information between specializing cells, tissues and organs.

Over the past 50-plus years, embryonic and fetal development was thought to occur by the "unfolding of a rigid genetic program" in which environmental factors played no significant role (for review see Soto et al. 2008). However, this strict interpretation of developmental events has been challenged because numerous experimental and epidemiological studies point out the extreme developmental plasticity of the fetus and neonate. In fact, it is becoming increasingly apparent that environmental factors such as nutrition, external stressors and toxicants can dramatically alter developmental programming cues. This represents a major paradigm shift in developmental biology/toxicology and focuses attention on the role of environmental factors in fetal growth and development. Professor Howard Bern coined the term "the fragile fetus" to denote the extreme vulnerability of the developing organism to perturbation by environmental chemicals, in particular those with hormone-like activity (Bern, 1992). He pointed out that rapid cell proliferation and cell differentiation coupled with complex patterns of cell signaling contribute to its unique sensitivity.
Further, fetuses and neonates have a high metabolic rate and liver metabolism is incompletely developed as compared to adults; fetuses also have an under-developed immune system, lack many detoxifying enzymes, and the blood/brain barrier is not fully functional, making them more prone to chemical insult. Exposure to environmental chemicals during development can result in death in the most
severe cases or in structural malformations and/or functional alterations in the embryo or fetus. Unlike adult exposures, which can result in reversible alterations, exposure to environmental chemicals or other factors during critical windows of development can cause irreversible consequences. It is now clear that the reason developmental exposures can cause irreversible changes that don't show up until later in life is that these exposures can modify epigenetic programming.

Epigenetics means "on top of" and refers to a system that controls gene expression, not via alterations in the genetic code, but via changes to molecular mechanisms that hinder or facilitate the up- and down-regulation of gene expression. These include alterations in the methylation of clusters of cytosine-guanine bases, referred to as CpG islands, and changes in the histone proteins that the DNA is wound around. Changes in methylation and changes in histone proteins can prevent nuclear receptors from binding to their response elements on the DNA and initiating transcription. This epigenetic system is most active during development, when genes are turned on and off as cells differentiate into specific tissues. The epigenetic control of gene expression during development that then remains throughout life is termed epigenetic programming; genes are programmed to be on or off and, if programmed to be on, their rate of transcription is also programmed. The consequence is that changes in the epigenetic control of gene expression during development can remain throughout life. If EDCs alter this process, then genes can be inappropriately turned on or off, thereby making the tissue "abnormal" at the cellular level and thus more susceptible to disease and dysfunction later in life (Heindel et al. 2006).

Supporting evidence for this concept developed independently in the field of environmental chemical exposure, specifically developmental toxicology, where it was recognized that between 2 and 5% of all live births have major developmental abnormalities. Up to 40% of these defects have been estimated to result from maternal exposures to harmful environmental agents that impact the intrauterine environment (Heindel, 2006). Although a spectrum of adverse effects can occur, ranging from fetal death or frank structural malformations to functional defects which may not be readily apparent, the latter, which result in increased susceptibility to disease/dysfunction later in life, may be the most common. Functional defects are those that only show up at a cellular or molecular level and are usually due to alterations in gene expression that lead to cells that, because of the altered proteins, are more susceptible to disease. These defects are the most difficult to detect because years may pass between exposure and detection of the abnormality.

There are numerous examples in experimental animals and wildlife populations documenting that perinatal exposure to EDCs can cause functional changes (probably due to alterations in gene expression) which alter the developing organism and cause long-term effects including infertility/subfertility, retained testes, altered puberty, premature menopause, endometriosis, uterine fibroids, ADHD, cognitive problems, neurodegenerative diseases, cardiovascular and respiratory diseases, obesity/diabetes/metabolic syndrome, as well as immune problems and increased cancer rates.
Thus, in animal models, developmental exposures to low, environmentally relevant doses of EDCs can result in many of the most common and devastating human diseases (Newbold and Heindel, 2009; Diamanti-Kandarakis et al. 2009; Crain et al. 2008). Awareness of developmental sensitivity was also noted in the nutrition field,
where epidemiology studies showed that "low birth weight" babies resulting from poor nutrition of their mothers had a latent appearance of disease in adult life, including increased susceptibility to non-communicable diseases such as coronary heart disease, obesity/overweight, type 2 diabetes, osteoporosis, and metabolic dysfunction (together referred to as the metabolic syndrome). Chronic stress during development was also associated with similar latent responses; for example, experimental studies using Macaque monkeys demonstrated that stress during early life resulted in obesity and increased incidences of metabolic diseases later in life. Maternal smoking, another fetal stressor, was also linked to the development of obesity and disease later in life in human studies (reviewed in Gluckman et al. 2007). These studies represent some examples in the literature that have led to a substantial research effort focusing on perinatal influences and subsequent chronic disease. This concept is now called the developmental basis of disease or sometimes the developmental origins of health and disease (DOHaD) (Newbold and Heindel, 2009).

Taken together, nutritional studies describing an association of restricted fetal growth with the subsequent development of obesity and metabolic diseases, and experimental toxicology studies showing a correlation of prenatal exposure to EDCs with multiple long-term adverse effects, provide an attractive framework to understand delayed effects of toxicant exposures. For example, the "Developmental Origins of Disease" paradigm now incorporates features that are common to both nutritional and environmental exposure studies; these features include:

• Time-specific (vulnerable window) and tissue-specific effects can occur with both nutritional and environmental chemical exposures.

• The initiating in utero environmental insult (nutritional or environmental chemical) can act alone or in concert with other environmental stressors. That is, there could be an in utero exposure that by itself leads to pathophysiology later in life, or an in utero exposure combined with a neonatal exposure (the same or different environmental stressor(s)) or an adult exposure that triggers or exacerbates the pathophysiology.

• The pathophysiology can manifest as: the occurrence of a disease that otherwise would not have happened; an increase in risk for a disease that would normally be of lower prevalence; an earlier onset of a disease that would normally have occurred; or an exacerbation of the disease.

• The pathophysiology can have a variable latent period, from onset in the neonatal period, to early childhood, to puberty, to early adulthood or to late adulthood, depending on the environmental stressor, the time of exposure and the tissue/organ affected.

• Either altered nutrition and/or exposure to environmental chemicals can lead to aberrant developmental programming that permanently alters gland, organ or system potential. These states of altered potential or compromised function (regardless of the stressor, nutritional or chemical) are likely to result from epigenetic changes, e.g., altered gene expression due to effects on imprinting and the underlying methylation-related protein-DNA relationships associated with chromatin remodeling. The end result is an individual that is sensitized such that it will be more susceptible to certain diseases later in life.

• The effect of either developmental nutrition or environmental chemical exposures can be transgenerational, affecting future generations.

• While the focus of nutritional changes during development has been on low birth weight, effects of in utero exposure to toxic environmental chemicals or nutritional changes can both occur in the absence of reduced birth weight. The lack of a specific, easily measurable biomarker like birth weight makes it more difficult to assess developmental effects. Thus, for both exposures, new sensitive biomarkers of exposure are needed.

• Extrapolation of risk from both nutritional studies and environmental exposures can be difficult because effects need not follow a monotonic dose-response relationship (a minimal functional sketch of such a non-monotonic curve is given after this list). Nutritional effects that result in low birth weight are different from those that result in high birth weight. Similarly, low-dose effects of environmental chemicals may not be the same as the effects that occur at higher doses. Also, the environmental chemical and/or nutritional effects may have an entirely different effect on the embryo, fetus, or perinatal organism compared to the adult.

• Exposure of one individual to an environmental stressor (environmental chemical or nutritional or combinations) may have little effect, whereas another individual will develop overt disease or dysfunction, due to differences in genetic background, including genetic polymorphisms.

• The toxicant- (or nutritional-) induced pathogenic responses are most likely the result of altered gene expression or altered protein regulation associated with altered cell production and differentiation involved in the interactions between cell types and the establishment of cell lineages. These changes can lead to abnormal morphological and/or functional characteristics of the tissues, organs, and systems. These alterations could be due, at least in part, to altered epigenetics and the underlying methylation-related protein-DNA relationships associated with chromatin remodeling. Effects can occur in a time-specific (i.e., vulnerable window) and/or tissue-specific manner and the changes might not be reversible. The end result is an animal that is sensitized such that it will be more susceptible to specific diseases later in life.
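To make the non-monotonicity point in the dose-response item above concrete, one simple functional form that illustrates a biphasic (inverted-U) dose-response, offered here as an illustrative sketch rather than a model drawn from the studies cited in this paper, multiplies a low-dose activation term by a high-dose inhibition term:

\[
R(d) \;=\; R_{\max}\,\frac{d^{\,n}}{K_a^{\,n}+d^{\,n}}\cdot\frac{K_i^{\,m}}{K_i^{\,m}+d^{\,m}},
\qquad K_a \ll K_i ,
\]

where d is the dose, K_a is the dose at which receptor-mediated activation is half-maximal, K_i is the much higher dose at which inhibitory or overtly toxic processes dominate, and n and m are illustrative Hill coefficients. Because R(d) rises for doses near K_a and falls again above K_i, a study that tests only doses at or near K_i can record little or no effect while missing substantial responses in the low-dose, environmentally relevant region; monotonic extrapolation downward from such high-dose data is therefore unreliable.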
Obesity is a good example of a disease whose prevalence has risen dramatically in developed countries over the past two to three decades, reaching epidemic proportions in the United States. It is proposed to be caused by prolonged positive energy balance due to a combination of overeating and lack of physical activity on a background of genetic predisposition. The alarming rate of increase in obesity in only two to three decades indicates that the primary cause must lie in environmental and behavioral changes rather than genetics. Indeed, there is an emerging hypothesis that obesity has its origins during
development and is influenced by the environment. In this case, environment refers to a specific class of EDCs now called 'obesogens'. While the role of environmental chemicals in developing obesity may still be an emerging hypothesis, considerable human and animal data link altered nutrition during development to obesity later in life. Recent data lend support to the 'obesogen' hypothesis (Newbold et al. 2008; Heindel and vom Saal, 2009). Baillie-Hamilton (2002) noted a correlation of the obesity epidemic with increasing exposure to 'man-made' chemicals. Although such data are only correlational, it is tempting to speculate that there is indeed a role for increased exposures to environmental chemicals in the recent epidemic of obesity. It is well established that many substances, including anabolic steroids and DES, have been used to promote fattening and growth of animals; further, other chemicals including organophosphate pesticides, carbamates, and antithyroid drugs also cause obesity in animals. Also, there is increasing evidence in animal models that in utero exposure to environmental chemicals at environmentally relevant concentrations alters the developmental programming of adipose tissue and/or gastrointestinal-hypothalamic centers. The subsequent obesity observed in these models has been linked to irreversible alterations in tissue-specific function as a result of altered gene expression.

The most likely candidates for altering in utero tissue function in ways that may result in obesity later in life include environmental estrogens such as DES or BPA. Newbold et al. (2007) have shown that low doses of DES (1 µg/kg/day), either prenatal or neonatal, caused increased body weight in outbred mice that was not evident at birth but reached significance by 6 weeks of age. At 16 weeks of age, DES-exposed animals had a body fat of 27.6% compared with 20.9% in controls. These DES-treated mice had excessive abdominal fat, which has been reported to be associated with cardiovascular disease and diabetes in humans. These mice also had elevated levels of leptin, adiponectin, interleukin-6 and triglycerides that actually developed before the obesity was apparent. Perhaps these endpoints can be used as early biomarkers of subsequent obesity. Increased leptin levels may be due to altered leptin programming caused by the environmental chemical exposure. Neonatal exposure to other estrogens, 2-OH estradiol and 4-OH estradiol, also caused a significantly increased body weight at 4 months of age (Newbold et al. 2007), suggesting that DES is not unique, and that in utero exposures to low doses of environmental agents with estrogenic activity can alter the set point for body weight. In addition, the naturally occurring phytoestrogen genistein (an estrogenic component of soy) has also been linked to obesity (Penza et al. 2006). In fact, in utero exposure to environmentally relevant doses of BPA, which has estrogenic activity and which has been found in human fetal blood and amniotic fluid at low doses, also results in increased body weight of mice. Further, BPA has been shown in vitro to increase glucose transport in preadipocytes and, in combination with insulin, to increase conversion of mouse 3T3-L1 fibroblasts into adipocytes while also increasing lipoprotein lipase activity (reviewed in Newbold and Heindel, 2009). There are a growing number of environmental chemicals that have been shown in animal models to increase obesity later in life when administered during development.
It is therefore likely that obesity is 'set' based on nutrition and exposures to environmental chemicals during development acting on the genetic background. If this is indeed shown to be true, then the focus on obesity must be changed to prevention by reducing environmental stressors (including chemical exposures) during development, rather than
intervention once obesity has occurred.

CONCLUSIONS

It is clear that all complex diseases are due to gene-environment interactions. Since there has been an epidemic of many diseases over the past 30 years, it is likely that environmental chemicals are playing an important role in human disease. A particular class of chemicals of great concern is those that disrupt the endocrine system: endocrine disrupting chemicals. The most sensitive period of exposure to EDCs is during development. Numerous diseases in animals indicate a role for developmental exposure to EDCs in their etiology; these include reproductive problems (uterine fibroids, endometriosis, early reproductive senescence, altered fertility and sperm counts), cancers (breast, prostate), and diseases of the cardiovascular, immune, nervous (ADHD, Parkinson's disease) and endocrine (diabetes, obesity) systems. Therefore, to reduce the incidence of disease one must reduce exposures to environmental chemicals, including the EDCs. The Green Chemistry movement is critical. In order to reduce exposures to EDCs we need to find non-toxic alternatives to replace the current chemicals with EDC activity. Together, environmental health scientists, toxicologists and green chemists can, by reducing environmental exposures to toxic chemicals, significantly improve the health of people across the globe.

REFERENCES
1. Baillie-Hamilton, P.F. (2002) "Chemical toxins: a hypothesis to explain the global obesity epidemic." J Altern Complement Med 8:185-192.
2. Bern, H.A. (1992) "The fragile fetus." In: Chemically-induced Alterations in Sexual and Functional Development: The Wildlife/Human Connection (Colborn, T., Clement, C., eds). Princeton, NJ: Princeton Scientific Publishing; 9-15.
3. Crain, D.A., Janssen, S.J., Edwards, T.M., Heindel, J.J., Ho, S.M., Hunt, P. et al. (2008) "Female reproductive disorders: the roles of endocrine-disrupting compounds and developmental timing." Fertil Steril 90:911-40.
4. Diamanti-Kandarakis, E., Bourguignon, J.P., Giudice, L.C., Hauser, R., Prins, G.S., Soto, A.M., Zoeller, R.T. (2009) "Endocrine-disrupting chemicals: an Endocrine Society scientific statement." Endocr Rev 30(4):293-342.
5. Gluckman, P.D., Hanson, M.A., Beedle, A.S. (2007) "Early life events and their consequences for later disease: a life history and evolutionary perspective." Am J Hum Biol 19:1-19.
6. Grun, F., Blumberg, B. (2009) "Endocrine disruptors as obesogens." Mol Cell Endocrinol 304:19-29.
7. Heindel, J.J. (2006) "Role of exposure to environmental chemicals in the developmental basis of reproductive disease and dysfunction." Semin Reprod Med 24(3):168-177.
8. Heindel, J.J., McAllister, K.A., Worth, L., Jr., Tyson, F.L. (2006) "Environmental epigenomics, imprinting and disease susceptibility." Epigenetics 1(1):106.
9. Heindel, J.J., vom Saal, F.S. (2009) "Role of nutrition and environmental endocrine disrupting chemicals during the perinatal period on the aetiology of obesity." Mol Cell Endocrinol 304:90-96.
10. Myers, J.P., Zoeller, R.T., vom Saal, F. (2009) "A clash of old and new concepts in toxicity, with important implications for public health." Environ Health Perspect doi:10.1289/ehp (online 30 July).
11. Newbold, R.R., Heindel, J.J. (2009) "Developmental origins of health and disease: the importance of environmental exposures." In: Early Origins of Human Health and Disease (Newnham, J.P., Ross, M.G., eds). Karger Publishing, Basel, pp 41-50.
12. Newbold, R.R., Padilla-Banks, E., Snyder, R.J., Jefferson, W.N. (2007) "Perinatal exposure to environmental estrogens and the development of obesity." Mol Nutr Food Res 51:912-917.
13. Penza, M., Montani, C., Romani, A., et al. (2006) "Genistein affects adipose tissue deposition in a dose dependent and gender specific manner." Endocrinology 147:5740-5751.
14. Soto, A.M., Maffini, M.V., Sonnenschein, C. (2008) "Neoplasia as development gone awry: the role of endocrine disruptors." Int J Androl 31(2):288-293.
15. CDC (Centers for Disease Control and Prevention). (2005) Third National Report on Human Exposure to Environmental Chemicals. www.cdc.gov/exposurereport/pdf/thirdreport_summary.pdf
16. vom Saal, F.S., Akingbemi, B.T., Belcher, S.M., Birnbaum, L.S., Crain, D.A., Eriksen, M. et al. (2007) "Chapel Hill bisphenol A expert panel consensus statement: integration of mechanisms, effects in animals and potential to impact human health at current levels of exposure." Reprod Toxicol 24(2):131-138.
MOVING THE CHEMICAL ENTERPRISE TOWARD SUSTAINABILITY: KEY ISSUES
TERRY COLLINS
Thomas Lord Professor of Chemistry, Department of Chemistry
Carnegie Mellon University, Pittsburgh, Pennsylvania, USA

In the technology arena, capitalism is not working as well as is needed for a good future for our civilization. We need to be more effective in moving our energy base to the much safer territory of renewables, especially solar-to-electric and solar-to-chemical energy technologies. We need to stop expanding our global economy as if fossilized carbon were an infinite resource that can be used with impunity. And we need to recognize that the exceptionally long lifetimes and high toxicities of fission isotopes mean that a large expansion in nuclear power represents a perilous journey for our civilization. We need to stop allowing industry, with all its conflicts of interest, to have too great an influence over the regulation of toxic chemicals. In the United States, corporate influence over the instruments of government confounds decisions in sustainability areas; money has too often trumped health and the environment.

Green chemists are endeavoring to steer our civilization toward a safe course. Green science thinking envisions a reinvention of the chemical enterprise around the principles of green chemistry (GC). It brings sustainability ethics to the centers of the technological and educational stages.

It is often asserted that the modern chemical enterprise began in 1856 and 1857, when William Henry Perkin, then a teenager, patented and began commercializing the first synthetic dye. He called the purple dye "mauveine". According to the U.S. Environmental Protection Agency, about 87,000 chemicals are registered in the Toxic Substances Control Act Inventory and can be in commercial use today.1 According to the American Chemistry Council, "about 10% of the inventory, about 8,300 chemicals, is in actual commerce in significant amounts."2 While regulatory agencies around the world are supposed to ensure that these chemicals are safe, there is a growing realization that we have much further to go to achieve a safe corpus of commercial chemicals.

For most of the time since Perkin, chemicals have been commercialized under an unstated but underlying premise that they will not have a profound impact on health (or the environment) unless, as with drugs, they were designed to do so. When we have found problems with specific chemicals, the underlying premise has been backed up by the notion that ex post facto control of exposure can work well enough to protect health and the environment. The premise is fundamentally wrong. Low-dose adverse effects, particularly of everyday chemicals, highlight the many inadequacies.3 Moreover, even as individuals, groups and regulatory agencies worldwide have tried to address problems more assertively, they have encountered the well-funded and formidable defenses of interest groups who are out to protect the status quo from which they benefit.4 Most academic chemists continue to pay only fleeting attention to toxicology. Industry and academia are being challenged to accept and adapt to the new reality of low-dose adverse effects. Pete Myers, Jerry Heindel, Fred vom Saal, Bruce Blumberg and others are reminding us very clearly of the challenges at this conference. And green chemists are working to translate the critical messages for our chemistry colleagues.
As a chemist, I will reflect on a particular structural facet of how our civilization deals with toxic chemicals. This facet is almost never spoken of in the halls of academic chemistry, yet it has a huge negative influence on sustainability. I will do so because this facet must change if we are going to succeed in building a sustainable technology base.

As background, I began teaching green chemistry in 1992. I searched the literature and called scientists who were doing relevant work to get started. But as with most chemists, I had little education in toxicity and ecotoxicity. So, to try to understand how to follow the field's mission to reduce and eliminate hazardous substances, I began reading broadly in these areas, searching for good course content. First, I was inspired by the meticulous mechanistic toxicology in Casarett and Doull's "Toxicology".5 But as I dug in, I read "Our Stolen Future"6 and found deeply troubling its thesis that low doses of synthetic chemicals could alter the hormonal signals controlling cellular development. As my course developed, I became much more aware of another vital dimension to toxic chemicals, namely the power dynamic surrounding the regulation of chemicals. I read Thornton's "Pandora's Poison"4b and Markowitz and Rosner's "Deceit and Denial"4a and I watched Moyers' television program, "Trade Secrets".7 I learned that well-funded trade associations play a key role in shaping how we understand and regulate toxic chemicals. Too often, trade associations have worked to put a good face on bad toxicity problems and have frustrated healthy forward movement to remove the hazards or reduce the exposures. Pittsburghers Rachel Carson and Herbert Needleman, as well as other scientific luminaries who have alerted us to the dangers of toxic chemicals, have been fiercely opposed and even ridiculed. Knowledge of this sort of counterproductive behavior belongs in the chemical curriculum so that it will go away in the future.

Today, an ethical inadequacy still pervades much of the chemical enterprise, as recently illustrated with bisphenol A.8 With endocrine disruptors, the welfare of our descendants is on the chopping block. Bullying thought leaders who find inconvenient truths and producing flawed studies does not help to build a civilization worth handing down to our progeny. Sustainability ethics should be taught in all disciplines in our universities so that we might handle more constructively the powers over the ecosphere that science and technology bring to us.

John Warner has introduced the Anastas and Warner "Principles of Green Chemistry" to this conference audience.9 The principles have the character of self-evident truths and they are guiding the development of green chemistry. For considering areas on which green chemists and environmental
health scientists should work collaboratively, there is value in thinking of the problem space as being like a bookcase.10 In this bookcase, each virtual shelf will contain the proposed pedagogy of a collection of related green chemical technology achievements, as well as challenges that must be solved to attain a sustainable technology base. Every shelf is important. The virtual bookcase contains an equator. The challenges on the shelves in the lower, "design for" half are much easier for chemists to embrace, as they fit into the paradigm of how chemistry currently functions within our civilization.

For example, the greening of synthetic procedures is on the lowest shelf. The greening of drug syntheses to reduce waste and eliminate toxic reagents and solvents is already well underway in the pharmaceutical industry. It has been shown in the last decade that this redesign of drug syntheses can improve profit margins. On the second-lowest shelf, one finds achievements and challenges associated with the development of renewable feedstocks to obtain the materials for the economy from plant matter, to replace fossilized carbon sources. Many valuable inventions have been attained in the last five years alone. On the third shelf, the roles that chemists are playing in producing better ways of converting solar to electrical and chemical energy are highlighted. Here again, the economic potential is palpable. It can be shown that the discovery of new and better photovoltaic materials, following a period of federally supported research and development, can readily attain support for commercial development from the private sector. An example in Pittsburgh is the Carnegie Mellon University spin-off company Plextronics, which holds the current efficiency record for a polymer-based photovoltaic, ca. 6%. It is important from a green perspective to carefully consider the elements that underpin new photovoltaic technologies. If they are going to be based on highly toxic elements such as cadmium, then we must be assured that the life cycle does not involve environmental contamination. So on each of these lower three shelves, the challenge is to review and facilitate the expansion of a body of excellent materials created in the context of green chemistry principles, with an eye toward making that sector sustainable.

The top three "design against" shelves are more challenging, but they must be addressed if we are to have any hope of creating a sustainable civilization. They deal directly with chemistry designed to tackle serious toxicity and ecotoxicity problems. This isn't easy, and it will require much fundamental change, as I have intimated above. In the Institute for Green Science, we aim to contribute by training a new generation of green chemists. We are working to create free web-based green chemistry curriculum materials for use throughout the world. And we endeavor to solve real-world problems through green chemistry research and development. Chemists are blessed with outstanding graduate and undergraduate students. My faith in their leadership ability to advance green chemistry runs deep. We just have to point them on the right track. Our web-based curriculum materials are in development. And we have invented and developed iron-TAML activators that are being commercialized through a Carnegie Mellon spin-off company called GreenOx Catalysts, Inc.
Biologically inspired oxidation catalysis is ideally suited for environmentally friendly applications, where it is important to avoid the use of toxic metal reagents and oxidants, energy-consuming processing steps and undesirable reaction media. My group has worked for almost thirty years on the design and development of small-molecule peroxidase mimics.11-13 Iron-TAML catalysts are only ca. 1% the size of the peroxidase
enzymes. There are now >20 iron-TAMLs that differ in reactivity and lifetimes. Iron-TAML activation of peroxides follows pathways similar to those of the peroxidases, i.e., high-valent iron-oxo complexes are the likely reactive intermediates. An iron(V)-oxo complex has been isolated at low temperature and characterized.14 The suite of currently available iron-TAMLs with varying TAML donor capacities makes a broad diversity of reactivity possible. Importantly, the enzyme-like reaction pathways result in high efficiency of peroxide use and the ability to function over a wide pH range. This stands in contrast with Fenton chemistry, i.e., peroxide activation carried out by simple iron salts and complexes with less electron-donating ligand systems, which exhibits low peroxide efficiency and requires acidic conditions.

At billionth-molar to low millionth-molar concentrations (an iron-TAML weighs about 500 grams per mole, so a millionth-molar solution contains only about half a milligram of catalyst in a liter of water), iron-TAMLs activate hydrogen peroxide and other peroxides to rapidly degrade numerous recalcitrant water contaminants. The list of degraded compounds includes natural and synthetic estrogens and testosterone, highly recalcitrant chlorophenols, pesticides, dyes, organosulfur compounds, the colored, smelly and organochlorine species in pulp and paper effluents, chemical warfare agents, and more. The reactions typically lead to near mineralization. Water can also be purified of hardy pathogens.15 Iron-TAML lifetimes are limited such that their oxidizing chemistry self-extinguishes at rates determined by the catalyst design and process conditions.16 Degraded products have been nontoxic by all tests to date.17,18 The catalysts have fulfilled our design criteria and now provide the basis of a platform technology for myriad applications. Numerous kinetic and mechanistic studies illustrate the success of the design protocol described above in attaining peroxidase enzyme mimics.
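The enzyme-like behavior described above can be summarized with a generic two-step peroxidase-type scheme; this is an illustrative sketch rather than the detailed mechanism established in the cited studies, and k_I and k_II are simply labels for the two steps:

\[
\mathrm{Fe^{III}L} + \mathrm{H_2O_2} \;\xrightarrow{k_{\mathrm{I}}}\; [\mathrm{Fe^{V}(O)L}] + \mathrm{H_2O},
\qquad
[\mathrm{Fe^{V}(O)L}] + \text{substrate} \;\xrightarrow{k_{\mathrm{II}}}\; \mathrm{Fe^{III}L} + \text{oxidized products}.
\]

Applying the steady-state approximation to the active oxidant gives an overall rate of the saturable, enzyme-like form

\[
\text{rate} \;=\; \frac{k_{\mathrm{I}}\,k_{\mathrm{II}}\,[\mathrm{Fe}]_{\text{total}}\,[\mathrm{H_2O_2}]\,[\text{substrate}]}{k_{\mathrm{I}}\,[\mathrm{H_2O_2}] + k_{\mathrm{II}}\,[\text{substrate}]},
\]

which is offered only to make the comparison with peroxidase enzymes concrete; the actual iron-TAML rate laws and constants are reported in the kinetic studies cited above.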
We need to work together to defend nonmonotonic dose-response science against those forces that attempt to obfuscate it. EHS findings of adverse health effects at environmentally relevant concentrations have tectonic significance for public policy. A coalition between EHS and GC is the next and an essential phase of sustainability science. Ultimately, plainspoken and determined
leadership at all levels is the vital ingredient for ensuring that our civilization might have a good and lasting future.

REFERENCES
1. USEPA, Endocrine Disruptor Screening Program (EDSP), http://www.epa.gov/endo/pubs/edspoverview/primer.htm
2. Michael Walls, American Chemistry Council, Chemistry and Engineering News, Point-Counterpoint, January 8, 2007, 85, 34-38; http://pubs.acs.org/cen/government/85/8502regulation.html
3. a) J. Peterson Myers, Publisher, http://www.ourstolenfuture.com/; b) Myers, J.P.; Hessler, W. "Does 'the dose make the poison'?" Extensive results challenge a core assumption in toxicology. Available at http://www.ourstolenfuture.com/NewScience/lowdose/2007/2007-0525nmdrc.html; c) Myers, J.P. and vom Saal, F.S. (2008) "Time to update environmental regulations: should public health standards for endocrine-disrupting compounds be based upon 16th century dogma or modern endocrinology?" San Francisco Medicine 81, 30-31.
4. See for example: a) Markowitz, G.; Rosner, D. Deceit and Denial: The Deadly Politics of Industrial Pollution, University of California Press, Berkeley and Los Angeles, 2002; b) Thornton, J. Pandora's Poison: Chlorine, Health, and a New Environmental Strategy, MIT Press, Boston, 2000; c) Thanh Nien News: Chemical companies, U.S. authorities knew dangers of Agent Orange, http://www.thanhniennews.com/print.php?catid=10&newsid=51587
5. Klaassen, C.D., Casarett and Doull's Toxicology: The Basic Science of Poisons, McGraw-Hill, New York, 2008.
6. Colborn, T., D. Dumanoski, and J.P. Myers. (1996) Our Stolen Future, Penguin Group, New York.
7. Moyers, B., Trade Secrets: A Moyers Report, PBS, 2001.
8. David Case, The Real Story Behind Bisphenol A, http://www.fastcompany.com/node/1139298/print
9. Anastas, P.T. and J.C. Warner. (1998) Green Chemistry: Theory and Practice, Oxford Univ. Press, Oxford.
10. Terry Collins and Chip Walter, "Learning Green: Developing a Web-Based Green Chemistry Curriculum for the World," Graduate Education Newsletter, Spring Edition, 2008.
11. Collins, T.J. (1994) "Designing Ligands for Oxidizing Complexes." Acc. Chem. Res. 27 (9), 279-285.
12. Collins, T.J. (2002) "TAML oxidant activators: A new approach to the activation of hydrogen peroxide for environmentally significant problems." Acc. Chem. Res. 35 (9), 782-790; DOI 10.1021/ar010079s.
13. Collins, T.J. and C. Walter. (2006) "Little green molecules." Sci. Amer. 294 (3), 82-90.
14. de Oliveira, F.T., Chanda, A., Banerjee, D., Shan, X.P., Mondal, S., Que, L., Bominaar, E.L., Münck, E., and Collins, T.J. (2007) "Chemical and spectroscopic evidence for an Fe(V)-oxo complex." Science 315 (5813), 835-838; DOI 10.1126/science.1133417.
15. Banerjee, D., Markley, A.L., Yano, T., Ghosh, A., Berget, P.B., Minkley, E.G., Khetan, S.K., and Collins, T.J. (2006) "'Green' oxidation catalysis for rapid deactivation of bacterial spores." Angew. Chem. Int. Edit. 45 (24), 3974-3977; DOI 10.1002/anie.200504511.
16. Chanda, A., Ryabov, A.D., Mondal, S., Alexandrova, L., Ghosh, A., Hangun-Balkir, Y., Horwitz, C.P., and Collins, T.J. (2006) "Activity-stability parameterization of homogeneous green oxidation catalysts." Chem. Eur. J. 12 (36), 9336-9345; DOI 10.1002/chem.200600630.
17. Sen Gupta, S., Stadler, M., Noser, C.A., Ghosh, A., Steinhoff, B., Lenoir, D., Horwitz, C.P., Schramm, K.W., and Collins, T.J. (2002) "Rapid total destruction of chlorophenols by activated hydrogen peroxide." Science 296 (5566), 326-328.
18. Shappell, N.W., Vrabel, M.A., Madsen, P.J., Harrington, G., Billey, L.O., Hakk, H., Larsen, G.L., Beach, E.S., Horwitz, C.P., Ro, K., Hunt, P.G., and Collins, T.J. (2008) "Destruction of estrogens using Fe-TAML/peroxide catalysis." Environ. Sci. Technol. 42 (4), 1296-1300; DOI 10.1021/es7022863.
19. See for example, Commission of the European Communities, Brussels, 30.11.2007, SEC(2007) 1635. Commission Staff Working Document on the implementation of the "Community Strategy for Endocrine Disrupters", a range of substances suspected of interfering with the hormone systems of humans and wildlife (COM (1999) 706), (COM (2001) 262) and (SEC (2004) 1372).
SESSION 4 ENERGY & CLIMATE
FOCUS: ESSENTIAL TECHNOLOGIES FOR MODERATING CLIMATE CHANGE AND IMPROVING ENERGY SECURITY
BALANCING PERSPECTIVES ON ENERGY SUPPLY, ECONOMICS, AND THE ENVIRONMENT
CARL O. BAUER Director, National Energy Technology Laboratory, U.S. Department of Energy, Pittsburgh, Pennsylvania, USA ABSTRACT An increasingly complex energy economy with growing energy demand and a need to reduce greenhouse gas emissions needs a balanced perspective that considers all three dimensions- supply, economics, and environment--concurrently. Assured, affordable, and sustainable energy supplies will depend on a broad portfolio of proven, new, renewable, and alternative sources coupled with enhanced recovery technologies, greater fuel flexibility, and increased efficiency. Energy strategies and technologies are increasingly sensitive to the water-energy nexus where water consumption for thermoelectric power is an emerging environmental issue. Key among carbon abatement technologies is carbon capture and storage (CCS) for large point-source emitters, particularly fossil fuel power generation. However, while CCS technologies have evolved from proven industrial applications, continuing research is needed to resolve issues with scaling, costs, and water use, as well as to demonstrate the widespread availability of storage reservoirs. Every Energy Source Faces Challenges Every energy source faces challenges on multiple fronts. Energy supplies are increasingly constrained by peaking production, increasing global competition, and societal instabilities in major energy producing regions. Deployment of renewable sources is constrained by high costs, intermittencies, distances from population centers, and access to compatible distribution grids. Coal and gas face economic challenges in the costs projected to reduce greenhouse gas emissions. In addition to greenhouse gas emissions, the sustainability of water supplies and the treatment and disposal of waste streams loom as environmental concerns, particularly for each of the major sources of base load electric power generation--coal, natural gas, and nuclear. And, it seems, each source faces questions of public acceptability, either in principle or in local land use decision making. Energy Strategy Complexity In order to be truly sustainable, a source of energy must perform in each of the three dimensions of the global energy economy-supply, economics, and the environment. That is, there must be an adequate and assured amount available at an affordable price and with minimal environmental impacts in order to succeed. Aligning these dimensions can present a formidable challenge, as they are often in competition with one another in energy policy making and in technology development. Energy policy discussions often key in on one of the three dimensions as most significant, occasionally to the exclusion of the others. Such a single-sided approach can ultimately lead in directions that impinge on the other dimensions. These pressures can impact the proper
functioning and economic sustainability of all energy markets, including electric power, liquid and transportation fuels, and chemical and agricultural feedstocks. A Venn diagram representing the challenges in developing a coherent energy strategy may clarify the dynamics at work. Strategies that focus on one area can realize improvements in the chosen area of research or investment, but the difficulties of achieving other equally important objectives are exacerbated in the process. In a zero-sum decision-making environment, competitive tensions pull the spheres in opposing directions. Compatible opportunities, represented by the overlap of the spheres at the center of the diagram, shrink as the spheres are pulled apart but increase as the spheres are brought closer together. A more effective approach to meeting our energy needs depends on seeking alternatives that provide an equitable balance of attention to all three requirements, seeing them as a single, complex issue that demands our continued attention.
Carbon Capture and Sequestration
One of the most promising carbon abatement technologies for point source emitters, including fossil fuel power plants and heavy industries, is carbon capture and storage (CCS). Carbon capture currently relies on adapting industrial gas separation technologies designed to process comparatively small volumes in specialized operating environments for materials where costs can be recovered. These technologies must be scaled up and modified to realize CCS commercially. For example, at 17,000 tonnes of carbon per day, a typical coal-fired power plant would capture more than four times the 4,000 tonnes per day handled at the largest CO2 separation plant in the world. Geologic storage of captured CO2 essentially represents returning extracted carbon to subsurface isolation, in principle and in magnitude something like the reverse of the global oil and gas industries. Successful demonstration projects have provided proof-of-concept, but wide-scale deployment now depends on demonstrating the permanence of large-scale injection, widespread availability of storage reservoirs, and the evolution of local, national, and international regulatory frameworks. In parallel with geologic storage, terrestrial capture and storage is creating opportunities for advanced agricultural and forestry practices, potentially coupled with other energy technologies such as biofuels.
The U.S. Department of Energy (DOE) is leading research in pre- and post-combustion CO2 capture and oxy-fuel technologies. And, while solvent-based capture systems dominate the landscape today, membranes and other advanced technologies are already on the horizon. DOE is also investigating various types of underground formations for geologic storage, primarily through the Regional Carbon Sequestration Partnerships and through participation in a number of international demonstration projects. DOE is focusing its efforts for terrestrial sequestration on increasing carbon uptake through reforestation and amendment of mine lands and other damaged soils. Additional research in terrestrial capture and sequestration is conducted through the Regional Partnerships.
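To put the scale-up challenge just described in perspective, the short sketch below compares the daily capture duty quoted for a typical coal-fired plant with the capacity of the largest existing CO2 separation plant; the annual extrapolation is an illustrative calculation of mine, not a figure from the text.

```python
# Compare the CO2 separation duty of a typical coal-fired power plant with the
# largest existing industrial separation plant, using the figures quoted in the text.
PLANT_TONNES_PER_DAY = 17_000      # typical coal-fired power plant (from the text)
LARGEST_SEPARATION_PLANT = 4_000   # largest CO2 separation plant today (from the text)

ratio = PLANT_TONNES_PER_DAY / LARGEST_SEPARATION_PLANT
annual_mt = PLANT_TONNES_PER_DAY * 365 / 1e6  # million tonnes per year, illustrative

print(f"Scale-up factor: {ratio:.1f}x")          # > 4x, i.e. "more than four times"
print(f"Roughly {annual_mt:.1f} Mt per year from a single plant")
```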
The Energy-Water Nexus In addition to curbing greenhouse gas emissions, a second environmental issue has risen in importance as global demand for energy and our understanding of climate change have increased-the interrelationships between energy production and water supplies. From the resource exploration to power generation, energy production takes water. It is used in energy-resource extraction, refining and processing, and transportation. Water is also an integral part of electric-power generation. It is used directly in hydroelectric generation and is also used extensively for cooling and emissions scrubbing in thermoelectric generation. Conversely, the creation and management of potable water supplies requires a substantial amount of energy. Energy is needed to pump water from aquifers and reservoirs, to treat waste water and effluents. Energy is needed to heat and cool water for domestic use and industrial processes. In arid coastal regions, energy may be needed to desalinize water. Withdrawal versus Consumption, an Important Distinction It is important, however, to distinguish between water withdrawal and consumption. Withdrawal is the removal of water from any water source or reservoir, such as a lake, river, stream, or aquifer for human use. For power plants, the primary purpose of water withdrawal is cooling. Consumption, on the other hand, is that portion of the water withdrawn that is no longer available for use because it has evaporated, transpired, been incorporated into products and crops, or consumed by humans or livestock. In contrast to water withdrawal for irrigation, of which most is consumed, most water withdrawn in power generation is not consumed, but returned to its source. For example, the U.S. Geological Survey has estimated thermoelectric generation as approximately 39 percent of freshwater withdrawals, ranking only slightly behind agricultural irrigation for freshwater withdrawal in the U.S. However, corresponding water consumption associated with thermoelectric generation accounted for only about 2.5 percent of total U.S. freshwater consumption. Although the relative consumption rates for thermoelectric power are much smaller than those for irrigation, water is an increasingly important consideration in the siting and operation of power plants. This is particularly true for arid regions such as the southwestern United States, North Africa, and the Middle East, but it is also increasingly an issue for wetter regions that are susceptible to drought, such as the southeastern United States and southern Europe. Climate change and rapid shifts in weather patterns may heighten these sensitivities. CCS will place further demands for water and power generation. Depending on the generation and capture technologies used, CCS can increase water consumption from 50 to 90 percent over the same technologies without CO 2 capture. Natural gas combined cycle (NGCC) plants use the least amount of water with and without CO 2 capture because it is highly efficient and two-thirds of the power is generated from the combustion turbine and only one-third is generated in the steam cycle-therefore, requiring less condensing steam cooling and evaporation. Integrated gasification combined cycle (IGCC) also generates two-thirds of its power via the combustion turbine, analogous to the NGCC power plant, but at a lower efficiency. The increased water consumption for CO 2 capture
is primarily to supply additional steam to the water-gas-shift process and CO2 compression train cooling (additional evaporation). Pulverized coal power plant systems with and without CO2 capture consume the most water per megawatt-hour (MWh) because all of the power is derived from the steam cycle. About half of the incremental water consumption increase is due to the energy penalty (a larger plant is built) and the other half is due to process and compression cooling.
Thermoelectric Power Plant Water Consumption
While water demand for thermoelectric power is generally associated with base load fossil fuels, especially coal and natural gas, other thermoelectric power sources can have even greater water consumption rates. An NGCC power plant typically consumes around 180 gallons per megawatt hour (gal/MWh) and coal-fired plants consume between about 250 and 520 gal/MWh. These values are in the same range as fossil/biomass-waste co-fired plants (300 to 480 gal/MWh) but lower than nuclear (400 to 720 gal/MWh). Among the most water-intensive technologies is solar thermal, which consumes somewhere between 750 and 920 gal/MWh depending on whether tower or trough heat collectors are used. Geothermal steam, like solar counted as a renewable resource, consumes about 1,400 gal/MWh, or between 2.7 and 7.7 times the amount of water needed for coal- and gas-fired power plants.
Water Demand for Biofuels
Water demand for energy is by no means restricted to thermoelectric power generation; biofuels place their own demands on clean water supplies. Most water for biofuel crops is used for irrigation, and water consumption is typically dependent on crop type, growing environment, and agricultural practices. Irrigated corn, for example, requires between 2,000 and 4,000 gallons per bushel. And one bushel of corn can produce about three gallons of ethanol or two gallons of gasoline equivalent. Water demand for biofuels continues through to the biorefinery, where the U.S. National Academy of Sciences estimates that a biorefinery that produces 100 million gallons of ethanol per year would use the equivalent of the water supply for a town of 5,000 people.
Integrated Energy Systems Balance Resources
Ideally, energy technologies can be linked and balanced to extract the maximum possible energy with the minimum possible impacts. One way to do this, for example, would be to use renewable resources, like wind, in combination with fossil sources, like coal, to store and transfer energy, generate power, and capture by-products for beneficial use. Using wind to electrolyze water produces hydrogen that can be used to gasify coal and oxygen that can be fed to an oxy-combustion system with the char from the coal. The synthetic natural gas, or "syngas", from the coal gasification can then be fed separately to industrial, commercial, and residential customers or used in a combined cycle power plant to generate electricity. The CO2 streams from both the oxy-combustion and combined cycle electricity generation can be fed to algae farms to produce oxygen and additional biomass for further power generation. The net outputs of this system are electricity, oxygen, and syngas.
Such a system helps to overcome the intermittency of wind and the greenhouse gas emissions of coal. Hydrogen produced by wind-powered electrolysis stores the energy for use when needed. The greenhouse gas emissions from coal are captured in a controlled environment that places no further demands for agricultural land or potable water. Algae can grow in brackish water or power plant grey waters without introducing further competition for food crops. In summary, with mounting challenges world-wide to secure adequate, affordable, and sustainable energy supplies, energy sources must increasingly perform acceptably in all three dimensions of the energy economy. It is further possible to harmonize seemingly disparate energy resources and find synergies in their use, achieve increased efficiencies, and improve their mutual environmental performance. With no single source of energy now pre-eminent, nor any likely to be in the foreseeable future, the global energy economy demands balanced approaches and balanced portfolios.
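As a rough cross-check of the water figures quoted in the preceding subsections, the sketch below lists the thermoelectric consumption ranges side by side and converts the irrigated-corn numbers into gallons of water per gallon of ethanol; it is a back-of-envelope aid that assumes only the values given in the text.

```python
# Back-of-envelope water intensities from the figures quoted in the text.

# Thermoelectric consumption ranges (gallons per MWh), as cited above.
thermoelectric_gal_per_mwh = {
    "NGCC": (180, 180),
    "Coal": (250, 520),
    "Fossil/biomass co-fired": (300, 480),
    "Nuclear": (400, 720),
    "Solar thermal": (750, 920),
    "Geothermal steam": (1400, 1400),
}
for tech, (lo, hi) in thermoelectric_gal_per_mwh.items():
    print(f"{tech:<26} {lo:>5}-{hi:<5} gal/MWh")

# Irrigated corn ethanol: 2,000-4,000 gallons of irrigation water per bushel and
# roughly 3 gallons of ethanol per bushel (both figures from the text).
water_per_bushel = (2000, 4000)
ethanol_gal_per_bushel = 3
lo = water_per_bushel[0] / ethanol_gal_per_bushel
hi = water_per_bushel[1] / ethanol_gal_per_bushel
print(f"\nIrrigation water per gallon of corn ethanol: ~{lo:.0f}-{hi:.0f} gallons")
```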
THE OUTLOOK FOR POWER PLANT CO2 CAPTURE
EDWARD S. RUBIN
Engineering and Public Policy, College of Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
ABSTRACT
There is growing international interest in carbon capture and sequestration (CCS) technology to reduce carbon dioxide emissions linked to global climate change. CCS is especially attractive for electric power plants burning coal and other fossil fuels , which are a major source of global CO 2 emissions. This paper describes the performance and cost of CO 2 capture technologies for large-scale electric power plants including pulverized coal (PC), natural gas combined cycle (NGCC), and integrated gasification combined cycle (IGCC) plants. Different types of capture technologies, including precombustion, post-combustion, and oxyfuel combustion capture systems are discussed, along with the outlook for new or improved technology. Future cost for power plants with CO 2 capture are estimated using both a "bottom-up" approach based on engineering analysis and a "top-down" approach based on historical experience curves. The limitations of such projections are discussed along with needs for future research, development and demonstration of CCS technology. The most urgent need at this time is financing for several full-scale demonstrations of CCS at coal-based power plants. INTRODUCTION On first hearing, the idea may sound a bit far-fetched. To avoid emitting billions of tons of carbon dioxide (C0 2) to the atmosphere-the major greenhouse gas associated with global warming--engineers propose to equip coal-burning power stations with chemical plants that strip CO 2 from the flue gases before they go up the chimney. The concentrated CO 2 would then be compressed to a liquid and piped to a storage site where it would be injected deep underground. There it would be trapped by impermeable layers of rock and very slowly, over centuries, transform into solid carbonate minerals. This method of sequestering C02 would not come cheap--if applied to an existing power plant today, the cost of generating electricity would nearly double. Surely, one would think, there must be an easier way to reduce power plant CO2 emissions. Such was the general view when the idea of carbon capture and sequestration (CCS) was first proposed as a greenhouse gas mitigation strategy three decades ago (Marchetti , 1977). But over the past decade things have changed. Scientists, engineers and policy analysts have been taking a closer look at CCS and finding that it could indeed make sense both technically and economically-making it a potentially important player in mitigating global climate change. Why the interest in CCS7 Current worldwide interest in CO 2 capture and sequestration (or storage) stems principally from three factors. First is the growing evidence that large reductions in global CO 2 emissions are needed to avoid serious climate change impacts-as much as
an 85% reduction in projected emissions by the middle of this century (IPCC, 2007). And because power plants are a major contributor to greenhouse gas emissions (mostly from coal-burning plants), these reductions cannot be achieved unless power plant emissions also are greatly reduced. Second is the recognition that large emission reductions cannot be achieved easily or quickly solely by using less electricity or replacing fossil fuels with renewable energy sources such as wind and solar that emit no C02. While alternative energy sources are vital elements of any greenhouse gas reduction strategy, technical, economic and societal factors limit the speed and extent to which they can be implemented. The reality today is that the world relies on fossil fuels for over 85% of its energy use, much of that for electric power generation based on the combustion of coal. Changing that picture dramatically will take time. CCS thus offers a way to get large reductions in CO 2 emissions from fossil fuel use until cleaner more sustainable technologies can be widely deployed. Finally, energy-economic models show that adding CCS to the suite of other greenhouse gas reduction measures significantly lowers the cost of mitigating climate change when deep reductions in emissions are required (IPCC, 2005). In its most recent assessment, the Intergovernmental Panel on Climate Change (IPCC) affirmed CCS to be a major component of a cost-effective portfolio of technologies needed to mitigate climate change (IPCC, 2007). OPTIONS FOR CO 2 CAPTURE A variety of technologies are commercially available and in widespread use for separating (capturing) CO 2 from industrial gas streams, typically as a purification step in the manufacture of commercial products. Common applications include the separation of CO 2 in natural gas treatment and in the production of hydrogen, ammonia and ethanol. In most cases, the captured C02 stream is simply vented to the atmosphere. C02 also has been captured from a small portion of the flue gas at power plants burning coal and natural gas, and then sold as an industrial commodity to nearby food processing plants. Globally, however, only a small amount of CO 2 is utilized for industrial products, and nearly all of it is soon emitted to the atmosphere (think about the fizzy drinks you buy). To date, however, there has been no application of CO 2 capture at a large fossil fuel power plant (e.g., at a scale of hundreds of megawatts), although designs of such systems have been widely studied and proposed. As a climate change mitigation strategy, CO 2 capture and storage is best suited for facilities with large CO 2 emissions. The four biggest CCS projects to date-each sequestering \-3 million metric tons C0 2/yr--capture C02 from industrial processes that produce or manufacture natural gas. Other industrial sources, including refineries, chemical plants, cement plants and steel mills, also are potential candidates for CCS. However, power plants are the principal target because they account for roughly 80% of global CO 2 emissions from large stationary facilities . Most CO 2 is formed via combustion, so capture technologies are commonly classified as pre-combustion or post-combustion systems, depending on whether carbon is removed before or after a fuel is burned. A third approach, called oxyfuel or oxycombustion, does not require a CO 2 capture device. This concept is still under
development. In all cases, the aim of CO2 capture is to produce a concentrated CO2 stream that can be transported to a sequestration site. To facilitate transport and storage, captured CO2 is first compressed to a dense "supercritical" state, where it behaves as a liquid, making it easier and much less costly to transport than in gaseous form. High pressures, typically 11-14 MPa, also are required to inject CO2 deep underground for geological sequestration (Benson and Cole, 2008). The CO2 compression step is commonly included as part of the capture system since it occurs inside the plant gate.
Post-combustion Capture
In these systems CO2 is separated from the flue gas produced when coal or other fuel is burned in air. Combustion-based systems provide most electricity today. In a modern, pulverized coal (PC) power plant, the heat released by combustion generates steam, which drives a turbine generator (Figure 1). Hot combustion gases exiting the boiler consist mainly of nitrogen (from air) and smaller concentrations of water vapor and CO2. Other constituents, formed from impurities in coal, include sulfur dioxide, nitrogen oxides, and particulate matter (fly ash). These are pollutants that must be removed to meet environmental standards. Subsequently, CO2 can be removed.
Fig. 1. Schematic of a pulverized coal-fired (PC) power plant with postcombustion CO 2 capture using an amine system. Other major air pollutants [nitrogen oxides, particulate matter (PM) and sulfur dioxide] are removed from the flue gas prior to CO 2 capture. Because the flue gas is at atmospheric pressure and the concentration of CO 2 is fairly low (typically 12-15% by volume for coal plants), the most effective method to remove CO 2 is by chemical reaction with a liquid solvent. The most common solvents are a family of organic compounds known as amines, one of which is monoethanolamine (MEA). In a vessel called an absorber, the flue gas is "scrubbed" with an amine solution, typically capturing 85-90% of the CO 2. The CO 2-laden solvent is then pumped to a second vessel, called a regenerator, where heat releases the CO 2 as a gas. The resulting concentrated CO 2 gas stream is then compressed into a supercritical fluid for transport to the sequestration site, while the solvent is recycled (Figure 2a).
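To make the amine-scrubbing numbers concrete, here is a minimal capture mass balance using the 12-15% CO2 content and 85-90% capture efficiency quoted above; the flue-gas flow rate and the volume-to-mass conversion are illustrative assumptions of mine, not values from the paper.

```python
# Minimal post-combustion capture mass balance using the ranges quoted in the text.

def captured_co2_tonnes_per_hour(flue_gas_tonnes_per_hour: float,
                                 co2_mass_fraction: float,
                                 capture_efficiency: float) -> float:
    """CO2 captured from a flue-gas stream, in tonnes per hour."""
    return flue_gas_tonnes_per_hour * co2_mass_fraction * capture_efficiency

# Illustrative assumptions (not from the paper): a flue-gas flow of 2,500 t/h and a
# CO2 mass fraction of ~20% (roughly consistent with 12-15% by volume, since CO2 is
# heavier than the rest of the flue gas), with 85-90% capture as stated for amines.
flue_gas = 2500.0
for frac, eff in [(0.20, 0.85), (0.20, 0.90)]:
    cap = captured_co2_tonnes_per_hour(flue_gas, frac, eff)
    print(f"capture efficiency {eff:.0%}: ~{cap:,.0f} t CO2/h, "
          f"~{cap * 8000 / 1e6:.1f} Mt/yr at 8,000 operating hours")
```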
Fig. 2a: An amine-based post-combustion CO2 capture system treating a portion of the flue gas (about 40 MW equivalent) from a coal-fired power plant in Oklahoma, USA. (Photo courtesy of U.S. Department of Energy)
Post-combustion capture also can be applied to natural gas combined cycle (NGCC) power plants, which have come into broad use over the past decade. In this type of plant, clean natural gas is combusted with compressed air to produce a high-temperature gas stream that drives a turbine. The hot exhaust from the turbine is then used to produce steam, which powers a second turbine, generating more electricity (hence the term "combined cycle"). Although the CO2 in NGCC flue gas is even more dilute than in coal plants (about 3-5% by volume), high removal efficiencies are still achieved with amine capture. Amine capture technology is also widely used to purify industrial gas streams, as in the processing of raw natural gas to remove CO2, a common impurity (Figure 2b).
Fig. 2b: An amine-based CO2 capture system used to purify natural gas at BP's In Salah plant in Algeria. Approximately 1 Mt/y of CO2 is captured and transported by pipeline to a geological sequestration site. (Photo courtesy of IEA Greenhouse Gas Programme)
Pre-combustion Capture
To decrease CO2 emissions, fuel-bound carbon can first be converted to a form amenable to capture. This is accomplished by reacting coal with steam and oxygen at high temperature and pressure, a process called coal gasification. By restricting the amount of oxygen, the coal is only partially oxidized, providing the heat needed to operate the gasifier. The reaction products are mainly carbon monoxide and hydrogen (a mixture commonly known as synthesis gas, or syngas). Sulfur compounds (mainly hydrogen sulfide, H2S) and other impurities are removed using conventional gas-cleaning technology. The clean syngas can be burned to generate electricity in a combined cycle power plant similar to the NGCC plant described above. This approach is known as integrated gasification combined cycle, or IGCC. To capture CO2 from syngas, two additional process units are added (Figure 3). A "shift reactor" converts the carbon monoxide (CO) to CO2 through reaction with steam (H2O). Then, the H2-CO2 mixture is separated into streams of CO2 and H2. The CO2 is compressed for transport, while the H2 serves as a carbon-free fuel that is combusted to generate electricity.
Fig. 3: Schematic of an integrated gasification combined cycle (IGCC) power plant with pre-combustion CO 2 capture using a water-gas shift reactor and a Selexol CO 2 separation system.
Although the initial fuel-conversion steps are more elaborate and costly than in post-combustion systems, the high pressures of modern gasifiers and the high concentration of CO2 produced by the shift reactor (up to 60% by volume) make CO2 separation easier. Thus, instead of chemical reactions to capture CO2, commercial processes such as Selexol use solvents (such as glycol) to physically absorb CO2, then release it in a second vessel when the pressure is quickly reduced. This technology for pre-combustion capture is favored in a variety of processes, mainly in the petroleum and petrochemical industries (Figure 4).
Fig. 4: A pre-combustion CO2 capture system used to produce synthetic natural gas (syngas) from coal at the Dakota Gasification Plant in North Dakota. About 3 Mt/y of captured CO2 is currently transported by pipeline to the Weyburn and Midale oil fields in Saskatchewan, Canada, where it is used for enhanced oil recovery and sequestered in depleted oil reservoirs. (Photo courtesy of U.S. Department of Energy)
Oxy-combustion Capture
Oxy-combustion (or oxyfuel) systems are similar to conventional combustion systems, except that oxygen is used rather than air to avoid nitrogen in the flue-gas stream. After the particulate matter (fly ash) is removed, the gas consists mainly of water vapor and CO2, with low concentrations of pollutants such as sulfur dioxide (SO2) and nitrogen oxides (NOx). The water vapor is easily removed by cooling and compressing, leaving nearly pure CO2 that can be sent directly to sequestration. Oxy-combustion avoids the need for a post-combustion capture device, but most designs require additional processing to remove conventional air pollutants to comply with environmental requirements or CO2 purity specifications. The system also requires an air-separation unit to generate the relatively pure (95-99%) oxygen needed for combustion (Figure 5) and must be sealed against air leakage. Approximately three times more oxygen is needed for oxyfuel systems than for IGCC plants, which adds considerably to the cost. Because combustion temperatures in oxygen are much higher than in air, oxy-combustion also requires roughly 70% of the inert flue gas to be recycled back to the boiler to maintain normal operating temperatures.
Fig. 5. Schematic of a coal-fired power plant using oxy-combustion. Approximately 70% of the CO 2-laden flue gas is recycled to the boiler to maintain normal operating temperatures. Depending on the purity of the oxygen from the air separation unit, small amounts of nitrogen and argon also enter the flue gas. As a CO 2 capture method, oxy-combustion has been studied theoretically and in small-scale test facilities. A major demonstration project (10 MW electrical equivalent) began in September 2008 at a pilot plant in Germany (Vattenfall, 2008). Although, in principle, oxyfuel systems can capture all of the CO 2 produced, the need for additional gas treatment and distillation decreases the capture efficiency to about 90% in most current designs (lEA GHG, 2005). For all approaches, higher removal efficiencies are possible, but more costly. Thus, engineers seek to optimize design to achieve the most cost-effective CO 2 capture. THE ENERGY PENALTY AND ITS IMPLICATIONS Current C02 capture systems require large amounts of energy to operate. This decreases net efficiency and contributes significantly to C02 capture costs. Post-combustion capture systems use the most energy, requiring nearly twice that of pre-combustion systems (Table 1).
Table 1. Representative values of current power plant efficiencies and CCS energy penalties. Sources: IPCC (2005); EIA (2007).

Power plant type (and capture system type) | Net plant efficiency (%) without CCS* | Net plant efficiency (%) with CCS* | Energy penalty: added fuel input (%) per net kWh output
Existing subcritical (PC) (+ post-combustion) | 33 | 23 | 40%
New supercritical (SCPC) (+ post-combustion) | 40 | 31 | 30%
New supercritical (SCPC) (+ oxy-combustion) | 40 | 32 | 25%
Coal gasification (IGCC) (+ pre-combustion) | 40 | 34 | 19%
New natural gas (NGCC) (+ post-combustion) | 50 | 43 | 16%

* All efficiency values are based on the higher heating value (HHV) of fuel, not the lower heating value (LHV) used in Europe and elsewhere, which yields greater efficiencies by omitting the fuel energy needed to evaporate water produced in combustion. For each plant type, there is a range of efficiency values around those shown here. See Rubin et al. (2007a) for details.
Lower plant efficiency means more fuel is needed for electricity generation. For coal plants, this added fuel produces proportionally more solid waste and requires more chemicals, such as ammonia and limestone, to control NO x and S02 emissions. Plant water use also increases proportionally, with additional cooling water needed for amine capture systems. Because of efficiency loss, a capture system that removes 90% of the CO 2 within a plant actually reduces net emissions per kilowatt-hour (kWh) by a smaller amount, typically 85-88%. In general, the more efficient the power plant, the smaller are the energy penalty impacts. For this reason, replacing or repowering an old, inefficient plant with a new, more efficient facility with C02 capture can still yield a net efficiency gain that decreases all plant emissions and resource consumption. Thus, the net impact of the energy penalty is best assessed in the context of strategies for reducing emissions across a fleet of plants, including existing facilities as well as planned new units. Innovations in power generation and carbon capture technologies are expected to further reduce future energy penalties and their impacts.
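The two relationships discussed above, the added fuel per net kWh in Table 1 and the drop from 90% capture inside the plant to roughly 85-88% lower emissions per kWh, both follow from the plant efficiencies alone. The sketch below reproduces that arithmetic for the new supercritical PC case in Table 1; the fuel's carbon intensity cancels out of the ratio, so no additional data are assumed.

```python
# Energy penalty and net CO2 reduction per kWh, derived from plant efficiencies alone.

def added_fuel_per_net_kwh(eff_without: float, eff_with: float) -> float:
    """Extra fuel input per net kWh when efficiency drops from eff_without to eff_with."""
    return eff_without / eff_with - 1.0

def net_emission_reduction(eff_without: float, eff_with: float, capture: float) -> float:
    """Reduction in CO2 emitted per net kWh; the fuel's carbon intensity cancels out."""
    return 1.0 - (1.0 - capture) * (eff_without / eff_with)

# New supercritical PC plant with post-combustion capture (Table 1): 40% -> 31% efficiency.
eff_ref, eff_ccs, capture = 0.40, 0.31, 0.90
print(f"Added fuel per net kWh: {added_fuel_per_net_kwh(eff_ref, eff_ccs):.0%}")          # ~29%
print(f"Net emission reduction: {net_emission_reduction(eff_ref, eff_ccs, capture):.0%}")  # ~87%
```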
THE COST OF CO2 CAPTURE
Table 2 summarizes the cost of individual components of the CCS system. The broad ranges reflect different sets of assumptions used in various studies of hypothetical power plants in North America and Europe. The most costly component is capture, including compression. The lowest capture costs are for processes where CO2 is separated as part of normal operations, such as during hydrogen production, where the added cost is simply for CO2 compression.

Table 2. Estimated costs of CO2 capture, transport, and geological storage (2007 U.S.$/t CO2). Ranges reflect differences in the technical and economic parameters affecting the cost of each component. (Source: IPCC 2005 data, adjusted to 2007 cost basis.)

CCS system component | Cost range (U.S.$)
Capture: Fossil fuel power plants | $20-95/t CO2 net captured
Capture: Hydrogen and ammonia production or gas-processing plant | $5-70/t CO2 net captured
Capture: Other industrial sources | $30-145/t CO2 net captured
Transport: Pipeline | $1-10/t CO2
Storage: Deep geological formation | $0.5-10/t CO2
Figure 6 depicts the cost of generating electricity with and without CCS, as reported in recent studies. The total electricity cost ($/MWh) is shown as a function of the CO2 emission rate (t CO2/MWh) for new plants burning bituminous coal or natural gas. One sees a broad range of values. While variations in capture-system design contribute to this range, the dominant factors are differences in design, operation, and financing of the power plants to which capture technologies are applied. For example, higher plant efficiency, larger plant size, higher fuel quality, lower fuel cost, higher annual hours of operation, longer operating life, and lower cost of capital all reduce the costs, both of CO2 capture and of electricity generation. No single set of assumptions applies to all situations or all parts of the world, so estimated costs vary. A broader range would appear if other factors were considered, such as subcritical boilers or non-bituminous coals.
(Figure 6 appears here. Assumptions noted on the plot: bituminous coals; 90% capture; deep aquifer storage.)
Fig. 6. Cost of electricity generation (2007 US $/MWh) as a function of the CO 2 emission rate (t CO2/MWh) for new power plants burning bituminous coal or natural gas (PC = subcritical pulverized coal units; SCPC = supercritical pulverized coal; IGCC = integrated gasification combined cycle; NGCC = natural gas combined cycle). Ranges reflect differences in technical and economic parameters affecting plant cost. Figure based on data from NETL (2007); Holt (2007); MIT (2007); Rubin et al. (2007); IPCC (2005), adjusted to 2007 cost basis.
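The two quantities plotted in Figure 6 are exactly what is needed to compute the cost of CO2 avoided, a metric used in the cost discussion that follows. The sketch below shows the standard formula; the point values are illustrative choices within the plotted ranges, not results from any single study.

```python
# Cost of CO2 avoided: the standard metric read off a COE-vs-emission-rate plot like Fig. 6.

def cost_of_co2_avoided(coe_ccs: float, coe_ref: float,
                        emis_ref: float, emis_ccs: float) -> float:
    """($/MWh difference) / (t CO2/MWh difference) = $ per tonne of CO2 avoided."""
    return (coe_ccs - coe_ref) / (emis_ref - emis_ccs)

# Illustrative point values only (chosen to lie inside the ranges plotted in Fig. 6):
# a new SCPC plant without capture versus a similar SCPC plant with CCS.
coe_ref, emis_ref = 60.0, 0.80    # $/MWh and t CO2/MWh without capture (assumed)
coe_ccs, emis_ccs = 110.0, 0.10   # $/MWh and t CO2/MWh with CCS (assumed)

avoided = cost_of_co2_avoided(coe_ccs, coe_ref, emis_ref, emis_ccs)
print(f"Cost of CO2 avoided: ~${avoided:.0f}/t CO2")
# ~$71/t, within the $60-80/t range quoted below for new SCPC plants with aquifer storage.
```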
Over the past several years, construction costs for power plants and other industrial facilities have escalated dramatically (CEPCI, 2008). So too has the price of fuel, especially natural gas, making NGCC plants uneconomical in most locations where coal is also available at much lower cost. Uncertainty about future cost escalations further clouds the "true" cost of plants with or without CCS. On a relative basis, however, CCS is estimated to increase the cost of generating electricity by approximately 60-80% at new coal combustion plants and by about 30-50% at new coal gasification plants. On an absolute basis, the increased cost of generation translates to roughly $40-70/MWh for PC plants and $30-50/MWh for IGCC plants using bituminous coal. The CO2 capture step (including compression) accounts for 80-90% of this cost, while the remaining 10-20% is due to transport and storage. Note, however, that consumers would see much smaller increases in their electricity bills because generation accounts for only about half the total cost of electricity supply, and only a gradually increasing fraction of all generators might employ CCS at any time in response to future climate policies.
Figure 6 can also be used to calculate the cost per ton of CO2 avoided when a plant is built with CCS instead of without. For a new supercritical (SCPC) coal plant with deep aquifer storage, this is currently about $60-80/t CO2, which is the magnitude of the "carbon price" needed to make CCS cost-effective. For IGCC plants with and without capture, the CCS cost is smaller, about $30-50/t CO2. All costs are decreased when CO2 can be sold for EOR with storage. The cost of CO2 avoided depends on the type of "reference plant" used to compare with the CCS plant. For example, without capture, a SCPC plant today is about 15-20% cheaper than a similarly sized IGCC plant, making it preferred. But with CO2 capture, an IGCC plant gasifying bituminous coal is expected to be the lower-cost system. Thus, it is useful to compare a SCPC reference plant without capture to an IGCC plant with CCS. In this case the cost of CO2 avoided is roughly $40-60/t CO2. The relative cost of SCPC and IGCC plants can change significantly with coal type, operating hours, cost of capital, and many other factors (Rubin et al. 2007a). Experience with IGCC power plants is still quite limited, and neither SCPC nor IGCC plants with CCS have been built and operated at full scale. Thus, neither the absolute nor relative costs of these systems can yet be stated with confidence.
For existing power plants, the feasibility and cost of retrofitting a CO2 capture system depends especially on site-specific factors such as plant size, age, efficiency and space to accommodate a capture unit. For many existing plants, the most cost-effective strategy is to combine CO2 capture with a major plant upgrade (repowering) in which an existing unit is replaced by a high-efficiency unit or a gasification combined cycle system (Chen et al. 2003; Simbeck, 2008). In such cases, the cost approaches that of a new plant.
OUTLOOK FOR LOWER-COST TECHNOLOGY
Research and development (R&D) programs are underway worldwide to produce CO2 capture technologies with lower cost and energy requirements (IEA GHG, 2008). For example, the European CASTOR project aims at lower post-combustion capture costs by developing advanced amines and other solvents. In the U.S., the Department of Energy (U.S.
DOE) has a major R&D program supporting a variety of approaches to C02 capture (Figure 7) as well as a Regional Partnership Program that supports CCS data collection
and field tests across the country (NETL, 2009). U.S. electric utility companies and equipment manufacturers are also testing a post-combustion process using chilled ammonia in the hope of greatly reducing the CCS energy penalty and with it the cost of capture. Researchers in Australia, Europe, Japan, and North America are seeking major improvements also in pre-combustion capture with membrane technologies for oxygen and hydrogen production and CO 2 separation. A number of national and international programs are also pursuing new process concepts such as chemical looping combustion.
Fig. 7. Advanced approaches for CO2 capture being pursued in the U.S. DOE R&D program. Values in parentheses are the number of projects in each category as of early 2007. Total funding for these projects was $205 million (averaging approximately $62 million per year). Largest areas of funding were hydrogen membranes and oxygen separation systems. (Source: data from NETL 2007)
Although future costs remain highly uncertain, technological innovations in capture systems, in conjunction with improvements in power plant design, are projected to yield sizeable reductions in the future cost of CO2 capture. Two methods are used to estimate future costs. One is a "bottom-up" approach that employs engineering and economic analyses of proposed new process designs. Figure 8 shows examples of such projections by the U.S. DOE for post-combustion and pre-combustion systems. These analyses estimate cost reductions of 20-30% in the total (levelized) cost of electricity generation with CCS using advanced technologies.
Similar results are obtained from a "top-down" approach to cost estimation based on historical "experience curves." This approach does not specify any details of future technology design; rather, it assumes that costs evolve in a manner consistent with past experience for similar technologies. A large body of literature shows that technologies
typically become cheaper as they mature and are more widely adopted. The rate of cost reduction is commonly represented as a "learning rate" expressed as a function of cumulative production or installed capacity. In this analysis, the cost of different types of power plants with CO2 capture was estimated using historical learning rates for seven different energy or environmental technologies (see Rubin et al. 2007b for details). Key results are summarized in Figure 9 for four types of power plants. Coal gasification-based power plants (IGCC) show the largest potential for cost reductions since the major components of that system are not nearly as mature as the major components of combustion-based systems. Thus, unlike the bottom-up approach, the experience-based approach to cost estimation requires not only sustained R&D but also deployment and adoption of technologies in the marketplace to facilitate learning-by-doing. Policies that promote CCS deployment are thus essential to achieve the cost reductions that are projected.
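The experience-curve approach described above has a compact algebraic form: cost falls by a fixed percentage, the learning rate, with each doubling of cumulative installed capacity. The sketch below encodes that relationship; the starting cost, the capacity figures and the 5% learning rate are illustrative assumptions of mine, not parameters from Rubin et al. (2007b).

```python
# One-factor experience (learning) curve: cost falls by a fixed fraction per doubling
# of cumulative installed capacity.
import math

def cost_after_deployment(initial_cost: float, initial_capacity_gw: float,
                          final_capacity_gw: float, learning_rate: float) -> float:
    """Cost after capacity grows from initial to final, for a per-doubling learning rate."""
    b = -math.log2(1.0 - learning_rate)  # experience-curve exponent
    return initial_cost * (final_capacity_gw / initial_capacity_gw) ** (-b)

# Illustrative assumptions (not from the paper): $80/MWh COE with CCS today,
# 5 GW installed now, 100 GW eventually, and a 5% cost reduction per doubling.
coe_now = 80.0
coe_100gw = cost_after_deployment(coe_now, 5.0, 100.0, 0.05)
print(f"COE falls from ${coe_now:.0f}/MWh to ~${coe_100gw:.0f}/MWh "
      f"({1 - coe_100gw / coe_now:.0%} reduction) as capacity grows from 5 to 100 GW")
```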
Fig. 8. Projected increases in the cost of electricity (COE) for CO2 capture and storage using current technology (column A) and various advanced capture technologies, for (a) an IGCC plant with pre-combustion capture (note: ITM = ion transport membrane; WGS = water gas shift) and (b) a SCPC plant with post-combustion capture (note: RTI = Research Triangle Institute). The height of each bar shows the percent increase in COE relative to a similar plant without CO2 capture and storage. The absolute value of COE (in U.S. cents/kWh) appears in small print at the top of each bar. For the IGCC plant, the total COE is projected to fall from 7.13 ¢/kWh currently to 5.75 ¢/kWh with advanced technology (columns F and G), an overall COE reduction of 19%. For the PC plant, the COE falls from 8.77 ¢/kWh to 6.30 ¢/kWh, an overall reduction of 28%. A similar projection for a PC plant with oxy-combustion (not shown in this figure) estimates that advanced technologies can reduce the total cost of electricity from 7.86 ¢/kWh (currently) to 6.35 ¢/kWh, a 19% reduction in COE. (Source: NETL, 2006)
Fig. 9: (a) Projected plant-level learning rates (percent reduction in cost of electricity generation for each doubling of installed capacity) after an assumed 100 GW of total installed capacity worldwide for each of four types of power plants with CO2 capture. (b) The resulting overall percent reduction in cost of electricity (COE) generation after 100 GW of installed capacity. (Source: Rubin et al. 2007b)
CONCLUDING REMARKS Although C02 capture and sequestration holds considerable promise, its acceptance will depend on the nature and pace of government policies to limit CO 2 emissions and/or to
provide financial incentives for its use. At present, only the European Union (EU) has CO2 emission limits in the form of a "cap-and-trade" policy, which requires industrial sources either to reduce emissions or to buy "allowances" to emit CO2. The price of a CO2 allowance is established in a financial market called the European Union's emissions trading system (ETS), the largest existing market for carbon reductions (Ellerman and Joskow, 2008). At current ETS carbon prices, CO2 capture and storage remains prohibitively expensive relative to other measures for meeting emission limits. Although receiving considerable attention, unresolved legal, regulatory, and public-acceptance issues pose additional barriers to CCS deployment. New post-2012 EU emission limits are under negotiation. In the U.S., most cap-and-trade policies recently proposed in Congress fall far short of the carbon prices needed to stimulate use of CCS, although some proposals included financial incentives for its early adoption (Pena and Rubin, 2008). More recent bills, such as the Waxman-Markey bill adopted by the U.S. House of Representatives, would complement cap-and-trade with power plant performance standards that restrict CO2 emissions to levels only achievable with CCS. Whatever the method, until there are sufficiently stringent limits on CO2 emissions, CCS will be used only at a small number of facilities that can exploit government incentives or other economic opportunities such as enhanced oil recovery.
In the absence of strong policy incentives or requirements, where do we go from here? There is broad agreement that progress on CCS requires several full-scale demonstrations at fossil fuel power plants, especially coal-based plants. Such projects are critically needed to establish the true costs and reliability of the various approaches in different settings and to resolve legal and regulatory issues of large-scale geological sequestration (Wilson et al. 2008). To date, adequate financing (of roughly one billion U.S. dollars per project for a large coal-based power plant) has not yet been forthcoming. Government-industry partnerships in Asia, Europe, and North America are currently in various stages of planning and financing CCS projects (Table 3). Once adequate funding is in place, it will take several years to design and build each facility, followed by several years of operation to evaluate its reliability, safety, public acceptance, and performance in reducing CO2 emissions. If all goes well, a viable CCS industry could be launched in approximately a decade.
Table 3. A few examples of planned and proposed CO2 capture and storage projects (beyond 2008). Many would proceed in phases beginning with smaller units than shown here. As of mid-2008, approximately 65 projects have been announced worldwide. Several large projects also were cancelled in 2007-2008. (Source: MIT 2008)

Project Name | Location | Feedstock | Size (MW) | Capture | Start-up
Callide-A Oxy Fuel | Australia | Coal | 30 | Oxy | 2009
GreenGen | China | Coal | 250 | Pre | 2009
Williston | USA | Coal | 450 | Post | 2009-2015
Sargas Husnes | Norway | Coal | 400 | Post | 2011
S&S Ferrybridge | UK | Coal | 500 | Post | 2011-2012
Naturkraft Kårstø | Norway | Gas | 420 | Post | 2011-2012
Fort Nelson | Canada | Gas | Process | Pre | 2011
ZeroGen | Australia | Coal | 100 | Pre | 2012
UAE Project | UAE | Gas | 420 | Pre | 2012
Appalachian Power | USA | Coal | 629 | Pre | 2012
UK CCS Project | UK | Coal | 300-400 | Post | 2014
Statoil Mongstad | Norway | Gas | 630 | Post | 2014
RWE Zero CO2 | Germany | Coal | 450 | Pre | 2015
Monash Energy | Australia | Coal | 60 k bpd | Pre | 2016
REFERENCES
1. Benson, S.M., and Cole, D.R. (2008) "CO2 sequestration in deep sedimentary formations." Elements, 4:324-331.
2. CEPCI (2008) "Chemical engineering plant cost index." Chemical Engineering, January.
3. Chen, C., Rao, A.B., and Rubin, E.S. (2003) "Comparative assessment of CO2 capture options for existing coal-fired power plants." Proceedings of 2nd Annual Carbon Sequestration Conference, U.S. Department of Energy, National Energy Technology Laboratory, Pittsburgh, PA.
4. EIA (Energy Information Administration) (2007) Annual Energy Review 2006. Report no. DOE/EIA-0384(2006). U.S. Department of Energy, Washington, DC, http://tonto.eia.doe.gov/FTPROOT/multifuel/038406.pdf, 441 pp.
5. Ellerman, D., and Joskow, P. (2008) The European Union's Emissions Trading System in Perspective. Pew Center on Global Climate Change, Arlington, VA, 52 pp.
6. Holt, N. (2007) CO2 capture and storage - EPRI CoalFleet program. PacifiCorp Energy IGCC/Climate Change Working Group, Salt Lake City; Electric Power Research Institute, Palo Alto, CA, January 25.
7. IEA GHG (2005) Oxy-combustion for CO2 capture. Report No. 2005/09. International Energy Agency, Greenhouse Gas Programme, Cheltenham, UK.
8. IEA GHG (2008) RD&D projects database, research programmes. International Energy Agency, Greenhouse Gas Programme, Cheltenham, UK. Accessed September 9, 2008 at: http://co2captureandstorage.info/research_programmes.php.
9. IPCC (2005) IPCC special report on carbon dioxide capture and storage. Working Group III of the Intergovernmental Panel on Climate Change. Metz, B. et al. (eds). Cambridge University Press, New York, NY, USA, 442 pp.
10. IPCC (2007) Climate change 2007: mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Metz, B. et al. (eds). Cambridge University Press, New York, NY, USA.
11. Macfarlane, A.M. (2007) "Energy: the issue of the 21st century." Elements, 3:165-170.
12. Marchetti, C. (1977) "On geoengineering and the CO2 problem." Climatic Change 1:59-68.
13. MIT (2007) The Future of Coal. Massachusetts Institute of Technology, Cambridge, MA. ISBN 978-0-615-14092-6, 192 pp.
14. MIT (2008) Carbon dioxide capture and storage projects. Massachusetts Institute of Technology, Cambridge, MA. Accessed July 4, 2008 at: http://sequestration.mit.edu/tools/projects/index.html.
15. NETL (2006) CO2 capture developments. Presentation to Strategic Initiatives for Coal, Queenstown, MD, by S.M. Klara, National Energy Technology Laboratory, U.S. Department of Energy, Pittsburgh, PA.
16. NETL (2007) Cost and Performance Baseline for Fossil Energy Plants. Volume 1: Bituminous Coal and Natural Gas to Electricity, Final Report, Revision 1. Report No. DOE/NETL-2007/1281. Prepared by Research and Development Solutions LLC for National Energy Technology Laboratory, U.S. Department of Energy, Pittsburgh, PA, 516 pp.
17. NETL (2009) Regional Carbon Sequestration Partnerships. U.S. Department of Energy, National Energy Technology Laboratory. Accessed June 2009 at: http://www.netl.doe.gov/technologies/carbon_seq/partnerships/partnerships.html.
18. Pena, N., and Rubin, E.S. (2008) A Trust Fund Approach to Accelerating Deployment of CCS: Options and Considerations. Coal Initiative Reports, White Paper Series, Pew Center on Global Climate Change, Arlington, VA.
19. Rao, A.B., and Rubin, E.S. (2002) "A technical, economic, and environmental assessment of amine-based CO2 capture technology for power plant greenhouse gas control." Environmental Science & Technology 36:4467-4475.
20. Rubin, E.S., Chen, C., and Rao, A.B. (2007a) "Cost and performance of fossil fuel power plants with CO2 capture and storage." Energy Policy 35:4444-4454.
21. Rubin, E.S., Yeh, S., Antes, M., Berkenpas, M., and Davison, J. (2007b) "Use of experience curves to estimate the future cost of power plants with CO2 capture." International Journal of Greenhouse Gas Control, 1:188-197.
22. Schrag, D.P. (2007) "Confronting the climate-energy challenge." Elements, 3:171-178.
23. Simbeck (2008) The carbon capture technology landscape. Energy Frontiers International Emerging Energy Technology Forum, SFA Pacific, Inc., Mountain View, CA.
24. Vattenfall (2008) Vattenfall's project on CCS. Vattenfall AB, Stockholm, Sweden. Available at http://www.vattenfall.com/www/co2_en/co2_en/index.jsp.
25. Wilson, E.J. and 16 coauthors (2008) "Regulating the geological sequestration of CO2." Environmental Science and Technology 42:2718-2722.
MAKING RAPID TRANSITION TO AN ENERGY SYSTEM CENTERED ON ENERGY EFFICIENCY AND RENEWABLES POSSIBLE
WOLFGANG EICHHAMMER Fraunhofer Institute for Systems and Innovation Research Karlsruhe, Germany
INTRODUCTION: DYNAMICS OF A SUSTAINABLE ENERGY SYSTEM
A sustainable energy system, or to be more precise, an energy system in transition towards sustainability, may have a variety of components which depend in general on the specific context of a country (its resources, its historic acquaintance with a technology, etc.). Figure 1 lists some of these possible transitional components, such as carbon management, the production of hydrogen from fossil resources, and nuclear energy.
Fig. 1: Dynamics of a sustainable energy system. (Source: Fraunhofer ISI)
However, in all scenarios to set up a more sustainable electricity system, energy efficiency options and renewables appear to play a very central role, if not to say the only role, once the transition process is behind us. The electricity sector will play a very specific role in a sustainable energy system as it will increasingly take up uses from other sectors that are currently run on fossil fuels:
• Electricity can be used nearly universally;
• The shift towards electricity will continue in the industrial sector;
• A large part of the transport sector could be shifted to the electricity sector through a larger market share of electric cars;
• Carbon Capture and Storage options, during the time they may operate, would require larger amounts of electricity for the separation and transport of CO2;
• The energy consumption of the building sector could be largely reduced in the coming decades through low-energy and passive houses in the new, but partially also the old, building stock. However, the remaining energy demand may be covered partially by electricity, e.g., by a larger use of heat pumps.
For this reason, this paper concentrates on the electricity sector and discusses how a rapid transition of this system towards strong contributions from renewables and energy efficiency can be steered, and what the obstacles on that road could be. The paper concentrates on the example of Germany, a country which has, at least on the renewables side, advanced more than many other countries. In the first section that follows we discuss the ambitious scenarios that Germany has set up for the promotion of renewables, reaching shares of 70% renewables in the electricity sector by 2030. After that we discuss the obstacles on the path to this achievement. We try to explain the differences among countries in their approach to renewables and energy efficiency by differences in entrepreneurship. In a last chapter we tackle the issue of reduced demand for electricity and what steps need to be undertaken to progress further on this road more rapidly.
POLICY SCENARIOS IN GERMANY FOR THE ELECTRICITY SECTOR: AMBITIOUS POLICIES, BUT BUILDING ON SUBSTANTIAL EFFORTS IN THE PAST 15 YEARS
In 2007 the share of renewables in electricity production reached 14.2% of the gross electricity demand (BMU, 2008) (see Figure 2). The new Renewable Energy Sources Act, which governs the German feed-in tariff system and which entered into force on 1 January 2009, aims at a share of at least 30% of the gross electricity consumption in 2020. More ambitious policy scenarios are envisaged for the time period up to 2030 (Öko-Institut/Fraunhofer/Forschungszentrum Jülich/DIW, 2009), and according to those most recent projections it seems possible not only to reduce electricity demand by more than 20% but also to increase the share of renewables in the then reduced demand to over 70% (Figure 3).
http://www.bmu.de/english/renewable_energy/downloads/doc/42934.php
Fig. 2: Development of the installed power for renewables in Germany (MW), by source (wind, hydro, PV, biomass). [Source: own calculations based on BMU (2008), DEWI (2008), BSW (2008)]
[Figure 3: gross electricity generation in 2005, 2010, 2020 and 2030, broken down by fuel and plant type (hard coal, natural gas and oil plants in CHP and condensation mode, with and without CCS, plus coke oven and blast furnace gas).]
Fig. 3: Policy scenarios in Germany for the electricity sector. [Source: Öko-Institut/Fraunhofer/Forschungszentrum Jülich/DIW (2009)]
In total this may lead to an overall decrease in CO2 emissions from the electricity sector by about two thirds by 2030 as compared to 2005 (Figure 4). The fossil plant park will by then have shrunk considerably or had its emissions partly reduced with CCS options. Nuclear will have been
phased out according to the present decision.
Fig. 4: Policy scenarios in Germany for the electricity sector (impacts on CO2 emissions), broken down by fuel and plant type (brown coal, hard coal, natural gas and oil; CHP and condensation; with and without CCS), 2005-2030. [Source: Öko-Institut/Fraunhofer/Forschungszentrum Jülich/DIW (2009)]
"THEY DID NOT KNOW IT WAS IMPOSSIBLE, SO THEY DID IT!" (MARK TWAIN)
Reaching a share of 70% renewables in the electricity mix is a challenge and requires careful study of the interactions between the renewables portfolio, the grid and the demand for electricity. This section presents some reflections on this, although it is far from exhaustive; much more work is still necessary to understand such radical changes in all detail. We study the case of Germany. With respect to the share of renewables in the grid, Fraunhofer ISI/Uni Würzburg/Technical University Vienna (2008), from whose results this section presents a short excerpt, analyse the present situation and two scenarios:
• In Scenario S1, by 2020 renewables generate 171 TWh, corresponding to 28% of the gross electricity demand of 611 TWh.
• In Scenario S2, renewables contribute 426 TWh, or 81% of an assumed gross electricity consumption of 525 TWh.
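These shares follow directly from the generation and demand figures given for the two scenarios; a minimal check in Python, using no numbers beyond those quoted above:

```python
# Renewable shares implied by the two scenarios (figures in TWh from the text).
scenarios = {
    "S1 (2020)": {"renewables_twh": 171, "gross_demand_twh": 611},
    "S2":        {"renewables_twh": 426, "gross_demand_twh": 525},
}

for name, s in scenarios.items():
    share = 100.0 * s["renewables_twh"] / s["gross_demand_twh"]
    print(f"{name}: {share:.0f}% renewables")   # prints ~28% and ~81%
```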
These scenarios represent a high to very high share of fluctuating renewable energy sources. They show which challenges occur in the energy system in the case of an ambitious extension of renewable energy sources. In that context it is essential to consider electricity demand and electricity generation together. In the following analysis we consider the residual demand, which is obtained by
subtracting the feed-in profiles of the different renewable energy sources from the hourly load curve of 2006, scaled to the annual consumption. A further central assumption is that there are no substantial obstacles in the grid that limit the transport of electricity. This implies the future enhancement of the grids, which is necessary both to realise the European common electricity market and to integrate renewables efficiently. With respect to the importance and need of grid enhancement we refer to the grid studies DENA I (DENA, 2005) and DENA II (ongoing; see http://www.dena.de/en/topics/energy-systems/projects/projekt/grid-study-ii/), as well as current studies on the capacity enhancement of existing electricity lines (Lange, Focken, 2008).
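The construction of the residual demand can be illustrated with a minimal sketch. The hourly profiles and numbers below are invented placeholders, not the study's data; only the procedure (scale the load curve to a given annual consumption, then subtract the renewable feed-in profiles) reflects the description above:

```python
import numpy as np

HOURS = 8760
rng = np.random.default_rng(0)

# Invented hourly profiles (MW) standing in for the 2006 load curve and the
# feed-in of the different renewable sources.
load_2006 = 55_000 + 15_000 * rng.random(HOURS)
feed_in = {
    "wind":    8_000 * rng.random(HOURS),
    "pv":      2_000 * rng.random(HOURS),
    "hydro":   np.full(HOURS, 2_500.0),
    "biomass": np.full(HOURS, 1_500.0),
}

# Scale the load curve so that it matches the assumed annual consumption.
annual_consumption_twh = 611
load = load_2006 * (annual_consumption_twh * 1e6) / load_2006.sum()   # MW

# Residual demand = scaled load minus the sum of the renewable feed-in profiles.
residual = load - sum(feed_in.values())

print(f"maximum residual demand: {residual.max() / 1000:.1f} GW")
print(f"hours with overproduction (residual < 0): {int((residual < 0).sum())}")
```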
Analysis of the 2006 context
The analysis of the load profile for 2006 (Figure 5) shows that the electricity generation from renewables at no point exceeds the demand.
[Figure 5: residual demand (MW), to be met by conventional fossil power plants, over the 8760 hours of 2006; the socles formed by the primary reserve, the secondary reserve and nuclear energy are indicated.]
Fig. 5: Analysis of the load profiles for 2006. [Source: Fraunhofer ISI/Uni Würzburg/TU Vienna (2008)]
However, it has to be taken into account that some generation units must have priority in the system. The generation units that must in any case maintain their function belong to the primary reserve. Another important factor in the regulation of the grid is the secondary reserve. A third socle may arise from nuclear energy. A fourth socle may arise from CHP plants. The residual demand and the three socles are presented in Figure 5. It is important to underline that this presentation contains neither pumped hydro plants nor the transport of electricity abroad. The transmission capacities with the neighbouring countries amount to more than 10 GW and may make a substantial contribution. From Figure 5 it is evident that in 2006 the renewable offer never exceeded the demand. The nuclear socle is affected during about 40 hours per year. However, it can be estimated that in reality electricity exports and the use of pumped hydro plants avoided the need to reduce plant power. The maximum power that must be covered by power plants is 80 GW. Due to regional bottlenecks there were already in 2006 situations where the demand was below the supply. Those regionally very limited cases summed up to less than 0.2% of the renewable generation in 2006 and should in future be avoided through the enhancement of the grid and other measures. Another aspect is the gradient of the residual demand, which indicates the necessary change of the conventional generation capacity within one-hour intervals due to the fluctuation of the load and of the renewable energy sources. This is a suitable indicator for the required regulation speed of the electricity system. In 2006, during more than 90% of the time, the hourly gradient of the residual demand was less than 6 GW; however, there were already gradients in the system of up to about 23 GW. These results show that the present supply system can already handle high power gradients of the residual demand, which result from variations in demand and the fluctuations of the renewables. Rapidly regulated conventional power plants in particular contribute to this flexibility. In order to ensure a high degree of flexibility also in the future, renewable generation technologies will need to provide similar system services.
The primary reserve is used to regulate the electricity grid and to compensate for short-term variations between supply and demand and the corresponding variations in frequency. Regulation with the primary reserve should be fully active on a time scale of 0-30 seconds and may be used for up to 15 minutes; this change in power is regulated automatically, and special technical features are necessary for it. The power of the tendered primary reserve was about 660 MW in 2006 (Sensfuss, 2007). The primary reserve usually comprises conventional power plants in part load or hydro plants. If we assume that the primary reserve comes from conventional power plants that may provide 4% of their nominal power at a minimum working point of 65%, the minimum power that cannot be taken out of the system is 11 GW.
The secondary reserve takes over from the primary reserve and must be fully available within a time frame of 30-300 seconds; it is also called automatically. The tendered secondary reserve was about 3.3 GW in 2006. The secondary reserve may include hydro plants, such as pumped storage power stations, or conventional fossil power plants. If one assumes that about 1/3 of the secondary reserve is provided by conventional plants in part load, there is another socle of 4.4 GW.
Although nuclear electricity has, in contrast to renewables, no priority in the grid, for reasons of permit procedures the plant operators try to avoid curtailing nuclear plants even if they have to pay negative market prices. The exact height of this additional socle is difficult to determine, as nuclear plants may also be used to provide the regulation electricity for the reserves mentioned above. For the following it is estimated that about 45% of the installed nuclear power comes in as an additional socle for generation.
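The 11 GW primary-reserve socle quoted above follows from the stated assumptions; a minimal sketch of that arithmetic (values taken from the text, variable names are our own):

```python
# Primary-reserve socle: 660 MW of tendered primary reserve, assumed to be
# provided by conventional plants that can offer 4% of their nominal power
# while running at a minimum working point of 65% of nominal power.
tendered_primary_mw = 660
reserve_share_of_nominal = 0.04
minimum_working_point = 0.65

nominal_capacity_mw = tendered_primary_mw / reserve_share_of_nominal   # 16,500 MW
socle_mw = nominal_capacity_mw * minimum_working_point                 # ~10,700 MW

print(f"primary-reserve socle: {socle_mw / 1000:.1f} GW")   # about 11 GW
```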
A third important aspect for the system integration of renewables is the magnitude and frequency of forecast errors, as these essentially determine the required amounts and the costs of the energy needed for regulation. This aspect cannot be discussed here in detail, but it can be shown for 2006 that the regulation made necessary by forecast errors was small.
Analysis of the 2020 context
An analysis of the load profiles in Fig. 6 shows that also in 2020, with a share of renewables close to 30%, an overproduction of renewable electricity is not a major issue.
[Figure 6: residual demand (MW), to be met by conventional fossil power plants, over the hours of the year in the 2020 scenario.]
Fig. 6: Analysis of the load profiles for 2020. [Source: Fraunhofer ISI/Uni Würzburg/TU Vienna (2008)]
An important element is that, due to the planned phase-out of nuclear power plants, a conflict with those plants which are difficult to switch off can be avoided. This is why the nuclear socle in Figure 6 is lower; otherwise trade-offs between brown coal and nuclear would occur. If the existing export/import capacities and the existing pumped hydro plants are taken into account, the problem of overproduction can essentially be handled with existing possibilities. (The demand for 2020 is assumed to remain at the same level as for 2006. The debate around the lifetime of nuclear power plants may return following the recent elections in Germany, as both parties in the possible future government are in favour of revising the nuclear phase-out; this could then constitute an obstacle to the further increase in renewables.) Under the assumption that regional bottlenecks can be
avoided through grid expansion and the activation of other grid-based measures, only a very limited amount of additional load shifting or storage capacity is required. Also the gradients of the residual demand have not yet changed dramatically. Due to the scaling with the larger amounts of renewables in the system, a larger bandwidth of forecast errors arises in absolute terms (GW). This implies about a doubling of the regulation energy necessary for the short term (4 h).
Analysis of the 2050 context
Up to 2050, however, with 80% renewables, there is a substantial overproduction (Figure 7).
[Figure 7: residual demand (MW) over the hours of the year in the 2050 scenario, with substantial periods of negative residual demand (overproduction); primary and secondary reserves indicated.]
Fig. 7: Analysis of the load profiles for 2050. [Source: Fraunhofer ISI/Uni Würzburg/TU Vienna (2008)]
However, through the provision of regulating services by renewables themselves, the necessary socle of conventional power plants can be substantially reduced and the overproduction thereby diminished. Generally speaking, few conventional power plants will reach more than 4000 full-load hours in this scenario. If one additionally takes the power gradients into account, it is unlikely in such a scenario that conventional power plants in the current configuration as base-load power can be used for such purposes in a reasonable way. Further measures to better match demand and supply may arise from enhanced load management on the demand side and a more flexible feed-in of those renewables that can be regulated, such as biomass. The infrequently arising but potentially high overproduction can most likely not be economically compensated to a full degree with storage systems. For that reason it may be necessary in the time period 2030-2050 to proceed in some events to a down-regulation of renewable generation.
An interesting option can be storage systems whose primary function is not the regulation of the electric system but some secondary function. These may be batteries from electric vehicles. If one assumes a conservative estimate for the penetration of plug-in hybrids of about 7 million units in 2050, the disposable storage capacity could be 40 GW. This power would be enough to take up a larger share of the surplus from renewables in the scenario discussed above. The analysis of the forecast errors shows that the improvement of forecast quality is a central element for the cost-effective integration of fluctuating renewables in the electricity grid. In addition, more flexibility in the supply system in the intra-day period is of special relevance. Even if forecast quality is largely enhanced, after 2020 substantial power will be necessary on the supply and demand side to compensate for forecast errors.
KEY ENTREPRENEURIAL QUALITIES: "ENTREPRENEURS SEE OPPORTUNITY IN EVERY PROBLEM AND SEEK A SOLUTION WHEN FACED WITH A SETBACK" (STEPHEN C. HARPER)
As stated in the introduction, all studies on climate change and energy security agree: the central contributors to a sustainable energy system are energy efficiency technologies and renewables. However, implementation within European countries and world-wide differs widely, as does the perception of these technologies and of the time frame in which they can contribute to climate change mitigation and supply security: while some consider them technologies for, at best, the second half of the century, others clearly see them coming today and as highly necessary, especially in times of economic downturn. Many studies, such as the IPCC work and the Stern Report, have shown that energy efficiency technologies are highly economic, while most renewables, though still more expensive, follow rapid learning curves and create local employment. So the differences in implementation among countries must be seen from a normative perspective: countries that are convinced that these technologies will make a substantial contribution to industrial development push strongly for rapid implementation.
The example of wind energy: why this has worked
As an example of the normative role that policy takes (or does not take), consider the cumulative and newly installed wind power capacities world-wide (Figure 8).
TOP 10 TOTAL INSTALLED CAPACITY 2008
Country         MW        %
USA             25,170    20.8
Germany         23,903    19.8
Spain           16,754    13.9
China           12,210    10.1
India            9,645     8.0
Italy            3,736     3.1
France           3,404     2.8
UK               3,241     2.7
Denmark          3,180     2.6
Portugal         2,862     2.4
Rest of world   16,693    13.8
Total top 10   104,104    86.2
World total    120,798   100.0

TOP 10 NEW CAPACITY 2008
Country         MW        %
USA              8,358    30.9
China            6,300    23.3
India            1,800     6.7
Germany          1,665     6.2
Spain            1,609     5.9
Italy            1,010     3.7
France             950     3.5
UK                 836     3.1
Portugal           712     2.6
Canada             526     1.9
Rest of world    3,285    12.2
Total top 10    23,766    87.8
World total     27,051   100.0
Fig. 8: Cumulative and newly installed wind power in 2008 (MW): what drives the development? [Source: GWEC (2008)]
It is clear from the country comparisons (e.g., France and Germany) that the installed capacities have not so much to do with the available potentials or with the true costs of the technologies, but rather with the perceived costs. While one country recognises that renewable energy promotion costs some money, but that it does not exceed 10-20 Euro per household and per year (a figure derived from the evaluation of the German feed-in law), which can be considered an ordinary insurance against climate change (and one contributing at the same time to industrial development), another country may only see that the promotion costs several billion Euro per annum due to the expensive promotion of PV. This very different perception of the entrepreneurial chances in renewables and energy efficiency leads to the striking difference in results for the different types of renewables. So why is wind technology now finally taking off? It started with a very early pioneer (Denmark) and two decided champions (Spain and Germany) who were able to buy the costs of the technology down very substantially for the more hesitant countries. In the developing world, India had an early start, showing that this is not a technology for rich countries only. More recently, support from high fossil fuel prices came in as an additional factor, persuading the really big players like the USA and China to
add tremendous capacities in the past two years, contributing to a further cost decrease. By the end of 2009, overall installed wind capacity will be close to 150 GW.
The example of concentrating solar power: why this has not (yet) worked
An example which is at present at the opposite end of the spectrum from wind energy is concentrating solar power (Figure 9).
Fig. 9: Concentrating solar power technologies.
Although this technology had a good start in the eighties, with 350 MW installed rapidly in California, and the technology was on an equal footing with wind energy, up to now the 1 GW mark has not been reached, although rapid growth is expected in the next years up to 2015, mainly in Spain and California (Figure 10). Why does this technology have more than a hundred times less installed capacity than wind energy at present?
[Figure 10: installed and expected CSP capacity (MW) in the USA, Spain and other countries (e.g., Morocco).]
Fig. 10: Concentrating solar power technologies. [Source: Fraunhofer ISI (2010)]
There is a bundle of reasons, rather simple to state (but also to overcome):
• First, there was a lack of champions among industrial countries to buy down the technology costs. That is why the costs of CSP today are still substantially above fossil generation costs, which wind energy is already approaching or has already reached (Figure 11). California had initially assumed this role, but then the support for the technology faded away for 15 years, until Spain and again California emerged as second-generation champions that have put in place sustained promotion schemes.
• Second, countries that would have had an interest in promoting the technology, like Germany, were unable to introduce a demand-driven policy because CSP does not work well at the latitude of Germany. Hence those countries were left to promote R&D, which effectively happened. This was nevertheless a cornerstone in keeping the technology alive and the scientists and engineers in the field.
• Third, the technology unfortunately comes "in big lumps", that is, rather 30-100 MW and not 1 kW like, for example, PV. So a concentrating solar plant cannot be paid for out of everyone's pocket, unlike PV, which is considerably more expensive than CSP but has already reached a much wider spread because 1 kW can be easily installed by individuals. In addition, PV again had champions (Japan, Germany) that promoted the technology with demand-driven policies, which CSP did not have. That has led to a fast learning curve for PV. So at present CSP is like a train that is late: everyone thinks that this
train can even take a bit more time as it is already late.
[Figure 11: cost of electricity (LRMC, payback time 15 years, in €/MWh, scale roughly 0-220) for renewable generation options, including geothermal electricity, solid biowaste, solid biomass, biomass co-firing and biogas.]
Fig. 11: (Current) cost levels of electricity generation from renewables - the last 15 years for concentrating solar power. (Source: Fraunhofer ISI based on various sources)
• Other countries, such as developing countries in North Africa, that would have enormous potentials both for their own consumption and for exports, only perceived the comparatively high costs, but not the chances of enhanced supply security, of balancing the increasing amounts of wind energy in the supply system, and of regional employment and championship.
• Essential actors in the promotion of CSP were the World Bank and the GEF. They promoted several CSP plants, in particular in North Africa. However, their error was to consider grants as the only funding mechanism, which may not reach the capacities necessary to bring the costs down. In another attempt the World Bank is investigating how to bring more sustainable financing sources together for this technology (Fraunhofer ISI, 2009), including new financing mechanisms as foreseen in the EU Directive on Renewables, which allow renewable electricity to be exported to Europe and financed via, for example, a feed-in tariff.
IF ENERGY EFFICIENCY COULD MAKE AS MUCH WIND AS WIND ENERGY...
Energy efficiency has contributed tremendously to reducing our dependency on fossil fuel consumption, but it does not really have a sexy image. It is much more heterogeneous, much more difficult to understand and hence also very difficult to sell, especially on the policy side, and especially as there is nothing to see or to show, apart from nice events which document that even after so many years of energy savings it is still possible to save 91% of energy compared to a conventional plant or appliance (Figure 12).
[Figure 12 lists the energy efficiency measures applied:]
• Waste heat recovery from work machines
• Optimised heat distribution
• Use of a heat pump with a coefficient of performance greater than 4
• Displacement ventilation via source air outlets
• Optimal dimensioning of the piping
• Use of heating and cooling water pumps with energy efficiency class A
• Use of energy-saving EC fans
• Use of a 153-kWp photovoltaic system
Fig. 12: 1st Award, dena Energy Efficiency Award 2009: ebm-papst Mulfingen GmbH & Co. KG, construction of a new energy-efficient production plant in Hollenbach. (Source: dena Energy Efficiency Award 2009)
Numerous studies have shown that most energy efficiency options, in contrast to renewables, pay off already now at fairly low fossil fuel prices (Figure 13). Increasingly we also start to realise that energy efficiency options follow experience curves similar to renewables, and that it is therefore important to stimulate the market demand for those technologies to help them get over the first market hurdles (see the example of efficient windows in Figure 14; many more examples of this type could be shown).
http://www.industrie-energieeffizienz.de/energy-efficiency-award/energy-efficiency-award-2009.html
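Experience (learning) curves of the kind referred to here are usually written as unit cost falling by a constant "learning rate" with every doubling of cumulative production. The sketch below is generic; the 20% learning rate and the starting cost and capacity are illustrative assumptions, not figures from the studies cited:

```python
import math

def experience_curve_cost(cumulative_q, q0, c0, learning_rate):
    """Unit cost at cumulative production cumulative_q, given cost c0 at cumulative
    production q0 and a cost reduction of learning_rate per doubling of cumulative
    production (standard one-factor experience curve)."""
    b = -math.log2(1.0 - learning_rate)          # experience-curve exponent
    return c0 * (cumulative_q / q0) ** (-b)

# Illustrative: a technology costing 4000 $/kW at 1 GW cumulative capacity,
# with an assumed learning rate of 20% per doubling.
for q_gw in (1, 2, 4, 8, 16, 32, 64):
    cost = experience_curve_cost(q_gw, q0=1, c0=4000, learning_rate=0.20)
    print(f"{q_gw:>3} GW cumulative -> {cost:6.0f} $/kW")
```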
[Figure 13: theoretical energy saving potential in Belgium in 2030 (in millions of barrels of oil equivalent), with the profitability threshold of the measures shown as a function of the crude oil price ("oil price today" versus "oil price yesterday and tomorrow"). Source: McKinsey.]
Fig. 13: McKinsey study for an energy efficient Belgium.

Table 4: Cost of window manufacturing in 1970 and in 2000, nominal and real (U-value 1970 approx. 2.5-3.0 W/m²K; 2000 approx. 1.3 W/m²K), expressed in CHF/m² standard window.

Year            Glass   Coating   Window manufacturing   Assembly incl. transport   Calculated contribution margin   Total
1970 (nominal)  150     70        120                    60                         80                               480
1970 (real¹)    202²    94²       135³                   80³                        90³                              601
2000            100     100       80                     80                         90                               450

¹ Real 2000 prices; ² adjusted with the Swiss producer price index for the manufacturing industry; ³ adjusted with the average price index for the construction of residential buildings. Source: [3], data obtained from an interview with a representative of SZFF (Schweizerische Zentralstelle für Fenster- und Fassadenbau), Dietikon/ZH.

Fig. 14: Do efficient windows cost more? [Source: Jakob (2007)]
It would not be fair to say that nothing has been done so far to improve energy efficiency. Numerous instruments have been introduced and tested, some with very large success: the energy labelling of electric appliances or of buildings and cars, regulation for energy efficient buildings, incentive systems like the French Bonus-Malus scheme for energy efficient cars, etc. (On European energy efficiency measures much can be learned from the MURE database on energy efficiency measures in Europe, www.mure2.com.) We have seen further up the importance of
the electric system from the supply side. However, there is also the other side of the coin, which represents electricity savings. The EU, for example, has initiated the ecodesign process, which has already spurred mandatory standards for 10 products and will spur another 20 or more mandatory standards in the next years (Figure 15).
[Figure 15, "The EU Ecodesign Directive: a comprehensive regulatory approach to electricity efficiency", gives an overview of product groups ("lots") and their status in the EuP ecodesign process, including among others boilers, water heaters, room air conditioning, computers and lighting.]
Fig. 15: Ecodesign directive for the energy efficiency of electric appliances and usages. [Source: ECEEE (2009)]
Yet the contribution of energy efficiency over the course of the next 40 years must be much larger than it is at present. Amory Lovins once compared our way of using energy with taking a bath in a bath tub with a plug shaped as in Figure 16: instead of looking for another bath plug, we just open the tap further to get more water into the tub. Figure 16 points to the main paths for reducing energy consumption, and electricity consumption in particular, by the required 25% discussed above, but potentially by much more. These options include demand-driven policies to bring existing technologies to the market, R&D for more advanced energy efficiency technologies (Figure 17 and Figure 18) and material efficiency options (Figure 19).
[Figure 16: the "bathroom plug" diagram, contrasting the current efficiency level with the no-regret potential to 2020, the energy efficiency R&D potential to 2050 and the long-term potential to 2080.]
Fig. 16: Amory Lovins' Bathroom Plug.
Fig. 17: Long-term energy saving potentials in the industrial sector: ratio of current energy consumption to minimum energy consumption. (Source: Fraunhofer ISI)
Fig. 18: Shortening of the process chain for rolled steel and impact on energy consumption. [Source: Aichinger/Steffen (2006)]
[Figure 19 presents a material efficiency example: "By using high-strength steel, can we reduce material thickness so much that we save 25% or more in weight? Will our production tools be spoilt by working with the new material? Will the product not be too expensive? The answer to these questions by the SME Zelenka in Bavaria was a transport platform for high-value machine parts that is 22% lighter and has 12% lower production costs. The product is now produced and sold on a regular basis."]
Fig. 19: Lower material costs - improve energy efficiency. [Source: Zelenka GmbH (2004)]
CONCLUSIONS
These examples tend to show that the perception of policy makers and of important stakeholders has an overwhelming influence on sustainable technology uptake, although there are "objective factors", such as energy price hikes and the Ukrainian-Russian gas supply crisis, that favour rapid deployment. The price we are willing to pay for something reflects our priorities.
We accept high costs when we feel that an issue is important:
• Military expenses worldwide are 1500 billion Euro annually.
• The cost of the worldwide fossil energy supply is 2000 billion Euro (with a modest oil price of 50$ per barrel).
• Saving our financial system already amounts to similar sums.
Renewables and energy efficiency technologies partly face upfront financing issues but are far from requiring similar amounts. Already now they are contributing to a more efficient economy. The evaluation of the German feed-in law, for example, has shown that the costs for a household do not exceed 20 Euro annually and will decrease further once we are further down the learning curves (Figure 20).
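The order of magnitude of the fossil supply figure can be checked with back-of-the-envelope arithmetic; the oil consumption figure below is an illustrative assumption of ours (roughly 85 million barrels per day for the late 2000s), not a number from the text:

```python
# Rough order-of-magnitude check of the worldwide fossil energy bill at a
# modest oil price of 50 $/bbl. World oil consumption of ~85 million barrels
# per day is an assumed illustrative figure, not an official statistic.
OIL_PRICE_USD_PER_BBL = 50
oil_bbl_per_day = 85e6

oil_bill_usd_per_year = oil_bbl_per_day * 365 * OIL_PRICE_USD_PER_BBL
print(f"oil alone: ~{oil_bill_usd_per_year / 1e12:.1f} trillion $ per year")
# About 1.6 trillion $ for oil alone; adding natural gas and coal brings the
# total fossil supply bill to the order of the 2000 billion Euro quoted above.
```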
Fig. 20: Annual cost of a personal insurance to protect the climate, to incentivise new sustainable technologies, to create more employment and to enhance competitiveness and supply security. [Source: cost derived from BMU (2007)]
The key to further developing those technologies is ambitious policies that push the demand for them very rapidly, riding them down the experience curve. During the Erice seminar on planetary emergencies there was a lively debate between "climate skeptics" (or "optimists"; they work with equations where the feedback mechanisms are damped) and the "climate concerned" (who work with more or less linear feedback mechanisms). "Climate pessimists", who work with dramatically enhanced feedback loops leading to catastrophic climate change, were not present. While this session was extremely interesting,
it failed to address the point that we have a number of important measures in the field of energy efficiency and renewables which, besides addressing climate change effectively (at least in the view of the climate concerned), have many more dimensions beyond climate change alone, which makes them worth pursuing even in the face of doubts about the human impact on climate.
REFERENCES
1. Aichinger, H.M. and Steffen, R. (2006), "Maßnahmen zur CO2-Minderung bei der Stahlerzeugung. Special Issue: Kohlendioxid und Klimaschutz." Chemie Ingenieur Technik, Volume 78, Issue 4, pages 397-406, 27 March 2006.
2. BMU (2007), Erfahrungsbericht 2007 zum Erneuerbare-Energien-Gesetz (EEG-Erfahrungsbericht). Bundesministerium für Umwelt, Naturschutz und Reaktorsicherheit. http://umweltministerium.de/erneuerbare_energien/downloads/doc/40342.php.
3. BMU (2008), Entwicklung der erneuerbaren Energien in Deutschland im Jahr 2007. Bundesministerium für Umwelt, Naturschutz und Reaktorsicherheit. Stand: 12. März 2008. http://www.erneuerbare-energien.de/files/pdfs/allgemein/application/pdf/ee_hintergrund2007.pdf.
4. BSW (2008), Statistische Zahlen der deutschen Photovoltaikbranche. Bundesverband Solarwirtschaft. http://www.solarwirtschaft.de/fileadmin/content_files/faktenblatt_pv_0408.pdf.
5. Dena (2005), Energiewirtschaftliche Planung für die Netzintegration von Windenergie in Deutschland an Land und Offshore bis zum Jahr 2020. Deutsche Energie-Agentur 2005. http://www.offshore-wind.de/media/article004593/dena-Netzstudie,%20Haupttext,%20r.pdf.
6. DEWI (2008), Status der Windenergienutzung in Deutschland - Stand 31.12.2007. Deutsches Windenergie-Institut. http://www.dewi.de/dewi/fileadmin/pdf/publications/Statistics%20Pressemitteilungen/31.12.07/folien%20statistik_2007.pdf.
7. ECEEE (2009), The Eco-design Directive for energy using products (EuP). European Council for an Energy Efficient Economy. http://www.eceee.org/Eco_design/
8. Fraunhofer ISI (2009), "MENA Region - Regional Concentrating Solar Power (CSP) Scale-up Initiative". Report on behalf of the World Bank (forthcoming 2009).
9. Fraunhofer ISI/Uni Würzburg/TU Vienna (2008), Fortentwicklung des Erneuerbare-Energien-Gesetzes - Analysen und Empfehlungen. Fraunhofer Institute for Systems and Innovation Research ISI. Intermediate report. Karlsruhe, Würzburg, 15 August 2008.
10. GWEC (2008), Global Wind Energy Council. http://www.gwec.net/
11. Jakob, M. (2007), Essays in Economics of Energy Efficiency in Residential Buildings - An Empirical Analysis. http://e-collection.ethbib.ethz.ch/view/eth:29755
12. Harper, S.C. (2005), Extraordinary Entrepreneurship: The Professional's Guide to Starting an Exceptional Enterprise. John Wiley and Sons, Hoboken, New Jersey, 2005.
13. Lange, M. and Focken, U. (2008), Studie zur Abschätzung der Netzkapazität in Mitteldeutschland in Wetterlagen mit hoher Windeinspeisung. http://www.erneuerbare-energien.de/files/pdfs/allgemein/application/pdf/studie_netzkapazitaet_windeinspeisung.pdf.
14. Zelenka GmbH (2004), www.materialeffizienz.de
BEYOND EMERGING LOW-CARBON TECHNOLOGIES TO FACE CLIMATE CHANGE?
GIORGIO SIMBOLOTTI
Senior Advisor on Energy Technology, ENEA - President's Office, Rome, Italy
SUMMARY
Major energy projection studies (e.g., Energy Technology Perspectives 2008 of the International Energy Agency, IEA, 2008) indicate that effective emissions mitigation in the energy sector is technically feasible, but hardly achievable. To stabilise the greenhouse gas concentration between 450 and 550 parts per million (ppm) and avoid significant temperature increase (Intergovernmental Panel on Climate Change, IPCC, 2007), we need to meet two key conditions: a) effective emission reduction policies must be immediately agreed upon and implemented at global level; b) a number of emerging low-carbon technologies must be deployed worldwide over the next 20 years. Although rather expensive, some emerging technologies (e.g., wind and solar energy, biomass for combined heat and power, 3rd generation nuclear power plants, efficient end-use devices) are already being commercialised and entering the market to a significant extent. Others, such as carbon capture and storage (CCS), 2nd generation biofuels and low-carbon vehicles, are still under development. We do need all these technologies, as each of them can make a significant contribution to reducing emissions, but no single one is decisive in achieving the mitigation objectives. The two conditions above are challenging: a global climate policy agreement is the ambitious objective of the 15th Conference of the Parties of the United Nations (December 2009, Copenhagen); a timely, widespread deployment of emerging low-carbon technologies will depend on policy measures and financial incentives, and on the ability of industry to overcome technical hurdles and reduce technology costs. Missing one of the two conditions could either jeopardise the mitigation process or result in unsustainable mitigation costs. In this context, to secure our energy and climate future we need to search for cost-effective breakthrough technologies that hold the potential to revolutionise the energy sector (e.g., highly efficient, low-cost PV; membranes for CO2 capture; microalgae for biofuels; photo-electrolysis; artificial photosynthesis; low-cost fuel cells; marine energy; high-temperature solar dishes; Gen IV fast breeders; advanced energy storage; portable power; piezoelectric devices; power electronics; OLED lighting; etc.). Most such technologies are in an early stage of development and require advances in basic science to emerge from labs. They are not included in current energy projections,
but already attract industrial interest because of their potential to drive, in a few decades, radical changes in the way we generate and use energy. In a time of urgent action, economic crisis and budget constraints, we need to focus our effort on the most promising options. High-level, authoritative scientific frameworks can play a key role in separating realistic targets from technology dreams, or from options that may only have an impact well beyond the timescale available for climate change mitigation.
THE IEA ETP STUDY
If the conclusions of the Intergovernmental Panel on Climate Change (IPCC, 2007) are correct, to avoid major climate changes and significant increases of the global atmospheric temperature we need to reduce global emissions of greenhouse gases (GHG) by more than 50% by 2050. A number of authoritative studies are available on how to reduce GHG emissions from the energy sector, the most important source of CO2 emissions. One of the most detailed analyses focusing on energy technologies is Energy Technology Perspectives 2008 by the International Energy Agency (IEA ETP, 2008, lead author Dolf Gielen). Based on a partial-equilibrium model (MARKAL) of the world energy system, including energy trading between geopolitical regions, and on a detailed energy technology database, the ETP study analyses the competition between current and future energy technologies in the global market and determines, over time and on a regional basis, the energy and technology mix that satisfies the energy demand at the minimum cost (a toy illustration of this least-cost logic is sketched after the scenario list below). The ETP study includes two basic scenarios with a 2050 time horizon:
• ACT: global energy-related emissions are returned to the current (i.e., 2005) level by 2050 and the CO2 concentration in the atmosphere is stabilized at a level of some 520 ppm (corresponding to a global temperature increase higher than 2.4°C, IPCC, 2007);
• BLUE: global energy-related emissions are reduced by 50% below the current level by 2050 and the CO2 concentration in the atmosphere is stabilized at a level of some 450 ppm (corresponding to a global temperature increase between 2.0°C and 2.4°C, IPCC, 2007).
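The least-cost logic of a MARKAL-type model can be illustrated with a deliberately tiny linear programme: choose generation from a handful of technologies to meet a fixed demand at minimum cost, subject to capacity limits and an emissions cap. The technologies, costs and limits below are invented for illustration and have nothing to do with the actual ETP technology database:

```python
from scipy.optimize import linprog

# Toy MARKAL-flavoured problem: minimise total generation cost subject to
# meeting demand, respecting capacity limits and an overall CO2 cap.
# All numbers are invented for illustration only.
techs  = ["coal", "gas", "wind", "nuclear"]
cost   = [40, 60, 80, 50]       # $/MWh
co2    = [0.9, 0.4, 0.0, 0.0]   # tCO2/MWh (equivalently MtCO2/TWh)
cap    = [500, 400, 300, 200]   # maximum generation, TWh
demand = 1000                   # TWh
co2_cap = 350                   # MtCO2

res = linprog(
    c=cost,                                  # objective: minimise generation cost
    A_ub=[co2], b_ub=[co2_cap],              # emissions cap
    A_eq=[[1, 1, 1, 1]], b_eq=[demand],      # meet demand exactly
    bounds=list(zip([0, 0, 0, 0], cap)),     # per-technology capacity limits
    method="highs",
)
for tech, gen in zip(techs, res.x):
    print(f"{tech:8s}: {gen:6.1f} TWh")
```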
In addition to the basic scenarios, the study includes a number of variants to explore uncertainties in the development of emerging technologies as well as the regional diversification of mitigation strategies, e.g., accelerated penetration of nuclear power; delayed or reduced penetration of either energy efficiency or renewable and carbon capture and storage technologies; and accelerated deployment of efficiency in transport, electric and fuel cell vehicles. The study determines the level of emissions associated with each scenario. Conceived in 2007-2008, an important characteristic of the ETP study is that its basic assumptions on energy prices (e.g., a basic long-term oil price of $60-65/bbl) were not influenced by the 2008 energy price peak, nor by the current economic crisis. However, ETP does include the significant cost increase of energy technologies (roughly,
a factor of two) that occurred during the current decade, in particular between 2004 and 2008, as a consequence of higher material prices and sharply increasing demand for energy technologies in emerging economies. Whether and to what extent such an increase was partially driven by speculation and/or by unprecedented demand for energy technologies (as with the oil price peak in mid 2008), and whether it could be mitigated or offset by the economic crisis, is currently a matter of analysis. In summary (Table 1), the conclusion of the ETP study is that the mitigation of energy-related emissions is technically and economically achievable, assuming an immediate global commitment at governmental level and the urgent deployment of a number of emerging low-carbon technologies, including early-stage technologies with significant R&D and cost uncertainties.
Table 1: Summary of the ETP 2008 Basic Scenarios.
The mitigation is technically achievable because low-carbon technologies to reduce CO2 emissions are presently available, although at an average cost higher than the cost of current energy technologies. It is also economically feasible because, in the most ambitious scenario (i.e., BLUE), the global, cumulative investment to mitigate emissions amounts to some U.S. $45 trillion over the period 2010-2050, equivalent to 1.1% of the global GDP. This is an additional investment with respect to the baseline scenario. Some 80% of this effort would be invested in end-use energy technologies, and it could be substantially compensated for by savings in fossil fuel use, depending on discount rate assumptions. The mitigation effort would require a global governmental commitment, as more than 65% of the cumulative energy-related emissions (Table 1, BLUE) is expected in non-OECD countries.
THE TECHNOLOGY CHALLENGE
The mitigation process also requires the urgent deployment of a number of emerging low-carbon technologies because many technologies can make a significant contribution, but no single one alone is decisive for achieving the mitigation objectives. In both the ACT and BLUE scenarios (see Figure 1), important emission reductions are obtained from the deployment of highly efficient technologies in all the energy sectors, including
power generation, transport, buildings and industry; from carbon capture and storage (CCS) technologies; from generation III and IV nuclear reactors; and from renewable technologies that are either commercially competitive (e.g., wind power), in early deployment (e.g., solar photovoltaic and concentrating solar power) or under demonstration (e.g., 2nd generation biofuels from ligno-cellulosic feedstock). In the most ambitious BLUE scenario, significant contributions are also provided by electric vehicles and by hydrogen-powered fuel cell vehicles.
[Figure 1: CO2 emission reduction by technology (Gt CO2/yr), 2005-2050, in the ACT and BLUE scenarios.]
Fig. 1: Technology contribution to CO2 abatement in the ETP scenarios.
Considering that CCS technologies are currently under demonstration, with commercial deployment expected beyond 2020; that the availability of fuel cell vehicles cannot realistically be expected before 2020; that Gen-IV nuclear reactors will be available on the market beyond 2030; and that other technologies such as wind, solar power and biofuels must be deployed at unprecedented rates to achieve the mitigation objectives (Figure 2), it is clear that reducing CO2 emissions in the energy sector (in particular in the BLUE scenario) is technically feasible, but implies very significant, if not unprecedented, technology challenges. A number of emerging low-carbon technologies should be urgently developed and commercialized in order for emission growth to peak and the reduction process to start as soon as possible (after 2012, in the BLUE scenario).
Fig. 2: Ambitious technology deployment rates and targets (GW) in the BLUE scenario.
THE ECONOMIC AND COST CHALLENGES
Not least is the economic challenge. Many low-carbon technologies offer negative CO2 abatement costs (actually, business opportunities) as the fossil fuel savings exceed the incremental cost of the new technologies over the conventional ones. This is the case for many efficient technologies in the end-use sectors (see Figure 3). However, most low-carbon technologies imply positive CO2 abatement costs and a net additional cost for the energy system, as the fuel saving does not compensate for the incremental cost of the new technologies. This applies, for example, to CCS in power generation, to most renewable technologies, and, to a large extent, to CCS in industrial processes and to fuel cell vehicles. In the ACT scenario, the marginal CO2 abatement cost does not exceed $50/tCO2. In the BLUE scenario the marginal cost is $200/tCO2 under optimistic assumptions on technology development and cost reduction over time, and rises up to $500/tCO2 if these assumptions are less optimistic. For comparison, the current CO2 price in the European Emission Trading System is some €14/tCO2.
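The sign convention used here (negative abatement costs when fuel savings exceed the extra cost of the low-carbon option) follows from the standard definition of the CO2 abatement cost; a minimal sketch with invented illustrative numbers, not values from the ETP study:

```python
def co2_abatement_cost(extra_annualised_cost, annual_fuel_savings, annual_co2_avoided_t):
    """Abatement cost in $/tCO2: the net extra annual cost of the low-carbon option
    over the conventional one (annualised investment and O&M minus fuel savings),
    divided by the tonnes of CO2 avoided per year."""
    return (extra_annualised_cost - annual_fuel_savings) / annual_co2_avoided_t

# Efficient end-use measure: fuel savings larger than the extra cost -> negative cost.
print(co2_abatement_cost(extra_annualised_cost=2e6,
                         annual_fuel_savings=3e6,
                         annual_co2_avoided_t=20_000))      # -50.0 $/tCO2

# CCS-type measure: extra cost, little or no fuel saving -> positive cost.
print(co2_abatement_cost(extra_annualised_cost=60e6,
                         annual_fuel_savings=0.0,
                         annual_co2_avoided_t=1.2e6))       # 50.0 $/tCO2
```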
[Figure 3: marginal CO2 abatement cost curve against the CO2 emission reduction achieved in 2050 (Gt CO2/yr), running from end-use efficiency (negative costs) through renewables and CCS in power generation to CCS in industry and fuel cell vehicles.]
Fig. 3: Marginal CO2 Abatement Costs in the ETP Scenarios.
An important consideration in the evaluation of the ETP results is that an oil price increase of $10 per barrel translates into an economic incentive to CO2 abatement of some $25 per tCO2. As a consequence, under the ETP oil price assumption ($60-65/bbl), the CO2 abatements at $200/tCO2 in the BLUE scenario come out for free when the oil price reaches the level of $140-145/bbl (June 2008). As mentioned above, in both the ACT and BLUE scenarios the global incremental investment in low-carbon technologies may be substantially offset by the fuel saving, depending on the assumptions on the discount rate. However, the financial needs to implement the mitigation strategies (i.e., $0.1-0.3 trillion per year for R&D and technology learning in the short to mid term, and $0.5-2.0 trillion per year for deployment and commercialization over the long term) and the burden sharing between geopolitical areas remain critical negotiation issues (upcoming United Nations Conference of the Parties, 15th CoP, Copenhagen, Dec. 2009). The key element is the high cost of most emerging low-carbon technologies compared with current technologies. The ETP study builds on the expectation that the investment costs of these technologies will decline over time as a result of technology learning, industrial production and economies of scale. The emerging technologies will therefore be competing with cheaper fossil fuel technologies in the global energy market. The capital cost of key renewable technologies (see Figure 4) is currently well above some $3000/kW, but it is projected to decline to below this threshold. Significant cost reductions are also projected for CCS, for Gen-IV nuclear power plants and for efficient end-use technologies. The faster the cost reduction for emerging technologies, the lower the cost that governments and tax payers must bear in the form of carbon taxes, cap-and-trade schemes and incentives to support the deployment of such technologies. The process of cost reduction by technology learning (R&D and industrial learning) usually characterizes new technologies when they move from labs to market. The process slows down or disappears when the technologies reach a certain level of deployment and maturity. As a matter of fact, on-shore wind power offers limited cost reduction opportunities while PV power still offers significant further potential. In addition, the greater the technical complexity of a technology (e.g., number of components and sub-systems), the lower the potential for cost reduction. Therefore, as CCS technology increases the complexity of mature technologies such as coal- and gas-fired power plants, we cannot expect dramatic cost reductions in that area. Similar considerations apply to nuclear.
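The "for free" statement above follows from simple arithmetic on the two figures given (roughly $25/tCO2 of abatement incentive per $10/bbl of oil price, and a $200/tCO2 marginal abatement cost); a minimal check:

```python
# At what oil price does a $200/tCO2 marginal abatement cost break even,
# starting from the ETP baseline assumption of $60-65/bbl?
incentive_per_10_usd_bbl = 25.0      # $/tCO2 per $10/bbl (from the text)
marginal_abatement_cost = 200.0      # $/tCO2, BLUE scenario, optimistic case
baseline_oil_price = (60, 65)        # $/bbl, ETP assumption

required_increase = 10.0 * marginal_abatement_cost / incentive_per_10_usd_bbl   # 80 $/bbl
low, high = (p + required_increase for p in baseline_oil_price)
print(f"required oil price increase: ~${required_increase:.0f}/bbl")
print(f"break-even oil price: ~${low:.0f}-{high:.0f}/bbl")    # ~140-145 $/bbl
```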
Fig. 4: Current and projected investment costs of energy technologies ($1000/kW).
BEYOND EMERGING LOW-CARBON TECHNOLOGIES?
Important cost reductions may be driven by major technology or material breakthroughs, e.g., moving from silicon PV to thin-film PV. Indeed, basic and material sciences are currently exploring a number of breakthrough technologies that hold the potential to change the way we generate and use energy while promising short-term advances (2015-2020) and low-cost prospects. For example (see Figure 4), new-generation photovoltaic cells (including organic PV) promise manufacturing costs below $1000/kWp within a time span of 10 years. Similarly, fuel cell producers are confident that ten years from now they will be able to produce fuel cells for automotive applications at a cost of some $100/kW that will compete with conventional internal combustion engines. Apart from these two examples (not necessarily the most promising ones), other breakthrough technologies at different levels of development include, e.g., membranes for CO2 capture; microalgae for biofuels production; photo-electrolysis; artificial photosynthesis; marine energy technologies; high-temperature solar dishes; advanced fast-breeder reactor concepts; energy-storage technologies; devices for portable power; power
electronics; organic LEDs. Some such technologies are in an early stage of development and require advances in basic sciences to emerge from labs. While most of them are not included in the current energy projections because of their early stage, some already attract private and public investment because of their potential to drive, in a few decades, a technology revolution in the energy sector. Recently, in sectors other than energy, we have seen new technologies replace mature technologies and infrastructure within a few years and radically change our habits (e.g., mobile phones, 1990-2005). These revolutions were not anticipated by technology projections and required neither governmental incentives nor implementation policies to conquer the market. They simply offered new services to consumers and required governments to implement regulation policies. The basic question is whether we can imagine and design an energy and climate future other than the challenging one depicted in the current energy scenarios, and whether focused scientific communities can help select the most promising technology options and lead the development process, as happened in the discovery of nuclear energy.
REFERENCES
1. International Energy Agency (2008), Energy Technology Perspectives 2008 (IEA-OECD, 2008).
2. International Energy Agency (2008), World Energy Outlook 2008 (IEA-OECD, 2008).
3. European Union Strategic Energy Technology Plan (EU SET Plan, 2008).
INSTITUTIONS FOR DEVELOPING NEW CLIMATE SOLUTIONS
LEE LANE
American Enterprise Institute, Washington, DC, USA
W. DAVID MONTGOMERY
Charles River Associates, Vice President, Washington, DC, USA
ANNE E. SMITH
Charles River Associates, Vice President, Washington, DC, USA
ABSTRACT
Coping with climate change will require mankind to generate a vast array of new knowledge and to spread that knowledge in the form of physical capital to the farthest reaches of the globe. New knowledge will be a key to restraining greenhouse gas emissions at socially acceptable costs. It is essential for assessing proposals to engineer climate change and is also needed for effective adaptation to unavoidable changes in climate. This triple challenge will require governments, and other actors, to consider how best to organize a far-flung search for ways to find new knowledge and to apply old knowledge in new ways. The apposite economics literature suggests that governments can and should create private sector incentives for some kinds of R&D and for technology transfer, but it also implies that governments must fund some research directly and that they need to help to supply inputs like trained personnel and research facilities. This paper discusses approaches for structuring these tasks and for allocating resources among them in the face of persistent uncertainty. It also briefly considers some of the implications that flow from the global nature of the problem.
NATURE AND SCOPE OF THE CHALLENGE
Climate change may cause extensive economic harm (Nordhaus, 2007). The extent, timing, nature, and incidence of the potential threats that it poses remain in doubt, but prudent action might well diminish the risks. If it is to do so, however, new technologies will be essential, and, as is
Lee Lane is a Resident Fellow at the American Enterprise Institute for Public Policy Research. He can be contacted at [email protected]. W. David Montgomery is a vice president of Charles River Associates; his e-mail is dmontgomery@crai.com. Anne E. Smith is a vice president of Charles River Associates; her e-mail is asmith@crai.com.
often the case, institutions deeply influence the pace and path along which technology changes (Nelson and Winter, 1982). ("Institutions" here refers to formal and informal rules, from constitutions and laws to mores, customs, operating procedures, and conventions, that shape human behavior; North, 1990.) Climate policy must, then, somehow put in place institutions that will raise the odds of fashioning the needed technologies. This paper asks: what institutions might best serve that purpose? It begins, however, by describing four factors that, taken together, largely define the challenge at hand. These factors include: a) the various options for responding to climate change, b) the extent of the required changes in the global energy system, c) the kind of innovations that will be needed, and d) the basic features of the economics of innovation.
Possible Climate Solutions
Broadly speaking, three ways exist for diminishing the expected risks of climate change. First, it is possible to lower the concentrations of greenhouse gases (GHGs) in the atmosphere. This goal can be achieved either by reducing emissions of carbon dioxide (CO2) and other greenhouse gases or by air capture (AC) of CO2 that is already in the atmosphere. Second, it appears to be possible, through climate engineering (CE), to prevent warming despite rising GHG levels. This end might be attained through a number of concepts that would slightly reduce the amount of sunlight that reaches the Earth's surface. Third, it is possible to lower the expected damages from climate change by adapting to it. That approach requires adjusting choices of location and technology in order to accommodate the effects of climate change. With current technology, none of these three approaches seems adequate to hold the net costs of climate change to low levels. Further, none, applied in isolation, seems to address all aspects of the challenge.
GHG control proposals have dominated the debate on how to respond to climate change. Controls will, in fact, certainly be part of any effective policy response. A policy limited to GHG controls would be, however, deeply flawed. Attempts to impose more than modest ceilings on GHG emissions encounter costs that appear to exceed the avoided damages (Kelly and Kolstad, 1999). As a result, if GHG curbs are the main recourse, the lowest cost strategy will involve accepting substantial harm from climate change (Nordhaus, 2007). Also, GHG controls are slow to take effect. Since emission limits will require replacing much of the world's stock of capital, controls can bring down emissions only slowly. Climate change is driven by the concentration of GHGs in the atmosphere. Concentrations of GHGs do not respond to changes in emissions over short periods of time. They depend, instead, on cumulative emissions over longer time spans. Therefore, many different emission reduction time paths can lead to the same outcome in global mean temperatures. To halt warming, GHG discharge levels must shrink to a fraction of those that prevail today, and even after those low emissions are achieved, an actual fall in temperature may take a century or more (IPCC, 2007). Should rapid, harmful climate change appear imminent, GHG curbs might be too slow acting to be much help.
In theory, large-scale AC could speed up this process, but its costs are as high as, or even higher than, those of GHG controls. In fact, AC's costs far exceed its expected benefits, and they far exceed the direct costs of CE (Bickel and Lane, 2009). The use of AC is likely to depend on achieving drastic cost reductions.

CE technologies would slightly reduce the amount of sunlight striking the Earth's surface. This approach may provide a response that is both faster and far less costly than either GHG curbs or AC. These concepts are, though, as yet untested. Workable systems may require several years of development effort (Robock et al. 2009). As a matter of technology, CE seems a less daunting challenge than that posed by the quest for low-cost, high-volume, non-fossil energy sources. More importantly, the climate system is so poorly understood that deploying a CE system would carry an unknown risk of triggering potentially costly unintended side effects (Smith, 2009).

Adaptation can do much to limit damages from climate change, and it is likely, for the coming century, to dominate the response to climate change. That having been said, even optimal adaptation cannot avoid significant damages (de Bruin et al. 2007). The normal operation of market forces is likely to prompt much action to adapt to climate change. Many of the needed changes may relate more to the wider diffusion of techniques that are already in demand somewhere than to the creation of wholly new ones. It is likely, therefore, that spurring the innovations needed for adaptation may prove to be less problematic than will be the case with the other two just-mentioned strategies.

The Scale of the Challenge

Halting increases in global average temperature through GHG controls demands that, at some time in the future, annual emissions of GHGs from human sources must not exceed the amount removed by natural processes. This goal of zero net emissions implies that global emissions must shrink to roughly 20 percent of business-as-usual projections by mid- to late-century to achieve stabilization of GHG concentrations at 550 ppm CO2, and lower still if a more ambitious target is chosen (Clarke et al. 2007). For example, Figure 1 shows the results from three models that analyzed stabilization scenarios for the U.S. Climate Change Science Program. All three models found that, for concentrations to stabilize at 550 ppm or less, emissions would have to be 80 percent below projected levels in 2100, a level at which each year's emissions would no longer exceed the amount of CO2 naturally removed from the atmosphere. The speed with which this emissions rate is achieved will determine the GHG concentration at which the atmosphere stabilizes and, therefore, global mean temperature. Thereafter, net zero emissions must be maintained to prevent further increases in concentrations.

It is, though, also the case that many economic projections foresee global energy consumption doubling, or even tripling, by the end of this century. Without policies to change the choice of energy sources, this could lead to roughly similar increases in GHG emissions. Many existing analyses, including those of the IPCC and Stern, may understate the extent of the challenge.

"In assessing what it will take to stabilize atmospheric GHG concentrations (in cost and technology terms), models usually employ no-climate-policy emission scenarios as references or baselines. However, using emission scenarios as baselines for assessing climate stabilization
creates a huge understatement of the technological change needed (and, by extension, economic cost incurred) to stabilize climate (Pielke et al. 2008). The problem is that built into most emission scenarios are very large, primarily technologically driven, emission reductions that are assumed to occur automatically." (Galiana and Green, 2009)
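The accounting issue described in this passage can be illustrated with a small, purely hypothetical calculation; the growth and decarbonization rates below are illustrative assumptions of this sketch, not figures from the studies cited above. The point is only that when a baseline already assumes substantial autonomous decarbonization, much of the technological task is hidden inside the baseline itself.

    # Purely illustrative numbers: how much of the required technological change
    # is hidden inside a baseline that already assumes ongoing decarbonization.

    def emissions(gdp_growth, decarb_rate, years, e0=30.0):
        """Project annual CO2 emissions (GtCO2/yr) if GDP grows at gdp_growth per
        year and emissions per unit of GDP fall at decarb_rate per year."""
        return e0 * ((1 + gdp_growth) * (1 - decarb_rate)) ** years

    YEARS = 90  # roughly 2010 to 2100

    frozen = emissions(0.025, 0.000, YEARS)    # frozen technology: no decarbonization
    baseline = emissions(0.025, 0.013, YEARS)  # baseline with assumed "automatic" decarbonization
    target = 0.2 * baseline                    # ~80 percent below the baseline projection

    print(f"Frozen-technology emissions in 2100: {frozen:6.0f} GtCO2/yr")
    print(f"Baseline emissions in 2100:          {baseline:6.0f} GtCO2/yr")
    print(f"Stabilization target (~20% of BAU):  {target:6.0f} GtCO2/yr")
    # The cut usually attributed to policy is baseline - target, but the full
    # technological task relative to frozen technology is frozen - target.
    share_hidden = (frozen - baseline) / (frozen - target)
    print(f"Share of the total task already assumed in the baseline: {share_hidden:.0%}")

With these assumed rates, roughly three-quarters of the emission reductions needed relative to frozen technology occur inside the baseline rather than in the policy case, which is the sense in which such baselines can understate the technological challenge.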
Fig. 1.
This dispute revolves around two controversies. One relates to the historical trend. Those who doubt the validity of the IPCC and Stern assessments believe that their scenarios overstate the rate at which decarbonization has been taking place (Green and Lightfoot, 2002). They also contend that existing technologies are so costly and are
subject to so many constraints that incremental improvements to them will not be sufficient to reach the goals needed to stabilize GHG concentrations (Hoffert et al. 2002).

The Need for Fundamentally New Technologies

Breakthrough technologies, then, may be required to meet the goal of stabilizing GHG levels at realistic costs (Galiana and Green, 2009; Hoffert et al. 2002). Many non-IPCC scenarios that project stable atmospheric GHG levels rely on technologies that are not available today. A recent study by the World Business Council for Sustainable Development concluded that, by 2050, most of the decline in emissions that it projects from personal transportation would have to come from biofuels and fuel cell technologies that do not as yet exist (WBCSD, 2009). This result is presented in Figure 2.
Fig. 2.

With CE, the R&D task is somewhat different than that involved with GHG limits. It might not be too difficult to develop the hardware and techniques to deploy CE
- although that is not certain. However, before these tools can be put to use, science will need to better understand the links among the various elements of the global climate system. A recent preliminary research agenda described the task in the following terms:

"Components of any comprehensive research agenda for reducing these uncertainties can be divided into three progressive phases: (I) Non-Invasive Laboratory and Computational Research; (II) Field Experiments; and (III) Monitored Deployment. Each phase involves distinct and escalating risks (both technical and socio-political), while simultaneously providing data of greater value for reducing uncertainties. The core questions that need to be addressed can also be clustered into three streams of research: Engineering (intervention system development); Climate Science (modeling and experimentation to understand and anticipate impacts of the intervention); and Climate Monitoring (detecting and assessing the actual impacts, both anticipated and unanticipated). While a number of studies have suggested the engineering feasibility of specific SWCE proposals, the questions in the Climate Science and Climate Monitoring streams present far greater challenges due to the inherent complexity of temporal and spatial delays and feedbacks within the climate system." (Blackstock et al. 2009)

The report goes on to note that much of the research needed for developing CE will fit well into the larger research agenda for advancing climate science.

Technological progress also seems to offer means by which adaptation can lower the costs of climate change. The private sector and state and local governments have strong incentives to take many of the needed steps. Possible examples might include development of drought-resistant crops or public health technologies better able to control the spread of tropical diseases. Today, though, a lack of knowledge about how regional climates will change, and on what time scale, hampers adaptation (Repetto, 2006). Generating and diffusing this kind of scientific knowledge should be a top priority of climate policy. Success will depend on a strong, non-ideological climate science program.

The Process of Innovation

A climate-related technology policy must start with a sense of the nature of the process of innovation. Broadly, the innovation process consists of two features. One is a stochastic process which generates innovative concepts. The second is a set of institutions that determine which of the possible innovations will be pursued. Both aspects of the process impose constraints (Nelson and Winter, 1977).

The innovation process covers a continuum of activities. At the "research" end of this continuum lie activities aimed at discovering new insights about the basic structure of nature. At the other, "deployment," end lie activities that apply new (or old) knowledge to achieve some concrete goal, be it social or individual. Applied research, development, and demonstration are a few of the terms that have been used to characterize the intermediate links in this chain. Distinctions among the links tend to be blurred, and exactly how to define terms remains in dispute (Stokes, 1997).
One thing, though, seems no longer in much dispute. The actual process of innovation typically involves a two-way flow of tasks rather than one that flows only from basic research toward application (Rosenberg, 1994; Nelson and Winter, 1982). Typically, the effort to develop a discovery's practical application will lead to further questions that, themselves, require basic or fundamental research to resolve (Nelson and Winter, 1977). These later stages may encounter technical problems that throw the process back to the research stage. A pilot plant can, for instance, reveal a challenge that can only be overcome by going back to investigating some fundamental properties of matter. For example, pilot plants for production of alcohol from biomass revealed that the limiting factor on yields and costs is the proportion of lignin to cellulose in the feedstock. Lignin is a woody material that holds stalks up, and cellulose is the required input to fermentation. This observation led back to research on plant genomes to discover the genetic code that controls this proportion. This step led in turn to genetic engineering to create new variants and, finally, according to the National Renewable Energy Laboratory (NREL), to plant research to determine which variants will grow. The other way of looking at this example is that the questions raised in practice can themselves provide a motivation for a particular form of basic research, such as the recent interest in carbon nanotubes as a result of the focus on cost-effective batteries for electric vehicles. A highly proprietary process being developed in a company may then need basic research that can only be carried out in some other institution, under conditions of inappropriability and uncertainty. Nor is it clear that organizations that had done earlier research on a given concept will be well-suited to addressing the problems that may surface at the later phases of the process.

SOURCES OF UNDER-INVESTMENT IN R&D

Sources of Under-Investment in Climate-Related R&D

The rate of technological change varies dramatically both within economies and among them. At least two kinds of factors largely determine the rate of change in a given sector or activity. Differences in the difficulty of technically improving various activities cause some of the variance. Some of the disparity in performance, though, reflects differences in institutions (Nelson and Winter, 1977).

Similar influences are at work in GHG control technology. Many sectors contribute to GHG discharges. In some sectors - say, transportation - the task of curbing emissions presents a tougher challenge than in others, like power generation. Institutions also clearly play a major role in affecting the rate of change. The large gaps in the GHG-intensiveness of various economies stem in large measure from differences in institutions. Limiting the protection for intellectual property, having a weak rule of law, or under-pricing important inputs are all factors that can weaken incentives to innovate (Montgomery and Tuladhar, 2006).

Further, without positive action by government, no market incentive exists for limiting emissions. Few states, in fact, have created such incentives, and those that have done so have relied on policies that have compromised the effectiveness of their efforts. Some industry R&D on GHG control has, nonetheless, gone forward in anticipation of future controls. These efforts, though, have been modest in comparison with the scale of
the challenge. It is hard to imagine much more for-profit R&D taking place in this area without GHG control regimes that are both better structured and more comprehensive than those that now exist.

The Effect of R&D-Related Externalities

R&D is, itself, also subject to market failures. For example, it is often impossible to exclude others from the benefits of the discovery of new knowledge. The problem is inherent in the nature of knowledge. In creating information, R&D incurs what will become a fixed cost. Once that information exists, there is a near-zero marginal cost to transfer it. Imitators can often copy a product or process that is based on the discovery of new useful knowledge. Therefore, in competitive markets, anticipated future prices may fall short of levels needed to recoup an innovator's R&D costs. At a minimum, the cost and uncertainty of exclusion reduce the net returns and, therefore, weaken the profit motive for R&D (Arrow, 1962).

Moreover, the production function of R&D is often unknown, and sometimes it is unknowable. The more difficult the scientific problem that is being tackled, the less certain is ex ante success. Yet the problem's difficulty is, itself, unknown until a solution has been achieved (Arrow, 1962). The risk of failure will often be high. The impossibility of assigning meaningful probabilities to outcomes implies limits on opportunities for spreading or diversifying risk, and risk aversion may further dissuade for-profit R&D. Uncertainties may also degrade the efficiency of the capital market as well as that of the market for the sale or licensing of innovations. An innovator may find it difficult, without losing his exclusive control over new information, to credibly convey it to potential buyers or investors.

Network externalities are also a common feature of the R&D process. The outcome of one strand of R&D may turn out to be the key link in some other process (Edmonds and Stokes, 2003). Innovators, however, may wish to hide such connections where doing so may strengthen their hopes of capturing the full value of their innovation. Concealing results, though, diminishes the productivity of R&D activity as a whole. Failures, for example, may convey as much information as successes. That a specific approach does not work can be valuable information, and incentives to disseminate information about failures may be very weak.

These distortions are more important at some points of the innovation process than at others. As innovative activity moves from basic research to concrete application, its economic features change. The features of inappropriability and uncertainty are greatest at the research stage and diminish, although they do not typically disappear, as the process becomes one of applying knowledge in new ways to concrete goals. As a result, for-profit entities play a much more limited role at the research end of the continuum (Rosenberg, 1990). For-profit R&D does take place. Innovation may increase the value of some assets, and that increase in value may justify some for-profit R&D investment. Even so, the gains are unlikely to provide an incentive equal to the entire marginal social value of the R&D (Hirshleifer, 1971). In other cases, new knowledge may confer monopoly power on an innovator, either through first mover advantages or through intellectual property rules, and this monopoly power may create incentives to invest in R&D. However, the use of this monopoly power will, itself, diminish the social benefits of the
innovation. Perhaps most commonly, for-profit firms can conduct even basic research if it is necessary to meet some immediate need (Rosenberg, 1990).

Institutional Diversity and Transaction Costs

In the United States, a diverse mix of organizations funds R&D, and there is comparable diversity in the mix of those that conduct it. The U.S. innovation system includes governments, various private sector entities, and universities. These institutions perform a wide variety of R&D, as illustrated in Table 1. Among these various kinds of organizations, selection criteria for establishing R&D agendas are varied. In many instances, the profit motive is influential. Nonetheless, creative work is actually done by individuals, who may be subject to complex motives. Then too, research organizations are limited by their staff and budgets, and these may differ greatly even within a given sector. Further, a firm seeking to profit from technological advance may often find itself dealing with many non-profit organizations. These organizations may operate on selection criteria that differ importantly from those that prevail in the for-profit sector (Nelson, 2005). See Figure 3. At the very least, all research organizations differ in the staff and budget levels that constrain their choices (Nelson and Winter, 1977).
Table 1. Aggregate R&D Spending

                 Basic Research   Applied Research   Development       Total
    Industry           20%               76%               5%        US $223.4B
    Government         59%               33%              16%        US $94.2B
    Total           US $61.5B         US $74.7B        US $204.3B    US $340.4B
Source: National Science Foundation, Division of Science Resources Statistics, National Patterns of R&D Resources (annual series).
The iterative nature of the innovation process implies that successful innovation is likely to entail more transactions than would have been predicted based on the older, purely linear, model of the process. It is also likely to involve transactions among organizations operating on diverse selection criteria (Nelson and Winter, 1982). Both of these factors would seem, in principle, likely to raise transaction costs. Higher transaction costs increase the risk that the innovation might fail or that it might take longer than expected. The difficulties may be especially acute for government-funded R&D intended for private sector adoption. The somewhat checkered record of government-funded energy R&D supports this view.
Fig. 3.
ENCOURAGING PRIVATE-SECTOR R&D ON GHG CONTROL

The often ambiguous record of governments in setting R&D priorities has led some to hope that, with regard to climate change, creating a market incentive for curbing GHG emissions could bring forth the desired technologies. Such a market incentive is, indeed, a vital part of any path to the desired solutions. Without such a policy, the prices of the activities that contribute to GHG emissions do not reflect the harmful effects of climate change; therefore, the market will not reward advances that lower emissions. This "external cost" market failure stands over and above the more general problems with R&D just discussed (Edmonds and Stokes, 2003). For that reason, correcting the former problem will not eliminate the latter, and hopes that climate change can be tackled without a government-funded R&D effort are likely to prove vain.

Pricing GHG Emissions

There is little question that a clear, credible, consistent, and stable policy that puts a price on CO2 emissions will lead to cost-effective technology deployment and provide a demand-driven inducement to innovation. Credibility is greatest with policies addressing the climate externality that are economy-wide, permanent, and based on long-term goals, but with flexibility and cost containment, so that the policy can be expected to survive the inevitable unexpected shocks. The decision for any large-scale investment to deploy a new technology is certainly complex, depending on many factors not easily reduced to a simple rate-of-return calculation.

Price-based GHG control systems are far more cost-effective than command-and-control approaches. The costs of curbing GHG emissions vary widely from sector to
sector. Many other differences also exist. Regulators lack the detailed knowledge to choose cost-effective technologies. Technological change, and the uncertainties that it entails, compounds the problem. A broad, uniform price on emissions circumvents these problems. Such a system decentralizes technical decisions. To a degree, it also creates incentives for private sector R&D (Stavins, 2006). The incentives it creates, though, rest on beliefs about future government policy, and therein lies the rub.

The Problem of Government Credibility

While an economy-wide uniform price is more cost-effective than piecemeal controls, it cannot escape two other serious drawbacks. These drawbacks relate to the problem that plagues much government action: governments find it difficult to make credible long-term commitments. This problem is one of the most common sources of government policy failure, not just in climate policy, but elsewhere as well (Glazer and Rothenberg, 2001).

Time inconsistency

The lengthy time scales involved in both climate change and technology development imply that expectations of future policies motivate current investments. Expected future prices for GHG emissions are especially important. The credibility of a government's commitment to future policies is vital as an incentive to invest in R&D. Uncertainties about future policies will motivate delays in investment decisions if additional, timely information is expected to become available (Blyth et al. 2007). Policy uncertainty is not necessarily fatal. However, any time inconsistencies that bias government ex post decisions against high carbon prices will weaken private sector incentives to invest in the relevant R&D. Time inconsistency arises because the carbon price required to provide an adequate return on the R&D investment is higher than the price required to motivate adoption of an innovation after it is discovered. Thus, what is optimal for a government to announce as a carbon price in advance of a discovery is greater than what is optimal for a government to announce post-discovery. This policy failure would persist even if current policy projects a high price on future carbon emissions (Montgomery and Smith, 2007). Indeed, existing policy mandates that imply very high future carbon prices may actually fuel doubts about the commitment of future governments to those mandates.

Tension between domestic and foreign policy objectives

The second problem is that First World governments face a dilemma about the strength of their commitment to domestic GHG reduction. On the one hand, to persuade domestic investors to do R&D on new means of GHG control, such governments must make their long-term commitment to emission curbs appear to be infrangible. On the other hand, to motivate developing countries to adopt controls, First World governments must be able to threaten credibly to abandon, or at least to relax, domestic controls. The predicament of the First World's governments is clear for all to see. To motivate private sector R&D, they must appear to be locked in to controls. To restrain Third World temptations to free ride, they must convey the opposite image. The most likely response to the quandary, a muddle in the middle, risks a response that is convincing on neither score.
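The time-inconsistency argument sketched above can be made concrete with a small numerical illustration. All of the values below are hypothetical assumptions of this sketch; the point is only that an innovator who expects the announced price to be renegotiated downward after discovery will not invest.

    # A minimal two-stage sketch of the time-inconsistency problem described above.
    # All values are hypothetical; only the structure of the incentives matters.

    RD_COST = 120.0            # up-front R&D cost borne by the innovator
    REVENUE_PER_PRICE = 2.0    # innovator revenue per $/tCO2 of the prevailing carbon price

    def innovator_invests(expected_carbon_price):
        """The firm invests only if expected revenue covers the R&D cost."""
        return REVENUE_PER_PRICE * expected_carbon_price >= RD_COST

    price_announced_ex_ante = 80.0  # price needed to call forth the R&D investment
    price_optimal_ex_post = 40.0    # once the technology exists, a lower price suffices
                                    # to get it adopted, so a future government is
                                    # tempted to settle there

    print("Invest if the announcement is believed:  ",
          innovator_invests(price_announced_ex_ante))  # True
    print("Invest if renegotiation is anticipated:  ",
          innovator_invests(price_optimal_ex_post))    # False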
Private sector innovation: a necessary, but not sufficient, response

The time inconsistency problem and the conflict between domestic and foreign climate policy goals are likely to limit the level of First World GHG emission prices. In any case, a price on emissions would do nothing to correct the private sector's tendency to under-invest in climate-related R&D. Emitters would apply GHG controls, but they would still invest less than optimal amounts in developing better controls.
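A stylized calculation, again with purely hypothetical numbers, shows why a carbon price alone leaves this under-investment in place: once imitation competes the price of a new abatement technique down to its marginal cost, the innovator's fixed R&D outlay is never recovered, even though the innovation is socially worthwhile.

    # Stylized sketch (hypothetical numbers) of the appropriability problem:
    # a carbon price creates demand for abatement, but imitation competes the
    # price of the new technique down to marginal cost, stranding the fixed R&D cost.

    rd_fixed_cost = 500.0    # one-time cost of developing the cheaper abatement method
    old_cost_per_t = 60.0    # $/tCO2 of abatement with existing technology
    new_cost_per_t = 35.0    # $/tCO2 of abatement with the new technology
    tonnes_abated = 100.0    # abatement demanded at the prevailing carbon price

    social_gain = (old_cost_per_t - new_cost_per_t) * tonnes_abated  # 2500, well above the R&D cost
    price_with_imitation = new_cost_per_t          # competition drives price to marginal cost
    innovator_quasi_rents = (price_with_imitation - new_cost_per_t) * tonnes_abated  # 0

    print(f"Social value of the innovation:          {social_gain:.0f}")
    print(f"Innovator's quasi-rents under imitation: {innovator_quasi_rents:.0f}")
    print(f"Private incentive to invest? {innovator_quasi_rents >= rd_fixed_cost}")  # False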
PROPOSALS, IMPEDIMENTS, AND BACK-UP STRATEGIES

Policy Options for Climate-Related Innovation

Many economists who have led the development of the discipline's view on the innovation process met late last year at Stanford. The conference sought to codify thinking about those approaches best able to guide and quicken the pace of progress toward climate solutions. The conference produced a consensus statement. Some of its main points adumbrated those made earlier in this analysis. Thus, it stressed that policies to put a price on GHG emissions would be essential; however, it also went on to note that more would be required. An adequate response must also include significant levels of direct and indirect support for basic and applied R&D (Arrow et al. 2008).

To be effective, this R&D support would need to embody a stable, long-term commitment. The statement noted the long-run nature of the problem. As such, it will not be solved by transitory programs aimed at exploiting "... short-run improvements in energy efficiency or of low-carbon energy." The statement stressed the need for adequately funding basic research and promoting open access to information. Further, governments must "... build the fundamental capacity to perform research in the future." In this regard the statement envisions steps to support the training of scientists and engineers, boost laboratory capabilities, and establish programs to broadly disseminate research findings (Arrow et al. 2008).

To speed the pace of progress, government R&D programs should take more risks and tolerate more failures. Parallel project funding and management strategies are one means of doing so. Shifting the mix of R&D investment towards more "exploratory" R&D is another (Arrow et al. 2008).

Finally, the statement cautioned about policies that seemed likely not to work well. Standards and subsidies, it observed, were "... unlikely to be cost-effective tools for eliciting the major reductions of greenhouse gas emissions that now appear to be called for." The statement took special pains to warn against setting unrealistic timelines as a means of forcing progress: "Since the process of technology innovation and diffusion can require an extended period of time, performance standards with shorter compliance periods cannot be expected to stimulate major breakthroughs." It went on to note that this drawback "... is especially relevant in dealing with a multi-decadal issue such as climate change, where the challenge is to evolve standards with time in light of new knowledge and experience" (Arrow et al. 2008).

Barriers to Reform

The technology pork barrel

While government energy R&D has scored notable successes, it also exhibits many examples of waste and failure. The Stanford statement implicitly recognizes some
of the problems. Its admonitions in favor of more daring and more long-term R&D that is more focused on basic research implicitly convey a litany of the concerns. The statement recommends: "The best institutional protections for minimizing these distortions [i.e., subsidies to favored firms, industries, and other organized interests] are multi-year appropriations, agency independence in making grants, use of peer review with clear criteria for project selection, and payments based on progress and outputs rather than cost recovery" (Arrow et al. 2008).

Whether this recommendation is, in fact, practicable is open to doubt. In the U.S., government R&D agencies exhibit an unwillingness to propose a sufficiently wide range of risky, alternative approaches to achieve real breakthroughs. High-risk approaches with high potential may not come to their attention, since in the early stage of R&D there are significant agency problems in communicating the nature and potential of an approach (Cohen and Noll, 1991). Career advancement is also more likely to come from successful projects than from accumulation of useful information about approaches that do not work. This limits the set of alternatives considered for funding and leads to far too little risk-taking in government R&D and too narrow a view of possible avenues of approach.

Further, executive agencies and congressional incumbents have incentives that rarely cause them to wish to back risky R&D. Basic research, for them, is often less appealing than are large-scale demonstration projects, and legislators are apt to hurry concepts into the demonstration phase in order to reap the pork barrel rewards of spending public money on large projects that benefit their constituents. Once such projects have been spawned, political office holders may seek to continue to fund them long after they have ceased to yield public benefits (Cohen and Noll, 1991). The spending pattern that results is exactly the opposite of the stable, long-term research program required to stimulate breakthrough research and introduce game-changing technologies.

The incentives that produce these perverse outcomes are deeply rooted in the institutions of government. The electoral process itself raises the political discount rate, especially if terms are short relative to the time lags inherent in R&D. Supporting R&D projects that yield large, but diffuse, net benefits, and those only after a long time, is a poor re-election strategy. However, when an R&D project reaches a large enough scale, it begins to have distributive significance. At that stage, the project may become politically relevant to legislators interested in re-election (Cohen and Noll, 1991). Thus, the Stanford statement's proposed institutional changes collide head on with the political interests of executive branch agencies and, above all, with those of Congress. To change those incentives, however, would require an act of Congress and what would doubtless be disruptive changes in the executive branch. History is not entirely without examples of self-denying decrees of the required kind, but they are uncommon.

Problems with technology diffusion

In the words of the Stanford statement, "Climate change cannot be halted without technologies that are applicable to developing countries. Developing these technologies and facilitating their adoption will likely require engagement of R&D networks in developing countries" (Arrow et al. 2008).
The design of R&D policy must also take into account the major role of developing countries: the opportunity to bring down costs and make action more
attractive; their different institutional and technical capacities; R&D networks linking practice and research; and international networks to combine resources, create capabilities, exchange information, and provide practice-led R&D.

The diffusion of macro-inventions can be especially time-consuming, with the pace likely shaped by institutions. Economic history indicates that institutional change was often a necessary prelude to technological change (North, 1990). This generalization will almost certainly apply to GHG-reducing innovations. In many instances, adoption of such technologies will depend on disincentives for GHG discharges, created and enforced by government. Yet some governments may prefer to eschew GHG reduction strategies for sound political and economic reasons (Schelling, 2002). There are often very good economic reasons why old technologies remain in use for extraordinary lengths of time. For example, a long process of adaptation to local conditions may make seemingly primitive technologies formidable competitors (Edgerton, 2007).

Because climate policy is, by its nature, a global concern, climate-related technology policy must also confront the international dimension. Cost-effective GHG reductions depend crucially on reducing emissions from all major national sources. Any important country's failure to participate in a control regime will cause a rapid increase in the costs of any given abatement goal (Nordhaus, 2007). With China already the globe's biggest emitter and India the sixth largest, these countries must participate or a GHG control regime will be doomed to fail.

Currently these countries' economies are much more GHG-intensive than is that of the U.S., let alone those of Europe or Japan. Although new investment in China is more GHG-efficient than its installed capital plant, even the newer capital stock still trails that of the U.S. in this regard. Substantial gains in GHG control could, therefore, occur if China and India were merely to adopt U.S. technology (Montgomery and Tuladhar, 2006). Since even technologies that are currently economical are not in use, the demand for improved low-carbon technologies will depend on institutional reform. To move beyond this goal, governments would have to adopt pricing or other policies to internalize the climate externality. However, the position taken by these governments in current climate negotiations suggests that they are disinclined to take this step. Absent such policies, no incentive exists to pull GHG-reducing technology into these markets.

The successful transfer of technology, however, presents challenges. Currently, many institutional distortions in the Chinese and Indian economies discourage investment in more energy-efficient technologies. Such distortions include poor protection for intellectual property, energy price controls, and a failure to internalize environmental externalities (Montgomery and Tuladhar, 2006). Further, at least in China, a whole suite of policies effectively subsidizes the expansion of energy-intensive heavy industries (Rosen and Houser, 2007). By inference, the successful diffusion of less GHG-intensive processes and products is likely to depend in part on institutional change within China and India (Montgomery and Tuladhar, 2006).
Climate Engineering as a Back-up Strategy

The Stanford statement noted that research on climate engineering (called "geoengineering" in the statement) as a measure to moderate temperature increases and climate impacts should be included in a complete research portfolio (Arrow et al. 2008).
A recent preliminary benefit-cost assessment of climate engineering found that the eventual deployment of the most promising CE technologies might yield very large net benefits. Indeed, depending on when CE might be deployed, with an optimal GHG control regime, the most widely discussed CE concept might generate net benefits ranging from $4 trillion to $13 trillion (in constant 2005 $) (Bickel and Lane, 2009). This estimate must be qualified by the large uncertainties about possible unintended consequences that continue to surround CE. While these would have to be very large indeed to cancel out benefits of this scale, they remain a major source of concern and pose a serious barrier to the prospect of ever deploying solar radiation management (SRM) (Smith, 2009).

For example, some climate models suggest that CE might disrupt regional rainfall patterns, although other models find little or no change. At this point, the model results remain inconclusive on this point (Zickfeld et al. 2005). Some of the proposed SRM systems might also slow the recovery of the ozone layer. (Other SRM systems are immune to this risk.) Some scientists regard this risk as small (Wigley, 2006). Whatever the scale of this risk, it will clearly diminish with time as the volume of ozone-depleting chemicals in the atmosphere continues to decline. Many other risks or problems might emerge, and some of these might not yet have even been identified. Extensive study, therefore, would be needed before CE systems are likely to lay these fears to rest and win public acceptance (Smith, 2009). An R&D effort would be needed, in any case, to progress CE technology from the stage of being a promising concept to that of being a practical system.

Today, then, the economic stakes associated with CE are very large, and the uncertainties are pervasive. Under these conditions, a research program designed to narrow the range of uncertainty would have excellent prospects of producing knowledge that was worth more than the cost of the R&D (Smith, 2009).

R&D on climate engineering may, of course, prove ineffective. Even a vigorous effort could fail to discover unintended consequences (Smith, 2009). Moreover, some of the opposition to researching this concept clearly rests on "ethical" and ideological grounds (Tetlock and Oppenheimer, 2008). Resistance based on these factors may be impervious to research results. Also, CE may be too efficient for its own political good. Its costs may be too low to represent a very appealing target for pork barrel politics. Nonetheless, the difficulties that are likely to plague GHG reduction strategies argue strongly in favor of R&D on fallback approaches (Lane and Montgomery, 2008).

CONCLUSION

Climate change appears to pose a threat of serious harm. At least three responses may offer means of reducing this threat. These responses are GHG controls, CE, and adaptation. Technological progress can, in principle, enhance the cost-effectiveness of all three of these strategies. Such progress, though, will be neither easy nor automatic. The technical and economic challenges are daunting. Without government action, markets will not value technologies that curb GHG emissions. Even with such intervention, markets will place less than optimal stress on fostering new technologies as a path to lower emissions.
Some form of policy support by government for R&D is therefore likely to be essential. The economics literature provides at least some counsel about how such efforts might be structured. This counsel stresses the importance of a diversified portfolio of basic and applied research, and a willingness to incur reasonable risk of failure. Stable, long-term research funding is important. Deployment subsidies and technology mandates are not likely to be cost-effective tools.

Implementing these policies is likely to be challenging. Many hard-wired features of the political process militate against the use of these approaches. Pork barrel politics and the diversity of national interests are two of the most important of these features. These structural problems add to the difficulty of what was already a great scientific and engineering challenge. While technological advance can effect massive changes in society, it may not do so at a rate sufficient to avoid some of the more serious risks of climate change. Therefore, the alternative strategies of climate engineering and adaptation deserve attention. R&D can enhance the effectiveness of both of these approaches, and a balanced climate-related R&D portfolio should incorporate both of them.

REFERENCES
1. Amabile, T.M., B.A. Hennessey, and B.S. Grossman, (1986), "Social Influences on Creativity: The Effects of Contracted-For-Reward," Journal of Personality and Social Psychology 50, 14-23.
2. Arnold, R.D., (1990), The Logic of Congressional Action, Yale University Press, New Haven.
3. Arrow, K.J., (1962), Economic Welfare and the Allocation of Resources for Invention, The Rate and Direction of Inventive Activity: Economic and Social Factors, R. Nelson (ed), Princeton University Press, Princeton.
4. Arrow, K.J., L.R. Cohen, P.A. David, R.W. Hahn, C.D. Kolstad, L. Lane, W.D. Montgomery, R.R. Nelson, R. Noll, and A.E. Smith, (2008), "A Statement on the Appropriate Role for Research and Development in Climate Policy," The Economists' Voice 6(1), 6.
5. Barfield, C. and J.E. Calfee, (2007), Biotechnology and the Patent System: Balancing Innovation and Property Rights, The AEI Press, Washington, DC.
6. Bickel, J.E. and L. Lane, (2009), An Analysis of Climate Engineering as a Response to Climate Change, prepared for the Copenhagen Consensus Center.
7. Blackstock, J.J., D.S. Battisti, K. Caldeira, D.M. Eardley, J.I. Katz, D.W. Keith, A.A.N. Patrinos, D.P. Schrag, R.H. Socolow, and S.E. Koonin, (2009), Climate Engineering Responses to Climate Emergencies, prepared for Novim.
8. Blyth, W., R. Bradley, D. Bunn, C. Clarke, T. Wilson, and M. Yang, (2007), Investment Risks under Uncertain Climate Change Policy, Energy Policy 35(11), 5766-5773.
9. Boskin, M.J. and L.J. Lau, (1996), Contributions of R&D to Economic Growth, Technology, R&D, and the Economy, B. Smith and C. Barfield (eds), The Brookings Institution Press, Washington, DC.
10. Brooks, H., (1994), "The Relationship between Science and Technology," Research Policy 23, 477-486.
11. Caldeira, K., D. Day, W. Fulkerson, M. Hoffert, and L. Lane, (2005), Climate Change Technology Exploratory Research (CCTER), Climate Policy Center.
12. Clarke, L., J. Edmonds, H.D. Jacoby, H.M. Pitcher, J.M. Reilly, and R.G. Richels, (2007), Scenarios of Greenhouse Gas Emissions and Atmospheric Concentrations: Synthesis and Assessment Product 2.1A, U.S. Climate Change Science Program and the Subcommittee on Global Change Research.
13. Cohen, L.R. and R.G. Noll (with J.S. Banks, S.A. Edelman, and W.M. Pegram), (1991), The Technology Pork Barrel, The Brookings Institution Press, Washington, DC.
14. Cohen, L.R. and R.G. Noll, (1996), "The Future of the National Laboratories," Proceedings of the National Academy of Sciences 93(23), 12678-12685.
15. Cohen, L.R. and R.G. Noll, (1998), Challenges to Research Universities, The Brookings Institution Press, Washington, D.C.
16. Dasgupta, P. and P.A. David, (1994), "Toward a New Economics of Science," Research Policy 23, 487-521.
17. David, P.A., (2008), "The Historical Origins of 'Open Science': An Essay on Patronage, Reputation and Common Agency Contracting in the Scientific Revolution," Capitalism and Society 3(2), 5.
18. de Bruin, K.C., R.B. Dellink, and R.S.J. Tol, (2007), AD-DICE: An Implementation of Adaptation in the DICE Model, FEEM Working Paper 51.2007.
19. Deutch, J., (2005), What Should the Government do to Encourage Technical Change in the Energy Sector?, Report No. 120, Massachusetts Institute of Technology, Center for Energy and Environmental Policy Research.
20. Edgerton, D., (2007), The Shock of the Old: Technology and Global History since 1900, Oxford University Press, New York.
21. Edmonds, J. and G. Stokes, (2003), Launching a Technology Revolution, Climate Policy for the 21st Century: Meeting the Long-Term Challenge of Global Warming, D. Michel (ed), Center for Transatlantic Relations.
22. Galiana, I. and C. Green, (2009), An Analysis of a Technology-led Climate Policy as a Response to Climate Change, prepared for the Copenhagen Consensus Center.
23. Glazer, A. and L.S. Rothenberg, (2001), Why Government Succeeds and Why It Fails, Harvard University Press, Cambridge.
24. Green, C. and H.D. Lightfoot, (2002), "Making Climate Stabilization Easier Than It Will Be: The Report of WGIII," C2GCR Quarterly 2002-1, 6-13.
25. Griliches, Z., (1994), "Productivity, R&D and the Data Constraint," American Economic Review 84(1), 1-23.
26. Hahn, R.W., (2008), Greenhouse Gas Auctions and Taxes: Some Practical Considerations, AEI Center for Regulatory and Market Studies, Working Paper 08-12.
27. Hennessey, B.A. and T.M. Amabile, (1998), "Reward, Intrinsic Motivation, and Creativity," American Psychologist 53, 674-675.
28. Hirshleifer, J., (1971), "The Private and Social Value of Information and the Reward to Inventive Activity," American Economic Review 61, 570-571.
29. Hoffert, M.I., K. Caldeira, G. Benford, D.R. Criswell, C. Green, H. Herzog, A.K. Jain, H.S. Kheshgi, K.S. Lackner, J.S. Lewis, H.D. Lightfoot, W. Manheimer, J.C. Mankins, M.E. Mauel, L.J. Perkins, M.E. Schlesinger, T. Volk, and T.M.L. Wigley, (2002), "Advanced Technology Paths to Global Climate Stability: Energy for a Greenhouse Planet," Science 298, 981-987.
30. Hoffert, M.I., K. Caldeira, and G. Benford, (2003), Fourteen Grand Challenges: What Engineers Can Do to Prove We Can Survive the 21st Century, Mechanical Engineering Power.
31. Hutzler, M.J., (2001), Statement of Mary J. Hutzler, Acting Administrator of EIA, before the Committee on Environment and Public Works, U.S. Senate Hearing on S. 556, 'The Clean Power Act of 2001'.
32. Intergovernmental Panel on Climate Change, (2007), Climate Change 2007: Mitigation of Climate Change, B. Metz, O.R. Davidson, P.R. Bosch, R. Dave, and L.A. Meyer (eds), Cambridge University Press, New York.
33. International Monetary Fund, (2008), Climate and the Global Economy, World Economic Outlook: April 2008 Edition.
34. Jacoby, H.D., (1999), The Uses and Misuses of Technology Development as a Component of Climate Policy, Massachusetts Institute of Technology, Joint Program on the Science and Policy of Global Change, prepared for "Climate Change Policy: Practical Strategies to Promote Economic Growth and Environmental Quality", sponsored by the Center for Policy Research of the American Council for Capital Formation.
35. Kelly, D.L. and C.D. Kolstad, (1999), Integrated Assessment Models for Climate Change Control, International Yearbook of Environmental and Resource Economics 1999/2000: A Survey of Current Issues, H. Folmer and T. Tietenberg (eds), Edward Elgar, Cheltenham.
36. Klein, B.H., (1962), The Decision-Making Problem in Development, The Rate and Direction of Inventive Activity: Economic and Social Factors, R. Nelson (ed), Princeton University Press, Princeton.
37. Lane, L., (2006), Strategic Options for Bush Administration Climate Policy, The AEI Press, Washington, DC.
38. Lane, L., K. Caldeira, R. Chatfield, and S. Langhoff, (2007), Workshop Report on Managing Solar Radiation, NASA Ames Research Center and Carnegie Institution of Washington Department of Global Ecology.
39. Lane, L. and W.D. Montgomery, (2008), Political Institutions and Greenhouse Gas Controls, AEI Center for Regulatory and Market Studies, Related Publication 08-09.
40. Leggett, J.A., (2007), Climate Change: Science and Policy Implications, Congressional Research Service.
41. Mansfield, E., (1977), The Production and Application of New Industrial Technology, W.W. Norton & Co, New York.
42. Mansfield, E., (1985), "How Rapidly Does New Industrial Technology Leak Out?," Journal of Industrial Economics 34, 863-873.
43. Mansfield, E., (1996), Contributions of New Technology to the Economy, Technology, R&D, and the Economy, B. Smith and C. Barfield (eds), The Brookings Institution Press, Washington, DC.
MODERATING CLIMATE CHANGE BY LIMITING EMISSIONS OF BOTH SHORT- AND LONG-LIVED GREENHOUSE GASES
MICHAEL C. MACCRACKEN Climate Institute, Chief Scientist for Climate Change Programs Washington, DC, USA
ABSTRACT
As emissions continue to increase, both warming and the commitment to future warming are increasing at a rate of ~0.2°C per decade, with projections that the rate of warming will further increase if emissions controls are not put in place. Such warming and the associated changes are likely to cause severe impacts to key societal and environmental support systems, especially if the changes are abrupt or accelerate from present tendencies. Present estimates are that limiting the increase in global average surface temperature to no more than 2-2.5°C above its 1750 value will be required to avoid the most catastrophic, although certainly not all, consequences of climate change. Limiting peak warming and initiating a return to temperatures below present levels will require sharply reducing global greenhouse gas (GHG) emissions by 2050 and to near zero by 2100. With fossil fuels providing over 80% of global energy, and increasing use apparently inevitable in many developing nations in order to raise the standard-of-living, reducing emissions sufficiently presents a very significant challenge, with neither developed nor developing nations yet ready to commit to an agreement without commensurate action by all nations. Analyses of the warming influences of the various greenhouse gases suggest that the extent of action needed is for: (1) developed nations to rapidly reduce their emissions of all greenhouse gases by order of 80% by 2050, and even further by later in the century; and (2) developing nations, in a first phase, to improve their carbon efficiency, reverse deforestation, and sharply limit their non-CO2 GHG emissions (i.e., emissions of methane, black carbon, and pollutants contributing to tropospheric ozone), and then, as their per capita GDP rises to levels near those of developed nations, to join in initiating sharp reductions in their CO2 emissions. Because aggressive, near-term reductions in non-CO2 emissions by developing nations would both improve the environmental well-being of their citizens and offset the warming influence of their ongoing CO2 emissions, this strategy would allow for their ongoing development while cost-effective CO2-free energy technologies are developed. Such a coordinated approach would demonstrate the necessary commitment by all nations while recognizing the equity imbalance created by very different per capita emissions. To further limit global warming, if that proves necessary, and to counteract the warming influence of declining emissions of sulfur dioxide, geoengineering likely also merits consideration to reduce the seriousness of the most critical impacts.
INTRODUCTION

As projected first by Arrhenius (1896) and reaffirmed in the report of an expert panel of the President's Science Advisory Council (PSAC, 1965) more than 45 years ago, increasing emissions of carbon dioxide (CO2) resulting from the combustion of coal, petroleum, and natural gas, along with changes in land cover, are increasing the atmospheric CO2 concentration, changing the climate and impacting the environment and society.¹ Increases in the atmospheric concentrations of all gases with at least three atoms [so including water vapor (H2O), CO2, and ozone (O3)] are very important because these gases, while essentially invisible to solar radiation (i.e., they absorb only small amounts of solar radiation), absorb and then re-emit infrared (i.e., heat) radiation emitted by the Earth's surface and the various layers of the atmosphere. Much of the emitted radiation is directed back toward the Earth's surface, creating a very important and well-established warming influence often referred to as the greenhouse effect. The greater the atmospheric concentrations of these greenhouse gases, the greater their warming influence.

Analysis of the CO2 concentration in the air bubbles trapped in ice cores now provides a record back in time for roughly 800,000 years (EPICA, 2004). Over this period, and somewhat further back in time, changes in the Earth's orbital elements, along with other feedback mechanisms, forced the climate to vary from extensive continental glaciation to milder interglacials (Berger, 2001). An important positive feedback resulted from changes in the CO2 concentration, which exerted a smaller warming influence during the cold periods, when the concentration was pulled down to about 200 ppmv (parts per million by volume) as cold ocean waters took up more CO2, than during the warm periods, when the CO2 concentration rose to about 300 ppmv because of CO2 being driven out of the ocean. During most of the Holocene, which is the most recent interglacial period and extends back roughly 8-10,000 years, ice core records indicate that the CO2 concentration was roughly 280 ppmv. Since the beginning of the Industrial Revolution in the mid-18th century, the CO2 concentration has been rising, reaching about 300 ppmv in 1900, 310 ppmv in 1950, 365 ppmv in 2000, and nearly 390 ppmv in 2009 (IPCC, 2007a and updates by NOAA). This acceleration in the rate of increase matches very closely the rate of rise of CO2 emissions.

Figure 1, prepared by Raupach et al. (2007) and updated by the Global Carbon Project (see http://www.globalcarbonproject.org/), shows the recent time history of CO2 emissions; some of the causes are also discussed in Canadell et al. (2007). The black curve shows emissions prior to 2000, indicating an annual rate of growth of about 0.9%, whereas the more recent compilations of emissions suggest that the growth rate has increased to over 3% per year (2009 emissions will likely be down somewhat) as the generation of electricity by coal combustion has been increased to meet the energy
¹ The science of climate change is evaluated and synthesized in the periodic reviews of the Intergovernmental Panel on Climate Change (IPCC); for further information see IPCC (2007a, 2007b, 2007c) and, for a more general overview that also covers the fundamentals of climate change science, see MacCracken (2008).
demands of China, India, and other nations seeking to economically develop and pull their citizens out of poverty.

The smooth curves shown in Figure 1 represent projections of emissions into the future prepared a decade ago for use as scenarios for model simulations of future climate change (IPCC, 2000). To span what was thought to be possible, these scenarios ranged from A1T, B1, and B2, which, even in the absence of international emissions controls, projected a change over to energy technologies with reduced reliance on fossil fuels, to scenarios A2, A1FI, and A1B, which, in the absence of controls, projected widespread reliance on coal as the world developed and petroleum supplies ran down. Surprisingly, at least for the expert community (e.g., Raupach et al. 2007), the recent compilations of global emissions indicate that the rate of increase in emissions has been faster than was envisioned possible by leading energy and economic experts only a decade ago (IPCC, 2000), causing the atmospheric CO2 concentration to rise more rapidly than had been projected.
Fig. 1: Time history of actual and projected emissions of CO2 (as GtC/yr). The solid line projections indicate the emissions to be expected, for various conditions, in a world without limitations on emissions, and the dashed line projections indicate the allowable emissions if the world established an emissions path intended to stabilize the atmospheric CO2 concentration at either 450 or 650 ppmv. The figure is from Raupach et al. (2007), updated by the Global Carbon Project (see http://www.globalcarbonproject.org/).
In 1992, at the Earth Summit in Rio de Janeiro, the nations of the world negotiated the United Nations Framework Convention on Climate Change (UNFCCC), which sets the objective of "stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. Such a level should be achieved within a time frame sufficient to allow ecosystems to adapt naturally to climate change, to ensure that food production is not threatened and to enable economic development to proceed in a sustainable manner." With total CO2 emissions from fossil fuel combustion and land cover change nearing 10 GtC/yr and the atmospheric concentration rising at almost 2.5 ppmv/yr, the world is far from stabilizing atmospheric composition, and, at the same time, ecosystems are being disrupted, water resources are being impacted, sea level is rising, and agriculture in some regions is being affected (IPCC, 2007b; UNEP, 2009).

The reasons for the increasing CO2 emissions are quite clear: fossil fuels, being safe, transportable, and readily available at reasonable cost, provide over 80% of the world's energy, so the large, industrializing nations of east and south Asia are choosing coal in order to generate the energy needed to raise the standard-of-living of their citizens. Today's large, modern economies have been built relying on relatively inexpensive and available energy from fossil fuels, and, even after many of their heavy industrial operations have moved to the developing world, per capita carbon emissions remain high, being of order 5-6 tonnes of carbon per year in North America and about half that in Europe and elsewhere in the developed world. With per capita carbon emissions being very low (just over 1 tonne of carbon in China down to much lower values across Africa) and population high, even small increases in per capita energy derived from fossil fuels lead to a significant increase in emissions. Until the developed nations demonstrate that a modern economy can prosper with very low CO2 emissions, the stage seems set for global emissions to keep rising, making stabilization of atmospheric composition a very imposing challenge for the next several decades.

The dashed curves shown in Figure 1 indicate the challenge that stabilization of the atmospheric CO2 concentration presents. Whether the world wants to set a course that would stabilize the CO2 concentration at about 450 ppmv (about 60% over pre-industrial) or 650 ppmv (about 130% over pre-industrial), the dashed lines suggest that the least cost emissions path for staying below either of these levels would require sharply reducing the rate of growth in emissions right now, not well off in the future. With a 100% increase in the global CO2 concentration associated with a projected equilibrium rise in global average temperature of between roughly 2 to 4.5°C (IPCC, 2007a), and with the present temperature already up about 0.8°C even with the offsetting cooling influence of sulfate aerosol emissions, the warming projected for unconstrained emissions scenarios has the world warmer than preindustrial by about 2.4 to 4°C by 2100, with further warming thereafter. With warming of only 0.8°C already causing significant impacts in the Arctic (ACIA, 2004), to the ice sheets (Rignot, 2008), and for the ranges of a large number of plant and animal species (IPCC, 2007b), the likelihood is high for very severe environmental and societal impacts during the 21st century.
As global average temperature rises further, the situation could become even worse as the warming triggers thawing of the permafrost and potential release of stored carbon as either CO2 or, in the
worst case, CH4, both of which would further amplify global warming. Because there appears to be a significant risk that warming of over 2°C could cross thresholds for such nonlinear effects (e.g., Schellnhuber et al. 2006; Lenton et al. 2008; Pittock, 2008), the leaders of the leading nations have agreed that their goal should be to limit global warming to no more than this amount (CEC, 2008). Recent scientific studies suggest that even this limited amount of warming may, however, greatly accelerate loss of mass from the Greenland and Antarctic ice sheets, and that avoiding substantial ice loss from the ice sheets will require that global average temperature and the CO2 concentration be returned to below today's elevated values (e.g., Wigley, 2005; Hansen, 2007). In projecting future change, however, the rising concentration of CO2 is not the only reason for concern. The concentrations of methane (CH4), nitrous oxide (N2O), halocarbons, and tropospheric ozone, along with the atmospheric loading of black carbon (soot), are also rising, and thereby exerting a strong warming influence on the climate (Hansen et al. 2005; Hansen et al. 2007). With the warming climate already intensifying storms (Emanuel, 2005; IPCC, 2007a), melting back Arctic sea ice and permafrost (ACIA, 2004), and initiating loss of ice from mountain glaciers and ice sheets (IPCC, 2007a; Rignot, 2008), and with the ranges of important plant and animal species shifting (see IPCC, 2007b; UNEP, 2009), the world is in quite a predicament, with the future looking very insecure (Campbell et al. 2007). The rest of this paper focuses on a possible path forward that would be both workable and effective.
THE RELATIVE WARMING INFLUENCES OF GREENHOUSE GASES AND AEROSOLS OVER THE 21ST CENTURY
Virtually all of the public and intergovernmental discussion has identified reduction of CO2 emissions as the most critical component of the steps needed to slow and then stop climate change. Careful studies of the carbon cycle indicate that the CO2 emitted into the atmosphere is relatively rapidly mixed into the atmosphere, terrestrial biosphere, and upper ocean, such that the airborne fraction (i.e., the fraction of emissions that appears to persist in the atmosphere for many decades or longer as emissions are going up) ends up at about 50%. As a result, the annual rate of rise in the CO2 concentration (in ppmv) is about one-quarter of global annual CO2 emissions (in GtC) from fossil fuel combustion.[2] Once the initial distribution of the emitted CO2 increment across the three rapidly mixed reservoirs has occurred, it then takes many centuries to millennia for the elevated concentration to decrease as the CO2 is mixed into the deep ocean (Solomon et al. 2009) and then, over longer periods, primarily into the ocean sediments. It is because of this very long-term persistence of the warming influence that it is essential that global emissions of CO2 be substantially reduced and eventually virtually eliminated over the next several decades.
2. To convert GtC (gigatonnes of carbon) to GtCO2 (gigatonnes of CO2), multiply by 3.67, and to then convert to MMTCO2 (millions of metric tonnes of CO2), which is the unit used in international negotiations, multiply by 1000. Note that 1 Gt = 1 Pg.
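The following is a minimal sketch of the unit relationships just described. The factor of roughly 2.12 GtC per ppmv of atmospheric CO2 is a standard conversion assumed here rather than a value given in this paper, and the emission rate used is simply the approximate current total quoted above.

```python
# Sketch of the unit conversions and the rough emissions-to-concentration link.
# Assumption: ~2.12 GtC of airborne carbon corresponds to ~1 ppmv of CO2.

GTC_TO_GTCO2 = 3.67        # 1 GtC corresponds to 3.67 GtCO2
GTCO2_TO_MMTCO2 = 1000.0   # 1 Gt = 1000 million metric tonnes
GTC_PER_PPMV = 2.12        # assumed standard conversion, not from the paper
AIRBORNE_FRACTION = 0.5    # approximate share of emissions that stays airborne

def concentration_rise(emissions_gtc_per_yr: float) -> float:
    """Approximate annual CO2 concentration rise (ppmv/yr) from emissions (GtC/yr)."""
    return emissions_gtc_per_yr * AIRBORNE_FRACTION / GTC_PER_PPMV

emissions = 10.0  # GtC/yr, roughly the current total cited in the text
print(f"{emissions} GtC/yr = {emissions * GTC_TO_GTCO2:.1f} GtCO2/yr "
      f"= {emissions * GTC_TO_GTCO2 * GTCO2_TO_MMTCO2:.0f} MMTCO2/yr")
print(f"Approximate concentration rise: {concentration_rise(emissions):.1f} ppmv/yr")
```

With these assumptions, 10 GtC/yr gives a rise of about 2.4 ppmv/yr, consistent with the "almost 2.5 ppmv/yr" and the roughly one-quarter ratio noted above.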
Another reason that there has been so much focus on emissions of CO2 is evident from Table 1. The second column shows, subdivided by gas or aerosol, the increase in forcing for cumulative emissions from 1750 to 2000, and the third column shows the projected forcing for 2100, using the mid-range business-as-usual emissions scenario presented in IPCC (2001) for illustrative purposes. The fourth column shows the differences, and makes clear that just over 75% (3.4/4.4) of the change in forcing from 2000 to 2100 is projected to be due to the increase in the CO2 concentration. Based on this high percentage, it is quite natural for greatest attention to be paid to reducing CO2 emissions, especially given the very long duration of the perturbation (e.g., see Keith, 2009, who notes that, on a percentage basis, the warming influence of CO2 persists for longer than does the radioactive influence of nuclear waste). Just looking at the changes in the contributions to radiative forcing of the various substances, however, does not provide an adequate portrayal of the potential for limiting overall global warming. The reason for this is that the different substances have very different lifetimes in the atmosphere. For example, black carbon and sulfate aerosols have an average atmospheric lifetime of at most one to two weeks, so their entire contributions to forcing in 2100 come from emissions during the last two weeks or so of 2099. More importantly, if emissions of black carbon were sharply and immediately reduced, its warming influence over the entire century would be eliminated. Similarly, chemical reactions limit the lifetime of methane in the atmosphere to about 12 years, so virtually all of methane's contribution to forcing in 2100 is a result of emissions after 2075, and an immediate cutback in emissions would lead to a sharp reduction in its important warming contribution that would take full effect within 25 years and persist throughout the century. A similar situation exists for the precursors that lead to tropospheric ozone; cut these emissions (e.g., by cleaning up transportation emissions), and their warming influence would drop sharply over the first year, with the reduction persisting thereafter.
Table 1: Contributions of each of the primary greenhouse gases and aerosols to radiative forcing (i.e., net downward flux at the tropopause) that drives warming of the surface-atmosphere system. Columns: (a) radiative forcing, 1750-2000 (W/m2); (b) projected business-as-usual (BAU) forcing[1] for 2100 (W/m2); (c) change in projected BAU forcing[1] over the 21st century (W/m2); (d) persistence time of the atmospheric perturbation (years); (e) change in forcing in 2100 due to 21st century emissions (W/m2).

Carbon dioxide (CO2): (a) 1.66; (b) ~5.1; (c) ~3.4; (d) up to thousands; (e) ~4
Methane (CH4): (a) 0.48; (b) ~0.9; (c) ~0.4; (d) ~12; (e) ~0.9
Nitrous oxide (N2O): (a) 0.16; (b) ~0.4; (c) ~0.25; (d) ~114; (e) ~0.35
Halocarbons: (a) 0.34; (b) ~0.4; (c) ~0.05; (d) up to thousands; (e) ~0.1
Tropospheric ozone (O3): (a) 0.35; (b) ~0.65; (c) ~0.3; (d) mostly up to ~0.2; (e) ~0.65
Black carbon (soot): (a) ~0.4; (b) ~0.4; (c) ~0; (d) up to ~0.03, plus effect on snow albedo; (e) ~0.4
Sulfate aerosols (SO4), direct: (a) -0.4; (b) -0.4; (c) ~0; (d) up to ~0.03; (e) -0.4
Sulfate aerosols (SO4), increase in cloud reflectivity: (a) -0.7; (b) -0.7; (c) ~0; (d) up to ~0.03; (e) -0.7
TOTAL: (a) ~2.3; (b) ~6.75; (c) ~4.4; (e) ~5.3

Note 1: This scenario is derived from the UN Scientific Experts Group (SEG, 2007) and IPCC's Third Assessment Report (IPCC, 2001). Values are approximate; see the referenced reports for details. Note 2: Using the IPCC (2000) range of scenarios, IPCC (2007a) gives a range around these estimates of the change in forcing over the 21st century that does not affect the conclusions. Note 3: Compilation of recent observations of aerosols in southern and eastern Asia by Ramanathan and Carmichael (2008) leads them to suggest that the present, and so likely the future, value for the forcing by black carbon might well be roughly twice this amount, which would further strengthen the arguments made in this paper.
The potential for emissions cutbacks to lead to rapid reductions in the warming influence of methane, ozone precursors, and black carbon is not the case for the long-lived species. For CO2, going to zero emissions immediately would still leave about half of the existing CO2 forcing exerting its warming influence in 2100, with further reductions taking centuries (Solomon et al. 2009). The situation for some halocarbons is equally discouraging, although on average the decrease in their influence, like that of nitrous oxide (N2O), is a bit more rapid than for CO2. The sixth column in Table 1 provides a rough indication of the significance of considering these differences in lifetime. Neglecting the cooling influence of the sulfate aerosols, which will be considered below, the warming influence of the higher CO2 concentration is only about 60% of the total warming influence of all of the greenhouse gases on climate, whereas the results presented in column 4 suggested that it was over 75%. Just looking at the forcing in 2100, however, is not enough. What really needs to be done is to look at the integral of the warming influence over the 21st century resulting from emissions that occurred during the 21st century (see Moore and MacCracken, 2009), because these are the emissions that can potentially be reduced. The most commonly used way to estimate the relative contributions of each gas is to consider the emissions of each, weighted by their Global Warming Potential (GWP). The GWP is the ratio of the time-integrated radiative influence of emission of a unit mass of a particular gas relative to the time-integrated radiative influence of emission of a unit mass of CO2. Typically, the integration is over a period of 100 years, and when this is done, the contribution of global CO2 emissions to global warming is typically about 75% of the total influence of all GHGs. The problem with this approach is that the period of time used is very important: for CO2, using 100 years ignores the long-term influence of the CO2 perturbation (Solomon et al. 2009), whereas for the short-lived gases, the 100-year integral significantly downplays their influence over the period of years to decades
(CCSP, 2008). As an example, the 100-year GWP for methane is 22, whereas the 20-year GWP is 75 because virtually all of the influence of methane occurs within the first 20 years of its emission (IPCC, 2007a). For black carbon, the result is even more misleading as it remains in the atmosphere for only a week or two: its 100-year GWP is estimated to be about 460 and its 20-year GWP is 1600 (ICCT, 2009); were one to calculate its GWP relevant to its 1-2 week lifetime, it would likely be near 10^6. These numbers, of course, apply to unit amounts of emissions; what really matters are the total emissions of each GHG and the GWP for each. Figure 2 provides a graphical portrayal of this result. The lowest shaded area shows the carryover warming influence from concentrations elevated by emissions from prior to the year 2005.[3] Beyond the first few decades of the 21st century, virtually all of this radiative forcing is due to CO2 emissions, which have come predominantly from the developed nations. Above this base category, each of the areas represents the warming influence from 21st century emissions of the indicated greenhouse gas. Because of their relatively short atmospheric lifetimes, the influences of methane and tropospheric ozone quickly return to their year 2000 influence and then change very slowly over the century. Because of its long lifetime, however, the influence of CO2 emissions during the 21st century takes several decades to regain its dominant position. Emissions of halocarbons and nitrous oxide also play a role that merits attention, and efforts to limit halocarbon concentrations under the Montreal Protocol and subsequent agreements and amendments will certainly play an important role (Velders et al. 2007). Integrating the influences of each greenhouse gas over the 21st century provides an indication of the relative contributions to warming. That the integral of the lowest area is substantial is an indication of the carry-on warming influence of past emissions, which will be complemented by the warming that will result from the ocean warming up to achieve equilibrium with the existing atmospheric composition. With respect to the influences from future emissions, that the sum of the areas of the methane and tropospheric ozone contributions and the CO2 contribution are similar is an indication that the warming contributions of CO2 and of the short-lived species are roughly comparable.[4] Clearly, to maximize the effectiveness of efforts to limit 21st century warming, it is roughly equally important to limit emissions of the long-lived (so CO2, halocarbons and N2O) and the short-lived, non-CO2 greenhouse gases and aerosols (i.e., methane, ozone-producing pollutants, black carbon, etc.).
3. These results were drawn from simulations using the MAGICC model of Wigley (2008) with an emissions scenario in which anthropogenic emissions of all of the greenhouse gases were linearly decreased from their year 2000 values to zero in 2010 and held at zero thereafter.
4. With respect to total warming contribution, however, CO2 is clearly the dominant human-affected greenhouse gas, for CO2 is also responsible for the dominant share of the 20th century carryover influence, and the influence of past and 21st century CO2 emissions will extend far beyond 2100.
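To make the dependence on the averaging horizon concrete, the short sketch below converts annual emissions of CO2, methane, and black carbon into CO2-equivalent terms using the 20-year and 100-year GWPs quoted above. The GWP values are the ones cited in the text (IPCC 2007a; ICCT 2009); the emission amounts are purely illustrative placeholders, not values taken from this paper.

```python
# Sketch: how the choice of GWP time horizon changes apparent contributions.
# GWPs are the values quoted in the text; emission amounts are hypothetical.

GWP = {
    "CO2":          {"20yr": 1,    "100yr": 1},
    "CH4":          {"20yr": 75,   "100yr": 22},
    "black carbon": {"20yr": 1600, "100yr": 460},
}

emissions_mt = {"CO2": 30000.0, "CH4": 350.0, "black carbon": 8.0}  # Mt/yr, illustrative only

for horizon in ("20yr", "100yr"):
    co2eq = {gas: emissions_mt[gas] * GWP[gas][horizon] for gas in emissions_mt}
    total = sum(co2eq.values())
    shares = ", ".join(f"{gas}: {100 * value / total:.0f}%" for gas, value in co2eq.items())
    print(f"{horizon} horizon -> {shares}")
```

With these illustrative emissions, CO2 accounts for roughly 70% of the CO2-equivalent total on a 100-year horizon but well under half on a 20-year horizon, which is the point being made above.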
Fig. 2: Projection of radiative forcing at the tropopause due to emissions of greenhouse gases prior to the 21st century (lower section) and from the emission of various greenhouse gases during the 21st century (five upper sections). For reference, a radiative forcing of approximately 2-3 W/m2 is estimated, at equilibrium, to be associated with an increase in global average temperature of 2°C above preindustrial conditions. Derived from simulation using the MAGICC model of Wigley (2008).
The situation for the aerosol (or aerosol precursor) emissions is also important to consider (Charlson et al. 1992), especially because the changes in forcing can occur so rapidly. The contribution of emissions of black carbon to warming is emerging as increasingly important. While indicated in Table 1 as contributing about 0.4 W/m2 to the warming influence (so about 0.3°C at equilibrium), observations in southern and eastern Asia by Ramanathan and Carmichael (2008) suggest the warming influence may actually be twice the size estimated by IPCC (2001, 2007a). To the extent this is the case, reducing emissions of black carbon and other non-CO2 greenhouse gases becomes even more important. In contrast to the warming influence of black carbon, emissions of SO2, primarily from the elevated stacks of coal-fired power plants, are estimated (using central values) to cause a cooling influence due to their clear-sky effects of -0.4 W/m2 and a further cooling influence of -0.7 W/m2 due to their brightening influence on clouds (IPCC, 2007a). This total cooling influence, if removed, would be expected, after the oceans have had time to come into equilibrium, to lead to an increase in the global average surface temperature of roughly 0.8°C (range about 0.6 to 1.3°C). Most emissions scenarios project that emissions of SO2 will be dropping over coming decades as air pollution controls are put in place to improve health and visibility.
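The rough conversion from a forcing change to an equilibrium temperature change used in the paragraph above can be sketched as follows. The 2-4.5°C sensitivity range is the IPCC (2007a) range cited earlier in the text; the roughly 3.7 W/m2 forcing for a CO2 doubling is a standard value assumed for this sketch rather than a number stated in the paper.

```python
# Sketch: equilibrium warming uncovered by removing the sulfate cooling
# (clear-sky -0.4 W/m2 plus cloud brightening -0.7 W/m2, as in the text).
# Assumes ~3.7 W/m2 per CO2 doubling (standard value, not from the paper).

F_2XCO2_WM2 = 3.7
SENSITIVITIES_C = (2.0, 3.0, 4.5)   # equilibrium warming per CO2 doubling (deg C)

def equilibrium_warming(delta_forcing_wm2: float, sensitivity_c: float) -> float:
    """Equilibrium temperature change implied by a change in radiative forcing."""
    return sensitivity_c * delta_forcing_wm2 / F_2XCO2_WM2

uncovered_forcing = 0.4 + 0.7   # W/m2 of cooling removed if SO2 emissions cease
for s in SENSITIVITIES_C:
    print(f"sensitivity {s:.1f} C -> warming {equilibrium_warming(uncovered_forcing, s):.1f} C")
# -> roughly 0.6, 0.9 and 1.3 C, matching the range quoted above
```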
A PRACTICAL PATH FORWARD
Global average temperature is already about 0.8°C above its preindustrial value. Model calculations carried out in support of IPCC's Fourth Assessment (IPCC, 2007a), in which the greenhouse gas and aerosol loadings of the year 2000 were held constant thereafter, projected that another 0.5°C of warming would occur as the oceans warmed and the climate came into equilibrium. Going to near-zero net emissions from coal-fired power plants, which will be essential to limiting long-term global warming, would also lead to near-zero emissions of SO2, thereby leading to rapid loss of the cooling influences of sulfate aerosols. Such a loss would uncover the full warming influence of the present concentrations of greenhouse gases, likely leading to further warming of about 0.8°C. Thus, even if the world could immediately reduce emissions of the long-lived greenhouse gases to levels that would stabilize atmospheric composition, a change that would likely seriously disrupt economic development, the world would seem to face a warming of at least 2°C. The more realistic situation, in which emissions of CO2 and other GHGs rise and then are reduced over subsequent decades, has been projected recently to lead to a warming of at least 4°C during this century (see http://www.metoffice.gov.uk/climatechange/news/latest/four-degrees.html). Such a warming would cause all sorts of undesired impacts (IPCC, 2007b), but it is the seemingly inevitable outcome if the world is unsuccessful in negotiating sharp reductions in emissions at December's COP-15 meeting in Copenhagen (i.e., the 15th meeting of the Conference of the Parties to the UN Framework Convention on Climate Change). No wonder the discouragement of the scientific community about stopping global warming is leading to increasing consideration of geoengineering interventions to limit climate change (Crutzen, 2006; Wigley, 2006; Victor et al. 2009), including injection of sulfate aerosols into the stratosphere (Crutzen, 2006; Rasch et al. 2008; Robock et al. 2009), brightening of the troposphere (Latham et al. 2008; MacCracken, 2009), and large-scale scrubbing of CO2 from the atmosphere (Keith, 2009). An alternative, although not completely satisfactory, approach to global geoengineering is to focus on sharply reducing the emissions of non-CO2 greenhouse gases and black carbon. Specifically, if, in addition to limiting CO2 emissions by reversing deforestation and aggressively reducing the net CO2 emissions from fossil fuel combustion (or, at least in the developing world, substantially increasing energy efficiency), substantial reductions are made in the emissions of the short-lived gases and warming aerosols, it seems likely that, even if SO2 emissions are reduced, the increase in global average temperature could be limited to about 2-2.5°C. For the developed (i.e., OECD) nations, a rough outline of what would be required is to:
a) Cut net CO2 emissions by about 80% by 2050 and 90% or more within a few more decades. A strong and early start is needed to demonstrate that modern societies can prosper with low greenhouse gas emissions (IPCC, 2007c; Brown, 2008). According to the EASY strategy put forth by Harte and Harte (2008), the primary steps needed are to: increase efficiency (E), electrify the transportation sector (A), generate electricity from solar, wind, other renewables and nuclear (S), and live life showing greater concern for the environment (Y). If CO2 scrubbing becomes viable, its implementation would also be helpful (Keith, 2009). For the other long-lived species, namely halocarbons and N2O, significant reductions also appear possible (e.g., Velders et al. 2007).
b) For methane, which is the primary short-lived greenhouse gas, achieve a reduction in emissions of about 60% by 2050 and 80% or more by 2100. In the U.S. (EPA, 2008), methane emissions relating to fossil fuels account for about 40% of current emissions, and sharp reductions can likely be accomplished as fossil fuel use is brought down. About 30% of methane emissions are related to landfills, waste treatment, and stationary combustion, and there are already economical means to limit these emissions. The rest is largely related to agriculture, and limited capture of methane emissions through manure management and even at cattle feedlots is practical. The threat to this approach is that thawing of permafrost regions could lead to a compensating increase in emissions, although this could occur in any case and sharply accelerate warming.
c) Electrify the transportation sector, which would be expected to lead to sharp reductions in emissions of volatile organics, carbon monoxide, and nitrogen oxides. Achieving a 50% reduction by 2050 and a 90% cutback by 2100 would not only significantly reduce the global warming influence of tropospheric ozone, but also have beneficial effects on public health and crop production.
d) Recognize that the sharp reduction in net CO2 emissions, especially from coal-fired power plants, would be expected to lead to a sharp reduction in SO2 emissions, and thus in the cooling offset provided by sulfate aerosols. At the same time, however, visibility would improve and health effects would be reduced.
e) Pursue research to determine whether injection of sulfate aerosols into the stratosphere (Rasch et al. 2008; Robock et al. 2009) or the troposphere in remote areas (MacCracken, 2009) might be an effective and not significantly disruptive and damaging means of more rapidly limiting global warming than is possible as a result of emissions reductions.
For the non-OECD nations, for reasons of equity and ethics, any strategy aimed at limiting climate change must also be designed to alleviate poverty and promote a more sustainable relationship with their environment: if living from year to year is a life-and-death challenge, there is little reason to be concerned about long-term climate change. That both short- and long-term objectives relating to development and climate change can be pursued in a coordinated manner is, fortunately, becoming more and more apparent (see SEG, 2007; World Bank, 2009). Because beginning an immediate reduction in their CO2 emissions would be likely to greatly restrict economic development in developing nations, relegating them to ongoing poverty, it has not been surprising that there has been a refusal by developing
nations to do more over the next few decades than to limit the rate of growth in their emissions. In 2001, even though U.S. per capita emissions were roughly five times those of the largest developing nations, this refusal was one of the reasons that President George W. Bush gave for withdrawing the U.S. from the process of finalizing the Kyoto Protocol's implementation. Similar sentiments seem to remain widespread as negotiations on the post-Kyoto agreement near the critical point. To bridge this sharp disagreement, MacCracken (2008) and Moore and MacCracken (2009) have proposed a two-phase approach for the developing nations that would take advantage of the important contribution that can be made by limiting emissions of the short-lived greenhouse gases and aerosols. They propose the following:
Phase 1: In the first phase, which would begin immediately and continue until national per capita GDP and greenhouse gas emissions rose to the bottom of the hopefully declining range of per capita emissions in developed nations, the non-OECD nations would:
a) Limit growth in their CO2 emissions by committing to aspirational goals to improve the energy efficiency of their economies, thus reducing the amount of CO2 generated per dollar of GDP, seeking to reach (or exceed) the levels of present OECD nations over coming decades (and the OECD nations would pledge to help in this effort via technology transfer and helpful and verifiable financial measures). These actions would not only reduce the prospective CO2 contribution to climate change, but also assist in their economic development, environmental clean-up, and alleviation of poverty. Using already developed technologies, commitments to reductions in emissions of halocarbons and nitrous oxide would also be valuable.
b) Reverse deforestation. This step would not only increase uptake of carbon, but is also vital to stabilize soils, enhance wildlife and biodiversity, and encourage ecotourism. In that cutting of trees and shrubs for biofuels is a primary factor in deforestation, use of more efficient wood-burning cooking stoves could also play an important role in reducing the pressure on forests and the time devoted to gathering firewood.
c) Aggressively reduce CH4 emissions. While the mix of sources in developing nations differs from that in developed nations, there are substantial opportunities for emission reductions by extracting methane from coal mines (which would reduce the incidence of coal mine explosions), tightening up the natural gas and petroleum distribution systems (which would also increase energy efficiency and reduce air pollution), capturing methane from waste treatment and landfills (which would provide a useful fuel), and even by altering and/or reducing production from agriculture.
d) Reduce emissions of air pollutants. Pollutant emissions are not only causing serious health and environmental problems, but are also contributing to the build-up of tropospheric ozone and thus to global warming. Many countries are already moving to reduce emissions from their transportation systems, both by raising mileage standards and by imposing emission limits. Although not
originally put in place to limit climate change, efforts by developing nations to reduce air pollutant emissions deserve credit and encouragement.
e) Reduce emissions of black carbon. Recent observations indicate that black soot may even be the second most important warming influence (Ramanathan and Carmichael, 2008), making reduction of such emissions a very important component of a comprehensive effort to limit global warming. Primary sources are burning of biomass and biofuels (e.g., in inefficient cookstoves), use of kerosene for light and cooking (kerosene use is estimated to be about a million barrels of oil per day), and two-stroke and diesel engines. One key step would be to provide rural families with small solar panels that can power a cell phone, a computer, and a small light that would help lengthen the time for study and education. Again, developing countries have the potential to play a major role in reducing emissions and warming, and a commitment to pursue appropriate actions merits official encouragement.
Phase 2: With so many people, significant growth in CO2 emissions from developing nations alone could, if not controlled, lead to very significant global warming. Therefore, in addition to continuing to pursue all the actions in Phase 1, it will be essential that the developing nations, beginning generally within a few decades, also take steps to join the developed nations in driving their CO2 (and other greenhouse gas) emissions toward zero. An appropriate commitment might well be for each nation to never exceed the per capita emissions of developed nations, which, to seriously deal with global warming, need to be coming down rapidly over coming decades (Moore and MacCracken, 2009). The recent study by the Energy Modeling Forum (http://emf.stanford.edu/research/emf22/) found that foreknowledge that a nation would, at a specified date, graduate from a first phase without a hard CO2 emissions limit into a second phase with declining limits on CO2 emissions would lead investors to shift their investments into low- or no-carbon energy technologies starting well before the nation's graduation date in order to minimize stranded investments. In addition, that many of the developing nations may be manufacturing the low-carbon energy technologies (e.g., solar panels, wind turbines, etc.) needed by the developed nations would likely facilitate their early use in the developing nations.
As to the ability to reduce overall emissions, if, for example, the non-OECD nations can, by 2100 or earlier, collectively cut in half the CO2 emissions that are presently projected for 2040, then per capita emissions in both developed and developing nations would be nearly equal and at the very low levels needed to stabilize the climate. With aggressive emission reductions, there may even be the potential to start pushing the atmospheric CO2 concentration back toward mid-20th century levels, which it is increasingly likely will be required to stabilize the mass of water held by mountain glaciers and ice sheets (Wigley, 2005; Hansen, 2007).
SUMMARY
Without strong action, the most recent emissions projections will lead to global warming of several degrees Celsius by the end of the century, far above the 2°C goal set by world leaders on the basis that greater warming seems likely to trigger 'dangerous' changes to the climate, sea level, ice sheets, ecosystems and more (Schellnhuber et al. 2006; Lenton et al. 2008). With the CO2 concentration rising at a high rate as a result of accelerating emissions, halting the increase in temperature will require reducing net global emissions of CO2 by 80% or more over coming decades. In addition to this essential step to limit long-term warming, reducing the concentrations of short-lived greenhouse gases and the atmospheric loading of black carbon is also critical to limiting warming over the next few decades. Indeed, of the emissions that can potentially be controlled, the contributions of non-CO2 greenhouse gases and the loading of black carbon will contribute approximately as much to 21st century warming as will this century's emissions of CO2 (CO2 emitted prior to 2000 will also be an unavoidable contributor to the warming). The only path to limiting global warming to less than about 2.5°C thus appears to be a combined effort to reduce the emissions of CO2, non-CO2 greenhouse gases, and black carbon. To achieve sufficient reductions, a comprehensive strategy is needed:
• The OECD nations (which generally have high per capita emissions of CO2 and some other greenhouse gases and aerosols) need to move expeditiously to demonstrate that a modern economy can prosper with reduced emissions, as a few nations are working hard to do; and
• The non-OECD nations (which generally have low per capita emissions of fossil fuel CO2 due to a lower standard of living and low per capita use of fossil fuels, but much higher emissions of biomass CO2, methane, air pollutants leading to tropospheric ozone, and black carbon) need, over the next couple of decades, to greatly increase their energy efficiency, reverse their deforestation, and sharply reduce their non-CO2 greenhouse gases. Then, as poverty is alleviated and new, clean energy technologies are proven, these nations need to join in sharply reducing their CO2 emissions, taking advantage of the technologies and approaches being utilized by the developed nations.
If emissions are cut sharply enough, there appears to be a narrow path forward, though one along which there will surely be some significant impacts. More likely, due to the apparent inability or unwillingness to cut CO2 emissions sharply enough because of concerns about cost and backlash, the cuts will be slower, allowing the warming to become greater, which in turn opens up the question of whether, despite its shortcomings, climate engineering (i.e., solar radiation management) will be required both to offset the warming influence of the increased greenhouse gases and to strengthen the cooling effect of the sulfate aerosols presently associated with coal combustion. Such efforts are conceivable (Crutzen, 2006; Rasch et al. 2008; AMS, 2009; Robock et al. 2009; MacCracken, 2009), but likely viable over the long term only if the emissions of CO2 and non-CO2
greenhouse gases are on a sharp downward trajectory. Without emissions reduction, the legacy passed to future generations will be daunting and debilitating (Campbell et al. 2008); with emissions reduction, perhaps aided by geoengineering, there is the potential that at least some of the worst effects of climate change can be moderated, although there is really no time to spare in getting started.
ACKNOWLEDGEMENTS
The views included here represent those of the author and not necessarily of any of the organizations with which he has been affiliated. Thanks are due to Frances Moore of Yale University for her calculations of some of the changes in fluxes due to human-related activities.
REFERENCES
1. American Meteorological Society (AMS), (2009) Geoengineering the Climate System: A Policy Statement of the American Meteorological Society, adopted by the AMS Council July 20, 2009, American Meteorological Society, Boston, MA (downloadable at: http://www.ametsoc.org/policy/2009geoengineeringclimate_amsstatement.html).
2. Arctic Climate Impact Assessment (ACIA), (2004) Impacts of a Warming Arctic: Arctic Climate Impact Assessment, Cambridge University Press, 140 pp.
3. Arrhenius, S., (1896) "On the influence of carbonic acid in the air upon the temperature of the ground," Philosophical Magazine 41, 237.
4. Berger, A., (2001) "The role of CO2, sea-level and vegetation during the Milankovitch-forced glacial-interglacial cycles," pp. 119-146 in Geosphere-Biosphere Interactions and Climate, L. Bengtsson and C.U. Hammer (eds.), Cambridge University Press, Cambridge, UK.
5. Brown, L.R., (2008) Plan B 3.0: Mobilizing to Save Civilization, W.W. Norton, 384 pp.
6. Campbell, K.M., J. Gulledge, J.R. McNeil, J. Podesta, P. Ogden, L. Fuerth, R.J. Woolsey, A.T.J. Lennon, J. Smith, R. Weitz, and D. Mix, (2007) The Age of Consequences: The Foreign Policy and National Security Implications of Global Climate Change, Center for Strategic and International Studies, Washington DC, 119 pp.
7. Canadell, J.G., C. Le Quéré, M.R. Raupach, C.B. Field, E.T. Buitenhuis, P. Ciais, T.J. Conway, N.P. Gillett, R.A. Houghton, and G. Marland, (2007) "Contributions to accelerating atmospheric CO2 growth from economic activity, carbon intensity, and efficiency of natural sinks," Proceedings of the National Academy of Sciences 104, 18866-18870.
8. Charlson, R.J., S.E. Schwartz, J.M. Hales, R.D. Cess, J.A. Coakley, J.E. Hansen, and D.J. Hofmann, (1992) "Climate forcing by anthropogenic aerosols," Science 255, 422-430.
9. Climate Change Science Program (CCSP), (2008) Climate Projections Based on Emissions Scenarios for Long-Lived and Short-Lived Radiatively Active Gases and Aerosols, H. Levy II, D.T. Shindell, A. Gilliland, M.D. Schwarzkopf, and L.W. Horowitz (eds.), prepared under the direction of the U.S. Climate Change Science Program and the Subcommittee on Global Change Research, Department of Commerce, NOAA's National Climatic Data Center, Washington, D.C., USA, 100 pp.
10. Commission of European Communities (CEC), (2007) Communication from the Commission to the Council, the European Parliament, the European Economic and Social Committee and the Committee of the Regions, Limiting Global Climate Change to 2°C, the Way Ahead for 2020 and Beyond, European Union, Brussels.
11. Crutzen, P.J., (2006) "Albedo enhancement by stratospheric sulfur injections: A contribution to resolve a policy dilemma?" Climatic Change 77, 211-219.
12. Emanuel, K., (2005) "Increasing destructiveness of tropical cyclones over the past 30 years," Nature 436, 686-688.
13. Environmental Protection Agency (EPA), (2008) Inventory of U.S. Greenhouse Gas Emissions and Sinks: 1990-2006, U.S. Environmental Protection Agency, Washington DC, USA, 394 pp.
14. EPICA community members, (2004) "Eight glacial cycles from an Antarctic ice core," Nature 429, 623-628.
15. Hansen, J.E., (2007) "Scientific reticence and sea level rise," Environmental Research Letters 2, 024002, doi:10.1088/1748-9326/2/2/024002.
16. Hansen, J., Mki. Sato, R. Ruedy, L. Nazarenko, A. Lacis, G.A. Schmidt, G. Russell, I. Aleinov, M. Bauer, S. Bauer, N. Bell, B. Cairns, V. Canuto, M. Chandler, Y. Cheng, A. Del Genio, G. Faluvegi, E. Fleming, A. Friend, T. Hall, C. Jackman, M. Kelley, N.Y. Kiang, D. Koch, J. Lean, J. Lerner, K. Lo, S. Menon, R.L. Miller, P. Minnis, T. Novakov, V. Oinas, Ja. Perlwitz, Ju. Perlwitz, D. Rind, A. Romanou, D. Shindell, P. Stone, S. Sun, N. Tausnev, D. Thresher, B. Wielicki, T. Wong, M. Yao, and S. Zhang, (2005) "Efficacy of climate forcings," Journal of Geophysical Research 110, D18104, doi:10.1029/2005JD005776.
17. Hansen, J., M. Sato, P. Kharecha, G. Russell, D.W. Lea, and M. Siddall, (2007) "Climate change and trace gases," Philosophical Transactions of the Royal Society A 365, 1925-1954.
18. Harte, J., and M.E. Harte, (2008) Cool the Earth, Save the Economy: Solving the Climate Crisis is EASY, published online at http://www.cooltheearth.us/.
19. Intergovernmental Panel on Climate Change (IPCC), (2000) Special Report on Emissions Scenarios (SRES), N. Nakicenovic et al. (eds.), Cambridge University Press, 599 pp.
20. Intergovernmental Panel on Climate Change (IPCC), (2001) Climate Change 2001: The Scientific Basis, J. Houghton et al. (eds.), Cambridge University Press, 881 pp.
21. Intergovernmental Panel on Climate Change (IPCC), (2007a) Climate Change 2007: The Physical Science Basis, Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Avery, M. Tignor, and H.L. Miller (eds.), Cambridge University Press, Cambridge and New York, 996 pp.
22. Intergovernmental Panel on Climate Change (IPCC), (2007b) Climate Change 2007: Impacts, Adaptation and Vulnerability, Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, M. Parry, O. Canziani, J. Palutikof, P. van der Linden, and C. Hanson (eds.), Cambridge University Press, Cambridge and New York, 976 pp.
23. Intergovernmental Panel on Climate Change (IPCC), (2007c) Climate Change 2007: Mitigation, Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, B. Metz, O. Davidson, P. Bosch, R. Dave, and L. Meyer (eds.), Cambridge University Press, Cambridge and New York, 851 pp.
24. International Council on Clean Transportation (ICCT), (2009) A Policy-Relevant Summary of Black Carbon Climate Science and Appropriate Emission Control Strategies, Washington DC (see http://www.theicct.org/).
25. Keith, D.W., (2009) "Why capture CO2 from the atmosphere?" Science 325, 1654-1655.
26. Latham, J., P.J. Rasch, C.C. Chen, L. Kettles, A. Gadian, A. Gettelman, H. Morrison, K. Bower, and T.W. Choularton, (2008) "Global temperature stabilization via controlled albedo enhancement of low-level maritime clouds," Philosophical Transactions of the Royal Society A, doi:10.1098/rsta.2008.0137.
27. Lenton, T., H. Held, E. Kriegler, J. Hall, W. Lucht, S. Rahmstorf, and H.J. Schellnhuber, (2008) "Tipping elements in the Earth's climate system," Proceedings of the National Academy of Sciences 105, 1786-1793.
28. MacCracken, M.C., (2008) "Prospects for future climate change and the reasons for early action," Journal of the Air and Waste Management Association 58, 735-786.
29. MacCracken, M.C., (2009) Beyond Mitigation: Potential Options for Counter-Balancing the Climatic and Environmental Consequences of the Rising Concentrations of Greenhouse Gases, Background Paper to the 2010 World Development Report, Policy Research Working Paper 4938, The World Bank, Washington, DC, 43 pp.
30. Moore, F.C., and M.C. MacCracken, (2009) "Lifetime-leveraging: An approach to achieving international agreement and effective climate protection using mitigation of short-lived greenhouse gases," International Journal of Climate Change Strategies and Management 1, 42-62.
31. Pittock, A.B., (2008) "Ten reasons why climate change may be more severe than projected," pp. 11-27 in Sudden and Disruptive Climate Change: Exploring the Real Risks and How We Can Avoid Them, M.C. MacCracken, F. Moore, and J.C. Topping, Jr. (eds.), Earthscan, London, UK, 326 pp.
CURRENT STATUS OF TECHNOLOGY FOR COLLECTION OF URANIUM FROM SEAWATER
MASAO TAMADA
Environmental Polymer Group, Environment and Industrial Materials Research Division, Quantum Beam Science, Gunma, Japan
The total amount of the uranium resource in seawater is about one thousand times that in terrestrial ores. A polymeric adsorbent capable of collecting uranium from seawater was developed in the early 1980s, since uranium is an indispensable resource for operating nuclear power plants. This adsorbent fabric was synthesized by radiation-induced graft polymerization, which can impart a desired functional group to a fibrous trunk polymer. The amidoxime group was selected as a high-affinity group for uranium collection from seawater. In a marine experiment, 350 kg of adsorbent stacks was submerged 7 km off the Mutsu-Sekine shore in Aomori prefecture, Japan. In a total of 9 tests over three years, 1 kg of uranium was successfully collected as yellow cake. A new braid-type adsorbent has been developed to achieve a practical cost of uranium collection. This braid adsorbent can stand on the bottom of the sea and does not need the heavy adsorbent cage required for adsorbent stacks. Its adsorption performance in the marine experiment was 1.5 g-U/kg-ad for 30 days' soaking, three times higher than that of the adsorbent stacks. The collection cost of uranium was calculated, including the processes of adsorbent production, uranium collection, and purification, at an annual collection scale of 1,200 t-U. The uranium collection cost based on the adsorbent durability in laboratory-scale experiments is 32 thousand yen/kg-U. When the braid-type adsorbent is reused 18 times, the collection cost reaches 25 thousand yen/kg-U, which is equivalent to $96/lb-U3O8.
INTRODUCTION
Uranium is an indispensable mineral resource for generating electricity in nuclear power plants. Uranium has been mined as ore, and there is concern that the terrestrial resource could be exhausted within about 100 years. The amount of uranium dissolved in seawater is extremely large, about 4.5 billion tons, equivalent to a thousand times that in terrestrial ores. Uranium in seawater is therefore expected to serve as a resource for the increasing future demand for nuclear power generation. However, the uranium concentration is only about 3 ppb over almost all areas and depths of the sea. The collection of uranium at such low concentration requires an advanced adsorbent with extremely high selectivity and capacity for uranium in seawater. This paper deals with the current state of uranium collection technology, including the development of the adsorbent for uranium collection from seawater, the demonstration of 1 kg of uranium collected from seawater, a practical uranium collection system using braid adsorbent, and cost estimation for uranium collection from seawater.
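As a rough plausibility check on the figures quoted above (a few ppb of dissolved uranium and about 4.5 billion tons in total), the following sketch multiplies an assumed ocean volume by the dissolved concentration. The ocean volume of about 1.37 x 10^9 km3 and the concentration of about 3.3 micrograms per litre are standard literature values assumed here, not numbers taken from this paper.

```python
# Rough check: dissolved uranium inventory of the ocean.
# Assumed values (not from this paper): ocean volume ~1.37e9 km^3,
# uranium concentration ~3.3 micrograms per litre (~3.3 ppb by mass).

OCEAN_VOLUME_KM3 = 1.37e9
LITRES_PER_KM3 = 1.0e12
URANIUM_UG_PER_L = 3.3

total_ug = OCEAN_VOLUME_KM3 * LITRES_PER_KM3 * URANIUM_UG_PER_L
total_tonnes = total_ug * 1e-6 / 1e6          # micrograms -> grams -> tonnes
print(f"Dissolved uranium: ~{total_tonnes / 1e9:.1f} billion tonnes")
# -> ~4.5 billion tonnes, consistent with the figure quoted in the text.
```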
URANIUM ADSORBENT
Development of uranium adsorbents has been pursued since the mid-1960s. Davies et al. found in 1964 that hydrous titanium oxide was a suitable adsorbent for the collection of uranium from seawater.[1] Many metal oxides were then screened in terms of adsorption rate, and hydrous titanium oxide was confirmed as a promising adsorbent for uranium, which is dissolved in seawater as the uranyl tricarbonate complex, UO2(CO3)3^4-, owing to the seawater pH of about 8.3.[2] The first experimental plant for collection of uranium from seawater with hydrous titanium oxide was operated by the Agency for Natural Resources and Energy of the Ministry of International Trade and Industry and the Metal Mining Agency of Japan from 1981 to 1988. The adsorption ability of the hydrous titanium oxide was reported to be 0.1 g-U/kg-adsorbent (hereafter termed kg-ad). This ability is not sufficient for practical use and would have to be improved by more than a factor of 10 to reduce the collection cost. In this plant, the electricity for pumping seawater pushes up the collection cost, since pumping is necessary to retard the sedimentation of the adsorbent in the moving-bed system used to obtain effective contact between adsorbent and seawater. Additionally, the mechanical strength of the adsorbent is not sufficient to withstand the abrasive motion in a moving-bed system.[3] After screening of many other uranium adsorbents, including organic materials, the amidoxime group was identified as a new, promising functional group for collection of uranium from seawater.[4] Meanwhile, Egawa[5] and Astheimer[6] synthesized polymer beads bearing cyano groups to obtain amidoxime adsorbents; the cyano groups were converted to amidoxime groups by reaction with hydroxylamine. However, a bead-type adsorbent needs packaging for practical handling and for effective contact between adsorbent and seawater. From the viewpoint of practical handling in the adsorption process, the National Institute of Advanced Science and Technology (Shikoku) developed an amidoxime fiber by reacting commercially available acrylonitrile fiber with hydroxylamine. The fibrous adsorbent obtained can utilize the ocean current and the wave motion when it is moored in the sea.[7,8] In this case, however, the mechanical strength is not sufficient for mooring in the sea, because the amidoxime groups were introduced evenly throughout the fiber and the intrinsic mechanical strength of the fiber was lost after amidoximation.

1. R.V. Davies, J. Kennedy, R.W. McIlroy, R. Spence, and K.M. Hill, (1964) "Extraction of uranium from seawater," Nature, 203:1110-1115.
2. K. Saito and T. Miyauchi, (1982) "Chemical Forms of Uranium in Artificial Seawater," J. Nucl. Sci. and Tech., 19:145-150.
3. N. Ogata, (1980) "Review on recovery of uranium from seawater," Bull. Soc. Sea Water Sci. Japan, 34:3-12.
4. H.J. Schenk, L. Astheimer, E.G. Witte, and K. Schwochau, (1982) "Development of sorbers for the recovery of uranium from seawater. 1. Assessment of key parameters and screening studies of sorber materials," Sep. Sci. Technol., 17:1293-1308.
5. H. Egawa and H. Harada, (1979) "Recovery of uranium from sea water by using chelating resins containing amidoxime groups," Nippon Kagaku Kaishi, 958-959.
6. L. Astheimer, H.J. Schenk, E.G. Witte, and K. Schwochau, (1983) "Development of sorbers for the recovery of uranium from seawater. Part 2," Sep. Sci. Technol., 18:307-339.
7. H. Nobukawa, M. Tamehiro, M. Kobayashi, H. Nakagawa, J. Sakakibara, and N. Takagi, (1989) "Development of floating type-extraction system of uranium from sea water using sea water current and wave power," J. Shipbuild. Soc. Japan, 165:281-292.
8. H. Nobukawa, M. Kitamura, M. Kobayashi, H. Nakagawa, N. Takagi, and M. Tamehiro, (1992) "Development of floating type-extraction system of uranium from sea water using seawater current and wave power," J. Shipbuild. Soc. Japan, 172:519-528.
[Figure 1 schematic: polyethylene trunk polymer, irradiated and then contacted with the reactive monomer, yielding grafted chains and finally the uranium adsorbent.]
Fig. 1: Synthesis of uranium adsorbent with radiation-induced graft polymerization.
To overcome this problem, graft polymerization was applied to synthesize the fibrous amidoxime adsorbent. Graft polymerization is a powerful technique for introducing a desired functional group into conventionally available polymers. When polyethylene non-woven fabric is selected as the trunk polymer for grafting, the fabric can provide the mechanical strength of the resulting adsorbent, since polyethylene fiber is used for fences against oil discharged on seawater. In the grafting process shown in Figure 1, polyethylene was irradiated with an electron beam and then contacted with the reactive monomer. The graft chains propagate from the active sites in the irradiated trunk polymer. In this way, acrylonitrile was grafted onto polyethylene non-woven fabrics, and subsequently the imparted cyano groups of the grafted polymer chains were converted into amidoxime groups. This grafting led to the production of an adsorbent having both sufficient mechanical strength and a high capacity for uranium. The detailed process for the experimental synthesis of the amidoxime adsorbent fabric is as follows:
1. Nonwoven fabric made of fibrous polyethylene as the trunk polymer was irradiated with an electron beam to a dose of 200 kGy under nitrogen gas.
2. The irradiated nonwoven fabric was immersed in a monomer solution composed of 50% dimethyl sulfoxide, 35% acrylonitrile, and 15% methacrylic acid, after the oxygen in the monomer solution had been replaced with nitrogen gas.
The irradiated nonwoven fabric in the monomer solution was then warmed to 40°C. This temperature was maintained for 4 h for the graft polymerization. The degree of grafting, calculated from the increase in weight, reached 150%.
3. The grafted nonwoven fabric was reacted with a 3% hydroxylamine solution at 80°C for 1 h. In this reaction, the cyano groups in the polyacrylonitrile moiety of the grafted nonwoven fabric were converted into amidoxime groups in a yield of 95%.
Co-graft polymerization of a hydrophilic monomer, methacrylic acid, with acrylonitrile was effective for improving the adsorption rate of uranium in seawater.[9]

Table 1: Characteristics of the uranium adsorbent against metals in seawater. Columns: (a) concentration in seawater [μg/L]; (b) concentration in adsorbent [μg/g-ad]; distribution coefficient (b/1000a).

Na: (a) 1.08x10^7; (b) 618.5; distribution coefficient 0.057
K: (a) 3.80x10^5; (b) 45.9; distribution coefficient 0.12
Al: (a) 2; (b) 86.94; distribution coefficient 4.35x10^4
Pb: (a) 0.03; (b) 108.82; distribution coefficient 3.62x10^6
Ti: (a) 1; (b) 1.49; distribution coefficient 1.49x10^3
Fe: (a) 2; (b) 414.44; distribution coefficient 2.07x10^5
Co: (a) 0.05; (b) 23.57; distribution coefficient 4.71x10^5
Ni: (a) 1.7; (b) 78.17; distribution coefficient 4.60x10^4
U: (a) 3.2; (b) 63.72; distribution coefficient 1.99x10^4

Adsorption conditions: 0.2 g adsorbent, 25°C, 3 L/min seawater, and 7 days.
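The two simple quantities used in this section, the degree of grafting (calculated from the weight increase of the fabric) and the distribution coefficient in the last column of Table 1, can be sketched as follows. The fabric weights in the first example are hypothetical, while the uranium concentrations are the values from Table 1.

```python
# Sketches of two simple quantities used in this section.

def degree_of_grafting(weight_before_g: float, weight_after_g: float) -> float:
    """Degree of grafting (%) from the weight increase of the trunk fabric."""
    return 100.0 * (weight_after_g - weight_before_g) / weight_before_g

def distribution_coefficient(seawater_ug_per_l: float, adsorbent_ug_per_g: float) -> float:
    """Distribution coefficient b/(a/1000), as defined in Table 1."""
    return adsorbent_ug_per_g / (seawater_ug_per_l / 1000.0)

# A fabric that gains 150% of its weight during grafting (hypothetical weights):
print(degree_of_grafting(10.0, 25.0))                  # -> 150.0 (%)

# Uranium row of Table 1: 3.2 ug/L in seawater, 63.72 ug/g-ad on the adsorbent:
print(f"{distribution_coefficient(3.2, 63.72):.2e}")   # -> ~1.99e+04
```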
The adsorption characteristics of the amidoxime adsorbent are shown in Table 1. Although this adsorbent has low affinity for the alkaline metals such as sodium and potassium ions, the transition metal ions Pb, Fe, Co, Ni, U, and V were selectively adsorbed from the seawater. This is the reason why the amidoxime adsorbent can collect uranium from seawater.
MARINE EXPERIMENT BY ADSORBENT FABRIC STACKS
The amidoxime adsorbent fabric was synthesized in bench-scale equipment as shown in Figure 2. Rolls of polyethylene nonwoven fabric, 200 m long and 1.5 m wide, were irradiated with gamma rays to a dose of 200 kGy. The irradiated fabrics were then reacted with the monomer mixture of acrylonitrile and methacrylic acid. A grafting reaction of 8 h achieved a degree of grafting of 120%. After amidoximation, uranium adsorbent fabric totalling 6,000 m2 and 700 kg in weight was obtained for use in the marine experiment.
9. T. Kawai, K. Saito, K. Sugita, T. Kawakami, A. Kanno, A. Katakai, N. Seko, and T. Sugo, (2000) "Preparation of hydrophilic amidoxime fibers by cografting acrylonitrile and methacrylic acid from optimized composition," Radiat. Phys. Chem., 59:405-411.
[Figure 2 shows the fabric before grafting and after grafting (degree of grafting 120%), followed by treatment with 10% hydroxylamine (60°C, 1 h).]
Fig. 2: Graft polymerization for bench-scale production of uranium adsorbent fabric.
The uranium adsorbent fabric produced, 0.2 cm thick, was cut into sheets 16 cm wide and 29 cm long. Adsorbent stacks were assembled from 120 sheets of adsorbent fabric alternating with spacer nets, as shown in Figure 3. The collection system using adsorbent stacks is shown in Figure 4. This collection system is composed of a floating frame and adsorption beds. The floating frame, 8 m on a side, was stabilized with ropes connecting it to four 40-t anchors placed on the sea bottom. The square adsorption bed, 16 m2 in cross-sectional area and 30 cm in height, can pack 144 adsorbent stacks. Three adsorption beds, connected with four ropes at a spacing of 1.5 m, were hung in the seawater from the floating frame at a depth of 20 m. The frame was designed to endure the following ocean weather conditions: wind speed of 30 m/s, tidal current of 1.0 m/s, and wave height of 10 m. To evaluate the uranium collection of the adsorbent stacks, the collection system was placed in the Pacific Ocean 7 km offshore from Mutsu-Sekine in Aomori prefecture, Japan. The sea depth at this site was approximately 40 m.
Fig. 3: Adsorbent stack composed of adsorbent fabrics and spacer nets.
Fig. 4: Uranium collection system for adsorbent stacks.
The uranium collection experiment was performed from 1999 to 2001. The adsorption beds were lifted out of the seawater by a crane ship about every 20-40 days. Adsorbed uranium was fractionally eluted from the adsorbent fabric with 0.5 M hydrochloric acid. The amount of uranium eluted from the adsorbent fabrics is summarized in Table 2, together with the seawater temperature. The total amount of uranium collected in this demonstration reached roughly one kilogram in terms of yellow cake. The average ability of the adsorbent was 0.5 g-U/kg-ad for 30 days' soaking. The uranium adsorption was correlated with the temperature of the seawater and with the wave height.[10] This is because the warming of seawater enhances the chemical adsorption of uranium on the adsorbent, and the motion of the waves was transferred to the adsorption beds through the hanging ropes, so that the up-and-down motion of the adsorption cage realizes effective contact between seawater and adsorbent.
10. N. Seko, A. Katakai, S. Hasegawa, M. Tamada, N. Kasai, H. Takada, T. Sugo, and K. Saito, (2003) "Aquaculture of uranium in seawater by fabric-adsorbent submerged system," Nuclear Technology, 144:274-278.
Table 2: Amount of uranium eluted from adsorbent stacks. Columns: submersion period; submersion days; seawater temperature range [°C]; number of stacks; adsorbed uranium [g].

1999, 29 Sep.-20 Oct.: 21 days; 19-21°C; 144 stacks; 66 g
2000, 8 Jun.-28 Jun.: 20 days; 12-13°C; 144 stacks; 47 g
2000, 28 Jun.-8 Aug.: 40 days; 13-22°C; 144 stacks; 66 g
2000, 8 Aug.-7 Sep.: 29 days; 20-24°C; 144 stacks; 101 g
2000, 7 Sep.-28 Sep.: 21 days; 24-22°C; 144 stacks; 76 g
2000, 28 Sep.-19 Oct.: 21 days; 20-18°C; 144 stacks; 77 g
2001, 15 Jun.-17 Jul.: 32 days; 13-18°C; 216 stacks; 95 g
2001, 18 Jul.-20 Aug.: 32 days; 18-20°C; 216 stacks; 119 g
2001, 15 Jun.-20 Aug.: 65 days; 13-20°C; 72 stacks; 48 g
2001, 20 Aug.-21 Sep.: 31 days; 20-19°C; 216 stacks; 118 g
2001, 18 Jul.-21 Sep.: 63 days; 18-19°C; 144 stacks; 150 g
2001, 15 Jun.-21 Sep.: 96 days; 13-19°C; 72 stacks; 120 g
Total adsorbed uranium: 1083 g
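As a quick check on Table 2, the sketch below sums the uranium eluted in the individual submersions. Reproducing the quoted average of 0.5 g-U/kg-ad would also require the adsorbent mass per stack, which is not stated explicitly here, so only the total is verified.

```python
# Sum the uranium eluted in the individual submersions listed in Table 2.
adsorbed_uranium_g = [66, 47, 66, 101, 76, 77, 95, 119, 48, 118, 150, 120]
total_g = sum(adsorbed_uranium_g)
print(f"Total uranium collected: {total_g} g (~{total_g / 1000:.1f} kg)")
# -> 1083 g, i.e. roughly the 1 kg (as yellow cake) reported in the text.
```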
IMPROVEMENT OF ADSORBENT FOR COST REDUCTION
To reduce the collection cost, the most expensive parts of the collection system using adsorbent stacks were analyzed. It was found that if the floating frame and the adsorption beds could be eliminated, about 40% of the total cost would be saved. For this reason, a new braid-type adsorbent was developed.[11] When these braid adsorbents are connected to an anchor, they can stand like seaweed on the sea floor, as shown in Figure 5, without the costly floating frame and adsorption beds.[12]
11. N. Seko, M. Tamada, N. Kasai, F. Yoshii, and T. Shimizu, (2002) "Synthesis and evaluation of long braid adsorbent for recovery of uranium from seawater," Proceedings of Civil Engineering in the Ocean, 18:737-742.
12. T. Shimizu and M. Tamada, (2004) "Practical scale system for uranium recovery from seawater using braid type adsorbent," Proceedings of Civil Engineering in the Ocean, 20:617-622.
Fig. 5: Image of collection system for braid adsorbent.
A braid adsorbent of any desired length can be produced by braiding the uranium adsorbent fiber around a porous polypropylene float, 2 cm in diameter. The suitable length of adsorbent fiber surrounding the float was 10 cm. The adsorbent fiber itself was produced by changing the trunk polymer in the grafting process from the nonwoven fabric to polyethylene fiber.
Fig. 6: Recovery of braid adsorbent in marine experiment at Okinawa.
The uranium adsorption of a braid adsorbent, 60 m long, was evaluated in the sea off Okinawa, Japan. After the braid adsorbent was thrown into the sea, it immediately stood upright on the sea bottom. For collection, it was cut off from the anchor by wireless operation; the braid adsorbent that then appeared on the sea surface could be recovered by fishing boat, as shown in Figure 6. Figure 7 shows that the average ability of the adsorbent reached 1.5 g-U/kg-ad for 30 days' soaking. The temperature of the seawater at Okinawa was 30°C, about 10°C higher than that of the Mutsu area, and a rise of 10 degrees in seawater temperature enhances the uranium adsorption of the nonwoven fabric adsorbent by a factor of about 1.5. Overall, the braid-type adsorbent showed an ability three times that of the adsorbent stacks; therefore, the braid-type adsorbent had about two times higher adsorption
ability for uranium in seawater than the stacks of nonwoven fabric adsorbent, owing to the better contact between seawater and adsorbent.
[Figure 7 plots uranium adsorption (g-U/kg-ad, up to 2.0) against soaking time (10-40 days) for the braid adsorbent (at 30°C) and the adsorbent stacks.]
Fig. 7: Uranium adsorption of braid adsorbent and adsorbent stacks.
COST ESTIMATION
The uranium collection cost, including the processes of adsorbent production, uranium collection, and purification, was estimated for the braid adsorbent system at a scale of 1,200 t-U/y.[13] Figure 8 shows the collection cost of uranium as a function of the number of times the adsorbent is reused, for adsorption abilities of 1.5 and 2 g-U/kg-ad for 30 days' soaking (1.5 g-U/kg-ad being the average value obtained for the braid adsorbent in the marine experiment). In this case, the braid adsorbents would need to be set over roughly 1,000 km2 of sea in which the seawater temperature is sufficiently high. The resulting cost, based on the adsorbent durability observed in the laboratory-scale experiments, is 32,000 yen/kg-U. When the adsorbent can be reused 18 times, the collection cost falls to 25,000 yen/kg-U, which is equivalent to $96/lb-U3O8.
[Figure 8 plots collection cost (thousand yen/kg-U) against the number of times the adsorbent is reused (up to about 70 times), for adsorption abilities of 1.5 and 2 g-U/kg-ad.]
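As a rough check on the quoted equivalence between 25,000 yen/kg-U and about $96/lb-U3O8, the sketch below converts a cost in yen per kilogram of uranium into dollars per pound of U3O8. The exchange rate of about 100 yen per dollar is an assumption representative of 2009, not a figure taken from the paper.

```python
# Convert a collection cost in yen per kg of uranium into $/lb of U3O8.
# Assumptions: ~100 yen/$ (typical of 2009); U3O8 is ~84.8% uranium by mass.

YEN_PER_USD = 100.0            # assumed 2009-era exchange rate
KG_PER_LB = 0.4536
U_MASS_FRACTION_U3O8 = 0.848   # 3*238 / (3*238 + 8*16)

def yen_per_kg_u_to_usd_per_lb_u3o8(yen_per_kg_u: float) -> float:
    usd_per_kg_u = yen_per_kg_u / YEN_PER_USD
    usd_per_kg_u3o8 = usd_per_kg_u * U_MASS_FRACTION_U3O8
    return usd_per_kg_u3o8 * KG_PER_LB

print(f"${yen_per_kg_u_to_usd_per_lb_u3o8(25000):.0f}/lb-U3O8")  # -> ~$96
print(f"${yen_per_kg_u_to_usd_per_lb_u3o8(32000):.0f}/lb-U3O8")  # -> ~$123
```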
Fig. 8: Collection cost of uranium in seawater using braid adsorbent.
CONCLUSIONS
Radiation-induced graft polymerization could synthesize an amidoxime adsorbent that has sufficient mechanical strength for direct mooring in the sea and an adsorption performance for uranium in seawater 15 times higher than that of the earlier hydrous titanium oxide. The marine experiment with stacks of amidoxime adsorbent fabrics demonstrated that 1 kg of uranium could be collected as yellow cake. The braid adsorbent was developed to eliminate the cost of the expensive floating frame and adsorption beds that are necessary for mooring the adsorbent stacks. In addition, the adsorption ability rose to 1.5 g-U/kg-ad for 30 days' soaking, three times the value for the adsorbent stacks. The cost of uranium collection was calculated for the braid adsorbent system. The expected collection cost is 25,000 yen/kg-U, which is roughly twice the weekly spot price of $48/lb-U3O8 in August 2009. As future work, extensive research should be carried out to clarify the number of times the adsorbent can be reused in uranium adsorption/elution and to dramatically improve the adsorbent ability.
13. M. Tamada, N. Seko, N. Kasai and T. Shimizu (2006), "Cost estimation of uranium recovery from seawater with system of braid type adsorbent," Transactions of the Atomic Energy Society of Japan, 5:358-363.
AN EXPLANATION OF OIL PEAKING

ROGER W. BENTLEY
Department of Cybernetics, The University of Reading, Reading, UK

INTRODUCTION

The resource-limited peak of conventional oil production is not an obvious phenomenon, and many analysts do not understand it. This report sets out to explain the concept, and to give reasons why it is poorly understood.

It is perhaps natural to think that if a region contains a large amount of oil that can be extracted at relatively low cost, and only a fairly small proportion of this will be used over the period of a production forecast, then the forecast need not consider resource availability. This is usually expressed as "There is plenty of oil to meet the foreseeable demand". Examples of this view, from the IEA, the British government, oil companies, and academic institutions are given below in Section 6. Early in a region's oil development such a view can be correct. But once oil discovery is fairly mature this view is generally wrong. Section 2 sets out why this is the case. Section 3 takes the UK as a specific example, and Section 4 presents similar analysis for other regions, and for the world as a whole. Section 5 then explains why a production peak is probably also expected for 'all-liquids', although not in this case a resource-limited peak. Section 6, as already mentioned, lists some forecasts that have ignored peaking, and gives reasons for this. Section 7 presents conclusions.

THE MECHANISM OF THE CONVENTIONAL OIL PEAK

At the outset it is important to recognise that the world can potentially access very large quantities of oil. This includes not only conventional oil, but also heavy oils, oil from tar sands and oil shales, natural gas liquids, and the conversion of gas or coal to oil. The IEA recently estimated the long-term potentially recoverable resource base of all oils to be nearly 10 trillion barrels. In addition, oil can come from biofuels; and mankind can substitute away from oil, for example by gas or electrically-powered vehicles. Given that the world has used just over 1 trillion barrels of oil to date, and the forecast amount required for the next 30 years is also around 1 trillion barrels, there would seem to be little risk of an imminent supply constraint.

To understand why there is indeed a concern we need to look at the production of conventional oil. We define the latter as the fairly easily flowing oil that can be produced by primary or secondary extraction methods (including own-pressure, physical lift, water flood, and pressure maintenance from water or natural gas injection); as well as that already recovered, or scheduled to be recovered, by tertiary extraction (such as steam heating, nitrogen or CO2 injection, or miscible flood). On this definition over 85% of all oil produced today is conventional oil. The question that then lies at the heart of the peaking argument is: How is the production of conventional oil in a region constrained over time?
This turns out to be a rather complex question, so we approach the answer in steps. Let us start with a simplified view of oil production in a region, as given by Figure 1a.
Fig. 1a: A simplified model of why production in a region goes over peak. (Vertical axis: annual production; horizontal axis: years.)
Here each triangle represents the production from a single field. As can be seen, it is assumed that production from each field starts in succeeding years; and that each field is smaller than the preceding one, in this case 90% of the size. From these simple assumptions two perhaps surprising properties emerge:

• Production reaches a peak when about one-third of the total oil indicated on the plot has been produced.
• The peak is resource-limited, driven by the amount of oil in the large early fields. As the Figure indicates (but by all means create your own spreadsheet to verify; a short sketch of such a calculation follows this list), the smaller later fields, no matter how numerous or how much oil they contain in total, do not affect peak; they just fatten and extend the tail.
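As a concrete illustration, the following short Python sketch implements the simple model just described, using the assumptions spelled out later in the caption to Figure 2a: each field's output ramps up over one year to its peak rate and then declines linearly to zero over 21 further years, the first field holds 100 units, and each later field is 90% the size of its predecessor. The numbers and names are illustrative only, not taken from any real region.

```python
# Simple model of a regional production peak: successive, shrinking fields,
# each with a triangular production profile (illustrative sketch only).

def field_profile(start_year, size, ramp=1, decline=21):
    """Annual production rates for one field: a linear ramp-up over `ramp`
    years to the peak rate, then a linear decline to zero over `decline`
    years. The area under the triangle equals `size`."""
    peak_rate = 2.0 * size / (ramp + decline)
    rates = {}
    for t in range(start_year, start_year + ramp + decline + 1):
        if t <= start_year + ramp:
            frac = (t - start_year) / ramp
        else:
            frac = (start_year + ramp + decline - t) / decline
        rates[t] = peak_rate * frac
    return rates

n_fields, first_size, ratio = 41, 100.0, 0.9   # each field 90% of the last
regional = {}                                  # year -> total annual output
for i in range(n_fields):
    for year, rate in field_profile(i + 1, first_size * ratio**i).items():
        regional[year] = regional.get(year, 0.0) + rate

peak_year = max(regional, key=regional.get)
total_oil = first_size * (1 - ratio**n_fields) / (1 - ratio)
cum_at_peak = sum(r for y, r in regional.items() if y <= peak_year)

print(f"Regional peak in year {peak_year}")    # peaks around year 12 here
print(f"Cumulative output at peak: {cum_at_peak/total_oil:.0%} of the total")
```

Running this reproduces the behaviour described above: combined output peaks around year 12, with only about a third of the total oil produced by then; adding further small fields merely lengthens the tail.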
Crucially, although the peak looks fairly obvious in this diagram, it is completely counter-intuitive when looked at with the forecasting tools of most analysts. How so?

Here, if the first field starts production in year-1, then the region reaches peak in year-12. If one draws a line on the graph at year-10, what would most analysts see at that date?

• That production has risen rapidly in the past, and is still trending upward (see Figure 1b).
• There are large quantities of remaining reserves in the fields already in production. (Assume that fields are discovered 5 years before getting into production. Then the reserves at year-10 are as shown in Figure 1c, where field 1 still has nearly half its original reserves; fields 2 to 9 considerably more; field 10 is only just coming into production; and reserves have also been assessed for fields 11 to 15.)
• These reserves are mostly low-cost. Much is in fields already in production, where the incremental cost of production is low; and reserves in the fields not yet in production are in a region where the geology is understood and some infrastructure is already in place.
• Discovery is continuing (the smaller later fields shown on Figure 1c have not been found by year-10).
• Technology is moving on apace, so recovery factors in the existing fields are increasing.
Fig. 1b: The view at year-10: the production trend. (Panel title: "What to forecast at Year-10? (1) Production has been rising rapidly".)
Fig. 1c: The view at year-10: reserves, new fields, and technology. (Panel notes: "(2) Reserves are large (World: 40 years' of reserves); (3) New fields are being discovered (e.g., Tupi); (4) Technology is increasing recovery factors (e.g., 4-D seismic)".)
The above point is so important that it is worth re-iterating. If you are a forecaster and look only at:

• past production,
• reserves (or even the expected total recoverable resource),
• the current rate of discoveries, and
• the march of technology;

and meet the oil required for your forecast out of the total oil available that seems reasonable on the above data, then you will get caught out by the peak. Many analysts, at the equivalent of 'year-10', have naively predicted production increases throughout their forecast period.

Shortly we will look at the main factor that indicates if peak is expected. But first we ask how realistic is the above simple model: are we concluding too much from an over-simplification? The first thing to note is that the shape of the production curve in Figure 1a is a surprisingly good approximation of reality. Based on U.S. experience of regions and individual States going over peak, this was the shape that Hubbert drew in his early paper predicting U.S. peak, and which he later depicted in an interview on film. Many sizeable regions of the world are now past their conventional oil resource-limited peak (over 60 countries, and many on-shore and offshore regions within these countries), so there is plenty of evidence to show that the production profile in Figure 1a is indeed typical; see
the examples later in this report. (This is true of course only for regions where production has not been significantly constrained by other factors; OPEC quotas being a case in point.)

Although the production profile depicted in Figure 1a is realistic, we need to look in detail at the assumptions behind the model to elucidate the mechanism that drives peaking.

First we examine the shape of production profile assumed for individual fields. Is it important that they are roughly triangular? Real fields display a great variety of profiles, but, at least in more recent times, and for fields not excessively constrained by pipeline or FPSO capacity, a quick rise to full production, a fairly short plateau, and a long decline is typical. However, the model is fairly robust to the individual field profiles. M. Smith of Energyfiles Ltd., for example, has modelled the addition over time of fields with a fairly complex profile and finds a similar regional peak; while if one takes the extreme case where all fields have essentially constant-flow rectangular profiles, the regional peak is again clear, albeit sharper and later than in Figure 1a. The conclusion is that for all realistic field profiles, the regional peak from combining these fields occurs before or near a region's 'half-way' production point.

The second aspect of the model is the pair of assumptions that fields come on-stream in regular succession, and that each field is 90% the size of the preceding one. The 90% ratio used here is based very roughly on that for UK North Sea fields, but it turns out that the model is remarkably insensitive to both the field size distribution and the rate at which fields come on-stream, as long as the key feature is maintained that the smaller fields mostly come on-stream later. The latter is generally true (see the examples below) because the larger fields in a region are mostly easier to find, and because the economics of production helps ensure that large finds are put into production before smaller ones.

Note that these assumptions reflect a complex intertwining of geology, knowledge, engineering and economics. The rate that fields are discovered in a region, and then brought on-stream, is affected by how fast geological knowledge of the region can be built up; the basic geology of how easy the big fields are to find compared to the smaller ones; and the economics that determines both the initial search effort, and the rate that new fields are brought into production. It is always possible, for example, for a surge of small fields to be brought on-stream rapidly, at least for a short while, as happened with UK production in 1999 when the oil price fell to $10/bbl. But analysts need to point to very special circumstances for the general features of the above model not to be valid.

So what really drives the peak? It is the decline in discovery. If many new fields are being discovered that contain significant quantities of oil, then the added production of these fields can offset the decline from earlier fields. The resource-limited production peak only occurs once discovery in a region is well into decline. To know if the production peak is near it is therefore useful (though not essential) to see the discovery trend. Let us re-visit Figure 1a, and now add discovery. As before, it is assumed that discovery of each field takes place 5 years before its production starts, roughly in line with UK North Sea experience. This leads to Figure 2a, which shows both discovery and production data.
Fig. 2a: The simplified model of Figure 1a, showing both discovery by field, and each field's subsequent production, assuming that fields take 5 years from discovery to production. The plot is to scale, such that, for example, the volume of oil shown as discovered for field-1 (leftmost grey bar, 100 units) is the same as indicated for field-1 production (the lowermost production triangle, which starts in year-1, reaches 9.09 units/yr. in year-2, and falls to zero by year-23).
This is a very telling plot, and explains oil peaking. But for analysis purposes (to be able to predict peak, and also to be sure that no later peak will arrive) the data are best presented on a cumulative plot such as Figure 2b.
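A cumulative version of the same toy model can be tabulated directly. The sketch below uses the same illustrative assumptions as the earlier one (100-unit first field, 90% size ratio, one-year ramp, 21-year decline, discovery 5 years before first production); it simply accumulates discovery and production and reports where the production peak falls relative to the ultimate and to cumulative discovery.

```python
# Cumulative discovery vs. cumulative production for the toy model
# (illustrative assumptions only: 90% field-size ratio, 5-year lag
#  between discovery and first production, triangular field profiles).

n_fields, first_size, ratio, lag = 41, 100.0, 0.9, 5

def annual_rate(field, year):
    """Production rate of field `field` (0-based) in calendar `year`."""
    start, size = field + 1, first_size * ratio**field
    peak_rate, ramp, decline = 2.0 * size / 22, 1, 21
    if start <= year <= start + ramp:
        return peak_rate * (year - start) / ramp
    if start + ramp < year <= start + ramp + decline:
        return peak_rate * (start + ramp + decline - year) / decline
    return 0.0

production = {y: sum(annual_rate(f, y) for f in range(n_fields))
              for y in range(1 - lag, n_fields + 23)}
peak_year = max(production, key=production.get)

cum_prod = sum(r for y, r in production.items() if y <= peak_year)
# Field f is discovered `lag` years before its production starts in year f+1.
cum_disc = sum(first_size * ratio**f for f in range(n_fields)
               if (f + 1) - lag <= peak_year)
ultimate = first_size * (1 - ratio**n_fields) / (1 - ratio)

print(f"Peak in year {peak_year}: "
      f"{cum_prod/ultimate:.0%} of the ultimate produced, "
      f"{cum_prod/cum_disc:.0%} of cumulative discovery")
```

The result lands close to the figures annotated on Figure 2b: a peak at roughly 36% of the ultimate and a little under half of cumulative discovery.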
Fig. 2b: The same data as Figure 2a, but on a cumulative basis: discovery and production. The resource-limited peak in production (at year-12) is shown by the square. (Plot annotations: URR ~ 1000 units; peak at 36% of URR, at 44% of cumulative discovery.)
As can be seen, by the time production of the first field starts in year-1, about 50% of the final discovery has occurred. By the time production peaks in year-12 the discovery curve has turned well towards its asymptote. In the real world (see the examples below) the discovery asymptote is usually clear well before peak has occurred. Of course basins, and even more so larger regions, can be complex; and new plays can open up. In such cases the discovery trend for a region can display 'multiple asymptotes', and it takes geological knowledge to judge when overall discovery in the region is drawing to a close. This aspect is brought out in the example regions reported next.

Summarising this section we can say:

1. The resource-limited conventional oil production peak in a region is caused by adding the output of successive fields, where the later fields are generally smaller than the earlier. This reflects the fact that the size distribution of fields in most areas is very skewed, with most of the oil being held in a relatively small number of large fields that tend to get found first.
2. The peak occurs once discovery has declined significantly; and indicates the point at which reduced output from the early fields is no longer compensated by increased production from the later. The typical shape of the regional production curve is driven by the profile of decline in individual fields, primarily from field pressure loss, drive fluid breakthrough, or both. In the case of this simple model, the peak of discovery is 16 years before the production peak.
3. The production peak is counter-intuitive. This is because it occurs when: production has been trending steadily upward; remaining reserves are large, and generally low-cost; discovery is continuing; technology is improving, and recovery factors are increasing.
4. If a region sees significant separate phases of discovery, such as on-shore followed by offshore, then production may also show a number of resource-limited peaks, each reflecting a different discovery phase.

PEAKING OF CONVENTIONAL OIL - THE EXAMPLE OF THE UK

Now we turn from a theoretical model to actual cases where the peak has occurred. Figure 3 shows the UK production of liquids (oil plus NGLs) from 1960 to 2000. In terms of peak, there was a peak in the mid 1980s, and perhaps a hint of one in 1999. In terms of proved reserves, these were of absolutely no help in identifying the peak, having remained unchanged at about 5 billion barrels (Gb) since the mid-1980s.
Fig. 3: UK liquids (oil plus NGLs) production, 1960-2000. (Source: BP Statistical Review.)
So to see what was happening one needs to look at the history of discovery, using the 'proved plus probable' ('2P') discovery data, rather than the much lower volumes reported by the proved ('1P') reserves. The '2P' discovery data are shown in Figure 4.
Fig. 4: UK proved plus probable ('2P') oil discovery (bars), and production (line). (Source: Energyfiles Ltd.)
As can be seen, discovery in the UK mirrors that given in the simple model of Section 2, with the bulk of UK discovery (by volume) taking place before offshore production started. Although discovery here is not broken out by field, the pattern was that once a small initial field had been discovered in 1969, nearly all the very large fields were discovered fairly rapidly thereafter. By comparing the volume discovered with the volume produced, the Figure indicates clearly that the 1984 peak was not resource-limited, but the 1999 peak was.

In the UK's case the trough between these two peaks was caused mainly by the safety work carried out on all fields following the Piper-Alpha disaster. (Lesser factors included the 2-year work-over on Brent due to high gas production, a fall in oil prices, hinted changes in petroleum revenue tax that may have delayed the start-up of new fields, and, as Laherrere notes, a secondary peak in discovery in the late 1980s.) Had the Piper-Alpha disaster not occurred, the UK production profile would have been much as indicated by Figure 1.

Figure 5 shows the graph of production by field, and allows the simplification of Figure 1 to be compared to this real case. As can be seen, the explanation that peak is caused by the larger fields mostly getting into production first is clearly borne out.
Fig. 5: UK production by field. The larger fields mostly get into production first, and the majority have the general profile exhibited by Forties (the large field at the bottom of the graph). Source: Ludwig-Bölkow-Systemtechnik (LBST), 2007. Original production data from UK DTI (May 2007); forecast from LBST.
Finally, in terms of graphs, Figure 6 compares to the simple model's Figure 2b by plotting the UK's '2P' discovery data, and the production data, on a cumulative basis. As can be seen, by the time the 1999 peak occurred, discovery had tended well towards an asymptote.
Fig. 6: UK '2P' oil discovery and production, displayed on a cumulative basis. Source: Energyfiles Ltd. Also shown are four estimates for the UK's conventional oil 'ultimate'. The UK Department of Energy's estimate ('DoE') is from 1974; the others are more recent. The Campbell/Uppsala and USGS year-2000 estimates exclude NGLs (these add ~4.5 Gb); the USGS also excludes UK West of Shetlands basins.
Now we come to an important point. We have indicated that the 1999 peak is resource-limited, and clearly this is the case based on the oil already discovered (see Figure 4). But how do we know this will remain true in future? Perhaps the UK has big new plays waiting in the wings that in time will yield much greater quantities of oil, enough to surpass the 1999 peak. As has been mentioned, the situation often occurs where historical discovery data (the 'creaming curve' vs. time) indicate an apparent asymptote, but where this increases as a new play enters the scene. So what was known to indicate that the UK's 1999 peak was indeed resource-limited; unlike, therefore, the 1984 peak?

Knowledge of peak cannot be based solely on discovery data; it must also include geological appraisal. The latter will always be a judgement, and can never be known with absolute certainty. But a great deal of geological knowledge now exists for much of the world's likely oil plays. In the UK's case there are still several significant future potential sources of oil. There may be quite large quantities of oil undiscovered in subtle stratigraphic traps; there is new potential in the deeper Atlantic; and there are certainly large amounts of oil in-place currently deemed unrecoverable. But geological and reservoir knowledge says it is virtually certain that none of this oil, if it exists, can be developed rapidly enough to push UK production back up past the 1999 peak. The subtle traps, if they hold significant amounts of oil, will need highly calibrated seismic to find, so will not be found rapidly; the deeper Atlantic will offer surprises but is not thought especially prospective due to poor source rock and traps; while the many routes to improved recovery in existing fields have already seen much trial and analysis. Overall, combining the UK's 2P discovery data with geological knowledge indicates that the country's conventional oil peak in 1999 was indeed resource-limited.

Figure 6 brings out this point by including four estimates of the UK's ultimately recoverable resource ('ultimate'). The earliest is a UK government DoE 'Brown Book' estimate made back in 1974, and the more recent are from Campbell, the USGS, and Energyfiles. These 'ultimates' are in close agreement with each other, and with the asymptote of the '2P' discovery creaming curve. (As already indicated, the reason that the UK Department of Energy estimate made in 1974 for the UK 'ultimate' could be so accurate, before UK offshore production had even started, was that by 1974 most of the big fields had already been discovered.)

An important question, therefore, is why did the 1999 peak, and perhaps more so the very steep subsequent decline in production, come as such a surprise to the UK government? It should not have done so. Using the 1974 estimate of ultimate, and plotting a simple 'mid-point' isosceles triangle based on the initial production trend, certainly finds peak at around the right date; a fact reported at the time (and see below, Figures 7b and 7c). But 'mid-point' peaking got forgotten (and not just in the UK, as we shall see), and a deep myth developed based on the behaviour of proved reserves.

Table 1: UK Data on Reserves

A: PROVED RESERVES ('1P')
Year   Gb      Year   Gb
1975   16.0    1991   4.0
1976   16.8    1992   4.1
1977   19.0    1993   4.6
1978   16.0    1994   4.5
1979   15.4    1995   4.3
1980   14.8    1996   4.5
1981   14.8    1997   5.0
1982   13.9    1998   5.2
1983   13.2    1999   5.2
1984   13.6    2000   5.0
1985   13.0    2001   4.9
1986    5.3    2002   4.7
1987    5.2    2003   4.5
1988    4.3    2004   4.5
1989    3.8    2005   4.0
1990    3.8    2006   3.6
               2007   3.6
(Source: BP Statistical Review, various dates.)

B: PROVED PLUS PROBABLE RESERVES ('2P')
Source   Year   Gb
USGS     1996   9.7
C/U      2005   9.3
Note: C/U = Campbell / University of Uppsala.
As the table shows, the UK's proved reserves from 1975 to 1985 were in the region of 15 Gb; but then dropped in 1986 to about 5 Gb, and stayed close to this figure until very recently. Of course, all that changed in 1986 was the basis of reporting. Proved plus probable (2P) reserves are currently about twice the proved value. (The full reason that the UK's proved reserves have been so much below the 2P reserves still needs elucidating. It almost certainly reflects, in part, reserves reporting by oil companies under U.S. Securities & Exchange Commission rules; but probably also the non-inclusion of reserves of discovered fields until sanctioned for development.)

The long period of static values for UK proved reserves, staying at the equivalent of roughly 5 years' supply, would not matter except that it fooled many analysts into thinking that something special was going on. Year after year oil was being produced, but the proved reserves were not falling. This replacement of the reserves was thus very widely ascribed, including within the oil industry, the UK government and the IEA, as being primarily due to improvements in technology; horizontal drilling and 4-D seismic being frequently cited. The real explanation was that as the proved reserves were produced, reserves in the probable category became classed as proved.

But why did analysts not see this for what it was? The reason lies in the usual definition of proved reserves: "...those quantities that geological and engineering information indicate with reasonable certainty can be recovered in future under existing economic and operating conditions." Most analysts then, and still today, treat proved reserves as a fairly accurate measure of the amount of oil likely to be available. The simple reality, that the quantities of oil likely to be recovered under existing economic and operating conditions are generally much larger than the proved reserves, was not recognised; and all too often is still not recognised today.

Figure 7a, though a little complex to read, sets all this out plainly. It shows UK data on cumulative production, 1P and 2P reserves; and, importantly, shows the estimates made at the time of the total amount of oil likely to be recovered in UK waters (the 'ultimate'). The data are taken from issues of the UK's Brown Book for the years indicated.
Fig. 7a: UK oil data taken from the UK government's Brown Books at the dates indicated: cumulative production; proved ('1P'), '2P' and '3P' reserves; and the low, average and high estimates of the 'ultimate', together with the Campbell and USGS mean estimates. See text for discussion.
Look initially at the data for 1974. Offshore production had not started, so the cumulative production was essentially zero. Proved reserves from fields discovered at that date (see Figure 4) were being reported under the original rules, so were fairly significant; on top of which were probable reserves, and then possible. At the same date the government gave a single estimate for the likely total recoverable, which stood at 4,500 million tonnes (33 Gb).

By 1977 more fields had been discovered, so the 'old-basis' proved reserves had grown significantly, and likewise the 2P and the 'proved plus probable plus possible' ('3P') reserves. By this point the government gave a range for the ultimate, and the plot shows the low value, the high value, and the average. As can be seen, the average (red dot) had fallen a bit from the 1974 estimate. By 1987 the lower rule for reporting the proved reserves had been adopted, but despite this the average value for the ultimate (at about 30 Gb) was little changed from the 1974 estimate. In the subsequent years the average ultimate value grew somewhat, to just over 40 Gb. But as the numbers at the right of the plot indicate (and see Figure 6), estimates by others cluster between 29 and 34 Gb; very close to the original 1974 estimate of 33 Gb. Thus the original information on the likely date of peak, based on a 33 Gb ultimate, looks entirely sensible; with the date of maximum shifting a little due to Piper-Alpha.

Perhaps the most striking lesson from this plot is how easy it is, with the information displayed, to make a reasonable guess at the date of peak. This is illustrated in Figures 7b and 7c. Figure 7b reverts to the simple model, and shows how a simple isosceles triangle of area equal to the apparent asymptote of discovery (here, close to 1000 units of oil) gives a pretty good indication of the date of peak (though naturally it overstates the production at peak, as the triangle does not reflect the region's long decline curve).
Fig. 7b: Using an isosceles triangle and the simple model of Figures 1a to 1c to predict the date of a region's resource-limited production peak. (The areas below all three curves are approximately 1000 units.)
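In code, the isosceles-triangle estimate amounts to a single square root: if early annual production grows roughly linearly at a rate s (units per year per year), the rising side of an isosceles triangle of area U peaks after about sqrt(U/s) years, since the triangle's area is s·T² when the peak is at time T. The companion 'mid-point' rule is a simple cumulative test. The sketch below applies both to illustrative numbers loosely based on the toy model of Section 2 (an ultimate of about 1000 units and an early trend of roughly 4.5 units/yr per year); it is a sketch of the idea, not an analysis of real UK data.

```python
import math

def isosceles_peak_year(ultimate, ramp_rate):
    """Isosceles-triangle estimate: production rises at `ramp_rate`
    (units/yr per year) to a peak, then falls symmetrically; the total
    area equals `ultimate`, so the peak comes sqrt(ultimate/ramp_rate)
    years after production starts."""
    return math.sqrt(ultimate / ramp_rate)

def midpoint_peak_year(annual_production, ultimate):
    """Mid-point rule: first year in which cumulative production
    reaches half of the estimated ultimate (None if not yet reached)."""
    cum = 0.0
    for year, rate in sorted(annual_production.items()):
        cum += rate
        if cum >= ultimate / 2.0:
            return year
    return None

# Illustrative numbers only (roughly the toy model of Section 2).
print(isosceles_peak_year(ultimate=1000.0, ramp_rate=4.5))   # ~15 years

# Hypothetical early production series (units/yr), for the mid-point rule:
toy_series = {y: min(4.5 * y, 45.0) for y in range(1, 40)}
print(midpoint_peak_year(toy_series, ultimate=1000.0))
```

Applied to the UK with the 1974 estimate of the ultimate (4,600 Mt), the mid-point rule gives an expected peak once cumulative production reaches 2,300 Mt, as discussed in the text below.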
Fig. 7c: Using the 'isosceles triangle approach' to predict the UK's production peak. (Plot shows actual UK production; the isosceles triangle for the UK DoE 1974 estimate of URR (4.6 Gt), giving peak at 1992 (dotted); and the estimated smoothed production for the same URR had the Piper-Alpha disaster not occurred.)
Figure 7c shows the same procedure applied to estimate the date of the UK peak; it indicates both actual UK production, and an estimate of what it might have been had Piper-Alpha not occurred. An alternative, and more precise, analysis results if the original 1974 estimate for the UK's ultimate of 4,600 Mt is used in combination with the standard 'mid-point peaking' rule. On this basis the UK's resource-limited peak would be expected when the cumulative production reached 2,300 Mt. This was not in 1984 (when cumulative production had reached only about 700 Mt), but occurred at about 1998 or 1999. Given the general straightness of the cumulative production line, despite the trough from 1985 to 1995, this date could be (and was) predicted with reasonable precision from the first years of production. Piece of cake, really.

It is reasonable to ask at this point: Where does economics come in? Economic factors are important, of course. A higher oil price encourages exploration, brings on economically marginal fields, permits more expensive recovery, and reduces demand. But in a country well past its discovery peak the effects are fairly small. More exploration just moves the country further along the long-declining discovery trend; the economically marginal fields are known, and are often small or difficult; and the more expensive recovery techniques can be identified and their impacts calculated. In general, though each country needs specific analysis, the ability of a higher oil price to significantly impact the geologically-based estimates of ultimate is usually fairly limited.

However, having just said how easy the topic is, before we leave the UK data we will examine one of the uncertainties that do remain: that of evaluating the impact of reserves growth on a region's date of peak. 'Reserves growth' as used here, and generally, means the increase over time in the size of fields already discovered; i.e., for a region, it sums the growth of the original recoverable reserves (the 'field ultimates') of the individual fields. With the global volume-weighted average recovery factor of perhaps 35%, the scope for reserves growth in fields is large. Some modellers of future global production assume reserves growth as zero, effectively holding that the field 'ultimates' in the industry '2P' datasets are pretty accurate; while other modellers assume extraordinarily high numbers for reserves growth. So it is important to gather what data we can.

Firstly, of course, we must rule out the simple apparent reserves growth that occurs as a field's 1P data get updated over time to finally equal the true 2P (i.e., the most probable) value. Odell, for example, reported nine-fold growth for Western Canadian oil fields; while U.S. fields exhibit typically a six-fold growth in size if on-shore, and about three-fold for offshore. These sorts of percentage growths (i.e., up to 900%) are almost undoubtedly mainly due to moving from 1P to 2P numbers; with a physical explanation, at least for large fields, often being simply the 'drilling-up' of fields. Early in the life of a large field only a relatively small number of production wells are sunk, and under SEC rules only the oil judged in 'direct communication' with these wells can be classed as reserves. Over time an increasing number of production wells get drilled, increasing the area 'in communication', and thus raising reserves.
Other large increases have famously occurred in large old heavy oil fields, where reserves increases from improvements in recovery technology have naturally been significant. The general rule on reserves growth, therefore, is to be very cautious about accepting data at face value.
The data we seek, by contrast, are reasonably current data on 'real' (technology- or knowledge-driven) gains over time in field 2P values. Figure 8 shows such data for the UK.
Fig. 8: Reserves growth for UK oil fields; '2P' data. Data from R. Miller of BP. Upper graph: UK large fields, showing the change in industry data for 'proved plus probable' (2P) reserves with time after first declaration. The Beryl field seems to be anomalous between years 18 and 22, but the trend of the data is clear: after 25 years, reserves for large fields had grown by some 50% on average. Lower graph: UK small fields. The data are probably statistically unreliable by 25 years, as few small fields have yet operated so long. Interestingly there is no significant change in industry data for declared 2P reserves for 9 years, but then a steady growth sets in, reaching 25% after 25 years altogether. This might suggest a very good initial estimate of field size, with only statistical fluctuation of the mean. After some 10 years, further exploration effort (driven by approaching exhaustion?) has discovered a suite of satellite fields, stacked reservoirs and other deposits entirely excluded from the initial estimates. Miller noted that "It would be interesting to know whether the large fields (> 500 mmbbl recoverable) grew from the discovery of new pools."
As Figure 8 shows, field growth is very variable between fields, but averaged over time the large fields grew by about 50%, and the smaller fields by about 25%. These are significant increases, and should not be ignored in the modelling. But these values are less than one-tenth of the U.S. and Canadian '1P reserves becoming 2P' reserves growth values of 600% to 900% reported above.

And even with 2P reserves, a caution is needed. Campbell, with long experience in industry of field discovery, and of watching how the size of fields is reported over time, identifies a 'U-shaped reporting curve'. This starts with an original 'geological' value, kept internal to the company, which is based on an estimate of oil in-place, factored by an initial estimate of overall recovery factor. This is followed by the first published value, based on conservative engineering evaluation of the infrastructure likely to be initially committed. Then there is a slow reported growth in field size as subsequent investments are made in the field; with this growth often taking the field size back to close to the original 'geological' estimate. The evolution of the reported size of Prudhoe Bay, for example, has shown just this process, as confirmed by BP's Gilbert.

The main conclusions from this section on UK data are:
• The simple model of Section 2 captures much of what happens in reality, at least for the UK.
• The UK government forecast made in 1976, that the UK production peak would occur shortly before the year 2000, is easy to understand on the basis of the estimate for the UK's 'ultimate' and the 'mid-point peaking' rule.
• It was a pity that this comprehension of the mechanism of peaking got eroded over time, to be replaced by the widespread myth of very high levels of technology-driven reserves replacement becoming the favoured explanation of why the UK's 5 years' of proved reserves had lasted for over 20 years without diminution.
• Moderate levels of reserves growth do occur in 2P data however, at least as reported in industry datasets, and need to be accounted for (a small illustrative sketch of such an adjustment follows this list).
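As a minimal sketch of what 'accounting for reserves growth' might look like in a regional model, the function below scales each field's initially declared 2P ultimate by an assumed eventual growth factor that depends on field size. The 1.5 and 1.25 factors echo the roughly 50% and 25% growth over 25 years read from Figure 8; the field list itself is entirely hypothetical.

```python
# Illustrative adjustment of a region's ultimate for 2P reserves growth.
# Field sizes are hypothetical; growth factors (large fields ~+50%, small
# fields ~+25% over ~25 years) are loosely based on Figure 8.

LARGE_FIELD_THRESHOLD_MMBBL = 500.0
GROWTH_FACTOR = {"large": 1.5, "small": 1.25}

def adjusted_ultimate(declared_2p_mmbbl):
    """Sum of declared field 2P ultimates, each scaled by an assumed
    eventual growth factor chosen by field size class."""
    total = 0.0
    for size in declared_2p_mmbbl:
        cls = "large" if size > LARGE_FIELD_THRESHOLD_MMBBL else "small"
        total += size * GROWTH_FACTOR[cls]
    return total

fields = [1800.0, 950.0, 400.0, 120.0, 60.0]      # hypothetical 2P ultimates
print(f"Declared: {sum(fields):.0f} mmbbl, "
      f"grown: {adjusted_ultimate(fields):.0f} mmbbl")
```

The point of such an adjustment is only that it shifts the estimated ultimate, and hence the mid-point date, modestly; it does not reproduce the several-hundred-percent 'growth' seen when 1P numbers converge on 2P values.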
THE FUTURE OF GLOBAL OIL SUPPLY: UNDERSTANDING THE BUILDING BLOCKS

DR. PETER JACKSON
Senior Director, Cambridge Energy Research Associates, Cambridge, Massachusetts, USA
KEY IMPLICATIONS

The controversy surrounding future oil supply can be divided into two components: a determination of the factors that will drive the much-debated "peak" and, more importantly, a consideration of the consequences and actions required when oil supply no longer meets demand. IHS CERA sees a number of critical observations at the core of the analysis:

• Supply evolution through 2030 is not a question of reserve/resource availability.
• IHS CERA projects growth of productive capacity through 2030, with no peak evident.
• There are no unique answers: we are dealing with a complex, multicomponent system.
• Above-ground drivers (economics, costs, service sector capability, geopolitics, and investment) are crucial to future supply availability.
• Market dynamics will remain highly volatile.
• The upstream oil industry faces some major challenges.
CONTEXT

Fears about "running out" of oil coincide with periods of high prices and tight supply-demand balance. The latest such period of "peak oil" concerns became very visible from 2004, when strong oil demand ran up against capacity constraints. IHS CERA's reference case for global liquid productive capacity shows steady growth through 2030 to around 115 million barrels per day (mbd), and there is no evidence of a peak in supply appearing before that time. Hydrocarbon liquids (crude oil, condensate, extra heavy oil, and natural gas liquids) are a finite resource, but based on recent trends in exploration and appraisal activity there should be more than an adequate inventory of resources available to increase supply to meet anticipated levels of demand in this time frame. Post-2030 supply may well struggle to meet demand, but an undulating plateau rather than a dramatic peak will unfold.

In the short term, the industry is at another crossroads following the precipitous fall in demand in 2008-09 in response to the onset of the recession. The oil price has halved from its peak of $147 per barrel in July 2008, OPEC has cut nearly 4.2 mbd of production, OPEC spare capacity has nearly tripled to 6.5 mbd, and the industry has slowed its pace of expansion. Early in 2009, IHS CERA predicted that as much as 7.5 mbd of new productive capacity could be at risk by 2014 if costs remained high and oil
prices hovered just below the cost of the marginal barrel for two years.1 Since then the oil price has recovered strongly to around $70 per barrel and some confidence has returned. Even in these unpredictable times the industry has continued to invest and to build new productive capacity; indeed, Saudi Arabia recently brought onstream the giant Khurais field, which at plateau is expected to produce 1.2 mbd. With sustained investment, a healthy cushion of spare capacity, and slow to moderate post-recession economic growth, supply should not present major problems in the short term.

Of course, looking further ahead, it is important to recognize that oil is a finite resource and that, at some stage, supply will fail to meet demand on a consistent basis. It is impossible to be precise about the timing of this event, but given the pace at which demand has increased in the past decade, a pivot point may well be reached before the middle of this century. Much depends on key factors such as global economic growth, the capability of the upstream industry, costs, government policies on access and taxation, the evolution of renewable and alternative energy sources, and the effect of climate change issues on policies and regulations concerning the use of fossil fuels. However, there is time to prepare and to make rational decisions to avoid being forced into short-term approaches that may not resolve longer-term problems.

Many studies of future oil supply examine subsurface issues and focus in particular on the scale of the resource while giving limited consideration to technology, economics, and geopolitics (Deffeyes, 2005). Though belowground factors are critical, it is aboveground factors that will dictate the ultimate shape of the supply curve. This IHS CERA report summarizes our current productive capacity outlook to 2030 and discusses the architecture of future liquids supply. In addition, the methodology and foundations of the outlook are reviewed and the results of supporting studies on decline rates and giant fields are summarized. Though a peak of global oil production is not imminent, there are some major hurdles to negotiate.

METHODOLOGY

Productive capacity is defined as the maximum sustainable level at which liquids can be produced and delivered to market. Productive capacity estimates account for routine maintenance and general operational inefficiency but not for dramatic swings in political or economic factors or temporary interruptions such as weather or labor strikes. For example, a field may have a productive capacity of 140,000 barrels per day (bd) but in reality produce 130,000 bd on average over a year because of unforeseen maintenance issues, regulatory inspections, rig movements, and tie-ins.

At the core of IHS CERA's methodology is recent production history, which is considered the most reliable data available on which to base a forecast. We can measure the barrels arriving at the surface. Future production trends are extrapolated using a comprehensive framework of decline rates and knowledge of operational plans for individual projects and fields. Remaining reserve data are an important constraint on the
future supply profiles but, given the uncertainties in reserves estimation, can be used only as a broad guideline of future supply.

1. The marginal barrel is the most expensive oil to find and produce globally; currently the oil sands in Canada are regarded as representing the marginal barrel.

Four key components of supply are included in the outlook: fields in production (FIP), fields under development (FUD), fields under appraisal (FUA), and yet to find (YTF) resources. IHS CERA has fully incorporated the data from the IHS International Field and Well Data database, so that there are approximately 24,000 fields and discoveries underpinning the outlook. In addition, we have conducted detailed analysis of field production characteristics, especially decline rates, which have been incorporated at the field and project levels (see the IHS CERA Private Reports Giant Fields: Providing the Foundation for Oil Supply Now and in the Future? and Finding the Critical Numbers: What Are the Real Decline Rates for Global Oil Production?). A detailed database of approximately 450 OPEC and non-OPEC FUD provides a clear insight into the immediate plans of the industry to execute new projects ranging individually up to 1.2 mbd at production plateau. YTF resources are estimated by extrapolating historical activity and success rate data and making assumptions about future levels of activity in key countries. We have recently compiled historical exploration data from the IHS International Field and Well Data database on well count, success rate, and discovery sizes for each country, which have improved the YTF analysis. In this activity-based model we take account of project efficiency, costs, timing, hardware availability, and our detailed oil price outlook.

We adopt a holistic portfolio perspective to evaluate global productive capacity. Although it is clear that some giant fields such as Cantarell are now strongly in decline after a very successful secondary production program, and many countries are past their "peak," the sum of the parts as we currently see them shows that productive capacity should be able to grow for the next two decades.

WHY SO MUCH VARIATION BETWEEN PUBLISHED OUTLOOKS?

The long and complex debate about the future of global oil supply is characterized by two overriding characteristics: the very large range of potential outcomes projected and sustained disagreement about "the answer" (e.g., Mills, 2008). Production volumes are closely related to reserves, rock physics, and investment. Publicly available data tend to be limited and of variable quality. A wide range of different methodologies has been applied to the problem, from those encompassing systematic analysis and careful assumptions (International Energy Agency [IEA] 2008) to less robust techniques such as Hubbert's method (Deffeyes, 2005; Al-Bisharah et al. 2009), which can provide a good approximation in certain circumstances. Additionally, different studies are based on variable views on reserves/resources, field production performance, future exploration, technology, and commercial issues. Few have attempted to incorporate the impact of aboveground factors such as demand and geopolitics. Some models are based on a very pessimistic view of the future, which is not borne out by scrutiny of recent trends in exploration and production. For example, claims that half of global oil reserves have been produced, global reserves are not being replaced on an annual basis, and deepwater exploration is essentially exhausted (e.g., Leggett, 2006) are questionable.
The recent discoveries of 10 giant oil fields below a thick salt layer in the Santos Basin, Brazil, may have boosted global resources by at least 25 billion
barrels. Further claims that giant oil fields are past their prime have been refuted in a recent detailed study of 548 giant oil fields in the IHS CERA Private Report Giant Fields: Providing the Foundation for Oil Supply Now and in the Future?, which demonstrates their continuing strong contribution to global supply and that some 76 giant fields, representing 84 billion barrels, remain undeveloped. Fields in general and giant fields in particular still show considerable potential for reserves upgrades, as illustrated in many studies (Klett and Gautier, 2005).

IHS CERA'S 2009 SUPPLY OUTLOOK: "PAUSING FOR BREATH"

In our most recent reference case outlook, global productive capacity is expected to average approximately 92 mbd in 2009 and to rise to 115 mbd by 2030. This is a lower rate of growth than we have projected in the past and reflects the reaction of the oil industry to recent changing market forces. This is just one version of many possible outcomes, and we use it in this report to illustrate the architecture of supply and the nature and scale of the problem. This reference case provides a view of the building blocks of future supply in terms of FIP, FUD, FUA, and YTF as well as "Others," which include extra heavy oil, biofuels, coal-to-liquids/gas-to-liquids, and natural gas liquids. With aggregate decline rates of around 4.5 percent per year, FIP provide a diminishing proportion of the total future capacity. But in terms of the conventional oil asset life cycle, exploration replenishes the appraisal project inventory, which feeds into sanctioned development projects and ultimately producing fields. Figure 1 is a snapshot of a very dynamic system.
Figure 1: Global Liquids Productive Capacity Outlook, 2005-2030 (million barrels per day). Source: IHS Cambridge Energy Research Associates.
This summary does not show evidence of a peak in oil productive capacity before 2030. However, it does emphasize the importance of future exploration and the role of unconventional liquids in generating growth in the future. IHS CERA believes that unconventional liquids already contribute around 14 percent of total global capacity, and we expect this share to grow to 23 percent by 2030. The contribution of exploration is emerging as one of the key uncertainties and is the subject of current IHS CERA research. This model (a schematic sketch of its building blocks follows the list below) assumes that:
• The oil price stays above the cost of the marginal barrel for most of the period to 2030.
• Adequate existing and future resources exist to support these sustained volumes of higher capacity.
• The industry can build the hardware and develop the technical capability to implement investment programs.
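The architecture just described can be caricatured in a few lines of arithmetic: existing production (FIP) is decayed at an assumed aggregate rate of about 4.5 percent per year, and wedges for fields under development, fields under appraisal, yet-to-find volumes and other liquids are layered on top. All of the wedge numbers below are invented placeholders chosen only to show the bookkeeping; they are not IHS CERA figures.

```python
# Caricature of a capacity outlook built from "building blocks":
# FIP declining at an assumed aggregate rate, plus additive wedges for
# FUD, FUA, YTF and other liquids. All wedge values are placeholders.

AGGREGATE_DECLINE = 0.045            # assumed ~4.5%/yr aggregate FIP decline
fip_2009_mbd = 92.0                  # approximate starting productive capacity

# Hypothetical capacity added by each block, in mbd, by year:
wedges_mbd = {
    2015: {"FUD": 16.0, "FUA": 4.0,  "YTF": 1.0,  "Other": 5.0},
    2020: {"FUD": 20.0, "FUA": 10.0, "YTF": 6.0,  "Other": 9.0},
    2030: {"FUD": 24.0, "FUA": 18.0, "YTF": 20.0, "Other": 18.0},
}

for year, adds in wedges_mbd.items():
    surviving_fip = fip_2009_mbd * (1 - AGGREGATE_DECLINE) ** (year - 2009)
    total = surviving_fip + sum(adds.values())
    print(f"{year}: FIP {surviving_fip:5.1f} mbd + new blocks "
          f"{sum(adds.values()):5.1f} mbd = {total:5.1f} mbd")
```

With these invented wedges the bookkeeping reproduces the general shape of the reference case (capacity growing from the low 90s mbd towards roughly 115 mbd by 2030), and it makes clear that the outcome depends as much on the assumed additions as on the decline rate applied to existing fields.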
WHAT ARE THE CHALLENGES TO PRODUCING A ROBUST OUTLOOK?

Predicting future productive capacity hinges on an in-depth understanding of a complex multicomponent system, which is driven by the interplay of both aboveground and belowground factors. It is not realistic to treat the global oil endowment as if it were simply in a tank being emptied. IHS CERA's experience of evaluating productive capacity over two decades suggests that there are no unique answers, a point reinforced by the wide variety of published outlooks noted above. As part of our ongoing research program IHS CERA has concentrated on a number of factors that will strongly influence future supply:

• Data. The IHS CERA reference case outlook is based largely on the IHS International Field and Well Data database, which is arguably the most comprehensive commercially available upstream data set. A reliable and comprehensive database is critical to any credible forecast, but the complexity of the analysis requires some bold assumptions to be made. Even a perfect data set would generate a wide range of possible outcomes in modeling such a complicated system. The debate about future supply and data has tended to focus on subsurface technical data, especially reserves data. But there is a wide range of sources related to aboveground drivers that is also crucial: country-specific economic data and projections (which drive supply), as well as rig count, yard space, and service sector capability.
• Reserves. To date the analytical core of this debate appears to have hinged on knowledge of field and global reserves (Mills, 2008). Oil and gas reserves are defined as the volumes that will be commercially recovered in the future. Hydrocarbons are trapped in reservoirs underground and can't be physically audited or inspected, so estimates are based on the evaluation of data that provide evidence of the scale of the reserve base. The Society of Petroleum Engineers (SPE) has produced a detailed set of six categories of reserves and contingent resources and three categories of undiscovered prospective resources (ref: SPE website http://www.spe.org/speapp/spelindex.jsp). These reserve estimates entail large degrees of uncertainty, and a lot of experience and judgment are required in performing the calculations. Given the complexity of the calculations there are no unique answers at the individual field or global levels, and we still don't know exactly how much has been discovered or what remains to be found, despite any claims to the contrary. Current estimates can only be considered as being orders of magnitude. The questionable use of resource estimates is well illustrated by Hubbert's (1982) approach, which suggests that a peak of production occurs when half of the global inventory of supply has been produced. This seems plausible, given that some 1.1 trillion barrels of oil has been produced to date and there are apparently some 1.2 trillion barrels remaining to be produced. What this approach does not reveal is that this analysis is based on proven plus probable conventional reserves alone, which amount to 2.3 trillion barrels. It ignores all the remaining categories of conventional and unconventional reserves and resources (including possible, contingent, and prospective reserves), defined by the SPE, which could ultimately contribute at least as much again. IHS CERA estimates that global resources could be approximately 4.8 trillion barrels, including just over 1.1 trillion barrels of cumulative production to date (see the IHS CERA Private Report Why the Peak Oil Theory Falls Down: Myths, Legends, and the Future of Oil Resources). It is clear that we are dealing with a finite resource, but more consistency in reserves reporting and further systematic studies are needed, such as the United States Geological Survey (2000) study of global YTF resources, to improve the quality of the numbers. Remaining reserves data are an important constraint on the future supply analysis, but given the uncertainties they can be used only as a broad guideline.
• Decline rates and field performance. At the core of IHS CERA's productive capacity model is an extrapolation of historical production data into the future. We have completed a study of over 1,000 fields to understand the characteristics of field production through the buildup, plateau, and decline phases. Central to this analysis was an attempt to estimate typical decline rates for a range of field sizes and types in different geological and geographic environments. Information from relatively mature, data-rich areas such as the North Sea and Norway suggested that decline rates were well above 10 percent on an individual field basis, an alarming figure, so it was important to complete this study to develop a more accurate and representative picture around the world. All oil fields start to deplete the day production start-up occurs, but not all fields are in decline. From our 1,000 field study database only 40 percent of production comes from fields in decline, suggesting, perhaps surprisingly, that a significant proportion of all production comes from fields building up or on plateau. This study showed that the average decline rate for fields was 7.5 percent, but this number falls to 6.1 percent when the numbers are production weighted (a small worked example of this weighting is sketched after this list). The numbers were subsequently corroborated by the IEA (2008). Importantly, the aggregate decline rate of all fields currently in production (which includes fields building up and on plateau) works out to be around 4.5 percent. It is anticipated that aggregate decline rates might increase slowly with time, and also that ultimate recovery will continue to increase in the medium term. Giant fields are still the cornerstone of global production. Some 548 giant oil fields contribute 61 percent of global production; and although production from the giants has risen, that proportion has remained steady in recent years. Recent IHS CERA research on giant oil fields shows that collectively the giant fields are not in decline and some 60 percent of their recoverable oil remains to be produced. The number of giant field discoveries has declined in recent years, but their contribution seems unlikely to plummet in the near term.
• Costs and capability. The IHS CERA Upstream Capital Costs Index (UCCI) is a set of indices used to monitor the current state of the global upstream cost environment. Set at 100 in 2000, it more than doubled by the end of 2008 (230). This means that oil companies were essentially spending twice as much to undertake the same amount of work as in 2000. Recently the UCCI has declined by 8.5 percent, putting costs back to early 2008 levels; and although oil prices recently fell back to 2004 levels, costs are projected to drop by only an additional 10 percent over the next six months, bringing them back to 2007 levels by third quarter 2009. Some service sectors, such as the deepwater rig market, will sustain a high pricing structure because of the sustained demand; others, such as jackup rig markets, have softened and may continue to do so. Current upstream sector demographics are such that a large proportion of experienced professionals will retire in the next ten years. The industry has acknowledged this for a number of years and has taken steps to hire and train a new generation of experts, but this may be too little too late. In the current downturn the industry is again in danger of further erosion of its skills base. The service sector in particular is under pressure from operating companies to reduce costs, and this means rationalizations of staff, which will seriously restrict the capability of the service sector in future.
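The distinction between a simple average decline rate and a production-weighted one is easy to state in code. The sketch below uses entirely hypothetical field data; the point is only that weighting by production pulls the average toward the behaviour of the largest producers, which is how a 7.5 percent simple average can correspond to a lower production-weighted figure.

```python
# Simple vs. production-weighted average decline rate.
# Field data are hypothetical; rates are annual fractional declines.

fields = [
    # (current production in kbd, annual decline rate)
    (900.0, 0.04),    # a large field declining slowly
    (400.0, 0.06),
    (120.0, 0.10),
    (60.0,  0.12),    # smaller fields assumed to decline faster
    (30.0,  0.15),
]

simple_avg = sum(rate for _, rate in fields) / len(fields)
weighted_avg = (sum(prod * rate for prod, rate in fields)
                / sum(prod for prod, _ in fields))

print(f"Simple average decline:              {simple_avg:.1%}")
print(f"Production-weighted average decline: {weighted_avg:.1%}")
```

The same weighting logic, applied across the roughly 1,000 fields in the study, is what takes the 7.5 percent simple average down to about 6.1 percent.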
Any outlook can present only one potential version of the future. IHS CERA uses a reference case productive capacity outlook to generate three scenarios for future production (Asian Phoenix, Break Point, and Global Fissures) which enable an understanding of the range of possible drivers of future supply and describe three feasible outcomes (see the IHS CERA Multiclient Study Dawn of a New Age: Global Energy Scenarios for Strategic Decision Making - The Energy Future to 2030). Recent oil price volatility has further reinforced the point that the future is highly uncertain and a range of outcomes should be considered.
THE BIG PICTURE

It would be easy to interpret the following recent market and oil price events in isolation to support the belief that a peak in global supply has passed or is imminent:

• Oil price spike to $147 per barrel in July 2008
• Tight supply-demand balance of around 2.5 mbd through mid-2008
• Considerable decline in global production to around 83 mbd

However, these events are linked to an array of economic and political factors; they do not herald the onset of a peak and at the simplest level illustrate that the market continues to act as the shock absorber of major volatility. Supply continues to respond to prices (conditioned by expectations of future demand), and simultaneously demand responds to prices. Improved data availability and transparency could help to produce more accurate outlooks for future capacity, but even this will not provide unique, reliable answers. Subsurface data on reserve levels and decline rates are only a part of the story. Any prediction of the date of the peak based on subsurface data alone would be unreliable. Once it is possible to accurately model the following building blocks, and only then, will a truly reliable and useable range of outcomes become available:

• Future course of the global economy
• Balance and impact of the complex web of geopolitics
• Future course of oil prices
• Course of government policies that focus on controlling demand
• Development of renewable energy sources and climate change issues
Many projections, including those based on the methodology of Hubbert, fail to account for the impact of economics, technology, or geopolitics (Deffeyes, 2005), while others concentrate on conventional oil alone (Bentley et al., 2007) and fail to account for the growing proportion of unconventional oil being developed and produced. IHS CERA tackled this issue by developing a possible range of outcomes through plausible scenarios for the future of global energy (see the IHS CERA Multiclient Study Dawn of a New Age, cited above). Even this comprehensive study does not present a unique base case projection, but rather develops the three scenarios noted above (Asian Phoenix, Global Fissures, and Break Point) extending to 2030. In Asian Phoenix the center of economic and political gravity shifts to Asia. Strong growth in China and India puts them on a path to eventually challenge the United States for global economic preeminence. In Global Fissures, a widespread political backlash against free trade and globalization, combined with global trade and political disputes, lowers economic growth and weakens energy prices. One of the triggers is a hard landing of the United States economy. Global Fissures reflects the current global climate most closely.
In only one scenario, Break Point, do we envisage a period of very tight supply, where supply difficulties would limit production growth, but with no imminent peak in sight. In 2006 we anticipated that oil prices would reach $150 per barrel. In this scenario fear of peak oil encourages programs to enhance energy efficiency and accelerate growth of alternative fuels, and oil loses its monopoly on transportation. Looking ahead, the upstream industry faces many challenges. There is little doubt that the existing and possible future resource base can support growth in capacity through 2030. There is no shortage of new projects or exploration potential to replenish the hopper. Exploration and field upgrades have tended to replace global production in recent years. It has been said that the "easy oil" has all been discovered, but this statement reflects access and commercial challenges rather than fundamental exploration potential or operational issues in every environment. Exploration is not yet in terminal decline, and while recently some 12 billion barrels of oil has been discovered annually, the five-year moving average is actually growing (see Figure 2).
Figure 2: World Liquids Resource Discovery and Production, 1930 to 2007 (billion barrels). OPEC and non-OPEC liquids discovery versus liquids production; historical discoveries are shown for all current OPEC countries (excludes Gabon and Indonesia). Source: IHS Cambridge Energy Research Associates.
The longer-term problem lies not below ground, but in obtaining the investment and resources that the industry will need to grow supply significantly from current levels. Both OPEC and non-OPEC countries have a strong current inventory of some 450 projects under development. The recent fall in oil prices has precipitated a slowdown in the rate at which projects are being sanctioned and developed, but this temporary situation will ease when the global economy starts to recover. The projected medium-term slowdown in the rate of supply growth is a simple function of economics rather than evidence of an imminent peak.
However, not everything is working according to plan. Non-OPEC growth has been worryingly anemic for five years, driven largely by slowing growth of productive capacity in Russia. Non-OPEC may well struggle to regain the annual growth levels greatly exceeding 500,000 bd that were common before 2004. OPEC countries will be a key element of future growth, but prolonged periods of low oil prices (below $60 per barrel) and abundant spare capacity of around 6.5 mbd might well start to inhibit long-term supply growth. But just over the horizon a period of strong economic growth could quickly reverse this trend. However, structural changes currently occurring in the service sector in response to falling costs will pose a threat to future supply expansion. After nearly a decade of strong growth in response to increasing demand, some service sector companies are downsizing, and this will affect the ability of the service sector to bring on new supply at an appropriate pace. While the current economic situation has driven a reduction in E&P investment, it has also coincidentally provided a supply cushion that will take some time to work its way back into the system. Companies continue to build new productive capacity, albeit at a slower rate than one year ago. Collectively this will provide a short-term cushion until the global economy starts to pick up again from 2010 onwards; and so the current recession has effectively postponed any imminent peak. There are many areas of overlap between IHS CERA's view of future oil supply and other outlooks. Oil is a finite resource, and at some stage supply will begin to fall short of meeting demand on a consistent basis. The basic differences in opinion appear to center on when this will happen, but what happens after the inflection point is also crucial (e.g., Campbell, 2009; IEA, 2008; and Hirsch et al., 2005). The view that oil supply will plummet after the inflection point and oil will run out, like the gasoline in an automobile, is misleading for the layperson. IHS CERA believes that this inflection point will herald the beginning of an undulating plateau of supply which will last for perhaps two decades before a long, slow decline sets in (see Figure 3). It marks the start of a transition period when traditional market forces and government policy will be unable to adjust supply to meet growing demand and limits are reached. Peak demand is an equally important concept and may well be viewed in hindsight as the main driver of peak supply.
Figure 3: Undulating Plateau versus Peak Oil: Schematic (million barrels per day, 1990 to 2070). Reference Case Liquids Capacity (IHS CERA 2009), approximately 2.4 trillion barrels post-2010, versus Conventional Crude Capacity (IHS CERA 2009), approximately 1.9 trillion barrels post-2010; historical production of approximately 1.1 trillion barrels cumulative. Source: Cambridge Energy Research Associates.
REFERENCES
1. Bentley R.W., Mannan S.A., and Wheeler S.J. (2007), "Assessing the date of the global oil peak: The need to use 2P reserves," Energy Policy, Elsevier, vol. 35, pp. 6364-6382, December 2007.
2. Campbell C.J., ed. (2002), The Essence of Oil & Gas Depletion: Collected Papers and Excerpts, Multi-Science Publishing Company Ltd., 341 pages.
3. Deffeyes K.S. (2005), Beyond Oil: The View from Hubbert's Peak, Princeton University Press.
4. Hirsch R.L., Bezdek R., and Wendling R. (2005), "Mitigating a long term shortfall of world oil production," World Oil, May 2005, pp. 47-53.
5. Hubbert M.K. (1982), Techniques of Prediction as Applied to Production of Oil and Gas, U.S. Department of Commerce, NBS Special Publication 631, May 1982.
6. Klett T.R. and Gautier D.L. (2005), "Reserve growth in oil fields of the North Sea," Petroleum Geoscience, vol. 11, no. 2, pp. 179-190, May 2005, Geological Society of London.
7. Leggett J. (2006), Half Gone: Oil, Gas, Hot Air and the Global Energy Crisis, Portobello Books Limited.
8. Mills R.M. (2008), The Myth of the Oil Crisis: Overcoming the Challenges of Depletion, Geopolitics, and Global Warming, Westport, Conn.: Praeger.
9. Al-Bisharah M., Al Fattah S., and Nashawi I.S. (2009), Forecasting OPEC Crude Oil Supply, Society of Petroleum Engineers Paper Number 120350-MS.
10. U.S. Geological Survey World Petroleum Assessment 2000: Description and Results, USGS World Energy Assessment Team, 2000.
11. World Energy Outlook 2008, International Energy Agency, 2008.
THE IMPORTANCE OF TECHNOLOGY: THE CONSTANT WILD CARD
RODNEY F. NELSON
Senior Vice President for Technology and Strategy, Schlumberger, Ltd.
Houston, USA
The modern oil industry is approaching 100 years of age. During that time we have moved from collecting oil at ancient surface seeps to imaging the subsurface with startling precision below 10,000 feet of water and 5,000 feet of salt. The constants throughout this period are the steady progression of the technology deployed and the amazing complexity of the earth which is slowly revealed to us. We have certainly learned that the hydrocarbon endowment we have been given is much larger than anyone imagined even a few years ago.
INDUSTRY MACRO
Many of you may have seen versions of this chart before or at least the data in another form. It is the history of the last 40 years of the oil business. From the oil shocks of the 1970s, through the huge over-capacity of the 1980s, the first Gulf war, the sudden increase in non-OECD demand in 2004 and more recently the current recession, it's all here. For the future, the estimates for the next few years have been updated to include a scenario based on the latest IEA Medium-Term Oil Market Report, and you can see the increased current spare supply capacity that has resulted from a combination of lower demand and new supply. As a result of this combination, prices have fallen from last summer's highs and investment levels have dropped. Here, however, I'd like to add a word of caution. Even though investment in exploration and production almost tripled from 2000 to 2008, the industry didn't add very much additional oil production capacity. As industry observers have pointed out, some of this investment was consumed by inflation across the supply chain, but even so production capacity outside a handful of OPEC producers hardly changed, and within non-OPEC producers it either levelled off or began to fall. As long-term global energy demand remains little changed, I, for one, remain concerned that the inevitable higher finding and development costs of new supply, coupled with lower oil and gas prices and more restrictive credit markets, are stifling investment flows. This situation, if it persists, could lead to inadequate supply when demand growth returns.
Increasing demand and natural production decline create growing need for significant new production capacity
While we are seeing lower demand, we are also seeing lower overall supply capacity. Two-and-a-half million barrels of expected additional production capacity have already been lost, and the forecasts do not show much change in that number over the near term. Much of the expected capacity lost comes from the delay or cancellation of projects associated with the more-difficult-to-produce heavy oils, which are uneconomic at lower product prices. But that is only part of the story. As fields progress through their natural life cycle they begin to decline. This creates the constant need to invest in new capacity, which must offset this decline to keep production constant or grow it. As this slide indicates, even with conservative estimates of production decline, the new capacity required grows to dramatically large numbers. Obviously this wedge of liquid demand will be met by some combination of OPEC, non-OPEC and unconventional fuels.
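To make the arithmetic behind this capacity wedge concrete, the following is a minimal sketch. The base capacity and demand-growth figures are illustrative assumptions, not numbers from the slide; the 4.5 percent decline rate echoes the aggregate decline estimate quoted earlier in this volume.

```python
# Minimal sketch: cumulative new capacity needed to offset natural decline and
# meet demand growth. All inputs are illustrative assumptions.

def required_new_capacity(base_mbd, decline_rate, demand_growth_mbd, years):
    """Return cumulative new capacity (million b/d) needed over the period."""
    capacity = base_mbd
    needed = 0.0
    for _ in range(years):
        lost = capacity * decline_rate       # production lost to natural decline
        capacity -= lost
        needed += lost + demand_growth_mbd   # replace losses and serve demand growth
    return needed

# e.g., 85 mbd base, 4.5% aggregate decline, 1 mbd/yr demand growth, over 10 years
print(round(required_new_capacity(85.0, 0.045, 1.0, 10), 1), "mbd of new capacity required")
```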
Proven Reserves versus Production
OPEC
As everyone here knows, the earth's endowment of oil is not distributed evenly by country. OPEC member countries hold the majority of the remaining conventional oil reserves. This, of course, creates geopolitical concerns across the globe.
Drilling Intensity in the United States, 1954-2007
The distribution of remaining reserves is partly due to the natural endowment and partly due to the production history. The United States, for example, has drilled more wells than any other country. And as you can see here, the footage drilled per year normalized to production has varied dramatically over the past 50 years.
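As a rough sketch of the metric being plotted, drilling intensity can be thought of as footage drilled per unit of oil produced; the figures below are placeholders, not data from the slides.

```python
# Sketch of the drilling-intensity metric: footage drilled per year normalized
# to oil production. All numbers are hypothetical placeholders.

def drilling_intensity(footage_ft_per_year, production_bbl_per_year):
    """Feet drilled per barrel produced in a given year."""
    return footage_ft_per_year / production_bbl_per_year

us_example = drilling_intensity(3.0e8, 2.5e9)      # hypothetical U.S. figures
libya_example = drilling_intensity(1.0e6, 6.0e8)   # hypothetical Libya figures
print(f"US: {us_example:.4f} ft/bbl, Libya: {libya_example:.6f} ft/bbl")
```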
Drilling Intensity in Libya, 1999-2007 (drilling intensity versus oil production)
To give you a comparison, here is a similar plot for Libya. The corresponding drilling footage intensity is orders of magnitude less. One conclusion to an analysis like this is that there is considerably more upside potential for significant discoveries in Libya than in the U.S. for example.
Most analysis tends to focus on conventional oil. But, as you know, not all oil is created equal. This plot, which is taken from an IEA report, is, I think, particularly instructive. On the X axis is the estimated resources in billions of barrels. On the Y axis is the estimated production cost of those reserves as of 2008 in 2008 dollars. The rectangles estimate the size and the cost range of different classes of oil. Under these circumstances, the challenges to which new technology must respond are two-fold. First, operating costs in environments where new resources exist are high. Even in today's recessionary climate, deepwater project expenditures have been little reduced, with the cost of operating in up to two kilometers of water to drill and develop reserves below two kilometers of salt remaining elevated. Technology that can mitigate risk in such drilling and completion operations continues therefore to be in demand. Second, the complexities of such potential reservoirs require sophisticated measurement and modeling before and during exploration and development to ensure that the right well is drilled, and the right information collected. More and more this is becoming a matter of integration across previously somewhat discrete technologies, with the goal being to improve reservoir performance.
For natural gas, supply and demand present a similar story, with the difference that supply is changing in a different way as the commodity rapidly becomes a global business. After four decades of nearly uninterrupted growth, worldwide demand for gas is still expected to increase at an average rate of 1.8% over the 25-year period from 2006 to 2030. This is nearly double the average increase in oil demand over the same period. By 2030, natural gas demand will represent 22% of total energy demand, while for the next two decades the power generation sector will account for nearly 60% of the growth.
• Significant untapped resources
• Developing global LNG market
• Geographically / politically dispersed
• "Cleaner"
• Lead time for power generation: nuclear, 10 years; coal, 5 years; gas, 1-2 years
The largest relative growth in demand will come from Asia and the Middle East, driven not only by increasing use for power generation, but also by housing needs, and as feedstock for the petrochemical industry. By 2030, these two regions will account for a combined 30% share of global gas demand, up from 19% today. And while natural gas is expanding outside traditional consuming countries, a significant share of the projected production increase will come from the Middle East, with most of the remainder coming from the Former Soviet Union countries and Africa. Such changing patterns are leading to a global change in inter-regional gas trading, something that is expected to more than double over the period to 2030, and something that is being fueled by liquefied natural gas supply and transportation. In the medium- to longer-term, therefore, significant efforts will be needed to find and produce considerably more gas than is available today. But just as in the case of oil, we must also look at where future supply will lie, as this will guide a number of the needed technology development efforts, for just as the age of easy oil is over, so perhaps is the age of easy gas.
Worldwide Gas In Place:
• Conventional: 14,000 Tcf
• Shale: 15,000 Tcf
• Coal bed: 2,600 Tcf
• Tight: 6,000 Tcf
• Hydrates: >100,000 Tcf
• Worldwide consumed to date: 3,000 Tcf
We know that considerable natural gas resources exist from estimations such as those from the 2009 BP Statistical Review, which puts even conventional resources at over 185 trillion cubic meters, a figure more than double the corresponding 1980 estimate. But you can see that the majority of today's known resources are nonconventional, coming from tight sand, shale and coal-bed methane accumulations. Nowhere is this trend more evident than in North America, where nonconventional gas now represents more than 40% of U.S. domestic production, a figure that has been made possible by some exciting new technologies that maximize the contact between the shale formation and the well bore completion, such as in horizontal wells drilled and fractured hydraulically in multiple stages to enhance well productivity. Worldwide, however, non-conventional gas resources represent only 10% of total production, with commodity prices and project costs dictating whether, where and when their development will expand. That said, major coal-bed methane projects already exist in China and in Australia. Yet even if we were to limit ourselves to conventional resources, much remains to be done and this will require considerable new technology.
Summary: Energy is a Long-Term Industry
• World energy demand forecast to increase by about 45% by 2030
• Fossil fuels will supply 80% of this as alternative energies lack scale and investment within this timeframe
• Oil demand growth strongest in the developing economies and weakest in the OECD; natural gas a global issue
• OPEC production increases as non-OPEC production peaks, but increasing production hinges on adequate and timely investment and new technology
• Non-conventional resources will play a greater role in the energy mix
• Energy and the environment are inextricably linked, with global energy-related CO2 emissions increasing 45% by 2030
Technology Challenges
• Deepwater exploration will need a changing technology mix. The priority will be on mitigating risk and on service execution
• Enhancing production from existing fields will require improved workflows, faster well construction, improved completions and better efficiency. The focus will be on increasing performance
• Technologies for unconventional hydrocarbon production will become more important. Service intensity will increase
• Technology development is a long-term commitment that must be maintained through the cycle
• Environmental footprint is increasingly important and must be reduced
One can divide the E&P challenges into at least four major categories:
• Cost-effective production of known reserves
• Reducing risk in exploration of new reserves
• Expanding our capabilities in the deeper and harsher environments such as deepwater and the arctic
• And, finally, technology specifically developed to unlock unconventional hydrocarbons
R&E Spend versus Hydrocarbon Resources (research and engineering spend by category: known reserves; yet to be explored; deep and harsh; unconventional; CO2)
And just to show you that we, and I am sure others, are putting our money where our mouths are, this plot shows you the breakdown by category of our R&D spend and how it has evolved from 2007 to 2008. I can assure you that the 2009 spend is even more heavily shifted towards unconventional and CO2 sequestration.
The Value of Technology Integration: pre-drill prognosis, data acquisition while drilling, integrated drilling update, Drilling Operations Support Center
Drilling involves two distinct cycles. The first is planning, which is long-term, while the second is execution and is short-term. Seismic-guided drilling changes this by integrating the two in a real-time model which is continuously updated by new information added as the well is drilled. Seismic-guided drilling uses an earth model, updated in near real-time. In this picture the left-hand seismic section is representative of the model as it stands when we start drilling a given well. It incorporates all our prior understanding of the sub-surface. When we add real-time seismic logging-while-drilling data, we can update the model as we drill. In the field, we use our own InterACT transmission technology to send the information to a dedicated support team at a Drilling Operations Support Center. At the Center, the team analyzing the data consists of experts from various disciplines including geophysics, petrophysics and drilling engineering. The data are compared to the earth model and any deviations are analyzed. When these reach certain thresholds, a complete re-imaging of the sub-surface is undertaken. This incorporates all available information as well as the new information gathered while drilling. With conventional technology even this advanced re-imaging can take several months, a time period far too long to have any bearing on the drilling of the well. Using a unique combination of software, process and computer technology, however, we have developed a solution that significantly reduces the time required for re-imaging to match the timeframe available during drilling, so that characterization measurements can almost immediately influence the operation. On the right-hand side you can see the result in this example case. As you can see, the top of structure was not found where our pre-drill prognosis forecast it to be. The availability of the while-drilling seismic data allowed us to update the model and
sidetrack the well, shown in blue, to be able to drill into the top of the potential reservoir.
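The workflow just described (compare while-drilling data against the prior earth model and trigger re-imaging only when deviations pass a threshold) can be summarized in a toy sketch; the model representation, threshold and numbers below are illustrative assumptions, not Schlumberger software.

```python
# Toy sketch of a threshold-triggered model-update loop, assuming a simple
# linear prior trend for a logged property versus depth. Illustrative only.

def predict(model, depth):
    intercept, slope = model
    return intercept + slope * depth

def reimage(model, depth, measurement):
    # stand-in for a full re-imaging step: shift the model to honor the new data
    intercept, slope = model
    return (intercept + (measurement - predict(model, depth)), slope)

def drill(model, lwd_data, threshold=0.5):
    for depth, measured in lwd_data:            # data acquired while drilling
        if abs(measured - predict(model, depth)) > threshold:
            model = reimage(model, depth, measured)
    return model

prior = (10.0, 0.002)                            # hypothetical pre-drill prognosis
lwd = [(1000.0, 12.1), (1500.0, 13.4), (2000.0, 15.9)]
print(drill(prior, lwd))
```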
Technology Deployed-Productive Drilling
Three years ago we introduced a new innovation in logging-while-drilling services under the PeriScope brand name. This technology uses electromagnetic measurements to determine the position of the bottom-hole assembly with respect to nearby formation boundaries, including reservoir tops, bottoms and fluid contacts. Here is an example where the red path traces the trajectory of an extended-reach well. The scale is vastly distorted: the yellow reservoir is about 15 feet thick, and the section shown extends just over 2000 feet horizontally. Through measurement and modeling PeriScope tools yield the information needed for rotary-steerable systems to keep the well within the most productive part of the reservoir. Since its introduction, the technology has been deployed in more than 20 countries and has drilled well over one million feet in reservoirs ranging from coal-bed methane to heavy oil, and from complex sands to carbonates. It has allowed horizontal wells to be optimally placed in complex geologies where previous attempts were unsuccessful; it has improved production rates by increasing the length of the well bore placed within the pay zone; and it has improved recovery by eliminating early breakthrough of gas or water. So how do you make a good thing better? Well, you make it read deeper. PeriScope technology can see up to some 15 feet around the tool. This is fine for well placement in thin reservoirs, but not quite good enough for thicker beds or for helping steer into the reservoir in the first place, and this leads us to the next generation: PeriScope UltraDeep.
Waterflood and Recovery Monitoring
Downhole permanent pressure gauge, electrical array and production logging tool continually measure pressure, resistivity and injection rate.
The same type of integration you have just seen in drilling also applies to production, in which case we see integration of downhole valves together with downhole instrumentation and modeling to yield higher recovery rates and ultimately leave less oil behind.
Technology Development: In-Situ Downhole Fluid Analysis. Optical fluid analysis: fluid identification, oil/water ratio, fluid color, gas detection (development timeline 1992-2007).
But before I go much further, I would like to emphasize one very important point-technology development demands a long-term commitment. As an example, if we could determine the composition of such complex reservoir fluids as sour gases in situ, we would save time by not having to send samples for analysis, and we would be better equipped to manage immediate operational issues that corrosive fluids present. We would be better warned of some of the problems that I just described. We would also be able to determine reservoir compartmentalization. And this would help determine different development solutions. The technology to do this on wireline cable is now available, but has taken more than 15 years to develop. It started in fact with our desire to produce uncontaminated samples in the early 1990s from which we progressed to successfully detail downhole analyses of fluid composition using a downhole grating spectrometer. This has been a remarkable scientific achievement which has been made possible by close co-operation between operators, research laboratories, field operations and engineering facilities. Development has included major technology advances as well as significant progress in reservoir understanding. To make it happen, two small companies were acquired, and considerable intellectual property developed. All these points go to show that the execution and follow-up of a critical technology roadmap is a long-term process that must not be interrupted by industry cycles.
What's Unconventional: Heavy Oil Technology. Steam Assisted Gravity Drainage (SAGD) Process.
Let me give you one example from what most people would call unconventional oil. In Canada, an in-situ process of heating the very viscous bitumen has been developed, called Steam Assisted Gravity Drainage or SAGD. This innovative idea requires precise horizontal drilling which was impossible a few years ago. Pairs of wells are drilled parallel to each other, steam is injected into the top well and the heated oil is produced from the lower one. This technology continues to develop, with people looking for more efficient ways to drill the wells, downhole steam generation and solvent injection to assist in producing the oil. Ultimately this process will not look very unconventional at all.
My last technology example is from the U.S., namely the unconventional shale gas play you may be hearing about. This is a resource that, while recognized as a potential gas source some 20 or more years ago, was not seen as having the enormous potential that has since been unlocked by a combination of technology and brute force. This plot shows you the results of this playing out. The U.S. natural gas rig count is plotted alongside the gas production curve. At a time when many were predicting the long slow decline of U.S. gas production and an increased reliance on imported LNG, we have in fact seen a dramatic upturn in domestic production fueled by gas shales.
The technology behind this is the combination of horizontal drilling (and lots of it) and better hydraulic fracturing techniques. This is required because of the extreme tightness or hardness of this rock. Without breaking the rock hydraulically no gas will flow. This slide shows the evolution with time of the technology. The early experiments were not economic.
• Major enabler for unconventional gas
• Greater understanding of frac treatments: fracture length, fracture orientation, zonal differences, asymmetry
Last, let me give you some insight into what else is coming to further improve the hydraulic fracturing process and production in these shales . We now have the capability to monitor the fracture creation in real time and therefore to adjust the process in real time to optimize it. The monitoring process is achieved via acoustic monitoring and then updating the model. The decision to modify the fracing process can be made virtually anywhere, whether that be the office or the well site.
I hope this presentation has shed some light on the demand scenarios that condition the upstream industry today and how those drivers could develop over the next year or two. I also hope that the technology examples that I've shown you strike a chord in their integration, as the complexity of the challenges we will ultimately face is enormous. I certainly do not believe that the age of oil is over, and the upstream industry will be called upon to deliver more production from more complex reservoirs in more remote areas than ever before. As a final thought, I hope that I have clearly expressed my view that while technology forms a large part of the answer, its future contribution demands time, commitment and resources that must be maintained through industry cycles.
RECENT SCIENTIFIC DEVELOPMENT IN TAIWAN IN RESPONSE TO GLOBAL CLIMATE CHANGE
MAW-KUEN WU
Director, Institute of Physics, Academia Sinica
Taipei, Taiwan
INTRODUCTION
Taiwan is a small, beautiful island with unique geographic characteristics. It sits right at the intersection of the Eurasian and Philippine tectonic plates. Due to the dynamic movement of the region, earthquakes are a regular event in Taiwan, which has consequently resulted in a fragile and steep landform. Taiwan is also located in a subtropical monsoon region. Heavy rainfall, especially during the typhoon season, is common and often causes severe property damage and loss of life. The recent severe weather conditions that have resulted from global change have made the situation worse. For example, an unexpected disastrous flood struck southern Taiwan in early August this year, taking more than six hundred lives and causing huge property losses. Figure 1 shows the collapse of a hotel building located in a hot spring resort area in southeast Taiwan during the flood. The main cause of the flood was the enormous, record-high rainfall. It is generally believed that such catastrophic rainfall is a consequence of the climate change we are facing today.
Fig. 1. A hotel building leans before falling into a heavily flooded river after Typhoon Morakot hit Taitung county, Taiwan, Sunday, Aug. 9, 2009. The six-story hotel collapsed and plunged into the river Sunday morning after floodwaters eroded its base; all 300 people in the hotel had been evacuated and were uninjured, officials said. (AP Photo/ETTV Television).
NATIONAL PROGRAM ON HAZARD MITIGATION
The people and the government in Taiwan realized the problem long ago, and have made plans to try to resolve it. An aggressive national initiative in hazard mitigation was proposed almost 30 years ago. After almost 10 years of planning, the National Science and Technology Program for Hazard Mitigation was launched in late 1997.1 The government and private sectors determined to engage in long-term disaster prevention research, technology development and, eventually, the deployment of the technology. Since the inauguration of the national program, great efforts have been devoted to the investigation and analysis of potential disasters, risk level analysis and simulation, early warning and forecasting techniques, and a disaster management decision-support system. The program also implemented projects for disaster prevention education, development of a disaster prevention and emergency response system, and disaster prevention strategies. All these efforts have come with promising results. Some of the results have been applied in practical work and have laid a solid foundation for future development. The program has the missions to effectively integrate and strengthen research capabilities and results, and to enhance disaster prevention and response technique standards. More specifically, the program is to:
1. Coordinate, plan and implement disaster prevention and response technology related research and development;
2. Deploy disaster prevention and response technologies to support the actual field tasks; and
3. Promote research and development and apply those results to the disaster prevention and response system.
The ultimate goals of the national program are to upgrade the disaster risk analysis system in coordination with the environmental features and regional developments; to formulate new disaster and new strategies; to strengthen disaster prevention and response capabilities; and to develop disaster prevention industries and attract private investments. It is not a one-step process to achieve the above goals. Thus, the program has been designed to achieve the goals in a few stages with specific work focuses:
1. Lay a solid foundation for R&D and implementation: set up disaster prevention and response collaborating systems and expand R&D capabilities; strengthen basic-level disaster prevention and response operation capabilities; set up disaster loss investigation methods and a benefit assessment system; execute, assess and amend disaster prevention-related laws and regulations; strengthen the promotion of disaster prevention and response education and public awareness.
2. Integrate resources to enhance efficiencies: strengthen the disaster prevention and response related socio-economic strategies; set up the public safety management guidance and assessment system; strengthen local and overseas disaster prevention and response-related academic exchanges and collaborations to enhance technological R&D standards.
3. Comprehensively strengthen the overall anti-disaster capabilities: set up various disaster early warning criteria and improve related operation mechanisms; enhance disaster prevention and response preparations and response operation capabilities; set up comprehensive systems to strengthen society's overall anti-disaster capabilities.
In order to ensure that the overall R&D work is strategic, integrative, and practical, it is essential to establish an organization to keep the disaster prevention-related databases and techniques periodically updated, and to engage in setting up the R&D focus, labor planning, technical transfer and implementation based on the plan. Thus, the program established the National Science and Technology Center for Disaster Reduction, whose key functions are "R&D promotion", "technical support", and "application implementation". The Center is responsible for the integration among the technical systems on disaster prevention and response developed by academia, government-sponsored research institutes and the private sector. In order to ensure sustainable development of the National Program for Hazards Mitigation, the Center needs to strengthen technical support and to develop technologies that are critical for better and sustainable environmental development. It also has the responsibility to promote society's awareness of the importance of disaster mitigation, which is the most effective means to reduce losses of lives and property in disastrous incidents. The major functions of the Center can be summarized as follows:
1. Research and Development: coordinate and set research directions; integrate research resources; execute projects related to strategic and mission-oriented research and development.
2. Technical Support: assist system planning for disaster prevention and rescue policy making; enhance system performance for emergency response; improve preparedness and prevention.
3. Application and Implementation: set up operation procedures for public safety management; evaluate related plans, resources and operational capabilities; provide technique training and public education.
SUPPORTING INFRASTRUCTURE
The support of some major facilities is essential for the success of the hazard mitigation R&D program. The National Space Program Organization (NSPO) under the National Science Council has launched two major satellites since 2004 2 to provide the needed information for a better understanding of the landform and weather conditions. The first remote sensing satellite developed by NSPO, FORMOSAT-2, shown in Figure 2, was successfully launched on May 21, 2004 into a Sun-synchronous orbit located 891 kilometers above ground. The main mission of FORMOSAT-2 is to conduct remote sensing imaging over Taiwan and over terrestrial and oceanic regions of the entire earth. The images captured by FORMOSAT-2 during daytime can be used for land distribution, natural resources research, environmental protection, disaster prevention and rescue
work, etc. When the satellite travels into the eclipsed zone, it observes natural phenomena such as lightning in the upper atmosphere. The observation data can be used for further scientific experiments. Therefore, FORMOSAT-2 carries both "remote sensing" and "scientific observation" tasks in its mission. More valuably, it orbits above the Taiwan region twice every day. Thus, it is able to take daily images of the landscape in Taiwan. This provides important information to resolve how the landform changes before and after a disaster occurs. Figure 3 is an image of the Taipei-101 building taken by the satellite.
Fig. 2: The FORMOSAT-2 developed by the Taiwan NSPO.2
Fig. 3: Image of Taipei-101 (the current highest building in the world) taken by FORMOSAT-2.2
Another valuable satellite is FORMOSAT-3 (COSMIC),2,3 which was launched on April 14, 2006. The six spacecraft of the Constellation Observing System for
Meteorology, Ionosphere and Climate (COSMIC) measure the bending and slowing of microwave radio signals as they pass through Earth's atmosphere. The signals are transmitted from U.S. global positioning system (GPS) satellites to COSMIC's GPS science receivers, which were designed by NASA's Jet Propulsion Laboratory. These bending and slowing events, referred to as occultations, occur when the GPS satellite signals are interrupted as the satellites rise or set on Earth's horizon, blocking their transmission. By precisely measuring, to a few trillionths of a second, the time delay from this bending, scientists can infer information on atmospheric conditions such as air density, temperature, moisture, refractivity, pressure and electron density. This makes GPS radio occultation a powerful new tool for weather and climate forecasting and space weather research.
Fig. 4: Occultation map of COSMIC (From NSPO, Taiwan).2
COSMIC is currently feeding real-time, weather balloon-quality data on Earth's atmosphere every day, over thousands of points on Earth, as shown in Figure 4. Temperature and water vapor profiles derived from COSMIC will help meteorologists improve many areas of weather prediction and observe, research and forecast hurricanes, typhoons and other storm patterns over the oceans. Over time, the mission should be a valuable asset to scientists studying long-term climate change trends. COSMIC data will also help improve the forecasting of space weather, the geomagnetic storms in Earth's ionosphere. Those storms can disrupt communications around the world and affect electrical power grids. It has become one of the most valuable satellites for international weather research. In addition to the satellites, Taiwan has also established a supercomputing facility for the development of better simulation and modeling. Another supporting entity is the Earthquake Research Center, which has a complete program for developing better earthquake detection and warning systems.
RESEARCH INTEGRATION WITH PROJECTS ON CLIMATE AND ENVIRONMENTAL CHANGE1
It is certainly important to integrate the mitigation program with other projects related to climate and environmental change. The program has included several important topics to address the issues of global change. These topics include: (1) to assess the status of climate change, with particular emphasis on the current climate change and its potential to cause disaster in Taiwan; (2) to analyze the potential of floods and their effect on landslides, and the coastal disaster and sea-level rise; (3) to monitor land use change, including coastal, low-lying areas and urban land use; (4) to assess the socioeconomic impact; and (5) to apply the knowledge and experience established to construct the national adaptation policies.
SUMMARY
The National Program on Hazard Mitigation in Taiwan has been established for more than ten years. It has successfully implemented an R&D program for a more in-depth study of the sources of disaster and the development of technologies for disaster prevention and response. A Center for Disaster Reduction was also established under the national program. It has continued to promote disaster prevention- and response-related activities. It plays the role of coordinating, planning and implementing R&D on disaster prevention and response technology, and of deploying disaster prevention and response technologies in support of actual field tasks, and it has successfully applied R&D results to the disaster prevention and response system. It has further created education and training programs. By continuing the effort to advance the technology and, in the meantime, educating the government and the general public on the value and importance of hazard mitigation, we should be able to build a much better society with much less fear of the disasters brought about by global climate change.
ACKNOWLEDGMENTS
The author thanks the scientists in the National Center for Disaster Reduction in Taiwan and the National Space Organization for providing valuable materials in the preparation of this manuscript. The author also acknowledges financial support from Academia Sinica and the National Science Council.
REFERENCES
1. Annual reports of the National Program on Hazard Mitigation, and the National Center for Disaster Reduction.
2. Annual reports of the National Space Organization, Taiwan.
3. http://www.nasa.gov/vision/earth/lookingatearth/cosmicf-20061130.html; "Purveyors of the Cosmic 'Occult'".
SESSION 5 CLIMATE FOCUS: GLOBAL WARMING AND GREENHOUSE GASES
EXPONENTIAL ANALYSIS IN THE PROBLEM OF THE ASSESSMENT OF THE CONTRIBUTION OF GREENHOUSE GASES IN GLOBAL WARMING
MIKHAIL J. ANTONOVSKY
Carbon Dioxide Division, Institute of Global Climate and Ecology
Moscow, Russia
This paper investigates a mathematical representation of the response of atmospheric CO2 to the anthropogenic emission of carbon dioxide. This is an essential aspect of assessing the consequences of the different scenarios for anthropogenic emission used in models of the global carbon cycle. The Reports of Working Group I of the Intergovernmental Panel on Climate Change (IPCC) use exponential approximations of the Green's functions of these models, i.e., of their responses to an impulse input. The focus of attention of this paper is the assessment of the scientific value of the IPCC results on exponential approximation. The results of our analysis are based on the classical fact that the general problem of exponential approximation is strongly ill-posed in the mathematical sense. Study of the Reports of the IPCC issued in 1990, 1995, 2001 and 2007 [1-4] shows that an important role throughout the 20 years of activity of Working Group I of the IPCC is played by problems whose solution in the IPCC Reports uses the response functions of the global carbon cycle to a pulse input. These functions are given in the Reports as the exponential approximation
\[ \sum_{i=0}^{k} a_i \exp(-t/\tau_i) \qquad (1) \]
Thus the response function is described by a set of parameters {a_i, τ_i}, i = 0, 1, ..., k. The goal of the present report is to expound the results of an analysis of the applicability of such a description of the response function, based on the fundamental mathematical fact that the general task of exponential approximation of functions is strongly ill-posed, i.e., small changes of the response function can produce essential changes of the parameters {a_i, τ_i}. This is convincingly shown by the classical example of Lanczos (see below). Thus arises Task 1: to assess the scientific meaning of the parameters {a_i, τ_i} introduced in the reports of the IPCC. Task 1 has the following modification, Task 2: to find time intervals (time horizons) within which the presented solutions of the exponential approximation task are scientifically valid. In this report we discuss not the abstract problem of exponential approximation, but the problem of description of the response function (Green's function) of global carbon cycle models, which are initially described by a system of equations whose parameters have physical meaning. This position brings us to Problem 3:
do the parameters of the approximation {a_i, τ_i}, i = 0, 1, ..., k, link with the physical parameters of the model of the global carbon cycle? The relevance of the presented research is caused by the fact that the IPCC Reports, and the original papers on which these Reports are based, give the results of exponential approximations but do not contain answers to the questions formulated in Problems 1, 2 and 3, so it is impossible to assess the uncertainty brought in by the use of a function of the form (1). Let us single out the key problems in which response functions are used: Problem A, the assessment of GWP_i; Problem B, the assessment of the response of atmospheric CO2 to a scenario of CO2 emission into the atmosphere. Let us consider more carefully the matter of Problem A. Radiative forcing is understood as the change of the difference between the descending and rising flows of radiation (expressed in W/m2) at the tropopause as the result of a change in an external factor that controls climatic change, such as a change of the atmospheric CO2 concentration or a change of the flux from the sun. The Global Warming Potential (GWP) is an index (based on the radiative properties of well mixed greenhouse gases) measuring the radiative forcing of a unit mass of a given well mixed greenhouse gas in the present atmosphere, integrated over a chosen time horizon, relative to that of a unit mass of CO2. The role of this index is to express the influence of the i-th greenhouse gas emitted into the atmosphere in units of a corresponding mass of CO2 emitted into the atmosphere. "The Kyoto Protocol is based on GWPs of pulse emissions over a 100-year time frame" (from the IPCC 2007 Report). Several problems arise linked with the exact definition of the index GWP. The sources of these problems are linked with the selection of an adequate physical model for each greenhouse gas, the selection of the time horizon, and the selection of a scientifically grounded algorithm taking into account the input of the greenhouse gas. Currently the definition of GWP that was introduced in the Report of Working Group 1 (1990) is used, and it has been accepted for calculations under the Kyoto Protocol until nowadays (p. 210, IPCC Report 4, 2007):
\[ \mathrm{GWP}_i = \frac{\int_0^{TH} a_i(t)\, X_i(t)\, dt}{\int_0^{TH} a_{CO_2}(t)\, X_{CO_2}(t)\, dt} \qquad (2) \]
where a_i(t) is the radiative forcing of a unit mass of the i-th gas at time t, and X_i(t) is the portion of the emitted greenhouse gas that remains in the atmosphere at time t. As the denominator of GWP_i one takes the integral of the function X_CO2(t), since the radiative forcing of the different greenhouse gases is conventionally calculated relative to a_CO2 = 1. Thus in formula (2) the denominator is completely defined by the response function X_CO2(t).
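As an illustration of formula (2), the following is a minimal numerical sketch. The CO2 response uses the IPCC 2007 parameters from Table 1 below; the lifetime and (constant) radiative efficiency of the hypothetical gas i are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of formula (2): GWP_i as the ratio of time-integrated radiative
# forcing of gas i to that of CO2 over a time horizon TH. Assumes constant
# radiative efficiencies and a single-exponential decay for the hypothetical gas.
import math

A = [0.217, 0.259, 0.338, 0.186]   # a0..a3, IPCC 2007 column of Table 1
TAU = [172.3, 18.51, 1.186]        # tau1..tau3 (tau0 is infinite)

def x_co2(t):
    return A[0] + sum(a * math.exp(-t / tau) for a, tau in zip(A[1:], TAU))

def gwp(a_i, tau_i, a_co2, th, dt=0.1):
    """Numerically integrate the numerator and denominator of formula (2)."""
    num = den = 0.0
    steps = round(th / dt)
    for k in range(steps):
        t = k * dt
        num += a_i * math.exp(-t / tau_i) * dt   # a_i(t) * X_i(t)
        den += a_co2 * x_co2(t) * dt             # a_CO2(t) * X_CO2(t)
    return num / den

# hypothetical gas: 12-year lifetime, radiative efficiency 25 times that of CO2
print(round(gwp(a_i=25.0, tau_i=12.0, a_co2=1.0, th=100), 1))
```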
The content of Problem B is as follows. For a given model of the global carbon cycle, the response of atmospheric CO2 to a given emission scenario E(t) is calculated as a result of running this model with the scenario E(t) given at its input. In the case when the model is linear, the response of the model X_CO2(t) to a pulse input is its Green's function, i.e., the response of the model to the scenario E(t) is calculated as the convolution X_CO2(t) * E(t). At present more than 500 models of the global carbon cycle are known from the literature. The IPCC reports use results obtained with the models Bern [3], Box-Diffusion (BD) [8], Hilda [10], the 2D and 3D Princeton models [10], and others. The response of a model of the global carbon cycle to an impulse input, as was said above, is described by formula (1):
\[ X_{CO_2}(t) = \sum_{i=0}^{k} a_i \exp(-t/\tau_i) \]
The parameters {a_i, τ_i}, i = 0, 1, ..., k, are given in Table 1, composed by us from the IPCC material.
Table 1. The values of the parameters of the function (1), obtained by approximation of the Green's functions of the models of the global carbon cycle.

Model parameters X_CO2(t) | MRH (1987), Maier-Reimer and Hasselmann, 1987 | Report IPCC 1996 | Joos (1996); T.K. Berntsen et al., Tellus, 2005 | Report IPCC 2007
a0 | 0,142 | 0,130 | 0,1756 | 0,217
a1 | 0,241 | 0,333 | 0,1375 | 0,259
a2 | 0,323 | 0,261 | 0,1858 | 0,338
a3 | 0,206 | 0,166 | 0,2423 | 0,186
a4 | 0,088 | 0,11 | 0,2589 | -
τ0 | ∞ | ∞ | ∞ | ∞
τ1 | 313,8 | 414 | 421 | 172,3
τ2 | 79,8 | 58,5 | 70,6 | 18,51
τ3 | 18,8 | 18,6 | 21,4 | 1,186
τ4 | 1,7 | 4,14 | 3,4 | -
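To make the comparison of these parameter sets concrete, a short sketch (under the assumption that decimal commas in Table 1 are read as decimal points) evaluates X_CO2(t) for each column at a few time horizons:

```python
# Sketch: evaluate X_CO2(t) = a0 + sum_i a_i*exp(-t/tau_i) for the four
# parameter sets of Table 1 (a0 is the permanent fraction, tau0 = infinity).
import math

MODELS = {
    "MRH 1987":  ([0.142, 0.241, 0.323, 0.206, 0.088], [313.8, 79.8, 18.8, 1.7]),
    "IPCC 1996": ([0.130, 0.333, 0.261, 0.166, 0.110], [414.0, 58.5, 18.6, 4.14]),
    "J 1996":    ([0.1756, 0.1375, 0.1858, 0.2423, 0.2589], [421.0, 70.6, 21.4, 3.4]),
    "IPCC 2007": ([0.217, 0.259, 0.338, 0.186], [172.3, 18.51, 1.186]),
}

def response(a, tau, t):
    return a[0] + sum(ai * math.exp(-t / ti) for ai, ti in zip(a[1:], tau))

for name, (a, tau) in MODELS.items():
    print(name, [round(response(a, tau, t), 3) for t in (20, 40, 100)])
```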
The response function of a model is described as the exponential approximation of the function obtained as a result of running the model with a unit impulse given at its input. Thus we come to the classical mathematical problem: on an interval [0, T] a function F(t) is given such that F(0) = 1 and F(t) is decreasing; it is required to find a function
\[ f(t) = a_0 + \sum_{i=1}^{k} a_i \exp(-t/\tau_i), \qquad a_0 + \sum_{i=1}^{k} a_i = 1, \]
that approximates the function F(t) with a given precision.
Fig. 1: Exponential approximation of the response function F(t) with the help of presentation (1) for different numbers of items.
The curves in Figure 1 are constructed from the parameters of the models from Table 1: a) IPCC (1996) and b) IPCC (2007). The input of the different components is shown: the curve with index 0 corresponds to the component a_0; the curve with index 1 corresponds to a_0 + a_1 exp(-t/τ_1); the curve with index 2 to a_0 + a_1 exp(-t/τ_1) + a_2 exp(-t/τ_2); and so on. The curves in Figure 1 show which components in the additive exponential approximation of the response of the carbon cycle model form the difference between the curves on different time horizons. An important characteristic from the point of view of long-term assessment (more than 100 years) is the coefficient a_0, which gives the part of the unit impulse that remains in the atmosphere forever. Nevertheless, on time horizons of 10 to 200 years the role of the component a_1 exp(-t/τ_1) is very important. The Lanczos example, published in 1956, has become classical; the following functions were considered:
f2(t) = 0.305 exp(-1.58t) + 2.202 exp(-4.45t)
f1(t) = 1.5576 exp(-5t) + 0.8607 exp(-3t) + 0.095 exp(-t)
These are such that max |f2(t) - f1(t)| < 0.05 on the interval [0, 2]; at the same time it is easy to see that the difference between the corresponding parameters is quite large. Thus the problem of reconstruction of the parameters of the functions f2 and f1 on the interval [0, 2] is strongly ill-posed. At the same time, on the interval [-1, 0] for the same functions max |f2(t) - f1(t)| > 50. The calculation made by us shows that the interval [-1, 0] is a solution of Problem 2 for the functions f2 and f1. In our research the solutions of Problems 1 and 2 were obtained for the exponential approximations of the response functions with the parameters from Table 1. The results are presented in Figure 2 and Figure 3.
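A small numerical check of this example (using the coefficients as reconstructed above) makes the contrast visible:

```python
# Numerical check of the Lanczos example: f1 and f2 nearly coincide on [0, 2]
# although their amplitudes and exponents differ, while on [-1, 0] they diverge.
import math

def f1(t):
    return 1.5576 * math.exp(-5 * t) + 0.8607 * math.exp(-3 * t) + 0.095 * math.exp(-t)

def f2(t):
    return 0.305 * math.exp(-1.58 * t) + 2.202 * math.exp(-4.45 * t)

def max_diff(lo, hi, n=2001):
    ts = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    return max(abs(f1(t) - f2(t)) for t in ts)

print("max |f1 - f2| on [0, 2]:", round(max_diff(0.0, 2.0), 4))
print("max |f1 - f2| on [-1, 0]:", round(max_diff(-1.0, 0.0), 2))
```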
Fig. 2: The graphs of the response functions to a unit impulse of CO2 with the parameters from Table 1.
On the time horizon of 20 years the curves group in the following way: the curves of the 1990 and 1996 reports have values of about 0.7, and the curves of 2001 and 2007 have values of about 0.55. The latest expert assessments thus raise the assessment of the GWP of the other greenhouse gases (different from CO2) on this time horizon. On the time horizon of 40 years we can see the same tendency, but the curves inside the groups diverge. On the time horizon of 100 years all evaluations diverge and the assessment of 2007 becomes close to the assessment of 1990. Let us note that the curve IPCC 2007 on the time horizon of 100-200 years practically coincides with the curve MRH 1987. Further, for the parameters from Table 1 the index
\[ I(TH) = \int_0^{TH} X_{CO_2}(t)\, dt \]
was calculated, and the contribution of each of the items in the exponential approximation of the Green's function was shown.
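Since X_CO2(t) is a sum of exponentials, I(TH) has the simple closed form a_0·TH + Σ_i a_i τ_i (1 − exp(−TH/τ_i)), which the following sketch evaluates term by term (here for the IPCC 2007 parameters of Table 1):

```python
# Sketch: closed-form evaluation of I(TH) and of the contribution of each item,
# as displayed in Figures 3 and 4, for the IPCC 2007 parameters of Table 1.
import math

A, TAU = [0.217, 0.259, 0.338, 0.186], [172.3, 18.51, 1.186]

def integral_terms(th):
    terms = [A[0] * th]                               # item 0: permanent fraction a0
    terms += [a * tau * (1.0 - math.exp(-th / tau))   # items 1..k
              for a, tau in zip(A[1:], TAU)]
    return terms

for th in (20, 100, 200):
    terms = integral_terms(th)
    print("TH =", th, [round(x, 1) for x in terms], "I(TH) =", round(sum(terms), 1))
```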
Fig. 3: Input of the items of the function X_CO2 into the integral I(TH): a) from MRH 1987; b) from IPCC 1996 (curve 0: contribution of item 0; curve 0+1: sum of the contributions of items 0 and 1; and so on). In both cases a) and b) the contribution of item 4 to the value of the integral is so small that at the chosen scale it cannot be distinguished by eye.
Fig. 4: Input of the items of the function X_CO2 into the integral I(TH): c) from J 1996; d) from IPCC 2007. In case c (J 1996) item 4 plays a more considerable role than in the other cases. In case d there are only 4 items; accordingly, item 3 makes a relatively bigger contribution to the integral.
Fig. 5: The value of the integral I(TH) for the functions with all items, presented in Figs. 3 and 4, for time horizons from 100 to 1000 years.
In the long-term perspective the curve IPCC 2007 rises towards the curve IPCC 1996, and the curves MRH 1987 and J 1996 also converge, at a lower level. First of all this is determined by the value of the coefficient a_0. The difference between the maximal and minimal values of the integrals is between 15% and 20% on time horizons of 100 to 500 years. In the frame of Problem 3 we have considered the response function of CO2 given in the Report (WMO) 1998 [16]. This function is based on the "Bern" carbon cycle model (see IPCC, 1996) run for a constant mixing ratio of CO2 over a period of 500 years. An analytical fit to this response function has been derived by L. Bishop (AlliedSignal Inc., U.S., 1988) as a rational function R(t):
\[ R(t) = \frac{279400 + 72240\,t + 730.3\,t^2}{279400 + 107000\,t + 3367\,t^2 + t^3} \]
We digitized the function R(t) with a step of 1 year on the interval [0, 100] and organized 3 sets of data with steps of 1, 5 and 10 years. Table 2 gives the results of the exponential regression on these data.
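A sketch of this regression (under the assumptions that R(t) is as reconstructed above and that the fit enforces a_0 + a_1 + a_2 + a_3 = 1; the starting values p0 are our own choice) could look as follows:

```python
# Sketch of the regression behind Table 2: digitize R(t) on [0, 100] with steps
# of 1, 5 and 10 years and re-fit the exponential form of formula (1).
import numpy as np
from scipy.optimize import curve_fit

def R(t):
    num = 279400 + 72240 * t + 730.3 * t**2
    den = 279400 + 107000 * t + 3367 * t**2 + t**3
    return num / den

def model(t, a1, a2, a3, t1, t2, t3):
    a0 = 1.0 - a1 - a2 - a3                     # constraint a0 + a1 + a2 + a3 = 1
    return a0 + a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2) + a3 * np.exp(-t / t3)

for step in (1, 5, 10):                          # the three data sets of Table 2
    t = np.arange(0, 101, step, dtype=float)
    popt, _ = curve_fit(model, t, R(t), p0=[0.26, 0.34, 0.19, 170.0, 18.0, 1.2])
    print("step", step, np.round(popt, 4))
```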
Table 2. The result of exponential approximation of the function R(t): exponential regression results for the digitized function R(t) in fit (1) for data sets with steps of 1, 5 and 10 years.

Function X_CO2(t) parameters | Step 1 | Step 5 | Step 10
a0 | 0,2157 | 0,2225 | 0,2248
a1 | 0,2599 | 0,2599 | 0,2531
a2 | 0,3384 | 0,3367 | 0,3360
a3 | 0,1860 | 0,1857 | 0,1867
τ0 | ∞ | ∞ | ∞
τ1 | 173,11 | 165,8 | 163,40
τ2 | 18,52 | 18,18 | 18,45
τ3 | 1,860 | 1,171 | 1,5
St. err. | 0,00037 | 0,00036 | 0,00032
From Table 2 it is seen that the solution of the problem of restoration of the parameters is completely correct: the parameters restored from 10 points (step 10) differ insignificantly from the parameters restored from 100 points (step 1 year). Only the values of the parameter τ1 differ sizably. In the frame of Problem 2 we have analyzed the exponential approximations obtained on 5 intervals of digitization of the function R(t), with a step of one year on each of them (see Table 3). Let us designate by i[0,20], i[0,40] and so on the sets of points of the function R(t) on the corresponding intervals.
Table 3. The results of the exponential approximation of the function R(t).
Function X_CO2(t) parameters | i[0,20] | i[0,40] | i[0,60] | i[0,80] | i[0,100]
a0 | 0,2009 | 0,2100 | 0,2380 | 0,2160 | 0,2157
a1 | 0,2410 | 0,2656 | 0,2407 | 0,2597 | 0,2599
a2 | 0,3716 | 0,3385 | 0,3353 | 0,3383 | 0,3384
a3 | 0,1865 | 0,1860 | 0,1860 | 0,1860 | 0,1860
τ0 | ∞ | ∞ | ∞ | ∞ | ∞
τ1 | 333,33 | 178,53 | 151,81 | 173,55 | 173,11
τ2 | 19,692 | 18,52 | 18,45 | 18,52 | 18,52
τ3 | 1,1892 | 1,1860 | 1,1862 | 1,1860 | 1,860
St. err. | 0,00027 | 0,00003 | 0,000031 | 0,000035 | 0,000037
(Regression results for digitized points based on the function R(t).)
From this table it is seen that the values of the parameters τ_i for the regression on the intervals [0,20] and [0,40] differ essentially, but on the intervals [0,60], [0,80] and [0,100] they practically do not differ. From Table 3 it is also seen that the standard error is small enough, and hence as a result of the regression it is impossible to obtain meaningful links
between the parameters of the regression and the content parameters of the "Bern" model. In the frame of Problem 3 we note that the parameters of the rational approximation R(t) and of its exponential approximation X_CO2(t) are in no way linked with the content parameters of the "Bern" model of the global carbon cycle. Regrettably, this essential question is not addressed in the IPCC Report. Together with the theoretical-functional measure of proximity of the proposed approximation it is necessary also to consider the measures of proximity of the functionals of these functions entered in the formulation of Problems A and B: for Problem A the functional
\[ \int_0^{TH} X_{CO_2}(t')\,dt', \]
and for Problem B the functional
\[ \int_0^{TH} X_{CO_2}(t')\,E(t - t')\,dt'. \]
Our calculations for these are given on Figure 2 and Table 6. The idea of the level of scientific strictness in this area gives the following (that is forerunner of the research in this direction): Maier-Reimer and Hasselman, "Transport and storage of CO 2 in the ocean," Climatic Dynamics, 1987,2:63-90. " ... it is useful to represent the function G(t) (through a suitable fitting procedure) as a superposition
G(t)
=
I
k
a i exp( -
t /
r i)
i=O
of a number of exponentials of different amplitude ai and relaxation time ri. Amplitude ao IS) represents the asymptotic airborne fraction for the equilibrium response of ocean atmosphere system any finite- duration unit integral input function. The amplitudes aj may be interpreted as the relative capacity of the reservoirs, which are filled up independently by the atmospheric input at rates characterized by the relaxation time scale
co. rio
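For such an exponential superposition the problem-A functional J(T_H) has a simple closed form; the small sketch below is standard calculus applied to the fitted form (using the step-1 values of Table 2), not code from the paper.

```python
# Closed-form evaluation of J(T_H) for X(t) = a0 + sum_i a_i*exp(-t/tau_i):
#   J(T_H) = a0*T_H + sum_i a_i*tau_i*(1 - exp(-T_H/tau_i))
import math

a0  = 0.2157                        # the tau0 -> infinity term
a   = [0.2599, 0.3384, 0.1860]      # a1..a3, step-1 values of Table 2
tau = [173.11, 18.52, 1.860]

def J(T_H):
    return a0 * T_H + sum(ai * ti * (1.0 - math.exp(-T_H / ti))
                          for ai, ti in zip(a, tau))

for T_H in (100, 500, 1000):        # the horizons discussed around Fig. 5
    print(T_H, round(J(T_H), 1))
```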
From the literature, in the frame of problem B, scenarios of anthropogenic emission of CO2 into the atmosphere are known. We selected 4 of them (see Figure 6), which represent the whole set of scenarios well enough, as the centers of clusters. We investigated the input of the projections on time horizons of 20, 40 and 100 years. The time interval of the projection is (2010, 2110).
Fig. 6: The scenarios of anthropogenic emissions of CO2 into the atmosphere. Scenario 1: constant, 7.3 Gt/y. Scenario 2: BaU, E(t) = 2.6 exp(0.02 (t - 1958)) Gt/y. Scenario 3: realistic parabola, E(t) = 5.0 + 0.05 (t - 1958) + 0.0001 (t - 1958)^2 Gt/y. Scenario 4: parabola of stabilization, E(t) = 5.0 + 0.05 (t - 1958) - 0.0001 (t - 1958)^2 Gt/y.
Table 6 gives the projection of the accumulated amount of CO2 in the atmosphere, obtained on the basis of the different models and the selected emission scenarios.
Table 6. The total amount of CO2 (Gt) that could accumulate in the atmosphere from 2009 over periods of 20, 40 and 100 years as a result of anthropogenic emissions under the 4 scenarios, for the models MRH 1987, IPCC 1996, J 1996 and IPCC 2007.
Scenario, time horizon | MRH 1987 | IPCC 1996 | J 1996 | IPCC 2007
Scenario 1, 20 years | 110.5 | 114.3 | 95.3 | 96.7
Scenario 1, 40 years | 195.2 | 209.8 | 165.7 | 169.0
Scenario 1, 100 years | 399.1 | 420.9 | 328.9 | 343.3
Scenario 2, 20 years | 137.5 | 142.5 | 120.2 | 120.8
Scenario 2, 40 years | 310.3 | 324.6 | 266.0 | 269.0
Scenario 2, 100 years | 1427.6 | 1516.4 | 1216.6 | 1249.5
Scenario 3, 20 years | 118.3 | 122.4 | 102.8 | 103.6
Scenario 3, 40 years | 220.2 | 230.6 | 187.1 | 190.6
Scenario 3, 100 years | 497.2 | 534.8 | 419.5 | 436.5
Scenario 4, 20 years | 112.5 | 116.6 | 97.7 | 98.6
Scenario 4, 40 years | 220.2 | 209.8 | 165.7 | 169.0
Scenario 4, 100 years | 303.8 | 331.2 | 255.4 | 268.6
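A sketch of how such projections can be assembled by convolving an emission scenario with an exponential impulse response is given below. It is not the authors' code; only the Table 2 step-1 parameters are reproduced in this text, so they stand in for all four models, and the output is therefore illustrative rather than a reproduction of Table 6.

```python
# Illustrative sketch: accumulate atmospheric CO2 for the Fig. 6 scenarios by
# convolving each emission function E(t) with an exponential response X(t).
import numpy as np

a   = [0.2157, 0.2599, 0.3384, 0.1860]          # a0 has tau0 = infinity
tau = [np.inf, 173.11, 18.52, 1.860]            # step-1 values of Table 2

def X(t):
    t = np.asarray(t, dtype=float)
    return sum(ai * np.exp(-t / ti) for ai, ti in zip(a, tau))

scenarios = {                                    # emissions in Gt/y (Fig. 6)
    1: lambda t: np.full_like(t, 7.3),
    2: lambda t: 2.6 * np.exp(0.02 * (t - 1958)),
    3: lambda t: 5.0 + 0.05 * (t - 1958) + 0.0001 * (t - 1958) ** 2,
    4: lambda t: 5.0 + 0.05 * (t - 1958) - 0.0001 * (t - 1958) ** 2,
}

def accumulated(E, horizon, start=2009.0, dt=0.1):
    # amount remaining airborne at the end of the horizon:
    # integral of E(t') * X(t_end - t') dt' over [start, start + horizon]
    t = np.arange(start, start + horizon + dt, dt)
    return np.trapz(E(t) * X(t[-1] - t), t)

for s, E in scenarios.items():
    print(f"Scenario {s}:",
          [round(float(accumulated(E, h)), 1) for h in (20, 40, 100)], "Gt")
```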
Analysis of Table 6 shows that on the time horizon of 20 years the uncertainty of model selection is only a little smaller than the uncertainty of scenario selection. On the time horizon of 40 years the difference between the CO2 levels for different scenarios reaches 40%, while the difference between models is about 20%. On the time horizon of 100 years the CO2 levels for different scenarios differ several-fold, while the difference between models is between 20% and 30%.
In conclusion we would like to stress that the parameters a_i, tau_i describing the exponential approximation of the Green's function of the global carbon cycle are, generally speaking, not directly linked with the substantial physical parameters of the initial model. In some cases (for example, when the "Lanczos effect" takes place) they cannot in principle be linked directly with the substantial physical parameters. Let us note that in those models of the carbon cycle in which the response to emission scenarios is expressed as a sum of exponentials (for example, box-diffusion models) it is possible to obtain explicit formulas expressing the response parameters through the substantial physical parameters. This is an example of direct links. Our investigation shows that on the time interval of 100 years, accepted by the Kyoto Protocol, the "Lanczos effect" does not take place, and hence the possibility of giving substantial physical sense to the values of the parameters a_i, tau_i is not excluded. Without the analysis carried out here such a conclusion could not be drawn. In the IPCC Reports and in the papers based on them this necessary analysis has not been done.
REFERENCES
1. IPCC (1990), Climate Change 1990, Cambridge University Press, London, UK.
2. IPCC (1995), Climate Change 1995, Cambridge University Press, London, UK.
3. IPCC (2001), Climate Change 2001, Cambridge University Press, London, UK.
4. IPCC (2007), Climate Change 2007, Cambridge University Press, London, UK.
5. Maier-Reimer and Hasselmann (1987) "Transport and storage of CO2 in the ocean," Climate Dynamics, 2:63-90.
6. D. Lashof and D. Ahuja (1990) "Relative contributions of greenhouse gas emissions to global warming," Nature, vol. 344.
7. K. Caldeira and J.F. Kasting (1993) "Insensitivity of global warming potentials to carbon dioxide emission scenarios," Nature, vol. 366.
8. M.Ya. Antonovsky, V.M. Buchstaber, V.A. Pivovarov (1995) "Simulation, Global Analysis, and Interpretation in Carbon Cycle Studies," Meteorology and Hydrology.
9. L.D. Harvey (1993) "A guide to global warming potentials," Energy Policy.
10. F. Joos et al. (1996) "An efficient and accurate representation of complex oceanic and biospheric models of anthropogenic carbon uptake," Tellus.
11. M.Ya. Antonovsky, V.M. Buchstaber, V.A. Pivovarov (1997) "Simulating Consequences of Various Scenarios of Anthropogenic Carbon Emissions," Meteorology and Hydrology.
12. M.Ya. Antonovsky, V.M. Buchstaber, V.A. Pivovarov (1998) "Analysis of Uncertainties in the Problem of the Assessment of the Contributions of Different Greenhouse Gas Emissions to Global Warming," Meteorology and Hydrology.
13. A.A. Istratov and O.F. Vyvenko (1999) "Exponential analysis in physical phenomena," Review of Scientific Instruments, vol. 70.
14. Steven J. Smith and T.M.L. Wigley (2000) "Global Warming Potentials: 2. Accuracy," Climatic Change, 44.
15. Bernstein et al. (2006) "Response of climate to regional emissions of ozone precursors: sensitivities and warming potentials," Tellus.
16. Scientific Assessment of Ozone Depletion: 1998, WMO, Global Ozone Research and Monitoring Project, Report No. 44, Section 10.4.4, Global Warming Potential.
SESSION 6 ENERGY, CLIMATE, POLLUTION AND LIMITS OF DEVELOPMENT FOCUS: ADVANCED TECHNOLOGIES AND STRATEGIES IN CHINA FOR MEETING THE ENERGY, ENVIRONMENT AND ECONOMY PREDICAMENT IN A GREENHOUSE CONSTRAINED SOCIETY
MYTHS AND REALITIES ABOUT ENERGY AND ENERGY-RELATED CO2 EMISSIONS IN CHINA
MARK D. LEVINE
Lawrence Berkeley National Laboratory, Environmental Energy Technologies Division, Berkeley, California, USA
Not A Myth:
• China has the cities with the worst air pollution in the world.
• For the first time, policies to improve air quality throughout China are taking effect.
• There are reasons to be hopeful that this problem will be addressed with some vigor over the coming decade.
Myth: China's Subsidized Energy Prices Give Firms a Competitive Edge.
Reality: China's Energy Prices are Mostly at International Levels or Higher.
[Figure: energy price indices in China: overall industry, power, coal and petroleum.]
Residents of Guangzhou pay more ($0.16/kWh) for electricity than residents of San Francisco. Natural gas prices in Shanghai are the same as in San Francisco ($10/mcf). Coal prices in China (about $150/t) are now higher than in the U.S.
Myth: Energy demand grows faster than GDP in developing countries during periods of industrialization.
Reality: China, virtually unique in the developing world, has demonstrated since 1980 that this need not be the case.
[Figure: energy consumption and GDP in China since 1980.]
However: from 2001 to 2005, energy demand in China changed course radically, growing more rapidly than GDP.
Myth: China is profligate in its use of energy and becoming more so.
Reality: Per capita energy use in China is only 1/8 that of the U.S. and 1/4 that of the EU.
[Figure: per capita energy use: U.S., W. Europe, China.]
Not a Myth: China has caught up with and surpassed the United States as the largest emitter of energy-related CO2; however:
Reality: By any measure of contribution to atmospheric CO2, the Chinese have done far less harm than the United States.
Annual energy-related carbon emissions in China have been growing rapidly since 2001.
[Figure: annual energy-related carbon emissions in China, 1950-2005.]
But per capita emissions are much lower than those of the U.S.
[Figure: per capita emissions of the U.S. and China, 1950-2004, with the global average shown for comparison.]
And cumulative per capita emissions are very much smaller than those of the U.S.
[Figure: cumulative per capita emissions since 1950: U.S., EU, PRC.]
Myth: China is doing little to reduce its growth of CO2 emissions.
Reality: China's target of a 20% energy intensity reduction by 2010 corresponds to a 1.5 billion metric ton CO2 reduction relative to a fixed energy intensity baseline. China is on course to achieve much or all of the goal by 2010.
Myth: China is inefficient in its energy use and becoming more so.
Reality: Industry consumes 70% of energy in China, and energy intensities within industrial sub-sectors continue to decline.
[Figure: energy intensity of industrial sub-sectors (including textiles), 1995-2003.]
Myth: China's vast consumption of coal dwarfs any global attempt to address climate change emissions.
Reality: On a per capita basis, China consumes just slightly more than half as much coal as the U.S.
[Figure: per capita coal consumption by country. Source: BP Statistical Review of World Energy 2008; World Bank, World Development Indicators database 2008.]
Myth: China's vast coal reserves, which it is bound to use, will swamp any effort to tackle global climate.
Reality: On a per capita basis, China is not well-endowed with coal.
[Figure: proved coal reserves per capita by country. Source: BP Statistical Review of World Energy 2008; World Bank, World Development Indicators database 2008.]
Myth: China is hogging the world's oil imports.
Reality: China's imports, while growing, remain a very small part of traded oil on world markets.
[Figure: world oil demand, world oil trade, China demand and China net imports.]
TECHNOLOGIES AND POLICIES FOR THE TRANSITION TO A LOW CARBON ENERGY SYSTEM IN CHINA
ZHANG XILIANG
Institute of Energy, Environment and Economy, Tsinghua University, Beijing, P.R. China
GDP, ENERGY PRODUCTION AND ENERGY CONSUMPTION IN CHINA
[Figure: GDP, energy production and energy consumption trends in China.]
ENERGY INTENSITY IN CHINA
[Figure: energy intensity trend in China.]
ENERGY FLOWS IN CHINA
[Figure: China energy flow chart: coal, crude oil, natural gas, nuclear, wind, biomass. Unit: 100 million tce.]
CHINA'S COAL FLOW IN 2006
Power generation: 47%; industry: 21%; coking: 19%.
[Figure: China's coal flow chart, 2006.]
CHINA'S CRUDE OIL FLOW IN 2005
[Figure: China's crude oil flow chart, 2005. Total oil consumption = 330.9 Mtoe; transportation = 149.06 Mtoe (45% share). Unit: Mtoe. Source: adapted from China Energy Statistical Yearbook 2006. Copyright © Tsinghua-BP Clean Energy Research and Education Center.]
CHINA'S NATURAL GAS FLOW IN 2006
[Figure: China's natural gas flow chart, 2006 (100 million cu.m); exports 29.0, imports and stock 9.5. Note: "Others" includes farming, forestry, animal husbandry, fishery, water conservancy, construction, transport, storage and post, wholesale and retail trade, hotels, restaurants, etc. Data source: China Energy Statistical Yearbook 2007. Copyright © Tsinghua-BP Clean Energy Research and Education Center.]
Nearly 75% of the total demand is from industry (mainly for raw chemical materials and chemical products).
U.S. AND CHINA: ANNUAL AND CUMULATIVE EMISSIONS
[Figure: comparison of the U.S. and China: annual emissions (2007), cumulative emissions (1800-2007) and per capita emissions (2005). Source: U.S.-China Roadmap, Pew Center, 2009.]
INDUSTRIALIZATION IN CHINA
[Figure: GDP by sector (services, industry, agriculture) in China.]
URBANIZATION RATE (%) IN CHINA
[Figure: urbanization rate in China, 2005-2050.]
ENERGY CONSUMPTION IN THE TRANSPORTATION SECTOR
[Figure: energy consumption in the transportation sector by mode (including air and pipeline), 1990-2005.]
OUTPUT OF CHINA'S AUTO INDUSTRY SINCE 2000
[Figure: annual output of China's auto industry since 2000.]
HIGHWAY VEHICLE POPULATION IN CHINA SINCE 2000
[Figure: highway vehicle population in China, 2000-2007.]
CHANGES IN AUTO MARKET AND STOCK
[Figure: share of newly registered vehicles by type, 2002-2007, and stock share by vehicle type, 2002-2007 (passenger cars, buses, trucks of different classes, mini trucks, others).]
FREIGHT VEHICLE STOCK SCENARIO
[Figure: freight vehicle stock scenario to 2050: large, middle and small scale trucks, mini trucks and rural vehicles.]
PASSENGER VEHICLE STOCK SCENARIO
[Figure: passenger vehicle stock scenario to 2050: private cars, business cars, taxis, buses, middle buses and motorcycles.]
AUTOMOTIVE ENERGY CONSUMPTION SCENARIO
[Figure: automotive energy consumption scenario to 2050.]
FINAL ENERGY DEMANDS BY SECTOR
[Figure: final energy demand by sector, 2005-2050: transportation, industry, construction, urban residential, rural residential and service.]
FINAL ENERGY DEMANDS BY ENERGY CARRIER
[Figure: final energy demand by energy carrier, 2005-2050: coal, coke, coal gas, oil, natural gas, heat and electricity.]
REFERENCE SCENARIO OF CHINA'S PRIMARY ENERGY SUPPLY
[Figure: reference scenario of primary energy supply, 2010-2050: coal, oil, natural gas, nuclear, hydro, wind, solar, biomass and other renewables.]
PRIMARY ENERGY SUPPLY: CO2 EMISSION AND OIL IMPORT CONSTRAINED SCENARIO
[Figure: primary energy supply under the CO2 emission and oil import constrained scenario, 2005-2050.]
CO2 EMISSION
[Figure: CO2 emission trajectories, 2005-2050, under the reference, double control and CO2 emission control scenarios.]
THE INCREMENTAL COST FOR LOW CARBON ENERGY SYSTEM TRANSFORMATION WILL BE SIGNIFICANT
[Figure: incremental cost, 2010-2050, under the dual constraints, CO2 emission constraint and reference scenarios.]
CHINA'S ENERGY DILEMMA
Based on current scientific and technological knowledge, it is really hard for China to achieve a sustainable energy system transformation in a cost-effective way.
TECHNOLOGIES FOR ENERGY EFFICIENCY IMPROVEMENT
• Research and development of new energy efficiency technologies
• Wide deployment of cost-effective energy efficiency technologies
• Re-adjustment of the economic structure and product portfolio
[Figure: unit product energy consumption, share of heavy industry, and change in the energy intensity of GDP, 1990-2005.]
Though energy consumption per unit of major products decreased, outputs increased rapidly from 2000 to 2005, resulting in an increase in energy intensity.
HIGHLIGHTS OF RECENT ACCOMPLISHMENTS: ENERGY EFFICIENCY IN CHINA
[Figure/Table: thermal power mix as of end 2007; energy efficiency of different sizes of coal power units in China in 2006 (unit sizes from 6 MW to 1000 MW and the corresponding coal consumption of power supply, gce/kWh); trend of coal consumption of power supply, 1993-2007. Sources: National Bureau of Statistics of China (NBSC), National Development and Reform Commission (NDRC), Asian Development Bank (ADB).]
DEVELOPMENT OF FUTURE NUCLEAR-POWER SYSTEMS
[Figure: evolution of nuclear power plant generations (Generation I through Generation IV), about 1950-2030; Generation III+/IV goals include economics, enhanced safety, minimal waste and proliferation resistance. Source: IPCC AR4, 2007.]
CHINA RENEWABLE ENERGY AS OF 2008
Renewable energy consumption: 250 Mtce, out of a primary energy consumption of about 2,850 Mtce.
WIND POWER CAPACITY: more than 12,000 MW
[Figure: cumulative and incremental wind power capacity in China, 1995-2008. Source: Li & Gao, 2008.]
LIFE CYCLE ENERGY USE OF GASOLINE CAR AND EV (KJ/KM)
[Figure: life cycle energy use of a gasoline car versus an electric vehicle, kJ/km.]
LIFE CYCLE GHG EMISSION OF GASOLINE CAR AND EV (G CO2/KM)
[Figure: life cycle GHG emissions of a gasoline car versus an electric vehicle, g CO2/km.]
RELATIVE VALUE OF LIFE CYCLE FOSSIL FUEL USE AND GHG EMISSION FOR DIFFERENT FUELS TO DRIVE THE SAME DISTANCE
[Figure: relative life cycle fossil energy use and GHG emissions for different fuels (including coal-based methanol and first-generation biofuel), normalized to the same driving distance, 2008.]
CO2 CAPTURE AND STORAGE (CCS)
[Figure: CCS routes: post-combustion, pre-combustion, oxyfuel and industrial processes, with coal, gas and biomass feedstocks and applications such as gas processing, ammonia and steel. Source: Lin, 2008.]
Energy penalty: 7-15 percentage points of reduction in efficiency; economic cost: 20-60 $/t CO2.
NOVEL POLYGENERATION SYSTEM WITH CO2 RECOVERY
[Figure: novel polygeneration system with CO2 recovery producing liquid fuel (new pre-combustion capture and post-synthesis reaction; 51%, 36%).]
PORTFOLIO OF ENERGY TRANSFORMATION TECHNOLOGIES FOR THE POWER SYSTEM IN CHINA
[Figure: power generation technology portfolios, 2005-2050, under three scenarios: PC/SC/USC/IGCC/CCS, polygeneration, polygeneration+CCS, natural gas, fuel oil, hydro, nuclear, wind, biomass, solar. Source: Tsinghua University ALTENERGY Model output.]
BASIC THEMES OF CHINA ENERGY STRATEGY
• Giving priority to energy conservation;
• Relying on domestic resources;
• Encouraging diverse patterns of development;
• Relying on science and technology;
• Protecting the environment; and
• Increasing international cooperation for mutual benefit.
From the China Energy Policy White Paper 2007
RECENT EFFORTS TOWARD SUSTAINABLE ENERGY SYSTEM TRANSFORMATION: ENERGY LEGISLATION
• Renewable Energy Law (2006)
• Energy Conservation Law (amended in 2007)
• Circular Economy Law (2008)
• Laws under revision: Mineral Resources Law; Coal Industry Law; Electric Power Law
• Laws under drafting: Energy Law; Law on the Protection of Oil and Natural Gas Pipelines
ENERGY POLICY IN CHINA
• Command-and-control
• Market-based instruments
• Research and development
• Information provision
RECENT EFFORTS TOWARD SUSTAINABLE ENERGY SYSTEM TRANSFORMATION: BINDING TARGETS
• The Outline of the 11th Five-Year Plan for National Economic and Social Development of the People's Republic of China (2006): energy intensity going down by 20 percent from 2005 to 2010; SO2 emissions and COD discharge going down by 10 percent. Measures: retirement of 50 GW of small-sized coal-fired power plants; energy efficiency standards for new project approval; and disaggregation of the national targets into provincial targets.
• Medium- and Long-term Program for Renewable Energy Development (2007): increasing renewable energy consumption to 10 percent of total energy consumption by 2010 and 15 percent by 2020.
RECENT EFFORTS TOWARD SUSTAINABLE ENERGY SYSTEM TRANSFORMATION: ECONOMIC INCENTIVES
• Fiscal Fund for Subsidizing Energy Conservation Projects: RMB 7 billion/year
• Supporting measures of the Renewable Energy Law: premium/allowance for renewable electricity of 0.25 yuan (about 4 US cents)/kWh; electricity surcharge for renewable electricity premium of 0.02 yuan/kWh; Fiscal Fund for Renewable Energy Development; tax credit for wind farm and biogas power projects
• State Council's Regulation on Pollutant Charge Levy and Use: levying pollutant emission charges; setting up a fiscal fund for environmental protection
• Subsidizing renewable energy projects in rural areas
RECENT EFFORTS TOWARD SUSTAINABLE ENERGY SYSTEM TRANSFORMATION: TECHNOLOGY R&D
• The Outline of the National Plan for Medium- and Long-term Scientific and Technological Development (2006-2010) gives top priority to the development of energy technologies
• 2.5 billion yuan for climate-related science and technology development during the 10th Five-Year Plan period (2001-2005)
• 7 billion yuan has been used for science and technology development in energy conservation and addressing climate change since 2006
FURTHER EFFORTS TOWARD LOW CARBON ENERGY SYSTEM TRANSFORMATION
• Strengthening R&D of radically innovative sustainable energy technologies and systems;
• Enhancing the domestic manufacturing capacity of low carbon energy technologies and systems; and
• Playing a leading role in international technology collaborations to promote effective transfer of low carbon energy technology and know-how among countries.
INDICATIVE TRAJECTORY OF LOW CARBON ENERGY SYSTEM TRANSFORMATION IN CHINA
[Figure: indicative trajectories of CO2 emissions and of the CO2 intensity of GDP in China through 2050.]
ASSESSMENT OF CO 2 STORAGE POTENTIAL IN OIL/GAS-BEARING RESERVOIRS IN SONGLIAO BASIN OF CHINA
MINGYUAN LI, HUA ZHOU, MIN WANG, BO PENG AND MEIQIN LIN
EOR Research Center, China University of Petroleum, Beijing, P.R. China
ABSTRACT
This work is one part of the Near Zero Emissions Coal (NZEC) initiative project carried out in cooperation between China and the UK. In order to assess the CO2 storage potential in the oil-bearing reservoirs of Songliao basin of China, an assessment model is developed based on the geological conditions and the properties of rock, crude oil and formation water in Songliao basin. The CO2 storage potential in the oil-bearing reservoirs of Songliao basin is assessed by calculating, with the assessment model, the CO2 storage capacity in the main oil-bearing reservoirs of the Daqing and Jilin oil fields in the basin. The potential of enhanced oil recovery (EOR) by CO2 injection in Songliao basin is also evaluated in the work. The results show that the oil/gas-bearing reservoirs in Songliao basin are suitable for CO2 geological storage and that the total storage capacity of CO2 is about 506.7 Mt if CO2 is stored in a supercritical state.
Key words: CO2 storage, CO2 storage potential, CO2 EOR, Songliao basin.
INTRODUCTION
With the development of economies and an increase in human activities, global emissions of greenhouse gases (GHG) such as carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O) and chlorofluorocarbons (CFCs) have increased rapidly, leading to global climate change and ocean acidification with severe consequences for ecosystems and for human society.1 CO2 is responsible for about 64% of the greenhouse effect among the greenhouse gases, and the average concentration of CO2 in the atmosphere has risen from a pre-industrial level (1750) of 280 parts per million by volume (ppmv) to over 370 ppmv currently and, if unabated, is projected to reach 1100 ppmv by 2100.2-4 The anthropogenic CO2 emitted into the Earth's atmosphere originates mainly from the burning of fossil fuels to produce energy, which provides 75%-85% of the world's energy demand.5,6 Even though increasing the efficiency of energy usage and/or developing lower-carbon or non-carbon energies (e.g., natural gas, nuclear, wind and biomass) to replace high carbon fuels (coal and oil) can reduce the accumulation of CO2 in the atmosphere,7,8 fossil fuels will still remain a major component of the world's energy supply in the near future9 because of their inherent advantages, such as availability, competitive cost, ease of transportation and storage, and large resources.10 The International Energy Outlook 2005 (IEO2005) published by the U.S. Department of Energy (DOE) shows that China was the world's second largest emitter of CO2 after the United States in 2002, accounting for 13.6% of the world's total CO2 emissions (approximately 3.3 Gt of CO2), and predicts that CO2 emissions in China will grow
by 4.0% per year, reaching 8.1 Gt of CO2 by 2025, which is about 21% of the world's total CO2 emissions, making China the world's largest emitter of CO2.11 Therefore, the reduction of anthropogenic CO2 emissions in China is very important and urgent for addressing climate change.
Carbon dioxide capture and storage (CCS) technologies have a critical role to play in mitigating carbon emissions and averting dangerous climate change. CCS involves capturing the CO2 in fossil fuels either before or after combustion and storing it for the long term in deep geological formations such as depleted oil or gas fields. As part of the storage process, the captured CO2 can be used to enhance hydrocarbon (oil, gas or coal-bed methane) production. CCS technology can reduce CO2 emissions from large industrial sources and coal-fired power plants by around 85%. CCS therefore has the potential to be an essential technology for reducing CO2 emissions significantly while allowing the continued use of fossil fuels for energy security, without damaging the climate.
As one of the greenhouse gases, CO2 causes the climate problem, but as a gas CO2 is very important for enhanced oil recovery (EOR). The United States carried out 82 CO2 EOR projects from 1986 to 2006 (Table 1); the successful and promising projects make up 87.8% of the total.12

Table 1. CO2 EOR projects in the U.S. (1986-2006).
 | Successful | Promising | Not evaluated | Discouraging
Project number | 56 | 16 | 5 | 5
Rate (%) | 68.3 | 19.5 | 6.1 | 6.1
Among the 72 successful and promising projects, 88.7% have reservoir porosity less than 20% (Table 2) and 79.2% have reservoir permeability less than 50 mD (Table 3).
[Table 2: distribution of the successful and promising CO2 EOR projects (1986-2006) by reservoir porosity. Table 3: distribution by reservoir permeability.]
Table 4 (broken down into three parts) shows that the share of CO2 EOR among the gas injection EOR projects in the U.S. increased year by year from 1986 to 2006, reaching 68.22% with an oil production of 234,420 b/d in 2006.12 This shows clearly that CO2 EOR is one of the most important EOR technologies applied in the United States. Meanwhile, 20-30 million tonnes of CO2 are injected into oil-bearing reservoirs every year, and about 60% of the injected CO2 is trapped in the reservoirs.
Table 4a. EOR projects by gas injection in the U.S. (1986-1992); production (b/d) and share of gas-injection EOR production (%).
 | 1986 b/d | % | 1988 b/d | % | 1990 b/d | % | 1992 b/d | %
Gas | 33,767 | 31.20 | 25,935 | 19.80 | 55,386 | 29.05 | 113,072 | 37.94
CO2 miscible | 28,440 | 26.28 | 64,192 | 49.00 | 95,591 | 50.14 | 144,973 | 48.65
CO2 immiscible | 1,349 | 1.25 | 420 | 0.32 | 95 | 0.05 | 95 | 0.032
Nitrogen | 18,510 | 17.10 | 19,050 | 14.54 | 22,260 | 11.68 | 22,580 | 7.58
Flue gas | 26,150 | 24.16 | 21,400 | 16.34 | 17,300 | 9.08 | 11,000 | 3.69
Other | - | - | - | - | - | - | 6,300 | 2.11
Total | 108,216 | | 130,997 | | 190,632 | | 298,020 |

Table 4b (continued). EOR projects by gas injection in the U.S. (1994-2000).
 | 1994 b/d | % | 1996 b/d | % | 1998 b/d | % | 2000 b/d | %
Gas | 99,693 | 34.54 | 96,263 | 32.16 | 102,053 | 32.55 | 124,500 | 37.87
CO2 miscible | 161,486 | 55.95 | 170,715 | 57.03 | 179,024 | 57.10 | 189,493 | 57.64
CO2 immiscible | - | - | - | - | - | - | 66 | 0.02
Nitrogen | 23,050 | 7.99 | 28,017 | 9.36 | 28,117 | 8.97 | 14,700 | 4.47
Flue gas | - | - | - | - | - | - | - | -
Other | 4,400 | 1.52 | 4,350 | 1.45 | 4,350 | 1.39 | 0 | 0
Total | 288,629 | | 299,345 | | 313,544 | | 328,759 |

Table 4c (continued). EOR projects by gas injection in the U.S. (2002-2006).
 | 2002 b/d | % | 2004 b/d | % | 2006 b/d | %
Gas | 95,300 | 32.04 | 97,300 | 30.61 | 95,800 | 27.56
CO2 miscible | 187,410 | 63.00 | 205,775 | 64.73 | 234,420 | 67.44
CO2 immiscible | 66 | 0.022 | 102 | 0.032 | 2,698 | 0.78
Nitrogen | 14,700 | 4.94 | 14,700 | 4.62 | 14,700 | 4.23
Flue gas | - | - | - | - | - | -
Other | 0 | 0 | 0 | 0 | 0 | 0
Total | 297,476 | | 317,877 | | 347,618 |
There are 632 million tonnes of oil in low permeability reservoirs (permeability <50 mD) discovered in China.13 This oil is difficult or impossible to recover by water injection but is suitable for CO2 injection. In order to mitigate climate change and increase the enhanced oil recovery from these low permeability reservoirs, the following main projects have been carried out in China since 2006.
1. Research for Utilizing Greenhouse Gas as a Resource in EOR and Geological Storage, a 973 project (National Basic Research Program) authorized by the Ministry of Science and Technology (MOST) of China in 2006.
2. Utilizing Greenhouse Gas as a Resource in EOR and Storage in Oil-bearing Reservoirs, a key research project of PetroChina in 2007.
3. Pilot Test of CO2 EOR and Storage in Jilin Oil Field, one of the key pilot projects of PetroChina in 2007.
4. Research on CO2 Capture from Coal-fired Power Plants, EOR and Storage in Oil-bearing Reservoirs, a key research project and pilot test of SINOPEC in 2008.
Coal is China's primary fuel for power generation, and will almost certainly remain so for the foreseeable future. At present China's installed capacity of power generation plant totals about 510 GWe, with over 80% of that based on coal. The Near Zero Emissions Coal (NZEC) initiative was announced as part of the EU-China Partnership on Climate Change at the EU-China Summit in September 2005. It was agreed that both partners would aim "to develop and demonstrate in China and the EU advanced, near-zero emissions coal technology through carbon capture and storage" by 2020. Recently, at the UK-China Summit 2009, both countries expressed their hope that demonstration could be achieved by 2015. The UK-China NZEC project was started in 2007. AEA Energy & Environment is the UK Project Coordinator, working alongside ACCA21 as the Chinese Project Coordinator. The work packages of the NZEC project include (1) knowledge sharing and capacity building; (2) future technology perspectives; (3) case studies for CO2 capture; (4) CO2 storage potential; and (5) policy assessment. The main aim of the NZEC project is to achieve a common understanding of the potential applications for a range of carbon capture technologies in the power generation sector and to build capacity for evaluating storage potential and performing appropriate first-stage site characterization for site selection in China.
China University of Petroleum (Beijing) is the Chinese lead for work package 4 of the NZEC project. The aim of the work package is to assess the CO2 storage potential in China and to undertake preliminary characterization for the selection of CO2 storage sites. The objectives of the work package are to provide information on the future potential for CO2 storage, both as an additional benefit of enhanced oil recovery (EOR) and enhanced coal-bed methane (ECBM) recovery by CO2 injection, and as direct storage in saline aquifers in a range of basins. Because CO2 EOR offers both CO2 geological storage, reducing CO2 emissions to the atmosphere, and economic revenue through enhanced oil recovery, the assessment of CO2 storage potential in oil/gas-bearing reservoirs is the first option of the NZEC project. In this paper, in order to assess the CO2 storage potential in the oil-bearing
reservoirs of Songliao basin of China, an assessment model was developed to calculate the CO2 storage capacity in the basin. The CO2 storage potential in the oil-bearing reservoirs of Songliao basin is assessed by calculating, with the assessment model, the CO2 storage capacity in the main oil-bearing reservoirs of the Daqing and Jilin oil field complexes in the basin. The potential of enhanced oil recovery (EOR) by CO2 injection in Songliao basin is also evaluated in the work.
GEOGRAPHIC LOCATION OF SONGLIAO BASIN14,15
The Songliao basin is the largest sedimentary basin in the northeast of China. The basin is located at longitudes 119°40'E-128°24'E and latitudes 42°25'N-49°23'N. The length of the basin from north to south is 750 km and its width from east to west is 330-370 km, with an area of about 256x10^3 km2. The main part of the basin is located in Heilongjiang and Jilin provinces, the western and southwestern parts are located in Inner Mongolia, and the southern part is in Liaoning province (Figure 1). Songliao basin is divided into two main parts by the Nen River and the Songhua River. The northern Songliao basin is about 119.5x10^3 km2 and the southern Songliao basin is about 136x10^3 km2 in size. Songliao basin has been the largest oil and gas producing area in China for the last 50 years, with a current annual oil production of around 50 million tonnes. There are two large oil fields in the basin: the Daqing oil field complex, in the central depression area of the northern Songliao basin, and the Jilin oil field complex, in the southern Songliao basin (Figure 1).
Fig. 1: Geographic location of Songliao basin.
CO2 STORAGE POTENTIAL IN SONGLIAO BASIN
Calculation model of CO2 storage capacity
After study of the geology, stratigraphy, tectonic structure, cap-rock, faults and
lithology of Songliao basin, it is shown that the oil/gas-bearing reservoirs in the basin are suitable for storage of CO2. The assessment model of CO2 storage capacity has been developed based on the geological conditions and the properties of the rock, the crude oil and the formation water of the Daqing and Jilin oil field complexes in Songliao basin. The equation used for calculating the CO2 storage capacity in oil-bearing reservoirs and water formations is as follows:

M(CO2) = M1 + M2 + M3 + M4    (1)

where
M(CO2) - total storage capacity of CO2 (m3)
M1 - storage capacity of CO2 dissolved in oil and water in the oil-bearing reservoir
M2 - storage capacity of CO2 dissolved in formation water
M3 - storage capacity of CO2 in the oil-bearing reservoir, i.e., the space occupied by CO2 during CO2 flooding
M4 - storage capacity of CO2 reacted with rock

Because data on the reaction of CO2 with the reservoir rock are not available for the Daqing and Jilin oil field complexes, M4 is not evaluated in this work. Because the volume and porosity of the water formation in Songliao basin are unknown, the volume and porosity of the water formation are assumed to be the same as those of the oil-bearing reservoir, and equation (1) is modified as follows:

(2)

with
Ef - overall sweep efficiency (fraction), Ef = 15-25%
A - area of the oil-bearing reservoir (m2)
h - thickness of the reservoir (m)
phi - porosity of the reservoir (fraction)
So - oil saturation in the reservoir (fraction)
Ro(CO2) - CO2 dissolved in oil (fraction)
Rw(CO2) - CO2 dissolved in water (fraction)
Sw - CO2 dissolved in formation water (fraction)
Mp - residual oil in the reservoir (10^6 t)
rho_r - oil density in the reservoir (kg/m3)

It should be noted that equation (3) is a modified version of the basic estimation methodology proposed by the CSLF.16 As such, it takes into account the solution of CO2 in both the water and oil legs. It assumes rapid and total solution of CO2 in both phases and consequently gives a maximum storage capacity.
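Because the full forms of equations (2) and (3) are not reproduced in the text above, the following sketch is only an assumption-laden illustration of how the verbal definitions of M1, M2 and M3 and the Table 5 inputs can be organized into a volumetric estimate; it is not the authors' formula.

```python
# Rough illustrative sketch (assumptions, not the published equations (2)-(3)):
# a volumetric CO2 storage estimate built from the verbal definitions of M1-M3.
def storage_capacity(A_km2, h_m, phi, Ef, So, Ro_co2, Rw_co2, Sw,
                     Mp_Mt, rho_oil, eor_rate=0.04):
    """Return (M1, M2, M3, total) in 10^6 m3 of CO2 at reservoir conditions."""
    pore_volume = A_km2 * 1e6 * h_m * phi                  # reservoir pore volume, m3
    swept = Ef * pore_volume                               # pore volume swept by CO2
    # M1: CO2 dissolved in the oil and water contained in the swept zone
    M1 = swept * (So * Ro_co2 + (1.0 - So) * Rw_co2)
    # M2: CO2 dissolved in a water formation assumed equal in volume and porosity
    M2 = pore_volume * Sw
    # M3: reservoir space vacated by the oil produced through CO2 flooding
    M3 = Mp_Mt * 1e9 / rho_oil * eor_rate
    return tuple(x / 1e6 for x in (M1, M2, M3, M1 + M2 + M3))

# Lamadian-like inputs from Table 5 (illustrative only; this will not reproduce
# Table 6 exactly, since the published equations differ in detail):
print(storage_capacity(A_km2=100, h_m=72, phi=0.25, Ef=0.18, So=0.65,
                       Ro_co2=0.15, Rw_co2=0.05, Sw=0.083,
                       Mp_Mt=570, rho_oil=803))
```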
CO2 storage capacity in the Daqing oil field complex
The 7 largest oil fields in the Daqing oil field complex were chosen for the assessment of CO2 storage capacity in the oil-bearing reservoirs during the CO2 EOR process. The reservoir temperature is 45°C and the average reservoir pressure is 12.7 MPa; we assume Ef = 18%, So = 65%, CO2 dissolved in Daqing oil (Ro(CO2)) of 15% (weight), CO2 dissolved in water (Rw(CO2)) of 5% (weight), and a density of CO2 in the supercritical state of 600 kg/m3. CO2 dissolved in the formation water (Sw) is taken to be the same as Rw(CO2), i.e., 0.083 m3/m3. The residual oil in the reservoirs under water flooding (Mp) in the year 2000 and the values of h, phi, A and rho_r are shown in Table 5. The average oil recovery rate by CO2 flooding is 4%, based on a pilot test.17

Table 5. The basic parameters of the oil-bearing reservoirs in Daqing oil field complex.14,15
Oil field | h (m) | phi (%) | A (x10^6 m2) | Ro(CO2) (m3/m3) | Mp (x10^6 t) | rho_r (kg/m3)
Lamadian | 72 | 23.7-26.7 | 100 | 0.149 | 570 | 803
Sa'ertu | 35-62 | 23-31 | 200 | 0.201 | 930 | 797
Xingshugang | 13-20 | 21.4-25 | 216 | 0.202 | 250 | 791
Gaotaizi | 4.4 | 23 | 9.5 | 0.227 | 2.9 | 792
Taipingtun | 2.9-3.3 | 23 | 61 | 0.226 | 13 | 795
Putaohua | 2.0-4.5 | 23-24 | 95.2 | 0.128 | 22 | 781
Aobaota | 1.0-1.5 | 23 | 40 | 0.231 | 3.3 | 780
Total | | | 721.7 | | 1791.2 |
The CO2 storage capacity during CO2 flooding in the Daqing oil field complex is shown in Table 6. The amount of CO2 that will be dissolved in oil and water in the oil-bearing reservoirs (M1) is about 15-23% of the total storage capacity of CO2. The storage capacity of CO2 dissolved in formation water (M2) is about 55-62% of the total, and the storage capacity of CO2 in the oil-bearing reservoirs during CO2 flooding (M3) is about 20-22% of the total. It is clear that the formation water has the largest potential for storing CO2.

Table 6. CO2 storage capacity in the main oil-bearing reservoirs of Daqing oil field complex.
Oil field | M1 (x10^6 m3) | M1/M (%) | M2 (x10^6 m3) | M2/M (%) | M3 (x10^6 m3) | M3/M (%) | Total M (x10^6 m3)
Lamadian | 43.27 | 17.40 | 151 | 60.72 | 54.43 | 21.89 | 248.7
Sa'ertu | 80.85 | 21.42 | 218 | 57.76 | 78.57 | 20.82 | 377.42
Xingshugang | 25.62 | 21.49 | 68.8 | 57.72 | 24.78 | 20.79 | 119.2
Gaotaizi | 0.33 | 23.24 | 0.8 | 56.34 | 0.29 | 20.42 | 1.42
Taipingtun | 1.49 | 23.24 | 3.6 | 56.16 | 1.32 | 20.59 | 6.41
Putaohua | 1.53 | 15.58 | 6.1 | 62.12 | 2.19 | 22.30 | 9.82
Aobaota | 0.42 | 24.14 | 0.96 | 55.17 | 0.36 | 20.69 | 1.74
Total | 153.51 (92.2 Mt) | 20.01 | 449.26 (269.7 Mt) | 58.75 | 161.94 (97.2 Mt) | 21.18 | 764.71 (458.8 Mt)
The total storage capacity of CO2 in the Daqing oil field complex is 764.71x10^6 m3 (458.8 Mt, assuming a CO2 density of 600 kg/m3) in a supercritical state. The storage capacity of CO2 in the Lamadian, Sa'ertu and Xingshugang oil fields is 97.46% of the total CO2 stored in the Daqing oil field complex. If CO2 flooding could be used for enhanced oil recovery (CO2 and the crude oil are miscible in part of the reservoirs) in the Daqing oil field complex, the potential oil recovery by CO2 flooding at different EOR rates, based on the geological conditions, reservoir conditions and the properties of the crude oil, is shown in Table 7. If oil production could be increased by 10% of the residual oil in the reservoirs (in 2000) by CO2 flooding, the potential additional oil production would be 179.12 Mt.

Table 7. Potential of EOR by CO2 flooding in Daqing oil field complex (Mt), by EOR rate.
Oil field | Mp (Mt) | 2% | 4% | 6% | 8% | 10%
Lamadian | 570.00 | 11.40 | 22.80 | 34.20 | 45.60 | 57.00
Sa'ertu | 930.00 | 18.60 | 37.20 | 55.80 | 74.40 | 93.00
Xingshugang | 250.00 | 0.500 | 10.00 | 15.00 | 20.00 | 25.00
Gaotaizi | 2.90 | 0.058 | 0.116 | 0.174 | 0.232 | 0.29
Taipingtun | 13.00 | 0.26 | 0.52 | 0.78 | 1.04 | 1.30
Putaohua | 22.00 | 0.44 | 0.88 | 1.32 | 1.76 | 2.20
Aobaota | 3.30 | 0.066 | 0.132 | 0.198 | 0.264 | 0.33
Total | 1791.20 | 31.324 | 71.648 | 107.472 | 143.296 | 179.12
CO2 storage capacity in the Jilin oil field complex
The 5 largest oil fields in the Jilin oil field complex were chosen for the assessment of CO2 storage capacity in the oil-bearing reservoirs during the CO2 EOR process. The storage capacity of CO2 in the Jilin oil field complex is calculated using equation (3), with an assumed reservoir temperature of 45°C and pressure of 12.7 MPa (the same as for the Daqing oil field). The other parameters used are: Ef = 18%, CO2 dissolved in Jilin oil (Ro(CO2)) of 15% (weight), CO2 dissolved in water (Rw(CO2)) of 5% (weight), and a density of CO2 in the supercritical state of 600 kg/m3. CO2 dissolved in the formation water (Sw) is taken to be the same as Rw(CO2). The residual oil in the reservoirs (Mp) in 2000 and the values of h, phi, A and rho_r are shown in Table 8, and the average oil recovery rate of 4% is from a CO2 flooding pilot test.17

Table 8. Reservoir parameters of the Jilin oil fields.14,15
Oil field | A (km2) | h (m) | phi (%) | T (°C) | P (MPa) | Mp (x10^6 t) | rho_r (kg/m3)
Hongang | 49.4 | 4.6 | 22 | 55 | 12 | 17.54 | 885
Xinli | 120.6 | 5.3 | 16.3 | 66 | 12.2 | 49.36 | 863
Mutou | 20.0 | 6.9 | 23.5 | 40 | 6.8 | 18.21 | 891
Qian'an | 170.5 | 8.8 | 15 | 76 | 19.3 | 121.39 | 857
Yingtai | 51.7 | 16 | 22 | 65 | 15 | 100.17 | 874
Total | 510.1 | | | | | |
The CO2 storage capacity during CO2 flooding in the Jilin oil fields is shown in Table 9. The storage capacity of CO2 dissolved in oil and water in the oil-bearing reservoirs (M1) is about 19.1-20.8% of the total storage capacity for the reservoirs. The storage capacity of CO2 dissolved in formation water (M2) is about 60.7-67.8% of the total, and the storage capacity of CO2 in the oil-bearing reservoirs during CO2 flooding (M3) is about 13.1-18.5% of the total. It is clear that the formation water has the largest potential for storing CO2, and the oil field with the largest volume of formation water has the greater CO2 storage potential. The total CO2 storage capacity of the Jilin oil fields is 79.81x10^6 m3 (47.9 Mt) if CO2 is stored in a supercritical state. Among the oil fields of the Jilin oil field complex, the Qian'an oil field has the largest storage capacity (30.72x10^6 m3, or 18.4 Mt), closely followed by the Yingtai oil field (24.84x10^6 m3, or 14.9 Mt).

Table 9. CO2 storage capacity in the Jilin oil fields (x10^6 m3); Mt values in brackets assume a CO2 density of 600 kg/m3.
Oil field | M1 | M1/M (%) | M2 | M2/M (%) | M3 | M3/M (%) | Total
Hongang | 1.17 | 19.1 | 4.15 | 67.8 | 0.8 | 13.1 | 6.12
Xinli | 2.77 | 20.2 | 8.64 | 63.1 | 2.29 | 16.7 | 13.7
Mutou | 0.92 | 20.8 | 2.69 | 60.7 | 0.82 | 18.5 | 4.43
Qian'an | 6.37 | 20.7 | 18.68 | 60.8 | 5.67 | 18.5 | 30.72
Yingtai | 5.15 | 20.7 | 15.10 | 60.8 | 4.59 | 18.5 | 24.84
Total | 16.38 (9.9 Mt) | 20.5 | 49.26 (29.6 Mt) | 61.7 | 14.17 (8.6 Mt) | 17.8 | 79.81 (47.9 Mt)
If CO2 flooding could be used for enhanced oil recovery in the Hongang, Xinli, Mutou, Qian'an and Yingtai oil fields, the potential oil produced by CO2 flooding, based on the geological and reservoir conditions and the properties of the crude oil, is shown in Table 10. If oil production could be increased by 10% of the residual oil (based on reserve estimates from 2000) by CO2 flooding, the potential additional oil production would be 30.666 Mt from the Jilin oil fields.

Table 10. Enhanced oil recovery by CO2 flooding in the Jilin oil fields (Mt), by EOR rate.
Oil field | Mp (Mt) | 2% | 4% | 6% | 8% | 10%
Hongang | 17.54 | 0.351 | 0.701 | 1.052 | 1.403 | 1.754
Xinli | 49.36 | 0.987 | 1.974 | 2.961 | 3.949 | 4.936
Mutou | 18.21 | 0.364 | 0.729 | 1.093 | 1.457 | 1.821
Qian'an | 121.39 | 2.428 | 4.856 | 7.283 | 9.711 | 12.138
Yingtai | 100.17 | 2.003 | 4.006 | 6.01 | 8.014 | 10.017
Total | 306.67 | 6.133 | 12.266 | 18.399 | 24.534 | 30.666
CONCLUSIONS
An assessment model has been developed based on the geological conditions and the properties of rock, crude oil and formation water in Songliao basin. The oil/gas-bearing reservoirs in Songliao basin are suitable for CO2 geological storage, and the total storage capacity of CO2 is about 506.7 Mt if CO2 is stored in a supercritical state. The storage capacity of CO2 dissolved in formation waters (M2) is about 55-62% and 60.7-67.8% of the total storage capacity of CO2 in the Daqing and Jilin oil fields, respectively. If oil production could be increased by 10% of the residual oil in 2000 by CO2 flooding, the potential oil enhanced by CO2 flooding could be 209.786 Mt in the Daqing and Jilin oil fields.
REFERENCES
1. West, J.M., Pearce, J., Bentham, M. et al. (2005) "Issue profile: environmental issues and the geological storage of CO2." Eur. Environ. 15:250-259.
2. Bachu, S., Adams, J.J. (2003) "Sequestration of CO2 in geological media in response to climate change: capacity of deep saline aquifers to sequester CO2 in solution." Energy Convers. Manage. 44:3151-3175.
3. Hepple, R.P., Benson, S.M. (2005) "Geological storage of carbon dioxide as a climate change mitigation strategy: performance requirements and the implications of surface seepage." Environ. Geol. 47:576-585.
4. Kharaka, Y.K., Cole, D.R., Hovorka, S.D. et al. (2006) "Gas-water-rock interactions in Frio Formation following CO2 injection: implications for the storage of greenhouse gases in sedimentary basins." Geology, 34:577-580.
5. Holloway, S. (2001) "Storage of fossil fuel-derived carbon dioxide beneath the surface of the Earth." Annu. Rev. Energy Environ., 26:145-166.
6. Allen, D.E., Strazisar, B.R., Soong, Y. et al. (2005) "Modeling carbon dioxide sequestration in saline aquifers: significance of elevated pressures and salinities." Fuel Process. Technol., 86:1569-1580.
7. Jean-Baptiste, P., Ducroux, R. (2003) "Energy policy and climate change." Energy Policy, 31:155-166.
8. Li, Z., Dong, M., Li, S. et al. (2006) "CO2 sequestration in depleted oil and gas reservoirs: caprock characterization and storage capacity." Energy Convers. Manage., 47:1372-1382.
9. Grimston, M.C., Karakoussis, V., Fouquet, R. et al. (2001) "The European and global potential of carbon dioxide sequestration in tackling climate change." Climate Policy, 1:155-171.
10. Bachu, S. (2003) "Screening and ranking of sedimentary basins for sequestration of CO2 in geological media in response to climate change." Environ. Geol., 44:277-289.
11. Meng, K.C., Williams, R.H., Celia, M.A. (2007) "Opportunities for low-cost CO2 storage demonstration projects in China." Energy Policy, 35:2368-2378.
12. Special report: EOR/Heavy oil survey. Oil & Gas Journal, Apr. 17, 2006: 45-57.
13. Shen Pingping, Research for Utilizing Greenhouse Gas as Resource in EOR and Geological Storage, Report, PetroChina, 2007. (in Chinese)
14. Zhai Guangming. Petroleum geology of China, Vol. 2. Beijing: Petroleum Industry Press, 1993. (in Chinese)
15. Li Guoyu, Zhou Wenjin. Atlas of oil fields in China. Beijing: Petroleum Industry Press, 1990. (in Chinese)
16. Bachu, S., Bonijoly, D., Bradshaw, J. et al. Phase II Final Report from the Task Force for Review and Identification of Standards for CO2 Storage Capacity Estimation: Estimation of CO2 storage capacity in geological media, phase 2, 2007.
17. Dong, Xigui et al. (1999) "Pilot test of CO2 flooding in Daqing oil field." Petroleum Industry Press, 45-49, 166. (in Chinese)
CARBON CYCLE IN KARST PROCESSES
YUAN DAOXIAN
UNESCO International Research Center on Karst, Guangxi, P.R. China
KARST DYNAMIC SYSTEM AND METHODOLOGIES OF ITS RESEARCH
Basic Concepts of the Carbon Cycle
Definition: the movements and mutual transfer processes of carbon across the interfaces between lithosphere, hydrosphere, atmosphere and biosphere, in the forms of CO3(2-) (with CaCO3 and MgCO3 as basic forms), HCO3(-), CO2, CH4, (CH2O)n (organic carbon), etc.
In the atmosphere: CO2, CH4, CO
In the hydrosphere: HCO3(-)
In the biosphere: (CH2O)n
In the lithosphere: CO3(2-) (CaCO3, MgCO3, etc.)
[Figure: schematic of carbon exchange among atmosphere, biosphere, hydrosphere and lithosphere.]
The general fluxes of carbon components that affect the equilibrium of CO2 in the atmosphere.
The Functions of the KDS
1. To drive the formation of karst
2. To regulate atmospheric CO2 and mitigate environmental acidification
3. To drive the movements of elements, thus influencing life and bringing about the formation of mineral deposits
4. To record environmental changes
A Conceptual Model of the KDS
[Figure: conceptual model of the karst dynamic system: gaseous phase (CO2), aqueous phase and solid phase (CaCO3), coupled by dissolution and precipitation.]
The Detection Instruments
[Figure: field instrumentation monitoring CO2(g), CO2(aq), H2CO3, HCO3(-), CO3(2-), Ca(2+) and CaCO3 in the karst dynamic system.]
Fig. 4: Seasonal variations and relationships between soil CO2 at two depths (20 and 50 cm), water pH, HCO3(-) and mean monthly precipitation at an underground stream site, Zhen'an County, Shaanxi Province.
Results (2): The Impacts of the Karst Landform (Doline) on Atmospheric CO2 (Yaji Experimental Site, Guilin).
[Figure: atmospheric CO2 concentration versus height above ground at the doline bottom, slope and top.]
(a) The content of atmospheric CO2 decreases with height.
(b) The content of atmospheric CO2 is higher at the doline bottom (600-700 ppm), whereas it is lower on its top (300 ppm).
Results (3): Sensitive and Complex Response to Environmental Change (Rainfall, Yaji Experimental Site, Guilin).
[Figure: monitored time series at the Yaji site: rainfall, water table, water temperature, pH, conductivity and PCO2.]
Results (4): Different Response at Different Parts of a KDS, and Time Lag.
[Figure: hydrological profile of the Yaji KDS, Guilin, with conductivity responses; black line: response in the doline; blue line: response at the resurgence.]
Results (5): The CO2 concentration in soil is one order of magnitude higher than in the atmosphere. The subsoil dissolution rate is also about twice the subaerial rate.
Discoveries in Huanglong Ravine, Songpan, Sichuan, early 1990s
Tufa and CO2 emission in Huanglong, China.
[Figure 17: tufa dams in Huanglong Ravine, Songpan County, Sichuan, China, the result of deep-source CO2 outgassing.]
[Figure: variation in hydrochemical parameters (pH) downstream from the karst spring of Huanglong Ravine.]
INTERNATIONAL BACKGROUND (IGCP379)
IGCP379 (1994-1999): Karst Processes and the Carbon Cycle. Objectives: 1. Global uptake of atmospheric CO2 by karst processes. 2. Global deep-source CO2 outgassing in karst regions. 3. Karst records of environmental change.
NSFC key project: Karst processes and the carbon cycle in typical karst regions of China (40231008), 2003-2006.
THE FINAL PRODUCT OF IGCP379
(IGCP299 book; IGCP379 book)
Limestone dissolution rate measurements
Location | Rainfall (mm/a) | Dissolution rate (mm/ka)
Malaysia | 5000 | 180
Madagascar | 1800 | 135
French Alps | 850 | 20
Serbia | 3400 | 31
Shanxi, China | 400 | 10.7
Yichang, China | 1200 | 84.9
Guilin, China | 1900 | 40 (subaerial), 80 (subsoil)
The Carbonate Rock Dissolution Rate of the World (M. Pulina)
[Figure: world map of carbonate rock dissolution rates.]
The Estimation of the CO2 Sink in the Surface Karst Dynamic System of the World:
6.08x10^8/a (Yuan, China, 1997); 2.2x10^8/a (Kazuhisa, Japan, 1996); 3.02x10^8/a (Philippe Gombert, France, 1999).
That makes up about 20-40% of the world CO2 "missing sink".
Plates of the Earth; red points: tufa deposits.
[Figure 69: major lithospheric plates of the Earth, showing regions of generation and spreading of crust (mid-ocean ridges) and destruction of crust (subduction zones).]
The Estimation of Deep-Source CO2 Degassing around Rome through the Thickness of Tufa versus its Age
[Figure: tufa thickness versus age around Rome.]
CARBON CYCLE IN THE KDS OF CHINA
The Estimation of the CO2 Sink in Surface Karst Processes
1. Limestone denudation tablet approach:
CO2 sink = F x S x C x (M_CO2 / M_CaCO3)
where F is the dissolution rate per unit area, S the exposed area of carbonate rocks, C the CaCO3 content of the limestone, and M_CO2, M_CaCO3 the molecular weights.
2. Hydrochemical approach: CaCO3 + H2O + CO2 = Ca(2+) + 2HCO3(-)
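A small sketch of the tablet-approach bookkeeping follows; the way the listed quantities are combined, and all example inputs except the dissolution rate, are assumptions for illustration rather than values taken from this paper.

```python
# Minimal sketch (assumption: the tablet approach combines the listed
# quantities as F * S * C * M_CO2 / M_CaCO3; units chosen for illustration).
M_CO2, M_CACO3 = 44.01, 100.09          # molecular weights, g/mol

def co2_sink_t_per_year(F_mm_per_ka, S_km2, C=0.97, rho=2.7):
    """CO2 uptake (t/a) from a denudation rate F (mm/ka) acting over an
    exposed carbonate area S (km2), CaCO3 content C, rock density rho (t/m3)."""
    denudation_m_per_a = F_mm_per_ka * 1e-3 / 1e3      # mm/ka -> m/a
    rock_t_per_a = denudation_m_per_a * S_km2 * 1e6 * rho
    return rock_t_per_a * C * M_CO2 / M_CACO3

# Example using the Guilin subsoil rate from the table above; the area,
# CaCO3 content and density are hypothetical placeholders.
print(round(co2_sink_t_per_year(F_mm_per_ka=80, S_km2=1000), 0))
```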
The Spatial Scale Transfer of the Denudation Rate (CO2 Sink) from Monitoring Sites to a Geographical Region
1. Identify the key factors that influence the denudation rate: lithology (geology), precipitation (climate), vegetation (biology).
2. On the basis of these key factors, carry out a regionalization of the whole country through a GIS approach.
3. Estimate the carbonate rock denudation amount and the carbon sink for each region.
The Regionalization of CO2 Sink Estimation in China
8 regions are divided according to precipitation, vegetation, lithology and monitoring data.
[Figure: CO2 sink estimation for the regions of China.]
Carbon Sink in Geological History
D4 stalagmite from Dongge Cave, Libo, Guizhou.
D4 is 304 cm high and 10 cm in diameter. The delta-13C shows clear shifts at points a and d (red arrows), which represent Terminations I and II, aged 11.3±0.1 ka BP and 129.3±1.0 ka BP.
[Figure: delta-18O of the D4 (green) and D3 (black) stalagmites from Dongge Cave and its comparison with Hulu Cave, Nanjing (red), and GISP2; the grey line is the summer insolation at 25°N.]
FUTURE WORKS
• Deep-source CO2 emission
• The function of enzymes in karst processes (carbonic anhydrase)
BIOENERGY IN CHINA: A GRAND CHALLENGE FOR ECONOMIC AND ENVIRONMENTAL SUSTAINABILITY
JIE ZHUANG Institute for a Secure and Sustainable Environment, University of Tennessee, Knoxville, Tennessee, USA GUI-RUI YU Institute of Geographic Science and Natural Resources, Chinese Academy of Sciences, Beijing, China ABSTRACT China' s economy is one of the globally dominant drivers of fossil fuel consumption and release of greenhouse gases and is thus strategically linked to the sustainable development of alternative and renewable energy sources. Chinese government and renewable energy industry have been poised to capitalize on the marketing potential of biofuels. China reports that 500 million coal equivalent (TCE) of cellulosic material may be available annually for biofuel production from various biowastes. They include forest residues (200 million TCE), crop stalks (150 million TCE), animal wastes (57 million TCE), food-grainlwaste-oil/oil-plant (50 million TCE), and municipal solid waste (15 million TCE). China's Medium and Long-term Development Plan for Renewable Energy targets 30 GW of biomass power by 2030. Four large starch-based bioethanol facilities have been in operation since 2005. However, in 2006 Chinese government banned further increase in production of any kind of starch-based ethanol because of a big concern of food security and pricing, but the government encouraged promotion of non-food fuel ethanol production including lignocellulosic ethanol. Currently, biomass production in China is still facing many challenges in view oflimits of available natural resources (such as lands and water). China is thus making great efforts to explore a diverse species of energy plants that are tolerant to environmental stresses and easy to breakdown in bioconversion. Biotechnology of bioenergy feedstocks is also developing very rapidly in China. Recently, China starts to realize that bioenergy development may be an integrated approach to improving the environmental and economic conditions of poor and environmentally degraded regions. This is because growth of bioenergy plants on marginal and degraded lands can help restore damaged ecosystems, reduce soil erosion, increase carbon sequestration, and protect water resources. Such positive impacts have been demonstrated by the national large reforestation projects in the south and northwest of China. In northwestern China, governments have invested approximately 7 billion U.S. dollars in the past two decades to restore forests from 5% coverage in 1980 to 15% in 2008. In southern China, forest coverage has increased from 30% in 1985 to near 70% in 2008. Since 2006, energy plants have been planted in many large areas of China because lignocellulosic biofuel production is considered having potential for bolstering the economy in rural areas or those with underserved residents (mostly minority) while improving ecosystem service. In collaboration with the scientists of the United States, China is assessing the impact of bioenergy development (including biocoal production) on agriculture, livestock, carbon sequestration, and rural economy. Development of
sustainability criteria and integration of bioenergy production with green development are also underway.
INTRODUCTION
China has a huge need for new energy resources to feed economic growth and security, with annual energy demand growing by about 4-5 percent through 2015. China is currently the second largest consumer of energy and the second largest importer of oil in the world, and will surpass the United States to become the largest consumer soon after 2010. China's economy depends heavily on coal, which makes up more than 70 percent of its energy mix. In 2008, coal production in China was 2.7 billion tons, double that of 2002. The demand for coal is expected to continue rising at a rapid pace, reaching 4 to 6 billion tons by 2020; the upper value of 6 billion tons would be equivalent to total global coal consumption today. However, energy use efficiency in China is only 32 percent, which is about 60 percent of that of the United States and 20 percent of that of Japan. A major challenge for the Chinese government is to increase energy efficiency, use less coal, and reduce sulfur and carbon dioxide emissions by 20 percent by 2010 even as energy consumption increases. For this purpose, China is making efforts targeted at increasing the share of renewable energy in its total energy mix to 15 percent by 2020.
China has a long history of central government support for renewable energy, especially since the 9th Five Year Plan (1997-2002). In the past decade, renewable energy has developed rapidly in China. In 2004, the government released the first Renewable Energy Law draft for discussion, and it took effect in January 2006. The law has greatly facilitated the development of renewable energy in China, with its currently available and as yet untapped renewable resources. Overall, bioenergy is now at a rapid development stage in China, and some technologies have been commercialized or nearly commercialized. Bioenergy is playing a role in the energy structure and has the potential for large-scale development, particularly in remote rural areas.
In practice, economic and environmental sustainability are even more important for bioenergy production in China than in the rest of the world. In China, with 21 percent of the world's population but only 7 percent of the world's fresh water and cropland, 3 percent of its forests, and 2 percent of its oil, food and feed security are paramount. Exploration of non-grain bioenergy crops in China is therefore crucial for bioenergy development. This paper addresses the status and strategy of bioenergy development in China from a sustainability perspective, and aims to clarify the comprehensive impacts of biomass-based clean energy production on ecological restoration and on reductions of rural poverty and greenhouse gas emissions.
STATUS AND STRATEGY OF CHINA'S BIOENERGY
Bioenergy is one of the development priorities in China's renewable energy strategy. Bioenergy development has been written into the Long-term National Economic and Social Development Strategy. According to the Bureau of Energy created under China's National Development and Reform Commission, the development goal of renewable energy by 2020 amounts to 15 percent of the total energy capacity, while the goal for biomass-based energy is 30 GW. The achievement of these goals will result in reduction
of emissions of 1.1 billion tons of carbon dioxide and 8 million tons of sulfur dioxide. Key areas in bioenergy development include (1) biogas production from biowaste, such as methane generation in rural areas, (2) biomass gasification and solidification from agro-straws, (3) biomass-to-liquid fuel, such as biodiesel and ethanol, and (4) straw-fired heat and power generation.
In 2000, China granted licenses to five plants owned by four companies to produce starch-based fuel ethanol in the provinces. In 2005, capacity was about 0.92 million tons annually; China's bioethanol production reached 1.50 million tons in 2007 and 1.94 million tons in 2008 (Li and Chan-Halbrendt, 2009), of which 80 percent was produced from corn. Though progress has been rapid in the past few years, the rising price of corn and the global shortage of grains have restricted the development of grain-based ethanol. In light of the global situation, the Chinese government has decided to stop expansion of this line of production. In early 2008, a decision was made to shut down these ethanol production facilities and encourage only non-grain based ethanol production. This regulation makes it even more important to move to corn stover-based ethanol production. China's goal is to produce 60 million tons of cellulosic ethanol by 2020; the cellulosic ethanol industry network will consist of 1,000 plants, each producing 60,000 tons of ethanol a year. By 2030, China hopes to catch up with the United States and have the capacity for high-volume commercial production of cellulosic ethanol. However, breakthroughs in science and technology, such as genetically improved plants, microbes, and enzymes, are needed to improve the efficiency of biomass-to-energy conversion.
BIOMASS RESOURCES AND PRODUCTION IN CHINA
Fig. 1: Crop residue resources in China (Shen, 2008).
The total exploitable annual capacity of biomass energy in China is 1 billion tons of biomass (500 million tons of coal equivalent [TCE]). Of the 700 million tons of biomass from agricultural residues, half can be used to generate energy, representing a coal savings of 160 million TCE. Livestock and poultry manure theoretically could yield
enough biogas to generate the equivalent of 57 million TCE. Firewood and wood biomass energy could create 200 million TCE, and municipal solid waste and wastewater could generate nearly 93 million TCE. As shown in Figure 1, nearly 40 percent of crop residues come from corn, followed by rice (27%), wheat (15%), oil crops (10%), beans (5%), and others (1%), but there are regional differences in the distribution of biomass resource reserves among the provinces and autonomous regions of China.
At present, biomass energy resources in China are utilized mainly through conventional combustion technologies, but newer technologies, such as gasification, liquefaction, and power generation, are being developed rapidly. Currently the main technologies are ethanol fuel technology and bio-oil technology. The 11th Five Year Plan for Renewable Energy Development (2006-2010) calls for increasing biomass sources. However, China is a nation with a large area of highland and upland: mountainous and hilly areas occupy 43 percent of the national land total. In contrast, per capita food cropland area is less than 0.1 hectare, and the arable lands are mostly distributed in eastern China (see the map at the left side of Figure 2). It is thus impossible to switch cropland to biofuel production. The critical issue is how to fully utilize the hilly areas, which have low agricultural value, for bioenergy development. For this purpose, the Chinese government plans to grow more herbaceous and woody plants in northwestern desert areas. By 2020, 13 million hectares (32 million acres) of bioenergy forest will be planted, providing raw materials for production of 6 million tons of biodiesel oil and 15 million kW of power generation capacity. This plan will not conflict with food production. Instead, it will facilitate ecological restoration while reducing rural poverty.
According to the results of the 6th national forest census, China has nearly 283 million hectares (700 million acres) of forest land, and 57 million hectares (140 million acres) of that is timberless: forest that has been burned or deforested. The map at the right side of Figure 2 demonstrates that in the arid and semi-arid areas of northern and northwestern China, available unused land accounts for 6 percent (60 million hectares or 140 million acres) of the terrestrial area of China. It consists of saline-alkali lands, marshland, bare land, and desert and sandy land. Here, water resource and temperature conditions are too poor for food crops to survive, but drought-resistant bioenergy crops can be planted on part of this land.
Fig. 2: Distribution of arable land (left) and unused available land (right) in China (Xie, 2008).
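As a quick arithmetic check on the resource figures quoted above, the short Python sketch below simply tallies the main biomass categories (values in million TCE taken from the preceding paragraphs) and compares the sum with the roughly 500 million TCE total cited for China. It is only an illustration of the bookkeeping, not an official estimate.

```python
# Illustrative tally of China's exploitable biomass energy resources,
# using the figures quoted in the text (million TCE per year).
resources_mtce = {
    "agricultural residues (usable half)": 160,
    "livestock and poultry manure (biogas)": 57,
    "firewood and wood biomass": 200,
    "municipal solid waste and wastewater": 93,
}

total = sum(resources_mtce.values())
for name, value in resources_mtce.items():
    print(f"{name:40s} {value:4d} million TCE")
print(f"{'total':40s} {total:4d} million TCE  (~500 million TCE quoted in the text)")
```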
DIVERSITY OF ENERGY PLANTS IN CHINA
More than 200 species are potential biofuel plants in China. These include forest species such as poplar and willow; grassland species such as bamboo and switchgrass; farmland species such as corn, sugarcane, sweet potatoes, and transgenic plants; wetland species such as the common reed and narrow-leaf cattail; and aquatic species such as algae. However, energy plant production in China depends on the biological and environmental suitability of each region. Generally, the northwest is suitable for drought-resistant plants (such as yellowhorn and switchgrass), the southwest for Jatropha curcas and cassava, the southeast for dogwood and switchgrass, and the northeast for yellowhorn, switchgrass, poplar, and sorghum.
Fig. 3: Plan for plantation of woody energy plants (Zhang, 2008).
According to the State Forestry Administration of China, the "National Bioenergy Directed Forest Construction Program" and the "Woody Feedstock Plantation Plan for Biodiesel" are underway, with Figure 3 guiding plantation in different regions. The plan designates 400,000 hectares (990,000 acres) of Jatropha curcas to be planted in Yunnan, Sichuan, Guizhou, and Chongqing provinces; 250,000 hectares (620,000 acres) of Pistacia chinensis in Hebei, Shanxi, Shaanxi, Anhui, and Henan provinces; 50,000 hectares (123,000 acres) of Cornus wilsoniana in Hunan, Hubei, and Jiangxi provinces; and 133,000 hectares (320,000 acres) of Xanthoceras sorbifolia in Inner Mongolia, Liaoning, and Xinjiang provinces. If this plan is completed by 2010, it might be the world's largest project applying ecological restoration to bioenergy production.
INTEGRATION OF ECOLOGICAL RESTORATION AND BIOENERGY PRODUCTION
In the past three decades, the Chinese government has successfully implemented many large projects to restore ecosystems in degraded regions. Two representative examples are Reforestation at Poyang Lake Basin in subtropical China and Vegetation Recovery on the Loess Plateau in northwestern China.
Fig. 4: Ecological restoration in the Poyang Lake watershed.
Poyang Lake, located in Jiangxi province of southern China, is the largest freshwater lake in China, and its basin occupies about 97 percent of the total land of the province. The climate is humid subtropical, with an annual mean temperature of 17.8°C and precipitation of 1,360 mm. Since the 1980s, integrated management of the Poyang Lake watershed has caused great changes in the landscape and ecosystem. In 1985, forest covered about 30 percent of the Poyang Lake Basin, but by 2005, forest coverage had expanded to occupy 60 percent of the area (Figure 4). Similar vegetative coverage has now been achieved across southern China. Field investigations have indicated that these ecological restoration projects have a large effect on the carbon cycle and water conservation, and they also show the great potential of biomass production in hilly subtropical China.
Similar projects were implemented on the hill-gullied Loess Plateau, located in Shaanxi province of northwestern China. The Loess Plateau covers an area of approximately 800 by 800 kilometers. The climate is semiarid temperate, with an annual mean temperature of 8.8°C and precipitation of 400 mm. The gully density of the region is 4.2-8 km per square kilometer, and soil erosion is 13,500 tons per square kilometer annually.
The vegetation in the region belongs to the temperate forest-steppe zone. However, the original vegetation has already been destroyed by farming on the slopes of the hills. In the past two decades, Chinese central and local governments have invested approximately 7 billion U.S. dollars to return cropland on the slopes to forest. As illustrated in Figure 5, the vegetative recovery has been very successful, and overall forest coverage in the area has increased from 5% in 1980 to 15% in 2008, resulting in effective control of soil erosion and increased water conservation. Meanwhile, these ongoing projects demonstrate that great potential remains for growing energy plants on the Loess Plateau for both ecosystem recovery and bioenergy production.
Fig. 5: Ecological restoration on the Loess Plateau.
It is also important to note that switchgrass, a major model energy plant in the United States, grows well on the Loess Plateau (Figure 6). Field experiments have shown that switchgrass survives more competitively than other local grasses in the same dry environment. Switchgrass is a native, warm-season perennial grass that can be grown on marginal lands or rotated with other crops. Its fossil fuel energy ratio (i.e., the ratio of energy delivered to the customer to fossil energy used during production) is 5.3, in contrast to 1.4 for corn. Switchgrass provides excellent nesting habitat for vertebrates and is also important habitat for invertebrates. Its root mass can reach quite deep (more than 2 meters), which provides a carbon sink and nutrient retention in soils. Switchgrass also requires lower fertilizer applications than annual crops such as corn, and it allows greater infiltration and less erosion from surface flow. It is a large plant, which helps protect soil from wind erosion by decreasing wind flow and evapotranspiration. Unfertilized switchgrass is commonly used for vegetative filter strips and riparian buffers in
agricultural watersheds. Results from a number of watershed studies of switchgrass find sediment export reductions of 50 to 95 percent, nitrogen export reductions of 25 to 90 percent, and phosphorus export reductions of 20 to 85 percent (Dale, 2008). The percentage of retention is positively related to the width of the buffer along riparian corridors. Therefore, when a perennial grass like switchgrass is planted on eroded lands that were once in agriculture, there are fewer adverse effects compared to unmanaged lands, including forests and pastures, and there are many environmental benefits.
Fig. 6: Switchgrass on the Loess Plateau, Northwest China.
Integration of bioenergy-targeted biomass production with ecological restoration will add economic benefit to environmental sustainability while helping to ease, to a significant extent, China's concerns about food and environmental security. Bioenergy-driven ecological restoration can actually increase the area of crop-productive land. Further, restoring degraded ecosystems such as grassland and timberless forest land can increase the carbon sequestration capacity of China. Overall, the northwestern and northern regions of China hold huge potential for implementing this kind of multi-beneficial renewable energy project.
ECONOMIC IMPACT
China has a bioenergy market of $60 billion per year. However, given a competitive economic environment, it is not certain that biomass energy can be economically sustainable. Currently, 60 to 90 percent of the cost of bioenergy lies in the feedstock. This provides an opportunity to increase the income of farmers, but it also stimulates conversion of land from food production to fuel production, eventually causing food-energy conflict. The rising cost of food has recently become an issue of international concern. As of summer 2008, global food prices had increased by as much as 43 percent, mainly due to energy factors. The basic premise is thus that cultivating bioenergy crops must not infringe on
grain supply and marketing for human consumption. This, of course, requires technological innovation and exploration in the development of agricultural or engineered biomass resources, and coordinated development between biomass suppliers and related industries. The good news is that China has large areas of marginal land, degraded agricultural land, timberless forest land, and land unsuitable for food production. If China develops effective biotechnology of energy plants for sufficient feedstock production, the energy, environmental, and economic goals of bioenergy development can very possibly be achieved. As mentioned above, biomass production can also be a process of ecological restoration of degraded natural systems in many regions of China. In this sense, bioenergy development is capable of saving government costs on environmental remediation. It is also worth noting that the addition of biomass (for instance, 10-20% by mass) to coal for co-firing, usually referred to as the biocoal approach, can create an effective mechanism that allows biomass producers (i.e., farmers) to share the profits of the coal energy industry. This may help lessen the rural socio-economic conflicts between rich coal mines and poor local farmers. Integration of biomass-based clean coal energy production (including electricity, clean liquid fuels, and syngas) with restoration of waste lands resulting from coal mining undoubtedly has multiple benefits for the environment, farmer income, reduction of greenhouse gas emissions, and rural society.
SUMMARY AND OUTLOOK
Bioenergy represents a great opportunity not only for China's clean energy but also for improving environmental conditions and the rural economy, but it requires some scrutiny. Plantation of drought- and nutrient-stress-tolerant perennial energy plants, such as switchgrass, is a win-win situation for emissions, clean energy, the rural economy, and food security in China. However, China, like the rest of the world, will need to explore options in legislation, strategic planning, and economic incentives to fully realize the potential of biomass as an important source of renewable energy. Eventually, some kind of certification scheme to ensure sustainable use of the land will be necessary. Current technical barriers include incomplete biomass assessment, lack of a scenario-specific sustainability roadmap, lack of domestic facility suppliers and of equipment standards and testing, poor linkages from R&D to commercialization, and lack of coherent and clear policy incentives. Our assessment suggests that, in general, (1) China's approach to bioenergy development will be integrative and diverse, but adherence to economic and environmental sustainability is critical; (2) huge potential for bioenergy exists in the west of China (though this warrants breakthroughs in plant biotechnology), while in the east of China production is based mainly on agricultural and forest wastes; (3) priority in bioenergy development should be given to coal-mining regions or environmentally degraded regions; and (4) a white paper guiding bioenergy development, including policy options, marketing regulation, and equipment standards, should be issued at the national level as soon as possible.
REFERENCES
1. Dale, V.H. (2008) "Selecting metrics for sustainable bioenergy feedstocks." Proceedings of the China-U.S. Workshop on Bioenergy Consequences for Global Environmental Change, Beijing, China, October 15-17, 2008, pp. 49-53. http://isse.utk.edu/jrceec/workshops/pdf/proceedings08.pdf.
2. Li, S.Z., and Chan-Halbrendt, C. (2009) "Ethanol production in China: Potential and technologies." Applied Energy 86, S162-S169.
3. Shen, L. (2008) "China's renewable energy potential and policy options." Proceedings of the China-U.S. Workshop on Bioenergy Consequences for Global Environmental Change, Beijing, China, October 15-17, 2008, pp. 25-28. http://isse.utk.edu/jrceec/workshops/pdf/proceedings08.pdf.
4. Xie, G.D. (2008) "Land resources for bioenergy development in China from the standpoint of food security." Proceedings of the China-U.S. Workshop on Bioenergy Consequences for Global Environmental Change, Beijing, China, October 15-17, 2008, pp. 19-21. http://isse.utk.edu/jrceec/workshops/pdf/proceedings08.pdf.
5. Zhang, S.H. (2008) "Woody bioenergy development and its possible effects on the ecological environment in Jiangxi province." Proceedings of the China-U.S. Workshop on Bioenergy Consequences for Global Environmental Change, Beijing, China, October 15-17, 2008, pp. 33-35. http://isse.utk.edu/jrceec/workshops/pdf/proceedings08.pdf.
SCREENING FOR CLIMATE CHANGE ADAPTATION: WATER PROBLEM, IMPACT AND CHALLENGES IN CHINA
JUN XIA
Department of Hydrology and Water Resources, Chinese Academy of Sciences, Beijing, P.R. China
ABSTRACT
As climate change impacts become more apparent, adaptation is an increasingly important area of work around the world. In China, the publication of the National Climate Change Programme by the NDRC in 2007 has given impetus to adaptation in the context of sustainable development. A crucial role for this paper has therefore been to strengthen capacity and raise awareness by sensitizing experts to the systematic management of climate change impacts through adaptation. In this paper, the water issue in China is addressed, and a screening framework for assessing climate change impacts and integrating adaptation into development projects is briefly introduced. It is a systematic, step-by-step process for assessing climate change impacts and adaptation responses. The research shows that adaptation management can achieve a good economic result and reduce the related impacts of climate change on water resources. Because of the large uncertainty in the impact of climate change on water resources, some advice is given on strengthening basic research and practice on water resources in the future.
Key Words: Impacts of climate change; North China; Water resources; Adaptation management
EMERGENCY WATER ISSUE
Water resource issues are not only regional but also global. As we enter the 21st Century, a global water crisis threatens the security, stability and environmental sustainability of all nations, particularly those in the developing world. It was shown by the World Water Resources Assessment Programme (WWAP, 2006) of the UN that one-third of humanity lives in countries where water is scarce, and 1 billion people lack access to clean water. Today's water crisis takes many forms and threatens many aspects of life: (1) Drinking water: due to a lack of effective management and infrastructure, one-fifth of the people in the world lack access to clean drinking water, and 40% of humanity lack basic sanitation; (2) Human health: due to bad water quality and poverty, 3.1 million people in the world die from such diseases each year, of whom 90% are children less than 5 years old; (3) Disasters: 90% of natural disasters result from water and related unsuitable land use. For example, East Africa recently suffered serious drought, and the area of Lake Chad has been reduced by 90%; (4) Food security: the conflict between water supply and demand in agriculture will further increase as global food demand grows by 55%. Water use in agriculture occupies 70% of total water resources, with low water use efficiency; (5) Urbanization: the urban population will reach 2/3 of the total by 2030, which will result in significant growth in water demand; (6) Ecosystems: freshwater ecosystems and species are rapidly degrading, and one-fifth of freshwater fish are close to dying out.
The critical global water problems are: (a) the drinking water issue (surface water and groundwater); (b) the agricultural water issue and the urban water issue (including water recycling, etc.); and (c) the eco-water issue (river, lake, wetland, land and coastal ecosystems, etc.). In terms of scale, the concern spans global and regional water systems, and the aspects considered focus on both water quality and water quantity.
China is a developing country with a variety of climates and with much stress from its population and economic development, so water resources have become the most important issue associated with regional and global sustainability (Figure 1). North China, for instance, shows how serious water security problems have become over the past 30 years, in terms of both water shortage and water quality, and of related ecosystem degradation. These problems include the drying up of rivers, declines in groundwater levels, degradation of lakes and wetlands, and water pollution. Some 4,000 km of the lower reaches of the Hai River, about 40% of its length, has experienced zero flows and, as a result, parts of this river have become an ephemeral stream. The area of wetland within the basin has decreased from 10,000 km² at the beginning of the 1950s to 1,000 km² at present. Over-extraction of groundwater occurs beneath 70% of the North China Plain, with the total groundwater over-extraction estimated at 90 billion m³. Understanding the causes of this unhealthy water cycle, together with integrated water resources management, will be key issues in this important region of China.
Basically, the big challenges for sustainable water resources focus on the impacts of climate change and human activities on water resources. For example, there are today some 45,000 large dams operating in the world, and 22,000 of these dams are in China. Climate change, land use and cover change have significantly altered land hydrological processes. For the Hai River Basin of North China, it was shown that the amount of surface water resources has been reduced by 40% compared with 20 years ago under the same precipitation conditions.
Fig. 1: Major river basins and climate zoning in China.
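The 40% reduction in Hai basin surface water under essentially unchanged precipitation raises the attribution question noted above: how much of a runoff change is climate-driven and how much is human-induced? The Python sketch below shows, with entirely hypothetical numbers and a toy rainfall-runoff relation, the basic bookkeeping used in attribution studies of this kind; the monthly water-balance analysis of the Chaobai basin cited in the reference list applies the same logic in far more detail, and nothing here reproduces that model.

```python
# A minimal sketch (not the authors' model) of runoff-change attribution:
# the change between a baseline and an impacted period is split into a
# climate-driven part (model forced by the new climate, parameters fixed to
# the baseline) and a residual attributed to human activities.
# All numbers and the toy rainfall-runoff relation below are hypothetical.

def simulate_runoff(precip_mm, runoff_coeff=0.18):
    """Toy stand-in for a calibrated monthly water-balance model."""
    return runoff_coeff * precip_mm

q_obs_baseline = 95.0    # mean annual runoff, baseline period (mm), hypothetical
q_obs_recent = 57.0      # mean annual runoff, recent period (mm), hypothetical
p_baseline, p_recent = 520.0, 505.0   # mean annual precipitation (mm), hypothetical

dq_total = q_obs_recent - q_obs_baseline
dq_climate = simulate_runoff(p_recent) - simulate_runoff(p_baseline)
dq_human = dq_total - dq_climate

print(f"total runoff change   : {dq_total:6.1f} mm")
print(f"attributed to climate : {dq_climate:6.1f} mm")
print(f"attributed to humans  : {dq_human:6.1f} mm")
```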
In line with global change, China's climate has witnessed significant change in the last 50 years. These changes include increased average temperatures, rising sea levels, glacier retreat, reduced annual precipitation in north and northeast China, and significant precipitation increases in southern and northwestern China. Extreme weather and climatic events are projected to become more frequent in the future, and water resource scarcity will continue across the country. Coastal and delta areas will face greater flood and storm risk from sea level rise and typhoon generation. The impacts of climate change have the potential to slow down economic and human development in China, and therefore present risks to the efficiency and effectiveness of development investments. At the same time, in some cases climate change may create more favourable circumstances which can provide opportunities for economic growth and human development. Thus, the impact of climate change on water resources security is a new and challenging issue of widespread concern globally. It is also a great strategic issue for the national sustainable development of China.
China is one of the thirteen most water-poor countries in the world; in particular, the densely populated East China monsoon area has witnessed an increasingly serious imbalance between water supply and demand. In addition, drought and waterlogging frequently occur in the East China monsoon area. Under a changing climate, aggravated drought in the northern region, deterioration of water ecology, and increasing extreme flood disasters in the southern region have severely restricted the sustainable development of the economy and society over the past 30 years. Future climate change will have a great influence on the existing pattern of "north drought and south flooding" in China and on the distribution of water resources in the near future, and it will consequently exert unexpected influence on the effects of major engineering projects in China, including the food-increase projects in North and Northeast China, the water transfer project, and flood control system planning for southern rivers. The project described here focuses on the major river basins in the eastern monsoon region of China, and investigates the mechanism of the impact of climate change on water resources and the relevant adaptation strategies. The study aims to meet the major strategic demand of enhancing water resources security for China.
The study of climate change and the water cycle is at the international forefront of climatology, meteorology and hydrology. The detection and attribution of changes in water cycle components have become internationally challenging problems, as have the quantitative analysis and prediction of the uncertainties in hydrological systems. Research on the water cycle response to climate change is developing from offline hydrologic simulations towards coupling climate change with hydrological dynamics. The study of water resources vulnerability has become a key problem in dealing with climate change and securing water resources. Under a changing climate, it is necessary to re-examine the hypotheses of traditional hydrological theories, as well as the spatial variability, uncertainty and hydrological extremes in regional hydrological studies. Thus, how to screen climate change impacts on the water sector in China is becoming a very important issue.
SCREENING CLIMATE CHANGE IMPACT AND CASE STUDIES IN CHINA
A screening framework to assist with the assessment of climate change impacts and the integration of adaptation into development projects in China was developed (Xia and Tanner, 2007). To enable the application of the framework in a wide range of projects and sectors, it does not prescribe a single model, methodology or tool. It is a systematic, step-by-step process for assessing climate change impacts and adaptation responses. The screening framework has three phases (see Figure 2), relating to framing, analysis and decision making:
1. A rapid qualitative analysis of the entire development investment to identify potentially significant problems posed to a development project by climate and/or socio-economic change;
2. A semi-quantitative and quantitative analysis of the impacts that climate change may have on the development investment, and of the adaptation options that might be required to enable the investment to achieve its intended beneficial outcomes. This includes a cost-benefit analysis of the adaptation options to indicate their economic efficiency;
3. An analysis to assess the suitability of different adaptation options against a range of appropriate decision-making criteria to suggest the preferred option. This includes assessing the option of making no major additional changes to the project ("no changes currently needed"). Under this option, ongoing monitoring of climate impacts and maintenance of flexibility to cope with potential change is recommended.
(Figure 2 lists, for each phase, steps such as: a descriptive overview of each case study; a rapid strategic descriptive summary; identification of climate-sensitive components; identification of relevant quantitative project objectives; development of scenarios; comparison of the levels of stress in each scenario against project objectives (can it cope?); and assessment of the need for adaptation.)
Fig. 2: Overview of the phases and steps of the climate screening framework.
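Phase 2 of the framework calls for a cost-benefit analysis of candidate adaptation options. The Python sketch below shows the kind of discounted comparison implied. The option names echo measures discussed later for the Miyun case study, but the costs, benefit streams, discount rate, and lifetimes are invented for illustration and are not taken from the actual project appraisal.

```python
# Hypothetical cost-benefit comparison of adaptation options (Phase 2 of the
# screening framework).  Costs, benefits, and the discount rate are invented
# for illustration only.

def npv(cashflows, rate):
    """Net present value of a list of annual cashflows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

options = {
    # option name: (upfront cost, annual net benefit, lifetime in years)
    "no additional change": (0.0, 0.0, 20),
    "convert paddy to dry farming": (12.0, 2.5, 20),
    "sewage treatment for clean inflow": (30.0, 4.8, 20),
}

rate = 0.05
for name, (cost, benefit, years) in options.items():
    flows = [-cost] + [benefit] * years
    print(f"{name:35s} NPV = {npv(flows, rate):7.1f} (million, hypothetical units)")
```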
This study developed and tested a generic screening framework for assessing the effects of climate change on development projects in China. The framework was tested in four case studies representing contrasting water sector development projects. The case study projects, their geographical locations, objectives and partners are shown in Table 1. The case studies demonstrate the use of the screening framework rather than acting as the focus of the study.
Table 1: Case studies for testing the screening framework
Case study 1. Development project: Flood control and land drainage management project. Region and broad objectives: Huai River Basin; reduce flooding and waterlogging. Related partners: World Bank, Ministry of Water Resources.
Case study 2. Development project: Management of Miyun reservoir for water security for Beijing. Region and broad objectives: Chaobai in the Hai River Basin; sustainable water supply to Beijing. Related partners: Chinese National Environmental Protection Agency (CEPA), Ministry of Water Resources, World Bank, Global Environment Facility, Municipality of Beijing.
Case study 3. Development project: Water Conservation Project for China. Region and broad objectives: Hai River Basin; improved agricultural water use efficiency. Related partners: Ministry of Water Resources, World Bank.
Case study 4. Development project: Integrated Restoration Plan for the Shi Yang River Basin. Region and broad objectives: Shi Yang River Basin; sustainable water management. Related partners: Shi Yang River Basin Administration Bureau.
Consider, for instance, the case study of the Chaobai catchment in the Hai River Basin, concerning sustainable water supply to Beijing. The Miyun Reservoir is fed by the 15,788 km² catchments of the Chao and Bai rivers in the Hai river basin. The inflow to the Miyun Reservoir has been decreasing in recent years due to rainfall change and human activities, which has brought great pressure on the municipal water supply to Beijing City. The objective of the Beijing municipal government is to increase water inflow into the Miyun Reservoir in order to satisfy demand. There is a long-term decreasing precipitation trend in the basin, and five successive drought years from 1999 to 2003 led to serious water supply problems for Beijing. The active storage capacity of the reservoir has also been reduced through siltation. The overall runoff in the basin in 2000 to 2005 had declined by about 66 percent relative to the 1950s. The Baihebao reservoir water transfer has been used on five occasions since its creation in 2003 and has delivered a total of 2×10⁸ m³. In addition to supplying Beijing city, the Miyun reservoir catchment has a population of 2.18 million people, produces 1.38 million tons of foodstuffs and supports 1.92 million head of livestock. Climate change projections using the SRES A2 and B2 scenarios suggest that average annual temperature may increase by 2.4-2.8°C and precipitation may increase by around 4-5% by 2050, increasing average annual inflow into the reservoir. Thus, climate change is projected to increase reservoir inflows in the long term, but inflows may continue to decline in the medium term under SRES A2, necessitating adaptation measures to assure the water supply to Beijing.
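Trend statements like the inflow and runoff declines quoted above are typically derived from simple period comparisons and linear trend fits on annual series. The Python sketch below shows that calculation on a synthetic inflow record; the numbers are invented and are not the Miyun observations.

```python
# Illustrative check of the kind of inflow decline reported for the Miyun
# Reservoir: compare mean annual inflow in two periods and fit a simple
# linear trend.  The series below is synthetic, not observed data.
import numpy as np

years = np.arange(1956, 2006)
rng = np.random.default_rng(0)
# synthetic inflow (10^8 m^3/yr): gently declining mean plus noise
inflow = 12.0 - 0.14 * (years - years[0]) + rng.normal(0.0, 1.0, years.size)

early = inflow[(years >= 1956) & (years <= 1965)].mean()
late = inflow[(years >= 2000) & (years <= 2005)].mean()
slope, intercept = np.polyfit(years, inflow, 1)

print(f"mean inflow 1956-1965 : {early:5.1f}  (10^8 m^3/yr)")
print(f"mean inflow 2000-2005 : {late:5.1f}")
print(f"decline               : {100 * (1 - late / early):4.0f} %")
print(f"linear trend          : {slope:+.2f} per year")
```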
Suggested measures for adaptation to climate change included: (a) converting paddy fields to dry farming land; (b) engineering projects for clean water supply to the Miyun reservoir through building a channel from the Lanhe River; and (c) sewage treatment engineering and other projects for clean water flow. The sensitivity analysis results show that adaptation management could achieve a good economic result and reduce the related impacts of climate change on water resources. Because of the large uncertainty in the impact of climate change on water resources, some advice is given on strengthening basic research and practice on water resources in the future.
The examples of adaptation proposed in the case studies in the Huai (floodplain drainage improvements and improved flood forecasting and warning), Hai (land use change, water pricing policies and water conservation projects), and Shiyang river basins (water conservation and water transfers) demonstrate the need to tackle demand-side aspects of development investments in the water sector, such as water pricing and water conservation measures, as well as supply-side factors such as canal lining or raising embankments. Across sectors, this shows how soft technologies and management measures will be equally as important as hard engineering solutions in tackling climate change. The case studies also highlight the importance of considering the wider implications of adaptation measures, including risk transmission and 'mal-adaptation', such as where vulnerability to floods may be inadvertently increased downstream by upstream flood prevention measures. The economic efficiency of measures tested using the cost-benefit analysis exercises is just one means of assessing adaptation options. The multi-criteria analysis provides a useful means of informing the decision-making process by providing a systematic basis to assist in evaluating the many aspects of adapting to future climate change (a schematic example of such a weighted comparison is sketched below).
Overall, the four case studies in three representative climate regions showed that: (1) climate change is a big issue for sustainable water use in China, because existing or planned water projects and programmes do not fully consider the potential impact of climate change, particularly the possibility of increasing extreme events (floods and droughts); (2) basic research should be emphasized for adaptation to climate change in China, related to four key questions: (a) How did conditions change in the past? (b) How will they change in the future, in particular over the coming 20-50 years? (c) What is the mechanism for such changes? and (d) How can we adapt to climate change and manage water wisely? The impacts of climate change and human activity on the water cycle, socio-economics and ecosystems still involve many unknown scientific problems, which have become the key limiting factor for China's new water security policy, namely Building a Society of Saving Water under a changing environment. New challenges for sustainable water resources in China in the 21st century will focus on: (a) understanding water cycle processes under a changing environment, i.e., climate change and human activity; and (b) quantifying water security linked with socio-economic and environmental issues to support the sustainable management of water resources.
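Returning to the multi-criteria analysis mentioned above, its simplest form is a weighted scoring of options against decision criteria. The Python sketch below uses invented criteria, weights, and scores purely to show the mechanics; it is not the scoring actually used in the case studies.

```python
# Hypothetical weighted-sum multi-criteria analysis of adaptation options.
# Criteria, weights, and scores (0-10) are invented for illustration.
criteria_weights = {"economic efficiency": 0.35, "water supply reliability": 0.35,
                    "environmental impact": 0.20, "ease of implementation": 0.10}

scores = {
    "no additional change":            {"economic efficiency": 6, "water supply reliability": 3,
                                        "environmental impact": 5, "ease of implementation": 10},
    "paddy-to-dry-farming conversion": {"economic efficiency": 7, "water supply reliability": 6,
                                        "environmental impact": 7, "ease of implementation": 6},
    "clean-water supply channel":      {"economic efficiency": 5, "water supply reliability": 8,
                                        "environmental impact": 4, "ease of implementation": 4},
}

for option, s in scores.items():
    total = sum(criteria_weights[c] * s[c] for c in criteria_weights)
    print(f"{option:32s} weighted score = {total:4.2f}")
```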
CONCLUDING REMARKS
1. Water is a key issue in China because of the heavy stress from its population and the natural geographical conditions of a monsoon area with high flood and drought risk.
2. The impact of climate change on water resources security is a challenging issue of widespread concern globally. It is also a great strategic issue for the national sustainable development of China.
3. The aim of screening for climate change adaptation is to meet the great strategic demands of China, targeted at the international forefront of water sciences. Specifically, the study focuses on the impact of climate change on water resources scarcity, droughts and floods, food security, water security and other related issues in China.
4. Basic research is very important for understanding the impact of climate change on the water sector in China, for example the spatial-temporal variability and uncertainty of water cycle components under climate change; the interaction and feedback mechanisms between land surface hydrology and regional climate; and the vulnerability and sustainability of water resources under climate change. Integrated studies and practices should be emphasized.
ACKNOWLEDGEMENT
This study was supported by the Knowledge Innovation Key Project of the Chinese Academy of Sciences (KZCX2-YW-126) and the Natural Science Foundation of China (No. 40730632/40671035). Thanks also to Professor A. Zichichi for providing a good opportunity to exchange ideas and for fruitful discussion during the China session of the 42nd Erice International Seminars on Planetary Emergencies, August 18-24, 2009.
REFERENCES
1. Qin, D.H., Ding, Y.H. and Su, J.L. (eds.) (2005) "China Climate and Environment Change," Vol. 1, Science Press, Beijing, 319-392, 455-506.
2. Editorial Committee of National Assessment Report on Climate Change (2007) China's National Assessment Report on Climate Change, Science Press, Beijing.
3. Wang, Gangsheng, Xia, Jun and Chen, Ji (2009) "Quantification of effects of climate variations and human activities on runoff by a monthly water balance model: A case study of the Chaobai River basin in northern China," Water Resour. Res., 45, W00A11, doi:10.1029/2007WR006768.
4. Ren, G.Y. (ed.) Climate Change and China's Water Resources, China Meteorological Press, Beijing, pp. 314.
5. Tanner, T.M., Hassan, A., Islam, K.M.N., Conway, D., Mechler, R., Ahmed, A.U., and Alam, M. (2007) ORCHID: Piloting Climate Risk Screening in DFID Bangladesh. Summary Research Report. Institute of Development Studies, University of Sussex, UK.
6. Tanner, T.M., Bhattacharjya, S., Kull, D., Nair, S., Sarthi, P.P., Sehgal, M., Srivastava, S.K. (2007) 'ORCHID': Climate Risk Screening in DFID India. Research Report. Institute of Development Studies, University of Sussex, UK.
7. Xia, J. and Chen, Y.D. (2001) "Water problems and opportunities in hydrological sciences in China," Hydrological Sciences Journal, 46(6):907-921.
8. Xia, J. and Zhang, L. (2005) "Climate change and water resources security in North China." In: Wagener, T. et al., eds. Regional Hydrological Impacts of Climatic Change: Impact Assessment and Decision Making. IAHS Publication No. 295, Wallingford, pp. 167-173.
9. Xia, Jun and Zhang, Yongyong (2008) "Water security in north China and countermeasure to climate change and human activity," Physics and Chemistry of the Earth, 33(5):359-363.
10. Xia, J., Lu, Z., Changming, L., and Yu, J.J. (2007) "Towards Better Water Security in North China," Water Resources Management, 21:233-247.
SESSION 7 CLIMATE & DATA
FOCUS: SIGNIFICANT CLIMATE UNCERTAINTIES ADDRESSED BY SATELLITES
NASA SATELLITE OBSERVATIONS FOR CLIMATE RESEARCH AND APPLICATIONS FOR PUBLIC HEALTH
JOHN A. HAYNES
National Aeronautics and Space Administration, Washington, DC, USA
ABSTRACT
The purpose of NASA's Earth science program is to develop a scientific understanding of Earth's system and its response to natural or human-induced changes and to improve prediction of climate, weather, and natural hazards. As one of the eight application areas for the Applied Sciences Program, the Public Health Program Element extends the benefits of increased knowledge and capabilities resulting from NASA Earth science satellite observations, model predictive capabilities, and technology into partners' decision support systems for public health, medical, and environmental health issues. Through the Public Health Program, NASA and partnering organizations have built a network that focuses on the relationships between NASA Earth observation systems, modeling systems, and partner-led decision support systems for epidemiologic surveillance in the areas of infectious disease, environmental health (including air quality), and emergency response and preparedness. The next generation of NASA Earth-observing satellites will be launched over the next several years. These satellites will provide observations of even greater temporal and spatial resolution to further enhance decision support for society.
INTRODUCTION
Earth is changing on all spatial and temporal scales. The purpose of NASA's Earth science program is to develop a scientific understanding of Earth's system and its response to natural or human-induced changes and to improve prediction of climate, weather, and natural hazards. NASA's partnership efforts in global modeling and data assimilation over the next decade will shorten the distance from observations to answers for important, leading-edge science questions. NASA's Applied Sciences Program will continue the Agency's efforts in benchmarking the assimilation of NASA research results into policy and management decision-support tools that are vital for the Nation's environment, economy, safety, and security.1 This will result in expanded societal and economic benefits from NASA research to the nation and the world.
As one of eight areas of national priority for the National Aeronautics and Space Administration's (NASA's) Applied Sciences Program [http://nasascience.nasa.gov/earth-science/applied-sciences], the Public Health Program Element extends the benefits of increased knowledge and capabilities resulting from NASA research and development of Earth science satellite observations, model predictive capabilities, and technology into partners' decision support systems for public health, medical, and environmental health issues. Through the Public Health Program, NASA and partnering organizations have built a network that focuses on the relationships between NASA Earth observation
systems, modeling systems, and partner-led decision support systems for epidemiologic surveillance in the areas of infectious disease, environmental health, and emergency response and preparedness. International health is included in the scope of the Program. To this end, NASA has strong connections with the Group on Earth Observations, the United States Group on Earth Observations, and the World Health Organization.
Over the next several years, the next generation of NASA Earth-observing satellites will be launched. In June 2008, the Ocean Surface Topography Mission/Jason-2, in cooperation with NOAA and the French space agency CNES, was placed in low-Earth orbit. This satellite will be joined over the next several years by other missions with potential health applications, such as Glory, the NPOESS Preparatory Mission (NPP), the Global Precipitation Mission (GPM), the Landsat Data Continuity Mission (LDCM), the Soil Moisture Active-Passive mission (SMAP), the Deformation, Ecosystem Structure and Dynamics of Ice mission (DESDynI), and the Hyperspectral Infrared Imager (HyspIRI).
STRATEGY
The use of Earth observation technology, observations, and forecasts for health-related research is well established in the Earth science literature and is focused primarily on the identification of infectious disease vectors based on habitat characteristics. The potential of using Earth observations to enhance predictions of major health events, such as outbreaks of Rift Valley Fever [http://www.cdc.gov/ncidod/dvrd/spb/mnpages/dispages/rvf.htm] and other arthropod-borne diseases, has been demonstrated with considerable accuracy on a regional scale.2 However, until relatively recently, little use of the technology or science had been made in a systematic manner by public health policymakers or practitioners to make health-related decisions, such as the allocation of public health resources or rapid response to outbreaks. NASA, the Centers for Disease Control and Prevention (CDC) [http://www.cdc.gov/], and other partners are addressing this issue by integrating predictive information into epidemiologic surveillance systems based on environmental and other determinants of disease that are observable from the vantage point of low-Earth orbit.3
NASA collaborates with members of the professional public health community who are responsible for epidemiologic surveillance to understand and respond to factors in the environment that adversely affect the health of the American public. These factors include disease vectors, air and water contaminants, ambient temperature extremes, and ultraviolet radiation associated with public health problems. International health is included in the scope of the Program because it represents a national health concern through its potential effect on American public health, economics, and national security. To this end, NASA has strong connections with the Group on Earth Observations [http://earthobservations.org/], the United States Group on Earth Observations [http://usgeo.gov], and the World Health Organization [http://www.who.int/en/].
The decision support structure of the public health community is based partially upon health information provided by epidemiologic surveillance systems. According to the CDC, epidemiologic surveillance may be described as "the ongoing, systematic collection, analysis, interpretation, and dissemination of data regarding a health-related
event for use in public health action to reduce morbidity and mortality and to improve health".4 As outlined by the CDC, the primary attributes of a surveillance system that combine to determine its usefulness for decision-makers include simplicity, flexibility, acceptability, sensitivity, predictive value positive (specificity), representativeness, and timeliness.4 A useful surveillance system enables the continual collection of observations for monitoring disease trends and outbreaks for a public health response. While these observations may be used for scientific investigations, surveillance systems are designed primarily to support decision makers, not to support research. In general, the incorporation of Earth science observations into measurement systems and models is intended to improve their accuracy with regard to the spatial and temporal dimensions of the phenomena they represent. These improvements enhance the representativeness attribute of surveillance systems. NASA, the CDC, and partners collaborate on plans to enhance the ability of surveillance systems to assimilate observations and predictions of weather, climate, and environmental risk factors to predict disease events. In surveillance terms, the goal of integrating Earth science and public health observations is to represent these environmental risk factors more accurately in terms of the populations potentially affected by them. The NASA collaboration on public health addresses four of the attributes of a reliable surveillance system listed above: simplicity, flexibility, acceptability, and timeliness. These four attributes of partner surveillance systems will be enhanced by ensuring interoperability of Earth system science measurements with other important public health functions identified by milestones in each stage of the collaboration.
NASA partners with federal agencies and with regional and national organizations that have public health responsibilities as well as mandates to support public health practitioners. NASA's primary partners are the Department of Health and Human Services/CDC, the U.S. Environmental Protection Agency, the National Oceanic and Atmospheric Administration, the Department of Defense (DOD), the U.S. Agency for International Development (USAID), the U.S. Department of Agriculture (USDA), the U.S. Geological Survey, and the Department of Homeland Security. As global climate change (both natural and human-induced) will have a major effect on public health through regional weather changes, air pollution levels, contamination pathways, pollution transmission dynamics, and the habitats of potential infectious disease vectors, NASA strongly supports the interagency programs of the U.S. Global Change Research Program.
SELECTIONS FROM CURRENT PORTFOLIO
NASA currently supports projects that demonstrate the capacity of Earth system research results to enhance different decision support systems. Some of these projects include: 1) the National Environmental Public Health Tracking Network (EPHTN), including the Health and Environment Linked for Information Exchange (HELIX) Atlanta demonstration project and work in the Southwest USA concerning dust storms, 2) the Arbovirus Surveillance Network (ArboNET)/Plague Surveillance System, 3) the Global Situational Awareness Tool (GSAT), 4) the Famine Early Warning System Network (FEWS NET)/Malaria Early Warning System (MEWS), 5) the Global Emerging
Infections Surveillance and Response System (GEIS), and 6) the California Mosquito-borne Virus Surveillance Response Plan (CMVSRP).
The National Environmental Public Health Tracking Network (EPHTN) [http://www.cdc.gov/nceh/tracking/network.htm] is a decision support system owned and operated by the CDC. The system is designed to establish a national network of local, state, and federal public health agencies that tracks trends in priority chronic diseases. Fully operational as of July 2009, the EPHTN is a national early warning system for the rapid identification of health threats, such as toxic chemical releases. The EPHTN also establishes the long-term collection of information on harmful exposures to be used in future studies of new environment-disease correlations. Earth science results provide new information on the environmental contribution to chronic disease and predictive value based on coupled Earth system-chronic disease models. Under the collaboration on the EPHTN, NASA and the CDC are partners in linking environmental and health observations to enhance public health surveillance through the Health and Environment Linked for Information Exchange, Atlanta (HELIX-Atlanta) demonstration project [http://www.cdc.gov/nceh/tracking/helix.htm]. Through this project, NASA Moderate Resolution Imaging Spectroradiometer (MODIS) [http://modis.gsfc.nasa.gov/] aerosol optical depth observations were combined with U.S. Environmental Protection Agency monitoring data to create more representative particulate matter products (a schematic illustration of this kind of satellite-ground blending is sketched after this passage). High concentrations of particulate matter are associated with adverse health reactions, including respiratory and cardiovascular problems. Additional Earth science satellite observations, such as ozone concentrations (from NASA's Total Ozone Mapping Spectrometer (TOMS) [http://toms.gsfc.nasa.gov/] and from the Ozone Monitoring Instrument (OMI) [http://aura.gsfc.nasa.gov/instruments/omi/index.html] onboard Aura) and surface temperature (from NASA's MODIS and from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) [http://asterweb.jpl.nasa.gov/] onboard Terra), were also used to enhance EPHTN/HELIX. The enhanced air quality products that resulted from this collaboration may be used to forewarn emergency care providers about spikes in respiratory admissions that occur during periods of adverse air quality. In 2009, NASA and the CDC renewed for five years the Memorandum of Understanding that formalized the relationship in 2004. Additionally, NASA has partnered with the New Mexico Department of Health, the University of New Mexico, and the University of Arizona to integrate Earth system science observations and predictive modeling capabilities into the New Mexico state EPHTN portal in order to forecast atmospheric ozone, dust, and other aerosols that trigger asthmatic responses or myocardial infarction. NASA Earth observation data from MODIS (onboard the Terra and Aqua satellites) and CALIPSO are being used to improve and validate the forecasting capabilities of the Dust Regional Atmospheric Model (DREAM) and the Community Multi-scale Air Quality (CMAQ) model for EPHTN enhancement [http://nmtracking.unm.edu].
Plague surveillance is another CDC priority due to its virulent nature and potential as a bioterrorist agent. Plague prevention and response efforts are underway at regional, state, and local levels through the CDC-sponsored Arbovirus Surveillance Network (ArboNET)/Plague Surveillance System.
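The HELIX-Atlanta particulate matter work referenced above combines satellite aerosol optical depth (AOD) with ground monitors. The Python sketch below illustrates one generic way such blending can be done, by fitting a simple linear relation at monitor locations and applying it where only satellite data exist. It is a hypothetical illustration with synthetic numbers, not the HELIX production algorithm.

```python
# A minimal, hypothetical illustration of blending satellite aerosol optical
# depth (AOD) with ground PM2.5 monitors: fit a linear relation at monitor
# sites, then estimate PM2.5 where only AOD is available.  This is a generic
# sketch, not the HELIX-Atlanta production method; all numbers are synthetic.
import numpy as np

rng = np.random.default_rng(1)
aod_at_monitors = rng.uniform(0.1, 0.8, 50)                              # unitless AOD
pm25_at_monitors = 5.0 + 45.0 * aod_at_monitors + rng.normal(0, 3, 50)   # ug/m^3

slope, intercept = np.polyfit(aod_at_monitors, pm25_at_monitors, 1)

aod_unmonitored = np.array([0.15, 0.40, 0.65])   # grid cells with no ground monitor
pm25_estimate = intercept + slope * aod_unmonitored
for a, p in zip(aod_unmonitored, pm25_estimate):
    print(f"AOD {a:.2f} -> estimated PM2.5 {p:5.1f} ug/m^3")
```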
ArboNET is a passive surveillance system designed to collect and to archive information for the study and operational monitoring of regional and national arthropod-borne viral disease trends. The CDC, participating health
departments, the DOD, and the U.S. Geological Survey are the primary users of ArboNET. NASA Earth science observations and model predictive capabilities provide information on plague vector habitats that enhances ArboNET forecasts of outbreak conditions, particularly over the Four Corners region. Plague is endemic in the United States west of the Rocky Mountains, and the Four Corners region is considered particularly susceptible to outbreaks. The goal of NASA's participation in this project is to understand the combination of vegetation, rainfall, and slope characteristics so as to enable prediction of rodent food supply and the consequent migration of rodent vectors into proximity with humans. The CDC has partnered with NASA to use MODIS Normalized Difference Vegetation Index (NDVI) observations, MODIS surface temperature observations, Shuttle Radar Topography Mission [http://www2.jpl.nasa.gov/srtm/] information, Tropical Rainfall Measuring Mission (TRMM) [http://trmm.gsfc.nasa.gov/] observations, and Landsat 7 [http://landsat.gsfc.nasa.gov/] land cover observations to predict areas susceptible to plague transmission from rodents to humans. Earth-system science model capabilities (including the Global Historical Climatology Network (GHCN) [http://cdiac.esd.ornl.gov/ghcn/ghcn.html], the Goddard Space Flight Center (GSFC) Global Modeling and Assimilation Office (GMAO) [http://gmao.gsfc.nasa.gov/], and the GSFC Plague Algorithm) provide additional information on plague vector habitats that enhances ArboNET forecasts of outbreak conditions. The enhanced ArboNET products resulting from this project inform health care providers about the potential for plague cases to present themselves at hospitals and clinics. These enhanced products may also be used to determine the likelihood that a particular plague outbreak was a function of the environment rather than being induced by some external means (i.e., bioterrorism).
Malaria is another high-priority infectious disease target for domestic agencies, such as the CDC, USAID, and the DOD, as well as for international health entities, such as the World Health Organization and the Pan American Health Organization. Malaria affects nearly 1,600 Americans each year and kills an estimated 3 million people worldwide, many of whom are children. In addition, malaria costs African nations approximately $12 billion in economic productivity. The health and economic consequences of malaria make it a destabilizing phenomenon. Both USAID and the DOD have developed decision support systems to better predict and respond to malaria. Earth science observations and model predictive capabilities can enhance these decision support systems by providing new information on vector habitats and environmental conditions that precede malaria outbreaks.
The Global Situational Awareness Tool (GSAT) is an environmental planning tool owned and operated by the U.S. Air Force Strategic Operations Command (AFSOC). The GSAT provides environmental safety and health information to AFSOC planners and decision makers. Malaria is a disease of significant interest to the GSAT operators specifically and to military decision makers in general, because it can have a large impact on military operations. For example, approximately one-third of U.S. personnel involved in the 2003 Liberia operation came down with malaria.
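Early-warning work of the kind described here for GSAT and, below, for the Malaria Early Warning System rests on relating satellite-derived environmental series to disease incidence. The Python sketch below shows the simplest such analysis, a lagged correlation between monthly rainfall and malaria incidence, on purely synthetic series; the lag structure and magnitudes are invented for illustration and do not reproduce any of the project results.

```python
# Illustrative lagged-correlation check of the kind used in malaria early
# warning: how strongly does monthly malaria incidence correlate with
# rainfall shifted earlier by 0-4 months?  Both series below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
months = 120
rainfall = 80 + 60 * np.sin(2 * np.pi * np.arange(months) / 12) + rng.normal(0, 10, months)
# synthetic incidence that follows rainfall with a ~2-month delay
incidence = np.roll(rainfall, 2) * 0.5 + rng.normal(0, 8, months)

for lag in range(5):
    r = np.corrcoef(rainfall[: months - lag], incidence[lag:])[0, 1]
    print(f"lag {lag} months: correlation = {r:+.2f}")
```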
NASA validated and verified that the integration of Earth science satellite observations into the GSAT enhanced its ability to track and predict malaria vectors in Southeast Asia and Afghanistan. Observations utilized included radiance values from NASA ASTER, MODIS, and EO-1; precipitation estimates from NASA TRMM; and land cover observations from Landsat 7. Model predictive capabilities from the GMAO and from the GHCN were also used. The first use
of the NASA-enhanced GSAT occurred during the 2006 DOD Joint Forces Exercise in the Pacific Ocean.
Public health officials in developing nations typically have limited resources for malaria control, and when there are malaria epidemics, the available staff and supplies are overwhelmed. USAID provides humanitarian assistance to vulnerable populations facing disasters or epidemics, such as malaria. To enhance USAID humanitarian programs, NASA Earth observation and modeling results are being integrated into the Malaria Early Warning System (MEWS) developed by the International Research Institute for Climate Prediction (IRI) for use in sub-Saharan Africa. A variety of satellite-derived rainfall and temperature estimation products have been tested against ground observations of malaria outbreaks. These studies are informing the development of integrated vectorial capacity estimates (the environmental driving force of malaria transmission), which will be provided routinely through the Famine Early Warning System Network (FEWS NET) Africa data dissemination service, maintained by the U.S. Geological Survey. By coordinating the program development with user requirements, public health officials' ability to identify and track negative and positive anomalies in climate conditions will be significantly enhanced. Improved early warning systems can make efficient distribution of limited resources for malaria control possible, helping to reduce rates of infection. This project provides real-time rainfall, temperature, vegetation and humidity data for the MEWS vectorial capacity model. This work involves obtaining historical data for rainfall, precipitable water, and humidity using NASA Earth observations from satellites such as TRMM and sensors such as MODIS. These observations are integrated with a model of mosquito behavior, which will enable identification of possible malaria epidemics. Recent results indicated strong links between clinical malaria incidence and rainfall patterns across Eritrea, with malaria incidence peaks lagging behind rainfall peaks by 2 to 3 months.
NASA collaborates with DOD-GEIS on the issues of Ebola and Rift Valley Fever in Africa and avian influenza (H5N1) in Southeast Asia. The African hemorrhagic fever project aims to provide monthly environmental and on-demand risk maps to DoD-GEIS by integrating information from NOAA AVHRR-NDVI (vegetation density), MODIS (surface temperature, NDVI, land cover), AMSR-E (soil moisture) on Aqua, TRMM (precipitation) and SRTM (topography) with additional simulated products from upcoming missions. By enhancing DoD-GEIS with NASA-derived environmental risk maps, the project supports: 1) GEIS efforts toward improving surveillance systems as crucial to preventing, detecting and containing these diseases; and 2) GEIS overseas laboratories in their service to host country counterparts, the WHO, and UN/FAO to improve local epidemiological capabilities. The influenza project in Southeast Asia aims to enhance the decision support capabilities concerning avian influenza risks and pandemic early warning at the team members' organizations, the DOD-GEIS and the Naval Medical Research Unit-2 (NAMRU-2). Environmental parameters such as land cover, precipitation, temperature, and humidity may be key factors in the spread of influenza.
The CMVSRP was developed jointly by the California Department of Public Health, the University of California, Davis and the Mosquito and Vector Control Association of California to provide state-wide guidelines for the collection of
surveillance information to monitor the distribution and amplification of mosquito-borne encephalitis viruses in California, focusing on West Nile virus (WNV), St. Louis encephalitis virus (SLEV) and western equine encephalomyelitis virus (WEEV). Weekly or biweekly measures of mosquito abundance and infection, sentinel chicken infection, the numbers of dead birds reported by the public and tested for WNV, horse cases and human cases are captured in real time within the Surveillance Gateway, a data management system written by B. Park, UC Davis (). Originally, climate measures were gathered from ground stations provided by CIMIS (California Irrigation Management Information System) or NOAA weather stations and related to long-term averages to determine trends. The CMVSRP did not originally incorporate information from NASA satellites or ecosystem models. A survey of vector control agencies in California conducted at the start of this project found that only 1 in 29 responding agencies had previously used satellite data of any kind, yet over 75% of respondents indicated that they would use risk maps and other relevant environmental measures derived from NASA satellites and ecosystem models if they were easily accessible. In addition, the CMVSRP lacked a predictive capability, because operation of predictive models required acquisition and processing of environmental measures in near real time. NASA's Terrestrial Observation and Prediction System (TOPS) provides a suite of environmental measurements derived from NASA satellites and ecosystem models that are well suited to the development of models for mosquito abundance and virus transmission risk. In addition, TOPS provides a capability for delivery of these data sets in near real time. The CMVSRP system now incorporates temperature data from TOPS to enhance surveillance and vector control efforts and to extend the existing decision support system. Starting in 2009 the CMVSRP will be enhanced to include an early season predictive capability and to evaluate the use of additional ecosystem measures.
THE FUTURE
The next generation of NASA Earth-observing satellites will be launched over the next several years. Many will have potential benefits for public health applications. Glory will be launched in 2010. Glory will collect data on the chemical, microphysical, and optical properties and the spatial and temporal distributions of aerosols, as well as continuing the collection of total solar irradiance data for the long-term climate record. The NPOESS Preparatory Mission (NPP) will be launched in 2011. NPP will serve as a bridge mission between the NASA Earth-observing research satellites and the operational National Polar-orbiting Operational Environmental Satellite System (NPOESS) constellation. The Landsat Data Continuity Mission (LDCM), in partnership with the United States Geological Survey, is to be launched in 2013. LDCM will maintain the satellite observations of land use and land cover that began with the first Landsat mission in 1972. The Global Precipitation Measurement (GPM) mission, to be launched in 2014 in partnership with the Japanese space agency JAXA, will provide accurate observations of the intensity and distribution of global precipitation. GPM builds on the heritage of the TRMM mission. Additionally, many new missions included in the U.S. National Research Council's report Earth Science and Applications from Space: National Imperatives for the Next
Decade and Beyond [http://www.nap.edu/catalog.php?record_id=11820] will have benefits for public health applications. The Soil Moisture Active Passive (SMAP) mission, to be launched in 2013, will use a combined radiometer and high-resolution radar to measure surface soil moisture and freeze-thaw state. The Hyperspectral Infrared Imager (HyspIRI), to be launched in 2015, will employ a hyperspectral imager and a thermal infrared scanner to monitor a variety of ecological and geological features at a wide range of wavelengths, including data on changes in vegetation type and deforestation for ecosystem management. The Deformation, Ecosystem Structure and Dynamics of Ice mission (DESDynI), to be launched in 2015, is a dedicated InSAR and LIDAR mission optimized for studying hazards and global environmental change, including the effects of changing climate on land use and species habitats. Each of these satellites will provide observations of even greater temporal and spatial resolution to further enhance decision support for society.
In general, NASA's contribution to epidemiologic surveillance systems is to increase both the descriptive and the analytical information available to them. More importantly, enhancement of these systems with NASA research results will increase their predictive value. NASA Earth science satellite observations and model predictive capabilities will also add to the understanding of the distribution and frequency of disease as a function of climate and weather-related phenomena. Research to date has suggested many correlations between these Earth processes and disease. Through all of these efforts, NASA is helping to bring public health surveillance into the 21st century.
CLIMATE INSIGHTS FROM MONITORING SOLAR ENERGY OUTPUT
JUDIT M. PAP
NASA Goddard Space Flight Center, Greenbelt, USA

OVERVIEW
The most important environmental problems facing humanity today are to understand and predict global change (both natural and man-induced) as well as the rapid changes in our space environment.

CRITICAL ISSUE
What is the relative impact of natural influences, specifically solar variability and cosmic rays, and of anthropogenic influences on changes in the Earth's atmosphere?

BASIC SCIENTIFIC PROBLEMS
Understanding the origin of solar variability and the way it affects the radiative, photochemical and dynamical processes of the Earth's atmosphere.

REQUIREMENTS
1. Accurate long-term total and spectrally resolved solar irradiance measurements are required to fully understand the response of the Earth's atmosphere and climate to irradiance changes.
2. Joint efforts are required to carry out careful laboratory measurements to understand the fundamental effects of cosmic rays and the chemical processes they induce, and to determine their climate influence with field observations and modeling.

SOLAR IRRADIANCE VARIATIONS
Understanding 1: The three-decade-long irradiance measurements have demonstrated that solar irradiance (both total irradiance and spectral irradiance at various wavelengths) varies on several time scales, from minutes to the 11-year solar cycle; the existence of slow secular changes is still debated but cannot be ruled out.
Understanding 2: Similar to solar irradiance variations, other solar-type stars also show changes in their luminosity.
Problem 1: The time period of interest far exceeds the lifespan of any single experiment, thus composite irradiance time series must be compiled from data of several irradiance experiments.
Problem 2: On time scales longer than the almost three-decade-long irradiance record, surrogates (magnetic activity indices) for irradiance have to be used to mimic the observed irradiance changes.
Question 1: How reliable are the indices used for irradiance modeling when predicting long-term irradiance changes?
Question 2: What are the underlying mechanisms of irradiance variations, and to what extent do surface magnetic activity and global solar changes, such as temperature or radius changes, contribute to irradiance variations?
Question 3: What new tools do we have, or need, to predict solar irradiance variability better than the currently available empirical models?

TOTAL SOLAR IRRADIANCE MONITORING RESULTS: 1978 TO PRESENT
[Figure: Total solar irradiance monitoring results, 1978 to present, from the overlapping satellite radiometer records, including ACRIMSAT/ACRIM3 and SORCE/TIM.]
TOTAL IRRADIANCE COMPOSITES
To study the climate effect of irradiance variations, two composites are used: the ACRIM composite and the PMOD composite.
ACRIM Composite: Willson and Mordvinov (2003) adjust the ACRIM I and ACRIM II data via their mutual inter-comparison with the Nimbus-7/ERB measurements, using TSI data as published by the original instrument teams. This composite shows a small (0.05%) increase from the minimum of solar cycle 21 to the minimum of cycle 22.
PMOD Composite: In contrast, Fröhlich and Lean (1998) adjust the Nimbus-7 and ACRIM I data for 1980 and the Nimbus-7 data between 1989 and 1991, arguing that instrumental drifts and improper corrections for degradation affected the ACRIM I and Nimbus-7 data at the beginning of their operation, and that the Nimbus-7 data were influenced by several drifts after 1987.
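The basic step in building such composites, placing two overlapping radiometer records on a common scale, can be sketched as follows. The series are synthetic and the simple mean-offset adjustment over the overlap is only an assumption for illustration; the published ACRIM and PMOD composites use far more detailed corrections for degradation, pointing and drifts.

```python
import numpy as np

def merge_records(days_a, tsi_a, days_b, tsi_b):
    """Splice two overlapping TSI records onto the scale of record A.

    The offset is the mean difference over the days both instruments observed;
    record B is shifted by that offset and appended after record A ends.
    """
    overlap = np.intersect1d(days_a, days_b)
    offset = np.mean(tsi_a[np.isin(days_a, overlap)]
                     - tsi_b[np.isin(days_b, overlap)])
    tail = days_b > days_a.max()
    days = np.concatenate([days_a, days_b[tail]])
    tsi = np.concatenate([tsi_a, tsi_b[tail] + offset])
    return days, tsi, offset

# Synthetic daily records with a 0.3 W/m^2 calibration difference.
days_a = np.arange(0, 400)
days_b = np.arange(300, 700)
tsi_a = 1365.5 + 0.5 * np.sin(2 * np.pi * days_a / 365)
tsi_b = 1365.2 + 0.5 * np.sin(2 * np.pi * days_b / 365)

days, composite, offset = merge_records(days_a, tsi_a, days_b, tsi_b)
print(f"estimated offset: {offset:.2f} W/m^2, composite length: {len(days)} days")
```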
[Figure: ACRIM composite TSI time series (daily means), built from the Nimbus-7/ERB, ACRIM I, II and III records (Willson and Mordvinov, 2003).]
[Figure: The PMOD composite TSI time series (Fröhlich and Lean, 1998), daily values from 1978 to 2008 plotted against days since 0 January 1980.]
SOLAR ACTIVITY INDICES FOR EMPIRICAL MODELS
The Mg II h & k core-to-wing ratio (Mg c/w), derived from the vicinity of 280 nm (as produced at NOAA/SEC by Rodney Viereck); this index is considered a good proxy for faculae.
Ca II K-line observations, available from several observatories, provide a good indicator of chromospheric activity.
The 10.7 cm radio flux, available since 1947; considerable effort has been devoted to extending irradiance models using this index.
The sunspot number (SSN), the longest solar time series available for solar modeling (other than the cosmogenic isotopes).
Full-disk magnetic field strength data from the National Solar Observatory at Kitt Peak.

EMPIRICAL MODELS
1. Linear regression models: TSI = a + b·PSI + c·(Mg c/w), where PSI is the index for sunspot darkening and Mg c/w is the proxy for faculae. (The Photometric Sunspot Index, PSI, is calculated from the area, position and contrast of sunspots; a minimal fitting sketch is given after this list.)
2. Assumption 1: Sunspots modify irradiance on short time scales (dips in solar irradiance), while bright features (faculae and network) cause the solar cycle variations.
3. Assumption 2: Irradiance is high when the solar activity level is high and low when the activity level is low.
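As a sketch of how such an empirical model is fitted, the snippet below performs an ordinary least-squares fit of TSI against PSI and the Mg c/w index. The data and the coefficients are synthetic inventions for illustration only; real reconstructions fit the published composite TSI and proxy records.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # number of daily samples (synthetic)

# Synthetic proxy records: sunspot darkening (PSI) and facular brightening (Mg c/w).
psi = rng.gamma(shape=2.0, scale=0.3, size=n)    # W/m^2-equivalent darkening
mg_cw = 0.265 + 0.015 * rng.random(n)            # core-to-wing ratio

# Synthetic "observed" TSI built from assumed coefficients plus noise.
a_true, b_true, c_true = 1357.0, -1.0, 30.0
tsi = a_true + b_true * psi + c_true * mg_cw + rng.normal(0.0, 0.05, n)

# Least-squares fit of TSI = a + b*PSI + c*(Mg c/w).
design = np.column_stack([np.ones(n), psi, mg_cw])
coeffs, *_ = np.linalg.lstsq(design, tsi, rcond=None)
a, b, c = coeffs
print(f"a = {a:.2f}, b = {b:.3f}, c = {c:.2f}")
```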
VARIATIONS IN THE ACTIVITY INDICES: MG II CORE-TO-WING RATIO (Mg c/w)
[Figure: Mg c/w ratio and its long-term trend (WL = 3000); day number 1 = November 7, 1978.]
VARIATIONS IN THE ACTIVITY INDICES: Ca II K LINE
[Figure: Kitt Peak Ca II K index and 365-day running means, for data between January 1, 1977 and June 10, 2005.]
VARIATIONS IN THE ACTIVITY INDICES: SUNSPOT NUMBER TIME SERIES
[Figure: Sunspot number and its long-term trend (WL = 3000), for data between January 1, 1977 and September 30, 2005.]

SOLAR MAGNETIC INDICES
Since 1975, full-disk magnetograms have been taken to produce two numbers for each one: the average field strength (the sum of all the individual measurements divided by the number of measurements) and the average absolute field strength (the same, but taking the absolute value of each measurement first). Major changes occurred in 1992 and 2003 when the instruments changed; minor changes occurred at other times when the software changed. The KPVT observations were carried out in the 868.8 nm line starting in 1977. For day numbers between 8146 and 8357, a different spectral line (550.7 nm) was used. After day number 8357 (11/30/03), the new SPMG was in use with the 868.8 nm line.
[Figure: KPVT and SOLIS magnetic field strength and long-term trend (WL = 3000), for data between January 7, 1977 and November 2, 2005.]

As estimated, the magnetic field values are all on the same scale (this is probably correct to of order 10%). As can be seen, the maximum of cycle 23 was far lower than that of the last cycle, despite the fact that solar irradiance was almost as high as during the last strong cycles. By mid-2005 the KPVT data were close to minimum conditions, and the SOLIS data indicate that we are already at solar minimum conditions. However, the absolute values of the measured field can be in error due to various effects (errors in the zero-point level, image resolution, instrumental noise). Thus, further studies are required to confirm whether the low level of the absolute magnetic field strength during the declining portion of cycle 23 is entirely (or to what extent) due to the Sun.

CLIMATE IMPLICATIONS OF SOLAR VARIABILITY
The role of solar variability in climate change has been debated for a long time. New results from various space experiments monitoring the radiative and particle emissions from the Sun and space have opened a new era in solar-terrestrial physics. High-resolution spatial and temporal observations conducted from space and the ground demonstrate that the surface of the Sun and its outer atmosphere are highly variable on almost all time scales, and the variable solar output may affect the climate in many fundamental ways.
However, the observed change in total irradiance over a solar cycle is small (about 0.1%), and secular variations in solar irradiance are not yet confirmed, partially because of the length of the measurements, different adjustment methods and insufficient modeling.
SURFACE TEMPERATURE CHANGES
[Figure: (a) Global average surface temperature, 1850-2000. Globally averaged surface temperature changes from the beginning of the industrial era (by courtesy of M. Muscheler).]
[Figure: Northern Hemisphere temperature reconstructions (CH-blend and Moberg) from 1300 to 2000, shown together with volcanic and greenhouse gas plus aerosol forcings.]
PROPOSED MECHANISMS: COSMIC RAYS AND CLIMATE
The topic of cosmic rays and climate is very relevant in this time of an extended and deep solar minimum. With the low solar activity and weak solar wind, the increase of GCRs is likely to go on increasing for several years (as it did for several decades during the Maunder Minimum), as the weak solar wind works its way to the outer heliosphere.
Clouds play an important role in the radiation budget of the Earth. Low clouds, especially, reflect a large amount of the incoming solar radiation back to space, leading to a cooling effect on the Earth's climate during the day. During the nighttime, however, low clouds tend to warm the Earth by returning infrared radiation. In winter, at high latitudes, low clouds tend to warm the Earth's surface even in daylight. In contrast, thin high clouds attenuate sunlight somewhat, but more effectively absorb and radiate back to Earth the infrared radiation. Based on satellite observations, Dickinson (1975) and Svensmark and Friis-Christensen (1997) suggested that cosmic rays contribute significantly to the production of cloud condensation nuclei. It was assumed that this might be the "missing link" of the solar influence on the Earth's climate and that it might potentially explain the warming trend between 1981 and 1995. In this theory, the GCR flux is directly related to the strength of the interplanetary magnetic field, causing cooler temperatures during strong solar cycles, when the solar wind carries more magnetic field than during weak cycles. However, various other investigations, and extensions of the time period up to the present, rule out this scenario, although it is not inconsistent with the proposition by Tinsley that the link between solar activity and cloud formation acts via cosmic-ray-induced changes in the global electric circuit. Specifically, Tinsley's hypothesis assumes that cosmic rays ionize the air in the troposphere and that variations in the solar wind modulate the cosmic ray flux arriving in the Earth's atmosphere. This ionization produces charged atmospheric aerosols, which increases their effectiveness as ice nuclei. According to this theory, the induced changes in ionization favor the freezing of supercooled cloud drops, releasing latent heat that may modify the development of middle-latitude depressions. It has also been suggested that the galactic cosmic rays modulate the ionosphere-Earth current, thus modulating ice nucleation rates.

UV IRRADIANCE CHANGES
It has been shown that changes in the UV irradiance over the solar cycle are considerably larger than changes in total irradiance. For example, the maximum-to-minimum change in the Mg II h & k index, as shown before, is about 6%, and the magnitude of the changes increases with decreasing wavelength. Changes in the solar UV irradiance cause changes in the amount of heat deposited in the ozone layer of the atmosphere. This might cause changes in the global atmospheric circulation with significant influence on climate (Haigh, 1994; 1996; 2004).
This mechanism implies a cooling in the high northern latitude atmosphere during periods of low solar activity. Bond et al. (2001) suggest that this might lead to increased drift ice and cooling of the surface ocean and atmosphere.

MODELING OF THE SOLAR VARIABILITY EFFECT ON CLIMATE
In simplified cases, climate signals are the responses of the surface temperature field to external forcings, such as (i) the greenhouse gas signal; (ii) volcanic dust shading the sunlight; (iii) anthropogenic aerosols; and (iv) changes in the solar cycle. The main assumption is that the signals are small and can be thought of as superimposed on the background noise of natural variability, and that the presence of one signal does not influence the others. As shown, climate models predict a 4 to 5 degree increase in the Earth's temperature without taking into account the Sun's influence. In contrast, taking the solar influence into account, the anticipated temperature increase is about 50% smaller: reduced to 2.7°C using the Lean et al. (1995) model, and to 2.4°C using the Hoyt and Schatten model. These numbers, as well as the fact that the Sun has changed in the past and might change in the future, underscore that it is extremely important for both science and economic/policy making to reveal to what extent solar variability may influence climate.
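A minimal sketch of this superposition assumption: if each forcing produces a small, independent response, the observed temperature record can be modeled as a weighted sum of forcing fingerprints plus noise, with the weights estimated by regression. The fingerprints and amplitudes below are synthetic illustrations, not results from any published detection and attribution study.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2001)
n = years.size

# Synthetic forcing "fingerprints" (normalized time histories).
ghg = (years - 1900) ** 2 / 100.0 ** 2                          # slowly accelerating
solar = 0.5 + 0.5 * np.sin(2 * np.pi * (years - 1900) / 11.0)   # 11-year cycle
volcanic = np.zeros(n)
volcanic[[63, 82, 91]] = -1.0                                   # brief cooling spikes

# Synthetic "observed" temperature anomaly: assumed amplitudes plus noise.
true_amps = np.array([0.8, 0.1, 0.3])
fingerprints = np.column_stack([ghg, solar, volcanic])
temp = fingerprints @ true_amps + rng.normal(0.0, 0.05, n)

# Estimate the amplitude of each signal by least squares.
amps, *_ = np.linalg.lstsq(fingerprints, temp, rcond=None)
for name, amp in zip(["greenhouse", "solar", "volcanic"], amps):
    print(f"{name:10s} amplitude ~ {amp:.2f}")
```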
[Figure: Frequency distribution for ΔT2x (°C), the surface temperature response to a doubling of CO2 (from Andronova and Schlesinger, 2004).]
CONCLUSIONS
1. It has been established that solar variability played an important role in climate change in the past; however, it is apparent that the current temperature increase, especially after the 1970s, has mostly been driven by anthropogenic effects.
2. Climate simulations show that adding solar forcing to climate models may reduce the anticipated increase in temperature; however, the probability of rising temperatures remains substantially high.
3. The role of solar variability in climate change is still not understood, and several hypotheses have been put forward.
4. It is apparent that the measured total irradiance change is too small to cause climate effects; however, it is not clear whether a secular trend may occur, because of the length of the measurements, the different and inadequate adjustment techniques applied to individual time series, and the lack of good long-term models.
5. Solar cycle 23 has shown that the relationship between solar activity (surface magnetic activity) and global events is non-linear, whereas current empirical models assume a linear relationship.
6. UV irradiance variations may provide a large contribution to climate change; however, we again lack long-term measurements, adequate adjustment techniques and surrogates for long-term models.
7. Numerous investigations suggest that galactic cosmic ray variability is associated with climate change. However, it remains a question whether the GCR flux directly affects climate or is simply a proxy for the variable total irradiance and UV fluxes related to solar variability.
8. At solar minimum and during Maunder-type minima the GCR flux increases by more than 10%, because the weaker solar wind gives less attenuation of the cosmic rays coming in from the Galaxy. The increased GCR flux is associated with cooler temperatures, especially on a regional basis in Northern Europe; decreased GCR flux during strong solar activity tends to be associated with a warmer climate. However, the physical origin of the GCR-climate connection is still unresolved. Nevertheless, the relationships found indicate the importance of understanding and predicting solar variability for climate change and for societal and economic/political purposes.
9. The question remains open whether, and to what extent, the climate is influenced by solar and cosmic ray variability. Answering this question is central to our understanding of the anthropogenic contribution to climate change over the industrial era. It is also apparent that changes in the solar radiative output, both bolometric and at various wavelengths, must be studied in parallel with GCRs and climate changes, an extraordinary effort requiring the joint work of solar physicists, heliospheric physicists, atmospheric scientists and climatologists, maintaining joint efforts with economists, to gain the maximum outcome for the future of our society.
Problems we are faced with today:
1. Low solar activity at the maximum of solar cycle 23, when both the number and size of sunspots and faculae were low but irradiance was high: a missing component in irradiance models, or global effects?
2. We are in a very deep solar minimum; this minimum has more spotless days than any cycle since 1910.
3. Solar cycle 4, around 1788, was the longest cycle (14 years long), followed by two very weak cycles known as the Dalton Minimum, which was associated with cool Earth surface temperatures (one of the Maunder-type minima).
[Figure: Long-term sunspot number record: yearly averaged sunspot numbers, 1610-2007.]
4. During the current minimum the cosmic ray flux is exceptionally high, while solar wind and eruptive events are low.
5. The nature of the minimum of the current cycle fits the minimum of cycle 4, and may be even lower. One can speculate whether we are entering a Maunder-type minimum and what the consequences will be.
6. If the Sun continues to be as quiet as observed now, we are faced with a cooling effect caused by the combined effect of solar radiation and cosmic rays, and a warming effect from human activity.
These results underscore the need to understand the contribution and relation of irradiance and cosmic ray variations to the Earth's climate in order to better predict forthcoming climate changes and their economic/political consequences. Collaboration between climatologists, solar-terrestrial physicists and economists is urgently needed to be well prepared for any scenario of climate change on Planet Earth.

NEW MEASUREMENTS
1. Solar Dynamics Observatory (SDO): planned launch date November 2009; will measure longitudinal and horizontal magnetic fields, white-light features, the solar corona and the solar EUV flux.
2. GLORY: one side looks at the Earth, the other side looks at the Sun; total irradiance measurements, providing a bridge between the current SORCE/TIM and the forthcoming NPOESS irradiance measurements from NOAA platforms.
3. French PICARD experiment: planned launch date January 2010; total and spectral irradiance measurements (at 215, 268, 535, 607 and 782 nm), images (at 235, 393, 535, 607 and 782 nm), helioseismology, and radius measurements. The goal of PICARD is to investigate solar forcing on Earth's climate and the physics of the Sun that leads to solar irradiance variability. Its instruments are:
1. SOlar DIameter and Surface Mapper (SODISM): will carry out solar diameter, asphericity, and helioseismology observations.
2. SOlar VAriability PICARD (SOVAP): will measure total irradiance.
3. PREcision MOnitoring Sensor (PREMOS): will measure total irradiance and spectral irradiance at five wavelengths matching the SODISM wavelengths.
Notes:
1. Improved confidence in total irradiance measurements can be achieved by simultaneously operating two radiometers of different designs.
2. The PREMOS instrument may help to resolve the issue of the 5 W/m2 difference between SORCE/TIM and the rest of the measurements, since PREMOS is an absolutely characterized instrument and also has a vacuum power calibration traceable to NPL as well as a comparison to the irradiance facility in Boulder.
3. SDO, GLORY and PICARD will help to maintain the long-term irradiance measurements for climate studies, and combined research on the SDO and PICARD data will lead to physical models and more accurate models/predictions of solar activity, irradiance changes and related climate changes.
SESSION 8 CLIMATE & CLOUDS FOCUS: SENSITIVITY OF CLIMATE TO ADDITIONAL CO2 AS INDICATED BY WATER CYCLE FEEDBACK ISSUES
A NATURAL LIMIT TO ANTHROPOGENIC GLOBAL WARMING
WILLIAM KININMONTH*
Australasian Climate Research, Kew, Victoria, Australia

The burning of fossil fuels and other activities of modern industrial and agricultural economies emit carbon dioxide (CO2) and other so-called greenhouse gases into the atmosphere. The build-up in concentration of the greenhouse gases will enhance the greenhouse effect, causing global warming. It is claimed that uncontrolled emissions will have catastrophic impacts for life on Earth, including more storm destruction, inundation of low-lying coastal margins from rising sea levels, more frequent heatwaves and droughts reducing food and water availability, and the spread of disease to higher latitudes.
The concept of dangerous climate change, although central to the UN's Framework Convention on Climate Change and its Kyoto Protocol to restrict human-caused greenhouse gas emissions, has never been formally defined. A general understanding has evolved within scientific and political discussions that global warming of more than 2°C above pre-industrial levels would constitute dangerous climate change. Some scientists go so far as to suggest that 2°C represents a 'tipping point' beyond which 'runaway global warming' and irreversible climate change are likely. The evidence, however, is speculative and linked to the projections of computer models.
The computer projections of global temperature change over the 21st century are based on various scenarios for limiting CO2 emissions and a hierarchy of computer models. The so-called best estimates range between 1.8°C and 4.0°C of temperature rise over the 21st century.
The claims that recent global temperature rise is largely attributable to human activities and that unregulated greenhouse gas emissions will cause dangerous climate change largely have their foundation on three premises:
1. The climate was stable prior to industrialisation and the Earth was in radiation balance, emitting to space as much infrared radiation as the solar radiation being intercepted and absorbed.
2. The apparent stability is now being disrupted as accumulation of human-caused CO2 emissions in the atmosphere is reducing infrared radiation to space in wavelengths characteristic of CO2 and other human-caused greenhouse gases. In order to return to radiation balance it is necessary for the Earth to warm so that more infrared is emitted across other wavelengths at a higher temperature.
* William Kininmonth is a former head of Australia's National Climate Centre. He was actively engaged in the work of the World Meteorological Organization's Commission for Climatology for more than two decades and is the author of Climate Change: A Natural Hazard (2004, Multi-Science Publishing Co., UK).
3. There is a direct and linear relationship between the reduction of infrared radiation to space² (the so-called radiation forcing, ΔF) and the increase in surface temperature, ΔTs.
There is little disagreement that additional CO2 in the atmosphere will enhance the greenhouse effect. However, these seemingly plausible statements are either demonstrably false or not verified by rigorous theory or observation. The relationship between radiative forcing and surface temperature response does not have a theoretical underpinning, and the sensitivity factor λ can only be estimated from computer models. The value of λ given by different computer models varies over a relatively broad range; there is no way of assessing whether λ should have a low value or a high value. The IPCC, without rigorous scientific analysis, suggests that the average of all models is the most realistic estimate that should be used.
Faced with such uncertainty it is reasonable to re-examine the scientific premises. It comes as little surprise that our understanding of the climate system has advanced since the premises were first formulated more than two decades ago. It is surprising that the IPCC has not incorporated new knowledge into its description of the climate system and its evaluation of computer model performance outlined in the most recent 2007 assessment report.

CARBON DIOXIDE AND RADIATION TO SPACE
CO2 absorbs and emits radiation within selected bands of the infrared spectrum. That is, within these bands the CO2 molecules absorb radiation that has been emitted from the earth's surface; the intensity of that emission is characteristic of the local surface temperature. Also, within these bands the CO2 molecules emit radiation in all directions, but with an intensity that depends on the prevailing gas temperature and its emissivity. Treating the atmosphere as a layer, we find the emission to space is of much less intensity than the radiation emitted from the surface. This is because the earth's surface is much warmer than the cold high layer of the atmosphere from whence the radiation to space originates. However, the lowest warm layer of the atmosphere is also emitting radiation back to earth.
What is of importance in this discussion is the change in radiation intensity as the concentration of CO2 varies. Figure 1 illustrates how the changing concentration of CO2 affects the radiation intensity, both the emission from the atmosphere to space and the downward emission from the atmosphere to earth. These calculations have been performed using the MODTRANS³ radiation transfer model based on the U.S. Standard Atmosphere under clear sky conditions. As the CO2 concentration of the atmosphere increases, the infrared radiation in the CO2 wavelengths emanates from a higher, colder altitude and the intensity decreases. At the surface, the downward infrared radiation emanates from a lower, warmer altitude as the CO2 concentration increases.

² The Intergovernmental Panel on Climate Change (IPCC) refers to the 'radiation forcing' as the reduction in upward directed infrared at the tropopause due to the increase in CO2 concentration.
³ MODTRANS is a medium resolution radiation transfer model and is accessible through the University of Chicago at http://geosci.uchicago.edu/archer/cgimodels/radiation.html.
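The qualitative behaviour described here, and shown from MODTRANS in Figure 1 below, can be illustrated with a deliberately crude single-layer grey-atmosphere sketch. The logarithmic emissivity law, the layer temperatures and the coefficients are assumptions chosen only to make the direction of the changes visible; they are not the MODTRANS calculation.

```python
import numpy as np

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_SURF = 288.0    # surface temperature, K
T_HIGH = 220.0    # effective emission temperature of the cold upper layer, K
T_LOW = 283.0     # effective emission temperature of the warm near-surface layer, K

def layer_emissivity(co2_ppm, eps0=0.78, k=0.02):
    """Toy emissivity that grows logarithmically with CO2 (assumed form)."""
    return min(1.0, eps0 + k * np.log(co2_ppm / 280.0))

for co2 in (100, 200, 400, 800):
    eps = layer_emissivity(co2)
    # Emission to space: surface radiation leaking through plus cold-layer emission.
    up = (1 - eps) * SIGMA * T_SURF**4 + eps * SIGMA * T_HIGH**4
    # Back radiation to the surface comes from the warm lower layer.
    down = eps * SIGMA * T_LOW**4
    print(f"CO2 {co2:4d} ppm: up-to-space {up:6.1f} W/m^2, back radiation {down:6.1f} W/m^2")
```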
[Figure 1 plot: upward emission to space at 70 km, downward emission at the surface, and net loss from the atmosphere, plotted against CO2 concentration from 0 to 800 ppm.]
Fig. 1: Changes in upward infrared emission to space, downward emission at the surface (both left-hand scale), and net radiation loss from the atmosphere (right-hand scale) for changing concentrations of CO2. (Computed from MODTRANS for the U.S. Standard Atmosphere and clear sky.)

Two points of Figure 1 are of interest:
1. As the concentration of CO2 increases, the reduction in intensity of the emission to space is similar in magnitude to the corresponding increase in intensity of downward radiation at the surface. As a consequence, as CO2 concentration increases there is only a small increase in net radiation loss from the atmospheric layer.
2. Figure 1 does not give support to the notion that, as the atmospheric CO2 concentration increases, there is more absorption of infrared radiation by the atmospheric layer, leading to warming of the atmosphere. There is an equal or greater loss of energy to the surface as downward emission increases with increasing CO2 concentration.

The notion of radiation forcing is further weakened when the variation with latitude of net radiation at the top of the atmosphere (solar absorption less infrared emission) is considered. Figure 2 clearly shows a surplus of solar radiation over tropical latitudes and excess emission to space over polar latitudes. Nowhere are surface temperatures determined by local radiation balance. In order to achieve overall global
radiation balance, large quantities of energy are transported from the tropics to polar regions by the ocean and (principally) the atmospheric circulations. As a consequence of the poleward transport of energy, the polar temperatures are warmer than they would be under local radiation equilibrium. Moreover, the polar temperatures (and the ice mass magnitude, i.e., glaciation) will vary as the poleward energy transport varies. The ocean and atmosphere are two interacting fluids, and it is to be expected that the partitioning of the poleward energy transport will vary over a range of timescales. Indeed, there is every reason to believe that the partitioning will fluctuate with time such that polar temperatures fluctuate on similar timescales.
[Figure 2 plot: zonal mean net radiation at the top of the atmosphere (W/m²) plotted against latitude, from 90°S to 90°N.]
Fig. 2: Zonal mean variation with latitude of net radiation (solar absorption minus infrared emission to space) at the top of the atmosphere (TOA). (Trenberth and Caron.)⁴
The message of Figure 2 is that the ocean and atmospheric circulations are continually acting to bring about overall global radiation balance at the top of the atmosphere. There is no unvarying steady state. At times the climate system is accumulating energy and at other times there is a net loss of radiant energy, depending on the changing ice mass, the changing energy storage of the respective fluids and the thermodynamics of the fluid flows. This is evident because the earth's annual climate cycle is not exactly repeated. In addition, known ocean-atmosphere phenomena such as El Niño and various multi-decadal oscillations reflect major variations to the climate cycle.
The anthropogenic global warming hypothesis is critically dependent on the assumption that a reduction of infrared radiation to space in the CO2 wavelength bands will cause the earth to warm and increase the intensity of emissions across the rest of the radiation spectrum. This assumption does not take cognisance of the fact that, at least for tropical and subtropical latitudes, the main variation in infrared radiation emission to space is brought about through variations in cloud and water vapour distribution.

⁴ Trenberth, K.E. and J.M. Caron (2001) "Estimates of meridional ocean and atmospheric heat transports." J. of Clim. 14:3433-3443.
[Figure 3 plot: NCEP/NCAR Reanalysis OLR (W/m²) climatology, 1968-1996, January to December.]
Fig. 3: Spatial variations of climatological infrared radiation to space (OLR). Radiation to space is reduced in the regions of deep tropical convection because the emission largely emanates from the high cold cloud tops. Radiation is highest in the regions of dry descending air, where the emission emanates from warm layers near the surface. Radiation is also reduced over the cold polar regions.

The dominant control of cloud and water vapour distribution can be readily seen in Figure 3. In regions of recurring deep convective clouds with tops in the high cold troposphere, such as over the Congo and Amazon Basins and the warm equatorial oceans extending from the Indian Ocean to the western Pacific Ocean, the radiation to space is reduced. In contrast, over much of the subtropics and other regions of dry subsiding air, the radiation to space emanates from much lower in the atmosphere where temperatures are warmer. Variations in infrared radiation emission to space can be more than 80 W m⁻² from cloud to cloud-free regions. In addition, these spatial patterns are not fixed in time but vary on hourly, daily, weekly and longer scales, including the annual cycle and from year to year. There are major disruptions to the cloud and outgoing infrared radiation patterns during El Niño events, when the deep convective clouds form over the central and eastern equatorial Pacific Ocean. The changing cloud and moisture patterns during El Niño
events significantly change the magnitude of poleward energy transport and the pattern of infrared radiation emission to space.
Interactions between the ocean and atmospheric fluids regulate internal variability of the climate system, especially the changing poleward transport of energy and the changing cloud and moisture patterns. These internal processes have a dominant control over the magnitudes and pattern of infrared radiation to space. It is not plausible that the only response from a change to CO2 concentration, and its small reduction of infrared radiation to space, will be an increase of surface temperature. The small decrease in infrared radiation to space resulting from CO2 increase will be overwhelmed by the magnitude of the ever-changing patterns resulting from the atmospheric circulation and associated cloud and moisture distribution. There is no sound theoretical basis to expect a reduction in infrared radiation to space in the relatively narrow CO2 wavelength bands to be directly and unequivocally linked to an increase in surface temperature.

CARBON DIOXIDE AND SURFACE ENERGY EXCHANGE
In contrast to the upper atmosphere and the ever-changing infrared radiation to space, any change in CO2 concentration and downward infrared radiation will directly affect the surface energy balance and surface temperature. An increase in the concentration of atmospheric CO2 will increase the downward infrared radiation and tend to warm the surface. The magnitude of the actual surface temperature rise will be regulated by the response of the other surface energy exchange processes to the CO2 radiation forcing.
At the surface, the energy inputs are solar radiation and the back radiation from the atmosphere (the emissions of infrared radiation from the greenhouse gases, principally water vapour and CO2, and clouds). The surface energy losses are primarily by way of direct heat exchange between the surface and the atmosphere, latent energy exchange between the surface and the atmosphere due to evaporation of water, and the emission of infrared radiation from the surface. There is also a loss or gain of energy to surface storage (in the land surface or ocean surface layer) if the surface temperature is warming or cooling, but this is small compared to the energy exchange processes and is neglected here.
The increase in downward radiation, ΔF_CO2, due to increased CO2 concentration will vary the magnitudes of the surface energy exchange processes and cause an increase in surface temperature, ΔTs, given by:

ΔF_CO2 = [dFu/dT + dLH/dT + dH/dT − dS/dT − dFd/dT] · ΔTs    (1)

Here dS/dT is the rate of change of solar radiation absorbed at the surface with temperature; dFd/dT is the rate of change of back radiation with temperature; dFu/dT is the rate of change of surface emission with temperature; dH/dT is the rate of change of direct surface heat exchange with temperature; and dLH/dT is the rate of change of latent energy exchange with temperature.
The magnitude of solar radiation at the surface will vary with cloudiness changes but not directly with variation of CO2 concentration. Cloudiness may change with the surface temperature of the earth, but a priori we do not know the direction or magnitude of any potential change. In the first instance solar radiation is treated as a constant that does not change with temperature. The downward infrared radiation at the surface varies directly with greenhouse gas concentration and the temperature of the air near the ground. The main greenhouse gases are water vapour and CO2; water vapour concentration varies with temperature and CO2 concentration varies with fossil fuel usage. In the context of anthropogenic global warming, CO2 is the forcing process; atmospheric temperature and water vapour concentration are response processes. The back radiation at the surface will increase as the concentration of either CO2 or water vapour increases. The direct exchange of heat between the surface and atmosphere varies with the vertical gradient of air temperature at the surface. However, the atmosphere has a relatively low thermal capacity and the temperature of the air near the ground increases as the surface temperature increases. Consequently, the rate of heat exchange between the surface and atmosphere does not vary appreciably as the surface temperature changes; it is ignored in this discussion. The infrared emission from the surface varies with emissivity and temperature according to the Stefan-Boltzmann Law. The emissivity varies with the nature of the surface (land, vegetation or ocean) but not with temperature. The evaporation of water that exchanges latent energy between the surface and the atmosphere varies with the wetness of the surface (water body, moist soil, evapotranspiration from plants, etc.) and the vapour pressure gradient near the surface. The IPCC suggests that the relative humidity near the surface does not vary with temperature. More than 70 percent of the Earth's surface is water and ice, and there is no a priori information on how the wetness and vegetation of land surfaces may vary with temperature. It is assumed that the rate of evaporation and latent energy exchange vary according to the Clausius-Clapeyron relationship (the rate of change of saturation vapour pressure with temperature).
Recognising that solar absorption and direct heat exchange vary little with temperature, equation 1 can be reduced to:

ΔF_CO2 = [dFu/dT + dLH/dT − dFd/dT] · ΔTs    (2)

and rearranged to:

ΔTs = ΔT_CO2 / (1 − r)    (3)

where

ΔT_CO2 = ΔF_CO2 / [dFu/dT + dLH/dT]    (4)

and

r = dFd/dT / [dFu/dT + dLH/dT]    (5)

Here ΔT_CO2 is the direct surface temperature response resulting from CO2 forcing and 1/(1 − r) is the feedback amplification due to atmospheric temperature and water vapour increase. It is important to note that the rate of change of surface energy loss with temperature, given by [dFu/dT + dLH/dT], constrains both the direct surface temperature response to radiation forcing and the magnitude of the feedback amplification.
Fig. 4: Changing magnitudes of the major surface energy exchange processes over the range of typical temperatures of the Earth's surface. (The back radiation is computed for the U.S. Standard Atmosphere under clear sky conditions using the MODTRANS model.)

In Figure 4 are plotted the magnitudes of the major surface energy exchange processes across a range of temperatures typical of the Earth's surface. The surface emission is according to the Stefan-Boltzmann Law (emissivity = 1), while the back radiation is computed using the MODTRANS radiation transfer model for the U.S. Standard Atmosphere (approximately average global temperature and moisture) under clear sky conditions and constant relative humidity. Latent energy exchange is according to the Clausius-Clapeyron relationship (7 percent change with each degree Celsius variation: 7% °C⁻¹), scaled to the global average exchange of 78 W m⁻² at 15°C.
What is clear from Figure 4 is that the magnitudes of surface emission and back radiation increase in near parallel, as is to be expected because the temperatures of the surface and the near-surface atmosphere also increase in near parallel. As a consequence, there is little change in the magnitude of net infrared radiation loss from the surface across the temperature range. It is the latent energy exchange, approximately doubling in magnitude with every 10°C temperature rise, which dominates the changing surface energy loss with temperature. The importance of evaporation for limiting surface temperature has previously been discussed by Priestley (1966).⁵
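A minimal sketch of the surface emission and latent energy terms just described, under the stated assumptions (Stefan-Boltzmann emission with emissivity 1, and latent energy growing at 7% per °C from 78 W/m² at 15°C). The back radiation term is omitted here because it comes from the MODTRANS model rather than a simple closed formula.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_emission(t_celsius):
    """Blackbody surface emission (emissivity = 1), W/m^2."""
    return SIGMA * (t_celsius + 273.15) ** 4

def latent_energy(t_celsius, lh_15=78.0, rate=0.07):
    """Latent energy exchange growing ~7% per deg C, scaled to 78 W/m^2 at 15 C."""
    return lh_15 * (1.0 + rate) ** (t_celsius - 15.0)

for t in range(0, 31, 5):
    print(f"{t:2d} C: emission {surface_emission(t):6.1f} W/m^2, "
          f"latent {latent_energy(t):6.1f} W/m^2")

# 1.07**10 ~= 1.97, i.e. the latent term roughly doubles for each 10 C rise.
print("latent doubling factor per 10 C:", round(1.07 ** 10, 2))
```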
[Figure 5 plot: net surface energy loss (W/m²) against temperature (°C), together with the constant line of solar and other unvarying inputs; increasing CO2 raises the back radiation and reduces the net surface energy loss.]
Fig. 5: The magnitude of the net surface energy loss together with the solar absorption and other processes that do not vary with temperature, scaled to be in steady state at the Earth's mean temperature of 15°C. As CO2 concentration increases the back radiation also increases, thus reducing the net surface energy loss. The surface temperature rises to a new steady state for energy balance with the near-constant energy processes.

When the magnitude of the net surface energy loss (net infrared radiation plus latent energy) is plotted against temperature and scaled for steady state at the average temperature of the Earth, as in Figure 5, it is found that the surface temperature is relatively stable. A small change in surface temperature, either to a lower or a higher value, causes the surface energy loss to be out of balance with the steady energy input, and there is a strong tendency to return to the steady state temperature.
A change in the atmospheric CO2 concentration will also cause a shift to a new steady state surface temperature. For example, a doubling of the CO2 concentration from prevailing values will increase the back radiation by about 4 W m⁻². As a consequence, the net surface energy loss will be reduced by an equal magnitude and the surface energy
processes are out of balance. A new steady state is achieved by an increase in surface temperature of about 0.6°C, as shown in Figure 5. It should be noted that this adjustment to surface temperature is independent of changes that might be wrought by changing atmospheric circulation and distributions of cloud and moisture patterns. The changing CO2 concentration will directly affect the local surface temperature because of the impact that CO2 concentration has on back radiation and the ensuing surface energy balance. Unlike the tenuous connection between CO2-forced change to the infrared radiation to space and surface temperature, the change in back radiation has a direct impact on surface temperature and the effect is mathematically tractable. Moreover, because of the rapid increase of latent energy exchange with temperature, the surface temperature rise is constrained to a relatively small response.

⁵ Priestley, C.H.B. (1966) "The limitation of temperature by evaporation in hot climates." Agr. Meteorol., 3:241-246.

THE EXAGGERATED RESPONSE OF COMPUTER MODELS
There is nearly an order of magnitude difference between the relatively small surface temperature response of 0.6°C to a doubling of CO2 concentration calculated above and the projected responses quoted by the IPCC. The latter are based on computer models, with individual estimates ranging from 1.1°C to about 6.4°C. The key to the difference can be found in the formulation of the changing rate of latent energy exchange with temperature. Over a water surface with constant relative humidity, the rate of increase in evaporation (and latent energy exchange) with temperature will equate to the Clausius-Clapeyron relationship of 7% per degree Celsius, all other factors not varying. Held and Soden (2006)⁶ have identified that for the computer models used in the IPCC fourth assessment, on average the rate of increase of evaporation with temperature rise was only about one-third this value. This low value in computer models was confirmed by Wentz et al. (2007),⁷ who identified a range of 1-3% K⁻¹ for the global average evaporation increase across the models.
The anomalous reduction in the rate of evaporation increase with temperature, as specified in computer models, has significant consequences for the magnitude of temperature projection under CO2 forcing. The tendency to return to the steady state temperature is weakened. The slope of the curve of Figure 5 is reduced and surface temperature must rise by a larger magnitude to recover from the same radiative forcing of a doubling of CO2. More importantly, if the rate of increase of evaporation with temperature is significantly less than the Clausius-Clapeyron relationship, then the surface temperature response becomes very sensitive to CO2 forcing. The reduction in latent heat exchange with temperature means that the offsetting energy loss necessary to arrive at a new steady state from back radiation forcing must come from additional infrared radiation emission. That is, the new steady state energy exchange will be at a higher surface temperature than if the evaporation were following the Clausius-Clapeyron relationship.
⁶ Held, I.M. and B.J. Soden (2006) "Robust responses of the hydrological cycle to global warming." J. of Clim. 19:5686-5699.
⁷ Wentz, F.J., L. Ricciardulli, K. Hilburn and C. Mears (2007) "How much more rain will global warming bring?" Science Express, 31 May 2007.
The changing sensitivity of surface temperature to radiative forcing under different evaporation rate assumptions can be readily assessed by way of equation 3 above. At the average temperature of the Earth (15°C) the rate of increase of surface infrared emission with temperature change is given by the Stefan-Boltzmann Law as 5.4 W m⁻² °C⁻¹. The equivalent rate of increase of back radiation with temperature can be assessed, for example, using the MODTRANS radiation transfer model. With the assumptions that the U.S. Standard Atmosphere approximates the mean profile of the atmosphere, that relative humidity is constant (that is, the atmospheric water vapour increases with temperature in accordance with the Clausius-Clapeyron relationship) and ignoring clouds, it is found that the natural rate of increase in back radiation at the surface is about 4.8 W m⁻² °C⁻¹.
Table 1 sets out indicative values for the sensitivity of surface temperature to radiative forcing for a range of rates of latent energy exchange with temperature. The value of 6% °C⁻¹ is the global average estimate by Wentz et al. (2007) based on satellite estimates of changing precipitation during the global warming of recent decades. It is less than the Clausius-Clapeyron relationship, but this is not unexpected given the magnitude of arid and semi-arid land areas. The other values are typical of the computer models (GCMs) used in the IPCC fourth assessment of 2007.

Table 1: Indicative values of surface temperature increase from a doubling of CO2 concentration for a range of rates of increase of evaporation with surface temperature. The rates of surface latent energy exchange, dLH/dT, correspond to global values assessed from satellite analysis and to values typical of computer models (GCM) used in the 2007 IPCC fourth assessment.

dLH/dT                 | ΔTs/ΔF_CO2         | ΔTs (2 x CO2)
6% °C⁻¹ (satellites)   | 0.16 °C/(W m⁻²)    | 0.6°C
2% °C⁻¹ (average GCM)  | 0.45 °C/(W m⁻²)    | 1.7°C
1% °C⁻¹ (low-end GCM)  | 0.83 °C/(W m⁻²)    | 3.1°C
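A minimal sketch of the sensitivity calculation behind Table 1, obtained by rearranging equation (2) as ΔTs = ΔF_CO2 / [dFu/dT + dLH/dT − dFd/dT]. The values dFu/dT = 5.4 and dFd/dT = 4.8 W/m² per °C and the latent heat scaling of 78 W/m² at 15°C are taken from the text, while the 4 W/m² forcing is the figure quoted for doubled CO2; the output is indicative only and will not reproduce the rounded entries of Table 1 exactly.

```python
# Surface temperature response to a doubling of CO2 under different assumed
# rates of increase of evaporation (latent energy exchange) with temperature.

D_FU = 5.4      # dFu/dT: surface emission change, W m^-2 per deg C (Stefan-Boltzmann at 15 C)
D_FD = 4.8      # dFd/dT: back radiation change, W m^-2 per deg C (MODTRANS value per the text)
LH_15 = 78.0    # global mean latent energy exchange at 15 C, W m^-2
DF_2XCO2 = 4.0  # assumed surface forcing from doubled CO2, W m^-2

for label, rate in [("7%/C (Clausius-Clapeyron)", 0.07),
                    ("6%/C (satellites)", 0.06),
                    ("2%/C (average GCM)", 0.02),
                    ("1%/C (low-end GCM)", 0.01)]:
    d_lh = rate * LH_15                       # dLH/dT, W m^-2 per deg C
    sensitivity = 1.0 / (D_FU + d_lh - D_FD)  # deg C per W m^-2, from equation (2)
    dts = sensitivity * DF_2XCO2              # deg C for doubled CO2
    print(f"{label:28s} sensitivity {sensitivity:.2f} C/(W/m^2), dTs {dts:.1f} C")
```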
It is very clear from Table 1 that the surface temperature response to CO2 forcing is very sensitive to the specification of the rate of increase of evaporation, and hence latent energy exchange, with temperature increase. The analysis of Table 1 clearly points to a high likelihood that the computer models used as the basis for the IPCC estimates of anthropogenic global warming are significantly exaggerating the projected global temperature response. If we accept that the rate of surface evaporation will increase at near the Clausius-Clapeyron relationship, then a doubling of CO2 concentration, from current levels to near 800 ppm by the end of the 21st century, is not likely to cause a global temperature rise exceeding 1°C. Such a rise is well within the range of natural variability and should not be construed as dangerous.
ISSUES WITH SURFACE EVAPORATION

Surface evaporation, and the associated latent heat exchange, is a very difficult process to quantify. Over extensive water bodies the thermal capacity of the mixed surface layer is often a sufficient source of energy, and the primary regulating factors on evaporation are wind speed, atmospheric stability and the vertical vapour pressure gradient. The relationship between these factors is not linear, and evaporation can vary significantly with space and time. Over land the surface and vegetation have only a limited thermal capacity and
evaporation additionally responds to solar insolation, plant moisture availability and surface wetness. The estimation of surface evaporation is further complicated in computer models because the regulating factors often vary over a wide range within the scale area of computation. Simple averaging is inadequate because of the non-linear relationships involved. The magnitude of global precipitation provides a suitable closure condition for estimating global evaporation, but this does not assist in formulating methodologies for estimating the spatial and temporal variation across the ocean and land surfaces. The highlighted difficulties of estimating evaporation are compounded in the estimation of the rate of change of evaporation with surface temperature. It is clear, however, that it is the rate of change of evaporation (and latent heat exchange; see equations 3, 4 and 5) with surface temperature change that is fundamentally important for estimating the magnitude of the global surface temperature response to greenhouse gas forcing.

CONCLUSION

Carbon dioxide is a greenhouse gas and interacts with the Earth's infrared radiation, both the emission to space and the back radiation at the surface. Contrary to popular explanations, it is not the reduction in radiation to space across the CO2 bands that is important for enhancing the greenhouse effect; it is the increase in back radiation at the surface that is important, because it directly leads to an adjustment of the surface temperature. An increase in the concentration of CO2 will enhance the greenhouse effect, but the magnitude remains controversial. Water vapour is important in regulating the magnitude of the enhanced greenhouse effect in two ways: increased water vapour in the atmosphere has an amplifying effect on the CO2 forcing because it further increases the back radiation as temperature rises; and, more importantly, any increased evaporation and latent heat exchange between the surface and atmosphere constrains the surface temperature rise. It is the evaporation that is dominant because (1) the Earth's surface is more than 70 percent ocean and much of the remainder is covered by transpiring vegetation; and (2) the rate of increase of evaporation with temperature approximately follows the Clausius-Clapeyron relationship, nearly doubling with each 10°C temperature rise.

A doubling of CO2 concentration by the end of the century from current levels is expected to cause a modest global temperature rise not exceeding 1°C. The computer models on which the IPCC based its fourth assessment projections have been shown to significantly underestimate the rate of increase of evaporation with temperature. The indicative analysis presented here suggests that projections of global temperature made by these contemporary computer models are nearly an order of magnitude too large. As a consequence, a better representation of evaporation and surface latent heat exchange in computer models, particularly the important response to surface temperature, is a primary requirement if the uncertainty about anthropogenic global warming is to be reduced. Without this improvement the projected temperature response to anthropogenic forcing will continue to be exaggerated. It is also evident that suggestions of Earth passing a 'tipping point' temperature, and even going into a phase of 'runaway global warming', are an outcome of flawed computer models and do not represent a realistic future scenario. The extensive oceans
and the hydrological cycle are a natural constraint on global temperature, and dangerous anthropogenic global warming is not a feasible outcome.
ON THE OBSERVATIONAL DETERMINATION OF CLIMATE SENSITIVITY AND ITS IMPLICATIONS
RICHARD S. LINDZEN AND YONG-SANG CHOI
Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA

PREFATORY REMARKS

The following paper is, indeed, significantly different from what was presented in Erice. Both the original version of this paper and the published paper that the present work is an expansion on have been widely circulated, and there has been a very helpful response that has led us to examine all aspects of the original paper more carefully. The result has led to some changes in analysis, some expanded explanations, and, finally, some changes in results, though the last has not been very great. The new analysis is, we think, much more robust and comprehensible. Among the items addressed are the compensation for the 36-day precession period of the ERBE satellite; the original ignoring of the fact that, in the observations, it was necessary to distinguish radiation changes that resulted from surface temperature changes from both noise and those radiation changes that forced the temperature changes; the use of a more reasonable zero-feedback flux; and the undue smoothing of the time series for short wave outgoing radiation. We have also added arguments concerning the concentration of feedbacks in the tropics.

INTRODUCTION

It is usually claimed that the heart of the global warming issue is so-called greenhouse warming. This simply refers to the fact that the earth balances the heat received from the sun (mostly in the visible spectrum) by radiating in the infrared portion of the spectrum back to space. Gases that are relatively transparent to visible light but strongly absorbent in the infrared (greenhouse gases) will interfere with the cooling of the planet, thus forcing it to become warmer in order to emit sufficient infrared radiation to balance the net incoming sunlight. By the net incoming sunlight, we mean that portion of the sun's radiation that is not reflected back to space by clouds and the earth's surface. The issue then focuses on a particular greenhouse gas, carbon dioxide. Although carbon dioxide is a relatively minor greenhouse gas, it has increased significantly since the beginning of the industrial age, from about 280 ppmv to about 390 ppmv, and it is widely accepted that this increase is primarily due to man's emissions. However, it is also widely accepted that the warming from a doubling of carbon dioxide would only be about 1°C (based on simple Planck black body calculations; it is also the case that a doubling of any concentration in ppmv produces the same warming because of the logarithmic dependence of carbon dioxide's absorption on the amount of carbon dioxide). This amount of warming is not considered catastrophic and, more importantly, is much less than current climate models suggest the warming from a doubling of carbon dioxide will be. The usual claim from the models is that a doubling of carbon dioxide will lead to warming of from 1.5°C to 5°C and even more. What then is really
fundamental to 'alarming' predictions? It is the 'feedback' within models from the much more important greenhouse substances, water vapor and clouds. Within all current climate models, water vapor increases with increasing temperature so as to further inhibit infrared cooling. Clouds also change so that their net effect, resulting from both their infrared absorptivity and their visible reflectivity, is to further reduce the net cooling of the earth. These feedbacks are still acknowledged to be highly uncertain, but the fact that they are strongly positive in most models is considered to be a significant indication that the result has to be basically correct. Methodologically, this is a most peculiar approach to such an important issue. In normal science, one would seek an observational test of the issue. As it turns out, it may be possible to test the issue with existing data from satellites, and there has recently been a paper (Lindzen and Choi, 2009) that attempted this, though, as we will show in this paper, the details of that paper were, in important ways, incorrect. The present paper attempts to correct the approach and arrives at similar conclusions.

FEEDBACK FORMALISM

A little bit of simple theory shows how one can go about doing this. In the absence of feedbacks, the behavior of the climate system can be described by the following illustration.
Fig. 1. A schematic for the behavior of the climate system in the absence of feedbacks. ΔQ is the radiative forcing, G₀ is the zero-feedback response function of the climate system, and ΔT₀ is the response of the climate system in the absence of feedbacks. The checkered circle is a node.

Figure 1 symbolizes the temperature increment, ΔT₀, that a forcing increment, ΔQ, would produce with no feedback,

    ΔT₀ = G₀ ΔQ    (1)
It is generally accepted (Hartmann, 1994) that, without feedback, a doubling of carbon dioxide will cause a forcing of ΔQ ≈ 3.7 Wm⁻² (due to the black body response), and will increase the temperature by ΔT₀ ≈ 1.1°C (Schwartz, 2007). We therefore take the zero-feedback response function of (1) to be G₀ ≈ 0.3 (= 1.1/3.7) °C W⁻¹ m². With feedback, Figure 1 is modified to
Fig. 2. A schematic for the behavior of the climate system in the presence of feedbacks.
The response is now

    ΔT = G₀ (ΔQ + F ΔT)    (2)
Here F is a feedback function that represents all changes in the climate system (for example, changes in cloud cover or humidity) that act to increase or decrease the feedback-free effects. Thus, F should not include the response to ΔT that is already incorporated into G₀. The choice of zero for the tropics in Lindzen and Choi (2009) is certainly incorrect in this respect. At present, the best choice seems to remain 1/G₀ (3.3 Wm⁻²°C⁻¹) (Colman, 2003; Schwartz, 2007), though a lower value than this might be appropriate due to the high opacity of greenhouse gases. Solving (2) for the temperature increment ΔT, we find

    ΔT = ΔT₀ / (1 − f)    (3)
The dimensionless feedback fraction is f = F G₀. From Figure 2, the relation of the change in flux, ΔFlux, to the change in temperature is given by
    ΔFlux − ZFB = −(f/G₀) ΔT    (4)
The quantities on the left side of the equation indicate the amount by which feedbacks supplement the zero-feedback response to ΔQ (ZFB). At this point, it is crucial to recognize that our equations, thus far, are predicated on the assumption that the ΔT to which the feedbacks are responding is that produced by ΔQ. Physically, however, any fluctuation in ΔT should elicit the same flux regardless of the origin of ΔT. When looking at the observations, we emphasize this by rewriting (4) as

    ΔFlux − ZFB = −(f/G₀) ΔSST    (5)
When restricting ourselves to tropical feedbacks, equation (5) is replaced by

    −G₀ (ΔFlux − ZFB) / ΔSST |tropics ≈ 2f    (6)
where the factor 2 results from the sharing of the tropical feedbacks over the globe, following the methodology of Lindzen, Chou and Hou (2001) (see Appendix 2 for more explanation). The longwave (LW) and shortwave (SW) contributions to f are given by

    f_LW = −(G₀/2) (ΔOLR − ZFB) / ΔSST |tropics    (7a)

    f_SW = −(G₀/2) (ΔSWR) / ΔSST |tropics    (7b)
Here we can identify ΔFlux as the change in outgoing longwave radiation (OLR) and shortwave radiation (SWR) measured by satellites associated with the measured ΔSST, the change of the sea-surface temperature. Since we know the value of G₀, the experimentally determined slope allows us to evaluate the magnitude and sign of the feedback factor f, provided that we also know the value of the zero-feedback flux. Note that the natural forcing, ΔSST, that can be observed is different from the equilibrium response temperature ΔT in Eq. (3). The latter cannot be observed since, for the short intervals considered, the system cannot be in equilibrium, and over the longer periods needed for equilibration of the whole climate system, ΔFlux at the top of the atmosphere is restored to zero. Indeed, as explained in Lindzen and Choi (2009), it is in fact essential that the time intervals considered be short compared to the time it takes for the system to equilibrate, while long compared to the time scale on which the feedback processes operate (which are essentially the time scales associated with cumulonimbus convection). The latter is on the order of days, while the former depends on the climate sensitivity, and ranges from years for sensitivities of 0.5°C for a doubling of CO2 to many decades for higher sensitivities (Lindzen and Giannitsis, 1998). Finally, for observed variations, there is the fact that changes in radiation (as, for example, associated with volcanoes) can cause changes in SST as well as respond to changes in SST, and there is a need to distinguish these two possibilities. This is not an issue with model results from the AMIP program, where observed variations in SST are specified. Of course, there is always the problem of noise arising from the fact that clouds depend on factors other than surface temperature.
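To make the use of equations (3), (6), (7a) and (7b) explicit, the short sketch below converts tropical regression slopes into feedback factors and an equilibrium warming for doubled CO2. The slope values in the example call are placeholders for illustration only, not the measured results reported later.

    # Feedback factors and equilibrium warming from tropical dFlux/dSST slopes,
    # following equations (3), (6), (7a) and (7b).
    G0 = 0.3          # zero-feedback response, C per (W m-2), = 1.1/3.7
    ZFB = 1.0 / G0    # zero-feedback flux response, about 3.3 W m-2 per C
    DT0 = 1.1         # zero-feedback warming for doubled CO2, C

    def tropical_feedback_factors(slope_lw, slope_sw):
        """Slopes are tropical d(flux)/d(SST) in W m-2 per C; the factor of 2
        shares the tropical feedback over the globe (equation 6)."""
        f_lw = -(G0 / 2.0) * (slope_lw - ZFB)   # equation (7a)
        f_sw = -(G0 / 2.0) * slope_sw           # equation (7b): no zero-feedback term in SW
        return f_lw, f_sw

    def equilibrium_warming(f_total):
        """Equilibrium warming for doubled CO2 from equation (3)."""
        return DT0 / (1.0 - f_total)

    # Placeholder slopes (W m-2 per C), purely for illustration:
    f_lw, f_sw = tropical_feedback_factors(slope_lw=5.0, slope_sw=2.0)
    print(f_lw, f_sw, equilibrium_warming(f_lw + f_sw))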
THE DATA AND ITS PROBLEMS
Fig. 3: Tropical mean (20°S to 20°N latitude) 36-day averaged and monthly sea surface temperature anomalies with the centered 3-point smoothing; the anomalies are referenced to the monthly means for the period of 1985 through 1989. The SST anomaly was scaled by a factor of 0.78 (the area fraction of ocean in the tropics) to relate it to the flux. Red and blue colors indicate the major temperature fluctuations exceeding 0.1°C.

Now, it turns out that sea surface temperature is measured (Kanamitsu et al. 2002), and is always fluctuating, as we see from Figure 3. High-frequency fluctuations, however, make it difficult to objectively identify the beginning and end of warming and cooling intervals (Trenberth et al. 2010). This ambiguity is eliminated with a 3-point centered smoother. (A two-point lagged smoother works as well.) In addition, the net outgoing radiative flux from the earth has been monitored since 1985 by the ERBE satellite, and since 2000 by the CERES instrument aboard the Terra satellite (Wielicki et al. 1998). The results for both long wave (infrared) radiation and short wave (visible) radiation are shown in Figure 4. The sum is the net flux. With ERBE data there is, however, the problem of satellite precession with a period of 36 days. In Lindzen and Choi (2009), which used ERBE data, we attempted to avoid this problem (which is primarily of concern for the short wave radiation) by smoothing the data over 7 months. It has been suggested (Takmeng Wong, personal communication) that this is excessive smoothing. In the present paper, we start by taking 36-day means rather than monthly means. The CERES instrument is flown on a sun-synchronous satellite for which there is no problem with precession. Thus, for the CERES instrument we use the conventional months. However, here too we examine the effect of modest smoothing. Both ERBE and CERES data are best for the tropics. The ERBE field-of-view is between 60°S and 60°N. For latitudes 40° to 60°, 72 days are required instead of 36 days to reduce the precession effect (Wong et al. 2006). Both data sets have no or negligible shortwave radiation in winter hemispheric high latitudes, which would compromise our analysis. Moreover, our analysis involves relating changes in outgoing flux to changes in SST. This is appropriate to regions that are mostly ocean covered, like the tropics or the
southern hemisphere, but distinctly inappropriate to the northern extratropics. However, as we will argue in an appendix, the water vapor feedback is almost certainly restricted primarily to the tropics, and there are reasons to suppose that this is also the case for cloud feedbacks. The methodology developed in Lindzen, Chou, and Hou (2001) permits the easy extension of the tropical processes to global values. Finally, there will be a serious issue concerning distinguishing atmospheric phenomena involving changes in outgoing radiation that result from processes other than feedbacks (the Pinatubo eruption, for example) and which cause changes in sea surface temperature, from those that are caused by changes in sea surface temperature (namely the feedbacks we wish to evaluate). Our admittedly crude approach to this is to examine the effect of considering fluxes with time lags and leads relative to temperature changes. The lags examined are from one to five months. The discussion is in Section 4.
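The 3-point centered smoothing and the lead/lag pairing of flux and SST described here can be written compactly. The sketch below uses numpy and synthetic series standing in for the actual anomaly records; it is only meant to make the procedure explicit.

    import numpy as np

    def centered_smooth(x):
        """3-point centered running mean; the two end points are left unsmoothed."""
        y = np.asarray(x, dtype=float).copy()
        y[1:-1] = (y[:-2] + y[1:-1] + y[2:]) / 3.0
        return y

    def lag_pairs(flux, sst, lag):
        """Pair flux at time i+lag with SST at time i. lag > 0 means the flux lags
        the SST change (emphasising the response); lag < 0 means the flux leads it
        (emphasising forcing by the flux)."""
        flux, sst = np.asarray(flux), np.asarray(sst)
        if lag >= 0:
            return flux[lag:], sst[:len(sst) - lag]
        return flux[:lag], sst[-lag:]

    # Synthetic stand-ins for 36-day tropical-mean SST and flux anomalies.
    rng = np.random.default_rng(0)
    sst = centered_smooth(0.2 * rng.standard_normal(240))
    flux = centered_smooth(3.0 * sst + rng.standard_normal(240))
    flux_lag1, sst_lag1 = lag_pairs(flux, sst, lag=1)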
Fig. 4: The same as Figure 3 but for outgoing longwave (red) and reflected shortwave (blue) radiation from the ERBE and CERES satellite instruments. 36-day averages are used to compensate for the ERBE precession. Missing periods are the same as reported in Wong et al. (2006).

Turning to the models, the AMIP (atmospheric model intercomparison project) program, which is responsible for intercomparing models used by the IPCC (the Intergovernmental Panel on Climate Change), has obtained the calculated changes in both short and long wave radiation from models forced by the observed sea surface temperatures shown in Figure 3. These results are shown in Figures 5 and 6, where the observed results are also plotted for comparison. We can already see that there are
significant differences. Note that it is important to use the AMIP results rather than those from the coupled atmosphere-ocean models (CMIP). Only for the former can we see the results for the same SST as applies to the ERBE/CERES observations. Moreover, in the AMIP results we are confident that the temperatures are forcing the changes in outgoing radiation; in the coupled models it is more difficult to be sure that we are calculating outgoing fluxes that are responding to SST forcing rather than to temperature perturbations resulting from independent fluctuations in radiation.
Fig. 5: Comparison of outgoing longwave radiation from AMIP models (black) and the observations (red) as found in Figure 4.
Fig. 6: Comparison of reflected shortwave radiation from AMIP models (black) and the observations (blue) shown in Figure 4.
CALCULATIONS

With all the above readily available, it is now possible to directly test the ability of models to adequately simulate the sensitivity of climate. The procedure is simply to identify intervals of change for ΔSST in Figure 3 (for reasons we will discuss at the end, it is advisable, but not essential, to restrict oneself to changes greater than 0.1°C), and for each such interval, to find the change in flux. Let us define i₁, i₂, ..., i_m as selected time steps that correspond to the starting and ending points of the intervals. ΔFlux/ΔSST can basically be obtained as Flux(i₁) − Flux(i₂) divided by SST(i₁) − SST(i₂). As there are many intervals, ΔFlux/ΔSST is a regression slope for the plots (ΔFlux, ΔSST) under a linear regression model. Here we use a zero y-intercept model (y = ax) because a non-zero y-intercept would be related to noise other than feedbacks. Thus, the zero y-intercept model may be
more appropriate for the purpose of our feedback analysis; however, the choice of regression model turns out to be of minor importance. As already noted, the data need to be smoothed to minimize noise, and it is also crucial to distinguish ΔSST that are forcing changes in ΔFlux from those that are responses to ΔFlux. Otherwise, ΔFlux/ΔSST can vary (Trenberth et al. 2010) and/or may not represent the feedbacks that we wish to determine. As an attempt to avoid such problems, though imperfectly, we need to consider smoothing (i.e., use of Flux'(i) and SST'(i), where the prime designates the smoothed value) and lag-lead methods (e.g., use of Flux'(i+lag) and SST'(i)) for the ERBE 36-day and CERES monthly data. For a stable estimate of ΔFlux/ΔSST, the time steps i should also be selected at the maxima and minima of the smoothed SST. As shown in Figure 3, this study selected intervals for which SST'(i₁) − SST'(i₂) exceeds 0.1 K. The impact of the threshold for ΔSST on the statistics of the results is minor. Figure 7 shows the impact of smoothing and leads and lags on the determination of the slope as well as on the correlation, R, of the linear regression.
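A minimal version of the interval-and-slope procedure just described might look like the sketch below: interval endpoints are taken at turning points of the smoothed SST, interval changes smaller than the 0.1 K threshold are discarded, and the slope is the zero-y-intercept least-squares fit of the flux changes on the SST changes. The helper names are illustrative and not the authors' code.

    import numpy as np

    def interval_endpoints(sst_smooth):
        """Indices of the turning points (local maxima/minima) of the smoothed SST."""
        d = np.diff(sst_smooth)
        turning = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1
        return np.concatenate(([0], turning, [len(sst_smooth) - 1]))

    def flux_sst_slope(flux_smooth, sst_smooth, threshold=0.1):
        """Zero-intercept regression slope of interval flux changes on SST changes."""
        idx = interval_endpoints(sst_smooth)
        d_sst = np.diff(sst_smooth[idx])
        d_flux = np.diff(flux_smooth[idx])
        keep = np.abs(d_sst) > threshold                  # drop small SST excursions
        # least-squares slope through the origin: a = sum(x*y) / sum(x*x)
        return np.sum(d_sst[keep] * d_flux[keep]) / np.sum(d_sst[keep] ** 2)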
Fig. 7: The impact of smoothing and leads and lags on the determination of the slope (top) as well as on the correlation coefficient, R, of the linear regression (bottom).

In general, the use of leads for the flux will emphasize forcing by the fluxes, and the use of lags will emphasize responses by the fluxes to changes in SST. For LW radiation, the situation is fairly simple. Smoothing increases R somewhat, and for 3-point symmetric smoothing, R maximizes for a slight lag or zero, consistent with the fact that
feedbacks are expected to result from fast processes. Maximum slope is found for a lag of 1 'month', though it should be remembered that the relevant feedback processes may operate on a time scale shorter than we resolve. The situation for SW radiation is, not surprisingly, more complex, since phenomena like the Pinatubo eruption lead to increased light reflection and associated cooling of the surface (there is also the obvious fact that many things can cause fluctuations in clouds, which leads to noise). We see two extremes associated with changing lead/lag. There is a maximum negative slope associated with a brief lead, and a relatively large positive slope associated with a 3-4 month lag. It seems reasonable to suppose that the effect of forcing extends into the results at small lags, and is only overcome for larger lags, where the change in flux associated with feedback dominates. Indeed, excluding the case of the Pinatubo volcano for larger lags does little to change the results (less than 0.3 Wm⁻²/K). Under such circumstances, we expect the maximum slope for SW radiation in Figure 7 to be an underestimate of the actual feedback. Also shown in Figure 7 is the behavior of the total flux. However, given the different behavior of long and short wave fluxes, it is probably more appropriate to take the maximum slopes for both LW and SW radiation associated with lags to be more indicative of the feedbacks. (Thus, for example, the SW slope for Lag = 1 is clearly inappropriate for feedback.) We also consider the standard error of the slope to show the data uncertainty. The results are shown in Table 1.

Table 1. Mean ± standard error of the variables. Also shown are the estimated mean and range of climate sensitivity for 90% and 95% confidence levels.
    Variables            Lag = 1     Lag = 2     Lag = 3     Likely      Comments for likely lag
a   Slope, LW            5.2±1.3     4.5±1.7     2.6±1.3     5.2±1.3     Lag = 1
b   Slope, SW            -2.7±2.9    0.5±4.2     2.2±3.0     2.2±3.0     Lag = 3
c   Slope, Total         2.5±2.4     5.0±3.1     4.8±2.5     7.1±2.2     = a+b for the same SST interval
d   f_LW                 -0.3±0.2    -0.2±0.3    0.1±0.2     -0.3±0.2    Calculated from a
e   f_SW                 0.4±0.4     -0.1±0.6    -0.3±0.5    -0.3±0.4    Calculated from b
f   f_Total              0.1±0.4     -0.2±0.5    -0.2±0.4    -0.6±0.3    Calculated from c
g   Sensitivity, mean    1.3         0.9         0.9         0.7         Calculated from f
h   Sensitivity, 90%     0.8-3.8     0.5-2.3     0.6-1.8     0.5-1.1     Calculated from f
i   Sensitivity, 95%     0.7-6.7     0.5-3.3     0.6-2.2     0.5-1.2     Calculated from f
The standard error of the slope in total radiation for the likely lag comes from the regression for scatter plots of (ΔSST, Δ(OLR + SWR)). As we see in Table 1, the model sensitivities indicated by the IPCC AR4 (Figure 8) are likely greater than the possibilities estimated from observation (except for the clearly inappropriate Lag = 1 case).
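The sensitivity rows of Table 1 follow from the feedback factors through ΔT = 1.1/(1 − f). The short calculation below roughly reproduces the 'likely' column; the Gaussian 1.64 and 1.96 multipliers for the 90% and 95% intervals are an assumption about how the quoted ranges were formed, so the reproduction is only approximate. The last line illustrates the point made later in connection with Figure 9: for an assumed model-like positive f the same procedure gives an enormously wider range.

    DT0 = 1.1   # zero-feedback warming for doubled CO2, C

    def sensitivity(f):
        return DT0 / (1.0 - f)

    def sensitivity_range(f, se, z):
        """Map f +/- z*se into a (low, high) range of equilibrium warming."""
        return sensitivity(f - z * se), sensitivity(f + z * se)

    f_likely, se_likely = -0.6, 0.3                      # 'likely' total feedback, Table 1
    print(sensitivity(f_likely))                         # about 0.7 C
    print(sensitivity_range(f_likely, se_likely, 1.64))  # roughly 0.5 to 1.0 C
    print(sensitivity_range(f_likely, se_likely, 1.96))  # roughly 0.5 to 1.1 C
    print(sensitivity_range(0.7, 0.15, 1.64))            # assumed model-like f: about 2 to 20 C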
Fig. 8: Equilibrium climate sensitivity of 11 AMIP models.

We next wish to see whether the outgoing fluxes from the AMIP models are consistent with the sensitivities in Figure 8. For the AMIP results, for which there was no ambiguity as to whether fluxes constituted a response, there was little dependence on smoothing or lag, so we simply used the AMIP fluxes without smoothing or lag. The results are shown in Table 2.

Table 2. LW, SW, and total feedbacks in AMIP models.

                         LW                            SW                            LW+SW
Model              N    Slope   R     SE    f_LW      Slope   R     SE    f_SW      Slope   R     SE    f
CCSM3              19   1.5     0.4   1.8   0.3       -3.1    -0.5  2.2   0.5       -1.6    -0.3  2.7   0.7
ECHAM5/MPI-OM      18   2.8     0.6   1.7   0.1       -1.1    -0.2  3.1   0.2       1.7     0.3   3.0   0.2
FGOALS-g1.0        18   -0.2    -0.1  1.6   0.5       -2.8    -0.7  1.3   0.4       -3.0    -0.7  1.6   1.0
GFDL-CM2.1         18   1.5     0.6   1.0   0.3       -0.4    -0.1  2.8   0.1       1.1     0.2   2.5   0.3
GISS-ER            22   2.9     0.6   1.4   0.1       -3.3    -0.5  2.3   0.5       -0.5    -0.1  1.8   0.6
INM-CM3.0          24   2.9     0.6   1.5   0.1       -3.1    -0.6  1.7   0.5       -0.3    -0.1  1.9   0.5
IPSL-CM4           22   -0.4    -0.1  2.1   0.6       -2.6    -0.5  2.0   0.4       -3.0    -0.5  2.1   0.9
MRI-CGCM2.3.2      22   -1.1    -0.2  2.2   0.7       -3.9    -0.4  3.1   0.6       -5.0    -0.6  2.6   1.2
MIROC3.2(hires)    22   0.7     0.1   2.2   0.4       -2.1    -0.5  1.6   0.3       -1.4    -0.3  2.5   0.7
MIROC3.2(medres)   22   4.4     0.7   1.8   -0.2      -5.3    -0.7  2.3   0.8       -0.9    -0.2  1.9   0.6
UKMO-HadGEM1       19   5.2     0.7   2.2   -0.3      -5.9    -0.7  2.1   0.9       -0.8    -0.1  2.2   0.6
In contrast to the observed fluxes, the implied feedbacks in the models are all positive, and in one case marginally unstable. Given the uncertainties, however, one should not take that too seriously. Table 3 compares the sensitivities implied by Table 2 with those in Figure 8.

Table 3. Comparison of model sensitivities from IPCC AR4 and from feedback factors estimated in this study.

Models              AR4 sensitivity    Sensitivity from f
INM-CM3.0           2.1                2.4
FGOALS-g1.0         2.3                22.4
CCSM3               2.7                4.3
GISS-ER             2.7                2.5
MRI-CGCM2.3.2       3.2                Infinite
ECHAM5/MPI-OM       3.4                1.4
GFDL-CM2.1          3.4                1.6
MIROC3.2(medres)    4.0                3.0
MIROC3.2(hires)     4.3                3.8
IPSL-CM4            4.4                19.5
UKMO-HadGEM1        4.4                2.8
The agreement does not seem notable. However, this is to be expected on at least two counts. First, the AMIP determinations of sensitivity were rough. They involved replacing the specified SST with slab oceans, and running the models with doubled CO2 for a fixed time. The various models used slabs with different heat capacities. Moreover, the time scale for approach to equilibrium depends on the sensitivity (Lindzen and Giannitsis, 1998). Second, for positive feedbacks, sensitivity is strongly affected by small changes in f of the size associated with the standard errors in Table 2. This is seen in Figure 9 in the pink region.

Fig. 9: Response as a function of the total feedback factor.
It has, in fact, been suggested by Roe and Baker (2007) that this sensitivity is why there has been no change in the range of sensitivities indicated by GCMs since the 1979 Charney Report. By contrast, in the green region, which corresponds to the observed feedback factors, the sensitivity is notably much less.

Since our analysis of the data only demands relative instrumental stability over short periods, it is difficult to see what data problems might change our results significantly. The addition of CERES data to the ERBE data used by Lindzen and Choi (2009) certainly does little to change their results concerning ΔFlux/ΔSST, except that its value is raised a little (this is also true when the CERES data alone are used). The inescapable conclusion is that all current models seem to exaggerate climate sensitivity (some greatly), and that current concerns are exaggerated as well. It also suggests, incidentally, that in current coupled atmosphere-ocean models the atmosphere and ocean are too weakly coupled, since thermal coupling is inversely proportional to sensitivity (Lindzen and Giannitsis, 1998). It has been noted by Newman et al. (2009) that coupling is crucial to the simulation of phenomena like El Niño. Thus, corrections of the sensitivity of current climate models might well improve the behavior of coupled models.

It should also be noted that there have been independent tests that also suggest sensitivities less than predicted by current models (Lindzen and Giannitsis, 1998, based on the response time to sequences of volcanic eruptions; Lindzen, 2007, and Douglass et al. 2007, both based on the vertical structure of observed versus modeled temperature increase; and Schwartz, 2007, 2008, based on ocean heating). (Lindzen and Giannitsis, 1998, also noted that the responses to individual volcanoes in the two years following eruption were largely independent of sensitivity and, hence, of little use for distinguishing different sensitivities.) Most claims of greater sensitivity are based on the models that we have just shown can be highly misleading on this matter. There have been attempts to infer sensitivity from paleoclimate data (Hansen, 1993), but these are not really tests, since the forcing is essentially unknown and may be adjusted to produce any sensitivity one wishes. Thus, the existing evidence is that climate sensitivity is low.

It is important to realize that climate sensitivity is essentially a single number. Economists who treat climate sensitivity as a probability distribution function (Weitzman, 2009; Stern, 2008; Sokolov et al. 2009) are mistakenly confusing model uncertainty concerning this particular number with the existence of a real range of possibility. The high-sensitivity results that these studies rely on for claiming that catastrophes are possible are almost totally incompatible with the present results, despite the uncertainty of the present results.

One final point needs to be made. Low sensitivity of the global mean temperature anomaly to global-scale forcing does not imply that major climate change cannot occur. The earth has, of course, experienced major cool periods, such as those associated with ice ages, and warm periods, such as the Eocene (Crowley and North, 1991). As noted, however, in Lindzen (1993), these episodes were primarily associated with changes in the equator-to-pole temperature difference and spatially heterogeneous forcing. Changes in global mean temperature were simply the residue of such changes and not the cause.
It is worth noting that current climate GCMs have not been very successful in simulating these changes in past climate.
APPENDICES

Appendix 1. Origin of Feedbacks.

While the present analysis is a direct test of feedback factors, it does not provide much insight into detailed mechanisms. Nevertheless, separating the contributions to f into long wave and short wave contributions provides some interesting insights. The results are shown in Tables 1 and 2. It should be noted that the consideration of the zero-feedback response, and of the tropical feedback factor being half of the global feedback factor, is actually necessary for our measurements from the tropics; however, these were not considered in Lindzen and Choi (2009). Accordingly, with respect to separating longwave and shortwave feedbacks, the interpretation by Lindzen and Choi (2009) needs to be corrected. These tables show recalculated feedback factors in the presence of the zero-feedback Planck response. The negative feedback from the observations comes from both longwave and shortwave radiation, while the positive feedback from the models is usually, but not always, from the longwave feedback. As concerns the infrared, there is, indeed, evidence for a positive water vapor feedback (Soden et al. 2005), but, if this is true, this feedback is presumably cancelled by a negative infrared feedback such as that proposed by Lindzen et al. (2001) in their paper on the iris effect. In the models, on the contrary, the long wave feedback appears to be positive (except in two models), but it is not as great as expected from the water vapor feedback (Colman, 2003; Soden et al. 2005). This is possible because the so-called lapse rate feedback, as well as a negative longwave cloud feedback, serves to cancel the TOA OLR feedback in current models. Table 2 implies that the TOA longwave and shortwave contributions are coupled in the models (the correlation coefficient between f_LW and f_SW from the models is about -0.5). This coupling is most likely associated with the primary clouds in models, optically thick high-top clouds (Webb et al. 2006). In most climate models, the feedbacks from these clouds are simulated to be negative in the longwave and strongly positive in the shortwave, and they dominate the entire cloud feedback (Webb et al. 2006). Therefore, the cloud feedbacks may also serve to contribute to the negative OLR feedback and the positive SWR feedback. New spaceborne data from the CALIPSO lidar (CALIOP; Winker et al. 2007) and the CloudSat radar (CPR; Im et al. 2005) should provide a breakdown of cloud behavior with altitude, which may give some insight into what exactly is contributing to the radiation.

Appendix 2. Concentration of climate feedbacks in the tropics.

Although, in principle, climate feedbacks may arise from any latitude, there are substantive reasons for supposing that they are, indeed, concentrated in the tropics. The most prominent model feedback is that due to water vapor, where it is commonly noted that models behave as though relative humidity were fixed. Pierrehumbert (2009) examined outgoing radiation as a function of surface temperature theoretically for atmospheres with constant relative humidity. His results are shown in Figure 10.
Fig. 10: OLR vs. surface temperature for water vapor in air, with relative humidity held fixed. The surface air pressure is 1 bar, and Earth gravity is assumed. The temperature profile is the water/air moist adiabat. Calculations were carried out with the ccm radiation model.

We see that for extratropical conditions, outgoing radiation closely approximates the Planck black body radiation (leading to small feedback). However, for tropical conditions, increases in outgoing radiation are suppressed, implying substantial positive feedback. There are also good reasons to suppose that cloud feedbacks are largely confined to the tropics. In the extratropics, clouds are mostly stratiform clouds associated with ascending air, while descending regions are cloud-free. Ascent and descent are largely determined by the large-scale wave motions that dominate the meteorology of the extratropics, and for these waves we expect approximately 50% cloud cover regardless of temperature. On the other hand, in the tropics, upper-level clouds, at least, are mostly determined by detrainment from cumulonimbus towers, and cloud coverage is observed to depend significantly on temperature (Rondanelli and Lindzen, 2008). As noted by Lindzen et al. (2001), with feedbacks restricted to the tropics, their contribution to global sensitivity results from sharing the feedback fluxes with the extratropics. This leads to the factor of 2 in Eq. (6).

ACKNOWLEDGEMENTS

This research was supported by DOE grant DE-FG02-01ER63257. The authors thank the NASA Langley Research Center and the PCMDI team for the data, and William Happer, Lubos Motl, Takmeng Wong, Roy Spencer and Richard Garwin for helpful suggestions. We also wish to thank Dr. Daniel Kirk-Davidoff for a helpful question.

REFERENCES
1. Charney, J.G. et al. (1979) Carbon Dioxide and Climate: A Scientific Assessment, National Research Council, Ad Hoc Study Group on Carbon Dioxide and Climate, National Academy Press, Washington, DC, 22pp.
2. Colman, R. (2003) "A comparison of climate feedbacks in general circulation models." Climate Dyn., 20:865-873.
3. Crowley, T.J. and G.R. North (1991) Paleoclimatology, Oxford Univ. Press, NY, 339pp.
4. Douglass, D.H., J.R. Christy, B.D. Pearson, and S.F. Singer (2007) "A comparison of tropical temperature trends with model predictions," Int. J. Climatol., DOI: 10.1002/joc.1651.
5. Hansen, J., A. Lacis, R. Ruedy, M. Sato, and H. Wilson (1993) "How sensitive is the world's climate?," Natl. Geogr. Res. Explor., 9:142-158.
6. Hartmann (1994) Global Physical Climatology, Academic Press, 411pp.
7. Im, E., S.L. Durden, and C. Wu (2005) "Cloud profiling radar for the CloudSat mission," IEEE Trans. Aerosp. Electron. Syst., 20:15-18.
8. Kanamitsu, M., W. Ebisuzaki, J. Woolen, S.K. Yang, J.J. Hnilo, M. Fiorino, and J. Potter (2002) "NCEP/DOE AMIP-II Reanalysis (R-2)." Bull. Amer. Met. Soc., 83:1631-1643.
9. Lindzen, R.S. (1988) "Some remarks on cumulus parameterization." PAGEOPH, 16:123-135.
10. Lindzen, R.S. (1993) "Climate dynamics and global change." Ann. Rev. Fl. Mech., 26:353-378.
11. Lindzen, R.S. (2007) "Taking greenhouse warming seriously." Energy & Environment, 18:937-950.
12. Lindzen, R.S., M.-D. Chou, and A.Y. Hou (2001) "Does the Earth have an adaptive infrared iris?" Bull. Amer. Met. Soc., 82:417-432.
13. Lindzen, R.S., and C. Giannitsis (1998) "On the climatic implications of volcanic cooling." J. Geophys. Res., 103:5929-5941.
14. Lindzen, R.S., and Y.-S. Choi (2009) "On the determination of climate feedbacks from ERBE data," Geophys. Res. Lett., 36:L16705.
15. Newman, M., P.D. Sardeshmukh, and C. Penland (2009) "How important is air-sea coupling in ENSO and MJO evolution?" J. Climate, 22:2958-2977.
16. Pierrehumbert, R.T. (2009) Principles of Planetary Climate, available online at http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateBook.html.
17. Roe, G.H., and M.B. Baker (2007) "Why is climate sensitivity so unpredictable?," Science, 318:629.
18. Schwartz, S.E. (2007) "Heat capacity, time constant, and sensitivity of Earth's climate system," J. Geophys. Res., 112:D24S05.
19. Schwartz, S.E. (2008) Reply to comments by G. Foster et al., R. Knutti et al. and N. Scafetta on "Heat capacity, time constant, and sensitivity of Earth's climate system," J. Geophys. Res., 113:D15195.
20. Soden, B.J., D.L. Jackson, V. Ramaswamy, M.D. Schwarzkopf, and X. Huang (2005) "The radiative signature of upper tropospheric moistening," Science, 310:841-844.
21. Sokolov, A.S., P.H. Stone, C.E. Forest et al. (2009) "Probabilistic forecast for 21st century climate based on uncertainties in emissions (without policy) and climate parameters," J. Climate, in press.
22. Stern, N. (2008) "The economics of climate change." American Economic Review: Papers & Proceedings, 98:1-37.
23. Trenberth, K.E., J.T. Fasullo, C. O'Dell, and T. Wong (2010) "Relationships between tropical sea surface temperature and top-of-atmosphere radiation," Geophys. Res. Lett., in press.
24. Webb, M.J., et al. (2006) "On the contribution of local feedback mechanisms to the range of climate sensitivity in two GCM ensembles," Clim. Dyn., 27:17-38.
25. Weitzman, M.L. (2009) "On modeling and interpreting the economics of catastrophic climate change." Review of Economics and Statistics, 91:1-19.
26. Wielicki, B.A. et al. (1998) "Clouds and the Earth's Radiant Energy System (CERES): Algorithm overview," IEEE Trans. Geosci. Remote Sens., 36:1127-1141.
27. Winker, D.M., W.H. Hunt, and M.J. McGill (2007) "Initial performance of CALIOP," Geophys. Res. Lett., 34:L19803.
TWO BASIC PROBLEMS OF SIMULATING CLIMATE FEEDBACKS

GARTH W. PALTRIDGE
Australian National University and University of Tasmania, Hobart, Australia

A doubling of the concentration of carbon dioxide (CO2) in the atmosphere will probably occur over the next hundred years or thereabouts because we are burning lots of fossil fuel. It is fairly easy to calculate the likely rise of global-average surface temperature caused by such a doubling, provided that we confine ourselves to the purely theoretical situation where nothing else is allowed to change. The answer is just over one degree Celsius, and it would take two or three hundred years to complete the change. The problem is that in the real world there are lots of other things happening that can 'feed back' on surface temperature. Some of them amplify and some of them reduce any change caused by an increase of carbon dioxide.

THE UNCERTAINTY OF FEEDBACKS IN CLIMATE MODELS

Imagine that the basic rise without feedbacks of global temperature from doubled CO2 is ΔT₀. Imagine as well that g₁, g₂, g₃ and so on are the actual values of the individual feedback 'gains' associated with each of the various atmospheric processes dependent on surface temperature. They may be positive or negative. That is, they may amplify or reduce the basic rise in temperature ΔT₀ associated with the increase of CO2. The total gain G of the overall system is simply the sum (g₁ + g₂ + g₃ + ...) of all the individual gains, and the actual temperature rise ΔT when all the feedbacks are allowed to operate is (roughly) the value of ΔT₀ divided by the factor (1 − G), as shown in the equation:
    ΔT = ΔT₀ / (1 − G)

Let us look at the individual feedback gains g₁, g₂ and so on that are associated with each of the major feedback processes typically built into numerical climate models. The accompanying diagram has vertical solid lines which indicate the range of each of the individual-process gains that can be found by looking over all the various models. Put another way, an individual model has a gain for a particular type of feedback which falls somewhere along the relevant vertical solid line. The information
is derived from a research paper by Sandrine Bony and many of her colleagues published late in 2006 in the Journal of Climate. The feedback processes are associated with responses to temperature change of water vapour (WV), cloud (Cl), albedo (i.e., the reflection, Re, of sunlight by the ground) and lapse rate (LR). Also on the figure is the spread (i.e., the vertical solid line) over all the models of the overall gain G. It ranges from about 0.4 to 0.8, and a bit of calculation with the equation given earlier suggests that the corresponding temperature range for doubled CO2 is about 2 to 6°C.

In principle there is no reason why any particular model's set of individual-process gains is more realistic than any other. So if all the individual processes were truly independent, there should be no reason why the spread of total gain could not be as large as that indicated by the vertical dotted line on the diagram. That is, the overall gain G could range from less than zero to something greater than one. Suffice it to say that the relatively narrow range of total gains displayed by the actual models (roughly 0.4 to 0.8) is fairly surprising, and must come about for one of two reasons. Either the individual process gains are physically correlated in some convenient way, or there has been some subconscious choice of process description to keep the total gains of the various models within physically realistic bounds. On the one hand, for instance, the research literature talks at some length about the correlation between water vapour and lapse rate. On the other, a lot of subconscious tuning effort goes into ensuring that climate models don't run off the rails of reasonableness.

BASIC PROBLEM 1

The main uncertainties concern the feedbacks of water vapour and cloud. Earth is hot because it absorbs a vast amount of solar energy. Because it is hot, it radiates infrared energy back to space. Because the amount of radiated infrared energy increases with temperature, and because in the end the overall system must settle down to a sort of steady state where things don't change much over the long term, the earth must adopt a temperature and a temperature distribution that ensure there is a close balance between absorption of solar energy and emission of infrared radiation. The major twist to the picture (in very simplistic terms) is that the infrared energy radiated to space is emitted both from the earth's surface (via windows in the absorption spectra of the atmosphere) and from the 'tops of the blankets' of carbon dioxide, water vapour and cloud in the atmosphere. The temperatures of the blanket tops are much less than that of the surface because they are high in the atmosphere. If we increase the concentration of atmospheric carbon dioxide, we effectively increase the thickness of the CO2 blanket. The top of the blanket becomes cooler because it is higher, and as a consequence it radiates less infrared energy to space. Since the amount of absorbed solar energy is more-or-less fixed if nothing else is happening at the time, infrared radiation from the earth's surface must increase to ensure that the total energy radiated to space continues to balance the solar energy input. In other words the surface temperature must rise. Note that it is what happens at the top of the blanket that governs the temperature rise at the surface.
The same story applies to water vapour and water vapour feedback. If increasing CO2 raises the surface temperature, then indeed one might reasonably expect the rate of evaporation of water from the surface to increase. The global rainfall will increase because it must balance the evaporation, and in the process the amount of water vapour in the atmosphere will increase. (This is not a certainty, but it seems reasonable.) The increase in water vapour should in turn increase the thickness of the water vapour blanket and should further increase the surface temperature, for much the same reason as with increasing CO2. Again it is what happens at the top of the blanket that matters. Provided the water vapour concentration goes up more or less proportionally at all heights in the atmosphere as the surface temperature rises, then indeed the water vapour feedback will be positive. It will amplify the temperature rise due to CO2 alone. The computer models of climate behave in just this way: they tend to maintain a constant relative humidity at each height as the temperature rises. On the other hand, it is perfectly possible that in the real world the small water vapour concentrations in the upper levels of the atmosphere (above about 3 or 4 km or thereabouts) could decrease as the larger concentrations in the lower levels increase. There are plausible physical reasons why this might be so. In which case the overall water vapour feedback would be negative and the original temperature rise from increasing CO2 would be reduced.

It is not well known, even among the cognoscenti of the climate research fraternity, that it is the behaviour of water vapour in the middle and upper troposphere which is by far the dominant control on whether water vapour feedback is positive or negative. And it is at just these levels that the world's fifty-year-long record of balloon measurements of atmospheric water vapour (the record at these levels shows a decrease of water vapour over the years) is the most doubtful. Some would say it is probably nonsense. It is at just these levels also that the world's 25-year-long record of satellite remote measurements (which show an increase of water vapour over part of the time) is the most doubtful. Some would say that it too is probably nonsense. Suffice it to say that at this time there exist no experimental data which are sufficiently accurate to settle the question of whether water vapour feedback is positive or negative.

Feedback associated with variation in cloud amount is even more of a problem. The scientific difficulty is that cloud reflects incoming solar radiation back to space and therefore tends to cool the world, this as well as behaving rather like the water vapour and carbon dioxide infrared blankets and tending to warm the earth because of its effect on outgoing infrared radiation. Which of the processes 'wins', that is, whether the overall effect of cloud is a cooling or a warming, depends on both the height and the character of the cloud concerned. And again, as for CO2 and water vapour, whether overall cloud feedback is positive or negative is very largely dependent on the exact height, character and distribution of cloud in the upper levels of the atmosphere. Just those levels, in fact, where satellites are hard put to make sufficiently accurate measurements.
So basic problem number 1 is that water vapour (and cloud-infrared) feedbacks are controlled by the conditions at altitudes where actual measurements are simply not accurate enough to verify model behaviour.
BASIC PROBLEM 2

The figure below shows examples of the 35-year trends in annual-average, zonal-average specific humidity at various levels in the tropical and mid-latitude atmospheres. The figure shows trends as they appear at face value from the NCEP re-analysis data. One has to say immediately that there are lots of problems associated with the balloon radiosonde humidity data, which are the only water-vapour input information to the NCEP assimilation model. As well, there are issues with the assimilation procedure itself. But one has to bear in mind that beggars can't be choosers. Satellite humidity data also have their problems. Other re-analysis schemes use satellite humidity data as they have become available, this as well as the basic set of radiosonde data. The combination of balloon and satellite data vastly complicates any attempt to estimate long-term trends.
[Figure: trend of q (% of 1973 value/year) at various levels of the tropical and mid-latitude atmospheres.]

The bottom line is that, according to the NCEP data set (and indeed to the raw radiosonde data in certain regions), the water vapour concentration in the middle and upper levels of the troposphere has decreased over the last 35 years as the earth's surface temperature has increased. If one could believe these upper-level trends, then water vapour feedback has been negative over the last 35 years. If the trends were real and continued into the future, water vapour feedback would halve rather than double the warming due to CO2. We won't go into all the 'ifs and buts' of the various sources of humidity data. We are concerned here only with the fact that the apparent long-term decrease in upper-level humidity q with increasing surface temperature T is of opposite sign to the correlation between the short-term variability of q and T. See, for instance, the 500 mb monthly-average mid-latitude specific humidities plotted as a function of monthly-average surface temperature in the final figure below. The point of the discussion is that the positive short-term correlation in the figure is not necessarily an indicator of a positive long-term correlation between q and T.
Put another way, basic problem number 2 is that long-term feedback need not be of the same sign as short-term feedback. Long-term trends may be determined by entirely different mechanisms from those operating over short time-scales. In the context of water vapour in the upper troposphere, for instance, a negative trend in the long term may result from increasing atmospheric stability in the lower atmosphere because of the gradual change in the vertical profile of radiative heating associated with a gradually increasing concentration of CO2. The problem is important in a general sense because the successful simulation by climate models of the positive short-term correlation between q and T (or between some other pair of variables) is often used as the main argument to support their predictions of long-term positive feedback. A bottom line of this discussion is that one should not 'write off' balloon observations (or any other observations) which suggest negative water vapour feedback simply on the basis that models predict the opposite. The models are normally tuned (or are at least qualitatively verified) by reference to short-term variability.
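The logical point, that a positive short-term q-T correlation says nothing about the sign of the long-term relationship, can be illustrated with purely synthetic numbers; nothing below represents any observed record.

    import numpy as np

    rng = np.random.default_rng(1)
    months = np.arange(35 * 12, dtype=float)

    # Shared month-to-month variability produces a positive short-term q-T correlation,
    # while the imposed long-term trends are of opposite sign.
    common = rng.standard_normal(months.size)
    T = 0.01 / 12.0 * months + 0.30 * common + 0.10 * rng.standard_normal(months.size)
    q = -0.02 / 12.0 * months + 0.50 * common + 0.10 * rng.standard_normal(months.size)

    def detrended(x):
        return x - np.polyval(np.polyfit(months, x, 1), months)

    print(np.corrcoef(detrended(q), detrended(T))[0, 1])            # strongly positive
    print(np.polyfit(months, T, 1)[0], np.polyfit(months, q, 1)[0]) # trends: positive vs negative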
SESSION 9 CLIMATE WITHOUT COMPUTER SIMULATION
FOCUS: MATHEMATICS, PHYSICS, AND CLIMATE
WHAT IS THE CLIMATE CHANGE SIGNAL?

KYLE L. SWANSON
Department of Mathematical Sciences, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA

Climate signal identification is vital if we are to understand near-term climate change and properly assess climate model performance. Here, using a parametric multivariate analysis technique, we show that the objectively identified 20th century terrestrial climate change signal is remarkably linear with respect to time. Analysis shows that this signal has warmed at a consistent rate of 0.1°C/decade throughout the 20th century; that the rate of warming of this signal has not materially increased in recent decades; and that an ensemble of climate model simulations has difficulty capturing this signal after 1960. Current mean temperatures appear to be roughly 0.2°C warmer than the observed signal due to an overshoot coinciding with the 1997/1998 super El Niño. This suggests slower global warming over the next decades compared to the 1980-2000 period as the mean temperature returns to this underlying signal.
In the study of the climate system, it is traditional to average over a sufficient number of years so that the noise due to short-time-period weather is greatly diminished, thereby revealing the climate signal. However, this approach to signal detection amounts to driving using the rear-view mirror, as current events in the climate system that may appear significant are by definition beyond the pale of analysis. Insofar as the hallmark of an advanced understanding of the climate is the ability to predict and attribute climate change events, a contemporaneous measure of the climate signal would be an invaluable tool, as it would provide a context within which to discriminate climate variability from climate change. Can such a climate signal be objectively identified?

It has recently been suggested that pattern recognition-based techniques may allow better signal extraction from spatio-temporal data compared to standard space-time averaging, as they seek to use all multivariate information contained within the relevant data.6-8 For the climate problem, these techniques attempt to find those spatio-temporal patterns of climate variability that maximize the ratio of inter-decadal to intra-decadal surface temperature variance. Hence, these techniques peel back individual layers of longer time-scale climate variability, allowing identification of the dominant climate signal as the pattern that optimally differentiates between various decades.

Figure 1 shows the spatially weighted mean temperature of the pattern that optimally differentiates inter- from intra-decadal variability on spatial networks consisting, respectively, of those grid cells in the HadCRUT3v surface temperature data set that have continuous monthly coverage over the periods 1901-2008 (left) and 1951-2008 (right). In the analysis, all months are treated independently, with the climate signal defined as the annual average of these individual monthly time series. Despite the fact that no explicit time dependence is imposed, for both periods a nearly linear warming at a rate of 0.1°C/decade emerges as the climate change signal. This signal departs significantly from observed mean temperatures for these two observed networks even over multi-decadal periods. The robustness of this signal is indicated by the similarity of
the 1951-2008 climate signal to the 1901-2008 signal. The former is dominated by the 1970-2008 period, during which the warming of the global mean temperature accelerated. Given this, it is natural to expect a signal different from the one that characterizes the entire 20th century, perhaps marking an altered balance of underlying forcing mechanisms associated with this more rapid warming. However, this is not the case; the 1951-2008 signal has a linear character and rate of warming virtually identical to the 1901-2008 signal.
Fig. 1: 1901-2008 (left) and 1951-2008 (right) climate signals for the HadCRUT3v networks. Red dots are annual spatial means, and the blue curve is the observed climate signal calculated using linear discriminant analysis. The green curve is the mean over the model ensemble of climate signals calculated on an identical network to the observed signals.
Do models capture this climate signal? For both the 1901-2008 and 1951-2008 periods, the answer appears to be no, as the observed signal qualitatively differs from the ensemble mean of model signals calculated in an identical manner. For this purpose, twentieth century retrospective simulations continued into the 21st century using the SRES A1B emissions scenario were used. The overall dataset consisted of 34 simulations using 12 different models from the CMIP3 archive. The ensemble model signals cool for roughly a decade after 1960, and then begin to warm at a rate roughly twice that of the observed climate signal (0.2°C/decade) to 2008 and into the near future. It is interesting that the ensemble model simulated signals, which in principle have no explicit knowledge of the observed area-weighted spatial mean temperatures, track those observed temperatures better than the observed signal over the 1960-2000 period. The observed signal has a much simpler temporal structure, containing little inter-decadal time-scale variability.
What does this signal imply? If the quasi-linear climate signal identified in the figure truly underlies 20th century climate change, roughly half of the post-1980 warming is due to a return to that signal. Moreover, the existence of such a signal has important implications for near-term climate change. For both periods in the figure, the years 1997/1998 mark a jump in the observed spatially weighted mean temperature from predominantly cooler than the underlying signal to warmer than that signal. This suggests
a fraction of the recent warming is due to an "overshoot" of spatially weighted mean temperature, perhaps marking a global redistribution of heat in response to the 1997/98 super El Niño event.13 If this is the case, all else being equal this overshoot should decay back toward the underlying linearly evolving climate signal as we go further into the 21st century. This decay, counterbalanced by the continued linear increase in the underlying climate signal with respect to time at its 20th century rate, suggests that mean temperatures will not begin to warm significantly from their present levels until after 2020. This forecast differs markedly from deterministic forecasts that suggest strong warming over the 2005-2015 period, but is consistent with the recent (albeit controversial) plateau in global mean temperature.14
Curiously, such a scenario may provide a natural test of climate sensitivity. Rapid cooling back toward the established 20th century signal trend would be consistent with an insensitive climate, while a slow return to that trend, marked by constant or even slowly rising temperatures over the next several decades, would suggest a very sensitive climate, one not readily able to radiatively dissipate this quite substantial overshoot temperature anomaly.
In summary, the nearly linear evolution of this objectively defined climate signal over the 20th century is consistent with a climate responding to linearly increasing anthropogenic forcing (positive greenhouse gas forcing partially offset by negative aerosol forcing), with incoherent natural variability superimposed upon that signal. The lack of an acceleration of warming over recent decades suggests that the decoupling of greenhouse gas forcing from sulphate aerosol forcing may not yet have occurred. If so, rapid climate change has not yet begun. It is vital to note that the linear climate signal over the 20th century does not constrain future climate forcing; if/when sulphate aerosol emissions are reduced due to a switch-over to less carbon intensive fuels, the jump in radiative forcing will undoubtedly be larger than it would have been in the 1970s. On top of this, the impulsive jump in global mean temperature caused by the 1997/98 El Niño highlights the danger that nonlinearities present in natural modes of climate variability and/or feedbacks may enter into the picture in a completely unpredictable fashion as the planet continues to warm.14
METHODS
The pattern recognition technique used here is a variant of linear discriminant analysis (LDA). LDA identifies a sequence of discriminants consisting of a time series and a spatial pattern; these are analogous to the leading empirical orthogonal functions and principal components in a principal component analysis. Hence, LDA peels back individual layers of longer time-scale climate variability, with the leading discriminant providing the dominant climate signal, as it is the pattern that optimally differentiates between various decades. LDA as applied here involves three basic steps: (i) separating the data into "high pass" (intra-decadal) and "low pass" (inter-decadal) components; (ii) re-scaling the entire data set so that it has unit high-pass variance; and (iii) finding the leading mode of variability in this re-scaled data. Steps (ii) and (iii) are readily accomplished using a singular vector decomposition; the full procedure is discussed in the Supplement. Observed surface temperature data is taken from the HadCRUT3v dataset.
Continuous monthly coverage over the period 1901-2008 is found for 190 of the 5° by 5° grid
cells; these data form the observed network used in the figure (left panel). The area-weighted mean temperature of this network provides a reasonable facsimile of the full HadCRUT3v global mean (r = 0.91; m = 1.08). Similar analysis over the 1951-2008 period identifies 711 grid cells having continuous monthly coverage. No infilling is done, to avoid the need to synthesize covariance matrices for the discriminant analysis. Note that analysis of this time series in a certain sense presents a steeper challenge than for the global mean, as its inherent bias toward the Northern Hemisphere continents implies a larger level of weather-related noise.
Climate model data is taken from the CMIP3 archive of model simulations run in support of the IPCC AR4. Three simulations were taken from each of the following models: MRI, MIROC-MEDRES, NCAR CCSM3, GISS E-R, GFDL CM2.1, CCC CGMA, MIUB, and MPI-ECHAM5. Single simulations were taken from the HadCM3 and MIROC-HIRES models. Synthetic observations were extracted from modelled surface temperature fields on a spatial network identical to the observed for both the 1901-2008 and 1951-2008 periods.
ACKNOWLEDGEMENTS
The author gratefully acknowledges the use of the CMIP3 archive of climate model data maintained by Lawrence Livermore National Laboratory (http://www-pcmdi.llnl.gov), as well as the HadCRUT3v observed temperature record (http://www.cru.uea.ac.uk/cru/data/temperature/). Discussions with Sergey Kravtsov regarding statistical aspects of this work were particularly enlightening. Correspondence should be addressed to KLS (e-mail: [email protected]).
REFERENCES
1. Lorenz, E.N. (1975) Climate predictability. In: The Physical Basis of Climate and Climate Modelling, WMO GARP Publ. Ser. No. 16, 132-136. WMO, 265 pp. Lorenz defined deterministic climate predictability as climate predictability of the first kind, while predictability due to the emergence of a forcing signal is climate predictability of the second kind.
2. Smith, D. et al. (2007) "Improved surface temperature prediction for the next decade from a global climate model." Science, 317:796-799.
3. Keenlyside, N.S. et al. (2008) "Advancing decadal-scale climate prediction in the North Atlantic sector." Nature, 453:84-88.
4. Lee, T.C.K., Zwiers, F., Zhang, X. and Tsao, M. (2006) "Evidence of decadal climate prediction skill resulting from changes in anthropogenic forcing." J. Clim., 19:5305-5318.
5. Meehl, G.A., T.F. Stocker, et al. (2007) Global climate projections. In: Climate Change 2007: The Physical Science Basis, 747-845. Cambridge Univ. Press, 996 pp.
6. Ripley, B.D. (1996) Pattern Recognition and Neural Networks. Cambridge Univ. Press, 403 pp.
7. Schneider, T., I.M. Held (2001) "Discriminants of twentieth-century changes in Earth surface temperatures." J. Clim., 14:249-254.
8. DelSole, T. (2006) "Low-frequency variations of surface temperature in observations and simulations." J. Clim., 19:4487-4507.
9. Brohan, P., J.J. Kennedy, I. Harris, S.F.B. Tett and P.D. Jones (2006) "Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850." J. Geophys. Res., 111, D12106, doi:10.1029/2005JD006548. The HadCRUT3v hemispheric and global mean data smoothed in this manner is available at http://www.cru.uea.ac.uk/cru/data/temperature/.
10. See http://www-pcmdi.llnl.gov for details.
11. Hansen, J., et al. (2005) "Earth's energy imbalance: Confirmation and implications." Science, 308:1431-1435.
12. Forster, P., V. Ramaswamy, et al. (2007) Changes in atmospheric constituents and in radiative forcing. In: Climate Change 2007: The Physical Science Basis, 129-234. Cambridge Univ. Press, 996 pp.
13. McPhaden, M.J. (1999) "Genesis and Evolution of the 1997-98 El Niño." Science, 283:950-954.
14. Swanson, K.L. and A.A. Tsonis (2009) "Has the climate recently shifted?" Geophys. Res. Lett., 36, doi:10.1029/2008GL037022.
SUPPLEMENT
Linear discriminant analysis
Linear discriminant analysis (LDA)1 is a technique for finding linear combinations of variables that maximize the ratio of between-group variance to within-group variance, where for the purposes here the groups are defined as a pre-specified time period, e.g., a decade. In this situation, LDA thus seeks those spatial patterns that maximize inter-decadal to intra-decadal variance. Within the context of climate study, variants of LDA at the level of climate fields have been applied in two studies during the past decade,2,3 but by and large the technique has not gained wide acceptance. In part, this may be because LDA superficially appears mathematically dense. This perception is unfortunate, because LDA is in reality only marginally more complicated than principal component analysis (PCA). Moreover, LDA provides a natural counterpart to PCA, and its use in climate science, particularly climate change detection, which seeks to distinguish forced climate variability from a background of natural variability, seems natural.
As noted by Schneider and Held2 (hereafter SH01), Ripley1 provides an adequate introduction to LDA. Within the LDA framework, data are divided into g groups, here divided among n time measurements. A group matrix G is then constructed that describes the membership of those groups within the data matrix Xs, along with a covariance matrix estimate Σs. LDA is based upon an eigendecomposition of the matrix Γ = Σs⁺ Σg, where Σs⁺ is an approximate pseudoinverse of Σs (see discussion below) and Σg describes the among-group (low-pass) covariance matrix. The g−1 right eigenvectors u of Γ, when multiplied with the data matrix Xs, give the canonical variates. The discriminating patterns are then the regression of individual canonical variates onto the full data. The pair of a canonical variate and a discriminating pattern is referred to as a discriminant (SH01). Within this context, the leading canonical variate is the climate 'signal', in a manner analogous to the identification of the leading PCA/EOF as the dominant climate mode of variability; just as with PCA, interpretation of higher-order variates is generally not straightforward. In this work, we use the area-weighted spatial mean temperature of the leading discriminant as our climate signal.
As suggested in the appendix to SH01, it is possible to construct an identical filter, but within the more familiar context of principal component analysis (PCA). First, a 'high pass' time series matrix Xts is defined by subtracting the group means from the full matrix. Then, a singular vector decomposition Xts = Uts Λts Vtsᵀ is done on this high-pass matrix to find the directions of maximum variance in this reduced high-pass subspace. The spatial patterns associated with maximum high-pass variance are used to scale the truncated data matrix Xt so that it has unit high-pass variance: Xtl = Xt Vts Λts⁻¹. This is equivalent to scaling Xs by the high-pass pseudoinverse Σs⁺ in the traditional LDA approach outlined above. A second singular vector decomposition is done on this scaled data matrix, Xtl = Utl Λtl Vtlᵀ. The canonical variates are then simply c = Utl Λtl, after normalization so that each canonical variate has unit variance, and the discriminating patterns are simply the regression of these canonical variates on the full data matrix Xt.
This procedure is conceptually much simpler than the classic application of LDA described by SH01, and basically amounts to:
1. a singular vector decomposition followed by a truncation;
2. removal of the low-pass mean to define the high-pass data series;
3. a singular vector decomposition of the high-pass data series;
4. scaling of the truncated time series by the high-pass covariance;
5. a final singular vector decomposition to define the canonical variates; and
6. a regression to find the discriminating patterns.
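This six-step filter can be prototyped directly. The sketch below is a minimal illustration in Python/NumPy (rather than the Matlab mentioned below); the function name, the default group length, and the variance-fraction threshold are illustrative choices rather than values or code from the analysis itself, and it assumes the input is an anomaly matrix of shape time x space with any area weighting already applied.

```python
import numpy as np

def discriminant_signal(X, group_len=10, var_frac=0.90):
    """Leading canonical variate and discriminating pattern of X (time x space)."""
    n_t, n_s = X.shape

    # 1. SVD of the full data, then truncate to the EOFs explaining `var_frac`
    #    of the total variance.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), var_frac)) + 1
    Xt = X @ Vt[:k].T                      # truncated data, shape (n_t, k)

    # 2. Remove the low-pass (group) means to define the high-pass series.
    groups = np.arange(n_t) // group_len
    means = np.array([Xt[groups == g].mean(axis=0) for g in np.unique(groups)])
    X_hp = Xt - means[groups]

    # 3. SVD of the high-pass data series.
    _, s_hp, Vt_hp = np.linalg.svd(X_hp, full_matrices=False)
    s_hp = np.where(s_hp > 1e-10, s_hp, np.inf)   # guard against null directions

    # 4. Scale the truncated series so it has (approximately) unit high-pass variance.
    X_scaled = Xt @ Vt_hp.T / s_hp

    # 5. Final SVD; the left singular vectors give the canonical variates.
    U_f, s_f, _ = np.linalg.svd(X_scaled, full_matrices=False)
    variates = U_f * s_f
    variates /= variates.std(axis=0)

    # 6. Regress the leading variate onto the full data to obtain the
    #    discriminating pattern; the variate itself is the 'climate signal'.
    signal = variates[:, 0]
    pattern = X.T @ signal / n_t
    return signal, pattern

# Quick usage check on synthetic data: 100 "years" of noise plus a weak trend.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 200)) + 0.02 * np.arange(100)[:, None]
sig, pat = discriminant_signal(X)
print(sig[:5], pat.shape)
```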
Fig. S1: SH01 method with 3 different group lengths (1913-2008 period). The signal trend is weaker than that shown in the primary manuscript because some trend using this method is identified as intra-group variability. Note the dependence of the 1998 impulsive jump on group length.
This procedure can be implemented using only a handful of lines of Matlab code. The sole difference between this approach and that of SH01 is that the concept of a 'group' is discarded, replaced by the more familiar spectral decomposition of a spatio-temporal field into low-pass and high-pass components. However, Figure S1 shows that the performance of the SH01 LDA approach and the filter version used here are quite similar. The approach here has a somewhat larger trend over the 20th century than the SH01 approach, consistent with the inclusion of some of the climate change trend into the high-pass (intra-group) variability due to the definition of the group in the SH01 approach, which in Figure S1 spans 8, 12, or 16 years. This is because the group mean is uniformly removed from all years within the group in the SH01 approach. Provided that the high-pass statistics are stationary, the LDA filter can be applied to the full time series, obviating the need for buffer intervals at the beginning/end of the time window familiar from taking boxcar averages.
LDA performance relative to boxcar time averaging
Boxcar averaging of fields with respect to time is probably the most popular means of isolating a 'climate' signal. Here we demonstrate the superiority of LDA to boxcar averaging in a situation where spatio-temporal data is available. Figure S2 (top left) shows a signal which consists of a half-sinusoid in space that has a square-wave-type piecewise constant time evolution. There are 190 spatial and 100 temporal nodes, i.e., similar spatio-temporal resolution to the HadCRUT3 temperature network examined in the primary manuscript. Noise is superimposed upon the signal, red in space and white in time, to represent temperature anomalies associated with weather-like fluctuations (top right). The bottom left panel shows the application of an 11-point (0.2 time unit) boxcar average with respect to time; while a semblance of the signal emerges, edge detection is fairly poor. In contrast, LDA using an 11-point filter window, with the truncation retaining sufficient EOFs to explain 90% of the temperature variance (identical to that used in the analysis of the models and observed networks in the primary manuscript), performs much better, both in qualitative terms of edge detection as well as quantitatively, as its variance from the true signal is roughly 50% that of the boxcar averaged signal. LDA also out-performs boxcar averaging by a comparable amount even in situations where the underlying signal is a trend.
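To reproduce the flavour of this synthetic test, the following sketch builds a comparable field and applies the boxcar baseline; the noise amplitude and the spatial smoothing used to redden the noise are assumptions, since the text does not specify them.

```python
import numpy as np

n_space, n_time = 190, 100                 # node counts quoted in the text
t = np.linspace(-1.0, 1.0, n_time)
x = np.linspace(0.0, np.pi, n_space)

# Square-wave-in-time, half-sinusoid-in-space signal (time x space).
signal = np.outer(np.sign(np.sin(2.0 * np.pi * t)), np.sin(x))

# Noise: white in time, reddened in space with a simple running average
# (assumed amplitude and smoothing width).
rng = np.random.default_rng(0)
white = rng.standard_normal((n_time, n_space))
kernel = np.ones(9) / 9.0
red = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, white)
data = signal + 2.0 * red

# Baseline: an 11-point boxcar running mean in time, applied column by column.
box = np.ones(11) / 11.0
boxcar = np.apply_along_axis(lambda col: np.convolve(col, box, mode="same"), 0, data)

err = np.mean((boxcar - signal) ** 2)
print(f"boxcar mean-square error against the true signal: {err:.3f}")
```

The discriminant filter sketched earlier in the Supplement can then be run on the same `data` array to compare its error against the boxcar baseline.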
Fig. S2: Top right: Square wave time/half sinusoid space signal. Top left: Signal plus noise. Bottom left: Running mean with t = 0.2 time unit window. Bottom right: Leading discriminant.
479 Observed data network
-60
0
50
100
150 200 Longitude East
250
350
80 50 0.5
40
""
20
::0
3
0 0
-20 -O.f -40 -60
0
50
100
150 200 Longitude East
250
300
350
-1
Fig. S3: Top: Observed network locations (blue dots). Bottom: Annual average of leading discriminant temperature anomaly in terms of its magnitude in 2008.
The primary data set used in this research is the HadCRUT3 gridded surface temperature data.4 This data set is compiled using monthly station temperature time series over land, and from sea surface temperature (SST) measurements taken on board merchant and naval vessels. This data is compiled and gridded into 5° by 5° boxes. Although data is available from 1850, the emphasis in this work is on data from 1901-present, balancing the desire for extensive spatial coverage against the need for relatively long time series for analysis. We choose not to infill missing grid values, i.e., only those grid boxes that have continuous monthly coverage over this period are used. While this implies restricted spatial coverage, it has the advantage that statistics regarding spatio-
temporal station coverage that are used to infill climate data inferred from recent periods need not be assumed to hold in prior periods. Figure S3 shows the data network (top) as well as the 2008 temperature anomaly on this network associated with the annual average of the monthly leading discriminants.
SUPPLEMENT REFERENCES
1. Ripley, B.D. (1996) Pattern Recognition and Neural Networks. Cambridge Univ. Press, 403 pp.
2. Schneider, T., I.M. Held (2001) "Discriminants of twentieth-century changes in Earth surface temperatures." J. Clim., 14:249-254.
3. DelSole, T. (2006) "Low-frequency variations of surface temperature in observations and simulations." J. Clim., 19:4487-4507.
4. Brohan, P., J.J. Kennedy, I. Harris, S.F.B. Tett and P.D. Jones (2006) "Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850." J. Geophys. Res., 111, D12106, doi:10.1029/2005JD006548. The HadCRUT3 hemispheric and global mean data smoothed in this manner is available at http://www.cru.uea.ac.uk/cru/data/temperature/.
A KEY OPEN QUESTION OF CLIMATE FORECASTING
PROFESSOR CHRISTOPHER ESSEX
Department of Applied Mathematics, University of Western Ontario, London, Ontario, Canada

Have you ever wondered what climate looks like? I know that you can see the indirect effects such as palm trees, deserts, or glaciers. You can see the indirect effects of the atomic world too: the properties of materials, colors, weights, etc. But these indirect effects are not the atomic world itself. We cannot perceive climate directly either, despite what a certain U.S. Senator said recently about feeling it on aircraft.
So what would it look like? Normally answering this would be a rather abstract exercise, but some people have unintentionally made it more concrete with their ultra-long exposure photographs. They were created primarily with art and design in mind, but I think they have useful scientific meaning. For example, a photo by Justin Quinnell appeared on the New Scientist website (http://www.newscientist.com/gallery/mg20026761900-solargraphs-show-half-a-year-of-sun). It's an exposure of six months! It uses a technique called solargraphy, which utilizes a pinhole camera.
Seeing the photo through the eyes of a scientist, the bright streaks in the sky form a great fixed arching structure that you would see instead of the Sun, if you lived on those timescales. No traffic is visible because it's too quick to see. You would miss some things that humans normally would see, but you would also see things that we normally cannot. In Tarja Trygg's solargraph (http://www.alternativephotography.com/articles/art108.html) the same bright structure appears in the sky. It's everywhere in this world. The streets are empty, just as in the previous shot, but seen through scientific eyes, it is curious that the parking lot is full. It is not full of any particular cars. It's full of recurrent visits of cars. They are like atoms in the materials making up our bodies. The materials seem substantial enough, but individual atoms jiggle in and jiggle out without affecting the structures that they form.
For climate, the solargraph is little more than a snapshot. But again, thinking as a scientist, I could imagine producing a sequence of them to produce a movie. It would take us centuries and more to produce it, but it would be like a short home video when played back. What would such a video be like? What would we observe? Would anything be happening? Would there be anything analogous to "weather" or would change only happen due to external influence? While the overall structure of the bright streaks in the sky would be fixed, thinking scientifically, we would see parts of them brightening, dimming, and pulsating, probably not unlike what we see in an aurora borealis. There would be no day or night and there would be no seasons, just this shimmering arch in a translucent grey-blue sky.
These questions are at the heart of what we need to know about climate, but we don't know the answers to them, despite all of our money and hard work over the last decades. This was made clear last summer when major U.S. climate institutions and NGOs called for U.S.$9 billion in new spending on top of what is already spent on climate science, to facilitate among other things "broad fundamental" research on the subject of climate forecasting.
Some people think that the scientific problem is solved. But if so, why would anyone want to spend more, let alone spend even more rapidly, to answer questions that we already know the answer to? Moreover, why would we want to start all over again at the beginning with "broad foundational" research? The answer is simple. The goal of accurate climate forecasting using computers to simulate the Earth's atmosphere and oceans has not lived up to expectations. And that is so despite having the best equipment and scientists from all over the world singularly devoted to it for nearly thirty years.
The fact that this problem is not solved, despite all the hopes and resources put into it over decades, does not reflect badly on the scientists involved. The only fault, if there is any, is that many non-scientists and some scientists have significantly underestimated the problem at a fundamental level. If you have not been to Erice in the last few years, you may wonder how this could be. So let me recap, before I get into some genuinely new ideas.
Last year, we dealt with the data and statistics side of the climate problem. We faced a common and powerful misconception that data can be interpreted without a theory, free of context. The misconception wrongly presumes that a single specific probability density function ("normal" or "Gaussian") is generally true without theoretical assumptions. Those that hold this misconception don't get Poincaré's joke that the mathematicians think that the single function in question must hold because it is proven by experiment, and the experimentalists believe it because it is proven in a theorem of mathematics. But even small changes in theoretical context cause a dizzying array of very different probability density functions, each hurling one into very different statistical worlds. What seems virtually impossible in one such world can be routine in another. Big surprises are in store for you if you presume the wrong function.
Fig. 1.
Probability density functions don't have to be Gaussian. They can be skewed; they can spread in time; they can even quasi-propagate. They can be, and do turn out to be, exotic special functions, or even more exotic fractal functions. Figure 1 is a collage of some that I have encountered directly in my work. These, among many other classes, depend on the theoretical context. Theory matters.
The year before last, we dealt with the theory side of the climate problem, or rather we dealt with climate models. I quickly clarify because there is also a widespread confusion between theory and climate models. Even if computer models were a rigorous application of all of physics (they aren't), faithfully producing all the known thermodynamic conditions, wind speeds, etc. at every point, for all time, their output would not constitute a theory for climate. We simply don't have a theory of climate. I will talk more about that later, but for models the issue is moot.
Climate models are not remotely comprehensive or rigorous applications of physics. Even the underlying known equations involved cannot be solved. There is a million dollar prize in mathematics if you can prove that solutions even exist. We need to use computers to get anything much out of the equations at all. But can we solve them on a computer? Alas no. We can't actually use those equations on a computer. What actually goes onto a computer is something known as a discrete map, which isn't the original continuous differential equation at all. These maps have solutions in their own right, differing from those of the original equations. Of course they are selected to be as close as possible, without taking too long to compute. But their errors can be significant, and hard to identify, particularly over long times. The maps can have instabilities where there are none (false chaos), and they can also falsely stabilize chaos that is actually there (false stability). Colleagues and I published one of the earliest examples of false stability in Physics Letters in the early 90s,1 so this concept is not that old. Computational error can steal the ring from a bell, turning its voice into a dull dead thud. And these are just some concerns over proper computational error: computation the way that applied mathematicians teach it to students. Climate models don't even live up to that. They do not even use proper computation.
Let me explain what "proper" means here. Maps, unlike the equations they approximate, have a grid size associated with them. Pixels on a video screen are an example of such a grid. Anything smaller than the distance between the pixels is lost. Data between the pixels suffers the fate of someone trying to sit between two chairs. Proper computation requires that the distance between grid points (pixels if you wish), denoted by the black two-headed arrows in Figure 2, must be much smaller than the wiggles in the solution. That is so in the upper example in Figure 2. But in the lower case, the grid is much larger than the wiggles. That is what happens in climate models; the wiggles are much, much smaller than the grid. It is fair to say that climate modelers have no choice, because it is easy to show that contemporary computers would take much more than the age of the Universe even to do short computations properly. If I were building climate models (and we do need climate models), I would be forced down exactly the same path.
1. R.M. Corless, C. Essex and M.A.H. Nerenberg (1991) "Numerical Methods can Suppress Chaos", Physics Letters A 157:27-36.
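As a quick illustration of how a discrete map can part company with the equation it approximates, consider the sketch below. It is a standard textbook example chosen by analogy, not one of the cases from the paper cited above: the forward-Euler discretization of the smoothly relaxing logistic differential equation becomes oscillatory and then genuinely chaotic once the time step is too coarse, which is the "false chaos" failure mode described in the text.

```python
import numpy as np

def euler_logistic(x0, h, n_steps):
    """Forward-Euler map for dx/dt = x(1 - x): x_{n+1} = x_n + h*x_n*(1 - x_n)."""
    x = np.empty(n_steps)
    x[0] = x0
    for n in range(n_steps - 1):
        x[n + 1] = x[n] + h * x[n] * (1.0 - x[n])
    return x

# The continuous solution relaxes monotonically to x = 1 for any 0 < x0 < 1.
for h in (0.1, 2.2, 2.7):
    tail = euler_logistic(0.2, h, 400)[-6:]
    print(f"h = {h}: last iterates {np.round(tail, 3)}")
# h = 0.1 converges to 1.0 (faithful); h = 2.2 locks into a spurious period-2
# oscillation; h = 2.7 wanders chaotically: behaviour the differential
# equation itself never exhibits.
```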
Fig. 2
That path leads to a decisive compromise. The grid spacing they are forced to use is so large (typically hundreds of kilometres) that all of the weather that we experience is too small to show up in climate models, all of it. And that includes the heat transport from ground to space, which is rather the whole point of popular interest in the topic. Leaving all those subgrid-scale "wiggles" out would be a disaster for the physics. The fix is to introduce some fake physics, like mortar between the bricks, to keep the structure standing. The aim of the fake physics is to mimic what Mother Nature does between the pixels, but only faster.
I should mention, parenthetically, that instead of "fake physics" the official term is "parameterization." I would use that term too, if I did not have to endure so many who believe that climate models are an exact computation of classical physics, spewing out future observations. Good climate modelers certainly know better, but even many well-educated people do not. I call this misconception the clockwork fallacy. Those who subscribe to the clockwork fallacy are like philosophers of the 18th century who saw the Universe as a giant, predictable clockwork. But with modern Hollywood computerized special effects in the mix, their conviction is far more vivid. The clockwork believers today can imagine shovels full of equations from classical physics dumped into a hopper at the top of a climate-model Cuisinart that produces tasty and accurate solutions at the press of a button. But there is no push-button climate-model kitchen appliance. Instead climate models are as much pastiche as physics.
The fake physics can be propped up empirically, but ultimately physics cannot be faked. The only thing that can make climate models work is an empirical approach. Observations must anchor the parameterizations. Climate models are unavoidably empirical constructions and not direct implementations of known physics at all. Their parameterizations may at times be inspired by the physics, but that's like saying a hamburger chain uses 100% pure beef while making its burgers.
The consequences of this insight are enormous. It means that trying to call out climate modelers because a model doesn't agree with certain observations really misses the nature of these models. For example, if you were to say, I don't like climate models because their tropical tropospheric temperatures don't agree with observations, then that can be fixed. If you say that there is a problem with cloud amounts, well, that can be
fixed too. If you say that climate models rain strangely, or that they don't agree with each other, well then... you get the idea. It's the unavoidable nature of empirical models that empirical defects are all empirically fixable. John von Neumann, one of the fathers of modern computers, among other things, famously said, "with four parameters I can fit an elephant, and with five I can make him wiggle his trunk." Another, perhaps more authoritative version, calls for the elephant to fly away. Currently we are in a curious situation. As things stand, empirical climate models don't just compute climate; amazingly, they define it!
Climate models aren't the same as empirically based engineering models. The latter are based on controlled experiments under all conditions envisaged. However in climate dynamics, we cannot do controlled experiments, and we have little or no precise observational knowledge of what conditions will be encountered in the future. In fact we actually aim to compute circumstances that may well be quite unlike any used to empirically set the parameterizations.
I want to return now to that idea of a climate movie made from those eerie solargraphs. If we made the movie from climate models, instead of imaging Nature directly, what would the model version be like? We can actually say something about that.
Fig. 3.
Figure 3 is a diagram from the modeling chapter of the IPCC's fourth assessment report.2 It compares different model outputs in a power spectrum of some temperature index or another. Across the bottom are periods of time on a log scale. I am not interested in all of those timescales. I am interested in long timescales: decades to centuries.
2. D.A. Randall et al. (2007) Climate models and their evaluation. In: Climate Change 2007: The Scientific Basis, Working Group I of the Fourth Assessment Report of the IPCC, chap. 8, 591-662, Cambridge University Press.
Fig. 4.
Let me isolate them and stretch them out in Figure 4, since there is such a huge range of timescales. Parenthetically, it's amusing that the model curves differ in magnitude by a factor of one hundred. But, as I said, things like that can be fixed. In any case, what cannot be fixed is that all of them are as flat as the Mediterranean on a calm day. There is a famous theorem by Wiener and Khinchin that tells us in such cases that the autocorrelation function is zero.
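For reference, the relation being invoked is the standard Wiener-Khinchin transform pair for a stationary process, stated here in its textbook form rather than in any model-specific notation:

```latex
S(f) \;=\; \int_{-\infty}^{\infty} R(\tau)\, e^{-2\pi i f \tau}\, d\tau,
\qquad
R(\tau) \;=\; \int_{-\infty}^{\infty} S(f)\, e^{2\pi i f \tau}\, df .
```

A spectrum that is flat across the low frequencies of interest therefore corresponds to an autocorrelation that vanishes at the corresponding long lags: on those timescales the modelled "climate" behaves as if it has no memory.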
Fig. 5.
In practice that means that the model solutions will look more like the bottom of the curves in Figure 5 than the top one. Indeed if you look at unforced model outputs they look just like the bottom one. That is, not only would there be no long-term trends, averages would be fixed. Thus in the movie made from models, absolutely nothing would be happening. All would be frozen in a kind of permanent equilibrium. This particular equilibrium world of climate models is what is currently regarded as defining climate itself. A great deal of our mainstream thinking about climate stems
from this timeless state in big climate models. But is this what we would find in Nature if the system had no external influences? To put it another way, is there such a thing as a long-timescale "weather"? That is perhaps one of the most important questions in climate forecasting in my view, with many significant consequences however it were to turn out.
There are no rigorous theoretical arguments for presuming the kind of equilibrium inherent in contemporary climate models. But there is an intuitive argument that these timescales are larger than the thermodynamic timescales from ocean heat capacities. But the system is not in thermodynamic equilibrium, which renders this argument problematic. The system is instead a complex dynamical system. Non-thermodynamic dynamical timescales can play an important role and they can be much longer than the thermodynamic ones. Intuitively that is because chaos can produce bounded behaviour with infinite periods: chaotic signals are not periodic.
Fig. 6.
Let me give you an example of dynamics-based timescales. In Figure 6 we find the output of a system I discussed previously in Erice. It is the 3-times logistic map near the 3-cycle window.3 The jump here occurs after the same value has repeated, to about 5 figures, roughly 500 times. If you attempt to predict the next value or values, and you are an empiricist, it would be understandable if you get bored after getting the same value two or three hundred times over and over. You may well conclude this is the only value you will ever get. If you do though, you will be surprised shortly after 500. Afterward you may think that the correct lesson is to say "I will wait till 1000 steps next time." I can then give an empiricist new data (Figure 7) to test what has been learned. We may begin from the same start (0.53). This leads to the same steady value as before, over and over and over. But nothing happens even at 1000 points. Perhaps an empiricist will conclude after, say, 50,000 repeats of the same value that nothing really is going to
3. C. Essex, S. Ilie, and R. Corless (2007) "Broken Symmetry and Long-Term Forecasting," J. Geophys. Research, 112, D24S17, doi:10.1029/2007JD008563.
ever happen. And maybe after 100,000 repeats the empiricist will even become smug and self-satisfied that all is understood.
Fig. 7.
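The iteration behind Figures 6 and 7 is easy to reproduce in outline. The sketch below assumes the standard logistic map with the control parameter set a small amount eps below the period-3 window value 1 + sqrt(8); the start value 0.53 is the one quoted in the text, while the function name and the burst-detection threshold are illustrative choices.

```python
import numpy as np

def third_iterates(eps, x0=0.53, n=2000):
    """Every third iterate of the logistic map with r just below the 3-cycle window."""
    r = 1.0 + np.sqrt(8.0) - eps
    x, out = x0, np.empty(n)
    for i in range(3 * n):
        x = r * x * (1.0 - x)
        if i % 3 == 2:                 # keep every third iterate
            out[i // 3] = x
    return out

traj = third_iterates(eps=5e-5)
# During the laminar phase consecutive third-iterates barely move; a burst
# shows up as a sudden large jump.
steps = np.abs(np.diff(traj))
burst = int(np.argmax(steps > 0.05)) if np.any(steps > 0.05) else None
print(f"first burst after roughly {burst} near-repetitions")
```

Shrinking eps pushes the first burst further and further out, which is the point of the comparison between the two figures.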
But all that has happened is that the surprise is delayed to about 170,000 steps. The only difference is that an extremely small alteration to a constant has taken place: the offset of the bifurcation parameter below its value at the 3-cycle periodic window, 1 + 2√2, has been made smaller still (the 5×10⁻⁵ offset used for Figure 6 is reduced by several further orders of magnitude). This example gives three important lessons about dynamics:
1. Nonlinear dynamics can produce timescales of any length. In this example, setting the deviation mentioned to zero makes the time till a burst infinite. Fluid dynamics, crucial to climate studies, is known, for example, to exhibit intermittent bursts such as this one and other more complex ones too. Dynamical timescales have nothing to do with mass or thermodynamic timescales, but are properties of the nonlinearities.
2. Empiricism fails on systems that are not stationary, or for timescales that are much shorter than a system takes to explore its entire dynamical range.
3. The notion that change can only happen in the presence of a cause is simply naive. In this case the human prejudice is to presume that an external force has caused the jump in values. There is no external cause. The jump is entirely of internal origins. The only way, the only way, we can attribute external causes is to account for internal ones first.
Not only is the thermodynamic timescale argument unconvincing, but there are practical reasons to doubt the behaviour of models on those timescales. You should not be surprised at this stage that complex algorithms for nonlinear systems generally (not just for climate models) are often computationally unstable when pushed to extremes in time. For climate models, we saw it in the historical struggle against so-called climatological drift. That is a euphemism for spurious long-term internal change in
models. It was an issue years ago when I was a postdoc at the unit working on the Canadian GCM. They knew it was spurious because model atmospheres did weird things over long times, like losing mass, which no one could account for, or returning negative densities, etc. In fact one of the milestones in the UN's fourth assessment report is the creation of a kind of fake physics that finally made fictional energy flows called "flux adjustments" unnecessary to fix such things, including instability. So "drift" was interpreted as due to computational instabilities and it was beaten out of the system from the beginning. What else could they do? There was no way, theoretical or empirical, to pick out real, long-term trends from computational instabilities, let alone to distinguish between externally and internally caused dynamics. All they had were the simplistic presumptions of early stochastic models, and a practical phobia about instability. But is their result too stable?
For all of the complexity on shorter timescales, the steady, stationary behavior of such models makes the climate system into something that behaves more like a heated brick on long timescales than a complex system. As I said earlier, computational schemes can suppress real instability. Furthermore the work of Professors Tsonis and Swanson, which has been presented here in Erice, suggests that known, long-term modes in the oceans are dynamically coupled, suggesting the prospect of nonlinear dynamical behaviour on these timescales. The usual empirical models cannot determine whether things would actually be happening in Nature. We have very little direct experience with that world, therefore it cannot be an empirical question.
If there is natural "weather" on the long timescales in question, there are some important considerations that follow. I will outline a couple here. The first is intrinsic. It is the question of closure of the "climate state" function used in climate model discussions. If time averages are actually constants of time in that regime, there may exist a function relating them to each other, without reference to the underlying fields being averaged. This is in analogy to the state function of equilibrium thermodynamics. This idea is implicit in the notion of climate sensitivity, which is in wide use in model analysis. That is, an integral, T, over the temperature field is implicitly assumed to be a function of an integral, F, over the radiation flux field (i.e., ΔT ∝ ΔF). In other words, if you heat a "brick," the "brick" will become hotter. If equations can be written for a regime that do not require reference to the regime they are averaged from, then you have closure. The regime can then "ignore" the underlying fields. Thermodynamics provides the classical example of a function that does not require knowledge of the underlying physics, to the point where it can be literally "ignored."
Let me explain this idea of closure, because if we ever can come up with relationships (functions or differential equations) that describe climate but can ignore weather, then we really will have a theory for climate. I find it difficult to believe that the "heated climate brick" equation, ΔT ∝ ΔF, is it, so I will have to explain with an example both how closure works and how it doesn't.
Suppose we aim to divide the world into two parts: an unaveraged world, and an averaged world. Closure is the mathematical circumstance where we succeed in making
the split. In the unaveraged world suppose that we have two variables y and x such that y = x². Now suppose that each day we find that x depends on a variable s in a different way. So one day we have x given by s^(-1/3), and another day we have s^4, and another s^1, and so on. Think of it like weather: different every day. Let's just stick to any power over -1/2. If we define an average, a_f, so that

a_f = ∫₀¹ f(s) ds,

then the average of x, a_x, is 3/2, 1/5, and 1/2 respectively for the cases mentioned. For the average of y, a_y, the corresponding values are 3, 1/9, and 1/3 respectively. We can fill in with more powers and plot them. We find that the results all fall on the graph of the function

a_y = a_x / (2 - a_x),   0 ≤ a_x < 2.
The graph looks like Figure 8.
Fig. 8.
This is closure. I can determine a_y from a_x without ever having to ask what function of s was used. I never again have to evaluate the average integral. The averaged world can ignore the unaveraged one, without contradicting it in any way. We have succeeded in separating an averaged world from an unaveraged one.
How can closure go wrong? One of many ways is to widen the class of functions. Suppose that one day a new function for x appears, s + 1. The average a_x is 3/2. We had 3/2 before, but this time instead of 3 for a_y, we get 7/3! It may seem like some trivial point of mathematics that there is a different a_y value, but closure is all about the nature of a function, and functions cannot have more than one output value. It means that we
cannot just go by a_x anymore in order to figure out what a_y is. We have to go back to the unaveraged world to figure it out. We cannot run our averaged world separately. In fact as we add more such functions we will see that the graph fills with a cloud of points, which may not even have a correlation, let alone one simple function. In that case there is no connection and each average is entirely dependent on the unaveraged world.
If we can adjust the definition of averages, the nature of relationships in the unaveraged cases, and the class of functions that will appear, it's amazing that closure occurs in Nature at all. But there are many spectacular successes. I have already mentioned the amazing case of thermodynamics. The unaveraged functions are limited by strong conditions. Another case is fluid dynamics, which arises from kinetic movements of atoms and molecules. It is remarkable that nature is filled with successful occurrences of physics that ignores the underlying physics. Professor Zichichi uses the metaphor of Beethoven composing music without the use of sub-nuclear physics4 as an example of this concept. The underlying scales tag along without a need for Beethoven to consider them. A more pedestrian metaphor would be that we don't need to compute quantum mechanical wave functions to go to the grocery store. In that sense the underlying scales are separate. He calls this separation of regimes the Anderson-Feynman-Beethoven (AFB) phenomenon and cites a hierarchy of successive separate regimes, each containing the next, down to sub-nuclear physics.
There is a spectacular case of the failure of closure. That is the averaging of the fluid mechanics to get at turbulent behaviour, first attempted by Reynolds. Generations have tried it without success. To this day, not only can we not forecast turbulent flow in a simple pipe from first principles, we cannot forecast the lowest order statistic, the average, from first principles.
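The arithmetic of the toy closure example above is easy to verify symbolically. The sketch below assumes only the averaging operator a_f = ∫₀¹ f(s) ds defined earlier and checks that the power-law cases land on a_y = a_x/(2 - a_x) while x = s + 1 does not; it is an illustrative check, not code from the lecture.

```python
import sympy as sp

s = sp.symbols('s', positive=True)

def averages(x):
    # a_f = integral of f(s) ds over [0, 1], the averaging operator defined above
    return sp.integrate(x, (s, 0, 1)), sp.integrate(x**2, (s, 0, 1))

for expr in (s**sp.Rational(-1, 3), s**4, s, s + 1):
    a_x, a_y = averages(expr)
    predicted = sp.simplify(a_x / (2 - a_x))   # the closure curve a_y = a_x/(2 - a_x)
    print(f"x = {expr}:  a_x = {a_x},  a_y = {a_y},  closure predicts {predicted}")
```

The first three cases agree with the closure curve; the last prints a_y = 7/3 against a predicted value of 3, which is the failure described in the text.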
Fig. 9.
Now consider a far more complicated problem than fluid turbulence, and ask yourself: is it plausible that the far more complicated problem of climate has been successfully closed with the climate-as-"heated brick" equation, ΔT ∝ ΔF? I think it is far
4. A. Zichichi, Complexity and Predictions at the Fundamental Level of Scientific Knowledge, Presentation for The Pontifical Academy of Sciences, November 2006; Complexity at the Fundamental Level: Consequences for the LHC, preprint, March 2008.
more likely, if this relation holds in models at all, that it does not in Nature. I suspect that this equation is a cuckoo in the climate theory nest.
Let's now consider an extrinsic consequence of having some kind of "long-term weather", or low frequency natural variability as meteorologists might call it. If intrinsically unstable (chaotic) dynamics occurs in this long-time regime in any form, long behaviours may be surprisingly sensitive to neglected or unknown small forcing. For this, I coined the term "crypto sensitivity". External forcing can use the same channels of sensitivity that errors in initial conditions do. To an observer of the chaotic noise, it will seem little different than errors in initial conditions (Figure 9), even when there is actually no error in initial conditions. However if the external forcing dynamics is systematic in any way, it can change the collective behaviour markedly, even if the forcing has a very small magnitude.
I give an example5 here of the logistic map, undriven and driven by small dynamics. The driving is done with another logistic map intruding on the first with a small correction. The result is a very different appearance to the invariant relative frequencies observed. Figure 10 is the undriven case. It has all the characteristics appropriate for the chaotic map at µ = 3.6787. The map induces the probability distribution. Dynamics can introduce distributions very different from a Gaussian one. In this case the zero values are absolute and will never be filled.
Fig. 10.
Figure 11 is a case where the coupling constant, a, is 0.5%. That means that 99.5% of the signal comes from the main dynamics; the rest is "small". The result is a dramatic difference in the distribution, which is observable in the collective behaviour and not discernible in the middle time scales. That means it is possible to skip the middle regime and affect the long-time "averaged" regime without being fully noticed. I have coined the term the virtual butterfly effect to describe the effects of neglected dynamics on a chaotic system: no butterflies needed.
5. M. Davison, C. Essex, and J. Shiner (2003) "Global Predictability and Chaos: Epidemiological Dynamics in Coupled Populations," Open Sys. & Information Dyn., 10:1-10.
Fig. 11.
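The driven/undriven comparison can be sketched numerically. The coupling form below is an assumption made for illustration (the cited Davison-Essex-Shiner paper should be consulted for the form actually used); only µ = 3.6787 and the 0.5% coupling strength come from the text, and the function and variable names are illustrative.

```python
import numpy as np

mu = 3.6787
f = lambda z: mu * z * (1.0 - z)

def invariant_histogram(a, n=200_000, bins=50, burn=1_000, seed=1):
    """Histogram of the driven map x' = (1 - a) f(x) + a f(y), with y' = f(y)."""
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(0.1, 0.9, size=2)
    xs = np.empty(n)
    for i in range(n):
        x, y = (1.0 - a) * f(x) + a * f(y), f(y)
        xs[i] = x
    counts, _ = np.histogram(xs[burn:], bins=bins, range=(0.0, 1.0), density=True)
    return counts

undriven = invariant_histogram(a=0.0)
driven = invariant_histogram(a=0.005)      # 0.5% coupling, as in the text
print("empty bins (undriven, driven):", int((undriven == 0).sum()), int((driven == 0).sum()))
print("largest change in bin density:", float(np.abs(undriven - driven).max()))
```

Comparing the two histograms gives a crude analogue of Figures 10 and 11: a small, systematic intrusion alters the collective distribution even though individual iterates look much the same.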
Crypto sensitivity is a possible pathway to explain how the Sun, as an important example, could influence climate without large relative energy fluctuations. It could explain how small external influences can accrue through the middle, undetected, showing up clearly despite other larger drivers, without contradicting their influence. This completely novel pathway only becomes possible if we do not accept the unproved and perhaps unprovable "brick-like" definition of climate imposed through contemporary climate models.
SESSION 10 CLIMATE AND HEALTH FOCUS: WINDBLOWN DUST
MEDICAL GEOLOGY: DUST EXPOSURE AND POTENTIAL HEALTH RISKS IN THE MIDDLE EAST
MARK B. LYLES
Research Program Integration and Mission Development, Bureau of Medicine and Surgery, Washington, DC, USA

In the Middle East, dust and sand storms are a persistent problem and can deliver significant amounts of micro-particulate exposure via inhalation into the mouth, nasal pharynx, and lungs due to the fine size and abundance of these micro-particulates. The chronic and acute health risks of this dust inhalation have not been well studied, nor has the dust been effectively characterized as to chemical composition, mineral content, or microbial flora. Scientific experiments were therefore designed to study the Kuwaiti and Iraqi dust as to its physical, chemical, and biological characteristics and for its potential to cause adverse health effects. First, dust samples from different locations were collected and processed and exposure data collected. Initial chemical and physical characterization of each sample, including particle size distribution and inorganic analysis, was conducted, followed by characterization of the biologic flora of the dust, including bacteria, fungi and viruses.
Dust can range in both composition and particle size depending on the global location. In the Middle East, dust and sand storms are a persistent problem, especially during the spring and summer months. Desert sand in the Persian Gulf region consists mostly of quartz (SiO2), but the finer dust consists primarily of clays with and without a silicate core and can be respired into the lungs due to the small size of the particle (Richards et al. 1993). The dust particles predominately consist of clays (~50%) and quartz crystals (~25%). The size distribution of airborne particles can range from
severe acute pneumonitis. Following Gulf War I, obstructive bronchitis and bronchiolitis were reported in 86 autopsied casualties from Kuwait, with the reported observation of sand particles in lung parenchyma (Irey, 1994). The physical and chemical properties of the Iraqi dust sample were unique among sand/dust samples normally encountered. These observations included: 1) a significant portion of the sample (10+%) was 20 µm or less in size; 2) these micro-particulates exhibited a charge distribution that prevented them from clumping; and 3) particles below 20 µm in size seemed to contain a crystalline core surrounded by a non-crystalline inorganic coating. The data collected and analysis of samples collected from this study produced the following observations and results:
TSP (Total Suspended Particle Mass, mg/m³), PM10 (10 µm) and below:
= 0.001 mg/m³ (NIDBR Lab, Great Lakes, IL)
= 0.137 mg/m³ (Camp Virginia Clinic, Kuwait - indoors)
= 2.469 mg/m³ (Highest hourly average - 0800)
= 9.114 mg/m³ (Highest TSP reading)
= 2.051 mg/m³ (Highest daily maximum - 18 June @ 1300)
NOTE: >9.999 mg/m³ readings recorded during peak dust storms.

Count (Total Number of Suspended PM10 Particles/m³)
Size Range = 0.5 µm to 10 µm:
= 1,314,906 (Navy Lab, Great Lakes, IL - indoors)
= 12,290,917 (Camp Virginia Clinic, Kuwait - indoors)
= 107,261,167 (Highest average hourly maximum @ 1300) (SD = 54,959,015)
= 588,633,693 (Highest daily maximum - 18 June @ 1300)
= 127,643,273 (Highest average hourly daily maximum - 13 June) (SD = 34,311,341)
NOTE: Readings recorded during peak dust storms were in excess of 706 million particles/m³.

Size Range = 5.0 µm to 10 µm:
= 36,515 (Navy Lab, Great Lakes, IL - indoors)
= 507,824 (Camp Virginia Clinic, Kuwait - indoors)
= 6,884,417 (Highest average hourly maximum @ 1300) (SD = 4,142,586)
= 44,571,347 (Highest hourly maximum - 18 June @ 1300)
= 5,244,651 (Highest average daily maximum - 13 June) (SD = 3,632,501)

Fig. 1: Table of dust particle exposure at Camp Buehring, Kuwait, over a 12-day period.
• At PM10 (particles with an aerodynamic diameter of 10 microns), the highest hourly average each day was 2.469 mg/m³, which occurred at 0800. Maximum exposures during dust storms exceeded 10.000 mg/m³ (Figure 1).
• The daily daytime PM10 average for 12 consecutive hours, 0700-1900, was ~0.900 mg/m³ (n = 12).
• At peak exposures, particle counts (0.5 to 10 µm range) exceeded 7 × 10⁸ particles/m³.
Fig. 2: Bioavailable Elements in Dust Particles from Camp Buehring, Kuwait.
• A total of 54 elements were screened for, with 37 different elements identified, of which 15 are bioactive metals, including uranium. Of these, the ones of greatest concern are: Arsenic (10 ppm), Chromium (52 ppm), Lead (138 ppm), Nickel (564 ppm), Cobalt (10 ppm), Strontium (2700 ppm), Tin (8 ppm), Vanadium (49 ppm), Zinc (206 ppm), Manganese (352 ppm), Barium (463 ppm), and Aluminum (7521 ppm) (Figure 2).
• The ratio of Chromium III to Chromium VI is unknown (40-120 ppm = 0.04-0.12 µg/m³ per every mg/m³ of TSP mass at PM10; the unit conversion is worked out after this list). The U.S. Maximum Exposure Guideline (MEG) for Cr(III) is 12 µg/m³ and 0.068 µg/m³ for Cr(VI).
• Microbiological analysis of these same samples identified 147+ different microbial isolates (six different Genera by 16S DNA analysis). Of these, ~30% are human pathogens, 13 are alpha and/or beta hemolytic species, and several were found to have antibiotic resistance (Figure 3).
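The parenthetical conversion in the chromium item follows directly from the definition of ppm by mass (micrograms of metal per gram of dust) multiplied by the airborne dust loading; written out for the 40-120 ppm range quoted above:

```latex
\left( 40\ \mathrm{to}\ 120\ \frac{\mu\mathrm{g\ Cr}}{\mathrm{g\ dust}} \right)
\times
\left( 10^{-3}\ \frac{\mathrm{g\ dust}}{\mathrm{m^{3}\ air}} \right)
\;=\; 0.04\ \mathrm{to}\ 0.12\ \frac{\mu\mathrm{g\ Cr}}{\mathrm{m^{3}\ air}} .
```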
Best ID thus far / Comment
Neisseria meningitidis / meningitis
Staphylococcus aureus / cystic fibrosis
Bacillus circulans / gastro-enteritis
Pantoea agglomerans / septic arthritis
Pseudomonas al!1ici / -
Ralstonia paucula / opportunist: septicemia, peritonitis, abscesses
Staphylococcus pasteuri / various infections
Arthrobacter crystallopoietes / -
Pseudomonas balearica / cystic fibrosis
Paenibacillus thiaminolyticus / bacteremia
Bacillus vedderi / obligate alkaliphile
Bacillus subtilis / -
Pantoea agglomerans / -
Pseudomonas pseudoalcaligenes / strains reported to carry metallo-β-lactamase
Cryptococcus albidus / septicemia and meningitis
Bacillus clausii / oral bacteriotherapy
Kurthia gibsonii / diarrhea
Bacillus firmus / alkaliphile; bread spoilage
Staphylococcus kloosii / various infections
Bacillus mojavensis / biosurfactant
Bacillus licheniformis / food poisoning
Pseudomonas oryzihabitans / Hickman catheter biofilm

Fig. 3: Representative bacteria isolated from dust particles. Colored columns indicate hemolytic characteristics.
• Isolates of Acinetobacter spp. and Neisseria spp. have been found inhabiting the dust.
• Approximately 27 fungal isolates have been identified, consisting of 7 different Genera.
• Sterilization experiments have shown an exceptional ability for these organisms to survive.
• Early animal studies have suggested long-term inflammation from pulmonary exposure, with mild to moderate eosinophilia.
Airborne particulates are recognized as the number one health risk for troops deployed to the current theaters of operations for OIF and OEF. The DoD currently employs the EPA standard of the respirable fraction (<10 µm) to estimate the risk of exposure to airborne particulates. The U.S. Army Center for Health Promotion and Preventive Medicine (USACHPPM) reported that PM10 exceeded the 1-year MEG (70 µg/m³) for PM10 (particulate matter <10 µm) over 97+% of the time.
The scientific studies conducted supplied substantial data as to the composition of dust particles within the Kuwaiti/Iraqi sample area, as well as environmental conditions and exposure potential for troops operating under these conditions. For example, during June 2004, environmental conditions were monitored from 0700 till 1900 for 12 days at Camp Buehring, Udairi, Kuwait, on the Iraqi/Kuwaiti border. The average daytime temperature was 106.2°F with maximums in excess of 135°F. However, surface sand temperatures elevated to 155-160°F due to infrared (IR) radiation absorption. The average UV exposure index was 10 (>360 mW/m²) and is one of the highest in the world. Average humidity was <10% but was probably elevated over that previously reported for that region (<3%) because of the presence of humans and other sources of water. Average daily wind velocity was approximately 14 MPH (1267 ft/min) with maximum wind gusts in excess of 33 MPH (2800 ft/min). This is significant because it physically relates not only to the amount of total suspended particle mass in the breathable air at standing height (~2 m vertical), but also to the size fraction of particles being suspended (>150 µm). Airborne particle dynamics determine the size of the particle airborne and the height off the deck based primarily on wind velocity and 'roughness' (drag) of the particle. Simply put, at any given wind velocity, the lower one gets to the ground the greater the mass of particles one is subjected to, with an increase in particle size. Particle sizes <44 µm constituted from 5% to 11% of the total mass for samples collected outdoors versus indoors (tent dust) respectively. Approximately 98% of the indoor dust samples were
ppm) of bioavailable Aluminum and reactive Iron, with another 1% by weight a combination of trace and heavy metals. This finding is of specific concern due to the recent implication of Aluminum in Multiple Sclerosis and other neurological diseases. Microbial analysis reveals a significant biodiversity of bacteria, fungi, and viruses, of which ~30% are pathogens. The level of total suspended particle mass, along with the environmental and physiological conditions present, constitutes an excessive exposure to micro-particulates, including PM 2.5, and the potential for long-term adverse health effects. These data suggest that the level of dust exposure, coupled with the microbial and bioavailable metal content, could constitute a significant health risk. Taken together with other existing work, they suggest that further immediate research is warranted to provide insight into potential human health risks, both acute and chronic.
CLIMATE CHANGE AND CLIMATE SYSTEMS INFLUENCE AND CONTROL THE ATMOSPHERIC DISPERSION OF DESERT DUST: IMPLICATIONS FOR HUMAN HEALTH
DALE GRIFFIN
United States Geological Survey, Florida Integrated Science Center, Tallahassee, Florida, USA

There is an emerging interest in how dust storms influence human and ecosystem health on a global scale. Recent desert-dust health investigations have focused on the toxicity of source soils and dust collected from dust storms, the microbial content of source soils and dust (bacterial, fungal, and viral pathogens), the presence of anthropogenic pollutants in source soils and dust (industrial and agricultural emissions), and short- and long-term human-health effects following dust-storm exposure (epidemiological studies). Health effects were widely reported in the early 20th Century during the American Dust Bowl, although etiology was poorly understood. Figure 1 is an image showing a number of different dust storms impacting the community of Stratford, Texas, USA, in 1935. Research has established that dust-storm exposure can result in human disease (respiratory stress, coccidioidomycosis, silicosis, meningitis), although the etiology of diseases such as 'desert-dust pneumonia,' commonly reported in current and historical records, remains undefined. As with most diseases, the degree and length of agent exposure determine health risk. Current estimates of the quantity of desert dust that moves through the atmosphere each year range from 0.5 to 5 billion metric tons. Dust storms are common occurrences and originate in the deserts and arid regions of Earth, which include the Sahara and Sahel of North Africa, the Etosha and Makgadikgadi basins of southern Africa, the Gobi, Takla-makan, and Badain Jaran deserts of Asia, the Great Basin in North America, the Chihuahuan and Sonoran of Central and North America, the Patagonia and Atacama of South America, the Great Victoria and Great Sandy of Australia, and the Syrian of the Middle East. The primary sources of dust to the atmosphere each year are the deserts of North Africa (~50-75% of the total annual atmospheric-dust load) and Asia, and clouds of dust emanating from these deserts are capable of trans-oceanic and global dispersion. On long time scales, the global dispersion of desert dust through the atmosphere is greatly influenced by changes in global temperature. Temporal analyses of data obtained from both Arctic and Antarctic ice cores have demonstrated that enhanced dust dispersion occurs during glacial periods (~25-fold increase versus interglacial periods). Currently exposed sea and lake sediments are the result of a combination of climate change and anthropogenic activity and are prime sources of dust. The surface area of the Aral Sea, the fourth largest lake in the world in the early 1960s at ~68,000 km², had decreased to ~33,800 km² by 1992. The decrease in lake size has been attributed to the diversion of source waters for agricultural purposes. Dust clouds originating from storm activity over the ~27,000 km² of exposed seabed are common. The surface area of Lake Chad in 1963 was ~25,000 km² and, due to regional drought and the diversion of source waters for agricultural purposes, is now ~1,350 km² (Figure 2). Fifty percent of the lake surface-area decline has been attributed to source-water diversion. Owens Lake had a surface area of approximately 280 km² in 1913, the year in which the City of Los Angeles tapped it as a
source of drinking water. At that time, Owens Lake was also being utilized as a source of irrigation waters in the San Fernando Valley. By 1926, all that remained was a dry lakebed, and the exposed sediments became the primary source of dust in North America (estimates as high as 8 million tons in some years). Climate systems or events are known to influence global short-time-scale dust-dispersion occurrence and transport routes. Dust storms originating from the Sahara/Sahel occur year round and account for ~50 to 75% of all desert dust that moves through our atmosphere. Dust generation in North Africa is influenced by a pressure cell-system flux over the North Atlantic Ocean known as the North Atlantic Oscillation (NAO). During years when the NAO is in a more northerly position, North Africa receives less precipitation, resulting in increased atmospheric-dust transport. The NAO has been predominantly in a northern position since the late 1960s, which has resulted in a reduction in precipitation and an increase in dust transport out of North Africa. During El Niño years, dust transport out of North Africa is further enhanced. El Niño Southern Oscillation (ENSO) years are characterized by warm sea-surface temperature (SST) anomalies in the eastern tropical Pacific, and ENSO La Niña years are characterized by cool SST anomalies in the same region. Latitudinal Saharan/Sahel dust transport across the Atlantic is influenced by seasonal Hadley Cell shifts. During the Northern Hemisphere spring and summer (~April through August), transport is to the northern Caribbean and North America. During the Northern Hemisphere fall and winter (~October through February), transport is to South America. While anthropogenic activity (deforestation, desertification) in the Sahara and Sahel is also believed to influence dust transport, analyses of data collected between 1980 and 1997 have demonstrated year-to-year variation in the overall size of the regions but no longer-term change. Pacific Decadal Oscillation (PDO) events, which typically last 20 to 30 years, influence dust transport in Asia and North America. The PDO is El Niño/La Niña-like and is characterized by SST anomalies above 20 degrees north. During years of positive-phase PDO (the eastern Pacific warms), there is less dust transport out of Asia and across the Pacific to North America than during negative-phase (eastern Pacific cools) years. Positive-phase PDO reduces the movement of northern fronts into and across the deserts of Asia. During the American Dust Bowl years, the PDO was in a positive phase. The increase in large clouds of Asian dust moving across the Pacific to North America, and in several cases circumnavigating the Northern Hemisphere, has coincided with recent negative-phase years. El Niño and La Niña events have also been linked to dust transport out of Asia, with transport paths out of Asia and across the Pacific occurring at ~45 degrees during El Niño years and ~40 degrees during La Niña years. The subtropical Indian Ocean High influences the frequency and severity of dust storms in Australia. When the system shifts west toward Africa, cold fronts can move into the continent, causing an increase in dust-storm activity from September to February. A system shift west can also result in an increase in dust-storm activity from December to May, although dust-storm occurrence during this period is predominately a result of drought.
Dust-storm occurrence is also enhanced during El Niño events, which in Australia are typically severe drought years. The NAO, PDO, ENSO, and subtropical Indian Ocean High are examples of how Earth's various climate systems influence dust transmission and transport routes over short time scales.
Constituents of desert dust both from source regions (pathogenic microorganisms, organic and inorganic toxins) and those scavenged through atmospheric transport (i.e., anthropogenic emissions) are known to directly impact human and ecosystem health. Anthropogenic influences on dust transport include deforestation, harmful use of topsoil for agriculture, as observed during the American Dust Bowl period and in the current Asian deserts (between 1975 and 1987, the desertification rate in China was ~2,100 km² per year), and the creation of dry sea- and lakebeds through the diversion of source waters. Although the U.S. Soil Conservation Act of 1935 was passed to prevent harmful agricultural practices believed to have contributed to the severity and number of dust storms observed during that period, the end of the American Dust Bowl has been attributed to the end of that era's regional drought. Anthropogenic activities that contribute to dust transmission are obvious, but the extent to which these activities contribute to the natural dust budget is not. Long-term analyses of ice cores have demonstrated that climate change is the primary factor that controls dust transmission. During glacial periods, dust transport was much greater than that observed today. Historical transmission trends are clear. If we are at the beginning of a long-term global-warming phase (and anthropogenic influences are limited), then dust transmission and the resulting dust-associated risk to human and ecosystem health should decrease. If, however, we are not in a sustained warming phase and/or anthropogenic influences are or become relevant, then transmission and health risk should increase.
FIGURES
Fig. 1: The American Dust Bowl. A series of images showing a dust storm impacting Stratford, Texas, USA, April and May of 1935. Photo courtesy of the National Oceanic and Atmospheric Administration, George E. Marsh Album. Photo ID: theb1367, NOAA National Weather Service (NWS) Collection.
Fig. 2: Lake Chad time series, 1963-2000. The surface area of Lake Chad in 1963 was ~25,000 km² and, due to regional drought and the diversion of source waters for agricultural purposes, is now ~1,350 km². Image courtesy of the National Aeronautics and Space Administration, http://landsat.gsfc.nasa.gov/images/archive/e0013.html.
SESSION 11 SCIENCE & TECHNOLOGY FOCUS: WMD PROLIFERATION-ENERGY OF THE FUTURE-MATHEMATICS & DEMOCRACY
REMOTE DETECTION WITH PARTICLE BEAMS
GREGORY H. CANAVAN
Los Alamos National Laboratory, Physics Division, Los Alamos, New Mexico, USA

This note presents analytic estimates of the performance of proton beams in remote surveillance for nuclear materials. It partitions the analysis into the eight steps used by a companion note [1]: 1. Air scattering, 2. Neutron production in the ship and cargo, 3. Target detection probability, 4. Signal produced by target, 5. Attenuation of signal by ship and cargo, 6. Attenuation of signal by air, 7. Geometric dilution, and 8. Detector efficiency.
1. Air scattering decreases energy and increases beam divergence. The former can be treated by the Bethe formula; the latter by the Fermi formula for multiple scattering [2]. An approximate integration that combines the two gives the results in Figure 1 for beam radius RB as a function of range to target R and beam energy E. RB is a few meters at distances of ~0.5 km, but grows to 10s of meters by 1 km and 20-40 m by 2 km. RB is the one-sigma radius of the beam; thus, a fraction f ~ 1/4 of the P protons are within the beam, so the average flux within the beam on the target vessel is Fp ≈ fP/(πRB²).
2. Neutron production is due to nuclear collisions of beam protons with ship materials, which reduce the proton flux at interior distance z by dFp/dz = -μFp and convert each proton into ε ≈ 6.4 neutrons in this energy range, where μ ≈ 0.053/cm. Thus, at a distance z into the target the proton flux is Fp = F₀e^(-μz) and the neutron flux is FN = ε(F₀ - Fp), as shown in Figure 2. For penetration of solid-density steel the proton and neutron fluxes are about equal 3 cm in. By 10 cm about a third of the protons are converted and the neutrons have reached about half of their ultimate value. By 30 cm conversion is essentially complete. Large vessels have hulls 5-10 cm thick, so the conversion is essentially complete within the hull. Smaller vessels have thinner hulls, so conversion would be incomplete when the mix of protons and neutrons entered the interior. However, cargoes typically have average densities ~5% that of solid steel, so by the time the protons passed through 2 meters of cargo they would pass through the equivalent of 10 cm of solid steel, which would complete the conversion. Thus, it is a misnomer to describe this process as proton remote interrogation; it is essentially neutron interrogation in which protons deliver the neutrons to the target in a convenient, focused fashion.
3. Target detection probability is largely a matter of estimating how many neutrons diffuse in to the location of the fissile material, which can be estimated analytically [3]. Figure 3 shows the results of diffusion into steel of 5% solid density from a unit source of neutrons at distance z = 0. The bottom two curves are neutron densities for initial energies of 1.5 and 10 MeV and final energies of 0.5 MeV, which approximately span the energies expected from conversion and disposition. The curve for 1.5 MeV is slightly higher than that for 10 MeV at distances z < 500 cm.
[1] J. Snyder et al., "Finding the Right Neutron in a Haystack," IDA report, 15 May 2009.
[2] Fermi, Nuclear Physics, multiple scattering, pp. 28-30.
[3] Fermi, Nuclear Physics, diffusion, pp. 191-3.
There is a crossover at about 5 m, after which the 10 MeV curve is higher, by a factor of ~20 by 20 m. The neutron source is symmetric about z = 0, so both neutron density curves have zero derivatives there. At large z their spatial dependence is dominated by F ~ exp(-z/a), where a = 190 cm is the diffusion length in 5% density Fe, which has scattering length Ls = 90 cm and absorption length LA = 1200 cm. The top two curves in Figure 3 are the corresponding fluxes. Because the slope of N is zero at z = 0, the diffusion flux, F = -D dN/dz, goes to zero at small z. Thus, the 1.5 MeV flux has a maximum at ~5 m. The 10 MeV flux has a maximum at ~10 m because its higher-energy neutrons thermalize further from the source. The fluxes cross over at ~9 m. The 1.5 MeV flux is larger at smaller distances; the 10 MeV flux is larger at larger distances. At 20 m the 10 MeV flux is a factor of ~15 larger than the 1.5 MeV flux. At 20 m the 10 MeV flux is about 0.0015/cm²-s for a 1/cm²-s unit neutron source at z = 0. For a target cross section of AT ≈ 100 cm², the probability of detecting a target is ≈ 100 cm² × 1.5 × 10⁻³/cm²-s × 1 s ≈ 0.15 for a unit source of 10 MeV neutrons. A companion report derives an algebraic estimate of the probability of detection per neutron as ≈ AT/(√2 Ls L), where L is a characteristic length of the vessel. For Ls = 150 cm, L = 50 m gives a detection probability of ≈ 100 cm²/(1.4 × 150 cm × 5,000 cm) = 9.1 × 10⁻⁵ [4]. Figure 4 compares that estimate with those above. However, Figure 3 is computed for a unit flux, and the above result from the companion report is for a single neutron, so Figure 4 assumes a beam diameter of 10 m for normalization and comparison. Producing a unit flux over a 10 m diameter beam would require 1 n/cm² × 10⁶ cm² = 10⁶ neutrons, so dividing the curves of Figure 3 by a factor of 10⁶ normalizes them to a single neutron. For presentation, both curves are also multiplied by AT ≈ 100 cm². The comparison curves differ from the diffusion calculation in both magnitude and scaling with z. Most noticeably, the comparison-curve probabilities are higher by three to four orders of magnitude at all distance scales. They also have a different behavior at small distances, turning up where the diffusion fluxes turn down towards zero. And, most important, they are relatively flat at large z, where the diffusion fluxes fall strongly with exp(-z/a). The first and last discrepancies can be explained by the comparison calculations ignoring absorption (i.e., setting a = 0, as acknowledged). The second reflects the fact that the comparison calculations deal with integral quantities, not detailed spatial or energy dependence. Since the target area is AT ≈ 100 cm² and the beam about 10⁶ cm², the target occupies ≈ 100 cm²/10⁶ cm² of it, so there would be a ≈ 10⁻⁴ probability of a neutron finding the target if none were lost. As the 10 MeV diffusion flux actually gives a detection probability ≈ 10⁻⁶ out to 15 m, that means only about 1% of neutrons survive the trip to the target, in accord with the predictions of Figure 8. By 20 m the fraction drops to 0.1%, so it would take 1,000 incident neutrons for each one reaching the target. For deeper targets the number increases as exp(z/a), quickly reaching demanding levels. Variation with depth is not treated in the companion calculations.
[4] J. Snyder et al., "Finding the Right Neutron in a Haystack," Figure 15.
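For readers who want to reproduce the companion-report estimate quoted above, the following minimal Python sketch (not part of the original note; it simply plugs in the values quoted in the text) evaluates P ≈ AT/(√2·Ls·L) and the exp(-z/a) depth penalty that distinguishes the diffusion result:

    import math

    A_T = 100.0    # target cross section, cm^2 (value quoted in the text)
    L_s = 150.0    # scattering length, cm (value quoted in the text)
    L = 5000.0     # characteristic vessel length, cm (50 m)
    a = 190.0      # diffusion length in 5%-density Fe, cm (value quoted in the text)

    # Companion-report estimate of the detection probability per neutron (absorption neglected).
    p_companion = A_T / (math.sqrt(2.0) * L_s * L)
    print(f"companion estimate: {p_companion:.1e}")   # ~9e-5, close to the 9.1e-5 quoted

    # Exponential depth penalty exp(-z/a) from the diffusion picture.
    for z_m in (5, 10, 15, 20):
        penalty = math.exp(-z_m * 100.0 / a)
        print(f"z = {z_m:2d} m: exp(-z/a) = {penalty:.1e}")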
4. Signal produced by the target is the signal from the neutrons that reach the target, multiplied by any fission within it. This multiplication depends primarily on the number of delayed neutrons per fission, the number of neutrons per fission, and the one-generation multiplication factor, k. The first two parameters do not vary strongly with fissionable material. The third varies with the composition and configuration of the material, which are generally not known. Thus, neutron multiplication can be represented, to the level of approximation here, by M ≈ [k/(1 - k)]²/250, which is shown in Figure 5. The curve has three main segments. Multiplication is very low at small k. It grows exponentially from k = 0.2 to 0.8. Thereafter it increases rapidly. The small multiplications are too small to be of concern and the large ones are unlikely for transport, so the calculations below concentrate on the region 0.8 < k < 0.95 and the corresponding multiplications 0.1 < M < 1.
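The multiplication curve of Figure 5 is easy to tabulate; this short Python sketch (illustrative only, using the approximation M ≈ [k/(1-k)]²/250 exactly as quoted above) reproduces the behaviour just described:

    def multiplication(k):
        """Approximate neutron multiplication M = [k/(1-k)]^2 / 250, as quoted in the text."""
        return (k / (1.0 - k)) ** 2 / 250.0

    for k in (0.2, 0.5, 0.8, 0.9, 0.95):
        print(f"k = {k:.2f}  ->  M = {multiplication(k):.3g}")
    # k = 0.80 gives M ~ 0.064 and k = 0.95 gives M ~ 1.4, bracketing the 0.1 < M < 1 range used below.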
5. Attenuation of signal by ship and cargo, as the fission neutrons diffuse from the target to the surface of the vessel, differs from the inward diffusion of neutrons in two main respects: the delayed neutrons produced in the target have lower energies than those generated by incident protons, and they experience more geometric dilution as they spread away from what is effectively a point source. Figure 6 shows their combined effects, in which the fluxes are related to those of Figure 3, but are redrawn for the density of solid iron for later comparison. Thermalization of fast neutrons takes place ~25 cm from the target for 1.5 MeV neutrons and 40 cm for 10 MeV neutrons, so geometric dilution is treated beyond those points. For solid-density Fe the 1.5 MeV flux peaks at about 0.02/cm²-s at 25 cm and then falls as exp(-z/a) to ~0.001/cm²-s by 100 cm. The 10 MeV flux peaks at about 0.01/cm²-s at 40 cm and falls to ~0.001/cm²-s at 100 cm. The fluxes have different exponents at large z, but that for 10 MeV neutrons has not reached its asymptote by 100 cm. The companion calculation provides estimates for these conditions [5]. For 1.5 MeV it gives an attenuation of 3 × 10⁻⁵ for 1 m of solid-density iron and 0.179 for 0.1 m of iron. For comparison, Figure 6 gives 3 × 10⁻⁴ for 1 m of iron, which is adequate agreement for estimates of this type. Figure 6 gives ~0.02/cm²-s for 10 cm Fe, which is a factor 0.18/0.02 ≈ 9 smaller than the companion estimate. The flux cannot increase faster than exp(z/a) as z decreases, so interpolating the 1.5 MeV curve from large z down to z = 10 cm gives a value of ~0.06, so the disagreement is apparently only a factor of 3. For 10 MeV the companion calculation gives an attenuation of 0.036 for 1 m of iron and 0.5 for 0.1 m. In contrast, Figure 6 gives 0.001 for 1 m of iron, which is a ratio of 36. The reason is not clear, in that at large z the diffusion solution is in agreement with the limiting analytic solution, which is essentially exact and with which the diffusion solution agrees there [6]. For 10 cm Fe, Figure 6 gives ~0.003/cm²-s, which is a factor 0.5/0.003 ≈ 167 smaller than the companion estimate. However, interpolating the 10 MeV curve from large z down to z = 10 cm gives a value of ~0.03, which is a factor of 17. Moreover, the companion calculation's decrease in attenuation from 0.5 to 0.036 over 10 to 100 cm indicates a diffusion length of a ≈ 35 cm, which is about a factor of 2.6 larger than that from the Ls = 7.5 cm and LA = 67 cm values used elsewhere. Thus, it would appear that the companion calculation uses a different boundary condition than the planar source at z = 0 used as the basis here.
[5] J. Snyder et al., "Finding the Right Neutron in a Haystack," Figures 15 and 20.
[6] Fermi, Nuclear Physics, diffusion, p. 193.
6. Attenuation of signal by air can be estimated by the same analyses used for analytic diffusion calculations in metal, with appropriate changes of material parameters. Figure 7 uses Ls ≈ 200 m and LA ≈ 330 m, which give a diffusion length of a ≈ 260 m,
which is about 100 times larger than the 188 cm of 5% Fe and 3,360 times the 9.9 cm of solid-density Fe. As scattering and absorption lengths increase, the ranges over which densities and fluxes vary increase proportionally. The curves for 1.5 and 10 MeV show the variation with energies of interest. The curves cross at about 1.2 km. At shorter distances that for 1.5 MeV is about five-fold higher; at 2 km the 10 MeV curve is about an order of magnitude higher. The 1.5 MeV curve has an e-folding length of 110 m. That is ~40% of the diffusion length in air, which reflects a combination of absorption and geometric spreading. The curve for 10 MeV has a lower slope, which leads to an e-folding length of 290 m and produces the crossover noted above. Figure 7 can be compared with the results of comparison calculations. For 10 MeV they give a constant e-fold length of 440 m, which is approximately the absorption length. The slope is about the same for 5 MeV, but the survival rate versus range falls more rapidly for ranges over 1 km for 3 and 2 MeV. The main differences with the above are that the companion results have survival rates of 0.1 at 500 m, while Figure 7 only has rates of 0.01 (1.5 MeV) to 0.03 (10 MeV). This is related to the fact, noted above, that the fluxes from analytic diffusion calculations fall at small distances because the neutron densities have zero slope there. The survival probabilities are otherwise in reasonable agreement. Figure 8 combines the calculations of diffusion in and out of solid Fe from Figures 3 and 6 with the calculations of Figure 7 for air to give the overall flux at a detector at 2 km. The top curve combines the 10⁶ rescaling to produce a total flux of 1 incident proton with the ~100-fold loss in diffusing in through 1 m of solid Fe. The curve a factor of 100-1,000 in magnitude below it reflects the absorption and geometric dilution in diffusing out from the 0.1 m target to the 1 m exterior. The bottom curve reflects the 0.01-0.0001 variation in penetration of air in the range from 0.5 to 2.5 km. Together, penetration of these layers gives fluxes from a single proton that vary from ~10⁻¹⁶ to 10⁻¹⁸/cm²-s. Figure 9 uses this analytic combination to study the attenuation of fission-signal neutrons from 1 m inside Fe after penetrating 2 km of air. The penetration is shown for three energy spectra at the target: a flat spectrum, a fission spectrum, and the spectrum of delayed neutrons from U235 [7]. The top curve is the flat spectrum, which rises to ~0.12 by 3 MeV and stays at that level for higher energies. The second curve is the fission spectrum, which rises to a maximum of ~0.04 by 2-3 MeV and then falls by an order of magnitude by 10 MeV, as the improved penetration at higher energies is offset by the exponentially smaller tail of their energy distribution. The third curve is for the delayed spectrum, which is largest below 1 MeV but falls rapidly because the delayed U235 spectrum does not extend past 2 MeV. The scaling of the curves for flat and fission spectra in Figure 9 is in good agreement with that of companion calculations in shape, magnitude, and placement of maxima [8]. However, the companion calculations do not present calculations for delayed neutrons, arguably the most useful signature, so the comparisons of signal cuts at various energies, though favorable, are less useful.
[7] C. Morris, Future Plans, Los Alamos, 18 April 2009.
[8] J. Snyder, "Finding the Right Neutron in a Haystack," Figs. 23-5.
7. Geometric dilution is a major factor in determining overall system performance. The dilution involved within the Fe case is discussed and included above. It is also included in the diffusion of signal through the air to the detector, although it is useful to review the magnitude of the effect. That can be done by studying two limits: a large detector on the transmitting platform and small detectors on remote sensing platforms. The large detectors studied by the comparison calculation have areas of about 65 m². Being surface mounted, their size and weight is not an issue. Current mobile detectors have sizes in the range of 1 m². The fluxes estimated above can be multiplied by these areas to determine the signal per proton.
8. Detector efficiency. Both large and small detectors are improving rapidly in weight, power, and efficiency, so it is appropriate to assume unit efficiency, as is done in the companion calculations.
9. Signal and background. The fluxes above can be multiplied by detector efficiency and area to determine the signal per proton. In doing so it is also convenient to multiply in the parameters proton survival f, neutron conversion efficiency ε, and multiplication M, on which the signal scales directly. The parameters used for comparison are f = 0.1, ε = 6.3, M = 0.3, and AT = 100 cm². Thus, the product of the parameters is ≈ 20. They are slowly varying relative to the exponential dependence of the fluxes. The product of these parameters and the fluxes indicated above gives the value of the normalized flux at the detector for a single incident proton. The product of that normalized flux and the detector area gives the signal for a detector of that size. Figure 10 shows the signals for 1 and 65 m² detectors. The top curve, for 65 m², extends from ≈ 10⁻¹² at 2 km to 10⁻¹⁰ at z = 0.25 km, although a detector that large might not be deployed that close to the vessel. The bottom curve, for 1 m², extends from ≈ 3 × 10⁻¹⁴ at 2 km to 2 × 10⁻¹² at z = 0.25 km. The carrier for a detector that small could be deployed that close to the vessel, and the payload and volume of current unmanned vehicles could well accommodate it. The dominant background noise appears to be cosmic rays, which produce ≈ 50 n/cm²-s in this energy range. Assuming an integration time > 55 s to screen the delayed-neutron signal out of the noise, and accounting for the ≈ 3.8% of delayed neutrons with lifetimes longer than that, gives a required signal of about 50,000 neutrons. Dividing that number of neutrons required by the neutrons per proton from Figure 10 gives the protons per pulse needed for statistically significant detection, which is shown as the top pair of curves in Figure 11 for detector areas of 1 and 65 m². The top curve, for 1 m², increases from 2 × 10¹⁶ to 2 × 10¹⁸ protons required as the distance to the vessel increases from 0.25 to 2.25 km. The curve below it, for 65 m², lies a factor of 65 lower. The middle pair of curves are the corresponding fluxes for the 10 m beam radius used above. That for 1 m² increases from ≈ 10¹⁰ to 10¹² as the distance increases from 0.25 to 2.25 km. The curve for 65 m² again lies a factor of 65 lower. The bottom curves divide those fluxes by 10⁸/cm², the nominal fluence limit for irradiation with particle beams. The top curve, for 1 m², increases from ≈ 100 to 10,000 times the fluence limit as the distance increases from 0.25 to 2.25 km. The curve for 65 m² again lies a factor of 65 below it, with values that range from ≈ 1 to 100.
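To make the step-9 bookkeeping concrete, the sketch below (mine, not the author's; the per-proton signal values are representative numbers read off Figure 10 at ~2 km, and the ~10⁶ cm² beam spot is the normalization area used earlier) chains the required 50,000-neutron signal into protons per pulse and compares the implied beam fluence with the 10⁸/cm² limit:

    signal_needed = 5.0e4        # ~50,000 detected neutrons (cosmic-ray background argument above)
    beam_area_cm2 = 1.0e6        # ~10 m beam spot used for normalization (assumption)
    fluence_limit = 1.0e8        # nominal particle-beam irradiation limit, protons/cm^2

    # Representative signal (neutrons per incident proton) at ~2 km, read from Fig. 10 (assumed values).
    signal_per_proton = {"65 m^2 detector": 1.0e-12, "1 m^2 detector": 3.0e-14}

    for label, s in signal_per_proton.items():
        protons = signal_needed / s          # protons per pulse for a statistically significant signal
        fluence = protons / beam_area_cm2    # implied fluence on the vessel
        print(f"{label}: {protons:.1e} protons/pulse, "
              f"fluence {fluence:.1e}/cm^2 ({fluence / fluence_limit:.0f}x the limit)")

The outputs, roughly 5 × 10¹⁶ and 2 × 10¹⁸ protons per pulse, are consistent with the order-of-magnitude figures quoted from Figure 11.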
Based on these estimates, it would appear that a large 65 m² sensor could operate at the fluence limit if deployed within about 0.25 km. At longer ranges it
would exceed those limits by several orders of magnitude. Conversely, while a mobile 1 m² sensor would exceed the limit by a factor of ≈ 100 from 0.25 km, by approaching to 25 m it could achieve detection while remaining under the limits.
10. Summary and conclusions. The above analyses indicate that the dominant air scattering and loss mechanisms for particle remote sensing are calculable with reliable and accepted tools. They make it clear that the conversion of proton beams into neutron sources rapidly goes to completion in all but the thinnest targets, which means that proton interrogation is for all purposes executed by neutrons. Diffusion models and limiting approximations to them are simple and credible, apart from uncertainty over the cross sections to be used in them and the structure of the vessels investigated. Multiplication is essentially unknown, in part because it depends on the details of the target and its shielding, which are unlikely to be known in advance. Attenuation of neutron fluxes on the way out is more complicated due to geometry, the spectrum of fission neutrons, and the details of their slowing down during egress. The attenuation by air is large but less uncertain. Detectors and detector technology are better known. The convolution of these effects leads to large but arguably tolerable levels of attenuation of input beams and output signals. That is particularly the case for small, mobile sensors, which can more than compensate for size with proximity to operate reliably while remaining below flux limits. Overall, the estimates used here appear to be of adequate accuracy for decisions. That assessment is strengthened by their agreement with companion calculations.
[Fig. 1: Beam radius RB (m) versus range to target (m) for beam energies of 1500, 2000, 2500, and 3000 MeV.]
[Fig. 2: Neutron production by conversion in the target: proton (p) and neutron (n) fluxes versus distance (cm).]
[Fig. 3: Neutron densities (N10, N1.5) and fluxes (Flux10, Flux1.5) versus distance (cm) for 10 and 1.5 MeV neutrons diffusing into 5%-density steel.]
[Fig. 4: Detection probability versus distance (cm): diffusion fluxes for 10 and 1.5 MeV neutrons compared with the IDA companion estimate.]
[Fig. 5: Multiplication M versus one-generation multiplication factor k.]
[Fig. 6: Flux out of solid Fe versus distance (cm) for 1.5 and 10 MeV neutrons.]
[Fig. 7: Attenuation in air: fluxes for 10 and 1.5 MeV neutrons (Fa10, Fa1.5) versus distance (km).]
[Fig. 8: Flux in and out of 2 m of Fe and air: F into target, F out of vessel, and F in detector versus distance in air (cm).]
[Fig. 9: Penetration versus neutron energy (MeV) for flat, fission, and delayed-neutron spectra.]
[Fig. 10: Signals into 65 m² and 1 m² detectors versus distance (km).]
[Fig. 11: Neutrons, protons, and fluxes required versus distance (km) for 65 m² and 1 m² detectors.]
EXPLORING THE ITALIAN NAVIGATOR'S NEW WORLD: TOWARD FULL-SCALE, LOW-CARBON, CONVENIENTLY-ECONOMIC, AVAILABLE, PROLIFERATION-ROBUST, RENEWABLE ENERGY RESOURCES
LOWELL WOOD
Hoover Institution, Stanford University, Stanford, California, USA

THE "FACTS ON THE GROUND"
Two-thirds of a century after "The Italian Navigator has landed in the New World," nuclear fission-based electricity is ...
• ... manifestly 'economic'
  - Higher capital cost trades off vs. lower O&M costs
  - O&M costs <$0.01/kWe-hr at present for USA "best nukes"
  - Soon will be less expensive than CO2-taxed coal
  - May be so already, in and around France (@ $30/tonne-CO2)
  - And, as "finites" are used up, "infinites" will become relatively cheaper, and nuclear fission IS an "infinite"!
• ... a "full scale" electricity-producing technology
  - ~400 'nukes' produce ~20% of Earth's electricity
  - ~80% of total kWe-hr in France: one leader's "political will"
  - Scalable at will, e.g., more nuclear plants are currently being built in China than in the rest of the world together (recent political decisions)
And, nuclear fission-based electricity is ...
• ... conveniently available, in space and time
  - Sited most anywhere, where electricity is desired
  - Using wet or (at modest additional cost) dry cooling-towers
  - Produces full power at any desired time, and can load-follow!
  - No cost-doublings for back-up generation capabilities
• ... quite proliferation-robust (as much as desired?)
  - Fuel assemblies may be naturally located in-core for century-scale intervals(!) ... in advanced core designs
  - ... during which intervals even longer-lived (Sr90, Cs137) "serious" fission products largely decay toward harmlessness
  - A half-millennium of natural (beta) decay results in less specific radioactivity than as-mined uranium
  - ... and actinide isotopes may 'see' (spectrally-tailored) neutron fluences which render them entirely weapons-useless, e.g., too little Pu239, too much Pu240 ("un-detonate-able")
AND, BY THE WAY ...
• ... nuclear fission-based electricity is ... readily capable of being carbon-negative
  - Comparably carbon-neutral to 'solar' electricity: 14 T/GWe-hr, vs. 18 (hydro), 14 (wind), 622 (gas), 1042 (coal); cf. Michael Wallace, Constellation Energy, et al.
  - Hydrocarbons-from-air CO2 costs are additional
    - Taking C from +4 valence to -4 is energy-intensive
  - CO2 air-extraction energy cost is ~5% of C's fuel-value(!)
    - TΔS of mixing at 300 K relative to ΔH of C-oxidation
  - Total cost estimated at $50-100/ton-CO2-removed
    - CO2 sequestration, processing, ... costs are additional
    - And capital and O&M costs seem likely to dominate the energy costs
  - But it IS a longer-term option for "atmospheric reset"
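As a rough check on the "~5% of C's fuel-value" figure above (this worked estimate is mine, not from the slides, and assumes idealized reversible separation of CO2 at ~400 ppm and 300 K):

\[
W_{\min} \;\approx\; RT\,\ln\!\frac{1}{x_{\mathrm{CO_2}}} \;=\; (8.314\ \mathrm{J\,mol^{-1}K^{-1}})(300\ \mathrm{K})\,\ln(2500) \;\approx\; 19.5\ \mathrm{kJ/mol},
\]
\[
\frac{W_{\min}}{\Delta H_{\mathrm{C+O_2\to CO_2}}} \;\approx\; \frac{19.5\ \mathrm{kJ/mol}}{393.5\ \mathrm{kJ/mol}} \;\approx\; 5\%,
\]

which is consistent with the mixing-entropy argument cited in the slide.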
BUT, BUT ...
• ... we all know that there's only a half-century of U ore left, at present consumption levels, let alone future-expanded ones!
• ... and so the "Nuclear Renaissance" must be short-lived!
• Badly flawed assumptions, so quite mistaken conclusions!
  - e.g., naïvely inelastic uranium supply-demand relationship, etc.
  - Effectively ignores titanic seawater 'reserves' of uranium
• ... premised on "Light Water Reactor" technology and economics
  - Submarine-propulsion reactor technology of a half-century ago
  - "Cream-skimming" fueling; grossly wasteful, ad hoc neutronics
  - Highly-enriched fuel enabled compact, long-lived reactor cores, though it uses ≤0.5% of as-mined uranium; the remaining 99.5% is "depleted" or 'waste'
  - Foisted onto 'unsuspecting' utilities via a "working prototype" shore-side power reactor: Shippingport, 1958
    - Fuel enrichment 'diluted'; core scaled up in dimensions and total power from subs
    - But still exceptionally wasteful: a crucial deficiency, both in neutronics and in uranium usage; e.g., it doesn't even 'breed'!
  - Recognized nearly immediately as "unsustainable": impelled commencement of USAEC's fast-breeder power reactor program
AND SO ...
• With modern materials, datasets, concepts and toolsets, design power reactors that:
  - ... are exceptionally neutron-economical/efficient
    - For, after all, this is the fundamental technical figure-of-merit in a "neutron economy"
    - And contemporary teraFLOPS supercomputers and codes enable highly-optimized, "technically robust" power reactor designs
  - ... and thus which enable all of the actinide isotopes to become potential fuels
    - "Fertile" ones, as well as "fissile" ones: with greatly-improved neutronics, we can now afford to pay the fertile-to-fissile conversion costs (2+ neutrons per fission, vs. 1+ in 'burners')!
    - Rather than the ~0.5% of mined uranium useful in Shippingport/LWR-type reactors!
    - Resulting in a >100X gain in effective nuclear fuel inventories
    - And, far more importantly in practical terms, enabling a >100X greater fuel cost to be feasible, for the same fractional electrical-energy cost impact!!
  - Cf. Teller, Wood, Hyde, Ishikawa and Nuckolls, "Problem-Free Nuclear Power and Climate Change," Proc. Sympos. Planetary Emergencies, 1997.

BASIC NUCLEAR FERTILE-TO-FISSILE BREEDING PHYSICS
[Diagram: the fertile-to-fissile breeding chain. U-238 captures a neutron to form U-239, which beta-decays (23.5 min) to Np-239, which in turn beta-decays (2.35 day) to fissile Pu-239.]
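The chain in the diagram can be written compactly, and the "2+ neutrons per fission" budget cited above can be made explicit. This brief formulation is mine, not from the original slides, and the loss term is schematic:

\[
{}^{238}\mathrm{U} + n \;\rightarrow\; {}^{239}\mathrm{U}
\;\xrightarrow{\beta^-,\;23.5\ \mathrm{min}}\; {}^{239}\mathrm{Np}
\;\xrightarrow{\beta^-,\;2.35\ \mathrm{d}}\; {}^{239}\mathrm{Pu},
\]
\[
\nu \;\gtrsim\; \underbrace{1}_{\text{sustain the chain}} \;+\; \underbrace{1}_{\text{fertile capture (to breed)}} \;+\; \underbrace{\ell}_{\text{leakage and parasitic capture}},
\]

so breakeven breeding demands that appreciably more than two of the \(\nu\) neutrons released per fission survive losses, which is why neutron economy is the figure-of-merit emphasized above.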
THE WAY FORWARD
• With innovative design, these power reactors are enabled to burn exceptionally "low-grade" actinide fuels, e.g., ...
  - ... completely unenriched uranium, and right-out-of-the-mines thorium
  - ... even 'waste' uranium depleted of its U235 content in 'isotopic enrichment' plants: Depleted Uranium (~99.7% U238 + 0.3% U235, vs. 99.3% + 0.7% natural)
    - A few million tonnes of DU are "lying around" at present, and are swiftly accumulating from LWR re-fuelings: ~10 lbs. of DU for each "LWR fuel" lb.
  - ... and even "spent fuel" from LWRs, which actually "re-burns" quite nicely!
    - Meso-term 'disposal' of high-negative-worth materials, while extracting >10X more energy from them than was realized "the first time around"!
• ... and are able to do so for notably long intervals between refuelings
  - e.g., for many full-power-decades, as a 'compact' core of fuel contains so much nuclear energy
  - Since its fuel-load is so efficiently burned: tens of percent fissioned in "traveling-wave breeder reactors" vs. a few percent with LWRs!
  - One single fuel-loading suffices for a plant's entire half-century lifetime?
• ... and thus could deliver centuries of electricity for all of humanity from extant fuel-piles, moreover at per capita electricity consumption levels of present-day Americans!!
  - i.e., an effectively inexhaustible energy supply, with no "Unobtainium" required to create and operate it!
TOWARD A MORE SUSTAINABLE AND SECURE BULK ELECTRICITY SUPPLY
For the World: supplying 80% of world electricity demand at 2008 U.S. per capita rates of consumption
Fuel Source                                           (years)
Stockpiles of depleted uranium, as of 2009            80
Projected stockpiles of depleted uranium in 2100      [value illegible]
Stockpiles of LWR spent fuel, as of 2009              20
Projected stockpiles of LWR spent fuel in 2100        140
Known reserves of uranium                             320
Estimated uranium in phosphorite deposits             1,720
[row label illegible]                                 233,330
BUT WHAT ABOUT "RENEWABLE"?
• After all, "inter-generation equity" concerns in principle persist as long as the Earth is habitable, not just for a few dozen millennia!
  - Sol is estimated to 'last' for another ~10⁹ years!
• Interestingly, the oceans contain ≤10⁶ years of U with "all 10 billion people using electricity like contemporary Americans"!
  - If burned in high-efficiency, TWR-like reactors
  - And it's readily economically feasible to extract
    - Already-demonstrated (e.g., JAEA) technology!
    - Operating at kg scales; marginal costs <$1/gm-U extracted
  - Moreover, production costs << the electricity value!
    - <1%, for present-prototype extraction technology and TWRs
BUT 10⁶ YEARS STILL ISN'T 10⁹ YEARS ...
• Interestingly enough, the Earth's rivers are constantly eroding the Earth's crust and carrying its contents into the ocean!
  - The crust is 0.0003% uranium by weight
  - It contains several times this much thorium
• Thus, the present river-borne flow of uranium into the oceans happens to be energy-equivalent to a full 10 billion people using electricity like Americans ...
  - ... when efficiently burned in modern breeder reactors ...
  - ... and forever replenishes the ~million-year 'reserve' of oceanic uranium ... that's
forever sliding under continents ... therefore constituting a perpetually-renewed nuclear fission energy supply, for everyone with ocean access
  - Yea, for the ~1+ Aeon estimated remaining time of life-on-Earth

"FOR AS LONG AS THE SUN SHALL SHINE AND THE RIVERS SHALL FLOW ..."
• ... the crust of the Earth shall be eroded by the rains and carried by the rivers to the seas ...
  - ~3 × 10¹⁹ gm/year global riverine flow-rate
• ... and an average of ~1 ppb (e.g., ~3 × 10¹⁰ gm/year of U) of actinide isotopes shall be included in such riverine flows ...
  - The Ganges-Brahmaputra carries the current global fission power usage of U: ~20% of ~10 TWe; cf. Vance et al., Earth & Planetary Sci. Lett. 206, 273 (2003)
• ... and thus the U and Th contents of the oceans shall be continually and perpetually renewed ... (Cohen, Am. J. Phys., 1983)
• ... with the oceans serving as a universally-available reservoir of ~5 billion tonnes, ~1 million years of "full planet electricity supply" (~10 TWe), of these energy-rich elements
• ... which can be released whenever and wherever desired, with already-extant TWR-like means: affordable, environmentally sound, safe, forever-renewable energy, in any practical quantity-or-rate
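A back-of-the-envelope check of the riverine-uranium claim (my arithmetic, not from the slides; it assumes complete fission at ~200 MeV per uranium atom and ~40% thermal-to-electric conversion):

\[
3\times10^{19}\ \mathrm{g/yr}\;\times\;10^{-9} \;\approx\; 3\times10^{10}\ \mathrm{g\,U/yr} \;=\; 3\times10^{7}\ \mathrm{kg\,U/yr},
\]
\[
1\ \mathrm{kg\ U\ (fully\ fissioned)} \;\approx\; 80\ \mathrm{TJ_{th}} \;\approx\; 32\ \mathrm{TJ_{e}} \;\approx\; 1\ \mathrm{MW_{e}\!\cdot\!yr},
\]
\[
\Rightarrow\; 3\times10^{7}\ \mathrm{kg/yr} \;\leftrightarrow\; \sim3\times10^{4}\ \mathrm{GW_{e}} \;\approx\; 30\ \mathrm{TW_{e}}\ \text{continuous},
\]

comfortably above the ~10 TWe "full planet electricity supply" figure used in the slides, so the order of magnitude of the claim holds.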
SUMMARY: STATUS
• There are many possible types of reactors beyond the LWRs that are 'fashionable' at present, of which TWRs are exemplary
  - e.g., fast breeder reactors have been operated for over a half-century
• Some types of fast breeders approach the "physics limits" of fuel-use efficiency, without enrichment or reprocessing
  - A consequence of exceptionally low-loss neutronics, e.g., via TWRs
• Uranium fuel sufficient for a century of "full planet electricity supply" via well-designed fast breeder power reactors is now lying around in the "backyards" of isotope enrichment plants, and there'll be a millennium's worth by 2100(!)
• Extraction of kg-quantities of uranium from seawater has been demonstrated, for estimated marginal costs of <$1/gm-U, i.e., <1% of the market value of the electricity-equivalent
  - Specialized polymeric absorber-fabrics, tethered in ocean currents(!)
  - M. Tamada, et al., Japan Atomic Energy Agency
SUMMARY: OUTLOOK
• Innovative power reactor design and 'practical' recovery of uranium from seawater have qualitatively changed the outlook for large-scale nuclear energy supply for everyone
  - TWR-like breeder reactors enable "single-pass" burn-up of all nuclear fuel-types to tens of percent (of theoretical limits)
    - Neither enrichment nor reprocessing is involved: innately high burn-ups!
  - Fuel requirements become >10X smaller, at equivalent power production
    - Drastically-reduced quantities of nuclear waste, and it's far less radioactive
  - Accessing indefinitely-large and sustained supplies of uranium from seawater then becomes eminently practical
    - "Winning" U now is a harvesting industry, not an extractive one!
    - Rainfall-renewed: just as much of it 'presents' next year as this year
    - Value of 'harvested' uranium becomes extraordinarily great
• Electricity from advanced fast breeder power reactors thus appears indefinitely renewable, as well as affordable, full-scale, environmentally benign, safe, conveniently-available power for everyone, indefinitely!
• Fermi's triumph is NOW seen to endure until the End of Days!
SUPPLEMENTAL MATERIAL
[Slide: Rethinking Nuclear Energy as a System. Diagram contrasting the OPEN FUEL CYCLE with an ADVANCED FUEL CYCLE, from uranium ore mines and mills through the thermal reactor.]
[Slide: The Current Nuclear Energy System is Complex and Expensive. Labeled elements include fuel fabrication, reprocessing, and spent fuel storage.]
[Slide: TWRs Make Many Steps Unnecessary. Listed steps: fuel fabrication, reprocessing, spent fuel storage.]
[Slide: A Simpler, More Secure and Economical Nuclear Energy System. Elements include depleted uranium storage, nuclear power generation (with half-century refueling), and a long-term geologic repository (with greatly reduced waste volumes).]
[Slide: A TWR Works Like a Candle.]
Prepared for presentation by Lowell Wood (Hoover Institution, Stanford University, Stanford CA, and Intellectual Ventures LLC and TerraPower LLC, both of Bellevue WA; [email protected]), reporting on work done with Tyler Ellis, Nathan Myhrvold, and Robert Petroski (TerraPower LLC, Bellevue WA, and MIT Nuclear Engineering Dept., Cambridge MA), at the 42nd Session of the Erice International Seminars on Planetary Emergencies, Profs. Antonino Zichichi and T.D. Lee, Chairs, Erice, Italy, 19-24 August 2009, based on studies done in collaboration with Edward Teller, Bill Gates, John Gilleland, Rod Hyde, Muriel Ishikawa, John Nuckolls, Tom Weaver, Charles Whitmer, George Zimmerman, et al. (Hoover Institution, Stanford University, Stanford CA; Lawrence Livermore National Laboratory, Livermore CA; Intellectual Ventures LLC and TerraPower LLC, both of Bellevue WA, USA). Opinions expressed herein are those of the authors only. Institutional affiliations are given only for identification purposes.
THE MATHEMATICS OF DEMOCRACY IN SOUTH ASIA
PROF. K.C. SIVARAMAKRISHNAN
Chairman, Centre for Policy Research, Delhi, India

This presentation will give an overview of the electoral systems followed in the South Asian countries, namely India, Pakistan, Sri Lanka, Bangladesh and Nepal. The focus is on problems such as the discrepancy between votes polled and seats won, candidates winning by a minority of votes, etc. The machinery for the conduct of elections in the different countries is indicated. Alternatives to the 'first past the post' system (proportional representation, list systems, preferential votes, etc.) followed in different South Asian countries are also mentioned.

OVERVIEW: COUNTRIES
(Columns: India / Bangladesh / Sri Lanka / Pakistan / Nepal)
- Population (millions): 1,028 / 153.54 / 19.74 / 164.74 / 28.90 (South Asia total: ~1.4 billion)
- Electorate (millions): 714 / 81.13 / 12.9 / 79.93 / 17.61
- Structure of Parliament: Bicameral, 543 Lok Sabha (House of the People) and 250 Rajya Sabha (Council of States) / Unicameral, 300 / Unicameral, 225 / Bicameral, National Assembly 342 and Senate 100 / Constituent Assembly, 601 (also functioning as Parliament)
- Head of country: President (electoral college) in all five countries
- Executive: Indirectly elected PM / Indirectly elected PM / Appointed PM / Indirectly elected PM / Indirectly elected PM
- Literacy (%): 65 / 43 / 92 / 48 / 45
- HD Index: 127 / 138 / 96 / 142 / 140
THE DIVERSITY
Religions (% of population) and number of main languages:
- Bangladesh: Hindu 16, Muslim 83; main languages: 1
- India: Hindu 81, Muslim 12; main languages: 20
- Nepal: Hindu 81, Muslim 4; main languages: 1
- Pakistan: Muslim 97; main languages: 8
- Sri Lanka: Hindu 7, Muslim 6; main languages: 2
[The original table also breaks out the Christian, Sikh, Buddhist and other shares for each country; those entries are not cleanly legible.]
OVERVIEW: ELECTIONS (Columns: India / Bangladesh / Sri Lanka / Pakistan)
- Elections last held: 2009 / 2008 / 2004 / 2008
- Elections management: Three-member Commission / One-member Commission / Five-member Commission / Five-member Commission
- Number of parties which contested: 115 / 32 / 53 / 71
- Number of candidates: 8,070 / 1,538 / - / 7,086
- Turnout percentage: 58.42 / 80 / 76 / 45
- Process: FPTP / FPTP / 196 FPTP (from 22 multi-member districts) plus 29 PR / FPTP
OUTCOME: SEATS-VOTES RATIO

Bangladesh
- 2001: BNP 42.7%, 193 seats; AL 40.1%, 63 seats
- 2008: AL 49%, 230 seats; BNP 33.2%, 30 seats

Pakistan
- 2002: PML 25.7%, 126 seats; PPP 25.8%, 63 seats
- 2008: PPP 31%, 87 seats; PML 19%, 67 seats

Sri Lanka
- 2004: Freedom Alliance 45.6%, 105 seats (13 from list); UNP 37.8%, 82 seats (11 from list)
- 2001: UNP 45.6%, 109 seats (13 from list); Peoples Alliance 37.2%, 77 seats (11 from list)

India
- 2004: INC 26.5%, 145 seats; BJP 22.2%, 138 seats
- 2009: INC 28.56%, 206 seats; BJP 18.81%, 116 seats

Nepal
- 2008: CPN (Maoist) 30.52% (FPTP), 120 seats, and 22.28% (PR), 100 seats; Nepali Congress 22.79% (FPTP), 37 seats, and 21.14% (PR), 73 seats

Q: Is the PR system more representative? Maybe.
VOTES-SEATS DISPARITY AND FPTP PROBLEMS
• Percentages of votes do not translate into seats in the same proportion
• Disparity in constituency size is partly a reason
• Large number of contestants and fragmentation of votes
• The majority of candidates win by a minority of votes
• Many contestants are non-serious; forfeited deposits waste votes
  (India 2009: 6,831 out of 8,070 candidates forfeited deposits; votes wasted: 6.373 million out of a total of 416 million)

FIRST PAST THE POST SYSTEM
• The manifest feature is that the majority of candidates win by a minority of votes
• In recent elections in India, how many winners got 50%+ of the votes? '96: 146; '99: 203
• In 2009, of the 120 who did, 106 received 50-60%, 11 received more than 60%, and only 3, including Sonia Gandhi and Rahul Gandhi, received still higher shares. Of the remaining 423, 256 got 40 to 50% of the votes, 138 received 30 to 40%, and 29 received 20 to 30%
• In this system there are no "run-off" arrangements
ELECTORAL VOLATILITY
Country       Mean Volatility   Standard Deviation
Bangladesh    36.3              -
Nepal         15.7              22.5
Pakistan      40.9              23.6
Sri Lanka     15.1              8.1
India         16.1              5.6
South Asia    24.8              -

Pedersen's index of electoral volatility is derived by adding the net change in the percentage of votes gained or lost by each party from one election to the next. The volatility index for each country is the average of the volatility over all the election periods in that country. Since the mean volatility may conceal variation, the extent of variation can be understood from the corresponding standard deviation for each country.
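As an illustration of how the index is computed (the code and the party shares below are hypothetical examples, not data from the presentation), note that summing only the shares gained, or equivalently taking half the sum of absolute vote-share changes, gives the Pedersen volatility between two consecutive elections:

    def pedersen_volatility(prev, curr):
        """Pedersen index: half the sum of absolute changes in each party's vote share (%).
        Summing only the shares gained (or only those lost) yields the same number."""
        parties = set(prev) | set(curr)
        return 0.5 * sum(abs(curr.get(p, 0.0) - prev.get(p, 0.0)) for p in parties)

    # Hypothetical vote shares (%) in two consecutive elections, for illustration only.
    election_1 = {"Party A": 40.0, "Party B": 35.0, "Party C": 25.0}
    election_2 = {"Party A": 30.0, "Party B": 38.0, "Party C": 20.0, "Party D": 12.0}
    print(pedersen_volatility(election_1, election_2))   # 15.0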
FRAGMENTATION OF POLITICAL PARTIES
• Does the electoral system reflect the people's choice?
• Only in part
• The votes-seats disparity is due to many factors
• Fragmentation of political parties, splinter groups, new parties
• In all countries of South Asia many parties exist, but none obtains a majority of seats
• Coalitions become inevitable; this may be a worldwide phenomenon, but developing countries require consistent policies and effective governance
• Coalition politics is a major factor of delay
• In India, a slight decline in coalition numbers has been noticed
• From 24 parties in 1999, down to 15 in 2004 and 10 in 2009
ELECTORAL SYSTEM AT SUB-NATIONAL LEVELS
• In Pakistan and Sri Lanka there are provinces and local bodies
• In Bangladesh, districts and local bodies
• In India, 29 states with legislatures and numerous local bodies
• At the national level, the Rajya Sabha or Council of States is the so-called upper house; members are elected by an electoral college comprising members of State Assemblies, through proportional representation and preferential voting
• In India, after the 73rd and 74th Constitutional Amendments, the electoral system was vastly enlarged
• For rural areas, a 3-tier system of village, intermediate and district level panchayats (local bodies); numbers as below:
  District: 539 (15,759 elected members)
  Intermediate: 6,105 (157,175 elected members)
  Village: 233,251 (2,657,112 elected members)
• For urban areas, nagar panchayats for small towns, municipalities for major towns and corporations for big cities; numbers as follows:
  Corporations: 132
  Municipalities: 1,513
  Nagar Panchayats: 1,220
• About 3 million elected representatives come from rural and urban local bodies; one-third of them are women, as constitutionally prescribed
THE QUESTION REMAINS
• Does the electoral outcome in South Asian countries reflect the choice of the people?
• Only in part.
• Reforms to electoral systems: a persisting need.
SESSION 12 WFS GENERAL MEETING PMP REPORTS-DEBATE AND CONCLUSIONS
PERMANENT MONITORING PANEL ON MOTIVATIONS FOR TERRORISM

THE LORD ALDERDICE FRCPsych
Chairman, PMP on Motivations for Terrorism
House of Lords, London, UK

In 1996, at the 21st Session of the International Seminars on Nuclear War and Planetary Emergencies, Professor Karl Rebane from Estonia presented a paper on 'High-Tech Terrorism as an Increasing Global Problem'. It was, as far as I am aware, the first time that a presenter at WFS had identified the possibility that terrorism, which had long been a tactic of asymmetric warfare, would with some certainty espouse the power of chemical, nuclear or biological weaponry in addition to the developing power of the internet. In his recommendations for WFS he not only pointed to the importance of international cooperation to mitigate the dangers of this emerging threat, and proposed that WFS identify this as a Planetary Emergency and establish a group to address it, but he also advised that issues of education, morality, faith and both personal and collective responsibility would need to play a role in our response.

His prescience was striking, not only because he identified the problem some years in advance of 9/11 (which triggered WFS into holding a special session in 2002 at which it was agreed to establish a PMP to deal with Terrorism). It was also his own country, Estonia, that, more than ten years later in April 2007, became one of the first countries in the world to be the subject of a coordinated large-scale Distributed Denial of Service (DDoS) attack as well as multiple politically-motivated web site defacements which resulted in a short-term crippling of the internet infrastructure of the country. The bot-net attack and defacements were correlated with a larger crisis involving violent street demonstrations and ethnic conflict. These had been provoked by a decision to relocate a highly controversial monument which had become a contested symbol of national meaning, representing to some oppression, and to others freedom. Rebane's warning about High-Tech Terrorism had come to a precise fruition in his own country, but its implications were, of course, global. Every major country in the world is now devoting considerable resources to dealing with cyber incursions, and I recently assisted our colleagues at the University of Virginia to hold a conference on this issue and produce a report on the problem.

When the WFS considered the problem of Terrorism in detail in 2002 and decided to appoint a Permanent Monitoring Panel on Terrorism, Dick Garwin, in an excellent paper at the 27th Session of the International Seminars on Nuclear War and Planetary Emergencies, focussed attention on a range of measures which could be undertaken to mitigate the terrorist threat, and he has continued to give leadership in this work right up to the present. Dr Sally Leivesley will later speak about the work of the Permanent Monitoring Panel on the Mitigation of Terrorist Acts. Also at the 2002 Seminar, when HE Msgr Marcelo Sanchez Sorondo, Professor Nino Zichichi and other colleagues addressed the post-9/11 challenge, they identified it as a Cultural Emergency. The Permanent Monitoring Panel on the Motivations for Terrorism has continued over the
twelve months since my last report to explore as scientifically as possible the characteristics and causes of this Cultural Emergency.

Last year I reported that our work had identified a series of issues. The first obstacle to our work was the asymmetric and highly politicized nature of the problem. Those who undertake terrorist attacks are aware that they are breaking the law, including international law, and all the most powerful countries of the world are relatively united in seeing most terrorist behaviour as criminal, a 'scourge which must be eliminated'. Meetings with those who promote or engage in terrorism are not therefore regarded as legitimate activities by many countries, especially in relation to so-called Islamist groups that use terrorism. This has meant that we have experienced considerable difficulty in accessing funds for our work and in making arrangements to meet those who can give us direct data on the thinking of the terrorists and the culture of their groups, and of course as social scientists, no less than in the physical sciences, we must hold to the culture of science identified so clearly by Professor Zichichi. As he has repeatedly pointed out, the scientific method requires us not only to be clear and creative in our Language and our Logic, but that it is only proper Science when we submit it to the test of reality and demonstrate reproducible experimental proof. Where the very meetings through which data might be acquired about the internal culture of the terrorists are obstructed by the law, our task is made the more challenging. As I pointed out last year, even treating terrorism as a 'phenomenon' to be studied is often misunderstood and criticized.

In the Motivation Panel, however, we have continued our psychological and social anthropological studies with further interesting results. Our network of cooperating academic colleagues and institutions now extends beyond Europe and the USA to include centres and investigators in South Asia, the Middle East and South America, and a significant number of scientific papers have been written and published in the past year. We have previously reported that we found no data to support the view that individual psychological or personality disorder, or social or economic disadvantage of the individual, were in themselves reliable indicators of terrorist involvement, and we had begun to focus more on the psychology of the group rather than the psychology of the individual. The culture of the group is the equivalent of the mind in the individual, with its own history and memories, its own identifications and forms of functioning, its own evolution and development and its own possibility of dysfunction, dissolution and death. Groups are therefore a form of organism. We found evidence that perceptions of humiliation, disrespect and shame can be transmitted down through successive generations as well as across groups or communities. This seemed to be less to do with social disadvantage, which may be mitigated with the passage of generations, than to do with a sense of injustice, unfairness and humiliation, which may continue for generations after the social and economic disadvantage have been addressed.
However, when peaceful and democratic routes to the resolution of these feelings of humiliation, shame and injustice are continually blocked, a rage is generated in the group which, once triggered, can spread with contagion to others who identify with the group, even when they have not themselves directly suffered the humiliation or injustice. The apparent connection with certain kinds of fundamentalist religious beliefs led us to continue to explore the links between fundamentalism, radicalization and terrorism,
but it is clear that it is not religious faith, or even religious fundamentalism, that leads to terrorism, but the radicalizing effects of the moral outrage felt at certain social and political developments.

What is most difficult to deal with as scientists is the evidence that groups, like individuals, do not operate logically, or even in their own rational best interests. We have observed how wars and terrorist campaigns are profoundly damaging to those who engage in them, both as individuals and as groups. They do not only damage their enemies. While they are presented as a rational response, the evidence is that they are ultimately almost inevitably self-destructive. This has led us to an exploration of what kinds of group values, emotions and culture drive these behaviours. In a series of studies sampling the views and reactions of people in a number of troubled regions of the world we have identified a type of value which we have called 'sacred values', which seem to trump other values. They are not sacred in the sense of being religious, but refer to values which transcend short-term self-interests such as comfort, shelter, health and general well-being. Such values as the life of my child, a sense of justice for my people, and the importance of self and group esteem have a value which is not negotiable in the usual way, is not dependent on the prospect of success and may provoke violent reactions.

We have been concerned not only to study these phenomena but also to try to convey their significance to those who can most likely use our emerging understandings in the development of peace processes for the Middle East and elsewhere. Since the election of the new U.S. President, there have been some more openings both for the funding of projects and for listening to our findings, not only in the agencies of the United States Government, but also of other governments in Europe, Asia and the Middle East. This is a more promising context than was possible for me to report to the WFS last August.

How do we hope to take the work forward in the next year?
1. We will continue to publish more work in books and scientific journals, and also to convey our ideas in the public media where they can affect the attitudes of our societies. We also hope with the PMP on Mitigation of Terrorist Acts to complete the project we identified last year to publish a WFS book of our findings.
2. Our application to the Lounsbery Foundation (USA) last year produced a grant of U.S. $100,000 to address the contribution of problems of Water, Energy and the Environment in the wider Middle East to the spread of terrorism in that region. We have had a WFS study visit to the region and a meeting in May of this year here in Erice, and we will be completing one further visit to the region and compiling a report in the next three months.
3. Some of the members of the PMP established a company called ARTIS Research and Risk Modelling (www.artisresearch.com), and this company has been successful in acquiring funding to take forward direct field research on the motivations for terrorism. Visits have taken place to Morocco, Israel/Palestine and Turkey, and others are planned in the next few months to Guatemala and elsewhere to investigate suitable sites for research field stations.
4. The group which has developed the International Dialogue Initiative, working through Bahcesehir University in Istanbul, is continuing its meetings; the next
will be in Belfast at the beginning of September, and there is already public evidence of senior government interest in the work of the members of this group in relation to a number of terrorist problems, including in addressing the long-standing problems of Kurdish terrorism in Turkey and the nearby region.
5. Members of the PMP continued to meet extensively in the Middle East to develop our work, not least because of the problems of bringing cooperating partners together in U.S. and European centres.

I would emphasise again, as I did last year, that almost all of our initiatives are undertaken in cooperation with bodies outside of WFS itself, enabling us to inject our scientific perspective into the work undertaken by others, but also acquiring information, contacts and resources that would not be available to us on our own as a PMP. I must also underline how the discipline of meetings here at Erice provides a crucial focus for our activities and our network, and for reflection on the direction and outcome of our findings and their application, the science and the technology of our work. I cannot end this report without expressing my sincere appreciation, and that of my colleagues, to Claude Manoli and the staff of WFS, as well as to Professor Nino Zichichi and our other WFS colleagues for their consistent support, sound advice and kind assistance over another year of challenging activity.
AIDS AND INFECTIOUS DISEASES PMP

FRANCO M. BUONAGURO
Istituto Nazionale dei Tumori, "Fondazione G. Pascale", Napoli, Italy

AIDS AND INFECTIOUS DISEASES PMP ACTIVITIES
This 2009 42nd Session of Planetary Emergencies is very special, not only for Professor Zichichi's anniversary, but also for the 25th Anniversary of the discovery of HIV and the award of the Nobel Prize in Medicine to Luc Montagnier and Francoise Barre-Sinoussi for the identification of HIV.
(Slide: AIDS and Infectious Diseases PMP Activities) On this occasion, which also included the Nobel Prize award to Harald zur Hausen for the discovery of the role of HPV in human cancer, this year we held an Anniversary HIV conference in Naples with Robert C. Gallo, in Italy with the European Society of Virology, April 24th.
AIDS AND INFECTIOUS DISEASES PMP PAST ACTIVITIES (1)
Since its establishment in 1988, the AIDS and Infectious Diseases PMP has organized several PMP Meetings and Plenary Sessions focused on epidemiological and molecular aspects, prevention, and vaccine development on HIV, with several colleagues, as reported in the Proceedings of the Erice Seminars. The HIV sessions have been alternated with other infectious epidemics with major public health impact:
1. BSE, Bovine Spongiform Encephalopathy;
2. Avian flu;
3. Vector-borne diseases, such as the session held last year;
4. Other emerging diseases.
AIDS AND INFECTIOUS DISEASES PMP PAST ACTIVITIES (2)
• Established the Infectious Agents and Cancer (IAC) online journal, directed by F.M. Buonaguro, with several senior colleagues on the editorial board (including Bob Gallo, Harald zur Hausen, Guy de The, Peter Biberfeld, etc.);
• Contributed, along with the Inter-Academy Society, to supporting the Medical School at Gulu University in Northern Uganda;
• Established the Google Infectious Agents and Cancer group to foster discussions and update PMP participants;
• Started an Infectious Agents and Cancer Blog for visibility.
AIDS AND INFECTIOUS DISEASES PMP 2009 ACTIVITIES
This year the AIDS and Infectious Diseases PMP contributed to the Climate PMP session held on August 19th on health-related issues which can be studied with, and possibly prevented by, satellite monitoring of:
1. Soil, precipitation, vegetation, etc., which can clearly define vector habitats: e.g., mosquitoes for malaria, rodents for plague, etc.
2. Dust storms, to prevent exposure to particles with or without pathogenic microorganisms, which can determine specific organ diseases (including respiratory diseases) besides overall immunosuppression.

AIDS AND INFECTIOUS DISEASES PMP PLANNING FOR NEXT YEAR
For next year we are proposing the organization of a Plenary Session during the 2010 Erice Meeting focused on vaccine development strategy, whose presentations will be articulated on:
• Vaccine strategies;
• Adjuvant development;
• Pandemic infections and vaccine preparedness;
• Emerging diseases and vaccine approaches;
• Vaccines and Developing Countries.
Furthermore we are proposing:
• The establishment of an African network with colleagues of two scientific societies [African Society of Human Genetics (AfSHG) and Journal of Infection in Developing Countries (JIDC)] in order to confirm the clinical relevance of Remote Epidemiology and the possibility of developing a health sentinel system for infectious diseases;
• The organization of a joint PMP session with the Climate PMP, to be held during the 2010 Erice Planetary Emergencies Meeting.
MOTHER AND CHILD PMP

NATHALIE CHARPAK
Kangaroo Foundation, Bogota, Colombia

MANIFESTO (ERICE 2002): MATERNAL AND CHILD MORTALITY IS A PLANETARY EMERGENCY
MISSION
1. Recognize new, and monitor existing, tools that decrease maternal and infant mortality and morbidity.
2. Highlight the impact of the other planetary emergencies on maternal and infant mortality and morbidity, and therefore contribute to enhancing the quality of present life and of future generations.

Kangaroo Mother Care is our specialty: a technique to decrease the mortality and improve the quality of survival of the premature and Low Birth Weight infant at all levels of care; 18 million candidates per year.
Kangaroo position: how to carry your premature baby in the neonatal unit.
Kangaroo nutrition: how to breastfeed your premature infant.
Kangaroo discharge policy: how to go home sooner with your premature infant
For 20 years we have been working on the scientific evaluation, dissemination and implementation of KMC (and we will never forget that you were the first to support this KMC adventure).
• Supporting the creation of Kangaroo Foundations in other countries: Philippines, Vietnam (in process)
• Establishing KMC centers all around the world
• Making available guidelines and material on KMC
• Pursuing good research and being present in big international events: national 2009 neonatal congress in Italy, American Academy of Pediatrics in Washington in Oct 2009, Neonatology update of Cornell, Columbia and NY universities
(Figure: Kangaroo Mother Care in Europe - Marina Cuttini, Hospital Bambino Gesu, Rome; European Science Foundation Network; N=283; categories labelled "Mother" and "Father", with responses including "Only if they ask for it" and "Usually not".)
(Map legend: Kangaroo Foundation pilot center; trained KMC centers (big public hospitals); centers willing to be trained.)
In Colombia 12% of all deliveries are LBW, which means that of 850,000 deliveries a year, 100,000 are LBW infants. We have just signed (August 2009) with the health ministry for editing the Colombian KMC rules and tools for quality evaluation of KMC.
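The arithmetic behind the 100,000 figure, using the percentages stated above:

$$0.12 \times 850{,}000 = 102{,}000 \approx 100{,}000 \ \text{LBW infants per year.}$$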
MONITORING THE NEW TOOLS TO DECREASE MORTALITY AND MORBIDITY OF MOTHER AND INFANT

Goal: Update the KMC evidence-based guidelines 2006-2009 (70 papers). Access to the guideline is free on the Internet, and guidelines are downloaded each day by professionals from all around the world.
Dr. Socorro Mendoza
• Pain
• Physiology and thermal stability
• Growth
• Neurodevelopment
Dr. Nathalie Charpak
• Perception and acceptability of KMC by mothers, parents and health workers
• Resistance from health workers and family
• Implementation of the full KMC intervention
• Diffusion and implementation of KMC
Dr. Juan Gabriel Ruiz
• Mortality before and after stability
• Morbidity
• Breastfeeding

IMPACT OF TERRORISM ON MOTHER AND CHILD HEALTH
We discussed the shameful phenomenon of child participation in armed conflicts worldwide.
The Child Soldier: a Terrifying Dimension of Global Terrorism
Understanding and increasing awareness to further encourage scientific collaboration towards mitigation of terrorism. A case study: the Colombian Internal Conflict, by Dr. J.G. Ruiz, Dr. S. De Leon-Mendoza, Dr. N. Charpak.
How incomprehensible it is to see the body of a 15-year-old suicide bomber who killed innocent people believing he was right, or a 14-year-old "sicario" (hired killer) killing for money and buying a refrigerator for his mother with the earnings of the murder. The questions are: why and how do we reach such a situation? What can we do to stop it?

PLANETARY PROBLEM
• Since 1990, war has been responsible for:
  2 million dead children
  6 million injured children
  10 million psychologically traumatized children
  22 million children displaced from their homes
  300,000 child soldiers all over the world, active in at least 30 countries
Armed conflict traumatizes children, strips them of their innocence, and denies them the protection needed to develop physically, intellectually, spiritually, and socially.
Where?
Countries with Child Soldiers
Peru (O); Iran (G,O); Turkey (O); Iraq (G,O); Israel and Occupied Territories (G,O); Angola (G,O); Russian Federation (O); Lebanon (O); Tajikistan (O); Papua New Guinea (O); Uzbekistan (O); Nepal (O); Pakistan (O); Philippines (O); Solomon Islands (O); Burundi (G,O); Republic of Congo (G,O); Dem. Rep. of the Congo (G,O); Rwanda (G,O); Uganda (G,O); Myanmar (G,O); Sri Lanka (O); Chad (G); Eritrea (G); Ethiopia (G); Colombia (P,O); Mexico (P,O); Yugoslavia (former Rep. of) (P,O); Algeria (P,O); India (P,O); Indonesia (P,O); East Timor; Sierra Leone (G,P,O); Somalia (all groups); Sudan (G,P,O); Afghanistan (all groups)
P: Paramilitary group; O: Opposition group; G: Government regular army force
The Colombian Case
Colombia: general indicators / 2008
Population: 45.6 million (16.8 million under 18)
Voting age: 18
Infant mortality: 20 per 1,000
Education: net primary school enrollment: male 88%, female 88%
Gender-based violence (GBV): widespread GBV, including rape, in the context of the armed conflict and in domestic life
Government armed forces: 208,600
Compulsory or voluntary recruitment age: 18
Landmines and unexploded ordnance (UXO): at least 100,000 mines; 96 mine-related child deaths, more mutilated
Small arms: supply is plentiful; quantity estimates not available
Refugees and internally displaced persons (IDPs): estimated 2.5 million, 48% to 55% under age 18
COLOMBIAN VIOLENCE LANDSCAPE
• "Informal conflict" 40 years old; the war's sons are involved today; 25,000 irregular fighters (estimated): FARC-EP, ELN, AUC, ...
• Heterogeneous regional conditions: rural/urban conditions; health, education, opportunities (hope); regions without state presence (for a long time)
• 25 years of drug-traffic influence: production, processing, transporting; financial support to arms traffic and armed groups; gangs (urban and rural)
Risk Population: General Conditions
• 45% below the poverty line (21,000,000)
  Few opportunities (education, employment, health)
  10,000 intra-family violence cases denounced in 2008
  12,202 cases of sexual violence against minors under 18 years, 71% in girls
  5 murders of children per day
  96 children were victims of landmines in 2008
  Drug "facilities"
• The higher-risk population:
  20% below extreme poverty (8,500,000) (30% in some regions, such as the Pacific frontiers)
  Big proportion of single-parent families
  Huge difficulties (for education, jobs, health, ...)
  Includes the displaced people (2,500,000 people), 50% of them children
THE ANALYSIS
• Why recruit children?
• Who are they? Risk factors for recruitment
• What can be done? To prevent the recruitment; to integrate them back into society
WHY RECRUIT A CHILD?
• They travel more easily
• They can use modern lightweight weapons
• They don't need as much feeding
• They are obedient, and if they aren't they can be easily scared
• There may not be enough adults willing or able to fight
• Forced recruitment of children may be used as a means of terror and blackmail against civilians
• They are easily used as servants for little jobs in the camps
• Eventually a source of income for some families
RECRUITMENT OF CHILD SOLDIERS IN COLOMBIA IN 2009
• In Colombia recruitment of minors is a common practice among paramilitary and guerilla groups. Some studies showed that nearly 50% of these groups are minors.
• "According to a report of the Catholic church, more than 500 minors from rural zones of the departments of Meta, Guaviare, Putumayo, Caqueta, Arauca and Vaupes have been recruited by force by the FARC, and the situation seems the same in Narino and Cauca, where authorities are claiming the FARC is building a child army. Children are obliged to serve as lookouts, to maintain and clean the weapons, to take shifts in camps and to fight against the regular army. Recruitment of children by the guerilla is not new, but it is becoming systematic, regardless of the fact that it is a war crime and could be judged by the international court of law (CPI)." (magazine CAMBIO, August 15, 2009)
WHO ARE THESE WAR CHILDREN?
• Between 10,000 and 14,000 children are currently directly involved in the conflict (2009).
PERSONAL RISK FACTORS
• Lives marked by (Aguirre, 2002; Defensoria, 2002):
  interfamily violence
  sexual abuse
  alcohol or drug addiction
  low-level mental handicap
• Other psychosocial factors are:
  an emotional relationship with a guerrilla or paramilitary fighter
  fear (that paradoxically leads an individual to seek refuge in an armed group)
  the illusion of power or status
  "no future" or perceived absence of opportunities
  the need for personal recognition
FAMILY AND EDUCATION (87% were living with their family)
1. Large dysfunctional single-parent families, in which there is little affection
2. The family does not have the minimal conditions for an integral development
3. Schooling of 3.5 years (they were already out of the educational system at the moment of recruitment)
4. Children who have been denied the right to live like children, such as orphans. As one observer once said: "it's like a career for the child: at home it's the whip, at school it's the ruler and in life it's the rifle and the armed group".
5. Intra-family maltreatment: girls escaping from the house because of sexual abuse look for the affection and protection they don't have at home
6. Addiction to drugs and alcohol in the family
•
• •
While the majority of children were working before the entry in the conflict there are significant links between high indices of recruitment and precarious socioeconomic conditions. Unsatisfied basic needs, poverty, unemployment and restricted access to education are typical to the municipalities most at risk Children and adolescents whose families are linked in some way or another to an anned group. This situation is common in the so called historic zones that have long been occupied and controlled by guerrilla or paramilitary forces. They decide to fight because of a political conviction. Revenge the killing of a family member.
GEOGRAPHICAL FACTORS
• Close to 90% of recruits come from rural areas. Municipalities that have been isolated by the conflict report high indices of recruitment; specifically, "the recruitment of boys, girls and young people takes place in some sixty municipalities and districts. Most of them are in rural areas in twenty departments, especially Meta, Putumayo and Tolima" (Defensoria del Pueblo-Unicef, 2002)
• New recruits often come from populations living on the frontiers of agricultural expansion
• Regions with little state presence, that have been controlled for some time by armed groups and that have adopted "cultures of resistance and opposition" which become a point of reference for adolescents and young people
HISTORIC ZONES
Recruitment in Colombia has the same risk factors as in the rest of the world, but it also has particular causes, such as the marginalization of rural places and the fact that the war in Colombia is 40 years old, so there are historic zones where the armed groups have the power and, in some cases, the "legitimacy".

DRUG CULTIVATION ZONES
Although a crude estimate, there may be as many as 200,000 young people and children involved in the growing, processing and marketing of narcotics. The way in which the armed groups support this industry, the umbrella of illegality, eases the passage from coca harvester to militant in an armed organization. 15% of recruits were working in the processing of cocaine.

OTHER REASONS
• Afraid of what will happen if they do not join in
• Not fully aware of the danger, and distorted notions of right and wrong
• Minors are not recruited by the Colombian Army, but are sometimes employed as informers, a position that endangers them and involves them in military operations
• Love of weapons
ADDITIONAL FACTORS THAT NEED TO BE STUDIED
• Role of mothers as a factor conditioning violence:
  abuse and other forms of violence against children
  mothers reproducing the violence they received as kids
  mothers as a mitigating factor breaking the abuse-violence cycle
• Healthy mother-infant bonding and attachment as prevention of insecure attachment and propensity to violence (KMC, appropriate raising patterns, etc.)
• Unplanned and undesired pregnancies: up to 50% of pregnancies in high-risk populations are not desired
WHAT DOES WAR HAVE IN STORE FOR THEM?
• Diseases
• Exploitation
• Death
• Fundamental rights violated: loss of freedom; loss of contact with their families; disruption of normal development
• War experiences: 1 out of 6 child soldiers have killed people; 6 out of 10 have seen people being killed; 8 out of 10 have seen human corpses

WHAT CAN BE DONE?
• Timely detection of populations at risk of recruitment of minors
• Specific interventions in those populations to mitigate risk factors:
  - Economic opportunities
  - Education, sports and culture
  - Law enforcement and social safety
• Reintegration policies for former child soldiers
• Social development and social justice
INTERNATIONAL ACTIONS AND PROTOCOLS
• International Humanitarian Law offers special protection of children's rights under national or international armed conflict:
  Article 3 (common to the 4 Geneva Conventions)
  Article 24 of the 4th Convention, on the protection of civilians in times of war
  Additional Protocols I and II to those Conventions
• Children are also entitled to benefit from all other norms favoring combatants and victims of armed conflicts. The Additional Protocols forbid the participation of children under 18 years in any armed conflict. Protection is even greater for internal conflicts, in which both direct and indirect participation is strictly forbidden:
"Children who have not attained the age of 18 years shall neither be recruited into armed forces, or groups, nor allowed to take part in hostilities." (Geneva Conventions)
REINTEGRATION
• Reintegration is the official policy for rescuing children from armed groups. Once demobilized, there is special follow-up and integrated care (Instituto Colombiano de Bienestar Familiar), plus human development, education and employment offers (PNUD)
• This is only remedial and insufficient. The major goal is to prevent children from being involved in the armed conflict
  Ideal solution: termination of all hostilities
  In the meantime: internal and external pressure and demands on all actors to respect children's rights; making recruiting and involving children counterproductive for the political and tactical objectives of the parties in conflict; deeper commitment of the international community and mass media
Minors leaving the ranks of armed groups should be considered as victims of forced recruitment. As such, they cannot be put on trial. Minors who have been victims of forced recruitment should be exempted from military service. That is, the demobilized person should be reintegrated in every sense of the word.
One out of four children leaving the ranks of armed groups does not receive support from governmental entities and ends up rejoining armed groups or other violent groups (common delinquency, gangs)!

DISSUASION AND PENALIZATION
• Severe penalties, both at national and international level, should be in place to discourage and to penalize persons and organizations who involve children in armed conflicts
• "Moral" punishment: extensive dissemination in international mass media of the names and activities of those responsible
• Boycott of nations and multinational companies tolerating, or directly or indirectly favoring, armed groups that recruit children
• Severe penalties for persons involved, who should be prosecuted by international courts
• No prescription of these crimes
CLOSING REMARKS
• Involvement of children in armed conflicts is as ancient as the history of human conflicts
• Youth have traditionally fought and died in their elders' wars. This is a trait of humankind which needs to be expunged
• The problem faced by contemporary child soldiers is more complex
• They are dragged into the conflict by a host of factors, as outlined

When you reread the social situation through the history of the children who face the difficulty of living in a society like this one, you find out that, if you give those children the opportunity, they have lots of things to contribute. (Benposta Colombia - Nación de Muchach@s)

SOURCES
• Medios para la paz
• CIRC
• PNUD
• Watch List, http://www.watchlist.org/news/index.php#open_debate
• Coalición contra la vinculación de niños, niñas y jóvenes al conflicto armado en Colombia. Informe sobre la situación de niños, niñas y jóvenes vinculados al conflicto armado en Colombia: falencias en el proceso de desvinculación de niños, niñas y jóvenes de los grupos paramilitares. Presentado a la Honorable Comisión Interamericana de Derechos Humanos, Washington D.C., julio 18 de 2007.
• Revista Cambio 2009
Just not to forget Climate Change and Global Warming: interesting, definite evidence!
PERMANENT MONITORING PANEL ON LIMITS OF DEVELOPMENT

CHRISTOPHER D. ELLIS
School of Natural Resources and Environment, University of Michigan, Ann Arbor, Michigan, USA

• Manolo Borthagaray, Universidad de Buenos Aires, Argentina
• Mbarek Diop, former Scientific Advisor to the President of the Republic of Senegal
• Christopher D. Ellis (Chair), University of Michigan, Ann Arbor, Michigan, USA
• Bertil Galland, Writer and Historian, Buxy, France
• Alberto Gonzalez-Pozo, Theory and Analysis Department, Universidad Autónoma Metropolitana, Mexico D.F., Mexico
• Leonardas Kairiukstis, Laboratory of Ecology and Forestry, Kaunas-Girlonys, Lithuania
• Hiltmar Schubert, Fraunhofer-Institut für Chemische Technologie (ICT), Pfinztal, Germany
• Geraldo G. Serra, University of Sao Paulo, Sao Paulo, SP, Brazil
• K.C. Sivaramakrishnan, Chairman, Centre for Policy Research, Delhi
2009 PROGRAM
• Topic: Energy, Economy and Environment (3E) Dilemma
• Examined 9 countries on 5 continents
• Examined basic statistics
• Papers focused on several important questions
ENERGY USE BY SECTOR
(Figure: Percentage of energy consumed by each economic sector in the United States in 2006. Source: The National Academies, 2008)
(Figure: CO2 emissions by U.S. economic sector and energy source in 2005. Source: The National Academies, 2008)
(Figure: Relative contributions of energy sources to total U.S. energy consumption in 2006. Source: The National Academies, 2008)
2009 PROGRAM
1. How are energy efficiency technologies in each sector of the economy being developed and sold?
   a. Transportation: hybrid vehicles; flexible fuel vehicles; non-motorized travel; bus and transit
   b. Buildings: geothermal heating; solar daylighting, hot water, photovoltaics; energy-efficient appliances; building designs to fit given climate constraints
   c. Urban morphology: compact urban development; mixed-use buildings
(Figure: green building features - green roofs (a thin layer of plants and soil on the roof provides insulation, reduces stormwater runoff, absorbs carbon dioxide and releases oxygen); alternative energy (roof-mounted wind turbines and solar panels reduce the need for outside energy sources); windows and skylights provide natural lighting and heat, with glazed double-paned windows for insulation; water efficiency (cisterns collect rainwater for landscape irrigation, low-flow, waterless or composting toilets reduce water use); building materials (recycled materials and certified lumber); ventilation (vents, operable windows, ceiling fans))
2. What technologies are being developed to manage rapid urbanization?
   a. Problems with decentralization sustained by expanding traffic networks: encourage compact development; provide for multiple transportation alternatives
   b. Need for convergence and coordination of technology standards and regulations: improved vehicle efficiency standards; instituted building efficiency codes; city-wide development regulations that support non-motorized transportation and reduce vehicle miles travelled
3. What technologies are being explored to increase adaptive capacity?
   a. Use regenerative design techniques: reduces infrastructure costs and protects against flood hazards; improves water quality
   b. Effective procedures are needed for public discourse and dissemination of information: compare decision making with outcome statistics
   c. Implementation issues: special interests resisting change; additional costs/inverse financial incentives; outdated public development policies; unique regional approaches needed for different communities
2010 PROGRAM
• Options being explored for next year's program:
  1. Cap and Trade effects on development
     - Cap the amount of CO2 emissions allowed
     - Buy and trade CO2 credits
  2. Adaptive capacity (urban development, governance)
     - Water systems
     - Land use change
     - Climate change
     - Natural resource management
  3. Five-PMP collaboration
POLLUTION PERMANENT MONITORING PANEL: ANNUAL REPORT

LORNE EVERETT
Chancellor, Lakehead University, Thunder Bay, Canada, and Haley and Aldrich, Inc., Santa Barbara, California, USA

INTERDISCIPLINARY JOINT PMP SESSION AND WORKSHOP (CHINA-INDIA)

ENVIRONMENTAL BASIS OF DISEASE: THE NEED FOR GREEN CHEMISTRY
• Stefano Parmigiani
• Lorne Everett
• Pete Myers
• Karen O'Brien
• Frederick S. vom Saal

Green Chemistry Workshop, 24 August 2009, Enrico Fermi Lecture Hall - Eugene P. Wigner Institute
THE GROUNDWATER PROTECTION ISSUES IN KARST REGIONS OF SOUTHWEST CHINA
Yuan Daoxian, UNESCO Karst Research Center, School of Geography, Southwest University, Chongqing, China

Countermeasures and New Challenges
Objectives:
• Identify karst water in subterranean streams
• Determine quantity and quality changes over the past 20 years
• Assess feasibility of use
• Vulnerability mapping
Exploitation of Subterranean Streams
Building Water Tanks: Approximately 1 Million Tanks Built in Recent Years
WHY WE CANNOT "SOLVE" THE RADIOACTIVE WASTE "PROBLEM" WITH THE CURRENT SCIENCE, TECHNOLOGY, REGULATIONS, AND SOCIETAL DEMANDS
Frank L. Parker, Vanderbilt University, Nashville, Tennessee, USA

Nuclear Waste Disposal Issue
Objective:
• Furnish sufficient energy for humans and the environment to be sustainable, to be at least as well off as we are today, for many generations.
Boundary Conditions:
• All energy systems have positive and negative aspects, and all systems have large uncertainties.
• This paper only deals with nuclear energy and one of its perceived negatives: nuclear wastes (not including proliferation concerns in detail).
Data/Conclusions:
• The present system does not work. It is not based on scientifically and technologically sound assumptions.
• There are scientifically correct options that may not be politically or socially acceptable at this time, but may be in the future.
• Avoid scientific and technological hubris.
582 Radiotoxicity of Spent Nuclear Fuel and/or High-Level Radioactive Waste over Time
----
10,000
.
1,000
III
100
it 'iI
1
10
I '"
100
-
-
1,000
10 .000
100,000
1.000.000
L'A'R SNF (No Action AIre~'el Fe;'\ f'(o,r.v~ tfl.W (F ,,1It ~t~ R$..-,c"I'ft.A-JtematlW) . . Ttnrl11&li Faa Recyef& HLW {Tf\~tiFeet ReadDr R~~~~ ~..l $'~} LWR: (MOX-tJ.Pu) HLW(T~ Re.1Clar RECide ~~ 1} HWR: SHF (HWRIHTGR Alhlrrm"Jv&---Option '1) HroR SNF ~GR _ ......iIIe--Oplion 2) Thorium SNF (Thorium Atlet'natM!l) Nahus l ltaniJm Ofe
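To make the general shape of such decay curves concrete, here is a minimal sketch in Python. It sums, for a handful of nuclides, activity times a toxicity weight; the half-lives are standard values, but the inventory and toxicity weights are purely illustrative assumptions, and ingrowth of decay daughters is ignored, so this is not a reconstruction of the curves in the figure:

```python
import numpy as np

HALF_LIFE = {"Sr-90": 28.8, "Cs-137": 30.1, "Am-241": 432.0, "Pu-239": 24_100.0}  # years (standard values)
ATOMS = {"Sr-90": 1.0, "Cs-137": 1.0, "Am-241": 0.1, "Pu-239": 0.05}               # illustrative relative inventory
TOX_WEIGHT = {"Sr-90": 1.0, "Cs-137": 0.5, "Am-241": 50.0, "Pu-239": 60.0}         # illustrative toxicity per decay

def relative_radiotoxicity(t_years):
    """Sum over nuclides of activity (inventory * decay constant * exp(-lambda*t)) times a toxicity weight."""
    total = np.zeros_like(t_years, dtype=float)
    for nuclide, t_half in HALF_LIFE.items():
        lam = np.log(2.0) / t_half                      # decay constant, 1/years
        total += TOX_WEIGHT[nuclide] * ATOMS[nuclide] * lam * np.exp(-lam * t_years)
    return total

times = np.logspace(0, 6, 7)                            # 1 year to 1,000,000 years
tox = relative_radiotoxicity(times)
for t, x in zip(times, tox / tox[0]):
    print(f"{t:>12,.0f} y : {x:.3e} (relative to year 1)")
```

The short-lived fission products dominate early and vanish within a few centuries, after which the long-lived actinides set the slowly declining tail, which is the qualitative behaviour the figure illustrates.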
What To Do With High-Level Waste?
1. Set a realistic objective function for the number of generations that you have some concern for, say 3 to 5.
2. Choose the system, say dry surface storage, for that time frame, but make sure that there will not be a catastrophic release at the end of that period.
3. Since the energy content of even high-level waste after that time is low, the releases, if any, will be slow.
4. Design the system to be reversible, modifiable, and the wastes retrievable, if necessary. Test with modeling and at pilot and field scales.
5. At the end of that time period, repeat. If not the best choice, choose another system, such as WIPP or sub-seabed disposal.
6. Some of these or other systems will be more believable than Yucca Mountain, and just as protective of public health and the environment over the same periods as the presently proposed system.
7. The cost will be much lower than the capital costs for the 1,000,000-year protected facility.
8. If repairs are needed, they can be made with the science and technology, and with the social expectations, available at that time.
9. Since these are assertions, they must be tested with mathematical models, along with laboratory and field scale studies, including public impact.
10. Let's give it a try (Machiavelli).
MULTINATIONAL REPOSITORIES: RECENT DEVELOPMENTS AND 2010 SESSION AND WORKSHOP PROPOSALS
Charles McCombie, Arius Association, Switzerland

Why do we need Multinational Solutions for Nuclear Waste Disposal?
• To enhance global safety and security: highly active spent fuel containing fissile plutonium should not end up in numerous scattered locations around the globe. Fewer, safely constructed and well secured storage and disposal facilities must be the goal.
• The key challenge is the siting and construction of deep geological repositories for long-lived radioactive wastes.
• Small nuclear nations may not have suitable locations, adequate financing, or sufficient technical know-how.
• Two approaches can help: a large program accepts foreign wastes (e.g., USA/Russia); small countries partner to find solutions.
Governmental-Level Expressions of National Interest in the ERDO Working Group (Participant or Observer)
WFS 2010 WORKSHOP PROPOSAL
• Proposal that Dr. Zichichi, together with an Italian Ministry, host a Session/Workshop in 2010: global cooperation on spent fuel management and multinational high-level radioactive waste repositories.
• Countries interested in multinational radioactive waste repositories would be invited to Erice to discuss the benefits and challenges, and also to outline conditions under which a repository would be considered by each country.
• The proposed Session would be 1/2 a day, followed by the Workshop.

Topics to be addressed
• History and status of multinational initiatives
• Technical issues: sharing repository technology; standardization; geological requirements; siting process
• Global environmental and nuclear security issues
• Economics: economies of scale; benefits packages
• Public and political aspects: how credible? Who would participate?
• A workshop on the topic could attract participants from: smaller European States (cf. the ERDO Initiative); potential new (or expanding) nuclear nations (e.g., the UAE, Jordan, Egypt, Morocco, Algeria, Iran, Taiwan, South Korea, Vietnam, Malaysia, Philippines, Thailand, South Africa, Namibia, Nigeria, Ghana, Mexico, Argentina, Brazil and Chile); and large nuclear supplier countries, which should be concerned that back-end solutions are made available to the small countries where they are promoting nuclear power and selling nuclear materials.
• The Workshop could initiate dialogue between small nuclear user countries that are being asked to forego rights that they have under the NPT and the potential large service providers.

Why A Direct Italian Involvement?
• Italy was formerly a leading nuclear nation and has accumulated significant inventories of radioactive wastes from the wide range of activities carried out before the program was shut down.
• There is currently no HLW, or spent fuel, disposal program.
• Italy has decided to restart nuclear power production.
• Italy has actively supported the concept of multinational disposal; the Italian Government agreed in July 2009 to continue support of Arius and the ERDO-WG.
Abstract for Proposal for 2010 Workshop: Encouraging Global Cooperation on Spent Fuel Management

The urgency of reducing CO2 emissions in order to mitigate the effects of global climate change is widely recognized. The facts that increased use of nuclear power cannot alone solve the problem, but that it must be part of the solution, are also recognized, at least by those outside the circle of wishful thinkers who would like to believe that renewables alone can do the trick. The argument for carbon-free energy and the overriding global need for secure energy sources have together led to the current resurgence of interest in nuclear power, the so-called "nuclear renaissance". But rapid and extensive nuclear growth will also remain wishful thinking unless some crucial requirements are satisfied. Nuclear energy production must be safe, secure and economic both at the front end (from the mining of uranium through its enrichment to its burn-up in nuclear reactors) and also at the back end, i.e., in the waste management area. The front-end issues tend to dominate the nuclear debate, most recently because of concerns about enrichment capabilities bringing nations closer to weapons capability. However, the back end cannot be neglected. Highly active spent fuel containing fissile plutonium should not end up in numerous scattered locations around the globe as more and more nations, both large and small, contemplate expanding or introducing nuclear power. Fewer, safely constructed and well secured storage and disposal facilities must be the goal.

The key challenge in this regard is the siting and construction of deep geological repositories for long-lived radioactive wastes. These repositories are expensive; even the smallest state-of-the-art deep facilities for high-level radioactive wastes (HLW) or spent fuel will cost several billion dollars. Many small nuclear programs, or countries starting out in nuclear, do not have the technical and/or financial resources to implement a national repository in a timely fashion. They will have to keep their spent fuel in interim storage facilities; this could result in numerous sites all around the world, at each of which hazardous materials will be stored for decades to hundreds of years. One safer and more secure option would be for nuclear fuel suppliers to take back the spent fuel under a leasing arrangement and add it to their own larger stocks, which would be stored for later reprocessing and recycling into new fuels. However, although there is fierce competition among nuclear suppliers to provide reactors, fuels and reprocessing services, there are as yet no offers to take back fuel. The "take-back" options that have been discussed in the scope of the US GNEP proposal and its Russian equivalent have not led to any specific offers, and the concept in any case covers only new fuel supplied and not the extensive further inventories of radioactive waste that must be disposed of in geological repositories.

The most promising option that remains open for small and new nuclear power programs is to collaborate with similarly positioned countries in efforts to implement shared, multinational repositories. Most credible is the cooperation of geographically contiguous or close nations in the scope of regional repository projects. The national advantages in sharing technology and in benefiting financially due to the economies of scale in repository implementation are obvious.
The global safety and security benefits in helping all nations have earlier access to state-of-the-art repositories are also clear. The big challenge, of course, is in achieving public and political acceptance in the repository host countries.
Over the past few years, significant progress in this direction has been made in the SAPIERR project (acronym for "Strategic Action Plan for Implementation of European Regional Repositories"). The project, funded by the European Commission, has carried out a range of studies that lay the groundwork for serious multinational negotiations on the establishment of one or more shared repositories in Europe. The studies (all available on the web site www.sapierr.net) have looked at legal and liability issues, organizational forms, economic aspects, safety and security issues and public involvement challenges. Based on these studies, a Working Group with representatives of around 12 EU Member States will meet throughout 2009 in order to consider establishment of a formal European Repository Development Organization (ERDO). By combining their resources in this way, the partners in ERDO can also demonstrate to other regions of the world the feasibility of enhancing safety and security whilst increasing the economic attractiveness of nuclear power, even for small countries.

The ERDO could act as a role model for regional groupings elsewhere. The Arab States have recently made clear that they intend to introduce nuclear power, and that they will do so collaboratively. Other world regions with potential regional groupings are in Central and South America, Asia and Africa. Several countries in such regions have expressed interest in the regional repository concept and some have already attended relevant meetings on the topic. A workshop on the topic could thus attract participants from further European States and also from the UAE, Jordan, Egypt, Morocco, Algeria, Iran, Taiwan, South Korea, Vietnam, Malaysia, Philippines, Thailand, South Africa, Namibia, Nigeria, Ghana, Mexico, Argentina, Brazil and Chile. All of these are interested in introducing or expanding nuclear power programmes. Further interest in such a Workshop would be expected from the large nuclear supplier countries. These should be concerned that back-end solutions are made available to small countries where they are promoting nuclear power and selling nuclear materials. The Workshop could perhaps initiate dialogue between small nuclear user countries that are being asked to forego certain rights that they have under the NPT and the potential large service providers that may be prepared to help customers with back-end solutions if this increases commercial opportunities without raising concerns about global security.

POLLUTION PANEL WORKSHOP PARTICIPANTS, 19 AUGUST 2009
• Chancellor Lorne Everett, Lakehead University, Canada, Chair
• Dr. Gina M. Calderone, ECC, New York, USA
• Distinguished Prof. Frank Parker, Vanderbilt University, USA
• The Honourable James Rispoli, US DOE (Retired)
• Dr. Charles McCombie, Arius Association, Switzerland
• Dr. Yuan Daoxian, UNESCO, China
• Prof. Honglie Sun, Geographical Sciences and Natural Resources Research Institute, Beijing, China
• Dr. Jerold Heindel, National Institute of Environmental Health Sciences, North Carolina, USA
• Dr. K.K. Satpathy, Indira Gandhi Center for Atomic Research, Tamil Nadu, India
• Dr. Frederick S. vom Saal, University of Missouri, USA
• Dr. Stefano Parmigiani, University of Parma, Italy
ENERGY PMP REPORT

WILLIAM FULKERSON
Institute for a Secure and Sustainable Environment, University of Tennessee, Knoxville, Tennessee, USA
CARMEN DIFIGLIO
Office of Policy and International Affairs, U.S. Department of Energy, Washington, DC, USA
BRUCE STRAM
Element Markets, Houston, Texas, USA
MARK LEVINE
Lawrence Berkeley National Laboratory, Environmental Energy Technologies Division, Berkeley, California, USA

At the 42nd annual Erice Planetary Emergencies Seminars, the Energy Permanent Monitoring Panel (PMP) held its usual open Panel meeting on August 19. In addition, it organized or helped organize two plenary sessions, one on essential energy technologies for confronting the dual problems of climate change and energy security, and the second on the role of science and technology for solving the energy, environment, and economy predicament faced by China (and every nation) in a greenhouse-constrained world. The China Plenary was the result of collaboration between five PMPs (Climate, Limits of Development, Water, Pollution and Energy), and on August 24 the five PMPs held a small workshop to examine the results of the China effort and to discuss the future of the multi-PMP collaboration. This report summarizes all these activities.

ENERGY PMP MEETING
The agenda for this meeting is attached, as is a list of participants. Ten papers were presented by Panel members and by guests. They ranged from the impact of the world recession on the global energy scene, to biomass and wind updates, to three papers on oil/gas production futures, to energy research topics in Japan, to the America's Energy Future report, and to essential technologies for managing climate change and energy security. The abstracts and/or visuals used for these will be on the Energy PMP web site, http://www.energypmp.org/, linked to the WFS web site, http://www.federationofscientists.org/. In addition, a paper by Dick Wilson, "No New Nuclear", and one by Jef Ongena, "Fusion Energy Update", will be on the web, but these two members could not attend the meeting. Each paper deserves a line or two in this report.
Carmen Difiglio discussed the efforts in the U.S. Congress to fashion a low-carbon fuel standard. Following the lead of California it could be done nationally, but current versions of the legislation may well cost of the order of $300/t CO2 avoided.
Bruce Stram talked about the status of wind worldwide. Capacity is currently doubling every 4 years. Much of the growth is a result of subsidies and of excellent transmission and energy storage capacities, e.g., Denmark with the Scandinavian grid and its pumped storage capacity.

Akira Miyahara discussed a host of energy technology developments in Japan, ranging from advances in fusion science and engineering, including the discovery of the super-dense-core regime in the LHD that may advance future reactor design, to using low-level nuclear waste to produce hydrogen and electricity, to advanced manufacturing of amorphous silicon PV devices, to making Japanese nuclear power reactors more earthquake resistant.

Hisham Khatib talked about world energy in the financial recession, but he pointed out that the world is changing fundamentally because, for the first time, non-OECD demand exceeded OECD demand, China's carbon emissions were greater than those of the United States, world oil consumption declined in 2009, prices of oil and coal were volatile, and investments in energy systems across the board are down. Nevertheless, he believes the future will be driven more by the environment and international agreements than by any financial crisis.

Three papers were presented on oil/gas resources. These were put on the program by the persistence of Jef Ongena. The three were by Roger Bentley, Peter Jackson and Rod Nelson, and they all touched, one way or another, on the possibility of peak oil production, at least for so-called "conventional" oil reserves. These same three papers were presented in abbreviated form in the Energy Plenary Session. There was no consensus about the timing of a geology-based peak, but the debate was polite, useful and entertaining. A concise summary of the discussion was suggested by Adnan Shihab-Eldin:
• Production policies of countries differ vis-a-vis the speed of depletion,
• Technology for recovery will continue to improve,
• The transition to unconventional sources will increase production costs and likely carbon emissions, and
• There may well be a demand peak in OECD countries followed by other nations.
Ed Rubin summarized the recent U.S. National Academy of Sciences report "America's Energy Future: Technology and Transformation." To meet the goals of national security, economic competitiveness and environmental protection, the energy system must be transformed dramatically in the next two to five decades. This includes capturing efficiency potential, demonstrating the viability of CCS and deploying evolutionary nuclear power plants.

Bill Fulkerson presented the results of a paper authored by David Greene of the Oak Ridge National Laboratory about essential energy technologies for achieving the simultaneous goals of reducing U.S. carbon emissions by 50-80% from 2005 levels by
2050, and achieving domestic liquid fuel production increases plus demand reductions by 2030 of 11 MBD. To do this with 95% confidence requires greater than 50% probability of success for all eleven advanced technology areas considered. Two technologies, CCS and advanced fossil fuels including from unconventional sources, seem essential to success, and advances in building and transportation efficiency seem almost as important.
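The kind of probabilistic technology-portfolio reasoning behind that statement can be sketched as follows; the per-technology contributions, the target, the independence assumption and the common success probability are illustrative assumptions, not the values used in the underlying analysis:

```python
import random

# Illustrative contributions of eleven advanced technology areas if each succeeds,
# and an illustrative overall requirement (NOT the numbers in the underlying study).
CONTRIBUTIONS = [12, 10, 9, 8, 8, 7, 6, 6, 5, 5, 4]
TARGET = 45

def goal_confidence(success_prob, n_trials=100_000):
    """Monte Carlo estimate of the probability that the contributions of the
    technologies that succeed add up to the target, with independent outcomes."""
    hits = 0
    for _ in range(n_trials):
        total = sum(c for c in CONTRIBUTIONS if random.random() < success_prob)
        hits += total >= TARGET
    return hits / n_trials

for p in (0.4, 0.5, 0.6, 0.7, 0.8):
    print(f"P(success per technology area) = {p:.1f} -> goal met in ~{goal_confidence(p):.0%} of trials")
```

The sketch shows the qualitative point: unless each technology area individually has a fairly high chance of success, the confidence of meeting the combined emissions and fuel targets falls well short of 95%.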
THE PMP BUSINESS SESSION
1. Fulkerson asked this question: why is energy a planetary emergency? From the discussion the following list emerged:
• Climate change,
• Energy insecurity,
• Poverty with the absence of electricity,
• Nuclear proliferation,
• Lagging technology, with no silver bullet emerging and inadequate investments in research, development, demonstration and deployment.

The Energy PMP has devoted significant attention to the first two of these and the last. Also, proliferation has been a continuing subject of attention at the Seminars. The absence of universal electrification and its relationship to poverty in many parts of the world has been neglected. It should perhaps receive attention in the 2010 Seminars. This might be accomplished if the five-PMP collaboration extends to a China-India activity next year. Electrification in India is advancing, but it is by no means universal. What might be learned from intellectual interaction between the two countries?

2. Ideas for the 2010 Seminars
In addition to the problem of universal electrification in development, a number of other themes were identified in the discussion of ideas for the 2010 Seminars. These included:

A plenary session on planetary engineering (geoengineering) was proposed, and Mike MacCracken wrote a one-pager including a list of potential speakers. He noted that the topic is broad. It ranges from global intervention to more limited regional strategies such as curtailing the loss of Arctic summer sea ice. This idea should be explored with the Climate PMP.

Hisham Khatib suggested the broad topic of world energy security, with implications for sustainable economic growth to advance human and environmental well-being. Much could be fit under this umbrella, including mitigation of environmental harms and resource limitations.

The idea of discussing the social rate of discount for evaluating the potential long-term costs and benefits of global warming was suggested by Jef Ongena in absentia, but it is, perhaps, a topic not well suited for many attendees at the Seminars.
Examining the barriers to actually deploying advanced renewable energy systems was suggested by Bruce Stram: how does deployment actually get done around the world, e.g., wind in Germany or Denmark, solar in Spain, biomass in Brazil, etc.?
The nuclear renaissance, as it is happening or not around the world, was a topic suggested by Akira Miyahara and Pierre Darriulat.
Roger Bentley suggested a number of topics: the implications of net-energy analysis, the adequacy of comprehensive energy models, the total cost of energy and its economic implications, the best policies for reducing GHG emissions, and how best to stimulate energy innovation. Related to this last topic is one suggested by Giorgio Simbolotti, which he called priority setting in energy technology R&D. Finally, would it be useful to have a post mortem on the December Copenhagen Conference and on progress toward the Kyoto Protocol?
No consensus derived from the discussion, but with this many good suggestions the PMP will have plenty to work with and much to think about.
3. Finally, the disciplinary make-up of the Energy PMP was discussed. Fulkerson pointed out that the PMP was missing expertise in renewables, energy storage, the grid, and energy efficiency; also, there are too many Americans. Hisham Khatib pointed out that he was an expert on the grid, among other things. People were asked to give Fulkerson suggestions of candidates along with some biographical information and coordinates. Claude Handip, former IEA director, was suggested, as was Stephen Hirshberg (?), a Swedish renewables expert.
THE ENERGY PLENARY SESSION
The Energy PMP arranged this session. Our purpose was to expose the problem of inadequate technologies for moderating climate change and for assuring energy (oil) security at acceptable cost. There is no silver bullet. There are, however, many technologies that, with advances, can provide the solutions needed. Therefore, broad-ranging research, development, demonstration and deployment (RD3) are indicated. However, RD3 budgets for energy have been meager in many countries for the past 20 years or more, and this is true for both the public and private sectors. This situation has been reversed by the Obama Administration in the United States as part of the American Recovery and Reinvestment Act of 2009, the economic stimulus package. The total energy RD3 budget for 2009 is nearly $50 billion, to be spent over several years.
The Plenary Session involved two talks on the essential technology of carbon capture and storage (CCS) by Carl Bauer and Ed Rubin, then four general talks on advanced technologies and the search for new climate solutions given by Wolfgang Eichhammer, Giorgio Simbolotti, Lee Lane and Mike MacCracken. This was followed by a talk on extracting uranium from seawater by Masao Tamada. The estimated cost of yellow cake from a full-scale process is about twice the current price of U₃O₈, or about $100 per pound. Finally, Roger Bentley, Peter Jackson, and Rodney Nelson gave three talks on the future of global oil production. All of these papers will be on the Energy PMP web site.
42nd SESSION OF THE INTERNATIONAL SEMINARS ON PLANETARY EMERGENCIES AND ASSOCIATED MEETINGS
ENERGY PERMANENT MONITORING PANEL
Erice, 19 August 2009
Paul A.M. Dirac Lecture Hall - Patrick M.S. Blackett Institute - 09:30-13:00 and 16:00-19:00
AGENDA
09.30 SESSION No. 1: Participants' Talks
• Biomass Energy Update - Carmen Difiglio
• Wind Energy Update - Bruce Stram
• Topics of Energy Research in Japan - Akira Miyahara
• Fusion Energy Update (not given) - Jef Ongena
• Financial Crisis Impact on the Global Energy Scene - Hisham Khatib
• Peak Oil/Gas - Roger Bentley
• The Future of Global Oil Supply - Peter Jackson
• No Silver Bullet for Energy but Plenty of Buckshot - Rod Nelson
• Energy Technologies for Climate Protection and Energy Security - William Fulkerson (for David Greene)
• No New Nuclear (not given) - Dick Wilson
• America's Energy Future - Ed Rubin
SESSION No. 2: General Discussion of Next Year's Issues: Proposals for Plenary Session Topics, Potential Speakers and Assignments
• Global Warming and the Social Rate of Discount
• Geoengineering Revisited: What's New, e.g., Abiotic Air Capture?
• Cap and Trade in EU: Has it Worked?
• Copenhagen Outfall
• Other Proposals
Other Business
• Five-PMP Collaboration
• Suggestions for New Members
• Other Issues
SESSION No. 3
Sustainability: "Home" by Arthus-Bertrand (not presented) - video presentation arranged by Jef Ongena, time permitting
Chairman: Dr. Bruce Stram; Co-chair: Professor William Fulkerson
Energy PMP Meeting Participants, August 19, 2009
Bauer, Carl O., Energy, U.S., +1-412-386-6122, [email protected]
Bentley, Roger, Energy, U.K., +44-118-926-4000, [email protected]
Bentley, Youqurei, Energy and Climate, U.K., +44-1582-750819, youqurie.[email protected]
Canavan, Gregory, Energy, U.S., 505-667-3104, [email protected]
Darriulat, Pierre, Vietnam, +84-438363747, [email protected]
Difiglio, Carmen, Energy, U.S., +1-202-586-8436, [email protected]
Fulkerson, Bill, Energy, U.S., +1-865-988-8084, [email protected]
Hisham, Khatib, Energy, Jordan, +96-265815316, [email protected]
Jackson, Peter, Energy, U.K., +44-1732-465316, [email protected]
Li, Mingyuan, Energy, China, +86-10-89734145, [email protected]
MacCracken, Michael, Climate and Energy, U.S., +1-301-564-4255, [email protected]
McCombie, Charlie, Pollution, CH/UK, +41-792397486, [email protected]
Miyahara, Akira, Energy, Japan, +81-3-3554-7076, [email protected]
Nelson, Rod, Energy, U.S., +1-713-375-3425, [email protected]
Pozela, Yuras, Energy, Lithuania, +37-05267122, [email protected]
Rubin, Ed, Energy, U.S., +1-412-268-5897, [email protected]
Shihab-Elden, Adnan, Energy, Kuwait, +96-566151170, [email protected]
Simbolotti, Giorgio, Energy, Italy, +39-3294189358, [email protected]
Stram, Bruce, Energy, U.S., +1-281-827-8572, [email protected]
Tamada, Masao, Energy, Japan, +81-27-346-9213, [email protected]
Wu, Maw-Kuen, Energy, Taiwan, +88-6937505123, [email protected]
Yuan, Daoxian, CO₂, Environment, China, +86-773-5834232, [email protected]
Zhuang, Jie, Bioenergy, U.S., +1-865-974-1325, [email protected]
PLENARY SESSION WITH CHINESE EXPERTS DISCUSSING THE ROLE OF SCIENCE AND TECHNOLOGY IN HELPING SOLVE THE ENERGY, ENVIRONMENT AND ECONOMY PREDICAMENT IN A GREENHOUSE-CONSTRAINED SOCIETY
The Energy PMP collaborated with the four other PMPs to organize this Plenary Session. The five Chinese speakers participated actively in all the Seminar activities, including the Energy PMP meeting and the wrap-up workshop on the last day. Actually, there was a sixth Chinese speaker, Professor Honglie Sun, who was invited by Professor T.D. Lee and spoke about the impact of global warming on Tibetan Plateau glaciers.
Mark Levine, who discussed facts and myths about the Chinese energy situation, introduced the session. Then Zhang Xiliang discussed technologies and policies for a low-carbon energy system transformation; under this transformation carbon emissions could peak around 2030. Li Mingyuan discussed the prospects for oil/gas reservoir enhanced recovery using CO₂ in the Songliao basin; the prospects are good, and the sequestration potential is estimated at ~500 million tons of CO₂. Yuan Daoxian discussed uptake of CO₂ by Karst formations in China and around the world; these estimates may account for 20-40% of the missing sink in the carbon cycle. Professor Yuan also discussed groundwater pollution in Karst formations at the Pollution PMP meeting. Zhuang Jie discussed bioenergy prospects in China as well as afforestation efforts; Dr. Zhuang pointed out that the similarity between the Chinese situation and that of the U.S. is leading to active collaboration on many problems and opportunities. Xia Jun described a systematic screening framework to assess climate change impacts and the integration of adaptation into development projects in water basins. All these papers and abstracts will be on the Energy PMP web site.
FIVE-PMP WRAP-UP WORKSHOP - THE CHINA EXPERIENCE
Twenty-seven people attended this workshop late in the afternoon of the final day of the Seminars. Three of the Chinese speakers attended: Yuan Daoxian, Li Mingyuan and Xia Jun. In addition, people from four of the five PMPs attended, including the chairs. The list of attendees is attached. Fulkerson gave a brief introduction and then asked each person to answer three questions:
• Was the China effort useful, particularly for the Chinese speakers?
• Should we continue the effort, perhaps with India in 2010?
• Is the five-PMP collaboration useful and should it be continued?
A very good discussion around these questions followed. Some conclusions resulted, as follows:
• The mood was that the Chinese experience was positive, particularly for the Chinese speakers, who participated actively in all the activities during the Seminars. It was agreed that the views of each of the Chinese speakers should be solicited; Fulkerson will follow up with an email message to each speaker.
• Extending the process to India in 2010 may be a good idea, but with the following conditions: India should not be the only nation involved; China should be involved, and an India-China session appeared to find favor. The questions to be asked of speakers should be written down and agreed to beforehand. Actually, this was done in the China case, but it should be done better, with more active input from the five PMPs.
• An alternative approach would be to expose which advanced technologies are viewed as important by various countries around the world, e.g., China, India, Brazil and S. Africa; in other words, a multi-nation focus. There was debate but no resolution about this idea.
• It was not clear that the five-PMP collaboration was positive overall. Some serious follow-on discussion between the PMP chairs needs to take place. Perhaps the emphasis on climate change is too overpowering, yet there was also mention of adding a sixth PMP (infectious diseases).
• A better structure is needed, such as a working group designated by the PMP chairs to organize the 2010 activities.
• Internet use should be improved, including the use of such devices as a Google Group.
Erice 5-PMP Workshop, August 24, 2009, List of Participants
Bauer, Carl O., Energy, U.S., +1-412-386-6122, [email protected]
Bentley, Roger, Energy, U.K., +44-118-926-4000, [email protected]
Buonaguro, Franco H., Infectious Diseases, Italy, +39-0815903830, [email protected]
Calderone, Gina, Pollution, U.S., 845-687-9016, [email protected]
Difiglio, Carmen, Energy, U.S., +1-202-586-8436, [email protected]
Diop, Mbareck, Limits of Development, Senegal, +221-776333412, [email protected]
Ellis, Christopher, Limits of Development, U.S., +1-734-615-6149, [email protected]
Everett, Lorne, Pollution, U.S., +1-805-569-1010, [email protected]
Fulkerson, Bill, Energy, U.S., +1-865-988-8084, [email protected]
Griffin, Dale, Climate, U.S., [email protected]
Hisham, Khatib, Energy, Jordan, +96-265815316, [email protected]
Jackson, Peter, Energy, U.K., +44-1732-465316, [email protected]
Kininmonth, William, Climate, Australia, +61-398539395, [email protected]
Levine, Mark, Energy, U.S., +510-486-5238, [email protected]
Li, Mingyuan, Energy, China, +86-10-89734145, [email protected]
Martelluca, Sergio, Gen. Assembly Mem., Italy, +39-0672597206, [email protected]
McCombie, Charlie, Pollution, CH/UK, +41-792397486, [email protected]
Parker, Frank L., Pollution, U.S., +1-615-343-2371, [email protected]
Pozella, Yuras, Energy, Lithuania, +37-05267122, [email protected]
Pozo, Alberto Gonzalez, Limits of Development, [email protected]
Rispoli, James, Pollution, U.S., +1-919-792-2876, [email protected]
Sivaramakrishnan, K.C., Center for Policy Research, India, 91-11-26115273, [email protected]
Sprigg, William A., Climate, U.S., +1-520-626-8945, [email protected]
Stram, Bruce, Energy, U.S., +1-281-827-8572, [email protected]
Xia, Jun, Water, China, +86-1064889312, [email protected]
Yuan, Daoxian, CO₂, Environment, China, +86-773-5834232, [email protected]
REPORT OF THE PERMANENT MONITORING PANEL FOR THE MITIGATION OF TERRORIST ACTS: PMP-MTA
DR. SALLY LEIVESLEY
Newrisk Limited, Managing Director, London, UK
CBRN TERRORISM MITIGATION: ONE-SCIENCE FOR GLOBAL COOPERATION TO MITIGATE TERRORIST ACTS
The global emergency of terrorism requires the cooperation of scientists from many fields and the delivery of scientific solutions into a community during conditions of great confusion and crisis. The approach of the PMP-MTA in 2009 has been to build upon the previous years of work on the nature of chemical, biological, radiological and nuclear (CBRN) terrorism and to consider a 'one-science' approach to the problem.
From the overview of the Panel and observations of terrorism across the planet, it appears that there has been an increased capability of terror networks to access and deliver CBRN devices, due to many factors including the World Wide Web, assistance of nation states to terror groups and the widespread adoption of suicide tactics. Despite this new phase in the magnitude and rapid scalability of the terror risk, the outlook is positive. Terrorism is not moving towards a successful domination of countries and destruction of the well-being of the planet. This is because there is improved cooperation between nations in the sharing of intelligence, improved collection methodologies, and a significant amount of practical experience in reducing the threat following lessons learned each day from attacks in many countries.
International cooperation of the scientific community in basic and applied research and development is significant in mitigating CBRN terrorism if certain conditions can be created. The first condition is to develop a sustainable trust through direct, clear communication that facilitates contributions from science in solving the problem of terrorism mitigation and that will be listened to by governments, emergency responders and the public. The second condition is that it is possible to apply generic scientific principles that are directed at countering the effects of the terrorist's actions, motivations, and intent, and that will reduce the consequences of such actions.
As regards the mitigation of the risk of terrorist events, the history of the World Federation of Scientists provides an encouraging background. There is a trusted scientific community from most countries of the world, and the community has a focus on the practice of science in society for the protection of humanity, the planet, and future generations. The problem of terrorism mitigation is a challenge for the application of science for the betterment of society. The prevention or reduction of the consequences of attacks and the success of science under these conditions require trust and cooperation between scientists, governments and the public. This is a pre-condition for success in mitigation, and ultimately, prevention: that the scientific method and the science community are trusted
by all cultures to protect life and the way of life in each culture and the global, biodiverse, robust environment. This is a daunting task that is increasingly essential due to the conditions emerging in the global terrorist conflict.
The risk of improvised chemical, radiological, biological or nuclear devices is increasing with the active search by terror networks for scientific cooperation and for training and access to scientific institutions. Through public advertising, Al Qaeda is using sophisticated media messages to seek scientific workers and capability for the creation of weapons of mass destruction. Active use of science by terror networks is in evidence in their constant scanning of the internet in search of potential weapons, tactics and training. Terrorists are trained in formal academic institutions, and they train themselves through informal virtual connections between groups who share a mutual interest and ideologies expressed through collaboration on explosives or other soft-factor components of terror activity (i.e., recruiting and retention). The internet is the global communications medium for their scientific collaboration: the platform of the virtual 'university' or research facility, and a powerful enabler of their distributed operations.
Terror networks share through the internet videos of attacks on unsuspecting victims, infrastructure, and symbols. Some videos are shared in near-real time from the attack locations and can be viewed in different countries to encourage and train their supporters. The internet has also been used for command and control of the release of weapons, such as rockets controlled from remote locations. The terror networks create fear, persuasion and operational momentum, and graphic visual evidence is enhanced and reinforced through successful uptake by the mainstream media. In some situations the active use of media creates 'proxy hostages' when the victims of hostage taking are presented on video. This can be extended to countries becoming hostage in the future to threats of CBRN devices if any city is successfully contaminated or significantly destroyed by such devices.
Scientific innovation is recognized, and the progress of testing and development is shared on the internet by the mutual-interest groups within terror networks. Speed of innovation is evidenced through this unregulated and chaotic but determined search for solutions in improvised devices, tactics and psychological persuasion. The suicide bombers who may be credited with many destructive attacks on people, transportation and infrastructure and on hardened government targets are, in reality, the end of an industrial-scale production line where a sophisticated detonator, counter-electronics measures and highly effective shaped charges may be created by a community of capable laypersons, technologists, engineers and researchers collaborating across different countries.
The dimension of the CBRN terror problem has increased in recent years, because the tactics and destructive intent of terror networks have drawn some nation states into collaboration in weaponization and training. The trend has been widely reported since the militia-style attack on multiple targets in Mumbai in 2008.
However, the potential for far more organized, military-style threats to the world has long been evident in the Chechen groups, who have sought radiological materials for attacks on cities, and in the technology of submersibles and planes produced by the Tamil Tigers, along with viable chemical munitions. The use of one-science for mitigation of terrorism is a statement that there is a set
of methodologies and systematic solutions that may be applied to mitigate the impact of chemical, biological, radiological or nuclear materials on people, infrastructure and the environment. The one-science approach invokes the capability of countries to collaborate through the trusted international scientific community across both public and private sectors. Communities have an urgent need to identify and implement protective solutions for people faced with a grave threat of morbidity or mortality, and for systems to identify and protect emergency response personnel from life-threatening exposures to the physiologic and psychological impacts of devices that have deployed chemical, radiological, biological or nuclear effects.
The mitigation challenge from CBRN terrorism starts with the work of prevention and understanding of the mechanisms of weapons effects. It extends to protective solutions during the crisis of an attack and in the recovery period. The solutions for CBRN terrorism need to apply to people, infrastructure and the environment.
The PMP-MTA has established a work programme to lay a foundation for a one-science approach to mitigation. Work has begun on a systematic exploration of solutions and reports in the form of a living document on the World Wide Web for scientists to access from any country. This is open-source information and is based on solutions and observations that can be contributed to by the trusted scientific community within the WFS for the purposes of mitigation of CBRN terrorism. The spread of this information to scientific institutions, emergency response organizations and governments is the responsibility of the scientists. With collaborative programmes there is an extensive reach into the scientific community for this work.
The PMP-MTA has established a method for the development of applied science for CBRN terrorism mitigation and has set up a structure for the initial collection of information. This information is being developed in simple formats that will help the public and emergency response personnel to consider scientific advice in support of their response to any CBRN attack. In this year of work, 2009-2010, the PMP-MTA is focusing on the problems of immediate evaluation, to test whether there is a one-science approach with generic solutions and whether there is a simple format in which information can be presented. This initial phase of work is essential as a basis for the delivery of material that will be useful as a generic scientific response to CBRN attacks. The objective is to establish whether the one-science approach will deliver a solution for the public when first confronted with a CBRN attack crisis, for the emergency responders, and for decision makers in governments. Decision makers in governments and response organizations have a particular vulnerability to CBRN terrorism, as the novel conditions are an immediate threat to the security of the state, whether created by networks acting on their own motivation or driven by nation states within broader strategic conflicts.
The structure of the work in progress by the PMP-MTA is outlined in the contents of the documented work of the panel, which will be completed on the 25th of August 2009 in the first format for publication.
CBRN TERRORISM MITIGATION: ONE-SCIENCE FOR GLOBAL COOPERATION TO MITIGATE TERRORIST ACTS - INTRODUCTION AND OBJECTIVES
1. Immediate Evaluation: Nuclear and Radiological
2. Immediate Evaluation: Chemical and Biological
3. Scientific Near Real-time Evaluation and Risk Communications
4. Recovery 100 Days after Terrorist Acts
APPENDIX: SCENARIO TESTING METHODOLOGY
The work in August 2009 is directed at populating the structure of the information listed in the document format, with an initial approach by scientists to the problems of immediate evaluation for nuclear, radiological, chemical and biological terror attacks. The mitigation of the consequences of CBRN terrorism through delivery of information to the public and to the emergency responders is titled 'Scientific near-real time evaluation and risk communications'. This sketches the current and future possibilities, through technology and systematic processes, for changing the spatial-temporal distribution of the effects of CBRN devices, and of synergistic technologies (i.e., cyber capabilities), on the population. The capacity of science to measure, report and estimate under conditions of high uncertainty and crisis will be tested within this set of observations and information.
Recovery from CBRN terror attacks is a challenge that is limited to 100 days post-attack in the current work of the PMP-MTA. This is planned as the commencement of work to look at the criticalities in the 100 days post-attack and to work back from these criticalities to create solutions that may prevent or mitigate in the immediate crisis. The objective is to limit the more serious consequences and to reduce the risk of irrecoverable effects from CBRN devices. These effects include the denial of portions of cities for some generations if contamination cannot be mitigated under some conditions of attack with CBRN devices.
Part of the methodology being used is scenario testing and the development of meaningful metrics. In this phase of the work this is done by the PMP-MTA to test the scientific assumptions on the information requirements for immediate evaluation. It is different from processes whereby scenarios are developed to create the solutions.
The assessment of the PMP-MTA is that the work of mitigation of CBRN terrorism will run for three years to re-populate and adapt the information for scientists. It is assessed that the contributions of science and the needs from science may change after three years because of evolving threats and improved understanding of these threats.
CONTRIBUTORS
Chair: Dr. Sally Leivesley
Dr. Diego Buriot (Absent)
Professor Rob V. Duncan (Observer)
Bertil Galland (Observer)
Professor Richard L. Garwin
Dr. Vasiliy I. Krivokhizha
Dr. Alan Leigh Moore (Co-Chair, Absent)
Professor R. Rajaraman
Professor Annette L. Sobel, M.D.
Professor Friedrich Steinhausler
Professor Richard Wilson (Absent)
Lyudmila Steinhausler (Observer)
PERMANENT MONITORING PANEL ON CLIMATE ACTIVITY REPORT
WILLIAM A. SPRIGG
Institute of Atmospheric Physics, University of Arizona, Tucson, Arizona, USA
Contributors:
Mikhail J. Antonovsky, Franco Buonaguro, Christopher Essex, Dale W. Griffin, Yuri Izrael, William Kininmonth, Mark B. Lyles, Michael C. MacCracken, Garth W. Paltridge, Judit M. Pap, Herman H. Shugart, William A. Sprigg (chair), Jan Szyszko
A POLITICAL FORUM TO:
• Follow climate trends, consequences and research
• Recommend action when needed: policy response? Or study?
CLIMATE PMP ACTIVITY: HISTORICALLY
• Premise: interdisciplinary and part of an environmental continuum
• The Panel monitors activity and looks for program gaps, inconsistencies and needs or opportunities, e.g.,
  - Freedom of data and information access
  - Advances in modeling and availability of tools
  - Solar influences on climate
  - Environmental change and infectious disease
CLIMATE PMP ACTIVITY: TODAY
• PMP collaborations
  - Energy, Limits to Development and Water PMPs
  - (2010: All of the above and Infectious Diseases PMP)
• Annual Erice Seminars
  - Information exchange: state-of-the-science (Climate sessions)
  - Panel Meeting, 19 August 2009
CLIMATE PMP PROPOSALS
• Addressing the dysfunctional family of climate science, political intervention and public frustration
  - A work in progress
CLIMATE PMP PROPOSALS FOR ERICE SEMINARS 2010:
• Planetary Engineering
  - The likelihood grows that large climate and environmental changes, some essentially irreversible, lie ahead over coming decades.
  - Direct human intervention to reduce Earth's absorption of solar radiation has been proposed, along with other ideas, to complement CO₂ emission reduction to limit global warming.
  - We propose a series of speakers to review ideas for planetary engineering that consider limiting or reversing the most severe projected consequences of climate change.
• What does changing climate mean for changing weather as it affects regions and communities?
  - Global climate will alter local climate and weather, including rain, temperature, cloudiness, humidity, wind, etc., within which vital infrastructures are adapted.
  - Community resilience in meeting local weather and climate extremes is enhanced if the likely direction of change of local and regional climatic parameters is anticipated.
  - We propose a series of speakers to review our understanding of 'downscaling' global climate change into characteristics of local weather and climate.
• Solar Influences on Earth's Climate
  - Understanding variability of the Sun and incoming cosmic rays will help establish the degree to which anthropogenic effects contribute to climate change.
  - Current analyses of data and solar irradiance models can neither confirm nor deny their influence on inter-decadal and shorter climate time scales, e.g., (a) three decades of measurements are not enough, (b) poor measurement accuracy, (c) large data gaps, and (d) lack of proxy data.
  - We propose to (a) review the state-of-the-science and what it means to understanding climate change, and (b) recommend appropriate actions.
A JOINT CLIMATE AND INFECTIOUS DISEASES PMP PROPOSAL FOR 2010:
• Impacts of airborne desert dust on human health ... and how climate variability can change the risks
  - Desert-dust health investigations are revealing the toxicity of source soils and dust collected from dust storms.
  - Soils subject to becoming airborne contain bacterial, fungal and viral pathogens, heavy metals and anthropogenic pollutants.
  - Acute and chronic human-health effects following dust exposure have been documented.
  - We propose a workshop to review what is understood about airborne dust (and what's in it), the health consequences of airborne dust, and the adequacy of current research to address shortcomings.
THE CLIMATE PERMANENT MONITORING PANEL
Erice Seminars - August 2004
William A. Sprigg
Department of Atmospheric Sciences, Institute of Atmospheric Physics
The University of Arizona
P.O. Box 210081
Tucson, AZ 85721 USA
Tel. 520-626-8945
FAX 520-621-6839
[email protected]
http://www.atmo.arizona.edu/faculty
PERMANENT MONITORING PANEL ON INFORMATION SECURITY
REPORT FROM THE CO-CHAIRS
AMBASSADOR HENNING WEGENER
Ambassador of Germany (ret.), Information Security Permanent Monitoring Panel, World Federation of Scientists, Madrid, Spain
JODY R. WESTBY, ESQ.
Global Cyber Risk LLC, CEO, Washington, DC, USA
In the 2008/2009 period, the PMP concentrated its work on a few key events and documents. It also successfully broadened its international networking, essential in an area so dependent on global reflection and action.
In the final weeks of 2008, the group worked on putting together a high-level gathering on the topic "Harnessing Cyber Conflict: The Quest for Cyber Peace". The subject of cyber security had been selected by the WFS as the theme of the Scientific Session traditionally following the Award Ceremony of the Erice Peace Prize, this time in its 2007 edition. The event was organized jointly by the WFS and the Papal Academy of Sciences and took place on December 17, 2008 at the Vatican. The panel of speakers included, apart from members of the PMP, the Secretary General of the ITU, the head of the new Cyber Defense Programme of NATO and other prestigious experts. The presentations are available in a special segment of the PMP's website at www.unibw.de/infosecur, which can also be accessed from the WFS home page.
In its themes and substance, the Rome meeting was a follow-up to the Plenary meeting at the International Seminars in August last year on "The Crisis in Internet Security" (now available in the records of the 40th session), and was, in turn, followed by the session entitled "Cyber Conflict v. Cyber Stability: Finding a Path to Cyber Peace" which we put together for this year's Seminar. The three events, especially if viewed together, demonstrate the recent dramatic rise in cyber threats: the tremendous growth in actual attacks, the new magnitude of societal vulnerabilities, the growing sophistication and financial power of organized cyber crime, and the threats not only to the stability of digital systems and networks, but to the stability of our societies as a whole. There has occurred a quantum jump in risk, and a shift of concern from mere economic exposure to cybercrime to the more comprehensive notion of cyber conflict, including cyberwar. The focus of the work of the PMP has accordingly shifted to dealing with cyber conflict and the requirements of cyber stability. There has also emerged the concept of cyberpeace, a term which we ourselves felicitously coined for the Rome meeting. Cyberpeace, meant to be a positive concept and an important ingredient of a universal order of cyberspace, will need to be fleshed out further in our work.
During the year, the PMP produced two major documents, both designed to generate greater public awareness and dedicated action by important cyber stakeholders. Our document "Top Cyber Security Problems That Need Resolution" was circulated at the Rome session and then presented at an ITU meeting in Geneva in May of 2009; it is available on the PMP website. The ITU has incorporated the paper in its work on the
Global Cybersecurity Agenda (GCA). We intend to update the document periodically and to broaden its distribution world-wide. The other document is the Erice Declaration on Principles for Cyber Stability and Cyber Peace, adopted by this session and to be given wide international distribution. The Secretary General of the ITU, an associate member of our group, has offered to send the document on his own behalf to all ITU member countries and to the UN Secretary General. It is our hope that the document will set in motion national and international action as required, and that WFS members will also distribute it to their national political leaders, urging their attention to the Declaration. Both documents, the one on Top Cyber Security Problems and the Declaration, are annexed to this Report.
Following the Rome meeting, the Secretary General of the ITU, who significantly took part in and spoke at the three major meetings mentioned above, has invited the PMP to co-author an ITU book entitled "The Quest for Cyberpeace", borrowing the formula from our Rome event. We have accepted the invitation and will provide large portions of the manuscript, with a view to publication of the book in 2010. We will also continue to develop, in concert with other organizations, the ITU's planned Toolkit for the Promotion of a Global Culture of Cybersecurity.
As regards further international networking and participation in international events, two members of the PMP have again been prominent participants in the so-called ITU Cluster of meetings on ICT and information security in Geneva in May. One member of the PMP participated in a Conference of the Council of Europe on Cyberterrorism, and another has participated in various ITU regional events on its Global Cybersecurity Agenda. Several members have published articles in recognized journals, identifying their role in the WFS and the PMP. Broadening our PMP membership and knowledge base, we have invited the three outside speakers at this year's cyber security session to become associate members of the PMP, which they have accepted.
A successful cybersecurity policy depends vitally on raising awareness of the threats among all stakeholders, including civil society in its broadest sense. We are thus undertaking not only a further upgrading of our website, but also plan to create a cybersecurity portal that can link us up with academia and the younger generation.
SESSION 13 INFORMATION SECURITY PANEL MEETING
WORLD FEDERATION OF SCIENTISTS PERMANENT MONITORING PANEL ON INFORMATION SECURITY Erice Declaration on Principles for Cyber Stability and Cyber Peace
It is an unprecedented triumph of science that mankind, through the use of modern information and communication technologies (ICTs), now has the means to expand economic resources for all countries, to enhance the intellectual capabilities of their citizens, and to develop their culture and trust in other societies. The Internet, like science itself, is fundamentally transnational and ubiquitous in character. The Internet, and its attendant information tools, is the indispensable channel of scientific discourse nationally and internationally, offering to all the benefits of open science, without secrecy and without borders.
In the twenty-first century, the Internet and other interconnected networks (cyberspace) have become critical to human well-being and the political independence and territorial integrity of nation states.
The danger is that the world has become so interconnected and the risks and threats so sophisticated and pervasive that they have grown exponentially in comparison to the ability to counter them. There is now the capability for nation states or rogue actors to significantly disrupt life and society in all countries; cybercrime and its offspring, cyber conflict, threaten the peaceful existence of mankind and the beneficial use of cyberspace.
Information and communication systems and networks underpin national and economic security for all countries and serve as a central nervous system for response capabilities, business and government operations, human services, public health, and individual enrichment.
Information infrastructures and systems are becoming crucial to human health, safety, and well-being, especially for the elderly, the disabled, the infirm, and the very young. Significant disruptions of cyberspace can cause unnecessary suffering and destruction.
ICTs support tenets of human rights guaranteed under international law, including the Universal Declaration of Human Rights (Articles 12, 18 and 19) and the International Covenant on Civil and Political Rights (Articles 17, 18, and 19). Disruption of cyberspace (a) impairs the individual's right to privacy, family, home, and correspondence without interference or attacks, (b) interferes with the right to freedom of thought, conscience, and religion, (c) abridges the right to freedom of opinion and expression, and (d) limits the right to receive and impart information and ideas to any media and regardless of frontiers.
ICTs can be a means for beneficence or harm, hence also an instrument for peace or for conflict. Reaping the benefits of the information age requires that information networks and systems be stable, reliable, available, and trusted. Assuring the integrity, security, and stability of cyberspace in general requires concerted international action.
THEREFORE, we advocate the following principles for achieving and maintaining cyber stability and peace:
1. All governments should recognize that international law guarantees individuals the free flow of information and ideas; these guarantees also apply to cyberspace. Restrictions should only be as necessary and accompanied by a process for legal review.
2. All countries should work together to develop a common code of cyber conduct and harmonized global legal framework, including procedural provisions regarding investigative assistance and cooperation that respects privacy and human rights. All governments, service providers, and users should support international law enforcement efforts against cyber criminals.
3. All users, service providers, and governments should work to ensure that cyberspace is not used in any way that would result in the exploitation of users, particularly the young and defenseless, through violence or degradation.
4. Governments, organizations, and the private sector, including individuals, should implement and maintain comprehensive security programs based upon internationally accepted best practices and standards and utilizing privacy and security technologies.
5. Software and hardware developers should strive to develop secure technologies that promote resiliency and resist vulnerabilities.
6. Governments should actively participate in United Nations' efforts to promote global cyber security and cyber peace and to avoid the use of cyberspace for conflict.
The Erice Declaration on Principles for Cyber Stability and Cyber Peace was drafted by the Permanent Monitoring Panel on Information Security of the World Federation of Scientists (WFS), Geneva, and adopted by the Plenary of the WFS on the occasion of the 42nd Session of the International Seminars on Planetary Emergencies in Erice (Sicily) on August 20, 2009.
WORLD FEDERATION OF SCIENTISTS: PERMANENT MONITORING PANEL ON INFORMATION SECURITY
TOP CYBER SECURITY PROBLEMS THAT NEED RESOLUTION TO ADDRESS THE PLANETARY EMERGENCY REGARDING THE INSECURITY OF GLOBAL COMMUNICATIONS
The World Federation of Scientists Permanent Monitoring Panel on Information Security (InfoSec PMP) believes that it is imperative that all countries begin to address the problems that enable cyber security risks and to seek mechanisms by which solutions and approaches can more readily be shared, with a goal toward harmonized solutions and greater communication security. Collaborative arrangements between governments, the research community, legal experts, and industry on the issues that underpin security risks to communications will both expand the reach of the solution and more rapidly advance cyber security.
Considering the technological innovations and the changing threat environment, the InfoSec PMP sought input from cyber security experts around the globe regarding the most serious problems that need resolution if the global crisis in the lack of security in communications is to be addressed.1 In addition, the InfoSec PMP analyzed prior work in this area and included previously identified problems that continue to create security risks. The Computing Research Association (CRA) developed a report in 2003, Four Grand Challenges in Trustworthy Computing, in which they identified four challenges "aimed at immediate threats, emerging technologies, and the needs of future computing environment over a much longer term."2 In 1997, the INFOSEC Research Council (IRC) developed a Hard Problems List (HPL), which was published in 1999 and updated in 2005.3 Since then, neither of these documents has been updated, nor has any new list reached a level of prominence. Moreover, there is a seeming complacency by governments and private sector entities alike in recognizing the urgency of advancing cyber security. The InfoSec PMP hopes that its efforts in updating and advancing a Top
1. Robert E. Kahn, "The Role of Identifiers in Global CyberSecurity," Presentation to the World Federation of Scientists, Erice, Sicily, Aug. 21, 2008, on file with Jody Westby; Chet Hosmer, "Critical Cyber Security Problems," Aug. 2008, on file with Jody Westby; KC Claffy, "Top Problems of the Internet and How To Help Solve Them," CAIDA, http://www.caida.org/publications/presentations/2005/topproblemsnet/; James Mulvenon, O. Sami Saydjari (ed.), "Toward a Cyberconflict Studies Research Agenda," IEEE Security & Privacy, IEEE Computer Society, 2005; input also was obtained from Himanshu Khurana, Principal Research Scientist, Information Trust Institute, University of Illinois at Urbana-Champaign, and from Michael Bailey, Assistant Research Scientist, Electrical Engineering and Computer Science, University of Michigan; see email from Himanshu Khurana to Jody R. Westby, Aug. 6, 2008, and email from Michael Bailey to Jody R. Westby, Nov. 12, 2008.
2. Four Grand Challenges in Trustworthy Computing, Computing Research Association, Nov. 16-19, 2003, http://www.cra.org/Activities/grand.challenges/security/.
3. INFOSEC Research Council (IRC): Hard Problems List, INFOSEC Research Council, Nov. 2005, www.cyber.st.dhs.gov/docs/IRC Hard Problem List.pdf.
Cyber Security Problem list will ignite new and collaborative efforts in addressing these issues. The PMP will make its Top Cyber Security Problem list available to the United Nations International Telecommunication Union (ITU), for its consideration in its Global Agenda on Cybersecurity, and to nation states, universities, and other multinational fora, such as the European Commission, the Organization of American States (OAS), the Asia-Pacific Economic Cooperation forum, and ASEAN, in the hope that this will spur attention to these critical issues and encourage collaboration. The InfoSec PMP will continue to work with all interested stakeholders to refine its Top Cyber Security Problem list and will update and reissue it accordingly.
Taking a multidisciplinary approach, the PMP has divided the Top Problems into three categories: legal, policy, and technical.
Legal
• Develop international law to accommodate cyber warfare offensive and defensive activities, thus making it operative for the cyber age.
• In that regard, elaborate on the UN Charter in the direction of topical interpretations: define Article 2 armed attack and Article 51 limits of self-defense, define the concept of cyber weapon, define operational modes for Chapter VII action in case of cyber attack, and develop and analyze scenarios of cyber war and cyber terrorism with a view to their legal consequences.
• Drawing upon the Bucharest Summit Declaration4 and previous InfoSec PMP work in analyzing gaps in the international legal framework with respect to collective response, develop proposed amendments to NATO Treaty definitions of armed attack and territorial integrity and clarification of collective responses to accommodate collective cyber activities, self-defense actions, and communication requirements.
• Encourage the ratification of the Council of Europe Convention on Cybercrime ("Convention") and internal implementation by signatory states, and, where this does not obtain, encourage the harmonization of cybercrime laws (substantively and procedurally) around the globe consistent with the Convention and the cybercrime laws enacted in developed nations. The InfoSec PMP supports the efforts of the International Telecommunication Union's (ITU) Global Cybersecurity Agenda in this regard and encourages use of the ITU Toolkit for Cybercrime Legislation in developing national cybercrime legislation.5
4. "Bucharest Summit Declaration Issued by the Heads of State and Government participating in the meeting of the North Atlantic Council in Bucharest on 3 April 2008," http://www.summitbucharest.ro/en/doc 201.html.
5. See "Legislation and Enforcement, ITU Toolkit for Cybercrime Legislation," United Nations, International Telecommunication Union, http://www.itu.int/ITU-D/cyb/cybersecurity/legislation.html.
Policy
• Improve awareness and education of the various levels of users to enable them to safely and responsibly use ICTs and protect their systems through user-friendly and easy-to-use self-defense methods.
• Encourage the development and implementation of a Cyber Code of Conduct to enable a global culture of responsible cyber citizenship.
• Promote the evolution of computer emergency response teams (CERTs) toward multidisciplinary Cyber Response Centers that can respond to cyber incidents or attacks and coordinate technical, legal, operational, and policy considerations to ensure a holistic and effective response.
• Improve international cooperation and 24/7 points of contact, including improved skill levels in law enforcement and cyber investigations, between all countries connected to the Internet.
• Promote cyber security with assurance of privacy through compliance with privacy laws, especially in the context of data mining and digital surveillance.
• Identify and fund collaborative projects to advance solutions to priority issues on a global basis.
Technical
• Develop enterprise-level security metrics so security progress can be measured. Quantitative information systems risk management for security needs to be at least as good as quantitative financial risk management.
• Enable time-critical system availability and resiliency across distributed systems. Enable the use of advanced information and communication technologies, stimulate the interoperability between communication systems and devices, and improve the efficiency, reliability and safety of systems for power delivery and use. Such systems exemplify future critical infrastructures that are heavily dependent on extensive communication systems and are often connected to the open and vulnerable Internet. Research is needed in developing "resilient" control systems that provide trustworthy interactions between communication systems and physical infrastructures to ensure resilience in the face of cyber attacks.
• Enable information management at the data structure level, in particular for data structures that represent identity information, to ensure the identification, authentication and authorization of communications and to allow seamless, secure information management beyond the limits of current public key infrastructure.
• Address the security challenges of mobile/wireless systems. The widespread and exponential deployment of such devices and systems presents security challenges in and of themselves and risks to the interconnected systems and devices.
• Identify the security risks and opportunities associated with virtual systems and cloud computing to enable their deployment and interconnection with increased security of information, applications and networks.
• Improve the ability to track and trace cyber communications to enable source identification (accountability) and use of digital assets by technical means, reducing the reliance on cooperation between Internet Service Providers, while safeguarding privacy.
• Develop tools that protect privacy and enable audits of activity in environments that involve data mining, digital surveillance and profiling for personalized services, and in the protection of personal and business data.
• Improve access to information provenance so as to enable users to track the pedigree of every byte of information in exabyte-scale systems transforming terabytes of data per day. The development of such tools should take into account the challenges of volume of information and the degree of automated processing and transformation.
• Improve transparency of network operations to enable visibility of activities, knowledge of the status of operations, and identification of issues as a diagnostic tool to enhance security.
• Develop digital identification mechanisms to protect and advance the interconnection of devices, information, and networks. Develop an identification framework that identifies personal users in their use of networked devices.
• Place higher emphasis on cryptography, especially by developing cryptologic algorithms that will withstand future challenges, including those identified with quantum computing.
• Identify and fund collaborative projects to advance security solutions on a global basis.
InfoSec PMP Members
• Amb. Henning Wegener (Ret'd.), Chair, World Federation of Scientists Permanent Monitoring Panel on Information Security, [email protected]
• Jody R. Westby, Esq., Vice Chair, World Federation of Scientists Permanent Monitoring Panel on Information Security & CEO, Global Cyber Risk LLC, [email protected]
• Dr. William A. Barletta, Massachusetts Institute of Technology & Director, U.S. Particle School, [email protected]
• Dr. Axel Lehmann, Professor, Universitaet der Bundeswehr Muenchen, [email protected]
• Dr. Vladimir Britkov
• Dr. Udo Helmbrecht
• We need our new members from Latvia and Belarus (??) and who am I missing?
WORLD FEDERATION OF SCIENTISTS: PERMANENT MONITORING PANEL ON INFORMATION SECURITY QUEST FOR CYBER PEACE
Theme
This joint publication should aim to promote the concept of international cyberpeace by tackling the issues of cybersecurity and cyberwarfare, with special emphasis on the importance of fundamental rights such as freedom of expression and access to information in cyberspace.
Abstract
Before the advent of the information society, power and leadership were usually held by those with political authority, military superiority and economic dominance. Nation states and elite international entities dictated social norms and values. Today, however, the internet has drastically shifted the balance of power which has existed for centuries. The individual can now thwart authority and paralyze an entire infrastructure with the simple click of a mouse. Access to information and the freedom to disseminate it to the world at an individual level has not only altered traditional perceptions of foreign values but has also helped to dispel pre-conceived ideas and biased judgments. The internet has instigated the propagation of knowledge and information at an unprecedented level in world history. All areas of society have been influenced, and it is undeniable that the internet has spurred social and economic development at an exceptional rate, generating society's dependency on ICTs.
The internet has enabled the empowerment of the individual, the expansion of the self, and the propagation of uncommon ideas and values to the whole world, regardless of borders and geographical obstacles. A global influence that was once only possible for powerful nations to exert is now in the hands of individuals worldwide. For example, artists, musicians and writers have become world famous by single-handedly using social networking tools to their advantage. On the other hand, individuals have used their expertise in information technology to exploit vulnerabilities in the system. Cybercrime is a relatively recent phenomenon, but although the means are new, the intentions have always been present in human nature. For this reason, cybersecurity plays a vital role: attacks, hacks, theft, spam and fraud all undermine the reliability of the internet, and the infrastructure must be protected against them. Although solutions are always developed to patch and protect, the real, and probably most dangerous, problem is when nation states themselves employ such tactics to wage cyberwar. It is a fact that political and military conflicts spill over into cyberspace, effectively undermining trust in ICTs. The perpetrators are no longer individuals acting alone or criminal organisations; they are governments, who will deny today their engagement in cyberwarfare but will cry out openly tomorrow when they are the victims of other nations' cyberattacks. Cyberwarfare can have life-threatening consequences
when critical information infrastructures are impaired. It can also lead to propaganda, causing unfair discrimination and favouring gratuitous xenophobic and racist sentiment. However, this is not a valid enough reason to warrant limiting an instrument that has proven to be an exceptional catalyst for development in so many areas of our modern societies. The advantages of cyberpeace far outweigh the destructive consequences of cyberwarfare.
Human nature and centuries of history cannot change overnight, so it is essential that cybersecurity becomes a priority to ensure stability and trust in the internet. A global culture of cybersecurity can only prevail if guaranteed protection is given to data, privacy, freedom of expression and access to information. These fundamental rights are a prerequisite for the positive development of the information society and for building confidence among users of ICTs, without which insecurity prevails and the infrastructure can only be undermined. Cyberpeace can only enforce mutual cooperation between countries and improve political relationships. This can be beneficial to countries, international organisations and specific sectors such as law enforcement, academia and trade. ICTs and the internet can provide a positive international framework for collaboration between countries, leading to a better understanding and acceptance of differing cultural and societal values worldwide.
Chapter Index I. INTRODUCTION The aims and objectives of the paper are to stress the importance of international stability and harmony in today's networked world non-withstanding existing and potential threats and that guaranteeing certain fundamental rights can only help to enhance cybersecurity and attain cyberpeace. 2.
CYBER RIGHTS & CYBER LIBERTIES
2.1. Value of Cyber Ethics The importance of applying minimum ethical standards in cyberspace. A global culture of cybersecurity can only prevail if guaranteed protection is given to data, privacy, freedom of expression and access to information. These fundamental rights are a prerequisite for the positive development of the information society and for building confidence among users of ICTs, without which, insecurity prevails and the infrastructure can only be undermined. 2.2. Catalysts of the Internet - Freedom of Expression & Access to Information How these fundamental human rights have enabled the rapid evolution of the internet. Preserving them while at the same time, maintaining security and peace in cyberspace are not and should not incompatible. 3.
MODERN CONTEXT
3.1. Evolution of the Information Society and Socio-Economic Developments How lCTs and the internet have changed societies and spurred economic growth. 3.3. Paradigm Shift - National Supremacy vs. Individual Empowerment
623 How the balance of power has shifted from Supreme Nation States to individual end-users.
4.
CYBERSECURITY
4.1. Modern Societies' Dependency on ICTs & the Internet How ICTs have changed the structure of society and made them cyber-dependent. 4.2. Necessity for Trust in the Infrastructure How stability is a critical factor for the correct functioning of the infrastructure. 4.3. Social Implications of Cybercrime How cybercrime can undermine the trust of the end-user and society in the system. 5.
5. CYBERWARFARE
5.1. Public Proscription of Cyberattacks vs. Government-led Cyberwar
How states legislate against cyberattacks but secretly practice cyberespionage with political aims.
5.2. CIIP Threats, Civilian Retaliation and other Nefarious Consequences
How impairment of CII can endanger society, and how end-users can wage personal cyberwars with the same strength as nation states.
6. CYBERPEACE
6.1. Benefits of Cyberpeace for Internet Stability
How cyberpeace and the protection of fundamental human rights such as privacy and freedom of expression can enhance cybersecurity and benefit the positive development of the Information Society.
6.2. Solutions for Political Harmony through International Cooperation
How cyberpeace can bring about political harmony and international cooperation worldwide.
7. CONCLUSION
ICTs & the internet have enormous potential. Nations should be prepared for the worst but should primarily focus their efforts on international cyberpeace and mutual respect.
SESSION 14 LIMITS OF DEVELOPMENT PANEL MEETING
ABOUT QUESTIONS TO BE DISCUSSED ON OCCASION OF THE 2009 ERICE MEETING OF THE PMP LIMITS OF DEVELOPMENT: THE SITUATION IN ARGENTINA
JUAN MANUEL BORTHAGARAY AND ANDRES BORTHAGARAY
University of Buenos Aires, Instituto Superior de Urbanismo, Buenos Aires, Argentina
ABSTRACT
Argentina's electric generation system is unbalanced, too dependent on thermal sources, which tend to increase with the yearly addition (since 2003) of the 1,000 MW needed to keep pace with the growth of demand. The thermal plants are mostly fed by highly polluting fuel oil. This situation could be significantly improved if thermal plants were to burn hydrogen instead. Hydrogen could be economically produced by electrolysis using eolic (wind) or solar energy. Both renewable sources are bountiful in immense, unpopulated territories in Patagonia and the Andes foothills. Wind and solar farms would bring economic activities and open opportunities for the development of said desert territories. Use of hydrogen in thermal plants would spare considerable volumes of fossil fuels for other uses that cannot do without them, thus relieving pressure on imports or on further exploration and production of fossil fuels. Even with increased energy use, the total balance of GHG could be significantly improved.
1. ENERGY IN ARGENTINA
Inputs to the national interconnected system administered by CAMMESA (Compania Administradora del Mercado Mayorista Electrico, or Wholesale Electrical Market Administration Company):
• Thermal    15,177 MW    57.626%
• Hydro      10,156 MW    38.562%
• Nuclear     1,005 MW     3.82%
• TOTAL      26,337 MW     100%
Use of electricity broken down by sector (units = million tons of oil equivalent; overall rate of growth 1980-2005: 1.84% per year; year 2005 total: 41.946 MToe):
• Residential & Commerce    31%    (growth 2.95%)
• Transportation            31%    (growth 0.97%)
• Industry                  26%    (growth 1.02%)
• Agriculture               10%    (growth 4.76%)
(Source: Instituto Nacional de la Energia "General Mosconi")
Energy demand has been growing (2002-2006) at a steady rate of 1,000 MWe per year; if this amount is not incorporated yearly, an ominous gap grows very fast towards critical blackout levels.
Figure: Generacion de Energia Electrica - Mercado Electrico Mayorista (wholesale electricity generation), 2002-2006.
Source: "La energía eólica en Argentina: situación actual y perspectivas", Jorge Lapeña, Instituto Argentino de la Energía "Gral. Mosconi".
2. HOW ARE ENERGY EFFICIENCY TECHNOLOGIES IN EACH SECTOR OF THE ECONOMY BEING DEVELOPED AND SOLD?
2.1 Eolic
Motivated by the exceptional average wind speeds at ground level over the immense Patagonic area, two important corporations of entirely local capital, INVAP S.E. and IMPSA (see 1.2 and 1.6), are working to develop commercial versions of wind generators: a 1 MW machine by IMPSA and a 1.5 MW one by INVAP. They have reached the stage of advanced prototypes. They have put together adequate R&D personnel, laboratories and centres for human resource formation. Along the way, qualified job opportunities have sprung up, and these high-value-added developments are to be capitalized internally. Considering a recent cost for thermal generation of one million u$s per installed MWe, it seems quite likely that INVAP or IMPSA generators will be competitive even without enjoying any provincial, national or international incentives, which they should be entitled to receive. Moreover, in operational costs, thermal or combined-cycle plants must include fuel, either natural gas or fuel oil, already a critical input in Argentina, which depends on imports to fill a growing gap between production and demand. If the fuel item can be scratched from the eolic alternative, it starts ahead and runs with a growing advantage. So far, wind farms with a total output of 27,760 kWe are installed and turning (see 6). Smaller outfits, targeted at generating electricity for the rural residential market or at pumping water for human or livestock drinking, are omnipresent in the Pampa's landscape and have been working for a century.
There is also a quasi-equivalent offshore potential, but with these values of inshore potential, which is much cheaper, there is no need to consider it until the first is exploited to its full capacity, a moment that seems quite remote.
2.1.2 Inshore
In Argentine Patagonia there are 692,905 km2 of territory with a population density of 2.2 inhabitants/km2 and average wind speeds of 8 to 10.5 m/s, peaking at 12 m/s. There are very extensive areas with an average wind speed of 9 m/s, with extreme values of 11.2 m/s in Comodoro Rivadavia and 10.8 m/s in Rada Tilly. The estimated potential is 2,000 GW, of which, at the present moment, only 30 MW are installed and performing (see 6). Since it is generally agreed that wind potential is worth exploiting commercially from a 4 m/s average speed upwards, and that wind potential is a cubic function of average wind speed expressed in m/s, these figures deserve to be considered. The cube of four is 4 x 4 x 4 = 64. In extensive areas close to the localities of Comodoro Rivadavia and Rada Tilly, average wind speeds peak at 12 m/s; the cube of twelve is 12 x 12 x 12 = 1,728, and 1,728/64 = 27, i.e., 27 times the commercial threshold. Another cubic function that deserves our attention is the flexion of a mast with a side force applied at its top. Since Patagonian average wind speeds occur at ground level, they can be well exploited with masts 20 m high. These are not only much less offensive to the landscape than the 60- to 90-m masts current in Europe and the U.S., but they would be submitted to stresses of 20 x 20 x 20 = 8,000 Tm for each ton at the top, as compared to 70 x 70 x 70 = 343,000 Tm for each ton on 70-m-high masts. Windmill generators custom made for Patagonian conditions are an entirely different product, to be designed from scratch. It can be assumed that initial cost, maintenance, and the incidence of vibration and fatigue of materials will be proportional to the stresses that the towers have to endure. This set of advantages could make wind farm generators cheaper and affordable, maybe not as a mom-and-pop backyard business, but within reach of cooperatives, leveraged with a good credit plan promoted by provincial banks. The availability of cheap, clean electricity is likely to attract electricity-intensive industries of all kinds to these vast, vacant territories.
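A minimal Python sketch of the cubic arithmetic quoted above. It only reproduces the ratios as stated in the text (the wind resource relative to the assumed 4 m/s commercial threshold, and the 20 m versus 70 m mast comparison taken at face value); it is not a structural or aerodynamic model.

def cubic_ratio(value, reference):
    """Ratio between two quantities that scale with the cube of their argument."""
    return (value / reference) ** 3

# Wind resource: a 12 m/s Patagonian site versus the 4 m/s commercial threshold.
print(cubic_ratio(12, 4))      # -> 27.0, i.e., 27 times the threshold figure of merit

# Mast comparison quoted in the text (20 m versus 70 m):
print(20 ** 3, 70 ** 3)        # -> 8000, 343000 (the "Tm per ton at the top" figures)
print(cubic_ratio(70, 20))     # -> ~42.9, the relative penalty of the taller mast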
2.2 Solar
The photovoltaic (FV) market is segmented into three types of demand:
• Rural
• Professional or enterprise
• Institutional
Total domestic demand for FV units kept a steady yearly increase of between 20% and 50% until 1999, when it went up to as much as 1,000 kW per year. From that year on, and quite markedly after the devaluation, demand for FV units saw a strong regression that only started to reverse during 2003. Rural demand is met by systems of FV modules of 50-80 W for battery recharging at rural homes or posts, lighting systems of 30 to 100 W, and units of 50 to 400 W to power small water pumps replacing old windmills. The rural sector shows the biggest growth up to 1998. In the professional, or enterprise, sector, a few corporations, around one dozen, among them the telecoms, are the biggest buyers. The main uses are energy provision for communications, telemetry, signalling, emergency highway systems and cathodic protection. The common characteristic of this equipment is that it has to deliver power at remote or difficult-to-access points. Installation sizes vary widely, from 20 to 50 W for small emergency highway service points, through 100 to 400 W for repeater stations, up to 20 kW to feed blocking valves in gas pipelines. The institutional sector includes social assistance programmes, energy-regulating entities and provincial energy corporations, whose role is to provide small amounts of electricity to rural communities isolated from distribution networks. The demand of this sector is fundamentally subject to provincial jurisdictions and typically uses outfits that provide electricity for lighting and social or institutional communication (schools, medical dispensaries, police stations and the like) and for some residential users. The typical nominal power of these outfits varies between 50 and 400 W. Institutional demand has seen important growth since the start of the World Bank project of Renewables for Rural Electric Markets (PERMER), agreed in 1999. The project contemplates partial financial assistance for the installation of 70,000 domestic solar systems in seven provinces. At an average of 100 W, this means approximately 7 MW to be installed over a 5-year period. The magnitude of PERMER is such as to double the demand of the Argentine photovoltaic (FV) market of 2000. There are at present several firms that offer FV panels, although only one is a manufacturer, Solartec S.A., while the others are agents or distributors of foreign corporations: BP-Solar, Shell, Siemens/Solar, Total Energy, etc. Market prices have seen a considerable reduction, especially noticeable in the institutional and professional sectors, where purchase orders are quite large. Prices vary between 4 and 7 u$s/W for the FV module and between 7 and 10 u$s/W for an installed basic system (with no DC-AC converter), with a strong dependence on market niche and size of purchase.
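A short, illustrative Python sketch of the PERMER sizing and cost arithmetic implied by the figures above (70,000 systems of about 100 W each, at 7-10 u$s per installed watt); the totals are rough orders of magnitude, not programme budget figures.

systems = 70_000                  # domestic solar systems planned under PERMER
avg_watts = 100                   # average nominal power per system (W)
price_low, price_high = 7, 10     # installed-system cost range, u$s per W (from the text)

total_w = systems * avg_watts
print(total_w / 1e6, "MW")                          # -> 7.0 MW, matching the text
print(total_w * price_low / 1e6, "M u$s (low)")     # -> 49.0 M u$s
print(total_w * price_high / 1e6, "M u$s (high)")   # -> 70.0 M u$s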
It is good to consider what has happened with two ambitious projects to increase generation through non-GHG sources. Because of the lack of continuity in energy policies, and the disorganization of the State, which opens gaps for corrupt practices, the Yacireta hydroelectric project, a bi-national endeavour with Paraguay, suffered decades-long delays, and its cost was multiplied manifold.
THE HYDROELECTRIC CASE - YACIRETA
• Estimated initial cost: approx. u$s 5,000 M; so far: over u$s 12,000 M
• Estimated building time: 10 years; so far: over 20 years
The plant has been partially in operation since 1998, but the lake has not yet been filled to the design height. Important and expensive works in the perimeter of the lake, in areas to be flooded, need completion. The same could be said about the nuclear option, in the case of the third nuclear plant, Atucha II.
THE NUCLEAR CASE - ATUCHA II
• Estimated initial cost: u$s 1,800 M; so far: u$s 4,000 M
• Estimated building time: 7 years (construction was interrupted between 1992 and 2003); so far: approx. 30 years
The easiest and quickest solution is therefore the thermal one, as with the brand new combined-cycle plants of Campana and Timbues, each of 860 MWe, which will increase the system's unbalanced dependence on thermal sources. This information shows that, in spite of dwindling oil and gas production and reserves, Argentina's electricity generation is increasingly dependent on fossil-fuel-fed, GHG-emitting thermal plants. The addition of middle- to big-size thermal units is a more realistic option than hydroelectric or nuclear mega-projects. They tend to be completed faster, they can be erected at the most convenient places to meet local demand, and their presence is not offensive: all they need is a regularly sized parcel in any current industrial park. But they burn fossil fuels and release large quantities of GHG. What if they burned hydrogen instead? They would be as clean as hydroelectric plants, with a far smaller environmental impact. Wind and solar farms could go on spreading gradually over the territories of opportunity, producing compressed hydrogen and finding an incipient market in these new plants, designed from scratch to burn hydrogen. Existing thermal plants could also be converted to full- or part-time hydrogen burning. The ultimate ambition is to replace all fossil fuel burning, with its harmful GHG effects; that may be a rather long, historical process, for it implies a systemic change: the replacement of the motors of private cars, buses and locomotives with new hydrogen-burning ones, as well as the adaptation of distribution networks. A shortcut to this deep, systemic change could be the adaptation of big users, such as major thermal power plants, modified to burn either hydrogen or fossil fuels, switching as it suits them, so as to gradually increase demand in harmony with the mounting supply. This would liberate growing amounts of fossil fuels for other uses that cannot do without them, eliminate the need for further exploration, drilling, production and burning of fossil fuels, and cap the total emission of GHG. At current levels of consumption:
• Oil reserves may last for 9 years.
• Gas reserves may last for 8 years.
For too long, exploration for new reserves has fallen behind needs, so an extra effort must now be made to catch up. There are promising areas, both onshore and offshore on the continental platform, but the costs are daunting. If big investments are needed, it is best to follow grandmother's advice: "not all eggs in the same basket". Wind and solar sources, and hydrogen technologies, could be encouraged with a moderate fraction of total investments. The 1,000 MWe needed to meet the yearly increase in demand could be met by the addition of thermal plants of varying sizes: the very large ones interconnected for maximum flexibility; small to middle-sized ones as a function of local demand; mini-plants; or even capillary units close to the hydrogen farms, all of this a source of capitalization of the territory. For an inventory of installed wind farms, with their geographical location and average wind speed at ground level, see 6. As to solar energy capture, the situation shows striking similarities to that of the eolic source. The deserts on the western border of the national territory extend in a fringe roughly 500 km wide, parallel to the Cordillera, that begins in the north-western Andean plateau (Puna) by the Bolivian border and extends southwards for about 2,000 km (thus accounting for 500 km x 2,000 km = 1,000,000 km2, or 100 M ha), until the cold Andean rainforests south of Neuquen Province. This desert is interrupted by several river valleys and several irrigated oases, such as the Cafayate Valley and those of La Rioja, San Juan, Mendoza, San Rafael and the like, which are fine wine-production terroirs. The rest of this enormous area is almost unpopulated, so there is practically nobody to protest against the installation of extensive solar panels, and the land is extremely cheap. Because of the desert conditions and the dryness of the climate, permanently bright, spotless skies prevail. To collect the energy generated by mirrors and transport it to the high-consumption centres of urban Argentina, where demand is concentrated, would require a horrendously costly, spider-web-like line system to concentrate it in nodes to be connected with some CAMMESA nodes. But this would imply repeating the historical territorial imbalance of taking away a resource from a frontier territory to hyper-concentrate it again around the Port and the Metropolitan Area. A more balanced policy would be to invest in roads, a system to transport everything, and so capitalize the territory, rather than in power lines, which transport only electricity. Then again, hydrogen energy technologies would provide the solution for compact energy stocking and transport. As in the eolic case, it is possible to develop a commodity able to be traded nationally as well as internationally, to overpopulated, developed countries where energy-intensive hydrogen production may meet strong opposition, if not outright prohibition. At the moment, some of these dry, stony Andean provinces are seeking development through highly contaminating, unsustainable open-sky mining, which meets strong opposition from the resident general public as well as fiery confrontation from environmental organizations. At the same time, cheap, available, in situ energy might attract population and small energy-intensive industry settlements to the territory, thus helping a desirable decentralization process.
To this end, strong R&D efforts should concentrate on making possible a package deal of a 10 MW solar farm with hydrogen-producing electrolysis mini-plants, including land connected to the road grid, housing and social services, at a total price of u$s 10 M, affordable to small and middle-sized cooperatives with the leverage of judicious provincial bank credits payable with production. The areas of intensive potential solar capture, as well as the Patagonian eolic ones, are thousands of kilometres away from the main consumption centres. So far, the costs of lines for energy transportation, piled on top of the huge investments needed for installations big enough to make a breakthrough, have discouraged actions of significant scale in both eolic and solar clean sources. But if, instead of lines, packed energy in the form of compressed hydrogen is transported by trucks driving over existing road systems that only need improvement and completion, it is a completely different question. Wind and solar capture should have a brilliant future in Argentina because of the extraordinary advantages just described.
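A hypothetical back-of-envelope Python sketch of what the 10 MW solar-plus-electrolysis package described above might yield. The capacity factor and electrolyser specific consumption are illustrative assumptions, not figures from the paper.

farm_mw = 10            # nominal solar farm size (from the text)
capacity_factor = 0.25  # assumed for a high-insolation desert site
kwh_per_kg_h2 = 50.0    # assumed electrolyser specific consumption (kWh per kg of H2)

annual_kwh = farm_mw * 1000 * 8760 * capacity_factor
annual_kg_h2 = annual_kwh / kwh_per_kg_h2
print(round(annual_kwh / 1e6, 1), "GWh of electricity per year")   # ~21.9 GWh
print(round(annual_kg_h2 / 1000), "tonnes of hydrogen per year")   # ~440 t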
3. HOW IS SUSTAINABLE DEVELOPMENT DEFINED IN ARGENTINA?
The Brundtland Commission's definition is, by far, the most commonly accepted. More precise definitions are constantly sought, but very soon, along the way, they become overcomplicated and lose the impact and clarity of the Commission's.
4. HOW IS HUMAN AND ENVIRONMENTAL WELL-BEING MEASURED?
Criteria such as those established in the UN Millennium Human Development Report seem accurate enough. Well-being has to do with economic questions as well as socio-political and environmental ones. It implies minimal security conditions that must be met in an interdependent and harmonious balance. The Gini index must also be watched closely, because inequality is a strong source of unhappiness. When considering the environmental aspects, energy, a basic component of human development but also the predominant source of GHG, plays an indisputably dominant role. On the conflict between human well-being and energy, see the natural gas story (see 8).
5. THE LEGAL BACKGROUND
• Law 26.093 - Promotion of sustainable production and use of biofuels. All gas oil or diesel oil sold in the internal market must be mixed with at least 5% of "biodiesel" by the year 2010. All liquid fuels sold in the internal market characterized as naftas (gasoline) must be mixed with at least 5% of "bioethanol" by the year 2010.
• Law 26.190 - National promotion of the use of renewable energy sources. The aim is to attain a contribution of renewable energy sources of up to 8% of total national consumption within a ten-year lapse (2018). Renewable energy sources are the non-fossil renewables: eolic, solar, geothermal, tidal, hydraulic (up to 30 MW), biomass, waste disposal, sewage treatment plants and biogas (biofuels excepted).
Promotion programme: subsidies of 0.015 $/kWh for all sources except solar, which receives 0.9 $/kWh. The plan for the development of renewable energies calls for a goal of 8% of total demand (2,500 MW) to be attained in 2016.
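An illustrative Python sketch of the annual subsidy implied by the rates quoted above for a single 1 MW installation; the capacity factors are assumptions made only for the example.

def annual_subsidy(mw, capacity_factor, rate_per_kwh):
    """Yearly subsidy ($) for a plant of the given size, load factor and rate."""
    kwh_per_year = mw * 1000 * 8760 * capacity_factor
    return kwh_per_year * rate_per_kwh

print(round(annual_subsidy(1.0, 0.35, 0.015)))  # wind at 0.015 $/kWh: ~46,000 $/yr
print(round(annual_subsidy(1.0, 0.20, 0.9)))    # solar at 0.9 $/kWh: ~1,577,000 $/yr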
6. AN OPTIMISTIC VIEW OF THE FUTURE OF ARGENTINA
• This nation has historically and repeatedly proven its capacity for political regeneration. Let us hope that it may prove it once again.
• Likewise, the country has gone through similar regeneration processes after severe economic crises. Let us hope that, once again, this is going to happen, thus emerging from the present global economic crisis.
• Some think that future developments in hydrogen technology, valuable in their own right, may provide better energy stocking and transport alternatives and clean final use.
Should these technologies become widely available, both the Republic's major deserts and some others abroad, such as the Sahara (Schubert 2009), may become priceless assets. There is a growing ethical conscience that the problems of poverty and indigence, as well as the attainment of a reasonable Gini index, must not only be addressed but overcome with the utmost urgency.
7. WHAT TECHNOLOGIES ARE BEING DEVELOPED TO MANAGE "RAPID URBANIZATION"?
Unfortunately, this is a question to which neither academia nor politics has had any answer. Cities grow (the bigger they are, the more they grow), and they grow all over the world. This has everything to do with migration. As long as migrants feel, even if that feeling is not supported by reality, that they are going to be better off (economically more affluent, better fed, better lodged, with more satisfactory social contacts in the cities than in their places of provenance), they are going to migrate to the cities, the bigger the better. In Argentina, cities grow through immigration; demographic growth is insignificant. Immigrants, after the last wave of Europeans, came from the remote rural areas of the north-western and north-eastern provinces and, more recently, from the neighbouring countries of Paraguay, Bolivia and, most recently, Peru. There is no question that it would be better to settle people in middle-sized cities than to huddle them in the slums of Buenos Aires or Rosario. But only a handful of middle-sized cities are balanced enough to attract immigrants and absorb them with good jobs and without social conflict. Time and again, policies have been announced to develop middle-sized cities. The 1990s saw the construction of a network of superhighways converging on the main accesses to the city centre. This has favoured a process that can be seen
hardly as decentralization, but rather as:
• Social as well as physical fragmentation.
• More land consumption, pushing away land apt for horticulture that provides for the city's daily supply, thus aggravating transportation.
• Traffic-congestion-prone, and therefore environmentally harmful, suburbanization through novel, socially disruptive forms for the more affluent (country clubs, gated communities, etc.), leapfrogging to 40 to 60 km from the city centre, past the existing consolidated outer crowns of metropolitan suburbs.
This is a flagrant case of harmful rapid urbanization favoured by bad planning, which yielded to the demands of the more affluent motorists and produced big investments in a network of highways, thus favouring the daily injection into the centre of the metropolis of an unbearable number of motorcars, with the entailing air pollution and with the destruction of the urban environment brought by intolerable street congestion and ever-growing parking needs. Mass transportation by rail has big environmental advantages, since it is powered mainly by electricity, and it improves the living conditions of the less favoured. This has been vociferously agreed by the authorities of different colours, time and again, but then new highway public-works plans are the ones that carry the day. It is clear that, from the point of view of planning, no city can afford to have housing or land on standby to provide for newcomers, lest this add a most powerful force to the already existing incentives to migrate to cities. There is a current of opinion that states that the reason slums exist is lack of planning. But it can be argued in return that recent European and U.S. experiences have demonstrated that, when there are motives for migration in search of better living conditions, there are no available solutions to stop that tide, not even the cruellest political or military ones.
8. WHAT TECHNOLOGIES ARE BEING EXPLORED TO INCREASE ADAPTIVE CAPACITY?
The rise of sea level may be the aspect of climate change likely to produce the most harmful threats, for a large part of urban Argentina is built upon land close to the Parana-Plata river coasts, and inland in the basins of small watercourses that pour into the big rivers; if the level goes up, these may carry water upstream and flood populated areas. The only remedies could be politically intractable population displacements or horribly costly polderizations of dubious reliability, as the recent New Orleans events have demonstrated. It can be said that, so far, few if any serious efforts have been made to increase adaptive capacities.
REFERENCES
1. A short list of Argentina's centres of excellence for energy R&D capacity and the formation of high-level human resources:
1.1. Instituto Balseiro (IB), a centre for higher mathematics and physics human resource formation and R&D, at Centro Atomico Bariloche (CAB) of the Comision Nacional de Energia Atomica (CNEA), city of San Carlos de Bariloche, Province of Rio Negro.
1.2. INVAP S.E., or Invenciones Aplicadas Sociedad del Estado, a joint-venture corporation owned by the Province of Rio Negro and CNEA's CAB, city of San Carlos de Bariloche, Province of Rio Negro.
1.3. INQUIMAE, or Instituto de la Fisica Quimica de los Materiales, el Ambiente y la Energia, a joint venture of CNEA's Centro Atomico Constituyentes (CAC) and the Facultad de Ciencias Exactas y Naturales (FCEyN) of the Universidad de Buenos Aires (UBA), Ciudad Universitaria, Buenos Aires.
1.4. ITBA, or Instituto Tecnologico Buenos Aires, a private tertiary education and R&D institution in the city of Buenos Aires.
1.5. Instituto Nacional de la Energia "General Mosconi", an NGO think-tank in the city of Buenos Aires.
1.6. IMPSA, or Industrias Metalurgicas Pescarmona, a private corporation located in the Province of Mendoza, also by the Andean foothills, presently producing windmills with a will to specialize in Patagonian wind conditions (wind speed averages up to 10-12 m/s).
2. PROSPECTIVA 2000, VERSION PRELIMINAR, Republica Argentina, Ministerio de Infraestructura y Vivienda - Secretaria de Energia y Mineria, Buenos Aires, April 2001.
3. APROVECHAMIENTO DE LA ENERGIA SOLAR EN LA ARGENTINA Y EL MUNDO, BOLETIN ENERGETICO N° 16, Julio C. Duran, Elena M. Godfri - Grupo Energia Solar, Departamento de Fisica, Centro Atómico Constituyentes, Comisión Nacional de Energía Atómica, Buenos Aires, 2004.
4. INFORME DE COYUNTURA DEL SECTOR ENERGETICO, datos de los meses enero a diciembre 2008, N° 168, Instituto Argentino de la Energia "General Mosconi", Buenos Aires, February 2009.
5. ARGENTINA ENERGETICA: CLAVES PARA EL ANALISIS DE SU ESTADO ACTUAL, "El suministro de combustibles en Argentina", Instituto Argentino de la Energia "General Mosconi", Jorge A. Gaimaro, Buenos Aires, June 2009.
6. RELEVAMIENTO DEL PARQUE EOLICO DE ARGENTINA, IDICSO Instituto de Investigación de Ciencias Sociales, Universidad del Salvador, Area de Recursos Energeticos y Planificación para el Desarrollo, Juan Manuel Garcia, Buenos Aires, March 2006.
7. LA NACION, a Buenos Aires morning newspaper of wide national circulation, published (Aug. 8, 2009, in its Economia & Negocios supplement, under a caption on investment in energy) news about the start of operation of an electrical generation plant of 165 MWe that demanded an investment at a rate of about u$s 1 M per installed MWe; once the operating cost of fuel is added, a wind-farm alternative may prove competitive. The brand new plant is to provide electricity to the neighbouring Solvay Indupa and to Albanesi S.A., which built and owns the plant to fill its own needs and will sell any surplus to the interconnected wholesale system administered by CAMMESA.
8. CONFLICT BETWEEN HUMAN WELLBEING AND ENERGY: THE NATURAL GAS STORY. The state of the natural gas situation in Argentina cannot be summarized better than in the
following review, which shows the dispersion of prices at which the same product is commercialized. A policy of subsidies to protect first domestic consumers, then industry, farmers and even urban transportation ended in a subsidy maze that has run out of control and is politically impossible to curb. The following dispersion of natural gas prices is an example more than illustrative enough of this aspect.
PURCHASE PRICE PAID BY THE ARGENTINE STATE GAS MONOPOLY (unit: one million BTU)
• To producers in Argentina: u$s 1.50 to 2 (*)
• To Bolivian imports: u$s 4.40
• To Trinidad & Tobago imports: u$s 8.00
(*) Recently, an increase of u$s 1.2 was allowed to local gas producers, who are therefore now paid u$s 2.60.
The reasons for refusing to pay better prices to local producers, so as to encourage increases in both production and exploration, were, it was argued:
• the exorbitant profits the corporations have so far had;
• concern about not feeding inflationary expectations;
• concern to protect end users' pockets.
Sooner rather than later, all stakeholders (local producers, end users and Argentina as a whole) lost; it was a lose-lose situation.
SELLING PRICE CHARGED BY THE ARGENTINE STATE GAS MONOPOLY
• To service stations for motorists and light trucks: u$s 5.00
• To industries: u$s 4.00
• Domestic, provided through pipe networks (per unit): $ 0.370
• Domestic, bottled, equivalent caloric capacity: $ 2.1349
The unfairness of the different prices for domestic consumption is all the more cruel, and belies the spectacularly proclaimed aim of wealth distribution, because the houses connected to networks belong to the more affluent, while the poorer depend on bottled gas for heating and cooking. They are disfavoured not only in price but also in the handling of the heavy bottles: hazardous, cumbersome, and often impossible for the aged and disabled, who are frequently forced to rely on very young children for help.
SUSTAINABLE DEVELOPMENT IN MEXICO: FACING THE MULTIHEADED HYDRA
ALBERTO GONZALEZ-POZO
Theory and Analysis Department, Universidad Autónoma Metropolitana, Mexico D.F., Mexico
INTRODUCTION
Among the multiple definitions of sustainable development, I like this one: "Improving the quality of human life while living within the carrying capacity of supporting ecosystems" (Caring for the Earth, IUCN/WWF/UNEP, 1991). The concept may hold several dimensions: environmental, social, economic and even cultural questions arise when considering whether a country may look forward, assured that the resources and opportunities available for future generations will be at least the same as (if not greater than) those they enjoy now. These questions are easily defined, but if all of them arrive simultaneously at a critical situation, as now happens in developing countries, the path towards sustainable development may be seriously obstructed or derailed. In the past two Erice WFS seminars, I tried to give a general assessment of climate change in Mexico (Gonzalez-Pozo, 2007 and 2008), with emphasis on its repercussions on natural disasters. This time I look at Mexico's present situation in the fields of climate change, energy crisis, water provision, pandemic diseases, human development indexes and the economic situation. In each topic, it seems that the prospect of arriving at a better position within a few decades must be delayed, and at a greater cost.
1. CLIMATE-CHANGE VECTORS: MAIN CONTRIBUTORS AND COUNTERMEASURES
Mexico is one of the few developing countries that have systematically updated their national inventories of greenhouse gas emissions. The most recent, for 2002, was 643.2 MtCO2e, equivalent to 1.5% of the world total (SEMARNAT, 2007). Despite its definition as a "developing" country, Mexico's position among the top 30 nations with the highest CO2 emissions raises questions about its responsibility and the mitigation measures it takes to reduce that share.
Source: simplified after Watkins et al., 2008, p. 31.
But the 438 Mt of CO2 emissions produced by Mexico during 2004 represented only 74% of total greenhouse-gas emissions, while another 25% was methane and 1% other gases, mostly produced by gas extraction processes, agricultural activity, cattle raising, waste disposal and residual water (SEMARNAT, 2008). That means that, according to its total CO2 emissions, Mexico should invest heavily in mitigation measures without having enough financial capacity to do so, as we will see in part 5 of this article.
2. ENERGY CRISIS AND RELATED CHALLENGES
During the 20th century, energy resources in Mexico tended to depend heavily on the oil industry, which used to be a solid pillar of our economy but has come to a severe financial crisis, like the rest of the productive sectors. Total reserves declined from 1998 to 2007, as shown in the following figure:
Fig. 1: Decline in the amount of oil reserves in Mexico, 1998-2007. Black = certified, dark grey = probable, light grey = possible. Source: simplified after SENER, 2008.
Oil production has declined, too. In 1999 production was 2.91 million barrels per day; it reached a maximum of 3.383 million barrels per day in 2004 and in 2008 went down again to 2.84 million. And, as the increase in the number of motor vehicles is greater than the increases in the oil industry, Mexico has gradually increased the amount of imported fuel to feed the proliferation of transportation. Now, 41% of car fuel must be imported from other countries. As a consequence, the economic impact for the country is increasing, too. The decrease in production means a loss of ca. $17,800 million USD, while the need to import fuel represents ca. $11,000 million USD; together, they generate a hole in the economy of ca. $28,800 million USD. As demand keeps growing, a deficit of an additional 1.8 million barrels of oil, on top of actual demand, is expected by 2021. This means the urgent need to build at least 6 new oil refineries in the next 20 years. The final decision to build a new one, after more than a year of doubts and erratic planning measures, was reached only recently, and the first gallon of fuel refined in that new facility is not expected before 2015. Meanwhile, the dependence on the oil industry, whose income supports 15% of the annual Government budget, has led to the neglect of other forms of energy, particularly renewable resources. For centuries, hydraulic power played an important role moving mining mills, textile devices and, since the last decades of the XIX century, electric generators; the climatic change in several Mexican regions affected by droughts has postponed new developments of this kind. As the following table shows, the use of oil for electric energy production is supplemented by other sources, among which the eolic, geothermal and photoelectric types play a limited role.
Table B. Types of gross energy generation in 2008 (GWh)
  Total                     234,097
  Hydroelectric              38,892
  Thermoelectric             86,069
  Independent Producers      74,232
  Carbon (coal)              17,789
  Nuclear                     9,804
  Geothermal                  7,055
  Photoelectric                 255
Source: SENER, Mexico, 2009.
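A quick way to read Table B is to compute each source's share of gross generation; the following minimal Python sketch uses only the values listed in the table.

generation_gwh = {
    "Hydroelectric": 38_892,
    "Thermoelectric": 86_069,
    "Independent Producers": 74_232,
    "Carbon (coal)": 17_789,
    "Nuclear": 9_804,
    "Geothermal": 7_055,
    "Photoelectric": 255,
}
total_gwh = 234_097  # total gross generation from Table B

for source, gwh in generation_gwh.items():
    print(f"{source}: {100 * gwh / total_gwh:.1f}%")
# Thermoelectric plants plus independent producers account for roughly two thirds
# of the total, while geothermal and photoelectric generation remain marginal.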
3. INFLUENZA AND OTHER POTENTIALLY PANDEMIC THREATS
Mexico, together with a few other countries, went through the first experiences with a new strain of the so-called "porcine influenza" of A/H1N1 type, most probably of Asiatic origin and potentially lethal if not detected and treated early. The first reaction was drastic, with strong official measures against normal public life, which in Mexico is especially gregarious and sociable. Children and students were sent home for several weeks as a prophylactic measure, and the media were full of news about the number of infected and deceased people. The consequence of all this was that the pandemic disease is now under control (at least during this summer), but the economy suffered an impact of a 1.5% loss in GNP that must be added to the economic crisis dealt with later in this article. And we must wait for the winter to check whether the new strain of influenza returns, this time adapted to the countermeasures and vaccines generated and stocked in these months.
4. VARIABILITY OF HUMAN DEVELOPMENT INDEXES
The methodology set by the United Nations Development Programme for determining the so-called "Human Development Index" (HDI) measures several variables to produce an abstract measure that is the consequence of different welfare levels: income, nutrition, shelter, education, health and other topics. It is not easy to manage and combine all of them in a single index running from a minimum of 0 to a maximum of 1.0, and it is possible to arrive at different results depending on the approach and the manipulation of statistical data (a simplified sketch of the computation is given after Table C below). With this warning, it can be said that the HDI in Mexico (2004) was 0.800, but with internal contrasts depending on the municipality (our lowest territorial administration unit), which can go from a low of 0.362 in the remote municipality of Caycoyan, Oaxaca, to 0.930 in Benito Juarez, a central district of Mexico City. These numbers run parallel to the world picture in HDI, which in 2001 oscillated between 0.250 in Sierra Leone and 0.939 in Norway. A better picture of the variation of HD indexes in Mexico is presented in the following table:
Table C. Human Development Indexes in Mexico according to municipalities and their population
  HDI range         Denomination    Municipalities    % of municipalities    Population
  Less than 0.500   Low                     31                 1.2              348,000
  0.500 - 0.649     Medium Low             625                25.6            6,200,000
  0.650 - 0.799     Medium High          1,584                64.9           45,100,000
  0.800 or more     High                   202                 8.3           45,900,000
  Total                                  2,442
Source: CONAPO, 2001.
It shows clearly that the lower HD indexes are located in ca. 27% of the territorial units, mostly rural and dispersed; that the medium-high indexes cover almost two thirds of the municipalities, which hold ca. 45% of the population, mostly in medium-sized cities; and that the higher indexes are concentrated in little more than 8% of the territorial units, mostly big cities and metropolitan zones, which hold another 45% of the population. The trouble is that the HDI of Mexico is gradually declining: in 1995 it was in 49th place among world nations, in 2000 it was 50th, and in 2004 it was back in 53rd position. Again, this regressive trend is associated with the bad state of the Mexican economy, displayed in the next part of this text.
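Referring back to the HDI methodology summarised before Table C, the following is a simplified Python sketch of the pre-2010 UNDP formula (an equally weighted average of three normalised component indices). The formula and the sample inputs are given here only as an illustrative assumption about the mechanics, not as a reproduction of any official calculation.

import math

def hdi(life_expectancy, adult_literacy, gross_enrolment, gdp_per_capita_ppp):
    """Pre-2010-style HDI: mean of life-expectancy, education and GDP indices."""
    life_index = (life_expectancy - 25) / (85 - 25)
    education_index = (2 / 3) * adult_literacy + (1 / 3) * gross_enrolment
    gdp_index = (math.log(gdp_per_capita_ppp) - math.log(100)) / (math.log(40000) - math.log(100))
    return (life_index + education_index + gdp_index) / 3

# Hypothetical municipal profiles, chosen only to show how the index spreads:
print(round(hdi(75, 0.92, 0.80, 9000), 3))   # roughly 0.82, a "high" HDI unit
print(round(hdi(58, 0.55, 0.45, 1500), 3))   # roughly 0.51, a "low" HDI unit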
5. GRIM ECONOMIC PROSPECTS AND POSSIBLE SHOCKS
In the last three decades, the economic growth of Mexico has been delayed by one cause or another. In the same period, countries that had yearly GNP increases similar to or smaller than ours have progressed much more, and at this moment they face a safer, more comfortable position in the world economic crisis that started one year ago. The situation did not look so bad three years ago, as shown by the following data on Gross National Product per capita between 2001 and 2006 for the same countries described in Table A:

Table D. GNP Per Capita, Selected Countries 2001-2006
  Country               2001     2002     2003     2004     2005     2006    Rate 05-06
  United States       34,800   35,180   37,570   41,060   43,560   44,970      3.4%
  China                 1,000    1,100    1,270    1,500    1,740    2,010     13.4%
  Russian Federation    1,780    2,100    2,590    3,420    4,470    5,780     22.7%
  Japan                35,050   33,130   33,430   36,540   38,950   38,410     -1.4%
  Germany              24,020   23,020   25,620   30,840   34,870   36,620      4.8%
  Canada               22,090   22,560   24,390   28,100   32,590   36,170      9.9%
  United Kingdom       25,090   25,720   28,450   33,890   37,750   40,180      6.0%
  Korea (Rep. of)      10,580   11,280   12,060   14,030   15,880   17,690     10.2%
  Italy                20,180   19,760   22,170   26,670   30,250   32,020      5.5%
  Mexico                5,580    6,000    6,370    6,930    7,300    7,870      7.2%
Source: SNIEG / INEGI, 2009.
As the economic scenario evolves quickly, it is hard to provide fixed data on our bad economic situation. GNP is expected to fall by 8% or more this year, and the prospects for 2010 are no better. Only one year ago, the Mexican peso could be exchanged at little more than 10 to 1 USD; now it is almost 13 to 1 and is expected to reach 14 or 15 by the end of this year. Mexico now has a population of slightly over 107 million people, of which 45% is economically active. But the number of unemployed is now higher than 2.4 million workers (a 5.2% rate), and ca. 12 million more are underemployed or working in the so-called "informal" sector. These and other grim facts influence the growing migration rate and the rise of organized crime. The number of critics and scholars who ask for a change in the neo-liberal model followed in the last three decades is growing.
REFERENCES
CONAPO, 2001, Indices de Desarrollo Humano en Mexico 2000, Mexico, Consejo Nacional de Población.
GONZALEZ-POZO, Alberto, 2007, "Accelerated climate change, an unexpected new limit for developing countries", in WFS, 40th Session of the International Panel of Planetary Emergencies / Limits to Development Monitoring Panel, Erice.
GONZALEZ-POZO, Alberto, 2008, "Risks, Vulnerability and Mitigation Measures in Human Settlements Under Changing Climatic Conditions in Mexico", presented at the 38th Session of the International Panel of Planetary Emergencies / Limits to Development Monitoring Panel, WFS, Erice.
MARTINEZ, Julia and FERNANDEZ BREMAUNTZ, Adrian (Editors), 2004, Cambio climático: una visión desde México, Mexico, SEMARNAT / INECOL.
MENDOZA, Victor et al., 2004, "Vulnerabilidad en el recurso agua de las zonas hidrográficas de México ante el Cambio Climático Global", in Martinez and Fernandez, op. cit.
SEMARNAT, 2007, Estrategia Nacional de Cambio Climático Mexico, Mexico, Secretaría del Medio Ambiente y Recursos Naturales.
SENER, 2009, Fortalecimiento de PEMEX (Resumen), Mexico, Secretaria de Energia.
TUDELA, Fernando, 2004, "México y la participación de los países en desarrollo en el cambio climático", in Martinez and Fernandez, op. cit.
UNDP, November 2008, Fast Facts, United Nations Development Programme, www.undp.org
UNITED NATIONS, 2009, The Millennium Development Goals Report, New York.
WATKINS, Kent et al., 2008, Human Development Report 2007/2008. Fighting climate change: Human solidarity in a divided World, UNDP.
SESSION 15 MITIGATION OF TERRORIST ATTACKS MEETING
PERMANENT MONITORING PANEL - MITIGATION OF TERRORIST ACTS (PMP-MTA) WORKSHOP
CBRN TERRORISM MITIGATION: ONE SCIENCE FOR GLOBAL COOPERATION TO MITIGATE TERRORIST ACTS

Overview
Overview of CBRN Terrorism Mitigation, Professor R.L. Garwin
Politicization in the Process of International Cooperation to Mitigate Nuclear Terrorism: Some Dubious Results, Dr. V. Krivokhizha
India's Response to the Prospect of WMD Terrorism, Professor R. Rajaraman
Motivations for Terrorism, Lord Alderdice
Review of Social and Political Approaches to CBRN Terrorism, Lord Alderdice

Immediate Evaluation: Nuclear, Radiological, Chemical and Biological
Introduction to the Development of CBRN Event Mitigation, Professor F. Steinhausler
Immediate Evaluation of Radiological and Nuclear Attacks, Professor R.L. Garwin
Two Scenarios: Immediate Response to Terrorist Attacks, R.V. Duncan
One Science for CBRN Mitigation, Professor A.L. Sobel, M.D.
Guiding Principles for CBRN Decision-Making Operations, A.L. Sobel, M.D.
Notional BW Exercise, Professor A.L. Sobel, M.D.
Pandemic H1N1 2009 Epidemiology, Diego Buriot, M.D., MPH

Risk Communications and Near-real-time Evaluation of Terrorist Acts
The need for a corps of radiation workers for immediate assignment, Professor Richard Wilson
Risk and Communications: Preamble, Dr. Vasiliy Krivokhizha
The position and the role of a journalist on the spot after an unexpected disaster, Bertil Galland
Immediate Communications in the CBRN Environment, Professor R.V. Duncan
Scientifically Informed Communications, Professor R. Wilson

Recovery 100 Days after Terrorist Acts
Scenario CBRN Post-Attack 100 Days: Recovery, Risk and Communication, Professor F. Steinhausler

Summary
CBRN Terrorism Mitigation, New Aspects, Professor F. Steinhausler and Professor A.L. Sobel

CONTRIBUTORS AND ATTENDEES
Chair: Dr. Sally Leivesley
Lord Alderdice
Dr. Diego Buriot, M.D., MPH (Absent)
Professor Rob V. Duncan (Observer)
Bertil Galland (Observer)
Professor Richard L. Garwin
Dr. Vasily I. Krivokhizha
Dr. Alan Leigh Moore (Co-Chair, Absent)
Professor R. Rajaraman
Professor Annette L. Sobel, M.D.
Professor Friedrich Steinhausler
Professor Richard Wilson (Absent)
Lyudmila Zaitseva (Observer)
Attendees at selected sessions:
Dr. Carl O. Bauer
Dr. Michael C. MacCracken
Dr. James Rispoli
DEVELOPMENT OF CBRN EVENT MITIGATION
PROF. FRIEDRICH STEINHAUSLER, PHD
Physics and Biophysics Division, University of Salzburg, Salzburg, Austria

The information listed below is considered essential for decision-makers at the government level, enabling them to initiate timely, scientifically correct and cost-effective mitigation after a CBRN event.

WHAT ESSENTIAL ELEMENTS OF INFORMATION, FROM A SCIENTIFIC POINT OF VIEW, ARE CRITICAL IN THE INITIAL AND SUBSEQUENT EVALUATION OF A CBRN INCIDENT?
Chemical WMD
Assumption:
• Mode of deployment: Atmospheric release in an urban environment with high population density.
• Detection of incident: Within minutes after release.
Initial Consequences: Although the number of victims is expected to be high, the impact would be limited to a more or less defined area. Under this assumption, the consequences to the population and the environment can be considered to be within the capability of dedicated emergency response organisations, particularly if they were supported in time by specialised units from the armed forces.
Information required: Type of chemical released; concentration of chemical in the environment; meteorological conditions at time of release and short-term forecast; identification of Critical Group (CG); modelling of agent distribution in the environment as a function of time; health risk assessment for CG; countermeasures for first responders and CG.
Biological WMD
Assumption:
• Mode of deployment: Atmospheric release in an urban environment with high population density.
• Detection of incident: Days after release, depending on the specific incubation period.
Initial Consequences: The global community lacks any experience in the management of the consequences of a large scale terror attack with a biological WMD, e.g., infectious and contagious smallpox. Currently most of the global population has no level of immunity to
smallpox. The deployment of a single smallpox-based biological WMD (e.g., pneumonia bio-engineered into smallpox in order to enhance its virulence) in an area with high population density could infect 50,000 persons, since smallpox aerosol remains stable for several days. This in turn could result in approximately 450,000 cases within a month. In the case of the intentional dispersal of 30 kg of anthrax spores on an overcast day or on a night with low wind speed, the result would be a cigar-shaped plume covering an area of about 8 km2, killing between 30,000 and 100,000 persons in a typical urban environment with high population density.
Information required: Type of biological agent released; spread of the biological agent in the target population; environmental contamination by the biological agent; meteorological conditions at time of release and short-term forecast; identification of Critical Group (CG); modelling of agent distribution in the target population and environment as a function of time; health risk assessment for CG; countermeasures for first responders and CG.
Radiological Release
Assumption:
• Mode of deployment: Atmospheric release of a gamma-radiation emitting radionuclide by means of explosives (radiological dispersal device, RDD; "dirty bomb") in an urban environment with high population density.
• Detection of incident: Within minutes after release, provided first responders have the capability to detect gamma radiation.
Initial Consequence: The amount of explosives and the meteorological conditions at the time of the detonation will determine the size of the area contaminated. It can be assumed that typically an area comprising several city blocks would be affected initially. The target areas at highest risk for such an attack would be areas with high population density, in order to cause maximum impact, such as shopping malls. The overall magnitude of the impact on the target society is determined by the timely knowledge about the radioactive contamination. The realization by members of the public that they have potentially been contaminated is likely to cause widespread panic. Furthermore, the treatment of victims with wounds contaminated by radioactivity will put additional stress on the health services. The environmental clean-up costs are difficult to estimate without data on the type and size of area affected by the radioactive contamination. However, the practical experience gained during the clean-up procedures of the city of Goiania (resulting in 5,000 m3 of radioactive waste) and of the Ukrainian city of Pripyat (contaminated during the Chernobyl accident in 1986 and finally abandoned) is indicative of the technical and logistical challenges associated with such a task. Provided the radioactive contamination of the victims and of the debris of the terror attack is discovered in the initial stages of the emergency response, cordoning off the impacted area and triage procedures should suffice to keep the situation under reasonable control.
Information required: The source of the radioactive material deployed in the RDD, thereby determining the type of radioactive material to be dealt with in the clean-up operation (waste, spent fuel, single isotopes), i.e., whether the radioactive material originated from a power reactor, isotope production reactor, research reactor, defense reactor, industrial irradiator, medical facility or industrial plant; the activity/amount of material released, its isotopic properties and the physical/chemical status of the radioactive material used in the RDD (prime candidates: reactor-produced 241Am, 252Cf, 137Cs, 60Co, 192Ir, 238Pu, 90Sr; the natural radioactive nuclide 226Ra; secondary candidates: 103Pd, spent fuel); the physical and chemical status of the radioactive material after its dispersal by detonation or otherwise; the type of area contaminated (in the more likely case of a city: sidewalks, roads, buildings outdoors and indoors (roofs, walls, windows, rooms, attics, heating and ventilation systems), parks and other recreational areas all have different requirements with regard to clean-up); details of the explosive device that distributed the radioactive material, determining the size of the particles released (e.g., use of TNT, C4 or ANFO); the meteorological conditions at the time of the dispersal of the radioactive material, e.g., direction and speed of wind, type and amount of precipitation, temperature inversion, other weather conditions; the height and size of buildings near the site of deployment of the RDD; the type of area designated for clean-up (residential, commercial, recreational or rural); identification of Critical Group (CG); health risk assessment for CG; countermeasures for first responders and CG.
Nuclear WMD
Assumption:
• Mode of deployment: Above-ground release of a crude nuclear device in an urban environment with high population density.
• Detection of incident: Upon detonation of a nuclear WMD, neither the initial nuclear radiation (release of gamma rays and neutrons during the first minute) nor the subsequent residual radiation resulting from the decaying radionuclides would be noticed by the target population. Instead, they would experience almost simultaneously the bright flash of light from the explosion (seriously damaging the retina of many victims) and the accompanying impact of the excessive heat, followed by the onslaught of the air rushing from ground zero towards them (and a second wave returning at somewhat reduced speed). Superimposed on these effects is the flux of steel-concrete-glass missiles, typically 10% of the weight of the buildings destroyed by the nuclear blast. Eventually this would be followed by the radiation exposure due to radioactive fall-out.
Initial Consequence: Assuming an explosive yield for the crude nuclear device of about 10 kt, the consequences to society would be of a magnitude similar to those of the nuclear weapons detonated in Hiroshima and Nagasaki in 1945: about 80% of the people within a radius of 500 m will have died instantly or later that day. Ground zero will consist of an area with a radius of about 2 km, and in many sections far beyond, buildings will be damaged to a
varying degree. Within less than 0.5 s, infrared radiation (temperature exceeding 3,000°C) will have caused primary burn injuries within 3 km of ground zero.
Information required: Estimate of the magnitude of the yield; gamma dose rate at ground zero; concentration of radionuclides in the environment due to fall-out; meteorological conditions at time of release and short-term forecast; estimate of the number of survivors and walking wounded.
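For the fall-out item in the list above, a standard rule of thumb (the Way-Wigner t^-1.2 approximation, not taken from this paper) lets planners project how quickly early external dose rates decline; a minimal Python sketch:

def fallout_dose_rate(r1, hours_after_burst):
    """Dose rate (same units as r1), assuming R(t) = R1 * t**-1.2 with t in hours."""
    return r1 * hours_after_burst ** -1.2

r1 = 100.0  # hypothetical reference dose rate at H+1 hour (arbitrary units)
for t in (1, 7, 49, 24 * 14):
    print(t, "h:", round(fallout_dose_rate(r1, t), 2))
# Roughly a tenfold reduction for every sevenfold increase in time (the "7-10 rule").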
GIVEN THE REQUIREMENT FOR THIS TYPE OF DATA, WHAT EQUIPMENT AND/OR PROCESSES ARE REQUIRED TO OBTAIN THE INFORMATION?
Chemical WMD
Field detectors for the chemical agents most likely to be used by terrorists; real-time access to meteorological data; computer-based modelling of the spatial distribution of the agent as a function of time; computer-based modelling of dose assessment and risk assessment for the CG.
Biological WMD
Only a few detectors are commercially available that are capable of identifying the biological agent used in such an attack with a low error rate; access to nation-wide medical incidence data; computer-based modelling of the spread of disease incidence in the population as a function of the incubation period; spatial distribution of the agent as a function of time; identification of the Critical Group (CG); computer-based modelling of risk assessment for the CG; countermeasures for first responders and the CG; a plan for countering a potential epidemic.
Radiological Release
Radiation detectors with the capability to discriminate between alpha, beta, gamma and neutron radiation; real-time access to meteorological data; computer-based modelling of the spatial distribution of the radionuclide as a function of time; computer-based modelling of dose assessment and risk assessment for the CG.
Nuclear WMD
Realistically, no society is prepared to take effective countermeasures in the case of a nuclear attack on a target population group until specialized armed forces arrive at the target area. The only measure decision-makers can undertake is moving survivors and walking wounded away from ground zero and avoiding sending them into the radioactive plume. Provided there are surviving specialists to use still-existing equipment, the following will be needed: radiation detectors with the capability to discriminate between alpha, beta, gamma and neutron radiation; real-time access to meteorological data; computer-based modelling of the spatial distribution of the radionuclide as a function of time; computer-based modelling of dose assessment and risk assessment for the CG.
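As one concrete example of the "spatial distribution of the agent as a function of time and distance" modelling listed above, the sketch below implements a steady-state Gaussian plume for a continuous elevated point-source release. The dispersion coefficients are rough placeholders for near-neutral conditions, not a validated parameterisation; a real assessment would use proper stability-class schemes and local meteorology.

import math

def ground_level_concentration(q, u, x, y, h):
    """Ground-level concentration (g/m^3) at downwind distance x (m) and crosswind
    offset y (m), for release rate q (g/s), wind speed u (m/s) and effective
    release height h (m), including reflection at the ground."""
    sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)   # placeholder dispersion coefficients
    sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y ** 2 / (2 * sigma_y ** 2))
            * math.exp(-h ** 2 / (2 * sigma_z ** 2)))

# Hypothetical release: 100 g/s at 10 m height in a 3 m/s wind.
for x in (200, 500, 1000, 2000):
    print(x, "m downwind:", round(ground_level_concentration(100, 3.0, x, 0, 10), 5), "g/m^3")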
WHAT IS THE DEFINITION OF THE MINIMUM CAPABILITY A NATION, LOCAL GOVERNMENT OR INDIVIDUAL MUST POSSESS TO ADEQUATELY IDENTIFY AND PROPERLY MITIGATE THE EVENTS CREATED BY A CBRN INCIDENT?
Chemical WMD
• Local government: Properly equipped and adequately protected first responders.
• Nation: Adequate medical infrastructure to cope with the number of patients requiring special treatment.
• Individual: Information about correct behaviour in case of an attack.
Biological WMD
• Local government: Properly equipped and adequately protected first responders.
• Nation: Capability of the medical community to detect a biological attack; a plan for managing a potential epidemic.
• Individual: Information about correct behaviour in case of an attack.
Radiological Release
• Local government: Properly equipped and adequately protected first responders.
• Nation: Adequate medical infrastructure to cope with the number of patients requiring special treatment.
• Individual: Information about correct behaviour in case of an attack.
Nuclear WMD
• Local government: Financially not feasible to consider countermeasures in view of the low risk.
• Nation: Provision of specialized military units to move in.
• Individual: Information about correct behaviour in case of an attack.
WHAT TYPES OF INTERNATIONAL, NATIONAL, REGIONAL OR LOCAL COMMUNICATIONS SYSTEMS MUST BE OPERATIONAL DURING THE PERIOD OF THE CBRN INCIDENT AND MITIGATION EFFORTS?
Chemical WMD
• International: National government communicating through diplomatic lines with the United Nations.
• Regional: Regional government communicating through telephone lines with the national government.
• Local: First responders communicating through telephone lines with the local government.
Biological WMD
• International: National government communicating through diplomatic lines with the United Nations.
• Regional: Regional government communicating through telephone lines with the national government.
• Local: First responders communicating through telephone lines with the local government.

Radiological Release
• International: National government communicating through diplomatic lines with the United Nations.
• Regional: Regional government communicating through telephone lines with the national government.
• Local: First responders communicating through telephone lines with the local government.

Nuclear WMD
• International: National government communicating through diplomatic lines with the United Nations.
• Regional: Regional government communicating through telephone lines with the national government.
• Local: Communication technology is unlikely to be operational at the local level.

WHO SHOULD OWN AND OPERATE THE COMMUNICATIONS SYSTEMS IDENTIFIED IN RESPONSE TO QUESTION 3.D. ABOVE?

Chemical WMD
• International: Minister of Foreign Affairs or equivalent.
• Regional: Regional Governor or equivalent.
• Local: Lead Agency at the scene.

Biological WMD
• International: Minister of Foreign Affairs or equivalent.
• Regional: Regional Governor or equivalent.
• Local: Lead Agency at the scene.

Radiological Release
• International: Minister of Foreign Affairs or equivalent.
• Regional: Regional Governor or equivalent.
• Local: Lead Agency at the scene.

Nuclear WMD
• International: Minister of Foreign Affairs or equivalent.
• Regional: Regional Governor or equivalent.
• Local: Not applicable.
CURRENT DEFICITS

Information requirements
• There is a need for expanded and integrated "smart systems" for intelligence gathering, integrating information across agencies and disciplines to provide comprehensive databases;
• Continual threat assessment for the chemical infrastructure is necessary;
• Independent evaluation of the capabilities of the equipment and supplies of first responders is needed;
• Emergency responders need to follow a capabilities-based planning approach and establish a capability development mechanism at the national and supra-national level;
• Multi-disciplinary, information technology-based data analysis systems need to become more user-friendly and have terminals in every dispatch centre to facilitate information collection;
• Improvements in detectors are needed in terms of accuracy, ergonomics and speed of analysis;
• At the laboratory level there need to be improved, rapid diagnostic tests to provide rule-in/rule-out information with a high degree of confidence;
• Near-real-time air monitoring needs to be improved to allow the detection, identification and assessment of chemical, biological and radiological threats. This will assist, inter alia, in the selection of the appropriate personal protective equipment (PPE) for first responders.

First Responder Protection
• Programmes aiming at the development of more sophisticated protection technologies for first responders should be accelerated;
• Adequate PPE for the management and supervision of outdoor decontamination of self-dispatched patients is required;
• Improved PPE, and standard medical equipment redesigned to work within the hearing and sight limitations of the PPE, are needed for medical staff treating contaminated persons.

Countermeasures
• Improved treatment and trauma care for victims of a WMD attack are needed, such as new antibiotics and anti-virals and better care of burn and blast injuries;
• Improved medical decontamination equipment is required for contaminated and infectious victims of a WMD attack;
• The issue of supply and storage of the large amounts of surge equipment needed in the aftermath of a WMD attack has to be resolved;
• Overall contagious disease planning and related topics, such as the use of lethal force and quarantine enforcement, are underdeveloped;
• Community planning for managing large numbers of people in WMD-struck disaster areas needs to be accelerated, since such a scenario overlays the emergency with significant psychological aspects (e.g., fear);
• Emergency planning for transportation resources (e.g., speedy evacuation of a large number of persons) is underdeveloped.
ONE SCIENCE FOR CBRN MITIGATION
ANNETTE L. SOBEL, M.D., M.S.
Vice President, University of Missouri
Columbia, Missouri, USA

OVERVIEW
In an age of terrorism, targeting of civilians and critical infrastructure has become all too commonplace. The threat environment has emerged as a setting that merges the complexities of tactical and strategic actions. Across the spectrum of operations, the management of CBRN operations spans the principles of complex emergencies, security, stability, transition, post-incident recovery/reconstruction, mass care, and psychosocial dynamics. There are many insertion points for technologies in the gap areas, specifically in the area of mobile adaptive communications and sensor fusion, that may assist in the conduct of operations and organization of the public. The immediate objective is threat verification, as the driver for ensuing operations and the best measurable outcome. Although an "All Hazards" response is viewed as the optimal approach in a setting of high uncertainty, the diverse issues affecting the optimal management of unconventional threats are many. This paper will specifically focus on chemical and biological threats and some of the gaps that must be addressed.

THREAT DETECTION AND IDENTIFICATION
Threat verification may be challenging due to the compromise of many factors, such as: the operational environment, confounding sensor data and information, time delay, inadequate sampling, lack of recognition of the event, lack of training, and other human factors and technical issues. However, let us assume that sampling is adequate, with minimal time delay and contamination of evidence, and that there are coexisting observables that may assist in determining threat location and mechanism of action.

PROTECTION OF RESPONDERS
Protection of responders must be one of the initial priorities in response to CBRN attacks. The rationale for such actions is two-fold: (1) protection of critical personnel and (2) mitigation of the effects of such an attack. All-hazard response algorithms allow for the most flexible personnel-protective posture while optimizing responder and victim survivability.

DECONTAMINATION
Routine protocols exist for chemical decontamination, particularly of nerve and other traditional military agents. However, protocols are variable in efficacy and availability for novel agents and other next-generation agents. A similar case exists for radiological agents, given that the routes of contamination are primarily the skin, respiratory and
gastrointestinal routes. In the case of infectious biological agents, however, decontamination is not the standard of care. Surface disinfection may have variable effectiveness, although respiratory containment of threat agents is the first priority of consideration.

MEDICAL MANAGEMENT
Medical management of CBRN events must be adaptive. An initial comprehensive assessment of the extent of contamination and/or infectious or toxin dose, and of the routes of dissemination, must be made. Agent identification and population and individual susceptibility must be rapidly assessed. The most critical aspect of medical management is timely, accurate information management and characterization of the attack.

INFORMATION DISSEMINATION
Information management is probably among the most ineffectively managed aspects of emergency and disaster response. A disaster of any scale requires pre-emptive information flow to minimize public fear and mobilize leadership and confidence in public services. Levels of uncertainty should be minimized and specific actions encouraged through clear communications across all levels of responders, the general public, decision-makers, and relevant agencies. Cooperation between the media, authorized subject matter experts, and public safety and other government communications is required for seamless information flow. These communities of interest must not operate in "silos"; rather, they must routinely work to achieve added value through collaboration. Routine interagency and transnational cooperation minimizes misinformation and enables recruitment of the public as a partner in emergency response and security. Additionally, the public seeks direction and clear guidelines during emergencies, and emerges as a force-multiplier in a response. In summary, although public behavior may not be fully anticipated during a WMD event, clear, expert guidelines enable fear to be channeled into action and positive behavior.

COMMAND, CONTROL AND COMMUNICATION
Command and control (C2) is one of the highest priorities of emergency management of CBRN operations. The most important aspect of command is delineation of clear-cut lines of responsibility. Traditionally, the first on-scene responder leadership is in charge. Once again, this approach must be adaptive. Specifically, if specialty areas of expertise and capability are required, the personnel most trained, equipped, and knowledgeable must be considered the most reliable first-echelon decision-making authority. Of course, this does not imply overall C2; rather, situation-specific control and a role as information source. Typically, as a response progresses and phases of the operation unfold, so does the nature of the information and knowledge required to successfully contain the threat and minimize collateral damage.
INTERNATIONAL COLLABORATION
Collaboration is the cornerstone of containing a WMD event, in terms of its measurable geopolitical effects, casualty and terror generation, and critical infrastructure destruction/disruption. The greatest enabler of international collaboration is the sharing of time-sensitive, validated, actionable, and relevant information. Collaboration ensures consistency of information, shared situation awareness, and input to policy and action. International collaboration is especially valued for its ability to pre-empt or disrupt future WMD events and to maximize the efficiency and surge capacity of a response.
THE NEED FOR A CORPS OF RADIATION WORKERS FOR IMMEDIATE ASSIGNMENT
RICHARD WILSON
Department of Physics, Harvard University
Cambridge, Massachusetts, USA
INTRODUCTION
The rules for radiation protection for clean-up after a spill or other use of radioactive material are often very stringent. Some people in the USA call them the SUPERFUND rules, following the EPA SUPERFUND law. Others call them the "green fields" rules, forgetting that some green fields are more radioactive than others. Whether or not these rules are sensible for ordinary clean-up, I argue that they are not sensible, and indeed dangerously stupid, for discussions of terrorism. Suppose a terrorist were to explode a Radioactive Dispersion Device (RDD) of, perhaps, 20,000 Curies of radioactive cesium in a crowded area like Wall Street. We have an example of what might be the result: the radioactive source accident in Brazil. Four people died, and several got high doses, but that was all the effect on public health. In Brazil, no one knew what was happening at first and no precautions were taken. If this were to happen in Wall Street, there would be a better response. This contrasts with postulated bioterrorist actions where thousands or even millions of people might be affected. On the other hand, the "green fields" rules for clean-up might render a large area unusable for many years. This could have an important effect in two respects. The first responders might be unwilling to cope with the situation, and in the longer term, people might be unwilling to re-enter the area to carry out normal tasks. The idea that a large part of a major city would effectively be rendered uninhabitable because of unnecessarily strict criteria would make a "dirty bomb" particularly attractive for a terrorist. Conversely, if less stringent criteria can be successfully agreed to, and advertised, a "dirty bomb" would become less likely. The Harvard University radiation protection officer, Dr. Joseph Ring, pointed out to me that a terrorist act such as the explosion of an RDD (dirty bomb) would be an act of war, and the consequences and recovery should be considered in that light. In such cases, returning to a state of normalcy and psychological and social health should be considered along with medical health when making decisions such as re-entry and allowable radiation dose. This may very well call for reconsideration of traditional peacetime dose limits. This model could allow different dose limits, or higher exposures, for those working in critical infrastructure areas necessary to re-establish normalcy, in order to minimize the total impact on society. This will require better basic radiation training for those involved and more coordination with radiation safety professionals. One problem is that in an emergency the people who have the duty to take charge usually know nothing about radiation and its effects and have no contact with people who do. All too often they are scared of radiation. In the USA, history shows that the press and other media do not help. After TMI not one major newspaper got the units straight, confusing DOSE and DOSE RATE. Most practical scientists are a bit sloppy here (as
noted in the footnote). Although they would not confuse dose and dose rate, they use exposure, absorbed dose and medically effective dose almost interchangeably. This sloppiness is aided by the fact that (usually) 1 R leads to 1 rad, which is usually 1 rem. Not even the Associated Press bothered to quote the accurate press releases of the NRC. It was a bit better after Chernobyl, but there were numerous nonsense stories, and to my certain knowledge they refused to publish an accurate account from the Pravda correspondent in Kuwait who filed while on vacation in Kiev after a visit to the power plant. Even the Japanese criticality incident was badly described. The NY Times quoted the measured site boundary dose rate, but although they got the number right, they got the units wrong. They used R/hr rather than mR per hour, thereby changing a nuisance into a disaster. What follows is one suggestion for changing this situation.

SOME FIRST RESPONDERS SHOULD BE RADIATION WORKERS
I therefore suggest that we at Erice propose that all groups that might become first responders: fire brigades, Griffin in the UK, or others, have 10% of their workforce trained and qualified as RADIATION PROTECTION officers. The 10% might be made up of volunteers with other normal jobs, such as nuclear and high energy physicists from surrounding universities and laboratories who volunteer to get additional training and can abandon their normal work and come in promptly for this duty. For example, I have been semi-formally designated as a radiation worker since 1946. I have been qualified as such at several laboratories and was brought "up to date" in 2009 both at Harvard and at Jefferson Laboratory in Newport News. I am entitled to go into a radiation area in the relevant laboratory, and can accompany a non-radiation worker and ensure that he does not get into trouble. As an example of how such a corps might work in an emergency, they would be willing to undertake in such an emergency a dose rate of up to 1 rem/hr, and a total dose of 50 rem, without flinching. I note that an astronaut used to be allowed a dose of 80 rem. Those over 60 years old might be especially willing to accept a somewhat higher total dose because for them the risk of cancer is less, due to the 20-year latent period. However, such older people would still have to be concerned about the prompt dose rate.

SOME WALL STREET FINANCIERS SHOULD BE RADIATION WORKERS
I use Wall Street financiers as a generic term for anyone who thinks that his presence after an emergency is essential, for either a public reason or the private reason of continued operation of his business. This would enable them to return promptly to their offices immediately after an incident, determine for themselves the dangerous areas and avoid them. Of course, that might not be the CEO of JP Morgan, but his computer and data processing experts. I estimate that they would be able to go in and work in areas with 100 times more radiation dose rate than the areas allowed under the presently anticipated "green fields" approach. There are a number of studies over the years that suggest that "informed" workers are able to reduce their exposure (whether to toxic chemicals or radiation) to less than 1/5 of what it would be with simple adherence to standards without thought and understanding. Their actual dose would then be less than 10 times the allowed limit for the public.
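The arithmetic behind the informed-worker argument is easy to lay out. The following sketch is illustrative only: the office dose rate and occupancy are my own assumptions, and only the 1/5 reduction factor and the dose figures quoted in the text are taken from above.

```python
# Illustrative dose arithmetic for a radiation-trained "financier".
# The dose rate and occupancy below are assumptions for illustration,
# not measurements or recommendations from the paper.

OFFICE_DOSE_RATE_MREM_PER_HR = 1.0   # assumed dose rate in a contaminated office
HOURS_PER_YEAR = 8 * 5 * 50          # 8 h/day, 5 days/week, 50 weeks/year
INFORMED_REDUCTION = 1 / 5           # informed workers cut exposure to ~1/5 (see text)

PUBLIC_GUIDANCE_MREM = 170           # old average-public guidance, mrem/year
OCCUPATIONAL_LIMIT_MREM = 5000       # typical occupational limit, 5 rem/year
EMERGENCY_CEILING_MREM = 50000       # the 50 rem total dose mentioned in the text

naive_dose = OFFICE_DOSE_RATE_MREM_PER_HR * HOURS_PER_YEAR
informed_dose = naive_dose * INFORMED_REDUCTION

print(f"Dose with no avoidance:       {naive_dose:6.0f} mrem/year")
print(f"Dose with informed behaviour: {informed_dose:6.0f} mrem/year")
print(f"  = {informed_dose / PUBLIC_GUIDANCE_MREM:.1f} x the old public guidance")
print(f"  = {informed_dose / OCCUPATIONAL_LIMIT_MREM:.2f} x a 5 rem/year occupational limit")
print(f"  = {informed_dose / EMERGENCY_CEILING_MREM:.3f} x the 50 rem emergency ceiling")
```

Under these assumed numbers the informed worker accumulates a few hundred mrem per year: above the old public guidance, but far below occupational and emergency ceilings.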
It is self-evident that if a financier is a radiation worker he can get back to his job promptly. But can this be maintained indefinitely? In principle I think it can, but that may be asking too much. We should be content if all that society is willing to do is allow normal functioning for a limited period.
GUIDANCE FROM PROFESSIONAL GROUPS AND THE SCIENTIFIC BASE THEREFOR
Professor Steinhausler has ably set out the requirements as perceived, particularly in Europe. They were developed after Chernobyl and, if blindly applied, are, in my view, unnecessarily restrictive. The suggestions of the careful analysis by the National Radiological Protection Board in the UK, published in April 1986, one week before Chernobyl, were ignored. Here I deliberately go back a step and consider what makes sense on the basis of the radiation doses and their effects. It is important to distinguish between two different public health effects: acute effects on health and chronic effects. This distinction is often forgotten, but it should be central to all discussions. All first responders should be aware of this distinction. If a whole body DOSE of 300 rem (3 Sv) is received in a short time period (a week or so) there is a 50% probability of a prompt effect. This is the "Lethal Dose, or LD50". This is believed to be a threshold effect. Rapid blood transfusion can increase the LD50 by about a factor of 2. A whole body dose of 100 rad does not lead to acute effects. In contrast, it is believed that cancer incidence is a non-threshold effect. At a total dose of about 20 rem (0.2 Sv) cancer incidence increases by about 1%. There is a latent period of perhaps 5 years for leukemias and about 20 years for solid cancers. The implications for radiation protection depend on two scientifically based ideas. The TOTAL cancer incidence in a population should be kept low, and the recommendation is that the average dose for the whole population be kept low. I believe the recommendation of ICRP is still that the dose from man-made activities be kept below 170 mrem/year (1.7 mSv per year). But individual doses can be higher than this. Historically, workers in an occupation have risks in their occupation that exceed those of the general public. This applies to all pollutants and is anticipated for radiation workers also. The dose limits proposed by ICRP for workers are correspondingly higher than the average for the public. For example, an astronaut used to be allowed a one-time dose of 80 rem.

SPECIFIC GUIDANCE BY THE NCRP
The National Council on Radiation Protection and Measurements in the USA has issued a number of sensible reports, but my judgment is that they have far less influence than they should have. I particularly refer to two documents: NCRP 138, "Management of Terrorist Events Involving Radioactive Material", and NCRP Commentary 19, "Key Elements of Preparing Emergency Workers for Nuclear and Radiological Terrorism". I start with the proposed actions for a first responder as noted in NCRP Commentary 19.
664 "Establish an outer Perimeter at 10 mrem per hr (0.1 mGray per hr)". "Establish an Inner perimeter at 10 rem per hr (0.1 Gray per hr)". The general public should be kept out of the outer perimeter and only emergency workers should be allowed inside who will be expected to measure and minimize their exposure. In my opinion these are very sensible limits if sensibly applied. At the border of the outer perimeter the absorbed dose is about 2 rem per week. At the inner perimeter it would be 2,000 rem per week which would be fatal, but an emergency worker would be able to measure and keep the dose to 1120 of that. They limits apply to DOSE RATE as they should. It is very important that the first responders do not establish the outer perimeter too far out. The public fear and consequent public pressure, after TMI and more particularly Chernobyl led many authorities in USA and Europe to make extreme rules. In the USA the Nuclear Regulatory Commission established a 10 mile radius "emergency palling zone" around every nuclear power plant.. Many jurisdictions, my own state among them interpreted that as "an emergency evacuation zone"-an area to be evacuated even before any radioactivity were released. I personally would have disobeyed any emergency order. Yet as stated clearly in NRCP 138 "Evacuation is the most potentially disruptive of all the early phase counter measures"-a fact borne out by some early WHO health studies. I note that this was explicitly mentioned in the Kemeny report subsequent to the Three Mile Island accident. The context there was than a general increase in cancer rates of about 20% should be anticipated among those simply disrupted by evacuation. This warning is all to often ignored. More recent calculations, and therefore guidance from nuclear power safety experts, is that general emergency evacuation in the event of a nuclear power plant accident, whether naturally caused or terrorist caused, within the 10 mile emergency planning zone is undesirable. but not within a mile, is to stay indoors, with windows closed, and await measurements of such factors as wind direction and total release. It is also a matter of historical fact that at TMI it was inappropriate to suggest an evacuation and the radiation safety expert in Harrisburg, PA was correct in so advising Governor Thornburgh. Also the evacuation of Pripyat after the Chernobyl accident, although delayed 36 hours, was soon enough. Since advance preparation may be crucial, it will be important for the local fire brigades and others who society demands become first responders, be aware that there are enough trained radiation workers. This is easy to accomplish. Already fire brigades are made aware of potentially hazardous facilities in their region, make inspections thereof, and meet the local safety personnel. They could by the same token meet the local radiation workers. Since measurement in the first minutes is crucial, any facility should have in their first aid box, which we hope is easily available, a dosimeter with batteries fully charged, with ranges appropriate for measuring exposure rates between 10 and 1,000 mR per hour. Any prior trained Wall Street financiers would have a different set of incentives that the first responders: to ensure that their business could continue during an emergency and recover smoothly thereafter. They might naturally find which rooms, buildings and offices within their purview were 'clean' and which had to be avoided. They would act as
They would act as a natural brake on any well-intentioned but undesirable excessive expansion of the outer perimeter.

THE REQUIREMENTS FOR REENTRY, INCLUDING PERSONAL EXPERIENCE
I here discuss two closely coupled matters. Should there be an evacuation from a contaminated area, which would, for an RDD or dirty bomb, be after the initial assessment? And when can someone reenter an area within the outer perimeter, which was established because of a high dose rate? It is important to understand very clearly that the immediate hazard, where a high dose rate is important because of acute somatic effects, is not the relevant criterion. What matters is the total dose that can induce cancer. With a total dose of 20 rem (0.2 Sv) the cancer rate could go up 1%. But obviously an 80-year-old need not worry as much as a 20-year-old. It is usual to hear comments about the area around Chernobyl which 'will be uninhabitable for thousands of years'. The fact that the area is now basically uninhabited influences public perception and hence some official actions. But it is important to distinguish between compulsory evacuation and guidance. This distinction should be easier to make in the nominally "free" western society than in the controlled society of the USSR of 1986. Unnecessary evacuation, with its concomitant disruption, has an untoward effect on health that must be balanced against any assumed or calculated dose reduction. There are several interesting historical anecdotes about the Chernobyl evacuation. As noted above, it was not immediate. Pripyat was evacuated 36 hours after the accident with no problem. The estimated average dose was about 3 rem (30 mSv), which is well within guidance. Some villages immediately downwind had higher doses and were evacuated later; Pavlowski conservatively estimated the average dose for Chistalogovka, which was evacuated after 3 days, at 45 - somewhat higher than desirable. When the Ukrainian authorities expanded the area slightly, the Minister of Health went to one village and suggested evacuation. The villagers did not want to move and the Minister gave in. The uneducated villagers had a judgement which turned out to be right, even on the magnitude of the dose itself. In retrospect it was found that the level of contamination was less than the natural radioactivity nearer the Black Sea, to which they would have been sent! (Personally told to me by the Minister on May 19, 1991.) It is well known that after the evacuation many peasants, particularly older people, came back to their homes in the exclusion area and were allowed to stay. I have been to the area many times, and once, in 1994, to one of the most contaminated areas, in Belarus just north of Chernobyl. The highest dose rate measured was 2 mR per hour, which clearly would diminish with the half-life of Cs-137 (30 years), but the average over the area was much less, and even the most exposed individual would have received less. It was fine to eat the apples in the orchards (but we spat out the pips, which are known to concentrate Cs!). If I lived there, I would also limit my use of mushrooms, which disproportionately concentrate cesium. South of the plant, in Chernobyl town itself, scientists are living and studying the ecology. But this area is not a crucial area for the economy of Belarus or the Ukraine, so there is no attempt to repopulate it or change its status as an exclusion zone.
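For reference, the decline of the 2 mR per hour measured in 1994 follows the usual exponential decay law for Cs-137 (half-life 30 years); this is a sketch that ignores weathering and migration of the cesium into the soil, which in practice reduce the external dose rate faster than decay alone:

\[
\dot{D}(t) = \dot{D}_0 \left(\tfrac{1}{2}\right)^{t/T_{1/2}},
\qquad
\dot{D}(30\ \mathrm{yr}) = 2\ \mathrm{mR/hr} \times \tfrac{1}{2} = 1\ \mathrm{mR/hr}.
\]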
But radioactivity in Wall Street might be very different. What are the criteria for allowing people to reenter the area or stay in it? Plots of total dose as a function of distance suggest that under some circumstances a total dose of 170 mrem per year (the original guidance for the average dose for the general public) might be exceeded over a large area. The implication is that in the event of a dirty bomb, clean-up to this level might not be generally possible and reentry would not be allowed. This might not matter in the countryside, but in Wall Street it might be a disaster. Another historical example is the reaction of the British authorities to a poisoning by Po-210. Po-210 is only hazardous when ingested. The contamination by alpha particle emitters in the machine shop of the University of Manchester is legendary, and showed up in cloud chambers made there, aptly described as "Rutherford's ghosts". But there is no evidence that similar small amounts on desks, chairs and other furniture would lead to excessive adverse health effects. The disruption in various offices after the discovery that a Russian had been poisoned with Po-210 was considerable. This may have been necessary to discover the extent of the poisoning plot, but as we can now tell, it was not necessary for public health. The distinction between compulsion and guidance could be all-important, and the suggestion that the financiers could be classified as radiation workers could be very important. Although I unsuccessfully raised this before U.S. regulatory authorities and a Canadian Royal Commission in the early 1980s, it was seriously discussed at an international meeting in Minsk in 1996. I had an argument with a French safety inspector, who strongly disagreed with my point of view, only to apologize for his disagreement that evening after he had thought about it. I put it succinctly to those suggesting compulsory exclusion:
• Are you going to forbid ALL people to enter the exclusion zone?
• Why not allow in people knowledgeable about radiation who can control their dose?
• Why not allow in those over 60? (The Belarus authorities do, as noted above.)
  - If those over 60 are allowed back, are their grandchildren allowed to visit for limited periods? (They are.)
  - If you want to forbid 20-year-olds from living there, what else would they do?
  - Finally, would you forbid poor beggars who have nowhere to live (such as those from the slums of Calcutta) from living there?
These are tough societal questions. The answers must necessarily depend on many factors other than radiation exposure. For Wall Street financiers faced with the effects of a dirty bomb downtown, the answers may be different than for a Chernobyl farmer. Allowing radiation-trained Wall Street financiers to reenter an area might be a simpler decision, particularly if the entry were voluntary. While I suggest that many should get training in advance of an emergency, some might want to be so trained very soon thereafter. I would expect that:
• The financiers would perform their own risk-benefit calculation;
• The financiers would limit their occupancy to 8 hours per day, 5 days a week;
• The financiers would have their own dose meters to measure doses;
• The financiers could choose the cleanest offices;
• The financiers might gain the trust of society's radiation safety personnel.
In general I would expect radiation-trained financiers to be as careful as I am. Yet I have been inside the sarcophagus at Chernobyl unit 4 with a dose rate, measured by myself, of 1 rem per hour; I have flown transpolar at 35,000 feet. In 1948 I picked up a fallen 2 Curie source with my fingers (quickly, at arm's stretch) with an estimated whole body dose rate of 10 rem/hr (0.1 Sv per hour). But undoubtedly the highest doses I sustained were at age 6, using an X-ray fluoroscope to look at my feet in the shoe store. This was before I was informed about radiation hazards, but I later estimated the dose. In 1958, I had a chest X-ray done in a clinic to satisfy the lawyers at Stanford University that I had no lung cancer before working as a radiation worker. I measured the dose. The film badge was black, indicating an exposure of 1 R (and consequent dose of 1 rem) which, although 150 times more than necessary, was common at the time. I note that many people willingly live in areas of high natural background, whether from cosmic rays or high levels of natural radioactivity in the area. This suggests that no level of compulsion, or even guidance, be used to stop radiation dose levels of the order of those in Denver or even Aspen, Colorado, which can rise to 20 rem in a lifetime. As a test of the acceptability of this idea I have personally asked a few staff people in Harvard University physics and engineering. Although they understand little or no physics or public health, they do understand physicists. They have indicated that they would be willing to work as radiation workers, wearing a film badge and following the advice of those who do understand. In the words of one of them: "If you say it is OK, then I would be willing". This was of course also the attitude of the staff of the late Harvard Cyclotron Laboratory. This gives me hope that careful planning of this sort, and allowing people voluntarily to be responsible for keeping their dose low, would allow life to proceed almost as usual after a dirty bomb attack. If widely advertised and accepted, it would also make a dirty bomb attack less attractive for a terrorist.

Note on units
Radiation exposure is usually defined in terms of ionization of dry air at normal temperature and pressure by the radiation. The older unit is the "Roentgen" (R), named using the older convention that the first letter of a name is always capitalized; the newer SI unit of exposure is the coulomb per kilogram, where 1 R is equal to 2.58 x 10^-4 C/kg. Anyone exposed to radiation will absorb some. The absorbing medium can be tissue or any other material (for example, air, water, lead shielding, etc.). The quantity is measured in terms of energy per unit mass, and the unit of absorbed dose is the "rad", where 1 rad = 0.01 J/kg. But all tissues are not equivalent in terms of biological effects in man, and we convert absorbed dose to dose equivalent, or "rem", with a "quality factor". For practical scenarios with low "linear energy transfer" (LET) radiation such as gamma or X rays, most sloppy scientists, including myself, set 1 rad = 1 rem; although these are fundamentally different quantities, in practice both are produced by 1 R. Although some scientists still use the c.g.s. system of units, the international standard is now the "Système International" (SI) of quantities and units, which uses the "rationalized" m.k.s. system. Names and jargon have also changed in accordance, just to be sure that everyone remains confused. "Radiation exposure" is now "air kerma,"
absorbed dose is now measured in the gray (Gy) (not capitalized, although Gray was a person), and dose equivalent in the sievert (Sv). 1 Gy = 100 rad, and 1 Sv = 100 rem. The usual sloppy scientists will now say 1 Gy = 1 Sv = 100 rad = 100 rem = 100 R. Whether an "s" is added for the plural seems a matter of taste. This sloppy scientist tries to get his freshman students used to changing units by asking them to convert them all, and the fundamental constants, to the system of Rod, Stone and Fortnight. It should be noted that nuclear and particle physicists learn this stuff as a matter of course. To remind them, the pocket PARTICLE PHYSICS booklet available from CERN or LBL lists these on page 251.
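As a small worked illustration of the conversions summarized in this note, and of how much difference the R/hr versus mR/hr confusion described earlier makes, the sketch below applies the sloppy-but-practical low-LET rule 1 R = 1 rad = 1 rem; the example reading is an arbitrary assumption.

```python
# Unit conversions from the note above, using the low-LET approximation
# 1 R ~ 1 rad ~ 1 rem, so 1 R ~ 0.01 Gy ~ 0.01 Sv = 10 mSv.
MSV_PER_REM = 10.0

def mr_per_hr_to_msv_per_hr(rate_mr_per_hr):
    """Convert an exposure rate in mR/hr to an approximate dose-equivalent
    rate in mSv/hr (1 mR ~ 1 mrem ~ 0.01 mSv)."""
    return rate_mr_per_hr / 1000.0 * MSV_PER_REM

reading_mr_per_hr = 10.0   # assumed measured exposure rate, mR/hr
print(mr_per_hr_to_msv_per_hr(reading_mr_per_hr), "mSv/hr")           # 0.1 mSv/hr
# Reporting the same reading as R/hr instead of mR/hr, the error described
# in the text, inflates it by a factor of 1,000:
print(mr_per_hr_to_msv_per_hr(reading_mr_per_hr * 1000.0), "mSv/hr")  # 100 mSv/hr
```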
INDIA'S RESPONSE TO THE PROSPECT OF WMD TERRORISM
RAMAMURTI RAJARAMAN
School of Physical Sciences, Jawaharlal Nehru University
New Delhi, India

India has been a victim of terrorism for many years. But it was only after the massive attack on Mumbai in November 2008 that the world was shocked into realizing the extent of the problem we have been facing. That event, and the escalation in Taliban militancy within Pakistan itself, have made it clear that long-standing acts of "Jihadist" terrorism in India are not just some bilateral skirmishes between "long-standing rivals" India and Pakistan. They are in fact manifestations of the same problem that has come to plague not only the western world, but also Pakistan. They stem from the extreme militant wings of Islamic fundamentalism which have been unwisely nurtured for decades. With the recognition that the Jihadist terrorism perpetrated in India is a close cousin of the Taliban-supported Al Qaeda attack on the World Trade Center on 9/11 and of various bombings in the UK, a related question emerges naturally. The U.S., the UK and Europe take very seriously the possibility that the threats, hitherto confined to conventional explosives and aircraft assaults, may expand in scope to include the use of chemical, biological, radiological and nuclear weapons (CBRN). These countries, and particularly the U.S., have been taking very elaborate steps to prevent such attacks, and to mitigate their effects, should they happen. The question is, does India too take the possibility of a CBRN attack as seriously as the West does? And if it does, what steps has it taken to prevent such attacks and mitigate their effects? And to the extent that the Indian response to the CBRN possibility is not as strong, what are the reasons behind it? A more intriguing question is whether, perhaps unintentionally, it was wiser of India not to have instituted more vigorous mitigation and prevention steps. These are the issues we hope to cover in this article. We will begin by summarizing the nature and magnitude of terrorism in India. Then, to the extent that the Indian government has addressed possible CBRN events, we will briefly describe the measures it has taken. We will see that while increasing attention has been paid to this problem during the past year, the actions taken are still at a theoretical, organizational and planning stage, with little in the way of concrete measures at the ground level, especially by the standards of the steps taken in the U.S. and Europe. We will then attempt to offer some understanding of the reasons behind this relatively less vigorous response in India to the prospect of CBRN terrorism as compared to the developed countries of the West.

TERRORISM IN INDIA
That India does not have in place a well-established infrastructure for dealing with a CBRN attack is certainly not because we have not suffered from terrorism. Although, as with the rest of the world, there have been no CBRN attacks, there have been plenty of conventional terrorist strikes and suicide bombings. These have been going on, in one form or another, for decades.
Unlike the U.S., which identifies its major terrorist threat as coming from Al Qaeda and other "Jihadist" groups, in India we have had several different terrorist groups and insurgencies. Past examples include Sikh separatist terrorism in Punjab and several violent insurgencies in the north-east of the country. The violence in Punjab was finally contained and gradually eliminated in the early 'nineties, while the militants in the north-east have by now been partly (and far from fully) co-opted into the democratic system. Today, the two biggest groups of terrorists active in India are the cross-border Jihadist groups and the Maoists. The Maoists are violent leftist extremists who have gradually increased their hold over underdeveloped rural districts in the eastern and central-eastern parts of the country through a mixture of guerilla strikes on the police and intimidation of villagers. Extreme poverty and poor administration in these areas have contributed to the empowerment of the Maoists by alienating villagers from the state. As far as I can judge, the Maoist movement is still on the rise in India. But it is generally believed that they would not have the wherewithal to get hold of CBRN material or the know-how to weaponise it. Also, as a "liberation" movement, the Maoists would not have the motivation to kill a large number of civilians. So I will not discuss them further in this article. The second major perpetrators of terrorism in India today are the Jihadists. Indians widely believe that they have been largely funded and trained by elements in Pakistan's intelligence agencies. In fact India, as a neighbor of Pakistan sharing a long, thousand-mile border, was being subjected to cross-border terrorist strikes long before the terrible 9/11 tragedy in New York that shook the world. They not only continue to terrorize Kashmir, where they started, but have spread gradually all over India. Among other things, the Jihadists have attacked railway trains packed with people, bombed shopping centers in New Delhi during peak hours, shot delegates at an academic conference at the Indian Institute of Science in Bangalore, and launched an assault on the Indian Parliament while it was in session in 2001. The largest such attack in terms of casualties was the series of thirteen coordinated explosions on 12 March 1993 in Mumbai (Bombay), in which 257 persons were killed. More recently, 11 July 2006 saw the execution of a series of eight bomb explosions at seven locations on local trains and stations in Mumbai at peak hours. Nearly 200 people were killed in that attack. To cap it all came last year's attack on Mumbai. The worldwide shock and condemnation that followed the Mumbai tragedy has temporarily put a halt to further strikes, but they seem to be reviving again. At the time of writing in early June 2009, intelligence agencies in India have just uncovered a plot by three terrorists who had infiltrated across the Pakistan border, with a plan to launch a string of attacks in South Indian cities like Bangalore, Hyderabad and Chennai. Security warnings have been issued to all major cities in that region. In terms of annual fatality counts, only Iraq has the dubious distinction of having more terrorist-caused deaths this past decade than India, which comes second in the world.
In the state of Jammu & Kashmir alone, 30,517 people died because of terrorism between January 1988 and December 2001. Since then, overall fatalities connected with terrorism and insurgency in India were 2,765 in 2006 and 2,598 in 2007; they were even higher earlier, with a peak of 5,839 in 2001.
EFFORTS TO PREVENT CONVENTIONAL TERRORISM
Therefore terrorism is a very serious issue in India. In fact I would say that, no less than in the United States, terrorism is viewed here as perhaps the single biggest threat to national security. Consequently there is a major effort by the government to anticipate and prevent terrorist attacks. Our intelligence agencies spend a major fraction of their energies and funds towards this end. Among the measures taken are:
1. Where individual security checks are possible at public places which are potential targets of strikes, such checks are done regularly. These include airports, movie theatres, major sporting events, etc.
2. Airport security checks are done not just before entering planes but even when entering the airport itself, where non-passenger entry is forbidden.
3. There is fairly sophisticated surveillance of communications between terrorists and their suspected safe havens.
4. Surveillance is also done of underground financial transactions that could be instruments for funding terrorism.
5. New Delhi, an especially attractive target for terrorists, sees much more visible evidence of anti-terrorist measures. Near the airport and train stations you will see police bunkers with policemen squatting behind sandbags wielding automatic weapons. You will also see police barriers on major roads leading out of the city, which slow down the traffic to enable cops to inspect individual vehicles as they go by.
6. Considerable police protection is provided to a large spectrum of "VIPs", such as high-risk politicians, movie stars and sports icons. Sometimes their motorcades hold up traffic, causing resentment and irritation among the general public.
7. Both groups, especially the Jihadists, derive considerable support from other countries. In particular, much of the Jihadi terrorism is funded from Pakistan and Bangladesh, either directly or through conduits in the Persian Gulf. Therefore our external intelligence agencies (the analogues of the CIA) also come into play in trying to prevent infiltration of terrorists, their arms and their funds from outside the country.
It is not easy for people outside the police and intelligence agencies to make their own independent estimates of how successful these preventive efforts have been. While successful terrorist acts get a lot of publicity, the ones thwarted by the vigilance of the police don't make news. Statistics like the decrease or increase in the annual number of terrorist incidents or deaths are indicators, but not definitive ones. The number of "successful" assaults has to be matched against the number of attempts, about which there is little public information. The resources and capabilities of terrorist groups vary from year to year, as do their strategies, so one does not know to what extent the increase or decrease in the number of their attempts has been affected by the preventive measures. With this proviso, however, some statistics are very encouraging. That the nationwide
terrorist fatality count has fallen from almost 6,000 in 2001 to about 2,600 last year is very heartening.

EXISTING CBRN MITIGATION MEASURES
Unlike the case of the Maoists, the possibility of the Jihadist terrorist groups launching a CBRN attack in India cannot be dismissed lightly. It is unlikely, but not outside the realm of possibility. They have much more technologically sophisticated cadres, widespread international contacts, and could have access to funds and infrastructure comparable to those of many state actors. Indeed, a substantial segment of the security community in the U.S. considers them capable of building even a full-fledged nuclear weapon if they could get hold of the required fissile materials. Therefore a CBRN attack does fall within the range of capabilities at the technical level. Despite this, and despite the extensive efforts in India against terrorism by conventional means, the efforts to prevent and mitigate the effects of CBRN have been at best preliminary. CBRN attacks would fall under the category of disasters, and we do have many institutions and agencies related to disaster management. But these are mostly focused on natural disasters. For instance there is a National Institute of Disaster Management (website: http://nidm.gov.in/hazards.asp) which runs a wide variety of programs. It offers all kinds of training programs on activities like food relief to disaster-affected communities, shelter relief and reconstruction, protection against future risks through disaster insurance, and so on. Their work involves not only issues of police and medical help, rehabilitation of victims, etc., but also ways to enlist the help of civil society in the area. The full list of hazards addressed in detail by this Institute includes tsunami, earthquakes, cyclones, floods, drought, landslides, avalanches, forest fire, pest infestation and so on, but there is no mention of any man-made disasters, including those initiated by terrorists. Many of the disaster management measures suggested for these natural disasters will of course also be of use in the event of a CBRN attack. But until recently there were no measures specifically to tackle CBRN attacks discussed either in that premier institute or in any other agency in India. [Aside: Exceptions to this are the great security precautions taken around India's nuclear reactors. Our nuclear complexes have always been guarded heavily and the concrete shells of the reactors fortified against penetration by physical assault. Of the several possible modes of attack on reactor complexes listed in Table (1a) of Professor Steinhausler's excellent 2008 article for our PMP, most are probably protected against, except possibly assaults by missiles or a large airplane impact à la 9/11. Similarly, the dangers arising from spent fuel transport listed in his Table (1b) are greatly reduced because most of the spent fuel still lies in tanks within the reactor complex. Where the fuel is reprocessed to separate plutonium, whether civilian or military, this is also done within the reactor complex. There is very little transportation of spent fuel over public highways. But these measures have been around for decades, not so much because reactors contain radioactive materials that terrorists could exploit but more because they are considered major national assets and symbols of technological achievement. Their
protection is aimed not just against terrorist attacks but even against conventional commando or air strikes during, say, a war with Pakistan. Similarly, radioactive substances in hospitals and research laboratories are also controlled, although not so well as to ensure that terrorists could not slowly squirrel away small amounts of radioactive material from different parts of the country and assemble an RDD with it. That small stocks of such material are kept in so many widely dispersed places makes the security of their inventories less reliable.] However, in recent years there has been a very positive development in the formation of an apex-level National Disaster Management Authority (NDMA). Established in 2005 with the Prime Minister as its Chairman, the Authority has been busy establishing the chain of command in the event of a disaster, the local bodies to be involved, etc. More recently it has also been publishing reports dealing with individual forms of disaster. These do include chemical, biological and RDD attacks. In particular there is a useful and educative report on RDD strikes, the isotopes likely to be used, their antidotes, etc. For example, see the report "Management of Nuclear and Radiological Emergencies", a publication of the National Disaster Management Authority, Government of India, ISBN 978-81-906483-7-0, February 2009, New Delhi. Their website is www.ndma.gov.in. While the formation of the NDMA and its taking up CBRN seriously are welcome developments, they constitute just the beginning of the process of mitigation. As things stand today, what has been done so far is the organizational and bureaucratic structure of disaster management, along with general recommendations and guidelines. These have yet to be put in place at the ground level in terms of actual facilities, responders and equipment. If an RDD attack were to happen tomorrow, we would not be able to deal with it any better than we could 5 years ago.

PERCEPTIONS ON WMD TERRORISM AND LIMITATIONS ON MITIGATION EFFORTS
While the efforts by the Indian NDMA to address CBRN attacks are commendable, and will hopefully give rise to some concrete action on the ground around the country, it is worth thinking about two things:
1. Why has the level of concern over CBRN terrorism been relatively modest in India as compared to, say, the U.S. or UK?
2. Even if the authorities were to take the possibility of CBRN very seriously, to what extent can they actually institute preparatory measures in that vast country, with its limited financial resources?
The answers to these questions would apply not only to India but also to many developing nations in Asia, Africa and Latin America. The psychological attitudes and infrastructural problems faced in India in this connection are likely to be shared by other developing nations-in many cases to a larger degree. Therefore the Indian story is also representative of how the Third World reacts to the possibility of WMD terrorism. It is important for the developed world in Europe and North America to acknowledge and understand the prevailing indifference in the Third World to possible
WMD attacks, and if possible, understand the reasons behind it. Even though the former are the more likely targets of any WMD form of terrorism, the problem cannot be dealt with by them alone. Manpower and materials for a WMD attack are just as likely to come from a developing nation, where security requirements are relatively lax, as from a developed one. Hence the problem has to be addressed at the global level, requiring cooperation from all those countries which could potentially be sources or conduits of ingredients like radioactive materials. People deeply worried, with good reason, about WMD terrorism in the U.S. and Western Europe must develop some insight into why such concerns remain relatively tepid in the rest of the world. That is a first step in enlisting the cooperation of the rest of the world. Agencies and individuals interested in instituting robust mitigation measures throughout the world should keep these difficulties in mind. There are two major reasons why India had not, till recently, put in place adequate preparations for mitigating a CBRN attack. One is that while India is assaulted by terrorists all the time, these are all "conventional" strikes, usually with RDX explosives. So the possibility of these terrorist groups resorting to any form of WMD attack has not really entered into the public consciousness. If asked, members of the public and the polity would make a polite token acknowledgement of those dangers, but would not rate the danger as imminent. As far as I know, this perception is not based on any hard evidence or quantitative analysis, but it is nevertheless widely shared in the country. The prevailing terrorist groups are seen to be doing sufficient damage with conventional explosives. Their periodic blasts with conventional explosives at public places seem sufficient to serve their purpose of grabbing headlines for their cause and consuming a chunk of the Indian state's funds and energies. Aside from the perception that WMD attacks are unlikely, there are deeper cultural and economic reasons behind the absence of any special preparedness to mitigate their effects. In fact there is not much in place to mitigate even conventional terrorist strikes with chemical explosives, although those do happen so frequently these days, often killing dozens in each episode. The willingness of a society to take steps to mitigate the consequences of a particular emergency depends on how that potential emergency compares with the level of risk and hazard with which people live on a day-to-day basis. It also depends on the affluence of the society (whether it can find the funds to pay for such mitigation precautions) and on its technological capability to put in place sophisticated mitigation plans. Hundreds of millions of people in India live in tin shacks and mud huts that could collapse on their heads in a storm, and eat food that keeps them permanently undernourished and ill. Compared to such problems of day-to-day survival, they simply cannot afford to worry about the comparatively minuscule probability of CBRN attacks. As a corollary, neither can their political leaders. It is also useful to note the ways in which natural disasters are different from terrorist attacks. Earthquakes, floods and tsunamis can kill thousands of people. But they don't claim their full quota of casualties in seconds.
If medical help can reach them within an hour or two, and if transportation to temporary shelters, sanitation and food can be provided within a day or so, that is enough to greatly mitigate the tragedy and save thousands of lives. This calls for relatively traditional low-tech modes of preparedness
that many developing countries can manage to provide to a fair extent, and may find worth the effort in view of how many lives it can save. As distinct from natural disasters, terrorist strikes are of a different magnitude and have a different time scale. Their damage, usually because of blast effects, happens within seconds or minutes. There is no time to bring in additional mitigation facilities beyond whatever civic services are already available nearby: ambulances, neighborhood hospitals, etc. For instance, two bombs exploded in New Delhi's popular Sarojini Nagar market during peak Saturday shopping hours on 29th October 2005, killing about 50 people. Police vans did rush to the spot within minutes, and ambulances soon thereafter, and the casualties were shifted to hospitals quickly. Many more would have died of their wounds if this had not been done. This is mitigation, but done only within the normal day-to-day capabilities of a city's hospitals and police services. Enlarging these normally available facilities for the sake of better mitigating a low-probability terrorist attack is just not feasible, since such attacks could occur in any district of any major city in the country. Their civic services are already as good as they can be, given the overall financial constraints of a developing country and the prevalent levels of governmental efficiency and corruption. Since the fatalities of any given conventional terrorist attack are in the tens rather than hundreds, and since they don't occur in the same municipality every time, the level of response remains limited by pragmatic political and financial considerations, as compared to other more serious things that require attention. With the establishment of the Disaster Management Authority in India, more funds may be made available at the town and city level of this vast country, but given overall financial constraints and competing demands, the funds and attention that can be devoted to terrorism will remain limited. It may appear to people in the U.S. and Europe who are (understandably) deeply concerned about the prospect of WMD terrorism that the Indian attitude I have sketched is too complacent. It might well be, but that is human nature. Consider what the American response has been to the prospect of major U.S. cities being hit by nuclear weapons, fired either in anger or by accident. Fifteen years after the end of the Cold War, there are still thousands of ready-to-launch nuclear missiles in Russia, each capable of killing millions in an instant and wiping out whole cities. Yet how many Americans are actively worried about it? In the early days of the Cold War several nuclear civil defense measures were initiated in the U.S. and the UK, including basement shelters in every home stocked with food and iodine. But most of these plans have just petered out, largely because the anticipated disaster doesn't seem to have happened and people have become blasé about living in the shadow of the nukes. By comparison, the 9/11 tragedy, which killed "only" a few thousand people and destroyed a couple of buildings, has caused a deep national trauma in the U.S. that has affected its foreign and defense policies in a fundamental way. The low-key nature of the Indian efforts to mitigate the CBRN menace has its good and bad sides. The good side is that the public, with so many other problems of more compelling urgency to deal with, is not additionally burdened with the panic and anxiety associated with the threat of a CBRN attack.
Keeping such threats in the day-to-day consciousness of the public, in order to facilitate vigilance and speedy mitigation, can also lead to distrust and suspicion in society, often pitting brother against brother.
The bad side is obvious. The current levels of preparedness will prove disastrously inadequate if an attack does really take place tomorrow. Certainly, if there were to be even one actual RDD terrorist attack in India, the level of preparedness for the next one would increase greatly, perhaps to paranoid proportions. Look how things changed in the U.S. after the 9/11 event, and how India's preparedness to cope with oceanographic disasters improved greatly after the giant tsunami hit us in 2004. Similarly, public psychology being what it is, until a WMD terrorist attack happens in India (heaven forbid!), worrying about that prospect will remain just one among many priorities here. I fear the same attitude prevails in most developing countries in the world.

Pakistan has offered another example of the same phenomenon, in the context of conventional RDX terrorism. For many years, the Pakistani public and even much of its intelligentsia were in a state of denial about the menace of Jihadist terrorism being nurtured on Pakistani soil. It was only in the past year, when the terrorists turned their guns towards Pakistan itself and suicide bombers began to inflict serious damage on the Pakistani heartland, that there was significant public acknowledgement of Jihadist terrorism. Steps are now being taken by the Pakistani army to counter them. In the interests of India, Pakistan and the rest of the world, it is to be hoped that these measures will help contain the Jihadist menace.
POLITICIZATION IN THE PROCESS OF INTERNATIONAL COOPERATION TO MITIGATE NUCLEAR TERRORISM: SOME DUBIOUS RESULTS

DR. VASILY KRIVOKHIZHA
International Department, Federal Assembly of the Russian Federation
Moscow, Russia

The experience accumulated over the last few decades in international cooperation to mitigate nuclear terrorism shows that multinational efforts in this field have unfortunately been subordinated to the achievement of the wider national purposes of the participating states, which impedes, to some extent, the achievement of more tangible progress in the original sphere. This contention may sound banal, and that banality is itself the basic problem: it reflects the fact that such subordination is universally recognized and widely accepted by the majority of the countries participating in the practice. It is viewed as just one of the habitual and key norms of international behavior and, being so commonplace, seldom arouses objections. In this context there is little room for optimism about changes that would make international cooperation to mitigate terrorism more effective, or at the very least workable.

Against this background, this paper focuses not on the rationality of the situation (it is the combined result of the pragmatism of national Realpolitik, which by itself proves its rationality), but on the familiar, useful and misleading stereotypes that surround nuclear terrorism. The analysis will ask which stereotypes are actually reinforced by the reality of nuclear terrorism and, on that basis, will offer a foundation for upgrading the general evaluation of the results, and of the effectiveness, of the current process and framework of international cooperation to prevent nuclear terrorism. The importance of such an analysis is reinforced by the large number of experts worldwide who show an active interest in programs intended to neutralize, so far as possible, the probable consequences of nuclear terrorism (and of other means of mass annihilation). This interest indirectly confirms an endemic lack of belief in the efficacy and reliability of the modern-day system of measures to mitigate and prevent mass-destruction terrorism and, primarily, nuclear terrorism.

In this light, it is logical to begin with a brief discussion of the plausibility of an act of nuclear terrorism actually occurring. The perceived danger of nuclear terrorism derives from the assumption that the growth in the number of nuclear states (from five in the early 1960s to nine, including North Korea, in 2009) gradually increases nuclear arsenals and broadens their windows of vulnerability. According to some experts' estimates, the nuclear states have produced approximately 125,000 military nuclear devices, a figure that certainly makes an impression. Attention is also paid to contemporary conditions and to some possible future trends in the development of nuclear energy. About 280 research reactors and 440 power reactors operate in more than 30 countries around the world. Keeping in mind the danger of nuclear terrorism (as well as the risk of
proliferation), some experts highlight the fact that, according to generalized calculations, a 1000-MWe nuclear reactor (i.e., one that generates 1000 MW of electrical energy, as contrasted with a reactor that produces 1000 MW of thermal energy, "MWt", which would yield about 300 MWe) produces, per year, plutonium in an amount sufficient for manufacturing 40-50 nuclear explosives (a rough arithmetic sketch of this figure appears at the end of this passage).

Nevertheless, the gap between the objective preconditions for an act of nuclear terrorism and its actual likelihood, that is, the difference between its possibility and its probability, is quite obvious. The main intellectual challenge is to bridge this gap in the process of analysis and forecasting, a task that is next to impossible to solve with one hundred percent certainty. That is why many experts, for different reasons and motivations, produce a large number of publications on the subject of nuclear terrorism, especially scenarios. The main result, however, has turned out to be strange and even counterproductive. Strictly speaking, the propaganda (and the intellectual environment as a whole) generated in support of international cooperation to mitigate nuclear terrorism has hardened into a deep and, most importantly, habitual public belief that the world is ready for, and stands on the threshold of, acts of so-called "nuclear terrorism." Such a public mood, in the face of the permanent development and expansion of the nuclear technology base, can hardly be favorable for preventing terrorism in its most dangerous forms. But such is the tangible result: expectation. Likewise, consistent attempts to change this state of affairs at the level of official politics are not yet evident. Moreover, the mass media and the statements of some officials are influential factors in supporting and further developing this stereotype of imminent nuclear incidents.

Meanwhile, it often seems that many authors have drawn their knowledge of nuclear terrorism only from the plots of those well-known movies about the secret service agent with a double zero in his cryptonym who saves the world from the nuclear threat. Incidentally, in a number of respects the scripts of these movies, especially those with elements of subtle irony, were interesting at least as a record of how one side of the conflict saw the trend in relations between West and East at any given period. Furthermore, especially during the Cold War, such scenarios served to remind the persons involved in decision-making about global conflict of the rules of the game: about the necessity of not letting world politics reach a degree of intensity which, in practice, would leave no opportunity (time was, and still is, the key factor in crisis decisions) to negotiate a compromise should an acute international crisis develop. On the other hand, the scenarios sometimes served as a reminder, albeit in their own peculiar form, that the participants in the global conflict of that time had not only contending but also common interstate interests. Moreover, the very essence of nuclear terrorism was depicted in the right way (leaving aside the ways in which the nuclear charges were captured): namely, the explosion, or more precisely the threat of explosion, of a nuclear device. What is also important is that these scenarios produced no rush of interest in expensive national and international programs, nor a concomitant drive to gain control over an opponent's arsenals.
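The "40-50 explosives per reactor-year" figure quoted above can be checked with a back-of-the-envelope calculation. The intermediate numbers below (annual spent-fuel discharge, plutonium fraction, kilograms per device) are not given in the text; they are assumed order-of-magnitude values consistent with commonly cited estimates, so the sketch is purely illustrative.

# Rough arithmetic behind the "40-50 explosives per reactor-year" figure quoted above.
# All intermediate values are assumed order-of-magnitude estimates, not figures from the text.
spent_fuel_tonnes_per_year = 22.0   # assumed: ~20-25 t of spent fuel discharged yearly by a 1000-MWe LWR
plutonium_fraction = 0.01           # assumed: ~1% plutonium content in spent low-enriched fuel
kg_plutonium_per_device = 5.0       # assumed: nominal quantity per implosion device

pu_kg_per_year = spent_fuel_tonnes_per_year * 1000 * plutonium_fraction
devices_per_year = pu_kg_per_year / kg_plutonium_per_device

print(f"~{pu_kg_per_year:.0f} kg of plutonium per reactor-year")
print(f"~{devices_per_year:.0f} device-equivalents per year")
# With these assumptions: ~220 kg/year and ~44 device-equivalents,
# broadly consistent with the 40-50 quoted in the text.

With the 6-8 kg per implosion charge cited later in this paper, the same plutonium output would correspond to roughly 27-37 device-equivalents, so the quoted 40-50 depends on the per-device figure assumed.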
But of course there was a quest for at least various marginal advantages (with references to the necessity of strengthening security), both during the elaboration of complex multinational (or even universal) agreements and in the quarrelsome situations concerning the interpretation of their provisions and the fulfillment of such agreements.

Theoretically and practically, the enlargement of the international legislative base might have led to a better structuring of relationships among states in this sphere and put the terminology in order, or at least clarified it. That has not happened, however, and there are grounds to say that intellectual turmoil and mutual political accusations are becoming more visible. Let us put aside the reasons. In any case, looking through mass-media reports, anyone may observe that the difference between using fissile materials as a radiological weapon and using them in a nuclear explosive device has practically disappeared: both are mentioned in one form or another, and all of it is labeled nuclear terrorism.

A similar process has taken place in political science. Instead of putting in order (together with physicists) the conceptual structure of the subject of research, or at least its key notions, political scientists have made matters worse. They have begun to multiply (each within the limits of his personal knowledge and interests) scenarios of possible (probable) acts of nuclear terrorism. These scenarios have one positive aspect: besides imagination, they express the motivations and political aspirations of their authors and their vision of the interests of different countries. It seems that such an intellectual mess is convenient for a lot of countries; otherwise things would be different. For the sphere of policy and political science this is quite a usual situation. Now all of this (some real facts, conclusions and recommendations, together with ghost stories and science-fiction scenarios of "nuclear terrorism") is cooked in one pot, especially taking into consideration the specific features of quasi-political communities, and perhaps even of political circles themselves, in a number of countries. For instance, the Russian specificity in this respect (Russia has been one of the main objects of interest in the context of "nuclear terrorism" during the last two decades) is the uncritical acceptance, and subsequent duplication, of many theses borrowed from outside; this fact reflects nothing more than the straitened circumstances of some representatives of the academic community, especially during the nineties. At first glance such a situation would not be bad for the countries capable of generating and inculcating ideas serving their own interests, were it not for one basic fact. The ideas borrowed and popularized by political scientists did not adequately reflect how the situation was seen by the people involved in the actual decision-making processes of their own and other countries, and as a result they misled key actors of the international community, producing now and again acute and conflicting political processes and situations. In parallel, the small number of genuinely serious works that do become subjects of publicity under these circumstances are, more often than not, mistakenly perceived as additional arguments for contingency planning against a supposedly inevitable act of nuclear terrorism.
The specific intellectual context in which various aspects of nuclear terrorism are examined creates a whole set of problems: from recognizing the actual hierarchy of challenges facing any given state and setting national priorities (in conjunction with efforts to estimate the real probability of potential threats or signals), to optimizing the means, including technological and financial decisions, and deciding in which spheres unilateral or multilateral efforts are preferable.
It is no secret that we live in a world of divergent (sometimes conflicting) national interests. As a result, history is continuously rewritten, and the gap between the actual strategic purposes of states and the public declarations of their leaders has grown so wide that for students of the future (even the near future) it will be next to impossible to reconstruct and understand the real state of affairs of the present world. We can witness a permanent and well-organized process of accumulating and accentuating selected facts or, even more often, artificial stereotypes. At least three features deserve attention.

First, the criterion of success in international efforts to prevent nuclear proliferation and the nuclear terrorism connected with it. There is no lack of international agreements (conventions) concerning nuclear safety and terrorism. The concrete results are more doubtful, keeping in mind, for instance, the experience (and the tendency) of the NPT Review Conferences. And there is no basis for speculating that the forthcoming NPT Review Conference of 2010 will produce better results than the last one.

Second, if one considers, for example, the situation with the nuclear program of North Korea, it is necessary to recognize that, in spite of monitoring signals and a quite effective "early warning system", international efforts proved impotent even in such a relatively simple case. In fact, the parties involved in this conflict could not agree upon a "price", which includes not only the amount and forms of economic support (energy first of all) needed to prop up North Korea's ailing economy, but also the balance between "stick" and "carrot" in the political approach of the Western allies (primarily the U.S.) toward North Korea. In its turn, the traumatic experience of most of the upper, and some of the middle, classes of the former GDR (East Germany) is surely taken into consideration by the ruling elite of North Korea in their contacts with the outside world. But in any case, approximately a decade of bargaining over North Korea has produced just what we have now. This is further evidence for the assumption that the practical priority of nuclear terrorism for most of the countries concerned is not very high. Of course, it is possible to recognize, or not to recognize (as has been declared), the nuclear status of North Korea; but this is decidedly not a question of recognition. The challenge derives from the newly acquired nuclear status of this country as such.

Third, it makes sense to keep in mind that, for a society of the North Korean type, outside pressure mostly stimulates consolidation of the current regime. Moreover, as many countries (threshold countries first of all) now prepare for the NPT Review Conference, the experience of North Korea is considered in the context of a widespread belief that only the possession of nuclear weapons protects a state against extreme measures of "coercive diplomacy" on the part of the great powers. Applied to any country (or to territory controlled, for instance, by mercenaries), the label of "nuclear terrorism", or of a WMD threat generally, opens up the prospect of various operations, including large-scale military operations (some of which were initially planned as "surgical").
In parallel, such a label fills the deficiency of information about the real posture of terrorism and allows a state of alert to be maintained inside these countries, creating an atmosphere in which preparedness must be supported on a permanent basis (sometimes this is true, but the critical question is what constitutes a reasonable
level of readiness and expense). In this sense the term "global terrorism" reveals an additional hidden meaning. Stereotypical of the use of such a term is the image of an omnipresent and omnipotent Al-Qaeda, which over the years has impressed itself on public opinion throughout the world without explaining much about Al-Qaeda itself. It is also interesting that under the presidency of B. Obama the struggle with the specter of terrorism has, to a very large extent and almost at once, lost its dominant position, for example in Washington's foreign-policy pronouncements, and this is not much connected with the change of priorities under the current conditions of the global economic crisis. What this means and what the future holds is a practically important question to which the answer is now unclear.

Now, actually, about nuclear terrorism and so-called "nuclear terrorism." Today there are "a lot of facts" ostensibly testifying to cases of "nuclear terrorism", and the interpretation of events in this manner reveals mostly the above-mentioned specific goals of the key actors of world politics rather than a concern about nuclear terrorism or an interest in elaborating on the subject. The current situation makes it all the more rational to take a more thorough look at the essence, meaning and nature of probable nuclear terrorism. For this purpose it makes sense to start with a more critical view of the widespread and well-known "facts of nuclear terrorism."

Among the most recent may be mentioned the attempt (spring 2009) to sell in Ukraine "4 kg of plutonium-239" for $10 million. That Ukrainian secret-service undercover deal was automatically presented in the mass media of Western countries as a successful operation to prevent the construction of "a dirty bomb", and nobody even tried to clarify the motivation of the sellers or to prove or disprove such an assessment.¹ Among the popular references to "the facts of nuclear terrorism" is the "crude nuclear bomb" (weight 30 kg, consisting of a mix of cesium-137 and dynamite) laid by Chechen separatists in November 1995 in Izmailovsky Park in Moscow. Here again a radiological explosive device was labeled "a crude nuclear bomb", which for most physicists normally means quite a different thing. Moreover, some well-known American political scientists immediately presented this fact as evidence of a tendency to escalate acts of terror in the direction of nuclear terrorism. We were also "reminded" of the long-term interest of "ruthless Chechen terrorists" in purchasing a nuclear weapon, and of a certain plan of the first president of the self-proclaimed independent Chechen Republic, D. Dudaev, to take control of a nuclear submarine of the Russian Pacific Fleet. And it is curious what was supposed to follow: to seize the nuclear submarine with the purpose "to mine a nuclear reactor and one of the missiles with a nuclear warhead" (!), as if a nuclear submarine were an easy target among nuclear objectives. This speculative construction looks strange in a number of other respects, even leaving aside cost-effectiveness considerations. As far as public resonance is concerned, it seems that the capture of a nuclear power station would produce a quite comparable, and for the specialist even larger, effect. Moreover, the nuclear weapons of a submarine are protected by:
¹ Ukrainian security service spokesperson Marina Ostapenko is quoted as saying it was americium. http://www.bellona.org/articles/articles 2009/ukraine smuggling arrest
• a very complicated system of electronic locks (the so-called PAL system);
• a device for testing (identifying) specific features of the environment: to produce an explosion, the outside parameters must match the data programmed into its memory (environmental sensing systems);
• the many stages, involving numerous qualified military personnel, that are needed to launch and target the nuclear missile;
• the fact that any mistake in the process of "launching the nuclear warhead" will result in a dysfunction.
So the only more or less plausible aspect of such a scenario turns out to be strange for a number of reasons: an attempt to use explosive materials to blow up a missile (warhead) in order to produce radiological contamination of the territory (or waters) of military bases that are usually distant from populated areas. That last point might be considered by "terrorists with high moral principles", but of course only as a factor favoring the permissibility of "nuclear terrorism." There are, for sure, scientists who may find this scenario interesting and provocative for further research, but in any case the script is not a very practical representation of the scale of terrorist effort required.

Not much closer to reality are many of the publications that appeared around the time of the tragic capture of hostages during the performance at the so-called Dubrovka Theatre. A few authors even succeeded in "disclosing" the initial plan of the terrorists: allegedly to seize control of the Kurchatov Institute, which has 26 active nuclear reactors and weapon-grade materials in an amount sufficient to construct a "few thousand nuclear devices" (!?). There is no sense in commenting on these specimens of "scientific thought" and their impressive details, except perhaps for one remark: a glance at a map of Moscow makes it clear that Dubrovka (even in the artificially invented nuclear context of this case) is certainly not situated a "few blocks" (in the ordinary sense) from the Kremlin, as it was depicted. And in any case, connecting the "nuclear potential" of the Kurchatov Institute with Dubrovka is a task for people with very broad imaginations.

To understand the influence of this intellectual mixture of scare tactics and some real facts on people's minds (including those of highly placed government officials), it makes sense to recall an additional fact which casts light on the real complexity of the situation. Testifying in Congress in 1998, former CIA director John Deutch expressed his personal concern in a very stimulating manner: he said he was not disturbed so much by what he knew; he could, however, be disturbed by what he didn't know. Another illustration is the story of FBI special agent George Piro, who recounted his debriefing of Saddam Hussein (January 27, 2008, on the TV news program "60 Minutes"). Among Saddam's revelations: "Saddam misled the world into believing that he had weapons of mass destruction in the months leading up to the war." This thesis, with some reservations, may be accepted as partly factual. But the following is, to my mind, already closer to interpretation: "he (Saddam) feared another invasion by Iran, but he did fully intend to rebuild the WMD program." Both statements give a wide basis for speculation about motivation, but nobody ever succeeds in proving that his vision alone is right. Cases like these are examples
of Realpolitik and a permanent subject of quarrelsome discussion among countries, politicians and scientists; the very nature of the subject leaves anyone little chance of proving that his vision alone is right. The only thing that may emerge from the debriefing of Saddam is that he was too arrogant and believed in his role as a Player, while in fact playing into Washington's hands.

So it is better now to dwell, at least briefly, on a subject which has an objective basis, so that we may find a common denominator: namely, to draw a strict distinction between nuclear terrorism proper and radiological terrorism, summing up, for instance, the viewpoints of Russian scientists from the above-mentioned Kurchatov Institute and the Moscow Institute of Physics and Technology. To start with, the construction of nuclear explosives requires adequate materials: uranium-235 and plutonium-239 of weapon-grade enrichment (more than 90% and 97%, respectively) and purity. Besides, it must be borne in mind that the actual production of such materials in the amount and quality sufficient for the construction of even one nuclear explosive device (NED) is beyond the means and resources available to terrorists. Theoretically, it is possible that terrorists could obtain such means, but only with the direct support of a nuclear or near-nuclear state with a modern industrial base. Moreover, a nuclear device (warhead) is designed first of all to produce a nuclear explosion rather than to cause radiological contamination (a nuclear device as opposed to the idea of a neutron bomb; perhaps the author means that all modern Russian warheads also have a fusion component). Any other speculation, according to Russian professionals in this field, belongs to the sphere of science fiction. Hence any information on losses, sales, thefts, etc. of any materials other than the two specified has, in practice, no relation to nuclear terrorism itself. One must also consider the relatively low level of radioactivity of plutonium-239, and the fact that the radioactivity of uranium-235 is lower still. Briefly speaking, on the one hand these materials are not the best ones "for stuffing dirty bombs" (as was suggested in the recent Ukrainian case); on the other hand, the level of radiation is enough to create serious problems for terrorists who come into contact with them, if that matters for such people, who may after all be ready for self-sacrifice.

Incidentally, the widely used stereotype that atomic power stations (APS) are among the primary targets of terrorists looking for materials to construct nuclear explosive devices (NEDs) also requires a more selective and critical analysis. To use low-enriched uranium-235 (with a degree of enrichment of about 5% in fresh nuclear fuel) for the construction of an NED is just about impossible. Plutonium from power reactors, as is known, might in principle be used, were it not for the technical complexities involved (both at the stage of extracting the plutonium and at the stage of assembling the weapon); as a matter of fact this is mostly a theory, far removed from the realizable potential of terrorists. In another context, more attention might be given to power reactors with graphite or heavy-water moderation, whose construction permits the reloading of fuel "on the move", without interrupting operation (such reactors are the Russian RBMK, the High Power Channel-type Reactor, and the Canadian CANDU).
In this type of reactor (which uses low-enriched uranium; the heavy-water CANDU actually uses natural uranium), the efficiency of accumulating plutonium in low-enriched uranium fuel
is in strong inverse proportion to the degree of enrichment. Moreover, these reactors open up a basic opportunity for accumulating weapon-grade plutonium within approximately one month. Such a short period, firstly (and most theoretically), allows plutonium to be accumulated secretly. Secondly, the time intervals typical of nuclear energy processes usually run to years, and such relatively long periods "spoil" the weapon-grade quality of the plutonium, effectively converting it into plutonium mainly fit for power reactors (there are also many other nuances). Moreover, according to specialists, the number of relevant reactors (as mentioned above) is quite limited, a few percent of the total aggregate power of nuclear stations. The greater part of the fleet consists of light-water reactors (such as the Russian VVER, the Water-Water Energetic Reactor, a pressure-vessel design), whose construction does not allow the reloading of nuclear fuel during operation; in addition, the higher enrichment of uranium-235 in their fuel makes them of little use for accumulating weapon-grade plutonium. Theoretically, there is a possibility of accumulating plutonium from the core of power reactors, but there is also such an effective instrument as IAEA control, and the problem lies not in the control system itself but in applying it to suspicious facilities. The primary objects of concern should be high-intensity stream reactors (more precisely, a type of MMR, high-intensity multiple-stream mixer reactor; perhaps not so much a nuclear reactor as a laboratory instrument in microfluidics), because such reactors are efficient both from the viewpoint of plutonium accumulation and from that of the quality of the material produced.

Russian nuclear physicists also take into consideration the very limited sphere of use of plutonium-239, which is employed mainly in the production of nuclear weapons. This means that, to gain access to weapon-grade plutonium, terrorists would have to penetrate (infiltrate) the highly secured nuclear weapons production process. The utility of uranium-235 is somewhat wider; for example, it may be used in some research reactors. Nevertheless, even if one imagines a situation in which terrorists possess the initial nuclear materials, a great number of technological problems must still be solved to construct a more or less effective explosive device. As is known, there are two basic approaches to weapon construction: the so-called cannon (gun-barrel) type and implosion devices. And again, professionals will point to many obstacles, ranging from the fact that plutonium-239 is unsuitable for the design constraints of a gun-type device to the problems caused by the presence of the isotope Pu-240 in the material. In any case, the most advanced and diverse technological base is required, beyond any production facilities terrorists can afford. Of course, every argument may well have its counterargument, and one may argue that the production process can be dispersed (the story of Captain Nemo's "Nautilus" involuntarily comes to mind). But in that case the probability of the terrorists finding themselves in the field of vision of the secret services of a number of, mostly developed, countries increases significantly.
This also sets aside, in particular, the reasoning that there is an opportunity to construct a nuclear explosive device on the basis of several grams of plutonium-239 instead of the 6-8 kg usually needed for an ordinary implosion charge. As the same experts point out, however, neither suitable explosives nor, for instance, lasers whose capacity would allow the required degree of compression of the nuclear material to be achieved have yet been
produced. And even if such lasers became widely accessible, they would need electric power in an amount produced by a huge power station (and preferably in a mobile version, to keep it secret). Taking into consideration the growing number of countries within the "nuclear club", it seems at first glance easier to capture a nuclear device from the arsenal of a country with an unstable regime, especially during one of its periods of recurrent internal turmoil. But in this connection one important aspect arises. In contrast to interstate relations, it is hard to see how to make deterrence workable (so as to exclude the possibility of acts of nuclear terror) against potential nuclear terrorists; this is a new situation and an open question. Nevertheless, one may speculate that any information (not necessarily 100% reliable) tying the origin of a nuclear device to a given territory or state, and likewise the above-mentioned possibility of dispersing production facilities among countries, would in effect provide targeting for an "adequate" military answer (especially in the emotional atmosphere following terrorist acts, which demands a quick, strident reaction urged on by public opinion).

As to radiological terrorism, the implementation of such acts seems, for a number of reasons, more realistic, but again not in the ways often described in the many speculations in the mass media and in some publications by political scientists. It is a large and very complex question in its own right which deserves systematic and detailed analysis, but I shall continue along this line of thinking, since a few points bearing on the distinction between nuclear and radiological terrorism should be mentioned. Speaking briefly (and setting aside acts intended mostly for psychological influence on the public), to prepare accurately and to calculate the parameters of an efficient radiological attack is not the easiest of tasks, although there is certainly no need to capture a nuclear submarine in order to blow up the nuclear warhead of a strategic missile with explosive material. First of all, to produce a high level of external radiation, and this is a fact that physicists point to, it is necessary to use radioactive materials with a significant amount of gamma emission (e.g., cobalt-60, radium-226, cesium-137), and it takes a certain period of exposure to inflict serious or deadly harm. Gamma radiation is also the easiest to detect and to deal with (compared with alpha- and beta-emitting products). As is known, every possible case of radiological terrorism must be studied separately, because a number of variables have to be taken into account: among them the particular characteristics of the materials, including the biochemical properties of the radionuclides and the specifics of their activity in confined spaces (rooms) or in the open air. In turn, dispersal conditions (by air and water) increase the deadly impact of internal contamination. Actually, the question is even more complex, since radiation safety norms are not thresholds above which dangerous consequences unconditionally appear; rather, they usually describe conditions under which the absence of such consequences is guaranteed with a large margin.
But even when these norms are not observed (as at Chernobyl and in its vicinity, where some inhabitants returned soon after the catastrophe; or among the inhabitants who survived the nuclear explosions at Hiroshima and Nagasaki; or among the military personnel who endeavored to "adapt" to the real conditions of nuclear war during field exercises in the 1950s), the degree of harmful effect appears, for reasons that are still largely unclear, to be rather individual.
Now for a few words about "normal" background radiation, since the purpose of radiological terrorism, formally and technologically, is to raise the usual level of radioactivity over a given territory. It is known that different areas, for example different locations in big towns, have different radiation backgrounds. Yet there are also more substantial differences, above all in the regions of industrial extraction of radioactive materials. Thus in Kazakhstan, according to National Nuclear Center data, about 13% of the country's territory is polluted by radionuclides. In Kyrgyzstan there are vast territories where the radiation background on the surface of industrial dumps near the mines repeatedly exceeds the maximum admissible norm of 17 microroentgen per hour (µR/h); experts of the "EcoSun" mission affirm that the level of radiation in such places can reach 2,000-3,000 microroentgen per hour. In this sense it becomes clear that the parallel drawn between the worldwide development of nuclear power stations and a growing basis for radiological terrorism may to some extent be logical: the more facilities, the worse the potential. But, as it turns out, there is not much need to attack or penetrate these "hard targets" (from the viewpoint of their security systems) to steal nuclear materials. A lot of places are known for their practically free access to radioactive materials (dumps). These materials may not be the best for maximally efficient acts of radiological terrorism, but they may still be quite adequate to produce a scare and a psychological effect, especially when labeled a dreadful "dirty bomb." Now and again information appears about attempts to sell fissile materials and even radioactive metal extracted from numerous dumps and other half-ruined storage sites. Traditional stereotypes usually place such cases in the context of nuclear terrorism, but in most cases it is actually just business, a way to earn some money in areas of almost total poverty, even though, as a result of the digging, for instance at the Bobodzhan-Gafur dump, the local radiation background has risen to a level exceeding the allowed norm ten times over. Formally, the objective results of such enterprise are close to acts of terror. Nevertheless, it is next to impossible to find cases in Russia which could be identified with certainty (under a professional approach) as one-hundred-percent actions, or abortive but real attempts, to carry out an act of radiological, let alone nuclear, terrorism.

Summing up all of the above, it is possible to say the following.

1. Psychologically, public opinion throughout the world is ready for different types of "nuclear terrorism." Whether such a result is for good or for bad is decidedly not an easy puzzle; the answer depends on one's interests.

2. There is a twofold tendency in the world nuclear energy sphere: growing interest in nuclear power stations (even in such countries as Sweden) and the prospect of a growing shortage of uranium, which will produce more and more visible changes in the positions of the traditional exporters and importers of the material, who have cooperated for decades in this realm.
In accordance with current logic and widespread opinion, this gives grounds for conclusions, for instance, about a growing probability of "nuclear terrorism" in connection with a lengthening list of "risk countries", including developed producer countries (for example, Australia or Canada), and about a high level of threat and vulnerability for countries not only producing uranium but also involved in its transportation.
Such a thesis almost ignores the practically free access to many dumps of radioactive materials in a number of countries, but it matches the current approach and common logic (the more, the worse), while developed countries are usually excluded from the list. But why should they be, if this does not also mean that they are excluded from the list of potential terrorist targets? On the whole, there are many disputable questions here, especially bearing in mind that, under IAEA forecasts, by 2020 annual uranium production may increase to 65,000-70,000 tons while consumption grows to 82,000-85,000 tons, with the uranium deficit made up from warehouse stocks and secondary sources. In any case, the increase in the international transportation of uranium permits the usual speculation about "windows of vulnerability" and, of course, about the necessity of toughening the means of "international control", while the challenge of radioactive dumps and other storage sites remains practically ignored.

3. There is no single universal (quantitative) indicator or criterion for estimating the potential effectiveness of an act of radiological terrorism. For different countries the effect, psychological first of all, may differ even at the same radiological level (parameters), and the most vulnerable in this respect may turn out to be countries that are not so often mentioned. There is the Chernobyl experience, and there is the quite recent incident in the spring of 2009 when one of the U.S. President's Air Force One aircraft, in a flight over New York escorted by two fighter jets, literally caused a shock among a number of city inhabitants and created in many the desire to leave their skyscraper homes.

4. In connection with the psychological aspect of the problem, one difficult question arises. The number of victims of large-scale conventional terrorist acts may now be comparable to that of terrorism using means of mass destruction. On the one hand, this circumstance may reduce the attractiveness of "nuclear terrorism." On the other, the interest in raising the degree of psychological effect may be a factor in its favor, with the accent placed primarily on the psychological impact on a public well primed for it, and only then on the number of victims. Moreover, forming a group of "liquidators" to mitigate the consequences of radioactive contamination (especially at high levels of radioactivity) may be rather difficult in countries where an individual life has traditionally had a very high value.

And a last point. Politicization is an essential part of the everyday life of states and societies; indeed, it is one of the norms shaping patterns of human behavior. But there is a difference between its influence on internal situations and its influence on international relations. What is good for an internal political course is not necessarily a good basis for international cooperation. This type of discrepancy should also be taken into consideration, as one of the more extreme divergences that needs to be narrowed. The algorithm of behavior of a few states, elaborated during the Cold War period and more or less effective during the hectic transition of the international system on the threshold of the 21st century, has as a matter of fact outlived itself.
New global situations demand new visions, new approaches and, inevitably, new stereotypes, but ones closer to reality than many of today's dreadful story-tales. Contending interstate interests and politics are a permanent issue and, as a result, keep reproducing ever more difficult problems. It seems that the only reasonable way to
find a common denominator for the interests of individual countries and for sub-regional and regional interests is to keep moving along the guidelines elaborated by the IAEA (for instance, the Code of Conduct on the Safety and Security of Radioactive Sources, IAEA, 2003; Categorization of Radioactive Sources, IAEA-TECDOC-1344, etc.). As far as the CIS countries are concerned, the priorities include:
• monitoring, inventory and certification of storage sites (centers) of radioactive materials;
• organization of a single regional system of radiation control;
• further strengthening of security measures; and
• joint anti-terrorist exercises, etc.
And, of course, the main problem to be solved is to reach tangible success in a new format of inter-civilizational communication, not only at the state level but, more importantly, at the public level. Once more, a rethinking of habitual stereotypes is needed for this purpose. It is necessary, at least: to open prospects for implementing the agreements (or understandings) on questions of nuclear security reached at the Russian-American Summit in Moscow in July 2009; to strengthen IAEA safeguards and make compliance with them normal international practice, which for the time being does not produce numerous disputable situations; and to restore confidence in the effectiveness of the NPT and the other regimes jointly established in the sphere of nuclear safety. But of course the difference between stereotypes and the real picture of events will inevitably persist, at least because (to simplify) of the divergence in their functions: stereotypes are mainly for the public, to manipulate the public mood, while more adequate, realistic surveys and expertise serve an efficient decision-making process aimed at defining and reaching certain goals. From this viewpoint, public stereotypes have an applied, instrumental character. This looks cynical, but it matches the actual situation and is not much more cynical than politics itself. And it seems that the deep problem and challenge is rooted not in moral foundations but in the threat of losing control during the transition from one period of mitigating terrorism to another. The active and widespread popularization of WMD terrorism is not as innocent as one may imagine. After the first genuinely large-scale act of terror using WMD, the whole complex of relationships among states, publics and terrorists will acquire new outlines. It is possible, as usual, to speculate enthusiastically on this subject, but little is clear, because the situation will be different and unknown, if only because of the changing character of the conflict. At present the participants in the conflict possess asymmetric instruments and forms of warfighting capability; indeed, current international terrorism is itself (technically) a most asymmetric form of military answer to the coercive actions of states in global affairs. Some equalization in the most destructive means of the warfighting arsenals may change a great deal in any pair of relations among the actors and, as a result, in the whole system of current cooperation and rivalry. And it seems that only post factum will the situation of international cooperation to mitigate nuclear terrorism (with its internal contradictions) be perceived as a truly acute planetary emergency, although by then in circumstances more complex and more difficult for interstate consolidation.
As far as the current situation is concerned, the points mentioned above and some other data allow us to come to the following conclusions. There is no reliable (or at least publicly available) information on attempts to organize acts of actual nuclear terrorism (i.e., the explosion of a nuclear charge). There are many places throughout the world with practically free access to radioactive materials, and thus it makes little sense to penetrate heavily secured zones (facilities) to obtain a "radiological weapon". Moreover, from the viewpoint of radiological contamination, a nuclear power station or certain other facilities may themselves be primary targets for terrorist attack. Finally, thanks to a large literature of different genres, a solid basis for nuclear blackmail has been established, and this circumstance poses many problems for decision-makers when signals of a nuclear threat appear. Objectively, the current transitional period is not the best time to deal with such challenges. Meanwhile, it seems that the results of the economic crisis will help to stabilize the structure of the international system and to strengthen the conditions for more constructive interstate cooperation during the next period in the following areas:
• Efforts to diminish the motivation for terrorist activities;
• Multinational cooperation to mitigate, or prevent, such acts; and
• Readiness to solve quickly and effectively at least some of the urgent problems connected with CBRN terrorism.
Ultimately, the current transitional period is decidedly not a time to be wasted, but one to be used for elaborating and implementing the initial practical steps and organizational forms available today.
IMMEDIATE COMMUNICATIONS IN THE CBRN ENVIRONMENT

ROBERT V. DUNCAN
Vice Chancellor of Research, University of Missouri
Columbia, Missouri, USA

We propose a system to provide robust, unambiguous information to everyone in an emergency zone, to assist them to safety if possible, and to provide genuine data-driven management of the overall emergency response. The system will be useful in the management of a wide range of emergency situations, so it will effectively provide valuable support in any emergency where the local 3G/4G cellular networks remain available or may be readily restored following the disaster. People caught in an emergency situation will naturally depend first upon their cell phones and PDAs, so the use of this system will be intuitive for those who are caught within the emergency zone. Modules, such as iPhone applications or "apps", will contain stored general emergency instructions, and additional scenario-specific instructions will be sent through the cellular networks to all phones that run such an application.

SYSTEM CONCEPTUAL DESIGN
In an emergency situation, anyone accessing the internet via a cell phone will automatically be queried by the central system for GPS location information. If this is available, and if they are found to be within the emergency region, their browser will be diverted to the emergency web site, where an expert system gives them only simple and succinct information and displays regarding what their next move, if any, should be. They will be asked by the expert system and/or emergency management staff for their assessment of damage, local fires and other threats, and the accessibility of roads, electricity, and other local infrastructure. Photos taken by the cell phones may be used for this purpose as well. Supporting, highly relevant data from experts who are outside the emergency zone may also be gathered and used by the Emergency Coordinator in near real time. Collectively, this system will provide the interface and data-system layer to support a wide class of emergency response development tools.

The operator will be queried for their position initially, to determine whether they are in the immediate disaster zone. If so, they will be added to the emergency management system and server. This server will only download direct, simple text that instructs the operator on their next best move, and it will only accept e-mail, text, calls, and other data uploads from those who are verified to be within the Emergency Zone. This precaution is taken to avoid operator information overload, and to avoid a "denial of service" saturation of the communication channel that may occur when hundreds of thousands of people are located within the emergency zone. Various expert-system tools may eventually be developed to manage the situation in different event scenarios, and these expert systems will operate to provide best advice with only very high-level direct operator intervention, such as decisions to carry out block-wise evacuations in a staged manner. Human intervention will be necessary to translate operator self-reporting of information about fires, smoke, and other environmental concerns that may be used to construct critical
information on flooding, fire, etc., which is then distributed widely to all other cell phones if that information is relevant to their survival.
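A minimal sketch, in Python, of the in-zone check and upload filtering described above. All names and values here (EmergencyZone, handle_device, the zone coordinates) are illustrative assumptions for this conceptual design, not part of any deployed system or specific API.

# Minimal sketch of the in-zone check and upload filter described above.
# Names and values are illustrative assumptions, not a deployed system.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class EmergencyZone:
    center_lat: float
    center_lon: float
    radius_km: float

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two GPS fixes, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def handle_device(zone: EmergencyZone, lat: float, lon: float) -> dict:
    """Decide whether a phone is inside the emergency zone and what it may do."""
    inside = distance_km(zone.center_lat, zone.center_lon, lat, lon) <= zone.radius_km
    return {
        "redirect_to_emergency_site": inside,   # divert the browser to the expert system
        "accept_uploads": inside,               # only in-zone devices may send text, calls, photos
        "instruction": "Shelter in place; await the next update." if inside
                       else "You are outside the affected area; keep communication lines clear.",
    }

# Example: a hypothetical circular zone and one device query.
zone = EmergencyZone(center_lat=32.72, center_lon=-117.16, radius_km=5.0)
print(handle_device(zone, 32.73, -117.15))

In a real deployment the zone would more likely be an arbitrary polygon pushed by the Emergency Coordinator, and the location query would ride on the carrier's own location services rather than a simple radius test; the sketch only illustrates the gating logic.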
IMMEDIATE EVALUATION OF RADIOLOGICAL AND NUCLEAR ATTACKS

RICHARD L. GARWIN
Thomas J. Watson Research Center, IBM Research Division
Yorktown Heights, New York, USA

(The title has been changed, and a section summarizing comments made at the presentation has been added at the end.)
In principle, this presentation should be based on experience in detecting and evaluating terrorist nuclear explosions or radiological incidents. Fortunately, such experience is largely lacking. Second-best would involve evaluating multiple exercises, games, or mock incidents. I will not do that either. Instead, I will describe some of the general aspects of the detection and immediate evaluation of terrorist nuclear explosions or a terrorist radiological attack, and then my own experience in a 6 August 2008 "News and Terrorism Workshop" held in San Diego, California.

RADIOLOGICAL ATTACK

Since our purpose is not to determine immediate risk or insurance rates against such attacks, but to understand what can be achieved in the near future, we extrapolate on the order of one to two years from now and particularize two cities, New York and San Diego. In fact, in New York City the Police Department (NYPD) plays a leading role in counterterrorism and has a Deputy Commissioner of Counterterrorism, Richard A. Falkenrath. For instance, the NYPD has issued a 2009 document, "Engineering Security: Protective Design for High Risk Buildings",¹ that describes, in concrete terms, realistic expectations for builders and operators of tall buildings to have sensors of biological and radiological threats, adaptive heating, ventilating, and air conditioning (HVAC) systems to respond to the threat, and the like. In November 2008, Falkenrath and Police Commissioner Ray Kelly informed the community of lower Manhattan about protective measures in that region, reporting that there are more radiological sensors around New York City than anywhere else in the world. In addition, widely deployed air samplers on city streets look for evidence of biological attack. Police personnel are being fitted with real-time belt-worn radiation sensors that will vibrate in the presence of radiation. It would also be useful to fit city buses with radiation sensors reporting, via Bluetooth or WiFi, anomalous levels of radiation and the GPS locations of their detection.

Since radiological threats are likely to be caused by the dispersal of intense emitters of gamma rays, the penetrating nature of the radiation makes it detectable at very substantial distances. For instance, absorption of the gamma rays in the ambient air would reduce the radiation level by about a factor of 10 in 100 meters, in addition to the geometric reduction, but serious threats should still be detectable at such distances.

¹ http://www.nyc.gov/html/nypd/downloads/pdf/counterterrorism/nypd engineeringsecurity low res.pdf
Background radiation is on the order of 1 milligray per annum (1 mGy/a), and a concealed but unshielded source that might cause damage (such as a 10-kilocurie hospital teletherapy source) might emit on the order of 100 Gy/hr at one meter; in the former unit of exposure, the roentgen, the background level is about 100 mR/a and the exposure from a powerful stolen source perhaps 10,000 R/hr at a meter's distance. The ratio between these two rates is about a billion, so at 100 meters distance the measured exposure rate from the source would be about 10,000 times the background rate (a rough numerical check is sketched at the end of this section). Recall that in the case of the nuclear reactor disaster at Chernobyl in 1986, an early public indication was the detection of radioactivity at Swedish nuclear facilities, soon traced to workers bringing in fallout from the Chernobyl cloud that had dumped radioactive materials on Sweden as well as on much of Europe. Given the linking of radiological detectors in New York City to an automatic high-definition digital map, it is reasonable to expect that dispersal of such materials would be detected within a few minutes. Probably instantaneous detection would be no better, since personnel would need to be dispatched for further evaluation and the public would need to be warned.

I don't know the situation regarding radiological sensors in San Diego, but it need be little different. There are indeed major differences in that San Diego has only a small cluster of high-rise buildings downtown, compared with skyscrapers over much of Manhattan. It is a given that dispersal of radioactive material would take much longer to discover in the countryside where there are few people, but a terrorist group would likely wish to cause damage and panic and would be unlikely to waste its efforts on regions with little population. Following an alarm of an RDD, it would be extremely useful to have a stand-alone package that could be taken aboard any helicopter, containing a sodium-iodide scintillation detector of radiation, to map the distribution of radioactivity, reporting automatically by radio along with the GPS coordinates of the package. This would be particularly valuable if dispersal were achieved by micron-size droplets from an "atomizer." I have not addressed alpha-particle-emitting materials such as polonium-210, which caused such disruption in London when used as a tool of assassination. Such radioactive materials are hazardous only when inhaled or ingested; if mapped, they are fairly readily coated to reduce the hazard.
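As a quick numerical check of the detection estimate just given, the short Python sketch below combines the figures quoted in the text (about 1 mGy/a background, about 100 Gy/hr at one meter from the source, inverse-square geometric fall-off, and roughly a factor of 10 of air absorption over 100 meters). It is illustrative only; the input values are the text's order-of-magnitude assumptions, not measured data.

# Order-of-magnitude check of the source-vs-background detection estimate above.
HOURS_PER_YEAR = 365.25 * 24

background_Gy_per_hr = 1e-3 / HOURS_PER_YEAR      # ~1 mGy per annum, as quoted in the text
source_Gy_per_hr_at_1m = 100.0                    # ~100 Gy/hr at 1 m, as quoted in the text

ratio_at_1m = source_Gy_per_hr_at_1m / background_Gy_per_hr
print(f"source/background at 1 m: {ratio_at_1m:.1e}")        # ~9e8, i.e., "about a billion"

distance_m = 100.0
geometric_factor = (1.0 / distance_m) ** 2        # inverse-square fall-off from 1 m to 100 m
air_attenuation = 0.1                             # ~factor-of-10 absorption over 100 m of air

rate_at_100m = source_Gy_per_hr_at_1m * geometric_factor * air_attenuation
print(f"exposure rate at 100 m: {rate_at_100m:.1e} Gy/hr")    # ~1e-3 Gy/hr
print(f"times background: {rate_at_100m / background_Gy_per_hr:.0f}")   # roughly 9,000, i.e., "about 10,000x"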
NUCLEAR DETONATION IN A CITY

On Wednesday, August 6, 2008, I participated in a workshop, "News and Terrorism: Communicating in a Crisis", held in San Diego and relating to a mock event in San Diego. There must have been some 200 people in the audience (about 5 on the panel), and it was, for me, an interesting experience. For several years our PMP has included in our papers the necessity of providing information to the public about what to do in the event of CBRN terrorism, and to first responders as well.
We have emphasized that not only must there be substance, but that the material must be available by communications in a crisis, and that individuals must be motivated to distribute it and to read it.

The scenario for this workshop was kept secret (from us all) until the workshop began, and it then developed following a heightened alert the previous day that there might be an incident in one of the West Coast cities. That morning there was to be a telephone call to the Internet editor of the San Diego Union-Tribune: "We have the bomb and San Diego will soon feel the heat of 1000 suns." Twenty minutes later, a bright light was seen as far as two miles away, and windows were shattered at a distance of a mile or more from what seemed to be some kind of explosion just off the Coronado Peninsula. The moderator, Aaron Brown, posed each of these events to the panelists and the audience and then asked the San Diego Internet editor, or one of the other responsible people, what they were going to say about it or what they would do. Notably, one of the panelists was Richard Burke, Director of Incident Management, Operations, Department of Homeland Security. The first two rows of the large hotel meeting room were filled with people with public-safety responsibility or with communication responsibility from network television, newspapers, hospital management, and the like.

I wasn't asked a question by the moderator until the incident was well underway, and the question was "Was this a dirty bomb?" This, of course, immediately revealed misconceptions among the panelists and the audience, when I said: "... that when I first heard of the heightened alert, I had taken out my three fact sheets from the National Academy of Engineering on Dirty Bombs, Biological Threats, and Nuclear Explosions. When I heard that a Coast Guard boat in San Diego harbor had seen a small boat slow down and was approaching it, I (re-)read all three fact sheets. When the Coast Guard boat reported heightened radiation from the small commercial boat, I put away the biological threat and the nuclear weapon threat and concentrated on the Dirty Bomb (for spreading radioactive material). But when the bright light and the major shockwave occurred, I put away the Dirty Bomb fact sheet and concentrated on the nuclear explosion. From the graphic that was shown, I could see Coronado and also the San Diego airport and downtown San Diego. I pointed out that the length of the San Diego airport runway was almost two miles, which gave a pretty good scale to the event, and that it could have been worse, since the first mile of the shockwave, radiation, and heat was over the water of San Diego Bay, where there was hardly anybody to be injured or killed. And then I explained what the consequences would be and the importance of sheltering until the fallout cloud had passed, except in the very narrow sector where fallout was to be expected (with perhaps quite a different azimuth at altitude).
Eventually as the scenario developed, these predictions were borne out. I had also prepared by accessing the Internet in real time and had looked at the URL www.narac.gov (the National Atmospheric Release Advisory Center) so that I could tell the officials and the audience that we really needed to contact NARAC in order to get predictions for the fallout plume." Many of the questions with which we have been dealing came up, such as the worried well, the necessity to keep people away from hospitals and the desirability of rapidly deploying expedient trauma units to where people could be patched up who were suffering from being knocked around by the shockwave or, more likely, from cuts from shattered glass. In evidence was the almost total ignorance of the officials and those who might have to act in the emergency. Of course, from the point of view of the media, there is a lot of competition to get there first, and not necessarily to provide the very best information. There is also a great concern on the part of everybody to look good after the fact. When I referred to NARAC, I explained that the federal government had, for a change, done something that was both useful and competent. It was amazing, though, how much the communication people or the public officials imagined that they would rely on a telephone to convey information or to ask for information, despite the ability with email or a website to contact many people at the same time-e.g., a wiki. The actual DHS person, so far as I can recall, gave essentially no information, whether about the likelihood that the Internet would continue to operate, or on the general
question of how food and water would be supplied to a city of more than a million from essentially undamaged surroundings. There was no indication, for instance, of whether there is a U.S. government or a state program to help defray the costs of hotels in housing guests who have no place to go for a few days while things get sorted out. As I reflect on the experience of that little exercise, it is apparent that other sources of information could be very valuable. Some are discussed by Rob Duncan and should be integrated with this account. These include the nuclear detonation detection package carried by GPS satellites that, among other capabilities, contains a "bhangmeter" that detects a nuclear explosion anywhere on the face of the Earth and gives a reasonable estimate of its yield by analysis of the time course of the double-humped light curve. Another information source is air traffic controllers (ATC) for local and regional airports. Pilots can see the mushroom cloud of a nuclear explosion from hundreds of km and need to be alerted to its existence and even to the weak shock wave that can destroy aircraft in flight if they are not reoriented to face the shock; pilots will immediately report a nuclear explosion to ATC, which should be connected to the disaster reporting and control system. As for guidance to first responders and the general population, there is now a U.S. Government document4 that provides useful assumptions and instruction. It assumes a 10-kiloton detonation at ground level and states that "There will be no significant Federal response at the scene for 24 hours and the full extent of Federal assets will not be available for up to 72 hours." General guidance of the HSC document includes a "zoned approach," recognizing LD (light-damage), MD (moderate-damage), and NG (no-go) zones, as well as the DF (dangerous fallout) track that may extend for 20 km or more, depending on the winds aloft:
"Planning Guidance for Response to a Nuclear Detonation," Homeland Security Council, Jan. 162009. http://www.fas.org/irplthreatJdetonation.pdf
KEY POINTS
1. There are no clear boundaries between damage zones resulting from a nuclear detonation, but generally, the light damage (LD) zone is characterized by broken windows and easily managed injuries; the moderate damage (MD) zone by significant building damage, rubble, downed utility poles, overturned automobiles, fires, and serious injuries; and the no-go (NG) zone by completely destroyed infrastructure and radiation levels resulting in unlikely survival of victims. ...
2. Injuries (e.g., from flying debris and glass) can be prevented or reduced in severity if individuals that perceive an intense and unexpected flash of light seek immediate cover. The speed of light, perceived as the flash, will travel faster than the blast overpressure, allowing a few seconds for some people to take limited protective measures.
3. Blast, thermal, and radiation injuries in combination will result in prognoses for patients worse than those for the individual injury mechanisms.
4. EMP effects could result in extensive electronics disruptions complicating the function of communications, computers, and other essential electronic equipment.
5. The most hazardous fallout particles are readily visible as fine, sand-sized grains, but the lack of apparent fallout should not be misrepresented to mean radiation isn't present; therefore appropriate radiation monitoring should always be performed. Fallout that is immediately hazardous to the public and emergency responders will descend to the ground within about 24 hours.
6. The most effective life-saving opportunities for response officials in the first 60 minutes following a nuclear explosion will be the decision to safely shelter or evacuate people in expected fallout areas.
Overview: A nuclear detonation would produce several important effects that impact the urban environment and people. In this discussion, the term "nuclear effects" will mean those primary outputs from the nuclear explosion, namely blast, thermal, and prompt radiation. Important secondary effects covered here include electromagnetic pulse (EMP) and fallout. All of these effects have impacts on people, infrastructure, and the environment, and they significantly affect the ability to respond to the incident. The term "nuclear impacts" will be
p. 30: KEY POINTS
1. The goal of a zoned approach to nuclear detonation response is to save lives while managing risks to emergency-response worker life and health.
2. Response to a nuclear detonation will be provided from neighboring response units; therefore advance planning is required to establish mutual aid agreements and response protocols.
3. Radiation detection equipment should be capable of reading dose rates up to 1,000 R/hour.
4. Radiation safety and measurement training should be required of any workers that would be deployed to a radiation area.
5. Most of the injuries incurred within the LD zone are not expected to be life-threatening. Most of the injuries would be associated with flying glass and debris from the blast wave and traffic accidents.
6. Responders should focus medical attention in the LD zone only where injuries require it, and should encourage individuals to shelter in safe locations to expedite access to severely injured individuals.
7. Response within the MD zone requires planners to prepare for elevated radiation levels, unstable buildings and other structures, downed power lines, ruptured gas lines, hazardous chemicals, sharp metal objects, broken glass, and fires.
8. The MD zone should be the focus of early life-saving operations. Early response activities should focus on medical triage with constant consideration of radiation dose minimization.
9. Response within the NG zone should not be attempted until radiation dose rates have dropped substantially in the days following a nuclear detonation, and the MD zone response is significantly advanced.
10. The highest hazard from fallout occurs within the first four hours and continues to drop as the radioactive fission products decay.
11. The most important mission in the DF zone is communicating protective action orders to the public. Effective preparedness requires public education, effective communication plans, messages, and means of delivery in the DF zone.
p. 45
p. 47
Chapter 3 - Shelter / Evacuation Recommendations
KEY POINTS
1. There are two principal actions that may be taken to protect the public from fallout: taking shelter and evacuation.
2. The best initial action immediately following a nuclear explosion is to take shelter in the nearest building or structure and listen for instructions from authorities.
3. Shelters such as houses with basements, large multi-story structures, and underground spaces (e.g., parking garages and tunnels) can generally reduce doses from fallout by a factor of 10 or more. These structures would generally provide shelter defined as "adequate."
4. Single-story wood frame houses without basements provide only minimal shelter. These structures may not provide adequate shelter for extended periods in the DF zone.
5. Evacuations should be prioritized based on the fallout pattern and radiation intensity, adequacy of shelter, impending hazards (e.g., fire and structural collapse), fallout pattern and density, medical and special population needs, sustenance resources (e.g., food and water), and response operational and logistical considerations.
6. When evacuations are executed, travel should be at right angles to the fallout path (to the extent possible) and away from the plume centerline, sometimes referred to as "lateral evacuation."
7. No evacuation should be attempted until basic information is available regarding fallout distribution and radiation dose rates.
8. Decontamination of persons is generally not a lifesaving issue. Simply brushing off outer garments will be useful until more thorough decontamination can be accomplished.
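The shelter protection factor and fallout-decay behavior quoted in the key points above can be combined into a rough, illustrative dose comparison. The sketch below is not part of the HSC guidance; it assumes the commonly used t^-1.2 decay approximation for mixed fission products, a purely hypothetical reference dose rate of 100 R/hr at one hour after detonation, and the shelter protection factor of 10 cited above.

```python
# Illustrative only: rough outdoor-vs-sheltered fallout dose comparison.
# Assumptions (not from the HSC document): dose rate follows the common
# t^-1.2 approximation, normalized to a hypothetical 100 R/hr at t = 1 hr.

def dose_rate(t_hours, r1=100.0):
    """Approximate dose rate (R/hr) at t_hours after detonation."""
    return r1 * t_hours ** -1.2

def integrated_dose(t_start, t_end, r1=100.0):
    """Integral of r1 * t^-1.2 from t_start to t_end (hours), in R."""
    return (r1 / 0.2) * (t_start ** -0.2 - t_end ** -0.2)

if __name__ == "__main__":
    # Dose accumulated outdoors between 1 hr and 24 hr after detonation...
    outdoor = integrated_dose(1.0, 24.0)
    # ...versus the same period spent in an "adequate" shelter (factor of 10).
    sheltered = outdoor / 10.0
    print(f"Outdoor dose, 1-24 hr:  {outdoor:6.1f} R")
    print(f"Sheltered dose (PF=10): {sheltered:6.1f} R")
    # The earliest hours dominate: compare 1-4 hr with 4-24 hr outdoors.
    print(f"Outdoor dose, 1-4 hr:   {integrated_dose(1.0, 4.0):6.1f} R")
    print(f"Outdoor dose, 4-24 hr:  {integrated_dose(4.0, 24.0):6.1f} R")
```

Under these assumptions the three hours from 1 hr to 4 hr contribute about as much dose as the following twenty hours, which is the point behind the "first four hours" key point and the emphasis on immediate sheltering.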
p. 61
Chapter 4 - Early Medical Care
KEY POINTS
1. There will be a spectrum of casualties including one or more of blast, radiation, and thermal injury. Initial triage and management will be based in part on the victim's post-detonation location history, physical examination, dosimetry predictions from initial models and real-time physical dosimetry (dose measurements), and from available clinical laboratory studies.
2. To maximize overall preservation of life with insufficient resources to manage mass casualties, severely injured victims may be placed into an "expectant" (expected to die) category early on, although the criteria for "expectant" will vary depending on resources available. Although expectant, palliation (treatment of symptoms) should be performed when possible.
3. Because of the damage to the infrastructure, the limited availability of resources, and the presence of radiation, paramedics and clinicians will have to bypass conventional clinical standards of care, preferably using predetermined criteria, in order to maximize the overall preservation of life. Such conditions are to be expected until medical staffing, logistical support, and infrastructure can be restored.
4. Initial mass casualty triage, also known as sorting, should not be confused with follow-on clinical triage for more specific medical management.
5. Management of serious injury takes precedence over decontamination. Decontamination of personnel and patients from fallout or visible debris involves brushing off, shaking, washing or wiping off the radioactive dust and dirt and should not be a limiting factor in providing medical care.
6. There is no established USG interagency medical triage system specifically validated for an urban nuclear detonation; therefore, existing emergency triage algorithms are used with modification for the impact of radiation.
7. For the time frame covered by this guidance, processing of the deceased will likely not be a priority in lieu of saving lives; however, fatality management will be one of the most demanding aspects of the nuclear detonation response and should be planned for as early as possible.
p. 78: In summary, fatality management will be one of the most demanding aspects of the nuclear detonation response, because: there will be an overwhelming need for immediate care for those who can be treated; many people who are expectant will live for a period of time and then die; and there are concerns of respect for the deceased versus the limited capability to provide these gestures. For the time frame covered by this guidance, processing of the deceased will likely not be a priority in lieu of saving lives; however, fatality management will be one of the most demanding aspects of the nuclear detonation response and should be planned for as early as possible.
p. 81:
Chapter 5 - Population Monitoring and Decontamination
KEY POINTS
1. Radiation survey methods, screening criteria used for radiation screenings, and decontamination guidance [...] detection and removal of external contamination. In most cases external decontamination can be self-performed, if straightforward instructions are provided.
4. Prevention of acute radiation health effects should be the primary concern when monitoring for radioactive contamination.
5. Population monitoring and decontamination activities should remain flexible and scalable to reflect the available resources and competing priorities.
6. Radioactive contamination is not immediately life-threatening.
7. Self-evacuating individuals will require decontamination instructions to be communicated to them in advance of the event (e.g., public education campaign) or through post-event public outreach mechanisms. Instructions should be provided with consideration of languages appropriate for the affected community.
8. Planning must provide for consideration of concerned populations, because it is anticipated that a significant number of individuals, who should remain safely sheltered, will begin to request population monitoring to confirm that they have not been exposed to radiation.
9. Use of contaminated vehicles (e.g., personal or mass transit) for evacuation should not be discouraged in the initial days following a nuclear detonation; however, simple instructions for rinsing or washing vehicles, once decontamination can be achieved without impeding evacuation, should be provided.
10. There is no universally accepted threshold of radioactivity (external or internal) above which a person is considered contaminated and below which a person is considered uncontaminated.
11. State and local agencies should establish survivor registry and locator databases as early as possible. Initially, the most basic and critical information to collect from each person is his or her name, address, telephone number, and contact information.
12. Planners should identify radiation protection professionals in their community and encourage them to volunteer and register in any one of the Citizen Corps or similar programs in their community.
Clearly this HSC "guidance" is only the beginning of the needed Federal involvement in defining and supporting the reaction to a nuclear detonation.
SUMMARY OF SOME COMMENTS MADE IN RESPONSE TO THIS PRESENTATION:
Michael MacCracken: 1. There may also be an overt threat of a nuclear detonation, which in the United States would call for the deployment of "NEST" (Nuclear Emergency Search Team) capabilities. The question then is how best to communicate with the public in response to such a threat. 2. California cities prepare for earthquake damage, and other locations in many countries face routine tsunami hazard. Can the medical requirements for CBRN mitigation be related to these natural threats?
Carl Bauer: In many cases the "first responder" may be the engineer or supervisor in charge of the particular building or institution.
Friedrich Steinhäusler: There may be organizational or bureaucratic impediments to helicopter-borne or drone-aircraft radiation surveys of a city, post attack.
John Alderdice: Since the creation of public fear or terror is the purpose of terrorism, anything that can diminish that fear can help to reduce the likelihood of terrorist attack.
ESTABLISHMENT OF A SCIENTIFICALLY-INFORMED RAPID RESPONSE SYSTEM
RICHARD WILSON
Department of Physics, Harvard University, Cambridge, Massachusetts, USA
PREAMBLE
In this discussion I will focus on two very different types of possible terrorist attacks to illustrate the problem and to show how a Rapid Response system could be effective. The two would be explosion of a "dirty bomb", a Radioactivity Dispersal Device (RDD), in a crowded area of a city which I will describe as Wall Street, and the wide dispersal of an infectious agent. In each of these it has been argued, and I believe correctly, that the most important action to prepare for terrorism is to be prepared for a natural occurrence: an accident involving radioactivity or a natural outbreak of a disease such as SARS or H1N1 influenza. Sally Leivesley has cogently argued that there are some very important decisions that must be made very soon after the accident or event: in the first 10 minutes, and maybe an hour or so later. These decisions will not only affect the immediate course an emergency situation will take but will establish a precedent which may adversely affect the effective recovery from the emergency that we all so fervently desire. Although the most important people to be informed of the situation are the "natural" first responders such as the fire brigades, the general public also wants to be informed. Two factors seem important. Firstly, in an emergency the ordinary channels of communication will be overwhelmed, and secondly, the public and maybe even the first responders may not know which source of advice and information to trust. Each of these problems can be mitigated by advance preparation. There are many examples of problems in previous situations. Fifteen minutes after the San Francisco earthquake of 20 or so years ago, the telephone system into the Bay Area was clogged, as more and more of the public became aware of the disaster. More importantly, in New York City after the two airplanes flew into the World Trade Center, cell phone access was almost impossible.
MY PERSONAL ACTIVITY AS A "PUBLIC EXPLAINER"
I here outline a problem as I have seen it over the last 30 years to illustrate that it is not easy and demands dedication at crucial times. Over the years, I have been aware of the major public misconceptions, and consequent counter-productive public actions, in matters of radiation. In an emergency, the people who have the duty to take charge usually know nothing about radiation and its effects and have no contact with people who do. In the USA, history shows that the press do not help. After Three Mile Island (TMI), not one major newspaper got the units straight, confusing DOSE and DOSE RATE. Not even the Associated Press quoted the accurate press releases of the NRC. It was a bit better after Chernobyl, but there were numerous nonsense stories and to my certain knowledge they refused to publish an accurate account from the Pravda correspondent in
Kuwait who filed while on vacation in Kiev after a visit to the power plant. Even the Japanese criticality incident was badly described. The NY Times quoted the site boundary dose rate in R/hr rather than mR per hour, thereby changing a nuisance into a disaster. Fortunately, National Public Radio saw the NY Times story and called me. I had, during the night, called the head of the Japanese Industrial Forum, who had told me all he knew, including the correct number, and the NY Times error was nipped in the bud. At TMI, my ability to help was aided by two facts: (i) my continued friendship with Dr. Robert Budnitz, a former graduate student, then Director of Research at NRC, and (ii) my friendship with Dr. Leo Beranek, then running Channel 5 TV in Boston. The one provided me with accurate information and how to get more (for example the telephone number of the TMI control room) and the other provided me with a half hour news broadcast with no advertisements. It was probably just after TMI that I was asked by an informal group in NY City, "Scientists Institute for Public Information", to be on a list of scientists who could be called at any hour of day or night to answer questions about radiation. I took this very seriously and for a month after Chernobyl my phone was constantly ringing. I took it off the hook to sleep. I returned calls from call boxes. After that first month I was on the lecture circuit. I gave approximately 100 lectures, all but one unpaid, and mostly paying my own travel expenses, over the next 6 months. SIPI seems to have vanished, but in my view it needs resurrection and expansion.
THE REQUIREMENTS
• The need for a body of people first responders will trust.
• The need for a body of people the public will trust.
• The need for a reliable set of recommendations.
• The need for a communication network that will not be overloaded or compromised.
A COMMUNICATION NETWORK
I consider three existing communication networks that can be used in an emergency:
• The U.S. military.
• The CERN (Conseil Européen pour la Recherche Nucléaire) and U.S. DOE (Department of Energy) elementary particle physics network.
• Google.
Leigh Moore has noted that the WHO list a number of websites in the USA for information about possible pandemics. 80% are military sites. But Leigh goes on to note that while these may be trusted by Fire Brigades and other first responders they are unlikely to be trusted by the general public. I note that the CERN-DOE network was very active as early as 1970, with dedicated telephone lines. The transatlantic link was originally military-the Advanced Research Projects Agency (ARPA net). In 1975 I remember sending to AERE Harwell by
British Airways, on New Year's Day, several magnetic tapes containing the previous week's data from FERMILAB, and on January 2nd sending a brief message "mount tape xxx and run the program yyy". The data analyzed was printed 2 hours later on the computer at the Harvard computer center. My research fellow Dr. Lynn Verhey was able to make a small software modification increasing the speed by a factor of 5, thereby illustrating the importance of having a system, designed for military emergencies, which is regularly exercised. The major DOE laboratories and CERN have expanded the system and it is used by elementary particle physicists worldwide. Indeed the World Wide Web system was invented in the late 1980s at CERN. It is no longer necessary to send data by air, but data whizzes across the Atlantic all the time. As I write this, my son at FERMILAB informs me that there is a major program between CERN and FERMILAB to handle the vastly increased quantities of data anticipated from the Large Hadron Collider (LHC) at CERN. While the public use of the web has expanded and is now greater than the physics use, the CERN-DOE network is still a major player, largely using dedicated lines. In an international emergency, the research use might be temporarily suspended, allowing the whole system to be instantly available. In addition, as noted in the next paragraph, the CERN-DOE network is used and exercised by a number of dedicated scientists who might be willing, as I was after TMI and Chernobyl, to drop what they are doing and help. For each of these dedicated scientists, of course, advance preparation would be necessary. In 1979, I was not only knowledgeable about radiation and its effects but had studied nuclear reactor safety. The Google network is indeed worldwide and in several languages. It is used worldwide. Whether or not it is trusted is more doubtful. But unless careful advance planning is made, it will be overwhelmed in an emergency.
PROPOSAL
I make the following general proposal: that the World Federation of Scientists, based in Geneva at CERN, organize scientists to address this issue; and that the CERN management, ably led by the Director General Dr. Rolf Heuer, express their willingness to put their facilities at the disposal of the world in an emergency. This would be supported by the United Nations, and hopefully this would be followed by the U.S. DOE and Fermilab Director Dr. Pier Oddone and other national entities. For emergencies involving radiation and nuclear matters most CERN scientists are already partially prepared. Almost all will have qualified as "radiation workers", know how to measure radiation and certainly know the distinctions between DOSE and DOSE RATE. They spend their lives understanding the difference between rems and millirems. A few volunteers could be recruited in each country to act as "explainers" to the press or advisors to first responders. Although CERN might identify such people, I suggest that WFS might act as a filter to select those who would be useful to recommend to the press and other interested public persons or groups. For other emergencies such as potential pandemics, the average elementary particle physicist is less well informed. But the proximity of WHO to CERN suggests that a collaboration would be appropriate. CERN could make advance arrangements to make a web available. Experience with the H1N1 virus (the 2009 swine flu) suggests that both
WHO and the Centers for Disease Control and Prevention (CDC) in the United States are widely trusted. Leigh Moore has proposed orally an important first step. He has volunteered to organize a small group of students in Huntsville who are looking for (non-secret) projects to plan in some detail various aspects of this proposal. This would be under the auspices of the World Federation of Scientists. It would need a small amount of funding (less than $10,000) which could no doubt be acquired if WFS sponsorship were assured. I therefore propose that the Permanent Monitoring Panel on Mitigation of Terrorist Actions (PMPMTA) make this recommendation to the management of WFS.
This is still very confused and elementary. But far less confusing than what happens in an emergency!
SESSION 16 ENERGY PANEL MEETING
STATUS OF ITER BROADER APPROACH ACTIVITIES
AKIRA MIYAHARA
Professor Emeritus, National Institute for Fusion Science, Tokyo, Japan
The ITER Broader Approach Activities comprise three projects, namely: 1) Engineering Validation and Engineering Design Activities for the International Fusion Materials Irradiation Facility (IFMIF/EVEDA), 2) the International Fusion Energy Research Centre (IFERC), and 3) the Satellite Tokamak Programme. Details are described below.
1) IFMIF consists of two accelerators of D+ beams at 40 MeV x 125 mA (CW), a Li target, and small specimen test facilities (a back-of-envelope check of the implied beam power appears after the summary of this paper). Comprehensive engineering design of IFMIF is in progress through the efforts of the Project Team at Rokkasho, Japan, while the tasks of design, fabrication, and component testing of the accelerators are being carried out under EU responsibility. The jobs were shared according to past experience. Japan is also responsible, with the Italian team, for the RF quadrupole accelerator. For the design of the lithium target assembly and the specimen test facilities, the contribution from the Japanese team is important.
2) The International Fusion Energy Research Centre (IFERC) consists of three sub-centres, namely: 2.1) the DEMO Design R&D Coordination Centre, 2.2) the Computational Simulation Centre, and 2.3) the ITER Remote Experimentation Centre.
2.1) The DEMO Design R&D Coordination Centre covers the following two items. The first task is the design work for DEMO. The activity is performed through workshops and/or meetings, including the coordinators meeting, and the discussions were focused on two topics concerning design drivers and constraints for the DEMO design. For the physics aspects, plasma shaping and magnetic structure and the position stability of the elongated plasma are the main concerns, while for technology and engineering they are assessment of the superconducting magnet and current sustainment system, analysis of electromagnetic forces, torus configuration, and maintenance issues. For the system design, a feasibility assessment of a pulsed DEMO and a sensitivity study of design parameters were also performed. The second task is to identify R&D areas to be carried out during the BA activities, namely R&D on SiC/SiC composites (IFERC-R-T1), R&D on Tritium Technology (IFERC-R-T2), R&D on Materials Engineering for the DEMO Blanket (IFERC-R-T3), R&D on Advanced Neutron Multipliers for the DEMO Blanket (IFERC-R-T4), and R&D on Advanced Tritium Breeders for the DEMO Blanket (IFERC-R-T5). In addition to the above-mentioned items, Prof. Ogawa insisted on the need for R&D on a Li-6 isotope separator for fuel preparation. The major activities were concentrated on designing, evaluating, and discussing the equipment, devices, and facilities to be installed at the Rokkasho site in the near future.
2.2) Computational Simulation Centre: The mission and scope of the centre are to establish a Centre of Excellence (COE) for the simulation and modeling of ITER, of the advanced SC tokamak and other fusion experiments, and for the design of future fusion power plants, in particular DEMO. The computer resources shall be externally accessible, with sufficient transmission rate to Europe including the ITER site, to allow an efficient remote use of the facilities. Activities in 2008 were the selection of High Level Benchmark Codes (gyro-kinetic codes, fluid/fluid-kinetic codes and material science codes), discussion of the procurement process, and so on.
2.3) The ITER Remote Experimentation Centre: The preparation schedule depends on the ITER schedule; at the moment, installation of the computer is scheduled to begin in 2012, and operation will begin from 2015. I hope that such a facility will also be built in the U.S., so that ITER can be accessed around the clock, because the time difference between the facilities is about 8 hours.
3) Satellite Tokamak Programme: The mission of this programme is important because, during the EDA of ITER, experimental results and experience from JET and JT-60 were transferred to the design activities, while those of the Satellite Tokamak will be put both into ITER support and into ITER complementation for DEMO. This year, a remarkable re-baselining of the JT-60SA (Satellite Tokamak) by the integrated Project Team, consisting of the Project Team and the EU/JA Home Teams, was successfully completed with the approval of the Steering Committee in December 2008. By means of this improvement, the physics and engineering aspects are much better optimized, at the penalty of a schedule change that moves first plasma from March 2015 to March 2016.
SUMMARY
• Three projects are launched for the Broader Approach Activities between the EU and Japan. They are: IFMIF/EVEDA, IFERC and the Satellite Tokamak Programme.
• Signature of the BA Agreement was completed on 5th February, 2007, and it entered into force on June 1st.
• A new site is being prepared in Rokkasho, Aomori prefecture.
• ITER Broader Approach Activities are progressing smoothly, including site preparation, and up to now each project is on schedule.
• The BA Activities are open to the other ITER Parties and their participation is quite welcome.
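As a quick consistency check of the IFMIF accelerator specification quoted above (two CW deuteron beams of 40 MeV at 125 mA each), beam power is simply particle energy times beam current. The arithmetic below is my own back-of-envelope sketch, not a figure taken from the Broader Approach documents.

```python
# Back-of-envelope check of the IFMIF accelerator specification quoted above.
# Beam power (W) = particle energy (eV) * beam current (A), since one eV per
# elementary charge corresponds to one volt of accelerating potential.

ENERGY_MEV = 40.0       # deuteron energy per accelerator
CURRENT_MA = 125.0      # CW beam current per accelerator
N_ACCELERATORS = 2

power_per_beam_mw = ENERGY_MEV * 1e6 * CURRENT_MA * 1e-3 / 1e6  # in MW
total_mw = power_per_beam_mw * N_ACCELERATORS

print(f"Beam power per accelerator:   {power_per_beam_mw:.1f} MW")  # 5.0 MW
print(f"Total power on the Li target: {total_mw:.1f} MW")           # 10.0 MW
```

The implied figure of roughly 10 MW of continuous beam power on the liquid lithium target gives a sense of why the target and test-cell engineering dominate the IFMIF/EVEDA validation work.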
TOPICS OF ENERGY RESEARCH IN JAPAN
AKIRA MIYAHARA
Professor Emeritus, National Institute for Fusion Science, Tokyo, Japan
In this manuscript I introduce some topics of energy research in Japan besides ITER and the ITER Broader Approach Activities. These are: nuclear fusion studies at the National Institute for Fusion Science, related problems of uranium recovery from seawater, remarks on disposal of low-level nuclear waste, recent activities on film-type amorphous solar modules, and precautions against earthquakes for nuclear power stations.
Recent Results from the Large Helical Device at the National Institute for Fusion Science:
1. The Large Helical Device (LHD) is realizing high-performance plasma parameters, Ti(0) = 5.6 keV at ne(0) = 1.6 x 10^19 m^-3, nTτ = 5 x 10^19 m^-3 s keV.
2. The discovery of the super-dense-core regime (the achieved density was 1.2 x 10^21 m^-3 at B = 2.5 T) has attractive potential for an innovative reactor operation scenario of ignition at 6-7 keV ion temperature, instead of 20 keV for the ITER case.
3. The impurity hole develops with increasing ion temperature, achieved by the new perpendicular NBI, which opened the window to adopting SiC and W divertor plate materials.
4. Effective use of the facility for bilateral benefits: since 1998, more than 90,000 plasma discharges have served cooperative researchers, both for the researchers and for the students of the next-step fusion studies.
Related Problems of Uranium Recovery from Seawater:
1. Historically, no country has devoted effort to the development of U-235 enrichment technologies designed to be more proliferation resistant, except the work in 1971 by Prof. Kakihana, former IAEA DDG in the 1970s.
2. A project for the development of a chemical method for U-235 enrichment (ACEP) has started this spring under the guidance of Prof. Fujii.
3. Spent fuel (U, Pu, MA) conditioning by the pyro-process is an inherently proliferation-resistant recycle of spent fuels, because in the electro-refiner U, Pu and MA are always co-deposited together.
4. The remarks above came from Dr. Tokiwai (Nuclear Solution Access and Communication, NuSAC Inc.).
Remarks on Low Level Nuclear Waste Disposal:
1. In addition to nuclear HLW matters, caution for LLW is necessary in order for it to be publicly accepted.
2. The new scope of the effective utilization of low-level radioactive waste, instead of disposal, was proposed by Tanabe (Kyushu Univ.) and Yoshida (Nagoya Univ.).
3. Enhanced gamma-ray energy conversion in a water vessel by means of co-existing Al2O3 has opened the way to effective utilization of LLW, such as for hydrogen production and generating electricity.
Recent Activities of Film Type Amorphous Solar Modules:
1. More advanced photovoltaic modules have been developed in Japan using amorphous Si and microcrystalline Si by Fuji Electric and Mitsubishi Heavy Industries, and in the U.S. by the Unisolar Company.
2. The advantages of flexible modules are: light weight; thin and flexible; higher productivity (with a roll-to-roll process); and high-voltage specifications with no external wiring required.
3. The advantages of amorphous modules are: more annual energy output than crystalline modules with the same rated capacity; superior temperature characteristics (less efficiency reduction at high temperatures); power can be generated with a small amount of light; less silicon required (1/200 of that of a crystalline cell); and less CO2 emitted during production (50% of that of crystalline modules).
4. Because good sunshine is usually available in Japan, the government has recommended installing solar panels on roofs. However, the price is still expensive. A wider market is required to reduce the cost, and from this aspect flexible modules have a good future.
Precautions Against Earthquake for Nuclear Power Stations:
1. After the earthquake that struck the Kashiwazaki-Kariwa nuclear power station, a new criterion against earthquakes has been introduced, namely that every reactor building must be guaranteed against 1,000 gal (roughly 1 g) of acceleration.
2. Severe discussions were held on earthquake forecasting, site selection and aseismic reactor buildings. Seismic isolation structures for reactor buildings are now being seriously considered.
3. At present, reasonably accurate earthquake forecasting is possible only for earthquakes caused by plate movement, while prediction of earthquakes caused by active faults is far less reliable. In that case, prediction is made through knowledge from geological and historical (archaeological) approaches.
4. Earthquakes are not common in western Europe and the eastern U.S., but the frequency of occurrence is dominated along the Pacific coast, where the new nuclear power stations are expected to be built. We have to learn from past experience, through international conversations.
5. On August 11, 2009, an earthquake with a moment magnitude of 6.5 shook a wide area of Shizuoka prefecture, and at the Hamaoka nuclear power station the No. 4 and No. 5 reactors were automatically shut down. The drive system of a control rod for the No. 5 reactor was slightly damaged.
IMPACT OF THE FINANCIAL CRISIS OF 2008 ON WORLD ENERGY
DR. HISHAM KHATIB
World Energy Council, Amman, Jordan
The financial crisis of 2008-2009 was sudden and unanticipated. It greatly reduced economic growth in practically every region of the world, increased unemployment and reduced investment. Its financial and economic details are in Table 1 below:

Table 1. Impact of the Financial Crisis (2007-2010)
                              2007    2008    2009    2010
World Output %                 5.3     3.1    -1.4     2.5
Advanced Countries %           2.7     0.8    -3.8     0.6
Emerging & DCs %               8.3     6.0     1.5     4.7
  of which China %            13.0     9.0     7.5     8.5
World Trade %                  7.2     2.9   -12.2     1.0
Mid-Year Oil Prices ($/b)       71      97      60      74
Source: IMF statistics
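To make the swing in Table 1 concrete, the short sketch below (my own tabulation of the figures above, not taken from the IMF source) computes the change in growth rates between the pre-crisis year 2007 and the trough year 2009.

```python
# Growth rates (%) from Table 1 above, as quoted in the text.
table1 = {
    "World Output":       {2007: 5.3,  2008: 3.1, 2009: -1.4,  2010: 2.5},
    "Advanced Countries": {2007: 2.7,  2008: 0.8, 2009: -3.8,  2010: 0.6},
    "Emerging & DCs":     {2007: 8.3,  2008: 6.0, 2009: 1.5,   2010: 4.7},
    "  of which China":   {2007: 13.0, 2008: 9.0, 2009: 7.5,   2010: 8.5},
    "World Trade":        {2007: 7.2,  2008: 2.9, 2009: -12.2, 2010: 1.0},
}

# Swing in growth rate from 2007 to 2009, in percentage points.
for name, series in table1.items():
    swing = series[2009] - series[2007]
    print(f"{name:22s} {swing:+6.1f} pp")
```

The largest swings are in world trade (about -19 percentage points) and advanced-country output (about -6.5 points), which is consistent with the discussion of reduced investment and demand that follows.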
The year 2008 was a tectonic year for energy, with the following features:
• For the first time in history, non-OECD commercial energy consumption (51.2%) was larger than OECD energy consumption.
• Electrical power generation in the OECD fell.
• China's power generation became bigger than EU power generation.
• Carbon emissions from China became larger than those from the U.S.
• Coal became the world's fastest growing energy fuel: in 2008 it grew 3.1%, while global primary energy growth was only 1.4%.
• Oil consumption in the United States fell 1.3 Mbpd (or 6.4%); China's increased 0.26 Mbpd.
From the above statistics it is clear that the impact of the financial crisis on the global energy sector was significant. However the reduced prices of oil and other forms of energy helped to reduce the impact of the recession on the world economy. It also decreased the demand for oil. However the recession reduced funds available for investment in oil and gas development. The negative effect of this in the long term is going to be significant because, it means delay in the development of energy sources, particularly oil which needs 5- 7 years for resource development. The recession also reduced investments and interest in developing renewables, not only due to non-availability of funds, but also because reduced fossil energy prices diminished the need for developing alternatives. The year 2008 also witnessed tremendous swings in energy prices. Oil prices peaked in mid-year to $147 per barrel (b), but went down to less than $40/b by year end. Similarly coal prices witnessed a peak of $219/ton and then plummeted to almost $58/ton by year end.
With regard to global energy security, the ratio of proved oil reserves to annual production has held steady at roughly 40:1 for more than 20 years, but the remaining reserves are increasingly concentrated in more politically and technically challenging terrain. As oil prices neared their peak in mid-2008, consumption by industrialized countries fell by about 1 percent from one year before. Economic turmoil dragged demand still lower later in the year, and the average OECD consumption for 2008 was 47.5 million barrels per day (Mbpd), 3.5 percent below the 2007 level, with even sharper declines in the first half of 2009. In contrast, developing-world demand increased by 1.4 Mbpd to 38.7 Mbpd, driven by rising transportation energy needs and government fuel subsidies that softened the pain of higher prices. This growth offset much of the industrial-country decline, and global oil consumption ended only 0.3-0.6 percent lower than in 2007. The World Watch Institute recently drew attention to the fact that, for six years running, coal has led the growth in fossil fuel production. In 2000, it provided just 28 percent of the world's fossil fuel energy production, compared with 45 percent for oil. But by 2008, coal production reached 9.1 Mtoe per day, representing a third of fossil energy production and a 0.7 percent increase over 2007. The growth in China's coal consumption since 2000 dwarfs that of all other countries combined. India, second in growth, added less than an eighth as much coal consumption as China during that period. Globally, the largest share of coal production is for electricity generation. Larger capacities and better materials have led to higher efficiencies at coal-fired power plants, particularly in China. China aims to reduce the energy intensity of its economy by 20 percent during the 2006-10 planning period, in part by improving power-plant efficiency by 4 percent. Industry data suggest that this goal was already surpassed in 2007. In the United States, the construction of new coal-fired power plants has been discouraged by expectations of greenhouse gas regulations, as well as factors such as materials costs and public opposition. Fossil fuels, which constitute more than 80% of world primary energy consumption, will continue to dominate global energy markets well into 2050 and, in my humble view, well beyond that. It is not the financial crisis that will shape the future of energy; rather it is environmental awareness and world emission-limitation agreements. Still, carbon emissions and CO2 concentrations will likely continue to rise for decades to come. There is a mounting need for mitigation and adaptation. The following two figures demonstrate Fossil Fuel Production 1981-2008 and the growing Coal Consumption by Region in the years 2000, 2007 and 2008.
[Figures: Fossil Fuel Production 1981-2008; Coal Consumption by Region, 2000, 2007 and 2008.]
REFERENCES
1. IMF semi-annual world economic surveys, 2008-2009.
2. BP Statistical Review of World Energy, June 2009.
3. World Watch Institute, "Fossil Fuel Production Up Despite Recession" by James Russell, 2009.
SESSION 17 GREEN CHEMISTRY WORKSHOP
PLASTICS ADDITIVES AND GREEN CHEMISTRY EVAN S. BEACH AND PAUL T. ANASTAS* Center for Green Chemistry and Green Engineering Yale University, New Haven, Connecticut, USA ABSTRACT The plastics enterprise currently depends on a small number of commodity polymers to perform in a diversity of applications, putting a burden on additives to enhance the properties of various materials. The toxic effects and environmental persistence of certain commercial additives impact the sustainability of the plastics industry. Green chemistry has been (and will be) applied to find solutions. This paper will focus on alternatives to phthalate plasticizers and halogenated flame retardants, which together account for a significant portion of the global additives market and the global dispersion of endocrine disrupting chemicals. Small molecule alternatives that exist in various stages of research and commercialization will be reviewed, with emphasis on the use of renewable resources. The rise of biorefineries and new bio-based monomers may help overcome existing economic barriers. Increasing the molecular weight of additives or covalently linking them to polymer backbones are two promising strategies for reducing both mobility and toxicity, but are beyond the scope of this extended abstract. It should be noted that none of the chemicals put forward as "green" replacements have received the same level of scrutiny as dioctyl phthalate (DOP, aka DEHP) or polybrominated diphenyl ethers (PBDEs). Cooperation between chemists, engineers, and the health and safety community will be critical to ensure the adoption of safe and sustainable technologies. INTRODUCTION Global plastic resin consumption in 2007 was 210 million tonnes. The corresponding demand for additives was 11 million tonnes, or about 5% by weight of all the plastic products manufactured in a year. 1 The environmental and human health impacts of phthalate plasticizers and PBDE flame retardants have been reported in depth and will not be summarized here. Plasticizers, mostly used in poly(vinyl chloride) (PVC), accounted for 54% of additives (by mass) in 2007, and flame retardants were reported to be one of the fastest-growing sectors.2 In Europe, these two categories together accounted for just over 75% of the additive market (by mass).3 Replacing PVC with alternative polymers would have a significant effect on the global dispersion and health impacts of additives. In Europe in 2007, PVC accounted for 80% of plasticizer use, and that market continues to be dominated by phthalates (7585%).3.4 The numbers are not surprising considering the high levels of phthalates that are used in flexible PVC. Whereas pipes may contain >95% PVC by weight, in some applications like fishing lures the proportion can drop as low as 14%, and the polymer is effectively a gelling agent for liquid plasticizer 5 Abated use of PVC is not expected in the short term, however. It is forecast that due to growth in Asia and developing markets, production will more than double from 1992-2012, from 22 million to 50 million
tonnes/yr, and as of 2007 PVC accounted for 35.3 million tonnes, or about 17% of all polymer resin sold.6 Even if PVC production diminishes, plastics that fill the gap will demand additives as well. Global production of bioplastics is expected to quintuple from 2007-2011, and in a future where biorefineries are the top source of chemical feedstocks, the demand will be even higher. Poly(lactic acid) (PLA) is currently the most widely used bioplastic and has been the focus of most additives-for-bioplastics research to date. PLA depends on a variety of additives including plasticizers if it is to perform in a wide range of applications.7 Cellulose-, starch-, and wheat gluten-based polymers consume plasticizers as well.8 The inherent flammability of most polymers means that flame retardant additives are critical for almost any plastic used in electronics, textiles, foam padding, and other applications where accidental fires cost lives. Global flame retardant demand is expected to increase 4.7% annually to 2.2 million tonnes by 2011. Growth is expected in both halogen-free materials as well as brominated flame retardants.9
SOLUTIONS: SMALL MOLECULE PLASTICIZERS
A survey of the literature shows there are abundant alternatives to DEHP (the most high-profile endocrine-disrupting phthalate), as well as alternatives to the phthalate class of molecules altogether. It must be stressed that absence of the phthalate moiety (as a sole criterion) does not assure "greenness" in any way. The discussion here will be limited to plasticizers derived from bio-based resources, as there are a variety of simple carbohydrates and lipids that are generally expected to be safe. It is well known that nature abounds with toxic chemicals and thus bio-based chemicals should not be excluded from full toxicity testing. One non-phthalate alternative wholly derived from petroleum should be highlighted: BASF's Hexamoll® DINCH (Figure 1) is perhaps the most rigorously tested drop-in replacement for DEHP. Prior to commercialization DINCH passed a battery of eco-toxicity and genotoxicity tests covering a variety of species from bacteria and daphnids to zebrafish, earthworms, rats, rabbits, and guinea pigs.10 Production capacity recently increased to 100 million kg/yr.4
Fig. 1. [Chemical structure: diisononyl ester plasticizer.]
One class of plasticizers entirely based on renewable resources is based on isosorbide, a dehydration product of glucose-derived sorbitol (Figure 2). The performance can be tuned by selecting various alkanoic acids. Isosorbide di-(n-octanoic acid) ester (Figure 3) has capabilities similar to DEHP. Isosorbide esters are fully
biodegradable and have passed tests for acute toxicity, sensitization, mutagenicity, and estrogenicity.11,12
Fig. 2. [Sorbitol loses two H2O to give isosorbide.] Fig. 3. [Isosorbide ester with n-octanoic acid (renewable).]
Citrate esters are well known plasticizers for PVC and PLA. Tributyl citrate [TBC, (Figure 4)], acetyl tributylcitrate (ATBC), acetyl trihexylcitrate, and butyryl trihexylcitrate are all available commercially (e.g., Citroflex®) and the toxicological literature shows that this family of compounds is generally nontoxic by most assays. However some studies have found that ATBC has cytotoxic effects,13-IS suggesting citrates should be regarded with some caution. Epoxidized soybean oil is another well known, commercial plasticizer. It has tested negative for harmful effects in a range of tests (estrogenicity, mutagenicity, carcinogenicity, and embryotoxicity), except it is noted that some grades affected organs in rats. 12 .16,17 Danisco GRINDSTED® SOFT-N-SAFE [consisting primarily of the castor oil derivative (Figure 5)] has lower volatility than DOTP and high resistance to extraction. 18 The patent literature suggests that SOFT-NSAFE is finding applications in PLA resins as welL 19 Several dibenzoate esters of biobased diols show excellent performance in comparison to conventional plasticizers, but biodegradation and estrogenicity are concerns. Di(ethylene glycol) dibenzoate and di(propylene glycol)dibenzoate were shown to form toxic, stable metabolites when treated with yeast. 20 The related chemical 1,5-pentanediol dibenzoate shows improved biodegradability.21 The outlook is promising but it was reported that a technical grade plasticizer containing predominantly di(propylene glycol) dibenzoate showed estrogenic properties, 12 so more thorough testing is needed.
Fig. 4. [Tributyl citrate.] Fig. 5. [Castor oil-derived plasticizer.]
The use of waste products, particularly from agricultural processes, will promote low cost, environmentally friendly plasticizers. Tributyl aconitate [TBA, (Figure 6)] made from aconitic acid, a waste product of sugar cane processing, shows some advantages over citrates. TBA imparted better flexibility to PVC than di(isononyl) phthalate or TBC and had better migration properties than TBC. 22 According to
TOXNET, TBA has an LD50 > 500 mg/kg (mouse), indicating relatively low toxicity, but further study is needed to confirm the safety of this plasticizer. Another low-value agricultural product is unrefined "biodiesel coproduct stream" (BCS, consisting of glycerol, free fatty acids, and fatty acid methyl esters). BCS has been shown to be an effective plasticizer for gelatin. The thermoplastic gelatin produced may be used in extrusion, injection molding, or foam applications.23 The use of BCS in plastic may raise the value of the biorefinery product and expand the range of applications for gelatin and other biopolymers.
Fig. 6. [Tributyl aconitate.]
Ionic liquids have emerged as a new class of plasticizers. Low volatility, low migration compared to DEHP, and reduced flammability hazard are all expected benefits,24 though toxicity will be a concern for many structural classes.25 To date, the ionic liquids reported to have plasticizer effects have all been derived from petroleum, but the development of bio-based ionic liquids may offer new opportunities for environmentally benign innovations in the plastics field.26
SOLUTIONS: SMALL MOLECULE FLAME RETARDANTS
Non-halogenated flame retardants are the focus of a thriving research field. A sign of the growing interest is a report from the 2007 AddCon conference noting that there were no submissions on halogenated flame retardants, though presentations on flame retardancy were one of the largest groups of papers.27 Numerous reviews of PBDE alternatives have been conducted by scientists, government, and industry. The EPA has published a study on the expected environmental effects of various phosphorus-based flame retardants.28 Industry groups like HDPUG have made similar efforts.29 Flame retardant manufacturers have created a website, http://www.nonhalogenated-flameretardants.com, compiling performance and environmental data for a variety of applications.30 The US EPA considers environmentally positive attributes of flame retardants to include ready biodegradation or safe incineration, very large diameters (>10 Å) or high molecular weights (>1000 Da), ability to chemically bind to the substrate, and low toxicity.28 A few particularly interesting commercial technologies based on small molecules will be highlighted here. For polycarbonate plastics, it has been known for decades that certain metal sulfonates impart flame resistance at spectacularly low levels, in the range of 0.05-0.1% loading. Of the commercial sulfonates, one is non-halogenated (potassium diphenyl sulfone sulfonate). The sulfonate technology is just one example of the benefits that can be gained through taking advantage of unique flame retardant mechanisms.31 In the polyester industry, it is estimated that 40% of resins are flame-retarded, usually with
725 halogen-based agents. Melamine polyphosphate (e.g., DSM Melapur® 200) is among the commercial non-halogenated alternatives. 32 Melamine polyphosphate thermal decomposition reactions are endothermic, and combustion generates N2 , contributes to char, enhances char properties, and shows synergy with other flame retardant additives 33 Better understanding of chemical mechanisms, in particular synergies between materials (for example systems containing aluminum, phosphate, and nitrogen that achieve very high flammability standards) will help inform the design of new materials. 34 A new product called Molecular Heat Eater® (MHE) is available in various formulations based on carbonate and phosphate salts and benign organic acids (such as citric, glutaric, succinic, oxalic, formic, acetic, and stearic acids). Many of these components are available as agricultural waste products. MHE is typically dispersed in a polymer matrix as micron-sized particles, which require a strong endothermic reaction to decompose, resulting in the flame retardant effect. Performance of MHE in thermogravimetric analysis testing is reportedly similar to that of PBDEs, and in cone calorimeter tests MHE exceeded ISO standards. 35 MHE is one of the rare halogen-free technologies that makes extensive use of bio-based materials. Further development of flame-resistant materials from biologically familiar chemicals should be highly encouraged. DESIGNING LESS HAZARDOUS CHEMICALS Very few (if any) of the alternative additives discussed in this review have received the same level of scrutiny as DEHP and PBDEs. They have been highlighted mainly to demonstrate that functional alternatives are abundant, and that progress has been made in adoption of green chemistry principles. The use of renewable resources (and particularly renewable resources that are widely recognized as safe) is to be encouraged, but ideally all green chemistry principles must be met. Comprehensive assessments of hazards at all stages of the chemical lifecycles need to be completed for many promising technologies. The criteria considered by the United States EPA Design for the Environment team in its assessments of flame retardant materials 28 are an excellent set of properties that should be determined for any chemical designated for mass markets: Acute toxicity
Carcinogenicity
Bioconcentration
Subchronic & chronic toxicity
Neurotoxicity
Degradation & transport
Reproductive toxicity
Immunotoxicity
Aquatic toxicity
Developmental toxicity
Genotoxicity
Terrestrial organism toxicity
The hazard screening process for new additive technologies will ideally be aided by computational methods, and eventually simple molecular design rules, to aid chemists and engineers in selecting which polymer additives are worthy of comprehensive study. A hierarchy of design information for designing safer chemicals has been proposed (in order of increasing utility),36,37 and a schematic sketch of such a pre-screen follows the list:
• Molecular modifications that decrease bioavailability
• Molecular modifications affecting absorption, distribution, modification, and excretion parameters
• Quantitative structure-activity relationships that predict safe or problematic structural classes
• Knowledge of the precise mechanism of action
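The sketch below is a crude illustration of how such "simple molecular design rules" might be operationalized as a pre-screen before full hazard testing. It is a hypothetical example, not an established screening tool: the thresholds (molecular weight above roughly 1000 Da or diameter above roughly 10 Å, taken from the EPA attributes for flame retardants quoted earlier) and the candidate data are placeholders, and a real assessment would still require the full battery of DfE criteria listed above.

```python
# Hypothetical pre-screen for polymer-additive candidates, illustrating the
# idea of simple design rules applied before full hazard testing. Thresholds
# follow the EPA-cited attributes mentioned in the text (MW > 1000 Da or
# diameter > 10 Angstroms suggest low bioavailability); all candidate data
# below are placeholders, not measured values.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    mol_weight_da: float         # molecular weight, Da
    diameter_angstrom: float     # approximate molecular diameter
    readily_biodegradable: bool  # from a biodegradability prediction or test

def pre_screen(c: Candidate) -> str:
    """Return a coarse flag; a real assessment needs the full DfE criteria."""
    low_bioavailability = c.mol_weight_da > 1000 or c.diameter_angstrom > 10
    if low_bioavailability and c.readily_biodegradable:
        return "promising - proceed to full toxicity and lifecycle testing"
    if low_bioavailability or c.readily_biodegradable:
        return "uncertain - gather more data"
    return "deprioritize - small, persistent molecule"

for cand in [
    Candidate("polymeric plasticizer A", 2500.0, 14.0, True),
    Candidate("small-molecule additive B", 390.0, 8.0, False),
]:
    print(cand.name, "->", pre_screen(cand))
```

Such a rule-based filter can only rank candidates for further study; the hierarchy above makes clear that mechanism-level knowledge, not molecular size alone, is the most useful design information.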
Some progress has been made in articulating guidelines that can be easily adopted by chemists and other molecular designers, for example in predicting biodegradability.38 Designing for minimal harm to humans (particularly with regard to emerging issues like endocrine disruption and epigenetic effects) remains a tremendous challenge. Shape Signatures, a computational approach that relies on molecular geometry and polarity information, has been used to identify novel estrogen antagonists39 and may prove useful in screening new polymer additives. As research efforts continue to reveal new links between molecular structure and harmful effects, one productive application of the results will be the screening of libraries of chemicals that can be simply produced from biorefinery products (by esterification, hydrogenation, or other green processes). The bio-based chemical platforms of the future will begin to supplant the petroleum platform of the past, and new molecular structures will appear in the commodity chemical markets. It is in this development where transformative advances in green chemistry of polymer additives will be made.
REFERENCES
3. 4. 5. 6. 7. 8.
9. 10.
Babinsky, R.; Gastrock, F. Brics, foundation for strategic growth. Addcon 2008, Barcelona, Spain, Paper I. New study highlights trends in additives. Plastics Additives & Compounding 2008,10 (September/October), 12. MUller, S. Plastic additives-the European market in a global environment. Addcon 2007, Frankfurt, Germany, Paper l. Markarian, J. (2007) "PVC additives-what lies ahead?" Plastics Additives & Compounding, 9 (November/December), 22-25. Wickson, EJ. In Handbook of pvc formulating; Wickson, EJ., Ed.; John Wiley & Sons, Inc.: New York, 1993, 1-13. Global PVC markets: Threats and opportunities. Plastics Additives & Compounding 2008,10 (November/December), 28-30. Markarian, J. (2008) "Biopolymers present new market opportunities for additives in packaging." Plastics Additives & Compounding, 10 (May/June), 22-25. Rahman, M.; Brazel, C.S. (2004) "The plasticizer market: An assessment of traditional plasticizers and research trends to meet new challenges." Progress in Polymer Science, 29 (12), 1223-1248. (2008) "Flame retardant demand to rise." Plastics Additives & Compounding, 10 (January/February), 8. Wadey, B.L. (2003) "An innovative plasticizer for sensitive applications." Journal of Vinyl & Additive Technology, 9 (4), 172-176.
11. van Haveren, J.; Oostveen, E.A.; Micciche, F.; Weijnen, J.G.J. In Feedstocks for the Future; Bozell, J.J., Patel, M.K., Eds.; American Chemical Society: Washington, DC, 2006, 99-115.
12. Ter Veld, M.G.R.; Schouten, B.; Louisse, J.; Van Es, D.S.; Van der Saag, P.T.; Rietjens, I.M.C.M.; Murk, A.J. (2006) "Estrogenic potency of food-packaging-associated plasticizers and antioxidants as detected in ERα and ERβ reporter gene cell lines." Journal of Agricultural and Food Chemistry, 54 (12), 4407-4416.
13. Meyers, D.B.; Autian, J.; Guess, W.L. (1964) "Toxicity of plastics used in medical practice. II. Toxicity of citric acid esters used as plasticizers." Journal of Pharmaceutical Sciences, 53 (7), 774-777.
14. Ekwall, B.; Nordensten, C.; Albanus, L. (1982) "Toxicity of 29 plasticizers to HeLa cells in the MIT-24 system." Toxicology, 24 (3-4), 199-210.
15. Mochida, K.; Gomyoda, M.; Fujita, T. (1996) "Acetyl tributyl citrate and dibutyl sebacate inhibit the growth of cultured mammalian cells." Bulletin of Environmental Contamination and Toxicology, 56 (4), 635-637.
16. Epoxidised soya bean oil. http://www.bibra-information.co.uk/profile-126.html, accessed June 15, 2008.
17. Seek Rhee, G.; Hee Kim, S.; Sun Kim, S.; Hee Sohn, K.; Jun Kwack, S.; Kim, B.H.; Lea Park, K. (2002) "Comparison of embryotoxicity of ESBO and phthalate esters using an in vitro battery system." Toxicology in Vitro, 16 (4), 443-448.
18. Kristoffersen, B.L. (2005) "Ud med phthalaterne? (Out with phthalates?)" Dansk Kemi, 86 (3), 22-23.
19. Hamaguchi, T.; Mori, A. (Kao Corporation, Japan). Plasticizer for biodegradable resin. United States Patent Application 2006/0276575 A1.
20. Gartshore, J.; Cooper, D.G.; Nicell, J.A. (2003) "Biodegradation of plasticizers by Rhodotorula rubra." Environmental Toxicology and Chemistry, 22 (6), 1244-1251.
21. Firlotte, N.; Cooper, D.G.; Marić, M.; Nicell, J.A. (2009) "Characterization of 1,5-pentanediol dibenzoate as a potential 'green' plasticizer for poly(vinyl chloride)." Journal of Vinyl and Additive Technology, 15 (2), 99-107.
22. Gil, N.; Saska, M.; Negulescu, I. (2006) "Evaluation of the effects of biobased plasticizers on the thermal and mechanical properties of poly(vinyl chloride)." Journal of Applied Polymer Science, 102 (2), 1366-1373.
23. Stevens, E.S.; Ashby, R.D.; Solaiman, D.K.Y. (2009) "Gelatin plasticized with a biodiesel coproduct stream." Journal of Biobased Materials and Bioenergy, 3 (1), 57-61.
24. Rahman, M.; Brazel, C.S. (2006) "Ionic liquids: New generation stable plasticizers for poly(vinyl chloride)." Polymer Degradation and Stability, 91 (2), 3371-3382.
25. Scammells, P.J.; Scott, J.L.; Singer, R.D. (2005) "Ionic liquids: The neglected issues." Australian Journal of Chemistry, 58 (3), 155-169.
26. Zhao, D.; Liao, Y.; Zhang, Z. (2007) "Toxicity of ionic liquids." Clean: Soil, Air, Water, 35 (1), 42-48.
27. Pritchard, G. (2007) "Technical progress-but an uphill struggle for Western Europe." Plastics Additives & Compounding, 9 (November/December), 36-39.
28. Furniture flame retardancy partnership: Environmental profiles of chemical flame-retardant alternatives for low-density polyurethane foam, accessed.
29. HDP halogen free guideline. http://www.hdpug.org/content/publications-0, accessed 6/14/2009.
30. Halogen-free flame retardants in E&E applications: A growing toolbox of materials is becoming available. http://www.halogenfree-flameretardants.com/HFFR-300.pdf, accessed 6/14/2009.
31. Levchik, S.V.; Weil, E.D. (2005) "Overview of recent developments in the flame retardancy of polycarbonates." Polymer International, 54 (7), 981-998.
32. Scheirs, J. (2003) In Modern Polyesters: Chemistry and Technology of Polyesters and Copolyesters; Scheirs, J., Long, T.E., Eds.; 495-540.
33. Murphy, J. (2001) "Flame retardants: Trends and new developments." Plastics Additives & Compounding, 3 (April), 16-20.
34. Braun, U.; Schartel, B. (2007) "Flame retardancy mechanisms of aluminium phosphinate in combination with melamine cyanurate in glass-fibre-reinforced poly(1,4-butylene terephthalate)." Macromolecular Materials and Engineering, 293 (3), 206-217.
35. "Additives from the natural world." Plastics Additives & Compounding 2008, 10 (November/December), 42-43.
36. DeVito, S.C. (1996) In Designing Safer Chemicals: Green Chemistry for Pollution Prevention; DeVito, S.C., Garrett, R.L., Eds.; American Chemical Society: Washington, DC, 16-59.
37. Anastas, N.D.; Warner, J.C. (2005) "The incorporation of hazard reduction as a chemical design criterion in green chemistry." Chemical Health & Safety, 12 (2), 9-13.
38. Boethling, R.S.; Sommer, E.; DiFiore, D. (2007) "Designing small molecules for biodegradability." Chemical Reviews, 107 (6), 2207-2227.
39. Wang, C.Y.; Ai, N.; Arora, S.; Erenrich, E.; Nagarajan, K.; Zauhar, R.; Young, D.; Welsh, W.J. (2006) "Identification of previously unrecognized antiestrogenic chemicals using a novel virtual screening approach." Chemical Research in Toxicology, 19 (12), 1595-1601.
PLASTIC, PLASTICIZERS AND CONSUMER PRODUCTS
NICOLAS OLEA
Laboratorio Investigaciones Medicas, Hospital Universitario San Cecilio, Granada, Spain
INTRODUCTION
Knowledge about human exposure to endocrine disrupters (ED) is expanding at a time when we are discovering new chemical compounds that can alter the hormonal balance. As the list of new EDs lengthens, we are also identifying exposure pathways and how these substances enter the human organism. This is the case for some plastics and plasticizers found in consumer products, such as bisphenols and phthalates. Bisphenols are a group of chemical compounds that were initially designed as synthetic estrogenic hormones and now form part of epoxy resins and polycarbonates. Phthalates are used in the manufacture, stabilization, modification, and performance of plastic polymers. The estrogenicity of bisphenols was first documented in 1936, when they were already being used in the formation of synthetic polymers, and bisphenol-F was a base monomer in Bakelite. Although bisphenols and phthalates have been used for all of 100 years, account has only recently been taken of human exposure or of the potential consequent health risks. It can be affirmed that: i) "bisphenols" is a broad term that includes various compounds that are structurally similar to bisphenol-A (BPA) and are widely used in the chemical industry; ii) human exposure to bisphenols and phthalates is a significant, demonstrated and increasing phenomenon; iii) the biological effects of bisphenols and phthalates are well documented, fundamentally with respect to their estrogenicity. The causal relationship between endocrine disruption by bisphenols and phthalates and human disease remains elusive, and these uncertainties allow different conclusions to be drawn. Nevertheless, it is clear that these chemicals are hormonally active, interfere in the homeostasis of the hormonal system, and may thus disrupt the endocrine system.
BISPHENOLS AND BISPHENOL-A (BPA)
"Bisphenols" is a broad term that includes many substances which share, as a common chemical structure, two phenolic rings joined together through a bridging carbon. In BPA (2,2-bis[4-hydroxyphenyl]propane) the bridging group is isopropylidene, in bisphenol S the bridge is a sulfonyl group, and in bisphenol AF the isopropylidene bridge is fully fluorinated. BPA is synthesised from two molecules of phenol and one of acetone. Following the same approach, bisphenol F comes from formaldehyde, bisphenol B from butanone, bisphenol H from cyclohexane, bisphenol C from o-cresol, and bisphenol G from o-isopropylphenol. BPA is one of the 2,000 high-production-volume chemicals manufactured world-wide. In Europe, four companies produce more than 700,000 tonnes/year of BPA at six production sites, with one factory in Southern Spain producing around 250,000 tonnes/year. This massive production implies the continuous emission of BPA into the environment from its manufacture and from the utilization of products containing this
compound (Vandenberg et al. 2007). Nevertheless, BPA has not been subjected to any environmental legislative control. Bisphenols have been extensively used as intermediates in the production of polycarbonate, epoxy, and corrosion-resistant unsaturated polyester resins. Epoxy resins are fundamental components of high quality commercial polymer materials. They are versatile materials used in a wide range of essential applications, from electronics to food protection. They are used as a component in the manufacture of barrier coatings for the inner surfaces of food and beverage cans, where they play a vital role in preventing corrosion of the metal or migration of its ions, which would lead to tainting or spoiling of the can contents. They are also used as additives in a variety of other plastic materials such as vinyl and acrylic resins and natural and synthetic rubber. As biomaterials they have multiple uses in human health, for instance in dental composites and sealants and as bioactive bone cements. Polycarbonates are used in a wide array of plastic products, with novel applications continuously being developed. They are used in the automotive, aircraft, optical, photographic, electrical and electronic markets. They are also employed in the packaging, storing, and preparation of a myriad of foods and beverages, baby foods, and juice containers. Phenolic resins are produced by the copolymerisation of simple phenols or bisphenols and formaldehyde. They are used in inks, coatings, varnishes and abrasive binders. Phenoxy resins are thermoplastic copolymers of bisphenol A and epichlorohydrin. These resins have good resistance to extreme temperatures and corrosion, which makes them suitable for use in pipes and ventilating ducts.
BIOLOGICAL ACTIVITY OF BISPHENOLS: ESTROGENICITY
The estrogenicity of bisphenols was reported for the first time in 1936 by Dodds and Lawson, who looked for synthetic estrogens devoid of the phenanthrene nucleus. These authors classified stilbenes and bisphenols by their ability to mimic 17β-estradiol (E2) in increasing the uterine weight of ovariectomized rats. Stilbenes were found to be much more potent than bisphenols, and one of them, diethylstilbestrol (DES), was selected for pharmaceutical use. Bisphenols were subsequently discarded for pharmaceutical purposes. In 1944, Reid and Wilson again studied the relationship of the structure of some bisphenols to estrogenic activity in vivo, compared with stilbene derivatives. Interestingly, some bisphenols were already used at this time in the plastic industry. For instance, bisphenol-F was part of Bakelite plastic, invented in 1909. The early reported estrogenicity of bisphenols was not considered a toxicological problem, and new bisphenols were synthesized for use in many industrial applications. Fifty years later, Gilbert et al. studied the relationship between correspondence factor analysis and structure-activity in bisphenols by testing the effect of these compounds on the proliferation of MCF-7 human breast cancer cells and by testing their binding specificity to the estrogen receptor. They proposed that no single structural feature defines estrogenic activity and that hydrophobic volume, together with hydroxyl groups and conjugation with basic groups in the bisphenol structure, is involved in the triggering of cell proliferation.
In 1998, we studied the estrogenic potency of some diphenylalkanes with bisphenol structure (Figure 1) and demonstrated their ability to stimulate MCF-7 cell proliferation in vitro and to induce specific E2-responsive proteins (Perez et al., 1998). We proposed that both the length and the nature of the substituent groups at the bridging carbon of BPA analogues affected the estrogenic potency of these compounds. Good correlation was found between the relative binding affinity and the proliferative potency of each compound, suggesting that the proliferative effects of bisphenols are mediated through binding to the estrogen receptor. Further, in 2002, we investigated whether several events triggered by E2 in MCF-7 cells were also observed in response to various bisphenols (Rivas et al. 2002). We explored the proliferative effect of these agents, the expression of estrogen-controlled genes by measuring the mRNA of the pS2 protein and the related protein released to the culture medium, the induction of the progesterone receptor (PgR), and the expression of a luciferase reporter gene transfected in MVLN cells (MCF-7 cells stably transfected with a pVit-tk-Luc reporter-containing plasmid).
Fig. 1: Chemical structure of some bisphenols.
Bisphenols showed an agonistic response in all the assays, suggesting that these compounds may act through all the response pathways defined for the natural hormone. However, we found differences between the assays in the potency of bisphenols, defined as the minimum concentration required to produce a maximal effect. In the cell proliferation assay, in which all tested compounds needed a lower concentration to give a maximal response, 2,2-bis(4-hydroxyphenyl)heptane (BP-5) was able to exert estrogenic activity at a LOEC as low as 1 nM. In the other assays, BP-5 was again the most potent
bisphenol. Interestingly, Reid and Wilson reported that BP-5 was the most estrogenic bisphenol in the in vivo uterine assay. These observations confirm that the nature of the bridging carbon substituents may determine the estrogenicity of bisphenols. For instance, two propyl chains on the central carbon (the longest alkyl substituent investigated) gave the strongest response, as previously suggested by cell proliferation and PgR expression experiments. Other biological activities attributed to bisphenols do not seem to depend on the central carbon substituents. For example, the transforming activity, determined by morphological transformation frequencies in Syrian hamster embryo cells treated with bisphenols, does not vary with the number of carbons bound to the bridging carbon. Among all the bisphenols tested, BP-8, which has a carbonyl group at the central carbon, was the least active compound in all the assays. Again, this observation confirms that the polarity and nature of the substituent at the central carbon determine the estrogenic potency, and it is in agreement with reports by Dodds and Lawson, who showed BP-8 to be the least potent estrogenic compound of a series of 4,4'-dihydroxydiphenylmethane derivatives in the uterotrophic assay. In the tested products, the presence of one methyl group in the meta position of the aromatic ring (BP-6) only very slightly modified the estrogenicity in all the assays. These results are consistent with previously published findings in which the inclusion of chlorine or bromine atoms in the meta position of the aromatic ring of bisphenols had no significant effect on the estrogenic potency. Interestingly, the introduction of two bromine or chlorine atoms in the two meta positions of one aromatic ring drastically decreased the estrogenic potency. Finally, our experimental data suggested that the introduction of fluorine atoms in the substituent bound to the bridging carbon, as in perfluorinated BPA or BP-7, had no significant effect on the estrogenic activity observed in any of the assays. Although BPA is considered to be a xenoestrogen, other hormone receptors might also be targets for BPA and other bisphenols. It has previously been demonstrated that BPA interacts with both estrogen receptors, ERα and ERβ, with a slightly higher affinity for ERα. In addition, BPA also interacts with the androgen receptor (AR), as detected in luciferase reporter gene cell lines. BPA presents both estrogenic and antiandrogenic activity: it is able to activate ERs at concentrations lower than 1 µM, while it exhibits antiandrogenic activity at concentrations higher than 5 µM. Moreover, a slight activity is observed on the pregnane X receptor (PXR). Taken together with other recent findings that BPA is a partial agonist of estrogen receptor-related receptor γ (ERRγ) and a partial antagonist of the thyroid hormone receptor (TR), our results suggest that the xeno-estrogenic effect of BPA is merely one of its modes of action.
HUMAN EXPOSURE TO BISPHENOLS AND PHTHALATES: THE CASE OF TAKE-AWAY FOOD
Food and drink consumed outside the home represent a large and important part of the Western diet. Take-away food is defined as food packaged immediately prior to sale and consumed shortly thereafter. The boxes, wrappings, cups, and trays used for take-away food are made of paper, cardboard, plastic (polystyrene), and aluminum, among other materials. The paper and cardboard can be new or recycled, grease-proof, printed, varnished, or glued.
All of these materials are of particular interest because, although the
contact period is usually short, the vast majority of these foods are served hot and many are oily, favoring the migration of any contaminants in the packaging materials into the food. The presence of endocrine disrupters such as bisphenols and phthalates in food has been associated with food packaging materials and food processing. Low levels of these chemicals have been found in baby food, baby bottles, and recycled paper intended for food packaging. Nevertheless, little attention has been paid to this contamination, under the assumption that it carries negligible risk. More recently, there has been a demand for a more rigorous risk assessment, adopting the precautionary principle as a guide for preventive action. In this regard, Vinggaard and coworkers (2000) demonstrated the presence of multiple chemical residues in recycled kitchen rolls, which showed a marked estrogenic response in an in vitro yeast estrogen screen. The possibility of similar compounds migrating from paper products used for food packaging has increased concerns about the health risks that may be involved. Concern about the estrogenicity of bisphenols in plastics and the consequences of human exposure to them was raised by Krishnan and coworkers, who suggested that BPA was responsible for the estrogenicity of water sterilised in polycarbonate flasks. Food is acknowledged to be the main source of exposure to bisphenols. BPA can migrate from food containers made with polycarbonate or epoxy resin or with recycled paper or cardboard, and from polyvinyl chloride stretch films used for wrapping (Lopez-Espinosa et al. 2007). In fact, BPA has been detected in autoclavable flasks, baby bottles, reusable water carboys, and coffee and black tea cans, among other food materials and containers. The leaching of BPA was related to the temperature, heating time, and type of food contained. The estrogenicity of food preserved in cans was related to BPA and oligomers leaching from the inner surface of lacquer-coated cans. The estrogenicity of kitchen roll extracts was attributed to BPA among other compounds. Phthalates used in the manufacture, stabilization, modification, and performance of plastic polymers have also been found in food. Food may be contaminated by ortho-phthalate esters such as dibutyl phthalate (DBP), butyl benzyl phthalate (BBP), and di-2-ethylhexyl phthalate (DEHP), which are used in polyvinyl chloride as plasticizers and in adhesives, printing inks, and colored laminated films. Phthalates have also been detected in virgin and recycled paper used for food packaging, with DBP and DEHP being the most common, whereas BBP was not found in paper and cardboard extracts. The presence of phthalates in some foods has also been associated with the use of PVC tubes in the production of baby food, with cap-sealing resins in bottled foods, and with gloves worn by food handlers. Hence, bisphenols and phthalates may migrate into food packaged in paper products that contain these contaminants. Whereas recycled paper used for kitchen rolls contained BPA, attributed to the printed office waste used as raw material, virgin paper did not appear to be an important source of BPA leaching. In contrast, phthalates were found in both virgin and recycled paper, related to offset printing inks, adhesives, and lacquers (Vinggaard et al. 2000). We had the opportunity to investigate the estrogenicity of recycled paper and cardboard used for food packaging in Europe (Lopez-Espinosa et al. 2007).
Nine out of ten aqueous extracts of paper and cardboard containers used for take-away food showed a statistically significant proliferative effect on MCF-7 human breast cancer cells. In an
attempt to identify the chemicals responsible for this hormonal effect, the presence of BPA and of the phthalates DBP and DEHP was also investigated. Sixty-five percent and 77.5% of the samples were found to contain DBP and DEHP, respectively. Several phthalates had previously been identified as components of virgin and recycled fibers. Vinggaard et al. (2000) reported DBP in all extracts of kitchen rolls made from virgin paper and from recycled paper, at levels similar to those found in our study (ND-10.77 µg/g). It has been suggested that phthalates can come from the printing inks and adhesives used in food containers, among other sources. Interestingly, when the walls of food containers could be mechanically separated into layers, our sample-processing protocol used exclusively the internal layer in direct contact with food, so inks on printed outer layers did not contaminate the sample. The levels of the two phthalates were correlated, suggesting that they might have derived from the same source. Although both phthalates would contribute to the estrogenicity of the samples, as their estrogenicity in the E-Screen assay is known, no correlation was observed between the presence of DBP or DEHP in the extracts and the estrogenic response. Chemical residues in paper and cardboard vary as a function of the primary processing material and the use of recycled material. Thus, it was proposed that BPA could be used to distinguish between virgin and recycled fibers (Vinggaard et al. 2000). We found the highest BPA concentration in a pizza box made of recycled material. However, no association was found between BPA levels and virgin or recycled types of paper and cardboard. It is worth noting that, judging from the recycled-content labelling on the food containers, only 6 out of 40 samples were declared to contain recycled fiber. Further investigation of container composition, conducted with our container suppliers, revealed recycled fibers in all but one sample, which, interestingly, was negative for BPA. In fact, virgin wood pulp was found in proportions that ranged from 36 to 98%. However, neither the classification of container composition based on the manufacturers' information, nor our re-classification based on the percentage of virgin fiber, served to find an association between recycled content and BPA levels. BPA is used in the production of epoxy resins, and epoxy resins are used in the formulation of printing inks. Therefore, printed paper in office waste is a possible source of BPA in recycled paper. The presence of BPA in half of the studied samples suggests that this compound is a common contaminant of paper. Moreover, the estrogenic activity of low concentrations of BPA in MCF-7 cells might suggest that this compound is a main cause of the activity observed in the E-Screen assay. However, we found no significant correlation between the presence or concentration of BPA in the samples and their estrogenicity; the estrogenicity of the extracts may be increased by additive or synergistic effects of the chemicals detected (BPA, DBP, DEHP), or by their interaction with any other hormonally active chemical that might be present in the extracts. In Europe, the safety and quality of paper and cardboard used for food packaging are regulated by the general directives for articles intended to come into contact with foodstuffs (Directive 89/109/EEC), but there is no specific EC Directive for paper and cardboard.
In-depth investigation is needed into the presence of hormonally active chemicals in these materials alongside a risk assessment evaluation. In conclusion, the use of food containers made of paper or cardboard warrants closer scrutiny to determine whether chemicals that migrate to food from this type of packaging contribute to the inadvertent exposure of consumers to endocrine-disrupting chemicals.
CONCLUDING REMARKS
Bisphenol-based polymers and phthalates are used in thousands of products. Investigations revealed that several BPA analogues possess estrogenic activity in the same concentration range as other known endocrine disrupters. Phthalates and bisphenols, as already documented for BPA, may therefore contribute to the total body burden of endocrine-active compounds in wildlife and humans. Alongside phthalates and BPA, other bisphenols must be included in the list of potential endocrine disrupting chemicals, and a risk assessment of the potential effects of these BPA analogues on humans and wildlife should be undertaken. Ten years ago, the General Directorate for Research of the European Parliament published a report on the concerns of Members of the European Parliament about the effects on human health of exposure to endocrine disrupters. The paper, requested by the Committee for the Environment, Consumer Protection and Public Health, comprised sections of a very varied nature that all took a practical approach. It provided an overview of this very complex subject, showing how many of the chemicals claimed to be endocrine disrupters are also problematic in their own right by virtue of being carcinogenic and/or persistent organic pollutants. Finally, it reminded readers that 'environmental legislation mandates the precautionary principle and that experience reinforces the wisdom of this approach in dealing with any potential threat to human health'. To opt for the precautionary principle when taking decisions on human exposure to endocrine disrupters is to act preventively when faced with uncertainty. This is no simple exercise, and there is little experience to help in this task, although in the few cases when decisions were based on this principle the predictions proved to be in the right direction. Invoking the precautionary principle in the field of endocrine disrupters requires "stakeholder education, political courage and conviction" (Ashford and Miller, 1998). Industry, producers, the media and non-governmental organizations are all implicated in the process. Above all, government, "rather than being an arbiter among stakeholders, must return to its role as a trustee of the environment, public health, and sustainability".
REFERENCES
1. Brotons, J.A., Olea-Serrano, M.F., Villalobos, M., Olea, N. (1995) "Xenoestrogens released from lacquer coating in food cans." Environ Health Perspect 103:608-612.
2. Dodds, E.C., Lawson, W. (1936) "Synthetic estrogenic agents without the phenanthrene nucleus." Nature 137:996.
3. Gilbert, J., Dore, J.C., Bignon, B., Pons, M., Ojasoo, T. (1994) "Study of the effects of basic di- and tri-phenyl derivatives on malignant cell proliferation: an example of the application of correspondence factor analysis to structure-activity relationships (SAR)." Quant. Struct.-Act. Relat. 13:262-274.
4. Krishnan, A.V., Stathis, P., Permuth, S.F., Tokes, L., Feldman, D. (1993) "Bisphenol-A: an estrogenic substance is released from polycarbonate flasks during autoclaving." Endocrinology 132:2279-2286.
5. Lopez-Espinosa, M.J., Granada, A., Araque, P., et al. (2007) "Oestrogenicity of paper and cardboard extracts used as food containers." Food Addit Contam 24(1):95-102.
6. Olea, N., Pulgar, R., Perez, P., Olea-Serrano, F., Rivas, A., Novillo-Fertrell, A., Pedraza, V., Soto, A.M., Sonnenschein, C. (1996) "Estrogenicity of resin-based composites and sealants used in dentistry." Environ. Health Perspect. 104:298-305.
7. Perez, P., Pulgar, R., Olea-Serrano, F., Villalobos, M., Rivas, A., Metzler, M., Pedraza, V., Olea, N. (1998) "The estrogenicity of bisphenol A-related diphenylalkanes with various substituents at the central carbon and the hydroxy groups." Environ. Health Perspect. 106:472-473.
8. Reid, E.E., Wilson, E. (1944) "The relation of estrogenic activity to structure in some 4,4'-dihydroxydiphenylmethanes." J. Am. Chem. Soc. 66:967-968.
9. Rivas, A., Lacroix, M., Olea-Serrano, F., Laios, I., Leclercq, G., Olea, N. (2002) "Estrogenic effect of a series of bisphenol analogues on gene and protein expression in MCF-7 breast cancer cells." J Steroid Biochem Mol Biol 82:45-53.
10. Vandenberg, L.N., Hauser, R., Marcus, M., Olea, N., Welshons, W.V. (2007) "Human exposure to bisphenol-A." Reproductive Toxicology 24(2):139-177.
11. Vinggaard, A.M., Körner, W., Lund, K.H., Bolz, U., Petersen, J.H. (2000) "Identification and quantification of estrogenic compounds in recycled and virgin paper for household use as determined by an in vitro yeast estrogen screen and chemical analysis." Chem. Res. Toxicol. 13:1214-1222.
ORGANOTINS ARE POTENT INDUCERS OF VERTEBRATE ADIPOGENESIS: THE CASE FOR OBESOGENS
BRUCE BLUMBERG, FELIX GRUN AND SEVERINE KIRCHNER
Department of Developmental Cell Biology and Pharmaceutical Sciences, University of California, Irvine, California, USA
ABSTRACT
Obesity and metabolic syndrome diseases have exploded into an epidemic of global proportions. Consumption of calorie-dense food and diminished physical activity are accepted as causal factors for obesity. But could environmental factors expose pre-existing genetic differences or exacerbate the root causes of diet and exercise? The "obesogen hypothesis" proposes that environmental chemicals may perturb lipid homeostasis, adipocyte development, or adipose tissue function. Exposure during sensitive developmental windows could result in permanent metabolic changes that increase fat storage. We identified organotins as a novel class of obesogens and showed that the nuclear receptors RXR and PPARγ are high-affinity molecular targets of tributyltin (TBT). RXR-PPARγ signaling is a key component in adipogenesis and the function of adipocytes. Thus, inappropriate activation of RXR-PPARγ has the potential to strike at the heart of adipose tissue homeostasis. Our results show that TBT promotes adipocyte differentiation, modulates adipogenic genes in vivo, and increases adiposity in mice after in utero exposure. These results are consistent with the environmental obesogen model and suggest that organotin exposure is a previously unappreciated risk factor for the development of obesity and related disorders. More generally, these results illustrate the principle that prenatal exposure to environmental obesogens can lead to permanent changes in the exposed individuals that predispose them to weight gain despite normal diet and exercise. Recent results revealing details of the mechanism underlying the effects of prenatal TBT exposure on adult weight gain will be discussed.
INTRODUCTION
Prenatal and early postnatal events such as maternal nutrition and drug and chemical exposure are received, remembered, and then manifested in health consequences later in life. One health consequence of current concern is the worldwide rise of obesity over the past 30 years. This obesity epidemic now consumes more than 8% of health care costs in Western countries, with a cost in the U.S. exceeding $100 billion annually. A single risk factor is rarely responsible for the development of most chronic diseases. The major factors driving obesity are most often ascribed to genetics1 and behavioral factors such as smoking,2 excessive consumption of alcohol3 and food,4 stress,5 and sedentary lifestyle.6 Infectious agents may also contribute to obesity and type-2 diabetes.7,8 Perhaps counter-intuitively, babies subjected to either nutritional deprivation or nutritional excess as fetuses appear to be at risk for later development of obesity and diabetes.9 The retrospective cohort studies of David Barker and colleagues established the principle that the incidence of certain adult metabolic abnormalities may be linked to in utero development.10 This concept is often referred to as the "developmental origins of health
and disease" (DOHaD) paradigm. Childhood and adult obesity are risks considered to be "programmed" by early life experiences. The biological mechanisms underlying the developmental origins of metabolic diseases are poorly understood. Extensive human epidemiologic studies and data from animal models indicate that maternal nutrition and other environmental stimuli, during critical periods of prenatal and postnatal mammalian development, can induce permanent changes in metabolism and susceptibility to chronic disease.11-14 Emerging evidence suggests that alterations in epigenetic marking of the genome may be a key mechanism by which in utero events can influence gene expression, and therefore, phenotype.15 Extensive covalent modifications to DNA and histone proteins occur from the earliest stages of mammalian development. These modifications ultimately determine lineage-specific patterns of gene expression and therefore represent a plausible mechanism through which environmental factors can influence development. It remains unclear how in vivo cues trigger epigenetic imprinting, and there is a large gap in our understanding of the underlying developmental pathways.
ADIPOGENESIS AND OBESITY
Members of the nuclear hormone receptor superfamily play integral roles in regulating cellular metabolism. In particular, the peroxisome proliferator-activated receptors (PPARs) regulate critical aspects of adaptive thermogenesis and glucose and lipid homeostasis. The RXR-PPARγ heterodimer plays a key role in adipocyte differentiation and energy storage, and it is central to the control of whole body metabolism.16 PPARγ activation increases the expression of genes that promote fatty acid storage and represses genes that induce lipolysis in adipocytes in white adipose tissue.17 Subsequently, PPARγ activation modulates gene expression leading to decreases in circulating glucose and triglycerides, depleting their levels in muscle and liver. PPARγ ligands such as the thiazolidinedione rosiglitazone (ROSI), used to treat type 2 diabetes, modulate insulin sensitivity through these effects on the adipocyte, reversing insulin resistance in the whole body by sensitizing muscle and liver tissue to insulin.18 An undesirable consequence of this increase in whole body insulin sensitivity is that fat mass is increased through the promotion of triglyceride storage in adipocytes. The retinoid X receptors (RXRα, β, γ) act as common heterodimeric partners for many other nuclear receptors, making them central players in numerous hormonal signaling pathways. In permissive RXR complexes such as RXR-PPARγ, both partners can be activated by their respective ligands, either alone or simultaneously, contributing towards transcriptional activation (Figure 1). RXR ligands can activate the RXR-PPARγ heterodimer and act as insulin-sensitizing agonists in rodents,19 underscoring the potential effects of both PPARγ and RXR agonists on adipogenesis, obesity and diabetes. The onset of obesity involves extensive remodeling of adipose tissue at the cellular level and is dependent on the coordinated interplay between adipocyte hypertrophy (increased cell size) and hyperplasia (increased cell number). Until recently, it was believed that the size of the adipocyte progenitor pool is established during development, and that its initial size remains a dominant determining factor for adipogenesis.
However, recruitment of additional adipocyte progenitors, increased pre-adipocyte proliferation, and enhanced differentiation in adults may also be involved. During
embryonic development of adipose tissue, the fate of pluripotent stem cells is restricted (by largely unknown mechanisms) to multipotent mesenchymal stem cells, now called multipotent stromal cells (MSCs).20,21 MSCs are plastic-adherent fibroblasts found in several tissues, including the bone marrow (BM) and fat, which have the ability to differentiate into multiple specialized cell types. These include (but are not limited to) osteoblasts, chondrocytes, adipocytes, myocytes, and neurons.22 ROSI is a specific agonist of PPARγ that induces adipogenesis in many cell culture models, including MSCs.23 It is believed that ROSI induces MSCs to differentiate into adipocytes through the modulation of PPARγ activity and, therefore, that PPARγ controls the lineage allocation of bone marrow MSCs toward adipocytes and osteoblasts.24 PPARγ agonists could therefore profoundly influence the stem cell compartment. MSCs have emerged as a model to study adipogenesis because they exhibit gene expression profiles during differentiation comparable to those of other in vitro models, such as 3T3-L1 cells,25 while being more relevant to in vivo physiology.
OBESOGENS
There is increasing evidence to support an important role for environmental factors, such as exposure to xenobiotic chemicals, in the development of obesity. We identified "obesogens" as molecules that inappropriately stimulate adipogenesis and increase lipid storage.26 Known obesogens include organotins such as tributyltin (TBT);27-29 the environmental estrogens diethylstilbestrol (DES)30 and bisphenol A (BPA);31-34 phthalate plasticizers;35,36 and perfluoro-octanoates.37 The role of xenobiotic chemicals in adipogenesis and obesity has recently been reviewed in detail elsewhere.38-40 We will focus here on our results with organotins as illustrative of the general principle of obesogen action. Organotin compounds are widely used in agriculture and industry. Their widespread use has led to significant release of organotins into the environment. Human exposure occurs through dietary sources (seafood and shellfish), from organotin use as fungicides on food crops, in wood treatments, industrial water systems, and textiles, and via leaching of organotin-stabilized PVC from water pipes, food wrap and other plastics.27,41 TBT is perhaps best known as an endocrine disrupting chemical that decreases aromatase activity, thereby increasing testosterone levels and causing the development of exaggerated male genitalia (imposex) in female gastropod mollusks.42 TBT exposure induces masculinization in fish43,44 but does not appear to alter sex ratio in mammals.45 We and others found that organotins act as high affinity agonistic ligands for two nuclear receptors that are critical for adipogenesis: the vertebrate retinoid X receptors (RXRs) and peroxisome proliferator-activated receptor gamma (PPARγ).27,29 TBT fully activates both RXR and PPARγ at nM doses, comparable to the amounts received from environmental exposure and also in the same range as has been reported in human blood.46 Whereas RXR and PPARγ each have their own high-affinity ligands, TBT has the unique ability to bind to both RXR and PPARγ with similar affinity (Kd 12-20 nM)27 (Figure 1). It is not yet known whether TBT primarily acts through the RXR or the PPARγ half of the heterodimer, but there is some evidence to suggest that activation of RXR in the presence of a PPARγ antagonist is sufficient to activate the heterodimer and induce adipogenesis.
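For a rough sense of what nanomolar affinity implies, a simple one-site binding estimate can be used; this is only an illustrative back-of-the-envelope calculation, not the authors' analysis, and the 10 nM ligand concentration is an assumed round number within the reported range rather than a measured value:

```latex
% Fractional receptor occupancy, simple one-site binding
% (illustrative assumption: [L] = 10 nM, K_d = 15 nM)
\theta \;=\; \frac{[\mathrm{L}]}{[\mathrm{L}] + K_d}
\;\approx\; \frac{10\,\mathrm{nM}}{10\,\mathrm{nM} + 15\,\mathrm{nM}}
\;\approx\; 0.4
```

Under these assumptions, even exposures below the Kd would be expected to occupy an appreciable fraction of RXR and PPARγ sites, consistent with activation at environmentally relevant doses.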
TBT exposure can induce adipogenesis in cell culture models27-29 and increase adipose mass, in vivo, in two vertebrate model organisms, mouse and frog.27,40 TBT promotes adipogenesis in the murine 3T3-L1 pre-adipocyte model and perturbs key regulators of adipogenesis and lipogenic pathways in vivo.27 In utero TBT exposure leads to strikingly elevated lipid accumulation in adipose depots, liver, and testis of neonate mice and increased epididymal adipose mass in adults.27 Thus, prenatal exposure to TBT causes permanent changes that lead to fat accumulation in adults, despite access to normal diet and exercise. Although TBT is clearly an obesogen, the mechanisms underlying the effects of prenatal exposure on adult physiology remain largely unknown. It is also unknown whether TBT action through the RXR-PPARγ heterodimer is the sole means through which TBT influences adipogenesis. TBT has the potential to act through other cellular pathways relevant to adipogenesis, such as sex steroid action and glucocorticoid homeostasis.39,40
OBESOGENS AND STEM CELLS
Most mechanistic studies to date have employed the murine 3T3-L1 cell model, a population of fibroblasts already committed to the adipogenic pathway.27-29 Stem cells have become a very popular model to study differentiation processes, including adipogenesis, owing to their ability to differentiate into multiple cell types. Mesenchymal stem cells, now termed multipotent stromal cells (MSCs), are a particularly useful model to study changes in the programming of adipogenesis since they are relevant to the fate of adipose tissue, in vivo. Adipocytes originate from pluripotent stem cells of mesodermal origin that give rise to more restricted multipotent stem cells, which become adipocyte progenitors during embryonic and early neonatal development.21 MSCs are maintained in various depots in the body (including fat and bone marrow) and can give rise to fat, bone, muscle, and cartilage (among other tissues) in the embryo and in the adult. We will use the terms BMSC (bone marrow derived stromal cell) and ADSC (white adipose derived stromal cell) to describe the stem cells found in specific tissues, and MSC to describe both BMSCs and ADSCs. We successfully implemented the use of MSC models to study how prenatal exposure to TBT, or to the pharmaceutical obesogen rosiglitazone (ROSI), leads to weight gain.47 Briefly, we found that exposure to environmentally relevant (nM) doses of TBT, in vitro, inhibited proliferation and favored adipogenic differentiation of MSCs. Exposure to the PPARγ activator ROSI showed similar effects, suggesting that TBT-mediated increases in adipogenesis result from activation of PPARγ. TBT exposure overrode osteogenic induction, in vitro, and induced MSCs to become adipocytes. This suggests that the adipogenic conversion was at the expense of the osteogenic pathway. MSCs derived from TBT-exposed mice showed enhanced lipid accumulation and a gene expression profile consistent with increased adipogenesis after adipogenic induction. The effect was exacerbated by additional TBT exposure during induction, in vitro. Undifferentiated MSCs from mice prenatally exposed to TBT (or ROSI) exhibited increased expression of the early adipogenic differentiation marker fatty-acid binding protein 4 (aP2), suggesting that these cells were already predisposed to become adipocytes. The promoter/enhancer region of aP2 was under-methylated in MSCs derived from mice exposed prenatally to TBT (Figure 2). Taken together with the other results,
this suggests that prenatal TBT exposure was imprinted into the MSC compartment, committing a significant number of MSCs to the adipogenic lineage. This effect is likely to increase adipose mass over time.
CONCLUDING REMARKS
There is an urgent need to understand the mechanisms underlying the predisposition to obesity and related disorders. While evidence implicating environmental influences continues to mount, the study of environmental factors in obesity is only beginning and the mechanisms remain largely unknown. Our published work showed that TBT is involved in critical steps of adipogenesis in vitro and in vivo. TBT is thus an environmental obesogen that has the potential to contribute significantly to the obesity epidemic. While the degree to which organotins and other chemicals that target PPARγ affect obesity is currently unknown, it is very well established that drugs which activate PPARγ (e.g., thiazolidinediones such as ROSI) cause obesity and weight gain at any age. Therefore, we would argue that it is logical and expected that other high-affinity PPARγ activators would behave similarly. Exposure to organotins and other known or suspected obesogens is ubiquitous;38-40 therefore, we believe it is of particular importance to understand how prenatal exposure to TBT and other obesogens leads to permanent weight gain. Our recent data show that prenatal TBT exposure predisposes multipotent stromal cells to become adipocytes by epigenetic imprinting into the memory of the MSC compartment. This provides a plausible and provocative mechanism to explain how a single prenatal exposure to obesogenic chemicals can permanently alter the phenotype of the exposed individual by increasing the ultimate number of adipocytes in fat depots. It is currently unknown whether hyperplastic growth triggered by obesogen exposure is due solely to differentiation of resident MSCs within the adipose tissues or whether other MSCs (e.g., from bone marrow) can be recruited to adipose depots, in vivo. This is an important area for future studies. Unraveling the complex modulation of developmental pathways controlling the prenatal programming of MSC fate will make important contributions to understanding the development of obesity, how obesogens affect this process, and which stem cells are involved.
Fig. 1: Ligand activation of RXR and PPARγ. RXR and PPARγ form heterodimers that bind DNA and regulate the transcription of target genes. RXR and PPARγ can each be activated by specific, high-affinity ligands such as 9-cis RA (RXR) and ROSI (PPARγ). TBT can bind to both RXR and PPARγ with high affinity, making it a novel, dual ligand for these receptors.
Fig. 2: Effect of prenatal TBT exposure on gene expression and promoter/enhancer methylation in undifferentiated mADSCs. A) Expression of macrophage and adipogenic markers (Pref-1, PPARγ2, aP2, CD68) was assayed by QPCR and expressed as fold change ± SEM relative to CMC controls. B) Maps of the aP2 and PPARγ2 promoter/enhancer regions; the aP2 CpG island spans -1500 to +1000 relative to the transcription initiation site. Arrows, AciI sites; E1, exon 1; vertical bars, CpG sites; horizontal bars, PCR fragments (Met1, Met2 and Met3, AciI); uncut, uncut control amplified by QPCR. C) Genomic DNA extracted from undifferentiated mADSCs of TBT- or CMC-exposed mice was digested with AciI and assayed by QPCR. Fragments are only observed if the site is methylated; lack of a fragment indicates under-methylation.
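The fold-change values in panel A are the usual relative-quantification output of QPCR. As a reminder of how such numbers are commonly derived, the sketch below implements the standard 2^(-ddCt) calculation; the Ct values, gene names, and the choice of this particular method are illustrative assumptions, not the authors' actual data or analysis pipeline.

```python
# Generic 2^(-ddCt) relative-quantification sketch for QPCR data.
# All Ct values below are made-up placeholders, not the study's data.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Expression of a target gene in treated vs control samples,
    normalized to a reference (housekeeping) gene, via 2^(-ddCt)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical example: aP2 in TBT-exposed vs vehicle (CMC) control MSCs,
# normalized to an assumed housekeeping gene such as beta-actin.
fc = fold_change(ct_target_treated=24.0, ct_ref_treated=18.0,
                 ct_target_control=26.5, ct_ref_control=18.2)
print(f"aP2 fold change (TBT vs CMC): {fc:.2f}")  # ddCt = -2.3, so ~4.9-fold
```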
REFERENCES
1. Herbert, A. (2008) "The fat tail of obesity as told by the genome." Curr Opin Clin Nutr Metab Care, 11(4): p. 366-70.
2. Power, C. and B.J. Jefferis (2002) "Fetal environment and subsequent obesity: a study of maternal smoking." Int J Epidemiol, 31(2): p. 413-9.
3. Mantena, S.K., et al. (2008) "Mitochondrial dysfunction and oxidative stress in the pathogenesis of alcohol- and obesity-induced fatty liver diseases." Free Radic Biol Med, 44(7): p. 1259-72.
4. Hill, J.O. and J.C. Peters (1998) "Environmental contributions to the obesity epidemic." Science, 280(5368): p. 1371-4.
5. Garruti, G., et al. (2008) "Neuroendocrine deregulation of food intake, adipose tissue and the gastrointestinal system in obesity and metabolic syndrome." J Gastrointestin Liver Dis, 17(2): p. 193-8.
6. Rippe, J.M. and S. Hess (1998) "The role of physical activity in the prevention and management of obesity." J Am Diet Assoc, 98(10 Suppl 2): p. S31-8.
7. Baillie-Hamilton, P.F. (2002) "Chemical toxins: a hypothesis to explain the global obesity epidemic." J Altern Complement Med, 8(2): p. 185-92.
8. Heindel, J.J. (2003) "Endocrine disruptors and the obesity epidemic." Toxicol Sci, 76(2): p. 247-9.
9. Curhan, G.C., et al. (1996) "Birth weight and adult hypertension, diabetes mellitus, and obesity in U.S. men." Circulation, 94(12): p. 3246-50.
10. Barker, D.J. and C.H. Fall (1993) "Fetal and infant origins of cardiovascular disease." Arch Dis Child, 68(6): p. 797-9.
11. Bertram, C.E. and M.A. Hanson (2001) "Animal models and programming of the metabolic syndrome." Br Med Bull, 60: p. 103-21.
12. Budge, H., et al. (2005) "Maternal nutritional programming of fetal adipose tissue development: long-term consequences for later obesity." Birth Defects Res C Embryo Today, 75(3): p. 193-9.
13. Taylor, P.D. and L. Poston (2007) "Developmental programming of obesity in mammals." Exp Physiol, 92(2): p. 287-98.
14. Gluckman, P.D., et al. (2008) "Fetal and neonatal pathways to obesity." Front Horm Res, 36: p. 61-72.
15. Mathers, J.C. (2007) "Early nutrition: impact on epigenetics." Forum Nutr, 60: p. 42-8.
16. Auwerx, J. (1999) "PPARgamma, the ultimate thrifty gene." Diabetologia, 42(9): p. 1033-49.
17. Ferre, P. (2004) "The biology of peroxisome proliferator-activated receptors: relationship with lipid metabolism and insulin sensitivity." Diabetes, 53 Suppl 1: p. S43-50.
18. Day, C. (1999) "Thiazolidinediones: a new class of antidiabetic drugs." Diabet Med, 16(3): p. 179-92.
19. Mukherjee, R., et al. (1997) "Sensitization of diabetic and obese mice to insulin by retinoid X receptor agonists." Nature, 386(6623): p. 407-10.
20. Rosen, E.D. and O.A. MacDougald (2006) "Adipocyte differentiation from the inside out." Nat Rev Mol Cell Biol, 7(12): p. 885-96.
21. Avram, M.M., A.S. Avram, and W.D. James (2007) "Subcutaneous fat in normal and diseased states 3. Adipogenesis: from stem cell to fat cell." J Am Acad Dermatol, 56(3): p. 472-92.
22. Schaffler, A. and C. Buchler (2007) "Concise review: adipose tissue-derived stromal cells--basic and clinical implications for novel cell-based therapies." Stem Cells, 25(4): p. 818-27.
23. Gimble, J.M., et al. (1996) "Peroxisome proliferator-activated receptor-gamma activation by thiazolidinediones induces adipogenesis in bone marrow stromal cells." Mol Pharmacol, 50(5): p. 1087-94.
24. Shockley, K.R., et al. (2007) "PPARgamma2 regulates a molecular signature of marrow mesenchymal stem cells." PPAR Res, 2007: p. 81219.
25. Janderova, L., et al. (2003) "Human mesenchymal stem cells as an in vitro model for human adipogenesis." Obes Res, 11(1): p. 65-74.
26. Grun, F. and B. Blumberg (2006) "Environmental obesogens: organotins and endocrine disruption via nuclear receptor signaling." Endocrinology, 147(6 Suppl): p. S50-5.
27. Grun, F., et al. (2006) "Endocrine-disrupting organotin compounds are potent inducers of adipogenesis in vertebrates." Mol Endocrinol, 20(9): p. 2141-55.
28. Inadera, H. and A. Shimomura (2005) "Environmental chemical tributyltin augments adipocyte differentiation." Toxicol Lett.
29. Kanayama, T., et al. (2005) "Organotin compounds promote adipocyte differentiation as agonists of the peroxisome proliferator-activated receptor gamma/retinoid X receptor pathway." Mol Pharmacol, 67(3): p. 766-74.
30. Newbold, R.R., et al. (2008) "Effects of endocrine disruptors on obesity." Int J Androl, 31(2): p. 201-8.
31. Rubin, B.S., et al. (2001) "Perinatal exposure to low doses of bisphenol A affects body weight, patterns of estrous cyclicity, and plasma LH levels." Environ Health Perspect, 109(7): p. 675-80.
32. Masuno, H., et al. (2002) "Bisphenol A in combination with insulin can accelerate the conversion of 3T3-L1 fibroblasts to adipocytes." J Lipid Res, 43(5): p. 676-84.
33. Miyawaki, J., et al. (2007) "Perinatal and postnatal exposure to bisphenol A increases adipose tissue mass and serum cholesterol level in mice." J Atheroscler Thromb, 14(5): p. 245-52.
34. Phrakonkham, P., et al. (2008) "Dietary xenoestrogens differentially impair 3T3-L1 preadipocyte differentiation and persistently affect leptin synthesis." J Steroid Biochem Mol Biol.
35. Feige, J.N., et al. (2007) "The endocrine disruptor monoethyl-hexyl-phthalate is a selective peroxisome proliferator-activated receptor gamma modulator that promotes adipogenesis." J Biol Chem, 282(26): p. 19152-66.
36. Stahlhut, R.W., et al. (2007) "Concentrations of urinary phthalate metabolites are associated with increased waist circumference and insulin resistance in adult U.S. males." Environ Health Perspect, 115(6): p. 876-882.
37. Betts, K. (2007) "PFOS and PFOA in humans: new study links prenatal exposure to lower birth weight." Environ Health Perspect, 115(11): p. A550.
38. Grun, F. and B. Blumberg (2007) "Perturbed nuclear receptor signaling by environmental obesogens as emerging factors in the obesity crisis." Rev Endocr Metab Disord, 8(2): p. 161-71.
39. Grun, F. and B. Blumberg (2009) "Endocrine disrupters as obesogens." Mol Cell Endocrinol, in press.
40. Grun, F. and B. Blumberg (2009) "The case for obesogens." Molec Endocrinol, in press.
41. Golub, M. and J. Doherty (2004) "Triphenyltin as a potential human endocrine disruptor." J Toxicol Environ Health B Crit Rev, 7(4): p. 281-95.
42. Oberdorster, E. and P. McClellan-Green (2002) "Mechanisms of imposex induction in the mud snail, Ilyanassa obsoleta: TBT as a neurotoxin and aromatase inhibitor." Mar Environ Res, 54(3-5): p. 715-8.
43. McAllister, B.G. and D.E. Kime (2003) "Early life exposure to environmental levels of the aromatase inhibitor tributyltin causes masculinisation and irreversible sperm damage in zebrafish (Danio rerio)." Aquat Toxicol, 65(3): p. 309-16.
44. Shimasaki, Y., et al. (2003) "Tributyltin causes masculinization in fish." Environ Toxicol Chem, 22(1): p. 141-4.
45. Ogata, R., et al. (2001) "Two-generation reproductive toxicity study of tributyltin chloride in female rats." J Toxicol Environ Health A, 63(2): p. 127-44.
46. Kannan, K., K. Senthilkumar, and J. Giesy (1999) "Occurrence of butyltin compounds in human blood." Environ Sci Tech, 33(10): p. 1776-9.
47. Kirchner, S., Kieu, T., and Blumberg, B. (2009) "Prenatal exposure to the environmental obesogen tributyltin predisposes multipotent stem cells to become adipocytes." Molec Endocrinol, in press.
BIO-BASED POLYMERS: A GREEN CHEMISTRY PERSPECTIVE
WIM THIELEMANS
School of Chemistry, Faculty of Science, University of Nottingham, Nottingham, UK
INTRODUCTION
Polymers, or plastics, are ubiquitous in our daily lives and contribute greatly to increased comfort, food safety and even lower energy consumption, through light-weight materials and improved insulation. Polymers are macromolecules which can be produced by connecting (forming chemical linkages between) small molecules called monomers. Most man-made polymers are relatively simple, as they are generally formed from a limited number of different monomers (generally fewer than four). They can either be thermoplastic, when they are formed of independent chains held together by physical interactions, or thermosetting, when they form a three-dimensional chemically connected network. Nature continuously produces a variety of polymers and monomers through its biological activity. The most common, wood, is produced at quantities of over 100 billion metric tonnes a year and has found wide use as a construction material. Wood actually consists of cellulose, hemicellulose and lignin, three polymers, in addition to waxes, pectins and proteins. Mankind has used natural materials to construct shelter and to produce weapons and tools, as well as clothing. As technology developed, so did the degree of modification of the naturally occurring materials to improve performance or to better fit the needs they were to address. Up until the first half of the 20th century, a large spectrum of materials had been studied, resulting in major breakthroughs in the use of bio-based polymers. Examples can be found in the dyeing of natural fibres, leather tanning, vulcanisation of rubber, and derivatisation of cellulose for a variety of uses (e.g., smokeless gun powder, thermoplastic polymers). The advent of the coal and petroleum industry at the beginning of the 20th century was founded on considerable advances in coal-based chemistry and the reliable and relatively inexpensive cracking of crude oil, resulting in a vast supply of monomeric substances that could easily be turned into polymers on an industrial scale. As new lightweight and cheap polymers were produced in vast quantities, new uses were constantly being discovered to fit the polymer properties. Blending of polymers, and reinforcing them with glass fibres, carbon fibres or natural fibres, opened up different ranges of properties, and with them new fields of application. Several reasons can be found for the current reversal of this trend, away from fossil-based materials and back to bio-based ones. Amongst the commonly quoted reasons are (i) the limited availability of fossil materials, (ii) insecurity in terms of the cost of these materials, (iii) the geographic location of fossil materials, (iv) the increasing energy demand for extraction of remaining fossil deposits, (v) the understanding that biorefineries can be economically sustainable, as proven by pilot plants, and (vi) a growing awareness of, and concern about, environmental issues among both governments and the wider public. Nature produces a multitude of materials with the potential to be harvested either as a polymer, to be used as a monomer, or to be converted chemically or biochemically to
polymers or monomers. In general, there are three biobased polymer production strategies.1 As a first strategy, biomass can be used directly with little or no chemical or biochemical transformation. Examples of direct biomass use are wood, natural rubber, cotton, starch, sugar, etc. Secondly, natural resources are converted to intermediates followed by conversion to polymers. Conversions can be carried out using industrial biotechnology (also called white biotechnology, with biomass conversion using bacteria or enzymes) or using traditional chemical methods. The third strategy involves the production of biopolymers or monomers in transgenic plants and is referred to as green biotechnology. The preferred route to generate biobased polymers will depend on the efficiency of the various production routes and the demands of the final applications. The twelve principles of Green Chemistry were first written down by Paul Anastas and John Warner in 1998.2 They describe general principles to be followed to reduce the environmental impact of chemical transformations and can thus be used as general principles for chemists to implement green chemistry. The production of biobased polymers can be directly linked to the use of renewable feedstocks. In addition, the conversion of natural materials to useful monomers and their polymerization, as well as the derivatisation of natural polymers, require attention in terms of waste generation, safety, pollution prevention and atom efficiency. While degradation after use is not an inherent characteristic of biobased polymers, the potential to degrade (either spontaneously or only in industrial processes) might be incorporated during the transformation/derivatisation process. This is, however, also possible for petroleum-derived materials. While green and white biotechnology could be considered as inherently "green" (note the different uses of the term "green") or more sustainable, the principles of Green Chemistry can equally be applied here. Polymer production in transgenic plants (green biotechnology) will obviously generate little in terms of directly toxic products or pollution. However, genetic engineering of plants has received a rather cool reception from the wider public, with the most negative response in Europe. Given the concerns regarding genetically engineered plants and their potential health effects (perceived or real), seeds carried outside of the containment area could be described as pollution. The provision of barrier zones could provide an acceptable solution but prevents the use of those areas of fertile land for crop growth. Taking into account this reduced efficiency of land use, some transgenic plants may become less efficient than naturally occurring species. Genetically engineered prevention of seed production through pollination inhibition could provide a sufficient safeguard against spreading but will complicate production. In addition, the extraction of polymers from the native organism, as well as the purification of the extracted products, may involve several processing steps, all contributing to the generation of waste and the potential to increase pollution and reduce process efficiency. It is therefore paramount to increase polymer or monomer production to the maximum level attainable without compromising the survivability of the plant. Combined production, where different sections of the plant are
1. J.B. van Beilen and Y. Poirier, The Plant J. 54, 684-701 (2008)
2. P. Anastas and J. Warner, Green Chemistry: Theory and Practice, Oxford University Press: New York, 1998
used to produce different materials, is a very promising concept that would resolve many issues, but it is still far from reality. The use of renewable feedstocks has also led to increased concern about the efficient use of land and the limited availability of natural feedstocks. In particular, the increased focus on land use for direct (e.g., photovoltaic systems) and indirect (biomass plantations geared towards crops for biofuel generation) energy applications has led to growing concerns about its effect on the global food balance and the limited availability of land.3,4 It is, therefore, important to optimise land use and only produce the most efficient crops to meet specific needs to prevent a backlash.

BIO-BASED POLYMERS

There is a variety of polymers derived from renewable sources. A brief overview of biobased polymers will be presented here, with references to reviews in the recent literature for a more detailed description.

Polysaccharides

Polysaccharides are the most abundant organic materials on earth and encompass cellulose, starch and chitin, amongst others such as hemicellulose.5 While cellulose and chitin are structural polymers (cellulose is used as the structural agent in plants, while chitin is found in the exoskeleton of insects and arthropods such as crustaceans), starch is used in nature as an energy storage material. The common base of these naturally occurring polymers is D-glucose, which Nature adapted to various extents to accommodate different environments and obtain different properties. These variations result in varying crystallinity, solubility and ease of chemical modification, so that they can span a wide field of potential uses. Modification of polysaccharides is often performed to render the polymer more hydrophobic and to improve its solubility in organic solvents. The large number of hydroxyl groups offers a direct and straightforward modification opportunity. Cellulose is the most abundantly available organic material on earth. Natural fibres (constituted of cellulose, lignin, hemicellulose and a variety of pectins and waxes) are commonly used as a reinforcement agent for composite materials. The hydrophilicity and water uptake of cellulose mean that surface modifications are commonly used to improve dimensional stability and the interfacial compatibility between polymers and the natural fibres. Because cellulose is remarkably insoluble due to the significant hydrogen bonding existing between polymer chains, pure cellulose is generally used in a modified form (commonly etherified or esterified). Cellulose modification requires hydrogen-bond-disrupting solvents such as dimethylacetamide/LiCl, various ionic liquids or a TBAF/DMSO
3. P. Hazell and R.K. Pachauri (Eds.), Bioenergy and Agriculture: Promises and Challenges, International Food Policy Research Institute, Washington, DC, 2006
4. S. Nonhebel, Renew. Sustain. Energy Rev. 9, 191-201 (2005)
5. A. Gandini, Macromolecules 41, 9491-9504 (2008)
mixture.6 Virtually any imaginable chemical modification of cellulose has been reported, but only limited commercial exploitation exists.5,7 Oxidised cellulose has found wide use in medical applications such as absorbable hemostatic scaffolding materials and post-surgical adhesion-prevention layers, but also as a carrier material for agricultural, cosmetic and pharmaceutical applications. In addition to its use as a foodstuff, starch has found uses as a biodegradable polymer in the form of ThermoPlastic Starch (TPS). The starch grains and their semicrystalline structure are disrupted by a combination of thermal treatment and the addition of one or more plasticisers (generally water and/or multifunctional alcohols). The major drawbacks of these polymers are their hydrophilicity, reducing their dimensional stability and mechanical properties, and an aging effect making the polymer more brittle with time. Blending with hydrophobic polymers can improve some of these properties. However, this is not always required. For example, Novamont SpA, a leading Italian manufacturer of bioplastic resins, produces a compostable polymer based on cornstarch, Mater-Bi®, which is used for shopping bags, netting for fruit and produce, and packaging.8,9 Various classes of Mater-Bi exist with varying properties and biodegradation profiles. The production of starch foams for shipping fill material and as a precursor for mesoporous materials is also under investigation.10,11 Just like cellulose, virtually any modification of starch has been reported, even though commercial applications are limited. The extreme insolubility of chitin means that it is not widely used. However, deacetylation of chitin results in chitosan, a very promising material since it is readily soluble in mildly acidic media and various common organic solvents. The use of chitosan (and its derivatives) for tissue engineering applications has recently been reviewed.12 The use of chitosan in tissue engineering draws on all of the interesting characteristics it possesses: biocompatibility and biodegradability, as well as antibacterial and wound-healing activity. Hemicellulose (obtained from wood) and Ulvan (a polysaccharide derived from seaweed) are other polysaccharides available in larger quantities. Hemicellulose has found uses in films and coatings, as a polyelectrolyte, and as a rheology modifier. However, materials with hemicelluloses as the major constituent are not expected to be developed, as significant amounts of research have been devoted to them without considerable success.5 While Ulvan solutions are generally of low viscosity, they can be made to self-assemble and gel. Industrial use of Ulvan is currently virtually non-existent, but its unique chemical and physical properties make it very attractive for a variety of applications
6. T. Heinze and K. Petzold, In Monomers, Polymers and Composites from Renewable Resources, M.N. Belgacem and A. Gandini, (Eds.), Elsevier: Amsterdam, 343-368, 2008
7. T. Heinze and T. Liebert, Prog. Polym. Sci. 26, 1689 (2001)
8. R. Stewart, Plastics Engineering, 63, 25-31 (2007)
9. B.P. Mooney, Biochem. J. 418, 219-232 (2009)
10. J.L. Willet and R.L. Shogren, Polymer, 43, 5935 (2002)
11. V. Budarin, J.H. Clark, J.J.E. Hardy, R. Luque, K. Milkowski, S.J. Tavener and A.J. Wilson, Angew. Chem. Int. Ed. 45, 3782-3786 (2006)
12. I.-Y. Kim, S.-J. Seo, H.-S. Moon, M.-K. Yoo, I.-Y. Park, B.-C. Kim and C.-S. Cho, Biotechnol. Adv. 26, 1-21 (2006)
in the medical, pharmaceutical and agricultural domains.13 In the field of materials, Ulvan has been shown to intercalate into clay, opening an application in the field of clay nanocomposites.14 Its gelation properties may be very useful for use as a structurant, while its affinity for metal ions could be used for metal sequestration or ion-exchange processes. Polysaccharides have also found their way into nanoscience. The amorphous sections of semicrystalline polysaccharides (cellulose, starch and chitin) can be hydrolysed to release the nanosized crystalline domains. The resulting monocrystalline particles have nanometric sizes, with starch nanoparticles having a platelet shape and cellulose and chitin having a rod-like structure. Their use as nanoreinforcement, in particular, has already received a tremendous amount of attention.15

Lignin

Lignin is a complex polyphenolic polymer found in all vascular plants and is the second most abundant polymer in Nature.16 It is obtained industrially as a by-product of the production of cellulose pulp at an annual rate of about 70 million tons. However, only an estimated 2 percent of this is used as a chemical product, either directly or after modification. The remaining 98% is burned to recover its energy.17 Depending on the extraction process, industrial lignins can have a low molecular weight and be insoluble in water (soda lignin); have a small to large molecular weight (1 kDa to 150 kDa), a 4-8% sulphur content and be dispersible in water (lignosulphonates); or be recovered as a highly pure material with medium molecular weight (2.5 kDa to 39 kDa) that is water insoluble (kraft lignin, the predominant industrial lignin). Lignosulphonates are used widely as dispersants and binders.18 Soda lignins have found limited use as a replacement for phenol in phenol-formaldehyde resins, as a component in animal feed and as a dispersant.17 Industrial uses of kraft lignin are rather limited to dispersants and emulsifiers, and to some extent as a source of low molecular weight aromatic compounds, but a significant amount of academic work is being carried out to unlock the potential of this promising compound in materials.19 Blending of lignin with thermoplastic and thermosetting polymers, in particular, is commonly regarded
13. M. Lahaye and A. Robic, Biomacromolecules 8, 1765-1774 (2007)
14. H. Demais, J. Brendle, H. Le Deit, A.L. Laza, L. Lurton and D. Brault, Eur. Patent WO 2006020075 (2006)
15. S.J. Eichhorn et al., J. Mater. Sci., review submitted.
16. G. Gellerstedt and G. Henriksson, In Monomers, Polymers and Composites from Renewable Resources, M.N. Belgacem and A. Gandini, (Eds.), Elsevier: Amsterdam, 201-224, 2008
17. J. Lora, In Monomers, Polymers and Composites from Renewable Resources, M.N. Belgacem and A. Gandini, (Eds.), Elsevier: Amsterdam, 225-241, 2008
18. J.D. Gargulak and S.E. Lebo, Chapter 15 in Lignin: Historical, Biological and Materials Perspectives, W.G. Glasser, R.A. Northey and T.P. Schultz, (Eds.) ACS Symposium Series, American Chemical Society: Washington, DC, 1999
19. See for example: Lignin: Historical, Biological and Materials Perspectives, W.G. Glasser, R.A. Northey and T.P. Schultz, (Eds.) ACS Symposium Series, American Chemical Society: Washington, DC, 1999
as a promising pathway.20,21 In addition, lignin appears to protect polymers from oxidative degradation.22 The large number of hydroxyl groups present on kraft lignin makes it an obvious candidate for targeted chemical modification23 or for use as a copolymer in, for example, polyurethane resins.20

Plant oils

Plant oils, or triglycerides, consist of three fatty acid arms connected to a glycerol centre by an ester linkage. The fatty acid arms vary in length and chemical functionality depending on the plant from which the oil is extracted.24 The most common fatty acids are 14-22 carbons long with 0-3 double bonds per fatty acid. Some naturally epoxidised triglycerides also exist. Triglyceride oils have been used extensively to produce coatings, plasticizers, lubricants, agrochemicals and inks.25,26 They have also been used as toughening agents and interpenetrating networks that improve the properties of thermosetting polymers.27 The double bond functionality can also be chemically modified to an epoxy and subsequently to an acrylate, two secondary alcohols, or maleates. Epoxidised plant oils can be used as a comonomer in epoxy resins either directly or after further modification.24,28 Hydroxylated plant oils can be used in polyurethanes, while acrylated and maleated triglycerides can be used as additives or co-monomers in unsaturated polyesters or vinyl esters.24,29 The conversion of plant oil triglycerides into difunctional monomers to produce linear polymers has also been described. Triglycerides can also be converted into building blocks for polyamides (production of Nylon-11), polyesters, polyacrylates and polymethacrylates.28 The production of transgenic plants to produce specific triglycerides has proven to be possible and could be extended towards ever more useful monomers.30 The production of highly epoxidised oils by plants would provide an interesting starting material that
20. A. Gandini and M.N. Belgacem, In Monomers, Polymers and Composites from Renewable Resources, M.N. Belgacem and A. Gandini, (Eds.), Elsevier: Amsterdam, 243-271, 2008
21. W. Thielemans, E. Can, S.S. Morye and R.P. Wool, J. Appl. Polym. Sci. 83, 323-331 (2002)
22. B. Kosikova, J. Labaj, D. Slamenova, E. Slavikova and A. Gregorova, In Biomass and Bioenergy: New Research, M.D. Brenes (Ed.), Nova Science Publishers, Hauppauge, NY, 169-200 (2006)
23. W. Thielemans and R.P. Wool, Biomacromolecules, 6, 1895-1905 (2005)
24. R.P. Wool, S.N. Khot, J.J. Lascala, S.P. Bunker, J. Lu, W. Thielemans, E. Can, S.S. Morye and G.J. Williams, ACS Symposium Series 823, American Chemical Society: Washington, DC, 177-204 (2002)
25. A. Cunningham and A. Yapp, U.S. Patent 3,855,163 (1974)
26. C.G. Force and F.S. Starr, U.S. Patent 4,740,367 (1988)
27. L.W. Barrett, L.H. Sperling and C.J. Murphy, J. Amer. Oil Chem. Soc. 70, 523 (1993)
28. M.A.R. Meier, J.O. Metzger and U.S. Schubert, Chem. Soc. Rev. 36, 1788-1802 (2007)
29. Y. Lu and R.C. Larock, ChemSusChem 2, 136-147 (2009)
30. C.K. Williams and M.A. Hillmyer, Polym. Rev. 48, 1-10 (2008)
could be used directly or with minimal chemical transformation. Plant oil extraction is already well developed and optimised industrially, making this a very promising route.

Polyhydroxyalkanoates

Polyhydroxyalkanoates (PHAs) are polyesters of 3-, 4-, 5- and 6-hydroxyacids which are naturally synthesized by a large variety of bacteria.31 PHAs are impermeable to water and air and are therefore suitable for use in bottles, films and fibres.1 The use of PHA as a low-cost commodity polymer is inhibited by its high cost (5-10 times the cost of polypropylene).32 Variations in monomer composition have a significant effect on the mechanical properties, with poly-3-hydroxybutyrate being relatively hard and brittle, while poly(3-hydroxybutyrate-co-3-hydroxyvalerate) improves on the brittleness. Inclusion of longer monomers results in materials with properties similar to polypropylene.1 To improve the economics of PHA production and enable easy large-scale production, attention has been diverted towards its production in plants. A vast amount of work has been performed, with production reported of up to 40% polymer by dry shoot weight (in transgenic A. thaliana, although the resulting dwarf plant could no longer produce seeds). Oilseed rape plants producing up to 8% of dry weight remained viable and did produce seeds.1 There is still a large potential to improve production efficiency while keeping the plants viable. Commercial production of PHAs is carried out by Biomer, a German company, and Metabolix, a U.S.-based company. The industrial production route still follows the bacterial production pathway, but Metabolix is investigating the production of PHAs in switchgrass, a perennial grass that thrives on land of marginal use for other crops.31

Polylactic Acid

Polylactic Acid (PLA) is one of the biggest success stories of bio-based polymers. It was first developed by Dow Chemicals in the 1950s, but its high cost limited its use to specialised medical devices such as sutures and soft tissue implants.33 Lactic acid, the monomer used to produce PLA, is obtained through fermentation of glucose, with the conversion reaching 90%.34 PLA is generally formed by ring-opening polymerisation of lactide, a cyclic dimer of lactic acid, resulting in high molecular weight polymers. Recent advances in fermentation have significantly reduced the production cost of PLA, currently around the same price as poly(ethylene terephthalate) (PET), commonly used in drink bottles and the polymer which PLA most closely resembles in terms of properties and potential applications. Unlike PET, PLA is also compostable. NatureWorks LLC, an independently managed business unit of Cargill, Inc., produces PLA from dextrose maize sugar on an industrial scale and has an annual production capacity of ca. 136,000 metric tons.5 Greenhouse gas emissions are said to be reduced by 80-90%, with ca. 65% less use of fossil fuels than traditional plastics. PLA has found wide uses in films, bottles, labels, disposable cups and serviceware, etc. Toyota is also developing PLA production from
31. S. Philip, T. Keshavarz and I. Roy, J. Chem. Technol. Biotechnol. 82, 233-247 (2007)
32. E. Rezzonico, L. Moire and Y. Poirier, Phytochem. Rev. 1, 87-92 (2002)
33. B.P. Mooney, Biochem. J. 418, 219-232 (2009)
34. R. Auras, B. Harte and S. Selke, Macromol. Biosci. 4, 835-864 (2004)
starch-rich sweet potatoes, which contain 40-50% more starch than corn. The starch is converted to lactic acid, which is then polymerised to PLA. The year 2003 saw the first use of Eco-Plastic™, the Toyota PLA, in commercial vehicles.8 Toyota targets the production of 20 million tons of PLA by 2020. A recently reported advance over the currently employed production pathway (fermentation followed by a chemical polymerisation step) used a modified enzyme to polymerise lactic acid directly, allowing for one-step production of PLA. This will further reduce the production cost of PLA and further improve its environmentally friendly character.35

Natural rubber

Rubber is a polymer of isoprene and is currently the most widely used polymer derived from a natural source.9 The properties of natural rubber are reviewed in detail elsewhere.36,37 The excellent properties exhibited by natural rubber (malleability, elasticity, heat dissipation, resistance to impact and abrasion) have not been achieved by synthetic rubber, due to the presence of naturally occurring, property-enhancing secondary components such as proteins, lipids, carbohydrates and minerals, which are ill-characterised. All commercial natural rubber is harvested from the Para rubber tree (Hevea brasiliensis), one of the most genetically restricted crops, grown on commercial plantations largely located in South-East Asia. The reliance on a single species to produce vast amounts of a natural resource poses a significant danger to the supply from potentially fatal diseases. Therefore, other crops such as the Russian dandelion and the Mexican shrub Guayule are under investigation for commercial rubber production.38 The Russian dandelion, which accumulates rubber in laticifers in the roots, is an especially promising candidate since it produces rubber with a molecular weight significantly higher than Guayule and the Hevea tree (2 MDa), and it has already shown its potential during WWII, when its rubber was used as motor-tyre rubber by various countries, including the USA and the UK. The Russian dandelion also accumulates 25-40% of root dry weight as inulin, a fructose-based sugar, which can be used for bioethanol production. Guayule rubber is considered hypoallergenic compared to Hevea-extracted rubber due to a lower concentration of proteins and the absence of reaction between the proteins present and Hevea immunoglobulins.39 Extraction of Guayule rubber is more difficult as the plant cannot be tapped, resulting in higher production costs, coupled with 1/3 to 2/3 lower rubber production per acre compared to Hevea. Guayule rubber is currently a more specialized
35. S. Taguchi, M. Yamada, K.I. Matsumoto, K. Tajima, Y. Satoh, M. Munekata, K. Ohno, K. Kohda, T. Shimamura, H. Kambe and S. Obata, Proc. Natl. Acad. Sci. USA 105, 17323-17327 (2008)
36. M.B. Rodgers, D.S. Tracey and W.H. Waddell, Rubber World 232(5), 32-38 (2005)
37. M.B. Rodgers, D.S. Tracey and W.H. Waddell, Rubber World 232(6), 41-48 (2005)
38. J.B. van Beilen and Y. Poirier, Crit. Rev. Biotechnol. 27, 217-231 (2007)
39. D.J. Siler, K. Cornish and R.G. Hamilton, J. Allergy Clin. Immunol. 98, 895-902 (1996)
rubber product, but the Yulex Corporation (Arizona, USA) plans to couple Guayule rubber production with biofuel production, improving the process economics.9

Other promising materials

Other promising materials are (i) suberin, an aromatic-aliphatic cross-linked polyester commercially exploited from Quercus suber cork (cork oak) and the outer bark of Betula pendula (birch), to be used as a source of monomers in the synthesis of polyurethanes and polyesters;40 (ii) DNA, which can be commercially extracted from natural organisms by the ton and may find applications as a biomaterial, in electronic and optical materials, as a catalyst for enantioselective reactions and as a material for environmental clean-up;41 and (iii) protein-based polymers, which can be produced by animals, as well as in existing and transgenic plants. Well-known examples are collagen, silk, keratin and wheat gluten.1 The properties of the obtained polymers depend strongly on the amino-acid sequence and length, but also on the correct assembly of the polymers, something which has yet to be reproduced for natural silk. Protein-based materials (especially synthetically made sequences) can be expected to be viable only in high-value applications.

CONCLUSIONS

A multitude of bio-based materials exist. Some of these have been studied extensively, and commercial exploitation has occurred in some cases with various degrees of success. It is, however, necessary to keep in mind that the main driving factor for these materials is our drive towards global sustainability and not the development of bio-based materials at all cost. Therefore, economics can play an important role in selecting the best materials to be used and in applying them in the most efficient way. In addition, economics may help in reducing waste and pollution, thereby increasing the benefits of bio-based materials. However, the optimisation of the use of natural resources requires a vast amount of research, both applied and fundamental, and we can only hope for a continued interest of the wider public in the environment and, with it, continued funding for research towards materials from renewable resources. This is because, unlike at the beginning of the 20th century, we already have vast amounts of materials available at low cost that perform adequately, although for some of these materials we are discovering significant negative health effects. These materials are also derived from depletable resources, so their supply will dry up one day, on a timeframe that depends on our use and on their availability.
40. A.J.D. Silvestre, C.P. Neto and A. Gandini, In Monomers, Polymers and Composites from Renewable Resources, M.N. Belgacem and A. Gandini, (Eds.), Elsevier: Amsterdam, 305-320 (2008)
41. X.D. Liu, H.Y. Diao and N. Nishi, Chem. Soc. Rev. 37, 2745-2757 (2008)
REVOLUTIONARY SCIENCES: GREEN CHEMISTRY AND ENVIRONMENTAL HEALTH

KAREN PEABODY O'BRIEN, PH.D.
National Institute of Environmental Health Sciences
Charlottesville, Virginia USA

We live in an era of accelerated scientific understanding and rapid-fire information flow about the environmental and health effects of commonly used chemicals and products. As this understanding and awareness spreads through consumer culture and international regulatory systems, we increasingly rely on revolutionary interdisciplinary solutions to deliver us into a new collective era of science, industrial production, and consumer culture. Two scientific fields are essential to help effect this broad-scale systemic shift away from our previous dependence on toxic chemicals. We need the environmental health sciences to help us better understand the mechanisms by which chemicals interact with biological systems (Colborn et al. 1997). We need green chemistry to help us design products with this knowledge built into the very molecules (Anastas and Warner, 1998). Moreover, we need these two cutting-edge fields to work together in strategic and groundbreaking ways. In the past, chemists generally made design choices without reference to health and environmental impacts. Those concerns were left to those involved with environmental clean-up and remediation. The result is that we are discovering belatedly that unsafe chemicals have been incorporated into a wide array of products. History has taught us that efforts to make unsustainable products, processes, and systems a little less bad are both costly and ineffective. While we can try to design "closed-loop systems" that contain our hazards, loops are rarely completely closed (McDonough and Braungart, 2002). As we know from the environmental health fields of epigenetics and endocrine disruption, minute amounts of a biologically active agent can have dramatic and irreversible effects. Rather than try to manage hazard, Green Chemistry designs the things we manufacture and use to be "benign by design" (Warner). More than a catchy phrase, this is a revolutionary concept.

CHEMICALS IN SOCIETY AND ADVANCES IN ENVIRONMENTAL HEALTH SCIENCE

During the Twentieth Century, commerce experienced dramatic and unprecedented growth in the quantity and complexity of the materials (chemicals) used in the economy. Modern chemicals have enabled profound improvements in the quality of human life. Yet there have also been unintended consequences for human, wildlife and ecosystem health, because potential toxicities and degradation pathways were not explored, and indeed were often unknown, before the materials became widespread. Some of the unintended consequences of these chemicals are now well understood and characterized. These include occupational exposure to toxic substances, accidents at industrial facilities and the special vulnerabilities to terrorism of chemical plants and of chemicals during transportation.
As environmental health science has progressed, however, and especially as it has incorporated scientific discoveries and tools from molecular genetics and from increasingly sensitive assays capable of measuring contamination in people at unprecedentedly low levels, new issues and challenges have become visible. Foremost among these are four issues:

• Some chemicals, including some previously considered benign, are capable, at extremely low levels, of interacting with biological systems and altering how the genes of living organisms, including humans, behave. These changes are implicated in the causation of many human diseases, including cancers, infertility, learning and behavioral disorders, heart disease and type 2 diabetes.
• Chemicals that behave like hormones, called endocrine disruptors, can violate basic assumptions that underpin regulatory toxicology, with low doses causing effects that are different from, and unpredictable by, the classic high-dose experiments that are the basis for setting current "safe" exposure levels.
• Direct measurements of contamination in people, made possible by significant advances in analytical chemistry, have established the fact that people have within them hundreds, if not thousands, of contaminants simultaneously. While scientific understanding of the consequences of exposure to mixtures is in its infancy, studies consistently show that exposure to multiple chemicals at the same time can cause effects even though each of the chemicals is at a level so low that, by itself, it would not be expected to cause harm.
• Early life exposures, especially in the womb, may contribute to diseases much later in life, including diseases of middle age and aging. These effects will not have been apparent to the methods used for decades to establish chemical safety.
These emerging discoveries have come as surprises to traditional toxicology, because they raise questions about many chemicals in common use that conventional approaches had deemed safe. The clear message is that current health standards developed by agencies like the United States Food and Drug Administration and the United States Environmental Protection Agency have missed problematic compounds, and that it will be essential to revise the processes used to establish these standards so that they incorporate current science. Given the range of diseases for which current science has reported plausible links to environmental exposures, diseases that include some of the most costly and burdensome in America today, including prostate and breast cancers, contributors to infertility like endometriosis, uterine fibroids and polycystic ovaries, type 2 diabetes and heart disease, modernizing safety standards holds the promise of a healthier world and the potential for reduced health care costs.

GREEN CHEMISTRY: 21ST CENTURY MATERIALS SCIENCE

As current science creates incentives for a new generation of health standards, enormous scientific and economic incentives will follow for new chemicals and new chemical processes. Green chemistry offers a practical way forward. By providing the scientific basis for a new wave of inherently safe materials, green chemistry can stimulate scientific
and economic innovation, avoid the unintended health consequences of inadvertently hazardous materials, and contribute to sustainable economic growth and job creation. This is green chemistry's promise; to achieve it fully will require sustained effort and commitment of resources. While the principles guiding green chemistry appear to be common sense, they bear little resemblance to the way we do chemistry today. Currently, feedstocks are generally non-renewable; the products we make and their building blocks often have significant toxicity; many of our substances persist, bioaccumulate and biomagnify. We have historically tried to control exposure to hazardous substances in ways that are costly and often fail. Global demand is rising for sustainable materials, materials that support health instead of undermining it. Other countries, e.g., the European Union (EU), China and India, have already begun investing significantly in green chemistry innovation to supply this growing market. Notably, the REACH program in the EU is the first major effort to require chemical transparency in products; it is setting standards for the global economy. Green chemistry began as an initiative out of the U.S. Environmental Protection Agency in the early 1990s and has grown to involve networks of industry, academia, and environmentalists in thirty nations around the world. This rapidly evolving field of science is governed by twelve specific chemical design principles, which move products and processes toward an economy based on renewable feedstocks, where toxicity is deliberately prevented at the molecular level. Chemicals and chemical processes are designed to:

• Be less hazardous,
• Eliminate waste,
• Minimize energy use, and
• Degrade safely upon disposal.
Green chemists and engineers employ life-cycle and biological-systems thinking in the act of creating the chemicals that would form the foundation of our economy. The science is rigorous, and many specific applications are now emerging in industry and in academia, including renewable energy technologies, plastics, pharmaceuticals, pesticides, paints and coatings, textile manufacturing, pulp and paper, water purification and basic chemical feedstocks. Over time, green chemistry will change chemistry as a whole, re-orienting societies toward an economy based on sustainable feedstocks, renewable energy, biobased production and green jobs. The key is guiding the creative power of chemists with design criteria that specify safety and sustainability at the outset. Focused investment in these fields will drive the transition.

REVOLUTIONARY SCIENCE

Science usually operates within strict disciplinary boundaries. We live in a world, however, in which answers to crucial questions emerge at the confluence of very different disciplines. We believe we are at such a moment with green chemistry and environmental
health. Finding ways to bridge the gap between disciplines depends upon the substance of the science, the chemistry of the people and catalytic strategy. Until very recently, however, there has been little direct and purposeful communication between the fields of Green Chemistry and Environmental Health. Each has its own journals, scientific meetings and other forms of communication targeting its own membership. Both are gaining serious traction within their larger disciplines. At the same time, each increasingly depends upon information and insights from the other. Environmental health provides information essential for green molecular design. Green chemistry provides new chemical solutions to systemic health problems. Over the past few years, individuals from both disciplines have begun efforts to forge working relationships. This process is beginning to be systematized and broadened to fully capitalize upon direct dialogue between the two disciplines. Scientific innovation and progress depend on cross-disciplinarity; the transition to a truly sustainable industrial base requires enhanced scientific communication and radical collaboration between these fields.

REFERENCES
1. Anastas, P.A., Warner, J.C. (1998). Green Chemistry: Theory and Practice. Oxford: Oxford University Press.
2. Anastas, P.T., Bickart, P.H., Kirchoff, M.M. (1999). Designing Safer Polymers. New York: Wiley Interscience.
3. Bern, H.A. (1992). The Fragile Fetus. In: Chemically-induced Alterations in Sexual and Functional Development: The Wildlife/Human Connection (Colborn, T., Clement, C., eds). Princeton, NJ: Princeton Scientific Publishing. 9-15.
4. Colborn, T., Dumanoski, D., Myers, J.P. (1997). Our Stolen Future. New York: Plume.
5. Collins, T., Walter, C. (2006) "Little Green Molecules." Scientific American. 84-90.
6. Crain, D.A., Janssen, S.J., Edwards, T.M., Heindel, J.J., Ho, S.M., Hunt, P., et al. (2008) "Female reproductive disorders: the roles of endocrine-disrupting compounds and developmental timing." Fertil Steril. 90:911-40.
7. Diamanti-Kandarakis, E., Bourguignon, J.P., Giudice, L.C., Hauser, R., Prins, G.S., Soto, A.M., Zoeller, R.T. (2009) "Endocrine-disrupting chemicals: an Endocrine Society scientific statement." Endocr Rev 30(4):293-342.
8. Gluckman, P.D., Hanson, M.A., Beedle, A.S. (2007) "Early life events and their consequences for later disease: a life history and evolutionary perspective." Am J Hum Biol 19:1-19.
9. Grün, F., and Blumberg, B. (2006) "Environmental obesogens: organotins and endocrine disruption via nuclear receptor signaling." Endocrinology 147:S50-S55.
10. Heindel, J.J., McAllister, K.A., Worth, L. Jr., Tyson, F.L. (2006). "Environmental epigenomics, imprinting and disease susceptibility." Epigenetics 1(1):1-6.
11. McDonough, W., Braungart, M. (2002) Cradle to Cradle: Remaking the Way We Make Things. New York: North Point Press.
12. Myers, J.P., Zoeller, R.T., Vom Saal, F. (2009) "A clash of old and new concepts in toxicity, with important implications for public health." Environ Health Perspect doi:10.1289/ehp (online 30 July 2009).
13. Newbold, R.R., Heindel, J.J. (2009) "Developmental origins of health and disease: the importance of environmental exposures." In: Early Origins of Human Health and Disease, Eds. Newman, J.P., Ross, M.G., Karger Publishing, Basel, pp 41-50.
14. Warner, J.C., Jessop, P.G., Trakhtenberg, S. (2009) "The Twelve Principles of Green Chemistry." In: Innovations in Industrial and Engineering Chemistry: A Century of Achievements and Prospects for the New Millennium, Ed. by Flank, William H.; Abraham, Martin A.; Matthews, Michael A. American Chemical Society.
15. Warner, J.C., Cannon, A.S., Dye, K. (2004) "Green Chemistry." Environmental Impact Assessment Review, 24:775-799.
16. Warner, J.C. (2004) "Asking the Right Questions." Green Chem. 6, G27.
THE HIGH-VOLUME HORMONALLY ACTIVE CHEMICAL BISPHENOL A: HUMAN EXPOSURE, HEALTH HAZARDS AND NEED TO FIND ALTERNATIVES

FREDERICK S. VOM SAAL AND JULIA A. TAYLOR
Division of Biological Sciences, University of Missouri-Columbia, Columbia, USA

PAOLA PALANZA AND STEFANO PARMIGIANI
Dipartimento di Biologia Evolutiva e Funzionale, Universita' di Parma, Italy

SOURCES AND ROUTES OF EXPOSURE AND METABOLIC FATE

The chemical bisphenol A (BPA) was first used as a component of Bakelite, a plastic from which billiard balls and some other products were made in the early to mid 20th century. Similar to many phenolic compounds, BPA was reported to be a synthetic estrogen in 1936, and BPA had the efficacy of estradiol (Dodds and Lawson, 1936). In the 1950s, polymer chemists discovered that these estrogenic BPA molecules could be polymerized to make polycarbonate, a hard, clear plastic. BPA is also used in the manufacture of the resin lining of food and beverage cans in the USA and many other countries. In addition to being the monomer that is used in the above products, BPA is also used as an additive (plasticizer) in other types of plastic, such as polyvinyl chloride (PVC). BPA is one of the highest production volume chemicals in commerce, with over 8 billion pounds produced in 2008 (Bailin et al. 2008). The range of products that contain BPA is not known due to product confidentiality laws that protect corporations from revealing the chemicals used in products. A question that is often asked is: how could chemists have used a chemical that had been reported to act like a sex hormone to make plastic products such as baby bottles? The "green chemistry" movement not only provides an answer to this question (which is that they did not pay attention to biological effects), but, most importantly, it provides solutions (Anastas and Kirchhoff, 2002). Food contact items (can linings, food packaging and food and beverage containers) were previously thought to be the major contributors to the mean values of about 3-4 ng/ml (parts per billion, ppb) of unconjugated (biologically active) BPA detected in adult and fetal serum (Vandenberg et al. 2007). However, recent evidence has shown that the amounts of BPA that leach out of food contact containers cannot account for these ppb levels of BPA in human serum (Vandenberg et al. 2007; Stahlhut et al. 2009). This has led to speculation that the use of BPA in products such as carbonless or thermal paper (used for receipts, in hospitals, etc.) results in dermal exposure that will lead to higher levels of BPA than occur via the oral route, due to the absence of first-pass metabolism of chemical absorbed from the gut; ingested BPA is transported via the mesenteric portal vessels from the gut to the liver, where it is inactivated by the enzymes glucuronosyltransferase and sulfotransferase (Vandenberg et al. 2007). Route of exposure (oral, dermal, inhalation) to BPA has become a highly controversial issue, because articles funded by chemical corporations that manufacture BPA have proposed that all experimental animal studies of BPA that have not involved
oral administration of BPA are irrelevant to the assessment of the health risk posed to people by BPA. However, as noted above, this view is now disputed by the findings of Stahlhut and colleagues based on the United States National Health and Nutrition Examination Survey (NHANES) of thousands of people (Stahlhut et al. 2009). In addition, we conducted an experiment with newborn mice that showed that, in the neonate, the route of administration of BPA has no impact on the rate of clearance of unconjugated BPA from blood (Taylor et al. 2008). This finding was predicted by an extensive literature showing that fetuses and neonates have limited liver detoxifying capacity relative to adults (Vandenberg et al. 2007). This led the U.S. National Toxicology Program (NTP) to determine in its review of BPA that, due to the limited ability to glucuronidate BPA during fetal and neonatal life, the more rapid metabolism after oral administration of BPA in the adult (when compared to non-oral routes) would not be expected to occur in fetuses or newborns (NTP, 2008). In sharp contrast, the European Food Safety Authority (EFSA) made the determination that fetuses and newborns have the capacity to metabolize any BPA that they are exposed to (EFSA, 2008), although this opinion, which went on to state that BPA posed no threat to humans regardless of life stage, has been challenged as directly contradicting the published literature by both the German and Danish Ministries of the Environment. Whereas glucuronidation is the predominant pathway for inactivating BPA in adults, this enzyme is not expressed during the fetal period of sexual differentiation, when man-made estrogenic chemicals are known to disrupt development; instead, what metabolism of BPA does occur in fetuses is via sulfation (Richard et al. 2001). This is important because tissues in the fetus, as well as the placenta, express the enzyme sulfatase, which cleaves the sulfate group, resulting in biologically active BPA. The role of glucuronidase in the de-conjugation of BPA-glucuronide in the adult is less clear (Ginsberg and Rice, 2009). Regarding fetal exposure to BPA, sulfated metabolites of chemicals such as BPA can be de-sulfated to the parent compound by sulfatase activity in both the fetal liver and placenta (Collier et al. 2002). The cycling that occurs between sulfation and de-sulfation of chemicals such as BPA thus plays a critical role in determining the exposure of fetuses and neonates to the bioactive compound. This shows the importance of a thorough study of the ontogeny of the activities of sulfotransferases and glucuronosyltransferases, as well as the deconjugating enzymes, for understanding the risks posed by BPA exposure during prenatal and early postnatal life, when the levels of these enzymes are changing (Collier et al. 2002). There is a considerable exchange between the placenta, which expresses sulfatases that hydrolyze sulfated estrogens, and fetal tissues, which conjugate (via sulfation) estrogens, providing a continuously cycling pool of unconjugated estrogens and estrogen sulfates. The sulfates may act as a "reserve" yielding active hormones following hydrolysis; the formation of estrogen sulfates may provide a protective mechanism, since sulfates are biologically inactive (Pasqualini, 2005).

Exposure assessments

We have examined leaching of BPA from metal food and beverage cans and polycarbonate food-storage containers to determine the amount of BPA that people may be exposed to from the use of these products.
BPA-free purified water was placed into cans that had contained different products, as well as into polycarbonate food storage containers. The
cans and containers were then heated to 95°C for 24 h to simulate the food-sterilization process used after food is added to cans and to simulate the heating of polycarbonate food containers in the microwave, which manufacturers claim is "safe". BPA was analyzed after separation by HPLC with CoulArray detection (limit of detection ~10 parts per trillion). All cans and polycarbonate food containers leached detectable levels of BPA, although there were differences in the amount of leaching from different products and from different manufacturers. As expected, the cans that had contained acidic tomato sauce showed the highest BPA leaching rate. Specifically, one brand of canned tuna fish leached about 30 µg/L BPA, a brand of canned peas leached about 40 µg/L BPA, and a brand of tomato sauce leached about 50 µg/L BPA. A polycarbonate food container leached 15 µg/L (J. Taylor, unpublished). It is a basic characteristic of the ester bond linking BPA molecules together in polycarbonate plastic and resins that the rate of breaking of the bond by hydrolysis increases with heat, releasing free BPA (Bae et al. 2002), and an increase in leaching also occurs as a result of either an increase or decrease in pH (Brotons et al. 1995; Vandenberg et al. 2007). When food or liquids (such as beer) are placed into a can, they are heated to a high temperature for sterilization. The consequence is that food and beverages in cans have variable levels of BPA based on whether the contents are lipophilic and/or acidic or alkaline, all of which increase leaching (see the Environmental Working Group web site for additional data on leaching of BPA from cans at www.EWG.org). Another common use for BPA is as the monomer in dental sealants and composites used for fillings, from which BPA leaches in variable amounts and for different lengths of time depending on the product (Joskow et al. 2006). BPA has been reported to be present in PVC products such as stretch film and water pipes. BPA is used in printers' ink and to coat paper used for receipts (referred to as "carbonless paper"); unpolymerized (free) BPA in carbonless paper interacts with a gel-encased dye to create a visible print when the dye is released from the gel by heat or pressure. BPA is also used in newspaper print and is thus a major contaminant in recycled paper products (Vandenberg et al. 2007). Polycarbonate food storage and beverage containers (the hard, clear containers, which may be tinted in the case of sport water bottles or baby bottles) cause concern for their potential to leach BPA because they are re-usable, and repeated use leads to an increase in leaching (Brede et al. 2003). Many of these containers are marketed for use in the microwave, despite the fact that heating is known to increase BPA leaching levels. The United States Centers for Disease Control and Prevention (CDC) has measured BPA in the urine of people in the USA as part of the 2003/2004 United States National Health and Nutrition Examination Survey (NHANES) (Calafat et al. 2008). The CDC reported that 93% of people had detectable levels of BPA in their urine. The median and mean levels of unconjugated (parent) BPA reported in blood, as well as the lower and upper range reported for women and their fetuses at the time of parturition in Germany (Schonfelder et al. 2002), were virtually identical to the values reported for total BPA in urine by the CDC. Currently a BPA dose of 50 µg/kg/day is considered "safe" for daily human consumption by the U.S.
EPA and FDA (IRIS 1988), although this safety standard was set in the 1980s and, at the time of preparation of this manuscript, has not been revised since then, in spite of considerable pressure on the FDA by the U.S. Congress, which has
given the FDA a deadline of December 2009 to review its position on the safety of BPA (DailyGreen, 2009). Findings reported by Vandenberg et al. (2007) led to a consensus conclusion by 38 scientists who attended a U.S. National Institutes of Health (NIH) sponsored conference on BPA that current levels of human exposure to BPA already exceed the presumed "safe" daily exposure dose (vom Saal et al. 2007). We recently reported that daily oral administration of 400 µg/kg/day to adult female rhesus monkeys resulted in average blood levels of unconjugated BPA that were approximately 8-times lower than median/mean levels reported in women (VandeVoort et al. 2009). Since the rhesus monkey is considered a good model for the human pharmacokinetics of chemicals such as BPA, these findings add to our concern that human exposure to BPA is currently much higher than has been estimated by government regulatory agencies, and is much higher than doses that cause a myriad of adverse health effects in a variety of animal species, such as rodents (rats, mice), farm animals (sheep) and primates (African green monkeys; Chlorocebus aethiops sabaeus).

ADVERSE HEALTH EFFECTS OF BPA IN LABORATORY ANIMALS

Experiments with laboratory animals are used to inform regulatory agencies about the safety of drugs and chemicals. As indicated previously, the "safe" dose of BPA was estimated to be 50 µg/kg/day based on a few studies conducted in the 1980s that only examined a few very high doses. These very high-dose studies have been challenged as invalid for accurately predicting the effects of low doses of chemicals that act through receptor systems for hormones, which are sensitive to very low doses (Myers et al. 2009). Examples of effects of acute exposure to low doses of BPA in adult animals are: a significant stimulation of insulin secretion followed by insulin resistance in mice (Ropero et al. 2008), a significant decrease in daily sperm production in rats (Sakaue et al. 2001), a decrease in maternal behavior in mice (Palanza et al. 2002), and disruption of hippocampal synapses, leading to the appearance of a brain in both rats and monkeys that is typical of that seen in senile humans (MacLusky et al. 2005; Leranth et al. 2008). Related to the fact that type 2 diabetes is increasing in many regions of the world is the finding that exposure of adult mice to a low oral dose of BPA (10 µg/kg/day) resulted in stimulation of insulin secretion that was mediated by estrogen receptor ER alpha. The prolonged hyper-secretion of insulin was followed by insulin resistance and postprandial hyperinsulinaemia (Ropero et al. 2008). The low-dose studies of BPA effects on insulin secretion and insulin resistance in experimental animals have been confirmed in cell culture studies with human and animal tissues that have revealed molecular pathways mediating effects of BPA in the low parts-per-trillion range, far below concentrations of BPA found in virtually all people who have been examined (Wetherill et al. 2007). Hugo et al. (2008) reported that human fat cells in primary culture showed a marked suppression of the critical regulatory cytokine adiponectin, with the maximum response occurring at 1 nM (0.23 ppb), at the low end of the range of human exposure to BPA. A decrease in adiponectin is related to insulin resistance and an increased risk for type 2 diabetes, cardiovascular disease and heart attack (Beltowski et al. 2008).
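The concentration conversion quoted above (1 nM of BPA corresponding to roughly 0.23 ppb) can be verified with a short calculation. The sketch below is illustrative only; it assumes the standard molar mass of bisphenol A (C15H16O2, about 228.3 g/mol) and the usual convention that 1 µg/L in dilute aqueous solution is approximately 1 ppb.

```python
# Illustrative check of the nanomolar-to-ppb conversion for BPA.
# Assumes molar mass of bisphenol A (C15H16O2) ~= 228.29 g/mol.

MOLAR_MASS_BPA = 228.29      # g/mol
concentration_nM = 1.0       # nmol/L, the concentration at which Hugo et al. (2008)
                             # report maximal adiponectin suppression

# nmol/L -> mol/L -> g/L -> ug/L (1 ug/L ~ 1 ppb in dilute aqueous solution)
grams_per_litre = concentration_nM * 1e-9 * MOLAR_MASS_BPA
micrograms_per_litre = grams_per_litre * 1e6

print(f"{concentration_nM} nM BPA ~= {micrograms_per_litre:.2f} ug/L "
      f"(~{micrograms_per_litre:.2f} ppb)")
# Prints: 1.0 nM BPA ~= 0.23 ug/L (~0.23 ppb), matching the value quoted in the text.
```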
It is thus of considerable interest that, in an analysis of data from 1455 people examined for BPA levels in urine as part of the NHANES conducted in 2003/2004, there was a significant relationship between urine levels of BPA and cardiovascular disease,
type 2 diabetes, and abnormalities in liver enzymes (Lang et al. 2008). The fact that these findings are related to studies that identify plausible mechanisms by which BPA at current levels of human exposure could result in these diseases greatly strengthens their importance (vom Saal and Myers, 2008). The greatest concern with exposure to BPA is during development: fetuses, neonates, infants, children and adolescents. There are two important issues driving this concern: 1. Exposure to BPA has been found to be significantly greater as age decreases (Calafat et al. 2008), and recent findings indicate that premature infants in a neonatal intensive care unit have approximately 10-fold higher BPA levels relative to adults (Calafat et al. 2009). 2. Fetuses and neonates are particularly vulnerable to the "programming" effects that endogenous hormones, and chemicals that act like hormones such as BPA, have on genes in cells undergoing differentiation. These programming events are referred to as "epigenetic" modifications of genes because they do not involve classical mutations but, instead, involve the addition and removal of methyl and acetyl groups from the bases that make up genes as well as from the associated proteins that form part of the chromosomes. The result of exposure during development to hormonally active chemicals is thus permanent abnormal programming of genes that can lead to diseases later in life (Dolinoy et al. 2007). The laboratory animal research on BPA is unique in that there are now hundreds of studies that have examined doses of BPA within the range of human exposure, rather than the more typical approach in regulatory toxicology of only testing a few doses that are thousands of times higher than human exposure levels (vom Saal et al. 2007). There was surprise associated with the first "low dose" publications on the effects of BPA in laboratory mice (Nagel et al. 1997; vom Saal et al. 1998), which showed that feeding pregnant mice 2 or 20 µg/kg/day BPA caused abnormalities of the entire reproductive system in male offspring when they were examined in adulthood. The 2 µg/kg/day dose was a daily oral dose to pregnant mice that was 25,000-times lower than had ever been examined, and 25-times below the current "safe" daily exposure dose according to the U.S. FDA and U.S. EPA, as well as EFSA. Numerous reviews have since been published challenging as invalid the assumptions used by these regulatory agencies to estimate "safe" exposure levels for endocrine disrupting chemicals such as BPA (Welshons et al. 2003; Myers et al. 2009; Myers et al. 2009). One of the main concerns with the adverse effects reported in response to developmental exposure to low doses of BPA (doses that produce blood levels in animals below those in humans) is that they all relate to disease trends in humans. For example, there is an obesity epidemic in many regions of the world, and developmental exposure to BPA increases body weight later in life (Heindel and vom Saal, 2009). The incidence of prostate and breast cancer is increasing, and BPA exposure during early life causes these cancers in rodents; most animal carcinogens are human carcinogens (Richter et al. 2007). The largest literature on the adverse effects of BPA exposure during development concerns adverse effects on brain structure, chemistry and behavior (Richter et al. 2007).
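As a quick arithmetic cross-check of the dose comparisons quoted above, the short sketch below works through what "25,000-times lower than had ever been examined" and "25-times below the current 'safe' daily exposure dose" imply, using only the figures stated in this chapter. It is an illustrative calculation, not part of the original studies.

```python
# Illustrative arithmetic for the dose comparisons quoted in the text.

low_dose_ug_per_kg_day = 2.0          # low dose fed to pregnant mice (ug/kg/day)

# "25-times below the current 'safe' daily exposure dose"
safe_dose_ug_per_kg_day = low_dose_ug_per_kg_day * 25
print(f"Implied reference ('safe') dose: {safe_dose_ug_per_kg_day:.0f} ug/kg/day")
# Matches the 50 ug/kg/day figure cited earlier in the chapter.

# "25,000-times lower than had ever been examined"
lowest_previously_tested_ug = low_dose_ug_per_kg_day * 25_000
print(f"Implied lowest previously tested dose: "
      f"{lowest_previously_tested_ug / 1000:.0f} mg/kg/day")
# i.e. about 50 mg/kg/day, a thousand-fold above the implied reference dose.
```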
One of the most interesting aspects of this literature is that there is a consistent finding of a loss of sex differences in brain structure, chemistry and behavior due to fetal/neonatal exposure to low doses of BPA. BPA thus appears to interfere with the normal processes that govern sexual differentiation, with brain changes reported in both males and females, depending on the outcome measured (Palanza et al. 2008). The implications at the
population level for disruption of normal socio-sexual behaviors have not been extensively studied, although there are reports of changes in play behavior (Dessi-Fulgheri et al. 2002) as well as in other socio-sexual behaviors (Farabollini et al. 2002) that could impact population dynamics. There are also numerous studies of the effects of low doses of BPA on development of the female (Soto et al. 2008). Findings include chromosomal abnormalities in oocytes in females (Susiarjo et al. 2007), and long-term effects on reproductive organs that are not observed until mid-life, such as uterine fibroids and paraovarian cysts (Newbold et al. 2007). Other studies have shown that very low doses of BPA during prenatal or neonatal development can result in permanent effects in male rats and mice. For example, fetal exposure to a low dose of BPA causes a permanent decrease in testicular sperm production in mice (vom Saal et al. 1998).

THE NEED FOR ALTERNATIVES TO BPA

Over the last decade there have been approximately 1000 published articles concerning the endocrine disrupting effects of BPA. Many of these studies link BPA to diseases that have been increasing in incidence over the last part of the 20th century and the beginning of the 21st century (Talsness et al. 2009). As a result of numerous studies documenting products that leach BPA into the environment after disposal, typically into landfill or into the oceans (Thompson et al. 2009), and that also leach out of products used to store food and beverages, out of medical devices, and out of other products such as carbonless paper and water pipes, there has been public pressure in developed countries to find alternatives to BPA that can be used to make these products (EnvironmentCanada, 2008). For example, in the late 1990s the three major Japanese can manufacturers voluntarily replaced the BPA-based resin used as the surface coating of cans with polyethylene terephthalate (PET), although BPA is still present as an adhesive undercoat that serves to bind the PET to the metal can. We have found that Japanese cans leach less than 5% of the amount of BPA relative to cans containing similar products made in the USA (J. Taylor, unpublished), which is certainly a step in the right direction. At the same time, carbonless and thermal paper containing BPA completely disappeared from the Japanese market, since there have always been other chemicals available to react with the dye in these products, so no product development was required. These actions in Japan occurred due to public concern and did not require changes in laws by legislators or regulatory agencies in Japan. Similarly, plastic alternatives already exist to the PVC-based products used in hospitals (such as dialysis and IV tubing), and some hospitals (notably those owned by Kaiser Permanente) have stated their intention to buy only non-PVC-based products, due to clear evidence that these products leach high levels of BPA as well as another class of endocrine disrupting chemicals called phthalates (Calafat et al. 2009). Finally, the products that have received the most public attention are baby bottles and reusable sport water bottles. BPA-free plastic bottles (and glass bottles) are now replacing BPA-based polycarbonate bottles, since Canada has banned the sale of polycarbonate bottles (EnvironmentCanada, 2008), a few states in the USA have taken similar action, and similar legislation is pending in the U.S. Congress (DailyGreen, 2009).
A variety of other chemicals are now entering the marketplace to replace BPA-based polycarbonate in bottles and food containers, such as a polyester in Eastman's
Tritan (chemical composition unknown) and polyethersulfone (PES). One of the major goals of the green chemistry movement is to establish a paradigm (the 12 principles of green chemistry) that can be followed for introducing chemicals for new products into the marketplace as well as replacements for bad chemicals such as BPA (Anastas and Kirchhoff, 2002). It is clear that the public wants products, particularly those targeted at babies, not to contain dangerous chemicals. However, without the cooperation of chemical corporations, which was demonstrated by Japanese corporations, this will not be an easy goal to achieve. So far, corporations in the USA and Europe have taken the opposite approach to the Japanese and have continued to deny that BPA or other endocrine disrupting chemicals have any adverse effect. The denial of science was a successful strategy for the tobacco industry for decades, and U.S. and European chemical corporations have so far chosen that approach. An important argument from "green chemists" is that replacing bad chemicals with safe chemicals is not just important for the survival of the planet; it will also increase profits over the long run. It appears that, unlike the situation in Japan, this may require legislation in the USA, which the current Congress appears willing to tackle (DailyGreen, 2009). The Europeans have moved forward with a new approach called Registration, Evaluation and Authorization of Chemicals (REACH), which is a step in the right direction for stimulating corporations to find alternatives for chemicals to ensure a sustainable planet.
ACKNOWLEDGEMENTS
Funding during the preparation of this manuscript was provided to FvS by NIEHS grant ES016770 and to SP and PP by the University of Parma.
REFERENCES
1. Anastas, P.T. and Kirchhoff, M.M. (2002) "Origins, current status, and future challenges of green chemistry." Acc. Chem. Res. 35:686-694.
2. Bae, B., Jeong, J.H. and Lee, S.J. (2002) "The quantification and characterization of endocrine disruptor bisphenol-A leaching from epoxy resin." Water Sci. Technol. 46:381-387.
3. Bailin, P.D., Byrne, M., Lewis, S. and Liroff, R. (2008) Public awareness drives market for safer alternatives: bisphenol A market analysis report. September 15, 2008, Investor Environmental Health Network. http://www.iehn.org/documents/BPA%20market%20report%20Final.pdf. Access date: March 2, 2009.
4. Brede, C., Fjeldal, P., Skjevrak, I. and Herikstad, H. (2003) "Increased migration levels of bisphenol A from polycarbonate baby bottles after dish washing, boiling and brushing." Food Addit. Contam. 20:684-689.
5. Brotons, J.A., Olea-Serrano, M.F., Villalobos, M., Pedraza, V. and Olea, N. (1995) "Xenoestrogens released from lacquer coating in food cans." Environ. Health Perspect. 103:608-612.
6. Calafat, A.M., Weuve, J., Ye, X., Jia, L.T., Hu, H., Ringer, S., et al. (2009) "Exposure to bisphenol A and other phenols in neonatal intensive care unit premature infants." Environ. Health Perspect. 117:639-644.
7. Calafat, A.M., Ye, X., Wong, L.Y., Reidy, J.A. and Needham, L.L. (2008) "Exposure of the U.S. population to bisphenol A and 4-tertiary-octylphenol: 2003-2004." Environ. Health Perspect. 116:39-44.
8. Collier, A.C., Ganley, N.A., Tingle, M.D., Blumenstein, M., Marvin, K.W., Paxton, J.W., et al. (2002) "UDP-glucuronosyltransferase activity, expression and cellular localization in human placenta at term." Biochem. Pharmacol. 63:409-419.
9. DailyGreen (2009) Congress to FDA: Prove Bisphenol A Safe, or Ban It. http://www.thedailygreen.com/environmental-news/latest/bisphenol-a-47080302. August 3, 2009.
10. Dessi-Fulgheri, F., Porrini, S. and Farabollini, F. (2002) "Effects of perinatal exposure to bisphenol A on play behavior of female and male juvenile rats." Environ. Health Perspect. 110 Suppl 3:403-407.
11. Dodds, E.C. and Lawson, W. (1936) "Synthetic oestrogenic agents without the phenanthrene nucleus." Nature 137:996.
12. Dolinoy, D.C., Huang, D. and Jirtle, R.L. (2007) "Maternal nutrient supplementation counteracts bisphenol A-induced DNA hypomethylation in early development." Proc. Natl. Acad. Sci. 104:13056-13061.
13. EFSA (2008) Toxicokinetics of Bisphenol A - Scientific Opinion of the Panel on Food Additives, Flavourings, Processing Aids and Materials in Contact with Food (AFC). European Food Safety Authority, http://www.efsa.europa.eu/EFSA/efsa_locale-1178620753812_1211902017492.htm. July, 2008. Access date: August 3, 2009.
14. EnvironmentCanada (2008) Draft Screening Assessment for The Challenge Phenol, 4,4'-(1-methylethylidene)bis- (Bisphenol A). Chemical Abstracts Service Registry Number 80-05-7. http://www.ec.gc.ca/substances/ese/eng/challenge/batch2/batch2_80-05-7.cfm. Access date: July 5, 2008.
15. Farabollini, F., Porrini, S., Della Seta, D., Bianchi, F. and Dessi-Fulgheri, F. (2002) "Effects of perinatal exposure to bisphenol A on sociosexual behavior of female and male rats." Environ. Health Perspect. 110 Suppl 3:409-414.
16. Ginsberg, G. and Rice, D.C. (2009) "Does rapid metabolism ensure negligible risk from bisphenol A?" Environ. Health Perspect. Online 14 July 2009, doi:10.1289/ehp.0901010 (available at http://dx.doi.org/).
17. Heindel, J.J. and vom Saal, F.S. (2009) "Role of nutrition and environmental endocrine disrupting chemicals during the perinatal period on the aetiology of obesity." Mol. Cell Endocrinol. 304:90-96.
18. IRIS (1988) Bisphenol A. (CASRN 80-05-7); U.S.-EPA Integrated Risk Information System Substance file; http://www.epa.gov/iris/subst/0356.htm.
19. Joskow, R., Barr, D.B., Barr, J.R., Calafat, A.M., Needham, L.L. and Rubin, C. (2006) "Exposure to bisphenol A from bis-glycidyl dimethacrylate-based dental sealants." J. Am. Dent. Assoc. 137:353-362.
20. Lang, I.A., Galloway, T.S., Scarlett, A., Henley, W.E., Depledge, M., Wallace, R.B., et al. (2008) "Association of urinary bisphenol A concentration with medical disorders and laboratory abnormalities in adults." JAMA 300:1303-1310.
21. Leranth, C., Hajszan, T., Szigeti-Buck, K., Bober, J. and MacLusky, N.J. (2008) "Bisphenol A prevents the synaptogenic response to estradiol in hippocampus and prefrontal cortex of ovariectomized nonhuman primates." Proc. Natl. Acad. Sci. 105:14187-14191.
22. MacLusky, N.J., Hajszan, T. and Leranth, C. (2005) "The environmental estrogen bisphenol A inhibits estrogen-induced hippocampal synaptogenesis." Environ. Health Perspect. 113:675-679.
23. Myers, J.P., vom Saal, F.S., Akingbemi, B.T., Arizono, K., Belcher, S., Colborn, T., et al. (2009) "Why public health agencies cannot depend on good laboratory practices as a criterion for selecting data: the case of bisphenol A." Environ. Health Perspect. 117:309-315.
24. Myers, J.P., Zoeller, T.J. and vom Saal, F.S. (2009) "A clash of old and new scientific concepts in toxicity, with important implications for public health." Environ. Health Perspect. doi:10.1289/ehp.0900887 (available at http://dx.doi.org/). Online 29 July 2009.
25. Nagel, S.C., vom Saal, F.S., Thayer, K.A., Dhar, M.G., Boechler, M. and Welshons, W.V. (1997) "Relative binding affinity-serum modified access (RBA-SMA) assay predicts the relative in vivo bioactivity of the xenoestrogens bisphenol A and octylphenol." Environ. Health Perspect. 105:70-76.
26. Newbold, R.R., Jefferson, W.N. and Padilla-Banks, E. (2007) "Long-term adverse effects of neonatal exposure to bisphenol A on the murine female reproductive tract." Reprod. Toxicol. 24:253-258.
27. NTP (2008) NTP-CERHR Monograph on the Potential Human Reproductive and Developmental Effects of Bisphenol A. September 2008. http://cerhr.niehs.nih.gov/chemicals/bisphenol/bisphenol-eval.html. Accessed September 3, 2008. National Toxicology Program, NIH Publication No. 08-5994.
28. Palanza, P., Gioiosa, L., vom Saal, F.S. and Parmigiani, S. (2008) "Effects of developmental exposure to bisphenol A on brain and behavior in mice." Environ. Res. 108:150-157.
29. Palanza, P., Howdeshell, K.L., Parmigiani, S. and vom Saal, F.S. (2002) "Exposure to a low dose of bisphenol A during fetal life or in adulthood alters maternal behavior in mice." Environ. Health Perspect. 110:415-422.
30. Pasqualini, J.R. (2005) "Enzymes involved in the formation and transformation of steroid hormones in the fetal and placental compartments." J. Steroid Biochem. Mol. Biol. 97:401-415.
31. Richard, K., Hume, R., Kaptein, E., Stanley, E.L., Visser, T.J. and Coughtrie, M.W. (2001) "Sulfation of thyroid hormone and dopamine during human development: ontogeny of phenol sulfotransferases and aryl sulfatase in liver, lung, and brain." J. Clin. Endocrinol. Metab. 86:2734-2742.
32. Richter, C.A., Birnbaum, L.S., Farabollini, F., Newbold, R.R., Rubin, B.S., Talsness, C.E., et al. (2007) "In vivo effects of bisphenol A in laboratory rodent studies." Reprod. Toxicol. 24:199-224.
33. Ropero, A.B., Alonso-Magdalena, P., Garcia-Garcia, E., Ripoll, C., Fuentes, E. and Nadal, A. (2008) "Bisphenol-A disruption of the endocrine pancreas and blood glucose homeostasis." Int. J. Androl. 31:194-200.
34. Sakaue, M., Ohsako, S., Ishimura, R., Kurosawa, S., Kurohmaru, M., Hayashi, Y., et al. (2001) "Bisphenol A affects spermatogenesis in the adult rat even at a low dose." J. Occupational Health 43:185-190.
35. Schonfelder, G., Wittfoht, W., Hopp, H., Talsness, C.E., Paul, M. and Chahoud, I. (2002) "Parent bisphenol A accumulation in human maternal-fetal-placental unit." Environ. Health Perspect. 110:A703-A707.
36. Soto, A.M., Vandenberg, L.N., Maffini, M.V. and Sonnenschein, C. (2008) "Does breast cancer start in the womb?" Basic Clin. Pharmacol. Toxicol. 102:125-133.
37. Stahlhut, R.W., Welshons, W.V. and Swan, S.H. (2009) "Bisphenol A data in NHANES suggest longer than expected half-life, substantial non-food exposure, or both." Environ. Health Perspect. 117:784-789.
38. Susiarjo, M., Hassold, T.J., Freeman, E. and Hunt, P.A. (2007) "Bisphenol A exposure in utero disrupts early oogenesis in the mouse." PLoS Genet. 3:63-70.
39. Talsness, C.E., Andrade, A.J., Kuriyama, S.N., Taylor, J.A. and vom Saal, F.S. (2009) "Components of plastic: experimental studies in animals and relevance for human health." Philos. Trans. R. Soc. Lond. B Biol. Sci. 364:2079-2096.
40. Taylor, J.A., Welshons, W.V. and vom Saal, F.S. (2008) "No effect of route of exposure (oral; subcutaneous injection) on plasma bisphenol A throughout 24h after administration in neonatal female mice." Reprod. Toxicol. 25:169-176.
41. Thompson, R.C., Moore, C.J., vom Saal, F.S. and Swan, S.H. (2009) "Plastics, the environment and human health: current consensus and future trends." Philos. Trans. R. Soc. Lond. B Biol. Sci. 364:2153-2166.
42. Vandenberg, L.N., Hauser, R., Marcus, M., Olea, N. and Welshons, W.V. (2007) "Human exposure to bisphenol A (BPA)." Reprod. Toxicol. 24:139-177.
43. VandeVoort, C.A., Taylor, J.A., Hunt, P.A., Welshons, W.V. and vom Saal, F.S. (2009) "Oral exposure of female Rhesus monkeys to 8-times more bisphenol A than the FDA's safe daily dose results in plasma unconjugated bisphenol A below mean levels in people." 91st meeting of the Endocrine Society, Washington DC.
44. vom Saal, F.S., Akingbemi, B.T., Belcher, S.M., Birnbaum, L.S., Crain, D.A., Eriksen, M., et al. (2007) "Chapel Hill bisphenol A expert panel consensus statement: integration of mechanisms, effects in animals and potential to impact human health at current levels of exposure." Reprod. Toxicol. 24:131-138.
45. vom Saal, F.S., Cooke, P.S., Buchanan, D.L., Palanza, P., Thayer, K.A., Nagel, S.C., et al. (1998) "A physiologically based approach to the study of bisphenol A and other estrogenic chemicals on the size of reproductive organs, daily sperm production, and behavior." Toxicol. Ind. Health 14:239-260.
46. vom Saal, F.S. and Myers, J.P. (2008) "Bisphenol A and risk of metabolic disorders." JAMA 300:1353-1355.
47. Welshons, W.V., Thayer, K.A., Judy, B.M., Taylor, J.A., Curran, E.M. and vom Saal, F.S. (2003) "Large effects from small exposures. I. Mechanisms for endocrine-disrupting chemicals with estrogenic activity." Environ. Health Perspect. 111:994-1006.
48. Wetherill, Y.B., Akingbemi, B.T., Kanno, J., McLachlan, J.A., Nadal, A., Sonnenschein, C., et al. (2007) "In vitro molecular mechanisms of bisphenol A action." Reprod. Toxicol. 24:178-198.
SESSION 18 AIDS AND INFECTIOUS DISEASES
2009 PROGRESS REPORT OF THE MCD-217 PROJECT AND 2010 RESEARCH PROJECT, EAST-AFRICA AIDS RESEARCH CENTER AT THE UGANDA VIRUS RESEARCH INSTITUTE (UVRI), ENTEBBE, UGANDA
DR. FRANCO M. BUONAGURO
Istituto Nazionale dei Tumori, "Fondazione G. Pascale", Napoli, Italy
In the year 2009 several activities have been conducted at the Centre in synergism with the program of "Global Support to the National Plan for HIV/AIDS Control in Uganda". Furthermore, research activities directly related to the Uganda HIV epidemic and to the fight against it have been conducted at the INT-Naples within the context of 4 subprojects:
1. Epidemiology of HIV-1 in Uganda, within the frame of the Ugandan National Plan for HIV/AIDS Vaccine Development (MCD-217/1);
2. Molecular Characterization of HIV Strains from Subjects exposed to HIV Infections, who either fail to seroconvert or do not develop AIDS (MCD-217/2);
3. Etiopathogenesis of AIDS-associated Neoplasias (MCD-217/3);
4. Development of HIV vaccine (MCD-217/4).
Studies performed on the molecular characterization of HIV by HMA, nucleotide sequencing and phylogenetic analysis show that >60% of the samples cluster within the A clade, ~25% in the D clade, and <20% do not show a clear-cut result, suggesting a divergence greater than 30%. Within the theme of HIV characterization other studies have been performed on:
• HIV characterization in Iran (clade A);
• HIV characterization in Russia (clade A);
• HIV characterization from Ukrainian subjects (clade A);
• HIV characterization from Nigerian subjects (HIV 1 and 2).
Furthermore, we are planning to contribute, with Saladin Osmanov (Director of the WHO HIV Vaccine program), to the re-establishment of the pan-European HIV monitoring system, for predicting the efficacy of new vaccine approaches. The studies conducted on the etiopathogenesis of AIDS-associated cancers have focused on: Kaposi's Sarcoma, with the continuation of sample collection and virus characterization; genital cancer, with virus genotyping and vaccine design; and Conjunctival Carcinoma, with analysis of genetic susceptibility and virus characterization. Within the theme of virus-associated cancers other studies have been performed on:
• HPV association with genital cancers in Ecuador (in collaboration with Claudio Maldonado, President of the Latin-American section of the American Society for Colposcopy and Cervical Pathology);
• HPV role in genital lesions in Colombia (Dr. Alvaro Mauricio Florez, Universidad de Santander).
Furthermore, an Infectious Agents and Cancer Monograph has been published in
Frontiers in BioScience, including several articles on EBV, HTLV and human lymphomas; and FMB has been included as a member of the Pan-European board on HPV vaccine immunology.
P53 ROLE IN CANCER PATHOGENESIS: TWO DISTINCT MODELS
A relevant result has been the definition of the role of the P53 polymorphism at codon 72 in both conjunctival and penile cancer pathogenesis, presented in a plenary presentation at the African Society of Human Genetics, Yaounde, Cameroon, 13-15 March 2009.
Fig. 1.
The study has been conducted (as shown in Figure 1) using two sets of specific primers in order to selectively amplify either the Arginine (Arg) or the Proline (Pro) allele, yielding a 141 bp or a 177 bp fragment, respectively.
The amplified products were subsequently detected by gel electrophoresis, as shown in Figure 2, where the homozygous P (Pro) sample is in lanes 2 and 3; the heterozygous P/R (Pro/Arg) sample is in lanes 4 and 5; and the homozygous R (Arg) sample is in lanes 6 and 7.
Fig. 2. Electrophoretic pattern of P/P, P/R and R/R genotypes (141 bp band indicated).
The study has been performed on two different mucosal cancers and in two populations: African subjects enrolled at the East Africa AIDS Research Center, with low prevalence of Arg homozygosity, and Caucasian subjects, with high prevalence of Arg homozygosity.
• In Conjunctival Carcinoma, which has been associated with HIV-associated immunodeficiency and UV exposure, p53 polymorphism plays a role [subjects homozygous for Arg at codon 72 are at high risk of progression].
• In Penile Squamous Carcinoma, which has been associated with oncogenic HPV types (in particular HPV 16), p53 polymorphism does not play a role.
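As an illustration of the allele-specific PCR readout described above, the following minimal sketch maps observed band sizes to p53 codon 72 genotype calls. Only the 141 bp (Arg) and 177 bp (Pro) fragment sizes come from the text; the function name and the size tolerance are hypothetical, added purely for illustration.

```python
# Minimal sketch: calling the p53 codon 72 genotype from allele-specific PCR bands.
# Band sizes (141 bp = Arg allele, 177 bp = Pro allele) are taken from the text;
# the size tolerance and function name are illustrative assumptions.

ARG_BAND = 141  # bp, product of the Arg-specific primer set
PRO_BAND = 177  # bp, product of the Pro-specific primer set

def call_genotype(observed_bands, tolerance=3):
    """Return 'R/R', 'P/P', 'P/R' or 'undetermined' from a list of band sizes (bp)."""
    has_arg = any(abs(b - ARG_BAND) <= tolerance for b in observed_bands)
    has_pro = any(abs(b - PRO_BAND) <= tolerance for b in observed_bands)
    if has_arg and has_pro:
        return "P/R"
    if has_arg:
        return "R/R"
    if has_pro:
        return "P/P"
    return "undetermined"

# Example, following the gel in Figure 2: lanes with only the 177 bp band are P/P,
# lanes with both bands are P/R, lanes with only the 141 bp band are R/R.
print(call_genotype([177]))       # P/P
print(call_genotype([141, 177]))  # P/R
print(call_genotype([141]))       # R/R
```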
DEVELOPMENT OF HIV VACCINE
Within the theme of VLP-based Vaccines:
• Other studies have been financed by: the EU within FP7, where our NOIN application has been approved for 5-year support for a total of 12 million Euros; and the NIH, within the SVEU program, for a pre-clinical primate evaluation;
• A plant-based model of HIV vaccine has been developed;
• An HIV vaccine patent has been submitted;
• We have been invited speakers on HIV Vaccine Models at Bob Gallo's annual Conference in Baltimore;
• Our laboratory is contributing to the Scientific Committee of the International AIDS Vaccine Conferences, which in 2009 will be held in Paris, as shown in Figure 3;
• Our laboratory has been included in the Global HIV Vaccine Enterprise planning committee on Novel Vaccine Approaches.
Fig. 3. (Conference Chair organization chart.)
TRAINING AND EXCHANGE PROGRAMS
Within the context of the Training and Exchange Programs several activities have been pursued:
• Direct support to the GuluNap project at the newly established School of Medicine in Northern Uganda, with teaching missions and exchange programs;
• Contribution to the Gulu Medical Library and MultiMedia Unit with the purchase of 10 computers provided by the Lions Club of Naples-Castel Sant'Elmo, of which FMB is the Medical-project coordinator. The solar power supply and the satellite antenna for high-speed connections should be co-financed by the International LIONS Foundation;
• Two fellowships of 4-week exchange programs for medical students, within the IFMSA programs (International Federation of Medical Student Associations).
Furthermore, an International PhD in immunology has been established at the University of Milan, available also to foreign students, supported by a tuition fellowship. Finally, a collaborative exchange program has been organized with Chinese Research Institutions, which has been approved and funded by the EU within the context of the EFBIC RED Ribbon program. In this frame, two virologists (Professors Bin Gao and Wenlin Huang) from the Chinese Academy of Sciences visited the National Cancer Institute in Naples to describe their scientific projects and explore possible collaborations.
Bin Gao, PhD, Professor
Institute of Microbiology, Chinese Academy of Sciences, Datun Road, Chaoyang District, Beijing 100101. Tel/Fax: 86-10-64807599. E-mail: [email protected]
Education: MSc, The Chinese Academy of Military Medicine, China, 1987; PhD, The University of London, 1990-1993; Post-doc, Oxford University, 1993-1996.
Career: Research leader, Peptide Therapeutics plc, Cambridge, 1996-1997; Research Fellow, Institute of Molecular Medicine, Oxford, 1997-2001; Lecturer, University College London, 2001-2005; currently Director of the Center for Molecular Immunology, IMCAS.
Research interests: The immune system works by recognising the presence of an invading organism. To distinguish between normal cells and invaded cells, immune cells, including Cytotoxic T Lymphocytes (CTL) and Natural Killer (NK) cells, keep checking an identity marker called the Major Histocompatibility Complex (MHC) class I molecule on the surface of all nucleated cells. If cells are invaded by viruses, bacteria, or parasites, a piece of material from the invader will be loaded onto the MHC complex; the T cell will recognize this change and kill the host cells carrying the pathogens.

Wenlin Huang, PhD, Professor
Institute of Microbiology, Chinese Academy of Sciences. Tel: 86-10-64807808.
Current research:
1. Recombinant adenovirus for gene therapy. Human adenoviruses are molecular parasites that rely on cellular mechanisms for expression of their genetic information. Foreign genes such as endostatin, angiostatin and neurotrophin recombined into adenovirus can be expressed efficiently in eukaryotic target tissues for treating diseases. We have developed the injection of an adenovirus encoding human endostatin, and Phase 2 clinical trials are now being performed.
2. Unusual transcription in adenovirus infection. We have established that a typical RNA polymerase II promoter is also transcribed by RNA polymerase III in adenovirus-infected cells. The properties of RNA polymerase III transcription suggest that recognition of the E2E promoter by polymerase III may serve to switch initiators, or ignite a threshold for RNA polymerase II transcription. In addition, to investigate the mechanisms by which viral mRNA species are distinguished from their cellular counterparts for export to the cytoplasm during the late phase of subgroup C adenovirus infection, we have examined the metabolism of several cellular and viral mRNAs in human cells productively infected by adenovirus type 5 (Ad5).
3. Minicircle DNA for antitumor gene therapy. Minicircles are superior to standard plasmids in terms of biosafety, improved gene transfer, and potential bioavailability. However, minicircle vectors have never been applied in antitumor gene therapy. One of Huang's works has suggested that minicircle-mediated IFN-γ gene transfer is a promising novel approach in the treatment of human nasopharyngeal carcinoma (NPC).
SESSION 19 SEMINAR PARTICIPANTS
Professor Sergey V. Ablameyko
Belarusian State University Rector Minsk, Belarus
Lord John Alderdice
House of Lords London, UK
Dr. Mohd Noor Amin
International Multilateral Partnership Against Cyber Threats (IMPACT) Cyberjaya, Malaysia
Professor Mikhail J. Antonovsky
Carbon Dioxide Division Institute of Global Climate and Ecology Moscow, Russia
Albert Arking (Vatican Session)
Johns Hopkins University Baltimore, MD, USA
Professor William A. Barletta
U.S. Particle Accelerator School Department of Physics Massachusetts Institute of Technology Cambridge, Massachusetts, USA
Dr. Carl O. Bauer
Director, National Energy Technology Laboratory, U.S. Department of Energy, Pittsburgh, Pennsylvania, USA
Antonio M. Battro (Vatican Session)
Chief Education Officer, One Laptop Per Child
Dr. Evan Beach
Center for Green Chemistry and Green Engineering Yale University New Haven, Connecticut, USA
Dr. Roger W. Bentley
Department of Cybernetics The University of Reading Reading, UK
Dr. Bruce Blumberg
Department of Developmental Cell Biology and Pharmaceutical Sciences University of California Irvine, California, USA
Professor Juan Manuel Borthagaray
University of Buenos Aires Instituto Superior de Urbanismo Buenos Aires, Argentina
Dr. Vladimir B. Britkov
Institute for Systems Analysis Russian Academy of Sciences Moscow, Russia
Dr. Franco M. Buonaguro
Istituto Nazionale dei Tumori "Fondazione G. Pascale" Napoli, Italy
Dr. Jacques Bus
European Commission Information Society and Media Directorate-General Brussels, Belgium
Dr. Gina M. Calderone
Hydrogeology Department ECC Marlborough, Massachusetts, USA
Dr. Gregory Canavan
Los Alamos National Laboratory Physics Division Los Alamos, New Mexico, USA
Dr. Nathalie Charpak
Kangaroo Foundation Bogota, Colombia
Dr. Yong-Sang Choi
Earth, Atmospheric and Planetary Sciences Massachusetts Institute of Technology Cambridge, Massachusetts, USA
Professor Luisa Cifarelli
Physics Department University of Bologna Bologna, Italy
Dr. Terry Collins
Thomas Lord Professor of Chemistry Department of Chemistry Carnegie Mellon University Pittsburgh, Pennsylvania, USA
Professor Yuan Daoxian
UNESCO International Research Center on Karst Guangxi, P.R. China
Professor Pierre Darriulat
Vietnam Auger Training Laboratory Institute for Nuclear Science & Technology Hanoi, Vietnam
Dr. Socorro de Leon-Mendoza
Jose Fabella Memorial Hospital Neonatology Unit Manila, Philippines
Dr. Carmen Difiglio
Office of Policy and International Affairs U.S. Department of Energy Washington, DC, USA
Dr. Mbareck Diop
(Former) Science & Technology Advisor to the President of Senegal Dakar, Senegal
Robert V. Duncan
Vice Chancellor of Research University of Missouri Columbia, Missouri, USA
Professor Wolfgang Eichhammer
Fraunhofer Institute for Systems and Innovation Research Karlsruhe, Germany
Professor Merab Eliashvili
Theoretical Physics Department A. Razmadze Mathematical Institute Tbilisi, Georgia
Professor Christopher D. Ellis
School of Natural Resources and Environment University of Michigan Ann Arbor, Michigan, USA
Professor Christopher Essex
Department of Applied Mathematics University of Western Ontario London, Ontario, Canada
Dr. Lorne Everett
Chancellor, Lakehead University Thunder Bay, Canada and Haley and Aldrich, Inc. Santa Barbara, California, USA
Professor William Fulkerson
Institute for a Secure and Sustainable Environment University of Tennessee Knoxville, Tennessee, USA
Dr. Bertil Galland
Writer and Historian Buxy, France
Professor Richard L. Garwin
Thomas J. Watson Research Center IBM Research Division Yorktown Heights, New York, USA
Professor Alberto Gonzalez-Pozo
Theory and Analysis Department Universidad Autónoma Metropolitana Mexico D.F., Mexico
Dr. Dale W. Griffin
United States Geological Survey Florida Integrated Science Center Tallahassee, Florida, USA
Dr. John G. Grimes
Former Assistant Secretary and Chief Information Officer, U.S. Department of Defense, Washington, DC, USA
Professor Mohamed H.A. Hassan
Third World Academy of Sciences Director, ICTP Campus Trieste, Italy
Dr. John A. Haynes
National Aeronautics and Space Administration Washington, DC, USA
Dr. Jerrold Heindel
National Institute of Environmental Health Sciences Research Triangle Park, North Carolina, USA
Dr. Udo Helmbrecht
Federal Office for the Security of Information Technologies, President Bonn, Germany
Robert Huber (Vatican Session)
Max Planck Institut für Biochemie, Martinsried, Germany
Professor Yuri Antonovitch Izrael
Institute of Global Climate and Ecology Director Moscow, Russia
Dr. Peter Jackson
Cambridge Energy Research Associates Senior Director Cambridge, Massachusetts, USA
Professor Leonardas Kairiukstis
Laboratory of Ecology and Forestry Kaunas-Girlonys, Lithuania
Dr. Hisham K. Khatib
World Energy Council Amman, Jordan
Dr. William Kininmonth
Australasian Climate Research Kew Victoria, Australia
Dr. Vasily Krivokhizha
International Department Federal Assembly of the Russian Federation Moscow, Russia
Dr. Lee Lane
American Enterprise Institute Washington, DC, USA
Professor Tsung-Dao Lee
Department of Physics Columbia University New York City, New York, USA
Professor Axel Lehmann
Institute for Technical Computer Sciences German Armed Forces University München Neubiberg, Germany
Dr. Sally Leivesley
Newrisk Limited London Managing Director London, UK
Dr. Giovanni Levi
Centre National de la Recherche Scientifique Évolution des Régulations Endocriniennes Paris, France
Dr. Mark D. Levine
Lawrence Berkeley National Laboratory Environmental Energy Technologies Division Berkeley, California, USA
Professor Mingyuan Li
EOR Research Center China University of Petroleum Beijing, P.R. China
Professor Richard Lindzen
Earth, Atmospheric and Planetary Sciences Massachusetts Institute of Technology Cambridge, Massachusetts, USA
Dr. Mark B. Lyles
Research Program Integration and Mission Development Bureau of Medicine and Surgery Washington, DC, USA
Dr. Michael C. MacCracken
Climate Institute Chief Scientist for Climate Change Programs Washington, DC, USA
Bruno Maraviglia (Vatican Session)
Dept. of Physics, University "La Sapienza", Roma, MARBILab, Enrico Fermi Centre, Roma, Fondazione Santa Lucia, Roma
Professor Sergio Martellucci
Faculty of Engineering Universita degli Studi di Roma "Tor Vergata" Rome, Italy
Dr. Daniel Masiga
International Centre of Insect Physiology and Ecology (ICIPE) Nairobi, Kenya
Dr. Charles McCombie
McCombie Consulting Gipf-Oberfrick, Switzerland
Professor Stephen McIntyre
University of Guelph Toronto, Ontario, Canada
Dr. Leon Michaud
International Programme for the Rehabilitation of the Meteorological Service of Iraq, WMO Geneva, Switzerland
Dr. Akira Miyahara
National Institute for Fusion Science Tokyo, Japan
Dr. John Peterson Myers
Environmental Health Sciences Charlottesville, Virginia, USA
Dr. Giuseppe Nardoni
I&T Nardoni Institute srl Folzano-Brescia, Italy
Dr. Andre Naudi
(Former) Director of Finance and Human Resources CERN Geneva, Switzerland
Dr. Rodney F. Nelson
Senior Vice President for Technology and Strategy, Schlumberger Ltd. Houston, Texas, USA
Dr. Karen Peabody O'Brien
National Institute of Environmental Health Sciences Charlottesville, Virginia, USA
Dr. Nicolas Olea
Laboratorio Investigaciones Medicas Hospital Universitario San Cecilio Granada, Spain
Dr. Jef Ongena
Plasmaphysics Laboratory Ecole Royale Militaire Brussels, Belgium
Professor Paola Palanza
Dipartimento di Biologia Evolutiva e Funzionale Universita' di Parma Parma, Italy
Professor Gennady Palshin
ICSC World Laboratory Branch Ukraine Kiev, Ukraine
Professor Garth W. Paltridge
Australian National University and University of Tasmania Hobart, Australia
Dr. Judit M. Pap
NASA Goddard Space Flight Center Greenbelt, Maryland, USA
Professor Frank Leon Parker
Environmental Engineering Vanderbilt University Nashville, Tennessee, USA
Professor Stefano Parmigiani
Evolutive and Functional Biology Universita di Parma Parma, Italy
Professor Duy Hien Pham
Vietnam Agency for Radiation and Nuclear Safety Control Hanoi, Vietnam
Professor Guido Piragino
General Physics Department Universita degli Studi di Torino Torino, Italy
Professor Juras Pozela
Lithuanian Academy of Sciences Vilnius, Lithuania
Professor Ramamurti Rajaraman
School of Physical Sciences Jawaharlal Nehru University New Delhi, India
Professor Aleksander K. Rebane
Physics Department Montana State University Bozeman, Montana, USA
Dr. Andrea Rigoni
Booz & Company Rome, Italy
Dr. James Rispoli
Office of Environmental Management Washington DC, USA
Professor Edward S. Rubin
Engineering and Public Policy College of Engineering Carnegie Mellon University Pittsburgh, Pennsylvania, USA
Professor Zenonas Rudzikas
ICSC World Laboratory Branch Lithuania Lithuanian National Academy of Sciences Vilnius, Lithuania
Dr. Juan Ruiz
San Ignacio Hospital Santafe de Bogota, Colombia
Professor Nicholas P. Samios
Brookhaven National Laboratory Department of Physics Upton, New York, USA
Dr. K.K. Satpathy
Indira Gandhi Centre for Atomic Research Tamil Nadu, India
Professor Hiltmar Schubert
Fraunhofer-Institut für Chemische Technologie, ICT Pfinztal, Germany
Professor Geraldo Gomes Serra
NUTAU University of São Paulo São Paulo, Brazil
Dr. Adnan Shihab-Eldin
Kuwait Foundation for the Advancement of Sciences Safat, Kuwait
Professor Herman H. Shugart
Center for Regional Environmental Studies The University of Virginia Charlottesville, Virginia, USA
Dr. Giorgio Simbolotti
Senior Advisor on Energy Technology ENEA-President's Office Rome, Italy
Professor K.C. Sivaramakrishnan
Chairman, Centre for Policy Research Delhi, India
Dr. Annette Sobel
University of Missouri Vice President Columbia, Missouri, USA
Professor William A. Sprigg
Institute of Atmospheric Physics University of Arizona Tucson, Arizona, USA
Professor Katepalli Sreenivasan
The Abdus Salam International Centre for Theoretical Physics Director Trieste, Italy
Professor Friedrich Steinhausler
Physics and Biophysics Division University of Salzburg Salzburg, Austria
Dr. Bruce Stram
Element Markets Houston, Texas, USA
Professor Honglie Sun
Geographic Sciences and Natural Resources Research Institute Dep. Head Chinese Academy of Sciences Beijing, China
Professor Kyle Swanson
Department of Mathematical Sciences University of Wisconsin-Milwaukee Milwaukee, Wisconsin, USA
Professor Jan Szyszko
University of Warsaw Warsaw, Republic of Poland
Dr. Masao Tamada
Environmental Polymer Group Environment and Industrial Materials Research Division Quantum Beam Science Gunma, Japan
M.J. Tannenbaum (Vatican Session)
Brookhaven National Laboratory Upton, New York, USA
Dr. Wim Thielemans
School of Chemistry, Faculty of Science University of Nottingham Nottingham, UK
Dr. Richard C. Thompson
School of Biological Sciences University of Plymouth Plymouth, UK
Dr. Hamadoun I. Toure
Secretary General International Telecommunications Union Geneva, Switzerland
Anastasios Tsonis (Vatican Session)
Department of Mathematical Sciences, University of Wisconsin-Milwaukee Milwaukee, Wisconsin, USA
Dr. Valery Vengrinovich
Institute of Applied Physics of Belarus National Academy of Science Minsk, Belarus
Dr. Frederick S. vom Saal
Division of Biological Sciences University of Missouri Columbia, Missouri, USA
Dr. John C. Warner
Warner Babcock Institute for Green Chemistry Wilmington, Massachusetts, USA
Dr. Henning Wegener
Ambassador of Germany (ret.) Information Security Permanent Monitoring Panel, World Federation of Scientists Madrid, Spain
Dr. Rick Wesson
Support Intelligence Inc. CEO San Francisco, California, USA
Dr. Jody Westby
Global Cyber Risk LLC CEO Washington, DC, USA
Dr. Crispin Williams
CERN Geneva, Switzerland
Professor Richard Wilson
Department of Physics Harvard University Cambridge, Massachusetts, USA
Dr. Lowell Wood
Hoover Institution, Stanford University Stanford, California, USA
Professor Maw-Kuen Wu
Institute of Physics, Director Academia Sinica Taipei, Taiwan
Professor Jun Xia
Department of Hydrology and Water Resources Chinese Academy of Sciences Beijing, P.R. China
Professor Xiliang Zhang
Institute of Energy, Environment and Economy Tsinghua University Beijing, P.R. China
Professor Jie Zhuang
Institute for a Secure and Sustainable Environment University of Tennessee Knoxville, Tennessee, USA
Professor Antonino Zichichi
CERN, Geneva, Switzerland and University of Bologna, Italy and Centro Enrico Fermi, Italy
SESSION 20 ETTORE MAJORANA ERICE SCIENCE FOR PEACE PRIZE Pontifical Academy of Sciences The Vatican, 25 November 2009 SCIENTIFIC SESSION
Why Science is Needed for the Culture of the Third Millennium
THE IMPACT OF DIGITAL TECHNOLOGIES AMONG CHILDREN OF DEVELOPING COUNTRIES
ANTONIO M. BATTRO
Chief Education Officer, One Laptop Per Child. [email protected]
To fight against ignorance and poverty we need to give a sound education to all. This is one of the most relevant Millennium Goals of the United Nations (2000). Today we have the resources to put every school on the planet on a common digital platform, to end parochial and cultural isolation, and to start the construction of a planetary network where knowledge can be processed and recycled. Every child has the right to be educated, and today this education is not conceivable without the proper support of a digital environment. The enormous digital gap in developing countries is unjust and dangerous. We are excluding millions of children from the formidable advantages of a digital environment without borders. This discrimination promotes despair and hate and will lead to increasing poverty in the new generations. We need to eliminate the digital gap. Humanity is unfolding a new "cognitive environment", a new mental capacity that we can share above all kinds of frontiers: geographical, political, economic, social and cultural. This is the first time in history that education has the possibility to make a leap of orders of magnitude. Imagine that every kid and teacher in a country owns his or her own laptop connected to the Internet. This can become an explosion of learning and teaching, of creativity and innovation, that goes well beyond the physical limits of a school. The old paradigm that concentrates desktop computers in a school laboratory has been replaced by the distribution of laptops to the whole school population. It means that we have an "expanded school" at home, after class, during vacations... This is a revolution. The digital technology is more than a tool; it is not just a new version of a pencil or a blackboard to be used in the school; it generates a new "workspace", an eco-system, to develop our intelligence, and it affects our community. The revolution is the improvement of our mind-brain potential. A new generation of "digital natives", children who speak "digitalese" with ease as a second language, is growing. A new "digital intelligence" has appeared. In order to expand the benefits of this "cultural mutation" among the children of the world, starting with the poor, we need the collaboration of many. We need millions of expert teachers and volunteers to lead the way, hundreds of millions of computers to be deployed and connected to the Internet, and an incredible wealth of talents to develop new ways of teaching and learning. It is a daunting task for humanity, but we can start to produce an irreversible change in education if we reach the proper scale. One of the first programs already in place is OLPC, the One Laptop Per Child Foundation created by Nicholas Negroponte (www.laptop.org). In a couple of years OLPC has deployed more than two million laptops in some thirty countries and is growing. It is not a dream; it is a reality that brings hope of a better education for all. Other programs will follow. Uruguay is a world leader in this aspect because it has already "saturated" the whole school population. No child, no teacher is left behind; all have their
own laptops and the results are well beyond our expectations. It is a triumph of equity over discrimination (www.ceibal.edu.uy). Many other countries are following the same trend and a formidable change in education is taking place. We can appreciate different paths of deployment and implementation in each location. This variability is a source of richness; local cultures and languages are being promoted, and different styles of teaching and learning appear. We celebrate this cultural diversity in a globalized world.
REFERENCES
1. Battro, A.M. (2002). "The computer in the school: a tool for the brain." In Challenges for Science: Education for the Twenty-First Century. The Vatican: Pontifical Academy of Sciences.
2. Battro, A.M. (2004). "Digital skills, globalization and education." In M. Suarez Orozco and D. Baolian Qin-Hillard (Eds.), Globalization: Culture and Education in the New Millennium. San Francisco: California University Press.
3. Battro, A.M. and Denham, P.J. (2007). Hacia una inteligencia digital. Buenos Aires: Academia Nacional de Educación.
4. Battro, A.M. (2007). "Homo Educabilis, a neurocognitive approach." In What is Our Real Knowledge About the Human Being? Pontifical Academy of Sciences, Vatican.
5. Battro, A.M., Fischer, K.W. and Léna, P.J. (Eds.) (2008). The Educated Brain: Essays in Neuroeducation. Cambridge: Cambridge University Press.
6. Battro, A.M. (2008). "Predictability: Prophecy, prognosis and prediction." In Predictability in Science: Accuracy and Limitations. Pontifical Academy of Sciences, Vatican.
7. Battro, A.M. (2009). "Multiple intelligences and constructionism in the digital era." In Multiple Intelligences Around the World. San Francisco: Jossey-Bass/Wiley.
8. Battro, A.M. (2009). "Digital intelligence: the evolution of a new human capacity." In Scientific Insights into the Evolution of the Universe and of Life. Pontifical Academy of Sciences, Vatican.
THE CRUCIAL ROLE OF SCIENCE (AND SCIENTISTS) IN PUBLIC AFFAIRS: A SUGGESTION FOR COPING WITH TERRORISM
RICHARD WILSON
Department of Physics, Harvard University, Cambridge, Massachusetts, USA
Firstly, I thank the organizers of this meeting, Professor Antonino (Nino) Zichichi and Monseigneur Sánchez Sorondo, for inviting me to this meeting and thereby giving me the opportunity to express and explain my views to this distinguished audience. I start with a couple of quotations by people more distinguished and powerful than myself. The first is by Professor Wolfgang H. Panofsky, one of the brightest men of the last century, who was my wife's brother-in-law. He was known by his childhood nickname (Pief) to a dozen U.S. presidents. Three days after his death the following words appeared in an "op-ed" in the San Francisco Chronicle: "Scientific-technical realities cannot be overruled by political decisions without resulting in grave risks to the nation." (Panofsky, 2007). This was referring to the emphasis on anti-ballistic missiles (ABMs). He had testified to the U.S. Senate about them in 1969, and the ABM treaty was a response to this and other scientists. But he was ignored, as were Bethe and Garwin, when President Bush withdrew from the ABM treaty and proposed ABM systems in the Czech Republic and Poland. "Humanity has been the subject of vicious attacks from extremists. Undoubtedly scientific centers that embrace all peoples are the first line of defense against extremists." This was said by an unlikely person-King Abdullah of Saudi Arabia-when he opened a new University in 2009. Indeed, however, King Abdullah and his predecessor understood the importance of science, although they were not scientists, and had two PhD scientists, one each from Harvard and Stanford, in the cabinet. Today others will talk on the importance of science in general. I will restrict my talk to the importance of Science in Public Affairs-in short, in politics. Even this is a subject that is large and general. I will talk about a subset with which a panel of the World Federation of Scientists, meeting in Erice, has become concerned. I will talk about a specific set of proposals made by the Permanent Monitoring Panel on Terrorism (PMPT) of the World Federation of Scientists, and its subgroup, the Permanent Monitoring Panel on Mitigation of Terrorist Acts (PMPMTA), for limiting the effects of terrorism. As one thinks about terrorism one realizes that it is useful to think through the problems from the beginning. Can one stop someone becoming a terrorist? Can one prevent a potential terrorist from having access to weapons? Can one prevent a potential terrorist from approaching a vulnerable target? Can one mitigate the effects of a terrorist action? The mitigation of the effects of a possible action is difficult because of the huge number of possible targets open to a single terrorist. Mitigation may seem useless and impossible. But we start with a couple of important statements which are not proven and are only assumptions, but assumptions we believe.
1. If the potential terrorist knows that the effects of an attack on a particular target or type of target are limited, it is less likely that he or she will choose this action.
2. Many potential terrorist actions are to create a situation which might occur naturally.
These have been realized for 35 years by professionals. I quote from memory a conversation on this subject in 1979 with Professor Norman Rasmussen, the chairman of the "Reactor Safety Study". We were discussing terrorism just after the Three Mile Island accident: "There is nothing a terrorist can do that those clowns (the reactor operators at TMI) did not do on their own". More professionally put, one must examine the "Low Probability High Consequence" accident scenarios. These were widely ignored before 1976, but are now taken very seriously by almost all industries. Alas, the U.S. building industry still does not take this seriously, even after the (preventable) fall of the towers at the World Trade Center in 2001. It was less obvious to the biomedical community that preparation for a natural outbreak of disease is the best preparation for a release of disease vectors by a terrorist. But after SARS most of that community was converted. This, then, became a focus of discussion. Before I address the specific recommendations I will address a few reasons for the difficulty in communicating science to the public in these situations. I will of course greatly simplify what has indubitably been the subject of many PhD theses. In 1820 or so the same man:
1. Understood the science
2. Applied it to technology
3. Explained it to legislatures
4. Got approvals
5. Oversaw the application in practice.
Perhaps the last of these scientists who understood almost everything was Gauss. Alas, a problem with the common currency in the European Union is that we no longer see Gauss, and his Gaussian (normal) distribution, on a 10 Mark note. But we also had Helmholtz in Germany, and Maxwell and Thomson (Kelvin) in the UK. But now each of these 5 steps is, at best, a different man, but more commonly a different department. Each person in the department has his own incentives and constraints which can inhibit transfer of information up or down the chain. The basic science can, and often does, get lost and stupid decisions are made. I think we can all urge that politicians must reach down and understand basic science, and also that scientists must not stay in an ivory tower but reach up and insist that the public and politicians base their decisions on science. The compartmentalization in the process of communication was perhaps worse in the USSR than in other developed countries. It was dangerous to criticize or even comment upon the work done by people in another compartment. There were few scientists who understood the weaknesses in the RBMK reactor-both in design and careless operation. It is widely believed in the USA that if such criticism had been made and considered, the Chernobyl accident would not have occurred. At this point we note that science is not technology but only related to it. The
newspapers and many in the general public and politicians are confused about this. Technological advances depend upon scientific understanding and are strictly limited by basic scientific facts. Inversely, in scientific inquiry, it is important to be able to use the best technological apparatus that is available. Once it is realized that addressing a natural accident with large consequences is a crucial step in addressing response to terrorism, it follows that FIRST RESPONDERS must learn and understand the science that underlies accidents with large consequences, even though these are improbable under ordinary circumstances. In considering the possibility of a "dirty bomb", the release of a 10,000 Curie source in a busy, crowded area such as Wall Street, we can look at a situation where such a source was released by accident in Brazil. In that case, a thief broke into an abandoned clinic, broke into a locked area which contained the source, and spread it in the community. Children took the fluorescent powder on their faces. It was perhaps a week before the hazard was realized. But less than a dozen people died, although a few more got an increase in cancer risk, calculable under the usual pessimistic assumptions but unverifiable. But depending on the rules for clean-up and reentry, many square kilometers of a major city could be unusable for 30 or more years. Some of us think that the rules for reentry in an area with a high radioactivity level are too restrictive. They were put in place by the international community soon after the Chernobyl accident, contrary to the advice of the UK National Center for Radiological Protection, who alone of all national groups had thought about the problem and issued a report that was published just a week before the accident. But the PMP has suggested ways in which the adverse effect can be avoided. Trained "Radiation Workers" can enter places, with their radiation monitors, at levels that would not be permitted to the general public. Critical employees in city organizations could get radiation training, preferably in advance. I have had radiation training for over 60 years, recently rechecked both at Harvard University and Jefferson National Accelerator Laboratory. First responders should have several people on their rolls who understand radiation, and perhaps consultants who would be willing to drop their other occupations at a moment's notice. But this is not being urged as it should be and, indeed, there are more stupid restrictions being put in place even as we discuss the matter. For example, the latent period for many cancers is 20 years, and perhaps the same for radiation-induced cancers. For example, if I at age 83 were to get an overdose I might get cancer at 103, and that would be the least of my worries. Yet the UK recommends that only people under 50 actively be involved if there is a radiation release-the inverse of the sensible restriction! After WWII, my colleague Professor Kenneth Bainbridge, who had set off the first atomic bomb at Alamogordo and became famous for saying: "We'll all be called sons of bitches now", pointed out that physicists were likely to be asked in the future about radiation, and insisted that in the (compulsory) advanced physics laboratory everyone get radiation instruction. I was of this generation. In the 1970s I went to the "Television School" run by Jack Hilton in New York, learning to make my point in the 2-minute (less commercial) time.
I learned in detail about the epidemiology and the data on radiation problems and was willing to take the trouble-unpaid-to find out. As a result, I have been in demand as an independent expert in the well-known accidents at Three Mile Island, Chernobyl and the Japanese criticality accident of a few years ago. At TMI
not one newspaper, to my knowledge, quoted the accurate press releases of the U.S. government Nuclear Regulatory Commission. But I was on the TV six times in 2 days with that knowledge. At TMI not one major newspaper got the units right-confusing R and milliR and, worse, mR and mR per hour. Even at the Japanese criticality accident the New York Times quoted the dose NUMBER correctly but put it in R, not milliR, thereby changing an accident into a disaster. Fortunately I had called Japan during the night and had the correct number, and was able to stop National Public Radio from making the same mistake. More details about this suggestion are in my presentation to the PMPMTA in August 2009 (Wilson, 2009a). From 1939-1943 incendiary bombs fell on houses in London. But the population was prepared. Inflammable material was removed from the top floors and replaced with a bucket of water, a bucket of sand and a small stirrup pump. There were three minutes to put out the bomb before it burnt through the floor and set the whole house alight. The three minutes was enough in many cases. There are now more reasons for early decisions. The Chairman of the PMPMTA has repeatedly emphasized that the first 10 minutes after an accident are crucial (Leivesley, S., 2008). That is when the mitigation steps are decided. This is the time when the trained radiation worker, with equipment in good shape, is crucial. Often, however, communication networks can prove inadequate, and this might particularly apply if terrorists deliberately block them. This leads to a suggestion which the PMP feels should be explored, where the basic scientists can help (Wilson, 2009b). High Energy Physicists have dedicated fast computer networks with experts who understand hackers. These are at CERN and DOE. They are operated by groups independent of "industry" and Government Departments, and are perhaps more believable than the government directly. The European Union in May 2009 proposed greater cooperation with CERN. I propose that CERN and the U.S. DOE jointly offer their computer network to the world in an emergency. High energy physics data analysis could stop for a while. Management might be by CERN and FERMILAB. Additionally, the physicists at CERN and DOE laboratories could be encouraged to volunteer. They could be trained to understand radiation in advance, as I was. They could be "vetted" by the World Federation of Scientists (WFS) or by an independent professional society. Interestingly, Andrei Sakharov in the late 1980s proposed a similar committee procedure, not for short-term assistance but to make serious policy recommendations to the government. Government experts would merely testify to the committee. This procedure was adopted in the organization of the Sakharov conference in Moscow in 1991. Bioterrorism is very different from terrorism with dirty bombs. The way an epidemic develops is very complex, but one can derive from the data two crucial numbers:
1. The time from infection to the time of infecting others (~1 week).
2. The average number of people infected by each case: 1.5 to 3.
The first number tells us the rate at which an epidemic can develop and the second tells us whether it will. For millennia the only procedure mankind had to combat epidemics was quarantine. Lepers were sent into the desert to avoid infecting others. 200 years ago quarantine was used against yellow fever in the south of the United States. Cuba has used
it against AIDS. China was slow at using this against SARS but has used it recently against H1N1 flu. But the PMP has suggested that we have better procedures (Morse et al. 2006). The main emphasis in the USA has been to vaccinate people. In the USA the Centers for Disease Control (CDC) is widely respected. Since May 2009 their recommendations have been accepted. BUT new vaccines are inherently slow and the other steps are often ignored. We address the number in (2). We only need to reduce this from 1.5 or 3 to 0.99 to stop a pandemic. Suppose 80% of a group were to screen themselves and reduce exposure 10-fold by closing schools, not shaking hands, washing hands, wearing masks, and staying home if they have a high temperature. Then instead of 3 people being infected by one patient, there would be 20% x 3 + 80% x 0.3 = 0.84. The epidemic would be wiped out within a few cycles. Although this principle was used in wiping out smallpox, spearheaded by WHO, and is being used in wiping out elephantiasis, this possibility at the earlier stage is not understood and is still widely ignored. Fortunately there are scientists who are studying the development of the H1N1 influenza, which has already killed 7000 persons in the USA, half from secondary causes (Yang Yang et al. 2009). They do not in this paper derive the simple numbers above, but do note that 1/3 of all cases are from transmission from others in the home or school. Many persons have suggested that school buses are particular places for incubating diseases. I will always avoid in future traveling back by airplane from a resort location just before school restarts. The last time I did this I picked up a pneumonia virus. Even if one member of a family has a fever, they will not postpone travel because the airline will charge $200 per ticket for cancellation. There is a technological advance that shows promise of reducing (1)-the time before a disease cluster is diagnosed. It could cut the time to recognize an emergency from 1 week to 1 hour. Representative digital temperature measurements can be sent automatically to a semi-central location with date, temperature and location for each entry. A cluster of fever can be located at once. This is being tested by a former student at Beth Israel Hospital in Brookline and the Exergen Corporation in Watertown (Pompei, 2009). Either of these attempts to stop an epidemic needs fast action. It is not clear whether a large dedicated computer network is needed in this case, but it would be a fine gesture if the CERN-DOE network were offered, with WHO and CDC being the lead organizations. The extent to which volunteers would be helpful in this situation is unclear, but they could presumably come from public health school faculty and students and be vetted by CDC and professional societies. In both these cases, the PMPT is suggesting thought and action on both sides of the communication chain. Scientists should reach out and try to bridge all the gaps in talking to the politicians. At the same time media hounds and politicians must be careful to talk about the science and technology in a responsible way. But I give two major warnings. Anyone who volunteers commits him/herself to continuous hard work for a period of a month or so. One has to be patient with people on the phone. One has to reply to phone calls at once. One has to be prepared to go to an accident scene at one's own expense and not wait for the government grant. Also do not expect a reward or to be thanked.
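The reproduction-number arithmetic above can be made explicit with a short sketch. This is not from the original paper; the baseline reproduction number (3), the 80% compliance fraction and the 10-fold exposure reduction are simply the illustrative values quoted in the text, and the function names are assumptions.

```python
# Effective reproduction number under partial behaviour change, following the
# worked example in the text (illustrative values only).

def effective_r(r0: float, compliance: float, reduction_factor: float) -> float:
    """Average secondary infections per case when a fraction `compliance`
    of people cut their exposure by `reduction_factor`."""
    return (1.0 - compliance) * r0 + compliance * (r0 / reduction_factor)

def project_cases(initial_cases: float, r_eff: float, generations: int):
    """Expected case counts per infection generation (~1 week each)."""
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(cases[-1] * r_eff)
    return cases

if __name__ == "__main__":
    r_eff = effective_r(r0=3.0, compliance=0.8, reduction_factor=10.0)
    print(f"Effective R = {r_eff:.2f}")  # 0.2*3 + 0.8*0.3 = 0.84, i.e. below 1
    for generation, n in enumerate(project_cases(1000, r_eff, 10)):
        print(f"generation {generation:2d}: ~{n:,.0f} expected cases")
```

Because the effective reproduction number falls below 1, the expected number of cases shrinks geometrically with each infection generation, which is the point of the 0.84 figure in the text.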
If you are right, few people will have even radiation damage and there will be no pandemic. Then the public may erroneously say the restriction was unnecessary. The same principles apply widely to other situations even when there is not the
urgency of coping with terrorism. Understand the fundamentals of the situation you are describing and emphasize them at all times. Units and words all mean something, and they must be used carefully and as precisely as you can. Try not to talk down to the public. In sports language: KEEP YOUR EYE ON THE BALL! To illustrate this I describe two related topical examples where misuse of words by experts still confuses the public.

The first is about the energy situation in the country. I found even last week a paper by two physicists who should know better talking about Energy Conservation. Yet we have known for 150 years that energy is conserved! Most people stopped making this mistaken utterance by saying that we should conserve fuel, without saying why. Who, for example, wants to conserve solar energy? Now it is popular to talk about energy efficiency. Fuel efficiency would be better, but as we say it, it becomes clear that it is only a partial solution. Persuading people not to heat their houses as much or to avoid unnecessary journeys is important. That might be fuel restraint. Also, if public policy concentrates on efficiency in the use of carboniferous fuels, that leaves nuclear power and carbon sequestration with no incentives. But we must also be careful to define and state the boundaries of the problem. If we emphasize efficient use of coal, we must use the uranium therein. This could, for example, generate between 2 and 10 times as much energy as the carbon. Even Dick Garwin and other proponents of using uranium in sea water never proposed this logical step! But I personally, born next to a coal-fired power plant, cringe when I hear the words "clean coal". Has someone come with a paintbrush and whitewashed it?

In discussion of climate change there is another, distinct set of word-usage problems. I hear it said many times per day that CO2 emissions affect climate. They only do so indirectly: CO2 concentrations can and probably do. Discussion of control methods at Copenhagen and elsewhere seems to concentrate on control of emissions, making an analogy with control of sulphur emissions. This analogy is very misleading and scientists should not let it pass unchallenged. There is an apocryphal story of a committee of EPA regulators discussing the desired level of CO2 concentrations. "What level do we want?" From the back of the room came the reply: "Zero, of course", an answer appropriate for sulphur but not for carbon. Moreover, economists argue for regulation as early in the chain as possible. Yet ALL of the EU or U.S. government proposals, and perhaps even more important the discussion of them in the media, are for controlling emissions of CO2 from the millions of emitters, and none of the proposals are for controlling carbon where it is already recorded as it comes out of the ground at the oil well, coal mine, gas field or port of entry. Few media commentators discuss the distinction. We have work to do.

Two examples of careless use of language at the end of 2009 stand out. The Panel on Public Affairs of the American Physical Society carelessly worded a resolution affirming their belief that there is a serious problem demanding attention. This bad wording encouraged critics, including at least two distinguished scientists whom I personally respect, to ask for its recall, which the APS president has declined to do.
In another situation, the group at the University of East Anglia, UK, had been carelessly sending each other e-mails with embarrassing descriptions of how to present the data to disguise uncertainties. Among other things they forgot that e-mails are easier to access and search than the written or spoken word. Laws rightly consider that insults in the written word (libel) are more serious than insults in the spoken word (slander). Insults by
e-mail are clearly worse than libel! I know of no scientist who has changed his opinion as a result of this, but opinion polls in the U.S. show the public have turned sharply against action on a climate-energy bill. This then becomes still another matter on the science-policy interface that demands the attention of the World Federation of Scientists and its able leadership. In particular I call upon the groups concerned to "come clean": admit their mistakes clearly and help us to move on.

But on the twin issues of climate and energy there is no international political consensus, as the zoo at Copenhagen in December 2009 showed. The scientists meeting in Erice have been vocal, but it has not been enough. The chairman, Nino Zichichi, has constantly urged at the Erice meetings that concise recommendations be provided that might influence government action. In the last few years the Energy PMP has responded, and members thereof have also taken these recommendations to all the decision-makers that they personally know. I, for example, gave the 2007 recommendation (Energy PMP, 2007) on controlling carbon as it comes out of the ground to U.S. Representative Edward Markey at a small reception at Harvard University, with another copy to his staff. But no sign of it appears in the 1000-plus pages of the Waxman-Markey bill being discussed in the U.S. House of Representatives. Others have similar stories. We must redouble our efforts.

In this brief paper I inevitably express my own opinion and give examples from my own personal experience. I know that others have very similar experiences. I think that my views are consonant with those of both the PMPMTA and the Energy PMP. I thank the members of these groups, and the speakers at the Erice meetings and elsewhere, for having molded my opinion.

REFERENCES
1. American Physical Society (APS) (2009). No longer easily available on their website.
2. Energy PMP (2007) "Simple Upstream Control of Carbon". Available on the PMP website or at http://physics.harvard.edu/~wilson/energypmp/2007carboncontrol.doc. Other recommendations of the Energy PMP are similarly available.
3. Panofsky, W.K.H. (2007) "Missiles no defense", San Francisco Chronicle, September 25th. http://sfgate.com/cgi-bin/article.cgi?f=/c/a/2007/09/26/EDGBSA081.DTL
4. Wilson, R. (2009a) "The need for a corps of radiation workers for immediate assignment". Presented at the PMPMTA, August 2009. Available at http://pmpmta.org
5. Wilson, R. (2009b) "Establishment of a scientifically informed Rapid Response system". Presented at the PMPMTA, August 2009. Available at http://pmpmta.org
6. Yang Yang et al. (2009) "The Transmissibility and Control of Pandemic Influenza A (H1N1) Virus", Science 326:729.
WHEN SCIENTIFIC TECHNICALITIES MATTER
CHRISTOPHER ESSEX
Department of Applied Mathematics, University of Western Ontario, London, Ontario, Canada

THE PONTIFICAL ACADEMY OF SCIENCES, NOVEMBER 2009

Thank you for the opportunity to say a few words on science and culture. Have you ever dismissed a technicality that mathematicians are accused of obsessing over? You can get away with it sometimes, if you know what you are doing. But technicalities, not just from mathematicians, are often dismissed or overlooked by those with no clue. Today I want to talk about how that happens, from partly a cultural standpoint, and what consequences can arise, making seven points to that end.

To begin, I recall my late applied mathematics colleague, Prof. Marita Chidichimo, who once visited a classroom. She asked the teacher why the children were grouped around special tables, playing with little chips that had mysterious markings on them. The teacher said that the children were at a "learning centre" where they were to discover the rules of mathematics on their own. Marita said to the teacher, "Are you crazy? You can't expect seven-year-olds to figure that out on their own. It took humanity 4000 years to figure that stuff out."

DISCOVERING NEW SCIENCE IS HARDER THAN LEARNING WHAT HUMANITY HAS ALREADY FIGURED OUT
It's much easier to learn what humanity has already figured out than to discover new scientific knowledge. When doing something entirely new, we are like those children trying to discover mathematics from scratch. Two misconceptions make this hard to grasp:
1. There is nothing new to discover in the world. Some expert, somewhere, can be found to articulate whatever is knowable. If you don't believe in new things, it's tough to understand what discovery means.
2. Things seem simpler in hindsight. There's a saying among mathematicians: a new mathematical idea is completely incomprehensible, until you get it; then suddenly it's trivial.

NEW SCIENTIFIC KNOWLEDGE BUILDS ON THE PAST OVER LONG TIMESCALES

It's just good sense to use what we already know. But those elements from the past on which we build may languish for generations before someone puts them together. Consider the late twentieth-century development of the fractal. The Sierpinski triangle (or gasket) is one of the most ubiquitous fractals. It's defined by a sequence of triangular objects. You remove a small triangular area from the
center of each of the largest triangles within it to create a new member in the sequence. Each member is referred to as a Sierpinski triangle of an iteration depth numbered according to its position in the sequence. Of course the fractal is actually the limit of this sequence, but the limit is a mathematical ideal. Finite-iteration-depth fractals are widely used in modern research.
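As an aside, the construction just described is easy to reproduce numerically. The short Python sketch below is a minimal illustration, not part of the original talk: it subdivides each triangle into its three corner sub-triangles and discards the middle one, down to a chosen iteration depth.

    # Minimal sketch: triangles of a Sierpinski gasket at a given iteration depth,
    # built by keeping the three corner sub-triangles and discarding the middle one.

    def midpoint(p, q):
        return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

    def sierpinski(triangle, depth):
        """Return the list of small triangles (each a tuple of three (x, y) points)
        making up a Sierpinski triangle of the given iteration depth."""
        if depth == 0:
            return [triangle]
        a, b, c = triangle
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        corners = [(a, ab, ca), (ab, b, bc), (ca, bc, c)]  # middle triangle (ab, bc, ca) is removed
        result = []
        for t in corners:
            result.extend(sierpinski(t, depth - 1))
        return result

    if __name__ == "__main__":
        base = ((0.0, 0.0), (1.0, 0.0), (0.5, 0.75 ** 0.5))  # an equilateral triangle
        tiles = sierpinski(base, 3)   # iteration depth three, as on the Vatican floor
        print(len(tiles))             # 3**3 = 27 small triangles

At depth three the sketch yields 27 small triangles, the pattern discussed in the tiling example that follows.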
I once took this photo of a unique floor-tiling pattern. Let's extract an element of it.
Clearly this is a Sierpinski triangle of iteration depth three.
Overall the original floor is not fully self-similar, but it surely is to a remarkable degree given its age. It's the floor of the room containing Raphael's magnificent fresco, "the School of Athens" in the Vatican Museum. On this visit I also found it on the floor
of the Sistine Chapel. It's by the altar. These tiles were likely set about 400 years before Sierpinski was born! Does that mean that Sierpinski was beaten to his idea? No. There is more to his thinking, but clearly what is on the floor in the Vatican Museum is a fascinating object lesson, not only of timescales, but also of the complex resonance between art and science.

IT TAKES TIME AND SURPRISES HAPPEN WHEN FIGURING OUT HOW TO USE WHAT'S NEW

Long timescales don't just occur within science itself. They also happen in the transfer of technological knowledge to society. An example is the laser, central to modern technology, from computers to medicine. But it took decades and a number of Nobel prizes before research originating with Einstein led to lasers, and decades more to learn to use them. There is more to lasers than Einstein's idea. It was actually only an "element," like the tiling on the floor of the Vatican Museum. While we take lasers for granted, they would surely have surprised Einstein. Another wonderful example is the global positioning system, or GPS, which requires general relativity. General relativity was the quintessential example of a result of pure research that could never be of commercial use. Now it's key to a ubiquitous commercial application. Apparently engineers were so skeptical of general relativity in the original GPS tests for the U.S. Navy years ago that they installed a switch that ran the new system with or without general relativity. There is no switch today. But it also takes time for society to figure out how to use new technology. A good example is the fifty years or so between the introduction of electric motors and the end of manufacturing run by steam. Society doesn't just suddenly use a new technology. It takes time to figure it out socially. This is happening today. How exactly will we ultimately use the Internet?

CLASSICAL PHYSICS IS STILL GOOD

I recently spoke with an educated man who was astonished to hear that classical physics still worked. He had read that modern physics had replaced classical physics. I told him that it would be accurate to say classical physics was retained by modern physics as a crucial limiting case. It wasn't overturned in that sense. The story is one of accrual of knowledge rather than reversal. He thought of knowledge as something that rapidly expired, spoiling like bad milk. For him, catastrophic reversals were the norm. It was like a perpetual and terrible storm of relativism. I told him that knowledge can be divided into two broad types.
1. Invariant knowledge: physical laws, mathematical structures, etc. These types of knowledge accrue over generations, like money in a bank account.
2. Ephemeral knowledge: clothing and hair styles, computer hardware and software, music stars, etc.

We all have obsolete knowledge. I know computer operating-system languages for computers that I will never see again. On the other hand, I know that under suitable
conditions I can be confident in Euclidean geometry, the principles of calculus, or Newton's Laws.

HATE SCIENCE. HATE YOURSELF.

Once, at the Telluride Research Centre in Colorado, artists and musicians were holding a joint event together with scientists. Afterward, one of them singled me out, for some reason. He pushed his face alarmingly close to say accusingly: "I don't like scientists!" "Why?" I said. "Because scientists make bombs." "Well," I said, "maybe artists make art that inspires scientists to make bombs." He stepped back and said with a smile, "I like you." And that was the last I heard from him. Clearly his inner concern had little to do with making bombs. He saw science as something foreign, like something left behind by aliens. This sentiment is shared by many. They hope it's just a fad that will soon pass. But I think denying science in this way is denying an essential part of our humanity.

SCIENCE INVITES DIRECT PERSONAL EXPERIENCE

Relating to this part of our humanity has to involve some direct personal experience with the natural world. In 1995, with that in mind, I arranged to take my telescope to my children's school. There was an annular eclipse of the Sun visible then. My daughter's teacher would have her class watch the eclipse using my telescope. Solar observation is easily done by projecting the image from the eyepiece onto a wall or a large piece of cardboard; many can watch simultaneously. On my arrival the teacher regretfully told me the local school board had banned the children from leaving the shelter of the school because of "dangerous eclipse radiation." Even though the real thing was going on outside, the school board's fear caused them to confine the students indoors, where they would watch the eclipse on television! After some heated telephone conversations, we reached a compromise. They gave me a classroom on the west side of the building. I stood outside alone with my telescope to project the image into the classroom onto a projector screen placed indoors. When it became known that the eclipse could be viewed directly, there was a burst of enthusiasm throughout the school. Humans, teachers and children alike, prefer Nature experienced first hand. The demonstration thus grew to involve nearly the entire school. Class after class took their turn to experience a natural wonder personally, while I explained through a window.

SCIENTIFIC TECHNICALITIES MATTER

A teacher standing in shadow asked me what the cause of the "dangerous eclipse radiation" was. That is an object lesson on how authority or groupthink can overcome common sense. I don't want to be smug about this. I certainly have made my share of mistakes. But the consequences can be more than just a comical reaction to a natural event.
Let me give you an example, other than global warming, where all kinds of political and social tensions complicate the lives of scientists. I once heard the mathematical biologist Alan Perelson give a keynote lecture at an applied mathematics meeting. He explained how his team's work led to the famous multi-drug treatment that allowed people with HIV to live productively for long periods without AIDS. Before that work, virologists believed that HIV was dormant for many years before the patient got AIDS. However, data from new drug therapies, analyzed by Perelson's team using systems of differential equations, showed that HIV wasn't dormant at all. The rate of reproduction was enormous instead. It was the human body's resistance that kept the virus numbers down, until its resistance was finally overwhelmed. Any one drug that worked against HIV would wipe out all but a small fraction of viruses resistant to that drug. Retaining the fast reproduction rate, those drug-resistant versions of HIV would rapidly repopulate the host. This made multi-drug approaches the only thing that could work. Before this realization, virologists were straining for many years under a basic misunderstanding of calculus: a small function does not imply a small derivative. At the time this might just have seemed like one of those technicalities that mathematicians are accused of obsessing over. But this time, overlooking the technicality cost lives.
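To make the calculus point concrete, here is a deliberately crude toy sketch, in the spirit of, but not identical to, Perelson's equations, with purely illustrative numbers: a viral load governed by dV/dt = p - c*V sits almost flat near its steady state even though production p and clearance c*V are both enormous, and switching production off, as an idealized drug would, exposes the fast underlying turnover.

    # Toy illustration (not Perelson's actual system): dV/dt = p - c*V.
    # Near the steady state V* = p/c the derivative is tiny, yet production and
    # clearance are both huge, so a "flat" viral load hides very fast turnover.
    # Setting p = 0, as an idealized drug would, reveals rapid exponential decay.

    p = 1.0e9      # virions produced per day (illustrative number only)
    c = 10.0       # clearance rate per day (illustrative number only)
    V = p / c      # start at the steady state

    dt = 0.001     # days, simple Euler steps
    drug_on_day = 1.0

    for step in range(int(3.0 / dt)):          # simulate three days
        t = step * dt
        production = 0.0 if t >= drug_on_day else p
        dVdt = production - c * V
        if abs(t - 0.5) < dt / 2 or abs(t - 1.5) < dt / 2:
            print(f"t={t:.1f} d  V={V:.2e}  production={production:.2e}/d  "
                  f"clearance={c * V:.2e}/d  dV/dt={dVdt:.2e}")
        V += dVdt * dt

Before the drug the printed derivative is essentially zero while the turnover terms are of order a billion virions per day; half a day after production is blocked the load has already fallen by orders of magnitude.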
THE USE AND MISUSE OF SCIENCE-AN EXAMPLE
ANASTASIOS TSONIS
Department of Mathematical Sciences, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA

In the last two years my collaborator Kyle Swanson and I have published a few papers in which we explain how all major climate shifts observed in the 20th century are due to the natural variability of the climate system. Our theory considers major climate modes such as the North Atlantic Oscillation (NAO), the Pacific Decadal Oscillation (PDO), the North Pacific Index (NPI), and the El Niño/Southern Oscillation (ENSO), and investigates the dynamics of their network. All those modes are of oceanic origin with decadal variability. Those modes apparently do not vary independently and they may synchronize (meaning they "beat" together). We found that when the modes are synchronized and their coupling increases, the synchronized state is destroyed and the climate jumps into a new state, which is characterized by a reversal in the trend of average global temperature.

A simple example of this mechanism (which is also known as synchronized chaos) is the following: consider four synchronized swimmers. As long as they swim by themselves their synchronization is not likely to be destroyed. In other words, they will not mess up their program. If, however, they start holding hands and pulling on each other (an increase in coupling), then most likely their synchronization will be destroyed. We found this mechanism in the observations of the 20th century (Figure 1) and in unforced and forced (by a CO2 increase) climate models. In all we investigated 12 synchronization events in observations and in models and, without any exception, when synchronization was followed by a coupling increase a climate shift occurred. We did not observe even one false alarm. Last year we improved our numerical definition of coupling and we were able to extend our calculations to this century. We found that a new shift occurred around 2001. Since then the very strong positive trend observed since the late 70s has leveled off and may be decreasing. Because of this shift, and because of what happened during previous shifts, we are suggesting a cooling period for the next 20 or so years.

Now we would like to caution the readers on the following issue. These climate shifts, even though they occur naturally, are superimposed on a background positive trend known as "global warming". This global warming is attributed by some to anthropogenic effects. Others do not agree with this and argue that it may also be some low-frequency signal occurring naturally. This is a "hot" topic and we don't think it has been settled beyond doubt. Our feeling is that the truth lies in between. Our interest in this research is to understand natural variability. Humans may have an effect on climate, but climate has a lot of intrinsic variability and, at this point, it appears as if natural forces have overtaken the background warming.
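The published analysis rests on a formal phase-synchronization measure defined on the network of these modes. As a rough stand-in for readers who want to explore the idea, and emphatically not the method used in the papers, one can track the mean pairwise correlation of standardized index time series in a sliding window and look for episodes when the modes move together. The sketch below uses synthetic data; any real application would substitute the observed NAO, PDO, NPI and ENSO indices.

    # Rough illustrative proxy only: mean pairwise correlation of standardized
    # climate indices in a sliding window, as a crude stand-in for the formal
    # network phase-synchronization measure used in the published work.

    import numpy as np

    def sliding_sync(indices, window):
        """indices: array of shape (n_modes, n_years) of standardized indices.
        Returns the mean absolute pairwise correlation in each sliding window."""
        n_modes, n_years = indices.shape
        out = []
        for start in range(n_years - window + 1):
            seg = indices[:, start:start + window]
            corr = np.corrcoef(seg)                       # n_modes x n_modes
            upper = corr[np.triu_indices(n_modes, k=1)]   # off-diagonal pairs
            out.append(np.abs(upper).mean())
        return np.array(out)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        fake = rng.standard_normal((4, 100))   # four synthetic "modes", 100 years
        sync = sliding_sync(fake, window=11)
        print(sync.round(2))                   # peaks would mark candidate synchronization episodes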
Fig. 1: A cartoon of the global temperature record. Green is the actual data, red the shifts in global temperature trend, and blue the underlying low-frequency trend known as "global warming".

After the publication of our work many magazines, newspapers, and internet sites wrote articles on our results. Our experience was that reports were distorted to fit various agendas. The CATO Institute (an anti-global-warming organization) used part of our conclusions to run a one-page statement in the New York Times and the Chicago Tribune telling President Obama that he is wrong in assuming that one of the most urgent issues nowadays is combating global warming (Figure 2). Those who believe that humans are not causing global warming are saying that we did not go far enough, and are using our results to claim that global warming is over. Here is one email we received:

"It is great to see mathematicians stepping into this debate but you did not go far enough... I just urge you to step in farther, and see that the statistical fraud being perpetrated by the IPCC falls fully under your range of expertise (as it does for me, an economist, and pretty much anyone else with any analytical training)... Those of us who are competent to expose this fraud need to stop deferring to the climatologists on this, or they will succeed in unplugging our economy, and western civilization will be destroyed."

At the other end, those who believe that humans are completely responsible are accusing us of "trying to destroy the planet". Here is another email:

"I hope you are enjoying the fame that you have received by trying to destroy our planet..."
In Lieu of Conclusions:
• Scientists should be concerned with reporting the science, not with its politics.
• Scientists are responsible to inform and educate the public about the facts and to correct wrong impressions created by the media.

[The advertisement reproduced in Figure 2 is only partly legible. It quotes President Obama: "Few challenges facing America and the world are more urgent than combating climate change. The science is beyond dispute and the facts are clear," and replies: "With all due respect Mr. President, that is not true."]

Fig. 2: The CATO advertisement.
INNOVATION CANNOT BE PLANNED
ROBERT HUBER 1,2,3
1 Max-Planck-Institut für Biochemie, D-82152 Martinsried, Germany
2 School of Biosciences, Cardiff University, Cardiff CF10 3US, Wales, UK
3 Zentrum für Medizinische Biotechnologie, Universität Duisburg-Essen, 45117 Essen, Germany

Erice, the eponym of the Prize I have the honour to receive today, is connected for me with dear memories of science and culture on the occasion of the meetings on 'structural biology', when the discipline was in its childhood and its scholars numbered not more than a handful. Everybody knew everybody at that time. Today, thousands are busy in this field. Days in Erice, as I remember them, were filled with lectures, discussions and scientific exchange, interrupted by excursions to the Greek temples found close by and by promenades through Erice, the medieval pearl built on much older Greek and Roman foundations. More recently, Sicily captured my wife and me when we spent two weeks in the spring near Catania and on the Aeolian Islands to explore the volcanoes: Etna, Vulcano, Stromboli. It is a mystical scenery, and one may conceive the legend of the Greek philosopher Empedocles throwing himself into the flames of the crater of Etna. We cruised in the Tyrrhenian Sea, but did not get lost as the legend tells us of Ettore Majorana and his voyage with the mail boat from Palermo to Naples. But we lost our hearts to this wonderful scenery of islands, mountains, and the sea with its clear and lucid water.

After these joyful recollections of Erice and recent holidays, let me be serious and make a few remarks on my work and field of research, as it relates to the title of my contribution. I analyse the fine structure of bio-macromolecules in atomic detail to understand the chemistry and physics behind biological phenomena: Seeing is Understanding. Our work on photosynthesis, more than twenty-five years ago, may serve as an illustration. Plants and photosynthetic bacteria collect sunlight, convert it into an electrical current and store its energy in chemical compounds which supply us and the animal world with foodstuff, which we eat, and oxygen, which we breathe. We are familiar with technical solar cells and light collectors made from metallic, semiconducting, and glassy materials, unavailable in the living world of proteins, nucleic acids and small organic molecules. The structures of the proteins and cofactors which we unveiled by using x-rays and crystals revealed to us the chemistry and physics of the biological process of light focusing and electric current generation.

Another example is the utilization of carbon monoxide, used for a living by some bacteria but a deadly poison for higher organisms. The chemical process carried out by these bacteria, called the 'water gas shift reaction', is also used in huge technical plants at very high temperatures and pressures to produce hydrogen. In bacteria it is performed at ambient conditions. We again visualize the protein molecules carrying out this remarkable chemical reaction and find very large protein machines burying metal cofactors of iron, nickel and sulphur, or molybdenum, copper and sulphur. We begin to understand the chemistry catalysed by these enzymes and admire the ingenious way Nature is able to assemble the components, some of which are rare, spontaneously and
precisely. Research as described almost always has an applied aspect, close to or far from realization, but it certainly opens the mind for new ideas and strategies: we may improve technical photocells by examining Nature's solution, and we may modify the technical catalysts of the hydrogen-producing water-gas shift reaction by learning from the bacteria.

There is a branch of medicinal chemistry and pharmacology where information on protein structures has become fully integrated into the discovery and development process. In cases where a certain protein has been identified as the cause of, or an important player in, a disease, we are able to interfere with its action by small-molecule ligands, designed on the basis of the protein structure, following the lock-and-key metaphor of Emil Fischer. The receptor protein is the lock, for which we design and develop a key, guided by the lock's three-dimensional structure. This process, called 'structure-based drug design', is firmly established in the pharma industry and is of central importance for devising new medicaments and novel strategies for therapeutic intervention. It is no less important for crop science and the development of new means of pest control.

I was lucky to start my career as a young researcher in structural biology when the field was in its infancy. Most experiments in that pioneering epoch required the design and development of methods and instruments, complying with my affection for physics, mathematics, and chemistry. Over the years, it was a great pleasure to see the field of protein crystallography expanding and maturing. Protein structures are seen as a basis for understanding biology and the molecular causes of health and disease. A most prominent sign of the recognition of its importance is the fact that four Nobel prizes in Chemistry were given in the last 12 years (five, if I add my own and go back 21 years) honouring research in this field, the last just a few days ago for structural studies on the ribosome.

Fundamental research on proteins, their structures and functions, has paved the way to application in medicine and crop science and suggests new strategies and offers new tools to combat human diseases and control pests in agronomy. Applied protein research has been established in the large pharma and agro companies and has led to the foundation of small, focused, research-intensive enterprises. But we must bear in mind that the basis is fundamental research and discoveries, which are unpredictable. Innovation occurs when the knowledge obtained from research finds practical application. Turning this scenario into reality requires support by technology transfer programs of governments, universities, institutes and companies. The problem is that innovation is fairly impossible to plan. This is borne out by the low success rates of global pharmaceutical research. New blockbuster drugs rarely come about because of targeted research. Usually, someone somewhere makes a scientific discovery and stumbles upon somebody who turns it into a success. That's why it is important to cluster expertise and to put any inventions identified as promising into a suitable industrial environment. Innovation usually happens in places where cutting-edge research converges with professional, industrial users and where alliances between academic researchers, small biotech companies and big pharma and agro companies emerge. In other words, innovation cannot be planned, but it is possible to create the right conditions for it to happen.
The Erice Statement was conceived and formulated in the times of the Cold War and, understandably, focuses on the threat of nuclear weapons. We have since become aware
of a plethora of threats: new epidemic plagues, age-related diseases, shortage of food for a growing world population, sustainable energy supply, clean water, greenhouse gases and pollution, and others. I am convinced that mankind has the potential to solve these gigantic problems by research and by free exchange of ideas and results, as the Erice Statement says. Chemistry, biochemistry, gene technology, and biotechnology offer tools for a solution of many of these challenges and help to clear the road to a liveable and peaceful future for the coming generations.
WHY SCIENCE IS NEEDED FOR THE CULTURE OF THE THIRD MILLENNIUM
HENNING WEGENER
Ambassador of Germany (ret.), Information Security Permanent Monitoring Panel, World Federation of Scientists, Madrid, Spain

SCIENCE AND GOOD GOVERNANCE

The second half of the 20th century has seen momentous changes in how we perceive the nation State and public authority generally. After the collapse of the totalitarian dictatorships, manifestations of the all-devouring Orwellian Moloch with an infinite contempt for the individual, there commenced a movement in many parts of the world towards a more slender concept of the State, reviving the ideals of a rigorous limitation of executive State power, accountable government structures and expansive concepts of freedom and licence for an active and vigilant citizenship. The ancient belief in the "invisible hand" to describe self-regulating free societies was strengthening again. Concomitantly, the State tended to shrink to become the provider of mere framework regulation for non-State activities. Civil society emerged as a major player, lately helped by the infinite possibilities and entitlements which digital technology offered. Governments liberalized, privatized and decentralized, divested themselves of ancient holdings, allowed public utilities and services to come into private hands with substantially higher profitability, and ceased to dominate the economic life of the citizenry.

Now the threat of international terrorism, the glaring insufficiencies of the economic and financial control systems in the current crisis, the calamities of human displacement and the new megathreats to internal security have arrested this movement. Crises and conflicts, and the soaring need for transnational problem-solving, given the increasingly complex challenges of a technologically driven, globally connected society, including risks to critical infrastructures, make us rethink this process. The invisible hand once again seems to have lost its magic touch. The need for strong public management becomes manifest. The State is back in fashion, in an ambivalent and often contradictory way. The impulse for governments to slim their personnel lists and to redefine the extent of their action is dampened by the simultaneous need to confront grand new tasks resulting from an unprecedented dimension of global threats. This tension profoundly impacts on our view of the State, and on the proper balance between the attributes and capacities of national and sub-national administrations, but no less of international and intergovernmental authorities. We face a tremendous dual challenge: to preserve the achievements of the liberal State, and yet to arm public powers to defuse the new threats. Once again we are grappling with the concept, mode of operation and limits of State authority. However defined, the State model of the future requires a quantum change in modernization to cope with a new world. The search for a new model is on. Science must be part of it.

Professor Zichichi has taught us: "Applied science for mankind is the real motor for progress". To define modern public management, we have to put the motor of modernization in high gear. Effective management of public affairs is a primordial
requirement, and the challenge to science is commensurate. Underpinning and facilitating the functioning of public management in a manner consistent with civil liberties may well be among its greatest challenges. "Good governance" has become the conceptual catchword for this task. Good governance is many things to many men, but there is also a wide consensus on its contents. In one current definition, good governance is a tool to describe how public institutions conduct public affairs and manage public resources in order to guarantee the realization of collective and individual rights, also in the face of grave threats. Governance describes the process of decision-making by solid institutions and their subsequent implementation to optimize the public good. For international organizations, good governance has become key to development policies, on the belief that bad government appears increasingly as the root cause of all evil in developing societies.

There are many ways of categorizing the parameters of good governance. But in order to limit the discretionary space of the concept, any sensible categorization is welcome, as far as it can guide practical action. The United Nations has defined the following eight characteristics. Governments must be:
• Consensus oriented
• Participatory
• Following the rule of law
• Effective and efficient
• Accountable
• Transparent
• Responsive
• Equitable and inclusive

The World Bank, the Asian Development Bank, and the European Union with its documents on European Governance have categorized in a similar manner. From the scientific world, the Kaufmann-Kraay-Mastruzzi categorization, KKM for short, the basis of their Worldwide Governance Indicators, has gained prominence. These definitions focus on democratic values and human rights, but presuppose strong State institutions to protect against the new risks and to provide the problem-solving tools for huge transfrontier tasks. The World Bank Institute publishes Governance Surveys and is the home of extensive research work. Its various indexes of government quality permit an objective assessment of government policies. A sizable research movement underway aims to provide empirical support for the theoretical underpinnings of good governance. In all likelihood, given the complexity of the issues and the constant emergence of new themes, this will remain a fertile area for multidisciplinary research work in the future.

The requirements of increased citizen participation and transparency, added to the call for more effectiveness, indicate a long-term need for scientists to involve themselves in the further development of all aspects of e-government. The full use of information sciences and digital technologies is necessary to streamline procedures, overcome the increasing complexity and opaqueness of modern public management, bring more
celerity to, and enhance the responsiveness of, State administration, and move governments closer to the people they are intended to serve. Interactive, easily understandable communication tools between governments and society are needed. This is not only a question of deliberate investment in hardware, software and public education, but equally one of advanced science and technology to offer better, more accessible and more manageable tools. The required comprehensive information architectures call for the best from information scientists.

Governments can be run more effectively if the digital content of their operating modes is increased and if administrative practices are streamlined and harmonized. The potential savings are, indeed, enormous. A few examples suffice. Public administrations are involved in roughly 20% of all bank transfers. If the governments in the EU adopted the new uniform digital transfer procedure for all of them, the savings could reach more than 20 billion € over only six years. The Federal CIO recently appointed by President Obama is planning to unify the entire U.S. Government information infrastructure in one super-protected cloud center, to replace the current fragmented structures. While the total operating cost is now 19 billion per annum, it may be less than half that when the new structure is in place, at higher security. E-administration in the justice system, including its transfrontier document traffic, could not only bring huge savings, but also make the system more rapid (and more just), more citizen-friendly and more transparent. Here again, applied science can bring breakthroughs with new user-friendly applications at lesser cost.

Monitoring decision-making processes is essential for good governance, in fostering accountability and effectiveness. The World Bank Research series already offers a body of methodological tools for measuring development processes, their quality and their costing, also as an effective antidote against corruption. Improving the econometric tools and measuring devices would seem to offer additional opportunities for researchers.

A functioning society with good governance in the digital age requires confidence and security in cyberspace. Cybersecurity, important as it is across the whole spectrum of civilized activity, is even more vital for the protection of data and operating options in the State domain. The defenses in internal and external security affairs hinge on the reliability and uncompromised usability of data. We cannot afford to have the State debilitated by cyber crooks in its fight against internal and external enemies. At a moment of huge new challenges in cyberspace, including the exponential growth of digital devices, the convergence of all digital technologies into a gigantic communication space, massive migration to cloud computing without guaranteed security mechanisms, and cruel and powerful attack strategies by opaque cybercrime consortia, the role of science in developing a more secure environment, from more effective tracking and tracing methods to sophisticated cryptography and authentication, cannot be overestimated. Cybersecurity remains unfinished business; it will continue to constitute an overriding scientific and technological task in assisting a strong protective State dutifully in arms against new enemies. In this context, the efficient protection of critical national infrastructures, whether in State hands or privately managed, against cyber and physical attack will be a prime necessity.
However the precise delineation between State and private endeavour is drawn, the State will once again need to take on a strong stimulating and coordinating/leading role, both to avert new societal threats and to optimize technologically possible new levels of efficiency and economy. Areas of application, beyond those already alluded to, abound, and all of them require a new scientific base. Any list would need to include new cleaner technologies in public institutions, more efficient and environment-friendly public transport, and more energy efficiency and cleanliness, which will only result from more research into new energy sources, better energy storage techniques, improved electromobility, etc. Efficiency standards and norms should be determined nationally and internationally, and be underpinned by scientific rigour. "Green" building codes and energy-optimized information technologies are other areas where more can be done. Generating scientific synergies through borderless international cooperation, based on huge, publicly accessible data banks with well-organized Internet platforms, deserves yet another quantum jump.

Given these research needs, I personally would like to complement the UN and other lists of the ingredients of good governance by building in an essential additional plank relating to science. Needed are policies that ensure the long-term innovation potential of societies through adequate science budgets, State and private. Our societies, despite increased efforts, have failed in making these provisions. The European Union, in its 2001 Lisbon Programme, has stipulated the right percentage figures for their GNPs, but lack of fulfilment is patent, and some States are currently diminishing their R&D budgets citing the financial crisis. That is a poor service to Good Governance. In this context, the figure of a national science advisor (or committee) to a State government or an intergovernmental body, as an independent, recognized authority of scientific vision, becomes increasingly attractive. Such an institutionalized presence, with appropriate authority and attributions, could exercise a catalytic function and enormous leadership.
GLOBAL WARMING AND THE ENERGY CRISIS: HOW SCIENCE CAN SOLVE BOTH PROBLEMS
ALBERT ARKING
Johns Hopkins University, Baltimore, MD, USA

INTRODUCTION

The world faces an energy crisis and, what may be a surprise to many, it is only tangentially related to the global warming problem. From one point of view, each problem exacerbates the other. To mitigate global warming, it is necessary to reduce the use of fossil fuels. On the other hand, the demand for energy is increasing at a rapid rate, and any cutback in fossil fuels will harm economic growth and adversely affect the human population, especially in the less developed countries. However, the world-wide shortage of fossil fuels, especially oil, is such that new, alternative sources of energy will be needed to sustain the earth's human population in the coming decades. Thus, there is an energy crisis which we need to confront, even if we choose to do nothing about global warming.

Here we propose a solution: have the nations of the world together establish a dedicated research program that will bring together the world's best scientists and engineers to focus their attention on new, preferably renewable, sources of energy and on processes, materials, and products that will utilize energy more efficiently. This is the approach that the democratic countries of the world used successfully when the world faced major crises in the 20th century. In the following sections we briefly review the nature of the global warming problem and the energy crisis, and make the argument for a world-wide research program dedicated to energy science.

GLOBAL WARMING

The possibility that the earth's temperature could be influenced by greenhouse gases dates back to 1895, when Svante Arrhenius showed the influence of water vapor and carbon dioxide on the energy budget of the atmosphere. Increasing temperatures through most of the 20th century (Figure 1a) led many scientists to consider higher levels of carbon dioxide (which increased by 35-40% since pre-industrial times) as a principal contributor to the warming. Other possible influences on temperature change include the other greenhouse gases; particulate matter, which has a mostly net cooling effect; and variations in solar luminosity, which can influence temperature either way. Internal changes in the climate system, associated with energy exchange between the atmosphere and oceans, can also play a role, but up to now only short-term changes, on time scales of years to decades, have been identified. The debate on the anthropogenic contribution, which is due mainly to industrial/agricultural emissions of carbon dioxide and other greenhouse gases (e.g., methane and nitrous oxide), has become more intense in the last few decades as the rate of temperature increase steepened.
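As background for the discussion that follows, the energy-budget idea going back to Arrhenius can be illustrated with a standard zero-dimensional balance. The sketch below uses textbook round numbers and a crude "effective emissivity" treatment of the greenhouse effect chosen for illustration only, not values taken from this paper: absorbed sunlight is set equal to emitted thermal radiation, and a stronger greenhouse (lower effective emissivity) forces a warmer equilibrium temperature.

    # Standard zero-dimensional energy-balance sketch (textbook illustration only):
    # absorbed solar flux = effective emissivity * sigma * T**4.

    SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0           # solar constant, W m^-2
    ALBEDO = 0.30         # planetary albedo

    def equilibrium_temperature(emissivity):
        """Temperature (K) at which absorbed solar and emitted thermal flux balance."""
        absorbed = (1.0 - ALBEDO) * S0 / 4.0
        return (absorbed / (emissivity * SIGMA)) ** 0.25

    print(equilibrium_temperature(1.00))   # ~255 K: no greenhouse effect
    print(equilibrium_temperature(0.61))   # ~288 K: roughly today's mean surface temperature
    print(equilibrium_temperature(0.60))   # a slightly stronger greenhouse gives roughly 1 K more warming

The point of the sketch is simply that a planet in radiative balance must warm when its effective emission to space is reduced; how much warming a given change in greenhouse gases produces is the climate-sensitivity question discussed below.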
Fig. 1: Observed changes in (a) global average surface air temperature and (b) global average sea level from tide gauge (blue) and satellite (red) data. All changes are relative to corresponding averages for the period 1961-1990. Smoothed curves represent decadal average values while circles show yearly values. The shaded areas are the uncertainty intervals estimated from a comprehensive analysis of known uncertainties. (Source: IPCC 2007 Working Group 1 report)

The focus of debate is more on the science of global warming than on its mitigation, and the science is hung up on how much the mean global surface air temperature will change if carbon dioxide (or its equivalent in a mixture of greenhouse gases) is doubled. This characterizes the sensitivity of the climate system to a change in its energy balance. The most authoritative body of knowledge that could quantify the sensitivity is the most recent report of the Intergovernmental Panel on Climate Change (IPCC), issued in 2007. The report is thorough and based on the known science of the climate system. It is based on observations, most of which were collected over the past century or so and provide information on how climate changes on different time scales, and on computer models of the climate system, developed over the past half century, that can project climate into the future. The models are based on an amalgamation of established principles of physics with empirical relationships derived from the analysis of the observations, which also serve as a test of the validity of the models. The IPCC estimate of climate sensitivity, provided by an ensemble of models, any one of which is as valid as any other, is likely to be in the range 2°C to 4.5°C due to a doubling of
carbon dioxide. This is a wide range, leaving little guidance on whether the global warming expected under any given scenario of future emissions will be severe or mild. The uncertainty in climate sensitivity amongst the models has not changed significantly since 1992, when the first IPCC report was issued. Could we expect to narrow the range? Perhaps, but not by much. The models do well in mimicking past climate, but we cannot be sure that natural variations that can occur on time scales longer than a hundred years or so, for which we have reasonably good data, are properly taken into account. Many years ago I asked the venerable Professor Richard Goody of Harvard, "When will models become good enough to provide reliable projections of future climate?" His answer was, "Never." When I asked why, he said, "Because we do not know what we do not know."

When one wants to translate climate sensitivity into a projection for the future, the uncertainty increases dramatically because of the uncertainty in future emissions which, in turn, depend on such things as world population growth, the state of the economy, the degree to which underdeveloped countries increase their need for energy, and the state of technology with respect to energy sources and utilization efficiency. All of these factors, which affect future emissions, were considered by the IPCC, so that IPCC estimates of future temperatures depend on the scenario that may unfold over time. Projections that span the range of scenarios, based on a variety of models, show temperature increases by the end of the century ranging from about 1°C to 4°C (Figure 2).

Although mean global temperature is the common marker for global warming, other aspects of the climate system are important and may also be cause for concern. Sea level has risen 15-20 cm over the past century (Figure 1b). Overall, glaciers are receding, which they have been doing ever since the end of the last ice age 18,000 years ago. In the aftermath of the ice age, the rate of sea level rise during any thousand-year period was as much as ten to twenty times the current rate, but that was a time when major portions of North America and Eurasia were covered with ice that was thousands of feet thick. Now, the only major sources of further melting are the Greenland and Antarctic ice sheets. (Of course, melting or freezing of sea ice, which floats, does not cause a change in sea level.) Unless something unusual happens, such as one of the ice sheets breaking up and sliding into the ocean, the expected sea level rise by the end of the present century is in the range of 30-50 cm, about twice the rise in the previous century. For perspective, one should keep in mind that tides can cause local sea levels to go up and down as much as two or three hundred cm in the course of a day.

Thus, despite the voluminous observations and powerful computer models, science can only provide a murky view of the climate change to be expected. We summarize the reasons as follows:
1. Knowledge concerning the physical processes governing the atmosphere, oceans, and the earth's surface is limited. The processes incorporated into or omitted from the models are different in different models; hence, with the same input, different models yield different results.
2. The climate models deal with physical processes only, with no capability to account for changes in population, human behavior, economic conditions, and the state of technology, all of which impact greenhouse gas emissions, which are a necessary input to the models.
Fig. 2: Multi-model means of surface warming for the 20th century (black), relative to the 1980-1999 mean, are projected into the future based on several IPCC emission scenarios (amongst 40 in the study) that characterize world population growth, socio-economic development, and technological change. Scenario A2 (red) represents a heterogeneous world with continuing population growth and with fragmented socio-economic development and technological change. Scenario A1B (green) represents a world of very rapid economic growth and rapid introduction of new technology with decreased socio-economic differences amongst a population that peaks in mid-century and declines thereafter. Scenario B1 (blue) has the same population profile as A1B but with more rapid change towards a homogeneous socio-economic structure and more widespread adoption of resource-efficient energy technologies. An additional experiment is shown (orange) with forcing maintained at the year 2000 level. (The A1B and B1 scenarios are continued beyond year 2100 with constant forcing.) (Source: IPCC 2007, WG 1 report)

If what is described above is the essence of the climate change expected this century, then why is there so much concern? The answer is that the future of our climate system is based on current scientific understanding, the anticipated level of technology, and human behavior, all of which are highly uncertain. The change could be significantly more or significantly less than the median expectation, and unless one can be sure that the change is tolerable, it would be wise to take steps to mitigate man's contribution. But,
clearly, there is no reason to panic. One thing one can be fairly confident of is that the extreme positions on both sides of the scientific debate are almost certainly wrong. Those who claim there is no significant warming taking place, or that the anthropogenic contribution is not significant, are ignoring basic physics. The earth's equilibrium temperature and climate state is a balance between the incoming and outgoing energy. The sun's radiant energy warms the system, while the portion of the sun's energy reflected back to space plus the earth's emitted radiance (at longer, non-visible wavelengths) cools the system. Adding greenhouse gases, such as carbon dioxide, methane, nitrous oxide, etc., causes temperatures to increase in order to maintain balance. What is not sure is the extent of climate change expected from any particular imbalance between incoming and outgoing radiation.

THE ENERGY CRISIS

While the debate on global warming continues, with action under consideration by the nations of the world, not enough attention is being given to the emerging crisis in energy. That crisis is a much more compelling reason to move away from fossil fuels, the main anthropogenic contributor to global warming. Furthermore, if we deal appropriately with the energy crisis, global warming becomes much less of a problem.

We noted above the rise in mean global surface air temperature over the past 30 years (Figure 1a). Within the past decade, there was an unrelated rise in the price of oil (Figure 3). Analogous to temperature being the result of a balance between incoming energy from the sun and outgoing energy radiated by earth, the price of oil is a balance between supply and demand. However, there is a problem with the supply of oil. Unlike the sun, which will continue for billions of years, the oil supply on earth will run out at some point in the not too distant future if the demand for energy continues growing and there is no major source of energy to replace it. One may find ways to substitute natural gas or coal for oil, but in time they too will run out. And even if fossil fuels within the earth are more plentiful than we estimate, cost will continue to grow as the supply diminishes. The earth is finite and has a finite amount of fossil fuels.
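The caption of Figure 4 below spells out the simple rule behind the production projections: output grows at 2% per year until the reserves-to-production (R/P) ratio falls to 10, after which output declines so as to hold R/P constant. The sketch below is a reconstruction of that stated rule with illustrative starting numbers, not the EIA's actual code or data; it nonetheless reproduces the qualitative peak-and-decline shape.

    # Sketch of the depletion rule described in the Figure 4 caption (illustrative
    # numbers only): production grows 2% per year until reserves/production (R/P)
    # reaches 10, then production is set to reserves/10 so that R/P stays constant.

    def project_production(reserves, production, years, growth=0.02, rp_floor=10.0):
        """Yield (year_index, production, remaining_reserves) for a toy depletion model."""
        path = []
        for year in range(years):
            if reserves / production > rp_floor:
                production *= 1.0 + growth          # demand-driven growth phase
            else:
                production = reserves / rp_floor    # constrained phase: constant R/P
            production = min(production, reserves)  # cannot produce more than remains
            reserves -= production
            path.append((year, production, reserves))
        return path

    if __name__ == "__main__":
        # Illustrative starting point: roughly 1.9 trillion barrels still recoverable
        # and roughly 30 billion barrels/year of current production.
        path = project_production(reserves=1.9e12, production=3.0e10, years=100)
        peak_year, peak_prod, _ = max(path, key=lambda row: row[1])
        print(f"peak in year {peak_year} at {peak_prod / 1e9:.0f} billion barrels/year")

With these assumed inputs the toy model peaks a few decades out and then declines, which is the qualitative behaviour shown in Figure 4; the timing and height of the peak depend entirely on the assumed ultimate recovery.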
[Figure 3 plots the inflation-adjusted price of a barrel of oil from 1970 to 2010, with price spikes annotated at the Arab oil embargo, the Iranian revolution, the Iran-Iraq war, the Kuwait invasion, OPEC quota cuts, and the 9/11 attack.]
Fig. 4: World production of crude oil projected into the 21st century for various estimates of the ultimate amount that can be recovered (ranging from 2.2 to 3.9 TBls, with a mean of 3.0 TBls). Production is assumed to grow at a 2% annual rate up to the point where the ratio of reserves to production (RIP) equals 10, after which production decreases to maintain a constant ratio. (Source: Us. Energy Information Administration.) Similar estimates have been independently made by Russian geophysicist A.E. Kontorovich (2009: Russian Geology and Geophysics, v. 50, p. 237-242). Projections of the U.S. Energy Information Administration, based on known reserves and estimates of yet to be discovered sources show world crude oil production
reaching its peak by the middle of this century (Figure 4). As an example, U.S. oil production is decreasing after having peaked in the mid-1970s, despite the increasing price. Eventually, all fossil fuels will follow a similar path. While one could argue that an increasing price will force conservation and slow down the rate of depletion, the impact on human welfare is not uniform. In highly developed countries there is room to absorb the impact of higher energy cost, while in the less developed countries, where per capita income and energy utilization are both low, inexpensive energy is a critical requirement for continuing development. The poorest people in the world will suffer the greatest impact.

There is one more compelling reason to be concerned about the emerging energy crisis: political independence and, related to it, national security. Energy is an essential ingredient in everything that characterizes a modern society. Yet energy based on fossil fuels is not uniformly distributed geographically. For those countries lacking in energy sources, the rising cost of energy will not only have a negative impact on national economic growth, it will also become a threat to their independence and security. Oil in particular, at least the more easily accessible oil, now comes from areas of the world where there is considerable instability and dominance by autocratic regimes. Of even greater concern, many of those areas are threatened by terrorism. Because oil is a critical need, it is difficult for the free countries of the world to ignore the threats and protect themselves from political blackmail and physical harm.

A SOLUTION TO BOTH PROBLEMS

It is clear that global warming is real, and there is little question that at least over the past 35-40 years anthropogenic influence has played the major role. What is debatable is the extent to which the global warming observed over the last century is driven by anthropogenic activity, principally the burning of fossil fuels, versus what may be natural variations like those that have occurred in the past. It is equally clear that energy is a necessary resource that serves human society. Hence, a solution to global warming that restricts and/or raises the cost of energy will adversely affect the human population, and a solution to the energy crisis that promotes continued high-level use of fossil fuels will intensify the anthropogenic influence on climate.

An obvious path to a solution is to develop an alternative to fossil fuels; specifically needed is a source of energy that is abundant, inexpensive, and renewable. At the present time we do not have such a source, but there are candidates, and it will be of great benefit to the world to do whatever we can to advance their development. Recognizing that we face an energy crisis, we ought to do what was done twice in the last century when the world faced major crises.

The first came at the dawn of World War II. With Germany bent on dominating Europe and spreading its Nazi philosophy, it was realized that Germany was highly advanced in nuclear science and had the capability of developing nuclear weapons. The response of the free world was to gather the best scientists and engineers, many of them Europeans who had escaped from Germany and from countries threatened by Germany, and give them free rein and lots of money to develop an atomic bomb before Germany could do so. This scientific effort, known as the Manhattan Project, was successful beyond expectation.
The second came after World War II, in the 1950s, with the potential of major conflict between two ideologies: communism under autocratic governments, and socialist/capitalistic societies under democratically elected governments. It seemed that the communist USSR, which dominated half of Europe, was far ahead in rocket design and might ultimately dominate the world through dominance in space technology. The establishment of the U.S. space program, followed by the European space program, resulted in a rapid advance of space technology, largely under free-world control, and ensured that space would be used exclusively for peaceful purposes.

Although the Manhattan Project and the space programs had very specific objectives, and they met those objectives well, the scientific fallout in each case produced worldwide benefits far beyond what was initially intended. The Manhattan Project was devoted to nuclear weapons, but the follow-up to that project in the major countries of the world led to the development of nuclear reactors, which provide energy without emission of greenhouse gases, and to nuclear medicine, where radioactive isotopes are used for diagnosis and treatment of disease. The space programs produced an even wider spectrum of benefits, including satellite communication, improved weather monitoring and forecasting, satellite-based search and rescue, robotics and miniaturization, electronics and computer technology, and new materials. No doubt all of these developments would have come about eventually without government involvement, but it would have taken much longer. Dedicated research towards achieving specific objectives without a focus on economic gain will yield a societal gain on a much faster time scale. Ultimately, as we have seen with the Manhattan Project and the space programs, the societal gain translates into great economic gain. There is no doubt that the economic prosperity in most countries of the world since World War II can be attributed to a large extent to the fallout from the Manhattan Project and the space programs.

It is clear that the world is facing an energy crisis that is as important as the crises of the 20th century, and what is needed is a similar response, namely, a multi-national research program dedicated to energy science. The program would focus not only on developing inexpensive and renewable sources of energy with minimal environmental impact, but also on methods for storing and distributing energy, and on developing energy-efficient materials, processes, and methods of utilizing energy. In developing energy sources, special attention should be given to sources that are more equitably distributed with respect to world population centers, so as to lessen dependence on the resources of limited areas of the world. Such a program would have a strong, positive impact over the entire world, boosting the global economy and, in particular, helping the less developed countries catch up with those more developed. The program should be managed and funded by the respective governments, but it should rely on the participation of academia and private industry to accomplish its goals, much like the current space programs. Governments take the financial risks, but based on the stellar performance of science in previous crises, there will be ample compensation downstream from the stimulus to each nation's economy.
Without prejudging the nature of the inexpensive and renewable sources of energy that would emerge, it is clear that most would be directly or indirectly related to the sun's radiant energy. That would include energy derived from solar cells, wind power, river flow, and perhaps combustion of vegetation - all renewable with zero carbon emission, including vegetation if replanted. The amount of solar energy absorbed
by the earth is approximately 7,000 times present world consumption, so growth of energy utilization is not a problem. The main problem is converting, collecting, and storing the energy at costs well below those of fossil fuels. That would ensure that the new sources replace fossil fuels without government mandate. The outcome of a multi-national energy research program would also go a long way towards mitigating man's impact on the climate system.

Another way to look at how the program would affect the climate system is to consider the climate scenarios used by the IPCC in their projections (Figure 2). The objective of the energy program would be to bring about a scenario that limits global warming to an acceptable level. In scenario B1, for example, the median additional warming from year 2000 to 2100 in Figure 2 is 1.6°C, with a model-dependent upper limit of 2.0°C. That scenario might be a reasonable goal, considering it is only about twice the warming observed during the 20th century. Recall that a key factor in that scenario is technological change, which is the basic aim of the energy research program.

CONCLUSION

Energy is essential for human society and its economic growth. Restricting and/or raising the cost of energy, as some experts have proposed for dealing with global warming, will have adverse effects on socio-economic development, especially in the less developed countries. On the other hand, continued high-level use of fossil fuels for energy will intensify the anthropogenic influence on global warming and, in any event, there is not enough fossil fuel at low enough cost to sustain socio-economic development in the coming decades. There is therefore an urgent need to deal with the emerging energy crisis. We suggest the establishment of a multi-national research program dedicated to energy science. It would focus on methods for storing and distributing energy, and on developing energy-efficient materials, processes, and methods of utilizing energy. Like the space program, it should rely on the participation of academia and private industry to accomplish its goals. Governments will provide the funding, but the cost will ultimately be made up by the stimulus to each nation's economy.
CO-BENEFITS OF CLIMATE POLICIES: THE ROLE OF SCIENCE

CARMEN DIFIGLIO
Office of Policy and International Affairs, U.S. Department of Energy, Washington, DC, USA

Interdisciplinarity in Science as Applied to the Fight Against Planetary Emergencies
ENERGY EFFICIENCY MEASURES DO WELL ON CO2 ABATEMENT COST CURVES
• CO2 "abatement cost curves" rank climate policy measures by cost.
• In addition to cost, they show how much CO2 mitigation is provided by each measure.
• Many measures are shown with negative cost. These measures are typically energy efficiency programmes.
• Example: the McKinsey CO2 Abatement Cost Curve.
MCKINSEY CO2 ABATEMENT COST CURVE: U.S. MID-RANGE ABATEMENT CURVE, 2030 (chart)
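To make the ranking concrete, the short Python sketch below assembles a toy abatement cost curve; the measures, costs and abatement volumes are invented for illustration and are not taken from the McKinsey study.

# Toy abatement cost curve: sort measures by unit cost and accumulate their
# abatement potential. Negative-cost items (typically efficiency) come first.
measures = [                                # (name, $/tCO2, GtCO2/yr) - hypothetical values
    ("LED lighting retrofit",      -90, 0.24),
    ("Building insulation",        -60, 0.35),
    ("Vehicle fuel efficiency",    -40, 0.50),
    ("Wind power",                  20, 0.60),
    ("Solar PV",                    35, 0.45),
    ("Carbon capture and storage",  55, 0.70),
]

def abatement_cost_curve(measures):
    curve, cumulative = [], 0.0
    for name, cost, abatement in sorted(measures, key=lambda m: m[1]):
        cumulative += abatement             # x-axis of the curve: cumulative abatement
        curve.append((name, cost, cumulative))
    return curve

for name, cost, cum in abatement_cost_curve(measures):
    print(f"{name:28s} {cost:>4d} $/tCO2   cumulative {cum:.2f} GtCO2/yr")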
ENERGY SECTOR CLIMATE STABILIZATION POLICIES
Several models estimate the cost of reducing GHG emissions in the energy sector:
- IEA Energy Technology Perspectives
- IEA World Energy Model
- Integrated Model to Assess the Global Environment (IMAGE)
- Pacific Northwest National Laboratory MiniCAM/GCAM*
- IIASA Model for Energy Supply Strategy Alternatives and their General Environmental Impact (MESSAGE)
- Stanford Integrated Assessment Model for Climate Change (MERGE)*
• These are not climatology models. (*MiniCAM/GCAM and MERGE do have reduced-form climate calculations to connect emissions to stabilization scenarios.)
• They are models that simulate the economic and technological relationship between the economy and GHG emissions.
• They show how energy investments respond to climate policies (cap and trade, GHG taxes, efficiency measures, etc.) to produce an energy sector that has a different relationship between energy services and GHG emissions.
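As a toy illustration of the kind of response these models capture (the generation costs and emission factors below are hypothetical and are not taken from any of the models listed above), a GHG price simply re-orders the effective cost of competing energy investments:

# How a carbon price changes the cost ranking of generation options:
# effective cost = levelized cost + carbon price * emission factor.
technologies = {                 # ($/MWh, tCO2/MWh) - illustrative values only
    "coal":       (60, 0.90),
    "gas (CCGT)": (70, 0.37),
    "wind":       (85, 0.00),
    "nuclear":    (95, 0.00),
}

def merit_order(carbon_price):
    costs = {name: lcoe + carbon_price * ef for name, (lcoe, ef) in technologies.items()}
    return sorted(costs.items(), key=lambda kv: kv[1])

for price in (0, 30, 60):        # $/tCO2
    ranking = ", ".join(f"{name} ({cost:.0f})" for name, cost in merit_order(price))
    print(f"carbon price {price:>2} $/tCO2 -> {ranking}")

At zero carbon price coal is cheapest; as the price rises, gas and then zero-emission options move to the front of the queue, which is the mechanism by which the policies above reshape the energy sector.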
TECHNOLOGY CHANGES FOR 450 STABILIZATION (IEA ENERGY TECHNOLOGY PERSPECTIVES 2008)
Contributions of Technology Wedges, 2005-2040 (chart)
OIL AND GAS IMPORT BILLS (IEA 2009 WORLD ENERGY OUTLOOK, 450 PPM SCENARIO)
Chart: import bills for the European Union, China, the United States, India and Japan in 2008, in the 2030 Reference Scenario and in the 2030 450 Scenario.
INCREMENTAL INVESTMENT VS. FUEL COST SAVINGS (IEA 2009 WORLD ENERGY OUTLOOK, 450 PPM SCENARIO)
Chart: incremental investment 2010-2030, fuel cost savings 2010-2030, and fuel cost savings over lifetime, broken down by transport, buildings and industry.
CO-BENEFITS SUMMARY (IEA 2009 WORLD ENERGY OUTLOOK, 450 PPM SCENARIO)
• Consumer fuel costs are $8.6 trillion lower (2010-2030) for an additional investment of $8.3 trillion. Savings in transport alone account for $6.2 trillion.
• OECD oil imports are 6 mb/d lower in 2030 than in 2008.
• China and India oil imports are 10% and 15% lower, respectively, by 2030 than in the Reference Scenario.
• China's gas imports are 23% lower by 2030.
• Worldwide SO2 emissions are 29% lower than in the Reference Scenario (2030).
• Worldwide NOx emissions are 19% lower and emissions of particulate matter 9% lower (2030).
WHY CO-BENEFITS ARE IMPORTANT
• Prospects for an enforceable climate treaty at Copenhagen have dimmed.
• It is politically difficult to ask people to take action now to avoid climate problems that seem far in the future.
• Less-developed countries still question why they should incur costs to avoid the consequences of high GHG concentrations when the developed countries were responsible for doubling them since the pre-industrial age.
• Recognition of co-benefits could get countries started on actions that will benefit them in the near term.
THE ROLE OF SCIENCE
• Scientists have driven the debate about the science of climate change but have been less engaged with the effects of policies that would reduce GHG emissions.
• The models and analyses cited above have been largely developed by economists and engineers utilizing computational techniques developed by mathematicians.
• While the several major models tend to give the same key messages (strong co-benefits, especially from energy efficiency), can these conclusions be endorsed as scientific?
• If not, what steps need to be taken to review and, perhaps, revise their methodology and data?
• This could be a parallel effort to WFS proposals to develop improved climatology models.
WHY SCIENCE IS NEEDED FOR THE CULTURE OF THE THIRD MILLENNIUM: HISTORICAL EXPERIENCE OF A SMALL COUNTRY (LITHUANIA)
ZENONAS ROKUS RUDZIKAS
ICSC World Laboratory Branch Lithuania, Lithuanian National Academy of Sciences, Vilnius, Lithuania

Meditating upon the fundamental question asked by Professor Antonino Zichichi, "Why science is needed for the culture of the third millennium?", reminded me of an event at an international physics conference, when the speaker started his talk with the phrase: "Prophet Moses wandered for forty years with the Jewish tribes in the desert looking for the land without oil and gas". Why? Let us try to answer this question. The answer differs from the widely accepted one. Nobody knows who brought the forefathers of today's Lithuanians to the Baltic Sea, where it rains fairly often, where there are the seasons of the year and rather cold winters. Therefore, in order to survive, they had to work hard and to think how not only to survive, but to have a fairly decent life on a land without natural resources, oil and gas included.
Fig. 1: Typical Lithuanian Cross.
Homo sapiens were always trying to understand the world they lived in, and to explain, at least naively, its origin, structure and the phenomena they saw. Many of these were beyond their understanding and therefore created embarrassment and fear. That is why people worshipped them, offering them food, animals or even people, trying in such a way to win their sympathy and support. Only with Galilei did the understanding of nature become more scientific.1
Lithuanians were the last pagans in Europe; less than 700 years ago they were still worshipping the Gods of the Sun, the Moon, Fertility, etc. Some elements of those times have survived even after the baptism of the peoples of contemporary Lithuania. It is enough to look at the typical Lithuanian Cross (Figure 1), which you can often find erected, particularly at crossroads. In a country having little or even no natural resources, people must rely on themselves, on their knowledge, skills and the experience of previous generations. The initial stage is the elimination of illiteracy, simply learning to read and to calculate. Figure 2 represents the so-called "Poverty school" in Lithuania (a mother teaching her offspring to read). On the other hand, let me remind you that on the territory of present-day Lithuania a university (Vilnius University) was founded as early as 1579, the oldest university in Eastern Europe.
Fig. 2: A mother teaching her offspring to read.
For more than a century, Lithuania was occupied by and incorporated into the Russian Empire and named its North-West Region. There was a rather long period of time when the Russian Czar closed Vilnius University and even prohibited the use of the Latin alphabet for printing Lithuanian books. However, the desire to have newspapers, calendars and other books in the mother language led to the emergence of a unique phenomenon, namely special persons, or smugglers, who secretly transported the above-mentioned books, printed in Prussia, across the border of the Russian Empire. This was extremely dangerous; some smugglers were even shot dead. In the second largest city of Lithuania (Kaunas) you can see a monument dedicated to such heroes (Figure 3).
Fig. 3: Monument dedicated to book smugglers.
Fig. 4: Painting by the artist M.K. Ciurlionis.
The ultimate challenge of every nation is the creation of a prosperous, flourishing society. However, the quality of life, usually measured by GDP, is a necessary but not sufficient condition for a proper evaluation of this aspect of a developed democratic state. All human beings want to be happy. But who is happy, and is there a universal recipe for
happiness? The famous Lithuanian artist M.K. Ciurlionis tried to answer this question about a hundred years ago in his painting (Figure 4): two kings, wandering in their kingdoms in order to find a happy family, found it in the bungalow of a peasant. What about our times?

After the end of World War I and the collapse of the Czarist Russian Empire, Lithuania became an independent country, though without its capital Vilnius and its only sea port, Klaipeda. Nevertheless, the Government of the new independent state, bearing in mind the everlasting challenge of creating a democratic, economically developed and stable nation, founded a new university in the temporary capital Kaunas, later named after Vytautas Magnus, the Great Duke of the Grand Duchy of Lithuania, and formed and implemented a system of primary schools, pro-gymnasiums and gymnasiums. Gifted graduates of Kaunas University were sent to Western Europe in order to learn more, to deepen their knowledge, to do research and to obtain scientific degrees, and, after returning, to teach at the university and to develop research activities in Lithuania.

However, the peaceful building of the state was cruelly interrupted by the incorporation of independent Lithuania into the Soviet Union, by World War II and afterwards by fifty years of Soviet occupation. The paradox of this period lies in the fact that the standard of living of the peoples of the occupied Baltic States (Estonia, Latvia and Lithuania) was higher than in Russia. This phenomenon can be explained by the higher culture and traditions of learning, the attitude to work, the entrepreneurship, and the general culture and traditions formed in these countries by previous generations. In spite of many restrictions and control systems and of centralized authoritarian governing, there were certain, though narrow, openings for local initiative and for better and cleverer management, resulting in higher efficiency of production, of the local economy and of general culture. Lithuania managed to preserve its unique ancient Indo-European language, its material and spiritual heritage and values, education in the mother tongue, original art, poetry and music, and even architecture, in spite of general standardization and centralization. Notwithstanding the standardized high-school and higher education curricula governed from Moscow by the relevant All-Union ministries, the Lithuanian system managed to preserve its unique features and efficiency. That is why, when the Lithuanian Supreme Council declared the independence of Lithuania on March 11, 1990, and soon afterwards the Soviet Union collapsed and the borders opened, young Lithuanians were able to enroll in prestigious universities of Western Europe or the USA and to study on a par with students from other countries. In 2004, Lithuania joined NATO and became a European Union member.

Another peculiarity of this time is the accelerating globalization. What does it mean for a small country like Lithuania, with a population of 3.8 million in 1991 and 3.4 million today? First of all, there are qualitatively new challenges and opportunities. However, there is also a certain danger for Lithuania to be lost, to disappear in the global sea of nations and cultures. The European Union stands on three fundamental pillars: cooperation, competition and solidarity. Every member country must find the optimal way to make the best use of these opportunities and of the horizons opened.
Small countries will not surprise the international community by the quantity of something. The best possibility, the niche, is to surprise by quality and uniqueness. In order to make use of this niche we must pay the most attention to the education of our children, to look for the most gifted boys and girls and to create for them the best possible conditions to develop their personalities and to become, later, a part of the intellectual potential of their country, as well as to contribute to the development of its economy and culture, to the implementation of the European social model and to the preservation of the identity and uniqueness of every nation. Science has a pivotal role in meeting this challenge.

The future of Europe must be based on a modern education system, intellectual leadership, high technologies and science-intensive industries and, above all, on democracy, a European social model, respect for human rights and the rule of law. However, the situation in Europe is worsening due to the insufficient financing of research, its fragmentation and a number of other reasons, "brain drain" included. The urgency of the problem has been amplified even more by the world economic crisis. Science does not recognize borders; it is inherently international and requires networking and various forms of coordination, cooperation and integration of research and researchers, regardless of their location and positions. Existing associations of National Academies of Sciences3-5, learned societies6 and other non-governmental organisations of this sort are useful in this respect as well.

Another specific feature of recent years is the penetration of modern information and communication technologies into practically all domains of science, the social sciences and humanities, as well as the arts. Moreover, they also penetrate into the everyday life of almost every citizen, revolutionizing their way and culture of life. Let us only mention the near future of the internet, which will soon connect not only people, but also people with things and things with things. New computing architectures like grid and cloud computing, and high-performance computing, are worth mentioning too.

While discussing integration and networking in research, and the creation of the European Research Area, we must bear in mind the existing research infrastructures, institutions and organizations. Moreover, we must promote excellence in science and bridge the gap (pessimists say "the valley of death", optimists "the valley of opportunities") between fundamental research and the practical use of its results, their transformation into new technologies and products. Let us present only one example. Lithuania was behind the Western countries as concerns the use of modern information and communication technologies. However, after becoming a member of the European Union, which opened the possibility of making use of the framework programmes, it initiated two international projects on grid technologies (BalticGrid I and II), supplemented by national projects (LitGrid and GridTechno). As a result, many Lithuanian universities and research institutions are now widely using these modern technologies and are ready to continue the implementation of new achievements in this domain.
The expansion of the European Union and the increase in the number of member countries, having different histories, cultures, traditions and levels of economic development, also raise new challenges for integration and networking. The mobility of students and teachers, "brain circulation", could help to overcome the inequalities in education and research, as well as to foster the strengthening and
consolidation of the European and Global Research Area. Small countries like Lithuania must find their ways and places in this mosaic of global science and culture of the third millennium.

REFERENCES
1. A. Zichichi (2009) Galilei Divine Man, Italian Physical Society, Bologna.
2. R.L. Garwin, et al. (2002) "Terrorism, Culture and John Paul II," World Federation of Scientists, Pontifical Academy of Sciences, Vatican-Erice.
3. http://www.interacademies.net/
4. http://www.allea.org/
5. http://www.easac.org/
6. http://www.eps.org/
MEANS TO PROPAGATE OUR IDEAS IN SCIENTIFIC AND DECISION-MAKING CIRCLES

MAW-KUEN WU
Institute of Physics, Academia Sinica, Taipei, Taiwan

The twentieth century witnessed the greatest changes in technology and science that humans have ever achieved. These changes have had a profound influence on modern society, not only on our daily lives but also on our thoughts and beliefs. In the last few decades, increasing attention has been paid to the topic of responsibility in technology development and engineering. Current technology development has realized many promises. However, people can be blinded by this development and ignore its potentially disastrous effects. Technology has frequently been stated to be the major factor contributing to the quality of life, but it is rarely said that the opposite could also be true.

Modern, science-based technology has had a major impact on our cultural development. With modern technology, everything is connected to everything else, which leads to an environment in which everything is technology related. In particular, technology and the economy have become so closely intertwined that one cannot exist without the other. This creates a situation in which the general public believes that economic advancement relies only on technological development and, vice versa, that the purpose of technology development is economic development. This perception is certainly not correct and may lead to a dangerous outcome.

We all know that science is the study of the fundamental laws of nature, and that new scientific discoveries or advances usually lead to the development of new technology. As Mother Nature tells us, there are fundamental laws we need to follow. For example, the second law of thermodynamics teaches us that, for the sustainable development of our planet Earth, we need to develop better technology to consume energy sources more efficiently. It is essential to educate the general public in this basic understanding. Without the recognition of such an important basic law, it is likely that people will simply pursue more economic advantage through new technological development and neglect the consequences of a potential catastrophe on Earth. Therefore, it is an important mission for scientists, as advocated by Prof. Zichichi, to encourage the general public to appreciate science.

We present in this paper two examples we exercised in Taiwan as an attempt to educate the general public about the value of science and technology development. The first example was a special science exhibition we organized in 2005 as part of the international celebration of the Year of Physics, which marked the 100th anniversary of Einstein publishing his three seminal works in 1905. The second example is the activities we created for school students to introduce them to the development of new technologies such as nanotechnology, high-temperature superconductivity, etc.

THE YEAR OF PHYSICS EXHIBITION

The International Union of Pure and Applied Physics (IUPAP) declared the year 2005 as
the World Year of Physics. With this declaration, people all over the world joined in the celebration of physics and its importance in our everyday lives. The World Year of Physics aimed to raise worldwide awareness of physics and physical science. The year 2005 was chosen as the World Year of Physics because it marked the 100th anniversary of Albert Einstein's "miraculous year", in which he published three important papers describing ideas that have since influenced all of modern physics. The year provided the opportunity to celebrate Einstein, his great ideas, and his influence on life in the 21st century.

To support the initiative of the World Year of Physics 2005, research institutes in Taiwan, with support from the National Science Council and the Physical Society, called for the integration of schools and industries in unity to proclaim the fact that physics lies behind all of our technology. A series of programs was launched during the year. It started with the 3-day "Summit of Physical Societies in Asia-Pacific Region (SPS)" on January 31, 2005 in Kaohsiung, a port city in southern Taiwan, with emphases such as "Physics for tomorrow in Asia", "Women in Physics", and "Physics Education".

A particularly interesting and potentially high-impact program was a month-long science exhibition sponsored by the National Science Council of Taiwan and organized by the Physical Society. The exhibition was held in July, during the summer vacation, in order to attract students to visit and spend more time at the exhibition. The contents of the exhibition were very rich. The exhibition used different means such as posters, demonstration kits, videos, etc., to introduce the general public to the basics of the modern laws of physics and the related modern technologies. It particularly emphasized the physics basis of the technologies used in our daily lives. Picture 1 shows one volunteer interpreter (a full professor of physics) explaining to visitors the function of a gas water heater.
Picture 1: A volunteer interpreter demonstrating the physics of a gas water heater.
Pictures 2 & 3: President Chen of the country gave the opening speech at the exhibition and experienced a demonstration system showing the physics of acoustics.
Pictures 4 & 5: Elementary school children visited the exhibitions.
Picture 6: Nobel laureate Prof. Steven Chu visited the exhibition and demonstrated the physics of friction.
Picture 7: Ms. Lin, the most popular fashion model in Taiwan, in a video advertisement to promote the exhibition.

The organizers made a great effort to publicize the activity. To make it more attractive, the organizers were able to convince several celebrities to help promote the event. It was very fortunate that even the President of the country agreed to help. He not only came to give a speech at the opening of the exhibition, he also spent almost two hours watching and having hands-on experience with the exhibits. Picture 2 shows the President delivering the speech at the exhibition opening, and Picture 3 shows the President experiencing an exhibit that demonstrated the physics of acoustic waves.
The President's support of the activity helped generate great media coverage and attracted many visitors, especially the school children who were the program's main audience. Pictures 4 and 5 show elementary school children in the exhibition hall. We were also very lucky to have a famous physicist, Steven Chu, the 1997 Nobel Laureate in Physics, as our VIP guest (Picture 6). Furthermore, the organizer was able to convince the most popular female fashion model in Taiwan (Picture 7) to help produce, at no cost, an advertising film to promote the event. The exhibition was a great success because of these efforts by the organizers.

INNOVATIVE COMPETITION USING SUPERCONDUCTORS FOR MAGNETIC LEVITATION

An innovative competition program, based on the idea of high-temperature superconductivity, was established as an extracurricular educational activity for high school students. The main purpose of the program was to train high school students to learn about interesting new science and to invigorate their innovative minds, with the ultimate goal of making the general public aware of high-temperature superconductors. The main approach of the program was to engage students in using the novel magnetic levitation property of high-temperature superconductors. The program was organized by a team consisting of academics in the fields of materials science, technology education and industrial design, as well as researchers from an informal education institute, the National Science and Technology Museum. These academics provided consultation and access to liquid nitrogen for participating students.

The program starts with a workshop that gives the participating students the basics of the levitation/suspension phenomenon of superconductors. In order to make the complex theories of high-temperature superconductors understandable to high school students, creative animations and demonstration kits were used. These materials have been made available online and were distributed to high schools throughout Taiwan, not only for the initial promotion of the competition but also for continuing education. After the initial basic training, the students are encouraged to organize into teams of three to five members. The participating teams had to submit a creative process notebook, which included their initial design concept.
Pictures 8 & 9: The photographs show the workshop used to teach (top) and demonstrate (bottom: a levitation car using high-temperature superconductors) the basics of superconductivity to the high school students.
A group of 20 reviewers was involved in the screening of all competition entries. About 30% were chosen to compete in the final stage, where participants built their prototype models and presented them to another group of expert reviewers. The selected teams were then given a levitation kit consisting of several pieces of high-flux-trapping high-temperature superconductors, prepared by the organizers, and commercially available Nd-B-Fe permanent magnets. The teams then used the provided kit to prepare a prototype model of their innovation and presented it to the organizers for the final competition.

During the past few years, we have attracted more than five thousand students to participate in the program. In order to control the numbers, we have imposed a screening process in which the students are required to pass an on-line test before they can register for the competition. This additional requirement, in fact, ensures that the students who take part in the contest are genuinely interested in the program.
Picture 9: Referees listening to students presenting their innovative products.
Picture 10: Students (2004 Gold medal winners) presented their creative design model "Beehive-rotated parking lots".
A very important objective of the program is to explore a new model and opportunity to support students' learning in non-mandatory subjects like materials science. With the support of the National Science and Technology Museum (NSTM), this project gave students an informal opportunity to learn more about something, in this case the fascinating phenomenon of high-temperature superconductivity, which they normally do not have access to during their high school years. Not only the training workshops but also the first and final contests all require the full attention of the students. The competition served as a means for students to learn more about frontiers in science and to enhance their creativity through team work.

We have kept complete test records of all participants from the years 2001 to 2004. The data demonstrate that the participants' awareness of superconductivity increased year by year. Since 2002, in addition to the required on-line test, the organizers have also required the students to turn in a notebook recording their learning and creative design practice. This practice greatly strengthens the students' basic knowledge of superconductivity. Most amazing are the outstanding innovations the participating students came up with. For instance, the gold medal in 2004 went to a model named "Beehive-rotated parking lots" (see Picture 10), in which the students were able to fully demonstrate the characteristic features of high-temperature superconductors: magnetic levitation and magnetic suspension.

The experience of this competition not only enhanced the participants' problem-solving skills but also improved their communication skills. An important consequence was the influence on their future academic choices. A survey found that, of those who went on to college education, about 14% went into materials science-related subjects and engineering, 72% into basic sciences, and the other 14% into humanities and social sciences. Amazingly, it was also found that 93% of them were still interested in high-temperature superconductivity. A most rewarding outcome of this extracurricular activity was that many participating students remained active and became volunteers helping the program organizers after they graduated from high school and became college students. We believe that this unconventional learning experience had a strong, long-term impact on the students' career paths.

SUMMARY

We present in this paper two programs developed in Taiwan to promote the appreciation of science by the general public. The first program was coupled with the 2005 World Year of Physics. A month-long science and technology exhibition was the highlight of the year-long activity. The exhibition gave the general public, particularly school children, the opportunity to see, to learn and to have hands-on experience of the fundamentals of science. The second program is a long-term program that gives high school students an extracurricular activity through which to learn and appreciate the value of fundamental science. The program allows the participating students to learn the novel characteristics of high-temperature superconductors, the magnetic levitation and suspension effects, and then to use these effects to create innovative prototype models. This competitive program has successfully inspired the participating students to appreciate science and has had a long-term impact on their career development.
ACKNOWLEDGEMENTS

The author thanks those who contributed to the organization of the 2005 World Year of Physics exhibition, and his colleagues who designed and implemented the high-temperature superconductor competition program. The author also acknowledges the financial support from the Taiwan National Science Council that made the program possible.
THE HUMAN BRAIN FUNCTION INVESTIGATED BY NEW PHYSICAL METHODS
BRUNO MARAVIGLIA
Dept. of Physics, University "La Sapienza", Roma; MARBILab, Enrico Fermi Centre, Roma; Fondazione Santa Lucia, Roma

The human brain is today probably the most extraordinary frontier of research because of the vast number of unknown mechanisms and functions that are responsible for human life. This objective was always present in our history, but the approaches used were not scientific (non-Galilean), and even the localizations of functions like thought, feelings, etc., were for centuries uncertain or regarded as undetectable because they were considered not physical. The development of the experimental scientific method, due to Galileo Galilei, gave birth to the most exceptional cultural revolution for mankind, with great progress of knowledge in most fields. However, the complexity of human brain structure and function on the one hand, and on the other hand the fear of entering the most holy and intimate essence of man, have caused a dramatic delay in the knowledge of this crucial organ of the human being.

Nowadays human brain morphology, physiology and cellular organization are widely known. Some physical techniques like Positron Emission Tomography (PET) and functional Magnetic Resonance Imaging (fMRI), together with Electroencephalography (EEG) and Magnetoencephalography (MEG), have also helped to begin the investigation of its functional activity. The level of insight into brain function is, however, still quite superficial.

In order to get an idea of the amount of unknown activity, we can make a rough energy-consumption consideration. The weight of the human brain is about 2% of the whole body weight, but the brain requires about 20% of the energy used for ordinary living. So this complex organ has a much higher rate of consumption than the other parts of the organism. Moreover, the energy required by our brain for all the interactions with the environment (sensorial activity like seeing and hearing, plus speaking, etc.) amounts to only about 20% of the total energy needed by the brain. These interactions with the environment are the activities of the brain that we usually detect by EEG, fMRI, and the other available modalities. In other words, the remaining 80% of the energy used by the brain supports functions that we essentially ignore, due to the fact that we have no tool capable of seeing them, and even more because we do not yet know which effects we should try to detect. Clearly this large amount of energy (80%) is used by the brain for spontaneous activities, not stimulated by the environment. By analogy with astrophysics, it has been called "Dark Energy" by some.

It follows, then, that in order to get insight into this world of unknown activities, it is thoroughly necessary to invent and develop physical methods which should allow us to detect the activity of the brain in the "resting state" (stimulations absent) and, further on, to reveal the brain actions which involve the fundamental functions of the human mind. My research group has been operating in this direction for over a decade, and in recent years, with the essential support of the Enrico Fermi Centre, many important results have been achieved.
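A back-of-the-envelope restatement of the energy-budget argument above, using only the percentages quoted in the text (a sketch, not a measurement):

# Brain energy budget, as quoted in the text.
brain_share_of_body_energy = 0.20      # the brain uses ~20% of the body's energy
task_share_of_brain_energy = 0.20      # stimulus-driven activity uses ~20% of that
intrinsic_share_of_brain_energy = 1.0 - task_share_of_brain_energy   # ~80% spontaneous

print(f"Stimulus-driven activity: {brain_share_of_body_energy * task_share_of_brain_energy:.0%} of whole-body energy")
print(f"Spontaneous ('dark') activity: {brain_share_of_body_energy * intrinsic_share_of_brain_energy:.0%} of whole-body energy")
# i.e. roughly 4% versus 16% of whole-body consumption.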
• The new physical methods (for example the development of functional MRI, the combination of fMRI and EEG, ultra-low-field MRI-MEG operating at 40-100 microtesla, direct neurocurrent imaging, etc.) were our major objectives, with the long-distance target of discovering how:
1. the brain spontaneously operates (logic, conscience, etc.);
2. the brain functions are related to its connectivity and architecture;
3. the border between life and death of the brain can be ascertained;
4. brain dysfunctions originate and can be treated.
Fig. 1: Simultaneous fMRI and evoked potentials detected with EEG.

Among the early results, very significant was the solution of the combined fMRI-EEG measurements, which up to that time were taken in a sequential way. These two methods are strongly coupled, as the fast-switching magnetic field gradients required in fMRI induce strong perturbations in the EEG; other disturbances arise from the heart beat of the patient, etc. By modifying the EEG electronics and introducing a filtering system on the computing line we succeeded in measuring the electrophysiological (EEG) and the
fMRI signals at the same time, thus opening a new series of possible investigations (Figure 1). In fact, fMRI has a high spatial definition but poor time resolution (seconds), while the neuronal activity time scale is in the range of milliseconds, so the EEG can compensate for this lack of information. Moreover, the EEG signal generated by unpredictable events (epilepsy, brain rhythms, drug effects, etc.) can be used as a trigger to start the fMRI measurement, which gives the localization of the event. This combination, with simultaneous use of the two methods, is today already applied in neurosurgery to better localize the area responsible for epileptic seizures, which in untreatable epilepsy must be removed.

Brain morphology and architecture are the natural base on which one can build up the functional activities. MRI is a powerful tool for morphology and morphometry. By means of special treatment of MRI data, it is possible to obtain, as an example, the 3D image of the surface of the brain in vivo, as if it were just under our eyes (see Figure 2).
Fig. 2: Rendering of the surface of the whole brain of a member of our group.
Among the several parameters that MRI can image (spin density, relaxation times T1 and T2 of protons, but also of other nuclei like Na23, P31, F19, etc.) there is also the diffusion coefficient D, which can give quite different maps and structural information. In fact, in heterogeneous systems D is a symmetric tensor, requiring six elements to be fully described. For the confined diffusion of the protons of water within the bundles of axons connecting neurons of different regions, from the measurement of the D tensor it is possible to build up the overall network of axons in the brain. This allows one to see which parts of the brain are connected and, of course, where there are interruptions due to disease or trauma (Figure 3).
Fig. 3: MRI image based on Diffusion Tensor Imaging (DTI). In practice, from the measurement of the diffusion coefficient (tensor) of the water molecule it is possible to reconstruct the network of the bundles of axons. This procedure is called tractography.

The lines of research that I am briefly describing here have, besides several important cultural and biomedical consequences, a fundamental philosophical content, but strictly based on the Galilean method. After all, the target of understanding, by physical means, the way the brain operates and its architecture is just a humble new approach to: NOSCE TE IPSUM.
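For concreteness, here is a minimal numerical sketch of the diffusion-tensor step described above; the tensor values are invented for a single hypothetical white-matter voxel, and this is not the group's actual processing pipeline. The six unique elements define a symmetric 3x3 matrix whose principal eigenvector estimates the local fibre direction used by tractography, and whose eigenvalues give the fractional anisotropy.

import numpy as np

# Hypothetical diffusion tensor for one voxel (units of 1e-3 mm^2/s).
Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = 1.7, 0.3, 0.3, 0.05, 0.02, 0.01
D = np.array([[Dxx, Dxy, Dxz],
              [Dxy, Dyy, Dyz],
              [Dxz, Dyz, Dzz]])            # symmetric: six independent elements

evals, evecs = np.linalg.eigh(D)           # eigenvalues in ascending order
l1, l2, l3 = evals[::-1]                   # principal, middle, minor diffusivities
fibre_direction = evecs[:, np.argmax(evals)]

md = evals.mean()                          # mean diffusivity
fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
             / (l1**2 + l2**2 + l3**2))    # fractional anisotropy, 0 (isotropic) to 1

print("fibre direction:", np.round(fibre_direction, 3))
print("fractional anisotropy:", round(float(fa), 3))

Repeating this voxel by voxel, and following the principal directions from one voxel to its neighbours, is what produces the tract reconstructions shown in Figure 3.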
During the investigation of this fascinating frontier, one spontaneously wonders how far one can go in interfering with this centre of the human being. With knowledge man increases his freedom and his power of doing things which were impossible before; thus he encounters new choices about what is good or bad for the individual and for mankind as a whole. The experimental research on the human brain will surely give rise to crucial discoveries, for which we will be bound to make dramatic choices. We already have some evidence of this sort of choice in the cases of persons in a vegetative state due to accident or disease. The existing instruments already allow a set of measurements which can tell us, in some cases, whether the person in a vegetative state is conscious or not. In fact, some experiments carried out with fMRI since 2006 (A. Owen et al.) have demonstrated the capability of this approach to establish, at least in some special cases, whether a person in a supposed vegetative state is conscious or not. Consciousness (i.e., being in a conscious state) implies the coexistence of arousal (i.e., being vigilant, awake) and awareness (i.e., being conscious of self and of the environment) (Figure 4).
Fig. 4: Different conditions of consciousness: coma, sleep, anaesthesia, vegetative state, minimally conscious state, normal consciousness.
Thus new physical methods based on fMRI combined with other techniques (like EEG) will be usable, in vegetative states, to detect perception and even conscious awareness. This perspective will be relevant for diagnosis and medical decision making, besides the fundamental questions about the nature of consciousness, thought and will. Even the definition of human death will evolve with time, as we define a person dead when, from some experimental evidence, at the actual state of science, his condition is absolutely irreversible. But science, and thus knowledge, evolves, and the capability of acting on the state of the brain will evolve as well.

The most urgent targets related to brain function are of course the dysfunctions caused either by pathological agents or by neurogenetic conditions or by other possible causes. A vast part of brain dysfunctions are poorly curable, due above all to the deep
ignorance of the processes which actually occur; moreover, this situation leads to the broad acceptance of non-scientific tools like psychotherapy. In particular, the neurodegenerative diseases cannot, in general, be objectively diagnosed; their diagnosis is still only clinical, the mechanisms which generate them are unknown and treatments are almost non-existent. New research for a deeper insight into brain function and dysfunction, with the support of original physical methods, is dramatically needed.

I thank my group for the excellent work: F. Giove, T. Gili, A. Cassara, S. Capuani, G. Garreffa, S. Mangia, M. Moraschi, M. Carni, G. Giulietti, M. Di Nuzzo, S. Peca, F. Marcocci, and Prof. A. Zichichi for his fundamental support to this research as President of the Enrico Fermi Centre.

REFERENCES
1. Owen, A.M., Coleman, M.R., Boly, M., Davis, M.H., Laureys, S., Pickard, J. (2006) "Detecting awareness in the vegetative state." Science, vol. 313, p. 1402.
2. Dinuzzo, M., Mangia, S., Maraviglia, B., Giove, F. (2010) "Changes in glucose uptake rather than lactate shuttle take center stage in subserving neuroenergetics: evidence from mathematical modeling." Journal of Cerebral Blood Flow and Metabolism, vol. 30, p. 586-602, ISSN 0271-678X.
3. Peca, S., Carni, M., Di Bonaventura, C., Aprile, T., Hagberg, G.E., Giallonardo, A.T., Manfredi, M., Mangia, S., Garreffa, G., Maraviglia, B., Giove, F. (2010) "Metabolic correlatives of brain activity in a FOS epilepsy patient." NMR in Biomedicine, vol. 23, p. 170-178, ISSN 0952-3480.
4. Silvia Mangia, Federico Giove, Ivan Tkac, Nikos K. Logothetis, Pierre-Gilles Henry, Cheryl A. Olman, Bruno Maraviglia, Francesco Di Salle, and Kamil Ugurbil (2009) "Metabolic and hemodynamic events after changes in neuronal activity: current hypotheses, theoretical predictions and in vivo NMR experimental findings." Journal of Cerebral Blood Flow and Metabolism 29, no. 3, 441-463.
5. G. Garreffa, M. Bianciardi, G.E. Hagberg, E. Macaluso, M.G. Marciani, B. Maraviglia, M. Abbafati, M. Carni, I. Bruni, and L. Bianchi (2004) "Simultaneous EEG-fMRI acquisition: how far is it from being a standardized technique?" Magnetic Resonance Imaging 22, no. 10, Sp. Iss. SI, 1445-1455.
6. G. Garreffa, M. Carni, G. Gualniera, G.B. Ricci, L. Bozzao, D. De Carli, P. Morasso, P. Pantano, C. Colonnese, V. Roma, and B. Maraviglia (2003) "Real-time MR artifacts filtering during continuous EEG/fMRI acquisition." Magnetic Resonance Imaging 21, no. 10, 1175-1189.
QUALITY OF LIFE - HOW TO USE ECOLOGICAL SCIENCE FOR SUSTAINED DEVELOPMENT

JAN SZYSZKO
University of Warsaw, Warsaw, Republic of Poland

Your Excellency Archbishop, Distinguished Professors, Ladies and Gentlemen! Expressing my sincere gratitude for the invitation to this important debate, I would like to inform you that my scientific life is linked to two passions: provoking open discussion and coping with the problems resulting from such discussions. This is why I would like to thank Professor Antonino Zichichi for the discussion-provoking lecture on "Why Science is Needed for the Culture of the Third Millennium".

At the beginning of my speech, I would like to mention that I entirely agree with the statement of Professor Zichichi (see his publication titled "The Motor for Progress") that ignorance is the number one enemy of a humanist. I support that statement because I have been working on two planes in the last few years: scientific and political. The title of my speech forces me to answer three basic questions:
1. What is "Quality of life"?
2. What is ecology and ecological science?
3. What is sustained development?

WHAT IS "QUALITY OF LIFE"?

"Quality of life" entails full access of all humans to water that is good for them, to good air, and the possibility to use all species created by the Creator. Any deterioration of the quality of water and access to it, any deterioration of the quality of air and any elimination of species entails the deterioration of the "quality of life". These are significant global issues and, for that reason, action has been taken to improve the quality of life through the establishment of the UN Convention to Combat Desertification, the UN Climate Change Convention and the UN Convention on Biological Diversity. It is worth remembering that about 30% of the human population suffers from a lack of water, about 50% of the inhabitants of large cities suffer from a lack of good air, and the majority of the territories of states at a high level of economic development suffer from the rapid disappearance of native species of plants, animals and fungi.

Why do we suffer from these three problems, and hence why is our quality of life not high? The answer is simple: because our management of global space is bad, we do not use it for the good of man, and we destroy it carelessly due to our ignorance: through "modern" agriculture, "barbaric" industry, "liberal" forestry, schematic road infrastructure and water "meliorations" concentrating, in particular, on the "straightening of rivers" and the lowering of the level of ground waters (Szyszko, 2004). The reaction of the environmental resources is evident in the rapid disappearance of native species. That visible effect resulted in a simple and primitive reaction of man: he started developing laws that protected environmental resources from man. What we forgot is the basic environmental law, i.e., the fact that nature protects its biodiversity (occurrence of
species) by way of "use", i.e., the diversification of carbon resources in space with the help of events currently called by man "environmental catastrophes", such as fires, floods or windfalls. For example, it is thanks to fires, releasing hundreds of tons of burnt carbon from natural forests, that species characteristic of fields, meadows and fallow lands could exist. In short, as nature protected its biodiversity through "environmental catastrophes", man, protecting environmental resources from "disasters", has to protect them through their use, on the condition that he uses them in an ecological manner and on the basis of environmental studies, in line with the concept of sustained development.

WHAT IS ECOLOGY AND ECOLOGICAL SCIENCE?

Ecology is not about warm hearts and the love for nature promoted on the TV screen among people living on the 20th floor of skyscrapers in great cities. Ecology is about knowledge of environmental rules; it is about mathematics, economy and excellent recognition not only of the species themselves but also of the basic needs of native animals, plants and fungi.

WHAT IS SUSTAINED DEVELOPMENT?

Sustained development is prompt economic development combined with a rational use of environmental resources and respect for human rights. That definition requires two assumptions in the light of what I said about the "Quality of life":
• Man is a part of the natural environment. Thus, he has to use it and introduce changes to it. This is not only his right but also his duty.
• No human activity needs to deteriorate the condition of the natural environment. On the contrary, human activity has to protect natural resources. All activities should protect natural resources.
That concept will remain just a slogan if we do not apply appropriate, reliable indicators. According to Szyszko (2004), there is no sustained development without:
1. GDP increase.
2. A net increase in the number of jobs.
3. An increase in the length of lives, with an advantageous demographic structure retained, guaranteeing the continuity of the human population (prevalence of young age classes).
4. Improvement of the quality of waters, according to the UN Convention to Combat Desertification.
5. Improvement of the quality of air, hence the reduction of greenhouse gases according to the assumptions of the UN Climate Change Convention.
6. Improvement of the condition of the natural environment, measured not only by the lack of disappearance but also by the return of native
species of plants, animals and fungi, at least according to the UN Convention on Biological Diversity.

Where a full range of native species exists, sustained development only has to correspond with the control of their occurrence, while in those regions where we have caused the extinction of native species through our economic activity, sustained development has to be measured by the return of these species (Szyszko, 2008). The UN Climate Change Convention and its appendix, the Kyoto Protocol, provide an excellent instrument and opportunity in that area. Such an opportunity comes from the absorption of carbon dioxide from the atmosphere thanks to the afforestation of degraded arable lands and sustained forest management focused on an increase in that absorption (Szyszko, 2004). Globally, there are millions of hectares of poor, degraded soils that do not guarantee the productivity of farming. According to experts, each hectare of such soil is able to absorb up to more than ten tons of CO2 annually after afforestation. One ton of absorbed carbon dioxide corresponds to a specific amount of money that can be defined according to the prices of the European Emission Trading System, currently amounting to more than ten Euro. The afforestation of poor, degraded soils entails the creation of new jobs in non-urbanized areas, an improvement of the quality of environmental resources and the multiplication of renewable sources of energy in the form of wood.

The UN Climate Change Convention and the UN Convention on Biological Diversity provide an opportunity to implement the sustained development concept, entailing the rational use of environmental resources for human needs by way of the appropriate management of carbon in environmental space (landscape). Our debate is taking place at a very important moment, i.e., before COP 15 in Copenhagen. The only chance for a compromise will be available there. That chance is the "Quality of life", i.e., sustained development forcing us to develop regional programs of such development, connected to climate, culture and biodiversity, for the creation of jobs in rural areas on the basis of carbon management in the space of living systems like forest, peat moor, field, meadow, etc. "Applied Science for mankind should be a real motor for progress but Scientists should be better in policy than ignorance".
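A minimal sketch of the afforestation arithmetic quoted above (about ten tons of CO2 absorbed per hectare per year, valued at roughly ten Euro per ton); the one-million-hectare area is a purely hypothetical input, not a figure from the paper:

# Rough annual value of CO2 absorption by afforested degraded land,
# using the per-hectare absorption and allowance price quoted in the text.
CO2_TONS_PER_HECTARE_PER_YEAR = 10.0   # "up to more than ten tons" of CO2 annually
ALLOWANCE_PRICE_EUR_PER_TON = 10.0     # "more than ten Euro" per ton (EU ETS)

def annual_value_eur(hectares):
    return hectares * CO2_TONS_PER_HECTARE_PER_YEAR * ALLOWANCE_PRICE_EUR_PER_TON

print(f"{annual_value_eur(1_000_000):,.0f} EUR per year for one million hectares")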
REFERENCES

1. Szyszko J. (2004) "Foundations of Poland's cultural landscape protection conservation policy." In: M. Dieterich, J. van der Straaten (eds.): Cultural Landscapes and Land Use. Kluwer Academic Publishers, The Netherlands: 95-110.
2. Szyszko J. (2008) "Szanse zrównoważonego rozwoju w Polsce." W: Łaska M. (ed.) Nauka wobec zagrożeń środowiska przyrodniczego. WSKSiM: 211-226.
3. Zichichi A. (2009) "The motor for progress." Public Service Review: European Union 18: 316-318.
4. Communication no. 286 of the Laboratory of Evaluation and Assessment of Natural Resources, Warsaw Agricultural University, and the Association of Sustained Development of Poland.
FUNDAMENTAL SCIENCE AND IMPROVEMENT OF THE QUALITY OF LIFE - SPACE QUANTIZATION TO MRI

M.J. TANNENBAUM
Brookhaven National Laboratory, Upton, New York, USA

Transcript of an intervention presented at the Pontifical Academy of Sciences, The Vatican, on the occasion of the award of the Ettore Majorana - Erice - Science for Peace Prize, November 25, 2009.

SCIENCE VERSUS TECHNOLOGY - A FALSE DICHOTOMY
• Science is the study of the laws of nature and the properties of natural objects. It answers a fundamental need of human nature, the desire to understand.
• Technology is the application of scientific knowledge to make devices.
• Science improves with improving technology and vice versa.
• Many of the problems facing us require scientific discovery as well as technological development.
SCIENTIFIC DISCOVERY IS VITAL FOR FUTURE PROGRESS

This is generally believed by government and private industry in the U.S. Here are a few examples.

M.I. Pupin-Serbian immigrant and Columbia College graduate 1883, PhD Berlin 1889, Professor at Columbia 1889-1931-made significant contributions to long-range telegraphy and telephony, and a sizeable fortune. He attributed his success in part to a fortunate encounter with classical academics/mechanics. Inspired by this experience, he preached the study and promotion of "pure science", which he called the "goose that laid the golden egg". He donated his estate to "pure science"-the Pupin Laboratory at Columbia.¹
Fig. 1: Pupin Physics Laboratories, Columbia University, New York, NY, USA.
G.A. Keyworth, science advisor to President R. Reagan 1981-86: "No federal research dollars, on average, gain more fruitful rewards than do those relatively few committed to basic research, the search for pure knowledge."²

U.S. National Academy of Sciences Report 2007. From "Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future", Committee on Prospering in the Global Economy of the 21st Century, N. Augustine (chair), National Academy Press, Washington DC. Chapter 6.1, "Sowing the Seeds through Science and Engineering Research":

Recommendation B: Sustain and strengthen the nation's traditional commitment to long-term basic research that has the potential to be transformational to maintain the flow of new ideas that fuel the economy, provide security, and enhance the quality of life.

Implementation Actions. Action B-1: Increase the federal investment in long-term basic research by 10% a year over the next 7 years, through reallocation of existing funds or, if necessary, through the investment of new funds. Special attention should go to the physical sciences, engineering, mathematics, and information sciences and to Department of Defense (DOD) basic-research funding. This special attention does not mean that there should be disinvestment in such important fields as the life sciences (which have seen growth in recent years) or the social sciences. A balanced research portfolio in all fields of science and engineering research is critical to U.S. prosperity.
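A minimal sketch of the scale of that recommendation, assuming simple geometric compounding of the 10% annual increase (the arithmetic is illustrative and not taken from the report itself):

```python
# 10% annual growth compounded over 7 years is roughly a doubling of the budget.
growth_rate, years = 0.10, 7
factor = (1 + growth_rate) ** years
print(f"cumulative growth factor after {years} years: {factor:.2f}x")  # ~1.95x, i.e. about double
```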
AN EXAMPLE: SPACE QUANTIZATION TO MAGNETIC RESONANCE IMAGING (MRI)-A TIMELINE FROM 1911-1977

1911-Rutherford discovers the nucleus. By scattering alpha particles (from radioactive decay) on gold foils he finds that the positive charge of matter is restricted to a volume with radius 10 fm (10⁻¹⁴ m = 0.00000000000001 m), the nucleus. The negatively charged electrons are at a much larger radius.

1913-Bohr theory of the hydrogen spectrum-quantized electron orbits. In the Bohr theory, electrons orbit the nucleus, like planets around the sun. Radiation is emitted when electrons fall from a higher orbit to a lower orbit (Figure 2). This results in a series of characteristic discrete spectral lines emitted by the different elements. The spectrum of hydrogen (one electron orbiting about one proton) was the simplest. Bohr explained an empirical formula by Balmer for the spectral wavelengths of light emitted by hydrogen by assuming that the electron orbits were quantized: they could only take on certain values given by integer "quantum numbers".
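For reference, the standard textbook form of the Balmer/Bohr relations behind these discrete lines (the transcript itself quotes no formulas) is

\[
\frac{1}{\lambda} = R_H\left(\frac{1}{2^2} - \frac{1}{n^2}\right), \quad n = 3, 4, 5, \ldots, \qquad
E_n = -\frac{13.6\ \mathrm{eV}}{n^2},
\]

where R_H ≈ 1.097 × 10⁷ m⁻¹ is the Rydberg constant for hydrogen and n is the integer quantum number of the orbit.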
Fig. 2: (Left) Schematic orbits of electrons with integer quantum numbers n = 1, 2, 3 about a nucleus. (Right, top) Emission spectrum of 92U238. (Right, bottom) Emission spectrum of hydrogen (1H1).

1916-Sommerfeld-Bohr model-space quantization-elliptical orbits. Sommerfeld proposed that the electrons travel in elliptical rather than circular orbits. More important for the present discussion, in order to reproduce the observed spectral lines, Sommerfeld proposed that an electron orbit cannot take any angle with respect to an external magnetic field, only integer projections of angular momentum. In classical physics there is no such restriction: any angle is possible.
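In modern notation (added here for clarity; not part of the original transcript), space quantization restricts the projection of an orbital angular momentum l onto the field direction to discrete values,

\[
L_z = m_l \hbar, \qquad m_l = -l, -l+1, \ldots, +l, \qquad
\cos\theta = \frac{m_l}{\sqrt{l(l+1)}},
\]

so only 2l + 1 orientations are allowed, in contrast to the classical continuum of angles.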
Fig. 3: In an external magnetic field, B, the orbit of an electron with 1 unit of angular momentum can only take discrete orientations with respect to the magnetic field.
1922-Stern-Gerlach experiment proves space quantization.

Fig. 4: Stern-Gerlach atomic beam experiment (silver atoms passing through an inhomogeneous magnetic field; the classical prediction is shown for comparison).
Stern and Gerlach in Frankfurt created a beam of silver atoms and passed it through a non-uniform magnetic field, which deflects the beam in proportion to the angular momentum projection along the B field. The beam does not spread; it splits into only 2 projections, proving space quantization.

1925-Pauli, trying to understand the periodic table (2, 8, 18, 32), proposes the Exclusion Principle: "a new quantum theoretic property of the electron, which I called a 'two-valuedness not describable classically'".
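A standard way to state the conclusion (added here for clarity; not in the original transcript): an angular momentum j has 2j + 1 allowed projections along the field, so observing exactly two beams implies a half-integer value,

\[
2j + 1 = 2 \quad\Rightarrow\quad j = \tfrac{1}{2},
\]

which is the two-valuedness that Pauli postulated and that Goudsmit and Uhlenbeck identified with electron spin.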
Fig. 5: Periodic Table of Elements.

1925-Goudsmit and Uhlenbeck propose electron spin of 1/2 as the source of the two-valuedness (two possibilities: spin ↑, spin ↓).

1927-I.I. Rabi receives his PhD from Columbia. He goes to Hamburg to work with Pauli. Instead, he learns atomic beams and does an experiment with Stern.

1929-Rabi returns to Columbia, sets up the Molecular Beams Lab.

1937-Rabi invents the "Molecular Beam Magnetic Resonance Method".
Fig. 6: The molecular beam is deflected by the inhomogeneous magnetic field in the left Stern-Gerlach apparatus (A). A rotating magnetic field at the resonant frequency in the region C flips the spin and deflects the beam away from the axis.³
The spin orientation of a nucleus with a magnetic moment precessing around an external magnetic field is flipped by a rotating magnetic field at the "Larmor frequency". In the two back-to-back Stern-Gerlach apparatuses, resonantly flipping the spin causes the beam not to return to the axis, which causes a dip in the measured intensity. The resonant displacement of the beam leads to measurement of the magnetic moments of nuclei with high precision.

1939-1945-World War II intervenes-MIT Radiation Laboratory with I.I. Rabi as deputy director for scientific matters. The MIT Rad. Lab. develops improved radio frequency sources and detectors in addition to Radar, Loran, etc.

1945-46-Nuclear Magnetic Resonance (NMR). Purcell (paraffin) and Bloch (water) observe nuclear magnetic resonance of protons in solid and liquid materials in an external magnetic field. (Protons have spin 1/2 and act like little magnets.) Meanwhile, powerful magnets, computing technology and algorithms are developed, leading to:

1971-1977-Magnetic Resonance Imaging (MRI)-Damadian (SUNY-Brooklyn), Lauterbur (SUNY-Stony Brook), Mansfield (Nottingham).
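As a rough numerical illustration (not from the original transcript), the resonance condition is f = (γ/2π)·B; using the standard proton value γ/2π ≈ 42.58 MHz/T, a 1.5 T clinical MRI magnet corresponds to a Larmor frequency of about 64 MHz. A minimal sketch of this arithmetic, with representative field strengths as assumptions:

```python
# Proton Larmor (resonance) frequency, f = (gamma / 2*pi) * B.
# The gyromagnetic ratio is the standard textbook value for the proton;
# the field strengths are typical examples, not values taken from the paper.
GAMMA_OVER_2PI_MHZ_PER_T = 42.577  # proton gyromagnetic ratio / 2*pi, in MHz per tesla

def larmor_frequency_mhz(b_field_tesla: float) -> float:
    """Return the proton resonance frequency in MHz for a field given in tesla."""
    return GAMMA_OVER_2PI_MHZ_PER_T * b_field_tesla

for b in (0.5, 1.5, 3.0):  # representative NMR/MRI field strengths in tesla
    print(f"B = {b:>4} T  ->  f = {larmor_frequency_mhz(b):6.1f} MHz")
```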
Fig. 7: (Left) Modern MRI machine, a large electromagnet (axial magnetic field) with lots of fancy rf and computing power. (Right) Typical MRI picture of the head: examination of the brain and other delicate internal parts of the human body without surgery and with exquisite resolution.

1967-Rabi retires from Columbia. Photo of Rabi and his magnetic resonance disciples:
Fig. 8: Rabi and disciples at the time of his retirement.⁴ From left to right: N. Ramsey (hydrogen maser, the most precise frequency source), J. Zacharias, C. Townes (maser, leading to the laser), I.I. Rabi, V. Hughes (spin structure of the nucleon), J. Schwinger (quantum electrodynamics), E. Purcell (NMR), W. Nierenberg, G. Breit.

Rabi was also instrumental in creating both Brookhaven National Laboratory (U.S.) and CERN (Europe) for fundamental research.

MODERN BASIC RESEARCH-WHAT IS INSIDE THE PROTON?

Protons (spin 1/2) are composed of 3 quarks (spin 1/2). Quarks in protons come in 2 "flavors", up and down, and 3 "colors", RGB, where "color" and "flavor" are the modern quantum numbers. We collide beams of polarized protons with each other, and beams of polarized electrons with beams of polarized protons, to determine where the mass and spin are located inside a proton. This is like two Stern-Gerlach beams colliding with each other, but at much higher energy.
Where is the spin located inside the proton? Quarks only carry 1/2 the momentum of a moving proton: color-charged gluons (the quanta of the strong force) make up the rest. In addition to the 3 "valence" quarks and the gluons, there are pair-produced sea quarks; and all these constituents of the proton have angular momentum. We haven't yet figured out where the spin of the proton is. However, we do know that 98% of the mass of the proton is due to the kinetic energy of the constituents, not to their rest masses!!

Where we do these experiments: RHIC, the Relativistic Heavy Ion and polarized proton-proton Collider on Long Island, visible from space.
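A rough check of that last statement (my sketch, using approximate current-quark masses as assumptions; the values are not taken from the paper):

```python
# The rest masses of the three valence quarks account for only ~1% of the proton mass;
# the remaining ~99% comes from the kinetic and interaction energy of quarks and gluons,
# consistent with the "98%" quoted in the text.
PROTON_MASS_MEV = 938.3                    # proton rest mass
UP_QUARK_MEV, DOWN_QUARK_MEV = 2.2, 4.7    # approximate u and d current-quark masses

valence_rest_mass = 2 * UP_QUARK_MEV + DOWN_QUARK_MEV   # proton = uud
fraction = valence_rest_mass / PROTON_MASS_MEV
print(f"valence quark rest masses: {valence_rest_mass:.1f} MeV "
      f"({100 * fraction:.1f}% of the proton mass)")
```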
Fig. 9: NASA Infra-red photo of New York Metro Region. RHIC is the white circle in the center of Long Island below the rightmost group of clouds. Manhattan island is clearly visible on the left side.
Fig. 10: A closer view of RHIC at Brookhaven National Laboratory. The large circle without tree cover is excavation related to the tunnel containing the RHIC machine. The colored lines show the Linac, the Booster accelerator for polarized proton injection, the tandem Van de Graaff accelerator and the transfer line to the Booster, and the AGS, which accelerates the beams to an energy of 22 GeV per nucleon × Z/A, where Z and A are the atomic number and weight of the nucleus.

2000-RHIC begins operation: an accelerator made entirely from superconducting magnets. Superconductivity is another triumph of basic research, discovered by Kamerlingh Onnes in 1911; but only in the 1970s were practical large superconducting magnets developed.
Fig. 11: (Left) Inside the RHIC tunnel: two rings of superconducting magnets. (Right) Cross section of a RHIC dipole magnet viewed along the beam axis. The B field is vertical. Note the resemblance to the MRI magnet (Figure 7).

THE NEW YORK CITY REGION NURTURES SCIENCE
Fig. 12: NASA Infra-red photo of New York Metro Region from Figure 9 with locations where work mentioned above was done. Also shown is the location of the Bronx High School of Science (see text). Not shown but also on this map are the original Bell Laboratories (in Manhattan until 1966), where the transistor was invented as well as many other discoveries, IBM Research Labs in Yorktown Heights, NY, Cold Spring Harbor Lab on Long Island, many research universities, etc.
Some locations where the fundamental science mentioned above was performed are shown (Figure 12) on the same map of the New York Metro region as Figure 9. I have also taken the liberty of showing the location of my high school, "Bronx Science", on the map since it is one of the great examples of nurturing science in the world. The Bronx High School of Science counts seven Nobel Prize winning scientists among its graduates, all in Physics! The Bronx High School of Science is a public high school (grades 9-12) in New York City open to all eligible students by competitive exam. No other secondary school in the world has as many alumni who have won Nobel Prizes. If Bronx Science were a country, it would be tied at 23rd with Spain for the number of Nobel Laureates (as of 2008). There are two other such high schools in New York that are almost equally successful: Stuyvesant H.S. in Manhattan and Brooklyn Technical H.S. Here is the list of Nobel Laureates (all in Physics) from Bronx Science:
• Leon N. Cooper 1947, Brown University, Nobel Prize 1972
• Sheldon L. Glashow 1950, Boston University, Nobel Prize 1979
• Steven Weinberg 1950, University of Texas at Austin, Nobel Prize 1979
• Melvin Schwartz 1949, Columbia University, Nobel Prize 1988
• Russell A. Hulse 1966, Princeton University, Nobel Prize 1993
• H. David Politzer 1966, California Institute of Technology, Nobel Prize 2004
• Roy J. Glauber 1941, Harvard University, Nobel Prize 2005
Many problems facing society at the beginning of the 21st Century need input from trained scientists:
• Climate change
• Clean renewable energy
• Nuclear power
• Nuclear proliferation
• Perhaps more important, wide-spread scientific understanding in the general public is required in order to understand the validity of proposed solutions, or to help find the solutions!

Thus, in addition to specialized secondary schools to produce trained scientists, the general science education in the public schools must be improved.

PHYSICS FIRST!-A Proposal to improve the science curriculum in U.S. secondary schools

The standard science curriculum in a good U.S. secondary school is Biology, Chemistry, Physics. A movement led by the American Association of Physics Teachers (AAPT) and Leon Lederman of Fermilab aims to revise the high school curriculum to Physics First! The three-year coordinated science sequence would be Physics, Chemistry, Biology, while integrating Earth Science and Astronomy topics into these areas. The emphasis in a physics-first sequence should be focused on conceptual understanding
rather than mathematical manipulation. Mathematics would be introduced on a "need-to-know" basis. My elder daughter had a similar curriculum in high school and was very happy with it: Earth Science, Biology, Physics, Chemistry. A pdf of Physics First! can be found at: http://www.aapt.org/upload/phys first.pdf. The website for Project ARISE (American Renaissance in Science Education) can be found at http://ed.fnal.gov/arise/.

THE 21ST CENTURY-BEGINNING OF THE 3RD MILLENNIUM

The 20th century started with the study of macroscopic matter, which led to the discovery of a whole new submicroscopic world of physics that totally changed our view of nature and led to new quantum applications, both fundamental and practical. For the third millennium we start in the sub-nuclear world with a new periodic table to understand. Who can imagine where this will lead over the next century and beyond?

REFERENCES

1. S. Devons, I.I. Rabi: Physics and Science at Columbia, in America, and Worldwide, Columbia Alumni Magazine, Summer 2001.
2. G.A. Keyworth, II, "Policy, Politics and Science in The White House-The Reagan Years", University of Colorado at Boulder, Boulder, Colorado, January 31, 2006.
3. I.I. Rabi, S. Millman, P. Kusch, J.R. Zacharias (1939) Phys. Rev. 55:526-535; see also Phys. Rev. 53:318 (1938).
4. V.W. Hughes (2000) Annu. Rev. Nucl. Part. Sci. 50:1.
IMPROVING THE CHANCES FOR PEACE BY PROVIDING ALMOST LIMITLESS ENERGY

FRANK L. PARKER
Environmental Engineering, Vanderbilt University, Nashville, Tennessee, USA

INTRODUCTION

This paper is presented as part of the program of the Premio Ettore Majorana Erice Science for Peace 2008 project, at the meeting at the Pontifical Academy of Sciences, Vatican, on Why Science is Needed for the Culture of the Third Millennium, November 25, 2009.

Science can aid in the search for peace and help make the third millennium a more peaceful, prosperous and sustainable world. Peace can be described as an absence of war. There is no agreement on the causes of war. There are numerous studies of the causes of war, dating back to Thucydides' Peloponnesian War in the 5th century BC.¹ Speakers at the Pugwash Conference of 2000² noted that "War, once a calling of the rich has become an infliction of the poor."ᵃ Further, "It is one of the great challenges for science policy and practice to organize a much broader and deeper effort to understand the nature and sources of human conflict and above all to develop effective ways of resolving conflict without recourse to violence."ᵇ However, control of resources, such as water and oil, is always considered a major cause. Also, poverty can cause conditions that lead to wars. So, how can science help?

The President of the Massachusetts Institute of Technology (MIT), Karl T. Compton (1933-1948), wrote at the height of the Depression in 1935 that "... the overwhelming influence of science has been to create employment, business, wealth, health and satisfaction"³ as quoted in Mahoney.⁴ Now, in 2009, President Obama has a science stimulus package of USD $100 billion (Millard), but as Richard Lester of MIT has said, "The reinvention of the nation's energy sources is inherently a project on the time scale of several decades," as quoted in Rotman.⁵ It will not be quick.

Though most attention has been placed on the availability of energy resources, water is actually more important because, while there are energy resources other than oil, there is only a finite amount of water on earth and there are no substitutes. So science and technology can help find alternate ways to produce energy and to increase the supply of potable water by desalination of seawater and of brackish ground waters. Availability of clean sources of water can also increase health and reduce poverty, also a contributor to wars.
SOCIETAL NEEDS

For a more general view of the important societal needs, we can look at the goals of the United Nations' Millennium Declaration of September 2002 to Reduce Extreme Poverty by 2015.⁶ Of the eight goals, science and technology can help to achieve four: number 1, Eradicate Extreme Poverty & Hunger; 4, Reduce Child Mortality; 5, Improve Maternal Health; and 7, Ensure Environmental Sustainability. Director-General Lee Jong-Wook of the World Health Organization wrote, "Once we can secure access to clean water and to adequate sanitation facilities for all people, irrespective of the difference in their living conditions, a huge battle against all kinds of
diseases will be won."⁷ Preventing disease also helps alleviate poverty. Some of the poorest people in the world are also the unhealthiest. They are among the 1.1 billion people without access to improved water sources and the 2.2 billion without basic sanitation. 1.4 million children die each year from preventable diarrhoeal diseases. Up to 50% of malnutrition is related to repeated diarrhoea and intestinal nematode infections due to unclean water.⁸

WATER

Conflicts over water have a long history, going as far back as the tale of Noah in the ark. Gleick has tallied over 400 incidents from that time to November 2008.⁹ The earliest and most recent entries of the tabulation are shown in Table 1.

Table 1: Abbreviated Table of Conflicts over Water

Date      Parties Involved   Basis of Conflict    Violent Conflict or in the    Description
                                                  Context of Violence
3000 BC   Ea, Noah           Religious account    Yes                           Sumerian legend, Bible
2008      Pakistan           Terrorism            Yes                           Taliban threat to blow up Warsak Dam
In response to the scarcity of water and the competition for water, it is useful to see where water is used and how the supply might be used more efficiently.¹⁰ Seventy percent of fresh water withdrawals are for agriculture, and this can reach as high as 90 percent in some developing countries. Irrigated agriculture accounts for 40 percent of global food production.¹⁰ Thousands of years ago, inhabitants in the region of Israel had constructed systems to collect, store and transport rain water. Water shortages still exist, and Israel leads the world in developing methods for reclaiming wastewater effluents, intercepting runoff, artificial recharge, artificially induced rainfall (cloud seeding) and desalination.¹¹ The drip system of irrigation has evolved to minute irrigation and micro-sprinkling: each tree has its own sprinkler.¹²

However, conservation and efficiency are not sufficient. Additional water must be sought, and desalination is the only method now available to add to the supply of fresh water. As of the end of 2009, there are 14,451 desalination plants on-line with a total capacity of 59.9 million m³. It is projected that by 2014 more than the equivalent of a new River Thames will be added each year to the world's renewable freshwater resources and that by 2020 twice that amount will be added.¹³ The sources of water for desalination are, in percent, brackish water, 48, and seawater, 52. Desalinated water, as of 2004, while only 0.4 percent of total fresh water usage, is used, in percentages, for drinking water, 24; industrial uses, 9, in countries that have reached the limits of their renewable water resources; and agriculture, 1, but increasing in use for high-value crops in greenhouses.¹⁴ There are three main methods of desalinating water. The earliest was multi-stage flash distillation, though today most plants use reverse osmosis technology while a minority use multi-effect distillation or multi-effect vapor compression.¹⁵ The WNA report states that nuclear desalination "is generally very cost-competitive with using fossil fuels". Of course, this is just in the production of potable water and does not consider other costs
such as the water delivery systems. They note that Israel is producing desalinated water at a cost of $0.50 USD/m³. The cost can vary widely depending upon the process used, the cost of the energy used, the location and quality of the water supply, whether the energy is derived from a combined system (electricity and heat for the process, or district heating and desalination), etc. This WNA report gives an up-to-date tabulation of experience with nuclear desalination and its prospects. It should be noted that many of the existing and proposed plants are in arid regions, particularly the Middle East and Northern Africa, that have severe water shortages. In addition, many of the states in these regions are unstable, and the location of nuclear plants there may be problematical. The International Atomic Energy Agency also has a program in nuclear desalination,¹⁶ though the most recent publication was in early 2007.¹⁷ However, there are a multitude of factors that affect desalination costs, including the quality of the feed water, the plant capacity, the site characteristics and regulatory requirements, and whether the energy is obtained from a co-generation operation.¹⁸ Perhaps varying even more widely are the site-specific costs, including distance from the water supply, distance to water use, home delivery costs, etc. There are also environmental problems to be considered, including the water intakes' impact on the local biota and the discharge of the concentrated brine. Finally, it should be remembered that the costs should be compared against the costs of developing new sources of water rather than against existing plants. A recent estimate of delivered water costs is shown in Table 2.

Table 2: Estimates of Delivered Total Water Costs¹⁹

Supply Type                     To Consumer¹             Total Family Cost²
                                ($ per 1,000 gallons)    ($ per month)
Existing traditional supply     $0.90-$2.50              $10.80-$30.00
New desalted water
  Brackish³                     $1.50-$3.00              $18.00-$30.00
  Seawater⁴,⁵                   $3.00-$8.00              $36.00-$96.00
Combined supply⁶
  Traditional + brackish        $1.20-$2.75              $14.40-$33.00
  Traditional + seawater        $1.11-$3.05              $13.32-$36.60

1. Price includes all costs to consumers for treatment and delivery.
2. Cost is based on a family of four using 100 gallons per day per person, for a total monthly use of 12,000 gallons. Cost is based on the average of the "To Consumer" cost shown.
3. Brackish water is moderately salty: 1,000-5,000 mg/L total dissolved solids (TDS).
4. Seawater contains 30,000-35,000 mg/L TDS.
5. Cost is for a typical urban coastal community in the USA. Costs for inland communities may be higher.
6. Combined supply costs are for the traditional supply augmented with 50% of desalted brackish water, or 10% of desalted seawater.
As can be seen from Table 2, if desalinated seawater provided 10% of the drinking water where the current price for 1,000 gallons of drinking water is $2.50, the new blended price would be at most $3.05. Note from Figure 1 that all power utilized represents only 30% of the cost of water, so that a doubling of the cost of electricity in the treatment cost would, at most, increase the total cost of water by 15%, but since there are other energy costs than for treatment, this percentage would be considerably less.
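A minimal sketch of the blending arithmetic behind Table 2's combined-supply row (my illustration, using the upper seawater cost from the table and the 90/10 split from note 6; no new data are introduced):

```python
# Blended cost of a combined supply: 90% traditional water at $2.50 per 1,000 gallons
# plus 10% desalted seawater at the table's upper cost of $8.00 per 1,000 gallons.
traditional_cost, seawater_cost = 2.50, 8.00          # $ per 1,000 gallons (Table 2 endpoints)
traditional_share, seawater_share = 0.90, 0.10        # shares per note 6 of Table 2

blended = traditional_share * traditional_cost + seawater_share * seawater_cost
print(f"blended cost: ${blended:.2f} per 1,000 gallons")   # $3.05, the table's upper combined value
```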
Fig. 1: Percentage of total costs for water¹⁹ (recovery of capital 43%, electric power 30%, membrane replacement 9%, maintenance and parts 8%, supervision and labor 4%, chemicals the remainder).
ENERGY

We need energy production that is sufficient, sustainable, relatively low cost and low risk to humans and the environment, for multiple uses including the replacement of greenhouse-gas-producing energy systems and desalination, and also because of the correlation between energy use and GDP (gross domestic product) and the correlations between clean water and health and between health and wealth. Of course, correlation is not necessarily causation, and there is the chicken-and-egg problem. As we make energy choices, we should remember that all energy systems have positive and negative aspects and all systems have large uncertainties. There are no perfect solutions. There must be compromises and tradeoffs, but the objectives of the highest importance (provide sustainable, economical energy; protect human health and the environment; promote a harmonious social and political environment; and reduce the risks of nuclear weapons proliferation) must be kept at the forefront of the search for satisfactory energy systems. Terrorists and rogue nations will also consider tradeoffs in obtaining and/or developing and delivering nuclear, chemical and biological weapons, and this must be kept in mind in choosing energy systems.

This paper only deals with nuclear energy. Nuclear wastes and the diversion of fissile materials for nuclear weapons production are two of the main obstacles to increased utilization of nuclear energy. However, if global warming of the magnitude of the dire forecasts of the IPCC (1.8 to 4 degrees Celsius by the end of this century) or a nuclear war occurs, the impact of nuclear wastes will be far less significant
than these events. The IPCC estimates that up to 30 percent of animal and plant species could be wiped out by a global temperature rise of 1.5 to 2.5 degrees Celsius.²⁰ Warming may also spur extinctions, shortages, and conflicts.²¹ As shown in Figure 2, there is a strong correlation between energy use and Gross Domestic Product, though correlation is not causation.
Fig. 2: Per capita commercial energy consumption (kilogrammes of oil equivalent per capita) vs. Gross Domestic Product per capita (Purchasing Power Parity).²²

There is also a strong correlation between life expectancy, the best overall indicator of the health of the population, and energy use, as shown in Figure 3.²³ Again, correlation is not necessarily causation. There are many arguments about whether the correlation in both cases is really causation. In the meantime, there is no more agreed-upon measure, so we shall use these. A similar but inverse correlation can be shown for infant mortality and energy use.
Fig. 3: Energy Use and Health:²³ life expectancy vs. energy use (tonnes of oil equivalent per capita).
PROLIFERATION OF FISSILE MATERIALS

Even with no additional nuclear power production, there already exists sufficient highly enriched uranium and plutonium, not in the weapons stockpiles, to make between 50,000 and 80,000 nuclear weapons. The exact amounts of fissile material available and needed to make bombs are classified and depend upon the purity of the material, the design of the bomb and other factors. However, both the Union of Concerned Scientists and Wikipedia have published articles that give such estimates. Their numbers are quoted here.²⁴
Table 3: Amount of fissile material needed to build an atomic bomb

Simple gun-type nuclear weapon     90 to 110 lbs. (40 to 50 kg) U (enriched to 90 percent U-235)
Simple implosion weapon            33 lbs. (15 kg)
Sophisticated implosion weapon     14 lbs. (6 kg)
Biological or chemical terrorist attacks are also in the possible mix. These terrorist possibilities are beyond the scope of this talk but must influence the choice of new energy sources.

STATUS OF RADIOACTIVE WASTE DISPOSAL IN THE USA

The present systems for low-level and high-level radioactive waste disposal in the USA do not work. Though low-level radioactive waste laws were passed in 1980 and amended in 1985, no new disposal facilities under the authorized Compacts have been put into operation since that time. However, a site in Texas is expected to open late this year (2010). Most concern has been about high-level waste and spent fuel, and only those will
be discussed here. Fifty years ago, in 1959, the first international conference on radioactive waste was held, where field studies of disposal of spent nuclear fuel in deep geological formations and 5 national laboratories' studies of immobilization were described.²⁵ After 50 years of effort, there is, as yet, no vitrification at Hanford and no high-level waste or spent fuel disposal in geologic formations anywhere in the world. Now USA law demands assurances for 1,000,000 years and beyond, and that is clearly impossible. With this performance, could any private company survive?

WHAT CAN BE DONE?

A guide to what should be done was published in 1990 by the US National Academy of Sciences, Rethinking High-Level Radioactive Waste Disposal.²⁶ That report stressed the scientific approach, which can be summarized as:
a.) Start with the simplest description of what is known,
b.) Meet problems as they emerge,
c.) Define the goal broadly in ultimate performance terms,
d.) The comparison is not so much between ideal systems and imperfect reality as it is between a geologic repository, at-surface storage, deep sub-seabed disposal, etc., and
e.) Apply a conservative, science-backed engineering approach and an institutional structure designed to permit flexibility and remediation.

Because the waste can remain hazardous for very long times, the question is: what do we owe future generations? For thousands of years philosophers have been, and still are, struggling with this question. The responses range from Lao Tzu, "Those who predict, don't know; those who know, don't predict,"²⁷ to the Bible, "And the land shall not be sold in perpetuity; for the land is Mine; for ye are strangers and settlers with Me,"²⁸ and to the United Nations, "What is needed is growth that is forceful and at the same time socially and environmentally sustainable."²⁹

WHAT TO DO WITH HIGH LEVEL WASTE?

With this background, what should we do?
1. Set a realistic objective function for the number of generations that we have concern for, say 3-5, approximately 100 years.
2. Design the system, say dry surface storage (Option 1), for that time frame, but make sure that there will not be a catastrophic release at the end of that period.
3. Since the energy content of even high-level waste after that time is low, the releases, if any, will be slow.
4. Design the system to be reversible and modifiable and the wastes retrievable, if necessary. (Though where will you put the retrieved waste, since the 'best' technique and site had been chosen?) Test the system with modeling and at pilot and field scales.
5. At the end of that time period, repeat the process. Would we then have lasers that would open shafts and tunnels at far lower cost and damage to humans and the environment?

Before choosing an option, we need to ask:
• Would we be more or less safe than if we sent the material to Yucca Mountain or its equivalent?
• Would it cost more or less?
• Would it be more publicly acceptable, that is, more likely to be implemented, in comparison to any of the other options?
If Option 1 is not the best choice, look at other options such as reprocessing (Option 2); centralized dry surface storage (Option 3); sending wastes to the Waste Isolation Pilot Plant (Option 4); sub-seabed sediment disposal (Option 5); or doing nothing (Option 6). Option 5, disposal in sub-seabed sediments, deserves more discussion since it is not on anybody's list now. The technical results of the international study of disposal in the sub-seabed sediments in the 1980s seemed to be favorable.³⁰ The withdrawal of the USA from the program and the banning of disposal into the sea terminated any further discussion of this possibility. It should be noted that at that time there were legal arguments that disposal into the deep sediments was not forbidden. The unlikelihood of large environmental and human effects, since no one drinks sea water and there is large physical and chemical dilution, would warrant another look at this option. It should be noted that if spent nuclear fuel is treated as waste and buried deep in ocean sediments, then it is almost totally unavailable to terrorists and rogue states at these depths, and the visible surface superstructure required to retrieve the material would be immediately apparent.

Some of these or other systems will be more believable than Yucca Mountain, and just as protective of public health and the environment over the same time periods as the presently proposed system. The cost will be much lower than the capital costs for the 1,000,000-year protected facility. If repairs are needed, they can be made with the science and technology and with the social expectations available at that time. Since these are assertions, they must be tested with mathematical models, and with laboratory and field studies including public impact.

SUSTAINABILITY OF THE NUCLEAR OPTION

Japanese field studies over a period of almost 30 years have shown that they can extract uranium from sea water at twice the present spot price of uranium, $44/lb U3O8. The price of uranium over the last years has ranged over double the present price and as low as $7/lb U3O8.³¹ There is an estimated 4.5 billion tons of uranium in the sea.³² Even if commercialization of gathering uranium from the sea is slow, breeder reactors seem to be on the verge of becoming routine. This would multiply the amount of usable uranium already available at least 50-fold.³³

CONCLUSIONS

In the USA the present system is broken, and new laws must be passed whether Yucca Mountain survives or not, as the site is presently limited to holding 70,000 metric tonnes of heavy metal. Even the existing reactors will produce more spent fuel than that, and a nuclear renaissance would produce even more spent fuel. Further, the public climate has
been soured by the political debates over the choice of Nevada as the host state and the seeming inability of DOE to solve the problem in a reasonable time and at a reasonable cost. Therefore, a new approach that would satisfy the scientific community and the public, if that is possible, is needed. That means an approach that is believable, achievable and sustainable. The system must be viable, must be able to be monitored, and must serve each generation according to the conditions prevalent at that time and the wishes of that generation. Unfortunately, with all the considerations that need to be taken into account and the uncertainties involved, there is no mathematically optimal solution that can be obtained. As we try to predict what will happen in the future, we see that, as the literature shows,³⁴,³⁵,³⁶ the best we can do is hope to muddle through. It will not be easy, for as Machiavelli wrote 500 years ago, "The reformer has enemies in all those who profit by the old order, and only lukewarm defenders in all those who would profit by the new order."³⁷

REFERENCES
1. Singer, J. David (1981) "Accounting for International War: The State of the Discipline." Journal of Peace Research, Vol. 18, No. 1, pp. 1-18.
2. Pugwash Conference on Eliminating the Causes of War, 3-8 August 2000, Cambridge, England, http://www.pugwash.org/reports/pic/pac3b.htm (a) Eliminating the Causes of War, John Keegan, http://www.pugwash.org/reports/pic/pac256/keegan.htm (b) Eliminating the Causes of War, David A. Hamburg, http://www.pugwash.org/reports/pic/pac2S6/hamburg.htm
3. Compton, Karl T. (1935) "Put Science to Work!" Technology Review 37:133-135; "Science Has Been Considered an Aid to Peace," 152-158.
4. Mahoney, Matt (2009) "A Hard Sell." Technology Review, 112:4, 88.
5. Rotman, David (2009) "Can Technology Save the Economy?" Technology Review 112:4, 44.
6. Goals of the United Nations' Millennium Declaration, September 2002, to Reduce Extreme Poverty by 2015, http://www.un.org/millenniumgoalslbkgd.shtml
7. Lee Jong-Wook, Director-General, World Health Organization, 2004, Water, Sanitation and Hygiene Links to Health-Facts and Figures, http://www. who.int/water sanitation healthifactsfigures200S .pdf; The International Decade for Action: Water for Life, 2005-2015, 2001.
8. The health aspects of water supply and sanitation, Joint Monitoring Programme of WHO and UNICEF, http://www.wssinfo.org/en/14I_wshIntro.html
9. Gleick, Peter H., November 2008, Water Conflict Chronology-Over 400 Incidents, Pacific Institute for Studies in Development, Environment and Security, http://worldwater.org/conflictchronology.pdf
10. 3rd UN World Water Development Report: Water in a Changing World, 2009, Response to Water Scarcity and Competition, http://www.unesco.org/water/wwap/wwdr/wwdr3/pdfI13 WWDR3 ch 3.pdf
11. Water in Israel, http://www.jewishvirtuallibrarv.org/jsource/brief/Water.htmI; Israel's Chronic Water Problem, http://www.jewishvirtuallibrary.org/jsource/Historv/scarcity.html
12. Israeli Agro-Technology, http://www.jewishvirtuallibrary.org/jsource/Economv/ ec03.html (2001).
13. Inventory: 700 New Desalination Plants Commissioned in 2008, http://eponline.coml Articles/20091 II III IInventory-700-New-Desalination-PlantsCommissioned-in-2008.aspx?p=1
14. 3rd UN World Water Development Report: Water in a Changing World, 2009, Desalination-Increased Availability of Water, Earthscan.
15. World Nuclear Association, February 2010, Nuclear Desalination, http://www.world-nuclear.org/info/inf7l.html
16. IAEA, March 2004, Nuclear Desalination, http://www.iaea.orglNuclearPowerlDesalinationiOverview/activities.html
17. IAEA-TECDOC-1524, Status of Nuclear Desalination in IAEA Member States, January 2007, http://www-pub.iaea.org/MTCD/publications/PDF/te 1524 web.pd
18. Tamim Younos (2005) "The Economics of Desalination." Journal of Contemporary Water Research and Education, Vol. 132, pp. 39-45.
19. Membrane Desalination Costs, February 2007, American Membrane Technology Association, http://www.membranes-amta.org/amta medialpdfs/6 Membrane DesalinationCosts.pdf
20. Gateway to the UN System's Work on Climate Change, n.d., World on Track to Meet Worst Climate Change Projections, http://www .un. org/wcmlcontent/sitelclimatechangelcacheloffonce/pagesl gatewayl the-science/pid/2907;jsessionid=145D4284 7942F4C3EAA37 A276360B3C821.
21. Owen, James, April 6, 2007, Warming May Spur Extinctions, Shortages, Conflicts, National Geographic News, http://news.nationalgeographic.com/ news/2007104/070406-g10 bal-warming.html
22. Commercial Energy Consumption and GDP, 2000, Energy Services for the Millennium Development Goals, 2005, Source: United Nations Common Database, 2000, http://www.unmilienniumproject.org/documents/MP Energy Low_Res.pdf
23. United Nations Development Program, World Energy Assessment, 2000, Energy Use and Health, energy and the challenge of sustainability-energy and social issues, http://www.undp.org/energy/activities/wealdrafts-frame.html
24. Union of Concerned Scientists, August 17, 2004, Nuclear Weapons & Global Security, http://www.ucsusa.org/nuclear weapons and global security/nuclear terrorism/technicaUssues/fissile-materials-basics.html; Wikipedia, February 23, 2010, Enriched Uranium, http://en.wikipedia.org/wikilNuclear_weapon_design
25. International Atomic Energy Agency (1959) Scientific Conference on the Disposal of Radioactive Waste, Monaco, 16-21 November 1959.
26. U.S. National Academy of Sciences, 1990, Rethinking High-Level Radioactive Waste Disposal.
27. Lao Tzu, http://www.chebucto.ns.calphilosophy/Taichi/lao.html, accessed July 6, 2009.
28. Leviticus 25:23.
29. Brundtland, Gro Harlem (Chair), UN, Our Common Future, 4 August 1987, p. 14.
30. Mobbs, S.F., D. Charles, C.E. Delow, and N.P. McColl (1988) Performance Assessment of Geological Isolation Systems for Radioactive Wastes (PAGIS), Commission of the European Communities, EUR 11779 EN.
31. Tamada, Masao, August 2009, Extracting Uranium from Seawater, Erice, Sicily, in press.
32. International Union of Pure and Applied Physics, October 6, 2004, R&D of Energy Technologies, Annex A, II. Nuclear Fission Energy, http://www.iupap.org/wg/energy/annexa.pdf
33. World Nuclear Association, September 2005, Uranium: Sustaining the Global Nuclear Renaissance?, http://www.world-nuclear.org/reference/default.aspx? id=294&terms=fast+breeder+uranium+utilization
34. Taleb, Nassim Nicholas (2007) The Black Swan: The Impact of the Highly Improbable, Random House.
35. Fortun, Michael and Herbert J. Bernstein (1998) Muddling Through: Pursuing Science and Truths in the Twenty-First Century, Counterpoint.
36. Verweij, Marco and Michael Thompson, Editors (2006) Clumsy Solutions for a Complex World: Governance, Politics and Plural Perceptions, Palgrave Macmillan.
37. Machiavelli, Niccolo, The Prince, N.H. Thomson, translator, Dover Publications, Inc., New York, 1992, p. 13. Originally published by P.F. Collier & Son, New York, 1910. http://design.caltech.eduierikiMisc/Machiavelli.html
A SCIENCE OF THE IRRATIONAL CAN HELP PROTECT SCIENCE FROM IRRATIONAL ATTACKS
JOHN, LORD ALDERDICE, FRCPSYCH
House of Lords, London, UK

Professor Zichichi, in his important paper presented earlier today on Science as the Motor for Progress, stated that the first great achievement of our human intellect, on which all logic and science is based, was the development of language. Language enabled mankind to control, inhibit, delay and transform the gratification of the physical emotions and desires of hunger, aggression, anxiety, terror, envy and the sexual drives. The development of language not only facilitated communication between individuals but also communication within the self (the capacity to reflect and remember) and between groups of people and succeeding generations. This made possible the growth of logic, and later science. However, evolution and development did not remove emotion, nor did they remove the possibility of a falling back, or regression, into emotionally driven mental functioning and behaviour and the dissolution of the more advanced thinking capacities, as happens for example in mental illness.

Individuals can lose the capacity to think rationally and logically not only as a result of mental illness, but also as a result of organic damage or emotional overload. All of us as individuals can find our capacity to think logically being transiently affected by anger through a reversal in fortunes or a humiliation, or more positively by falling in love. However, when someone falls mentally ill, such a disturbance of their thinking is not so transient and is not necessarily removed merely by improving their external circumstances. One of the disturbances of thinking which can appear in psychotic mental illness is the 'delusion'. This is a fixed false belief, out of line with the person's cultural context, and from which the person cannot be persuaded by rational argument or reasonable evidence. Their appreciation of reality is distorted to fit their delusionary belief, rather than their delusion being susceptible to disproof by confronting reality.

Fixed false beliefs can also arise in groups, where they are widely shared by many people with a common cultural context. This is not, however, a sign of widespread individual psychotic illness and disturbance. What is it that makes whole groups or communities hold to fixed false beliefs contrary to reasonable evidence? Why do communities hold to what we would regard as non-scientific thinking? It is not only in the mental life of individuals that irrationality can take over, but also in the thinking of a group or community. This often happens when there is deep group anxiety: a fear of economic chaos, physical external attack, or threats to religious faith, culture or other aspects of the group's identity. Science and scientific thinking, which one can see as the characteristic of healthy, rational functioning in a group or community, can be damaged when communities descend into fundamentalism, which is anti-scientific and produces a primitive form of religion or of group thinking in general.

But this descent into the irrational does not only arise where there is a direct attack on the scientific approach from anti-science fundamentalists. The scientific approach can also be corrupted from 'inside the camp', as it were. The scientific approach demands that in the face of uncertainty we take a rational approach which accepts that there is uncertainty, and that we may have to live with 'not knowing' for a long time until the truth becomes
clear. However, when there is a serious threat or uncertainty for a group, it may seek to remove the uncomfortable anxiety provoked by this uncertainty by jumping to conclusions which are given a cloak of scientific respectability by the profession of those involved, rather than by the scientific rigor of the approach. For example, the uncertainty and fear of climate change and environmental catastrophe can lead communities to jump to illusory conclusions about how far humanity can control the forces of nature rather than living with the uncertainty which a rigorous and long-term scientific investigation requires. Unfortunately, the levers of rational enquiry are not strong in the face of passionate anxiety and the resultant regression into thinking dominated by emotion rather than logic. The problem of predicting outcomes in certain circumstances, such as where there are a very large number of variables, or where prediction is simply not possible, is very frightening to many people who would rather have false certitude than honest uncertainty. If we are not to be overwhelmed by irrationality driven by such fears, we must apply our scientific approach to understanding the nature of irrationality itself, not only in the individual, but also in the group. Only if we focus upon and increase our understanding of group irrationality and how and why this happens can we hope to prevent our global community from falling victim to ignorance which pretends to be knowledge, and to an appearance or 'gloss' of science contributing to the worsening cycle of panic rather than a truly scientific approach that may be able to help us out of it.

An example, from another field, of how apparent progress can actually lead to a worsening reality is provided by the growth of some of the structures of democracy in Africa. In the past, sub-Saharan African society was largely tribal and its technology was modest. The judgement of the tribal chief was arbitrary but final; however, in the larger scheme of things it was also of limited effect because the capacity to wage war and cause destruction was modest. Law and the institutions of western-style democracy, accompanied by the technology of security, defence and the weapons of war, were rapidly imported into Africa during the twentieth century without a concomitant fundamental change and development of culture. This meant that old-style chiefs could now use electoral processes and law, not to create liberal democracies but to maintain old tribal attitudes, and in addition to enforce them not with the limited capacities of spears and foot-soldiers, but with a fearsome battery of modern weapons of war, which they could not themselves produce, but which are now available for them to use. It was believed that if the institutional and physical tools of modernity were provided, a tolerant liberal culture would inevitably develop. It is by no means clear that this is true, even in the medium to long term.

The technology made possible by science may be used by a society whose culture is enriched by a highly developed and positive view of humanity. In this case we can have some chance (though no certainty) of relative peace, stability and mutual benefit and understanding. On the other hand, without the culture of science, the technology developed out of the science of recent centuries can contribute to fear, economic chaos, war, and social destruction.
Terrorism is a symptom of such cultural regression through fundamentalist ways of thinking and, in addition, of political radicalization, which is itself also a symptom of group regression. The recent economic chaos is another case where the gloss of economic 'science' was used to bolster the wish for cost-free prosperity but actually led to catastrophe. The illusion of a 'science' of economics lies in its failure to recognize that, with so many variables and a very high level of complexity of function, prediction in economics is not
possible. In addition, economic markets are largely a function of group psychology rather than of mathematical calculus, and so psychological contrarians are as likely to be able to predict outcomes as the mathematical economic analysts. We need the culture of science to protect us from disaster, but we need a scientific understanding of man's irrationality, especially irrationality in group functioning, to help us protect science and the scientific method from being abused by some who claim to be its adherents, resulting in the misleading of the majority of our people and the potential of cultural catastrophe.