INTERNATIONAL SEMINAR ON
NUCLEAR WAR AND PLANETARY EMERGENCIES 32nd Session:
The 32nd Session of International Seminars and International Collaboration. International Seminar on Nuclear War and Planetary Emergencies - 32nd Session: Limits of Development: Migration and Cyberspace; in Europe; Synoptic European Overview; From and Within Asia; Globalization - Climate: Global Warming; a Chronology; Simple Climate Models; Energy and Electricity Considerations - T.S.E.: CJD and Blood Transfusion; BSE in North America; Gerstmann-Straussler-Scheinker Disease - The Cultural Emergency: Innovations in Communications and IT - Cosmic Objects: Impact Hazard; Close Approaches; Asteroid Deflection; Risk Assessment and Hazard Reduction; Hayabusa and Follow Up - AIDS and Infectious Diseases: Ethics in Medicine; International Co-operation; Laboratory Biosecurity Guidelines; Georgian Legislation; Biosecurity Norms and International Organizations, Legal Measures Against Biocrimes - Water and Pollution: Cycle Overview; Beyond Cost and Price; Requirements in Rural Iran; Isotope Techniques; Clean and Reliable Water for the 21st Century - Permanent Monitoring Panels Reports - Workshops: Global Biosecurity; Cosmic Objects
THE SCIENCE AND CULTURE SERIES Nuclear Strategy and Peace Technology
Series Editor: Antonino Zichichi
1981 - International Seminar on Nuclear War - 1st Session: The World-wide Implications of Nuclear War
1982 - International Seminar on Nuclear War - 2nd Session: How to Avoid a Nuclear War
1983 - International Seminar on Nuclear War - 3rd Session: The Technical Basis for Peace
1984 - International Seminar on Nuclear War - 4th Session: The Nuclear Winter and the New Defence Systems: Problems and Perspectives
1985 - International Seminar on Nuclear War - 5th Session: SDI, Computer Simulation, New Proposals to Stop the Arms Race
1986 - International Seminar on Nuclear War - 6th Session: International Cooperation: The Alternatives
1987 - International Seminar on Nuclear War - 7th Session: The Great Projects for Scientific Collaboration East-West-North-South
1988 - International Seminar on Nuclear War - 8th Session: The New Threats: Space and Chemical Weapons - What Can be Done with the Retired I.N.F. Missiles - Laser Technology
1989 - International Seminar on Nuclear War - 9th Session: The New Emergencies
1990 - International Seminar on Nuclear War - 10th Session: The New Role of Science
1991 - International Seminar on Nuclear War - 11th Session: Planetary Emergencies
1991 - International Seminar on Nuclear War - 12th Session: Science Confronted with War (unpublished)
1991 - International Seminar on Nuclear War and Planetary Emergencies - 13th Session: Satellite Monitoring of the Global Environment (unpublished)
1992 - International Seminar on Nuclear War and Planetary Emergencies - 14th Session: Innovative Technologies for Cleaning the Environment
1992 - International Seminar on Nuclear War and Planetary Emergencies - 15th Session (1st Seminar after Rio): Science and Technology to Save the Earth (unpublished)
1992 - International Seminar on Nuclear War and Planetary Emergencies - 16th Session (2nd Seminar after Rio): Proliferation of Weapons for Mass Destruction and Cooperation on Defence Systems
1993 - International Seminar on Planetary Emergencies - 17th Workshop: The Collision of an Asteroid or Comet with the Earth (unpublished)
1993 - International Seminar on Nuclear War and Planetary Emergencies - 18th Session (4th Seminar after Rio): Global Stability Through Disarmament
1994 - International Seminar on Nuclear War and Planetary Emergencies - 19th Session (5th Seminar after Rio): Science after the Cold War
1995 - International Seminar on Nuclear War and Planetary Emergencies - 20th Session (6th Seminar after Rio): The Role of Science in the Third Millennium
1996 - International Seminar on Nuclear War and Planetary Emergencies - 21st Session (7th Seminar after Rio): New Epidemics, Second Cold War, Decommissioning, Terrorism and Proliferation
1997 - International Seminar on Nuclear War and Planetary Emergencies - 22nd Session (8th Seminar after Rio): Nuclear Submarine Decontamination, Chemical Stockpiled Weapons, New Epidemics, Cloning of Genes, New Military Threats, Global Planetary Changes, Cosmic Objects & Energy
1998 - International Seminar on Nuclear War and Planetary Emergencies - 23rd Session (9th Seminar after Rio): Medicine & Biotechnologies, Proliferation & Weapons of Mass Destruction, Climatology & El Nino, Desertification, Defence Against Cosmic Objects, Water & Pollution, Food, Energy, Limits of Development, The Role of Permanent Monitoring Panels
1999 - International Seminar on Nuclear War and Planetary Emergencies - 24th Session: HIV/AIDS Vaccine Needs, Biotechnology, Neuropathologies, Development Sustainability - Focus Africa, Climate and Weather Predictions, Energy, Water, Weapons of Mass Destruction, The Role of Permanent Monitoring Panels, HIV Think Tank Workshop, Fertility Problems Workshop
2000 - International Seminar on Nuclear War and Planetary Emergencies - 25th Session
2001 - International Seminar on Nuclear War and Planetary Emergencies - 26th Session: AIDS and Infectious Diseases - Medication or Vaccination for Developing Countries; Missile Proliferation and Defense; Tchernobyl - Mathematics and Democracy; Transmissible Spongiform Encephalopathy; Floods and Extreme Weather Events - Coastal Zone Problems; Science and Technology for Developing Countries; Water - Transboundary Water Conflicts; Climatic Changes - Global Monitoring of the Planet; Information Security; Pollution in the Caspian Sea; Permanent Monitoring Panels Reports; Transmissible Spongiform Encephalopathy Workshop; AIDS and Infectious Diseases Workshop; Pollution Workshop
2002 - International Seminar on Nuclear War and Planetary Emergencies - 27th Session: Society and Structures: Historical Perspectives - Culture and Ideology; National and Regional Geopolitical Issues; Globalization - Economy and Culture; Human Rights - Freedom and Democracy Debate; Confrontations and Countermeasures: Present and Future Confrontations; Psychology of Terrorism; Defensive Countermeasures; Preventive Countermeasures; General Debate; Science and Technology: Emergencies; Pollution, Climate - Greenhouse Effect; Desertification, Water Pollution, Algal Bloom; Brain and Behaviour Diseases; The Cultural Emergency: General Debate and Conclusions; Permanent Monitoring Panel Reports; Information Security Workshop; Kangaroo Mother's Care Workshop; Brain and Behaviour Diseases Workshop
2003 - International Seminar on Nuclear War and Planetary Emergencies - 29th Session: Society and Structures: Culture and Ideology - Equity - Territorial and Economics - Psychology - Tools and Countermeasures - Worldwide Stability - Risk Analysis for Terrorism - The Asymmetric Threat - America's New "Exceptionalism" - Militant Islamist Groups: Motives and Mindsets - Analysing the New Approach - The Psychology of Crowds - Cultural Relativism - Economic and Socio-economic Causes and Consequences - The Problems of American Foreign Policy - Understanding Biological Risk - Chemical Threats and Responses - Bioterrorism - Nuclear Survival Criticalities - Responding to the Threats - National Security and Scientific Openness - Working Groups Reports and Recommendations; Water - Pollution, Biotechnology - Transgenic Plant Vaccine, Energy, Black Sea Pollution, AIDS - Mother-Infant HIV Transmission, Transmissible Spongiform Encephalopathy, Limits of Development - Megacities, Missile Proliferation and Defense, Information Security, Cosmic Objects, Desertification, Carbon Sequestration and Sustainability, Climatic Changes, Global Monitoring of the Planet, Mathematics and Democracy, Science and Journalism, Permanent Monitoring Panel Reports, Water for Megacities Workshop, Black Sea Workshop, Transgenic Plants Workshop, Research Resources Workshop, Mother-Infant HIV Transmission Workshop, Sequestration and Desertification Workshop, Focus Africa Workshop
2004 - International Seminar on Nuclear War and Planetary Emergencies - 30th Session: Anniversary Celebrations: The Pontifical Academy of Sciences 400th - The 'Ettore Majorana' Foundation and Centre for Scientific Culture 40th - H.H. John Paul II Apostolate 25th - Climate/Global Warming: The Cosmic Ray Effect; Effects on Species and Biodiversity; Human Effects; Paleoclimate Implications; Evidence for Global Warming - Pollution: Endocrine Disrupting Chemicals; Hazardous Material; Legacy Wastes and Radioactive Waste Management in USA, Europe, Southeast Asia and Japan - The Cultural Planetary Emergency: Role of the Media; Intolerance; Terrorism; Iraqi Perspective; Open Forum Debate - AIDS and Infectious Diseases: Ethics in Medicine; AIDS Vaccine Strategies - Water: Water Conflicts in the Middle East - Energy: Developing Countries; Mitigation of Greenhouse Warming - Permanent Monitoring Panels Reports - Workshops: Long-Term Stewardship of Hazardous Material; AIDS Vaccine Strategies and Ethics
2004 - International Seminar on Nuclear War and Planetary Emergencies - 31st Session: Multidisciplinary Global Approach of Governments and International Structures: Societal Response - Scientific Contributions to Policy - Economics - Human Rights - Communication - Conflict Resolution - Cross-Disciplinary Responses to CBRN Threats: Chemical and Biological Terrorism - Co-operation Between Russia and the West - Asymmetrical Conflicts - CBW Impact - Cross-Disciplinary Challenges to Emergency Management - Media Information and Communication: Role of Media in Global Emergencies - Emergency Responders - Working Groups' Reports and Recommendations
2005 - International Seminar on Nuclear War and Planetary Emergencies - 32nd Session: Limits of Development: Migration and Cyberspace; in Europe; Synoptic European Overview; From and Within Asia; Globalization - Climate: Global Warming; a Chronology; Simple Climate Models; Energy and Electricity Considerations - T.S.E.: CJD and Blood Transfusion; BSE in North America; Gerstmann-Straussler-Scheinker Disease - The Cultural Emergency: Innovations in Communications and IT - Cosmic Objects: Impact Hazard; Close Approaches; Asteroid Deflection; Risk Assessment and Hazard Reduction; Hayabusa and Follow Up - AIDS and Infectious Diseases: Ethics in Medicine; International Co-operation; Laboratory Biosecurity Guidelines; Georgian Legislation; Biosecurity Norms and International Organizations, Legal Measures Against Biocrimes - Water and Pollution: Cycle Overview; Beyond Cost and Price; Requirements in Rural Iran; Isotope Techniques; Clean and Reliable Water for the 21st Century - Permanent Monitoring Panels Reports - Workshops: Global Biosecurity; Cosmic Objects
THE SCIENCE AND CULTURE SERIES
Nuclear Strategy and Peace Technology
"E. Majorana" Centre for Scientific Culture Erice, Italy, 19-24 Aug 2004
Series Editor and Chairman: A. Zichichi
Edited by R. Ragaini
World Scientific
NEW JERSEY * LONDON * SINGAPORE * BEIJING * SHANGHAI * HONG KONG * TAIPEI * CHENNAI
Published by
World Scientific Publishing Co. Pte. Ltd. 5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.
INTERNATIONAL SEMINAR ON NUCLEAR WAR AND PLANETARY EMERGENCIES 32ND SESSION: LIMITS OF DEVELOPMENT: MIGRATION AND CYBERSPACE; IN EUROPE; SYNOPTIC EUROPEAN OVERVIEW; FROM AND WITHIN ASIA; GLOBALIZATION - CLIMATE: GLOBAL WARMING; A CHRONOLOGY; SIMPLE CLIMATE MODELS; ENERGY AND ELECTRICITY CONSIDERATIONS - T.S.E.: CJD AND BLOOD TRANSFUSION; BSE IN NORTH AMERICA; GERSTMANN-STRAUSSLER-SCHEINKER DISEASE - THE CULTURAL EMERGENCY: INNOVATIONS IN COMMUNICATIONS AND IT - COSMIC OBJECTS: IMPACT HAZARD; CLOSE APPROACHES; ASTEROID DEFLECTION; RISK ASSESSMENT AND HAZARD REDUCTION; HAYABUSA AND FOLLOW UP - AIDS AND INFECTIOUS DISEASES: ETHICS IN MEDICINE; INTERNATIONAL CO-OPERATION; LABORATORY BIOSECURITY GUIDELINES; GEORGIAN LEGISLATION; BIOSECURITY NORMS AND INTERNATIONAL ORGANIZATIONS, LEGAL MEASURES AGAINST BIOCRIMES - WATER AND POLLUTION: CYCLE OVERVIEW; BEYOND COST AND PRICE; REQUIREMENTS IN RURAL IRAN; ISOTOPE TECHNIQUES; CLEAN AND RELIABLE WATER FOR THE 21ST CENTURY - PERMANENT MONITORING PANELS REPORTS - WORKSHOPS: GLOBAL BIOSECURITY; COSMIC OBJECTS
Copyright © 2005 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN 981-256-385-7
Printed in Singapore by World Scientific Printers (S) Pte Ltd
CONTENTS
1. OPENING SESSION

Antonino Zichichi
The 32nd Session of the International Seminars and International Collaboration

Richard L. Garwin
Science and National Intelligence

Rolf K. Jenny
Statement on Migration

2. LIMITS OF DEVELOPMENT: MIGRATION

Ahmad Kamal
Migration and Cyberspace

Hiltmar Schubert
Migration in Europe

Nigel Harris
Migration and Development: A Synoptic European Overview

K. C. Sivaramakrishnan
Migration From and Within Asia

Geraldo G. Serra
Migration and Globalization

3. CLIMATOLOGY: GLOBAL WARMING

John S. Perry
From Curiosity to Concern: A Chronology of the Quest to Understand Global Climate

Tom M. L. Wigley
Simple Climate Models

Garth W. Paltridge
Old Physics for New Climate Models - Maybe

Hisham Khatib
Energy and Electricity Considerations - Global Warming Perspectives

4. TRANSMISSIBLE SPONGIFORM ENCEPHALOPATHY: PRIONS

Robert G. Will
Creutzfeldt-Jakob Disease and Blood Transfusion

Maura N. Ricketts
BSE in North America

Bernardino Ghetti
Role of the Polymorphism at Codon 129 of the Prion Protein Gene in the Phenotypic Expression of Gerstmann-Straussler-Scheinker Disease Associated with the F198S Mutation

Herbert Budka
Update on the Pathogenesis of Transmissible Spongiform Encephalopathies

5. THE CULTURAL EMERGENCY: INFORMATION AND COMMUNICATIONS - ENVIRONMENT

Axel Lehmann
Innovations in Information and Communications Technologies: Benefits and Threats

6. COSMIC OBJECTS

Clark R. Chapman
Recent Perspectives on the Hazard of an Asteroid Impact

Donald K. Yeomans
Recent Close Approaches of Asteroids to the Earth

Russell L. Schweickart
Asteroid Deflection: Hopes and Fears

Alan W. Harris
The Near-Earth Object Impact Hazard: Space Mission Priorities for Risk Assessment and Reduction

Hajime Yano
Hayabusa and its Follow-up Plans by JAXA

7. AIDS AND INFECTIOUS DISEASES: ETHICS IN MEDICINE

Diego Buriot
Limiting Access to Dangerous Pathogens - The Need for International Cooperation

Reynolds M. Salerno
The U.S. Select Agent Rule and an International Opportunity to Develop Laboratory Biosecurity Guidelines

Lela Bakanidze
New Georgian Legislation on Biosafety

Bradford Kay
International Biosecurity Norms and the Role for International Organizations

Barry Kellman
Legal Measures to Prevent Bio-crimes

8. WATER AND POLLUTION

Soroosh Sorooshian
Overview of the Hydrological Cycle and its Connection to Climate: Droughts and Floods

Ronald B. Linsky
What is the Real Value of Water? Reaching Beyond the Global Dilemma of Cost and Price

Amir I. Ajami
Agrarian Transformation and Shifts in Water Requirements in Rural Iran: A Case Study

Pradeep Aggarwal
Sustainable Water Resource Management and the Role of Isotope Techniques

Andrew F. B. Tompson
Scientific Challenges for Ensuring Clean and Reliable Water for the 21st Century

9. PERMANENT MONITORING PANEL MEETINGS AND REPORTS

AIDS and Infectious Diseases Permanent Monitoring Panel
Guy de Thé
AIDS 2004 - Pressing Financial and Ethical Challenges

Climatology Permanent Monitoring Panel
William A. Sprigg
Implications of Climate Variability and Change: A Policy Maker's Summary

Cosmic Objects Permanent Monitoring Panel
Walter F. Huebner
Panel Report

Energy Permanent Monitoring Panel
Bruce Stram
Panel Report

Abul Barkat
Bangladesh Rural Electrification Program: A Success Story of Poverty Reduction through Electricity

Richard Wilson
Sustainable Nuclear Energy - Some Reasons for Optimism

Information Security Permanent Monitoring Panel
Henning Wegener
Chairman's Report

Vitaly N. Tsygichko
Information Revolution in the Military Field and the Establishment of an International Legal Regime for Information Security

Limits of Development Permanent Monitoring Panel
Hiltmar Schubert
Panel Report

Mbareck Diop
West African Point of View on Migration

Christopher D. Ellis
Impacts of Migration on Megacities in the United States

Stephen S. Y. Luu
Inter-regional Migration in China in the Post-Deng Economic Era 1990-2000

Alberto González-Pozo
Migration in Mexico: Slower Trends to Megacities; Higher Flow to the U.S.

Mother and Child Permanent Monitoring Panel
Nathalie Charpak
Panel Report

Christiane Huraux
Using the KMC Programme's Database in Developed Countries: An Illusion?

Juan G. Ruiz
Quality of Health Care Assurance: The Kangaroo Mother Care Program Experience

Pollution Permanent Monitoring Panel
Lorne Everett, Richard C. Ragaini
Panel Report

Risk Analysis Permanent Monitoring Panel
Terence Taylor
Panel Report

10. GLOBAL BIOSECURITY WORKSHOP

Barry Kellman
The Bio-Science Dilemma - Precious Opportunities and Dire Threats

Terence Taylor
Biological Safety and Security - Advances in the Life Sciences - Reaping the Rewards and Managing the Risks

11. COSMIC OBJECTS WORKSHOP

Mario Di Martino
Detection of Transient Phenomena on Planetary Bodies

Raymond Goldstein
Proposed Ground-Based Experiments for Asteroid Seismology

12. SEMINAR PARTICIPANTS
1. OPENING SESSION
THE 32ND SESSION OF THE INTERNATIONAL SEMINARS AND INTERNATIONAL COLLABORATION

ANTONINO ZICHICHI

Dear Colleagues, Ladies and Gentlemen,

I would like to welcome you to this 32nd Session of the International Seminars on Nuclear War and Planetary Emergencies and declare the Session to be open. There are many interesting topics to be debated during this Seminar and its associated meetings. Some are related to non-scientific problems to which we are trying to bring scientific solutions. This can only be achieved through our interdisciplinary groups and through the methods elaborated in our symposiums on complexity. The main topics of this Session are:

MIGRATION
This is a worldwide phenomenon, which has been with us for a very long time. It was left to fester until it became a major problem for societies and a burden on economies. You will hear more on this from Dr. Rolf Jenny, Director of the Global Commission on International Migration in Geneva, Drs. Ahmad Kamal and Hiltmar Schubert, as well as from other members of the Limits of Development PMP.

GLOBAL WARMING
This is an ongoing debate here in Erice. We have heard many one-sided arguments offered separately, without any real debate. Many possible factors and suggestions, some of which I offered myself during the August Seminars, have not been sufficiently elaborated or pursued. The Italian Government has now instituted a special commission to study the Global Warming issues, to be presided over by our colleague Professor Enzo Boschi. Many questions require precise answers, which need to be sought dispassionately in a purely scientific environment, without political and economic bias. This engenders a continuing debate for the WFS. You will hear more on this from Professor William Sprigg of the Climate PMP, Dr. Hisham Khatib from the Energy PMP and other eminent international experts.

BRAIN DISEASES AND TRANSMISSIBLE SPONGIFORM ENCEPHALOPATHY
Prions were first described to a scientific audience in 1996, here in Erice, by Professor Stanley Prusiner, who discovered them. Since then, a few dedicated WFS workshops have been organised and we have closely monitored the results of the research undertaken in that domain. Professor Robert Will and other PMP members will report on the latest results.

COSMIC OBJECTS
This suddenly became, as you can imagine, a very hot topic during the Star Wars era. We keep discovering proof of global catastrophic events in the past, which wiped out most of the existing life, and were due to large meteorites crashing into our planet.
This Planetary Emergency is one of a very few of its kind that could bring a brutal and rapid end to our civilisation with very little notice. It could happen at any moment, and yet it receives relatively little attention or funds for research. We are still almost totally unprepared to avoid a disaster coming from deep space. Professor Walter Huebner and other PMP members will elaborate on this during the dedicated session.

AIDS AND INFECTIOUS DISEASES AND GLOBAL BIOSECURITY STANDARDS
Security standards for handling sensitive biological material in our laboratories have always been a concern. Now, with the advent of large-scale terrorism, we must also ensure the safety of biological material against its misuse. Dr. Diego Buriot and his colleagues will give you a rundown of the measures adopted in various countries.

WATER
The essence of life itself, water has always been one of our ongoing concerns. Professor Soroosh Sorooshian and members of the PMP, together with other eminent colleagues, will report on the current situation.

As an example of our interactions with International Organisations, let me now single out an important milestone for the World Federation of Scientists, concerning the World Conference on Disaster Reduction, which will be held in Kobe, 18-22 January 2005. Quoting from the announcement, the Conference aims at providing "a unique opportunity to promote a strategic and systematic approach at the national level to address vulnerabilities and to reduce risk to natural hazards". The announcement further stipulates: "Human and economic losses due to natural disasters continue to rise and remain a major obstacle to sustainable development and achievement of the Millennium Development Goals (MDGs). New risks are emerging. The WCDR is expected to guide and motivate governments and their policy makers to pay more attention to such vital issues, identifying practical ways to incorporate risk reduction measures into action to reduce poverty."

We do, of course, agree with the above. Now, looking at the measures to be investigated and the kind of recommendations to be proposed, they seem to us highly commendable but almost systematically and resolutely on the passive side, i.e. how to manage the disaster once it has happened. The Italian Government, and especially the Minister of Foreign Affairs, Dr. Franco Frattini, believes in a more pro-active policy of identifying the causes of a disaster in order to mitigate its effects or avoid its happening altogether. This is in perfect agreement with the role of Science in the fight against the Planetary Emergencies, which has been stressed and put in evidence during our Erice Seminars in previous years.

I would like to invite the chairs of the World Federation of Scientists' PMPs to discuss this issue with their colleagues and remit, by the end of the Seminar, a note with their suggestions aimed at increasing the awareness of the Conference institutions of the role of Science and the WFS action. Please note that we have already addressed our comments on the report of the First Session of the Preparatory Committee to the member states. We commented on Sustainable Development and Terrorism, on addressing the danger of Cosmic Objects, Floods and Extreme Meteorological Events, Water and Partnership Mechanism. These suggestions, along with those presented at the end of the Seminar, will be presented by our delegation to the Second Session of the Preparatory Committee, which will be held 11-12 October 2004 in Geneva.
SCIENCE AND NATIONAL INTELLIGENCE

RICHARD L. GARWIN [1]
IBM Fellow Emeritus, Thomas J. Watson Research Center, Yorktown Heights, USA

Those who have followed the American scene in recent months have witnessed an extensive discussion of intelligence "failures" for lack of prevention of the September 11, 2001 attacks on the two World Trade Towers and the Pentagon, which killed 3000 people of many nationalities. An additional aircraft had been hijacked and would have been used, probably, to attack either the White House or the Capitol in Washington, DC. Reports of other Commissions have been dedicated to failures of intelligence in regard to the weapons of mass destruction (WMD) in Iraq, and the reasons and logic for initiating war there.

Of course, "WMD" is a term that makes little sense, since a nuclear explosion (even of the magnitude of the 1945 bombs used on Hiroshima and Nagasaki), if smuggled in and detonated near ground level, would kill 100,000 to 500,000 people in a densely populated city. Similarly, an appropriately chosen biological weapon such as anthrax, properly disseminated, or the smallpox virus, could kill as many, and perhaps far more. In comparison with these nuclear and biological threats, the current threat from chemical weapons such as Sarin is almost negligible. In fact, a reasonable rule of thumb is that there would be about as many deaths and non-fatal casualties from the use of a chemical weapon as from a modern high-explosive weapon such as cluster bombs, and the like. By "WMD," therefore, we should understand nuclear and biological weapons, excluding radiological and chemical weapons.

Why "Science and National Intelligence," and what is "National Intelligence" anyhow? National Intelligence is that information and interpretation that can guide decisions at the national level. This is distinct from Tactical Intelligence and Military Intelligence. Tactical Intelligence guides the actions of a platoon, company, brigade, or even of an army, as a result of knowledge and analysis of the deployment and capabilities of the opposing forces. Military Intelligence provides additional information on the overall structure of opposing military forces, the characteristics and efficacy of the weapons with which they are provided, and the detailed information as to command structure, likely ability to carry out detailed and large-scale plans, and the like [2]. The point is that National Intelligence goes far beyond Military or Tactical Intelligence to inform the leadership of a country as to its options in negotiating, befriending, defending, or, for that matter, conducting military operations against another power.

At a time when U.S. television channels (especially the Cable Satellite Public Affairs Networks (C-SPAN)) are full of congressional hearings which feature former directors of (US) Central Intelligence and other experts, it may make some sense to consider the past and potential future contributions of science to National Intelligence. Science enters not so much as science itself, which is, by definition, the acquisition of new insights and knowledge, but largely in the form of science codified in the form of
technology and other tools. Just as the science of condensed matter physics has been incorporated into the miracles of this video projector, computers, and many of the amenities of modern life dating back to Galileo and even Archimedes, so science is taken for granted in the tools available for National Intelligence. But it is there, as the finest flower of optics, of mathematics, chemistry, and, increasingly, of biology.

Intelligence involves the acquisition of information, its preservation and review, and its continual interpretation and reinterpretation in view of various hypotheses as to meaning and significance. In this it has a lot in common with the means by which we understand the secrets of the universe. Sometimes the information is in view for all to see, as was the case with the laws of falling bodies at the surface of the Earth, explored by Galileo and Newton. Sometimes it is hidden until a new tool makes it apparent, as is the case with the signals and "noise" in the radio spectrum, to which humans were blind and deaf until the advent of sensitive radio receivers and amplifiers. Sometimes it is necessary in the acquisition of Nature's secrets to travel to hostile environments, in order that the signal be received at all, or to be made more prominent against the local background noise. So it is in the sending of Soviet probes to the surface of Venus, or to the ocean depths in the exploration of the mid-oceanic ridge and the black smokers of recent decades.

So it was with the introduction of intelligence satellites, the first of which flew in June 1960, in the form of a so-called Galactic Radiation and Background ("GRAB") satellite, the real purpose of which was the acquisition of electronic intelligence on the radars of the world, and in August 1960, the CORONA satellite to photograph the Earth from space. These early satellites have been fully declassified (that is, the information and in most cases the "product" made publicly available by the United States in 1995 in the case of CORONA, and in the year 2000 in the case of the GRAB satellite). They are discussed, for instance, in the article by Mark Moynihan [3]. The CORONA system was fully described in an article by Albert D. Wheelon [4], who as the first Deputy Director for Science and Technology of the Central Intelligence Agency, from 1962-1966, played a key role in the ongoing development of CORONA, as well as in the development of a titanium aircraft that traveled thousands of kilometers at a speed of Mach 3 (three times the speed of sound).

The imaging satellites (providing "image intelligence" or IMINT) and the Electronic Intelligence (ELINT) satellites had quite different origins. The first ELINT satellite was the product of the U.S. Naval Research Laboratory, NRL, where scientists and engineers had the idea that they could obtain useful intelligence about the Soviet radar system for early warning against aircraft, by flying some relatively simple satellites. Recall that in the 1950s the state of the art was vacuum tubes rather than transistors. Once these satellites were in operation, additional contributions were made, and additional launches of these relatively short-lived satellites could benefit from the rapid evolution of technology for military and civilian purposes, that have brought us from the first multi-million dollar digital computers of the 1950s to the $1000 marvel of today.
The modern imaging satellites took form in the minds of a few consultants to the President's Science Advisor in the 1954-1956 era, who had conceived the U-2 reconnaissance aircraft and persuaded President Dwight D. Eisenhower to develop it as a secret program assigned to the CIA. The subsonic jet-engine U-2 fleet first flew in 1956
through Soviet air space and that of other countries, publicly unacknowledged until May 1960, when it was shot down by the SA-2 missile system near Sverdlovsk. These remarkable individuals included Edwin H. Land, inventor of polarizing film material and the Polaroid instant photographic system; Edward M. Purcell, Professor of Physics at Harvard University and Nobel Laureate for the invention of nuclear magnetic resonance; and James G. Baker, optical scientist and engineer par excellence, also of Harvard University.

Their observation was simple. It was that lenses of the 1950s could be built to provide resolution on the film comparable with the wavelength of light, and that ultrafine-grain film could also be made. So instead of the typical eight line pairs per millimeter (lp/mm) of military reconnaissance cameras flown by the U.S. in the Korean War, one could build systems that would record information at 200 lp/mm. The difference made by the factor 25 is astonishing, since the amount of film required to record a given scene is reduced by a factor 625. Therefore, it makes sense to go to very thin-base film, and suddenly it is possible to record from a high-flying aircraft at 20 km altitude horizon-to-horizon coverage continuously as the aircraft flies over the scene to be photographed.

Recognizing that aircraft would not long be invisible to radar (and, in fact, the U-2 was detected by Soviet radars from its first flight over the Soviet Union), the "Land Panel" conceived also the Mach-3 SR-71 aircraft, which brought entirely new challenges to the acquisition of IMINT. These included the extreme heat from the adiabatic compression of the air at this speed, bringing the surface temperature of the aircraft to the softening point of the titanium skin, and photography through the turbulence of the boundary layer adjacent to the aircraft. Ultimately, even the SR-71 would be vulnerable to being shot down, and so an additional secret program was instituted, that would record not from 25 km within the atmosphere but from 160 km altitude, outside the sensible atmosphere, from the first Earth satellites, dubbed CORONA.

CORONA was initiated in the deepest secrecy, accompanying the cancellation of an Air Force program for the return of TV images from space. The technology of those days imposed stark limitations on what could be used in the satellite. There was essentially no "electronics" in CORONA. Rather, batteries operated electrical motors to drive the complicated film path, the rotating drums of the panoramic cameras, and the cams and switches of the timers that controlled the cycling and eventual reentry of the "bucket" containing the exposed film. The first man-made objects retrieved from orbit were these reentry vehicles (RVs) of the CORONA system, that used ablative technology to survive the fiery heat of reentry. These packages were fitted with parachutes that would open at subsonic speed over the Pacific Ocean, so that the dangling film bucket could be retrieved by a kind of trapeze deployed from a C-119 aircraft of a special detachment operated for that purpose. The CORONA system is well described in numerous articles following its 1995 declassification, not least by Wheelon in his 1997 article. Here are a few illustrations from that article:
[Figure: Mid-air recovery.]
[Figure: A view of the Kremlin. At the left is a narrow line of people awaiting entrance to Lenin's tomb.]

The CORONA system was operated from 1960 to 1972, in more than 145 successful flights, returning almost 2000 km of film. By 1972, CORONA delivered a ground resolution of two meters (2 m), and was replaced by other systems, not yet declassified, both for broad-area search and for high-resolution imaging from space. Those now operating no longer depend on film return, but instead use imaging technology similar to that in your digital camera, typically employing charge-coupled devices (CCD) of silicon technology. The resulting images are returned in "near-real time" via radio downlinks from long-lived satellites in space. Instead of mean mission duration on the order of one week for CORONA, the satellites provide images for many years.

For their part, ELINT satellites have evolved as well. The evolution of satellite technology no longer leads that of commercial applications, in view of the long lifetimes of satellites, and the special-purpose nature of their operation. Furthermore, there are limitations and hazards involved in the apparently benign space environment, since satellites are not shielded from cosmic ray radiation by the Earth's atmosphere, equivalent to a water depth of ten meters, nor from occasional collision with a micrometeor or piece of orbiting space debris. The magnetic field that prevents most of the cosmic rays even from striking the atmosphere instead traps energetic electrons and protons that provide a heavy dose of radiation (on the order of a megarad over several years) to satellites in certain orbits. Nevertheless, ELINT satellites have evolved to real-time return of information that not only pinpoints radars and other emitters on the Earth's surface, but also captures both Communications Intelligence (COMINT) and Signals Intelligence (SIGINT). The import of this is evident from the daily newspaper, with the implication that much useful information in the "global war on terrorism" is derived from such sources. Naturally, communications that travel via satellite (including some mobile systems) can be intercepted by ground-based antennas looking at the cell-phone relay satellites. In this activity the U.S. National Security Agency (NSA) plays an important role.

Accompanying the acquisition of intelligence is the required evolution of processing capability, dissemination, and the like. Ultimately, however, the intelligence product is reflected in various bulletins or estimates, that come to the desk of
decision-makers at all levels and must result in national action or decision not to act, and be provided to other governments and to elements of the United Nations. Unfortunately, many of those involved have little understanding either of the sources of the information or the limitations of the processes, including the possibility of denial and deception (D&D).

Thus the analyst assigned to watch for threats to the Information Technology (IT) infrastructure would have, in principle, access in this case not so much to IMINT, but to COMINT or SIGINT, and would try to determine what resources are being expended by which foreign powers or terrorist groups, which individuals are involved, whom they communicate with, what test incursions have been made, and the like. Another analyst looking for wayward nuclear explosives would concentrate on security of those sites where nuclear explosive materials are to be found in declared nuclear powers and in others. Most of the plutonium or highly enriched uranium exists in Russia or in the United States, so such an analyst would be alert to COMINT on suspect groups and nations traveling to Russia or dealing with middlemen in Russia. There might in addition be "sting" operations set up in order to determine interest in the acquisition of nuclear materials contrary to the Non-Proliferation Treaty (NPT). Another analyst might be assigned to look for preparations for military activity or for genocide in an African country. In all these cases, the analyst would be concerned with foreign newspapers, foreign broadcast information, as well as information regarded as secret by the group or nation that originated it.

Here is a problem, since the state of Information Technology is such that in practice in the United States a person with a computer with access to a secret (classified) governmental network cannot use that same computer for access to unclassified information, such as the Internet. For this reason, "air gaps" must be created. Officials have testified recently that they have four or even six computers under their desk, and can switch the keyboard and the display (monitor) from one to another. This is already an advance, because a few years ago it was necessary for each computer to have its own keyboard and monitor. But copying from one network to another is typically forbidden, unless the material has been printed and then scanned optically for transfer to the other network. As one might expect, the efficiency of working under these conditions is much reduced, even though the IT tools, in principle, can be very powerful. Insufficient effort has been invested to provide a secure computing framework that would allow flexible access to information at multiple levels, including unclassified and highly sensitive material in the same information system. It should be possible for information to be identified, with its security classification appended, and composite documents or files thus prepared for the analyst's display.

In any case, IT has brought us a long way from the "shoe box" era (still occupied by some analysts) in which material on a given site or topic was filed in the form of clippings or images literally in a shoe box, for future access by the analyst. Whatever the mechanization, however, an analyst must form hypotheses and then determine their probability. "Alternative Competing Hypotheses" is a summary term for this approach. Is there to be a military attack tomorrow?
If there was not one yesterday or the previous day or the previous year, it seems inherently unlikely that there will be one tomorrow. It is said that British intelligence charged with warning of a military attack was wrong only twice in 50 years, but such an error can be very significant.

In general, the most rigorous framework for determining the validity of a hypothesis in science or in intelligence comes down to Bayesian Analysis. Here one asks for the probability of a hypothesis given the prior probability before the most recent "fact" and the likelihood that the new intelligence datum is correct or that it is wrong. It can be wrong in one of two ways (for a "yes-no" decision): it can have a Type-1 error, in which the datum may say "no" but the hypothesis may be valid; or it may have a Type-2 error, for which the datum reads "yes" and the hypothesis is invalid. An example of a Type-1 error is a bit of disinformation stating that all troops are in their rest areas, when in fact they have been mobilized. An example of a Type-2 error is a finding that troops have been mobilized, when in fact the motion that was observed was from one rest area to another.

A recent article by Bruce Blair, President of the Center for Defense Information in the United States (www.cdi.org), nicely illustrates the details of Bayesian Analysis [5]. In this case, one assumes a prior probability (without any intelligence data) of 99.9% that an attack is in process. If one then has a bit of intelligence saying that it is not, and one knows or assumes that the source of the intelligence is correct 50% of the time and incorrect 50% of the time (in the sense that 25% of the time it indicates an attack is in process if it isn't, and 25% of the time indicates that an attack is not in process when it is), then the likelihood according to Bayesian Analysis after 1, 2, 3, 4 and so on negative intelligence alerts is as shown in the "0.999" line of the table.

The mechanism of Bayesian Analysis is shown in the first figure, with the symbols having the following meaning: a term P(A|W) signifies the probability P of an attack A given that the warning W has been received. Bayes taught that this can be obtained from the more physically determinable P(W|A), the probability that the warning signal would be received if the attack were really in process. Important is the initial "prior(A)", which is the assumed likelihood that an attack is in process, to be refined by intelligence data. The a posteriori probability "Post(A)" is then the Bayesian update of the probability before the most recent information.
Our application of Bayes' theorem is as follows:

Definitions:
Prob(attack | warning) = P(A|W)
Prob(attack | no warning) = P(A|NW)
Prob(warning | attack) = P(W|A) = 1 - Prob(Type-1 error)
Prob(warning | no attack) = P(W|NA) = Prob(Type-2 error)
Prob(no warning | attack) = P(NW|A) = Prob(Type-1 error)
Prob(no warning | no attack) = P(NW|NA) = 1 - Prob(Type-2 error)
Prior initial subjective expectation of an attack: prior(A)
Posterior subjective expectation of an attack after either receiving or not receiving warning: Post(A)

Formulas:
Given that warning is received during the warning report period:
Post(A|W) = P(W|A) prior(A) / [P(W|A) prior(A) + P(W|NA) (1 - prior(A))]

Given that warning is not received during the warning report period:
Post(A|NW) = P(NW|A) prior(A) / [P(NW|A) prior(A) + P(NW|NA) (1 - prior(A))]
From Bruce Blair, "The Logic of Intelligence Failure," http://www.cdi.org/blair/logic.cfm
In this case, nine successive negative reports are required to convert an initial 99.9% probability of attack to a 95% judgment of no attack. It is thus very hard for facts (even facts with a pretty good probability of being correct) to overcome an initial bias of this magnitude.
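The arithmetic behind this statement is easy to reproduce. The short Python sketch below (not part of the original article) simply applies the update formula above, period by period, to a string of "no warning" reports, using the illustrative error rates quoted earlier (a 25% chance of a missed warning and a 25% chance of a false alarm); the function and variable names are ours, not Blair's or Garwin's.

```python
# Bayesian updating of the probability of attack, as in Blair's example.
# Assumed error rates (from the text): Type-1 (missed warning) = 0.25,
# Type-2 (false alarm) = 0.25, so the source is "correct 50% of the time".

def update_no_warning(prior, p_miss=0.25, p_false_alarm=0.25):
    """Posterior P(attack) after one report period with NO warning received."""
    p_nw_given_attack = p_miss                 # P(NW|A)  = Type-1 error rate
    p_nw_given_no_attack = 1 - p_false_alarm   # P(NW|NA) = 1 - Type-2 error rate
    numerator = p_nw_given_attack * prior
    denominator = numerator + p_nw_given_no_attack * (1 - prior)
    return numerator / denominator

p_attack = 0.999   # initial conviction that an attack is in process
period = 0
while p_attack > 0.05:          # stop once "no attack" is at least 95% likely
    p_attack = update_no_warning(p_attack)
    period += 1
    print(f"after {period} negative report(s): P(attack) = {p_attack:.4f}")
# With these illustrative rates the loop runs nine times, matching the text.
```

Each negative report multiplies the odds of attack by P(NW|A)/P(NW|NA) = 1/3, which is why the initial odds of 999 to 1 take nine periods to fall below 1 to 19.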
[Figure: Example 1 of Bayesian updating.]
Perhaps one imagines that a decision-maker would do better never to have such a fixed idea with a probability of 99.9%. The second example illustrates graphically the degree of conviction he or she would properly infer, as intelligence data came in one at a time, in the case of attack or no attack.
[Figure: Bayesian updating of attack expectations, averaged over a 40-trial run; expectation of attack versus warning report period.]
Example 2 of Bayesian updating

Even in this example, something like 17 data points would be required on the average to raise the expectation of attack from 50% to 95% (averages over a 40-trial run) if an attack were truly in process. But these are only averages. If now one looks at an atypical trial run, one can see that in case of an attack, four data points, all reporting "no attack," reduce the inferred probability from 50% to only about 2%, whereas an attack was really in process, and the data ultimately (on the 17th repeated sampling) correspond to 99% probability of attack.

[Figure: Bayesian updating of attack expectations, one atypical trial run; expectation of attack versus warning report period.]
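Curves like these come from simulating the warning reports themselves, not just the update rule. Below is a minimal sketch of such a trial-run simulation (again ours, not the article's); it reuses the illustrative 25% error rates from the previous sketch, so the exact period counts will not match Blair's published figures, but the qualitative behaviour of averaged versus individual runs is the same.

```python
import random

def update(prior, warning, p_miss=0.25, p_false_alarm=0.25):
    """One Bayesian update of P(attack) given a warning / no-warning report."""
    if warning:
        like_attack, like_no_attack = 1 - p_miss, p_false_alarm       # P(W|A), P(W|NA)
    else:
        like_attack, like_no_attack = p_miss, 1 - p_false_alarm       # P(NW|A), P(NW|NA)
    num = like_attack * prior
    return num / (num + like_no_attack * (1 - prior))

def trial_run(periods=40, attack=True, prior=0.5):
    """Posterior trajectory over successive report periods for one trial."""
    rng = random.Random()
    trajectory = [prior]
    p = prior
    for _ in range(periods):
        # A warning is issued with probability 0.75 if an attack is under way,
        # and with probability 0.25 (a false alarm) if it is not.
        warning = rng.random() < (0.75 if attack else 0.25)
        p = update(p, warning)
        trajectory.append(p)
    return trajectory

# One run with an attack actually in process. In an unlucky run, a string of
# early "no warning" reports can drive the inferred probability well below the
# 50% prior (as in the atypical trial described above) before the accumulating
# evidence finally pushes it toward certainty.
print([round(p, 3) for p in trial_run()])
```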
These are truly cautionary findings, little understood by analysts or decision-makers. Indeed, not every piece of intelligence data has the same value or the same Type-1 and Type-2 error rate. All the more reason for each piece of data to be identified with its assumed rates, and the analyst and decision-maker should be able to use a simple tool such as a spreadsheet in order to determine, not by "group think" but for himself or herself, the likely spread of probabilities of what the intelligence data may seem so strongly to imply.

In conclusion, science and technology have revolutionized intelligence, as they have changed most aspects of modern life. At the national and international level, the consequences of actions that might be taken on the basis of intelligence (and the consequences of inaction) can be enormous, imposing a heavy load of responsibility on officials charged with the provision and interpretation of intelligence. Better preparation of such officials would be desirable, but is difficult because relatively few in the educated population are accustomed to dealing quantitatively with uncertainty, and there is the difficulty that persons who occupy a position of power are too busy exercising that power to take the time to learn something new or even vital. Perhaps new tools of simulation and video presentation might be devised to provide virtual experience with examples close to the problem at hand: attack or no attack; the decision to prevent a person from traveling on a commercial flight; the cancer risk posed by this or that environmental contaminant.

REFERENCES
1. Recipient in 1996 of the R.V. Jones Award for Scientific Intelligence, and in 2000 named one of ten Founders of National Intelligence.
2. A useful source is the Center for Studies in Intelligence, operated by the U.S. Government at http://www.cia.gov/csi/
3. Mark F. Moynihan, "The Scientific Community and Intelligence Collection," Physics Today, December 2000. http://www.physicstoday.org/pt/vol-53/iss-12/p51.html
4. Albert D. Wheelon, "Corona: The First Reconnaissance Satellites," Physics Today, February 1997, pp. 24-30, ISSN 0031-9228. http://www.physicstoday.org/pt/vol-50/iss-2/vol50no2p24-30part1.pdf, http://www.physicstoday.org/pt/vol-50/iss-2/vol50no2p24-30part2.pdf
5. Thomas Bayes, "An Essay towards Solving a Problem in the Doctrine of Chances," Philosophical Transactions of the Royal Society of London 53 (1764). Bayes (1702-1761) was a Presbyterian preacher and a member of the Royal Society. See http://www.bun.kyoto-u.ac.jp/phisci/Gallery/
STATEMENT ON MIGRATION

DR. R. K. JENNY
Executive Director, Global Commission on International Migration (GCIM), Geneva, Switzerland

Let me first thank you, Professor Zichichi and Ambassador Kamal, for having invited me to address this esteemed forum of scientists and international personalities, and to talk to you this morning about one of the principal global, or perhaps I should say planetary, challenges of today: that of international migration. I will do so on behalf of the Global Commission on International Migration, which was launched by the United Nations Secretary General and a number of governments on 9 December 2003, in Geneva, and which is tasked, among many other things, to present a report to the Secretary General with recommendations aimed at providing the framework for the formulation of a coherent, comprehensive and global response to migration.

THE BROADER PICTURE OF MIGRATION

Let me begin by making a few general comments on the broader picture of migration and the challenge it presents today as an issue that is intrinsically interlinked with the global economy, international and national development and poverty reduction policies, population policies, trade, conflict prevention, human security and human rights, and international co-operation.

In a time of profound global interdependence - and amazing technological developments - one could imagine a better quality of life for the majority of the world's people. And yet, the stories we read portray a growing divide between the rich and the poor, between peoples of the North and the South. There can be no international stability, no peace, no human security, when a few rich countries with a small minority of the world's people alone have access to a high living standard, while the large majority live in deprivation and want, shut off from opportunities of full economic and human development, but with expectations and aspirations aroused by easy access to information about life and opportunities abroad, low-cost travel and ongoing communication with migrant communities abroad.

I think we first have to look at international migration - at least the part that seems to pose problems both for states and individuals - from this broader perspective of persistent economic disparities and imbalances between the South and the North which, combined with the demographic equation, set the stage for the ever-growing migration pressures that we are currently witnessing across the globe. We know that population growth will for the foreseeable future be significantly higher in developing regions than in developed countries, thus compounding pressures on local labour markets in the South. Over the next decade, between 700 million and 1 billion young workers are expected to join these labour markets in developing regions. Many of these young people will not find employment in their home country due to economic stagnation that results from, inter alia, failing governance, current trade policies, including agricultural subsidy policies that negatively affect the export capacity of developing countries, and reduced or mismanaged financial and development aid. To put it differently - and as others have said before - if you look at today's economy in a globalized world, the biggest failure of globalisation so far has probably been the persistent inability to create jobs where people live.
Yes, industrialised and more developed countries need immigration, but it is unlikely - if you look at the figures I just mentioned - that the surplus labour available in developing countries, present and future, could in any sizable manner be absorbed through migration to industrialised countries. It is also true that essentially all industrialised countries face serious population decrease and a consequent reduction in their active national labour force, resulting in growing difficulties to maintain current levels of social security and related welfare systems through national labour alone. However, in many industrialised countries the political, social and cultural costs of significantly increased labour in-migration from developing countries are still being perceived as too high, in particular in those countries whose societies and economies have not traditionally been built through immigration, early access to citizenship and an open attitude regarding migrant acceptance and integration.

In short, there exists an important imbalance between the supply, or surplus, of migrant labour available in the South, and the actual demand for such labour in the North, an imbalance that cannot be resolved through whatever new and innovative global migration policy the international community might be able to develop, but that requires action in a much broader context of global economic development, international stability and international co-operation. I make these general remarks just for the purpose of reminding all of us that we cannot look at the phenomenon of human mobility in isolation, but that we must address international migration in an inter-disciplinary context of comprehensive, sustained and coherent international policy and action in a multitude of areas, many of which are spelled out, by way of example, in the Millennium Development Goals.

INTERNATIONAL MIGRATION - TRENDS, CHALLENGES AND OPPORTUNITIES

This being said, I am here to talk to you about migration, including current trends, the principal challenges posed by migration, but also the opportunities that migration offers and the important contribution that migrants make to both their host societies and their countries of origin.

You are familiar with the most recent statistics, according to which the number of international migrants in the world increased by 14 per cent between 1990 and 2000, having reached some 175 million migrants by 2000, with a projection of 230 million by 2050. While South-South migration persists, more migrants are moving from developing to developed regions, with an annual average of 2.4 million migrants moving from the less developed to the more developed areas. Currently, 60 per cent of migrants live in the more developed regions, where migrants make up almost one in every 10 persons. By contrast, migrants make up nearly one of every 70 persons in developing regions. In terms of population growth between 1990 and 2000, migrants represented 56 per cent of the overall population increase in the more developed regions, but only 3 per cent of the overall population growth of the less developed regions. Net immigrants represented 89 per cent of the population increase in Europe.

As people become more mobile, traditional assumptions and concepts in the field of international migration are steadily breaking down. It is no longer possible, for example, to draw a sharp distinction between countries of origin, transit and destination, as many states now fall into all three categories. Nor is the notion of
nationality as clear-cut as it once was. A growing number of people around the world have dual citizenship, and migrants who have settled in another country increasingly retain close economic, social and cultural ties with the families and friends they have left behind. Capital, goods, images and ideas are moving more and more freely around the world, as are highly skilled personnel in sectors such as information technology, higher education and health care. But other people who want or who feel the need to move - lower-skilled workers, asylum seekers and people who would like to join family members who have already moved abroad - are confronted with many obstacles in their efforts to migrate.

Because of these obstacles, growing numbers of people seek to move in an "irregular" manner from one country and region to another, using the services of a growing migration industry that includes human smugglers and traffickers. In doing so, they are obliged to spend large amounts of money and to run numerous risks, including that of being detained and deported during or at the end of their journey. Those who manage to reach their intended destination may have little alternative but to live a life of clandestinity, exploited in the workplace and marginalized in society. The arrival of such 'irregular' or 'undocumented' migrants is currently a major concern for the world's upper- and middle-income states, many of which need a cheap and flexible labour force to undertake unattractive jobs and to compensate for their diminishing and ageing labour force, but which at the same time perceive the arrival of such migrants as a threat to social cohesion and a challenge to the right of states to control the movement of people onto their territory.

Less developed countries have quite a different set of interests and concerns in relation to international migration. These include 1) the departure of educated young professionals to regions which can offer them a much higher standard of living (according to ILO estimates, developing countries are currently experiencing a 10-30% loss of such skilled manpower through "brain drain"); 2) a desire to maximize the volume and developmental impact of the remittances which many migrants send home; 3) a concern to protect the rights and working conditions of citizens who have found employment abroad; and 4) in some cases a fear that diaspora communities will engage in activities which represent a threat to the established social and political order in their country of origin.

Given these different interests, it is not surprising that international migration has become an issue of some contention between the 'North' and the 'South', a situation which, for example, has so far limited the scope for a global dialogue on the issue in the United Nations. I shall come back to this aspect in a few minutes.

MIGRATION AND DEVELOPMENT

The link between migration and development is not a new one. Economic improvements in countries of origin are tied to easing of migration pressures. More recently, this assertion is qualified by the fact that a "migration hump" exists, such that economic development can, at least over the medium term, spur migration as the acquisition of skills and means offers greater access to foreign markets. Nevertheless, it stands to reason that improving economic conditions in countries of origin, in particular through job creation, will assist in reducing migration pressures by providing people with the option to work domestically.
Part of this debate pertains to trading practices between States, since it is argued that more balanced trading relations between developed and developing countries
would in fact assist in promoting States' economic competitiveness and development potential. Thus, the potential for reducing migration pressures in countries of origin could be further assisted through a more equitable trading system.

Another important dimension to this discussion relates to migration as a development tool for countries of origin. With increased global mobility, more attention is brought to migrants' contributions, as in most cases they maintain vigorous economic, social and cultural ties to their countries of origin. These links result in reverse flows of financial, technological, social and human capital back to their countries of origin. The World Bank estimated the global flow of remittances to developing countries in 2002 at US$ 88 billion. Remittances are projected to exceed US$ 90 billion in 2003. This implies that remittances exceed Official Development Assistance (ODA) and constitute the largest single source of financial flows to developing countries after FDI, indeed even exceeding FDI flows in many countries. In addition to their sheer volume, remittances manifest several other key characteristics which make them interesting as a development tool, including their stability, as they appear to be less vulnerable to economic upturns and downturns than other sources of external funding; their growth, which is increasing in tandem with growing migration; and the fact that remittances are unilateral transfers that do not create liabilities, unlike other types of financial flows such as debt and equity flows. Also, unlike foreign aid, remittances go directly to the people who need them and to whom they were directed, without any intervening and costly bureaucracy. Beyond the significant transfers of capital through remittance flows, migrant diasporas have also been instrumental in channeling flows of FDI to their home countries. For example, the Indian diaspora contributed 9.15% of FDI flows to India in 2002. The Chinese diaspora contribution to FDI in China was even higher.

MIGRATION AND SECURITY

The events of 11 September 2001 focused attention on whether greater international human mobility represents a security threat to States and societies, and may increase the likelihood of incidents of international terrorism. States are thus strengthening their borders through pre-clearance, border and internal measures. They are developing improved technologies to screen identity documents, data and communications systems, and are training relevant personnel. Greater security concerns have translated into more restrictions on mobility, with immigration applications being checked more thoroughly against criminal and terror databases. In the U.S., for example, the backlog of immigration applications increased by nearly 60 per cent between 2001 and 2003, for a total of 6.2 million applications. Furthermore, security changes have raised a host of questions with regard to the lawful detention and treatment of individuals, who can be mistreated and falsely charged. The tightening of security measures is also associated with the declining use of regular migration channels, such as amnesty application processes, and an increase in irregular migration channels, such as human trafficking and smuggling. In addition, states are focusing on how to strengthen inter-state relations (cooperation) on this matter for greater transparency and information exchange, coordination of procedures, etc.
A growing perception that immigrants and asylum-seekers are more likely to be involved in international terrorist activity, or in activities that might otherwise
undermine the security of the hosting State has, in many countries, had a negative impact on migrants' human rights. Events related to 11 September 2001 and thereafter have also exacerbated some human rights abuses by contributing to heightened levels of xenophobia and suspicion of migrants in a number of countries. The European Monitoring Centre on Racism and Xenophobia has documented changing attitudes towards Muslims in the "Summary Report on Islamophobia in the EU After 11 September 2001." The report shows an increase in hostilities directed at those who appear to be Muslim or of Arab descent, as measured by several indicators, including opinion polls, documented physical and verbal attacks, and news media analyses. Similarly, the profiling of migrants in some countries has had negative consequences on migrants; for example, in some cases programmes have required males from predominantly Arab and Muslim countries to register with immigration authorities upon arrival in the country.

MIGRANTS IN SOCIETY

One of the principal policy challenges of contemporary migration relates to the impact of international migration, in its different forms, on host societies and culture, and the potential tension that exists between social diversity on one hand, and social cohesion on the other. Current government policies and practices related to assimilation, integration (and non-integration), multiculturalism, trans-nationalism, citizenship, etc. differ widely in this regard. Some states that admit migrants actively encourage them to integrate and naturalise in their adopted country, subscribing to its values, language and general way of life. Other destination countries place less emphasis on notions such as 'assimilation' or 'integration', and have adopted a policy of multicultural tolerance, which allows and even encourages ethnic minorities to practice their own languages, cultures and lifestyles. A third group of states rejects any notion of integration or even long-term migrant settlement. While such states may tolerate the presence of migrant workers for fixed contractual periods, they are keen to ensure that such migrants have little or no opportunity to become part of society or to become naturalised citizens. Temporary contract migrants, in turn, may not wish to become an integral part of societies with cultures and lifestyles that are quite different from their own.

These different approaches have triggered a lively debate about the role, rights and responsibilities of migrants in society, and with regard to the expectations that states, societies and migrants can legitimately have of each other. The debate includes the notion of mutual acceptance and multi-cultural tolerance, but also encompasses a variety of other aspects, including respect for national law and customs (e.g. equality in law and practice between men and women, prohibition of polygamy, pre-eminence of civil marriage over religious marriage, respect for the rights and physical integrity of children), the learning of the national language and, in some cases, the acceptance of basic values that prevail in host societies. At the World Conference Against Racism, Racial Discrimination, Xenophobia and Related Intolerance (WCAR), held in Durban, South Africa in September 2001, states singled out the need to combat manifestations of a generalized rejection of migrants and to actively discourage all racist demonstrations and acts that generate xenophobic behaviour.
The Durban Declaration urges states to take measures in order to foster greater harmony and tolerance between migrants and host societies, to review
and revise, where necessary, immigration laws, policies and practices so that they are free of racial discrimination.

How host communities react to the presence of newcomers constitutes an essential element in ensuring successful integration. National and local administrators are generally encouraged to emphasize open and participatory discourse on societal transformations. Church groups, migrant associations and non-governmental organizations play an important role in facilitating integration and fostering understanding between newcomers and host communities. However, social cohesion is often difficult to achieve and maintain, in particular during periods of economic strain, and/or when migrants are considered "different" to nationals in their cultural habits, religious faith, political beliefs, etc. Migration then becomes an extremely sensitive and emotional issue; it is often politicized, as politicians play on the fears of the electorate, pitting them against migrants as the source of their problems and predicament.

GLOBAL GOVERNANCE OF INTERNATIONAL MIGRATION BASED ON SHARED INTERESTS AND RESPONSIBILITIES

International migration is a complex and far-reaching issue that transcends national borders and jurisdictions. In a globalized world, policies relating to international migration and associated issues (labour supply, employment, economic growth and development, human security, state security, public health, etc.) must be based upon common understandings and accepted principles, as well as a recognition of the shared responsibilities and interests of all states and other stakeholders. International migration should consequently be addressed in a collaborative and cooperative manner. A key question here is of course how to develop the policy and practical modalities of such a collaborative and co-operative approach at the multilateral level. In other words, how can the respective interests of sending, receiving and transit states (or combinations thereof) be elucidated and articulated, and how can the competing or conflicting interests of different states be effectively and equitably reconciled?

A number of regional and global initiatives have been focusing on this issue over the last few years, many of them promoted and supported by relevant international institutions, such as ILO, IOM and UNHCR. The United Nations General Assembly has been discussing this matter more concretely since the 1994 International Conference on Population and Development (ICPD), including proposals to hold a global conference on international migration and development. A series of resolutions and action recommendations on international migration were adopted, without reaching agreement, however, as to whether a global conference was indeed required at this juncture.

THE GLOBAL COMMISSION ON INTERNATIONAL MIGRATION
In view of the developments described above, there has been a growing recognition of the need to examine the potential for global approaches to the issue of international migration, and to identify ways in which the effective and equitable governance of migration can be promoted at the national, regional and international levels. It was precisely this consideration which led the United Nations Secretary-General, Kofi Annan, to identify migration as a priority issue for his office and the
rest of the international community in his 2002 report on 'Strengthening of the United Nations: an agenda for further change'. Following the publication of that report, in early 2003 an expert working group was established by the Secretary-General, which identified a number of options in relation to the way that the UN and other international organizations might strengthen their role in this field. The group proposed, inter alia, the creation of a high-level and independent international panel that could look more deeply into issues related to migration. The Commission was launched by the Secretary-General and a number of interested governments on 9 December 2003 in Geneva.
Purpose and Mandate

THE OVERALL AIM OF THE COMMISSION IS TO PROVIDE THE FRAMEWORK FOR THE FORMULATION OF A COHERENT, COMPREHENSIVE AND GLOBAL RESPONSE TO MIGRATION ISSUES. The Commission's mandate is:

To place international migration on the global agenda by promoting a comprehensive dialogue among governments, international organisations, academia, civil society, the private sector, media and other actors on all aspects of migration and issues related to migration;

To analyse gaps in current policy approaches to migration and examine inter-linkages with other issue-areas by focusing on the various approaches and perspectives of governments and other actors in different regions, and by addressing the relationship of migration with other global issues that impact on and cause migration; and

To present recommendations, by mid-summer 2005, to the United Nations Secretary-General and other stakeholders on how to strengthen national, regional and global governance of international migration.

To test the Commission's own findings and identify areas of emerging consensus for consideration by the international community, we will over a 14-month period organise five broad-based regional consultations with governments, NGOs, regional organisations and experts, media, the corporate sector, trade unions and other stakeholders. Other thematic seminars will also be held, and two to three additional Commission-only meetings are planned. The first regional consultation, for the Asia and Pacific region, took place with some 150 participants in May 2004 in Manila. Further regional meetings are planned for the Mediterranean and Middle East, Europe, sub-Saharan Africa and the Americas.

While it is too early to be specific at this stage, the Final Report of the Commission is likely to put forward a series of strategic options, together with a set of 'actionable steps', for consideration by the Secretary-General and other stakeholders. The Commission will also produce interim reports, undertake and commission specific research activities, publish background studies and other relevant materials, and develop an extensive information-sharing activity, including running a web-site. It is anticipated that the Final Report will first be considered by the United Nations General Assembly in 2005. The report could then be the subject of further discussion in the March 2006 Population and Development Commission and, more particularly, in the High Level Dialogue on International Migration of the 2006
General Assembly. Following the submission of the report to the Secretary-General by mid-summer 2005, the Commission Secretariat and concerned United Nations officials will actively disseminate the report's findings and recommendations with governments and other stakeholders in order to ensure full understanding of and support for the proposals made by the Commission.
The Commission is based in Geneva. It is independent and is comprised of 19 internationally renowned members, drawn from all regions and bringing together a wide range of migration perspectives and expertise. It is co-chaired by Dr. Mamphela Ramphele from South Africa and Mr. Jan O. Karlsson from Sweden. The Executive Director is Dr. Rolf K. Jenny from Switzerland.
2. LIMITS OF DEVELOPMENT: MIGRATION
MIGRATION AND CYBERSPACE

AHMAD KAMAL
Senior Fellow, United Nations Institute for Training and Research, New York, USA

The history of migration is one of the most exciting chapters in human development. It is the story of mankind's exploding movement outwards, from its origins in East Africa across vast distances and entire continents. It is the story of the pioneering spirit that led migrants to brave inhospitable climates and spaces, and the fear of an unknown world, not just to satisfy their wanderlust, but more essentially in search of a better future for themselves and their children. For hundreds of years, it was this migration that was the driving force which led to the establishment of new human settlements in virgin lands and spaces. Each one of us is the product of that essential human expansion, as our ancestors progressively moved outwards in repeated circles and waves into the lands and territories in which we find ourselves today.

The development of townships was a natural consequence of this migration. It was in the establishment of these new townships that a balance was struck between man's individual desire to tap newer economic resources, while at the same time seeking relative safety in gregarious kinship and company.

The industrial revolution changed all that. The townships which hitherto were the focal points for agricultural and trading communities changed into economic enterprises in their own right. Towns gave way to cities, as burgeoning economic activity took on a life of its own. We then witnessed the second most extraordinary phase of migration, away from the agricultural countryside and township into the urban industrial city. This urban migration, which thus began just a few hundred years ago, has now reached its epitome in the urban sprawl of mega-cities, with all the unsolvable problems that they exemplify.

The habitat is thus in a state of crisis. In its essence, the crisis is due to the uncontrollable, and seemingly unstoppable, process of urbanization. Millions are agglomerating in already over-packed cities, which are increasingly unable to provide basic civic amenities. The planning and provision of adequate transport, water and sewage services are becoming an impossibility. As a result, human habitations are slowly but inexorably turning into environmental hazards, into seething hot-beds of social tension, and into chronic sources of political unrest.

The crisis has particularly ominous features. It is rapidly involving those who are least able to cope with it, namely, the developing countries. At present, the list of the dozen largest cities in the world includes only some that are in the developing countries, but by 2015 all but one shall be located in these very developing countries. Even more worrying is the fact that the crisis is not amenable to traditional solutions. Conventional wisdom does not work. Almost all industrialised countries, faced with the problem of providing proper civic amenities to an exploding urban populace, have attempted to do so in three ways:

First, there was an attempt at "investment", by pouring more money into the problem. Larger sewage plants were built, bigger hospitals erected, elaborate mass
transport systems put in place, and massive costs incurred to bring in water from increasingly distant reservoirs. With every improvement in infrastructures, ironically, the result was that even more people gravitated towards the cities, soon stretching the urban facilities again to breaking point.

Second, there was an effort at "devolution". Attempts were made to move parts of urban populations into outlying "satellite" townships, which were supposed to take the pressure off the mother cities themselves. Unfortunately, in most cases, the townships soon merged with a rapidly growing central urban area, merely becoming enclaves with relatively better facilities than the adjoining older neighborhoods.

Third, there was an attempt at "decentralization". Outlying small towns were privileged with incentives, in the form of tax breaks and subsidized civic services, in an effort to relocate industries in these regions. In effect, the state moved in to counteract the operation of economic forces which, left to themselves, led inevitably to the creation of mega-cities.

These measures, which have had rather limited success even in industrialised countries, have had no meaningful impact in developing countries. The reasons are obvious. In the first place, the magnitude of the problem is far greater in developing countries. Industrialization, and its accompanying urbanization, is taking place in a compressed time frame. What occurred in developed countries over several decades is happening in developing countries within just a few years. Also, urbanization is occurring in the presence of much larger and faster growing populations. The pressure of urbanization in developing countries thus exceeds by far anything seen in developed countries. At the same time, the resources available to the developing countries are much more limited. There is just not enough capital available to constantly upgrade civic infrastructure, or to establish alternative urban centers.

Despite this, policy makers in developing countries persist in resorting to these conventional measures. It is a dispiriting sight. Intelligent people are taking steps that are doomed to failure, waging battles that are lost even before they are commenced. It is clear that the solution to the urban crisis, in developing countries in particular, does not lie in persevering with existing traditional approaches. These can, at best, be palliative band-aids. More viable solutions can perhaps be found if we pay closer attention to the radical changes taking place in the global economy. These changes could affect the shape of human habitation in as fundamental a manner as the industrial revolution itself. Properly exploited, these changes may provide meaningful solutions even to the urban crisis in developing countries.

The global economy is rapidly moving into what is being termed the post-industrial era. This is resulting in as dramatic a break away from the industrial economy as the latter was from its preceding agriculture-based economy. In this post-industrial era, the older manufacturing sector is slowly being overtaken by the new services sector as the basic engine for growth. Marketing, banking, insurance, shipping, tourism, consultancy, in fact a whole range of industries that perform various functions for customers, but do not involve the sale of any tangible product, are becoming the leading sectors of major economies.
This is not to say that manufacturing is becoming unimportant, or that it shall disappear in the post-industrial economies. It is just that recent technological advances make it possible to dramatically increase productivity in
the manufacturing sector, while devoting far fewer resources in terms of labour, time, and capital. This is exactly what happened to the agriculture sector with the advent of the industrial revolution. It became possible to dramatically increase agricultural output, which was an essential requirement for all human beings, with a fraction of previous levels of resources, especially in terms of labour inputs.

The quantum increase in the new productivity of the manufacturing sector is largely due to recent advances in information technology. The advent of personal computers, faxes, electronic mail and video-conferencing has led to a revolution in traditional modes of production. Machines driven by computer programmes can cut more pieces out of the same amounts of steel or textiles than any average human operator. Rapid information can be obtained about the changing needs of the market; production lines can be adjusted accordingly, and customized products can be provided at optimum cost. By improving the speed of communication, and by decreasing the reaction time for factories as they adjust to market needs, shorter production runs become economical, and it becomes possible to reduce the inventories sitting idle in warehouses. Efficiency increases dramatically.

Similar changes become possible in offices also. Most office work can now be performed much closer to people's homes. Instead of office workers moving to cities, office work can move out from cities into homes. In the process, the freeing of physical space and the lessening of the burden on transport and accommodation facilities in cities can be significant. This is already happening in developed countries. In North America, major banks are handling credit cards in one state, clearing checks in another, and performing data processing in a third. In Europe, major airlines are shifting their data processing centers into distant locations. Almost everywhere, one after the other, major companies are rapidly shifting their labour-intensive departments to the outskirts of metropolitan areas. All this has been made possible by the ability to move information rapidly and cheaply.

It is clear that in a post-industrial society the new economic and technological forces can lead to a significant lessening of the pressure on cities. The question then is, to what extent is this relevant to developing countries? Can some of the solutions tried out in developed countries be applied to developing countries also, in a globalised and shrinking world? Even the most advanced among these developing countries are still struggling merely to industrialize their economies. The expectation that these countries shall soon graduate towards post-industrial modes of production may appear unrealistic. But this is true only if we assume that development takes place in a standard linear fashion, that all countries must move mechanically from the agricultural, to the mercantile, to the industrial, and finally to the post-industrial stage. This does not have to be so. There are many instances, even in past history, of countries that have leap-frogged certain stages of development, in some cases straight from a feudal economy to a modern industrial one.

But the question still remains. Can the developing countries of today move towards a post-industrial economy even though they have not yet fully industrialized? Unlikely as it might appear, this is possible. Post-industrial economies are information-based and not capital-based.
Human beings, of whom developing countries have an abundant supply, are the key resource. The critical inputs required are a literate work force, a good communications network, and a relatively inexpensive source of energy.
These are already available in many developing countries, and increasingly accessible in others. The major new development is the revolutionary discovery of cyberspace and the linked invention of the Internet.

During many of the past phases of economic development in the world, we have frequently seen the Malthusian doomsday scenario being neutralised by outward migration, or overtaken by quantum improvements in agricultural and industrial techniques. Noticeably, however, these decreases in population pressures or increases in output occurred in the same developed societies in which the Malthusian scenario had been originally cast and feared. Some of the most advanced countries in Europe, for example, were major exporters of migrant labour just a few decades ago. That is no longer so. The problems of population pressures on economic resources are now concentrated in the developing countries, while the solutions, in the form of capital, industrial know-how and market access, lie in the developed countries. Mutual contact between problems and solutions, between the developed and developing countries, is largely broken due to the protectionist policies that the former are deliberately following in a vain effort to shut out the rest of the world from the benefits of better standards of living.

New hope has, however, emerged for developing countries in the form of the informatics revolution, and cyberspace. This is a new virtual layer in the atmosphere, with its own network of virtual highways sitting astride the world, available to all and sundry against a simple outlay in increasingly cheap hardware. With that hardware in hand, and access to electricity, the poorest in the most outlying corners of the world can have the same degree of access to information and services and technology as the very richest and most privileged. Never, in the history of mankind, has it been so easy to access all the technological information and databases on such a democratic and equitable basis as is afforded by the Internet of today.

The political commitment to move beyond industrialization is thus likely in developing countries. The realization shall soon set in that, not only can the developing countries move towards a post-industrial economy, they must, and they can. To be truly developed, and to end the chronic state of dependency in which they presently find themselves, mere industrialization shall not be enough. Modes of production based on new information technologies shall have to be adopted, and they are readily available. Once the developing countries embark on a post-industrial path, intriguing new possibilities shall open up for tackling the seemingly intractable problems of uncontrolled migration and urbanization. The city in the developed world would no longer be the main center of economic opportunities, or the focus of unrelenting migration from the hinterland.

This still leaves many questions unanswered. To what extent could these post-industrial changes make an actual dent in the problems of the mega-cities? In quantitative terms, what effects may be anticipated, say in regard to rural-urban migrations? What is the existing data on the impact of incipient post-industrial forces on the urban problems of developing countries? There is insufficient information on these questions, and obviously a need for further study of these matters.

What is even more troublesome is the growth of poverty in the world.
The United Nations estimates that the “gap” has doubled in the past thirty years alone, and the World
Bank affirms that more than a quarter of the countries of the world have seen per-capita incomes actually decline in the past decade. Other than its obvious effects on hunger and jobs and human rights and welfare, poverty is intimately linked with disease. It is no surprise that more than 90% of the AIDS pandemic is now located in developing countries, some of which report as much as a quarter of their total adult populations affected.

Poverty also generates frustration, and when this frustration is superimposed on the visible political injustices being unabashedly practiced around the world, it becomes a breeding ground for terrorism, and a magnet not just for the poor but also for the relatively well-off youth. This disturbing trend is clearly seen in the profiles of recent suicide bombers. The frustrations created by poverty and injustice are further compounded by the protectionism that is now increasingly practiced by the developed countries against migrant labour. Movement of humans is becoming progressively difficult day by day. The restrictions that are imposed, and the short-sighted refusal to discuss these in a global discussion on migration issues, are subjecting developing countries to "double jeopardy", with no jobs at home, and no possibilities of seeking them elsewhere.

At this stage, all that one can assert with some surety is that the best hope of addressing the problems of rampant urbanization and poverty and migration lies in a system in which the new economic and technological forces are harnessed to work against the current unhealthy concentrations of work units and people, and the vast gaps between the developed and the developing. Let us hope that policy makers, especially in developed countries with their aging populations, will look afresh at this crisis, seeking solutions in the forces of the future and in open discussion, rather than persisting with the failed protectionist and mercantilist practices of the past.
MIGRATION IN EUROPE

HILTMAR SCHUBERT
Fraunhofer ICT, Pfinztal, Germany

INTRODUCTION

Migration only occurs if the demographic potential of a region cannot be absorbed by its economic and social systems. Migration then takes place to a region or country with higher economic potential and demographic development, which can support additional foreign labour. Political, religious or ethnographic pressures on special parts of a population can also cause migration. Europe has changed since the middle of the last century from an emigration to a (de facto) immigration region, due to its more or less decreasing population and because it is an industrial region of high prosperity. Migration influences the social, economic and ecological situation of both immigration and emigration countries. The overall aim is a form of development leading to a sustainable situation in both kinds of countries. Sustainable integration of the immigrants will be one of the most important tasks in reaching this goal.

THE SITUATION IN EUROPE

The population of our world is increasing and will reach 6.45 billion people this summer; the overall growth rate has decreased in the last 40 years from 2% to 1.18%, but the average growth rate of the 50 poorest countries of the world is still 2.4%, while industrial countries have an average rate of 0.25%. The prognosis is that world population in 2020 will reach 7.5 billion. The changes in population between 2005 and 2020 (prognosis) for the different regions are the following, in millions:
Region                              2005    2020 (prognosis)    Growth rate in %
Africa                               887    1188                 34
Latin America and the Caribbean      558     659                 18
Europe                               725     705                 -2.8
North America                        332     380                 14
Asia                                3917    4570                 17
Oceania                               33      38                 15
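The growth rates in the table follow directly from the two population columns; the short sketch below recomputes them from the tabulated figures (small deviations from the printed percentages are due to rounding).

```python
# A minimal check of the growth-rate column, recomputed from the 2005 and 2020
# population figures given in the table above (all figures in millions).
populations = {
    "Africa": (887, 1188),
    "Latin America and the Caribbean": (558, 659),
    "Europe": (725, 705),
    "North America": (332, 380),
    "Asia": (3917, 4570),
    "Oceania": (33, 38),
}

for region, (p2005, p2020) in populations.items():
    growth = 100.0 * (p2020 - p2005) / p2005
    print(f"{region}: {growth:+.1f} %")
```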
These numbers show where we may expect migrations in the world.

History

Migration in Europe can be divided into different phases:

The first phase: In the 19th and early 20th centuries large waves of emigration took place. Between 1815 and 1930, more than 50 million Europeans emigrated for economic or political reasons; 38 million alone found their new home in the USA. Workers from Poland and the Ukraine migrated to the West European centres for coal-mining and steel production. After World War I, 6 million Europeans emigrated as a result of
the war: for ethnic reasons, repatriation and forcible relocation (Greeks from West Turkey; 1.5 million because of the Russian October Revolution). Before World War II, 0.5 million Jewish and politically persecuted people emigrated from Germany and Austria. During World War II about 8 million foreigners were forced to work in Germany.

The second phase: Between 1945 and 1949, 12 million East Germans and members of the German ethnic group emigrated from East Germany, Poland, Czechoslovakia, Hungary and Yugoslavia to Germany; 2 million of them lost their lives during repatriation. 10.5 million "displaced persons" went back from Germany to their home countries. 1.2 million Polish people had to leave East Poland and were settled in the former German East Prussia and Silesia. Prior to 1961, when the Berlin wall was built, about 3.5 million Germans crossed the internal border from East to West Germany. In the 50s and 60s, after the original colonial countries became independent, about 1 million people went back to their countries of origin. (Examples: French people from Algeria to France, people of the Portuguese colonies to Portugal, people in Indonesia to the Netherlands and inhabitants of the Belgian colonies to Belgium. English people also returned to England.) Many of these people settled down in the large European cities. Due to this migration these towns became multicultural centres.

The third phase: Because of the economic boom in Western Europe, a very high number of qualified workers were needed. The first immigrants came from Italy, Greece, Spain and Portugal, and later on also from Turkey and Yugoslavia. In 1973, due to the "mineral oil shock", a drastic limitation of immigration into Europe was introduced. These regulations have caused only a retardation of immigration but never a halt. This internationalization of the Western European labour market has caused an overall immigration of 30 million people. Currently, 19 million foreigners are living in Western Europe, mostly people at a lower social level.

The east-west conflict gave rise to a high number of immigrants to Europe. Internal political strife produced ethnic and political refugees in Eastern Europe, who were accepted by the countries of Western Europe, through the Geneva Convention, as candidates for asylum. For this reason alone, about 1 million refugees immigrated to Western Europe between 1956 and 1990 (Hungary ca. 200,000; Czechoslovakia 170,000; Poland 250,000; Bulgaria 400,000; and, in the year 1989 alone, 385,000 Germans). This large migration by GDR citizens in 1989 was one of the reasons for the breakdown of the communist regime in East Germany. As a result, 19 million inhabitants of Western Europe are not citizens of their country of residence. Thus history shows that Europe was, over the last 110 years, an emigration region first and then, for 70 years, more or less an area of immigration of increasing importance.

Demographic Changes

Emigration and immigration are influenced by pull and push factors. The increasing growth rates of countries around Europe create a pressure of immigration on the European region. On the other side, the natural decrease of population causes an economic need for additional manpower. "Eurostat" gives an overview (Fig. 1) of the growth rates of European countries. It is shown that Germany, Italy and Greece have negative natural rates, and that, on the contrary, France, Ireland and the Netherlands have the largest positive rates.
It may be of interest that the new EU countries have a negative average growth rate. The migration balance is so positive in
all the countries of the EU that the overall growth in Europe is marginally positive. According to these numbers, and influenced by different political directions, individual European countries also have different immigration policies (Fig. 1). The treaty of the European Union gives all European citizens the right to move to any member state. In practice, over the last 10 years, the mobility between member states is estimated to range between 0.1 and 0.2% of the total population per year. For the people of the new member countries (May 1st, 2004), different transition regulations are installed.

To keep population numbers constant, a birth-rate of 2.1 children per woman is necessary. The rates in Spain (1.22), Italy (1.25) and Germany (1.34) are below this figure. The problem of demographic development in Europe becomes apparent if the population of a country is extrapolated into the future by using today's growth rates. Using Germany as an example: up to 2050 the population will decrease by about 10 million.

Economic Consequences

For those countries that decrease in population, the question arises as to whether such a country can maintain its social sustainability and economic power. Another important figure will be the aging of the population. In recent decades industry has produced more products, of better quality and quantity, and with less manpower. The number of employees in the tertiary sector has also decreased, while maintaining the same or a better level of service. The important point is the percentage of the working population in relation to the elderly population. This ratio will be further influenced by the unemployment percentage and will also have an impact on social sustainability. For example: to keep the population of Germany constant, the immigration of 325,000 people per year would be necessary (a simple projection of this kind is sketched below). But the evaluation of all these figures shows that the decrease in birth-rates of a country's population need not be completely replaced by immigrants. The number depends on the educational level of the immigrants, on the aging and the percentage of employment of the population, on the gross national product, as well as on favourable market developments. Therefore an intensive discussion in the European Union and in most of the member states may continue.
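To illustrate the kind of extrapolation referred to above, the following minimal sketch projects a population forward under an assumed constant annual natural change and a constant level of net immigration. The starting population and the annual natural change used here are round illustrative assumptions, not figures from the text or official forecasts; only the 325,000 net immigrants per year is taken from the paper.

```python
# Illustrative only: a crude constant-rate projection of the kind discussed above.
# The starting population and natural change are assumed round numbers.

def project(pop_millions: float, natural_change: float, net_migration: float, years: int) -> float:
    """Project a population forward with a constant annual natural change and a
    constant annual net migration, both expressed in millions per year."""
    pop = pop_millions
    for _ in range(years):
        pop += natural_change + net_migration
    return pop

start = 82.0    # assumed starting population, millions
natural = -0.3  # assumed natural change (births minus deaths), millions per year
horizon = 45    # roughly 2005 to 2050

print(round(project(start, natural, 0.0, horizon), 1))    # no immigration: about 68.5
print(round(project(start, natural, 0.325, horizon), 1))  # 325,000 immigrants/year: about 83.1
```

Even such a crude linear projection shows the order of magnitude involved: a sustained natural deficit of a few hundred thousand per year amounts to a loss of well over ten million people by mid-century unless it is offset by immigration.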
IMMIGRATION INTO EUROPE

Africa

The growth rate of the population in Africa is 3% per year, with a stagnating standard of living, the consequence of which is dramatic internal migration. 100 million Africans are without jobs and 10 million southern African people have emigrated to other African countries. Because of the strong cultural and economic interaction of the states bordering the Mediterranean, the North African population will exert considerable immigration pressure on the South European member states of the EU. North Africa has an emigration potential, dependent on the job market, of 0.8 to 1 million people per year. Therefore this region is an important source region of immigration for Europe and, in spite of the restrictive immigration policy of the EU, the number of North Africans will increase by 2025 to a maximum of 65 million (10% of the whole population). France, Spain and Italy are the favoured areas of immigration for these African emigrants. To overcome this problem, the EU is taking the following measures:

1. Integration of North African immigrants already in the EU.
2. A passive policy that controls and halts immigration.
3. Suitable actions in the emigration countries to decrease the emigration potential through technical, social and financial cooperation.

The efforts of the member states of the EU to protect their populations against terrorism lead to a more restrictive immigration policy towards Muslim people and will strengthen the restrictive immigration policy of the EU member states.

Turkey and the Arab States

Immigration from Turkey to Europe has been taking place since the beginning of the 60s. These "guest workers" came mostly from a rural background. The intention was to earn money, go home and build a new existence. This emigration of Turkish workers to Western Europe continued until 1974, but an important part of them have become permanent residents with their families and now, in the second and third generations, many have acquired the citizenship of the host countries. At present some 3.5 million Turks are living in Western Europe. Because of their Muslim religion, most of them have not integrated with the host population. Turkey has requested membership in the EU, but opinions about this diverge within the EU.

The Arab world has a population of about 280 million and, by 2020, will grow to 400-450 million. 38% of the population is under the age of 14. The conservative rule of the administrations cannot keep pace with this development, and therefore the social, economic and educational situation will decline and there is a real threat of instability. Besides this negative development, the long-lasting conflict between Israel and Palestine will worsen any chance of future solutions. The acceptance of Arab immigration into Europe is decreasing because of the fear of terrorism.

Others

After Germany had absorbed Russian citizens of German origin, the legal immigration of Russians and Ukrainians decreased noticeably. After the Eastern European countries become members of the EU, the outer border will be with Russia and the Ukraine. We will see in the future whether there is a tendency of transit immigration from east to west. During the first years, regulations will prevent legal immigration. The illicit nature of such migration is unsettling. The international mobility of the Roma and Sinti is a special problem. There is no statistical data available because these people are rarely specified as ethnic groups.

INTEGRATION

Because of its demographic development, Europe will also be more or less a region of immigration in the future. The European Union is trying to establish some mutual directives concerning immigration. Examples are directives on the right to family reunification, on the status of third-country nationals who are long-term residents, and on the conditions of entry and residence of third-country nationals for the purpose of paid employment and self-employed economic activities. The successful immigration of third-country nationals is closely connected to their integration into their country of residence in Europe. It should be successful by their 2nd or 3rd generation. Most European states have made major efforts in recent years to improve the integration of immigrants and persons enjoying international protection, by developing national integration policies, but many of them consider that the policies they have put in place so far are not sufficiently effective.
However, more sustained immigration flows are increasingly likely and necessary. The trend towards a shrinking working-age population, in combination with various push factors in the developing countries, is likely to generate a sustained flow of immigrants over the next decades. The successful integration of immigrants is both a matter of social cohesion and a prerequisite for economic efficiency. Low employment and high unemployment rates, even among the second generation of immigrants, are characteristic examples of these problems.

Integration is a two-way process based on mutual rights and corresponding obligations of legal residents and members of the host society to provide for full participation of the immigrants. That means, on the one hand, that the immigrants must have the possibility to participate in economic, social, cultural and civil life and, on the other hand, that the immigrants must respect the fundamental norms and values of the host society. Integration involves the development of a balance of rights and obligations. Access to the labour market is crucial for the integration of third-country nationals into the host country's society, and education and training are key factors of successful integration. A core concern in most of the European states is the ability to speak the language of the host country. Poor language ability is seen as the main barrier to successful integration.

A special problem concerning integration is the families of immigrants of Islamic faith. It is a tradition supported by the Islamic religion that the female side of the family, in many cases, must be separated from the social life of the host society. As large European towns have become more and more multicultural, these immigrants live together in specific areas of the city. In turn, the lack of assimilation leads to an internal emigration within the host country even in the second or third generation. A special concern is the education of immigrants' children in the elementary schools together with the children of the host society. Normal teaching becomes impossible if the children of immigrants reach a certain number, and sometimes, in these neighbourhoods, the percentage of immigrant children is higher than the number of children from the host society. In more rural parts of the host country, the children of immigrants do not impede education to a large extent, but they are left behind in education and the cycle of problems continues. It is urgent to improve this situation by special language courses (teaching the language of the host country), but this is very difficult to accomplish.

The holistic approach to immigrants' integration, and in consequence also to immigration itself, is greeted sceptically by the European population, which maintains an attitude of reserve. This situation may be aggravated in times of economic problems and under threats of terrorism. Consequently, politicians hold different opinions about how to solve this problem.

CONCLUSION

Since the middle of the last century, Europe has changed from a region of emigrants to countries of immigration due to its demographic development. Such migrations follow the "push and pull phenomenon". Therefore European politicians, and especially the EU, are prepared to accept this situation in principle in future. To continue sustainable development in Europe, immigrants from so-called third countries must be integrated into their host countries through economic partnership.
Favourable conditions imply a good education and knowledge of the host country's language. Experience has shown in the past that the integration of families takes time, money and an open labour market. European people and their politicians know about the holistic approach and the necessity of immigration from third countries into Europe, but they are very sceptical due to the moderate economy in Europe and the threat of terrorism from Muslim countries.
MIGRATION AND DEVELOPMENT: A SYNOPTIC EUROPEAN OVERVIEW
NIGEL HARRIS
Development Planning Unit, University College London, UK

MIGRATION

It is now generally agreed that the European labour force is set to decline over the next three decades - to different degrees in different countries and at different times. Efforts are underway to increase the employment of adults currently not working, to raise retirement ages, and to increase intra-European migration; business is also, to different degrees, outsourcing activities and services, innovating to replace labour, etc. But, even if these measures are successful, the deficits in labour supply are still likely to be economically deleterious. The more dynamic the European economy is - the faster the rate of growth and the more rapidly economies restructure - the more severe the problems of labour supply become. Most public attention has been focussed exclusively on the problems of scarcity of skilled and highly skilled workers, but the educational systems of Europe are continually upgrading the native-born workforce, so that there are steadily fewer workers willing, at the rates of pay on offer, to undertake low-skilled jobs. The shortage of complementary low-skilled workers can severely reduce the capacity of the skilled to attain optimal levels of productivity.

The present system of migration controls, put in place in the 1970s, is no longer capable of accommodating both the dynamic but unpredictable domestic demands for workers of different skills, and the global demand for work in Europe, an incapacity illustrated in the perpetual changes in statute and regulation governing entry and residence. The system is increasingly costly, bureaucratic, opaque and arbitrary. Intensifying controls only increases the criminalisation, brutalisation and militarization of the process of entering Fortress Europe (changes no less apparent on the border between Mexico and the United States). In sum, the workers are needed, and developing countries have available a ready supply of willing and literate workers. Furthermore, there is evidence of major potential gains to the world economy from lowering migration controls¹.

The reluctance of Europeans to avail themselves of this obvious remedy arises, at a minimum, from fears that the workers would want to settle and would impose burdens on systems of social security, housing, etc.; at a maximum, from fears that increased ethnic diversity will undermine that social homogeneity seen as the foundation of the nation-state. However, in terms of long-term settlement, contrary to popular opinion, many migrants, possibly a majority and particularly the low-skilled, do not wish to go into permanent exile², but only to secure access to work, to earn in order to support families left at home or meet other major expenses (to marry, purchase a house, pay for hospital treatment or education, etc.). This is particularly important where purchasing power values are markedly different between source and destination countries - a poorly paid worker in Europe has a middle income at home, provided he or she is able to spend the incomes earned abroad at home. Most migrants would seem to prefer to circulate rather than settle (of course, this generalisation is powerfully affected by conditions at home³). Circulation is the most ancient form of worker
migration, and there are many schemes that have worked with great reliability in this field (not least, employer-run contract labour schemes)⁴. However, the effect of immigration controls is, perversely, to force migrants to settle, to accept exile until such time as they can secure citizenship and thus the freedom to circulate⁵. It has been commonly noted that Spanish, Portuguese and Greek migrant workers settled permanently in Germany until their countries entered the European Union and they won the right to return home without jeopardising their freedom to circulate and return to Germany if they wished. Thus, preserving the freedom to circulate is a condition of workers being willing to return home (as a number of European governments have found in the disappointing results of schemes to encourage return). However, the modern nation-State is ill equipped to accommodate circulation. Governments assume and seek to preserve a sharp distinction between a clearly defined body of citizens, the basis for the exercise of democratic franchise and the privileges of nationality, and those who are foreigners and should leave. The instinct of government is to enforce either departure or "integration", immobility and incorporation into the historic nation. However, despite the problems attached to the idea of migratory circulation, it provides a way both of spreading the benefits of migration over much larger numbers of people and of meeting the fears of Europe's native-born population. Furthermore, the decline in international transport costs - and in ordinary communication - makes feasible the keeping of family and other social relationships intact while a worker is working abroad, and thus the social basis for return.

DEVELOPMENT

In the 1990s, the dynamic of Europe's labour market attracted much larger numbers of regular and irregular workers from outside Europe - globalisation has, as it were, become inescapable on the streets of Europe's big cities. The by-product of this change has been an extraordinary increase in the flow of worker remittances to their home countries (increasing rapidly, and now - including estimates of unofficial transfers in cash and kind - worldwide, possibly two and a half times the levels of official development aid). Given the differences in purchasing power parities between developed and developing countries, this global sum is immensely increased at the point of expenditure. Furthermore, the multiplier effects of such spending are further magnified (by two to three times according to one study of Mexico). In development terms, this is a remarkable and unexpected increase in the revenues of developing countries. In addition, remittances are, in contrast to other revenue and investment flows, counter-cyclical (they increase in a recession), do not generate counter-flows (payments for imports, profits on foreign investment), and go directly to those in need in some of the poorer localities (Suro, 2003). Governments in developing countries, after some reluctance, have become eager to harness this new source of revenues for development⁷. The four hundred or so hometown clubs of the Mexican diaspora in the United States have mobilised to finance development projects in their home localities - to pave a road, build a health clinic, primary school, etc.
Mexican local, State and Federal governments have, in some States, offered three dollars to match each dollar remitted by a worker abroad, and have re-aligned domestic anti-poverty, health and educational programmes (Progresa, now Oportunidades) to magnify the effect of remittance flows (Escobar et al., 2003; O'Neil, April 2003). The Mexican government, after long years of shame at the scale of emigration of its citizens, has moved to track their destinations, keep in touch,
supply Mexican identity cards (for irregular migrants), facilitate cash transfers and offer advice. Other countries have developed schemes to utilise the scarce skills of their most highly skilled citizens abroad to upgrade universities and the professions, and to start industries of high technology (Lindsay Lowell, Dec. 2001; Findlay, 2001). There are many other schemes for collaborative transnational partnerships to extract development and other benefits from emigration (see Grillo, 2002, Grillo and Riccio, 2004). However, migration can remove from the labour force of a developing country the most skilled, energetic and enterprising workers, making very much more difficult the task of conquering poverty. It would be quite wrong for Europe to purchase the welfare of its inhabitants at the cost of developing countries. However, there are means, discussed below, to turn circulatory migration into a deliberate positive reinforcement for development efforts.
AID

Official aid programmes to developing countries play a great variety of roles, from supporting macro-economic balance and reforms to financing responses to emergencies, and projects. Project aid has a mixed record of achievement and can, in certain circumstances, lead to the subordination of the perception of the developing country's requirements to the interests of the donor. This does not happen with remittances, which carry no political strings. Furthermore, the lack of local development agents can jeopardise the outcome of aid projects. Donors employ governments in developing countries, consultants and, increasingly, NGOs to play the role of local implementing agents. However, with circulatory migration, there could be an immense number of development agents in returnees. Aid programmes could then be employed to reinforce the efforts of returnees, of remittance flows and, as now, the efforts of developing countries' governments and NGOs⁸.

Enhancing human capital is widely seen as one of the most important issues in economic development, and circulatory migration can contribute to this aim. On the one hand, temporary migration includes a large number of students who come to Europe to study. In many cases, they are also allowed to work. On the other, if we were to think of all circulatory migrants on the model of students (including in study, work-experience, on-the-job training, and enhancement of professional skills), then migration could simultaneously meet Europe's requirements for workers and enhance the human capital of developing countries through returnees. In addition, treating all migrants on the same basis would militate against the current tendency to create a two-class system in which the highly skilled are able to move fairly freely, work and settle, but the low-skilled are expected to be tied to the soil of their native place. Aid programmes, in conjunction with host country educational institutions, could be enlisted to organise the training, education and professional development programmes of migrants, track returnees, and offer follow-up programmes in the student's country of origin, of aid and support for development projects.

LESSONS FOR EUROPE

There are many issues not resolved here - for example, how far families can migrate with temporary workers, how far extensions in the period of work are permitted, how people who wish to stay on a more permanent basis are to be permitted to make the transition from migrant to settler, how conditions of work and
pay are to be regulated, monitored and policed, and how migrants are to be accorded health protection during their work period (whether within or without existing social security arrangements). Ideally, employers should be obliged to bear the risks and costs of recruitment and repatriation, but this may not satisfy European electorates. Partnerships between home and host country governments, relevant trade unions and NGOs may be a formula for establishing a fair, well-regulated and well-policed system of circular migration. However, the central principle remains: to turn migration from a problem for both Europe and for developing countries into an opportunity for the reduction of world poverty.
MIGRATION AND ANTI-GLOBALISATION
Between about 1970 and 2050, Europe is undergoing the wrenching processes of social and economic change involved in the emergence of a single integrated world economy. European electorates have accepted much of this process already in terms of deindustrialisation (and the relocation of part of the manufacturing capacity to Asia), in trade and capital movements, and now the "out-sourcing" of services. Major reforms in Europe's social security and pension schemes can also add to a generalised sense of insecurity. Still to come are the full effects of declining population and ageing. In retrospect, the 1990s may be seen as witnessing the beginning of a major transition in terms of people, particularly in Europe's large cities. An emerging world labour market is in continual collision with the political order of the world as embodied in political boundaries. After two hundred years or so of creating national States and the appropriate national identities, it would hardly be surprising if the combination of these processes threatened to destabilise the psyche. The economics of labour migration could become disastrously intertwined in the politics of personal identity. In fact, the process may be less destabilising for the majority of Europeans who belong to countries than for those Europeans to whom those same countries belong, or rather for the intelligentsia whose role it has been to articulate and sustain the national idea - a respectable xenophobia may be more dangerous today than popular resentments. This would be enhanced for the population at large by the real or invented association of border crossing with terrorism. The danger of terrorism lies less in the threat of particular acts of violence and rather more in the maintenance of a continual state of popular panic in electorates, to which political leaders are obliged, if they are to survive politically, to react with "tough measures". Decisions on migration are affected by how far we see the nation-State as constituting in the future the primary organisation of the world's population, so that national social homogeneity should be shored up, or new nations created out of ethnic diversity. If we accept that states may be superseded by other sub-national or supranational bodies of governance and identity, then it may be more important to facilitate greater circulation rather than pursue integration. Not dealing with migration in a timely and publicly transparent way thus has the potential for disaster, pulling down the temple on our heads. On the other hand, converting the issue into an opportunity for a sustained attack on world poverty can mobilise the idealism of Europeans for this task.
ENDNOTES
1. The theory of international trade turns on the proposition that where there are differences in factor endowment (raw materials, labour, capital, entrepreneurship, etc.) between countries or localities, disproportionate economic gains result from exchanging factors. This is the rationale for liberalising world trade and the mobility of capital. A number of studies have endeavoured to put figures on the gains arising from the liberalisation of labour migration. Hamilton and Whalley (1984), using 1977 data and a set of strict assumptions, estimate gains to gross world product (then US$7.8 trillion) arising from lifting all migration controls at between $4.7 trillion and $16 trillion. Recent reworking of more up-to-date data confirms these broad magnitudes (Moses and Letnes, 2002; Iregui, 2002). UNDP, in the Human Development Report 1992 (pp. 57-58), presents a different calculation of more limited changes. Walmsley and Winters (2001) present a model in which worker migration to employment in services in developed countries, equal to three per cent of the developed countries' labour force, would yield benefits of $156 billion, shared between developed and developing countries, compared to the estimated $104 billion generated by a successful outcome of the Doha trade round (and the roughly $55 billion granted in aid to developing countries by the OECD group). The precise figures are no better than the assumptions made, but the direction of change, and the magnitudes, are important (a stylised numerical sketch of the underlying arithmetic follows these endnotes).
2. Without controls, it is commonly observed that migrant workers circulate. Thus, with the decline in trans-Atlantic transport costs of the 1890s, 40-50 per cent of Italian migrants to the United States up to 1914 returned to Europe, as did 30-40 per cent of Portuguese, Croatians, Serbs, Hungarians and Poles (Baines, 1991). Constant and Zimmermann (2003) estimate that 60 per cent of contemporary guest-workers in Germany are repeat migrants. See also Eichengreen (1994) and Dustmann (1996).
3. However, steady progress in many developing countries in the provision of basic infrastructure, telecommunications, etc., is narrowing the gap with developed countries. The availability of cheap support workers - maids, nannies, gardeners, cooks, drivers - may already mean, for example, that standards of living for software programmers in Bangalore - at lower levels of remuneration - are higher than in Silicon Valley.
4. The German guestworker case is often cited to support the proposition that "there is nothing so permanent as a temporary worker". However, this is a misjudgement since (i) employers pressed the government to keep workers because the suspension of the programme meant that there could be no replacements; (ii) workers tried to stay because they recognised that, if they left, there would be no repeat opportunity to work in Germany; (iii) in any case, a significant proportion of guestworkers did leave Germany - see Werner (2001); Constant and Massey (2002).
5. There are many other schemes of circular migration that have worked effectively - for example, the US-Mexico Braceros scheme, contract labour schemes in the Persian Gulf, etc. In the case of the Mexico-Canada agricultural labour programme, in the 28 years of its operation (with 12,500 workers involved in 2002), no Mexicans overstayed their visas, and 5 per cent returned to Mexico before their visas expired - O'Neil (2003). In the American case, Mexicans are estimated to have stayed in the US on average three years in the early 1980s, but in the late 1990s, after major steps to tighten border controls, nine years - and, as a result, to bring spouses, put children into schools and seek US citizenship. On the general case, see Cornelius (2001) and Massey et al. (2002): "Immigration policies should recognise that most international migrants are not initially motivated to settle in developed nations, and that hardening the borders through police actions only undermines the inclination to return, ultimately reducing the flow of people and migradollars back to sending regions to choke off their development. A smarter strategy would be to counter the natural inclination to remain abroad by facilitating return migration and the repatriation of funds" (Massey et al., 2002: 157).
6. But also, migrants are increasingly saving parts of the European rural economy - see, for example, Kasimis, Papadopoulos and Zacopoulou, 2003.
7. As also have financial institutions, development banks and aid donors - see the DFID-World Bank conference in London, Oct. 2003.
8. The EU has made efforts to relate aid programmes to migration, but these have usually been directed at preventing emigration from source countries, or encouraging return migration, rather than reinforcing development.
9. There have already been schemes here - see, for example, the Belgian Migration for Development programme, and working holidaymaker schemes in Belgium, the UK, etc.
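The trillion-dollar estimates cited in endnote 1 rest on a simple piece of arithmetic: workers who move from a low-productivity to a high-productivity economy produce more, and the world gains the difference. The sketch below is a deliberately stylised illustration of that arithmetic only; it is not the Hamilton-Whalley or Walmsley-Winters model, and every number in it is an invented assumption rather than a figure taken from those studies.

# Stylised illustration of the arithmetic behind estimates of gains from
# liberalising labour mobility (e.g. a "3% of the labour force" scenario).
# All numbers below are illustrative assumptions, not values from the studies cited.

def migration_gain(movers_millions, output_home, output_host, catch_up=0.75):
    """Annual world gain (US$ billion) if 'movers_millions' workers move from a
    low-productivity to a high-productivity economy and reach a fraction
    'catch_up' of host-country output per worker."""
    gain_per_worker = catch_up * output_host - output_home   # US$ per worker per year
    return movers_millions * 1e6 * gain_per_worker / 1e9     # convert to US$ billion

# Assumed: developed-country labour force of ~450 million, so a 3% quota is
# ~13.5 million movers; output per worker ~US$3,000/year at home, ~US$45,000 in the host.
print(migration_gain(13.5, 3_000, 45_000))   # -> roughly 415 (US$ billion per year)

On these invented assumptions a quota of three per cent of the developed countries' labour force yields a gain of a few hundred billion dollars a year. The published models are more conservative, confining movement to particular sectors and allowing wages to adjust, but the order of magnitude - comparable to or larger than a successful trade round - is what the endnote's figures convey.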
REFERENCES
1. Baines, D.E. (1991): Emigration from Europe, 1815-1930, Basingstoke: Macmillan.
2. Constant, Amelie and Douglas S. Massey (2002): Return migration by German guestworkers: neoclassical versus new economics, International Migration, 40/4, 2002: 5-39.
3. Constant, Amelie and Klaus F. Zimmermann (2003): Circular movements and time away from the host country, IZA DP 960, Forschungsinstitut zur Zukunft der Arbeit (IZA), Bonn, Dec.
4. Cornelius, Wayne (2001): Death at the border: efficacy and unintended consequences of US immigration control policy, Population and Development Review, 27/4, Dec.: 661-685.
5. DFID-World Bank (2003): Report and Conclusions, International Conference on Migrant Remittances: Development Impact, Opportunities for the Financial Sector and Future Prospects, Oct. 9/10, London.
6. Dustmann, Christian (1996): Return migration: the European experience, Economic Policy: A European Forum 22, April 1996: 215-250.
7. Eichengreen, Barry (1994): Thinking about migration: European migration pressures at the dawn of the next millennium, in H. Siebert (Ed.), Migration: A Challenge for Europe, Mohr, Tübingen.
8. Escobar, Agustin, Philip Martin, Peter Schatzer, Susan Martin (2003): Migration: moving the agenda forward, International Migration, IOM, 41(2), 2003.
9. Findlay, Alan (2001): From brain exchange to brain gain: policy implications for the UK of recent trends in skilled migration from developing countries, International Migration Programme 43, ILO, Geneva, Dec.
10. Grillo, Ralph (2002): Transnational migration, multiculturalism and development, Focaal - European Journal of Anthropology, No. 40: 135-148.
11. Grillo, Ralph, and Bruno Riccio (2004): Translocal development: Italy-Senegal, Population, Space and Place, 10: 99-111.
12. Hamilton, C., and J. Whalley (1984): Efficiency and distributional implications of global restrictions on labor mobility: calculations and political implications, Journal of Development Economics, 14 (1-2): 61-75.
13. Iregui, Ana Maria (2002): Efficiency gains from the elimination of global restrictions on labour mobility: an analysis using a multiregional CGE model, World Institute for Development Economics Research conference, Poverty, International Migration and Asylum, Helsinki, Sept. 27th-28th.
14. Kasimis, Charalambos, Apostolos G. Papadopoulos, Ersi Zacopoulou (2003): Migrants in rural Greece, Sociologia Ruralis, European Society for Rural Sociology, 43/2, April: 167-184.
15. Lowell, B. Lindsey (2001): (a) Some development effects of the international migration of highly skilled persons, International Migration Programme 46, ILO, Geneva, Dec.; (b) Policy responses to the international mobility of skilled labour, International Migration Programme 45, ILO, Geneva, Dec.
16. Massey, Douglas, Jorge Durand, Nolan J. Malone (2002): Beyond Smoke and Mirrors: Mexican Immigration in an Era of Economic Integration, Russell Sage Foundation, New York.
17. Moses, Jonathan W. and Bjorn Letnes (2002): The economic costs of international labor restrictions, paper for WIDER conference (see Iregui above).
18. O'Neil, Kevin (2003): (a) Using remittances and circular migration as drivers of development, Center for Comparative Immigration Studies, University of California, San Diego, Apr. 11/12th; (b) Migration and Development, Migration Policy Institute, Washington, Dec.
19. Suro, Roberto (2003): Remittance senders and receivers: tracking the transnational channels, Pew Hispanic Center and Multilateral Investment Fund, Washington DC, Nov. 24th.
20. UNDP (1992): The Human Development Report 1992, UNDP: New York.
21. Werner, Heinz (2002): From the German "Guestworker" programmes of the Sixties to the current "Green Card" initiative for IT specialists, International Migration Programme 11, ILO, Geneva.
22. Walmsley, Terrie L. and L. Alan Winters (2002): Relaxing restrictions on the temporary movement of natural persons: a simulation analysis, unpublished paper, University of Sheffield.
23. Winters, L. Alan, Terrie L. Walmsley, Zhen Kun Wang, Roman Grynberg (2003): Liberalising labour mobility under GATS, Economic Paper 53, Commonwealth Secretariat, London.
MIGRATION FROM AND WITHIN ASIA
K.C. SIVARAMAKRISHNAN
Visiting Professor, Centre for Policy Research, New Delhi, India
This paper's main purpose is to serve as an introduction to the scale and patterns of migration from and within Asia. Its focus is international rather than internal migration, which is touched upon only briefly. The paper relies heavily on published data from a variety of sources such as the World Migration Reports, the Human Development Reports, regional migration surveys and numerous publications and articles. No separate research has been undertaken for preparing this paper. The World Federation of Scientists, a prominent international body of scientists, has been organising an annual conference at Erice for the past several years to consider some serious problems of planetary significance. In recent years, some of the conference themes have dealt with socio-economic and socio-political issues. Perceptions of international migration, its attendant problems and prescriptions for dealing with them usually tend to pay more attention to cross-border movements into North America and, recently, into Europe. Given the preponderance of Asia in world population and the huge labour force in many of its centres, at least a preliminary understanding of the dimensions is essential for any serious discourse on migration. It is hoped this paper will serve that purpose. In preparing this paper I acknowledge with thanks the assistance of Mr. Kamal Jit Kumar, Librarian in the Centre for Policy Research, and Mr. Karthikeyan, Researcher in the Institute of Social Sciences, in compiling the data from numerous sources, and Mrs Sarala Gopinathan for secretarial assistance.
INTRODUCTION
Migration has a long history. The Chinese, Korean and Hindu settlements in Asia, the movement of the Bantu-speaking peoples from the north to the south in Africa and the European colonization of the Americas are all a part of human history. However, it is the scale of international migration in recent years that has added new dimensions and complexities. The United Nations population division estimated that migrants, defined as those residing in foreign countries for more than one year, accounted for only about 75 million persons in 1965. For 10 years thereafter international migration grew at 1.16% per year, compared with general growth in global population of 2.04%. However, in the next 5 years the global migrant population increased by about 2.6% per year, compared with only 1.7% growth in the general population. Today the number of international migrants is estimated at 175 million, of whom 16 million are refugees. The bulk of international migrants are thus voluntary, as distinct from involuntary migrants or refugees forced out by war, ethnic strife or other conflicts, natural calamities, etc. The principal motive is economic, which, it must be emphasized, is shared by the originating as well as the receiving country. The phenomenon is worldwide.
Figure 1: Graph on international migration.
The export of Africans as slaves to the Americas, the large-scale movement of industrial labour from China after the Opium Wars, the recruitment of Indians to work in the plantations of the West Indies, Ceylon or the Fiji Islands, and similar migratory movements have all been prompted by economic considerations. As Harris rightly points out, in the 19th century economists and politicians would have been astonished if told these movements were unnatural. Yet during the past few decades more and more governments, rich or poor, developed or less developed, have been viewing immigration as 'too high' and adopting policies to control and reduce migration. Asia is no exception to this trend. Three broad streams of international migration are considered in this paper (Figure 2). First is the flow of migrant labour from various Asian countries to the Gulf region. The large scale of these flows has been maintained for several years, though the Gulf War of '91 was a temporary setback. The second is the migration to a few countries, especially in East and South East Asia, such as Japan, Korea and Singapore, as well as the Taiwan and Hong Kong areas. The main originating countries are the Philippines, Indonesia and Bangladesh. The third stream is within South Asia, where migration, both voluntary and involuntary, has occurred on a large scale during the past few decades.
Figure 2: The three migration streams considered in this paper, and prospects and policies for the future.
ASIA TO MIDDLE EAST
The rise in oil prices in 1973 and the consequent flush of funds enabled the oil-producing Arab countries to undertake massive infrastructure development and construction programmes. Since these countries, with the exception of Saudi Arabia, had limited populations and an even more limited supply of labour and skills, they turned to several Asian countries such as India, Pakistan, Bangladesh and the Philippines. Sri Lanka, Indonesia and Thailand were also significant sources. In the case of Korea, Korean companies were successful in securing several construction contracts in the Middle East and, as part of these contracts, a large number of Korean construction workers were sent to the Gulf countries. Starting with about 70,000 in 1977 and peaking at 197,000 in 1982, about 1.4 million such contract labourers went to the Middle East in the ten-year period from 1977 to 1987. The decline began in 1985 due to demographic changes in Korea, consequent labour shortages and rising incomes, which rendered labour export unattractive. By 1981, about 2.5 million workers had gone from various Asian countries. Thereafter the annual flow was about one million for the next several years.
Figures 3 and 4: Map and table, the Middle East magnet.
THE MIDDLE EAST MAGNET
- Gulf War: a temporary setback for India, Pakistan and Bangladesh.
- Tightening of the regime and repatriation: about 1 million sent back under amnesty schemes during 1996-98.
- Still, annual flows from South Asian countries exceed half a million; much less from East Asian countries.
- Stock of migrants from South Asia estimated at about 4 million.
- Remittances from the Middle East a major part of GNP and export earnings.
The demographic and economic features of these labour-exporting countries of Asia merit attention. The combined 1975 population of India, Pakistan, Bangladesh, Sri Lanka, Indonesia and the Philippines was about 956 million. Population growth ranged from a low of 1.3% in Sri Lanka to 2.8% in Pakistan. Total fertility rates were 6 or more in Pakistan, Bangladesh and the Philippines. Given the low per capita GDP and the large labour force, emigration to the Middle East in response to the demand became an attractive option (see Figure 5, table on features of labour-exporting countries). The South Asian countries of Bangladesh, India, Pakistan and, to a lesser extent, Sri Lanka became the principal sources of labour export to the Middle East. From a modest figure of a little over 50,000 in 1976, by 1985 the figure had reached close to a million. Indonesia, the Philippines and Thailand were the other important sources of labour supply to the Middle East, with the volume exceeding 800,000. The flows for some of the years during the period are contained in tables for the South Asian countries (see Figure 6).
Figure 5: Features of labour-exporting countries.
FEATURES OF LABOUR EXPORTING COUNTRIES (population figures in millions)
Source: Human Development Report 2004.
Figure 6: South Asia - emigrants and percentage to the Middle East.
Source: World Migration Report 2000, International Organisation for Migration.
However, political uncertainties did affect the migrant flows from some countries. After the Gulf War, workers from countries that supported Iraq were expelled. The expulsion of about 350,000 Palestinians from Kuwait and nearly 800,000 Yemenis from Saudi Arabia changed the nationality composition of foreign workers. Kuwait banned the return of five nationality groups, i.e. Iraqis, Palestinians, Jordanians, Yemenis and Sudanese. The resulting vacuum in the labour market was filled by Asians and Egyptians. Given this large supply, it was inevitable that the composition of the work force in the Gulf countries would undergo perceptible changes. As of 1996, the non-national component ranged from as high as 76.5% in Qatar to 26.5% in Oman. The position obtaining in 1996 is presented in a table (see Figure 7). It is not easy to establish what part of the non-national labour force is regular and what part is irregular, including unauthorised entry or stay.
Figure 7
POPULATION AND LABOUR FORCE IN THE MIDDLE EAST COUNTRIES
Source: World Migration Report 2000, International Organisation for Migration.
Between 1996 and 1998, the six Gulf Cooperation Council countries also undertook several measures to clamp down on unauthorised migrants and to jail undocumented workers. During this period nearly 1.023 million workers were repatriated to their countries. The number of workers affected was the highest in Saudi Arabia, accounting for 752,000 (see Figure 8). Figure 8
GULF WAR AND AFTER - AMNESTY RETURNEES 1996-98
Source: World Migration Report 2000, International Organisation for Migration.
Notwithstanding these measures, as well as the 1991 Gulf War and its aftermath, remittances earned by migrant workers have kept the migrant flows to the Middle East high. For India, Pakistan and Bangladesh alone, remittances currently total more than 16,000 million U.S. dollars (see Figure 9 for the table on remittances from the Middle East). For the South Asian as well as the East Asian countries, the Gulf countries have therefore remained major labour export destinations.
Figure 9: Remittances from the Middle East.
REMITTANCES FROM THE MIDDLE EAST MIGRANTS
- Remittances accounted for 26% of export earnings.
- Important in meeting balance of payments at times of crisis.
- Remittances use both formal and informal channels.
- For India, Pakistan and Bangladesh, remittances total 16,000 million US$ as of 2003.
Source: Based on country studies under the Asian Regional Migration Project.
Nevertheless, the huge volume of migration to the Gulf countries underwent some change, not because of what happened there, but because of demographic and economic developments far away, in some East Asian countries. Declining fertility and a shrinking population and workforce, accompanied by rapid economic growth, brought about the changes. Korea made a transition from a labour-exporting to a labour-receiving country. So did Thailand, to a lesser extent. Japan, Hong Kong, Taiwan and Singapore came to need workers in some categories (see Figure 10 for the table on fertility decline). Alternative destinations became available to labour-exporting countries like the Philippines, Indonesia, Bangladesh and Thailand (see Figure 11 for the table on migration shifts).
Figure 10
Total fertility rates from 1950-55 to 2002 for China (mainland), Hong Kong, Japan, the Republic of Korea and Taiwan, showing steep fertility decline, accompanied by economic growth and rising incomes.
Source: United Nations (1995), World Population Prospects: The 1994 Revision; Human Development Report 2004.
Figure 11: Migration shifts from the Middle East to East Asia.
- Impact of the Gulf War and the beginnings of a shift.
- For Bangladesh, Indonesia, the Philippines and Thailand, the shift is significant.
Source: International Migration Policies in Asia.
EAST ASIA
An overview of East Asia is necessary to establish the context of migration streams
within this part of the continent. In any such overview China looms large, with its population of 1.3 billion and a labour force of 700 million. Since the Opium War, large-scale migration through indentured labour to engage in various infrastructure works in North America, such as the construction of the railways, has been a prominent phenomenon. Migration within the East Asian region, to the Indo-China countries, Malaysia and Singapore, has also been significant. As of 1990, it was estimated there were 30 million Chinese overseas. Large as this number is, it is still small compared to the mainland population of 1.3 billion (see Figures 12 and 13 for the East Asia overview chart and the map of migrant flows). Figure 12
EAST ASIA OVERVIEW
- China looms large; population growth rate projected at 0.6%; Indonesia and the Philippines with large labour forces.
- Migration from China and the region of long history, since the Opium War.
- By 1990, about 30 million Chinese overseas, many in the region; still small compared to the mainland population.
- Internal migration significant within China; from 2 million in the mid-'80s to 80 million by 2001.
Figure 13
MIGRANT FLOWS IN EAST ASIA
- Bangladesh to Malaysia
- China
- Myanmar to Thailand
- Indonesia to Malaysia, Brunei, Thailand
- Philippines to Korea, Hong Kong, Taiwan, Malaysia, Singapore
It is estimated that about 300,000 to 400,000 Chinese migrate to other countries annually, including contract workers, settlers, students, unauthorised workers, etc. Roughly a third of these emigrate legally to the U.S., Canada and Australia. Excluding Hong Kong and Macao, the numbers of Chinese workers recruited for other countries in East Asia are small. Though China did not participate in labour export to the Middle East, after the economic reforms of 1979 China began to apply its vast labour resources. By the mid-90s, Chinese migrants abroad generated about US$7 billion in foreign exchange for the mainland. During the 1990s, international emigration from China is estimated to have been about 3.15 million, of whom half a million were job seekers. Yet this is a small proportion of the country's labour force of 700 million. Internal migration within China is significant. From a modest figure of 2 million in the mid-80s, it is estimated, according to household surveys of the Ministry of Agriculture, that by 2001 such migration was of the order of 88 million, which is about 13% of the labour force. The causes, patterns and consequences of these migrations require an understanding of the development strategy of China, which is beyond the scope of this paper. The Philippines and Indonesia are the other populous countries in the East Asia region from which migration flows within the continent have become significant. Myanmar and Bangladesh are the other countries of relevance. Regarding receiving countries, mention has already been made of declining populations and labour forces in Japan, Korea, Hong Kong, Taiwan and Singapore. These have become important receiving countries of migration. Malaysia and the oil-rich, but small, Sultanate of Brunei are also important destinations because of their labour needs. Among the destination countries, by 1997, 27% of the work force in Singapore comprised foreign workers, who numbered more than half a million. Singapore has consistently encouraged highly skilled foreigners, providing incentives for them to acquire permanent residence. Singapore's approach has been pragmatic, recognizing that migrants perform jobs which local workers are either not willing or unable to do and that migration thus adds to the flexibility of the labour market.
Figure 14: Table on Singapore workforce.
SINGAPORE: FOREIGN WORKFORCE
- Approx. 27% of total workforce.
- Thailand, Indonesia, Sri Lanka, India, the Philippines, mainland China and Malaysia are the main source countries.
- About 55,000 professionals.
- Participation of the local female labour force rose from 37% to 51% during 1976-1997 due to imported domestic help.
Source: Hui, Weng-Tat (1998), The Regional Economic Crisis and Singapore, Asian and Pacific Migration Journal.
In the case of Malaysia, it is estimated that there are about 1.2 million migrant workers. Of these, Indonesia alone accounted for about 700,000, followed by Bangladesh with about 300,000. Unlike Singapore, Malaysia has not formulated a coherent migration policy, but highly skilled immigration is encouraged.
Figure 15: Table on Malaysia workforce. MALAYSIA: FOREIGN WORKFORCE BY OCCUPATION AND NATIONALITY, 199
Source: Kassim, A. (1998). Paper presented at the Technical Symposium on International Migration and Development, The Hague, Netherlands.
Hong Kong is another area of concentration for migrants. Apart from workers from the mainland, workers from the Philippines and other Asian countries totalled 250,000 in 1991, a figure which had doubled by 1998. The small state of Brunei, with a population of just about 300,000, has one of the highest per capita incomes in South East Asia. Its economy is dominated by oil, and Brunei has been used to migrant labour for a long time. While the local population is invariably employed in public offices, migrant labour comprises more than 70% of the private sector work force. Thailand is another country that has become both an immigrating and an emigrating country. During the 1980s, it was an important exporter of labour to the Middle East, particularly to Saudi Arabia. However, from the 1990s on, migration from Thailand has shifted towards Taiwan, Hong Kong and Singapore. Migration into Thailand is a recent phenomenon, with people coming mainly from Myanmar, Cambodia and Vietnam. The migrant workforce was 67,000 in 1993 and rose to 270,000 in 1997. Thailand was one of the countries seriously affected by the Asian economic crisis. In 1998, unemployment affected nearly 8.5% of the work force, i.e. about 2 million workers. The crisis had its impact on migrant workers as well. However, as the economic crisis eased, some of the resulting restrictions were slowed down. As for labour-exporting countries, Indonesia, because of its large population, continues to be an important source of emigration to Asian destinations as well. During the 1980s, much of this migration was to the Middle East. Malaysia is the most important destination for Indonesian migrants, with annual figures exceeding 300,000. In the Middle East, Saudi Arabia continues to be another important destination, with an annual flow of about 116,000.
The Philippines has long been regarded as a major source country. The Middle East, of course, has been an important destination, with about 200,000 to 250,000 Filipino workers deployed annually since 1984. But other countries in Asia have also become important destinations, with a 1997 deployment level of 235,000. The Filipino population resident in South East Asia is estimated to be about 350,000. Hong Kong alone has a large proportion. Though the region is still to recover fully from the economic crisis, migrants have been less affected than local workers, mainly because they have occupations avoided by local workers as being 'dangerous, demanding and dirty'. In reality, these jobs can be characterized as poorly paid, insecure and boring. So even in times of economic crisis, they serve as a work platform for international labour.
SOUTH ASIA
Both voluntary and involuntary movements in South Asia have been on a large scale. As shown in an earlier table, the combined population of India, Pakistan, Bangladesh and Sri Lanka was 780 million in 1975. Today it is about 1,362 million. If Nepal is added, the figure goes up by another 25 million. Fertility rates have always been high and population growth across the subcontinent considerable. The last two decades have seen strenuous efforts in population planning, yielding significant results, but the population remains large. Economic disparities within the region as well as within the countries have been sharp, and the share of agriculture in GDP has been declining compared to services and manufacturing. It is therefore to be expected that movements within and across the borders will be significant in absolute numbers, though small when compared to the population base. Adding involuntary migrants, migration within South Asia has exceeded 30 million in the past 50 years. Given such a scale, the causes and categories of migration cannot be precise. Ghosh has attempted a useful categorization, covering imperial surgery, failed states, military interventions, ethnic conflicts, statelessness, etc. (see Figure 16 for the chart on the South Asia overview). The single most important act of surgery (though Ghosh acknowledges this is too benign a word to be used) has been the partition of India and the carving out of Pakistan (see Figure 17 for the South Asia map). The aftermath was a harrowing experience. Within the space of a few months, some 13 million people moved between the two new countries, roughly half and half. Another million are estimated to have lost their lives in the communal riots, close to 100,000 women were abducted or raped, several thousand families were separated and properties were lost and looted.
Figure 16: South Asia overview - some causes and categories.
IMPERIAL SURGERY: Partition of India and Pakistan (1947); some 13 million moved between the two.
FAILED STATES: Pakistan loses its eastern part in 1971; 10 million estimated as fleeing to India.
MILITARY INTERVENTIONS: Afghanistan conflict in 1979; 3.5 million moved out, mostly to Pakistan; by 1997, 2.6 million had returned with UNHCR help; Tibetan refugees (about 100,000) in India.
ETHNIC CONFLICTS: Sri Lankan Tamil conflict since 1983; 164,000 moved into Tamil Nadu.
STATELESSNESS: People declared "non-citizens": Tamils in Sri Lanka (975,000), Shastri-Sirimavo Pact of 1964, 375,000 to be granted Sri Lankan citizenship, the rest Indian; Biharis in Bangladesh (235,000), Pakistan HRC report of 1995, 'Geneva' camps across the country; Indians in Myanmar (1960s), about 150,000 had to leave.
OPEN BORDERS: India-Nepal; Nepali-born, about 3 million in India.
Causes common to migration and refugees; large volumes; demographics operate during both peace and conflict.
Source: Partha S. Ghosh, Unwanted and Uprooted, Samskriti, Delhi, 2004.
Figure 17: Map of South Asia.
The agony of displacement and shift did not cease following these months. Over the next two decades, as strife continued in different parts and the two governments tried unsuccessfully either to regulate or to stem the flow, another 5.5 million are estimated to have moved. Altogether, about 18 million people were thus involved in one of the largest exoduses of people in human history. Pakistan lost its eastern wing in 1971 and Bangladesh emerged as an independent country. In the struggle for liberation, another 10 million are estimated to have fled from East Pakistan to India. To add to the travails of the sub-continent, the military intervention in Afghanistan and ethnic conflicts in Sri Lanka brought more refugees into Pakistan and India. It is estimated that South Asia has one of the largest concentrations of refugees in the world. Pakistan has refugees from India, Bangladesh and Afghanistan. In addition to these, India also has refugees from Tibet, Nepal and Myanmar. "Statelessness" is another dimension of migratory movements in South Asia. From time to time, governments in the region have declared categories of people as "non-citizens". The Tamils in Sri Lanka, Biharis in Bangladesh and Indians in Myanmar are examples. More than 1.3 million people are affected. An important geo-political aspect of South Asia is that much of India's boundary with Pakistan, Nepal, Bhutan and Bangladesh is virtually borderless. The Nepal-India border is officially open, permitting movement of nationals of both countries with minimum checks and regulations, but the other borders are supposed to be international, requiring valid travel papers. However, the terrain is mostly desert in the west and riverine in the east, rendering fencing expensive to build and maintain. Especially in the case of Bangladesh, the demographic and economic comparisons with India are such as to make migration to India very attractive.
Figure 18: India-Pakistan-Bangladesh flows.
- 13 million moved, roughly half and half, between India and Pakistan.
- Outflows to and from East Pakistan: 5 million during the 1950s; 1.7 million during the 1960s.
- War of Liberation (1970-71): 10 million refugees into India; number of returnees not established.
- Birth of Bangladesh and after: high demographic pressure in Bangladesh (800 per sq. km; 300 in Assam); high fertility, high density, limited livelihood opportunities, in contrast to India; border all but 50 km demarcated, but riverine terrain, difficult to fence and maintain, a border only on paper; 13 million estimated as inflow into India; highly contested figures, virtually impossible to verify.
- India-Pakistan: 1,817 km border, again on paper, but terrain difficult and inhospitable; movement small and seasonal.
Because it is impossible to physically distinguish a Bangladeshi from an Indian, the numbers of people moving across are very difficult to estimate. Indian officials and researchers have attempted such estimates from time to time, which are either flatly denied or hotly contested in Bangladesh. One estimate places the figure at over 13 million since the birth of Bangladesh. About 5.4 million among them are reported to be in West Bengal, another 4 million in Assam, and 0.8 million in Tripura, all being border states. The balance is to be found in other states like Maharashtra, Rajasthan, Bihar and Delhi. Various reasons have been advanced to explain this large presence, such as the high population density in Bangladesh, which has about 800 persons/sq. km compared to 300 or less in Assam; better employment opportunities and better wages in India; the push factor of a large labour force in Bangladesh; the decline of traditional manufacturing industries like jute; and the increasing demand for unskilled and semi-skilled jobs in some parts of India, like Delhi, Maharashtra and Gujarat, looking for cheap labour. It is also believed that local political interests in the Indian border districts encourage vote banks of migrants and assist them in securing food rations or voter ID cards, which render any subsequent action against illegal migrants impossible. The 1971-81 Indian Census figures revealed that in eight districts of West Bengal bordering Bangladesh, population grew by 30%, as compared to 20% or less in other districts as warranted by natural growth. The 1981-91 figures confirm the position. The recently held 2001 Census also shows that in seven of the eight districts, decadal growth exceeded the state average. Migration-wise, the border zone, including Assam and Tripura, has become highly sensitive, resulting in several controversies and struggles in the past two decades, including some which were violent. On some occasions the census could not take place in certain areas; electoral rolls had to be deferred or cancelled.
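The census comparison above is, in effect, a residual calculation: decadal growth in excess of a natural-increase benchmark is read as net in-migration. The sketch below only illustrates that arithmetic; the district population used is an invented figure, not census data.

# Crude residual method implied by the census comparison: growth above the
# natural-increase benchmark is attributed to net in-migration over the decade.
# The base population below is an invented example, not a census figure.

def implied_net_migration(base_population, observed_decadal_growth, natural_decadal_growth):
    """Net in-migration over a decade implied by growth in excess of natural increase."""
    return base_population * (observed_decadal_growth - natural_decadal_growth)

# A hypothetical border district of 2 million growing 30% against a 20% natural benchmark:
print(implied_net_migration(2_000_000, 0.30, 0.20))   # -> 200000.0

Estimates built up this way are sensitive to the choice of benchmark, which is one reason the aggregate inflow figures are so hotly contested.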
Figure 19: Map of the migration-sensitive zone.
CURRENT AND FUTURE OUTLOOK
Traditionally, many of the countries in Asia have been regarded as labour-exporting countries. Even in the case of Japan, between 1868 and 1942 about 750,000 Japanese workers and their families migrated to work in Latin America, the US, Canada, etc. The Republic of Korea was also a major exporter of labour during the 1960s and 70s, when two million workers left for employment abroad. China has been another significant exporter of labour. However, as demographic and economic conditions changed, some of the labour-sending countries became labour-receiving countries, such as Japan, Korea, Hong Kong, Thailand, Malaysia, Singapore and Taiwan (see Figures 20 and 21 for tables on labour exporting and receiving countries in East Asia). The distinction between labour-exporting and receiving countries became blurred.
Figure 20
DESTINATIONS OF LABOUR EXPORTING COUNTRIES OF EAST ASIA
Figure 21
LABOUR RECEIVING COUNTRIES OF EAST ASIA (apart from mainland China)
- Thailand: transition from emigration to immigration; 67,000 migrants in 1993, 270,000 migrants in 1997; long land border with Myanmar, Cambodia and Vietnam; the economic crisis of 1998 brought some repatriation, but only temporarily. "There is nothing as permanent as a temporary worker."
At the beginning of the present century, the volume of migration from and within East Asia is nearly four million. Battistella estimates that in some countries migrant labour constitutes a significant percentage of the work force, accounting for 27% in Singapore, 11% in Malaysia, 9% in Hong Kong and 4% in Taiwan. Given the large volume, illegal or undocumented migration has been a constant companion of authorised migration (see Figure 22 for the table on unauthorised migrants). In the case of Thailand, for instance, which shares a long land border with Myanmar, Laos and Cambodia, it is estimated that for every authorised migrant there are at least five unauthorised workers. Data on legal and illegal migration, however, are not easy to compile, because the policies and regulations change from time to time. The norms and regulations about who is authorised and who is unauthorised also change.
Figure 22: Unauthorised migrants.
AUTHORISED / UNAUTHORISED MIGRANTS IN EAST ASIA
The regulatory regime with regard to migration shows some broad similarities among the labour-exporting countries in formulating policies and measures to promote employment, protect emigrants and also maximize the development impact of migration. Abella has broadly categorised these measures; they are listed in a table for labour-exporting countries (see Figure 23). Receiving countries, on the other hand, follow a generally restrictive policy, encouraging highly skilled workers but discouraging semi-skilled or unskilled workers (see Figure 24 for the table). The political and social mindset behind these measures reflects a mismatch between these policies on the one hand and economic needs and labour requirements on the other. Though migrants may represent only a proportion of the total labour force, their ethnic and cultural characteristics provoke significant resistance to their inflow. Some countries permit migrant workers only from selected countries. Japan, for example, experimented with the recruitment and import of about 300,000 ethnic Japanese born overseas, mainly from Brazil and Peru. These people, called 'Nikkeijin', looked Japanese, but their behaviour patterns were more Latino. Many of them were employed in the automobile industry. This rather unique effort towards reverse migration has been the subject of separate sociological studies.
Figure 23
Figure 24
MIGRATION POLICIES - LABOUR RECEIVING COUNTRIES
Malaysia: Unskilled/semi-skilled limited to a few sectors and nationalities; tough border controls.
Singapore: More open for high-end and more restricted for low-end migrants; tough border controls.
Thailand: Alien employment allowed in all but 39 sectors; undocumented workers from Myanmar, Vietnam and Laos permitted in some jobs and areas; tough border controls for others.
Japan: Highly restrictive migration policy; foreign workers only as a last resort, no unskilled labour; temporary work/trainee system in some segments; "Nikkeijin"; tough border controls.
Hong Kong: Highly restrictive; quota system for expatriates and professionals; no quota limit for foreign domestic workers; daily quotas for workers from the mainland.
Korea: Highly restrictive; guest worker scheme in some segments.
Taiwan: Foreign workers from limited countries; more open for professionals.
International Convention on Migration adopted by the UN General Assembly in 1990: ratified by many, followed by few.
It is generally believed that economic conditions affect migration, but in the Asian case it is seen that even the economic crisis of the 80s did not affect migration flows for any length of time. This could be because in certain segments of the labour force the demand does not vary much, and even at times of crisis migrants are prepared to adjust to lower wages. An expression frequently used, therefore, is "there is nothing as permanent as a temporary worker". The economic effects of migration within Asia are apparent so far as labour-exporting countries are concerned. Remittances from migrant workers continue to be the most visible feature. To illustrate, during the 1990s the Philippines, Thailand, China and Indonesia received as much as US$80 billion in remittances from migrant workers. As of 2000, these remittances accounted for 7.6% of GNP in the Philippines, 3.8% in Bangladesh and 1.2% in Thailand. Even in the case of a large economy like India, remittances from migrant workers accounted for 2.5% of GNP as of 1999. A less noticed aspect is the significant improvement in the economic performance of labour-receiving countries. By permitting worker needs in segments of the economy to be filled by migrants, these countries have been able to overcome problems of adjustment in the labour market. As a U.S. National Academy of Sciences study points out, "The gains to the domestic economy come from a number of sources. On the production side, immigration allows domestic workers to be used more productively, facilitating specialization. Specialization in consumption also yields a gain." As a result, the overall economic performance of labour-receiving countries is enhanced. Singapore, for example, whose GDP was only 30% of Japan's in 1980, increased to 62% by 2000. Similarly, the GDP of Korea and Hong Kong, which was 16% and 30% of Japan's GDP in 1980, rose to 26% and 55% respectively as of 2000. Looking towards the future, it appears migration will be an enduring and established social and economic phenomenon in Asia. Demographic pressures alone will be an important determinant. A UN report on replacement migration estimates that over the next five decades, as birth rates fall below the required replacement rates, Japan
would need more than 300,000 migrants every year to maintain a constant total population. Singapore may need to boost its work force by about 19,000 per year to maintain its growth rates. The Republic of Korea may need at least 30,000 per year. The phenomenon is not unique to Asia. Western Europe's working population will fall by 2 million over the next 20 years, and it will have to double its intake of migrants. Though it is unlikely that migration flows will reach such volumes, the demand from the economy will sustain migration at least at current levels. The Asian experience also shows that migrants do not always compete with nationals in the labour market. It is also observed that even when conditions become adverse due to economic or political factors, migrants tend to remain in the destination countries.
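The replacement figures quoted above follow from simple demographic book-keeping: once deaths exceed births, net immigration must cover the gap if total population is to stay constant. The sketch below shows only that crude balance; the birth and death rates in it are invented for illustration, and the UN report itself works with full age-structured projections rather than a single equation.

# Minimal sketch of the book-keeping behind "replacement migration" estimates.
# The rates below are invented for illustration; the UN report uses full
# age-structured projections, not a single crude balance like this one.

def migrants_needed_thousands(population_millions, crude_birth_rate, crude_death_rate):
    """Annual net migration (in thousands) needed to hold total population
    constant, given crude birth and death rates per 1,000 population."""
    natural_change = population_millions * 1e6 * (crude_birth_rate - crude_death_rate) / 1_000
    return max(0.0, -natural_change) / 1_000  # shortfall expressed in thousands

# Illustrative only: a 127-million population with 8 births and 10.5 deaths
# per 1,000 would need roughly 300,000 net migrants a year just to stand still.
print(migrants_needed_thousands(127, 8.0, 10.5))   # -> 317.5

Because ageing widens the gap between deaths and births over the projection period, the age-structured calculations arrive at annual requirements of this order even where today's deficit is still small.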
Figure 25: Table on Current and Future Outlook.
CURRENT AND FUTURE OUTLOOK
Economic benefits apparent:
- US$80 billion during the 1990s from the Philippines, Thailand, Indonesia.
- 7.6% of GNP for the Philippines, 3.8% for Bangladesh, 2.5% for India.
- Singapore's GDP rose from 30% of Japan's in 1980 to 62% in 2000; Korea's from 16 to 30%; Hong Kong's from 26 to 55%.
Labour force needs are compelling:
- Japan needs 300,000 migrants per year to maintain a constant population; Singapore 19,000.
- Economic demand will keep up the volume at least at present levels.
Barriers to goods and capital are being removed, but labour migration remains "solidly national".
Despite all the trends towards globalisation and economic liberalisation, resulting in the removal of barriers for goods and capital, migration policies tend to be a solidly national affair (Abella). The international covenants on economic, social and cultural rights have not been adequate in dealing with the problems of migrants. The UN General Assembly resolution of December 1990 adopted the International Convention on the Protection of Migrant Workers and Families. The Convention, ratified by many countries by 2003, provides some rights to migrant workers, including irregular migrants, to liberty, to personal security and to due procedures in the event of arrest or imprisonment. The UN has also appointed a Special Rapporteur to monitor the human rights of migrants, and the ILO conventions provide further safeguards. Yet for most receiving countries, the regulation of migrants continues to be ad hoc, short term and unfavourable to semi-skilled or unskilled migrants. Bilateral trade and other agreements appear to be the preferred approach. This is an important feature of Asia, where attempts have been made from time to time at regional cooperation to regulate the process. Brunei, Indonesia, Malaysia and the Philippines have established BIMP-EAGA for this purpose. A wider coalition for Asia-Pacific economic cooperation, assisted by the International Organisation for Migration, adopted the Bangkok Declaration focusing on irregular migration. While countries of origin such as Indonesia, the Philippines and, recently, Myanmar depend on remittances as an essential source of foreign exchange, receiving countries such as
Singapore, Malaysia and Thailand also rely on migration for the functioning of their own economies. Migration is also a significant factor in the integration of a region with a diversity of history, traditions and cultures but with increasing economic relations and mutual interest. In recent years, data, analysis and exchange of information between international organisations and researchers have been considerable. The International Organisation for Migration, ESCAP and bilateral agencies like DFID have been very active in this regard. There are also individual organisations like the Scalabrini Institute of Migration Studies in Rome and various university centres in Asia. However, new information appears to prompt country governments to reinforce, rather than relax, their migration policies and the general tendency to prioritize high-end migrant workers and minimize low-end ones. As one scholar has observed, migration policies are a matter of 'benign neglect' by most Asian governments, so long as they can keep the flows at the present levels.
REFERENCES
1. World Migration Report 2000: International Organisation for Migration (IOM).
2. Human Development Report 2004: UNDP, Oxford.
3. Facts and Figures on International Migration, IOM, in Migration Policy Issues: March 2003.
4. Asian Labour Migration: Pipeline to the Middle East, editors Fred Arnold and Nasra M. Shah; Westview Special Studies in International Migration, Honolulu.
5. Labour Migration in East Asian Economies; Manolo I. Abella; International Labour Organisation, paper submitted to the Annual Bank Conference on Development Economics, Brussels: May 2004.
6. Asia's Regional Labour Tensions Grow; John Berthelsen; Asia Times Online: 2003.
7. India and Bangladesh Migration Matrix; Jyoti Pathania; South Asia Analysis Group; paper No. 632; March 2003.
8. Migration Patterns and Policies in the Asian and Pacific Region; UNESCAP, Bangkok; Asian Population Studies Series No. 160, 2003.
9. To the Gulf and Back: Studies on the Economic Impact of Asian Labour Migration, editor Rashid Amjad, ILO, Delhi, 1989.
10. Unwanted and Uprooted: A Political Study of Migrants, Refugees, Stateless and Displaced of South Asia; Partha S. Ghosh; Samskriti, Delhi, 2004.
11. Migration, Development and Pro-poor Policy Choices in Asia: conference organised by DFID in Dhaka, Bangladesh, June 2003. The following papers for this conference have been accessed for source material: Migration and migration policy in Asia: a synthesis of selected cases, Ronald Skeldon, University of Sussex, UK; An overview of migration in India, its impacts and key issues, Ravi Srivastava, Jawaharlal Nehru University, New Delhi; China migration country study, Huang Ping, Chinese Academy of Social Sciences, Beijing, and Frank N. Pieke, Institute for Chinese Studies, University of Oxford; A review of migration issues in Pakistan, Hans Gazdar, Collective for Social Science Research, Karachi; Migration as a livelihood strategy of the poor: the Bangladesh case, Tasneem Siddiqui, Dhaka University, Bangladesh; International migration policies in Asia, Clare Waddington, Independent Consultant, UK.
MIGRATION AND GLOBALIZATION
GERALDO G. SERRA
Universidade de São Paulo, São Paulo, Brazil
ABSTRACT
This paper presents migration as an aspect of globalization and shows that, while the international circulation of information, capital and goods is generally considered very positive and an indication of progress, migration is seen as a difficult problem to be controlled. Migration is presented as something inherent to human history from the very beginning and a part of globalization. Modern migration is a form of economic and social promotion for the migrant, but even so it can deplete human resources at their origin. Finally, the paper presents a general view of the Brazilian viewpoint on international and domestic migration.
GLOBALIZATION: A MANIFOLD CONCEPT
Nowadays, "globalization" is a polysemic word used in many different senses. In this text we employ the word to refer to a process based on the relatively recent technological developments in the communications and transportation fields, and characterized by a growing international and global movement of information, goods, capital and labor.
The technological basis of globalization
The infrastructure basis of globalization is formed by developments in communications and in the transportation of people and goods that have made them much easier and cheaper than in the past. Radio and TV were important contributions to the diffusion of ideas and information, but satellites and the global web of computers have made communication costs independent of distance. The incredible spread of mobile telephones all over the world has improved the process to an even greater extent. Fast and cheap transportation of people and goods is the second most important development in creating the infrastructure basis of globalization. Airlines cross the skies for lower and lower prices. Therefore, it has never been so easy to move from country to country and, inside the same country, from one province or state to another, from one town to another.
The information flow
Nowadays information flows instantly through the global space, making control by governments and international entities very difficult, and resulting in nervous market reactions to any changes in the political and economic scenario, as well as cultural clashes. It promotes and motivates capital, goods and labor flows.
The movement of capital
The first obvious consequence is the permeability of national borders to money and values. Most governments have reluctantly accepted their inability to control the movement of investments and values across borders, which today is a reality all over the world. Instant information makes the flow of speculative capital continuous throughout world markets.
Illegal paybacks, corruption funds and laundered money have easily found their way to banks and tax havens where, protected by numbered accounts, they finance drug traffic and terrorism. Of course, the total amount of this illegal capital flow is unknown, but it is estimated at hundreds of billions of dollars. A blanket of hypocrisy and cynicism covers the entire operation. In terms of the movement of capital and information, globalization is heralded by developed countries as the beginning of a new era of progress and peace. In terms of the movement of goods, it is something that needs strong international regulation in order to preserve their own selfish interests, making the economic differences that stimulate migration even worse. Yet the movement of labor is considered a very risky issue which requires urgent and strict rules and regulations enforced manu militari, building new "walls of shame" if necessary.
MIGRATION IN THE TIMES OF GLOBALIZATION
A specter is haunting the world - the specter of uncontrolled movement of people from poor countries to the richer countries and from rural areas to cities. However, there is nothing more rational and natural than these movements, both nationally and internationally. After all, migrants are only pursuing happiness... Indeed, throughout the territory, income gradients are formed by economic development, creating large areas where unemployment, underemployment and misery predominate, as well as a few centers of economic dynamism desperate for cheap labor: demographic movements are therefore no different from water flowing from highlands to lowlands, from poor countries and regions to richer ones. The consequence of these gradients of economic development and job opportunities is a flow of legal and illegal migrants from Latin America, Africa and some poor Asian countries to the US, Canada and Europe, but also from poorer to richer regions inside many developing countries. This dynamic is particularly noticeable in emerging economies where many social and economic changes are taking place.
Migration in the past
A few years ago, descendants of Italian immigrants organized a commemoration of 150 years of Italian immigration to São Paulo and adopted as their slogan "Siamo tutti oriundi", meaning that not only they, but all of us, came from somewhere. Of course, in the case of São Paulo this is particularly true. But is it true only for the cities of the American continent? Surely not! Asia, Europe, Africa and Oceania have been the scene of many migration flows which have completely changed the ethnic structure of every region, before and after the formation of national states. Slave traders sent around 20 million people from Africa to the American colonies during a period of almost 400 years. The last century saw entire populations changing home, forced by political turmoil, wars and misery or attracted by the hope of a brand new life elsewhere. After the war, a new wave of refugees and migrants left their homes. At least 50 million people had to leave their homes during the 20th century. So migration is not a new issue at all! It has been a part of world economic and social dynamics since the very beginning. Indeed, humanity began with nomads seeking food, and many centuries passed before the first settlers appeared. So what is new? Novelty resides in size, pace and awareness.
Modern migration
The number of people living outside their countries of birth has grown from approximately 80 million to 185 million during the last 30 years (United Nations, 2002). Both nationally and internationally, labor movements are promoted by expulsion and attraction factors. Expulsion factors are what make an emigrant move from her or his homeland. The Earth does not have the same level of resources everywhere; on the contrary, inequalities are the rule. Although many countries with very few natural resources manage to maintain a high standard of living for their people, underdeveloped economies and the lack of natural resources very often mean poverty and very high levels of unemployment and social and economic marginality. These are the expulsion factors that put migrants on the road. The main attraction factors are also economic: first of all job opportunities, and then other aspects related to the standard of living. Distance, or the ease of entering a country, also plays a role in the destination decision. Until the time of globalization, migrants had very bad or imperfect information about opportunities in other countries, and migration presented many risks. Nowadays, in a world of global and real-time information, people know very well which countries have job opportunities. Thus, if their movement is started by expulsion factors, the direction of this movement is determined by very good information.

Another aspect is the trend to concentrate. Domestic flows are eminently rural-urban and from small towns to larger cities. Internationally, the trend is also to emigrate to the main cities and capitals of developed countries. As a consequence, migration is the main factor in urban growth and in the formation of megacities and large urbanized areas around the world. The populations of most developed countries are aging and presenting low fertility rates, making the influx of migrants very important to maintain workforce numbers and make social services and retirement payments feasible. The awareness of the importance of immigration is clearer and clearer in certain countries.

Therefore, the world economic landscape is seen as a sort of topological surface reflecting gradients of economic underdevelopment, job opportunities and hope. Migrants move like a liquid mass: they leave places with high levels of unemployment and poverty and look for a brand new life in the green valleys of prosperity and job opportunities. Either from their viewpoint or from the viewpoint of the world economy, nothing seems more natural and positive. The most important targets for immigrants, in terms of the proportion of immigrants to the native population, are Australia, Canada, Sweden, the USA and the Netherlands. On the other hand, hundreds of thousands of Filipinos and Mexicans live outside their borders, mostly in the United States.¹ There are more than 22 million refugees and other forcibly displaced people (US Committee for Refugees, World Refugee Report, 2002). Criminal "trafficking" moves around 4 million people per year (IOM 2002). Globalization therefore has a powerful influence on the increase of people movements around the world. Except for criminal trafficking and warfare, migration seems to have a positive effect both on migrants' personal lives and on economies. However, a number of studies reveal that emigration is selective, i.e., emigrants seem to be the more skilled sector of the population, which could have a long-term negative effect on the economy of the country they come from.
REACTIONS AND CONSEQUENCES
Many governments of underdeveloped or developing countries see migrants as an important source of hard currency for their debilitated economies. Indeed, migrants send their families billions of dollars each year. "Last year, immigrants working in the United States sent $31 billion to relatives in Latin America - more than $13 billion to Mexico alone. Salvadorans, Dominicans and Guatemalans send home billions more."² Every year Brazilians send around 3 billion dollars to relatives in Brazil. "The total amount of money flowing from developed nations to developing nations through remittances has nearly quadrupled in the last seven years."³ Formerly, although immigrants occasionally sent money to relatives, it was not such a large amount and not on such a regular basis.

But not everybody is happy with these arrangements! Some people see migration as linked to terrorism, particularly migration of Muslims. Cultural clashes are taking place, as is absolutely natural. Integrating minorities into society is one of the more difficult problems facing European social development. Improvements in border management, stricter regulation of legal immigration, struggles against illegal immigration and pressure on Third World countries are not enough to confront the problem. Integration needs to be carried out, because otherwise changing gender roles, low fertility rates and an aging population will not let the region maintain its pace of development. A sound policy for assimilation and adaptation, providing information, complementary education and help in finding jobs, is needed. A common prejudice against immigrants is that they put pressure on health and social services without making any contribution to the respective funds. The rationale is that immigrants are not taxpayers but put a severe burden on social services. In fact, the situation described is true of any unemployed person, whether immigrant or of local origin, and thus the problem is not specific to immigrants.

Cultural consequences
Peripheral areas are more susceptible to cultural influences from the centers where innovation is being created and introduced. More than 50% of all web sites, for instance, are located in the US, and nearly 72% of all web sites use English as their language. On the other hand, although we still have around 6500 languages, estimates are that we are losing two of them every month⁵, together with two world views. Of those 6500 languages, 1700 have fewer than 10 speakers each. Ostler calls "the triumphalism of Empire" the attitude of those who see this loss as representing the spread of civilization, a belief shared by Spain between the 15th and 17th centuries, and by England in 19th-century India. Of course, language is probably the most important cultural aspect, but there are others, like movies, pop music, fast food and clothes, where global uniformity and conformism are the rule. Therefore, in terms of culture, globalization is not only the discovery of new and different arts and expressions, but also cultural colonization and loss of diversity.

Social consequences
In most cases, the social consequence of the migration process for the migrant is economic improvement. Migrants are fleeing from unemployment or poor economic and social conditions, and therefore even menial jobs seem wonderful
opportunities. But there is some evidence that migrants were previously among the best part of the work force in their original country or region. This implies a loss of valuable workers for those countries and regions. As a consequence, migration worsens the social and economic situation of an already depressed and underdeveloped region and improves it in an already developed region, city or country.

Demographic consequences
The ethnic and cultural changes provoked by immigrants depend on their origin and on the ethnic characteristics of the local population. In countries like Brazil, and particularly in big cosmopolitan cities like São Paulo, although immigrants can have some cultural impact, it is not perceived as something unusual or difficult to accept.

EXTERNAL AND INTERNAL MIGRATION IN BRAZIL

The historical importance of immigration
Brazil, like all countries in the American continent, is a land of immigrants. In colonial times, most immigrants were Portuguese, Spaniards, Dutch and Africans. The second half of the 19th century saw the arrival of Italians and Germans. At the end of the 19th century and in the first half of the 20th century, immigrants came from a large number of countries, but with a majority of Japanese, Italians, Poles and Lebanese. Other groups came from Central Europe and Syria. From 1884 to 1933, 4 million immigrants disembarked in Brazil. A new wave arrived during and after World War II, including many Jews from Germany and other Central and East European countries. The most recent immigrants have come from Korea and China. Approximately the same number of African slaves⁶ entered Brazil over four centuries.

Internal migration and social improvement
However, in order to understand contemporary Brazilian reality, domestic migration is the major issue. First of all, rural-urban migration has transferred more than 47 million inhabitants to towns and cities over the last 60 years. The following graphs show the dramatic rate of this migration.
[Two graphs appear here: Brazilian population and Brazilian population (%), by census year, 1940-2000.]
A large part of this migration has been concentrated in the big cities of the southeastern region, particularly São Paulo and Rio de Janeiro, and this population flow has resulted in intense population growth in these cities. From a small city of 1.3 million inhabitants in 1940, São Paulo grew to a metropolitan region of more than 17 million inhabitants in 2000. The last decades saw a new wave of migrants to the western states.

Rapid urbanization and infrastructural deficits
Such a powerful and dramatic population dynamic radically changed the character, ethnic composition and cultural aspects of the entire country, and especially of these large urban areas. Slums and "favelas", inhabited by a marginal population, grew up around the old city centers. During periods of economic boom it was possible to assimilate these newcomers, but from 1980 onwards economic stagnation could offer these people no hope or prospects. The result is informal work, marginality and criminality. Towns and cities, confronted with increasing social demands for new hospitals, schools and social centers, tried to pay for them by taking on heavy debts. Nowadays, pressed by the federal government, they are paying off their debts and have no money for new social investment.

Emigration: a new reality
A new phenomenon is confronting Brazil: emigration. Indeed, more and more Brazilians are looking for a better fate in other countries, particularly in Europe, the U.S. and Japan. For many years they looked for jobs and opportunities throughout Brazilian territory, moving to other states and towns. Now, facing economic stagnation, they have to look elsewhere.

Rural-urban migration
Other social movements that result from this situation are the MST, the initials of the "movement of the landless", and also the "movement of the homeless". The first is a consequence of two main elements: improved productivity in agribusiness, which cut many jobs; and improved industrial productivity associated with economic stagnation, which also eliminated jobs. In recent years, these jobless workers have given up migrating to cities because they know the high level of unemployment there. They are being organized by the MST and other social movements that promote the invasion and occupation of large farms.
Causes, consequences and solutions
Of course it is not pleasant to leave one's homeland, relatives and friends. When somebody decides to move elsewhere, he is being pressed to take this decision by very powerful reasons. Apart from political pressures or wars, which do not exist in Brazil, the main reason is the lack of jobs and prospects. The direction of the movement is determined by the level of information on job opportunities. While we look for a feasible plan to contain these movements, the situation goes from bad to worse, because productivity gains continue to cut jobs in agriculture and in industry. Efforts to promote the service sector, mainly tourism, do not have enough potential to absorb such a mass of unemployed workers. Any solution implies the reduction of personal income differences and the creation of a certain level of development at the origin. It is not necessary to make income and development completely equal in order to diminish or stop migration, because there is always a certain difficulty to be overcome by the migrant. It is not easy to implement such a program, because many obstacles must be overcome, from differences in natural resources to political questions.

CONCLUSIONS

Migration as a human promotion process
Migration is not a problem, but a solution, both for the migrant and for economic activity at the destination. Historically, migration has been a strong incentive for development and cultural change. At the same time, globalization makes international borders permeable to capital, allows people to obtain current information on social and economic conditions all over the world and makes it easier to move to other regions and countries. It is understandable that local people are initially afraid of a foreign "invasion" of their towns and worried about their own jobs. But, in most cases, a consistent policy to receive immigrants and help the assimilation process could make things easier.

How to reduce migration trends
The migration flow is in direct proportion to the difference in job opportunities, personal income and economic development between origin and destination. Therefore, international support for underdeveloped countries, particularly in activities that generate more jobs, is an important deterrent to the migration trend. In addition, emergent economies need better trade conditions, with more open markets for their products.

A scientific approach to migration
The migration phenomenon is a very complex system and process: a large set of variegated areas linked by a large and diverse set of links, across which we can observe a large number of different flows. A characteristic of complex systems is the permanence of complexity on different scales. Indeed, whether we observe the system on the scale of a global satellite image or on a restricted regional scale, it is still complex. Even when sociological analysis concentrates on the specific cases of migrants, the phenomenon maintains its characteristic complexity. Although a mathematical analysis would only produce a quantitative description of a very complex socio-economic process, it would greatly help to have a general framework, with a better knowledge of the characteristics of migration flows, origins and destinations. This framework could help us to better understand the gradient
surface, with its "ridges of poverty" and its "valleys of prosperity", between which "creeks" and large "rivers" of migrants flow. This paper points to a large cooperative research program bringing together research centers from developed countries and from emergent or undeveloped countries.
REFERENCES
1. http://www.ncir.org/about immigration/world map intro.htm
2. Sanchez, Marcela. "Desde Washington - Immigrants' money could serve hemispheric cohesion", WashingtonPost.com, Thursday, April 1, 2004; 9:39 PM.
3. New York Times. "Dollars without borders". May 13, 2004.
4. O'Neill et al. "Trends in Evolution of the Public Web", D-Lib Magazine, April 2003.
5. Ostler, N. "Endangered Languages - Lost Worlds". Contemporary Review, Dec 2001.
6. Alencastro, L. F. de. "O trato dos viventes". Companhia das Letras, São Paulo, 2000.
3. CLIMATOLOGY: GLOBAL WARMING
FROM CURIOSITY TO CONCERN: A CHRONOLOGY OF THE QUEST TO UNDERSTAND GLOBAL CLIMATE¹
JOHN S. PERRY
National Research Council (ret.), Alexandria, USA

The second half of the century recently concluded brought measurements of rapid atmospheric changes, worries about human influences on the atmosphere, projections of troubling future changes in global climate, a myriad of fragmentary hints of global changes actually in progress, and a plethora of meetings, reports, studies, conferences and programs to respond to these challenges. A naive citizen - or non-specialist scientist for that matter - might assume that concern about climate somehow emerged as an unexpected global hazard like AIDS or fast-food restaurants. In fact, humans have pondered about climate for as long as they have pondered about anything at all. Even the dimmest-witted of our distant ancestors must have taken note of the annual march of the seasons, and guided their hunting and gathering by knowledge of climate's regularities. The rise of agriculture surely tuned this understanding far more precisely. Moreover, any degree of wandering over the face of the globe must have revealed that climate varies from place to place. Since the life of our species has spanned at least one glacial cycle, we all may somehow have a deep memory of climate change. Thus, humanity's quest for an understanding of climate is surely as old as humanity itself. In this brief account, I will highlight a few of the many milestones on the long road from our species' first perception of climate through the beginning of our present era of frenetic research and heightened concern. Throughout, the story of climate and mankind has centered on three persistent streams of speculation, research, and concern: the physical explanation of climate, the influences of climate on humans, and the influences of humans on climate.

CLIMATE AND HUMANKIND - AN AGE-OLD PARTNERSHIP
The first documentation of thought about climate dates - like practically everything else - from the Greeks, who saw climate as primarily depending on the height, or inclination (klima in Greek), of the Sun. This first model of climate explained much of the data available to them: it's hot in the summer and in the tropics, where the sun is high; it's cold in the winter and in the north, where the sun is low. A second line of thought attributed to Aristotle held that the air and the climate were also linked to the "vapors" emanating from a country. Interestingly, recent research supports Aristotle's view, suggesting that perceptible human influences on the atmosphere began 5000 to 8000 years ago as early humans cleared forests and developed agriculture (Ruddiman, 2003). Moreover, the Greeks linked climate strongly to human health and national character. Thus, over two thousand years ago the Greeks had a viable climate model, and two hotly debated topics in today's dialogs on climate - human influences on climate and climate's influences on humans - were active topics of discussion. If Aristotle's ghost is perchance
eavesdropping on us today in this former outpost of his Greek world, he will feel quite at home!

This conceptual framework of closely interlinked climatic and human systems simply determined by geography persisted through the darkish ages following the fall of the Greco-Roman world, and well into the "enlightenment" of the 18th century. Both aspects of human-climate interaction were explored. The Abbé Jean-Baptiste du Bos held that the emergence of "genius" in the arts and sciences depended primarily on the nature of the air, soil, and especially the climate of a region. (Aficionados of wine will recognize the concept of terroir so treasured by French vintners to this day.) Hence, the differences in the character of nations corresponded to the differences in their climates. An implication was that colonization of foreign climes by Europeans was fraught with great risks of damage not only to the health of the settlers, but also to their character and culture. These notions, elegantly and forcefully expressed by du Bos, influenced many other writers, most notably Montesquieu and David Hume. The latter, however, reemphasized the Greek notions on the influence of a country's development on its climate. He saw evidence that the climate of Europe had warmed significantly since ancient times, and attributed it to the gradual advance of agriculture across the continent.

The colonization of North America brought these concepts into sharp focus. Lying mostly at lower latitudes than Europe, these virgin lands were initially expected to have warm and gentle climates conducive to wine, sugar, olives and spices. Instead, the early settlers found the climate far more severe than suggested by the Greek klima-based model. Survival, rather than over-abundant harvests, proved to be a challenge. Januaries were quite different in Boston and Roanoke than in London and Nice, Julys hotter and wetter, and storms far more severe. Moreover, these new lands were covered with dense and fearsome forests, inhabited by strange bronze-colored and often unfriendly savages. Surely the severe climate, the hostile landscape, and the uncivilized population - all so different from settled and serene Europe - were somehow interlinked. If a primitive landscape creates a primitive climate, and a primitive climate creates a primitive populace, then it should be expected that the process of civilization should favorably transform all. Indeed, many European and American writers confidently expected that the transformation of chaotic nature into orderly pastoral settlements would improve both the climate and the people who lived in it. In 1771, for example, Harvard scholar Hugh Williamson held that the clearing of New England forests had significantly warmed the climate, and predicted that this would enable American civilization to compare favorably with the great republics of the past. Thomas Jefferson agreed fully, and proposed a system of climate measurements to document this benign change. Not all scholars agreed, of course. Studies based on observations by the US Army found no evidence for correlation between long-term climate trends and the expansion of American settlements. Nevertheless, this conviction that the evolution of climate and human society were closely interlinked was probably the strongest motivation for the notable expansion of weather and climate observing networks that took place in the 19th century.
No history of thinking about climate and humans would be complete without mention of the century-later work of Ellsworth Huntington (1876-1947), a vastly traveled and awesomely prolific American geographer of the first half of the 20th century, and his almost evangelistic promotion of a theory of climatic determinism. Huntington's travels
and research in Asia had impressed upon him a conviction that the rises and falls of civilizations and empires were primarily brought about by fluctuations in climate. He attributed variations in initiative, creativity, stability, honesty, and all the virtues needed for civilization almost exclusively to favorable climates characterized in terms of "climatic energy," an empirically derived amalgam of a grab-bag of climatic parameters. He then compared the distribution of "climatic energy" with the distribution of "civilization," as determined by polling people whom he deemed "civilized." Amazingly, he found a remarkable correspondence between these favorable climatic zones and populations of people closely resembling himself! As Fleming remarks, Huntington's links to long-dead writers such as Du Bos and Montesquieu are obvious, and to our eyes he simply cloaks the same ethnocentric wishful thinking in a specious wrapping of data and charts. However, his theories fitted well with the colonialist mentality of the time, and had remarkable influence.

FROM SPECULATION TO SCIENCE
In parallel with these mainly philosophical and speculative ruminations on climate, a considerable body of fundamental science was slowly being built, so let us backtrack to follow the stream of thought about the causes of climate. In the 18th century, Halley and Hadley explained many of the principal features of atmospheric circulation and global climate on the basis of differential heating between equator and pole, coupled with the earth's rotation. With later elaboration by Ferrel, the tropical trade winds, the subtropical deserts, and the mid-latitude westerlies were plausibly explained. By the early 19th century, the operation of the Earth's heat engine driven by differential heating between tropics and poles was fairly well understood in broad outline. But what about the global climate itself? Why was the Earth as a whole neither too hot nor too cold? One of the earliest hints came from De Saussure's 1774 observations of greater solar intensity in the high Alps, as indicated by thermometers in glass-lidded boxes. In the 1820's, Fourier drew on this data to suggest that the interposition of the atmosphere between earth and space augmented the temperature of the earth's surface, since light seemed to penetrate the air more readily than did heat. Later, Pouillet measured differential absorption of solar and thermal radiation by air, and reinforced Fourier's insight. These two scientists are often cited as the first to elucidate the "greenhouse effect." However, the broad analogy between the earth-atmosphere system and a garden hothouse was apparently drawn earlier by a number of writers, and neither Fourier nor Pouillet developed a full theoretical explanation of the radiative processes involved. The processes underlying the greenhouse effect were elucidated much more fully by John Tyndall in the second half of the 19th century. Beginning in 1859, Tyndall made careful measurements of the radiative properties of various gases, including water vapor, carbon dioxide, ozone, and various hydrocarbons. He discovered that these gases with complex molecules were far more powerful absorbers and emitters of thermal radiation than oxygen and nitrogen, the predominant constituents of the atmosphere. Taking into account their concentrations in the atmosphere, he concluded that water vapor is the strongest absorber of radiant heat, and hence the most important gas controlling the
Earth's surface temperature. In a famous passage, he vividly depicts the greenhouse effect in terms that can hardly be bettered today:

"It is perfectly certain that more than ten percent of the terrestrial radiation from the soil of England is stopped within ten feet of the surface of the soil. This one fact is sufficient to show the immense influence which this newly-discovered property of aqueous vapours must exert on the phenomena of meteorology. This aqueous vapour is a blanket more necessary to the vegetable life of England than clothing is to man. Remove for a single summer night the aqueous vapour from the air which overspreads this country, and you would assuredly destroy every plant capable of being destroyed by a freezing temperature. The warmth of our fields and gardens would pour itself unrequited into space, and the sun would rise upon an island held fast in the iron grip of frost."

With Tyndall's work, building on that of Fourier, Pouillet, and others, the principal elements of the atmospheric greenhouse effect that gives the Earth a habitable climate were identified and characterized. But how could we quantify the physical processes involved and assess their possible changes over time?

QUANTIFYING THE GREENHOUSE EFFECT
In 1891, Svante Arrhenius returned to Stockholm after a period of post-doctoral study abroad to take a post at the Stockholm Högskola. His brilliant early work in electrochemistry would eventually lead to a Nobel Prize. In Stockholm, however, he soon expanded his horizons, turning to a broad field of investigations then termed "cosmic physics." This was seen as an interdisciplinary effort to bring the study of natural phenomena, with all their complex interrelations, into the domain of the physical sciences - a prospectus remarkably similar to the objectives of today's International Geosphere-Biosphere Program. Arrhenius founded the "Stockholm Physics Society," which brought together a remarkably diverse and expert group of local scientists for lectures and discussions of "cosmic physics."

The late 19th century was a time of rapidly growing data on the Earth's past and present. The immense antiquity of the Earth had been well established by Hutton, revealing a past in which the only constant was change. It was clear that great ice ages had come and gone, scouring the Swedish landscape. Geological data, notably the work of his colleague Högbom, indicated that the carbon dioxide content of the atmosphere had changed markedly over time. Tyndall, Pouillet, and Fourier had shown that these changes might be important to climate. But others such as Croll held that variations in the Earth's orbit caused climate changes (a hypothesis eventually put on much firmer ground by Milankovitch). Here was an interesting problem: could plausible fluctuations in carbon dioxide produce climate changes large enough to explain the ice ages?

Arrhenius set out to calculate the temperature changes that would result from specified changes in atmospheric carbon dioxide. He knew from Tyndall's work that water vapor was also a major factor, and moreover varied markedly in time and space. Given the available meteorological data and laboratory measurements, direct calculation of the radiative effects of small changes in either gas through the entire atmosphere would be clearly infeasible. However, the American astronomer Langley had made numerous observations of thermal radiation from the moon in several wavelengths (characterized in
terms of the angle of deviation from a rock salt crystal) at many times, seasons, and lunar altitudes (corresponding to various slant paths through the atmosphere). These data allowed Arrhenius to estimate the absorption by the whole atmosphere of thermal radiation by carbon dioxide (presumably well-mixed) and variable water vapor, the latter of which he related to surface temperature. He then constructed a conceptually simple one-dimensional model based on Stefan's formula for thermal radiation and a dazzling smorgasbord of approximations to account for surface albedo (e.g., snow cover), clouds, and the distribution of radiation with height. After a year's work with pencil, paper, slide rule, and logarithmic tables, he produced the sought-for estimates of temperature changes resulting from changes in carbon dioxide.²

Arrhenius' most famous result was his estimate that doubling atmospheric carbon dioxide would produce a global temperature increase of roughly 6 deg C, a figure not far from today's estimates. However, his original question related to the ice ages, and his results seemed to confirm that plausibly lower carbon dioxide concentrations could indeed trigger glaciation. Interestingly, the American geologist Chamberlin had been studying the same questions in parallel, but with a primary focus on the natural sources and sinks of carbon dioxide. Although the motivation for Arrhenius' arduous year of model-building had been to elucidate the cause of the Ice Ages, it is the former result that has been cited by many in calling Arrhenius the "father" of the greenhouse effect. Indeed, after publication of his 1896 paper, he drew on Högbom's work to consider the effect of fossil fuel burning on atmospheric carbon dioxide and climate. Not surprisingly for a Swede, he took a rather benign view of global warming:

"By the influence of the increasing percentage of carbonic acid in the atmosphere, we may hope to enjoy ages with more equable and better climates, especially as regards the colder regions of the earth, ages when the earth will bring forth much more abundant crops than at present, for the benefit of rapidly propagating mankind."

More detailed examination of carbon dioxide and climate had to await the attention of Britain's G.S. Callendar, who carried out in the 1920's a wide range of studies of temperature trends and the concentrations and radiative properties of carbon dioxide. In 1938, he pointed out that combustion of fossil fuels had produced a six percent increase in atmospheric concentrations of carbon dioxide since the beginning of the century, and by 1949 he found an increase of over ten percent. Using a simple radiative model, he estimated that a doubling of carbon dioxide would result in a temperature increase of 2 deg C, so that the observed carbon dioxide increase could account for about 60 percent of the temperature increase experienced in the century. Like Arrhenius, he viewed the prospect of global warming as a mostly beneficial process - better prospects for cultivation at high latitudes, better productivity through carbon dioxide fertilization, and indefinite deferral of the next ice age. By integrating studies of temperature trends, atmospheric composition, radiative properties of the atmosphere, the carbon cycle, and modeling of the climate system, Callendar essentially laid out the foundation for the subsequent decades of research and discussion that have led to our present assessments of climate change.
Callendar’s work ushered in what I will arbitrarily call the “modern era” in climate modeling research, with rapidly increasing capabilities in high-resolution modeling of the climate system, narrowing focus on anthropogenic climate change, extensive exploration of a wide range of future scenarios, entrainment of other disciplines such as economics,
sociology, biology, and agronomy into climate studies, and - above all - exploding public and political interest. International conferences highlighted the need for carbon dioxide monitoring, and meticulous measurement programs were initiated. The exchanges of carbon dioxide between ocean and atmosphere were elucidated. Increasingly sophisticated models of the climate system were developed, including - from the 1950's onwards - fully three-dimensional general circulation models. A wide range of proxy data sources - ocean sediments, ice cores, lake cores, borehole data, phenomenological observations, historical records - was exploited to yield a rich history of climate variation. Social scientists were entrained into research to project scenarios of human social and economic development, estimate human influences on the climate system, and assess implications for human society. A vast global infrastructure of research, assessment, analysis, dissemination, and bureaucracy - what the late Roger Revelle termed a "cottage industry" - developed. A complete history of this complex and turbulent era is beyond the scope of this brief note. Those interested may refer to Spencer Weart's superb book and HTML hypertext. As we begin the 21st Century, Aristotle's agenda - understanding climate, our influences on climate, and climate's impacts on us - remains alive and well.

CONCLUSION
In this whirlwind sprint through the centuries, I have tried to demonstrate that curiosity and concern about climate are by no means a tabloid-fed fad of the electronic age. Rather, humans have been concerned about climate for about as long as they have been concerned about anything at all. Moreover, as they struggled to learn how climate affected their affairs, they soon began to speculate that perhaps their own activities influenced climate itself. Elaborate speculations on the origins of climate and the mutual influences of climate and mankind abounded and persisted. Expansion of European settlement into foreign landscapes, climates and cultures severely tested these ideas, but the notion of an intimate connection between climate and human affairs persisted. Meanwhile, the rapid development of the natural sciences turned attention toward physical explanations of climate and climate change. By the end of the 19th century, the fundamental processes determining climate were qualitatively understood, and brave attempts at quantitative modeling and prediction were undertaken. By the middle of the next century, a very substantial body of knowledge, technique, and technology had been developed, and the stage was set for the intense era of modeling, projection, and assessment that has extended into our own time. This evolution from curiosity to concern has in part been driven by imaginative speculation and theorizing. However, as the blank areas on the maps became filled and interconnected, observation, data, and simple modeling based on fundamental physics, heroic assumptions, and artful mathematics increasingly informed discussions of climate. Prior to Norman Phillips's pioneering simulations of the mid-1950's, our ideas about climate and climate's interactions with human society came from simple models, pencils, paper, thermometers, and slide-rules, liberally aided by tradition, history, prejudice, chauvinism, ingenuity, and imagination - but not from massive computing machines. Indeed, if the digital computer had never been invented, our curiosity and concerns about climate would have been little altered.
The climate agenda that was evident to the Greek citizens of Athens and colonists of Sicily in classical times is still vital today. Climate importantly influences our welfare. What we do can importantly influence climate. Our climate is a product of our planet's unique composition and situation, and we need to understand how climate works. These questions have endured for two millennia, and will doubtless persist for generations to come. If this seminar should reconvene in August of 2104, Aristotle's ghost would still find himself quite at home.

REFERENCES
1. Ausubel, Jesse H., 1983. Historical Note, Annex 2, Changing Climate. Report of the Carbon Dioxide Assessment Committee. Board on Atmospheric Sciences and Climate, National Academy Press, Washington, pp. 488-491.
2. Fleming, James Rodger, 1998. Historical Perspectives on Climate Change. Oxford University Press, New York. 194 pp.
3. Kutzbach, John E., 1996. Steps in the Evolution of Climatology: From Descriptive to Analytic, in Historical Essays on Meteorology 1919-1995, James Rodger Fleming, ed., American Meteorological Society, Boston, pp. 353-378.
4. Rodhe, Henning, and Robert Charlson, 1998. The Legacy of Svante Arrhenius: Understanding the Greenhouse Effect. Royal Swedish Academy of Sciences and Stockholm University, 276 pp.
5. Ruddiman, William F., 2003. The Anthropogenic Greenhouse Era Began Thousands of Years Ago. Climatic Change, 61(3): 261-293, December 2003.
6. Weart, Spencer R., 2003. The Discovery of Global Warming. Harvard University Press, 228 pp. (An expanded version of this work is available as an extensively hyperlinked HTML document from the author at the American Institute of Physics, American Center for Physics (ACP), One Physics Ellipse, College Park, MD 20740-3843.)

ENDNOTES
1. The content of this brief review is drawn almost entirely, and with great admiration and appreciation, from the superb scholarly histories of James Rodger Fleming (1998) and Spencer Weart (2003). The reader may turn to the excellent bibliographies in the above-cited works for full citations of the various works referenced in this paper. Review papers by Ausubel (1983) and Kutzbach (1996) also provided many illuminating insights.
2. See Rodhe and Charlson (1998) for a fascinating collection of papers on the climate-related work of Arrhenius, including a reproduction of his famous 1896 paper.
SIMPLE CLIMATE MODELS

T.M.L. WIGLEY
National Center for Atmospheric Research, Boulder, USA

ABSTRACT
Simple climate models, from a one-box energy balance model (EBM) up to a multi-box upwelling-diffusion (UD) EBM, are described. The latter is illustrated using the MAGICC (Model for the Assessment of Greenhouse-gas Induced Climate Change) model, which couples a UD EBM to a range of gas-cycle models to investigate future climate change due to anthropogenic emissions of greenhouse gases and aerosol precursors. MAGICC is able to emulate the global-mean temperature responses of more sophisticated coupled Atmosphere/Ocean General Circulation Models (AOGCMs) with high accuracy over a wide range of forcing scenarios. Because of their computational efficiency, simple climate models are valuable when a large number of climate simulations are required, as is the case for probabilistic projections of future warming. Some examples of probabilistic projections are given.

INTRODUCTION
Modeling the climate system is an extremely complex problem involving both physical and chemical processes operating and interacting over a large range of spatial and temporal scales. It is not surprising, therefore, that there is a hierarchy of climate models of increasing complexity and sophistication that is used to tackle different aspects of the climate problem. Each class of model has a role to play. The simplest, 'energy-balance' models capture only the grossest aspects of the problem, the balance between (or changes in the balance between) incoming and outgoing energy at a global scale. Since such models require only minimal computational resources, they may be used to run multiple scenarios for past and future change and to assess the importance of different sources of uncertainty. The most complex models, coupled Atmosphere/Ocean General Circulation Models (AOGCMs), represent both oceanic and atmospheric processes on a three-dimensional grid over the whole globe. The latest generation of such models has a spatial latitude-longitude resolution in the atmosphere of about 1-2° by 1-2°, but even this is not fine enough to model important components of the system like individual clouds (which must be represented in approximate or 'parameterized' form). AOGCMs are computationally intensive and it usually requires many days of wall-clock time to run a single simulation of a few hundred years on the world's most powerful computers, limiting the number of simulations and sensitivity studies that may be performed. The present paper will give an introduction to the models that lie at the simplest end of the hierarchy. A useful summary of simple climate modeling is given by Harvey et al. (1997).
BASIC CONCEPTS
At the global-mean level, the state of the climate system may be represented by the global-mean (area-averaged) temperature near the Earth's surface (T). Changes in this quantity (ΔT(t)) are closely tied to changes in the mean temperature of the troposphere, where the atmosphere is well-mixed in the vertical through convective processes. The prevailing value of T is determined by a balance between net incoming (short-wave) solar energy, which is largely independent of T, and outgoing (long-wave) energy, which depends on T. If the balance is disturbed, for example by some external forcing agent such as an increase in solar irradiance, then an increase in T will increase the outgoing energy and act to restore the balance. The simplest possible climate model therefore would be to represent the climate system as a radiating black body via

(1 - α) Q_solar = σT⁴        ..... (1)

where α is the planetary albedo (≈ 0.3), σ is the Stefan-Boltzmann constant, and Q_solar is about 342 W/m², one quarter of the solar output, since the heat intercepted over the Earth's disc (πr²) must be distributed over the whole of the Earth's surface (4πr²). Hence, a perturbation in Q_solar (ΔQ_solar) will lead to a temperature change ΔT given by

ΔT = (1 - α) ΔQ_solar / (4σT³)        ..... (2)
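As a quick numerical illustration of equs. (1) and (2) - a sketch added here, not part of the original text, with indicative values only - a few lines of Python reproduce the black-body numbers implied by the stated albedo and solar input:

```python
sigma   = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
alpha   = 0.3       # planetary albedo
Q_solar = 342.0     # global-mean incoming solar energy (W m^-2)

# Equation (1), (1 - alpha)*Q_solar = sigma*T**4, solved for T
T = ((1.0 - alpha) * Q_solar / sigma) ** 0.25
print(f"black-body (emission) temperature: {T:.0f} K")                 # about 255 K

# Equation (2): warming per unit perturbation of the solar input
dT_per_dQ = (1.0 - alpha) / (4.0 * sigma * T**3)
print(f"warming per W/m^2 of solar perturbation: {dT_per_dQ:.2f} K")   # about 0.2 K
```

The roughly 255 K figure is the planet's effective emission temperature, well below the observed surface value; the difference is the greenhouse effect, and the feedbacks discussed next are what make the real sensitivity larger than this bare black-body number.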
Unfortunately, the planet does not behave like a simple black body. If the system is externally forced, many of the characteristics that determine how much energy is absorbed and emitted by the system will change. For example, an increase in energy received at the top of the atmosphere will change the albedo of the planet through changes in clouds, the area of snow and ice, and (more slowly) vegetation, and so modify the amount of energy absorbed. In addition, changes in clouds, which are absorbers of long-wave radiation, will change the amount of outgoing energy and the relationship between outgoing energy and temperature. Processes like these are referred to as climate feedbacks, and the eventual response of the climate system to a change in external forcing depends to a large extent on the magnitude of these (and other) feedback processes. Furthermore, the climate system has, through the ocean, a considerable thermal inertia, so it will respond only slowly to any imposed external forcing. A more general form for equ. (2) therefore is

ΔT_equil = ΔQ/λ        ..... (3)
where ΔT_equil is the eventual change in global-mean temperature in response to some general external forcing ΔQ, and λ is the climate feedback parameter, the net effect of a number of individual feedback processes. Conventionally, we use the concept of climate sensitivity (S) rather than the feedback parameter,

S = 1/λ        ..... (4)
where S is the equilibrium warming (K or °C) for a unit change in radiative forcing (W/m²). Further, since one of the primary concerns in climate science is the effect of increases in carbon dioxide (CO₂) concentration, it is common to express the sensitivity in terms of the equilibrium warming (ΔT_2x) that would occur if the atmospheric concentration of CO₂ were doubled. If ΔQ_2x is the forcing for a CO₂ doubling (≈ 3.7 W/m²), then equ. (3) gives

ΔT_2x = ΔQ_2x/λ = S ΔQ_2x        ..... (5)

The value of ΔT_2x is subject to considerable uncertainty because of the difficulty in modeling and/or empirically quantifying feedback processes, particularly those associated with clouds. In many reports and publications it is stated that ΔT_2x lies in the range 1.5-4.5°C, but the confidence interval associated with this range is rarely given. My own judgment is that this range represents, approximately, the 90% confidence interval (Wigley and Raper, 2001), but there are a number of different opinions (e.g., Andronova and Schlesinger, 2001; Forest et al. 2002).
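To make equs. (3)-(5) concrete, the following short calculation (added here as an illustration; the numbers are rounded and indicative only) converts the commonly quoted ΔT_2x range into the corresponding feedback parameter λ and sensitivity S:

```python
dQ_2x = 3.7   # radiative forcing for a CO2 doubling (W m^-2)

for dT_2x in (1.5, 4.5):        # commonly quoted range of the doubling warming (deg C)
    lam = dQ_2x / dT_2x         # equ. (5) rearranged: lambda = dQ_2x / dT_2x
    S = 1.0 / lam               # equ. (4)
    print(f"dT_2x = {dT_2x} C -> lambda = {lam:.2f} W m^-2 K^-1, S = {S:.2f} K/(W m^-2)")
```

The factor-of-three range in ΔT_2x thus corresponds to the same factor-of-three range in S, roughly 0.4 to 1.2 K per W/m².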
ONE-BOX MODELS
The next step up the ladder of complexity is to introduce time-dependent behavior into the system. This may be done using a simple one-box model of the energy balance,

C dΔT/dt + ΔT/S = ΔQ(t)        ..... (6)
where C is a heat capacity term, ΔQ(t) is the applied external forcing, S is the climate sensitivity, and ΔT(t) is the change in global-mean temperature. Equation (6) is the simplest form of Energy Balance Model (EBM). The steady-state solution to equ. (6) is simply equ. (3) (with S replacing 1/λ). ΔT(t) must be a function of C and S, and we can show that the relative importance of these two terms depends on the characteristic time scale for ΔQ(t). To do this, suppose that ΔQ(t) is sinusoidal, ΔQ(t) = A sin(ωt). The solution to (6) is then

ΔT(t) = [SAωτ/(1+(ωτ)²)] exp(-t/τ) + [S/(1+(ωτ)²)][A{sin(ωt) - ωτ cos(ωt)}]        ..... (7)

where τ is a characteristic time scale for the system, τ = SC. (Note that the sine/cosine term can be written in the form sin(ωt - φ), showing that the asymptotic response follows the forcing with a lag, φ.) We now consider two end-member cases, for high-frequency and low-frequency forcing. For the latter (ω << 1/τ), the asymptotic solution is simply the equilibrium response

ΔT(t) = S A sin(ωt)        ..... (8)
showing no appreciable lag between forcing and response, with the response being linearly dependent on the climate sensitivity and independent of the system's heat capacity. For the high-frequency case (ω >> 1/τ) the solution is

ΔT(t) = [A/(ωC)] sin(ωt - π/2)        ..... (9)

showing a quarter-cycle lag of response behind forcing, with the response being independent of the climate sensitivity. (A more general analytical treatment in the frequency domain, accounting for ocean mixing as an upwelling-diffusion process - see below - is given in Wigley and Raper (1991). The results are qualitatively the same as derived here.) If representative values are used for C and S (via ΔT_2x), then we find τ = 1 to 5 years. This means, for example, that the response to the seasonal cycle of solar forcing should be largely independent of the sensitivity and lag behind the forcing by about 3 months (the lag is a little less than this, about 2 months, partly because the thermal inertia term, C, is not a constant). For slower cyclic forcing, such as the response to the 11-year sunspot cycle of solar irradiance changes, sensitivity is far more important and there is a smaller but still non-negligible lag of response behind forcing.
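These two limiting cases are easy to check numerically. The sketch below is not part of the original paper; the parameter values are merely representative, chosen so that τ = SC is about 5 years. It integrates equ. (6) with a sinusoidal forcing and recovers the amplitude and phase lag of the asymptotic response for a fast (annual) and a slow (century-scale) forcing cycle:

```python
import numpy as np

S = 0.8      # climate sensitivity, K per (W m^-2)  (roughly a 3 C doubling warming)
C = 6.0      # effective heat capacity, W yr m^-2 K^-1 (rough global mixed-layer value)
tau = S * C  # characteristic time scale of equ. (6), here about 5 years

def forced_response(period_yr, amp=1.0, steps_per_cycle=2000, n_cycles=20):
    """Integrate equ. (6), C dT/dt + T/S = A sin(wt), by explicit Euler and return
    the amplitude of the asymptotic response and its lag as a fraction of a cycle."""
    w = 2.0 * np.pi / period_yr
    dt = period_yr / steps_per_cycle
    T = 0.0
    tail = []                                  # (time, T) over the final cycle only
    for i in range(n_cycles * steps_per_cycle):
        T += dt * (amp * np.sin(w * i * dt) - T / S) / C
        if i >= (n_cycles - 1) * steps_per_cycle:
            tail.append(((i + 1) * dt, T))
    t, Ts = np.array(tail).T
    # project the final cycle onto sin and cos to recover amplitude and phase lag
    a = 2.0 * np.mean(Ts * np.sin(w * t))
    b = 2.0 * np.mean(Ts * np.cos(w * t))
    return np.hypot(a, b), np.arctan2(-b, a) / (2.0 * np.pi)

for period in (1.0, 100.0):      # an annual cycle versus a slow, century-scale cycle
    ampl, lag = forced_response(period)
    print(f"period {period:5.1f} yr: amplitude {ampl:.3f} K "
          f"(equilibrium S*A = {S:.2f} K for A = 1), lag = {lag:.2f} of a cycle")
```

With these values the annual cycle comes out strongly damped, with a lag close to a quarter cycle (equ. 9), while the century-scale cycle stays close to the equilibrium response S·A with only a small lag (equ. 8).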
UPWELLING-DIFFUSION EBMS
In a one-box EBM, the box represents the globe, and C is the global heat capacity. In reality, however, the climate system has a large number of boxes, each with a different heat capacity. For example, the atmosphere has very little heat capacity; and the land surface also may be considered to have, effectively, a very low heat capacity (because the generally low conductivities of rocks and soils mean that externally imposed heat changes affect only a small near-surface layer). Most of the system's heat capacity is in the ocean. Here lies a complication, since the effective heat capacity of the ocean depends on the time scale of forcing (see Wigley and Raper, 1991). Rapid forcing changes, such as those associated with the seasonal insolation cycle, will affect only the top layer of the ocean (the mechanically mixed 'mixed layer'), while slow changes will penetrate much deeper and so experience a much larger effective heat capacity. A one-box model cannot capture these important aspects, so it is important to account somehow for the processes that transport heat below the mixed layer into the deeper ocean. The simplest way to do this is to consider heat transport in the ocean as an upwelling-diffusion process (Hoffert et al., 1980). An upwelling term is necessary because the ocean's thermohaline (density-driven) circulation (THC) provides a short cut for heat transport. In high latitudes, colder surface ocean temperatures and lower salinities lead to higher density water that sinks to considerable depths before being entrained into deep-ocean horizontal current systems. To compensate for this geographically-restricted sinking water, there is a general ocean-wide upwelling (at a rate of about 4 m/year). In addition to heat transport by the THC there are large-scale mixing processes that transport heat across and along iso-pycnal (constant density) surfaces. Although physically unrealistic, these processes can be characterized by vertical diffusion in a one-dimensional model (see, e.g., Harvey, 2000, p. 262). The net result is a model that assumes that ocean mixing can be represented by a one-dimensional upwelling-diffusion (UD) process. Mathematically, a UD EBM may be written as

C dΔT/dt + ΔT/S = ΔQ(t) - ΔF
ΔF = K_z [∂(Δθ)/∂z] at z = 0        ..... (10)
∂(Δθ)/∂t + w ∂(Δθ)/∂z = K_z ∂²(Δθ)/∂z²

Here, Δ refers to changes from an initial equilibrium state, ΔT(t) is the change in the temperature of the ocean mixed layer, ΔF is the flux of heat below the mixed layer into the deeper ocean, z is a vertical coordinate measured downwards from z = 0 at the bottom of the mixed layer, θ(t,z) is the ocean temperature profile, K_z is the effective vertical diffusivity, and w is the upwelling rate. Because of their relatively low heat capacities, it is reasonably assumed that the land-surface and tropospheric temperature changes follow those in the ocean mixed layer.
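To show the mechanics of equ. (10), here is a deliberately simplified numerical sketch (added for illustration; the parameter values are assumed, representative ones, and the treatment of the sinking branch is far cruder than in MAGICC). A mixed layer obeying the energy balance loses heat to a deep-ocean column in which the temperature anomaly evolves by vertical diffusion plus upwelling of unperturbed bottom water:

```python
import numpy as np

# Assumed, representative parameter values - illustrative only, not MAGICC's.
S     = 0.8                # climate sensitivity, K per (W m^-2)
c_w   = 0.13               # heat capacity of sea water, W yr m^-3 K^-1
h_mix = 70.0               # mixed-layer depth (m); C_mix = c_w * h_mix
K_z   = 4000.0             # effective vertical diffusivity, m^2 yr^-1 (~1.3 cm^2 s^-1)
w     = 4.0                # upwelling rate, m yr^-1 (the ~4 m/yr quoted above)
depth, dz = 4000.0, 100.0  # deep-ocean column depth and layer thickness (m)
dt    = 0.05               # time step (yr)

C_mix = c_w * h_mix
theta = np.zeros(int(depth / dz))   # deep-ocean temperature anomaly profile
T_mix = 0.0                         # mixed-layer temperature anomaly
dQ    = 3.7                         # step forcing, W m^-2 (a CO2 doubling), from t = 0

n_steps = int(round(500 / dt)) + 1
for step in range(n_steps):
    t = step * dt
    if round(t, 6) in (0, 10, 50, 100, 250, 500):
        print(f"t = {t:5.0f} yr   mixed-layer warming = {T_mix:.2f} K "
              f"(instantaneous equilibrium S*dQ = {S * dQ:.2f} K)")
    # heat flux from the mixed layer into the top of the deep column, equ. (10)
    F = c_w * K_z * (T_mix - theta[0]) / dz
    T_mix += dt * (dQ - T_mix / S - F) / C_mix
    # deep column: vertical diffusion plus upwelling of unperturbed bottom water
    above = np.insert(theta[:-1], 0, T_mix)   # the mixed layer sits on top
    below = np.append(theta[1:], 0.0)         # anomaly vanishes at the bottom
    theta = theta + dt * (K_z * (above - 2.0 * theta + below) / dz**2
                          + w * (below - theta) / dz)
```

Run with a step forcing of 3.7 W/m², the surface warming rises only over many decades to centuries - the thermal-inertia behavior the UD ocean is there to represent. Because this toy version holds the upwelled bottom water at its unperturbed temperature, it levels off somewhat below S·ΔQ; MAGICC treats the temperature of the sinking water branch more carefully.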
In the following, I will give results based on a somewhat more complex UD EBM in which the land and ocean areas in each hemisphere are considered separately (with exchanges between these four boxes quantified by linear exchange coefficients), and where various gas-cycle models are coupled interactively to the climate model (to account for possible climate feedbacks on the sources and sinks of, e.g., CO₂ and CH₄). Incorporating coupled gas-cycle models means that ΔQ(t) is calculated internally by the model for radiatively active gases like CO₂, based on emissions input information. The need to separate the globe into land and ocean boxes arises from the fact that the climate sensitivity over land is larger than over the ocean (by approximately 30%), and the hemispheric separation is an advantage because some forcings (such as sulfate aerosols produced by SO₂ emissions) have quite different values in the northern and southern hemispheres. This model (MAGICC - Model for the Assessment of Greenhouse-gas Induced Climate Change; Wigley and Raper, 1992; Raper et al., 1996) is the model that has been used by the Intergovernmental Panel on Climate Change (IPCC) for all of its projections of future global-mean temperature and sea level change. A user-friendly version of MAGICC that runs on a PC may be downloaded from www.cad.ucar.edu.

As noted above, the computational efficiency of models of this type (MAGICC runs a 300+ year simulation in about 0.1 seconds) makes them ideal for carrying out multiple simulations for different emissions scenarios, and for investigating uncertainties associated with the carbon cycle, ocean mixing, climate parameters like the climate sensitivity, and forcing uncertainties such as those for aerosol forcing. Another advantage is that these models are deterministic, so they give the signal due to external forcing directly. AOGCMs, on the other hand, produce output that is a combination of both the externally-forced signal and internally-generated weather 'noise'. To obtain the underlying signal from an AOGCM requires running an ensemble of simulations and averaging these results to reduce the noise. AOGCMs, however, are physically more realistic, and they give the full spatial details of future climate change for all variables.

For the IPCC Third Assessment Report (TAR) a number of improvements were made to MAGICC to ensure that its science was consistent with the state of the science as represented by the TAR (Wigley and Raper, 2002). MAGICC was also calibrated against a number of AOGCMs using results from a standard forcing experiment in which CO₂ concentration was increased at a compound rate of 1%/year, equivalent to a linear forcing increase (Raper et al., 2001; see also Cubasch and Meehl, 2001, Appendix 9.1). As an example of the success of the model calibration process, Fig. 1 (from Wigley et al., 2004) shows how MAGICC can reproduce the results for a specific AOGCM for forcings that differ radically from the linear forcing considered in the calibration exercise. Here, the AOGCM is the NCAR/USDOE Parallel Climate Model (PCM; Washington et al., 2000), and the forcing is that due to the major explosive volcanic eruptions of the past 100+ years (from Ammann et al., 2003). The AOGCM signal is obtained by averaging results from 16 different simulations or combinations of simulations. A considerable amount of noise still remains after this averaging process. Note that the calibration involves forcing changes on the decadal to century time scale, while the test simulation employs forcing changes on a monthly to annual time scale, quite a stringent test. The agreement between the two models is excellent.
.........
02
01
3 w
o
a
r -01 @ i 3 I-
;
-02
a
5 I-
-0 3 El iichon
-0 4
~
~.....
'inatut
-0 5
0
120
240
360
480
600
720
840
960
1080
1200
1320
MONTH (JAN. 1890=1)
Figure 1: Comparison of the simulated response to volcanic forcing calculated with MAGICC (bold line) and with PCM. The PCM result is the mean of 16 independent realizations, but this still leaves considerable month-to-month and year-to-year background variability that masks the underlying volcano signal. The four largest 20th century eruptions are identified.
PROBABILISTIC PROJECTIONS
It is possible to take advantage of the computational efficiency of a model like MAGICC to make probabilistic projections of future change under the influence of man-made emissions of greenhouse gases and related gases (cf. Wigley and Raper, 2001). The greenhouse gases considered are CO₂, CH₄, N₂O, tropospheric and stratospheric ozone, a large number of halocarbons (CFCs and HCFCs, which determine changes in stratospheric ozone, and HFCs and PFCs), and SF₆. The other gases are SO₂ (which controls the level of sulfate aerosols) and the reactive gases CO, NOx and VOCs, which control (along with CH₄) tropospheric ozone. Other aerosols are also considered. The future emissions of these gases are defined by a set of no-climate-policy scenarios produced as part of the IPCC Third Assessment and published in the Special Report on Emissions Scenarios (Nakićenović and Swart, 2000). These are referred to as the SRES scenarios. Emissions are prescribed in decadal steps from 2000 to 2100. These scenarios span a wide range of emissions depending on assumptions made for future population change, economic development, technology changes, levels of international cooperation, and attitudes towards sustainable development. They assume that no policies are introduced specifically to reduce future climate change. They do, however, include the effects of policies to reduce the impacts of sulfate aerosols on urban pollution and acid precipitation (leading to substantially lower SO₂ emissions than would otherwise be the case). Ironically, since these aerosols have a cooling effect, this leads to enhanced global warming. The SRES report states that the authors were unable to associate probabilities with individual scenarios, so I have assumed that all scenarios are equally likely. Emissions uncertainties are clearly a major source of uncertainty in future climate change. The other sources of uncertainty may be identified by sensitivity studies. They are: the climate sensitivity, the rate of ocean mixing as characterized by K_z, the magnitude of aerosol forcing, and feedbacks on the carbon cycle. Emissions and sensitivity uncertainties dominate.

To obtain a probability density function (pdf) for future global-mean warming it is necessary to assign pdfs to the various uncertain parameters that define the MAGICC climate model. Of course, these input pdfs are themselves uncertain, so there is some expert judgment required in their quantification. Details are given in Wigley and Raper (2001). The method used for obtaining the output pdf is a form of Latin Hypercube Sampling where each of the five input pdfs is divided into fractiles and the five-dimensional fractile space is sampled exhaustively with replacement ('Exhaustive Fractile Sampling'). For emissions, there are 35 discrete scenarios. The sensitivity pdf is divided into 25 fractiles, and the K_z, aerosol forcing and carbon cycle pdfs are divided into quintiles. Sampling of fractile space requires running 35 × 25 × 5 × 5 × 5 = 109,375 simulations.
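The bookkeeping of this 'Exhaustive Fractile Sampling' is easy to sketch. The fragment below is purely illustrative: it replaces MAGICC with a trivial stand-in response function and uses made-up fractile values, but it shows the essential step - every combination of emissions scenario and parameter fractile is run once, with equal probability weight, and the resulting warmings are pooled into an output distribution:

```python
import itertools
import numpy as np

# Hypothetical fractile values standing in for the real input pdfs (illustration only).
scenarios   = np.linspace(0.7, 1.4, 35)    # 35 SRES-like emissions scale factors
sensitivity = np.linspace(1.5, 4.5, 25)    # 25 fractiles of dT_2x (deg C)
ocean_mix   = np.linspace(0.8, 1.2, 5)     # quintiles: ocean mixing (K_z) factor
aerosol     = np.linspace(-0.4, 0.4, 5)    # quintiles: aerosol forcing offset (W m^-2)
carbon_fb   = np.linspace(0.9, 1.15, 5)    # quintiles: carbon-cycle feedback factor

def toy_warming(e, s, k, a, c):
    """Stand-in for a MAGICC run: a schematic 1990-2100 warming (deg C)."""
    forcing  = 6.0 * e * c + a     # pretend forcing in 2100 (W m^-2)
    realised = 0.7 / k             # fraction of equilibrium warming realised by 2100
    return (s / 3.7) * forcing * realised

warmings = np.array([toy_warming(*combo) for combo in
                     itertools.product(scenarios, sensitivity, ocean_mix,
                                       aerosol, carbon_fb)])
print(f"{warmings.size} equally weighted simulations")    # 35*25*5*5*5 = 109,375
for p in (5, 50, 95):
    print(f"{p:2d}th percentile of warming: {np.percentile(warmings, p):.2f} C")
```

In the real calculation each of the 109,375 runs is a full MAGICC simulation, and the input values are the fractile representatives of the expert-judged pdfs rather than the evenly spaced placeholders used here.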
Even for a fixed emissions scenario the uncertainties are large, arising mainly from uncertainties in the climate sensitivity. In 2100, the median warming (from 1990 - subtract 0.2°C for warming from 2000) is 3.10°C with a 90% confidence interval of 2.00 to 4.39°C. (The precision here is for comparison purposes and does not reflect accuracy.)
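To make the 'Exhaustive Fractile Sampling' scheme concrete, here is a minimal Python sketch of the bookkeeping. It is not the MAGICC code: the pdf shapes, parameter ranges and the run_magicc placeholder are illustrative assumptions of mine; only the counting (35 x 25 x 5 x 5 x 5 equally weighted combinations) follows the description above.

import itertools
import numpy as np

# Each uncertain input is represented by the mid-points of equal-probability
# fractiles, and every combination is simulated once with equal weight.
n_scen, n_sens, n_quint = 35, 25, 5             # 35 x 25 x 5 x 5 x 5 = 109,375 runs

scenarios   = range(n_scen)                                        # SRES scenarios
sens_pdf    = np.random.lognormal(np.log(2.6), 0.35, 100_000)      # placeholder pdf
sensitivity = np.quantile(sens_pdf, (np.arange(n_sens) + 0.5) / n_sens)
ocean_kz    = np.linspace(1.0, 4.0, n_quint)    # ocean mixing, placeholder quintiles
aerosol     = np.linspace(-1.6, -0.4, n_quint)  # 1990 aerosol forcing, placeholder
carbon_fb   = np.linspace(0.0, 1.0, n_quint)    # carbon-cycle feedback, placeholder

def run_magicc(scen, dT2x, kz, qaer, cfb):
    """Placeholder for one MAGICC run; returns warming 1990-2100 (deg C)."""
    return 0.9 * dT2x * (1.0 + 0.1 * cfb)        # illustrative only

warming = np.array([run_magicc(*combo) for combo in
                    itertools.product(scenarios, sensitivity,
                                      ocean_kz, aerosol, carbon_fb)])
print(warming.size)                              # 109375
print(np.percentile(warming, [5, 50, 95]))       # equally weighted output pdf

Because every combination carries the same probability, the output pdf and its percentiles can be read directly from the array of results.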
Figure 2: Probabilistic projections of global-mean warming from 1990 for the P50 (SRES median) emissions scenario. The projections account for uncertainties in the climate sensitivity, ocean mixing, aerosol forcing and carbon cycle feedbacks.
Figure 3 (which is the same as Fig. 4 in Wigley and Raper, 2001) shows the results when all 35 SRES emissions scenarios (equally weighted) are considered. Uncertainties increase, but the medians change little. (Using a scenario other than P50 in Fig. 2 would shift the output pdfs to the left (for lower emissions) or right (for higher emissions), but would not noticeably affect the spread of the distribution.) The median warming in 2100 is 3.06°C, while the 90% confidence interval expands by one third to 1.68-4.87°C.
[Figure 3 shows probability density curves for several dates, including 2030 and 2070; horizontal axis: global-mean warming from 1990 (degC).]
Figure 3: Probabilistic projections of global-mean warming from 1990 for the full set of SRES emissions scenarios. The projections account for uncertainties in emissions, the climate sensitivity, ocean mixing, aerosol forcing and carbon cycle feedbacks. For changes over 1990-2100, the IPCC Third Assessment Report (using the same climate model) gives a warming range of 1.4-5.8°C.
SUMMARY AND CONCLUSIONS
A simple one-box Energy Balance Model (EBM) is described and used to identify the forcing time scale boundary (1-5 years) below which the response of the climate system is controlled by its thermal inertia and above which the response is primarily determined by the climate sensitivity. The simplest type of model that can describe the time-dependent behavior of the climate system is an Upwelling Diffusion (UD) EBM in which ocean heat transport is modeled as an upwelling-diffusion process. It is possible to model this transport without considering upwelling (i.e., as a 'pure diffusion' process), but such a model has as its steady-state an isothermal ocean, which is clearly unrealistic.
The MAGICC model is used as an illustration of a UD EBM. When MAGICC's parameters are calibrated so that MAGICC emulates the results of a particular AOGCM for the case of linear forcing, it is found that emulations for different forcings are equally accurate. An example is given using the response of the Parallel Climate Model (PCM) to the volcanic forcing history of the past 100+ years. MAGICC's results match those of PCM accurately and provide a clearer indication of the underlying volcanic response signal.
Because of their computational efficiency, models like MAGICC may be run thousands of times to produce probabilistic projections of future climate change accounting for emissions, climate sensitivity, ocean mixing, aerosol forcing and carbon cycle uncertainties. An example is given for a specific emissions scenario and this is then expanded to account for emissions uncertainties.
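Purely as an illustration of the time-scale argument summarized above, the Python fragment below integrates a one-box EBM, C dT/dt = Q(t) - lambda*T; the heat capacity and feedback parameter are placeholder values of mine, not those used in the paper. For forcing periods much shorter than tau = C/lambda (a few years) the response is limited by thermal inertia, while for slow forcing it approaches the equilibrium value Q/lambda set by the sensitivity.

import numpy as np

C_heat = 8.0     # effective heat capacity, W yr m^-2 K^-1 (placeholder)
lam    = 1.3     # feedback parameter, W m^-2 K^-1  ->  tau = C/lam ~ 6 yr

def response_amplitude(period_yr, years=200.0, dt=0.01):
    """Amplitude of T for sinusoidal forcing of unit amplitude (W m^-2)."""
    t = np.arange(0.0, years, dt)
    Q = np.sin(2.0 * np.pi * t / period_yr)
    T = np.zeros_like(t)
    for i in range(1, t.size):                  # explicit Euler step
        T[i] = T[i-1] + dt * (Q[i-1] - lam * T[i-1]) / C_heat
    return np.ptp(T[t > years / 2.0]) / 2.0     # amplitude after spin-up

for period in (1, 5, 30, 100):
    print(period, round(response_amplitude(period), 3))
# Short periods: amplitude is much less than 1/lam (inertia-limited).
# Long periods:  amplitude tends to 1/lam ~ 0.77 K (sensitivity-limited).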
REFERENCES
1. Ammann, C.M., Meehl, G.A., Washington, W.M. and Zender, C.S., 2003: A monthly and latitudinally varying volcanic forcing data set in simulations of 20th century climate. Geophysical Research Letters 30(12), 1657, doi:10.1029/2003GL016875.
2. Andronova, N.G. and Schlesinger, M.E., 2001: Objective estimation of the probability density function for climate sensitivity. Journal of Geophysical Research 106, 22,605-22,611.
3. Cubasch, U. and Meehl, G.A., co-ordinating lead authors, 2001: Projections for future climate change. (In) Climate Change 2001: The Scientific Basis. (J.T. Houghton et al., Eds.), Cambridge University Press, 525-582.
4. Forest, C.E., Stone, P.H., Sokolov, A.P., Allen, M.R. and Webster, M.D., 2002: Quantifying uncertainties in climate system properties with the use of recent climate observations. Science 295, 113-117.
5. Harvey, L.D.D., 2000: Global Warming: The Hard Science. Prentice Hall, 336 pp.
6. Harvey, L.D.D. et al., 1997: An Introduction to Simple Climate Models used in the IPCC Second Assessment Report: IPCC Technical Paper 2 (J.T. Houghton et al., Eds.), Intergovernmental Panel on Climate Change, Geneva, Switzerland, 50 pp.
7. Hoffert, M.L., Callegari, A.J. and Hsieh, C.-T., 1980: The role of deep sea heat storage in the secular response to climate forcing. Journal of Geophysical Research 86, 6667-6679.
8. Nakićenović, N., and Swart, R., Eds., 2000: Special Report on Emissions Scenarios. Cambridge University Press, 570 pp.
9. Raper, S.C.B., Wigley, T.M.L. and Warrick, R.A., 1996: Global sea level rise: past and future. (In) Sea-Level Rise and Coastal Subsidence: Causes, Consequences and Strategies. (J. Milliman and B.U. Haq, Eds.), Kluwer Academic Publishers, Dordrecht, The Netherlands, 11-45.
10. Raper, S.C.B., Gregory, J.M. and Osborn, T.J., 2001: Use of an upwelling-diffusion energy balance climate model to simulate and diagnose A/OGCM results. Climate Dynamics 17, 601-613.
11. Washington, W.M. et al., 2000: Parallel Climate Model (PCM) control and transient simulations. Climate Dynamics 16, 755-774.
12. Wigley, T.M.L. and Raper, S.C.B., 1991: Internally generated variability of global-mean temperatures. (In) Greenhouse-Gas-Induced Climatic Change: A Critical Appraisal of Simulations and Observations. (M.E. Schlesinger, Ed.), Elsevier Science Publishers, Amsterdam, Netherlands, 471-482.
13. Wigley, T.M.L. and Raper, S.C.B., 1992: Implications for climate and sea level of revised IPCC emissions scenarios. Nature 357, 293-300.
14. Wigley, T.M.L. and Raper, S.C.B., 2001: Interpretation of high projections for global-mean warming. Science 293, 451-454.
15. Wigley, T.M.L. and Raper, S.C.B., 2002: Reasons for larger warming projections in the IPCC Third Assessment Report. Journal of Climate 15, 2945-2952.
16. Wigley, T.M.L., Ammann, C.M., Santer, B.D. and Raper, S.C.B., 2004: The effect of climate sensitivity on the response to volcanic forcing. Journal of Climate (submitted).
OLD PHYSICS FOR NEW CLIMATE MODELS - MAYBE
GARTH W. PALTRIDGE
Institute of Antarctic and Southern Ocean Studies, University of Tasmania, Australia
ABSTRACT
On the one hand we look at the possibility that positive feedback within the earth-atmosphere climate system is great enough to generate an oscillating climate whose long-term behaviour is more-or-less independent of such things as enhanced greenhouse global warming. On the other we look at the possibility of applying a principle of maximum entropy production to calculate the sub-grid-scale turbulent diffusion coefficients of climate models, and thereby perhaps bypass the need for ever more detailed model resolution. Examination of either possibility requires a sizeable change in the way modern numerical climate models are handled.
INTRODUCTION
This talk is about two scientific difficulties still facing people in the climate prediction business. The first concerns the variety of feedback processes which have to be incorporated in a model before it can produce anything other than partial-derivative forecasts of, say, the change of global temperature associated with an increase in atmospheric carbon dioxide. Researchers are still having difficulty identifying all the significant feedbacks, and indeed are deliberately avoiding some of them because they are such a nuisance. The second concerns the perennial problem of turbulence. Its detailed behaviour has that awkward characteristic of inherent unpredictability - this because fluctuations smaller than the distance between measurements in a turbulent medium have the nasty habit of growing unexpectedly into something much bigger and much more noticeable. It is not immediately obvious that either difficulty will be removed by adding ever-more detail to general circulation models of the atmosphere and ocean.
FEEDBACK AND OTHER MATTERS
Want to know:
Figure 1: dT/dx = (dT/dF)(dF/dx), with dF/dx = 4.0 W m-2 for doubled CO2 and no feedbacks, and dT/dF = 0.3 °C per W m-2, giving a rise of 1.2 °C for doubled CO2.
The equation in Figure 1 is perhaps the simplest way of looking at the climate-change problem. T is the global average surface temperature, F is the net radiant energy flux into the planet (it must be zero on average), and we will let x be the concentration of atmospheric carbon dioxide whose unit of change is set to be a doubling of its value. The values of the two derivatives on the right-hand-side were worked out on the back of an envelope more than a hundred years ago, and those of you who can multiply will quickly work out that, according to this equation, a doubling of carbon dioxide would lead to a rise in global temperature of 1.2°C. People were fairly happy with such a figure in the less timorous years of the early part of last century.
With Feedbacks:
Figure 2: with feedbacks, dT = dT0 / (1 - sum of fi); the response grows rapidly as the sum of the fi approaches 1.0.
The problem of course is that there are no feedbacks in the equation as it stands, and the left-hand-side is very much a partial rather than a full derivative. Higher temperatures may change cloud cover and the concentration of water vapour and the albedo of the surface and so on, all of which changes may in turn 'feed back' so as to affect F. Depending on the sign of the feedback, they may amplify or reduce the initial change in T induced by the increasing carbon dioxide. Thus equation 1 should be modified to the more complicated equation 2 in Figure 2, with the sum of all the feedback factors fi appearing as shown in the denominator. Climate-change modelling is essentially a process which, at least in principle, identifies all the fi and calculates reliable values of them.
The table in Figure 3 gives the feedback factors that were specifically calculated as a by-product of a popular numerical climate model developed about 20 years ago. There are both positive and negative values, but when they are all added up their sum comes to +0.712. The implied sensitivity to doubling carbon dioxide is an increase in temperature of 4.16°C - this both from our equation 2 and indeed from the actual simulations within the model itself. Other models over the years used different feedbacks or values of feedbacks, and indeed the range of reputable published sensitivities to doubled carbon dioxide includes values as high as 6 or 7°C.
Figure 3 (table of feedback factors):
Feedback Mechanism        fi        dT (°C)
None                      0.000      1.20
Water vapour amount       0.445      1.85
Water vapour distrib.     0.216      0.90
Lapse rate               -0.264     -1.10
Surface albedo            0.091      0.38
Cloud height              0.123      0.51
Cloud cover               0.101      0.42
Total                     0.712      4.16
The first thing to recognize is that quoting the various fi to three decimal places is rather silly. Even now we don't know for certain the sign of some of the feedbacks, let alone their magnitude to three decimal places. The second thing to recognize (referring back to equation 2 and its associated graph in Figure 2) is that the more positive is the overall feedback, the more significant can be any further addition. (As the sum of the fi approaches 1.0, the sensitivity to change of carbon dioxide increases rapidly and indeed tends to infinity.) Suffice it to say that some of the more extreme positive feedbacks have been tuned out of the models over recent years, and in general the average sort of quoted sensitivity to doubled carbon dioxide is actually not all that far from the hundred-year-old figure of a little more than a degree or so. But reputable models with high sensitivity can still be found, and it is possible (not necessarily likely, but at least possible) that the overall feedback of the real atmosphere-ocean system is very large and positive. If for instance we accept the large values of positive feedback implied by those models which predict a 6 or 7 degree rise in temperature, and then add other positive feedbacks which may not yet be incorporated in those models (rainfall/albedo and rainfall/emissivity are two possibilities that come to mind) then one can conceive the system as possibly having an overall feedback greater than 1.0. It would in fact be an oscillator.
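A few lines of Python restate equation 2 using only the numbers from the Figure 3 table (nothing beyond those tabulated values is assumed): the no-feedback response of 1.2°C is amplified by 1/(1 - sum of fi), and the amplification blows up as the sum approaches 1.

dT0 = 1.2                      # no-feedback warming for doubled CO2, deg C
feedbacks = {                  # values from the Figure 3 table
    "water vapour amount":   0.445,
    "water vapour distrib.": 0.216,
    "lapse rate":           -0.264,
    "surface albedo":        0.091,
    "cloud height":          0.123,
    "cloud cover":           0.101,
}
f_sum = sum(feedbacks.values())                 # 0.712
print(round(dT0 / (1.0 - f_sum), 2))            # ~4.17 deg C, cf. 4.16 in the table

for f in (0.5, 0.7, 0.9, 0.95, 0.99):           # amplification diverges as f -> 1
    print(f, round(dT0 / (1.0 - f), 1))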
Figure 4
Such a world would have its interesting points. The climate would skate back and forth from one extreme to another at a rate determined by the longest time-constants of the system - that is, by the centuries and millennia associated with processes in the deep ocean and the polar ice packs. The extremes would be defined by points where the total feedback once again became less than 1.0. For instance, one extreme might correspond to the dry ice-age situation when (we might imagine) there would not be enough clouds to allow positive cloud feedback. The other might correspond to some version of the relatively wet present climate where (we might imagine again) the positive rainfall to surface-emissivity feedback could be reduced to virtually nothing. Perhaps the most interesting point about such a world is that enhanced greenhouse warming from the burning of fossil fuel would be more-or-less irrelevant to its long-term behaviour. In any event, and before consigning the whole silly idea to the rubbish bin, it is perhaps worth looking at the record in Figure 4 of the temperature of the high Antarctic plateau over the last half million years - this from the Vostok ice core. People argue that this temperature record is probably similar in shape (if not in magnitude) to that of the world as a whole. The almost saw-tooth graph is remarkably oscillatory in character. Suffice it to say that various more respectable attempts to explain the dominant 110-thousand-year cycle in global temperature still leave rather a lot to be desired.
The real point of this speculative, rather disreputable, and indeed not very original discussion is to emphasise that such ideas are not all that easy to pursue in this day of highly complicated and highly expensive numerical climate models. There have to be very serious reasons to make major changes to an operational model. Indeed it is difficult enough nowadays - some would say virtually impossible - even to analyse the sensitivity of models to the tunable parameters buried deep in the simulation of many of the thermodynamic processes of the system. Requests to try strange ideas which might run the models way outside the regime for which they are designed can, rather understandably, lead to some very funny looks from the modelling practitioners. On the other hand, ideas which are not tested on the big models tend to remain permanently within the realm of science fiction. And in the case of the present example, perhaps it is just as well.
TURBULENCE AND MAXIMUM DISSIPATION
Figure 5 (schematic of the zonal box model; horizontal axis: latitude)
Many years ago someone pointed out to me that the last resort of the confused physicist when faced with an insoluble problem was to look for an extremum principle. Such things can be used to bypass the difficulty of not having enough equations to go with all the unknowns. I won't bore you with the details, but as a consequence of that conversation I developed back in the mid-seventies a particular version of what is known as an 'energy balance' box-model of the climate system. Referring to Figure 5, in the first version of the model the boxes corresponded to latitude zones, and there were 10 of them ranging from pole to pole. The unknowns in each box were cloud cover, surface temperature T and (effectively) the turbulent transfer coefficient relevant to energy transfer X from the one latitude zone i to the next zone i+1. Energy balance at the top and bottom of the atmosphere provided only two equations to solve for the three unknowns of each box. Thus began a more-or-less random search for any overall parameter of the system which had a maximum or minimum for some particular distribution of the transfer coefficients (i.e. of the Xi), and hence also of the distributions of cloud cover and T and the radiative fluxes dependent on them. Hopefully the distributions would match reasonably well the known conditions of the climate system.
Suffice it to say that after a lot of running down dark alleys there emerged a parameter which seemed to fit the bill in that it had a minimum in roughly the right place. The parameter was the sum over all the latitude zones of the net radiation input to the planet (RN - RL) divided by the outgoing long-wave radiation RL. This was fine, but was also completely incomprehensible until a colleague pointed out that if I took the fourth root of the bottom line I would have something which at least had recognizable units - namely, units of entropy exchange (energy flux over temperature). After that it took quite some additional time to wake up to the fact that a minimum in entropy exchange so defined was the equivalent of a maximum in the internal entropy production associated with the meridional transfer of energy.
Anyway, looking at the business in reverse, application to the model of a principle of maximum entropy production (MEP) produced some amazingly good simulations of the Earth's distribution of cloud and surface temperature. You might verify that by looking at the output of a more complicated two-dimensional model of the Earth to which the principle has been applied. See Figures 6 and 7. Bear in mind that this work was done at a time when the GCM practitioners of the day made no pretence at all of generating clouds in their models. They simply pre-set cloud amount to match the observed conditions - and thereby effectively pre-set as well many of the thermodynamic conditions of the system such as sea surface temperature.
Figure 6: Cloud Cover
Figure 7: Surface Temperature
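To convey the flavour of the selection principle, the following Python fragment is a deliberately crude two-box sketch - not the ten-zone model described above, and with made-up radiative constants: the meridional transfer F is left free, and the value that maximizes the entropy production F(1/Tcold - 1/Twarm) is picked out.

import numpy as np

ir_a, ir_b     = 200.0, 2.0    # linearised OLR = ir_a + ir_b * T(degC), placeholders
S_warm, S_cold = 300.0, 160.0  # absorbed solar in each box, W m^-2, placeholders

def temperatures(F):
    """Steady-state box temperatures (deg C) for meridional transfer F (W m^-2)."""
    return (S_warm - F - ir_a) / ir_b, (S_cold + F - ir_a) / ir_b

def entropy_production(F):
    Tw, Tc = temperatures(F)
    return F * (1.0 / (Tc + 273.15) - 1.0 / (Tw + 273.15))

F_grid = np.linspace(0.0, 70.0, 701)
EP     = np.array([entropy_production(F) for F in F_grid])
F_mep  = F_grid[EP.argmax()]
print(F_mep, temperatures(F_mep))
# F = 0 (no transfer) and an F large enough to equalise the two boxes both give
# zero entropy production; MEP selects an intermediate transfer and hence an
# intermediate equator-to-pole temperature difference.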
The result created a small stir at the time, since it would indeed be rather nice to be able to predict climate and climate change without having to go to all the trouble of describing every dynamical process of the atmosphere and ocean in minute detail. But the stir quickly faded, mainly because there was no rational physical explanation as to why the earth-atmosphere climate system should actively adopt a format which maximizes its entropy production. My own published attempts to produce such an explanation were beneath contempt. Despite this, quite a number of people over the years kept looking at the question, and among other things applied the principle of MEP reasonably successfully to other planets, and as well put some real physics behind the entropy production associated with radiation transfer. But it was not until a couple of years ago that one Roderick Dewar produced what seems to be a fairly solid statistical mechanical proof of the concept - namely, that essentially turbulent systems can be expected to adopt a format (i.e. to adjust their transfer coefficients for instance) to maximize their rate of entropy production or (what is nearly the same thing) their rate of energy dissipation. The proof is still being argued about by the gurus of statistical mechanics but, if it ultimately proves to be correct, Dr. Dewar has come up with what might be called a codicil to the Second Law of thermodynamics. Such a codicil would say something to the effect that, not only does an isolated system go ultimately to maximum entropy and disorder, but given the chance it will go there as quickly as possible. So MEP has gained some measure of respectability of late, and as a consequence the literature of the last year or two on the subject has expanded rapidly. But there are still problems of application. The main problem is simply that the constraint is an overall constraint, which at face value is not very easy to incorporate directly into existing general circulation models (GCMs) which are very much based on separate calculation of local conditions every time step in order to build the overall picture. So a major question about MEP concerns the level of detail to which it can be directly applied. Can it, for instance, be applied locally to calculate the transfer coefficients appropriate to the sub-grid-scale diffusion of energy (and other scalar quantities) between individual grid boxes of a GCM? There are indications that it might, in which case the whole nasty business of requiring more and more resolution in numerical climate models might be done away with. But first it will require some fairly esoteric physical analysis to relate the concept directly to various theories of turbulence.
MEP seems to apply usefully (as opposed to trivially) to systems which have sufficient degrees of freedom to allow an almost continuous spectrum of possible steady states - of which one steady state has maximum dissipation relative to all the others. In other words the principle concerns turbulent mechanisms, and not (for instance) the radiation streams that drive them. It is for this reason that attempts to apply MEP to the total entropy production of the earth-atmosphere system (i.e. including the essentially linear production associated with conversion of solar radiation to thermal energy at terrestrial temperatures) have failed. Having come to that realization, it is interesting also to speculate about other types of turbulent system to which MEP might apply and therefore provide some forecasting skill. The economics of complex societies seem remarkably turbulent and could be good candidates for the application of some form of MEP principle. In that event, a jaundiced onlooker might perhaps comfort himself with the thought that an economic model based on such a principle stands a fair chance of being no worse at forecasting the future than the various techniques which exist at the moment.
ENERGY AND ELECTRICITY CONSIDERATIONS - GLOBAL WARMING PERSPECTIVES
DR. HISHAM KHATIB
Honorary Vice Chairman - World Energy Council, Amman, Jordan
(Ideas expressed are entirely personal)
INTRODUCTION
No doubt energy is a major anthropogenic cause of what are termed "greenhouse gases", particularly carbon. Fossil fuels, which account for over 85% of commercial global primary energy consumption, are rich in carbon to varying degrees (coal is very rich, natural gas is less rich). Human use of energy and its dependence on fossil fuels have not changed during the last few decades and are unlikely to change for decades to come. In the foreseeable future, there is no alternative to fossil fuels to satisfy global energy needs. New renewable energy sources, other than hydro, do not contribute more than 2% of global resources (less than 1% if refuse is not included) and are not likely to significantly increase their relative contribution for years to come. Such renewable sources are intermittent and dispersed; correspondingly, they are expensive and unreliable. Fossil fuels are abundant, highly concentrated, versatile and efficient; correspondingly, they are relatively cheap and tradable. Geographical endowment is uneven (particularly in the case of oil). This provokes serious worries about the security of supplies, but the experience of the last few years has proven such worries exaggerated (and unjustified).
A concerted global action to restrain emissions is yet to come. The Kyoto Protocol, although agreed seven years ago, is still to be ratified. The U.S. decided to withdraw and Russia is wavering. The main problem is that the costs to national economies (and the global economy) of restraining emissions, enforcing a strong carbon discipline and developing alternatives are severe. The rewards are doubtful and long term. The problem is compounded by the fact that the major share of the increase in future emissions will come from developing countries, particularly countries with high population concentration and high growth - China and India. Developing countries are eager to achieve economic growth and less worried about global warming. To convince these countries to join the global carbon emissions restraint effort is not going to be easy and, without their participation, the outcome will be limited. This is a serious dilemma.
Global warming, as a science, is still controversial. It is not the intention of this paper to dwell on the controversies. But calls for efforts to restrain emissions are becoming stronger and almost universal. They are concerned with: better energy efficiency, clean technologies (fossil and non-fossil), switching to cleaner fuels and involving developing countries. We shall start with the last issue as being the most important in the long term.
DEVELOPING COUNTRIES AND GLOBAL WARMING
Developing countries are going to be the main player in the growth of the global energy market in the coming decades. Led by China (and to a lesser extent India),
their energy demand, which accounted for only one third of global primary commercial energy in 2003 (it was only 27% in 1993), will surge to 43% in 2025. It will approach half of world consumption by 2030. Correspondingly, developing countries will, as a group, be the main emitters of CO2 in the coming years, as detailed in the following table (Exxon Mobil).
Table 1: Carbon Dioxide Emissions
[Emission shares for industrialized countries, EE/FSU and developing countries.]
Note: Developing countries' carbon energy intensity is higher than that of the OECD.
This, however, should not obscure the fact that all three groups will be increasing their carbon emissions in the next few years. Figure 1 projects the rising global CO2 emissions and the various fuel contributions.
Figure 1: Global CO2 Emissions (Billion Metric Tons Carbon Dioxide)
[History and projections, 1970-2025, total and by fuel.]
Source: Energy Information Administration (EIA), International Energy Outlook 2004 (IEO 2004).
This is promoted by the following factors:
Rapid economic growth
Most developing countries, particularly the countries in South Asia - China, India, Indonesia - have had and continue to have high economic growth. Because of high population increases and the fact that they are in the early stages of their development they have the potential for more rapid growth. This also implies that their economic development is more energy intensive than the mature economies of
industrialized countries. The two following tables give indications of past economic history and future trends, and also of the extent of energy intensity.
Table 2: Global Economic Growth
[Annual growth rates for industrialized countries, developing countries and the world.]
Source: World Energy Outlook (WEO 2002); International Energy Outlook (IEO 2004).
Table 3: Energy Intensity (1000 Btu per $ of GDP, 1997 dollars)
[Values for 1977, 2001 and 2025 for industrialized countries, developing countries and East Europe/FSU.]
These two tables are very significant in indicating trends in global energy consumption. Developing countries' income is expected to increase at twice the rate of industrialized countries during the next few years and, because of the high energy intensity in developing countries (almost 3 times that of ICs), their energy consumption is expected to equal that of ICs by the year 2030, as shown in the following figure:
Figure 2: Energy Consumption (Quadrillion Btu), 1970-2025
Source: Energy Information Administration (EIA), International Energy Outlook 2004.
Also because of the high carbon intensity of the fuels utilized by developing countries (mainly coal in China and India), developing countries' carbon emissions will almost certainly constitute almost half of global emissions by the year 2030. From all the above it is clear that any global effort which does not have containment of carbon emissions from developing countries as its center of interest will be missing the target.
RESTRICTING FUTURE EMISSIONS - THE WAY AHEAD
Future carbon emissions can be significantly restricted by regulations, efficiency measures and technology. There are (besides the Kyoto mechanisms) many ways to restrict emissions; these are mainly, but not restricted to, the following:
1. Better and higher efficiency in energy use.
2. Electrification and fuel switching with more reliance on natural gas.
3. Resurgence of nuclear power.
4. Greater use of new and renewable energy.
We shall now explore these measures in greater detail.
EFFICIENCY IN ENERGY UTILIZATION
Continuous improvement in energy utilization has been taking place in the industrialized world, and to a lesser extent in developing countries, throughout the last three decades with remarkable results. In the past, economic growth was accompanied by a commensurate increase in energy use. Coupling was almost one to one. With the oil shock of 1973, energy efficiency became a major issue and decoupling was achieved (WEA - 2000). Over the period 1990-2001, the world economy (world gross domestic product) grew by 31.5%, i.e. 2.52% annually. Simultaneously, the world's total primary energy consumption growth was restricted to only 16%, i.e. 1.35% on average annually. This
signifies an average annual improvement of 1.2% in energy efficiency, which is quite significant. Similar improvements are expected in the future. The U.S. Energy Information Administration (EIA) expects future efficiency improvement to be no less than that of the past, 1.2% annually (3% economic growth versus 1.8% energy consumption growth annually). Without this, the global CO2 emissions of 23,900 million metric tons in 2001 would have been almost 48,600 million tons in 2025 instead of 37,000 million metric tons. An improvement of almost 11,600 million metric tons (i.e. a reduction of one quarter) is expected to be achieved through greater efficiency during the first quarter of this century.
THE VALUE OF ELECTRIFICATION
Electricity is versatile, clean to use, easy to distribute and to control. Just as important, it is now established that electricity has better productivity in many applications than most other energy forms. All this led to the wider utilization of electricity and its replacement of other forms of energy for many uses. Demand for electricity is now growing globally at a rate higher than that of economic growth and, in many countries, at almost 1.5 to 2 times that of demand for primary energy sources. Going electric will significantly contribute towards lower carbon emissions. The future is going to show a growing role for electricity as the preferred energy carrier. Growth in electricity use during recent years has been markedly higher than energy demand growth and almost identical to that of economic growth, approximately 3% annually (Figure 3). Of course such a trend cannot go on indefinitely. The growth of electricity demand will gradually diverge from economic growth as substitution and markets mature.
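The decoupling arithmetic quoted in the efficiency discussion above can be checked in a few lines of Python; the only inputs are figures already given in the text (the 3.0% and 1.8% rates are those attributed to the EIA).

gdp_total, energy_total, years = 0.315, 0.16, 11      # 1990-2001, from the text

gdp_rate    = (1 + gdp_total) ** (1 / years) - 1      # ~2.5 % per year
energy_rate = (1 + energy_total) ** (1 / years) - 1   # ~1.35 % per year
efficiency  = (1 + gdp_rate) / (1 + energy_rate) - 1  # ~1.1-1.2 % per year
print(round(gdp_rate * 100, 2), round(energy_rate * 100, 2), round(efficiency * 100, 2))

# 2025 CO2 emissions with and without the efficiency gain, from 23,900 Mt in 2001:
no_gain   = 23_900 * 1.030 ** 24    # ~48,600 Mt if emissions tracked 3 % GDP growth
with_gain = 23_900 * 1.018 ** 24    # ~36,700 Mt at 1.8 % - close to the 37,000 quoted
print(round(no_gain), round(with_gain))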
Figure 3: Electricity Demand (TWh x 10^3) as a Function of World GNP (Excluding Former CPEs); horizontal axis: GDP 2000 (trillion 1990 $, expressed in purchasing power parity).
Source: Khatib, H., Economic Evaluation of Projects in the Electricity Supply Industry, 2003.
However, with the types of technologies and applications that already exist, there is nothing to stop electricity's advance or to stop it assuming a higher share of the energy market. Saturation of electricity use is not yet in sight, even in advanced economies where electricity production claims more than half of the primary energy use. Other than for the transport sector, electricity can satisfy most human energy requirements. It is expected that, by the middle of the 21st century, almost 70% of
energy needs in some industrialized countries will be satisfied by electricity (Gerholm). In the near future, electricity demand growth is expected to match the growth of the world economy. This is expected to average around 2.5-3.0% annually during the next few years. The International Energy Agency and the International Atomic Energy Agency (IAEA, 2002) estimate that global electricity production will increase at an annual average rate of 2.7-3.0% during the first decade of the 21st century. Therefore it is expected that total electricity production in 2010 will amount to around 20,000 TWh and to 25,880 TWh in 2020. Most of this growth is going to occur in developing countries, particularly in Southeast Asia, a region that is enjoying rapid economic growth. In 2030 global electricity production is expected to exceed 28,000 TWh and half of this amount will be accounted for by developing countries.
Nowhere is better efficiency achieved than in electricity generation. The average world efficiency of existing power stations is around 31%. New combined cycle gas turbines (CCGT) have an efficiency approaching 60%. A new modern CCGT plant firing natural gas would emit only 40% of the emissions of a similar large modern coal power station that has a high efficiency of around 42%. By going electric the world economy is restricting its carbon emissions. Correspondingly, electrification and its utilization of natural gas are going to be significant contributors towards containing global warming prospects.
THE ROLE OF NUCLEAR POWER
Despite its major contribution to the curtailment of carbon emissions, the contribution of nuclear power to the global energy supply is on the decline. Nuclear power, which produces 16% of world electricity now, is expected to see its share decline to 11% in 2025 and even further afterwards. The accidents at Three Mile Island in the United States in 1979 and at Chernobyl in the Soviet Union in 1986 pushed public opinion and national energy policies away from nuclear power as a source of electricity. In the United States, massive cost overruns and repeated construction delays - both caused in large part by regulatory reactions to the accident at Three Mile Island - essentially ended the U.S. construction of nuclear power plants. Similarly, both before and after the Chernobyl accident, several European governments had announced their intentions to withdraw from the nuclear power area. Sweden committed to a phase-out of nuclear power in 1980 after a national referendum. Both Italy and Austria have abandoned nuclear power entirely, and Austria has also been a strong opponent of nuclear power programs that it considers to be unsafe in Eastern Europe. Belgium, Germany, and the Netherlands have committed to gradual phase-outs of their nuclear power programs, although in some cases such commitments have proven difficult to carry out. Given the periodic changes in political leadership that can shift official government positions on nuclear power, it is difficult to assess the degree to which current commitments for or against nuclear power will be maintained.
Many issues still impede the expansion of the nuclear power industry. Nuclear waste disposal remains a key concern. So are the dangers of proliferation and serious operational accidents in developing countries. But the future of nuclear is also blighted by its economics: although it does provide a measure of energy security, its costs are high compared to a CCGT plant running on gas.
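As a rough cross-check of the CCGT-versus-coal comparison made in the electrification discussion above, the figures below use typical fuel emission factors that are my assumption, not values from the paper; emissions per kWh generated scale as the fuel's carbon content divided by plant efficiency.

gas_fuel, coal_fuel = 0.20, 0.34        # assumed kg CO2 per kWh of fuel energy

ccgt = gas_fuel  / 0.60                 # modern CCGT firing natural gas, ~60 % efficient
coal = coal_fuel / 0.42                 # high-efficiency modern coal plant, ~42 %

print(round(ccgt, 2), round(coal, 2))   # ~0.33 vs ~0.81 kg CO2 per kWh generated
print(round(ccgt / coal, 2))            # ~0.41 - roughly the "40 %" quoted above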
Nuclear power, a capital-intensive investment, is for government-owned utilities and industries to undertake. Private businesses, which are increasingly taking over the production of electrical power, are not prepared to put up the huge
amount of capital or to accept the high risks which nuclear power entails. Correspondingly, at least in the foreseeable future, the contribution of nuclear power towards solving global warming problems will continue to be limited, and much lower than its potential.
BIOMASS, RENEWABLE ENERGY AND BIOFUELS - PROSPECTS FOR NEW ENERGY SOURCES
Biomass, whose contribution to global primary energy sources is significant, is not usually accounted for in global primary commercial energy consumption. However, at least 2.40 billion people (i.e. as high as 40% of the world population) are entirely dependent on biomass as their main source of energy. Biomass consumption in the world is around 1200-1500 m.t.o.e. (around 14% of global end-use energy consumption). Biomass, which is mainly used in developing countries (mostly Sub-Saharan Africa and South Asia), is a major source of local environmental degradation and of emissions that injure public health (IEA - WEIO 2003). Globally, the extraction and burning of biomass releases carbon dioxide into the atmosphere; however, there is no net release of carbon dioxide if biomass is planted and harvested at the same rate, because growing plants remove and sequester carbon dioxide from the atmosphere. But this is subject to question. What about the burning of dung, for instance?
Prospects for the domination of new energy sources in the years to come are not promising, mainly because existing energy resources (particularly fossil fuels) are abundant, highly concentrated, cheap and tradable. The alternatives, particularly new and renewable energy, are dispersed, intermittent and correspondingly expensive. No doubt some of the new energy sources like wind power are becoming competitive, and certain applications of solar energy for water heating in sunny countries and for small electricity production by PV cells are becoming common. But this is only a small niche in a very large market. The outlook for wind and solar energy is for double-digit growth, based on both continued public subsidies and technological advances. However, because they start from a very small base, their combined contribution to total energy supplies is likely to be less than 0.5% in 2020-30.
Installed capacity of wind power in Europe, where it is most popular, was around 25,000 MW in 2003, almost doubling over the last two years. It is promoted by generous subsidies and tax credits. Wind power is intermittent and correspondingly cannot be relied on as a permanent electricity supply without adequate storage. This storage will make it uncompetitive. Wind power can still be competitive and useful in countries with proper wind regimes, but only as a limited source of electricity to augment existing electricity sources and save on the use of fossil fuels. Its presence will add to energy security and energy independence in many countries, but only to a modest extent. Wind power can be utilized in the future for the production of (expensive) hydrogen. Photovoltaic (PV) cells have many useful small power applications. Most importantly, they can provide electricity in small amounts to many households in the world that lack it. But all this, as said earlier, will only make a small dent in the global energy scene.
The two principal instruments used to promote renewables are renewable energy feed-in tariffs (REFIT) and simple quotas.
REFIT is a system where the price of renewable power is politically set in advance at a level high enough to attract sufficient investment and the producers' output is purchased regardless of how much it
may be valued on the market. The quota system sets output levels, either as a percentage of generation or by other measures (EEn Inf).
Much promise has been credited to hydrogen as a source of energy in the future. President George W. Bush pledged in his 2003 State of the Union Address that "the first car driven by a child born today could be powered by hydrogen and pollution free". But is this realistic and justified? The most ambitious use of hydrogen is in a car powered by a fuel cell, a battery-like device that turns hydrogen into electricity while emitting only heat and water vapor. Hydrogen can also be burned directly in engines much like those that run on gasoline, but the goal is fuel cells because they get twice as much work out of a pound of hydrogen.
But where does this hydrogen come from? The main source of hydrogen is natural gas, which is in short supply, cumbersome to convert and may have better uses. Waiting in the wings is coal, burned in old power plants around the world that are already the focus of a dispute over their emissions. The long-term hope is to make hydrogen from emission-free "renewable" technologies, like windmills or solar cells. In fact, hydrogen may be an essential step in translating the energy of wind or sunlight into power to turn a car's wheels. But electricity from renewable technologies is costly. In the US, hydrogen is five times more expensive than gasoline when produced from wind power and 17 times when produced from solar. A likely source of hydrogen is a machine called an electrolyzer, which is like a fuel cell in reverse. The fuel cell combines oxygen from the air with hydrogen to produce an electric current, with water as a byproduct, while an electrolyzer runs an electric current through water to split the water molecule into its constituent hydrogen and oxygen atoms. The problem is that if the electricity came off the national power grid to run an electrolyzer, about half of it, on average, would be generated by coal.
Another problem is emissions. According to the U.S. DOE, an ordinary gasoline-powered car emits 374 grams of carbon dioxide per mile (1.6 kilometers) when driven, counting the energy used to make the gasoline and deliver it. The same car powered by a fuel cell would emit nothing, but if the energy required to make the hydrogen came from the electric grid, the emissions would be 436 grams per mile. Similarly, the car would not emit nitrogen oxides, a precursor of smog, but the power plant would. Correspondingly, an energy future with hydrogen as its main fuel source has to be viewed (at least for now) with skepticism. It is not likely to come before the middle of this century, if it comes.
During 2002, the EU commission proposed that there would be a 20% use of substitute fuels in road transport by the year 2020. The short-term targets are to reach 2% by 2005 and 5.75% by 2010. The commission proposed that alcohol (ethanol) be blended into petrol and that diesel oil be partially replaced by vegetable oil derivatives. There are two approaches towards the solution: the use of pure vegetable oils, and biodiesel (trans-esterified vegetable oil or animal fat). Bioenergy in the form of ethanol and similar fuels (from corn or other agricultural products) is likely to provide only a limited alternative to oil. Cultivation of crops for use as fuel requires substantial land that would otherwise be available for food, or other uses. With present technologies, ethanol is more expensive than gasoline.
It also can require substantial inputs of fossil energy for production and conversion into fuels. The Brazilian experience over the last few years has had mixed results. Most new cars in Brazil are now sold to burn a mixture of biofuels and 75% gasoline. Brazil now sells biofuels at a cost equal to or below petrol. But in view of
independent studies this could only be achieved through subsidies (Baker Institute). Of course ethanol production does provide a measure of energy security, but at a price.
THE FUTURE OF CARBON EMISSIONS
The future of carbon emissions can now be predicted without much difficulty, because the fundamentals are now known and are unlikely to change over the next 25 years. These fundamentals are not encouraging for a carbonless future. They are:
1. There is no foreseeable viable alternative to fossil fuels.
2. The future of new and renewable energy is not very promising. Too much talk, promises, conferences, etc., but very little real market achievement. Its contribution may increase gradually but will not significantly change the structure of the global energy balance.
3. The Kyoto Protocol is unlikely to be ratified in the foreseeable future. It is now seven years since the agreement, with no helpful signs for ratification. Even if ratified, there is so much flexibility in the targets and mechanisms as to limit its effectiveness.
4. Nuclear energy, which is carbon free, is still shunned by the majority of nations.
5. Technological progress in carbon containment, sequestration, storage, etc. is happening, but it will be many years before it has a sizable effect on carbon emissions and reduced concentration in the atmosphere.
6. Most of the growth in energy demand is going to be in developing countries. These countries are mostly concerned with economic growth and less worried about global warming. Consequently they will continue with their relatively carbon-intensive economic growth using their local sources (mostly low quality coal) with little consideration for emissions.
But equally there are a few (but important) bright spots:
1. Energy efficiency (low intensity in energy use) is not only continuing but also improving. This is mostly happening in electricity generation. The world is gradually becoming more electrified, with electricity utilization growth at least 1.5 times that of total primary energy usage growth. New electricity generating facilities are increasingly of the CCGT type, which has relatively high efficiency and low emissions per kWh (less than half), particularly when natural gas is used.
2. Natural gas utilization (which is a relatively benign fuel) will continue to increase at a rate above that of total energy use. Natural gas demand is expected to grow at a rate at least 1.5 times that of primary energy growth (2.7% annually compared to 1.7% for energy). The natural gas growth rate will be twice that of coal. Trading in LNG is improving, as are the prospects of gas-to-liquids and, in the future, coal gasification.
3. There is global awareness of global warming and carbon emissions. Even if Kyoto is not ratified, its message and mechanisms are not forgotten and will foster carbon restraint, particularly among OECD countries.
A time span of 20-25 years is not long in terms of energy development. Correspondingly, it is now possible to predict with reasonable accuracy the future of the emissions that are going to influence global warming. I am not going to venture new predictions of my own, but will mainly rely on US-EIA and IEA figures. Let us start with the emissions at the end of 2003. They point to the following:
Carbon emissions (2003): 24,750 million metric tons
Growth of emissions (1990-2003): 18.3%
Growth of emissions of U.S. (1990-2003): 19%
The future of carbon emissions may look as follows.
Table 4: IEA Predictions
Year      CO2 (million tons)
1971      13,654
2003      24,700
Growth: 1.74% per year
The writing on the wall is clear. We are destined (at least in the medium term, until 2030) to have a relatively high growth of carbon emissions. Prospects for global warming are only warmer.
REFERENCES
1. ExxonMobil. "A Report on Energy Trends, Greenhouse Gas Emissions and Alternative Energy", 2004.
2. Gerholm, T.R., 1991. "Electricity in Sweden - Forecast to the Year 2050", Vattenfall, Sweden.
3. IAEA. "Energy, Electricity and Nuclear Power Estimates for the Period up to 2020", Vienna.
4. IEA (WEIO), 2003. World Energy Investment Outlook, IEA, Paris.
5. Khatib, H. Economic Evaluation of Projects in the Electricity Supply Industry, 2003.
6. US Department of Energy (US DOE), International Energy Outlook 2004 (IEO 2004).
7. WEA. World Energy Assessment, UNDP, New York, 2000.
8. WEO. International Energy Agency. World Energy Outlook 2002, Paris, 2002.
4. TRANSMISSIBLE SPONGIFORM ENCEPHALOPATHY: PRIONS
CREUTZFELDT-JAKOB DISEASE AND BLOOD TRANSFUSION
PROFESSOR R.G. WILL
National CJD Surveillance Unit, University of Edinburgh, Edinburgh, Scotland
Creutzfeldt-Jakob disease (CJD) is a member of a group of diseases known as prion diseases or transmissible spongiform encephalopathies (TSEs). The very fact that these conditions are experimentally transmissible raises the spectre of accidental transmission, and CJD has been transmitted iatrogenically in the course of medical and surgical treatments, including via pituitary growth hormone and human dura mater grafts[1] (table 1). In prion diseases the level of tissue infectivity varies, with the central nervous system carrying very high titres of infectivity and peripheral lymphoid tissues lower levels. In some tissues or fluids, such as blood, infectivity has been detected only in experimental models and not in natural disease. Until recently, all cases of iatrogenic CJD have involved transfer from person to person of infectivity from high titre tissues, and the transmission of sporadic CJD through blood transfusion has not been identified despite a number of, largely epidemiological, studies[2]. Variant CJD (vCJD) is a new disease, which is caused by human infection with the agent of bovine spongiform encephalopathy (BSE)[3]. In contrast to sporadic CJD, the lymphoid tissues in vCJD, such as spleen and lymph nodes, contain infectivity, raising the possibility that blood in vCJD might pose a greater risk of onward transmission of infection.
TABLE 1
TOTAL CASES OF IATROGENIC CJD WORLD-WIDE
Mode                    Cases (n)   Mean incubation period (years)   Clinical presentation
Neurosurgery            4           1.6                               Visual/cerebellar/dementia
Depth electrodes        2           1.5                               Dementia
Corneal transplant      3           15.5 (a)                          Dementia
Dura mater              136         6 (b)                             Visual/cerebellar/dementia
Human growth hormone    162         12 (b)                            Cerebellar
Human gonadotrophin     5           13                                Cerebellar
(a) Range 1.5-30 years. (b) Estimated on incomplete data. Data courtesy of Dr P Brown.
Shortly after the identification of vCJD in 1996, a study was set up to determine whether vCJD was transmissible through blood transfusion. This study, the Transfusion Medicine Epidemiology Review (TMER), has been a joint project between the National Blood Services in the various regions of the United Kingdom (England and Wales, Scotland, Northern Ireland) and the National CJD Surveillance Unit (NCJDSU). Since 1990 this Unit has attempted to identify all cases of sporadic and vCJD in the UK through a system of voluntary referral of suspect cases by neurologists and neuro-physiologists, and of confirmed cases by neuro-pathologists.
In order to identify cases that have acted as blood donors, details of all cases classified as probable vCJD (and the handful of cases identified at post mortem) are forwarded to the National Blood Services. Details of these individual cases are then circulated to all blood donor centres to identify those cases that had acted as blood donors. Information is sought on the use of these donations, including the details of recipients of labile blood products, most commonly packed red cells. Details of the recipients are then supplied to the NCJDSU to determine whether any of these individuals themselves develop vCJD (a similar study in sporadic CJD and the reverse study, in which vCJD blood transfusions are investigated, are not described in this paper). A summary of the overall data from the study is shown in table 2, which indicates that only a minority of cases of vCJD have previously acted as blood donors, 19 out of 147 cases to date. Some of these individuals made multiple donations and a total of 50 recipients were identified by the National Blood Services. The number of recipients per year and the blood components used are shown in table 3. Some of the blood donations (N = 23, originating from 9 vCJD donors) were used in the production of fractionated plasma products.
TABLE 2: vCJD DONOR SUMMARY
Number of vCJD cases in the UK: 147
Number who were eligible to donate (i.e. aged 17 and over): 137
Number reported by relatives to have been blood donors: 27
Number of cases where donation records have been traced: 19*
Number of cases from whom components were actually issued: 16
Number of recipients identified from the 16 cases where recipient and component information is available: 50
*Donation records were traced on one case where the relatives had reported the case not to be a donor.
TABLE 3: NUMBER OF RECIPIENTS TRANSFUSED BY YEAR AND BLOOD COMPONENT GIVEN (N=50)
Years of transfusion: 1980-1984, 1985-1989, 1990-1994, 1995-1999, 2000-2003.
Blood components transfused (number of recipients): whole blood (1); red blood cells (1); red blood cells (2); red blood cells (9); whole blood (1); red blood cells (15); red blood cells - buffy coat depleted (2); red blood cells - leucodepleted (2); fresh frozen plasma (3); cryo-depleted plasma (1); cryoprecipitate (1); platelets (1); red blood cells - leucodepleted (10); fresh frozen plasma - leucodepleted (1).
In order to obtain information on outcome following transfusion of labile blood products, all the recipients are 'flagged' in order that death certificates can be identified in recipients who die; these are then forwarded to the NCJDSU. Currently 18 of the recipients are still alive (figure 1), often years after receiving the transfusion, while 32 have died (figure 2), the majority within 1 to 2 years of the transfusion as a result of the primary illness (as judged by the diagnosis on the death certificates).
FIGURE 1
RECIPIENTS OF LABILE BLOOD COMPONENTS DONATED BY vCJD CASES (still alive, n=18)
FIGURE 2
RECIPIENTS OF LABILE BLOOD COMPONENTS DONATED BY vCJD CASES (dead, n=32)
(Horizontal axis: interval from transfusion to death, years.)
In December 2003 a death certificate from one of the recipients for the first time listed a neurological condition, dementia, raising the possibility that this could be a case of vCJD[4]. The case was also referred to the NCJDSU for surveillance purposes and brain tissues obtained at post mortem were also sent to the Unit for review. The clinical course in this patient was suggestive of vCJD, although an MRI scan was not 'typical', and examination of the brain from the post mortem tissues confirmed the diagnosis of vCJD. The development of vCJD in an individual who had received a blood transfusion from a donor who themselves developed vCJD raises the possibility that the infection was transfusion transmitted. The patient in question was one of the oldest cases of variant CJD yet identified, but exposure to a dietary source of infection cannot be excluded. Taking account of the size of the recipient population, statistical analysis suggests that the chance of observing a case of vCJD in such a recipient in the absence of transfusion transmitted infection is about 1 in 15,000 to 1 in 30,000, depending on assumptions. If this was transfusion-transmitted infection, the incubation period was about 6.5 years, and the donor donated blood about three years prior to the development of clinical symptoms, both periods consistent with experimental studies of BSE transmission by blood transfusion in sheep[5,6]. In these studies Houston and Hunter demonstrated transmission of experimental BSE by blood transfusion in sheep, with the blood effecting transmission taken midway through the incubation period. All the evidence accords with the possibility that the case described was a transfusion transmitted infection. As a result of this case, the decision was made in the UK to inform surviving recipients of blood donated by vCJD cases that they were at increased risk and should not act as organ or blood donors.
In 2004 an elderly patient, who had received a blood transfusion in 1999 from a donor who subsequently developed vCJD, themselves died of an unrelated illness, a ruptured abdominal aortic aneurysm. There was no evidence of a neurological disorder but, in view of the fact that the patient was known to be at risk of vCJD, a post mortem was carried out. A range of tissues were examined and no abnormality in relation to prion disease was discovered in the brain, tonsil, appendix or large intestine, but the spleen showed evidence of immuno-staining for prion protein, as did a cervical lymph node[7]. This may therefore have been a case of pre-clinical vCJD after blood transfusion, underlining the probability that blood transfusion may act as a mechanism of transmission of vCJD from patient to patient. Furthermore, the individual with pre-clinical vCJD was discovered to be a methionine valine heterozygote at codon 129 of the prion protein gene, a genotype that has not previously been identified in vCJD. To date all tested cases of vCJD (125 out of 147) have been methionine homozygotes. This raises the possibility that the heterozygote subgroup of the general population may be susceptible to infection with BSE, and may potentially have a longer incubation period than the methionine homozygote subgroup.
In conclusion, there is now sufficient evidence to suggest that there is a probability that variant Creutzfeldt-Jakob disease can be transmitted by blood transfusion. Because of this possibility, a range of precautionary measures have already been taken in the UK and some other countries in order to minimise the risk of transfusion transmission of vCJD. The two recent cases of transfusion-transmitted vCJD underline the importance of taking precautionary measures at a time when there is uncertainty about the scientific evidence[8].
REFERENCES
1. Brown P, Preece M, Brandel J-P, Sato T, McShane L, Zerr I, Fletcher A, Will RG, Pocchiari M, Cashman NR, d'Aignaux JH, Cervenakova L, Fradkin J, Schonberger LB, Collins SJ. Iatrogenic Creutzfeldt-Jakob disease at the millennium. Neurology 2000; 55:1075-1081.
2. Wilson K, Code C, Ricketts MN. Risk of acquiring Creutzfeldt-Jakob disease from blood transfusions: systematic review of case-control studies. BMJ 2000; 321:17-19.
3. Will RG, Ironside JW, Zeidler M, Cousens SN, Estibeiro K, Alperovitch A, Poser S, Pocchiari M, Hofman A, Smith PG. A new variant of Creutzfeldt-Jakob disease in the UK. Lancet 1996; 347:921-925.
4. Llewelyn CA, Hewitt PA, Knight RSG, Amar K, Cousens S, Mackenzie J, Will RG. Possible transmission of variant Creutzfeldt-Jakob disease by blood transfusion. Lancet 2004; 363:417-421.
5. Houston F, Foster JD, Chong A, Hunter N, Bostock CJ. Transmission of BSE by blood transfusion in sheep. Lancet 2000; 356:999-1000.
6. Hunter N, Foster J, Chong A, McCutcheon S, Parnham D, Eaton S, MacKenzie C, Houston F. Transmission of prion diseases by blood transfusion. J Gen Virol 2002; 83:2897-2905.
7. Peden AH, Head MW, Ritchie DL, Bell JE, Ironside JW. Preclinical vCJD after blood transfusion in a PRNP codon 129 heterozygous patient. Lancet 2004; 364:527-529.
8. Wilson K, Ricketts MN. Transfusion transmission of vCJD: a crisis avoided? Lancet 2004; 364:477-479.
BSE IN NORTH AMERICA
MAURA N. RICKETTS, M.D., MHSc, FRCPC
Blood Safety and Health Care Acquired Infections, Ottawa, Canada
PMP TRANSMISSIBLE SPONGIFORM ENCEPHALOPATHIES: FOCUS ON PRIONS
Outline of the paper:
1. TSE basics
2. Evolution of the BSE epidemic
3. Cattle and BSE in North America
4. Management of BSE
TSE BASICS
The transmissible spongiform encephalopathies include a number of diseases found among humans and animals (Figure 1). For many of them, the actual route of transmission is poorly understood. For those that we know well, such as BSE and vCJD, there is confidence in a number of important characteristics, characteristics that determine the most appropriate public policy to adopt to protect human populations.
Figure 1
Scrapie
Scrapie spreads relatively easily among sheep populations. Environmental contamination from birth products has been postulated. Certain genetic characteristics confer relative resistance to scrapie. Scrapie arose, apparently de novo, approximately 200 years ago and spread in an epidemic fashion such that there are very few scrapie-free countries in the world. Australia and New Zealand are considered scrapie-free.
Kuru
Kuru was intensively investigated by Professor Carleton Gajdusek in Papua New Guinea in the 1950s. He determined that it was caused and transmitted by ritual cannibalism, thus identifying the first such disease.
CJD and its 'relatives' GSS and fatal insomnia ('classical' human TSEs)
Three forms are recognized - familial/genetic, acquired and sporadic - making this human TSE the only human disease transmitted by all three such routes.
Mink encephalopathy
Associated with feed.
Chronic Wasting Disease (CWD)
CWD is epidemic in parts of North America in certain deer and elk populations. It arose apparently de novo in the 1960s in the northwest USA and has been demonstrated to be spreading epidemically. The route of transmission is unclear - environmental contamination has been postulated. At this time there is no evidence that CWD can infect humans or cattle; however, ongoing investigation will inform this conclusion over time.
Bovine spongiform encephalopathy (BSE)
BSE is transmitted orally by contaminated cattle feed supplements.
Feline spongiform encephalopathy
BSE in domestic and wild cats, transmitted orally by BSE-contaminated feed.
Exotic ungulate encephalopathy
BSE in captive (zoo) ungulates, transmitted orally by BSE-contaminated feed.
vCJD
BSE in human beings was acquired by the consumption of contaminated bovine tissues. It has now been seen, with a high level of confidence, that vCJD can be transmitted between humans via blood transfusion. This means that other iatrogenic routes (certain surgical procedures, biological products made from human tissues) may be implicated. vCJD and the other forms of human spongiform encephalopathy (referred to as 'classical' CJD) are distinct from each other in pathology, natural history and cause.
EVOLUTION OF THE BSE EPIDEMIC
Although we do not know how the first case of BSE arose, epidemiologic and experimental evidence overwhelmingly supports the source of the epidemic as being the recycling of contaminated animal feed. Since the early part of the 20th century, in developed countries, those parts of the animal carcases left after the removal of high quality human food have been converted into a wide variety of products through a process called rendering. Rendering, essentially cooking and separation of animal by-products, results in, among other things, the production of high protein feed supplements for animals. It is these feed supplements that became contaminated with the prion agent responsible for BSE. Prions are known to be highly heat resistant; hence they survived the process of cooking that killed other pathologic agents. As the production of animal feed is relatively centralized, with subsequent distribution through an international (in the UK, initially national) market, the infectious agent found an effective route of distribution. Finally, the infective dose for cattle is very low, possibly less than 0.1 g of brain tissue. This situation led to a distributed point source epidemic of BSE.
Figure 2: Global BSE Epidemic Curve (peak of BSE in the UK).
Figure 3: BSE Reports in the UK and Continental Europe, by Year (OIE data; N=187,162).
Figure 2 shows the classical epidemic curve of a distributed point source epidemic. The first peak is the outbreak of BSE, principally in the UK. It can be seen that the epidemic peaked at almost 40,000 cases per annum. However, the institution of appropriate interventions, such as feed bans, led to the eventual decline of the UK epidemic. The long tail of the epidemic curve consists of two populations: those few UK cases that continue to appear despite control measures, and cases that appeared in other European countries. Although subtle, a slight swelling in the tail of the epidemic reveals the impact of strengthened surveillance programs, to be discussed later.
Figure 3 compares the epidemics of BSE in the UK and the EU. The sizes of the epidemics are very different - the scale for the UK is in tens of thousands and for the EC in hundreds. Regardless, the efficacy of interventions in the UK is apparent. In addition, it is clear that the European Community did not adopt the same measures, and regrettably, the number of cases of BSE began to climb in the EC after the first case of vCJD appeared in 1997. Additionally, in 2001-2002, the EC implemented regulations requiring active surveillance. This important measure is responsible for the jump in case reports that can be seen in the curve. At the same time, implementation of existing measures was improved and new measures were introduced, leading to control of the BSE epidemic in a number of countries.
BSE has a long incubation period; hence, while the first case was identified in 1986, modelling reveals that the first infections probably began in the late 1970s. In addition, it became clear that by the time the first 180,000 clinical cases of BSE were identified in the UK, there would have been from 1 million to 4 million animals infected in the 10 years or so preceding identification of the first clinical case. As most cattle are killed at two years of age, it is feared that no less than 446,000 infected animals entered the human food chain before the first control measures were introduced in November 1989, and a further 283,000 between December 1989 and December 1995. The European Union review of the epidemiology of BSE in the UK notes that between 1988 and 1993 the prevalence of BSE in cattle was approximately 5%, i.e. 1 in 20 animals. In the UK, BSE affected 59% of dairy herds, 15% of beef suckler herds and 34% of herds with adult breeding cattle. The risk from contaminated MBM was highest between 1986 and 1990, peaking in 1989, when SBO (specified bovine offals) were excluded from the human food chain but included in rendering and feed production.
Studies of native cattle populations in the UK revealed that BSE is rare under 30 months of age, and very rare under 24 months of age. Experimental studies (the pathogenesis study) of the natural history of BSE found that disease could develop in animals under 12 months; however, this has never been observed in a native cattle herd. These same pathogenesis studies confirmed that BSE infectivity is not spread evenly through the body of infected cattle. During the natural history of the disease, there is transient infection in bone marrow and the distal ileum, and possibly other tissues; however, over 97% of the infectivity of cattle is found in the brain, spinal cord and associated spinal nerves. The quantity of infectivity increases during the incubation period of BSE, maximizing during clinical illness.
There are lower levels of infectivity before clinical illness; however exactly when the infectivity levels pose a danger to humans (or other animals) is uncertain. Animal models demonstrate that infectivity arises at approximately 50% of the incubation period. Applied to cattle, but not known from natural history studies in native infected herds, it can be inferred that some cattle tissues may become infective around 2.5-3 years after infection.
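To make the scale of the under-ascertainment described above concrete, the back-of-envelope sketch below (Python, not part of the original analysis) combines the round figures quoted in the text - roughly 180,000 clinical cases against an estimated 1-4 million infections, a typical slaughter age of about two years and an incubation period of roughly five years - to show what fraction of infected animals could ever have been detected clinically. The calculation itself is only illustrative.

```python
# Rough, illustrative arithmetic only -- not the back-calculation model used in the UK.
# All round figures are those quoted in the text above.
clinical_cases = 180_000                                  # clinical BSE cases identified in the UK
infections_low, infections_high = 1_000_000, 4_000_000    # modelled total infections

frac_low = clinical_cases / infections_high    # smallest plausible detected fraction
frac_high = clinical_cases / infections_low    # largest plausible detected fraction
print(f"Fraction of infections ever seen clinically: {frac_low:.1%} to {frac_high:.1%}")

# Why so few: most cattle are slaughtered at about 2 years of age, while the
# incubation period is roughly 5 years, so most infected animals entered the
# food chain long before they could show clinical signs.
slaughter_age_years, incubation_years = 2, 5
print(f"Typical slaughter age ({slaughter_age_years} y) is well short of the "
      f"incubation period (~{incubation_years} y).")
```

On these assumed figures, only about 5-18% of infected animals would ever have been seen as clinical cases, which is why a single detected case is best read as the tip of an iceberg.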
There are important policy implications to the information gleaned from the natural history studies. For example, it is clear that younger animals are 'safer' for consumption (under 30 months, and especially under 24 months), that downer cattle (fallen stock and emergency slaughter) are at higher risk, as are those that consumed high-risk feed or that were associated with a known BSE case. The most important measures to control BSE infectivity and improve safety are those that focus on the removal of high-risk tissues (SRM). Some tissues have high levels of safety and are considered by the OIE to be of no risk to trade regardless of the BSE status of the country of origin. These include: milk and milk products; sperm/semen/embryos; protein-free tallow; and gelatine from specific sources. Skeletal muscle meat (boneless beef) is considered by the WHO to be safe for consumption, with the caveat that no one should consume food from an animal known to have a TSE.
The specified risk materials (SRM), in order of infectivity, are as follows: brain, spinal cord, dorsal root ganglia, distal ileum, trigeminal ganglia, and other tissues found in the head, e.g. eyes. There are some tissues at low or no risk of having infectivity; however, these can become contaminated during slaughter, i.e. advanced meat recovery systems leaving spinal cord and dorsal root ganglia in recovered meat, cuts of meat sold with vertebrae attached, contamination of cheek meat by brain etc. when the head is split, and the use of captive bolt stunning using air injection.
Figure 4 shows data from the UK demonstrating the importance of active surveillance. The first column shows the year of reporting. The second column shows the projected number of cases (the range in brackets) over the time period. The final column shows the number of cases that were reported, with the number of cases identified solely through active surveillance in brackets. In all time periods, the number of reports was at least doubled through active surveillance. This experience was repeated in all countries reporting BSE that also implemented active surveillance.
Figure 4: The Effect of Active Surveillance on Identification and Reporting of BSE Cases in the UK
Year               Expected reports from passive surveillance (range)   Total reports received (cases identified through active surveillance)
2001               504 (353-655)                                        1039 (594 active)
2002               183 (93-273)                                         781 (332 active)
2003 (10 months)   57 (7-107)                                           410 (281 active)
Figure 5: Total number of countries reporting indigenous BSE cases, by year (Source: OIE, 5 August 2004).
The other important consequence of the introduction of active surveillance was the discovery of the first BSE reports in a number of countries. Figure 5 displays the increase in the number of countries reporting their first case in the period after the introduction of active surveillance. In a few of these countries, dozens of cases were discovered within months, indicating an existing, albeit low-level, epidemic of BSE.
Figure 6: Impact of BSE Testing on Rates of BSE (introduction of the EC feed ban indicated; OIE data, 10 June 2004).
Figure 6 shows the rates of BSE in European countries with relatively larger epidemics (not including the UK). These countries did not implement all the measures used in the UK, yet clearly good surveillance and appropriate interventions demonstrate that control of BSE is possible before large epidemics develop. Important policy implications follow from the preceding information. Among the most important messages is that 'passive' surveillance, that is, surveillance dependent upon reporting from farms and veterinary services, is inadequate. The most appropriate metaphor for a single BSE case is the tip of an iceberg: because BSE epidemics are propagated from common point sources, by the time the first case appears there have already been numerous infections. It is also important to understand that the management of BSE is now well understood. Additionally, measures can be implemented incrementally and can be made to measure for the level of risk.
CATTLE AND BSE IN NORTH AMERICA
The North American cattle market is highly integrated among the three NAFTA partners. Since 1998, the three countries have held tripartite meetings to discuss risk assessment and risk management. Figure 7 shows the size of the cattle populations in the three countries. The nature of animal husbandry varies within and between the countries, with Mexico tending toward family/pastoral animal husbandry, and the US tending toward agri-industry with large handling facilities and feedlots under professional management.
Figure 7: Cattle population (millions), by country (Mexico: 30.1).
In North America, the dairy cattle population is 25% of the cattle population. In Europe and on family-run dairy farms, animals may live quite a long time, up to 10 years before going to slaughter; however, it is not uncommon for dairy cattle to be slaughtered younger as their ability to bear calves and produce milk diminishes. Dairy cattle may become human food and animal feed at slaughter. The offspring of dairy cattle are removed from their mothers at birth and are fed milk replacers. There is some concern regarding the risks of BSE from milk replacers, as they may contain highly refined products of rendering. Most beef cattle are slaughtered between 18 and 24 months. Feed choices for these animals vary. Supplements such as MBM are generally used in intensive livestock systems for dairy cattle, feedlot cattle and feedlot sheep. In North America, as in other countries, soy and vegetable protein are cheaper than MBM. Grass and grain are widely available in some parts of the region and during some parts of the year, making supplements neither necessary nor desirable. However, MBM supplements are still used for poultry and pigs. It is possible for cross-contamination to occur at any level of supplement production, from the slaughter-house to the farmyard. Figure 8 indicates the level of confidence of risk assessors regarding the risk of BSE appearing in any of the three countries. Extensive reviews and multilateral assessments supported the assessment that the level of risk of BSE was very low or absent. However, the European Union's assessment was that while BSE was not known to be present, the risk could not be denied ("unlikely but not excluded", GBR II).
Figure 8: Self- and Peer-assessed Risk of BSE in Canada, the USA and Mexico.
The European Community reviews the risk of BSE being present in a country, upon request from that country, for the purposes of determining trade risks to the EC. This review is called the Geographic-based Risk Review (GBR). Four levels of risk exist, varying from no risk to a high level of known risk (see Figure 9). Some of the information used by the EC includes information garnered from UK Customs and Excise data, such as that seen in Figure 10.
Figure 9: Geographically-based BSE risk assessment. Category I: highly unlikely; Category II: unlikely but not excluded; Category III: likely but not confirmed, or confirmed at a lower level; Category IV: confirmed, at a higher level. Figure 9 shows the EC GBR, last updated August 2002 and based upon the 55 countries reviewed at that point.
Imports of MBM, offals, meat, meat preparations and live bovines could be assessed (see Figure 10) to investigate the risk that BSE has been imported into a country. One of the first observations to be made from Figure 10 is the extent of global movement of these products. There are important limitations to this data, as follows: export and import records may not tally (this was observed in Canada); repackaging and onward sales obscure the origin of trade goods (the UK exported MBM to European countries, and labelling need not indicate the UK as the country of origin); illegal or uncontrolled movements are not reported; export data do not describe how the imports were used (some uses may carry no risk of BSE transmission); and the amount of infectivity in any exported product certainly varies over time, but cannot be defined exactly.
IMPORTED BSE
The first case of BSE in North America was discovered in 1993. It was reported in Canada in a cow imported from the UK in 1987. The entire herd was depopulated, and both Canada and the US decided to trace and identify all cattle imported from the UK and to prevent further importation. However, some cattle could not be traced.
1ST BSE CASE IN THE NATIONAL HERD
The first case in the national herd was reported in May 2003, in Canada. BSE was reported in a cow with pneumonia, which had therefore been sent for slaughter and categorized 'not fit for human consumption'. The investigation was completed by June, with the final report issued by July 2004. Eighteen herds were quarantined (all but 5 steers were found in Canada). Seventeen hundred animals were depopulated, from which 1,500 samples were tested. All tests were negative. The case was attributed to feed, and since the MBM imports into Canada were traced and thought not to be relevant to BSE transmission, it was postulated that the feed was possibly contaminated by the original live cattle imports from the UK.
2ND BSE CASE IN THE NATIONAL HERD
The second BSE case was reported in the US (Washington State) in a 6½-year-old dairy cow in December 2003. Within a short time, the case was linked through DNA testing to Canada. It was determined that the cow had been born in 1997. The investigation determined that there were 114 animals linked to the index case in Canada. Twelve were tested (all negative), as the remainder were already deceased or lost to follow-up. One hundred and eighty-nine ranches were checked in the US, from which 255 animals were tested. Of the cattle investigated, 11/25 US cattle were known to have eaten the same feed but have been lost to follow-up. Twenty-nine of eighty-one (29/81) of the birth cohort in Canada are lost to follow-up. It was determined that this case was not linked to case 1. The index case was linked to over 2,000 tons of MBM, all of which was destroyed. Approximately 10,000 lbs of raw meat distributed by the processor were recalled. The SRMs were not processed. The case was attributed to feed, as the animal was born 6 months before the feed ban. The testing of large numbers of healthy animals may not be as reassuring as might be thought. Figure 11 demonstrates that even in countries with much larger epidemics of BSE, finding positive samples among healthy animals is quite rare.
Figure 11: Number of positive BSE tests in healthy slaughter cattle in countries with internal epidemics of BSE.
MANAGING BSE
In terms of the actions being taken or considered, there is a wide range of appropriate actions. The appropriate interventions are tailored to the risk levels within each country, and it is not uncommon for interventions to be implemented iteratively as further information is gathered, for instance from surveillance data.
1. Prohibition of Specified Risk Materials (SRMs) from the human food and animal feed chain. August 2003: SRM ban for human consumption in Canada. January 2004: SRM bans for animal feed in Canada and the US.
2. Exclusion of raw materials containing potentially infectious tissues from Advanced Meat Recovery/Mechanically Recovered Meat processes. Discussion is underway, including discussion of stunning processes, owing to concerns that brain tissue may embolize throughout the body of slaughtered animals when certain types of stunning devices are used.
3. Increase in the number of animals tested in the existing surveillance program, to obtain a more accurate picture of BSE (Figure 12). In 1990, the first surveillance for clinical BSE was initiated in the US; in 1992, surveillance for clinical BSE was initiated in Canada. By 2004, both countries had announced enhanced surveillance and tracking systems. At this time, however, the US tests 20,000 brains per annum, which is less than 0.1% of the 36 million cattle slaughtered each year. Canada is testing approximately 5,000-10,000 cattle per annum; the OIE requirement for Canada will be around 30,000 per annum.
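As a rough cross-check of the surveillance coverage figures just quoted, the short sketch below (illustrative only; the testing and slaughter numbers are the approximate values given in the text) computes the fraction of slaughtered cattle actually tested.

```python
# Illustrative calculation only; figures are the approximate values quoted in the text above.
us_tests_per_year = 20_000
us_cattle_slaughtered_per_year = 36_000_000
canada_tests_low, canada_tests_high = 5_000, 10_000
canada_oie_requirement = 30_000

us_coverage = us_tests_per_year / us_cattle_slaughtered_per_year
print(f"US testing coverage: {us_coverage:.3%} of slaughtered cattle (well under 0.1%)")
print(f"Canada: {canada_tests_low:,}-{canada_tests_high:,} tests/year versus an "
      f"OIE requirement of roughly {canada_oie_requirement:,}")
```

On these figures the US coverage works out to roughly 0.06% of slaughtered animals, which is the basis for the statement that current testing volumes are far below what would be needed for a clear picture.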
Figure 12: The 4 D's. Animals found dead (dead stock); animals that are non-ambulatory (downers); animals presented for emergency slaughter (distressed); animals sent to slaughter that are found to be sick at antemortem inspection (diseased).
Figure 13: Summary of Issues in BSE.
What is safe to eat? Milk and milk products are safe for trade regardless of the BSE status of the country of origin. WHO: skeletal muscle meat is safe to eat, in the context that no tissue from an animal with BSE or suspected of BSE should be included in any food chain, human or animal. This implies adequate surveillance and implemented methods to ensure the removal of sick animals from the food chain, i.e. a BSE control program.
What should be done to evaluate the existing risk within any country's boundaries? What was imported? How was it used? Bovine feed is the biggest problem. Non-bovine animal feed issues that can allow cross-contamination to occur in farmyards, and human exposure through food, should also be evaluated. Is there any rendering (within or outside the country's boundaries)? Recycling opportunities should place the country on high alert.
What should be done to prevent further global exposure? National control programs should include animal feed regulations, appropriate slaughter methods, handling of SRMs and adequate surveillance. International trade in animal feed must be reviewed. Appropriate trade barriers should be placed on some human foods.
At a meeting held at the WHO in 1998, the consultants concluded that 'the eradication of BSE must remain the principle public health objective of national and international animal health control authorities'.
ROLE OF THE POLYMORPHISM AT CODON 129 OF THE PRION PROTEIN GENE IN THE PHENOTYPIC EXPRESSION OF GERSTMANN-STRAUSSLER-SCHEINKER DISEASE ASSOCIATED WITH THE F198S MUTATION
BERNARDINO GHETTI, LETICIA MIRAVALLE, KELTI YAMAGUCHI, FRANCINE EPPERSON, JILL R. MURRELL, TONY PERKINS, SIU HUI, BRADLEY S. GLAZIER, MARTIN R. FARLOW, PEDRO PICCARDO*, STEPHEN DLOUHY
1 Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN
2 Department of Medicine, Indiana University School of Medicine, Indianapolis, IN
3 Department of Neurology, Indiana University School of Medicine, Indianapolis, IN
4 Department of Medical and Molecular Genetics, Indiana University School of Medicine, Indianapolis, IN
*Dr. Piccardo is currently at the Center for Biologics Evaluation and Research, Food and Drug Administration, Rockville, MD 20852.
INTRODUCTION
Gerstmann-Straussler-Scheinker disease (GSS) is a genetically determined, adult-onset, progressive neurodegenerative disease associated with Prion Protein (PRNP) gene mutations, which lead to the formation of amyloidogenic degradation products of the prion protein (PrP). The clinical phenotype is characterized by a constellation of signs and symptoms; however, cerebellar ataxia, akinetic parkinsonism, pyramidal signs, and cognitive decline are the most commonly observed, particularly at onset. These clinical characteristics may be present in various combinations and in varying degrees of severity. The central elements of the neuropathologic phenotype are amyloid plaques and diffuse deposits resulting from the accumulation of PrP degradation products. These plaques and deposits are most numerous in the cerebral cortex, basal ganglia, and cerebellar cortex. In addition, the neuronal pathology may vary with the PRNP gene mutation. GSS is inherited in an autosomal dominant pattern with almost 100% penetrance. The PRNP gene is located on the short arm of chromosome 20 and encodes PrP. Currently, ten point mutations in the PRNP gene are known to be associated with GSS. In one large kindred, GSS has been linked to a point mutation at codon 198 of the PRNP gene (TTC to TCC); the mutation is predicted to result in a phenylalanine (F) to serine (S) substitution at residue 198 (F198S)[4]. The mutation was found to be in coupling with GTG, which is predicted to result in valine (V) at residue 129 (129V) of the PrP. Codon 129 is normally either ATG or GTG, thus resulting in either methionine (M) or V being encoded at residue 129 of the PrP[6]. The genotype at codon 129 has been shown to be associated with variations in the clinical and pathological phenotypes associated with prion diseases, as well as with susceptibility to some types of these disorders. It has been suggested that individuals carrying the F198S PRNP mutation (198F/S) and who are homozygous for valine at codon 129 (129V/V) may have an earlier age at onset of symptoms than individuals who are heterozygous methionine-valine (129M/V)[4]. This hypothesis was tested using a large cohort of individuals from this kindred.
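Since the analysis below turns entirely on which amino acid is encoded at codon 129 on each PRNP allele, the following minimal sketch (not the authors' analysis code; the function and variable names are invented for illustration) shows how the M/M, M/V and V/V genotypes follow from the ATG and GTG codons described above.

```python
# Minimal sketch (not the authors' code); names are invented for illustration.
# At codon 129 of PRNP, ATG encodes methionine (M) and GTG encodes valine (V).
CODON_129 = {"ATG": "M", "GTG": "V"}

def codon129_genotype(allele1: str, allele2: str) -> str:
    """Return the codon 129 genotype, e.g. 'M/V', from the codons on the two alleles."""
    amino_acids = sorted(CODON_129[a.upper()] for a in (allele1, allele2))
    return "/".join(amino_acids)

# Hypothetical examples:
print(codon129_genotype("ATG", "ATG"))   # M/M  (methionine homozygote)
print(codon129_genotype("ATG", "GTG"))   # M/V  (heterozygote)
print(codon129_genotype("GTG", "GTG"))   # V/V  (valine homozygote)
```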
METHODS
Family History and Disease Phenotype
The pedigree of this family dates back to the year 1792. Clinically affected members have been recorded in every generation. The main clinical features are clumsiness in walking evolving into ataxia, gradual deterioration of short-term memory, bradykinesia, rigidity, mild tremor, dysarthria, and cognitive impairment evolving into dementia. Neuropathologically, the distinctive feature of this GSS variant is the coexistence of insoluble tau filaments, which are seen as neurofibrillary tangles in neurons of the cerebral cortex and subcortical nuclei. The PrP-immunoreactive amyloid deposits are seen in the cerebrum and cerebellum.
Subjects
For this study, eighty-six members of this kindred were selected. Of these eighty-six, twenty were deceased and sixty-six were living at the time of genetic analysis.
Clinical Analysis
A retrospective review of the medical history was carried out for individuals who were genetically confirmed to carry the F198S mutation or who were clinically affected. The following items were reviewed: i) neurological anamnesis, ii) age at onset of the neurological symptoms, iii) neurological examinations, iv) psychiatric symptoms, v) age at death, and vi) duration of illness.
Neuropathologic Analysis
A re-examination of the histological and immunohistochemical brain tissue sections was carried out.
Molecular Genetic Analysis
Genomic DNA was extracted from peripheral blood, frozen brain tissue and formalin-fixed, paraffin-embedded brain tissue. The open reading frame of the PRNP gene was amplified using two sets of primers: F1 (5'-ACC CAC AGT CAG TGG AAC AAG C-3') and R1 (5'-TAA AAG GGC TGC AGG TGG ATA-3'); F2 (5'-AGC AGC TGA TAC CAT TGC TAT-3') and R2 (5'-GGT AAC GGT GCA TGT TTT CAC G-3'). PCR amplification reactions were carried out in 50 μl volumes, each containing 500 ng genomic DNA, 0.2 mM dNTP, 1.5 mM MgCl2, 0.5 μM of each primer (F1/R1 and F2/R2), 10 mM Tris-HCl, pH 8.3, and 50 mM KCl. Thirty cycles of 94°C for 1 min, 55°C for 1 min and 72°C for 1 min were run in a thermal cycler (Eppendorf). The amplification products were separated on 1.5% agarose gels in TBE, visualized by ethidium bromide staining, gel purified (Qiagen) and sequenced on a CEQ 2000XL DNA analysis system (Beckman Coulter).
Statistical Analysis
We used Kaplan-Meier survival analysis to test for differences between individuals with the 198F/S-129M/V and 198F/S-129V/V genotypes for several different outcomes. The first two outcomes are the age at clinical onset (time to clinical onset) and the age at death (time to death). In a separate analysis, we looked at the subset of patients who displayed clinical symptoms to determine if the time from onset of clinical symptoms to death differed between the two groups. For each analysis, patients were censored at their last time of follow-up if they did not reach that outcome. We used Kaplan-Meier estimation to model the survival curves for the two groups and the Wilcoxon test to determine if the survival curves were significantly different between the two groups.
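The Kaplan-Meier product-limit estimator used above can be sketched in a few lines; the code below is an illustration only, with invented toy data rather than the study's data, and simply shows how censored ages at onset enter the estimate for the two genotype groups.

```python
# Illustrative sketch only, with invented toy data -- not the study's data or code.
# Kaplan-Meier product-limit estimate of "still symptom-free" probability by age.

def kaplan_meier(times, events):
    """Return (time, survival) pairs; events[i] is True for onset, False for censoring."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e)      # onsets at age t
        c = sum(1 for tt, e in data if tt == t and not e)  # censored at age t
        if d:
            surv *= (1 - d / n_at_risk)
            curve.append((t, surv))
        n_at_risk -= d + c
        i += d + c
    return curve

# Toy ages (years); False = still unaffected at last follow-up (censored)
mv_ages   = [50, 55, 57, 60, 63, 48, 52]
mv_events = [True, True, True, False, False, True, False]
vv_ages   = [42, 45, 46, 47, 50, 44]
vv_events = [True, True, True, True, False, True]

print("129M/V:", kaplan_meier(mv_ages, mv_events))
print("129V/V:", kaplan_meier(vv_ages, vv_events))
# The study then compared the two curves with the Wilcoxon test; a statistics
# package (e.g. scipy or lifelines) would normally be used for that step.
```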
RESULTS
Forty-four individuals (51%) of the total analyzed were carriers of the PRNP F198S mutation. Of these, 32 individuals (73%) were 198F/S-129M/V and twelve individuals (27%) were 198F/S-129V/V. Seventeen of the 198F/S-129M/V individuals (53%) and eight of the 198F/S-129V/V individuals (67%) have shown clinical symptoms. 129V/V individuals had a significantly (p < 0.001) shorter time to onset of clinical symptoms (median age at onset 46.3 years) than 129M/V individuals (median age at onset 55.5 years). The time to death was significantly (p = 0.006) shorter for 129V/V individuals than for 129M/V individuals (median age at death: 52.9 vs. 62.6 years). The time to death after onset of clinical symptoms was not significantly different (p = 0.790) between 129V/V individuals (median time to death: 8.1 years) and 129M/V individuals (median time to death: 5.5 years).
DISCUSSION
The polymorphic site at codon 129 of the PRNP gene seems to play an important role in the phenotypic expression of human prion disorders. Homozygosity at position 129 has been shown to influence susceptibility to iatrogenic and sporadic Creutzfeldt-Jakob disease (CJD)[7,8]. The presence of the D178N mutation and 129M or 129V on the same PRNP allele determines whether the disease will be Fatal Familial Insomnia (FFI) or familial CJD[9]. The presence of the P102L mutation and 129M or 129V on the same PRNP allele determines two different GSS phenotypes[11]. A previous study suggested that individuals carrying the F198S PRNP mutation and homozygous for valine at codon 129 may have an earlier age at onset than methionine-valine heterozygotes[4]. The data presented here strongly suggest that the polymorphism at codon 129 of the PRNP gene plays a major role in the phenotypic expression of GSS F198S. Individuals homozygous for valine at residue 129 develop the symptoms of GSS earlier than individuals heterozygous for methionine-valine at residue 129. In addition, those same homozygotes died at an earlier age than the heterozygotes. Both of these differences were statistically significant. The results of this study not only confirm the hypothesis previously put forward by Dlouhy et al., but also add strong evidence to the concept that the polymorphic site at codon 129 influences variations in the phenotypes associated with hereditary prion diseases[4].
ACKNOWLEDGEMENTS
This study was supported by PHS P30 AG10133. No official endorsement of this article by the FDA is intended or should be inferred.
REFERENCES
1. Ghetti B, Dlouhy SR, Giaccone G, Bugiani O, Frangione B, Farlow MR, Tagliavini F. Gerstmann-Straussler-Scheinker disease and the Indiana Kindred. Brain Pathol 1995; 5:61-75.
2. Sparkes RS, Simon M, Cohn VH, Fournier RE, Lem J, Klisak I, Heinzmann C, Blatt C, Lucero M, Mohandas T. Assignment of the human and mouse prion protein genes to homologous chromosomes. Proc Natl Acad Sci USA 1986; 83:7358-62.
3. Ghetti B, Tagliavini F, Bugiani O, Piccardo P. Gerstmann-Straussler-Scheinker disease. In: Dickson D (Ed.), Neurodegeneration: The Molecular Pathology of Dementia and Movement Disorders. 2002, pp. 318-325.
4. Dlouhy SR, Hsiao K, Farlow MR, Foroud T, Conneally PM, Johnson P, Prusiner SB, Hodes ME, Ghetti B. Linkage of the Indiana kindred of Gerstmann-Straussler-Scheinker disease to the prion protein gene. Nat Genet 1992; 1:64-67.
5. Hsiao K, Dlouhy SR, Farlow MR, Cass C, Da Costa M, Conneally PM, Hodes ME, Ghetti B, Prusiner SB. Mutant prion proteins in Gerstmann-Straussler-Scheinker disease with neurofibrillary tangles. Nat Genet 1992; 1:68-71.
6. Owen F, Poulter M, Collinge J, Crow TJ. A codon 129 polymorphism in the PRIP gene. Nucleic Acids Res 1990; 18:3103.
7. Palmer MS, Dryden AJ, Hughes JT, Collinge J. Homozygous prion protein genotype predisposes to sporadic Creutzfeldt-Jakob disease. Nature 1991; 352:340-342. Erratum in: Nature 1991; 352:547.
8. Collinge J, Palmer MS, Dryden AJ. Genetic predisposition to iatrogenic Creutzfeldt-Jakob disease. Lancet 1991; 337:1441-2.
9. Brown P, Cervenakova L, Goldfarb LG, McCombie WR, Rubenstein R, Will RG, Pocchiari M, Martinez-Lage JF, Scalici C, Masullo C. Iatrogenic Creutzfeldt-Jakob disease: an example of the interplay between ancient genes and modern medicine. Neurology 1994; 44:291-3.
10. Goldfarb LG, Petersen RB, Tabaton M, Brown P, LeBlanc AC, Montagna P, Cortelli P, Julien J, Vital C, Pendelbury WW. Fatal familial insomnia and familial Creutzfeldt-Jakob disease: disease phenotype determined by a DNA polymorphism. Science 1992; 258:806-8.
11. Young K, Clark HB, Piccardo P, Dlouhy SR, Ghetti B. Gerstmann-Straussler-Scheinker disease with the PRNP P102L mutation and valine at codon 129. Mol Brain Res 1997; 44:147-150.
UPDATE ON THE PATHOGENESIS OF TRANSMISSIBLE SPONGIFORM ENCEPHALOPATHIES
HERBERT BUDKA, MD MSCD HC
Institute of Neurology, Medical University of Vienna, Austria.
Since 1996, public concern about bovine spongiform encephalopathy (BSE) or mad cow disease and its transmission to humans has oscillated between utmost panic and indifference. For about two years, indifference, almost negligence, has prevailed. After having soared with the introduction of active surveillance by the mass testing of cattle at slaughter, BSE incidence figures are declining in most (but not all) EU countries (http://www.oie.int/eng/info/en-esbincidence.htm). For variant Creutzfeldt-Jakob disease (vCJD), which is the result of human contact with the BSE agent, there is statistically significant evidence that the epidemic in the UK is no longer increasing exponentially but may have reached a peak and is currently in decline (1). In spite of the approval of a new "Network of Excellence", NeuroPrion, the EU research budget for transmissible spongiform encephalopathies (TSEs) is at its lowest since 1996. This sharply contrasts with the USA, where TSE research has recently been identified as a top priority, worthy of much higher funding (2). All this seems to emerge from, and contribute to, a general feeling of complacency that many feel to be appropriate now. Or is this cause for alarm? (3)
Mostly unnoticed by the public, significant inroads were recently made into the scientific understanding of TSEs; they help to keep a balanced view on the urgency of TSE problems. I will try to briefly review here the progress achieved in 2003 and in the first half of 2004.
With the recent EU enlargement, countries with little or no surveillance of either animal or human TSEs in the past are forced to apply the same costly measures that have been implemented in the 15 old EU member states since at least 2001. One of the main reasons that national surveillance for CJD continues to be mandatory is the need for public health measures if vCJD is detected, in order to avoid secondary human-to-human transmissions. Blood and blood products have been implicated as potential carriers in this type of transmission. Indeed, this concern was recently emphasized when the first possible vCJD transmission by blood transfusion was reported (4). Moreover, a second likely vCJD transmission by blood transfusion was reported in a person who died from an unrelated disease, without neurological symptoms, and who was heterozygous for methionine/valine at the polymorphic codon 129 of the prion protein gene (5). This first report of vCJD infection in a 129 heterozygote demonstrates that apparently all prion protein genotypes are susceptible to BSE/vCJD prions.
While current data on the vCJD epidemic in the UK seem reassuring, this does not exclude the possibility of further peaks in the future. This might occur if genotypes other than methionine homozygotes at the polymorphic codon 129 of the prion protein (PrP) gene PRNP, and people possibly exposed to smaller infectivity doses, start to manifest disease (6). Of still more concern is a recent UK screening study (7) of lymphoid tissues that might harbour disease-associated PrP (PrPd) even before manifest vCJD (8). Three samples out of 12,674 were positive, giving an estimated prevalence of 237 per million (95% CI 49-692 per million) (7). These estimates are in contrast to the currently observed declining vCJD incidence and add significant uncertainty to its future development.
For sporadic CJD (sCJD), where only CNS and eye tissues had been considered to carry infectivity, the development of more sensitive detection techniques resulted in the demonstration of PrPd in peripheral tissues such as lymphoid organs and muscle (9). Pre-existing muscle disease appears to carry a risk of massively upregulated PrPd production in muscle if CJD develops in such a patient (10). These observations are clearly of importance
when considering measures to prevent iatrogenic CJD. Detection of PrPd in the olfactory mucosa of sCJD patients was proposed as a new diagnostic - although invasive - option (11, 12).
PrPd was recently also detected in muscle of sheep with natural and experimental scrapie (13); it remains to be seen whether such data bear new implications for further protective measures in the human food chain. While scrapie in sheep has not been found to be linked to human disease, public health concern about sheep is based on the possibility that the human-pathogenic BSE agent has entered sheep and masquerades there as scrapie (14). Thus diagnostic methods that differentiate BSE in sheep from scrapie are of utmost importance but still under development (15, 16). Another concern emerges from the fact that sheep genotypes that were previously considered to be resistant to scrapie may also be susceptible (17). Improved surveillance of TSEs in animals recently resulted in the recognition of infections with atypical characteristics, both in small ruminants and in bovines (15, 18-20). It is unknown at present whether this means that there might be more than the one already identified animal source ("classical" BSE) for human disease.
Recent research on the nature of TSE agents or prions and their pathogenesis confirmed the essential role of PrPd for disease. In an elegant, conditional knock-out mouse model, brain pathology could be reversed and clinical disease manifestation cured when neuronal PrP was ablated at a later age (21). Interesting studies of prion-like propagation and the character of different strains were recently published in the yeast model (22, 23). Still more basic to prion propagation seems to be the requirement for host-derived RNA (24). Although this seems to reopen the discussion on virus-like nucleic acids in prions, a recent report from Stanley B. Prusiner's laboratory on the transmission capacity of synthetic prions (recombinant PrP in amyloid form) (25) is likely to signal the final proof of the protein-only hypothesis, if these results can be confirmed elsewhere. It will be important to engineer synthetic prions not only from a complicated transgenic-mouse model, as has so far been achieved in Prusiner's laboratory (25), but also in wild-type mice, and by using the most stringent controls.
Indeed, the past year was most exciting in prion disease research. Provocatively put, it has been the end of the beginning of our understanding in prion science, and hopefully the beginning of the end of scientifically unfounded panicking.
REFERENCES
1. Andrews NJ. Incidence of variant Creutzfeldt-Jakob disease onsets and deaths in the UK, January 1994 - December 2004. 2005. Available from: http://www.cjd.ed.ac.uk/vcjdq.htm
2. Erdtmann R, Sivitz LB, editors. Advancing Prion Science: Guidance for the National Prion Research Program. Washington, DC: The National Academies Press; 2003.
3. Anonymous. vCJD complacency - cause for alarm? Lancet Neurol 2003;2(1):1.
4. Llewelyn CA, Hewitt PE, Knight RSG, Amar K, Cousens S, Mackenzie J, et al. Possible transmission of variant Creutzfeldt-Jakob disease by blood transfusion. Lancet 2004;363:417-421.
5. Peden AH, Head MW, Ritchie DL, Bell JE, Ironside JW. Preclinical vCJD after blood transfusion in a PRNP codon 129 heterozygous patient. Lancet 2004;364:527-529.
6. Ghani AC. Predicting the unpredictable: the future incidence of variant Creutzfeldt-Jakob disease. Int J Epidemiol 2003;32(5):792-793.
7. Hilton DA, Ghani AC, Conyers L, Edwards P, McCardle L, Ritchie D, et al. Prevalence of lymphoreticular prion protein accumulation in UK tissue samples. J Pathol 2004;202.
8. Hilton DA, Fathers E, Edwards P, Ironside JW, Zajicek J. Prion immunoreactivity in appendix before clinical onset of variant Creutzfeldt-Jakob disease (letter). Lancet 1998;352:703-704.
9. Glatzel M, Abela E, Maissen M, Aguzzi A. Extraneural pathologic prion protein in sporadic Creutzfeldt-Jakob disease. N Engl J Med 2003;349(19):1812-1820.
10. Kovacs GG, Lindeck-Pozza E, Chimelli L, Araujo AQC, Gabbai AA, Strobel T, et al. Creutzfeldt-Jakob disease and inclusion body myositis: abundant disease-associated prion protein in muscle. Ann Neurol 2004;55(1):121-125.
11. Tabaton M, Monaco S, Cordone MP, Colucci M, Giaccone G, Tagliavini F, et al. Prion deposition in olfactory biopsy of sporadic Creutzfeldt-Jakob disease. Ann Neurol 2004;55(2):294-296.
12. Zanusso G, Ferrari S, Cardone F, Zampieri P, Gelati M, Fiorini M, et al. Detection of pathologic prion protein in the olfactory epithelium in sporadic Creutzfeldt-Jakob disease. N Engl J Med 2003;348(8):711-719.
13. Andreoletti O, Simon S, Lacroux C, Morel N, Tabouret G, Chabert A, et al. PrPSc accumulation in myocytes from sheep incubating natural scrapie. Nat Med 2004;10:591-593.
14. Hunter N. Scrapie and experimental BSE in sheep. Br Med Bull 2003;66(1):171-183.
15. Lezmi S, Martin S, Simon S, Comoy E, Bencsik A, Deslys J-P, et al. Comparative molecular analysis of the abnormal prion protein in field scrapie cases and experimental bovine spongiform encephalopathy in sheep by use of Western blotting and immunohistochemical methods. J Virol 2004;78(7):3654-3662.
16. Nonno R, Esposito E, Vaccari G, Conte M, Marcon S, Di Bari M, et al. Molecular analysis of cases of Italian sheep scrapie and comparison with cases of bovine spongiform encephalopathy (BSE) and experimental BSE in sheep. J Clin Microbiol 2003;41(9):4127-4133.
17. Houston F, Goldmann W, Chong A, Jeffrey M, Gonzalez L, Foster J, et al. Prion diseases: BSE in sheep bred for resistance to infection. Nature 2003;423:498.
18. Buschmann A, Biacabe A-G, Ziegler U, Bencsik A, Madec J-Y, Erhardt G, et al. Atypical scrapie cases in Germany and France are identified by discrepant reaction patterns in BSE rapid tests. J Virol Meth 2004;117:27-36.
19. Casalone C, Zanusso G, Acutis P, Ferrari S, Capucci L, Tagliavini F, et al. Identification of a second bovine amyloidotic spongiform encephalopathy: molecular similarities with sporadic Creutzfeldt-Jakob disease. Proc Natl Acad Sci USA 2004;101(9):3065-3070.
20. Benestad SL, Sarradin P, Thu B, Schonheit J, Tranulis MA, Bratberg B. Cases of scrapie with unusual features in Norway and designation of a new type, Nor98. Vet Rec 2003;153(7):202-208.
21. Mallucci G, Dickinson A, Linehan J, Klohn P-C, Brandner S, Collinge J. Depleting neuronal PrP in prion infection prevents disease and reverses spongiosis. Science 2003;302(5646):871-874.
22. Tanaka M, Chien P, Naber N, Cooke R, Weissman JS. Conformational variations in an infectious protein determine prion strain differences. Nature 2004;428(6980):323-328.
23. King C-Y, Diaz-Avalos R. Protein-only transmission of three yeast prion strains. Nature 2004;428(6980):319-323.
24. Deleault NR, Lucassen RW, Supattapone S. RNA molecules stimulate prion protein conversion. Nature 2003;425:717-720.
25. Legname G, Baskakov IV, Nguyen H-OB, Riesner D, Cohen FE, DeArmond SJ, et al. Synthetic mammalian prions. Science 2004;305(5684):673-676.
Key Words: Creutzfeldt-Jakob disease - prion - bovine spongiform encephalopathy - scrapie - transmissible spongiform encephalopathies
5. THE CULTURAL EMERGENCY: INFORMATION AND COMMUNICATIONS ENVIRONMENT
INNOVATIONS IN INFORMATION AND COMMUNICATION TECHNOLOGIES: BENEFITS AND THREATS
AXEL LEHMANN
Institut für Technische Informatik, Universität der Bundeswehr München, Neubiberg, Germany
ABSTRACT
Permanent innovation in, and the spreading of, information and communication technologies (ICT) will result in the phenomenon of worldwide, global connectivity. This connectivity enables a permanently increasing percentage of the world population to access, transfer or exchange data, information and knowledge independently of time and location. Technological innovations are permanently increasing computers' functionality and performance and also enlarging the use of mobile communication. These innovations tremendously strengthen the trend towards pervasive and ubiquitous computing and communication capabilities. Rapid advances in ICT over the past decade have already caused dramatic changes in all segments of our society, ranging from business, economy, industry and government to private life. Examples of those changes include the way individuals or groups of individuals are communicating, learning, doing business, or shaping their leisure time by use of special hardware/software products, web-based searches, videoconferencing, tele-learning, electronic banking, or e-commerce services. This article will summarize major innovations expected in information and communication technologies, and their impact on information and communication in business, economy, industry, government, and private life around the world. Based on the expected product and service developments, the paper forecasts the pervasive and ubiquitous computing and communication facilities that might be available 10 years from now. Based on these forecasts, major benefits and threats that can result from these developments are discussed. Finally, the question is raised as to which threats must be recognized as a major planetary emergency and as a challenge for interdisciplinary future research.
INTRODUCTION
Over the past 50 years, information and communication technologies have dramatically changed our business, public and private life around the world. Never before have technological advances and product innovations happened so fast and caused such dramatic changes in the way people communicate, work and live. In general, there are always two major driving forces for innovations leading to new products or improved services (see Fig. 1): on one hand, the driving forces are the development of new or enhanced technologies, enabling product innovations and/or improved services for the user community; on the other hand, pulling forces are generated by the user community asking for improved products or additional services, thus forcing innovations in ICT products and enabling technologies. In the case of ICT, without any doubt, driving forces are dominant, resulting in permanent offers of new, but not market-driven, products and services. Examples are the huge variety of products for mobile communication, cellular phones, personal digital assistants (PDAs), and Internet services. The question arises as to how future innovations in this
area will change the way we collect and exchange data, information and knowledge, and how we communicate with each other.
Fig. 1: Major Driving Forces of (ICT) Innovations ("push" and "pull").
As in other domains, well-known experts from academia, industry and economy have always forecast advances and innovations in ICT. For example, Gordon Moore, former chairman of Intel Corp., forecast in 1965 that the complexity of integrated circuits (ICs) would double every 18 to 24 months, and, as a result, computer performance would double every 18 months [1, 2]. About 40 years later - as shown later in this article - his forecast is still valid and will remain valid for the next decade ("Moore's Law")! But there are also problems with forecasts, even those of experienced experts in the field. Thomas J. Watson, former chairman of IBM, stated in 1943: "I think there is a world market for maybe five computers". In an article in Popular Mechanics (1949) an estimate was published that stated "... computers in the future may weigh no more than 1.5 tons...". Kenneth Olson, founder and former CEO of Digital Equipment Corporation, remarked in 1977 that "... there is no reason anyone would want a computer in the home ..." [3]. In 1990, Tim Berners-Lee programmed HTML just to connect some local networks within CERN by a hypertext system - today it is called, and used as, the basis for the "World Wide Web". Despite the risks inherent in forecasting ICT innovations and the resulting changes for our public and private life, this article will briefly analyse expected advances of ICT systems and applications for the next decade, and raise some questions about the implied worldwide cultural consequences - and about major benefits and threats, as well.
TECHNOLOGICAL TRENDS
A dominant driving and pushing force for ICT product and service innovations is the digitization of data, information and knowledge for storing, processing or communication. The development of highly integrated (transistor) circuits (ICs) on silicon has to be seen as the key enabler, and it will remain the key technology underlying technological advances for the next decade at least [4]:
Minimization of feature (device) sizes: Based on data published by Intel Corp. (see Fig. 2) and others, the integration density of integrated circuit devices on silicon has increased exponentially over the past 40 years, and will continue to increase exponentially over the next decade, at least. This results from minimization of the feature size of electronic components and connections.
Fig. 2: Minimization of Feature Sizes on Silicon ([4]).
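As a purely numerical illustration of the doubling behaviour behind "Moore's Law" quoted above (not data from the paper; the 1971 Intel 4004 transistor count is used only to set a starting scale), the short sketch below shows how 18- versus 24-month doubling periods compound over 40 years.

```python
# Numerical illustration of the doubling forecast only; the 1971 Intel 4004
# transistor count (about 2,300) is used purely as a starting scale.
def transistor_count(start_count, years, doubling_period_years):
    return start_count * 2 ** (years / doubling_period_years)

start = 2_300
for period_years in (1.5, 2.0):          # 18-month and 24-month doubling times
    grown = transistor_count(start, 40, period_years)
    print(f"Doubling every {period_years} years: {start:,} transistors "
          f"become roughly {grown:,.0f} after 40 years")
```

Even the slower 24-month doubling turns a few thousand devices into billions over four decades, which is the sense in which the 1965 forecast has remained valid.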
Cost-benefit improvements: As indicated by the data presented in Fig. 3a, the absolute values of storage capacity per chip, and the introductory costs of new generations of memory chips, have changed dramatically over the past 30 years. In contrast, and despite increasing chip complexity, the frequency of innovation cycles (2-3 years) and the final costs of memory chips (approximately US$2) have remained almost constant over the past three decades. In addition, Figures 3a and 3b indicate that from one memory chip generation to the next, almost every 2-3 years, the total memory capacity per chip has increased by a factor of 4. Considering the cost of implementing a memory with 1 Megabyte of storage capacity, it can be seen that over the past 25 years the prices of RAMs (semiconductor memories) have decreased by a factor of about 200,000! Never before have the prices of any technical product decreased as drastically as those of ICT products.
Fig. 3a: Costs & Innovation Cycles for DRAM Memory Chips.
Year   Number of ICs        Price (in DM)
1973   8192 ICs à 1 KBit    1,200,000
1977   512 ICs à 16 KBit    80,000
1981   128 ICs à 64 KBit    6,400
1984   32 ICs à 256 KBit    1,920
1988   8 ICs à 1 MBit       400
1998   1/8 IC à 64 MBit     5

Fig. 3b: Costs of 1 MByte of Memory.
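Reading the Fig. 3b values back gives the price collapse cited in the text; the following one-off calculation (illustrative only, using the table figures above) recovers the roughly 200,000-fold decrease in the cost of 1 MByte of memory between 1973 and 1998.

```python
# One-off check against the Fig. 3b figures quoted above (illustrative only).
cost_1mbyte_1973_dm = 1_200_000   # 8192 chips of 1 KBit each
cost_1mbyte_1998_dm = 5           # 1/8 of a 64 MBit chip

factor = cost_1mbyte_1973_dm / cost_1mbyte_1998_dm
print(f"Price decrease for 1 MByte of memory, 1973 -> 1998: about {factor:,.0f}x")
```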
Performance improvements: Fig. 4 demonstrates that the performance of processors (measured in MIPS: Millions of Instructions Per Second) has increased exponentially over the past 45 years, and that this increase will continue for the next decade, according to reliable data.
Fig. 4: Increases in Processor Performance ([4]).
Degradation of power supply: As indicated by Fig. 5, the supply voltage of processors has decreased significantly over the past three decades. Nevertheless - and as a consequence of the drastic increase in the integration density of ICs on chip areas - the total power supply requirement of computing and communication components is still a major limiting factor, especially for mobile ICT devices, and a major challenge for future research.
Fig. 5: Processor Supply Voltage.
Increasing IC complexity: As indicated in Fig. 6 by both the projected and actual data presented for memory chips and for microprocessors, the increase in IC complexity has been exponential over the past decades. Projections demonstrate that this trend will continue for the next decade, at least.
Fig. 6: IC Complexity ([4], [2]).
In summary, the data presented in Figs. 2 to 6 indicate that the forecasts given by Moore in 1965 - the so-called "Moore's Law" - have been valid for almost four decades now, and that they will remain valid at least for the coming decade! Regarding these data and the projections presented by various research institutions and semiconductor manufacturers, questions about the physical and technological limits of silicon technology have to be raised. According to all projections, silicon will be the dominant basic material for technological innovations in ICT for another decade. In addition - and already available - optical components will be used for data transmission and processing as well. Furthermore, major research efforts are focusing on quantum computing and on bio-analogue information processing.
FUTURE ICT SYSTEMS AND APPLICATIONS
Primarily based on the technology pushes summarized in the previous section, the following product and service innovations in hardware or software can be expected within the next years:
Enhanced functionality of computers and communication devices, in general;
Acceleration of the convergence of computing and communication facilities;
Improved human/computer interaction resulting from innovations in audio and visual information processing techniques (e.g. via Conversational User Interfaces);
Increased application of "Embedded Systems" integrated or hidden in all kinds of technical systems, from household devices to high-tech devices (more than 95% of all currently produced microprocessors are already used for Embedded Systems rather than in stand-alone computers!);
Pervasive use of "smart" systems and devices, like mobile phones or personal digital assistants (PDAs);
Wearable computers - like so-called "prêt-à-porter" computers integrated into clothes and personal belongings (see Fig. 7a).
Fig. 7a: The vision of "Wearable Computing" (IEEE Consumer Electronics Technology Megatrends 2000).
Concerning the evolution of telecommunication networks and capabilities, we can base our forecasts on current analyses. In the year 2002, approximately 500 million individuals used the Internet for communication and information exchange. Currently, approximately 7 to 10 million new web pages are made available per day! Forecasts in 1999 concerning the world market of terrestrial mobile services projected an increase from 426 million users worldwide in the year 2000 to approximately 940 million users in 2005, and to more than 1.7 billion users in 2010, with the strongest increase in the Asian-Pacific region [6]. From more recent analyses, we know that these projected numbers completely underestimated the spread of global wireless subscribers and should be at least doubled! Another trend is the changing use of the Internet from a communication infrastructure to an information (and knowledge) infrastructure: the Internet provides users, independently of time and location, with all kinds of information. According to various experts, the so-called "Future Net" requires new services of "information logistics". These services should provide user communities with permanently updated information, at any time, individually adjusted and accessible independently of location. If this can be achieved, ubiquitous computing will become a reality. As mentioned in various publications on the expected innovations in personal computers (PCs), and as presented by Maurer [13] as a model (see Fig. 7b), in about 10 years from now a wearable PC will consist of components ergonomically adapted to the human body, to clothes and to personal belongings. The "PC 2014" [13] will be worn like a cigarette box in a shirt pocket and wirelessly connected with various input/output sensors: for example, tiny audio sensors and tiny mirrors, both integrated in the frames of glasses, as output devices for listening to or reading information; a tiny video camera could also be integrated into the glasses; and a larynx microphone and a body sensor worn as a hair clip, used as input devices, could complete a personal computer wirelessly connected to the "Future Net", with all kinds of information logistics services. Communications of this type and a "PC 2014" configuration will allow time- and location-independent communication and information processing, e.g. for permanent care and medical data collection for seniors, for distance learning, or for mobile problem solving and working.
1: PC; 2: Audio sensors; 3: Sensor; 4: Video camera; 5: Mirrors; 6: Larynx microphone.
Fig. 7b: A Model of a Wearable PC 2014 [13]
The current Internet already offers a high degree of connectivity, as well as all the limits of transmission speed, bandwidth and quality of service of our current heterogeneous and diverse communication infrastructure (see Fig. 8). High-speed optical and wireless technologies will provide significant networking advances, for example:
Network connectivity anytime, anywhere;
High end-to-end bandwidths (≥ 10 Gigabits);
Grids connecting computers, storage, other instruments and sensor nets.
Fig. 8: Heterogeneity and Diversity of Communication Networks [14]

Besides these technological advances, additional architectural innovations will contribute to significant changes in ICT applications, such as:
- High performance computers as specialized stand-alone computers;
- Grid computing, which allows information processing by networks of general-purpose computers organized in virtual groups [10]; and
- Organic computing, which will allow self-adaptation, self-configuration, and self-optimization of computers and networks according to given workload requirements.

CULTURAL CONSEQUENCES

When talking about the cultural consequences resulting from ICT advances and innovations, it should be noted that these consequences concern our public, professional and private lives in general. According to Gartner Inc. (2004), we will become a "Connected Society" - which means "... almost anything will be connected - from refrigerators with smart-labelled food, and elevators in smart buildings, to ... wearable computers, ... and to all kind of servers ..." [12].
As already mentioned at the beginning of this article, giving projections and forecasts in the rapidly changing domain of ICT is very difficult, and conclusions are often unreliable. Nevertheless, based on the assumption that the aforementioned innovations will become reality, computer scientists and engineers, as well as users of high-tech devices, have to think about the potential benefits and threats of the technologies they will have to use every day in the future. Without question, the expected innovations summarized above could be beneficial for individuals and for the world community as a whole. Examples we can think of are:
- Permanent access to globally available data, information and knowledge (such as timetables, route or traffic data, weather forecasts);
- Permanent access to personalized data, information and knowledge (such as names and addresses of individuals, bank accounts, credit cards, medical data, diagnostic experiences);
- Location- and time-independent solving of problems (e.g. in planning, diagnosing or in creative processes);
- Overcoming distance or language barriers in direct communication (e.g. by forming virtual groups, groups of seniors, or groups of disabled people);
- Improved analysis and solution capabilities for extremely complex problems (e.g. in logistics, climate change, optimization of resource consumption); and
- Permanent, life-long learning possibilities.
Besides the potential benefits, the potential threats of these developments must also be taken into account and should be analysed carefully. Some of them could even cause planetary emergencies. Major threats we can think of are particularly:
- Data and information security issues;
- Privacy of individuals (which is becoming a real concern, as wireless and permanent connectivity can result in individuals becoming "glassy humans");
- Safety and vulnerability of technical systems;
- Mastering of (technical) systems;
- Increasing globalization resulting in significant, rapid changes in businesses, industries, and public and private life;
- The "digital divide", which will become an ever increasing problem for all nations around the world.
In summary, it is almost impossible to balance carefully the listed benefits and threats in a final evaluation. Instead, it is more important to generate awareness in ICT experts and in the user community with respect to potential benefits, risks, and challenges.

CONCLUSIONS

Based on the data and projections presented regarding technological advances in ICT, it can be assumed that permanent advances will take place towards ubiquitous and pervasive computing that will significantly influence our public and private lives. Advances in three major directions will enable ubiquitous and pervasive computing:
- Availability of ubiquitous information processing capabilities through an increasing offer of smart systems, embedded systems, and wearable computers;
- High performance and high bandwidth networks will offer almost unlimited connectivity between all kinds of computing devices; and
- An increasing number of web and information logistics services will become available in the coming years.
Ubiquitous and pervasive computing and communication will offer new benefits, but also new threats, for all users in a global and highly connected society. While most research activities focus especially on the benefits arising from technological advances and ICT innovations, an important task for scientists - and for the World Federation of Scientists - is to focus on the threats which may arise from these dramatic changes and may even result in new planetary emergencies.

REFERENCES
1. Moore, G.: "Cramming More Components Onto Integrated Circuits"; Electronics, (April 19, 1965).
2. Moore, G.: "Moore's Law" (www.intel.com/technology/silicon/mooreslaw/index.html).
3. Wurster, C.: "Der Computer - eine illustrierte Geschichte"; Taschen GmbH, Köln (2002) (www.taschen.com).
4. Moore, G.: "No Exponential is Forever ... but We Can Delay 'Forever'"; Presentation at ISSCC: International Solid State Circuits Conference, (February 10, 2003).
5. Wahlster, W.: "Conversational User Interfaces"; (Ed.) Journal on Information Technology, 6/2004, Oldenbourg Verlag, (Dec. 2004).
6. "The Future Mobile Market: Global trends and developments ..."; UMTS Forum Report 8, (March 1999).
7. Beck, J.C.: "Future Wireless Trends" (www.accenture.com/isc).
8. Mattern, F.: "Vom Handy zum allgegenwärtigen Computer; ubiquitous computing"; Analysen der Fr. Ebert-Stiftung zur Informationsgesellschaft 6, (2002).
9. NITRD: "Large-Scale Networking: Future Nets: ..."; National Coordination Office for Information Technology Research and Development (www.itrd.gov/pubs/blue03/future-nets-01.html), (2003).
10. GRID: "Grid Computing Info Centre" (GRID Infoware) (www.gridcomputing.com).
11. Weber, H.: "Future Net: Spekulationen über das Internet der Zukunft"; Informatik-Spektrum, Bd. 27, Heft 1, (Februar 2004).
12. Gartner Inc.: "Gartner Outlines Most Significant Technology Driven Shifts in the Next Decade" (www3.gartner.com/5_about/press_releases/asset_62507_11.jsp).
13. Maurer, H.: "Der PC in 10 Jahren"; Informatik-Spektrum; Springer Verlag; Bd. 27, Heft 1, (Februar 2004).
14. Eckert, C. et al.: "NGN, All-IP, B3G: Enabler für das Future Net?!"; Informatik-Spektrum; Springer-Verlag; Bd. 27, Heft 1, (Febr. 2004).
6. COSMIC OBJECTS
RECENT PERSPECTIVES ON THE HAZARD OF AN ASTEROID IMPACT

CLARK R. CHAPMAN
Southwest Research Institute, Boulder, USA

ABSTRACT
It has been over half a century since scientific pioneers Ralph Baldwin and Ernst Opik first proposed that our planet suffers occasional catastrophic impacts by asteroids and comets, capable of changing the evolution of life on our planet. And it is nearly a quarter century since Luis and Walter Alvarez et al. proposed asteroid impact as the cause of the K-T mass extinction and Eugene Shoemaker organized the first workshop that considered the threat to modern civilization. The impact hazard has been the subject of major motion pictures and has thus become a cultural icon: "X is as likely to happen as an asteroid is to fall." However, apart from the ongoing telescopic Spaceguard Survey for Near-Earth Asteroids (NEAs) larger than one kilometer in size, there is rather little funded scientific research concerning the impact hazard, its potential lethal effects, or ways to mitigate it. New perspectives have been achieved, however, in the last few years as astronomers develop a growing appreciation of the physical characteristics of NEAs, their complex dynamical evolution, and unexpected ways in which "near misses" are manifested in existing survey data. Furthermore, gradually broadening awareness of NEAs by the natural hazard community and by international scientific organizations (e.g. ICSU and the Global Science Forum of the OECD) has spurred new thinking about social and political matters involving NEAs. Finally, serious attention is starting to be paid to issues of mitigation, including unexpected issues involved in nudging an NEA away from Earth impact as well as in preparing citizens to deal with a threatened impact. I will outline some of these new, interdisciplinary perspectives in this seminar.

INTRODUCTION

The possibility that a cosmic body, an asteroid or comet, might strike the Earth during the 21st century is one of the innumerable natural and man-made hazards with which modern society is trying to cope. Like some of those hazards (e.g. volcanic explosions, huge tsunamis, mass murder by terrorists, nuclear power plant meltdown), an asteroid impact catastrophe is unlikely, but is deservedly on our radar screen. Other less catastrophic causes of death and destruction are far more deadly and costly, and far more likely to happen or are even happening right now (e.g. military conflicts in Palestine and Iraq, automobile fatalities, death by preventable diseases and smoking, famine and AIDS in Africa). In introducing the impact threat to an international, interdisciplinary audience, I don't wish to imply that it is more important than some of the most deadly threats facing humanity. But it is nearly unique in having the potential to destroy civilization or even exterminate our species. And, statistically, the threat is as significant as airliner crashes or lightning storms and thus deserves a measure of international attention, especially
since practical and affordable means (involving space technology) exist to deflect an oncoming asteroid and prevent the disaster from occurring at all. At this conference, I intend to introduce (and/or update) the international scientific community to recent issues involving a particularly unlikely but uniquely dangerous threat involving impact on Earth of an asteroid or comet several kilometers in diameter. It is conceivable, as exemplified by the Cretaceous/Tertiary extinctions of the dinosaurs and most other fossilizable species of life 65 million years ago, that humankind could be rendered extinct by such an impact, or at least that the future of civilization would be placed in jeopardy. But much more likely, smaller impacts also pose threats to society that, in some ways, are analogous to the terrorist threat that has gripped the world's attention, in which the objective damage (deaths and/or immediate economic consequences) is comparable to or smaller than that of typical natural hazards (e.g. earthquakes, hurricanes, power-grid blackouts) that we have had to deal with regularly during the last few decades. My goal, in my oral presentation to these seminars, is to introduce the salient facts about the impact hazard to the general audience, and to concentrate on some issues that have developed or been recognized in just the last few years. Since I have had several recent occasions to review this topic in published articles and reports or in presented talks that are readily accessible on the Internet, I will cite these references and links below and then concentrate on recent developments concerning the impact hazard in this written article:
- My very recent review of the impact hazard, emphasizing the scientific features of the hazard, is "The hazard of near-Earth asteroid impacts on Earth," by Clark R. Chapman, Earth & Planetary Science Letters, Vol. 222, pp. 1-15, 2004, downloadable from http://www.boulder.swri.edu/clark/crcepsl.pdf. This is referred to below as CRC04.
- My 2003 report to the Organisation for Economic Cooperation and Development (OECD) on the potential societal consequences of an asteroid impact: "How a Near-Earth Object Impact Might Affect Society," by Clark R. Chapman, commissioned by the OECD Global Science Forum for the "Workshop on Near Earth Objects: Risks, Policies, and Actions" (Frascati, Italy, January 2003), downloadable from http://www.oecd.org/dataoecd/18/40/2493218.pdf or http://www.boulder.swri.edu/clark/oecdjanf.doc. This is referred to below as CRC03.

BACKGROUND

Human beings have long lived with natural disasters as well as with wars and tragedies of our own making. The last century has seen a rise in the risks of potential man-made disasters, like nuclear war, terrorism, and global climate change. And, in the last decades, we have become more aware of countless risks in our daily lives, many of which have always been with us. But it is unusual for scientists to recognize a significant, previously unrecognized natural hazard. In the decades following the discovery of the first asteroid in an orbit that crosses the Earth's orbit around the Sun, a few prescient
scientists - including Ralph Baldwin and Ernst Opik - calculated roughly correct odds for a modern-day strike on our planet by a kilometer-scale rock, and they appreciated the terrifying and dangerous consequences that were possible. Not until the early 1980s, however, did the scientific community become generally aware of both the past consequences of impacts (the K-T mass extinction) and the present risk. Public and governmental consciousness of the impact threat developed from a few scary headlines about "near misses" in the late 1980s to widespread familiarity following more headlines, the dramatic impacts on Jupiter of Comet Shoemaker-Levy 9 in 1994, and the subsequent release of two major Hollywood movies on the theme. As of 2004, public awareness of the hazard has translated into very little official governmental action by any nation or by international organizations. Officially sponsored workshops, documents, or resolutions notwithstanding, serious funding of research on the impact hazard or official incorporation of the hazard into hazard management agencies has been nil. The most significant funded portion of NEO impact research is the international endeavor called the Spaceguard Survey, and it is being undertaken primarily by the augmentation, by a few million U.S. dollars a year, of a pre-existing NASA science-oriented research effort to telescopically identify Near Earth Asteroids (NEAs). Despite recommendations of several advisory committees in the 1990s, no large, dedicated telescopes for NEA studies have been built. The most successful element of the augmented effort has been the success of several programs (primarily the two telescopes of the LINEAR observatory in New Mexico) in increasing the number of larger NEAs discovered (those >1 km diameter) to well over half of the estimated population of ~1,100. None of the charted NEAs of any size will strike the Earth this century. Thus we are at risk of un-forecasted impacts from <500 large NEAs, and the number may be under 100 by the end of the decade. Of course, the possibility remains that one of the several hundred that will be discovered during the next few years will be found to be on a near-term impact course, or that one of those not yet discovered will actually strike without warning. An impact by an NEA >1 km diameter could have serious global environmental consequences (Toon et al. 1997) and thus societal ramifications. Almost surely, a 3 km asteroid would threaten the future of human civilization. Because of uncertainties, unprecedented global consequences could conceivably result from a smaller impact. Beyond that, impacts by much smaller asteroids, say 100 - 200 m in size, are much more likely to happen and could cause a regional catastrophe of a magnitude that society is not prepared to deal with. NEAs <30 m diameter cannot cause significant damage on the ground, although psychological reactions to an unexpected 1 megaton TNT-equivalent blast in the upper atmosphere could have adverse consequences. In any case, no official recommendations have been made by national advisory committees (such as the American National Academy of Sciences) nor accepted by governmental agencies to take the impact hazard beyond the realm of paper studies and give it a status on equal footing with governmental management of other hazards. The most substantive report with policy recommendations is probably that of a Task Force established by the British Parliament several years ago (Atkinson et al.
2000); few of its recommendations have been implemented, however.
DEVELOPMENTS IN 2003 AND 2004

NASA established a Science Definition Team to research the NEA impact hazard with the rather narrow goal of deciding if there was a sound basis to consider extending the Spaceguard Survey (e.g. by investing in one or more large telescopes) down to objects a few hundred meters in size, or smaller. Despite its rather narrow charter, the SDT report released in August 2003 (SDT 2003) is the most comprehensive analysis to date of several aspects of the impact hazard, including quantitative assessments of the efficiencies of existing and proposed telescopic surveys and evaluation of the environmental consequences and lethality of impacts by bodies of various sizes. The SDT report documents, for the first time, the statistical importance of damage/deaths by impacting objects 50 - 300 m in diameter. (See also my somewhat modified take on the SDT conclusions, in which I deem the larger tsunami-makers to be of lesser consequence: CRC04.) For the first time in the last decade, there has been serious examination (in the shape of conferences, back-of-the-envelope calculations, and white papers, but not by major research and development efforts) of the practical issues concerning NEA deflection. Deflection, of course, is an approach to hazard mitigation which - in its purest form - is practically unique to the impact hazard. Although there are attempts to minimize the likelihood of some kinds of catastrophes (e.g. avalanche blasting, flood control, anti-terrorism activities), mitigation of most natural and man-made hazards primarily consists of minimizing the deaths and damage that will result when the disaster occurs (e.g. by warnings, emergency response, recovery operations) rather than of definitively causing the disaster not to happen in the first place. In the case of NEAs, it seems simple in principle to use spacecraft technologies and bombs to blast a threatening body to bits or to change its velocity vector so that it misses the Earth. Advances both in the knowledge of NEA physical properties and in the evaluation of practicalities relating to how a deflection could actually be accomplished have been integrated, in a preliminary way, during two recent conferences: (a) the NASA-sponsored "Workshop on Scientific Requirements for Mitigation of Hazardous Comets and Asteroids," Arlington VA USA, 3-6 Sept. 2002 (proceedings: Asteroid Impact Mitigation, edited by M.J.S. Belton, expected to be published by Cambridge University Press later in 2004); (b) the AIAA and B612 Foundation sponsored "Planetary Defense Workshop: Protecting Earth from Asteroids," Garden Grove CA USA, 23-26 February 2004 (video and PDFs of all presentations on-line at http://www.planetarydefense.info/). Themes and conclusions that gained prominence in these two meetings include: (a) NEAs have a wide but poorly understood diversity of physical properties (e.g. snow vs. rock vs. metal composition, monolithic vs. cohesionless rubble-pile structure, spin periods ranging from a few minutes to many hours, possible possession of satellites) and (b) the desirability in many cases of using small forces over long periods of time (e.g. as envisioned in the nuclear-powered plasma engine approach of the B612 Project, Schweickart et al. 2003) rather than very energetic, rapidly acting approaches (e.g. bombs). Fundamentally, the responses of a small body to various kinds of insults are less predictable and controllable if energetic and sudden.
Yet there are certain cases (large body, short warning time) for which deflection might require bombs if it is to be accomplished at all.
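The preference for small forces applied early can be illustrated with an order-of-magnitude estimate. The sketch below is not from this article: it ignores orbital dynamics (which can amplify a velocity change through the geometry of subsequent encounters), so it gives only a crude lower bound on the achievable deflection, and the velocity changes and warning times tried are illustrative assumptions.

```python
SECONDS_PER_YEAR = 3.156e7
EARTH_RADIUS_KM = 6378.0

def naive_deflection_km(delta_v_mm_s, lead_time_years):
    """Crude along-track displacement from a velocity change applied
    lead_time_years before the predicted impact (straight-line drift only)."""
    return delta_v_mm_s * 1e-6 * lead_time_years * SECONDS_PER_YEAR  # km

for dv in (0.1, 1.0, 10.0):        # velocity change, mm/s
    for lead in (1, 10, 30):       # years of warning
        shift = naive_deflection_km(dv, lead)
        flag = "exceeds one Earth radius" if shift > EARTH_RADIUS_KM else "not enough"
        print(f"dv = {dv:5.1f} mm/s, lead = {lead:2d} yr -> {shift:9.0f} km ({flag})")
```

Even this simplified picture shows why decades of warning make the job vastly easier, and why the short-warning cases mentioned above may leave only the most energetic options.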
Ongoing ground-based observing programs (especially radar) and several space missions to asteroids and comets have continued to augment our understanding of NEAs and comets, or will do so in the near future. These include the Stardust mission, which in January 2004 returned the sharpest comet nucleus images yet obtained, revealing unexpected surface features on Comet Wild 2. The Deep Impact mission is scheduled for launch at the end of this year, and will fire a projectile into the nucleus of Comet Tempel 1 next summer. Also next summer, the Hayabusa spacecraft will begin a five-month study of an NEA and then return collected samples to Earth during summer 2007. Two events early in 2004 provided important lessons concerning how discoveries of NEAs by the Spaceguard Survey connect (or fail to connect) with public realities. On the evening of 13 January, a preliminary NEA discovery by LINEAR was posted on a public web site operated by the International Astronomical Union's Minor Planet Center (MPC) to enable chiefly amateur astronomers to follow up the discovery in order to refine knowledge of the NEA's orbit. The nominal prediction had the object, estimated at ~30 m diameter, striking the northern hemisphere of the Earth the very next day! Calculations performed over the next several hours by experts at the MPC and the Jet Propulsion Laboratory (later confirmed by others) showed that the object AL00667 really did have a 10% - 40% chance of striking within the next few days, assuming that the object's positions, as reported by LINEAR, were subject to the usual uncertainties. With cloudy skies covering much of Europe and North America, attempted follow-up observations were generally unsuccessful. The question arose as to what kind of communications, if any, should be issued to whom and when, if confirmation continued to elude observers until hours before the possible impact. As it turned out, several observations made later that night demonstrated that the object was not on a collision course; it was actually larger and much farther away. After-the-fact analysis shows that the reported positions had unusually poor accuracy; moreover, it could be argued that there were other objective facts that should have suggested a much lower impact probability, regardless of the formal error analysis. In any case, the AL00667 event highlighted the fact that no well-understood protocols were in place to guide the flow of information up the chain-of-command within NASA, from NASA to emergency management agencies in the U.S., or to other astronomers and the public worldwide. Subsequently, the director of the NEA observation program at NASA Headquarters has issued preliminary guidelines and is seeking to formalize them later this year. The other event that raised public consciousness about NEAs was the record close passage of NEA 2004 FH, which missed the Earth by only about one Earth-circumference on 18 March. The behavior of both objects (AL00667 and 2004 FH) highlighted the previously poorly understood fact that current search procedures result in great ambiguities between the apparent motions of standard, faraway asteroids in the main asteroid belt and objects that might strike the Earth within days (Harris 2004).
Since the purpose of Spaceguard is to find large NEAs, and the small telescopes are very unlikely to capture a small NEA on its final plunge, it had been largely overlooked that false alarms about small, imminent impactors are actually inevitable and might even happen frequently, given current procedures. If such events get into the sensationalistic news media, then a "cry wolf" situation may be established, discrediting the reports of NEA astronomers. On the other hand, failure to take such observations seriously (and that is currently unavoidable, given the modest funding level, which cannot support 24-hour
operations), could possibly result in the actual impact of a small body that had been detected but was not evaluated or reported in time to attempt mitigation (e.g. evacuation of ground zero). Another issue highlighted by the two early-2004 events concerns the lower threshold for serious consequences. Both the original (mis)interpretation of AL00667 and the actual case of 2004 FH involved bodies estimated to be ~30 m in diameter. Astronomers debated whether the impact of a rocky body that big into the upper atmosphere would, or would not, cause serious damage below. One view, supported by statements in the earlier literature, is that the 1 megaton explosion, despite being very high in the atmosphere, would result in shock pressures and winds that would topple wood structures and be very dangerous within a 10- or 20-km radius below. Others, relying for example on a simplified analysis in the SDT report which was focused on larger objects, argue that impactors smaller than 50 m diameter would explode brilliantly but harmlessly. Since objects near 30 m in diameter strike nearly an order of magnitude more frequently than "Tunguskas," which certainly are devastating, it is clear that more focused analysis of the lower threshold for damage should be undertaken. ("Tunguska" refers to the 1908 explosion over Siberia, estimated at 10 - 15 megatons, which flattened over a thousand square km of forest. Such events are now thought to occur somewhere on Earth less than once in a millennium, although the recent occurrence of Tunguska itself casts some doubt on that conclusion.) Some interest in the impact hazard has developed recently within the social science specialties that deal with risk perception/communication and hazard management and mitigation. Yet that field is, itself, in considerable turmoil and evolution as a result of the huge public reaction (primarily in the United States) to the September 11th terrorist attacks, and the resulting reorganization of priorities within agencies once responsible for natural hazards. For example, the Federal Emergency Management Agency is now just a sub-unit of the Department of Homeland Security. As people wrestle with appropriate ways to perceive and react to the terrorist threat, it is worth noting that psychological perceptions of the impact hazard share some of the "dreadful" attributes of perceptions about terrorism. Thus the impact hazard may provide some lessons concerning other "extreme disasters" that are currently in vogue. The next interdisciplinary, international venue in which the social consequences of the impact hazard will be evaluated is the forthcoming "Workshop on Comet/Asteroid Impacts and Human Society," sponsored by the International Council for Science (ICSU), 27 November - 2 December 2004, Santa Cruz de Tenerife, Spain. In the meantime, a major cost-risk evaluation of the impact hazard and policy analysis has appeared (Sommer 2004). Often it seems that modern society is busy dealing with the last disasters rather than proactively addressing possible future disasters. Perhaps it will take an actual NEA impact with lethal consequences to spur actual action. But that is not likely to happen within the next few decades.

REFERENCES
1. Atkinson H. et al. 2000. Report of the Task Force on Potentially Hazardous Near Earth Objects. British National Space Centre, London, UK (http://www.nearearthobject.co.uk).
2. CRC03. (See "Introduction".)
3. CRC04. (See "Introduction".)
4. Harris A.W. 2004. Confusion of main-belt asteroids as possible Earth impactors: a lesson from AL00667. American Astronomical Society DDA Meeting #35, abstract #06.03.
5. SDT 2003. Near-Earth Object Science Definition Team, Study to Determine the Feasibility of Extending the Search for Near-Earth Objects to Smaller Limiting Diameters, NASA Office of Space Science, Solar System Exploration Div., Washington, DC, 154 pp. (http://neo.jpl.nasa.gov/neo/neoreport030825.pdf).
6. Toon O.B. et al. 1997. Environmental perturbations caused by the impacts of asteroids and comets. Rev. Geophys. 35, 41-78.
7. Schweickart R., E.T. Lu, P. Hut & C.R. Chapman 2003. The asteroid tugboat. Scientific American 289(5), 53-61.
8. Sommer G.S. 2004. Astronomical Odds: A Policy Framework for the Cosmic Impact Hazard. PhD Dissertation, RAND Graduate School, June 2004.
RECENT CLOSE APPROACHES OF ASTEROIDS TO THE EARTH

DONALD K. YEOMANS, PAUL CHODAS, AND STEVEN CHESLEY
Jet Propulsion Laboratory/California Institute of Technology, Pasadena, USA

ABSTRACT
On March 18, 2004, a 20 meter sized asteroid designated 2004 FH passed a record close 3.4 Earth diameters from the surface of the Earth. Had this object actually hit the Earth's atmosphere, the energy released would have had the power of 0.25 million tons of TNT (0.25 MT), about the size of a nuclear weapon. Because they are only infrequently noticed, very close Earth approaches normally cause some media attention. However, there are thought to be 10 million (largely undiscovered) objects of this size in near-Earth space, and one of them would be expected to pass within 3.4 Earth diameters of the Earth's surface every eleven months. Objects of this size pass within one lunar distance every 5 days. This example underscores the fact that while we are making great strides in discovering the large near-Earth objects (larger than 1 km) that can cause global damage, we have not yet focused discovery efforts upon the vastly more numerous objects that could cause local damage. Since the close approach rate for active short- and long-period comets, of any size, is at least two orders of magnitude below that for near-Earth asteroids, current and planned telescopic surveys to discover near-Earth objects should focus their attention upon close approaching asteroids, not comets. There have been a number of well publicized close Earth approaches recently, and this paper makes an effort to call out the lessons learned from these events and note the processes that have been put into place to make future predictions of Earth approaches more of a scientific exercise and less of a media event.

INTRODUCTION

Because of their faintness, close Earth approaches of small near-Earth asteroids (NEAs) often cannot be predicted until just before, during, or just after a close Earth approach. They often seem to catch the scientific community by surprise. Moreover, the lack of astrometric observations over a lengthy time interval sometimes makes it impossible for orbit computers to initially rule out the possibility of future Earth impacts by the object. Although subsequent astrometric observations almost always rule out any future Earth collisions, sometimes these observations do not become available before the media provide sensationalistic stories of the "upcoming collision." Once the observations do become available, the orbit is revised and the potential future impact is removed, and yet some members of the press incorrectly announce that astronomers once again "erred" in their initial predictions. This paper will focus upon some recent close approaches of near-Earth asteroids (NEAs), the often intense glare of media attention that followed, and the lessons learned from each of these encounters. Table 1 presents the recent close Earth approaches of near-Earth asteroids of all sizes to within 2 lunar distances (1 LD = 384,400 km) in the four-month interval from March through June 2004. As of August 2004, the 2004 FH flyby of Earth remains the closest approach of any asteroid for which a designation has been assigned by the Minor
Planet Center (MPC). Based on estimates of the asteroid population, objects of this size would be expected to pass within a lunar distance every 4 days or so and strike the Earth approximately once every 40 years. The fact that so few close approaches are known illustrates that the vast majority of these objects pass by the Earth unnoticed and undiscovered. Basketball-sized objects strike the Earth's atmosphere about 3 times per day and Volkswagen-sized objects hit every 6 months or so. While these events cause visually impressive fireball events, they do not do any damage. Well over 100 tons of interplanetary material rains down upon the Earth daily, but almost all of it is in the form of dust that causes no harm. As a rule, a rocky object must be larger than about 30 meters before the energy of its air blast could cause ground damage. For the six-month interval of January through June 2004, Table 2 presents the close Earth approaches for those objects larger than 30 meters. Even at these larger sizes, only a very small percentage of those objects passing closely past the Earth are actually discovered.

Table 1. Close Asteroid/Earth Approaches to within 2 Lunar Distances (LD) during the 2004 March through June interval only. All asteroid sizes are included.

Object      Date       Dist. (LD)  Vel. (km/s)  Abs. Mag.  Approx. Dia. (m)  Impact Energy (MT)  Mean Impact Interval (Years)  Mean Interval for passage less than 1 LD (days)
2004 FH     March 18   0.1         8.0          26.4       18                0.18                40                            4
2004 FY15   March 27   0.6         8.6          26.1       20                0.26                50                            5
2004 HE     April 18   0.7         18.3         26.8       14                0.21                20                            2
2004 MR1    June 21    1.5         7.6          25.2       32                0.98                160                           16
2004 FM32   March 25   1.8         4.3          26.6       16                0.10                30                            3
2004 KF17   May 31     1.8         11.4         25.5       28                0.91                120                           12
Table 2. Close Asteroid/Earth Approaches to within 4 Lunar Distances (LD) in 2004 Jan.-June. Only asteroids with absolute magnitudes brighter than 24 are included.

Object      Date       Dist. (LD)  Vel. (km/s)  Abs. Mag.  Approx. Dia. (m)  Impact Energy (MT)  Mean Impact Interval (Years)  Mean Interval for passages less than 1 LD (months)
2004 JP1    May 18     3.0         12.6         22.7       98                43.5                2200                          7
2004 MC     June 29    3.7         8.7          23.2       78                15.5                1300                          4
2004 CA2    Feb. 1     3.8         14.2         23.6       66                15.3                900                           3
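The "Impact Energy" column in Tables 1 and 2 can be reproduced, to within rounding, from the listed diameters and encounter velocities. The following minimal sketch is not taken from the paper; it assumes a spherical stony body with a density of about 2,600 kg/m^3 and adds the Earth's escape velocity in quadrature to the tabulated velocity to obtain the atmospheric entry speed.

```python
import math

MEGATON_TNT_J = 4.184e15   # joules per megaton of TNT
EARTH_ESCAPE_KMS = 11.2    # Earth escape velocity, km/s

def impact_energy_mt(diameter_m, encounter_vel_kms, density_kg_m3=2600.0):
    """Kinetic energy at atmospheric entry, in megatons of TNT equivalent."""
    radius_m = diameter_m / 2.0
    mass_kg = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3
    entry_speed_ms = math.sqrt(encounter_vel_kms ** 2 + EARTH_ESCAPE_KMS ** 2) * 1.0e3
    return 0.5 * mass_kg * entry_speed_ms ** 2 / MEGATON_TNT_J

# Spot checks against the tables above:
print(round(impact_energy_mt(18, 8.0), 2))    # 2004 FH  -> ~0.18 MT
print(round(impact_energy_mt(32, 7.6), 2))    # 2004 MR1 -> ~0.98 MT
print(round(impact_energy_mt(98, 12.6), 1))   # 2004 JP1 -> ~43.5 MT
```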
While long period comets can be significantly larger than most near-Earth asteroids and have encounter velocities 2-3 times greater than those of NEAs, the relative paucity of long period comets reduces their threat considerably. During the interval 1900 - 2002, 155 separate NEAs made close Earth approaches to within 0.1 AU. During the same interval, only two long-period comets came as close; curiously, both of these approaches occurred in 1983 (1983 J1 Sugano-Saigusa-Fujikawa and 1983 H1 IRAS-Araki-Alcock).
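Two other regularities in the tables above can be checked with a few lines of arithmetic: the ratio between the rate of passages within one lunar distance and the impact rate is roughly the ratio of the corresponding cross-sectional areas, and the mean impact intervals scale with diameter approximately as a power law. The sketch below is an added illustration of those scalings, not a calculation from the paper; gravitational focusing is neglected, and the power-law exponent is simply fitted to two of the tabulated rows.

```python
import math

LUNAR_DISTANCE_KM = 384_400.0
EARTH_RADIUS_KM = 6378.0

# Geometric area ratio: a disk of radius 1 LD versus the Earth's cross-section.
area_ratio = (LUNAR_DISTANCE_KM / EARTH_RADIUS_KM) ** 2
print(round(area_ratio))          # ~3632
print(round(40 * 365.25 / 4))     # ~3652 passages within 1 LD per impact, from Table 1

# Power-law interpolation of the mean impact interval, anchored on two table rows
# (18 m -> 40 yr and 98 m -> 2200 yr); the exponent (~2.4) is a fit, not a given value.
slope = math.log(2200 / 40) / math.log(98 / 18)

def mean_impact_interval_years(diameter_m):
    return 40.0 * (diameter_m / 18.0) ** slope

for d in (20, 32, 66, 78):
    print(d, round(mean_impact_interval_years(d)))   # ~51, 156, 864, 1282 yr (tables: 50, 160, 900, 1300)
```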
This comparison, along with the other arguments presented in the NASA Science Definition Team report (SDT, 2003), suggests the threat of long period comets, relative to NEAs, is 2/155 or about 1%. In Appendix 1, we present a timeline of significant milestone events for the discovery of near-Earth objects (NEOs). The several surveys that are currently active in the discovery and follow-up of NEOs provide the critically important astrometric observations that allow NEOs to be discovered and their future motions tracked. As such, these surveys provide the backbone for the international effort to reach the so-called Spaceguard Goal of discovering and tracking 90% of the NEOs larger than one kilometer by the end of 2008. This paper will not focus on these efforts, but rather consider the evolution of the processes to compute orbital information and impact probabilities for these objects and then announce these results to the scientific community, media, and public.

THE EVOLVING PROCESS OF COMPUTING AND ANNOUNCING FUTURE NON-ZERO EARTH IMPACT PROBABILITIES

To some extent, the evolution of the processes for computing orbits and future motions for the NEO population, and announcing these results, has been driven by reactions to problems that arose after some specific close approach events were predicted and announced. Earth impact predictions have often been treated by the media as sensationalistic and newsworthy. Immediate media attention, including a Newsweek cover story, was focused upon an announcement, made in 1992, of a possible Earth collision of comet 109P/Swift-Tuttle on August 14, 2126 [1,2,3]. Initial difficulties in fitting the 1862 and 1992 observations suggested to Brian Marsden that the comet might be subjected to significant accelerations from the comet's out-gassing, and that these accelerations might allow the comet's motion to be in error by 15 days in 2126 - enough to allow an Earth collision on August 14, 2126. Proper consideration of the 1992, 1862, and 1737 observations soon made it clear that the Cape of Good Hope observations taken in September-October 1862 were discordant, and an orbit without these observations precluded an Earth collision in 2126 [4,5,6]. This took place less than two weeks after the initial announcement, but it was too late to stop the tidal wave of media attention. The same sort of media frenzy accompanied another Earth impact prediction, made in March 1998, for a collision of asteroid (35396) 1997 XF11 on October 26, 2028 [7,8,9]. Jim Scotti, using the Spacewatch telescope on Kitt Peak, had discovered the asteroid on December 6, 1997. After a month, its orbit was well enough determined for the MPC to predict that the asteroid would pass within a million kilometers of Earth on October 26, 2028. The asteroid was well observed for another month, but then went unobserved for four weeks. When Peter Shelus at the McDonald Observatory in Texas picked it up again on the nights of March 3 and 4, his four observations extended the data interval to 88 days and yielded a significantly improved orbit estimate. On March 11, Brian Marsden, director of the Minor Planet Center, announced in an IAU Circular that the new prediction for the miss distance in 2028 was remarkably small, less than a quarter of a lunar distance. The Circular noted that "error estimates suggest that passage within [one lunar distance] was virtually certain." In an accompanying press statement, Marsden stated, "The chance of an actual collision is small, but one is not entirely out of the
question." In this case, a proper consideration of the position uncertainties associated with this asteroid in 2028 would have quickly ruled out an Earth collision. Don Yeomans and Paul Chodas at JPL computed the impact probability as zero, or at least as having a value less than the underflow limit on their computer; later independent analyses supported this conclusion [9]. The subsequent identification of pre-discovery 1990 observations by K. Lawrence [10] made it even clearer that an Earth collision in 2028 was ruled out (see Figures 1-3). The international media circus for 1997 XF11 was even larger than it was for comet 109P/Swift-Tuttle.
Figure 1: 1997 XF11 position uncertainty in the Earth target plane on 2028 Oct 26. Orbital solution based on 98 observations from 12/6/97 through 3/4/98; most likely miss distance 86,000 km. The error ellipse is 2.8 million km long but only 2550 km wide; the probability that the asteroid will pass within the error ellipse is 95%, and the probability of impact with Earth is zero. (Axes in units of 100,000 km; north is up. Paul W. Chodas, JPL/NASA.)

Figure 2: 1997 XF11 position uncertainty in the Earth target plane on 2028 Oct 26 (close-up of the near-Earth region). Dots within the ellipse are Monte Carlo test cases; the error ellipse is 2.8 million km long but only 2550 km wide. (Axes in units of 10,000 km.)

Figure 3: 1997 XF11 position uncertainty in the Earth target plane on 2028 Oct 26, after including the March 1990 pre-discovery observations. The view is from the approaching asteroid, about 5 deg below the Earth's equatorial plane and approximately from the direction of the Sun; north is up. The error ellipse is 175,000 km long by 1000 km wide; the probability that the asteroid will pass within the error ellipse is 99%, and the probability of impact with Earth is zero. (Axes in units of 100,000 km.)
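Figures 1-3 illustrate the standard target-plane picture behind these impact probabilities: the orbital uncertainty maps to a long, narrow error ellipse in the encounter plane, and the impact probability is the fraction of that probability density falling on the Earth's disk. The sketch below shows the idea with a simple Monte Carlo estimate; the ellipse geometry, the offsets and the Gaussian error model are illustrative assumptions, not the actual 1997 XF11 solution (whose ellipse passed far enough from the Earth that the computed probability was zero).

```python
import math
import random

EARTH_RADIUS_KM = 6378.0

def impact_probability(sigma_long_km, sigma_short_km, offset_long_km, offset_short_km,
                       n_samples=200_000, seed=42):
    """Monte Carlo fraction of target-plane samples that fall on the Earth's disk.

    The uncertainty is modeled as a 2-D Gaussian aligned with the long and short
    axes of the error ellipse; the offsets locate the ellipse center relative to
    the center of the Earth.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = offset_long_km + rng.gauss(0.0, sigma_long_km)    # along the long axis
        y = offset_short_km + rng.gauss(0.0, sigma_short_km)  # along the short axis
        if math.hypot(x, y) < EARTH_RADIUS_KM:
            hits += 1
    return hits / n_samples

# Hypothetical geometry: a very elongated ellipse whose long axis passes about
# 30,000 km to one side of the Earth.  The probability is essentially zero even
# though the nominal miss distance along the long axis is modest.
print(impact_probability(sigma_long_km=500_000, sigma_short_km=500,
                         offset_long_km=50_000, offset_short_km=30_000))
```

Shifting the same ellipse so that its long axis passes through the Earth (offset_short_km=0) makes the routine return a probability of roughly one percent, which illustrates why the orientation of the ellipse, and not just the nominal miss distance, controls the computed risk.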
For Comet 109P/Swift-Tuttle and near-Earth asteroid (35396) 1997 XF11, an Earth impact possibility was announced without any sort of peer review of the computations by other orbital specialists. Furthermore, such a vetting process was not even possible for 1997 XF11, since the observations necessary to undertake the analysis were not available until after the announcement had been made. Although the MPC did provide the observations upon request, it was clear that NEO observations had to be distributed to the scientific community in a more timely fashion. Responding to the 1997 XF11 issues raised by concerned scientists and media representatives, NASA convened a meeting (Carl Pilcher, Chair) on March 17, 1998 at the Lunar and Planetary Institute in Houston, Texas. NASA Interim Roles and Responsibilities were established for near-Earth Object (NEO) data release and public announcements of any cases where a future Earth impact could not be ruled out. These guidelines included the provision that the Minor Planet Center would release NEO astrometric data within 24 hours of its arrival there (via the MPC Daily electronic circulars). Before any orbit computation specialist or team publicly announced a possible future impact, other specialists in the field would verify their computations. This verification period was expected to last 48 hours, and NASA announced a study on how best to communicate NEO issues to the public. Less than a year later, the discovery of NEA 1999 AN10 on 13 January 1999 by LINEAR set off an analysis of this object's future motion by Andrea Milani, Steven Chesley and Giovanni Valsecchi that showed that the review guidelines could work as intended. The asteroid was observed until February 20, when its position in the sky near that of the sun prevented further observations. Milani and colleagues noted that the asteroid would make a close approach to Earth in August 2027 and that this close approach would introduce chaotic behavior, so that one of the possible subsequent trajectories would bring the object to an Earth approach in 2034 and again in August 2039; for this latter encounter, there remained a slight chance of an Earth impact. They were careful to note that the impact probability was about one in a billion, and that this value was less than the probability of an impact by a similarly sized unknown object within the next few hours! In short, this was an interesting object from a mathematical point of view but not of any particular concern. During the first two weeks of April 1999, others, including Paul Chodas at JPL, verified the basic analysis by Milani and his colleagues. Here was the first case whereby an impact prediction was made and then vetted by professional colleagues before any public announcement. Following this review process, a draft of the paper was posted without fanfare on the University of Pisa web site, where it was discovered by Benny J. Peiser, a faculty member at Liverpool John Moores University who focuses upon neo-catastrophism.
On April 13, 1999, Peiser published an account in his widely read NEO internet newsletter that chided the professional astronomers, noting that "there is no reason whatsoever why the findings about 1999 AN10 should not be made available to the general public - unless the findings haven't been checked for general accuracy by other NEO researchers." Peiser also speculated that one reason "why the authors may have decided to hide their data could be due to the current NASA guidelines on the reporting of impact probabilities by individual NEOs. After all, NASA is threatening researchers with the withdrawal of funding if they dare to publish such sensitive information in any other form than in a peer reviewed medium." Peiser's statement and speculations were nonsense, of course, but he
did demonstrate the problem of public perception whenever Earth impact studies are conducted privately, even if the secrecy lasts only a few days and the impact risk is negligible. Part of the problem with announcing Earth impact probabilities is that the public and media do not fully understand the language of statistics and physics. For example, scientists may react to an announcement of, say, a one in 10,000 chance of a 10 meter asteroid hitting the Earth six months from now by noting that there are 9,999 chances out of 10,000 that it will miss the Earth and, even if it did hit, it would not survive penetration through the Earth's atmosphere and no damage would be done to the ground below the expected air blast. On the other hand, the media and public might hear only that scientists have predicted a catastrophic 100 kT impact by a huge rock within a very short six months. Part of the blame here is that scientists often do not use understandable language. In an attempt to make NEO impact predictions more meaningful for non-scientists, Richard Binzel formulated a scale from 0 to 10 that factored in both the object's impact probability and estimated impact energy [11,12]. This scoring system became known as the "Torino Scale," after it was modified and re-introduced during a June 1999 international meeting of NEO scientists in Turin (in Italian, Torino), Italy. When a Torino Scale value for a particular close Earth approach was announced together with the date of the event, it was hoped that this would guide the public and media reaction to the upcoming event. For example, an event with a Torino Scale value equal to 1 merits careful monitoring, and a Torino Scale value equal to 10 would indicate a certain collision with global consequences. Occasionally, objects on the JPL Sentry and Pisa CLOMON risk pages reach a Torino Scale value of 1, but a value of 2 ("Event meriting concern") has never been reached. The Palermo Scale is a NEO hazard scale that includes the proximity of the potential impact to the current time and is of great use for communication among orbital specialists [13]. The Palermo Scale's logarithmic scale factors in the object's impact probability, the energy of the potential impact, and the proximity of the event to the current time; potential Earth impacts that rise above the expected background level for all objects (known and unknown) of the same size will have positive Palermo Scale values. During the June 1999 NEO meeting in Turin, Italy, a statement on the announcement of NEO events was made by the IAU Working Group on Near-Earth Objects (WGNEO). For any comets or asteroids with an impact probability greater than one in one million in the next 100 years, the WGNEO requested that information on the predicted event be communicated to the Chair of the WGNEO and the General Secretary of the IAU before any announcement is made public in any information media, including the world wide web. The Chair of the WGNEO, or designee, will distribute the mentioned materials to a standing NEO Review Committee for independent validation. This review committee shall communicate within 72 hours the results of their individual reviews to the chair of the WGNEO and to the authors of the original report. If the original report is validated, the results of the analysis will be posted on the IAU web site for public access. If the review disagrees with the report, the results of the review will be given to the report authors for their consideration.
If so requested by other agencies (e.g., NASA or ESA), the IAU will also inform the responsible officials of these agencies of the results of the WGNEO review.
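For reference, the Palermo Scale mentioned above can be written out explicitly. The sketch below follows the published formulation of Chesley et al. (2002), cited as reference 13: the scale is the base-10 logarithm of the ratio between the predicted impact probability and the "background" probability of an equally energetic impact occurring by chance in the time remaining before the event; the background annual frequency used here, f_B = 0.03 E^(-4/5) with E in megatons, is the approximation given in that paper. The numerical case at the end is purely illustrative.

```python
import math

def background_frequency_per_year(energy_mt):
    """Annual frequency of impacts at least as energetic as energy_mt (megatons),
    using the approximation f_B = 0.03 * E**(-4/5) from Chesley et al. (2002)."""
    return 0.03 * energy_mt ** (-0.8)

def palermo_scale(impact_probability, energy_mt, years_until_impact):
    """Palermo Technical Impact Hazard Scale value for a single potential impact."""
    background_probability = background_frequency_per_year(energy_mt) * years_until_impact
    return math.log10(impact_probability / background_probability)

# Illustrative (hypothetical) case: a 1-in-500 chance of a ~100 MT impact 30 years from now.
print(round(palermo_scale(1 / 500, 100.0, 30.0), 2))   # -> about -1.05, i.e. an order of
                                                        # magnitude below the background risk
```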
While a good start was made with these recommendations, it was less than a year later when Mother Nature clearly pointed out the drawbacks of a 72-hour deadline for reporting impact prediction announcements. On September 29, 2000, David Tholen and Robert Whiteley discovered asteroid 2000 SG344 using the 2.2-meter University of Hawaii telescope atop Mauna Kea. On October 26, using only 2000 data, Andrea Milani identified a very low probability potential Earth impact in 2029. On the same day, pre-discovery observations taken in May 1999 by MIT's LINEAR observatory were also identified by Gareth Williams at the MPC. On October 30, Paul Chodas computed an orbit and determined a surprisingly high Earth impact probability of about 1 in 500 for September 21, 2030, and invoked a review by the NEO Review Committee (see Figures 4-5). While the reflectivity of this object was unknown, its apparent brightness and known distance from the Earth and sun allowed its absolute magnitude to be determined. Then, using reflectivities typical of near-Earth asteroids, the object's diameter was estimated to be somewhere between 30 and 70 meters. Toward the upper end of this range, the object would have a Torino Scale value equal to one, indicating that the object needed careful monitoring. Because of its modest dimensions, Earth-like orbit, and a predicted close Earth approach in the early 1970s, there was a chance the object was actually an Apollo S-IVB booster stage in heliocentric orbit. On November 2, JPL's Steve Ostro was unsuccessful in making radar observations of this object from Arecibo.
Figure 4: Relative Positions of Asteroid 2000 SG344 and Earth in 2030.

Figure 5: 2000 SG344 Collision Orbit in a Frame Rotating with Earth.
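The 30 - 70 meter size range quoted above comes from the standard conversion between absolute magnitude and diameter for an assumed range of reflectivities. The sketch below uses the usual relation D(km) = 1329 × 10^(-H/5) / sqrt(p_v); the absolute magnitude H ≈ 24.7 used here for 2000 SG344 is an assumed illustrative value, not one quoted in this paper, and the albedos simply bracket typical dark and bright near-Earth asteroids.

```python
def diameter_m(abs_magnitude, albedo):
    """Asteroid diameter in meters from absolute magnitude H and geometric albedo."""
    return 1329.0 / albedo ** 0.5 * 10.0 ** (-abs_magnitude / 5.0) * 1000.0

# Assumed H for 2000 SG344; dark (0.05) and bright (0.25) albedos typical of NEAs.
for albedo in (0.25, 0.05):
    print(round(diameter_m(24.7, albedo)))   # roughly 31 m and 68 m, matching the 30 - 70 m range

# The same relation with an albedo of ~0.15 reproduces the "Approx. Dia." column of
# Table 1, e.g. H = 26.4 for 2004 FH:
print(round(diameter_m(26.4, 0.15)))         # ~18 m (Table 1 lists 18 m)
```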
On October 31, the 72-hour review period began, and by the end of that period the possibility of an Earth impact was confirmed by Steven Chesley (JPL), Giovanni Valsecchi (Italian National Research Center), Andrea Milani (University of Pisa), and Karri Muinonen (University of Helsinki). An announcement was prepared for a NASA press release, but it was never actually released. However, the IAU released a similar notice on its web site on November 3, 2000. As luck would have it, that same day four additional pre-discovery observations from May 17, 1999 by Carl Hergenrother at the Catalina Sky Survey became available, and a new orbital solution then removed the potential Earth impact in 2030 altogether. While the IAU guidelines for timely announcements of NEO Earth encounters had been followed to the letter, Mother Nature dramatically demonstrated the folly of imposing an arbitrary 72-hour time limit upon the verification process. Since January 2000, an automatic orbit computation and Earth impact-monitoring system (CLOMON) has been in operation at the University of Pisa. This system was developed by Andrea Milani, Steven Chesley (then at Pisa), and Giovanni Valsecchi. An independent Earth impact monitoring system (Sentry), developed by Steven Chesley (now at JPL), Paul Chodas, and Alan Chamberlin, has been operational at JPL's NEO Program Office since January 1, 2002. With two independent NEO impact-monitoring systems in operation, each potential Earth impactor is checked at both JPL and Pisa, and the results most often confirm one another. When different results occur, they are quickly transmitted to the other team and the ensuing collaborative efforts resolve the
differences quickly. This constant cross-checking between JPL and Pisa not only makes each system more robust but also ensures that, together with additional independent Monte Carlo analyses at both locations, the verification process called out in the guidelines of the IAU Working Group on NEOs takes place on a continuing basis. As a result, there is normally no reason for conducting the formal verification process by the NEO Review Committee, since most of the members of this Committee are in constant communication anyway. As a final example of the type of event that forces change in the ongoing process of predicting and announcing possible Earth impacts, we call attention to an object first denoted AL00667 by the LINEAR discovery team upon its discovery on January 13, 2004. After receiving four astrometric observations from the LINEAR group, the Minor Planet Center (MPC) followed its normal routine and posted a short search ephemeris on its NEO Confirmation Page (NEOCP), where follow-up observers are notified of new near-Earth objects. These observers often then provide the follow-up observations that allow the object's orbit to be vastly improved before the object becomes unobservable (often because the object becomes too faint). Less than an hour after the ephemeris of AL00667 was posted upon the NEOCP, the German amateur astronomer Reiner Stoss posted a message to other observers on the Minor Planet Mailing List (MPML) to the effect that this ephemeris was peculiar in that the object had a predicted brightness change of 4 magnitudes within 24 hours. U.S. astronomer Alan Harris then replied to Stoss that the object's ephemeris was consistent with an Earth impact trajectory, and in a subsequent e-mail to the Minor Planet Mailing List, Harris noted that the impact would occur on January 15.0 UT, almost exactly 24 hours after he posted the message! Alan Harris then called Don Yeomans at JPL, who in turn notified his colleague Steve Chesley. Chesley verified that the ephemeris posted upon the NEOCP was, indeed, an Earth impact ephemeris, and he then tried unsuccessfully to reach Tim Spahr at the Minor Planet Center to point this out. Yeomans and Chesley then contacted Brian Marsden at the MPC, who promptly sent the four astrometric positions to JPL for impact probability analysis. Marsden also requested additional observations from follow-up observers in Europe. Later in the evening, Chesley did reach Tim Spahr at the MPC, who then computed and sent Chesley 819 variant orbits that fit the four observations of the object well, assuming the distance between the observer and the asteroid was between 0.015 and 0.019 AU at the time of the LINEAR observations. Because the observations were taken only a few minutes apart, many slightly different orbits could successfully fit them, and some of these orbits allowed the object to strike the Earth. Chesley determined that a crude Earth impact probability would be about 25%. Armed with search ephemerides that assumed the object was on an Earth impact trajectory, Brian Warner, an amateur observer in Colorado, looked unsuccessfully for the object. This negative result effectively ruled out the trajectories that allowed an Earth impact.
Thus, less than 10.5 hours after the Minor Planet Center received the 4 observations from LINEAR, an Earth impact trajectory had been posted upon the MPC web site (NEOCP), an impact probability of about 25% had been computed, follow-up observations were requested and taken, and when these observations showed no object could be located on the predicted Earth impacting trajectory, the heightened activity level relaxed since no Earth impact was then possible. About six hours later, LINEAR provided successful follow-up observations of the actual object and when these observations were included in new orbital solutions, it
became clear that the object, now designated 2004 AS1, would get no closer to the Earth than 0.08 AU, and that would not be until mid-February 2004. With these follow-up observations in the orbit determination process, it became clear that the original four observations contained slightly larger-than-usual errors, and these were enough in error to provide the possibility of an Earth collision. Had these observations been more typical of the accuracy reported by LINEAR, this would not have been the case. As a result of this event, the Minor Planet Center installed a feature in its software that alerts them when an Earth impacting ephemeris is computed for posting on their NEOCP web site. In addition, plans are underway to make the monitoring of objects placed upon the NEOCP web site more robust.

SUMMARY

Beginning with the 1992 prediction that comet 109P/Swift-Tuttle might strike the Earth in 2126, the small community of near-Earth object orbital specialists has rapidly advanced the techniques used to compute Earth impact probabilities. Great strides have also been made toward more timely and robust impact predictions, as well as a more understandable dialogue with the public and media. In 1998, following the events surrounding the prediction that near-Earth asteroid 1997 XF11 might hit the Earth in 2028, NASA established guidelines and announcement procedures, and orbit specialists were requested to vet each other's predictions. The IAU's Working Group on NEOs then adopted similar guidelines. In April 1999, this review process worked for near-Earth asteroid 1999 AN10, but there were complaints about the secrecy with which the verification process took place and about the announcement deadline of 72 hours following an impact prediction by one group of orbital specialists. This problem with the duration of the 72-hour verification deadline became painfully apparent in early November 2000, when the IAU posted an impact advisory for the predicted Earth close approach in 2030 by near-Earth object 2000 SG344. On the next day, additional pre-discovery observations allowed the 2030 threat to be removed altogether. Currently, the constant cross-checking of the impact probability systems at JPL (Sentry) and the University of Pisa (CLOMON) largely removes the need for a formal verification process and the attendant 72-hour deadline for reporting review results. All the near-Earth objects for which an Earth impact cannot yet be ruled out are made public on the Sentry and CLOMON websites. The risk analysis computations are now largely automatic (Sentry, CLOMON), the verification process is carried out routinely when differences arise between the results of these two systems, and the results are posted in a timely fashion on each of these web sites. A formal review process is rarely called for, and close Earth approach postings are largely carried out in full view of the public and media on the respective web sites:
http://neo.jpl.nasa.gov (JPL)
http://newton.dm.unipi.it/neodys (University of Pisa)
The Minor Planet Center is now releasing near-Earth asteroid astrometric data within 24 hours of receipt, and they have installed triggers that allow them to immediately recognize Earth impacting ephemerides before they are posted on their NEO Confirmation Page.
Great strides have also been made in communicating with the public and media. The Torino Scale provides a simple, understandable scale for each object for which an Earth impact cannot be ruled out, and the Palermo Scale has been a great help for communications between the orbital specialist teams. The monitoring and announcement of potential Earth impacts by near-Earth objects have been continuously improved as the surveys detect more and more of these objects. As new objects are predicted to have Earth close approaches, deficiencies sometimes become apparent in the current systems that are designed to monitor them. Fortunately, the seriousness and frequency of these deficiencies are becoming smaller with each event, so that we now operate in a rather robust environment for the monitoring and tracking of discovered near-Earth objects. The largest remaining task would seem to be the continuing effort to discover these objects.

ACKNOWLEDGEMENTS

This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

REFERENCES
1. IAU Circular 5636 dated October 15, 1992.
2. Begley, Sharon. The Science of Doom. Newsweek, Nov. 23, 1992, pp. 56-60. (discussion of comet Swift-Tuttle's return in Aug. 2126)
3. Marsden, B.G. (1993). Comet Swift-Tuttle: Does it Threaten Earth? Sky and Telescope, Jan. 1993, pp. 16-19.
4. IAU Circular 5671 dated December 5, 1992.
5. Marsden, B.G., G.V. Williams, G.W. Kronk, W.G. Waddington (1993). Update on Comet Swift-Tuttle. Icarus, v. 105, pp. 420-426.
6. Yau, K., D.K. Yeomans and P. Weissman (1994). The past and future motion of Comet P/Swift-Tuttle. Monthly Notices, Royal Astronomical Society, v. 266, pp. 305-316.
7. IAU Circular 6837 dated March 11, 1998.
8. Minor Planet Center Public Information Sheet, 1998.
9. Chodas, P.W. and Yeomans, D.K. (1999). Predicting close approaches and estimating impact probabilities for near-Earth objects. American Astronautical Society paper AAS 99-462.
10. IAU Circular 6839 dated March 12, 1998.
11. Binzel, R.P. (1997). A Near-Earth Object Hazard Index. Near-Earth Objects: The United Nations International Conference (J.L. Remo, ed.). Annals of the New York Academy of Sciences, vol. 822, pp. 545-551.
12. Binzel, R.P. (2000). The Torino Impact Hazard Scale. Planetary and Space Science, vol. 48, pp. 297-303.
13. Chesley, S.R., P.W. Chodas, A. Milani, G.B. Valsecchi, D.K. Yeomans (2002). Quantifying the risk posed by potential Earth impacts. Icarus, vol. 159, pp. 423-432.
14. Pilcher, Carl (1998). Statement of Dr. Carl Pilcher before the Subcommittee on Space and Aeronautics, Committee on Science, House of Representatives, May 21, 1998.
15. Near-Earth Object Science Definition Team Report (2003). Study to determine the feasibility of extending the search for near-Earth objects to smaller limiting diameters. NASA document published at JPL, August 22, 2003, 154 pp.

FIGURES

1. The position uncertainty region of near-Earth asteroid 1997 XF11 is shown in red for the predicted 2028 October 26 close Earth approach. Only a very limited set of observations from 1997 December 6 through 1998 March 4 is included in the orbit and close approach computations represented by this illustration. While the asteroid's region of uncertainty is huge, almost all of the uncertainty is in one direction and an impact with the Earth is excluded. This illustration shows that even before the discovery and use of the March 1990 pre-discovery observations, an Earth collision was ruled out.
2. This illustration shows a blow up of the near Earth region from Figure 1. Monte Carlo test cases have been used to verify the initial uncertainty analysis.
3. Once pre-discovery observations from March 1990 are added to the orbit and impact probability computations, the uncertainty region for asteroid 1997 XF11 shrinks dramatically and moves away from the Earth.
4. A preliminary, Earth-like heliocentric orbit for near-Earth asteroid 2000 SG344 is shown here with the Earth close approach predicted for September 21, 2030. With the addition of pre-discovery observations made in May 1999 to the orbital computations, the possibility of an Earth collision on September 21, 2030 was removed altogether.
5. The orbit of near-Earth asteroid 2000 SG344 is shown here in a reference frame that rotates with the Earth about the sun (i.e., the sun to Earth line is fixed). Because of its Earth-like orbit, the asteroid will slowly lap the Earth and approach it again closely in 2030. However, as noted in Figure 4, a revised orbit based upon additional observations has ruled out any possibility for an Earth impact in September 2030.

Appendix 1: Milestone events for the near-Earth Object (NEO) discovery surveys

1973 E.F. Helin and E.M. Shoemaker begin photographic NEO searches using the Palomar 46 cm telescope.
1981 "Snowmass Report" sponsored by NASA (but it remains unpublished). NEO impacts causing serious disasters have a probability of 10⁻⁴ to 10⁻⁵ per year. NEO deflection techniques are within the scope of current technology but "it is highly unlikely that an object requiring deflection will be identified over the time period for which the present technology is relevant."
1984 Tom Gehrels begins using the 0.9 meter Steward Observatory telescope for NEO search efforts. The telescope is operated in scan mode, whereby the telescope is held fixed while celestial objects scan through its field of view.
1989 T. Gehrels begins full time NEO search operations using a large 2K x 2K CCD camera in drift-scan mode near Tucson, AZ (Spacewatch). First NEO detected using a CCD (1989 UP).
1990 D. Steel begins a (short-lived) photographic NEO search program at Siding Spring, Australia using the 1.2 m Schmidt.
1992 House Committee on Science directs NASA to conduct a NEO Detection (Survey) Workshop organized by NASA and a NEO Interception Workshop organized by DOE.
1992 NASA Spaceguard Survey Report (D. Morrison, Chair). Recommends six international 2.5 m aperture telescopes with limiting magnitude equal to 22 (N & S hemispheres) to enable the discovery of more than 90% of NEOs larger than 1 km within 25 yrs.
1993 Lowell Observatory NEO discovery survey (LONEOS) comes on line using a 0.6 meter Schmidt telescope in Flagstaff, AZ.
1995 NASA sponsors the "Shoemaker Report", which encourages collaboration of the U.S. Air Force and international partners to discover 90% of NEOs larger than 1 km in 10 years. Two dedicated 2 m aperture telescopes and 1-2 one-meter telescopes with advanced focal plane detectors are recommended.
1995 JPL's NEAT survey begins NEO search efforts using the Air Force one meter GEODSS telescope on Maui.
1996 Spaceguard Foundation begins (March 26, 1996, with Andrea Carusi, President) with the goal of promoting and coordinating, at an international level, the discovery, follow-up, and orbit determination of NEOs.
1997 MIT's LINEAR effort begins using a fast read out CCD for NEO discoveries (using an Air Force 1 m GEODSS telescope).
1998 April. Catalina Sky Survey comes online using a 0.6 meter Schmidt telescope on Mt. Bigelow (near Tucson, AZ) with a 4K x 4K CCD camera. A neighboring 1.5 meter telescope on Mt. Lemmon is used for follow up observations of NEO discoveries.
1998 May 21. For NASA Headquarters, Carl Pilcher announces the Spaceguard Goal to the House Subcommittee on Space & Aeronautics (i.e., by 2008, find & track 90% of NEOs whose diameters are greater than 1 km).¹⁴
1998 July 6. NASA establishes the NEO Program Office at JPL to coordinate and monitor the discovery of NEOs and their future motions, to compute close Earth approaches and, if appropriate, their Earth impact probabilities.
1999 March 5-6. JPL's NEO Program Office and the University of Pisa (NEODyS) post their web sites providing information on near-Earth objects (e.g., close approaches, ephemeris information, orbital data).
1999 LINEAR adds a second co-located 1-meter search telescope with fast read out CCD.
1999 Australian National Observatory's 1.0-meter telescope at Siding Spring begins follow up observations as part of the Catalina Sky Survey.
2000 January 1. University of Pisa places its NEO impact monitoring system (CLOMON) on line.
2000 NEAT program begins using the 1.2 meter AMOS telescope on Haleakala, Maui, Hawaii rather than the 1.0 meter GEODSS telescope.
2001 NEAT adds the Palomar 1.2 meter Schmidt telescope for NEO searches using a 3 camera system, each of which has a 4K x 4K CCD.
2001 Oct. 16. First NEO discovery by the recently commissioned Spacewatch 1.8 meter aperture telescope.
2002 January 1. NASA NEO Program Office at JPL puts its automatic NEO orbit determination and Earth impact monitoring program (SENTRY) on line.
2002 Oct. 22. Spacewatch 0.9 meter telescope, modified for a larger field of view and with a new large mosaic camera (four CCDs, each of which is 4608 x 2048), is put into operation. Full time operations began in March 2003.
2002 LINEAR adds a 0.5 meter aperture telescope to help follow up discoveries by its two 1-meter search telescopes.
2003 NASA releases the NEO Science Definition Team report (G. Stokes, Chair) recommending that NEO searches be extended to discover 90% of near-Earth asteroids whose diameters are greater than 140 meters.¹⁵
2004 April. Catalina Sky Survey brings on line a 0.5 meter Schmidt telescope at Siding Spring, Australia for a southern hemisphere NEO search effort.
ASTEROID DEFLECTION: HOPES AND FEARS RUSSELL L. SCHWEICKART Chairman, B612 Foundation, Tiburon, USA
The subject of asteroid deflection is moving gradually into the foreground for those aware of the asteroid collision hazard. Publication of concepts in major popular magazines, TV documentaries, Congressional testimony and reports by major professional societies have gradually increased awareness of the potential in larger segments of both the public and political sectors. The emergence of new capabilities is also enabling greater options in addressing the daunting physical challenge of changing the orbits of these massive bodies. While no government has yet shown any inclination to actually assume responsibility for asteroid deflection, there are increasing pressures both to extend the Spaceguard limit down toward 100 meter size objects and to mount dedicated space missions to acquire the specific knowledge of NEA characteristics necessary to design future deflection missions. Perhaps the most dramatic choice to be made will be that between those proposed techniques utilizing high force nuclear explosives and those utilizing low (or very low) force controlled acceleration. The nuclear techniques face many difficult societal issues in addition to major technical unknowns, whereas the "slow, soft" alternatives primarily face uncertainties in the physical and mechanical characteristics of the asteroids themselves.

Over the past 25 years the general public has become aware of the fact that the Earth collides, from time to time, with asteroids and comets. Many thoughtful people were aware of these occasional collisions prior to that time, and many others are still unaware of this reality. Nevertheless there has been a material shift in the acceptance of this phenomenon in these two and a half decades, beginning with Luis and Walter Alvarez's public pronouncement that the demise of the dinosaurs and the rise of mammals was triggered by a major impact with an asteroid or comet¹. The vigorous scientific debate on the validity of this claim, at times quite strongly emotional, simply fueled the public interest in and imagination about the power of such an event. Ultimately the identification of the Chicxulub impact crater by Alan Hildebrand² and others resolved, for most people, the residual uncertainties about the validity of this mechanism as a major evolutionary force. While only just over a decade in the past, it is difficult today to recall the considerable power of the Lyellian uniformitarian dogma³ which precluded such consideration before the Alvarezes' claim. But it was only a few years later when, in 1993, Carolyn Shoemaker and her collaborators Gene Shoemaker and David Levy⁴ picked up the fragmented Shoemaker-Levy comet as it looped slowly back toward its ultimate rendezvous in July 1994 with the atmosphere of Jupiter. With over a year of early warning and excellent orbit determination to guide observers around the world, the first ever observed collision between a comet and a planet was witnessed by crowds of people and clouds of satellite and ground-based telescopes. It didn't take a rocket scientist to extrapolate to asteroid impacts with the Earth. Thanks to Gene Shoemaker, Glo Helin, and many other observers, the late 90s saw the detection of many near Earth asteroids (NEAs) and the growing recognition that the cratered surface of the Moon was not simply a historical "document", but
more a "work in progress". Impact dynamics were analyzed, statistical estimates of NEO populations were derived, telescopes and detectors to optimize detection were designed, and gradually a database of NEOs was built up. All the while the public was being fed both information and mis-information about these objects and the potential for them to collide with the Earth. The Spaceguard Survey⁵ was established in reality as NASA committed in 1998 to detect 90% of the NEOs over 1 km in diameter by 2008. With the increase in observation and detection, however, came the inevitable and sometimes loud false alarms of pending collisions or even the end of civilization. The laudable determination of the astronomical community to make all its data public, combined with the news media's tendency toward the sensational, led to a series of very public reports, denials, accusations and explanations which continues today. Each case of a claimed pending disaster is unique, but each emphasizes in its own way the intrinsic difficulty in dealing openly with extremely unlikely but extremely devastating events about which only imperfect knowledge is available. Perhaps the most powerful public impression originated in the Hollywood contribution to the subject. While there are historic precedents, the two most recent submissions were the movies Armageddon and Deep Impact, released almost simultaneously in 1998. While far from entertainment marvels, these films created a broad image in the public mind which probably dominates over all other sources of information. The overall result is that the public today generally understands that asteroids and comets do occasionally impact the Earth and that it might just be possible to take heroic action to intervene. While the former is widely, if not deeply, understood, the latter is lost in the fog of fact vs. fantasy. And in any case, the impression left by the films is that if defense against asteroids and comets is possible, the default means is the use of nuclear weapons. Looking into the future it seems clear that there will be a rapid acceleration of new NEA discoveries due primarily to the improved detection capability of new telescopes. In particular the Pan-STARRS⁶ system at the University of Hawaii is due for "first light" in January 2006 and will reach full capability about 2 years later. This system is designed to search NEA-space down to about 300 meter diameter objects and will therefore tap a population of about 20,000 objects vs. the 1,100 in the current Spaceguard 1 km class. Along with a rate of detection some 20 times (approximately) that of the current discovery rate, there will likely be a similar increase in the rate of discovery of near-misses, false alarms, misreportings and alarmist headlines. The world can therefore expect to see increased attention to the subject of asteroid impacts with the Earth for some years to come. Should the recommendations of NASA's Science Definition Team⁷, the AIAA⁸ and others be accepted and the Spaceguard goal be extended in future years down to detecting NEAs in the 100 meter class, the relevant population of objects jumps to 280,000! Today we can state without ambiguity that of the 3000 NEAs detected and cataloged, there is a zero probability of impact with the Earth in the next 100 years for all but 52⁹. Those 52 each have a very small probability of impact, which will likely drop to zero at the next sighting.
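As a rough plausibility check (not taken from the paper itself), the three cumulative population figures quoted above are approximately consistent with a single power-law size distribution; a minimal Python sketch, with the exponent treated purely as an illustrative fit:

import math

# Rough check: are the quoted cumulative counts (~1,100 NEAs > 1 km,
# ~20,000 > 300 m, ~280,000 > 100 m) consistent with N(>D) proportional to D**-b?
def slope(n1, d1_m, n2, d2_m):
    return math.log(n2 / n1) / math.log(d1_m / d2_m)

b = slope(1100, 1000.0, 20000, 300.0)            # fit between the 1 km and 300 m counts
n_100m = 1100 * (1000.0 / 100.0) ** b            # extrapolate down to 100 m
print(f"power-law slope b ~ {b:.2f}")
print(f"predicted N(>100 m) ~ {n_100m:,.0f}")    # ~2.8e5, close to the quoted 280,000

The quoted 280,000 figure for the 100 meter class is thus essentially a power-law extrapolation of the known larger populations.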
However, with target populations in the hundreds of thousands, it is not unlikely that there will be predictions of potential impacts within the 100 year time horizon for projecting orbits forward in time. It is therefore clear that within the next decade or so the subject of NEA deflection will become a lively topic of discussion... and debate. It would seem
inconceivable that the general public, and perhaps even governments, would be able to accept the news that a 100 megaton impact will occur within several decades without seriously calling for a concerted effort to deflect the incoming asteroid. What is to be hoped, clearly, is that the first such asteroid to be discovered does indeed kindly provide us with several decades of warning. Accepting that deflection of NEAs lies in our near future, the issue arises as to the options available to us. These options have been addressed to a modest degree over the past decade, with the most substantial meeting having occurred in February 2004 in Southern California, sponsored by the American Institute of Aeronautics and Astronautics¹⁰. In summary, while there are many variations in specific concepts for asteroid deflection, they can, roughly, be divided into two primary categories: the hard options (with generally uncontrolled results) and the soft options (with controlled results of varying degrees). One could as well label the options as hard-fast options and soft-slow options in that the former are high impulse explosive schemes and the latter are low impulse designs in which the asteroid is accelerated slowly over extended time periods. This paper will only serve to introduce the various options in summary fashion, pointing out the primary characteristics and limitations of each. The reader is referred to the AIAA conference report and other sources for details. In a similar summary fashion it will simply be stated here that the most efficient manner to cause two cosmic objects to miss a future rendezvous is to alter the magnitude of the velocity of the smaller body, either impulsively or over time. The option of rotating the velocity vector (causing a plane change or a rotation of the line of apsides) is both inefficient and non-cumulative, and therefore largely ineffective. Increasing (or decreasing) the velocity of the NEA, even slightly, increases (or decreases) the period of the asteroid's orbit, and the effect is cumulative over time, from the application of force through the rendezvous period. Therefore, for all cases where several orbits occur between the time of deflection and the rendezvous (or near miss) with Earth, the optimal direction to apply force to the asteroid is along its velocity vector. Interestingly, the velocity change required to cause an asteroid to miss a rendezvous with Earth, if applied 10-15 years ahead of impact, is on the order of 1 cm/sec. The hard options consist of various forms of nuclear explosion as well as that of direct (or kinetic) impact. In each case, however, to be effective, the resultant force must be applied along the NEA's velocity vector, with the exception of two cases. If one considers the option of fragmenting the NEA a viable option (i.e., blowing it to pieces), then the direction of impulse becomes meaningless. While there are many uncertainties regarding the effect of a nuclear explosion intended to fragment an asteroid (generally assumed to be a sub-surface burst), it seems clear that, given a large enough nuclear weapon, the fragmentation could be achieved. Arguments have been made from the first discussion of this option, however, that such a strategy would be unwise since the possibility exists that the resultant fragmentation could actually increase the overall threat and not eliminate it. No general answer to this debate will likely evolve since it is highly dependent on the structural character of the asteroid in question.
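Before turning to the specific deflection options, the ~1 cm/sec figure quoted above can be sanity-checked with a back-of-the-envelope estimate. A minimal sketch, assuming a near-circular, roughly 1 AU orbit and the standard two-body result that an along-track velocity change delta-v accumulates an along-track drift of about 3 x delta-v x t:

# Rough estimate only: near-circular orbit, two-body dynamics, and the
# secular-drift approximation delta_s ~ 3 * delta_v * t for an along-track burn.
def along_track_drift_m(delta_v_m_s, lead_time_years):
    seconds = lead_time_years * 3.156e7        # seconds per year
    return 3.0 * delta_v_m_s * seconds         # accumulated along-track displacement, m

earth_radius_m = 6.378e6
for years in (10, 15):
    drift = along_track_drift_m(0.01, years)   # 1 cm/s applied `years` before encounter
    print(f"{years} yr lead: drift ~ {drift/1000:.0f} km "
          f"(~{drift/earth_radius_m:.1f} Earth radii)")

A centimeter per second applied a decade or more in advance shifts the asteroid along its orbit by roughly one to two Earth radii at the encounter, which is why such tiny velocity changes suffice.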
The more favored nuclear options are intended not to fragment the asteroid but rather to accelerate it in a preferred direction adequate to cause it to miss its rendezvous with Earth. While quite distinct in specifics, two examples serve to illustrate the options here. One is a surface burst designed to excavate asteroidal
materials and eject them in a preferred direction. The alternative is a stand-off explosion, probably maximizing the neutron flux directed at the surface of the asteroid, to cause an explosive boil-off of the surface and generate the desired impulse. In both these cases the characteristics of the specific asteroid are clearly critical. In addition to this uncertainty, the placement of the nuclear explosive in each case must be quite precise in order for the resultant impulse to be generated in the desired direction. The certainty of such a placement is profoundly enhanced if the explosive is positioned by a spacecraft which has fully rendezvoused with the asteroid; i.e., if the spacecraft has matched velocity with the asteroid. Such a rendezvous, however, requires considerable fuel compared with a flyby, where the spacecraft simply flies by at a precise distance at high speed and the nuclear warhead explodes at precisely the correct time. The cumulative uncertainties intrinsic in this design, combined with the unknowns about the structural characteristics of asteroids, lead many proponents of deflection to remain skeptical about the nuclear options. The direct impact or kinetic impact option is extremely simple in concept. In its caricature form it says just take any large spacecraft that's ready to launch (communication satellite, weather satellite, etc.), load it onto the largest rocket available, and send it out to crash directly into the asteroid at the highest speed it can achieve. The orbital mechanics of this concept are not favorable, however. In the general case, a direct impact would involve injecting the impactor into a trajectory that would intersect the asteroid's orbit after traveling some 130 degrees or so around the Sun. The two orbits would intersect at an angle dependent on the specifics of the mission design and propulsive energy available, but in all likelihood at something less than 30 degrees. The velocity of each, at the time of intercept, would typically be on the order of 15-20 km/sec and therefore the relative velocity about half that, or 8-10 km/sec. However, if one looks at the vector diagram of this intercept, one realizes that the relative velocity is roughly radially outward from the Sun (or alternatively radially inward)... in fact 90 degrees from the desired direction along the asteroid's velocity vector. Furthermore, this impact angle is not optional; without the expenditure of considerable additional fuel, it will always lie relatively near the asteroid-Sun line for a direct impact trajectory. Alternatively, if a rendezvous is accomplished in which the impactor matches the velocity of the asteroid at some short distance behind the asteroid (or in front of it) and then accelerates rapidly to make a direct impact parallel to the velocity vector, it can generate an impulse in the desired direction. However, if one makes a simple calculation of the velocity required at impact, the daunting nature of the challenge becomes evident. E.g., if we assume a target asteroid of about 200 meters, the mass would be approximately 10¹⁰ kg. If we then posit an impacting spacecraft of mass 10³ kg and desire a velocity change of only 1 cm/sec, we see that the impacting velocity which must be achieved is 100 km/sec! While the numbers can be refined a bit, it is discouraging to realize that for this generalized case, dozens of Saturn V class rockets would have to be available at the stand-off point to accelerate the impactor to this velocity!
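A minimal sketch of the momentum-balance arithmetic behind these numbers; the ~2.5 g/cm³ bulk density is an assumed round value, and ejecta momentum enhancement is ignored:

import math

# Momentum-balance sketch for the generalized kinetic-impact example above.
density = 2500.0                                   # kg/m^3 (assumed bulk density)
radius = 100.0                                     # m (200 m diameter body)
asteroid_mass = density * (4.0 / 3.0) * math.pi * radius ** 3   # ~1e10 kg
impactor_mass = 1.0e3                              # kg
delta_v = 0.01                                     # m/s, i.e. 1 cm/sec

required_impact_speed = delta_v * asteroid_mass / impactor_mass
print(f"asteroid mass ~ {asteroid_mass:.1e} kg")
print(f"required impact speed ~ {required_impact_speed / 1e3:.0f} km/s")

For comparison, typical interplanetary encounter speeds are only 10-30 km/sec, which is why the text concludes that a pure kinetic impactor is realistic only for very small objects.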
To get them all off the Earth and to that point in space would require hundreds of Saturn V rockets! The bottom line is that direct impact would only be a realistic option for NEAs of such a small size that they would be destroyed in the Earth's atmosphere in any event. Another issue of real world significance with all the hard options is that the spacecraft which delivers the impulse mechanism is destroyed in the process. There
will therefore remain a substantial uncertainty as to just what the result of the deflection operation was until perhaps years of telescopic tracking from the Earth can pin down the resulting orbit. A mission design variant that can resolve this problem is to deploy a separate transponder-equipped sub-satellite that would have to survive the impulsive explosion and be able to determine the result, either as a stand-off observer or perhaps as a surface lander on the asteroid far-side. However, this not only adds considerable complication to the mission but, most importantly, would require that the primary spacecraft match velocity (i.e., fully rendezvous) with the asteroid. One of the generally argued advantages of the hard options is that they would not have to rendezvous with the asteroid, and this option would negate that perceived advantage. The soft options also come in several forms, which are lumped together here into ablative systems and direct push systems. Ablation is used in a liberal sense in that the mass driver concept is included in this category as a kind of discrete mass ablation system. Once again one of the most challenging considerations in these systems is that of getting the velocity increment in a direction parallel to the velocity vector of the asteroid. Unlike the hard options, however, these concepts all require significant amounts of time to achieve the necessary velocity change, and each uses very low forces to effect this change. The desirable consequence of this is that there is no danger of fragmenting the asteroid in the effort. However, a significant complication is introduced in that the rotation of the asteroid now comes into play, and it is a powerful design driver. Again, simplifying for brevity, both the laser and mirror ablation concepts utilize high-energy electromagnetic radiation to heat a localized portion of the asteroid surface from a station keeping stand-off position either ahead of or behind the asteroid. Vaporizing the surface will generate a small, but potentially significant, thrust opposite to the direction of the gases as they escape the asteroid surface. In the case of the laser, a very high temperature must be generated to vaporize the surface, and the surface must maintain high temperatures despite the fact that the spot is continually moving out of the laser beam due to asteroid rotation. Clearly a pulsed laser system of higher power could be substituted to partially avoid this effect. One challenge for the laser is to maintain a station keeping position with respect to the asteroid for considerable lengths of time while precisely pointing the laser beam. Additionally, providing the very high energy to power the laser will likely require a space nuclear electric system in order to reliably provide the necessary electricity at reasonable launch weight. Finally, an intrinsic problem with all the optical ablation concepts is that the ablating gases from the asteroid will gradually tend to coat all optical surfaces, and some self-cleaning capability may well have to be built into the design. Mirror ablation is similar in kind to the laser systems with the exception that concentrated and focused sunlight is the heating mechanism. While this requires little or no electrical energy to operate, the area of the solar collecting concentrator must be very large, probably several square kilometers.
Assuming that the deployment and figure control of such a large surface can be successfully addressed, the station keeping, attitude control, and vapor deposition degradation issues are daunting, to understate the challenge. When all is said and done, the launch weight of such an alternative will be substantial, if not prohibitive. An alternative, which falls squarely in the attractive sounding category, is the use of a mass driver to directly launch chunks of the asteroid itself off the surface opposite to the direction one wishes to accelerate. The idea would be to land the mass driver on the asteroid surface and, using solar power (or nuclear electric), send
packages of asteroidal material in rapid sequence off at greater than escape velocity. The real world challenges of directing the asteroid bullets in the proper direction as the asteroid is rotating, mining and packaging the bullets, and setting all this automated operation up on a tumbling asteroid are beyond daunting. Finally there is the concept of the direct push, most fully championed by the B612 Foundation, for which I serve as Chairman of the Board. The idea for the Foundation emerged from an October 2001 meeting of NEA-fluent astronomers, engineers and astronauts who decided to explore the possibility of seriously initiating work on NEA deflection. Our choice to develop the direct soft push concept was driven primarily by two considerations: our sense that a deflection capability should be demonstrated within the next 20 years (given the anticipated public policy demand), and the fact that there were several cost-effective key technologies that were developing rapidly. The concept, simply stated, is to land on an asteroid and, using the power and propulsion systems used to get there, control the spin axis of the asteroid and push directly on its surface to accelerate it in the desired direction. While the challenge of performing such an operation on a 1 km asteroid would be out of reach for decades, we realized that, with advanced nuclear electric power systems and plasma propulsion systems operating in the laboratory today, a demonstration mission to a 200 meter diameter asteroid could be accomplished in a bit over a decade. With a space qualified nuclear electric reactor of about 1 megawatt and a plasma propulsion system that could generate 2.5 newtons with an exhaust velocity of 100,000 meters/sec, a representative demonstration of asteroid deflection could be made by 2015. We therefore established the B612 goal to significantly alter the orbit of an asteroid, in a controlled manner, by 2015. After working through several mission designs, primarily addressing the challenge of thrusting continuously in the desired direction (given a rotating asteroid), we settled on an elegant mission design that first torques the spin axis of the rotating asteroid to a desired angle with respect to the orbit plane and then pushes directly parallel to the instantaneous velocity vector until the desired change in velocity is achieved. This demonstration mission design was presented in a recent Scientific American article¹¹. This rather ambitious agenda is surprisingly straightforward, but for one significant challenge, and that is the great unknown of how to attach the spacecraft (or anything) to the surface of an asteroid. In particular, while the spacecraft axis would be oriented vertically with the engine pointing radially outward, the engine would have to continuously thrust off the vertical axis to enable the necessary control of the asteroid spin axis. To achieve this capability the spacecraft would have to have lateral support in order to maintain its vertical position while thrusting at as much as 90 degrees off the vertical. While several concepts for providing such stabilization exist, none will become viable until we can visit one or more asteroids to understand better the near-surface structural characteristics of these bodies.
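A minimal sketch of what the quoted thruster figures imply for the 200 meter class demonstration target; the ~1e10 kg asteroid mass and the 1 cm/s goal delta-v are assumed round numbers for illustration, not stated B612 mission requirements:

# Sketch of what the quoted 2.5 N thruster with 100 km/s exhaust velocity implies.
thrust = 2.5             # N (from the text)
v_exhaust = 1.0e5        # m/s (from the text)
asteroid_mass = 1.0e10   # kg (assumed round value for a 200 m body)
delta_v = 0.01           # m/s (assumed goal, consistent with the earlier discussion)

burn_time_s = asteroid_mass * delta_v / thrust          # ~4e7 s
propellant_kg = (thrust / v_exhaust) * burn_time_s      # mass flow rate * burn time
jet_power_w = 0.5 * thrust * v_exhaust                  # ideal jet power

print(f"continuous-thrust time ~ {burn_time_s / 3.156e7:.1f} years")
print(f"propellant expended   ~ {propellant_kg:.0f} kg")
print(f"ideal jet power       ~ {jet_power_w / 1e3:.0f} kW")

The push lasts on the order of a year, consumes roughly a tonne of propellant, and needs on the order of 100 kW of jet power, all consistent with the megawatt-class reactor and the 2015 demonstration goal quoted above.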
Within several months after identifying the performance needed to accomplish the B612 demonstration mission, NASA announced the formation of its Prometheus program targeted to develop and demonstrate the very same power and propulsion technologies that we had integrated into our design. In another few months, NASA defined its first Prometheus mission utilizing these powerful new capabilities, a mission to orbit the icy moons of Jupiter (JIMO). At this point it became clear to us that we could quite easily adapt our preliminary mission design to utilize the specific power and propulsion systems which would be developed by NASA for the JIMO
mission. Our task had then shifted quite dramatically to "convincing" NASA that one of its immediate follow-on Prometheus missions should be the B612 mission to a near Earth asteroid. Our efforts to convince NASA to adopt this goal have included both popular and technical papers defining the mission and designing specific techniques for various mission operations, the production of several commercial TV films supporting the mission rationale, participation in various international meetings on the subject of threat assessment and asteroid deflection, and testimony before the US Senate, the National Academy of Sciences, and others. To date NASA has shown polite interest but nothing more.

CONSIDERATIONS AND IMPLICATIONS
A comparative look at the various asteroid deflection options can only, at this point, provide preliminary insights. Nevertheless, there are several distinct differences already in evidence which can inform not only the technical but also the social and political choices which must ultimately be made. The nuclear explosive options will all be strongly dependent on the bulk and surface structural characteristics of asteroids, a feature about which we know very little today. It is also likely that we will find substantial variation in these structural characteristics from one asteroid type to another, and perhaps even within the population of any given type. Therefore the nuclear option may require quite extensive detailed information about each asteroid to be deflected, an information set not easily acquired. Until much more is known about this subject, predicting the result of a nuclear explosive deflection effort will be highly unreliable. In addition to predicting the result of a nuclear deflection, measuring the actual result of a deflection mission will be challenging due to the violent nature of the operation. A double spacecraft compound mission, with one component serving as the deflector and a second as observer, is one solution to this challenge. However, since the velocity change being sought is less than one part in 10⁶, verifying success from a spacecraft flying past at 10 km/sec is daunting. If the operation also fragments the asteroid, even partially, the task of determining the result of the operation may well be impossible. Finally, any nuclear explosive option is and will remain inextricably intertwined with global geopolitics and, in fact, raises to prominence the spectre of space nuclear weapons. International treaties ban these objects in space today, but if no other deflection technique has been tested and/or validated when the world experiences either a near miss or perhaps a small but significant impact, the world public demand for action to prevent a recurrence of such an event may be sufficient to enable a state, so determined, to justify abrogating the treaties against weapons in space on the grounds of protection of the world public. It is critical, therefore, that the soft options be developed, demonstrated and known to be viable as soon as possible. This task is of utmost importance in order to avoid a situation in which the public misperceives the nuclear option as the only one available to protect the Earth from asteroid impacts. In fact, the soft options (and I would argue the B612 mission in particular) provide not only a viable but a highly preferable alternative for asteroid deflection. Not only is the technology devoid of geopolitical considerations, but it is, by its nature, generally applicable to all types of asteroids even in the absence of detailed information on their characteristics. The total forces applied to the asteroid for
successful deflection are in the range of a few pounds (less than 10 newtons) and can be distributed easily over an area of several square meters, thus assuring that virtually any surface, even the most fragile, can reliably be used. Furthermore, the B612 technique, in which a soft landing on an asteroid surface is integral to the design, will enable many other missions to the asteroids. Given both the scientific and potential future commercial interests in the exploitation of asteroids, the operational techniques integral to the B612 mission are useful, and perhaps even necessary, for economical future exploration of the near Earth asteroids. Regrettably, the situation today, in particular within NASA, is that developing such a capability, or even exploring for this purpose the application of the advanced technologies it is currently developing, is not within its mission. It may therefore require a directive to NASA (or the equivalent to ESA or others) from the US Congress before this critical undertaking becomes an actual space mission.

REFERENCES
1. Alvarez L.W., Alvarez W., Asaro F., Michel H.V. (1980) Extraterrestrial Cause for the Cretaceous-Tertiary Extinction, Science 208, 1095-1108.
2. Meteorite and Impacts Advisory Committee, Canadian Space Agency Chicxulub website; http://miac.uqac.ca/MIAC/chicxulub.htm
3. Sir Charles Lyell; Principles of Geology (3 volumes, 1830-1833)
4. NASA/JPL; Comet Shoemaker-Levy Collision with Jupiter; http://www2.jpl.nasa.gov/sl9/
5. NASA NEO Program Office; NASA release 98-123, NASA Establishes Near-Earth Object Program Office at Jet Propulsion Laboratory, June 1998
6. Pan-STARRS Project; University of Hawaii, http://pan-starrs.ifa.hawaii.edu/public/index.html
7. Report of NASA NEO Science Definition Team; August 22, 2003, http://neo.jpl.nasa.gov/neo/neoreport030825.pdf
8. AIAA Position Papers on Planetary Defense; April 6, 2004, http://nai.arc.nasa.gov/impact/news_detail.cfm?ID=139
9. JPL Sentry System, 13 Aug 04, http://neo.jpl.nasa.gov/risks/
10. AIAA ref.
11. The Asteroid Tugboat, Scientific American; Schweickart, et al.; November 2003
THE NEAR-EARTH OBJECT IMPACT HAZARD: SPACE MISSION PRIORITIES FOR RISK ASSESSMENT AND REDUCTION
NEAR-EARTH OBJECT MISSION ADVISORY PANEL REPORT TO ESA, JULY, 2004
A. W. HARRIS¹, W. BENZ², A. FITZSIMMONS³, A. GALVEZ⁴, S. F. GREEN⁵, P. MICHEL⁶ AND G. B. VALSECCHI⁷
¹DLR Inst. of Planetary Research, Berlin, Germany; ²Univ. of Bern, Switzerland; ³Queens Univ., Belfast, UK; ⁴ESA-ESTEC, Noordwijk, The Netherlands; ⁵The Open University, UK; ⁶Observatoire de la Côte d'Azur, France; ⁷INAF-IASF, Rome, Italy
ABSTRACT
In July 2002 the general studies programme of the European Space Agency (ESA) provided funding for preliminary studies of six space missions that could make significant contributions to our knowledge of near-Earth objects. Following the completion and presentation of these studies, the ESA Near-Earth Object Mission Advisory Panel (NEOMAP) was established in January 2004. NEOMAP was charged with the task of advising ESA on the most effective options for ESA participation in a space mission to contribute to our understanding of the physical nature of near-Earth asteroids and the terrestrial impact hazard. The paper summarizes the final recommendations of the panel, and is taken from the Executive Summary of the original NEOMAP Report to ESA. The complete report can be downloaded from:
http://www.esa.int/gsp/NEO/other.htm
HAYABUSA AND ITS FOLLOW-UP PLANS BY JAXA
HAJIME YANO
Department of Planetary Science, Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), Kanagawa, Japan

ABSTRACT

This paper summarizes the scientific rationales for asteroid exploration, including knowledge that greatly assists in preparing for impact hazard issues (i.e., physical characterization of NEOs), the mission outline and current status of the Hayabusa asteroid sample return mission, with an emphasis on sampling and analysis procedures, as well as future prospects for Japanese minor body explorations. Examples of future mission concepts include multiple rendezvous and sample returns from spectrally known NEOs, multiple fly-bys and sample returns from main belt asteroid family members, and a solar power sail mission to fly by multiple asteroids in the main belt and among the Jovian Trojans.

SCIENTIFIC OBJECTIVES OF ASTEROID EXPLORATION

As of early 2004, more than 250,000 minor bodies in the solar system have been detected. Among them, more than 10,000 asteroids have well-determined orbital elements and several thousand of them have been observed with multi-band spectroscopy, so that their taxonomy can be classified in statistically valid numbers. On the other hand, several tens of thousands of meteorite and cosmic dust samples have already been collected in the terrestrial environment. Thus, asteroid studies of a statistical nature are conducted practically by ground observation and meteoritic analyses. Then, what is the value of a spacecraft mission to study asteroids? Unlike ground observation and meteoritic analyses, the number of exploration missions is limited, so that one cannot send spacecraft to every single asteroid. The following provides some answers to this question and justifies why spacecraft missions are complementary to other methods of studying asteroids, especially for the benefit of preparing for impact hazards posed to the earth by cosmic objects.

Ground Truth Bridging between Minor Bodies and Meteorites/Cosmic Dust Collections
Despite the existence of large databases for both asteroids and meteoritic materials collected in the terrestrial environment, what we lack is the "ground truth" to link the two. Amongst over 10,000 meteoritic collections, the only samples whose origins are known are samples from the Moon, Mars and possibly a unique, heavily differentiated (igneous) asteroid, Vesta. Of course, many previous studies have tried to make the connection between asteroid taxonomy and meteoritic classes with various arguments. However, no conclusion will ever be reached as to whether these perceptions are right or not unless we bring samples from asteroids of the major taxonomic types, visited and studied in detail in situ, down to ground laboratories and apply meteoritic analyses to them. This is a unique contribution that only planetary exploration can provide. Therefore, we need not send sample return spacecraft to every single asteroid, but only to representatives of the major taxonomic types (the more the better, but the minimum may be a half dozen or
so, i.e., S, C, D, V, M, A) in order to feed back to and re-classify the asteroid and meteoritic databases from their samples (Fig. 1). Also, it is important to note that spectral reflectance data of asteroids sample only the materials covering their surfaces, most probably a layer of sub-mm regolith of varying thickness. Thus, one has to be cautious when spectral reflectance data of "powdered" meteorites, believed to have been excavated from an asteroid interior, are interpreted. Surface regolith may be closer to micrometeorites and some types of interplanetary dust particles. Also, the "most primitive" asteroids like the C, D, P, and T types are important targets for studying the chemical evolution of organics in their samples, because their very low albedo makes ground observation spectra show hardly any distinctive features. By coordinating the efforts of the international space agencies regarding the spectral types of asteroids to which they should send their probes, we should be able to cover all major taxonomic types within the next decade or two (Fig. 2). Then the combination of the re-classified asteroidal and meteoritic databases can be used to interpret the variation of material distribution with heliocentric distance at the boundary between the terrestrial planets and the gaseous planets, which now forms the main asteroid belt region, in the formation of the solar system (Fig. 3). This is fundamental information about planetary system formation, which can also serve as reference data for other planetary systems.
Fig. 1: Mutual dependency between meteoritic analysis and ground observation of asteroids, and the need for asteroid sample return missions.
Fig. 2: Possible generic relationship between asteroid taxonomy and meteorite classes, with space missions to find their ground truths. Blocks indicate sample return missions and ellipses are fly-by and rendezvous-only missions (see the details in the last chapter).
Fig. 3: Heliocentric distance dependency of asteroid taxonomy ratios (Yano, et al., 2004a).

Geological Processes and Material Alterations
One of the major paradoxes that planetary scientists have been facing for years is "the S-type asteroid vs. ordinary chondrite paradox". S-type asteroids account for the largest number of members among the main belt asteroid population, while ordinary chondrites are the most common class of meteorites collected on the earth. Thus one may simply assume that ordinary chondrites should come from S-type asteroids, if there is no bias in the
population of meteorites that have fallen to the ground, such that it reflects the true ratio of the asteroid population. However, their spectral reflectance data do not match exactly; large S-type asteroids tend to be darker and reddened and exhibit less sharp absorption bands than most ordinary chondrite powders. One of the strong hypotheses for explaining this discrepancy is the "space weathering effect", which forms nano-scale iron particles and reddens the spectra of S-type asteroid surfaces through micrometeoroid impacts and/or solar wind bombardment (Sasaki, et al., 2001). The degree of such effects seems to be time-dependent: smaller S-type asteroids, which have shorter collisional lifetimes than larger ones, look more similar in spectra to meteorites (Binzel, et al., 2001). We must therefore collect surface materials of S-type asteroids to see if the space weathering effect can solve this paradox, and the samples obtained during the Hayabusa mission (see following chapters) will bring us the answer. Geological processes on scales larger than space weathering by micrometeoroids arise from the consequences of impacts by larger meteoroids. As observed on the Martian satellite Phobos and the S-type asteroid Eros, regolith "ponds" and boulder/large block formation under microgravity conditions give us clues to the internal structure and collision history of the asteroids. Ridges and grooves also provide evidence of macro-porosity of the fractured/shattered asteroid interior. Comparison between crater chronology and absolute dating of returned samples will give us a strong foundation for studying the lifetime of an asteroid and the time scale of its geological alteration. Recent discoveries of fluid inclusions in ordinary chondrite falls (Zolensky et al., 1999) give us the chance to study the thermal history and volatile evolution (e.g., hydration) of the early stage of S-type asteroid formation.

Structure of "Undifferentiated" Bodies
Except for limited occasions to measure the mass and shape of minor bodies, such as spacecraft visits and binary system observations, the bulk densities of asteroids and comet nuclei are still largely unknown. Their internal structures are even more difficult to study (see the later chapter), especially for "primitive", thermally undifferentiated asteroids, which have no internal layers of core, mantle or crust, unlike the large terrestrial planets. Even within the undifferentiated asteroids, there may be a variety of possible structures including "monolithic (e.g., fast rotators)", "fractured", and "rubble pile", and we have not established how to classify and quantify them (e.g., mesh scale, porosity) for sensible scientific interpretation of their evolution. It is also important to note that the forces governing the bonding of the building blocks of asteroids vary depending upon scale, from microns to 100-km ranges. When one discusses the internal structure and its porosity and strength, the appropriate governing forces must be applied, such as electrostatic, frictional, and gravitational forces. Also, compaction factors must be carefully evaluated to judge whether the porous structure of asteroids is due to macro-porosity from rubble piles of boulders, or to micro-porosity as seen inside micrometeoroids and IDPs. This point really counts for impact hazard issues, too. Take a simple example: when one throws different types of balls of the same size against a wall, the impact consequence will be different in each case: for example, for a metallic ball, a solid rock, a bean bag and a snowball.
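The macro- vs. micro-porosity distinction can be illustrated with a simple density bookkeeping exercise; the grain density, bulk density and meteorite micro-porosity below are illustrative assumptions only, not measurements of any particular body:

# Illustrative sketch: total porosity from assumed grain and bulk densities.
# Comparing total porosity with the micro-porosity of analog meteorites then
# indicates how much void space must be macro-scale (rubble-pile structure).
grain_density = 3.2              # g/cm^3, assumed value typical of LL-chondrite grains
bulk_density = 2.0               # g/cm^3, hypothetical spacecraft-derived bulk value
meteorite_microporosity = 0.08   # assumed micro-porosity of analog meteorite samples

total_porosity = 1.0 - bulk_density / grain_density
macro_porosity = 1.0 - (1.0 - total_porosity) / (1.0 - meteorite_microporosity)

print(f"total porosity ~ {total_porosity:.0%}, "
      f"implied macro-porosity ~ {macro_porosity:.0%}")

A total porosity well above the micro-porosity of analog meteorites would point to large-scale voids, i.e. a rubble-pile rather than monolithic interior.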
Countermeasures to impact hazards involve several levels of activity, such as (1) discovery of potentially hazardous objects, (2) tracking them to determine their precise orbits, (3) their physical characterization (density, strength, structure, center of mass, etc.), and finally (4) development of mitigation technologies (e.g., changing their orbital elements, in the case of a long lead time). A better
understanding of the inner structures of asteroids of various types (taxonomy, size, and morphology) by spacecraft missions directly contributes to the third point.

Other Interests: Impact Hazards and Future Resources
Apart from the above science-driven motivations, there are other reasons for exploring asteroids, especially near-earth objects (NEOs), as follows: First of all, once an earth-impacting NEO is discovered with sufficient lead time, "know your enemy" information must be obtained. Since NEOs > 1 km in size are nearly all discovered and have well-determined orbital predictions, what we need to understand urgently are the physical properties of a variety of sub-km Earth crossing asteroids (Huebner, et al., 2001). The Hayabusa mission to a sub-km S-type NEO, "Itokawa", is the very first relevant space mission for this purpose (see more in a later chapter). Another reason relates to human planetary exploration in the future. Recent human space exploration plans to Mars via the Moon and beyond will certainly have near earth asteroids as one of the destinations at some point in the future (e.g., Durda, et al., 2001). NEO missions have advantages over the Moon: they require modest delta-V, have small gravitational wells, and, while further away from the earth than the Moon, involve only about a 1-year round trip in the microgravity condition; they are within the reach of the life support system technology and operational experience gained in LEO space station activities, except for radiation issues outside the Earth's magnetosphere. In other words, without obtaining the technologies and operations required for a human NEO mission, human missions to Mars do not seem to be practically feasible. From a more futuristic viewpoint, albeit a lesser need, we must first learn which types of asteroids human-tended spaceships should go to for "gas refuels" and "water fountains", if they have to utilize propulsion and life support consumables in the course of a long interplanetary cruise (e.g., Lewis, et al., 1997). Table 1 summarizes what we can learn about asteroids by ground observation, in-situ measurement and sample return.

Table 1: What we can learn about asteroids by ground observation, in-situ measurement and sample return.

Ground Observation:
> Orbital elements
> Spectral type (taxonomy)
> Thermal properties (space IR)
> Geometric albedo
> Rotation period (light curves)
> Spin state (light curves)
> Rough global shape (resolution: radar < fly-by < rendezvous)
> Averaged surface roughness (same as above)
> Binary system (if any)
> Yarkovsky / YORP effects, etc.
In-situ Measurements:
> Global surface composition
> Local surface mineralogy
> Local surface topography / geology
> Gravity-mass, bulk density, macro-porosity
> Regolith condition, surface thermal property
> Implication to the internal structure (boulders, ridges, grooves, ponds ...)
> Satellites and associated dust bands (if any), etc.
Sample Return:
> Absolute dating (isotopic ratios, noble gas ...)
> Mineralogy/petrology
> Major & trace elements
> Micro-porosity & micro-structure
> Space weathering effect
> Hydration history
> Compositional enhancement / depletion
> Chemical evolution of organics
> Meteorite/cosmic dust connection, etc.
HAYABUSA MISSION OUTLINE

Noting the above scientific and other motivations to explore near earth asteroids, here is an overview and the current status of the Hayabusa mission, the world's first asteroid sample return.

Timeline and Current Status
At 13:29:25 on the 9th May 2003 JST, JAXA/ISAS's spacecraft "MUSES-C" was successfully launched with the full-stage solid fuel rocket M-V-5 from the ISAS Uchinoura Space Center in Kagoshima, Japan. After confirming the deployment of its solar array paddles and sampling horn, as well as proper attitude control, through the telemetry received by the Deep Space Network, the spacecraft was inserted into its interplanetary trajectory and renamed "Hayabusa", or "falcon" in English, for its resemblance to the bird that flies rapidly to its target, hovers to monitor, and catches its prey in a "touch & go" sequence (Fig. 4). Hayabusa was originally defined as an engineering test spacecraft for four major technologies that are necessary for more ambitious planetary explorations in the future. They are (1) an ion engine system for interplanetary cruise, (2) autonomous navigation and control by image processing, (3) surface sample collection from a microgravity body, and (4) direct re-entry into the Earth's atmosphere from interplanetary space. At present, the spacecraft has been operating with three ion propulsion engine systems (IES) continuously for more than 15 months in the interplanetary cruising phase. It is now heading to the target NEO after the successful earth swing-by on May 19th, 2004. In late August 2005, after the solar conjunction, the spacecraft will be inserted into the rendezvous trajectory with Itokawa, called the "home position", which is nearly identical to the orbit of the asteroid itself. There the spacecraft will conduct global mapping with the multi-color optical camera (AMICA), the near infrared spectrometer (NIRS), the X-ray fluorescence spectrometer (XRS), and the LIDAR for a maximum duration of three months (Table 2). After the completion of the global mapping phase, Hayabusa will collect surface materials (e.g., regolith) of several hundred mg to several g per shot (see the later chapter). The sampling will be repeated for up to three locations, and the catcher will finally be transferred to the re-entry capsule and tightly sealed just before the spacecraft leaves the asteroid in December 2005 to operate the IES again in interplanetary space. In June 2007, Hayabusa will finally come back to Earth and the return capsule will be released for earth re-entry and landing in Woomera, Australia (Fig. 5).
Fig. 4: The Hayabusa spacecraft flight model (Courtesy: JAXA/ISAS).
Fig. 5: Trajectory design and mission plan of the Hayabusa spacecraft.
Table 2: Hayabusa's scientific instruments and their objectives.

On-board Instruments and Scientific Objectives:
> Sampler: collect surface samples of 100 mg-1 g
> AMICA (Optical Camera with ECAS): surface geology, surface mineralogy, implication to the internal structure (boulders, ridges, grooves, ponds ...)
> LIDAR (Laser Altimeter): surface topography
> NIRS (Near IR Spectrometer): surface mineralogy (0.85-2.10 microns)
> XRS (X-ray Fluorescence Spectrometer): global surface composition
> MINERVA (technological demonstration) with cameras and heat probes: regolith condition, surface thermal property
> Ranging & range rate measurement: gravity-mass, bulk density, macro-porosity
Target Asteroid (25143) Itokawa, a Sub-km S-type PHA
Hayabusa's mission target is an NEO named (25143) Itokawa (formerly known as "1998 SF36"), which was first discovered by LINEAR in September 1998. At the time of the mission design a few years ago, the population of NEOs at least as large as Itokawa was at least 5000. Of the roughly 500 of those that had been discovered, Itokawa is distinguished by having the lowest delta-v for a spacecraft rendezvous (4.29 km/s vs. 5.95 km/s for 433 Eros). Itokawa is a potentially hazardous asteroid (PHA), whose perihelion distance of 0.953 AU lies inside the Earth's orbit and whose aphelion distance of 1.693 AU lies outside Mars' orbit. From previous radar and multi-band optical observations (e.g., Ostro, 2001 and Binzel, et al., 2001), its orbital and physical characteristics were derived, as can be seen in Table 3. Itokawa's size is roughly (490±100) x (250±55) x (180±50) m and its rotation period is 12.132 hours. High S/N and relatively high resolution (50 Å) visible and near-IR spectroscopic measurements obtained during its 2001 apparition revealed it to have a red-sloped S(IV)-type spectrum with strong 1- and 2-micron absorption bands, analogous to those measured for ordinary LL chondrites affected by space weathering. Assuming its bulk density, the surface gravity level of Itokawa is of the order of 10 micro-G and its escape velocity is ~20 cm/s. As a result of those measurements, it seems that Itokawa may be made of ordinary chondrite bedrock covered with a relatively thin layer of regolith, whose size distribution peaks at a larger end than that of Eros, an NEO several tens of times larger than Itokawa that was visited by the NEAR-Shoemaker spacecraft. Thus we need a single sampling device that suits both ordinary chondrite bedrocks and regolith layers.
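A quick consistency check of the surface-gravity and escape-velocity figures quoted above, treating Itokawa as a uniform sphere with the tabulated mass of ~6.6 x 10¹⁰ kg and the ~300 m apparent mean diameter (both simplifying assumptions for this sketch):

import math

# Consistency check of the quoted surface environment, sphere approximation.
G = 6.674e-11        # m^3 kg^-1 s^-2
mass = 6.6e10        # kg (from Table 3)
radius = 150.0       # m (half of the ~300 m apparent mean diameter)

surface_gravity = G * mass / radius ** 2              # ~2e-4 m/s^2
escape_velocity = math.sqrt(2.0 * G * mass / radius)  # ~0.24 m/s

print(f"surface gravity ~ {surface_gravity:.1e} m/s^2 "
      f"(~{surface_gravity / 9.81 * 1e6:.0f} micro-g)")
print(f"escape velocity ~ {escape_velocity * 100:.0f} cm/s")

These round-figure results land within a factor of about two of the ~10 micro-G and ~20 cm/s values quoted above; the difference comes entirely from the shape and density assumptions.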
Table 3: Summary of physical and orbital characteristics of NEO (25143) Itokawa.

Physical Characteristics
Spectral Type: S(IV)
Meteorite Analog: LL chondrites (2.7 g/cm^3)
Absolute Magnitude: +19.1
Apparent Mean Diameter: 298 ± 29 m (optical)
Ellipsoid Size Estimate: 630 x 250 m (radar)
Estimated Mass: 6.6 x 10^10 kg
Geometric Albedo: 0.32 ± 0.04 (also reported as 0.385 ± 0.106, with slope parameter G = 0.40 ± 0.15)
Surface G Level: ~10 micro-G
Escape Velocity: ~22 cm/s

Orbital Parameters
Semi-Major Axis (a): 1.324 AU
Eccentricity (e): 0.280
Perihelion Distance (q): 0.953 AU
Aphelion Distance (Q): 1.693 AU
Orbital Inclination (i): 1.63 deg
Rotational Period (P): 12.132 hours (retrograde)
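As a small consistency check on the orbital parameters above, the perihelion and aphelion distances follow directly from the semi-major axis and eccentricity; a minimal sketch:

```python
a = 1.324   # semi-major axis [AU]
e = 0.280   # eccentricity

q = a * (1.0 - e)   # perihelion distance [AU]
Q = a * (1.0 + e)   # aphelion distance [AU]

print(f"q = {q:.3f} AU, Q = {Q:.3f} AU")   # 0.953 AU and 1.695 AU
```

The results agree with the 0.953 AU and 1.693 AU values listed in Table 3 to within the rounding of a and e.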
Sample Return Strategy

Since sample analysis is the key to understanding the physical and compositional properties of the target asteroid, here is a brief description of the sampling strategy. After Hayabusa completes the global mapping of Itokawa's surface, sampling locations will be decided on the basis of both scientific merit and engineering safety, and the first descent for "touch-and-go" sampling will be conducted. Before touching the surface, one of three "target markers", which reflect 1-second flash light pulses, will be dropped, and the spacecraft will track its passage by autonomous navigation. Also, a hopping rover, "MINERVA", will be deployed during the last several tens of meters of the descent. The rover carries color stereo cameras and heat probes for studying the asteroid surface conditions.

Recent in-situ studies suggested that even relatively small asteroids (e.g., 10 km across) can retain considerable regolith and boulders on their surfaces by non-gravitational forces (Veverka, et al., 2001). However, it is impossible to fully understand the surface conditions of such minor bodies from ground observation alone prior to the launch of a sample return spacecraft. Thus it is desirable to design a single sampling mechanism that suits a wide heterogeneity of target surfaces, from hard metal-silicate bedrock to regolith layers covered with fluffy microparticles, and Hayabusa employs such a mechanism (Yano, et al., 2002b). It carries a 1-m horn made from (1) an Al metal cylinder horn at the tip, (2) a foldable, compliant fabric horn (Vectran), and (3) an Al metal conical horn connected to the sample catcher inside (Fig. 6). The sampling mechanism is attached to the base of the spacecraft and consists of (a) a sample catcher canister coated with 99.9999% Al, (b) a transfer mechanism to the re-entry capsule, and (c) projectors. Within 0.1 seconds after the tip of the horn touches the asteroid surface, the laser range finder will detect a retraction of the fabric horn of 1 cm or more; this triggers the firing of a 5-g Ta projectile by a small projector onto the asteroid surface, through the interior of the 1-m-long horn, at a velocity of 300 m/s. The impact of the projectile produces surface ejecta, which are concentrated through the conical horn toward the catcher (Fig. 7).
One second after detection of the touch-down, the spacecraft completes its sampling and ascends with its reaction control system thrusters, leaving the asteroid surface quickly to eliminate the risk of colliding with local obstacles (Fig. 8).

In order to evaluate the behavior of impact ejecta from small asteroids, we performed both 1G tests and reduced-gravity impact experiments, using parabolic flights and drop tubes (down to 10^-5 G, equivalent to the gravity order of a 1-km asteroid), onto various asteroid surface analog materials such as porous heat-resistant bricks, 200-micron glass beads, and lunar regolith simulant. After the complete sets of 1G and micro-G impact tests, the major findings are the following (see more details in Yano et al., 2002b). For the brick impacts, the collected samples consisted of some mm-sized chunks and a large number of sub-mm particles in both 1G and micro-G. However, in 1G, 50-70% of the ejected mass stopped at the fabric horn when it bounced at the horn "shoulder", so the collection efficiency (CE) was around a few to 10%. When only 20-30% of the ejected mass stopped at the fabric horn and the shoulder, CE increased to 10-20% (Fig. 9). On the other hand, the CE of the brick impacts in micro-G went up to 13-30%, because the gravitational pull is cancelled for the relatively slow particles that would otherwise not reach the catcher within ~3 seconds. The best CE performance in micro-G was boosted to >40%. In addition, collected masses (CM) of several hundred mg to >1 g were routinely achieved. As for the lunar regolith simulant and glass bead impacts, high-speed photography showed few vertical jets, unlike the brick impacts, and a majority of the ejecta diverged in a cone of about 45 degrees from the impact point at very slow velocities of a few m/s. Thus the CE dropped significantly, to <0.1% in 1G and to ~4% even in micro-G. However, the total ejecta mass from the regolith impacts was as much as 10-1000 times that of the brick impacts. Therefore, even such a small CE could result in a CM of 0.1-10 g per shot in micro-G. The result was that micro-G impacts onto glass beads increased the CM by two to three orders of magnitude compared with the 1G impacts. Since glass beads are free from the compaction effect, unlike the lunar regolith simulant, although their size distribution is thought to be much coarser than that of lunar soil, the CE of impact sampling on the actual asteroid regolith may end up somewhere between the micro-G glass bead data and the micro-G lunar regolith simulant data. In conclusion, from the results of the 1G and microgravity impact tests on both consolidated bedrock and regolith simulants, the expected amount of sample is around several hundred mg to several g per shot.
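To illustrate how a very low collection efficiency can still deliver the target sample mass when the ejecta mass is large, here is a minimal sketch of the arithmetic; the ejecta masses used are round illustrative assumptions, not values measured in the tests.

```python
def collected_mass(ejecta_mass_g: float, collection_efficiency: float) -> float:
    """Collected mass per shot = total ejecta mass x collection efficiency."""
    return ejecta_mass_g * collection_efficiency

# Brick-like target: modest ejecta mass, relatively high CE in micro-G
print(collected_mass(ejecta_mass_g=5.0, collection_efficiency=0.20))    # 1.0 g

# Regolith-like target: assumed ~100x more ejecta, CE of only ~1% in micro-G
print(collected_mass(ejecta_mass_g=500.0, collection_efficiency=0.01))  # 5.0 g
```

Under these assumptions both cases fall in the "several hundred mg to several g per shot" range expected for Hayabusa.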
Fig. 6: Components of the Hayabusa sampler horn.
Fig. 7: The flight model of the Hayabusa sampler horn.
Fig. 8: An artist's impression of the Hayabusa spacecraft descending to the asteroid surface for sampling (CG by A. Ikeshita, MEF, and ISAS/JAXA).
Fig. 9: Experimental results of Hayabusa sampler collection efficiency and collected mass in 1G and micro-G. Symbols: B = brick, R = regolith, G = glass beads, H = horizontal horn, V = vertical horn, S = with shoulder of the funnel, N = without shoulder, mG = micro-G (modified from Yano, et al., 2002b).
Sample Curation and Analyses

Because of the impact sampling, the returned samples will not be like pieces of meteorites or lunar rocks that might be picked up by astronauts: the majority of recovered samples will be fine-grained (sub-mm) particles rather than large (>several mm) rock chips. Thus, micro-analysis, sample handling, and contamination control of the Hayabusa samples will be more similar to those of cosmic dust samples. In order to maximize the scientific output from the recovered samples, the samples should be distributed to all qualified researchers upon proposals from all over the world. Such detailed analysis proposals must rely on the general characteristics of the samples studied by the initial analysis team, the "Hayabusa Asteroidal Sample Preliminary Examination Team" (HASPET). HASPET will consist of ISAS scientists, NASA and Australian Co-Is, and Japanese researchers from outside institutions, who are selected through open competitions of mostly non-destructive micro-analysis techniques in the respective disciplines needed during the initial analysis stage (Kushiro, et al., 2003). They will work as one "all-Japan" team and are responsible for characterizing the general features of the bulk and some of the major samples. The initial analysis will investigate physical properties (e.g., mass, size distribution, morphology, color, transparency, etc.) and produce optical calibration data for the on-board instruments from 100 mass % of the bulk samples by non-destructive means (Fig. 10) (Yano, et al., 2003a). Then up to 15 mass % will be consumed to characterize representatives of the Itokawa samples in more detail, and the results will be published within one year (hopefully in 3-6 months) after the capsule retrieval. Thus JAXA/ISAS is preparing to create its own astromaterial curation facility on site, which will be the second of its kind after the NASA Johnson Space Center's Lunar Sample Laboratory (Yano and Fujiwara, 2004). Given that a sufficient amount of samples (i.e., >several hundred mg) is recovered, after the initial analysis period a peer-reviewed international announcement of opportunity (AO) for detailed analyses of another 15 mass % of the samples will be released.
A further 15 mass % will be used for a competitive AO open only to Japanese scientists, while another 10 mass % will be permanently transferred to NASA. The rest will be preserved for future use (Fig. 11). The first competition was conducted in 2000-2001 (Yano, et al., 2003a) and the second competition is now in progress. Applications have included (but are not restricted to) the following techniques: (1) selected isotopic measurements, (2) ion probe (including SHRIMP), (3) carbonates, (4) organics & carbon isotopes, (5) major & trace elements, (6) micro-tomography, (7) mineralogy & petrology, (8) noble gases, (9) nuclear activation, and (10) residual magnetism. Multiple international referees evaluate the applicants' reports for their qualifications to join the team. Since analytical instruments, techniques, and personnel may advance greatly in the next several years, before the 2007 sample retrieval, the final membership of HASPET should be decided at the end of 2005, right after the spacecraft leaves the asteroid. Once Japan establishes both the curation facility and the preliminary examination team expertise, ISAS will be ready to accommodate samples from sample return missions to various planetary bodies other than Hayabusa's target, both domestic and international, in the near future (Fig. 12).
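A simple tally of the sample allocation described above (initial detailed characterization, international AO, Japanese-only AO, transfer to NASA, and the preserved remainder) is sketched below; the percentages are those stated in the text, and the remainder is simply what is left over.

```python
allocations = {
    "initial detailed characterization (HASPET)": 15,  # mass %
    "international AO detailed analyses": 15,
    "Japanese-only competitive AO": 15,
    "permanent transfer to NASA": 10,
}

preserved = 100 - sum(allocations.values())
for use, percent in allocations.items():
    print(f"{use}: {percent}%")
print(f"preserved for future use: {preserved}%")   # 45% of the returned mass
```

This assumes the quoted percentages all refer to the same total returned mass, which is how the text reads.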
Fig. 10: Initial analysis procedure plan in the ISAS Astromaterial Curation Facility (Yano and Fujiwara, 2004).
Fig. 11: Hayabusa sample analysis flow (Yano, et al., 2004a).
Fig. 12: World trends of sample return missions at present (normal font) and in the near future (italics). Japanese missions are underlined.

"POST-HAYABUSA", A NEXT GENERATION MINOR BODY EXPLORATION MISSION

Scientific Themes for Minor Body Exploration in the Post-Hayabusa Era

As already stated, Hayabusa is an engineering test spacecraft for key technologies needed for future planetary exploration, together with the ground facilities and expertise for sample analysis and curation. Indeed, Hayabusa alone will not answer all the major scientific questions raised in the previous chapter; it is just the beginning of a series of minor body explorations to follow. Many Japanese planetary scientists hope to build on Hayabusa's heritage of sample return as their new expertise in the post-Hayabusa era.
Then what kind of minor body explorations should follow Hayabusa? In 2000-2001, the Minorbody Exploration Forum (MEF) in Japan, a volunteer web-based e-group of scientists and engineers as well as interested amateurs, hosted an open competition to design post-Hayabusa mission concepts and received seven proposals (Table 4). As can be seen, all of them addressed key questions to be answered in the next decade, and most were later adopted in some form by other space agencies' missions, indicating that the MEF work was going in the right direction. Considering the international trend of minor body explorations in the 2010s and multiple evaluation processes (Yano, et al., 2001, 2002a, 2003b), two reference missions were selected for detailed studies, combining scientific themes from the other proposals whenever possible. They are (1) multiple rendezvous and sample return missions to spectra-known NEOs and (2) multiple fly-by and sample return missions to main belt asteroid family members. Common key words for both missions were "multiple sample returns" from "several asteroids" of "known spectral types", because a complete understanding of the genetic relationship between asteroid types and meteoritic classes is a top scientific priority. There is also increasing interest in, and importance attached to, studying the chemical evolution of "life precursor" organics on primitive bodies for astrobiology, and the internal structure of undifferentiated asteroids for solar system formation and impact hazard implications (Table 5). Therefore, possible mission targets may include several near-Earth objects with spectral types other than the S-type to which Hayabusa's target asteroid Itokawa belongs (i.e., C-type, M-type, E-type, D-type, P-type, etc.), an extinct cometary nucleus, whose spectral features resemble those of some asteroids, and members of a single main-belt asteroid family.

After the MEF final report was published at the end of 2003 (MEF, 2003), ISAS's new generation minor body exploration working group was proposed to the ISAS Science Steering Committee and its foundation was approved this March. In order to increase mission feasibility and cut development time and cost, the working group has noted that the key technologies needed for their realization should be inherited and upgraded from those of the Hayabusa mission as much as possible, such as ion engines, autonomous navigation, surface material sampling, in-situ scientific instruments including micro-rovers, and re-entry capsules. The working group has also recognized the importance of international collaboration at different levels and of public outreach efforts to maximize mission success. The working group is expected to submit the complete mission proposal, after determining the major technical challenges to realizing the next mission, before the 2006 Japanese fiscal year, for evaluation by the selection committee within JAXA/ISAS. Should it be selected, the development, construction, and testing phases would be completed in 4-5 years, so that the earliest launch date is assumed to be around 2011-2012. Apart from the newly founded working group's initiative, ISAS has studied concepts for a new generation engineering test spacecraft ("MUSES-D") called the "solar power sail", with a hybrid propulsion system of solar sail and ion engines, to demonstrate the technologies necessary to explore the outer planet region of the solar system, starting around 2010 (Kawaguchi et al., 2004, and Yano, et al., 2004b).
This mission concept includes fly-by observations of main belt asteroids and Jovian Trojan asteroids, most of which are D-type asteroids without analogous meteorites found on Earth. In the following sections, the two reference missions studied by MEF, including the surface science package development, and the solar power sail concept are briefly explained.
Table 4: Minorbody Exploration Forum's seven mission proposals for the post-Hayabusa missions in the 2000-2001 study phase.

MEF Mission Proposals: Comments
Main Belt Asteroid Family Multiple Fly-bys & Sample Returns: targets an asteroid with a satellite in the Koronis family
Spectra-Known NEO Multiple Rendezvous & Sample Returns: "Hera" was proposed as a Discovery candidate in this round
Comet-Asteroid Transition Objects (CAT) Rendezvous: later NASA chose "Comet Nucleus Sample Return" as one of five New Frontiers II candidates
Vesta Rendezvous: later chosen by NASA as the "Dawn" mission
Phobos & Deimos Landing Mission: later Russia and the UK separately studied mission feasibilities for Phobos landing
NEO Multiple Fly-bys & Martian Satellite Sample Returns: "Aladdin", "Gulliver", "Phobos-Soil" and other ideas have been proposed in the past and at present
M-type Asteroid Rendezvous: later ESA's "Rosetta" decided to fly by one
Table 5: Major scientific goals of minor body explorations in the post-Hayabusa era.

Science Goals and Reference Missions/Measurements
(A) Ground Truth for Asteroid Taxonomy & "Astromineralogy" Extension to Exo-Planetary Systems (e.g., Spectra-Known NEO Multiple Sample Return)
(B) Direct Investigation of the Impact Disruption History of Planetesimals: Dynamic Evolution of the Solar System (e.g., Main Belt Family Multiple Sample Return)
(C) Inner Structure of Minor Bodies over a Large Dynamic Range of Scales: Formation History & Impact Hazards (e.g., Tomographic, Seismic, & Robotic Measurements)
(D) Prebiotic and Volatile Component Evolution in the Early Solar System: "Astrobiology" Implications (e.g., CAT & C-, P-, D-type Asteroid Exploration)

Spectra-Known NEO Multiple Rendezvous Sample Returns

One of the MEF reference missions is the multiple rendezvous sample return mission to known-spectra NEOs of both primitive types (i.e., C, P/D) and differentiated types (e.g., V, M) (Fig. 13). This is a direct heritage of Hayabusa, and its main objective is to bring back "ground truth" samples that connect meteoritic analysis data and ground observation archives for as many asteroid types (other than S-type) as possible in one mission. In previous studies, various mission scenarios were examined to achieve "multiple" NEO sample returns by varying the number of launchers, spacecraft, Earth swing-bys, and asteroids visited by a single spacecraft, together with the launcher and spacecraft propulsion (i.e., chemical and electric) capabilities (Fig. 14) (Morimoto, et al., 2004). To take advantage of the experience gained from the Hayabusa sample analyses and the ISAS astromaterial curation facility, sample returns within short time spans are strongly desired in the orbital design (Table 6). Also, international
coordination of target asteroid types is important if other space agencies also plan to send probes to spectra-known NEOs.
Fig. 13: An artist's impression of the multiple spectra-known NEO sample return spacecraft (CG by Honda & Honda, H. Yano, and MEF).

Fig. 14: Mission scenarios studied for multiple NEO sample return missions (Morimoto, et al., 2004):
(1) Single launcher (H-IIA), one spacecraft (S/C), multiple asteroid visits, no Earth swing-by (SB): Earth, Asteroid 1, Asteroid 2, Earth, ...
(2) Single launcher (H-IIA), one S/C, multiple asteroid visits with Earth SB: Earth, Asteroid 1, Earth SB (n times), Asteroid 2, Earth, ...
(3) Single launcher (H-IIA), two S/C, multiple asteroid visits: Earth, Earth SB, then Asteroid 1 and Asteroid 2.
Table 6: Summary of initial mission design examples of the multiple spectra-known NEO sample return mission. (Yano, et al., 2002b).
Figure 15 shows an example of a single mission to collect samples from three NEOs with different spectral types (i.e., E-, C-, and V-types). In this scenario, an H-IIA launcher and chemical propulsion are assumed, and the spacecraft, equipped with three return capsules, conducts one Earth swing-by between two destinations. At each Earth swing-by, the spacecraft releases a return capsule to the ground and changes its course to head for the next target asteroid. In this way, ground scientists can expect samples from each asteroid to be received at the curation facility 4, 11, and 14 years after the launch, i.e., 3-7 years apart from each other.
Such time intervals are comparable to those of Hayabusa (4 years) and Stardust (7 years). Another important feature is that the total mass of scientific payloads can be an order of magnitude heavier than that of Hayabusa (i.e., 68 kg), excluding the sampling mechanisms and the capsules (Table 7). This will allow sufficient resources for the necessary improvements of the orbiter science payloads and sampling devices. Also, a new surface science package (SSP, see the later chapter) may well be accommodated.
Fig. 15: An example of orbital sequences for a three-NEO rendezvous and sample return mission (Morimoto, et al., 2004).
Table 7: Comparison of mass budgets between Hayabusa (MUSES-C), launched by M-V, and an example of a post-Hayabusa multiple NEO sample return spacecraft launched by H-IIA (Yano, et al., 2002b).
Hayabusa (MUSES-C, M-V launch): Wet Mass 510 kg; Dry Mass 381 kg; Propellant 129 kg
Post-Hayabusa example (H-IIA launch): Wet Mass 2243 kg; Dry Mass 1019 kg; Propellant 1224 kg
Its orbiter science measurements may include similar areas of interest to Hayabusa's, such as global and local topography, geological features at high phase angles, visible-infrared spectral maps, the dust environment, gravitational anomalies, surface conditions, and X-ray elemental maps. Thus the initial model payloads may include the NIRS, AMICA, XRS, LIDAR, and MINERVA inherited from Hayabusa. They may also include possible new developments such as an NIRS with enhanced capability up to 3-micron water features for C- and D-types, a geological camera with a scan mirror, a 2D-scan LIDAR, a gamma-ray spectrometer, a dust detector, gravity VLBI, radar sounders, etc. The sampling mechanisms may also be modified to enhance the scientific value of the returned samples. There are several requirements for such improvements, including (1) more sample mass (>10 g), (2) collecting larger chips (especially for differentiated asteroids), (3) preserving stratigraphic information, (4) sub-surface sampling (for un-weathered materials of both differentiated and undifferentiated asteroids), (5) retaining organics and "water" signatures, (6) stricter contamination control than Hayabusa (especially for C- and D-type asteroids), (7) a more severe planetary protection protocol than Hayabusa (also for D-type asteroids), and (8) in-situ estimation of the collected mass (e.g., light curtains).

Investigation of Asteroid Internal Structures

Since understanding the internal structure of undifferentiated asteroids is both a fundamental question of solar system formation and relevant to the impact hazard assessment of an NEO as well as its deflection options, we first investigated existing methods used for the physical exploration of underground structures on the Earth (Fig. 16). However, most of these rely on underground water and on media filling voids. Since asteroid sub-surface structures are "dry" and porous in high vacuum, gravity surveys and radar tomography from an orbiter, and seismic networks and robotic surface
investigation by lander/rover(s), seem to be the sensible options, although the spatial resolutions of the depth profiles they yield all differ from each other (Fig. 17). For this reason, the next generation minor body exploration working group is now developing a surface science package (SSP) for small asteroids, which should be able to conduct various scientific investigations in dry, vacuum, microgravity, and dusty surface environments. One idea for a mobile SSP to study sub-surface structure at asteroid surfaces is the robotic investigation of boulders several tens of meters in size, which are pieces of bedrock ejected to the surface by large impacts and thus provide direct windows into the sub-surface structure down to depths comparable to their sizes (Yano, et al., 2004a). Cliffs and walls of large crater interiors, ridges, and grooves are other geological features that the SSP can study for sub-surface structure.
Fig. 16: Existing methods for the physical exploration of underground structures on the Earth.
Fig. 17: Strategies and technologies to investigate asteroid interiors at different depth resolutions.

Main Belt Asteroid Family Multiple Fly-by and Sample Returns

Another MEF reference mission is the fly-by investigation of, and sample collection from, multiple asteroids that belong to a single main-belt family (Fig. 18) (Yano, et al., 2004a). It will provide direct information on the interior as well as the collision history of their parent body, a refractory planetesimal disrupted by mutual collisions in the early stage of solar system evolution. One scenario targets the Koronis family, including the Ida-Dactyl system, the only family asteroid visited by spacecraft in the past, and its dust band. Another targets the Nysa-Polana family, which contains several spectral types (Table 8). As most of the Koronis family asteroids are classified as sub-groups of S-type, multiple visits to family members of different sizes may tell us how such "undifferentiated" bodies were formed and later disrupted. On the other hand, the Nysa-Polana family members exhibit many spectral types, such as M-, E-, S-, and F-types; it may be a remnant of the catastrophic disruption of a differentiated object by a less differentiated projectile. In either case, the study of the interior of lost planetesimals or proto-planets is only possible today by close investigation of asteroid families. The initial mission design studies in the past assumed H-IIA launch and chemical propulsion only, but there are many plausible options to visit 2-4 asteroids with large resources allocated to the scientific payloads (e.g., of the order of 200-300 kg, including sampling devices, capsules, and intelligent impactors) in relatively short mission durations (i.e., 3-6 years) (Table 8). The most challenging part of this mission is to design a fly-by sampling mechanism, which is completely different from the Hayabusa sampling system. However, such a mechanism was designed in the past for the Aladdin proposal, a Discovery mission candidate for a Phobos sample return. Intelligent impactor technology is ready for the Deep Impact mission, to be launched at the end of 2004 and to impact the nucleus of Comet Tempel 1 in July 2005.
In Japan, the bus system of the Lunar-A penetrator module can be adapted as an engineering baseline for the impactor, with an autonomous navigation and guidance-control system using image processing, which is now being tested by Hayabusa in space. The capture medium for the ejecta particles is assumed to be aerogel, which has been space-proven in various LEO experiments, including the Japanese MPAC-SEED onboard the International Space Station, as well as in NASA's Stardust mission, which collected cometary coma dust samples in January 2004. The collection efficiency was studied for S-type and C-type asteroids in the past by using hydrocode computer simulations (Yano, et al., 2000).
Fig. 18: An artist's impression of the Family mission (above; CG by MEF and A. Ikeshita) and a conceptual drawing of its spacecraft (right) (Yano, et al., 2000).
Table 8: Summary of initial mission design examples of the Family mission for the Koronis and Nysa-Polana families. All missions assumed chemical propulsion (Yano, et al., 2002b).
Figure 19 shows an example of orbital designs to visit four Koronis family asteroids in 6 years, including a re-visit to the Ida-Dactyl system, thus increasing the success level of the fly-by sampling and the scientific output (Yamakawa, et al., 2000). Every time the spacecraft intersects the Koronis family orbits, it flies by one asteroid and collects impact ejecta samples from it. In the first three years, the spacecraft will fly by (243) Ida and (2700) Baikonur and then return a capsule containing their samples to the Earth. In the next three years, it will repeat the same for (1079) Mimosa and (993) Moultona and bring their samples to the Earth 6 years after the launch.
Multi-Koronis Family Asteroid Flyby
Multi-Koronis Family Asteroid Flyby (follow-on mission / post Earth swing-by)
Fig. 19: An example of orbital design for a Koronis family fly-by and sample return mission (Yamakawa, et al., 2000).
Solar Power Sail Mission to Fly by Main Belt and Jovian Trojan Asteroids

Yet another future mission concept that ISAS is studying is the solar power sail mission, which will make fly-by observations of main belt asteroids as well as Jovian Trojan asteroids, most of which are D-type asteroids, a taxonomic type poorly understood from either ground-based spectroscopy or meteoritic analyses, owing to the lack of possible analog chondrites found on the Earth (Fig. 20) (Yano, et al., 2004b). Thus, the most severe planetary protection protocol (Bio-Safety Level 4) would be required if we conducted sample returns from them now; it is important to visit and look closely at them first. Jovian Trojans have never been visited by spacecraft and they are a totally unknown world. Ground observations present a contradiction: their low density implies a water-ice content, yet their reflectance spectra lack water signatures; they may have organic-rich regolith covering sub-surface water-ice terrain. Understanding the genetic connections among the Trojans, short-period cometary nuclei, and the outermost D-type asteroids in the main belt may be an important key to distinguishing between asteroids and comets, depending upon where they originated in the early stage of the solar system. Based on the present mission plan, it will take about 4 years to go to Jupiter, and the extended mission will eventually reach the Jovian L4 Trojan asteroids (Fig. 21). Unlike conventional planetary probes, this spacecraft will not arrive at a single destination but will continue cruising all the way to the outer planet region. Thus the current reference mission aims at maximizing the scientific output during the cruising phase, as the spacecraft changes its heliocentric distance on the ecliptic plane. Lastly, this mission will serve as a precursor for innovative new planetary explorations, which may take place after the successful demonstration of the key technologies in this mission.
Fig. 20: An artist's concept of the ISAS Solar Power Sail mission to fly by a Trojan asteroid (Phoebe image courtesy of NASA).
Its present model payloads have the following scientific objectives. The study of the reduction of zodiacal light as a function of heliocentric distance, and the first observation of the cosmic background radiation by dust-free infrared astronomy, will employ the same photometer to produce both data sets during the cruise, especially around 5 AU. About 0.5% of the ~2000 m2 sail film may be replaced with PVDF threshold dust impact detectors, whose exposed area is two orders of magnitude larger than that of the largest dust detectors flown in the past. An interplanetary network for higher-precision positioning of gamma-ray bursts also becomes more advantageous as the spacecraft travels farther from the Earth.
As the spacecraft flies by Jupiter, a small probe will be released and inserted into a polar orbit around Jupiter in order to make simultaneous observations of the Jovian magnetosphere and of the solar wind interaction with the atmosphere in the polar regions of the planet (e.g., aurorae). The spacecraft will also have several opportunities to make fly-by observations of at least two main belt asteroids and two Jovian L4 Trojan asteroids. At present, Achilles, more than 100 km in size, is one of our targets, and more than a dozen possible combinations of two or more L4 Trojan asteroid fly-bys have been found in orbital design studies.
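The claimed gain in dust-detector area can be checked with simple arithmetic; in the sketch below, the 0.1 m^2 figure assumed for the largest previous dust detectors is an illustrative reference value, not one given in the text.

```python
sail_area_m2 = 2000.0        # total sail film area quoted in the text [m^2]
detector_fraction = 0.005    # ~0.5% of the film replaced with PVDF detectors

pvdf_area_m2 = sail_area_m2 * detector_fraction   # ~10 m^2 of sensitive area
previous_largest_m2 = 0.1                         # assumed area of past detectors [m^2]

ratio = pvdf_area_m2 / previous_largest_m2
print(f"PVDF area ~ {pvdf_area_m2:.0f} m^2, ~{ratio:.0f}x larger")   # ~100x, i.e. two orders of magnitude
```

Under that assumption, the quoted "two orders of magnitude" gain follows directly.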
Fig. 21: Scientific observation plan for the ISAS solar power sail (MUSES-D) concept (Yano, et al., 2004b).
SUMMARY

This paper has summarized the scientific rationales for asteroid exploration, including knowledge that greatly assists preparation for impact hazard issues (i.e., the physical characterization of NEOs); the mission outline and current status of the Hayabusa asteroid sample return mission, with an emphasis on sampling and analysis procedures; and
the future prospects of Japanese minor body explorations. Examples of future mission concepts include multiple rendezvous and sample returns from spectra-known NEOs, multiple fly-bys and sample returns from main belt asteroid family members, and a solar power sail mission to fly by multiple asteroids in the main belt and among the Jovian Trojans. These can be expressed in the form of a roadmap of minor body explorations for the next 15 years, as shown in Figure 22.
Fig. 22: Roadmap for Japanese minor body exploration in the next 15 years (Yano et al., 2004a).
ACKNOWLEDGEMENTS

The author is grateful to the World Federation of Scientists for kindly inviting him to present this paper at the 32nd session of the International Seminars on Planetary Emergencies. Fruitful discussions within both the Hayabusa mission team and the ISAS Next Generation Minor Body Exploration Working Group have greatly contributed to improving the content. This work is partly supported by the Japan Space Forum Ground Research Grant and the Japan Society for the Promotion of Science.
REFERENCES
1. R.P. Binzel, A.S. Rivkin, S.J. Bus, J.M. Sunshine, and T.H. Burbine: Meteoritics & Planetary Science, 36, (2001).
2. M. Kaasalainen, et al.: Proc. Asteroids, Comets, Meteors 2002, ESA-SP-500, (2002).
3. J. Kawaguchi, et al.: Proc. 22nd ISTS, 2000-o-3-06v, (2000).
4. J. Kawaguchi and the Solar Power Sail Working Group: Adv. in Space Res., submitted, (2004).
5. I. Kushiro, A. Fujiwara and H. Yano (eds.): The First Open Competition for MASPET, ISAS SP-16, 159 pp., (2003).
6. Minorbody Exploration Forum, H. Yano et al. (eds.): MEF Report: the Revised Version, http://www.minorbody.com, CD-ROM, Minorbody Exploration Forum, (2003).
7. M. Morimoto, H. Yamakawa, M. Yoshikawa, M. Abe and H. Yano: Adv. in Space Res., in press, (2004).
8. S.J. Ostro et al.: Meteoritics & Planetary Science, 39, (2004).
9. J. Veverka, et al.: Science, 292, p. 484-488, (2001).
10. H. Yamakawa, M. Yoshikawa, M. Abe, A. Fujiwara and H. Yano: Proc. 22nd Int'l Symp. on Space Tech. and Sci., p. 2411-2416, (2000).
11. H. Yano, M. Yoshikawa, M. Abe, A. Fujiwara, H. Yamakawa, and M. Katayama: Proc. 21st Solar System Sci. Symp., ISAS, Japan, p. 44-47, (2000).
12. H. Yano, J. Kawaguchi, M. Abe, A. Fujiwara, M. Morimoto, T. Akiyama, Y. Miura, H. Demura, K. Yoshida, and MEF: Proc. 1st ISAS Space Science Symp., ISAS, Japan, No. 1, p. 153-160, (main text in Japanese), (2001).
13. H. Yano, J. Kawaguchi, T. Akiyama and MEF: Proc. 2nd ISAS Space Science Symp., ISAS, Japan, No. 2, p. 335-342, (main text in Japanese), (2002a).
14. H. Yano, S. Hasegawa, M. Abe, and A. Fujiwara: Proc. Asteroids, Comets, Meteors 2002, ESA-SP-500, p. 103-106, (2002b).
15. H. Yano, M.E. Zolensky, A. Fujiwara, and I. Kushiro: in The First Open Competition of the MUSES-C Asteroidal Sample Preliminary Examination, I. Kushiro, A. Fujiwara and H. Yano (eds.), ISAS Report SP-16, pp. 1-8, (2003a).
16. H. Yano, M. Abe, A. Fujiwara, T. Yoshimitsu, H. Akiyama, and MEF: Proc. 3rd ISAS Space Science Symp., ISAS, Japan, No. 3, p. 211-218, (main text in Japanese), (2003b).
17. H. Yano and A. Fujiwara: Hayabusa Asteroidal Sample Preliminary Examination Team (HASPET) and the Astromaterial Curation Facility at JAXA/ISAS, Adv. in Space Res., submitted, (2004).
18. H. Yano, M. Abe, Y. Kawakatsu, O. Mori, T. Yoshimitsu and A. Fujiwara: Next Generation Minor Body Exploration Roadmap in Japan, Adv. in Space Res., submitted, (2004a).
19. H. Yano, S. Hasegawa, Y. Kasaba, S. Matsuura, F. Usui, D. Yonetoku, and the ISAS Solar Power Sail Working Group: Scientific Observations onboard the JAXA/ISAS Solar Powered Sail Mission, Adv. in Space Res., submitted, (2004b).
7. AIDS AND INFECTIOUS DISEASES: GLOBAL BIOSECURITY STANDARDS
LIMITING ACCESS TO DANGEROUS PATHOGENS - THE NEED FOR INTERNATIONAL COOPERATION
DIEGO BURIOT, MD, MPH
World Health Organisation, Geneva, Switzerland

The 20th century philosopher Bertrand Russell wrote, "Almost everything that distinguishes the modern world from earlier centuries is attributable to science". This transformation is the result of centuries of free and open scientific inquiry and exchange of knowledge. Progress in life expectancy, disease reduction and increased agricultural output, to name just a few, can be directly attributed to the life sciences. The rapid advances in microbiology, molecular biology, and genetic engineering are also enabling scientists to modify and manipulate fundamental life processes and have created extraordinary opportunities for biomedical research, including rapid identification tools and novel drugs and vaccines, thereby holding great promise for improving human health and the quality of life.

The biological life sciences, and especially recombinant techniques, have experienced enormous growth over the last 30 years, and biotechnology is truly a global enterprise. The United States of America still leads the sector, with employment reaching 191,000 and US$29 billion by 2001. However, countries such as Germany, Japan and the United Kingdom, but also China, India and Brazil, are becoming major players, and the fundamental knowledge that emerges from them is available around the world.

Efforts to protect workers and communities from the accidental release of pathogens have been a constant challenge for the scientific community. In the seventies, following the historic Asilomar Recombinant DNA Conference, the CDC and NIH encouraged the life science community to participate in a collaborative initiative to develop consensus guidelines to safeguard worker safety and public health from the hazards associated with the possession and use of human pathogens in microbiological and biomedical laboratories. The initiative resulted in the publication by CDC and NIH in 1984 of Biosafety in Microbiological and Biomedical Laboratories, referred to as the BMBL. Four levels of biocontainment provide increasingly stringent levels of protection to personnel, the environment and the community. The BMBL served as a model for biosafety guidelines issued by the World Health Organization, is widely accepted by scientists throughout the world, and is considered the gold standard for the safe conduct of laboratory work with dangerous pathogens. Since these biosafety standards were published, there has been a marked decline in the number of accidental infections of laboratory workers and of escapes of dangerous pathogens into the environment. Although occasional laboratory accidents have been reported, including with smallpox, polio and SARS, the practical use of biosafety norms has kept them to a minimum.

Although the English terms biosecurity and biosafety are sometimes used interchangeably, they refer to different issues. Biosafety measures are intended to prevent accidental infections of researchers or the release of pathogens from a laboratory facility that could endanger public health. Biosecurity measures aim at preventing the deliberate diversion of deadly pathogens for malicious purposes, as biotechnology, like nuclear physics or chemistry, can be exploited for peaceful or nefarious purposes.
In 1996, the United States Government passed the Antiterrorism and Effective Death Penalty Act of 1996 (Public Law 104-132), regulating the transfer of dangerous pathogens and toxins. It mandated the Centers for Disease Control and Prevention to develop a "select agents" list of pathogens that could be used as weapons. The legislation did not receive much attention from the scientific community, as at that time bioterrorism was considered a hypothetical threat. Bioterrorism became a harsh reality soon after 11 September 2001, when letters containing a refined preparation of dried anthrax spores were sent through the United States' mail, infecting twenty-two people and killing five. In the aftermath of the attack, policy-makers awakened to the inherent power of biological agents and began calling for more government control and stronger mechanisms to prevent the deliberate theft or diversion of deadly pathogens and toxins for malicious or criminal purposes.

Spurred on by rising concerns about bioterrorism, we are now witnessing a transition from an environment based upon voluntary compliance with recommended practice to a greater number of statutes and regulations, particularly for the control of biological material and personnel. Under the 2001 USA PATRIOT Act, it is a criminal offence for anyone to knowingly possess any biological agent, toxin or delivery system that is not reasonably justified by science or medicine. The Act also makes it a criminal offence for certain persons, including illegal aliens and individuals from terrorist-list countries, to possess, transport or receive any of the threat agents on the CDC "select agents list". Another piece of legislation, the Public Health Security and Bioterrorism Preparedness and Response Act passed in 2002, requires any person who possesses, uses or transfers a "select agent" to register with the Secretary of Health and Human Services and to adhere to safety and security requirements commensurate with the degree of risk that each agent poses to public health.

Furthermore, advances in genetic engineering and gene therapy, through deliberate or inadvertent means, can create organisms of greater virulence, or allow modification of the immune response of a target population to increase its susceptibility to a pathogen. Recent reports describe the inadvertent creation of an unexpectedly virulent animal poxvirus, and the creation from scratch of an infectious poliovirus using genomic information available on the Internet and custom-made DNA sequences purchased through the mail. The US government is exploring new regulations on the conduct of research involving selected agents, including possible restrictions on the dissemination of scientific findings that could have national security implications, what has been called "sensitive but unclassified" information. The goal is to strengthen the oversight process for biotechnology research, raising the issue of the balance between scientific openness and national security.

The scientific community is increasingly aware of the danger posed by the proliferation of biological weapons capabilities, of their potential misuse by hostile individuals or nations, and of the need for deterrence and law enforcement, which are critical components in responding to bioterrorism. However, scientists are also concerned about striking a balance between constraining malignant applications and not damaging the generation of essential knowledge.
Life sciences rely upon a culture of openness in research, where free exchange of ideas allows researchers to build on the results of others, while simultaneously opening scientific results to critical scrutiny.
Some scientists claim that it is futile to imagine that access to dangerous pathogens and destructive biotechnologies can be physically restricted, as is the case for nuclear weapons and fissionable material. Imposing mandatory information controls on research in the life sciences would be difficult and expensive, with very little gain in genuine security. Proven measures to minimize the risk of reintroducing dangerous pathogens, such as limiting the number of sites where they are stored and studied, could be a realistic goal, but absolute containment cannot be assured. Restrictions imposed on laboratories working with "selected agents" have already required some laboratories to destroy archived samples and to limit the exchange of material between scientists. To extend government control to the information contained in laboratory reports, conference papers and journal articles would further constrict avenues of communication, which have been an essential source of the dynamism of biological research in the modern era. Some major United States universities have already proscribed classified research on campus, and the danger exists that the life science fields of study will come to be regarded as less inviting, thereby affecting the quality of researchers entering the field or making it more attractive to work outside the USA. Lastly, without an international consensus and consistent guidelines for overseeing research in advanced biotechnology, it is feared that limitations in the USA would only impede the progress of biomedical research and undermine its own national interest.

THE INTERNATIONAL DIMENSION

Worldwide, a large but unknown number of clinical or research laboratories keep well-characterized strains of dangerous pathogens, either for diagnostic reference purposes or for drug or vaccine research. The range of scientists and institutions involved would thus be hard to enumerate, let alone monitor. Most countries do not even have a comprehensive inventory of their national laboratories. As part of the Polio Eradication Initiative, a comprehensive survey of national laboratories was carried out in 152 countries. Over 160,000 facilities have been inventoried to date, including those in the USA and the Russian Federation, but some important middle-income countries such as China and India have not yet been included. The number of culture collections is also unknown; the World Data Center for Microorganisms has registered 484 culture collections in 65 countries, but most former Soviet Union countries are not included. The dissemination of academic research is carried out by over 2,000 publishers in what is called STM (scientific, technical and medical) publishing. Together, they publish 1.2 million articles a year in about 16,000 periodical journals.

Several countries, including France, Germany, Japan and the United Kingdom, have also passed national legislation making the prohibitions of the BWC binding, imposing penal sanctions for violations and tightening security over dangerous pathogens and toxins. However, most countries have not yet passed any such legislation, and each country is developing and implementing its own rules rather than fostering a set of harmonized global standards. This ad hoc approach is likely to result in a patchwork of inconsistent regulations, giving rise to security gaps and areas of lax enforcement. Because facilities that house and work with dangerous pathogens and toxins range from pharmaceutical companies to academic research laboratories,
specific biosecurity measures cannot be developed on a "one size fits all" basis. For these reasons, guidelines for laboratory security should consist of functional requirements, so that the affected entities can implement specific measures in a tailored manner.

In October 2003, the National Research Council (NRC) of the USA National Academy of Sciences published an important report, "Biotechnology Research in an Age of Terrorism", in which a panel of prominent scientists acknowledged the risks associated with the potential misuse of molecular biology to develop "improved" BW agents. The authors came up with seven main recommendations, including the establishment in the USA of a voluntary process to review the security implications of potentially hazardous experiments, and the establishment of an International Forum on Biosecurity to develop and promote harmonized national, regional and international measures. Among the topics for this International Forum are:
- Education of the global scientific community, including curricula for professional symposia and training programmes, to raise awareness of potential threats and of modalities for reducing risks, as well as to highlight the ethical issues associated with the conduct of biological science;
- The design of mechanisms for international jurisdiction that would foster cooperation in identifying and apprehending individuals who commit biocrimes;
- The development of an internationally harmonized regime for the oversight of the transfer of pathogens within and between laboratories and facilities;
- The development of systems to review and provide oversight of biological research, for identifying and managing "experiments of concern";
- The development of an international norm for the dissemination of "sensitive" information in the life sciences.

It is widely accepted that minimum global standards should include:
- Mechanisms to account for dangerous pathogens;
- Registration and licensing of facilities that work with dangerous pathogens, certifying both the competency of workers and the containment capabilities of the laboratory facility;
- Physical security of these facilities;
- Procedures for screening laboratory personnel to determine their suitability to work with highly dangerous pathogens.

The ideal forum to discuss these kinds of issues would have been the 1972 Biological and Toxin Weapons Convention, which entered into force in 1975 and banned the development, production, stockpiling and transfer of biological weapons, while permitting research activities for peaceful purposes to defend or protect against BW agents. Unfortunately, the BWC was burdened with a serious birth defect: the lack of formal measures to check compliance and to punish violations. Experience suggests that a mechanism for addressing BWC compliance concerns can be effective only if implemented by an international organization that is seen as independent, objective and competent. From 1995 to 2001, an ad hoc group of interested Member States met with the mandate to negotiate a "legally binding instrument" to strengthen the BWC. However, in 2001 the USA rejected a legally binding protocol, and in its place suggested a variety of largely voluntary measures to be pursued on a national basis by individual countries. This included a proposal that other countries adopt legislation requiring entities possessing dangerous pathogens to register with their own government, as is the practice in the USA.
However, the BWC meeting of experts held in 2003 to discuss pathogen controls did not result in the adoption of a common set of standards that could have had global implications. A number of regional and international organizations are now moving to develop programmes and policies on various aspects of the problem. For example, the Organization for Economic Cooperation and Development (OECD), the G7 Global Health Security Action Group, the Australia Group, the International Criminal Police Organization (INTERPOL), the World Health Organization and others have begun sharing information and working to identify the critical elements to include in setting standards, as well as mechanisms to establish global biosecurity standards and international oversight of dangerous pathogens. Approaches must be harmonized to become effective.

CONCLUSION

1. If the scientific community does not become an active partner in crafting the policies that involve and affect its work, this will be done without its insight, reason and wisdom. That does not seem to be the preferable choice for the continued health of science or the well-being of society.
2. It is obvious that there is room for improved collaboration between the scientific community and the security communities at the national level. The process will be far better served if both sit at the same table; reasoning is best served when reasonable people share their points of view.
3. A sense of proportion with these issues is certainly needed, for instance when decisions are made unilaterally that could potentially limit bio-research in a particular country such as the USA. Censoring biomedical research will stifle medical progress, including the ability to counter the diseases that bioterrorism might unleash.
4. There is a critical role for National Academies and for international organizations, such as UNESCO and WHO, in engaging the scientific community and in addressing and debating issues affecting science. Scientists must come up with a consolidated consensus on how to conduct bioscience in the new context created by recent events.
REFERENCES
1. Tucker J. The BWC New Process: A Preliminary Assessment. Nonproliferation Review, Spring 2004; 1-13.
2. Ostfield M. Bioterrorism as a Foreign Policy Issue. SAIS Review, vol. XXIV, no. 1 (Winter-Spring 2004); 131-146.
3. Lederberg J. Infectious diseases and biological weapons: prophylaxis and mitigation. JAMA. 278; 435-436. Editorial.
4. Pickrell J. Imperial College fined over hybrid virus risk. Science. 2001; 293: 779-780.
5. Jackson RJ, Ramsay AJ, Christensen CD, Beaton S, Hall DF, Ramshaw IA. Expression of mouse interleukin-4 by a recombinant ectromelia virus suppresses cytolytic lymphocyte responses and overcomes genetic resistance to mousepox. J Virol. 2001; 75: 1205-1210.
6. Fenner F, Henderson DA, Arita I, Jezek Z, Ladnyi ID. Smallpox and its Eradication. Geneva: World Health Organization, 1988.
7. Mulders MN, Reimerink JHJ, Koopmans MP, van Loon AM, van der Avoort HGAM. Genetic analysis of wild-type poliovirus importation into The Netherlands (1979-1995). J Infect Dis 1997; 176: 617-24.
8. China confirms SARS infection in another previously reported case; summary of cases to date. World Health Organization SARS situation update 5, 30 April 2004. Available from: http://www.who.int/csr/don/2004_04_30/en (accessed May 11, 2004).
9. Severe acute respiratory syndrome (SARS) in Singapore - update 2: SARS in Singapore linked to accidental laboratory contamination. Available from: http://www.who.int/csr/don/2003_09_24/en (accessed May 11, 2004).
10. World Health Organization. Severe acute respiratory syndrome in Taiwan, China. Dec 17, 2003. Available from: http://www.who.int/csr/don/2003_12_17/en (accessed May 9, 2004).
11. Heymann DL, Aylward RB, Wolff C. Dangerous pathogens in the laboratory: from smallpox to today's SARS setbacks and tomorrow's polio-free world. The Lancet 2004; 363: 1566-68. Editorial.
12. Steinbruner JD, Harris ED. Controlling dangerous pathogens. Issues in Science and Technology, Spring 2003; 47-54.
13. Kwik G, Fitzgerald J, Inglesby TV, O'Toole T. Biosecurity: Responsible Stewardship of Bioscience in an Age of Catastrophic Terrorism. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science. Vol. 1, no. 1, 2003. Mary Ann Liebert, Inc.
14. Biotechnology Research in an Age of Terrorism: Confronting the Dual Use Dilemma. National Research Council of the National Academies, 2003. National Academies Press.
15. Home Pages of Culture Collections in the World. http://wdcm.nig.ac.jp/hpcc.html (accessed July 23, 2004).
16. The Economist, August 7th 2004. Economic focus: Development piecemeal. Finance and Economics, p. 63-65.
THE U.S. SELECT AGENT RULE AND AN INTERNATIONAL OPPORTUNITY TO DEVELOP LABORATORY BIOSECURITY GUIDELINES

REYNOLDS M. SALERNO, PH.D.
Sandia National Laboratories, Albuquerque, USA

Recent natural outbreaks of highly infectious disease have had devastating consequences for public and agricultural health, the international economy, and international security.1 The consequences of an outbreak of infectious disease resulting from the use of a biological weapon would be at least as damaging as a naturally occurring infectious disease, and possibly more so. The 2001 anthrax attacks in the United States killed 5 people, injured 22, resulted in enormous economic damage, and brought bioterrorism to the center of debates on international security. If a bioterrorist were to widely deploy an agent that causes a highly contagious and lethal disease, such as smallpox or Foot and Mouth Disease, the international economic and security consequences could be catastrophic.2

The risk of infectious disease resulting from an accidental release of a pathogen from a laboratory setting, or from the intentional use of a biological weapon, is real and growing.3 The rapid expansion of the biotechnology industry has resulted in the global proliferation of dual-use biological materials, technologies, and expertise. As a result, dangerous pathogens are much more accessible to a wide range of biological weapon proliferators, including terrorists, as well as to legitimate scientists who may inadvertently expose themselves or their local environments to exotic disease.4

Currently, many different methods are being used to address the global risks associated with naturally occurring and accidentally or intentionally introduced infectious disease. Most strategies, such as increasing the effectiveness and availability of therapeutics, improving diagnostic capabilities, and developing decontamination and detection technologies, focus on enhancing national responses to an outbreak of infectious disease after it has occurred.5 The international community has also implemented some preventive strategies as a means to support global efforts at countering outbreaks of infectious disease and biological weapons proliferation.6 Preventive strategies are important because they provide an opportunity to reduce the risk of an outbreak of disease that must otherwise be mitigated by emergency responders and public health officials. A comprehensive strategy to counter the infectious disease and biological weapons risk should combine identification and response techniques with preventive measures. A principal preventive strategy now regulated in the United States is laboratory biosecurity: the protection of dangerous pathogens and toxins from theft and sabotage at the facilities where they are used and stored. Laboratory biosecurity provides the first line of defense against both biological weapons proliferation and bioterrorism by making it more difficult for proliferators to acquire dangerous biological materials.7
LABORATORY BIOSECURITY AND LABORATORY BIOSAFETY

The emergence of the term "laboratory biosecurity," used in the context of protecting dangerous pathogens and toxins, is very recent, and it is often confused with an older, more widely recognized term, "laboratory biosafety." Laboratory biosecurity and laboratory biosafety, both critical to the operation of a modern bioscience laboratory, often overlap and should complement each other, but they have quite different objectives. Laboratory biosafety, another preventive measure that reduces biological risk, aims to reduce or eliminate exposure of laboratory workers, other persons, and the outside environment to potentially hazardous agents involved in bioscience or biomedical research. Laboratory biosafety is achieved by implementing various degrees of laboratory "containment," or safe methods of managing infectious materials in a laboratory setting.8 Laboratory biosecurity aims to protect pathogens, toxins, and security-related information from theft and sabotage. Laboratory biosecurity is achieved by instituting a culture of responsibility among those who handle, use, and transport dangerous pathogens and toxins, and by implementing various security measures that restrict access to these materials to authorized individuals.9

THE BIOSECURITY REGULATORY ENVIRONMENT IN THE UNITED STATES

The current U.S. biosecurity regulatory environment is based on two laws, the USA PATRIOT Act and the Bioterrorism Preparedness Act, which aim to improve the protection of "select" agents and toxins. Three Codes of Federal Regulations (42 CFR 73, 7 CFR 331, and 9 CFR 121, or collectively the "CFR") establish lists of agents and toxins that pose a threat to humans, animals, or plants, and require any laboratory that possesses any one of these 82 listed agents or toxins to enforce and adhere to a series of specific security measures. The security requirements include facility registration, designation of a responsible official, background checks for individuals with access to the listed agents, biosecurity plans, agent transfer rules, safety and security training and inspections, notification following identification, theft, loss, or release of a listed agent, record maintenance, and restrictions on some types of experiments.10

SCIENTIFIC CONCERNS ABOUT THE U.S. REGULATIONS

Scientists and laboratorians in the U.S. have expressed many concerns about these regulations. Some question the rationale for the regulations, since the biological materials found in U.S. laboratories also exist in nature and are globally distributed in research laboratories, collection centers, biotechnology institutes, and clinical facilities.11 Any attempt to implement laboratory biosecurity in the U.S. cannot encompass all dangerous biological materials. Therefore, an individual does not need to steal an agent from a U.S. laboratory to obtain material with which to pursue bioterrorism. Moreover, many people in the U.S. microbiological research community perceive
the CFR as an inappropriate impediment to important research. The designation of certain types of individuals, and nationals from specific countries, as "restricted persons" who cannot handle, transport, or have access to Select Agents is often cited as particularly antithetical to the pursuit of science.12 The CFR have also imposed significant financial costs and operational inconveniences on bioscience research. In addition, there is considerable concern that security will trump biosafety, increasing the risk of accidental release of, or exposure to, dangerous organisms.

Recently, many researchers and laboratories have decided to discontinue or not pursue research on regulated biological agents, rather than implement the new security regulations and bear the associated financial burden. According to the supplementary information published in December 2002 in 42 CFR 73, the Centers for Disease Control and Prevention (CDC) expected 817 entities to register under the new Select Agent rule. Instead, only 323 facilities are now registered with the CDC, indicating that many institutions have discontinued their work with select agents.13 For example, Stanford University has consciously chosen not to conduct research on select agents. Its collections of Francisella tularensis were transferred and/or destroyed after consultations with scientists and senior University officials who believed the administrative and security burdens of the Select Agent rule outweighed the scientific need of maintaining stocks on campus.14 Security regulations that induce such a negative response in the research community will stifle valuable public health and biodefense research, further compromising the ability to respond to bioterrorism and infectious disease outbreaks.

SECURITY CONCERNS ABOUT THE U.S. REGULATIONS

The best defense against emerging infectious disease and bioterrorism is the progress of research that results in improved vaccines, diagnostics, and therapies - work that requires handling, using, and transporting dangerous pathogens and toxins. Although some of these agents have the potential to cause serious harm to the health and economy of a population if misused, all have legitimate uses for medical, commercial, and defensive applications. It is incumbent on those in the scientific community who strive to improve human, animal, and plant health to take measures to limit the opportunity for their valuable materials to be used illicitly. However, it is critically important to strike an appropriate balance between protection of dangerous pathogens and toxins, and preservation of an environment that promotes legitimate, ultimately life-saving, biological research.15

Designing a laboratory biosecurity system that does not jeopardize microbiological operations requires a familiarity with bioscience and the materials that require protection. Security system designers must be cognizant of several challenges to protecting microorganisms and toxins.16 Biological agents are living, reproducing organisms. These organisms can vary in quantity and quality over the course of legitimate research activities by growing, dying, and mutating. Therefore, knowing the exact quantity and quality of organisms in a laboratory is not achievable. Within bioscience facilities, biological agents and the toxins some of them produce can be isolated from a number of process streams. They can be found in Petri dishes, cell cultures, environmental samples, clinical specimens, infected animals, and animal
carcasses, as well as stored in refrigerated or freeze-dried forms. This wide distribution makes safeguarding all of the material a complicated task. Biological agents cannot be detected with available stand-off technologies, nor can the naked eye identify usable amounts. Therefore, intercepting someone who is in the midst of covertly and maliciously removing biological material from a laboratory or facility is almost impossible.

Unfortunately, the current U.S. regulations do not demonstrate an appreciation for these unique characteristics of pathogens and toxins. In particular, the U.S. regulations apply a black-or-white standard to biosecurity: either an agent is on the regulated list and requires security, or it is not on the list and needs no security. Coccidioides immitis, Bacillus anthracis, and Variola major virus are all select agents, legally subject to the same security standards. In reality, the CFR-listed agents and toxins are not all equally vulnerable to BW proliferation, and therefore do not require the same level of protection. Some of these agents would be more attractive than others to adversaries interested in diverting materials that they could use to build biological weapons. Investments in security, especially if these resources come out of limited research and diagnostic budgets, should be focused on those agents that are most attractive to adversaries.17

The nature of biological material and its use in a laboratory setting make biosecurity an extremely imperfect science. Even with the most intrusive laboratory biosecurity system, it is possible for a person with approved access to a containment laboratory to divert biological material without detection. At the same time, the wide availability of pathogens and toxins, including natural sources, makes it improbable that an adversary would overtly attack a bioscience facility to steal an organism. Yet the U.S. regulations do not reflect an understanding that the "insider" is the most significant threat and that, therefore, the effectiveness of a laboratory biosecurity system will depend, first and foremost, on the integrity of those individuals who have access to pathogens and toxins and those who have regular access to the facilities that contain such agents.

LABORATORY BIOSECURITY RISK ASSESSMENT

One of the most significant security concerns about the U.S. Select Agent Rule is that its regulatory approach does not adequately allow for variations in security based on a facility's environment and/or its assets. Those responsible for the safekeeping of dangerous pathogens and toxins must understand that security risks are impossible to eliminate; they can only be mitigated. Since security in a biological environment can never be perfect, it is incumbent upon security system designers to employ a risk management approach to securing dangerous pathogens and toxins. A risk management approach to biosecurity recognizes that different assets at an institution may have different levels of security risk. These risks need to be prioritized through a risk assessment process. Those assets at the highest risk should receive the most protection, and lower risk assets should receive commensurately less protection. The allocation of available protection resources and the implementation of operational restrictions should be at the discretion of facility management, but the application should always be in a graded manner, protecting the assets at the highest risk more than those at lower risks.
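The graded, risk-based approach described above lends itself to a simple illustration. The following sketch is purely hypothetical: the asset names, the 1-5 consequence and threat scores, and the choice of risk = consequence x threat are illustrative assumptions, not values or a formula prescribed by the Select Agent Rule or by any particular facility's methodology. It only shows how undesired events might be ranked so that protection resources follow the ranking.

from dataclasses import dataclass

@dataclass
class UndesiredEvent:
    asset: str          # pathogen, security-related information, or infrastructure at stake
    scenario: str       # e.g. theft by an insider, sabotage
    consequence: int    # 1 (minor) .. 5 (catastrophic), judged by the facility
    threat: int         # 1 (implausible) .. 5 (credible adversary interest)

    @property
    def risk(self) -> int:
        # One simple way to combine the two judgements; a real assessment may weight them differently.
        return self.consequence * self.threat

# Hypothetical undesired events identified during a facility's risk assessment.
events = [
    UndesiredEvent("Agent A seed stock", "insider theft", consequence=5, threat=4),
    UndesiredEvent("Agent B clinical isolates", "insider theft", consequence=3, threat=2),
    UndesiredEvent("Inventory records", "sabotage", consequence=2, threat=2),
]

# Rank events so that the highest-risk assets receive the most protection (graded protection).
for event in sorted(events, key=lambda e: e.risk, reverse=True):
    print(f"{event.asset:28s} {event.scenario:15s} risk={event.risk}")

In practice, the scores themselves would come from the consequence and threat analysis described in the next paragraph, and the ranking would only inform, not replace, the judgement of facility management.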
Risk assessment should begin with the identification of the facility's assets, including pathogens, security-related information, and operational infrastructure. The risk of an undesired event, generally theft or sabotage of an asset, should be determined by examining the consequences that would result and the potential threat a particular adversary poses. Each undesired event should be evaluated separately. The various events should then be ranked by risk so that managers can prioritize the institution's investment in protection measures and operational restrictions. Laboratory biosecurity practices employed internationally should be based on a similar risk assessment methodology and graded protection philosophy. However, various institutions can achieve biosecurity in many different ways, reflecting the unique concerns and available resources of individual countries.

INTERNATIONAL CONCERNS ABOUT THE U.S. REGULATIONS

The concept of laboratory biosecurity was first discussed in an international forum at the August 2003 Experts Group Meeting of the Biological Weapons Convention (BWC) in Geneva, Switzerland. It became evident during this meeting in Geneva that the international community could benefit from additional consultations on biosecurity. For that reason, an international symposium was held at Sandia National Laboratories in Albuquerque, NM, USA in February 2004 to share information and clarify international perspectives on biosecurity. The symposium, which included more than 60 scientists and policymakers from 15 different countries, had three broad goals: 1) to present the United States' experiences in implementing biosecurity; 2) to elicit from the international participants their interpretations and concerns about biosecurity; and 3) to set biosecurity in the context of biological weapons non-proliferation and counter-bioterrorism.

In general, the international community does not perceive bioterrorism as a serious threat; the priority, especially in the developing world, is on identifying and controlling natural outbreaks of infectious diseases. There is considerable apprehension within the international community that U.S. biosecurity methods, or an international regulatory regime, would hinder advances in basic biomedical research by increasing costs, straining international collaborations, and restricting the sharing of information. Despite these concerns, many in the international community believe that dangerous pathogens and toxins are valuable assets for research and commercial ventures that deserve protection. There is also a widely held conviction that biosecurity can support and strengthen the biosafety agenda, and that biosecurity will help maintain the confidence of citizens and investors in the biomedical and biotechnology industries. Finally, many in the international community acknowledge that biosecurity can reduce the risk of bioterrorism and biological weapons proliferation. This latter point is particularly important because the rapid expansion of the biotechnology industry has resulted in the global spread of dual use biological materials, technologies, and expertise.18
PROPOSAL FOR ACHIEVING INTERNATIONAL BIOSECURITY

Significant scientific and security concerns have been raised about the U.S. Select Agent Rule since it went into effect in 2003. Despite the difficulties of the U.S. Select
Agent Rule, many in the international community have recognized that laboratory biosecurity represents both good laboratory practice and an appropriate national measure for States Parties to the Biological Weapons Convention. For laboratory biosecurity to succeed in reducing the risk of infectious disease and biological weapons proliferation, it must be implemented globally. Protecting dangerous pathogens and toxins in some areas of the world and not in others will only serve to drive proliferators to materials in unsecured facilities. Every country shares the burden of securing pathogens from theft, sabotage, and accidental release.

In addition, successful international biosecurity will depend on willing implementation by the scientific community. Therefore, laboratory biosecurity must be designed specifically for biological materials and research, must complement and be integrated with laboratory biosafety practices, and must avoid compromising fundamental biomedical and microbiological research and diagnostics. The best method for achieving these objectives is for a respected technical organization in the public health and life sciences field, such as the World Health Organization or the World Federation of Scientists, to promulgate international laboratory biosecurity guidelines. These guidelines should avoid the mistakes of the U.S. Select Agent Rule, should draw upon the precedent of globally implemented biosafety standards, and should involve the international scientific community in their development. Finally, these international guidelines should recommend the use of a sound risk assessment and risk management methodology.

REFERENCES

1. Mark S. Smolinski, Margaret A. Hamburg, and Joshua Lederberg, Microbial Threats to Health: Emergence, Detection, and Response (Washington, DC: 2003).
2. National Research Council of the National Academies, Biotechnology Research in an Age of Terrorism: Confronting the Dual Use Dilemma (Washington, DC: October 2003).
3. The recent examples of laboratory-acquired infections of SARS in Asia in 2003 demonstrate the increasing accidental risk, and the anthrax attacks in the U.S. in 2001 reflect the increasing intentional risk.
4. Jonathan B. Tucker, "Biosecurity: Limiting Terrorist Access to Deadly Pathogens," Peaceworks No. 52, United States Institute of Peace (Washington, DC: November 2003).
5. For example, see B.T. Smith, T.V. Inglesby, and T. O'Toole, "Biodefense R&D: Anticipating Future Threats, Establishing a Strategic Environment," Biosecurity and Bioterrorism, 1(3), 2003; P.A. Emanuel, C. Chue, L. Kerr, D. Cullin, "Validating the Performance of Biological Detection Equipment: The Role of the Federal Government," Biosecurity and Bioterrorism, 1(2), 2003; D.M. Sosin, "Syndromic Surveillance: The Case for Skillful Investment," Biosecurity and Bioterrorism, 1(4), 2003; S.A. Hearne, et al., Ready or Not? Protecting the Public's Health in an Age of Bioterrorism (Washington, DC: 2003).
6. For example, see World Health Organization, Public Health Response to Biological and Chemical Weapons (Geneva: WHO, 2004).
7. Reynolds M. Salerno and Daniel Estes, "Biosecurity: Protecting High Consequence Pathogens and Toxins against Theft and Diversion," Encyclopedia of Bioterrorism Defense, R.F. Pilch and R.A. Zilinskas, eds. (New York: J. Wiley & Sons, 2004).
8. World Health Organization, Laboratory Biosafety Manual, second edition (revised), 2003 (http://www.who.int/csr/resources/publications/biosafety/whocdscsrlyo20034/en). Also see National Institutes of Health and Centers for Disease Control and Prevention, Biosafety in Microbiological and Biomedical Laboratories, fourth edition, May 1999 (http://bmbl.od.nih.gov/contents.htm). It is important to note that biosafety, as used here, does not refer to the management of genetically modified organisms.
9. It is important to note that biosecurity, in this context, does not encompass efforts to protect crops and animals from natural outbreaks of disease, or efforts to protect the food supply from contamination.
10. U.S. Federal Register, Rules and Regulations, Vol. 67, No. 240, 42 CFR Part 73, December 13, 2002 (Department of Health and Human Services, Office of the Inspector General); U.S. Federal Register, Rules and Regulations, Vol. 67, No. 240, 7 CFR Part 331, 9 CFR Part 121, December 13, 2002 (Department of Agriculture, Animal and Plant Health Inspection Service).
11. The one exception is the Variola major virus, the causative agent of smallpox, which has been globally eradicated. The two official WHO repositories are the Centers for Disease Control and Prevention, Atlanta, Georgia (USA) and the State Research Institute for Virology and Biotechnology, Koltsovo (Russia).
12. Barry R. Bloom, "Bioterrorism and the University: The Threats to Security and to Openness," Harvard Magazine (November-December 2003); R. Gallagher, "Choices on Biosecurity," The Scientist 18/10 (May 2004).
13. CDC's 8th National Symposium on Biosafety, "Biosafety and Biosecurity: A New Era in Laboratory Science," Session Three: Impact of New Regulations, Atlanta, GA (January 27, 2004).
14. Private communication, David H. Silberman, Director, Health and Safety Programs, Stanford University School of Medicine, February 6, 2004.
15. Reynolds M. Salerno, et al., "Balancing Security and Research at Biomedical and Bioscience Laboratories," BTR 2003: Unified Science and Technology for Reducing Biological Threats and Countering Terrorism - Proceedings (Albuquerque, NM: March 2003). http://www.biosecurity.sandia.gov/documents/balancing-security-and-research.pdf.
16. National Research Council of the National Academies, Biotechnology Research in an Age of Terrorism: Confronting the Dual Use Dilemma (Washington, DC: October 2003).
17. Jennifer Gaudioso and Reynolds M. Salerno, "Biosecurity and Research: Minimizing Adverse Impacts," Science 304/30 (April 2004); Arturo Casadevall and Liise-anne Pirofski, "The weapon potential of a microbe," Trends in Microbiology 12/6 (June 2004); Susan B. Rivera, et al., "A Bioterror Risk-Assessment Methodology," The Scientist 18/13 (July 2004); Jennifer Gaudioso and Reynolds M. Salerno, "A Conceptual Framework for Biosecurity Levels," BTR 2004: Unified Science and Technology for Reducing Biological Threats and Countering Terrorism - Proceedings (Albuquerque, NM: March 2004).
18. Sandia National Laboratories, "International Biosecurity Symposium: Securing High Consequence Pathogens and Toxins," Sandia Report SAND 2004-2109, June 2004.
NEW GEORGIAN LEGISLATION ON BIOSAFETY
LELA BAKANIDZE, PAATA IMNADZE, SHOTA TSANAVA, NIKOLOZ TSERTSVADZE
National Center for Disease Control and Medical Statistics of Georgia, Georgia

When Georgia was part of the Soviet Union, it shared all legislation, including the regulations on biosafety and on handling especially dangerous pathogens, with the other republics of the Union. Biosafety rules in the Soviet Union were very strict and left no room for misunderstanding. Bioweapon facilities were well defined; they were components of the Soviet offensive biological weapons program, and no other institutions were authorized to possess any especially dangerous pathogens.

More than a decade has passed since Georgia became independent, but not all the biosafety legislation has been adapted to the new situation. New Georgian laws have been adopted and part of the sanitary norms and regulations have been renewed, but the greater part remains unchanged. For example, even the classification of pathogens by risk groups is directly contrary to the WHO classification. Today, when the threat of bioterrorism is more realistic, the need for a new regulatory basis for biosafety has become evident. Furthermore, scientific investigations on especially dangerous pathogens are very well funded by different donors, who sometimes do not take into account whether the proposed recipients of their grants are eligible to carry out such investigations, or whether they have sufficient experience.

There were not very many bioweapon-related institutions in Georgia (compared, for example, with Russia and Kazakhstan): the Georgian Anti-Plague Station (now the National Center for Disease Control and Medical Statistics of Georgia - NCDC Georgia of the Ministry of Labor, Health and Social Affairs of Georgia) carries out surveillance of especially dangerous pathogens in the whole territory of Georgia; Biokombinat in Tabakhmela produces live vaccines for foot and mouth disease, etc.; and, in part, the Eliava Institute of Bacteriophage manufactures different vaccines. All these institutions carry out controlled activities with dangerous pathogens. After the collapse of the Soviet Union there was a significant outflow of military microbiologists to other institutions, often to private companies, and this is a source of concern.

Nowadays investigations with especially dangerous pathogens are carried out according to old Soviet regulations: (1) the Decree of the Ministry of Health of the USSR "Concerning Rules of Registration, Containment, Handling and Transfer of Cultures of Pathogenic Bacteria, Viruses, Rickettsia, Fungi, Protozoa and others, also Bacterial Toxins and Poisons of Biological Origin," approved by the Ministry of Health of the USSR, 18.05.79; and (2) the "Instruction on Regime of Control of Epidemics while Working with Materials Infected or Suspected to be Infected with Causative Agents of Infectious Diseases of Groups I-II," approved by the Ministry of Health of the USSR, 29.06.1978.

Among the various other duties of the NCDC Georgia, according to the Decree of the President of Georgia No. 55 of February 21, 2003, its statute includes "Participation in drafting normative and methodological documents on surveillance, disease control and prevention, biosafety/biosecurity". The Department of Biosafety and Threat Reduction, together with experts from other departments of the Ministry of Labor, Health and Social Affairs of Georgia, drafts biosafety legislation.
The new Georgian legislation on biosafety is based on WHO and American regulations on biosafety, in particular the "Laboratory Biosafety Manual" (World Health Organization, WHO/CDS/CSR/LYO/2003.4) and "Biosafety in Microbiological & Biomedical Laboratories" (US DHHS, CDC Atlanta & NIH, Fourth Edition, May 1999). We have tried to make the new legislation comply with Georgian national legislation, particularly:

The Law of Georgia "on Health Care" (10.12.1997), which says in Cl. 70 that "Providing an environment safe for public health is the responsibility of the State. The Ministry of Health of Georgia elaborates and approves sanitary and hygiene regulations and norms and controls their observance"; in Cl. 72 that "Observance of sanitary, hygiene, sanitary-control regulations and measurements elaborated for the avoidance of negative effects on the environment or other factors on public health, that are approved, are obligatory for any physical or legal body notwithstanding its ownership, organizational or legal form or departmental subordination"; and in Cl. 77 that "Import, export, containment, transfer and work with infectious diseases causative agents is allowed only with the permission of the Ministry of Health of Georgia".

The Law of Georgia "on Export Control of Armament, Military Techniques and Products of Bilateral Purpose" (28.04.1998), which includes among products undergoing export control, in Cl. 4d, "Causative agents of diseases, their genetically modified forms and fragments of genetic materials, that can be used for the production of bacteriological (biological) and toxic weapons according to the list of international regimes of nonproliferation".

The Law of Georgia "Georgian Sanitary Code" (08.05.2003), which regulates legal relations concerned with maintaining an environment safe for human health, and also defines ways of carrying out state control on the implementation of sanitary norms and preventive sanitary, hygiene and sanitary-control measurements.

It was decided that the Georgian legislation on biosafety will be a package comprising four documents: (1) Select Agents Rule; (2) Rules of Import, Export, Containment, Transfer and Handling of Cultures of Infectious Diseases Causative Agents (Bacteria, Viruses, Rickettsia, etc.), Protozoa, Mycoplasma and Genetic Materials, also Toxins and Poisons of Biological Origin; (3) Sanitary Norms for Labs Working with Especially Dangerous Pathogens; (4) Guidelines for Safe Transportation of Infectious Substances and Diagnostic Materials.

The Select Agents Rule is the main tool in regulating the work of laboratories with select agents, especially dangerous human and animal pathogens (the list of overlapping select agents is now being coordinated with the veterinary services). The US Federal Register Part IV, DHHS, "Possession, Use, and Transfer of Select Agents and Toxins; Interim Final Rule" was taken as its basis. It defines mechanisms of registration, security risk assessments, safety, security, emergency response, transfers, record keeping, inspections, duties of the Responsible Official, training, notifications of theft, loss or release, administrative review, criminal penalties (here the relevant chapters of the Administrative Code of Georgia and the Criminal Code of Georgia were used), submissions and forms, applicability and related requirements.
The "Rules of Import, Export, Containment, Transfer and Handling of Cultures of Infectious Diseases Causative Agents (Bacteria, Viruses, Rickettsia, etc.), Protozoa, Mycoplasma and Genetic Materials, also Toxins and Poisons of Biological Origin", the "Sanitary Norms for Laboratories Working with Especially Dangerous Pathogens" and the "Guidelines for the Safe Transportation of Infectious Substances and Diagnostic Materials" were based on WHO and CDC regulations.

The new legislation package will be agreed by all the agencies involved, such as the Central Sanitary Inspection of the Ministry of Labor, Health and Social Affairs of Georgia, the Ministry of State Security of Georgia, the Ministry of Infrastructure of Georgia and the Ministry of Interior of Georgia.

New legislation on biosafety, like all other legislation, rules and regulations, can be perfect and applicable, but the community must be prepared to follow it. Mechanisms for its implementation must be created.
INTERNATIONAL BIOSECURITY NORMS AND THE ROLE FOR INTERNATIONAL ORGANIZATIONS
BRADFORD KAY
Laboratory Capacity Development & Biosafety, World Health Organisation/CSR Office, Lyons, France

ABSTRACT

The threat of infectious disease - whether naturally occurring or the result of deliberate use - is real and growing. The global community must find appropriate ways to counter this threat without crippling the very institutions that are critical to health security. The legitimate functions of science, industry, medicine and public health must be preserved while at the same time addressing the risks they engender. In order to do this there must be an understanding that the traditional boundaries to health and security continue to be redrawn. Technology, trade and travel have inexorably linked local health issues to those of the global community, regardless of geography, economic status or ethnicity. The linkage of health with security is now articulated by stakeholders representing diverse viewpoints including economics, science, law, politics, medicine, public health and human rights. As a consequence, health security issues now drive a number of national and global activities that require careful consideration from non-traditional viewpoints. National health security activities are undertaken with the expectation that they will result in safer, healthier citizens. Global health security initiatives have essentially the same goal, but must be developed with consideration of global impact, sustainable implementation and effectiveness in reducing biorisks. It is incumbent on the international community to explore ways to develop norms that balance the valid needs of stakeholders. The norms developed by different organizations offer useful examples. The UN regularly updates its model regulations for the transport of infectious materials through an ongoing consensus process. Norms for safe laboratory work practices with infectious materials are developed by WHO (WHO Laboratory Biosafety Manual, 3rd Ed.). Likewise, the revised International Health Regulations (WHO) will require signatories to pledge compliance with diagnostic and reporting standards for communicable diseases of international concern. Professional and technical organizations (Office International des Epizooties, Organization for Economic Cooperation and Development, International Air Transport Association, etc.) contribute significantly to the development of consensus standards that become normative. While instructive, none of these examples is sufficiently comprehensive in its approach to the development of global biosecurity standards, and none offers a realistic mechanism for assuring global implementation.

BACKGROUND

Disease threats

The threat of infectious disease - whether natural or intentional in origin - is real and growing. Expanding globalisation and advancing biotechnology drive the threat. These trends show no sign of abating. The consequences of the intentional use of a biological agent as a weapon would be at least as damaging as a naturally occurring emerging infectious disease, and possibly more so. The 2001 anthrax attacks on the United States killed 5 people, injured 22, resulted in enormous
economic damage, and yet shook US national security to its core. If bioterrorists were to effectively deploy highly contagious agents such as smallpox or foot and mouth disease, the international consequences could be crippling. It is no surprise, then, that bioterrorism is now widely discussed in health, agricultural (plant and animal) and economic circles, as well as by law enforcement and security communities.

However, it is difficult to identify mechanisms to counter biological threats, as no international framework exists within which they can be applied. Such mechanisms exist for other security threats. The International Atomic Energy Agency oversees the Nuclear Non-proliferation Treaty. Likewise, the Organization for the Prohibition of Chemical Weapons oversees the Chemical Weapons Convention. However, no encompassing means exist for the global oversight of biological organisms themselves or the technology that allows their isolation, manipulation and proliferation.

The source of the threat

States have traditionally been seen as the greatest source for the development and use of biological weapons. The resources to develop and deploy such weapons were considered to be realistically beyond the means of groups or individuals. Indeed, governments have recognized the potentially devastating impact a deliberate epidemic could have on an enemy. Just the threat of their use and the fear of contracting a devastating illness can be demoralizing to both military and civilian populations. Likewise, feelings of helplessness to detect and/or to deter the use of unconventional weapons can have strong psychological effects.

States are no longer the sole theoretical source for the development and use of biological weapons. The unprecedented acceleration of knowledge in the biological sciences has provided tools to efficiently manipulate biological organisms. Likewise, the global development and use of information technology makes mind-boggling amounts of information available to even the most remote areas. The resulting global distribution of advanced technical knowledge allows small groups and even individuals the theoretical, but real, possibility of effectively developing bioweapons.

Strategies for biological security

Nearly all current strategies for biological security are national in origin and focus on the response to disease outbreaks. These include increasing the effectiveness and availability of vaccines and other therapies (such as antibiotics); improving disease surveillance and diagnostics; developing improved decontamination and detection technologies; and building public and agricultural health capacities. While these response capabilities are needed, many believe that prevention strategies offer a greater return on the investment. Prevention counters the threat before it can invoke the need for scarce resources. All seem to agree that strategies for preparedness for deliberate epidemics should be built on, and should strengthen, existing capacities for the prevention and control of natural diseases.

A fundamental preventative strategy is to protect dangerous pathogens and toxins from theft and sabotage. Physical security in laboratory environments provides a first line of defence against the unauthorized possession and/or use of pathogens by reducing their availability to those who would use them for harm.
Several countries, including the United States, Japan, Germany, the United Kingdom and France, are forerunners in the development of legislation seeking to tighten controls over the custody, storage, transfer and manipulation of pathogens and toxins, and to criminalize certain activities with these agents.
ISSUES AND ANALYSIS

Regulations

National regulations for the security of pathogens are important and should take into consideration the valid needs of national and international stakeholders. A lack of harmonization could result in the development of a patchwork of inconsistent and potentially conflicting regulations. Even as national laws are harmonized, it is essential that international regulations be developed and implemented through a consensus process. Some of the issues to be addressed are: accountability for dangerous pathogens and toxins; protection from theft and loss; mechanisms for transfer and/or export; accreditation (registration and/or licensing) of facilities working with dangerous pathogens; uniform procedures for screening laboratory personnel; and uniform procedures for threat and risk assessments.

The range of institutions that house and work with dangerous pathogens and toxins includes laboratories in public health, hospital and clinical settings, as well as pharmaceutical laboratories, academic research laboratories, and public and private culture collections. Proscriptive and highly specific security measures will be difficult to enforce without significantly challenging the essential and legitimate functions of these institutions. As a consequence, security requirements for biological materials should, wherever possible, be performance-based and outcome-oriented, thereby allowing maximum flexibility in establishing controls and oversight. Such controls should also directly involve the active participation of the scientists, physicians and laboratory workers themselves, who are the backbone of sustainable security policies and procedures. Beneficial outcomes of such an approach should include: flexibility of approaches for enhanced biological security; identification of internal security benchmarks; constructive participation of those regulated; and identification of means by which valid and essential institutional activities are preserved.

International models

International organizations now have a unique opportunity to engage willing global communities on issues of biological security. These international institutions have advisory and/or regulatory activities that may provide insights into needed global solutions. These organizations and their membership establish norms, standards, model regulations and regulatory instruments that govern a broad spectrum of activities with pathogenic organisms. The United Nations regularly produces Model Regulations for the transport of infectious substances. The International Air Transport Association uses these UN model regulations to develop operational standards for the airline industry. In a similar fashion, the International Civil Aviation Organization incorporates the essence of the model transport regulations into its international regulations for airlines. The World Health Organization regularly updates and publishes its recommended procedures for safe laboratory work practices with dangerous micro-organisms. The Office International des Epizooties (World Animal Health Organization) has a similar role with veterinary laboratory practices.
The UN Food and Agriculture Organization establishes global norms, standards and policies for agriculture and foods.

Examples of means to prevent the malicious use of bioscience

A variety of means exist to influence or control the actions of individuals, groups, nations, and the global community. All attempt to define and/or establish acceptable, normative behaviour. Individuals can be held accountable to norms of conduct, professional standards and national and international law. Unfortunately, however, there exist a small number of disenfranchised individuals who will act in contradiction to accepted norms. States can be subject to international cooperative mechanisms, treaties and binding agreements, and to international law. Each mechanism, however, relies upon an acknowledgement of the legitimacy of its requirements and a willingness by those involved to be bound by them.

Treaties

The Geneva Protocol (1925) and the Biological and Toxin Weapons Convention (1975) represent major international efforts to limit the development and use of biological weapons. However, as neither treaty has mechanisms for verifying compliance, they can only be described as behavioural norms, not regulatory mechanisms. The Convention on Biological Diversity and the supplementary agreement known as the Cartagena Protocol on Biosafety are self-regulatory agreements established by the United Nations Environment Programme in order to protect biological diversity against the potential adverse effects of living modified organisms (LMOs, also known as genetically modified organisms, GMOs) and the transboundary or international movements of these organisms. Elements of the protocol continue to be under discussion. Membership is voluntary.

Trade and economic organizations

Organization for Economic Cooperation and Development (OECD)

The OECD, a group of thirty advanced industrial countries that have common trade interests, has long been interested in the establishment of "Biological Resource Centres" (BRCs). BRCs are defined as government, industry or academic facilities that house, control, test, and use biological resources such as micro-organisms, cell lines, DNA and tissue samples. BRCs are envisioned as a potential means for the distribution of biological materials in the international research infrastructure for biotechnology and the life sciences. A global network of BRCs could be established by the OECD to ensure the availability to member states of standardized (typical) strains of economically valuable micro-organisms, as well as to provide a mechanism to share unique biological agents. This is in part a response to the rapid disappearance of private culture collections, due both to the withdrawal of governmental financial support and to legislative pressures restricting the availability of biological agents. The planned BRC network would allow the free exchange of microbial cultures among members that met certain defined criteria, and would function as a virtual lending library to enable research and the sharing of valuable biological materials with known collaborators. Likewise, this mechanism could be applied to the distribution of dangerous pathogens. To this end the OECD is in the process of establishing an accreditation system to ensure that economically valuable and/or highly dangerous
pathogens are appropriately maintained, and to design mechanisms that allow access only to those who have a legitimate need for these materials. The OECD is aware that this process may exclude some countries from access to a variety of biological materials, and it is conducting regional consultations with non-member states on how these issues can be addressed.

Technical organizations

International Centre for Genetic Engineering and Biotechnology (ICGEB)

The ICGEB is part of the UN system and promotes the safe development and use of genetically modified organisms (GMOs). The ICGEB provides a non-governmental forum wherein policy issues related to biosafety and technology transfer can be discussed. The ICGEB works in harmony with its member states to instil good laboratory practices with GMOs in order to protect human health and the environment. The ICGEB participates in the broader forum to prevent the misuse of GMOs in the Interagency Network for Safety in Biotechnology (INSB), chaired by the OECD.

International cooperative agreements: The Australia Group

The Australia Group is a voluntary grouping of developed countries that cooperate in an effort to control the export and transhipment of chemicals and equipment that could be used in the production of chemical and biological weapons. The group was established in 1985 with 15 countries and now has 38 members as well as the European Commission. Members of the Australia Group work together to harmonize export controls and to share intelligence data regarding the proliferation of chemical and biological weapons. Their efforts are aimed at controlling the spread of dual-use biological equipment as well as the distribution of over 100 pathogenic micro-organisms capable of infecting humans, animals and/or plants. A challenge for the Australia Group is that lists of restricted materials must regularly be revised to reflect new technologies, and non-participating states that supply sensitive materials must be evaluated for membership. Likewise, intelligence-sharing among current members and ideologically diverse nations such as China, Russia, India, Pakistan, and Indonesia raises challenges for their inclusion. Critics argue that the Australia Group's closed-door policies and restrictive trade practices can be arbitrary, inhibitory to free trade and an impediment to justifiable development.

SUMMARY AND CONCLUSIONS

Pathogenic micro-organisms, naturally occurring and genetically modified, present a significant threat to global health security. There is no clear consensus on how to globally limit the distribution or availability of pathogenic micro-organisms. The development of international norms for the security of these organisms is complicated by the facts that they appear naturally and globally, can self-replicate, are difficult to detect at a distance, and can be used both for harm and for good. International agencies offer many examples of activities that should be examined as potential models for global norms and standards. However, the development of security standards should be broad-based and take into consideration the needs of health, science, economics, and international and national law.
LEGAL MEASURES TO PREVENT BIO-CRIMES
PROFESSOR BARRY KELLMAN
Director, International Weapons Control Center, DePaul University College of Law, Chicago, USA
Advisor to the Interpol Secretary-General - Preventing Bio-crimes

Bio-weapons threaten mass casualties and immeasurable panic; their indiscriminate consequences will afflict civilians as horribly as combatants. A contagious disease, e.g. plague, can turn victims into extended biological weapons, carrying an epidemic virtually anywhere. More fundamentally, humanity has waged a species-long struggle against disease; to deliberately foment contagion is an act of treason - a fundamental crime against humanity.

Despite the grave threats posed by bio-weapons, law enforcement's capabilities to prevent a catastrophe are constrained by inadequate legal authorization to detect and interdict bio-weapons preparations. Although the Biological Weapons Convention prohibits States from having bio-weapons, it is legal in most nations for persons to acquire pathogens and weaponization equipment and to actually make a weapon. Without laws that criminalize bio-weapons preparations, law enforcers may not investigate disease weaponization nor pursue cooperative investigations to combat transnational bio-weapons production and smuggling.

Strengthening law enforcement poses unique challenges. First, strategies must be preventive: to only manage a bio-attack's consequences, and to limit law enforcement to post-event apprehension, prosecution, and punishment of the perpetrators, will not save many victims from disease and death. Second, effective measures must advance international cooperation. Criminal networks can transport lethal biological agents through any airport or customs checkpoint without detection; once released, a contagious outbreak will have no respect for borders. The key strategy, therefore, is to globally restrict access to bio-capabilities and to interdict programs in progress. This strategy must augment the capacities of national law enforcement as well as focus the efforts of international organizations.

Fortunately, the United Nations Security Council, almost without notice, has set forth a framework that will map our strategies for preventing bio-crimes. Much about this framework is controversial, with manifold implications whose significance has yet to be fully appreciated. But what is not debatable is that Resolution 1540 is now the law of the world, obligatory for all.

SECURITY COUNCIL RESOLUTION 1540 - CONTENT AND OBJECTIVES

U.N. Security Council Resolution 1540,1 adopted on 28 April 2004 under Chapter VII of the U.N. Charter, is motivated by the concern that non-State actors "may acquire, develop, traffic in or use nuclear, chemical and biological weapons and their means of delivery." S. Res. 1540 recognizes "the need to enhance coordination of efforts on national, subregional, regional and international levels in order to strengthen a global response to this serious challenge and threat to international security." It requires all States to:
"[A]dopt and enforce appropriate effective laws which prohibit any non-State actor to manufacture, acquire, possess, develop, transport, transfer or use nuclear, chemical or biological weapons and their means of delivery, in particular for terrorist purposes" (para. 2); and

"[T]ake and enforce effective measures to establish domestic controls to prevent the proliferation of nuclear, chemical, or biological weapons and their means of delivery, including by establishing appropriate controls over related materials" (para. 3). These controls include: (a) measures to account for and secure such items; (b) effective physical protection measures; (c) effective border controls and law enforcement efforts; and (d) effective national export and trans-shipment controls over such items.

S. Res. 1540 establishes a Committee to receive reports from States within six months on "steps they have taken or intend to take to implement this resolution" and to report to the Security Council. Because some States "may require assistance in implementing the provisions of this resolution", S. Res. 1540 "invites States in a position to do so to offer assistance as appropriate in response to specific requests." Furthermore, S. Res. 1540 calls upon States to promote dialogue and cooperation on non-proliferation (para. 9) and to take cooperative action to prevent illegal trafficking (para. 10).

The Gap That S. Res. 1540 Fills

The BWC, void of verification mechanisms and lacking an international institution to set guidelines and monitor compliance, has long been the weak sibling of the three WMD treaties. During the 1990s, a policy fissure grew between advocates of establishing mechanisms to verify State non-production of BW and advocates of promoting bio-crime prevention measures. These are not inherently exclusive aspirations; certainly there are policies that advance both verification and crime prevention. Yet disputes about the direction of treaty-related negotiations became increasingly polarized, culminating in the Bush Administration's rejection of a verification protocol and a remarkable diplomatic debacle at the 2001 Review Conference, when a last-minute U.S. proposal to terminate negotiations virtually torpedoed the treaty.2 Tempers have since cooled, but no proposals are even remotely on the table to strengthen international controls of pathogens, to verify that States obey their obligations, or to prevent bio-crimes. Most important, there is no organization through which initiatives can be advanced.

BWC Article IV requires implementation of penal measures. But because the primary obligations imposed on States are so vague, Article IV is wholly ineffective for preventing non-State actors from cultivating pathogens, assembling equipment critical to weaponization, or transferring highly refined and lethal disease agents. Indeed, until the adoption of S. Res. 1540, it had been perfectly legal in most States to prepare lethal biological agents for dissemination.

A consideration that motivated S. Res. 1540, therefore, was that there was a gap in international law: widely appreciated constraints on State behavior do not apply to non-State behavior. Perhaps this gap was of incidental significance in an earlier era when States held an oligopoly of WMD, but non-State actors are challenging that oligopoly. As perception of the threat from non-State actors has grown, so has the realization that, in most States (especially States of greatest concern), accumulation of WMD precursor
materials or critical equipment has not been legally proscribed, no domestic authority is mandated to supervise access to such materials or equipment, and neither domestic law nor international law is violated by the trans-national transfer of such items. In this context, the adoption of S. Res. 1540, although focusing on WMD generally and not just BW, serves to fast-forward the implementation of bio-security controls.

Committee Monitoring of Compliance - How Is Compliance Enforced?

The resolution does not authorize action to be taken against States to compel compliance. Yet the resolution was adopted pursuant to Chapter VII - it is of the highest concern to international peace and security. Notably, Chapter VII is rarely invoked except in the context of a particular dispute or crisis. Recent Chapter VII resolutions have dealt specifically with Iraq, Afghanistan, other conflict zones, or the attacks of September 11th. S. Res. 1540 is not the Security Council's reaction to any particular event or threat but is an effort to fill a legal gap. In this regard, what does it mean if a State does not fulfill its obligations?

Although S. Res. 1540 gives remarkably scant definition to the committee that will receive reports from States, the experience of the Counter-Terrorism Committee (CTC) established pursuant to S. Res. 1373 is informative. Adopted in the immediate aftermath of the terrorist attacks on New York and Washington D.C., S. Res. 1373 calls upon all States to implement measures to prevent terrorism. Like the committee to be established pursuant to S. Res. 1540, the CTC was authorized to receive reports about how States implement twelve counter-terrorism treaties and other pertinent obligations. To date, the CTC has received extensive reports, in response to questionnaires, from virtually every State specifying what they have done to prevent terrorism. The CTC can ask more detailed questions or seek clarification from States. States' reports are posted in full on the U.N. website, available for anyone to peruse. This experience highlights the efficacy of treating "compliance" not as a verification and enforcement matter but as the promotion of transparency and potential embarrassment if a State's measures are less substantial than those of its neighbors.

In the debates preceding adoption of S. Res. 1540, various delegates expressed concern about what actions might be taken to enforce compliance.3 But the resolution's proponents clarified that the resolution does not contemplate enforcement action. According to Mr. Arias of Spain, "[T]he draft resolution in no way explicitly or implicitly gives a blank check for the use of coercive measures, including the use of force, in cases of non-compliance." Further, according to Mr. Thomson of the United Kingdom, "What this draft resolution does not do is authorize enforcement action against States or against non-State actors in the territory of another country." Moreover, it will be up to the Security Council as a whole, not individual States, to determine the appropriate course of action. Any enforcement action would require a new Council decision. According to the President, "[A]s the resolution will be binding on all Member States, the Council as a whole must remain the final arbiter of compliance. Any necessary enforcement action must be subject to a specific decision by the Council as a whole...."
While any Security Council determination will be the product of a unique political and strategic calculus at the time, it is worthwhile to consider what State activity, or lack thereof, constitutes non-compliance with S. Res. 1540. If a State does not implement prohibitions against bio-crimes or measures to keep bio-items secure, what
happens? What if it has enacted bio-regulatory measures but those measures fall far short of prevailing international standards as promulgated by relevant international organizations? What if it has implemented rigorous standards but failed to enforce them, whether due to lack of capabilities or for more sinister reasons? Most important, if terrorists take advantage of a State's failure to implement and enforce security standards, does the State's nonfeasance rise to the level of State responsibility?

Offering definitive answers to these questions at this time is impossible. Yet it is clear that a State may no longer claim, in defense of its nonfeasance, that it did not know it had a legal responsibility to take preventive action. The "defense" of purported ambiguity as to whether it has non-proliferation responsibilities is no longer available. Arguably, before the adoption of S. Res. 1540, a State in which criminals or proliferators operated could distinguish between State "support" of criminality and State neglect to take action against crime: if criminals successfully gained BW capabilities from insecure facilities, the State could deny responsibility so long as it took no action to encourage or enable that diversion. Now, that argument is not available.

All this suggests a substantially enhanced role for inter-governmental organizations and professional associations. Indeed, S. Res. 1540 specifically calls on international organizations to provide assistance to States. At this time, at least three dozen international organizations promulgate guidelines or exercise relevant responsibilities for keeping bio-items secure. All these international organizations need to be coordinated, and the standards they promulgate should be harmonized. What may be foreseen is an integrated network of organizations working cooperatively but with specialized expertise to carry out distinct aspects of bio-security, with mutual representation and assistance.

In the final analysis, if a State does not avail itself of support from States with relevant capabilities or from international organizations, nor adopt essential measures to satisfy S. Res. 1540, then the Security Council could determine that the State has breached its obligations and impose sanctions. These sanctions would most likely be substantively linked to the scope of the resolution and therefore would seek to impede or prohibit the State's trade in nuclear, chemical, biological, or missile items.

S. RES. 1540 REQUIREMENTS AND IMPLICATIONS

For technologically advanced States (e.g. members of the OECD or the Australia Group), S. Res. 1540 does not require much. Many of these States have already enacted measures that substantially fulfill the resolution's mandates. But of the roughly 150 remaining States, some have substantial gaps in their legal infrastructure. Filling those gaps in order to satisfy the resolution's requirements is not trivial. According to Mr. Lovald of Norway: "[T]he draft resolution places far-reaching and legally binding demands on all Member States. ... Those steps should cover areas such as national legislation, law enforcement, export controls, border controls, and protection of sensitive materials." Not only must States fulfill these requirements, they must report their compliance measures to a new Security Council committee.
National Legislation To Prohibit WMD Proliferation

Each nation's laws should prohibit development, acquisition, or transfer of BW-critical items and should make it a crime to violate that prohibition for hostile purposes.
Thus, bio-crimes should be illegal everywhere, powerfully reinforcing the norm against acquisition of such weapons as well as facilitating law enforcement and trans-national legal cooperation. National laws applicable to threats and hoaxes should be harmonized and should ensure prosecution of offenders or extradition to another State for prosecution. The scope of legal jurisdiction over such crimes should broadly reach the behavior of legal entities in trans-national smuggling and weapons development conspiracies.

A priority question is precisely what activity should constitute a criminal offense. Clearly, the use of BW should be a crime; but it will be necessary to reach preparatory steps that can overlap innocent behavior or even legitimate scientific inquiry. Bio-crime preparations may employ the same materials, equipment and techniques as legitimate disease research. Standards must be developed to guide law enforcers as to what behavior merits criminal investigation to prevent a hostile attack. Relatedly, legal measures must define "BW" items, including biological agents that are non-lethal but incapacitating to humans as well as agents that are lethal to animal or plant life.

Each State's law enforcement officials must work jointly with their counterparts in other States by sharing information, conducting investigations, and prosecuting apprehended criminals. State cooperation both in gathering intelligence and using that information to prevent criminal activity is undermined, however, by the lack of coherent legal instruments. There is no integrated database of State laws concerning BW incidents, threats, or hoaxes; it is difficult to know what gaps exist, much less fill them. Worse, many States, especially some of greatest concern, lack the capabilities (technical, financial, and know-how) to implement legal assistance obligations. All this suggests that, to fulfill obligations under S. Res. 1540, States must enact harmonized criminal prohibitions and authorization for law enforcement cooperation in order to establish a seamless web of security among all nations. Failure to do so implicitly poses a threat to international peace and security.

Controlling Trans-National Trade of WMD

Enacting coherent laws to control the export of and trade in sensitive bio-materials is essential to security. Defining the content of such laws is reasonably straightforward with regard to items for nuclear and chemical weapons. But no widely accepted international guidelines apply to trade of pathogens and laboratory equipment; some States control them, most do not. The bigger problem is that export controls pertain only to license-seekers, but resolute criminals are not likely to seek an export license for their desired items. Toughening standards for a license might discourage legitimate suppliers from knowingly assisting wrongdoers, but such standards have limited utility against covert smuggling. The real challenge, therefore, is to build a system of customs and border controls that can detect secret operations.

How might relevant control authorities know what they don't know? Law enforcers will need to gather information about bio-crimes and link that information with data about criminal networks and smuggling operations from police and customs files. New data sources are needed, gathered through cooperation with industry, stimulated by air, sea and land transportation authorities.
That information must be retrieved according to specific collection methodologies and shared among intelligence, law enforcement, regulatory and health organizations. Identification of an anomaly by
sophisticated analysis of collected information should provoke follow-on inquiry, either by requesting clarification from a relevant State, by assigning a "task force" to gather more facts, or by authorizing an investigation. The gathering of data that enables insight as to wrongful BW preparations should not intrude on scientific freedom or personal privacy. Even contemplating how governments might undertake information-gathering initiatives has prompted considerable controversy in the United States and the European Union. Effectively globalizing those initiatives suggests an integrated system for preventing BW smuggling that is not remotely on the horizon at this time. Yet, if S. Res. 1540's requirement that States adopt "effective national export and trans-shipment controls" is to have meaning, these issues need elaboration.

Protection of Sensitive Materials

Full compliance with S. Res. 1540 requires implementation of prevention measures to deny access to BW-relevant materials and equipment. States should strengthen physical security and containment measures as well as restrict access to sensitive facilities only to properly trained and screened persons. Satisfaction of such measures should be a condition for a license, and anyone having BW-relevant items without authorization should be prosecuted without need for further evidence of malevolent intent. The challenge here is how to keep critical expertise, materials, and equipment from criminals without unduly constricting those items' non-hostile applications. At issue is the marginal utility of regulation: constraints on technology could pose substantial costs for legitimate industry but hardly impede criminals who can employ alternative technologies. All that said, however, some regulations may make sense, especially if designed to enhance information about where and for what purposes relevant expertise and equipment are being put to use. Promulgation of effective regulations will require careful and effective balancing of myriad considerations on a global scale. An initial question, therefore, is who has authority to establish nuanced guidelines?

A related concern involves security of critical items during transport. If such items need to be moved, international standards should control their packaging and mode of shipment. It may be appropriate to regulate carriage of BW-critical items with participants monitored for compliance with applicable handling and storage guidelines. Biological agents are almost impossible to detect. They also can be transported in a variety of containers, including as packets within the bodies of living persons or as virulent infections within the bodies of living suicide-terrorists disguised simply as tourist-travelers.

Actualizing these regulatory initiatives is complex. States should consider whether to establish official supervisory bodies, information management and reporting systems, and linkages to related policies for advancing scientific progress. To know the extent of compliance within their jurisdiction, each State will need to implement a system for monitoring relevant activity and penalizing non-compliance. Underneath all this is a recognition that bio-tech is not an arcane endeavor on the fringe of commercial activity. Some technologically-advanced States already have regulatory systems that more or less oversee these sectors, but these legitimate sectors are proliferating rapidly across the globe. Requiring every State to establish comparable
regulatory systems to register legitimate entities and facilities that handle critical items (only if those facilities adopt rigorous security measures to prevent illicit diversion) is a breathtaking implication of S. Res. 1540. Calling on international organizations to assist this process is a radical contribution to the globalization of technology oversight.

PROBLEM: INADEQUATE LAW ENFORCEMENT AUTHORIZATION AND CAPACITY

Law enforcement personnel (police, customs and border officials, regulatory inspectors, etc.) comprise the primary system for effectuating bio-criminalization. Law enforcers must enforce bio-security measures, detect unlicensed activities that might constitute bio-crime preparations, interdict illicit efforts to use territory to trans-ship pathogens, gather and analyze data for purposes of expanded surveillance, apprehend perpetrators, and, if prevention efforts fail, mitigate the consequences of a bio-attack and restore order. While law enforcers bear these responsibilities in every State, only in highly developed States are they assisted by networks of professional associations, public health systems, and emergency responders. Unfortunately, in the vast majority of States - from where bio-crimes may be more likely to emerge - law enforcement personnel undertake these responsibilities essentially alone.

To carry out these responsibilities with maximum efficacy, law enforcers need authorization and they need capability. "Authorization" refers to the legal empowerment to conduct bio-crime prevention and response functions, without which no law enforcer may legitimately act. As noted above, most States' laws do not authorize law enforcers to conduct such functions, thereby precluding effective action. To correct this condition by implementation of proper laws and regulatory measures is necessary but is not, by itself, sufficient. Execution of relevant responsibilities demands unique capabilities that entail understanding biological science as well as the operations of research laboratories and pharmaceutical facilities. Moreover, there are challenges of knowing how to detect pathogens that are essentially invisible and are inherently dual-use. To gather, analyze, and share large amounts of technical data requires sophisticated information technology and the training to put that technology to optimal use. In the event of a bio-attack, law enforcement personnel will need sensors and diagnostic equipment and, similarly, training in how to use them. Again, there is an unfortunate convergence: the States lacking proper authorization tend also to be most deficient in relevant capabilities.

There is also a convergence in efforts to address these deficiencies. Law enforcers who propound the importance of enhanced authorization will likely perceive that they lack capabilities to carry out their new responsibilities and will demand better equipment and training. Law enforcers who receive new equipment and training to address a profound threat will likely identify the inadequacies of existing law and put pressure on legislators to broaden authorization. Thus, efforts to strengthen law enforcement should proceed symbiotically, encouraging law reform by equipping law enforcers.

PRIORITIES: COMPLYING WITH S. RES. 1540

These obligations are complex and layered, compelling multilateral commitments with differentiated and mutually reinforcing responsibilities that consider the difficulties
of isolating legitimate from wrongful behavior as well as considering the sovereignty of States to enforce criminal prohibitions. Altogether these commitments and responsibilities will push the margins of international law. Moreover, from the perspective of prioritizing an implementation strategy, the cure - to coin a phrase - may be worse than the disease, entailing a broad set of bio-security measures that, for most States, is both costly and irrelevant. For all but the few States where research and pharmaceutical use of weapons-capable pathogens is concentrated, these requirements present burdens of establishing a regulatory authority and promulgating intricate safety and protection measures; more onerous is that the police must be trained and equipped to investigate compliance and the penal system must be capable of distinguishing inadvertent shortcomings from behavior designed to cause catastrophic harm. Of undeniable significance here is that many nations are facing urgent public health crises with radically insufficient resources; nature is manifesting disease terrors that far surpass the as yet only hypothetical fears associated with bio-crimes.

It is worth noting in this regard that advanced biological science is proliferating - the number of States that are hosts to sophisticated laboratories is expanding, and that trend is likely to accelerate. Moreover, the expansion of biological science is outstripping effective bio-security standards. Promulgation of harmonized bio-security standards, therefore, connotes creating a "bargain" between States whereby the magnitude of bio-security burdens is based on the size and risk of a nation's life sciences activities and where acceptance of these burdens is both an incentive for and a condition of encouraging new life sciences capabilities. The security of all States could improve by integrating a broad international commitment to advance the life sciences with regulatory oversight that is targeted to promote security and consistently applied.

The active involvement of the life sciences communities, transcending national boundaries, is critical. The near-term future offers disquieting possibilities that mandate the attention of scientists and health professionals, both because of the potential catastrophic consequences of an actual misuse of biology and because dismissal of possible risks would leave policy-making decisions to communities that might be insensitive to the value of scientific freedom. Engagement of the scientific community in this context necessarily means coordinated interaction with the international law enforcement community that is and will be directly responsible for interdicting those who might misuse biology. Bio-crimes prevention policies must respect the aspirations and requirements of scientific inquiry, the economics of producing pharmaceuticals, and the exigencies of trying to enhance public health.
In return for policies that respect the aspirations and requirements of scientific inquiry, the economics of producing pharmaceuticals, and the exigencies of trying to enhance public health, the scientific community can offer guidance as to:
- Developing indicators of misuse that can help laboratory officials, regulators, and police to detect wrongful behavior before it becomes manifest in an actual threat or attack;
- Expounding a code of responsibility that defines roles and responsibilities for laboratory and related institutional officers with regard to providing information to and interacting with regulatory and police authorities; and
- Specifying procedures for pursuing and apprehending persons who pose a threat of misuse of pathogens, including specifying rules for obtaining and handling relevant evidence and records.

THE INTERPOL PROGRAM FOR PREVENTING BIO-CRIMES

On July 1st of this year, Interpol launched the Police Training Program for Preventing Bio-Crimes, with the support of the Sloan Foundation, to develop police methods and strategies to prevent crimes. This Program will create within Interpol the organizational impetus and expertise to lead bio-security initiatives. It will advance law enforcement capabilities to interdict biological weapons technologies and pursue proliferators.

Interpol intends to become the international center for information about bio-crimes and about national, regional, and international efforts to prevent them. This is especially important because there is now no systematic collection of incidents involving the wrongful use of biological materials, whether actual, anticipated, or fraudulently perpetrated. As a result, it is impossible to know the extent of criminal activity. A critical function, therefore, is to establish a database of bio-crime events. Interpol will develop an architecture for sharing information with due regard for the risks that information-sharing has for privacy and scientific freedom.

The core of Interpol's Program is to train national law enforcers about bio-crimes. Although the program's focus is global, training efforts will focus on non-OECD States, especially States that can serve as regional leaders or which have unique characteristics (e.g. a well-developed bio-technology sector). Currently, law enforcers in most nations are not familiar with pathogen control or security of bio-research facilities, nor have they engaged in regulatory oversight of pharmaceutical or agricultural safety. Training will improve capabilities for detecting, deterring, and punishing bio-criminals. Moreover, providing information and training should encourage national police forces to become advocates for resources to augment their capabilities and for legal authorization to investigate illicit bio-crime preparations.

The program will also promote relationships with other international and regional organizations as well as NGOs. By forging integrated networks between national and trans-national bodies and among synergistic disciplines, the program will accelerate and focus global coordination of expertise and resources. These networks will bring in expertise to support Interpol's missions as well as disseminate Interpol-generated information and experience.

In early 2005, Interpol will convene in Lyon, France a global conference of national police chiefs, leaders of international and regional organizations, and bio-terrorism experts. The conference will broadly consider bio-threats and the opportunities for police to work with scientists, public health care providers, and other professionals to augment bio-security. Subsequent regional workshops will initiate a "train the trainers" project that will reach out, through regional organizations, to national police bureaus.
CONCLUSION

Bio-crimes are, for the most part, an abstraction. The anthrax attacks in 2001 and other isolated crimes demonstrate that concerns are not fanciful, but far more damage has been inflicted by conventional bombings and plane hijackings. Yet, the trend lines are disturbing. Unquestionably, the availability of sophisticated scientific knowledge, materials, and technology means that criminals will find it increasingly easier to wage a catastrophic bio-attack. The global expansion of bio-research and pharmaceutical sectors means that the chances are ever-growing of finding a source of weaponizable pathogens in a remote location. Perhaps most challenging in the longer term is that explosions in genetic research are opening opportunities for producing an immeasurable catastrophe that could scarcely have been imagined only a few years ago.

If the seriousness of the threat is accepted, then the necessity of international action within a legal context cannot be denied. The inherent nature of these threats is global; little can be done to seal off any country from criminal conduct or its effects. Multilateral coordination and specific delineation of responsibilities and obligations, while undeniably posing diplomatic challenges, is essential to enhance security. Ultimately, to address threats of bio-crimes demands strengthening international institutions under the rule of law. That is not an ideological argument - disease has no more respect for ideological distinctions than it does for borders - it is an unavoidable implication of biology's dangers at this time.

REFERENCES
1. The Security Council convened an open forum for all States to offer their views. S/PV/4950, April 22, 2004, http://ods-dds-ny.un.org/doc/UNDOC/PRO/N04/318/07/PDF/N0431807.pdf?OpenElement. This discussion continued after adoption. S/PV/4956, April 28, 2004, http://ods-dds-ny.un.org/doc/UNDOC/PRO/N04/327/13/PDF/N0432713.pdf?OpenElement. Throughout this essay, diplomats' statements are quoted from these special discussions. Quotations are preceded by reference to the speaker but are not given separate citation.
2. For a discussion of the BWC's tortured history culminating in the Fifth Review Conference, see Barry Kellman, An International Criminal Law Approach To Bioterrorism, 25 HARV. J. L. & PUB. POL. 721, 740-41 (2002).
3. Chapter VII has also been recently used to address prevention of conventional terrorism. See S. Res. 1455 (2003) and S. Res. 1526 (2004), both concerning threats to international peace and security caused by terrorist acts.
4. According to Mr. Danesh-Yazdi of Iran, "[T]he current state of international affairs teaches us the following crucial lesson: the follow-up and monitoring of such a resolution cannot be left to the subjective interpretation of individual States. We need common and sound understanding on the part of all States to ensure their faithful implementation of the resolution ..." http://ods-dds-ny.un.org/doc/UNDOC/GEN/N0432843.pdf?OpenElement.
5. Mr. De La Sabliere, representing France, offered as a response to the question of what the resolution requires of States: "The Council is establishing the goals, but it leaves each State free to define the penalties, legal regulations and practical measures to be adopted. The draft resolution does not establish those aspects." Although not overtly disingenuous, the comment belies the fact that the goals are sufficiently far-reaching and specific such that States have freedom to "define" how to achieve them only within relatively narrow ambits.
8. WATER AND POLLUTION
OVERVIEW OF THE HYDROLOGIC CYCLE AND ITS CONNECTION TO CLIMATE: DROUGHTS AND FLOODS

BISHER IMAM, SOROOSH SOROOSHIAN
Center for Hydrometeorology and Remote Sensing, University of California, Irvine, USA

INTRODUCTION

Water is both ubiquitous and important to life on Earth. Through the hydrologic cycle, the occurrence, circulation, and distribution of water in various compartments of the Earth's system affect the life of the world's increasing population. While oceans hold more than 96.5% of the global water (Shiklomanov and Rodda, 2003), fresh water, which represents a minor fraction (2.53%), is distributed among various reservoirs (Figure 1). The interplay between gravity and solar energy provides the primary mechanisms forcing the lateral and vertical movement of water between these reservoirs and across their interfaces.

From the hydrologic point of view, precipitation is the key hydrologic variable linking the atmosphere with land surface processes, and it plays a dominant role in both weather and climate. Three fourths of the heat in the atmosphere is contributed by the global release of latent heat (Kummerow et al., 1998, 2000), while the distribution of water vapor and clouds controls the radiation balance. Regional precipitation plays a major role in weather patterns and is, of course, the major renewable source of fresh water (both liquid and frozen). Too much precipitation, or too little, can cause significant damage to life and property through floods and droughts. These two extremes, caused by climatic fluctuations, have been constant concerns to societies since the dawn of humanity. Such concern is exemplified in the words of Pliny the Elder (23-79 AD) describing the role of the River Nile in regulating the livelihood of ancient Egyptians:

"The country has reason to make careful note of either extreme. When the water rises to only twelve cubits, it experiences the horrors of famine; when it attains thirteen, hunger is still the result; a rise of fourteen cubits is productive of gladness; a rise of fifteen sets all anxieties at rest; while an increase of sixteen is productive of unbounded transports of joy. The greatest increase known, up to the present time, is that of eighteen cubits, which took place in the time of the Emperor Claudius; the smallest rise was that of five, in the year of the battle of Pharsalia, the river by this prodigy testifying its horror."

Pliny the Elder, Naturalis Historia, Book V, First Century AD
CHANGE IN GLOBAL TEMPERATURE

Concerns about extreme hydrologic events have intensified in recent decades due to increased awareness of climate variability and climate change. It is now well documented that the mean global temperature has increased by about 0.3°C to 0.6°C since 1880, and by about 0.2°C to 0.3°C in the past forty years, which represents a period of more reliable records. Figure 2 illustrates the above-mentioned trend for three latitudinal bands: northern, lower, and southern. The increase in temperature is noticeably higher in the northern
latitudes than in both the lower and southern latitudes. In fact, the greatest increase is found between 40°N-70°N. Satellite observations of sea surface temperature have also confirmed that warming is evident in both sea surface temperature (Cane et al., 1997) and land-based surface air temperatures. The Intergovernmental Panel on Climate Change (IPCC) 2001 report, which represents a synthesis of key climate change studies, identified two distinct periods of change in temperature (1919-1945, and post-1975), with the latter being associated with a rate of change exceeding 0.15°C/decade. Furthermore, the report concluded that in most cases, the observed change in temperature is mainly due to increased daily minimum temperature, with maximum temperatures not displaying statistically significant trends.

The question then is whether the changes described above have resulted in intensifying the hydrologic cycle and in increasing the likelihood of extreme events. The question invites many follow-up questions, including at what spatial and temporal scales change can be detected and whether some regions are more susceptible to change than others. Developing answers to these questions requires long-term monitoring and rigorous investigation of many state variables reflecting the interactions between water and energy fluxes at the land surface (i.e., precipitation, runoff, soil moisture, soil evaporation, and evapotranspiration). For example, changes in precipitation patterns (both in time and space), and the impact of these changes on the hydrologic cycle, must be documented and understood. Similarly, long-term records must be examined to determine if droughts are becoming more common, if and where the severity of droughts is increasing, if the timing and magnitude of snow accumulation and snow-melt onset is changing, and if changes are taking place in regional vegetation cover. Needless to say, global coverage of many state variables over the longest possible time series is needed to understand the natural variability of the hydrological cycle so that deviations from the norm, such as a genuine progressive amplification of the cycle, can be detected with confidence.

POTENTIAL HYDROLOGIC IMPACTS OF GLOBAL CHANGE

As air temperature increases, the water holding capacity of the atmosphere increases due to higher saturation vapor pressure. Figure 3 illustrates the relationship between temperature and saturation vapor pressure. As seen in this figure, while a change of 0.6°C in air temperature may only yield a small change in saturation vapor pressure, the projected range of change (1°C to 4°C) from the current mean temperature value during the next 80 years will be associated with more drastic changes in atmospheric moisture. This will most likely influence moisture circulation at global and regional scales. Arguably, there are potentials for more intense precipitation, which causes floods, as well as for extended periods of drier conditions due to longer residence time and to changes in the recycling ratio, which represents the amount of rain generated locally within a given region compared to moisture advection from outside (Eltahir and Bras, 1996). Satellite observations and radiosonde measurements have shown an upward trend in atmospheric moisture reaching 10% during the past three decades (Ross and Elliott, 1996).
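To make the temperature dependence shown in Figure 3 concrete, saturation vapor pressure over water can be approximated with the Magnus formula, which implies roughly a 6-7% increase in the atmosphere's water holding capacity per degree Celsius of warming. The short calculation below is an illustrative sketch only; the 15°C reference temperature and the code are assumptions for demonstration and are not part of the original analysis.

```python
import math

def saturation_vapor_pressure_hpa(temp_c):
    """Magnus approximation of saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

reference_temp_c = 15.0  # assumed reference temperature, for illustration only
for warming_c in (0.6, 1.0, 4.0):  # observed change and projected range of change
    base = saturation_vapor_pressure_hpa(reference_temp_c)
    warmed = saturation_vapor_pressure_hpa(reference_temp_c + warming_c)
    increase_pct = 100.0 * (warmed / base - 1.0)
    print(f"Warming of {warming_c:.1f} deg C raises saturation vapor pressure by {increase_pct:.1f}%")
```

At a 15°C reference this gives increases of roughly 4%, 7%, and 29% for warmings of 0.6°C, 1°C, and 4°C, respectively, consistent with the contrast drawn from Figure 3 between the observed and projected temperature changes.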
While this upward trend was observed in regions associated with increased temperature over the same period (Ross and Elliott, 2001), regions where temperature decreased showed downward trends in precipitable water (Hense et al., 1988). Trenberth et al. (2003) argue that despite uncertainties in measurements, most studies point to
statistically significant upward trends in atmospheric moisture consistent with observations of increased temperature.

Because of increased social and economic vulnerability to extreme events, answering these questions is highly important to many flood managers and to disaster management agencies worldwide. Figure 4 was produced based on flood damage estimates published by the Dartmouth Flood Observatory (Brakenridge et al.). It is noteworthy that, according to the available estimates, a single flood event, which was the worst flood in 150 years to affect the east coast of China during June of 1998, was responsible for nearly $210 billion of the total $236 billion of economic damage. While the primary cause of damage was the flood itself, heavy rains lasting more than 13 days also induced widespread mudslides, thereby compounding the damage. Similarly, 136,000 of the 169,000 fatalities in 1991 resulted from a single tropical cyclone event that affected the southeastern coast of Bangladesh and displaced more than 10 million people.

It is not surprising, therefore, that over the past decade numerous studies have attempted to quantify trends in various climatic variables. These studies have confirmed changes in annual precipitation during the past century. Hulme et al. (1998) reported an increase of 2% in global precipitation since 1900. However, the spatial and temporal characteristics of precipitation trends point to high variability in both dimensions. For example, while the IPCC report indicates a 5% to 10% increase of mean annual precipitation over the United States during the past century, several drought episodes have punctuated the trend (Karl and Knight, 1998). Needless to say, when discussing floods, changes in the total annual precipitation are less important than changes in other characteristics of precipitation, including intensity, duration, and the number of consecutive days of heavy precipitation. In fact, the sparse network of hourly precipitation gauges on a global scale hinders assessing trends in these characteristics. In the United States, where the density of a good quality precipitation network allows such analysis, the number of days with precipitation exceeding 50.8 mm (the 2-inch threshold for heavy rain) was found to be increasing (Karl and Knight, 1998). Local analysis performed on 6 rain gauges in the Walnut Gulch experimental watershed shows an upward trend of wet days between 1956 and 1990 (Figure 5). Similar trends were observed for 9 and 39 stations within the same watershed but for a shorter record period.

Some argue that the amplification of the hydrologic cycle is best illustrated by the stark differences between 2002 and 2003, particularly across Europe (Pal et al., 2004). In 2002, major floods swept the continent causing significant damage (Figure 6), while in 2003, summer droughts, caused by a combination of record high temperatures and low precipitation, ravaged the continent and caused major wildfires. Trends in the number of wildfires can also be observed in the US (Figure 7), where the number of wildfires on BLM-managed lands increased significantly during the 1990s. Clearly, recent studies suggest an intensification of the hydrologic cycle. While this intensification can be attributed to increases in mean temperature, much remains to be done before the changes in the hydrologic cycle can be quantified with certainty.
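As an illustration of the kind of threshold analysis cited above (the 2-inch heavy-rain days of Karl and Knight and the Walnut Gulch wet-day counts), the sketch below counts days above a fixed threshold in a daily rainfall record and fits a linear trend. The function names and the commented usage are assumptions made for this sketch; they are not code from the studies cited.

```python
import numpy as np

def heavy_rain_days_per_year(years, daily_rain_mm, threshold_mm=50.8):
    """Count days per year with rainfall at or above the threshold (2 in = 50.8 mm)."""
    years = np.asarray(years)
    daily_rain_mm = np.asarray(daily_rain_mm)
    unique_years = np.unique(years)
    counts = np.array([(daily_rain_mm[years == y] >= threshold_mm).sum()
                       for y in unique_years])
    return unique_years, counts

def linear_trend(unique_years, counts):
    """Least-squares slope and intercept of heavy-rain days against year."""
    slope, intercept = np.polyfit(unique_years, counts, 1)
    return slope, intercept

# Illustrative use with a hypothetical gauge record:
# yrs, cnts = heavy_rain_days_per_year(record_years, record_rain_mm)
# slope, _ = linear_trend(yrs, cnts)
# print(f"Trend: {slope:+.2f} heavy-rain days per year of record")
```

Such station-level tallies depend, of course, on the dense, long gauge records whose scarcity is noted above.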
Most importantly, both the identification of local impacts of an intensified hydrologic cycle and the prediction of such impacts in the future require the availability of high spatial and temporal resolution data for many hydrologic variables. Paramount among these is precipitation. With respect to observation, satellites offer a unique opportunity to monitor precipitation at the required resolution. However, the spatial (0.25°) and temporal (5-day to
monthly) resolutions of most satellite-based precipitation products continue to be suited to climate models rather than to hydrologic models. Quantifying changes in rainfall intensity, as well as in the diurnal cycle of precipitation (Sorooshian et al., 2002), requires estimates at much higher resolutions. The Global Precipitation Measuring Mission (GPM), to be launched in 2008, encompasses several satellites and is expected to offer both the resolution and spatial coverage required for more accurate quantification of precipitation. In the meantime, research efforts are being carried out at the Center for Hydrometeorology and Remote Sensing, at the University of California, Irvine, to improve the resolution of precipitation estimates through the use of information from multiple satellites and through the testing and implementation of a cloud classification system (CCS) (Hong et al., 2003) within the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) framework (Sorooshian et al., 2000). The new algorithm, termed PERSIANN-CCS, provides hourly and 3-hourly precipitation estimates at 4 km resolution (Figure 8), thereby improving the opportunity for further study of the intensification of the hydrologic cycle.

On the other hand, predicting the impacts of potential climate change scenarios on the hydrologic cycle needs further development. Uncertainties in scenario assessment, as well as in model predictions, continue to affect our ability to reach a comprehensive understanding of the feedback cycle between the atmosphere-ocean-land surface components at the global scale (IPCC, 2001 and Smith et al., 2002), and of the dominant processes at a watershed scale. Figure 9 illustrates the wide range of uncertainties associated with scenario predictions of global temperature and precipitation; greater uncertainties would be expected at a local scale, and more significantly in predicting changes in precipitation characteristics at an event scale (intensity, soil-liquid, event duration, recurrence) (Trenberth et al., 2003).

SUMMARY

There is evidence to suggest that the observed increase in temperature over the past century will impact the hydrologic cycle at global, regional, and local scales. As increases in temperature result in an increase of the atmosphere's moisture holding capacity, the hydrologic cycle is expected to intensify. Observations support the notion that while total precipitation may not have changed significantly over the past century, statistically significant changes in precipitation characteristics have been observed. Arguably, such changes, coupled with increasing vulnerability to extreme events, may have been responsible for some of the worst flood and drought damage observed in recent decades. Further studies are essential to improve both monitoring and modeling of the precipitation process as well as of the interaction between the atmosphere-ocean-land surface components of the Earth's system.

ACKNOWLEDGEMENT

The authors wish to thank the various agencies in the United States (NASA, NSF, and NOAA) for their support of our research related to the hydrologic cycle.
REFERENCES

1. Brakenridge, G.R., Anderson, E., Caquard, S., Flood Summary Tables, 1993-2004, Dartmouth Flood Observatory, Hanover, USA, digital media, http://www.dartmouth.edu/~floods/archives.
2. Cane, M.A., A.C. Clement, A. Kaplan, Y. Kushnir, D. Pozdnyakov, R. Seager, S.E. Zebiak, and R. Murtugudde, 1997: Twentieth-Century Sea Surface Temperature Trends, Science 275: 957-960.
3. Eltahir, E.A.B., and R.L. Bras, 1996: Precipitation recycling. Rev. Geophys., 34, 367-378.
4. Hense, A., P. Krahe, and H. Flohn, 1988: Recent fluctuations of tropospheric temperature and water vapor content in the tropics. Meteor. Atmos. Phys., 38, 215-227.
5. Hong, Y., K. Hsu, and S. Sorooshian, 2003: "An Automatic Segmentation Algorithm for Cloud Infrared Satellite Images: Incremental Temperature Threshold Technique", Report 03-020, Dept. of Hydrology and Water Resources, University of Arizona, Tucson, Arizona.
6. Hulme, M., T.J. Osborn and T.C. Johns, 1998: Precipitation sensitivity to global warming: Comparison of observations with HadCM2 simulations. Geophys. Res. Lett., 25, 3379-3382.
7. IPCC, 2001: Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson (eds.)]. Cambridge University Press, Cambridge, U.K. and New York, NY, USA, 881 pp.
8. Karl, T.R., and R.W. Knight, 1998: Secular trends of precipitation amount, frequency, and intensity in the United States. BAMS, 79, 231-242.
9. Kummerow, C., W. Barnes, T. Kozu, J. Shiue, and J. Simpson, 1998: The tropical rainfall measurement mission (TRMM) sensor package. Journal of Atmospheric and Oceanic Technology, 15, 809-816.
10. Kummerow, C., J. Simpson, O. Thiele, W. Barnes, A.T.C. Chang, E. Stocker, R.F. Adler, A. Hou, R. Kakar, F. Wentz, P. Ashcroft, T. Kum, Y. Hong, K. Okamoto, T. Iguchi, H. Kuroiwa, E. Im, Z. Haddad, G. Huffman, B. Ferrier, W.S. Olson, E. Zipser, E.A. Smith, T.T. Wilheit, G. North, T. Krishnamurti, and K. Nakamura, 2000: The status of the tropical rainfall measuring mission (TRMM) after two years in orbit. Journal of Applied Meteorology, 39, 1965-1982.
11. Pal, J.S., F. Giorgi, and X.Q. Bi, 2004: Consistency of recent European summer precipitation trends and extremes with future regional climate projections, Geophysical Research Letters 31(13).
12. Ross, R.J., and W.P. Elliott, 1996: Tropospheric water vapor climatology and trends over North America: 1973-93. J. Climate, 9, 3561-3574.
13. Ross, R.J., and W.P. Elliott, 2001: Radiosonde-based Northern Hemisphere tropospheric water vapor trends. J. Climate, 14, 1602-1611.
14. Shiklomanov, I.A. (Editor), and J.C. Rodda (Editor), 2003: World Water Resources at the Beginning of the Twenty-First Century (International Hydrology Series), Cambridge University Press.
15. Smith, T.M., T.R. Karl, and R.W. Reynolds, 2002: Climate modeling: How accurate are climate simulations? Science 296 (5567): 483-484, Apr 19, 2002.
16. Sorooshian, S., X. Gao, K. Hsu, R.A. Maddox, Y. Hong, B. Imam, and H.V. Gupta, "Diurnal Variability of Tropical Rainfall Retrieved from Combined GOES and TRMM Satellite Information," Journal of Climate, Vol. 15, No. 9, 983-1001, May 2002.
17. Sorooshian, S., K. Hsu, X. Gao, H.V. Gupta, B. Imam and D. Braithwaite, "Evaluation of PERSIANN System Satellite-Based Estimates of Tropical Rainfall," Bulletin of the American Meteorological Society, Vol. 81, No. 9, 2035-2046, 2000.
18. Trenberth, K.E., A.G. Dai, R.M. Rasmussen, and D.B. Parsons, 2003: The changing character of precipitation, BAMS 84 (9).
19. World Water Resources at the Beginning of the Twenty-First Century, Edited by I.A. Shiklomanov and John C. Rodda, Cambridge University Press (available Jan 2005).
Figure 1. Distribution of terrestrial fresh water storages, based on data from Shiklomanov (2003). Upper numbers represent the total volume of each storage in 10³ km³; lower numbers represent the percentage of total terrestrial fresh water.
Figure 2. Trends in mean temperature anomalies for three latitudinal bands (23.6°N-90°N, 23.6°S-23.6°N, and 90°S-23.6°S) over the 20th century. Data obtained from the Goddard Institute for Space Studies (GISS), http://www.giss.nasa.gov/research/observe/surftemp/
Figure 3. Left: Relationship between air temperature and saturation vapor pressure. Right: simplified diagram of the potential impacts of heating on the hydrologic cycle. Arrows inside the box point to upward/downward trend directions.
Figure 6. Extent
Figure 7. Trends in the number of wildfires on BLM-managed land in the US during the 1990-2000 decade. Data source: http://www.fire.blm.gov/stats/10year.html
Figure 9. Uncertainties in climate model projections of climate change scenarios, shown as a function of years from the start of the experiment. Notice the higher uncertainty associated with precipitation estimates. (After IPCC, 2001)
WHAT IS THE REAL VALUE OF WATER? REACHING BEYOND THE DILEMMA OF COST AND PRICE

R.B. LINSKY
Executive Director, National Water Research Institute, Fountain Valley, USA

Many ancient civilizations perceived water as mystical and valued it for its spiritual qualities, which were closely tied to deities and therefore considered sacred. Early cultures did not view natural resources as their property but did consider themselves stewards of resources with the responsibility to maintain them for future generations.

As we know from studying history, populations always expand. Very seldom do they contract. With population expansion comes the need to become "organized" and reduce chaos. Emperor Justinian (527-565 A.D.) did just that during the closing days of the Roman Empire. He is credited with organizing and codifying Roman laws that then gave rise to the establishment of the conceptual framework for what has been accepted today as the "Public Trust" doctrine. The concept puts forth the principle that governments hold resources in trust for all present and future generations. The concept traveled throughout Europe and served as the basis for English Crown Laws, eventually landing in America and Australia with the early settlers. The recognition that a government does not own resources but holds them in trust and is responsible for managing them through regulations that focus on public health and safety has prevailed in the United States and Australia for nearly 250 years.

The perception that resources are "free" was easier to accept when populations and demands were relatively small. Defining a "free resource" is certainly easier when supplies are perceived as being endless. Populations, however, are not static - nor are their appetites for resources to sustain their growth and development. People continue to consume resources at extraordinary rates. The perception of a never-ending supply is reinforced on a daily basis when people turn a valve or spigot handle and voila! water suddenly appears as if by magic, or at least with very limited understanding of the processes that ensure water reaches their glasses.

I believe that instead of using the term "natural resources," we should introduce the term "natural capital" into the lexicon of environmental vocabulary. Natural capital, according to Paul Hawken and his colleagues (1999), is what modern civilizations depend upon to create economic prosperity. Unfortunately, the world's natural capital continues to decline at a rate proportionate to our material gains. Natural capital includes all the familiar resources such as water, trees, oil, fish, soil and air. It also includes natural systems like grasslands, streams, and coral reefs, as well as aesthetic vistas. More and more, national prosperity is becoming limited by natural capital rather than industrial strength. Mankind's continued progress relies not on the number of fishing boats, but on the number of fish available; not on the number of boreholes, but on the depleting aquifers. Natural capital provides a wide variety of services that continue to go unrecognized by the population. Yes, forests provide wood, but they also provide water storage and flood management services. A healthy environment not only provides clean air and water but also less recognized services such as waste processing, buffers against severe weather, and the generation of atmospheric gases. Though we have been fortunate
in inheriting a 6 billion-year supply of natural capital, we have not only squandered it but, more seriously, have neglected to understand the services that that inheritance provides. Readjusting or modifying long-standing concepts is difficult at best; however, if we are to become better stewards of the planet Earth, one of the strongest avenues will be through improving our understanding of the value of water through the educational process.

Today, half of the world's 6 billion people live in crowded urban centers, the majority of which are located adjacent to an ocean, lake, river or estuary. Many of these centers, by definition, overlay what were at one time visible watersheds. By the year 2050, an additional 3 billion people will migrate into and become permanent residents of those same urban centers.

Watersheds are classically defined by including the characteristics of ecology, hydrology, geology, biology, or politics, and more recently economics. Unfortunately, many watersheds described at the beginning of the last century are no longer identifiable. What were rivers are now open channels of concrete, what was a savannah is now a golf course or public park, and what was a meadow is now filled with houses and manufacturing and commercial facilities. What were natural watersheds are now urban watersheds. Urban watersheds today include front and back yard lawns, parks and parkways, freeways, railroads, airports, harbors and shopping centers. An urban watershed is, in actuality, a non-porous environment designed to remove water as quickly as possible. Urban watersheds in Tokyo, Los Angeles, London or Rome are not at all dissimilar.

Historically, urban watersheds have also served as the sites for the economic engines of societies around the world and the host location of food production, trade, recreation, protection, and other activities required to maintain a vital and robust society. This continues today at a much more concentrated scale and at a more rapid pace. Urban watersheds will continue to increase their value to society. Regardless of the frustrations associated with crowded conditions, shortages and traffic congestion, people will continue to move into urban centers. It is my opinion that, both historically and contemporarily, the most critical element associated with these centers is water. Not only is water the element responsible for the growth and development of urban centers, but it has also been historically the most important factor in their demise.

Water is the most common product required by society to lubricate its needs. Unfortunately, because the vast majority of society perceives water as having no value, it has been and will continue to be the most misused and abused product in modern history!

Indigenous populations have always described water in spiritual terms. Native Americans describe rivers and streams as the "veins of mother earth" through which her "blood" flows. Water is, therefore, something to revere and value; however, modern cultures appear to be concerned about the value of water only when drought threatens or shortages occur. In the vast majority of modern societies, water remains a mystery. Why? One reason has been that water utility managers have been so successful in providing high quality water on demand 24 hours a day to their customers that the process of treating and delivering water remains a mystery and, therefore, is of little or no concern to the consumer. When questioned about where water comes from, many people will hesitate
only slightly before responding that it comes from the faucet in the kitchen, or from the sky, or from a pipe underground.

The real value of water is neither its price nor its cost. The real value of water should be related to how it is used. In the humanistic world, sustaining human life is of the highest value. Water used for environmental purposes, such as maintaining and protecting wetlands, would have a different value, as would water used to wash sidewalks and cars or fill swimming pools; however, if you talk to people from around Sydney, Australia or Los Angeles, California about the fires of 2003, they no doubt would place a very high value on water for fire suppression.

Falkenmark and Lindh (1993) describe three stages that a society goes through in the development of its water resources. The first stage is the pre-industrialized "free gift" period, when water resources are abundant and few impacts occur. The second stage occurs when populations expand, and water resources are exploited to sustain development. Strategies such as dams and water transfers to non-water-rich areas are introduced to expand development. The third stage appears when significant economic investments must be made to ensure the availability of resources to continue development. It is in this latter stage that non-traditional technologies, like desalination and reuse, are introduced to sustain populations and their development.

A value can also be assigned to water that is not used for a specific purpose. Water kept in a groundwater aquifer or retained in a reservoir is like leaving your money in a bank account. Left there, it provides interest and protection for future needs. Water is, therefore, an asset that provides services that have value to the user and requires investments to be maintained. It is not so much that historically available water supplies are becoming scarce, but that more critical concern should be placed on the valued services water provides for mankind, which are rapidly becoming limited for growing populations. Kasnakoglu and Cakmak (1997) have suggested that water should not be treated simply as a commodity reacting to supply and demand equations, but, with a little help, could be treated more realistically in terms of social and economic welfare. Human welfare is best served by improving the quality and flow of services delivered by water. It is the value of the services that is important.

Unfortunately, and because water continues to be perceived as having little or no value, it has been, and continues to be, the most misused and abused product by modern society. The rapid globalization of what were once considered national economies fully recognizes today that consumers can be marketed to electronically every day for their attention. Such market strategies are based on consumer perception of the real or anticipated value of a particular product. Whether those products are a pair of blue jeans, a car, a loaf of bread, the Sunday newspaper, or a quarter pounder with fries, they all are dependent on water for their production and delivery to the marketplace.
Product                               Water required for production
Wheat for a 2 lb. loaf of bread       1,000 gals.
Quarter pounder with fries            1,400 gals.
A pair of blue jeans                  1,800 gals.
Ford Taurus, including tires          39,000 gals.
Sunday newspaper                      150 gals.
Water utilities recognize the value of water when they attempt to reduce its usage; however, such reductions generally result in reduced revenues, and that can create quite a traumatic atmosphere within an organization.

During the period from 1950 to 2000, the worldwide consumption of water nearly tripled as the population rose by over 3 billion. This rapid growth and concomitant demand for water increased faster than originally anticipated. Every second of every day, there are 4.2 births and only 1.7 deaths, which provides a net gain of 2.5 new persons per second throughout the world, or 150 new water consumers per minute, 9,000 persons per hour, 216,000 per day, or 78.8 million persons per year. At that rate it would take only 3 days to replace all the Americans killed in the Vietnam War, and a mere 4 hours to replace the 51,000 killed in auto accidents in the United States in the year 2002.

Webster's Dictionary defines value as "the quality of something that makes it more or less desirable, useful, and a thing of quality having intrinsic worth." So why is water perceived as having little or no value? If water remains unique to Earth and is the ultimate substance for sustaining life as we know it, why then is it perceived as having so little value? One might speculate that this lack of perceived value is the reason it has been so misused and abused by mankind over the centuries. Taking liberty with a statement attributed to Ben Franklin, we can also define value as when the well runs dry, we'll know the value of water, or we can enter the twenty-first century and use the analogy that there is a terrible shortage of sports cars! I don't have one.

Over the last century, treatment technologies have evolved that will now produce waters of an extraordinarily high level of quality. These technologies (e.g. reverse osmosis and microfiltration membranes) take wastewaters and produce a product that exceeds the water-quality standards set forth in regulations to protect public health. These technologies carry with them a price tag reflective of the cost of the technology. Unfortunately, man all too often becomes enamored with the technology and never recognizes the value of the products that the technology creates.

One of the challenges that water utilities must address in this decade is to determine the value of the product they are responsible for creating. For instance, what is the value of the product water when dissolved solids have been removed? Does the value increase when harmful nitrate has been removed? When Cryptosporidium is removed? And, too, what is the value of water when used to create wetlands and recreational bodies of water or when stored for energy production? These questions do not ask us to describe the benefits but the value of water itself.

Traditionally, guidance to invest in water projects is based on the principles of benefit-cost analysis. This typical engineering approach starts with describing the conditions as they exist at present and provides one or more scenarios under alternative conditions. The differences between the "with and without new conditions" are measures of the benefits of the project. The bottom-line financial variables, either positive or negative, generally prompt the decision-making process.
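As a minimal numerical sketch of this "with and without" comparison, the figures below are hypothetical and are intended only to show how annualized benefits and costs might be tallied; none of the numbers comes from an actual project appraisal.

```python
# Hypothetical annual figures, in millions of dollars, for a water project appraisal.
without_project_damages = 12.0    # expected flood and shortage losses under current conditions
with_project_damages = 4.0        # expected losses under the "with project" scenario
new_supply_value = 3.5            # willingness-to-pay proxy for the additional water supplied
annualized_cost = 6.0             # annualized capital plus operation and maintenance cost

annual_benefits = (without_project_damages - with_project_damages) + new_supply_value
net_annual_benefit = annual_benefits - annualized_cost
benefit_cost_ratio = annual_benefits / annualized_cost

print(f"Annual benefits:    ${annual_benefits:.1f}M")
print(f"Net annual benefit: ${net_annual_benefit:.1f}M")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```

The total economic valuation discussed next broadens the benefit side to include non-monetized services that a simple tally of this kind omits.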
To move away from the more traditional approach will require the acceptance of the concept of total economic valuation that relies upon a broader range of benefits, both monetized and non-monetized, than are typically captured in the traditional benefit-cost analysis. Because it is typically not possible to measure the actual economic benefits in
terms of value output of goods and services, as theory dictates, several techniques are introduced to capture estimates of the value of economic outputs. These include willingness to pay, where additional water supplies may be valued on the basis of actual or simulated market prices; the increase in net income to an industry; and the costs of the most likely alternative means of obtaining the desired output, which are used to approximate the total value when willingness to pay or changes in net income cannot be used.

There is a need to add value to quality, as well as to quantity. The opportunity to desalt impaired waters gives the option to provide water for new uses, including ecosystem restoration, with the added possibilities for the development of impaired land and waters for recreational uses. It also provides the opportunity to sell, trade, or transfer high-quality water out of the immediate region to meet environmental and water-quality requirements.

Assigning value to water can be an effective tool in allocation strategies. Appropriately valuing water leads to the recognition that its application to environmental issues is a bona fide use of water and, therefore, can be assigned a use value. Higher valued uses can effectively discourage less valued uses and, therefore, reduce waste. Promoting recycling and reuse by both industrial and municipal users demonstrates very well the willingness-to-pay principle. The Intel Corporation recycles nearly 86 percent of its water; Armco's steel plant in Kansas City reuses water at least 16 times, which allows it to now take in only 3.5 million gallons a day (even though it uses 58 million gallons a day throughout the plant). The recognition of the value of water has allowed these companies to compete in the marketplace and to retain market share and profitability.

In the United States, the University of Maine at Orono completed a study that examined property values of shorefront homes on 34 lakes across the state of Maine from 1990 to 1994. The results indicated that within a group of lakes of varying water quality, the homes along lakes with lower clarity also had lower property values. The university's research indicated that a 1-meter difference in average minimum clarity over 10 years was associated with a property value decline of US$ 3,000 to US$ 9,000. In a 1996 survey by the U.S. Fish and Wildlife Service, the value of water-based recreational activities in the United States was estimated to be US$ 101.2 billion.

To make the significant paradigm shift from where we are today to where water is accepted as having a value will obviously take time and effort. Nevertheless, one must remember that water only starts out "free" and, by the time it reaches its ultimate destination, whether in a manufacturing plant or on your lips, investments were made to provide a product of high quality. Anderson and Leal (1991) observed that artificially low prices for federal water promoted waste at a time when water supplies were coming under increasing stress from industrial, municipal, and environmental demands. Valuing water can encourage and promote its efficient use and discourage negative environmental and economic impacts.

In the near term, water utilities will have to address resource needs based on their determination of the value of water from various sources, with different quality requirements, and for different applications. They may also want to know the cost of
producing water of alternative qualities delivered to different sites and at different times of the day or night.

We live in a consumer world. We also live on a planet with shrinking resources and, unless we adjust our thinking and appreciate the value of our resources, future generations will not enjoy the benefits or services we have today.

REFERENCES

1. Anderson, T.L. and Leal, D.R. Free Market Environmentalism. Pacific Research Institute for Public Policy, Oxford, 1991.
2. Dumsday, R.G. The Value of Water and Implications for its Allocation. Unpublished presentation to Melbourne Water, May 2001.
3. Falkenmark, M. and Lindh, G. Water and economic development, in Water in Crisis. Pacific Institute for Studies in Development, Environment, and Security / Stockholm Environment Institute, 1993.
4. Hawken, P., Lovins, A. and Lovins, L.H. Natural Capitalism. Little, Brown & Co., 1999.
5. Kasnakoglu, H. and Cakmak, E.H. Economic Value and Pricing of Water in Agriculture. Options Mediterraneennes, Ser. A, No. 31, 1997.
6. Seyam, I.M., Hoekstra, A.Y. and Savenije, H.H.G. Calculation methods to assess the value of upstream water flows and storage as a function of downstream benefits. 2nd WaterNet Symposium, Cape Town, October 2001.
AGRARIAN TRANSFORMATION AND SHIFTS IN WATER REQUIREMENTS IN RURAL IRAN: A CASE STUDY

AMIR ISMAIL AJAMI
University of Arizona, Tucson, USA

ABSTRACT
This paper contends that the transition from a "peasant" mode of production to "farmer" agriculture is accompanied by dramatic changes in arable land use, cropping pattern, agricultural intensification and crop yields. These changes, in turn, will lead to an increased demand for irrigation water. An empirical investigation of agrarian transformation in an Iranian village will illustrate the suggested proposition. This is done by a comparative analysis of the data collected in a 1967 base-line study and the 2002 restudy of a village community.

INTRODUCTION

In the past four decades a global transition from a peasant mode of production to market-oriented capitalist agriculture has been in the making in most developing countries. The transition from "peasant" to "farmer" is driven primarily by the integration of the rural economy into the market, the adoption of Green Revolution technologies, and the implementation of economic policies that do not discriminate against small farmers. As indicated below, by 1970 about 20 percent of the wheat area and 30 percent of the rice area in developing countries were planted with HYVs (high-yielding varieties), and by 1990 the share had increased to about 70 percent for both crops. Yields of rice and wheat virtually doubled. The Green Revolution led to sizable increases in returns to land, and hence raised farmers' incomes. Moreover, with greater income to spend, new needs for farm inputs, and milling and marketing services, farm families led a general increase in demand for goods and services. This stimulated the rural non-farm economy, which in turn grew and generated significant new income and employment of its own. Real per capita incomes almost doubled in Asia between 1970 and 1995, and poverty declined from nearly three out of every five Asians in 1975 to less than one in three by 1995.1

A striking feature of the transformation in the mode of production is the emergence of a large number of capitalist farmers, as evident in many developing countries including China, India, Egypt and Turkey. Various theoretical frameworks have been formulated to explain the dynamic forces, processes and implications of this transformation. In its classical model, the agrarian transition theory, first outlined by Lenin, postulates that the penetration of capitalism into the countryside would result in the concentration of landholdings by rich peasants and absentee urban landlords, while poorer farmers would be dispossessed and converted into a class of landless rural proletarian wage laborers.2 An alternative model advanced by Hayami and Kikuchi focuses on peasant stratification, not inevitable polarization, where there is increasing differentiation in a continuous spectrum ranging from landless laborers to non-cultivating landlords.3 There is much debate on the agrarian transition theory and its applicability to rural transformation in developing countries. Netting, for example, upon reviewing the results of village studies
in Thailand, the Philippines, Indonesia, and Pakistan, concludes that "there is some evidence that stratification reflects the process of change in traditional, on-going communities of intensive cultivators more closely than does polarization."4

This paper focuses on Iran, especially because Iranian agriculture and rural society have undergone profound socio-economic and political changes over the past four decades. While recognizing the significant impact of urbanization, economic development, and the integration of the rural economy into the market, we contend that the Shah's land reform program of the 1960s and the 1979 revolution represent the primary turning points in the rural transformation. Land reform, through intense state intervention, dramatically changed the traditional landlord-sharecropping system (nizam-i arbab-rayati).5 Peasant uprisings, the forceful occupation of large estates and the agrarian policies of the post-revolutionary regime have led to the demise of the urban agricultural bourgeoisie and the empowerment of the peasants.6

THE IRANIAN VILLAGE IN TRANSITION

The present paper contextualizes these macro-societal changes by tracing the patterns of socio-economic transformation of one village community over a thirty-five year period. To do so, Shishdangi (a pseudonym used for this settlement) was first investigated by the author in 1967,7 and was restudied in fieldwork in 1999 and 2002. The village is located in Fars Province near the town of Marvdasht, about 45 km northwest of Shiraz. At the time of the 1967 study, the village had a population of 784 individuals who lived in 140 households residing in 118 houses, all entirely enclosed within a high earthen wall or qalah. The villagers made their living mostly by farming and raising livestock; only 13 percent of the heads of households were employed in non-agricultural activities. Agricultural production depended heavily on irrigation, the water being supplied by both the Sivand River and 27 irrigation pumps tapping ground water. The cropping pattern, mainly wheat, barley and sugar beet, had changed little over the previous decades, except that cultivation of melons had increased considerably.

As the data in Table 1 indicate, between 1967 and 2001 there were major changes in the village's population, per capita arable land, crop density, and agricultural productivity. There is a substantial decline in per capita arable land (by 47%), resulting from the combination of a high population growth rate and a slight decrease in the village's arable land. Substantial increases occur in crop yields (wheat by 200%, barley 156%, and sugar beet 102%), which correspond to average annual growth rates of 3.2 percent, 2.8 percent and 2.1 percent respectively. The changes in Shishdangi reflect corresponding trends throughout Iran's agricultural sector, as overall per capita arable land declined by an estimated 24%,8 and increases in crop yields are reported for wheat by roughly 76%, barley 63%, and sugar beet 42% over the 1961-1993 period.9
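The annual growth rates quoted above can be reproduced from the yield figures in Table 1. As an illustrative check (the relation below is the standard compound-growth formula, not one given in the paper), over the 34 years from 1967 to 2001:

\[
g = \left(\frac{Y_{2001}}{Y_{1967}}\right)^{1/34} - 1,
\qquad
g_{\text{wheat}} = \left(\frac{5.7}{1.9}\right)^{1/34} - 1 \approx 0.033,
\]

i.e., roughly 3 percent per year for wheat; the same formula applied to the barley and sugar beet yields gives approximately 2.8 and 2.1 percent per year, in line with the rates quoted in the text.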
TABLE 1
Population, land use, crop density, and outputs of crops: Shishdangi (1967-2001)

                                             1967       2001     % change 1967-2001
Population                                    784      1,335a            70
Mean Household Size                           5.6        4.9            -12
Arable Land (ha.)                           1,215      1,113b            -8
Per Capita Arable Land (ha.)                 1.58        .83            -47
Crop Density (area under
  cultivation/arable land)                    .50       1.10c           120
No. of Water Pumps                             27         60            122
No. of Tractors                                 3         16            433
Fertilizer (Kg. per ha.)                       92        550d           498
Output of Crops (tons per ha.)
  Wheat                                       1.9        5.7            200
  Barley                                      1.6        4.1            156
  Sugar Beet                                 24.5       49.7            103

a The village population is for 1996, as given by the 1996 Census.
b The decline in the village's arable land is largely the result of land sales to the town of Marvdasht for urban development, and the expansion of the village, especially housing, into farmlands.
c The crop density of 1.10 is due to double cropping.
d This figure is estimated based on the average amount of fertilizer applied by the sample farmers on their crops: wheat 400 kgs., barley 400 kgs., corn 900 kgs., sugar beet 600 kgs., and tomato 1,000 kgs. per hectare.
Source: Field Studies 1967 and 2002; 1996 Census: Sarshumari-i Umumi-i Nufus va Maskan - Shahristan-i Marvdasht, 1375 (Tehran: Markaz-i Amar-i Iran, 1997).
In light of changes in the pattern of land ownership, three distinct periods can be discerned in Shishdangi's transformation: (1) the village under landlord domination; (2) land reform, the rise of peasant proprietorship, and the development of capitalist farming; and (3) revolution, the demise of capitalist farming, and the transformation of the peasantry. This paper will briefly explore and analyze the dynamics and consequences of the socio-economic changes in the village's agrarian transition under each of the three periods.

1. The Village Under Landlord Domination

Shishdangi's social structure was nearly homogenous, largely dominated by absentee land ownership and sharecropping arrangements. Most households were sharecroppers, at the same level in the village's social hierarchy, lacking any appreciable internal socio-economic differentiation. The village exemplifies the pre-land reform agrarian structure in Iran, characterized by (1) the predominance of an absentee landlord-sharecropping system; (2) low capital investment in agriculture, leading to low productivity and poverty of the peasants; and (3) a pattern of landlords' domination over the social and economic institutions of the countryside.10
2. Land Reform, the Rise of Peasant Proprietorship, and the Development of Capitalist Farming

Iran launched a sweeping land reform in 1962, which was implemented in three phases over a decade under the Shah's "White Revolution." While the Shah regime's interest in land reform is believed to have been primarily political,11 the implementation of the reform contributed to a dramatic decline in absentee land ownership and the sharecropping system, leading to a substantial increase in peasant proprietorship. Altogether, as a result of land reform, some six to seven million hectares of agricultural land (between 52% and 62% of the total) were transferred to the occupant sharecroppers and tenant farmers.12 However, not all rural households benefited from the land reform. Due mainly to the scarcity of agricultural land, some 35 percent of the households, who did not hold cultivation rights (nasaq) and were mostly employed in agriculture as wage laborers (khwushnishins), were not included among the beneficiaries of the land reform program.

In 1965, half of the village, cultivated under sharecropping arrangements, was subject to land redistribution under the second phase of the land reform program. Consequently, one-third of this half, along with the water rights, was transferred to 34 occupying sharecroppers, on average about 4.7 ha. each. The other half of the village, which was worked by wage laborers, was exempt from redistribution. The implementation of the land reform gave rise to the development of a peasant production system, the decline of absentee landownership, and the further consolidation of capitalist farming in the village (Table 2). Thus, the development of relatively large-scale capitalist agriculture alongside the emerging smallholders and pump-owner tenant farmers gave rise to contradictory tendencies in the village's transition to capitalist agriculture. These trends, which are corroborated by other case studies, continued to dominate the village agrarian structure until the 1979 revolution.
TABLE 2
Number and size of holdings by production system: Shishdangi (pre- and post-land reform)

                                      Pre-Land Reform: 1962                      Post-Land Reform: 1967
                              No. of     Mean Size    Total Area   % of    No. of     Mean Size    Total Area   % of
Production System             Holdings   of Holdings  of Holdings  Village Holdings   of Holdings  of Holdings  Village
                              (Total)    (ha.)        (ha.)        Farmland (Total)   (ha.)        (ha.)        Farmland
Landlord-Sharecropping           34         11            375        30.9      -          -             -          -
Remnant of the
  Sharecropping System            -          -              -           -      1          -           218        17.9
Peasant Proprietor                -          -              -           -     34        4.7           157        13.0
Pump-owner Tenant Farmer         13       26.1            340        28.0     13       26.1           340        28.0
Capitalist Farm                   1          -            500        41.1      1          -           500        41.1

-: Not applicable.
Source: Data extracted from Shishdangi (1969), Chap. 5, Tables 1, 6, 10; Chap. 6, Tables 2, 4.
3. Revolution, the Demise of Capitalist Farming, and the Transformation of the Peasantry

As the 1977-1979 revolutionary upheavals gathered momentum in the urban centers, they eventually reached the rural areas, leading to peasant radicalization and land seizure in the countryside. Current scholarship suggests that the peasants' participation in pre-revolution anti-regime demonstrations and political activities was by and large limited.13 However, villagers who commuted daily to city jobs and those who migrated and became part of the urban sub-proletariat were often willing participants in the urban uprisings as well as active agitators in the villages. What is apparent, therefore, is that when the power of the central government was weakened, the peasantry gradually joined the protest movement, primarily for land takeovers. In the case of Shishdangi, interviews with the two landlords and the village's key informants suggest that the peasants were mostly in a "wait-and-see" mode during the revolutionary upheavals, except for a few activists who began agitation against the owner of the capitalist farm about three months before the collapse of the regime in February 1979. In fact, the peasants' protests against the village's two landlords occurred at different times, depending largely on the nature of their past relations with the landlords. Immediately after the revolution, the peasants who were working on the capitalist farm took over the fields, the landlord's water pumps and farm machinery, while the peasants who were the beneficiaries of land reform in the other half of the village occupied the remaining fields of their landlord almost a year and a half after the revolution.

Thus, in the third period, the village witnessed the demise of relatively large-scale capitalist agriculture along with a substantial increase in the number of smallholders. A comparative analysis of the pre- and post-revolution data illustrates changes in the system of agricultural production, cropping patterns and crop yields. As the data in Table 3 reveal, the number of small peasant and medium farmers has tripled; the sharecropping system has totally disappeared; and the large capitalist farm (500 ha.) has been broken up into small, fragmented land holdings. The radical redistribution of land has, however, resulted in a 70 percent decline in the average size of holdings, from 24.8 ha. in 1967 to 7.5 ha. in 2001. Even though it has led to a notable decrease in the scale of landlessness among village households, from 64.4 percent of the total households in 1967 to 49 percent in 2001, the absolute number of landless households has increased from 85 to 143 because of population growth. The substantial rise in the absolute number of landless households reinforces the fact that even radical land redistribution measures under a revolutionary regime cannot by themselves resolve the issue of landlessness in a countryside where land scarcity is coupled with a rapid population growth rate.

The increased access to land and water resources by a larger number of villagers, coupled with the impact of rapid urbanization in the Marvdasht region and the further integration of the village economy into the market, have largely contributed to the acceleration of the socio-economic transformation of the peasants in Shishdangi. This is mainly reflected in changes in the agricultural production system, diversification of the occupational structure, and movement in social stratification.
In the remaining section of this paper, we will limit our discussion to the changes in the production system and shifts in agricultural water requirements.
CHANGES IN THE PRODUCTION SYSTEM

An analysis of the production system in Shishdangi over the last thirty-five years reveals a fundamental change: a transition from peasants, whose livelihood depended largely on subsistence agriculture, to farmers who now practice intensive agriculture within a market economy. As part of this transition, we also see the emergence of a small number of petty capitalist farmers. The peasant production system, which previously depended heavily on family labor and farming for domestic consumption, has made a drastic shift to mainly mechanized and commercial agriculture. Plowing, planting, weeding and harvesting of grains are now highly mechanized, even on small farms. The rapid expansion of farm mechanization has substantially reduced the need for household labor, except in tomato weeding and harvesting and in managing the irrigation system itself, which have remained labor intensive. The peasant household that used to produce a significant part of its own subsistence (roughly 60% in 1967) now sells almost all of its output to the market (Table 3). Even a family's daily bread is now acquired from bakeries in Marvdasht, in sharp contrast with 1967, when nearly all village households baked their own bread.

A comparison of the 1967 and 2001 production system data shows a considerable rise in crop diversification and yields, and an increase in the percentage of output sold into the market by small and medium farmers. The dramatic rise in yields is primarily the result of the adoption of Green Revolution technologies, especially fertilizers and high-yielding seed varieties, and increased investment in irrigation pumps.14 It should be stressed that the development of transportation and market networks in the Marvdasht region, as mentioned earlier, has brought the village further into the urban economy, which has contributed considerably to this transition.
TABLE 3
Mean output per hectare and crop density by production system: Shishdangia

Pre-Revolution: Agricultural Year 1966-1967
                                   No. of     Mean size     Wheat    Barley   Sugar Beet   Corn     Tomato   Crop
Production System                  holdings   of holdings   (tons)   (tons)   (tons)       (tons)   (tons)   Densityb
                                   (total)    (ha.)
Peasant Proprietor (<5 ha.)           34         4.7          1.6      1.5       21.3        -        -       0.79
Pump-owner Tenant Farmer
  (10-25 ha.)                         13        26.1          2.2      1.3       27.5        -        -c      0.71
Remnant of the Sharecropping
  System                               1           -          1.4      1.3       20.0        -        -       0.51

-: Not applicable.
a Data are based on information gathered from sample households in both the 1967 and 2002 studies.
b Crop density: area under cultivation/arable land.
c Melon cultivation was a second cash crop, next to sugar beet, on pump-owner tenant farmers' farms and the capitalist farm in 1966-1967.
Source: 1966-1967 data from Shishdangi (1969), Chap. 5, Tables 1, 3, 6, 7, 10, 12, 17, 20; 2000-2001 data from the 2002 field study.
Peasant agriculture has also changed with regard to animal husbandry. Most villagers have moved to new housing outside the old confines of the qalah, and so they have largely given up the practice of raising a few sheep and goats. In 1967 each household had on average some 20 head of sheep and goats and one cow, but by 2002 raising sheep and milk production had largely been taken over by a few relatively specialized farmers. A further indicator of peasant transformation is reflected in the new patterns of household consumption that appeared gradually over the last 30 years, including the adoption of new types of housing, clothes, food items, televisions, refrigerators, and telephones.15

A striking feature of the transformation in the mode of production is the emergence of a small number of petty capitalist farmers in the village. The changes had actually begun in the 1960s, when a few pump-owner tenant farmers increased the number of fields under sugar beet cultivation and introduced melons into their cropping pattern. This trend continued, particularly since the early 1990s, when some farmers began to diversify into milk production and to increase tomato cultivation in their fields. The new developments, which required increased investment and the hiring of wage labor, have resulted in sharp distinctions between "ordinary" smallholders and the emerging petty capitalist farmers.16 Loeffler observed a similar development in Sisakht, a village in the Boir Ahmad region, where in 2002 the farmers installed drip irrigation in their vineyards, drastically increasing their grape production and shipping grapes to markets all over Iran.17 Also, the farmers in Kheirabad in the Marvdasht region abolished their cooperative arrangements for land and water use a year after the revolution, and proceeded to build chicken farms, dig wells, install irrigation pumps, and expand cash crops in their fields.18

The question of how typical these patterns of emerging capitalist farmers may be of rural Iran can be answered only in tentative terms because of the limited number of comparable field studies. We can reasonably argue that conditions conducive to the development of capitalist farmers prevail in most villages, especially those in proximity to urban centers. This argument is supported by the fact that the number of rural entrepreneurs in agriculture, i.e., capitalist farmers, increased by 255 percent between the 1976 census and the 1996 census.19 Furthermore, the holdings of about 50 percent of the commercial farms enumerated in the 1993 Agricultural Census were smaller than 50 hectares, and these can be assumed to be operated mostly by local capitalist farmers.20
IMPACT ON WATER REQUIREMENTS

The changes in the production system have substantially increased the volume of water used in irrigation. This is clearly reflected in the number of water pumps, which more than doubled between 1967 and 2001 (see Table 1). Three major factors have contributed to the dramatic shift in water requirements: (1) a sharp increase in crop density, mainly as a result of the elimination of fallow land in the farming system and the introduction of double cropping; (2) increased application of Green Revolution technologies, especially the excessive use of chemical fertilizer; and (3) further agricultural intensification. We may argue that the underlying trends call into question the long-term sustainability of intensive irrigated agriculture in the village. This argument is
based mainly on the following three factors: (1) a substantial decline in groundwater levels, as more water is being pumped for irrigation than can be replenished by rainfall and/or other sources (our field observations suggest that while in 1967 most irrigation wells were 15-20 meters deep, by 2001 they were down to 60-70 meters); (2) inadequate drainage, which has led to salt build-up in some of the fields; and (3) low irrigation efficiency, currently estimated at 32% for surface flow and 40% for groundwater (pump irrigation) in the Marvdasht region.

Considering that the pattern of agrarian transition in Shishdangi is fairly typical of other villages in Iran, we can reasonably argue that conditions conducive to the development of capitalist farmers, small and/or large, prevail in many rural areas, especially those in proximity to urban centers. Consequently, the emerging capitalist farmers will dramatically increase the demand for irrigation water in the future. As in Iran, many rural areas in the developing countries are experiencing a significant increase in the demand for irrigation water as farmers move to more market-oriented intensive agriculture. These trends, coupled with the increased use of Green Revolution technologies, could potentially lead to further depletion of the water resources needed for sustainable agricultural development, especially in the arid and semi-arid zones of the world. The situation will deteriorate drastically due to rapid population growth, increased urbanization, and rising industrial demands, unless sound irrigation management, conservation policies, and land use practices are vigorously implemented in the developing countries.

REFERENCES

1. "Green Revolution: Curse or Blessing?" International Food Policy Research Institute, Washington, D.C., 2002.
2. V. Lenin, The Development of Capitalism in Russia, Vol. 3 of Collected Works (London: Lawrence & Wishart, 1960); idem, The Agrarian Question and the 'Critics of Marx' (Moscow: Progress Publishers, 1976).
3. Y. Hayami and M. Kikuchi, Asian Village Economy at the Crossroads (Tokyo: Tokyo University Press, 1981), 60-5.
4. See, among others, Teodor Shanin, "Polarization and Cyclical Mobility: The Russian Debate on the Differentiation of the Peasantry," in Rural Development: Theories of Peasant Economy and Agrarian Change, ed. John Harriss (London: Hutchinson University Library, 1982), 223-245; Robert M. Netting, Smallholders, Householders: Farm Families and the Ecology of Intensive, Sustainable Agriculture (Stanford, California: Stanford University Press, 1993): 214-21; and David Goodman and Michael Redclift, From Peasant to Proletarian: Capitalist Development and Agrarian Transition (New York: St. Martin's Press, 1982).
5. The current literature on Iran's 1960s land reform is highly controversial. For a review of different perspectives, see A.K.S. Lambton, The Persian Land Reform, 1962-1966 (Oxford, England: Clarendon Press, 1969); Eric J. Hooglund, Land and Revolution in Iran, 1960-1980 (Austin, Texas: University of Texas Press, 1982); Afsaneh Najmabadi, Land Reform and Social Change in Iran (Salt Lake City, UT: University of Utah Press, 1987); Asghar Schirazi, Islamic Development Policy: The Agrarian Question in Iran (Boulder, CO & London, England: Lynne Rienner Publishers, 1993); Fatemeh E. Moghadam, From Land Reform to Revolution: The Political Economy of Agricultural Development in Iran, 1962-1979 (London, England: Tauris Academic Studies, 1996); Mohammad G. Majd, Resistance to the Shah: Landowners and Ulama in Iran (Gainesville, FL: University Press of Florida, 2000); and Ahmad Ashraf, "State and Agrarian Relations Before and After the Iranian Revolution, 1960-1990," in Peasants and Politics in the Modern Middle East, eds. Farhad Kazemi and John Waterbury (Miami, FL: Florida International University Press, 1991), 277-311.
6. For a discussion of some of the revolutionary changes, see, among others, Adnan Mazarei, Jr., "The Iranian Economy Under the Islamic Republic: Institutional Change and Macroeconomic Performance (1979-1990)," Cambridge Journal of Economics 20 (1996): 289-314; Asghar Schirazi, Islamic Development Policy; Sohrab Behdad, "Winners and Losers of the Iranian Revolution: A Study in Income Distribution," International Journal of Middle East Studies 21 (1989): 327-358; Farhad Kazemi, Poverty and Revolution in Iran (New York: New York University Press, 1980); and Mansoor Moaddel, "Class Struggle in Post-Revolutionary Iran," International Journal of Middle East Studies 23 (1991): 317-343.
7. Ismail Ajami, Shishdangi: Pazhuhishi dar Zaminah-yi Jamiahshinasi-yi Rustai [Shishdangi: A Study in Rural Sociology] (Shiraz: Pahlavi University, 1969).
8. Calculated on the basis of the data on the rural population provided in the 1956 and 1996 censuses, and the arable land area reported in the 1961 First National Census of Agriculture and the 1993 Agricultural Census.
9. FAO, Production Yearbook, 1961; and the 1993 Agricultural Census.
10. For a discussion of urban landlord domination see, among others, Paul Ward English, City and Village in Iran: Settlement and Economy in the Kirman Basin (Madison: University of Wisconsin Press, 1966); Javad Safi-Nezhad, Talibabad: Nimunih-i Jami az Barrisi-i Yik Dih [Talibabad: A Comprehensive Example of the Study of One Village] (Tehran: Muassassa-yi Mutaliat va Tahqiqat-i Ijtimai, 1967); and Michael E. Bonine, Yazd and its Hinterland: A Central Place System of Urban Dominance in the Central Iranian Plateau (Marburg/Lahn: Geographisches Institut der Universitat Marburg), Marburger Geographische Schriften 82 (1980), Chapter 3.
11. For a survey of debates on the 1960s land reform politics, see Keith McLachlan, The Neglected Garden: The Politics and Ecology of Agriculture in Iran (London: I.B. Tauris, 1988), 105-152; Ashraf, "State and Agrarian Relations"; Majd, Resistance to the Shah, 88-163; and Hooglund, Land and Revolution in Iran, 123-137.
12. Ashraf, "State and Agrarian Relations," 305-306.
13. See Ashraf, "State and Agrarian Relations," 290-91; and Ervand Abrahamian and Farhad Kazemi, "The Non-Revolutionary Peasantry in Modern Iran," Iranian Studies 11 (1978): 250-304.
14. As an illustration, the number of water pumps increased from 27 in 1967 to 60 by 2002, and the average amount of fertilizer used per irrigated hectare increased from 92 kgs. to 550 kgs. over this period.
15. The 2002 sample household survey shows that all the small and medium farm households, except one, now have a refrigerator, television, and telephone, whereas in 1967 only 39 percent of peasant households had even a radio.
16. For a discussion of theoretical perspectives, see, among others, John Harriss, Capitalism and Peasant Farming: Agrarian Structure and Ideology in Northern Tamil Nadu (Bombay: Oxford University Press, 1982); and Luis Llambi, "Small Modern Farmers: Neither Peasants nor Fully-Fledged Capitalists," Journal of Peasant Studies 15 (1988): 350-72.
17. Reinhold Loeffler, "Change and Continuity in Sisakht," paper presented at the Fifth Biennial Conference on Iranian Studies, May 28-30, 2004, Bethesda, Maryland.
18. Ono, Kheirabad Namah, 133-6.
19. Behdad and Nomani, "Workers, Peasants, and Peddlers": 677.
20. Sarshumari-yi Umumi-yi Kishavarzi - Kull-i Kishvar, 1372 [1993 Agricultural Census] (Tehran: Markaz-i Amar-i Iran, 1998).
SUSTAINABLE WATER RESOURCE MANAGEMENT AND THE ROLE OF ISOTOPE TECHNIQUES

PRADEEP AGGARWAL
Isotope Hydrology Section, International Atomic Energy Agency, Vienna, Austria

ABSTRACT

Sustainability of water resources and the vulnerability of groundwater to contamination have become important environmental issues in many countries. Only recently have water managers realised the significance of an appropriate evaluation of aquifer systems for improving the sustainable management of such systems and averting the deterioration of water quality. Isotope techniques, an independent and powerful tool, are becoming an integral part of many hydrological investigations and are sometimes a unique tool in groundwater studies. The International Atomic Energy Agency (IAEA) fosters the role of nuclear science and technology in support of sustainable human development. An overview of groundwater sustainability problems worldwide, the application of isotope techniques for water resources development and management, and the role of the IAEA in the water sector are discussed briefly.

INTRODUCTION

The steady increase in global demand for fresh water coupled with rapid industrial and agricultural development is threatening the quality of fresh water supplies, especially in developing countries. Sustainability of water resources and the vulnerability of groundwater to contamination have become important environmental issues in many countries. The availability of proper hydrological and hydrochemical information before decisions are taken is expected to lead to improved management of the resources. Nuclear and isotope methodologies in hydrological studies provide powerful tools to hydrologists and civil engineers involved in water resources assessment and management. The Isotope Hydrology Section of the International Atomic Energy Agency (IAEA) has played a key role in the development of isotope hydrology, covering both theoretical aspects and the application of proven isotope and nuclear techniques to practical hydrological problems.

GROUNDWATER SUSTAINABILITY
In the Millennium Declaration, the UN Member States resolved "to halve by the year 2015 the proportion of people who are unable to reach, or to afford, safe drinking water" and "to stop the unsustainable exploitation of water resources, by developing water management strategies at the regional, national and local levels, which promote both equitable access and adequate supplies". Some forecasts show that, by 2025, more than 3 billion people will face water scarcity. This is not because the world lacks water. As HRH the Prince of Orange said to the panel of the UN Secretary-General in preparation for the Johannesburg Summit, the world water crisis is a crisis of governance. At the global level, there probably is enough water to provide water security for all, but only if we change the way we manage and develop it. In South
America, for example, water resources approximate 3 million km3, and only the equivalent of one-tenth of the total amount of water contributed by precipitation is used every year (GW-SAMTAC, 1999). The major problems these countries face are sustainable groundwater use and prevention of contamination of the available resources. Increases in water shortages and quality deterioration have a great impact on the economic and social development of these countries.

Groundwater represents about 97% of the fresh water resources available in the world, excluding the resources locked in polar ice (The World Bank, 1999), and is the main source of drinking water in many countries. However, after some years of groundwater development, it has been observed that pollution levels show a steady increase. This is usually associated with excessive abstraction, which also causes a decline in the water table. Despite the importance of groundwater for many societies, there is not enough public concern about its protection, perhaps because the extent and availability of groundwater are not easily measured. In some cases, it has been found that the exploited groundwater is not a renewable resource, thus leading to the mining of the resource. The impact of an increasing degree of temporal and spatial climatic variability on water resources is also an important consideration.

In the formulation of sustainable management strategies the following knowledge requirements arise: (i) determination of aquifer recharge rates and their temporal and spatial variation (especially in arid and semi-arid environments); (ii) evaluation of the age and origin of the groundwater explored or abstracted and of associated contaminants; (iii) determination of groundwater flow fields; (iv) assessment of the spatial variations in aquifer vulnerability in relation to land use; and (v) identification of the three-dimensional distribution of deep, high-quality palaeo-groundwater bodies, which represent potential strategic reserves.

Hydrogeology, and related scientific disciplines, provide a wide range of methods to address these questions. However, due to the complexity and inaccessibility of the subsurface, the coordinated use of independent methods is required to arrive at a consistent and robust conceptual model of the physical and chemical characteristics of the groundwater system. In establishing such models, environmental tracers are extremely useful and, in some cases, the only means of obtaining the necessary knowledge. To guarantee that aquifer management profits from environmental tracer data, the interface between tracer data and management models has to be strengthened. A number of good examples show that environmental tracer information can contribute to water resources assessment, planning and management.

The following issues appear to be most important for sustainable groundwater management in South America (GW-SAMTAC, 1999):
- lack of information on the hydrogeological and hydrochemical characteristics of aquifers, and the absence of monitoring of these parameters;
- intense urbanization without any regulation, and insufficient infrastructure for water supply and wastewater networks;
- poor quality of cesspool and septic tank construction;
- saline intrusion in coastal zones;
- inadequate or nonexistent management of industrial and mine waste deposits;
- the erroneous idea that groundwater is a common resource that can be used freely and without any restrictions, leading to over-exploitation of many aquifers;
- the absence of integrated management regulations or organisations in the countries; and
- the lack of numerical models for the adequate evaluation of resources.

In Central America, which suffers from droughts and the impacts of climate change, the issue of groundwater management is even more relevant.

ISOTOPES IN HYDROGEOLOGICAL INVESTIGATIONS

A comprehensive understanding of a hydrogeological system is necessary for sustainable resource development without adverse effects on the environment. Isotope techniques are effective tools for fulfilling critical hydrologic information needs. The cost of such investigations is often relatively small in comparison to the cost of classical hydrological techniques and, in addition, isotopes provide information that sometimes could not be obtained by other techniques.

Stable and radioactive environmental isotopes have now been used for more than four decades to study hydrological systems and have proved particularly useful for understanding groundwater systems (Aggarwal et al., 2004). Applications of isotopes in hydrology are based on the general concept of "tracing", in which either intentionally introduced isotopes or naturally occurring (environmental) isotopes are employed. Environmental isotopes (either radioactive or stable) have the distinct advantage over injected (artificial) tracers in that they facilitate the study of various hydrological processes on a much larger temporal and spatial scale through their natural distribution in a hydrological system. Thus, environmental isotope methodologies are unique in regional studies of water resources aimed at obtaining time- and space-integrated characteristics of groundwater systems. The use of artificial tracers generally is effective for site-specific, local applications.

The most frequently used environmental isotopes include those of the water molecule, hydrogen (2H or D, also called deuterium, and 3H, also called tritium) and oxygen (18O), as well as those of carbon (13C and 14C, the latter also called radiocarbon) occurring in water as constituents of dissolved inorganic and organic carbon compounds. 2H, 13C and 18O are stable isotopes of the respective elements, whereas 3H and 14C are radioactive isotopes.

Variations in the stable isotope ratios of natural compounds are governed by chemical reactions and phase changes, owing to the energy difference between chemical bonds involving different isotopes of an element. Such energy differences are caused by the relative mass difference between isotopes; the stable isotopes of light elements show greater variations because they have larger relative mass differences. Stable isotope ratios in hydrology are conventionally reported as the per mil (‰) deviation from those of a standard, using the δ (delta) notation. The isotopic standard used for hydrogen and oxygen isotopes is the Vienna Standard Mean Ocean Water (VSMOW). The International Atomic Energy Agency (IAEA) distributes stable isotope reference materials to all interested users.

Most applications of the stable isotopes of hydrogen and oxygen in groundwater studies make use of the variations in isotopic ratios in atmospheric precipitation, i.e., in the input to the hydrogeological system under study. A worldwide relation between the 18O content of precipitation and mean annual air temperature has been observed.
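For reference, the δ notation mentioned above is defined (this is the standard isotope-hydrology convention rather than an equation given in this paper) in terms of the ratio R of the heavy to the light isotope (e.g., 18O/16O or 2H/1H) in the sample and in the standard:

\[
\delta = \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰}
\]

so that negative δ18O or δ2H values indicate water depleted in the heavy isotope relative to VSMOW.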
This dependence on temperature produces seasonal isotope variations in precipitation (winter precipitation is depleted in heavy isotopes with respect to summer precipitation), a latitude effect (high-latitude precipitation is depleted with respect to low-latitude precipitation), and an altitude effect (the heavy isotope content of precipitation decreases with increasing altitude). These effects allow these isotopes to be used to delineate various hydrogeological processes and to serve as indicators of past and present climate change and of palaeowaters.

The other set of tools used extensively in isotope hydrology is radioactive isotopes. Radioactive isotopes (also called radioisotopes) occurring in groundwater originate from natural and/or artificial nuclear processes. The radioactive decay of environmental isotopes makes them a unique tool for the determination of groundwater residence time ("age") - the length of time groundwater has been isolated from the atmosphere - which is crucial to understanding aquifer dynamics. Among the environmental radioisotopes, tritium and carbon-14 have found the widest application in groundwater studies.
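As a brief illustration of the residence-time concept described above (the relation below is the standard radioactive-decay law, not a formula quoted in the paper), the apparent groundwater age t follows from the measured activity C of a radioisotope, its initial activity C0 at recharge, and its half-life t1/2:

\[
t = \frac{t_{1/2}}{\ln 2}\,\ln\frac{C_{0}}{C}
\]

With half-lives of about 12.3 years for tritium and about 5,730 years for carbon-14, tritium is suited to dating recharge on the scale of decades, while carbon-14 can date groundwater up to several tens of thousands of years old.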
Among the most important areas where isotopes are useful in groundwater applications are studies of recharge and discharge processes, of flow and interconnections between aquifers, and of the sources and mechanisms of pollution. In particular, under arid and semi-arid climatic conditions, isotope techniques constitute virtually the only approach for the identification and quantification of groundwater recharge. Pollution of shallow aquifers - and, due to over-exploitation of superficial aquifers, also of deeper aquifers - by anthropogenic contaminants is one of the central problems in the management of water resources. Environmental isotopes can be used to trace the pathways and to predict the spatial distribution and temporal changes of pollution patterns, for assessing pollution migration scenarios and planning aquifer remediation. Furthermore, isotopes can trace the dispersion and infiltration of pollutants in landfills as well as quantify the degradation and migration of pollutants. Isotopes are also applied extensively to study atmospheric processes, climate and environmental change, palaeowaters, leakage from dams and reservoirs, stream flow, effluent dispersion, suspended sediment and bedload movement in ports and harbours, lake dynamics, sedimentation in lakes and reservoirs, geothermal systems, glaciology, etc. Environmental isotopes are supplementary tools for hydrological investigations. Generally, an integrated approach - employing isotope, hydrogeological, and hydrochemical data - will lead to the optimum use of these techniques and to a logical interpretation.

ROLE OF THE IAEA

The International Atomic Energy Agency has played a crucial role in promoting and expanding the field of isotope hydrology over the last four decades. Isotope hydrology today is practiced in most countries, although the field began nearly 50 years ago with a few research centres in the developed countries involved in understanding the distribution of isotopes in natural waters. The Agency's role in building a cadre of trained isotope hydrologists worldwide is significant. Inasmuch as isotope hydrology is an advantageous tool for the sustainable management of water resources, it is imperative that practicing hydrologists be competent in the use of isotope techniques. More than 700 fellowships with an average duration of 3 months have been awarded over the last four decades for training at the Agency's Headquarters or at other established centres. Group training events - national, regional, and inter-regional courses varying in duration from 1 to 8 weeks - have been conducted with more than 600 participants. The trainees further improved their skills through on-the-job training associated with technical cooperation projects. Regularly undertaken Coordinated Research Projects (CRPs) provide more advanced training to a limited group of participants from developing Member States. The Agency-funded technical cooperation projects in isotope hydrology cater to the needs of the Member States for hydrological field investigations, human resource development, and the strengthening of infrastructure facilities. Presently about 60 projects are active for the 2001-2002 cycle.

The International Atomic Energy Agency maintains and provides the analytical support to the IAEA/WMO Global Network of Isotopes in Precipitation (GNIP). The Agency periodically publishes data on the concentration of stable isotopes and tritium in precipitation samples collected at a large number of stations around the globe. This is the only database on isotopes to provide basic reference data for researchers in the fields of hydrology and atmospheric sciences worldwide over the last four decades. During the past four decades, there has been a steady growth in the number of stations participating in GNIP. The data can be found on the Internet site http://isohis.iaea.org. Isotope monitoring of river water will provide a robust new tool for evaluating the effects of climate change and land use patterns on water resources, as well as for developing strategies for integrated watershed management. A programme has been initiated to formulate the design parameters of such a Global Network of Isotopes in Rivers (GNIR).

Until recently, the Agency's publications were the sole source of written material for training and education in isotope hydrology. In addition to their geographical spread, the sheer number of hydrological studies with isotopes has shown a substantial increase. The number of analytical facilities has also increased steadily, and a large number of these laboratories in the developing countries have been established with the IAEA's support.

REFERENCES
1. Aggarwal, P.K., Froehlich, K. and Kulkarni, K.M. 2004. Environmental Isotopes in Groundwater Studies. Encyclopaedia of Life Support Systems (EOLSS). UNESCO, Paris.
2. GW-SAMTAC. 1999. Agua para el siglo XXI: De la Visión a la Acción. América del Sur. Módulos 3, Buenos Aires, 81 p.
3. IAEA/UNESCO. 2001. Environmental Isotopes in the Hydrological Cycle: Principles and Applications. W.G. Mook (Ed.), 6 volumes. UNESCO, Paris.
4. The World Bank. 1999. Groundwater - Legal and Policy Perspectives. Proceedings of a World Bank Seminar. S.M.A. Salman (Ed.). World Bank Technical Paper No. 456. The World Bank, Washington.
SCIENTIFIC CHALLENGES FOR ENSURING CLEAN AND RELIABLE WATER FOR THE 21ST CENTURY

ANDREW F. B. TOMPSON
Lawrence Livermore National Laboratory, Livermore, USA

INTRODUCTION

Many areas in the world are experiencing significant fresh water shortages due to drought, growing populations, increased agricultural and industrial demands, and extensive forms of pollution or water quality degradation.1 Many more are expected to face similar predicaments in the next 20 years. Water shortages will significantly limit economic growth, decrease the quality of life and human health for billions of people, degrade the ecological health of natural environments, and could potentially lead to violence and conflict over securing scarce supplies of water. These concerns are not limited to the economically poor countries, of course, as many parts of the United States face similar dilemmas. These problems can be exacerbated by fluctuating imbalances between need and supply, poor water management or land use practices, social, economic, political, and trans-boundary disputes, as well as factors related to climate change. The future is one that will require significant technological advances to support the conservation, preservation, and movement of fresh water, as well as the development of new or alternative supplies. It will also require concomitant improvements in the use of practical solutions and in the ways the broader scientific and technical community interacts with policy-makers, water-related agencies, the educational community, and the public in the solution process. This presentation reviews several aspects of these issues and proposed or implemented solutions for new and reliable water in the context of an example water situation in the US.

WATER IN THE AMERICAN WEST

In the past decades, population growth and droughts in California have highlighted and refocused attention on the problem of providing reliable sources of water to sustain the State's future economic development. Specific elements of concern include not only the stability and availability of future water supplies in the State,2 but also how current surface and groundwater storage and distribution systems may be more effectively managed and upgraded, how increasingly degraded water supplies may be improved or treated, how the water needs of natural ecosystems may be met, and how legislative, regulatory, and economic processes may be used or modified to address conflicts between advocates of urban growth and industrial, agricultural, and environmental concerns. California is not alone with respect to these issues. They are clearly relevant throughout the West, and are becoming more so in other parts of the US. They have become increasingly important in developing and highly populated nations such as China, India, and Mexico. And they are critically important in the Middle East, especially as they relate to regional stability and security issues. Indeed, in almost all cases, there are underlying themes of "reliability" and "sustainability" that pertain to the assurance of
current and future water supplies, as well as a broader set of "stability" and "security" issues that relate these assurances - or the lack thereof - to the political and economic future of various countries and regions. Moreover, water quality is becoming an equally or more important concern in many parts of the world, either as a result of long-term agricultural or industrial contamination, or as a result of naturally poor or saline waters being used for routine domestic supplies.

The water supply and quality situation in the United States is replete with examples of the issues outlined above. Consider, for instance:
- Chemical contamination of surface and subsurface waters, as caused or induced by agricultural, industrial, and defense-related activities over the past century, has been recognized as an important and widespread problem3 affecting drinking water supplies and the health of natural ecosystems, yet one that has proven to be extremely costly to address.
- Pathogenic contamination of drinking water, often associated with isolated septic tank or wastewater discharges, has received more attention recently as a result of water-borne illnesses attributed to Cryptosporidium in Milwaukee, WI, and is now the subject of important changes proposed for the Ground Water Rule in the National Primary Drinking Water Regulations.4
- Sea water desalination, long thought to be too costly in the US, is now being implemented in Tampa, Florida as part of a master plan designed to provide new water to a region (10% of the overall water supply by 2008) whose groundwater resources can no longer supply the growing urban demand.
- The Ogallala formation in the central plains - an extensive fossil water aquifer with no effective recharge - is being depleted ever so slowly by agricultural and urban extraction, setting the stage for increasingly serious water supply problems in the future.5
- The impact of climate change, as caused by global warming or longer-term natural cycles, may affect water supply scenarios over the next fifty years, especially in California, where decreasing mountain snowpack storage may occur.6
- A recent dispute regarding reduced allocations of Colorado River water to California highlights the increasing difficulty and creativity required for competing urban, agricultural, and environmental interests to agree on a comprehensive conservation plan to lower overall withdrawals over a mandated 15-year period.7,8

CALIFORNIA AS A MORE FOCUSED EXAMPLE

In many respects, California serves as an excellent example of many important water supply and quality problems facing the U.S. and many parts of the world. Consider, initially, some pertinent facts:2
- Direct precipitation provides most of the renewable water input to California each year, approximately 200 million acre-feet (MAF) on average, of which approximately 65% is lost to evaporation and vegetative transpiration (1 acre-foot = 1233.5 m3). The remaining 35% comprises the State's average annual
renewable runoff of about 71 MAF (a rough arithmetic check of these figures follows this list). Most of the runoff is stored in mountain snows, captured in numerous reservoirs, recharged to groundwater aquifers, or discharged to the Pacific Ocean.
- Over 30% of the average annual renewable runoff is unused and otherwise lost, primarily to the Pacific Ocean. Although not explicitly designated for urban, agricultural, or environmental uses, its existence is deceiving, as it may be concentrated in the wet years of the averaging cycle and nonexistent in dry years.
- Small imports from the Colorado and Klamath Rivers (totaling about 6 MAF) are added to the remaining runoff total to compute the State's annual water budget. Of this, approximately 28% is captured or otherwise used for irrigated agriculture, ~7% is captured or otherwise used for urban demands, ~35% is consumed by environmental allocations (such as mandated flows in wild and scenic rivers, the California Delta, and wetlands), and ~1% is for other uses such as power generation.
- Urban demands - primarily in the coastal areas - are growing and are basically being offset by surface water transfers from agriculture. Eventually, limitations, delivery restrictions and other political considerations may limit such transfers, such that sources of "new" water must be found.
- Because precipitation is concentrated in the North and in the winter, an interconnected surface water reservoir and aqueduct system has been built to store snowmelt water in reservoirs and redistribute it to users in the growing population centers along the coast and to the expanding agricultural users in the Central Valley. The California Delta forms the heart of this system. Water flowing through the Delta is subject to many forms of degradation, increasingly stringent environmental outflow and quality constraints, potential interruption from earthquake and levee failures, and a finite throughput capacity.
- The system is augmented by local groundwater use, which is significant (~16 MAF/year) and satisfies between 40 and 50% of statewide agricultural and urban demands. In California, groundwater storage capacity is quite large, over 800 MAF, as compared with the surface reservoir capacity of 43 MAF, but its use may be limited by quality, sustainability, and other production-related constraints.
- Although it has been said that there is "enough water in California" to meet future population (urban) demands for some time, capacity, water quality, and environmental constraints in the system are seen by many to prevent the kinds of redistribution and capture necessary to satisfy all demands in an economically feasible manner.
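As a rough arithmetic check of the runoff figure quoted in the first item above (an illustrative calculation, not one taken from the paper):

\[
200\ \text{MAF} \times (1-0.65) = 70\ \text{MAF} \approx 71\ \text{MAF},
\qquad
71\ \text{MAF} \times 1233.5\ \text{m}^{3}/\text{acre-foot} \approx 8.8\times 10^{10}\ \text{m}^{3} \approx 88\ \text{km}^{3}\ \text{per year}.
\]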
Where are things headed in California? Significant trends in population growth are likely to aggravate water shortages (today's average may be tomorrow's drought), sharply increase the cost of marginal increases in supply, and reduce the overall reliability of the system. Although water transfers from agriculture to urban use are here to stay, especially with the increasing urbanization of former agricultural lands, there will be growing pressure to reallocate more water from agriculture to the environment, each of which will be subject to the capacity of the surface water system to move the right water to the right places.
Increasing the capacity of the system (more dams and aqueducts) is strongly limited by land and economic concerns.

Figure 1: Statewide overview of wells - the extent of nitrate contamination in California groundwater. Low nitrate concentrations are shown in green (5-20 mg/L), moderate concentrations in yellow (20-45 mg/L), and high concentrations, which equal or exceed the regulated drinking water limit of 45 mg/L, in red.

Moreover, water quality problems in many parts of the state are beginning to limit the use of existing supplies and effectively reduce the amount of fresh water available. Groundwater quality in many areas, for example, is being threatened by fertilizer and pesticide contamination, farm wastes, and septic discharges. One third of the public drinking-water wells in the state have been lost since 1988, and nitrate contamination is the most common reason for abandonment. Currently, about 10 percent of active California public water-supply wells have nitrate contamination exceeding the drinking water standard of 45 parts per million (Figure 1). In agricultural areas, such as Stanislaus County, up to 80 percent of the groundwater is affected or polluted by nitrate. Accumulations of unhealthy natural minerals, such as arsenic and selenium, are found in irrigation discharges, and can become concentrated in wetlands or circulated into the California Delta. Arsenic itself is the focus of increasingly stringent regulated concentration limits. Degrading water quality in the Delta, arising from agricultural wastes, lower flow rates, and saltwater intrusion, is of considerable concern, both in terms of threats to drinking water quality and in terms of fisheries and the natural ecosystem; these, in turn, will affect the Delta's overall role in the California water system.
Persistent shortages in the urban areas will become the norm unless broader and more aggressive strategies for developing reliable sources of "new" water are pursued. In truth, there are only two real sources of water in California: (1) that derived ultimately from the hydrologic cycle and (2) seawater. Sources of new fresh water must be derived either from reallocations or more efficient use of the hydrologic input, from reuse of impaired water (wastewater, agricultural drainage, polluted or non-potable groundwater), or from seawater itself. Notably, marginal increases in the supply can have a dramatic effect on costs (Figure 2).
Figure 2: Marginal costs of new fresh water supplies grow exponentially; costs of reused or desalinated water tend to rise less dramatically (courtesy of Dr. Norm Brown, Integrated Water Resources, Inc.).
In the near term, water reuse, as derived from wastewater treatment and agricultural drainage sources, will be particularly popular, yet subject to increased concerns with respect to water quality, related mainly to salt and human pathogen loads (see the example in the next section). Water banking in underground aquifers, using fresh or reused water, is being used successfully in many areas and is being considered for many others, although quality concerns arise if reclaimed water is used, if the existing underground water is of poor quality in the first place, or if unhealthful natural minerals like arsenic and selenium are leached into the water in the process. Over the longer term, the consensus of many is that desalination of ocean water or aggressive treatment of marginal, brackish, or otherwise unusable water - typically quite expensive - will become a routine source of new water, especially if more robust and economically viable treatment methods are found. In addition, the ability to predict changes and variability in climate over the next 50 years is becoming increasingly important to many water planners, especially as it relates to forecasting changes in the overall inputs to the California water system, changes in urban demand due to higher temperatures, or less snow in favor of more rain because of global warming6,9. Increased
and more concentrated runoff from this latter scenario will undoubtedly lead to floods, the need for higher capacity in runoff and storage systems, or simply smaller amounts of water that can ultimately be saved in existing reservoirs. Higher sea levels produced by melting ice caps will increase the penetration of saline water into the California Delta. These, in turn, may accelerate the need for developing reliable sources of new water and improved water management strategies in California, as described above and below.
What kinds of key, wide-ranging science and technology (S&T) developments can make a difference in California? Here, we wish to highlight three or four closely connected areas for S&T development that could have a noticeable, practical, and meaningful impact on water in California (and, by extension, elsewhere). In many senses, these are aligned with other recent national studies10,11 focused on elaborating the role of science in providing future water security in the 21st century.
New Water: Improved water treatment technologies. Here, we are concerned with the development of more cost-effective methods for water treatment and purification, as they relate to seawater desalination, removal of salts and fertilizers from non-potable brackish groundwater or agricultural drainage, filtration of viruses or other pathogens from treated wastewater, or removal of other kinds of industrial or organic waste stream contaminants. One important S&T issue here involves the need for more energy-efficient reverse osmosis (RO) or electrodialysis-based filtration designs, for example, new deionization techniques, point-of-use techniques, cheaper sources of energy, or some suitable combination of these or similar processes12. The high energy costs of current RO technologies, for example, account for half of the total treatment cost and far exceed the minimum theoretical thermodynamic energy for purification (Figure 3). If effective and viable at large enough scales or in a widely distributed sense, more efficient treatment processes could serve to add "new" potable water into the California equation where it otherwise did not exist - through the effective reuse of wastewater or development of new water from seawater - thereby adding more reliability for the immediate users of this water and greater flexibility to other parts of the California water system. Another important S&T issue involves the treatment of brines and other highly concentrated wastewaters produced by RO and related treatment methods. Although discharge to the ocean is often touted as the only real possibility, another lies in the potential to extract commercially valuable minerals from the wastes themselves13.
Figure 3. Energy required for desalination, as a function of concentration, for current RO and electrodialysis technologies, and the room for improvement for potential improved technologies12.
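For orientation on that thermodynamic floor, a minimal sketch in Python (assumed round numbers: seawater osmotic pressure of about 27 bar and a representative delivered RO energy of a few kWh per cubic metre; these are generic textbook values, not data from this paper):

# Ideal vs. typical reverse-osmosis (RO) energy for seawater (assumed round numbers).
OSMOTIC_PRESSURE_PA = 27e5          # ~27 bar for typical seawater
J_PER_KWH = 3.6e6

# Reversible work to push 1 m^3 of permeate across the membrane, in the limit of
# low recovery, is roughly the osmotic pressure times the permeate volume.
ideal_kwh_per_m3 = OSMOTIC_PRESSURE_PA * 1.0 / J_PER_KWH

typical_ro_kwh_per_m3 = 3.5         # representative modern-plant figure (assumed)

print(f"thermodynamic floor: ~{ideal_kwh_per_m3:.2f} kWh/m^3")
print(f"typical RO energy  : ~{typical_ro_kwh_per_m3:.1f} kWh/m^3")
print(f"ratio              : ~{typical_ro_kwh_per_m3 / ideal_kwh_per_m3:.1f}x the minimum")

The roughly 0.75 kWh/m^3 floor against a delivered energy several times larger illustrates the headroom that motivates the improved designs discussed above.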
Future Water: More reliable assessments of future climate change and variability. Here we are concerned with achieving a greater understanding of the future climate in California through prediction and observation, as it relates specifically to long-term changes in precipitation and temperature, or shorter-term fluctuations, typically manifested as droughts. Long-term trends or changes in climate may result in more or less precipitation coming into the state, longer or shorter wet seasons, warmer temperatures that minimize accumulation of snow, and potentially rising sea levels. Precipitation changes will alter the water balance in the state and change the timing and way in which water is used, reused, moved, stored or procured - as, for example, through desalination. Even if precipitation amounts remain the same, less snow means that runoff will be concentrated in earlier months and potentially unavailable for reservoir storage (due to capacity and operational procedures) unless, for example, new forms of storage or alternative sources of water are found. Rising sea levels will induce salt water to flow further into the California Delta, affecting balances in the local ecosystem, as well as threatening water quality in the State's aqueduct system. The S&T issue here is really one of developing (i) climate predictions at a fine-enough spatial resolution for use in California and over specific types of relevant time scales, (ii) the ability to reduce or quantify uncertainties involved in such predictions, and (iii) the ability to translate or propagate the results of such predictions into hydrologic variables, such as runoff and groundwater recharge rates, that are pertinent to the needs of planners in local and statewide agencies.
Banking Water: Impacts on groundwater quality. Here we are concerned with the use of groundwater basins for the storage of excess or reclaimed water, recharged artificially through injection wells, infiltration basins, or ephemeral streams, or with the development of groundwater from degraded or low-quality aquifers. Active water banking is already being used in many parts of the State (e.g., Kern, Los Angeles, and Orange Counties) and is being considered in some others (e.g.,
near Cadiz in the Mojave Desert). Many currently viable aquifers are being threatened by widespread salt loads from agriculture, while many others are naturally of poor quality - yet might be used if effective treatment techniques could be employed. The S&T issue here is really one of understanding the mechanisms that degrade or threaten groundwater quality, especially as they relate to the more aggressive uses of groundwater basins that are being considered or are in use. Issues to be addressed may include understanding the fate and migration of viruses in groundwater systems, developing gross balances of introduced or dissolved salts from agriculture or recharge practices, understanding the impacts of surface water - groundwater interactions on water quality, and so forth. A more in-depth example is presented below.
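Before that more detailed example, a minimal sketch in Python of the kind of gross salt balance mentioned above (all volumes and concentrations are hypothetical illustration values, not measurements from any basin):

# Hypothetical annual salt balance for a banked groundwater basin (illustration only).
AF_TO_M3 = 1233.5                       # cubic metres per acre-foot

inflows = {                             # (volume in AF/yr, TDS in mg/L), assumed values
    "artificial recharge": (200_000, 600),
    "irrigation return flow": (50_000, 1_200),
}
pumping_af, pumping_tds = 260_000, 700  # assumed basin-average production

def salt_tonnes(volume_af, tds_mg_per_l):
    # mg/L equals g/m^3, so mass in tonnes = volume [m^3] * TDS [g/m^3] / 1e6
    return volume_af * AF_TO_M3 * tds_mg_per_l / 1e6

salt_in = sum(salt_tonnes(v, c) for v, c in inflows.values())
salt_out = salt_tonnes(pumping_af, pumping_tds)
print(f"salt added   : ~{salt_in:,.0f} t/yr")
print(f"salt removed : ~{salt_out:,.0f} t/yr")
print(f"net change   : ~{salt_in - salt_out:,.0f} t/yr")

Even a crude balance of this sort indicates whether recharge and irrigation practices are slowly salting up a basin or holding it steady.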
Figure 4. The California Delta
Delta Water: Understanding complex ecological trends and balances. The California Delta is the heart of the State water system (Figure 4). Natural inflows from the North, West, and East normally discharge to the San Francisco Bay and the Pacific Ocean to the West. The Delta is an area of "unsurpassed ecological importance for salmon, migratory waterfowl, and a host of other plants and animals"14, yet it is also used to transfer water released from upstream reservoirs into aqueducts that flow to the South. Delta water quality and its interactions with the natural ecological system are affected by the quantity and quality of water moving into the Delta from inland sources or the San Francisco Bay, groundwater interactions between the channels and
Delta islands, and other more complicated land use issues, both in the Delta and along the rivers that feed it. The S&T issues here would really focus on trying to understand complex chemical and ecological cycles, especially as they are influenced by a combination of anthropogenic and natural forces, and their relation to the increasingly complicated and limiting environmental constraints being imposed to protect the Delta ecological system. The charter for addressing many of these issues lies with the recently established CalFed Program14, a cooperative effort of more than 20 state and federal agencies working with local communities to improve the quality and reliability of California's water supplies while preserving the San Francisco Bay-Delta ecosystem.
AQUIFER BANKING IN ORANGE COUNTY, CALIFORNIA
As a more illustrative example, we now review some recent and ongoing work to provide more scientific insight into groundwater banking processes in a large urban setting. The Orange County Water District (OCWD) manages a groundwater basin that provides 70% of the domestic water supply for approximately 2 million residents in the northern part of Orange County, California15. The remaining 30% is purchased and imported from outside the district. On an average annual basis, roughly 270,000 acre-feet (AF) of water are extracted from several hundred production wells located within the middle production aquifers of the basin. To sustain this rate of withdrawal, OCWD maintains an artificial recharge program that returns about 205,000 AF of water annually to the groundwater basin. This is achieved by diverting large portions of the base flow of the Santa Ana River into a series of infiltration basins and abandoned gravel pits along or near the upper reaches of the river. Because of the higher geologic permeabilities in these areas, infiltrated water readily percolates into the main production aquifers. Although the principal source of recharge is the Santa Ana River, additional supplies are occasionally imported from the Colorado River and California State Water Project sources. Future plans also call for direct use of reclaimed water from a nearby wastewater treatment plant to increase the overall recharge. Interestingly, much of the base flow in the Santa Ana River today is already reclaimed, in the sense that it is partially composed of discharges from upstream wastewater treatment plants in Riverside County.
Reclaimed wastewater may contain organic and microbiological contaminants like viruses that, upon recharge into an aquifer, may later be captured in production wells, especially in the absence of tertiary or other advanced forms of wastewater treatment. Because dilution, natural degradation, and other transformation processes may lower these contaminant concentrations along travel pathways, state regulators in California have proposed a nominal set of standards to govern how production wells and recharge practices involving reclaimed water are operated. In terms of the Orange County basins, they would require that (1) reclaimed water have a groundwater residence time of one year before reaching production wells, as a way to ensure that degradation or dilution mechanisms occur; (2) no more than 50% of production well water be reclaimed in its origin, regardless of residence time; and (3) production wells be located more than 2,000 ft from recharge basins.
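A minimal sketch in Python of how these three proposed criteria might be screened for a given well (the field names and the example values are ours, not OCWD data; the thresholds mirror the proposed limits quoted above):

from dataclasses import dataclass

@dataclass
class Well:
    name: str
    min_residence_time_yr: float    # youngest reclaimed-water component reaching the well
    reclaimed_fraction: float       # fraction of produced water reclaimed in origin
    distance_to_recharge_ft: float  # distance to the nearest recharge basin

def meets_proposed_rules(w):
    # Screen a well against the three proposed criteria described in the text.
    return (w.min_residence_time_yr >= 1.0
            and w.reclaimed_fraction <= 0.50
            and w.distance_to_recharge_ft > 2000.0)

# Hypothetical example wells, for illustration only.
for well in (Well("A", 0.3, 0.60, 1500.0), Well("B", 4.0, 0.20, 2600.0)):
    print(well.name, "meets proposed rules:", meets_proposed_rules(well))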
Figure 5: Perspective showing simulated travel pathways from wells P5, P6, and P7 to their surface sources. Streamlines are color-coded to indicate the relevant capture well, and white areas along each well bore indicate their open intervals. The background block is coded to indicate complexity in the geologic (hydraulic conductivity) distribution. Dots represent intersections of streamlines with the recharge surface and are color-coded by travel time (after ref. 19).
Because these regulations are tentative, additional scientific study may be needed for their refinement. There have been no conclusive monitoring or epidemiological studies relating to the introduction and fate of viruses in the OCWD aquifer system, although viruses derived from similar artificial recharge operations have been observed in a nearby aquifer in Los Angeles County16. As a means to assess compliance with the proposed regulations, however, isotopic and modeling analyses have been used in the Orange County system to infer migration patterns of groundwater and to estimate the ages and sources of groundwater in production and monitoring wells near the spreading basins17-20. Figure 5, taken from a modeling study, shows approximate flow pathways from three production wells, labeled 'P5', 'P6', and 'P7', back to their surface water sources. These sources are represented, primarily, by three recharge basins (Anaheim Lake, Warner Basin, and the Santa Ana River). Other production wells exist
but, for clarity, are not shown. Note that wells P6 and P7 are deep and have large open production intervals, while P5 is shallow and only has a small open interval. For each well, the travel pathways envelop a distorted “capture zone” around a body of water that flows uniquely into each well through a complicated geological setting. The point is that the “age” of the water entering each well is not unique, but rather distributed as a function of the recharge pathways for each well. Small and shallow wells such as P5 would tend to have younger ages, while deeper wells like P6 and P7 will have older ages. This is obvious in Figures 6 and 7, which show the groundwater age as a function of depth and as a histogram in each of these wells.
Figure 6: Model-predicted groundwater age as a function of depth below the water table for wells P6, P7, and P5. Colors represent the relative rate of flow into each well at particular depths, as controlled by neighboring geologic conditions (after ref. 20).
The mean ages in each well, as determined from the results in Figure 7, were similar to tritium/helium age-dating estimates determined for "average water" extracted from the entire open interval of each well. Although the age estimates were useful in calibrating the simulation model, they were not wholly indicative of the age distribution in any of the wells, and thus were not as useful for demonstrating compliance with the aforementioned proposed regulations as originally envisioned. Notably, from the simulation results, no water entering wells P6 and P7 is younger than 1 year in age, although more than half of the water entering well P5 is.
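A minimal sketch in Python of the flux-weighted age statistics discussed here (streamline travel times and flow weights are made-up illustration values, not model output from ref. 20):

# Flux-weighted groundwater age statistics from streamline results (illustrative values).
# Each tuple is (travel time in years, fractional flow weight) for a streamline ending at the well.
streamlines = {
    "shallow well": [(0.4, 0.3), (0.8, 0.3), (3.0, 0.2), (10.0, 0.2)],
    "deep well":    [(5.0, 0.2), (12.0, 0.3), (25.0, 0.3), (60.0, 0.2)],
}

for well, lines in streamlines.items():
    total_w = sum(w for _, w in lines)
    mean_age = sum(t * w for t, w in lines) / total_w            # flux-weighted mean age
    young_frac = sum(w for t, w in lines if t < 1.0) / total_w   # share younger than 1 year
    print(f"{well}: mean age ~{mean_age:.1f} y, fraction younger than 1 y ~{young_frac:.0%}")

The point the sketch makes is the same one made above: a single mean (or tritium/helium) age can look safely old even when a substantial fraction of the flow is younger than the one-year criterion.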
Figure 7: Flux-weighted distribution of model-predicted groundwater ages for wells P6, P7, and P5. One-year intervals are shown for the P6 and P7 histograms; one-week intervals are shown for the P5 histogram (after ref. 20).
As a result of these observations, a tracer test was conducted during a recharge event in nearby Anaheim Lake to see whether any "first arrivals" would appear in any of the various production wells surrounding the lake within a 1-year time period. A small amount of a xenon isotope (124Xe) was introduced into the lake as the tracer. Figure 8
shows measurements of 124Xe/132Xe ratios observed in three deep wells (P6, P7, and P8) surrounding the lake over a 400-day period following the recharge event. Although there was no appreciable arrival of the tracer in wells P6 and P7 during this time (apparently consistent with the results in Figures 5-7), there was an obvious arrival in well P8.
Figure 8: 124Xe/132Xe ratios observed in deep wells P6, P7, and P8 following tracer injection during a recharge event in Anaheim Lake (after ref. 17). Observations suggest a <1-year travel time component to well P8, but nothing so short to wells P6 and P7.
Now, although age and travel time data for P8 were not included in the information in Figures 5-7, it should be noted that all three wells are similar in their deep penetration into the aquifer, their large open intervals, their close proximity to one another, and a mean groundwater age that is over ten years for each well. Nevertheless, the tracer test indicated a relatively fast travel pathway between Anaheim Lake and P8 that is not apparent in the other two wells. Our interpretation of this observation is that it reflects the complicated nature of the geologic system that controls groundwater flow in relatively small areas, and that "statistically" different results of this sort are reasonable to expect in such natural systems. In a broader sense, the results indicate that the current wells, or even nearby "untested" wells of similar design, have a plausible chance of having water components younger than 1 year, and that the relationship of these findings to the proposed regulations and the more fundamental concerns about pathogen transport may deserve additional consideration.
CONCLUSIONS
The problems and difficulties of providing clean and reliable water to the world's population in the 21st century remain among the most challenging tasks for the human race. In addition to the important role that the world's political, social, and economic institutions play in addressing these issues, there will always be a correspondingly strong and focused role for the world's science and technology establishments in this effort. The
need for improved communication and integration across the spectrum of institutions that deal with water cannot be overstated. In this paper we have discussed several scientific challenges that face the State of California with respect to securing reliable water supplies in the future. These concern, primarily, understanding the impacts of future climate change and climate variability, the long-term effects on water quality and ecological health produced by industry, agriculture, and various land use practices, primarily over the past century, and the technical challenges we face in developing "new" water through advances in treatment, purification, and reuse practices. As an example, we reviewed, albeit briefly, a detailed modeling and isotopic study related to a water banking operation in an urban setting in Southern California. The ultimate water quality concerns here are related to the potential introduction of viruses and other pathogens from treated wastewater into the aquifer and the ultimate viability of several proposed "surrogate" regulations designed to protect the quality of water produced from the aquifer. The model - albeit complicated - and the tracer tests proved to be powerful tools to examine the behavior of the system and provide insights related to compliance with the proposed regulations. We believe, in addition, that these techniques may continue to offer a strong scientific basis to explore more directly the fate of viruses and pathogens introduced into such systems, the concern that motivated the regulatory interest in the first place21.
ACKNOWLEDGEMENTS
This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.
REFERENCES
1. Gleick, P., ed., 1997. Water in Crisis: A Guide to the World's Fresh Water Resources, Oxford University Press.
2. California DWR, 1998. The California Water Plan Update, Bulletin 160-98, California Department of Water Resources, Sacramento, CA.
3. National Research Council, 1984. Groundwater Contamination, National Academy Press, Washington DC.
4. USEPA, 2000. 40 CFR Parts 141 and 142, National Primary Drinking Water Regulations: Ground Water Rule; Proposed Rule, United States Environmental Protection Agency, Federal Register, 65(91), May 10, 2000, 30194-30274.
5. Bittinger, M. W., and E. B. Green, 1980. You Never Miss the Water till . . . (The Ogallala Story), Water Resources Publications, Littleton, CO.
6. Dettinger, M. D., and D. R. Cayan, 1995. Large-scale atmospheric forcing of recent trends toward early snowmelt runoff in California. Journal of Climate 8:606-623.
7. Quantification Settlement Agreement, 2003. http://www.saltonsea.water.ca.gov/crqsdindex.cfm
8. US Department of the Interior, 2003. Record of Decision, Colorado River Water Delivery Agreement, Implementation Agreement, Inadvertent Overrun and Payback Policy, and Related Federal Actions, Final Environmental Impact Statement.
9. Snyder, M. A., J. L. Bell, L. C. Sloan, P. B. Duffy, and B. Govindasamy, 2002. Climate responses to a doubling of atmospheric carbon dioxide for a climatically vulnerable region, Geophysical Research Letters, 29(11).
10. National Research Council, 2001. Envisioning the Agenda for Water Resources Research in the Twenty-First Century, National Academy Press, Washington DC.
11. National Research Council, 2004. Confronting the Nation's Water Problems: The Role of Research, National Academy Press, Washington DC.
12. Heller, A., 2004. Helping Water Managers Ensure Clean and Reliable Supplies, Science and Technology Review, July/August 2004, Lawrence Livermore National Laboratory, Livermore, CA (UCRL-TR-52000-04-7/8).
13. Bourcier, W. L., M. Lin, and G. Nix, 2003. Recovery of Minerals and Metals from Geothermal Brines, Lawrence Livermore National Laboratory, Livermore, CA (UCRL-JC-153033).
14. http://calfed.ca.gov/
15. OCWD, 1991. Groundwater Management Plan, 1991 Update. Orange County Water District, Fountain Valley, CA.
16. Yanko, W. A., J. L. Jackson, F. P. Williams, A. S. Walker, and M. S. Castillo, 1999. An unexpected temporal pattern of coliphage isolation in ground waters sampled from wells at varied distance from reclaimed water recharge sites. Water Research 33:53-64.
17. Davisson, M. L., G. B. Hudson, R. Herndon, S. Niemeyer, and J. Beiriger, 1996. Report on the feasibility of using isotopes to source and age-date groundwater in Orange County Water District's Forebay Region, Orange County, California. UCRL-ID-123593, Lawrence Livermore National Laboratory, Livermore, CA.
18. Davisson, M. L., G. B. Hudson, J. B. Moran, S. Niemeyer, and R. Herndon, 1998. Isotope tracer approaches for characterizing artificial recharge and demonstrating regulatory compliance. Proceedings, Annual UC Water Reuse Conference, Monterey, CA, June 4-5, 1998, WateReuse Foundation, Alexandria, VA.
19. Williams, A. E., 1997. Stable isotope tracers: natural and anthropogenic recharge, Orange County, California. Journal of Hydrology 201:230-248.
20. Tompson, A. F. B., S. F. Carle, N. D. Rosenberg, and R. M. Maxwell, 1999. Analysis of groundwater migration from artificial recharge in a large urban aquifer: A simulation perspective. Water Resources Research 35:2981-2998.
21. Maxwell, R. M., C. W. Welty, and A. F. B. Tompson, 2003. Streamline-based simulation of virus transport resulting from long-term artificial recharge in a heterogeneous aquifer. Advances in Water Resources 26(10):1075-1096.
9. PERMANENT MONITORING PANEL MEETINGS AND REPORTS
AIDS 2004 - PRESSING FINANCIAL AND ETHICAL CHALLENGES
GUY DE THE
Pasteur Institute, Paris, France
The world AIDS situation was reviewed at the recent XVth International AIDS Conference in Bangkok, Thailand, on July 11-16. The epidemic continues to spread in Africa, with, however, some good news from Uganda. But the situation is worsening in Asia, where 2/3 of the world's population lives, often in crowded conditions favoring a high rate of transmission. As seen from the data presented by the UNAIDS secretariat, the estimated number of persons living with the HIV virus at the end of 2003 was around 40 to 46 million, 25 million of them in sub-Saharan Africa, with the highest prevalence rate being 40% in Botswana. There were more than 3 million deaths due to AIDS in 2003, and 4.8 million new HIV infections!
Asia seemed to be somewhat protected up to now; however, over the last few years the epidemic has progressed like a bush fire, but under different conditions than in Africa, where heterosexual transmission prevails. For example, in Thailand, and more recently in the Philippines, the epidemic began among IVDUs (intravenous drug users), 90% of them sharing needles, rapidly leading to a 40% infection rate. Among the IVDUs are prostitutes, in whom a prevalence rate of 0.3% in 1995 had reached 25% by 2003. The next natural step was their male clients, who then transmitted the virus to their spouses, leading rapidly to the dramatic issue of the mother-to-child transmission of HIV. In China the situation similarly involves IVDUs in the south-western provinces, and the epidemic has moved up to the north-western province of Xinjiang. In the large coastal cities the prevalence among prostitutes is poorly known, but seems to be lower than in Southeast Asia, probably due to education efforts and the customary use of condoms in the Chinese population for birth control.
But the main emphasis of the conference was the participants' plea for a better financial commitment from rich countries to the fight against HIV-AIDS. According to UNAIDS, the 2003 funding amounted to $4.8 billion, while the needs for 2005-6 are estimated at $12 billion. One of the main difficulties is to ensure that the funds are properly utilized and reach those who need them! The WHO, which put the target for 2005 at three million patients treated (50% of the estimated 6 million who need anti-retroviral therapy - ART), reported that today 440,000 patients are under anti-retroviral therapy (7% of the target)! But progress is being made, and money is not the sole problem on the long road towards controlling the epidemic. A complex treatment such as ART needs careful immuno-virological surveillance, hence the need for improving the health and hospital infrastructure, which, in turn, will require strong local political commitment, training, and international financial support. The economic impact of the AIDS epidemic is becoming a major issue for the developing countries, as it is their work force which is most severely hit.
What scientific progress has been achieved since the last conference in Barcelona? Basic research on the physiopathology of the viral infection has allowed a better grasp of the interplay between the incoming viral infection and the immune response of the host. Research on a vaccine is moving ahead, with more than 80 vaccine preparations already tested, or being tested, in clinical trials. The preparation of our colleague Franco Buonaguro is one of them. The main difficulty relates to the high genomic variability of HIV, and the need to induce both long-term humoral and cellular immunity.
Prevention should remain a top priority, but the choice of means remains controversial. Use of condoms varies enormously from country to country, reflecting the local cultural background of the ethnic groups concerned. In Africa condoms remain foreign to most, while in Asian countries they have long since been accepted. But the most controversial issue was the American attitude stressing that the best way to prevent the spread of infection was the ABC rule: abstinence, fidelity, and possibly the use of condoms! The example of the Ugandan government, with its success in lowering the prevalence rate from 20% to 6% in a few short years, was stressed by Randall Tobias, the head of the US delegation and AIDS coordinator of the Bush administration. The American government will limit its financial commitment to countries adopting the ABC program and devoting 30% of the US funds to it. A strenuous debate followed his intervention, requesting that differing opinions be respected! Vaginal microbicidal drugs are now considered a promising avenue, as their use would be under the control of women, who are the main target of HIV infection in Africa today. A few preparations are being clinically tested, but nothing is available yet.
In summary, the easiest way to combat AIDS around the world is financial support towards universal anti-retroviral therapy of patients, while preventive measures to limit the spread of the epidemic are deeply linked to cultural patterns and sexual habits. Both preventive and therapeutic approaches raise ethical questions, and we are happy to say that the World Federation of Scientists held pioneering workshops on these critical issues in 2000 and, in 2001, on the mother-to-child transmission of HIV, which can be avoided by the new antiretroviral drug Nevirapine and later by full anti-retroviral therapy. Reports of these meetings were published in Acta Paediatrica (vol. 89:1385-1386, 2000; vol. 90:1337-1339, 2001; vol. 91:241-243, 2002). In August 2003, we had a stormy workshop on the ethical issues facing preventive and therapeutic interventions in developing countries (Acta Paediatrica, vol. 93:1-4, 2004).
The developing world has great expectations concerning the social commitment of scientists to health matters, and especially concerning AIDS control, hoping that scientists will influence donor countries and international foundations to take the views of recipient populations more into account. The issues discussed this year at the Planetary Emergencies Conference were highly relevant to the AIDS and Infectious Diseases PMP.
IMPLICATIONS OF CLIMATE VARIABILITY AND CHANGE: A POLICY MAKER'S SUMMARY
WILLIAM A. SPRIGG
The University of Arizona, Tucson, USA
PREFACE
The effects of climate variability and change depend on public perceptions as much as they depend on nature. Public administrators will act to counter nature's adverse influence on social, economic or environmental systems. These acts can amplify or diminish nature's influence. Furthermore, administrators are influenced by public perception as well as their own. For example, pro-business and pro-environment positions tend to define the extremes of U.S. views on climate change and on the influence of climate variability on everyday lives. Around the world, corporate and environmental protagonists oppose one another at these extremes. This is both unfortunate and fortunate.
Political opposition is fortunate in that it creates a healthy discourse and challenge to ongoing studies. Scientific debate helps clarify what is known and unknown. Debate, played out in peer review of research papers, repeated use of published data, and replication of results, reduces uncertainty in research findings. This makes it easier to develop and implement policy. Yet, politically motivated challenges can be unfortunate because the emotions, motives, and ideologies of politics unjustly affect perceptions of climate change and, consequently, the safety and wellbeing of people everywhere.
Political views matter because they affect policy responses. A policy response favoring the other side's perception usually appears too costly. For example, to corporate executives the cost of reducing fossil fuels will hurt business. To environmentalists, not curbing fossil fuel emissions will cost much more across a wide range of social, economic and environmental systems. It is best, corporate executives will say, to make absolutely certain the draconian steps needed to counter global warming are, indeed, warranted. And environmentalists argue the evidence supporting future global warming scenarios, while imperfect, demands countermeasures be taken if calamity is to be forestalled. Both sides agree the cost of these countermeasures can be substantial. And, both sides agree that no response by policy makers is a response by policy makers.
So, policy makers ask: how real is the evidence of climate change and global warming? How certain are we of the cause? Can climate be predicted? What does global warming mean to a state, city, village or farm? If climate is changing, what can be done to reduce unwanted consequences and leverage positive ones? What can be done to change climatic trends?
HOW REAL IS THE EVIDENCE OF CLIMATE CHANGE?
Basic concepts behind "greenhouse" warming were proposed early in the 20th century and have been examined intensively since. Through basic laws of chemistry and radiation, demonstrated in the laboratory as well as in theory, carbon dioxide (CO2) gas
absorbs the radiant heat emitted from the Earth's surface. The absorbed heat is trapped, almost (but not exactly) like a greenhouse, and warms the atmosphere. Scientific measurements of atmospheric CO2, begun in 1957 on the flanks of Hawaii's Mauna Loa in the North Pacific Ocean, have shown a consistent increase in the atmospheric concentration of this "greenhouse" gas. Similar monitoring stations have been added at clean air sites that include the South Pole, American Samoa, and Point Barrow, Alaska. They all show similar trends of increasing concentrations of CO2, as well as of other "greenhouse" gases such as methane and oxides of nitrogen. Thus, common sense tells us, if nothing else is countering the effect, warming of the atmosphere might be detectable.
A serious effort began in the early 1970's, involving hundreds of scientists worldwide, to assemble and analyze weather and climate records from all over the globe. Ocean temperature records were included, which launched a most remarkable partnership across scientific disciplines and serves today as an example of how interdisciplinary fields develop and thrive. Many other technical disciplines joined in the effort. They included specialists in hydrology, chemistry, meteorology, oceanography, mathematics, statistics, computer science, data management, engineering, instrumentation, and remote sensing. Climatologists called upon historians to search for written accounts of weather that existed before modern instruments were available, and they called upon other specialists who were interpreting long-ago temperatures from tree rings, ice cores, lake sediments, coral and mollusk shell bands, and records of glacier advances and retreats. Interpreting and merging data and information from such disparate sources over centuries of evolving technology is a challenge. Now, with a hundred or more scientific, peer-reviewed papers supporting the evidence, there is no question1 about the reality of the comparatively rapid rise in the globally averaged, near-surface atmospheric temperature during the 20th century: 0.6°C. Approximately half this increase has taken place since 1970.
In the early 1970's, many scientists attributed a series of agricultural disasters and collapsed fisheries around the world to shifts in atmosphere and ocean behavior. The exact causes and linkages were not known. Some researchers began to believe that "greenhouse" warming might be at work. By the mid-1970's, policy makers began to support more research to see if industrial emissions of greenhouse gases, climate variability and, for example, collapsed fisheries and reduced crop yields were connected. This included support for much of the aforementioned studies of the climate record, as well as new observing systems, including expensive satellite-based remote sensing technology, and theoretical, computer-based modeling.
A series of scientific papers published in the late 1960's and early 1970's showed that numerical dynamical models of the whole Earth's atmosphere could be used to replicate global climate reasonably well. They could also be used to explore ocean-air interactions and the effects of increased atmospheric concentrations of CO2 on global climate. By today's standard these early experiments, using the most advanced computers of the time, would seem simplistic. Yet, they paved the way for numerical models to include the most complex understandings of weather and climate dynamics in the depiction of climate and the potential for climate change.
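For readers who want a number behind the mechanism, a minimal zero-dimensional energy-balance sketch in Python (standard textbook constants; this is our illustration, not a calculation from this paper):

# Zero-dimensional energy balance: Earth's effective emission temperature.
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0           # solar constant, W m^-2
ALBEDO = 0.30         # planetary albedo

# Absorbed sunlight averaged over the sphere balances emitted infrared:
#   S0 * (1 - ALBEDO) / 4 = SIGMA * T_eff**4
t_eff = (S0 * (1.0 - ALBEDO) / (4.0 * SIGMA)) ** 0.25
print(f"effective emission temperature: {t_eff:.0f} K ({t_eff - 273.15:.0f} C)")

The result, about 255 K, is some 33 K colder than the observed mean surface temperature of roughly 288 K; that difference is the natural greenhouse effect that added CO2 and other trace gases strengthen.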
Other major modeling groups followed, including the Max Planck Institute in Hamburg, the European Centre for Medium-Range Weather Forecasts in England, the U.S. National Center for Atmospheric Research, the
NASA Goddard Institute for Space Studies, and the Canadian Climate Centre. Their General Circulation Models (GCMs) include the fluid thermodynamics of oceans and atmosphere and the effects of the Sun, land (including urbanization and industry), fresh water, biology, snow and ice. Modeling the climate system soon demanded the highest standards for advanced computer capabilities. An even wider range of scientific disciplines joined in climate research, including classical physics, fluid dynamics, solar physics, ecology, and biology, to understand the environmental sources, sinks, and exchanges of radiatively important trace gases, energy, and water, the most important of the "greenhouse" gases. The resulting complex, numerical, dynamical models have become the greatest expression of overall knowledge of the climate system. By the mid-1990's, model experiments both supported and complemented empirical analyses of the past and present climatic record. Three decades of empirical and theoretical research, conducted and reviewed by hundreds of scientists from all over the globe, make the current analyses of the temperature record extremely credible and, for all practical purposes, unassailable.
HOW CERTAIN ARE WE OF THE CAUSE FOR EARTH'S RAPID TEMPERATURE RISE?
National climate programs appeared around the globe in the early 1980's, all collaborating under a World Climate Programme organized by the United Nations and the International Council of Scientific Unions. Through these programs, scientists and world leaders have sought to understand the reasons behind the observed changes in climate. They began with the already well developed, centuries-old scientific disciplines of physics, chemistry, biology, climatology, atmospheric science, and oceanography. One focus of research was the most obvious environmental difference between climatic regimes of today and of a century or more ago: industrial emissions of radiatively important trace gases. This, scientists believed, could explain the recent extended rise in temperature. Other research foci include possible changes in solar output, oceanic circulation, volcanic activity, and episodic outbreaks of El Nino, the aperiodic anomalous warming in the central, equatorial Pacific Ocean.
General Circulation Models (GCMs), laboratory experiments, and direct observations continue, today, to show the probable dominance of the burning of fossil fuels (the principal new source of atmospheric CO2) behind global warming. Solar variables are believed to play a role in climate other than Earth's position in orbit around the Sun, which explains most of climate change on millennial time scales, and the eleven- and twenty-two-year sunspot cycles, which, from maximum to minimum, alter the solar energy received by the Earth by less than one percent. Today, most research aims to reduce the uncertainties of how much solar energy emissions can vary and to examine the effects of charged particles in the solar wind that are intercepted by the Earth. Our understanding of these solar effects cannot yet demonstrate a major role in causing the temperature record in question. However, it is important that credible research continues in this area to reduce the uncertainties and perhaps assist in promoting countermeasures to global warming.
The world's oceans are part of the climate system. With the atmosphere, they store, transport and emit heat and carbon dioxide. But, like the Sun, the oceans are not likely to
have triggered the current temperature rise. Their role in this case may be more to modulate than to stimulate such a trend. Volcanic activity, by the emission of gas and particles high into the atmosphere, tends to cool the Earth by reflecting solar energy back to space. Similarly, dust storms inject particles into the atmosphere that reflect solar radiation, and also provide condensation nuclei that alter clouds and rainfall patterns. El Nino events cannot explain a sufficiently sustained and significant release of energy into the atmosphere to account for the temperature rise. Urbanization, deforestation, desertification, and a host of other local and regional climate-altering factors have been examined and fail to show they could produce such a trend in global temperature. Yet, all these factors, and those mentioned before, must be explained in any theory of climate change. All are part of the dynamics of climate. None, however, explain the major features of the current temperature trend as well as the observed increase of greenhouse gases.
As reported during the 30th Session of the Erice International Seminars2, over the last century the Earth's oceans and land surfaces have warmed, glaciers have retreated over most of the Earth, sea level has risen, snow and ice extent have decreased in the Northern Hemisphere, and the lower atmosphere has warmed. The preponderance of evidence, including the GCMs that bring all factors together, points to greenhouse gases as the trigger and sustaining force behind this trend. Thus, the phenomenon has been labeled "greenhouse warming."
CAN CLIMATE BE PREDICTED?
Climate is usually expressed in the statistics of weather variables -- in averages, extremes, and periodicities. We may talk of both atmosphere and ocean climates because they are so intimately linked and because they share many physical and dynamical similarities. Reliable prediction of climate is a highly prized aim. At present, over time scales of two weeks (where the limits of weather prediction are very likely to end) to a season, climate forecasters rely on statistical models. Results are notoriously poor in detail and in reliability, but in the right season and in the right region they can depict future general characteristics of temperature with some degree of skill. In widely scattered regions of the world under the influence of El Nino, statistical models are fairly reliable in forecasting precipitation or temperature characteristics seasons and even several years in advance when an El Nino or its opposite is developing. Thus, certain climate-related events offer some predictability.
On longer time scales, for example over the last century, a variety of statistical and dynamical models, including GCMs, are being developed to anticipate climatic trends. Models are first tested to see if they can replicate past and present climate characteristics, which the better ones do relatively well. Current GCMs that incorporate outlooks of greenhouse gas emissions, as well as the solar cycle and all the features mentioned earlier, probably depict continental-scale general features of temperature trends reasonably well. However, because of the inherent chaotic nature of climate and the inability to predict many significant influences on it, such as volcanic eruptions and evolving energy technology, climate models looking into the next decade or century will
probably never credibly resolve very short time periods, perhaps a season at best, or areas as small as a city.
WHAT DOES THE GLOBAL WARMING TREND MEAN TO A STATE, CITY, VILLAGE OR FARM?
Relatively accurate decade (or longer) records of temperature, precipitation, and other weather variables can be assembled for virtually any inhabited place on Earth. But signs of global warming are not seen everywhere all the time. In some cases, temperature records show a cooling trend or no trend at all. In other places the effects of warmer temperatures are obvious, for example where glaciers are melting or where sea level is rising. In some places the effects of warming may be subtle, such as mosquitoes venturing beyond their normal habitat and their biting frequency increasing.
Continued global warming will affect everyone, some more directly than others. If an island nation fears being lost to the rising sea, it is a concern for humanity. So, too, will we regret loss of biodiversity. If a trading partner loses crops or offers inferior products due to less rain or to pest infestations made possible by a longer warm season, market prices rise, people lose jobs, and poverty drives migration and illegal border crossings. If an outbreak of infectious disease occurs overseas because the disease vector has a more hospitable environment, international trade, tourism, and economies are hurt, and other nations must guard against further spread of the disease. An almost infinite variety of problems may be triggered by local warming conditions that are statistically part of the global warming trend.
An Intergovernmental Panel on Climate Change (IPCC) was formed in 1988. In several reincarnations, involving a changing expert membership, the IPCC assesses the state of knowledge of climate, the potential impacts of climate change, and essential information for policy makers. Its aims are to reach decision makers on farms, in factories, on corporate boards, and in provincial and state capitals. In practice, the IPCC translates esoteric science and generalities of climate change into something relevant for scientists in many disciplines and decision makers at all levels. Upon release of the IPCC reports, they are announced and summarized by the world press. The actual IPCC reports, published in several volumes, may be obtained by contacting the Cambridge University Press, the IPCC Secretariat, the World Meteorological Organization, or the meteorological agencies of most nations3.
The World Climate Programme (WCP) and various national programs have created a new generation of interdisciplinary scientists who understand the long-distance connectivity of our environment. For example, dust storms that strip potentially productive soil from farms in China or Africa are believed to alter fishing grounds in the western Pacific Ocean, respiratory health in Korea and Japan, hurricane formation in the equatorial Atlantic Ocean, coral growth in the Caribbean Sea, and the energy budget of the Earth. The phenomenon of El Nino in the equatorial Pacific Ocean is believed to occur more frequently under conditions of global warming. And El Nino events affect rainfall and temperature, and the potential for vector-borne disease, wildfires, floods, and drought, from Mexico to Florida to Cuba and across the Atlantic Ocean to Africa.
The United Kingdom4, the United States5 and a few other nations have launched comprehensive assessments of the potential implications of climate variability and
change for their own circumstances. Their conclusions demonstrate that, if climate change or global warming is not considered, the economic, environmental, and social consequences could be substantial. These same studies show, however, that actions can be taken, even under conditions of considerable uncertainty, to lower the odds of undesirable impacts and raise the odds of profiting from global warming and climate change. This is prompting, for example, the U.S. Senate to call for more such studies and response strategies.
IF CLIMATE IS CHANGING, WHAT CAN BE DONE TO REDUCE UNWANTED CONSEQUENCES AND LEVERAGE POSITIVE ONES?
The WCP provides volumes of data, analyses and knowledge of climate and climate processes for everyone's access. However, not everyone is aware of this information or is equipped to use it. For these cases, the World Federation of Scientists (WFS) can assist. The WFS has, for example: provided life-saving and agriculture-building information about weather and climate monitoring stations to the government of Senegal; taught climate and weather analysis techniques to dust storm forecasters from Syria and Lebanon; informed university students in China as to where extensive, updated information about climate variability and change may be obtained; developed strategies in arid lands for reducing wind erosion of soil under conditions of climate change; informed African countries of opportunities to profit by sequestering carbon while improving crop yields; and each year evaluated the status of climate and the adequacy of research on greenhouse warming and stratospheric ozone, and provided policy guidance where warranted6.
In the latter case, for example, the WFS issued a statement urging governments to support open access to data for research purposes7. This statement responded to private sector and some government efforts to influence the World Intellectual Property Organization to endorse expansion of copyright rules on all data. Intergovernmental actions restricting data access to only those who could afford it, or who could ensure themselves legal protection, would have violated scientific principles and severely handicapped not only climate researchers, but also anyone working on global environmental issues.
All nations should examine the potential consequences of climate variability and change for their own set of circumstances. These studies should involve, from the outset, those who have an economic, social, or environmental stake either in the potential for climate to affect their interests or in the outcome of the study itself. Following such an assessment, stakeholders, technical experts and policy makers should prepare mitigation and adaptation strategies for particularly vulnerable interests, along with action plans to leverage the positive aspects of climate change. These strategies should include means to monitor climatic trends, international policy responses, and research that exposes new vulnerabilities and adaptation strategies. National and provincial governments, the business sector, and non-governmental organizations should monitor these efforts and supplement them when appropriate.
WHAT CAN BE DONE TO CHANGE CLIMATIC TRENDS?
Having reached international accord through the "Montreal Protocol" to ban chemicals that were destroying the Earth's protective layer of stratospheric ozone, world leaders sought to establish goals for reducing greenhouse gases. An international agreement to reduce emissions of CO2 was drafted in Kyoto, Japan, but was met with resistance from some governments, notably the United States. Scientific evidence about the rise in global temperature and arguments derived from laboratory experiments and computer modeling about the probable role of the burning of fossil fuels were not enough to overcome conservative perceptions in the U.S., mainly that (a) economic impacts to U.S. business would be too costly if the Kyoto goals were to be achieved and (b) the goals of the Kyoto agreement were unattainable and unjustified given the range of uncertainty in climate models.
The degree to which the steps proposed in the Kyoto agreement would influence global warming is debated. Indeed, the GCMs do have problems with low spatial resolution, depiction of clouds, detail in the carbon budget, predictions of the global economy, forecasts of future fuel use, forecasts of technology breakthroughs, and other issues described earlier. But the scientific community as a whole agrees that the models get the overall features of cause and effect correct. The problem with not taking action to reduce fossil fuel consumption, i.e. maintaining the status quo, is that no other cause or combination of causes of global warming has been identified that would produce the current trend. No other means to counter the warming trend has been proposed that would be as effective as reducing fossil fuel consumption - not dispersing chaff high in the atmosphere to reflect solar energy, not blocking ocean currents, not seeding clouds, and not, although it will help, sequestering carbon, planting trees, building urban parks, halting urban sprawl, practicing energy conservation in construction and design, and reducing emissions of sulfur, methane, nitrogen and other radiatively important pollutants.
One argument says that, in spite of uncertainties, the costs of absorbing the effects of global warming will exceed the costs of reducing fossil fuel use. The aforementioned assessments of potential consequences of climate change begin to offer hard evidence of this. Other compelling benefits that should be factored in include being a good neighbor: acting on concerns for vulnerable island nations (and others, e.g. Bangladesh) and for people more likely under warming scenarios to experience drought, flood, famine and disease. Being less altruistic and more pragmatic, cutting back fossil fuel consumption will reduce harmful air pollution. It will accelerate the time when renewable forms of energy will take the place of already dwindling oil reserves and when viable consumer options for energy sources will multiply.
CONCLUSIONS
All of Earth's biological, social, and economic systems have adapted to a range of climates. When extremes of these ranges are exceeded, or even approached, disasters occur. Coastal erosion, loss of biodiversity, forest fire, heat stroke, vector-borne disease, flood, drought, famine, and illegal border crossing can all be triggered by climate variability and change.
316 Earth’s climate is warming at a rate unprecedented in the history of humankind. The likely cause is a buildup of atmospheric “greenhouse gases,” the major part of which is carbon dioxide, and most probably from burning of fossil fuels. The social, economic and environmental implications of thls warming are under study, but preliminary findings show these to be substantial. These studies urge international action to curb fossil fuel use. Research findings also urge more intensive examination of local and regional implications of climate change, particularly in developing countries, where such studies are virtually nonexistent. Means to cope with climate variability and change, indeed, to leverage and to take advantage of climate, will come from these studies. These findings are backed by hundreds of peer-reviewed technical papers, produced by hundreds of international researchers laboring over several decades of well-funded modem science. Researchers have examined climate data, trends, processes, the potential for climate change, and the implications of change for social, economic and environmental systems. Results of these studies are available to all who frame policies and make decisions where the implications of climate may be a factor. The World Federation of Scientists monitors these studies, linking research institutions worldwide, and recommending policy interventions when needed. ACKNOWLEDGEMENTS This paper draws upon the work of many other scientists and institutions too numerous to mention. Much has been gleaned from voluminous reports of the WFS, the IPCC, the WCP, and the UK and US impact assessments. And, without the support of the Italian government, the forum for scientific debate and interdisciplinary examination of ideas expressed in this paper would not have been possible. REFERENCES One will always find some debate in an active area of science. But, it is a consensus of the vast majority of experts in the field of atmospheric temperature analyses (see, e.g. the IPCC reports), that the accumulation of direct and proxy measurements of this reported trend in near surface temperature is incontrovertible. Santer, B.D. and T.M.L. Wigley, 2003. “New Fingerprints of Human Effects on Climate,” 69-75. In: The Science and Culture Series, International Seminar on Nuclear War and Planetary Emergencies;30thSession. “E. Majorana” Centre for Scientific Culture, Erice, Italy, 18-26 August 2003. 571 pp. See, e.g., (a) Houghton, J.T., G.J. Jenkens, and J.J. Ephrams, 1990: Climate Change. The IPCC ScientiJicAssessment. Cambridge University Press, Cambridge, U.K., 365 pp. and (b) Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson, 2001: Climate Change. The Scientzjk Basis. Cambridge University Press, cambridge, U.K., 572 pp. The United Kingdom Climate Impacts Programme. See, e.g., (a) National Assessment Synthesis Team, 2000: Overview: Climate Change Impacts on the United States; the Potential Consequences of Climate Variability and Change. Cambridge University Press, Cambridge, U.K., 154 pp. and (b) Sprigg, W.A. and T. Hinkley (Chairs), 2000: Preparing for a Changing
Climate: The Potential Consequences of Climate Variability and Change; Southwest. University of Arizona, Tucson, 60 pp.
6. See, e.g., the five papers surveying climate published (47-101) in: The Science and Culture Series, International Seminar on Nuclear War and Planetary Emergencies; 30th Session. "E. Majorana" Centre for Scientific Culture, Erice, Italy, 18-26 August 2003; 571 pp.
7. See the WFS website.
PMP AND WORKSHOP REPORT FOR COSMIC OBJECTS
W. F. HUEBNER Southwest Research Institute, San Antonio, USA
J. Mayo Greenberg presented the last Permanent Monitoring Panel (PMP) report on Cosmic Objects at the International Seminars in 2000. We held a workshop on cosmic objects at Erice, 16-25 June 2001, in which Greenberg participated part of the time in spite of being very sick. The last PMP report published, but not presented at the seminars, was by Greenberg and Huebner (2002). There has been no activity on cosmic objects at the International Seminars since Greenberg died. We plan to rejuvenate the activities on cosmic objects, carry forward the plans and concepts established under his leadership, and broaden and expand our activities in new directions as outlined in the sections below. Our aim will be to provide a forum for international cooperation by inviting guest participants active in the defense of Earth against impacts by asteroids and comets.
ONGOING ACTIVITIES OF THE PMP
We have started, and continue to implement, the database of geophysical and geological properties of near-Earth objects (NEOs) recommended by the last workshop (Greenberg and Huebner, 2002). Work on the database continues. It can be accessed at http://neodata.space.swri.edu. It contains four major components: 1. Observational Data. 2. Material Properties Data. 3. Instruments and Mission Development. 4. Dissemination and Public Outreach. We also participated in the Workshop on Mitigation of Hazardous Comets and Asteroids, Arlington, Virginia, 2002. The proceedings of the workshop are in press and will be published by Cambridge University Press in October 2004.
NEW ACTIVITIES OF THE PMP
Just prior to this seminar, we held a PMP meeting at Erice. Present were Alberto Cellino, Clark R. Chapman, Raymond Goldstein, Mario Di Martino, Ali Safaeinili, Russell L. Schweickart, and Donald K. Yeomans. Discussions centered on the following topics: 1. Extension of the NEO search to objects of smaller sizes (effective diameter down to about 140 m). 2. Missions to NEOs to determine their geophysical and geological properties. 3. Missions to test nudging an object in its orbit, e.g., the B612 Foundation's objectives. 4. How to establish a plan for reporting an imminent impact by a potentially hazardous object (PHO) to yet-to-be-identified national and international authorities, and for recommending a course of action to be taken in such an event. 5. Our proposal, in response to a request for input by the Inter-Agency Task Force (IATF) of the International Strategy for Disaster Reduction (ISDR), for a workshop.
NASA is now considering extension of the NEO search to smaller objects. ESA, JAXA, and NASA are pursuing calls for proposals for exploratory missions to asteroids and comets. Schweickart (see also Schweickart et al., 2003) presented plans by the B612 Foundation to conduct a test to move an asteroid in its orbit. Item (4) is an issue of major importance. Extensive discussions were held in a workshop following the seminars (see below).
ISDR-IATF Proposal
Closely related to the new activities that we are considering is the proposal to the ISDR-IATF to establish a working group on disaster reduction for Earth impacts by asteroids and comets:
1. Provide guidance on disaster reduction to international, national, and regional authorities.
2. Provide guidance on regional preparations and recommendations for strategies and plans of action in case a PHO is identified.
3. Enhance collaboration and coordination on an international scale among different national and regional organizations on research and technology development for disaster mitigation of PHOs.
4. Facilitate the creation of a body of knowledge for disaster risk reduction for PHOs (Know Your Enemy).
We suggested the following activities for the ISDR-IATF working group: The first activity of the Working Group should be to facilitate strategies and plans of action for the event that a PHO is identified, and to issue a report to the World Conference on Disaster Reduction (WCDR). Subsequently, regional meetings should be planned starting in 2005. These meetings should develop a draft plan of action for the expansion of search programs for NEOs, research on the geophysical and geological properties of NEOs, deflection or destruction of a PHO, and dissemination of information about NEOs. The Working Group should also follow and support initiatives to develop a disaster risk reduction strategy. Important issues to be considered include:
1. Ongoing NEO surveys in the USA, Europe, Japan, China, and Australia continue to yield a rich harvest of scientific results (discovery of new classes of objects, improved statistical studies of orbits, etc.), and must be continued and expanded to smaller objects.
2. Of particular importance is the identification of targets for space missions, radar, and other means of detailed physical study that these surveys provide.
3. To capitalize effectively on these new discoveries, follow-up programs for physical observations of important NEOs need to be invigorated. A most important step is to assign priority and adequate telescope observing time for the study of these objects.
4. A central clearinghouse, with free public access to all data collected, is an essential component of NEO survey programs.
Active participation of our group was demonstrated with presentations at the multidisciplinary seminars:
Clark R. Chapman: Recent Perspectives on the Hazard of an Asteroid Impact.
Donald K. Yeomans: Recent Close Approaches of Asteroids and Comets.
Russell L. Schweickart: Asteroid Deflections: Hopes and Fears.
Alan W. Harris: Recommendations for an ESA Initiative to Further our Understanding of the Near-Earth Impact Hazard.
Hajime Yano: Hayabusa and its Follow-up Plan by JAXA.
NEW DIRECTIONS - THE WORKSHOP
In the workshop, using a roundtable format, we examined and discussed several space mission concepts and action plans.
Space Mission Concepts
Alan W. Harris (DLR) reviewed recommendations for an ESA initiative to further our understanding of the NEO impact hazard. He presented ESA's Don Quijote mission study. It is a two-spacecraft mission consisting of Sancho, a science spacecraft, and Hidalgo, an impactor spacecraft. The mission would allow a detailed determination of the interior structure of an asteroid, its mechanical properties, and a measurement of its response to an impact. The goal is to measure size, shape, bulk density, large-scale mineralogical composition, mass distribution, internal structure using seismology, the ratio of the moments of inertia, thickness of the regolith, and other regolith properties. These data provide crucial information for the further development of mitigation strategies, including numerical modeling. Raymond Goldstein (SwRI) presented plans for experiments to determine the interior structure of asteroids using micro-electro-mechanical systems (MEMS) seismometers. Knowledge of the interior structure and the strength of materials is needed for various types of Earth collision avoidance measures. Seismology experiments are most successful if many seismometers are placed over a wide area on the surface. Seismometers must be firmly anchored to make good mechanical contact with the asteroid. This is a challenging problem because of the extremely weak gravity on asteroids. Ali Safaeinili (JPL) presented plans for the Deep Interior mission proposed to NASA to investigate the internal structure of an asteroid using radio (radar) reflection tomography. A two-frequency radar would be used to obtain volumetric maps of the asteroid. Radiation from a 5 MHz channel can penetrate rock to depths of about 1 km, while that of a 15 MHz channel can penetrate to a depth of more than 100 m. Interior structure would be related to surface structures such as craters and fractures. Alberto Cellino and Mario Di Martino, both from the Torino Astronomical Observatory, presented plans for the Near-Earth Objects Radiometric Observatory (NERO) and detections of transient phenomena on planetary bodies, respectively. Transient phenomena are defined as luminous events of different intensities lasting for milliseconds to hours. The most relevant events for the workshop are meteors and bolides. Bodies in the mass range of 100 to 100,000 tons (about 10 to 100 m) are the most important part of this influx. Donald K. Yeomans (JPL) gave a status report on NEO searches and presented plans for future NEO searches. About 75% of the approximately 1100 near-Earth asteroids larger than 1 km in diameter have been discovered. It is anticipated that the search for about 90% of these asteroids will be completed by 2008. He reiterated and emphasized the need for government policy makers to establish a chain of responsibility for action in the event that a threat to Earth becomes known. He pointed out that the Organisation for Economic Co-operation and Development (OECD) Global Science Forum Workshop recommended that OECD countries establish a national policy and a responsible national agency to study NEO issues.
Hajime Yano (JAXA) presented the Hayabusa mission, a mission to an NEO to determine its properties. He also explained the follow-up plan by the Japanese space agency. Russell L. Schweickart discussed gently nudging an asteroid as a test. He pointed out that unknown physical properties of an NEO can lead to unpredictable deflection behavior. For details of the two discussions, please see their presentations in the multidisciplinary seminars section.
Action Plans
Some of the specifics for action plans include (1) scientific and technical issues and (2) political and social issues. Under the scientific and technical issues, we identified the need to develop options for procedures and techniques for Earth collision avoidance. Depending on the object's size and the warning time before potential impact, they include:
1. Evacuation of the potential impact area.
2. Gently pushing the object out of its orbit.
3. A more energetic push on a larger object with reduced warning time.
4. Very energetic measures for short warning times.
All of these methods need to be developed and tested. Under the political and social issues, we identified the need for a risk reduction workshop and for broadening the international base of participation in NEO searches, follow-up investigations, geophysical and geological investigations, and technology development and testing for impact avoidance. To accomplish this, workshops should be conducted at other international meetings, and an inter-agency working group should be set up to coordinate missions to asteroids and comets, target selection (e.g., relating interiors to surface appearance), mission concepts, and instrumentation. Hajime Yano presented a possible framework for international coordination of NEO science, technology, and political issues. He proposed formation of an international impact hazard coordination group, formed by national representatives of the participating countries. It would consist of subgroups to coordinate observations of NEOs, physical characterization including geophysical and geological properties, impact avoidance procedures and technologies, databases and models, policy and law, and public outreach including dissemination of information. Extensive discussions focused on:
1. The need for an international treaty defining the responsibilities of countries and international institutions central to the issues of the NEO impact hazard.
2. Identification, investigation, and coordination of international policy issues concerning the planning and execution of impact avoidance, impact site evacuation, and other related mitigation and risk-reduction issues.
3. Policies and practices for the provision of NEO impact and response information to the public and decision makers.
4. The need for an international facilitator for the development of policies and actions supporting a coordinated international response to NEO threats.
5. The need to monitor and recommend to international entities measures and programs to enhance the rate of discovery and physical characterization of NEOs.
The workshop concluded with a summary report on the AIAA Planetary Defense Conference by Clark R. Chapman (SwRI). Details about the conference can be found at: http://www.planetarydefense.info/.
REFERENCES
1. Greenberg, J. M., Huebner, W. F., 'Summary of the workshop on geophysical and geological properties of NEOs: "Know your enemy"', in International Seminars on Nuclear War and Planetary Emergencies, 26th Session, Ragaini, R. (ed.), pp. 419-432, 2002.
2. Schweickart, R. L., Lu, E. T., Hut, P., Chapman, C. R., 'The Asteroid Tugboat', Scientific American, November 2003.
2004 REPORT OF THE ENERGY PERMANENT MONITORING PANEL
Bruce Stram, PMP Coordinator, BST Ventures, Houston, USA
The Energy PMP set its agenda for 2004 to review various reports by Panel members and agreed on a follow-up focus on energy issues as they relate to economic development and general welfare in rural areas of developing countries. Jef Ongena gave the PMP a report on the progress of fusion energy in general and JET in particular. His report is summarized as follows:
1. INTRODUCTION: In discussions on the world energy future, all options should be discussed together with their advantages and disadvantages. Fusion is an option with a large potential, and should therefore not be forgotten, even if it is long term. At our last general meeting, Professor Zichichi criticised the world fusion community by stating that too much money has already been spent on fusion. At that time, Professor Palumbo's immediate response was a quite reasonable answer, but for those who are not so strongly involved in fusion, it is perhaps difficult to understand and to judge whether the answer is correct or not. In order to avoid misunderstandings about the status of fusion research, we would like to inform the members of the PMP-Energy.
2. ITER PROJECT: To make further progress in fusion, the ITER project is a necessity. Therefore, a most important concern is how to promote the ITER project. The object of ITER is to make further progress in fusion, after the success of TFTR, JET and JT-60U, with the final aim of preparing the definition of a future fusion reactor. The construction cost of the project is high ($400 million per year, for a period of 10 years), although one has to put this into perspective. As fusion plants are intended to produce electricity, it seems fair to compare the investments in fusion with the costs of electricity at this moment. The total cost of electricity in Europe is about 200 billion euro per year. The investment in fusion in the EU is about 400 million a year, i.e. 0.002 (0.2%) of this sum - a very modest investment, then, as an insurance policy for future (clean) electricity. Note that Europe has for years spent about 50% of the world costs in fusion; Japan about 25% and the US about 25%. ITER, a large tokamak, is an experimental fusion reactor. However, the initial objective is relatively modest, namely, to realize a burning plasma experiment of long duration (500-1000 s) with D+T fuels, as the next step after the three large tokamaks, JET, JT-60U and TFTR. In current discussions, ITER will be complemented with a materials research facility, IFMIF. With the current funding, a commercial fusion reactor will be a reality in about 50 years, if all goes well. However, there is an increasing demand (from the UK and other governments) for a "fast track" to fusion science, which would include more funding and materials research (IFMIF) in parallel with ITER operation, and which would allow (according to these sources) a fusion reactor to be realized in 25 years. In this sense, it is natural that Prof. Wilson wrote "A LONG SHOT: FUSION" on his agenda for the ERICE SYMPOSIUM. Burning plasma is a very turbulent medium, and the study of such plasmas is closely related to the so-called "self-organization of plasmas".
The scaling of such plasmas to a reactor size is very difficult and it is necessary to proceed cautiously. A prudent next step is a device like ITER, a point that Prof. Palumbo clearly explained at the previous meeting. Note, however, that the step from the smaller tokamaks (early 1970s) to JET was much larger than the step from TFTR, JET and JT-60U to ITER. It is also important to realize that, as Prof. Palumbo underlined, in fusion research the device itself is part of the experiment! This is entirely different from the situation in particle physics. The accelerator is an instrument for the study of the physics of sub-nuclear matter, designed according to well-established physics principles, which can be run by a set of operators in a routine fashion. This is not true in fusion research. Every experiment needs careful programming of the whole machine! The plasma shaping, timing and amount of heating are heavily dependent on the experiment, and are far from being a routine operation. Moreover, the structure of the machine depends on the physics. A long-burn fusion device cannot be operated without a blanket, because of the large 14 MeV neutron fluxes, and ITER will consequently have one sector with a Li blanket, to study the neutron absorption and the possibilities of breeding T from Li.
3. Wording or terminology of EXPERIMENTAL REACTOR: Because ITER is called "International Thermonuclear Experimental Reactor", it can give the impression that the realization of a fusion reactor is near. As already discussed above, there is still a long way to go, as we have to demonstrate long burning plasma pulses. (Note that long-pulse D-D plasmas at realistic parameter values have recently been demonstrated on Tore Supra, in France: pulses of 6 min 18 sec at 1.5x10^19 m^-3, 0.5-0.7 MA, 3.4 T, 3 MW.) The fact that many years of study and experimentation are still needed implies that physicists must and will play an important role in at least the early stage of ITER operations, and should play a role in the decision on the site.
4. Recent developments in the siting for ITER: To unlock the stalemate on the siting for ITER (3 partners prefer the EU, 3 partners prefer Japan), on 24 September the EU Council of Ministers encouraged the Commission to actively pursue international support in order to be able to reach a final agreement on the site at the Council of Ministers on 25/26 November 2004.
5. Need for other approaches: It may seem strange for scientists and engineers outside the fusion community to understand why fusion researchers are investigating other approaches to a fusion reactor, such as LHD, W7-X and NSTX, in parallel to the study of ITER. LHD and W7-X are based on the stellarator concept, with superconducting coils to confine the plasma. The lessons learned from the construction of the (large) superconducting coils on W7-X and LHD are directly beneficial for ITER. The fact that these alternative approaches are being studied shows clearly that the physics of a burning plasma is not yet sufficiently developed to define a fusion reactor in detail. The reason why the tokamak was chosen as the first configuration to test long-pulse burning plasmas is that up to now the tokamak has delivered the most usable plasmas. The other devices are about 2 generations behind the tokamak development. If later
it becomes clear that another configuration is better suited to a fusion reactor, then the lessons learned with the ITER tokamak are not lost.
6. In fusion research we urgently need practical experience with plasmas in which heating is dominated by the fast alpha particles from fusion reactions. Only JET has allowed a very preliminary first study of such plasmas, in October 1997. Another important item to study, in order to realize fusion reactors that are more reliable and compatible with environmental requirements, is fusion material research. This was already mentioned by the "Blue Ribbon Panel" in early 1970, and it is now ongoing as the IFMIF project (International Fusion Material Irradiation Facility). We have to accelerate this project in parallel with ITER.
7. Future Reactor Aspects: Future fusion reactors must have properties that are economically and environmentally acceptable. At the moment, the reactor engineering of ITER is almost equal to the device engineering, but in the future it will have to be done under well-defined conditions of economic feasibility and environmental acceptability.
8. On the fusion fuels: Estimates of recoverable Li are equivalent to 3000 years of fusion energy at the current rate of world electricity consumption. If later, with optimised reactors, we could use only D as a fuel (for the D+D reaction), then we would have a practically unlimited source for more than 150 billion years. But this type of reaction requires even higher temperatures than the D+T reaction.
CONCLUSIONS
Fusion is a potential source of energy with ideal properties: a sufficient quantity of resources and a large environmental potential. Although long-term research and development efforts are necessary, it is one of the big "hopes" of humankind. Is there any "hope" beyond fusion energy? A great deal of scientific effort is certainly still needed over the long term, but I believe we have to continue the work for future generations.
Carmen Difiglio reported on developments regarding CO2 sequestration. This is an evolving technique directed toward mitigating CO2 emissions and is therefore considered a possible antidote to global warming concerns. The PMP agreed that this will be a major focus in the coming year, leading to a joint two-day meeting with the committee of the World Energy Congress in Sicily in August 2005. A summary: Carmen Difiglio addressed the following question using a newly developed energy model from the International Energy Agency: what is the additional cost to stabilize greenhouse gas emissions at "safe" levels (loosely defined) without using nuclear power or carbon capture and storage? Difiglio's answer was that, by 2050, annual costs would be about $800 billion per year higher. Average CO2 abatement costs would increase from around $20-25 per ton of CO2 to $40-50/ton. Marginal abatement costs would increase from around $40/ton to almost $90/ton. The results showed that the economic losses would be substantially smaller if at least one of these technologies (nuclear or carbon capture and storage) were available. Difiglio concluded that while energy efficiency and renewable energy technologies will be critically important to achieving stabilization, it is not realistic to expect them to achieve the desired result by themselves.
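To make the distinction between the average and marginal abatement costs quoted above concrete, the following minimal Python sketch may help. It is not taken from Difiglio's model; the linear marginal-cost curve and the parameter values a, b and Q are purely illustrative assumptions, chosen only so that the output lands in the same ballpark as the figures quoted in the summary.

    # Illustrative only: assume marginal abatement cost rises linearly with the
    # quantity abated, MC(q) = a + b*q, in dollars per ton of CO2.
    a, b = 10.0, 0.004      # assumed intercept ($/t) and slope ($/t per Mt abated)
    Q = 20000.0             # assumed total abatement (Mt CO2 per year)

    marginal_cost = a + b * Q                 # cost of the last ton abated
    total_cost = a * Q + 0.5 * b * Q * Q      # area under the MC curve from 0 to Q
    average_cost = total_cost / Q             # cost per ton averaged over all abatement

    print(f"marginal cost: ${marginal_cost:.0f}/t, average cost: ${average_cost:.0f}/t")
    # With a rising marginal-cost curve the average is always below the marginal
    # cost, which is why an average of roughly $40-50/t can coexist with a
    # marginal cost of almost $90/t.

Run as written, the sketch prints a marginal cost of $90/t and an average of $50/t, illustrating why the two measures quoted in the report differ by roughly a factor of two.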
PMP Chairman Richard Wilson reported on the problematic history of nuclear power in America, and the substantial efforts made in the last 10 years to address those issues. Dr. Wilson concludes that utilization of nuclear power should be very seriously reconsidered. A summary: If we accept that it is necessary to reduce the CO2 concentrations in the air to prevent or cope with global warming, and assume, as seems reasonable, that these concentrations are related to carbon burning as a result of fossil fuel consumption, we must either reduce this fossil fuel consumption or sequester the carbon dioxide for a long period. In the first approach we may either drastically reduce the use of fuels, or of energy generally, in man's activities (which may be done either by restraint or by more efficient use), or switch from fossil fuels to other fuels. There are no reasonable projections that efficient use and renewables can make any appreciable dent in the near future. Numerous experts, of whom I quote only two, agree that CO2 production will increase in the next 25 years. The Energy Information Administration projects a 30% or so increase in energy use, even though a further reduction in energy intensity of 2/3 is anticipated in developed countries, as shown in a table from the International Energy Office. OPEC is also projecting a similar increase in oil consumption. This, of course, depends upon cost, but prices up to $40 per barrel seem not to affect demand appreciably. The options for other fuels include nuclear fusion, nuclear fission, and various "renewable" resources: hydro, wind, direct solar conversion. Thus the options are:
1. Restraint in any energy use
2. Efficiency in energy use
3. Sequestering carbon
4. Switch to nuclear fission
5. Switch to nuclear fusion
6. Switch to hydropower
7. Switch to wind power
8. Switch to other "renewable" resources.
Bruce Stram repeated his observation that despite widespread concerns over global warming, pollution, and the geopolitical consequences of current energy supplies, expenditures for energy research and development are substantially less than during past decades. He called for the WFS to address energy R&D expenditure issues. Two presenters focused on issues related to the lack of energy, primarily electricity, in rural villages in developing countries. First, Professor Abul Barkat of the University of Dhaka presented the results of his excellent study quantifying both the economic and general welfare impacts of rural electrification on villages in Bangladesh. His study strongly supported the PMP assumption that such effects were substantial, and provided much greater specificity as to the effects. A summary: The purpose of this paper is to present an analysis of the impact of electricity on poverty reduction in rural Bangladesh. This impact has been analyzed using comparisons between 'with' and 'without' electricity situations. Retrospective information has been collected to understand inter-temporal changes in the asset situation of various economic categories. It has been argued that access to electricity (at household and outside household) reduces both economic poverty and human poverty (in education and health). On many dimensions, the poor electrified households were found to be better off than even the rich in the non-electrified villages. The spill-over
effect of electricity on the non-electrified households of electrified villages is very pronounced. Electricity has a profound impact on human capital formation through knowledge building mediated through electricity-driven media exposure. It is recommended that, in order to accelerate the process of economic development, strengthen the pro-poor orientation of the growth process and further human development in Bangladesh, access to electricity for households and for social and economic institutions should be expanded. Rural electrification should be viewed as one of the key strategies for national poverty reduction. In addition, Dr. Arun Kumar from New Delhi reported on efforts in India undertaken by his organization to provide the means to establish an indigenous capability to achieve a level of electrification off the central grid. A summary: Of the half million villages in India, about 100,000 more remain to be electrified. In actual practice, even the majority of the electrified villages do not have reliable, adequate or good quality power. No commercial investments in micro enterprises can, therefore, be made by either individuals or companies without installing captive diesel generators, which have a very high generating cost, create adverse environmental conditions and are vulnerable to rising energy costs in the global market. It has become painfully evident during the last decade of liberalization that it is well-nigh impossible to mobilize the enormous amounts of capital required for large power stations to supply fossil-fuel-based electricity within a foreseeable period to every Indian, to every large and medium industry, to new rural micro enterprises, to the agricultural sector and for rural public services. Modern, mature renewable energy systems are, on the other hand, available commercially and can supply reliable and affordable power and energy services at prices which are competitive with non-subsidized conventional fossil-fuel-based grid supplies or even captive generation. These include Biomass Gasification Technology, Bio-Diesel Generation and Micro-Hydel systems ranging in capacity from 50 kW to 2 MW. Many more technological solutions for local value addition through small-scale production processes are available today to process local resources. These include food processing, minor mineral processing, agri-processing, building material manufacture and craft products. There is an emerging demand for new services such as ICT, computer education, cold storage, refrigeration and warehousing. These diverse micro enterprises can operate profitably in villages if reliable electrical power is available. Local value addition, increased farm productivity and export of traditional and new products and services will promote faster economic growth and create local employment. One successful example is the supply of village-processed bio-diesel based on crops such as Jatropha and Castor. The liberalized economic regime and the political framework of village institutions will greatly assist in the creation and operation of decentralized energy systems. DESI Power and its partners are convinced that electrification alone will neither make the electrical supply profitable nor promote the economic and social development of remote villages in India. Self-sustained growth can only take place if the rural electrification programme is linked to village micro enterprises for local value addition and employment generation.
Power generation based on local renewable energy resources can provide a reliable and affordable electricity supply while making the operations financially viable and attractive for investment.
DESI Power would like to propose to the scientific community that it accelerate its efforts to bring innovations into the public domain. Several innovations that can dramatically improve the quality of basic energy services for the masses worldwide include the White LED devices for high-efficiency lighting, the Plant Oil Stove for smoke-free cooking and the multi-fuel Stirling Engine for simultaneously providing energy and safe drinking water. It is further suggested that financial mechanisms be put into place for providing a finance package through leading financial institutions that are presently engaged only in the financing of large projects. The financial package should include an element of financial support, in the form of low-return equity or an interest subsidy, for small-scale projects based on renewable energy that have the ability to seek investments from the community. Finally, pursuant to these presentations and internal discussion, the PMP agreed to the following resolutions:
PREAMBLE 1
Producing electricity for developing countries and developing areas was a major topic of the Energy PMP at its 2004 session. The PMP addressed the problem of rural areas where a grid connection is presently nonexistent. There is very strong evidence worldwide that rural access to electricity is a powerful tool for achieving substantial improvement in welfare for the poor. By emphasizing access to electricity in rural areas, rural life can be made more attractive. Migration to cities can be reduced, resulting in less overcrowding and less abandonment of rural land with its consequent despoiling. In recent years, multinational and national development lenders, with commercially oriented policies, have strongly supported large-scale power projects in developing countries. These have been quite effective in helping provide electricity to cities and wealthy rural areas. There are now micro-scale energy and energy-services systems capable of delivering power economically at the village level, using indigenous and ubiquitous resources. However, lending practices developed for large-scale projects have not accommodated micro projects.
Recommendation 1
Therefore, we recommend that multinational and national agencies develop the capability of efficiently funding micro projects. This might be done through private intermediaries such as the Grameen Bank. Long-term loans should be on a par with loans for large-scale projects, but with practices and conditions appropriate for very small-scale investments. Further, it is recommended that an element of subsidy support initially be linked to such investments. This support could take the form of low-return equity or an interest subsidy that is part of the financial package. In this, national or international equity support should be forthcoming to match any community or other local investment.
PREAMBLE 2
Governments have fostered the promotion of energy efficiency and the deployment of renewable energy technologies as steps toward addressing global climate change concerns. These efforts have been supported both by subsidization of technology deployment and by elimination of marketplace barriers that deter adoption of such technologies. The Energy PMP has concluded that, while critically important, these steps alone are unlikely to achieve stabilization of greenhouse gas emissions at safe levels while also allowing world economic growth. The group is convinced that all non-carbon energy technologies are necessary to achieve these goals, and that nuclear energy and carbon sequestration (and possibly other alternatives) must be included within the basket of solutions.
Recommendation 2
Therefore we recommend that governments and international agencies treat all non-carbon energy technologies on a par with each other, with access to similar subsidies and to the benefits of removal of financial market barriers, so that improved versions of all these technologies can rapidly be utilized to achieve stabilization of greenhouse gas emissions while meeting energy demand.
Additionally, it was agreed that for 2005 the Energy PMP would hold a joint meeting with the World Energy Congress regarding global warming and carbon sequestration. Bruce Stram will endeavor to arrange a session presenting other advanced energy technologies that offer the potential to ameliorate global warming and other global problems induced by current energy supplies. It was agreed that the webpage http://energypmp.org had been useful during the last year. The PowerPoint slides and other reports of the PMP at the 2004 meeting have been posted on that page, as is this draft. It is likely that the maintenance of this page will, for the next year, be done by Carmen Difiglio. Users should be warned that the actual page, now at http://phys4.harvard.edu/~wilson/energypmp.html, will be on a different computer and webpage, although the same URL http://energypmp.org will continue to work.
BANGLADESH RURAL ELECTRIFICATION PROGRAM: A SUCCESS STORY OF POVERTY REDUCTION THROUGH ELECTRICITY
ABUL BARKAT, PH.D.
Department of Economics, University of Dhaka, Bangladesh
ABSTRACT
The purpose of this paper is to present an analysis of the impact of electricity on poverty reduction in rural Bangladesh. This impact has been analyzed using comparisons between 'with' and 'without' electricity situations. Retrospective information has been collected to understand inter-temporal changes in the asset situation of various economic categories. It has been argued that access to electricity (at household and outside household) reduces both economic poverty and human poverty (in education and health). In many ways, the poor electrified households were found to be better off than even the rich in the non-electrified villages. The spill-over effect of electricity on the non-electrified households of electrified villages is very pronounced. Electricity has a profound impact on human capital formation through knowledge building mediated through electricity-driven media exposure. It is recommended that, in order to accelerate the process of economic development, strengthen the pro-poor orientation of the growth process and further human development in Bangladesh, access to electricity for households and for social and economic institutions should be expanded. Rural electrification should be viewed as one of the key strategies for national poverty reduction.
SUCCESS AND CHALLENGES OF THE BANGLADESH RURAL ELECTRIFICATION PROGRAM: A BRIEF SUMMARY
In order to set the stage for an understanding of the Bangladesh Rural Electrification Program (REP) - the largest agency in the power sector for the rural population (76% of 140 million people reside in rural areas) - it would be worthwhile to provide a brief evolutionary description together with the successes and key challenges facing the program. In 1971 - the year Bangladesh became independent - only 250 villages out of 87,928 had access to electricity. The village electrification rate in this agro-based country was extremely slow, and the Power Development Board, by the mid-1970s, found itself overburdened with an absolute monopoly. Against this backdrop, to accelerate the process of rural development and remove the urban-rural disparity in standards of living, the Constitution (in 1972) declared rural electrification one of the "fundamental principles of state policy" (Article 16), and subsequently, after an in-depth feasibility study, an Ordinance was promulgated to establish the Rural Electrification Board in 1977. Based on the spirit of the Constitution and the Ordinance, the Polli Bidyut Samities (PBS; Rural Electricity Cooperatives, REC) were instituted. This arrangement is intended to provide a stable and reliable supply of power (electricity) in the rural areas at a reasonable price, and through that to enhance the rural population's standard of living, both in terms of expansion of production-oriented activities (irrigated agriculture, industry, shops/market places) and of human development activities (health, education, women's empowerment, etc.). Over the last 20 years, the number of PBS established has increased over fivefold. From a low of only 250 villages in 1971, REP covered 39,684 villages by the
end of 2003, and constructed 164,631 km of distribution line - a significant increase as far as coverage is concerned. During the past twenty years, the annual average growth rate of villages electrified was over 20%, of the length of lines energized 23.4%, and of the number of services connected 43%. This program now serves about 25 million people in rural areas throughout the country, including about 21 million household population, most of the electricity-operated irrigation equipment (121,715 units), a significant number of industries (90,921 units), 567,842 commercial units, and 4.3 million domestic connections. Following these relative successes, the key future challenges for the Rural Electrification Program are to cover 100% of villages (now 45%), 100% of rural households (now 22%), 100% of poor households (now 21% in electrified villages, and 5% of the total rural poor), and to construct 400,000 km of new distribution lines - all by 2020, in compliance with the national development policy objective of "electricity for all by 2020".
OBJECTIVE
The key objective of this paper is to present - based on secondary analysis of a recently conducted comprehensive study titled "Economic and Social Impact Evaluation of the Rural Electrification Program in Bangladesh" (Barkat et al., 2002) - some empirical indications of the impact of rural electrification on various areas of poverty reduction in rural Bangladesh. Attempts have been made to cover two broad aspects of poverty, namely economic poverty (employment, income, expenditure, savings, incidence of poverty, and changes in asset strengths) and human poverty (including health and education poverty). Analyses have been made to understand the rich-poor divide and the gender (male-female) divide on some crucial development indicators. Finally, in order to draw inferences about the overall developmental impact, human development index values have been estimated separately for the electrified and non-electrified household segments.
SALIENT METHODOLOGICAL ISSUES
At the outset, it would be appropriate to mention that the principal purpose of the Bangladesh study was not to provide an in-depth assessment of the impact of Rural Electricity/Electrification (RE) on poverty reduction per se. Dictated by the study design, no separate sampling was done for poor and non-poor. Therefore, whatever analysis of impact on and implications for poverty reduction is presented here, it is based on the original data set of households, not distinguished by poor and non-poor in the original sample. The specific core objectives of the base study were to assess RE's economic and social impacts on four measurement objects, namely households, industries, agricultural equipment, and commercial activities. The sampling design was developed accordingly, with a total of 3718 samples: 2491 households, 171 industries, 528 commercial units and 523 irrigation equipment users (including 67 non-irrigation), divided into users and non-users of electricity, and dispersed through 23 PBSs (out of the 67 PBSs operating under the Rural Electrification Board). Due to the absence of baseline data, it was deemed most appropriate to adopt a post-test-only control group operations research design using a "with-without" electricity scenario to gauge the impact of RE. Two types of villages (with and without electricity) with three types of household were covered in the study sample. Those were:
HE = households with electricity (pure experimental group; sample size 1380),
WE-EV = households without electricity in electrified villages (semi-control group with spill-over effect; sample size 421), and
WE-NEV = households in non-electrified villages (pure control group; sample size 690).
This classification of households is absolute from the viewpoint of access to household-level electricity connections, but relative from the viewpoint of a household's benefit from electricity outside the household (because of benefits from outside-household electricity in irrigation pumps, industry, and commercial establishments). The dearth of baseline values was compensated, to some extent, by collecting retrospective information (for the 5 years preceding the time of field data collection) on certain specific indicators which suffer less from memory recall problems (such as ownership, property, assets - land, homestead, number of rooms, sq. ft., number of livestock, etc.). These data were useful in understanding the dynamics of changes in the asset situations of the sample households. Therefore, such data can be gainfully used as baseline data to perform secondary analysis of RE's impact on poverty. It is important to note that all the sample households covered in the study, by design, were drawn (using PPS and a statistically random technique) from rural areas (rural by the definition adopted officially by the Bangladesh Bureau of Statistics in its Population Census). Therefore, all three categories of households are homogeneous on one broad location count, that is, none of them is urban - all are rural. Throughout this article, the category 'poor' has been used to denote 'poor household' in terms of amount of landownership (for details see footnote 6; the methodology for poverty line analysis is presented in footnote 8). From the methodological point of view, it is pertinent to mention that in the electrified villages both the poor and non-poor households have access to electricity at their households, the extent of access being different. In the electrified villages, on average, 24.3% of the poor households and 45.4% of the non-poor households have electricity connections. A recent study revealed that rural households' access to electricity is not socioeconomic status (or class) neutral. The actual connectivity of households to electricity declines with the declining economic status of a household; while on average 39.7% of households in the electrified villages are electrified, the ratio is as high as 53.5% for the non-poor and only 30.6% for the poor; and while over 90% of rich households (those having 750 decimals and above of land ownership) possess electricity, it is only 21% among the poorest households (Barkat et al., 2003: 37). Another crucial methodological issue to be mentioned here is that, in terms of most community-level infrastructural variables - population per primary school, population per km of pucca road, population per public health facility - no pronounced differences were found between the electrified villages and the non-electrified ones. This is because the establishment of these infrastructural public facilities is not contingent upon the availability or non-availability of electricity in the villages. This point is extremely important because the availability status of these infrastructural facilities is usually used as an explanatory variable in the analysis of the economic and social impact of an intervention on people's lives and living standards.
Since differences on this count between the electrified villages and the non-electrified ones are not pronounced, these should not be accepted as explanatory
variables; they act as neutral variables. In terms of some other variables, such as the distance of the upazila (subdivision) health complex from the village, the non-electrified villages, on average, were found to be better placed than the electrified ones. However, for some other infrastructural facilities, e.g., night schools, banks, cooperative societies, the electrified villages were found to be better endowed than the non-electrified villages. Last, but not least, it is pertinent to emphasize that the multifaceted impacts and benefits of RE are most likely to be both direct and indirect, and quantitatively attributing those impacts to RE is a formidable research task to accomplish.
IMPACT ON REDUCTION OF ECONOMIC POVERTY
Rural electricity has a multidimensional economic impact in general, and an impact on economic poverty reduction in particular. The economic impact is pronounced in the creation of employment opportunities in an overpopulated country characterized by an uncertain and gloomy labour market. RE has been instrumental in creating about 3 million (2.95 million) direct jobs, as well as enormous employment in support services as a spill-over effect. The sectoral situation of employment (job) creation associated with RE is as follows:
a. An estimated 1.1 million persons are directly involved in farmlands using RE irrigation equipment.
b. RE-connected industries (a total of 63,220 industries in 67 Palli Bidyut Samities, PBSs) employ 983,829 persons.
c. RE-connected retail and wholesale shops employ 848,630 persons (with 753,918 persons in retail shops).
d. Direct employment through the PBSs is 16,223 persons, with all billing assistants being women.
e. Electrified households have relatively higher non-agricultural employment (indicating a modernization effect on occupation).
f. The unemployment rate in the electrified households is lower than that in the non-electrified households (3% against about 4%).
g. Women in the electrified households, compared to those in the non-electrified, are more involved in income-generating activities (IGAs), including poultry raising, livestock, cottage industry, sewing, and handicrafts (with better re-allocation of time for remunerative employment).
h. The spill-over effect on employment creation in support services is highly pronounced in electrified areas.
It is not easy to ascertain the extent of the direct employment creation (2.95 million persons) that can be attributed to RE. However, a conservative estimate based on expert-judgement-based assumptions shows that about 1.3 million of the 2.95 million jobs associated with RE-connected industries, irrigation equipment users, commercial shops (retail and wholesale), and employment in the PBSs could be attributed to RE. Thus, at least 42% of the direct employment associated with RE connections can be attributed to rural electricity. As a predominant part of these jobs is associated with the economically poor segment of society, this has a direct poverty reduction impact.
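As a quick arithmetic check of the attribution figure above (illustrative only; the 1.3 million and 2.95 million job counts are taken directly from the text, and rounding in those counts explains the small gap to the quoted "at least 42%"), a minimal Python sketch:

    # Share of RE-associated direct jobs conservatively attributed to rural electricity
    jobs_associated_with_re = 2.95e6   # total direct jobs linked to RE connections (from the text)
    jobs_attributed_to_re = 1.3e6      # conservative expert-judgement estimate (from the text)

    share = jobs_attributed_to_re / jobs_associated_with_re
    print(f"attributed share: {share:.0%}")   # prints ~44%, consistent with "at least 42%"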
The average annual income of households with electricity (HE) is 64.5% higher than that of households in non-electrified villages (WE-NEV), and 126.1% higher than that of households without electricity in electrified villages (WE-EV). Last year's average household income for HE was Tk. 92,963, compared with Tk. 41,110 for WE-EV and Tk. 56,524 for WE-NEV (Table 1).
Table 1: Average annual household income (net) by landownership group and household electrification status.
[Table 1 gives average annual household income (net) for each landownership group in three columns: households with electricity (HE), households without electricity in electrified villages (WE-EV), and households in non-electrified villages (WE-NEV); the individual values are not reproduced here.]
Income poverty is less pronounced in the electrified households than among their counterparts, the non-electrified households. Considering the lowest income group of Tk. 24,000 per household per year, only 19.3% of the electrified households fall in this group, while the corresponding proportions of households in the WE-EV and WE-NEV categories were as high as 39.2% and 27.4% respectively. The average annual income of poor households with an electricity connection was Tk. 58,864, while that of their two counterparts was Tk. 35,104 (WE-EV) and Tk. 38,982 (WE-NEV); i.e., the landless in the electrified households earn, on average, 68% more than the landless in the non-electrified households of electrified villages and 51% more than the landless in the non-electrified villages (Table 1). The average income of the poor households with electricity is higher even than the average income of all households in the non-electrified villages. Thus, in terms of income, the poor households with electricity are richer than the average (rich and poor combined) households in the non-electrified villages. Besides the fact that the average income of electrified households, and of electrified poor households, is higher than that of the comparable non-electrified households, the rich-poor gap in income is much less pronounced in the electrified households than in the households of non-electrified villages. In the electrified households, the rich (large landowners) earn an income 3.75 times higher than that of the poor, but in the non-electrified villages it is 5 times higher (Table 2).
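The percentage gaps quoted above are simple relative differences. The short Python sketch below (purely illustrative; it only re-uses the Taka figures already given in the text) reproduces them:

    # Average annual household income (Tk.), as reported in the text
    income = {"HE": 92_963, "WE-EV": 41_110, "WE-NEV": 56_524}          # all households
    poor_income = {"HE": 58_864, "WE-EV": 35_104, "WE-NEV": 38_982}     # poor (landless) households

    def pct_higher(a, b):
        """Relative difference: how much higher a is than b, in percent."""
        return 100.0 * (a - b) / b

    print(f"HE vs WE-NEV (all HHs):  {pct_higher(income['HE'], income['WE-NEV']):.1f}% higher")   # ~64.5%
    print(f"HE vs WE-EV (all HHs):   {pct_higher(income['HE'], income['WE-EV']):.1f}% higher")    # ~126.1%
    print(f"HE vs WE-EV (poor HHs):  {pct_higher(poor_income['HE'], poor_income['WE-EV']):.0f}% higher")   # ~68%
    print(f"HE vs WE-NEV (poor HHs): {pct_higher(poor_income['HE'], poor_income['WE-NEV']):.0f}% higher")  # ~51%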
As to how much of the income is attributable to electricity, three issues are in order. The detailed methodology for estimating the share of income (by source) that can be attributed to electricity is presented in Annex A.
a. Having electricity in the household has contributed to the increased income of electrified households.
b. Electricity has contributed to the income of all households (both electrified and non-electrified), in varying degrees, in terms of benefits from the use of electricity-driven irrigation equipment, commercial connections, and industrial connections (i.e., benefits from outside-household connections).
c. Not all of the differences in income of the three categories of sample households are attributable to the independent effect of the presence of electricity.
Before presenting the estimates of the share of income attributable to electricity, it is pertinent to present the following findings about the households' reporting on the relationship between enhanced income and the availability of electricity (Table 2):
a. Overall, 40.8% of the households with electricity reported that having electricity in the household had somehow influenced the increase in income.
b. A relationship of income to the availability of electricity outside the household was reported by 66.2% of the households with electricity, 59.1% of the households without electricity in electrified villages (WE-EV), and 21.3% of the households in non-electrified villages (WE-NEV). These figures are indicative enough to show that electricity benefits all, irrespective of the availability of electricity in the household.
c. Many households reported that some sources of income are absolutely new to the household and emerged with electricity. Such absolutely new sources of income were reported by 13.6% of HE, 6.9% of WE-EV, and 1.6% of WE-NEV households.
d. Some households reported that the sources of income were not new, but that the increase in income due to electricity was very pronounced. Such reporting was made by 62.9% of the electrified households, 50.4% of WE-EV and 16.7% of WE-NEV households. This is an indication that income from various previous sources had increased after access to electricity both in and outside the household.
Table 2: Rural electrification and poverty reduction - selected indicators by household electrification status (HE = household with electricity; WE-EV = household without electricity in an electrified village; WE-NEV = household in a non-electrified village).

Selected indicators | HE | WE-EV | WE-NEV
1. Income's relation with electricity (% households reported):
   - Income's relationship with availability of electricity outside the HH | 66.2 | 59.1 | 21.3
   - Source of income absolutely new to the HH, emerged with electricity | 13.6 | 6.9 | 1.6
   - Income source not new to the HH, but income enhanced due to access to electricity | 62.9 | 50.4 | 16.7
2. Annual (net) income (in Tk., last year):
   - Poor (landless) | 58,864 | 35,104 | 38,989
   - Rich (large landowner) | 220,986 | 76,000 | 195,165
   - All | 92,963 | 41,110 | 56,524
3. Annual household expenses on food and non-food items (% share):
   - Food | 47 | 53 | 53.4
   - Non-food | 53 | 47 | 46.6
4. Below poverty line (% population):
   - Absolute poverty (DCI) | 39.9 | 51.2 | 43.4
   - Hardcore poverty (DCI) | 21.8 | 27.1 | 23.1
   - Lower poverty line (CBN) | 22.3 | 47.9 | 35.0
   - Upper poverty line (CBN) | 36.3 | 61.2 | 51.8
5. School drop-out (% households reported):
   - Boys only | 16 | 20 | 22
   - Girls only | 8 | 8 | 11
   - Both sexes | 20 | 25 | 28
6. Availing treatment from medically competent persons (% reported for last year):
   Sex:
   - Male | 59.0 | 46.3 | 46.4
   - Female | 54.3 | 40.3 | 39.9
   - Both | 56.7 | 43.6 | 43.3
   Rich-poor divide:
   - Poor | 54.7 | NA | 42.5
   - Rich | 64.0 | NA | 64.3
7. Child delivery assisted by a medically trained person (proportion of last birth):
   - All | 36 | 23.1 | 17.9
   - Poor | 30.4 | NA | 14.7
   - Rich | 67.5 | NA | 25
8. Availing treatment from medically competent persons in last maternal morbidity:
   - Poor | 79.2 | 56.3 | 57.4
   - Rich | 100.0 | NA | 66.7
9. Full immunization coverage among children 12-23 months, by sex:
   - Boys | 64.2 | 48.8 | 31.7
   - Girls | 56.2 | 60.6 | 40.9
   - Both | 60.7 | 54.4 | 36.5
10. Contraceptive Prevalence Rate:
   - Poor | 65.7 | NA | 55.0
   - Rich | 83.8 | NA | 61.9
11. Women's knowledge score on gender equality issues:
   - All women | 0.80 | … | …
   - Poor | 0.79 | … | …
   - Rich | 0.84 | … | …
Based on the above findings, it can be inferred that electricity's influence on income is more pronounced in the households with electricity than in the other two categories having no electricity in the household; and it is relatively more pronounced among the households without electricity in the electrified villages than among those in the non-electrified villages. Thus, the path of influence, from high to low, shows the following pattern: HE → WE-EV → WE-NEV.
Estimates show that 16.4% of the annual income of the electrified households may be attributed to electricity (Figure 1). For the non-electrified households in the electrified villages (WE-EV), 12% of the annual income can be attributed to electricity, and it is only 3.6% for the households in the non-electrified villages (WE-NEV). Moreover, in absolute terms, the annual income attributable to electricity in the electrified households (Tk. 15,229 out of Tk. 92,963) is 3 times higher than that in the non-electrified households of electrified villages (Tk. 4,947), and 7.4 times higher than that in the households of the non-electrified villages (Tk. 2,058).
Figure 1: Share of annual household income attributable to electricity (in Tk.)
[Figure 1: bar chart of average annual household income (in Tk.) for HE, WE-EV and WE-NEV households, with the percentage share of income attributable to electricity indicated on each bar.]
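As a numerical cross-check of Figure 1 (illustrative only; all Taka amounts are those quoted in the text), the attributable shares and ratios can be reproduced with the following short Python sketch:

    # Annual household income and the portion attributed to electricity (Tk.), from the text
    total_income = {"HE": 92_963, "WE-EV": 41_110, "WE-NEV": 56_524}
    attributable = {"HE": 15_229, "WE-EV": 4_947, "WE-NEV": 2_058}

    for group in total_income:
        share = attributable[group] / total_income[group]
        print(f"{group}: {share:.1%} of income attributable to electricity")
    # HE: 16.4%, WE-EV: 12.0%, WE-NEV: 3.6%

    print(f"HE vs WE-EV ratio:  {attributable['HE'] / attributable['WE-EV']:.1f}x")   # ~3.1x
    print(f"HE vs WE-NEV ratio: {attributable['HE'] / attributable['WE-NEV']:.1f}x")  # ~7.4x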
Another interesting dimension of electricity's contribution to household income is the pattern by specific source of income. The income source-wise contribution of electricity reveals the following:
a. While 13.8% of the income from crop agriculture of electrified households (HE) can be attributed to electricity, it is only 0.8% in the households of non-electrified villages (WE-NEV). Electricity's share of income from crop agriculture in the households without electricity in electrified villages (WE-EV) is 10.1%.
b. While 31.3% of the income from small (petty) business and shops of electrified households is attributed to electricity, it is only 7.2% in WE-NEV, and 11.9% in WE-EV.
c. While 32% of the income from livestock and poultry in HE is attributed to electricity, it is less than 1% in WE-NEV, and 9.1% in WE-EV.
d. While 61% of the income from cottage industry in HE is attributed to electricity, it is 35.3% in WE-EV and only 3.2% in WE-NEV.
The above estimates indicate the general income-enhancing impact of electricity. The income-poverty reduction impact of rural electricity is evident in the fact that, irrespective of household electrification status, the relative share of household income attributable to electricity is consistently higher for the poor than for the rich. In the electrified household category, whereas 15.2% of the annual income of the rich can be attributed to electricity, the figure was 17.2% for the poor. For non-electrified households in electrified villages, whereas 8.6% of the annual income of the rich can be attributed to electricity, it was 14.3% for the poor. In non-electrified villages, whereas 3.8% of the household income of the rich can be attributed to electricity, it was 6.1% for the poor (Table 3). Electricity's poverty reduction impact is further evident in the fact that, in absolute terms (Taka value), the amount of income attributable to electricity in the poor electrified households (Tk.10,124) is even higher than that in the rich households of non-electrified villages (Tk.7,461).

Table 3: Share of annual household income (net) attributable to electricity, by rich-poor and by household electrification status.
- Rich: HE 15.2 (220,986); WE-EV 8.6 (68,237)*; WE-NEV 3.8 (195,165)
- Poor: HE 17.2 (58,864); WE-EV 14.3 (35,104); WE-NEV 6.1 (38,989)
Note: Parentheses show annual household income (net; in Tk.). 'Poor' means those having less than 50 decimals of landownership; 'rich' means those having landownership of 750 decimals and above. *Since the number of rich (large landowner) households in the sample was only 1, the medium landowners (31 in number) are included.

The food/non-food pattern of expenditure differed between the electrified and non-electrified households. In absolute terms, the electrified households spend more on food annually (Tk.44,512) than their non-electrified counterparts (Tk.32,516 in electrified villages and Tk.36,456 in non-electrified villages). The situation was similar for non-food expenses. In relative terms, however, the share of food/non-food expenditure of the electrified households exhibited a different pattern from that of the non-electrified households. While the share of non-food expenditure was higher than that of food in electrified households, the opposite held in non-electrified households, with a higher share of food than non-food (Table 2). The food/non-food expenditure pattern in electrified households closely resembled the national urban pattern, and that in the non-electrified households the national rural pattern. Thus, electrification has acted as a factor urbanizing the consumption pattern of rural people with electricity in their households.

The average annual educational expenditure ranged between 2.3% of total household expenditure in the non-electrified households of electrified villages and 3.4% in the electrified households. The average annual (last year) household expenditure on education incurred by the electrified households was Tk.3,260, about 87% higher than in the households of the non-electrified villages and 135% higher than in the non-electrified households of the electrified villages (Figure 2). The expenses for males of electrified households (Tk.2,137) were 100% and 170% higher than for their counterparts in the non-electrified villages and in the non-electrified households of electrified villages respectively. The annual household educational expenses for females of electrified households (Tk.1,124) were 66% and 89% higher than for their counterparts in the non-electrified villages and the non-electrified households of electrified villages respectively. Thus, spending on education for both boys and girls is much higher in the electrified households than in their non-electrified counterparts; the educational expenses also constitute a high proportion (3.4%) of the higher income of the electrified households, compared to their counterparts with lower incomes.

The average annual health care expenditure reported by the electrified households was Tk.4,325, which is 44% higher than for their non-electrified counterparts (Tk.3,012 and Tk.2,999) (Figure 3).

Figure 2: Household expenditure on education: annual (last year) and per capita by male-female (in Tk.)
(Panels: Annual; Per capita (by sex).)
Figure 3: Average annual household expenditure on health care by male-female by household electrification status
The annual health care expenses for the males of electrified households (Tk.2,376) were 22% higher than for their counterparts in the non-electrified households of electrified villages and about 16% higher than for those in the households of non-electrified villages. A much wider gender gap was evident in health care expenses for females. The annual health care expenses for the females of electrified households (Tk.1,948) were 85% higher than for those in the households of the non-electrified villages and 104% higher than for those in the non-electrified households of electrified villages (estimated from data in Figure 3).

More importantly, the male-female gap in household health expenditure was much less pronounced within the electrified households than in the two other sample categories. For example, while in the electrified households the annual average health expenditure for males was 22% higher than for females, the corresponding expenses were as much as 85% higher for males than females in non-electrified households (of electrified villages), and 116% higher for males than females in the households of the non-electrified villages (estimated from data in Figure 3). Thus, compared to the non-electrified households, the electrified households not only spend more on health but also exhibit less gender disparity.

The poverty reduction influence of electricity is also evident from the rich-poor gaps in savings. In terms of income groups, within the electrified households, the average savings of the highest income group were 6.4 times higher than those of the lowest income group; the corresponding figure for the non-electrified villages was as high as 18 times. In terms of landownership groups, in the electrified households the average savings of the large landowners were 9 times higher than those of the landless group; the corresponding figure for the non-electrified villages was as high as 14 times (estimated from information in Figure 4). Also, the average savings of the land-poor electrified households are 3.3 times higher than those of the land-poor households in the non-electrified villages.

Figure 4: Rich-poor differences in average household savings by income and landownership groups by electrification status (in Tk.)
(Panels: Income group; Land group. Categories: HE, WE-NEV. Bars: Poor, Rich.)
The influence of electricity on poverty reduction is also evident in the changing landownership pattern. The bottom 40% of the electrified households now (in 2002) own 3.7% of the total cultivable land (the total of all cultivable land owned by the electrified households), and the top 10% own 43% of such land. In the non-electrified villages, the bottom 40% of households own only 1.6% of the total cultivable land (2.3 times less than in the electrified households), and the top 10% own as much as 51.6% of such land (about 19% higher than in the electrified households). This shows a higher degree of inequality in ownership of cultivable land in the households of non-electrified villages than in the electrified households.

The changes in the degree of inequality in cultivable landownership during the last five years were also more favourable for the electrified households than for the households of non-electrified villages. The bottom 40% (four deciles) of the electrified households owned 3% of the total cultivable land in 1997, which went up to 3.7% in 2002. The bottom 40% of households in the non-electrified villages owned only 1.2% of the total cultivable land in 1997, which had increased to 1.6% by 2002. Therefore, the relative share of the bottom 40% of the electrified households in total cultivable landownership has risen at a higher rate during the last five years than that of households in the non-electrified villages. The distribution of ownership of cultivable land in the electrified households, although skewed, is still better than in the non-electrified households, and the progress during the last five years was more pronounced in the electrified households. The Gini concentration ratio for ownership of cultivable land for the electrified households dropped slightly from 0.62 in 1997 to 0.61 in 2002 (a decline of 1.6%); for non-electrified households in the electrified villages it dropped from 0.69 in 1997 to 0.68 in 2002 (a decline of 1.4%); but for the households in the non-electrified villages it remained the same, at 0.67, in 1997 and 2002.

That, over time, the poverty status (movement of the low asset category) in the electrified households has improved relative to the non-electrified households is evident in the pronounced changes in the asset situation of the former during the last five years, 1997-2002 (Figure 5). The five-year change for the electrified households (HE) in the 1997 low asset group was as follows: of those in the low asset group in 1997, after five years (in 2002) 76.2% remained low, 19.4% joined the medium group, and 4.5% even joined the high asset category. The change for the 1997 low asset group of non-electrified households in the electrified villages (WE-EV) was as follows: of those in the low asset group in 1997, 90% remained in the same low group, only 10% moved up to the medium group, and none joined the high group (whereas, among the electrified, 4.5% of this group joined the high group).
The change in the overall asset situation of the 1997 low asset group households in the non-electrified villages (WE-NEV) was as follows: of those in the low asset group in 1997, after five years (in 2002), 87.9% could not improve their asset strength and remained in the same low asset group, 10% moved to the medium group, and the remaining 2.1% to the high asset group. Thus, with all the fluctuations in the movement of the original (1997) low asset group into other groups (by 2002), the electrified households have shown, compared to the non-electrified households, a progressive trend in their economic strength as measured by upward movement in the household asset situation.
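The Gini concentration ratios quoted above (0.61 to 0.68 for ownership of cultivable land) can be computed from a household-level distribution of holdings. The following Python sketch is illustrative only; the holdings vector is hypothetical, not survey data.

# Gini concentration ratio of a list of non-negative holdings (0 = perfect equality).
def gini(values):
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Rank-weighted sum of the sorted holdings (discrete form of the Lorenz-curve area).
    rank_weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2.0 * rank_weighted) / (n * total) - (n + 1.0) / n

holdings_decimals = [0, 0, 10, 30, 50, 120, 260, 400, 900, 1500]   # hypothetical holdings
print(round(gini(holdings_decimals), 2))   # about 0.69 for this hypothetical distribution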
Figure 5: Flow diagram showing changes in the overall household asset situation during the last five years by household electrification status: movement of the 1997 asset group into year 2002.
A. Households with electricity, HE (n=1,380)
B. Households without electricity in electrified villages, WE-EV (n=421)
C. Households in non-electrified villages, WE-NEV (n=690)
(Flow diagrams show the Past (1997) and Present (2002) asset groups, with the shares and household counts for the high, medium and low groups.)
Source: Estimated based on survey data.
Notes: 1. Low, medium and high asset groups correspond to a total valuation of all capital assets (movable and immovable) of up to Tk.250,000, Tk.250,001 to Tk.750,000, and Tk.750,001 and above, respectively, at 2002 market prices. 2. Figures in parentheses show the number of households in the asset group. 3. Numbers accompanying the arrows show the share of the 1997 group moving to the 2002 group.
In order to draw further inferences about the poverty reduction impact of electricity, it is pertinent to present an analysis of the various aspects of the incidence of poverty (below-poverty-line situations). The head-count index, measured in terms of the DCI and CBN methods, shows distinctly that the poverty situation is much better in the electrified than in the non-electrified households. Absolute poverty was most pronounced among the population in the households without electricity in electrified villages. About 40% of the population in the electrified households is below the absolute poverty line (i.e., per capita consumption is less than 2,122 kcal per day). The corresponding figure for the population in non-electrified households of electrified villages is 51%, and that for the population of non-electrified villages is 43.4% (Table 2). Compared to the national level of absolute poverty (44.3%), the electrified households' level is lower by about 11%, implying that electricity has contributed to poverty reduction.

Like absolute poverty, hard-core poverty was also most prominent among the population in the non-electrified households in the electrified villages (27.1%). In the electrified households, 21.8% of the population was found to be below the hard-core poverty line (i.e., per capita consumption is less than 1,805 kcal per day). The corresponding value for the population in the non-electrified villages was 23.1% (Table 2). Both the lower and upper poverty lines using the CBN method are much less pronounced for the electrified than for the non-electrified households (Table 2).
The proportion of population below both the lower and upper poverty lines was highest in the non-electrified households of the electrified villages (47.9% and 61.2%), followed by households in the non-electrified villages (35% and 51.8%), and lowest in the electrified households (22.3% and 36.3%).

The very high incidence of poverty among the population of non-electrified households, and the large gaps in those incidences between the electrified and non-electrified households (with the electrified showing the lowest incidence), signify that access to electricity in poor households (not merely in the villages) has a substantial impact on poverty reduction. Thus, ensuring poor people's (households') access to electricity should be assigned high priority in any future poverty reduction strategy for rural Bangladesh.

Thus, based on the above analysis of the relationship between electricity and various dimensions of economic poverty, the following major inferences can be drawn:
a. In terms of almost all measures of economic poverty, the population in the electrified households is much better off than the national averages and than their counterparts in the non-electrified households. Thus, electricity has a strong poverty reduction influence.
b. The incidences of poverty are highest among the non-electrified households in the electrified villages, and the poverty gap between the electrified and non-electrified households in the electrified villages is substantial. This implies that electrification of the village only, without electrification of households, will not be sufficient to reduce poverty.

Finally, in terms of the impact of electricity on economic poverty reduction, two more issues having broad policy implications are worth noting:
1. A binary probit analysis was conducted to ascertain how far the shift of a household from the poor to the non-poor category is influenced by possession of electricity and other factors. The results presented in Table 4 show that possession of electricity positively and significantly influences the shift of a household from the poor to the non-poor category (z-statistic = 7.3145, coefficient = 0.52438, P = 0.0000). The poor to non-poor shift is also influenced by the education status of the head of the household; such education status (among male-female and poor/non-poor groups) is more pronounced in the electrified households than in the non-electrified households (Figures 2 and 9). An illustrative sketch of such a model is given after the table.

Table 4: Results of binary probit analysis.
Variable: Coefficient / Std. Error / z-Statistic / Prob.
- Household electrification status: 0.524380 / 0.071690 / 7.314506 / 0.0000
- Number of HH members involved in income generation (IGAMEM): 0.182401 / 0.060806 / 2.999714 / 0.0027
- Education status of HH head: 0.255000 / 0.061443 / 4.150214 / 0.0000
- C (constant): -0.563612 / 0.112841 / -4.994753 / 0.0000

Note: Dependent variable: land category (poor/non-poor); method: ML binary probit; sample (adjusted): 1-1801; included observations: 1801 after adjusting endpoints; convergence achieved after 3 iterations. y = 0 indicates ownership of 0-49 decimals of cultivable land (poor); y = 1 indicates ownership of 50 decimals or more of cultivable land (non-poor).
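The sketch below illustrates a binary probit of the form reported in Table 4, using the statsmodels library; the synthetic data and variable names are placeholders and do not reproduce the study's dataset or its exact estimates.

# Illustrative binary probit (poor vs non-poor) with three regressors, as in Table 4.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1801
has_electricity = rng.integers(0, 2, n)       # household electrification status (0/1)
iga_members = rng.integers(0, 4, n)           # HH members involved in income generation
head_education = rng.integers(0, 3, n)        # education status of the HH head
latent = 0.5 * has_electricity + 0.2 * iga_members + 0.25 * head_education - 0.6
non_poor = (latent + rng.standard_normal(n) > 0).astype(int)   # y = 1: 50+ decimals of land

X = sm.add_constant(np.column_stack([has_electricity, iga_members, head_education]))
result = sm.Probit(non_poor, X).fit(disp=False)
print(result.summary())   # coefficients, standard errors, z-statistics and p-values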
2. Although electricity has high potential for economic development and poverty reduction, only about 18% of the rural households in Bangladesh have electricity connections. Our estimates (in Annex B) regarding the potential benefits of electricity show that, with 100% of rural households electrified, the total annual household income in rural areas (at current market prices) would increase to Tk.1,775 billion from the current Tk.1,105 billion. Almost 44% of the incremental income would be due to electricity, and the size of the incremental income (Tk.671 billion) is equivalent to 26% of the current GDP of Bangladesh.

IMPACT ON EDUCATION-POVERTY REDUCTION

Electricity impacts upon the social and cultural development of individuals, families, and the community at large. This impact is mediated through various intervening channels such as knowledge building and behavioral changes through TV viewing, radio listening, extended lighting hours, etc. Education forms the knowledge base of economic development. Education is recognized as a cornerstone of human capital formation (Schultz, 1981; Becker, 1981), as a means to human capability building and through that to human life (Sen, 1999), and as a key factor in human development. Adequate emphasis has therefore been placed in the paper on the educational status of the members of electrified and non-electrified households.

The overall literacy rate was found to be much higher, at 70.8%, in the electrified households, compared to 54.3% in the non-electrified households of electrified villages and 56.4% in the non-electrified villages (Figure 6). Compared to the non-electrified households, the overall literacy rates for both males and females in the electrified households were significantly higher. The overall male literacy rate in electrified households was about 76%, against 58.4% for WE-EV and 62.2% for WE-NEV. Similarly, the overall female literacy rate in electrified households was 65.2% (even higher than male literacy in the non-electrified villages), with corresponding rates of 49.9% for WE-EV and 49.8% for WE-NEV.
Figure 6: Overall literacy rate by sex by household electrification status (%)
The gender divide in overall literacy was much more pronounced in the households of the non-electrified villages than in the electrified households: the male-female gap in the non-electrified villages was 25%, against 16% in the electrified households. Thus, in the electrified households, as compared to the households in the non-electrified villages, the overall literacy rate is significantly higher (by 22%), with much less gender inequity (female literacy in electrified households is 31% higher than in the non-electrified villages). Other factors being the same, these significant rises in overall literacy, including female literacy, can indirectly be attributed to the household's access to electricity, which has contributed much both to enhancing income and to raising awareness of the value of education.

Not only was overall literacy higher, but the rich-poor divide in literacy was also less pronounced in the electrified than in the non-electrified households (Figure 7). Regarding inequity in overall literacy, the following observations are in order:
a. Across the sample categories, the richer the households (in terms of landownership status), the higher the overall literacy rate; i.e., the rich-poor divide exists irrespective of household electrification status. However, the divide is much less pronounced in the electrified households than in the non-electrified villages. In the electrified households, the overall literacy rate of the rich was about 25% higher than that of the poor, but the gap was as high as 60% in the households of non-electrified villages (estimates based on Figure 7). This average rich-poor divide was more pronounced for females than for males.
b. A rich-rich divide with and without electricity was also evident. The overall literacy rate among the rich (large landowning) electrified households was 9.8% higher than for the same category in the non-electrified villages. But the poor-poor divide with and without electricity was much more pronounced: the overall literacy rate among the poor (landless) electrified households was 41% higher than that of the poor in the non-electrified villages (estimated based on information in Figure 7). This difference was 47% in the case of overall female literacy of the poor.

Figure 7: Rich-poor divide in overall literacy rate by household electrification status
(Panels: Poor (landless); Rich (large landowner). Categories: HE, WE-NEV.)
Thus, electricity has a neutral impact on the literacy of the rich, but a highly pronounced impact on the literacy of the poor, especially poor women. This implies that electricity plays a catalytic role in reducing knowledge-poverty measured in terms of the overall literacy rate, including female literacy. The adult literacy rate is one of the major indicators of human development. The adult literacy rates by sample category, economic (land) group and sex show a pattern similar to that evident for overall literacy rates. The pattern of adult literacy for electrified households, as compared to non-electrified ones, is characterized by relatively high rates for both males and females, relatively less gender disparity, and relatively less rich-poor disparity (Figures 8 and 9). Therefore, it can be argued that ensuring access to electricity in the households should be seen as a major strategy to reduce knowledge-poverty (in terms of raising both overall and adult literacy) in rural Bangladesh.
Figure 8: Adult literacy rate by sex by household electrification status (%)
Figure 9: Rich-poor divide in adult literacy rate by household electrification status
(Figure 9 panels: Poor (landless); Rich (large landowner). Categories: HE, WE-EV, WE-NEV.)
Electricity influences not only literacy (especially of the poor) but also the quality of that literacy. This can be seen through such parameters as expenditure on education, marks (grades) obtained in examinations, school dropout, and time spent on study by students at night.

The per capita annual household expenditure on education in the electrified households was Tk.1,964, with Tk.2,344 for males and Tk.1,502 for females (Figure 2). The corresponding expenditure in the households of non-electrified villages was much less, at Tk.1,300 (both), Tk.1,505 (males) and Tk.1,069 (females). In all categories of households, the per capita education expenditure for females was less than that for males, most likely due largely to the female secondary education stipend program in rural Bangladesh. Thus, the electrified households fare better than the other two categories in terms of overall expenses on education as well as per capita educational expenditure, especially for female students.

In terms of educational attainment measured through marks obtained in the last final examination, both boys and girls in the electrified households were reported to be better off than their counterparts in all grades/classes. Average marks obtained by students in the electrified households did not vary much between boys and girls; the difference was much more pronounced for students in the non-electrified households, and most pronounced in the higher grades (VII-X).

Dropping out of school is a major indicator of the quality of education. All three categories of households reported such dropout, but a higher proportion of non-electrified than electrified households reported it. In general, reported dropout was higher for boys than for girls, with a smaller gap in the electrified than in the non-electrified households (Table 2).

Electricity contributes to improving the quality of education. This quality improvement in the electrified households works through many channels: more time available for study after sunset, the better quality of that time due to sufficient light and a fan for comfort, strengthening of the knowledge base due to access to TV (which in turn increases the appetite for learning), and parents (especially mothers and other elder female members) devoting more time to assisting children's education than before electricity.

The average amount of time spent on study after sunset (6 p.m.) was 126 minutes in the electrified households. It was 16% less in the households of non-electrified villages (109 minutes) and 22% less in the non-electrified households of electrified villages (103 minutes). In the electrified households, not only the availability of more time for study after sunset but also the quality of that time, in terms of a learning environment with sufficient light and a fan for comfort, must have played a determining role in the improvement in the quality of children's education. An additional contributing factor is that parents give more time to assisting children's education after electrification than before: around 51% of women reported that they now give, on average, 37 minutes more time to assisting children's study than before electricity reached them.

IMPACT ON HEALTH-POVERTY REDUCTION

People's health status is a prime component of human development. Since the value of good health is recognized as a means to human capability building and through that to human life (Sen, 1999), as a cornerstone of human capital (Schultz, 1981; Becker, 1981), and as a central input into economic development and poverty reduction (WHO, 2001), adequate emphasis has been given in the study to understanding the various dimensions of health status in the electrified and non-electrified households. Since health practice and behavior is a function of, among other things, health awareness, the latter has been analysed first. Such awareness is mediated through many agents, of which television is a major one. Thus, for all relevant health-related issues, the role of electricity has been identified using electricity-driven equipment, especially TV, as the agent. Electricity's impact (or influence) on health is mediated not only through TV, but also through the availability of other facilities such as refrigerators, fans and modern diagnostic facilities (possible only if electricity is available). Keeping this in mind, the following broad areas of health, hygiene and sanitation were covered: awareness of crucial public health issues, source(s) of knowledge, disease and treatment patterns, health care expenses, attendance at child delivery, access to antenatal care (ANC) and postnatal care (PNC) check-ups, tetanus toxoid (TT) immunization, maternal morbidity, child immunization, infant death (infant mortality rate), use of family planning, type of latrine facility in use, use of hand-washing material after defecation, and the role of the media in changing health-hygiene-sanitation behavior and practice.

Health inequity is a major poverty issue across the low-income countries, usually termed the 'health divide' between the poor and the rich. This divide is first and foremost knowledge based. Access to electricity can be a major means to address and resolve this problem.
This is evident from the low gaps in the public-health-knowledge coefficient between the poor and rich in the electrified households and the high gaps in the non-electrified villages, as well as from the relatively high knowledge coefficient among the poor in the electrified households compared to the non-electrified villages. The rich-poor divide in health knowledge shows the following:
1. The overall public-health-knowledge coefficient in the electrified households ranged between 0.61 for landless and 0.72 for large landowning households, i.e., a gap of 11 percentage points (Figure 10). The corresponding values for households in the non-electrified villages are only 0.36 (landless) and 0.59 (large landowner), a gap of 23 percentage points. Thus, the poor and rich in the non-electrified households are not only less aware than their counterparts in the electrified households, but the poor-rich gap is also twice as high. This means that access to electricity-driven media (media exposure) at the household level significantly reduces knowledge-in-health poverty by increasing the knowledge base among the poor.

Figure 10: Rich-poor divide in public health knowledge by hh electrification status (overall knowledge coefficient)
(Panels: Poor; Rich. Categories: HE, WE-EV, WE-NEV.)
2. The poor in the electrified households were found to be more knowledgeable (61%) about public health issues than even the rich (large landowners) in the non-electrified villages (59%). This also means that, in terms of knowledge-poverty, economically poor people become knowledge-rich if access to electricity is ensured.
3. The gaps in overall knowledge coefficients vary substantially for the same landownership category depending on the availability of electricity in the household. The overall knowledge coefficient of the landless in the electrified households is 25 percentage points higher than that of the landless in the non-electrified villages. The gaps were 23, 21, 19 and 13 percentage points for the marginal, small, medium and large landowning households respectively. This means that ensuring access to electricity will have a pronounced impact in reducing the existing knowledge gaps in the non-electrified households, and the impact will be greater for the poor.

Thus, in turning economically poor people into knowledge-rich people where public health is concerned, access to electricity can be a potential answer. Electricity has contributed spectacularly to knowledge building about crucial public health issues. Overall, as high as 56% of those having such knowledge in the electrified households reported TV as the main source of that knowledge; the corresponding figure for TV was 28% in the non-electrified households in electrified villages, and 17% in the non-electrified villages (Figure 11).

Figure 11: Share of major sources of knowledge about 20 public health issues (aggregate share)
Thus, a straightforward inference can be drawn that the respondents in the electrified households and their neighbours, compared to those in the non-electrified villages, are more aware of the crucial public health issues, and that electricity (through TV) has played an immense role, as the major source, in enhancing such knowledge.

The pattern of 12-month incidence of sickness did not vary by status of access to electricity. But the distinctions indicating an impact on health-poverty were pronounced when it came to treating sickness through medically competent persons (MCP):
1. Availing treatment from MCP was much more common in the electrified households than in the non-electrified households (Table 2). About 57% of the electrified households reported that they availed treatment from MCP, against 43% of the non-electrified households. This means that, in case of sickness, the electrified households are more likely (by 32%) to seek treatment from MCP than those in the non-electrified households.
2. Gender disparity in seeking treatment from MCP exists, but it is much less pronounced in the electrified than in the non-electrified households. The male-female proportions seeking treatment from MCP were 59% and 54.3% in electrified households; 46.3% and 40.7% in non-electrified households in electrified villages; and 46.4% and 39.9% in the non-electrified villages (Table 2). The percentage-point disparities for the three sample categories were 4.7, 5.6 and 6.5 respectively. Thus, although disparity existed in all categories, it was more pronounced in the non-electrified households, and sick females in the non-electrified villages were taken to MCP in much smaller proportions than those in the electrified households (a difference of 14 percentage points). An encouraging finding was that the proportion of females seeking treatment from MCP in HE was about 8 percentage points higher than even that of the males in the non-electrified villages.
3. The landless (poor) group in the electrified households reported seeking services from MCP in 55% of sickness events, while the figure was only 42.5% in the non-electrified villages (a difference of 12.5 percentage points). The pattern was similar for the marginal, small and medium landowner categories of households. The rich-poor gap in utilization of MCP during sickness was 9.3 percentage points in the electrified households, and as high as 21.8 percentage points in the households of the non-electrified villages (Figure 12). Thus, availability of electricity in the household influences the seeking of treatment from MCP (while sick) much more in poor households than in rich households.

This means that health-poverty reduction, both in terms of awareness of public health issues and utilization of medically competent persons while sick, is possible by ensuring access to electricity in the non-electrified households.

Medically trained persons assisted a much higher proportion of child deliveries (last birth) in the electrified households (36%). The corresponding figure for the non-electrified households in electrified villages was 23.1% (around the national average), and for households in the non-electrified villages 17.9% (Table 2). The rich-poor disparity was clearly evident: among the last deliveries in the electrified households, 30.4% in the landless group and 67.5% in the large landowning group were assisted by medically trained persons, a rich-poor gap of 37.1 percentage points (on a base of 30.4%). The corresponding figures in the non-electrified villages were only 14.7% (landless) and 25% (large landowners), a rich-poor gap of only 10.3 percentage points (on a low base of 14.7%). Thus, in terms of assistance in child delivery by medically trained persons, the electrified households show a much better situation, both overall and by landownership category. On this count, the poor with electricity are better off than even the rich in the non-electrified villages (Table 2).

Ante-natal care (ANC) check-ups during pregnancy by a medically trained provider, receipt of tetanus toxoid injections during pregnancy, and post-natal care (PNC) check-ups after delivery were all reported by much higher proportions in the electrified households than in the non-electrified households (Figure 12). The disparity in access to ANC and PNC between rich and poor is distinctly evident (Figure 13). But, significantly, even the poor in the electrified households had received more ANC check-up services than the rich in the non-electrified villages. Thus, in general, the women in the electrified households, rich or poor, had received more ANC and PNC services than the national averages, while women in the non-electrified villages received relatively few ANC and PNC services compared to those in the electrified villages.
All this implies that having electricity in the household positively influences the utilization of ANC and PNC services, and also acts as a health-mediated poverty reduction factor by motivating poor people (through radio/TV) to use ANC and PNC services when needed. It should also be mentioned that the annual income of the poor (landless) in the electrified households is 51% higher than that of the poor (landless) in the non-electrified villages, implying higher financial affordability among the poor in the electrified villages. Therefore, for the poor, electricity works in both ways: through income and thereby affordability, and through increased knowledge of the value of good health.
Figure 12: Percentage reported ANC checkup, TT immunization and PNC checkup by medically trained providers by hh electrification status
Figure 13: Rich-poor divide in access to ANC and PNC checkup services by medically trained provider (% reported in connection with last childbirth)
Maternal morbidity during pregnancy, during delivery, and within 42 days of delivery (the postpartum period) is a serious public health concern in Bangladesh. As expected, the proportions of women suffering maternal morbidity were similar across household electrification status for each type of morbidity: during pregnancy, during delivery, and within 42 days after delivery. But when morbidity is treated by a medically competent person (MCP), major variations are observed by household electrification status, with distinct advantages in the electrified households (Figure 14). Reduction of the burden of maternal morbidity by ensuring treatment by a medically competent person, a major health-mediated poverty reduction strategy of the Government of Bangladesh, has worked better in the households having electricity than in those in the non-electrified villages. This is evident from the following:
a. Women from the landless group (poor) in the electrified households availed themselves of maternal-morbidity-related treatment services from medically competent persons 38% more than their counterparts in the non-electrified villages.
b. The landless women in the electrified households availed themselves of more such services (79.2%) than even the rich in the non-electrified villages (66.7%) (Table 2).
Figure 14: Percentage of maternal morbidity cases treated by MCP, by type of morbidity and by hh electrification status
(Panels: During pregnancy; During delivery; In 42 days after delivery.)
One of the most spectacular influences of electricity was found on the infant mortality rate (IMR). The infant mortality rate in the electrified households is 42.7 per 1,000 live births, in the non-electrified households in electrified villages 53.8 per 1,000 live births, and in the non-electrified villages 57.8 per 1,000 live births. The IMR in the electrified households is 25% less than the national average (57/1,000 live births) and 35% less than the national rural average (66/1,000 live births). Secondly, the IMR in the non-electrified households in the electrified villages is lower (53.8) than that in the non-electrified villages (57.8). Third, the estimated IMR in the electrified villages is 49.9 per 1,000 live births, against 57.8 per 1,000 live births in the non-electrified villages. Finally, our estimates show that if access to electricity is ensured for 100% of rural households, and those households maintain the same IMR as the current electrified households, the annual number of infant deaths that could be avoided would be around 36,818, i.e., 101 infants saved every day.

Full immunization coverage among children aged 12-23 months was significantly higher in the electrified households (60.7%) than in the households of non-electrified villages (36.5%) (Table 2). The coverage in the non-electrified households of electrified villages was 54.4%, close to that of the electrified households. The full immunization rate varied by the household economic status of the children: in the electrified households it was 52.2% for the landless and 100% for large landowners, whereas it varied between only 28.9% and 66.7% in the villages without electricity. Thus, not only overall full immunization coverage, but also the coverage of both rich and poor, was higher in the electrified households than in the households of the non-electrified villages.

Access to electricity not only contributes to an overall increase in the Contraceptive Prevalence Rate (CPR), but also significantly raises the CPR among the poor (landless). The CPR among electrified poor households (65.7%) was found to be 19.5% higher than among the poor in the non-electrified villages (55%). The CPR in the electrified poor households was even higher (by 6%) than that of the rich households in the non-electrified villages (61.9%).
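The avoidable-infant-deaths figure quoted above follows from simple rate arithmetic. The sketch below shows the calculation; the number of annual rural live births used here is a hypothetical placeholder, not a figure from the paper, so only the per-day conversion of the paper's 36,818 estimate is reproduced exactly.

# Avoided infant deaths = live births x (IMR without electricity - IMR with electricity) / 1000.
imr_electrified = 42.7                  # per 1,000 live births, electrified households
imr_non_electrified = 57.8              # per 1,000 live births, non-electrified villages
annual_rural_live_births = 2_400_000    # hypothetical assumption, not from the paper

avoided_per_year = annual_rural_live_births * (imr_non_electrified - imr_electrified) / 1000
print(round(avoided_per_year))     # avoided infant deaths per year under these assumptions
print(round(36_818 / 365))         # the paper's 36,818 avoided deaths per year is about 101 per day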
The indication that electricity provides impetus to the attainment of the demographic goals of Bangladesh is clearly evident from the fact that a large share of family planning (FP) use was attributed to television. For 22.5% of the family planning users in the electrified households, TV was mentioned as the most influential factor prompting FP use. This self-reported weight assigned to TV was only 6.7% in the non-electrified households in electrified villages, and 5.5% in the non-electrified villages (Figure 15). Thus, it can be concluded that providing electricity in the household, combined with access to TV, would most likely contribute significantly to expediting the process of reaching the national demographic goals of Bangladesh (NRR=1 or TFR=2.1 by 2005).

Figure 15: Self-reported most influential factors prompting use of family planning (% of users reporting)
One of the most notable findings, with far-reaching cultural, public health and poverty reduction implications, relates to the rich-poor divide in the use of hygienic latrines and of open spaces for defecation. Over 50% of the poor households having electricity use hygienic latrines, against only 27.3% among their counterpart poor in the non-electrified villages. The rich-poor gap in the use of hygienic latrines was 25.5 percentage points in electrified households and 35.2 percentage points in the non-electrified villages. More spectacularly, while only 6.8% of the electrified poor households reported use of open places for defecation, the figure was as high as 29.2% for the poor in the non-electrified villages (Figure 16).
Figure 16: Percentage of poor and rich households using hygienic latrines and open spaces for defecation
(Panels: Hygienic latrine; Open space. Groups: Poor, Rich.)
There have been distinct cultural changes in hygienic practices due to household electrification, including, among others, the use of soap after defecation. The use of soap as a hand-washing material after defecation was reported by a higher percentage of the poor households (60.7%) in the electrified villages than even by the rich in the non-electrified villages (58.3%). This use of soap was much influenced by information, education and communication through television.
INFLUENCE ON WOMEN'S EMPOWERMENT: KNOWLEDGE OF GENDER EQUALITY ISSUES AND OVERALL EMPOWERMENT SCORE
In terms of knowledge about selected gender equality issues, a consistent awareness pattern was found: women in the electrified households were reported to be much more aware than those in the non-electrified households (Table 2). Women's knowledge score of gender equality issues in the electrified households ranged between 0.79 for landless households and 0.84 for large landowner households, a gap of 5 percentage points. The corresponding values for households in the non-electrified villages are 0.44 (landless) and 0.64 (large landowners), a gap of 20 percentage points. Thus, the poor and the rich in the non-electrified households are not only less aware than their counterparts in the electrified households, but the rich-poor gap is also four times as high.

It was found that the poor women in the electrified households were more knowledgeable (79%) about gender equality issues than even the rich in the non-electrified villages (64%). This means that access to electricity at the household level significantly increases the knowledge base among poor women. It also means that, in terms of knowledge-poverty, economically poor people become more knowledgeable than the rich if access to electricity is ensured.

In order to understand the effect of electricity on women's empowerment, a combined (aggregate) score has been constructed. This overall women's empowerment score is the combined effect of three indicators: women's freedom of mobility, participation in the family decision-making process, and knowledge about gender equality issues. A higher women's empowerment score is found in the electrified households than in the non-electrified households (Figure 17).

Figure 17: Overall women's empowerment score by hh electrification status
(Categories: HE, WE-EV, WE-NEV. Legend: overall women's empowerment score; gap between the ideal and actual situation of women's empowerment.)
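The overall empowerment score plotted in Figure 17 combines three component indicators. The paper does not spell out the weighting, so the sketch below simply assumes an unweighted average of component scores already scaled to the 0-1 range; the function name and the numeric inputs are hypothetical.

# Composite empowerment score as a plain average of three 0-1 component scores
# (equal weighting is an assumption, not stated in the paper).
def empowerment_score(mobility, decision_making, gender_knowledge):
    components = (mobility, decision_making, gender_knowledge)
    return sum(components) / len(components)

print(empowerment_score(0.70, 0.65, 0.80))   # hypothetical component values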
Electricity has contributed spectacularly to knowledge building about selected gender equality issues. Overall, as high as 64% of the women having such knowledge in the electrified households reported TV as their main source of knowledge; the corresponding figure for TV was 34% in the non-electrified households in electrified villages and 19.1% in the non-electrified villages (Figure 18).

Figure 18: Share of major sources of knowledge of women about selected gender equality issues (aggregate share)
(Categories: HE, WE-EV, WE-NEV. Sources shown include TV, radio, relatives, NGO workers and government workers.)
Thus, an inference can be drawn that the women in the electrified households and their neighbours, compared to those in the non-electrified villages, are more aware of and knowledgeable about the selected gender equality issues, and that electricity (through TV) has played an immense role as the major source of this enhanced knowledge. In addition, based on the values of the overall empowerment scores and the role of TV as a source of knowledge, it can be said that access to electricity at the household level can be a major factor in raising the level of women's empowerment.

Impact on Human Development: Human Development Index

The Human Development Index (HDI) has been constructed for all three categories of sample households. In constructing it, the standard methodology of UNDP has been adopted with some technical corrections (a worked sketch of the index arithmetic is given after the inferences below). The HDI values obtained are 0.642 for the electrified households, 0.440 for the non-electrified households in the electrified villages, and 0.436 for the households in the non-electrified villages (Table 5). Based on the analysis of the HDI of the three categories of sample households, the following inferences are in order:
1. The HDI for electrified households (HE), 0.642, is substantially higher than the overall HDI of Bangladesh (0.478) (UNDP, 2002: 151). The electrified households' HDI corresponds to the lower-middle range for medium-HDI countries (a rank of about 100 out of 173 countries, whereas Bangladesh's current rank is 145). This implies that, by ensuring 100% access to household electricity in rural areas, Bangladesh could raise its HDI ranking substantially from the current 145th position to around 100 (corresponding to the ranking of such countries as Egypt, Bolivia, Indonesia and Honduras). Thus, electricity's potential impact on the national HDI could be very significant.
Table 5: Human Development Index by household electrification status.
Values are shown in the order HE / WE-EV / WE-NEV.
- E. Indexed life expectancy (LE): 0.825 / - / -
- F. Indexed adult literacy (AL): 0.732 / - / -
- G. Indexed combined enrolment (CE): 0.637 / - / -
- H. Indexed education attainment (EA): 0.700 / - / -
- I. Indexed adjusted income: 0.400 / 0.188 / 0.250
- Human Development Index (HDI): 0.642 / 0.440 / 0.436
2. Even the non-electrified households in the electrified villages (WE-EV), which are predominantly poor, show an HDI almost equal to the Bangladesh country average. This category's HDI value is even higher than that of the households in the non-electrified villages (which are economically better off than the non-electrified households in electrified villages). This implies that HDI increases with village-level electrification even when a household's own access to electricity is denied. As found in the survey, this is most likely influenced by the relatively low infant mortality rates and higher combined gross enrolment.
3. The difference in HDI values between the electrified households and the non-electrified households in the electrified villages is 45.9%; between the non-electrified households in the electrified villages and those in the non-electrified villages it is less than 1%; and between the electrified households and the households in the non-electrified villages it is 47.2%. All this implies that providing access to electricity for the non-electrified households will have a spectacular impact on raising HDI in Bangladesh, whereas village electrification without electrifying the households will have little effect on improving human development and increasing HDI values. In other words, universal rural household electrification will have a spectacular impact on human development in rural Bangladesh.
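The HDI values in Table 5 follow the standard UNDP arithmetic. The sketch below re-derives the electrified-household (HE) index from the component indices recoverable in the table; the paper's "technical corrections" are not reproduced here.

# UNDP-style HDI: mean of the life expectancy, education attainment and income indices,
# where education attainment = 2/3 adult literacy index + 1/3 combined enrolment index.
indexed_life_expectancy = 0.825
indexed_adult_literacy = 0.732
indexed_combined_enrolment = 0.637
indexed_adjusted_income = 0.400

education_attainment = (2 / 3) * indexed_adult_literacy + (1 / 3) * indexed_combined_enrolment
hdi = (indexed_life_expectancy + education_attainment + indexed_adjusted_income) / 3
print(round(education_attainment, 3), round(hdi, 3))   # prints 0.7 and 0.642, as in Table 5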
CONCLUSIONS AND SUGGESTIONS

Ascertaining the precise extent of rural electricity's impact on the reduction of poverty, economic as well as human, is a complex endeavor, and it is difficult to establish empirical causal relationships. Based on the comparison between 'with' and 'without' electricity situations, the direct and indirect, tangible and intangible benefits of electricity in poverty reduction have been ascertained. Household access to electricity, people's access to electricity for productive purposes (industry, irrigation and commercial activities), and availability of electricity for human development purposes (education and health facilities) all contribute to economic development and poverty reduction.

The economic poverty reduction impact mediated through electricity is evident in enhanced employment generation, increased income of the poor, increased savings, a progressive pattern of food and non-food expenditure, a relatively high share of education and health expenses, and increased influence over asset building. Household possession of electricity significantly influences the shift of a household from the poor to the non-poor category. The human poverty reduction impact of electricity is evident in enhanced literacy, improved quality of education, the relatively higher empowerment status of women, and the better health status of the poor in the electrified households compared to the non-electrified ones. People's exposure to electricity-driven media (most importantly, TV) matters much in human capital formation and in improving the knowledge base, which in turn influences their education and health practices. Electricity's impact in reducing economic poverty and transforming human poverty produces a type of synergy, which is a powerful catalyst in accelerating the process of sustainable poverty reduction. Therefore, in order to accelerate the process of economic growth, strengthen the pro-poor orientation of the growth process and further human development, access to electricity for households and for social and economic institutions should be expanded.

Our knowledge base on the empirically firm causal relationships between access to electricity and many crucial dimensions of poverty is still at an embryonic stage.
One feasible way to minimize this knowledge gap could be to undertake relevant secondary analyses of the large, high-quality empirical database produced in the Bangladesh study (Barkat et al., 2002). Moreover, in order to minimize the knowledge gap and to expedite informed policy and decision making, research studies of high national and global utility should be launched to understand more accurately the complex relationships between the availability of electricity and various dimensions of poverty reduction, including migration, women's empowerment, the creation of job opportunities and scope for self-employment, poverty status and the age (length) of domestic connections, relative poverty reduction impact by type of connection, mortality-morbidity and health status, micro-credit/micro-finance potentials, and strategies for expanding poor people's access to electricity.
ANNEX A
METHODOLOGICAL ISSUES PERTAINING TO THE ESTIMATION OF INCOME ATTRIBUTABLE TO ELECTRICITY

In order to systematically estimate the share of income that can be attributed to electricity, each respondent household, irrespective of its access to electricity, was queried as follows about each possible source of income:
1. For each of the 23 possible sources of income, the source(s) applicable to the individual household's income over the last year were ascertained.
2. For an electrified household, it was determined whether or not the income from the relevant source had any relationship with the availability of electricity in the household.
3. For all households, electrified and non-electrified, enquiry was made as to whether or not the income from the relevant source had any relationship with the availability of electricity in the area, i.e., outside the household (agriculture, industry, commercial shops and establishments, etc.).
4. For all households, electrified and non-electrified, it was ascertained whether or not the specific source of income was a new source for the household which had begun only with the advent of electricity in the household and/or in the area.
5. For all households, electrified and non-electrified, it was asked whether the specific source of income, though not new to the household, yielded enhanced income because of the impetus given by electricity.

Once these logical steps were taken in the interview, the approximate share of net income from each applicable source attributable to electricity was estimated. This estimation was based on one or a combination of the following methods: direct financial estimation, amount of land, multiple use of the same land, person-days/working days/working hours, amount of production (non-crop), km travelled/number of passengers, etc. (see the box below). For example, in estimating electricity's contribution to income from crop agriculture, the following factors were considered (singly or in combination, as appropriate): increased production due to electrified irrigation, increased cropping intensity, the amount of previously fallow land now cultivated due to the availability of electrified irrigation equipment, and changes in production. In estimating electricity's contribution to income from livestock (and poultry), the following were considered: stealing/theft of cattle/poultry birds stopped (reduced) due to electricity, which gave impetus/incentive for cattle breeding and poultry rearing; more production of high-breed cows/poultry due to security, availability of vaccines, feed and fodder, fans and lighting; increased demand for and price of milk after electricity; and increased sale/production of cattle, milk, poultry and eggs after electricity. In the case of shops and businesses, the factors considered were increased business hours (after sunset), increased customer flow due to an electrified market place, income increases due to refrigerators, business diversification, etc. All the factors, by source, which have contributed to electricity-mediated income enhancement are presented in the box below:
Factors considered in estimating the contribution of electricity to enhanced/incremental income, by source:

1. Crop agriculture: (i) increased production due to electrified irrigation; (ii) increased cropping intensity; (iii) possibility of cultivating previously fallow land; (iv) less expenditure due to electrified irrigation; (v) income increased due to electrified irrigation compared to before electricity.
2. Wage labour (agriculture): (i) demand increased due to increased cropping intensity; (ii) wage income increased compared to before electricity; (iii) price of wage labour increased due to the establishment of new industrial and commercial units.
3. Wage labour (non-agriculture): (i) increase due to expansion of market places and economic activities after electrification; (ii) more time for work (days and hours) available after electricity; (iii) increased income of wage labour due to overtime at night (e.g. masons, industries, shops and establishments); (iv) absolutely new types of work emerged with electricity (electrician, poultry raising, pisciculture, jobs in the PBS, etc.).
4. Livestock: (i) stealing of cattle stopped due to electricity, which gave an incentive for livestock breeding; (ii) more production of highly bred (Australian) cows and milk due to security, availability of vaccines, feed, fodder, fans and lighting, and increased demand for and price of milk after electricity; (iii) increased sale/production of cattle after connection of electricity.
5. Poultry: (i) income increased due to increased sales of poultry after electricity; (ii) increased poultry rearing/production rate due to electricity; (iii) poultry theft stopped due to electricity in the HH, giving impetus for increased protection; (iv) income increased by selling eggs after electricity; (v) poultry not reared before electrification due to deaths and theft, so the source is absolutely new to the HH and emerged with electricity; (vi) attacks on poultry by dogs/foxes stopped after electricity; (vii) deaths of poultry stopped (reduced) due to vaccination facilities; (viii) emergence of new poultry farms after electricity.
6. Trees/nurseries: (i) absolutely new to the HH and emerged with electricity; (ii) stealing of trees/nursery plants stopped due to lighting; (iii) selling of trees/nursery plants increased after electricity.
7. Kitchen/home gardening: (i) vegetable thefts stopped after electrification of the HH; (ii) income increased from selling vegetables after electricity; (iii) awareness of the benefits of home/kitchen gardening and application of modern cultivation methods learned from TV/radio.
8. Fruit/vegetables: (i) increased production of fruit/vegetables due to fewer attacks by birds after lighting; (ii) theft of fruit/vegetables stopped due to electricity in the HH; (iii) production of vegetables increased due to electrically powered irrigation.
9. Pisciculture/fisheries: (i) income increased due to round-the-year availability of water in ponds through use of electrified pumps; (ii) stealing of fish stopped due to lighting at night; (iii) income and production from fisheries increased due to awareness through radio and TV; (iv) insects killed due to lighting over the pond, hence less food expenditure to cultivate fish; (v) price of fish increased due to good communication after electricity.
10. Selling water: (i) income increased from selling water using electrified pumps; (ii) absolutely new to the HH and emerged with electricity (DTW, STW and LLP).
11. Business/shops: (i) increased business hours at night after electrification; (ii) income increased due to electrically powered refrigerators; (iii) increased sales and customers at night after electricity; (iv) market expansion after electrification of the market place and surroundings; (v) new business enterprises emerged after electrification (e.g. photocopying, rice husking mills, stocking potatoes in cold storage, establishment of new shops, etc.); (vi) increased business days and hours after electrification; (vii) sales of goods increased due to availability of fans, lights, cassette players, etc. after electrification.
12. Rent (house, shop): (i) increased rents of shops/houses after electrification.
13. Agricultural implements: (i) increased demand for agricultural implements due to electrified irrigation and thereby increased income; (ii) increased use of agricultural implements at night due to electricity.
15. Transport (van, rickshaw, boat, motorcycle, cycle): (i) increased income after electricity as compared to before electricity; (ii) driving more (increased km of movement) after nightfall due to electrification of villages; (iii) increased passengers and passenger hours after electricity.
16. Cottage industries: (i) increased income from handicrafts produced at night due to electricity; (ii) work time increased after electricity; (iii) new cottage industries emerged in many households.
17. Industry/factory: (i) new industries/factories established; (ii) diesel-driven industries converted to electricity, which generated more employment and earnings; (iii) expenditure savings in rice husking due to conversion from diesel to electricity.
Source(s): survey schedules (Barkat et al. 2002: 307-329).
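The attribution logic described above can be summarised in a short sketch. This is a minimal illustration, not the study's actual computation: the field names, the per-source attribution shares and the example figures are all hypothetical, and only the overall flow (screen each reported source against questions 2-5, then apply a source-specific share to its net income) follows the description in this annex.

```python
# Minimal sketch of the income-attribution logic described in Annex A.
# All field names and numbers are illustrative, not taken from the survey.

def electricity_attributable_income(sources):
    """Sum the share of net income attributed to electricity across sources.

    Each source is a dict with:
      net_income        -- net income from the source over the last year (Tk.)
      linked_in_hh      -- income related to electricity in the household (step 2)
      linked_in_area    -- income related to electricity in the area (step 3)
      new_with_power    -- source emerged only with electricity (step 4)
      boosted_by_power  -- electricity gave impetus to income from the source (step 5)
      attributed_share  -- estimated share (0-1) from the applicable method
                           (direct financial, land, person-days, output, km, ...)
    """
    total = 0.0
    for s in sources:
        applicable = (s["linked_in_hh"] or s["linked_in_area"]
                      or s["new_with_power"] or s["boosted_by_power"])
        if not applicable:
            continue  # steps 2-5 all negative: nothing attributed to electricity
        # Assumption: a source that exists only because of electricity is
        # attributed in full; otherwise the estimated share is applied.
        share = 1.0 if s["new_with_power"] else s["attributed_share"]
        total += s["net_income"] * share
    return total

# Hypothetical household reporting two of the 23 possible sources
household = [
    {"net_income": 30000, "linked_in_hh": False, "linked_in_area": True,
     "new_with_power": False, "boosted_by_power": True, "attributed_share": 0.25},
    {"net_income": 8000, "linked_in_hh": True, "linked_in_area": False,
     "new_with_power": True, "boosted_by_power": True, "attributed_share": 1.0},
]
print(electricity_attributable_income(household))  # 30000*0.25 + 8000*1.0 = 15500.0
```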
ANNEX B
RURAL ELECTRIFICATION'S CONTRIBUTION TO RURAL HOUSEHOLD INCOME: PRESENT AND FUTURE
Based on the findings of the study, an attempt has been made to estimate the contribution of rural electrification to the overall income of rural households in Bangladesh. A second attempt has been made to estimate the income of the rural households in Bangladesh assuming "all rural households have electricity". This has a broad policy implication in terms of the potential for electricity-mediated economic development and poverty reduction in Bangladesh. Our estimate shows that out of the total of 19,092,2241> (19.1 million) rural households in Bangladesh, 3,413,825 (17.88%) households have (REB) electricity connections in their households; 6,395,086 (33.5%) households are situated in electrified villages but do not have electricity in their households; and the rest, 9,283,313 (48.62% of total rural households), are situated in non-electrified villages (implying that they do not have electricity). Using the values generated in the survey on annual income and the share of electricity in that income for the three sample categories (HE, WE-EV and WE-NEV), the nationwide weighted values have been estimated (Table below).
Table: Estimated annual income and income from electricity, Rural Bangladesh.
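Before the results, a minimal sketch of the weighting just described may help. The household counts come from the paragraph above, but the per-household incomes and electricity shares in the sketch are hypothetical placeholders, so its printed totals are illustrative only and will not reproduce the Tk.1,105 billion and Tk.102.73 billion figures quoted below.

```python
# Sketch of the nationwide weighting described above. The household counts are
# taken from the text; the per-household incomes and electricity shares are
# HYPOTHETICAL placeholders, so the totals below are illustrative only.

households = {          # (households, mean income Tk./yr, share from electricity)
    "HE":     (3_413_825, 80_000, 0.1638),   # electrified households
    "WE-EV":  (6_395_086, 55_000, 0.10),     # non-electrified, in electrified villages
    "WE-NEV": (9_283_313, 50_000, 0.05),     # in non-electrified villages
}

total_income = sum(n * y for n, y, _ in households.values())
from_electricity = sum(n * y * s for n, y, s in households.values())
print(f"Total rural income:          Tk. {total_income / 1e9:.0f} billion")
print(f"Attributable to electricity: Tk. {from_electricity / 1e9:.1f} billion "
      f"({from_electricity / total_income:.1%})")

# "All households electrified" scenario: every household moves to the HE mean
# income while electricity's share stays at 16.38%, as assumed in the text.
n_all = sum(n for n, _, _ in households.values())
scenario_total = n_all * households["HE"][1]
print(f"100% electrification scenario: Tk. {scenario_total / 1e9:.0f} billion")
```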
Estimates based on the above methodology show that, in rural Bangladesh, the total annual household income (at current 2002 market prices) is about Tk.1,105 billion, of which Tk.102.73 billion can be attributed to electricity. Thus, 9.3 percent of the annual income of the total rural households (19.1 million) in Bangladesh can be attributed to electricity. Only 17.88% of the rural households in Bangladesh have electricity connections in their households. Assuming "all rural households have electricity", that their average income rises to the level of today's electrified households (in rural areas), and that electricity's share in that income remains the same as now (i.e., 16.38%), the total annual household income (with 100% of rural households having domestic connections) at
current market prices will increase to Tk.1,775 billion from the present Tk.1,105 billion; i.e., the annual net gain in income will be Tk.671 billion more than today, of which Tk.290.8 billion, or 43.3% of the increment, will be due to electricity. The above net gain in annual income (Tk.671 billion) due to 100% electrification of rural households is equivalent to 26% of the current GDP (Tk.2,580.6 billion2> at current market prices) of Bangladesh.

REFERENCES
1. Bangladesh Institute of Development Studies (2001), Human Development Report 2002, Fighting Human Poverty: Bangladesh, prepared for the Ministry of Planning, Government of Bangladesh, January 2001, Dhaka.
2. Barkat Abul, SH Khan, M Rahman, S Zaman, A Poddar, S Halim, I Majid, AKM Maksud, A Karim, S Islam & M Ratna (October 2002), Economic and Social Impact Evaluation Study of the Rural Electrification Program in Bangladesh, Human Development Research Centre (HDRC), undertaken for NRECA International Ltd., partners with the Rural Electrification Board of Bangladesh and USAID for the Rural Power for Poverty Reduction (RPPR) Program.
3. Barkat Abul, S. H. Khan, M. Haque, R. Ara, S. Zaman and A. Poddar (2003), Impact Study of Rural Electrification Project: Mechanism of Poverty Alleviation Fostered by Rural Electrification, prepared for Japan Bank for International Cooperation, Dhaka.
4. Becker, G. S. (1981), A Treatise on the Family, Cambridge, Mass.: Harvard University Press.
5. Government of Bangladesh (1989), Yearbook of Agricultural Statistics of Bangladesh 1987-88, Bangladesh Bureau of Statistics, Statistics Division, Ministry of Planning, Dhaka.
6. Government of Bangladesh (1993), Yearbook of Agricultural Statistics of Bangladesh 1992, Bangladesh Bureau of Statistics, Statistics Division, Ministry of Planning, Dhaka.
7. Government of Bangladesh (1996), Report on Labour Force Survey in Bangladesh: 1995-96, Bangladesh Bureau of Statistics, Statistics Division, Ministry of Planning, Dhaka.
8. Government of Bangladesh (1997), Report on Bangladesh Census of Manufacturing Industries (CMI): 1991-92, Bangladesh Bureau of Statistics, Statistics Division, Ministry of Planning, Dhaka.
9. Government of Bangladesh (1999a), Bangladesh Population Census 1991, Vol. 4, Bangladesh Bureau of Statistics, Statistics Division, Planning Division, Ministry of Planning, Government of Bangladesh, Dhaka.
10. Government of Bangladesh (1999b), Census of Agriculture 1996, Vol. 1, Bangladesh Bureau of Statistics, Statistics Division, Planning Division, Ministry of Planning, Dhaka.
11. Government of Bangladesh (1999c), Population and Development: Post-ICPD Achievements and Challenges in Bangladesh, prepared for the Ministry of Health and Family Welfare, presented at the Special Session of the UN General Assembly, New York: June 30-July 02.
12. Government of Bangladesh (1999d), Statistical Pocket Book of Bangladesh 1997, Bangladesh Bureau of Statistics, Statistics Division, Planning Division, Ministry of Planning, Dhaka.
13. Government of Bangladesh (1999e), Yearbook of Agricultural Statistics of Bangladesh 1999, Bangladesh Bureau of Statistics, Statistics Division, Ministry of Planning, Dhaka.
14. Government of Bangladesh (2001a), Bangladesh at a Glance, National Accounting Wing, Bangladesh Bureau of Statistics, Statistics Division, Planning Division, Ministry of Planning, Dhaka.
15. Government of Bangladesh (2001b), Performance of Bangladesh Economy 1991-2001, General Economics Division, Planning Commission, Ministry of Planning, Dhaka.
16. Government of Bangladesh (2001c), Population Census 2001: Preliminary Report, Bangladesh Bureau of Statistics, Statistics Division, Planning Division, Ministry of Planning, Government of Bangladesh, Dhaka.
17. Government of Bangladesh (2001d), Preliminary Report of Household Income and Expenditure Survey 2000, Bangladesh Bureau of Statistics, Statistics Division, Ministry of Planning, Dhaka.
18. Government of Bangladesh (2001e), Report on Bangladesh Census of Manufacturing Industries (CMI): 1995-96, Bangladesh Bureau of Statistics, Statistics Division, Ministry of Planning, Dhaka.
19. Government of Bangladesh (2002a), Preliminary Report on Household Investment Survey 1998-99, Strengthening National Accounts and Poverty Monitoring System, Bangladesh Bureau of Statistics, Planning Division, Ministry of Planning, Dhaka.
20. Government of Bangladesh (2002b), Statistical Pocketbook of Bangladesh: 2000, Bangladesh Bureau of Statistics, Statistics Division, Ministry of Planning, Dhaka.
21. Haq, M. (1999), Human Development in South Asia 1999: The Crisis of Governance, Islamabad: Human Development Centre.
22. National Institute of Population Research and Training (NIPORT), Mitra and Associates and ORC Macro (2001), Bangladesh Demographic and Health Survey 1999-2000, Dhaka, Bangladesh and Calverton, Maryland, USA: NIPORT, Mitra and Associates, and ORC Macro.
23. Rural Electrification Board, Management Information System (MIS), for the month of June, separately for the last 20 years (from 1983 to 2002), Dhaka: Rural Electrification Board.
24. Rural Electrification Board, Annual Reports, 1997 to 2002, Dhaka.
25. Schultz, T.P. (1981), Economics of Population, New York: Addison-Wesley.
26. Sen, A. K. (1995), "Gender Inequality and Theories of Justice," in Martha C. Nussbaum and J. Glover (eds), Women, Culture and Development: A Study of Human Capabilities, India: Oxford University Press.
27. Sen, A. K. (1999), Development as Freedom, New York: Alfred A. Knopf.
28. United Nations (2000), Millennium Poll (Global Survey Commissioned for the Millennium Summit of the United Nations by UN Secretary General Kofi Annan), New York: United Nations.
29. United Nations Development Program (1998), Human Development Report 1998, New York: Oxford University Press.
30. United Nations Development Program (2001), Human Development Report 2001: Making New Technologies Work for Human Development, New York: Oxford University Press.
31. United Nations Development Program (2002), Human Development Report 2002: Deepening Democracy in a Fragmented World, New York: Oxford University Press.
32. World Bank (1990), World Development Report 1990: Poverty, Washington D.C.: Oxford University Press.
33. World Bank (1998), Bangladesh: From Counting the Poor to Making the Poor Count, Washington D.C.: Oxford University Press.
34. World Health Organization (2001), Macroeconomics and Health: Investing in Health for Economic Development, prepared by Jeffrey D. Sachs, Chairman, Commission on Macroeconomics and Health, Geneva: World Health Organization.

Notes:
a> Data on life expectancy at birth (LE) were not available in the survey. Data on child births and infant deaths (last year) were obtained, and the infant mortality rate (IMR) has been estimated. LE is a function of many factors including IMR; LE and IMR are inversely related. LE for a specific IMR has been calculated using the formula 100 - IMR x 1.3 (constructed for Bangladesh). The estimated IMRs in the current survey are: HE = 42.7/1000 LB, WE-EV = 53.8/1000 LB, and WE-NEV = 57.8/1000 LB.
b> Per capita real GDP for 1999/00 was 1632 PPP$ (BIDS 2001: 30). The overall per capita rural income in Bangladesh (estimated using information in Section 4.3.2) in 2001 was Tk.10,036. This amount is equivalent to 1632 PPP$, i.e., Tk.1 = PPP$ 0.1626. The actual per capita income found in the survey was Tk.15,494 for HE, Tk.7,613 for WE-EV and Tk.9,916 for WE-NEV. The PPP$ equivalents of these Taka values have been estimated using the above conversion coefficient.
1> Significant employment generation impact of RE in agriculture is associated with the following: RE-powered irrigation equipment, on average, covers 10 acres more net area, 12 acres more total area and 3 acres more new-to-irrigation area as compared to diesel-operated irrigation equipment. Both land-use intensity and cropping intensity are higher with electricity than with diesel. Average yield per acre under electricity-powered irrigation is 24% higher than that of diesel-operated equipment. REP-irrigated land produces annually 4.1 million tons of HYV Boro and Aman, which is 29% of all similar types of rice produced in Bangladesh.
2> Substantial employment generation impact of RE in industries is evident in the following: Electrified industries generate, on average, 11 times more employment than non-electrified ones (mainly due to large-scale electrified industries). Compared to the non-electrified, the electrified industries generate more employment for women. Electrified industries are cost-efficient, productive and environmentally less hazardous, and strengthen the local industrial base by promoting backward-forward linkages and diversification. Electrified industries have created a huge demand for expansion of support services, including growth of shops, fax-email-telephone facilities, restaurants, banks, photocopy shops, schools and colleges, bus/tempo stoppages, availability of qualified medical doctors, diagnostic centres and clinics.
3> Employment generation associated with electrified shops (retail and wholesale) can be seen in the following: An estimated 24% of total shops in Bangladesh are using RE. On average, an RE-connected shop employs one additional person. Electricity has given rise to constellations of shops. Electrified shops are more
attached to the marketplace. Business turnover in electrified shops is much higher than in non-electrified shops (2 times for retail, 11 times for wholesale). Electrified shops are open about 2 hours longer after sunset (prime business hours) than their non-electrified counterparts. Out of the total annual sales turnover of retail and wholesale shops in Bangladesh (Tk 1,274.1 billion), RE-connected shops' share is about 14%.
4> Out of the 143 specific CMI-coded industry types using RE connections, 90% of the large, 75% of the medium and 50% of the small-scale industries using RE would not have been established without RE facilities. This means that about 75% of the total employment in RE-connected industries is due to electricity. In agriculture, over 100,000 additional jobs have been created throughout the year. On average, one additional job has been created in each of the 456,528 commercial shops connected through RE. And direct employment in PBS is 16,223 persons. An enormous amount of employment opportunity has been created in agriculture (e.g., due to higher productivity, more employment for harvesting purposes, etc.) and a huge amount of employment in support services; the extent of such employment due to electricity could not be ascertained.
5> Income here refers to net income, i.e., gross income minus the cost of earning (of gross income). Household income refers to last year's income, and the duration considered was between April 14, 2001 and April 13, 2002, the last Bangla calendar year.
6> 'Poor' has been defined as those having no landownership (absolute landless) or less than 50 decimals of landownership (functional landless). The five landowner groups adopted in the study are landless, marginal, small, medium and large. This categorization was made using the official classification, which is as follows: landless includes functional landless owning less than 50 decimals of cultivable land, marginal owns 50-149 decimals, small owns 150-249 decimals, medium owns 250-749 decimals, and large owns 750 decimals and more (Sources: Census of Agriculture 1996, Vol. 1: 71, BBS 1999; Statistical Pocketbook of Bangladesh 1997: 184-185).
7> The most recent Household Income and Expenditure Survey of Bangladesh 2000 (GOB, 2001d) provides the following figures for the distribution of household expenditure by food and non-food: National (food 54.6%, non-food 45.4%); Rural (food 59.3%, non-food 40.7%); Urban (food 44.6%, non-food 55.4%).
8> Economic poverty has been estimated using head count measures. The direct calorie intake (DCI) and cost-of-basic-needs (CBN) methods were used. In order to ensure comparability of the estimated values, the official methodology used by the Bangladesh Government in the Household Income and Expenditure Survey 2000 (published by BBS in 2001) was adopted. Also, to ensure comparability, the relevant correction factors were applied (e.g., the Taka value per person per month for 2001 in estimating the lower and upper poverty lines using the CBN method).
9> The overall literacy rate is defined in the Population Census of Bangladesh as the proportion of members 7 years and above who can read at least at class III level.
10> The adult literacy rate is the proportion of the population 15 years of age and above who can read at least at class III level.
11> Twenty (20) crucial public health issues against which awareness (knows or does not know) was measured included: symptoms of diarrhoea (01), preparation of oral rehydration solution/labon-gur-sarbat (ORS/LGS) (02), symptoms of acute
respiratory infection (ARI) (03), child vaccination against 6 diseases (04), place to go for a child's vaccination (05), place to go for an ANC check-up (06), five danger signs of pregnancy (07), place to go for emergency obstetric care (EOC) (08), need for a PNC check-up (09), prevention of goitre using iodized salt (10), names of three sexually transmitted diseases (STDs) (11), place to go for treatment of STD (12), what HIV/AIDS is (13), how HIV transmission can be stopped (14), effect of arsenic in drinking water (15), avoidance of the arsenic problem (16), reason for night-blindness in children (17), place to go for TB treatment (18), place to go for leprosy treatment (19), and necessity of using a sanitary latrine (20).
12> Medically competent persons (MCP) include MBBS doctors, Family Welfare Visitors (FWV), Nurses, Medical Assistants (MA), Sub-assistant Community Medical Officers (SACMO), and other paramedics.
13> Medically trained persons (MTP) include MBBS doctors, FWVs, Nurses, MAs, SACMOs, other paramedics, and trained traditional birth attendants (TTBAs), but not untrained traditional birth attendants (UTBA).
14> According to the most recent Demographic and Health Survey (BDHS 1999-2000), almost all births (92%) in Bangladesh occur at home; 12.1% of births are assisted by medically competent persons and 21.8% by trained persons; antenatal check-up coverage by a medically trained provider is 33%; and TT coverage (2 or more TT injections) is 64% (BDHS, 1999-2000: 112, 114-115, 117-118). These recent national figures are useful for comparison with our survey findings. However, a caveat to note is that the BDHS data quoted above are national data (not rural ones); findings from this survey are mostly rural and were collected recently.
15> The situation is extremely distressing and unacceptable because each year about 600,000 pregnant women in Bangladesh develop reproductive morbidities, which diminish women's fertility, productivity, quality of life, and the health and survival of the next generation. See: Government of Bangladesh (1999), Population and Development - Post ICPD Achievements and Challenges in Bangladesh, MOHFW, presented at the Special Session of the UN General Assembly, NY, June 30 - July 02, 1999: 195.
16> The infant mortality rate (IMR) is the probability of dying before the first birthday. IMR is a key reflector of a country's level of socio-economic development and quality of life. IMR is usually associated with antenatal care, delivery care, breastfeeding practices, immunization status, and the nutritional status of the would-be mother, among others. IMR is a powerful determinant of life expectancy as well, especially in a country with high IMR. The national average IMR for 1998 was 57/1000 live births, with 47 for urban and 66 for rural areas (GOB 2002c: 140). The data are based on the Birth-Death Sample Registration. The IMR quoted in BDHS 1999-2000 for the five-year period preceding the survey is 66.3/1000 live births (BDHS 1999-2000: 101).
17> Among the total households in the electrified villages (in Bangladesh), the electrified households constitute 34.8%, and non-electrified households the rest, 65.2%. Full immunization includes a BCG vaccination against tuberculosis; three doses of DPT vaccine for the prevention of diphtheria, pertussis (whooping cough) and tetanus; three doses of polio vaccine; and a vaccination against measles (the WHO-recommended guideline).
18> The national contraceptive prevalence rate for 1999-2000 was 53.8%, with 60% in urban and 52.3% in rural areas (BDHS 1999-2000: 51, 52).
19> Seven gender equality issues against which awareness was measured include: (1) equality of men and women in terms of access to resources; (2) equality of men and women in terms of wages and employment; (3) women trafficking: a punishable criminal offence; (4) child trafficking: a punishable criminal offence; (5) acid throwing: a punishable criminal offence; (6) informed choice in family planning use; and (7) the right to participate in elections. An overall knowledge coefficient (ranging between '0', denoting no knowledge, and '1', denoting 'have knowledge') has been constructed to show the aggregate knowledge of women about the 7 gender equality issues. One minus this coefficient is the knowledge gap. The higher the coefficient, the greater the knowledge and the smaller the gap.
20> Methodological notes: The HDI is a simple average of the life expectancy index, the educational attainment index and the adjusted real GDP per capita index, and so is derived by dividing the sum of these three indices by 3. Each of the indices is computed according to the following general formula:
Index = (Actual Value - Minimum Value) / (Maximum Value - Minimum Value)
For the construction of these indices, fixed minimum and maximum values have been established for each of the indicators:
- Life expectancy at birth: 25 years and 85 years
- Adult literacy: 0% and 100%
- Combined gross enrolment ratio: 0% and 100%
- Real GDP per capita (PPP$): $100 and $40,000 [the discounted value of $6,154 (PPP$) has been used for the upper limit (see below)].
The construction of the income index is a little more complex. The world average income of $5,835 (PPP$) in 1994 is taken as the threshold level, and any income above this level is discounted using a formula based on Atkinson's formula for the utility of income. Using that formula, the discounted value of the maximum income of $40,000 (PPP$) is $6,154 (PPP$). Since Bangladesh's real GDP per capita is less than the threshold level, it needs no adjustment. Given the variations in the estimates of per capita GDP in PPP$ provided in HDRs and WDRs, the following method was adopted. The per capita GDP (PPP$) figure for 1996/97 was taken from the Human Development Report 1997; HDR 1997 gives a figure of 1331 for 1994. This figure was updated using the national growth rate in GDP over 1994-1996/97 to derive the real GDP per capita for 1996/97. Similarly, the 1998/99 figure was derived by applying the national growth rate in GDP actually observed during 1996/97-1998/99. A sketch of this index arithmetic appears after the notes below.
1> According to the Bangladesh Population Census 2001 (Preliminary Report published by BBS, August 2001), 76.6% of the total 24,924,613 households (dwellings) are rural households, distributed over 68,000 villages. The rural electrification program covers 34,936 villages with a total of 3,413,825 domestic connections (REB MIS, June 2002).
2> Source: Statistical Pocketbook Bangladesh 2000, BBS 2002.
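As a companion to note 20>, here is a minimal sketch of the index arithmetic. The goalposts are those listed in the note; the sample inputs are illustrative placeholders rather than the study's published values, and the 2/3-1/3 weighting of literacy and enrolment in the educational attainment index follows the UNDP methodology of that period, which the note does not spell out.

```python
# Minimal sketch of the HDI arithmetic described in note 20>.
# Goalposts are from the note; the sample inputs are illustrative placeholders.

def index(actual, minimum, maximum):
    """General index formula: (actual - min) / (max - min)."""
    return (actual - minimum) / (maximum - minimum)

# Goalposts from note 20>
LE_MIN, LE_MAX = 25.0, 85.0        # life expectancy at birth (years)
EDU_MIN, EDU_MAX = 0.0, 100.0      # literacy / enrolment (%)
INC_MIN, INC_MAX = 100.0, 6154.0   # real GDP per capita (PPP$), discounted ceiling

def hdi(life_expectancy, adult_literacy, gross_enrolment, gdp_per_capita_ppp):
    life_index = index(life_expectancy, LE_MIN, LE_MAX)
    # Educational attainment: 2/3 adult literacy + 1/3 combined gross enrolment
    # (UNDP weighting of that period; an assumption, not stated in the note).
    edu_index = (2 * index(adult_literacy, EDU_MIN, EDU_MAX)
                 + index(gross_enrolment, EDU_MIN, EDU_MAX)) / 3
    # Income below the $5,835 threshold needs no Atkinson discounting.
    income_index = index(gdp_per_capita_ppp, INC_MIN, INC_MAX)
    return (life_index + edu_index + income_index) / 3

# Purely illustrative inputs
print(round(hdi(60.0, 45.0, 50.0, 1450.0), 3))  # prints 0.424
```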
SUSTAINABLE NUCLEAR ENERGY - SOME REASONS FOR OPTIMISM
RICHARD WILSON
Harvard University, Department of Physics, Cambridge, USA

INTRODUCTION - THE NEED FOR NUCLEAR ENERGY
If we accept that it is necessary to reduce the CO2 concentrations in the air to prevent or cope with global warming, and assume, as seems reasonable, that these concentrations are related to carbon burning as a result of fossil fuel consumption, we must either reduce this fossil fuel consumption or sequester the carbon dioxide for a long period. In the first approach we may either drastically reduce any use of fuels and other uses of energy in man's activities (which may be done either by restraint or by more efficient use), or switch from fossil fuels to other fuels. There are no reasonable projections that efficient use and renewables can make any appreciable dent in the near future. Numerous experts, of whom I quote only two, agree that CO2 production will increase in the next 25 years. The Energy Information Administration1 projects a 30% or so increase in energy use, even though a further reduction in energy intensity of 2/3 is anticipated in developed countries, as shown in Table 1 from the International Energy Office. OPEC is also projecting a similar increase in oil consumption - and oil can be available at a price of $40 a barrel2. Dr Hisham Khatib presented a summary of these at this meeting. Thus the options are:
(i) Restraint in any energy use
(ii) Efficiency in energy use
(iii) Sequestering carbon
(iv) Switch to nuclear fission
(v) Switch to nuclear fusion
(vi) Switch to hydropower
(vii) Switch to windpower
(viii) Switch to other "renewable" resources.

NUCLEAR POWER: A BRIEF HISTORY OF THE DREAM AND THE NIGHTMARE
One of the dreams of nuclear physicists ever since the discovery of nuclear fission and the possibility of a chain reaction in spring 1939 has been that it can be the basis for a sustainable long-term energy future. When combined with Fermi's idea of a breeder reactor to get the most out of the fuel, it can easily be shown to produce enough for 100,000 years at moderate cost [REF]. Glenn Seaborg, when Chairman of the Atomic Energy Commission, had emphasized to the Joint Committee on Atomic Energy of the U.S. Congress in 1968 that nuclear fission can help to avoid global warming. Even before 1975 each step in that process had been tested: mining, fuel fabrication, reactor, waste processing, sequestration. The country was enthusiastic. But in the early 1970s, public perception changed with disastrous effects on the burgeoning nuclear power industry. Since 1975 environmentalists have urged that the U.S. and the world return to various "renewable" resources (vi, vii and viii) and some claim that this can solve the problem. However, the rate of introduction has been slow and it is widely believed
that they cannot cope with the problem. Nuclear fusion (v) has not yet been shown to be feasible on any scale. Sequestering carbon (iii) will certainly be effective on a small scale (capturing CO2 and selling it to oil companies for secondary and tertiary recovery of oil will certainly be cost effective, but the cost is certain to increase with increasing amounts of carbon sequestered and may soon be excessive). I assume here that no one technology can do the whole job, and a bit of them all may be necessary. In this sense I urge that nuclear fission once again be considered as an important component of any future energy mix, on a par with all other non-fossil technologies. Although there are reasons for the world's rejection of nuclear power from 1980 on, I contend, as have other commentators3, that only one objection has any validity (contribution to potential proliferation of nuclear weapons), and that is debatable and should be controllable. I illustrate the differences between those who rejected nuclear power and those who accepted it by two quotations. In Daniel Ford's view the U.S. nuclear power program is "the most ambitious, expensive and risky industrial venture ever undertaken."4 On the other hand, Samuel McCracken concluded that "nuclear energy is environmentally the most benign of major energy sources except natural gas. The most benign in terms of public health [and] major accidents and the only major source able, over a long period of time, to give us large amounts of flexible energy."5 I find it hard to believe that these two men were discussing the same subject!
In a talk I gave at a meeting on global warming in 1991, I noted the very unfavorable public opinion and some of its bad consequences. I predicted that all the nuclear power plants in the USA would be abandoned as their licenses expired and there would be no nuclear power plants left by 2025. I even commented, not entirely in jest, that the Nuclear Regulatory Commission might still be expanding at the rate of 4.8% per year as suggested by Parkinson's first law6. Matters got worse until about 1995, and since then a number of reasons have changed my pessimism into cautious optimism. What had gone wrong and what has now gone right? Will it stay that way?

PUBLIC VISIBILITY
The nuclear enterprise has been much more open than many comparable enterprises. Most technically minded people have been able to obtain information and data from the AEC and nuclear engineers when they ask7. But utility management often failed to interact positively with the public, and many executives failed to understand the technical issues and hid this failure from the public. This inevitably led to mistrust among the non-technical public. Public opposition affects almost every aspect of the nuclear enterprise. Before 1970, there was general support among the U.S. public. Public hearings took less than a day and were usually unopposed. After 1970 opposition increased rapidly. The U.S. licensing system had deliberately been set up to allow many places for the public to express their views. Express them they did in formal public hearings, making these hearings last from hours in the 1960s to years in the 1980s. It is evident that regulators of the industry, both safety regulators and economic regulators, are strongly influenced by the public. Other effects of public concern and opposition are more subtle but nonetheless real.
Opposition increased dramatically after the Three Mile Island accident in 1979, although, unlike most industrial accidents, this accident killed nobody and gave nobody an excessive radiation dose. It increased still further after the Chernobyl
accident in the Ukraine in 1986, even though that accident occurred at a power plant of a design that would not have been acceptable in any western country, and which was operated in a manner without the regard given in the west to learning from industrial experience. One leading Russian scientist commented that the accident proved that their political system could not handle modern technology8. I have described elsewhere how the secrecy inherent in their society made such an accident inevitable9.
There were no public opinion polls in the 1970s. The above statements are therefore based on less quantitative justifications. My personal experience is that students at Harvard University, who from 1975 to 1995 considered that "safe and environmentally friendly" nuclear power is an oxymoron, were critical but positive by the year 2000, and are now saying: "why not?" The change since the early 1990s is shown in public opinion polls. The answers to polls depend critically on the exact wording of the question. I noted many years ago that polls seemed to find a hard core of 20% opposition, and a hard core, mostly of scientists, who were strongly in favor - with 60% in the middle whose answer depended critically on the question. To avoid the problem that the answer depends upon the detailed question, I take the polls from one organization - Ann Bisconti's research in the USA - and only examine the trend. Figure 1 shows the trend in the answer to the question "Do you strongly agree, or do you somewhat agree that we should build more nuclear power plants?" Figure 2 shows the (2003) response to the question: "US DOE and electricity companies should work together to build state-of-the-art nuclear power plants that can be built to meet public demand". European support is similar. Table 1 shows the response in several European countries to the question: "If the waste is managed safely, nuclear power should remain an option for electricity production in the European Union". The public responses from France and Italy are very similar, although France has developed nuclear power and Italy, after a public referendum 20 years ago, decided not to do so. The French government support is usually attributed to their centralized governmental structure as compared to the federal system in Germany. Support in Germany likewise is similar in spite of their official governmental opposition. The surprising result for me is the large support in Sweden - a country whose government officially decided 20 years ago to close down all plants and whose prime minister stated in 1986: "Nuclear power is one of the greatest threats to our environment... Nuclear power must be gotten rid of."10
One must not exaggerate the change in public opinion. The question posed in Figure 2 was a general one for the future. When it comes to a specific plant at a specific site, the answers are different11. In general, those in the specific locality tend to be in favor, because the facility pays real estate taxes on an expensive facility, but those further removed (for example at the state level) tend to object. NIMBY (not in my own back yard) does not therefore seem to apply, but NIMNBY (not in my neighbor's back yard) does12. Tentative locations for new nuclear plants in the USA are locations where there is already an existing power plant.
In 1991, it was felt that the operating licenses for nuclear reactors, which would be coming up for review in the 2000-2020 period, would meet with the intense public opposition that the initial operating licenses met in the 1980s. But this has not turned out to be the case. Operating licenses, which were typically for 40 years, are now being extended to 60 years without appreciable public opposition. Many scientists and engineers think that the extension can be for an even longer period. Public perception and public opposition would not matter if they did not impact cost.
COST
Eugene Wigner reminded us in 1975 that, if nuclear power is more expensive than other fuels, it will not be used. But that was before opposition arose. Moreover, costs depend upon the boundary conditions and upon the way they are calculated. These have changed. In this I urge a look at history. Indeed, I urge the proponents of any resurgence in a technology to look at history and explain why matters are now different. It is a historical fact that in 1973 nuclear-powered electricity was cheaper than other sources of electricity in the USA. Connecticut Yankee was producing electricity at 0.55 cents/kWh, including paying off the mortgage on the capital investment.13 Capital costs were less than $200 per kilowatt electric (kWe).14 The cost calculations of Virginia Power and Light, made when they decided to build a nuclear rather than a coal plant (at Surry), are shown in the last column of Table 2.15 Note that they used a charge on capital of 13%, which is higher than that used for many governmental projects. For example, France has often used 5%.

CAPITAL COSTS
Since 1973, there have been many improvements in the existing technology that should have made nuclear power safer and cheaper. Although the consumer price index only went up a factor of 4.5, capital costs went up after 1973 over 10-fold, from $200 per kWe to $2,000 per kWe (and for badly managed plants to $5,000 per kWe), and operating costs a factor of 50, from 0.04 cents per kWh to 2 cents per kWh - both more than inflation. Why has there been this dramatic increase in cost? For most technologies, there has been a modest improvement in cost with time as engineers and others learn to cut costs. This is known as a Learning Curve. For nuclear power we have had a Forgetting Curve! Several reasons have been suggested:
(a) In 1970 manufacturers built turnkey plants or otherwise sold cheap reactors as loss leaders. But this can only account for a small proportion of the capital cost;
(b) Construction costs generally have risen since 1970 even when corrected for inflation;
(c) It may be that in 1972 we had good management and good technical people. But why has management got worse when that has not been true for other technologies?
(d) Operating costs rose rapidly in the 1970s because the rate of expansion of nuclear energy exceeded the rate of training of good personnel;
(e) A sudden rise in costs came in the late 1970s after the accident at Three Mile Island Unit II;
(f) Although mandated retrofits have been blamed for cost increases, this applies to existing plants, not to new construction.
Although it is abundantly clear that poor management has been the reason for much of the problem, I contend that a large part has been a reaction to the unfavorable public opinion, including some excessive regulatory requirements that were imposed in response to the public. In the following sections I will detail some of these and how they have changed in the last seven years. (Reasonable) projections are that the capital costs for new nuclear plants (with the newer designs) will be $1,400 per kWe for the first plant in a series and $1,000 per kWe after the first of a type. Operating costs have also been declining slightly, and plant availability has increased markedly. I put these numbers in the first column of Table 2 and suggest that nuclear power will
be cost competitive against all other sources, provided that the present favorable climate continues.

OPERATING COSTS
Management and operation costs have had an even more dramatic increase than capital costs, one that will be hard to reverse. The cost is even higher than the 2004 number of 1.4 cents/kWh (compared with 0.04 cents/kWh in 1971), because the present 1.4 cents/kWh is inversely proportional to the availability, and the availability has improved. Various plant operators have described factors that caused the increase. The number of security guards at Point Beach, Wisconsin, went from three to over 100 in 10 years. Dresden power plant went from 250 staff in 1975 to over 1,300 (Behnke, 1997).16 The increasing attention to sabotage and terrorism ensures that the increase in security personnel will stay, and other increased staff carry out maintenance during operation. Nonetheless the operating cost is manageable.

CONSTRUCTION DELAYS
In 1980 the time between approval (issuance by the Nuclear Regulatory Commission of a Construction Permit) and completion (issuance of an operating license and connection to the electricity grid) had increased from the 3 years of Connecticut Yankee to 6 to more than 12 years. (A simple average of construction times is infinite and means nothing because some plants were never completed!) This increase in time was due almost entirely to public opposition in the licensing process. This added considerably to cost. The original intention was that major aspects of nuclear power and of the plant design would be decided (and litigated if necessary) before the construction permit was issued, leaving only a discussion of whether the construction proceeded according to design, and whether any variations from that design made a difference in safety. But public opposition in the operating license phase prevented this. It is well known that, to build a project cheaply, it should be built fast. This keeps labor costs down. Moreover, a major cost item is always interest on capital expended during construction. The interest rate had increased from the 13% assumed in 1973 to 17%. Many power plant operators had ordered the expensive item, the reactor vessel, too early, and for them the interest cost loomed large. For some plants this interest was half the total capital cost. Careful scheduling, improvements in public acceptance and other improvements suggest that these costs can be brought down. But most importantly, the licensing process has been changed to "one stop" licensing, with most details accepted early in the process. It remains to be seen whether this will be opposed and, if so, whether the procedure will survive the opposition.

NUCLEAR FUEL INTEGRITY
The problem with nuclear fuel is that during operation it undergoes a dramatic change as the uranium changes to plutonium and fission products fill any gaps in the structure. In 1973 the nuclear fuel in the fuel rods was not always able to maintain integrity, and in common parlance "fell apart" after a certain "burn up" (in the peculiar units engineers use, after 20,000 MW-days per ton). Now nuclear utilities regularly attain a "burn up" of 42,000 MW-days per ton and future designs anticipate 100,000 MW-days per ton. Not only does that improvement cut fuel fabrication costs,
and waste disposal costs in half, it enables a longer period between shutdowns for fuel changes. As discussed further below, it reduces the need or desire for a controversial breeder reactor for the near (100 year) term. Associated with the improvements in fuel integrity are improvements in radioactivity containment. The original power plants were designed to allow 1% of fuel rods to leak. Now 70% of power plants operate without leaks. This improves employee radiation safety. The above and other methods have steadily reduced radiation doses to employees. This improvement in fuel rod integrity is a major factor in my cautious optimism.

SAFETY REGULATION
It has been said that the power to regulate is the power to destroy. This has certainly been the experience of the nuclear industry. Regulation of many industries increased rapidly during the 1970s. In 1970, when Maine Yankee was being licensed, there were 91 permits to be obtained, including, for example, a permit to discharge sewage. By 1975, this had risen to over 400 permits per plant! But there are only two regulatory authorities of importance: the federal Nuclear Regulatory Commission (NRC) and the various state Public Utility Commissions. The NRC regulates safety, including radiation safety, although that radiation safety is often delegated to the states when the states show adequate competence. Regulatory authorities, especially state authorities, are sensitive to public opinion and have often been very assertive of their power and their duty. A power plant can earn $1,000,000 in electricity sales each day and the incentive to keep the plant operating is great. Correspondingly, the power of the regulator is great. This has often been used in undesirable ways. For example, in spring 1978, concern was raised about the ability of piping to withstand an earthquake. The NRC insisted on a shutdown of the affected plants while this was being investigated. For most plants no change was needed and it was clear that the calculated effect on safety was small. In 1996, Northeast Utilities, operator of four nuclear power stations, including Connecticut Yankee, which as noted above had generated cheap electricity in the past, was in trouble. The original specific problem appears to have been the movement of a larger number of fuel bundles from the reactor to the spent fuel pit during shutdown than envisaged in the technical specifications. Although the calculated effect on accident probability was small and well within the NRC's safety goals, they were clearly in violation of the specific specifications. Moreover, Northeast Utilities appear to have lied to NRC regulators about it. Other similar problems were found and the Chairman of the Nuclear Regulatory Commission insisted on a shutdown - which ended up in a two-year shutdown of two plants and a permanent shutdown of two others. This was a draconian response which was very costly in money and in air pollution from substitute power. In a Senate hearing17, I likened this to a bus driver regularly driving down Fifth Avenue at 5 mph more than the speed limit while the policeman on the beat waved at him. A whistleblower insisted on action, so the whole bus system was shut down for two years. But by 2004 the regulatory procedure had changed. The Commission has now endorsed "risk-informed regulation". Rigorous regulation is only used for those aspects of the system that have a large impact on safety as measured by the calculated accident probability.
The situations just described would probably not have led to plant shutdown, but to regulatory action short of shutdown. Part of the reason for the change is in the thinking of the regulatory body and a part is due to the changed thinking of engineers generally. Many older engineers preferred the definiteness of a strict
regulation to the calculation procedure of Probabilistic Risk Assessment (PRA). Associated with this risk-informed regulation is a realisation by industry and regulators alike that a well-operated plant is a safe plant. The World Association of Nuclear Operators (WANO) has developed a number of indicators of plant performance. These include, for example, unplanned shutdowns - "scrams". These have been reduced over the last decade, as shown in figure 4, with a consequent reduction in stress on the plant and, according to the PRA, an improvement in safety. But the situation could change back.

ECONOMIC REGULATION
For most of the 20th century, public utilities in the USA were monopolies. In exchange for this monopoly, they were subject to state economic regulation by Public Utility Commissions. Utilities were only allowed to collect a fixed percentage of their capital assets - usually about 6%. Although the state Public Utility Commissions were only supposed to regulate the economics, this was interpreted "liberally" and in many cases the PUCs used their considerable power to prevent nuclear power plants coming on-line. This power was exercised in subtle ways. The most evident example was the refusal of the PUC of New York State to allow a rate increase to the Long Island Lighting Company until they abandoned the Shoreham Nuclear Power plant, which had just (in April 1989) obtained a full power license from the NRC after a long battle with public opposition. But, in the late 1990s, all has changed. In most states, the electricity system has been deregulated to some degree and the utility companies have sold their power plants to independent generators. The power plant operator is now subject, of course, to ordinary economic competition, but no longer subject to the unpredictable behavior of the PUCs.

MISCELLANEOUS IMPROVEMENTS
Although there appears to have been no learning curve where cost is concerned, there is a learning curve in safety - as measured by a number of indicators from the Institute of Nuclear Power Operations (INPO) and its world equivalent, the World Association of Nuclear Operators (WANO), and by the calculated accident probability. The criticisms of the PRA approach of Rasmussen in the 1970s have either been taken into account or demonstrated to be invalid. This, and the absence of severe accidents for 15 years, have in themselves had an effect on public perception. Small improvements in individual designs have been made that enable small (10%) power increases at existing plants. These are being made with little or no public opposition. This shows up in improved operating efficiency.

PLANT AVAILABILITY
Many of the improvements above - in management competence, in fuel integrity, in safety and in regulatory behavior - have shown up in improvement in the availability of a nuclear power plant and hence in the cost. As shown in Table 1, Virginia Power and Light assumed that the power plant would operate 65% of the time (65% availability) - a figure achieved by the first plants. This was increasing with time but fell again to 60%, and stayed this way till 1990. Since the cost of the fuel consumed is relatively small, it is evident that the costs, both capital and
operating, of nuclear electricity are (approximately) inversely proportional to this availability. Even in the much-vaunted French nuclear program, the availability was only just over 57%18. A part of this decrease in availability before 1990 was certainly bad operating practice, but another part can reasonably be attributed to a regulatory environment that demanded unnecessary shutdowns for small infractions. Figure 3 shows how plant availability has increased from 65% to 92% over the last 15 years. This in itself leads to increased safety, because it is safer to run at full power than to continually go up and down.

SUSTAINABILITY OF NUCLEAR FUEL SUPPLY
I now return to the purpose of this talk. I maintain that nuclear energy is a source that is sustainable over a long period. For how long can the nuclear fuel supply be maintained? As with all minerals, the supply depends upon the price. In 1971 it was felt that an increase in fuel cost of 0.5 cents/kWh was excessive, and that it was imperative to start a rapid program to develop a breeder reactor that can use more of the fuel in the original ore by converting U-238 to Pu-239. Such a program would enable us to use uranium ores that are 50 times as expensive as those presently used and extend the availability of fuel almost indefinitely. This program, however, necessitates chemical processing of the fuel. With the usual aqueous processing, this makes available large quantities of pure plutonium separated from the higher actinides and fission products. This would make control of weapons proliferation more difficult. However, in this respect matters have changed19. Firstly, 0.5 cents per kWh is no longer considered excessive. Secondly, the cost of fuel remains the same in absolute terms and is therefore down a factor of three when corrected for inflation. Thirdly, as noted above, the increase in fuel "burn up" implies that more use is made of a given amount of ore, with of course plutonium being burnt at the later times. Fourthly, exploration for uranium has virtually ceased since there are 20 years of supply at the present rates of consumption. It is likely that more uranium will be found at a modest price. The combination of all of these should enable an expanded nuclear power program to proceed for at least the next 50 years without a breeder reactor, and longer if the cost of uranium from seawater comes down. That will enable a more cautious approach to the development of the breeder, probably including electro-refining of the fuel so that plutonium is never separated from the actinides and fission products.
A CAUTIOUS SUMMARY
I have outlined some reasons for optimism about the future of nuclear power and its ability to supply electricity for the foreseeable future. This change in my thinking since 1991 has been due to a number of factors: (1) improvement in public perception and reduction of public opposition; (2) improvement in fuel behavior; (3) risk-informed regulation; (4) steady safety improvements; (5) improved plant availability; (6) improved designs. But no new plant has been ordered in the USA for nearly 30 years. Although two consortia are discussing new plants with the NRC, these have not yet begun.
According to IAEA figures shown in Table 3, the number of new plants under construction is only 10% of those operating, and those are in Asia, leading to an increase of only 2% a year even if no existing plants are shut down. That is not enough to make a dent in the global warming problem. Will the optimism lead to new construction? There remains one critical facet of public perception - it is perceived that there is no way to dispose of the waste. Indeed, the question in the European public opinion poll of Table 1 presupposed a solution to this problem. While technically minded people keep asserting that this is the easiest waste problem in society to solve, and scientific committees have insisted that there is no scientific obstacle to a good solution20, politicians, following uninformed public opinion, have delayed a "solution" while insisting that nothing proceed till a "solution" has been found. Two aspects give me some optimism that this logjam will be removed. One is that contact-handled nuclear waste from the U.S. military is being safely disposed of at the Carlsbad salt mine, and it is likely that remote-handled waste soon will be. Secondly, the Yucca Mountain disposal site has been accepted, subject to Nuclear Regulatory Commission approval. Legal objections to the right of the U.S. Congress to decide on a waste site in a specific state have been declared invalid by the courts21, and a Committee of the National Academy of Sciences has outlined reasonable steps to be taken22. (However, the court held that a calculation limited to 10,000 years is inadequate, and an appeal of that decision was rejected.) The Nuclear Waste Technical Review Board raised a major concern that the fuel canisters were likely to corrode, but reversed itself after Department of Energy scientists produced evidence on that point23.
I believe that new nuclear power plants need a "shot in the arm" to stimulate the plant owners. Of the three major attempts to cope with climate change - renewables, carbon sequestration, and nuclear energy - only the first is now receiving subsidies and tax breaks. For example, neither sequestration nor nuclear power is included in the "clean development mechanism" favored by economists and adopted in part by the European Union. All three carbon dioxide reduction technologies should be treated equally. I have always preferred a carbon tax, charge or permit scheme24, but a subsidy for development might achieve much of the desired end. In the USA, a carbon charge, or any new energy tax, seems unpopular in 2004, and the present policy of the U.S. Department of Energy is to subsidize the first plant of a series to help test the new licensing process, and respond effectively to any opposition. Two independent groups have come out with statements along these lines. A report on the future of nuclear power from MIT, chaired by two professors who had worked in Washington, Dr Deutch and Dr Moniz, stated: "We recommend that incremental nuclear power be eligible for all 'carbon free' federal portfolio standard programs"25. The Atlantic Council of the United States stated inter alia: "In implementing international emission trading programs, credit should be given to nuclear power facilities for their contribution to the reduction of greenhouse gas emissions"26.

EPILOGUE
This and other papers were presented to the Permanent Energy Monitoring Panel of the World Federation of Scientists on August 19th 2004.
After careful deliberation the panel stated: "...we recommend that governments and international agencies treat all non-carbon energy technologies on a par with each other, with access to similar subsidies and benefits of removal of financial market barriers, so that
380 improved versions of all these technologies can rapidly be utilised for achieving stabilisation of greenhouse gas emissions while meeting energy demand.”
FIGURES AND TABLES

Figure 1. Trend of public opinion of nuclear power in the USA: per cent who strongly or somewhat agree that we should build more nuclear power plants, with the "Favor" and "Oppose" trends shown. (From Ann Bisconti Research.)

Figure 2. Public opinion about how to construct new nuclear power in the USA: responses (strongly agree, somewhat agree, somewhat disagree, strongly disagree, not sure) to the statement "U.S. Department of Energy and electric companies should work together to develop state-of-the-art nuclear power plants that can be built to meet new electricity demand." (From Ann Bisconti Research.)
Table 1: A 2003 public opinion poll in Europe in response to the question: "If the waste is managed safely, nuclear power should remain an option for electricity production in the European Union". (From Ann Bisconti Research.)

Country    Strongly  Tend to  Tend to   Strongly  Don't   (unlabelled
           agree     agree    disagree  disagree  know     column)
B            13.1     46.9     11.0      23.1      5.9      2.87
DK           29.5     24.7     13.4       7.1     25.3      2.63
D-W          12.7     33.3     21.7      18.3     14.0      2.55
D-Total      12.5     35.1     20.7      18.6     13.1      2.58
D-E          11.9     42.1     16.7      19.7      9.6      2.70
GR           19.0     29.4     13.9      29.6      8.1      2.84
E             9.0     22.6     17.9      40.6     10.0      2.52
F            15.8     43.4     13.6      18.6      8.7      2.81
IRL           7.2     30.3     14.8      37.0     10.7      2.54
I            13.7     40.8     11.4      27.5      6.6      2.85
L            14.9     40.5     18.8      12.9     12.9      2.66
NL           30.8     29.4     10.0      15.2     14.6      2.90
A             8.0     16.6     23.2      13.6     38.6      1.93
P             5.7     32.5     12.4      41.8      7.6      2.62
FIN          26.5     38.5     14.7      10.8      9.5      2.92
S            47.3     26.3      9.9       8.5      8.0      3.23
UK           14.0     38.6     13.0      26.5      7.9      2.80
EU 15        14.9     35.6     15.1      24.0     10.4      2.72
Figure 3. The increasing availability of nuclear reactors according to NRC records: the US nuclear industry is achieving record levels of performance (1980-2003). (Source: NRC, updated 02/04.)
Table 2. Projected Busbar Costs of Nuclear Energy at various times (uncorrected for inflation). 1971 costs from Benedict (1971) as calculated by Virginia Power and Light.

Description                                          ?2004?    2002     1971
Unit investment cost of plant, dollars per kW        $1,400    $1,700   $255
Annual capital charge rate per year                    0.13      0.13     0.13
Kilowatt-hours generated per year per kW capacity     8,200     7,446    5,256
Cost of electricity, cents/kWh:
  Plant investment                                      2.22      2.97     0.63
  Operation and Maintenance                             1.3       1.50     0.04
  Fuel                                                  0.18      0.21     0.19
Total                                                   3.7       4.68     0.86
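The plant-investment entries in Table 2 can be reproduced from the other rows of the table: the capital component of the busbar cost is the unit investment cost multiplied by the annual capital charge rate and divided by the kilowatt-hours generated per year per kilowatt of capacity. The short Python sketch below is an illustrative check only, not part of the original paper; the function and variable names are mine.

    # Illustrative check of the plant-investment component of the busbar cost (Table 2).
    # cents/kWh = unit investment cost [$/kW] * annual capital charge rate [1/yr]
    #             / annual generation per kW of capacity [kWh per kW per yr] * 100
    def plant_investment_cost(unit_cost_per_kw, charge_rate, kwh_per_kw_year):
        """Capital component of the busbar cost, in cents per kWh."""
        return unit_cost_per_kw * charge_rate / kwh_per_kw_year * 100.0

    cases = {"?2004?": (1400, 0.13, 8200),
             "2002": (1700, 0.13, 7446),
             "1971": (255, 0.13, 5256)}
    for year, (cost, rate, kwh) in cases.items():
        print(year, round(plant_investment_cost(cost, rate, kwh), 2))
    # Prints approximately 2.22, 2.97 and 0.63 cents/kWh, matching the table.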
Figure 4. Projected Uranium Reserves: total resources versus unit cost UC ($/kgU, logarithmic scale from 10 to 1000). Modified from OECD (1998), "Nuclear Power and Climate Change," Organization for Economic Co-operation and Development (OECD) report (April).
Table 3. Nuclear Power Additions and Subtractions, January 1st 2003 to June 30th 2004 (data from IAEA)

Total operating: 440 nuclear power plants, with total generating capacity 362 GWe; under construction: 31 nuclear power plants.
Connected in 2003: Qinshan 3-2, a 665 MW(e) PHWR in China; Ulchin 5, a 960 MW(e) PWR in S. Korea.
Reconnected in 2003: Pickering 4, a 515 MW(e) PHWR in Canada; Bruce 4, a 790 MW(e) PHWR in Canada.
Construction started during 2003: Rajasthan 6, a 202 MW(e) PHWR in India.
Shut down during 2003: Stade (KKS), PWR, 640 MW(e), Germany; Calder Hall A, B, C, D, GCR, total 250 MW(e), UK; Fugen ATR, HWLWR, 148 MW(e), Japan.
Connected first half of 2004: Qinshan 2-2, a 610 MW(e) PWR in China; Hamaoka 5, a 1325 MW(e) ABWR in Japan.
Reconnected first half of 2004: Bruce 3 NPP, Canada.
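As a rough check on the statement in the text that the plants now under construction would add only a few percent of nuclear capacity a year, the counts in Table 3 can be combined with an assumed construction period. The sketch below is a back-of-the-envelope illustration only; the four-year build time is an assumption of mine, not a figure from the paper, and it treats all plants as having similar capacity.

    # Back-of-the-envelope illustration; the build time is an assumed value.
    operating = 440          # plants in operation (Table 3)
    under_construction = 31  # plants under construction (Table 3)
    build_time_years = 4.0   # assumed average construction period

    fraction = under_construction / operating
    annual_addition = fraction / build_time_years
    print(f"under construction / operating: {fraction:.0%}")               # roughly 7%
    print(f"approximate annual capacity addition: {annual_addition:.1%}")  # under 2% a year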
REFERENCES
1. International Energy Outlook 2004, Energy Information Administration.
2. A. Shihab-Eldin, M. Hamel, and G. Brennand, "Oil Outlook to 2025," OPEC Review 28(3), 155-201.
3. Sir Alan Cottrell, How Safe is Nuclear Energy? Heinemann (1982).
4. Ford, D. (1982), The Cult of the Atom, Simon and Schuster, NY.
5. McCracken, S. (1982), The War Against the Atom, Basic Books, Inc.
6. McKinley, I.G. (1992).
7. Parkinson, C. Northcote, Parkinson's Law, and Other Studies in Administration, Boston, Houghton Mifflin (1957).
8. In 1972, when I first criticized nuclear energy, I wrote to the Chairman of the Atomic Energy Commission, Dr Glenn Seaborg. He replied with a personal telephone call in 2 days and invited me to spend 2 days at the AEC. He introduced me to each relevant Assistant Secretary with the comment "Now you know him and his telephone number he will answer any question you wish". Which they did. The AEC during this period was often called secretive. But compare with the response I received from the Chairman of NRC 25 years later.
- Communication to the author by Dr Sergei Kapitsa in 1987. Andrei Sakharov in 1987 also made similar comments.
- "Chernobyl and Glasnost: The Effects of Secrecy on Health and Safety," A. Shlyakhter and R. Wilson, Environment 34, 25-30 (1992).
- A. Shlyakhter and R. Wilson, "Chernobyl: The Inevitable Results of Secrecy," Public Understand. Sci. 1, 251-259 (1992).
- Prime Minister Carlsson quoted in the NY Times, Summer 1986.
- "Public: Nuclear Plants OK, but Not Around Here," Darren K. Carlson, Government and Public Affairs Editor, Gallup Organization (March 30, 2004), http://www.gallup.com/content/login.aspx?ci=11140.
- "Constraints Limiting the Expansion of Nuclear Energy" (1995), Alexander Shlyakhter, Klaus Stadie, and Richard Wilson, Global Strategy Council, Washington DC.
- Cost figures given to the author in 1973 by William Webster, Chairman of New England Electricity System, who operated Connecticut Yankee. The cost of the Maine Yankee Nuclear Power Plant (900 MWe) was slightly higher; it went to $220 per kilowatt when a causeway was replaced by a bridge to make a smaller cooling-water impact on the estuary. This was not required of any coal-fired plant at the time.
- M. Benedict, "Electric Power from Nuclear Fission," Technology Review 74(1): 32-41 (1971).
- W. Behnke, former C.E.O. of Commonwealth Edison Co., owner and operator of Dresden II and Dresden III, communication to the author in 1997.
- R. Wilson, "Remembering How to Make Cheap Nuclear Electricity," testimony to a hearing of the Subcommittee on Energy and Water Development, United States Senate Committee on Appropriations, Washington D.C., May 19th 1998.
- Baumier, J. and Bertel, E. (1987), Chapter 12 in Nuclear Power: Policy and Prospects, Jones, P.M.S., ed., John Wiley & Sons.
- "The Changing Need for a Breeder Reactor," R. Wilson, in Proceedings of the Annual Symposium, Uranium and Nuclear Energy: 1997-1999, The Uranium Institute, London; Nuclear Energy 39, 99-106 (2000).
- Report to the American Physical Society by the Study Group on Nuclear Fuel Cycles and Waste Management (APS Study Group Participants: L. C. Hebel, E. L. Christensen, F. A. Donath, W. E. Falconer, L. J. Lidofsky, E. J. Moniz, T. H. Moss, R. L. Pigford, T. H. Pigford, G. I. Rochlin, R. H. Silsbee, M. E. Wrenn; APS Council Review Committee: H. Frauenfelder, T. L. Cairns, W. K. H. Panofsky, and M. G. Simmons), Rev. Mod. Phys. 50, S1-S176 (1978).
21. U.S. Court of Appeals for the District of Columbia Circuit, Nuclear Energy Institute vs. U.S. EPA, No. 01-1258, decided July 9th 2004; available at http://pacer.cadc.uscourts.gov/docs/common/opinions/200407/01-1258a.pdf
22. "Technical Bases for Yucca Mountain Standards," Robert Fri, Chairman, National Academy Press (1995).
23. Letter of November 25, 2003 from Dr M.S.Y. Chu to U.S. DOE, and subsequent letter of July 28th 2004.
24. "Tax the Integrated Pollution Exposure," Richard Wilson, Science, Vol. 178, pp. 182-183, October (1972); "Free-Market Approaches to Controlling Carbon Dioxide Emissions to the Atmosphere: A Discussion of the Scientific Basis," K.S. Lackner, R. Wilson and H-J. Ziock (2000), in Global Foundation Conference on "Global Warming and Energy Policy," ed. Kursunoglu, Mintz and Perlmutter, Kluwer Academic/Plenum Press, New York, Boston, Dordrecht, London and Moscow, pp. 31-46.
25. "The Future of Nuclear Power," an interdisciplinary MIT study, July 2003, page 79.
26. "An Appropriate Role for Nuclear Power in Meeting Global Energy Needs," The Atlantic Council of the United States, February 1999, page 44.
PERMANENT MONITORING PANEL ON INFORMATION SECURITY
CHAIRMAN'S REPORT
Henning Wegener
Ambassador of Germany (ret.), Madrid, Spain

The PMP met on August 19 and 22 in connection with the 32nd Session of the International Seminar on Planetary Emergencies. It confirmed its present membership as it appeared well balanced and there was no particular reason at this time to consider enlargement. However, the Panel decided to establish the institution of associate membership. Associate members would hold their position for a defined period and for selected topic areas. They would not be regularly invited to meetings, but would receive communications and would be invited to make contributions and suggestions. Current members would be free to nominate suitable contacts with specific expertise as associate members. The Panel then received a number of presentations from its members on The International Emergency Management Society, and on the topic of the information revolution in the military field. These will be of importance in the Panel's further activities.
In its substantive work, the Panel at first reviewed its achievements of the past year and assessed its impact, especially the impact of its Report and Recommendations "Toward a Universal Order of Cyberspace: Managing the Threat from Cybercrime to Cyberwar" (August 2003), on the multilateral process in the field of information security. It then proceeded to define issues for its work programme 2004-2005.

INTERNATIONAL FOLLOW-UP TO THE 2003 REPORT AND RECOMMENDATIONS
The Panel noted that its Report and Recommendations of August 2003 had been communicated by the President of the World Federation of Scientists to the heads of all relevant international assemblies and agencies, especially those of the United Nations system, and had furthermore been widely distributed through the Internet. More important, it had received a favourable and operational echo in many quarters. The Report had been distributed as a document of the Geneva session of the World Summit on the Information Society (WSIS). Members of the PMP had participated in the Summit Meeting in Geneva in December 2003, where they presented and explained the document, and in part its preparatory process. The PMP envisages participation also in the second phase of the WSIS (Tunis, November 2005), and was presently discussing with the Tunisian Government the holding of a special meeting in Tunis prior to the Summit, at their invitation. The Secretary General of the United Nations and his senior staff were in possession of the Report and had commented favourably. The UN ICT Task Force had demonstrated special interest in the Panel's Recommendations through its President and Executive Director. The document had been presented orally at the March meeting of the Task Force through one of its members, also a member of the PMP, and the ICT Task Force had decided to include three of the PMP's Recommendations in its current work programme. The document in its entirety was also included in the web page of documents
of the Task Force. At the initiative of the ICT Task Force, a Memorandum of Understanding between the UN and the World Federation of Scientists for further collaboration had been agreed and was awaiting signature. The PMP noted that in the fall of 2003 the UN General Assembly had created a Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security pursuant to UN General Assembly Resolution 58/32 of 18 December 2003, and that one of its members had been elected Chairman of the Group. The PMP considered assisting the Group of Governmental Experts in its task, and declared itself ready to await a request to this effect. In July, the PMP was called upon to participate in a session of the Preparatory Meeting for the Eleventh United Nations Conference on Crime Prevention and Criminal Justice (Bangkok, April 2005), dedicated to Measures to Combat Computer-related Crime, in Seoul, Republic of Korea. The UN Office for Drug Control and Crime Prevention, as the organizer of the meeting, had selected the PMP's Report as one of the key documents for this portion of the World Conference. At the Seoul meeting, the Chairman of the PMP was elected to serve as Rapporteur of the meeting, thus ensuring the participation of the PMP in the further run-up to the World Conference. Also in July, on the strength of its Report, two members of the PMP were invited to the NATO Forum on Business and Security in Istanbul, where both made presentations on information security, and one of them was invited to chair the Workshop on Information Security, a key event of the Forum. Further activities of the members of the Panel included participation in meetings of the US National Academy of Sciences and the American Bar Association's Section of Science and Technology Law, in which one of the members holds a permanent position.
Reviewing these events, the Panel concluded with satisfaction that it had been possible, within the span of only a few months, to disseminate its work widely, to establish links with some of the key international players in the field of information security, and to consolidate the name of the World Federation of Scientists in a critical area of Planetary Emergencies.

PMP WORK PROGRAMME FOR 2004/2005
After an extended debate, the Panel expressed its conviction that the ever-increasing complexity and importance of information technology, along with its enormous potential and benefits, also produced an exponential growth of the inherent risks. This made a continuation of its work imperative. The work also had to be made as topical and relevant to on-going political processes as possible. In this perspective, the Panel decided to envisage, for the working year 2004-2005, a two-phase approach. The first phase should focus on generating a specific input to the Tunis phase of the WSIS (16 to 18 November, 2005). The Panel would elaborate a brief report with clear-cut and strongly worded recommendations as a contribution to the Summit. In a second phase, aiming at more substance and depth, the Panel would amplify and substantiate further their package of recommendations to the WSIS. Taking into account an earlier issues list it had elaborated for further study (p. 13 of its 2003 Report), the Panel decided to concentrate its efforts, within this two-phase
strategy, on three main areas. All three had to be seen in light of the fact that the United Nations overall, but also the WSIS in its preparatory process and in its Geneva phase, had not assigned the necessary priority to the creation of a secure and world-wide non-discriminatory information society. The Panel should contribute, as best it can, to instill this sense of priority into the process. Once the priorities have been well established, the key question of adequate resources for the promotion of a world-wide information and knowledge society could also be more easily addressed, with due contribution from the private sector. The role and responsibility of the private sector had been particularly insufficiently highlighted at the Geneva phase of the Summit, dominated, as a conference of states, by participating governments. One guiding idea of the Panel would be that, at the present stage of development of the world information society, a global alliance between governments, the multilateral process and the private sector was of the essence and had to include considerations of security and privacy. Such a global alliance would be able to draw on the technical and funding resources of the private sector with greater ease. As regards the private sector, the notion of corporate responsibility also needs to be fleshed out. Since private corporations own over 85% of the ICT infrastructure, corporations have a key role, based on both self-interest and social responsibility, to ensure both security and privacy. The Panel's recommendations would aim at placing a number of crucial issues on the WSIS agenda, and at contributing to a more balanced and comprehensive focus of the Summit, rather than offering definite solutions. The three areas in which the Panel proposes to make recommendations in this sense would be the following:
1. A contribution to bridging the Digital Divide. This would include the identification of measures to promote cost-effective, secure broad-band connections, concrete proposals for a strong involvement of the private sector and the improvement of conditions for attracting investment to developing countries. The Panel recognized that the primary focus of the WSIS had to lie in building up ICT capacity and access in these countries and that its own recommendations had to assist in this effort. In doing so, the PMP would use its special expertise to enhance information security and business confidence, as a prerequisite for attractive markets and a functioning information society in emerging markets. Security and built-in trust are especially important at the beginning of such developments, avoiding the errors of negligence in other countries and educating users early on to maximize security, forming part of an emerging global culture of cybersecurity. The Panel would stress that especially in fragile regions with an emerging information infrastructure and, thus, growing vulnerabilities, adequate levels of information security were a question of stability. The Panel was convinced that extension of affordable access to ICTs in developing countries was the most important agent in bringing about sustainable development on a massive scale, but that only a secure ICT environment would bring about the desired development effect.
The Panel recognized the importance of concrete, low-cost, pilot-type projects with an initial built-in security component and with a model character that lent itself to emulation, rather than lofty over-all strategies. One proposal that might bear fleshing out was an appeal to the United Nations to make their information offices in developing countries partially available as Internet cafes, to help inculcate "safe computing" habits, to serve as a role model and to stimulate private Internet service providers. A guidebook for use in the developing countries on how to build up ICT capacity and performance to create a secure information environment to attract direct foreign investment and outsourcing opportunities could be written, possibly in the second phase. A specific recommendation of the Panel for the WSIS could be a proposal to broaden the CERT-framework as a particularly successful and effective example of private sector involvement in information security, following on from its earlier work. Private business and industry have a particular role to play in offering developing countries a wide choice of hardware and software options that would, on the one hand, guarantee state-of-the-art, high-speed connectivity to allow fully competitive participation in a modern, creative global society, and, on the other hand, functional but low-cost consumption equipment, affordable for large segments of the market, for educational and transitory purposes. The scientific community would have to deliberate on these choices and participate in their technical development, giving due consideration to security.
2. A further examination of the evolving scenarios of cyberwar and cyberterrorism. The Panel felt that the whole area of information warfare and the national and international security implications of ICT had been a missing element so far in the WSIS process. It was therefore urgent to draw attention to current developments by way of an analysis of the state of the threat, drawing on recent papers put before the Panel and the supportive papers contained in its 2003 Report. It needed to be highlighted that the sheer increase in volume of ICT devices, the exponential growth in connectivity, the emergence of terrorism with its increasingly dangerous misuse of ICT technologies, etc., were causal factors in bringing about a sea change in information security. The growth of information technology increasingly lent itself to easier use in war, and new prevention doctrines heightened this war-making potential which might, however, minimize infrastructure destruction and limit casualties. Recognizing these risks, and convinced that strategies to avoid them had to be part of any global agenda to construct a serene information society, the Panel felt that these insights needed to be incorporated fully into the agenda of, and the thinking at, the WSIS and within the international community more broadly. The Panel's recommendations would also address the question of the existing laws of armed conflict, including the rules of engagement in case of information warfare and the new challenges to the international law
community in terms of defining hostile acts of information use and the availability of sanctions. What were the rules and permitted counter-measures in case of denial and corruption of information at the state and national level as well as the private and sub-state level?
3. Privacy vs. security. The necessary responses to the 9/11 events and the current elaboration of anti-terrorism strategies in many countries world-wide have given new significance to revisiting traditional solutions to the privacy-security dilemma in the realm of information technology, civil liberties and human rights. There are profound issues surrounding privacy and security that must be addressed. For example, are there new technologies that offer more security while safeguarding individual rights? Are there trends developing in how governments are using ICTs to counter terrorism and foster security that show erosions in privacy? How can adequate levels of privacy be protected in policy and legal frameworks without stifling security needs? While the Panel recognized that its contribution to this debate, in light of other probable contributions to the WSIS, might have only a limited value, it still thought that it was worth reinforcing the arguments for privacy in developing countries from the information security perspective.
Apart from these three work areas, the Panel considered that there may be a requirement for many developing countries, in building and legislating their information structures, to have access to impartial international expertise, to a secretariat that gave advice; especially where recourse to dedicated regional organizations (e.g. APEC) and other international bodies is not available. In any event, it appeared that a focal point for such services was still missing. It might be possible to broaden the mandate of existing or proposed institutions to provide this service. The Panel might possibly formulate a recommendation in this sense to the WSIS, referring to its own 2003 Recommendation for the establishment of an international Information Technology Agency. The global alliance to which reference was made earlier could well be a stepping stone in this direction. In any event, given present needs, such an organism would need to be tasked and to be functional in the short term.
FUTURE MEETINGS
Given the above work programme, and the timeframe to be respected for any useful input into the WSIS, the Panel thought it necessary to schedule a meeting in the spring of 2005. If another venue was not found, this meeting should be held in Erice, provided that the required funds were available.
INFORMATION REVOLUTION IN THE MILITARY FIELD AND THE ESTABLISHMENT OF AN INTERNATIONAL LEGAL REGIME FOR INFORMATION SECURITY
VITALI N. TSYGICHKO
Institute for Systems Analysis, Russian Academy of Sciences, Moscow, Russia

The late 20th and early 21st centuries were marked by yet another phase of the technological revolution, namely, the wide-scale introduction of information and communication technologies (ICT) and the further development of the global computer network - the Internet. The information and network telecoms technologies and the rapid spread of local and global networks provide for a new quality of information exchange, shape a new global cyberspace, and have a strong impact on all facets of public life: politics, economics, culture, international relations, national and international security. Changes in the world of cyberspace act as a global development factor determining the key directions of social progress, such as: an ever-faster scientific, technological, economic, social and cultural development thanks to a bigger volume and higher speed of information exchange irrespective of distance; opportunities for disseminating new ideas and knowledge; the fast spread of new technological advances; building a database for the development and propagation of new scientific and philosophical paradigms of the 21st century based on the understanding of uniformity of global diversity and recognition of the common global problems of humankind; the strengthening of global integration trends, especially in economic, political, information, technological, educational, cultural and other areas; creation of the prerequisites for the development and introduction of new forms and methods of providing for global, national and regional security; advances in the area of political, economic, production and military management and international relations.
At the same time, the evolution of the global community's digitalization breeds a whole range of negative geopolitical implications. These are primarily faster global polarization, an ever-bigger differentiation between the wealthy and the poor, technologically backward and advanced nations across the board, and a rising number of marginal countries being left by the roadside of the evolution of civilization, which is the major source of instability and of current and future conflicts, including global ones. Fast growth also concerns the military potential of technologically advanced countries, resulting in changes of the global and regional balance of force, an ever-greater tension between traditional and evolving centers of force, and the emergence of new frontlines, thereby giving rise to potential military conflicts. There are prerequisites for information, political, economic, cultural and military expansion on the part of advanced countries. Thus, the information revolution not only boosts civilization's advances but also breeds new threats to national, regional and global security. Most radical and dangerous are those changes brought about by new warfare information technologies.
The developments in recent years are clear evidence that military power is still the key and the weightiest argument in the world of global politics. Furthermore, the significance of military power keeps growing in international affairs. In the post-cold war period, the world entered a phase of regional wars and political instability; the number of large-scale regional and national military actions has sharply increased. The efforts to put an end to the proliferation of nuclear weapons have failed. Under the circumstances, the recognized leader of the Western
world - the USA - and their allies are set to fight challenges to security and defend their national and group interests by way of establishing and maintaining the "new world order" based largely on the threat of use or direct use of military force. This strategic line is implemented primarily through the build-up of military power, involving the reorganization of their armed forces in line with the new tasks and the supply of new generations of armaments and military technology making wide use of new information technologies. Operation Desert Storm, the developments in ex-Yugoslavia and the last war in Iraq are vivid examples of the practical implementation of this strategy.
The intensive introduction of new information technologies considerably increases the combat capabilities of conventional armaments and military technology. Information technologies prop up qualitative changes in reconnaissance and communications capabilities, greatly increasing the speed of processing huge arrays of data and of decision-making, which makes it possible to switch over to radically new methods of troop and arms control at all levels - from strategic to tactical. The new information technologies have contributed to a sharp increase in the combat capacities of electronic warfare facilities and the creation of a quite new type of arms, notably information weapons, designed for damaging the military and civilian information infrastructure of probable adversaries by obtaining access to their computer networks. The information and technological revolution in the military field, resulting in a sharp increase in the combat capabilities of troops, leads not only to changes in the forms and methods of warfare at different scales but also to changes in the traditional paradigm of military struggle. The evolution of information weapons has radically changed the pattern of military conflict escalation. According to American experts, even a selective application of information weapons to military and civilian information infrastructure projects would terminate the conflict at an early stage, i.e. prior to the beginning of active combat operations, as the escalation of an information attack would result in disaster. The possession of information weapons, as of nuclear weapons, provides for an overwhelming advantage over other nations. If not today then in the near future, the information and political variables of the confrontation of powers will dominate nuclear ones. In terms of the implications of a wide-scale military application, information weapons can well be referred to as a new type of weapon of mass destruction with all the realities and problems emanating from this fact. It is worth noting the vulnerability of all countries, especially highly developed ones, to information weapons. Information weapons, just like nuclear ones, can serve both as a factor of political pressure and of deterrence. It is quite apparent today that information warfare is not just the virtual reality of computer games but a quite tangible tool to win victory in a military or political conflict. There is no doubt that information weapons will become a major component of a nation's military potential, and many countries, primarily the USA and China, are persistently and actively preparing for waging information wars. The important specifics of information weapons are their relatively low cost and accessibility, the opportunity for latent development, accumulation and introduction, extra-territoriality and the anonymity of impact.
All of the above makes their uncontrolled spread, especially in the hands of aggressive regimes, very dangerous. The emergence of information weapons places the problem of information security on a par with the global problems such as proliferation of nuclear, chemical and bacteriological weapons, international terrorism, drug trafficking, etc. All these
problems are alike in terms of their global nature and the impossibility of solving them in one or several countries. Information warfare, information terrorism and crimes have come to be a reality of our times. The global community has come to realize, partly thanks to Russia's initiatives in the UN, the threat to national and global information security and is prepared for practical steps towards its neutralization. Many countries take measures for countering threats to information security, sometimes very tough at that, but they are relatively inefficient, primarily due to the trans-national nature of these threats and the anonymity of the transgressors. Nobody can be safe when fighting off information threats alone. Only the creation of an international information security regime and the concerted efforts of all its participants can prevent the proliferation of information weapons and effectively resist the threats of an information war, information terrorism and crime. However, the practical steps towards an international legal framework regulating information security run into quite specific problems, preventing the effective use of past experience acquired in creating a regime to ban and stop the proliferation of weapons of mass destruction. The problems are engendered by the basic properties of information weapons, in particular, the specific nature of their application. The start of negotiations on practical measures for ensuring global information security has been hampered primarily by the vagueness and ambiguity of the negotiation subject and object. The subject of negotiations, i.e. ensuring information security, and object - information weapons, information warfare, information terrorism, information crime, etc. - are interpreted differently in various countries. The development of uniform, universally acceptable terms of reference is, therefore, an initial and highly important step towards an international information security regime. The main problem in building an information security concept boils down to defining the term "information weapons" and developing the principles of its identification. What means of armed struggle should be treated as information weapons? What are the distinctive features of information weapons? What reasonable principles can underlie the definitions and classifications of information weapons? Thus far, no satisfactory answers have been given to these questions, which could be conducive to a uniform terminological base for launching constructive negotiations on global information security. Today, there are two main approaches to the definition of an "information weapons" concept. First, the key feature of information weapons is the ability of some other means of destruction to affect military and civil information infrastructure (communication lines and facilities, databanks and databases, information computer systems and control systems, electronic media, etc.). A fundamental drawback of this approach is that, following its logic, any type of weapon, including conventional ones, can be referred to as information weapons if it is capable of damaging the components of the information infrastructure. Indeed, does it matter in the final count which control system of a municipal economy was put out of action by a weapon on the basis of program code, an intensive electronic pulse or a direct hit by a conventional air bomb?
The second approach suggests that reference be made to all means of destruction and weapons systems making use of ICTs as information weapons. Practically all sophisticated weapons systems, however, are making use of ICT, hence
it is impossible to single out precisely these information weapons from the entire range of armaments. Attempts have been made to combine the two approaches. It was suggested, for example, to refer to the means of destruction of an information infrastructure making use of ICT as information weapons. Even the combined approaches, however, fail to prevent uncertainty in the identification of information weapons. According to the above approaches, only the pieces of software designed specifically for destroying information infrastructure (different viruses, bookmarks, etc.) can be unambiguously referred to as information weapons. The remaining sophisticated weapons of armed combat, using ICT, are considered to be multipurpose, i.e. designed not only for destroying information infrastructure, but also for other combat tasks. Furthermore, these means differ from previous generations of weapons by greater selectivity and accuracy, i.e. they constitute "humane" weapons to a certain extent, and hence do not fall into the category of mass destruction. The nations possessing sophisticated weapons systems, means of reconnaissance, communication, navigation and control based on a wide-scale application of ICTs boast a decisive military advantage, and hence will never enter into any agreements limiting their advantages. Thus, it is impossible to find any identification criteria with respect to information weapons (apart from software) that are mutually acceptable for all the participants of negotiations on information security problems; the realistic course is therefore not one of banning multiple-purpose weapon systems, but of limiting their application against civilian information infrastructure. This naturally gives rise to the question of whether the very issue of banning or limiting production, proliferation and application of information weapons is possible at all. It follows that negotiations on banning the development, application and proliferation of information weapons may deal largely with single-purpose weapons designed only for hitting information infrastructure components, e.g. weapons based on program codes, i.e. various viruses and their means of delivery. However, the universality, secrecy, suddenness of attack, impersonality, possibility for a wide-scale trans-boundary application, efficiency and highly effective impact make these weapons not only an extremely dangerous means of information infrastructure destruction, but also hinder the creation of a system for the international control of such weapons. The situation is further aggravated by the fact that the overwhelming majority of modern ICTs, which can be used to military, terrorist and criminal ends, are developed in civilian industries, and control of their development and proliferation is very difficult. At the same time, the threat of the weapons' application is real for everyone, especially for advanced nations, where the complex information infrastructure determines all their vital activities. Only the joint efforts of the international community to secure national information structures can now diminish the threat of information weapons application. Practically all the members of the international community now recognize the necessity of defining and agreeing, at an international level, upon the list of key information systems (both public and private) whose function is critical to ensure the vital activities and national security of states.
The delimitation of this class of information systems will help to create more efficient protective measures for them, including the right to use retaliatory actions in case of information operations against them. This will also facilitate the development of emergency mechanisms of international response to threats in the information area
with due regard to the extent of impact thereof on the national security of different countries. It would seem, under the circumstances, that a real step towards pooling the states' efforts to ensure international information security is the development and approval by the international community of a convention covering:
- The renunciation by all participants of information warfare, and of the development and application of information weapons designed for destroying information infrastructure, including weapons based on program codes;
- The harmonization of national legislation regulating the issues concerned with counteracting threats to information security;
- The development of legal, institutional, economic, military, technological and other international measures to counteract threats of information warfare, information terrorism and crime, and tools for resolving conflicts in the information security area;
- The development of mechanisms to allow convention participants' interaction on the joint counteraction of information security threats, including a continuous exchange of the results of current situation reviews and information on potential enemies and emergencies having to do with the efficiency of information infrastructures, for the development of adequate countermeasures against potential threats;
- The compilation of a list of projects critical to national information infrastructure whose destruction may result in large-scale technological disasters and huge human casualties;
- Protection of critical information infrastructure projects by international law, wherein an attack is treated as a crime against humanity;
- Development of a system of international control over the observance of the convention and a system of information security monitoring;
- Responsibility of those who infringe on the convention.
In order to prevent the use of the convention's provisions to the benefit of one nation or a group of nations, it would be reasonable to adopt a declaration providing for a commitment by the states to refrain from:
- Actions leading to domination and control in cyberspace;
- Countering access to the most sophisticated information technologies, and the creation of conditions resulting in technological dependence in the computer science field to the detriment of other nations.
The adoption of such a declaration will dispel the doubts of developing countries concerning the discriminatory nature of the convention. The first step towards the development and approval of the convention on international information security could well involve setting up a group of international experts, under UN auspices, responsible for:
- Developing a mutually agreeable concept of information security, including such fundamental notions as "information warfare", "information terrorism", "information crime", "information weapons", etc.;
- Review, listing and classification of information security threats and the means to realize them;
- Developing the principles of classification and the identification signs of information weapons;
- Compiling a list of critical projects of information infrastructure and a description of the potential consequences of disturbing their normal functioning;
- Analysis and a recommended list of possible measures against threats to information security;
- Key principles of establishing and running an international information security system;
- A list of voluntary obligations of convention participants and potential measures to be taken against convention infringers.
The first phase of the international expert group's efforts may produce a joint concept of international information security, which will outline the further activities of the group and serve as a basis for drafting the basic provisions of an information security convention. The convention on international information security shall contain:
- A common vision of the information security problem;
- Uniform terminological and conceptual apparatus developed by the experts;
- Evaluation of the current situation in the area of information security;
- Evaluation of the existing and potential threats to information security;
- Goals and objectives of the international community with respect to information security;
- Description of problems linked to the development of an international legal framework for control of the area of information security;
- Counteractions against information security threats;
- Possible mechanisms of international collaboration with a view to ensuring information security;
- Recommendations on the principles of development and key provisions of the convention on international information security.
Developing an information security concept could well become an important practical step towards an international legal framework regulating the issues of development, proliferation and application of information weapons, preventing the emergence of information warfare and providing an effective counteraction to information terrorism and information crimes.
LIMITS OF DEVELOPMENT PERMANENT MONITORING PANEL REPORT
(Individual papers following this report were discussed during the workshop)

HILTMAR SCHUBERT (Member and Chair), Fraunhofer Institute for Chemical Technology, Pfinztal, Germany - Migration in Europe
JUAN MANUEL BORTHAGARAY (Member), University of Buenos Aires, Buenos Aires, Argentina - Migration in Buenos Aires, Argentina
GERALDO G. SEEUZA (Member and Meeting Coordinator), University of Sao Paulo, Brazil - Migration and Globalization
K. C. SIVARAMAKRISHNAN (Member), Center for Policy Research, New Delhi, India - Migration from and within Asia
ALBERTO GONZALES-POZO (Member), Universidad Autonoma Metropolitana, Xochimilco, Mexico - Migration in Mexico: Slower Trends to Megacities; Higher Flow to the US
NIGEL HARRIS (Invited Speaker), European Policy Centre, Brussels, Belgium - Migration and Development: the European Case
STEPHEN S. Y. LAU (Invited Speaker), University of Hong Kong, China - Inter-Regional Migration in China in the Post-Deng Economic Era 1990-2000
CHRISTOPHER D. ELLIS (Member and Editor), Texas A&M University, College Station, Texas, USA - Impacts of Immigration on Megacities in the United States

A meeting of the Permanent Monitoring Panel was held August 19, 2004 at the Ettore Majorana Foundation in Erice. The scope of the meeting focused on migration and its effect on megacities. The number of people living outside their countries of birth has grown from approximately 75 million to more than 175 million during the last 40 years. This trend affects the social, economic, and physical dimensions of megacities. A discussion of these effects ensued on a spectrum of megacities including Sao Paulo, Hong Kong, Shanghai, Dhaka, Delhi, Buenos Aires, Mexico City, Houston, and Dallas. Papers from each of the participants listed above were presented and discussed. These papers are included in the pages immediately following this report.
MAJOR POINTS FROM THE DISCUSSION ON MIGRATION
1. Not all migration is for permanent settlement. Some migration is circulatory.
2. Migration - domestic and international - is one of the most important mechanisms for the redistribution of income and wealth, socially and geographically. International remittances from workers abroad to their home localities are of growing significance for poor countries and poor social groups.
3. Fast population growth in megacities leads to rapid restructuring in terms of economic sectors, the geographical distribution of the labor force (migration), and physical infrastructure.
4. Policy interventions to shape worker movements have a poor record of success and high costs.
NEXT YEAR’S INVESTIGATION Migration and the restructuring of megacities.
WEST AFRICAN POINT OF VIEW ON MIGRATION
COLONEL MBARECK DIOP
Former Senior Advisor to the President of the Republic of Senegal (1994-2002), Dakar, Senegal

The international economic disequilibrium, poverty and environmental degradation, the lack of peace and security, human rights violations, and the unequal development of democratic and judicial institutions are factors that influence international migrations. The migratory movements in the world, including refugees, involve more than 150 million persons, one third concerning the developing countries. During the last decade, the net flow of international immigration to the main host countries in the developed world is estimated to be 1.4 million persons, two thirds of whom come from the developing countries. Between 1995 and 2000 in the Democratic Republic of Congo, 1.7 million persons left the country. In the same period the net flow of migration from countries like Burkina Faso, Burundi, Guinea, Mali, Sudan or Tanzania was over 200,000 persons.

BRAIN DRAIN
As far as migration for jobs is concerned, one can consider two factors: the poor conditions of work in the countries of origin, and the attractiveness of the socio-economic conditions in the targeted Western countries. Not only is the salary to be taken into consideration, but also the working conditions, which are the most important argument given for the exodus of competences, the so-called "brain drain", from developing countries toward the developed world.

COST AND BENEFITS OF MIGRATION FOR JOBS
It is usually admitted that migration has a positive global effect both for the country of origin and the country of destination. For the country of origin, an important flow of money comes from the migrants; for the target country, cheap and qualified labour is at hand. In addition, the migrant himself can build his own capacities and increase his performance. However, there are some negative impacts, such as the increasing gap between developing and developed countries.

THE SOCIOLOGICAL ASPECTS OF SENEGALESE MIGRANTS
Like many African countries, Senegal has put its economy under the control of the IMF, with little prospect of recovery. Senegalese religious organisations seem to be amongst the most important actors filling the gap produced by this difficult situation. The Mouridiyya is one of the four main Sufi syncretic brotherhoods in Senegal. The other three are the Tijaniyya, the Qadiriyya and the Layenne. It has been recognised that by relying on relations of personal dependence added to an effective organisation, these brotherhoods, and the Mouride in particular, offer a solidarity system well adapted to crisis situations.
The first wave of Senegalese emigration to Europe concerned mainly the Toucouleur, Serer and Soninke ethnic groups. The last one is the most numerous in France. Most of the Senegalese migrating to Europe in the 1980s and 1990s belonged to the Wolof ethnic group and to the Mouride brotherhood, coming mainly from the northwestern regions of Senegal. The Mouride order was founded in the 1880s by Cheikh Ahmadou Bamba and has its headquarters at Touba, the site of his revelation, where Mourides have constructed one of the largest mosques in Sub-Saharan Africa. Members of the Mouride brotherhood shaped a commercial system covering Senegal and France, and now Italy and the USA, partially using as its framework the structures and practices of the brotherhood. The Mourides have maintained a strong identity and a highly centralised organisation, emphasising certain themes of their history to form a continuity with the present. For instance, their present migrations are compared with Ahmadou Bamba's periods of exile, a parallel that provides a framework for their experience as migrants. This identification with the founding saint provides the symbolic background underlying the migration process.

DISAGGREGATING THE "COMMUNITY"
The Senegalese emigrate for mainly economic reasons and in particular because of the crisis in the traditional agricultural structure, which produced the following historical pattern: firstly, urbanisation in Senegal; secondly, western African internal migration; thirdly, emigration to Europe (mainly France), internal European migration (to Italy from France) and a change of direction in European emigration directly to Italy and to Spain, and finally to the USA. A variety of Senegalese migratory modes have been distinguished by some scholars (Campus, Perrone and Mottura 1992, 260-269):
a) From the villages in the countryside: these migrants are the most recent arrivals in Italy and are mainly involved with seasonal jobs; they are strongly linked with the village and families; they often become street sellers without any ambition of regularisation or "career jobs"; they do not invest in the receiving society nor do they learn the language because their wish is to go back as soon as they can.
b) From the urban milieu (Dakar): they know the value of money and they are familiar with market activities.
c) Entrepreneurs of "family-enterprises": long-term urbanised, they represent trade, and have a direct or indirect experience of trade and migration.
Another sociological typology is provided by Marchetti (1994), who distinguished six types of Senegalese migrants in his research in Milan:
a) The seasonal trade-artisan coming only to supply other co-nationals;
b) Traders who come intermittently, mainly to obtain supplies to sell in Senegal, and to a minor extent to sell some Senegalese products in Italy (import-export);
c) Young unemployed people who migrate to alleviate the family's expenses in Senegal and who want to return;
d) Young people who come with a longer immigration perspective, with the aim of helping their families (via remittances) and of accumulating capital to reinvest in Senegal;
e) Young people from a rural or urban milieu with an even longer-term perspective, intending to contribute consistently to the family and the village living standards;
f) Students who want to pursue their studies and may undertake training courses to learn a profession to be practised in Senegal.
These typologies can reveal the great diversity among migrants.
CONCLUSION
Migration is one of the most important issues, with economic, social and legal aspects. The challenge is firstly to help developing countries create the economic conditions to avoid massive migration to the developed world. The second goal is to channel the migrants' experience into the development of their mother countries. Partnership between the developed world and developing countries is essential to overcome migration problems, with the help of the appropriate body of the UN (IOM).
IMPACTS OF IMMIGRATION ON MEGACITIES IN THE UNITED STATES
CHRISTOPHER D. ELLIS
Department of Landscape Architecture and Urban Planning, Texas A&M University, College Station, USA

U.S. IMMIGRATION POLICY BACKGROUND
Foreign citizens are admitted to the United States for temporary (non-immigrant) and permanent (immigrant) reasons. Non-immigrants include tourists, foreign students, diplomats, temporary agricultural workers, exchange visitors, and intra-company business personnel. Immigrants achieve legal permanent resident status under one of four basic conditions: to reunify families, to satisfy labor shortages, to protect refugees, and to diversify admissions by country of origin. The annual levels of immigration are set at 675,000 overall, with 480,000 allocated for family-sponsored preference, 140,000 for employment-based preference, and 55,000 for diversity (Wasem, 2004). These numbers are flexible and do not include refugees or those granted asylum. Permanent residents may apply for citizenship after 5 years. In the early part of the 20th century, immigration peaked at around 1.3 million (Figure 1). However, by the period of the Great Depression and WWII, the trend had reversed and nearly reached an inflow of zero. Since then, a gradual and steady increase has brought levels of immigration nearly back to the earlier peak levels, culminating in a total of 1.1 million in Fiscal Year 2002 (an amnesty program accounts for the spike in the late 1980s). The U.S. Citizenship and Immigration Services has reported 5.3 million petitions pending for FY2003 (Wasem, 2004). Italy, Russia, Germany, Canada, Mexico, and Austria-Hungary made up the top source countries in the early half of the century, but the latter half has been primarily composed of Latin American (Mexico, Dominican Republic, El Salvador) and Asian countries (China, India, Korea, Vietnam, Philippines).
Figure 1. Annual Immigration into the United States (Office of Immigration Statistics, 2002).
In FY2002, nearly 25% of the immigrants arrived from Mexico, and these admissions were based mostly on family preferences. India, China and the Philippines were the next highest, averaging 60 thousand each, and were split between family and employment opportunities. Refugee/asylee immigrants were in large part from Bosnia-Herzegovina, Cuba, El Salvador, Ukraine, Vietnam and Russia (Figure 2).
Figure 2. Legal Immigrants by Region of Birth 1925-2002 (Office of Immigration Statistics, 2002).
AUTHORIZED IMMIGRANT INTEGRATION
The United States generally does not provide federal assistance programs for immigrants, with the sole exception of refugees. There have been debates on immigrant integration that focus on three key issues: economic integration, English language acquisition and naturalization (Martin & Martin, 2001). Most immigrants are able to find jobs easily in the US, but earning a "living" wage with minimum health benefits is more difficult. Education and English language skills are needed to compete for higher earnings. Only 40 percent of immigrants finish a high school diploma, exacerbating the problem (Martin & Martin, 2001). Without successful integration, immigrants are likely to suffer the harms of prejudice, and the social fabric of megacities could fragment.

UNAUTHORIZED IMMIGRATION
Unauthorized immigration poses several difficult problems. These include resource drains on local communities, illegal employment, and trafficking of human beings. In the first case, resources that are used to support healthcare and community services can be strained when a significant percentage of the population is made up of unauthorized immigrants. Recent federal legislation has been introduced to help hospitals cover the costs of emergency care for unauthorized immigrants (Pear, 2004). Approximately $1 billion was allocated, with substantial portions awarded to the states most affected by immigration (Figure 3), including California ($72 million/year), Texas ($48 million/year), and Arizona ($44 million/year). Controversy stems from requirements that hospital workers ask a series of questions regarding immigration status. Opposing issues are the need to verify the appropriate use of funds by documenting patient immigration
status, and the concern that asking such questions might discourage immigrants from seeking care.
Figure 3. Percent of county population made up of foreign-born immigrants is highest along the border with Mexico (US Census Bureau, 2000).

Labor market conditions are also affected by immigration. Specifically, immigration affects the scale, geographic distribution and skill composition of the labor force (Briggs, 2001). Historically, large-scale immigration has been linked to suppressed wages. Due to a general lack of education and skills, immigrant workers command a lower wage, which in turn affects the lower-end job opportunities and wages of non-immigrants. The largest source of unauthorized immigrant labor is Mexico, with an estimated 6 million unauthorized immigrant workers (Papademetriou, 2004). Many feel that the most appropriate response would simply be to offer permanent or temporary immigrant status to these workers. In 2004, popular legislation aimed at addressing unauthorized immigration by naturalizing existing immigrant workers and providing temporary work documentation for seasonal workers was blocked prior to the 2004 presidential election. Large-scale trafficking of human beings has also become a major problem in the United States. Referred to as modern slavery, this violation of the basic human right to freedom leaves vulnerable populations exploited for the purposes of sex or forced labor (Powell, 2003). Traffickers stand to gain high profits with little risk. Human cargo is moved across borders more easily than drugs or weapons, and even if caught, victims can be re-trafficked for additional gains. It is estimated that human trafficking generates $7 to 10 billion annually. Efforts to fight human trafficking include additional resources for
local communities, cooperation among law enforcement, intelligence, and diplomatic agencies, targeted anti-trafficking laws, and support for initiatives abroad. There are two basic strategies for managing unauthorized entry into the United States: border control, and internal control. Of these, border control along the US-Mexico border receives the most attention. Cooperation between the U.S. and Mexico has increased to fight crime in border communities and to educate citizens about the dangers of attempting illegal entry through the desert (Martin & Martin, 2001). In FY2002, the Immigration and Naturalization Service requested $5.5 billion to support 32,000 employees, including 10,000 border agents (Martin & Martin, 2001). The length of the U.S. borders with Mexico and Canada makes the border control option rather ineffective. However, apprehension of unauthorized immigrants has continued to rise (Figure 4).
Figure 4. Unauthorized Immigrants Apprehended 1951-2002 (Office of Immigration Statistics, 2002).
Internal control depends primarily on enforcement of existing legislation regulating the labor market (Martin & Martin, 2001a). This includes sanctions against employers who knowingly hire unauthorized workers or fail to adequately check documentation. Problems with enforcement relate to the ease of producing fraudulent documentation. Driver's licenses, visas, and green cards are easily forged and used for gaining employment.
RESPONSE TO TERRORISM
Since the terrorist attack on the World Trade Center in New York City, efforts have been made to improve identification and tracking of temporary immigrants to the United States. Inspections of paper documents have been determined to be largely ineffective for fighting terrorists (Martin & Martin, 2001b). This is due in part to the ease of forging documents, problems with effectively querying databases, lack of coordination with other governments, and the relative ease of slipping past the borders with Mexico and Canada. In addition, tracking immigrants once they have arrived in the U.S. is slow
under the best conditions (6 months to 1 year is common), and nearly impossible if the person attempts to hide. The current U.S. Department of Homeland Security program, labeled "US-VISIT", states the following goals:
1. Enhancing the security of our citizens and visitors;
2. Facilitating legitimate travel and trade;
3. Ensuring the integrity of the immigration system;
4. Safeguarding the personal privacy of our visitors.
To do this, the program seeks to create an automated entry-exit system that integrates foreign travelers' arrival and departure information using biometric fingerprint scans and digital photographs. These are checked against a "watch list" or database upon arrival to determine if the individual is a security threat. Once entered into the database, individuals who have overstayed can be identified quickly and reported to enforcement officials. Practical criticisms of the program include a failure to assess life cycle costs, no governance structure, and deficient program management. The system may also bring antipathy toward the United States by treating all visitors as potential terrorists, thereby handing terrorists a form of international victory.
EFFECT OF IMMIGRATION ON U.S. MEGACITIES
The effect of authorized immigration on U.S. megacities is generally positive. It can lead to happier reunited families, address shortages in labor markets, and diversify the composition of the population. Immigrants to Texas and other parts of the US generally settle in urban areas where jobs are more prevalent. The Fort Worth/Dallas and Houston metropolitan areas show the highest percentage of foreign-born residents within the Texas Triangle. Closer inspection of these cities does not reveal any particular pattern of immigrant settlement.
Figure 5. Foreign-born immigrants in Texas tend to gravitate toward the larger metropolitan areas (A). However, no clear pattern exists within the metropolitan areas regarding the distribution of immigrants as a whole (B). Source: U.S. Census Bureau, 2000.
The most prominent effect on large metropolitan areas has to do with unauthorized immigrants who live in poverty, causing a strain on local public services. This includes schools and emergency healthcare systems in particular. As explained above, some effort to provide additional resources for emergency healthcare is being made, but few additional resources have been provided for educational needs.
SOME CONCLUSIONS
The United States does not appear to have difficulty integrating authorized permanent immigration. The most prominent issue is the lack of staffing to process the backlog of applications. Unauthorized immigration, on the other hand, is a problem that has yet to be solved and is currently not being adequately addressed. Trafficking of modern slaves is a critical human-rights-related immigration issue that deserves more prominent attention. The primary avenue of unauthorized entry is through the US/Mexico border. Current border and internal controls are largely ineffective, and either the goals or the system need to change. A system for tracking the temporary entry of foreign nationals into the U.S. is currently being implemented. This system has been criticized as being poorly conceived, inadequately researched, and not properly managed.
REFERENCES
1. Wasem, R.E. 2004. U.S. Immigration Policy on Permanent Admissions. CRS Report for Congress, CRS Web (http://fpc.state.gov/documents/organization/31352.pdf).
2. Martin, P. and S. Martin. 2001a. U.S. Immigration Policy. In Policy Recommendations for EU Migration Policies. King Baudouin Foundation and the German Marshall Fund.
3. Martin, P. and S. Martin. 2001b. Immigration and Terrorism: Policy Reform Challenges. Institute for the Study of International Migration online publications. (http://www.gmfus.org/Apps/GMF/GMFWebFinal.ns~D286B36BE806335685256BA4007293BO/$File/Immigration%20and%20Terrorism%20-%20Policy%20Reform%20Challenges.pdf).
4. Office of Immigration Statistics. 2003. 2002 Yearbook of Immigration Statistics. U.S. Department of Homeland Security. (http://uscis.gov/graphics/shared/aboutus/statistics/Yearbook2002.pdf).
5. Pear, R. 2004. U.S. is Linking Immigrant Patients' Status to Hospital Aid. New York Times Company. August 10, 2004.
6. Powell, C. 2003. Letter to the Reader. In Trafficking in Persons Report. Trafficking Victims Protection Act of 2000. United States Department of State. (http://www.state.gov/documents/organization/34158.pdf).
7. Briggs, V.M. 2001. American Unionism and U.S. Immigration Policy. Center for Immigration Studies. (http://www.cis.org/articles/2001/back1001.pdf).
8. Papademetriou, D.G. 2004. U.S. & Mexico Immigration Policy & the Bilateral Relationship. Statement before the Senate Foreign Relations Committee. United States Senate. (http://www.migrationpolicy.org/research/papademetriou_032304.pdf).
9. US Census Bureau. 2000. TM-PO31 Percent of Persons Who are Foreign Born. Data downloaded from American FactFinder and displayed on commercial GIS software. (http://factfinder.census.gov/home/saff/main.html?_lang=en).
INTER-REGIONAL MIGRATION IN CHINA IN THE POST-DENG ECONOMIC ERA 1990-2000
STEPHEN S.Y. LAU1, J. WANG2
Department of Architecture, University of Hong Kong, Hong Kong
INTRODUCTION
Inter-regional migration, as a major part of the field of macro-level migration, has attracted attention from various sectors. Given the current conditions in China, where population redistribution has taken place in various directions, out of different considerations, and by differing socio-economic groups, the need for an integrated study is self-evident. First of all, objectively defining the population in this type of research is a prerequisite to guarantee an in-depth exploration and an overall observation of the phenomenon. If the term "migrants" is defined as "those who moved from their origins to certain destinations where they worked and lived for a certain period", say one year as usually employed by Chinese statisticians, there exist two groups of migrants, considering the specific situation in China. One group is called "migrants"; the other is referred to as the "long-term-resident floating population". Due to the population control policy in China, people are inscribed as Registered Permanent Residents (RPR) in their place of birth. Although the location of this RPR may be transferred to other regions due to a change of employment or other reasons, it is very strictly controlled and has a limited quota. This is especially true for rural people. Therefore, people who move to find work, and live for more than one year without permission to change their RPR, are called the long-term-resident floating population. Those who are officially permitted to change their RPR address are known as migrants. According to a survey of the floating population in three cities - Beijing, Wuxi and Zhuhai - most of them had no plan to return to their place of origin. 50% of the respondents tended to settle down, and this figure grew to 70% for those who had stayed for more than 10 years. As the official statistical data on immigrants simply refer to the population who obtain official residential permits from destination city governments, they ignore the much larger influx of "floating" people and, most probably, the real permanent residents of the future. Therefore, it is argued that an in-depth study on migration in China is only valid when it covers the long-term-resident floating population.
Figure 1: Map of China by provinces.
Figure 2 shows a steeply increasing volume of migrants. According to the published data of the 5th National Population Census, the size of the so-called floating population has grown to more than 144 million. With regard to migration direction, there appears to be a strong tendency towards fixed destinations in certain provinces, like Guangdong, Beijing and Shanghai. It has also been observed that new residents usually come from neighboring provinces. For example, most people moving to Shanghai are originally from Zhejiang, Jiangsu and Anhui Provinces; those moving to Guangdong are usually from Sichuan and Hunan provinces, and so on. Given the growing number of migrants and the possible problems they raise (their living conditions, and their influence on destination regions as well as on their regions of origin in terms of challenges in land use patterns, housing, infrastructure and management), the investigation of floating people becomes one of the biggest challenges facing the country. Starting from the basic assumption underlying all migration research that treats migration as "a deliberate means to reach some expected end" resulting from rational thinking (Mulder, 1996), this paper emphasizes the factors that encourage people to move and, in some cases, whether there are gaps between the expectations and realities of these new residents and how far they can be filled. The structure of this paper is as follows: first, theories on the motivation of migration are reviewed globally and chronologically; this is followed by discussions of the forces and conditions of Chinese migrants, based on selected cases in Hong Kong and Shanghai; and finally, a preliminary conclusion attempts to generalize possible strategies of population redistribution.
Figure 2: Official data on floating population (unit: 10,000 people).
Source: Development Research Center, State Department, 1989; Agriculture Ministry, 1993; Department of Labor and Social Security, 1994; Census of 1% of the Whole Population, 1995; Census on Agriculture Sector, 1996; State Statistic Bureau & Department of Labor and Social Security, 1997, 1998, 1999, 2000.
MIGRATION THEORY IN LITERATURE
Migration research has a long history and has always been an inexhaustible source of research topics. With specific focus on "why people migrate", Jong and Gardner pointed out that studies exploring goals for migration may also be segregated into internal and international migration, or macro and micro motivations (Jong and Fawcett, 1984). Differing from micro studies, which analyze cities as separate entities, macro migration research is concerned with the examination of the city as a reflection of the whole society, describing broad patterns of movement for geographic areas and population aggregates. Within the group of studies asking why and where, influencing factors were believed to come from two levels, micro and macro, and covered motivations, family structures, degrees of information available, policy, regional socio-economic context, and so on. It is worth noting, however, that the majority of previous efforts were placed on motivations or destinations separately, while less emphasis was put on unraveling the association between them. It is argued here that a motivation for moving always involves a matching issue: the problem of why and where, namely motivations and destinations, should always be considered at the same time. More specifically, the diversity of motivations is associated with the diversity of residents' social structures, which, as many academics have suggested, may influence the diverse directional preferences of residents. It is argued that more attention should be paid to socio-economic differentiation between various social groups as expressed by different preferences in geographical destinations. Understanding the segregation of residents' social structure is the only way to guarantee an achievable and desirable spatial structure of residence.
Table 1: Influencing factors summarized by previous works.
Motivations (macro level):
- Social status / economic
- Social-psychological
- Policy / power / management
- Contextual factors (village community ties, village norms, ethnic status and social networks)
Summarized from: (Bassett and Short, 1980; Bourne, 1981; Jong and Gardner, 1984; Oberai and Bilsborrow, 1984).
The study of cross-province migration obviously falls within the research scope of motivation in macro-level migration. In this respect, the economic model was recommended as most persuasive by most academics (Muth, 1969; Bassett and Short, 1980; Phe and Wakely, 2000). The economic approach can be traced back to the writing of Ravenstein in the Journal of the Royal Statistical Society, with his theory entitled "laws of migration", in 1885 and 1889. During the first third of the 20th century, when cities world-wide were experiencing massive population growth fuelled by migration from other parts of the country as well as from overseas, models of migration were extended as well.
The Dual Economy Model
Based on the observation of rural-urban migration, the primary theory by economists was suggested by Lewis and, later, by Ranis and Fei (Lewis, 1954; Fei and Ranis, 1961). Briefly, rural-to-urban migration was a result of high wages in urban modern industrial sectors. This model hypothesized that migration could be considered as an equilibrating mechanism in which people were transferred from a labor-surplus to a labor-deficit sector. The difference in wages between the two sectors urges workers to move. The process continues as long as surplus labor exists in the rural areas.
Integrating Theory of Determinants of Migration
In the succeeding decades, many factors were suggested by researchers to extend the economic model. In 1969, Todaro added a subjective estimate of the probability of obtaining employment in the urban modern sector (Todaro, 1969). High migration can continue even when high urban unemployment rates exist and are known to potential movers. After that, a number of studies hypothesized and tested other settings, including better entertainment or "bright lights" in the city, better educational facilities for children, fertility and child-care arrangements, means of breaking away from traditional village norms, higher social status, and community facilities (health, recreation, communication) (Oberai and Bilsborrow, 1984; Standing, 1984). In conclusion, it is argued that the decision to migrate is a complicated consideration of the whole living context, with the purpose of maximizing welfare. It is most important to note that the term "welfare" is not restricted to its economic meaning, but also includes social and cultural meanings. "In formulating their migration decisions, people respond to imbalances in the distribution of land, labor, capital, culture, and other resources" (Oberai and Bilsborrow, 1984).
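Read together, the Lewis and Todaro arguments can be compressed into a single expected-income comparison. The notation below is introduced here only for illustration and is not taken from the works cited:

\[
E[w_{u}] \;=\; \pi \, w_{u} \;>\; w_{r} \quad\Longrightarrow\quad \text{migrate,}
\]

where \(w_{u}\) and \(w_{r}\) are the urban and rural wages and \(\pi\) is the mover's subjectively estimated probability of obtaining a job in the urban modern sector. In the Lewis case \(\pi\) is effectively taken as 1, so only the wage gap matters; Todaro's extension explains why migration can persist alongside high urban unemployment, as long as \(\pi w_{u}\) still exceeds \(w_{r}\).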
MIGRATION IN CHINA: CASE STUDY IN HONG KONG AND SHANGHAI
In developing countries, the main concern of internal population redistribution and migration is often inter-related with problems of accelerating urbanization, the emergence of primary cities, and the stimulation of economic development policies directed at unemployment, underemployment, and regional job and income differentials. In the early 1980s, geographical living preference in China was investigated by Lee and Schmidt through a nation-wide survey (Schmidt and Lee, 1987). Although focused on the relationship between site characteristics and place choices, with the purpose of deriving criteria for human-centered evaluations of livable cities, the findings had broad use. It was discovered that the most preferred locations were associated with proximity to current residence, an important source of information, and indicators of economic well-being. Moreover, there was an overwhelming attachment to urban rather than rural living. These variables, as it happens, correspond to common features of the regions where most immigrants were recorded in the 5th Population Census, with the interrelationship with indicators of economic well-being especially easy to detect. As mentioned in the Introduction, the top three regions attracting the largest number of immigrants were Guangdong, Beijing and Shanghai, whose per capita GDP also ranked in the top three. The predicted tendency of migration to cities along the coastline, especially the two large economic bases of the Pearl River delta and the Yangtze River delta, proved to be true. The data in Table 2 show the regional difference in the volume of migration and the average level of GDP per person. It can be observed that where there was a better economy, there was a larger volume of migration.
Table 2: Percentage of migrants in east, middle and west China3 in 2001 (unit: %).

Area      Intra-region   Inter-region   Total    Average GDP/person
Eastern   36.30          27.17          63.47    14,550.73 RMB
Middle    16.14           2.96          19.10     6,066.875 RMB
West      14.64           2.79          17.43     4,821.917 RMB
Total     67.08          32.92         100

Source: (Cai et al., 2003; Lu et al., 2003)
Economic force was argued to be the main factor contributing to the phenomenon of migration in the whole country. Meanwhile, other factors such as career, social status, social ties, and so on were also suggested. Of the total inter-regional migration of the whole nation, 82.5% took place in the eastern part, from which the cities studied in this paper are selected. In the following paragraphs, conditions in Hong Kong and Shanghai will be explored one by one. It is to be noted that migration to Hong Kong is somewhat different, in that it more closely resembles international migration, in which no floating population exists, given the situation of "one country, two systems".
Hong Kong
Hong Kong's population tripled between 1950 and 2001, growing from 2.2 million at the start of that period. During the past century, Hong Kong's population has been shaped largely by immigration and emigration flows. After 1997, the One Way Permit Scheme (OWP) was established and became the most important immigration policy shaping Hong Kong's demographic growth and composition. Its purpose is to facilitate the reunion in Hong Kong of families with immediate members residing in the mainland. Other population flows come from skilled immigrants, and from transitory populations such as Foreign Domestic Helpers and imported workers.
Figure 3: Number of new arrivals and their proportion of the whole population, 1991, 1996 and 2001.
(Reproduced from figures in the report of the Task Force on Population Policy, 2003)
Social structure of the population
With regard to the volume of skilled immigrants, the HKSAR welcomes talents and professionals from overseas and mainland China, yet with many more restrictions on those from the mainland. Statistics reveal that, from 1997 to 2001, an average of 16,700 foreign professionals came to Hong Kong each year. Since its inception in 1999, the Admission of Talent Scheme successfully attracted 256 immigrants from the mainland up to the end of 2002, and another program, the Admission of Mainland Professionals Scheme, admitted 268 from 2001 to 2002. This group usually has higher educational attainment and a relatively higher salary level. However, the population in this segment is small compared with the other two flows and therefore did not contribute much to the average performance of the entire immigrant population. On the other hand, the majority of incoming people came under the OWP scheme; most of them were spouses or children of Hong Kong residents, with relatively little education and little working experience, and over half of them came from Guangdong Province. In the year 2001, immigrants under the OWP reached 266,537, or 4% of the Hong Kong population. Up to now, new arrivals from the mainland under this scheme have contributed 93% of the population growth (Task Force on Population Policy, 2003). Hong Kong also has a significant number of imported, low-skill workers. The typical sub-group is Foreign Domestic Helpers (FDHs). They are usually employed by families to take care of household chores when the wife has joined the labor force. The number of FDHs has also increased constantly and, at the end of 2002, it reached 237,104.
Given the large proportion of low-skilled, less educated and less experienced sub-groups, the resulting performance of immigrants is lower than the average level of the whole population in many aspects, including educational attainment, median monthly income, and unemployment rate.

Table 3: New arrivals and whole population aged 15 and over by educational attainment (highest level attended).

Educational attainment   New arrivals aged 15 and over    Whole population aged 15 and over
                         Number      %                    Number       %
Lower secondary          66,431      38.4                 1,060,489    18.9
Upper secondary          41,438      23.9                 2,001,771    35.8
Tertiary                  9,879       5.7                   918,500    16.4
Total                   173,212     100                   5,598,972   100

(Reproduced from figures in the report of the Task Force on Population Policy, 2003)
Table 3 gives a comparison of educational attainment between new arrivals and the whole population of Hong Kong. The percentages of new arrivals who had attended upper secondary or tertiary education were 23.9% and 5.7% respectively, much less than the corresponding 35.8% and 16.4% for the whole population, while the proportions who had attended only lower secondary or primary school, 38.4% and 25.3% for the new arrivals, were higher than the 18.9% and 20.5% for the whole population.
Motivations, expectations and realities
It has been observed that the majority of those coming to Hong Kong were members of families residing in Hong Kong who wished to be reunited. More specifically, the story begins when a Hong Kong resident marries a person on the mainland. Because of the large number of people in this group, spouses and children usually have to wait several years to obtain permission to migrate4. It is worth remembering that most of those seeking wives or husbands on the mainland are part of the working class, as are their spouses. These new arrivals were found to be less capable of competing with the local population, and therefore less choosy and more willing to accept lower wages. Their labour force participation rate was 44.2%, compared to 61.4% for the overall population; 30.4% were service workers/retail sales workers, and 34.9% were workers in elementary occupations. The median monthly income from all employment was $6,000, much lower than the $10,000 for the overall working population. Based on the discussions above, it may be noticed that the social network could be the main force in attracting inflows. However, the authors suspect that better living conditions, such as higher income and better community facilities, were the underlying force for many mainland wives or husbands when they married a Hong Kong resident. These inter-marriages are most likely similar to those between urban and rural people: the side with the lower economic status expects to break away from traditional rural life. It is a pity to observe that, in this case, the gap between expectation and reality was found to be rather large.
Shanghai
The story of immigration in Shanghai is as long as the story of the city's development. Historically, Shanghai has experienced three waves of incoming population since its establishment. Along with Shanghai's lengthy process of land formation as an alluvial plain of the Yangtze River Delta, residents originally living around Tai Lake moved there and formed the original Shanghainese. The population of Shanghai grew so fast that, in the year 1949, it had reached 5,460,000, almost 20 times that of 1843, when Shanghai County was established. Immigrants were always the main contributors to population growth in the semi-colonial period; thousands of people, both from China and overseas, came to this "heaven of adventure", dreaming of getting rich overnight. In 1946, immigrants made up 79% of the total population of Shanghai according to statistical data. Among them, 4% were foreigners from more than 40 countries (Zou, 1980). The 2nd wave came during the period 1950-1955. In the first population census after liberation in 1950, all the remaining residents in Shanghai were registered as permanent residents, which was the main reason for the steady increase shown in Figure 4. Since then, a large number of rural people migrated to Shanghai as workers, responding to the call to transform capitalist Shanghai into socialist Shanghai with a shift of economic basis from the commercial sector to the industrial sector. The instant population boom brought more pressure to bear on every sector of society than Shanghai could afford, in terms of employment, infrastructure, educational facilities, and so on. In 1955, a strict population control policy was introduced in Shanghai. For many years thereafter, only outflow was strongly encouraged. It seems that waves of immigration have always been synchronized with economic fluctuation. Since the late 1980s, the number of people coming to Shanghai to seek jobs and domicile has been increasing. Now the floating population has become the main stream. While the number of immigrants remained around 180 thousand over the past five years, the floating population jumped to 3,870 thousand by the end of 2002.
Figure 4: Comparison of population development among various sub-groups from 1930 to 2002. Here the non-local group includes immigrants, floating people and foreigners; the last two categories both refer to those living in Shanghai for more than 1 year.
Source: (Zou, 1980; Zhang et al., 1990; Shanghaishi Tongji Ju (Shanghai Municipal Statistics Bureau), 1991; Shanghaishi Tongji Ju (Shanghai Municipal Statistics Bureau), 1992; Editorial Board of 'the Population of China towards the 21st Century-Shanghai', 1994; Shanghaishi Tongji Ju (Shanghai Municipal Statistics Bureau), 2003).
The conclusion can therefore be reached that Shanghai could be called a "city of immigrants" and, as a result, this has had an impact on its culture, living style, and regional value system. It is interesting to note that the capacity to embrace diverse people exactly reflects what "Sea-Culture"5 claims as its qualities: assimilation of exterior cultures and flexibility. It might be a good example of the dynamic process in which a city attracts an incoming population and is then reshaped by it culturally.
The social structure of the population
In Figure 5, an obvious social segregation can be observed. 57% of immigrants had attained education at high school level or above; this percentage decreased to 19% for floating people, and was 33% for the overall population. Whereas as many as 24% of floating people had finished only primary school, the corresponding figure for immigrants was only 10%.
Figure 5: Comparison of educational attainment between immigrants and the whole population of Shanghai (unit: %), based on data from the 4th National Population Census in 1990.
(The figure compares immigrants, the floating population and the whole labour force* across five categories: college and above, high school, secondary school, primary school, and illiteracy.)
*In these sections, only populations in the labor force were included, that is, aged 18 to 59 for males and 18 to 54 for females.
Source: (Editorial Board of 'the Population of China towards the 21st Century-Shanghai', 1994).
Significantly, the shift in economic structure contributed to proportional changes in the employment structure and the resultant distribution of socio-economic groups. The social structure of immigrants also changed. Recent years have witnessed a shift from secondary industry to tertiary industry, with the percentage of employees in the latter sector increasing to 38.41% (see Table 4), compared with 26.58% in 1986 (Zhang, Wang and Hu, 1990). The floating people were mainly from the working class, originating from rural areas. Shanghai has conducted a specific survey of the floating population since 1988. It was found that, among the entire floating population, 62.6% were motivated by employment or business opportunities6, and the figure increased to 71.9% if only males were considered.

Table 4: Median occupational distribution of immigrants in 1989-1994. Columns: Percentage % (Immigrants, Floating8) and Quotient7 (Immigrants, Floating).
Primary industry - Agriculture: 3.57  5.91  .47  0.28
Secondary industry - Industry: 53.80  36.75  .68  1
Construction: 3.99  32.87  7.98  0.97  3.18  0.74  1.06
Retail/catering facility: 6.39  10.86  1.21  0.71
Real estate/community service: 4.75  4.78  1.16  1.16  2.83  1.43  0.65  1.28
Health/social welfare: 1.76  0.39  1.35
Education/culture/broadcasting: 6.04  1.79  0.50  0.40  1.42
Research/technique: 0.58  0.05  0.11  1.23
Financial/insurance; Government: 11.48  1.75  0.47  3.09
Reproduced from (Editorial Board of 'the Population of China towards the 21st Century-Shanghai', 1994).
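The quotient reported in Table 4 follows the definition given in endnote 7; written out (with symbols introduced here purely for illustration), for a sector \(i\) it is

\[
Q_i \;=\; \frac{s_i^{\text{migrant}}}{s_i^{\text{local}}},
\]

where \(s_i^{\text{migrant}}\) and \(s_i^{\text{local}}\) are the shares of sector \(i\) in the occupational distribution of the migrant (or floating) group and of the local population respectively, so that \(Q_i > 1\) indicates that the group is over-represented in that sector relative to local residents.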
Spatial distribution
The spatial distribution of these non-local populations was twofold. With respect to immigrants, it was found that the residential distribution of this sub-group was usually in accord with the pattern of local residents, that is, a linear gathering along the west-east commercial axis, with a gradient descent towards south and north (Zhang, Wang and Hu, 1990). However, inasmuch as new immigrants have few social ties with relatives or friends, they were more likely to be found in new residential districts like Pudong and Minhang. With respect to the floating people, closer data revealed that the majority of them were originally from the countryside, with lower educational attainment and less working experience; therefore, like the OWP holders in Hong Kong, they faced the same problem of less choice and lower wages in their jobs. The places where floating people congregated, judged by population volume, were found to be located in city fringe areas where there were clusters of factories, new construction sites, and other places where chances of employment requiring little skill were likely to be found (Editorial Board of 'the Population of China towards the 21st Century-Shanghai', 1994). Furthermore, living costs in city fringe areas were much lower than downtown, another significant filter in the selection of living location.
CONCLUSION
The theory of the Dual Economy successfully explains the massive rural-urban migration at the beginning of the industrial era in the 1930s. It also seems capable of explaining most of the rural-urban migration by less educated populations in China, as well as the process of "instant city" building. Deng Xiaoping's plan for a Socialist Market Economy brought this country into a transitional era in all aspects, physically and mentally. The opening-up policy produced impressive and immediate reforms, however, in cities rather than in villages. While society was experiencing "letting some people get rich first", a steadily widening gap between rural and city life was also witnessed. Taking the example of Shanghai, at the end of 2001 an average per capita yearly income of 12,883 yuan was recorded among the urban population, against 5,850 yuan among the rural population, whilst the national average per capita rural income was 920 yuan yearly. Indeed, the deviation between rural and urban people was so large that it attracted a continuous labor flow from villages to cities, despite the fact that there were few chances of obtaining a residential permit in the destination cities. For most of the floating population, "city" is a term representing numerous employment chances and means of earning fast money. The transformation of the economic structure provides another opportunity. When the emphasis shifted from secondary industry to tertiary industry, a growing demand for the service sector appeared at this preliminary stage, especially in low-skill work such as waiting in restaurants and laboring on construction sites. These jobs welcome people from the countryside, since they require little in the way of special skills or work experience, and value their cheap labor. However, the rapidly developing city is grateful for their work without acknowledging their social status. It is also noticeable that, in contemporary China, the rates of growth of other sectors in cities have been insufficient in view of the large incoming rural population. Early in Shanghai's second wave of immigration, in the 1950s, the fast-booming population almost overwhelmed the sectors supporting daily necessities, from schools to infrastructure. The strict residential registration system was employed as a strategy to attach rural people to their land, depriving them of any hope of benefiting from urban social welfare, including education and health care, no matter how long they live there or for how many generations, unless they obtain a formal contract with a state-owned company. A similar situation is shared by imported, low-skill workers under the FDH (Foreign Domestic Helper) scheme in Hong Kong. Despite Hong Kong's open system in terms of migration control, the government has set relatively strict restrictions on FDHs and also on mainland professionals. Understanding that no other benefits would be available, these former farmers expected nothing more than a higher salary. In this case, the imbalance in the distribution of employment opportunities and payment is argued to be the main contributor to migration.
Considering the migration of a more educated population, or city-to-city migration, the dual economy model loses its viability. For this group, the triggers encouraging them to migrate differ according to their living expectations and value systems. It is widely accepted that the possibility of developing one's career is one of the typical reasons, as with professionals admitted to Hong Kong under the Admission of Talent Scheme and the Admission of Mainland Professionals Scheme, and new university graduates seeking jobs in Shanghai. It should be noted, however, that the real scenarios are far more complicated. Given their higher educational
attainment or working experience, this group is more competitive and therefore has more opportunities and can make comparisons among different alternatives. Unlike the floating population, who are simply fighting for survival, these people surpass the first level and reach for the higher levels associated with socio-cultural factors in Maslow's hierarchy of needs (Maslow, 1943). Systematic studies of this group, or more precisely these sub-groups, are planned in future work.
REFERENCES
1. Bassett, K. and Short, J. R. (1980) Housing and Residential Structures: Alternative Approaches, Routledge & Kegan Paul, London, Boston and Henley.
2. Bourne, L. S. (1981) The Geography of Housing, Edward Arnold, London.
3. Cai, F., Zhang, C. W., Wang, D. W. and Wang, M. Y. (Eds.) (2003) Zhongguo Renkou yu Laodong Wenti Baogao No. 4 (2003): Zhuangui zhong de Chengshi Pinkun Wenti (Green Book of Population and Labour No. 4: Urban Poverty in Transitional China), Social Sciences Documentation Publishing House, Beijing.
4. Cheng, Z. (1999) Dangdai Shehui (The Contemporary Society), in History of Shanghai (Shanghai TongShi), Vol. 13 (Ed. Xiong, Y. Z.), Shanghai People Press, Shanghai.
5. Editorial Board of 'the Population of China towards the 21st Century-Shanghai' (Ed.) (1994) Kua Shiji de Zhongguo Renkou: Shanghai Juan (The Population of China towards the 21st Century: Volume of Shanghai), China State Statistics Press, Beijing.
6. Fei, J. C. H. and Ranis, G. (1961) A Theory of Economic Development, in: American Economic Review, 51, 4, 533-565.
7. Jong, G. F. D. and Fawcett, J. T. (1984) Motivation for Migration: an Assessment and Value-Expectancy Research Model, in Migration Decision Making: Multidisciplinary Approaches to Micro-level Studies in Developed and Developing Countries (Eds. Jong, G. F. D. and Gardner, R. W.).
8. Jong, G. F. D. and Gardner, R. W. (1984) Introduction, in Migration Decision Making: Multidisciplinary Approaches to Micro-level Studies in Developed and Developing Countries (Eds. Jong, G. F. D. and Gardner, R. W.).
9. Lewis, W. A. (1954) Economic Development with Unlimited Supplies of Labor, in: The Manchester School of Economic and Social Studies, 22, 2, 139-191.
10. Lu, D., Fan, J., Liu, Y., Jin, F. J., Chen, T. and Liu, W. D. (2003) 2002 Zhongguo Quyu Fazhan Baogao, Shangwu Yinshuguan, Beijing.
11. Maslow, A. H. (1943) A Theory of Human Motivation, in: Psychological Review, 50, 370-396.
12. Mulder, C. H. (1996) Housing Choice: Assumptions and Approaches, in: Netherlands Journal of Housing and the Built Environment, 11, 3, 209-231.
13. Muth, R. F. (1969) Cities and Housing, University of Chicago Press, Chicago.
14. Oberai, A. S. and Bilsborrow, R. E. (1984) Theoretical Perspectives on Migration, in Migration Survey in Low Income Countries: Guidelines for Survey and Questionnaire Design (Eds. Bilsborrow, R. E., Oberai, A. S. and Standing, G.), Croom Helm, London & Sydney.
15. Phe, H. H. and Wakely, P. (2000) Status, Quality and the Other Trade-off: Towards a New Theory of Urban Residential Location, in: Urban Studies, 37, 1, 7-35.
16. Schmidt, C. G. and Lee, Y. (1987) Residential Preferences in China, in: The Geographical Review, 77, 318-327.
17. Shanghaishi Tongji Ju (Shanghai Municipal Statistics Bureau) (1991) Shanghai Tongji Nianjian 1991 (Shanghai Statistical Yearbook 1991), China Statistics Press, Shanghai.
18. Shanghaishi Tongji Ju (Shanghai Municipal Statistics Bureau) (1992) Shanghai Tongji Nianjian 1992 (Shanghai Statistical Yearbook 1992), China Statistics Press, Shanghai.
19. Shanghaishi Tongji Ju (Shanghai Municipal Statistics Bureau) (2003) Shanghai Tongji Nianjian 2003 (Shanghai Statistical Yearbook 2003), China Statistics Press, Shanghai.
20. Standing, G. (1984) Conceptualising Territorial Mobility, in Migration Survey in Low Income Countries: Guidelines for Survey and Questionnaire Design (Eds. Bilsborrow, R. E., Oberai, A. S. and Standing, G.), Croom Helm, London & Sydney.
21. Task Force on Population Policy (2003) Report of the Task Force on Population Policy, HKSAR Government Information Center, February 27, 2003, 20 May, see: http://sc.info.gov.hk/gb/www.info.gov.hk/info/population/
22. Todaro, M. P. (1969) A Model for Labour Migration and Urban Unemployment in Less Developed Countries, in: American Economic Review, 59, 1, 138-148.
23. Yang, D. P. (1994) Chengshi Jifeng: Beijing he Shanghai de Wenhua Jingshen (Monsoon Passing Cities: Comparison between the Cultures of Beijing and Shanghai), Dong Fang Press, Beijing.
24. Zhang, K. M., Wang, J. M. and Hu, R. S. (Eds.) (1990) Shanghai Renkou Qianjing Zhanwang (Predicting Future Population Development in Shanghai), Baijia Chubanshe, Shanghai.
25. Zou, Y. (1980) Jiu Shanghai Renkou Bianqian de Yanjiu (Intra-urban Migration in Old Shanghai), Shanghai People Press, Shanghai.
ENDNOTES
1. Associate Professor, Center of Architecture and Urban Design for Hong Kong and China, Department of Architecture, University of Hong Kong, ssvlau@hkucc.hku.hk
2. Ph.D. Candidate, Department of Architecture, University of Hong Kong, [email protected]
3. In this table, the eastern part includes Beijing municipality, Tianjin municipality, Hebei province, Liaoning province, Shanghai municipality, Jiangsu province, Zhejiang province, Fujian province, Shandong province, Guangdong province and Hainan province; the middle includes Shanxi province, Jilin province, Heilongjiang province, Anhui province, Jiangxi province, Henan province, Hubei province and Hunan province; the remaining regions are ascribed to the western part. See Figure 1.
4. According to data provided by the mainland authorities, the total number of OWP applications as of August 2002 was around 168,000. Currently, spouses in Guangdong have to wait seven or eight years to obtain an OWP, while those in other provinces have to wait about five years.
5. With regard to the socio-cultural context, Shanghai is labelled with the so-called "Sea-Culture". What Sea-Culture is remains debated: supporters claim it to be capable of assimilating exterior cultures and of flexibility; the opposition claims that the core spirit of Sea-Culture is nothing more than western adoration (Yang, D. P. (1994) Chengshi Jifeng: Beijing he Shanghai de Wenhua Jingshen (Monsoon Passing Cities: Comparison between the Cultures of Beijing and Shanghai), Dong Fang Press, Beijing). A widely accepted opinion is that "Sea-Culture" took on the role of assimilating various complex incoming cultures and reflected a sensitive attachment to modern and advanced civilization, especially from overseas.
6. Only long-term floating people, aged 5 years old and above, residing in Shanghai for 1-5 years, were taken into account.
7. The equation is: quotient = (percentage of occupational distribution of immigrants) / (percentage of occupational distribution of local people).
8. Only long-term floating people, aged 5 years old and above, residing in Shanghai for 1-5 years, were taken into account.
MIGRATION IN MEXICO: SLOWER TRENDS TO MEGACITIES; HIGHER FLOW TO THE U.S.
ALBERTO GONZALEZ-POZO
Departamento de Teoria y Analisis, Universidad Autonoma Metropolitana, Xochimilco, Mexico
Mexico faces a puzzling paradox. Its population is subject to several migration movements: one to the nation's capital, another from the capital to other big cities in the country, and a third one of people migrating to the United States of America. Migration to Mexico City was mentioned in the Monitoring Panels Reports on Megacities during the 29th and 30th Sessions of the International Seminar on Planetary Emergencies1. This article tries to appraise the size and profile of the three movements, as well as their impact on the economic, social and cultural structure of Mexico and the U.S. The available data suggest that Mexican migration to the U.S. is at least equal to, if not bigger than, migration to several metropolitan areas in Mexico. Most of the incentives that drive people in economically depressed areas to migrate to big cities are the same that trigger the decision of individuals and/or families to migrate north of the border. If Mexicans looking for a better wage could not migrate (seasonally or for longer periods) to the North, the migration towards big urban agglomerations like Mexico City could be worse. Likewise, the capability of the Mexican Metropolis to hold millions of people looking for a better quality of life has prevented much larger numbers of Mexicans from emigrating to our neighbor, one of the biggest economies in the world. Thus, solutions to both problems are closely related.
MIGRATION TO AND FROM THE MEGACITY: CONTRADICTIONS AND LOGIC BETWEEN BOTH TRENDS
Historic background
Migration to the Metropolitan Zone of the Valley of Mexico (MZVM) is an historic (even an ancient) reality. Teotihuacan in the Valley of Mexico, the first metropolis of the American Continent between 100 B.C. and 600 A.D., inhabited by 200,000 to 300,000 people, had already been settled almost exclusively by foreigners coming from Oaxaca, 400 miles away, and probably also by other alien groups. In modern times, the migration forces increased exponentially during the 20th Century. In 1900, Mexico was basically a rural country, with a national capital city of less than 400,000 people and several regional capitals with less than 100,000 inhabitants each. The Mexican Revolution between 1910 and 1921 expelled thousands of families from the country's battlefields to the national capital, looking for security. Peace returned to the country afterwards, but the migrants stayed in Mexico City. Then, between 1930 and 1970, there was a process of industrialization thanks to policies that sought to substitute imported products with national industrial production. As a growing and important market of consumers, and due to the infrastructure and communications available, Mexico City became the leader of this process, giving rise to an important demand for labor in the secondary sector of the economy. Migration to the national capital increased once again, and that led to a process that is far from ending2. Until 1970, the whole flood of migrants to Mexico City, Guadalajara and Monterrey represented 38.2% of all the inter-state population movements in the country.
Meanwhile, many other metropolitan areas with industrial or tertiary (tourism) development, as well as other non-metropolitan urban settlements with more than 50,000 inhabitants, now compose the whole picture of urbanization in Mexico3.
MEXICO: SYSTEM OF CITIES, METROPOLITAN AND NON-METROPOLITAN CENTERS, 1970-2000
Table 1. Source: Simplified after Garza (2003), p. 88.
Figure 1 (After Garza, 2003, p. 96.)
Due to the rise of several smaller metropolitan areas in Mexico, the flood of immigrants to big cities changed during the 20th Century. For instance, the Metropolitan Zone of Mexico City, as a destination point for migrants from the rest of the country, absorbed 901,243 people between 1965 and 1970, but in 1985-90 attracted only half that figure: 448,365 new inhabitants. Expressed in terms of the total of internal migration in Mexico, the MZMC attracted 47.8% of the movements between 1965-70, and only 29.1% between 1985-904.
INTERNAL MIGRATION: PLACES OF ORIGIN AND PROFILE OF MIGRANTS
Where did the migrants to Mexico City come from? In 1970, as well as in 1990, 66% of them came from seven Federal States (Michoacán, Guanajuato, Puebla, Oaxaca, Hidalgo, Veracruz and Guerrero) located within a range of 200 to 500 kilometers around the national capital5. They show a dense rural population, mostly composed of
persons of mixed race (mestizos), some of them illiterate (or with few school years), and with serious local production problems due to poor soil and lack of land, water, services, communications, capital and credit. They are very good traditional farmers, but cannot compete with the mechanized agriculture and agribusiness characteristic of our time. Their income is very, very low, and many of them see big cities as a chance to get even a badly paid job, which is probably better than their meager and always unpredictable earnings as farmers. At the same time, big cities offer them free education, health services and other advantages that they appreciate. Of course, there are people of urban origin too, who come from smaller cities, with better education and other skills, as artisans, industrial workers, employees, small entrepreneurs and even professionals. But they are a minority among migrants. A profile of migrants inside the country by age group between 1930 and 1990 shows that most migrating decisions are taken by people between 24 and 39 years old. Children of school age usually don't migrate with their parents, and the index of migrating women aged 50 to 64 years, even if it is low, is nonetheless higher than that of men in the same age group6.
MEXICO: INDEXES OF INTER-REGIONAL MIGRANTS BY GROUPS OF AGE, 1930-90
Table 2 (After Pimienta Lastra, 2002, p. 95.)
As we will see later on, the same states, a similar profile and the same reasons are behind the main flood of emigrants to the United States.
A NEW TREND: MIGRATION FROM THE METROPOLIS TO OTHER PLACES
But as the centripetal migration slowed its rate, a centrifugal process began. It started slowly in the sixties in the city center, driving people out of old buildings that changed from dwelling to commercial or service land-uses. The people moved to other urban areas not far away, still within the Federal District. Then, from 1970 onwards, the city center lost more and more people, who moved to closer and finally to distant Municipalities of the State of Mexico. These peripheral places received at the same time new immigrants from the rural areas and, therefore, show an incredibly fast rate of growth.
Now, the Metropolitan Zone of Mexico City is composed of a core of 8.5 million people within the Federal District and an urbanized "crown" or periphery of 9.5 million inhabitants living in more than fifty Municipalities of the State of Mexico. The Federal District (or Mexico City itself) as a whole now shows almost stabilized growth. It has few areas left in which to grow and can only increase land-use densities to hold more people. At the same time, the periphery grows more and more, as do the metropolitan areas in neighboring valleys. Both phenomena announce the next step in this process: the change from a Metropolis to a Megalopolis, that is, a huge urbanized region that already has 24 million people and could grow to 38 million in 20207. But the centrifugal trend is far from being limited to the Megalopolis of Central Mexico, and its actors have a profile quite different from that of the centripetal migrants. Most of the families that abandon the central city are young couples, between 20 and 30 years old, with a few still-small children. The couples have completed primary and secondary education and have working skills in industry or services. Since the terrible earthquakes of 1985, which ruined more than 100,000 dwellings in Mexico City, many people have tried to move far away from the Valley of Mexico and other areas subject to seismic risk. Of the families that decide to abandon the MZVM, 50% choose to settle in another metropolitan area (the so-called "metro-metro" migration), 30% look for smaller settlements or rural places, and 20% return to the capital8.
MIGRATION TO THE UNITED STATES
Historic background
Mexico is a country where emigration abroad has only one meaning for 98.7% of the people taking that decision: they go to the United States, either on a temporary or a permanent basis. Only 1.7% of emigrants move to other countries. Canada is considered distant, and for the average rural emigrant, Europe, South America, Africa or Asia seem far away. Migration from Mexico to the U.S. is now more than a century old. It started in the last decades of the 19th Century with a small flow of peasants to the four southern states of Texas, New Mexico, Arizona and California. Then, between 1900 and 1930, the flow increased. The United States was growing vigorously, in conjunction with its own industrialization process, and thousands of Mexicans looked for security during the Mexican Revolution, crossing the border between the two countries (more than 3,000 kilometers long) with or without papers. The total number of Mexicans living in the U.S. in 1926 was estimated at more than one million people, half of them on a temporary basis and the other half already permanent residents9. Only the Great Depression of 1929 stopped the flow, and it was followed by a process of expulsion of immigrants, especially those who could not show legal papers. During the 30s, the flow ceased almost completely, but in the 40s, and especially during World War II, it resurged vigorously due to the lack of a rural labor force. A special "Bracero" Program was set up in 1942 for Mexican rural workers10. It moved more than 10 million people over the next 22 years. The next period (1965-1986) is popularly characterized by the appearance of more and more "indocumentados" (people without legal documents), with restricted legal admission of Mexican immigrants to the U.S., increasing restrictions on legal immigration, more border controls and systematic deportation of illegal migrants.
The latest period, from 1987 to the present, is characterized by a process of regularization of undocumented Mexicans (2.3 million people received their papers between 1987 and 1989) and, at the same time, by growing numbers of clandestine
immigrants, followed by the regular expulsion of hundreds of thousands of undocumented people11. The total number of Mexican migrants now living in the USA was estimated by American sources at 20.5 million in 2000 (it was only 13.5 million ten years earlier). They represent 58.5% of the total "Hispanic" population living there, and 7.3% of the total population of that great country12.
MEXICAN REGIONS INVOLVED IN MIGRATION TO THE U.S.
Where do Mexican emigrants to the U.S. come from? They come from all over the country, but mostly from the same federal states that provide migrants to Mexico City. This is the so-called Historical Region 1 of emigrants, comprising 8 west and central-west States. From this region come between 50% (according to the Mexican census) and 63% (according to American sources) of the total of Mexican migrants to the U.S.13. It is followed by the Central Region 2, also historical but with a better climate, represented by another 8 states near the Federal District. This region provides between 31% (Mexican sources) and 13.8% (American sources) of the total of Mexican migrants. Then there are the two border regions to the north and south of the country. The Northern Border Region 3 comprises 6 federal states along the border with the U.S. and two other Pacific States that do not border the U.S. but are very close. This region provides between 10.8 and 26.5% of migrants, depending on the source consulted. Finally, there is the Southeastern Border Region 4, composed mostly of 6 eastern and southeastern border states that share similar ecological and socioeconomic backgrounds. It is a humid region, covered with tropical vegetation. This region plays a minor role in the migration to the north, because emigrants to the distant U.S. fluctuate between 1.40 and 7.09% of the total, depending on the source consulted. But it is significant because it receives its own flow of migrants from Central America.
MIGRATORY REGIONS IN MEXICO
Figure 2. 1. Historic (Western); 2. Historic (Central); 3. Northern Border States; 4. Southeastern Border States. The dark dot in the center of the figure represents the Metropolitan Zone of the Valley of México. (Simplified, with additions, after Durand and Massey, 2003, p. 71.)
U.S. REGIONS AND CITIES OF DESTINATION
Where do the migrants settle when they travel and arrive in the U.S.? Frequently, they stay on both sides of the border. Along the Mexican side there are the 6 federal northern states mentioned above, with 35 Municipalities and several Metropolitan Zones such as Tijuana (1,274,240 inhabitants in 2000), Mexicali (764,602), Ciudad Juárez (1,218,807), Nuevo Laredo (310,915), Reynosa (524,692) and Matamoros (418,141)14. Altogether, these six Mexican cities have more than 4.5 million inhabitants, and their explosive growth during the second half of the 20th Century is better understood if one compares it to a dam that controls part of the incessant flow of migrants, because many migrants without papers stay on the Mexican side waiting to get them, or find a job in some of the American industries settled there. Moreover, there are many people residing there who cross the border daily or weekly to work on the other side, in American cities (San Diego, Calexico, El Paso, Eagle Pass, Laredo, McAllen and Brownsville). As for the people who finally manage to cross the border, many seek job opportunities in the 25 counties on the American side of the border.
MEXICO AND THE UNITED STATES: BORDER STATES AND COUNTIES
Figure 3. The dark areas represent the American counties next to the borderline. The Mexican Municipalities are not shown. (After Durand and Massey, p. 53.)
But there are many other migrants who move into American territory. They choose not only agricultural regions where they can find jobs as rural workers; they also try to go to cities, even metropolises, where they can find jobs in services or industry. The difficulty of obtaining jobs, permits and legal advice is eased if they have relatives, friends or at least people from their place of origin who are legally settled there. There are four regions towards which Mexican migration in the U.S. is oriented. The most important comprises the four southern border states already mentioned, plus its recent expansion to the western states of Utah, Nevada, Idaho, Montana and Washington. In 2000, it held 77.5% of the total Mexican population in the whole of the USA. The second zone includes the states around Lake Michigan: Illinois, Michigan, Indiana and Wisconsin, with 7.9% of Mexican residents. In third place is the region of the Central Plains (Colorado, Kansas, Oklahoma, Missouri, Iowa, Nebraska and Wyoming), with 4.60%. And finally, the 13 East Coast States from New York to Florida form the fourth region, with 7.5% of total Mexican migrants in 200215.
430 The four regions show different patterns of migrants dispersed in rural areas or concentrated in some cities. Maximum concentrations were in cities like San Antonio (until 1960,) and later on in Los Angeles (from 1960 onwards). Kansas City and Chicago are examples of high concentrations, and Dallas, El Paso, San Antonio, Phoenix, Yuma, Yakima, Dalton and Atlanta are examples of medium concentrations'6. U.S. REGIONS WITH MEXICAN POPULATION, 2000
Figure 4 After Durand and Massey, 2003, pp. 108
Labor market and economic impact for both countries The importance of Mexican and Mexican-American rural workers in USA is better understood if one takes into account that they represent a total of 86% (77% and 9% respectively) of the total rural labor-force in that country. '7Mechanization of rural work increased during the 20thcentury, but there are always some tasks that require the qualifications (and modest salaries) that an old rural culture facing unemployment at home can offer abroad. But the rural labor market in the U.S. cannot support the high demand for lowpaid jobs that migrating Mexicans are looking for. Only 8% of them are employed in agriculture. Therefore, they go to the cities, looking for other opportunities. 56% find them in services (especially personal services) and 36% in industry. On average, the salaries they get there are four times higher than those for equivalent tasks in Mexico. And at the same time, they are lower than comparable salaries earned by Americans. Their economic importance for the Mexican economy has grown to impressive levels. It can be measured by the amount of money they save and send to their regions of origin. Yearly, they send home growing totals that could be as high as 15 to 16 thousands of million U.S. dollars in 2004'*. This amount is at least equal if not higher to the yearly income of Mexican oil exports. SOME CONCLUSIONS There are many more details on internal and external migration in Mexico, but the aspects shown here are enough to draw several conclusions: The migration from economically depressed regions towards Mexican metropolitan zones (especially to Mexico City and its immediate region) has slowed down, but continues to feed the growth of one of the largest cities in the world. Migrants find there more opportunities to get a job. Even if it is
431 low paid, it can be better than the meager income that they get in the rural areas they come from. On the other hand, over the last three decades, a new migration trend from the Megacity (and other big cities) has begun. The new flow is directed to smaller metropolitan zones and smaller cities of more than 50,000 inhabitants. It consists mostly of people with more years of education, earning middle and even medium-high incomes, looking for a better quality of life. Meanwhile, the flow of Mexican migrants to the U.S. has not ceased, but grown steadily during the 20” century. Emigration north of the border is an important alternative that poor peasants take as an opportunity to work abroad on a temporary or permanent basis. They cover now (along with Mexican-American rural workers) 86% of the whole rural labor force in USA. Together with other migrants working in services or industry, they send home important yearly amounts, equal or higher than those earned by the Mexican oil exports. Excessive specialization has avoided an integrated and simultaneous picture of the three migration movements. But they seem to be closely related, not only because the regions that send migrants to the Mexican Metropolis are the same that send migrants abroad, but also because the relatively new centrifugal force to other metropolitan zones and cities may be an important alternative, that could introduce some sort of equilibrium between the first two massive migration movements, especially if controls recently taken by the US Government to prevent and reverse the migration of Mexicans to the USA are successful. In any case, the Mexican economy should be revised as a whole, because it is still far from the level of development, employment and income needed by a democratic nation in the 21” century. The labor force now provided by migrants in the big Mexican megacity and several regions in the U S . could and should be used to strengthen the economy of other urban centers in Mexico and the same depressed regions that now expel their own people. REFERENCES Gonzalez Pozo, Alberto, “Urban Mobility in the Mexican Metropolis”, in International Seminar on Nuclear War and Planetary Emergencies 3dhSession, Series Editor and Chairman, A. Zichichi, Edited by R. Ragaini, Singapore, World Scientific, 2004. pp. 359-373. Gonzilez Pozo, Alberto and Hinojosa, Victor, “Water Use, Abuse and Waste”, in International Seminar on Nuclear War and Planeta y Emergencies 2Sih Session, Series Editor and Chairman, A. Zichichi, Edited by R. Ragaini, Singapore, World Scientific, 2002. Negrete Salas, Maria Eugenia, “4.5. Migracion” in Garza, Gustavo, La ciudad de Mkxico elfin del segundo milenio Mtxico, Gobierno del Distrito Federal y El Colegio de MCxico, 2000. p. 265. Garza (2003) considers “metropolitan” cities those with more than 100,000 inhabitants settled in two or more contiguous municipalities, “non-metropolitan” those of the same size minimum but within a single municipality, and just “cities” those between 50,000 and 100,000 inhabitants. Garza, Gustavo, La urbanizacidn en Mkxico en el siglo XX, MCxico, El Colegio de Mtxico, 2003. pp. 92-101. ”
432 4 5
6
7
8
9
10
I1 12 13
14
15
16 17
18
Negrete Salas, p. 266. Ibid., p 268. Pimienta Lastra, Rodrigo, Ancilisis demografico de la rnigracibn interna en Mixico, 1930-1990”,MCxico, Universidad Autonoma Metropolitma I Plaza y ValdCs, 2002. pp. 30-32, 94-96. Gonzalez Pozo (2004), p.359-360. Negrete Salas, “4.5. Migracion”, p. 273. Durand, Jorge and Massey, Douglas S., Clandestinos: Migracibn Mixico-Estados Unidos en 10s albores del siglo X U , MCxico, Universidad de Zacatecas I Miguel h g e l Pomia, 2003. p. 57. The term “Bracero” is related to arm (brazo in Spanish), because the Mexican rural workers used mostly their arms in the harvest of cotton, tomato and other vegetables. Durand and Massey, pp. 11-13,4540. Ibid., p. 56. They follow data from the U.S. Census 2000 Brief, 2002. Ibid., pp. 73-74. Differences between Mexican and American sources are due to different criteria of formats among emigrants’ census and immigration questionnaires. For instance, the State of origin (birthplace) may be not the last place of residence in Mexico before migrating. Demographic data after Garza, 2003. Durand and Massey, pp. 97-127 Ibid., pp. 103-104. kid. p. 153. Hernindez Amador, Roberto, “Incesante emigracion: mexicanos ganan en EU cuadruple que aqui“ in la Jornada, Mkxico, 31/07/04, p. 43.
MOTHER AND CHILD PERMANENT MONITORING PANEL (MC/PMP) NATHALIE CHARPAK Istituto Matemo Infantil, Bogota, Colombia Second Meeting, Erice, August 2004 First Meeting Erice 2002 MANIFESTO OF THE MOTHER AND CHILD PERMANENT MONITORING PANEL Mission To help to decrease the mortality and morbidity of mother and infant (less than one year old) through an efficient and effective network with the International Scientific Community in general and the World Federation of Scientists in particular. To highlight the impact of the other planetary emergencies on maternal and infant mortality and morbidity. Activities (I) during 2002-2004 Joint meeting in Erice on the ethical issue in AIDS-HIV epidemics. A workshop was held in Erice in 2003. Recommendations were made and published in 2004 (Report PMP Infectious Diseases - Professor Guy de ThC). Activities (11) The diffusion of the Kangaroo Mother Care Method for the care of Low Birth Weight Infants (LBWI). Since the fust scientific evaluation of the method in 1989, funded by the World Laboratory, KMC training, education and implementation are spreading all over the world (participants from more than 25 countries have already been trained in Bogota). The WHO published the KMC practice guidelines in 2003. Why do we target the LBWI in the the MCPMP? We must remember that Low Birth Weight Infants are a worldwide problem and a public health problem in developing countries. Their frequency and distribution closely parallel those of poverty. Low birth weight is associated with high neonatal and infant mortality and morbidity: worldwide five million children die annually and, in half or more of these deaths, LBW is either the direct or an associated cause of mortality. Care of low birth weight infants represents a burden for the health and social systems everywhere, but especially in developing countries. Most of the globally available resources for LBWI are invested in developed countries, both for sophisticated, technological care and for research focused on solving problems in scenarios in which access to expensive resources is available.
KMC diffusion in India: a challenge Since 2003, our training activities were directed to the implementation of KMC excellence centers in India. Formal training in Indian hospitals will begin after the National Neonatology Indian Convention that will be held at the end of October 2004.
433
434 Activities (111) KMC and quality of the care delivered to the LBWI: The written draft of a KMC universal database was compiled in Erice during the first meeting. The database was then produced on free access software that is easy to use and then translated into three languages and sent to all the interested centers to enable them to evaluate their performance and follow up their LBW infants.
Whv did we produce a universal KMC database? General objectives To provide support and monitor performance and results of the KMC programs around the world, through a standard database in order to enhance the quality of care for LBW babies. To gather the data over several years, first from regional zones and then on a larger scale, in order to evaluate the impact of the kangaroo mother technique nationally and then internationally on the care of LBWI. To identify the best practice of KMC according to the level of development in each setting, promoting the best diffusion of the method. To help standardization and avoid undesirable variations in KMC practice. Specific obiectives To monitor selected important outcomes such as mortality among LBWI (i.e. less than 2001 g) enrolled in the KMC program. To monitor quality of life among KMC graduates. To evaluate quality of care rendered to KMC enrolees through the database. First Evaluation of the use of the KMC Database There is room for improvement! All our colleagues are willing to evaluate their practices as we have been doing in Colombia for more than ten years, but things are not coming together as expected. Why? Although there are perceived benefits: The database is pleasant and easy to use; It contains data that can help to monitor and evaluate performance with the aim of continuing quality improvement, including revising and modifying clinical practice rules to obtain better results. There are difficulties and criticism: No time or staff dedicated to filling out the database; Too much data to collect; Expenses associated with printing new data entry forms and training data entry staff. Proposal: To stimulate each KMC center to write their own KMC guidelines for each location. To prioritise variables and modify (shorten) the database accordingly to ensure that at least a compact set of good quality data is obtained for minimal quality standard assurance. To promote the use of a form that is easy to read, to fill out and to use. Definition of minimal, desirable and ideal standards for a KMC center: Structure: space, comfort for the mothers, personnel organization;
435 Processes: Kangaroo position, Kangaroo nutrition, early discharge in KMC, maternal involvement, in-hospital KMC, ambulatory KMC, followup, pharmacological support. 0 In raw outcomes: Mortality, morbidity measure as re-admision, visual and auditive sequels, breastfeeding, growth and neuro-psychomotor development. c In adjusted outcomes: according to birth weight, to maturity of the premature infant, to the age of the mother, to the education of the parents. The scientific approach to a problem such as the LBWI in a developing country is not a luxury but an absolute necessity and a prerequisite for sustainable development. The production of good quality scientific knowledge by itself does not solve health care problems. Knowledge needs to be translated into action. The KMC research and implementation program worldwide is a good example of both South-to-South and South-to-Northknowledge transmission. 0
PARTICIPANTS AND PROGRAM OF THE 2ND MEETING OF THE MCPMP, ERICE 2004
0
Information Technology as a tool for monitoring Quality of Health Care in a developing country: the Kangaroo Mother Care Program experience. Dr. Juan Gabriel Ruiz, Colombia KMC in the Jose Fabella Memorial Hospital: use of the KMC database, difficulties and benefits. Dr. S. Medoza, Philippines KMC in Vietnam: difficulties using the KMC database. Dr. N. Nguyen, Vietnam The KMC database in a European Neonatology Unit: an illusion? Dr. C. Huraux. France
CONCLUSION Poverty in the world is increasing at alarming rates, bringing more LBW infants into the world each year. We feel that our modest contribution is adding to the efforts to improve lives of children all around the world, in developing countries in Asia, Latin America and Africa. KMC not only allows a child to survive with quality, but also promotes solidarity in the family and must be considered as a basic step for the building of the social peace that is badly needed in our world. Let's continue!
USING THE KMC PROGRAMME'S DATA BASE IN DEVELOPED COUNTRIES: AN ILLUSION? EXPERIENCE IN A LEVEL 3 MATERNITY IN CRETEIL, FRANCE. CHRISTIANE HURAUX Mother-Infant HIV Transmission Consultant, Pans, France The Kangaroo Unit within the obstetric ward, can accept up to five newborns in KMC, has a room for monitoring and incubators when necessary, and individual rooms for mother and child. There is a neonatologist nurse trained in KMC on duty 24W24h and neonatologist doctors dedicated to KMC. They are also in charge of all the newborns in the maternity unit and emergencies in the delivery room. This unit must also take care of distressed newborns who do not need NICU but specific care. Sometimes there are as many as eight newborns for only one nurse and pediatrician. The clinical history of each newborn is reported on a specific chart, common to all the public maternities in France and prepared for coding. It includes maternal history, delivery, neonatal care and follow-up. But, currently, coding is not used and at discharge there is a Standardizided Summary (SS) that is coded by the neonatologist. It is then transmitted to the DBpartement d'informatique mkdicale (DIM - department of cornputerised medical information) for registration or to be returned for correction. Obviously the KMC database is totally different, as our SS is very short and common for all the newborns including full-tern, normal babies. All the KMC procedures cannot be recorded unless we have the data written in by the nurse for each baby on its nursing chart; the same applies to the follow-up, which is recorded on the medical chart. The value of a specific KMC database is obviously to have a thorough follow-up of the activities of the K Unit, a good idea of the population in KMC and its evolution, whether medical or demographic, to compare our practices to other K Units and to develop a network between all the users of KMC based on a common language. Currently the neonatologist cannot file the data due to a lack of time. We should find a way of giving neonatologists and nurses the opportunity of doing this, and ask the DIM to register them in the KMC database. Then, using KMC database would not be just an illusion.
436
QUALITY OF HEALTH CARE ASSURANCE: THE KANGAROO MOTHER CARE PROGRAM EXPERIENCE JUAN G. R U E MD, M. MED.SC1. Department of Pediatrics, San Ignacio Hospital, Santafk de Bogota, Colombia TOPICS Quality of care: definition, components, measurement, improvement; The Kh4C program: brief description of goals and activities; Computer Tools for Monitoring Processes and Outcomes in the KMC Program; Critical Points; Conclusions. QUALITY OF HEALTH CARE Components of Health Care Delivering health care involves two aspects: Health care interventions (preventive or curative): diagnostic or therapeutic activities. Usually delivered directly by clinical care personnel. Quality of interventions depends on the effectiveness of the intervention and on the Clinical Performance of the clinician delivering it. Health care sewices other than interventions: delivery of drugs, appointments, transportation, administrative guidance, basic amenities etc. Usually not offered directly by clinicians but by health care institutions. They concern mainly patients' expectations, and are measured as Health Services. RESPONSIVENESS The combination of these two results in the global delivery of health care. Ouality of Care (AOHR Definition) Health care that is accessible, effective, safe, accountable, and fair is quality health care. This means that: Providers deliver the right care, to the right patient, at the right time, in the right way. Patients can access timely care, have accurate and understandable information about risks and benefits, are protected from unsafe health care services and products and have reliable and understandable information on the care they receive. Both patients and clinicians have their rights respected. Attributes measured when assessing Quality of Care Effectiveness of interventions Opportunity: was the service or intervention delivered at the right time?
437
438 Accessibility: was the service or intervention delivered to the right person? Was the access to services or interventions hampered by barriers? This item might also include equity. Efficiency: was the delivery of the service or intervention accomplished in the best possible way minimizing cost and risk without compromising effectiveness? Satisfaction: was the recipient of the service or intervention satisfied with both the way the service was delivered and the outcome? Elements considered when assessing Quality of Care (Donabedian) Structure: stable characteristics of a health service. Process: how things are organized and done, and how different activities interact. Includes adequacy of diagnostic work and treatments. Outcome: impact of health care on individuals and populations. Measuring Clinical Performance Processes: Does the clinician provide the diagnostic or therapeutic interventions appropriate for the patient’s condition and preferences in a timely manner and according to available resources? Outcome: Do observed outcomes correspond to expected outcomes under optimal care and according to base-line risk, for the condition? OUTCOMES The effects of health care as measured by health status, satisfaction, and survival. Cost outcomes refer to the economic consequences of choosing a particular therapeutic approach. Health outcomes include both traditional clinical endpoints and patientcentered outcomes. Proximal-distal continuum A continuum of health status outcomes in which the more proximal outcomes describe the clinical indications of illness based on objective (signs) and subjective (symptoms) information and the more distal describe the broader areas of mobility, role performance and, ultimately, life satisfaction (Brenner 1995). KMC PROGRAM Current uses of Kangaroo Mother Care (KMC) Kangaroo care instead of “usual” neonatal care Kangaroo care instead of minimal care units. Limited in-patient skin-to-skin contact: KMC Program at ISS -Colombia First contact established in 1989 as part of a cohort study, comparing KMC and traditional neonatal care for LBW infants. Starts in 1993 with a RCT comparing KMC and Traditional neonatal care.
439 Formally adopted by ISS in 1994. Modality offered is KMC instead of in-hospital minimal care. The program includes follow-up for at least 1 year. Structure of KMC Program (I) Human Resources: Interdisciplinary Team. Pediatricians and neonatologists, Pediatric and neonatal nurses, Psychologists, Social worker, Clinical epidemiologist, Manager, Data entry personnel, Clerical personnel. Structure of KMC Program (11) Administrative: Ascribed to a major ISS Pediatric Hospital and to a Medical School (Javeriana University). Runs a Kh4C Clinic, a Research Center and a Training Center; Defined Budget; Resources from ISS; External Agencies (Research). Structure of KMC Program (111) Interventions and Services Provided: Provides Kh4C Intervention as an alternative to Neonatal Minimal Care Unit; Screening, case finding; Three levels of Prevention; Does not provides care for acute conditions: refers patients to ISS Hospital; Delivers services such as appointments, some medications, referrals, 24 hour telephone support. Processes KMC major components: Kangaroo position, Kangaroo nutrition, Kangaroo discharge policy, Ambulatory follow-up program.
440
HOW KM INTERVENTION FITS INTO LBW INFANTS CARE Care of LBW Infants under 2001 g Delivery Room
1
Resuscitation
NCU
FOLLOW-UP SCHEME Identification (Delivery Room, NCU) I
“Term” I
I
I
I
I
I
I
I
I
I
c
I
KMC 4 I I
+
4I I
4
I I
4
I I I
I
I
I
I
3m
6m
9m
+I I I
I
12m
E11igib1e COMPUTER TOOLS FOR MONITORING PROCESSES AND OUTCOMES IN THE KMC PROGRAM General Descriution The KMC program structure and processes evolved from a research program: Defined target population and health care problems, which allow a high degree of homogeneity in clinical care. Formulation of detailed and specific protocols for management. Systematic and detailed pre-coded data collection forms (Clinical Records), allowing easy, computerized management. Process of Monitoring Baseline data collected by interview and by clinical records abstraction at time of eligibility. Clinical and laboratory data collected by clinician at each clinician-patient encounter.
441
0
0
Additional data from psychological evaluations, home visits etc. recorded in pre-coded formats. Data entered into a dedicated computer database (specifically designed for the program). Periodic (at least every 3 months) data cleaning and analysis.
Data analvsis and interpretation Indicators are produced for: Description of baseline risks and risk adjustment. Assessing whether appropriate preventive, diagnostic and therapeutic procedures were made, and when were they made. The program has established a norm (evidence-based) of interventions that should be delivered (process-based assessment of performance of clinicians). Describing proximal and intermediate outcomes during the first year of life.
Clinical Data Records
442
443
Table 3 Me Table 3 Me 1
(61%) (65%) (73%) (70%). (62%) (58%) (65%) 102 I 1086 173 228 139 231 , 213 I
Nodata
-
CRITICALPOINTS Clinician-RecordInterface Multiphase procedure: room for error: Clinician obtains data; Clinician records data in pre-coded form; 0 Form might be not flexible enough to capture relevant data; Mistakes in writing down data; Incomplete data. Paper form does not “ask” for missing data; Writing difficult to read. Solution: - Clinician enters data directly into database:
Cumbersome,slow; Expensive. - Touch-sensitive screen: Expensive, limits feasibility. Complex programming and sophisticated support. - Voice recognition: Expensive, Currently under development. Computer Data Recording Secretary keys in data Time lag between visits and data entry. Double entry and built-in filters control part of data recording errors. Solutions: Direct computer input by clinicians. Automated computer data entry (scanner and ORC) Moderately expensive, Requires some training for filling out forms.
444 Data Oualitv Assurance Directors of the KMC program and the epidemiologist do the data cleaning process Cumbersome and slow, Demanding in terms of time and resources. Solutions: Enhancing of clinician-record interface Qualified personnel committed exclusively to health quality monitoring: Human and financial resources are scarce, Would it be cost-beneficial? Miscellaneous Problems: No links with the ISS administrative database; No electronic records made by other providers within the ISS (e.g. information about E.R. visits is difficult to obtain and to link to patient records). High mobility of patients: they frequently move in and out of the ISS. CONCLUSIONS About the type of program Health Care Programs with an easily defined structure, processes and outcomes are particularly suitable for using simple and effective computer tools for quality monitoring-improvement. Computers will not help if there is no clarity regarding the attributes, elements and indicators to be measured. About data Quality Electronic clinical records can be produced and maintained in several ways. The interface between clinicians and electronic records should be simple and efficient ensuring: Accuracy of recorded data, Data available in a timely manner, Demands investment, maintenance and promotion of a culture of quality assessment-improvement. About Indicators of Clinical Performance Monitoring the adherence to the proposed processes is justified if the effectiveness of interventions is evidence-based (most interventions composing KMC have been assessed by locally conducted research). Monitoring the outcomes needs careful risk adjustment. Limitations of our experience We are only attempting to measure aspects of: Clinical Performance. Other aspects concerning delivery of services, satisfaction of patients' expectations (responsiveness) could be assessed with
445
the help of automated computerized tools. Attributes other than effectiveness (adherence to effective processes) are not measured.
FINAL REMARKS The cost-effectiveness of different types of computer tools should be studied. Feasibility and affordability are major issues, particularly in developing countries. The effort of conducting formal and systematic measurements of our performance in KMC has been rewarding, giving us team cohesion and promoting a culture of continuous improvement.
POLLUTION PERMANENT MONITORING PANEL - 2004 REPORT LORNE EVERETT The Shaw Group Santa Barbara, California, USA RICHARD C. RAGAINI (Chair) Department of Environmental Protection, University of California, Lawrence Livermore National Laboratory, Livermore, CA, USA The continuing environmental pollution of earth and the degradation of its natural resources constitute one of the most significant planetary emergencies today. This emergency is so overwhelming and encompassing, it requires the greatest possible international East-West and North-South co-operation to implement effective, ongoing remedies. It is useful to itemize the environmental issues addressed by this PMP, since several PMPs deal with various overlapping environmental issues. The Pollution PMP is addressing the following environmental emergencies: Degradation of surface water and ground water quality; Degradation of marine and freshwater ecosystems; Degradation of urban air quality in mega-cities; Impact of air pollution on ecosystems. Other environmental emergencies, including global pollution, water quantity issues, ozone depletion and the greenhouse effect, are being addressed by other PMPs. The Pollution PMP coordinates its activities with other relevant PMPs as appropriate. Furthermore, the PMP will provide an informal channel for experts to exchange views and make recommendations regarding environmental pollution. PRIORITIES IN DEALING WITH THE ENVIRONMENTAL EMERGENCIES
-
The PMP on Pollution monitors the following priority issues: Clean-up of existing surface and sub-surface soil and ground-water supplies from industrial and municipal waste-water pollution, agricultural run-off, and military operations; Reduction of existing air pollution and resultant health and ecosystem impacts from long-range transport of pollutants and trans-boundary pollution; Prevention andor minimization of future air and water pollution; Training scientists & engineers from developing countries to identify, monitor and clean-up soil, water and air pollution. ATTENDEES The following scientists listed below attended the August 2004 Pollution PMP meeting: Dr. Lome G. Everett, University of California at Santa Barbara, USA Prof. Vittorio Ragaini, University of Milan, Italy
446
447 Dr. Andy Tompson, Lawrence Livermore National Laboratory, USA Prof. Joseph Chahoud, University of Bologna, Italy Prof. Sergio Martellucci, University of Rome, Italy Ms. Gina Calderone, EA Science and Technology, USA Professor Aurelio Aureli, University of Catania (Emeritus) Dr. Salvatore Carrubba, University of Palermo HISTORICAL AREAS OF EMPHASIS OF THE POLLUTION PMP The following Erice workshops and seminar presentations have been sponsored by the Pollution PMP since it began in 1997 in order to highlight global and regional impacts of pollution in developing countries: 1998: Workshop on Impacts of Pharmaceuticals and Disinfectant Byproducts in Sewage Treatment Wastewater Used for Imgation; 1999: Memorandum of Agreement (MOA) between WFS and the US Department of Energy To Conduct Joint Environmental Projects; 1999: Seminar Session on Contamination of Groundwater by Hydrocarbons; 1999: Workshop on Black Sea Pollution; 2000: Seminar Session on Contamination of Groundwater by MTBE; 2000: Workshop on Black Sea Pollution by Petroleum Hydrocarbons; 2001: Workshop on Caspian Sea Pollution; 2001: Seminar Session on Trans-boundary Water Conflicts; 2001 : Workshop on Water and Air Impacts of Automotive Emissions in Mega-cities; 2002: Seminar Talk on Radioactivity Contamination of Soils and Groundwater; 2002: Seminar Talk on Environmental Security in the Middle East and Central Asia; 2003: Seminar Session on Water Management Issues in the Middle East; 2003: Workshop on Monitoring and Stewardship of Legacy Nuclear and Hazardous Waste Sites; 2004: Proposal on Mapping Groundwater Pollution Vulnerability in Sicily POLLUTION PMP ACTIVITIES DURING 2004 In June 2003, Richard Ragaini attended planning meetings in Rome to discuss environmental problems in Italy and in Sicily. From these meetings came the proposal to establish a joint WFS Regional Resources Commission For Sicily, which held a meeting in Erice just prior to the International Seminars in August 2003. In September 2003 the Pollution Permanent Monitoring Panel Task Force developed the proposal focused on the protection of groundwater resources in Sicily, entitled Soil and Groundwater Pollution Vulnerability Mapping and Environmental Database Project for Sicily. This project would require collaboration with Italian government agencies and is targeted to provide the Italian and Sicilian governments and other stakeholders with a state-of-the-art database to be used as a land-use planning tool. This database would utilize the
448 development of a Geographical Information Systems database, including key environmental, hydrogeological, and land use information on the island of Sicily. The WFS PMP Task Force is composed of the following members, which are a sub-group of the Pollution PMP: Dr. Richard C. Ragaini, Lawrence Livermore National Lab, Livermore, USA Dr. Lome G. Everett, The Shaw Group, Santa Barbara, USA Dr. Gina Calderone, EA Science and Technology, USA Professor Aurelio Aureli, University of Catania (Emeritus) Professor Paolo Ricci, University of San Francisco In August 2004 the Task Force met in Erice to further develop the proposal, and delineate budgets and timelines. FUTURE POLLUTION PMP ACTIVITIES DURING 2005 A.
Proposal to Develop a Second Workshop on Waste Management in August 2005, to be entitled Waste Management and Soil/Groundwater Remediation in Southeast Asia. This workshop will build on the workshop held in Erice in 2003 on hazardous waste management issues; however, the focus would be targeted to gain insight into the issues currently developing in Southeast Asia.
B.
Proposal to Develop a Highly Collaborative Workshop on Worldwide Issues Concerning Nitrite, Sulfate, and Arsenic in Groundwater. This workshop will focus on the increasing issue of elevated concentrations of these contaminants in groundwater on a global scale, both in developed and developing countries, and provide case studies and statistics of occurrence of these constituents in groundwater. This workshop would be proposed as a collaborative effort with the World Federation of Scientists Water Panel.
C.
Proposal to Hold A Workshop of Potential Participants in Project on Mapping Groundwater Pollution Vulnerabilities in Sicily in August 2005. This workshop will be a joint meeting of a selected group of Italian groundwater vulnerability experts and the WFS PMP Task Force. Ideas and capabilities to do the proposed project in Sicily will be discussed.
RISK ANALYSIS PERMANENT MONITORING PANEL TERENCE TAYLOR International Institute for Strategic Studies-US, Washington DC, USA OBJECTIVE The objective of the Risk Analysis PMP is to find new and more effective methods of risk analysis with particular application to high-level decision-making in planning for complex emergencies directly affecting safety and security. MEMBERS William Kastenberg Genevieve Lester Charles Perm Jean Savy Terence Taylor (Chairman) Eileen Vergino Henning Wegener Others have been engaged through the IISS/CGSR project “Living with Risk”. FOLLOW-ON WORK The following tasks are considered by the group to be the most important areas for continuing work: Continuing review of the latest methodologies irrespective of the subject area. As part of this effort the definition of risk is being kept under review. The working definition, which is intended to be instrumental and factual, is: “the expected value of an unintended consequence”. This needs revision to take account of the qualitative aspects of risk such as public perception. Further overarching work was conducted on complexity and (Kastenberg) and uncertainty (Savy). A special meeting on risk perception &om the perspective of the public, the legislators and high-level policy-makers was held on May 2004 in Cambridge, Massachusetts, USA. The papers presented at this meeting are available for PMP participants at www.llnl.gov/cgsr (password required as the site contains work in progress). Discussions with high-level policy-makers/decision-makers to understand how they could use risk analysis along with other factors they are compelled to take in to account. Case studies have been presented in the nuclear field (Yucca Mountain disposal site), developments in the commercial insurance, public perception and biological risks. A conceptual paper is in draft by Savy and Lester drawing together the results of the work undertaken so far to define the pathway for future work. This work is being conducted under the auspices of the IISSICGSR project “Living with Risk‘’.
449
450 FUTURE PRIORITIES The priorities for the coming year should include: Continue the search for novel methodologies that address, particularly, issues that have a low data content, deal with complexity and varying time scales (imminence); Biological risk; Perceptions of risk and how this aspect might be integrated into a risk assessment methodology.
NEXT MEETING The next meeting is proposed for April 2005.
10. GLOBAL BIOSECURITY STANDARDS WORKSHOP
This page intentionally left blank
THE BIO-SCIENCE DILEMMA - PRECIOUS OPPORTUNITIES & DIRE THREATS BARRY KELLMAN Director, International Weapons Control Center, DePaul University College of Law, Chicago, USA Advisor To The Interpol Secretary-General - Preventing Bio-crimes ’ A terrorist - or perhaps an extortionist or even just a psychotic - gets a deadly disease strain. Perhaps he has stolen it fiom a laboratory; perhaps he has crudely developed it fiom natural sources. Now undeterrable, the culprit poses a dire threat. Not only is there a risk of enormous casualties, the dissemination of this disease could incite panic that rips the sinews of modem civilization. Tragically, disease and hate are merging into new and homfymg dimensions. The misuse of bio-science threatens thousands of casualties and unprecedented panic levels. A contagious disease, e.g. plague, can turn victims into extended biological weapons, carrying an epidemic virtually anywhere. Terrorists have proven that anthrax can be fatally disseminated - if terrorists get smallpox, the death toll and ensuing social chaos exceed calculation. More fundamentally, humanity has waged a species-long struggle against disease; to deliberately foment contagion is an act of treason - a fundamental crime against humanity. It is folly to believe that the international community can indefinitely rely on the hope that terrorists will not turn to disease. What would have been the implications had the perpetrators of the Madrid subway attack used a pathogen instead of explosives? Would people who have such little regard for human life as to commit that attack have any compunction against using disease if they thought it an effective tool? A fundamental dilemma of our era is that the bio-science capabilities that may improve the human condition are indistinguishable fiom the capabilities that may be used to attack humanity. Of course, progress in life expectancy, disease reduction, and increased agricultural output can be directly attributable to life sciences. Rapid advances in microbiology and genetic engineering are enabling scientists to modify and manipulate fundamental life processes. The mapping of the human genome augurs new methods for combating disease through vaccine development as well as advanced surveillance capabilities. There are extraordinary opportunities for biomedical research, including novel drugs and vaccines, thereby holding great promise for improving human health and quality of life. Yet, for all its marvelous wonders, biology threatens to enable a few twisted villains to devastatingly change history. Although it is impossible to say how likely it is that there will be a bio-terrorist attack, the likelihood is growing. The pace of bio-science means that capab would have seemed sophisticated a few years ago are now pedestrian. Escalating biological research offers the potential to uncover elemental principles of pathogenicity that could enable cultivation of a disease of such devastation that civilization itself could be fundamentally maimed with endant risks of economic collapse and political s for making biolog weapons are ever more upheaval. Moreover, the capab widespread as new bio-research and pharmaceutical fac es are being constructed throughout the world, often in places with serious terrorist networks.
453
454 Simply stated, the bio-sciences are global, and any serious attempt to reduce the risks associated with the misuse of bio-science must be international in scope. A contagious disease (e.g. plague) will have no respect for borders. Criminal networks can covertly transport lethal agents through any airport or customs checkpoint in the world. A suicide terrorist could infect himself, becoming a human biological weapon. Preventing these threats from materializing requires consensual action among the entire global community. Domestic action is, by itself, insufficient. The hdamental premise of a supervisory strategy is that to use bio-science to inflict disease must be prohibited. There is no conceivable justification. Furthermore, because bio-criminals are likely to seek seed cultures of lethal pathogens or critical equipment, those who handle lethal pathogens should adopt rigorous security measures to impede their diversion. Transfers of pathogens should be restricted to laboratories that meet the highest standards. Only properly trained and screened persons should be permitted access to pathogens. New methods to trace pathogens need to be developed, and equipment that is critical to effective weaponization should be tagged, thereby improving law enforcement’s ability to attribute responsibility. However, we must be careful not to ensnare legitimate bio-science. Capabilities to make biological weapons are virtually ubiquitous and are interwoven with research and production of pharmaceuticals. While the scientific community is increasingly aware of the dangers posed by the proliferation of biological weapons capabilities, they are also concerned about the need to sustain a culture of openness in research where free exchange of ideas propels human well-being. This is a pivotal moment for enhancing humanity’s resistance both to disease and to persons who try to inflict disease for hostile purposes. Emerging threats of bio-terrorism demand internationally coordinated preventive measures that exploit the mutually propelling capacities of sophisticated law enforcement and leading-edge science. But without collaboration, counter-bio-terrorism policies might inappropriately constrict legitimate scientific inquiry, imposing costs without appreciable benefit. The World Federation of Scientists has a critical role to play in balancing the need for security with the need for scientific freedom. It is essential for the scientific community to become fully engaged in far-reaching analyses of how to prevent the intentional development of disease as a weapon. Preventing bio-crimes is too risky to be left to non-scientists.
BIOLOGICAL SAFETY AND SECURITY ADVANCES IN THE LIFE SCIENCES REAPING THE REWARDS AND MANAGING THE RISKS TERENCE TAYLOR International Institute for Strategic Studies-US, Washington DC, USA NEW APPROACHES, NEW PARTNERSHIPS The leading edge of the global dissemination of the life sciences and related technologies are private industry and academia operating in an increasingly symbiotic relationship. The limitations of traditional approaches to dealing with biologcal risks through an arms control regime have been amply demonstrated over the past decade. Important as it is, the Biological and Toxin Weapons Convention (BTWC) as an intergovernmental instrument will have increasing difficulty dealing with the rapid advances in the life sciences, particularly in reaching out to the transnational private academic and industrial sectors that are at the leading edge of that science and technology. This already difficult challenge to intergovernmental agreements is further complicated by the threat from non-government groups and even individuals and calls for a different form of vigilance that deals directly with those at the leading edge of the spread of the relevant technologies. In an environment marked by rapid scientific and technological progress and distinct public anxiety about the implications of those developments, pressures have emerged to fmd better ways to manage the risks associated with developments in the life sciences. Those pressures have generated calls, for example, for increased regulation of both the scientific community and industry. In some cases, such as the recent report of the U.S. National Academies of Science on managing research with potentially negative security consequences, the calls are for national and international self-regulation'. In others, people are suggesting that governments must take the lead. But managing the risks related to life sciences developments is a job for which governments are singularly ill suited. Government is neither flexible nor agile enough to respond effectively at the speed at which science and technology are moving forward. Nor is it usually subtle or nuanced enough to develop policies appropriate for such a sophisticated and multifaceted arena, particularly if those policies have to be put into place under the pressure of events. The problem is made more complex by the fact that the life sciences - whether academically or business oriented - are global enterprises, and any successful means for managing risks must incorporate a similar global approach beyond the purview of any single government. DEVELOPING A CULTURE OF RESPONSIBILITY IN THE PRIVATE SECTOR At the same time, those working in the life sciences, whether industry or academia, must 'appreciate that things are not likely to remain as they are. Arguing that security issues have no relevance or that risks are already suitably managed with existing approaches and institutions is not likely to be convincing to an uneasy public and
455
456
government officials with security responsibilities in the face of growing awareness of the potential for both good and ill inherent in the ongoing life sciences revolution. In such a situation, new ways of doing business are needed; new tools and institutions are required. An independent international centre for the life sciences could provide a focus the new instruments and approaches required. No standing organisation of this kind currently exists, on a global basis that brings together scientists, technologists, and policy experts in the life sciences field with a specific mandate to focus on a security agenda defined appropriately for the 21" century. THE BUILDING BLOCKS There are four possible areas that could form the building blocks of an effort by a partnership between private foundations and independent research institutes that could make a lasting contribution. Together, these efforts could form the foundation on which an International Centre for the Life Sciences could be established. These project areas are: An initiative aimed principally at private industry to engage life sciences companies more actively and on a more sustained basis on issues of public safety and security. The key mechanism for that engagement is the commitment to a global charter with obligations to observe and actively support national and international laws and regulations related to the life sciences. The project also embraces academia and enhances the relationship between governments and the private sector. Members of the charter would develop rules to assure proper management of personnel with access to sensitive technologies, safe and secure operation of facilities, governance of research and ethical conduct. A draft charter developed by two independent institutes is at Annex A'. An annual analysis of developments in the life sciences to identify their implications for high-level policy makers. These analyses need to be directed by a senior advisory panel and implemented through study activities in academia and private industry led by an appropriately qualified team. There are national activities underway in some countries to conduct such analyses. An independent international effort would enhance and complement these national efforts and assist those countries that might require scientific and technical support in relation to such activities. 0 A global analysis of the world's epidemiological surveillance systems with particular reference to infectious diseases. This assessment will include a directory of private and publicly sponsored surveillance systems and an analysis of their capabilities, including their efficacy and design. The analysis would identify shortcomings through an independent critical appraisal to help indicate where resources and new techniques and methodologies must be applied to enhance the standard of global epidemiological surveillance. The survey and analysis would be conducted annually. Consultations with the World Health Organization (WHO), certain governments and private professional bodies indicate that such an effort by a non-governmental
457 organisation would bring great benefits. The project has scientific, political and social dimensions with sensitivities that an independent organisation with global reach (represented by CBACI with IISS) can circumvent, avoiding some of the limitations that an intergovernmental organization such as the WHO cannot. An international leadership forum for young life scientists working in academia, the private sector and government organisations to educate emerging leaders in the life sciences community towards a culture of responsibility in managing the risks associated with advances in the life sciences. Most academic and industry researchers in the life sciences simply do not understand the security environment of the 21" century, where asymmetric warfare, terrorist networks, and weapons of mass destruction are changing the strategic landscape. Whereas valuable support from individual foundations have addressed this problem with endowed programmes at a small number of research departments in universities, these approaches have mainly been national in scope and is limited to an audience of currently enrolled graduate students. What is urgently needed is an international programme that addresses young researchers from academia, private industry and other sectors worldwide. THE CONCEPT An International Centre for the Life Sciences should operate independently, overseen by a Board of internationally recognized and highly regarded experts from the science, technology, and public policy communities. The organisation need not be large. The requirement should be met by an Executive Director and a small, highly qualified staff that reflects the multi-disciplinary nature of the Centre's mandate at the nexus of science, technology, and security. Those activities would include research projects, workshops, conferences, media work, and public educational efforts. Drawing on existing international networks of institutions and individuals, the activities of the Centre should be global in scope. In essence, such a Centre should become the hub of an international network of organizations, institutions, and individuals representing a wide range of highly diverse constituencies that nevertheless share common concerns in managing the risks that could emerge from the rapid advances in the life sciences. Such a multi-disciplinary centre should be designed to draw the best available talent together in an urgently needed independent effort that can make a vital contribution to enhancing public safety and security in the face of the current and future challenges posed by the rapid advances in the life sciences and biological threats both natural and deliberate.
TIME FOR ACTION
This is a field that is ripe for international non-governmental action that could make a vital contribution to biological safety and security. The matter demands urgent attention to reduce the risk of a major catastrophe. Equally important is to prevent an over-reaction to a biological emergency (actual or perceived) resulting from perhaps deliberate misuse of an organism as a weapon that could lead to restrictions on advances in the life sciences
458
that rely on essential transnational activities in both the private and government sectors; this could have the unintended result of increasing the actual risks to public safety and security.
'
Biotechnology Research in an Age of Terrorism, National Academies Press, Washington DC, 2004
2
The draft charter has been developed by the International Institute for Strategic StudiesUS and the Chemical and Biological Arms Control Institute under a joint project.
ANNEX A
DRAFT CHARTER4 INTERNATIONAL COUNCIL FOR THE LIFE SCIENCES
PREAMBLE Extraordinary advances in biotechnology have brought enormous benefits to medicine, public health, the food industry, agriculture, and industrial processes. At the same time, the risks to public safety and security from the accidental or deliberate misuse of this technology have increased. As a result, there is mounting public concern worldwide about the advances in the life sciences from an ethical and moral perspective. In order for the full humanitarian and economic benefit arising from the advances in the life sciences to be realized it is essential that all these concerns are explicitly recognised. The international community and national governments are faced with a demand for harmonized national and international regulation of the life sciences industry. However, the speed of the developments is outpacing national and international attempts at effective legal and regulatory action. Private industry and non-governmental institutions are at the leading edge of these advances and dissemination of the scientific developments and their technological applications. The private sector and academia need to contribute directly to the international effort to deal with biological threats to public safety and security in order that effective legal and regulatory regimes can be implemented without unnecessary costs and other burdens. It is important that industry promote a culture of responsibility that enhances the interest of public safety and security, and in doing so increase public understanding of the issues, risks, and benefits of life sciences research and development. In the light of public and governmental concerns there is a widespread recognition in the private sector of the life sciences that they have a unique capacity to meet their collective responsibility to conduct research and business operations to the highest possible standards. To achieve this objective leaders from business and academia around the world have agreed to the creation of an international entity called the International Council for the Life Sciences. Its primary purpose is to help safeguard the future development of the life sciences and associated industries. Specifically, the council will: Establish and promote best practices in activity involving the life sciences (research, production, propagation). Protect against the risks arising from biological challenges from any source. Provide an international forum to identify and discuss these matters.
459
460
THE CHARTER An International Council for the Life Sciences (hereinafter referred to as the Council) is hereby established that will: Create a self-sustaining global organization for industry and the academic community to contribute to improved quality of life and enhanced public safety and security; Promote the engagement of the life sciences industries worldwide on issues of public safety and security; and Facilitate effective partnerships between the private life sciences industries, government, academia and other critical constituencies. Mission The mission of the Council is to promote public health, safety and security by: Safeguarding opportunities offered by advances in the life sciences and their application by the life sciences industries; and Cooperating to counter the risks arising from the development and dissemination of such science and technology. What the Council will do: To accomplish this mission the Council will facilitate essential and timely contributions to national and international policy development through being: Action-oriented: To proactively engage industry, governments and the public to enable accurate communication and understanding of the risks and benefits arising from the advances in the life sciences. Independent: While the Council will be independent of governments, and represent its interests through its governance structure, it will cooperate closely with national governments and international inter-governmental organizations. Global: The diffusion of the life sciences is not confined by borders and as such the Council will engage the life sciences academic community and industry globally through promoting the widest possible membership. Membership Membership of the Council is open to corporate and academic entities that have a direct interest in guiding the appropriate use of the life sciences and biotechnology. CHARTER COMMITMENTS To promote the objectives of the Council, members undertake to: International and National Laws and Regulations Observe, promote and cooperate to help develop effective national and international laws and regulations in relation to the life sciences.
46 1 Personnel Exercise the highest standards in the recruitment, training and management of personnel during and after employment, with special attention to those with access to information, materials and technology that could directly affect public safety and security if misused or not operated safely and appropriately. Information Ensure the security of information by observing the relevant international and national laws and regulations in the handling of information that could have a negative impact on public safety and security; and also to contribute to developing, in cooperation between govemments, the academic community and commercial sector as appropriate, effective and responsible procedures for the release of such information into the public domain. Safe and Secure Oueration of Facilities Observe the highest possible standards for the safe and secure operation of all facilities in the interest of public and environmental safety; and to contribute to the development of more effective international and national laws, regulations, guidelines, and standards in this regard. Governance of Research and Develoument Activities Take full account of security, safety and ethical concerns when planning and conducting research and development activities and to support and contribute to effective and responsible international and national entities engaged in developing and promoting codes of conduct in this regard. GOVERNANCE The governing authority is an initial convening conference (CC) and subsequent Annual General Meetings (AGMs). Members are entitled to send one representative with voting rights to the CC and to the AGMs. The CC will approve the Charter of the Council. The AGMs will approve any subsequent amendments to the charter. The CC will appoint the first President of the Council whose term should last for at least two AGMs. Subsequent appointments of Presidents will be made by the AGMs for terms of at least two years. The CC will appoint an Executive Committee composed of no more than [twenty] members to manage the Council’s affairs. The Executive Committee will be composed of individuals that reflect as broad a range of members’ interests as possible. These members, who will be internationally recognized as leaders in their fields as well as being aware of the far-reaching policy and security implications of the life sciences, will reflect expertise in the following areas: Industry - An understanding of the interests of all sectors of the life sciences and what impact this may have on security. Relevant science and technology - An understanding of the security implications of the life sciences and related areas.
Public Policy - An understanding of the public policy implications, in relation to safety and security, of advances and developments within private industry.
Risk assessment - An ability to accurately gauge the possible impact of developments in the life sciences and biotechnology in the context of challenges from naturally occurring disease and the possible use of biological agents by government or non-government entities.

Members of the Executive Committee should serve for at least [three] years and for no more than [two] consecutive terms, unless the AGM agrees otherwise. The AGM will approve appointments of its members based on the recommendations of the Executive Committee. The Council and the Executive Committee will be served by a small permanent Secretariat. The Secretariat will be headed by a Chief Executive Officer appointed by the Executive Committee. The Secretariat will be composed of international staff with properly recognised expertise in the life sciences, public policy and private industry. The number of staff and their specific responsibilities will be determined by the Executive Committee and subject to approval by the Council as a whole.

Mechanisms
In addition to the AGMs, other mechanisms will be set up on the advice of the Executive Committee to deliberate over issues pertinent to safeguarding the public and the industry as determined by members at the AGMs, such as multi-stakeholder working groups, forums, projects, information exchange, and educational activities.
This draft is a product of meetings and other interactions with private industry, academia and governments, principally in Asia, Europe and North America, as part of a joint project being conducted by the International Institute for Strategic Studies-US and the Chemical and Biological Arms Control Institute.
11. COSMIC OBJECTS WORKSHOP
DETECTION OF TRANSIENT PHENOMENA ON PLANETARY BODIES
MARIO DI MARTINO, ALBINO CARBOGNANI, AND ALBERTO CELLINO
INAF - Osservatorio Astronomico di Torino, Pino Torinese, Italy

Transient phenomena on planetary bodies are defined as luminous events of different intensities, which occur in planetary atmospheres and on surfaces, and whose duration spans from about 0.1 s to some hours. They consist of meteors, bolides, lightning, impact flashes on solid surfaces, auroras, etc. If well monitored, they represent a very useful tool to study the smallest component of meteoroids in different regions of interplanetary space and the electric phenomena in planetary atmospheres. So far, the study of these phenomena has been very limited, due to the lack of ad hoc instrumentation, and their detection has been performed mainly on a serendipitous basis. Recently, ESA has issued an announcement of opportunity for the development of systems devoted to the detection of transient events in the Earth atmosphere and/or on the dark side of other planetary objects. One such detector has been designed and a prototype is under construction at Galileo Avionica S.p.A. (Florence, Italy). Efforts now have to be made to place this instrument on space platforms. For the sake of clarity, in what follows we classify the transient phenomena into "Earth phenomena" and "Planetary phenomena", even though some of them originate in a similar physical context.

EARTH PHENOMENA
Transient luminous phenomena on Earth occur mainly in its atmosphere at different heights. Their origin is due essentially to the interaction of cosmic debris with the high layers of the atmosphere and to the electric activity present in it at different levels. Interest in meteors is mainly due to the information they can provide about the history of the Solar System and the properties of the original planetesimals accreted in different regions of the system, as well as to the possible risk for humankind represented by the existence of potential Earth impactors. Interest is also rising about the space debris issue, given the increasing risk for aerospace vehicles. From the event identification standpoint, it can be difficult in many cases to discriminate meteors from space debris events. Lightning is a phenomenon of high interest in different scientific fields, including studies of the water cycle, and also taking into account less known events like jets, sprites, etc. Among the different electric phenomena in the atmosphere, satellite observations of lightning and auroras are relatively easy. More difficult are the observations of red sprites, blue jets and elves, because they are superposed on stormy cells where lightning is frequent. Noctilucent clouds are very thin and thus difficult to observe from the local nadir.

Meteors
Millions of asteroids and comets orbit around the Sun. Asteroids are mainly located in the so-called main belt, between the orbits of Mars and Jupiter, between 2.1 and 3.3 astronomical units (AU) from the Sun, while a huge number of icy bodies
(proto-cometary nuclei) are supposed to be located in the very outer regions of the Solar System, the Oort cloud (between about 40,000 and 100,000 AU from the Sun). Since 1992, a new family of icy bodies has been discovered beyond Neptune's orbit, the so-called Edgeworth-Kuiper belt, which is considered to be the reservoir of short-period (< 200 years), low orbital inclination comets. Asteroids and comets are the sources of small interplanetary bodies. The collisions between asteroids in the main belt, besides producing km-sized asteroids, generate huge numbers of fragments having sizes spanning from a hundredth of a millimeter to some tens of meters, intermediate between classical asteroids and interplanetary dust. These bodies are called meteoroids. The International Astronomical Union (IAU) in 1961 established that meteoroids are defined as small bodies having masses in the range 10⁻⁶-10¹⁰ g. Assuming a density of 3.5 g cm⁻³, the radius of a meteoroid thus spans from 40 μm to 10 m. Comets also produce meteoroids, although their density is likely to be lower than that of meteoroids of asteroidal origin. Comet nuclei, when reaching the inner regions of the Solar System, undergo sublimation of their icy and volatile components and in this way also inject into interplanetary space solid particles, which were embedded in the ice. "Meteoroid streams" are then produced which follow the orbital trajectory of the parent comet. Meteoroids of cometary origin produce meteor showers visible from Earth (among the most important we can mention the Quadrantids, Perseids, Leonids, and Geminids). The term meteor generally refers to the luminous phenomenon generated by the entry of a meteoroid into the Earth atmosphere. The term meteoroid refers to the solid particle of extraterrestrial material that enters the Earth atmosphere. The term meteorite refers to a meteoroid that survives the transit through the Earth atmosphere. Approximately 4×10⁴ metric tons (4×10⁷ kg) of extraterrestrial matter enter the Earth atmosphere every year and eventually settle on the ground. This extraterrestrial matter originates from two main components, the cosmic dust background and meteoroids. Whereas there is a large uncertainty in the absolute fluxes, there seems to be agreement that the mass influx of particles with masses below about 10⁶ g accounts for about 20% of the total flux per year, with the remaining 80% arising from larger objects with masses up to about 10¹⁵ g (Brown, 2002).

Origin and physical properties of a meteor
Whatever the origin of a meteoroid belonging to the Solar System, its geocentric velocity spans between 11.2 km s⁻¹ (the escape velocity from Earth) and 72.8 km s⁻¹ (42.5 km s⁻¹, the escape velocity at the perihelion of the Earth orbit, plus 30.3 km s⁻¹, the Earth orbital velocity at perihelion). When a meteoroid enters the Earth atmosphere with a velocity of the order of some tens of km s⁻¹, the collisions with the atmospheric molecules heat its surface. At a height of 80-90 km the meteoroid temperature reaches 2,500 K and it begins to sublimate. This mass loss process is called ablation. During the atmospheric flight the atoms of the meteoroid disperse in the atmosphere forming a trail, which looks like a long and narrow cylindrical column. The initial trail dimensions are equal to the mean free path at that height, i.e. about 1 m at 120 km and about 10 cm at 90 km. The trail length can reach several kilometers.
A typical value spans from 10 to 20 km, while the heights at which the phenomenon begins are between about 120 and 75 km. The duration of the meteor phenomenon spans from 0.5 to 3 seconds. The atoms in the trail column collide with the surrounding air molecules: the first collisions, which are the most energetic, ionize the atoms. Secondary collisions,
Height (km)   Trail diameter (m), V=20 km/s   V=40 km/s   V=60 km/s
120           1.41                            2.13        2.72
110           0.95                            1.49        1.90
100           0.63                            0.96        1.23
90            0.41                            0.61        0.78
75            0.22                            0.33        0.42

Table 1: Diameters of the trails vs. height and geocentric velocity.
Like in the case of other celestial bodies, for meteors we can define an absolute magnitude: the apparent magnitude of a meteor located at the zenith at a height of 100 km. Taking into account that the average height at which meteors occur is around this value, the apparent magnitude of a meteor at the zenith is roughly comparable with its absolute magnitude. The visual magnitude, M, is expressed on a logarithmic scale related to the luminous intensity, I, via the relationship:

M = 6.8 - 2.5 log I    (1)

where I is in Watt. Visual observation of meteors generally refers to meteoroids having diameters ≥ 2 mm. The recent use of low light level televisions has increased the detection limit considerably, and now meteors of magnitude +7, corresponding approximately to meteoroids having diameters of about 1 mm, can be observed. In comparison, radar ionization trails are seen for particles having diameters ≥ 50 μm, equivalent to visual magnitude +10.

Meteor spectra
A meteor spectrum at visible wavelengths can provide useful information on the meteoroid chemical composition, but it does not provide complete mineralogical evidence. Some chemical elements (e.g. potassium), which have great importance in the physics of meteoric plasma, have stronger lines in the infrared spectral domain, so they are not detectable in the visible. Moreover, it is very difficult to collect enough light to obtain a spectrum for faint meteors, even though a series of studies have shown that meteoric spectra are not a function of apparent luminosity. Only in the case of bright bolides, when the meteoroid reaches the lowest part of the atmosphere, does the luminous contribution of the compressed and heated air predominate. In Table 2 some spectral meteoric emission lines are listed.
Table 2: Principal spectral lines emitted by meteors in the visible band (Buil, 1994).

Millman has classified meteor spectra into 4 classes. X type meteors show strong magnesium and sodium emission lines (20% of the total), Y type show the ionized calcium lines (2% of the total), Z type show iron lines (66% of the total), and W type meteors include all those which are not classified in the previous classes (12% of the total).

Meteor showers
As seen before, cometary meteoroids are generally grouped in streams. When these streams encounter Earth, many meteoroids enter the atmosphere and vaporize, giving rise to a meteor shower. The name of the shower is assigned on the basis of the location in the sky (radiant) from which the meteors seem to come. To estimate the activity of a shower, the Zenithal Hourly Rate (ZHR) is used. This is the number of meteors per hour that an ideal observer, under perfect sky conditions and with the shower radiant at the zenith, would see. A selection of the main meteor showers is shown in Table 3. During meteor storms the number of meteors can be much higher. In some cases, the ZHR has been indicated as "variable", due to the fact that these showers usually have a low activity, but sometimes they originate outbursts or storms. The shower activity can be annual or periodic. Annual activity means that a particular shower occurs every year. For example, the Lyrids are active every year around the end of April, the Perseids around the middle of August, the Leonids in the middle of November, etc. The periodic activity is characterized by episodes during which the number of meteors increases by about a factor of ten (outburst) with respect to the normal ZHR. This behavior is typical of showers of cometary origin. At each perihelion passage the parent comet disperses new material into space. When the Earth crosses this new material the shower activity increases abruptly.
Table 3: Some information on the principal meteor showers.
From the observed frequency we can obtain the ZHR by using this formula:

ZHR = (N / T_e) · F · r^(6.5 - m_l) / sin(h_R)    (2)

where N is the number of meteors observed in the actual time of observation T_e, F is a correction depending on the observer's field of view, sin(h_R) the correction for the height of the radiant (h_R) above the horizon, r the meteoroid population index (it ranges from about 2 to 3), and m_l the observer limit magnitude (Buil, 1994; Hawkes, 2002). From the ZHR we can obtain the meteoroid density in space. In fact, if α is the angle of the observer's field of view on the ground and h_m the typical height at which the meteors occur, the area A of the atmosphere intercepted by the observer is:

A = π h_m² tan²(α/2)    (3)

If V_g is the meteoroid geocentric velocity and ρ_m their average number per unit volume, we can write:

ρ_m A V_g = ZHR / 3600    (4)

from which:

ρ_m = ZHR / [3600 π h_m² V_g tan²(α/2)]    (5)

With ρ_m we can estimate the average distance among the meteoroids:

d_m ≈ ρ_m^(-1/3)    (6)
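As a purely illustrative check of equations (3)-(6) in the form reconstructed above, the following short Python sketch evaluates the meteoroid space density for nominal Perseid values (ZHR = 140, V_g = 60 km/s, h_m = 100 km, α = 104°); the function name and the printed numbers are indicative only, and the resulting mean spacing comes out of the same order (a few hundred km) as the value quoted in the worked example that follows.

import math

def meteoroid_space_density(zhr, v_g_km_s, h_m_km=100.0, alpha_deg=104.0):
    """Meteoroid number density (eq. 5) and mean spacing (eq. 6), SI units."""
    h_m = h_m_km * 1e3          # typical meteor height [m]
    v_g = v_g_km_s * 1e3        # geocentric velocity [m/s]
    alpha = math.radians(alpha_deg)
    area = math.pi * h_m**2 * math.tan(alpha / 2.0)**2   # eq. (3), intercepted area [m^2]
    rho_m = zhr / (3600.0 * area * v_g)                  # eq. (5), meteoroids per m^3
    d_m = rho_m ** (-1.0 / 3.0)                          # eq. (6), mean spacing [m]
    return area, rho_m, d_m

if __name__ == "__main__":
    area, rho, d = meteoroid_space_density(zhr=140, v_g_km_s=60)
    print(f"intercepted area ~ {area:.2e} m^2")
    print(f"number density   ~ {rho:.2e} meteoroids / m^3")
    print(f"mean spacing     ~ {d/1e3:.0f} km (order of a few hundred km)")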
For example, we can calculate ρ_m for the Perseids. In this case ZHR = 140 and V_g = 60 km/s; considering h_m = 100 km and α = 104°, we obtain a value of ρ_m corresponding to an average distance between one meteoroid and the next of d_m ≈ 200 km. Another parameter characterizing different showers is the population index, r, which is an estimate of the ratio of the number of meteors in consecutive magnitude classes. Basically, r indicates how many more meteors of magnitude m+1 appear with respect to meteors of magnitude m. The number of meteors increases as the magnitude gets fainter. For example, if m = 4 and r = 3 (the typical r value for sporadic meteors), then three times as many meteors of magnitude 5 (m+1) appear as meteors of magnitude 4 (m). For meteor showers, if no reliable data are available, r is assumed to be equal to 2.5.

Sporadic meteors
Besides meteor streams, a sporadic component exists. The frequency of sporadic meteors is not high, of the order of some meteors per hour, but the flux is continuous and not limited to particular periods of the year, as in the case of meteor showers. Only 25% of the observable meteors belong to a shower, all the others being sporadic. Sporadic meteoroids can originate from the gradual diffusion of streams, due to solar radiation pressure and mutual collisions, or from impacts occurring among asteroids in the main belt. It is not yet clear whether the cometary or the asteroidal component is prevalent among the sporadic meteors.
Limit magnitude                -7     -3     ...    +4     +7     +8     +13
Fraction of sporadic meteors   0.85   0.54   ...    0.64   0.71   0.78   0.85

Table 4: Fraction of sporadic meteors vs. limit magnitude.
Bolides and superbolides
Due to orbital perturbations by Jupiter, Saturn and Mars, and also under the influence of radiative mechanisms like the Yarkovsky effect, meteoroids originating in the asteroid main belt can be inserted into orbits crossing those of the terrestrial planets: Mercury, Venus, Earth and Mars. Therefore, there are a lot of small bodies, with dimensions of some meters, which can interact with our planet. If the meteoroid has a size larger than about 20 cm, during the passage through the atmosphere the meteor head can reach a high luminosity. When the zenithal apparent magnitude is lower than -8, the meteor is called a bolide. This definition has not yet been approved by the IAU, and some authors adopt a different limit magnitude. If the bolide magnitude is lower than -17, it is called a superbolide. Bodies having mass larger than 1,000 kg generate superbolides. For meteoroids having diameters of some tens of meters the bolide can be brighter than the Sun (apparent magnitude -27). Superbolides are rare events, which would need a global observing network in order to be studied in a systematic way (Ceplecha, 1999). Often, due to the aerodynamic pressure difference between the leading and trailing parts, the meteoroid undergoes multiple fragmentations, generating a multiple bolide. Such a phenomenon occurred near Peekskill (New York) on the evening of October 9, 1992. If the meteoroid is big enough, it can survive the ablation process. When the velocity in the atmosphere decreases below 3 km s⁻¹, the mass loss and the radiation emission end, and the meteoroid enters the so-called dark flight phase. From this moment a cooling process begins, while at the same time the body trajectory becomes more and more vertical. The impact velocity of a meteoroid on the Earth surface typically spans from 10 to 100 m s⁻¹ (for a mass between 10 g and 10 kg and a geocentric velocity of 15 km s⁻¹). Of course, the probability of reaching the ground depends, in addition to the original dimensions of the meteoroid, on its mineralogical composition.
Meteor observations from space
Still to be explored is the systematic monitoring of meteors from satellites in orbit around the Earth. USA military surveillance satellites reveal 30-50 superbolide explosions in the atmosphere per year, but frequently the data on these events, especially the less bright ones, are discarded. A satellite network equipped with dedicated cameras is thus necessary to observe these phenomena in a global and systematic way. As far as we know, most of the brighter bolides and superbolides are not associated with meteor showers. If q is the height in km of the satellite above the Earth surface, and M the meteor absolute magnitude, the apparent magnitude of the meteor observed from orbit is given by:

m = M - 10 + 5 log(q - 100)    (7)
Considering a sensor limit magnitude on board the satellite of +6.0 and a height of 400 km, from equation (7) it follows that from orbit only meteors having absolute magnitude lower than M = +3.6 will be detectable. The trends of equation (7), as a function of the height and for different M values, are plotted in Figure 1. At a height of 1,000 km only meteors having M = +1 are still visible.
Figure 1: Meteor apparent magnitude vs. orbit height and absolute magnitude M.

As a result, the ZHR observed from orbit is lower than that measured by ground-based observations: the values of the meteor frequencies will not be those listed in Table 3 but smaller. If m_l is the observer limit magnitude, the ZHR down to the magnitude m_l will be given by:

ZHR_l = ZHR · r^(-(6.5 - m_l))    (8)
Taking the previous example, putting m_l = +3.6 and r = 2.3, we obtain ZHR_l ≈ 0.1 ZHR. It follows that the ZHRs of showers observed from a height of 400 km are reduced by about 90% with respect to those observed from the ground. The most important showers during the year, such as the Quadrantids, Perseids and Geminids, will show
a ZHR of 12, 14, and 12, respectively. The situation is worse for larger heights, but the decrease can be compensated by using cameras having a large field of view. To obtain ZHR_l as a function of the height from the Earth surface we have to know the limit magnitude m_l as seen from the orbit. Assuming that the limit magnitude is comparable with the absolute magnitude M, and imposing a limit magnitude from the orbit m = +6, from eq. (7) we obtain:

m_l = 16 - 5 log(q - 100)    (9)
Substituting eq. (9) into eq. (8), and taking r = 2.3 and ZHR = 100, we obtain ZHR_l as a function of the height (see Fig. 2).
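A minimal Python sketch of equations (7)-(9), assuming a sensor limit magnitude of +6 and the population index r = 2.3 used in the text; it reproduces the detection limit M ≈ +3.6 at 400 km and the roughly tenfold reduction of the apparent ZHR.

import math

def apparent_magnitude_from_orbit(abs_mag, q_km):
    """Eq. (7): apparent magnitude of a meteor (at ~100 km altitude) seen from height q."""
    return abs_mag - 10.0 + 5.0 * math.log10(q_km - 100.0)

def limit_abs_magnitude(q_km, sensor_limit=6.0):
    """Eq. (9): faintest absolute magnitude detectable from height q."""
    return sensor_limit + 10.0 - 5.0 * math.log10(q_km - 100.0)

def apparent_zhr(zhr_ground, q_km, r=2.3, sensor_limit=6.0):
    """Eq. (8) combined with eq. (9): shower rate actually detectable from orbit."""
    m_l = limit_abs_magnitude(q_km, sensor_limit)
    return zhr_ground * r ** (-(6.5 - m_l))

if __name__ == "__main__":
    for q in (400, 1000):
        print(f"q = {q:4d} km: M_lim = {limit_abs_magnitude(q):+.1f}, "
              f"ZHR_l/ZHR = {apparent_zhr(100, q) / 100:.2f}")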
Figure 2: Apparent ZHR vs. height.

As we can see, at 400 km only 10% of the meteors are still detectable. Only in the case of bolides do these problems not exist. From (7), a typical bolide having absolute magnitude M = -12 observed from a height of 400 km becomes a magnitude -9.6 meteor, whereas from 36,000 km its magnitude is +0.8, well visible by a sensor designed to detect meteors having magnitude +6.0. Due to the larger distances, a meteor trail seen from orbit shows, on average, a shorter apparent length compared with that observed from the ground. The scenario is schematized in Fig. 3, where the meteor trail is parallel to the radiant direction.
Figure 3: Geometry for the computation of the length of a trail as viewed from space.

If θ is the angle between the satellite radial direction and the meteor radiant, l the trail length and h_m the average meteor height from the Earth's surface, the angle β subtended by a trail at the center of the field of view is given by:

tan(β) = l sin(θ) / (q - h_m)    (10)

where q is the satellite height. If satellite and radiant are aligned on the same line with respect to the Earth's center, then θ = 0° and the meteors will appear as point-like images. In the most common cases the angle will be θ ≠ 0°. Taking into account an average meteor height of 100 km, a typical angle θ = 45°, and a trail length, l, of 15 km, we obtain the data reported in Tab. 5. For the angular velocity computation we have assumed a value of 20 km/s as geocentric velocity.

Height (km)   Apparent length β (degrees)   Angular velocity (degrees/s)
400           2.00                          2.70
1 000         0.70                          0.90
10 000        0.07                          0.09
36 000        0.02                          0.02

Table 5: Meteor trail apparent length at the FOV centre vs. orbit height.
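The entries of Table 5 can be reproduced with the following sketch of equation (10); estimating the angular velocity as the transverse velocity V_g sin(θ) divided by the distance to the meteor layer is an assumption made here, chosen because it matches the tabulated values.

import math

def trail_geometry(q_km, l_km=15.0, h_m_km=100.0, theta_deg=45.0, v_g_km_s=20.0):
    """Apparent trail length (eq. 10) and angular velocity seen from orbit height q."""
    d = q_km - h_m_km                                   # distance to the meteor layer [km]
    s = math.sin(math.radians(theta_deg))
    beta = math.degrees(math.atan(l_km * s / d))        # apparent trail length [deg]
    omega = math.degrees(v_g_km_s * s / d)              # apparent angular velocity [deg/s]
    return beta, omega

if __name__ == "__main__":
    for q in (400, 1000, 10000, 36000):
        beta, omega = trail_geometry(q)
        print(f"q = {q:>6} km: beta = {beta:4.2f} deg, omega = {omega:4.2f} deg/s")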
As can be seen, even from high orbits the trail length is long enough and the angular velocity values are not prohibitive for a modern CCD sensor. If the limit magnitude is +6, not all of the trail length will be visible from space. A meteor lightcurve, i.e. the luminosity versus time, is not constant, but usually has a bell shape with a maximum near the center of the visible trajectory. For instance, for Leonids and Perseids, on average, the maximum luminosity is reached at about 0.45 and 0.54 of the way from the trail beginning. If we propose to observe from the ground a Leonid meteor of 0.5 s duration and a planetocentric velocity of 71 km s⁻¹, the atmospheric trajectory is 35.5 km long. Leonids have a mean absolute magnitude of +0.4, so (taking into account a limit magnitude of +6) the meteor magnitude variation is 5.6. From a 400 km height, only the portions of the trail brighter than absolute magnitude +2.4 are visible as magnitude +6 meteors,
therefore the magnitude variation of a Leonid observed from orbit is 3.6 magnitudes, corresponding to a duration of about 0.32 s. Thus, from orbit, the trajectory length will be 23 km, 35% less than observed from the ground. Similarly, for the Perseids (mean absolute magnitude +1.4), the trajectory length as observed from orbit will be 40% shorter than from the ground. If γ is the sensor field of view, we can estimate the number of meteors visible at the same time. The frequency observed from orbit will be:

ν_s = ρ_m V_g π (q - h_m)² tan²(γ/2)    (11)

By using equation (5) with the ZHR_l obtained from equation (8), we obtain:

ν_s = [ZHR_l / 3600] · [(q - h_m)² / h_m²] · [tan²(γ/2) / tan²(α/2)]    (12)
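A small Python sketch of equation (12), with the ground-observer field of view α = 104° assumed as in the Perseid example above; it reproduces the rate of about 0.064 meteors per second quoted in the example that follows.

import math

def meteors_per_second_from_orbit(zhr_l, q_km, gamma_deg=120.0,
                                  h_m_km=100.0, alpha_deg=104.0):
    """Eq. (12): detectable meteor frequency from orbit height q."""
    tan2 = lambda deg: math.tan(math.radians(deg) / 2.0) ** 2
    return (zhr_l / 3600.0) * ((q_km - h_m_km) / h_m_km) ** 2 * tan2(gamma_deg) / tan2(alpha_deg)

if __name__ == "__main__":
    nu = meteors_per_second_from_orbit(zhr_l=14, q_km=400)
    print(f"nu ~ {nu:.3f} meteors/s (about one every {1 / nu:.0f} s)")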
For example, during the Perseids maximum activity (ZHR_l = 14), considering a field of view γ = 120° and q = 400 km, we obtain ν_s = 0.064 meteors/s. On average we can expect that once every 16 s an observable meteor will cross the instrument field of view. A wide field of view can thus compensate for the lower ZHR due to the larger distance from the atmospheric layer where meteors occur.

Space debris re-entry
At the end of the last decade the catalogued objects in orbit around the Earth having dimensions larger than 10 cm numbered about 9,000. 6% of them consist of active satellites, 21% satellites out of service, 17% propulsive stages, 13% the so-called service debris and 43% debris produced by satellite explosions or collisions. It is difficult to observe from the ground debris smaller than 10 cm. 100,000 fragments larger than 1 cm and several tens of millions larger than 1 mm are estimated to exist (Rossi et al., 1994). The re-entry velocities in the atmosphere of orbiting bodies are of the order of 10 km s⁻¹ and when re-entry occurs a meteor phenomenon is produced in a way similar to that of interplanetary meteoroids. Large bodies, such as satellites and missile stages, disintegrate at heights larger than about 78 km. Every year around 100,000 kg of material falls on the Earth. To estimate the absolute magnitude of a meteor produced by a space debris re-entry, we can use the relation valid for interplanetary meteoroids:

M ≅ 10 - 2 log(m) - 7 log(V)    (13)

where m is the mass in grams and V the velocity in km s⁻¹. Assuming a velocity of 10 km s⁻¹, a spherical shape and the aluminum density (2.7 g cm⁻³), we obtain the following relation between absolute magnitude and the debris radius, r, in cm:

M ≅ 1 - 6 log(r)    (14)
For example, a debris fragment of 10 cm in diameter produces a -5 absolute magnitude meteor, well observable from a height of 400 km (see Fig. 1). Only debris with radius smaller than about 0.3 cm will be under the visibility threshold (see Fig. 4). Since 1957, about 17,000 catalogued objects have re-entered the atmosphere, on average one per day. The re-entries of the fragments from the most numerous (smallest) population are practically invisible, whereas the re-entry of fragments larger than about 1 cm (about 100,000 objects) can be detected. In these conditions we can estimate 5-6 observable re-entries per day: a flux lower than the sporadic meteor background.
Figure 4: Absolute magnitude of space debris re-entering the atmosphere at a velocity of 10 km s⁻¹ as a function of its dimensions.
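The following sketch evaluates relations (13) and (14) as printed above for spherical aluminum fragments; it is intended only as an order-of-magnitude illustration of the debris-magnitude relation plotted in Figure 4.

import math

AL_DENSITY = 2.7  # g/cm^3, aluminum

def debris_abs_magnitude_from_mass(mass_g, v_km_s=10.0):
    """Eq. (13): absolute magnitude of a re-entering debris meteor."""
    return 10.0 - 2.0 * math.log10(mass_g) - 7.0 * math.log10(v_km_s)

def debris_abs_magnitude_from_radius(radius_cm):
    """Eq. (14): same relation for a spherical aluminum fragment of radius r (cm)."""
    return 1.0 - 6.0 * math.log10(radius_cm)

if __name__ == "__main__":
    for r in (0.3, 1.0, 5.0):
        mass = 4.0 / 3.0 * math.pi * r**3 * AL_DENSITY
        print(f"r = {r:3.1f} cm: M = {debris_abs_magnitude_from_radius(r):+.1f} "
              f"(from the mass via eq. 13: {debris_abs_magnitude_from_mass(mass):+.1f})")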
Lightning
In recent years, scientists have become increasingly aware of the key role played by lightning in the dynamic interplay of forces occurring in the Earth's atmosphere. Research has indicated, for instance, that lightning may be a very good indicator of the strength of large-scale convective storm systems. The Earth's surface is affected by an electric field E, whose mean intensity, in the absence of thunderstorms, is about 130 V m⁻¹. The electric field vector is perpendicular to the ground and decreases with height up to about 20 km, where it approaches 0. The electric field direction is from the atmosphere towards the ground, so the ground has a negative electric charge with respect to the air. The air electric conductivity would neutralize the electric field in about 50 s, but observations indicate that the mean value at the ground is constant with time. Since 1920, it has been understood that a difference in electric potential (d.e.p.) between the surface and the atmosphere is maintained constant by thousands of stormy cells always present in the troposphere. These cells, which supply the Earth surface with negative charges, keep the d.e.p. constant. The intensity of the electrical current generated by the thunderstorms is about 1,800 A. A storm cell can persist even for several hours and it moves at mean velocities of about 30-50 km h⁻¹. Inside the cloud the air currents can reach a high enough velocity to separate the electrical charges (this mechanism is not yet completely clear): positive on the cloud top, negative in the bottom part. The cloud is comparable to an electric dipole, having a typical moment of about 100 C·km.

Lightning distribution on the Earth
Lightning distribution on Earth is not uniform. The most recent data have been obtained by the NASA satellite OTD (Optical Transient Detector). OTD has observed about 1.2 billion lightning flashes per year. Over the oceans lightning is less frequent than over dry land, and it is concentrated between the equator and the tropics. The electric activity is stronger in
the northern hemisphere than in the southern one. The most "stormy" areas of our planet are: Zaire, Sudan, Cuba, Mexico (Sierra Madre), India (Himalaya), and South-East Asian countries (see Fig. 5).
Figure 5: Lightning distribution on Earth in 1998 (data from the OTD satellite).
The lightning phenomenon
The charge separation inside a stormy cell generates large d.e.p. values among different parts of the cloud and between cloud and ground. When the air electric resistance is overcome, an electric discharge occurs: the lightning. Lightning is composed of a series of discharges, on average four. The duration of each discharge is 35 ms, while the duration of the series spans from 0.1 to 0.25 s, but can reach 2-3 s. In the electric discharge the temperature of the atmospheric gas can reach 20,000 K, while the peak optical power is in the range 7×10⁷-3×10⁹ W (Russel, 1993). The spectrum is black-body like, with a peak emission around 144 nm. The intense heating (peak power of the order of 10⁹ W) causes atoms to be ionized and to radiate at discrete wavelengths (e.g. the OI(1) line at 774.4 nm and the NI(1) multiplet at 868.3 nm). On the Earth, at any time about 2,000 stormy cells exist, producing about 100 discharges per second. The most frequent kind of lightning is the intra-cloud type, while cloud-to-ground lightning is less common. Most of the latter bring negative charges to the ground, but some lightning produced by positive charges also exists. Lightning caused by positive charges is more numerous at the end of the life of the stormy cell. The ratio between intra-cloud and cloud-to-ground lightning can vary in a significant way from one cell to another. In general, stormy cells having a large vertical development tend to produce only intra-cloud lightning. From the Pogson relation between flux density and magnitude, and using the above-mentioned typical values, we can derive the magnitude difference between two bodies having different fluxes:

m₁ - m₂ = -2.5 log(F₁/F₂)    (15)
Considering as reference flux the optical solar constant F₁ (1.83×10² W m⁻²), the magnitude m₁ is the apparent Sun visual magnitude (-26.74). The apparent visual magnitude of lightning, having an optical power of 3×10⁹ W and observed from a height q (km) above the Earth's surface, is:

m_lightning = 5 log(q) - 27.0    (16)
Assuming a 400 km height, the apparent peak magnitude of lightning is -14, brighter than the Moon's apparent magnitude (-12.5). Thus, lightning can be detected without difficulty through the clouds, against the background of anthropic lights, or in the diurnal hemisphere.
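As an illustration, equation (16) can be evaluated as follows; the optional rescaling to a different peak power via the Pogson relation is an assumption added here, and the analogous relations (18)-(20) given later for red sprites, blue jets and auroras differ only in the reference height and in the constant.

import math

def lightning_apparent_magnitude(q_km, peak_power_w=3e9):
    """Eq. (16); the -2.5 log(P/3e9) term rescales to other peak powers (assumption)."""
    return 5.0 * math.log10(q_km) - 27.0 - 2.5 * math.log10(peak_power_w / 3e9)

if __name__ == "__main__":
    for q in (400, 1000, 36000):
        print(f"q = {q:>5} km: m = {lightning_apparent_magnitude(q):+.1f}")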
Figure 6: Total apparent magnitude of a 7×10⁷ and a 3×10⁹ Watt lightning flash vs. orbit height.

If α is the instrument field of view, the surface area A observed from a height q follows from the viewing geometry (eq. 17).
Taking a 120° field of view and q = 400 km, the area A is 8.7×10⁵ km², i.e. 0.7% of the Earth's projected area on a plane. Taking into account that on a hemisphere about 1,000 stormy cells will be visible, about 7 cells will be detectable, with a lightning frequency of about one flash every 3 s. Considering that at a height of 400 km the orbital velocity is 7.7 km s⁻¹, a stormy cell can be observed for a maximum of 180 s. On average, an active stormy cell produces 3 lightning flashes per minute, so 180 s are enough to determine the lightning frequency of the cell (the mean number is about 6 flashes). The mean life of a cell is about 20 minutes, so at the following satellite passage (after about 90 minutes) it has probably become extinct. The observation of lightning from space differs in a substantial way from meteor observation, not only because of the peak magnitude, but because the electrical discharges occur within a short time at the same position.

Sprites
The terrestrial electrical activity involves not only the troposphere but also the highest atmospheric levels up to the ionosphere. As already seen, a cloud-to-ground lightning flash transports negative charges from the cloud to the ground. Sometimes, the positive cloud top can lose its charge, which discharges to the ground. When this occurs, the isolated negative region can create an intense electrical field within the
ionosphere. This field accelerates the electrons towards the ionosphere and, by hitting the atmospheric molecules, they excite their energy levels. When these molecules return to the fundamental state they emit electromagnetic radiation and become visible. This is the origin of the so-called red sprites. Red sprites appear as luminous globes, red in color, with a quite low surface intensity. They are correlated with cloud-to-ground or intra-cloud lightning. Their frequency is about 1% with respect to common lightning. Red sprites occur over the top of stormy cells and can be single or multiple. They can reach a height of 95 km, with a maximum brightness around 65-75 km. The duration of a red sprite is of some milliseconds and seems to be correlated with stormy clouds at the end of their life cycle. The luminosity of a red sprite is comparable with an auroral arc of moderate brightness. The optical peak power is about 5-25 MW, while the dimensions are of the order of 10-20 km. If observed from a height q (in km), the apparent magnitude, for a power of 10⁷ W in the area of maximum luminosity, is:

m_red sprite = 5 log(q - 70) - 20.8    (18)

A red sprite viewed from a height of 400 km has a total apparent magnitude of -8. Its angular dimension will be about 4°. A hindrance to the observation of red sprites from space can be the fact that they tend to be projected on the stormy cell and therefore to be hidden by the lightning dazzle.
Blue jets
Much more enigmatic are the blue jets, high-altitude luminous phenomena whose origin is still unknown. Blue jets appear as luminous blue-colored beams coming from the top of a stormy cell. They have a conical form with an aperture of about 15°. Their velocity is about 100 km s⁻¹ and they reach 40-50 km in height; then their luminosity decreases and they disappear. The phenomenon duration is of the order of 0.4 s, while the peak power is 10⁴ W. If observed from orbit, the total apparent magnitude as a function of the height is:

m_blue jet = 5 log(q - 20) - 13.3    (19)

At 400 km a blue jet has an apparent magnitude of about -0.4. The problems arising when these phenomena are observed from space are analogous to those related to the red sprites: they are geometrically projected against the stormy cell, and the emitted radiation is then dominated by the light coming from the lightning. In any case, if the sensor limit magnitude is +6, the observing height cannot be larger than 1,000 km.
Elves
Elves are produced by the interaction between the mesosphere (between 75 and 100 km) and the electromagnetic fields originated by very powerful lightning. They have a flat and circular shape with diameters of about 200 km. They can also be associated with red sprites, but they form earlier. Their duration is less than 1 millisecond. The power of elves is far less than that of red sprites and they can be observed only if viewed in profile. Viewed from orbit, elves are projected on the stormy cell and on the lightning that generates them, so their observation can be very difficult.

Noctilucent clouds
Noctilucent clouds (NLC) are the highest clouds observable in the atmosphere. Their average height is 83 km. NLC are formed by water ice and are similar to cirrus, but are thinner and blue or silver colored. Due to the absence of winds, their evolution is very slow compared to tropospheric clouds. Due to their low optical
thickness, they can be observed at sunset and dawn, when the Sun is 6°-12° below the horizon. In these conditions their contrast with the sky is maximum. NLC are usually observed during the summer from regions having latitudes between 50° and 60° in both hemispheres. During one year, 10-20 NLC are visible, so it is a quite rare phenomenon. From space they are visible only from the terminator region; they are very thin, and so, if observed from space, their localization is very difficult. From the ground, in fact, they can be observed mainly when they are low on the horizon, a condition in which their optical thickness is higher. From space and using a large field of view, the observation of NLC is hampered by their faintness, which requires long exposure times near the terminator light.

Auroras
The aurora is a phenomenon caused by the interaction between the charged particles of the solar wind and the atoms of the ionosphere. When the solar wind charged particles interact with the Earth's magnetic field, most of them are reflected into interplanetary space, but some, moving along helicoidal trajectories along the lines of the magnetic field, can penetrate the atmosphere in correspondence with the magnetic poles. In the ionosphere there are oxygen and nitrogen atoms, which emit radiation after being excited by the charged particles. The aurora spectrum is a typical emission spectrum (see Table 6).
Table 6: Characteristic emission lines of an aurora spectrum.
The aurora heights span from 70 km up to a maximum of 1,100 km, with the highest occurrence between 90 and 110 km. The duration of an aurora can span from some seconds to some hours. The typical optical power of terrestrial auroras is of the order of 10¹⁰ W. Using the Pogson relation (eq. 15), the apparent visual magnitude of an aurora occurring at a height of 100 km is given by:

m_aurora = 5 log(q - 100) - 28.3    (20)
Supposing that the satellite is at a height of 400 km, the apparent visual magnitude will be about -16.

PLANETARY PHENOMENA
These essentially consist of the same events we can observe on Earth, except for impact flashes on the surface of atmosphereless bodies. Of course, these phenomena show different characteristics due to the different physical environments.
METEORS ON MARS
As we have seen, on Earth a meteor occurs when a meteoroid interacts with the atmosphere between 120 and 75 km. Usually, meteoroids are grouped in streams associated with comet or asteroid orbits. So far, the study of meteoroid streams in interplanetary space has been limited to those intersecting the Earth's orbit. Nevertheless, in principle, we can observe meteors on almost all the major bodies of the Solar System. Apart from Mercury and Pluto, all the other planets of the Solar System (and even Saturn's satellite Titan and Neptune's satellite Triton) have sufficiently dense atmospheres to generate the meteor phenomenon. Mars is one of the planets where meteors could be observable. Here we can anticipate that the study of meteors on Mars from orbit would be possible using the same instruments necessary for the observation of terrestrial ones. The phenomenon is very similar, but it will allow investigations of meteoroid streams at larger distances from the Sun and closer to the asteroid main belt. Mars orbits at a mean distance from the Sun of 1.524 AU, with a period of 1.88 years and an orbital eccentricity of 0.0934. Its mean equatorial radius is 3,396 km and its mass 0.107 Earth masses. Due to its small mass, the gravity acceleration on the planet surface is only 3.72 m/s² (38% of the terrestrial one) and the escape velocity 5.03 km s⁻¹. At the Mars surface the atmospheric pressure varies from 7 to 10 mbar and carbon dioxide (CO2) is the principal component. The pressure can reach 14 mbar at the bottom of the deepest canyons and drops to 0.3 mbar on the top of the big Martian volcanoes, such as Olympus Mons. As a comparison, the compositions of the Martian and terrestrial atmospheres are, respectively: 95.3% CO2, 2.7% N2, 1.6% Ar, 0.13% O2; and 78% N2, 21% O2, 1% Ar.

The meteor height on Mars
In spite of the difference in chemical composition between the Earth's and Mars' atmospheres, under the same conditions (atmospheric density, meteoroid mass and velocity) meteors show the same luminous intensity, because less than 3% of the radiation emitted by the trail is produced by atmospheric atoms. Since almost all the radiation comes from the meteoroid ablation, even the spectra will be similar to those of terrestrial meteors. Assuming that for the Earth and Mars the simple isothermal atmosphere model with constant gravity acceleration is valid, the atmospheric density ρ as a function of the height q is given by (law of the atmospheres):
ρ(q) = ρ₀ e^(-q/H)    (21)

where ρ₀ is the density at the surface (q = 0 km), and H is the scale height of the atmosphere:

H = RT / (μg)    (22)

where R is the gas constant, T the absolute temperature in K, g the gravity acceleration at the surface, and μ the mean molecular weight.
Table 7: Parameters of the exponential atmosphere model for the Earth and Mars.
Using equation (21) and imposing the same density for the Earth's and Mars' atmospheric layers, we can compute the height at which meteors occur on Mars:

q_M = H_M [q_E/H_E - ln(ρ_0E/ρ_0M)]    (23)

The ratio of the densities can be obtained using the perfect gas law:

ρ_0E/ρ_0M = (P_0E μ_E T_M) / (P_0M μ_M T_E)    (24)

From the data in Tab. 7 and using equation (24), the density ratio at the ground between the Earth and Mars is 73. From equation (23) we obtain that the interval 100-40 km on Mars corresponds to the interval 120-70 km on the Earth. This range is in good agreement with that obtained by numerical simulations (Adolfsson et al., 1996). The corresponding atmospheric densities span several orders of magnitude over this height interval. This result indicates that, at the same meteoroid mass and velocity, meteors on Mars occur at a lower height, and that from the Mars surface the meteor apparent magnitude will be a little lower (higher luminosity) with respect to the same meteor as viewed from the Earth's surface.
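The height mapping of equations (23)-(24) can be sketched as follows. The Martian ground density ratio of 73 is taken from the text, while the Mars scale height of 11.1 km is an assumed value (the entries of Table 7 are not reproduced here); the same functions apply to Venus (eq. 26) and to Jupiter (eqs. 29-30, with the Table 12 parameters given later).

import math

def density_ratio(p0_e, mu_e, t_e, p0_x, mu_x, t_x):
    """Eqs. (24)/(30): ground (or reference-level) density ratio rho_0E / rho_0X."""
    return (p0_e * mu_e * t_x) / (p0_x * mu_x * t_e)

def equivalent_height(q_e_km, h_e_km, h_x_km, rho_ratio):
    """Eqs. (23)/(26)/(29): height on planet X with the same density as q_E on Earth."""
    return h_x_km * (q_e_km / h_e_km - math.log(rho_ratio))

if __name__ == "__main__":
    H_E = 8.5  # Earth scale height [km]

    # Mars: density ratio 73 quoted in the text, H_M = 11.1 km assumed.
    for q_e in (120.0, 70.0):
        print(f"Mars:    q_E = {q_e:5.0f} km -> q_M ~ {equivalent_height(q_e, H_E, 11.1, 73.0):5.0f} km")

    # Jupiter: parameters from Table 12 (reference level at 100 mbar).
    r_j = density_ratio(1013.0, 28.0, 290.0, 100.0, 2.3, 190.0)
    for q_e in (120.0, 70.0):
        print(f"Jupiter: q_E = {q_e:5.0f} km -> q_J ~ {equivalent_height(q_e, H_E, 29.6, r_j):5.0f} km "
              f"(rho_0E/rho_0J ~ {r_j:.0f})")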
On Mars, the height of meteors is only about 20 km lower; this means that the frequency of surface impacts is similar to that on the Earth.
Meteors on Mars observed from orbit
Apart from their high magnitude values (+30 for the more luminous ones), the study of Martian meteors from the Earth is practically impossible because the planet, when visible, shows a small phase angle, which prevents observation of the night hemisphere. We will now examine how meteors can be observed from a spacecraft in orbit around Mars. For identical mass and planetocentric velocity, the luminosity of a meteor is inversely proportional to the scale height of the atmosphere. Using the Pogson relation, the difference in magnitude between a meteor on Mars at the height q_M and a similar one at the height q_E on the Earth is:

m_M - m_E = -2.5 log[(H_E/H_M)(q_E/q_M)²]    (25)
If we put q_E = q_M, and we consider that the ratio between the scale heights is 0.8, we find m_M - m_E = +0.25. Thus the absolute magnitude of Martian meteors is a little larger (they are slightly fainter) than that of similar meteors on Earth having the same mass and velocity. Meteors on Mars are very similar to the terrestrial ones, thus the instruments designed for the observation of terrestrial meteors from orbit can also be used for the observation of Martian meteors. The meteor phenomenon on Mars is very similar to the terrestrial case (length of the trail included), the only difference being that it takes place at relatively lower heights. To obtain the difference in magnitude of meteors as seen from the ground we can put q_E = 100 km and q_M = 80 km. We obtain m_M - m_E = -0.25. This means that meteors observed from the Mars surface are slightly brighter than those observed on the Earth.

Meteor showers on Mars
On Mars, the interplanetary meteoroid velocities span from 5 to 60 km s⁻¹. On the Earth, the meteoroids belonging to a stream can be observed as meteors if the minimum distance of the stream parent body from our planet is lower than 0.2 AU. By applying the same criterion to Mars and looking for comets and asteroids with known orbits, we find that 297 asteroids and 51 known comets can approach the Mars orbit at a distance lower than 0.2 AU (Christou et al., 1999). For the Earth the corresponding numbers are 156 and 24, respectively. For Mars, encounters with low-velocity meteoroids (< 15 km/s) are 5 times more frequent than for the Earth. This is due to the lower heliocentric and escape velocities of Mars. In the case of the Earth, the 20 major meteor showers have relative velocities equal to or larger than 18 km s⁻¹. On Mars, meteoroids having velocities larger than 30 km s⁻¹ practically do not exist (if we exclude the Halley shower, which reaches 54 km s⁻¹). Using 25 km s⁻¹ as a lower limit of the planetocentric velocity, and assuming a distance from the Mars orbit of 0.1 AU, there exist 5 bodies whose orbits are considered as progenitors of the most intense meteor showers (see Tab. 8). Of course, the list of Martian showers could be longer: it is, in fact, possible that progenitor bodies of meteor showers have not yet been discovered. It is not easy to estimate the ZHR of Martian meteor showers; anyway, it is plausible that they are not much different from the terrestrial ones. Thus, the observation frequency from an orbital facility can be considered similar, at least as an order of magnitude, to the terrestrial one: 1 meteor every 15-20 seconds (during meteor showers), if the sensor is at a height of 400 km and the field of view is 120°; for the magnitude see Figure 1.
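A one-line evaluation of the scale-height magnitude offset of equation (25) (and of the analogous equation (31) for Jupiter, given later); the scale-height ratios 0.8 and 0.29 are those quoted in the text.

import math

def magnitude_offset(h_e_km, h_x_km, q_e_km=100.0, q_x_km=100.0):
    """Eqs. (25)/(31): magnitude difference m_X - m_E for equal mass and velocity."""
    return -2.5 * math.log10((h_e_km / h_x_km) * (q_e_km / q_x_km) ** 2)

if __name__ == "__main__":
    print(f"Mars, equal heights:    m_M - m_E ~ {magnitude_offset(8.5, 8.5 / 0.8):+.2f}")
    print(f"Jupiter, equal heights: m_J - m_E ~ {magnitude_offset(8.5, 8.5 / 0.29):+.2f}")
    # Seen from the respective surfaces (q_E = 100 km, q_M = 80 km):
    print(f"Mars, from the ground:  m_M - m_E ~ {magnitude_offset(8.5, 8.5 / 0.8, 100.0, 80.0):+.2f}")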
Progenitor      Planetocentric velocity (km/s)   Minimum distance (AU)   Peak date
2102 Tantalus   27.3                             0.060                   24/05/2003
5335 Damocles   29.8                             0.049                   16/06/2004
1974 MA         25.8                             0.024                   06/07/2003
1P/Halley       53.9                             0.067                   01/01/2004
13P/Olbers      27.3                             0.021                   10/03/2004

Table 8: Progenitor bodies of the principal Martian meteor showers (Christou and Beurle, 1999).
The first three progenitor bodies are asteroids and, as in the case of 3200 Phaethon for the Earth, they could produce meteor showers on Mars. Comet Halley generates two distinct showers on the Earth (the Orionids and Eta Aquarids, with ZHRs of 60 and 20, respectively), thus it should also produce similar showers on Mars. In the case of comet Olbers, whose perihelion passage is foreseen in 2024, it is very likely that in that period a meteor storm will be observable on Mars. Bolides and superbolides can also occur on Mars, with the same characteristics as the terrestrial ones. Due to the lower mean height at which the phenomenon occurs, an impact on the surface producing a flash is more probable.

METEORS ON VENUS
Meteors on Venus could be observable from orbit, making it possible to study meteoroid streams down to 0.72 AU from the Sun.

Venus and its atmosphere
Venus orbits around the Sun at a mean distance of 0.723 AU, with an orbital period of 224.7 days. The equatorial radius is 6,052 km and its mass 0.815 Earth masses. The gravity acceleration on the surface is 8.83 m s⁻² (90% of the Earth's value) and the escape velocity 10.36 km s⁻¹. The clouds and hazes in the Venus atmosphere are located between 30 and 80 km from the surface, the clouds between 48 and 67 km of height. On Venus, the atmospheric pressure at the ground is 90 times the terrestrial one and its principal component is carbon dioxide (CO2). In terms of number of particles per unit volume, the chemical composition of the atmosphere is similar to the Martian one: 96.5% CO2, 3.5% N2, and traces of SO2, Ar and Ne. On the surface, due to the intense greenhouse effect, the temperature has a mean value of 730 K.
The height of meteors on Venus
Using the same simple physical model already applied to Mars, we can estimate the height in the atmosphere of Venus where meteors occur:

q_V = H_V [q_E/H_E - ln(ρ_0E/ρ_0V)]    (26)

where q is the height, ρ the atmospheric density, and H the scale height of the atmosphere; the indexes E and V refer to the Earth and Venus.

Parameter                              Venus
Ground pressure                        P_0V = 8.96×10⁴ mbar
Gravity acceleration on the surface    g_V = 8.83 m s⁻²
Mean temperature                       T_V = 730 K
Scale height                           H_V = 15.5 km
Molecular weight                       μ_V = 44

Table 9: Parameters of the exponential atmosphere model for Venus.
From equation (26) we obtain that the interval 120-70 km for the meteors on Earth corresponds to 300-200 km on Venus. This means that, considering the same meteoroid mass and velocity, meteors on Venus occur at a greater height, and the apparent brightness from orbit will therefore be higher with respect to a similar event on Earth. Moreover, the height where the meteor phenomenon occurs on Venus is above the haze and cloud upper limit (80 km), allowing observation from space.

Venusian meteors observed from orbit
The meteors on Venus, due to the height of their occurrence and the presence of a thick cloud deck, can be observed only from space. In principle, the observation of superbolides on Venus could also be carried out from the Earth (see Tab. 10), because during its inferior conjunction the planet's night hemisphere is directed towards the Earth. Nevertheless, the observations from the Earth are very difficult and limited to a short time interval.
Log mass (kg)   Absolute magnitude   Magnitude from Earth   Interval between 2 events (days)
3.5-4.5         -19                  15                     2
4.5-5.5         -22                  12                     9
5.5-6.5         -24                  10                     21
6.5-7.5         -27                  7                      183

Table 10: Venusian superbolides: absolute magnitude, magnitude as seen from Earth, and mean interval between events as a function of meteoroid mass.
Meteor showers on Venus
On Venus the planetocentric velocities span from 10.4 to 85 km s⁻¹, thus the average velocity of Venusian meteors is higher than on the Earth. Taking into account that, in general, meteoroids of a given stream can be observed as meteors if their minimum distance from the planet orbit is less than 0.1 AU, 11 comets and 4 asteroids satisfy this criterion (Beech, 1998) (see Tab. 11).
Table 11: Candidate progenitor bodies of Venusian meteor showers (comets and asteroids approaching the Venus orbit within 0.1 AU).

It is difficult to estimate the ZHR on Venus, but it is plausible that it is not much different from those of the principal Earth showers. Thus, even the observation frequency from orbit would be similar: one meteor every 15-20 s during the shower maximum.

METEORS ON JUPITER
Taking into account the same conditions (mass and velocity of the meteoroids and atmospheric density), even on Jupiter meteors have to be as luminous as on the other planets. If all the emitted radiation comes from the ablation of the meteoroid atoms, even the observable spectra will be similar to those of terrestrial meteors (see Tab. 12). On Jupiter a solid surface does not exist, so we consider as zero the level where the pressure is 100 mbar. As for Venus and Mars, we assume the isothermal atmosphere model with constant gravity acceleration (law of the atmospheres):

ρ(q) = ρ₀ e^(-q/H)    (28)

where ρ₀ is the density at the atmospheric level with 100 mbar pressure (q = 0 km), and H the atmosphere scale height.

Parameter                        Earth               Jupiter
Surface pressure                 P_0E = 1013 mbar    P_0J = 100 mbar
Surface gravity acceleration     g_E = 9.81 m/s²     g_J = 23.12 m/s²
Mean temperature                 T_E = 290 K         T_J = 190 K
Scale height                     H_E = 8.5 km        H_J = 29.6 km
Molecular weight                 μ_E = 28            μ_J = 2.3

Table 12: Parameters of the atmospheric exponential model for the Earth and Jupiter.
By using equation (28) and imposing the equality between the densities of the Earth and Jupiter atmospheric layers, we can compute the altitude q_J at which the atmospheric density on Jupiter is the same as that on Earth where the meteors occur:

q_J = H_J [q_E/H_E - ln(ρ_0E/ρ_0J)]    (29)

We obtain the ratio between the densities by using the law of perfect gases:

ρ_0E/ρ_0J = (P_0E μ_E T_J) / (P_0J μ_J T_E)    (30)
From the data listed in Tab. 12 and by using eq. (30), the ratio between the Earth and Jupiter atmospheric densities is 81. From eq. (29) we obtain that the interval 120-70 km for the meteors on Earth corresponds to the interval 288-114 km on Jupiter. These values have been computed considering as level zero the atmospheric layer at the pressure of 100 mbar, well above the first ammonia cloud layer.

Jovian meteors observed from orbit
Meteor trails in the Jupiter atmosphere have been observed in the past. One bolide was observed on 5 March 1979 during the Voyager 1 fly-by (Cook and Duxbury, 1981). The spacecraft distance from the planet was 555,000 km and the bolide absolute magnitude was -12.5. The meteoroid mass, estimated from the lightcurve, was about 11 kg. From the observations, it results that the bolide reached the atmospheric level at 3.5 mbar, at about 100 km above the zero level at 100 mbar, in good agreement with the estimated height range for Jovian meteors. Another well-known event was the impact of the fragments of comet Shoemaker-Levy 9 on Jupiter in July-August 1994 (Orton et al., 1995). The observations of Jupiter meteors can be carried out only by a spacecraft in orbit around the planet; in fact, due to its distance and very low solar phase angle, it is impossible to observe these phenomena from Earth, even with the Hubble Space Telescope. As seen in the section "Meteors on Mars", considering the same mass and planetocentric velocity, the meteor luminosity is inversely proportional to the atmosphere scale height. By using the Pogson relation, the magnitude difference between a Jovian meteor at height q_J and a similar one in the Earth atmosphere at height q_E is:

m_J - m_E = -2.5 log[(H_E/H_J)(q_E/q_J)²]    (31)
If q_E = q_J and taking into account that the atmospheric scale height ratio is 0.29, we obtain m_J - m_E = +1.34. Thus, the absolute magnitude of Jovian meteors having the same mass and velocity is larger by about one magnitude with respect to the terrestrial ones. This means that the instruments used for the observation of terrestrial meteors from orbit are also valid for Jupiter, where the meteor phenomenon is substantially analogous.

Meteor showers on Jupiter
Meteoroids of a given interplanetary stream can be observed as meteors on Earth if the minimum distance from the Earth orbit is lower than about 0.2 AU. We can apply the same criterion to Jupiter when looking for comets and asteroids with known orbits as candidate progenitors of meteor showers. The principal source of Jovian meteor showers is thought to be the comets belonging to the Jupiter family (JFC). All known JFC (about 200) orbit around the Sun in the direct sense on low-inclination trajectories, and practically all comets belonging to this group have an orbital period lower than 20 years (Fernandez et al., 1999). Due to these short orbital periods, they undergo strong nucleus activity, which in a relatively short time exhausts the nucleus volatile component. Thus, it is reasonable to expect that the JFC are rich in meteoroids having good probabilities of falling into the
Jupiter atmosphere. An estimate of the ZHRs is not easy; anyway, it is plausible that they are larger than those of the major terrestrial showers.

Electrical discharges
Lightning occurs as a result of natural charging phenomena. On Earth, lightning is known to result from electric fields developed during rainstorms, dust storms, and volcanic eruptions. These fields are the result of droplet-droplet or dust-dust collisional charging. Lightning discharges have been detected on every planet with an atmosphere, except for Mars. On Saturn, Uranus and Neptune, atmospheric electric activity has been recorded in the radio VLF range (Russell, 1993). Due to the high opacity of the Venus, Saturn, Uranus and Neptune atmospheres, which hampers the observation of lightning, the candidate planets for imaging atmospheric electrical discharges are Mars and Jupiter.

Observation of Martian lightning
Due to the prevalence of Martian dust devils and dust storms, an understanding of the underlying physics of electrical discharges in Martian dust clouds is critical for future Mars exploratory missions. Mars' low atmospheric pressure and arid, windy environment suggest that the dust near the surface of Mars is even more susceptible to triboelectric charging than terrestrial dust. Electrical discharges on Mars should occur more frequently but at lower intensities than those seen on Earth. Since extensive dust storms are known on Mars, Martian lightning should be expected to occur. Mars has been more extensively surveyed; however, the reconnaissance involved did not specifically focus on surveying the Martian night and, therefore, might simply have failed to detect the relatively faint signatures of Martian lightning flashes. There are good theoretical reasons to expect that lightning discharges occur on Mars. Mars has a cold, dry climate, with seasonal winds and dust storms. On the Earth, lightning discharges occur in association with desert sandstorms and volcanic ash plumes. In these events, electrical charge separation begins when dust particles collide or brush past each other in the turbulent air. Particles become positively or negatively charged according to their size. (This situation is analogous to raindrop charging in a thunder cloud.) In a Martian dust cloud the overall electrical potential should remain near zero. Variable winds and/or gravitational settling, however, may separate particles by size, building substantial electric fields within the cloud. When these fields reach a critical value, a lightning discharge occurs. Dust storms have been observed on Mars to develop and spread over the entire planet and may last for months. Smaller storms also occur and should provide an ideal environment to search for Martian lightning. Lightning on Mars may be very different from terrestrial lightning due to the low atmospheric pressure. Some authors have suggested diffuse glows or flashes, filamentous discharges, or small arcs. The most likely candidate for the creation of electrostatic charges and fields is triboelectric charging of dust, i.e., the friction between blown dust particles and the ground, or between dust particles themselves. Terrestrial experience demonstrates that electric fields of 5-15 kV m⁻¹ are not uncommon during dust storms and dust devils (Sentman, 1991). Olhoeft (1991) suggests that Martian lightning will be a diffuse flash (similar to summer heat lightning).
Because of Mars' low atmospheric density, electrical discharges occur at a lower electric potential than on Earth, and therefore should be more frequent. The breakdown electric field on Mars is expected to be between about 5 and 20 kV m⁻¹, compared to about 3,000 kV m⁻¹ on Earth. How bright might a Martian lightning flash be? To have a rough idea of what might be expected, consider an electric field E of 5 kV m⁻¹; the corresponding energy density is

u = (1/2) ε₀ E² = 1.11 × 10⁻⁴ J m⁻³    (32)

where ε₀ is the vacuum dielectric permittivity. If the dust cloud is assumed to be 1,000 m high and to have a projected surface area of 1 km², the total cloud volume is 10⁹ cubic meters, and the total energy due to charge separation within the cloud (assuming that E is uniform throughout) must be 1.11 × 10⁵ J. If the camera recording the flash were 200 km away from the cloud, and if 1/10 of the total energy in the discharge went into producing light, then 1.11 × 10⁴ J of light would be spread over a sphere of area 5 × 10¹¹ m². If the discharge takes one millisecond to occur, the intensity at the camera is approximately 2.2 × 10⁻⁵ W m⁻².
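All the quantities entering this order-of-magnitude estimate are stated above (field strength, cloud geometry, light-conversion fraction, flash duration and camera distance), so the calculation can be reproduced directly; a minimal sketch:

```python
import math

EPS0 = 8.854e-12     # vacuum permittivity, F/m

E = 5e3              # electric field assumed in the text, V/m
height = 1e3         # cloud height, m
area = 1e6           # projected cloud area, m^2 (1 km^2)
light_fraction = 0.1 # fraction of the discharge energy emitted as light
duration = 1e-3      # assumed flash duration, s
distance = 200e3     # camera-to-cloud distance, m

energy_density = 0.5 * EPS0 * E**2             # eq. (32): ~1.1e-4 J/m^3
total_energy = energy_density * height * area  # ~1.1e5 J stored in the cloud
light_energy = light_fraction * total_energy   # ~1.1e4 J radiated as light
sphere_area = 4 * math.pi * distance**2        # ~5e11 m^2 at 200 km
intensity = light_energy / duration / sphere_area

print(f"energy density  = {energy_density:.2e} J/m^3")
print(f"flash intensity = {intensity:.2e} W/m^2")   # ~2.2e-5 W/m^2
```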
Observation of Jovian lightning
The possible occurrence of lightning in the Jupiter atmosphere was first predicted by Bar-Nun (1975), who suggested that it could be responsible for the observed abundance of acetylene. The Voyager 1 and 2 spacecraft performed the first observations of lightning in the Jovian atmosphere in 1979. Later, the Galileo spacecraft monitored the electrical activity in the Jupiter atmosphere. These observations show that the regions between 47° and 49° latitude (in both hemispheres) are the most active from the electrical point of view. A typical Jovian storm is about 1,500 km in diameter and produces about 20 flashes per minute. The flashes originate between the 2 and 5 bar atmospheric pressure levels, in the region where the H₂O clouds are located. This suggests that the lightning generation mechanism is analogous to the terrestrial one (convective electrification of the clouds). In the visible band, the flash energy ranges from 4.3 × 10⁸ J (for flashes of mean energy) to 6.6 × 10⁹ J for the more energetic ones (Russell, 1993). These values do not take into account atmospheric scattering, and the actual optical output can be larger by an order of magnitude. The total powers are larger than the optical ones by a factor between 10² and 10³ (Borucki and McKay, 1987). On average, 0.01 flashes per km² per year occur in the above-mentioned optical range. Assuming a typical duration of 35 ms, as for terrestrial lightning (Zarka, 1985), the optical power spans from 1.2 × 10¹⁰ W to 2 × 10¹¹ W and, if observed from orbit, the corresponding apparent magnitudes are negative up to a distance of 10⁵ km from the top of the Jupiter clouds (see Fig. 8).
Figure 8. Optical magnitude of Jupiter lightning as a function of the distance from the cloud tops.
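The optical powers quoted in the preceding paragraph follow from dividing the visible-band flash energies by the assumed 35 ms duration; the sketch below also gives the corresponding irradiance at 10⁵ km from the inverse-square law (neglecting atmospheric scattering), which is the quantity underlying the magnitude curve of Fig. 8. The photometric zero point needed to convert irradiance to magnitude is not reproduced here.

```python
import math

flash_energies_J = (4.3e8, 6.6e9)  # mean-energy and bright visible-band flashes
duration_s = 35e-3                 # assumed duration, as for terrestrial lightning
distance_m = 1e8                   # 1e5 km above the cloud tops

for energy in flash_energies_J:
    power = energy / duration_s                           # ~1.2e10 W ... ~1.9e11 W
    irradiance = power / (4 * math.pi * distance_m ** 2)  # inverse-square spreading
    print(f"P = {power:.1e} W  ->  F(1e5 km) = {irradiance:.1e} W/m^2")
```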
The Jupiter lightning spectrum shows the emission lines of the hydrogen Balmer series superimposed on the continuum. In 2001, some attempts were made with the HST to observe lightning on Jupiter in the hydrogen Hα line (656 nm), but the results were not satisfactory (Caldwell et al., 2001). At present, observation from orbit is the best way to monitor the electrical activity on Jupiter.

Impact flashes
The Moon, like all Solar System bodies, has undergone continuous bombardment by asteroids, comets and meteoroids in general since its formation. Meteoroids can be of cometary origin, in which case they generally belong to a shower (or stream), or of asteroidal origin, in which case most of them belong to the class of sporadic events. Because of the absence of an atmosphere, the meteor phenomenon does not occur during the last phases of the fall, and all the meteoroid kinetic energy is released on the ground. During the impact a fraction of the impact energy is converted into a luminous flash potentially visible from the Earth. Meteoroid impacts on the Moon were detected in 1974, in the period of the Leonid meteor shower, by the Apollo Lunar Seismic Network. These stations stopped working in September 1977; during their operation they registered the impacts of more than 100 meteoroids. During the last decades, several lunar observers have claimed to have detected optical flashes on the Moon, but such observations have never been independently confirmed. Unfortunately, different phenomena can cause the appearance of a flash, such as reflections in the instrument optics, cosmic rays hitting the retina, point meteors, reflections from artificial satellites crossing the lunar disk, etc. This is the reason why two independent observations of the same flash are needed. For the Leonids, a rough estimate of the apparent magnitude of the flash due to an impact is given by the following formula (Beech and Nikolova, 1998):

m_Leonids = -2.5 log M + 5.5    (33)

where M is the meteoroid mass (in grams).
The impact on the Moon's surface of a meteoroid belonging to the Leonid shower and having M = 100 g produces an optical flash of magnitude m_V = +0.5. Other authors (Bellot Rubio, 2000) claim that, for the same magnitude, the masses of the meteoroids which produced the brightest flashes of 1999 should have been about 5 kg.

Impact flashes on the Moon
The detection of impact flashes by a camera on board a lunar satellite should be much more efficient than ground-based observations. In fact, the distance being much smaller, even the impact of a meteoroid of small mass should be detectable. Moreover, we could avoid the constraints due to the geometry of the Earth-Moon system, since the lunar dark hemisphere could be monitored even when it is invisible from Earth. Denoting by q_L the spacecraft height (in km) above the Moon's surface, the impact flash magnitude is given by

m = m_T + 5 log(q_L) - 27.9    (34)

where m_T is the event magnitude as seen from the Earth. Assuming that the limiting magnitude of the sensor is +6, from low orbits (within 5,000 km of the lunar surface) flashes that from the Earth would appear of magnitude +15 could be visible (see Fig. 9; the different plots refer to different apparent magnitudes, m_T, observed from the Earth). From eq. (34), a meteoroid belonging to the Leonids producing a flash of magnitude +15, as seen from Earth, has a mass of 1.6 × 10⁻⁴ g. So, from lunar orbit it should be possible to detect the impacts of low-mass particles. For impact flashes having a magnitude between +3 and +7 as seen from the Earth, and for distances from the lunar surface lower than 50,000 km, the apparent magnitudes are negative. This should make it possible to study very low mass meteoroids and to take spectra of the impact phenomenon, at present completely unknown.
Figure 9. Apparent magnitude of impact flashes vs. distance from the Moon's surface.
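Equations (33) and (34) can be combined to estimate what a lunar orbiter would record for a given meteoroid mass and orbital height; the short sketch below reproduces the figures quoted in the text (the 100 g Leonid flash and the +15 events detectable from 5,000 km).

```python
import math

def leonid_flash_mag_from_earth(mass_g):
    """Eq. (33): apparent magnitude, as seen from Earth, of a Leonid impact flash."""
    return -2.5 * math.log10(mass_g) + 5.5

def flash_mag_from_orbit(m_earth, height_km):
    """Eq. (34): apparent magnitude seen from a spacecraft at height q_L (km)."""
    return m_earth + 5 * math.log10(height_km) - 27.9

print(leonid_flash_mag_from_earth(100.0))    # -> +0.5
print(flash_mag_from_orbit(15.0, 5000.0))    # -> about +5.6, within a +6 sensor limit
print(leonid_flash_mag_from_earth(1.6e-4))   # -> about +15
```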
Assuming ZHR = 100, an average population index r = 2.3, and a geocentric velocity V = 60 km s⁻¹, we can estimate that the meteoroids generating meteors of magnitude ≤ +6.5 are distributed with a spatial density (Koschack & Rendtel, 1990):

ρ(m ≤ +6.5) ≈ 10⁻⁸ meteoroids km⁻³    (35)

If we take a mass of 10⁻³ g as a lower limit, the spatial density of meteoroids of equal or larger mass is obtained from ρ(m ≤ +6.5) through a correction factor that depends on the population index r and on the shower geocentric velocity V, in km s⁻¹ (Koschack & Rendtel, 1990):

ρ(M ≥ 10⁻³ g) = ρ(m ≤ +6.5) f(r, V)    (36)

Considering V = 60 km s⁻¹, we obtain ρ(M ≥ 10⁻³ g) ≈ 10⁻⁹ meteoroids km⁻³. These meteoroids can produce flashes of apparent magnitude +13 as seen from the Earth (see eq. 33). Such events are visible at apparent magnitude +6 from a distance of 15,000 km from the lunar surface. The Moon radius is R_L = 1,738 km; the impact frequency ν_m is then

ν_m(M ≥ 10⁻³ g) = ρ(M ≥ 10⁻³ g) π R_L² V    (37)

If V = 60 km s⁻¹, ν_m ≈ 0.6 meteoroids s⁻¹. Dividing by the lunar surface exposed to the meteoroid stream, we obtain a frequency density of about 3 × 10⁻⁸ meteoroids s⁻¹ km⁻².
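As a quick numerical check of eq. (37), the sketch below adopts the spatial density ρ(M ≥ 10⁻³ g) ≈ 10⁻⁹ meteoroids km⁻³ used above and recovers the quoted impact rate; the per-area rate is referred to the exposed lunar hemisphere, which is our assumption for the normalization.

```python
import math

rho = 1e-9     # meteoroids per km^3 with M >= 1e-3 g (value adopted above)
R_L = 1738.0   # lunar radius, km
V = 60.0       # shower geocentric velocity, km/s

cross_section = math.pi * R_L ** 2      # km^2, lunar cross-section facing the stream
nu = rho * cross_section * V            # eq. (37): impacts per second on the Moon
flux = nu / (2 * math.pi * R_L ** 2)    # per-area rate over the exposed hemisphere

print(f"impact rate   = {nu:.2f} per s")             # ~0.6 per s
print(f"rate per area = {flux:.1e} per s per km^2")  # ~3e-8 per s per km^2
```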
If α is the angular field of view of the camera, the lunar disk is completely covered from the distance

q̄_L = R_L [1 / sin(α/2) - 1]    (38)

If α = 120°, q̄_L = 269 km (for the Earth the corresponding distance is about 1,000 km). From Kepler's third law, the spacecraft orbital period is

T = 2π √[(R_L + q̄_L)³ / (G M_L)]    (39)

where G is the gravitational constant and M_L is the lunar mass.
At a height of 269 km, the orbital period is 4,534 s (1h 15m). With this T, a point on the lunar surface crosses the camera field of view in 756 s (12.6 min). This time is sufficient to detect all the phases of an impact, including the evolution of the dust cloud. Of course, the monitoring time increases with the height: at q_L = 1,000 km the orbital period is T = 12,811 s (3h 33m) and a point on the Moon's surface is framed for about 1 hour.

Transient Lunar Phenomena
A camera like S-POSH, kept in lunar orbit for a long period, could also help to solve the problem of the so-called Transient Lunar Phenomena (TLP). These phenomena consist of temporary changes in the appearance of limited parts of the lunar surface. The average TLP duration is 20 minutes, and the average diameter of the affected regions is about 16 km. After some time the surface returns to its previous state: permanent variations of the lunar morphology following a TLP have never been observed. So far, more than 1,500 TLP have been reported on the Moon but, in spite of many hypotheses, their origin is at present completely unknown. Most TLP observations are visual, so it is difficult to establish whether the phenomenon really occurred on the lunar surface or was due to a mistaken interpretation by the observer. On the Moon there are regions where TLP are relatively frequent and have been detected by independent observers: about 30% of TLP are observed in the Aristarchus crater, 8% in Plato, 5% in Proclus, and 3% in Alphonsus. Usually TLP are more frequent near the rims of the maria and in regions where hills are numerous.
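As a check of the coverage geometry of eq. (38), the sketch below computes the lowest altitude from which a 120° field of view frames the full disk, for both the Moon and the Earth; it reproduces the 269 km and roughly 1,000 km figures quoted above.

```python
import math

def full_disk_altitude(body_radius_km, fov_deg):
    """Eq. (38): minimum altitude at which a camera of the given field of view
    sees the entire disk of a body of the given radius."""
    return body_radius_km * (1.0 / math.sin(math.radians(fov_deg) / 2.0) - 1.0)

print(full_disk_altitude(1738.0, 120.0))   # Moon:  ~269 km
print(full_disk_altitude(6378.0, 120.0))   # Earth: ~987 km, i.e. about 1,000 km
```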
Impacts on Mercury
To estimate the impact flash magnitudes on Mercury we have to obtain the impact scaling law. The radiation emitted during an impact is proportional to the 4th power of the impact velocity (Eichorn, 1976). Thus, the radiation flux density is proportional to

F ∝ V⁴ / q²    (40)

where q is the observer distance from the impact point. Using the Pogson relation, the difference between the flash apparent magnitudes on Mercury and on the Moon is given by

m_M - m_L = -2.5 log[(V_M / V_L)⁴ (q_L / q_M)²]    (41)

where V_M and V_L are the impact velocities on Mercury and on the Moon, respectively. On Mercury the impact velocities span from 4.2 km s⁻¹ (the escape velocity) to 135 km s⁻¹ (orbital velocity plus heliocentric escape velocity at perihelion). From equation (33), the impact flash produced by a Leonid meteoroid with a mass of 100 g has an apparent magnitude m_L = +0.5. If a similar meteoroid impact occurs on the Mercury surface with a velocity between 100 and 135 km s⁻¹, the apparent magnitudes from a distance of 384,400 km to the planet are, respectively, -1 and -2.2. This shows that impact flashes on Mercury are roughly 1.5-2.7 magnitudes brighter than those on the Moon. The duration of the events spans from about 0.01 s to some seconds (depending on the meteoroid mass).

Meteoroid impacts on the Jupiter Galilean satellites
The Galilean satellites are bodies of planetary dimensions with a solid surface on which it is possible to observe the flash produced by a meteoroid impact. As already done in the case of Mercury, to estimate the flash magnitude we have to use the impact scaling law (see the paragraph "Impacts on Mercury"). The difference between the apparent magnitude m_L of a lunar impact flash and that of a flash on one of the satellites is given by

m_S - m_L = -2.5 log[(V_S / V_L)⁴ (q_L / q_S)²]    (42)
where V is the impact velocity on the ground and q the distance of the observer from the surface. The impact velocities on the satellites are practically the same as on Jupiter, ranging from 60 to 68 km s⁻¹. Analysing the lunar impacts, we estimated that a Leonid with a mass of 100 g produces a flash of apparent magnitude m_L = +0.5 as seen from the Earth. If a similar meteoroid impacts on a Galilean satellite, the apparent magnitude at 384,400 km from the satellite surface would span from +0.8 to -0.25. Therefore, the impact flashes on the Galilean satellites are comparable to the lunar ones, so that such phenomena can be observed with instruments similar to those developed for the Moon. In conclusion, the study of meteors and lightning on Jupiter and of impacts on the Galilean satellites is possible using the same instruments designed for the Earth and the Moon.
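The velocity and distance scaling of eqs. (41)-(42) is simple to evaluate; the sketch below applies it with the lunar Leonid flash (m_L = +0.5) as the reference. The lunar reference impact velocity is not given explicitly in this section, so the value used here, about 71 km s⁻¹ (the Leonid geocentric velocity), is our assumption; with it the Mercury figures quoted above are recovered to within about 0.1 magnitude.

```python
import math

M_L_REF = 0.5      # apparent magnitude of a 100 g Leonid impact flash on the Moon
V_L_REF = 71.0     # ASSUMED lunar (Leonid) impact velocity, km/s
Q_REF = 384_400.0  # reference observer distance (Earth-Moon distance), km

def scaled_flash_mag(v_impact_kms, observer_distance_km):
    """Eqs. (41)-(42): scale the lunar reference flash by the 4th power of the
    impact velocity and the inverse square of the observer distance."""
    ratio = (v_impact_kms / V_L_REF) ** 4 * (Q_REF / observer_distance_km) ** 2
    return M_L_REF - 2.5 * math.log10(ratio)

# Mercury impacts at 100 and 135 km/s, observed from 384,400 km:
print(scaled_flash_mag(100.0, 384_400.0))  # ~ -1.0
print(scaled_flash_mag(135.0, 384_400.0))  # ~ -2.3, close to the -2.2 quoted
```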
Auroras
Auroras are also observable in the polar regions of the giant planets of the Solar System (Jupiter, Saturn, Uranus and Neptune), which have intense dipolar magnetic fields. Mercury, which has a magnetic field but no atmosphere, cannot generate auroras, while Venus and Mars practically lack a magnetic field. The mechanisms producing auroras in the polar regions of the giant planets are the same as on the Earth; the differences lie in the emitted power and in the spatial extent of the phenomenon.

            Earth    Jupiter   Saturn   Uranus   Neptune
Power (W)   10¹⁰     10¹⁴      10¹¹     10¹¹     < 10⁸
CONCLUSIONS
The detection of luminous transient phenomena on planetary bodies is central if we are to derive important physical information on the origin of these events. So far, the study of these phenomena has been very limited, due to the lack of ad hoc instrumentation, and their detection has been performed mainly on a serendipitous basis. The time now seems ripe to plan the development of a new generation of dedicated, space-based observing facilities. Visible and infrared cameras should be the primary sensors for these facilities, which do not require the launch of independent satellites or space probes, since they may be carried aboard orbiters conceived to carry out other activities as well, thereby reducing the overall cost of deploying an observing network. As a purely speculative example, one could think of adding dedicated cameras to some of the satellites of the fleet to be launched in the near future for the European Global Navigation Satellite System (Galileo), or to the three satellites of the ESA SWARM constellation devoted to Earth observation. The first should become operational starting in 2008, the second in 2009. High satellite orbits are better for sky coverage, but there is a corresponding intrinsic reduction in sensitivity and in the precision of the determination of the entry trajectories of the bodies. For this reason, an ideal space-based network would also include a number of satellites on lower-altitude orbits. In addition, the International Space Station could host some of the sensors of a more general network. In practical terms, ESA recently issued an announcement of opportunity for the development of systems devoted to the detection of transient events in the Earth atmosphere and on the dark side of other planetary objects. One such detector has been designed and a prototype is under construction at Galileo Avionica S.p.A. (Florence, Italy).

REFERENCES
1. Adolfsson L.G., S. Gustafson, C.D. Murray, Icarus, 119, 144-152 (1996).
2. Artem'eva N.A. et al., Solar System Research, 35, 177-180 (2001).
3. Bar-Nun A., Icarus, 24, 86-94 (1975).
4. Beech M., Mon. Not. R. Astron. Soc., 294, 259-264 (1998).
5. Beech M. and P. Brown, Earth, Moon and Planets, 68, 171-179 (1995).
6. Beech M. and S. Nikolova, Il Nuovo Cimento, 21C, 577-581 (1998).
7. Borucki W.J. and C.P. McKay, Nature, 328, 509-510 (1987).
8. Buil C., in The Observer's Guide to Astronomy, vol. 2, Cambridge University Press (1994).
9. Caldwell J., W. Borucki, K. Rages, Bulletin of the American Astronomical Society, 33, 1104 (2001).
10. Ceplecha Z. et al., Space Science Reviews, 84, 327-471 (1998).
11. Ceplecha Z. et al., in Meteoroids 1998, Astron. Inst., Slovak Acad. Sci., 37-54 (1999).
12. Christou A.A. and K. Beurle, Planetary and Space Science, 47, 1475-1485 (1999).
13. Cook A.F. and T.C. Duxbury, Journal of Geophysical Research, 86, 8815-8817 (1981).
14. Eichorn G., Planet. Space Sci., 24, 771 (1976).
15. Fernandez J.A., G. Tancredi, H. Rickman, J. Licandro, Astronomy & Astrophysics, 352, 327-340 (1999).
16. Hawkes R.L., in Meteors in the Earth's Atmosphere, pp. 97-122 (2002).
17. Hughes D.W., Nature, 285, 438 (1980).
18. Koschack R. and J. Rendtel, WGN, 18:2 (1990).
19. Koten P. and J. Borovicka, in Meteoroids 2001 Conference (2001).
20. Olhoeft G.R., in "Sand and Dust on Mars", NASA CP-10074, p. 44 (1991).
21. Rossi A., A. Cordelli, P. Farinella, L. Anselmo, J. Geophys. Res., 99, 23195-23210 (1994).
22. Russell C.T., Ann. Rev. Earth Planet. Sci., 21, 43-87 (1993).
23. Sentman D.D., in "Sand and Dust on Mars", NASA CP-10074, p. 53 (1991).
24. Zarka P., Astronomy & Astrophysics, 146, L15-L18 (1985).
PROPOSED GROUND-BASED EXPERIMENTS FOR ASTEROID SEISMOLOGY
RAYMOND GOLDSTEIN, A. DE LOS SANTOS, W. HUEBNER, E. SAGEBIEL, AND J. WALKER
Southwest Research Institute®, San Antonio, Texas, U.S.A.

ABSTRACT
We are currently planning to develop a series of ground-based experiments in order to develop techniques for carrying out in situ seismological measurements on a Near-Earth Asteroid (NEA). These experiments are directed toward a mission that would place several sensor packages distributed over the surface of the asteroid, as well as one or more seismic signal initiators. The signals detected by the sensors would be relayed back to the orbiting spacecraft that had previously released the landers. The analysis of the results of these measurements will allow determination of the internal structure of the asteroid.

INTRODUCTION
Scientific Objectives
The objective of the planned research effort is to begin the development of a system to determine in situ the inner structure of a near-Earth asteroid (NEA) by means of seismic measurements. Such measurements are difficult because asteroids (and other small bodies) have negligible gravity, mostly unexplored surfaces, no atmosphere, and can experience large diurnal and seasonal temperature variations. We are specifically concerned in this research with a laboratory investigation of techniques for the delivery of miniature landers carrying seismometers and for anchoring them on the surface of an asteroid, or at least providing adequate seismic coupling between the asteroid and the sensor. We anticipate that at the end of this work we will be in a position to design a system for a flight mission to an asteroid. This system (beyond the scope of the proposed work) will presumably include explosive devices to initiate seismic activity, packages of seismic sensor landers, techniques for delivering them to the target NEA, a technique to determine sensor locations, as well as the telemetry link to obtain the measured data on the ground via spacecraft (S/C) relay. In this study we plan to use commercially available MEMS sensors, which are small, light, low-power, inexpensive devices. For a future flight program these sensors should then allow placing a network of several seismic "stations" on an asteroid at a low cost and low burden to the delivery S/C. There are several techniques routinely used for terrestrial exploration, and some of these have also been used or are planned for extra-terrestrial application. These methods include penetrators, use of electromagnetic waves to measure attenuation through the body and/or reflections from internal structures in the body (transmission or reflection tomography), and measurement of naturally occurring and induced seismic waves. These methods are complementary, and a combination of them may offer the best answers to the structure and material properties of a body. Our planned research is directed toward
seismic and sound speed measurements in NEAs (Near-Earth Asteroids, a subgroup of NEOs). We will use existing Southwest Research Institute (SwRI®) test facilities that have been used in previous, preliminary studies related to asteroid seismology (Walker and Huebner, 2004). Developing such a system involves solving or addressing a number of inter-related, complex problems or issues. The solution to some of them will depend on the particular asteroid target (e.g. size, rubble pile or not) and type of mission (e.g. flyby or orbiter). Some of the difficulties of carrying out such a mission include the very low gravity of an asteroid, the absence of an atmosphere, a relatively unknown surface material and strength, and temperatures varying from extremely cold on the dark side to very hot in sunlight. We present here four topics that need to be studied to accomplish our objectives: 1) determination of the measurement requirements (e.g. seismometer sensitivity and frequency range, and a minimum measurement time interval); 2) the method of delivery of the measurement package to the NEA; 3) the method of anchoring the package and providing good "seismic" contact; 4) preliminary electronic and packaging design concepts. For reference, the additional issues that need to be considered for a complete system (but not currently planned) include: 5) packaging of the seismometers and related components to survive during and after the landing; 6) determination of the positions of packages and seismic initiators; 7) time coordination of the initiator and seismic signals; 8) detailed electronics design, including telemetry and data rate.

Science Background
The Geophysics and Geology of Asteroids
Asteroids and comets are the remnants of the building blocks (planetesimals) of the inner and outer planets, respectively. Knowledge of their composition and internal structure (geophysical and geological properties) is important for understanding their formation in the solar nebula, their age, their collisional evolution, their relationship to meteorites, their thermal properties, the formation of planets, and countermeasures to avoid a collision with Earth. A detailed taxonomy for asteroids has been developed (e.g., Tholen, 1984), but in broad (somewhat simplified) categories asteroids are called metallic, stony, or carbonaceous. Asteroids have been characterized mostly by their visible and infrared broadband spectra. These spectra reveal information about their surfaces, but we do not know much about their interiors. Judging the interior composition and structure of asteroids from their surface appearance probably gives misleading results. Space weathering can change the surface properties of asteroids. It usually makes surfaces redder and darker and weakens spectral band structures. Asteroids in the main belt are exposed to a weaker solar wind than asteroids closer to the Sun, and a weaker solar wind reduces space weathering. Lunar-style space weathering does occur on asteroids, but only in reduced amounts. However, one must keep in mind that NEAs are not now in the orbits where they started, so their solar weathering history is not clear. Note that small asteroids seem to show less space weathering than large asteroids (Binzel et al., 2001).
Other effects that change surface appearances include aqueous alteration, heating, and especially the occurrence of a regolith resulting from impact ejecta (ejecta blankets). Seismic shaking can destroy or obscure small craters. Down-slope regolith motion probably erased some features on the surfaces of Gaspra and Ida (Richardson et al., 2004). In rubble pile structures the amount of seismic coupling may reduce seismic effects to more local areas. In general, bulk densities of asteroids and comet nuclei are lower than those of equivalent materials on Earth. Bulk densities of over twenty asteroids and a few comet nuclei have been derived from spacecraft rendezvous and flybys, from observing mutually orbiting binary asteroids, and from estimating non-gravitational forces (in the case of comet nuclei). Britt et al. (2002) and Hilton (2002) determined average bulk densities for 23 asteroids and provide the references from which the averages were determined. It is not known whether the low densities are caused by microporosity - as a result of formation in low-gravity environments - or whether they are caused by macroporosity, implying that they result from fractures and loose assemblies (e.g., rubble piles) as a result of collisional evolution, compositional inhomogeneities, or several of these effects combined. Large "voids" also raise the question whether they are empty or filled (Cheng, 2004). Fines can fall into cracks, or the cracks can be filled by debris from internal friction (Britt and Consolmagno, 2001). These low bulk densities imply high bulk porosity, possibly higher than 50%, suggesting that many asteroids are rubble piles held together by self-gravity. Accretion models and energy balance calculations of disrupted asteroids suggest that material may be sorted by particle size: the larger pieces (and larger voids) may be located deep inside an asteroid and smaller particles may be restricted to the surface regolith zone. Low densities also suggest low material strengths. The nature of the porosity provides clues to an asteroid's collisional history, shock effects, compression, and lithification, and it influences physical properties such as thermal diffusivity, seismic velocity, cosmic ray exposure, and dielectric permeability. The thermal and seismic effects in turn influence asteroid internal evolution, metamorphism, shock dissipation, and elastic properties that can determine whether colliding asteroids accrete or disrupt. Thomas et al. (1986) showed that small objects tend to be irregular in shape, but that even at a size much smaller than 1 Ceres (an oblate ellipsoid with equatorial radius of about 500 km and polar radius of about 470 km) the shape becomes nearly spherical, suggesting that gravity is sufficient to compress the material and eliminate pore spaces. Johnson and Anderson (2003) elaborated on this point in their study of Amalthea. Average bulk densities for meteorites are in the range from 2.1 to 3.4 g/cm³. (The Tagish Lake meteorite is an exception at about 1.7 g/cm³.) Densities of asteroids can be as low as 1.0 ± 0.3 g/cm³ for 15 Eunomia (a stony asteroid; Britt et al., 2002) and as high as 4.9 ± 3.9 g/cm³ for 804 Hispania (Britt et al., 2002), but in general they are higher than the densities of meteorite analogs. Hydrated minerals in meteorites influence their density.
Densities of common but relevant materials on Earth are still higher, ranging from 8.8 g/cm³ for nickel and 7.86 g/cm³ for iron, to 4.2 g/cm³ for Fayalite (Fe₂SiO₄), to 3.2-3.3 g/cm³ for Enstatite (MgSiO₃) and Forsterite (Mg₂SiO₄), to 2.25 g/cm³ for graphite. Densities of comets are thought to be about 0.5 g/cm³ (Rickman et al., 1987; Farnham and Cochran, 2002).
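The link between the measured bulk densities and the inferred porosities is a one-line calculation; the sketch below shows it for an illustrative case (the particular bulk and grain densities chosen are ours, for the sake of example, not values taken from the paper).

```python
def bulk_porosity(bulk_density, grain_density):
    """Fraction of a body's volume that is empty space (micro- plus
    macro-porosity), given its bulk density and the density of its solid material."""
    return 1.0 - bulk_density / grain_density

# Example: a bulk density of 1.3 g/cm^3 with a meteorite-analog grain density
# of 3.3 g/cm^3 implies a porosity of about 60%.
print(f"{bulk_porosity(1.3, 3.3):.0%}")
```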
The results of seismological studies of asteroids will benefit many areas of asteroid research. For example, the properties of internal structure and composition, when coupled to their heliocentric distance dependence, also give clues about the source regions of asteroids. Internal structure will reveal whether an asteroid is primitive or differentiated. Unfragmented (original) structure will tell us about the formation of asteroids in the inner solar nebula. Fragmented structure and the size distribution of fragments will be useful for modeling (see, e.g., Asphaug et al., 1998). It will be possible to connect the interior structure and composition of asteroids to their surface properties (broadband spectra) and to the composition and source of meteorites. Seismology will also reveal whether water ice is present in some asteroids (e.g., Ceres-type asteroids such as 10 Hygiea and 24 Themis; Rivkin et al., 2004). The non-alignment between the principal axes of inertia and the spin axes gives rise to complex motions, including tumbling, which cause stresses and strains resulting in reverberation signals. The determination of the background noise caused by the reverberations will provide useful information about asteroid structure and dynamics. The results from our proposed work will provide the information necessary for understanding how to anchor instruments to the surfaces of asteroids and make acoustic contact so that seismic signals can be measured. This in turn will allow determination of properties such as seismic wave speeds, material strengths, shock dissipation, elastic properties, etc., which will provide the information necessary to determine whether colliding asteroids accrete to form larger bodies or disrupt to make rubble piles. There are several techniques routinely used for terrestrial exploration, and some of these have also been used or are planned for extra-terrestrial application. These methods include penetrators, use of electromagnetic waves to measure attenuation through the body and/or reflections from internal structures in the body (transmission or reflection tomography), and measurement of naturally occurring and induced seismic waves. These methods are complementary, and a combination of them may offer the best answers to the structure and material properties of a body. Our proposed research is directed toward seismic and sound speed measurements in NEAs (Near-Earth Asteroids, a subgroup of NEOs).

Hazards to Earth
The history of catastrophic impacts of extraterrestrial objects (i.e., asteroids and comets) with Earth has been well documented. (See, e.g.: httw//www.lul.arizona.edu/SIC/impact craterindworld Craters Webhntrornatxhtml.) Remnants of many large impact events have been discovered in various regions of the planet, and the role of such events in the disappearance of early life forms (e.g., the "KT" event) is now well accepted. Can such an event happen again, with serious consequences for humanity? It is believed that there are about 1,000 near-Earth objects (NEOs) larger than 1 km in diameter, and perhaps 25 times as many with diameters larger than 200 m, that could cause more local catastrophic effects if they were to impact Earth. Interest is mounting in developing techniques to divert these objects or otherwise minimize the effects of an impact by such NEOs. For example, see www.noao.edu/meetings/mitigation/eav.html
for the proceedings of a recent NASA-sponsored workshop on the subject. The concern is great enough that the U.S. Congress has mandated the goal of finding and determining the orbits of at least 90% of NEOs with diameter > 1 km by 2008 (the Spaceguard Survey). An important aspect of any such measure is an understanding of the physical nature of the object (Huebner and Greenberg, 2000; Huebner et al., 2001). What is the composition? What is the internal structure (monolithic, fractured, highly porous, or rubble pile)? Ground-based observations give little information on this, and even the few spacecraft that have visited asteroids (Galileo, e.g. Johnson and Anderson, 2003; NEAR, Veverka et al., 2001; and DS-1, Rayman et al., 2000) have shed little light on their internal structure. One method of determining internal structure is seismic mapping, in a manner similar to that used in terrestrial geo-exploration, such as oil or mineral searches. The overall concept is to place an array of sensor packages on the asteroid surface, artificially initiate "activator" devices at appropriate locations on the surface, and relay the resulting seismic signals via an orbiting mother ship to the ground for analysis. The results of this proposal will determine how to anchor instruments to the surfaces of asteroids and make acoustically good contact so that seismic effects can be measured to determine composition, internal structure, material strengths, and other elastic properties that are needed for successful countermeasures for collision mitigation with an asteroid.

Technical Approach
Measurement Requirements
We are concerned here with estimating and bounding the sensor parameters likely to be needed for the required seismic measurements. These include the sensitivity, frequency range, and minimum measurement duration. To define the needed seismometer response, computations will be performed for a selected asteroid size and a selected charge size to determine the approximate ground motions. Recently, SwRI performed work for NASA/JSC in a preliminary study of seismology on 433 Eros (Walker and Sagebiel, 2003). There, the methods that had been outlined in previous work (e.g., Walker and Huebner, 2002, 2004) were applied to Eros. In the model, explosive charges of a given mass were set off at a selected location on models of the asteroid. Then, the ground motions were measured at some predefined gage locations. These calculations helped define the frequency response that would be desirable as well as the amplitude of the ground motions. Two different internal models of the asteroid were used, including one that had an internal fissure, showing the difference in response at the gages for two different internal structures. Figure 1 shows the model of Eros with the charge location and one of the seismometer locations marked. We plan to perform more computations comparing asteroid ground motion to charge size for different asteroid compositions and geometries, in order to suggest explosive charge masses and to determine the required seismometer response.
Figure 1. The 433 Eros model, with the charge location (23253) and a seismometer location (36981) marked. Other seismometer locations considered in the study are on the opposite side of the asteroid. The dark line denotes the location of an interior fissure used in one of the computations.
Delivery System
Details of the package delivery will depend on the particular mission involved, but we can distinguish three categories. In the first case the spacecraft is able to land on the asteroid; in the second the craft orbits and at best can hover over a selected spot on the body; and in the third, the S/C flies by the asteroid at some (probably relatively fast) predetermined speed. In preliminary studies we had considered the possibility of placing the sensor packages on the surface from a lander mission. This has the advantage of possibly allowing better knowledge of the sensor locations and more secure anchoring of each device. However, this technique would not allow "global" dispersal of the sensor packages over the asteroid, and it has been abandoned, at least for now. For the case of a fly-by mission, the delivery would simply be a matter of separating the packages from the S/C at a speed of only a few meters per second and allowing the initial speed to carry them to the target. Although simple, assuring that the packages actually hit the target may be difficult, and determining their locations on the asteroid would also present some problems. For the proposed study we will therefore consider ejection from an orbiting S/C by a spring-loaded or gas pressure device (e.g. a type of gun). Three important parameters that enter here, and that are related to other issues, are the number and the spacing of the sensors and seismic initiator device(s) and the determination of their locations on the asteroid surface. These are beyond the scope of the present proposal, but we will keep them in mind in our research. To some extent the delivery technique will depend on the method used for assuring good seismic contact, discussed in Section 2.3, below. But the delivery techniques we will consider in our study assume ejection from the parent S/C using springs (similar to, but simpler than, the planned release of the impactor on the Deep Impact mission, A'Hearn, 2003) or a gas pressure device (e.g., shot from a gun).
Seismic Contact
One of the most important issues confronting us is how to obtain adequate mechanical (i.e. seismic) contact between the sensor package and the NEA surface. Part of the difficulty is the very low gravity, the absence of an atmosphere (some adhesives require air curing), and the possibility of a dusty or sandy layer on the NEA surface. The measure adopted may depend in part on the particular object targeted, and especially on how much is known at the time about the surface. It will also depend on the details of the method of delivery. We will investigate three basic techniques:
a) Direct contact of the test device on the sample surface(s). This will represent a baseline, showing how good or bad the coupling is with no other special techniques. We will also use a range of device contact areas to understand the relation between coupling and area. (See Fig. 2 below.)
b) Bonding the package to the surface by means of an appropriate material. We use the term bonding loosely here, meaning only that good acoustic coupling can be achieved.
c) "Impaling" the sensor package onto the surface. The sensor package for these tests will be fitted with a sharp spike and shot from a gun-like ejector. (See Fig. 3 below.)

Direct Contact
For a baseline measurement we plan a simple case of the sensor package resting on the test surface, using no additional devices or material as in the two cases below. We envision a tetrahedron-shaped lander package of about 20 cm on each side. The size is somewhat arbitrary but is reasonable both for the proposed experiments and for a possible future flight system. A tetrahedron has the advantage that it will always rest on one of its four equal-area sides, so it would not require some technique for orienting it during delivery to a target, and it has a smaller volume than, say, a cube of the same contact area. Since one of the characteristics of an asteroid surface is the very low gravity, to simulate this in our research the sensor package would be suspended from above and allowed to barely touch the surface. In order to determine the connection of the sensor to the surface of the asteroid, either with or without some sort of adhesive, shaker tables will be used. A sand/dust box will be placed atop the shaker table. The seismic sensor will be hung above the shaker table to reduce the amount of downward force. Very compliant springs will be used as part of the mechanical connection to allow up-down motion of the seismometer without greatly affecting the reduced "gravitational" force. The shaker table will then be driven with pre-recorded motions and the response of the gages will be recorded. Through this approach it will be possible to compare the signal recorded in the reduced gravity to the original input driving signal. The effect of the reduced gravity will be studied by adjusting the vertical connection to produce less or more apparent gravity on the seismometer. By performing a number of these tests, both by setting the gage directly on the sand/dust surface and by connecting the seismometer through an adhesive of some sort to the surface, it will be possible to determine whether a coupling agent is required or whether the reduced gravity and friction are sufficient, and it will be possible to quantify the measured signal versus the input signal, useful for later interpreting real seismograms during a mission.
Figure 2 is a sketch of the type of test arrangement we plan to use for this part of the study. As mentioned above, we will perform the tests on at least two types of surface to simulate possible asteroid structure: sandy/gravelly and stony. The "sandbox" will be placed on the shaker table that will be used to excite the seismic waves in the test material.
Figure 2. Schematic drawing (not to scale) of the proposed test setup for the case of the sensor package resting on the surface. The package will be suspended from above to simulate the very low gravity at an asteroid. The "bed" will be mounted on a seismic shake table to provide a controllable, known input signal.

Bonding
A more general technique, which may be necessary even for sandy material, involves the use of some material to provide the seismic contact between the sensor package and the NEA surface after landing. This material would need a low enough viscosity and surface tension to flow out and cover an area of about 1 m², without evaporating. This area is somewhat arbitrary, but is based on experience with terrestrial seismology. We have investigated a wide variety of commercially available adhesives but have found none suitable. Typically, they either require the moisture in the air for setting or consist of two parts that must be mixed before application. We will continue to survey, for example, materials used for acoustic and ultrasonic measurements. However, we may not actually need to bond the package to the surface, so the use of something such as a low-viscosity silicone oil, which has a very low vapor pressure and readily wets siliceous material, might be a simple solution. The oil would thus stick and penetrate the interstices. We will then perform tests to select the best-performing one or two materials which meet our criteria. A secondary issue is the technique for releasing the material from the sensor package onto the surface, but we believe that standard devices in use for space applications, such as pyrotechnic or wax thermal actuators to break a seal, would be appropriate for this and would require a minimum of on-board resources. We will then need to test how well a sensor using the above techniques to contact a surface can measure seismic signals. We will arrange test setups consisting of a sensor
package (the same as in the direct contact case) and candidate adhesives on at least two types of surfaces: 1) sandy/gravelly, and 2) hard rock. We will use the same type of setup as for the direct contact case illustrated in Figure 2.

Impaling
One approach to obtaining a good seismic connection with the ground of the asteroid is to impale a spike in the surface, with the seismic gage attached to the top of the spike. For the purposes of the work proposed here, the questions are: 1) Does one get a good seismic connection with a spike launched from a distance against surfaces that we think are representative of asteroid surfaces? and 2) What impact velocity and impactor mass are required for the spike to firmly lodge in the surface material? These questions will be addressed by producing three surfaces that are thought to be representative of asteroids and then firing an adjustable-mass impactor with a spike into these surfaces out of a gas gun. Impacts against the surfaces will determine the velocities and masses required for the spike to firmly lodge in the three different surface materials. Once it is known what parameters are required for an impactor, it will then be possible to determine whether using an impactor with a spike is a feasible means of deployment for an actual mission, since it will be possible to determine the launcher requirements to obtain the given masses and velocities for the seismic package. Also, the test geometry will allow a determination of the alignment requirements for the spike to firmly lodge in the material: if one of the impactor geometries seems feasible, then a yaw study will be carried out to determine how well aligned the spike must be with the velocity vector upon impact to still obtain a good connection. This study will be helpful in determining guidance requirements. Some of the results of the NEAR (Near Earth Asteroid Rendezvous) mission (Veverka et al., 2001) indicate the possible presence of basins containing sandy or other loose material. An impaling technique should work quite well in such a case. Figure 3 shows sketches of concepts for an "impaler" type of sensor. On the right is the concept we would plan to use for the proposed tests, ejecting it with one of our gas guns. (See the description under Facilities, below.) The version on the left is an example of how that could be modified for a flight mission. The dimensions shown are examples, for illustrative purposes.
Figure 3. Sketch of concepts for an "impaler"-type sensor, to be ejected from a gas gun.

System Design
We plan to use an Applied MEMS, Inc. Si-Flex SF1500L low-noise accelerometer as the sensor. These devices are used extensively in the oil exploration industry, as well as for other applications. This sensor is small (< 1 inch on a side), has low mass (~7.5 g), and requires low power (~0.2 W). It is therefore well suited both for our proposed studies and for future flight missions. In particular, the small size will allow distribution of an array of several sensor packages over the surface of an asteroid. Appropriate power, control, and data electronics will be included with this accelerometer in the sensor packages to be used for the tests.
SUMMARY
We anticipate that at the end of the planned study we will have developed:
a) techniques for testing the delivery and anchoring of a small sensor package on an asteroid;
b) a strawman design of a sensor package that will be capable of scientifically useful seismic measurements on an asteroid;
c) one or more viable techniques for delivering the package(s) to the target;
d) at least one method for assuring good seismic contact between the sensor and the surface.
These results will allow subsequent detailed design and testing of a sensor package system that we could then propose for a flight to an asteroid, such as a mission to revisit Eros.

REFERENCES
1. A'Hearn, M. F. et al. (2003), The Deep Impact mission, Highlights of Astronomy, v. 13, IAU (in press).
2. Asphaug, E., S. J. Ostro, R. S. Hudson, D. J. Scheeres, and W. Benz (1998), Disruption of kilometer-sized asteroids by energetic collisions, Nature 393, 437-440.
3. Binzel, R. P., S. J. Bus, T. H. Burbine, L. E. Malcom (2001), Size dependence of near-Earth asteroid spectral properties: A comparison with space weathering models, BAAS 33, 1149.
4. Britt, D. T. and G. J. Consolmagno (2001), Modeling the structure of high porosity asteroids, Icarus 152, 134-139.
5. Britt, D. T., D. Yeomans, K. Housen, G. Consolmagno (2002), Asteroid density, porosity, and structure. In Asteroids III, W. F. Bottke Jr., A. Cellino, P. Paolicchi, and R. P. Binzel (eds), University of Arizona Press, Tucson, p. 485-500.
6. Cheng, A. F. (2004), Macroscopic voids in small asteroids, LPSC XXXV, 23.
7. Farnham, T. L. and A. L. Cochran (2002), A McDonald Observatory study of Comet 19P/Borrelly: Placing the Deep Space 1 observations into a broader context, Icarus, 160, 398-418.
8. Hilton, J. L. (2002), Asteroid masses and densities. In Asteroids III, W. F. Bottke Jr., A. Cellino, P. Paolicchi, and R. P. Binzel (eds), University of Arizona Press, Tucson, p. 103-112.
9. Huebner, W. F., J. M. Greenberg (2000), Needs for determining material strengths and bulk properties of NEOs, Planet. Space Sci. 48, 797-799.
10. Huebner, W. F., A. Cellino, A. F. Cheng, J. M. Greenberg (2001), NEOs: Physical properties. In International Seminars on Nuclear War and Planetary Emergencies, 25, 309-340.
11. Johnson, T. V. and J. D. Anderson (2003), Galileo's encounter with Amalthea, Geophys. Res. Abst. 5, 07902 (EGS-AGU spring meeting).
12. Rayman, M. D. et al. (2000), Results from the Deep Space 1 technology validation mission, Acta Astronautica, 47, 475.
13. Rivkin, A. S. et al. (2004), Diversity of types of hydrated minerals on C-class asteroids, 35th Lunar and Planetary Science Conference, March 15-19, 2004.
14. Richardson, J. E., H. J. Melosh and R. Greenberg (2004), The seismic effect of impacts on asteroid surface morphology: Early modeling results, LPSC XXXV, 23.
15. Rickman, H. et al. (1987), Estimates of masses, volumes and densities of short-period comet nuclei. In ESA Proceedings of the International Symposium on the Diversity and Similarity of Comets, pp. 471-481.
16. Tholen, D. J. (1984), Asteroid taxonomy from cluster analysis of photometry, PhD thesis, University of Arizona, Tucson.
17. Thomas, P. C., J. Veverka, S. Dermott (1986), Small satellites. In Satellites, J. A. Burns and M. S. Matthews (eds.), University of Arizona Press, p. 802-835.
18. Veverka, J., et al. (2001), The landing of the NEAR-Shoemaker spacecraft on asteroid 433 Eros, Nature 413, 390-393.
19. Walker, J. D. and W. F. Huebner (2002), Seismic investigations of asteroid and comet interiors. Workshop on Scientific Requirements for Mitigation of Hazardous Comets and Asteroids, Arlington, VA, Sept. 3-6, 2002.
20. Walker, J. D. and W. F. Huebner (2004), Loading sources for seismological investigations of near-Earth objects, Advances in Space Research 33, 1564-1569.
21. Walker, J. D. and E. J. Sagebiel (2003), "A Preliminary Study of Seismology on Eros," SwRI Report 18-07635 prepared for NASA/JSC, Southwest Research Institute, San Antonio, Texas, October 2003.
12. SEMINAR PARTICIPANTS
SEMINAR PARTICIPANTS

Professor Pradeep Aggarwal
International Atomic Energy Agency Vienna, Austria
Professor Amir I. Ajami
International Agriculture Programs University of Arizona Tucson, USA
Dr. Hussain Saleh Al-Shahristani
University of Surrey Guildford, UK

Dr. Scott Atran
Institut Jean Nicod CNRS Paris, France
Professor Aurelio Aureli
Department of Applied Geology University of Palermo Palermo, Italy
Dr. Lela Bakanidze
Department of Biosafety and Threat Reduction, NCDC Tbilisi, Georgia
Professor Abul Barkat
University of Dhaka Dhaka, Bangladesh
Professor William A. Barletta
Accelerator & Fusion Research Division Lawrence Berkeley National Laboratory Berkeley, USA
Professor J. Ray Bates
Department of Geophysics and DCESS Niels Bohr Institute for Astronomy, Physics and Geophysics University of Copenhagen Copenhagen,Denmark
Professor Isaac Ben-Israel
School of Government and Policy University of Tel-Aviv Tel-Aviv, Israel
Professor J. M. Borthagaray
Instituto Superior de Urbanismo University of Buenos Aires Buenos Aires, Argentina
Dr. Olivia Bosch
New Security Issues Programme Royal Institute of International Affairs London, UK
Dr. Vladimir B. Britkov
Information Systems Laboratory Institute for Systems Analysis Moscow, Russia
Professor Herbert Budka
Institute of Neurology University of Vienna Vienna, Austria
Dr. Franco Buonaguro
Fondazione Pascale Istituto Nazionale dei Tumori Naples, Italy
Dr. Diego Buriot
Communicable Diseases World Health Organisation Geneva. Switzerland
Dr. Gina M. Calderone
EA Science and Technology New York. USA
Dr. Salvatore Carubba
Department of Applied Geology University of Palermo Palermo, Italy
Dr. John P. Casciano
GrayStarVision, LLC Chantilly, USA
Dr. Alberto Cellino
Osservatorio Astronomico di Torino Pino Torinese, Italy
Professor Joseph Chahoud
Physics Department Bologna University Bologna, Italy
Dr. Clark R. Chapman
Southwest Research Institute Boulder, USA
Dr. Nathalie Charpak
Instituto Materno Infantil Bogotá, Colombia
Professor Robert Clark
Hydrology and Water Resources University of Arizona Tucson. USA
Dr. Socorro de Leon-Mendoza
Neonatology Unit
Jose Fabella Memorial Hospital Manila, Philippines

Professor Guy de Thé
Epidemiology of Oncogenic Viruses Institut Pasteur Paris, France
Dr. Carmen Difiglio
Energy Technology Policy Division International Energy Agency Paris, France
Dr. Mario Di Martino
Osservatorio Astronomico di Torino Turin, Italy
Professor Adam Driks
Department of Microbiology and Immunology Loyola University Medical Center Maywood, USA
Professor Christopher D. Ellis
Landscape Architecture and Urban Planning Texas A&M University College Station, USA
Dr. Lorne Everett
Stone & Webster Management Consultants The Shaw Group Inc. Baton Rouge, USA
Professor Baruch Fischhoff
Social & Decision Sciences Department Carnegie Mellon University Pittsburgh, USA
Dr. Robert Fox
Defence Correspondent and Historian London, UK
Dr. Bertil Galland
Writer and Historian Buxy, France
Dr. Richard L. Garwin
Thomas J. Watson Research Center IBM Research Division Yorktown Heights, USA
Professor Bernardino Ghetti
Department of Pathology & Laboratory Medicine Indiana University Indianapolis, USA
Dr. Raymond Goldstein
Space Science and Engineering Division Southwest Research Institute San Antonio, USA
Professor Alberto González-Pozo
Theory and Analysis Department Universidad Autónoma Metropolitana Xochimilco, Mexico
Dr. Balamurugan Gurusamy
Environmental Engineering Technical Division The Institute of Engineers
Kuala Lumpur, Malaysia

Dr. Munther J. Haddadin
Ministry of Water & Irrigation of the Hashemite Kingdom of Jordan Amman. Jordan
Dr. Alan W. Harris
DLR, Institute for Planetary Exploration Berlin, Germany
Professor Nigel Harris
Economics of the City University of London London, UK
Professor Pervez Hoodbhoy
Physics Department Quaid-e-Azam University Islamabad, Pakistan
Dr. Walter F. Huebner
Southwest Research Institute
San Antonio, USA
Dr. Christiane Huraux
Mother-infant HIV Transmission Consultant
Paris, France

Dr. Jafar Dhia Jafar
Crescent Petroleum Group Sharjah, United Arab Emirates
Dr. Rolf K. Jenny
Global Commission on International Migration Geneva, Switzerland
Dr. Ahmad Kamal
Ambassador (ret.) U. N. Institute for Training and Research New York. USA
Dr. Bradford Kay
Laboratory Capacity Development & Biosafety World Health Organisation/ CRS Office Lyon, France
Professor Barry Kellman
International Weapons Control Center Chicago, USA
Dr. M. Reza Khatami
Tehran University of Medical Sciences (TUMS) Tehran, Iran
Dr. Hisham Khatib
World Energy Council Amman, Jordan
Dr. Stephen J. Kowall
Idaho National Engineering and Environmental Laboratory Idaho Falls, USA
Dr. Vasily Krivokhiza
International Department Federal Assembly of the Russian Federation Moscow, Russia
Professor Valery Kukhar
Institute for Bio-organic Chemistry Academy of Sciences Kiev, Ukraine
Dr. Arun Kumar
Development Alternatives New Delhi, India

Professor Stephen Lau
Department of Architecture University of Hong Kong Pokfulam, Hong Kong
Professor Tsung-Dao Lee
Department of Physics Columbia University New York, USA
Professor Axel Lehmann
Institute for Technical Computer Sciences Universität der Bundeswehr München Neubiberg, Germany
Dr. Sally Leivesley
NewRisk Limited London, UK
Mr. Ronald Linsky
National Water Research Institute Fountain Valley, USA
Professor Sergio Martellucci
Physics & Energy Science & Technology Università degli Studi di Roma "Tor Vergata" Rome, Italy
Dr. Akira Miyahara
National Institute for Fusion Science Tokyo, Japan
Dr. Alan L. Moore
Redstone Arsenal Alabama, USA
Professor El Hadji Abib Ngom
Ecole Supérieure Polytechnique Dakar, Senegal
Dr. Thu Nga Nguyen
Department of Pediatrics Vietnam Sweden Uongbi Hospital Uongbi, Vietnam
Dr. Jef Ongena
Ecole Royale Militaire Plasma Physics Laboratory Brussels, Belgium

Professor Garth W. Paltridge
Institute of Antarctic and Southern Ocean Studies (Ret.) University of Tasmania Hobart, Australia
Professor Donato Palumbo
World Laboratory Centre Fusion Training Programme Palermo, Italy
Professor Stefano Parmigiani
Evolutional and Functional Biology University of Parma Parma, Italy
Dr. John S. Perry
National Research Council (ret.) Alexandria, USA
Professor Margaret Petersen
Hydrology & Water Resources University of Arizona Tucson, USA
Professor Andrei Piontkovsky
Strategic Studies Centre Moscow, Russia
Professor Juras Pozela
Lithuanian Academy of Sciences Vilnius, Lithuania
Dr. Elizabeth Prescott
US Senate Committee of Health, Labor & Pensions Washington, USA

Professor Vittorio Ragaini
Chemical Physics and Electro-Chemistry University of Milano Milan, Italy
Dr. Maura Ricketts
Blood Safety and Health Care Acquired Infections Ottawa, Canada

Professor Zenonas Rudzikas
Theoretical Physics & Astronomy Institute Lithuanian Academy of Sciences Vilnius, Lithuania
Dr. Juan Ruiz
Department of Pediatrics San Ignacio Hospital Santafé de Bogotá, Colombia
Dr. Ali Safaeinili
Jet Propulsion Laboratory Pasadena. USA
Dr. Reynold Salerno
Biosecurity Program - International Security Center Sandia National Laboratories Albuquerque,USA
Dr. Mauro Saviola
Mauro Saviola Group Mantova, Italy
Professor Hiltmar Schubert
Fraunhofer Institute for Chemical Technology Pfinztal, Germany
Dr. Russell L. Schweickart
B612 Foundation Tiburon, USA
Professor Geraldo Gomes Serra
NUTAU University of São Paulo São Paulo, Brazil

Professor K.C. Sivaramakrishnan
Centre for Policy Research New Delhi, India
Professor Soroosh Sorooshian
Department of Civil and Environmental Engineering University of California at Irvine Irvine, USA
Professor William A. Sprigg
Institute of Atmospheric Physics University of Arizona Tucson, USA
Dr. Bruce Stram
BST Ventures Houston, USA
Dr. Terence Taylor
International Institute for Strategic Studies - US Washington, USA
Dr. Andrew F.B. Tompson
Geosciences and Environmental Technologies Lawrence Livermore National Laboratory Livermore, USA
Professor Vitali Tsygichko
Institute for System Studies Russian Academy of Sciences Moscow, Russia
Dr. Frederick vom Saal
Division of Biological Sciences University of Missouri Columbia, USA
Professor François Waelbroeck
World Laboratory Centre Fusion Training Programme St. Amandsberg, Belgium
Dr. Henning Wegener
Ambassador of Germany (ret.) Information Security PMP Madrid, Spain
Dr. Jody Westby
The Work-IT Group Mclean, USA
Dr. Tom M.L. Wigley
National Center for Atmospheric Research Boulder. USA
Professor Robert G. Will
National CJD Surveillance Unit Western General Hospital Edinburgh, UK
Professor Richard Wilson
Department of Physics Harvard University Cambridge, USA
Dr. Georg Witschel
Federal Government Commissioner for Combating International Terrorism Ministry of Foreign Affairs Berlin, Germany
Professor Aaron Yair
Department of Geography Mount Scopus Campus The Hebrew University Jerusalem, Israel
Dr. Hajime Yano
Department of Planetary Science
The Graduate University for Advanced Studies Kanagawa, Japan
Dr. Donald K. Yeomans
NASA Near-Earth Object Program Jet Propulsion Laboratory Pasadena, USA
Dr. Rolf K. Zetterstrom
Acta Paediatrica Stockholm, Sweden
Professor Guangzhao Zhou
Standing Committee of the National People’s Congress The China Association for Science & Technology
Beijing, PRC

Professor Antonino Zichichi
CERN, Geneva, Switzerland and University of Bologna, Italy