Environmental Physics
Environmental Physics Sustainable Energy and Climate Change
Third Edition
EGBERT BOEKER and RIENK VAN GRONDELLE VU University Amsterdam
A John Wiley & Sons, Ltd., Publication
This edition first published 2011
© 2011 John Wiley & Sons, Ltd

Registered office
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.

The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of fitness for a particular purpose. The advice and strategies contained herein may not be suitable for every situation. In view of ongoing research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of experimental reagents, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each chemical, piece of equipment, reagent, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. The fact that an organization or Website is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Website may provide or recommendations it may make. Further, readers should be aware that Internet Websites listed in this work may have changed or disappeared between when this work was written and when it is read. No warranty may be created or extended by any promotional statements for this work. Neither the publisher nor the author shall be liable for any damages arising herefrom.

Library of Congress Cataloging-in-Publication Data

Boeker, Egbert.
  Environmental physics : sustainable energy and climate change / Egbert Boeker and Rienk van Grondelle. – 3rd ed.
    p. cm.
  Includes bibliographical references and index.
  ISBN 978-0-470-66675-3 (cloth) – ISBN 978-0-470-66676-0 (pbk.)
  1. Environmental sciences. 2. Physics. 3. Atmospheric physics. I. Grondelle, Rienk van. II. Title.
  GE105.B64 2011
  628–dc22
  2011011525

A catalogue record for this book is available from the British Library.

Print ISBN: Cloth 978-0-470-66675-3, Paper 978-0-470-66676-0
ePDF ISBN: 978-1-119-97418-5
oBook ISBN: 978-1-119-97417-8
ePub ISBN: 978-1-119-97519-9
Mobi ISBN: 978-1-119-97520-5

Set in 10/12pt Times by Aptara Inc., New Delhi, India.
Contents

Preface
Acknowledgements

1 Introduction
  1.1 A Sustainable Energy Supply
  1.2 The Greenhouse Effect and Climate Change
  1.3 Light Absorption in Nature as a Source of Energy
  1.4 The Contribution of Science: Understanding, Modelling and Monitoring
  Exercises
  References

2 Light and Matter
  2.1 The Solar Spectrum
    2.1.1 Radiation from a Black Body
    2.1.2 Emission Spectrum of the Sun
  2.2 Interaction of Light with Matter
    2.2.1 Electric Dipole Moments of Transitions
    2.2.2 Einstein Coefficients
    2.2.3 Absorption of a Beam of Light: Lambert-Beer's Law
  2.3 Ultraviolet Light and Biomolecules
    2.3.1 Spectroscopy of Biomolecules
    2.3.2 Damage to Life from Solar UV
    2.3.3 The Ozone Filter as Protection
  Exercises
  References

3 Climate and Climate Change
  3.1 The Vertical Structure of the Atmosphere
  3.2 The Radiation Balance and the Greenhouse Effect
    3.2.1 Simple Changes in the Radiation Balance
    3.2.2 Radiation Transfer
    3.2.3 A Simple Analytical Model
    3.2.4 Radiative Forcing and Global Warming
    3.2.5 The Greenhouse Gases
  3.3 Dynamics in the Climate System
    3.3.1 Horizontal Motion of Air
    3.3.2 Vertical Motion of Ocean Waters
    3.3.3 Horizontal Motion of Ocean Waters
  3.4 Natural Climate Variability
  3.5 Modelling Human-Induced Climate Change
    3.5.1 The Carbon Cycle
    3.5.2 Structure of Climate Modelling
    3.5.3 Modelling the Atmosphere
    3.5.4 A Hierarchy of Models
  3.6 Analyses of IPCC, the Intergovernmental Panel on Climate Change
  3.7 Forecasts of Climate Change
  Exercises
  References

4 Heat Engines
  4.1 Heat Transfer and Storage
    4.1.1 Conduction
    4.1.2 Convection
    4.1.3 Radiation
    4.1.4 Phase Change
    4.1.5 The Solar Collector
    4.1.6 The Heat Diffusion Equation
    4.1.7 Heat Storage
  4.2 Principles of Thermodynamics
    4.2.1 First and Second Laws
    4.2.2 Heat and Work; Carnot Efficiency
    4.2.3 Efficiency of a 'Real' Heat Engine
    4.2.4 Second Law Efficiency
    4.2.5 Loss of Exergy in Combustion
  4.3 Idealized Cycles
    4.3.1 Carnot Cycle
    4.3.2 Stirling Engine
    4.3.3 Steam Engine
    4.3.4 Internal Combustion
    4.3.5 Refrigeration
  4.4 Electricity as Energy Carrier
    4.4.1 Varying Grid Load
    4.4.2 Co-Generation of Heat and Electricity
    4.4.3 Storage of Electric Energy
    4.4.4 Transmission of Electric Power
  4.5 Pollution from Heat Engines
    4.5.1 Nitrogen Oxides NOx
    4.5.2 SO2
    4.5.3 CO and CO2
    4.5.4 Aerosols
    4.5.5 Volatile Organic Compounds VOC
    4.5.6 Thermal Pollution
    4.5.7 Regulations
  4.6 The Private Car
    4.6.1 Power Needs
    4.6.2 Automobile Fuels
    4.6.3 Three-Way Catalytic Converter
    4.6.4 Electric Car
    4.6.5 Hybrid Car
  4.7 Economics of Energy Conversion
    4.7.1 Capital Costs
    4.7.2 Learning Curve
  Exercises
  References

5 Renewable Energy
  5.1 Electricity from the Sun
    5.1.1 Varying Solar Input
    5.1.2 Electricity from Solar Heat: Concentrating Solar Power CSP
    5.1.3 Direct Conversion of Light into Electricity: Photovoltaics PV
  5.2 Energy from the Wind
    5.2.1 Betz Limit
    5.2.2 Aerodynamics
    5.2.3 Wind Farms
    5.2.4 Vertical Wind Profile
    5.2.5 Wind Statistics
    5.2.6 State of the Art and Outlook
  5.3 Energy from the Water
    5.3.1 Power from Dams
    5.3.2 Power from Flowing Rivers
    5.3.3 Power from Waves
    5.3.4 Power from the Tides
  5.4 Bio Energy
    5.4.1 Thermodynamics of Bio Energy
    5.4.2 Stability
    5.4.3 Solar Efficiency
    5.4.4 Energy from Biomass
  5.5 Physics of Photosynthesis
    5.5.1 Basics of Photosynthesis
    5.5.2 Light-Harvesting Antennas
    5.5.3 Energy Transfer Mechanism
    5.5.4 Charge Separation
    5.5.5 Flexibility and Disorder
    5.5.6 Photoprotection
    5.5.7 Research Directions
  5.6 Organic Photocells: the Grätzel Cell
    5.6.1 The Principle
    5.6.2 Efficiency
    5.6.3 New Developments and the Future
    5.6.4 Applications
  5.7 Bio Solar Energy
    5.7.1 Comparison of Biology and Technology
    5.7.2 Legacy Biochemistry
    5.7.3 Artificial Photosynthesis
    5.7.4 Solar Fuels with Photosynthetic Microorganisms: Two Research Questions
    5.7.5 Conclusion
  Exercises
  References

6 Nuclear Power
  6.1 Nuclear Fission
    6.1.1 Principles
    6.1.2 Four Factor Formula
    6.1.3 Reactor Equations
    6.1.4 Stationary Reactor
    6.1.5 Time Dependence of a Reactor
    6.1.6 Reactor Safety
    6.1.7 Nuclear Explosives
  6.2 Nuclear Fusion
  6.3 Radiation and Health
    6.3.1 Definitions
    6.3.2 Norms on Exposure to Radiation
    6.3.3 Normal Use of Nuclear Power
    6.3.4 Radiation from Nuclear Accidents
    6.3.5 Health Aspects of Fusion
  6.4 Managing the Fuel Cycle
    6.4.1 Uranium Mines
    6.4.2 Enrichment
    6.4.3 Fuel Burnup
    6.4.4 Reprocessing
    6.4.5 Waste Management
    6.4.6 Nonproliferation
  6.5 Fourth Generation Nuclear Reactors
  Exercises
  References

7 Dispersion of Pollutants
  7.1 Diffusion
    7.1.1 Diffusion Equation
    7.1.2 Point Source in Three Dimensions in Uniform Wind
    7.1.3 Effect of Boundaries
  7.2 Dispersion in Rivers
    7.2.1 One-Dimensional Approximation
    7.2.2 Influence of Turbulence
    7.2.3 Example: A Calamity Model for the Rhine River
    7.2.4 Continuous Point Emission
    7.2.5 Two Numerical Examples
    7.2.6 Improvements
    7.2.7 Conclusion
  7.3 Dispersion in Groundwater
    7.3.1 Basic Definitions
    7.3.2 Darcy's Equations
    7.3.3 Stationary Applications
    7.3.4 Dupuit Approximation
    7.3.5 Simple Flow in a Confined Aquifer
    7.3.6 Time Dependence in a Confined Aquifer
    7.3.7 Adsorption and Desorption of Pollutants
  7.4 Mathematics of Fluid Dynamics
    7.4.1 Stress Tensor
    7.4.2 Equations of Motion
    7.4.3 Newtonian Fluids
    7.4.4 Navier-Stokes Equation
    7.4.5 Reynolds Number
    7.4.6 Turbulence
  7.5 Gaussian Plumes in the Air
    7.5.1 Statistical Analysis
    7.5.2 Continuous Point Source
    7.5.3 Gaussian Plume from a High Chimney
    7.5.4 Empirical Determination of the Dispersion Coefficients
    7.5.5 Semi-Empirical Determination of the Dispersion Parameters
    7.5.6 Building a Chimney
  7.6 Turbulent Jets and Plumes
    7.6.1 Dimensional Analysis
    7.6.2 Simple Jet
    7.6.3 Simple Plume
  Exercises
  References

8 Monitoring with Light
  8.1 Overview of Spectroscopy
    8.1.1 Population of Energy Levels and Intensity of Absorption Lines
    8.1.2 Transition Dipole Moment: Selection Rules
    8.1.3 Linewidths
  8.2 Atomic Spectra
    8.2.1 One-Electron Atoms
    8.2.2 Many-Electron Atoms
  8.3 Molecular Spectra
    8.3.1 Rotational Transitions
    8.3.2 Vibrational Transitions
    8.3.3 Electronic Transitions
  8.4 Scattering
    8.4.1 Raman Scattering
    8.4.2 Resonance Raman Scattering
    8.4.3 Rayleigh Scattering
    8.4.4 Mie Scattering
    8.4.5 Scattering in the Atmosphere
  8.5 Remote Sensing by Satellites
    8.5.1 ENVISAT Satellite
    8.5.2 SCIAMACHY's Operation
    8.5.3 Analysis
    8.5.4 Ozone Results
  8.6 Remote Sensing by Lidar
    8.6.1 Lidar Equation and DIAL
    8.6.2 Range-Resolved Cloud and Aerosol Optical Properties
  Exercises
  References

9 The Context of Society
  9.1 Using Energy Resources
    9.1.1 Energy Consumption
    9.1.2 Energy Consumption and Resources
    9.1.3 Energy Efficiency
    9.1.4 Comparing Energy Resources
    9.1.5 Energy Options
    9.1.6 Conclusion
  9.2 Fresh Water
  9.3 Risks
    9.3.1 Small Concentrations of Harmful Chemicals
    9.3.2 Acceptable Risks
    9.3.3 Small Probability for a Large Harm
    9.3.4 Dealing with Uncertainties
  9.4 International Efforts
    9.4.1 Protection of the Ozone Layer
    9.4.2 Protection of Climate
  9.5 Global Environmental Management
    9.5.1 Self-Organized Criticality
    9.5.2 Conclusion
  9.6 Science and Society
    9.6.1 Nature of Science
    9.6.2 Control of Science
    9.6.3 Aims of Science
    9.6.4 A New Social Contract between Science and Society
  Exercises
  Social questions
  References

Appendix A: Physical and Numerical Constants
Appendix B: Vector Algebra
Appendix C: Gauss, Delta and Error Functions
Appendix D: Experiments in a Student's Lab
Appendix E: Web Sites
Appendix F: Omitted Parts of the Second Edition
Index
Preface
This third edition of the textbook 'Environmental Physics' has been thoroughly revised to give more focus on sustainable energy and climate change. As fossil fuels and nuclear power will be with us for many years to come, the physical and environmental aspects of these ways of energy conversion are given ample attention as well.

The textbook is suitable for second-year students in physics and related subjects such as physical chemistry and geophysics. It assumes a basic knowledge of physics and mathematics, but all equations are derived from first principles and explained in a physical way. Therefore the book may serve as an introduction to physics in the context of societal problems like energy supply, pollution, climate change and finite resources of fossil fuels and uranium. Even where parts of the text will be familiar from other courses, it is advisable to read those parts in order to grasp the way the material is presented here.

In some places we have included parts that may be too 'heavy' for most second-year students, but will not easily be found in the literature. They will be suitable for the final undergraduate year or for a Master course. They are indicated by ↓ at the beginning and ↑ at the end. Usually we indicate them in the introductory lines of each section as well.

As in any text, later chapters refer to concepts and properties discussed in earlier chapters. But the book is written such that the reader may read any chapter on its own. Therefore the teacher may select a few chapters as a module or for background reading.

A distinguishing feature of the text is the discussion of spectroscopy and spectroscopic methods, again from basic concepts, as a crucial means to quantitatively analyse and monitor the condition of the environment, the factors determining climate change and all aspects of energy conversion.
The emphasis in the book is on physics, on the concepts and principles that help in understanding the ways to produce energy efficiently or to mitigate climate change. Extra attention is given to photosynthesis, not only because of its importance in the field of renewable energy, but also because a comprehensive physics approach is lacking in the literature. With regard to international treaties and conventions, we discuss the most important ones for the subject matter of this book: the climate convention (Chapter 9) and, more briefly, the non-proliferation treaty (Chapter 6). The current political situation is heavily influenced by the internal situation in major actors like the United States, Russia and China. That may change rapidly. As the need for a guaranteed energy supply is a constant in international relations, this book should give the reader enough background to judge the policy which his or her country is putting forward.
The structure of the book follows its emphasis. After an introduction, solar radiation is discussed as the input for most of the renewable energies (Chapter 2), next the factors influencing climate change (Chapter 3), then energy from fossil fuels with an excursion to the private car (Chapter 4) and renewable energy (Chapter 5). Because of its importance and the ongoing discussion of safety we have devoted a separate chapter to nuclear power (Chapter 6). Most if not all ways of energy conversion produce pollution as an inevitable side effect; therefore we discuss transport of pollutants in Chapter 7, albeit less detailed than in the second edition of this book. Monitoring is the subject of Chapter 8 and, finally, in Chapter 9 the social aspects are discussed. In the Appendices some helpful information is compiled. The changed emphasis of this third edition results in many new sections and paragraphs and the omission or condensing of several sections of the earlier editions. For the interested reader these parts of the second edition may be downloaded free of charge from the website http://www.few.vu.nl/environmentalphysics. From this website also a description of environmental experiments for a student’s lab may be downloaded, described briefly in Appendix D. The site also contains a few computer codes that will illuminate some points made in the text. Finally, any mistakes or omissions we discover will be put on the site as well. Almost all equations in this book will be derived from first principles. After a, perhaps lengthy, derivation we often analyze the resulting formula by looking at the units on both sides of the equality sign (=). This provides more physical insight than comparing dimensions length, mass and time only. This same procedure was followed in the authors’ book ‘Environmental Science, Physical Principles and Applications’ which is meant for a general science readership; there it worked very well. 
For clarity, units are given between square brackets [ ]; dimensions are also given between these brackets. In practice no confusion will arise. Finally we mention some practical points. References are given between square brackets [ ] and printed at the end of each chapter. Exercises are also put there, and for teachers the publisher has a teachers' manual available with the worked-out solutions. In the final chapter 'social questions' ask for a considered opinion of the student. The student of course is entitled to his or her own opinion, so there is no 'right' answer. But the arguments should be physically sound. In the teacher's manual some arguments on a certain statement will be given, without the pretension of presenting the ultimate truth.
Amsterdam, 20 January 2011
Acknowledgements
The authors acknowledge help and suggestions as follows.

Chapter 2: Dr Annemarie Huijser for her contribution on melanins.

Chapter 3: Dr Schuurmans for his suggestions on the first edition of the book and for suggesting Table 3.5. The authors are indebted to Dr Aad van Ulden for his suggestions; Dr Rob van Dorland provided a simple model for radiative transfer, discussed in Section 3.2.2, which is gratefully acknowledged. IPCC staff kindly provided us with high-resolution versions of Plates 1 and 2.

Chapter 4: Dr Andriesse for pointing out the real heat engine of Section 4.2.3.

Chapter 5: Dr Jo Hermans for suggesting an alternative explanation for the lift force in Section 5.2.2. We are grateful to Drs Barzda, Cisek, Roszak, Nield, Nelson, Schulten, Magnuson, Cogdell, T. Moore, Dekker, Boekema, Novoderezhkin and Hambourger for providing us with high-resolution pictures of their work, which we used in our colour plates.

Chapter 6: Dr Hugo van Dam for valuable suggestions to improve the text of the second edition; Dr Adri Bos for suggestions on Section 6.3.2; Dr Frank Barnaby for suggestions on Sections 6.1.7 and 6.4.6; Dr Wim Hogervorst for his helpful contribution to Section 6.4.2; Dr Joern Harry for helping us with a description of the Fukushima calamity in Japan, which was added in proof.

Chapter 8: Dr Ilse Aben for introducing us to the secrets of SCIAMACHY and correcting our draft; Dr Apituley and Dr Donovan for providing a text and illustrations for backscatter lidar in Section 8.6.2.

Chapter 9: Dr John Grin for his valuable social insights.

Prof Karel Wakker is gratefully acknowledged for his careful reading of Chapters 1 to 4 and his many suggestions. The authors are pleased with the way in which Dr Bart van Oort and Ademir Arapovic patiently have drawn and redrawn the many figures in this book, and acknowledge the help of Joris Snellenburg with the colour plates.
1 Introduction

Physical science is a fascinating subject. Mechanics, quantum physics, electrodynamics, to name a few, present a coherent picture of physical reality. The present book aims at inspiring students with the enthusiasm the authors experienced while working in the field.

The work of the physical scientist always takes place in a social context. Ultimately, the scientist has to contribute to society, sometimes by increasing our knowledge of fundamental processes, more often by employing his or her skills in industry, a hospital, a consultancy firm or teaching. In the majority of cases the scientist contributes by tackling societal problems with a physics aspect or by educating students in understanding the strengths and limitations of the physics approach.

The text Environmental Physics focuses on two problems where physical scientists can contribute to make them manageable. The first is the need for a safe and clean supply of energy now and in the future, the second is the way to deal with the forecasted climate change. The major part of the text deals particularly with the physics aspects of these two problems. A brief discussion of the social context is given below with a section on the contribution of science. Science can point out the physical consequences of political choices, or of not making choices, but the decisions themselves should be taken through the political process. A more comprehensive discussion of the societal context is given in the last chapter of this book.
1.1 A Sustainable Energy Supply

The concept 'sustainable development' became well known through the work of the World Commission on Environment and Development, acting by order of the General Assembly of the United Nations. In 1987 it defined sustainable development as ([1], p. 8):

Meeting the needs of the present generations without compromising the ability of future generations to meet their own needs.
This is not a physics definition, as the meaning of ‘needs’ is rather vague. Does it imply an expensive car, a motorboat and a private plane for everybody? The definition leaves this open. In the political arena the precise meaning of ‘needs’ is still to be decided. Still, the concept forces one to take into account the needs of future generations and rejects squandering our resources. Indeed, sustainable development, the World Commission emphasized, implies that we should be careful with natural resources and protect the natural environment. Since 1987 many governments have put the goal of a ‘sustainable society’ in their policy statements. Besides protection of the environment and a safe energy supply it then comprises objectives like good governance, social coherence, a reasonable standard of living. In this book we focus on a sustainable energy supply and adapt the 1987 definition as follows: A sustainable energy supply will meet the energy needs of the present generations without compromising the ability of future generations to meet their own energy needs. The environmental consequences of energy conversion should be such that present and future generations are able to cope.
As with the previous definition, the precise meaning of this statement is the subject of political debate. From a physics point of view one may make the following comments:

1. An energy supply based on fossil fuels is not sustainable. The resources of coal, oil and gas are limited, as will be illustrated in Chapter 9. So in time other energy sources will be required. In the meantime the environmental consequences of fossil fuel combustion should be controlled.

2. Renewable energy sources like solar energy, wind energy or bio fuels may be sustainable. Their ultimate source, solar irradiation, is inexhaustible on a human time scale. To be sure of the sustainability of renewable energy sources, one has to perform a life-cycle analysis: analyse the use of energy and materials of the equipment and their environmental consequences from cradle to grave. This book will provide building blocks for such an analysis.

3. It is under debate whether nuclear fission power is sustainable. The resources of ²³⁵U, the main nuclear fuel, are large, but limited. Also, during the fission process many radioactive materials are produced. Proponents of nuclear power argue that most of these 'waste' materials may be used again as fuel and the remainder may be stored; in practice, it is claimed, nuclear fission power would be 'virtually sustainable'. Power from nuclear fusion may be sustainable, but its commercial exploitation is still far off.

Governments all over the world are stimulating renewable energies, not only because of their sustainability. Another strong reason is the security of energy supply, which requires diversification of energy sources. Fossil fuels, especially oil and gas, are unevenly distributed over the world. Industrial countries do not want to be too dependent on the willingness of other countries to supply them with oil and gas.
One may put forward that solar irradiation is unevenly distributed as well, but even at moderate latitudes the irradiation is substantial and the wind blows everywhere. Apart from these considerations, the combustion of fossil fuels produces CO₂, which has climatic consequences, to be discussed in the next section.
Figure 1.1 Solar radiation is entering the atmosphere from the left, S [Wm⁻²]. A fraction a, called the albedo, is reflected back.
1.2 The Greenhouse Effect and Climate Change
In the simplest calculation the temperature of the earth is determined by the solar radiation coming in and the infrared (IR) radiation leaving the earth, or

energy in = energy out    (1.1)

The amount of radiation entering the atmosphere per [m²] perpendicular to the radiation is called S, the total solar irradiance or solar constant in units [Wm⁻²] = [J s⁻¹ m⁻²]. Looking at the earth from outer space it appears that a fraction a, called the albedo, is reflected back. As illustrated in Figure 1.1, an amount (1 − a)S penetrates down to the surface. With earth radius R the left-hand side of Eq. (1.1) reads (1 − a)SπR². In order to make an estimate of the right-hand side of Eq. (1.1) we approximate the earth as a black body with temperature T. A black body is a hypothetical body, which absorbs all incoming radiation, acquires a certain temperature T and emits its radiation according to simple laws, to be discussed in Chapter 2. At present the student should accept that according to the Stefan–Boltzmann law a black body produces outgoing radiation with intensity σT⁴ [Wm⁻²]. The total outgoing radiation from the earth then becomes σT⁴ × 4πR². Substitution in Eq. (1.1) gives:

(1 − a)S × πR² = σT⁴ × 4πR²    (1.2)

or

(1 − a)S/4 = σT⁴    (1.3)

Numerical values of σ, R and S are given in Appendix A. For the albedo a one finds from experiments a = 0.30. Substitution gives T = 255 [K], which is way below the true average earth surface temperature of 15 [°C] = 288 [K]. The difference of 33 [°C] is due to the greenhouse effect, for which the earth's atmosphere is responsible. As will be shown later, the emission spectrum of the sun peaks at a wavelength of 0.5 [μm], whereas the earth's emission spectrum peaks at 10 [μm], in the far IR. Several gases in our atmosphere, the so-called greenhouse gases, absorb strongly in the IR. In that way a large part of the solar radiation reaches the surface, but the emitted IR radiation has difficulty in escaping. The same effect happens in a greenhouse, hence the name. It will be discussed later how human activities contribute to the greenhouse effect by increasing the concentration of greenhouse gases like CO₂, tropospheric O₃, N₂O, CH₄ and many HFCs. The increase of their concentrations necessarily leads to an increase in the surface temperature of the earth and consequently to climate change. It will be a challenge
to science, technology and policy to find ways to slow down the increase in greenhouse gas concentration.
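The radiation balance of Eq. (1.3) and the spectral peaks quoted above can be checked with a few lines of Python. This is a sketch: the numerical values for S, σ and Wien's constant b are the standard ones, supplied here directly rather than taken from Appendix A.

```python
# Effective (black-body) temperature of the earth from Eq. (1.3):
# (1 - a) S / 4 = sigma T^4  =>  T = [(1 - a) S / (4 sigma)]^(1/4)
S = 1361.0       # total solar irradiance [W m^-2] (approximate standard value)
a = 0.30         # albedo, from experiment
sigma = 5.67e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

T_eff = ((1 - a) * S / (4 * sigma)) ** 0.25
print(f"Effective temperature: {T_eff:.0f} K")   # ~255 K, vs. 288 K observed

# Wien's displacement law, lambda_max = b / T, reproduces the peak
# wavelengths mentioned in the text:
b = 2.898e-3     # Wien's displacement constant [m K]
print(f"Sun  (~5800 K): peak at {b / 5800 * 1e6:.1f} um")   # ~0.5 um
print(f"Earth ( 288 K): peak at {b / 288 * 1e6:.1f} um")    # ~10 um
```

The 33 [K] gap between the computed 255 [K] and the observed 288 [K] is exactly the greenhouse effect discussed above.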
Example 1.1: Geothermal Heat Flow

As an illustration of the physics of this section, consider the heat flow F from the interior of the earth reaching the earth's surface. This flow originates from decay of radioactive elements in the earth's interior and the friction between the tectonic plates caused by their 'tidal motion'. The flow is estimated¹ as F ≈ 42 × 10¹² [W]. What is the flow per [m²]? Suppose the solar influx were not present; calculate the equilibrium surface temperature of the earth arising from the internal heat flow only.

Answer

The heat flow per [m²] is found with F/(4πR²), where R is the radius of the earth. With the data from Appendix A this becomes 0.082 [Wm⁻²]. If there is no solar input the internal flow is the only source. In a steady-state situation the inflow from the interior equals the outflow at temperature T: energy in = energy out. Following the Stefan–Boltzmann law we have F/(4πR²) = σT⁴, which gives T = 35 [K]. So, without the influx of solar energy the temperature of the earth's surface would be only about 35 [K]. As we know well, the present, pleasant temperature on earth is therefore due to the sun.
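The arithmetic of Example 1.1 can be verified in the same way. Again a sketch: the earth radius R and the Stefan–Boltzmann constant σ are standard values, assumed here in place of Appendix A.

```python
import math

F = 42e12        # geothermal heat flow from the interior [W]
R = 6.371e6      # earth radius [m]
sigma = 5.67e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

# Flow per square metre of surface, then the steady-state black-body
# temperature from F / (4 pi R^2) = sigma T^4:
flux = F / (4 * math.pi * R**2)
T = (flux / sigma) ** 0.25
print(f"Flux: {flux:.3f} W/m^2, equilibrium T = {T:.0f} K")  # ~0.082 W/m^2, ~35 K
```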
1.3 Light Absorption in Nature as a Source of Energy
The sun also provides energy for plants, algae and so-called cyanobacteria by photosynthesis, a mechanism which will be discussed extensively in Sections 5.4 to 5.7. Figure 1.2 shows the solar absorption by the important pigments of several photosynthetic organisms. All the light energy absorbed by the pigments can be used for photosynthesis. For comparison the solar spectrum as it enters the top of the atmosphere is shown. At the earth's surface it is more structured (as we will find in Figure 2.2), but the message remains that the pigments shown absorb in important parts of the spectrum. Chlorophyll-a, the major pigment of higher plants, algae and cyanobacteria, absorbs red and blue light. In combination with carotenoids such as β-carotene they provide plants with their typical green colour. The major photosynthetic organisms of the world oceans, the cyanobacteria (often called blue-green algae), absorb sunlight by a specialized protein called the phycobilisome, which contains as major pigments phycoerythrin (absorbing around 580 [nm]), phycocyanin (absorbing around 620 [nm]) and allophycocyanin (absorbing around 650 [nm]). As shown in Figure 1.2, bacteriochlorophyll-a and bacteriochlorophyll-b are the major pigments of two classes of photosynthetic bacteria, absorbing in the near IR part of the solar spectrum.

¹ In this book data without a reference are taken from refs. [2] and [3], in this case from [2], p. 92.
Figure 1.2 Light absorption by important pigments of a variety of photosynthetic organisms in arbitrary units (a.u.) as a function of wavelength. For comparison the solar spectrum is sketched, adapted to the same scale.
1.4 The Contribution of Science: Understanding, Modelling and Monitoring
The first question for a physical scientist is always, 'Why are things as they are?' For example, why is the surface temperature on earth 33 [K] higher than calculated by means of Eq. (1.3)? Understanding starts in this case by indicating the role of the earth's atmosphere and the greenhouse effect. Then one has to proceed and understand the 33 [K] quantitatively. In Chapter 3 we show that an analysis of the composition of the atmosphere and the radiative properties of the atmosphere and the surface allows us to give a quantitative explanation. A second question would be how the greenhouse effect is influenced by human activities. Here one needs a model to project these activities into the future, calculate the emission of gases and other factors influencing climate, and finally calculate the future greenhouse effect. One has to check how the greenhouse gases behave over time and therefore one has to monitor their concentrations continuously. Comparisons of predictions with reality will increase our understanding of climate and climate change. One will find this continuous loop of understanding, modelling, monitoring and increased understanding in many parts of science, if not all. In order to increase the production of biofuels from crops or algae, for example, one needs to increase the efficiency of photosynthesis. Again the first step is to understand the process (Sections 5.4 and 5.5). The next step is to construct a model which reproduces the real situation with parameters that may be changed by human interference. The third step is to make a pilot with parameters
which are supposed to increase the bio-yield of the crop, monitor all external influences and check the results against the model. This, in fact, is the way in which technology works. A realistic model needs to take into account many variables in a consistent and coherent way. That is the subject of computer science, a field which is outside the scope of this book. The present text does provide the student with the building blocks of modelling in the realm of climate and sustainable energy and makes it possible to judge a model as to its applicability in certain situations. Because of the importance of monitoring as part of the scientific process, Chapter 8 is devoted to this subject.
Exercises

1.1 Check the units to the left and right of Eq. (1.3) using Appendix A. Check the calculation which produces a temperature of 255 [K] for the earth.

1.2 Repeat the calculation following Eq. (1.3) for the planets venus and mars. (a) Calculate the solar 'constants' SV and SM for these planets, using the fact that the distance from venus to the sun averages 0.72 times the distance from earth to the sun; for mars the corresponding number is 1.52. (b) Ref. [3], Sect 14 gives the albedo of venus as 0.65 and that of mars as 0.15. Find their radiation temperatures, that is, the temperatures following from Eq. (1.3). Compare with the surface temperatures of 730 [K] and 218 [K] and explain.

1.3 Calculate the temperature of the solar surface, using the total solar irradiance S, the solar radius Rs = 6.96 × 10^8 [m] and the sun–earth distance Rse = 1.50 × 10^11 [m]. Use the assumption that in this regard the sun behaves as a black body.
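The radiation-temperature calculation behind Exercises 1.1 and 1.2 can be sketched numerically. Eq. (1.3) is not reproduced in this section; the sketch below assumes its standard form T = [S(1 − a)/(4σ)]^(1/4), and assumes a total solar irradiance S ≈ 1361 [Wm−2] and an earth albedo a ≈ 0.30 (both values are assumptions, not taken from the text).

```python
# Sketch of the radiation-temperature calculation, assuming the standard
# form of Eq. (1.3): T = [S * (1 - a) / (4 * sigma)]**(1/4).
sigma = 5.670e-8  # Stefan-Boltzmann constant [W m-2 K-4]

def radiation_temperature(S, albedo):
    """Radiation temperature [K] of a planet from its solar constant and albedo."""
    return (S * (1 - albedo) / (4 * sigma)) ** 0.25

S_earth = 1361.0  # assumed total solar irradiance [W m-2]
print(radiation_temperature(S_earth, 0.30))            # earth, ~255 [K]
print(radiation_temperature(S_earth / 0.72**2, 0.65))  # venus, ~252 [K]
print(radiation_temperature(S_earth / 1.52**2, 0.15))  # mars, ~217 [K]
```

The venus and mars lines mirror Exercise 1.2: the solar 'constant' scales with the inverse square of the distance to the sun, and the resulting radiation temperatures fall well below the quoted surface temperatures for venus (greenhouse effect) but close to that of mars.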
References

[1] Brundtland, G. (Chair) (1987) Our Common Future, World Commission on Environment and Development, Oxford University Press, Oxford.
[2] Kaye, G.W.C. and Laby, T.H. (1995) Tables of Physical and Chemical Constants, 16th edn, Longman, Harlow, England.
[3] Haynes, W.M. (1996) Handbook of Chemistry and Physics, 77th edn, CRC Press.
2 Light and Matter

On a microscopic level the interaction of solar radiation with matter is described as the interaction of photons of a certain frequency ν or wavelength λ with atoms or molecules. Below, first the emission of a hot black body with temperature T is presented, both for the earth's temperature and the sun's. Then the measured solar spectrum is discussed in a general way. In order to describe the interaction between photons and atoms or molecules one needs a few quantum mechanical formulas. For light absorption in a medium one needs to sum over all atoms or molecules the light will encounter. That leads to the concept of optical density, which is necessary to understand the absorption of the earth's IR radiation and the greenhouse effect. In a solar cell, to be described in Chapter 5, the interaction of a single photon with the cell may result in an (albeit very small) electric current. A plant is a much more complicated system than a solar cell; it requires the cooperation of some 10 photons to complete one full turnover of the photocell in which the solar energy is stored as chemical energy. As an appetizer for Chapter 5 some elements of the photosynthetic process are discussed below. Finally, solar light, especially in the ultraviolet (UV) region, is not only beneficial but may also damage biomolecules, such as those present in the human skin. It is explained how the ozone filter in the outer atmosphere protects humans against too much short-wavelength radiation.
2.1 The Solar Spectrum

2.1.1 Radiation from a Black Body
The term 'black body' originates from experiments with a black cavity held at a certain temperature. It appears to emit a smooth spectrum with the intensity Iλ as a function of wavelength λ given by the shape of the curves in Figure 2.1. The position of the peak shifts to larger wavelengths for lower temperatures.
Figure 2.1 Emission spectra of a black body with intensity Iλ [Wm–2 μm–1 ] as a function of wavelength λ, calculated with Eq. (2.4). The left spectrum corresponds with Planck’s curve (2.2) for a temperature T = 5800 [K] (the radiation temperature of the solar surface, which gives the general behaviour of the spectrum); the right spectrum corresponds to T = 288 [K] (the temperature of the earth’s surface). Note that the vertical scales differ by many orders of magnitude. Also note that the left curve peaks at approximately 0.5 [μm] and the right spectrum at about 10 [μm].
In the beginning of the twentieth century attempts were made to calculate the spectral shape of black-body radiation. In these calculations it was assumed that at each wavelength all energies could be emitted, so there was no relation between wavelength and energy. However, the resulting spectral composition of the black-body radiation did not correspond to experiments. In a historic contribution to quantum mechanics Planck solved the problem by making the assumption that at any wavelength λ the energy [J] of the emitted electromagnetic field is restricted to integral multiples of the energy

E = hν = hc/λ   [J]   (2.1)
Here ν is the frequency of the electromagnetic field and λ is the corresponding wavelength, while c is the speed of light. The symbol h denotes Planck's constant, given in Appendix A. In other words, the electromagnetic field consists of photons with energies E = hν. Thus, Planck arrived at the so-called Planck energy distribution for the radiation IE dE emitted by the black body with energies between E and E + dE:

IE dE = (2πν³/c²) · 1/(e^(hν/kT) − 1) dE = (2πE³/(h³c²)) · 1/(e^(E/kT) − 1) dE   [Wm−2]   (2.2)
Here T is the absolute temperature of the black body [K] and k is Boltzmann’s constant [J K−1 ]. The radiation IE is also called the radiation flux, or briefly the flux, as it refers to the radiation energy passing a square metre per second in a certain direction.
The energy density W(E)dE [J m−3] of the radiation field at energies between E and E + dE is found by realizing that radiation in a certain direction can pass a [m²] in two directions and may have two independent polarizations. This results in W(E) = 4 × IE/c. Below we will need the expression of the energy density as a function of ω = 2πν.¹ Using E = ℏω gives:

W(ω)dω = (ℏ/(π²c³)) · ω³/(e^(ℏω/kT) − 1) dω   [J m−3]   (2.3)

which is the energy density for frequencies between ω and ω + dω. In Eq. (2.2) the flux is given as a function of the energy E. We are also interested in the flux as a function of wavelength λ. We may use Eq. (2.1) and find (Exercise 2.1):

Iλ dλ = (2πhc²/λ⁵) · 1/(e^(hc/λkT) − 1) dλ   [Wm−2]   (2.4)

Of course, Iλ dλ again is the radiation energy with wavelengths between λ and λ + dλ passing a square metre per second [Wm−2]. In Figure 2.1 the intensity Iλ was displayed with λ and dλ in [μm], hence the dimension Iλ [Wm−2 μm−1]. For large frequencies ν or small wavelengths λ the (−1) in the denominator may be ignored and one finds the peak value λmax of the flux Iλ as (Exercise 2.2)

λmax T = 2898   [μm K]   (2.5)
This approximate, but useful, relation is called Wien's displacement law. It shows how the maximum of the emitted radiation distribution depends on the temperature of the black body. For T = 5800 [K] the maximum of the emitted radiation is at λ ≈ 0.5 [μm], which agrees with the left curve in Figure 2.1. By integrating Eq. (2.2) over all energies one finds another well-known equation, the Stefan–Boltzmann law (Exercise 2.3). This law expresses how the total flux I [Wm−2] emitted by a black body depends on its temperature T:

I(T) = σT⁴   [Wm−2]   (2.6)
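Eqs. (2.5) and (2.6) can be checked together in a few lines. A minimal sketch for the two temperatures of Figure 2.1:

```python
# Wien's displacement law (2.5) and the Stefan-Boltzmann law (2.6),
# evaluated for the solar surface and the earth's surface.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W m-2 K-4]

def wien_peak_um(T):
    """Peak wavelength [um] of black-body emission at temperature T [K], Eq. (2.5)."""
    return 2898.0 / T

def total_flux(T):
    """Total emitted flux [W m-2] of a black body, Eq. (2.6)."""
    return SIGMA * T**4

for T in (5800.0, 288.0):
    print(T, wien_peak_um(T), total_flux(T))
# 5800 [K]: peak near 0.5 [um]; 288 [K]: peak near 10 [um], as in Figure 2.1
```

The many orders of magnitude between the two fluxes explain the very different vertical scales of the two curves in Figure 2.1.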
Here, σ is the Stefan–Boltzmann constant, which is independent of the material.

2.1.2 Emission Spectrum of the Sun
In Figure 2.2 the emission spectrum of the sun is shown as it enters the top of the atmosphere; the lower and more structured curve represents the solar spectrum as it is observed at sea level. The difference is due to the absorption of the solar light in the earth's atmosphere. The gases mainly responsible for this absorption are indicated: CO2, water vapour H2O, and O2 and O3. The dashed curve in Figure 2.2 indicates the shape of the black-body emission spectrum for T = 5900 [K], which fits the measured curve about as well as T = 5800 [K] does. This implies that for simple considerations, as in Chapter 3, the solar spectrum may be approximated by that of a black body. Note that the left curve in Figure 2.1 gives the emission from the surface of the black body (the sun), whereas Figure 2.2 shows what is arriving at the earth. This explains the difference in scales of both figures (Exercise 2.4).
As ω = 2πν it follows that W(ν)dν = W(ω)dω = 2πW(ω)dν, so W(ν) = 2πW(ω). In consulting handbooks be aware of the variables used.
Figure 2.2 Spectral distribution of incident solar radiation outside the atmosphere and at sea level. Major absorption bands of the important atmospheric gases are indicated. The shape of the black-body curve at 5900 [K] is shown for comparison. Note that Figure 2.1 refers to the emission at the solar surface while Figure 2.2 refers to the intensity on the earth. (Reproduced by permission of McGraw-Hill, copyright 1965, from [1], Figure 16.1.)
At temperatures such as we experience on earth, the black-body radiation has its peak intensity far in the infrared, well outside the visible region; see the right curve of Figure 2.1. Bodies at temperatures of, for example, 100 [°C] = 373 [K] still do not emit appreciably in the visible part of the spectrum. We cannot see the IR radiation, but we 'feel' it when we put our hands near a hot-water bottle.
Example 2.1

(a) Write the black-body spectrum (2.2) with T = 5800 [K] as a function of the energy in electronvolt [eV], the energy unit of atomic and molecular physics. (b) Also plot the number of photons as a function of energy in [eV]. Check the validity of the Stefan–Boltzmann relation (2.6) from the curves.
Answer

(a) Equation (2.2) is written in [J]. Following Appendix A one can write 1 [eV] = a [J] with a = 1.602 × 10−19. All energies have to be replaced by aE, where E is the value in [eV]. This gives for the intensity IE

IE = (2πa³E³/(h³c²)) · 1/(e^(aE/kT) − 1)   [m−2 s−1]   (2.7)

On the left of Figure 2.3 the black-body intensity is plotted as a function of E [eV]. The product of peak height times width gives approximately 3.8 × 10^26 [eV s−1 m−2]; with 1 [eV] = a [J] this gives 6.1 × 10^7 [Wm−2], close to the Stefan–Boltzmann value of 6.4 × 10^7 [Wm−2], which indicates that the derivation of Eq. (2.7) is correct. Numerical integration will show that this relation holds exactly.
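The claim that numerical integration reproduces the Stefan–Boltzmann value is easy to verify. A sketch, working directly in [J] rather than [eV]:

```python
import numpy as np

# Integrate the Planck flux of Eq. (2.2) over all photon energies and compare
# with sigma*T^4; the constants are rounded, so agreement to ~0.1% is expected.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
sigma = 5.670e-8
T = 5800.0

E = np.linspace(1e-21, 3e-18, 300_000)           # photon energies [J]
I_E = 2 * np.pi * E**3 / (h**3 * c**2) / np.expm1(E / (k * T))
total = np.sum(I_E) * (E[1] - E[0])              # simple Riemann sum [W m-2]
print(total, sigma * T**4)                       # both ~6.4e7 [W m-2]
```

The upper integration limit corresponds to E/kT ≈ 37, where the integrand is negligible, so truncating the integral there is harmless.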
Figure 2.3 Left: Black-body spectrum for T = 5800 [K] as function of energy E [eV]. Right: the number of photons per [eV] passing 1 [m2 ] per second. Note that the maximum of the curve on the right is shifted somewhat to the left with respect to the curve on the left-hand side. The visible region is indicated on both curves.
(b) Let NPE be the number of photons per [eV], passing 1 [m²] in 1 [s]. Then NPE × dE will be the number of photons with energies [eV] between E and E + dE passing 1 [m²] in 1 [s]. Multiplied by the energy E this should give IE dE. Therefore

NPE = IE/E = (2πa³E²/(h³c²)) · 1/(e^(aE/kT) − 1)   [m−2 s−1 eV−1]   (2.8)

The result is given on the right in Figure 2.3. The maximum is shifted to the low-energy part and is outside of the visible region.
From a physics point of view it is interesting to notice that most transitions in atoms and molecules occur at energies of less than 1 [eV]. Solar photons therefore have energy enough to trigger these transitions. On the other hand, ionizing potentials (the energy required to eject an electron from the least bound state) of virtually all atoms and molecules are larger or much larger than 5 [eV], which literally saves our skins ([2], Section 4.1.2 or [3], 10–214/235).
2.2 Interaction of Light with Matter

The interaction of solar radiation with matter is described on a microscopic scale. Below, the formulas for a light-induced transition between two quantum states are derived, leading to the so-called transition dipole moments. Next, the Einstein coefficients for absorption and emission are introduced and finally the absorption of a beam of light in a medium like the atmosphere is discussed. Note: this section applies some basic quantum mechanics. Students who feel this is still somewhat too difficult are advised to read it anyway and try to understand Eqs. (2.17), (2.21) and (2.22), and then continue with Section 2.2.2.
2.2.1 Electric Dipole Moments of Transitions
Consider an atomic or molecular system described by a Hamiltonian H0. It has discrete energy levels Ek and wave functions ψk⁰, which are the solutions of the time-independent Schrödinger equation of the system

H0 ψk⁰ = Ek ψk⁰   (2.9)
Without perturbations the system can remain in a single state ψk⁰. At t = 0 light with a single frequency ω starts interacting with the system, which is described as a perturbation H1(t). At time t the system will be found in a state Ψ(t), which is the solution of the time-dependent Schrödinger equation

iℏ dΨ(t)/dt = (H0 + H1(t)) Ψ(t)   (2.10)
The time-dependent wave function Ψ(t) can be expressed as

Ψ(t) = Σk ck(t) ψk⁰ e^(−iEk t/ℏ)   (2.11)
Inserting (2.11) in (2.10) gives a set of equations for the time derivatives of the coefficients ck(t). One gets rid of some summations by closing with the bra ⟨ψk⁰|, which gives

dck(t)/dt = −(i/ℏ) Σn cn(t) e^(iωkn t) ⟨ψk⁰|H1|ψn⁰⟩   (2.12)
Here ωkn = (Ek − En)/ℏ. Remember that at t = 0 the system is in the stationary state ψ1⁰. This means that at t = 0 one has c1(0) = 1 and ck(0) = 0 for all k ≠ 1. Thus for small times t, the right-hand side of Eq. (2.12) reduces to a single term. Integration for an arbitrary
k ≠ 1 gives

ck(t) = −(i/ℏ) ∫₀ᵗ e^(iωk1 t′) ⟨k|H1|1⟩ dt′   (2.13)
The probability of finding the system in the excited state k, having absorbed an energy quantum ℏωk1 from the light beam, is given by |ck(t)|². Eq. (2.13) shows that |ck(t)|² is directly related to the squared matrix element |⟨k|H1|1⟩|². To proceed we will derive the perturbation H1(t). We assume that the perturbation is caused by the interaction of a plane electromagnetic wave with an atomic or molecular system at the origin of the coordinate system. For electrically neutral systems the leading term of the perturbation is the interaction between the electrons of the system and the electric field E(t) = E0 cos ωt. To first order this may be described by

H1(t) = −μ · E(t)   (2.14)

where

μ = −Σi qi ri   (2.15)

is the electric dipole operator of the system and the sum extends over all electrons with charge qi and position ri.
is the electric dipole operator of the system and the sum extends over all electrons with charge qi and position ri . One finds the probability Pk (t) = |ck (t)|2 that the system is in state k at time t by substituting Eq. (2.14) into Eq. (2.13). One assumes that the frequency ω of the light is close to the resonance frequency ωk1 of the transition, so one may ignore terms (ωkl + ω)2 in the denominator and find
|E0 |2 cos2 θ sin2 12 (ωk1 − ω)t 1 2 (2.16) Pk (t) = 2 |k |μ|1| (ωk1 − ω)2 As it represents a probability Pk (t) is dimensionless (Exercise 2.5). In (2.16) θ is the angle between the vectors μ and E0 . It is clear that the squared matrix element |μk1 |2 = |k |μ|1|2
(2.17)
determines the probability Pk (t) that the system is in state k at time t. The matrix element μk1 is called the transition dipole moment. Note that the probability of absorption (2.16) depends on the orientation of the vector μk1 with respect to the electric field E0 by the factor cos2 (θ ). For isotropic solutions or gases one has to average over all directions. For ordered systems the factor cos2 (θ ) leads to polarization effects. In a few steps the transition probability [s−1 ] of the transition 1 → k will be derived. The transition probability [s−1 ] is the increase dPk of the probability Pk (t) that the system is in state k during a time dt, divided by dt which gives dPk /dt. One therefore has to differentiate Eq. (2.16) and reorganize the resulting sine and cosine functions. This gives |μk1 |2 dPk sin((ωk1 − ω)t) |E0 |2 cos2 θ = 2 dt 2(ωk1 − ω)
(2.18)
One may show (Exercise 2.6) that after a long time t the fraction at the right of Eq. (2.18) peaks around ω = ωk1. In such a case it is convenient to use the delta function, defined in Appendix C, to describe the limit of Eq. (2.18) for t → ∞ (see Eq. (C6)). This gives

dPk/dt = (π|μk1|²/(2ℏ²)) |E0|² cos²θ × δ(ωk1 − ω)   (2.19)

It is useful to check the dimensions of Eq. (2.19) (Exercise 2.7). One then should keep in mind the definition of the δ-function, ∫δ(ωk1 − ω)dω = 1, from which it follows that in the present case its dimension should be [ω−1] = [s]. For a plane electromagnetic wave the energy density of the field may be written as U = ε0E0²/2 [J m−3]. Finally, the average of cos²θ in three dimensions gives 1/3. Taking all factors together one finds

dPk/dt = (π/(3ε0ℏ²)) |μk1|² U δ(ωk1 − ω)   (2.20)

In this equation ωk1 is defined by the energy difference between states k and 1: Ek − E1 = ℏωk1. In practice the transition can also take place in a band of frequencies around ω = ωk1. Therefore one has to replace Uδ(ωk1 − ω) in Eq. (2.20) by a frequency-dependent energy density W(ω) [J m−3 s], which we already met in Eq. (2.3) for a black body. This finally results in

dPk/dt = (π/(3ε0ℏ²)) |μk1|² W(ω) = Bk1 W(ω)   (2.21)
As an extra check one may compare the integral over ω in Eqs. (2.20) and (2.21). It is apparent that ∫Uδ(ωk1 − ω)dω = U(ωk1) [J m−3] is then replaced by ∫W(ω)dω, the energy density integrated over the band around ω = ωk1. Eq. (2.21) represents the rate of the transition 1 → k. The last equality defines the Einstein coefficient for absorption and stimulated emission as

Bk1 = π|μk1|²/(3ε0ℏ²)   (2.22)
The dimensions of Eqs. (2.21) and (2.22) are discussed in Exercise 2.8.

2.2.2 Einstein Coefficients
Consider a simple system with two energy levels: a lower level with energy E1 populated by N1 atoms or molecules and a higher level with energy E2 = E1 + ℏω populated by N2 atoms/molecules. This is indicated schematically in Figure 2.4. First note that in deriving Eq. (2.21) it was not stated whether state ψk⁰ was the state of higher or of lower energy. Therefore the transition rate (2.21) applies both to absorption from an electromagnetic field, which gives B12W(ω), and to stimulated emission in a field, given by B21W(ω). Spontaneous emission from state 2 to state 1, which may occur 'in the dark', is given by a transition rate A21, for which we will now derive an expression. The rate of population of level 1 in the presence of a light field W(ω) may be written as

dN1/dt = −B12W(ω)N1 + B21W(ω)N2 + A21N2   (2.23)
Figure 2.4 The three basic types of radiative processes. Indicated are the Einstein coefficients and the transition rates for two levels with energies E1 and E2 , which are occupied by N1 and N2 atoms/molecules, respectively.
Assuming a steady state, where the population of the levels does not change, we have dN1/dt = 0 and it follows that

W(ω) = A21/(B12 N1/N2 − B21)   (2.24)
In the steady state with temperature T the ratio N1/N2 is found from the Boltzmann distribution for N2/N1,

N2/N1 = e^(−ℏω/kT)   (2.25)
This fundamental relation expresses that the thermal motion of the atoms and molecules, represented by the energy kT [J], will populate the higher energy level with energy E2 = E1 + ℏω with a relative probability e^(−ℏω/kT). For a higher energy difference ℏω the population of the upper level will decrease and for a higher temperature T it will increase. Finally, in the steady state the energy density (2.24) of the radiation field will be determined by the Planck distribution for a black body (2.3). When one substitutes (2.25) into (2.24) and puts the result equal to (2.3) one finds

W(ω) = (ℏ/(π²c³)) · ω³/(e^(ℏω/kT) − 1) = A21/(B12 e^(ℏω/kT) − B21)   [J m−3 s]   (2.26)

The last equality should hold for all temperatures T, which implies

B12 = B21   (2.27)

and

A21/B21 = ℏω³/(π²c³)   (2.28)

or

A21 = ω³|μ21|²/(3πε0ℏc³)   (2.29)
Consequently the same squared matrix element |μ21 |2 determines the rates and properties of all radiative processes.
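As an illustration of Eq. (2.29), one can estimate A21 numerically. The transition dipole moment of 1 debye and the 500 [nm] wavelength used below are assumed, illustrative values, not taken from the text.

```python
import math

# Spontaneous-emission rate A21 from Eq. (2.29), for an assumed transition
# dipole moment of 1 debye at an assumed wavelength of 500 [nm].
eps0 = 8.854e-12          # vacuum permittivity [C2 N-1 m-2]
hbar = 1.055e-34          # reduced Planck constant [J s]
c = 2.998e8               # speed of light [m s-1]

mu = 3.336e-30            # |mu21| = 1 debye, expressed in [C m] (assumed)
omega = 2 * math.pi * c / 500e-9    # transition frequency [rad s-1]

A21 = omega**3 * mu**2 / (3 * math.pi * eps0 * hbar * c**3)  # [s-1]
print(A21, 1.0 / A21)     # rate [s-1] and radiative lifetime [s]
```

For these values the radiative lifetime comes out in the range of hundreds of nanoseconds; strong molecular transitions with dipole moments of several debye reach the nanosecond regime.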
2.2.3 Absorption of a Beam of Light: Lambert–Beer's Law
The microscopic quantities |μ21|², B12, B21 and A21 are connected to the macroscopic phenomenon of absorption as described by Lambert–Beer's law, which will now be derived. When a beam of light passes through a sample of material, the light beam is usually absorbed, but sometimes, as in a laser, amplified. Absorption dominates when most of the atoms/molecules are in their ground states (N1 ≫ N2) and when the beam intensity is weak. In that case the rate equation for transitions between states 1 and 2 is given by Eq. (2.23) without the term for stimulated emission as

dN1/dt = −B12WN1 + A21N2   (2.30)
Here B12N1W is the rate at which light is removed from the incident beam. The last term of Eq. (2.30) describes emission from the excited state; that light is radiated into all directions. Also, N2 ≪ N1, so the contribution of the last term of Eq. (2.30) may be ignored. In Figure 2.5 a light beam is entering a slice of medium with thickness dz along the z-axis. Its energy density will depend on ω and z. Due to the absorption the density will decrease as well, making it a function of time. This leads to W = W(ω, z, t) [J m−3 s]. The beam energy in the frequency interval ω to ω + dω is given by W(ω, z, t) dω a dz, where a [m²] is the surface area of the slice. The energy decrease of the beam in the slice must be equal to the absorption by the medium. This results in

−(∂W/∂t) dω a dz = B12 N1 (a dz/V) F(ω) dω W(ω, z) ℏω   (2.31)
The left-hand side represents the energy decrease of the beam. The right-hand side represents the absorption, which in Eq. (2.30) was indicated by B12N1W. Expression (2.30) had to be modified as follows. The population N1 was replaced by the fraction within the slice, i.e. N1 a dz/V, where V is the total volume of the sample. Furthermore, Eq. (2.30) refers to the rate of absorption [s−1]. In order to find the absorbed energy per second [J s−1], this rate has to be multiplied by the photon energy ℏω, which explains the far right of Eq. (2.31). Finally, the factor F(ω)dω was inserted because the atom/molecule may absorb over a certain region around ω = ωk1. This was discussed in deriving Eq. (2.21). The incoming light may comprise many frequencies ω. We are interested in the band around
Figure 2.5 A beam of light along the z-axis, drawn horizontally, is absorbed by a thin slice of a sample perpendicular to the beam.
ω = ωk1. Therefore, the weight function F(ω) was added, with the property

∫band F(ω) dω = 1   (2.32)

From Eq. (2.31) one finds

∂W/∂t = −N1 F(ω) B12 (ℏω/V) W   [J m−3]   (2.33)
In practice, one is interested in the extinction of a beam with intensity I(ω)dω [Wm−2] between ω and ω + dω. In this case we use the fact that the energy loss of the beam in the slice of Figure 2.5 over a time dt (the left-hand side of Eq. (2.31)) is equal to the decrease of the intensity of the beam over the slice. This gives

−(∂W/∂t) dω a dz = (I(z) − I(z + dz)) a dω = −(∂I/∂z) dz a dω   (2.34)
which leads to ∂W/∂t = ∂I/∂z. The intensity of the beam and its energy density are related by W = I/(cn), where c is the speed of light in vacuum and n the refractive index. On the left of Eq. (2.33) we substitute ∂W/∂t = ∂I/∂z. On the right we substitute W = I/(cn). This leads to a simple differential equation for I(z):

∂I/∂z = −(B12 N1 ℏω F(ω)/(Vnc)) I   (2.35)
On the right-hand side all quantities except I are independent of z, which gives Lambert–Beer's law

I(z) = I(0) e^(−kz)   (2.36)

where the absorption coefficient k is given by

k = B12 N1 ℏω F(ω)/(Vnc)   (2.37)
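Lambert–Beer's law (2.36) is a one-liner in code. The sketch below uses an assumed absorption coefficient k, chosen only to show the exponential attenuation.

```python
import math

# Exponential attenuation of a beam, Eq. (2.36). The value of k below is
# assumed, purely for illustration.
def transmitted(I0, k, z):
    """Intensity after a path z [m] in a medium with absorption
    coefficient k [m-1]."""
    return I0 * math.exp(-k * z)

k = 230.0  # assumed absorption coefficient [m-1]
for z in (0.0, 0.005, 0.01, 0.02):
    print(z, transmitted(1.0, k, z))
# the intensity drops by a factor e^(k*z); doubling the path squares the attenuation
```

This multiplicative behaviour, with the concentration sitting in the exponent, is what later in this chapter makes small changes in absorber concentration produce large changes in transmitted intensity.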
We mention in passing that the exponent kz in Eq. (2.36) may be replaced by ∫k(z)dz, as is proven in Eq. (8.45). The quantity B12 in Eq. (2.37) was given by Eqs. (2.22) and (2.27). It is independent of the frequency ω of the field. One therefore may extract F(ω) from Eq. (2.37), use (2.32), and find

B12 = (Vcn/(N1ℏ)) ∫band (k(ω)/ω) dω   (2.38)

where the integral runs over a band around the transition frequency ω12. Eq. (2.36) shows that one can measure k(ω) over the band and find the Einstein coefficients from (2.38). Using (2.22) and taking n = 1 one then will find the transition dipole moment |μk1|² as

|μk1|² = (3ε0ℏc/(π(N1/V))) ∫band (k(ω)/ω) dω   (2.39)
In the following few equations we will replace the official MKS units on the right-hand side by more convenient units. Correction factors will be introduced on the right-hand side in order to keep the correct units [μk1] = [C m] on the left. Often one works with a concentration C [mol/L = mol/dm³]. Using Avogadro's number NA one finds N1/V [m−3] = N1/(V NA) [mol/m³] = N1/(V NA 10³) [mol/L]. This leads to

C = N1/(V NA 10³)   [mol/L]   (2.40)

The factor N1/V in Eq. (2.39) may be extracted from Eq. (2.40), which leads to

|μk1|² = (3ε0ℏc/(πC NA 10³)) ∫band (k(ω)/ω) dω   (2.41)
The function k(ω) was defined in Eq. (2.36). That equation may also be written as

I(l0) = I(0) × 10^(−OD)   (2.42)
where l0 [m] is the path length in the medium and OD is the optical density. In atmospheric applications this exponent usually is called optical depth. By comparing Eqs. (2.36) and (2.42) one will find k=
l0
OD log e
(2.43)
10
This gives

|μk1|² = (3ε0ℏc/(πC NA 10³ l0 log₁₀e)) ∫band (OD/ω) dω   (2.44)
where the dipole matrix element μk1 is found with its MKS unit [C m]. It is interesting that the macroscopic measurement of the optical density OD directly gives the microscopic property μk1 .
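Eq. (2.44) is simple to apply once the band integral of OD/ω has been measured. The sketch below takes only the structure of the formula from the text; the band-integral value, concentration and path length are assumed, illustrative numbers.

```python
import math

# Transition dipole moment from a measured optical-density band, Eq. (2.44).
EPS0, HBAR, C_LIGHT = 8.854e-12, 1.055e-34, 2.998e8
NA = 6.022e23        # Avogadro's number [mol-1]
DEBYE = 3.336e-30    # 1 debye in [C m]

def dipole_from_OD(band_integral, conc, l0):
    """|mu_k1| in [C m] from the band integral of OD/omega over the band [s],
    the concentration conc [mol/L] and the path length l0 [m], Eq. (2.44)."""
    mu_sq = (3 * EPS0 * HBAR * C_LIGHT * band_integral /
             (math.pi * conc * NA * 1e3 * l0 * math.log10(math.e)))
    return math.sqrt(mu_sq)

# assumed illustrative measurement: band integral 0.03 [s], C = 1e-5 [mol/L],
# cuvette path length l0 = 1 [cm]
mu = dipole_from_OD(0.03, 1e-5, 1e-2)
print(mu / DEBYE)    # a few debye, typical for a strong molecular transition
```

The appeal of Eq. (2.44) is exactly this: a routine macroscopic absorption measurement yields a microscopic quantum mechanical quantity.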
Example 2.2 Extinction in Plants

For applications in, e.g., plants one expresses the path length as l [cm] and writes

OD = εlC   (2.45)

(a) Find the dimension of the molar extinction coefficient ε.
(b) Reduce Eq. (2.44) to an integral which contains ε.

Answer

(a) OD is dimensionless (see (2.42)), so [ε][cm][mol/L] = dimensionless, which gives [ε] = [L mol−1 cm−1].

(b) Substitution of (2.45) in (2.44) gives

|μk1|² = (3ε0ℏc/(π NA 10 log₁₀e)) ∫band (ε/ω) dω = 1.023 × 10^(−61) ∫band (ε/ω) dω   (2.46)

where we used l0 = 10^(−2) l and the numerical values in Appendix A.
Example 2.3 Optical Density in Leaves

A leaf containing chlorophyll has in its absorption maximum at 680 [nm] an extinction coefficient ε = 10⁵ [dm³ mol−1 cm−1]. The chlorophyll concentration in the leaf is about 10−3 [mol dm−3] and the path length for light passing through the leaf is about 0.02 [cm]. Calculate OD and the reduction of intensity.

Answer

OD = 10⁵ × 0.02 × 10−3 = 2. So through a leaf with a thickness of 0.02 [cm] the intensity of the (red) 680 [nm] light is reduced by a factor of 10² = 100 (from Eq. (2.42)). As a consequence, no red light is detected below the outer array of leaves on a tree.

Finally, we have a second look at Figure 2.5. The decrease of energy over the slice was given in Eq. (2.34) as a(I(z) − I(z + dz))dω [W]. This must be proportional to the number of particles in the slice, which is (N/V) a dz, and also proportional to the intensity of the beam I(z)dω. The proportionality constant σ⁰ has the dimension [m²], as follows from

a(I(z) − I(z + dz))dω [W] = (N/V) a dz × I(z)dω × σ⁰   (2.47)
On the right the product (N/V) a dz × σ⁰ may be interpreted as the total area with which the incoming beam interacts. Then I(z)dω may be interpreted as the number of photons impinging. Their product, the right-hand side of Eq. (2.47), would represent the decrease of intensity of the beam, which corresponds with the left-hand side. Consequently σ⁰ may be interpreted as the cross section of the particles in the slice. From Eq. (2.47) it follows that ∂I/∂z = −(N/V)σ⁰I and

I(z) = I(0) e^(−(N/V)σ⁰z) = I(0) e^(−kz)   (2.48)

where (2.36) was used. Consequently k = (N/V)σ⁰. The cross section usually is given as σ [cm²], from which σ⁰ [m²] = 10^(−4) σ [cm²]. Using Eqs. (2.43), (2.45) and (2.40) one finds

ε = 10^(−3) σ NA log₁₀e   (2.49)

2.3 Ultraviolet Light and Biomolecules
In this section we discuss the absorption of light in the ultraviolet (UV) region by biological molecules such as proteins and nucleic acids. Since both the amount and the spectral
composition of UV light incident on the earth's surface are determined by the ozone present in the atmosphere, we shall summarize the processes that lead to the build-up and destruction of the ozone layer and show that small variations in ozone concentration may lead to dramatic biological effects. These effects are largely related to the highly non-linear properties of Lambert–Beer's law (Eqs. (2.36) and (2.37)), where the concentration C = N1/V appears in the exponent.

2.3.1 Spectroscopy of Biomolecules
Figure 2.6 shows the solar emission spectrum below 340 [nm] at the earth's surface, in combination with the absorption spectrum of two important biomolecules: DNA, which is the carrier of the genetic code, and α-crystallin, which is the major protein of the mammalian eye lens. The absorption of light by these biological molecules is essentially zero in the region 320 [nm] < λ < 400 [nm], which is called the near UV or UV-A; the absorption is intense in the region 200 [nm] < λ < 290 [nm], which is called the far UV or UV-C; and it only overlaps with the solar spectrum in the wavelength region 290 [nm] < λ < 320 [nm], the mid-UV or UV-B. From Figure 2.6 it appears that the interaction between sunlight and the two biomolecules will take place essentially in the UV-B region. The absorption of DNA is due to the DNA bases, guanine, thymine, cytosine and adenine, and it peaks at about 260 [nm], with a maximum extinction of ε = 10⁴ [L mol−1 cm−1] (expressed per mole of base). Note that here the units of Eq. (2.45) are used because they are convenient for biological applications.
Figure 2.6 Absorption spectra of DNA (dotted) and α-crystallin (dashed) in the wavelength region 240–340 [nm]. The solid line indicates the solar spectrum at the earth's surface in the same region, normalized to one at 340 [nm]. The left axis refers to the optical density in arbitrary units and the right axis to the relative solar intensity.
Light and Matter
21
The absorption of proteins like α-crystallin in the wavelength region 250 [nm] < λ < 300 [nm] is mainly due to the aromatic amino acids tryptophan and tyrosine, and it peaks at about 280 [nm]. For a protein with a number nTRP of tryptophan amino acids and a number nTYR of tyrosine amino acids the extinction at 280 [nm] is given by

ε280 = 5600 nTRP + 1100 nTYR [L mol⁻¹ cm⁻¹] (2.50)
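Eq. (2.50), combined with Lambert–Beer’s law OD = εCl, gives a quick estimate of how strongly a protein solution absorbs at 280 [nm]. A minimal sketch; the protein composition (5 Trp, 10 Tyr), concentration and path length below are invented purely for illustration:

```python
# Extinction of a protein at 280 nm from Eq. (2.50), combined with
# Lambert-Beer's law OD = epsilon * C * l. The protein composition
# (5 Trp, 10 Tyr), concentration and path length are invented examples.

def epsilon_280(n_trp, n_tyr):
    """Molar extinction at 280 nm in [L mol^-1 cm^-1], Eq. (2.50)."""
    return 5600 * n_trp + 1100 * n_tyr

eps = epsilon_280(5, 10)     # 5*5600 + 10*1100 = 39000 [L mol^-1 cm^-1]
C = 1.0e-5                   # concentration [mol L^-1] (10 micromolar)
l = 1.0                      # path length [cm]
OD = eps * C * l
print(f"epsilon_280 = {eps}, OD = {OD:.2f}")   # OD = 0.39
```

An OD of 0.39 means that 10^(−0.39) ≈ 41% of the incident 280 [nm] light is transmitted through the 1 [cm] cuvette.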
For a typical biological cell the absorption of solar UV by nucleic acids amounts to only about 10% of the total absorption due to proteins. Often proteins carry additional cofactors which allow them to carry out their specific tasks. For instance, in haemoglobin the haem group (which belongs to the class of porphyrin molecules) is attached to the protein in a haem–protein complex which is active in the oxygen binding of blood. In these cases the absorption extends into the near-UV, visible and sometimes near-infrared region of the spectrum. Although in several cases the absorption of light in this wavelength range is crucial for the biological function (photosynthesis, vision), we shall not discuss it further here.
2.3.2 Damage to Life from Solar UV
The interaction of solar UV with biological cells may lead to severe damage of the genetic material or even cell death. In complex, multicellular organisms, exposure to solar UV may lead to damage of crucial parts of the organism and, in the case of humans, to skin cancer. As Figure 2.6 showed, the UV-B wavelength range will be of prime importance in these processes. In particular, the light-absorbing parts of the DNA, called the chromophores, form the prime target in the process that leads to photo-damage. Figure 2.7 shows the action spectrum E(λ), which is the damage done by a unit of irradiation of a certain wavelength in producing erythema (sunburn) in human skin. It also shows the UV tail I(λ) of the solar spectrum. The damage done by solar radiation at wavelength λ is therefore given by the product E(λ)I(λ), which is called the solar erythemal effectiveness. Note that Figure 2.7 is drawn with a vertical logarithmic scale. It shows that the efficiency E(λ) of producing erythema increases four to five orders of magnitude between 350 and 280 [nm], precisely in the region where the solar UV spectrum collapses. The action spectrum E(λ) must be largely ascribed to the direct absorption of solar UV by DNA. Although for λ > 320 [nm] the action spectrum tends to be higher than the DNA absorption in Figure 2.6, implying that there are more absorbers, the primary target is probably still DNA. Similarly, studies on cellular cultures where cell survival was measured have identified the chromophores of DNA as the primary target molecules. The major product of photo-absorption in DNA is the pyrimidine dimer, a covalent structure involving two thymine molecules or a cytosine–thymine pair. Its presence in DNA has major destructive effects on the reading and translation of the genetic code. The damage D done to a biological system by solar UV can be calculated from the action spectrum E(λ) by multiplying with the incident solar radiation at the earth’s surface I(λ)dλ
with wavelengths between λ and λ + dλ and integrating:

D = ∫₀^∞ E(λ) I(λ) dλ (2.51)
Since E(λ) is a function with a large negative slope and I(λ) is a function with a large positive slope, even a minor change in I(λ) may lead to large changes in the damage D.

Figure 2.7 Erythema (sunburn) action spectrum, plotted together with the solar spectrum and the resultant ‘solar erythemal effectiveness’ E(λ)I(λ) of Eq. (2.51). Note the vertical logarithmic coordinate. (Reproduced by permission of Plenum Press, copyright 1978, from [2], Fig 8.4, p. 119.)
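The sensitivity of Eq. (2.51) to small shifts of the solar UV edge can be made concrete with a toy calculation. The exponential slopes and the ‘edge’ parametrization below are invented solely to mimic the qualitative shapes of Figure 2.7; they are not fitted to real spectra:

```python
import math

# Toy version of Eq. (2.51), D = integral of E(lambda)*I(lambda).
# The exponential slopes and the sharp solar 'edge' are invented to
# mimic the qualitative shapes of Figure 2.7, not fitted to data.

def E(lam):
    """Action spectrum: falls 4-5 orders of magnitude between 280 and 350 nm."""
    return math.exp(-0.15 * (lam - 280.0))

def I(lam, edge=295.0):
    """Solar UV tail rising steeply above a short-wavelength edge."""
    return math.exp(0.46 * (lam - edge))

def damage(edge, lam_min=280.0, lam_max=340.0, n=6000):
    """Trapezoidal integration of E*I over the UV-B/UV-A range."""
    h = (lam_max - lam_min) / n
    s = 0.5 * (E(lam_min) * I(lam_min, edge) + E(lam_max) * I(lam_max, edge))
    for i in range(1, n):
        lam = lam_min + i * h
        s += E(lam) * I(lam, edge)
    return s * h

d0 = damage(295.0)    # reference ozone column
d1 = damage(293.0)    # mild depletion: solar edge shifted 2 nm down
print(f"relative damage increase: {d1/d0:.2f}")   # e^(0.46*2), about 2.5
```

Shifting the toy solar edge by only 2 [nm] multiplies the damage integral by roughly a factor 2.5, illustrating why a modest ozone change can have a disproportionate biological effect.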
2.3.3 The Ozone Filter as Protection
Ozone (O3) forms a thin layer in the stratosphere, with a maximum concentration between 20 and 26 [km] above the earth’s surface (Figure 3.2). The atmospheric ozone absorbs essentially all the radiation below a wavelength of 295 [nm], due to a strong optical transition at about 255 [nm] which extends into the mid-UV region. Figure 2.8 shows the spectrum of ozone between wavelengths of 240 and 300 [nm]. At the maximum of 255 [nm] the absorption cross section is about 10⁻¹⁷ [cm²]; it is about half maximum at 275 [nm] and has decreased to about 10% of its maximum value by 290 [nm]. That ozone forms a very thin shield indeed is probably best demonstrated by the fact that the amount of O3 in the atmosphere corresponds to a layer of 0.3 [cm] at standard temperature and pressure (Exercise 2.9). Ozone is constantly formed in the upper layer of the atmosphere through the combination of molecular oxygen (O2) and atomic oxygen (O). The latter is formed through the
Figure 2.8 Absorption spectrum of ozone in the wavelength region 240 [nm] to 300 [nm] (solid line), reproduced by permission of the Optical Society of America, copyright 1953, from [3], Figure 1, p. 871. On the right the spectrum is given on a 100-fold enlarged scale from 300 [nm] to 350 [nm], kindly made available by Dr Kylling, who used data going back to [4]. For comparison, the absorption cross section of DNA shown in Figure 2.6 is given, redrawn to the same scale as the ozone spectrum on the left.
photo-dissociation of O2 in the 100 [km] region by light of wavelengths shorter than 175 [nm]. Sunlight excites the electronic transition between the triplet ground state of O2 (³Σg⁻) and a triplet excited state (³Σu⁻). Once excited, the O2 molecule may dissociate into two oxygen atoms: one in the ground state O(³P) and one in an excited state O(¹D). The latter has an energy of 2 [eV] above the ground state. As a consequence of these processes, sunlight with λ < 175 [nm] is totally extinguished above the stratosphere. Once formed, atomic oxygen attaches to O2 to form O3. The efficiency of O3 formation by UV light is sensitive to a large number of factors, amongst which are the availability of O2, changes in stratospheric temperature, and chemicals and dust from volcanic eruptions. Most of the ozone is produced above the equator, where the amount of incident solar UV light is maximal. Ozone formed at these latitudes then diffuses to the poles, where it is ‘accumulated’. The effective thickness of the ozone layer may increase from ≤0.3 [cm] at the equator to ≥0.4 [cm] above the pole at the end of winter. The ozone concentration displays significant daily and seasonal fluctuations and tends to be highest in late winter and early spring. Ozone is not only permanently being formed, but also constantly broken down in the stratosphere, and only a very small fraction of the formed ozone escapes down to the troposphere. There are basically two pathways for the destruction of ozone and the
reformation of O2:

O + O3 → 2 O2 (2.52)
O3 + O3 → 3 O2 (2.53)
These reactions are the net result of a complex set of reactions catalysed by various gases and radicals [5]. We mention explicitly atomic chlorine Cl, nitric oxide NO and hydroxyl radicals OH. The Cl radical, for example, strips an O atom off O3, forming ClO; the ClO then loses its O atom to a free O atom and returns to the radical Cl state, while the two O atoms recombine to O2:

Cl + O3 → ClO + O2
ClO + O → Cl + O2

The net effect is Eq. (2.52). Note that this scheme requires free O atoms to be present at an altitude of 25 [km] or more.

How are the free radicals NO, Cl and OH produced? The OH radical is a product of the breakdown of H2O vapour, for instance produced in the exhaust of supersonic aeroplanes. Although part of the Cl may be formed from HCl released by volcanoes, the major input of Cl into the stratosphere originates from chlorofluorocarbons (CFCs), which are used as foam-blowing agents, refrigerants and propellants. CFCs are extremely stable in the troposphere. However, a small fraction may escape into the stratosphere, eventually reaching the upper stratosphere, where they may be decomposed under the influence of UV light, thereby producing Cl and ClO. N2O is partly of anthropogenic origin and is released from soils and water where it has been formed as a fertilizer waste product. Like the CFCs, the N2O released at the earth’s surface may eventually be photo-decomposed and NO is formed. These radicals, together with the OH radical, are responsible for 90% of the removal of stratospheric ozone.

The crux of the ozone problem is demonstrated by Figures 2.6 and 2.8. Any small variation in the ozone concentration will lead to changes in both the amount of UV light at a particular wavelength and the transmission of shorter wavelength radiation. This is of course a direct consequence of Lambert–Beer’s law (2.42). A certain percentage change in the concentration of ozone yields the same percentage change in optical density OD.
A decrease of one OD unit at a particular wavelength implies a 10-fold increase in the amount of light of that wavelength reaching the earth’s surface. Since the action spectrum for damage to living cells or tissue is an exponentially increasing function with decreasing λ (see Figure 2.7), the amount of damage may increase dramatically, even with a relatively small decrease in the amount of ozone. Figure 2.9 shows experimentally that the amount and spectral distribution of UV light are in fact a strong function of the ozone concentration. In this figure the irradiance at ground level was measured for three different ozone concentrations. The atmospheric pressure was converted to a fictitious pressure of 1 [atm]. From the logarithmic scale it is obvious that at the lower wavelengths the increase in irradiance going from C to A is considerable. Predictions from models for atmospheric ozone production and breakdown indicate a stratospheric ozone depletion ranging from 5 to 20% due to CFC and N2O production. From results as presented in Figures 2.6 and 2.8 one may predict that a 10% ozone depletion would result in a 45% increase in effective UV-B radiation. These alarming numbers illustrate the necessity to monitor the structure of the ozone layer accurately and to quantify the effects of increased UV on living organisms. In Chapter 9 we will discuss briefly that international agreements to ‘save the ozone layer’ seem successful.
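This nonlinearity follows directly from Lambert–Beer’s law, I = I0 · 10^(−OD). A sketch using the σ ≈ 10⁻¹⁷ [cm²] cross section at 255 [nm] read off Figure 2.8; the 0.3 [cm] STP column and the STP number density n0 are standard values assumed here (cf. Exercise 2.9):

```python
import math

# Lambert-Beer sensitivity of the ozone filter, I = I0 * 10^(-OD).
# sigma at 255 nm is read off Figure 2.8; the 0.3 cm STP column and the
# STP number density n0 (Loschmidt number) are standard values assumed here.

n0 = 2.69e19            # molecules per cm^3 at standard temperature and pressure
l = 0.3                 # effective ozone column reduced to STP [cm]
sigma = 1.0e-17         # absorption cross section at 255 nm [cm^2]

OD = sigma * n0 * l / math.log(10)     # base-10 optical density
print(f"OD(255 nm) = {OD:.0f}")        # ~35: essentially total extinction

# A 10% ozone depletion lowers OD by the same 10%, so the transmitted
# intensity grows by a factor 10^(0.1 * OD):
for od in (1.0, 2.0, OD):
    print(f"OD = {od:5.1f}: gain after 10% depletion = {10**(0.1*od):.3g}")
```

At 255 [nm] the filter is so thick that even a large depletion leaves the transmitted intensity negligible; the danger lies at the wavelengths where OD is of order 1 and a 10% depletion already raises the transmitted intensity by tens of percent.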
Figure 2.9 UV intensity at the earth’s surface for various effective thicknesses of the ozone layer. The three cases are A = 0.273 [cm], B = 0.319 [cm] and C = 0.388 [cm]. All measurements were made under a clear sky and converted to a fictitious pressure of 1 [atm] = 10⁵ [Pa]. (Reproduced from T. Ito, Frontiers of Photobiology, 1, Figure 1, p. 515. Copyright Elsevier 1993.)
2.3.3.1 Melanin Pigments in the Human Skin²
Melanins are a broad class of macromolecules found throughout nature. In humans they are responsible for a wide variety of colorations of skin, hair and eyes, and they are also present in the inner ear and in neurons of the substantia nigra in the human brain. Epidermal pigments can be divided into two classes: the black to brown eumelanin and the yellow to reddish pheomelanin. Darker skins contain more eumelanin and have a higher ratio of eumelanin to pheomelanin, as compared to lighter skins [7]. The chemical composition of the two pigments is completely different. Eumelanin is a complex polymeric structure based on 5,6-dihydroxyindole (DHI) and 5,6-dihydroxyindole-2-carboxylic acid (DHICA), while pheomelanin is a highly heterogeneous polymer based on benzothiazine and benzothiazole derivatives. The structures of these compounds are shown in Figure 2.10. Both eumelanin and pheomelanin show an intriguing absorption, increasing monotonically towards the UV, as shown in Figure 2.11. This featureless absorption is very uncommon amongst organic chromophores, which generally show distinct absorption bands

² This subsection is based on a gratefully acknowledged contribution by Annemarie Huijser (Lund University, Department of Chemical Physics, Sweden), which was slightly edited by the authors.
Figure 2.10 Building blocks of eumelanin (A) and pheomelanin (B).
(see Figure 5.28). The peculiar absorption of eumelanin and pheomelanin is generally attributed to a combination of a large variety of different absorption bands, due to the large heterogeneity in composition. It might allow for protection of DNA in the skin against UV-induced damage, although the true function of epidermal pigments is so far not at all understood. There are indications that eumelanin may be photo-protective, while pheomelanin is thought to become toxic upon UV exposure due to the formation of reactive species
Figure 2.11 The broad-band absorption spectra of eumelanin and pheomelanin. (Reprinted from Biophysical Journal, 90, 3, Tran et al, 743–752, copyright 2006 with permission from Elsevier.)
Figure 2.12 UV induced reaction mechanisms in physiologically relevant states of DHICA, the building block of eumelanin: the carboxylate anion (A) and the neutral state (B). (Reproduced by permission of Wiley-VCH.)
(such as radicals). Knowledge of UV-induced reactions is, however, largely lacking. A difference in UV response between eumelanin and pheomelanin, if it exists, might explain the higher susceptibility to skin cancer of people with a fair complexion. Spectroscopic studies to unravel the UV-induced reaction mechanisms in eumelanin and pheomelanin have been highly complicated by the very complex structures of the pigments and the unknown properties of even the small constituents. This has driven researchers to investigate smaller building blocks with a well-defined structure. As an example, the UV-induced reaction mechanisms in the eumelanin building block DHICA are discussed. At physiologically relevant conditions DHICA can occur in two forms: the carboxylate anion and the neutral state. Recent time-resolved fluorescence experiments taking into account isotope effects have shown that these two states respond completely differently to UV exposure, as visualized in Figure 2.12. After excitation the carboxylate anion decays back to its ground state by fluorescence and via non-radiative decay channels (Figure 2.12(A)). The neutral state, on the other hand, quickly dissipates part of the absorbed UV energy via sub-[ps] excited-state intramolecular proton transfer (ESIPT) from the carboxylic acid group towards the nitrogen, which acquires a positive charge (Figure 2.12(B)). The formed state subsequently decays back to the ground state mainly non-radiatively and shows only a weak fluorescence and a short lifetime [9]. In terms of photo-protection, the neutral state (B) would be preferable to the anionic state (A). In (B) a significant part of the excitation energy is already dissipated through ESIPT on the sub-[ps] time scale, and the short lifetime of the formed state (240 [ps], vs 2.1 [ns] for the anionic state (A)) reduces the probability of forming reactive species and causing mutagenic reactions.
Further research suggests that, although monomers by themselves are not likely to exist in the full pigment, monomer-like processes still occur to some extent,
in addition to other (e.g. intermonomer) processes. This suggests that the conclusion holds not only for the DHICA building block, but also for the complete eumelanin pigment.
Exercises

2.1 Check the dimensions of Eqs. (2.2) and (2.3). Derive (2.3) from (2.2) using (2.1).
2.2 Find Wien’s law (2.4) from (2.3) using the approximation of small wavelengths.
2.3 Find the Stefan–Boltzmann law (2.5) by integration of (2.2) using E = hν. Look up the nasty integral in a book or use mathematics software to find it. Compare the value of σ you find with the value given in Appendix A.
2.4 Deduce from Figure 2.1 the peak value of solar black-body radiation to be expected on earth. Use for the radius of the sun Rsun = 6.96 × 10⁸ [m] and for the distance from the centre of the sun to the centre of the earth Rse = 1.49 × 10¹¹ [m]. Check with Figure 2.2.
2.5 Show that the probability Pk(t) in Eq. (2.16) is dimensionless.
2.6 In the transition from (2.18) to (2.19) we used

sin((ωk1 − ω)t) / (2(ωk1 − ω)) → (π/2) δ(ωk1 − ω)

for long times t. (a) Plot the left-hand side for a few long times and (b) by a change of variables reduce the integral of the left-hand side to a well-known form.
2.7 Check the dimensions of Eq. (2.19).
2.8 (a) Check the dimensions of Eq. (2.21) and (b) deduce the dimensions of the Einstein coefficient (2.22).
2.9 Consider the ozone filter at a wavelength of 255 [nm]. Calculate the optical density OD for a path length of 0.3 [cm] using Figure 2.8 and Eq. (2.49). Use Eq. (2.42) to calculate the reduction in intensity over the path. Verify that the remaining intensity is negligible. Perform the same calculation for λ = 290 [nm] and compare.
References

[1] Valley, S.L. (ed.) (1965) Handbook of Geophysics and Space Environments, McGraw-Hill, New York.
[2] Parrish, J.A., Anderson, R.R., Urbach, F. and Pitts, D. (1978) UV-A: Biological Effects of Ultraviolet Radiation with Emphasis on Human Responses to Longwave Ultraviolet, Plenum Press, New York.
[3] Inn, E.C.Y. and Tanaka, Y.J. (1953) Ozone absorption coefficient in UV and visible. Journal of the Optical Society of America, 43, 870–873.
[4] Molina, L.T. and Molina, M.J. (1986) Absolute absorption cross sections of ozone in the 185–350-nm wavelength range. Journal of Geophysical Research, 91, 14501–14508, Table 1.
[5] Jagger, J. (1985) Solar UV Actions on Living Cells, Praeger Special Studies, Praeger Publishing Division of Greenwood Press Inc., Westport, Conn., USA. A general text on the interaction of light with biomolecules in relation to the ozone problem.
[6] Ito, T. (1993) UV-B observation network in the Japan Meteorological Agency. Frontiers of Photobiology, 1, 515–518.
[7] Simon, J.D., Peles, D., Wakamatsu, K. and Ito, S. (2009) Current challenges in understanding melanogenesis: bridging chemistry, biological control, morphology, and function. Pigment Cell and Melanoma Research, 22, 563–579.
[8] Tran, M.L., Powell, B.J. and Meredith, P. (2006) Chemical and structural disorder in eumelanins: a possible explanation for broadband absorbance. Biophysical Journal, 90, 743–752.
[9] Huijser, A., Pezzella, A., Hannestad, J.K. et al. (2010) UV-dissipation mechanisms in the eumelanin building block DHICA. ChemPhysChem, 11, 2424–2431.
3 Climate and Climate Change

Environmental Physics: Sustainable Energy and Climate Change, Third Edition. Egbert Boeker and Rienk van Grondelle. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd.

Climate at a certain location is defined as the average weather conditions over a period of some years. It not only comprises the temperature, but also the temperature variations during a year, the rainfall, the cloudiness, the humidity and the like. The climate system described by physical models consists of many components, depicted in colour Plate 1 at the end of this book and in Figure 3.1 in shades of grey. The main drivers of climate are the solar input (top left in the figure), the terrestrial radiation (centre-left) and the atmospheric gases and aerosols (centre). Climate change is caused by changes in these drivers, which for a large part are located in the atmosphere. Therefore in Section 3.1 the vertical structure of the atmosphere is discussed, together with the vertical motion of air. The radiation balance and the greenhouse effect follow in Section 3.2. The uneven distribution of solar input over the globe, the rotation of the earth and the geography of land and oceans cause a horizontal transport of energy by air and water. Their equations of motion are input for climate modelling and are given in Section 3.3. The currently observed climate change is the combined effect of natural variability and human-induced climate change. The natural variability can be studied by analysing climate change before the beginning of the industrial revolution, around 1750. That will be the subject of Section 3.4. The remainder of this chapter deals with modelling (Section 3.5) and the work of the Intergovernmental Panel on Climate Change, IPCC (Sections 3.6 and 3.7; its social context is discussed in Section 9.4.2). The IPCC is not, by itself, a research organization. It consists of groups of scientists, organized in working groups, who summarize and interpret the literature on climate and climate change. They refer to papers appearing in established, peer-reviewed journals, compare the papers and draw conclusions as to the state-of-the-art knowledge in their field. The IPCC has a role in political decision-making as it publishes the consequences of policy, or the lack of policy, concerning the climate. In practice IPCC working groups also have a steering role in research policy, as they point out gaps in knowledge and understanding of climate phenomena. Some of the IPCC graphs which we use in this book are produced by the IPCC
Figure 3.1 The components of the climate system, which define climate and climate change. (Reproduced with permission of Cambridge University Press, copyright 2007, from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the IPCC (Chapter 3, Ref. [1], FAQ 1.2, Fig. 1).) See also Plate 1.
working groups, others are taken from the literature. For simplicity we only refer to pages in the IPCC reports and leave it to the reader to go back to the original papers, if necessary. This chapter should provide the student with enough insights to appreciate the science basis of climate modelling and the consequences of an unrestrained combustion of fossil fuels and other climate-change drivers. Note: in order to appreciate this chapter the student should be familiar with Appendix B, up to Eq. (B11).
3.1 The Vertical Structure of the Atmosphere
The atmosphere extends horizontally on a scale of the order of the circumference of the earth, which is 40 000 [km]. The vertical scale, shown in Figure 3.2, is much smaller, at most 100 [km]. The changes of the weather take place in the so-called troposphere. Looking at the graph from bottom to top, the fall or rise of temperature with altitude defines, in order, the troposphere, stratosphere, mesosphere and thermosphere. The regions separating these ‘spheres’ are called the tropopause, stratopause and mesopause. Their altitude varies with latitude.
Figure 3.2 The vertical structure of the atmosphere. The left vertical scale denotes the altitude, which on the right vertical scales corresponds with respectively the air density and the atmospheric pressure. Horizontally one finds the temperature, with a significant dependence on the seasons around 80 [km] of altitude. The names of the atmospheric regions are indicated. (By permission of Oxford University Press.)
One notices the increase in temperature above 80 [km]. This is mainly due to the photodissociation of molecular oxygen O2 into atomic O. These atoms strongly absorb solar radiation of wavelengths between 100 and 200 [nm]. Alternatively, dissociation into O+ ions and electrons may occur, giving rise to the ionosphere, which reflects radio waves. Similarly, absorption of UV sunlight by O2 between 20 and 40 [km] of altitude produces stratospheric ozone O3, which strongly absorbs solar radiation with wavelengths of 200 to 300 [nm] and causes a general increase in temperature in these regions (cf. Chapter 2). The troposphere is the scene of strong vertical motion of air caused by the heating of the earth’s surface. Its temperature profile can be understood by the adiabatic expansion of a rising parcel of air. ‘Adiabatic’ means that no heat is exchanged with the local environment. To a very good approximation a parcel of air is in hydrostatic equilibrium with the air above and below. In Figure 3.3 a cylindrical parcel is shown with cross section A. Let the pressure at height z be p(z) and at height z + dz be p + dp. In vertical equilibrium the force upwards pA must be equal to the force downwards (p + dp)A + gρA dz. This leads to the equations

pA = (p + dp)A + gρA dz (3.1)

or

dp = −gρ dz (3.2)
Figure 3.3 Derivation of the hydrostatic equation (3.2). An air parcel of area A is in hydrostatic equilibrium with the ambient air.
which is called the hydrostatic equation. In the lower troposphere one may take the gravity acceleration g as constant; the density ρ has a strong exponential decrease with increasing altitude (as can be seen in Figure 3.2) and may not be taken constant. However, one may derive a relation between ρ and p, as the thermodynamic behaviour of the air in the atmosphere can be approximated rather well by the equation of state of an ideal gas:

pV = nRT (3.3)

Here, n is the number of moles in the volume V, where 1 [mole] equals M [g] and M is the molecular weight of the air as a mixture of gases (Exercise 3.1). Also, R is the universal gas constant, given in Appendix A. The mass m of the air is given by

m = nM × 10⁻³ [kg] (3.4)

The specific gas constant for air is defined as

R′ = 10³ R / M (3.5)

Its numerical value is given in Appendix A. By substituting n from (3.4) and R′ from (3.5) into Eq. (3.3) one obtains

p = (m/V) R′T = ρR′T (3.6)

With the hydrostatic equation (3.2) it follows that

∂p/∂z = −gρ = −(g/(R′T)) p = −p/He (3.7)

In most of the atmosphere (Figure 3.2) the temperature T varies between 200 and 300 [K]. Taking some average value Tav for T one may define an effective scale height He = R′Tav/g [m], as was done on the right-hand side of Eq. (3.7). Then Eq. (3.7) gives an exponential decrease of pressure with height:

p = p₀ e^(−z/He) (3.8)
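Eq. (3.8) can be checked numerically in a few lines; the values of R′ and g below are standard data assumed here rather than copied from Appendix A:

```python
import math

# Barometric scale height He = R' * T_av / g from Eqs. (3.7)-(3.8).
# R' and g are standard values assumed here (cf. the book's Appendix A).

R_prime = 287.0         # specific gas constant of dry air [J kg^-1 K^-1]
g = 9.81                # gravitational acceleration [m s^-2]
T_av = 250.0            # representative atmospheric temperature [K]

He = R_prime * T_av / g                # scale height [m]
z10 = He * math.log(10.0)              # altitude for a factor-10 pressure drop

print(f"He  = {He/1000:.1f} km")       # 7.3 km
print(f"z10 = {z10/1000:.1f} km")      # 16.8 km, as stated in the text
```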
For T = 250 [K] one finds He ≈ 7.3 [km] and a decrease of a factor of 10 in pressure for an increase of altitude of 16.8 [km]. This indeed agrees with the logarithmic character of the scales on the left and right in Figure 3.2 and correctly gives their relationship. When a parcel of dry air moves up without exchange of heat with its surroundings, the relationship between temperature and altitude follows the so-called dry adiabat. Similarly, a parcel of wet, saturated air will condense part of its water vapour and follow the saturated adiabat. The resulting formation of clouds is an important element of the climatic situation and is accounted for by sophisticated models. These three subjects will be discussed below.

3.1.1.1 The Dry Adiabat
Consider 1 [kg] of air. The first law of thermodynamics

δQ = cV dT + p dV (3.9)

expresses that some of the added heat δQ is used to increase the temperature by dT, while the remaining heat energy is used to perform work p dV. For constant volume the only use for the heat is to increase the temperature; therefore cV is the specific heat at constant volume [J kg⁻¹ K⁻¹]. Consider a parcel of air, which may deform and expand, but retains its mass of 1 [kg]. Its volume is then related to its density ρ by V = 1/ρ. Eq. (3.9) may then be rewritten with the help of Eq. (3.6) as

δQ = cV dT + p d(1/ρ) = cV dT + d(p/ρ) − (1/ρ) dp = cV dT + R′ dT − (1/ρ) dp = cp dT − (1/ρ) dp (3.10)

Here, cp = cV + R′ is the specific heat at constant pressure, which follows from taking dp = 0. For dry air without exchange of heat with its surroundings δQ = 0, and the relation between temperature and pressure follows from the right-hand side of Eq. (3.10) as cp dT = dp/ρ. This may be rewritten as dT = dp/(ρcp) = R′T dp/(pcp) or

dT = (R′T/(pcp)) dp (3.11)

where Eq. (3.6) was used. Eq. (3.11) may be used to derive the famous Poisson equation for adiabatic change of an ideal gas

pV^κ = constant (3.12)

in which κ = cp/cV (Exercise 3.2). The dry adiabat is given as the change of temperature with altitude z:

∂T/∂z = (∂T/∂p) × (∂p/∂z) = (R′T/(pcp)) × (−gp/(R′T)) = −g/cp (3.13)
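Eq. (3.13) can be evaluated directly; the specific heats used below are standard values assumed here, and the wet correction anticipates Eq. (3.14) below:

```python
# Dry adiabatic lapse rate Gamma_d = g / cp from Eq. (3.13), plus the
# humid (unsaturated) correction of Eq. (3.14) below. The cp values are
# standard data assumed here, not taken from the book's Appendix A.

g = 9.81          # gravitational acceleration [m s^-2]
cp_air = 1005.0   # specific heat of dry air at constant pressure [J kg^-1 K^-1]
cp_h2o = 1865.0   # specific heat of water vapour [J kg^-1 K^-1]

gamma_d = g / cp_air
print(f"Gamma_d = {gamma_d*1000:.2f} K/km")      # 9.76 K/km, i.e. ~0.01 K/m

omega = 0.01      # 1% water vapour by mass (example value)
cp_mix = (1 - omega) * cp_air + omega * cp_h2o   # Eq. (3.14)
print(f"with 1% vapour: {g/cp_mix*1000:.2f} K/km")
```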
The quantity Γd = g/cp is called the dry adiabatic lapse rate and represents the slope of the curve T(z). With the data of Appendix A one finds Γd ≈ 0.01 [K m⁻¹]. The minus sign in Eq. (3.13) illustrates that rising air will expand and consequently will cool. For wet, but not saturated, air with a mass fraction ω of water vapour Eq. (3.13) may be used with the understanding that

cp = (1 − ω) cp,air + ω cp,water vapour (3.14)

3.1.1.2 The Saturated Adiabat
If the parcel of humid air rises and cools, the amount of water vapour it can contain decreases and part of the water vapour will condense. The mass fraction ω of water vapour in the parcel will change by an amount dω < 0. Here (−dω) is the mass fraction that has condensed. The evaporation heat per unit of mass is given as Hv [J kg⁻¹]. Eqs. (3.9) and (3.10) give

δQ = Hv(−dω) = cp dT − (1/ρ) dp (3.15)

A derivation similar to the one from Eqs. (3.11) to (3.13) yields

dT = (1/(ρcp)) dp − (Hv/cp) dω (3.16)

∂T/∂z = −g/cp − (Hv/cp)(∂ω/∂z) = −Γd − (Hv/cp)(∂ω/∂z) = −Γs (3.17)

The saturated adiabatic lapse rate Γs is smaller than the dry rate Γd, as ∂ω/∂z < 0. This is clear because the liberated condensation heat compensates somewhat for the cooling by expansion. This effect is largest for warm air, as it can contain a lot of water vapour, leading to a factor of 2 or even 3 between both lapse rates.

3.1.1.3 Clouds
The principle of cloud formation may be illustrated by the formation of a cumulus cloud, as depicted in Figure 3.4. At ground level a parcel of air is heated and starts rising. It contains water vapour, and its temperature decreases according to Eqs. (3.13) and (3.14). In the example of the graph the parcel is somewhat warmer than its surroundings and keeps rising following its adiabat (this situation is called dry, unstable). At level A condensation starts and the parcel follows the saturated adiabat with a somewhat smaller slope; in the graph the parcel now is considerably warmer than the ambient air, it keeps rising and condensing until all water vapour has been condensed at level B (this situation is called wet, unstable). It then follows again a dry adiabat, where from level C the parcel appears to be cooler than the ambient air. It therefore stops rising. (This situation is called dry, stable). In between levels A and B a high cloud, a cumulus, has been formed.
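The parcel march just described can be caricatured in a few lines. The lapse rates, the ambient profile and the condensation levels A and B below are invented numbers chosen only to reproduce the qualitative behaviour of Figure 3.4:

```python
# Toy parcel ascent after Figure 3.4: the parcel cools at the dry lapse
# rate Gamma_d up to the condensation level A, then at the smaller
# saturated rate Gamma_s up to level B, then dry again. The ambient
# profile and the levels A, B are invented numbers for illustration.

GAMMA_D = 9.8e-3   # dry adiabatic lapse rate [K/m]
GAMMA_S = 5.0e-3   # saturated lapse rate [K/m] (roughly half of the dry rate)

def parcel_T(z, T0=30.0, zA=1000.0, zB=3000.0):
    """Parcel temperature [degC] at altitude z [m], surface temperature T0."""
    if z <= zA:                               # below cloud base: dry adiabat
        return T0 - GAMMA_D * z
    if z <= zB:                               # inside cloud: saturated adiabat
        return parcel_T(zA) - GAMMA_S * (z - zA)
    return parcel_T(zB) - GAMMA_D * (z - zB)  # above cloud top: dry again

def ambient_T(z, T0=25.0):
    return T0 - 6.5e-3 * z                    # typical mean tropospheric profile

# The parcel keeps rising as long as it is warmer than the ambient air:
for z in range(0, 6001, 500):
    if parcel_T(z) <= ambient_T(z):
        print(f"parcel stops rising near z = {z} m")   # around 4500 m here
        break
```

With these numbers the parcel stays buoyant through the cloud layer (levels A to B) and only overtakes the ambient temperature some way above B, exactly the A–B–C sequence of the figure.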
3.2 The Radiation Balance and the Greenhouse Effect
The two principal factors determining the earth’s surface temperature are the solar flux in and the terrestrial flux out. The solar input is proportional to πR², which is distributed
Figure 3.4 Formation of a cumulus cloud. A rising parcel of air first follows a dry adiabat and starts condensation at level A, following a saturated adiabat. At level B all vapour has left and a dry adiabat is followed again. The ambient temperature is drawn; the ‘path’ of the parcel of air is dashed.
over the total surface of 4πR² and results in S/4 as average input, in the same way as in Section 1.2. In an equilibrium situation the solar influx and the terrestrial outflux are in balance. This radiation balance of the earth, in its simplest version, is given in Figure 3.5. The atmosphere, in the middle region of the figure, absorbs solar radiation and scatters or reflects part back into outer space. The remainder reaches the earth’s surface, where part is reflected and part is absorbed. The 30% reflection and absorption in the figure comprises reflection by the earth’s surface and the atmosphere, including clouds, and represents the albedo of the earth as viewed from outer space, given in Section 1.2. The surface loses energy by emission of IR, most of which is radiated back, that is, downwards, by the atmosphere. The relevant data are given on the right in Figure 3.5. The net effect of the radiation is warming of the surface by 31 units and cooling of the atmosphere by the same amount. This is compensated by two effects. The first is the evaporation of water, which cools the surface and after condensation heats the atmosphere (26 units). The second is the convection of heated air upwards following the dry adiabat discussed before (5 units). The student may check that for outer space, for the atmosphere and for the surface, the incoming energy equals the outgoing, as it should in an equilibrium situation. On the right in Figure 3.5 one observes that most of the IR emission of the earth is absorbed by the atmosphere. Only 7% of the emitted 114% reaches outer space; this may be interpreted as a transmission of t = 7/114 = 0.06 of the atmosphere for IR radiation. The absorption of IR radiation by the atmosphere is strongly dependent on its wavelength. This is illustrated in Figure 3.6, which shows the upward IR flux at the tropopause.
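The bookkeeping check suggested in the text (incoming equals outgoing for outer space, the atmosphere and the surface) can be written out with the percentages of Figure 3.5:

```python
# Energy bookkeeping of Figure 3.5, all numbers in % of the mean solar
# input S/4. Each reservoir must balance in equilibrium.

solar_reflected  = 30   # reflected/scattered back to space
solar_atmosphere = 23   # solar radiation absorbed by the atmosphere
solar_surface    = 47   # solar radiation absorbed at the surface

ir_surface_up = 114     # IR emitted by the surface
ir_window     = 7       # part of it escaping directly through the window
ir_atm_up     = 63      # IR emitted by the atmosphere to space
ir_atm_down   = 98      # IR emitted by the atmosphere to the surface
latent_heat   = 26      # evaporation at the surface, condensation aloft
convection    = 5       # sensible heat carried up by rising air

space_in    = solar_reflected + ir_atm_up + ir_window           # 100
surface_in  = solar_surface + ir_atm_down                       # 145
surface_out = ir_surface_up + latent_heat + convection          # 145
atm_in  = solar_atmosphere + (ir_surface_up - ir_window) \
          + latent_heat + convection                            # 161
atm_out = ir_atm_up + ir_atm_down                               # 161

print(space_in, surface_in == surface_out, atm_in == atm_out)   # 100 True True
```

The surface radiative deficit of 145 − 114 = 31 units is exactly the 26 + 5 units carried upward by evaporation and convection, as stated in the text.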
[Figure 3.5 here. Fluxes in % of the solar input S/4 = 100%: scattering and reflection 30% to deep space; absorption by the atmosphere 23%; absorption at the surface 47%. The surface (at +15 [°C]) emits 114%, of which 7% escapes through the window and 107% is absorbed by the atmosphere; it loses 26% by evaporation and 5% by convection, and receives 98% back from the atmosphere, which emits 63% to deep space. Estimated energy content of the atmosphere: kinetic energy 0.8, potential energy 740, internal energy 1880, static energy 2620, latent heat 64 [MJ m−2].]
Figure 3.5 Radiation balance for the earth and the atmosphere. The solar input is taken as 100% and its destiny is displayed. The bottom line depicts the earth's surface, the upper part outer space, and the atmosphere occupies the region in between. For the atmosphere, estimates of the energy content are displayed. (Reproduced with permission of the Dept of Geography, ETH Zürich, copyright 1990 from [3], p. 28.)
Figure 3.6 The intensity of the radiation (2.4) is displayed as a function of μ = 1/λ. The solid line shows the upward infrared flux I_μ at the tropopause [Wm−1]. It is fitted with three dashed lines, corresponding to black-body radiation at temperatures of 294 [K], 244 [K] and 194 [K]. Note that the wavelength λ is shown (nonlinearly) on top of the figure. (Reproduced with permission of Cambridge University Press, copyright 2007, from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the IPCC (Chapter 3, Ref. [1], FAQ 1.2, Fig. 1).)
Climate and Climate Change
It is fitted with three black-body curves, of which the curve with T = 294 [K] gives the best fit. This is a little higher than the T = 288 [K] which is adopted in this book as the temperature of the earth, but the curve will be very similar. It appears that the atmosphere is relatively transparent for wavelengths in a 'window' between 8 [μm] and 12.5 [μm], but opaque at wavelengths around 15 [μm]. In general, greenhouse gases are gases in the atmosphere that absorb IR radiation. The most important are CO2 and H2O, but we will see later that others also contribute. The transmission t = 0.06 of the atmosphere refers to the radiation passing the window. The rest of the IR emission is absorbed, re-emitted, absorbed and so on, in a process of radiative transfer to be discussed in Section 3.2.2. At this point we just notice that the thermal radiation upwards (63 units) and downwards (98 units) in Figure 3.5 is different. This can be explained by the fact, shown in Figure 3.2, that the lower atmosphere is warmer than the upper atmosphere. The lower atmosphere emits more black-body radiation than the upper atmosphere because of the Stefan–Boltzmann law σT^4 (see Eq. (2.6)). Both layers emit equally up and down. Outside the window shown in Figure 3.6 most upward radiation from the lower atmosphere is absorbed, as is most of the downward radiation from the upper atmosphere. One is left with warm radiation downwards and cooler radiation upwards. In the following we first discuss the effect of simple changes in the radiation balance (Section 3.2.1); next we discuss radiative transfer in the atmosphere (Section 3.2.2), with an analytical model in Section 3.2.3. The results are used to describe global warming (Section 3.2.4) and the properties and effects of the greenhouse gases (Section 3.2.5).

3.2.1 Simple Changes in the Radiation Balance
From Figure 3.2 it follows that most of the mass of the atmosphere is concentrated in the troposphere. Most changes occur in the troposphere; the higher layers of the stratosphere are much more stable and adapt quickly to changes in the radiation balance. This is why, in the following, the radiation balance is studied at the top of the troposphere. At the top of the troposphere Figure 3.5 may be summarized as

(1 − a)S/4 = σT_a^4 + tσT_s^4    (3.18)

The left-hand side summarizes the incoming solar radiation; the right-hand side adds the contribution from the higher troposphere to the terrestrial IR radiation transmitted through the window. The albedo of the earth a = 0.30 is composed of contributions from all parts of the earth, the troposphere and the clouds. It represents some average of the individual albedos shown in Table 3.1. We cannot proceed without a model, but for lack of one we just take the difference T_s − T_a = constant. We then may make a few qualitative observations of the behaviour of the equilibrium surface temperature T_s under extreme conditions (Exercise 3.5).
3.2.1.1 The White Earth
Assume that the earth's surface is completely covered with snow and ice, both on land and on the oceans. The resulting albedo is high; Table 3.1 suggests a value a = 0.50. One then finds a surface temperature T_s = 268 [K], which is well below the freezing point
Table 3.1 Mean values of albedos for various surfaces.

Surfaces                              %     Clouds            %
Horizontal water (low solar angle)    5     Cumuliform    70–90
Fresh snow                           85     Stratus       60–85
Sand desert                          30     Altostratus   40–60
Green meadow                         15     Cirrostratus  40–50
Deciduous forest                     15
Coniferous forest                    10
Crops                                10
Dark soil                            10
Dry earth                            20

With permission of John Wiley, copyright 1977, reproduced from [5], p. 30.
of sea water, which is 271.1 [K]. This means that the earth would remain covered with snow and ice; in other words, a white earth is a stable solution of the energy balance equations. It is, however, so remote from the present situation that the transition to a white earth is hardly conceivable.
3.2.1.2 The Nuclear Winter/Extinction of Dinosaurs
The climatic consequences of nuclear war were seriously discussed in the 1980s. It was argued that the explosion of a few hundred nuclear warheads would lead to enormous fires all over the world. That would bring great quantities of small particles of dust and smoke into the atmosphere, essentially cutting much of the earth's surface off from sunlight. Even a slightly larger value a = 0.35 would lead to T_s = 284 [K], a cooling of 4 [°C]. It is to be hoped that mankind will be spared a nuclear winter. A similar phenomenon may have occurred some 66 million years ago with the extinction of the dinosaurs. Many specialists believe that an asteroid hit the earth, which would have raised a global dust cloud with the same cooling effect, possibly resulting in a lack of food.
3.2.1.3 The Cool Sun
If the sun behaves like other stars of its type, some billions of years ago the solar irradiance S must have been considerably smaller than nowadays. Two billion years ago, for example, it would have been 85% of its present value, and 4 billion years ago some 72% of its present value [5]. In our approximation this would lead to temperatures of 278 and 268 [K], respectively. Still, life existed even then, requiring temperatures well above the freezing point of water (273 [K]). It is therefore believed that an elevated concentration of greenhouse gases in the atmosphere largely compensated for the smaller solar irradiance. The increase of the solar irradiance with time must then to a significant extent have been compensated by the fixation of greenhouse gases like CO2.
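The three estimates above all follow from Eq. (3.18) with T_s − T_a held fixed. A minimal numerical sketch; the value of S/4 and the calibration of T_s − T_a on the present climate are assumptions consistent with the text:

```python
SIGMA = 5.67e-8     # Stefan-Boltzmann constant [W m-2 K-4]
S4 = 342.0          # S/4, mean solar input [W m-2] (assumed)
T_WIN = 0.06        # IR transmission t of the atmospheric window

def balance(Ts, a, dT, s_frac=1.0):
    """Eq. (3.18): outgoing minus incoming flux at the tropopause."""
    Ta = Ts - dT
    return SIGMA * Ta**4 + T_WIN * SIGMA * Ts**4 - (1 - a) * s_frac * S4

def solve_Ts(a, dT, s_frac=1.0, lo=150.0, hi=350.0):
    """Bisection: balance() is monotonically increasing in Ts."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if balance(mid, a, dT, s_frac) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

# Calibrate Ts - Ta on the present climate: a = 0.30, Ts = 288 [K].
Ta_now = ((0.70 * S4 - T_WIN * SIGMA * 288**4) / SIGMA) ** 0.25
dT = 288.0 - Ta_now    # about 40 [K]

print(solve_Ts(0.50, dT))           # white earth: close to 268 [K]
print(solve_Ts(0.35, dT))           # nuclear winter: close to 284 [K]
print(solve_Ts(0.30, dT, 0.85))     # cool sun, 85% of S: close to 278 [K]
print(solve_Ts(0.30, dT, 0.72))     # cool sun, 72% of S: close to 268 [K]
```

The bisection is a convenience; any root finder on Eq. (3.18) gives the same equilibrium temperatures.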
Figure 3.7 Absorption of radiation in the atmosphere. A value of 100% indicates that the radiation is not able to pass through the entire atmosphere. For solar radiation this is defined from the top of the atmosphere to the surface, while for IR radiation it is defined from the earth's surface to the top of the atmosphere. The left-hand figure refers to a broad range of wavelengths and is drawn horizontally on a logarithmic scale. The right-hand figure is restricted to the most important greenhouse gases H2O and CO2. Underneath, the averaged values of the absorption coefficient k for parts of the spectrum are indicated (cf. Eq. (3.19)). (With permission of Oxford University Press, copyright 2010, reproduced from [2], Figure 8.5, p. 290.)
3.2.2 Radiation Transfer¹
In Figure 3.6 it was shown that the atmospheric gases absorb IR radiation at many wavelengths outside a ‘window’ between 8 and 12.5 [μm]. In Section 3.2.5 we will find that the gases which contribute most significantly to this absorption are CO2 and H2 O. This is shown explicitly in Figure 3.7. The left picture refers to the complete solar and terrestrial spectrum and therefore is drawn horizontally on a logarithmic scale. On the vertical scale 100% absorption at a wavelength means that radiation at that wavelength cannot cross the atmosphere directly. As to the IR window around 10 [μm], it appears that CO2 and H2 O hardly absorb there, although there is some absorption by O3 . The absorption lines shown appear in bands, much broader than the narrow lines of individual molecules that one may find in the literature [7]. One reason is that there are often several absorption lines close together. Each of them is broadened, amongst other things due to collisions between molecules in the air which shorten the lifetimes of their excited states (Section 8.1.3). These facts together result in the appearance of absorption bands. The right-hand side of Figure 3.7 shows a very schematic representation of the absorption of CO2 and H2 O. Underneath the averaged values of the absorption coefficient k for the part of the spectrum is indicated. These values are to be considered as approximations connected with the simplified analytical model to be discussed below.
1 The authors are indebted to Dr Rob van Dorland for his assistance with this subsection. This subsection and the next one are based on Van Dorland’s thesis [6].
We are interested in the upward flux of long-wave radiation at the top of the troposphere and its change by the addition of greenhouse gases. For global temperature changes the effects of changes of atmospheric greenhouse gas content on the net flux at the top of the troposphere are dominant (Section 3.2.4). Since the changes in upward flux are more important than the changes in the downward component, we consider here the upward flux and its change by the addition of greenhouse gases. We will derive a general formula for the upward flux at an altitude z in the atmosphere. Figure 3.8 shows that there are two contributions, one from the surface and one from the underlying layers of the atmosphere. To be more specific, we consider a certain spectral range Δλ of the black-body spectrum (2.4). Its emission will be called B [Wm−2]; from the surface of the earth it will be denoted by an index s, which gives B_s, and from an arbitrary altitude z′ in the atmosphere it will be indicated by B(z′). Its dependence on the wavelength will not be indicated explicitly. The contribution from the surface follows from Lambert–Beer's law (2.36), which has already been derived, as

F_s^+ = B_s e^{−kz}    (3.19)
In order to find the contribution from the underlying layers we consider 1 [m²] in a strip between z′ and z′ + Δz′, as indicated in Figure 3.8. Such a strip of gas does not emit in the same way as a solid surface. Its emission, for example, will be proportional to its thickness. Its emission is found by imagining the strip in a cavity which is in thermodynamic equilibrium. In this case black-body radiation is absorbed and the same amount of radiation is emitted again. Even without a cavity, local thermodynamic equilibrium would require that the black-body emission equals the absorption of a hypothetical incoming black-body beam. We assume that this is the case. From Eq. (2.35) or (2.36) it follows that over the strip the absorption of a beam with intensity I will be given by ΔI = −kIΔz′. If a black-body spectrum B(z′) entered this strip the absorption would be ΔB = −kBΔz′. For local thermodynamic equilibrium the emission should be equal to kBΔz′. Therefore, for any incident beam the emission ΔF_atm^+(z′) from the strip should have this behaviour:

ΔF_atm^+(z′) = kB(z′)Δz′    (3.20)
Figure 3.8 The upward radiation flux at height z in the atmosphere has a contribution from the surface and from underlying layers. The black-body contribution from the strip between z′ and z′ + Δz′ is indicated by ΔF_atm^+.
The upward flux from the atmosphere at location z is given by the following integral, where the absorption is taken into account by Eq. (2.36) with the proper distance z − z′:

F_atm^+ = ∫_0^z kB(z′) e^{−k(z−z′)} dz′    (3.21)
For the downward flux a similar relation can be derived. Equations (3.19) and (3.21) describe the radiation transfer in a vertical direction. Note that the absorption coefficient k defined in Eq. (2.37) is dependent on the absolute concentration of the gas, which is N_1/V. In deriving Lambert–Beer's law (2.36), k was assumed to be independent of the coordinate. In order to proceed with simple formulas we stick with this assumption, knowing that precise results have to be computed numerically. Then the decrease in absolute concentration with altitude has to be taken into account. From the derivation of Lambert–Beer's law (2.36) it also follows that the absorption coefficient is dependent on the transition probability B12 and therefore is a function of the wavelength of the radiation. In an extremely simplified way, values of k are given in Figure 3.7, where for the atmospheric window a value k_w = (9100 [m])^−1 is given and outside the window k_outw = (9 [m])^−1. This means that at a distance of 9 [m] an IR beam outside the window decreases its intensity by a factor 1/e ≈ 0.37. If the human eye were only sensitive to λ ≈ 15 [μm] it would be difficult to survive. In fact, at this wavelength the CO2 absorption is very strong, leading to a k of the order of 1 [cm−1] in this wavelength region, resulting in a visible depth of only a few [cm]. In order to proceed one needs to know the function B(z). We distinguish between the window region 9–12.5 [μm] and the rest of the spectrum. At the surface, with a temperature of 288 [K], one finds B_s^w = 110 [Wm−2] inside the window and B_s^out = 280 [Wm−2] outside (Exercise 3.6). For the change in B(z) with altitude we make two different assumptions. The first is a linear decrease in the atmospheric temperature with altitude up to the tropopause, which is suggested by Figure 3.2. Then it is simple to integrate Eqs. (3.19) and (3.21) numerically and find the upward going fluxes at the top of the troposphere.
This will be done in Example 3.1; the second assumption is a linear decrease of B(z) with z; this will be done in Section 3.2.3.
Example 3.1 Radiation at the Top of the Troposphere
(a) Put the temperature at the top of the troposphere (13 000 [m]) at 247 [K] and write down the linearly decreasing function T(z). (b) Calculate the surface contribution (3.19) at the top of the troposphere, inside and outside the window. Compare with the data in Figure 3.5. (c) Similarly, compute the upgoing atmospheric radiation (3.21) at the top of the troposphere, inside and outside the window, and compare with Figure 3.5.
Answer: (a) Write T(z) = a + bz; substitute 288 [K] for z = 0 and 247 [K] for z = 13 000. It follows that T(z) = 288 − 0.00315z. (b) Inside the window B_s^w = 110.45 [Wm−2], to be multiplied by e^{−13000/9100}, which gives 26.5 [Wm−2]. Outside the window one has B_s^out = 280 [Wm−2], to be multiplied by e^{−13000/9}, which is negligible. The result 26.5 [Wm−2] has to be compared with 0.07 × S/4 = 23.9 [Wm−2] of Figure 3.5, which is a reasonable agreement. (c) The Planck function (2.4) is dependent on temperature. Strictly speaking, the fraction of the spectrum inside and outside the window will be a function of temperature, and therefore also of altitude. We ignore this for the rather small range of temperatures and write inside the window B_inside(z) = (110.45/390.5) × σT(z)^4; integration of (3.21) up to 13 000 [m] is straightforward and gives 58.8 [Wm−2]. Outside the window, similarly, B_outside(z) = (280/390.5) × σT(z)^4, which results in 151.4 [Wm−2]. Together this gives 210.2 [Wm−2], to be compared with 0.63 × S/4 = 215.2 [Wm−2], again a reasonable agreement. Note the following: (1) The temperature of 247 [K] at the tropopause was chosen so as to give reasonable results for this simple model. A more realistic value would be 220 [K]. (2) Because of the large value of k outside the window only the highest strip in the atmosphere contributes to Eq. (3.21). The fact that the decrease of k with altitude was not taken into account overestimates the absorption in the top layers, which here was corrected by using a high value for the temperature at the tropopause. (3) The contribution of the atmospheric emission at the top of the troposphere is significant both inside and outside the window. (4) A realistic calculation should divide the spectrum into many bands.
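The numbers of Example 3.1 can be reproduced with a short numerical sketch; the B and k values are taken from the text, while the midpoint integration rule and step size are assumptions:

```python
import math

H = 13000.0                         # height of the tropopause [m]
T_S, T_TOP = 288.0, 247.0           # surface and tropopause temperatures [K]
K_WIN, K_OUT = 1 / 9100.0, 1 / 9.0  # absorption coefficients [m-1]
B_S_WIN, B_S_OUT = 110.45, 280.0    # surface emission in/outside window [W m-2]

def T(z):
    """Linear temperature profile between surface and tropopause."""
    return T_S + (T_TOP - T_S) * z / H

def surface_flux(B_s, k):
    """Eq. (3.19): surface contribution transmitted to height H."""
    return B_s * math.exp(-k * H)

def atmos_flux(B_s, k, n=20000):
    """Eq. (3.21) by the midpoint rule; B(z) scales as sigma*T(z)^4."""
    dz = H / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        B = B_s * (T(z) / T_S) ** 4
        total += k * B * math.exp(-k * (H - z)) * dz
    return total

print(surface_flux(B_S_WIN, K_WIN))  # close to the 26.5 [W m-2] of the text
print(atmos_flux(B_S_WIN, K_WIN))    # close to 58.8 [W m-2] (window)
print(atmos_flux(B_S_OUT, K_OUT))    # close to 151.4 [W m-2] (outside window)
```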
The addition of more gases to the atmosphere, which absorb in the IR, the so-called greenhouse gases, will increase the value of the absorption coefficient k at certain wavelengths. Without a temperature change of the earth or of the atmosphere the total upward going flux at the tropopause will become smaller. This effect will be largest in the window, where the absorption is still small. This statement may be checked numerically for the model in Example 3.1 (Exercise 3.8).
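This statement can be checked with the same toy model as in Example 3.1 (cf. Exercise 3.8); the doubling of k below is an arbitrary choice for illustration:

```python
import math

H, T_S, T_TOP = 13000.0, 288.0, 247.0
B_S = 110.45     # surface emission inside the window [W m-2]

def upward_flux(k, n=20000):
    """Surface term (3.19) plus a midpoint-rule integral of (3.21)."""
    total = B_S * math.exp(-k * H)
    dz = H / n
    for i in range(n):
        z = (i + 0.5) * dz
        T = T_S + (T_TOP - T_S) * z / H
        total += k * B_S * (T / T_S) ** 4 * math.exp(-k * (H - z)) * dz
    return total

k_w = 1 / 9100.0
print(upward_flux(k_w))       # about 85 [W m-2]: 26.5 surface + 58.8 atmosphere
print(upward_flux(2 * k_w))   # smaller: more absorption, less upward flux
```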
3.2.3 A Simple Analytical Model
Van Dorland [6] introduced an analytical model to study the radiation balance. The decrease in black-body radiation with height is assumed to be linear:

B(z) = B_s − Γz    (3.22)

where Γ = −∂B/∂z > 0. This is a drastic assumption and has to be qualified, but for our purpose it is sufficient to mention that the variable z in Eq. (3.22) increases with height [6].
Next substitute Eq. (3.22) into Eq. (3.21) and find after some algebra

F_atm^+ = (B_s + Γ/k)(1 − e^{−kz}) − Γz    (3.23)
Add the surface contribution (3.19) and find for the total upward flux

F^+(z) = B_s − Γz + (Γ/k)(1 − e^{−kz})    (3.24)

The effect of the increase of the concentration of greenhouse gases, and hence of k, is shown by differentiating (3.24) with respect to k. This gives
dF^+/dk = −(Γ/k²)[1 − (1 + kz)e^{−kz}]    (3.25)

One may verify that the expression between brackets is always positive; hence the derivative (3.25) is always negative and the upward flux decreases with increasing k. In an atmospheric window one will have kz ≪ 1 and may approximate (3.25) by dF^+/dk = −Γz²/2, which increases in magnitude with z. Outside the window(s), in an opaque region, one will have kz ≫ 1 and (3.25) may be approximated by dF^+/dk = −Γ/k², which is very small for large k. An increase of the absorption k then gives hardly any decrease in flux, a phenomenon which is known as saturation.

3.2.4 Radiative Forcing and Global Warming
At this point we understand that one may calculate the decrease of the upward going flux at the top of the troposphere by the addition of greenhouse gases. To be complete one also has to compute the change in downward flux originating from changes in the stratosphere. All taken together there will be a decrease ΔI of the upward going flux at the top of the troposphere. The climate system will react by increasing the temperature of the earth's surface and changing the temperature of the atmosphere. A convenient way to quantify the climatic effect of the human addition of greenhouse gases is given by the concept of radiative forcing. The idea can be illustrated by Figure 3.9, where a simple atmosphere is sketched. The net radiation flux at the top of the atmosphere will vanish under equilibrium conditions. Assume a sudden increase in the concentration of greenhouse gases. This would lead to a net reduction in the outgoing long-wavelength radiation at the top of the atmosphere
Figure 3.9 A sudden decrease ΔI of the outward flux at the top of the atmosphere will lead to an increase in the surface temperature.
by ΔI. The incoming energy from the sun remains the same. The energy balance at the top of the atmosphere should be restored by an increase ΔT_s of the earth's surface temperature. This effect is called radiative forcing: the radiation imbalance at the top of the troposphere enforces a rise in temperature ([1], p. 134). The required flux increase ΔI to compensate for the decrease by greenhouse gas absorption will be connected with the increase of the surface temperature ΔT_s by the relation

ΔI = (∂I/∂T_s) ΔT_s    (3.26)
The intensity I = tσT_s^4 is the earth's direct outgoing radiation measured at the top of the atmosphere. We ignore changes in the radiation transfer by the atmospheric layers. It then follows with Eq. (1.5) that

∂I/∂T_s = 4tσT_s^3 = 4I/T_s = (4/T_s)(1 − a)S/4    (3.27)

Substitution of the correct values and a = 0.30 gives ∂I/∂T_s = 3.3 [Wm−2 K−1]. Eq. (3.26) is often written the other way round as ([8], p. 656)

ΔT_s = GΔI    (3.28)
where G ≈ 0.3 is called the gain factor, which operates on the cause ΔI to produce the effect ΔT_s. The concept of radiative forcing dates back to the beginning of climate modelling, but is still widely used to compare models, or to compare the contributions of the individual greenhouse gases to climate change. In fact all changes in the components of the climate system depicted in Figure 3.1, such as changes in land use, may be computed in terms of a forcing. In Section 3.6 we will show the comprehensive summary of all forcings taken into account by IPCC. This leads to a total, averaged global forcing (since 1750) in 2005 of ΔI = 1.6 [Wm−2] ([1], p. 204). With Eq. (3.28) this would give ΔT_s = 0.5 [°C]. This calculation is too simple for two reasons, which we will briefly discuss below. Firstly, a temperature increase at the surface will lead to effects which will reinforce or mitigate the warming effect. Secondly, the oceans have an enormous heat capacity, which will slow down global warming.

3.2.4.1 Effects Reinforcing Global Warming

• Melting of ice and snow will lower the albedo.
• More water vapour in the air will lead to a smaller transmission t and a higher value of k.
• The increase in the cloud cover from higher evaporation will have the same effect.
• Several processes will cause a (further) increase in the CO2 concentration: a higher sea water temperature gives less CO2 absorption in sea water; higher polar temperatures will cause a smaller ocean circulation and decreasing absorption; a faster decay of organic materials will give more CO2 and CH4 (from rotting).
• Thawing of permafrost, covering some 20% of the earth's land surface, may liberate large quantities of CH4 and CO2 trapped in the frozen soil ([1], pp. 77, 110).
• More CO2 leads to an increased growth of plants, which may lower the albedo, according to Table 3.1.

3.2.4.2 Effects Mitigating Global Warming

• An increase in sea water temperature would lead to an increase in the growth of algae. They would use CO2 for photosynthesis and reduce the CO2 concentration in the water, absorb more from the atmosphere and reduce the atmospheric CO2 concentration.
• Most CO2 is due to burning of fossil fuels, which liberates aerosols, small particles that will increase the earth's albedo a by backscattering incoming solar radiation.
• With a higher surface temperature more sea water will evaporate. At higher altitudes it will condense, heating the upper layers more and cooling the surface.
• More CO2 leads to an increased growth of plants, which will bind CO2.

Taking everything together, one can replace Eq. (3.28) by

ΔT_s = G_f ΔI    (3.29)
where the precise value of G_f [K/(Wm−2)] has to be found by detailed modelling. The quantity G_f [K/(Wm−2)] is called the climate sensitivity parameter (λ in [1], p. 133 or S in [1], p. 825), defined as the increase in temperature [K] for a radiative forcing of 1 [Wm−2]. Its value may be estimated from the data IPCC [1] gives for a doubling of the CO2 concentration, viz. ΔT_s ≈ 3.0 [°C] ([1], p. 799) and ΔI ≈ 3.7 [Wm−2] ([1], p. 140). This gives G_f ≈ 0.8 [K/(Wm−2)]. With the forcing ΔI = 1.6 [Wm−2] mentioned before one finds the temperature rise since 1750 as 1.3 [°C].
Time Delay by Ocean Warming
Equations (3.26) to (3.29) were derived assuming a sudden increase in greenhouse gas concentration with an immediate response of the surface temperature by T s . We now still keep the sudden increase in greenhouse gas concentrations, leading to the same extra downward flux I [Wm−2 ]. That flux is used to heat the surface which, we assume, will be ocean only. The upper layer of the ocean is well mixed. The mixing with the lower layers is much less. Therefore in a first approximation the incoming heat is used to warm the top mixed layer of the ocean. Let the heat capacity of a column with a cross section of 1 [m2 ] in the mixed layer be cs [J m−2 K−1 ]. The temperature increase T s of the water will be a function of time T s (t) Between times t and t + dt the incoming extra heat from above will be Idt [J m−2 ]. The temperature rise of the mixed layer resulting from this extra incoming heat at time t will be d(T s (t)) [K] for which energy cs d(T s ) [J m−2 ] is needed. The temperature rise T s (t) induces up going radiation T s /Gf [Wm−2 ] at time t (see Eq. (3.7)). Between times t and t + dt that becomes (T s /Gf )dt [J m−2 ]. This leads to an equation with dimensions [J m−2 ] on the left- and right-hand sides: I dt =
Ts dt + cs d(Ts ) [J m−2 ] Gf
(3.30)
This equation expresses that the downward flux on the left-hand side is used for the upward flux (first term on the right) and the warming of the top ocean (last term on the right). Dividing (3.30) by dt leads to

ΔI = ΔT_s/G_f + c_s d(ΔT_s)/dt    (3.31)
This differential equation looks somewhat cumbersome, but if the student replaces ΔT_s(t) by f(t) it will be recognized as an inhomogeneous differential equation of first order. The solution becomes

ΔT_s(t) = G_f ΔI (1 − e^{−t/τ})    (3.32)
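The solution can be checked numerically. A sketch with an assumed heat capacity and mixed-layer depth, integrating Eq. (3.31) by a simple Euler scheme and comparing with Eq. (3.32):

```python
import math

# Mixed-layer warming model, Eqs. (3.31) and (3.32). The heat capacity,
# mixed-layer depth, G_f and the forcing step below are assumptions
# consistent with the numbers used in the text.

RHO_C = 4.18e6     # volumetric heat capacity of sea water [J m-3 K-1] (assumed)
H_MIX = 120.0      # depth of the mixed layer [m]
G_F = 0.8          # climate sensitivity parameter [K/(W m-2)]
DELTA_I = 1.6      # sudden radiative forcing [W m-2]
YEAR = 3.156e7     # seconds per year

c_s = RHO_C * H_MIX     # column heat capacity [J m-2 K-1]
tau = c_s * G_F         # time constant [s]
print(tau / YEAR)       # roughly the 12 [yr] quoted in the text

def dT_analytic(t):
    """Eq. (3.32): response to a sudden forcing."""
    return G_F * DELTA_I * (1 - math.exp(-t / tau))

# Euler integration of Eq. (3.31): c_s d(dT)/dt = dI - dT/G_f
dt = YEAR / 100.0
T, t = 0.0, 0.0
while t < 50 * YEAR:
    T += dt * (DELTA_I - T / G_F) / c_s
    t += dt
print(T, dT_analytic(50 * YEAR))   # both approach G_F * DELTA_I = 1.28 [K]
```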
where τ = c_s G_f. For long times t → ∞ the exponential vanishes and one gets back Eq. (3.29). For a water column with height h = 120 [m] one finds τ = 12.2 [yr] (Exercise 3.10). Note that in deriving Eq. (3.32) a sudden forcing was assumed. In order to find the present temperature rise one has to specify the rise of forcing with time and to calculate the temperature effect by applying Eq. (3.32). With a simple model, Exercise 3.11 yields a temperature increase from 1850 to 2005 of 0.87 [°C], somewhat higher than the observed 0.76 [°C] from [1], p. 37. The difference may be due to the simplicity of our model or to the fact that the observed value is composed of natural and human-induced changes, or both. From Eq. (3.32) it is clear that even with a radiative forcing that is kept constant the temperature will keep increasing for some time. It is also clear that the parameter τ will control this process. The above estimate for τ only took into account the top layer of the oceans. In reality that layer will mix with lower layers by the deep ocean circulation, where water goes down at the poles, moves along the deep ocean and slowly moves up again with a time constant of about 300 years. This implies that even with constant forcing the temperature will keep increasing until the complete ocean has warmed up, which may not be for some 600 years.

3.2.5 The Greenhouse Gases
In Table 3.2 the concentrations of the most important greenhouse gases and their contribution to the greenhouse effect are given numerically. One finds a total warming of 33 [°C]

Table 3.2 Greenhouse effect of the most important atmospheric gases. The warming effect is given as estimated in 1984 [9]. Other data are from [1], pp. 141, 212, 213.

Trace gas        Conc/    Warming effect   Lifetime/  Increase [%/yr]  GWP      GWP
                 ppmv     (1984)/[°C]      (yr)       (1998–2005)      (20 yr)  (100 yr)
CO2              379      7.2              50–200     0.49             1        1
N2O              0.32     1.4              114        0.22             289      298
CH4              1.77     0.8              12         0.09             72       25
HFCs             ≈10−5    0.6              1–270      ≈10              >1000    >1000
PFCs             ≈10−4                     1000       ≈3               >1000    >1000
SF6                                        3200       3.8              16 300   22 800
O3 troposphere            2.4
H2O vapour       5000     20.6
produced by these gases [1]. Their concentrations rise because of human activities at a rate also given in Table 3.2. The increase in these concentrations is important for the climate, as they contribute to the human-induced greenhouse effect. For a reliable calculation of the future effects of adding greenhouse gases to the atmosphere, one would need to take into account the absorption spectrum of each gas. Also significant is the time an emitted gas molecule stays in the air and thus contributes to the effect. Estimates of these lifetimes are given in the table. Finally, it is helpful to policy makers to have a simple number which comprises the combined effect of all gases. For this purpose the concept of the global warming potential (GWP) of a certain gas is introduced. As the increase of the CO2 concentration is most widely discussed, it is taken as the standard. The GWP of a certain gas then compares the warming effect of adding 1 [kg] of the gas to the warming effect of the addition of 1 [kg] of CO2. Because of the different lifetimes the GWP will depend on the time horizon one considers. In the last columns of Table 3.2 GWPs are given for 20 years and 100 years after emission. When one has decided on a time horizon, one may convert the emission of a certain greenhouse gas to an emission of CO2 by multiplying by the GWP. This is called the equivalent CO2 emission. The reason why CO2 is taken as the standard for emissions becomes clear from Table 3.2. The top six rows refer to gases which are directly emitted into the atmosphere by human activities. Of these, CO2 gives the largest contribution and its increase with time is well documented. For the period since 1970 it has been monitored accurately, as is shown in Figure 3.10. In summer photosynthesis binds more of the emitted CO2 than in winter, which explains the periodicity in the graph.
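The conversion to equivalent CO2 emissions can be sketched as follows, using the 100-year GWPs of Table 3.2; the emission figures in the example are made-up illustration values, not data from the text:

```python
# Equivalent CO2: multiply each emitted mass by the GWP of the gas and sum.
GWP_100 = {"CO2": 1, "CH4": 25, "N2O": 298}   # 100-year GWPs from Table 3.2

def equivalent_co2(emissions_kg):
    """Total emission in kg CO2-equivalent: sum of mass times GWP."""
    return sum(mass * GWP_100[gas] for gas, mass in emissions_kg.items())

example = {"CO2": 1000.0, "CH4": 10.0, "N2O": 1.0}   # hypothetical [kg]
print(equivalent_co2(example))   # 1000 + 10*25 + 1*298 = 1548.0 [kg CO2-eq]
```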
Figure 3.10 CO2 concentrations in the atmosphere, measured on Hawaii, far from industrial centres. (Reproduced with permission of Cambridge University Press, copyright 2007, from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the IPCC (Chapter 3, Ref. [1], FAQ 1.2, Fig. 1).)
Figure 3.11 Increase of concentrations of the most important greenhouse gases over time. (Reproduced with permission of Cambridge University Press, copyright 2007, from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the IPCC (Chapter 3, Ref. [1], FAQ 1.2, Fig. 1).)
In Figure 3.11 earlier concentrations of the top three greenhouse gases CO2, CH4 and N2O are shown, as found from air bubbles in ice cores. It appears that the great increase in their concentrations started around 1750 with the beginning of the industrial revolution. Therefore, this year is taken as the reference point in calculating human influences. The three greenhouse gases shown in Figure 3.11 were present in nature even before 1750. Their concentrations are increasing due to human activities, as summarized in Table 3.3. The most widely discussed greenhouse gas, CO2, enters the air by combustion or burning of fossil fuels or biomass. Burning of pure coal would be represented by the chemical reaction

C + O2 → CO2    (3.33)
but in practice fossil fuels and biomass consist of C, CH4 , Cx Hy , (CH2 O)n and many other trace elements. They all originate from photosynthetic processes. Table 3.2 shows in rows 4, 5 and 6 gases which originate only from human activities. The HFCs and PFCs comprise a set of chemicals, many of which were shown to contribute to the ozone hole, discussed in Section 2.4.1. Those are banned according to an international agreement, the Montreal protocol; others, however, are not banned and will remain as refrigerants or by-products of Al production and other industrial processes. Although their concentrations are still low, many of them show an increasing concentration and they have a large GWP. The origin of the chemical SF6 is indicated in Table 3.3.
Table 3.3 Human sources of principal greenhouse gases ([1], 138–145).

CO2   Combustion of fossil fuels, gas flaring, cement production (using CaO from CaCO3), biomass burning
CH4   Wetlands (rotting), rice agriculture, biomass burning, ruminant animals
N2O   Microbes in fertilized agricultural lands
SF6   Electrical insulation fluid, inert tracer for studying transport processes
In the lower part of Table 3.2 one finds tropospheric ozone. This gas is a result of emissions of nitrogen oxides, carbon monoxide and organic compounds. Its greenhouse effect is taken into account in the calculations. Finally, in the bottom row one finds water vapour, which has the biggest greenhouse effect of all. In model calculations one starts from the major greenhouse gases, computes the climate effect, finds the change in water vapour concentration and cloud formation and takes that into account in a feedback loop.
3.3 Dynamics in the Climate System
In the previous sections the radiation balance was averaged over the globe. Figure 3.12 shows that this approximation is too simple to explain the temperature differences over the earth. The curve shows the absorbed solar radiation as a function of latitude. It peaks at the equator, where the sun every day reaches a high elevation and it is small at the poles, where the sun not only disappears part of the year, but the snow reflects much of the solar light as well. The absorbed solar radiation therefore will be small. The emitted long wave radiation σ T 4 , dashed in the graph, is indeed smaller at the poles than at the equator due to the lower temperature, but the net energy absorbed at the poles is negative, illustrated in the lower curve.
Figure 3.12 Absorbed solar radiation and emitted long wave radiation as a function of latitude. The graph is averaged over latitude circles and over the year. The net radiation absorbed is negative at the poles, implying energy transport from the equator to the poles. (This article was published in Global Physical Climatology, Hartmann, pg 37, copyright 1994, Elsevier.)
Figure 3.12 shows that at high latitudes the emitted energy is larger than the absorbed energy. Consequently there must be an enormous energy transport from the equator to the poles.
Example 3.2 Energy Transport to the Poles

Estimate the energy flux which passes the latitude φ = 35◦ to the north.

Answer

Assume that all net absorbed energy north of the equator goes to the north. From Figure 3.12 we estimate that at φ = 0◦ the absorbed energy equals 75 [Wm−2] and it vanishes at φ = 35◦. We approximate the curve by a straight line and use radians instead of degrees. This gives Nnet(φ) = 75 − 123φ [Wm−2], which has to be integrated over a strip with width R dφ, as shown in Figure 3.13. The resulting flux F is found as

F = ∫0^0.61 (75 − 123φ) 2πR² cos φ dφ  (3.34)

which gives F = 5.65 × 10^15 [W] or, accounting for the circumference of the earth at φ = 35◦, one finds 1.7 × 10^8 [Wm−1] = 170 [MWm−1] crossing the latitude circle: a medium-sized power station per metre.
Figure 3.13 The net absorption of Figure 3.12 is integrated over a strip with width R dφ at latitude φ.
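The estimate of Example 3.2 is easy to verify numerically. The sketch below (not from the book) evaluates Eq. (3.34) with the same linear fit Nnet(φ) = 75 − 123φ; the grid density is an arbitrary choice.

```python
import numpy as np

R_earth = 6.371e6                       # earth radius [m]
phi = np.linspace(0.0, 0.61, 100_001)   # latitude from 0 to 35 deg, in radians
N_net = 75.0 - 123.0 * phi              # linear fit to Fig. 3.12 [W m^-2]

# Eq. (3.34): integrate N_net over strips of area 2*pi*R^2*cos(phi)*dphi
integrand = N_net * 2.0 * np.pi * R_earth**2 * np.cos(phi)
F = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(phi))  # trapezoid rule

circumference = 2.0 * np.pi * R_earth * np.cos(0.61)
print(f"F = {F:.2e} W")                           # about 5.7e15 W
print(f"per metre: {F / circumference:.2e} W/m")  # about 1.7e8 W/m
```

The numerical result agrees with the value quoted in the example: about 5.7 × 10^15 [W], or 1.7 × 10^8 [W m−1] across the 35◦ latitude circle.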
For realistic analyses the energy transport has to be modelled correctly and has to take into account the geography of the earth. For the oceans the situation is sketched in Figure 3.14.
Figure 3.14 Global ocean circulation. Dark arrows indicate warm surface currents and grey arrows represent deeper return currents. A circle with a cross indicates downwelling and circles with a dot indicate upwelling of deep waters. (Reproduced from Nederlands Tijdschrift voor Natuurkunde, Dijkstra, 63, 79–83, 1997.)
One notices warm surface currents, indicated by dark arrows, and colder return currents, indicated in grey. The surface currents are concentrated in space. The Gulf Stream in the North Atlantic, for example, has a width of 60 [km], is 500 [m] deep and moves with a velocity of 1 [m s−1]. This corresponds to a mass transport of 3 × 10^10 [kg s−1]. The so-called Kuroshio near Japan moves a similar mass. They make up a considerable part of the energy transport calculated in Example 3.2 (Exercise 3.12). The oceans transport energy as sensible heat; in the air much is transported as latent heat. At warm latitudes water evaporates and moves upwards in the atmosphere; next the humid air is transported and condenses at another place, liberating the evaporation heat as condensation heat. The orders of magnitude are illustrated in Exercise 3.10. From these examples it follows that computations of climate, as for weather, require the equations for horizontal and vertical motions of air and water. We discuss them briefly, starting with the horizontal motion of air.
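The quoted Gulf Stream mass transport follows directly from the stated dimensions. The associated heat transport can be estimated once one assumes how much the water cools on its way north; the 10 [K] below is an illustrative assumption, not a figure from the text.

```python
rho_sea = 1.03e3                         # density of sea water [kg m^-3]
width, depth, speed = 60e3, 500.0, 1.0   # Gulf Stream dimensions from the text [m], [m], [m/s]

mass_flux = rho_sea * width * depth * speed
print(f"mass transport: {mass_flux:.1e} kg/s")   # ~3e10 kg/s, as quoted

c_water = 4.0e3   # specific heat of sea water [J kg^-1 K^-1]
dT = 10.0         # assumed cooling between the tropics and the Arctic [K]
heat_flux = mass_flux * c_water * dT
print(f"heat transport: {heat_flux:.1e} W")      # ~1e15 W
```

With this assumed cooling the Gulf Stream alone carries of the order of 10^15 [W], indeed a considerable part of the 5.65 × 10^15 [W] found in Example 3.2.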
3.3.1 Horizontal Motion of Air
Horizontally, the velocity of a parcel of air changes frequently in direction and magnitude, so, contrary to the vertical case of Section 3.1, the full equations of motion should be solved. Consider a volume dτ with mass ρdτ and velocity u. Newton's equations of motion may be written as

ρdτ du/dt = Fpress + Fviscous + FCoriolis + Fgravity  (3.35)
On the left-hand side one finds the traditional 'acceleration times mass' and on the right-hand side the forces which operate on the parcel of air. These forces will be discussed successively below.

3.3.1.1 Pressure Gradient Forces
The first term on the right of Eq. (3.35) is the pressure gradient force Fpress. Its magnitude and direction are found from the left-hand side of Figure 3.15. Consider a rectangular block with its sides parallel to the coordinate axes and look in the x-direction. The force to the right equals p(x, y, z)dydz and the force to the left

p(x + dx, y, z)dydz = [p(x, y, z) + (∂p/∂x)dx] dydz  (3.36)

The total force in the positive x-direction therefore becomes

(−∂p/∂x)dτ  (3.37)

for the block dτ = dxdydz. In the y- and z-directions one finds the same expression with x replaced by y or z. This result is summarized by

Fpress = −∇p dτ  (3.38)
where the vector ∇p has x-, y-, z-components ∂p/∂x, ∂p/∂y, ∂p/∂z. A short but comprehensive discussion of vector algebra and vector differentiation may be found in Appendix B.

3.3.1.2 Viscous Forces
On the right-hand side of Figure 3.15 one observes the same block as on the left-hand side, but in this case the velocities are indicated. Consider the x-component ux(x, y, z) of the velocity u and assume for the sake of the argument that the velocity of air is to the right. We follow Newton's assumption that the viscous force at the bottom of the block will be proportional to −∂ux/∂z. The minus sign is understood by assuming that component ux increases with z, so ∂ux/∂z > 0. Then, the drag of the surrounding air on the block should be to the left, hence the minus sign. The student may check that with other assumptions the same equation follows. The proportionality constant μ is called the dynamic viscosity [Pa s].

Figure 3.15 Derivation of the pressure gradient force (left) and the viscous force (right).

The force on the bottom therefore becomes

−μ (∂ux/∂z)|xyz dxdy  (3.39)

Similarly, the force on the top has the other sign

+μ (∂ux/∂z)|xyz+dz dxdy  (3.40)

The net force in the x-direction therefore becomes

Fx = −μ (∂ux/∂z)|xyz dxdy + μ (∂ux/∂z)|xyz+dz dxdy = +μ (∂²ux/∂z²) dτ  (3.41)

where, as before, the volume element dτ = dxdydz. The force Fy in the y-direction can be found by replacing ux by uy in Eq. (3.41). In the atmosphere the friction force essentially works in the horizontal (x- and y-) directions as the vertical (z-) component of the velocity can be neglected. Strictly speaking, the vertical is defined as the line connecting the centre of the earth with the location on the surface one is considering; the horizontal plane then is perpendicular to this vertical. The volume element dτ has a mass ρdτ so the viscous force per unit of mass in the x-direction becomes

Fx/(ρdτ) = (μ/ρ) ∂²ux/∂z² = ν ∂²ux/∂z²  (3.42)

which defines the kinematic viscosity ν = μ/ρ. Note that for a complete treatment the friction along the surface should be added to Eq. (3.35).

3.3.1.3 Coriolis Force
The Coriolis force, the third contribution in Eq. (3.35), is a fictitious force, to be added to the physical (natural) forces in order to correct for the acceleration of the coordinate system. In the present case, that accelerated system is fixed at a location on the surface of the earth. When in an inertial coordinate system a particle moves in a straight line, that line will seem curved when measured from a rotating frame. The fictitious extra force should account for the resulting trajectory of the particle. Of these forces only the Coriolis force is relevant for our discussions. The force is written as

FCoriolis = −2Ω × u ρdτ  (3.43)

The vector Ω represents the earth's rotation, as shown on the left in Figure 3.16. As the air velocity u is essentially horizontal, only the vertical component of Ω (perpendicular to the local horizontal plane) will determine the acceleration of the air parcel. That vertical component depends on the latitude β; its magnitude is Ω sin β. Because of the factor of 2 in Eq. (3.43) one calls

f = 2Ω sin β  (3.44)

the Coriolis parameter.
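With Ω equal to 2π radians per sidereal day, Eq. (3.44) is quickly evaluated; the latitudes below are arbitrary illustrative choices.

```python
import math

Omega = 2.0 * math.pi / 86164.0    # earth's rotation rate [rad/s], one sidereal day
for beta_deg in (0, 15, 30, 45, 60, 90):
    f = 2.0 * Omega * math.sin(math.radians(beta_deg))   # Eq. (3.44)
    print(f"beta = {beta_deg:2d} deg: f = {f:.2e} s^-1")
```

Note that f vanishes at the equator, consistent with the remark in Example 3.3 that the geostrophic balance only makes sense at middle latitudes; at 45◦ one finds f ≈ 1.0 × 10^−4 [s−1].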
Figure 3.16 Coriolis force and geostrophic flow. On the left the vector Ω representing the earth's rotation is projected on the local vertical k to find the horizontal component of the Coriolis force. On the right the direction uG of the geostrophic flow is derived for the Northern Hemisphere.
3.3.1.4 Gravity

The last term on the right-hand side of Eq. (3.35) is the gravity force. For a parcel of air with mass ρdτ it may be written as

Fgravity = g ρdτ  (3.45)
where g is the acceleration due to gravity. It may be shown that in the absence of vertical velocity the hydrostatic equation (3.2) follows from (3.45) with (3.35) and (3.38) (Exercise 3.13). It may be remarked that vertical motion does occur, for example in rising hot air or in strong downpour from a cumulonimbus cloud. These must be taken into account for detailed weather calculations ([2], p. 210), but in most cases the vertical component of the air velocity is much smaller than the horizontal one and may be ignored.
Example 3.3 Geostrophic Flow

Derive an expression for the velocity of air at high altitude (500 [m], say), assume a constant horizontal air velocity u, give arguments why certain terms in Eq. (3.35) may be ignored and check whether the air velocity indeed can be constant.

Answer

The assumption of constant air velocity implies that the left-hand side of Eq. (3.35) vanishes, as well as the viscous forces. The surface is so far away that friction need not be taken into account and gravity is compensated by the hydrostatic decrease of pressure upwards. This leaves us for the horizontal component of Eq. (3.35) with

−∇p dτ − 2Ω × u ρdτ = 0  (3.46)

or

−(1/ρ) ∇p = 2Ω × u  (3.47)
It is seen from the left of Figure 3.16 or from Eq. (3.44) that at the equator the vector product 2Ω × u has vertical components only, as both Ω and u are in the horizontal plane. Therefore Eq. (3.47) only makes sense at middle latitudes. In that region the horizontal component of 2Ω × u, which has magnitude f u (see Eq. (3.44)), should be equal and opposite to the horizontal component of ∇p/ρ. The situation is sketched on the right in Figure 3.16 with two isobars plow and phigh. The vector ∇p/ρ goes from plow to phigh as indicated. The vertical component of Ω points upwards from the paper. It follows that the velocity u must go to the left, as shown in the figure, for then the vector product 2Ω × u goes contrary to ∇p/ρ. Note that in the southern hemisphere the vector Ω has the opposite direction, so in that case the direction of uG in Figure 3.16 would be reversed. The flow which follows from Eq. (3.47) apparently is directed parallel to the isobars and is called the geostrophic flow. It is constant in direction and magnitude and therefore obeys the assumptions made. Its magnitude is given by

uG = |∇p| / (fρ)  (3.48)
The measured pressure gradients give values within 10% of the measured wind velocities at high altitudes (Exercise 3.14). Velocities at an altitude of 1000 [m] may be of the order of 25 [m s−1] or higher.
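Eq. (3.48) is easily evaluated; the pressure gradient below (3 [hPa] over 100 [km]) and the air density are assumed round numbers of the right order of magnitude, not values from the text.

```python
import math

Omega = 7.292e-5                               # earth's rotation rate [rad/s]
f = 2.0 * Omega * math.sin(math.radians(45))   # Coriolis parameter at 45 deg, Eq. (3.44)

grad_p = 300.0 / 1.0e5    # assumed pressure gradient: 3 hPa over 100 km [Pa/m]
rho = 1.2                 # air density [kg m^-3]

u_G = grad_p / (f * rho)  # Eq. (3.48)
print(f"u_G = {u_G:.0f} m/s")   # ~24 m/s, the order quoted in the text
```

A gradient of a few hectopascal per 100 [km] thus reproduces the quoted wind speeds of around 25 [m s−1].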
3.3.1.5 Coupling of Horizontal and Vertical Properties in the Atmosphere
In the following paragraphs it will be shown that horizontal and vertical changes in the atmosphere are coupled. In modelling, therefore, both horizontal and vertical dimensions have to be considered and one may not restrict oneself to one dimension. We consider the higher parts of the atmosphere where Eq. (3.47) for the geostrophic flow holds: −(∇p)/ρ = 2Ω × u. With k the unit vector in the vertical direction this may be rewritten with Eq. (3.44) as

k × u = −(1/(ρf)) ∇p  (3.49)

The equation of state (3.6) reads

p = ρR′T  (3.50)

and the hydrostatic equation (3.2) may be written as

g = −(1/ρ) ∂p/∂z  (3.51)

Substitution of p from (3.50) into (3.51) gives

g = −(R′T/ρ) ∂ρ/∂z − R′ ∂T/∂z  (3.52)

Dividing both sides by T and using that R′ is independent of z gives

g/T = −(R′/ρ) ∂ρ/∂z − (R′/T) ∂T/∂z = −R′ [∂(ln ρ)/∂z + ∂(ln T)/∂z + ∂(ln R′)/∂z]  (3.53)

or

g/T = −R′ ∂(ln p)/∂z  (3.54)

Note that one obtains Eq. (3.49) from Eq. (3.51) if one substitutes k × u for g on the left and ∇p/f for ∂/∂z on the right. Therefore, we may deduce from Eq. (3.54) without further derivation that

(k × u)/T = −(R′/f) ∇(ln p)  (3.55)

This equation is differentiated with respect to z and with the help of Eq. (3.54) we find

∂/∂z [(k × u)/T] = −(R′/f) ∂/∂z [∇ ln p] = (1/f) ∇(g/T) = −(g/(fT²)) ∇T  (3.56)
As k points in the vertical direction, the left-hand side of this equation has horizontal components only. Eq. (3.56) therefore expresses that the variation of the temperature T in the horizontal plane is coupled, through ∂/∂z, to the vertical variation of the velocity u.
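The step from Eq. (3.55) to Eq. (3.56) relies on Eq. (3.54) and on the equality of mixed partial derivatives of ln p. The sketch below checks this symbolically; x stands for either horizontal coordinate and R′ is written Rp, both naming choices of this sketch.

```python
import sympy as sp

x, z = sp.symbols('x z')
Rp, f, g = sp.symbols('Rp f g', positive=True)   # R', Coriolis parameter, gravity
T = sp.Function('T')(x, z)

# Eq. (3.54): d(ln p)/dz = -g/(Rp*T).  Mixed partials commute, so
# d/dz [d(ln p)/dx] = d/dx [d(ln p)/dz] = d/dx [-g/(Rp*T)]
dz_grad_lnp = sp.diff(-g / (Rp * T), x)

lhs = -(Rp / f) * dz_grad_lnp               # z-derivative of Eq. (3.55)
rhs = -g / (f * T**2) * sp.diff(T, x)       # right-hand side of Eq. (3.56)
assert sp.simplify(lhs - rhs) == 0
print("Eq. (3.56) confirmed")
```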
3.3.2 Vertical Motion of Ocean Waters
The vertical cross section of the oceans is sketched in Figure 3.17 with the Pacific on the left and the Atlantic on the right. The land masses in the oceans are shaded grey. On top one will note the mixed layer that was introduced in Section 3.2.4. Below this top layer one distinguishes upper waters, deep waters and bottom waters. Note that the vertical scale has been changed at 1000 [m] depth. The vertical structures of the Atlantic and the Pacific appear to be quite different in the Northern, Arctic region. Notice that in the North Atlantic surface water goes down; this was already indicated by the circles with a cross in Figure 3.14. After going down, bottom water is formed, which will eventually go up again
Figure 3.17 Layered structure of the oceans with the Pacific on the left and the Atlantic on the right. Note the change of scale at a depth of 1000 [m] and the different bottom structure for the Pacific and the Atlantic in the Arctic region. In the South bottom water is formed in both oceans, while in the North it is mainly formed in the Atlantic.
in the so-called thermohaline circulation. In the South, bottom water is formed in both oceans. The word 'thermohaline' indicates the driving force behind the ocean circulation: heat and salinity. The Gulf Stream transports relatively warm water to the North, as discussed earlier. In the Arctic, water evaporates, which cools the remaining waters, and because the evaporated water vapour is pure, the salinity of the remaining water increases. Both effects increase the density to values larger than the water underneath, so the water sinks to the bottom of the ocean, as indicated. The amount of deep bottom water formed this way is about 10^10 [kg s−1] in the Antarctic Ocean and double that in the North Atlantic ([10], 201). It must be remarked that the wind and the tides also influence the circulation. Therefore the modern term MOC, the Meridional Overturning Circulation, is coming into use.
3.3.3 Horizontal Motion of Ocean Waters
The horizontal circulation in the oceans has already been shown in Figure 3.14. The driving forces are the wind stress, pressure differences and the Coriolis force, while the presence of the land masses puts a boundary to the ocean currents. The wind stress along the surface is especially important where there are dominant winds constantly flowing in the same direction, such as the Easterly trade winds in the tropics. When wind stress is ignored, Eq. (3.35), which was derived for the motion of air in the atmosphere, can be applied to the oceans as well. Except at places of downwelling and upwelling, the approximation that the velocity u is horizontal can be made. A further, more daring, approximation is that viscous forces can be ignored. In that case the discussion on the geostrophic flow can be repeated, leading to Eq. (3.48), which reads

uG = |∇p| / (fρ)  (3.57)
The pressure variations in the atmosphere are reflected in the ocean surface, leading to a similar value of |∇ p| and the Coriolis parameter f will be the same. The density of sea water, however, is a factor of 1000 larger than that of air, leading to velocities a factor of 1000 smaller. Instead of 30 to 50 [m s−1 ] one would expect 3 to 5 [cm s−1 ]. In practice, most observed surface speeds are between 5 and 50 [cm s−1 ].
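The factor-of-1000 scaling argument can be made explicit; the pressure gradient and latitude below are the same assumed round numbers as in the atmospheric estimate above.

```python
import math

Omega = 7.292e-5                               # earth's rotation rate [rad/s]
f = 2.0 * Omega * math.sin(math.radians(45))   # Coriolis parameter, Eq. (3.44)
grad_p = 300.0 / 1.0e5                         # same pressure gradient as in the air [Pa/m]

u_air = grad_p / (f * 1.2)        # Eq. (3.48) with air density
u_sea = grad_p / (f * 1.03e3)     # Eq. (3.57) with sea-water density
print(f"air: {u_air:.0f} m/s, ocean: {u_sea * 100:.0f} cm/s")
```

The same pressure gradient drives a current of a few centimetres per second in the ocean, roughly a factor 1000 below the wind speed, in line with the observed surface speeds of 5 to 50 [cm s−1].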
3.4 Natural Climate Variability
Climate variations have always happened. That may be illustrated by reconstructions of the evolution of sea surface temperatures during the last 400 000 years, shown in Figure 3.18. The top graph refers to the North Atlantic, the bottom one to the South Atlantic and the middle one to the Equatorial Indian Ocean. Both the North and South exhibit a similar alternation of warmer and colder periods and some fine structure. With the exception of the last few hundred years the variations must have been due to internal variations in the climate system and external drivers of change. The cold periods are called glacial periods or ice ages, in which large parts of the earth's surface were covered with snow and ice. Except at the equator there is a succession of these
Figure 3.18 Trends in sea surface temperatures during the last 400 000 years in the North Atlantic (top), the South Atlantic (bottom) and the Equatorial Indian Ocean (middle). (Reproduced with permission of Cambridge University Press, copyright 2007, from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the IPCC (Chapter 3, Ref. [1], FAQ 1.2, Fig. 1).)
glacial and interglacial periods, which are most clearly to be seen in the South Atlantic curve of Figure 3.18. The Serbian scientist Milankovitch, who worked in the 1920s, attributed this phenomenon to changes in the orientation of the earth's axis with respect to its orbit around the sun. The situation is sketched in Figure 3.19. The eccentricity of the orbit varies between 0.002 and 0.050 (present value 0.0167), the tilt (the angle between the earth's axis and the normal on the orbital plane) varies between 22.05◦ and 24.50◦ (present value 23.45◦) and the axis itself performs a precession around that normal ([1], 444–446). The first two effects are due to gravitational interactions of the earth with the rest of the solar system, while the precession is due to the deviation of the earth from a perfect sphere: it has a 'belt' around the equator which experiences a gravitational torque from the sun. The variations
Figure 3.19 Three parameters determine the insolation of the earth: the eccentricity of the orbit (indicated by two extreme positions), the tilt of the axis (changing between 22◦ and 24.5◦ ) and the precession of the axis around the normal to its orbital plane.
are quasi-periodic; the eccentricity with periods of 100 000 years, the tilt with 41 000 years and the precession with 19 000 and 23 000 years. It must be noticed that the precession and the change of tilt do not change the average insolation of the earth. They do lead to a difference in insolation between the hemispheres. This is quickly understood by taking an extreme example, a tilt of almost 90◦. At the right in the diagram the Southern Hemisphere has full sunlight day and night (a terrible summer) and the Northern Hemisphere has day and night in complete darkness. After half a year the situation is reversed. For a moderate tilt, as indicated in Figure 3.19, consider the equatorial plane of the earth, which passes through the centre of the earth. The orbital plane also contains that centre. The two planes intersect in a straight line through the earth's centre. In Figure 3.20 the situation
Figure 3.20 The earth in its elliptical orbit with the sun as one of its focal points. The major axis of the ellipse is indicated as is the earth equatorial plane. Twice a year the sun will be on the intersection between orbital plane and equatorial plane: the vernal equinox and the autumnal equinox. This intersection moves around with the precession. The season dates are for the UK in the year 2013, the perihelion for 2014. Note that the minor axis of the ellipse (not drawn) will be a little to the left of the drawn intersection.
is sketched. In the summer of the Northern Hemisphere the sun will be above the equatorial plane. In the winter it will be below. Twice in between the sun will pass the equatorial plane of the earth, in which case the intersection of equatorial plane and orbital plane will contain both the earth and the sun. In these two cases day and night will be equally long: the beginning of autumn or spring. Because of the precession the equatorial plane moves around and consequently also the intersection between both planes. So the positions of spring and autumn move along the orbit. If for a highly eccentric orbit midsummer in the Northern Hemisphere occurs at a position close to the sun, the summer will be relatively hot and the winter (with a position far from the sun) relatively cold. For the Southern Hemisphere, the winter will be mild (the planet is close to the sun) and the summer will be mild as well. These examples show that the accumulation of the three effects may affect the hemispheres in different ways. The insolation averaged over the year and the total globe will only vary a little, as a consequence of a change in eccentricity, but this minor effect may be ignored. The onset of ice ages therefore should be attributed to a local decrease of insolation. Furthermore, the coupling between both hemispheres by means of the ocean currents (Figure 3.14) and winds will result in the complex climate variation which is observed. For the onset of the last ice age, 110 000 years ago in the Northern Hemisphere, the decrease in insolation has been estimated as 40 [Wm−2] for mid June at 65◦ North with respect to the value of 380 [Wm−2] at present ([1], 445). The next ice age is expected 30 000 years from now ([1], 449), most likely too late to compensate for the present temperature rise from the greenhouse effect. The Milankovitch variations do not explain climate variations on a shorter time scale. There are at least two other effects.
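The claim that a changing eccentricity alters the globally and annually averaged insolation only a little can be quantified. A standard result is that the annual-mean insolation of an orbit with fixed semi-major axis scales as 1/√(1 − e²); the sketch below applies it to the eccentricity values quoted above.

```python
import math

for e in (0.0, 0.002, 0.0167, 0.050):
    # annual-mean insolation relative to a circular orbit with the same semi-major axis
    factor = 1.0 / math.sqrt(1.0 - e**2)
    print(f"e = {e:.4f}: relative annual-mean insolation {factor:.5f}")
```

Even at the maximum eccentricity of 0.050 the annual mean changes by only about 0.13%, well under 1 [Wm−2] in the global average, which is indeed minor compared with the 40 [Wm−2] local midsummer change quoted above.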
The first is the variation of the solar emission. The sunspot cycle of 11 years has measurable effects on the insolation. Reconstructions from 1700 to 2000 suggest fluctuations of the 'solar constant' S between 1365 and 1366.5 [Wm−2] ([1], p. 190). The second effect is the eruption of volcanoes. Large eruptions bring huge amounts of dust and sulfate aerosols into the air. This results in an increase of the albedo a of the earth because they prevent part of the light from penetrating the atmosphere. Consequently the temperature will drop. The Pinatubo eruption in 1991 indeed caused a drop of temperature of about 0.3 [◦C] for a few years ([1], 62, 684). Dust and aerosols are slowly removed again by rain or gravity. Finally, it must be mentioned that the physics of the radiation balance is not sufficient to explain or predict climate change satisfactorily. Figure 3.21 shows the concentration of the greenhouse gases CO2 and CH4 over the last 20 000 years, reconstructed from ice and firn data ([1], 25, 448). Their variations before 1750 must be due to biochemical processes, induced by changes in the radiation balance. The strong increase in recent times is most apparent in the figure. Of interest are the vertical bars, which indicate the variability for the past 650 000 years.
3.5 Modelling Human-Induced Climate Change
A perfect model should be able to reproduce the past climate, comprising many more than the graphs shown here, for all possible locations, all possible times and all possible
Figure 3.21 Concentrations of greenhouse gases CO2 and CH4 during the last 20 000 years. The variations must be due to coupling of the radiation balance with biochemical processes. The increase in recent times is attributed to human-induced emissions. The grey width indicates the range of natural variability during the past 650 000 years. (Reproduced with permission of Cambridge University Press, copyright 2007, from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the IPCC (Chapter 3, Ref. [1], FAQ 1.2, Fig. 1).)
variables. The model may not restrict itself to physics but must also calculate the biochemical feedbacks, some of which appeared in Figure 3.21. If the past is reproduced completely, one will have faith in the forecasts as well. Obviously, the scientific community has not reached this phase yet, and it probably never will. Therefore, the best one can do is to give the best possible estimates and point out the uncertainties in the predictions. Policy-makers and society-at-large then may choose a path with the smallest risks and the largest potential benefits. In this section we will give an indication of the elements climate modelling will comprise. First we discuss the carbon cycle as one of the biochemical cycles to take into account. Next, we give an overview of the physical equations and finally we discuss what kind of models are used in practice.
3.5.1 The Carbon Cycle
Carbon in the form of CO2 is one of the most important greenhouse gases. It is therefore important to know how carbon circulates in the ecosystem of the earth. In Figure 3.22 a simplified overview of the global carbon cycle is given, where the numbers in boxes are estimated for the 1990s. They refer to the carbon content [10^12 kg] of parts of the ecosystem and the numbers near the arrows refer to the annual fluxes [10^12 kg C/yr]. Drawn arrows represent relatively rapid changes and dashed ones are relatively slow. The top of the diagram shows the atmosphere where the carbon content directly influences the greenhouse effect. Far left, the input from the burning of fossil fuels is shown. This number is easiest to estimate as the use of fossil fuels is well-documented in UN statistics.
Figure 3.22 Global Carbon Cycle with data for the 1990s. The numbers in boxes refer to the C content [10^12 kg] and the numbers near the arrows to C fluxes [10^12 kg/yr]. Drawn arrows indicate fast fluxes and dashed ones slow exchanges. More details are displayed in Plate 2, which is reproduced with permission of Cambridge University Press, copyright 2007, from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the IPCC (Chapter 3, Ref. [1], FAQ 1.2, Fig. 1).
3.5.1.1 Land-Use Change
The input from changes in land use, also on the left in Figure 3.22, is a little more complicated. Conversion of forests to agriculture releases the biomass present in the trees. Also the carbon and nitrogen contents of the soil decrease. The crops that replace the trees have their own carbon content, but the amount of carbon is less as the volume of crops is smaller than that of trees.
3.5.1.2 Ocean-Atmosphere Interaction
Somewhat to the right of the centre of Figure 3.22 the oceans are shown. From measurements of pressures in the top of the ocean and from studies with radioactive tracers one estimates the exchange between atmosphere and ocean: an absorption of 92.2 × 10^12 [kg/yr] and an emission of 90.6 × 10^12 [kg/yr]. The difference between these big numbers and the inflow from the rivers (0.8 × 10^12 [kg/yr]) determines the net uptake of carbon by the oceans. This difference, which is essential for estimating the reaction of the oceans to the appearance of additional CO2 in the atmosphere, will have a relatively big uncertainty.
The extra CO2 gas in the ocean top layers will react with water according to the equilibrium reaction

CO2(gas) + H2O → H2CO3(aq)  (3.58)

Here the addition of (aq) means that the carbonic acid H2CO3 is dissolved in water. Next the acid dissociates into bicarbonate and carbonate according to the two reactions

H2CO3(aq) + H2O → H3O+(aq) + HCO3−(aq)  (3.59)

HCO3−(aq) + H2O → H3O+(aq) + CO3 2−(aq)  (3.60)

Both equations are governed by an equilibrium constant. The relative magnitudes of the HCO3− and CO3 2− concentrations strongly depend on the pH of the water. For sea water with pH ≈ 8 one finds that HCO3− dominates with a relative concentration of 97% (Exercise 3.15). As a consequence the carbon will react with calcium in marine organisms according to the reaction

Ca2+ + 2HCO3−(aq) → CaCO3 + CO2 + H2O  (3.61)

In this way the carbon is stored in calcium carbonate (or calcite), which is used for the growth of shells and tissues. Part of it slowly sinks down to the bottom of the oceans. In a steady state this downward flux is compensated by an upward flux of carbon-rich waters near the equator, along the continental margins and in high-latitude waters. The process of transport down and up again is called the 'marine biological pump'. Its magnitude depends not only on the carbon content of the water, but to a very large extent on the availability of nutrients to sustain marine life, such as N, P and micro-nutrients like Fe and Mn. With a doubled CO2 concentration in the atmosphere it is difficult to predict how the availability of nutrients will respond to the resulting change of climate. It may go both ways. If we are lucky, more nutrients will result in more photosynthetic micro-organisms in the oceans. They would fix part of the excess of CO2, thereby mitigating the climate change. If we have bad luck and the nutrients are lacking, the photosynthetic life in the oceans would be strongly reduced. In that case the 'pump' would stop functioning and the atmospheric CO2 concentration might increase by another 150 [ppmv].
3.5.1.3 Atmosphere–Land Interaction
Somewhat to the left of the centre in Figure 3.22 the exchange between atmosphere and vegetation is shown. Again two big numbers determine the exchange: photosynthesis is estimated at 120 × 10^12 [kg/yr] and respiration at night, which includes emission of CH4 due to rotting, amounts to 119.6 × 10^12 [kg/yr]. Besides these two effects there is a 'land sink' due to increase of vegetation, which amounts to 2.6 × 10^12 [kg/yr]. The land uptake is calculated from the carbon budget. This is shown in Table 3.4 for the 1990s and 2000–2005. The input into the atmosphere from fossil fuel emissions can be estimated rather accurately. This also holds for the accumulation in the atmosphere. The uptake into the ocean is modelled, from which the net land uptake is inferred. This net uptake is the net effect of land use change, which binds less CO2 (which can be estimated), and a sink due to increasing growth of vegetation caused by the higher CO2 concentration in
Environmental Physics
Table 3.4 Global mean CO2 budget /[10^12 kg C yr−1].

                                               1990s           2000–2005
Estimated fossil fuel + industrial emissions   6.4 ± 0.4       7.2 ± 0.3
Measured accumulation in the atmosphere        3.2 ± 0.1       4.1 ± 0.1
Modelled ocean uptake                          2.2 ± 0.4       2.2 ± 0.5
Inferred net land uptake                       1.0 ± 0.6       0.9 ± 0.6
Partitioning of land uptake:
  Land use change, CO2 increase                1.6 (0.5–2.7)   Not available
  Residual land sink, CO2 decrease             2.6 (0.9–4.3)   Not available

With permission of Cambridge University Press, copyright 2007, reproduced from [1], p. 26.
the atmosphere. For the 1990s the residual sink is estimated as around 2.6 × [10^12 kg/yr], which explains the number in Figure 3.22; for the 2000s these data were not available. From Table 3.4 one may conclude that about half of the extra CO2 emissions is stored in the oceans or on land. It is not certain that this will remain so. As to the oceans, the balance may tip either way; as to the land, some models suggest that in the future increased respiration will turn the land into a source instead of a sink ([1], p. 777). That would strongly reinforce the greenhouse warming and climate change.

3.5.1.4 The Final Steady State
The numbers in Figure 3.22 refer to the present situation. In the future most recoverable fossil fuels will have been mined and combusted, say a quarter, which amounts to 1500 [10^12 kg]. Model calculations [13] have compared two time scales: one in which the resources are exhausted in 200 years and another in which that happens in 400 years. The result of such a calculation is shown in Figure 3.23. In the lower graph two types of emissions are shown, both starting in the year 1900. The drawn curve corresponds to a quick release of carbon, within 200 years, whereas the dashed one corresponds to a slower exhaustion of the resources, in about 400 years. The upper graph shows the resulting atmospheric CO2 concentrations as a function of time. In both cases a final equilibrium would be reached some 1500 years after 1900, with a concentration of about 400 [ppmv]. When the pulse is distributed more evenly in time (the dashed line) the peak height is lower. In both cases the climatic consequences will be large, but it stands to reason that in the second case they might be smaller, as the ecological system has more time to adapt. It must be noted, of course, that other greenhouse gases may take over the role of CO2 in the future, and consequently the equivalent CO2 concentration would continue to rise. The description of the carbon cycle given here illustrates that climate modelling not only requires a good understanding of physics and computer science, but also an in-depth knowledge of chemistry and biology.
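The qualitative behaviour of Figure 3.23, a lower peak for the slower release but the same final equilibrium, can be mimicked with a toy one-box model. This is our own sketch, not the ESCOBA calculation of [13]; the relaxation time, the equilibrium airborne fraction and the carbon-to-ppmv conversion below are illustrative assumptions, tuned by hand to reach roughly 400 [ppmv].

```python
# One-box sketch: the airborne carbon anomaly a(t) relaxes with time
# constant TAU towards the fraction F_AIR of cumulative emissions that
# remains airborne at equilibrium. TAU and F_AIR are assumed values;
# 2.13e12 kg C per ppmv CO2 is the usual conversion factor.

TAU = 300.0          # uptake relaxation time [yr] (assumed)
F_AIR = 0.15         # equilibrium airborne fraction (assumed)
KG_PER_PPMV = 2.13   # [1e12 kg C] per [ppmv] CO2

def simulate(total=1500.0, release_years=200, years=1500, dt=1.0):
    """Peak and final CO2 concentration [ppmv] for `total` [1e12 kg C]
    released at a constant rate over `release_years`."""
    rate = total / release_years
    a = cum = 0.0                 # airborne anomaly and cumulative emissions
    peak = 280.0
    for n in range(int(years / dt)):
        t = n * dt
        e = rate if t < release_years else 0.0
        cum += e * dt
        a += (e - (a - F_AIR * cum) / TAU) * dt   # explicit Euler step
        peak = max(peak, 280.0 + a / KG_PER_PPMV)
    return peak, 280.0 + a / KG_PER_PPMV

peak_fast, final_fast = simulate(release_years=200)   # drawn curve
peak_slow, final_slow = simulate(release_years=400)   # dashed curve
# the quick release gives a markedly higher peak, while both runs
# approach nearly the same final equilibrium
```

Running the two scenarios reproduces the qualitative picture: the 200-year release peaks well above the 400-year release, and 1500 years after the start both concentrations have settled near the same equilibrium value.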
3.5.2 Structure of Climate Modelling
The most advanced climate models are the so-called GCMs, general circulation models or general climate models. They explicitly consider all actors in the climate system, depicted in
Figure 3.23 Release of all recoverable fossil fuel resources. The lower graph shows a quick release (drawn) and a slower release (dashed). The upper graph indicates the resulting CO2 concentration in the air. (Upper panel: atmospheric CO2 concentration /[ppm], 300–600, with the final equilibrium marked; lower panel: CO2 emissions /[10^12 kg C yr−1], 0–20; horizontal axes: year, 1900–3100.) Adapted by permission of the European Commission, copyright 1999, from [13], p. 4.
Figure 3.1 and are often called AOGCMs, Atmosphere–Ocean Coupled General Circulation Models, to indicate that the coupling between atmosphere and ocean is taken into account. Their structure is shown in Figure 3.24. They are usually built up from modules, describing separately ocean, land, sea ice and atmosphere. The modules produce fluxes of energy (sensible and latent heat), momentum (winds, sea currents, rising air) and mass (water, carbon). These fluxes are exchanged and taken into account, as indicated by the central box. The need for coupling all modules will be apparent from the discussion so far. A higher temperature at high latitudes will cause expansion of the sea water, decreasing its density. If the effect is big enough, the surface waters in the North Atlantic will no longer sink, stopping the thermohaline circulation (MOC). That in its turn would influence the transportation of energy and the carbon cycle, and have its impact on the land vegetation. All these effects affect the atmosphere, the box on top in Figure 3.24.

3.5.3 Modelling the Atmosphere
The fields of science that are most important to model the behaviour of the atmosphere are given at the top right in Figure 3.24. Thermodynamics describes air as an ideal gas (3.15) and applies conservation of energy (3.21) in cloud formation and precipitation. Clouds also are responsible for 2/3 of the earth’s albedo ([1], 114) and thereby largely determine the
Figure 3.24 Structure of general climate models (GCMs). The four players, land, ocean, sea ice and atmosphere, are modelled and their interactions taken into account by the central box. (Boxes: Atmosphere: thermodynamics, dynamics, water vapour/clouds, chemistry, radiation; Land: vegetation, soil, lakes, glaciers, drainage; Sea ice: temperature, thermodynamics; Oceans: temperature, salinity, circulation; centre: interconnection of energy, momentum and mass (water, carbon etc.) fluxes.) (Reproduced from A Climate Modelling Primer, McGuffie, p. 166, fig 5.1, copyright 2005, John Wiley & Sons Ltd.)
local and global radiation balance. Chemistry is needed to analyse the composition of the atmosphere, the photo reactions that may break down or form greenhouse gases, and to model the CO2 uptake of the oceans (see our discussion of the carbon cycle). Dynamics starts from Newton's equations of motion (3.35) and describes the horizontal motion of air, while the hydrostatic equation (3.2) describes the vertical motion. In order to appreciate the computational effort of modelling we look at the equations of motion (3.35) in more detail:

$$\rho\,\mathrm{d}\tau\,\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t} = \mathbf{F}_{\mathrm{press}} + \mathbf{F}_{\mathrm{viscous}} + \mathbf{F}_{\mathrm{Coriolis}} + \mathbf{F}_{\mathrm{gravity}} \qquad (3.62)$$

The forces on the right-hand side have been discussed earlier (Eqs. (3.36)–(3.45)) and will be a function of longitude, latitude and altitude. The left of Eq. (3.62) describes a parcel of air with mass ρ dτ and velocity u. The velocity u is a function of time and position, u(x, y, z, t). We write out the x-component of the time derivative du/dt on the left of Eq. (3.62). This leads to

$$\frac{\mathrm{d}u_x}{\mathrm{d}t}(x,y,z,t) = \frac{\partial u_x}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t} + \frac{\partial u_x}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}t} + \frac{\partial u_x}{\partial z}\frac{\mathrm{d}z}{\mathrm{d}t} + \frac{\partial u_x}{\partial t} = \left(u_x\frac{\partial}{\partial x} + u_y\frac{\partial}{\partial y} + u_z\frac{\partial}{\partial z}\right)u_x + \frac{\partial u_x}{\partial t} = (\mathbf{u}\cdot\nabla)u_x + \frac{\partial u_x}{\partial t} \qquad (3.63)$$
The first equality expresses that the parcel of air moves in time; its position (x, y, z) therefore is also a function of time and has to be differentiated. The second equality is just a way of rewriting dx/dt = u_x, dy/dt = u_y and dz/dt = u_z. The third part of the equality then may be read as a scalar product (see Eq. (B4) in Appendix B) of the real vector u(x, y, z, t) and the symbolic vector ∇ = (∂/∂x, ∂/∂y, ∂/∂z), as is expressed concisely in the last part of Eq. (3.63). The right-hand side of Eq. (3.63) shows clearly that the left-hand side is nonlinear: if, for example, the velocity u doubled, the advection term (u·∇)u_x would become four times as big. The consequence is that solving Eqs. (3.62) with two slightly different initial conditions (pressure, density, velocity) at time t = 0 may lead to quite different solutions some time later. This effect is studied in chaos theory. In climate modelling one does not need to know the temperature and precipitation at a certain date and location, as in weather forecasting, but rather the change in average temperature and precipitation in a certain month and a certain region as a consequence of human-induced effects like radiative forcing. It is believed that the effects due to nonlinearity of the equations then average out ([1], pp. 105, 117). Equations (3.62) in general cannot be solved analytically. One therefore puts a two-dimensional grid over the globe and uses a number of vertical layers. In each little block of the resulting three-dimensional grid all properties are assumed to be the same. The 23 GCMs discussed in [1] typically have 30 or more vertical layers in both atmosphere and ocean and a horizontal resolution of 1° by 1° ([1], pp. 597–599). In this way the models are able to forecast climate as a function of region and time of the year. In fact, global maps are published with forecasts of temperature, precipitation and ice cover for the different seasons.
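The sensitivity to initial conditions mentioned above can be illustrated with the Lorenz-63 system, the classical three-variable caricature of atmospheric convection. This toy model is our own illustration, not part of any GCM; the parameter values are the standard chaotic ones, and the step size and number of steps are arbitrary choices for the demonstration.

```python
# Forward-Euler integration of the Lorenz-63 equations for two runs
# whose initial states differ by only 1e-8 in x.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one explicit Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)           # reference run
b = (1.0 + 1e-8, 1.0, 1.0)    # minutely perturbed run

for _ in range(8000):         # 40 model time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)

# the distance between the runs has grown by many orders of magnitude
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
```

Both trajectories remain bounded on the attractor, yet after a few tens of time units they are completely decorrelated, which is exactly why weather forecasting is limited to days while climate modelling works with statistics of many runs.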
It must be mentioned that not all details can be found from the models. For processes involving clouds, for example, several sets of parameters are in use, which is one of the aspects in which models diverge ([1], p. 602). To be realistic, the properties of the surface have to be taken into account. They are summarized in Table 3.5 and need to be known as a function of latitude and longitude. The first two, relief and roughness, describe the geography of the surface; the rest describe its physical properties.
Table 3.5 Boundary conditions of GCMs.

Earth surface property   Reason property is required for modelling
Relief                   Obstacles for air flow
Roughness                Friction
Albedo                   Radiative fluxes
Emissivity               Infrared flux
Heat capacity            Heat currents into and out of the soil
Heat conduction          Melting and freezing
Soil humidity            Evaporation and rain absorption
Ice and snow cover       Run-off, influence on albedo
Salinity                 Mixing run-off water in sea
3.5.4 A Hierarchy of Models
The point of climate modelling is to forecast the consequences of any mix of greenhouse gas emissions, land-use change and other effects of human interference. That should be done with a variety of realistic parameter sets in order to find the range of properties of the climate to be expected. However, each run of a complete GCM, comprising all aspects of Figure 3.24, costs a lot of computer time. Therefore less complex models have been designed, in which aspects of a GCM are summarized in a few effective parameters, estimated by comparison with a more complete GCM ([1], 643–645).
3.6 Analyses of IPCC, the Intergovernmental Panel on Climate Change
The science working group of IPCC evaluates the available literature on climate and climate change. As an example, Figure 3.25 shows the radiative forcing of climate between 1750 and 2005, due to several origins, which are briefly discussed below ([1], 199–202). At the top, the contribution of CO2 amounts to 1.66 [Wm−2]. This number is found by averaging the published values, which are based on the well-known past and present concentrations of this greenhouse gas and its well-known absorption of long-wave radiation (Section 3.2). In the CO2 case the group judges that there is both sufficient evidence from physics and sufficient consensus amongst the publications to assign a small error bar. The precise value of the error bar follows from expert assessment of the published values and their ranges. This procedure is followed everywhere in the graph. The second row of Figure 3.25 summarizes the effects of three other important greenhouse gases: CH4, N2O and the group of halocarbons. The physics is understood, but there are some differences between data sets and between radiative transfer models. For ozone, the third row, the error bar is much larger: for stratospheric ozone the changes prior to 1970 are uncertain, for tropospheric ozone the changes in lightning are uncertain, and for both the trends near the tropopause. Stratospheric water vapour from oxidation of CH4 is found in the next row; its error bar arises from the models of oxidation and radiative transfer. The surface albedo, the next row, has a negative and a positive component. Incomplete combustion of fossil fuel produces finely divided carbon particles, which precipitate on snow and decrease its albedo: a positive forcing effect. Land-use changes such as deforestation and desertification increase the albedo: a negative forcing. Aerosols scatter solar radiation back into space (a higher overall albedo a_a) and produce a negative forcing, either directly or by affecting the cloud albedo.
The latter effect follows from GCMs, but there is no direct observational evidence, hence the large error bar. Linear contrails produced by aircraft have only a small effect; changes in solar irradiance are somewhat larger. The resulting net effect of all human activities is estimated as 1.6 [Wm−2], the value that was used in Section 3.2.4.
3.7 Forecasts of Climate Change
An increasing concentration of greenhouse gases will result in higher temperatures and changes of climate. Suppose that at some future date the equivalent CO2 concentration,
Figure 3.25 Radiative forcing of climate between 1750 and 2005. On the left the contributions to radiative forcing are indicated and on the right they are shown with error bars based on expert assessment. (Rows, grouped into human activities and natural processes: long-lived greenhouse gases CO2, CH4, N2O and halocarbons; ozone, stratospheric (−0.05) and tropospheric; stratospheric water vapour; surface albedo, black carbon on snow and land use; total aerosol, direct effect and cloud albedo effect; linear contrails (0.01); solar irradiance; total net human activities. Horizontal axis: radiative forcing /[W m−2], from −2 to 2.) (Reproduced with permission of Cambridge University Press, copyright 2007, from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the IPCC (Chapter 3, Ref. [1], FAQ 1.2, Fig. 1).)
or more precisely the net human-induced radiative forcing, is kept constant at some value C_constant. The resulting equilibrium temperature rise can be calculated from Eq. (3.9) as

$$\Delta T_s = G_f\,\Delta I \qquad (3.64)$$

This equilibrium will be reached after some time because of the ocean delay (3.32). As mentioned earlier, the parameter G_f is estimated as about 0.8 [K/(Wm−2)]. The forcing ΔI has a simple relationship with the equivalent CO2 concentration:

$$\Delta I = \frac{3.7}{\log 2}\,\log\frac{[\mathrm{CO_2}]}{280} \qquad (3.65)$$

If CO2 has the pre-industrial concentration of 280 [ppmv], the human-induced forcing is defined as ΔI = 0, and when this concentration is doubled it follows from Eq. (3.65) that ΔI = 3.7 [Wm−2]. This value is generally accepted and the logarithmic dependence on the concentration is seen as a good approximation. For the 2005 value [CO2] = 379 [ppmv], for example, one finds with Eq. (3.65) ΔI = 1.62 [Wm−2], in good agreement with Figure 3.25. From Eqs. (3.64) and (3.65) the equilibrium temperature rise can be calculated as

$$\Delta T_s = G_f \times \frac{3.7}{\log 2}\,\log\frac{C_{\mathrm{constant}}}{280} \qquad (3.66)$$

In Figure 3.26 it is assumed that the equivalent CO2 concentration, including all elements of Figure 3.25, will stabilize at one of four values: 450 [ppmv], 550 [ppmv], 750 [ppmv] or 1000 [ppmv]. In the lower figure the corresponding emissions are calculated with the so-called simple Hadley model. It is clear that in all cases the emissions must go down, as the greenhouse gases have a rather long lifetime in the atmosphere (Table 3.2). The decrease of emissions must be strongest for the lowest equilibrium concentration of 450 [ppmv]. From Eq. (3.66) it appears that this concentration would lead to an equilibrium temperature rise of 2 [◦C].
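Equations (3.65) and (3.66) are easily put into a few lines of code; this sketch uses G_f = 0.8 [K/(Wm−2)] as quoted above, and the function names are our own.

```python
from math import log

G_F = 0.8   # climate sensitivity parameter G_f [K/(W m^-2)]

def forcing(co2_ppmv):
    """Human-induced radiative forcing Delta-I [W m^-2], Eq. (3.65)."""
    return 3.7 / log(2.0) * log(co2_ppmv / 280.0)

def equilibrium_warming(co2_ppmv):
    """Equilibrium temperature rise Delta-T_s [K], Eq. (3.66)."""
    return G_F * forcing(co2_ppmv)

print(round(forcing(379), 2))              # 1.62 [W m^-2], the 2005 value
print(round(equilibrium_warming(450), 1))  # 2.0 [K] for stabilization at 450 [ppmv]
```

The two printed values reproduce the numbers quoted in the text: the 2005 forcing of about 1.62 [Wm−2] and the 2 [◦C] equilibrium warming for stabilization at 450 [ppmv].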
Figure 3.26 Stabilization of CO2 concentrations at four values (top) and the corresponding emissions in a simple model, including the impact of the carbon cycle (bottom). (Reproduced with permission of Cambridge University Press, copyright 2007, from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the IPCC (Chapter 3, Ref. [1], FAQ 1.2, Fig. 1).)
The lower graph includes the effects of the carbon cycle on the CO2 concentration. It appears that higher temperatures will reduce the land carbon uptake: the photosynthetic production goes down and the respiration of the vegetation goes up, so the difference between the two large numbers of Figure 3.22 decreases. Figure 3.26 starts from a stabilizing greenhouse gas concentration and calculates the corresponding emissions. One may also work the other way round: start from a realistic economic scenario, then calculate the emissions of all greenhouse gases and all forcings separately, add them up, compute the resulting total forcing and finally find the resulting climate as a function of time and location from a GCM. This in fact has been done, and we will describe one of the scenarios, called A1B, and discuss its climatic consequences as computed by the available GCMs. The A1B scenario [15] describes a world of very rapid economic growth, a population peak in the middle of the twenty-first century and a rapid introduction of new and efficient technologies. There is a balance between fossil-fuel intensive and non-fossil energy sources, and it is assumed that there is no implementation of internationally agreed emission targets. The model is run from 2000 to 2100 and in some cases even to the year 2300. The emissions of greenhouse gases follow from the model ([1], 803). For the concentrations there is already a range of possible values, depending on, for example, the precise operation of the carbon cycle. Consequently, there is a range in the radiative forcing, which increases in time. For A1B in 2100 the range is estimated to be between 5.8 and 6.4 [Wm−2], see Figure 3.27. The resulting temperature rise is calculated for 19 GCMs, which produce an extra spread, illustrated in the right-hand graph of Figure 3.27. The global temperature rise from 1990 to the year 2100 will be between 1.8 and 4.2 [◦C] in this scenario, with 3.4 [◦C] as the most probable value ([1], 763).
The sea-level rise in 2100 will be mainly due to thermal expansion (0.13 to 0.32 [m]), while all other effects like melting of glaciers, ice caps, and Greenland and Antarctic ice sheets contribute another 0.08 to 0.16 [m] ([1], 820).
Figure 3.27 Radiative forcing (left) and temperature rise (right) calculated for the A1B scenario for the next three centuries. (Reproduced with permission of Cambridge University Press, copyright 2007, from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the IPCC (Chapter 3, Ref. [1], FAQ 1.2, Fig. 1).)
A warmer climate allows more water vapour in the air (Exercise 3.18), which will lead to more precipitation. In 2100 the global precipitation is expected to be 4% higher than in the year 2000 ([1], 763). This extra precipitation will be distributed unevenly over the globe and will be concentrated in the tropics (monsoons) and at high latitudes. More rain near Greenland will reduce the salinity of the sea water, which in its turn would slow down the thermohaline circulation or MOC, but in this century no complete collapse is predicted. The amount of warm water transported northward from the equator will diminish, however, resulting in a somewhat less warm climate at mid-latitudes in Europe ([1], 775, 776). This only changes the distribution of heat over the globe, so elsewhere it will become even warmer. The response of the Arctic sea ice to global warming is interesting: melting of more ice during summer will allow the open water to receive more solar energy, because of the lower albedo of water as compared to ice. This will lead to open waters deep into September, or even into autumn ([1], 776).
Exercises

3.1 For a mixture of gases like air: (a) Derive R = Σᵢ xᵢRᵢ, where xᵢ = ρᵢ/ρ is the relative mass of component i, ρ = Σᵢ ρᵢ and Rᵢ = 10^3 R/Mᵢ for Mᵢ the molecular weight of component i. (b) Find R for air and compare with the value given in Appendix A.

3.2 From Eqs. (3.11) and (3.3) derive the Poisson equation (3.12) for adiabatic change.

3.3 Consider 1 [m3] of dry air, which is heated from 10 [◦C] to 20 [◦C]. It picks up evaporated water vapour from the surroundings, by which the humidity becomes 80%. The air stays at ground level. Calculate the increase in sensible heat and the increase (from zero) of latent heat. Note: for this exercise you need to consult tables like those in Chapter 1, refs [2] or [3].

3.4 According to Figure 2.1 the tails of the solar spectrum and the earth IR spectrum will cross at a certain wavelength. Calculate the wavelength at which the intensity of the solar (black-body) radiation received on earth [Wm−2] equals the emitted IR [Wm−2]. Use for the radius of the sun Rsun = 6.96 × 10^8 [m] and the distance from the centre of the sun to the centre of the earth Rse = 1.49 × 10^11 [m]. Check with Figure 2.1 that both spectra are well separated.

3.5 Check: (a) that the terms in Eq. (3.18) numerically correspond with Figure 3.5; calculate Ts − Ta; (b) perform the calculations for a = 0.50; a = 0.35; S1 = 0.85 × S; S2 = 0.72 × S.

3.6 Distinguish between the window region 9–12.5 [μm] and the rest of the spectrum. Use the Planck function (2.4) and calculate the contribution of both parts of the spectrum to the black-body emission from the earth surface. For a temperature of 288 [K] find for the window region Bsw = 110.45 [Wm−2] and outside the window Bsout = 280 [Wm−2].

3.7 Use the data of Example 3.1 and plot the upward atmospheric flux as a function of z, both inside and outside of the window. For the flux outside the window also plot the first 100 [m] of height and interpret the results.
3.8 Take the model of Example 3.1 and plot, as a function of kw and koutw, the upward atmospheric flux separately inside the window and outside the window. Take values starting from those in Example 3.1 and go somewhat higher. Add the surface contribution (3.19) inside the window and comment.

3.9 Convert the volume fraction of water vapour in Table 3.2 into a mass fraction of 3 × 10^3 [ppm] (parts per million). Estimate the height of the water column when, hypothetically, all water vapour rains out and find it to be 3 [cm]. Take the average annual rainfall over earth as about 100 [cm]. What is your conclusion on the turnover rate of water in the atmosphere and on the sensitivity of models to a correct description of the water cycle?

3.10 Estimate τ of Eq. (3.30), starting by calculating cs for a water column of h = 120 [m].

3.11 Take the radiative forcing in 1750 as ΔI = 0 and in 2005 as ΔI = 1.6 [W m−2]. (a) Derive a smooth exponential curve for ΔI(t) between 1750 and 2005. Assume this to have been the 'real forcing'. (b) The forcing can be viewed as a big number of small sudden forcings d(ΔI(t)). Write down an integral for ΔTs(t = 2005) using Eq. (3.32). (c) Calculate the result, for example by using a simple code. (d) Assume the forcing does not increase after 2005 and plot the temperature between 2005 and 2050. (e) Repeat the calculation for the year 1850 and find the temperature increase to be 0.87 [◦C]. (f) Check whether more recent IPCC reports give later data for the forcing or the heat capacity of the oceans and, if so, recalculate the temperature increase. Hint: Rewrite the forcing as ΔI(t) and the temperature increase as ΔT(t).

3.12 The temperature difference between the Gulf Stream and the returning colder water is about 12 [◦C]. Calculate the heat energy that is transported, using the data in the text. Assume for the Kuroshio a similar result and compare with Example 3.2.

3.13 Derive the hydrostatic equation (3.4) from Eqs.
(3.35), (3.38) and (3.45), assuming that all vertical motion vanishes.

3.14 Find on the internet a map of the isobars in your neighbourhood. Make a rough estimate of |∇p|. Also calculate your value of the Coriolis parameter f. Find the geostrophic velocity and compare with the value given on your weather map.

3.15 For sea water one may take pH = 8. Use the equilibrium constants K1 = [H3O+][HCO3−]/[H2CO3] = 4.3 × 10^−7 and K2 = [H3O+][CO3^2−]/[HCO3−] = 4.8 × 10^−11 to calculate the fractions in sea water of [CO3^2−], [HCO3−] and [H2CO3]. Students proficient with computers may plot the three fractions as a function of pH.

3.16 Scenario A1B forecasts a thermal expansion of sea water in the year 2100 of 0.13 [m] for a temperature rise of 1.8 [◦C]. (Actually this is one of the low GCM forecasts.) (a) Calculate the height of the oceans that is supposedly expanding, assuming that the height h expands and the rest does not. Use the cubic expansion coefficient for water α = 21 × 10^−5 [K^−1]. (b) How does your answer compare with Exercise 3.10? Explain.

3.17 The West Antarctic ice sheets (together some 0.83 million [km2] and only 6% of the total Antarctic ice surface) might lose their grip on the rock by warming and collapse into the sea. Estimate the resulting sea-level rise.

3.18 Show (from tables) that air of 21 [◦C] can contain about 6% more water vapour than air of 20 [◦C].
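For Exercise 3.15 a minimal computational sketch is given below; the function name and structure are our own, and only the two equilibrium constants quoted in the exercise are used.

```python
# Carbonate speciation from the two equilibrium constants of Exercise 3.15.
K1 = 4.3e-7    # [H3O+][HCO3-]/[H2CO3]
K2 = 4.8e-11   # [H3O+][CO3--]/[HCO3-]

def carbonate_fractions(pH):
    """Return the fractions (f_H2CO3, f_HCO3, f_CO3) at a given pH."""
    h = 10.0 ** (-pH)
    # relative abundances, normalizing [H2CO3] to 1:
    # [HCO3-]/[H2CO3] = K1/h and [CO3--]/[H2CO3] = K1*K2/h^2
    r = (1.0, K1 / h, K1 * K2 / h ** 2)
    total = sum(r)
    return tuple(x / total for x in r)

f = carbonate_fractions(8.0)   # sea water, pH = 8
# at pH = 8 the bicarbonate ion HCO3- strongly dominates
```

Evaluating the function over a range of pH values (e.g. with matplotlib) gives the speciation plot suggested in the exercise; at pH 8 almost all dissolved carbon is present as HCO3−.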
References

[1] Solomon, S., Qin, D., Manning, M. et al. (eds) (2007) for the Intergovernmental Panel on Climate Change, The Physical Science Basis, IPCC: Cambridge University Press, Cambridge, UK. The IPCC reports contain a host of articles with many authors; readers should consult www.ipcc.ch to find the authors of the material we are referring to.
[2] McIlveen, R. (2010) Fundamentals of Weather and Climate, 2nd edn, Oxford University Press. This is a very readable book with extensive meteorological discussions and mathematical derivations at a rather elementary level.
[3] Hutter, K., Blatter, H. and Ohmura, A. (1990) Climate Change, Ice Sheet Dynamics and Sea Level Variations, Dept of Geography, ETH-Zürich.
[4] Houghton, J.T., Jenkins, G.J. and Ephraims, J.J. (eds) (1990) Climate Change, The IPCC Scientific Assessment, Cambridge University Press, Cambridge, UK.
[5] Campbell, I.M. (1977) Energy and the Atmosphere, A Physical-Chemical Approach, John Wiley, London. This is an interesting and easy-to-read introduction.
[6] van Dorland, R. (1999) Radiation and Climate, From Transfer Modelling to Global Temperature Response, Thesis, Utrecht University, Utrecht, Netherlands.
[7] http://savi.weber.edu/hi_plot/. The spectra to be plotted are based on the so-called Hitran database, which is regularly updated.
[8] Schlesinger, M. (1988) Quantitative analysis of feedbacks in climate model simulations of CO2-induced warming, in: Physically-Based Modelling and Simulation of Climate and Climate Change, Part 2 (ed. M.E. Schlesinger), Kluwer, Dordrecht.
[9] Schönwiese, C.-D. and Diekmann, B. (1987) Der Treibhauseffekt [The Greenhouse Effect], Deutsche Verlags-Anstalt, Stuttgart. The warming effect as given in Table 3.2 originates from K.Y. Kondratiev and N.I. Moskalenko (1984).
[10] Hartmann, D.L. (1994) Global Physical Climatology, Academic Press, San Diego.
[11] Dijkstra, H.A. (1997) Oceaancirculatie en klimaatverandering [Ocean circulation and climate change]. Nederlands Tijdschrift voor Natuurkunde 63, 79.
[12] Houghton, J.T., Ding, Y., Griggs, D.J. et al. (eds) (2001) for the Intergovernmental Panel on Climate Change, Climate Change 2001, The Scientific Basis, Cambridge University Press, Cambridge, UK.
[13] ESCOBA, A European Multidisciplinary Study of the Global Carbon Cycle in Ocean, Atmosphere and Biosphere, European Commission 1999, EUR 16989 EN.
[14] McGuffie, K. and Henderson-Sellers, A. (2005) A Climate Modelling Primer, 3rd edn, John Wiley, Chichester, UK.
[15] Nakicenovic, N. and Swart, R. (eds) (2000) IPCC Special Report on Emissions Scenarios, Cambridge University Press, UK. Chapter 4.
4 Heat Engines

It is characteristic of human beings to delegate hard physical labour. Before the industrial revolution animals and slaves were used to produce mechanical output in, for example, a treadmill. An estimate of the power that can be produced by a healthy human being can be made from an anecdote reported by the French physicist Coulomb [1]. He was told that French marines had reached the peak of Tenerife, with a height of 2923 metres, on foot in 7.25 hours. With an average body weight of 70 [kg] this corresponds to 77 [W] during their walk (Exercises 4.1 and 4.2). As a human being has to rest part of the day, the effective power averaged over 24 hours will be even lower. The industrial era in Europe started with the extensive use of water wheels and windmills to produce the mechanical power to drive the looms, soon followed by the use of steam engines for these and many other applications. In steam engines combustion of wood or coal produced heat, which was then converted into mechanical power. More generally, heat engines convert heat into power, irrespective of the origin of the heat. Physicists recognized that both heat and mechanical work are forms of energy. They developed the field of thermodynamics to describe the conversion process and found that thermodynamic variables are useful to understand and quantify many other processes, like chemical reactions, condensation and evaporation. Heat engines still form the heart of industrial society. This is illustrated in Figure 4.1. On the right in the figure one finds the end uses of power, from agriculture to transport. The power is provided from the top middle of the figure and originates from the energy sources on the left. The sources are stocks below the surface of the earth, which by mining or drilling are brought into their concentrated forms, either as chemical energy (oil, gas, coal) or physical energy as in uranium.
These may be used to produce intermediate heat, which a heat engine converts to power, or they may serve heat end uses directly: space heating in buildings, or process heat to speed up chemical reactions in industry. On the left of Figure 4.1 one also finds the renewable energies (solar, biomass, wind and hydropower), some of which may be converted directly into mechanical or electrical power, while others give intermediate heat and drive a heat engine.

Environmental Physics: Sustainable Energy and Climate Change, Third Edition. Egbert Boeker and Rienk van Grondelle. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd.

It is a challenge to scientists
(Boxes in Figure 4.1: flow energy = renewables (solar, wind, biomass, hydro); energy stocks (oil fields, gas fields, coal beds, U-ores) reached by mining and drilling; concentrated energy, chemical (oil, gas, coal CxHy) or physical (uranium 235U); intermediate heat; heat engines; power, electric or mechanical; application of power (agriculture, industry, services, domestic, transport); heat end use (space heating, process heat).)
Figure 4.1 Energy supply in society. On the right one finds the end uses of power and the end uses of heat. On the left the energy sources, from stocks or from renewables, are displayed. They provide power either directly or by means of intermediate heat. In the latter case one needs heat engines to convert heat into power.
and engineers to apply their knowledge and design machines which lose as little energy as possible in all the steps of Figure 4.1. From Figure 4.1 it is obvious that heat engines play a key role in modern society. In this chapter we therefore start with heat itself. In Section 4.1 the transfer and storage of heat are discussed, followed by the principles of thermodynamics in Section 4.2. In Section 4.3 examples of converting heat into motion are given, in accordance with the literal meaning of the word thermodynamics: heat and motion. In power stations mechanical power from heat engines is converted into electrical power, which acts as an energy carrier to the final user. Therefore, in Section 4.4 electricity is discussed as an energy carrier, while Section 4.5 is devoted to the pollution aspects of heat engines. In Section 4.6 the subject matter is applied to the energy aspects of the private car. Finally, in a brief Section 4.7 the financial aspects of building and maintaining an expensive installation are described. In this chapter all properties summarized in Appendices B and C will be used.
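As a quick check of the introductory anecdote, Coulomb's 77 [W] follows directly from the potential energy gained per second, P = mgh/t; the short calculation below is our own back-of-the-envelope sketch.

```python
g = 9.81            # gravitational acceleration [m s^-2]
m = 70.0            # body mass [kg]
h = 2923.0          # height climbed [m]
t = 7.25 * 3600.0   # climbing time [s]

power = m * g * h / t   # mechanical power delivered against gravity
print(round(power))     # 77 [W]
```

Note that this counts only the work done against gravity; the metabolic power of the marines was considerably higher, since muscles convert chemical energy to work with limited efficiency.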
4.1 Heat Transfer and Storage
For some applications heat needs to be stored for a period of time; in these cases the loss of heat should be as small as possible. For other applications one needs heat at a different place from where it is produced, and one has to know the ways in which heat energy may be transported. The hot exhaust pipe of a motor cycle illustrates the mechanisms of heat transport. When one touches the exhaust pipe, one will burn one's fingers, as heat is transported from the inside to the outside of the pipe by conduction. When one puts the
Heat Engines
Figure 4.2 Heat flowing in a homogeneous sheet with surface area A over a distance d from a high temperature T1 to a lower temperature T2, resulting in a heat current density q″ from left to right.
hand underneath the pipe one feels its radiation, and when one puts the hand above the pipe one will feel both the rising hot air (convection) and radiation. Finally, heat is transported from the engine to the outside air with the flow of exhaust gases, mainly as sensible heat, partly as latent heat of water vapour. These mechanisms will be discussed below.

4.1.1 Conduction
Consider for simplicity a sheet of homogeneous material with two plane-parallel surfaces at a distance d, as sketched in Figure 4.2. It is assumed that the surface area A is big enough to ignore the effects of the edges. The temperature is indicated by T along the vertical axis and the position perpendicular to the planes by x. At position x1 the sheet has a temperature T1 and at position x2 it has a temperature T2. The heat current density q″ is defined as the amount of heat in joules [J] which flows per second [s] through 1 [m2] perpendicular to its direction. Its dimension therefore is written as q″ [J s−1 m−2 = Wm−2]. The total heat current q may be written as

q = Aq″ [W]    (4.1)
For a homogeneous material, as in Figure 4.2, Fourier's law gives a simple relation between the heat current density q″ and the gradient of the temperature. In a one-dimensional situation, as shown in Figure 4.2, this may be written as

q″ = −k dT/dx    (4.2)

For a general three-dimensional case one uses the gradient vector (see Appendix B) and the expression becomes

q″ = −k∇T    (4.3)

where k [Wm−1 K−1], the thermal conductivity, depends on the kind of material, its temperature and density. Table 4.1 gives values of k for some common materials. For insulators one should use materials with small k and for highly conductive systems materials with a large k – especially metals.
Table 4.1 Heat transport properties of common materials at T = 300 [K]. Columns: thermal conductivity k/[Wm−1 K−1], density ρ/[kg m−3], Fourier coefficient a/[10−7 m2 s−1], contact coefficient b/[J m−2 K−1 s−1/2].

Material                    k        ρ       a       b
Insulators
  Air                       0.026    1.161   225     —
  Glass fibre (loose fill)  0.043    16      32      24
  Urethane foam             0.026    70      3.6     44
  Cork                      0.039    120     1.8     92
  Mineral wool granules     0.046    190     —       —
  Paper                     0.180    930     1.4     470
  Glass                     1.4      2500    7.5     1620
  Gypsum plaster            0.22     1680    1.21    635
Building materials
  Cement mortar             0.72     1860    4.96    1020
  Soft wood                 0.12     510     1.71    290
  Hard wood                 0.16     720     1.77    380
  Oak wood                  0.19     545     1.46    499
  Brick                     0.72     1920    4.49    1075
  Concrete                  1.4      2300    6.92    1680
Metals
  Iron                      80.2     7870    228     17000
  Aluminium                 237      2700    972     24000
  Steel (C, Si)             52       7800    149     13500
  Copper                    401      8933    1166    37000
Miscellaneous
  Sand                      0.27     1515    2.23    572
  Soil                      0.52     2050    1.38    1400
  Soft rubber               0.13     1100    0.59    540
  Cotton                    0.06     80      5.77    80
  Porcelain                 —        —       —       1610
  Human skin                0.37     —       —       1120
  Linoleum                  —        —       —       525
  Carpet wool               —        —       —       105
  Water                     0.61     996     —       —

Data from [2]; the lowest four b-values from unpublished lecture notes, Delft Technological University.
Figure 4.2 describes a stationary situation, therefore the heat flow q will be independent of x, as for any plane the heat flow 'in' should equal the heat flow 'out'. One finds with Eqs. (4.1) and (4.2):

q = −kA dT/dx = constant    (4.4)
Therefore the derivative dT/dx is constant and T(x) is a straight line, as already shown in the figure. It follows that dT/dx = (T2 − T1)/d and

T1 − T2 = q d/(kA) = qR    (4.5)

Eq. (4.5) is analogous to Ohm's equation for an electric current i between potentials V1 and V2, which reads V1 − V2 = iR, where R is the electric resistance. The heat resistance R [K W−1] follows from Eq. (4.5) and may be written as

R = d/(kA)    (4.6)

which is analogous to the expression for the electric resistance, in which case k would be the electric conductivity. This explains the name thermal conductivity for k in Eq. (4.2). In fact, Eq. (4.3) with T replaced by V and q″ by j is another expression of Ohm's law. The analogy between conduction of heat and electricity reaches even further. This will become clear from looking at two sheets, with different thicknesses d1 and d2 and different thermal conductivities k1 and k2, as indicated in Figure 4.3. Again a stationary situation is considered, therefore no heat accumulates and one has the same current density q″ everywhere. Eq. (4.5) may be applied to each sheet and the two resulting equations may be added. This gives

T1 − T2 = q d1/(k1A) = qR1
T2 − T3 = q d2/(k2A) = qR2
T1 − T3 = q(R1 + R2) = qR    (4.7)
where R = R1 + R2 may be interpreted as the substitution resistance of two resistances in series.
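The series-resistance rule of Eqs. (4.5)-(4.7) is easy to check numerically. The sketch below is an illustration added to the text, not part of the original; the layer dimensions and temperatures are assumed values, with k for brick and cork taken from Table 4.1.

```python
# Heat flow through two sheets in series (Eqs. (4.5)-(4.7)).
# Hypothetical wall: 10 cm brick (k = 0.72) plus 5 cm cork (k = 0.039),
# area A = 10 m^2, inside at 20 degC, outside at 0 degC.

def heat_resistance(d, k, A):
    """Thermal resistance R = d/(kA) of a homogeneous sheet, Eq. (4.6)."""
    return d / (k * A)

A = 10.0                                 # [m^2]
R1 = heat_resistance(0.10, 0.72, A)      # brick
R2 = heat_resistance(0.05, 0.039, A)     # cork
R = R1 + R2                              # resistances in series, Eq. (4.7)

T1, T3 = 20.0, 0.0                       # [degC]
q = (T1 - T3) / R                        # total heat current [W]
T2 = T1 - q * R1                         # temperature at the brick-cork interface

print(f"R = {R:.3f} K/W, q = {q:.0f} W, interface T2 = {T2:.1f} degC")
```

The interface temperature shows that nearly the whole temperature drop occurs over the cork layer, which carries most of the resistance.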
Figure 4.3 Two parallel homogeneous sheets in a stationary situation. Thickness d and thermal conductivity k may be different for the two sheets. The heat current density is the same everywhere; the thermal resistances may be added.
Table 4.2 Typical values for convection heat transfer coefficients h/[Wm−2 K−1].

Phase     Free convection     Forced convection
Gases     2–25                25–250
Fluids    50–1000             50–20 000

Phase change (evaporation or condensation): 2500–100 000. Reproduced with permission of John Wiley, copyright 1990, from [2], p. 9.
4.1.2 Convection
When a hot body like a teapot of temperature Ts is in contact with air of temperature T∞, the air particles will be heated, become less dense, move up and take part of the heat from the teapot with them. Newton's law of cooling describes the heat current density of the resulting free convection as

q″ = h(Ts − T∞)    (4.8)
where h [Wm−2 K−1] is the convection heat transfer coefficient. This process of heat transfer becomes more effective when air is blown around the pot with a fan, as more particles of air are passing the pot. This is called forced convection. Here, again Eq. (4.8) describes the process, but the coefficient h will be much larger. Table 4.2 gives some typical values of the convection coefficients. The range of values of h is big; also cooling with a fluid like water appears to be much more effective than with air, which agrees with experience. The bottom row of Table 4.2 will be discussed later. For a surface with area A it is possible to define a convection heat resistance analogous to Eq. (4.5). From (4.8) it follows that

Ts − T∞ = q″/h = q/(hA) = qRconv    (4.9)

with Rconv = 1/(hA). For a wall consisting of two parallel sheets with two outer surfaces, shown in Figure 4.3, the convection at the two outer surfaces adds to the heat resistance. With convection coefficients h1 and h2, the total heat resistance would become

R = 1/(h1A) + d1/(k1A) + d2/(k2A) + 1/(h2A)    (4.10)
This again reads as four resistances in series. Note that we came across convection in Figure 3.5 as one of the mechanisms by which the earth's surface exchanges energy with the atmosphere. In the same figure we also met another mechanism of energy transfer, radiation.
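The four resistances of Eq. (4.10) can be added in the same way as in the conduction case. A short numerical sketch (added here; the h values are assumed, picked from the ranges of Table 4.2):

```python
# Total resistance of a two-sheet wall with convection on both faces, Eq. (4.10):
# R = 1/(h1 A) + d1/(k1 A) + d2/(k2 A) + 1/(h2 A)
# Assumed data: free convection inside (h1 = 8), forced convection outside (h2 = 25).

A = 10.0                                  # [m^2]
h1, h2 = 8.0, 25.0                        # [W m^-2 K^-1]
layers = [(0.10, 0.72), (0.05, 0.039)]    # (d [m], k [W m^-1 K^-1]): brick, cork

R = 1.0 / (h1 * A) + sum(d / (k * A) for d, k in layers) + 1.0 / (h2 * A)
T_in, T_out = 20.0, 0.0                   # [degC]
q = (T_in - T_out) / R                    # heat current through the wall [W]

# The inner surface temperature follows from the film resistance 1/(h1 A):
T_surface = T_in - q / (h1 * A)
print(f"R = {R:.3f} K/W, q = {q:.0f} W, inner surface at {T_surface:.1f} degC")
```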
4.1.3 Radiation
Any body in a radiation field may be characterized by three parameters, each depending on wavelength λ: the fraction α λ of incoming radiation to be absorbed, the fraction ρ λ to be reflected and the fraction tλ to be transmitted. As there are no other mechanisms than
absorption, reflection and transmission, it follows that αλ + ρλ + tλ = 1. A black body by definition has no transmission or reflection at any wavelength, so ρλ = tλ = 0 and αλ = 1. The emission from a black body depends on wavelength λ and temperature T but not on direction. At any (λ, T) no surface can emit more than a black body, of which the spectrum was already given in Eqs. (2.4) or (2.2), and shown in Figure 2.1. Integrated over all wavelengths one obtains formula (2.6), also used in Chapter 3, viz.

q″ = σT⁴    (4.11)

For a body which is not completely black one could write the spectrum as ελ times the black-body value (2.4) at wavelength λ. Here, ελ is called the emissivity of the surface, with ελ ≤ 1. Kirchhoff's law states that, at any temperature T and for any surface, emissivity equals absorptivity

ελ(T) = αλ(T)    (4.12)
For a black body ελ(T) = αλ(T) = 1 and for a hypothetical grey body both are independent of λ and T, so ε = α. A grey body with surface temperature Ts in an environment that completely surrounds the body will emit q″ = εσTs⁴. The surroundings may be assumed to behave like a black body, as no radiation is reflected back. This environment, however, produces its own radiation, which may be described by an effective radiation temperature Tsurr. The absorption of this radiation by the grey body will be q″ = ασTsurr⁴. As ε = α it follows that the net heat exchange with the surroundings can be written as

q″ = εσTs⁴ − ασTsurr⁴ = εσ(Ts⁴ − Tsurr⁴)    (4.13)

In principle, one could define a heat resistance here as before, and add it to the resistance (4.10) (Exercise 4.3).

4.1.4 Phase Change
The bottom row of Table 4.2 refers to phase change, in particular the combination of convection with evaporation or condensation. An example is the way in which power stations dispose of their surplus heat. In so-called wet towers a spray of hot water from the condenser is cooled by rising air (forced convection). Furthermore, part of the water evaporates, extracting its evaporation heat from the spray as well, which together with the forced convection leads to effective cooling. Another example of phase change is found in the refrigerator, where a working fluid is circulated in a closed circuit. Inside the refrigerator the fluid is forced to evaporate, thereby cooling its surroundings. At the outside the vapour passes a condenser, where the resulting condensation heat is exchanged with the surroundings. The resulting convection can be experienced by putting your hand above the grating at the back of the fridge.

4.1.4.1 Thermosyphon and Heat Pipe
Two other examples of phase change are shown in Figure 4.4. On the left a thermosyphon is shown, which consists of a closed pipe with liquid at the bottom. There heat is supplied from the outside, causing evaporation of the liquid. At the top of the syphon a cooling
Figure 4.4 Transport of heat by phase change in (a) a thermosyphon and (b) a heat pipe.
fluid is passing; consequently the vapour is cooled and it condenses, liberating the heat again. The heat is taken away by the cooling fluid. By gravity the condensed liquid descends again. The heat pipe, on the right in Figure 4.4, does not need gravitation to return the fluid to its original position. In this case there is a wick or gauze at the inner side of the pipe. The condensation takes place at the right of the figure and evaporation on the left. This causes a concentration gradient in the wick, resulting in a force to the left, to even out the concentration differences.

4.1.5 The Solar Collector
The solar collector, sketched in Figure 4.5, produces hot water from incoming solar radiation. The solar radiation is absorbed by an absorber, which heats water tubes that transport hot water to a container on the right. If necessary, some additional gas heating can be provided before the water is used for the tap or for home heating. Below, the mechanisms discussed above are used to answer some simple questions. For the sake of the argument it is assumed that under optimal conditions the water is heated to 80 [◦C].
Figure 4.5 A simple flat plate solar collector. Solar radiation is absorbed; it heats water tubes, which transport hot water to a container where, if necessary, additional heating is provided.
Example 4.1 Maximum Incoming Energy Density
Give an estimate of the incoming energy density S [Wm−2] for which the collector should be designed.

Answer
The absolute maximum will be the solar constant S = 1366 [Wm−2], the energy density entering the upper atmosphere. From Figure 3.5 we conclude that on average only 47% reaches the surface, which is about 650 [Wm−2]. At the poles the irradiation will be lower and at the equator higher. Therefore it seems reasonable to take a maximum energy density of S = 1000 [Wm−2] as a requirement for the collector design.
Example 4.2 Temperature Differences within the Collector

The absorber in Figure 4.5 is constructed from 10 [cm] thick high-quality steel. What will be the maximum temperature difference between the top of the absorber and the water tubes?

Answer
The thermal conductivity of steel is given in Table 4.1 as k = 52 [Wm−1 K−1]. From Eq. (4.5) one has

T1 − T2 = q″ × d/k = 1000 [Wm−2] × 0.1 [m] / 52 [Wm−1 K−1] = 1.9 [K]

This is small, but not negligible, so a thinner absorber would help. The construction should be insulated with, for example, glass fibre to reduce conduction losses to the surroundings of the device.
Example 4.3 The Glazing of the Collector

What is the function of the glazing on top of the collector?

Answer
Without glazing the absorber would be in direct connection with the atmosphere at, say, T∞ = 20 [◦C]. The loss of heat by convection is found from Eq. (4.8) with h = 10 [Wm−2 K−1] as an average value from Table 4.2. This gives for the convection loss

q″ = h(Ts − T∞) = 10 [Wm−2 K−1] × 60 [K] = 600 [Wm−2]

which is close to the maximum incoming power density. So, without glazing the temperature of 80 [◦C] can never be reached. With the glazing there still will be convection
underneath the glass. Assume that the resulting glass temperature is 30 [◦C]. The heat loss from absorber to glass then would be

q″ = h(Ts − T∞) = 10 [Wm−2 K−1] × 10 [K] = 100 [Wm−2]

which is still considerable. That is the reason why modern collectors work with a vacuum between the absorber and the ambient atmosphere.
Example 4.4 Radiation Losses from the Collector

Argue why one would need coatings on the absorber with αλ = ελ ≈ 0.94 in the visible region and αλ = ελ ≈ 0.07 (or if possible less) in the IR region. What is the use of the glass cover in this respect?

Answer
Assume that the absorber emits the maximum possible, which is that of a black body, q″ = σT⁴. For T = 273 + 80 [K] the radiation loss becomes q″ = 881 [Wm−2], more than the incident radiation. This again would mean that the temperature of 80 [◦C] cannot be reached. The key is to use the fact that emissivity ελ and absorptivity αλ are functions of λ, and moreover that the solar spectrum to be absorbed and the infrared spectrum to be emitted (for T = 80 + 273 [K]) peak at very different wavelengths, see Figure 2.1. Therefore the trick is to use a coating on the absorber with a high value αλ ≈ 0.94 in the visible region around λ ≈ 0.5 [μm] (see Eq. (2.5)). In that region almost all of the incoming radiation will be absorbed. Of course ελ ≈ 0.94 also in that region (Eq. (4.12)). However, black-body emission for T = 80 + 273 [K] peaks at λ ≈ 8.2 [μm] (Eq. (2.5)) and is almost negligible in the region λ ≈ 0.5 [μm]; therefore one will not lose anything in the visible spectral region. The coating must be such that in the infrared region around λ ≈ 8.2 [μm] the emission will be very small. In practice, values αλ = ελ ≈ 0.07 have been achieved, still implying a loss of 0.07 × 881 [Wm−2] = 62 [Wm−2]. The infrared radiation from the coating will be absorbed for a large part by the glass cover, which is transparent for visible light but opaque for IR light. Even in the absence of convection inside the cover, the absorbed IR radiation will increase the cover's temperature, resulting in some radiation loss to the outside.
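The numbers of Example 4.4 can be reproduced (to rounding) from Eq. (4.11). The script below is only a check of the worked example, added here for illustration:

```python
# Black-body radiation loss of the absorber, Eq. (4.11): q'' = sigma * T^4.
SIGMA = 5.67e-8            # Stefan-Boltzmann constant [W m^-2 K^-4]

T = 273.0 + 80.0           # absorber temperature [K]
q_black = SIGMA * T**4     # uncoated (black) absorber loss
q_coated = 0.07 * q_black  # selective coating with eps ~ 0.07 in the IR

print(f"black body: {q_black:.0f} W/m^2, coated: {q_coated:.0f} W/m^2")
```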
Example 4.5 A Realistic Solar Collector

A realistic solar collector has a glass cover with a high transmission tc = 0.97, a temperature Tc = 30 [◦C] and an emissivity in the IR region ε = 0.94. The ambient air has a temperature Tair = 20 [◦C] and the surroundings have a radiation temperature Tsurr = −10 [◦C]. The convection coefficient may be taken as h = 10 [Wm−2 K−1]. (a) Calculate the heat flux into the collector. (b) A mass m [kg s−1] of water is passing the tubes per second and is heated from Tin to Tout. Calculate the temperature increase of the water Tout − Tin for m = 0.01 [kg s−1] and a surface area of the collector A = 3 [m2].

Answer
(a) As the glass cover does not transmit the IR from the absorber, the only interaction of the collector with the surroundings is the glass cover itself. One has the following relation for the net input q″ [Wm−2]:

q″ = tc S − εσ(Tc⁴ − Tsurr⁴) − h(Tc − Tair) = 336 [Wm−2]    (4.14)

(b) The specific heat of water is given in Appendix A as cp [J kg−1 K−1]. One has

q = Aq″ = mcp(Tout − Tin)

where the dimensions check, for [W] = [J s−1] = [kg s−1] [J kg−1 K−1] [K]. The answer becomes Tout − Tin = 24 [◦C].

Note: On the internet one will find many modern types of solar collectors. It is interesting that some models use heat pipes to transport heat from the absorbers to a central tube, where water takes the heat to a central container.
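Part (b) of Example 4.5 is a one-line energy balance. As a sketch (taking the net flux q″ = 336 [Wm−2] from part (a) of the worked example, and cp = 4180 [J kg−1 K−1] for water as an assumed handbook value):

```python
# Water heating in the collector: q = A * q'' = m * cp * (Tout - Tin)
q_flux = 336.0    # net input from Eq. (4.14) [W/m^2]
A = 3.0           # collector area [m^2]
m = 0.01          # mass flow [kg/s]
cp = 4180.0       # specific heat of water [J kg^-1 K^-1]

dT = A * q_flux / (m * cp)   # temperature rise of the water [K]
print(f"Tout - Tin = {dT:.1f} K")
```

which reproduces the 24 [◦C] quoted in the text.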
4.1.6 The Heat Diffusion Equation
In general, the temperature T in a medium will depend on position r and time t, so T = T(r, t). The heat diffusion equation describes the change of temperature that follows from conservation of heat, for the case where the heat transport in the medium is by conduction only. Conservation of heat means that for any time interval t to t + dt and for any volume element dV the following expression holds:

increase of heat content [J] = net inflow by conduction [J] + internal heat production [J]    (4.15)

On the very right of Eq. (4.15) the heat production (which is negative in the case of a heat sink and positive for exothermal chemical reactions or radioactive decay) is determined by the heat production rate q̇(r, t) [J s−1 m−3], which contributes q̇ dV dt [J] to (4.15). The first term on the right-hand side of Eq. (4.15), the net inflow, can be found by using Appendix B. There it is shown that the outflow of a field q″ [Wm−2] is described by div q″ [Wm−3]. The inflow between t and t + dt therefore becomes −div q″ dV dt [J]. On the left-hand side of Eq. (4.15) the heat content of the volume element may be written as ρcp T dV [J], where ρ [kg m−3] is the density and cp [J kg−1 K−1] is the specific heat. Eq. (4.15) therefore gives

(∂/∂t)(ρcp T) dV dt = −div q″ dV dt + q̇ dV dt    (4.16)
Using Fourier's equation (4.3) leads to

ρcp ∂T/∂t = div(k grad T) + q̇    (4.17)

This equation is known as the heat diffusion equation or, in short, the heat equation. It is, however, advisable not to use the shorthand, as Eq. (4.17) does not describe convection or radiation of heat. If the thermal conductivity k is independent of position, Eq. (4.17) may be simplified to

ρcp ∂T/∂t = k div(grad T) + q̇    (4.18)

or

∂T/∂t = aΔT + q̇/(ρcp)    (4.19)

where Δ is the Laplace operator (Appendix B) and the material property a = k/(ρcp) is called the Fourier coefficient; it is listed for a number of materials in Table 4.1. When there are no heat sources or sinks present, q̇ = 0 and Eq. (4.19) reduces to

∂T/∂t = aΔT    (4.20)

and in one dimension

∂T/∂t = a ∂²T/∂x²    (4.21)

In the following we will give some examples of Eq. (4.21) taken from daily life.
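Eq. (4.21) can also be integrated numerically. The sketch below (an addition, not part of the original text) uses the simplest explicit finite-difference scheme, which is stable for aΔt/Δx² ≤ 1/2, to relax a rod with fixed end temperatures towards the straight-line steady state of Section 4.1.1:

```python
# Explicit finite differences for dT/dt = a d2T/dx2 (Eq. (4.21)).
# A 10 cm rod held at 100 degC on the left and 0 degC on the right relaxes
# to the linear steady-state profile implied by Eq. (4.4).
a = 1.0e-6            # Fourier coefficient [m^2/s], of the order of Table 4.1 values
dx = 0.01             # grid spacing [m]
dt = 0.4 * dx**2 / a  # time step obeying the stability limit r <= 1/2
r = a * dt / dx**2    # = 0.4

T = [0.0] * 11        # 11 nodes
T[0] = 100.0          # boundary temperatures are kept fixed

for _ in range(2000):
    T = [T[0]] + [T[i] + r * (T[i+1] - 2*T[i] + T[i-1])
                  for i in range(1, 10)] + [T[10]]

# After many steps the interior is close to the straight line T(x) = 100 (1 - x/L)
print([round(t) for t in T])
```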
4.1.6.1 Periodic Heat Wave Hitting a Semi-Infinite Medium
A periodic heat wave with frequency ω and amplitude T0 coming from the left hits a very thick wall perpendicular to the wave. The coordinate perpendicular to the wall is called x; because of the thickness of the wall it is assumed to extend from x = 0 to x = ∞. The boundary conditions of this problem may be written as

x = 0: T = Tav + T0 cos ωt
x = ∞: T = Tav    (4.22)

where Tav is the temperature at great depth in the wall, where the temperature variation will never be experienced. We guess that the solution within the medium will be a periodic wave travelling with velocity v and with an amplitude which is damped as a function of x. This gives

T = Tav + T0 e^(−Ax) cos ω(t − x/v)    (4.23)

This function indeed satisfies the boundary conditions (4.22). By substitution it is shown that (4.23) is a solution of Eq. (4.21), provided that A = v/(2a) and also A = ω/v, which gives

v = √(2aω),  A = √(ω/(2a))    (4.24)

The damping depth xe = 1/A is the depth at which the amplitude of the wave is reduced by a factor 1/e, and the delay time τ = 1/v is the time required for the wave to travel 1 [m] (Exercise 4.8).
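Eq. (4.24) explains, for instance, why seasonal temperature variations die out a metre or so below ground. A numerical sketch, added here (a for soil from Table 4.1; the annual cycle as driving frequency is an assumption, not from the text):

```python
import math

# Damping depth x_e = 1/A = sqrt(2a/omega) for a periodic wave, Eq. (4.24)
a = 1.38e-7                               # Fourier coefficient of soil [m^2/s]
omega = 2 * math.pi / (365 * 24 * 3600)   # annual temperature cycle [rad/s]

v = math.sqrt(2 * a * omega)              # wave speed [m/s]
x_e = math.sqrt(2 * a / omega)            # damping depth [m]

print(f"damping depth = {x_e:.2f} m")
```

so the yearly wave is damped by 1/e a little over a metre down, which is why deep cellars stay at a nearly constant temperature.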
4.1.6.2 A Sudden Change in Temperature
A similar half-infinite medium as above now experiences a sudden temperature change from T0 to T1. Examples are the cooling of a hot chimney by a shower or the heating of a stone floor by a fire. The boundary conditions are

T(x = 0, t ≥ 0) = T1
T(x > 0, t = 0) = T0    (4.25)
The equations imply that the outside temperature T1 remains constant in the course of time, while inside the material a heat flow will start. The solution of the heat diffusion equation (4.21) with these conditions leads to the error function defined in Eq. (C7) of Appendix C:

T = A + Bx + C erf(x/(2√(at)))    t ≥ 0    (4.26)

This may be verified by substitution into (4.21). The error function should be differentiated first to its argument x/(2√(at)) and then to x or t. The next step is to use Eqs. (4.25) and one will find

T = T1 + (T0 − T1) erf(x/(2√(at)))    t ≥ 0    (4.27)

The heat current density q″ follows in one dimension from Eq. (4.2). This gives

q″ = (k(T1 − T0)/√(πat)) e^(−x²/(4at))    (4.28)

At the boundary x = 0 one finds

q″ = k(T1 − T0)/√(πat) = b(T1 − T0)/√(πt)    (4.29)

where b = √(kρcp) is called the contact coefficient and is displayed in the last column of Table 4.1. Its name is understood from the following example.
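As a numerical illustration of Eq. (4.27), added here (the floor scenario and its temperatures are assumed; a for concrete is taken from Table 4.1):

```python
import math

# Sudden surface temperature change, Eq. (4.27):
# T(x, t) = T1 + (T0 - T1) * erf(x / (2 sqrt(a t)))
a = 6.92e-7           # Fourier coefficient of concrete [m^2/s]
T0, T1 = 15.0, 35.0   # initial floor temperature and new surface temperature [degC]

def T(x, t):
    return T1 + (T0 - T1) * math.erf(x / (2 * math.sqrt(a * t)))

# Temperature 5 cm below the surface, one hour after the change:
print(f"T(5 cm, 1 h) = {T(0.05, 3600):.1f} degC")
```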
4.1.6.3 Contact Temperature
One brings two half-infinite materials with temperatures T1 and T2 (T1 > T2) into ideal thermal contact at time t = 0. The contact plane will acquire a contact temperature Tc somewhere in between. This contact temperature is found by the requirement that the flow out of the hotter material (with contact coefficient b1) equals the flow into the colder (with contact coefficient b2). From Eq. (4.29) one finds

(b1/√(πt))(T1 − Tc) = (b2/√(πt))(Tc − T2)    (4.30)

or

Tc = (b1T1 + b2T2)/(b1 + b2)    (4.31)
For finite materials in contact this relation will be a good approximation for a short period of time when the heat flows have not changed the temperature of the objects involved, for
example for a human foot walking on a colder floor. Its skin temperature will be about 30 [◦ C]. From experience it is known that on an oak wooden floor with a temperature of 15 [◦ C] a naked foot still feels comfortable. Apparently the heat flow out of the foot is small.
Example 4.6 A Pleasant Floor Temperature

From the data given above deduce the temperatures at which the naked foot still feels comfortable for a concrete floor and for a cork floor. Discuss the physics behind the difference.

Answer
The contact temperature of skin and oak wood follows from Eq. (4.31) and Table 4.1 as 25.4 [◦C]. Eq. (4.31) may be rewritten as T2 = ((b1 + b2)Tc − b1T1)/b2, which gives for concrete T2 = 22 [◦C] and for cork T2 = −31 [◦C]. The physical origin of the difference is that the low k value of cork results in a small heat current into the cork. The heat flow out of the foot then will be small, allowing a large temperature difference (Exercise 4.9).
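Eq. (4.31) together with the b-values of Table 4.1 reproduces the numbers of Example 4.6; the check below is added for illustration only.

```python
# Contact temperature, Eq. (4.31): Tc = (b1 T1 + b2 T2) / (b1 + b2)
def contact_T(b1, T1, b2, T2):
    return (b1 * T1 + b2 * T2) / (b1 + b2)

# Contact coefficients b [J m^-2 K^-1 s^-1/2] from Table 4.1:
b_skin, b_oak, b_concrete, b_cork = 1120.0, 499.0, 1680.0, 92.0
T_skin = 30.0

Tc_oak = contact_T(b_skin, T_skin, b_oak, 15.0)   # the comfortable reference case

def floor_T(b2, Tc):
    """Floor temperature giving the same contact temperature; Eq. (4.31) inverted."""
    return ((b_skin + b2) * Tc - b_skin * T_skin) / b2

print(f"oak contact: {Tc_oak:.1f} degC")
print(f"concrete floor: {floor_T(b_concrete, Tc_oak):.0f} degC, "
      f"cork floor: {floor_T(b_cork, Tc_oak):.0f} degC")
```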
4.1.7 Heat Storage
The solar collector of Figure 4.5 is coupled to a reservoir of hot water of about 150 [L], which at a temperature of 80 [◦ C] would be enough for a few hot showers at night when the sun no longer heats the reservoir. With good insulation the hot water may last till the next morning (Exercise 4.10). It is more complicated to collect hot water during the summer and use it during the winter. One obviously needs a much larger volume of water if one stores the water as sensible heat. An average household in mid-latitudes would need to store more than 100 [m3 ] water of 90 [◦ C] with very good insulation (Exercise 4.11). A better proposition could be to use the latent heat from a phase change at a temperature not far from room temperature. Water with phase change temperatures of 0 [◦ C] or 100 [◦ C] is not convenient. There are many suitable materials, each with their own strengths and weaknesses; much research is going on to match materials with a particular application [4]. We mention two examples of phase change materials. The first is octadecane, a paraffin with a melting point at 28 [◦ C] and heat of fusion of 189 [MJ/m3 ]. At temperatures below 28 [◦ C] it is a solid; for temperatures above, it becomes a liquid, extracting heat from the surroundings. When the heat is needed the temperature is lowered again. Difficulties are a low conductivity in the solid phase (k = 0.15 [Wm−1 K−1 ]) and a density change from solid to liquid. A second example is found in the class of hydrates such as Glauber salt Na2 SO4 .10H2 O. Below 32.4 [◦ C] the salt is solid. At that temperature it loses its water and becomes anhydrous sodium sulfate and a saturated solution of Na2 SO4 in water. The heat of fusion is 377 [MJ/m3 ]. Again there is a large density difference between the two phases. Moreover, part of the anhydrous sulfate settles down as bottom sediment; it is therefore difficult to get a reversible process.
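The storage volumes quoted above are easy to estimate. The figures below are rough, assumed numbers (a seasonal demand of 20 [GJ] per household and cp = 4180 [J kg−1 K−1] for water), added here for illustration and not data from the text:

```python
# Seasonal heat storage: sensible heat in water versus latent heat of fusion.
E = 20e9                   # assumed seasonal heat demand of a household [J]
rho_cp = 1000.0 * 4180.0   # volumetric heat capacity of water [J m^-3 K^-1]
dT = 50.0                  # assumed usable temperature swing of the water store [K]

V_sensible = E / (rho_cp * dT)   # sensible-heat store [m^3]
V_octadecane = E / 189e6         # octadecane, heat of fusion 189 [MJ/m^3]
V_glauber = E / 377e6            # Glauber salt, heat of fusion 377 [MJ/m^3]

print(f"water: {V_sensible:.0f} m^3, octadecane: {V_octadecane:.0f} m^3, "
      f"Glauber salt: {V_glauber:.0f} m^3")
```

Even the phase-change volumes stay of the order of 100 [m³], which is why the text goes on to call for materials with fusion heats of the order of [GJ/m³].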
It appears (Exercise 4.11) that for the examples mentioned the enthalpy of fusion is too low for seasonal storage. Research therefore is looking for materials with fusion heats in the order of [GJ/m3 ]. It should be added that some phase change materials could be encapsulated inside building materials. During the day heat is absorbed by phase change keeping the building cool and during the cooler night the heat is given back to the building. In this way daily temperature changes are damped. The last example of heat storage we mention is to transfer the heat surplus of the summer into the ground water. There it is kept in aquifers, sand mixed with water, until the heat is needed in winter. There are two ways to transfer the heat. The simplest way is to have two boreholes, one to pump ground water up; the second to pump it down again after heat has been extracted. A better way is to have a closed system with one or several boreholes. In each hole a heat pump, to be discussed later in Eq. (4.64), takes care of the heat transfer. In this case there are two heat exchangers, one down in the ground and the second in a building. The heat pump will work in both directions: heating the building in winter and cooling it in summer. In using groundwater as storage medium, one will lose some heat to the surroundings and one has to be careful not to pollute the groundwater.
4.2 Principles of Thermodynamics
The two laws of thermodynamics provide a macroscopic understanding of energy conversion. For their application thermodynamic variables are introduced, of which entropy S is the most important.

4.2.1 First and Second Laws
In the simplest case, a homogeneous quantity of matter with mass m in a single phase, the macroscopic state of the system is defined by two independent variables, usually the pressure p and either the volume V or the temperature T. For a mixture of chemical substances in perhaps several phases (solid, liquid, gas) the number of moles ni^φ of chemical i in phase φ must also be specified ([5], [6]). The First Law of Thermodynamics expresses conservation of energy. This law has already been used for the specific case of 1 [kg] of air in Eq. (3.9). In general, the First Law may be written for an infinitesimal (= very small) change as

δQ = dU + δW    (4.32)
That is, the amount δQ of heat added to the system is used to increase its internal energy by dU and to perform work δW. In many handbooks ([5], [6]) δW is defined as work done on the system, in which case it will have a minus sign in Eq. (4.32). In practice, one will consider changes of a system from state 1 to state 2. In that case the work W 1→2 done by the system will depend on the path. Also the heat added Q1→2 will depend on the path. The fact that the amount of heat is dependent on the path is indicated by the symbol δ in Eq. (4.32). On the other hand, the change in internal energy only depends on the internal energy of the initial and final states U 1→2 = U 2 − U 1 . Such a function is called a state
Figure 4.6 The work done by the gas equals pAdz = pdV.
function and its change is indicated by the symbol d, as in dU (Eq. (4.32)). A finite change of a state function may be indicated by the symbol Δ, for example ΔU = U2 − U1. For a quasistatic, reversible process where expansion is the only kind of work, the work done by the system may be written as δW = pdV. This is illustrated in Figure 4.6 for the simple case of a gas. The gas expands over a distance dz against the air pressure p. Its counterforce equals pA, where A is the surface area of the piston. The work done (force × path) indeed equals pAdz = pdV. For the case of Figure 4.6, Eq. (4.32) becomes

δQ = dU + pdV    (4.33)
or in integral form

Q = U2 − U1 + ∫₁² pdV    (4.34)
where the subscript 1 → 2 is omitted in the added heat Q. If the system also performs some other work, for example electromagnetic, that work is expressed by adding δWe to Eq. (4.33) and its integral to Eq. (4.34), giving

δQ = dU + pdV + δWe    (4.35)

Q = U2 − U1 + ∫₁² pdV + ∫₁² δWe    (4.36)
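The statement that Q and W depend on the path while U does not can be made concrete for an ideal gas. The sketch below is an illustration added to the text (it assumes one mole of a monatomic ideal gas, for which U = (3/2)nRT):

```python
import math

# Two paths for one mole of monatomic ideal gas from (T1, V1) to (T2, V2 = 2 V1).
R_GAS = 8.314          # gas constant [J mol^-1 K^-1]
T1, T2 = 300.0, 400.0  # [K]
dU = 1.5 * R_GAS * (T2 - T1)    # internal energy change: a state function

# Path A: isothermal expansion at T1, then heating at constant volume (W = 0).
W_A = R_GAS * T1 * math.log(2.0)
Q_A = dU + W_A                  # First Law in integral form, Eq. (4.34)

# Path B: heating at constant volume first, then isothermal expansion at T2.
W_B = R_GAS * T2 * math.log(2.0)
Q_B = dU + W_B

print(f"dU = {dU:.0f} J for both paths")
print(f"path A: W = {W_A:.0f} J, Q = {Q_A:.0f} J")
print(f"path B: W = {W_B:.0f} J, Q = {Q_B:.0f} J")
```

The work and heat differ between the paths, but their difference Q − W = ΔU is the same, as the First Law requires.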
The Second Law of Thermodynamics may be formulated by introducing the entropy function

dS = δQ/T    (4.37)
where the reversibly added heat δQ is divided by the absolute temperature T. The integral of this function is independent of the path ([5], 179/180); S is a state function, indicated by the symbol d in dS. More generally, one may include irreversible processes as well, which leads to the Clausius inequality

dS ≥ δQ/T    (4.38)
Figure 4.7 A certain mass expands through the throttle via narrow holes without heat exchange with the surroundings. The enthalpy is the same on the left and the right.
Here the equality sign (=) would hold for reversible processes. The Second Law says that for a closed system (where the added heat δQ = 0) the entropy does not decrease:

dS ≥ 0    (4.39)
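A classic numerical illustration of Eq. (4.39), added here as a sketch, is the irreversible transfer of heat between two reservoirs, each of which exchanges its heat reversibly at its own temperature (Eq. (4.37)):

```python
# Entropy change when heat Q flows from a hot to a cold reservoir.
Q = 1000.0                     # heat transferred [J]
T_hot, T_cold = 400.0, 300.0   # reservoir temperatures [K]

dS_hot = -Q / T_hot            # the hot reservoir loses heat
dS_cold = Q / T_cold           # the cold reservoir gains heat
dS_total = dS_hot + dS_cold    # entropy of the closed combined system

print(f"dS_total = {dS_total:.2f} J/K")
```

The total entropy increases, consistent with Eq. (4.39); it would vanish only in the reversible limit T_hot → T_cold.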
From the four state functions (p, V, U, S) others can be derived for specific applications: enthalpy H, free energy F and Gibbs free energy G, all with the unit [J]. The enthalpy H is defined by

H = U + pV    (4.40)
For an infinitesimal change of state it follows that

dH = dU + pdV + Vdp = δQ + Vdp    (4.41)
The last equality is restricted to processes where Eq. (4.33) holds. Many processes (phase change, chemical reactions) are taking place at constant pressure, dp = 0. Then the added heat is equal to the increase in enthalpy of the system. Enthalpy changes are tabulated as ΔH ([kJ kg−1] or [kJ mol−1]). The significance of the term pV in Eq. (4.40) can be understood by the example of the throttle shown in Figure 4.7. A certain amount of gas expands through little holes from the left to the right without exchange of heat with the surroundings, Q = 0. The pressures p1 on the left and p2 on the right are kept constant (p1 > p2). Eq. (4.34) may be written as

0 = U2 − U1 + ∫_{V1}^{0} pdV + ∫_{0}^{V2} pdV = U2 − U1 + p1 ∫_{V1}^{0} dV + p2 ∫_{0}^{V2} dV = U2 − U1 − p1V1 + p2V2    (4.42)

or

H1 = U1 + p1V1 = U2 + p2V2 = H2    (4.43)
which means that the enthalpy of the initial state equals the enthalpy of the final state. In Eq. (4.42) p2V2 − p1V1 is the work done by the system, so pV may be interpreted as a work function. Another useful function is the free energy F of the system, defined by

F = U − TS    (4.44)
Its relevance follows from the reversible infinitesimal change

dF = dU − TdS − SdT = dU − δQ − SdT = −δW − SdT    (4.45)
where Eqs. (4.37) and (4.32) were used. For a reversible isothermal change with dT = 0, it follows that −δW = dF or, in words: the work done on the system is equal to its increase in free energy. Conversely, for an isothermal process the work δW that can be delivered equals the decrease in free energy, hence the name 'free'. Note that this applies to the total work, expansion and nonexpansion work together. A similar meaning can be attached to the last state function to be considered, the Gibbs free energy (often shortened to free energy, so be careful):

G = H − TS    (4.46)
For a reversible, infinitesimal process one finds, using Eqs. (4.40), (4.35) and (4.37)

dG = dH − TdS − SdT = dU + pdV + Vdp − TdS − SdT = δQ − δWe + Vdp − δQ − SdT = Vdp − SdT − δWe
(4.47)
Consequently, for isobaric, isothermal processes (dp = dT = 0) one has dG = −δWe
(4.48)
This relation shows that nonexpansion work δW e reversibly done by the system will decrease its Gibbs free energy by the same amount (dp = dT = 0). The Gibbs free energy is often used to discuss a system consisting of different compounds, for example, ni moles of chemical i with i = 1,2,3. . . Then one could write G = G(p, T, ni ) and its differential as
dG = (∂G/∂p)_{T,ni} dp + (∂G/∂T)_{p,ni} dT + Σ_i (∂G/∂n_i)_{p,T,nj} dn_i   (4.49)

where the partial derivatives on the right assume all variables except one to be constant. Eqs. (4.49) and (4.47) should hold for all values of dp, dT, dn_i, therefore their coefficients should be equal. Thus the first two terms in Eq. (4.49) are equal to the first two terms of Eq. (4.47), while the last term may be written as

−δWe = Σ_i (∂G/∂n_i)_{p,T,nj} dn_i = Σ_i μ_i dn_i   (4.50)

which defines the chemical potentials μ_i = (∂G/∂n_i)_{p,T,nj} [J mol−1]. The nonexpansion work δWe done by the system if dp = dT = 0 equals the right-hand side of Eq. (4.50) with an extra minus sign. In general one has

dG = Vdp − SdT + Σ_i μ_i dn_i   (4.51)

Similarly one will find (Exercise 4.12)

dU = −pdV + TdS + Σ_i μ_i dn_i   (4.52)
Heat Engines
95
The chemical potential μi [J mol−1 ] gives the increase of internal energy of the system when one adds dni moles of substance i while keeping all other variables constant. In deriving Eqs. (4.45) and (4.48) the fact that for reversible processes the definition of entropy (4.37) gives δQ = TdS was used. For irreversible processes the Clausius inequality (4.38) gives TdS ≥ δQ. This implies (Exercise 4.13) that for isothermal processes (dT = 0) δW ≤ −dF
(4.53)
and for isothermal isobaric processes (dp = dT = 0) δWe ≤ −dG
(4.54)
Eq. (4.53) shows that for isothermal (dT = 0) irreversible processes (the inequality sign) the work is smaller than the decrease in free energy; similarly, if dp = dT = 0 the nonexpansion work that can be done is smaller than the decrease in Gibbs free energy. It must be stressed that Eqs. (4.51) and (4.52) hold for all processes, reversible or irreversible. The difference in G- or U-values between two states can be found from tabulated values of the chemical potentials, as they are independent of the path.
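Inequality (4.54) is what makes the Gibbs free energy so useful in practice: at constant p and T, −ΔG bounds the nonexpansion (for example electrical) work. A small numerical sketch; the reaction data below are illustrative assumptions, roughly of the magnitude of hydrogen combustion, and not values from the text:

```python
# Maximum nonexpansion work from Eq. (4.54): delta_We <= -dG at constant p, T.
# Hypothetical per-mole reaction data (assumed, order of magnitude of H2 + 1/2 O2).

delta_G = -237.0e3   # Gibbs free energy change [J/mol] (assumed)
delta_H = -286.0e3   # enthalpy change [J/mol] (assumed)

w_e_max = -delta_G   # maximum electrical work per mole, Eq. (4.54)
print(w_e_max)                        # → 237000.0 [J/mol]
print(round(w_e_max / -delta_H, 2))   # fraction of |dH| available as nonexpansion work
```

The remainder |ΔH| − |ΔG| = TΔS must be exchanged as heat, which is why even an ideal fuel cell cannot convert the full combustion enthalpy into electrical work.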
4.2.2 Heat and Work; Carnot Efficiency
An ideal heat engine operates between two reservoirs of heat, one at high temperature TH and one at a low temperature TC. It is assumed that the reservoirs are so big that addition or extraction of heat does not change the temperatures. For each reservoir the finite change of entropy ΔS may be found from Eq. (4.37) as

ΔS = ∫ δQ/T = Q/T   (4.55)

where Q is the total amount of heat added to the reservoir at constant temperature T. The Clausius inequality (4.38) becomes

ΔS ≥ Q/T   (4.56)

The two reservoirs and the engine form a closed system for which the Second Law becomes, from Eq. (4.39)

ΔS ≥ 0   (4.57)
The ideal heat engine, sketched in Figure 4.8a, operates in a cycle where the initial and final states of the system are the same. In the cycle an amount of heat QH is taken from the high-temperature reservoir, produces work W and the remaining heat QC is delivered to the cold reservoir. The net heat put into the engine equals QH − QC. For a completed cycle the change in internal energy of the system is zero. The First Law (4.32) then gives QH − QC = W. The entropy change ΔS has a contribution (4.55) from the hot reservoir and one from the cold reservoir. The entropy change of the engine is zero as the initial and final states are the same. The hot reservoir has −QH as heat added and the cold reservoir +QC. This gives as total entropy change (4.57)

−QH/TH + QC/TC = −QH/TH + (QH − W)/TC ≥ 0   (4.58)
Figure 4.8 Ideal ways of converting heat into work and vice versa. The heat engines (a) and (c) deliver work, the heat pump (b) and refrigerator (d) use work to convert low temperature heat into high temperature heat. (Reproduced by permission from IEEE Publishing, copyright 1986, from [3], p. 76.)
or

W ≤ QH (1 − TC/TH)   (4.59)
The best one can achieve is to perform all changes reversibly, which gives the maximum work Wmax from the equality sign in Eq. (4.59)

Wmax = QH (1 − TC/TH)   (4.60)

The thermal efficiency η of a heat engine is defined by

η = work output / heat input = W/QH   (4.61)
For the reversible heat engine, working between two reservoirs, this gives the Carnot efficiency

ηCarnot = 1 − TC/TH   (4.62)
The usual heat engine operates between the ambient temperature TC and a much higher temperature TH, as shown in Figure 4.8a. In principle, an engine could also operate using the ambient temperature as TH and a much lower temperature as TC, illustrated in Figure 4.8c. Heat engines could also use work as input in order to transfer heat from a lower to a higher temperature. The first example is the heat pump depicted in Figure 4.8b. It may be used to convert low-temperature heat in, for example, groundwater to the higher temperatures needed in a living room. The coefficient of performance COP of installations is generally defined as

COP = useful output / required input   (4.63)
for a complete cycle. For the heat pump the useful output is the high-temperature heat QH. The input is the work W = QH − QC, where again the First Law (4.32) is applied. For a complete cycle the only entropy changes take place in the reservoirs. This gives from (4.57)

QH/TH − QC/TC ≥ 0  →  QH/QC ≥ TH/TC

With this inequality the COP can be calculated as

COP = useful output / required input = QH/W = QH/(QH − QC) = (1 − QC/QH)^−1 ≤ (1 − TC/TH)^−1 = 1/ηCarnot   (4.64)
This COP is bigger than 1 and in practice may be as high as 5 or 6. The heat pump is sometimes called a thermodynamic lever. To come back to Section 4.1.7 on heat storage, it is clear that heat storage in summertime in the ground or in groundwater may be done by a heat pump. In summer the low temperature TC is in the ground and the higher temperature TH in a building; in winter it is the other way round. Figure 4.8d shows the second example, where work is used to transport heat QC from TC to a higher temperature TH. The lower temperature is inside the refrigerator or freezer and the higher temperature is the ambient one; consequently the 'useful output' of Eq. (4.63) becomes QC and the required input remains W = QH − QC. The COP can be calculated as

COP = useful output / required input = QC/W = QC/(QH − QC) = (QH/QC − 1)^−1 ≤ (TH/TC − 1)^−1   (4.65)
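As a numerical illustration (the temperatures are assumed values for a typical household setting, not from the text), Eqs. (4.62), (4.64) and (4.65) can be evaluated directly:

```python
# Reversible (ideal) limits from Eqs. (4.62), (4.64) and (4.65).
# Temperatures in kelvin; the numbers below are illustrative assumptions.

def carnot_efficiency(t_hot, t_cold):
    """Eq. (4.62): maximum fraction of heat convertible to work."""
    return 1.0 - t_cold / t_hot

def heat_pump_cop(t_hot, t_cold):
    """Eq. (4.64): ideal COP = QH/W = 1/eta_Carnot."""
    return 1.0 / carnot_efficiency(t_hot, t_cold)

def refrigerator_cop(t_hot, t_cold):
    """Eq. (4.65): ideal COP = QC/W = (TH/TC - 1)^-1."""
    return 1.0 / (t_hot / t_cold - 1.0)

# Heat pump: groundwater at 278 K heating a room at 293 K (assumed values)
print(round(heat_pump_cop(293.0, 278.0), 1))      # → 19.5
# Refrigerator: freezer at 255 K, kitchen at 293 K (assumed values)
print(round(refrigerator_cop(293.0, 255.0), 1))   # → 6.7
```

These ideal values lie far above the practical COP of 5 or 6 quoted above; irreversibilities in real machines account for the difference.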
Some examples of refrigerators will follow in Section 4.3.5.

4.2.3 Efficiency of a 'Real' Heat Engine
The equality signs for efficiencies or COPs in the previous section apply to cases where all changes are reversible. In this section we consider a heat engine which is closer to reality ([7], 33–37), as the unavoidable irreversible processes are described explicitly. One distinguishes between external reservoirs with temperatures T H and T C and hypothetical internal reservoirs with temperatures θ H and θ C . These internal reservoirs remain the same during the process; they do not keep heat, just transmit it. In this more realistic situation heat QH is transported from the external to the internal reservoir by conduction (irreversibly). Next, that heat drives a Carnot engine between the two internal reservoirs; this process is assumed to be reversible as the irreversibility has already been taken into account. Between the internal reservoirs work W is produced; heat QC leaves the cooler internal reservoir and is transported irreversibly to the external cold reservoir. The equations representing this process are easy to write down. Conduction is proportional to the difference in temperature, see Eq. (4.5), which we rewrite as Q H = aH (TH − θH ) Q C = aC (θC − TC )
(4.66)
Inside the engine everything happens reversibly, so we use Eq. (4.59) with the equality sign and find

η = W/QH = (QH − QC)/QH = 1 − QC/QH = 1 − θC/θH   (4.67)
Eq. (4.67) gives the opportunity to express both QC/QH and θC/θH in (1 − η). With the two equations (4.66) there are four equations, from which one may eliminate QC, θC and θH. One finds

QH = a (TH − TC/(1 − η))   with   a = aH aC/(aH + aC)   (4.68)

The delivered work becomes

W = η QH = η a (TH − TC/(1 − η))   (4.69)
Note that for η = 0 or η = (1 − T C /T H ), which is the Carnot efficiency of the complete device, the work will vanish. For constant a, T H , T C the maximum work is found by taking ∂ W/∂η = 0. This gives a quadratic equation in η with two roots, one of which becomes
η = 1 − √(TC/TH)   (4.70)

The other root has η > 1, which is not physical. The maximum-work efficiency is thus given by Eq. (4.70); it is smaller than the Carnot efficiency by a factor 1 + √(TC/TH) > 1.
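This maximum-power result is known in the literature as the Curzon-Ahlborn efficiency. As a quick numerical check (the temperatures are illustrative assumptions), one can maximize Eq. (4.69) by a brute-force scan and compare with Eq. (4.70):

```python
# Numerical check of Eq. (4.70): maximise W(eta) = eta * a * (TH - TC/(1 - eta))
# from Eq. (4.69) by a simple grid scan. a is set to 1 since it only scales W.
# Temperatures are illustrative assumptions.

TH, TC = 600.0, 300.0

def work(eta, a=1.0):
    return eta * a * (TH - TC / (1.0 - eta))

# scan eta between 0 and the Carnot limit 1 - TC/TH
etas = [i / 100000.0 for i in range(1, int(100000 * (1 - TC / TH)))]
eta_best = max(etas, key=work)

eta_ca = 1.0 - (TC / TH) ** 0.5   # Eq. (4.70)
print(round(eta_best, 3), round(eta_ca, 3))   # both ≈ 0.293
```

For these temperatures the Carnot efficiency would be 0.5, illustrating how much is given away by the irreversible heat conduction into and out of the engine.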
4.2.4 Second Law Efficiency
The thermal efficiency defined in Eq. (4.61) is connected with the conversion of heat into mechanical energy. More generally, efficiency for an energy transfer process could be defined by

η = energy output (heat or work) by a device / energy input required   (4.71)
In this way, besides the thermal efficiency (4.61) the COP (4.63) is also included in the definition, and the efficiency (4.71) may be either smaller or bigger than 1. It must be stressed that a high efficiency applies to the performance of a device and does not imply an intelligent use of energy. Heating of a human being inside a house, for example, may be achieved in several ways:

(a) By heating the house with electric heaters with an efficiency close to 1
(b) By heating the house with heat pumps, electrically driven with COP > 1
(c) By gas heaters in each room with η = 0.8, but only burning at full capacity when somebody uses the room
(d) By heating the house with a gas-driven central heating system
(e) By wrapping the person concerned in an electric blanket.
If the aim is to make a person comfortable with as little energy consumption as possible – and if option (e) is discarded – all aspects should be taken into account: the way in which the electricity or gas is produced, the losses during transport to the building and so on. In comparing options (c) and (d) it is clear that option (c) will be the most economic, as it is geared to the task of keeping a person comfortable. The point of the foregoing is that in studying energy efficiencies one should look at the task to be performed and not solely at the device. To do so, the Second Law efficiency ε for a single output (either work or heat) is defined as follows

ε = useful output (heat or work) by a device/system / maximum possible output by any device/system for the same energy input   (4.72)
This Second Law efficiency is never bigger than 1 and focuses on the task (hence 'useful output'). The maximum in the denominator refers to the maximum permitted by the laws of thermodynamics. A concept related to the Second Law efficiency is the available work B of a system or fuel. It is defined as

B = the maximum work that can be provided by a system or a fuel as it proceeds by any path to a specified final state in thermodynamic equilibrium with the atmosphere   (4.73)

The atmosphere is taken as the obvious reservoir in applications, although other reservoirs may be introduced in the definition. In the process of providing work the system may perform some work against the atmosphere. That, however, cannot be recovered and therefore is not counted as available work. Note that definition (4.73) applies to work, which can be considered as the highest form of energy. Work can be converted into heat with 100% efficiency. That is not possible the other way round. By comparing Eqs. (4.72) and (4.73) it will be clear that the available work will be related to the denominator of Eq. (4.72). A simple example of available work is that of a mass m raised to a height h. The available work then equals mgh. A second example is that of a quantity of heat Q extracted from a hot reservoir with temperature TH. With the atmospheric temperature taken as T0 the available work becomes

B = Q (1 − T0/TH)   (4.74)

Here the Carnot efficiency (4.62) was used, as the maximum work corresponds with reversible changes everywhere in the process. It is clear that high-temperature heat comprises a high value of available work.

Example 4.7 Second Law Efficiency of a Gas Heater
A living room is heated by a gas heater, which delivers an amount Q of heat with temperature T H during a day. Compare the efficiency η of Eq. (4.71) with the second law efficiency ε of Eq. (4.72) using the concept of available work.
Answer
In order to deliver the required amount of heat Q, an amount of gas is needed with an enthalpy change on combustion |ΔH|. The efficiency (4.71) becomes

η = Q/|ΔH|   (4.75)

Let the available work in the same amount of gas be B. This is used to drive a heat pump between the low temperature T0 of the atmosphere outside and the required temperature in the living room TH. The maximum amount of heat that can be delivered to the living room by the heat pump will be B × COP. When one inserts Eq. (4.64) for the coefficient of performance COP, the Second Law efficiency becomes

ε = Q/(B × COP) = Q/(B/(1 − T0/TH)) = (Q/B)(1 − T0/TH)   (4.76)

In practice the available work B in the gas and the enthalpy change |ΔH| are close. If one ignores the difference, the Second Law efficiency ε = (1 − T0/TH) × η is much smaller than η. With, for example, T0 = 265 [K] and TH = 296 [K] one finds ε = 0.10 × η. Conclusion: the Second Law efficiency of the gas heater is only 10% of the efficiency (4.75). For the heat pump Q = B × COP; its Second Law efficiency obviously is 100%, as this device was used as a reference.
4.2.4.1 Available Work = Exergy
In order to express the available work B in terms of entropy changes, one considers a system characterized by its energy U, volume V and entropy S ([8], 35–38, [9]). The atmosphere has temperature T0 and pressure p0. The system may exchange heat and work with the atmosphere (reversibly!) and will reach equilibrium with the atmosphere at energy Uf, entropy Sf and volume Vf. The atmosphere is so big that it does not change. Define a function B, which is to become the available work, by

B = (U − Uf) + p0 (V − Vf) − T0 (S − Sf)   (4.77)
In order to show that this B obeys definition (4.73), one considers a finite change of the system by ΔU, ΔV and ΔS. There is a heat flow Q to the atmosphere; useful work W is done on other systems and nonuseful work W′ = p0 ΔV on the atmosphere. The 'other systems' are assumed to be passive receivers of work, possibly in an irreversible way; they do not otherwise interact with the system or the atmosphere. The First Law (4.32) may be written as

−Q = ΔU + W + W′   (4.78)
The change in entropy (4.37) is calculated for the atmosphere, for which the temperature T0 does not change:

(ΔS)system+atmosphere − (ΔS)system ≡ (ΔS)s+a − (ΔS) = Q/T0   (4.79)
Below, the system + atmosphere is denoted by the index s + a, but the changes for the system itself do not have an index attached. From Eqs. (4.78) and (4.79) one finds

W = −Q − ΔU − W′ = −T0 (ΔS)s+a + T0 ΔS − ΔU − p0 ΔV = −T0 (ΔS)s+a − (ΔU + p0 ΔV − T0 ΔS) = −T0 (ΔS)s+a + B   (4.80)

In the last equality definition (4.77) was used, while recognizing that ΔU = Uf − U, ΔV = Vf − V and ΔS = Sf − S. From Eq. (4.80) it also follows that

B = W + T0 (ΔS)s+a   (4.81)
‘System + atmosphere’ do not exchange heat with the ‘other systems’, so from Eq. (4.39) one has (S)s+a ≥ 0. The last term on the right in Eq. (4.81) therefore is non-negative and the maximum of useful work W equals B, which therefore correctly is called ‘available work’, according to Eq. (4.73). The term T 0 (S)s+a in Eq. (4.81) may be called the lost work due to the entropy increase of system + atmosphere. Because Eq. (4.73) expresses the available work in state functions one may consider it as a state function itself, called the exergy of the system. It is now possible to rewrite the Second Law efficiency (4.72) if one produces work W. Then the numerator in (4.72) will be W and the denominator will be the available work (4.81). This gives ε=
W W + T0 (S)s+a
(4.82)
Definition (4.77) of the available work may be compared with the change in Gibbs function ΔG for isobaric, isothermal processes (Δp = ΔT = 0). In this case Eq. (4.47) for a finite change becomes

ΔG = ΔU + p ΔV − T ΔS   (4.83)

since ∫p dV = p ∫dV = p ΔV, ∫T dS = T ∫dS = T ΔS and ∫dU = ΔU. It follows that B and ΔG are equivalent if the change of the system occurs at atmospheric conditions: p = p0 and T = T0. The concept of Second Law efficiency is useful as an intellectual tool to compare different applications of the same kind of fuels. It helps in deciding how to make the best use of the available work, as in Example 4.7.
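As an illustrative sketch (all numbers are assumed, not from the text), Eqs. (4.74) and (4.81) can be combined to compute the exergy of a quantity of heat and the work lost when the conversion is partly irreversible:

```python
# Exergy of heat Q at TH relative to an atmosphere at T0, Eq. (4.74),
# and the lost work T0 * (dS)_{s+a} of Eq. (4.81) when the work actually
# delivered falls short of B. All numbers are illustrative assumptions.

T0 = 300.0        # atmospheric temperature [K]
TH = 900.0        # hot reservoir temperature [K]
Q = 1000.0        # heat extracted [J]

B = Q * (1.0 - T0 / TH)        # available work (exergy) of the heat, Eq. (4.74)
W_actual = 400.0               # work actually delivered [J] (assumed)

lost_work = B - W_actual                  # = T0 * (dS)_{s+a}, Eq. (4.81)
entropy_generated = lost_work / T0        # entropy increase of system + atmosphere

print(round(B, 1), round(lost_work, 1), round(entropy_generated, 3))
```

The second-law efficiency (4.82) of this hypothetical conversion would be W_actual/B = 0.6, even though a first-law bookkeeping of the heat alone would show no loss at all.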
4.2.5 Loss of Exergy in Combustion
In this section we will show that the loss of exergy in combustion is of the order of 30% ([8], 42–46). Consider a fuel/air mixture with atmospheric pressure p0 , atmospheric temperature T 0 and volume V 1 . After adiabatic combustion (process α) these quantities become p0 , Tc and Vc , where Tc is the high flame temperature. The combustion products cool down (process β) with intermediate temperatures T. During cooling the products deliver work between temperatures T and T 0 , for example by driving a piston in an internal combustion engine. From temperature T to temperature T + dT (with dT < 0) isobaric expansion work is performed while sensible heat δQ = −ncp dT > 0 is extracted from the hot ‘reservoir’ of
combustion products at temperature T; some average specific heat cp is taken for the n moles of the mixture. The atmosphere is acting as a cold reservoir and we assume that the work δW is delivered with the Carnot efficiency (4.62). Then

δW = δQ (1 − T0/T)   (4.84)

The total work performed until the combustion products have cooled down is

W = ∫ δW = ∫_{T0}^{Tc} n cp (1 − T0/T) dT = n cp (Tc − T0) − n cp T0 ln(Tc/T0)   (4.85)
= nc p (Tc − T0 ) − nc p T0 ln(Tc /T0 ) The second term on the right originates from the fact that the ‘hot reservoir’ does not keep a constant temperature, but cools down. The first term on the right reflects the enthalpy change for the isobaric process α Eq. (4.41). After processes α and β are done, one ends up with the same p0 ,T 0 as in the beginning except that chemical energy contained in the Gibbs function was liberated during combustion. In many cases |G| ≈ |H| and also, as remarked before, |H| ≈ |B|. Therefore the term ncp (Tc − T 0 ) in (4.85) will be roughly equal to the exergy before combustion. Following this interpretation of Eq. (4.85) the term –ncp T 0 ln(Tc /T 0 ) then would be roughly the lost exergy or lost work. If this is correct, it should also follow from the term T 0 (S)s+a in (4.81). Here (S)s+a is the entropy increase of system + atmosphere. For process β the Carnot efficiency (4.84) was taken, so it is assumed to have been reversible and its contribution to (S)s+a will vanish. Process α(combustion) was adiabatic, so the atmosphere remains unchanged and the major entropy increase (S)s+a originates from the temperature rise of only the system with δ Q = nc p dT dS = δ Q /T = nc p dT/T
(4.86) (4.87)
at each temperature T. The lost work T0 ΔS becomes

T0 ΔS = T0 ∫_{T0}^{Tc} δQ′/T = n cp T0 ln(Tc/T0)   (4.88)

which indeed is lost from the work in Eq. (4.85). The relative amount of lost exergy or lost work becomes

|ΔB|/B ≈ n cp T0 ln(Tc/T0) / (n cp (Tc − T0)) = T0 ln(Tc/T0) / (Tc − T0)   (4.89)
With T 0 = 300 [K] and Tc = 2240 [K] one finds a loss of 30%, due to the irreversibility of the combustion process. In reality, the Carnot efficiency of Eq. (4.84) is too optimistic because of irreversible processes; in practice, the loss will be higher.
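Eq. (4.89) is easy to evaluate numerically; the snippet below simply reproduces the 30% figure quoted in the text:

```python
# Evaluation of Eq. (4.89): fraction of exergy lost in adiabatic combustion,
# using the temperatures quoted in the text.
import math

T0 = 300.0    # ambient temperature [K]
Tc = 2240.0   # adiabatic flame temperature [K]

loss_fraction = T0 * math.log(Tc / T0) / (Tc - T0)
print(round(loss_fraction, 2))   # → 0.31, roughly 30%
```

Note that the loss grows as the flame temperature drops: burning the same fuel at a lower Tc throws away an even larger share of its exergy.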
4.3 Idealized Cycles
In Figure 4.8 four idealized ways were presented of converting heat into work and vice versa. In practice any engine will work in a cycle, where the initial and final states of the system are the same. In this section four, still idealized, cycles are described where heat is converted into work, and one cycle for refrigeration, where work is used to extract heat from a cold reservoir. In describing the cycles extensive use will be made of the thermodynamic functions p, V, S, T, H, since that makes it easy to see what variables are kept constant during parts of the cycle. In the first two cycles to be discussed, the Carnot cycle and the Stirling cycle, heat is reversibly transported from a high-temperature reservoir to a low-temperature one. This should result in the Carnot efficiency (4.62), which indeed is the case. It must be stressed that in a heat engine the source of heat does not matter. The heat may come from solar heating or from waste heat in a nuclear power station, or it may use the flame of the combustion of oil, gas, wood or coal. These combustion processes may be conducted with due precautions against pollution. Finally, the heat may be transported over some distance by means of heat pipes (Section 4.1.4).

4.3.1 Carnot Cycle
The cycle is sketched in Figure 4.9; because all changes occur reversibly, all parts of the cycle have definite thermodynamic variables and the cycle is indicated by drawn lines. For simplicity we take heats QH and QC as positive in the considerations below. For heat leaving the engine a minus sign is added, if required. In part 1 → 2 of the cycle a gas is compressed adiabatically (δQ = 0). That is immediately clear from the diagram on the right, as the entropy is conserved with the value S1 = S2 . In the next step 2 → 3 the gas expands isothermally and picks up an amount of heat QH = T H (S3 – S2 ). This relation follows from Eqs. (4.37) or (4.55) because the temperature T H remains constant. In step 3 → 4 the gas expands adiabatically. From 4 → 1 it is compressed at the constant temperature T C ; it picks up a negative amount of heat T C (S1 – S4 ). It is T p
QH 2
TH
3
2
3
TC
1
4
1
4
QC S
V
S2
S3
Figure 4.9 Idealized Carnot cycle with on the left its pV diagram and on the right its ST diagram. From the latter one observes that in parts 1 → 2 and 3 → 4 of the cycle the entropy is conserved, hence those parts are adiabatic; in part 2 → 3 heat QH is taken from the high-temperature reservoir while in part 4 → 1 heat QC is released.
more convenient to describe this as the release of QC = TC(S4 − S1) = TC(S3 − S2). The initial and final states are the same (ΔU = 0), so the work done follows from the First Law (4.32) as

W = QH − QC = (TH − TC)(S3 − S2)   (4.90)
The thermal efficiency (4.61) is found as

η = W/QH = (QH − QC)/QH = (TH − TC)/TH = 1 − TC/TH   (4.91)

which is precisely the Carnot efficiency (4.62), as expected. The work W can also be expressed by means of the pV diagram on the left of Figure 4.9. For 2 → 3, for example, the work done by the system would be

W_{2→3} = ∫_2^3 p dV   (4.92)

and for the complete cycle

W = ∫_1^2 p dV + ∫_2^3 p dV + ∫_3^4 p dV + ∫_4^1 p dV = ∮_{cycle} p dV   (4.93)
If one looks at the figure it is clear that the contributions from the steps 2 → 3 and 3 → 4 are positive, and together represent the total area below steps 2 → 3 → 4. For the other two steps the integration goes to the left, and together they represent the total area underneath steps 4 → 1 → 2, but with a negative sign. The four integrals together are equal to the area within the diagram, indicated by the right-hand side of Eq. (4.93). Carnot cycles are used for demonstration only; they are not used in practice. The reason is that for an ideal gas the isotherm obeys Eq. (3.3) pV = constant and the adiabat follows the Poisson equation (3.12) pV^κ = constant. Because κ = 1.4 the curves in Figure 4.9 are close together, so the work done during a cycle is relatively small, and friction and other losses count heavily.
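The rectangle in the TS diagram of Figure 4.9 makes the cycle arithmetic trivial; with assumed reservoir temperatures and entropy span, Eqs. (4.90) and (4.91) give:

```python
# Work per cycle from the TS rectangle of Figure 4.9, Eq. (4.90):
# W = (TH - TC) * (S3 - S2). The numbers are illustrative assumptions.

TH, TC = 500.0, 300.0        # reservoir temperatures [K]
dS = 2.0                     # entropy picked up at TH, S3 - S2 [J/K]

QH = TH * dS                 # heat absorbed at TH
QC = TC * dS                 # heat released at TC
W = QH - QC                  # Eq. (4.90)

print(W, round(W / QH, 2))   # → 400.0 0.4, matching 1 - TC/TH
```

The efficiency follows from the geometry alone, independent of the working substance, which is the essence of Carnot's argument.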
4.3.2 Stirling Engine
Much research and development has been done on the Stirling engine, which is named after a Scottish theologian and inventor in the 1830s. The idealized cycle is shown in Figure 4.10 with on the left the pV diagram and on the right the corresponding ST diagram. T p
QH
3
3
TH 2
4
TC
1
V
4
QR Q R’ 2
S2
1
S1
QC S3
S4
S
Figure 4.10 Idealized cycle for a Stirling engine. During two of the steps the volume is kept constant, during the other two the temperature is kept constant.
In step 1 → 2 a gas (air, helium or hydrogen) is isothermally compressed, releasing heat QC. In step 2 → 3 the gas is pressurized without changing volume; the temperature rises with absorption of heat QR. In step 3 → 4 the gas expands at constant temperature, absorbing heat QH. Finally, in step 4 → 1 the pressure falls at constant volume and heat QR′ is released. In practice QR′ ≈ QR, which may be understood by looking at the pV diagram on the left of Figure 4.10. In both cases the heat equals ∫cV dT; the temperature change and range is about the same, while the specific heat cV will not be very different at the two volumes V1 and V2 concerned. If QR′ ≈ QR, then also ∫_2^3 T dS ≈ ∫_1^4 T dS. If the shapes of the two curves 2 → 3 and 1 → 4 in the TS diagram are similar, it follows that S3 − S2 ≈ S4 − S1, or S4 − S3 ≈ S1 − S2. An essential part of the Stirling engine is played by the so-called regenerator. In this device 98% of the released heat QR′ is stored and fed back as QR. Therefore the thermal efficiency (4.61) is not found from η = W/(QH + QR), but from η = W/QH. This gives

η = W/QH = (QH + QR − QC − QR′)/QH = (QH − QC)/QH = 1 − QC/QH = 1 − TC(S1 − S2)/(TH(S4 − S3)) = 1 − TC/TH   (4.94)
which indeed is the Carnot efficiency, as we are essentially working between two reservoirs.
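To see why the regenerator matters, one can sketch the efficiency when only a fraction f of QR′ is recovered. The effectiveness parameter f is a hypothetical addition for illustration, not in the text; f = 0.98 corresponds to the 98% mentioned above, and all heats and temperatures are assumed values:

```python
# Stirling efficiency with imperfect regeneration (illustrative sketch).
# With regenerator effectiveness f, the external heat input is QH + (1 - f) * QR,
# so eta = W / (QH + (1 - f) * QR). All numbers are assumptions.

TH, TC = 600.0, 300.0
QH, QC = 1000.0, 500.0    # isothermal heats with QC/QH = TC/TH
QR = 800.0                # constant-volume heat, assumed value
W = QH - QC               # net work per cycle

for f in (1.0, 0.98, 0.0):
    eta = W / (QH + (1.0 - f) * QR)
    print(f, round(eta, 3))
```

Perfect regeneration (f = 1) recovers the Carnot value 0.5; without a regenerator (f = 0) the efficiency drops sharply, which is why the device is essential to the engine.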
4.3.3 Steam Engine
A mixture of water and water vapour is used to perform the cycle of a steam engine. In Figure 4.11 a simplified version of the so-called Rankine cycle is given. On the right one finds the TS diagram, where underneath the dashed line an equilibrium mixture exists of liquid with vapour. To the left of that line one has the liquid phase, to the right the vapour phase, while above the dashed line the phase difference disappears. The numbers 1 to 4 on the left correspond to 1 to 4 on the right. At point 1 of the cycle one has liquid water, which is compressed by a pump, doing work Wp resulting in a higher temperature of the water (point 2). Between points 2 and 3 (in the boiler on the left) heat QH is added isobarically at pressure p2 , first increasing the 3
Figure 4.11 Idealized Rankine cycle for a steam engine. The points indicated by numbers 1, 2, 3, 4 on the left correspond with points in the TS diagram on the right. The dashed line in that diagram represents the phase diagram underneath which a liquid–vapour equilibrium mixture exists.
temperature until the liquid reaches the dashed line. Then the temperature remains constant while all liquid evaporates; when that is finished the temperature rises again until point 3. After point 3 the vapour is allowed to expand adiabatically in a turbine (or an old-fashioned cylinder with piston), performing work W until point 4 is reached. There, at low pressure and temperature, all vapour condenses, releasing heat QC until point 1 is reached. The cycle may be repeated with the same water or with fresh water. The cycle is best described by using the enthalpy function (4.40). We first discuss the two steps 1 → 2 and 3 → 4, where work is involved, and then the two other steps, where heat is exchanged. In step 1 → 2 one has δQ = 0, but the enthalpy of the system will increase because of the work Wp > 0 done on it by the pump: Wp = H2 − H1. This relation holds because in the discussion following Figure 4.7 it was concluded that the term pV in the definition of enthalpy H = U + pV corresponds to work. The work by the system will be denoted by W1→2 = −Wp. Finally one uses the fact that putting pressure on water indeed increases the pressure, but hardly changes the volume. This gives

−W1→2 = Wp = H2 − H1 = ∫_1^2 d(pV) = ∫_1^2 (p dV + V dp) = ∫_1^2 V dp = V1 (p2 − p1)   (4.95)
In step 3 → 4 the system performs positive work W3→4 itself, losing enthalpy:

W3→4 = H3 − H4   (4.96)

The net work performed by the system becomes

W = W1→2 + W3→4 = H1 − H2 + H3 − H4   (4.97)

For the heat exchange we turn to the other two steps. Step 2 → 3 is performed at constant pressure p2 and step 4 → 1 at a constant, but lower, pressure p1. For these isobaric processes it follows from Eq. (4.41) that the added heat equals the enthalpy increase. So

QH = H3 − H2 > 0   (4.98)
QC = H4 − H1 > 0   (4.99)

According to the First Law (4.32), W = QH − QC = H3 − H2 − H4 + H1, which agrees with (4.97). The maximal thermal efficiency (4.61) of the process becomes

η = W/QH = (H3 − H2 − H4 + H1)/(H3 − H2) = 1 − (H4 − H1)/(H3 − H2) = 1 − QC/QH   (4.100)
The absorbed heat QH is made as high as possible by boiling under high pressure p2 and high temperatures. The heats QC and QH are difficult to measure directly, therefore one uses tabulated functions of enthalpies instead.
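Since QH and QC follow from tabulated enthalpies, Eq. (4.100) is easy to evaluate. A sketch with assumed, order-of-magnitude enthalpy values for a steam cycle (not from the text, and not taken from real steam tables):

```python
# Eq. (4.100): Rankine efficiency from tabulated enthalpies.
# The enthalpy values below are rough, assumed numbers [kJ/kg] for
# feedwater, compressed water, superheated steam and turbine exhaust.

H1 = 190.0     # saturated liquid leaving the condenser
H2 = 200.0     # liquid after the feed pump
H3 = 3350.0    # superheated steam leaving the boiler
H4 = 2250.0    # wet steam leaving the turbine

eta = 1.0 - (H4 - H1) / (H3 - H2)   # Eq. (4.100)
print(round(eta, 2))                # → 0.35
```

In an actual design the four enthalpies would come from steam tables at the chosen boiler and condenser pressures, which is exactly the tabulated-enthalpy procedure the text describes.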
4.3.4 Internal Combustion
In an internal combustion engine air is mixed with a vaporized fuel that is ignited by a spark (such as for the Otto cycle) or is ignited by the temperature rise resulting from compression (the diesel cycle). The engines are not heat engines in the sense discussed before: there are no external heat reservoirs, nor is there a single gas changing its thermodynamic state during the cycle. In fact, the chemical energy of the fuel/air mixture is converted into work, exhaust gases are expelled, fresh air supplied and fuel injected. In practice, however, the air/fuel ratio in mass is around 15 for an Otto cycle and around 20 for a diesel engine, which implies that air dominates in the 'cycle'. Also the nitrogen in the air remains essentially unchanged, which makes it acceptable to approximate the cycle of an internal combustion engine in a pV or TS diagram. The lower temperature in the diagram may be taken as the ambient air temperature and the higher temperature as the temperature of the fuel/air/exhaust mixture after ignition.

4.3.4.1 Otto Cycle
In Figure 4.12 the pV and TS diagrams of an idealized Otto cycle are sketched, together with a picture of the moving piston. At point 1 the mixture of n1 moles of air with gasoline vapour enters the cylinder. In step 1 → 2 the mixture, approximated as an ideal gas, experiences adiabatic compression. The temperature rises, there is no heat exchanged with Fuel/air
2
Ignition 2 3
1 2 3
1 4 Exhaust
Flywheel transmission T
3
3
QH
p 2
4 1
V2
V1
T2 T1 V
2
4
QC
1
S
Figure 4.12 The idealized Otto cycle of an internal combustion engine. On top a cylinder with piston and flywheel and numbers 1, 2, 3, 4, which correspond to the pV and TS diagrams below.
the surroundings (δQ = 0) and the entropy remains constant. This is represented by a vertical line in the TS diagram. For adiabatic compression of an ideal gas the Poisson relation

pV^κ = constant   (4.101)

holds, with κ = cp/cV, the ratio of the specific heat at constant pressure cp and the specific heat at constant volume cV (see Eq. (3.12)). From the ideal gas law pV = n1RT one may deduce that p = n1RT/V. Substitution in the Poisson relation gives TV^{κ−1} = constant, and it follows that

T1 V1^{κ−1} = T2 V2^{κ−1}   (4.102)
In step 2 → 3 the pressure rises with constant volume; heat QH is absorbed, simulating the explosion in the gasoline cylinder. Assuming a constant cV we find Q H = cV (T3 − T2 )
(4.103)
In step 3 → 4 the n1 moles of air expand adiabatically up to the initial volume V 4 = V 1 . Similar to Eq. (4.102) we have T3 V3κ−1 = T4 V4κ−1 = T4 V1κ−1
(4.104)
In step 4 → 1 heat $Q_C$ is released to a cold reservoir, simulating the exhaust part of the cycle, while the exhaust gases are replaced by fresh air and fuel. Similar to Eq. (4.103)

$$Q_C = c_V (T_4 - T_1) \qquad (4.105)$$
The thermal efficiency (4.61) of the cycle becomes η=
QH − QC QC T4 − T1 W = =1− =1− QH QH QH T3 − T2
(4.106)
Define a compression ratio $r = V_1/V_2$; then from (4.104), with $V_3 = V_2$, one finds $T_3 = T_4 r^{\kappa-1}$; likewise, from (4.102), $T_2 = T_1 r^{\kappa-1}$. Substitution in Eq. (4.106) results in

$$\eta = 1 - \frac{1}{r^{\kappa-1}} \qquad (4.107)$$
It follows that a bigger compression ratio r leads to a higher engine efficiency. An upper limit for r is set by the fact that too high a compression will pre-ignite the mixture, causing pinging or knocking. This effect can be suppressed by 'doping' the fuel; before environmental regulations outlawed this, lead was used for the purpose. In practice, the upper limit of the compression ratio for spark-ignition engines is around r = 13.
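Equation (4.107) is easy to evaluate numerically; the short sketch below assumes κ = 1.4 for air:

```python
# Ideal Otto-cycle efficiency, Eq. (4.107): eta = 1 - 1/r**(kappa-1)
KAPPA = 1.4  # cp/cV for (mostly diatomic) air, an assumed value

def otto_efficiency(r: float, kappa: float = KAPPA) -> float:
    """Thermal efficiency of the idealized Otto cycle with compression ratio r."""
    return 1.0 - r ** (1.0 - kappa)

for r in (6, 9, 13):
    print(f"r = {r:2d}: eta = {otto_efficiency(r):.2f}")
```

At the practical limit r = 13 the ideal-cycle efficiency is about 0.64; real engines stay well below this because of friction, heat loss and incomplete combustion.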
4.3.4.2 Diesel Engine
In a diesel engine pre-ignition is prevented by making step 2 → 3 in the pV diagram horizontal, as shown in Figure 4.13. Step 1 → 2 again is adiabatic compression, but this time of the air only; Eq. (4.102) again holds true. At point 2 the compression is so high that the mixture ignites as soon as fuel is added. This happens isobarically, so in step 2 → 3 the piston moves out and the heat absorbed equals

$$Q_H = c_p (T_3 - T_2) \qquad (4.108)$$
Figure 4.13 Idealized diesel cycle. One essential difference from the Otto cycle is that step 2 → 3 is isobaric; the other difference is that fuel is injected at point 2, the end of the compression, preventing pre-ignition.
At point 3 the fuel addition is cut off and the isobaric expansion ends. The volume $V_3$ defines a cut-off ratio

$$r_{cf} = \frac{V_3}{V_2} \qquad (4.109)$$
Step 3 → 4 again is adiabatic expansion, for which Eq. (4.104) holds, and in step 4 → 1 heat

$$Q_C = c_V (T_4 - T_1) \qquad (4.110)$$
is released to the outside atmosphere. The thermal efficiency (4.61) now becomes

$$\eta = \frac{W}{Q_H} = \frac{Q_H - Q_C}{Q_H} = 1 - \frac{Q_C}{Q_H} = 1 - \frac{1}{\kappa}\frac{T_4 - T_1}{T_3 - T_2} = 1 - \frac{1}{\kappa}\frac{1}{r^{\kappa-1}}\frac{r_{cf}^{\kappa} - 1}{r_{cf} - 1} \qquad (4.111)$$
The proof of the last equality is left to the reader (Exercise 4.19). Apart from the factor containing the cut-off ratio $r_{cf}$, this expression is very similar to the efficiency (4.107) of the Otto cycle. The compression ratio r of the diesel engine is between 18 and 25, which makes it possible to be more efficient than the Otto spark-ignition engine. The drawback of the diesel engine lies in the last factor of Eq. (4.111), which has the cut-off ratio in both numerator and denominator. For $r_{cf} > 1$ the ratio $(r_{cf}^{\kappa} - 1)/(r_{cf} - 1)$ slowly increases with $r_{cf}$ (Exercise 4.20). A small value of $r_{cf}$ would increase the efficiency but reduce the work per cycle, as the surface area inside the curves on the left of Figure 4.13 would become smaller, requiring more cycles per second and possibly larger losses. In practice, therefore, both parameters r and $r_{cf}$ are used to optimize the performance of the engine.

Traditional diesel fuel consists of carbon compounds somewhat heavier than gasoline, which is mostly C8H18. Diesel engines emit much more particulate matter than gasoline-powered vehicles. These particles are small and easily inhaled; they consist of a solid carbon core onto which many toxic compounds may adsorb ([21], 41). In a diesel engine a variety of fuels may be used besides diesel oil, such as methanol (CH3OH) or ethanol (C2H5OH) ([4], 94; [3], 138–143). In all cases the engine designer has to be aware of the strict environmental regulations.
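A numerical sketch of Eq. (4.111), again with an assumed κ = 1.4, illustrates the trade-off between efficiency and cut-off ratio:

```python
# Ideal diesel-cycle efficiency, Eq. (4.111):
# eta = 1 - (1/kappa) * (1/r**(kappa-1)) * (rcf**kappa - 1)/(rcf - 1)
KAPPA = 1.4  # cp/cV for air, an assumed value

def diesel_efficiency(r: float, rcf: float, kappa: float = KAPPA) -> float:
    """Thermal efficiency of the idealized diesel cycle."""
    cutoff_factor = (rcf ** kappa - 1.0) / (rcf - 1.0)
    return 1.0 - cutoff_factor / (kappa * r ** (kappa - 1.0))

# A larger cut-off ratio gives more work per cycle but a lower efficiency
for rcf in (1.5, 2.0, 3.0):
    print(f"r = 20, rcf = {rcf}: eta = {diesel_efficiency(20, rcf):.2f}")
```

Even with the unfavourable cut-off factor, the higher compression ratio (r ≈ 20 against r ≈ 13) keeps the ideal diesel efficiency at or above the Otto value of about 0.64.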
Figure 4.14 Two refrigeration systems, vapour compression and absorption. In both cases the fluid is evaporated at low temperature and pressure, picking up heat from the refrigerator, then compressed in different ways. At high temperature and pressure the vapour condenses, delivering heat to the surroundings after which it will expand via a throttle. (Reproduced from Principles of Energy Conservation, A Culp, table 9.1, pg 482, 1991, with permission from McGraw-Hill)
4.3.5 Refrigeration
In refrigeration, work is used to transport heat from a reservoir of low temperature to a reservoir of high temperature (see Figure 4.8d). In practice a working fluid performs a cycle between two reservoirs. The low-temperature reservoir should be somewhat colder than the space to be refrigerated, say −10 [°C], otherwise no heat would flow; for the same reason the high-temperature reservoir should be somewhat warmer than room temperature, say +35 [°C]. In Figure 4.14 the cycle is sketched. It uses the thermodynamic properties of a phase change to absorb or reject a large amount of heat with a relatively small mass and volume of working fluid. At low temperature and pressure the fluid is evaporated, taking heat away from the refrigerator. Next, the fluid is compressed in one of two ways to be discussed below; then the fluid is condensed at high temperature and pressure, giving off heat to the surroundings; finally, by way of a throttle, the fluid expands to the initial pressure again.

4.3.5.1 The Vapour-Compression Cycle
In the vapour-compression cycle, applied in households and small- or medium-sized shops, the vapour is compressed by an ordinary pump. In Figure 4.15 the cycle is sketched in a pH diagram. Just as in a conventional pV diagram, the dashed line indicates where one has pure liquid (to the left of the left branch) or pure vapour (to the right of the right branch). On the branch itself the liquid (left) or vapour (right) is saturated. In between, going from left to right, the temperature remains constant and the fraction of vapour increases [10]. We follow Figure 4.15, starting at point 1.

(a) At point 1 the working fluid is just saturated vapour. In step 1 → 2 it is compressed adiabatically by a pump. As there is no exchange of heat with the surroundings, the work done by the pump equals the increase of enthalpy $W = H_2 - H_1$. Note that here we used the connection between enthalpy and work, discussed below Eq. (4.43).
Figure 4.15 pH diagram of a commercial refrigerator cycle. Below the dashed line the liquid and vapour phases coexist in equilibrium. Along a horizontal line the fractions of vapour and liquid change at constant temperature. In step 4 → 1 heat is taken from the refrigerator at low temperature; heat is given off to the surroundings in step 2 → 3.
(b) At point 2 the vapour enters a condenser; it first cools isobarically until the dashed curve is reached. Then it really starts condensing at constant temperature. The added heat, according to Eq. (4.41), becomes $H_3 - H_2$, which is negative, meaning that heat is taken from the fluid and dispersed in the surroundings.

(c) In step 3 → 4 the fluid expands through a throttle. Following the discussion of Figure 4.7 the final enthalpy equals the initial one, $H_4 = H_3$, although the intermediate enthalpies are not defined. That is why step 3 → 4 is dashed. At point 4 one has a mixture of liquid and vapour.

(d) In step 4 → 1 the working fluid expands isobarically, while heat $H_1 - H_4 > 0$ is added. This heat is extracted from the refrigerator, giving cooling.

The coefficient of performance COP, defined in Eq. (4.63), here becomes

$$\text{COP} = \frac{\text{cooling achieved}}{\text{work input}} = \frac{H_1 - H_4}{H_2 - H_1} \qquad (4.112)$$
As an example we consider the refrigerant with code HFC-134a (CH2FCF3). It works with a condensing temperature of 35 [°C] (at point 3) and an evaporation temperature of −10 [°C] (at points 4 and 1). The enthalpies $H_4 = H_3$ and $H_1$ are extracted from published tables [10] in the following way. $H_1$ is the enthalpy of saturated vapour at a temperature of −10 [°C]; in the same tables one finds $S_1$. Similarly, $H_3$ is the enthalpy of saturated liquid at 35 [°C]; this also determines the pressure $p_2 = p_3$. Finding $H_2$ is more complicated. One knows the pressure $p_2 = p_3$ and the entropy $S_2 = S_1$ (as the process went adiabatically from 1 → 2). One can then use published graphs H(S, p) to find $H_2 = H(S_1, p_3)$. The values are $H_3 = H_4 = 248.7$ [kJ kg−1], $H_1 = 391.2$ [kJ kg−1], $H_2 ≈ 422$ [kJ kg−1], which yields COP = 4.6. In practice, the COP may be about 70% of the theoretical value ([3], 120).
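The COP of Eq. (4.112) follows directly from the three tabulated enthalpies quoted above:

```python
# COP of the idealized HFC-134a cycle, Eq. (4.112), from tabulated enthalpies.
# Enthalpy values in kJ/kg, taken from the example in the text.
H1 = 391.2   # saturated vapour at -10 degC (evaporator exit)
H2 = 422.0   # vapour at p3 with S2 = S1 (after adiabatic compression)
H3 = 248.7   # saturated liquid at 35 degC (condenser exit)
H4 = H3      # throttling conserves enthalpy

cop = (H1 - H4) / (H2 - H1)   # cooling achieved / work input
print(f"COP = {cop:.1f}")      # -> COP = 4.6
```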
Figure 4.16 Boiling point versus pressure for some common fluids. Note that the x-axis coincides with the atmospheric pressure of 1 [atm] = 100 [kPa]. Fluid HFC-134a, discussed in the text, evaporates at −10 [◦ C] with a pressure of about 2 [atm] and condenses at 35 [◦ C] with a pressure of 9 [atm].
The choice of working fluids is restricted by the requirement that the low and high pressures should be convenient in use, the low temperature not too far below the freezing point of water and the high temperature not too far from the ambient temperature. Therefore one has to look at the boiling points of fluids as a function of pressure. Such a pT diagram is shown in Figure 4.16; note that the atmospheric pressure of 1 [atm] = 100 [kPa] coincides with the x-axis. In the example below Eq. (4.112) we discussed the refrigerant with code 134a. From the graph one may deduce that an evaporation temperature of −10 [°C] requires a pressure of about 2 [atm] and condensation at 35 [°C] about 9 [atm], both convenient values for domestic or small business applications. Of the fluids shown, NH3, HCFC 22 and C4H10 (isobutane) also look acceptable. Propane (not shown) has its curve close to that of HCFC 22 and therefore would also seem suitable.

Historically NH3 has been widely used as a refrigerant. It is, however, corrosive to brass and bronze, although not to ferrous metals, and it is toxic and irritating to eyes, nose and throat. Therefore small leaks in the device or, worse, a breakdown would pose health risks ([10], p. 429). Propane and butane are flammable, so pose risks as well. Therefore in the 1930s CFCs, which do not react, were chosen as the working fluid.
Around 1980 it became clear that the Cl atoms in the CFCs are liberated by sunlight and destroy the ozone layer in the upper atmosphere (cf. Section 2.4.3). Following the Montreal protocol of 1987, compounds which destroy the ozone layer, like HCFC 22, are being phased out and other refrigerants are used. HFC-134a (CH2FCF3), which was discussed above, does not reach the ozone layer and has suitable thermodynamic properties. It is, however, much more expensive than ammonia or HCFC 22 ([10], p. 432) – and price is another decisive parameter.

4.3.5.2 Absorption Refrigeration
The amount of mechanical energy $H_2 - H_1$ needed to compress the vapour in Figure 4.15 is rather high. Another way to achieve compression (step 1 → 2 in Figure 4.15) is indicated on the right in Figure 4.14, which we will describe using an ammonia–water combination. In this case the NH3 vapour coming out of the evaporator at low pressure is absorbed in water, which liberates heat that has to be removed. Next, the pressure of the water is increased by a pump. Then external heat is added in order to liberate the high-pressure NH3 from the water; effectively the pressure of the NH3 has been increased, and the other steps of the cycle follow. The heat for desorption of the NH3 from the water could come as waste heat from nearby power stations. In winter that waste heat could be used for heating and in summer for cooling by absorption refrigeration, coupled to air conditioning. Besides the ammonia–water combination, LiBr–water combinations are also used.
4.4 Electricity as Energy Carrier
Electricity as a power source is very convenient. Wherever there is a socket one has access to immediate power for heating or for mechanical work. The conversion into heat works with almost 100% efficiency and the conversion into mechanical work with about 90%, which implies an exergy of 90% for an electric Joule [Je]. The electric power from a power station is expressed in [MWe] = 10^6 [Je/s], or in short [MW]. If one wants to indicate the thermal input one uses [MWth].

Almost all the electricity in the world is produced in generators. The principle is based on Faraday's law of induction: the magnetic flux through a winding rotating in a static magnetic field changes continuously, which induces an alternating electric current. In this way mechanical energy of rotation is converted into electric power. For small devices like a bicycle dynamo, permanent magnets produce the static magnetic field. In power stations usually a separate winding is added to the big rotating coil, which produces its own alternating current; this current is then rectified to provide DC power for the electromagnets. The rotational energy to drive the coils is sometimes produced by hydropower (Section 5.3.1), but in the majority of cases by heat engines (Section 4.3). Instead of letting the working fluids expand in cylinders which drive reciprocating pistons, one usually lets them perform work against a turbine, which immediately produces rotational energy.

The loss of energy in converting mechanical to electrical energy can be up to 10% for big generators, but the really big losses occur because of the laws of thermodynamics, which
restrict the efficiency of converting heat into work to values below the upper limit of the Carnot efficiency

$$\eta_{\text{Carnot}} = 1 - \frac{T_C}{T_H} \qquad (4.113)$$

or the more realistic

$$\eta = 1 - \sqrt{\frac{T_C}{T_H}} \qquad (4.114)$$
where we copied Eqs. (4.62) and (4.70). The lower temperature $T_C$ in Eqs. (4.113) and (4.114) is determined by the cooling facilities, which may be ambient air in cooling towers or water from lakes or rivers, both not lower than about 290 [K]. The higher temperature $T_H$ should be well below the melting point of the materials used and is often much lower. In order to understand how the production of electricity is organized we discuss the varying demand of electric power, ways to use the input energy efficiently, storage of electricity and electricity transmission.

4.4.1 Varying Grid Load
An essential factor in the distribution of electricity is the grid, which in many countries is controlled by a public authority. The grid load varies during the day and often is somewhat higher in winter than in summer. As an example, the load for a few European countries is shown in Figure 4.17. It is clear that there are large variations during the day.
Figure 4.17 Grid load of electricity during the day–night cycle for a few European countries, both midsummer (19.06.91) and midwinter (19.12.90). D = Germany, F = France, I = Italy, E = Spain. The total electricity use will be higher because of self-generation. (Reproduced by permission of UCPTE, Union for the Coordination of Production and Transmission of Electricity, half-yearly reports)
It is the mission of the electricity utilities to produce electricity at the lowest possible cost. This is achieved by dividing the demand into a base load, which is the minimum to be expected during a day, a peak load and an intermediate load. For the base load one uses power plants which are expensive to build but have low fuel costs, such as nuclear power stations; in that case the capital cost of the plant is distributed over many [kWh]. In fact, it is expensive to stop the operation of nuclear power plants, so their main use would be for the base load anyway. For the peak demand in Figure 4.17 one uses plants which are cheap to build but burn expensive fuel, such as gas turbines; these are easy to switch on and off, which is an additional advantage. Coal-fired power plants lie between the two mentioned above and may be used for base load or intermediate load.

Figure 4.17 does not show the variations of the order of minutes or even seconds. These must be kept small in amplitude in order to keep the grid stable. For this purpose short-term storage in flywheels or batteries is used (Section 4.4.3).

Figure 4.17 also does not comprise all electricity generated in a country. Industrial plants may generate their own electricity, of which only the surplus, if there is any, is fed into the grid. In recent years small units like households also increasingly generate electricity with solar cells (Section 5.1), of which again only the part which is not used may be fed into the grid. It must be mentioned that fed-in self-generated electricity puts an extra strain on the operators who have to stabilize the grid.

4.4.2 Co-Generation of Heat and Electricity
Large-scale power plants often have several units. The biggest are those where the heat from the combustion of fossil fuels (coal, oil, gas) or from nuclear fission is used to heat the steam for a cycle such as that shown in Figure 4.11. The higher temperature $T_H$ in the cycle is limited by the corrosion of the materials, which increases with temperature. It is then a matter of optimizing costs between more expensive materials (capital cost) and thermal efficiency (fuel costs), which determines the operating temperature (Eqs. (4.113) and (4.114)). In practice, the steam cycles work with $T_H ≈ 800$ [K] and cooling at $T_C ≈ 300$ [K], resulting in a Carnot efficiency (4.113) of 63% and a 'realistic' efficiency (4.114) of 39%. At present the best fossil-fuel fired power stations have an efficiency of 42%, whereas for nuclear power stations the efficiency is lower, about 33%; the reason is that preventing corrosion of the nuclear fuel elements requires lower temperatures.

For peak load one uses gas turbines. They use natural gas as a fuel and may be operated in a so-called Brayton cycle, with either internal or external combustion. The temperature $T_H$ of the cycle could be as high as 1700 [K], but usually does not exceed 1400 [K]. The temperature $T_C$ corresponds to the outlet temperature of the gases, usually around 900 [K]. These two numbers result in a Carnot efficiency of 36% or a 'realistic' value of 20%. The most advanced gas turbines have $T_H ≈ 1530$ [K] and $T_C ≈ 850$ [K], resulting in a Carnot efficiency of 44% or a 'realistic' value of 25%. Siemens [12] claims an efficiency of close to 40% for its gas turbines.
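The percentages quoted above follow directly from Eqs. (4.113) and (4.114); a small sketch:

```python
import math

# Carnot limit, Eq. (4.113), and the 'realistic' estimate, Eq. (4.114),
# for the temperature pairs (in K) quoted in the text.
def eta_carnot(tc: float, th: float) -> float:
    return 1.0 - tc / th

def eta_realistic(tc: float, th: float) -> float:
    return 1.0 - math.sqrt(tc / th)

for label, tc, th in [("steam cycle", 300.0, 800.0),
                      ("gas turbine", 900.0, 1400.0),
                      ("advanced gas turbine", 850.0, 1530.0)]:
    print(f"{label}: Carnot {eta_carnot(tc, th):.1%}, "
          f"realistic {eta_realistic(tc, th):.1%}")
```

The printed values agree with the rounded figures in the text (63/39%, 36/20% and 44/25%).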
4.4.2.1 Combined Cycle
The data given for the input and output temperatures immediately suggest the introduction of so-called combined-cycle power systems. Here the exhaust of a gas turbine is used to
Figure 4.18 Combined power cycle. The output (1 − ηA )qA of a gas turbine A is used to heat the steam of a steam turbine B. If required extra heat qB can be added to B.
heat the steam of a steam turbine. Figure 4.18 shows the scheme. Assume that the heat input of the gas turbine is $q_A$, its measured efficiency $\eta_A$ and consequently its heat output $(1 - \eta_A)q_A$. Let some extra heat $q_B$ be added to steam turbine B, which has efficiency $\eta_B$. The total efficiency of the combined cycle becomes

$$\eta_{\text{total}} = \frac{\text{electric output}}{\text{heat input}} = \frac{\eta_A q_A + \eta_B\left((1-\eta_A)q_A + q_B\right)}{q_A + q_B} = \frac{\eta_B (q_A + q_B) + \eta_A q_A (1-\eta_B)}{q_A + q_B} = \eta_B + \frac{\eta_A q_A (1-\eta_B)}{q_A + q_B} \qquad (4.115)$$
For the case $q_B = 0$ the efficiency simplifies to

$$\eta_{\text{total}} = \eta_A + \eta_B - \eta_A \eta_B \qquad (4.116)$$
Individual efficiencies of 40 and 30% would result in an overall efficiency of 58%.
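Equations (4.115) and (4.116) can be sketched in a few lines of code:

```python
# Combined-cycle efficiency, Eqs. (4.115)-(4.116).
def combined_efficiency(eta_a: float, eta_b: float,
                        q_a: float = 1.0, q_b: float = 0.0) -> float:
    """Overall efficiency of gas turbine A topping steam turbine B."""
    electric_output = eta_a * q_a + eta_b * ((1.0 - eta_a) * q_a + q_b)
    return electric_output / (q_a + q_b)

# Example from the text: 40% and 30% combine to 58% (q_B = 0)
eta = combined_efficiency(0.40, 0.30)
print(f"eta_total = {eta:.0%}")   # -> eta_total = 58%
```

Adding extra heat $q_B$ to the steam turbine pulls the overall efficiency down towards $\eta_B$, as Eq. (4.115) shows.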
4.4.2.2 Cogeneration
Another term for cogeneration is Combined Heat and Power, or CHP, which expresses the point: the rest heat of a power plant is put to use for space heating in winter or to drive an absorption refrigeration device coupled to air-conditioning in summer. In this way the exergy input produces a large useful output (heat + electricity). The first example of cogeneration was in the USA, where Thomas Edison used it for the Pearl Street Station (1882, [13]). At present rest heat from power plants with temperatures of 80 to 130 [°C] is used for district heating or cooling. At a smaller scale Mini CHP, with an electric output of 5 to 500 [kWe], is used for buildings and small businesses. At an even smaller scale Micro CHP (<5 [kWe]) is used for houses or small businesses.

The outputs of heat and electricity are coupled: more electricity produces more rest heat. The manager of Mini or Micro CHP will give priority to the heating requirements, which vary during the day and with the seasons. Consequently there will sometimes be a shortage of electricity, which then has to be taken from the grid, and at other times a surplus, which has to be fed into the grid. The economic viability of CHP therefore depends on the price the utilities are willing to pay for electricity produced by others.
In recent years governments have realized that CHP utilizes fossil fuels more efficiently. CHP installations decrease the dependence on imports and ease the environmental consequences of producing electricity. As an extra advantage, the distribution of a large number of small CHP power stations makes the grid less vulnerable to blackouts. So by government regulations utilities are stimulated or forced to pay a reasonable price to what they see as competitors.

4.4.3 Storage of Electric Energy
In Figure 4.17 typical examples of varying grid load for a few countries were shown. It will be clear that not all variations can be dealt with by coupling or decoupling power stations; one needs quickly accessible electric storage as well. Here we discuss the principles of a few of the available options. The first option is mechanical storage in flywheels; they are able to unload in seconds or minutes, levelling off short-term variations. Another fast option is superconducting magnetic energy storage (SMES), but its capacity is still small. Batteries unload their chemical energy more slowly, in half an hour or longer. Pumped hydro storage also works on a timescale of hours, just like compressed-air storage, discussed in Exercise 4.17. For even longer time frames one might use chemical storage in hydrogen. Table 4.3 gives an overview of typical storage devices.

The examples of electric storage mentioned here are used not only to level off the grid load, but also on a more local scale to deal with grid blackouts. Data centres, hospitals and the like, which must continue their work, may have batteries or flywheels on standby in case of blackout, giving the diesel backup generators time to start up. Also, many renewable energy installations work intermittently; when they are widely in use, storage of electricity will be a necessity.

4.4.3.1 Flywheels
A flywheel is a circular rigid body which can rotate around an axis with a high angular velocity ω [s−1 ]. The simplest flywheel has virtually all of its mass at the rim, as shown in
Table 4.3 Storage technologies and their typical performances. All data are averages. Compiled from [17], [18].

Technology                      Round-trip efficiency/(%)   Discharge times   Power              Lifetime
Flywheels                       90                          1–200 s           1–50 [kWe]         10^5–10^7 cycles
Superconducting SMES            95                          secs              1–100 [MWe]        30 years
Lead acid batteries in series   50–90                       0.5–5 h           0.01–10 [MWe]      200–1200 cycles
Pumped hydro station            80                          hours             100–1000 [MWe]     30 years
Compressed air                  75                          hours             50–100 [MWe]       tens of years
Hydrogen + fuel cell            40                                            0.05–1 [MWe]
Figure 4.19 Flywheels with different mass distributions. In the left graph the mass is concentrated at the outer rim, in the middle graph in the centre, while in the right-hand graph the flywheel consists of concentric fibre rings, connected by resilient material.
Figure 4.19a. The velocity of the elements at the outer rim with radius R equals ωR. The total kinetic energy then becomes

$$K = \frac{1}{2} m v^2 = \frac{1}{2} m \omega^2 R^2 = \frac{1}{2} I \omega^2 \qquad (4.117)$$
where $I = mR^2$ is called the moment of inertia. Other flywheels may have the bulk of their mass in the centre (Figure 4.19b), or the wheel may consist of a series of concentric rings made from fibre, connected by resilient material (Figure 4.19c). In such cases the moment of inertia will be different.

The outward force on a flywheel is the centrifugal force, which in the simple case of Figure 4.19a with all mass at radius R becomes

$$m\frac{v^2}{R} = m\omega^2 R \qquad (4.118)$$
The outward force per unit of volume becomes

$$F_{\text{vol}} = \frac{m\omega^2 R}{V} = \rho\omega^2 R \quad [\text{kg m}^{-2}\,\text{s}^{-2} = \text{N m}^{-3}] \qquad (4.119)$$
This force should be counteracted by the stress σ [N m−2] acting opposite to the centrifugal force. Looking at the dimensions of $F_{\text{vol}}$ and σ it is clear that they differ by a unit of length, so in a first approximation one would guess that the stress in the material caused by the centrifugal force is given by

$$\sigma = F_{\text{vol}} \times R = \rho\omega^2 R^2 \qquad (4.120)$$
The kinetic energy per unit of mass of the flywheel is called the specific energy and is given by

$$\frac{K}{m} = \frac{1}{2}\omega^2 R^2 = \frac{\sigma}{2\rho} = K_w \frac{\sigma}{\rho} \qquad (4.121)$$

For the flywheel of Figure 4.19a $K_w = 0.5$. For other designs the factor $K_w$ may become as high as 1.0. One will observe that the specific energy of the flywheel will become large
when the material has a small density ρ and is able to endure large stresses σ. That is the reason why modern flywheels work with light and strong carbon fibre.
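Equation (4.121) gives a rough sizing estimate. The material values below are illustrative assumptions (a carbon-fibre composite with an allowed stress of about 2 [GPa] and a density of about 1600 [kg m−3]), not data from the text:

```python
# Specific energy of a flywheel, K/m = K_w * sigma / rho, Eq. (4.121)
SIGMA = 2.0e9   # allowed stress [N/m^2], assumed carbon-fibre value
RHO = 1600.0    # density [kg/m^3], assumed
KW = 0.5        # shape factor for the thin-rim flywheel of Figure 4.19a

specific_energy = KW * SIGMA / RHO            # [J/kg]
print(f"carbon fibre: {specific_energy / 3.6e6:.2f} [kWh/kg]")

# Steel for comparison (sigma ~ 0.5 GPa, rho ~ 7800 kg/m^3, also assumed)
print(f"steel:        {KW * 0.5e9 / 7800 / 3.6e6:.3f} [kWh/kg]")
```

The carbon-fibre wheel stores roughly 0.17 [kWh kg−1], an order of magnitude more than steel, which illustrates why light, strong materials pay off.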
4.4.3.2 Superconducting Magnetic Energy Storage (SMES)
At low temperatures, just above absolute zero, many metals and some alloys become superconducting. This property is applied in superconducting magnetic energy storage: a direct current (DC) is sent through a superconducting solenoid, which is then short-circuited. Because of the absence of resistance the current will run indefinitely. The high currents produce a high magnetic field, which requires safety measures around the device. The necessary low temperatures demand cryogenic cooling, which is probably the reason that SMES devices are used on a relatively small scale. On the other hand, such devices can return the stored energy in a matter of seconds.

4.4.3.3 Batteries
The most common rechargeable battery is still the lead acid battery, shown schematically in Figure 4.20. As in all electrochemical cells there is a cathode, which accepts electrons, and an anode, which rejects electrons. For the lead acid battery the following reactions occur on discharging:

anode: Pb(s) + SO4^2−(aq) → PbSO4(s) + 2e−    (4.122)
cathode: PbO2(s) + SO4^2−(aq) + 4H3O+(aq) + 2e− → PbSO4(s) + 6H2O(l)    (4.123)
During the discharge PbSO4 precipitates on both electrodes, reducing the density of the fluid, which gives a fast way to estimate the status of the battery. The electrodes are somewhat porous in order to increase their effective surface area. Chemists have developed a quick way to find the voltage of an electrochemical cell (Exercise 4.22), but we will find it by way of the Gibbs free energy of the various compounds. In Eq. (4.48) it was found that for isobaric, isothermal processes, as in the cell considered here, the nonexpansion work δWe equals the decrease in Gibbs free energy, assuming that the process occurs reversibly. The nonexpansion work for the battery considered is charge × voltage, or Q × V [CV = J]. For one mole of Pb reacting in Eq. (4.122) this
Figure 4.20 Lead acid battery with H2 SO4 as electrolyte and with lead at the anode and lead oxide at the cathode.
becomes δWe = QV = nNAeV [J], where NA is Avogadro's number and e the electron charge. The factor n = 2 indicates that two electrons are exchanged for every atom of Pb reacting. The decrease of Gibbs free energy is found by combining Eqs. (4.122) and (4.123):

Pb(s) + PbO2(s) + 2SO4^2−(aq) + 4H3O+(aq) → 2PbSO4(s) + 6H2O(l)    (4.124)
Gibbs free energies of formation are tabulated in handbooks of (physical) chemistry ([6], [14] and elsewhere) for standard conditions, that is, a pressure of 1 [atm] and a temperature of 25 [°C]. The phrase 'of formation' means that the compounds are formed hypothetically from their atoms. Assuming standard conditions one will find for one mole

$$-\Delta G = 394.36 \times 10^3\,[\text{J}] = nN_A eV\,[\text{J}] \quad \text{or} \quad V = 2.04\,[\text{V}] \qquad (4.125)$$
(Exercise 4.22). Note that a voltage is defined as the potential difference between anode and cathode without a current running, so without changes to the device; the assumption in our derivation that the process goes reversibly therefore does not matter. In a car battery six cells are coupled in series, giving a voltage of 12 [V]. For grid applications many batteries have to be coupled. In both cases the battery is recharged by applying an external potential somewhat higher than the cell voltage, giving a reverse current.
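The cell voltage of Eq. (4.125) follows from $-\Delta G = nN_A eV$; a quick check with the value quoted in the text:

```python
# Cell voltage from Gibbs free energy: V = -dG / (n * N_A * e), Eq. (4.125)
N_A = 6.022e23        # Avogadro's number [1/mol]
E_CHARGE = 1.602e-19  # elementary charge [C]

def cell_voltage(delta_g: float, n: int) -> float:
    """delta_g: decrease in Gibbs free energy per mole [J]; n: electrons exchanged."""
    return delta_g / (n * N_A * E_CHARGE)

# Lead acid cell: -dG = 394.36 kJ per mole of Pb, n = 2
v_lead_acid = cell_voltage(394.36e3, n=2)
print(f"lead acid cell: V = {v_lead_acid:.2f} [V]")   # -> 2.04 [V]
```

Six such cells in series give the familiar 12 [V] car battery.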
4.4.3.4 Pumped Hydro Storage
Consider two reservoirs of water, one high and one low, with a height difference h, connected by two tubes. The first tube is used when there is a surplus of electricity (too much wind, or too much nuclear power during the night); then it pumps water from the low-lying reservoir to the higher one. Through the other tube a flow Q [m3 s−1] runs when there is a lack of electricity. With the density ρ of water the mass flow becomes ρQ [kg s−1]; when that flow is used to drive a generator, the maximum output power (for 100% efficiency) would be

$$P = \rho Q g h\,[\text{J s}^{-1} = \text{W}] \approx Q \times (h/100)\,[\text{MW}] \qquad (4.126)$$
Gravitational storage has been used for a long time already and is able to store large quantities of energy as back-up power. It is usually combined with a hydropower station, where one of the two reservoirs is the lake behind the power dam.
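The rule of thumb in Eq. (4.126) is easily checked; the reservoir numbers below are illustrative assumptions, not data from the text:

```python
# Pumped hydro power, Eq. (4.126): P = rho * Q * g * h
RHO_WATER = 1000.0  # density of water [kg/m^3]
G = 9.81            # gravitational acceleration [m/s^2]

def hydro_power_mw(q: float, h: float) -> float:
    """Maximum generating power [MW] for flow q [m^3/s] and head h [m]."""
    return RHO_WATER * q * G * h / 1e6

# Assumed example: 200 m^3/s falling through a 300 m head
q, h = 200.0, 300.0
print(f"exact:         {hydro_power_mw(q, h):.0f} [MW]")
print(f"rule of thumb: {q * h / 100:.0f} [MW]")
```

The approximation Q × h/100 simply replaces ρg ≈ 10^4 by a round number; it overestimates the exact value by about 2%.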
4.4.3.5 Hydrogen and Fuel Cell
Hydrogen may be produced from electricity by electrolysis of water:

2H2O + electric energy → 2H2 + O2    (4.127)

The formation enthalpy of water under standard conditions is tabulated as 285.83 [kJ mol−1]. To produce 1 [kg] of hydrogen under standard conditions would require about 39 [kWh] of electric energy and 8.9 [L] of water (Exercise 4.23). In practice much more electric energy is needed, leading to an efficiency of at most 70%. If the input electric power is alternating, one also has to rectify it, leading to further losses. This may be counted as a disadvantage of hydrogen as an electricity carrier.

On the positive side, the conversion of hydrogen back to electric energy can be done with high efficiency in a fuel cell. Its principle is illustrated in Figure 4.21. Hydrogen gas enters
Figure 4.21 The hydrogen fuel cell. Hydrogen enters at the anode where it produces positive H3 O+ ions. They move in an acidic electrolyte to the cathode where they react with entering oxygen, producing water.
a porous anode where it comes into contact with an electrolyte and a catalyst, producing positive ions (protons) and electrons according to the reaction

2H2 + 4H2O → 4e− + 4H3O+    (4.128)
The protons move through the electrolyte to the cathode where the reaction

4e− + 4H3O+ + O2 → 6H2O    (4.129)
discharges the ions. The net effect is the oxidation of hydrogen to water according to

2H2 + O2 → 2H2O    (4.130)
The electrons produced in Eq. (4.128) move from the anode to the cathode through an outer load. Thus the chemical energy is converted into electrical work δWe. For isobaric, isothermal processes we have from Eq. (4.54) that

$$\delta W_e \leq -\Delta G = -(\Delta H - T\Delta S) \qquad (4.131)$$
For an isobaric process the chemical energy in the hydrogen could be converted into heat δQ = ΔH. We therefore can define the maximum efficiency of the fuel cell process by

$$\eta_{\max} = \frac{\text{output}}{\text{input}} = \frac{\delta W_e}{-\Delta H} = \frac{-\Delta G}{-\Delta H} \qquad (4.132)$$
For the hydrogen fuel cell (4.130) one finds ηmax = 0.83 for producing exergy from chemical energy. If one produced heat of a certain temperature TH with an ambient temperature T0, the exergy would be much lower because of the Carnot factor (1 − T0/TH). The term −TΔS on the right of Eq. (4.131) corresponds to heat δQ = −TΔS, which is lost from the cell to the surroundings. The voltage of the fuel cell is found in the same way as Eq. (4.125) for the battery, which is another example of an electrochemical cell. For one molecule of water two electrons are exchanged, so again n = 2, which gives V = 1.229 [V]. To drive the reverse reaction
(4.127) a higher potential will be needed. A hydrogen fuel cell with Pt as catalyst would work at an operating temperature of between 80 and 200 [°C]. If one wants to store the energy from renewables in the form of hydrogen, the most obvious choice is photovoltaics, which delivers a direct current, so no rectifying is needed. Another option would be to put biomass into a gasification plant at high temperatures (600 [°C] to 1800 [°C]). That would produce a mixture of H2, CO and CH4 which can be converted to H2 at high temperatures. Still another option is to use the ‘waste heat’ of future high-temperature nuclear power stations to split water into its components.

4.4.3.6 The Hydrogen Economy
In the hydrogen economy hydrogen would be the main storage medium for electric power. In one of many proposals most houses and buildings would have solar cells on their roofs and wind turbines in the neighbourhood producing electricity. The surplus would be used to make hydrogen, which could be stored near the houses and also during the night in the hydrogen tanks of private cars. Even a surplus of conventional electricity could be used that way, and if the grid needed it the hydrogen could be converted back to electricity, which would then be sold again to the grid. The difficulty is that at standard conditions the energy content of hydrogen is very low (Exercise 4.23 and Table 4.4); it would have to be compressed, liquefied or absorbed in special compounds. Despite these technological problems there is much research on different kinds of electrolytes and different sorts of anode and cathode materials. The outcome is a wide variety of fuel cells, some in the research phase, others in commercial production. They vary in possible applications and in operating temperatures, which range from 50 [°C] to 1000 [°C] ([16]). Another difficulty in using hydrogen for storage of electrical energy is that one needs at least two conversions. First one has to produce hydrogen by electrolysis, using direct current (DC). The efficiency of this process is at most about 80% [15]. The second conversion happens in the fuel cell, where the efficiency will be lower than the 83% calculated above, say 75%. The two steps together leave one with 0.8 × 0.75 = 0.6 or 60% of the original energy. If one needs to rectify alternating current (AC) into DC one loses another 10%. In practice one will lose about half of the input.
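The loss chain can be tallied in a few lines, using the percentages quoted above:

```python
# Round-trip efficiency of storing AC electricity as hydrogen,
# using the percentages quoted in the text.
eta_rectifier = 0.90      # AC -> DC: rectifying loses ~10%
eta_electrolysis = 0.80   # DC -> H2: at most about 80%
eta_fuel_cell = 0.75      # H2 -> electricity: below the 83% limit, say 75%

round_trip_dc = eta_electrolysis * eta_fuel_cell
round_trip_ac = eta_rectifier * round_trip_dc

print(f"DC round trip: {round_trip_dc:.0%}")   # 60%
print(f"AC round trip: {round_trip_ac:.0%}")   # 54%, about half the input
```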
Table 4.4 Energy content of (potential) car fuels.

Fuel                 Pressure/[atm]      Energy content/[MJ L−1]
Gasoline (petrol)    1                   35.5
Diesel               1                   38.5
Methanol             1                   18.0
Ethanol (alcohol)    1                   23.5
H2 gas               1                   0.011
H2 gas               200                 2.5
H2 gas               800                 10.2
H2 fluid             14 at −240 [°C]     10.0
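The figures of Table 4.4 translate directly into tank volumes. As an illustrative calculation (the 50 [L] petrol tank is an assumed example, not a figure from the text):

```python
# Tank volume needed to match the energy of an assumed 50 L petrol tank,
# using the energy densities of Table 4.4 in [MJ/L].
energy_density = {
    "petrol": 35.5,
    "H2 gas, 200 atm": 2.5,
    "H2 gas, 800 atm": 10.2,
    "H2 fluid": 10.0,
}
petrol_tank_L = 50.0                               # assumed example tank size
energy_MJ = petrol_tank_L * energy_density["petrol"]

for fuel, rho in energy_density.items():
    print(f"{fuel:16s}: {energy_MJ / rho:6.0f} L")
```

Even compressed to 800 [atm], hydrogen needs roughly 3.5 times the petrol volume for the same energy, which illustrates why a hydrogen car needs special facilities.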
It must be mentioned that there exist fuel cells that operate on liquids like methanol (CH3OH). These fuels do not have the drawback of a low energy density (Table 4.4), and fuel cells based on carbon compounds may become a competitor of the hydrogen economy.

4.4.3.7 Comparison of Storage Technologies
In Table 4.3 we compare medium- to large-scale storage technologies, which may be coupled to power stations or to the grid. In the second column one finds the round-trip efficiency from electricity to storage to electricity again. It is apparent that the losses are not negligible. In the third column one finds typical times for discharge, in the fourth the power that can be provided during the discharge, while the last column gives an idea of the lifetime of the installation. The properties of the storage technologies shown in Table 4.3 are quite different. Which one will be used on the grid will depend on the costs. For use in a car the energy content in [MJ L−1] of possible fuels will be an important characteristic, as the available volume will be limited. Some data for (potential) car fuels are shown in Table 4.4. It is clear that one needs special facilities for a hydrogen car to be a realistic proposal.

4.4.4 Transmission of Electric Power
Transport of electricity is done by high-voltage power lines. The principal energy loss is caused by heat production in the wire, although conduction into the air causes some additional losses. So, when the resistance between power plant and consumer is R, the energy lost per second is

Plost = I²R [W]   (4.133)
The total power PT transmitted to the consumer, assuming current I and voltage V to be in phase, becomes

PT = IV   (4.134)
The ratio of the power lost to the power transmitted becomes

Plost/PT = I²R/(IV) = IR/V = PT R/V²   (4.135)
This ratio should be as small as possible. Decreasing the resistance R would mean thicker wires, which increases costs. For specific applications one could use superconducting wires with R = 0. Cooling to the required low temperatures is very expensive, as liquid helium is needed. The discovery of superconductivity at higher temperatures implies that the cooling might be done with the much cheaper liquid nitrogen. Much development is going on in producing strong wires from the exotic materials which show high-temperature superconductivity. The other possibility to diminish losses is to increase the potential V in the denominator of Eq. (4.135). The upper limit here is imposed by the corona discharges of the wires and discharges between the wires and the ground. The higher V, the higher the towers supporting the transmission lines must be and the more expensive the long insulator chains
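Equation (4.135) makes the case for high voltages concrete. An illustrative calculation (the load, line resistance and the three voltages are assumed example values, not data from the text):

```python
# Fractional transmission loss P_lost/P_T = P_T * R / V^2, Eq. (4.135),
# for an assumed 100 MW load on a line with R = 10 ohm.
P_T = 100e6   # transmitted power [W] (assumed example)
R = 10.0      # line resistance [ohm] (assumed example)

for V in (110e3, 380e3, 765e3):        # assumed transmission voltages [V]
    loss_fraction = P_T * R / V**2
    print(f"V = {V/1e3:5.0f} kV: loss = {loss_fraction:.1%}")
```

Raising the voltage from 110 [kV] to 765 [kV] cuts the loss fraction by a factor (765/110)² ≈ 48.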
between wires and towers. Also, one needs to transform the high voltage of the power lines to acceptable domestic voltages of 120 [V] to 220 [V]. Therefore AC lines are needed to facilitate transforming. Despite the advantages of high-voltage AC transmission there are some disadvantages compared to DC lines. First, the towers and insulation of AC lines should take account of the amplitude of the voltage, which is a factor of √2 higher than the root-mean-square value, which determines the energy transport. Second, the time variation of the AC more easily induces sparks, perturbing radio transmission; also the current will be concentrated at the surface of the conductor, the so-called skin effect. Therefore one often uses a few parallel lines to increase the surface area and decrease the resistance. A physical analysis of electricity transmission compares three constructions where in each case the maximum voltage between line and earth is V0 and each separate power line has the same resistance R. In the first situation one has two DC lines with voltages V0 and −V0, respectively, with respect to earth. Let IDC be the load current, which flows through the load and the two power lines. The power delivered then would be 2V0 IDC and the heat loss in the lines would be 2IDC² R. In the second situation one has two AC lines with voltages V0 cos ωt and −V0 cos ωt, respectively. Again the maximum voltage is V0, and the current through the lines will run with the same phase as the voltage: I0 cos ωt and −I0 cos ωt. The power delivered would be 2I0 V0 cos² ωt with a time average I0 V0. In order to deliver the same power as in the DC case one would need I0 = 2IDC. For the two lines together the heat loss will be 2(I0 cos ωt)² R. Averaging over time, one finds for the heat loss I0² R = 4IDC² R, twice the value for the two DC lines while the same power to the load was delivered.
In the third situation one has three power lines, with three-phase voltages V0 cos ωt, V0 cos(ωt + 2π/3), V0 cos(ωt + 4π/3) and with their return currents to earth. For each phase the load is wired such that these return currents respectively are I0 cos ωt, I0 cos(ωt + 2π/3), I0 cos(ωt + 4π/3). One may show that the total heat loss in the lines becomes 8IDC² R/3, which is smaller than for two AC lines, but still bigger than for two DC lines (Exercise 4.24). For these reasons DC high-voltage connections have also been introduced, usually abbreviated as HVDC = High Voltage Direct Current. The difficulty is that one needs expensive AC–DC converters at both ends of the cable, so it is only worthwhile for long-distance transmission. There are long DC lines all over the world. In Europe, for example, there is a DC connection between France and Britain under the English Channel, by which France exports its surplus nuclear power to the UK. In North America a 1480 [km] line transmits hydropower from Quebec to the population centres in Canada and the United States. For the future one might envision one or several supergrids of HVDC, connecting large areas, such as Europe and North Africa. They could transmit a surplus of energy from regions with high winds or intensive sunshine to faraway areas where there is a shortage of electric power. The supergrid would connect to the existing or improved high-voltage AC grids at a few coupling points [19] (Exercise 4.25). A completely different way to transport electricity is to first produce hydrogen, then transport it to the location where it is needed and finally use fuel cells to produce electricity again. One then has to deal with the low round-trip efficiency of the hydrogen/fuel cell cycle (cf. Table 4.3).
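The three configurations can be checked numerically by averaging the losses over one period; a sketch (V0, R and IDC are arbitrary example values, and the three-phase amplitude is fixed by requiring the same delivered power 2V0 IDC):

```python
import math

# Time-averaged line losses for three configurations delivering the same
# power 2*V0*IDC; V0, R, IDC are arbitrary example values (assumed).
V0, R, I_DC = 1.0, 1.0, 1.0
w = 2.0 * math.pi              # angular frequency for a period of 1 s
N = 10000
ts = [i / N for i in range(N)] # sample points over one full period

def time_avg(f):
    return sum(f(t) for t in ts) / N

# (1) Two DC lines at +V0 and -V0: heat loss 2*IDC^2*R.
loss_dc = 2.0 * I_DC**2 * R

# (2) Two AC lines; amplitude I0 = 2*IDC gives the same average power.
I0 = 2.0 * I_DC
loss_ac2 = time_avg(lambda t: 2.0 * (I0 * math.cos(w * t))**2 * R)

# (3) Three-phase lines; 3*(V0*I0p/2) = 2*V0*IDC fixes I0p = 4*IDC/3.
I0p = 4.0 * I_DC / 3.0
loss_ac3 = time_avg(lambda t: sum(
    (I0p * math.cos(w * t + k * 2.0 * math.pi / 3.0))**2 * R for k in range(3)))

print(loss_dc, round(loss_ac2, 3), round(loss_ac3, 3))   # 2.0 4.0 2.667
```

The ratios 2 : 4 : 8/3 reproduce the text: three-phase AC loses less than two-line AC, but more than two-line DC.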
4.5 Pollution from Heat Engines
Pollutants may be loosely defined as manmade substances that in concentrations above a certain threshold cause damage to plant, animal and human life. They may occur in air, water and soil, and be transported from one place to another. Their sources are found in the main sectors of modern, industrial society: transportation, production of electricity, burning of refuse, combustion of fuels and all kinds of industrial and agricultural processes. These primary pollutants may react with each other or with naturally occurring substances, causing secondary pollutants, such as photochemical smog. As an important subgroup of pollutants the air pollutants are displayed in Table 4.5 with their origins and their health effects. These pollutants and their physical-chemical origins will be discussed below. Radioactivity will be considered later in the chapter on nuclear power.

4.5.1 Nitrogen Oxides NOx
The symbol NOx comprises the nitrogen oxides NO and NO2. Together with hydrocarbons in the air they are among the major causes of photochemical smog. NOx results from any type of combustion in which air, consisting of N2 and O2, is brought to a high temperature by burning fuel. NOx is formed from N2 and O2 in the flame, at a temperature of a few thousand [K]. Its formation increases with temperature, as the reaction is endothermic; that is, it needs heat to run. A second source of NOx is the nitrogen present in the fuel itself. Fossil fuels originate from plants, which contain nitrogen. For some coals or oils the nitrogen concentration is in the order of 1%; in this case about 80% of the NOx originates from the fuel itself.
Table 4.5 Air pollutants, origins and health effects (from [20], p. 3).

Pollutant                                   Principal origin                      Health effects
NOx                                         Road transport                        Respiratory effects; inhibits plant growth; acid rain
SO2                                         Combustion of fossil fuels            Exacerbates respiratory pathology; acid rain
CO                                          Road transport                        Bad tissue oxygenation
Fine particles and heavy metals; aerosols   Industrial emissions; exhaust gases   Irritation and damage of respiratory functions; occasionally carcinogenic
Volatile organic compounds (VOC)            Road transport; solvent use           Bad smells; mutagenic and carcinogenic (benzene)
Natural gas contains little or no nitrogen. For gas, NOx production is essentially due to high combustion temperatures. The remedy for NOx emission can be twofold. In the first place one tries to lower the flame temperature. That may easily lead to lower efficiencies of the cycles concerned. It is a matter of design to avoid local high-temperature spots in the flame. Another way of reducing NOx production is to lower the available oxygen for combustion. This has the drawback that the films of oxides that protect the walls of the combustion chamber against high temperatures disappear as well, a problem that may be solved by new materials. This example serves to recognize that solving one problem may cause another. Much research is done on fluidized bed combustion for power production. Here one has a mixture of crushed coal and some magnesium or calcium carbonate, where the air enters from below. The boiler tubes of a steam generator are in direct and very good thermal contact with the bed. The combustion can then be performed at lower temperatures, around 1200 [K], while still producing the same temperature in the steam for the external combustion cycle. For power stations there are no easy ways to remove NOx after it has been produced – and some production is unavoidable. Under study are new designs or operations, for example, spraying chemicals like NH3 into the exhaust gases of power stations to try to reduce NOx to N2.

4.5.2 SO2
Plants not only contain nitrogen, but also sulfur. Coal and oil may contain a small percentage of sulfur by weight. There are big differences in sulfur content between coals from different origins, while gas is usually free from it. Transportation fuels are fairly free from sulfur (petrol < 0.05%; diesel < 0.25% [21], 1045), so coal- and oil-fired power stations are the main source of sulfur oxides in the air. SO2 gas is very soluble in water and is therefore easily absorbed in the moist tissues of the mouth, nose and lungs. In high concentrations and at long exposure times, the resulting acid can do considerable damage to man [21]. Less acute, but as bad, is the transportation of SO2 by the winds, in the form of gas or absorbed in water particles; it precipitates far from the origin causing acid rain. In power stations sulfur has to be removed from the fuel or from the emissions. In fluidized beds one may mix the fuel with lime (CaO), giving CaSO4, or one may spray the effluent gases with CaCO3 in so-called wet scrubbers, again giving CaSO4, which is left behind.

4.5.3 CO and CO2
All fossil fuels contain carbon, the combustion of which leads to CO2, with the climatic consequences discussed in Chapter 3. Technical means of reducing the CO2 production itself are hardly imaginable. One therefore has to capture the carbon dioxide from the effluent gases and store it underground, either as gas or in the form of metal carbonates. This is called CO2 sequestration. Besides CO2 also CO will be produced in fossil fuel combustion. In an equilibrium situation one would have

CO2 → CO + ½O2   (4.136)
Heat Engines
127
with an equilibrium constant ([14], 287 or [6], 279 or [21], 79)

K = ([CO] × [O2]^1/2)/[CO2] = exp(−ΔG0/RT) = exp(−(ΔH0 − TΔS0)/RT) = 32737 exp(−34033/T)   (4.137)
Here, the numerical value for the universal gas constant R (Appendix A) has been substituted, as well as the tabulated values for ΔH0 and ΔS0. Remember that the values of ΔH0, ΔS0 and ΔG0 hold at standard conditions. The variation of ΔG0 with temperature is strong, but ΔH0 and ΔS0 are only weakly temperature dependent ([14], 271). That is why the latter were substituted in Eq. (4.137), which is a reasonable approximation for a wide range of temperatures. Although equilibrium may not have time to develop completely, Eq. (4.136) gives a good impression of what happens. The three concentrations may be normalized by putting [CO2] + [CO] + [O2] = 1. The ratio α of moles of oxygen to moles of carbon in the flue gases, which is a measure of the air-to-fuel ratio, may be written as

α = (2[CO2] + [CO] + 2[O2])/([CO] + [CO2]) = moles O/moles C   (4.138)
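Given α and T, Eqs. (4.137) and (4.138) together with the normalization determine the equilibrium composition. A numerical sketch (the function name and the bisection approach are illustrative, not from the text):

```python
import math

def co_fraction(alpha, T):
    """Solve Eqs. (4.136)-(4.138) for [CO], with [CO2] + [CO] + [O2] = 1."""
    K = 32737.0 * math.exp(-34033.0 / T)          # equilibrium constant, Eq. (4.137)

    def residual(co):
        # From alpha = (2[CO2] + [CO] + 2[O2])/([CO] + [CO2]) and the
        # normalization it follows that [CO] + [CO2] = (2 - [CO])/alpha.
        total_c = (2.0 - co) / alpha
        co2 = total_c - co
        o2 = 1.0 - total_c
        return co * math.sqrt(o2) / co2 - K       # zero at equilibrium

    lo, hi = 1e-12, 2.0 / (alpha + 1.0) - 1e-9    # [CO] cannot exceed [CO]+[CO2]
    for _ in range(100):                          # bisection; residual increases with [CO]
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Octane combustion, alpha = 3.125, at T = 3000 K (values quoted in the text):
print(round(co_fraction(3.125, 3000.0), 3))       # 0.213
```

At 600 [K] the same routine gives a vanishingly small [CO], in line with the discussion of the catalytic converter in Section 4.6.3.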
For every combination α, T one may calculate [CO] (Exercise 4.26, [22]). The stoichiometric quantities of a reaction refer to the case where the reaction develops completely to one side, say the right, for example when all reaction products are taken away. For the stoichiometric combustion of octane (C8H18), which is close to gasoline, one would find α = 3.125. For a reaction at T = 3000 [K] it follows that [CO] = 0.213. Qualitatively it follows from Eq. (4.137) that with increasing temperature K becomes bigger, and consequently also [CO]. For a higher value of α, so with more O2, reaction (4.136) shifts to the left, giving less CO. In every cycle of the engine less fuel is then burnt, however, which may result in lower power.

4.5.4 Aerosols
Strictly speaking an aerosol (literally air + solid) is a system of liquid or solid particles, uniformly distributed in a finely divided state in the air. From an aeroplane it is observed as a haze in the air above industrialized areas. In the field of air pollution, the terms particles, particulate matter and aerosols are used interchangeably. These terms comprise all compositions, masses, sizes and shapes. The chemical composition and the mass of a particle are uniquely defined; its size and shape are not. So the latter two are taken together in the definition of an equivalent diameter d, which is the diameter of a water sphere which under the influence of gravity will settle to the ground with the same speed as the particle under consideration ([21], p. 116). The masses, which occur at a given (equivalent) diameter d can be measured and displayed. This is done in Figure 4.22 in a qualitative way with a logarithmic scale for the d-axis. Notice the two peaks, one at a diameter of somewhat less than 1 [μm] and one at a diameter of around 10 [μm]. The smaller particles pose the highest health risk, as they penetrate deepest into the lungs. When they are chemically or biologically active, for example when they contain sulfuric acid, they may do much local damage ([21], p. 116). As the smallest particles are most dangerous, regulations define a quantity PM10 . This is
Figure 4.22 Mass distribution of polluting particles in the atmosphere, for urban, rural and remote background air. The horizontal scale is logarithmic. (Reproduced by permission of John Wiley & Sons, Inc)
the particulate matter [μg m−3] with diameter d < 10 [μm] ([21], p. 32). The percentage of the finest particles (d < 2.5 [μm]) is indicated by FPM and its mass by PM2.5 [μg m−3]. In Figure 4.22 one notices that small particles are even present in the ‘remote’ natural background. That may imply that during evolution the human body has developed a defence mechanism against the particles and can deal with them to a certain extent. In high concentrations they pose a health hazard, in particular to those parts of the population that are physically weakest. At first glance the burning of coal and wood would seem the ‘bad guy’ in the production of small particles. This indeed is true for open fires or malfunctioning stoves. Industrial installations such as modern power stations, however, use all kinds of technologies to remove particles from their exhaust gases. An old method lets the gases pass through a chamber where the particles settle down by gravity. More modern is the use of a cyclonic device, by which the particles are deposited on the wall and subsequently removed. Finally we mention electrostatic precipitators, where the particles move past wires with a potential of 30–60 [kV], are charged, attracted by grounded plates and removed ([21], p. 88). In countries that can afford clean technologies for power production and enforce their use, the remaining source of particles is urban traffic. Incomplete combustion of diesel will produce carbon particles. The wear and tear of the engine will produce small particles, consisting of carbon and metals, which may be emitted into the ambient air. Lubricating oil may add to this problem. In these respects modern diesels still perform worse than modern petrol-driven cars.

4.5.5 Volatile Organic Compounds VOC
The class of pollutants called volatile organic compounds, abbreviated as VOCs, comprises the many organic compounds that easily evaporate from the liquid phase. Many of them are in the chemical form of hydrocarbons, written as Cx Hy with various combinations (x, y). Sometimes they are written shorthand as CH.
The volatility of these compounds is apparent from the persistent smell near gas filling stations due to spilt petrol (often by careless drivers). Besides the ill effects shown in Table 4.5, an important secondary effect is that VOCs combine with NOx under the influence of sunlight to form ozone and smog ([21], p. 28). This is the main reason for regulating the emissions of VOCs.

4.5.6 Thermal Pollution
All engines discussed in this section have a thermodynamic efficiency which is significantly smaller than 1 (Eqs. (4.62) and (4.70)). Therefore much heat has to be dumped into the natural environment. For big power plants one method is to use water from lakes and rivers to cool the condenser and return the water, increasing the temperature of the water source and damaging the life of plants or fish. Other cooling methods dump most of the waste heat in the atmosphere. One method uses big cooling towers where hot condenser water is directly sprayed into the tower and is cooled by rising air in which part of it evaporates, losing latent heat; the rest of the cooling is done by exchanging sensible heat. In this system fresh cold water has to be added continuously to make up for the loss of water by evaporation (wet towers). Alternatively one cools by heat exchange between a condenser and tubes in the tower that are cooled by rising air (dry towers). In cogeneration or combined heat-power (Section 4.4.2) the waste heat is reduced, as part of the heat of generating electricity is put to use.

4.5.7 Regulations
The environmental pathway of pollutants may be depicted in the following way ([23], 62)

Source → Emissions → Concentrations → Exposure → Dose → Health effects   (4.139)

The World Health Organization WHO compiles studies on health effects as functions of concentrations, and publishes guidelines on concentrations which may be considered to be safe for health. The real concentrations are often (much) higher than these thresholds. In the political process countries have to weigh health effects on the population against the economic costs of strict environmental regulations. In practice, they end up with acceptable concentrations which are higher than the WHO guidelines (see [23] for examples on air quality), but still require an effort to accommodate. The most practical measure is to limit the emissions at their source, so power stations, cars, trucks and so on have to comply with certain standards, which are enforced by regular official checks. It usually turns out that in the neighbourhood of, for example, motorways the concentrations of pollutants in the air are still too high. One then may decide not to build hospitals and schools in these regions.
4.6 The Private Car
As the private car is an important source of air pollution and an important consumer of energy, we discuss in this section its power consumption, the traditional fuels, the three-way converter, the electric car and the hybrid car.
4.6.1 Power Needs
When a vehicle (car, truck or train) is moving at a constant velocity u there are two main forces acting to slow down the motion: the air drag Fd and the rolling resistance Fr. The air drag is the resistance which a body with cross section Af experiences in a moving fluid; ignoring wind velocities, this may be expressed as

Fd = Cd Af (ρ/2) u²   (4.140)

Here ρ is the air density and Cd the drag coefficient. This coefficient may be calculated for a few simple cases. In the case of a flat plate the momentum hitting the plate per second would be that of the total air within a block with area Af and length u, which would have momentum (ρAf u)u [kg m s−2] with the dimensions of a force [N]. In the simplest approximation all this momentum would vanish after hitting the plate, giving

Fd = ρ Af u²   (4.141)
which corresponds to Cd = 2. In reality the air moves around the plate; a more accurate calculation for a flat plate would give Cd = 1.3. For modern passenger cars the drag coefficients are around 0.3, depending on streamlining and velocity. The other force acting on a vehicle is the rolling resistance Fr. This will be proportional to the mass M of the vehicle and is mainly due to the internal compression of the part of the tyre touching the road and the depression of the part just leaving the road. An empirical relation is the following

Fr = Cr g M   (4.142)
Here the coefficient Cr depends on the quality of the road and varies between 0.01 for an asphalt road and 0.06 for a sand road. The coefficient Cr will be smaller for a heavy truck with stiff tyres, but in all cases it increases with wear and tear. Figure 4.23 shows both forces as a function of velocity for a typical compact car. One notices that for higher speeds
Figure 4.23 Air drag and rolling resistance for a light passenger car with M = 1130 [kg], Af = 1.88 [m²], Cd = 0.3, calculated from Eqs. (4.140) and (4.142).
Heat Engines
131
the air drag starts to dominate. The power consumption due to the two forces mentioned becomes

P = (Fd + Fr) u [W]   (4.143)
as u is the path of the car over one second. The actual power that needs to be supplied is much higher than Eq. (4.143). There are the inherent losses of the engine, like internal cylinder losses, friction losses and transmission losses. There are losses to accessories like the air conditioner (aircon); here the chemical energy of the fuel has to be converted to mechanical energy, next to electrical energy of the batteries, then to the mechanical energy of the aircon, which is nothing more than the refrigerator of Section 4.3.5. In fact, part of the improvement in fuel economy in the 1970s and 1980s by producing cars of smaller mass M was used to drive the accessories. A third need for power is the acceleration from rest (velocity u = 0) to velocity u. The kinetic energy mu²/2 [J] has to be achieved in a time t. The corresponding power would be

Pa = mu²/(2t)   (4.144)
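Equations (4.140) to (4.144) give quick numerical estimates. A sketch using the car of Figure 4.23 (the air density, rolling coefficient and acceleration time are assumed example values):

```python
# Driving power for the car of Figure 4.23: M = 1130 kg, Af = 1.88 m^2,
# Cd = 0.3. Air density, rolling coefficient and acceleration time are
# assumed example values, not data from the text.
M, Af, Cd = 1130.0, 1.88, 0.3
rho = 1.2        # air density [kg m^-3] (assumed)
Cr = 0.01        # rolling coefficient, asphalt road (assumed)
g = 9.81         # gravitational acceleration [m s^-2]

u = 100.0 / 3.6                    # 100 km/h in [m/s]
Fd = Cd * Af * (rho / 2.0) * u**2  # air drag, Eq. (4.140)
Fr = Cr * g * M                    # rolling resistance, Eq. (4.142)
P = (Fd + Fr) * u                  # constant-speed power, Eq. (4.143)

t = 10.0                           # assumed: 0 to 100 km/h in 10 s
Pa = 0.5 * M * u**2 / t            # acceleration power, Eq. (4.144)

print(f"Fd = {Fd:.0f} N, Fr = {Fr:.0f} N, P = {P/1e3:.1f} kW")
print(f"Pa = {Pa/1e3:.1f} kW")
```

Compare with Figure 4.23: at 100 [km h−1] the total resistance is indeed a few hundred [N], and in this example the acceleration power is about four times the cruising power.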
Fast acceleration will require much more power than driving at a constant speed (Exercise 4.28).

4.6.2 Automobile Fuels
The top two rows in Table 4.4 give the energy contents of automobile fuels which are widely in use. For cars it still is petrol and for trucks diesel fuel. They are based on fossil oil, of which the cheap resources are finite and which has the pollution properties discussed earlier. The next two belong to the family of alcohols and will be discussed briefly. Hydrogen is discussed in Section 4.6.4. Chemically, methanol and ethanol may be understood as methane or ethane in which one hydrogen atom is replaced by an –OH group, as shown in Figure 4.24. Ethanol is the alcohol contained in beverages like beer, wine and spirits. Methanol does not contain sulfur, therefore SO2 emissions are absent. There are small emissions of particles, not from the fuel itself, but from the lubricating oils. Its flame temperature is rather low, resulting in small NOx emissions. Emissions of CO cannot be avoided, but can be limited by a catalytic converter (Section 4.6.3). The principal pollution problem is the formation of the poisonous formaldehyde H2CO, in which the –OH group in methanol is replaced by an aldehyde (=O). Methanol is most easily produced from natural gas, but may also be produced from coal or wood ([21], pp 69, 165, 1045). Ethanol like methanol has no sulfur content. It has the additional advantage that it can easily be produced from biomass, so the resulting CO2 is bound again in biomass for the next round of use. In this way there is no net increase in the concentration of this greenhouse gas in the atmosphere and the use of ethanol does not deplete fossil fuel reserves, except for what is needed for its industrial production. At present it is mainly used in Brazil (96% ethanol, 4% water, [21], p. 1435) or as an admixture to petrol or gasoline with the name gasohol = gasoline + alcohol.
Figure 4.24 Structure of methane, ethane, methanol and ethanol.
Natural gas consists mainly (>90%) of methane. It may be liquefied by cooling in the form of LNG. Similarly, liquefied petroleum gas LPG is used as fuel for cars. Both LPG and LNG have no particle emissions, except from the lubricating oils, and they have moderate emissions of hydrocarbons. The NOx emissions depend on the flame temperature, which may be somewhat lower than for petrol ([21], p. 164).

4.6.3 Three-Way Catalytic Converter
The pollutants CO and NOx are formed in endothermic reactions; this means that energy has to be supplied in order to run the reaction. This would suggest that at temperatures lower than the high temperatures occurring during combustion, reaction (4.136) would run to the left and similarly that NOx would return to N2 and O2. Indeed, at 600 [K] equilibrium (4.136) would be almost completely shifted to the left (Exercise 4.26). The shift to the left in Eq. (4.136) does not happen automatically in the short path between hot engine and cold tailpipe. The exhaust gases are quickly removed from the engine and there is no time for a new equilibrium to establish at the lower temperatures of the outlet. Therefore, one puts a catalyst in the tailpipe between the combustion chambers and the outlet, which speeds up the return reactions, with extra air added to give Eq. (4.136) a boost to the left. The catalyst usually consists of metals (Pt, Pd, Rh) with a large surface area to optimize contact of the exhaust gases with the catalyst. In principle, the catalyst itself does not change in the reaction process, but it may become dirty and has to be maintained. Figure 4.25 gives a schematic picture of a TWC, a three-way converter, as it is used in cars. The name indicates that it fulfils three tasks at the same time: CO and CxHy are oxidized to CO2 and H2O, while the nitrogen oxides NOx are reduced to N2 and O2 ([21],
Figure 4.25 Three-way catalytic converter. Note the air which is supplied and the shaded catalysts. (Reproduced by permission of John Wiley & Sons, Inc)
pp 223, 224, 718). Of course, electronic devices regulate the flow of the exhaust gases, the air/fuel ratio at the inlet and the additional air into the converter. Technology has done a lot to reduce the emissions of individual cars. It must be emphasized again that good maintenance and regular check-ups are essential to keep the emissions low during the lifetime of the vehicle. This, of course, has to be enforced, as not everybody will do it voluntarily.

4.6.4 Electric Car
The chief advantage of the electric car is that its emissions are close to zero. In fact, only wear and tear of the engine and wheels will give some emission of small particles. If the electricity is produced in fossil-fuel fired power stations, the emissions are shifted to the power stations. Still, removing pollutants from a limited number of power stations may be more economical than doing the same from a large number of cars. Secondly, the efficiencies of power stations will be higher than those in traditional gas or diesel cars. Finally, the remaining emissions from power stations will be diluted when they reach population centres. If electricity from renewable energies becomes widely available and good storage facilities in the cars are provided, an additional advantage of the electric car would be that its operation would not use exhaustible oil resources. The bottleneck of the electric car is storage. For electricity one will need a lot of batteries, which need hours to charge from the grid. For simple electric cars the range is in the order of 60 [km]; for cars with sophisticated and expensive batteries the range may increase to 300 [km]. One may use hydrogen storage, as discussed in Section 4.4.3, and use a fuel cell to generate the required electricity. Also here, the storage of hydrogen is not trivial (Table 4.4). In a bus there is more space than in a private car and in fact there are buses running on hydrogen in various places.
4.6.5 Hybrid Car
The hybrid car combines the best of both worlds: a gasoline-fuelled engine is combined with an electromotor. There are two possible drivelines ([21], p 986). In the first one the gasoline engine, battery and electromotor are in series. The gasoline engine produces electricity, which is stored in a set of batteries, which drive an electromotor. The engine can run at constant speed with low emissions, while all accelerations are produced by the electromotor. In the second possible driveline, engine and battery/electromotor work in parallel to provide torque to the wheels. The mechanical output of the engine is used to power the vehicle and to charge the batteries. In both configurations microprocessors control the operation of the vehicle to reduce the emissions and to optimize the fuel efficiency. On braking, the kinetic energy is not wasted as heat, but fed back into a storage system: a flywheel, an electric capacitor or a battery. The fuel efficiency of a typical hybrid car is satisfactory (Exercise 4.29).
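The gain from regenerative braking can be estimated from the kinetic energy at cruising speed. A minimal sketch; the 1300 [kg] mass and 100 [km/h] speed are illustrative assumptions, not data from the text:

```python
# Kinetic energy available for recuperation when braking to a standstill.
# Illustrative values (assumptions): a 1300 [kg] car braking from 100 [km/h].
m = 1300.0                  # vehicle mass [kg]
u = 100 / 3.6               # speed [m/s]
E_kin = 0.5 * m * u ** 2    # kinetic energy [J], about 0.5 [MJ]
# A battery, capacitor or flywheel can store part of this instead of
# dissipating it as heat in the brakes.
```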
4.7 Economics of Energy Conversion
Companies that are deciding whether or not to build an installation like a power station, a building or an expensive machine make a calculation in which they distinguish between fixed costs and variable costs. The fixed costs of a power station are essentially the capital costs: paying interest on the invested capital and paying back the capital over a number of years. If the company has its own money it could put it in the bank and receive interest, keeping its capital intact. If it invests instead, it has to decide whether it will get its money back before the installation has to be replaced. So, own money or not, the capital costs are there. The variable costs of a power station are fuel costs and the costs of operation and maintenance, such as wages of engineers, repairs and so on. For some installations, such as those that produce electricity from sun or wind, it is not the fuel costs but the saved fuel costs that determine whether or not to build the system. In Section 4.7.1 we show how to make such a calculation. In fact, private persons can also use the method in deciding to buy a house or a solar collector. In Section 4.7.2 we draw on the everyday experience that the second or third time that one performs a certain task (painting a wall, changing the wheels of a car, repairing a flat tyre) is easier than the first time. This is also true in manufacture, where the existence of a learning curve has been shown in practice.

4.7.1 Capital Costs
When a certain amount of money P is put in a bank or lent out against an interest rate i, and the interest is calculated and added to the capital at the end of each year, the total (future) value F(t) of the initial investment after t years will be

F(t) = P(1 + i)^t    (4.145)
This value, also called compound interest, grows exponentially with time. Eq. (4.145) relates the future value F(t) to the present value P once the interest rate is known. From an economic point of view both values are equivalent.
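Compounding (4.145) and discounting are each other's inverse, which is easy to check numerically. A minimal sketch in Python; the function names are ours:

```python
def future_value(P, i, t):
    """Future value after t years at interest rate i, Eq. (4.145)."""
    return P * (1 + i) ** t

def present_value(F, i, t):
    """Present value of an amount F due t years ahead, the inverse of (4.145)."""
    return F / (1 + i) ** t

# 1000 units at 8% for 10 years roughly doubles the capital.
F = future_value(1000, 0.08, 10)
assert abs(present_value(F, 0.08, 10) - 1000) < 1e-9
```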
Heat Engines
Consider a uniform payment A at the end of each year over t years. Then one can reduce each payment back to time t = 0 by using the inverse of Eq. (4.145), giving

A/(1 + i) + A/(1 + i)^2 + ... + A/(1 + i)^t    (4.146)

which is a geometric series with sum

A[(1 + i)^t − 1] / [i(1 + i)^t] = P    (4.147)
This is called the present value P of the series of payments; mathematically a single amount P is thus equivalent to a uniform series of payments. Economically this is not precisely true, since a single amount of money P left in a bank for a number of years t would draw the so-called long-term interest rate i, whereas a series of smaller payments would yield a lower interest rate j. This complication will not be discussed here, but it appears in Example 4.8. Eq. (4.145) may also be used to calculate the present value P of a payment F(t) due t years in the future:

P = F(t)/(1 + i)^t    (4.148)
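The closed form (4.147) can be verified against the term-by-term sum (4.146). A short sketch; the function names are ours:

```python
def annuity_pv(A, i, t):
    """Present value of t end-of-year payments A, closed form of Eq. (4.147)."""
    return A * ((1 + i) ** t - 1) / (i * (1 + i) ** t)

def annuity_pv_sum(A, i, t):
    """Direct evaluation of the series (4.146)."""
    return sum(A / (1 + i) ** k for k in range(1, t + 1))

# The closed form agrees with the term-by-term sum.
assert abs(annuity_pv(100, 0.08, 15) - annuity_pv_sum(100, 0.08, 15)) < 1e-9
```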
In considering long-term investments one has to take into account that prices may rise according to a general inflation rate. Also, in estimating the economic feasibility of, for example, building a wind turbine one has to estimate the cost of saved fuel in the future, which may rise or fall according to the assumptions one makes. Both effects together result in an apparent escalation rate e. This rate will depend on the assumptions made, and in their investment decisions companies will be 'on the safe side'. The presumed escalation rate e implies that a uniform series of payments or receipts A has an even lower present value in purchasing power than indicated by Eq. (4.147). One now has the geometric series

P = A/[(1 + i)(1 + e)] + ... + A/[(1 + i)^t(1 + e)^t] = A{(1 + e)^t(1 + i)^t − 1} / [(1 + e)^t(1 + i)^t(e + i + ei)]    (4.149)
In practice, utilities have to pay an interest rate i on an investment and expect a series of annual receipts at some escalation rate e corresponding to a continuous price increase. If the first receipt, at the end of the first year, is A(1 + e), its present value at the beginning of the year will be A(1 + e)/(1 + i). The present value of the payment at the end of the second year will be A(1 + e)^2/(1 + i)^2. The present value of the series over a total of t years becomes

P = A(1 + e) [((1 + e)/(1 + i))^t − 1] / (e − i)    (e ≠ i)    (4.150)

or

P = At    (e = i)    (4.151)

The last equation shows that price increase and interest rate cancel if e = i.
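Both branches of (4.150)/(4.151) can be checked against a direct summation of the discounted receipts. A minimal sketch; the function name is ours:

```python
def escalating_pv(A, e, i, t):
    """Present value of receipts A(1+e), A(1+e)^2, ..., Eqs. (4.150)/(4.151)."""
    if abs(e - i) < 1e-12:
        return A * t                                   # Eq. (4.151)
    q = (1 + e) / (1 + i)
    return A * (1 + e) * (q ** t - 1) / (e - i)        # Eq. (4.150)

# Check against the term-by-term sum for e != i ...
direct = sum(100 * 1.05 ** k / 1.08 ** k for k in range(1, 21))
assert abs(escalating_pv(100, 0.05, 0.08, 20) - direct) < 1e-9
# ... and the cancellation for e == i.
assert abs(escalating_pv(100, 0.08, 0.08, 20) - 100 * 20) < 1e-9
```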
4.7.1.1 Levelized End-of-Year Cost
When comparing alternatives for the production of electric power one has to compare the cost of future payments of interest against future receipts for the power delivered. These should all be reduced to the present value P. The parameters of the alternatives, such as the lifetime t of the installation or even the interest rate i, may differ, however. Therefore it is convenient to introduce the levelized end-of-year cost L. This is the uniform payment to be made at the end of each year up to the end of life of the installation which gives the present value P according to Eq. (4.147). In that equation we have to replace A by L, which gives

L = P i(1 + i)^t / [(1 + i)^t − 1]    (4.152)
The factor multiplying P on the right-hand side is called the capital recovery factor. It is the factor by which the capital P should be multiplied in order to find the annual payment over t years that precisely recovers the capital P. Many banks offer a mortgage on a house where the constant annuity (interest + repayment) L for the mortgage P is calculated by Eq. (4.152).
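The mortgage interpretation of (4.152) can be verified numerically: discounting the t equal annuity payments with (4.147) must return exactly the principal. A sketch, using the numbers of Exercise 4.30 as an example:

```python
def capital_recovery_factor(i, t):
    """Factor i(1+i)^t / [(1+i)^t - 1] from Eq. (4.152)."""
    return i * (1 + i) ** t / ((1 + i) ** t - 1)

# Annuity mortgage: 100 000 at 10% over 25 years (cf. Exercise 4.30).
L = 100000 * capital_recovery_factor(0.10, 25)
# The 25 annuity payments, discounted as in Eq. (4.147), recover the principal.
assert abs(sum(L / 1.10 ** k for k in range(1, 26)) - 100000) < 1e-6
```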
4.7.1.2 Rest Value
At the end of life of an installation there may remain a rest value S. This will be positive when the installation or the building can be sold, but it may be negative when a high cost of decommissioning is expected. This will be the case for heavily polluted chemical installations or for nuclear power stations. The rest value S(t) after t years can be reduced to its present value S(0) by Eq. (4.145):

S(0) = S(t)/(1 + i)^t    (4.153)
For long periods of time this results in a small fraction of S(t). With an interest rate i = 0.08, the fraction becomes 0.15 for t = 25 years and even smaller than 0.0005 after t = 100 years. The latter number is relevant if one plans to let nuclear power stations cool down for 100 years after the end of operation before dismantling them. Because of the long periods of time involved, the decommissioning cost does not weigh heavily in the levelized cost L during the lifetime of the installation.
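The fractions quoted in the text follow directly from (4.153); a one-line check:

```python
def discount_factor(i, t):
    """Fraction S(0)/S(t) = 1/(1+i)^t from Eq. (4.153)."""
    return 1 / (1 + i) ** t

# The fractions quoted in the text for i = 0.08:
assert round(discount_factor(0.08, 25), 2) == 0.15   # about 0.15 after 25 years
assert discount_factor(0.08, 100) < 0.0005           # after 100 years
```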
4.7.1.3 Building Times
The capital costs of an installation are influenced by the building times. Usually the commissioning company has to make annual payments that, by contract, may rise by a factor (1 + e) due to inflation. Also the company has to pay an interest rate i over the payments without earning any money from the installation. If the first payment A to the contractor takes place after one year, then the series of payments becomes A, A(1 + e). . . over t years. At that time the installation is planned to start earning money. The series of payments during the construction of the installation has to be calculated at the time t when it is finished, in order to estimate the capital costs. This is done by calculating the future value of the series of payments. After the first payment t − 1 years
have passed, so its future value will be A(1 + i)^(t−1). All payments together give a series A(1 + i)^(t−1), A(1 + e)(1 + i)^(t−2), ... Its total after t years becomes

F = A[(1 + i)^t − (1 + e)^t] / (i − e)    (i ≠ e)    (4.154)
This result is bigger than the value At for t equal payments. With i = 0.08, e = 0.05 and t = 6 one finds F = 1.37At, and for t = 12 even F = 2.01At. Therefore a long construction time may double the costs compared with what one naively would expect.
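The two quoted figures follow from (4.154); a minimal check with our own function name:

```python
def construction_fv(A, i, e, t):
    """Future value of escalating construction payments, Eq. (4.154), i != e."""
    return A * ((1 + i) ** t - (1 + e) ** t) / (i - e)

# Reproduce the text's figures for i = 0.08, e = 0.05:
assert round(construction_fv(1, 0.08, 0.05, 6) / 6, 2) == 1.37    # F = 1.37*At
assert round(construction_fv(1, 0.08, 0.05, 12) / 12, 2) == 2.01  # F = 2.01*At
```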
4.7.1.4 Break-Even Point
Suppose a person has to decide whether to buy a 'fuel-free' power installation for a price I, which has to compete with buying power from a utility. Suppose that the annual price increase of the utility's power is e and the money for the fuel saved is put in the bank at interest rate i. Then, at the end of t years of operation the value of the fuel saved is given precisely by Eq. (4.154). Its present value P can be calculated with Eq. (4.148). The break-even point is reached at the time t for which P ≥ I.
Example 4.8 Comparing a Fuel Bill with Costs of a Solar Collector
A lady is considering an investment of I = 3000 Euro (or dollars) in a solar hot water installation, but she first wants to calculate whether her fuel bill is high enough to make the investment worthwhile. Her present fuel bill is A0; its expected annual increase is e = 0.06. The rest value after 20 years is S = 500 € (or $). The interest rate on the loan she has to take is i1 = 0.10. She can do two things with the money saved: (i) at the end of the year put it in a savings account with interest rate i2 = 0.05 (always i2 ≤ i1), or (ii) if she has a cooperative bank which allows it, use the money to pay back part of her loan, which effectively amounts to i2 = 0.10. Calculate A0.

Answer

For the first year her fuel bill will be A0. The future value of the money saved will be given by Eq. (4.154) with either i1 or i2. Let the future values be indicated by F1 or F2, respectively. For the parameters given, Eq. (4.154) results in

F1 = 55.4A0 and F2 = 88.0A0

The break-even point is reached when the present value of the fuel saved plus the rest value equals the investment (ignoring operation and maintenance costs):

(F + S)/1.10^20 = I = 3000

or F = 3000(1.10)^20 − 500 = 19682. With the interest rate i2 = 0.05 this gives A0 = 355 €, and with the interest rate i2 = 0.10 this gives A0 = 224 €. Conclusion: the result is very sensitive to the parameters.
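The example's numbers can be reproduced with Eq. (4.154); a sketch with our own function name:

```python
def saved_fuel_fv(A0, i, e, t):
    """Future value of fuel savings A0(1+e), A0(1+e)^2, ..., Eq. (4.154)."""
    return A0 * ((1 + i) ** t - (1 + e) ** t) / (i - e)

I_inv, S, t, i1, e = 3000, 500, 20, 0.10, 0.06
F_needed = I_inv * (1 + i1) ** t - S          # about 19682, as in the example

A0_savings = F_needed / saved_fuel_fv(1, 0.05, e, t)  # option (i),  i2 = 0.05
A0_loan    = F_needed / saved_fuel_fv(1, 0.10, e, t)  # option (ii), i2 = 0.10
assert round(A0_savings) == 355 and round(A0_loan) == 224
```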
4.7.2 Learning Curve
Future costs are estimated with a learning curve. This curve summarizes the common experience that the production of a second device is easier and less costly than that of the first one. In manufacture this is due to improved production methods, increased know-how of the employees, small changes in the product itself and so on. In economics one defines a learning rate L, which is the fraction by which the production costs have gone down when the accumulated production has doubled. This is represented by

Cn = C1 n^(−a)    with a = −ln(1 − L)/ln 2    (4.155)
Here C1 is the production cost of the first unit and Cn the production cost of the nth unit. One may check that C2n − Cn = −LCn (Exercises 4.32 and 4.33). Note that a decrease in the cost of a device only happens when the production, and hence the sales, of the device continue. This is the rationale behind government subsidies of renewable energies like solar energy. Without subsidies solar panels would not sell. The idea is that, following the learning curve, the costs go down and the need for subsidies as well. In the end renewable energy installations should be competitive with energy from fossil fuels.
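The doubling property C2n = (1 − L)Cn follows directly from (4.155), since 2^(−a) = 1 − L. A short numerical check, using the 18% learning rate quoted for PV in Exercise 4.33:

```python
import math

def unit_cost(C1, L, n):
    """Cost of the nth unit, Eq. (4.155): Cn = C1 * n^(-a), a = -ln(1-L)/ln 2."""
    a = -math.log(1 - L) / math.log(2)
    return C1 * n ** (-a)

# Each doubling of accumulated production lowers the unit cost by the fraction L.
C1, L = 100.0, 0.18
for n in (1, 2, 4, 8):
    assert abs(unit_cost(C1, L, 2 * n) - (1 - L) * unit_cost(C1, L, n)) < 1e-9
```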
Exercises

4.1 Marines of bodyweight 70 [kg] reach an altitude of 2923 [m] in 7.75 hours. Calculate the total potential energy gained. What is the resulting power? What is more sensible, to calculate the power based on 7.75 hours, or based on 24 hours?

4.2 In his fitness centre one of the authors (mass m = 75 [kg]) walks 4 [km] in an hour on a conveyor belt with a slope of 10%. According to the fitness centre he loses 2.4 × 10^6 [J] of energy in body fat. (a) Estimate the potential energy gained when walking up a mountain in the same way. (b) Compare with Exercise 4.1. (c) What is the efficiency with which body energy is converted into potential energy? (d) What happens to the rest of the energy?

4.3 Define the heat resistance for the radiative interaction of a body of surface temperature Ts with surroundings at ambient temperature T∞.

4.4 A modern 'double glazed' window consists of three layers: 4 [mm] glass, 6 [mm] air and again 4 [mm] glass. Consider only the heat loss by conduction from the inside to the outside, with an inside temperature of T1 = 20 [°C] and an outside temperature of T2 = 5 [°C]. (a) Calculate the heat resistance for 1 [m²] with the data in Table 4.1. (b) Calculate the thickness of a single glass window with the same heat resistance. (c) Calculate the heat loss [W] for a double glazed window with an area of A = 3 [m²]. (d) Calculate the heat loss for a single glass window of 6 [mm] with the same area. Note: the air space should not be too large, otherwise internal convection will occur. Besides conduction loss there is radiation loss as well, which is reduced by applying a coating.

4.5 A water pipe for central heating with length L = 10 [m] and diameter 25 [mm] runs over a cold attic with Tair = 5 [°C]. The radiation temperature of the surroundings is usually lower than the air temperature and is taken as Tsurr = −5 [°C].
The temperature of the pipe is Tp = 70 [°C]. The return pipe has a temperature of Tpr = 40 [°C]. The convection coefficient may be taken as h = 10 [W m−2 K−1] and the emissivity of the metal pipe at this temperature as ε = 0.1. Note: the emissivity is strongly dependent on the material, the paint and the temperature. For metals it usually is low. (a) Calculate the heat losses of convection and radiation of both pipes. (b) Deduce the cost of 1 [kWh] of natural gas from your energy bill (using Appendix A). If you cannot find it, use the authors' value of € 0.52 for 1 [m³] of natural gas. (c) Calculate the cost of the energy lost in the attic during one month.

4.6 With the data from Exercise 4.5(b) calculate the cost of the heat loss of the double glazed window in Exercise 4.4 over one month. Do the same for a single glazed window of 6 [mm].

4.7 The attic pipes of Exercise 4.5 are insulated with d = 2.5 [cm] glass fibre loose fill. (a) Calculate the heat loss of the hot pipe and the return pipe. Ignore radiation losses. For simplicity take the surface of the cylinder as a plane and ignore side effects. (Mathematically interested students may do it exactly, using the cylindrical form of the gradient operator, Appendix B, and deduce the quality of the approximation.) (b) Calculate the monthly costs now and compare with the results from Exercise 4.5.

4.8 Calculate damping depth xe and delay time τ for (a) a brick wall, a hard wood wall and a concrete wall for daily temperature variations, and (b) sand under annual temperature variations. In case (b) draw conclusions as to how deep a water pipe should be buried in the region where you live. In all cases approximate the temperature variations using Eq. (4.22).

4.9 (a) The temperature of the skin of the human hand appears to be 32 [°C]. The maximum contact temperature without damage to the finger is estimated as 45 [°C]. Calculate the temperature of hot iron that a human hand could handle, and also the maximum temperature of soft wood and of porcelain. (b) A child is licking the frost off an iron bridge railing with temperature −5 [°C]. The child's tongue has a temperature T1 = 37 [°C] and b1 = 1400 [J m−2 K−1 s−1/2]. Will the tongue freeze to the railing? What happens for a hard wooden railing? Note: if you ever see this happen, pour hot coffee on the tongue.

4.10 A cylindrical hot water tank (height 1 [m], radius 22 [cm]) is heated during the day from 20 [°C] to 80 [°C]. (a) Check that the heat capacity would be enough for two baths of 120 [L] at 50 [°C] or seven showers (40 [L]) at the same temperature, assuming the temperature of tap water is 20 [°C]. (b) The tank is insulated on all sides with d = 10 [cm] urethane foam. The air temperature is 10 [°C]. Estimate the loss by conduction for the first hour, assuming all insulated surfaces are flat (a very good approximation) and ignoring the temperature decrease with time. Look at your results and argue why this is an acceptable assumption. Use h = 10 [W m−2 K−1] for convection, ignore radiation.

4.11 A household in a mid-latitudes country is using 60 × 10^9 [J] of heat per year. In spring and autumn solar collectors provide the necessary heat. The surplus in summer is to be used in winter, say Q = 20 × 10^9 [J]. (a) Calculate the volume [m³] of hot water of 90 [°C] to be stored, supposing the start temperature is 20 [°C]. (b) Calculate the decrease in heat over the first day, using the method of Exercise 4.10 and increasing all sizes of the cylinder discussed there; take Tair = 10 [°C]. Note the difficulty of insulation over such a long period of time. (c) Consider phase change in a paraffin like
octadecane and (d) a hydrate like Na2SO4·10H2O. In cases (c) and (d) determine the volumes required with data from the text and compare with (a). Estimate the amount of heat [J] one needs to add to keep the octadecane or the hydrate at its temperature for 50 days. Note: hydrates with (much) higher fusion heats, up to a few [GJ/m³], are under investigation.

4.12 Derive Eq. (4.52).

4.13 Derive Eqs. (4.53) and (4.54) and argue that for isothermal processes the free energy tends towards a minimum and for isothermal, isobaric processes the Gibbs free energy will tend towards a minimum.

4.14 Sketch an isotherm in a pV diagram with two states of the system 1 and 2 on the isotherm. In loop A one goes from 1 to 2 and back reversibly; in loop B one goes from 1 to 2 irreversibly and back from 2 to 1 reversibly. From the Clausius inequality (4.38) it follows that in loop A the total entropy of system + surroundings does not change. For loop B the total entropy should increase. Show where the entropy increase takes place: in the system or in the surroundings.

4.15 Consider a hot reservoir with temperature TH and a cold reservoir with temperature TC. Take two heat engines with different efficiencies η1 ≠ η2, which both operate reversibly. One is working as in Figure 4.8a, while the other is put in reverse as a heat pump, Figure 4.8b. Show that one would be able to convert heat from the hot reservoir into work, while leaving the cold reservoir unchanged (a 'perpetuum mobile'). Draw a conclusion as to η1 = η2.

4.16 For the 'real' heat engine of Section 4.2.3 determine the entropy increase and where it is found.

4.17 The concept of available work is applied to storage of energy by compressing 1 mole of air (an ideal gas) adiabatically from atmospheric pressure and temperature p0, T0 and volume V1 to p2, V2, T2. (a) Argue why the initial state has B = 0 and (b) show that after compression

B2 = RT0 [r^((κ−1)/κ) − 1] / (κ − 1)    (4.17-1)

where κ = cp/cV and r = p2/p0, which is the initial compression ratio, and R is the universal gas constant. During storage the air will lose heat Q to the atmosphere, resulting in p3, T3 while keeping V3 = V2. (c) Use the relation ΔB = T0(ΔS)s+a for the loss of exergy and find ΔB/B2 in terms of T0, T2, T3. (d) Use κ = 1.4, cV = (5/2)R and r = 10 to verify that for complete cooling (T3 = T0) one finds ΔB/B2 ≈ 0.3. (NB: for this somewhat cumbersome calculation use that for adiabatic change pV^κ = constant.)

4.18 Explain why the efficiency (4.100) of the steam engine is not equal to the Carnot efficiency.

4.19 Prove the last equality of Eq. (4.111).

4.20 For students who have access to some simple plotting software: (a) plot the efficiency (4.107) of the Otto cycle as a function of the compression ratio r; (b) plot the efficiency of the diesel engine (4.111) for r = 10 and r = 25 as a function of the cut-off ratio rcf.

4.21 Use the data underneath Eq. (4.112), the definition of the refrigerating effect as H1 − H4 [kJ kg−1], and the fact that the refrigerating capacity of the 134a device
equals 50 [kW] (to be used in small industries) to calculate (a) the refrigerating effect [kJ kg−1], (b) the flow rate [kg s−1], (c) the compressor power [kW] and (d) the COP from the refrigerating capacity and the compressor power. Hint: make use of the units provided. What do you conclude for a domestic freezer with a compressor of 100 [W] electric power?

4.22 Make for yourself a little table with the compounds from Eq. (4.124), with ΔG and ΔH of formation at standard conditions and S. Check the decrease in Gibbs free energy for the lead acid battery of Eq. (4.124), using the tabulated values. Also check dG = dH − TdS − SdT from Eq. (4.46). Students in chemistry may also use data on standard reduction potentials to find the correct voltage of the battery.

4.23 Use Eq. (4.126) to calculate the minimum amount of energy in [kWh] required as input to produce 1 [kg] of hydrogen. Also calculate its volume under standard conditions and its energy content in [MJ L−1].

4.24 Consider three AC power lines with phases differing by 2π/3 in the following way: V0 cos ωt, V0 cos(ωt + 2π/3), V0 cos(ωt + 4π/3). Each of the lines has its return current to ground; the loads are organized such that these currents also are in phase: I0 cos ωt, I0 cos(ωt + 2π/3), I0 cos(ωt + 4π/3). (a) Show that the return currents cancel in the ground. (b) Estimate I0 such that the power delivered is the same as in the DC case. (c) Calculate the power loss in the three lines and compare with the DC case.

4.25 A HVDC line, made of copper with resistivity (or specific resistance) ρ = 1.7 × 10−8 [Ω m], has a length l = 800 [km] and a diameter of 2r = 22 [cm]. It is delivering 1000 [MW] of electricity at a grid point with a voltage of 1000 [kV]. Calculate (a) the electric resistance of the line; (b) the electrical current; (c) the heat loss in the cable, both in absolute value and as a percentage of the transmitted power; (d) the voltage difference ΔV between the beginning and the end of the line. Note: in practice, the resistance of the cable is higher and losses are in the order of 2.5 to 3%/1000 [km].

4.26 Write p = [CO], q = [CO2], r = [O2] with p + q + r = 1. Use Eqs. (4.137) and (4.138) to construct a table for [CO] with three values of α (2, 3.125 or 5) and four values of T (2000 [K], 3000 [K], 4000 [K], 600 [K]). Hint: you will find an equation of the third degree in p, which is easiest solved numerically.

4.27 A compact car (Af = 1.94 [m²], Cd = 0.30, M = 1160 [kg]) runs 16.7 [km] on 1 [L] petrol at a speed of 90 [km h−1]. Calculate the power Pd = Fd u against air drag and the power Pr = Fr u against road friction (Cr = 0.01). What fraction of the chemical energy in the petrol is used to combat both types of friction?

4.28 An electric car (Af = 2.08 [m²], Cd = 0.30, M = 1238 [kg]) has an engine of 215 [kWe]. It accelerates from rest to 60 [miles per hour] in 3.9 [s]. Calculate the power Pd = Fd u against air drag and the power Pr = Fr u against road friction (Cr = 0.01) at a constant speed of 60 [miles/hour]. Also calculate the power needed during the initial acceleration and compare.

4.29 A hybrid car with Af = 2.60 [m²], Cd = 0.30, M = 1345 [kg] has a petrol motor of 73 [kW] and an electromotor of 60 [kW]. It accelerates from rest to 100 [km/h] in 10.4 [s]. Its petrol consumption is 3.9 [L] per 100 [km]. Calculate (a) the power consumption due to friction (Eq. (4.143)) at 100 [km/h] and during acceleration (Eq. (4.144)), and (b) the fraction of chemical energy used when cruising at 100 [km/h].
4.30 A student gets a mortgage of € 100 000 (or $ 100 000). This has to be paid back in 25 years with an interest rate of 10%. Calculate the annual payment on the basis of annuity (= interest + repayment). What is the capital recovery factor?

4.31 A wind power plant has an investment cost I corresponding to € 1200 per installed [kW]. The money should be recovered in t = 15 years with an interest rate i = 0.08. Its rest value S = 0; operation and maintenance will cost a fraction O = 0.02 of the investment. The turbine is producing E [kWhe] of electricity annually and is operating on land 29% of the time. Give a formula for the cost per [kWh] and calculate its numerical value. Note: the result is very sensitive to the interest rate and the expected lifetime.

4.32 Check that C2n − Cn = −LCn using Eq. (4.155).

4.33 Plot the learning curve as given in Eq. (4.155) for learning rates L = 0.10 (Concentrated Solar Power) and L = 0.18 (Photovoltaic Power). Check that for CSP, in seven doublings of accumulated production the cost would fall by 50%.
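A levelized-cost calculation of the kind asked for in Exercise 4.31 can be sketched with the capital recovery factor of Eq. (4.152). The cost structure (annual capital cost plus O&M, divided by the annual production at a 29% capacity factor) is our reading of the exercise, not a formula from the text:

```python
def capital_recovery_factor(i, t):
    """Eq. (4.152): i(1+i)^t / [(1+i)^t - 1]."""
    return i * (1 + i) ** t / ((1 + i) ** t - 1)

# Data of Exercise 4.31, per installed kW.
I_per_kW, i, t, O, cap_factor = 1200.0, 0.08, 15, 0.02, 0.29

annual_cost = I_per_kW * (capital_recovery_factor(i, t) + O)  # [EUR per kW per year]
E_per_kW = 8760 * cap_factor                                  # [kWh per kW per year]
cost_per_kWh = annual_cost / E_per_kW                         # a few eurocents per kWh
```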
References

[1] Ferguson, E.S. (1971) The measurement of the man-day. Scientific American, 224, 96–103.
[2] Incropera, F.P. and DeWitt, D.P. (1990) Introduction to Heat Transfer, 2nd edn, John Wiley and Sons, New York, USA.
[3] Dunn, P.D. (1986) Renewable Energies: Sources, Conversion and Application, Peter Peregrinus, London.
[4] Baetens, R., Jelle, B.P. and Gustavsen, A. (2010) Phase change materials for building applications: a state-of-the-art review. Energy and Buildings, 42, 1361–1368.
[5] Zemansky, M.W. and Dittman, R.H. (1989) Heat and Thermodynamics, 6th edn, McGraw-Hill, Singapore.
[6] Atkins, P.W. (1995) Physical Chemistry, 5th edn, Oxford University Press, Oxford, UK.
[7] de Vos, A. (1992) Endoreversible Thermodynamics of Solar Energy Conversion, Oxford University Press, Oxford, UK.
[8] American Institute of Physics (1978) Efficient Use of Energy, Conference Proceedings No. 25, American Institute of Physics.
[9] Karlsson, S. (1990) Energy, Entropy and Exergy in the Atmosphere, thesis, Chalmers University of Technology, Göteborg.
[10] Stoecker, W.F. (1998) Industrial Refrigeration Handbook, McGraw-Hill, New York.
[11] Union for the Coordination of Production and Transmission of Electricity (1991) Half Yearly Reports I/II, Arnhem, the Netherlands.
[12] Siemens, www.energy.siemens.com/hq/en/power-generation/gas-turbines.
[13] Renewable Energy Institute, www.cogeneration.net/ThomasEdisonsCogenPlant.htm.
[14] Oxtoby, D.W., Gillis, H.P. and Nachtrieb, N.H. (1999) Principles of Modern Chemistry, 4th edn, Saunders, Fort Worth, USA.
[15] Wilson, J.R.W. and Burgh, G. (2008) Rational Energy Choices for the Twenty-First Century, John Wiley, www.tmgtech.com.
[16] US Department of Energy, www.hydrogen.energy.gov.
[17] International Energy Agency (2005) Variability of Wind Power and Other Renewables: Management Options and Strategies, International Energy Agency, Paris, France.
[18] Utrecht Centre for Energy Research (2006) Storage of Electricity (in Dutch), Utrecht Centre for Energy Research, Utrecht, Netherlands.
[19] Weinholdt, N. (2010) Smart supergrid. New Energy, 3, 20–23.
[20] European Union (1999) Air Quality: Facts and Trends, EU, DG XI.
[21] Bisio, A. and Boots, S. (eds) (1997) The Wiley Encyclopedia of Energy and the Environment, John Wiley, New York.
[22] Seinfeld, J.H. (1986) Atmospheric Chemistry and Physics of Air Pollution, John Wiley, New York; a more recent edition is Seinfeld, J.H. and Pandis, S.N. (1998) Atmospheric Chemistry and Physics: From Air Pollution to Climate Change, John Wiley, New York.
[23] World Health Organization (2005) Air Quality Guidelines, Global Update 2005: Particulate Matter, Ozone, Nitrogen Dioxide and Sulphur Oxide, World Health Organization, Copenhagen, Denmark.
5 Renewable Energy

Energy sources which are for all intents and purposes inexhaustible are called renewable. Most of them derive from solar radiation, which on earth amounts to about 120 000 [TW]. This may be compared with the human energy consumption of about 15 [TW]. In Section 5.1 methods are discussed by which solar energy can be converted into electricity. Conversion of solar radiation into hot water for domestic purposes in solar collectors was already discussed in Chapter 4 (Section 4.1.5) and will not be repeated here. In Chapter 3 we noticed that nature herself converts part of the solar energy into the kinetic energy of the winds. In Section 5.2 it is shown how and to what extent wind energy can be converted into electricity. Solar radiation is responsible for the evaporation of water, resulting in clouds, which the winds redistribute over the earth. After raining out, the water flows to the lowest position. How the resulting hydro energy can be harnessed is shown in Section 5.3. The power from the waves or from the tides is also briefly discussed there. Finally, plants, algae and certain bacteria convert solar energy into high-energy chemical compounds, which are used by the organism to grow, maintain itself and multiply. This photosynthetic process captures and consumes about 150 [TW] and is the subject matter of the last sections of this chapter. The content of these sections is seldom discussed at textbook level, which is one reason to devote ample space to them. The other reason is the increasing importance of bio-based products, like biomass and biofuels, to the energy supply of many countries. The structure of the bio sections in this chapter is as follows. First, in Section 5.4 a simple treatment of photosynthesis is given from a thermodynamic point of view. Next, in Section 5.5 the physics of photosynthesis is discussed more broadly, with due attention to the microscopic processes which play a role.
Interesting is the development of organic materials that perform the PV conversion and may be applied as coatings on an ordinary glass window. They are often called Grätzel cells after their inventor (Section 5.6). Finally, in Section 5.7 the possibilities of bio solar energy are explored.
Environmental Physics: Sustainable Energy and Climate Change, Third Edition. Egbert Boeker and Rienk van Grondelle. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd.
The use of geothermal heat, which for all practical purposes can be called renewable, is not discussed here, as, except for its use in heat exchangers, no new physics is involved.
5.1 Electricity from the Sun
The solar radiation used as input for the generation of electricity depends on time and place. In Section 5.1.1 we will derive some simple formulas for this time dependence, assuming a circular orbit of the sun around the earth. A conceptually simple method to produce electricity is to use solar heat to drive a heat engine, whose kinetic energy is then converted into electric power. A modern design called CSP (Concentrating Solar Power) focuses incoming solar radiation onto a heat engine. This will be discussed in Section 5.1.2. The direct conversion of solar photons into electric power is called PV (Photovoltaic). Here solid-state physics is applied to find efficient and, in particular, cheap ways for this conversion. PV systems are widely in use and are improving continuously (Section 5.1.3).

5.1.1 Varying Solar Input
The orbit of the earth around the sun already was sketched in Figure 3.20. In Figure 5.1 the sun is observed from the earth in its motion through the skies. The plane of the solar orbit, the ecliptic, is drawn as the ‘horizontal’ plane. The normal MT to the ecliptic is in the plane of the drawing. The earth axis between South Pole and North Pole N is also in the plane of the drawing and is perpendicular to the equatorial plane of the earth. The intersection of both planes is called MV. The line MT is perpendicular to the ecliptic and therefore perpendicular to MV. The line MN is perpendicular to the equatorial plane and also perpendicular to MV. It follows that MV is perpendicular to the plane of the drawing. The earth is represented by the sphere around the centre M. The point S in the figure is the position where the line between M and the far away sun crosses the earth’s surface. The daily rotation of the earth is represented by a motion of the points of the earth along a circle around the axis. The sphere itself is not moving.
Figure 5.1 Annual motion of the sun S viewed from the centre M of the earth.
Renewable Energy
In the course of a year the sun moves from V (the beginning of spring) to C and around the circle via D back to V. Its position on the circle is indicated by an angle α. The position of the sun may also be indicated on the meridian connecting the sun and the poles. The declination δ is the angle between the sun and the position A, where the meridian crosses the earth's equator.

Keeping the sun S fixed, consider a point H on the meridian. That point on earth has a daily motion indicated by the circle around the polar axis; at a certain moment it is located at K. At K the zenith is given by the line MK. The angle between MK and MS indicates how close the sun is to the zenith. From Figure 5.1 it is clear that this angle is smallest if K = H. Consequently, when it is noon at H, the sun is at its highest point, which is in the south direction. At S it is noon as well and the sun is at the zenith; the declination δ may thus be found as the latitude at which the sun is at the zenith at noon.

Finally, consider the situation where the sun is at point V, the beginning of spring. We already saw that MV is perpendicular to the plane of the drawing FDBCMT; the angle DMV is 90°. In the beginning of spring all solar rays are parallel to MV and therefore perpendicular to the plane of the drawing. It then is sunrise at P and sunset at Q. The earth rotates from P to Q in precisely 12 hours. Therefore on the arbitrary circle PKHQ, and thus at any point on earth, day and night are equally long (Exercise 5.2).

This excursion was necessary to determine the solar influx at the arbitrary location K, indicated in Figure 5.1, as a function of time. We ignore the motion of the sun along the ecliptic over 24 [h] and assume circular motion of the sun along the ecliptic. We consider the situation N days after the start of spring. The angle α is then found as

α = 2πN/365.24   (5.1)
On a sphere the following sine rule applies to the triangle A, S, V:

sin α / sin(π/2) = sin δ / sin ε   (5.2)

from which it follows that

sin δ = sin ε sin(2πN/365.24)   (5.3)
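Equations (5.1) and (5.3) are easy to evaluate numerically. The following sketch (my own illustration, not from the text) computes the declination N days after the start of spring, taking ε = 23.44°:

```python
import math

EPSILON = math.radians(23.44)  # obliquity of the ecliptic, epsilon
YEAR = 365.24                  # length of the year in days

def declination(n_days):
    """Solar declination delta [rad], N days after the start of spring,
    in the circular-orbit approximation of Eqs. (5.1) and (5.3)."""
    alpha = 2.0 * math.pi * n_days / YEAR                  # Eq. (5.1)
    return math.asin(math.sin(EPSILON) * math.sin(alpha))  # Eq. (5.3)

# delta runs from 0 at the spring equinox up to +epsilon at the summer
# solstice (N ~ 91) and down to -epsilon at the winter solstice (N ~ 274).
print(math.degrees(declination(91.31)))   # close to +23.44
print(math.degrees(declination(0.0)))     # 0.0
```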
Eq. (5.3) shows that the declination δ varies between +ε and −ε over the year.

The insolation at K (Figure 5.1) on a horizontal plane is determined by the solar influx at S multiplied by cos β, where β is the angle between the vectors MS and MK. The vector MK experiences a daily rotation along the parallel circle, where at noon it coincides with H. The vectors are found from Figure 5.2, where the meridian circle through H and S is drawn, as well as two unit vectors e1 and e2 defining the positions on the circle. A third unit vector e3 = e1 × e2 is needed to define point K. The position of S follows from Figure 5.2:

MS = R sin δ e1 + R cos δ e2   (5.4)
The distance from H and K to the earth's axis equals R sin(π/2 − λ) = R cos λ. The vector from M to the centre of the circular plane through K and H remains fixed at R sin λ e1. The daily motion of K is determined by an angular frequency ω = 2π/(24 × 60 × 60) [s−1]. The position of
Figure 5.2 The meridian plane through the sun and the earth's axis. The insolation at an arbitrary position and time is calculated by using the unit vectors e1, e2, e3 (the latter not shown), which are fixed in space.
K becomes

MK = R{sin λ e1 + cos λ (e2 cos ωt + e3 sin ωt)}   (5.5)
For t = 0 this gives the position of H; it therefore indicates noon. From Eqs. (5.4) and (5.5) one finds

cos β = (MS · MK)/(|MS||MK|) = sin λ sin δ + cos λ cos δ cos ωt   (5.6)
The times of sunrise (−T) and sunset (+T) are given by cos β = 0, leading to

cos ωT = −tan λ tan δ

For λ = π/2 − δ and for λ = −π/2 − δ one finds cos ωT = −1, which gives T = ±12 [h]: the sun never sets. For the half year with δ > 0 this corresponds to the Arctic summer, and for the other half year, with δ < 0, to the Antarctic summer.

The total amount of sunlight A(λ, δ) received during a day on a horizontal surface is found by integrating Eq. (5.6) from sunrise to sunset (or, in the Arctic and Antarctic regions, over 24 hours for the appropriate part of the year). This result has to be multiplied by the solar irradiance S. The results are shown in Figure 5.3 as a function of the day of the year, obtained by varying N in Eq. (5.3). For ease of comparison the result is presented in units of 1 hour of perpendicular sunlight, which is 3600 × S [Jm−2]; in this way the value of S drops out. In reality, Figure 5.2 shows that at higher latitudes sunlight has to penetrate a thicker layer of the atmosphere, leading to more absorption. So, despite the peaks for high latitudes in Figure 5.3, even in summer the surface at the equator will receive more sunlight than the poles.

Figure 5.3 Insolation on top of the atmosphere integrated over a day, as a function of the time of the year and for several latitudes on the Northern Hemisphere. It is expressed in the amount of solar radiation incident during one hour on a plane perpendicular to the solar rays: 4.9 × 10^6 [Jm−2].

Let us look again at point K of Figure 5.1. If one wants to receive as much solar energy as possible on a given surface on earth, one should slant it so that its normal faces the sun. The incoming direct solar radiation on a surface tracked perpendicular to the sun's rays is called DNI, direct normal irradiation. The rest of the incoming radiation is scattered by air particles, water vapour and clouds. Diffuse radiation provides daylight, even on cloudy days.

The simplest solar device, on top of a dwelling or small building, would use a flat surface that faces the sun at noon, with the angle to the zenith fixed so as to get the highest yield when the solar radiation is needed most. If home heating is the objective, one points the normal of the surface to an average winter position of the sun; if one wants solar energy for cooling, the normal should point to the average summer position.

For better results one could change the orientation of the surface during the day, keeping it towards the sun. From Eq. (5.6) this would mean keeping cos ωt = +1 instead of letting it vary from 0 to +1 and back to 0 again. The time average of cos ωt equals 2/π ≈ 2/3, so tracking amounts to a gain of a factor of about 1.5. With sophisticated solar tracking one could also correct for the variation of the declination δ with the seasons, which may be cost-effective for medium- and large-scale applications. It must be stressed that tracking the sun only influences the amount of direct solar radiation received, not the indirect, diffuse radiation, which may be considerable: the fraction of diffuse radiation in the total varies from 20% in desert areas to 60% in cities like London. When mirrors are used to concentrate solar radiation, they will only focus the parallel, direct solar rays and not the diffuse radiation. The factor of 1.5 estimated above for daily solar tracking is, therefore, too optimistic.

5.1.1.1 Smart Buildings
Much effort is being spent on reducing the need for heating and cooling by special design of buildings. New buildings often are well insulated, keeping the heat inside during wintertime; this is combined with ways to use the direct heating from sunlight that irradiates the buildings anyway. In summer, special glazing and 'sun curtains' may keep the incoming heat low. Use of special low-wattage lighting should keep the heat produced inside buildings at a low level.
Figure 5.4 Examples of smart design. On the left a balcony permits solar radiation to enter during wintertime, while in the summer during midday the balcony keeps the solar heat out of the building. On the right a wind tower is constructed such that the air speeds up on top. This generates low pressure that causes a gentle draft in the living quarters below. (Reprinted from Renewable and Sustainable Energy Reviews, Gallo, The utilization of microclimate elements, 89–114, fig 6, pg 93, Copyright 1998, with permission from Elsevier.)
Smart design has a rich tradition dating back to the time before computers or electricity. Figure 5.4 (left) shows how a balcony can make use of the sun in winter, while blocking incoming light in the summer. Figure 5.4 (right) shows a wind tower, used in (semi-)tropical countries. Wind is allowed to pass through the top of the building, causing low pressure at the top, which creates a gentle draft into the living quarters without any need for electric fans or air-conditioning [1]. The optimal way of building will depend on the circumstances and the optimization of costs versus benefits. Orienting a building such that the roof faces the south will make maximal use of solar radiation on solar panels or solar heat collectors. If one uses coatings on glass windows to generate electricity by so-called Grätzel cells (Section 5.6) it will make sense to have the glass windows facing south.

5.1.2 Electricity from Solar Heat: Concentrating Solar Power (CSP)
In concentrating solar power (CSP) mirrors concentrate incoming solar radiation onto a fluid, which is heated, drives a turbine and produces electricity [2]. There are four mirror systems in use, of which the most popular one uses parabolic trough mirrors (Figure 5.5). The system rotates around a horizontal axis to track the 'up and down' movement of the sun during the day. Because of the parabolic shape, the rays are concentrated in a focal line with absorbers, through which synthetic oil flows. Just as in solar collectors, the surface of the absorbers is coated so that it absorbs light in the solar spectrum but does not emit in the infrared (Section 4.1.5). The heated oil passes a heat exchanger where water is heated, evaporated and superheated. When there is sufficient demand for electricity the superheated steam is used to drive a turbine, which then produces electric power. If there is no demand the heat energy of the steam is stored. The storage system is designed to supply electricity during the evening, that is, short-term heat storage (Section 4.1.7). The system usually consists of two tanks with molten salts, a cold one and a hot one. Excess heat warms the salt, which moves from
Figure 5.5 Sketch of a parabolic trough system. The parabolic mirrors concentrate the incoming radiation in a focal line, where absorber tubes heat passing oil. Hot oil is indicated by a heavy black line, cold oil by a light grey line [3].
the cold tank to the hot one. During the evening, when the sun is not shining, heat from the hot tank may be used to supply electricity. Most CSP systems have the possibility to use fossil-fuel heating to drive the turbines when the storage system runs out. The three main components of a CSP system are the solar field, which may be of the order of 1 [km2], the storage system and the turbine. For a given solar field the capacities of storage and turbine may be tailored to meet the needs. In Table 5.1 four different configurations are shown. With a small storage system, one may produce electricity during the daytime and a little after; with a medium-sized storage system the production may be shifted into the evening. Both configurations are suitable to supply electricity for the intermediate load of the grid (Section 4.4.1). Because investment costs are the decisive factor for CSP installations (free fuel, and moderate operation and maintenance costs), the investment cost will largely determine the price of the supplied electric power.
Table 5.1 Four different configurations of CSP plants for a given solar field size. Based on [2], p. 14/15.

Storage system   Turbine    Production times [hours]/load   Investment cost
Small            250 [MW]   8–19/intermediate               Low
Medium size      250 [MW]   12–23/delayed intermediate      Medium
Large            120 [MW]   0–24/base load                  Medium
Large            620 [MW]   11–15/peak load                 High
In Table 5.1 two large storage systems are envisioned, one with a small turbine of 120 [MW] and one with a large turbine of 620 [MW]. The former can supply electricity all 24 hours and is suitable for the base load, while the latter pushes out all stored power during the four afternoon hours when the demand for electricity, and consequently also the price, is highest. A utility that is going to build a CSP installation needs a forecast of the electricity demand as a function of time in order to decide what to build. It must be added that at the time of writing CSP systems produce less than 80 [MW], but larger systems are being developed.

The price of CSP electricity, and thereby its economic perspective, will depend on the direct normal irradiation DNI. This value is highest, of the order of 2000 to 2600 [kWh m−2 yr−1], in desert areas. The levelized electricity cost is the cost including repayment and interest (Section 4.7). For the highest DNI value the electricity cost was around $0.20/[kWh] in 2010. In Section 4.7.2 we discussed that future costs may be calculated with a learning curve. We defined the learning rate L, which is the fraction by which the production costs go down when the accumulated production has doubled. This is represented by

Cn = C1 n^(−a)   with a = −ln(1 − L)/ln 2   (4.155)
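As a numerical illustration of Eq. (4.155) (my own sketch, with illustrative numbers), the cost of the nth unit for a given learning rate:

```python
import math

def learning_curve_cost(c1, n, learning_rate):
    """Cost of the n-th unit by Eq. (4.155): Cn = C1 * n**(-a),
    with a = -ln(1 - L)/ln 2, so that each doubling of cumulative
    production lowers the cost by the fraction L."""
    a = -math.log(1.0 - learning_rate) / math.log(2.0)
    return c1 * n ** (-a)

# With L = 0.10 each doubling of cumulative production cuts cost by 10%:
c1 = 1.0
print(learning_curve_cost(c1, 2, 0.10))   # 0.90 * c1
print(learning_curve_cost(c1, 4, 0.10))   # 0.81 * c1
```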
Here C1 is the production cost of the first unit and Cn the production cost of the nth unit (see Exercises 4.31 and 4.32). For CSP one assumes L = 0.10 and an increasing production of CSP installations. In this case the price of electricity production by CSP would go down to $0.10/[kWh] in 2020 and $0.05/[kWh] in 2030 ([2], p. 29). The cost of transmission of power to the consumer has to be added to these values.

Deserts and arid regions with high DNI values are unevenly distributed over the globe. In North America they are concentrated in Southern California, from which HVDC lines might transmit CSP power to the eastern USA. In Europe only the south of Spain has a high DNI value, as have the desert regions of North Africa. One envisions HVDC lines from these places to the industrialized regions in Europe. Another drawback of locations in desert regions is that they have hardly any clouds, humidity or rainfall. Cooling water required for the steam condensers is therefore in short supply, and its expense will add to the price. Improved cooling with little or no use of water, possibly combined with other mirror systems, is under investigation.

5.1.3 Direct Conversion of Light into Electricity: Photovoltaics (PV)
Direct conversion of solar radiation into electricity, that is, without the intermediary of heating steam or oil, is practised in solar cells, which consist of semiconductor materials. This method is called PV or photovoltaics, as photons are directly converted into a potential difference [V]. Solar cells are connected into modules that may operate independently of each other, but usually are connected in parallel or in series. As Si is the material most widely in use (85 to 90% in 2010) we use this material to discuss the principles of PV, using some shortcuts to arrive at the essential Eq. (5.10). For a detailed derivation, see refs [4], [5], [6]. A discussion of other PV cells is given in [7], [8].

The top left of Figure 5.6 shows the two-dimensional representation of a pure Si crystal. In three dimensions it has a tetrahedral structure where each atom is connected by a covalent
Figure 5.6 The top picture (a) gives a two-dimensional representation of a Si crystal. Each atom is connected to its neighbours by four electrons; in picture (b) one Si atom is replaced by an atom with five valence electrons, resulting in an extra electron, which is called a donor impurity; in picture (c) a Si atom is replaced by an atom with three valence electrons, the missing electron is described as a hole: an acceptor impurity. In the bottom half of the figure the energy levels are indicated for the three cases, where at temperature T = 0 the conduction band is empty and the valence band is full. In n-type Si a donor level close to the conduction band is indicated by a dashed line. It is so weakly bound that its electrons, indicated by black dots, may move freely through the conduction band. Similarly in p-type Si an acceptor level is indicated and holes move freely through the valence band.
bond to four of its neighbours. In Figure 5.6 this is indicated by four valence electrons on the middle atom, which connect with the neighbouring atoms. These atoms each use one of their valence electrons to connect with the middle one. The figure is supposed to extend to all sides.

The bottom left of Figure 5.6 shows the energy levels of the crystal. They are organized in two bands with an energy gap in between. Within a band all energies are allowed. At temperature T = 0 the lowest band, called the valence band, is completely filled with the valence electrons of all atoms. At higher energies an energy gap with magnitude Eg forms a forbidden zone; at still higher energies follows the conduction band. Here at T = 0 no electrons are present. At higher temperatures the thermal motion of the atoms will put a few electrons from the valence band into the conduction band, giving a little electric conductivity, hence the name semiconductor. In general the probability f(E) that an electron occupies an orbit with energy E is given by the Fermi–Dirac distribution function

f(E) = 1/(e^((E−μ)/kT) + 1)   (5.7)
Here E = μ is called the Fermi level, where the occupation probability is just 1/2. In Figure 5.6 the Fermi level would be in the forbidden energy gap; nevertheless Eq. (5.7) gives the correct behaviour for the two bands. In Si one has Eg = 1.12 [eV]; other PV materials have gaps between 1 and 1.5 [eV]. Because the thermal motion at room temperature is determined by kT ≈ 0.025 [eV] one may approximate Eq. (5.7) by

f(E) ≈ e^(−(E−μ)/kT)   (5.8)
which is called a Boltzmann tail, as it has the behaviour of the Boltzmann distribution, which is valid for a continuous distribution of available energies (2.25). Taking E − μ = Eg/2 for the bottom of the conduction band one finds f(E) ≈ 10^−10, which again indicates that the density n of the free electrons will be extremely small. In Si at room temperature one has n ≈ 1.5 × 10^16 [m−3]. Note that each electron that jumps to the conduction band leaves a hole behind. This hole may readily be occupied by another valence electron, of which there are many, the new hole again filled up, and so on. Consequently the holes move around as freely as the electrons. In an electric field they would move opposite to the free electrons; they behave like electrons with a positive charge.

In Figure 5.6b one of the Si atoms is replaced by an atom with five valence electrons such as Sb, P or As. There is an electron 'too many' in the crystal, called a donor impurity. Because of the availability of mobile negative charges this is called n-type Si. The donor electron (charge −e) is bound in the Coulomb field of the extra atom with uncompensated charge +e. This resembles the hydrogen atom, except that in this case the electron moves in a dielectric medium (εr ≈ 11.7); it also has an effective mass m* ≈ 0.2me due to its easy movement through the periodicity of the crystal. These data result in weak binding, indicated by the dashed line a little underneath the conduction band in the bottom figure. Its calculated binding energy is 0.02 [eV] below the beginning of the conduction band; the experimental value depends on the impurity and is of the order of 0.04 [eV] ([5], pp 579–580). The thermal motion with kT ≈ 0.025 [eV] is able to put many of these electrons in the conduction band, where they are drawn as dots.

In Figure 5.6c a Si atom is replaced by an atom with three valence electrons, for example B, Al or Ga, resulting in a missing electron or a hole 'too many', shown by a dashed line.
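(The Boltzmann-tail estimate quoted above is easy to verify; a small sketch of my own, taking Eg = 1.12 [eV] and kT = 0.025 [eV]:)

```python
import math

def fermi_dirac(e_minus_mu, kT=0.025):
    """Occupation probability of Eq. (5.7); energies in [eV]."""
    return 1.0 / (math.exp(e_minus_mu / kT) + 1.0)

def boltzmann_tail(e_minus_mu, kT=0.025):
    """Approximation (5.8), valid when E - mu >> kT."""
    return math.exp(-e_minus_mu / kT)

# Bottom of the conduction band in Si: E - mu = Eg/2 = 0.56 [eV]
print(fermi_dirac(0.56))      # ~ 2e-10, the 'extremely small' value of the text
print(boltzmann_tail(0.56))   # practically identical at this energy
```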
This hole may accept an electron from the next atom, hence the name acceptor impurity, resulting in a hole in the valence band. Because the availability of mobile positive charges determines its behaviour this is called p-type Si. The acceptor hole, like the donor electron, is only loosely bound; its energy level is indicated by the dashed line just above the valence band, and the holes may move around freely in the crystal.

For PV applications one needs Si which is very pure: less than 1 : 10^9 uncontrolled impurities. The reason is that impurities (like crystal defects) cause unwanted recombination of electrons and holes, which then are not available for the PV process. To the pure Si, donor impurities are added, resulting in n ≈ 10^25 [m−3] of free electrons in n-type Si. There still are electrons and holes due to thermal motion; their concentration equals n ≈ 10^16 [m−3]. The amount of thermal electrons is negligible compared to the electrons
Figure 5.7 On the left an n–p junction is sketched. Without light and with the electrodes disconnected, electrons diffuse to the p-side, where they are in short supply, and holes move to the n-side, producing a diffusion current I0. The resulting charge density is indicated and the corresponding electric field E is shown. This field causes a field current of minority carriers −I0.
due to the donor impurities. The holes do play a role, however, as we shall see. They are called minority carriers, while the electrons are called majority carriers. Alternatively, acceptor impurities are added, resulting in n ≈ 10^22 [m−3] of free holes in p-type Si. In this case the holes are the majority charge carriers. The electrons, because of their low concentration n ≈ 10^16 [m−3], now are the minority carriers.

After these preliminaries we consider a solar cell in the form of a round wafer, consisting of a thin layer (≈1 [μm]) of n-type Si on top of a thicker layer (≈100 [μm]) of p-type Si. Light can enter through the thin layer and reach the junction, the boundary between both layers. In Figure 5.7 the situation is sketched; two electrodes are connected to the n- and p-sides with an applied potential difference V as indicated.

Consider first a situation without light and with the electrodes disconnected. Near the junction between p and n there is a surplus of electrons at the n-side compared to the p-side. In Chapter 7 we shall discuss that any concentration gradient causes diffusion from places with high concentration to places with lower concentration. In this case the surplus at the n-side causes diffusion of electrons from n to p. On the p-side these electrons add to the minority carriers and eventually recombine with holes. Similarly, positive holes at the p-side will move from p to n and eventually recombine with electrons. The diffusion of positive and negative charges together forms a diffusion current, which is also called a recombination current because of the eventual recombination of electrons and holes. The diffusion results in a density of positive charges at the n-side and negative charges at the p-side, as indicated in Figure 5.7. It follows that an electric field E and a potential difference Δϕ are built up over the junction.
This E field counteracts further movement of the majority carriers when the counteracting potential energy eΔϕ becomes of the order of kT. However, in this E field, directed from n to p, the minority carriers will move: electrons from the p-side to the n-side and holes in the other direction. This will happen in any field in the direction from n to p as soon as the particles or holes are in the vicinity of the junction; therefore, to first order, the resulting current will not depend on the
magnitude of E. At equilibrium the field current −I0 from the minority carriers will be equal in magnitude but opposite in direction to the remaining diffusion current I0; the total current will be I0 − I0 = 0. The field current is also called the generation current, as it originates from electrons and holes generated by thermal motion.

We will use two plausibility arguments as a shortcut to arrive at Eq. (5.9). Remember that the electrodes in Figure 5.7 still are disconnected and we are working at equilibrium. Because of the mentioned competition between eΔϕ and the thermal motion kT one would expect that the concentration of electrons and holes contributing to the diffusion current will follow a Boltzmann relation ∼e^(−eΔϕ/kT). Consequently the current behaves like I0 ∼ e^(−eΔϕ/kT). We now apply a potential V between the p-side and the n-side, as shown in Figure 5.7, a so-called forward bias. This will lower the counteracting potential from Δϕ to Δϕ − V and it is plausible that the diffusion current increases from I0 ∼ e^(−eΔϕ/kT) to ∼e^(−e(Δϕ−V)/kT). The field current is not influenced, as discussed above. This leads to a diode current

ID = I0 e^(eV/kT) − I0   (5.9)
This current runs in the direction of the diffusion current in Figure 5.7, which corresponds with the signs of the applied potential V. If that potential were negative, the barrier to the diffusion current would increase, quenching the current. The pn junction only lets current pass in the direction from p to n; hence the name diode current.

Finally, consider a photon entering the solar cell on the thin side. Let its energy be E > Eg, so that it creates an electron–hole pair near the junction. This may be at either side, but assume that it happens on the n-side. The extra electron may be ignored, as there are many of them already. The extra hole, however, gives an extra current Is from n to p, the direction opposite to the diode current (5.9). If pair creation had occurred at the p-side, the extra hole could be ignored, but the extra electron would move from p to n, inducing a current from n to p. In both cases the solar-induced current Is is opposite to the diode current and the total current becomes

I = ID − Is = I0(e^(eV/kT) − 1) − Is   (5.10)
We shall see that this current is negative. Effectively it runs opposite to the diode current and from + to − through the voltage V. This voltage is now understood as the external load of the photocell.

The solar-induced current Is can be estimated roughly by assuming an insolation of 1000 [Wm−2] and that all photons with energies higher than the energy gap produce an electron. One then finds (Exercise 5.7) Is ≈ 400 [A m−2], which is in the range of experimental values [8]. Compared with this value the field current is very small; we take a value I0 = 3.3 × 10^−8 [A m−2]. For room temperature one has kT ≈ 0.025 [eV]. With these values the IV curve of a typical Si photocell is drawn in Figure 5.8. For V = 0 one finds I = −Is; for V → ∞ the curve rises very steeply because of the exponential function. The value V = Voc for which I = 0 can be calculated easily
Figure 5.8 IV diagram for a typical Si solar cell with I0 = 3.3 × 10^−8 [A m−2], Is = 400 [A m−2], kT = 0.025 [eV].
from Eq. (5.10):

0 = I0(e^(eVoc/kT) − 1) − Is
e^(eVoc/kT) = Is/I0 + 1
Voc = (kT/e) ln(Is/I0 + 1)   (5.11)

5.1.3.1 Efficiency
The efficiency η of a solar cell is defined by

η = output power [Wm−2] / incoming radiation [Wm−2]   (5.12)
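Equations (5.10)–(5.12) can be evaluated directly with the parameters of Figure 5.8. The following sketch (my own, not from the text) computes the open-circuit voltage and scans the IV curve for the maximum-power point:

```python
import math

# Parameter values of Figure 5.8
I0 = 3.3e-8   # field (saturation) current [A m^-2]
IS = 400.0    # solar-induced current [A m^-2]
VT = 0.025    # kT/e at room temperature [V]

def current(v):
    """Total current of Eq. (5.10); negative values mean power is delivered."""
    return I0 * (math.exp(v / VT) - 1.0) - IS

# Open-circuit voltage, Eq. (5.11)
v_oc = VT * math.log(IS / I0 + 1.0)

# Scan the IV curve for the maximum-power point (rectangle M in Figure 5.8)
best_v, best_p = 0.0, 0.0
v = 0.0
while v < v_oc:
    p = -current(v) * v          # delivered power density [W m^-2]
    if p > best_p:
        best_v, best_p = v, p
    v += 1e-4

print(round(v_oc, 3))    # about 0.58 [V]
print(round(best_v, 2))  # about 0.50 [V]
print(round(best_p))     # about 192 [W m^-2], i.e. ~19% of 1000 [W m^-2]
```

With these parameters the maximum power is delivered near V ≈ 0.5 [V], consistent with point M in Figure 5.8.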
The power output P for a point M on the diagram of Figure 5.8 is given by

P = IV [Wm−2]   (5.13)

and is represented by the surface area of the rectangle indicated in the IV diagram. The maximum output is achieved for a voltage smaller than V = Voc; for the parameters given one finds Voc = 0.58 [V], while the maximum power is found for V = 0.5 [V] and amounts to 192 [Wm−2] (Exercise 5.8). The insolation of the cell was assumed to be 1000 [Wm−2]; the efficiency of the cell therefore would be 19%.

The efficiency of real solar cells is measured under standard conditions of 25 [°C] and an insolation of 1000 [Wm−2] at ground level, after the solar rays have passed an atmosphere which is 1.5 times as thick as the real one. This corresponds to the oblique angles of incidence at mid latitudes. The absorption results in a spectrum which is even more structured than the one given in Figure 2.2 at sea level and happens to be more favourable to the PV effect than the black-body spectrum (2.8). The best efficiency measured for a Si solar cell such as described here is 25% [8]. Figure 5.8 with the parameters given there, however, gives a reasonable representation of present-day commercial Si cells.

Part of the incoming solar radiation is converted into heat. This happens for the part of the radiation with E < Eg. But electrons created at energies E > Eg soon lose their excess
energy as heat within the conduction band and only leave the band when their energy E = Eg. One may calculate (Exercise 5.10) that the energy lost as heat would be around 50%.

5.1.3.2 Costs
The decisive factor for large-scale use of PV is the cost per [kWh]. The efficiency of the individual solar cell does play a role, but a less efficient cell may lead to a lower [kWh] price, depending on its production costs. Moreover, a solar cell is always part of a greater device. In Figure 5.9 one can see a solar panel, which many people have on their roofs, but also a solar array, as used by utilities. Contrary to CSP, PV will also use diffuse solar radiation for electricity generation, but here too a high irradiation and cloudless skies will improve the output and reduce the [kWh] price. In 2008 the cost of solar PV in such an ideal situation with large-scale solar arrays was $0.24/[kWh] ([2], p. 9). In residential applications with panels only, the price was higher and, depending on insolation, was between $0.36 and $0.72/[kWh].

In many countries utilities are forced to accept that their customers generate their own electricity and may feed their surplus, if any, into the grid. The customers then see their [kWh] meters going backwards. In this case the client will compare his or her PV price with the utility price, which includes transportation, taxes and profits. In some sunny countries a personal PV installation then may be competitive. In remote places, far from the grid, PV even now provides a good alternative to dirty and noisy diesel generators. In most private dwellings, however, the roof space is too small to provide enough [kWh] for personal use. Therefore the utilities' cost of PV will be decisive for wide-scale application of PV. That cost must be compared with the [kWh] price of coal, which is of the order of $0.04 to $0.06/[kWh], but of course may increase when resources begin to run out.

Future costs may be estimated by using the learning rate experienced in the past. For the solar cell this appears to be between L = 0.15 and L = 0.22 (see Section 4.7.2 for the definition of learning rate and Eq. (4.155) for its application to CSP).
For the total installation one assumes L = 0.18 ([2], p.18). If sufficient solar installations are sold one expects that in 2050 the utility price of solar electricity will have dropped to $ 0.06 to $ 0.09/[kWh].
Figure 5.9 The photovoltaic hierarchy: solar cell, module, panel, array. For domestic use panels will suffice; utilities will need large arrays. (Reproduced from Solar Electricity, Markvart, pg 79, fig 4.2, with permission from John Wiley & Sons, Ltd.)
Finally, it should be mentioned that PV systems deliver DC currents. Therefore, when connected to the grid one will need DC–AC converters, and when used as a stand-alone device one will need a set of batteries as storage.
5.2 Energy from the Wind
The wind has been used since ancient times to sail ships and to drive windmills. An example of a windmill for polder drainage with a maximum power of about 30 [kW] is shown on the left of Figure 5.10. A modern device for harvesting the wind is shown on the right; it is called a wind turbine and may have a power of 5 [MW] or more. In the nacelle one finds a gearbox and a generator, which converts the rotation of the blades into electric power. The nacelle and blades as a whole can rotate around a vertical axis in order to find the optimal orientation to the wind. The swept area is the circle around the tip of the blades, indicated on the right in Figure 5.10. The power P present in the wind equals the kinetic energy of the air mass m [kg m^-2 s^-1] passing 1 [m^2] per second in the direction of the wind velocity u. Figure 5.11a illustrates that m equals the mass in a 'bag' in the direction of u and with length u; one finds m = ρu, where ρ is the density of air. The power P becomes mu^2/2 or

P = (1/2) ρ u^3 [W m^-2]    (5.14)
which increases with the third power of the wind velocity. It is therefore advantageous to build wind turbines on windy locations and high in the air where, as we shall see, the wind velocity is high.
Figure 5.10 On the left an old fashioned windmill used for polder drainage is shown. On the right a modern wind turbine is sketched where the circle indicates the swept area.
Figure 5.11 The left hand side (a) illustrates Eq. (5.14): all particles in the ‘air bag’ shown pass the perpendicular surface A with an area of 1 [m2 ] in 1 [s]. Figure 5.11(b) shows the flow through a turbine with surface area A. The curves indicate the outer streamlines.
If the wind velocity u makes an angle β with the normal of the unit area in Figure 5.11a, then the volume of the 'air bag', and consequently the energy of the passing air, has to be multiplied by cos β < 1. The air density ρ in Eq. (5.14) is strongly dependent on temperature and pressure. One therefore replaces the density ρ by

ρ = p/(RT)    (5.15)
Here Eq. (3.6) was used, where p [Pa] denotes the atmospheric pressure at the location of measurement and T [K] the absolute temperature; R is the specific gas constant for dry air given in Appendix A. In the following sections we will discuss a physical limit on the energy which one can tap (Section 5.2.1), the aerodynamics of the blades of a turbine (Section 5.2.2) and the variation of the wind velocity in space and time (Section 5.2.3). Finally we turn to modern turbines and their development (Section 5.2.4).
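Eqs. (5.14) and (5.15) together give the available power density; a minimal sketch in Python (the default p and T are illustrative sea-level values, and R ≈ 287 [J kg^-1 K^-1] is the standard value for dry air, since Appendix A is not reproduced here):

```python
# Wind power density from Eqs. (5.14) and (5.15).
R_DRY_AIR = 287.0  # specific gas constant for dry air [J kg^-1 K^-1]

def air_density(p, T):
    """Eq. (5.15): rho = p / (R T), with p in [Pa] and T in [K]."""
    return p / (R_DRY_AIR * T)

def wind_power_density(u, p=101325.0, T=288.0):
    """Eq. (5.14): P = (1/2) rho u^3 in [W m^-2]."""
    return 0.5 * air_density(p, T) * u ** 3

# The u^3 law: doubling the wind speed gives 8 times the power.
for u in (4.0, 8.0, 16.0):
    print(f"u = {u:5.1f} [m/s]  ->  P = {wind_power_density(u):8.1f} [W/m^2]")
```

The cubic dependence is the reason windy sites are so much more valuable than moderately windy ones.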
5.2.1 Betz Limit
It is not possible to tap all power P from the wind, as then the wind behind a turbine would come to a standstill. Closer to reality is Figure 5.11b. Air enters the turbine from the left, perpendicular to the turbine, and flows horizontally. The undisturbed air far in front of the turbine (upstream) has velocity u_in and passes an area A_in. This area is determined such that the air just passes the turbine, which has a cross section A corresponding to the area swept by its blades. At the turbine the air has velocity u. To the right of the turbine (downstream) the same air passes cross section A_out with velocity u_out. The curved lines in Figure 5.11b indicate the streamlines of the air flow. In a stationary situation the air mass passing the three cross sections per second [kg s^-1] should be the same (conservation of mass). Therefore

J_m = ρ A_in u_in = ρ A u = ρ A_out u_out    (5.16)
In a time Δt a mass J_m Δt passes each cross section. The turbine is constructed such that u_out < u_in in order to extract energy from the air, and hence obtain an energy gain for the turbine. Therefore A_out > A_in, which explains the way the figure is drawn.
The loss of kinetic energy of the air in a time Δt is found as

T_in − T_out = (1/2) J_m Δt (u_in^2 − u_out^2)    (5.17)

The loss of momentum of the same air becomes

Δp = J_m Δt (u_in − u_out)    (5.18)
According to Newton's law the force F exerted on the air by the turbine equals

F = Δp/Δt = J_m (u_in − u_out)    (5.19)

This force performs work on the air. In a time Δt the displacement of air through the turbine amounts to u Δt. The work done on the air in this time is ΔW = −F u Δt; the minus sign indicates that the work is done against the air stream and diminishes its kinetic energy: ΔW = T_out − T_in < 0. With Eqs. (5.17) and (5.19), and multiplying by (−1), one finds

−ΔW = T_in − T_out = F u Δt = J_m (u_in − u_out) u Δt = (1/2) J_m Δt (u_in^2 − u_out^2)    (5.20)
It follows that

u = (u_in + u_out)/2    (5.21)
With this equation and a parameter a it is possible to express u and u_out in terms of u_in:

u = u_in (1 − a)
u_out = u_in (1 − 2a)    (5.22)
The power P [W] of the turbine equals the energy transferred per second and can be expressed as a function of the parameter a and u_in using Eq. (5.20):

P = ΔW/Δt = (1/2) J_m (u_in^2 − u_out^2) = 2 ρ A u_in^3 a(1 − a)^2    (5.23)

In discussions of wind energy the coefficient of performance, which was introduced as COP in Eq. (4.63), is denoted by c_p but defined similarly:

c_p = useful output / required input = P / ((1/2) ρ A u_in^3) = 4a(1 − a)^2    (5.24)

Note that in the denominator the kinetic energy of the air is taken upstream and over the cross section A of the turbine, as A and u_in are known parameters. The maximum value of c_p can be found by putting its derivative equal to zero. One finds a = 1/3 and the maximum value becomes (Exercise 5.11)

c_p = 16/27 ≈ 0.59    (5.25)

This is the Betz limit, published by Betz in 1926 [9]. From Eq. (5.22) it follows that the maximum performance of a wind turbine corresponds to a downstream air velocity of u_out = u_in/3.
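The maximization behind Eqs. (5.24)–(5.25) is easy to check numerically; a minimal sketch:

```python
# Numerical check of the Betz limit, Eqs. (5.24)-(5.25):
# c_p(a) = 4 a (1 - a)^2, maximized over the interaction parameter a.

def c_p(a):
    """Coefficient of performance of an ideal turbine, Eq. (5.24)."""
    return 4.0 * a * (1.0 - a) ** 2

# Scan a on a fine grid; the analytic maximum is at a = 1/3.
best_a = max((i / 100000 for i in range(50001)), key=c_p)
print(best_a, c_p(best_a))   # close to a = 1/3 and c_p = 16/27
print(16 / 27)               # the Betz limit, about 0.59
```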
In practice, the performance of wind turbines will depend on their design. Windmills like the one shown in Figure 5.10 (left) may have c_p ≈ 0.17 ([10], p. 21; [11], Exercise 5.12). Modern three-bladed propellers may have c_p ≈ 0.45 for a range of wind speeds, and much smaller values outside of this optimal range [12].
Example 5.1 Pressure Differences in a Wind Turbine

(a) Derive a formula for the pressure difference between the left and the right of the turbine of Figure 5.11b for the optimal Betz case; (b) calculate the pressure difference for u_in = 6 [m s^-1]; (c) compare your result with the hydrostatic pressure of 1 [cm] of water; (d) which side of the turbine has the lower air pressure ([11], p. 127)?

Answer

(a) The force F exerted on the air is given by Eq. (5.19). The corresponding pressure is the force per unit of area, which becomes

F/A = J_m (u_in − u_out)/A = ρ A u (u_in − u_in(1 − 2a))/A = ρ u_in(1 − a) u_in (2a) = ρ u_in^2 (2a)(1 − a) = (4/9) ρ u_in^2    (5.26)

Here we used Eqs. (5.16) and (5.22) and a = 1/3. (b) With ρ = 1.25 [kg m^-3] and u_in = 6 [m s^-1] we find F/A = 20 [Pa = N m^-2]. (c) The hydrostatic equation (3.4) gives for 1 [cm] of water a pressure of gρ(0.01) = (9.8)(1000)(0.01) ≈ 100 [Pa]. The pressure difference which causes the rotation of a turbine therefore corresponds to only 2 [mm] of water pressure. (d) Eq. (5.19) gives the magnitude of the force exerted on the air. Its direction is to the left, for it diminishes the kinetic energy of the air. The force exerted on the turbine by the air has the same magnitude but the opposite direction, that is, to the right. The pressure gradient force of the air (−∇p) therefore works to the right; it follows that the air pressure on the right is lower than on the left.
5.2.2 Aerodynamics
The power that a wind turbine can extract from the wind is determined by the flow of the wind around its blades. The relevant concepts are sketched in Figure 5.12. The blade is supposed to be perpendicular to the paper; it moves around a horizontal axis, resulting in an upward velocity u_up as shown. The incoming wind enters horizontally in the direction of the axis with a homogeneous and constant velocity u. Gravitational forces are not considered. The situation is therefore symmetrical around the rotational axis of the turbine, and the assumption that the blade moves upwards does not limit the conclusions drawn below. We are interested in the forces which the blade experiences and will describe the velocity field relative to the blade; we move to a different coordinate system (by a Galilean transformation) in which the forces are the same. The incoming wind then still has a horizontal component u, but also a downward velocity u_down, which together give a relative velocity u_rel. This velocity enters at an angle of attack γ with respect to the chord line, as indicated in the left picture of Figure 5.12.
Figure 5.12 Blade of a wind turbine in a horizontal wind field entering from the left. The figure on the left shows how the blade moves upwards resulting in wind velocity urel with respect to the blade. On the right the relative field is given, where a small angle of attack γ results in more curvature of the streamlines on the top than on the bottom.
The right part of Figure 5.12 shows the streamlines with respect to the moving blade. A small angle of attack, about 13°, results in a strong curvature on top of the blade and a much smaller curvature underneath ([10], p. 23-27). Suppose, for the sake of the argument, that an air particle with mass ρ dτ follows a streamline on top of the blade which is curved according to a circle with radius R. Then there must be a centripetal force (ρ dτ)u^2/R corresponding to a pressure gradient force (3.38) −∇p dτ. Consequently the pressure near the blade is smaller than a little above the blade [13]. Underneath the blade the curvature is small, so the pressure there will be approximately the atmospheric pressure. The hydrostatic decrease of pressure (3.8) will be small over such a small distance. Therefore a net Lift force results perpendicular to the air velocity u_rel. In the direction of the relative air velocity there is a drag force D. Note that the word 'Lift' originates from the study of aeroplanes, where the Lift force keeps the plane in the air.
5.2.2.1 Blade Design
The upward motion of the blade in Figure 5.12 is governed by the difference of the vertical components of Lift and drag D. From Figure 5.12 it is obvious that the ratio of Lift to drag should be as large as possible. This ratio depends on the angle of attack, which in turn depends on the ratio of u_down to u. The upward velocity of the blade depends on the distance to its rotational axis: positions close to the axis have a small upward velocity and positions near the tip have a large one. In order to optimize the output of the turbine one therefore has to 'twist' the blade.
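The need for twist can be illustrated numerically. This is not the book's design procedure, only a sketch: it assumes that a blade element at radius r sees the relative wind at an angle φ = arctan(u/(Ωr)) to the rotor plane, and that the local pitch is chosen to keep an assumed optimal angle of attack of 13°; the wind speed and rotation rate are illustrative values.

```python
import math

# Why a blade must be 'twisted': relative to a blade element at
# radius r, the axial wind u combines with the element's own speed
# Omega * r, so the relative wind arrives at
#   phi = arctan(u / (Omega * r))
# from the rotor plane. Keeping the angle of attack at an assumed
# optimum of 13 degrees requires a pitch that varies along the blade.
u = 8.0                            # axial wind speed at the rotor [m/s]
omega = 18 * 2 * math.pi / 60      # 18 revolutions per minute [rad/s]
alpha_opt = 13.0                   # assumed optimal angle of attack [deg]

for r in (5, 15, 25, 35, 45):      # stations along a 50 [m] blade
    phi = math.degrees(math.atan2(u, omega * r))
    pitch = phi - alpha_opt        # local twist of the chord line
    print(f"r = {r:2d} [m]: relative wind at {phi:5.1f} deg, "
          f"pitch = {pitch:5.1f} deg")
```

The relative wind arrives much more steeply near the hub than near the tip, which is exactly the variation the twist must follow.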
5.2.2.2 The Wake
It has been shown above that downstream of a turbine the wind velocity u_out will be a fraction of the velocity u_in upstream; in the optimal situation u_out = u_in/3. So downstream
Figure 5.13 Wake of a single wind turbine. In a conical approximation with a half top angle α the velocity in the wake only depends on the distance x to the turbine.
from a turbine, the wind field needs to be supplied with energy from higher layers to re-establish a field which again has the same velocity u_in as the original field. The situation is sketched in Figure 5.13. In the simplest approximation one considers only one turbine and its wake. The wake is approximated by a cone with half top angle α. The coordinate x = 0 is taken a little downstream of the turbine, where the wind velocity equals u_out. It is assumed that the total mass in a cylinder with radius r parallel to the wind velocity u_in is conserved: in each second the mass entering from the left equals the mass leaving at the right. Look at position x in Figure 5.13. There the radius of the cone equals r, so its cross section is πr^2 and the volume of air passing in one second is πr^2 u(x). At x = 0 there are two contributions to the passing volume of air: πr_out^2 u_out inside the wake and π(r^2 − r_out^2) u_in outside it. We assume that the air density is the same everywhere; conservation of mass then reduces to conservation of volume

(r^2 − r_out^2) u_in + r_out^2 u_out = r^2 u(x)    (5.27)
The rest of the derivation is straightforward, but a little cumbersome (Exercise 5.13). From Eqs. (5.16) and (5.22), and writing R for the turbine radius, it follows that

r_out^2 = R^2 (1 − a)/(1 − 2a)    (5.28)
From Figure 5.13 one finds, using that for small angles tan α ≈ α,

r = r_out + αx    (5.29)

From Eqs. (5.27), (5.28) and (5.29) one gets

u(x) = u_in [ 1 − 2a / (1 + αx/(R √((1 − a)/(1 − 2a))))^2 ]    (5.30)
Renewable Energy
165
Clearly α, which represents the opening angle of the cone, is the essential parameter here. For quick calculations one may take α = 0.1 ([10], p. 101). A semi-empirical formula, which takes into account the hub height H and the roughness z_0 of the terrain (to be discussed below), is

α = 1/(2 ln(H/z_0))    (5.31)
5.2.3 Wind Farms
For large-scale electricity production wind turbines are organized in wind farms. That reduces the overheads: obtaining permission to build 100 wind turbines is not much more expensive than obtaining permission for a single one. Also, the cost of coupling to the grid and of maintenance will be less than proportional to the number of turbines. The distance between turbines in a wind farm should be such that they avoid each other's wakes. A rule of thumb is to position the turbines such that in the dominant wind direction the distance is at least five times the diameter of the swept area, and in the second dominant wind direction at least three diameters. With a = 1/3 (the Betz optimum) and α = 0.10, Eq. (5.30) then gives u = 0.77u_in in the dominant direction and u = 0.67u_in in the second direction (Exercise 5.14). Another reason for keeping a distance between turbines is that downstream of a turbine there is always turbulence from the motion of the blades; this turbulence may damage a turbine if it is too strong. In designing a modern wind farm one will calculate the number of [kWh] that may be expected and the costs of the farm. The geometry and the number of turbines in a given terrain will be determined such that the resulting cost per [kWh] is minimal. Besides the wakes there are two other factors, which will be discussed in detail in the next subsections. The first factor is the vertical wind profile; this will decide how high the turbine should be built: higher velocities at greater heights have to be weighed against the cost of a higher tower. Note that because of the wake effect the distances between the turbines then also have to be larger. The second factor is the wind statistics; the variation of velocity with time during a year will determine how many [kWh] may be produced. In this book a semi-empirical discussion is given of wake, wind profile and wind statistics, useful as a zero-order approximation for administrators and companies.
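The quoted fractions follow directly from Eq. (5.30); a minimal sketch (rotor radius and wind speed are illustrative values):

```python
import math

# Wake model of Eq. (5.30): wind speed u(x) a distance x downstream of
# a turbine with radius R, interaction parameter a and wake angle alpha.
def wake_speed(x, u_in, R, a=1/3, alpha=0.10):
    r_out = R * math.sqrt((1 - a) / (1 - 2 * a))   # Eq. (5.28)
    return u_in * (1 - 2 * a / (1 + alpha * x / r_out) ** 2)

# Rule-of-thumb spacings in a wind farm (D = 2R is the rotor diameter):
u_in, R = 10.0, 40.0
D = 2 * R
print(wake_speed(5 * D, u_in, R) / u_in)   # dominant direction, 5 D apart
print(wake_speed(3 * D, u_in, R) / u_in)   # second direction, 3 D apart
# These reproduce the fractions 0.77 and 0.67 quoted in the text.
```

Because only the ratio x/R enters, the result is independent of the rotor size chosen.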
For costly investments more accurate calculations are required. One needs measured data for the wind velocity, in direction and magnitude, as a function of altitude and time. These data can be used as input for computer modelling, which should result in the optimal geometry and turbine heights. A zero-order calculation is sensible nevertheless, for it gives a feeling for the orders of magnitude to be expected from an accurate calculation.
5.2.4 Vertical Wind Profile
It is common experience that the wind velocity increases with altitude. At any altitude there are fluctuations in velocity with time, but if one ignores these, a semi-empirical formula for the velocity u(z) as a function of altitude z can be derived: the vertical wind profile. We start at ground level z = 0 and work upwards. At the surface the air velocity must be zero, for the air molecules stick to the ground. Just above the ground, internal friction of the air will determine its motion. In this laminar
Table 5.2 Parameters for the vertical wind profile.

Type of terrain                         | Roughness class | Roughness z_0 [m] | Exponent α
Water areas                             | 0               | 0.001             | 0.01
Open country                            | 1               | 0.012             | 0.12
Farmland, buildings, hedges             | 2               | 0.05              | 0.16
Farmland, many trees, forest, villages  | 3               | 0.3               | 0.28
With permission of John Wiley, copyright 1997, reproduced from [10], p. 8.
boundary layer the air will experience a tangential stress τ along the surface with dimensions [N m−2 ] = [kg s−2 m−1 ]. Besides this force, the essential physical property will be the density ρ [kg m−3 ]. From these two variables a quantity u∗ can be constructed
u* = √(τ/ρ)    (5.32)

One can check readily that this quantity has the dimensions of velocity; it is called the friction velocity or shear velocity, as it represents the velocity in the boundary layer. From measurements of τ it appears that u* ≈ 0.3 [m s^-1]. Note that in constructing u* only the dimensions of τ and ρ were used. It is an example of dimensional analysis, which is a quick method to find relations between physical observables. This method also suggests a relation for the increase of air velocity ∂u/∂z above the laminar layer. This derivative will depend on u* and on z. The simplest relation then becomes

∂u/∂z = u*/(kz)    (5.33)

where the dimensionless constant k is called the Von Karman constant, with k ≈ 0.4. Again one will note that the dimensions on the right are equal to those on the left. Eq. (5.33) has the solution

u = (u*/k) ln z + B = (u*/k) ln(z/z_0)    (5.34)

Here z_0 [m] is a measure of the roughness of the terrain. Typical empirical values are given in Table 5.2. The logarithmic formula is a good approximation of the wind profile from z ≈ z_0 up to a few hundred [m]. It is practical to take the velocity at a height of 10 [m] as reference, as measurements are often carried out at that height. Then one gets rid of u* and k and obtains

u(z) = u(10) ln(z/z_0) / ln(10/z_0)    (5.35)

We note in passing that the logarithmic dependence (5.34) also applies to the vertical velocity profile in rivers, with other values for the parameters, of course. Engineers often use a power-law relationship

u(z) = u(10) (z/10)^α    (5.36)

In Table 5.2 some typical values of α are given besides those of z_0. It appears that, in general, both approximations are close to each other (Exercise 5.15). Note that Table 5.2
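The two profiles (5.35) and (5.36) can be compared numerically; a minimal sketch with assumed values u(10) = 5 [m s^-1] over terrain of roughness class 2 from Table 5.2:

```python
import math

# Compare the logarithmic profile, Eq. (5.35), with the engineers'
# power law, Eq. (5.36), for 'farmland, buildings, hedges'
# (Table 5.2: z0 = 0.05 [m], alpha = 0.16), taking u(10) = 5 [m/s].
z0, alpha, u10 = 0.05, 0.16, 5.0

def u_log(z):
    """Eq. (5.35)."""
    return u10 * math.log(z / z0) / math.log(10 / z0)

def u_pow(z):
    """Eq. (5.36)."""
    return u10 * (z / 10) ** alpha

for z in (10, 30, 60, 100, 140):
    print(f"z = {z:3d} [m]: log {u_log(z):5.2f}, power {u_pow(z):5.2f} [m/s]")
```

Over the range swept by a large rotor the two formulas agree to within a few percent, which is why both are used in practice.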
only gives an idea of the parameter values to be expected. It is always better to do a few measurements and fit the parameters. From Eqs. (5.35) and (5.36) it follows that for a wind turbine with blades of 40 [m] and a hub height of 100 [m] the blades rotate between 60 [m] and 140 [m] in height at each turn. There is an appreciable difference in wind velocity over this range that has to be taken into account in the design. Construction problems, however, arise particularly from the turbulent variation in time over the swept area. Note that the foregoing discussion only concerns the magnitude of the wind velocity. In Chapter 3 we have seen that at altitudes above 500 [m], say, the wind is essentially geostrophic in magnitude and direction: friction does not play a role and the wind direction follows the isobars. Near the ground the friction between moving layers of air is important, and the wind direction is found by taking the direction of −∇p with a turn to the right in the Northern Hemisphere and a turn to the left in the Southern Hemisphere. The net effect is a difference in direction of about 30° between the wind in lower and higher layers. For the operation of a commercial wind farm one needs to estimate its output at least 12 hours before the wind electricity has to be delivered: if one sells electricity on the market, one gets a harsh penalty for failing to deliver. On the other hand, if one has a standard contract to deliver electricity and the wind is poor, one has to buy electricity from a utility which is able to switch on a power station quickly, in practice gas or hydro storage. The wind companies therefore start from the geostrophic wind, which is rather predictable, and match it to a vertical wind profile at lower altitudes. This procedure, including accounting for the wakes, is well understood [14].
5.2.5 Wind Statistics
At any altitude the wind velocity changes in time. In order to estimate the annual output of a wind turbine one needs to know the probability f(u)du that a wind velocity occurs between u and u + du. The measurements closely resemble the so-called Weibull probability distribution

f(u) = (k/a) (u/a)^(k−1) e^(−(u/a)^k)    (5.37)
The parameters k and a [m s^-1] are found by fitting the experimental data. This is not done by fitting (5.37) directly, but rather by plotting

g(u) = 1 − ∫_0^u f(u')du'    (5.38)
The integral on the right is the probability that the velocity is smaller than u. The function g(u) therefore represents the probability that the velocity is larger than u. As there are not many data for large velocities it is more precise to fit (5.38). This is done by plotting log(−log g(u)) against log u (Exercise 5.16). With the Weibull distribution the average wind velocity follows as

⟨u⟩ = a Γ(1 + 1/k)    (5.39)
where Γ(z) is the well-known gamma function. The average energy input of a wind turbine may be found from ⟨u^3⟩ = a^3 Γ(1 + 3/k). In practice one usually finds k ≈ 2, which corresponds to the simpler Rayleigh distribution. If one needs a quick estimate of the energy input one may take k = 2 and deduce the parameter a from the average wind velocity by Eq. (5.39). It may be added that in Germany the performance of a wind turbine is given for a so-called reference site with k = 2, z_0 = 0.1 [m] and an average velocity of 5.5 [m s^-1] at a height of 30 [m].
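The quick estimate described above can be sketched for the German reference site; the air density of 1.25 [kg m^-3] is the value used in Example 5.1:

```python
import math

# Mean wind power density at the German reference site (k = 2,
# mean speed 5.5 [m/s]), via Eq. (5.39) and <u^3> = a^3 Gamma(1 + 3/k).
k = 2.0
u_mean = 5.5      # average wind velocity [m/s]
rho = 1.25        # air density [kg/m^3]

a = u_mean / math.gamma(1 + 1 / k)        # scale parameter from Eq. (5.39)
u3_mean = a ** 3 * math.gamma(1 + 3 / k)  # average of u^3
power_density = 0.5 * rho * u3_mean       # Eq. (5.14), averaged over time

print(f"a = {a:.2f} [m/s]")
print(f"<u^3> = {u3_mean:.0f} [m^3/s^3]")
print(f"mean power density = {power_density:.0f} [W/m^2]")
# Note that <u^3> is about 1.9 times u_mean^3: averaging u^3 over the
# distribution gives considerably more power than the cube of the mean.
```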
5.2.6 State of the Art and Outlook
Modern turbines are sophisticated machines, with a large variety of dimensions, specifications and power outputs. The outputs range from 0.35 [kW] to 5 [MW] for the largest machines, with a swept area of more than 12 000 [m^2] [12]. Let us look at what happens with the turbine if the incoming wind speed u_in slowly increases. At u_in = 0 the turbines are at rest. They usually start working at a cut-in speed of a few [m s^-1], needed to overcome internal friction and to work smoothly. For wind velocities somewhat higher than the cut-in speed the energy input increases with the u_in^3 law (5.14), but the coefficient of performance c_p also depends on the incoming velocity u_in; the output therefore does not follow the u_in^3 law. The bigger turbines have a cut-out wind speed beyond which the turbine is switched off for safety reasons. It may be as high as 25 [m s^-1]. A little before the cut-out speed is reached, at the rated wind speed, the turbine achieves its rated power, which is the nominal power output. At high incoming wind velocities u_in the turbines have a mechanism to limit or reduce the power output. The bigger turbines (> 100 [kW]) almost all have pitch control. These machines have a variable angular velocity, implying that the ratio of u and u_down in Figure 5.12 can be regulated in order to optimize the angle of attack and the output. At high velocities the blades are rotated a little around their long axis, making the angle of attack less than optimal. With stall control the turbine rotates at a fixed speed, implying that u_down is constant. The blade is designed such that at undesirably high velocities the angle of attack increases too much and the efficiency of the turbine decreases. It follows that at certain low and high velocities the turbine will not deliver power at all, while at many other wind velocities the power output is smaller than the rated power.
One may measure the total number of [J] delivered during a year of operation and calculate the number of hours at the rated power of the turbine required to give the same output. The ratio of that number to the 8760 hours of a year is called the capacity factor; it may be 35% on land and 45% for coastal or mountainous areas. The contribution of wind power to the electricity needs of a country is found by multiplying the installed capacity by the capacity factor. Large turbines may have blades as long as 50 [m] making 18 rotations per minute, which gives a tip velocity of 94 [m s^-1]. These high velocities produce a high noise level, which is one of the reasons why the locations of wind turbines are limited by regulations. In 2009 the cost of wind electricity on land, onshore, ranged from 7 $ct to 13 $ct per [kWh], depending on local conditions. Offshore, prices of 11 $ct to 13 $ct per [kWh] are quoted [15]. On a national scale there will always be reserve capacity of power stations in case of the breakdown of one or two others. Therefore, as long as the contribution of wind energy is small, there is little need to install extra back-up capacity for periods of little or no wind.
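Both numbers quoted above are easy to verify; a quick check (the 5 [MW] turbine and its annual yield are hypothetical illustrative values):

```python
import math

# Tip velocity of a 50 [m] blade at 18 revolutions per minute:
tip_speed = 2 * math.pi * 50 * 18 / 60
print(f"tip speed = {tip_speed:.0f} [m/s]")      # about 94 [m/s]

# Capacity factor: annual energy delivered, expressed as equivalent
# hours at rated power, divided by the 8760 hours in a year.
def capacity_factor(annual_energy_kwh, rated_power_kw):
    return annual_energy_kwh / (rated_power_kw * 8760)

# A hypothetical 5 [MW] turbine delivering 15.3 million [kWh] per year:
cf = capacity_factor(15.3e6, 5000)
print(f"capacity factor = {cf:.0%}")             # about 35%
```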
When the contribution of wind energy becomes considerable, extra back-up capacity will be needed. These balancing costs should be added to the prices quoted. The extra back-up can be organized in different ways. Denmark, for example, with 20% of its electricity from wind, has a deal with neighbouring Norway to draw on its hydropower for gravitational storage (Section 4.4.3). Another way may be to rely on wind energy from locations several hundreds of [km] away, where the wind velocity is no longer correlated with that at the location in question. This would require a strong electricity grid over long distances with small ohmic losses, which would be required for other renewables as well (Section 4.4.4). From the prices quoted it appears that, depending on location, wind electricity is almost competitive with electricity from other sources. This will become even more so as more turbines are built and the prices fall according to the learning curve (Eq. (4.155)). The learning rate is estimated as L = 0.07 onshore and L = 0.09 offshore. If the building of turbines proceeds according to the projections, the cost of wind power should decrease by 23% by 2050 ([15], p. 17). Cost reduction also requires research. One is looking at stronger and lighter materials to enable larger rotors and lighter nacelles. Especially offshore, maintenance may be difficult under adverse weather conditions; therefore ways to reduce maintenance requirements and costs are under investigation. At present offshore turbines are adapted onshore devices. It may be desirable to design offshore turbines 'from scratch', geared to the conditions under which they have to operate. It is also advantageous to share experience of offshore turbines between users and builders, although commercial sensitivity often leads to secrecy.
Finally, it is necessary to compile wind characteristics over areas more than 200 [km] across, in order to find out how a shortage of wind at one location may be compensated by a surplus elsewhere.
5.3 Energy from the Water
Rivers flow from higher locations to lower ones. If the drop is big enough it makes sense to dam a river and use the height difference between upstream and downstream to produce electric power (Section 5.3.1). In other cases the flow of the river may be used to turn a water wheel and use its rotation directly (sawing wood or grinding grain), or again to produce electric power (Section 5.3.2). Ocean waves are caused by winds that blow irregularities on the ocean's surface into running waves. The kinetic energy of the water particles may be converted into rotation of a turbine and then into electric power (Section 5.3.3). Finally, the kinetic energy of the tidal motion caused by sun and moon may, in favourable locations, be converted into electric power (Section 5.3.4).
5.3.1 Power from Dams
A hydropower station uses a dam, from which water will go down a height h and pass a turbine. The potential energy of the water is converted into kinetic energy of the turbine, which is coupled to an electric generator. A mass m with height h will have a potential energy mgh, where g is the acceleration of gravity. If Q [m3 s−1 ] is passing the turbine its
mass will be ρQ [kg s^-1], where ρ is the density of water. Consequently the mechanical power P the dam produces will be

P = ρQgh [J s^-1] ≈ 10hQ [kW]    (5.40)
This of course is equivalent to Eq. (4.126), which was derived in the context of pumped hydro storage of electric power. For large power stations 90% of the mechanical output (5.40) may be converted into electric power. The principal use of the dam is its reservoir, which allows one to regulate the electric output by regulating the flow Q through the turbines. Besides storage, many dams are also used to regulate irrigation water for downstream agriculture; both uses, of course, may compete. Hydropower provided around 1/6 of all electric power worldwide in 2008. In most industrial countries most of the hydropower resources are already in use; the expansion must come from developing nations. By 2050 the total output of hydropower may have doubled.
5.3.2 Power from Flowing Rivers
The kinetic energy of a flowing river is in some places still used to drive waterwheels. The rotational energy then is used directly, for powering saws in the woodcutting industry or looms, as in the early British textile industry. Modern waterwheels often convert the rotation of the wheel into electric power. For a flow with velocity u and mass m, the kinetic energy equals mu^2/2. If Q [m^3 s^-1] passes the waterwheel or turbine, its mass again will be ρQ [kg s^-1], giving a mechanical power

P = (1/2) Qρu^2 [J s^-1] ≈ (1/2) Qu^2 [kW]    (5.41)

5.3.2.1 Comparison of Dams and Flowing Rivers
The height h in Eq. (5.40) can easily be 50 [m], while the velocity u in Eq. (5.41) often is not much higher than 1 [m s^-1]. This implies that for the same amount Q of water passing, dams are 1000 times as effective as flows. Besides, with a dam one may use all the water of a stream, while with a waterwheel usually only part of the stream is forced to pass the wheel. Altogether this means that nowadays the kinetic energy of flows will only be useful in specialized small-scale applications.
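The factor of 1000 can be checked directly from Eqs. (5.40) and (5.41); a sketch with the illustrative values h = 50 [m] and u = 1 [m s^-1]:

```python
# Compare the mechanical power of a dam, Eq. (5.40), with that of a
# free-flowing river, Eq. (5.41), for the same flow Q.
RHO_WATER = 1000.0   # [kg/m^3]
G = 9.8              # [m/s^2]

def dam_power_kw(h, Q):
    """Eq. (5.40): P = rho Q g h, returned in [kW]."""
    return RHO_WATER * Q * G * h / 1000.0

def flow_power_kw(u, Q):
    """Eq. (5.41): P = (1/2) rho Q u^2, returned in [kW]."""
    return 0.5 * RHO_WATER * Q * u ** 2 / 1000.0

Q = 100.0                        # same flow [m^3/s] in both cases
p_dam = dam_power_kw(50.0, Q)    # h = 50 [m]
p_flow = flow_power_kw(1.0, Q)   # u = 1 [m/s]
print(p_dam, p_flow, p_dam / p_flow)   # ratio 2gh/u^2, about 1000
```

The ratio 2gh/u^2 makes the comparison independent of Q, as the text states.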
5.3.3 Power from Waves
We estimate the power in the motion of the waves by considering a very deep ocean with gravity as the only acting force. We use mathematics which may be too advanced for some students and we also have to make a few drastic mathematical approximations in order to derive Eq. (5.52). If you skip the derivation, try to understand Eq. (5.52). A water particle is defined by its equilibrium position (x, y, z). In general, the water particle will have a displacement s(x, y, z, t) from this position. The local pressure p(x, y, z, t) is defined as the deviation from the equilibrium pressure. The extra pressure will be the
Figure 5.14 A particle with equilibrium position (x, y, z) is displaced from its equilibrium by a vector s(x, y, z, t). The pressure p(x, y, z, t) is found as the extra water pressure caused by the extra water sz on top.
hydrostatic pressure (3.4) due to the extra water above, so

p(x, y, z, t) = ρg s_z    (5.42)
The situation is sketched in Figure 5.14, where s(x, y, z, t) and p(x, y, z, t) are indicated. Note that the vectors s(x, y, z, t) are drawn at a level a little below z = 0 in order to avoid erroneous pressures at z = 0 in the valley of the wave. We now calculate the kinetic energy in the wave motion with the simplest assumptions. First, assume that everywhere rot s = 0. This is a strong assumption, as it means that everywhere relations exist between the components s_x, s_y, s_z of the field s(x, y, z, t). The x-component of rot s = 0, for example, reads ∂s_z/∂y − ∂s_y/∂z = 0. It has been proven in courses of mathematical physics that in such cases there exists a wave function ψ(x, y, z, t) (note the name!) with the property that

s = −∇ψ    (5.43)
The equation of motion for the general case was given in Eq. (3.35). In the application to waves one may ignore viscous forces and Coriolis forces. Also, it may be assumed that the gravity force cancels the equilibrium pressure force. The only remaining force on the right-hand side of Eq. (3.35) is due to the local pressure p(x, y, z, t), which for a volume element dτ becomes −∇p dτ (Eq. (3.38)). The force term on the left of Eq. (3.35) is written as ρ dτ (du/dt). The equation of motion for waves becomes

ρ du/dt = −∇p    (5.44)
In Eq. (3.63) the time derivative of the x-component in this equation is written out as du_x/dt = (u·∇)u_x + ∂u_x/∂t, with similar relations for the other components. We make the simplification du_x/dt = ∂u_x/∂t and similarly u = ds/dt = ∂s/∂t. From Eq. (5.44) we then find

ρ du/dt = ρ ∂u/∂t = ρ ∂²s/∂t² = −∇p   (5.45)
Environmental Physics
With Eqs. (5.43) and (5.42) we find

∂²/∂t² (−∇ψ) = −(1/ρ)∇p = −(1/ρ)∇(ρg s_z)   (5.46)

We assume that the density ρ is constant, use that spatial and time derivatives may be interchanged, and find

∇(∂²ψ/∂t²) = ∇(g s_z) = ∇(−g ∂ψ/∂z)   (5.47)

We integrate the ∇ operator by omitting it and find

∂²ψ/∂t² = −g ∂ψ/∂z   (5.48)
We assume that there is no net outflow of mass in any element of volume. In Appendix B (Eq. (B14)) it is shown that in this case with ρ = constant one has

0 = div u = div(∂s/∂t) = ∂/∂t (div s)   (5.49)
which must hold for all times and places. Consequently div s = 0. From Eq. (5.43) it follows that div grad ψ = 0. We assume that the waves propagate in the x-direction and extend infinitely in the y-direction. Then ψ will not depend on y; therefore ψ = ψ(x, z, t) and

div grad ψ = ∂²ψ/∂x² + ∂²ψ/∂z² = 0   (5.50)
This Laplace equation has many solutions, but we are looking for a wave with velocity v whose amplitude decreases with depth. We try

ψ = (a/k) sin k(x − vt) e^{kz}   (5.51)
For t → t + 1 and x → x + v the function ψ remains the same; it therefore represents a wave propagating in the x-direction with velocity v. The wavelength λ is the smallest distance over which the wave repeats itself: x → x + λ gives the same result, so kλ = 2π. One may show that solution (5.51) obeys Eq. (5.50), while the water particles move in circles with a radius diminishing with depth (Exercise 5.17, where also the equations up to (5.56) are derived in more detail). At z = 0 the amplitude equals a. The kinetic energy of a unit volume dτ = 1 is found as

T = (1/2)ρ(u_x² + u_z²) = (1/2)ρ[(∂s_x/∂t)² + (∂s_z/∂t)²] = (1/2)ρa²k²v² e^{2kz} [J m⁻³]   (5.52)

The kinetic energy contained in a complete column is found by integrating this equation from z = −∞ to z = 0, which gives

T_column = (1/4)ρa²kv² [J m⁻²]   (5.53)
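The depth integration leading from Eq. (5.52) to Eq. (5.53) can be checked numerically. The sketch below (plain Python; the amplitude, wavelength and propagation velocity are illustrative assumed values, not data from the text) integrates the kinetic energy density over the water column and compares it with the closed form:

```python
import math

# Illustrative wave parameters (assumptions, not from the text)
rho = 1000.0              # water density [kg m^-3]
a = 1.0                   # amplitude at z = 0 [m]
k = 2 * math.pi / 100.0   # wave number for a 100 m wavelength [m^-1]
v = 12.5                  # assumed propagation velocity [m s^-1]

def T_density(z):
    """Kinetic energy density of Eq. (5.52): (1/2) rho a^2 k^2 v^2 e^{2kz}."""
    return 0.5 * rho * a**2 * k**2 * v**2 * math.exp(2 * k * z)

# Trapezoidal integration over depth; e^{2kz} is negligible below z = -20/k
N = 50000
z0 = -20.0 / k
dz = -z0 / N
T_numeric = dz * (0.5 * (T_density(z0) + T_density(0.0))
                  + sum(T_density(z0 + i * dz) for i in range(1, N)))

T_analytic = 0.25 * rho * a**2 * k * v**2   # Eq. (5.53)
print(T_numeric, T_analytic)                # the two agree closely
```

The agreement simply reflects that the integral of e^{2kz} from −∞ to 0 equals 1/(2k).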
The propagation velocity v which appears in this equation is found by substituting ψ from Eq. (5.51) in Eq. (5.48), which results in

v² = g/k   (5.54)
The kinetic energy contained in a column becomes

T_column = (1/4)ρga² [J m⁻²]   (5.55)
The power P_column contained in a complete column equals the energy passing 1 [m] perpendicular to the waves in 1 [s]. Therefore

P_column = T_column × v = (1/4)ρga²v [W m⁻¹]   (5.56)
It is convenient to rewrite this relation in quantities that are easy to measure. The first is the distance between the valley and the crest of the wave: H = 2a. The second is the time T between two successive highest points of the wave at a certain location. From Eq. (5.51) we see that kvt and kv(t + T) must give the same wave, which leads to kvT = 2π. With Eq. (5.54) one finds v = Tg/2π. This leads to

P = ρg²H²T/(32π) [W m⁻¹] ≈ H²T [kW m⁻¹]   (5.57)
This is the correct relation for a pure wave (5.51). Real waves are not pure, but comprise a range of k-values around a certain value. The waves that are observed propagate with the group velocity

v_g = v + k(dv/dk) = v/2   (5.58)
The power in real ocean waves becomes

P = ρg²H²T/(64π) [W m⁻¹] ≈ (1/2)H²T [kW m⁻¹]   (5.59)
For waves with H = 3 [m] and T = 8 [s] one would find P = 36 [kW m⁻¹]. When approaching the coast, part of this power will be lost to friction against the ocean floor.

5.3.3.1 Converters
Most converters remain stationary at one location and convert the up-and-down movement of the water into rotational motion. An example is given in Figure 5.15. The sea water performs a vertical motion s(x, t). A buoy floating on the water has an open tube in its middle through which the sea water may enter. The waves (5.51) passing the buoy will cause a harmonic vertical motion Z(x, t) of the buoy and a similar vertical motion s_1(x, t) inside the tube. The three motions will be out of phase and will have different amplitudes:

s = a sin k(x − vt)
Z = Z_0 sin k(x − vt − δ_Z)
s_1 = s_0 sin k(x − vt − δ_1)   (5.57)
Figure 5.15 Matsuda’s pneumatic wave energy conversion device. (Reproduced by permission of Academic Press Ltd, London.)
The phase difference means that with respect to the buoy the tube water will perform an up-and-down movement s1 – Z. One may observe this effect in old-fashioned fishing boats with a well in the middle; the water goes up and down with respect to the boat. The valves in Figure 5.15 are positioned in such a way that both during the upward movement and during the downward movement the air passes the propeller near the turbine in the same, upward direction.
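The role of the phase difference can be made concrete with a small sketch. All amplitudes and phase lags below are hypothetical illustrative values; the point is only that the relative motion s_1 − Z is itself harmonic and does not vanish when the two motions are out of phase:

```python
import math

# Hypothetical amplitudes [m] and phase lags [rad]; purely illustrative
Z0, delta_Z = 0.8, 0.3    # vertical motion of the buoy
s0, delta_1 = 1.1, 1.2    # vertical motion of the water column in the tube

# The difference of two harmonics of equal frequency is again harmonic;
# its amplitude follows from subtracting the two phasors:
A_rel = math.sqrt(s0**2 + Z0**2 - 2 * s0 * Z0 * math.cos(delta_1 - delta_Z))
print(A_rel)   # the stroke that pumps air through the turbine
```

Only for equal amplitudes and equal phases (s_0 = Z_0, δ_1 = δ_Z) would A_rel vanish and the device deliver no power.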
5.3.4 Power from the Tides
A tidal power station has a barrage or dam at a location where high tidal height differences occur. Inland there should be enough space to store a lot of water, often at the mouth of a river or at a wide cleft. With incoming tide the water will pass the dam through a system of locks; at low tide the water is let out by a hydraulic turbine to generate electricity. It is also possible to use the power of the incoming tide to drive a turbine, using the tidal wave twice. The power of the tidal station will depend on the potential energy mgh of the water inland from the dam. Here Eq. (5.40) applies. The height difference h and the flux Q [m3 s−1 ] of the outgoing water are the decisive factors in determining the economic feasibility. If one used the incoming tidal wave as well, Eq. (5.41) of the flowing river would apply. The tides are determined by the lunar cycle and are very predictable. Therefore the amount of power to be produced can be calculated accurately and sold on the market. A difficulty is the high capital expense to build the tidal power station, which will make the electricity cost high. On the other hand, the station could be combined with pumped storage during the few hours of low tide, which might increase its economic competitiveness [17].
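A rough numerical sketch of the quantities that decide feasibility (the basin area and tidal range below are illustrative assumptions, not data from the text):

```python
rho, g = 1025.0, 9.81     # sea water density [kg m^-3], gravity [m s^-2]
A = 20e6                  # assumed basin area: 20 km^2
h = 5.0                   # assumed tidal range [m]

# Emptying the stored layer through the turbines releases its potential
# energy; the layer's centre of mass sits at h/2 above the low-tide level.
E = rho * g * A * h * (h / 2)          # energy per tide [J]
P_avg = E / (12.42 * 3600 / 2)         # averaged over ~6.2 h of discharge [W]
print(E / 1e9, P_avg / 1e6)            # roughly 2500 GJ per tide, ~110 MW
```

The strong h² dependence is why only locations with exceptionally high tidal range are economically interesting.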
5.4 Bio Energy
Bio energy is the energy stored by photosynthesis in plants, algae and certain photosynthetic bacteria. This energy is contained in the food we consume and is therefore essential for life. It may also be used directly, to light a fire to keep us warm, or in its fossilized form for the many applications discussed in Chapter 4.

Plates 3 to 6 in the photo section of the book illustrate how scientists of different disciplines view photosynthesis. The top section of Plate 3 shows the leaf of a plant at face value and in microscopic enlargements. This is the view of the traditional biologist. On a molecular scale it appears that photosynthesis occurs in a membrane with a thickness of approximately 4 [nm]. This membrane is sometimes folded like a towel in a cupboard (Plate 4), effectively increasing the internal electrochemical potential differences which drive the formation of sugars. A biochemist will study the chemical reactions which take place in a single membrane and looks at photosynthesis as in Plate 5, which in fact is the picture at the bottom left in Plate 3. The biophysicist will take a closer look at the electrons created by the incoming photons, their way through the membrane and the energy levels which they occupy (Plate 6). Even simpler is to look at the thermodynamics of the process, which is the approach of Sections 5.4.1 to 5.4.4 below.

In Section 5.5 a broader and more complete physical view of the process will be given; there a simplified version of Plate 6 will be taken as the starting point (Figure 5.18). As the process is the product of many years of evolution, its physical description is complex and has to use concepts from quantum mechanics, physical chemistry and biochemistry. Section 5.6 is devoted to organic photocells, which have some features in common with photosynthetic light harvesting, while, finally, in Section 5.7 new developments in photosynthesis research and advanced applications are discussed.
Sections 5.5 to 5.7 are suitable for a more advanced ‘second round’.

5.4.1 Thermodynamics of Bio Energy
As pointed out above, cooperation between many physical and biochemical processes is responsible for the conversion of solar energy into some stable chemical form by photosynthesis. The net effect of all these processes may be summarized in the equation

H2O + CO2 + 485 [kJ/mol] → (CH2O) + O2   (5.58)
Here, 485 [kJ/mol] is the Gibbs free energy which is stored, and (CH2O) in parentheses summarizes the essential building block of a long-chain molecule. This equation is often written as

6 H2O + 6 CO2 + 2910 [kJ/mol] → C6H12O6 + 6 O2   (5.59)
Reactions (5.58) or (5.59) immediately show the advantage of using photosynthesis to absorb sunlight: it stores the energy of sunlight in the form of chemical free energy. That stored energy is liberated when reactions (5.58) or (5.59) run to the left. Figure 5.16 gives a thermodynamic representation of the photosynthetic process. A ground state ChlS, consisting of a so-called substrate molecule S and a chlorophyll molecule Chl, may absorb energy E = hν0 from a single photon and get into an excited state Chl*S with rate k_i [s⁻¹]. Rates here represent the number of transitions per molecule per second.
Figure 5.16 Thermodynamics of photosynthesis. A molecule S is present with chlorophyll molecules Chl, together represented as ChlS. The chlorophyll is excited with a rate k_i [s⁻¹] to a state Chl*S. It then either decays back with a loss rate k_l or it turns molecule S into a molecule P with a storage rate k_st. Energy is stored in P, but may be lost when P decays back with rate k_b. The vertical scale refers to the internal energy of the molecules, but at T = 0 may be read as the Gibbs free energy.
State Chl*S either returns to the ground state with a loss rate k_l (which includes loss due to conversion into heat or triplet formation), or the state uses its energy to change S into a product molecule P with storage rate k_st. In P the energy is stored, but the state may also return to state Chl*S with rate k_b. Typical rates in photosynthesis are given in Table 5.3. The two numbers given for k_i represent high-intensity incoming sunlight and low intensity on a cloudy day. One observes that there is a difference of five orders of magnitude between the two cases. For high intensity the numbers show that every 0.1 [s] a molecule is excited, which then has a probability of going to P which is 20 times as high as the loss rate. Both processes run much faster than the initial absorption. Figure 5.16 represents a qualitative model in which the photosynthetic process is described as a single-step conversion of light into Gibbs free energy. On this molecular level the chemical potentials μ_i, defined in Eqs. (4.49) and (4.50) in [J mol⁻¹], have to be divided by Avogadro's number N_A, which gives μ(T) in [J molecule⁻¹]. Their temperature dependence may be written as

μ(T) = μ0 + kT ln[Chl]   (5.60)
where the ground state of the chlorophyll molecule was taken. This formula can be made plausible by looking at the Gibbs function of n moles of an ideal gas at constant temperature. Eq. (4.51) with dT = 0 and dn = 0 then gives

dG = V dp   (5.61)
Table 5.3 Typical rates in photosynthesis [s⁻¹ molecule⁻¹].

k_l: 10⁹    k_st: 2 × 10¹⁰    k_i (bright sunlight): 10    k_i (cloudy day): 10⁻⁴
We integrate this from a pressure p0 to a final pressure p, using Eq. (3.3) for an ideal gas. This gives

G(p) − G(p0) = ∫_{p0}^{p} V dp = nRT ∫_{p0}^{p} dp/p = nRT (ln p − ln p0)   (5.62)
or

G(p) = G0 + nRT ln p   (5.63)
where the constants are put into G0. For a mixture of two ideal gases, n1 moles of gas 1 and n2 moles of gas 2, each gas has its own partial pressure, p1 and p2 respectively. The Gibbs function then becomes

G(p) = G0 + n1 RT ln p1 + n2 RT ln p2   (5.64)
The partial pressure p1 will be proportional to the concentration [gas 1], and similarly for the other gas. This gives

G(p) = G0 + n1 RT ln[gas 1] + n2 RT ln[gas 2]   (5.65)
where the constants of proportionality were put into G0. Eq. (4.50) showed that the chemical potential is found by the differentiation ∂G/∂n_i. In Eq. (5.65) G0 will be dependent on the values of n_i. We find for each gas

μ = μ0 + RT ln[gas]   (5.66)
On a molecular level we divide by N_A, use k = R/N_A and find (5.60). Accepting the plausibility of Eq. (5.60), the difference Δμ in the chemical potentials of the excited state and the ground state of the chlorophyll may be written as

Δμ = μ0(Chl*) + kT ln[Chl*] − μ0(Chl) − kT ln[Chl] = hν0 + kT ln([Chl*]/[Chl])   (5.67)
Here, at T = 0 the difference in chemical potential is equal to the excitation energy hν0 of the chlorophyll. This follows from Eq. (4.52), which reads dU = −p dV + T dS + Σ_i μ_i dn_i. For excitation of chlorophyll dV = 0, T = 0, while for the ground state dn_i = −1 and for the excited state dn_i = +1. This gives for the excitation energy dU = μ_excited state − μ_ground state, which completes our proof. In the absence of light, thermal equilibrium between the two states requires the chemical potentials to be the same: Δμ = 0. Eq. (5.67) then gives

[Chl*] = [Chl] e^{−hν0/kT}   (5.68)
which is precisely the Boltzmann distribution (2.25). This, by the way, adds to the plausibility of Eq. (5.60). As the next step we consider absorption of light, but ignore the back reaction P → S in Figure 5.16. In steady state the formation of Chl* equals its decay:

k_i [Chl] = (k_l + k_st)[Chl*]   (5.69)
The efficiency φst of the storage process is defined as one minus the probability of return to the ground state divided by the probability of formation from the ground state. Ignoring
the back reaction, indicated by a superscript 0, gives

φ_st^0 = 1 − k_l[Chl*]/(k_i[Chl]) = 1 − k_l[Chl*]/((k_l + k_st)[Chl*]) = k_st/(k_l + k_st)   (5.70)
For photosynthesis to work one should have k_st ≫ k_l, or φ_st^0 ≈ 1, which is indeed the case in nature (cf. Table 5.3). We now have to take into account the unavoidable back reaction with rate k_b. We first switch on sunlight to store energy in P. When the light is switched off again, one will have a steady state if the formation of Chl*S from P equals its decay to P and the ground state:

k_b[P] = (k_l + k_st)[Chl*S]_dark   (5.71)
Since k_st ≫ k_l this looks like an equilibrium between P and Chl*S_dark, which would imply roughly the same chemical potential:

μ(Chl*S)_dark = μ(P)   (5.72)
The final step is to consider a night and the following day. During the day the steady state of formation of Chl*S, now called Chl*S_light, and its decay is described by

k_i[ChlS] + k_b[P] = (k_st + k_l)[Chl*S]_light   (5.73)
Following the same definition as before, the efficiency of storage in a steady state with light, φ_st^ss, becomes

φ_st^ss = 1 − k_l[Chl*S]_light/(k_i[ChlS]) = (k_i[ChlS] − k_l[Chl*S]_light)/(k_i[ChlS])   (5.74)
Photosynthesis in nature is a matter of many days or weeks. It is therefore reasonable to assume that [P] changes only slowly with time, which means that [P] during the night from Eq. (5.71) may be taken equal to [P] during the subsequent day. In that case one finds from Eqs. (5.70), (5.71), (5.73) and (5.74)

φ_st^ss = k_st/(k_st + k_l) − k_l[Chl*S]_dark/(k_i[ChlS]) = φ_st^0 − k_l[Chl*S]_dark/(k_i[ChlS])   (5.75)
We may get rid of the concentrations by looking at the chemical potentials for Chl*S_dark and ChlS, using (5.60):

μ(Chl*S)_dark = μ0(Chl*S)_dark + kT ln[Chl*S]_dark = μ0(ChlS) + hν0 + kT ln[Chl*S]_dark   (5.76)

μ(ChlS) = μ0(ChlS) + kT ln[ChlS]   (5.77)
If one subtracts the second equation from the first and uses Eq. (5.72), one finds

ln([Chl*S]_dark/[ChlS]) = (μ(P) − μ(ChlS) − hν0)/kT = (Δμ_st − hν0)/kT   (5.78)
Here

Δμ_st = μ(P) − μ(ChlS)   (5.79)
is the Gibbs free energy stored. This represents the available work that one can get out of biomass. From Eqs. (5.75) and (5.79) one obtains

φ_st^ss = φ_st^0 − (k_l/k_i) e^{(Δμ_st − hν0)/kT}   (5.80)
From Figure 5.16 it is clear that the exponent will be negative. For a large negative exponent the steady-state storage efficiency is high, but the available work Δμ_st will be small. On the other hand, for available work close to the incident light energy the exponent will be only slightly negative and the storage efficiency (with the data from Table 5.3) will be small. Apparently nature has to compromise. Another way to look at the same point is to require high storage efficiency, say φ_st^ss = 0.9. With the data from Table 5.3 and Eq. (5.70) one has φ_st^0 = 0.95. From Eq. (5.80) one then can estimate values of Δμ_st − hν0 for bright and weak sunlight, using the data from Table 5.3. With T = 290 [K] and kT = 0.025 [eV] one finds

Strong sunlight: Δμ_st − hν0 = −0.54 [eV]   (5.81)
Weak sunlight:   Δμ_st − hν0 = −0.82 [eV]   (5.82)
These numbers should be compared with hν0 ≈ 2 [eV] for sunlight (see Figure 2.3). For a certain plant there is only one value of Δμ_st. However, plants which live low in a forest with weak sunlight could have a lower value of Δμ_st than plants which live in open spaces. This indeed happens in nature.
Example 5.2  Storage Efficiency
Plot the steady-state storage efficiency (5.80) as a function of hν0 − Δμ_st for k_i = 10 [s⁻¹], k_i = 1 [s⁻¹] and k_i = 0.1 [s⁻¹]. For the other parameters use Table 5.3.

Answer
The efficiency (5.70), without the back reaction, is the maximum possible efficiency and amounts to 0.953. The behaviour of (5.80), including the back reaction, is plotted in Figure 5.17. Along the horizontal axis the value hν0 − Δμ_st is plotted, which is the energy that is not stored. For high values the efficiency is close to the maximum, but the price is a low value for the energy stored.
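The same trade-off can be made quantitative with a few lines of Python. The sketch below implements Eq. (5.80) with the rates of Table 5.3 and inverts it for a required efficiency of 0.9, reproducing the values of Eqs. (5.81) and (5.82) to within rounding:

```python
import math

k_l, k_st = 1e9, 2e10        # loss and storage rates [s^-1], Table 5.3
kT = 0.025                   # [eV] at T = 290 K
phi0 = k_st / (k_st + k_l)   # Eq. (5.70), back reaction ignored: ~0.95

def phi_ss(x, k_i):
    """Eq. (5.80); x = h*nu0 - delta_mu_st [eV] is the energy not stored."""
    return phi0 - (k_l / k_i) * math.exp(-x / kT)

def unstored_energy(k_i, phi=0.9):
    """Invert Eq. (5.80): the unstored energy needed for efficiency phi."""
    return -kT * math.log((phi0 - phi) * k_i / k_l)

x_bright = unstored_energy(10.0)    # bright sunlight, cf. Eq. (5.81)
x_cloudy = unstored_energy(1e-4)    # cloudy day,      cf. Eq. (5.82)
print(x_bright, x_cloudy)           # ~0.53 and ~0.82 eV
```

The logarithm in `unstored_energy` shows directly why weak light demands a larger energy sacrifice: the smaller k_i, the further Δμ_st must sit below hν0 to keep the back reaction harmless.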
Figure 5.17 Steady-state efficiency φ_st^ss as a function of the unstored energy hν0 − Δμ_st for three values of k_i.
5.4.2 Stability
In a period of time Δt with the light switched on, the amount of Gibbs free energy stored will be

G_st = k_i Δt [ChlS] φ_st^ss {μ(P) − μ(ChlS)} [J]   (5.83)
In the dark (night or winter) the loss of free energy per second through the unavoidable back reaction is given by

R_l = k_l [Chl*S]_dark {μ(P) − μ(ChlS)} [J s⁻¹]   (5.84)
The time t_l in which the stored energy is lost again follows from Eqs. (5.83) and (5.84):

t_l = G_st/R_l = k_i Δt [ChlS] φ_st^ss / (k_l [Chl*S]_dark)   (5.85)
After harvesting, the chlorophyll responsible for the back reaction soon disappears. The energy stored in P then remains there.

5.4.3 Solar Efficiency
Section 5.4.1 discussed the efficiency of the photosynthetic process as the interplay between storage and losses due to the back reaction. To answer the question what fraction of the incoming solar energy can be stored by photosynthesis, the net reaction (5.58) has to be separated into its constituent partial reactions. There are two basic processes. The first is oxidation of H2O:

2 H2O → O2 + 4 H⁺ + 4 e⁻   (5.86)
and the second is reduction of CO2:

CO2 + 4 H⁺ + 4 e⁻ → CH2O + H2O   (5.87)
The net effect of these two reactions is found by addition and indeed gives Eq. (5.58). An incoming photon can only interact with a single electron. Eqs. (5.86) and (5.87) tell us that at least eight photons are required to run process (5.58). In fact, in order to run the process two more photons are needed to form additional molecules of ATP, a compound used for maintenance of the cell. The 10 photons are absorbed in the wavelength region of 400 [nm] to 700 [nm], which corresponds to energies of 3.1 to 1.8 [eV]. The 10 photons absorbed will have an energy of about 25 [eV]. This is equivalent to 2400 [kJ/mol], to be compared with the 485 [kJ/mol] stored in process (5.58). In photosynthesis light of wavelengths between 400 and 700 [nm] is used. At ground level this represents 42% of the energy of the total spectrum of sunlight; it is called photosynthetically active radiation (PAR). The average energy content of these quanta is 218 [kJ/(mol quanta)]. Combining all these data it is calculated that at most 9% of the energy of sunlight (considering all wavelengths) can be converted into chemical energy as new biomass. Considering only the PAR range, the efficiency is 21.4%. Based on solar irradiation data [18] it can be calculated that the maximal theoretical biomass productivity in the south of Spain is 280 [tonnes ha⁻¹ year⁻¹].
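The bookkeeping of this paragraph can be condensed into a few lines (all numbers are taken from the text; the small deviation from the quoted 21.4% comes from rounding of the averaged photon energy):

```python
n_photons = 10       # 8 for Eqs. (5.86)-(5.87) plus 2 for additional ATP
E_stored = 485.0     # Gibbs free energy stored per CH2O [kJ/mol], Eq. (5.58)
E_quantum = 218.0    # average energy of a PAR photon [kJ/(mol quanta)]
f_PAR = 0.42         # fraction of ground-level sunlight within 400-700 nm

eff_PAR = E_stored / (n_photons * E_quantum)   # efficiency within the PAR band
eff_total = f_PAR * eff_PAR                    # relative to the full spectrum
print(eff_PAR, eff_total)                      # ~0.22 and ~0.09
```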
Example 5.3  Solar Spectrum in the Photosynthetically Active Region
For a black body of 5800 [K], representing the solar emission spectrum, calculate the percentage of the energy between 400 [nm] and 700 [nm].

Answer
We use the solar emission spectrum given in Eq. (2.7) and Example 2.1:

I_E = (2πa³E³)/(h³c²) × 1/(e^{aE/kT} − 1) [m⁻² s⁻¹]   (2.7)
where E is expressed in [eV] and a = 1.602 × 10⁻¹⁹. We use E = hc/λ with the energy in [J], or rather E = hc/(aλ) with the energy in [eV]. We find that 400 [nm] corresponds to 3.10 [eV] and 700 [nm] to 1.77 [eV]. One needs a little programme to calculate the integral of Eq. (2.7) between 1.77 [eV] and 3.10 [eV] and divide it by the integral over the complete spectrum (which equals σT⁴). We find 35% of the spectrum between the wavelengths indicated. The difference from the 42% given above must be due to the structure of the solar spectrum as it reaches ground level (see Figure 2.2).

To find the efficiencies reached in practice one has to compare the harvest during a year with the solar input during that year. The rice culture in Indonesia may reach 1%, sugar cane 2%, and some algae and cyanobacteria even reach 5–6%. In particular cyanobacteria, with the old name ‘blue-green algae’, absorb in a broader spectral range than plants. Their theoretical maximum therefore will be higher than the 9% deduced above. There are several reasons to use cyanobacteria for the production of biomass from solar energy. In the first place, cyanobacteria absorb light above 700 [nm], some even close to 750 [nm]. This has been achieved by including special pigment clusters in the light-harvesting antenna that display a strongly red-shifted absorption due to favourable excitonic
interactions (see Figure 5.19 below, where the absorbing wavelengths are indicated). We note that there is a price to pay: with the reaction centre operating at 680–700 [nm], excitation energy transfer to the reaction centre will slow down, because for longer wavelengths the photons have less energy and excitation becomes more of an energetically uphill process. For the cyanobacterium with the most red-shifted light-harvesting species the efficiency of charge separation has dropped a few percent (from 98 to 95%).

The second important reason to use cyanobacteria is that their genetics are very well known, implying that with the tools of molecular biology the activity of certain genes can be amplified or suppressed. As in all photosynthetic organisms, a large fraction of the absorbed energy (more than 80%) is converted into heat under conditions of high light stress. The main reason is to protect the organism from photodamage and to achieve optimal fitness to produce next generations. For a ‘fuel-bacterium’ this optimum may be shifted to higher light intensities.

The third important reason is that the mass culture of cyanobacteria does not necessarily compete with the land use for food production. Cyanobacteria can be grown in lakes (they grow very well in the Dutch lake IJsselmeer at mid latitudes), or in specific culturing facilities [19]. Only if agricultural land were converted into ponds would the growth of cyanobacteria/marine algae compete with food production on land.

Finally, as will be discussed below (Figure 5.18), photosynthesis is driven by two photosystems that essentially compete for the same photons in the range 400–700 [nm]. In principle, if the losses following the charge separation by Photosystem 2 could be limited to less than 0.5 [eV], then the second charge separation by Photosystem 1 could be performed by photons in the 700–900 [nm] range, increasing the potential efficiency of photosynthesis to close to 15%.
It may be added that cyanobacteria in water have it easier than plants and trees on land. The cyanobacteria do not need a system of roots and capillaries to suck up water from the ground, since they already live in the water. Food and minerals can be obtained easily, if available, and a large part of the organism can be devoted to photosynthesis. We do not suggest that the production of ‘new biomass’ is easy, comparable to a ‘free lunch’. Two different aspects have to be kept in mind. The first is optimizing the organisms to the incident solar spectrum and the local climatic conditions. The second is the logistics of having sufficient CO2 and minerals available in the water, and their harvesting and handling. On both aspects much research and development has to be done and, in fact, is already in progress.
5.4.4 Energy from Biomass
If one does not have the time to wait for fossilization of biomaterials, one can harvest the bio energy directly. The easiest way is drying biomaterials and burning them; another way is composting. A more sophisticated method is the production of biogas, essentially CH4, from carbohydrates. This is still done in a traditional way at some farms which are not connected to the national grid; they have a pressure tank in a ditch where rotting plants produce methane. Gas, of course, is easier to use for cooking and heating than raw biomaterials. Another possibility is the use of manure to produce biogas. A cow, for example, is able to produce 1 [m³] or 26 [MJ] of biogas per day in her guts.
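A quick unit conversion (pure arithmetic on the figure quoted above) puts this in perspective:

```python
E_day = 26e6                   # biogas energy per cow per day [J]
P_avg = E_day / (24 * 3600)    # continuous average power [W]
print(P_avg)                   # roughly 300 W per cow
```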
Finally, one could produce ethanol and other liquid fuels out of biological raw materials. These may then be used in cars for combustion. An interesting way in which one can kill two birds with one stone is industrial gasification. Biomass, for example from sugar cane, is gasified in large reactors; the gas is used to fuel a gas turbine, which produces electricity and heat very efficiently. Part of the heat is used as input for the gasification process. What is of much help in preventing rapid climate change is that one could maintain equilibrium: plant as much sugar cane as is being gasified. In that case there is no net CO2 production, as for each (CH2O) molecule dissociating into CO2 another is formed, binding CO2. Of course care should be taken that no other greenhouse gases escape. The main net effect would be that solar energy is converted into electricity. It should be mentioned here that CH4 may be used to produce H2 gas, which is stored or transported for use in fuel cells to produce electricity (Section 4.4.3).
If it were possible to stop the process once the charge separation had been performed and construct a battery with high stability, a bio solar cell would be obtained (Section 5.7). Another interesting research direction would be to use photosynthesis to produce marketable compounds without any intermediary steps.
5.5 Physics of Photosynthesis
In Plates 3 to 6 the photosynthetic process is viewed from different scientific angles; Plate 6 represents the physicists’ approach. It is clear that even Plate 6 has to be simplified for a physics textbook, which is done in Figure 5.18. Students should not forget that nature is more complicated than the scientific simplifications. We mention two examples. The first is that the folding of the membrane, shown in Plate 4 (Ref. [20]) is not represented in Plate 5 (Ref. [21]) and Plate 6. A second is the fact that Plates 5 and 6 suggest that there is an equal number of PSII (or PS2) and PSI (or PS1) systems, while in nature the relative number of PS1s is much higher. Keeping the warnings about simplifications in mind, the research challenge is to formulate and experimentally test a model of the photosynthetic membrane in which the many functions of the membrane are included. Such a comprehensive model should describe its essential physical and mechanistic properties. In the subsections below several of these properties are discussed.
Figure 5.18 Charge separation in the photosynthetic membrane. In Photosystem 2 (PS2) the incident light results in a charge separation by oxidizing H2O, indicated bottom left. The electron travels upward (dashed line) and moves to the right while picking up a proton. Half of the electrons make a round in Cyt b6f, picking up more protons. In PS1 incident light again results in a charge separation, indicated at the bottom; the energy is stored and used to reduce CO2. A more complete picture is found in Plate 6.
5.5.1 Basics of Photosynthesis
The physics view presented in Plate 6 for plants is reproduced in an even simpler way in Figure 5.18. In the membrane, indicated by two horizontal lines, one finds two large protein complexes, which in nature are coloured green. That colour is caused by a pigment, a word used for any organic molecule which colours in light. The large complexes in the membrane are called Photosystem 2 (PS2 or PSII) and Photosystem 1 (PS1 or PSI); they operate in series: first PS2 oxidizes water, next PS1 reduces a compound called NADP⁺, which after a few steps reduces CO2. Historically PS1 was identified first and then PS2, which explains the counterintuitive ordering of the names. Both the PS2 and PS1 complexes contain close to 300 pigments, most of which serve to absorb the incident solar light to form an excited state. Next they transport the excited-state energy to a few pigments in the centre of the complex, the so-called reaction centre (RC). In these centres, both in PS2 and PS1, the excited-state energy is converted with high efficiency into a transmembrane charge separation.

We now describe the process, leaving details to the following subsections. The indications ‘top’ and ‘bottom’ in our description refer to the picture and not to reality, where the ‘top’ may be underneath the ‘bottom’. An absorbed photon, at the top of the membrane near PS2 on the left, creates an excited state, travelling via many chlorophylls to the reaction centre, indicated by a crooked line. In the reaction centre the incoming energy is used for charge separation, producing an electron and a positive charge. The electron travels upward quickly, indicated by the long grey dashes. The positive charges are stored in a manganese complex at the ‘bottom’ of the membrane until four positive charges have been accumulated. The resulting energy is used to oxidize water. The net effect corresponds to the half-reaction

in PS2: 2 H2O + 4hν → 4 e⁻ + 4 H⁺ + O2   (5.88)
The liberated protons sustain a relatively low pH ≈ 5 at the bottom of the membrane. The electrons at the opposite side of the membrane travel to PS1 along a path (indicated by small dots) that leads them along a complex called Cyt b6f. The moving electron picks
up one of the available protons at the top, which then moves across the membrane again sustaining the concentration gradient of protons between the top and the bottom of the membrane. As a consequence of photosynthesis the top has a pH ≈ 8. Without photosynthesis this gradient would gradually disappear. The potential energy of the gradient is used to generate ATP, a molecule which stores energy in such a way that a cell can use it for chemical or physical work. This effect is reinforced, as half of the electrons from PS2 make a detour to Cyt b6 f picking up more protons. In PS1, like in PS2, an incoming photon creates an excited state, again travelling along many chlorophylls in a few [ps] to the reaction centre, where a second charge separation takes place. Here the created electrons reduce a compound called NADP+ on top of the membrane to form NADPH, while the electron hole is neutralized by an electron from PS2. The compound NADPH together with ATP is necessary for the formation of (CH2 O) in the half-reaction where CO2 is reduced in PS1 : CO2 + 4 e− + 4 H+ + 4 hv → (CH2 O) + H2 O
(5.89)
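The atom and charge bookkeeping of half-reactions (5.88) and (5.89) can be verified mechanically. A small Python sketch (the species and stoichiometric counts are taken directly from the two equations; photons carry no atoms or charge and are left out of the balance):

```python
from collections import Counter

# A species is a pair: (atom counts, electric charge).
def species(atoms, charge=0):
    return Counter(atoms), charge

H2O   = species({'H': 2, 'O': 1})
O2    = species({'O': 2})
CO2   = species({'C': 1, 'O': 2})
CH2O  = species({'C': 1, 'H': 2, 'O': 1})
Hplus = species({'H': 1}, +1)
e     = species({}, -1)

def total(side):
    """Sum atoms and charge over a list of (count, species) pairs."""
    atoms, charge = Counter(), 0
    for n, (a, q) in side:
        for element, k in a.items():
            atoms[element] += n * k
        charge += n * q
    return atoms, charge

# PS2 half-reaction (5.88): 2 H2O + 4 hv -> 4 e- + 4 H+ + O2
assert total([(2, H2O)]) == total([(4, e), (4, Hplus), (1, O2)])

# PS1 half-reaction (5.89): CO2 + 4 e- + 4 H+ + 4 hv -> CH2O + H2O
assert total([(1, CO2), (4, e), (4, Hplus)]) == total([(1, CH2O), (1, H2O)])
```

Adding the two half-reactions, the electrons and protons cancel and the overall reaction CO2 + H2 O → (CH2 O) + O2 remains.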
Note that according to the description given, the electrons and protons in the two reactions (5.88) and (5.89) are different entities, although their sum gives the overall photosynthetic reaction (5.58). Also note that reactions (5.88) and (5.89) do not show the intermediary biochemical reactions in which NADPH and ATP play an essential role. In fact, the cell requires additional ATP, which explains why, in estimating the overall efficiency in Section 5.4.3, two extra photons on top of the eight in Eqs. (5.88) and (5.89) were used. These two photons generate electrons in PS1, which make a trip to Cyt b6 f, picking up protons. To summarize: four photons on PS2 take care of half-reaction (5.88), four photons on PS1 take care of half-reaction (5.89), while two photons on PS1 are needed for additional ATP. As a matter of interest we make two more comments. The first is that the same complex set of electron and proton transfer reactions as occurs in the complex Cyt b6 f, discussed below Eq. (5.88), was copied by evolution in the human respiratory system. There processes (5.88) and (5.89) run from right to left: the energy stored in food is converted into a proton gradient by transferring electrons to reduce the oxygen in our lungs to water vapour. Part of this energy is transferred to ATP. Together with the remaining proton gradient this energy is readily available to perform work. A second comment is that the two photosystems PS1 and PS2 were developed during evolution in separate sets of organisms. Their combination is not necessarily the best way of converting solar energy into biomass. Research is therefore ongoing into how to ‘improve’ on evolution.
5.5.2 Light-Harvesting Antennas
More than 99% of the pigments in a photosynthetic system operate as light-harvesting antennas: their job is to absorb light and create an electronic excited state. This is a state of a molecule or complex in which an electron occupies an energy level higher than in the ground state. Later on we will encounter other excited states, in which the internal structure of a molecule vibrates around an equilibrium configuration. After absorption of a photon the excited pigments transfer this electronic excited state to the reaction centre of PS2 or PS1, where it is used to drive charge separation. Plate 7 and Figure 5.19 summarize this for one of the photosynthetic purple bacteria [23]. Bacteria are
Figure 5.19 Light harvesting in a photosynthetic purple bacterium. Light, entering from the right, is absorbed by the disk-like and diamond-like bacteriochlorophylls with wavelength maxima as indicated. The ‘crooked’ linear molecules, carotenoids, absorb between 400 [nm] and 500 [nm] and transfer their excitation energy in 0.7 [ps] to the bacteriochlorophylls. Within the ring the excitation turns around in 80 or 100 [fs], between the rings in 3 [ps] and from LH1 to the RC in 35 [ps] where within 3 [ps] charge separation occurs. The times mentioned are approximate. (Reprinted from Current Opinion in Structural Biology, Femtosecond spectroscopy of photosynthetic light-harvesting systems, Fleming & van Grondelle, 738–748, Copyright 1997 with permission from Elsevier.) A similar picture is reproduced in Plate 6.
studied in detail by many authors, which is the reason why they are used as an illustration. For plants the photosynthetic mechanism is essentially the same; the differences are in details such as the absorbing wavelengths and transition times. In Figure 5.19 two pigments bound to proteins are organized as an energetic funnel. Both are bacteriochlorophylls, indicated by ellipses and diamonds. In the periphery, on the right in Figure 5.19, one finds the LH2 complex, where B800 and B850 indicate the wavelengths of the absorption maxima of its components. The ‘crookedly’ drawn molecules near the bacteriochlorophylls are linear polyenes called carotenoids, named after the well-known orange carrots, as they absorb mainly between 400 and 500 [nm]. The bacteriochlorophylls and the carotenoids together form the light-harvesting or antenna system. The carotenoid excitations are transferred to the bacteriochlorophylls in less than 1 [ps]. Within a ring a single excitation moves around very rapidly, in about 100 [fs], while the transfer between rings takes of the order of 3 [ps]. The transfer from the LH1 ring to the reaction centre happens in about 35 [ps], after which the charge separation within the RC again happens fast, in 3 [ps]. These different time scales are relevant, as any comprehensive model of photosynthesis has to reproduce them. The transfer has to be fast in any case, for in about 1 [ns] an excitation is converted into heat. Plant antenna systems operate similarly to those in the purple bacteria; they use somewhat different chlorophylls, again with carotenoids.
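These time scales fix the quantum efficiency of the antenna. Treating each transfer step as a branching between the forward rate 1/τstep and the ~1 [ns] loss rate is a deliberate simplification (the real kinetics form a network); the times below are the approximate figures quoted above:

```python
# Approximate forward transfer times from Figure 5.19, in picoseconds.
steps_ps = {
    'carotenoid -> bacteriochlorophyll': 0.7,
    'ring -> ring (LH2 -> LH1)': 3.0,
    'LH1 -> reaction centre': 35.0,
    'charge separation in RC': 3.0,
}
tau_loss_ps = 1000.0  # excitation lost as heat in about 1 [ns]

efficiency = 1.0
for step, tau in steps_ps.items():
    k_fwd, k_loss = 1.0 / tau, 1.0 / tau_loss_ps
    branch = k_fwd / (k_fwd + k_loss)   # probability that this step wins
    efficiency *= branch
    print(f'{step}: {branch:.4f}')

print(f'overall quantum efficiency ~ {efficiency:.3f}')
```

Even the slowest step, the 35 [ps] hop into the RC, loses only a few per cent against the 1 [ns] decay, which is why the overall efficiency remains close to unity.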
A ‘good question’ is why light-harvesting antennas are needed at all: why could the reaction centres not do the job of light absorption themselves? There are several reasons. The first is that photosynthesis must be able to operate at low levels of light intensity, which generate much less than one electronic excitation per molecule of chlorophyll per second: ki ≪ 1 [s−1], see Table 5.3. Yet the biochemical reactions associated with photosynthesis require several electron transfer events. Eq. (5.88), for example, shows that four electronic excitations are needed for the oxidation of water. These four excitations must occur within 0.1 to 1 [s], otherwise the excitation energy will dissipate into heat. The many antennas concentrate the light energy gathered by hundreds of light-absorbing pigment molecules onto a single reaction centre. The second reason for the light-harvesting antennas is that with many antennas a few reaction centres suffice to absorb the light energy necessary for the survival and reproduction of the organism. This is beneficial because reaction centres consist of a complex system of many different compounds and molecules (see Plate 8, left [24]; right [25]) and therefore require a large investment in resources. Antennas are simpler and ‘cheaper’ to produce, even in large numbers. The third reason is that different antennas may contain different pigments; in this way they absorb in different parts of the solar spectrum. Consequently a broad range of excitation energies is fed into the same reaction centre. A fourth reason is that the antennas point in all possible directions and therefore absorb solar radiation from all directions. The consequence is that the variation of the direction of the incoming solar light during the day influences the photosynthesis only a little. Finally, the antennas contain many pigment proteins.
If the intensity of the incoming sunlight is so great that it might damage the organism, the interplay of these pigment proteins provides a quenching mechanism, stopping the transfer of the electronic excitation. This is explained in Section 5.5.6.
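The first of these reasons can be made quantitative with a back-of-the-envelope estimate. The per-chlorophyll excitation rate below is an assumed low-light value, not a figure from Table 5.3:

```python
k_i = 0.1        # assumed excitations per chlorophyll per second in weak light
N_antenna = 300  # pigments feeding one reaction centre (Section 5.5.1)

rate_bare_rc = k_i                   # a lone RC chlorophyll absorbing by itself
rate_with_antenna = N_antenna * k_i  # excitations funnelled into the RC

# Mean waiting time to collect the 4 excitations needed for Eq. (5.88):
wait_bare = 4 / rate_bare_rc
wait_antenna = 4 / rate_with_antenna
print(f'bare RC: ~{wait_bare:.0f} [s]; with antenna: ~{wait_antenna:.2f} [s]')
```

With the antenna the four excitations arrive comfortably within the 0.1 to 1 [s] window; without it the reaction centre would wait tens of seconds, long after the stored excitations had decayed to heat.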
5.5.3 Energy Transfer Mechanism
The energy transfer between the antennas shown in Figure 5.19 or Plate 7 and the reaction centre (RC) is amazingly efficient. In order to explain the mechanism a top view of the RC of a purple bacterium is shown in Figure 5.20. Plate 8 shows the same figure in colour and a similar picture for a plant. In the photosynthetic bacterium a ring of 30 bacteriochlorophylls surrounds the reaction centre; a similar ‘ring’ occurs in Photosystem 1 of plants, but it may consist of 100 chlorophyll molecules. The photons absorbed in the antennas produce electronic excited states, whose energy is transferred to the RC in 35 to 40 [ps]. In Figure 5.20 the size 10 [nm] is indicated, illustrating that the pigment density is high, with a distance of less than 1 [nm] between neighbouring pigments. In such a densely packed structure the pigments are coupled by the interaction between their electronic dipole moments μ = −Σi qi ri, which were introduced in Eq. (2.15). The dipole–dipole interaction is written as μ1 ·μ2 , where μ1 and μ2 refer to the two interacting electric dipoles. Without solving quantum mechanical equations it is possible to make a few observations. The first observation is that the range of the dipole–dipole interaction is of the order of 10 [nm], while the distance between the pigments is about 1
Figure 5.20 Top view of the reaction centre (RC) and the surrounding ring of 30 bacteriochlorophylls for one of the photosynthetic purple bacteria. The excitation of the chlorophylls is transferred within 35 to 40 [ps] to the RC pigments in the centre, where a transmembrane charge separation is initiated. The membrane is in the plane of the drawing, the charge separation is perpendicular to this plane. (With permission of the AAAS, copyright 2003, reproduced from Chapter 5, Ref. [25], Fig 2C. Also see Fig. 5.20. This version was kindly made available to us by Dr Roszak and Dr Cogdell of Glasgow University.) For more details see Plate 8 (right).
[nm]. Consequently excited states have to be described as linear combinations of the states ϕn of single pigments:

ψk = Σn ckn ϕn    (5.90)
A delocalized excited state like ψk is called an exciton, where the ending -on suggests that it has certain properties like a particle. The coefficients ckn describe the amplitude with which the states ϕn participate in the exciton ψk . The degree of delocalization can vary between one (i.e. no delocalization) and 10 pigments. The number depends on the strength of the dipole–dipole coupling and the unperturbed site energies. These energies vary, for the excitation energies of each pigment depend on the interaction between the pigment and its protein environment, even without the perturbation of the dipole–dipole interaction. A second observation is that several exciton states ψk will exist, differing in the amplitude with which the states ϕn participate in ψk . If a particular exciton state is excited, thermal fluctuations of the surroundings and interactions of a particular state ϕn with the surrounding proteins will cause transitions to other exciton states. The overlap between exciton states ψk and ψl is given by

⟨ψk |ψl ⟩ = Σn,m c*kn clm ⟨ϕn |ϕm ⟩ = Σn c*kn cln    (5.91)
Figure 5.21 Coherence between exciton states in a purple bacterium. A ring of 18 bacteriochlorophylls, numbered on the horizontal axis, is the active part of a light-harvesting complex LH2. The ‘position’ of the excitation is observed in the lighter regions; the dark centre has the highest probability of finding the excitation. The vertical time scale shows that there are two major excitons: the first localized at numbers 2 and 4, the second at numbers 10 and 11. Interaction with the surrounding pigments causes the excitation to jump between the two exciton states. (With permission of Elsevier, copyright 2006, reproduced from Chapter 5, Ref. [26], Fig. 7. Also published in Chapter 5, Ref. [27]. This version was carefully prepared by Dr Novoderezhkin.) Also see Plate 9.
The transition rate between the two exciton states is determined by the overlap squared, multiplied by two factors, as follows:

Wkl = Σn,m c*kn cln ckm c*lm ⟨v*n vm ⟩ Jkl    (5.92)
The factor Jkl is called the spectral density and reflects the density of states as a function of energy. This function can be determined experimentally from low-temperature fluorescence spectra. The factor ⟨v*n vm ⟩ reflects the average coupling of the locally excited states ϕn and ϕm to the vibrations of the pigments. This factor is difficult to guess; it is therefore taken to be equal for all transitions and acts as a scaling factor. In this way Eq. (5.92) gives an excellent fit to the observed energy transfer in photosynthetic proteins. The third observation is that the excitons display coherence. This is best explained by looking at Figure 5.21 or Plate 9, which was measured with [fs] laser pulses on the light-harvesting complex LH2 of the purple bacterium [26], [27]. This complex consists of 18 bacteriochlorophylls organized similarly to Figure 5.20. The chlorophylls are numbered 1 to 18 along the horizontal axis of Figure 5.21.
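The exciton picture of Eqs. (5.90) and (5.91) can be made concrete with a toy numerical model of such a ring. The coupling strength, the disorder width and the restriction to nearest-neighbour coupling are all illustrative assumptions, not fitted LH2 parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 18          # bacteriochlorophylls in the LH2 ring
J = -300.0      # nearest-neighbour dipole-dipole coupling [cm^-1], illustrative
sigma = 200.0   # static disorder of the site energies [cm^-1], illustrative

# Site basis: disordered energies on the diagonal, ring coupling off-diagonal.
H = np.diag(rng.normal(0.0, sigma, N))
for n in range(N):
    H[n, (n + 1) % N] = H[(n + 1) % N, n] = J

# Columns of C hold the coefficients c_kn of Eq. (5.90).
energies, C = np.linalg.eigh(H)

# Inverse participation ratio: over how many pigments is each exciton spread?
ipr = 1.0 / np.sum(C**4, axis=0)
print('delocalization of the lowest exciton: %.1f pigments' % ipr[0])
```

The eigenvectors are orthonormal, so the double sum of Eq. (5.91) indeed collapses to Σn c*kn cln = δkl, and the participation ratio falls between 1 (fully localized) and 18, in line with the few-pigment delocalization quoted above.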
In Figure 5.21 one may distinguish two excitons, one centred on numbers 2 and 4, and the other centred on 10 and 11. The vertical time axis shows that first the 2–4 exciton is excited; after 350 [fs] the 10–11, then again after 350 [fs] the 2–4 and finally again the 10–11. The regularity in this measured pattern must originate in properties of the wave function of the excitation, which apparently ‘remembers’ a previous state. This property is called coherence. It may be added that on a time scale of about 60 [fs] the excitation moves between numbers 2 and 4, and later on the same scale between 10 and 11. A similar phenomenon is observed macroscopically in a series of one-dimensional pendulums which are weakly coupled by springs. If one excites a single pendulum, one in fact excites them all, as is obvious from the fact that the excitation is transferred from one pendulum to the next and further. The excitation apparently is not localized, and the collective excitation may be called a ‘pendulon’. After some time the excitation is back at number 1 and the process repeats: coherence. Coherent excitations such as shown in Figure 5.21 are found in all photosynthetic systems: in algae and plants, as well as in bacteria. It stands to reason that the coherence shown is responsible for the rapid and efficient energy transfer within the light-harvesting complex. It also makes it possible for the reaction centre to pick up the excitation energy in a much slower process of about 35 [ps].
5.5.4 Charge Separation
In the reaction centre a charge separation takes place following excitation of one of the pigments. Plate 10 shows the reaction centre for a purple bacterium, in this case called Rhodopseudomonas viridis. The right-hand figure of that plate is reproduced in Figure 5.22. The excitation of the reaction centre starts at a special pair of bacteriochlorophylls P in the heart of the complex, where an electron acquires enough energy to leave P and jump from molecule to molecule, leaving a positive charge behind near P. Thus in P, drawn at the top of the figure, the charge separation is initiated. In steps of 3 [ps], 1 [ps] and 200 [ps] the electron arrives at a compound of the class of quinones; after 200 [μs] the electron, together with a proton, is taken up by a second quinone. A second excitation leads to a second turnover, producing a quinol, which is a doubly reduced, doubly protonated quinone. This compound leaves the RC and is replaced by a fresh quinone. The effective reaction after the two turnovers is Quinone + 2 e− + 2 H+ → quinol
(5.93)
After this process has taken place, two positive charges are left near P, and the uptake of the protons by the quinone leaves two negative charges at the opposite side of the membrane: charge separation. Interesting and poorly understood is why in the purple bacteria the electron only travels along one of the two branches of pigments shown in Figure 5.22 and not along the other. In plants it is even more peculiar. In the RC of the plants’ PS2 the special pair P cannot be distinguished; in fact it has been established that the charge separation often starts at a chlorophyll in the active branch. Again it only occurs in one of the two branches. In the other plant RC, that of PS1, the electron transfer can occur along both branches. A matter that is well understood is why the first electron transfer steps in photosynthetic RCs accelerate upon lowering the temperature, while usually reaction rates go down with
Figure 5.22 Charge separation in the reaction centre of a purple bacterium. The membrane is shown and the sequence of electron transfer events, starting in a special pair P, which leads to transmembrane charge separation. Note that in this picture the charge separation is indicated on top, while in Figure 5.18 it is drawn at the bottom. (Kindly made available to us by Prof Cogdell and Dr Roszak of Glasgow.) Also see Plate 10 (right).
lowering of the temperature. The reasoning is illustrated in Figure 5.23. We explain the figures by looking at the left one. An electron is transferred from a donor D to an acceptor A. This happens when they are close but still at a finite distance r. Donor and acceptor form a complex DA with a minimum of free energy, indicated by M in Figure 5.23. When their internal orientation changes, or the surrounding proteins obtain a slightly different orientation, the energy becomes higher. All possible changes are summarized in a single coordinate q indicated along the horizontal axis. The corresponding energy curve is often assumed to be a parabola, reflecting harmonic motion with respect to q. After the electron has been transferred the complex is indicated by D+ A− . The minimum of the free energy will be lowered by ΔG > 0, otherwise no transfer would take place. At equilibrium the structure of the complex will be a little different from DA; therefore the position of the minimum will be at another q-value, indicated by N in the figure. The shape of the energy curve for D+ A− is assumed to be the same parabola as for DA. The configuration of DA with the same q-orientation as the D+ A− equilibrium state, both internally and with respect to the protein environment, has energy λ, which is therefore called the reorganization energy. The left figure is drawn with λ > ΔG, the middle one has λ = ΔG and the right one has λ < ΔG.
Figure 5.23 Electron transfer from donor D to acceptor A with a certain free energy difference ΔG between the complex DA and the complex D+ A− . The horizontal coordinate q summarizes all displacements from the free energy minima M and N of DA and D+ A− , respectively. The free energy of DA at the minimum N of D+ A− is defined as ΔG + λ, where λ is called the reorganization energy: the energy required for DA to have an internal organization corresponding to the ground state of D+ A− .
At this stage it has to be pointed out that the electrons move much faster than the nuclei within a complex. That implies that they adapt to the energy curves shown in Figure 5.23. Consequently at point C, where the two parabolas cross, the electron has the same energy in DA as in D+ A− . This is the optimal situation for electron transfer. As donor and acceptor are still a distance r apart, the electron has to tunnel through a potential barrier, basically the protein medium between D and A. Again consider the left graph of Figure 5.23 (λ > ΔG) and assume that the electron levels near C, whose occupation is determined by a Boltzmann factor (2.25), are not completely occupied. On lowering the temperature, fewer levels near point C will be occupied and the reaction rate will go down. This is the usual situation. In the middle graph (λ = ΔG) the electrons in the lowest states of DA can tunnel to D+ A− . In this case, on lowering the temperature the occupation of the lower levels will increase and the transfer rate will increase as well. In the drawing on the right (λ < ΔG) the energy of point C is only slightly above that of the DA minimum, so again on lowering the temperature the occupation of the lower levels will increase and the electron transfer will increase, albeit on a somewhat smaller scale. There is a second effect, though. Consider a state of DA a little below point C. The configurations DA and D+ A− are then only a little different. It is therefore possible that the DA configuration itself tunnels through the barrier, adopting the D+ A− configuration. This is called nuclear tunnelling; the electrons, moving much faster, follow the configuration and tunnel as well. Look again at the drawing on the left (λ > ΔG). One may move point N to the left until the y-value of point C becomes the same as that of the right figure. Again, on lowering the temperature more low-lying levels will be occupied and more electrons may tunnel.
Nuclear tunnelling, however, will not take place, as the configurations DA and D+ A− differ much more than on the right; consequently the reorganization energy λ is much higher (Exercise 5.23).
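The temperature dependence sketched above can be illustrated with the classical Marcus expression, in which the crossing point C corresponds to an activation energy (λ − ΔG)²/(4λ). The sign convention follows the figure (ΔG > 0 is the free-energy drop), the coupling V is held fixed, and the numerical values are illustrative only:

```python
import math

kB = 8.617e-5  # Boltzmann constant [eV/K]

def marcus_rate(dG, lam, T, V=1.0):
    """Classical Marcus electron-transfer rate (arbitrary units).
    dG > 0 is the free-energy drop, lam the reorganization energy [eV]."""
    Ea = (lam - dG)**2 / (4 * lam)   # barrier at the crossing point C
    return V**2 / math.sqrt(4 * math.pi * lam * kB * T) * math.exp(-Ea / (kB * T))

# lam > dG (left panel of Figure 5.23): the usual activated case
print('lam > dG, cooling slows transfer:',
      marcus_rate(0.1, 0.5, 150) < marcus_rate(0.1, 0.5, 300))

# lam = dG (middle panel): activationless, transfer speeds up on cooling
print('lam = dG, cooling speeds up transfer:',
      marcus_rate(0.3, 0.3, 150) > marcus_rate(0.3, 0.3, 300))
```

For λ = ΔG the exponential barrier vanishes and only the T^(-1/2) prefactor remains, so the rate grows as the temperature drops, exactly the anomalous behaviour observed in photosynthetic reaction centres.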
5.5.5 Flexibility and Disorder
An intrinsic property of biological matter is that on the one hand it is ordered, as shown by X-ray diffraction, which reveals crystal-like structures (Plate 8), while at the same time it is energetically disordered. This phrase refers to the variation of the electronic excitation energies of the pigments caused by small fluctuations in their local protein environment. This energetic disorder is often of the order of the energy difference between exciton states, or of the energy difference between the lowest exciton state and the first charge-separated state. The disorder described here may be advantageous because of the broadening of the absorption lines, leading to more coverage of the solar spectrum. It may equally be disadvantageous, as a certain pathway in a certain realization may contain several uphill steps, which will not be very efficient. This disadvantage may be overcome, for it has been demonstrated that an exciton may travel by several pathways: if one of them is blocked by accidental disorder, another will be taken. Figure 5.24 (or Plate 11) shows three possible pathways for charge separation in the RC of PS2. On the left of the left frame one finds a column with eight pigments and a PD2 + PD1 − state, each with their absorbing wavelength without interaction. They are, however, weakly coupled, leading to exciton states with their absorbing wavelengths on the right. In each exciton state several pigment excitations will participate; those with significant amplitudes (Eq. (5.90)) are indicated by connecting lines. The absorption spectrum is displayed a little further to the right. It exhibits a broad peak, which is separated into individual peaks corresponding to the exciton states. In fact, in this way the energies of the exciton states were reconstructed. The middle frame shows three possible charge separation pathways in the RC. The pathway taken in a specific case is found by applying an electric field, leading to so-called Stark spectra.
The data points are shown in the right frame. It appears that the top case (which incidentally corresponds to charge separation in the special pair P) reproduces the Stark experiments; the other two pathways are found in other experiments. The conclusion is that all three (and perhaps more) pathways are possible. Applying an electric field amplifies and visualizes one particular pathway; other experimental conditions will amplify other pathways. In nature the three pathways shown, or even more, may be realized, depending on the ambient conditions. The flexibility of the pathways may to a large extent overcome the large disorder intrinsic to biological matter.
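The benefit of having several pathways can be illustrated with a toy Monte Carlo model: each disorder realization assigns random free-energy levels to the steps of a pathway, and a pathway is counted as usable only when every step runs downhill. The step count, average drop and disorder width are illustrative numbers, not PS2 parameters:

```python
import random

random.seed(1)

def downhill(n_steps=4, sigma=0.5, drop=0.3):
    """One disorder realization of a single pathway: successive free-energy
    levels with an average drop per step plus Gaussian disorder.  The
    pathway is usable only if every step is downhill (toy model)."""
    e, ok = 0.0, True
    for _ in range(n_steps):
        e_next = e - drop + random.gauss(0.0, sigma)
        ok = ok and (e_next < e)
        e = e_next
    return ok

trials = 20000
p_single = sum(downhill() for _ in range(trials)) / trials
p_any = sum(any(downhill() for _ in range(3)) for _ in range(trials)) / trials
print(f'one pathway usable: {p_single:.2f}; at least one of three: {p_any:.2f}')
```

With strong disorder a single fixed pathway is often blocked by an uphill step, but the probability that at least one of three independent pathways survives is markedly higher, which is the redundancy argument made above.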
5.5.6 Photoprotection
Even the average sunlight at mid latitudes is already too much for the average plant. With increasing light intensity the number of charge separations increases, and the negative and positive charges thus produced cannot all be dealt with. In that case recombination of positive
Figure 5.24 Exciton structure of the PS2 RC. In the left frame the unperturbed absorption wavelengths of eight pigments and a charge-separated state are given. The spectrum somewhat to the right in the left frame is analysed as an envelope of nine exciton states with energies shown. The participation of unperturbed states in the exciton states is indicated by connecting lines. In the middle frame three possible pathways for charge separation are given. In the right frame the top pathway (where the special pair P is used for charge separation) reproduces the experiments and is the favoured pathway in the presence of an external electric field. See also Plate 11. (With permission of Elsevier, copyright 2007, based on Chapter 5, Ref. [30], Figs 4, 7, 8. This version was kindly made available by Dr Novoderezhkin.)
and negative charges occurs with increasing probability, liberating energy, which leads to excited chlorophyll states. In particular chlorophyll triplet states (with total spin S = 1) are dangerous, as they react efficiently with molecular oxygen, leading to damage in the PS2 RC. Even in average sunlight, every half hour one of the proteins in the PS2 RC, called D1, is damaged to such a degree that the whole PS2 photosystem has to be taken apart in order to replace the damaged D1 protein. For this reason plants take a lot of trouble to reduce the possible damage as much as possible. The mechanism to achieve this must be based on physical principles, as large fluctuations in solar intensity can occur in a matter of seconds, for example when a cloud passes in front of the sun. It has been discovered that the light-harvesting antennas of PS2 can switch between at least two states. In one state the antennas absorb solar photons, whose energy is passed
on to the PS2 RC with high efficiency, to produce charge separations; in the other state absorbed solar energy is rapidly converted into heat. In fact the LH2 complex shown in Figure 5.19 and Plate 7 for the case of purple bacteria (which is similar in plants) has such an intrinsic capacity. It may even switch between three states: the ‘normal’ light-harvesting state, a quenched state and a third state in which the absorbed energy is converted into light in the red, between 680 [nm] and 730 [nm], releasing the remainder of the energy as heat. Under the influence of sunlight the equilibrium between the three states in LH2 is shifted to the quenched and red-shifted states. That will lead to fewer charge separations and less damage to the plant. The LH2 seems to operate like a nano-dimming switch that is used to gradually switch the light-harvesting function of the PS2 antennas on and off. The mechanism inducing this may be the build-up of a proton gradient at the lower side of the membrane (Figure 5.18 and Plate 6).
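The effect of such a dimming switch can be illustrated with the same rate-branching picture used for the antenna. All rates below are assumed order-of-magnitude values, not measured PS2 parameters:

```python
def rc_yield(k_transfer, k_loss, k_quench):
    """Fraction of absorbed excitations delivered to the PS2 RC when the
    antenna decays by transfer, intrinsic loss, and (optional) quenching."""
    return k_transfer / (k_transfer + k_loss + k_quench)

# Illustrative rates in ns^-1: transfer ~ (35 ps)^-1, loss ~ (1 ns)^-1.
k_T, k_L = 1 / 0.035, 1 / 1.0

unquenched = rc_yield(k_T, k_L, 0.0)
quenched = rc_yield(k_T, k_L, 1 / 0.005)   # strong quencher, ~(5 ps)^-1
print(f'light-harvesting state: {unquenched:.2f}; quenched state: {quenched:.2f}')
```

Switching on a quenching channel that is faster than the transfer to the RC collapses the delivery yield from near unity to a small fraction, which is exactly the behaviour required of a protective dimming switch.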
5.5.7 Research Directions
From the discussion in the previous sections it will be clear that several details of the photosynthetic process need to be better understood. We mention:
(1) The precise mechanism of photoprotection (Section 5.5.6).
(2) The reason why in many, but not all, photosynthetic organisms the charge separation takes place along only one branch of the pigments (Section 5.5.4).
(3) How to bypass the need for the two extra photons on top of those in Eqs. (5.88) and (5.89).
(4) Whether the quantum coherence illustrated for the antennas, and also present in the charge separation process, was optimized during evolution.
(5) The possibility of designing an ‘artificial leaf’.
A further challenge is to formulate and experimentally test a model of the complete membrane in which all the functions mentioned are included. One has to add the physical properties of the membrane as well as the organization at the supramolecular level and its relation to the functional and physical state of the system. The goal of the model should be to predict the amount of electron transport and the accumulation of Gibbs free energy as a function of light intensity. Such a model could be used to optimize the photosynthetic system, for example by utilizing a larger part of the solar spectrum. It may also be used to increase the yield of some specific product, for example carbohydrates or hydrogen, for use as fuel. These applications amount to changing parts of the system by biological or genetic engineering and belong to the realm of systems biology and synthetic biology. A different research direction, which, however, will profit from the comprehensive model mentioned above, is to use the charge separation of photosynthesis to produce a bio-photocell. Before discussing artificial photosynthesis in Section 5.7 we devote Section 5.6 to organic solar cells, which have features of both photosynthesis and photovoltaics.
5.6 Organic Photocells: the Grätzel Cell
In photovoltaic cells based on semiconductors, as discussed in Section 5.1.3, the photoactive material, such as Si, performs both the function of light absorption and that of current generation by the creation of an electron–hole pair (charge separation). This dual function requires highly purified materials and consequently increases the production costs. Also, at low light intensities semiconductor-based solar cells are relatively inefficient, due to charge recombination by impurities present in the photoactive material.
5.6.1 The Principle
In the organic photovoltaic cell [31], [32] the two functions, light absorption and charge transfer, are physically separated, very similar to the natural process of photosynthesis (Section 5.5). We will focus on the Grätzel cell, named after its inventor, of which a schematic arrangement is shown in Figure 5.25.
Figure 5.25 The Grätzel cell. After arrival of a photon, steps 1 to 4 follow in this order. Horizontal bars denote the energy levels of the states involved. The system consists of a sensitizer, indicated as a S+ /S redox couple, in this case the dye RuL2 (SCN)2 , adsorbed on TiO2 nanoparticles deposited on an electrode; essential is the 3I− /I3 − redox couple in aqueous solution. The formal potential of this solution is given by the midpoint, where the redox couple is 50% oxidized and 50% reduced. This happens at 0.2 [V] measured versus a counter electrode. In the semiconductor on the left its Fermi level EF is indicated. (Reproduced with permission from [31], copyright 1991 Macmillan Magazines.)
In studying Figure 5.25 the student should keep in mind that in physical chemistry oxidation is defined as the ejection of one or more electrons; reduction is the opposite process: accepting electron(s) ([33], esp. Chapter 10). Redox couples are defined by

Ox + ν e− → Red
(5.94)
where Ox is being reduced to Red by accepting ν electrons. If one reads the reaction from right to left, Red is ejecting ν electrons and oxidized to Ox. An example of a redox couple is the one indicated in Figure 5.25 − − I− 3 + 2e → 3I
(5.95)
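The ‘formal potential’ at the midpoint, mentioned in the caption of Figure 5.25, follows from the Nernst equation. A sketch (the E0 value and the use of a simple activity ratio are illustrative assumptions; for the couple of Eq. (5.95) the reduced-side activity strictly enters cubed):

```python
import math

R, F = 8.314, 96485.0   # gas constant [J mol^-1 K^-1], Faraday constant [C mol^-1]

def nernst(E0, nu, ox, red, T=298.15):
    """Potential [V] of the couple Ox + nu e- -> Red at the given activities."""
    return E0 + (R * T) / (nu * F) * math.log(ox / red)

E0 = 0.2  # illustrative formal potential [V] versus the counter electrode

# At the midpoint the couple is 50% oxidized and 50% reduced; the
# logarithmic term vanishes and the measured potential equals E0:
assert abs(nernst(E0, 2, ox=0.5, red=0.5) - E0) < 1e-12

# Shifting the mixture toward the reduced form lowers the potential:
assert nernst(E0, 2, ox=0.1, red=0.9) < E0
```

This is why the midpoint potential characterizes the couple independently of the exact concentrations: any 50/50 mixture reproduces E0.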
In practice one will always have two redox couples exchanging electrons, one going from left to right in Eq. (5.94) and the other going from right to left. In Figure 5.25 the other redox couple is the dye, indicated by S+ /S, where S stands for sensitizer. The tendency of a redox couple to be reduced is given by its potential [V] as measured against a standard electrode (which in this case has a potential of 0.34 [V] against the standard hydrogen electrode). The redox couple which has the lower potential will reduce the other one. The potentials with respect to the standard hydrogen electrode are tabulated for standard situations with all concentrations equal to one. In Figure 5.25 the 3I− /I3 − redox couple is at the so-called midpoint between oxidation and reduction, which in this case corresponds to a potential of 0.2 [V]. The potential on the right of Figure 5.25 increases downwards; in this way the energy of the electrons increases upwards. In the operation of the Grätzel cell one may distinguish the following sequence of events, occurring on quite different timescales. They are indicated by the numbers (1) to (4) in Figure 5.25 and are described below. Note that step (1) describes light absorption and the separate and somewhat later step (2) the charge separation. (1) Within 10−15 [s] (= one femtosecond = [fs]) an incoming photon is absorbed by an organic dye, indicated by the redox couple S+ /S (here RuL2 (SCN)2 ), that is attached to a semiconductor nanoparticle, usually TiO2 ; after absorption the dye is in an excited state denoted by S+ /S∗ . (2) The excited dye injects an electron into the conduction band of the nanoparticle semiconductor in less than 10−12 [s] (= one picosecond = [ps]), and the oxidized dye stays behind: it has lost an electron and its excitation energy and is in state S+ . The semiconductor is of n-type in order to avoid recombination with the then scarcely available holes.
The electron hops from nanoparticle to nanoparticle on a timescale of [ps] until the conducting anode, usually consisting of fluorine-doped tin dioxide (SnO2:F), is reached (on the far left in the left block in Figure 5.25). The hopping process may be viewed as a random walk between energetically disordered localized sites and is very different from electron conduction in a metal. The trip of the electron ends at the Fermi level of the semiconductor, as the Fermi level corresponds to the average electron energy in the semiconductor.

(3) A compound present in the aqueous phase of the cell (in our example the redox couple 3I⁻/I3⁻) and in contact with the semiconductor material rapidly reduces the oxidized dye S+, on a timescale of 10⁻⁹ [s] (= one nanosecond = [ns]), before charge
recombination can occur. Note in Figure 5.25 that the couple 3I⁻/I3⁻ indeed has the lower potential of the two and is itself oxidized.

(4) On a timescale of 10⁻³ [s] the oxidized redox couple is reduced at a conducting counter electrode. By connecting the two electrodes via an external load, electrical work can be performed.

One of the major problems of a typical organic solar cell, namely the very limited path length for light absorption, was solved in the Grätzel cell in a very elegant manner, illustrated in Figure 5.26. Instead of attaching the dye to a flat TiO2 surface, a porous layer of TiO2 nanoparticles, each about 0.5–1.0 [μm] in diameter, was constructed to bind the dye. By depositing a layer of a few [μm] of such dye-labelled nanoparticles on the conducting surface, an increase in surface area by a factor of 1000 relative to a flat TiO2 surface was achieved. As a consequence the absorption coefficient of the cell is
Figure 5.26 A porous layer of nano-sized TiO2/dye particles is attached to the surface of a conducting electrode. The dye RuL2(SCN)2 is covalently attached to the nano-sized TiO2 particles. As the layer is porous the aqueous solution containing the redox compounds is still in contact with the dye. Illumination produces an electron–hole pair, of which the electron almost immediately enters the semiconductor, passing through several crystallites before reaching the substrate. The hole on the dye is removed by an electron from Eq. (5.95), read from right to left. (Reproduced with permission from [32], copyright (1995), American Chemical Society.)
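The factor-of-1000 gain in surface area quoted above can be checked with elementary geometry. The sketch below is illustrative, not from the text: for spheres of diameter d packed with volume fraction φ in a layer of thickness L, the particle surface per unit of flat electrode area is about 6φL/d. Note that a factor of 1000 for a layer a few [μm] thick requires particle diameters in the tens of nanometres.

```python
# Rough geometric estimate of the surface-area enhancement of a porous
# nanoparticle layer relative to a flat electrode (illustrative numbers).
# Spheres of diameter d with packing fraction phi in a layer of thickness L
# present a total surface per unit electrode area of ~ 6 * phi * L / d.

def area_enhancement(layer_thickness_m, particle_diameter_m, packing_fraction=0.5):
    """Total particle surface area per unit of flat electrode area."""
    return 6.0 * packing_fraction * layer_thickness_m / particle_diameter_m

# A 10 um layer of 30 nm particles gives a factor of 1000:
factor = area_enhancement(10e-6, 30e-9)
print(f"enhancement ~ {factor:.0f}")
```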
now very large. At the same time, due to the porous material, the oxidized dye remains accessible to the redox couple 3I⁻/I3⁻, which is soluble in water.

5.6.2 Efficiency
Three factors which determine the efficiency of Grätzel cells are discussed below: the limited spectral range of absorption by the dye, losses in quantum efficiency and energetic losses.

5.6.2.1 Spectral Range of Absorption
The major source of loss in the conversion of solar power to electrical power is the limited spectral range of absorption by the dye. In the end the cell can only produce power upon the absorption of a photon, and it is the product of the solar emission spectrum and the dye absorption spectrum that determines the relative success. Typical dyes employed in the first designs were ruthenium derivatives, such as RuL2(SCN)2, which absorb in the UV-blue region of the spectrum. A recent development is the design of new dyes, such as triscarboxy-ruthenium-terpyridine ([Ru(4,4′,4″-(COOH)3-terpy)(NCS)3]), which absorbs light efficiently right into the red and near-infrared part of the spectrum, has a deep-brown to black colour and is therefore also called the 'black dye'. In combination with the three-dimensional nanostructure on which the dye is deposited, a photon of the right wavelength is absorbed with an efficiency close to 100% (apart from some small optical losses in the top electrode). More recently, compounds like porphyrins and phthalocyanines have been applied in constructing Grätzel cells. These compounds are cyclic arrangements of N, C, O and H atoms and are able to host a multitude of different metal ions (Mg2+, Fe2+, Zn2+ etc.) in their central cavity. They may display strong absorption bands in the visible and near IR and can be produced in large amounts at low cost. In photosynthesis, which employs chlorophylls (Mg-porphyrins), the excited chlorophyll is in a singlet state (1Chl*) that may decay into a triplet state (3Chl*), which in turn reacts very efficiently with oxygen to produce singlet oxygen, a very reactive and potentially lethal species (in fact the formation of singlet oxygen after illumination of porphyrin dyes is used in photodynamic therapy of certain tumours).
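The claim that harvesting success is set by the product of the solar spectrum and the dye absorption spectrum can be made concrete with a toy overlap calculation. Everything below is an illustrative assumption: the solar spectrum is approximated by a 5800 K blackbody photon flux, and the dye band by a Gaussian absorptance with unit peak.

```python
import math

def planck_photon_flux(lam_nm, T=5800.0):
    """Relative blackbody spectral photon flux at wavelength lam_nm [nm]."""
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    lam = lam_nm * 1e-9
    return 1.0 / (lam**4 * (math.exp(h * c / (lam * kB * T)) - 1.0))

def harvested_fraction(center_nm, width_nm, lam_min=300, lam_max=2500):
    """Fraction of solar photons absorbed by a Gaussian dye band with unit
    peak absorptance: a crude stand-in for the overlap integral."""
    total = harvested = 0.0
    for lam in range(lam_min, lam_max):
        flux = planck_photon_flux(lam)
        absorptance = math.exp(-0.5 * ((lam - center_nm) / width_nm) ** 2)
        total += flux
        harvested += flux * absorptance
    return harvested / total

# A broader band centred further to the red harvests more photons:
print(harvested_fraction(450, 50))   # narrow, blue-absorbing dye
print(harvested_fraction(650, 150))  # broad, 'black dye'-like absorber
```

This is why broadening and red-shifting the dye absorption, as with the black dye, directly raises the harvested photon fraction.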
To prevent decay into 3Chl*, plants possess carotenoid molecules that are positioned close to the chlorophylls and quench the chlorophyll triplets. In fact the carotenoids have a dual function: they not only quench the chlorophyll triplets, but also act as light harvesters, absorbing solar photons in a spectral window not covered by the chlorophylls and transferring the energy to chlorophyll on an ultrafast timescale. If porphyrins are employed in Grätzel cells, it would be wise to copy this strategy of nature. Molecular constructs that have both functions do exist.

5.6.2.2 Quantum Efficiency
Quantum efficiency losses in the Grätzel cell may arise from slow electron transfer from the excited state of the dye to the semiconductor (step (2) in Figure 5.25), from the random hopping of the electron amongst the nanoparticles and from slow electron transfer from the reduced part of the redox couple to the oxidized dye (step (3)). Fast recombination
Figure 5.27 Electronic states involved in electron–hole pair formation and charge recombination. Illumination puts one of the Ru electrons into one of the ligand π* states involving the carboxylated bipyridyl groups. Rate kf measures the rate of light-induced charge transfer from the dye to the semiconductor; this should be as large as possible (>10⁹ [s⁻¹]). Rate kb reflects the back reaction and should be as small as possible to avoid unwanted charge recombination. (Reproduced with permission from [32], copyright (1995) American Chemical Society.)
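The requirement that kf be large and kb small can be phrased as a branching ratio. Treating injection and loss as two competing first-order channels is a deliberate simplification (the actual back reaction involves the conduction-band electron and the oxidized dye), but it shows why a sub-picosecond injection rate against a microsecond-scale back reaction gives a near-unity yield. The rate values are illustrative.

```python
def injection_yield(k_forward, k_back):
    """Fraction of excited dyes that inject an electron into the
    semiconductor rather than being lost, modelled as a simple
    competition between two first-order channels."""
    return k_forward / (k_forward + k_back)

# Illustrative numbers: injection within ~1 ps, back reaction in the
# microsecond domain; the yield is then essentially unity.
kf = 1e12   # [s^-1], sub-picosecond forward transfer
kb = 1e6    # [s^-1], microsecond-scale back reaction
print(f"injection yield = {injection_yield(kf, kb):.6f}")
```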
between the electron in the conduction band of the semiconductor and the oxidized dye also gives a loss of electrons for the external circuit. Let us look in more detail at the factors determining the rates of steps (2) and (3). A fast rate for step (2) requires a direct overlap between the molecular orbital of the excited dye from which the electron leaves and the molecular orbital of the semiconductor which accepts the electron. For example, for the dye RuL2(SCN)2 bound to a nanocrystalline oxide film, the injecting orbital is the π* wavefunction of the so-called carboxylated bipyridyl ligand, as indicated in Figure 5.27. This state interacts directly with the surface 3d orbital of the conduction band of the TiO2; this is represented by kf in Figure 5.27. In contrast, the back reaction of the semiconductor conduction-band electrons with the oxidized ruthenium complex (indicated by kb in Figure 5.27), typically in the microsecond domain, involves a d-orbital localized on the ruthenium metal atom, whose overlap with the orbitals of the TiO2 conduction band is relatively small. Therefore the losses in quantum yield mentioned above are very limited in the case of the Grätzel cell. Note that Figure 5.27 is very similar to Figure 5.16, which summarized the thermodynamics of photosynthesis.

5.6.2.3 Energetic Losses
Preferably, illumination should only create an electron–hole pair and leave the dye molecule in its lowest vibronic state; in other words, it should induce 0–0 vibronic transitions (Chapter 8). Unavoidable energetic losses in the Grätzel cell do occur, due to fast internal conversion processes in the dye following excitation into a higher vibronic state. Concerning the energy yield, we note in Figure 5.25 that the reduction (step (3)) of the oxidized dye by the redox couple requires 0.6 [V], while one may show that only 0.2 [V]
would be necessary for a fast electron transfer rate. Therefore one is looking for other redox couples which are as efficient as the 3I⁻/I3⁻ couple in terms of rate of dye reduction ([ns]) and uptake of electrons from the counter electrode, but with a lower voltage cost in step (3). So far none have been found. Using the voltage data represented in Figure 5.25 one may deduce the voltage of the Grätzel cell shown there as the difference between the potential of the redox couple, 0.2 [V], and the potential of the Fermi level of the TiO2 semiconductor, about −0.6 [V]. This adds up to a voltage of 0.8 [V]. Another observation is that the photons (step (1)) have an energy of 1.6 [eV], which corresponds to a wavelength of about 800 [nm]. The dye indeed absorbs in the red and infrared, which is an important part of the solar spectrum (Figure 2.3). The broader the spectral range in which the dye absorbs, the larger the part of the solar spectrum that is harvested. Figure 5.28 shows the absorption of a few dyes under investigation (lower graph). They are compared with the absorption of natural pigments such as chlorophylls and bacteriochlorophylls (upper graph). In both graphs the incident solar spectrum is indicated. It looks as if some dyes are able to compete with nature [36].
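The arithmetic in this paragraph is easy to verify. A short sketch using standard constants; E = hc/λ gives roughly 775 [nm] for a 1.6 [eV] photon, close to the 800 [nm] quoted above.

```python
# Checking the numbers quoted in the text: the cell voltage follows from
# the redox-couple potential minus the TiO2 Fermi-level potential, and the
# photon energy and wavelength are related by E = h*c/lambda.
h = 6.626e-34   # Planck constant [J s]
c = 2.998e8     # speed of light [m/s]
e = 1.602e-19   # elementary charge [C]

V_redox = 0.2    # [V] midpoint potential of the 3I-/I3- couple
V_fermi = -0.6   # [V] Fermi level of the TiO2 semiconductor
V_cell = V_redox - V_fermi
print(f"cell voltage = {V_cell:.1f} V")  # 0.8 V

E_photon_eV = 1.6
lam_nm = h * c / (E_photon_eV * e) * 1e9
print(f"wavelength at {E_photon_eV} eV = {lam_nm:.0f} nm")  # ~775 nm
```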
Figure 5.28 The top graph shows the absorption of sunlight by natural pigments: chlorophylls a and b, which occur in plants, and bacteriochlorophylls a and b (in methanol or ethanol). The lower graph shows the absorption spectra of a TiO2 thin film sensitized with a Ru-based red dye. In both cases the average incident solar spectrum at surface level for mid-latitudes is indicated. (Reprinted from Chemistry & Biology, Energy Conversion in Natural and Artificial Photosynthesis, Connell et al., 17, 5, 434–447, 2010, with permission from Elsevier.)
5.6.3 New Developments and the Future
The current Grätzel cell can be improved following a variety of complementary strategies: improving the dye, improving the electrolyte, and improving the thermostability and robustness.

5.6.3.1 The Dye
The choice of the dye is the key element of the dye-sensitized solar cell. In general one should consider the following criteria:

(1) The dye should be spectrally tunable; the absorption band may be shifted or broadened by the use of several dyes.
(2) The dye should possess tunable redox properties; the potential difference with the electrolyte (here 3I⁻/I3⁻) should be much smaller than the 0.6 [V] in Figure 5.25.
(3) It must be feasible to anchor the dye to the semiconductor particle.
(4) The dye should attach well to the TiO2 and be 'friendly' to the solvent in which the electrolyte is dissolved.
(5) The dye should be photostable: assuming one turnover per dye molecule per second and a cell lifetime of ten years requires 10⁸–10⁹ turnovers per dye molecule without significant bleaching.

In the current Grätzel cell the dye of choice is a ruthenium complex. Chemical engineering of the complex allows for spectral tuning between the visible and the infrared. However, if Grätzel cells are to be widely employed, a cheaper dye such as a porphyrin or phthalocyanine combined with a carotene derivative must be developed.

A possible design for a more efficient Grätzel cell could be the 'tandem' or pn dye-sensitized solar cell. Here one combines the classical Grätzel cell, where electrons are injected into an n-type semiconductor (TiO2), with a second cell where, following excitation of another dye, an electron is extracted from a p-type semiconductor (NiO). In this design a redox compound shuttles between the two cells.

In another design, a Grätzel cell for high-energy photons is stacked on top of a copper-indium-gallium selenide thin-film bottom cell for low-energy photons. This combination has a solar-to-electric conversion efficiency greater than 15% [35].

5.6.3.2 The Electrolyte
A liquid electrolyte, used in the first designs, has several disadvantages for the long-term stable operation of the Grätzel cell: leakage, evaporation and reaction with atmospheric oxygen. Organic amorphous conducting materials, in which positive charges hop from one molecule to the next, have been used to replace the liquid electrolyte [34]. A ruthenium amphiphilic dye (Z-907) has been synthesized which has an increased tolerance to water. A dye-sensitized solar cell (DSSC) based on this dye, when incorporated in a polymer matrix, achieved a conversion efficiency of 6.1%, which was not reduced much after operation at high temperatures. One possible reason for the enhanced performance of this cell might be reduced leakage due to the use of the polymer. Later, Grätzel and coworkers designed a cell with 8.2% efficiency using a solvent-free
liquid electrolyte based on a melt of three salts, thereby avoiding the use of iodine-based solutions.

5.6.4 Applications
The Grätzel cell is in production by a large number of companies, including Toyota, for application in cars and homes. Sony has developed an organic solar cell with an energy conversion efficiency greater than 10%. Research and development of new designs with new molecules may in the near future lead to efficient solid-state, dye-sensitized solar cells with long lifetimes, to be processed into multimodules at very low production cost. This would offer a realistic and cheap alternative to the semiconductor-based photovoltaic cell. One might speculate that the tunability of the dye's absorption could allow the construction of a cell that is transparent in the visible and absorbs all light with λ > 700 [nm], converting it to electricity. Macroscopic versions of such photovoltaic cells could be incorporated as 'Grätzel windows' in a building. They would absorb a major part of the solar spectrum (cf. Figure 4.30), while remaining transparent with a bluish glaze.
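The appeal of a 'Grätzel window' absorbing only beyond 700 [nm] can be gauged from the fraction of solar power carried at those wavelengths. The numerical sketch below rests on an explicit assumption: the solar spectrum is approximated by a 5800 K blackbody and atmospheric absorption is ignored.

```python
import math

def blackbody_power(lam_m, T):
    """Planck spectral energy density at wavelength lam_m [m];
    the common constant prefactor is dropped (it cancels in ratios)."""
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    return 1.0 / (lam_m**5 * (math.exp(h * c / (lam_m * kB * T)) - 1.0))

def fraction_beyond(cutoff_nm, T=5800.0, lam_max_nm=4000):
    """Fraction of blackbody power emitted at wavelengths above cutoff_nm,
    integrated numerically up to lam_max_nm (the far-IR tail is tiny)."""
    total = beyond = 0.0
    for lam_nm in range(200, lam_max_nm):
        p = blackbody_power(lam_nm * 1e-9, T)
        total += p
        if lam_nm > cutoff_nm:
            beyond += p
    return beyond / total

print(f"{fraction_beyond(700):.2f}")  # roughly half of the solar power
```

So even a window that is fully transparent in the visible could, in principle, intercept around half of the incident solar power.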
5.7 Bio Solar Energy
Nature has demonstrated that solar energy conversion can be performed on a vast scale. Sunlight is the primary energy input for the majority of life on the planet. Globally, photosynthesis stores solar energy in a fuel, generally reduced carbon compounds with the necessary electrons coming from water, at a rate of ∼150 [TW] (Exercise 5.1):

CO2 + H2O + sunlight → fuel + O2
(5.96)
Likewise, nature uses the process of cellular respiration to efficiently oxidize these ‘solar fuels’, with oxygen typically acting as the terminal electron acceptor. Altogether, photosynthesis and cellular respiration may provide a blueprint for a future human energy infrastructure based upon the ‘photosynthetic’ conversion of solar photons to stored chemical energy, which is subsequently released in a ‘respiratory’ process using molecular oxygen. The requirement for energy storage, for instance in the chemical bonds of fuels, is fundamental, given the temporal mismatch between solar irradiance and human energy demands (day versus night, summer versus winter). Our current energy infrastructure is predicated upon the long-term energy storage afforded by solar fuels produced in the distant past in the form of fossil fuels. Fuel production is also a means of concentrating diffuse solar energy for transportation needs, and affords higher power densities than are attainable with battery or mechanical storage. Satisfying future human energy demands will require the widespread conversion of contemporary sunlight into solar fuels. However, producing solar fuels on a global scale can only be accomplished using inexpensive earth-abundant materials in the process. Doing so is one of the few viable means of providing for human needs in an egalitarian and carbon neutral manner.
A basic premise underlying this section is that over more than 3 billion years, nature has evolved elegant solutions to the problem of providing the energy required by biological organisms. However, throughout evolutionary history nature has always selected for the most reproductively fit individuals, not necessarily those with the attributes most readily adapted to fulfilling human energy needs. In this section we will discuss aspects of natural energy systems that may enable attainment of the high rates of solar energy transduction needed to power society. We begin with a brief comparison between the natural processes of photosynthesis and cellular respiration and their human-engineered analogs, such as photovoltaics and fuel cells. The remainder of the section focuses primarily on artificial photosynthesis, illustrating its promise for new methods of solar energy conversion, and for the synergistic development of improved fuel cells and related technologies. Lastly, we emphasize some of the lessons that can be learned from nature, and indicate how human ingenuity can modify the operation principle of a natural system to improve and optimize its performance using the tools of genetics and synthetic biology.
5.7.1 Comparison of Biology and Technology
In photosynthesis (Section 5.5), solar energy is collected by a light-harvesting antenna as electronic molecular excited states. These excited states migrate amongst pigment molecules, finally reaching a reaction centre, where the exciton is split into a hole and an electron by electron transfer to a specific discrete molecular electron acceptor. The energy loss in splitting the exciton in this way is at most ∼200 [mV], considerably less than the cost of splitting the exciton in solid-state silicon-based devices. With the discovery of oxygenic photosynthesis about 2.5 billion years ago, in which water could be used as a source of high-energy electrons, nature could apply photosynthesis on an essentially unlimited scale. The oxidation of photosynthetically produced biomass, in a process known as aerobic respiration, provides energy for both photosynthetic and nonphotosynthetic organisms. In this process, the reducing power stored in biomass (in the form of NAD(P)H) is subsequently oxidized by molecular oxygen. The oxidation of NAD(P)H is carried out stepwise at three coupling sites across the inner mitochondrial membrane. The last step of this electron transport chain occurs at the membrane protein cytochrome c oxidase, where molecular oxygen is reduced to water. The large free energy change associated with the oxidation of NAD(P)H and reduction of O2 is coupled to transmembrane proton pumping at each of the three coupling sites. Here the electrochemical energy is translated into a concentration gradient of protons across the membrane. Moreover, electrical neutrality is not maintained, since the proton is a charged particle; therefore an electrical potential is also generated. The protonmotive force (pmf) across a membrane is essentially the sum of the Δψ (electrical) and ΔpH (concentration) terms. This pmf is the common denominator underlying all bioenergetic processes in cells. A typical value of the pmf under steady-state conditions would, in electrical units, be about 200 [mV].
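The pmf can be written out explicitly: in electrical units, pmf = Δψ − 2.303(RT/F)ΔpH, where 2.303RT/F ≈ 59 [mV] per pH unit at room temperature. A minimal sketch with illustrative mitochondrial values (the specific Δψ and ΔpH numbers are assumptions, not taken from the text):

```python
R = 8.314      # gas constant [J mol^-1 K^-1]
F = 96485.0    # Faraday constant [C mol^-1]
T = 298.0      # temperature [K]

def pmf_mV(delta_psi_mV, delta_pH):
    """Protonmotive force in mV: pmf = dPsi - 2.303*(RT/F)*dpH.
    Sign conventions vary between texts; magnitudes are what matter here."""
    return delta_psi_mV - 2.303 * (R * T / F) * 1000.0 * delta_pH

# Illustrative values: dPsi ~ 150 mV, dpH ~ -0.8 units (matrix more alkaline)
print(f"pmf ~ {pmf_mV(150.0, -0.8):.0f} mV")
```

With these inputs the two terms add to just under 200 [mV], consistent with the typical steady-state value quoted above.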
This set of events, referred to as oxidative phosphorylation, occurs with very high energy conversion efficiencies, approaching 90% in some organisms. In both photosynthesis and respiration, the pmf is used for a myriad of energy-linked processes, the most important of which is ATP synthesis; thereby the pmf powers the metabolic, transport, mechanical,
Figure 5.29 Comparison of biological and technological systems for (a) solar energy conversion to fuels and (b) utilization of this stored chemical energy. Both biology and technology use functionally analogous steps, indicated by the horizontal arrows, during fuel production and consumption. However, a key difference is the use of molecular recognition and proton motive force (pmf) in biology as compared to the electrical circuits and emf used in technological systems. (With permission of the Royal Society of Chemistry, copyright 2009, reproduced from Chapter 5 Ref. [37], Fig. 1. This version was kindly made available to us by Dr Hambourger and Dr Moore of Arizona State University. See also Plate 12.)
informational and structural processes of life. Spectacular molecular motors such as the bacterial flagella, and ATP synthase, which is ubiquitous in cells, are driven by pmf.

Figure 5.29 and Plate 12 show that many parallels can be drawn between biological energy conversion and technological energy transduction, where for technology two examples are shown: electrolysis, to produce H2 from electrons, and the fuel cell, to produce electrons from H2. The process of charge separation in photosynthesis is functionally replicated in both solid-state and molecular photoconversion devices. Similarly, water electrolysers can convert the solar current thus generated into a fuel, somewhat like oxygenic photosynthesis, although in photosynthesis the energy of the photon is stored as an electrochemical gradient right from the initial charge separation, the stability of which increases with time. Conversely, the oxidation of fuels in a fuel cell is functionally analogous to the process of oxidative phosphorylation as it occurs in mitochondria. Nevertheless, there are many crucial differences between biological and technological energy transduction, and much that humans can still learn from nature. The key to the generation and utilization of pmf in biological systems is the compartmentalization of the cell by membranes into small volumes, of a few hundred [nm³], allowing the build-up of Δψ and ΔpH even with a relatively low turnover frequency of the catalytic centres. Within these membranes, nature uses an arrangement of proteins for catalytic activity. These membrane proteins not only have well-defined structures, revealed by X-ray crystallography, but also exhibit a highly organized supramolecular architecture, allowing precise control over factors such as electron flow, proton activity, reactive intermediates, substrate access, product release and the local dielectric environment.
The natural constructs are often modular, facilitating self-assembly and self-repair, which are some of the most appealing aspects of biological systems. Further, biological systems
typically operate at ambient temperatures and pressures and near neutral pH, using only earth-abundant materials. It is hoped that some of these remarkable features of biology will inspire future innovations in technological energy transduction. In biological systems, electrons are carried from one protein to the next by discrete molecular species, such as quinones, cytochromes, iron–sulfur proteins, flavoproteins and so on, operating in the electron transport chains of photosynthetic and mitochondrial membranes. In both photosynthesis and oxidative phosphorylation, molecular recognition is used to avoid undesired side reactions and 'short circuits', placing electrons exactly where they are needed to achieve the desired biochemical process with an efficiency close to 100%. Accurate molecular recognition is thus a fundamental characteristic of biological systems, effectively replacing the wires of human-engineered systems. In contrast to biology, technological systems make use of electromotive force (emf) to conduct electrons through a wire. Rather than using molecular recognition, the pathway of electron flow is dictated by an electrical circuit. As has been demonstrated in the electronics industry, human technologies are capable of extremely intricate circuits, directing electrons to precise locations. It must be kept in mind that semiconductor fabrication is only achieved with considerable effort, and is (thus far) unable to attain the atomic-scale precision found in biological 'circuits'. Furthermore, technological systems are still unable to take advantage of some of the most extraordinary aspects of biology, such as reactions driven by pmf, precise control of proton activity and three-dimensional catalyst architectures. In many photovoltaic applications, efficient exciton capture, migration and splitting remain fundamental challenges to producing highly efficient and moderately priced devices.
Mimicking what is probably the most important reaction occurring in photosynthesis, namely the light-driven extraction of electrons from water accompanied by the production of oxygen, has been the primary barrier to efficient water splitting in artificial photosynthesis. An essential feature of the protein environment around the Mn cluster of Photosystem 2 (Plate 5), which performs this reaction with a quantum efficiency close to one, is that it offers a strongly asymmetric matrix that destabilizes the water molecule upon binding and stabilizes the formation of molecular oxygen. One might hope that intelligently designed nanostructured surfaces could be used in a similar fashion. In combination with new catalysts such as Co- or Ni-cubane structures [39], or manganese clusters like those discussed below [40], this may lead to new technology allowing for efficient water oxidation in an artificial photosynthetic device. In both fuel cells and water electrolysers, compartmentalization of anodic and cathodic reactions is only achieved by macroscopic separation, and ion-conducting membranes remain an expensive component of these devices. Further, many of the most effective technological catalysts rely upon exotic materials, which constrains efforts to scale up these devices to the terawatt ([TW]) level. Clearly, nature still has much to teach us. Recent advances in nanotechnology are allowing humans to control matter on the scale of the biological machinery of life. As this field progresses, and we continue to turn to nature for inspiration, humans will have the opportunity to develop complex, self-assembling, self-repairing energy-transducing systems of a kind that previously belonged to the secrets of biological systems. In these efforts, detailed knowledge of natural systems will provide fundamental design principles for the development of artificial systems. Conversely, knowledge gained from the study of artificial
systems will improve our understanding of, and ultimately control over, complex biological machinery.

5.7.2 Legacy Biochemistry
While humans on the one hand endeavour to mimic the processes of nature, it is on the other hand important to recognize the limitations of biological systems. These limitations largely arise from the evolutionary history of certain biosynthetic and biochemical pathways. Photosynthesis started about 3.5 billion years ago with the appearance of the first photosynthetic microorganisms, followed about 3 billion years ago by the cyanobacteria. The oxygen produced by photosynthesis was initially used up in oxidizing metals and minerals. About 2.4 billion years ago this oxidation was complete and the concentration of atmospheric oxygen started to rise. This put a significant stress on natural energy-converting systems. In particular, selection for maximal biological fitness, including the flexibility to adjust to disparate environmental conditions, is certainly not the same as selection for maximal energy-conversion efficiency. Evolution involves incremental alterations to existing biological machinery, and thus aspects of modern biology likely reflect adaptational holdovers from selective pressures in ancient environments. Through billions of years of evolution, organisms have become finely tuned to prevail in their local environment, their 'niche'. However, it is likely that in the search for optimal fitness evolution has taken 'missteps' and has also failed to explore some of the parameter space most relevant to human endeavours. By way of example, we briefly explore instances of legacy biochemistry in the photosynthetic machinery. An initial observation is that photosynthetic organisms evolved from earlier life forms that had already developed key energy carriers (i.e. NAD(P)H and ATP). This early life evolved under anaerobic conditions (that is, without oxygen), with a limited redox span in the environment.
Under those early conditions the earth was relatively reducing, with CO as one of the most reducing species (CO2/CO, E° = −0.517 [V] versus the normal hydrogen electrode, NHE) and NO3⁻ or perhaps Fe3+ as one of the most oxidizing (NO3⁻/NO2⁻, E° = 0.433 [V] vs NHE; Fe3+/Fe2+, E° = 0.771 [V] vs NHE). The corresponding potential difference would be 0.95 [V] or 1.3 [V]. Clearly, the photosynthetic machinery was selected to interface with the existing bioenergetics of the then living cell. However, the evolution of chlorophyll-based oxygenic photosynthesis, using two reaction centres operating in series, gave rise to large redox spans. PS2 was designed to extract electrons from water, meaning that it must operate at an oxidation potential of about 1.1 [V]. With a 680 [nm] photon of 1.8 [eV], and with quinones, which already served as electron carriers in the early life forms, operating around 0 [V], a loss of about 0.7 [V] in energy storage by PS2 was unavoidable. Similarly, PS1, operating at a modest oxidation potential of 0.5 [V], produces a reduction potential of about −1.3 [V] after photon absorption, a large fraction of which is lost as heat during the final formation of NAD(P)H. While the basic chemical processes could have been driven by about 1.3 [V], in reality 2.4 [V] = 1.1 [V] − (−1.3 [V]) is provided by the light. Some portion of this apparent 'overpotential' is required to drive the forward reaction at an appreciable rate, but some portion of this large voltage drop appears to be a wasteful consequence of the evolutionary history of the photosynthetic apparatus. In other words: this could have been done with another pigment than chlorophyll and using the near infrared
part of the spectrum, which would broaden the absorption spectrum and increase the efficiency of the process. An obvious improvement in efficiency could be achieved by replacing PS1 by its bacterial equivalent, which absorbs in the infrared.
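The voltage bookkeeping in the paragraph above can be collected in a few lines. The potentials are the ones quoted in the text; the script simply reproduces the 0.95 [V], 1.3 [V] and 2.4 [V] spans.

```python
# Redox arithmetic quoted in the text (NHE = normal hydrogen electrode).
E_CO2_CO = -0.517    # [V] vs NHE, most reducing couple of the early earth
E_NO3_NO2 = 0.433    # [V] vs NHE
E_Fe3_Fe2 = 0.771    # [V] vs NHE

early_span_NO3 = E_NO3_NO2 - E_CO2_CO   # span available to early life
early_span_Fe = E_Fe3_Fe2 - E_CO2_CO

# Oxygenic photosynthesis: PS2 oxidizes water at ~ +1.1 V, while PS1 ends
# at ~ -1.3 V, so the light provides a total span of ~2.4 V, of which only
# ~1.3 V is strictly needed for the chemistry.
span_photosynthesis = 1.1 - (-1.3)

print(f"{early_span_NO3:.2f} V, {early_span_Fe:.2f} V, {span_photosynthesis:.1f} V")
```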
5.7.2.1 Photoprotection
RuBisCo is the key protein that drives the production of carbohydrates from CO2 in the so-called Calvin cycle (indicated in, for example, Figure 5.29a), using NADPH and ATP. RuBisCo has a low binding affinity for CO2 and a low turnover rate. As a consequence, at high light intensities reduced compounds accumulate; for instance, the plastoquinone pool gets over-reduced, and the risk of recombination reactions increases. Recombination of a charge-separated state may lead to the formation of chlorophyll triplet states, and such a triplet state may react with abundant molecular oxygen to form singlet oxygen from the triplet oxygen ground state. Plants and cyanobacteria have developed a variety of complex protection mechanisms to avoid the formation of oxygen radicals. The first and most important is the presence of carotenoids in the photosynthetic pigment proteins. These carotenoids are positioned in such a way that a chlorophyll triplet formed in the light-harvesting antenna or the reaction centre has a high probability of being quenched by the carotenoids. The second important mechanism applied by oxygenic photosynthetic organisms is their ability to switch off the light-harvesting system of Photosystem 2 under intense light by a process called 'nonphotochemical quenching'. The switch occurs in response to the build-up of a pH gradient across the thylakoid membrane; it involves a rearrangement of the energetic states of the light-harvesting antenna in such a way that the lowest energetic state becomes short-lived. As a consequence, the majority of the absorbed solar photons are dissipated as heat, further protecting the photosynthetic apparatus. A third protection mechanism deals with damage to the reaction centre of Photosystem 2; in normal sunlight this occurs about once every half hour, after about 10⁶ to 10⁷ turnovers.
After damage to the D1 protein of the PS2 reaction centre has occurred, the whole PS2 supercomplex is removed from the grana membrane (displayed in Plate 4) and disassembled into its tens of constituent proteins; a new D1 protein is inserted, after which the complex is reassembled and put back in place.
5.7.2.2
C3 versus C4 Plants
RuBisCo can alternatively act as an oxygenase and react with O2. In this process, called photorespiration, O2 is taken up and 'fixed' and CO2 is released, thereby wasting energy that was converted during the light reactions of photosynthesis. The inefficiencies of legacy biochemistry provide opportunities for human ingenuity to make transformational improvements in energy transduction. For example, human ingenuity over ∼7500 years of selective breeding has turned the small fruit of the teosinte plant into the corn grown around the world today. It is hoped that similar ingenuity, which can now be coupled with the almost limitless potential of synthetic biology and a host of other advanced technologies, can quickly find novel solutions to limitations imposed by legacy biochemistry, meeting human energy demands in the process.
Renewable Energy
209

5.7.3
Artificial Photosynthesis
In general, artificial photosynthesis aims to mimic the natural process of photosynthesis (Eq. (5.96)) using a molecular or solid-state construct. In order to mimic photosynthetic energy conversion, it is necessary for a synthetic photosystem to:

(1) Absorb incident photons, generating excited states (i.e. excitons)
(2) Transfer this excitation energy to a donor/acceptor interface, where photochemical charge separation takes place (the exciton is split)
(3) Transfer charge away from this interface, in order to limit the rate of wasteful recombination reactions
(4) Couple the photochemically generated charges to appropriate catalysts for the oxidation of water and the production of a high-energy, reduced fuel.

Synthetic photosystems allow ready manipulation of individual components at the molecular level, facilitating direct testing of theoretical considerations such as the effects of distance, orientation, linkage, driving force, solvent polarity and reorganization energy upon the rate and yield of a photo-induced electron transfer, as well as subsequent charge shift and charge recombination reactions. Note that the study of artificial photosynthetic systems may also help to understand the fundamental principles of the natural process.

5.7.3.1
Artificial Photosynthetic Membrane
In a chemically synthesized artificial photosynthetic reaction centre, an electron donor and acceptor are suitably organized to perform photo-induced electron transfer. An essential role is played by the class of compounds called porphyrins. Depending on the properties of the attached side groups these porphyrins may act as electron donor or as electron acceptor. The primary electron donor is typically a porphyrin (reminiscent of the chlorophyll primary donor in nature) or a metal-polypyridyl complex, and the primary acceptor is often a porphyrin, viologen, quinone, perylene imide or fullerene. Light excitation gives rise to fast photo-induced electron transfer, functionally mimicking photochemical charge separation in natural photosynthesis. Using chemical synthesis, secondary electron donors or acceptors can be attached to this ‘reaction centre’, allowing subsequent charge transfer reactions that further spatially separate the electron and hole, thereby increasing the lifetime of the charge-separated state. The rates of the charge separation, charge shift and charge recombination reactions are controlled by thermodynamics, the electronic coupling between the initial and final states, and the reorganization energy needed to convert the initial into the final state (Section 5.5.4). In many systems, the electronic coupling between donor and acceptor can be precisely controlled by covalent attachment. The energetics of the electron transfer processes and the reorganization energies are controlled by the choice of donor, acceptor and medium. Aside from photo-induced electron transfer, other photosynthetic processes such as energy transfer from an antenna system to the reaction centre, and photo protection at high light intensity have been replicated in molecular systems (see below). Synthetic reaction centres can be used to convert photon energy to pmf, as demonstrated by [37], illustrated in Figure 5.30 and Plate 13 left. 
They used a molecular triad containing three molecules: a carotenoid (C in the figure), a free base porphyrin (P) and a carboxylate-bearing naphthoquinone (Q). The triad is inserted into a closed membrane vesicle, called a
Figure 5.30 An artificial photosynthetic membrane converting light energy into pmf, thereby driving ATP synthesis. Molecular triad (C-P-Q) molecules are inserted into a liposome containing the lipid-soluble quinone shuttle (QS ). Photo-induced charge separation gives rise to proton pumping via QS . With ATP synthase incorporated into the membrane, the resulting pmf is utilized for the synthesis of ATP. The graph shows the amount of ATP produced as a function of irradiation time. For more details see Plate 13 (left). (Reproduced by permission of the Royal Society of Chemistry, copyright 2009, and MacMillan Publishers Ltd, copyright 1998. This version was kindly made available to us by Dr Hambourger and Dr Moore of Arizona State University.)
liposome, containing a lipid-soluble quinone shuttle Qs . Due to the amphiphilic character of the triad, the polyene tail inserts preferentially into the lipid membrane, thus defining precisely the vectorial orientation for the array of reaction centres. Photon absorption by the porphyrin leads to excited-state electron transfer to the covalently attached quinone, followed by hole transfer from the oxidized porphyrin to the carotenoid secondary donor. The final charge-separated state is sufficiently long-lived to allow electron transfer from the reducing side of the triad (the quinone) to the lipid-soluble quinone shuttle. Along with reduction of the quinone, a proton is extracted from the external aqueous solution, creating a neutral semiquinone that diffuses across the lipid membrane to the oxidizing side of the triad (the carotenoid). Here the semiquinone donates an electron to the oxidized carotenoid, in the process expelling a proton into the internal aqueous solution, generating pmf. Such a system can drive ATP production from ADP and inorganic phosphate, when ATP synthase is incorporated into the liposomal membrane. These
liposomal systems serve as proof of concept that photo-induced charge separation in molecular systems can be used to store incident solar energy by generating pmf in a manner analogous to photosynthesis.

5.7.3.2
Mimicking the Reactions of Photosystem 2 in Artificial Photosynthesis
The use of water as an electron donor in photosynthetic energy conversion has provided an enormous evolutionary advantage to cyanobacteria, algae and higher plants. The four-electron oxidation of water to O2 is accomplished by Photosystem 2, and specifically by the catalytic Mn₄Ca complex bound to the Photosystem 2 reaction centre. The abundance and environmental benefits of water as a source of electrons make biomimetic systems built on the principles of Photosystem 2 a goal of the utmost importance. Indeed, photochemical charge separation in molecular systems has already been coupled to the oxidation of coordinated manganese ions. The major difficulty in designing an artificial water-splitting system is the accumulation of the four positive charges required for the production of one O2 molecule out of two water molecules (Eq. (5.88)). Charge accumulation makes it increasingly difficult to add or remove the next electron unless each electron transfer is coupled to a charge-compensating reaction. The Mn₄Ca complex of Photosystem 2 compensates for charge accumulation by deprotonation and rearrangements of the higher d-orbitals of some of its constituents (so-called ligand rearrangement), thereby keeping its potential close to that of water oxidation. One of the most successful compounds is the Ru(II)-Mn₂(II,II) triad shown in Figure 5.31 and Plate 13 right. Excitation of the Ru-trisbipyridine unit leads to rapid (t = 40 [ns]) electron transfer to the acceptor and oxidation of Mn₂(II,II). This charge-separated state has a long lifetime of close to 1 [ms] at room temperature. In this structure, three rounds of light excitation are able to accumulate three oxidizing equivalents on a manganese dimer, transforming the initial Mn₂(II,II) cluster into Mn₂(III,IV). In 90% water all the acetates have dissociated already in the Mn₂(II,II) state.
Even though the charge is increased upon ligand exchange, oxidation of the complex is facilitated because water ligands allow for a proton-coupled electron transfer. The complex Mn₂(III,IV) formed after three turnovers has two fully deprotonated, water-derived oxo-ligands and thus the same charge (2+) as the original bis-acetato Mn₂(II,II). The three oxidation states can be reached within a narrow potential range of about 0.2 [V], thanks to charge-compensating ligand exchange and proton-coupled electron transfer. Such work has many similarities with the donor side of PS2, and is a significant step towards interfacing synthetic reaction centres with functional water-oxidation catalysts. Both the generation of pmf and the multielectron oxidation of water are research areas requiring a detailed understanding of how proton motion can be coupled to electron transfer, a topic discussed below.

5.7.3.3
Proton-Coupled Electron Transfer
The oxidation of water, the reduction of CO2 to reduced carbohydrates and the build-up of pmf all require the simultaneous transport of protons and electrons. For a catalytic reaction such as water oxidation by the oxygen-evolving complex of Photosystem 2 the accumulated effect of four positive charges is required, and the only way to avoid the build-up of highly charged and therefore reactive intermediates is to transfer electrons together with protons
Figure 5.31 Ru(II)-Mn₂(II,II) complex mimicking light-induced stepwise oxidation of the Mn₄Ca cluster in Photosystem 2. Repeated photo-induced electron transfer to the acceptor [Co(NH3)5Cl]2+ leads to the oxidation of Mn₂(II,II) to Mn₂(III,IV). The lower panel is a simplified scheme of the charge-compensating reactions: in 10% H2O, acetate is the dominant ligand and charge-compensating reactions during oxidation of the Mn2 complex cannot occur. However, in aqueous solution (90% H2O) the acetate is replaced by H2O and now every oxidation step of the Mn2 complex is accompanied by a deprotonation, facilitating one additional oxidation. The assignment of H2O and OH ligands is tentative. (Reproduced with permission of the American Chemical Society, copyright 2009, from Chapter 5 Ref. [40], Fig. 4. This version was kindly made available by Dr Magnuson, Uppsala University.) For more details see Plate 13 (right).
away from the catalytic centre, keeping the complex electrically neutral. Specific proton-donating amino acids in the close vicinity of the oxygen-evolving complex ensure that this electroneutrality is maintained during the reaction cycle. Similarly, in the transport of electrons from PS2 to PS1 (Figure 5.32) plastoquinone (PQ) plays a crucial role. In its fully reduced form plastoquinol (PQH2), the molecule
Sequential proton electron transfer
Coupled proton electron transfer
Figure 5.32 In sequential proton-electron transfer the charged particles each have to cross a potential barrier (top graph); in coupled transfer the transferred species is neutral and the barrier is lowered.
has accepted two electrons and two protons, as we already showed in Eq. (5.93). In fact the semiquinone PQ⁻ is highly unstable. In its neutral form PQH2 is able to cross the membrane against an electrochemical gradient. By taking up the protons on one side of the membrane and releasing them at the other side, part of the absorbed energy can be stored as an electrochemical gradient. Finally, by transferring electrons and protons simultaneously in a concerted reaction, the high-energy intermediates that would occur in a sequential mechanism (one after the other) can be avoided. This is schematically indicated in Figure 5.32. As a consequence, coupled proton-electron transfer can occur at a much higher rate than its sequential equivalent.

5.7.4
Solar Fuels with Photosynthetic Microorganisms: Two Research Questions
Photosynthesis is a multilayered process: light absorption by the light-harvesting proteins in the thylakoid membrane leads to electron transport, which in turn leads to the formation of NADPH and ATP, according to a chemiosmotic mechanism. Driven by the free-energy potentials of ATP/ADP and NADPH/NADP, CO2 is then converted in the Calvin/Benson cycle (see Figure 5.29a) into reduced sugars or other (reduced) compounds. The resulting building blocks then drive cell synthesis via a complex set of anabolic pathways. A major scientific question is how photosynthetic activity regulates metabolism and the expression of photosynthetic genes, and vice versa. This is schematically symbolized in Figure 5.33 and Plate 14. Cyanobacteria and green algae perform oxygenic photosynthesis and have the capacity to release some of the excess energy that they store as a fuel: H2 or a reduced carbohydrate. How to optimize fuel production, either by manipulating the expression level of existing genes (genetic engineering) or by introducing new genes (synthetic biology), is the major scientific question.

5.7.5
Conclusion
In conclusion, the process of photosynthesis, as it has evolved in nature, has demonstrated the capacity to store solar energy on a global scale. It is clear that although many steps
Figure 5.33 Schematic representation of a cyanobacterial cell. Photosynthesis occurs in the thylakoids and uses light to produce ATP and NADPH. These drive the Calvin cycle, which is under the control of a variety of gene products. In the cell, enzymes are available that produce biomass, which can be converted into ethanol (C2H6O). Alternatively, new enzymes can be added to the bacterium using synthetic biology to directly produce a fuel (C2H2) that is expelled by the cell. The concentration and activity of the sugar- and fuel-producing enzymes is also under genetic control. Understanding the operation of this proteo-genomic network is the realm of 'systems biology'. See also Plate 14. (With permission of Elsevier, copyright 2009, adapted from Chapter 5 Ref. [41], Fig. 2.)
of the photosynthetic process proceed with high efficiency, the overall storage process is not limited by the amount of incident solar light, but by inherent properties of the system. Under high light conditions a significant fraction of the captured energy is dissipated as heat to prevent photodamage of the photosynthetic apparatus or other essential cellular components. If damage does occur, complex repair mechanisms have evolved. Thus, the natural process of photosynthesis can teach us how biology during evolution has optimized certain processes while limiting the damage of others, largely on the basis of biosynthetic pathways that were already established. In this chapter we have demonstrated that there is a lot to learn from natural photosynthesis; maybe the most intriguing lesson is the light-driven oxidation of water and the production of oxygen by Photosystem 2 with only a small overpotential, a trick that, if it could be copied
in a physicochemical system, would open the door to solar fuel generation by artificial photosynthesis. Alternatively, based on our current molecular understanding, we may 'redesign' photosynthesis, restrict some or all of the large losses and possibly directly engineer a pathway from light to product formation. In the end we should never forget that the conversion of solar light into food, current or fuel will require large fractions of the available land, simply because solar light is a dilute energy source. Consequently, a solar-energy-based society can only be realized with the highest possible efficiency combined with careful energy consumption.
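The dilute-source argument can be made quantitative with a rough estimate. The demand, insolation and efficiency figures below are illustrative assumptions, not data from this chapter:

```python
# Illustrative estimate of the land area needed to meet global primary
# energy demand with solar conversion. All input numbers are assumptions.

demand_tw = 18.0          # assumed world primary power demand [TW]
insolation_w_m2 = 200.0   # assumed year-round average on the ground [W/m2]
efficiency = 0.10         # assumed overall conversion efficiency

area_m2 = demand_tw * 1e12 / (insolation_w_m2 * efficiency)
area_km2 = area_m2 / 1e6
land_km2 = 1.49e8         # total land area of the earth [km2]

print(f"required area: {area_km2:.2e} km^2 "
      f"({100 * area_km2 / land_km2:.1f}% of all land)")
```

With these assumptions the result is of order a million square kilometres, a useful feel for the scale at which solar harvesting must be deployed.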
Exercises

5.1 On the first page of Chapter 5 it is stated that the incoming solar radiation on earth is about 120 000 [TW] and that the photosynthetic process consumes about 150 [TW]. Show this agrees with the data given in Chapters 1 and 3.

5.2 Copy Figure 5.1. (a) Show that when the sun is in C it is day everywhere within the Arctic circle and night everywhere within the Antarctic. (b) Make a model of the earth by finding a globe or making a simple globe by taking a melon and drawing the equator, the tropics and the two polar circles. Take a strong torch and fit it such that its horizontal rays point to the centre of the globe. Give the globe a tilt of about 23.5°. Reproduce the beginning of summer, autumn, winter and spring by rotating the globe. Find how the length of day changes.

5.3 From the times of sunset and dawn in your newspaper determine the length of the day at your latitude and date. Calculate the length of the day using Eq. (5.7). You will find a (small) difference. How do you explain this?

5.4 If you have solar panels on your roof, combined with a [kWh] measuring device, check the number of [kWh] per week during the year and compare with Figure 5.3. Explain the difference. If you do not have solar panels argue what deviation from Figure 5.3 to expect.

5.5 On good locations in South Morocco and Southern California one finds DNI ≥ 2500 [kWh m−2 yr−1]; on a horizontal surface this is smaller, at 1440 [kWh m−2 yr−1]. (a) Convert these values to [W m−2] and (b) compare the first value to the total solar irradiance.

5.6 (a) Calculate the number of photons N leaving the sun with energy E > 1.12 [eV], see Figure 2.3. (b) Calculate the fraction f that is irradiating the earth's surface assuming an insolation of 1000 [W m−2]. (c) Calculate the PV current Is of a solar cell assuming that all photons with E > Eg are converted into electrons.

5.7 Reproduce Voc = 0.58 [V] and the other data given underneath Eq. (5.13) for the parameters given in the text.
5.8 Take the same solar cell as in Figure 5.8 and the previous exercise, but with an insolation of 500 [W m−2]. Calculate the current density Is and construct a graph similar to Figure 5.8. What is the maximum output in this case? Compare with Exercise 5.7 and observe that output and input are almost proportional. Explain this by looking at Eq. (5.11).
5.9 Take the black-body spectrum (Eq. (2.8)) and calculate for what value of the energy gap Eg the output energy of the electrons is optimal.

5.10 Check the derivation of Eqs. (5.23) to (5.25).

5.11 A state-of-the-art Dutch windmill, shown on the left of Figure 5.10, was operating in 1937. The height of its axis was 16 [m], the radius of its sails 14.35 [m]. With a wind velocity of 7.77 [m s−1] it was able to pump an amount of water of 34.1 [m3 s−1] over a height of 1.67 [m]. (a) Calculate the pumping power. (b) Calculate the input wind energy. (c) The wind energy entering into the rotation of the wings was measured and a value cp = 0.17 was found. Calculate the power taken from the wind. Explain the difference with your result in (a).

5.12 Check the derivation of Eqs. (5.28) to (5.30).

5.13 Check the statement in Section 5.2.3: with a = 1/3 (the Betz optimum) and α = 0.10 one finds in the dominant direction u = 0.77um and in the second direction u = 0.67uin. Also use Eq. (5.31) to calculate α for H = 100 [m] and z0 = 0.12 [m], and repeat the calculation for 10 and 6 radii.

5.14 Take u(10) = 10 [m s−1]. Make four plots with the parameters in Table 5.2, each comparing Eqs. (5.35) and (5.36).

5.15 (a) Check that for Eq. (5.37) the distribution is normalized: ∫₀^∞ f(u)du = 1. (b) Check Eq. (5.39). (c) Prove that log(−log P(u)) = log log e + k log u − k log a. (d) Students may acquire wind statistics from a neighbouring site and deduce the parameters k and a. If you cannot find them use our values k = 2.13 and a = 6.45 [m s−1]. (e) Find the average wind velocity for this site and the total number of [J] passing 1 [m2] in 1 year.
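For Exercise 5.15(e), the moments of the Weibull distribution follow from the gamma function: ⟨uⁿ⟩ = aⁿ Γ(1 + n/k). A sketch using the example parameters k = 2.13 and a = 6.45 [m s−1] quoted in the exercise (the air density is an added assumption):

```python
import math

def weibull_moment(a, k, n):
    """n-th moment <u^n> of a Weibull distribution with scale a, shape k."""
    return a**n * math.gamma(1 + n / k)

k, a = 2.13, 6.45            # shape and scale [m/s] from the exercise
u_mean = weibull_moment(a, k, 1)

rho = 1.225                  # assumed air density [kg/m3]
# Mean kinetic energy flux is the average of (1/2) rho u^3, i.e. the
# third Weibull moment, not (1/2) rho <u>^3.
mean_flux = 0.5 * rho * weibull_moment(a, k, 3)   # [W/m2]
year = 365.25 * 24 * 3600                          # seconds per year

print(f"mean wind speed: {u_mean:.2f} m/s")
print(f"energy through 1 m^2 per year: {mean_flux * year:.2e} J")
```

Note the design point in the comment: because ⟨u³⟩ > ⟨u⟩³, averaging the cube of the wind speed gives considerably more energy than cubing the average.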
5.16 (a) Show that the water particles in our approximation of waves move in circles with a radius which decreases with depth. (b) Show that (5.51) obeys Eq. (5.50). (c) Derive all other equations up to (5.56), including their dimensions.

5.17 (a) Estimate the [CO2] in the air if all fossil fuel (C) reserves in Figure 3.22 were burnt and nothing stored in the oceans. (b) Calculate the amount of oxygen which will be bound by combustion of all fossil carbon and the decrease in oxygen in the atmosphere. (c) Compare the amount of oxygen corresponding to all stored C in Figure 3.22 with the amount of oxygen in the atmosphere.

5.18 Assume a steady state for photosynthesis and consider a day consisting of 12 hours of sunlight followed by 12 hours of darkness. (a) Write down an equation for the loss of photosynthetic free energy during the night relative to the preceding day. (b) Plot the relative loss as a function of hν0 − μst for ki = 10 [s−1] and ki = 10−4 [s−1]. Use Table 5.3 for your data.

5.19 (a) Reconstruct the curves of Figure 5.23 by taking 2x² + 5 as the parabola for DA and the following three parabolas as D⁺A⁻, from left to right: 2(x − 3)², 2(x − 5/2)² and 2(x − 1)². (b) Calculate the reorganization energy λ for the three cases. (c) Calculate point C in the third (right-hand) case. (d) Find the coordinates of the point C which is just mirrored. (e) Calculate the parameter a in 2(x − a)² for which C is the intersect with the DA curve and find the corresponding value of the reorganization energy λ. Check that λ indeed is much larger than in the third case mentioned, prohibiting nuclear tunnelling. (f) Plot the three relevant curves.
5.20 In biomass production on an industrial scale in the near future, a lake situated at 50° North has a surface area of 600 [km2] and is covered with cyanobacteria (= blue-green algae), which convert all incoming sunlight with an efficiency of 8%. (a) Consider 1 [m2] of the lake, account for the energy input required to dry the algae, manipulate them and turn them into a liquid fuel like ethanol, and make a rough estimate of how many [L] of ethanol one may obtain (using Table 4.4 and Figure 5.3). (b) Take the lake itself, calculate the energy content harvested annually and calculate the percentage of the total energy consumption of your country that is provided for, using Figure 9.2 or a recent value from the internet.
References

[1] Gallo, C. (1998) The utilization of microclimate elements, in Renewable and Sustainable Energy Reviews, 1+2, 89–114 (eds C. Gallo, M. Sala and A.A.M. Sayigh).

[2] International Energy Agency, www.iea.org/papers/2010/csp_roadmap.pdf.

[3] US Department of Energy, http://www.eere.energy.gov/solar/m/csp_program.html.

[4] Markvart, T. (ed.) (1994) Solar Electricity, John Wiley, Chichester, UK.

[5] Ashcroft, N.W. and Mermin, N.D. (1976) Solid State Physics, Holt, Rinehart, Winston, New York, esp. Chapter 29.

[6] Palz, W. (1978) Solar Electricity, An Economic Approach to Solar Energy, Butterworths, Unesco, London, Appendix 1.

[7] International Energy Agency, http://www.iea.org/papers/2010/pv_roadmap.pdf.

[8] PV Education, www.pveducation.org/pvcdrom.

[9] Betz, A. (1959) Einführung in die Theorie der Strömungsmaschinen, Braun, Karlsruhe, pp. 225–226.

[10] Walker, J.F. and Jenkins, N. (1997) Wind Energy Technology, John Wiley, Chichester.

[11] Johnson, G.L. (1985) Wind Energy Systems, Prentice Hall, Englewood Cliffs, NJ, USA.

[12] German Wind Energy Association BWE (2009) Wind Energy Market, 19th edn, German Wind Energy Association BWE, Berlin.

[13] Babinsky, H. (2003) How do wings work? Physics Education, 35, 497–503.

[14] Landberg, L. (2001) Short term prediction of local wind conditions. Journal of Wind Engineering and Industrial Aerodynamics, 89, 235–245.

[15] International Energy Agency, http://www.iea.org/papers/2009/Wind_Roadmap.pdf.

[16] Sørensen, B. (1979) Renewable Energy, Academic Press, London, UK.

[17] Ocean Energy Council, http://www.oceanenergycouncil.com/index.php/Tidal-Energy/Tidal-Energy.html.

[18] European Commission Joint Research Centre, http://re.jrc.ec.europa.eu/esti/index_en.htm.

[19] Wageningen University, www.algaeparc.nl.

[20] Dekker, J.P. and Boekema, E.J. (2008) Supercomplexes of photosystems I and II with external antenna complexes in cyanobacteria and plants, in Photosynthetic Protein Complexes. A Structural Approach (ed. P. Fromme), Wiley-VCH, Germany, pp. 137–154.
[21] Nield, J., http://www.queenmaryphotosynthesis.org/nield/psIIimages/oxygenicphotosynthmodel.html.

[22] Hu, X., Damjanovic, A., Ritz, T. and Schulten, K. (1998) Architecture and mechanism of the light-harvesting apparatus of purple bacteria. Proceedings of the National Academy of Sciences, 95, 5935–5941.

[23] Fleming, G.R. and van Grondelle, R. (1997) Femtosecond spectroscopy of photosynthetic light-harvesting systems. Current Opinion in Structural Biology, 7, 738–748.

[24] Ben-Shem, A., Frolov, F. and Nelson, N. (2003) Crystal structure of plant photosystem I. Nature, 426, 630–635.

[25] Roszak, A.W., Howard, T.D., Southall, J. et al. (2003) Crystal structure of the RC-LH1 core complex from Rhodopseudomonas palustris. Science, 302 (5652), 1969–1972.

[26] Novoderezhkin, V.I., Rutkauskas, D. and van Grondelle, R. (2006) Dynamics of the emission spectrum of a single LH2 complex: interplay of slow and fast nuclear motion. Biophysical Journal, 90, 2890–2902.

[27] van Grondelle, R. and Novoderezhkin, V.I. (2010) Photosynthesis: quantum design for a light trap. Nature, 463 (7281), 614–615.

[28] Roszak, A.W. and Cogdell, R.J. Private communication.

[29] Deisenhofer, J., Epp, O., Sinning, I. and Michel, H. (1995) Crystallographic refinement at 2.3 Å resolution and refined model of the photosynthetic reaction centre from Rhodopseudomonas viridis. Journal of Molecular Biology, 246, 429–457.

[30] Novoderezhkin, V.I., Dekker, J.P. and van Grondelle, R. (2007) Mixing of exciton and charge-transfer states in photosystem II reaction centers: modeling of Stark spectra with modified Redfield theory. Biophysical Journal, 93, 1293–1311.

[31] O'Regan, B. and Grätzel, M. (1991) A low-cost, high-efficiency solar cell based on dye-sensitized colloidal TiO2 films. Nature, 353, 737–739.

[32] Hagfeldt, A. and Grätzel, M. (1995) Light-induced redox reactions in nanocrystalline systems. Chemical Reviews, 95, 49–68.

[33] Atkins, P.W. (1994) Physical Chemistry, 5th edn, Oxford University Press, Oxford.

[34] Bach, U., Lupo, D., Comte, P. et al. (1998) Solid-state dye-sensitized mesoporous TiO2 solar cells with high photon-to-electron conversion efficiencies. Nature, 395, 583–585.

[35] Liska, P., Thampi, K.R., Grätzel, M. et al. (2006) Nanocrystalline dye-sensitized solar cell/copper indium gallium selenide thin-film tandem showing greater than 15% conversion efficiency. Applied Physics Letters, 88.

[36] McConnell, I., Li, G. and Brudvig, G.W. (2010) Energy conversion in natural and artificial photosynthesis. Chemistry and Biology, 17, 434–447.

[37] Hambourger, M., Moore, G.F., Kramer, D.M. et al. (2009) Biology and technology for photochemical fuel production. Chemical Society Reviews, 38, 25–35.

[38] Steinberg-Yfrach, G., Rigaud, J.-L., Durantini, E.N. et al. (1998) Light-driven production of ATP catalysed by F0F1-ATP synthase in an artificial photosynthetic membrane. Nature, 392, 479.

[39] Kanan, M.W. and Nocera, D.G. (2008) In situ formation of an oxygen-evolving catalyst in neutral water containing phosphate and Co2+. Science, 321, 1072–1075.
[40] Magnuson, A., Anderlund, M., Johansson, O. et al. (2009) Biomimetic and microbial approaches to solar fuel generation. Accounts of Chemical Research, 42 (12), 1899–1909. [41] Hellingwerf, K.J. and Teixeira de Mattos, M.J. (2009) Alternative routes to biofuels: Light-driven biofuel formation from CO2 and water based on the ‘photanol’ approach. Journal of Biotechnology, 142 (Special Issue), 87–90.
6 Nuclear Power

The energy to be gained by controlled nuclear reactions originates from the beginning of the solar system about 10¹⁰ years ago. At that time many atomic nuclei were formed, of which only the stable ones and the very long lived are still present. The physics of nuclear power is understood from Figure 6.1, which shows the binding energy per nucleon versus the mass number A for these nuclei. The binding energy is the energy required to separate all nucleons from each other; it is also the energy which is liberated if they all come together again. If one divides the binding energy by the mass number A one obtains the binding energy per nucleon shown in Figure 6.1.

The curve in Figure 6.1 exhibits a maximum around A ≈ 60, in the vicinity of ⁵⁶Fe, which therefore is one of the most stable nuclei. One may check that the fission of a nucleus with A ≈ 235 into two parts of about A ≈ 118 would increase the binding energy by about 1 [MeV] per nucleon, which in total would be 235 [MeV]. In reality, the induced fission of ²³⁵U liberates 207 [MeV] of energy. In nuclear power stations almost 200 [MeV] can be delivered as heat ([1], p. 12). In nuclear fusion two very light nuclei are forced to form a new nucleus. From Figure 6.1 it follows that in this case an even larger increase in binding energy per nucleon can be obtained.

In order to appreciate the order of magnitude of the energy released one should remember that combustion of fossil fuels takes place by chemical reactions, that is by rearranging electrons in atomic or molecular orbits with energies in the order of [eV]. As the number of electrons is in the order of the number of nucleons, the use of nuclear power would liberate ≈1 [MeV] per electron, a gain in energy by a factor of 10⁶. This is reflected in a reduction of the volume of fuel required for a given amount of energy by the same factor, and a similar reduction in the volume of waste.
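The factor of 10⁶ quoted above can be reproduced with a three-line estimate, using the ~200 [MeV] of heat per fission given in the text and a typical chemical reaction energy of ~1 [eV] per atom:

```python
# Energy released per nucleon in fission of 235U versus per atom in a
# chemical (combustion) reaction, reproducing the factor ~10^6 in the text.

fission_mev = 200.0        # heat delivered per fission of 235U [MeV]
nucleons = 235
per_nucleon_ev = fission_mev * 1e6 / nucleons   # energy per nucleon [eV]

chemical_ev = 1.0          # typical chemical reaction energy [eV] per atom

factor = per_nucleon_ev / chemical_ev
print(f"fission energy per nucleon: {per_nucleon_ev:.2e} eV")
print(f"gain over chemical energy: factor {factor:.1e}")
```

The result, roughly 8.5 × 10⁵, rounds to the order-of-magnitude factor of a million used in the argument that follows.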
The traditional argument in favour of nuclear energy is that this factor of a million buys a lot of safety facilities. It is interesting to note that a similarly large factor stood at the cradle of the industrial revolution around 1780 in Britain. Before that time hydropower from running rivers or height differences of about 10 [m] was widely in use to drive the looms. When a water
Figure 6.1 Binding energy per nucleon versus the mass number A for stable nuclei. Fission of heavy nuclei or fusion of light ones will produce an increase in binding energy per nucleon and liberate nuclear energy.
molecule falls by 10 [m], the gain in kinetic energy equals about 2 × 10⁻⁵ [eV]. The use of chemical energy in the form of fossil fuels is 50 000 times more effective than the use of falling water; it boosted production and changed society. It is an open question whether something similar will happen with nuclear power. Below, we will first discuss how a nuclear power station works (Section 6.1), followed by a brief discussion of nuclear fusion (Section 6.2) and a summary of health aspects and regulations in Section 6.3. In Section 6.4 the management of the fuel cycle is described, with due attention paid to ways to reduce the amount of very long-lived radioactive materials. The characteristics of the planned fourth generation of nuclear reactors will be briefly discussed in Section 6.5.
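The per-molecule numbers in this comparison are easy to verify from mgh; the constants below are standard values:

```python
# Kinetic energy gained by one water molecule falling 10 m, compared with
# the ~1 eV scale of chemical reactions, reproducing the factor ~50 000.

m_h2o = 18 * 1.66054e-27   # mass of a water molecule (18 u) [kg]
g = 9.81                   # gravitational acceleration [m/s2]
h = 10.0                   # fall height [m]
ev = 1.602e-19             # [J] per [eV]

e_fall_ev = m_h2o * g * h / ev      # ~2e-5 eV, as stated in the text
ratio = 1.0 / e_fall_ev             # chemical (1 eV) over hydro energy
print(f"energy per molecule: {e_fall_ev:.2e} eV")
print(f"chemical/hydro ratio: {ratio:.0f}")
```

The computed ratio, about 5 × 10⁴, matches the "50 000 times more effective" quoted for fossil fuels over falling water.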
6.1
Nuclear Fission
The physics of nuclear fission is discussed in handbooks on nuclear physics. The operation of a nuclear power station requires attention to many details ([1], [2], [3]), of which the physical essentials will be presented below.

6.1.1
Principles
The uranium present in nature consists mainly of two isotopes: 0.7% 235 U and the remaining 99.3% 238 U. Of these two, 235 U can easily be used for fission. The dominant fission process of 235 U is initiated by slow neutrons:

235 U + n (slow) → 236 U → X + Y + νn (fast) + ≈200 [MeV]   (6.1)
We will discuss several aspects of this equation, which is central to reactor physics. First we explain the meaning of ‘fast’ and ‘slow’. Fast neutrons have energies of 1 [keV] up to a few [MeV]; the energy of the neutrons on the right-hand side of Eq. (6.1) is on average about 1 [MeV]. Slow neutrons have an energy of 1 [eV] or less, down to so-called thermal neutrons, which are in equilibrium with the surrounding matter at a temperature T [K]. For thermal neutrons the velocities therefore follow the Maxwell–Boltzmann distribution

f(u)du = 4π (m/(2πkT))^(3/2) u² e^(−mu²/(2kT)) du   (6.2)

where f(u)du is the probability that a neutron has a velocity between u and u + du. In a simplified form we have already met Eq. (6.2) in Chapter 2. Relation (6.2) is found from (2.25) by using the fact that many combinations (ux , uy , uz ) give the same velocity u; the number of these combinations is proportional to the volume 4πu²du in u-space. Eq. (6.2) then follows from the requirement that ∫f(u)du = 1 (Exercise 6.3). One may show (Exercise 6.4) that Eq. (6.2) peaks at a kinetic energy kT. For T = 293 [K] one has a thermal neutron with a peak velocity of 2200 [m s−1 ] and an energy of 0.025 [eV]. The probability that a neutron will induce fission or any other reaction in a single nucleus is described by the cross section σ [m2 ], which we already met in Eq. (2.47). In the present case it is defined as

σ = (number of (fission) reactions per second) / (number of incident neutrons per m² per second)   (6.3)
It may be interpreted as the area, that is ‘seen’ by the incident neutron. It appears that the cross section for fission of 235 U for a thermal neutron is about 1000 times as large as that for a neutron of 1 [MeV], which is the reason why in traditional reactors the fast neutrons produced in (6.1) are slowed down. Eq. (6.1) describes that after capture of a slow neutron a compound nucleus 236 U is formed which has sufficient energy to fission spontaneously into two (sometimes three) reaction products. They are called X and Y because they occur in many pairs over a large part of the periodic table. They usually decay further, in some cases giving rise to delayed neutron emission. The fission product 87 Br, for example, emits electrons after on average 55.6 [s], giving 87 Kr, partly in an excited state, which then rapidly emits a neutron. About 0.75% of the neutrons emitted by the fission of compound nucleus 236 U are delayed by an average delay time of ≈12.5 [s]. ([1], p. 39). Besides the fission products one will notice that a number ν of fast neutrons are emitted after absorption of one slow neutron in a fissile material, in this case 235 U. For fission of 235 U this number averages at ν ≈ 2.43; for other fissionable materials it may be slightly different; for 239 Pu it amounts to ν ≈ 2.87 and for 233 U to ν ≈ 2.48. Of these fast neutrons at least one has to survive competing processes during slowdown in order to produce a sustainable fission reactor. These processes will be discussed in Section 6.1.2. Of the gain in binding energy of process (6.1) the bulk (≈168 [MeV]) goes into kinetic energy of the fission fragments X and Y; these have a very small mean free path within the uranium fuel (≈ 10 [μm]), essentially converting their kinetic energy into heat at the location of the fission. 
Another 20 to 25 [MeV] is delivered as kinetic energy of the neutrons and as β- or γ-radiation with a range of 10 to 100 [cm], contributing to the overall generated heat in the reactor ([1], p. 12).
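The thermal-neutron figures quoted with Eq. (6.2), a peak velocity of 2200 [m s−1 ] and a peak kinetic energy kT = 0.025 [eV] at 293 [K], can be checked numerically. The following sketch is illustrative (the constants are standard values assumed here, not taken from the text):

```python
# Sketch (assumed constants, not from the book): thermal-neutron numbers of Eq. (6.2).
import math

K_B = 1.381e-23       # Boltzmann constant [J/K]
M_N = 1.675e-27       # neutron mass [kg]
E_CHARGE = 1.602e-19  # [J] per [eV]

def thermal_peak_speed(T: float) -> float:
    """Speed at which f(u) of Eq. (6.2) peaks: df/du = 0 gives (1/2) m u^2 = kT."""
    return math.sqrt(2 * K_B * T / M_N)

def thermal_peak_energy_ev(T: float) -> float:
    """Kinetic energy kT at the peak, in eV."""
    return K_B * T / E_CHARGE

u_peak = thermal_peak_speed(293)       # close to 2200 m/s
e_peak = thermal_peak_energy_ev(293)   # close to 0.025 eV
```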
[Figure 6.2: labelled components — reflector, coolant, steam to turbine, control rods, heat exchanger (steam generator), water from condenser, core with moderator]
Figure 6.2 Scheme of a nuclear reactor system. The heat from the reactor core is taken away by a coolant. A reflector tries to keep as many neutrons inside as possible, whereas control rods can absorb superfluous neutrons if necessary.
In a fission reactor the heat is used to generate steam, which drives a turbine, producing electricity. In Figure 6.2 the essentials of a reactor system are presented. The reactor core contains many fuel elements (not shown) which are surrounded by a moderator, which slows down the fast neutrons produced in (6.1) sufficiently to initiate a new fission. A coolant, which may also play the role of the moderator, takes away the heat which is produced. The reflector keeps as many neutrons inside the reactor as possible, while the control rods consist of neutron absorbers, which control the neutron fluxes. In 2009 there were about 435 nuclear reactors in the world for electricity production. Most of them were built in the 1970s and 1980s. Typical data on the three types most in use and utilizing process (6.1) are shown in Table 6.1. The PWR is the pressurized water reactor, where high pressure prevents steam formation. Steam to drive a turbine is produced in a second loop by a heat exchanger. The BWR is the boiling water reactor, where the steam is generated in the reactor itself. CANDU stands for Canadian deuterium-uranium reactor. All three types shown use the coolant as moderator; for PWR and BWR this is ordinary water, but for the CANDU it is heavy water, D2 O,

Table 6.1 Data on current nuclear reactors.

                                              PWR       BWR       CANDU
Fuel                                          UO2       UO2       UO2
Enrichment (% 235 U)                          ≈2.6      ≈2.5      Natural (0.7)
Moderator                                     H2 O      H2 O      D2 O
Coolant                                       H2 O      H2 O      D2 O
Electric power/[MWe ]                         1150      1200      500
Coolant out/[°C]                              332       286       293
Maximum fuel temperature/[°C]                 1788      1829      1500
Net efficiency (%)                            34        34        31
Pressure in reactor vessel/[bar = 10^5 Pa]    155       72        89
Conversion ratio                              ≈0.5      ≈0.5      ≈0.45
Summarized and reproduced by permission of John Wiley, copyright 1992, from [3], 634–635.
as its name suggests. Ordinary water absorbs neutrons to some extent, forming D from H; therefore PWRs and BWRs need uranium which is enriched from the ‘natural’ 0.7% to about 3%. For CANDU, enrichment is not necessary as deuterium hardly absorbs neutrons. The conversion ratio in the table refers to the number of fissile Pu nuclei formed from the 238 U nuclei. The reactors discussed above belong to the so-called second generation of power reactors. Since then the models have been improved but not fundamentally changed, defining the third generation, while the nuclear industry is working on the fourth generation, which should be on the market towards 2030 ([1], p. 271; Section 6.5). Half of the new types under study operate on fast neutrons. In this chapter we focus on the physics of the PWR and BWR in Table 6.1. Before going into detail we need to discuss two concepts: radioactive decay and the macroscopic cross section.
6.1.1.1 Radioactive Decay
Radioactive decay of particles is a statistical process in which the decrease dN of the number of particles in a small time interval dt is proportional to the number N and the time interval dt

dN = −λN dt   (6.4)

Here λ [s−1 ] is called the radioactive decay constant. It follows that dN/dt = −λN with the solution

N(t) = N0 e^(−λt)   (6.5)

The half life T1/2 is defined as the time after which N = N0 /2; this leads to

T1/2 = ln 2 / λ   (6.6)
The mean lifetime of the particles is defined as

(1/N0) ∫0^∞ t(−dN) = −(1/N0) ∫0^∞ t (dN/dt) dt = λ ∫0^∞ t e^(−λt) dt = (1/λ) ∫0^∞ s e^(−s) ds = 1/λ   (6.7)
The first integral refers to the time t that the decaying particles (−dN) have lived.
6.1.1.2 Macroscopic Cross Section
The cross section σ of a reaction was defined in Eq. (6.3), where it was interpreted as the area seen by the incident neutron. It is measured in barns with 1 [barn] = 10^−28 [m2 ]. For the nucleus 235 U the geometrical cross section would be about 1.7 [barn]; the measured cross section of fission by thermal neutrons amounts to 582 [barn], which indicates that the reaction proceeds very well at thermal energies ([3], p. 103). For fission the cross section is indicated by a subscript: σf . For absorption the cross section, defined in a similar way as in Eq. (6.3), is indicated by σa and for scattering by σs . In a reactor the neutrons have to deal with many nuclei, say N nuclei per [m3 ]. The macroscopic cross sections are indicated by an upper case Σ [m−1 ] with the same subscripts
Figure 6.3 A parallel beam of neutrons is entering a half-infinite slab at x = 0. The macroscopic absorption cross section Σa corresponds to the relative decrease in intensity (−dI/dx)/I at position x due to absorption only.
as before, that is Σf , Σa , Σs , and defined as

Σf = N σf   (6.8)

and similarly for absorption and scattering. We now show that these quantities are directly related to the mean free path of the corresponding process. The mean free path of the incident neutrons is defined as the average distance [m] the neutron can travel before it causes fission. Consider a half-infinite slab of material with its boundary in the y, z-plane, extending to the right of the plane x = 0, as sketched in Figure 6.3. A beam of neutrons with intensity I0 [m−2 s−1 ] is entering and is absorbed in the material, assuming other processes are absent. At a certain position x the intensity of the beam will have decreased to I(x) [m−2 s−1 ]. In the next slab with thickness dx and with an area A [m2 ] there are NAdx centres of absorption. Eq. (6.3) referred to a single nucleus; for NAdx nuclei the number of absorptions is found by multiplying the left and the right of Eq. (6.3) by NAdx. The number of absorptions in this part of the slab then becomes (NAdx)σa I(x). This is equal to the decrease in intensity of the beam

−A dI(x) = (NAdx)σa I(x) = A I(x)Σa dx   (6.9)

It follows that dI/dx = −Σa I(x) or

I(x) = I0 e^(−Σa x)   (6.10)
The neutrons absorbed in the layer dx of Figure 6.3 have covered a path with length x. In the case of absorption the mean free path λa of the incident neutrons is defined as the average distance [m] the neutron can travel before it is absorbed. This becomes

λa = (1/I0) ∫(I=I0)^(I=0) x(−dI) = (1/I0) ∫(x=0)^(x=∞) x Σa I dx = Σa ∫(x=0)^(x=∞) x e^(−Σa x) dx = 1/Σa   (6.11)
Note that this derivation is very similar to the one given in Eq. (6.7) and that the dimensions are correct, as λa is expressed in [m] and Σa in [m−1 ].
6.1.2 Four Factor Formula
We now take a closer look at a reactor which has N(235) atoms of 235 U per [m3 ] and N(238) atoms of 238 U. In this reactor there are n slow neutrons available to initiate
fission (6.1). After fission and slowing down of the resulting fast neutrons there will be k × n slow neutrons available to initiate a new round of fission. Below we will derive the so-called four factor formula for the multiplication factor k. Some of the n slow neutrons will be captured by 235 U without fission, with a capture cross section σc (235); others will be captured by 238 U without fission, with a capture cross section σc (238). This implies that the number of fast neutrons produced by process (6.1) is not νn but ηn with

η = ν N(235)σf (235) / [N(235){σf (235) + σc (235)} + N(238)σc (238)]   (6.12)
The factor η is known as the neutron reproduction factor. The ηn fast neutrons will leave the fissile material and are forced to slow down in a moderator. This material consists of atoms with atomic weight A. The neutrons will lose kinetic energy by elastic collisions with these atoms. The average energy fraction lost in these collisions equals ([1], p. 29)

ΔE/E = 2A/(A + 1)²   (6.13)
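Eq. (6.13) gives a quick, crude estimate of how many collisions a fast neutron needs to thermalize in a given moderator. The sketch below uses the average loss per collision; a proper treatment uses the logarithmic energy decrement, so these numbers are indicative only:

```python
# Sketch (not from the book): crude estimate of the number of elastic
# collisions needed to slow a 1 MeV neutron to 0.025 eV, using the *average*
# fractional loss of Eq. (6.13). Indicative numbers only.
import math

def collisions_to_thermalize(A: int, e0_ev: float = 1e6, e_th_ev: float = 0.025) -> int:
    """Collisions needed if each collision keeps the fraction
    1 - 2A/(A+1)^2 of the energy, Eq. (6.13)."""
    kept = 1.0 - 2.0 * A / (A + 1) ** 2
    return math.ceil(math.log(e0_ev / e_th_ev) / -math.log(kept))

n_H = collisions_to_thermalize(1)   # hydrogen: a few tens of collisions
n_C = collisions_to_thermalize(12)  # graphite: over a hundred
```

This shows why light nuclei are preferred as moderator: hydrogen thermalizes a neutron in a few tens of collisions, graphite needs over a hundred.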
which for big A behaves as 2/A. Therefore the number of collisions necessary to reach thermal conditions increases with A. Consequently only light nuclei may serve as moderator, as at any collision the neutron may be absorbed. In practice, only three nuclei are in use: H (in the form of H2 O) and D (in the form of D2 O), which were discussed in connection with Table 6.1, and carbon (C) in the form of graphite; its absorption cross section for neutrons is small, but with A = 12 it is on the edge. We follow the adventures of the ηn fast neutrons resulting from (6.12). Before slowdown some of them will induce fission in 238 U and 235 U, leading to a fast fission factor ε which is only slightly bigger than 1 because of the small fission cross sections for fast neutrons. This results in εηn fast neutrons. For a finite-size reactor a fraction lf , with the subscript f for fast, will leak out, leading to

ηε(1 − lf )n   (6.14)
fast neutrons available for slowdown. During slowdown the neutrons face obstacles. In particular there are some sharp resonances in the absorption cross section of n + 238 U as a function of neutron energy, not leading to fission. During slowdown a neutron will pass through a sequence of decreasing energies. Their number and their values depend on the initial neutron energy and on the mass A of the moderator. If during slowdown a neutron happens to have a resonance energy it may be absorbed and lost for the fission process. This is taken into account by a resonance escape probability p < 1. Obviously, a light moderator would be better than a heavier one, as with fewer steps to reach thermal energies the resonance energies might be bypassed. After slowdown some of the slow neutrons may escape the reactor before doing their duty. This is taken into account by a leakage factor ls for slow neutrons. This results in

ηεp(1 − lf )(1 − ls )n   (6.15)
slow neutrons. Finally, only a fraction f < 1, the so-called thermal utilization factor, will be absorbed in the fuel, the rest being absorbed in the moderator and the cladding of the fuel elements. Therefore the number of neutrons kn available for another round of
fission becomes

kn = ηεpf (1 − lf )(1 − ls )n   (6.16)
Here, k is called the multiplication factor of the reactor. The reflector shown in Figure 6.2 should reduce the leakages lf and ls . For a very large reactor, theoretically infinitely big, the leakage factors vanish, leading to the four-factor formula

k∞ = εηpf   (6.17)
For a typical reactor this multiplication factor will be in the order of 1.10 or 1.20. Taking into account the leakage factors, the factor k can then reach the value k = 1 required for stable operation. Let us look in more detail at Eq. (6.17). The quantities ε and η are both larger than one, whereas p and f are smaller than one. As for η, by virtue of (6.12) its value will increase with the enrichment of the uranium. For natural uranium it amounts to η = 1.33, for 5% enriched uranium it becomes η = 2.0, close to the value η = 2.08 for pure 235 U. So for a reactor of the type described here, enrichment beyond a few percent does not make sense. The fissioning material is traditionally prepared in long rods, set in a moderating medium. Let us assume that the moderator-to-fuel ratio increases. Then the thermal utilization factor f will decrease, but the chance p of escaping resonance absorption increases (there are fewer absorbing nuclei and better moderation provides a greater chance of bypassing the resonances). The result is that the product pf has a maximum at a certain ratio of moderator to fuel. This is shown in Figure 6.4 for a typical case where ε and η do not depend on the ratio; the multiplication factor k∞ = εηpf is then proportional to pf.
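The enrichment dependence of η can be checked by evaluating Eq. (6.12) directly. The sketch below uses commonly quoted thermal cross sections, which are assumptions here rather than values taken from the text:

```python
# Sketch (not from the book): the neutron reproduction factor of Eq. (6.12)
# as a function of enrichment. Assumed thermal cross sections (commonly
# quoted values): sigma_f(235) = 582 b, sigma_c(235) = 99 b, sigma_c(238) = 2.7 b.
def eta(frac235: float, nu: float = 2.43,
        sf235: float = 582.0, sc235: float = 99.0, sc238: float = 2.7) -> float:
    """Eq. (6.12) with N(235)/N = frac235 and N(238)/N = 1 - frac235."""
    n235, n238 = frac235, 1.0 - frac235
    return nu * n235 * sf235 / (n235 * (sf235 + sc235) + n238 * sc238)

eta_nat = eta(0.007)  # ~1.3 for natural uranium, as quoted in the text
eta_5pc = eta(0.05)   # close to 2 for 5% enriched uranium
eta_pure = eta(1.0)   # ~2.08 for pure 235U
```

The rapid saturation of η beyond a few percent enrichment is exactly the point made in the text.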
Figure 6.4 The multiplication factor k∞ as a function of the moderator-to-fuel ratio Nmod / Nfuel . The specific parameters refer to natural uranium with D2 O as moderator. Note that the curve with k has two intersection points, A and B, where k = 1 (adapted and reproduced by permission of H. van Dam and J. E. Hoogenboom from [4].)
As a matter of interest we note that the ratio for which pf is maximal depends on the diameter d of the fuel rods. The resonance absorption determining p is so strong that absorption happens at the surface of the fuel elements when a neutron enters the element from the moderator. For a constant amount of fuel, increasing d means decreasing the relative surface area, thereby increasing p. It turns out that for a combination of natural uranium and graphite as moderator the peak lies at k∞ = 1.05 for d = 3 [cm]. The migration length of a neutron in the moderator is about 54 [cm]. These data were used to build the first nuclear reactor in Chicago in 1942. The fact that k∞ has a peak as a function of Nmod /Nfuel leads to a straightforward stability consideration. In Figure 6.4 the behaviour of k∞ is sketched for natural uranium with D2 O as moderator. It shows that the peak value of k∞ is larger than one. When leakage is taken into account the resulting multiplication factor k will exhibit a similar peak-like behaviour with a maximum still higher than k = 1. So the reactor has to operate somewhat to the left (point A) or to the right (point B) of the maximum. Assume that the reactor of Figure 6.4 operates at a certain value of Nmod /Nfuel with k = 1 and suppose that the temperature rises. In cases where water (in this case D2 O) acts as moderator, the water will expand or boil, resulting in a smaller ratio Nmod /Nfuel . If the reactor were operated to the left of the maximum (A), the value of k would go down, becoming smaller than one, and the reactor would slow down. If the reactor were operated to the right of the peak (B), the value of k would rise with increasing temperature, resulting in k > 1 and in an increase in the reactor power, which would increase the temperature even further, leading to a runaway effect. Note that it is cheaper to operate on the right (relatively less fuel), but safer to operate on the left.
6.1.3 Reactor Equations
For a finite reactor the leakage of fast and slow neutrons out of the reactor has to be taken into account. Leakage will depend on the neutron fluxes at the boundaries of the reactor, which have to be calculated by means of a diffusion equation. In practice one writes down the equations for a finite reactor and requires the reactor to run stationary, that is, at constant power. The time dependence of the fluxes of neutrons then should disappear and the so-called critical size of the reactor is found. For a finite reactor its geometry will enter into the calculation, such as the distance between the fuel rods, the material composition of these rods, the tubes of the cooling system, the properties of the external boundary and many other data. This will require extensive numerical calculations. We therefore discuss the reactor in general terms only. The resulting equations will then be applied to a homogeneous reactor of a simple shape, one of the few examples that can be calculated algebraically. The slow neutrons that keep the fission process (6.1) running will comprise a spectrum of several energies. For simplicity we assume that only one neutron energy is present. Below, we indicate the differential equations and their solutions. As variable we choose the neutron density n(x, y, z) [m−3 ], which is the number of neutrons per [m3 ]. Consider a volume element of the reactor. Conservation of the number of neutrons may be expressed as

∂n/∂t = sources − absorption − leakage   (6.18)
In this equation ‘sources’ refers to the creation of neutrons within the volume element, ‘absorption’ refers to their absorption in all kinds of materials and ‘leakage’ to their leakage out of the volume element. All volume elements may have different properties, while at the boundaries of the reactor flux may leave the vessel. In order to understand the principles we will derive the differential equations with simple assumptions. We write down the terms on the right of Eq. (6.18), starting with the absorption term. The neutrons have a single energy and therefore a single velocity u [m s−1 ]. The mean free path of a neutron in an absorbing medium is λa [m] (Eq. (6.11)). The expression u/λa [s−1 ] will be the number of absorptions per neutron per second. Then

nu/λa = Σa nu [m−3 s−1 ]   (6.19)

will be the number of absorptions per [m3 ] per second, where we used Eq. (6.11). The quantity nu = φ in Eq. (6.19) is called the neutron flux. The source term in Eq. (6.18) may be written as k∞ times the absorption and not k, as the leakage term is treated separately. This gives for the sources

k∞ Σa nu [m−3 s−1 ]   (6.20)
The leakage out of the volume element may be described by the divergence of a vector field, as is explained in Appendix B; the divergence (div) was met in Eq. (4.16) while discussing the heat equation. In the present case the vector field is the neutron current density J [m−2 s−1 ], which is the number of neutrons passing through one [m2 ] per second, where the [m2 ] is taken perpendicular to the velocity of the neutrons. The leakage in Eq. (6.18) consequently may be written as

div J   (6.21)

The neutron current is a diffusion effect caused by different neutron densities in the material; therefore J and n are related, analogous to Eq. (4.3), as

J = −D grad(nu)   (6.22)

For historical reasons grad(nu) is taken instead of grad(n). The diffusion coefficient D has the dimension of a length, as the student should check. In our simple treatment it may be taken to be independent of position, which gives

div J = −D div grad(nu) = −D Δ(nu)   (6.23)
Putting in the necessary minus signs, Eq. (6.18) may be written as

∂n/∂t = k∞ Σa nu − Σa nu + D Δ(nu)   (6.24)

It is useful to look separately at the behaviour of the last two terms by putting their sum equal to zero

−Σa nu + D Δ(nu) = 0   (6.25)

In one dimension this becomes

D ∂²(n(x)u)/∂x² = Σa n(x)u   (6.26)
Table 6.2 Properties of thermal neutrons in several moderators at T = 293 [K]. The lifetime in the last column refers to an infinite medium ([2], p. 242).

Moderator       ρ/[10^3 kg m−3 ]   L/[cm]   D/[mm]   Lifetime l/[s]
H2 O            1.00               2.75     1.6      2.1 × 10^−4
D2 O            1.10               100      8.5      0.14
Be              1.85               21       5.4      3.9 × 10^−3
C (graphite)    1.70               54       8.6      1.6 × 10^−2
Reproduced by permission of Kluwer Academic Publishers, copyright 1994, from [2], pp. 150,242.
with solution

n(x)u = (nu)0 e^(−x/L)   (6.27)

The last two terms of Eq. (6.24) apparently represent a density which is decreasing with position because of the absorption Σa . The quantity L [m] is called a diffusion length and is found by substitution of solution (6.27) into Eq. (6.26):

L² = D/Σa   (6.28)
A large value of the diffusion coefficient D will increase the diffusion length, while a large value of the absorption Σa will decrease the diffusion length. For a few moderators the values of L are given in Table 6.2. Note the difference in diffusion length of thermal neutrons between heavy water (D2 O) and ordinary water (H2 O) as moderator, which makes D2 O suitable for natural uranium, while ordinary water requires enrichment.
6.1.4 Stationary Reactor
Eq. (6.24) may be rewritten as

∂n/∂t = (k∞ − 1)Σa nu + D Δ(nu)   (6.29)

For a stationary reactor the time dependence of this equation will vanish: ∂n/∂t = 0. The corresponding geometrical dimensions are called the critical size of the reactor. To simplify our discussion we assume the reactor to be homogeneous (fuel mixed with moderator); then there is no variation of the parameters in Eq. (6.29) with position. As the velocity u is taken to be constant as well, Eq. (6.29) simplifies to

Δn + B²n = 0   (6.30)
with

B² = (k∞ − 1)Σa /D = (k∞ − 1)/L²   (6.31)

where Eq. (6.28) was used. In reactor physics B is called the buckling parameter.
6.1.4.1 The Rectangular Reactor
Consider a rectangular reactor with sizes a, b, c in the x, y, z-directions; the x-values vary between x = −a/2 and x = +a/2, and similarly for the other two coordinates. The differential equation (6.30) can then be separated with solution n(x, y, z) = nx (x)ny (y)nz (z) and with B² = Bx² + By² + Bz². Using Eq. (B15) in Appendix B for Δ in rectangular coordinates gives three similar equations. For the x-direction it becomes

d²nx (x)/dx² + Bx² nx (x) = 0   (6.32)
There are many sine and cosine solutions for this equation. We suspect that a stationary physical solution would have no zeros inside the reactor and should vanish at its boundaries. Then the only remaining solution is

nx (x) = nx0 cos(πx/a)   (6.33)

Bx² = (π/a)²   (6.34)
The other two sets of equations are found by substituting y and z for x in Eqs. (6.32) to (6.34). From Eq. (6.31) one finds

(k∞ − 1)/L² = B² = Bx² + By² + Bz² = (π/a)² + (π/b)² + (π/c)²   (6.35)

which determines the critical size (a, b, c) of the reactor. A bigger reactor would have less leakage of neutrons and would be supercritical with k > 1; it does not have a stationary solution to Eq. (6.29). A smaller reactor would have too much leakage and be subcritical with k < 1, again without a stationary solution.
6.1.4.2 Nonleakage Probability
The difference between the multiplication factors k and k∞ is the leakage of neutrons out of the reactor. This may be expressed as a nonleakage probability

P = (neutrons absorbed)/(neutrons absorbed + leaked out) = Σa nu/(Σa nu − D Δ(nu))   (6.36)
The numerator on the right in Eq. (6.36) will be recognized as the absorption (6.19). The last term in the denominator is precisely div J as found in Eq. (6.23). For the stationary reactor one may replace Δn by −B²n from Eq. (6.30). One finds

P = Σa nu/(Σa nu − D Δ(nu)) = (D/L²)nu/((D/L²)nu + DB²nu) = 1/(1 + L²B²)   (6.37)
Comparing Eqs. (6.16) and (6.17) one finds

k = P k∞   (6.38)
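The one-group relations (6.31), (6.35), (6.37) and (6.38) fit together: at the critical size, k = P k∞ equals exactly one. The sketch below demonstrates this self-consistency for a cubic reactor with illustrative inputs (k∞ = 1.05 and the graphite diffusion length L = 0.54 [m] from Table 6.2 — assumed values, not a worked example from the text):

```python
# Sketch (illustrative numbers, not from the book): self-consistency of the
# one-group relations (6.31), (6.35), (6.37) and (6.38) for a cubic reactor.
import math

def critical_cube_side(k_inf: float, L: float) -> float:
    """Side a of a critical cube: 3*(pi/a)^2 = B^2 = (k_inf - 1)/L^2, Eq. (6.35)."""
    B2 = (k_inf - 1.0) / L ** 2
    return math.pi * math.sqrt(3.0 / B2)

def k_effective(k_inf: float, L: float, B2: float) -> float:
    """k = P*k_inf with P = 1/(1 + L^2 B^2), Eqs. (6.37) and (6.38)."""
    return k_inf / (1.0 + L ** 2 * B2)

a = critical_cube_side(1.05, 0.54)  # a cube of order 13 m in this crude model
B2 = 3.0 * (math.pi / a) ** 2
k = k_effective(1.05, 0.54, B2)     # exactly 1 at the critical size
```

The resulting cube is large because k∞ = 1.05 barely exceeds one; note also that this one-group estimate is much cruder than a real reactor calculation.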
6.1.5 Time Dependence of a Reactor
The next step would be to solve the time-dependent equation (6.29). This is complicated and we will take a shortcut. As mentioned in Section 6.1.1 the fission process (6.1) leads to prompt fast neutrons, and after an average time of td ≈ 12.5 [s] a small fraction β = 0.75% of delayed neutrons with an average energy of 0.5 [MeV] follows. We first ignore the latter and take them into account later. Consider a group of N neutrons that are absorbed, most of them leading to fission. They result in kN slow neutrons after a time determined by two processes: slowdown of the fast neutrons and diffusion within the moderator before being absorbed. Of these processes the diffusion time l dominates; it is given in Table 6.2. One may conclude that in a time dt = l the number of slow neutrons increases by dN = kN − N = (k − 1)N. Note that in using the parameter k, leakage is taken into account. It follows that

dN/dt = (k − 1)N/l   (6.39)

For k = 1 the reactor is stable with N = N0 . Assume that for any reason at t = 0 the multiplication factor jumps from k = 1 to k = 1 + ρ. In this case Eq. (6.39) gives

dN/dt = ρN/l   (6.40)

with the solution

N(t) = N0 e^(ρt/l)   (6.41)
The reactor period is defined as

τ = l/ρ   (6.42)
which is the time after which the neutron density has increased by a factor e. For a reactor with water as moderator Table 6.2 gives l = 2.1 × 10^−4 [s]. With, for example, ρ = 10^−3 the reactor period would become 0.2 [s], much too small to activate counter measures, as after 10 [s] the flux would have increased by a factor 5 × 10^21. It must be mentioned that for a reactor operating on fast neutrons alone, the time l is determined by the short time that the fast neutron needs to cross its mean free path up to the next fast fission. That time l ≈ 10^−7 [s] with the same perturbation ρ = 10^−3 would lead to a reactor period τ = 10^−4 [s] = 0.1 [ms]. Although a time of 0.1 [ms] is much too small for mechanical intervention, the same is true for the larger time of 0.2 [s]. It is fortunate, however, that for reactors operating with slow neutrons the delayed neutrons allow for mechanical counter measures on small perturbations. For fast reactors, not discussed here, the physical design should comprise control. We now include delayed neutrons in the argument that led to Eq. (6.42). We assume that at time t = 0 there are N slow neutrons to be absorbed. Again the multiplication factor jumps from k = 1 to k = 1 + ρ. A fraction (1 − β) of the resulting neutrons is prompt; their number is

(1 − β)kN = (1 − β)(1 + ρ)N ≈ (1 − β + ρ)N   (6.43)
They will give rise to another round of absorption and fission after a time t = l. For a small perturbation ρ < β there is no net increase in the number of neutrons after the small time t = l. To determine the time after which a net increase will happen, we calculate the average reproduction time. For a fraction (1 − β) of the neutrons the reproduction time is l; for the fraction β it is td . The average reproduction time therefore will be

tr = (1 − β)l + βtd ≈ βtd   (6.44)
The last approximation is allowed as βtd ≫ l. Analogous to Eq. (6.39) we may say that in the time dt = tr the number of neutrons increases by dN = ρN. In the same way as Eq. (6.42) this leads to a reactor period

τ = tr /ρ   (6.45)
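The effect of the delayed neutrons on the reactor period can be made explicit by evaluating Eqs. (6.42), (6.44) and (6.45) with the values used in this section. The sketch below is illustrative only:

```python
# Sketch (not from the book): reactor periods with and without delayed
# neutrons, Eqs. (6.42), (6.44) and (6.45), for the values used in the text:
# l = 2.1e-4 s (water, Table 6.2), beta = 0.0075, t_d = 12.5 s, rho = 1e-3.
def period_prompt(l: float, rho: float) -> float:
    """Reactor period tau = l/rho when delayed neutrons are ignored, Eq. (6.42)."""
    return l / rho

def period_delayed(l: float, rho: float, beta: float = 0.0075,
                   t_d: float = 12.5) -> float:
    """tau = t_r/rho with t_r = (1 - beta)*l + beta*t_d, Eqs. (6.44) and (6.45)."""
    t_r = (1.0 - beta) * l + beta * t_d
    return t_r / rho

tau_prompt = period_prompt(2.1e-4, 1e-3)    # ~0.2 s: far too fast for intervention
tau_delayed = period_delayed(2.1e-4, 1e-3)  # ~1.5 minutes: control is feasible
```

The delayed neutrons stretch the period by more than a factor of 400 for this small perturbation, which is what makes mechanical control possible at all.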
With the parameters used earlier, ρ = 10^−3, β = 0.0075 and td = 12.5 [s], one finds τ = tr /ρ ≈ 93 [s], long enough for mechanical intervention. After this period the neutron density will rise much faster again, with period l/ρ. For larger values of ρ the reactor period τ would become smaller because of the ρ in the denominator of Eq. (6.45). For ρ > β the exponential growth of the neutron density would start at t = 0, at first with a period l/(ρ − β); later, after t = td , with period l/ρ.
6.1.6 Reactor Safety
Following the definitions of the International Atomic Energy Agency (IAEA), which amongst other things has been established to stimulate the ‘peaceful use of nuclear energy’, we distinguish three types of safety components. A passive component operates automatically, without any external (human) actions or extra components. An example is a sprinkler system against fires. If components of a safety system need some outside interference they are called active, such as the fire brigade. Inherent safety is achieved by the elimination of a specified hazard by means of the choice of material and concept. The example here would be a concrete building filled with glassware and pottery, where there is nothing to burn. The concept of inherent safety is useful for the chemical industry as well. If one has to move petrol through a pipe system at a pressure of 7 [bar] and a temperature of 100 [°C], a small leak would lead to an explosion; at a temperature of 10 [°C] this is very unlikely. Obviously the pipe system should be designed in this virtually inherently safe way. Following this discussion it is clear that passive safety measures will depend on the design of the reactor. The worst thing which could happen during the operation of a reactor is that the temperature rises, for example by loss of cooling. A passive safety measure would be to design a reactor such that it has a negative temperature coefficient dk/dT around the operational parameters; for then the reactivity would go down with a rise of temperature and the temperature presumably would fall. A negative temperature coefficient was discussed in connection with Figure 6.4 regarding moderation. Another example of passive safety is the use of the Doppler effect. If the temperature of a reactor rises, the absorption lines for neutrons in 238 U will broaden. Consequently the neutrons will be absorbed at a broader range of energies, resulting in
a decrease in the chance p of escaping resonance. Therefore, with rising temperature the multiplication factors k∞ and k will decrease, giving a negative temperature coefficient. Note that this argument holds only when the resonance absorption in 238 U is a significant effect, which is only the case for a reactor with not too highly enriched uranium. If for any reason these safety measures are not sufficient to control the reactor, the reactor should be shut down. This could be done manually (active safety) or automatically (passive safety). A possibility is to embed the reactor in a pool of cold water with a high concentration of boron. This element has a high absorption cross section for thermal neutrons (σ a = 759 [barn]). With rising temperature the boron water would enter the reactor core by thermal expansion, shutting off the reactor immediately. No external power would be required. 6.1.6.1
6.1.6.1 Decay Heat
If the reactor is shut down, the fission process will stop. However, the fission products X and Y in Eq. (6.1) will continue to decay, emitting β- and γ-radiation and neutrons, which will all be converted into heat. Besides, neutrons absorbed by 238 U will produce heavier nuclei, actinides like 239 Pu. We come back to them in Section 6.4, but mention here that they are long-lived, so their short-term decay heat is small. Finally, the delayed neutrons will cause fission for about a minute after reactor shutdown. Figure 6.5 gives a sketch of the decay heat after shutdown, essentially due to the fission products X and Y (Eq. (6.1)). The graph refers to a reactor which has been operational for a relatively long time, during which a stock of fission products has built up. After an emergency shutdown the decay heat must be accommodated by a large heat capacity of the reactor parts and by extra cooling (Exercise 6.8). Finally, a strong containment should be in place to prevent any escaping radioactive materials from entering
Figure 6.5 Decay heat after shutdown for a reactor which has been working for a long time. It is shown as a fraction of the original heat power of the reactor. Note that the inset refers to the first day. Based on data from [2], pp. 124, 125.
Environmental Physics
the environment. Such containment should be able to withstand high internal pressure and attacks from the outside.
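The shape of the decay-heat curve of Figure 6.5 can be reproduced with the empirical Way-Wigner rule of thumb, P(t)/P0 ≈ 0.066[t^−0.2 − (t + T)^−0.2], with t the time after shutdown and T the preceding operating time, both in seconds. This formula is a standard approximation, not taken from the book's data [2], and the three-year operating time below is an assumed value.

```python
# Hedged sketch: Way-Wigner rule of thumb for decay heat after shutdown,
# P(t)/P0 ~ 0.066 * (t**-0.2 - (t + T)**-0.2), with t the time since
# shutdown and T the preceding operating time, both in seconds.
# The three-year operating time is an assumed, illustrative value.

YEAR = 3.156e7  # [s]

def decay_heat_fraction(t, T=3 * YEAR):
    """Decay heat as a fraction of the pre-shutdown thermal power."""
    return 0.066 * (t**-0.2 - (t + T)**-0.2)

for label, t in [("1 second", 1.0), ("1 hour", 3600.0), ("1 day", 86400.0)]:
    print(f"{label:>9} after shutdown: {100 * decay_heat_fraction(t):.2f} % of full power")
```

Immediately after shutdown the decay heat is several percent of full power; for a 3000 [MW] thermal reactor even one day later roughly half a percent, of the order of 10 [MW], must still be carried away.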
6.1.6.2 Reactor Accidents of the Twentieth Century: Chernobyl (1986) and Harrisburg (1979)
The safety measures mentioned above were taken into account in the design of virtually all reactors, except the Chernobyl types. The cause of the Chernobyl accident in the former Soviet Union in 1986 was an experiment with the reactor in which the engineers cut off certain safety signals. During the experiment the k-value of the reactor rose above k = 1, which was not corrected in time. This caused a high temperature; the fuel was destroyed and injected into the coolant; reactions between high-temperature water and the Zr in the fuel cladding produced hydrogen, which initiated a chemical explosion, with the result that the reactor containment (weaker than in the West, in fact a simple building) blew up [5]. The other well-known accident with a nuclear reactor happened in the USA at Harrisburg in 1979. In this case a valve was not closed properly, and the operator misinterpreted the signals and made decisions by which the reactor lost its coolant. The decay heat was not carried away and the reactor largely melted. This so-called core meltdown was known as a possibility, and the containment of Western reactors is designed to survive such an accident. Still, at Harrisburg radioactive materials escaped into the environment, although to a much smaller extent than in the Chernobyl case, as will be related in Section 6.3.3. The nuclear industry has reacted to these accidents by improving the existing safety measures, but also by envisaging new designs that depend only on passive systems and inherently safe constructions. The latter should be based on two essential aspects. First, when the temperature of the reactor core rises significantly, the reactor should shut down because of physical principles. Second, after an emergency shutdown the decay heat of the fission products should be transported away, again by virtue of physical principles.
To finish this discussion it should be mentioned that one will always need active means to change the multiplication factor k, for example to start or stop a reactor. Also, as fuel rods age, the fission products will capture neutrons without fission, effectively reducing k. In addition, the 235 U will burn up during reactor operation. Although the latter effect is to some extent compensated by the production of fissile Pu (illustrated in Figure 6.14), the ageing of the fuel requires adaptation of the reactor parameters during operation. This adaptation may be done with the control rods indicated in Figure 6.2. They contain strongly neutron-absorbing materials like the boron mentioned earlier. When the reactor starts, these rods are inserted deep into the reactor; as the fuel ages, the rods are drawn out. A more recent development is to put Gd into the fuel. Its odd isotopes have a large neutron absorption cross section. When these isotopes absorb a neutron they become even-numbered and their neutron absorption is much smaller. This effect compensates for the ageing of the fuel. In reactor physics Gd is called a burnable neutron poison.
6.1.6.3 Calamity in Fukushima, Japan (2011)
On March 11, 2011 a heavy, magnitude 9.0, earthquake struck Japan near Fukushima, to the northeast of Tokyo. The quake destroyed the electric power grid, among other things.
We give a summary of the events as they appeared while we were finishing the manuscript of this book. Later analysis will appear on our site www.few.vu.nl/environmentalphysics. In response to the heavy quake the nuclear reactors in the region were shut down automatically by inserting control rods (passive safety). The fissile materials in the reactor, even when shut down, release a large amount of heat, particularly in the beginning, as was shown in Figure 6.5. This heat has to be removed by cooling water. With the external electricity supply lacking, diesel generators took over the role of powering the pumps. Within an hour the quake was followed by a tsunami, a huge wave with a height of 10 [m] or more, which disabled the diesel generators for some of the reactors. As was planned for this contingency, a set of batteries took over their role. After eight hours their stored energy was exhausted. Consequently the water in which the fuel rods were immersed started boiling off, producing steam. The upper parts of the fuel bundles heated up from the decay heat of the fission products and the Zr cladding was oxidized by the steam (H2O), resulting in hydrogen gas under increasing pressure. Even partial melting of the fuel is presumed to have occurred. Just outside the reactor containment, but inside the reactor building, spent fuel rods were stored in cooling pools. The breakdown of cooling also affected these pools, so that the top parts of those rods also heated up and volatile fission products escaped. The high pressure in the reactor containment had to be reduced in order to avoid breakdown of the reactor vessels and to make it possible to pump water inside. Therefore the gas had to be vented to the atmosphere. Part of the hydrogen escaped into the reactor building, where it exploded with oxygen, destroying the ceiling and upper walls of the building. The water in the reactor vessel and the spent fuel pools acts as a radiation shield.
Boiling off the water also destroys this shield, which made it very dangerous for the workers to come close and make repairs. Even helicopter flights were not always acceptable. Seawater was used to extinguish fires and to cool the reactor and the pools. After a week the grid started operating again and the situation seemed to be improving, although a few critical points remained. Some of the reactors, namely those cooled with seawater, will corrode and have to be taken out of action permanently.
6.1.7 Nuclear Explosives
Nuclear fission bombs use 235 U or 239 Pu and operate on fast neutrons only. We mentioned before that the fission cross sections for fast neutrons are much smaller than those for thermal neutrons. In an explosive this is compensated by the high concentration of the fissile materials. The mean free path of fast neutrons in 235 U is about 16 [cm] (Exercise 6.6). In order to reduce leakage, a 15 [cm] thick mantle of natural uranium is constructed around the 235 U sphere. This scatters outgoing neutrons back effectively, resulting in a critical mass of about 15 [kg] of 235 U ([6], p. S27). In an explosive, fission of 235 U is initiated by firing two pieces of 235 U together, each below the critical size and with a partial mantle. When the halves come together, a polonium sample will emit α particles impinging on 9 Be; this causes a neutron boost which initiates the fission reaction. For an explosive of 239 Pu a conventional detonation around a 239 Pu sphere will compress the sphere until a critical density is reached. At a higher density the macroscopic cross
section (6.8) increases, and the mean free path becomes smaller, giving less leakage. Depending on how much the actual density exceeds the critical one, a multiplication factor of k = 1.5 or 1.6 is reached. The implosion method can also be used with 235 U. Fusion bombs work according to the principles explained in Section 6.2, where the high temperatures needed for fusion are produced by a classic fission explosive.
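The 16 [cm] mean free path quoted above can be checked from λ = 1/(nσ), with n the number density of 235 U nuclei. The metal density of 19 [g cm−3] and the effective fast-neutron cross section of about 1.3 [barn] used below are assumed round values for this estimate, not figures from the text.

```python
# Rough check (assumed round numbers, not the book's calculation) of the
# ~16 cm mean free path of fast neutrons in 235U metal: lambda = 1/(n*sigma).

AVOGADRO = 6.022e23
RHO_U = 19.0          # [g/cm^3], approximate density of uranium metal
M_U235 = 235.0        # [g/mol]
SIGMA_FAST = 1.3e-24  # [cm^2], assumed effective fast-neutron cross section

n_u = RHO_U * AVOGADRO / M_U235   # nuclei per cm^3
mfp = 1.0 / (n_u * SIGMA_FAST)    # [cm]
print(f"number density: {n_u:.2e} cm^-3, mean free path: {mfp:.1f} cm")
```

The result, close to 16 [cm], is consistent with the value quoted in the text (Exercise 6.6).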
6.2 Nuclear Fusion
From Figure 6.1 it was already concluded that fusion of the lightest nuclei could lead to energy gains. In practice the following reactions are possible:

2D + 3T → 4He + 1n + 17.6 [MeV]    (6.46)
2D + 2D → 3He + 1n + 3.27 [MeV]    (6.47)
2D + 2D → 3T + 1H + 4.03 [MeV]     (6.48)
2D + 3He → 4He + 1H + 18.3 [MeV]   (6.49)
In these equations deuterium and tritium are indicated by 2D and 3T, although the more correct notation would have been 2H and 3H. From Eqs. (6.46) to (6.49) it is clear that the positively charged nuclei need to overcome an appreciable repulsive Coulomb barrier before the attractive nuclear forces can lead to the energy gains indicated. The Coulomb barrier amounts to 500 [keV] or 600 [keV], but because of quantum mechanical tunnelling through the barrier the maximum cross section is reached at 100 [keV] for the 2D + 3T reaction and at a few hundred [keV] for the other reactions. So, for the near future, attention is focused on the first reaction (6.46). In principle, a mixture of deuterium and tritium is heated up to energies of 10 to 20 [keV]. These kinetic energies correspond to the Maxwellian distribution (6.2) with temperature T; in fact we already mentioned that the peak of the energy distribution is at kT. For 10 [keV] this corresponds to 120 million [K]. It is customary in this field to express temperatures in units of [keV]. As the ionization energy of the hydrogen atoms concerned is only around 13.6 [eV], at these temperatures all atoms are ionized and an electrically neutral mixture of ions and electrons is obtained: a plasma. We note in passing that the temperatures of ions and electrons may be different. The deuterium required to fuel the plasma is in ample supply: for every 6700 hydrogen atoms in water there is one deuterium atom. The efficiency of the electrolysis of water depends on the mass of the atoms; therefore by (repeated) electrolysis one obtains pure deuterium. The (electric) energy required should count as a (small) cost for fusion. Tritium, with a half life of 12.3 years, is produced by neutron bombardment of lithium, which is in ample supply as well. It is planned to surround the plasma with a lithium blanket; the neutrons from reaction (6.46) will then breed the required tritium.
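The 'ample supply' of deuterium can be illustrated with a small calculation: if every deuterium atom in one litre of water were burnt in reaction (6.46), with the tritium bred from lithium, the yield would be tens of [GJ]. This is an illustrative estimate based on the 1:6700 abundance quoted above, not a figure from the text.

```python
# Illustrative estimate: fusion energy available from the deuterium in one
# litre of water, assuming each D is burnt in the D + T reaction (6.46)
# (17.6 MeV per reaction, with the T bred from lithium).

AVOGADRO = 6.022e23
EV = 1.602e-19                  # [J]
M_WATER = 18.015                # [g/mol]

mol_h2o = 1000.0 / M_WATER              # 1 litre of water
h_atoms = 2 * mol_h2o * AVOGADRO        # hydrogen atoms
d_atoms = h_atoms / 6700.0              # 1 D per 6700 H (see text)
energy = d_atoms * 17.6e6 * EV          # [J]
print(f"deuterium atoms per litre: {d_atoms:.2e}")
print(f"fusion energy: {energy:.1e} J  (~{energy / 1e9:.0f} GJ)")
```

About 28 [GJ] per litre of ordinary water, the heat content of almost a tonne of petrol, which is why the fuel supply is not the limiting factor for fusion.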
The ions and electrons in a plasma must be prevented from hitting the walls; otherwise so many alien nuclei would be sputtered from the walls that the fusion reaction would stop immediately. The integrity of the plasma is retained by magnetic confinement. In fact, about 80% of all fusion research is done on the so-called Tokamak design, a Russian acronym for 'toroidal chamber with magnetic coils' (Figure 6.6).
Figure 6.6 Tokamak. The main magnetic field is toroidal; the vertical magnetic field provides a centripetal Lorentz force by which the ions follow the main toroid. Other magnets produce induction currents that heat the plasma as required.
In the following we will discuss the energy balance (losses and gains) in a plasma per unit of volume. The plasma will be confined for a finite time τE, which is determined by:

- Instabilities
- The fraction of ions which hit the walls after all
- Collisions between ions and electrons
- The synchrotron radiation originating from the helix-circular shape of the particle orbits.
When n is the number of ions per [m3], the number of electrons will be the same, since the plasma is electrically neutral. The average kinetic energy of the particles at temperature T is 3kT/2 (Exercise 6.3). The energy density [J m−3] therefore is 2n × 3kT/2 = 3nkT. The loss per second from the processes mentioned will therefore be

3nkT/τE [W m−3]   (6.50)
Another energy loss is caused by Bremsstrahlung, occurring when charged particles meet and are deflected. The number of encounters will be proportional to n(n − 1), which is essentially n², and the number per unit of time will be proportional to their relative velocity u as well. This velocity will, on average, be determined by the square root of the kinetic energy, so it will be proportional to (kT)^1/2. The Bremsstrahlung loss therefore may be written as αn²(kT)^1/2, where α [m3 J1/2 s−1] is a proportionality constant. The total power PL [W m−3]
leaving the plasma will be

PL = αn²(kT)^1/2 + 3nkT/τE   (6.51)
Assume that one has nd [m−3] deuterium nuclei and nt [m−3] tritium nuclei with relative velocity u. The cross section of the d + t reaction is defined similarly to Eq. (6.3). The number of reactions per second on a single deuterium nucleus therefore is the cross section σ multiplied by the number of incident tritium nuclei per [m2] per second. This number equals nt u; consequently the number of reactions per deuterium nucleus per second becomes nt σu [s−1]. The total number of reactions R [m−3 s−1] becomes

R = nd nt <σu>   (6.52)
The symbol <σu> indicates an average over all σu combinations by means of a Maxwell distribution (6.2) for the velocities and a separation of centre-of-mass and relative energies. Both the velocity u and the cross section σ depend on the energy kT. The resulting behaviour is displayed in Figure 6.7. In the region of interest, between 10 and 20 [keV], it may be approximated by the straight line

<σu> = 1.1 × 10^−24 (kT)²/([keV])² [m3 s−1]   (6.53)

The denominator indicates that the numerical value of kT in [keV] should be used. The production rate of thermonuclear energy Pth [W m−3] is found by using Eq. (6.52), multiplying by the energy E = 17.6 [MeV] produced in every single reaction (see (6.46)) and
Figure 6.7 The property <σ u> for the D + T reaction as a function of energy kT. (Reproduced from Tokamaks, John Wesson, pg 7, fig 1.3.1, 1987, with permission from Oxford University Press.)
Figure 6.8 Power flows in deriving the Lawson criterion or the energy break-even criterion. (Reproduced from Tokamaks, John Wesson, pg 7, fig 1.3.1, 1987, with permission from Oxford University Press.)
taking nd = nt = n/2:

Pth = <σu> E n²/4   (6.54)

The energy break-even criterion for the production of fusion power, ignoring the power required to organize the system, is reached when the energy production becomes larger than the energy loss. This statement can be made more precise, leading to the Lawson criterion. The derivation of the Lawson criterion follows the sketch of Figure 6.8. The external heating power PH is provided to counter the losses PL of Eq. (6.51). These two should balance, as the thermonuclear power (6.54) may not heat the plasma directly. In that case the energy from Eq. (6.54) first has to be converted into electric power with an efficiency η. The total amount of power PT [W m−3] leaving the plasma is the sum

PT = PL + Pth = αn²(kT)^1/2 + 3nkT/τE + n²<σu>E/4   (6.55)
The generator produces ηPT of electrical energy, suitable to heat the plasma. The criterion for energy gain becomes

ηPT = η(PL + Pth) > PL   (6.56)

or

Pth > [(1 − η)/η] PL   (6.57)

Substitution of (6.51) and (6.54) gives

nτE > 3kT / {[η/(1 − η)] <σu>E/4 − α(kT)^1/2}   (6.58)
The right-hand side of this inequality is pictured in Figure 6.9. In drawing the figure, Lawson's parameter values η = 1/3 and α = 3.8 × 10^−29 [J1/2 m3 s−1] were used, as well as
Figure 6.9 The right-hand side of Eq. (6.58) as a function of energy kT.
<σu> from Figure 6.7. As we are interested in the energy region 10 to 20 [keV], one may deduce

nτE > 0.6 × 10^20 [m−3 s]   (6.59)
This is Lawson's criterion. Another way of looking at the energy balance would be to require that the plasma, once ignited, maintains its own energy. This is called the self-heating criterion or ignition criterion. This requirement could be achieved if the kinetic energy of the 4He nuclei produced by the fusion process (6.46) could be kept within the plasma, using their relatively short range and high charge. This condition is written as

Pα > PL   (6.60)
where

Pα = n²<σu>Eα/4   (6.61)

This corresponds to Eq. (6.54) with the total energy E replaced by the α-particle energy Eα. This energy is only one-fifth of the total energy because of the mass ratios in Eq. (6.46): Eα = E/5. By combining Eqs. (6.51), (6.60) and (6.61), and neglecting the Bremsstrahlung term, one finds

nτE > 12kT/(<σu>Eα)   (6.62)
Substituting expression (6.53) for <σu> in the energy region 10 to 20 [keV] leads to

nτE(kT) > 31 × 10^20 [m−3 s keV]   (6.63)
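Both numerical criteria can be checked from the quantities given in the text: Eq. (6.58) with Lawson's η = 1/3 and α, and the triple product (6.63) from Eq. (6.62) with Eα = E/5 and approximation (6.53). The sketch below reproduces nτE ≳ 0.6 × 10^20 [m−3 s] near 20 [keV] and nτE(kT) ≳ 31 × 10^20 [keV m−3 s].

```python
# Numerical check of the Lawson criterion (6.58)/(6.59) and the ignition
# triple product (6.62)/(6.63), using only quantities quoted in the text.

KEV = 1.602e-16                     # [J]
E_DT = 17.6e3 * KEV                 # 17.6 MeV per reaction, in [J]
E_ALPHA = E_DT / 5.0                # alpha-particle share, E_alpha = E/5
ALPHA = 3.8e-29                     # Bremsstrahlung constant [J^(1/2) m^3 s^-1]
ETA = 1.0 / 3.0                     # Lawson's conversion efficiency

def sigma_u(kT_keV):
    """Approximation (6.53), valid between 10 and 20 keV, in [m^3 s^-1]."""
    return 1.1e-24 * kT_keV**2

def lawson_rhs(kT_keV):
    """Right-hand side of Eq. (6.58): required n*tau_E in [m^-3 s]."""
    kT = kT_keV * KEV
    gain = ETA / (1 - ETA) * sigma_u(kT_keV) * E_DT / 4
    return 3 * kT / (gain - ALPHA * kT**0.5)

def triple_product(kT_keV):
    """n*tau_E*kT from Eq. (6.62), in [keV m^-3 s]."""
    kT = kT_keV * KEV
    return 12 * kT / (sigma_u(kT_keV) * E_ALPHA) * kT_keV

print(f"Lawson n*tau_E at 10 keV: {lawson_rhs(10):.2e} m^-3 s")
print(f"Lawson n*tau_E at 20 keV: {lawson_rhs(20):.2e} m^-3 s")
print(f"ignition triple product:  {triple_product(15):.2e} keV m^-3 s")
```

Within the quadratic approximation (6.53), the triple product comes out independent of kT at about 31 × 10^20 [keV m−3 s], while the Lawson requirement drops from roughly 1.3 × 10^20 at 10 [keV] to 0.6 × 10^20 [m−3 s] at 20 [keV].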
Figure 6.10 Fusion quality diagram, where the left-hand side of Eq. (6.63) is shown as a function of the plasma (ion) temperature. (Reproduced by permission of Jet Joint Undertaking.)
For the lower part of the region of interest one may multiply both sides of Eq. (6.59) by 10 [keV] and obtain 6 × 10^20 [m−3 s keV]; similarly, for the higher part, multiply by 20 [keV] and find 12 × 10^20 [m−3 s keV]. It appears that inequality (6.63) is a stronger condition than Lawson's criterion (6.59). The left-hand side of inequality (6.63) is called the fusion product. Its value, together with the plasma temperature achieved, indicates how far a certain machine is from fusion conditions. A fusion diagram that combines the data for older machines, long out of use, with more recent ones is given in Figure 6.10. The 'inaccessible region' is defined as the region where the Bremsstrahlung losses in Eq. (6.51) are bigger than the other losses. The right-hand side of Eq. (6.63) is indicated by Q = 1 at the top of the graph. One will notice the good status of JET, the Joint European Torus, where for 4 [s] a plasma existed with a fusion energy of 22 [MJ]. All major industrial countries are now working together on the ITER project, short for International Thermonuclear Experimental Reactor (but also Latin for 'the way'). Its design conditions for baseline operation are n = 9 × 10^19 [m−3], τE = 3.4 [s], kT = 20 [keV] [8]. The fusion product would be nτE(kT) = 61 × 10^20 [keV m−3 s], which is above criterion (6.63). The ITER machine, essentially a prototype, should produce a plasma in 2018 and be operational for 20 years. After ITER a DEMO reactor is envisaged, which should start its
operation in the 2030s; it should deliver power to the grid in the 2040s. The age of fusion could then start around 2075 [9].
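With the ITER design values quoted above (n = 9 × 10^19 [m−3], τE = 3.4 [s], kT = 20 [keV] [8]) the fusion product can be checked directly against the ignition criterion (6.63):

```python
# Check of the ITER design fusion product against the ignition criterion
# (6.63), using the design values quoted in the text.

n = 9e19        # ion density [m^-3]
tau_e = 3.4     # confinement time [s]
kT = 20.0       # temperature [keV]

fusion_product = n * tau_e * kT   # [keV m^-3 s]
criterion = 31e20                 # right-hand side of (6.63)
print(f"ITER fusion product: {fusion_product / 1e20:.0f} x 10^20 keV m^-3 s")
print(f"ignition criterion satisfied: {fusion_product > criterion}")
```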
6.3 Radiation and Health
A significant factor in decision-making on the use of nuclear power is the question of whether this way of energy conversion is dangerous to health. What distinguishes nuclear power from other types of energy conversion is the large inventory of radioactive materials in a power station and the risk to health when part of it escapes into the environment. In this section these health aspects of radiation are discussed.
6.3.1 Definitions
The radioactivity of a sample is one becquerel (1 [Bq]) if one nucleus of the sample decays per second: [Bq] = [s−1]. An old-fashioned unit is the curie, which corresponds to the radioactivity of 1 [g] of radium: 1 [Ci] = 37 × 10^9 [Bq]. This relation is now taken as the exact definition of the curie. The radioactivity of a sample by itself is not a good measure of its health effects. The radiation may have a low energy or the sample may decay very quickly, which will reduce the effects. Therefore the absorbed dose was defined, expressed in gray: a living creature receives an absorbed dose of one gray (1 [Gy]) when it absorbs an energy of one [J] per [kg] of body weight, [Gy] = [J kg−1]. Here the old-fashioned unit is the rad (radiation absorbed dose), with 1 [rad] = 0.01 [Gy]. The radiation damage in human tissue depends not only on the dose, but also on the kind of radiation received. Therefore the equivalent dose has been introduced, being the dose multiplied by a radiation weighting factor wR. Its unit is the sievert (Sv), with the dimension [J kg−1]. In this case the old-fashioned unit is the rem (radiation equivalent man), with 1 [rem] = 0.01 [Sv]. The weighting factors for the different kinds of radiation are displayed in Table 6.3 ([10] par 112).
Table 6.3 Radiation weighting factors for several types of radiation.

Type of radiation                                  wR
Photons, electrons, muons                          1
Protons, charged pions                             2
α-particles, fission fragments, heavy ions         20
Neutrons up to 0.01 [MeV]                          2.5
Neutrons from 0.01 to 1000 [MeV]                   rising to a peak of 20 at 1 [MeV], then decreasing
Neutrons higher than 1000 [MeV]                    2.5
With permission of Elsevier, copyright 2007, taken from [10] par 112, 118.
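The unit definitions above are easily encoded. As a consistency check, the curie can be recomputed from first principles as the activity λN of 1 [g] of 226 Ra; the 1600-year half life used below is a standard value, not quoted in the text.

```python
# Radiation units from the definitions above, plus a consistency check:
# the activity of 1 g of 226Ra should reproduce 1 Ci = 37e9 Bq.
# The 1600-year half life of 226Ra is a standard value, not from the text.

import math

CI = 37e9            # [Bq], definition of the curie
RAD = 0.01           # [Gy]
REM = 0.01           # [Sv]
AVOGADRO = 6.022e23
YEAR = 3.156e7       # [s]

def activity(grams, molar_mass, half_life_s):
    """Activity A = lambda * N of a pure sample, in [Bq]."""
    n_atoms = grams / molar_mass * AVOGADRO
    return math.log(2) / half_life_s * n_atoms

a_radium = activity(1.0, 226.0, 1600 * YEAR)
print(f"1 g of 226Ra: {a_radium:.2e} Bq  (1 Ci = {CI:.2e} Bq)")

# Equivalent dose: absorbed dose [Gy] times the weighting factor of Table 6.3
dose_gy = 0.001                                   # 1 mGy absorbed
print(f"1 mGy of photons:         {dose_gy * 1 * 1000:.1f} mSv")
print(f"1 mGy of alpha-particles: {dose_gy * 20 * 1000:.1f} mSv")
```

The computed activity, about 3.7 × 10^10 [Bq], agrees with the definition of the curie to within a few percent.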
Table 6.4 Norms for upper limits of radiation per year as recommended by ICRP.

Application                      Occupational limit [mSv year−1]    Limit for the general public [mSv year−1]
Effective dose (total body)      20                                 1
Specific parts of the body:
  Lens of the eye                150                                15
  Skin                           500                                50
  Hands and feet                 500                                —
With permission of Elsevier, copyright 2007, taken from [10], par 300.
6.3.2 Norms on Exposure to Radiation
From natural sources, such as cosmic radiation and the radioactivity in the atmosphere and soil, the average human being receives a dose equivalent of approximately 2 [mSv] a year. The precise value depends on the geographic location. In present-day life one also receives radiation from materials used in buildings (especially concrete) and from medical diagnostics. For the average person in industrialized countries this amounts to about 1.1 [mSv] per year [11]. We first summarize the official norms for the amount of dose equivalent that is taken as acceptable and then discuss their motivation. The International Commission on Radiological Protection (ICRP) has given norms [10] for exposures other than those from natural and medical sources. In simplified form they are given in Table 6.4. The norms are higher for radiological workers, the people who come into contact with radiation in their profession, as radiation exposure can be seen as a professional risk. The effective dose in Table 6.4 is the sum of the equivalent doses for all organs and tissues in the body, weighted with a tissue weighting factor wT. These tissue weighting factors are chosen such that, summed over the whole body, wT = 1. For specific tissues separate norms are therefore published. It should be mentioned that the norm for the extra total body dose (i.e. the effective dose) is of the order of the natural dose. ICRP recommendations are aimed at governments, who in their planning will estimate the radiation emitted by an installation (accelerator, power station, storage facility). In that planning three principles should apply:

(1) Justification: the government decision should do more good than harm
(2) Optimization of protection: doses should be as low as is reasonably achievable, taking into account economic and social factors
(3) Application of the limits: the total dose to any individual should not exceed the doses of Table 6.4 (and all other details not presented here).
From the formulation of these principles it is clear that value judgements will be involved: how does one, for example, take into account the economic and social factors of principle (2)? This is a matter for political decision-making. The norms of Table 6.4 were established by studying the effects of radiation on individuals who had suffered a quantifiable radiation dose. In particular, the fate of the victims of the nuclear explosions at Hiroshima and Nagasaki in 1945 has given a lot of information on
Figure 6.11 Dose–effect relations for stochastic processes. The curve on the right of D2 is measured and on its left it is based on models. For doses D1 < D2 the ICRP norms are based on linear extrapolation. The vertical value a0 is the natural incidence of the ill effect (D = 0).
the effects of low doses of radiation. For higher doses, animal experiments and accidental exposures of humans give information on the incidence of ill effects. The studies mentioned have led to the distinction of two types of effects of radiation. In stochastic processes the probability of the effect depends on the dose, but the effect itself and its seriousness do not. In deterministic processes radiation above a certain threshold destroys tissues and cells, and the seriousness of the effect increases with the dose. For a low dose of radiation the cause-effect relation is stochastic and for a high dose the relation is deterministic. The occupational limits in Table 6.4 are such that the cause-effect relation is not yet deterministic. For stochastic processes the dose-effect relation is shown in Figure 6.11. The effect of very small doses, to the left of point D2, cannot be measured; it has to be extrapolated. The extrapolation in Figure 6.11 is linear and does not have a threshold. At zero dose only the natural incidence of the effect remains. A linear extrapolation may overestimate the effect, as the organism has many biological repair mechanisms available to correct small damage. The ICRP approximation therefore may be assumed to be on the safe side. For radiation the most prominent ill effect is the development of tumours, many of which eventually result in death. The total effective dose in Table 6.4 has been estimated such that for lifelong exposure to the dose the mortality per year will increase by 1 in 10^4. For occupational workers it will increase by 1 in 10^3, in both cases based on linear extrapolation with no threshold. Whether one accepts extra radioactivity in the environment will depend on the way in which risks are considered. There are three possible positions:

(1) Avoid any risk
(2) The risk should be related to the benefit
(3) Any dose below the ICRP norms is acceptable.
ICRP takes the second position and explicitly rejects position (3), as governments should keep doses as low as reasonably achievable.
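The linear no-threshold extrapolation of Figure 6.11 can be written as incidence(D) = a0 + s·D for doses below D2. The sketch below uses purely hypothetical values for the natural incidence a0 and the slope s; it only illustrates the shape of the model, not actual risk figures.

```python
# Hypothetical sketch of the linear no-threshold (LNT) model of Figure 6.11:
# below the measurable dose D2 the incidence is extrapolated linearly,
# incidence(D) = a0 + s*D, with no threshold. a0 and s are made-up values.

A0 = 0.20      # natural incidence at D = 0 (hypothetical)
SLOPE = 0.05   # excess incidence per Sv (hypothetical)

def incidence(dose_sv):
    """LNT-extrapolated incidence of the ill effect at dose D [Sv]."""
    return A0 + SLOPE * dose_sv

for d in (0.0, 0.001, 0.01, 0.1):
    print(f"D = {d:5.3f} Sv -> incidence {incidence(d):.5f}")
```

The defining features of the model are visible directly: at D = 0 only the natural incidence a0 remains, and the excess incidence is strictly proportional to the dose, with no threshold below which it vanishes.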
6.3.3 Normal Use of Nuclear Power
Governments of virtually all industrialized nations have adopted the ICRP recommendations in their environmental laws and regulations. They have imposed strict rules on the radioactive emissions of their nuclear installations. In the absence of accidents the exposures for the general public are much smaller than the norms of Table 6.4. The individual dose in a country like the Netherlands, as a consequence of the presence of only one nuclear power plant, is too small to be measured. It is calculated as 10^−10 [Sv/year]. The doses are mainly due to consumption of food, in particular fish, because of the biological concentration of elements like 60 Co. Even if locally the doses are much higher than the average numbers, they still will be acceptable.
6.3.4 Radiation from Nuclear Accidents
Normal use of nuclear power produces very small emissions; it is more useful to look at what might happen in the case of accidents. During recent years we have experienced three major accidents with nuclear power stations: Harrisburg, USA in 1979, Chernobyl, USSR (now Ukraine) in 1986 and Fukushima, Japan in 2011. The consequences of Harrisburg were least serious, as most of the radioactivity was kept within the confinement: 10^17 [Bq] of inert gases were released and 6.5 × 10^11 [Bq] of 131 I. A single individual may have received a dose of 0.8 [mSv] and the average dose for the population within a distance of 80 [km] has been a factor of 100 less. The reactor core in Chernobyl contained 40 × 10^18 [Bq] of radioactive material. Virtually all activity present in inert gases was released (1.7 × 10^18 [Bq]) and some 1 × 10^18 [Bq] to 2 × 10^18 [Bq] in other materials. A considerable amount of the radioactivity was long lived: 4 × 10^16 [Bq] with a half life of the order of 10^4 days and 6 × 10^13 [Bq] with a half life of the order of 10^6 days were added to the radiation dose of the population for some time. It is a matter of opinion whether such a risk should prohibit the further development of nuclear power. Other serious accidents in Russia happened in the Urals near Sverdlovsk, where both in 1957 and in 1967 a vessel with nuclear waste exploded because of the temperature rise from its decay heat. It contaminated a region of tens of kilometres with 90 Sr (half life 28 [yr]) and 137 Cs (half life 30 [yr]). As these are β-emitters they do most harm inside the body, where they become built into the tissues. The contamination was of the order of 10^10 [Bq km−2] over many square kilometres. The radioactivity released from the Fukushima accident was partly due to the decay products which were vented with hydrogen to reduce the pressure in the reactor vessels. Another part originated from the dry and overheated fuel in the spent fuel pools.
The emissions were about 7% of those of Chernobyl, still considerable.
6.3.5 Health Aspects of Fusion
From a safety point of view, fusion of deuterium and tritium in a plasma has advantages over nuclear fission. This originates from the fact that at any moment only 1 [g] of D + T is really present in the plasma. Of these only tritium is radioactive, with a half life of 12.3 [yr].
The radioactivity of 1 [g] of T amounts to 370 × 10^12 [Bq]. In the walls of the reactor vessel a [kg] of T may be present, and another 1 or 2 [kg] in storage around the reactor. Tritium is the most dangerous of the radioactive materials connected with fusion. As it is gaseous it is easily released in an accident, and moreover the atom is so small that it easily leaks through container walls. The total activity of the T present would be about 10^16 [Bq], a few orders of magnitude lower than the activity in a nuclear fission power station. There is of course radioactivity in the fusion reactor walls, arising from the long irradiation with neutrons, some 10^20 [Bq] in the stainless steel for a 1000 [MWe] power station. This is of the same order of magnitude as in a fission reactor (10^21 [Bq]), but in contrast a fusion reactor cannot melt down. If anything goes wrong the plasma will collapse and fusion will stop. Note that reactions (6.48) and (6.49) do not produce neutrons and could be of some advantage from the irradiation point of view. As tritium poses the worst danger, the potential consequences of daily routine discharges have been studied. For people living around the reactor these could amount to 0.015 [mSv/year], well below the limits of Table 6.4. It is believed that the worst conceivable accident would release less than 200 [g] of tritium. This could lead to a dose of 60 to 80 [mSv] at a distance of 1 [km]. As this is comparable to the admissible dose for a radiological worker it would not pose an unacceptable risk – or so the argument goes. Anyway, the half life of tritium is much smaller than those of many fission products and the actinides.
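The 370 × 10^12 [Bq] quoted for 1 [g] of tritium follows directly from A = λN with the 12.3-year half life given in the text:

```python
# Check: activity of 1 g of tritium (3T, molar mass ~3.016 g/mol) from
# A = lambda * N, with the 12.3-year half life given in the text.

import math

AVOGADRO = 6.022e23
YEAR = 3.156e7                       # [s]

n_atoms = 1.0 / 3.016 * AVOGADRO     # atoms in 1 g of T
lam = math.log(2) / (12.3 * YEAR)    # decay constant [s^-1]
activity = lam * n_atoms             # [Bq]
print(f"activity of 1 g tritium: {activity:.2e} Bq")
```

This gives about 3.6 × 10^14 [Bq], consistent with the 370 × 10^12 [Bq] quoted above.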
6.4 Managing the Fuel Cycle
The nuclear fuel cycle is a concept that is fundamental to the exploitation of nuclear fission. In Figure 6.12 the left-hand side indicates that uranium ore is mined, leaving some tailings behind. The processed ore is shipped to an enrichment plant if the reactor operates on enriched uranium. Next the fuel elements are fabricated and put into a reactor. The spent fuel may be treated in a fuel conditioning plant and then left as waste. If this were all there would be no cycle, but only a once-through process.
Figure 6.12 Elements of the nuclear fuel cycle from uranium mines to reactor and radioactive waste. Without the dashed connections and reprocessing there is no cycle but a once-through process.
Nuclear Power
249
Closing the fuel cycle would imply that the spent fuel is reprocessed and most of it used again as input for the fabrication of fuel elements. Here too, part of the spent fuel, for example some of the fission products, cannot be broken down and will end up as radioactive waste anyway. This part of the cycle is dashed in Figure 6.12. Below, the different elements of the fuel cycle will be described, bearing in mind three aspects:
(1) Protection against production of nuclear warheads
(2) Health hazards during operation of the system
(3) Long-term health hazards, long after a particular plant has been shut down.

6.4.1 Uranium Mines
Uranium is mined in the form of uranium oxides, often chemically bound to other oxides; the richer ores contain some 1 [kg] to 4 [kg] of uranium per 1000 [kg] of ore. The ore is processed locally, producing yellow cake with about 80% U₃O₈. The tailings consist of residues with still enough uranium to produce ²²²Rn gas, which can escape into the surroundings; in fact 85% of the radioactive materials of the ore are still in the tailings. In nature the ores with their radioactivity would have been there as well, but after mining they are kept close to the open environment, usually in ponds under water at the processing site. Ore and tailings are at the beginning of the cycle; there is no real danger of their use for explosives, but the radioactivity of the continuously emitted ²²²Rn, with a half life of 4 days, is real.

6.4.2 Enrichment
Only 0.7% of the uranium nuclei in nature consist of the isotope ²³⁵U, the rest being ²³⁸U. Although it is possible to run nuclear reactors on natural uranium (cf. Table 6.1), many reactor builders prefer uranium containing some 3 to 4% ²³⁵U. As explained below Eq. (6.17), for power production an enrichment higher than 5% does not make sense. Higher enrichments of ²³⁵U definitely are required for the construction of a nuclear explosive. Table 6.5 gives the critical mass (rather than the critical size) of a core of enriched uranium, surrounded by a layer of natural uranium to reflect neutrons back into the fissioning core.

Table 6.5 Critical mass of uranium versus enrichment with ²³⁵U, with a 15 [cm] thick reflector of natural uranium around it; from Taylor, quoted in [6].

Enrichment with ²³⁵U /%   Critical mass /[kg]   ²³⁵U content /[kg]
100                       15                    15
80                        21                    17
60                        37                    22
40                        75                    30
20                        250                   50
10                        1300                  130

From Table 6.5 it is clear that isotope separation is one way to obtain nuclear explosives. The separation must use physical principles, as the atomic structure of the ²³⁵U and ²³⁸U atoms, and accordingly their chemistry, are essentially the same. Therefore chemical separation methods do not work. There are three separation methods based on physical principles. Of these, gaseous diffusion and gas centrifugation have long been established; the third, laser separation, has not found commercial application.

6.4.2.1 Gaseous Diffusion
Uranium is processed into the gas UF₆. At a given temperature T all molecules have the same average kinetic energy mu²/2; hence the molecules of the two uranium isotopes will have slightly different velocities. This results in slightly different penetration probabilities through a porous medium. In fact, the lighter molecules have a slightly higher average velocity and penetrate a little more easily. By repeating the process in a cascade the enrichment increases. This process is technologically the simplest, but it requires huge cascades, a lot of space and much energy.
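The small velocity difference can be made quantitative. A sketch (not from the text) using the ideal single-stage separation factor √(m₂/m₁) for the two UF₆ molecules and the ideal-cascade estimate for the number of stages; real plants need considerably more stages than this idealized count:

```python
import math

def ideal_stages(x_feed, x_product, alpha):
    """Number of ideal cascade stages needed to raise the 235-U fraction
    from x_feed to x_product, when each stage multiplies the abundance
    ratio R = x / (1 - x) by the stage separation factor alpha."""
    r_feed = x_feed / (1.0 - x_feed)
    r_product = x_product / (1.0 - x_product)
    return math.log(r_product / r_feed) / math.log(alpha)

# Ideal single-stage factor for UF6 diffusion: sqrt(m_heavy / m_light)
M_235UF6, M_238UF6 = 349.0, 352.0            # molar masses [g/mol]
alpha = math.sqrt(M_238UF6 / M_235UF6)       # ~1.0043 per stage
n = ideal_stages(0.007, 0.035, alpha)        # natural -> reactor grade
print(alpha, round(n))                       # hundreds of stages even ideally
```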
6.4.2.2 Gas Centrifuge
A particle with mass m rotating with an angular frequency ω can be described in a rotating coordinate frame as experiencing a fictitious centrifugal force mω²r, where r is the distance to the rotational axis. The potential describing this force is V = −mω²r²/2. Below Eq. (2.25) it was explained that the relative population of a higher energy level ℏω is determined by e^(−ℏω/kT). Analogously, in this case the Boltzmann distribution (2.25) is expressed as

n = n₀ e^(−V/kT) = n₀ e^(mω²r²/(2kT))    (6.64)
where n₀ is the number of particles where the field vanishes, that is, on the axis of the centrifuge. In Eq. (6.64) the mass enters the equation just as with gaseous diffusion; therefore a cascade of centrifuges will enrich uranium to the desired degree. This process is technologically more advanced than diffusion, as the angular frequency ω in Eq. (6.64) needs to be large to give a reasonable separation. The required high-speed centrifuges are not easy to construct and maintain.
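A rough sketch of the elementary separation factor implied by Eq. (6.64): the ratio of the two isotopes' Boltzmann factors at the wall depends only on the mass difference Δm = 3 [u] of the UF₆ molecules and the peripheral speed ωr. The wall speed of 500 [m/s] is an assumed illustrative value:

```python
import math

K_B = 1.381e-23   # Boltzmann constant [J/K]
AMU = 1.661e-27   # atomic mass unit [kg]

def centrifuge_alpha(delta_m_u, wall_speed, temp):
    """Elementary separation factor between axis and wall: ratio of the
    two isotopes' factors exp(m (omega r)^2 / (2kT)) from Eq. (6.64),
    which depends only on the mass difference delta_m."""
    return math.exp(delta_m_u * AMU * wall_speed**2 / (2.0 * K_B * temp))

# UF6: mass difference 3 [u]; assumed peripheral speed 500 [m/s] at 300 [K]
print(centrifuge_alpha(3.0, 500.0, 300.0))   # ~1.16 per stage
```

Note the design consequence: a factor of order 1.16 per stage, against ~1.004 for gaseous diffusion, is why centrifuge cascades are so much shorter.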
6.4.2.3 Laser Separation¹
The mass difference of the two uranium nuclei causes a slightly different size and shape of the nuclei, which results in a slightly different Coulomb field in which the atomic electrons move. In addition, the position of the centre of mass will be slightly different in the two nuclei. These two physical facts together result in an isotope shift, a small difference in the energy of the atomic levels of the two isotopes. In uranium (92 electrons) the major configuration of the outer electrons is (5f)³(6d)(7s)², so there is an even number of electrons in the inner shells as well. The wave functions of the electrons overlap with the atomic nucleus and respond to the differences in the Coulomb fields of ²³⁵U and ²³⁸U. The resulting isotope shift is for certain levels larger than the line widths due to Doppler broadening.
¹ This subsection is based on a contribution from Dr Wim Hogervorst, which is gratefully acknowledged.
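The scale of the Doppler width can be estimated from the standard thermal-vapour formula Δν = ν₀√(8kT ln 2/(mc²)). A sketch; the 600 [nm] wavelength chosen here is illustrative, not a specific uranium transition:

```python
import math

K_B = 1.381e-23   # Boltzmann constant [J/K]
AMU = 1.661e-27   # atomic mass unit [kg]
C = 2.998e8       # speed of light [m/s]

def doppler_fwhm(wavelength, mass_u, temp):
    """Doppler-broadened full width at half maximum [Hz] of an atomic
    line from a thermal vapour: nu0 * sqrt(8 kT ln2 / (m c^2))."""
    nu0 = C / wavelength
    return nu0 * math.sqrt(8.0 * K_B * temp * math.log(2) / (mass_u * AMU * C**2))

# an illustrative visible line (600 [nm]) of atomic U vapour at 3000 [K]
print(doppler_fwhm(600e-9, 238.0, 3000.0) / 1e9, "GHz")   # ~1.3 GHz
```

This gives a feel for the statement above: GHz-scale isotope shifts can indeed exceed the Doppler width of a heavy atom, even at 3000 [K].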
Figure 6.13 Laser separation of ²³⁵U and ²³⁸U. Shown is a three-step process resulting in ionization of ²³⁵U (ionization limit at 6.19 [eV]). Not shown is the hyperfine splitting of ²³⁵U. In the ²³⁸U spectrum on the right the dashed lines are drawn at the position of the ²³⁵U levels.
Transitions in ²³⁵U also show hyperfine splitting due to the interaction of the outer electrons with the nonzero nuclear spin. This implies that an electronic transition is spread out over many hyperfine transitions, complicating any laser separation process. In ²³⁸U the hyperfine splitting is absent, as the nuclear spin is zero. Consequently the atomic energy levels of the two isotopes are different enough to separate vaporized atoms by exciting them with laser beams of very well-defined frequencies, where the laser beam for ²³⁵U has to overlap with as many hyperfine transitions as possible. The principle is shown in Figure 6.13, where in the ²³⁸U spectrum the positions of the corresponding ²³⁵U levels are dashed. It appears that, because of auto-ionization effects in two- or three-step processes between certain ²³⁵U levels, rather high cross sections for ionization can be reached. Subsequently the ²³⁵U ions are separated from the ²³⁸U atoms by an electric field. The process requires several steps. In the first step metallic uranium is evaporated at high temperatures (3000 [K]). This leads to a major problem, as hot atomic uranium vapour is extremely aggressive. In the second step laser irradiation is applied. There are about 500 000 known lines in the uranium spectrum; it therefore requires a tremendous laser-spectroscopic effort to find the 'pincode' for an actual separation process. In this step the density cannot be too high; otherwise charge exchange would produce ²³⁸U ions as well. Note that the majority of the atoms still consist of ²³⁸U and only a small amount of the ²³⁵U is ionized. In the third step the electric field is applied and the ionized ²³⁵U is collected. However, many of the evaporated neutral atoms also reach the collector. Therefore in one separation step only the low enrichment required for most nuclear reactors can be reached; several successive separations would be needed to obtain much higher enrichment in sufficient quantities.
Because of the technological complexity and the aggressive nature of metallic uranium, laser separation by commercial enrichment companies was given up a decade ago.
Figure 6.14 Build-up of transuranic nuclei in nuclear reactors. Neutrons are absorbed and, usually after a few absorptions, exchanged for protons, while emitting an electron.
6.4.3 Fuel Burnup
After fabrication of the fuel, the fuel elements enter the fission process described in Section 6.1. Assume that one starts with enriched uranium only. Because of the high neutron density many neutrons will be absorbed, particularly in the dominant ²³⁸U. This results in heavier, transuranic nuclei, where, usually after a few absorptions, a neutron is exchanged for a proton, accompanied by β-decay, as shown in Figure 6.14. The build-up of transuranic nuclei increases with time. After some time the build-up of fission products (X and Y in Eq. (6.1)) and the burnup of fissile materials will lead to a decrease in the multiplication factors η and k. The fuel then has to be replaced. Typically, when one starts with 1 [kg] of uranium containing about 30 [g] of ²³⁵U, it is replaced when it still holds about 8 [g] of unused ²³⁵U. Because of the neutron absorption in ²³⁸U it then also contains about 6 [g] of Pu, more than half of which is the fissile isotope ²³⁹Pu. This isotope has a half life of 24 000 years, so that its decay during the fuel cycle is negligible.
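These mass numbers imply roughly 22 [g] of ²³⁵U consumed per [kg] of fuel. A crude upper-bound sketch of the energy this corresponds to, assuming all of it undergoes fission at ~200 [MeV] each (in reality part is lost to capture, while Pu fission adds energy back):

```python
AVOGADRO = 6.022e23
MEV = 1.602e-13            # [J]
E_FISSION = 200.0          # typical total energy per fission [MeV]

def fission_energy(grams_u235):
    """Thermal energy [J] released by fissioning a mass of 235-U."""
    fissions = grams_u235 / 235.0 * AVOGADRO
    return fissions * E_FISSION * MEV

# the ~22 g of 235-U consumed per kg of fuel, treated as all fissioned
e = fission_energy(22.0)
print(e / 1e12, "TJ")                  # ~1.8 TJ per kg of fuel
print(e / (86400 * 1e6), "MW-days")    # ~21 thermal MW-days per kg
```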
6.4.4 Reprocessing
The fact that there is still so much fissionable material in the spent fuel is the rationale behind reprocessing. In a chemical plant uranium and plutonium are separated from each other and from all the rest of the materials: not only from the actinides of Figure 6.14, but also from the fission products, indicated by X and Y in Eq. (6.1), and their daughter nuclei. Next one produces so-called MOX, mixed oxide fuel, consisting of the Pu mixture and either natural uranium or the depleted uranium residue from the enrichment process. MOX is used to fuel up to one-third of the reactor, where the Pu essentially replaces ²³⁵U as the fissile component. It may be mentioned that not all Pu isotopes are fissile, and those that are not may have a large cross section for absorption without fission. This will lead to slightly different reactor characteristics ([1], p. 209).
Table 6.6 Radioactivity in the 1/3 of the fuel elements that is annually taken out of a typical light water reactor of 1000 [MWe]. Shown is the activity in units of 10¹⁶ [Bq] after time t as indicated.

                           t = 0     t = 0.5 [yr]   t = 10 [yr]   t = 300 [yr]
Fission products           15000     480            37            ≈4 × 10⁻⁴
Actinides (89 ≤ Z ≤ 103)   4500      16.5           9.5           ≈2
Fuel cladding              16        3.7            0.4           negligible
Total                      19500     500            47            ≈2

Based on data from [12], p. 144 and [13], Figure 1.
6.4.5 Waste Management
In the production of nuclear electricity one cannot avoid producing radioactive waste. In fact, operation of nuclear power plants in the past has already produced waste that has to be kept from entering the natural environment. Table 6.6 gives an estimate of the radioactivity of the fuel that is annually taken out of a typical light water reactor with an electric power of 1000 [MWe]. The much smaller amount of radioactivity in the reactor components is not taken into account in these estimates. Usually one stores the spent fuel for a few years at the reactor facility. After this time it has 'cooled down' in two respects: it has lost the activity of the shorter-lived radioactive nuclides, whereby not only the radioactivity but also the heat production from the decay has decreased. During 'cooling down' the decay heat has to be taken away by air or water. From the data given in Table 6.6 it appears that the fission products decay rapidly and that after a few hundred years their remaining activity is small. It is essentially due to ⁹⁹Tc, ¹³⁵Cs and ¹²⁹I, with half lives of 10⁵, 10⁶ and 10⁷ years, respectively ([1], p. 230). If one is indeed able to close the fuel cycle (see Figure 6.12 and Section 6.5) one will separate the fission products from the actinides by chemical means and deal with the actinides separately. If not, both have to be stored as waste. After cooling down, the volume of the waste is reduced by solidification, that is, the evaporation of the fluid components. What is not reused is encapsulated in glass and disposed of. An important agency where disposal options are discussed is the International Atomic Energy Agency (IAEA). Virtually all countries participate in this body; one of its aims is the promotion of the civilian use of nuclear power.
As for waste management, it distinguishes three classes ([14], Table II):
(1) Exempt waste (EW), with activity levels so low that members of the public will receive an annual dose of less than 0.01 [mSv]
(2) Low- and intermediate-level waste (LILW), with higher activity levels but a thermal power below about 2 [kW m⁻³]; this is subdivided into short-lived (LILW-SL) and long-lived (LILW-LL) waste
(3) High-level waste (HLW), with thermal power above about 2 [kW m⁻³] and long-lived radionuclides
For exempt waste no disposal options are advised; for LILW-SL, disposal near the surface or geological options are advised; for the long-lived waste only geological disposal
is considered acceptable. There the waste should be kept out of the natural environment for geological lifetimes. Two possibilities, which we will discuss briefly, are disposal in deep rock and in deep-lying salt domes.

6.4.5.1 Deep Rock
Rock, even granite, is not a solid piece of material; it contains smaller and bigger cracks through which water can move. At the earth's surface it is readily observed that plants and trees find space for their roots in rock. For deep rock as a disposal site, tests should be made to check its long-term containment. In one of the tests the migration of Pu in deep granite was checked. It was assumed that migrating Pu compounds would diffuse into little hairline cracks in the granite and there be adsorbed onto the walls. It was found, however, that the net Pu velocity was a few orders of magnitude larger than calculated [15]. It appeared that the Pu was picked up by colloids that were too big to enter the hairline cracks; instead they were moving through bigger cracks in which the groundwater velocity was much higher. As will be discussed in Chapter 9, it is a value judgement whether the possibility of mistakes should let society discard a technology altogether or whether it should accept a 'learning' period.

6.4.5.2 Salt Domes
A salt dome perhaps is a more homogeneous formation than rock. In this case one has to consider the following aspects:
(1) The temperature rise in the salt as a function of position and time
(2) The consequent changes in its mechanical properties, its movements and creep
(3) Possible changes in the crystal structure of the salt, in which potential energy could be stored slowly and then suddenly released, causing damage to the salt layer
(4) Leakage of radioactivity from glass or other containers and its migration to the natural environment.
The heat production is not trivial. For a 1000 [MWe] power station the HLW accumulated during the lifetime of the reactor will, 13 years after removal, still produce 700 [kWth]. Fortunately salt has a thermal conductivity of k = 4.885 [W m⁻¹ K⁻¹], which is relatively large (see Table 4.1); the heat is widely dispersed, but for the HLW from a few power stations a considerable temperature rise may occur locally.
Example 6.1 Temperature Rise in a Salt Dome
Put an amount of waste producing s [W] (constant in time) in a sphere with small radius R at time t = 0. After a long time an equilibrium T(r) will be obtained. (a) Derive an expression for T(r) for r > R. (b) Calculate the temperature rise for r = 1 [m], s = 1 [kW] and k = 4.885 [W m⁻¹ K⁻¹].
Answer
(a) There will be spherical symmetry; for a heat current density q″ one has s = 4πr²q″. From Eq. (4.3) one has q″ = −k dT/dr; therefore

s = −4πr²k (dT/dr)    (6.65)

dT/dr = −s/(4πkr²)  →  T(r) = s/(4πkr) + A    (6.66)

For r → ∞ one has T(r) → A; therefore A is the temperature without waste present.
(b) For the data given one finds a temperature rise T(1 [m]) − A = 16.3 [K]. For the 700 [kWth] mentioned in the main text, one would need many small holes at distances large enough to keep the total temperature increase within bounds.
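The numbers of Example 6.1 in a short sketch; the second call (parameters chosen here for illustration) shows why a single concentrated 700 [kWth] source is unacceptable and the heat must be spread over many boreholes:

```python
import math

def steady_temp_rise(power, conductivity, r):
    """Equilibrium temperature rise T(r) - T(infinity) around a steady
    point source of heat in an infinite medium, Eq. (6.66): s/(4 pi k r)."""
    return power / (4.0 * math.pi * conductivity * r)

K_SALT = 4.885  # thermal conductivity of salt [W m^-1 K^-1]
print(steady_temp_rise(1000.0, K_SALT, 1.0))   # ~16.3 K (Example 6.1)
print(steady_temp_rise(700e3, K_SALT, 10.0))   # one 700 kW source: ~1100 K at 10 m
```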
6.4.5.3 Transmutation
Much of the spent fuel of a once-through cycle is in the form of ²³⁸U. One may separate the ²³⁸U from the rest, thereby greatly reducing the volume of the waste. If the fuel is recycled a few times, the ²³⁸U fraction will be smaller and the fraction of fission products and actinides higher. To deal with the spent fuel one distinguishes partitioning and transmutation. Partitioning refers to methods used to separate the various components of the used fuel: uranium, plutonium, other actinides or fission products. Transmutation refers to the conversion of a nuclide into one or several other nuclides in a nuclear reactor as a result of neutron-induced fission or capture reactions. Difficulties are the small neutron cross sections for fission products and for some of the actinides ([1], pp. 238–241). One of the possibilities for doing the job as well as possible is to put the waste in a subcritical reactor, which is made critical by an external neutron source from an accelerator, for example by bombarding lead with high-energy protons. The external neutron source guarantees safety, and by increasing the flux it may compensate for any deterioration of the fuel in the reactor. Partitioning and subsequent transmutation of spent fuel decreases its long-lived activity. A typical example of what could be achieved is given in Figure 6.15. Along the vertical axis one finds, on a logarithmic scale, the number of sieverts that would result from ingestion of a ton of spent fuel, a hypothetical but useful measure. The horizontal dotted line indicates the value for ore of natural uranium; along the horizontal axis the time after removal from the reactor is displayed. The lower drawn curve refers to the fission products, whose toxicity crosses the uranium ore line at 270 years. The higher drawn curves display the toxicity of the actinides and the sum of both. Here the uranium line is crossed after 130 000 years.
If the actinides Pu, Am and Cm were almost completely removed from the spent fuel and used as fuel in fast reactors, the radiotoxicity of the remaining waste would be reduced significantly. Depending on the degree of removal, indicated in the figure, the uranium line is crossed after 1000 or even 500 years. The degrees of removal indicated are realizable in practice.
Figure 6.15 Effect of transmutation. The horizontal dotted curve refers to natural uranium ore. The other curves display the toxicity of spent fuel from that ore as a function of time. Drawn curves show the contribution of the fission products and the actinides separately, dashed curves the effect of an almost complete transmutation. (Reproduced by permission of The Nuclear Institute.)
6.4.6 Nonproliferation
A nuclear weapon consists of a nuclear explosive and its means of delivery: a missile, aircraft or artillery shell. It is generally recognized that a world in which many countries have nuclear weapons is not a safe place. Even worse is the prospect that subgovernmental or terrorist groups may have these weapons at their disposal. This is the rationale behind the nonproliferation treaty (NPT) and the safeguards precautions. Most countries have become parties to the treaty, under which all participants except the recognized nuclear weapon states (USA, UK, France, Russia and China) pledge not to use nuclear reactions for weapon production; in exchange the non-nuclear weapon states get all possible help in the civilian use of nuclear energy. The IAEA, mentioned earlier, is called on to verify that state parties comply; it aims to detect diversion of nuclear materials from civilian to military use, amongst other things by inspection teams. Nuclear technologies which may be used for weapon production are called sensitive technologies. The major concerns in this respect are the following.

6.4.6.1 Enrichment
Although most countries have put their enrichment facilities under international safeguards, for laser isotope separation this seems difficult to enforce. In general, all three enrichment
technologies move towards smaller size with less energy consumption, and possible secret facilities become more difficult to detect.

6.4.6.2 Nuclear Reactors
The easiest way to produce fissile material is to put an extra sample of natural uranium in a working reactor. When one does this for a short time only, the series of Figure 6.14 does not have time to build up. Of the Pu isotopes one then chiefly has ²³⁹Pu, of which the critical mass with a thick reflector of natural uranium is around 4.5 [kg], with a spherical volume of about 0.28 [L]. Without a reflector it would be 10.2 [kg] for pure ²³⁹Pu and 11 [kg] for so-called weapons-grade plutonium, which consists of 93% ²³⁹Pu. Although the critical mass of a Pu explosive does not increase very much with the addition of other Pu isotopes besides ²³⁹Pu, the design of a reliable weapon becomes more difficult. On the other hand, chemical separation of the Pu from the other atoms in the irradiated sample is a process well within the reach of most countries.
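The quoted 0.28 [L] can be checked from the mass and density. A sketch; the density of δ-phase plutonium (~15.9 [kg/L]) is an assumed value not given in the text:

```python
import math

def sphere_radius_cm(mass_kg, density_kg_per_litre):
    """Radius [cm] of a solid sphere of given mass and density."""
    volume_cm3 = mass_kg / density_kg_per_litre * 1000.0
    return (3.0 * volume_cm3 / (4.0 * math.pi)) ** (1.0 / 3.0)

# 4.5 [kg] of Pu at an assumed delta-phase density of 15.9 [kg/L]
print(sphere_radius_cm(4.5, 15.9))   # ~4.1 cm, i.e. a volume of ~0.28 L
```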
6.4.6.3 Reprocessing
As the objective here is to separate Pu from other elements, much Pu will be around. The unavoidable small inaccuracies in Pu bookkeeping could be used to divert Pu from the process.

6.4.6.4 Conclusion
The problem of safeguarding the elements of the nuclear fuel cycle against military misuse will obviously increase with the amount of Pu around. The ‘best’ way to handle this problem would be to have as many of the parts of the fuel cycle as possible at the same place. Moreover they could or should (?) be put under international protection, inspection and safeguards. Only power stations have to be dispersed because of the cost of transporting electricity (cf. Section 4.4.4); their fuel can be mixed with a strong radioactive source so as to discourage misuse.
6.5 Fourth Generation Nuclear Reactors
In this chapter we mentioned the main concerns about the civilian use of nuclear power: safety, waste and proliferation. The Generation IV International Forum (GIF), which started in 2001, aims at reducing or even eliminating these concerns ([1] pp. 270–272, [17]). There are six types of systems under study, most of which will use a closed fuel cycle. This includes reprocessing of the spent fuel (Figure 6.12) and use of the actinides as reactor fuel, thus effectively burning most of the natural uranium. The plutonium produced could in principle be used for making an explosive. The precaution is that the plutonium is produced inside the core and not in a blanket around the core. In the core a high burn-up of uranium results in many isotopes of plutonium besides ²³⁹Pu, increasing the critical mass. In some of the designs the spent fuel could be reprocessed by new techniques without separation of the plutonium, resulting in a mixture which is even more difficult, if not
impossible, to use for an explosive. This could also happen on site, reducing the need for transportation of plutonium and other sensitive materials. Finally, passive and inherent safety measures should guarantee the safety of the plant. Even in the Generation IV reactors there will remain an amount of long-lived radioactive material which cannot be reused or reprocessed at reasonable cost. The volume may be reduced by transmutation, for which a few well-chosen sites may be selected. The remainder should be stored in geological deposit sites. Generation IV nuclear reactors are planned to operate at temperatures of 500 to 1000 [°C], much higher than the 330 [°C] of conventional nuclear power stations. The high temperature not only gives a higher thermal efficiency, but power stations with the highest temperatures can use water to produce hydrogen, which may serve as input to a hydrogen economy (Section 4.4.3); besides, high-temperature heat can be used as process heat in the chemical industry. Cooling of the reactors at high temperatures may be done by gas, liquid lead or sodium, molten salt or water above the critical point. In several countries there is experience with these 'exotic' coolants, but research and development on components of the reactors is still going on. One of the subjects studied is the corrosion of materials at high temperatures. Commercial deployment of Generation IV reactors is not expected before about 2030.
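The efficiency gain from higher operating temperatures can be sketched with the idealized Carnot limit 1 − Tc/Th; real plant efficiencies stay well below these bounds, and the 25 [°C] cold reservoir assumed here is illustrative:

```python
def carnot_efficiency(t_hot_c, t_cold_c=25.0):
    """Idealized Carnot limit 1 - Tc/Th for hot and cold temperatures
    given in [deg C]; an upper bound only, never reached in practice."""
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

# conventional (330 C) versus two Generation IV operating temperatures
for t in (330.0, 500.0, 1000.0):
    print(f"{t:6.0f} C -> Carnot limit {carnot_efficiency(t):.2f}")
```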
Exercises
6.1 Show that the fission of 1 [g] of ²³⁵U a day will produce a thermal power of about 1 [MW].
6.2 Check the statement that the free fall of one hydrogen atom over 10 [m] liberates 10⁻⁶ [eV] of gravitational energy.
6.3 Using Eq. (6.2) show (a) that ∫₀^∞ f(u) du = 1 and (b) that ∫₀^∞ ½mu² f(u) du = (3/2)kT. Note: the latter equation corresponds with the theorem that for every degree of freedom the average energy will be kT/2. Here the three degrees of freedom are the three dimensions x, y, z.
6.4 (a) Show that the maximum of Eq. (6.2) corresponds with a kinetic energy of kT. (b) Show that for T = 293 [K] this gives a velocity of ∼2200 [m/s] and a kinetic energy of 0.025 [eV]. (c) Also show that 3kT/2 for T = 300 [K] corresponds to 0.039 [eV] and to a velocity of 2700 [m s⁻¹].
6.5 Write down Eq. (6.30) for a spherical reactor with radius R using Eq. (B18). Derive a formula for R in a reactor which is just critical.
6.6 Consider a sphere consisting of pure ²³⁵U (a model of a nuclear explosive) and estimate the mean free path for fast neutrons. For fast neutrons one may use the following data (averaged over energies): σf = 1.3 [barn] ([1], p. 201); η = 2.4 ([1], p. 218); ρ = 19 × 10³ [kg m⁻³]. (a) Calculate Σf and λf and (b) make a rough estimate of the radius R for which the leakage will be half of the neutrons produced and argue that this will be about the critical radius; calculate the corresponding critical mass (an accurate calculation will give 50 [kg]).
6.7 Plutonium nitrate PuO₂(NO₃)₂ is dissolved in water. (a) Show that the minimum concentration for which the solution may become critical, ignoring surface effects, is
9 [g kg⁻¹] of ²³⁹Pu. (b) What do you conclude for the storage of the solutions? (c) Double the concentration of ²³⁹Pu and take a finite-size cubic reactor (a = b = c) with leakage. Deduce the critical size of the reactor. You may use the following data: for ²³⁹Pu: ν = 3.0; σf = 664 [barn]; σc = 361 [barn]. For N: σc = 1.78 [barn]. For water molecules: σc = 0.66 [barn]. Capture in O may be ignored. By permission of R. Stephenson, inspired by [18], p. 160. Hint: for (a) take 1 mole of PuO₂(NO₃)₂ dissolved in y moles of water.
6.8 After emergency shutdown of the reactor illustrated in Figure 6.5 it is immersed in 1000 [m³] of cooling water of 20 [°C]. Estimate the number of hours it takes before it boils at 100 [°C]. Take the thermal power of the reactor as 2000 [MWth]. Note: by pressurizing the reactor core the boiling temperature may rise to about 260 [°C] and the time for measures will increase.
6.9 Consult Example 6.1. It is clear that outside the waste the temperature will go up until equilibrium is reached. It therefore is a function of position and time. Outside the waste the solution of Eq. (4.19) becomes

T(r, t) = (B/(4πar)) erfc( r/(2√(at)) )    (6.9-1)

where erfc is the complementary error function defined in Eq. (C9). (a) Prove that Eq. (6.9-1) is a solution of Eq. (4.19) for q̇ = 0 and (b) show that for t → ∞ it converges to solution (6.66); relate the value of B to s. Note: because of (b), Eq. (6.9-1) is the complete solution of (4.19) outside the waste.
References
[1] Stacey, W.M. (2007) Nuclear Reactor Physics, 2nd edn, Wiley-VCH Verlag, Weinheim, Germany. One of the recent standard texts.
[2] Glasstone, S. and Sesonske, A. (1994) Nuclear Reactor Engineering, 4th edn, Chapman and Hall, New York.
[3] Duderstadt, J.J. and Hamilton, L.J. (1992) Nuclear Reactor Analysis, John Wiley, New York.
[4] Hoogenboom, J.E. (1997) Lecture Notes, IRI, Delft, the Netherlands.
[5] van Dam, H. (1992) Physics of nuclear reactor safety. Rep. Prog. Phys., 11, 2025–2077, esp. p. 2073.
[6] APS Study Group on Nuclear Fuel Cycles and Waste Management (1978) Report to the American Physical Society by the study group on nuclear fuel cycles and waste management. Reviews of Modern Physics, 50(1), S1–S176.
[7] Wesson, J. (1987) Tokamaks, Clarendon Press, Oxford.
[8] Joint European Torus, JET Joint Undertaking Annual Report 1997, plus information received from ITER Communications, April 19, 2010.
[9] International Thermonuclear Experimental Reactor, www.iter.org/proj/iterandbeyond.
[10] International Commission on Radiological Protection, ICRP (2007) The 2007 recommendations of the International Commission on Radiological Protection. Annals of the ICRP, 37(2–4), April–June.
[11] United Nations Scientific Committee on the Effects of Atomic Radiation, UNSCEAR (1993) Report to the General Assembly with Scientific Annexes, United Nations, New York.
[12] Kraushaar, J.J. and Ristinen, R.A. (1984) Energy and the Problems of a Technical Society, Wiley, New York. A well-written book aimed at a US audience.
[13] Kusters, H. (1990) Reduktion des Risikos von nuklearen Abfalle durch Transmutation? Atomwirtschaft, June, 287–292.
[14] International Atomic Energy Agency (1994) Safety Series No. 111-G-1.1, Vienna.
[15] Buddemeier, R.W. and Hunt, J.R. (1988) Transport of colloid contaminants in groundwater: radionuclide migration at the Nevada test site. Applied Geochemistry, 3, 535–548.
[16] Magill, J., Berthou, V., Haas, D. et al. (2003) Impact limits of partitioning and transmutation scenarios on the radiotoxicity of actinides in radioactive waste. Nuclear Energy, 42, 263–277.
[17] The Generation IV International Forum, www.gen-4.org; World Nuclear Association, www.world-nuclear.org.
[18] Stephenson, R. (1954) Introduction to Nuclear Engineering, McGraw-Hill, New York.
7
Dispersion of Pollutants
Any method of energy conversion will cause some pollution, if not in the conversion process itself, by burning fossil fuels, then in the production of the conversion installations. Examples of the latter are the poisonous chemicals used in the production of solar cells. Although many chemicals may be reused or made harmless, some will enter the environment. In this chapter we study the ways in which pollutants may be dispersed there. Pollutants are usually embedded in a medium such as air, water or soil. We will call them guest particles, even when they occur as single molecules, and the medium in which they happen to be will be called the host medium. The simplest process is diffusion, to be discussed in Section 7.1. Here a difference in concentration of guest particles, or more precisely their concentration gradient, is the driving force for dispersion. If the host medium is moving, the major cause of dispersion is the motion of the host medium and diffusion is just a small correction. This is the case for guest particles in a moving river, for which a one-dimensional approximation is given in Section 7.2. The motion of groundwater is rather slow, which simplifies the mathematical treatment, as will be shown in Section 7.3. A more thorough mathematical treatment of the flow of media is given in Section 7.4, where the basic Navier–Stokes equations are derived. These equations appear to be nonlinear and in almost all cases must be tackled numerically with the help of the most sophisticated computers. These methods almost always involve putting a grid over the region of space that one wants to describe, and calculating spatial and temporal derivatives at the points of the grid. This implies that any computation is made for a specific geometry and has to be repeated for other geometries, and moreover that physical phenomena smaller than the grid spacing have to be described by approximations and models anyway.
It is therefore important to get a feeling for the physics involved; the old-fashioned analytical methods of the other sections of this chapter give a frame of reference for any more precise calculation; they are usually quickly done on PCs, and they are a source of inspiration for any approximations that one might want to make.
Environmental Physics: Sustainable Energy and Climate Change, Third Edition. Egbert Boeker and Rienk van Grondelle. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd.
Such an approximation is the process of turbulent diffusion, briefly discussed at the end of Section 7.4. Another one is the use of Gaussian plumes to describe dispersion in the air (Section 7.5). Finally we summarize the way in which engineers describe turbulent jets and plumes (Section 7.6). The final sections draw a little on Section 7.4, but can be read by those students for whom this section is too complicated mathematically or conceptually. For all sections of this chapter a working knowledge of vector algebra, summarized in Appendix B, is required.
7.1 Diffusion
If one has a mass of pollutant at a certain position in a medium it will disperse in time over its entire expanse, even when macroscopically the host is at rest. This process originates from the collisions between atoms and molecules. If the size of the guest and host molecules is comparable, one speaks of molecular diffusion. If the size of the suspended molecules is appreciably larger, this phenomenon is called Brownian motion. Molecular diffusion is observed when one looks at the dispersion of a drop of ink in water at rest without any temperature differences or other disturbing influences. Usually molecular diffusion is a minor effect in dispersing pollutants. More important are flows of liquids (rivers, oceans) or gases (air), and the turbulence associated with them. This will be discussed in later sections. There are, however, examples of molecular diffusion which are important from an environmental point of view, for example diffusion of highly radioactive materials in clays, or in groundwater at rest. Another reason to discuss molecular diffusion is that the techniques used may be applied to large-scale turbulent diffusion as well [1].

7.1.1 Diffusion Equation
The concentration C(x, y, z, t) = C(r, t) of a diffusing substance is defined as its mass M, divided by the sample volume V in which it is dispersed

C(r, t) = M/V      (7.1)

Here, it is assumed that V is large compared to a³, where a is the average distance between the guest particles. It is further assumed that C(r, t) is a continuous and differentiable function, as usual. Finally, the concentration C is assumed to be so small that the change in mass of volume elements due to diffusion may be ignored. The flux F(r, t) is by definition pointing in the direction in which the guest particles flow; its magnitude F [kg m⁻² s⁻¹] is the mass of the guest particles crossing a unit area [m²] in a second [s] in the direction of F. The relation between flux F and concentration C is known as Fick's law:

F = −D∇C      (7.2)
The constant D is called the diffusivity, diffusion coefficient, or diffusion constant. It depends on temperature, molecular weight and so on; therefore the term 'constant' is misleading. Fick's law may be derived rigorously from the kinetic theory of gases, which gives the dependence of the diffusion coefficient on temperature and pressure as well. Some values of diffusion coefficients are given in Table 7.1.

Table 7.1 Diffusion coefficients D at 25 [°C] and atmospheric pressure (from [2], p. 134).

  In air                          D/[m² s⁻¹]
    CO₂                           16.4 × 10⁻⁶
    H₂O vapour                    25.6 × 10⁻⁶
    C₆H₆ (benzene)                8.8 × 10⁻⁶

  In water
    CO₂ (water of 20 [°C])        1.60 × 10⁻⁹
    N₂                            2.34 × 10⁻⁹
    H₂S                           1.36 × 10⁻⁹
    NaCl (water of 20 [°C])       1.3 × 10⁻⁹

Fick's law (7.2) is exactly analogous to Eq. (4.3), which describes the heat flow q as proportional to ∇T, the gradient of the temperature. Therefore Eq. (4.3) is often called the law of heat diffusion. Similarly, a form of Fick's law was discussed in Eq. (6.22), where the neutron flux J was taken as proportional to the gradient of the neutron density, grad(nu).

In order to derive a differential equation for the concentration C(r, t) we consider an arbitrary volume V. The outflow of guest particles from that volume is given by ∫_V div F dV [kg s⁻¹]. Conservation of the mass of the guest particles implies that this must be equal to the decrease of mass inside the volume per second, which is −∂(∫_V C dV)/∂t [kg s⁻¹]. Consequently

∫_V div F dV = −(∂/∂t) ∫_V C dV      (7.3)

This relation holds for any volume V which is fixed in space. This gives the equation of continuity

∂C/∂t + div F ≡ ∂C/∂t + ∇·F = 0      (7.4)
This equation holds as long as F represents the flow of guest particles. Consider guest particles in a host medium which moves with velocity u. The flux F [kg m⁻² s⁻¹] then has a component uC as the guest particles are moving with the flow. This phenomenon is called advection. Together with the diffusion (7.2) this gives

F = uC − D∇C      (7.5)
Combining Eqs. (7.4) and (7.5) gives

∂C/∂t + ∇·(uC) − ∇·(D∇C) = 0      (7.6)
Writing out the differentiation results in (Exercise 7.1)

∂C/∂t + (∇·u)C + u·∇C − ∇·(D∇C) = 0      (7.7)
For many host fluids ∇·u = 0 (Exercise 7.2), which includes the case where u = 0. For cases where the diffusion coefficient D is also independent of position one finds

∂C/∂t + u·∇C = D∇²C      (7.8)
On the left in Eq. (7.8) we have the local or partial derivative of C(x, y, z, t) with respect to time t, which is the change of concentration with time at a certain location (x, y, z). Of course one can also look at the change of concentration with time if one follows the flow. In that case one writes C = C(x(t), y(t), z(t), t), where the time dependence of x, y, z is such that dx(t)/dt = ux and similarly dy(t)/dt = uy and dz(t)/dt = uz. The change of concentration with time while following the flow becomes

dC/dt = (∂C/∂x)(dx/dt) + (∂C/∂y)(dy/dt) + (∂C/∂z)(dz/dt) + ∂C/∂t = ∇C·u + ∂C/∂t = u·∇C + ∂C/∂t      (7.9)
which is the left-hand side of Eq. (7.8). The expression dC/dt on the left of Eq. (7.9) is called the total derivative of C with respect to t. Eq. (7.9) again shows that the term u·∇C in Eq. (7.8) describes the change of concentration of guest particles as they are moving with the flow of the host medium, in other words advection. Eq. (7.8) may be written in a simple mathematical notation as

dC/dt = D∇²C      (7.10)
This equation holds for host media where ∇·u = 0 and where the diffusion coefficient is independent of position. Note again that Eq. (7.10) in form is identical to the heat diffusion equation (4.20). For the same physical conditions the same solutions would hold. Below we will, however, discuss a few different physical examples of the diffusion of guest particles in a host medium. From Table 7.1 it appears that diffusion is a small effect (Exercise 7.4). The examples will show what the solutions look like, which is helpful for applications where Eq. (7.8) is taken as a model, but the parameters are larger.

7.1.1.1 Instantaneous Plane Source in Three Dimensions
Consider a homogeneous medium at rest, that is, u = 0 and take D as constant. Look at the case in which the concentration C = C(x, t) is a function of x and t only and independent of both y and z. The differential equation (7.8) simplifies to

∂C/∂t = D ∂²C/∂x²      (7.11)
A solution to this equation is the Gauss function

C(x, t) = Q/(2√(πDt)) · e^(−x²/(4Dt))      (7.12)
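As an aside not in the original text, solution (7.12) lends itself to a quick numerical sanity check. The sketch below, with arbitrary illustrative values Q = D = 1 in dimensionless units, verifies by finite differences that the Gaussian profile satisfies the diffusion equation (7.11), and by quadrature that the released mass is conserved.

```python
import math

# Numerical check of the instantaneous plane-source solution (7.12):
# C(x, t) = Q / (2*sqrt(pi*D*t)) * exp(-x^2 / (4*D*t)).
# Q = 1 and D = 1 are arbitrary illustrative choices (dimensionless units).

Q, D = 1.0, 1.0

def C(x, t):
    return Q / (2.0 * math.sqrt(math.pi * D * t)) * math.exp(-x * x / (4.0 * D * t))

# 1) The profile should satisfy dC/dt = D * d2C/dx2, Eq. (7.11);
#    estimate both sides with central finite differences.
x0, t0, h = 0.5, 1.0, 1e-3
dCdt = (C(x0, t0 + h) - C(x0, t0 - h)) / (2.0 * h)
d2Cdx2 = (C(x0 + h, t0) - 2.0 * C(x0, t0) + C(x0 - h, t0)) / (h * h)
residual = abs(dCdt - D * d2Cdx2)

# 2) The total mass should stay equal to Q for all t; trapezoidal rule.
n, a, b = 4000, -10.0, 10.0
dx = (b - a) / n
mass = sum(C(a + i * dx, t0) for i in range(n + 1)) * dx
mass -= 0.5 * dx * (C(a, t0) + C(b, t0))

print(residual, mass)   # residual close to 0, mass close to Q
```

The same two checks (PDE residual and mass conservation) can be reused for the other Gaussian solutions in this section.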
Eq. (7.12) may be checked by substitution (Exercise 7.3). The factor 2√(πDt) in front is chosen such that (7.12) is identical to the normalized Gauss function (C1) in Appendix C. This becomes clear from the substitution

σ = √(2Dt)      (7.13)

in Eq. (7.12). One indeed finds

C(x, t) = Q/(σ√(2π)) · e^(−x²/(2σ²))      (7.14)
In the limit σ → 0 this becomes Qδ(x) (see (C5)), where δ(x) is the delta function. The limit σ → 0 in Eq. (7.14) is equivalent to the limit t → 0 in Eq. (7.12). Therefore

C(x, t → 0) = Qδ(x)      (7.15)
Consequently, one may interpret solution (7.12) as the physical situation where at t = 0 an amount Q [kg m⁻²] is released in the plane x = 0: an instantaneous plane source. Note that the dimension of δ(x) equals [m⁻¹], as ∫_{−∞}^{∞} δ(x)dx = 1. The concentration C(x, t) therefore has the dimensions [kg m⁻³], which is consistent with our interpretation. In a second interpretation Eq. (7.12) is seen as the solution of a one-dimensional problem where the amount Q [kg] is released at t = 0, an instantaneous point source in one dimension with the concentration C(x, t) in [kg m⁻¹]. This interpretation will be used extensively in Section 7.2. In both interpretations, for times t > 0 the concentration disperses symmetrically, keeping the same integral

∫_{−∞}^{∞} C(x, t)dx = Q      (7.16)

for all values of time t. The substitution (7.13) may be used to hide the time dependence of the concentration, which has been done in Eq. (7.14). The mean square distance ⟨x²⟩ over which the guest particles have travelled has to be calculated with a normalized function, which is the concentration (7.14), apart from the factor Q. With Eq. (C3) one finds

⟨x²⟩ = σ² = 2Dt      (7.17)

7.1.1.2 A Finite-Size Cloud
Consider a 'cloud' of guest particles released at t = 0 with u = 0 and D constant as before. The initial conditions for a simple case may be expressed as

C = C₀   at −b/2 < x < +b/2,   t = 0
C = 0    elsewhere,            t = 0      (7.18)

We interpret this as a homogeneous cloud released around the plane x = 0 which extends to infinity in the y- and z-directions, with dimensions C₀ [kg m⁻³]. The case for the release of a finite-size cloud from a source in one dimension follows the same mathematics. The cloud will diffuse perpendicular to the x = 0 plane. In order to solve Eq. (7.11) with initial conditions (7.18) one could interpret this cloud as a superposition of a big number of delta functions with strength C₀dx′ [kg m⁻²] representing an instantaneous plane source in a strip between x′ and x′ + dx′. In this case the right-hand side of Eq. (7.15) should read

C₀ δ(x − x′)dx′      (7.19)
Consequently the solution (7.14) should read

[C₀dx′/(σ√(2π))] · e^(−(x−x′)²/(2σ²))      (7.20)

where the origin of the source was moved to x′. The complete solution becomes

C(x, t) = C₀/(σ√(2π)) ∫_{−b/2}^{b/2} e^(−(x−x′)²/(2σ²)) dx′      (7.21)
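The superposition integral (7.21) is easy to evaluate by quadrature. The sketch below uses the midpoint rule with illustrative values C₀ = b = D = 1 and t = 0.05 (all invented for the example) and checks that diffusion only spreads the cloud: the integral of C over all x remains C₀·b.

```python
import math

# Direct numerical evaluation of the superposition integral (7.21) for a
# finite-size cloud of width b released at t = 0.
# C0 = 1, b = 1, D = 1, t = 0.05 are arbitrary illustrative values.

C0, b, D, t = 1.0, 1.0, 1.0, 0.05
sigma = math.sqrt(2.0 * D * t)            # Eq. (7.13)

def C(x, m=400):
    # midpoint rule over the source strip x' in (-b/2, b/2)
    dxp = b / m
    total = 0.0
    for i in range(m):
        xp = -b / 2.0 + (i + 0.5) * dxp
        total += math.exp(-(x - xp) ** 2 / (2.0 * sigma ** 2))
    return C0 * total * dxp / (sigma * math.sqrt(2.0 * math.pi))

# Diffusion only spreads the cloud: the integral of C over x stays C0 * b.
n, a2, b2 = 1200, -5.0, 5.0
dx = (b2 - a2) / n
mass = sum(C(a2 + i * dx) for i in range(n + 1)) * dx
mass -= 0.5 * dx * (C(a2) + C(b2))
print(mass)   # close to C0 * b = 1
```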
Note that the time dependence of the concentration is hidden in σ² = 2Dt. As the problem is similar, albeit not identical, to the one discussed below Eq. (4.22), it comes as no surprise that the error function again appears in the solution of Eq. (7.21). Using the definitions given in Appendix C, Eq. (C7), one finds

C(x, t) = (C₀/2) [erf((b/2 + x)/(σ√2)) + erf((b/2 − x)/(σ√2))]      (7.22)

7.1.1.3 Instantaneous Line and Point Sources in Three Dimensions
Eq. (7.14) gave the solution of the diffusion equation (7.10) for an instantaneous plane source at x = 0 at time t = 0. By analogy one would write down the solutions for an instantaneous line source or point source as

C(r, t) = Q/((σ√(2π))ⁿ) · e^(−r²/(2σ²))      (7.23)

where r is the distance to the source and n is the number of Cartesian coordinates (x, y, z) required to define r. For n = 1 and r = x this reduces to the solution (7.14). In this case there is one significant dimension x and the problem is equivalent with a one-dimensional problem, as was noted before. For an instantaneous line source there are two essential dimensions and we should put n = 2 in Eq. (7.23) and r² = x² + y². For an instantaneous point source in three dimensions n = 3 and r² = x² + y² + z². It can be shown by substitution that Eq. (7.23) is correct (Exercise 7.5). In all cases

σ² = 2Dt      (7.24)
It is clear that σ has the same dimensions as the distance r, which is [m]. Also, in three dimensions the concentration C should always be written in [kg m−3 ]. So, from Eq. (7.23) one finds that for a plane source Q is expressed in [kg m−2 ], for a line source in [kg m−1 ] and for a point source in three dimensions as [kg]. For a point source in one dimension the concentration C is expressed in [kg m−1 ] and the source Q therefore again in [kg].
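A small numerical aside (not in the original): for n = 3 the solution (7.23) must contain the full mass Q at any time. Integrating C(r)·4πr² over r, with invented values Q = 1 and σ = 0.7, confirms the normalization.

```python
import math

# Check that the instantaneous point-source solution (7.23) with n = 3
# contains the total mass Q: integrate C(r) * 4*pi*r^2 over r.
# Q = 1 and sigma = 0.7 are arbitrary illustrative values.

Q, sigma = 1.0, 0.7

def C(r):
    return Q / (sigma * math.sqrt(2.0 * math.pi)) ** 3 * math.exp(-r * r / (2.0 * sigma ** 2))

# midpoint rule; rmax = 10 is more than 14 sigma, so the tail is negligible
n, rmax = 20000, 10.0
dr = rmax / n
mass = sum(C((i + 0.5) * dr) * 4.0 * math.pi * ((i + 0.5) * dr) ** 2 for i in range(n)) * dr
print(mass)   # close to Q = 1
```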
7.1.1.4 Continuous Point Source in Three Dimensions

Consider a point source at the origin which, starting at t = 0, emits continuously at a rate of q [kg s⁻¹]. In a time interval dt′ it therefore emits qdt′ [kg]. At a certain time t and position r one may find the concentration C(r, t) by integrating (7.23) for n = 3, that is, by adding all individual and instantaneous 'puffs' of strength qdt′ at earlier times t′ < t. One should realize that σ² = 2Dt in Eq. (7.23) should be written down explicitly, for at time t the contribution from a puff at t′ will be found by putting σ² = 2D(t − t′). One finds a so-called convolution

C = q/(8(πD)^(3/2)) ∫₀ᵗ e^(−r²/[4D(t−t′)]) dt′/(t − t′)^(3/2)      (7.25)
One should try to bring this into a form resembling the error function Eq. (C7) or its complement (C9). By substitution of β² = r²/(4D(t − t′)) one finds

C(r, t) = q/(4πDr) · erfc(r/(2√(Dt)))      (7.26)

7.1.2 Point Source in Three Dimensions in Uniform Wind
Consider a point source of guest particles in a flow of uniform velocity u, which may be air, water or any other host medium. The x-axis is taken in the direction of the flow.

7.1.2.1 Instantaneous Point Source
For an instantaneous point source at the origin at t = 0 we use a x′, y′, z′ coordinate system moving with the flow. The relation to the x, y, z-system is given by the Galilei transformation

x′ = x − ut
y′ = y
z′ = z      (7.27)
In that coordinate system Eq. (7.23) holds, which gives

C(x′, y′, z′, t) = Q/((σ√(2π))³) · e^(−(x′² + y′² + z′²)/(2σ²))      (7.28)
Again, Q [kg] is the emitted mass of guest particles and σ² = 2Dt. With transformation (7.27) the solution in the fixed coordinate frame becomes

C(x, y, z, t) = Q/((σ√(2π))³) · e^(−((x − ut)² + y² + z²)/(2σ²))      (7.29)
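The role of the Galilei transformation can be made visible numerically: evaluating (7.29) on a grid, with invented values u = 0.5 and D = 0.01 at y = z = 0, shows the concentration maximum travelling with the flow at x = ut while the cloud spreads as σ = √(2Dt).

```python
import math

# The advected Gaussian (7.29): in a uniform flow u the centre of the cloud
# moves as x = u*t while it spreads as sigma = sqrt(2*D*t).
# Q = 1, u = 0.5, D = 0.01 are arbitrary illustrative values; y = z = 0.

Q, u, D = 1.0, 0.5, 0.01

def C(x, t):
    s2 = 2.0 * D * t                       # sigma^2, Eq. (7.24)
    return Q / (math.sqrt(2.0 * math.pi * s2)) ** 3 * math.exp(-(x - u * t) ** 2 / (2.0 * s2))

t = 20.0
xs = [i * 0.001 for i in range(20001)]     # grid from 0 to 20
peak = max(xs, key=lambda x: C(x, t))
print(peak)   # close to u*t = 10
```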
One may verify that solution (7.29) obeys Eq. (7.8), which combined diffusion and advection (Exercise 7.6). Later we will use (Eq. (7.77)) that for a one-dimensional case one should take solution (7.12) with x replaced by (x − ut).

7.1.2.2 Continuous Point Source
In order to find the concentration C(x, y, z, t) at location x, y, z and time t resulting from a continuous point source at the fixed origin, starting at t = 0 in a uniform flow u, one combines the procedures which led to Eqs. (7.25) and (7.29). We start with a delta puff at time t′ < t with magnitude Q = qdt′ [kg]. Again we use a coordinate system which moves with the flow. In this case the new coordinate frame starts moving with the puff, that is, at time t′. The integrand of (7.25) contains the factor r² in the exponent, which now should read as

r² → x′² + y′² + z′²      (7.30)

The guest particles move with the flow and disperse over a time t − t′. In order to return to the fixed coordinate system for the puff under investigation we should adjust Eq. (7.27) and write

x′ = x − u(t − t′)
y′ = y
z′ = z      (7.31)
This transformation is substituted into Eq. (7.28) with Q = qdt′ and σ² = 2D(t − t′), while the summation over all puffs leads to the integral

C(x, y, z, t) = q/(8(πD)^(3/2)) ∫₀ᵗ e^(−{(x − u(t−t′))² + y² + z²}/{4D(t−t′)}) dt′/(t − t′)^(3/2)      (7.32)
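The approach to the steady state can be verified by quadrature. The sketch below integrates (7.32) with the midpoint rule, written in the puff age s = t − t′, and compares the result for a large t with the analytic limit (7.33) quoted next; all parameter values are invented for the illustration.

```python
import math

# Numerical check that the time integral (7.32) approaches the analytic
# steady-state result (7.33) for large t. q = u = D = 1 and the observation
# point (x, y, z) = (50, 5, 0) are arbitrary illustrative values.

q, u, D = 1.0, 1.0, 1.0
x, y, z = 50.0, 5.0, 0.0
t = 1000.0

# midpoint rule in the puff age s = t - t'
n = 40000
ds = t / n
integral = 0.0
for i in range(n):
    s = (i + 0.5) * ds
    integral += math.exp(-((x - u * s) ** 2 + y * y + z * z) / (4.0 * D * s)) * s ** -1.5
C_num = q / (8.0 * (math.pi * D) ** 1.5) * integral * ds

r = math.sqrt(x * x + y * y + z * z)
C_ana = q / (4.0 * math.pi * r * D) * math.exp(-u * (r - x) / (2.0 * D))   # Eq. (7.33)
print(C_num, C_ana)   # the two values agree closely
```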
The integral (7.32) may be simplified by the substitution β² = r²/(4D(t − t′)), which was already used to deduce Eq. (7.26). This leads to an integral with lower boundary r/(2√(Dt)) and upper boundary ∞. The integrand no longer contains a time dependence. One obtains an analytical result by taking the limit t → ∞, which gives

C(x, y, z, t) = q/(4πrD) · e^(−u(r − x)/(2D))      (7.33)

with r² = x² + y² + z². This result is symmetrical for rotation around the x-axis. It may be visualized in the z = 0 plane by drawing lines of constant concentration. This is done by introducing dimensionless variables x̃ = ux/D and ỹ = uy/D and r̃² = x̃² + ỹ². It follows that

C = qu/(4πD²r̃) · e^(−(r̃ − x̃)/2)      (7.34)

This may be further simplified by taking C̃ = 4πD²C/(qu), which gives

C̃ = (1/r̃) · e^(−(r̃ − x̃)/2)      (7.35)

This relation does not contain parameters like D, q and u. Figure 7.1, which shows the lines of constant C̃, therefore gives the structure of the solution to the advection-diffusion equation (7.8) for a continuous point source in a uniform flow in the limit t → ∞. In Figure 7.1 one observes how 'slender' the plume is. The advection of guest particles by the moving fluid in the x-direction dominates the diffusion. This becomes even clearer when for large x, downstream, one approximates, in the usual variables x, y, z

r = √(x² + y² + z²) = x√(1 + (y² + z²)/x²) ≈ x(1 + (y² + z²)/(2x²)) = x + (y² + z²)/(2x)      (7.36)
Figure 7.1 Lines of constant concentration C defined in Eq. (7.35). Shown is the z = 0 plane for a continuous point source at the origin, starting at t = 0 within a uniform velocity field in the x-direction in the limit t → ∞. (Reproduced by permission of Kluwer Academic Publishers, copyright 1973, from [1], Fig 1.9a, p.18.)
Substitution in Eq. (7.33), the solution in the limit t → ∞, gives

C ≈ q/(4πrD) · e^(−u(y² + z²)/(4xD)) ≈ q/(2πuσ²) · e^(−(y² + z²)/(2σ²))      (7.37)
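A short comparison, with invented parameter values, shows how good the slender-plume form is far downstream: Eq. (7.37) agrees with the exact limit (7.33) to well below one percent at the chosen point.

```python
import math

# The 'slender plume' approximation (7.37) versus the exact steady-state
# solution (7.33), far downstream of a continuous point source.
# q = u = D = 1 and the point (x, y, z) = (200, 5, 0) are illustrative values.

q, u, D = 1.0, 1.0, 1.0
x, y, z = 200.0, 5.0, 0.0

r = math.sqrt(x * x + y * y + z * z)
C_exact = q / (4.0 * math.pi * r * D) * math.exp(-u * (r - x) / (2.0 * D))     # (7.33)

sigma2 = 2.0 * D * x / u                                                       # sigma^2 = 2Dx/u
C_approx = q / (2.0 * math.pi * u * sigma2) * math.exp(-(y * y + z * z) / (2.0 * sigma2))  # (7.37)

rel = abs(C_approx - C_exact) / C_exact
print(C_exact, C_approx, rel)   # relative difference well below 1%
```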
On the right of Eq. (7.37) we approximated r ≈ x in the denominator in front and σ² = 2Dx/u both in front and in the exponent. The exponential function in Eq. (7.37) describes the dispersion perpendicular to the flow. Its squared standard deviation σ² is apparently proportional to the time t = x/u that the particles need to travel from the source to location x. It does not depend on puffs emitted earlier or later. So, individual puffs do not interfere and diffusion along the x-axis is negligible in the limit of large x and large times t. That this interpretation is correct also follows from the fact that one may show (Exercise 7.7) that Eq. (7.37) obeys Eq. (7.8) without time dependence

u ∂C/∂x = D (∂²C/∂y² + ∂²C/∂z²)      (7.38)

There is no time dependence as the limit t → ∞ describes a stationary situation. The advection in the x-direction on the left of Eq. (7.38) therefore equals the diffusion in the y- and z-directions on the right. From Eq. (7.37) one may write down directly the expression for an infinitely long line source along the z-axis which emits q [kg m⁻¹ s⁻¹] perpendicular to the uniform flow. The z-dependence disappears in (7.37) and in virtue of Eq. (7.23) one has to multiply by σ√(2π) as one has one dimension less. This gives

C = q/√(4πDxu) · e^(−y²u/(4Dx))      (7.39)

7.1.3 Effect of Boundaries
In the presence of walls the differential equations will still apply, if the proper boundary conditions are used. Consider the one-dimensional problem of an instantaneous plane source without external wind and with a cloud of guest particles propagating in the x-direction,
both positive and negative. If there is a wall at x = −L the flux F at that point should be zero, giving

F = −D ∂C(x = −L, t)/∂x = 0      (7.40)
It is clear that Eq. (7.12) does not obey this boundary condition, although it does solve the diffusion equation (7.11). Mathematically one could add an imaginary source at x = −2L that would give the same flow at x = −L, but in the other direction. The sum of both solutions is again a solution of the linear equation (7.11), but now obeys the boundary condition (7.40). The final solution therefore becomes

C = Q/(2√(πDt)) · [e^(−x²/(4Dt)) + e^(−(x+2L)²/(4Dt))]      (x > −L)      (7.41)
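The image construction can be checked directly. With invented values Q = D = L = 1 and t = 0.3, the sketch below confirms that the gradient of (7.41), and hence the flux, vanishes at the wall x = −L, and that the mass in x > −L stays equal to Q.

```python
import math

# The image-source solution (7.41) for a wall at x = -L: the flux through
# the wall must vanish and the total mass in x > -L must remain Q.
# Q = 1, D = 1, L = 1, t = 0.3 are arbitrary illustrative values.

Q, D, L, t = 1.0, 1.0, 1.0, 0.3

def C(x):
    pre = Q / (2.0 * math.sqrt(math.pi * D * t))
    return pre * (math.exp(-x * x / (4.0 * D * t)) + math.exp(-(x + 2.0 * L) ** 2 / (4.0 * D * t)))

# zero-flux condition at the wall, via a central difference
h = 1e-5
dCdx_wall = (C(-L + h) - C(-L - h)) / (2.0 * h)

# mass between the wall and +infinity (trapezoidal rule; tail negligible)
n, a, b = 4000, -1.0, 12.0
dx = (b - a) / n
mass = sum(C(a + i * dx) for i in range(n + 1)) * dx - 0.5 * dx * (C(a) + C(b))
print(dCdx_wall, mass)   # derivative ~ 0 at the wall, mass ~ Q
```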
This approach would be sufficient to solve diffusion equations in the air, where the ground acts as a wall; the vertical direction then would be the x-direction and the source of guest particles would be a plane. A real river would have two boundaries, at x = L and x = −L, respectively. Adding an imaginary source at x = −2L gives the right condition at x = −L, but not at x = L. In fact one needs a whole series of mirror sources at x = −2L, −4L, −6L, . . . and also at x = 2L, 4L, 6L, . . . to get both boundary conditions right at the same time. This gives a series ([5], Eq. (2.46))

C = Q/(2√(πDt)) Σ_{n=−∞}^{∞} e^(−(x+2nL)²/(4Dt))      (−L < x < L)      (7.42)

7.2 Dispersion in Rivers
The question to be tackled is how a concentration of pollutants that enter a river at a factory is dispersed by the flow somewhere downstream ([3]–[5]). Of course, one expects the pollutant to be dispersed so that downstream a smaller concentration spreads over a larger region. The direction of the flow is always called the x-direction. The vertical coordinate is called z, and y represents the remaining, transversal direction, horizontally perpendicular to the average flow. The y-coordinate varies between y = 0 and y = W and the depth of the river is indicated by h(y). The three-dimensional diffusion equation in a moving flow with velocity u was derived in Section 7.1. We repeat the main results. The flux F was written in Eq. (7.5) as the sum of advection and diffusion:

F = uC − D∇C      (7.43)
Next the equation of continuity was used, leading to

∂C/∂t + ∇·(uC) − ∇·(D∇C) = 0      (7.44)
Finally, for a velocity field with ∇·u = 0 and a diffusion coefficient D, which is constant and the same in the three directions, we found

∂C/∂t + u·∇C = D∇²C      (7.45)
Note that without the approximations mentioned, one should solve Eq. (7.44).

7.2.1 One-Dimensional Approximation
The simplest approach to the problem is by reducing it to one dimension. One assumes that the pollutant is inserted in a plane cross section over the total width and depth of the flow; the only relevant coordinate then would be x, along the flow and we should find a one-dimensional form of Eq. (7.45). In fact, we will derive Eq. (7.65) for the average concentration C, which results from both advection and dispersion. The dispersion will be a generalization of diffusion and will be indicated by a parameter K instead of D. We will also express K in other quantities ([5], Section 4.1.2). The resulting equations should be used with care as they only hold after long times and include turbulence only approximately. In a river, sketched in Figure 7.2, the velocity u is in the x-direction, with a velocity distribution u(y) across the river; the velocity u(y) will be largest near the middle of the stream. Consider two guest particles close to one another. Assume that one particle diffuses a little into the middle of the stream, while the other one does not. The first particle will get a higher velocity in the x-direction than the second particle and would therefore travel a certain distance much quicker than the second one. Advection therefore will be the main cause of driving the particles apart. This implies that the effective coefficient K must be larger than the diffusion constant D. We will first discuss molecular diffusion only and then include the effect of turbulence.
Figure 7.2 Interpretation of the one-dimensional dispersion equation. On the left-hand side a stream is sketched, of which a small part is enlarged on the right-hand side. The variables indicated there refer to a coordinate frame, moving with the flow.
In order to derive the relevant equations it is convenient to introduce quantities averaged over the width W of the river and weighted with the depth h(y). We assume that the variables do not change with the vertical coordinate z. This is a drastic approximation, for we have seen in Section 5.2.4 that the velocity of the flow should approach zero at the bottom of the river. Anyway, besides u(y) we define a mean ū by

ū = (1/A) ∫₀ᵂ u(y)h(y)dy      (7.46)
Here A is the cross-sectional surface area of the river: A = ∫₀ᵂ h(y)dy. We note that Aū is the volume of water passing a cross section in a unit of time [m³ s⁻¹]. From the concentration C(x, y, t) of pollutant, a mean value C̄ is defined by

C̄(x, t) = (1/A) ∫₀ᵂ C(x, y, t)h(y)dy      (7.47)
One can express the precise values of variables with respect to the mean values in the following way

u(y) = ū + u′(y)      (7.48)

with

(1/A) ∫₀ᵂ u′(y)h(y)dy = 0      (7.49)
Also we define

C(x, y, t) = C̄(x, t) + C′(x, y, t)      (7.50)

where again

(1/A) ∫₀ᵂ C′(x, y, t)h(y)dy = 0      (7.51)
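The averaged quantities (7.46)–(7.51) are easily mirrored in code. The depth and velocity profiles below are invented purely for illustration; by construction the depth-weighted deviation u′(y) = u(y) − ū integrates to zero across the width, Eq. (7.49).

```python
import math

# Discrete illustration of the width-averaged quantities (7.46)-(7.51).
# The depth h(y) and velocity u(y) are invented profiles, chosen only so
# that the flow is fastest near the middle of the stream.

W = 20.0
n = 4000
dy = W / n
ys = [(i + 0.5) * dy for i in range(n)]

def h(y):          # parabolic depth profile, deepest mid-stream
    return 2.0 * (1.0 - ((y - W / 2.0) / (W / 2.0)) ** 2) + 0.2

def u(y):          # velocity largest near the middle of the stream
    return 1.5 * math.sin(math.pi * y / W)

A = sum(h(y) for y in ys) * dy                         # cross-sectional area
u_bar = sum(u(y) * h(y) for y in ys) * dy / A          # Eq. (7.46)

# the deviation u'(y) = u(y) - u_bar must average to zero, Eq. (7.49)
check = sum((u(y) - u_bar) * h(y) for y in ys) * dy / A
print(u_bar, check)   # check is ~ 0 by construction
```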
Note that the primes u′, C′ in this section do not mean differentiation, but the deviation from the mean value. We have to solve the diffusion equation (7.45), which for a flow in the x-direction becomes

∂C/∂t + u(y) ∂C/∂x = D (∂²C/∂x² + ∂²C/∂y²)      (7.52)

It is easiest to describe the problem in a system moving with the average flow. Like Eqs. (7.27) we use a Galilei transformation, which in this case we write as

ξ = x − ūt
y = y
t = t      (7.53)
In Eq. (7.52) u(y) has to be replaced by u′(y), while

∂C/∂x = (∂C/∂ξ)(∂ξ/∂x) = ∂C/∂ξ      (7.54)
Using Eq. (7.50) we find from Eq. (7.52)

∂(C̄ + C′)/∂t + u′(y) ∂(C̄ + C′)/∂ξ = D (∂²(C̄ + C′)/∂ξ² + ∂²C′/∂y²)      (7.55)
On the far right in this equation we have used that C̄ is independent of y. In this equation the diffusion term ∂²/∂ξ² may be neglected, as the effect of a change of velocity by diffusion in the y-direction will dominate. With some other arguments by the British mathematician Taylor most of the remaining terms can be ignored as well, leading to

u′ ∂C̄/∂ξ = D ∂²C′/∂y²,   with boundary conditions   ∂C′(y = 0)/∂y = ∂C′(y = W)/∂y = 0      (7.56)

Instead of summarizing Taylor's arguments it is more illuminating to study the right-hand side of Figure 7.2, which pictures the river in the moving frame (7.53). In that figure a rectangle is analysed as to its inflow and outflow. The rectangle represents a pillar of width dξ in the ξ-direction and of width dy in the y-direction over the total depth h(y). In the ξ-direction only advection over the total depth h is considered, which assumes that the advection is dominated by the mean concentration. The variation of u′ and h with ξ will be neglected, which results in an advection term u′C̄h. In the y-direction only diffusion occurs, determined by the y-derivative of C, which equals the y-derivative of C′. This gives −Dh ∂C′/∂y over the total depth h. After a long time the situation becomes stationary and the total amount of guest particles in the moving rectangle will remain the same. We may therefore put the outflow in the ξ-direction, which is a difference of two terms (cf. Figure 7.2), equal to the inflow in the y-direction, which also is a difference of two terms. By taking the first terms of the series expansion it follows that

u′h ∂C̄/∂ξ = ∂/∂y (Dh ∂C′/∂y)      (7.57)

For constant diffusion D and depth h this indeed reduces to Eq. (7.56). We proceed with the more general Eq. (7.57) and integrate twice with respect to y, which gives

(∂C̄/∂ξ) ∫₀ʸ [1/(Dh(y₂))] ∫₀^{y₂} u′(y₁)h(y₁)dy₁ dy₂ = C′(ξ, y, t) − C′(ξ, y = 0, t)      (7.58)
Here, the boundary conditions of Eq. (7.56) were used and the fact that C̄ is independent of y. The flow of guest particles through 1 [m²] perpendicular to the ξ-direction is defined as

J(ξ) = (1/A) ∫₀ᵂ C(ξ, y, t) u′h dy      (7.59)
According to Eq. (7.50) we may write C(ξ, y, t) = C̄(ξ, t) + C′(ξ, y, t). For substitution in (7.59) we may as well write C(ξ, y, t) = C̄(ξ, t) + C′(ξ, y, t) − C′(0), as the integral ∫₀ᵂ C′(0)u′h dy = C′(0) ∫₀ᵂ u′h dy = 0 because of Eq. (7.49). On substitution in (7.59) the integral over C̄(ξ, t) vanishes for the same reasons. It follows that

J(ξ, t) = (1/A) ∫₀ᵂ C(ξ, y, t)u′h dy = (1/A) ∫₀ᵂ (C′(ξ, y, t) − C′(0))u′h dy
        = (1/A)(∂C̄/∂ξ) ∫₀ᵂ u′(y)h(y) ∫₀ʸ [1/(Dh(y₂))] ∫₀^{y₂} u′(y₁)h(y₁)dy₁ dy₂ dy      (7.60)
This unpleasant equation is reduced to two somewhat simpler ones

J = −K ∂C̄/∂ξ      (7.61)

K = −(1/A) ∫₀ᵂ u′(y)h(y) ∫₀ʸ [1/(Dh(y₂))] ∫₀^{y₂} u′(y₁)h(y₁)dy₁ dy₂ dy      (7.62)
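Eq. (7.62) can be evaluated numerically with cumulative sums. For a constant depth h and the invented deviation profile u′(y) = u₀ cos(πy/W), which integrates to zero across the width, the triple integral can also be done analytically and gives K = u₀²W²/(2π²D), so the code has a closed form to check against; note that K comes out inversely proportional to D.

```python
import math

# Numerical evaluation of the longitudinal dispersion coefficient (7.62)
# for constant depth h and the invented deviation u'(y) = u0*cos(pi*y/W).
# For this profile the integral gives analytically K = u0^2 W^2 / (2 pi^2 D).

u0, W, h, D = 0.3, 10.0, 1.0, 1e-3
n = 20000
dy = W / n

def up(y):
    return u0 * math.cos(math.pi * y / W)

A = W * h
K_num = 0.0
inner1 = 0.0        # running integral of u'(y1)*h over (0, y2)
inner2 = 0.0        # running integral of inner1 / (D*h) over (0, y)
for i in range(n):
    y = (i + 0.5) * dy
    inner2 += inner1 / (D * h) * dy
    inner1 += up(y) * h * dy
    K_num += up(y) * h * inner2 * dy
K_num *= -1.0 / A

K_ana = u0 * u0 * W * W / (2.0 * math.pi ** 2 * D)
print(K_num, K_ana)   # the two values agree closely
```

Despite the minus sign in (7.62), K is positive: particles that have drifted toward the fast mid-stream outrun the rest, which is exactly the advective stretching described above.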
Eq. (7.61) resembles a diffusion equation in one dimension, which is the reason why K defined in Eq. (7.62) is called the longitudinal dispersion coefficient. Conservation of mass in one dimension gives from Eq. (7.4)

∂C̄/∂t = −∂J/∂ξ = ∂/∂ξ (K ∂C̄/∂ξ)      (7.63)

For cases where K is independent of ξ this gives the one-dimensional equation

∂C̄/∂t = K ∂²C̄/∂ξ²      (7.64)
The left-hand side is the time derivative in the coordinate system which is moving with velocity ū with respect to the coordinate system at rest. This therefore is the total derivative with respect to time, which was encountered in Eq. (7.10). In the coordinate system at rest Eq. (7.64) becomes the one-dimensional dispersion equation

∂C̄/∂t + ū ∂C̄/∂x = K ∂²C̄/∂x²      (7.65)
The right-hand side of Eq. (7.64) remained unchanged because of Eq. (7.54). Eq. (7.65) is analogous to Eq. (7.8) with D replaced by K. Eq. (7.62) shows that this parameter is inversely proportional to the diffusion coefficient D and strongly dependent on the variation of velocity u′(y) over the river cross section. The molecular diffusion represented by D is very small. From Eq. (7.17) and Table 7.1 it may be determined that after a release of salt in water at rest it would need 12 years before a root mean square dispersion of 1 [m] was reached. In the case of advective dispersion
this number will depend on K, which in turn depends on the velocity variation u′(y). For the special case of a fluid between two close plates the integral (7.62) is easy to calculate (Exercise 7.9). One finds K ≈ 0.00064 [m² s⁻¹] and the salt needs 10 minutes to achieve a root mean square dispersion of 1 [m]. The latter number should not be taken literally as Eq. (7.57) was derived for a stationary situation and only holds after a long time.

7.2.2 Influence of Turbulence
In the example just given turbulence was absent because the distance between the two boundaries was very small. In a real river the dominant physical effect causing dispersion is turbulence. This is easily observed when some cans of dye are emptied into flowing water at intervals of a minute. Each time they will disperse differently. This is due to instabilities in the flow pattern; these instabilities not only occur in rivers with a complicated geometry, but also in long pipes, when the inertia forces dominate the viscous forces. In Sections 7.4 and 7.5 we shall give a physical description of turbulence and introduce the so-called Reynolds number Re to quantify this effect. Instabilities occur when Re is above some critical value. Essentially, turbulence is described as a statistical interaction within the fluid, just like molecular diffusion, which is the consequence of many collisions between molecules. In empirical practice one just replaces the coefficient for molecular diffusion D by similar coefficients ε for turbulent diffusion. As a preliminary it is necessary to look at the simple case of a uniform flow in a channel of constant depth d and width W. Assume that the cross section of the channel is rectangular, and that a slight slope S = tan α causes a flow with constant velocities without turbulence. In Figure 7.3 a parcel of fluid is sketched covering the depth of the channel with a length of 1 [m] and width W [m]. The two forces that act are gravity and the tangential stress along the bottom of the channel, which we already met in Section 5.2.4. Along the bottom surface the stress τ₀ [N m⁻²] operates opposite to gravity. As there are no accelerations, both forces must be equal in magnitude, which gives

Wτ₀ = ρWdg sin α      (7.66)

From S = tan α ≈ α it follows that

τ₀ = ρdgS      (7.67)

For the slope S any other force may be substituted which causes the flow to run, such as a pressure difference. In order to describe turbulence we need some characteristics of the flow.

Figure 7.3 Stationary fluid motion in a channel. Gravity and friction forces are in equilibrium.
A physical parameter describing the boundary layer at the bottom of the flow will be the friction velocity u∗, already introduced in Eq. (5.32). It has dimensions [m s⁻¹] and is dependent on the relevant dynamical quantities. By combining Eqs. (5.32) and (7.67) one finds for the friction velocity or shear velocity

u∗ = √(τ₀/ρ) = √(gSd)      (7.68)

For a turbulent river one could go back to the definition of flux F in Eq. (7.43) and represent turbulence empirically by replacing the single diffusion coefficient D by three values εx, εy and εz, representing turbulent mixing in the three directions. One then writes

F = uC − εx (∂C/∂x)ex − εy (∂C/∂y)ey − εz (∂C/∂z)ez      (7.69)
with ex, ey, ez the unit vectors in the three dimensions. The εs are empirical variables, called eddy viscosities or coefficients of turbulent diffusion ([5], pp. 107–112). As turbulence relates to the behaviour in the boundary layer, it seems acceptable that u∗ will appear in the expressions for ε; as the dimensions of ε should be [m² s⁻¹], the velocity u∗ [m s⁻¹] should be multiplied by a length, for which the average depth d of the flow seems a good candidate. For the vertical εz one usually takes the value

εz = 0.067 u∗ d   (7.70)

while for the transversal εy the empirical value appears to be strongly dependent on the type of flow:

εy = 0.15 u∗ d   (straight channel)
εy = 0.24 u∗ d   (irrigation channel)
εy = 0.60 u∗ d   (meandering stream)   (7.71)
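Relations (7.68), (7.70) and (7.71) chain together directly; the following sketch computes them for an illustrative channel (the numerical inputs are assumptions, not values from the text):

```python
import math

g = 9.81  # gravitational acceleration [m s^-2]

def friction_velocity(S, d):
    """Shear velocity u* = sqrt(gSd), Eq. (7.68)."""
    return math.sqrt(g * S * d)

def eddy_viscosities(u_star, d, channel="straight"):
    """Empirical eddy viscosities of Eqs. (7.70) and (7.71).
    The numerical factors carry large margins of uncertainty."""
    factors = {"straight": 0.15, "irrigation": 0.24, "meandering": 0.60}
    eps_y = factors[channel] * u_star * d  # transversal
    eps_z = 0.067 * u_star * d             # vertical
    return eps_y, eps_z

# Illustrative channel: depth d = 2 m, slope S = 2e-4
u_star = friction_velocity(2e-4, 2.0)          # about 0.063 m/s
eps_y, eps_z = eddy_viscosities(u_star, 2.0)   # about 0.019 and 0.008 m^2/s
```

Note how small the slope is: a fall of 20 cm per kilometre already sustains a shear velocity of several centimetres per second.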
All these estimates have large margins of uncertainty and the last decimal will not be significant. Also, the depth is usually so much smaller than the width that one assumes rapid mixing over the depth and ignores εz. In the longitudinal x-direction one may ignore the direct turbulent diffusion, for by the argument depicted in Figure 7.2 and leading to Eqs. (7.61) and (7.62) it is again clear that the transversal turbulent diffusion will dominate dispersion in the flow direction of the stream. In fact, it is again Eq. (7.65) that describes the one-dimensional dispersion in the x-direction. The factor K is again given by Eq. (7.62), with the only adaptation that the molecular diffusion coefficient D has to be replaced by εy. This gives

K = −(1/A) ∫₀^W u′(y)h(y) ∫₀^y [1/(εy h(y2))] ∫₀^(y2) u′(y1)h(y1) dy1 dy2 dy   (7.72)
Remember that the prime in u′(y) does not mean differentiation, but represents the deviation from the average defined in Eq. (7.48). If εy is independent of the transversal position y one may write in shorthand ([5], Eq. (5.17))

K = W²(u′)² I / εy   (7.73)
Dispersion of Pollutants
277
Here (u′)² I represents the integral of Eq. (7.72), while the length dimensions in I were taken out and put in W². The student should check that K in Eq. (7.73) indeed has the correct dimensions [m² s⁻¹]. The coefficient εy has the form (7.71); therefore one may also represent K by

K = α W²(u′)² / (u∗ d)   (7.74)

where an empirical value for the integral I is hidden in α. Finally (u′)² will be roughly proportional to u², the square of the average velocity, which leads to a factor α′ in

K = α′ W²u² / (u∗ d)   (7.75)
Fischer ([5], p. 136) gives a value α′ ≈ 0.011, with a margin of perhaps a factor of two. A few simple examples are discussed in the next subsections in order to illustrate the methods. It should be kept in mind that many approximations have been made and that in practice one uses empirical values for K. Some assumptions, such as the constancy of the cross-sectional area A, may be relaxed, giving somewhat more complex equations.

7.2.3 Example: A Calamity Model for the Rhine River
Suppose that at a certain moment a factory ejects an amount M [kg] of poisonous substance into a river like the European River Rhine. As a first approximation one may take the one-dimensional equation (7.65) to describe the propagation of the poisonous concentration along the river. This equation reads

∂C/∂t + u ∂C/∂x = K ∂²C/∂x²   (7.76)

As this equation is the one-dimensional form of Eq. (7.8), the solution is the one-dimensional form of Eq. (7.29):

C(x, t) = [M/(2√(πKt))] exp[−(x − ut)²/(4Kt)]   (7.77)
where of course the coefficient of longitudinal dispersion K is used. At a certain point the amount of water passing will be S [m³ s⁻¹] and the amount of pollution passing per second will be uC [kg s⁻¹]. Therefore the concentration ϕ [kg m⁻³] at a cross section of the river will be

ϕ = uC/S = (M/S) [1/√(4πKt/u²)] exp[−(t − x/u)²/(4Kt/u²)]   (7.78)
The magnitude of this concentration will decide whether the river is suitable for swimming, whether the water is fit for drinking, and so on. In practice, one divides a river into stretches in which the parameters K and S are essentially constant. Inflowing rivers may be taken into account by increasing S from stretch to stretch. The parameter K may be determined experimentally by injecting tracers into the river as a precaution before a calamity takes place. The results of such model experiments are shown in Figure 7.4.
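A minimal sketch of such a calamity model evaluates Eq. (7.78) at a station downstream; all numbers below (release mass, discharge, dispersion coefficient) are illustrative assumptions, not the calibrated Rhine values:

```python
import math

def passing_concentration(t, x, M, S, u, K):
    """Eq. (7.78): concentration [kg m^-3] passing the station at x [m],
    a time t [s] after an instantaneous release of M [kg]."""
    var = 4.0 * K * t / u**2  # width parameter 4Kt/u^2 [s^2]
    return (M / S) / math.sqrt(math.pi * var) * math.exp(-(t - x / u)**2 / var)

# Assumed numbers: M = 1000 kg, S = 2000 m^3/s, u = 1 m/s, K = 1000 m^2/s,
# station 210 km downstream (the distance of Maximiliansau in Figure 7.4).
x, u, K = 210e3, 1.0, 1000.0
t_peak = x / u  # the peak passes near t = x/u
c_peak = passing_concentration(t_peak, x, 1000.0, 2000.0, u, K)
```

Scanning t around x/u reproduces the qualitative shape of the passage curves of Figure 7.4: because the width parameter grows with t, the rising flank is steeper than the falling one, giving a tail.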
Figure 7.4 Calamity model for the river Rhine. Calculations (drawn curves) are compared with calibration experiments (dots). The upper figure refers to Maximiliansau, 210 [km] downstream from the ‘calamity’ and the lower figure to Lobith, 710 [km] downstream. Reproduced by permission of Dr A. van Mazijk from [6], pp. 42–47.
One observes that the peak concentration lowers and that the width of the distribution increases with time, but is not symmetrical. The reason is that it takes some time for the pollution to pass a location. In that time the width increases; therefore the concentration measured locally is not symmetrical in time. There will be a tail, which in practice is more pronounced than in the simplest theory.

7.2.4 Continuous Point Emission
Assume that an effluent is being emitted in a channel of constant width W and depth d at a rate q [kg s⁻¹] at location x = y = 0. There will be rapid vertical mixing, so the source may be replaced by a vertical line source of strength q/d [kg m⁻¹ s⁻¹]. If the boundaries of the river may be ignored, the concentration after a long time is approximated by Eq. (7.39) with the diffusion coefficient D replaced by the transverse εy and the velocity u by the average u:

C = (q/d) [1/√(4πεy xu)] exp[−y²u/(4εy x)]   (7.79)
This solution supposes an infinitely long line source in the z-direction and then obeys the advection-diffusion equation (7.38) when the z-variable is taken out:

u ∂C/∂x = εy ∂²C/∂y²   (7.80)
The student should check that C indeed is expressed in [kg m⁻³]. Eq. (7.79) would be an exact solution of the advection-diffusion equation (7.80) if the velocity of the river over the cross section were constant and the walls could be ignored. The walls of the river at y = 0 and y = W (with constant W) imply boundary conditions that the walls do not absorb. In this case the net transversal flux of pollutants at the walls is zero, so ∂C/∂y = 0 at the walls. Let the source of pollutants have coordinates x = 0, y = y0. Then the walls can be taken into account by introducing, besides the source at y = y0, a double infinite series of mirror sources located at y − y0 = ±2W, ±4W, ±6W. . . and y + y0 = ±2W, ±4W, ±6W. . ., as explained in Eq. (7.42). To work out the infinite series of mirror sources it is convenient to introduce dimensionless quantities

C0 = q/(udW);  y′ = y/W;  x′ = xεy/(uW²);  y0′ = y0/W   (7.81)
in which Eq. (7.79) for a source at y0′ = 0 without walls reads

C/C0 = [1/√(4πx′)] exp[−y′²/(4x′)]   (7.82)
It is now easy to write down the series analogous to Eq. (7.42) with the source at y′ = y0′ and walls at y = 0, W as

C/C0 = [1/√(4πx′)] Σ_{n=−∞}^{+∞} { exp[−(y′ − y0′ − 2n)²/(4x′)] + exp[−(y′ + y0′ − 2n)²/(4x′)] }   (7.83)
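The series (7.83) converges quickly and is easy to evaluate numerically; a sketch (the truncation at |n| ≤ 50 is an arbitrary safe choice):

```python
import math

def relative_concentration(xp, yp, y0p, nmax=50):
    """C/C0 of Eq. (7.83): mirror sources for walls at y' = 0 and y' = 1."""
    total = 0.0
    for n in range(-nmax, nmax + 1):
        total += (math.exp(-(yp - y0p - 2 * n)**2 / (4 * xp))
                  + math.exp(-(yp + y0p - 2 * n)**2 / (4 * xp)))
    return total / math.sqrt(4 * math.pi * xp)

# Centreline injection y0' = 0.5, evaluated at x' = 0.1:
centre = relative_concentration(0.1, 0.5, 0.5)  # about 1.04
side = relative_concentration(0.1, 0.0, 0.5)    # about 0.96
```

Both values lie within about 5% of complete mixing (C/C0 = 1), which is the criterion behind the mixing length discussed next.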
This result is plotted in Figure 7.5 for a centreline injection y0 = W/2. The profiles at the centre of the river and at the side are shown. For x′ = 0.1 the mixing over the cross section of the river is rather complete, within 5%. The location at which that happens is called the mixing length L. Taking x′ = 0.1 as standard, the mixing length for centreline injection follows from Eq. (7.81) as

L = 0.1uW²/εy   (7.84)
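Eq. (7.84) in numbers, for a hypothetical meandering river (all inputs are assumed):

```python
def mixing_length(u, W, eps_y):
    """Eq. (7.84): distance for near-complete cross-sectional mixing
    after a centreline injection (criterion x' = 0.1)."""
    return 0.1 * u * W**2 / eps_y

# Hypothetical meandering river: W = 100 m, u = 1 m/s, eps_y = 0.6 m^2/s
L = mixing_length(1.0, 100.0, 0.6)  # about 1.7 km
```

The W² dependence means that wide rivers mix laterally very slowly: doubling the width quadruples the mixing length.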
Figure 7.5 Concentration profile for a continuous centreline injection. The curve ‘centreline’ shows the concentration along the centre of the river as a function of distance to the injection point. The curve ‘side’ shows the concentration along one of the sides. Where the curves meet, the river is thoroughly mixed. (Reproduced from Mixing in Inland and Coastal Waters, Fischer et al., p. 114, Copyright 1979 with permission of Elsevier.)
7.2.5 Two Numerical Examples¹
The concepts discussed above may be illustrated by two practical examples.
Example 7.1 Dilution of Pollution An industry discharges 10 million [L] of effluent a day with a concentration of 200 [ppm] of pollutants. Use [ppm m3 ] as a measure for the discharge. Assume a slowly meandering, very wide river with a depth of 10 [m], an average velocity of 1 [m s−1 ] and a friction velocity u ∗ = 0.1 [m s−1 ]. Calculate the width of the plume and the maximum concentration 300 [m] downstream with a discharge of q = 23 [ppm m3 s−1 ].
¹ The examples were taken by permission of Academic Press from [5], examples 5.1 (p. 117) and 5.2 (p. 118).
Answer
We assume a discharge at x = y = 0. It follows from (7.71) that εy = 0.6 [m² s⁻¹]. The river is very wide, so the walls can be ignored. Let the width of the plume be defined by the length of two standard deviations on both sides. We calculate this at x = 300 [m]. Eq. (7.79) then gives a width b of

b = 4σ = 4√(2εy x/u) = 76 [m]   (7.85)

The maximum concentration is found where y = 0, for then the exponential in Eq. (7.79) becomes one. This leads to

Cmax = (q/d)/√(4πεy xu) = 0.05 [ppm]   (7.86)

The student should check the dimensions on the right in Eqs. (7.85) and (7.86).
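The arithmetic of this answer can be checked directly; a sketch reproducing Eqs. (7.85) and (7.86) with the numbers of Example 7.1:

```python
import math

# Example 7.1: q = 23 ppm m^3/s, d = 10 m, u = 1 m/s, u* = 0.1 m/s,
# meandering river, evaluated x = 300 m downstream of the outfall.
q, d, u, u_star, x = 23.0, 10.0, 1.0, 0.1, 300.0
eps_y = 0.60 * u_star * d  # Eq. (7.71), meandering stream: 0.6 m^2/s

# Plume width: two standard deviations on both sides, Eq. (7.85)
b = 4.0 * math.sqrt(2.0 * eps_y * x / u)  # about 76 m

# Centreline (maximum) concentration, Eqs. (7.79) and (7.86)
C_max = (q / d) / math.sqrt(4.0 * math.pi * eps_y * x * u)  # about 0.05 ppm
```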
Example 7.2 Mixing Length
A plant discharges a pollutant in the middle of a straight rectangular channel with a depth d = 2 [m], a width W = 70 [m], a slope S = 0.0002 and an average velocity u = 0.6 [m s⁻¹]. What is the length L for ‘complete’ mixing?

Answer
From Eq. (7.68) it follows that u∗ = 0.06 [m s⁻¹]. From (7.71) one finds εy = 0.019 [m² s⁻¹]. According to Eq. (7.84) one finds L = 15.5 [km].

7.2.6 Improvements
In view of the approximations made, the equations derived above can only give first-order estimates. Also, it should be realized that the longitudinal dispersion coefficient K is not easy to calculate from Eq. (7.72), as the u′(y) function generally will not be known. Thus, in practice, one makes some measurements in order to deduce a K-value for a particular stream. How can the transport equations be improved? The first possibility is to assume an x-dependence for K, which would simulate an x-dependence of the depth h(y), the cross section A, the velocity field u′, or all of them. If one studies the derivation of Eq. (7.63) one will see that when h, A and u′ vary slowly with x, Eq. (7.63) will be a good approximation, leading to

∂C/∂t + u ∂C/∂x = ∂/∂x (K ∂C/∂x)   (7.87)

Another approach would be to return to Eq. (7.43) and replace the single diffusion coefficient D by the three coefficients of turbulent diffusion εx, εy and εz, similarly to Eq. (7.69). As vertical mixing happens quickly one may average over the depth by writing
d times an averaged C(x, y, t) [kg m⁻³] instead of C(x, y, z, t) [kg m⁻³], which amounts to looking at the total column of fluid. Similarly, average fluxes are given by

Fx = ux Cd − εx d ∂C/∂x   (7.88)
Fy = uy Cd − εy d ∂C/∂y   (7.89)

where the fluxes now refer to the amount of mass passing through a unit width [kg m⁻¹ s⁻¹]. If one compares Eqs. (7.88) and (7.89) with Eq. (7.69) it will appear that we have assumed that the average of the derivative over depth equals the derivative of the average. Conservation of mass in the x, y-plane gives, from Eq. (7.4),

∂C/∂t + div F = ∂C/∂t + ∂Fx/∂x + ∂Fy/∂y = 0   (7.90)

If the averages over depth ux and uy may be taken as constant and Cd is written in the time derivative on the left in Eq. (7.90), we find

∂(Cd)/∂t + ux ∂(Cd)/∂x + uy ∂(Cd)/∂y = ∂/∂x (εx d ∂C/∂x) + ∂/∂y (εy d ∂C/∂y)   (7.91)

For εy the values given by Eq. (7.71) may be inserted. For εx a value has to be adopted as the best guess. The foundations of Eq. (7.91) appear to be somewhat shaky. One therefore usually returns to the simpler equation (7.87) or starts from the more sophisticated Navier–Stokes equations to be discussed in Section 7.4.

7.2.7 Conclusion
Pollutants may enter a river by regular and legitimate discharge from industries or agriculture. In this case governments will have regulations based on norms which are meant to protect the quality of the river. Calamities such as the accidental discharge of pollutants in high concentrations may happen, and the methods discussed in this chapter make it possible to calculate their consequences before the discharge reaches population centres. Many harmful chemicals have been stored in ponds surrounded by dikes to wait for final disposal. Occasionally dikes break down and the stored chemicals pollute rivers. The obvious precaution is to neutralize the harmful chemicals, often acids, before they are stored. Another reason to do this is that all fluids will seep through the soil and disperse in the environment with the movements of the groundwater. This slow, but unstoppable, process is the subject of the next section.
7.3 Dispersion in Groundwater
Groundwater is an important resource for drinking water; in the United States, for example, about 40% of the population depend on groundwater for their drinking water [7]. One usually takes it for granted that groundwater is free from contaminants, pumps it up and does not
spend the money necessary to do a complete chemical analysis. Luckily groundwater still is a rather clean and reliable resource, and it is worth the effort to keep it that way. If contaminants have entered the groundwater, advection is the main method of dispersion, as the diffusion coefficients in water are very small (Table 7.1). This process is slow, as is shown on the left-hand side of Figure 7.6. There a disposal site is shown, from which chromium travelled 1300 [m] in 13 years, with an average velocity u = 100 [m yr⁻¹]. One of the reasons for this low velocity is depicted on the right-hand side of Figure 7.6. The water has to crawl through the pores in the soil. The low velocity of groundwater simplifies the equations, as we shall see in the subsections below. To solve them analytically one has to simplify the layer structure of the soil; for complicated structures one has to solve the equations numerically.

Figure 7.6 On the left a chromium plume is shown in Long Island, USA; it has formed over 13 years in an aquifer of sand and gravel. (Reproduced by permission of John Wiley & Sons, Inc.) On the right a microscopic description is given of a groundwater flow to the right where the particles have to crawl through the pores of the soil.

7.3.1 Basic Definitions
A simplified structure of the earth surface layer is sketched in Figure 7.7. There are impervious layers, for example, clay, through which no water can penetrate and there are water-carrying layers, called aquifers, often consisting of sand or sandstone. Look at Figure 7.7, starting at the bottom, where one will notice an aquifer between two impervious layers. This is called a confined aquifer. Above the confined aquifer we see an impervious clay layer and above that layer again an aquifer, which is now called an unconfined aquifer. The reason for the adjective is that the top of the aquifer is connected with the atmosphere by means of pores in the ground. At position A in Figure 7.7 a tube is drawn, which is open at both ends. At the top it is in open connection with the atmosphere and at the bottom it penetrates the unconfined aquifer. In practice, it has a metal casing with little holes pierced in it, so the tube can penetrate the soil and still be filled with water. The height to which the water rises is called the water table. At that position in the soil one has atmospheric pressure.
Figure 7.7 Aquifers underneath the earth surface. The top aquifer is unconfined and in connection with the atmosphere. The lower one is confined between two impervious layers. The open tube at A defines the water table. Between the water table and the surface one has the vadose zone with lots of life. The tube at B shows a rather high water pressure in the aquifer deep down.
In Figure 7.7 the water table is not precisely horizontal, but follows the terrain somewhat. Indeed, as will be explained, the variation in water table causes groundwater flow. Between the water table and the open air one has the vadose zone, which may be humid and is also called the unsaturated zone. Small capillaries may suck up water into this zone and plants and trees may use the humidity to dissolve their food. In this soil a lot of life is present and many chemical reactions are taking place. On the far left in Figure 7.7, one will notice at position B a deep tube, again open at both ends. The tube goes down into the confined aquifer. Apparently the water pressure down
Table 7.2 Particle diameter, hydraulic conductivity and porosity for various soils ([9], p. 8; [8], p. 53).

Soil     Particle diameter d [mm]   Hydraulic conductivity k [m s⁻¹]   Porosity n [%]
Clay     <0.002                     10⁻¹⁰–10⁻⁸                         35–55
Silt     0.002–0.06                 10⁻⁸–10⁻⁶                          35–60
Sand     0.06–2                     10⁻⁵–10⁻³                          20–35
Gravel   >2                         10⁻²–10⁻¹                          20–35
there is so high that the water in the tube rises almost to the water table of the unconfined aquifer. If this happens one may easily withdraw water from the deep, confined aquifer. If the pressure is high enough, a natural source will well up. These pressures arise in artesian wells, where the aquifers slope down from far away hills. The elevation of the aquifer in the hills is responsible for the high water pressure down in the valleys. At the left in Figure 7.7 the water table touches the surface of a stream. This is the steady-state situation. When the surface of the stream rises, water will penetrate into the sandy banks; consequently the water table will also rise, albeit slowly. When the water level goes down, water will seep from the aquifer into the stream. Groundwater moves through the pores between the soil particles, as illustrated on the right-hand side of Figure 7.6. The characteristics of the movement are connected with the properties of the soil. They are shown in Table 7.2. The pores taken together may be represented by the porosity n, which is the volume of the pores divided by the total bulk volume of a sample. The last column of Table 7.2 gives the porosity for various soils. It appears that the porosities do not vary widely, while the particle sizes do. These sizes influence another property, the hydraulic conductivity k, displayed in the centre column of Table 7.2. This property will be defined below, but from the name we understand that this quantity will determine the ease by which the groundwater moves. The fact that a clay layer is indicated as ‘impervious’ in Figure 7.7 implies that the pores in clay must be small and very narrow. Many of them must have ‘dead ends’ blocking the flow. Others provide loopholes for the water, but the particles are so small that the water in Figure 7.6 (right) has to travel a very long path (Exercise 7.11). Anyway, Table 7.2 shows that groundwater can penetrate even clay. 
The average flow has velocity u with magnitude u. Take 1 [m²] perpendicular to the average flow u. The specific discharge vector q [m s⁻¹] is defined as the amount of water [m³ s⁻¹] passing that [m²] in the direction of u. Its length is q < u, as only the pores contain water, with a fraction n < 1 of the volume:

q = nu   (7.92)

7.3.1.1 Hydraulic Potential
In Figure 7.7 we already showed two cases of a vertical tube in an aquifer, which was open at both ends. In the left-hand side of Figure 7.8 part of Figure 7.7 is again shown with a few additional variables. The lower opening of the tube is at position x, y, z where z is the height
above some fixed horizontal plane, possibly sea level. The height h of the water column above (x, y, z) corresponds with a hydrostatic pressure p = ρgh at (x, y, z), where ρ is the density of the water.

Figure 7.8 On the left-hand side the definition of the hydraulic potential is sketched. On the right-hand side the forces used in the derivation of Darcy's equations are displayed.

The quantity

φ = z + h = z + p/(ρg)   (7.93)
is defined as the hydraulic potential or the groundwater head at location (x, y, z). From Figure 7.8 it follows that it is precisely the distance from the top of the water in the tube to the z = 0 level. Note from Figure 7.7 that the groundwater head for the two aquifers is different. So φ may be a function of both the horizontal and the vertical position of the end of the tube.

7.3.2 Darcy's Equations
Groundwater flow is described by Darcy's equations or Darcy's Law, named after the French engineer Henry Darcy. He based these relations on experiments with groundwater percolation through filter beds in connection with the design of the Dijon water supply in 1856. These equations may be derived as follows ([9]–[11]). Consider a steady flow of incompressible groundwater without change of water storage in the soil. Take a volume element dτ = dxdydz with the z-direction of the Cartesian coordinate system pointing upwards. The pressure in the water is indicated by p(x, y, z). The total force per unit of volume on the volume element is F [N m⁻³]. Then one has for the z-component (cf. Figure 7.8, right)

Fz dτ = −ρg dτ + p(x, y, z)dxdy − p(x, y, z + dz)dxdy + fz dτ
      = −ρg dτ − (∂p/∂z)dxdydz + fz dτ = (−ρg − ∂p/∂z + fz)dτ   (7.94)
Here, ρ is the water density, g is the gravitational acceleration and fz is the z-component of the friction force f per unit volume. In the x- and y-directions the gravity term on the right-hand side is absent and Eq. (7.94) simplifies to

Fx dτ = (−∂p/∂x + fx)dτ   (7.95)
Fy dτ = (−∂p/∂y + fy)dτ   (7.96)
The friction force f is approximated by assuming

f = −(μ/κ)q   (7.97)

where q is the specific discharge vector (7.92). This friction force is different from the one in Eq. (3.41). That friction is caused by fluid particles rubbing against each other. In Eq. (7.97) friction is caused by interaction of the fluid with the narrow pores (cf. Figure 7.6, right), where proportionality is a good approximation. The constant μ in Eq. (7.97) is the dynamic viscosity of the fluid, which we met in Eq. (3.41), and κ is called the permeability of the medium, a soil parameter, usually increasing with the width of the pores. Using Eq. (7.97) one finds from Eq. (7.94)

Fz = −ρg − ∂p/∂z − (μ/κ)qz   (7.98)

and similar equations in the x- and y-directions, but without the gravity term. The left-hand side of these equations is taken as zero, as the accelerations of groundwater are negligible. Eqs. (7.95), (7.96) and (7.98) lead to

∂p/∂x + (μ/κ)qx = 0   (7.99)
∂p/∂y + (μ/κ)qy = 0   (7.100)
∂p/∂z + (μ/κ)qz + ρg = 0   (7.101)

These equations may be further simplified by using the hydraulic potential (7.93) with φ = z + p/(ρg) and assuming that the water density ρ is constant. The three equations (7.99) to (7.101) may then be summarized by

q = −(κρg/μ) grad φ   (7.102)

We will check this for the z-component of Eq. (7.102). This gives

qz = −(κρg/μ) ∂φ/∂z = −(κρg/μ)[1 + (1/ρg) ∂p/∂z] = −κρg/μ − (κ/μ) ∂p/∂z   (7.103)

which is equivalent to (7.101) if one multiplies by μ/κ. The student should check the other components of Eq. (7.102). Darcy deduced Eq. (7.102) empirically and found values for the hydraulic conductivity

k = κρg/μ   (7.104)
This leads to Darcy's equations or Darcy's law

q = −k grad φ   (7.105)
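Darcy's law lends itself to quick order-of-magnitude estimates; a sketch in which the permeability, head gradient and porosity are assumed, representative values of the kind quoted in Table 7.2:

```python
# Order-of-magnitude estimate of groundwater velocity from Darcy's law.
# All numerical inputs are assumed, representative values.
rho, g, mu = 1000.0, 9.81, 1.0e-3  # water: density, gravity, viscosity [Pa s]
kappa = 1.0e-10                     # permeability [m^2], sand-gravel mix (assumed)

k = kappa * rho * g / mu            # hydraulic conductivity, Eq. (7.104): ~1e-3 m/s

grad_phi = 2.0e-3                   # typical magnitude of the head gradient
q = k * grad_phi                    # specific discharge |q|, Eq. (7.105)
n = 0.3                             # porosity (Table 7.2, assumed)
u = q / n                           # pore velocity, Eq. (7.92)
u_per_year = u * 3600 * 24 * 365    # of order 10^2 m/yr
```

With these inputs the pore velocity comes out at roughly 200 [m yr⁻¹], the same order of magnitude as the chromium plume of Figure 7.6.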
The derivation of Eq. (7.105) leads to the following comments:
(1) The assumption that the density ρ is constant is not trivial. Change in salinity with position may well occur in practice. This effect is ignored.

(2) The hydraulic conductivity k is proportional to κ, the permeability of the medium. It is therefore strongly dependent on the size of the pores. Some empirical numbers are given in Table 7.2.

(3) Except near wells and sinks, the horizontal components of the water discharge q (or velocity u) are much bigger than the vertical qz (or uz). Assume that the vertical discharge vanishes: qz = 0. From Eq. (7.105) it follows that ∂φ/∂z = 0 and the groundwater head φ will be independent of the depth to which the test tube is lowered. By decreasing the depth (−z) one will reach the situation where p = 0, that is, the local pressure is precisely the atmospheric pressure. These positions as a function of (x, y) define the so-called phreatic surface. This surface need not be horizontal, for on a nonlocal scale qz will not vanish completely. In the absence of a vertical flow component the phreatic surface is found by pushing a test tube to some depth, as on the left of Figure 7.8; one deduces the height z for which p = 0 from Eq. (7.93). In practice there may be capillary suction that pulls the water through narrow pores to above the phreatic surface. This results in an unsaturated zone in the soil. The imprecisely defined boundary between saturated and unsaturated soil is called the groundwater table. The water level in shallow open tubes, or in wells or drains, will represent the phreatic surface, which therefore is the more physical concept.

(4) The derivation of Eq. (7.105) assumed an isotropic medium. If the soil were anisotropic one usually generalizes the relation between q and grad φ into a tensor relation

q = −k · grad φ   (7.106)

with a 3 × 3 tensor k. This may happen in a layered soil where the horizontal permeability κ is larger than the vertical one. It can be shown that the tensor k is symmetrical. It can therefore be brought into diagonal form by a local rotation of the coordinate system. Eq. (7.105) then reduces to three equations, each with its own hydraulic conductivity. For a slowly varying layer structure one axis will be vertical, with conductivity kv; the two horizontal ones will be equal and are indicated by kh.

(5) In most cases |grad φ| is not bigger than 2 × 10⁻³. For a mixture of sand and gravel, like on the left in Figure 7.6, one might guess from Table 7.2 that k ≈ 10⁻³ [m s⁻¹]. From Eqs. (7.105) and (7.92) one would deduce a velocity of groundwater of 30 [cm day⁻¹] or 120 [m year⁻¹]. For sand or silt the velocities will be much smaller.

(6) Turbulence was ignored, but because the velocities are so small this seems well justified. This may be checked later when the Reynolds number has been discussed.

7.3.2.1 Vertical Flow in the Unsaturated Zone
The unsaturated soil near the surface is of essential importance to life. In a moist environment oxygen gives rise to many chemical reactions and a variety of microorganisms influence the
chemical and biological composition of the soil. Therefore we mention here a few aspects of the vertical water flow in this zone, although in the rest of the book we shall ignore it. The water in the unsaturated zone is called soil moisture; it exerts a pressure which is negative with respect to the atmospheric pressure. Without flow, the capillary suction at a point above the phreatic surface is equal to minus the height above that surface in [m] water pressure. More precisely, the suction head is expressed as ψ = p/(ρg) [m], where the pressure p is considered to be negative with respect to atmospheric pressure. Let us restrict ourselves to vertical flow only, where the groundwater head φ is a function of z only: φ = φ(z). Then Darcy's equations (7.105) give

qz = −k d(ψ + z)/dz = −k (dψ/dz + 1)   (7.107)
Especially in the unsaturated zone, the hydraulic conductivity k is a function of position and time because of the changing moisture content. The flow is upwards or downwards, depending on the derivative of the suction head ψ. The limiting case (no flow) happens when

dψ/dz = −1   (7.108)
For more negative values the flow qz will be positive, so upwards, which describes evaporation. For less negative values the flow will be downwards, which corresponds with infiltration after rain. If the suction vanishes, the derivative will go to zero and Eq. (7.107) reduces to qz = −k0, where k0 approaches the saturated hydraulic conductivity.

7.3.2.2 Conservation of Mass
The flux of groundwater is the mass [kg] which passes 1 [m²] perpendicular to the average flow in 1 [s]. It may be written as

F = ρq   (7.109)

for the specific discharge vector q was defined (above Eq. (7.92)) as the volume in [m³] passing the same [m²] in the same time. The derivation of the equation of continuity (7.4) follows again from the conservation of mass in any fixed volume V. In the present case the equation of continuity reads

div F + ∂ρ/∂t = 0   (7.110)
div(ρq) + ∂ρ/∂t = 0   (7.111)

For a steady flow the density is constant in time; in most applications variation of ρ with position may be ignored as well. This gives

div q = 0   (7.112)
If the hydraulic conductivity defined in Eq. (7.104) is independent of position, substitution of Eq. (7.105) into Eq. (7.112) leads to the Laplace equation

div grad φ = ∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z² = 0   (7.113)
This equation is discussed in any undergraduate course on electromagnetism. All properties of the equation discussed there still hold. One may even take the similarity further by introducing point wells and point sinks, analogous to positive and negative charges in a Coulomb field, and use the method of images. As a matter of fact, the mathematical equation (7.113) together with its boundary conditions gives unique solutions. It is the physicist who should give the interpretation and be aware of the assumptions underlying the equations.

7.3.3 Stationary Applications
A few relevant examples should illustrate the application of Darcy's equations (7.105). Assume that

(1) the ground is isotropic and saturated with water;
(2) the flow picture is independent of time, that is, stationary;
(3) the flow essentially occurs parallel to the vertical x, z-plane, that is, it is independent of the y-coordinate: φ = φ(x, z).

Darcy's equations (7.105) then read

qx = −k ∂φ/∂x   (7.114)
qz = −k ∂φ/∂z   (7.115)

7.3.3.1 Vertical Flow
As a first example we restrict ourselves to vertical flow with φ = φ(z). Consider the situation depicted in Figure 7.9. There are two horizontal layers. The top layer is only a little permeable, for example clay, and the lower one forms an aquifer, for example consisting of sand. The sand has a positive groundwater head φ = h, shown on the left in Figure 7.9a. This may be caused by the weight of the top layer. It may also refer to a building excavation where the local aquifer experiences pressure from the higher layers outside the excavation. The top of the top layer is defined as the surface z = 0. The top layer in this example is saturated and its groundwater head φ precisely reaches the surface z = 0; therefore φ(0) = 0. There will be a vertical flow qz = q0 of unknown magnitude in the top layer. For the bottom layer with its much greater pores the flow is ignored. Let us first look at the top layer. From qx = 0 and Eq. (7.114) it follows that φ is independent of x. Substituting qz = q0 on the left of Eq. (7.115) gives

φ = −(q0/k)z + constant   (7.116)
Figure 7.9 Vertical groundwater flow. On the left, in (a), one observes two layers. The lower layer is under pressure from the top layer or from the surroundings, resulting in a groundwater head rising above the surface z = 0. On the right, in (b), the groundwater pressure and the internal grain pressure σ′ are indicated as functions of depth. Note that the total pressure ptotal = p + σ′. If σ′ becomes zero or negative at position Q one has quicksand.
As φ(0) = 0 for the top layer, one finds for that layer

φ = −q0 z/k   (7.117)

The groundwater pressure in the top layer follows from Eq. (7.93):

p(z) = ρg(φ − z) = −ρgz (q0/k + 1)   (7.118)

We now look at the lower layer. The pressure is just hydrostatic, as in the lower layer flows are ignored. The hydrostatic equation (3.2) then gives

p(z) = ρg(h − z)   (7.119)
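Matching (7.118) and (7.119) at the clay/sand interface determines q0; a sketch with an assumed clay thickness D and head h (neither is specified in the text). Setting z = −D in both expressions and equating gives q0 = kh/D:

```python
# Fit the pressure curves (7.118) and (7.119) at the interface z = -D.
# rho*g*D*(q0/k + 1) = rho*g*(h + D)  =>  q0 = k*h/D
rho, g = 1000.0, 9.81
k = 1.0e-8   # hydraulic conductivity of the clay [m/s] (Table 7.2 range)
D = 4.0      # thickness of the clay layer [m], assumed
h = 1.0      # groundwater head of the sand aquifer [m], assumed

q0 = k * h / D  # upward seepage through the clay: 2.5e-9 m/s

p_clay = rho * g * D * (q0 / k + 1.0)  # pressure at z = -D from Eq. (7.118)
p_sand = rho * g * (h + D)             # hydrostatic value from Eq. (7.119)
```

Even a modest head of 1 [m] thus drives only a few nanometres per second through the clay, illustrating how effectively such a layer seals the aquifer.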
The curve p(z) is drawn in Figure 7.9b in such a way that the pressure joins continuously at the interface of both layers, indicated by Q. As h can be measured, the discharge q0 follows from the fit of both curves at point Q. One should realize that p(z) is the groundwater pressure in the pore system. One could also draw the total pressure ptotal(z) as a function of depth z: this pressure at a certain z is the weight of the wet soil column on top of it. Assuming for simplicity that the density of wet soil is a constant ρw, independent of the layer, one finds for the total pressure, with z negative below the surface,

ptotal(z) = −ρw gz   (7.120)
This is drawn in Figure 7.9b as well. The total pressure ptotal(z) may be interpreted as the sum of the water pressure p(z) and the intergranular soil pressure σ, as indicated in Figure 7.9b. From Eq. (7.119) it follows that the derivative dp(z)/dz in the lower sand layer equals (−ρg) and is therefore constant. When for any reason the groundwater head h increases,
Figure 7.10 Flow underneath a wall at x = 0 extending upwards from z = 0. At the right of the wall the groundwater head equals H; at the left it equals zero. Lines of constant potential Φ = kφ and the perpendicular streamlines are indicated. The figure on the right gives the deviating definition of x = arctan y, which was used here.
the slope dp(z)/dz will remain the same, the curve p(z) in the lower layer will move parallel to the right, and therefore the point Q in Figure 7.9b would also move to the right. If the groundwater head keeps increasing, eventually the soil pressure σ in the clay will vanish and the clay will burst. In the limiting case that the intergranular soil pressure almost vanishes, the soil can no longer bear a weight: quicksand.

7.3.3.2 Flow Underneath a Wall
Consider a very thick permeable layer with hydraulic conductivity k below the horizontal plane z = 0, sketched in Figure 7.10. At x = 0 there is a vertical wall from z = 0 upwards (the y, z-plane), separating a region with a groundwater head φ(x < 0, z = 0) = 0 on the left from a region with φ(x > 0, z = 0) = H on the right. A practical example might be the effort to keep polluted soil to the right of a wall. The weight of the soil then causes a higher groundwater head in the layer underneath. Figure 7.10 illustrates the extreme situation where no other boundaries are present. Without derivation it will be shown that the solution of the Laplace equation (7.113) in this case is

φ = (H/π) arctan(z/x)   (7.121)
Here the function y = tan x is defined as usual and shown on the right of Figure 7.10. Because of the periodicity of the function its inverse is not uniquely defined. We use that freedom to define a unique inverse by taking the x-interval between x = 0 and x = π instead of the usual choice between x = −π/2 and x = +π/2. So for y running from y = −∞ to y = 0 the function x = arctan y runs from x = π/2 to x = π. At y = 0 there is a discontinuity, and if y runs from y = 0 to y = ∞ the function x = arctan y runs from x = 0 to x = π/2 in the usual way. For the derivatives of φ the chosen convention does not make a difference. The student should check that Eq. (7.121) satisfies Laplace's equation (7.113). Next, study the boundary conditions of Figure 7.10. For x < 0 and z approaching zero from below, the argument of arctan goes to the limit zero from the positive side, so φ goes to
the limit φ → (H/π) arctan(+0) = 0. For x > 0 and z again approaching zero from below, the argument of arctan goes to zero from the negative side: φ → (H/π) arctan(−0) = (H/π)·π = H. So, the boundary conditions are satisfied with the unconventional definition of arctan. Substituting (7.121) in Darcy's Eqs. (7.114) and (7.115) gives

qx = +(kH/π) z/(x² + z²)   (7.122)

qz = −(kH/π) x/(x² + z²)   (7.123)

For the top of the permeable layer, z = 0, it follows that

qz = −kH/(πx)   (7.124)
Therefore, for x > 0 one has qz < 0 and the water goes down, and for x < 0 it comes up. The amount of water Q that leaves the reservoir at the right in the region a < x < b, per unit length in the y-direction, is found by integration of Eq. (7.124):

Q = −∫(a→b) qz dx = (kH/π) ln(b/a)   (7.125)
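A numerical cross-check of Eqs. (7.124) and (7.125) is straightforward; the values of k, H and the strip a < x < b below are assumed for illustration only. The downward seepage is obtained by integrating −qz, which is positive for x > 0:

```python
import math

# Hedged sketch of the seepage under the wall of Figure 7.10.
k, H = 1.0e-5, 4.0     # conductivity [m s-1] and head difference [m] (assumed)
a, b = 1.0, 10.0       # strip on the right of the wall [m] (assumed)

def qz(x):
    """Vertical discharge at the surface z = 0, Eq. (7.124)."""
    return -k * H / (math.pi * x)

# Midpoint-rule integration of -qz over a < x < b, compared with the
# closed form (kH/pi) ln(b/a) of Eq. (7.125):
n = 20_000
dx = (b - a) / n
Q_num = sum(-qz(a + (i + 0.5) * dx) for i in range(n)) * dx
Q_ana = k * H / math.pi * math.log(b / a)
print(Q_num, Q_ana)    # both about 2.93e-5 [m2 s-1]
```

The agreement confirms the logarithmic dependence on the interval limits, and it also makes visible why the seepage diverges as a → 0.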
It is clear that for a = 0 or b = ∞ the integral Q becomes infinite. At x = a = 0 this originates from the discontinuity in the potential function (7.121). For numerical calculations one could take a smoother function φ or make a fit with a > 0.

7.3.3.3 The Method of Complex Variables
A general method of tackling complicated problems is found in the method of complex variables. We start by introducing a function

Φ = kφ   (7.126)
For constant k, Darcy's equations (7.105) now become even simpler:

q = −grad Φ   (7.127)
In order to apply complex functions one has to stick to two dimensions, either x and z, as above, or x and y when we consider horizontal groundwater flow. Because Eq. (7.102) holds for all three variables x, y, z it also holds for any two of them; consequently the mathematics for both cases is the same. As we want to keep the symbol z for a complex number, we take x and y as our coordinates, although one of them may indicate a vertical dimension. Eq. (7.127) suggests that Φ be interpreted as a two-dimensional potential. Lines where Φ = constant would then be equipotential curves. The discharge vector q is everywhere perpendicular to these curves (Appendix B, Figure B.5).
Another function of importance is the stream function Ψ. This function is introduced by returning to conservation of mass, which for a constant density led to Eq. (7.112). In two dimensions this gives

∂qx/∂x + ∂qy/∂y = 0   (7.128)

This equation holds for all (x, y). Mathematics textbooks prove that in this case there exists a function Ψ with the property that

qx = −∂Ψ/∂y   (7.129)

qy = +∂Ψ/∂x   (7.130)

It follows that the vector q = (−∂Ψ/∂y, ∂Ψ/∂x) is perpendicular to the vector (∂Ψ/∂x, ∂Ψ/∂y), as their scalar product vanishes. The vector (∂Ψ/∂x, ∂Ψ/∂y) may be written as grad Ψ, which is perpendicular to curves with Ψ = constant. Consequently these curves have the same direction as the flow vector q. The curves with Ψ = constant represent the streamlines of the flow. The vector q is perpendicular to curves with Φ = constant; therefore curves with Φ = constant and Ψ = constant are perpendicular everywhere. In Figure 7.10, describing the flow underneath a wall, curves with Φ = constant and the perpendicular streamlines are indicated. The student should check that Ψ also obeys the Laplace equation.
The stream function Ψ relates to the discharge Q. Consider Figure 7.11 and look at points A and B with an auxiliary point C such that xC = xA; yC = yB. In Figure 7.11 the discharges QAC, QCB and QAB through the corresponding lines are indicated. Let H be the thickness of the layer through which the groundwater flows, measured perpendicular to the x, y-plane. Then from Eq. (7.130) the inflow into ABC becomes QAC + QCB with
QAC = H ∫(A→C) qx dy = −H ∫(A→C) (∂Ψ/∂y) dy = H(ΨA − ΨC)   (7.131)

QCB = −H ∫(C→B) qy dx = H(ΨC − ΨB)   (7.132)

Figure 7.11 Interpretation of the stream function Ψ.
The minus sign in the second equation corresponds to the (arbitrary) drawing in Figure 7.11, where the flow is in the negative y-direction. From conservation of mass without accumulation it follows that the outflow equals the inflow:

QAB = QAC + QCB = H(ΨA − ΨB)   (7.133)

where Eqs. (7.131) and (7.132) were used. From Eq. (7.133) it follows that the difference in stream function between two points A and B represents the amount of water flowing per unit thickness (H = 1 [m]) through any line connecting both points.
By combining Eqs. (7.127), (7.129) and (7.130) we find

∂Φ/∂x = ∂Ψ/∂y   (7.134)

∂Φ/∂y = −∂Ψ/∂x   (7.135)

These relations may be recognized as the Cauchy–Riemann equations, which are 'necessary and sufficient' for the existence of an analytical function

Ω = Φ + iΨ   (7.136)

of a complex variable z = x + iy. The aim of the calculations then is to deduce the functions Φ(x, y) and Ψ(x, y) in a water-carrying layer. Mathematics shows that these functions are determined from their values on a boundary, which are specified from the physics of the problem. Streamlines, for example, can only start at a source or at a water surface and can only end at a sink or at another water surface. Before the advent of the personal computer a lot of attention was paid to analytical methods to find Ω(z), comprising both Φ(x, y) and Ψ(x, y). This was done by the method of conformal mapping, by which complicated boundaries were transformed into simple, for example rectangular, ones. At present these methods are replaced by numerical solutions of the differential equations. Then it is also easy to take into account variations in density ρ and conductivity k with position and solve a complete three-dimensional problem. Analytical methods, though, still keep their value in giving a first approximation and in getting a feeling for what the numerical solutions should look like. A few examples will be given below.

7.3.4 Dupuit Approximation
In the third comment below Eq. (7.105) the concept of groundwater head was explained by assuming that the main groundwater flow q is horizontal. If this were strictly true, the groundwater head φ would be a function of the horizontal components only: φ = φ(x, y). For in that case qz = 0 and consequently ∂φ/∂z = 0 everywhere (cf. Eq. (7.105)), from which it follows that φ is independent of z. This simplification is called the Dupuit approximation. Dupuit’s approximation will not hold in the case of Figure 7.9, where two layers were depicted with a different groundwater head. Neither will it apply near the wall in Figure 7.10 where strong vertical components of the discharge are expected (cf. Eq. (7.124)). The approximation holds well, however, for a broad class of problems connected with unconfined aquifers. As mentioned before, in these water-carrying layers the top surface of
Figure 7.12 Horizontal flow through an unconfined aquifer with groundwater head h(x, y). By its pores the aquifer is in open connection with the atmosphere. The rectangle on the right is drawn perpendicular to the flow q.
the groundwater is, via pores, in open connection with the atmosphere (Figure 7.7). As remarked earlier, at this phreatic surface the groundwater pressure equals the atmospheric pressure, or vanishes when one only describes deviations from atmospheric pressure. Unconfined aquifers are the topmost water-carrying layer. It is assumed that their bottom is formed by an impervious or semi-impervious layer. It is the unconfined aquifer where groundwater is in closest contact with the human environment. The equations that govern its flow may be derived by looking at Figure 7.12. The top of the impervious layer is taken as z = 0 and the groundwater head is denoted as h(x, y) instead of the usual φ(x, y). The total mass flow through the layer in a strip of unit width and height h perpendicular to the flow q is found from Eq. (7.105) and Figure 7.12:

ρhq = −kρh grad h   (7.137)
Conservation of mass, the assumption of time independence and Eq. (7.111) lead to

div(ρhq) = 0   (7.138)

or, assuming that ρ and k are independent of position,

∂/∂x (h ∂h/∂x) + ∂/∂y (h ∂h/∂y) = 0   (7.139)

or

∂²h²/∂x² + ∂²h²/∂y² = 0   (7.140)
This is a Laplace equation for h² (note the exponent!), whereas Eq. (7.113) applies to φ = h. The difference is related to the fact that Figure 7.12 refers to the total horizontal movement through a vertical cross section. Of course, Eq. (7.113) is more general and should hold as well. That equation may take into account vertical motion, which was ignored in Eq. (7.140).

Figure 7.13 One-dimensional flow through a linear wall (left) or a circular wall (right). On the left both water levels are connected by a parabola as a function of x. The parabola underneath indicates a streamline. The curves on the right have a more complicated dependence on r.

7.3.4.1 Aquifer between Two Canals
Consider two parallel canals with an aquifer (soil) between them, all on the same impervious layer. This problem can be described in one dimension by h = h(x) with h(0) = h1 and h(L) = h2. This is illustrated on the left of Figure 7.13. Eq. (7.140) simplifies to

∂²h²/∂x² = 0   (7.141)

with solution

h² = h1² − (h1² − h2²) x/L   (7.142)
In Figure 7.13 (left) the function h(x) will become a parabola connecting both water levels. The discharge per unit width over the total water column at position x becomes

hq = −kh dh/dx = −(k/2) dh²/dx = (k/2L)(h1² − h2²)   (7.143)
which is independent of x because of conservation of mass. It is obvious from Figure 7.13 (left) that there is a vertical component of the groundwater velocity. Equation (7.112) may now be used to estimate whether the vertical discharges qz indeed are small, taking Eq. (7.142) as a first approximation. Eq. (7.112) in the present case becomes

∂qx/∂x + ∂qz/∂z = 0   (7.144)

qz = −∫(0→z) (∂qx/∂x) dz = ∫(0→z) k (∂²h/∂x²) dz = kz ∂²h/∂x²   (7.145)
With Eqs. (7.141) and (7.142) and some algebra this leads to

qz = −zq²/(kh)   (7.146)
These negative (i.e. downward) velocities are highest at the groundwater table, where z = h. In reality the top streamlines finish on the right a little above height h2, resulting in a seepage face.

7.3.4.2 A Circular Pond
More relevant to the environment than the situation sketched on the left of Figure 7.13 is perhaps the pond depicted on the right of this figure. A pond with radius r1 is surrounded by a soil wall with outer radius r2. The groundwater level on the outside (h2) is lower than on the inside (h1). The discharge of possibly polluted water is calculated with Dupuit's equation (7.140) in cylindrical coordinates (B19), omitting the irrelevant z-variable:

(1/r) ∂/∂r (r ∂h²/∂r) + (1/r²) ∂²h²/∂ϕ² = 0   (7.147)

We assume circular symmetry, by which we get rid of the ϕ-variable, and find

h²(r) = h1² − (h1² − h2²) (ln r − ln r1)/(ln r2 − ln r1)   (7.148)
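Eq. (7.148) is easily evaluated numerically. In the sketch below the pond dimensions and heads are assumed illustration values; the check confirms that the head falls from h1 at r1 to h2 at r2:

```python
import math

# Hedged evaluation of Eq. (7.148) for an assumed circular pond.
r1, r2 = 10.0, 15.0      # inner and outer radius of the soil wall [m] (assumed)
h1, h2 = 4.0, 2.5        # groundwater heads inside and outside [m] (assumed)

def h(r):
    """Groundwater head within the wall, Eq. (7.148)."""
    hsq = h1**2 - (h1**2 - h2**2) * (math.log(r) - math.log(r1)) / (
        math.log(r2) - math.log(r1))
    return math.sqrt(hsq)

# Boundary heads are reproduced; h declines monotonically with r:
print(h(r1), h(r2), h(12.5))
```

The logarithmic dependence on r, rather than the linear one of the two-canal case, reflects the cylindrical geometry.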
The discharge vector over the total height and a width of 1 [m] becomes hq = −kh grad h and is in the radial direction, which gives

hqr = −kh ∂h/∂r = k(h1² − h2²)/(2r ln(r2/r1))   (7.149)
The total discharge Q = 2πr hqr becomes

Q = πk (h1² − h2²)/ln(r2/r1)   (7.150)
which is independent of r because of conservation of mass.

7.3.5 Simple Flow in a Confined Aquifer
A confined aquifer is a saturated water-carrying layer confined between two impervious layers. Assume that the layer is horizontal with uniform thickness H. It is not appropriate to apply Eq. (7.140), as the layer is saturated. The groundwater head φ then normally will be higher than the top of the layer and will be indicated by φ = h(x, y), so the head h no longer measures the mass of the moving water. Following the discussion earlier in this section one introduces a potential function Φ = kh(x, y) and a stream function Ψ(x, y) to describe the flow. We shall discuss two examples of flows in confined aquifers.

7.3.5.1 Flow around a Source or Sink
The first example is that of a source or sink in the layer. The source or sink is assumed to be a cylindrical, vertical hole of radius R ≤ r1. In the layer itself, at r = r1 the groundwater head will be φ = h1 and a large distance away, at r = r2, it will be φ = h2. One may distinguish two cases. In the first one (h1 > h2) water is put into the cylinder and groundwater will flow from the cylinder to its surroundings (a sink). In the second case (h1 < h2) water is
taken out of the cylinder and groundwater will flow from the surroundings to the cylinder (a source, well or spring). Cylindrical symmetry leads to a simplified Laplace equation (B19) within the layer r1 < r < r2:

(1/r) ∂/∂r (r ∂Φ/∂r) = 0   (7.151)

Similar to the derivation of Eq. (7.148) one now finds

Φ(r) = kh1 − k(h1 − h2) (ln r − ln r1)/(ln r2 − ln r1)   (7.152)
The radial discharge qr becomes

qr = −∂Φ/∂r = k(h1 − h2)/(r (ln r2 − ln r1))   (7.153)
The total seepage Q from outside to inside at distance r is found by

Q = −2πrH qr = −2πkH(h1 − h2)/(ln r2 − ln r1)   (7.154)
We notice that Eq. (7.154) reduces to Eq. (7.150) when H = (h1 + h2)/2, which should not mislead us, as the physical situation is different. Eq. (7.154) is the basic equation for pump tests, applied to determine the so-called transmissivity kH from the seepage Q and the measured drawdown (h1 − h2). The freedom of choice in the position of the z = 0 plane is used by taking h2 = 0 at some large radius r2. From Eq. (7.152) it follows that then Φ(r2) = 0. We put h2 = 0 in Eqs. (7.152) and (7.154) and find

Φ(r) = (Q/2πH) ln(r/r2)   (7.155)
which indeed is consistent with Φ(r2) = 0. Looking at Eqs. (7.153) and (7.154) one notices that for Q < 0 one has qr > 0, so the water flows outwards (a sink), and for Q > 0 the water flows inwards (a well, source or spring). The equipotential lines Φ = constant are circles in the x, y-plane around x = y = 0 and the streamlines Ψ = constant must be radial lines. In polar coordinates they will read

Ψ = c1 θ   (7.156)
The constant c1 is found by realizing that the difference dΨ = Ψ(r, θ + dθ) − Ψ(r, θ) must be the mass flow, for a unit thickness, through the circle segment joining them (cf. Eq. (7.133)). Hence

dΨ = r dθ (−qr) = r dθ ∂Φ/∂r   (7.157)

→ (1/r) ∂Ψ/∂θ = ∂Φ/∂r   (7.158)

Eq. (7.158) will hold in general. With Ψ = c1θ and Eq. (7.155) this leads to

Ψ = (Q/2πH) θ   (7.159)
The complex function Ω = Φ + iΨ which was introduced in Eq. (7.136) becomes

Ω = Φ + iΨ = (Q/2πH)(ln r + iθ) − (Q/2πH) ln r2   (7.160)

= (Q/2πH) ln z − (Q/2πH) ln r2   (7.161)
where we used z = x + iy = r e^iθ. The advantage of working with the complex function Ω is that it helps to calculate more complicated problems. We notice that the Laplace equation is linear in Ω. This implies that for a complicated problem with, for example, many sources and sinks the sum of the corresponding functions Ω is again a solution of Laplace's equation. This is the superposition principle. So, from the resulting Ω one may calculate the flow pattern using Eq. (7.127).

7.3.5.2 Source or Sink in Uniform Flow
As an example of applying the superposition principle, consider a source or sink in a uniform flow with discharge U in the negative x-direction, sketched in Figure 7.14. For the uniform flow one has

q = −U ex = −grad Φ = −(∂Φ/∂x) ex − (∂Φ/∂y) ey   (7.162)
where ex and ey are the unit vectors in the positive x- and y-directions, respectively. From the identities in Eq. (7.162) it follows that Φ = Ux and therefore (cf. Eq. (7.110)) Ψ = Uy; for a uniform flow

Ω = Φ + iΨ = U(x + iy) = Uz   (7.163)
Figure 7.14 Flow diagram for a well (Q > 0) in the origin and a uniform flow with discharge U in the negative x-direction. The point of stagnation S is given where qx = 0.
For an extra source or sink one should add Eq. (7.161) to this field, which leads to the complex function describing the source or sink in a uniform flow:

Ω = (Q/2πH) ln z + Uz − (Q/2πH) ln r2 = Φ + iΨ   (7.164)
From this equation it follows that

Φ = Ux + (Q/4πH) ln((x² + y²)/r2²)   (7.165)

Ψ = Uy + (Q/2πH) arctan(y/x)   (7.166)
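The stagnation point S of Figure 7.14 can be located from Eq. (7.165): on the x-axis, qx = −∂Φ/∂x vanishes where U + (Q/2πH)/x = 0, that is, at x = −Q/(2πHU). A hedged numerical sketch, with assumed values of U, Q, H and r2:

```python
import math

# Hedged sketch of the superposed field, Eqs. (7.165)-(7.166): a well
# (Q > 0) at the origin inside a uniform flow U in the negative x-direction.
U, Q, H, r2 = 1.0e-5, 2.0e-4, 10.0, 100.0   # assumed illustration values

def phi(x, y):
    """Potential Phi of Eq. (7.165)."""
    return U * x + Q / (4.0 * math.pi * H) * math.log((x * x + y * y) / r2**2)

def qx(x, y, eps=1.0e-6):
    """x-component of the discharge, qx = -dPhi/dx, by central differences."""
    return -(phi(x + eps, y) - phi(x - eps, y)) / (2.0 * eps)

# On the x-axis qx vanishes at the stagnation point S:
x_s = -Q / (2.0 * math.pi * H * U)
print(x_s, qx(x_s, 0.0))   # qx is (numerically) zero at S
```

With these assumed values x_s = −1/π [m]; water captured by the well and water passing by are separated by the streamline through S.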
The lines Ψ = constant are the streamlines depicted in Figure 7.14. The student should appreciate the power of the method. One not only finds the groundwater head φ = Φ/k by a simple addition, but the streamlines as well, as a 'free' extra.

7.3.6 Time Dependence in a Confined Aquifer
Consider a river or a lake in connection with a confined aquifer of constant thickness H. The aquifer is approximated by a one-dimensional groundwater head φ(x, y, z) = h(x). In equilibrium the groundwater head h0 in the aquifer will be equal to the river level, as indicated in Figure 7.15. The situation is essentially one-dimensional with variable x while x = 0 denotes the interface between aquifer and river. Suppose that at time t = 0 the river level suddenly falls to h0 − s0 and remains at that level. The aquifer will empty slowly into the river and a time-dependent head h(x, t)
Figure 7.15 River in connection with an aquifer. At time t = 0 the water level in the river drops by a magnitude s0. The dashed line gives the groundwater head after a time t. After very long times the head drops to h0 − s0 everywhere.
will result. The total mass flow through 1 [m] of width and total height H of the aquifer will be

ρHq = −kρH ∇h   (7.167)
The so-called storage coefficient or storativity S of an aquifer is defined as the volume of water [m3] taken into or released from storage per [m2] of horizontal area per [m] rise or decline in groundwater head; usually S < 0.001. Therefore, the mass of water taken in for a rise dh of the groundwater head per [m2] is

ρS dh   (7.168)

Conservation of mass in the x, y-plane requires that per horizontal [m2] the outflow per second, which is div(ρHq), and the mass increase per second, ρS ∂h/∂t, together are zero:

div(ρHq) + ρS ∂h/∂t = 0   (7.169)

Assuming a constant density ρ and using Eq. (7.167) leads to

−kρH ∇²h + ρS ∂h/∂t = 0   (7.170)

In the one-dimensional case considered here we find

∂²h/∂x² = (S/kH) ∂h/∂t   (7.171)

This is precisely the one-dimensional form (4.21) of the heat equation with a = kH/S. The sudden decrease s0 in river level corresponds with the sudden change in temperature (4.25). So the solution of Eq. (7.171) corresponds with Eq. (4.27) and we write

h(x, t) = h0 − s0 + s0 erf((x/2) √(S/(kHt)))   (7.172)

For t = 0 we indeed find h = h0 and for t → ∞ we find h = h0 − s0.

7.3.7 Adsorption and Desorption of Pollutants
Pollutants move with the groundwater and crawl with the water through the pores of the soil, as shown in Figure 7.6 (right). Their interaction with the soil is summarized by the concept of sorption. This comprises absorption and adsorption. Adsorption is the attachment of particles or molecules to the surface of a solid or liquid; absorption is the uptake of particles or molecules by the total volume of a solid or liquid. Absorption and adsorption are physical processes, which go back to microscopic forces between atoms and molecules of different substances. If a chemical bond occurs, one uses the word chemisorption ([7], p. 117). Other useful concepts are desorption, the reverse of adsorption or absorption, and partitioning, the process of distribution of a contaminant between a liquid and a solid, which we already met in Section 6.4.5. The dispersion of pollutants by groundwater is called hydrodynamic dispersion. As the origin is similar to the origin of diffusion, namely collisions of particles with the
surroundings, the process is described by differential equation (7.8), which in one-dimensional form reads

∂C/∂t = D ∂²C/∂x² − u ∂C/∂x   (7.173)
The definitions are the same as in Eq. (7.8), with the concentration C [kg m−3] being the mass of pollutant per [m3] of groundwater, that is, in the pores; u is the velocity of the groundwater, but the coordinate x may be curvilinear. Also, because of the crawling of the groundwater, the coefficient D of hydrodynamic dispersion will be larger than that for molecular diffusion only; it is estimated empirically. As before, Eq. (7.173) is called the dispersion-advection equation. To describe adsorption of contaminants to the solid soil one has to add a third term to Eq. (7.173), which then becomes

∂C/∂t = D ∂²C/∂x² − u ∂C/∂x − (ρb/n) ∂S/∂t   (7.174)
Here S is the mass of the pollutant [kg] adsorbed on the solid part of the medium (i.e. the particles) per [kg] of solids; ρb is the bulk mass density [kg m−3] of the porous medium, while n is the porosity defined in Section 7.3.1. The ratio ρb/n therefore represents the bulk mass per [m3] of pore space. So Sρb/n [kg m−3] is a measure for the adsorbed mass of pollutant per [m3] of pore space, which is consistent with the dimensions of concentration C. One may check the sign in Eq. (7.174) by putting D = u = 0. The only process by which C can then increase is a decrease in the adsorbed mass S. In practice it appears that to a good approximation

S = κC   (7.175)
where κ [m3 kg−1] expresses the partitioning of pollutants between solids, represented by S, and water, represented by C. Eq. (7.174) may then be simplified to

(1 + κρb/n) ∂C/∂t = D ∂²C/∂x² − u ∂C/∂x   (7.176)
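The retardation implied by Eq. (7.176) can be illustrated numerically. The text quotes R of order 1000 for PCBs and u of about 1 [m/day]; the values of κ, ρb and n below are assumed, chosen merely to reproduce an R of that magnitude:

```python
# Hedged sketch of the retardation factor in Eq. (7.176).
kappa = 0.25         # partition coefficient [m3 kg-1] (assumed)
rho_b = 1.0e3        # bulk density of the porous medium [kg m-3] (assumed)
n = 0.25             # porosity (assumed)

R = 1.0 + kappa * rho_b / n      # factor on the left of Eq. (7.176)
u = 1.0                          # groundwater velocity [m day-1]
u_eff = u / R                    # effective velocity of the contaminant
years = 10.0 / u_eff / 365.0     # time to advect the pollutant over 10 [m]
print(R, u_eff, years)           # cleaning proceeds ~1000 times more slowly
```

Even a modest plume of 10 [m] then takes decades to flush, which quantifies the "long period of time, and high expense" mentioned below.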
This equation applies to cleaning a contaminated piece of ground by desorption. Usually some holes are then put in the ground, one in the middle of the contamination, where groundwater is pumped out, and a few around it, where clean water is put in. The contaminated water which was pumped out is cleaned and put back into the groundwater outside of the contamination. In this way the ground eventually will be cleaned. The physical transport mechanism here is advection, represented by the term u∂C/∂x in Eq. (7.174). For certain chemicals the partitioning coefficient κ may be very large, leading to a high factor R = 1 + κρb/n on the left of Eq. (7.176): for PCBs (polychlorinated biphenyls) it may be as large as R ≈ 1000. Ignoring hydrodynamic dispersion D in Eq. (7.176), this would mean that the velocity by which the ground is cleaned reduces in a first approximation from u to u/R. With a typical value u ≈ 1 [m day−1] this implies a long period of time, and high expense, to clean the ground. It should be stressed that the discussion above applies to contaminants dissolved in groundwater and possibly adsorbed in soil. Much more complicated is the motion of oil
from a leaking tank into the ground. One has to describe several phases: oil, water and possibly air. This is beyond the scope of the present text.
7.4 Mathematics of Fluid Dynamics
Fluid dynamics deals with fluids, a term comprising gases like air (aerodynamics) and liquids (hydrodynamics). A fluid is characterized by the fact that its elements very easily change shape; any force will be able to deform its volume. In the present section we discuss the basic properties of fluids and derive the fundamental Navier–Stokes differential equations ([12], [13]). We finish this section by introducing the concept of the Reynolds number and giving a brief introduction to turbulence. The objective of this section is to appreciate the difficulty in solving the basic equations of motion and the frequent need for simplifications. It thus justifies the simplified treatments of the preceding sections and introduces the next ones on turbulence. In fluids, forces do not always operate perpendicular to the surfaces on which they act. We already met this phenomenon in Section 3.3.1, where the internal friction force between two fluid elements acted along the interface between both elements. In such cases the force vector F and the normal n to the surface where the force acts will make an angle with each other. This implies that the vectors are connected by mathematical entities called tensors. We cannot escape their use in fluid dynamics, even in an elementary text. After some simplifications the tensors will disappear, and students for whom tensors are too advanced at this stage may pick up the discussion at Eq. (7.207). However, they should understand the summation convention, defined in Eq. (7.180).
7.4.1 Stress Tensor
Consider a fluid element of which the surface area is divided into small parts, which may be approximated by plane surface elements denoted by δA. By definition the vector δA = nδA points outwards, towards other fluid elements. The outer fluid exerts a force

Σ(n, r, t) δA   (7.177)

on the element under consideration. This force per unit area (a vector!) is called stress. The Greek letter Σ here does not indicate a summation, but refers to the first letter of 'surface'. Stress is not necessarily perpendicular to the surface, as internal friction may cause tangential components. Because of Newton's law, action + reaction = zero, the stresses between two adjoining fluid elements point in opposite directions. Therefore the stress Σ is an odd function of n, obtaining a minus sign when n is replaced by −n. It is a surface force, as it acts through the surface, and has dimensions [N m−2].
Consider now a volume element in the shape of a tetrahedron with three orthogonal faces (Figure 7.16) and a sloping face with surface area δA = nδA. The unit vectors along the perpendicular axes are called a, b, c and the surface areas of the three sides δA1, δA2, δA3. Its volume can be expressed as 1/3 of the height from the top to the opposite side times the surface area of that side. This gives (OS)δA/3 = (OP)δA1/3 = (OQ)δA2/3 =
Figure 7.16 Volume element with three orthogonal faces and surface areas δA1, δA2 and δA3 and a fourth plane face with surface area δA. Unit vectors a, b and c are orthogonal to the first three faces and n is orthogonal to the inclined face.
(OR)δA3/3. Also we have (OS) = (OP) cos(∠POS) = (OP)(n·a) and two similar equations. Consequently

δA1 = (n·a) δA,  δA2 = (n·b) δA,  δA3 = (n·c) δA   (7.178)

The sum of the surface forces acting on the fluid element therefore can be written as

Σ(n)δA + Σ(−a)δA1 + Σ(−b)δA2 + Σ(−c)δA3 = (Σ(n) − Σ(a)(a·n) − Σ(b)(b·n) − Σ(c)(c·n)) δA   (7.179)
We had to write Σ(−a) in the first line because (−a) is the required outward direction; in the second line the property was used that Σ is an odd function of its argument. Now take the ith component of the vector (7.179), with i = 1, 2, 3 denoting the x-, y- and z-components, respectively, and use the summation convention

a·n = a1n1 + a2n2 + a3n3 ≡ ajnj   (7.180)

which implies summing over repeated indices. We will use the convention only when it is really helpful, as in this case. For now the ith component of the surface force (7.179) becomes

(Σi(n) − (ajΣi(a) + bjΣi(b) + cjΣi(c)) nj) δA   (7.181)
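The summation convention of Eq. (7.180) is easy to make concrete in code. In the hedged sketch below the vectors and the tensor are assumed example values, and the indices run 0..2 instead of the 1..3 used in the text:

```python
# Hedged illustration of the summation convention, Eq. (7.180):
# a repeated index implies a sum, a.n = a_j n_j.
a = [1.0, 2.0, 3.0]
n = [0.0, 1.0, 0.0]

# a_j n_j with the sum over the repeated index j written out:
a_dot_n = sum(a[j] * n[j] for j in range(3))

# The same convention contracts a tensor with a vector, sigma_ij n_j,
# as in Eq. (7.183) below:
sigma = [[1.0, 0.5, 0.0],
         [0.5, 2.0, 0.0],
         [0.0, 0.0, 3.0]]
Sigma_i = [sum(sigma[i][j] * n[j] for j in range(3)) for i in range(3)]
print(a_dot_n, Sigma_i)   # -> 2.0 [0.5, 2.0, 0.0]
```

The loose index i in Sigma_i is not summed: it labels the three components of the resulting stress vector.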
The next step is to consider the equation of motion for the fluid element:

mass × acceleration = total body force + total surface forces   (7.182)

The left-hand side of this equation is proportional to the mass and hence to the volume δV; also the total body force will be proportional to δV. These forces therefore are proportional to L³ if L is a linear dimension of the element. The total surface force is proportional to the total surface area, that is, proportional to L². Let L become smaller and smaller, approaching zero. It follows that the total surface force (7.181) dominates on the right-hand side of Eq. (7.182), for the volume parts of the equation go to zero more quickly than the surface part. For small elements the total surface force (7.181) should therefore vanish identically:

Σi(n) = σij nj   (7.183)

in which one should keep in mind the summation over j. The stress tensor

σij = lim(small volume) (ajΣi(a) + bjΣi(b) + cjΣi(c))   (7.184)
is a quantity with nine components, relating the vectors Σ and n. It is called a tensor because Eq. (7.183) will hold in any coordinate system (although of course the components will depend on the system). In mathematics courses it is proven that both indices behave like a vector under coordinate transformations. The nine components of σ_ij are of course defined by means of Eq. (7.184); it is more illuminating, however, to use Eq. (7.183) as the defining equation. Thus, once one knows the stress tensor σ_ij at position r, one may deduce from Eq. (7.183) the surface force on any element at that location.

Consider a component of σ_ij in a certain coordinate system, defined by three unit vectors e_1, e_2, e_3. Take a plane surface element perpendicular to the j-direction, with normal n = e_j. According to Eq. (7.183) Σ_i = σ_ij; therefore σ_ij is the i-component of the force per unit area exerted across a plane normal to the j-direction. The diagonal components σ_11, σ_22, σ_33 are called the normal stresses and the non-diagonal elements the tangential stresses (sometimes shearing stresses). For two dimensions the forces are illustrated in Figure 7.17. Note that σ_22 δx_1 at the top is the force that the upper element exerts on the element indicated in grey. Similarly, at the bottom σ_22 δx_1 is the force that the grey element exerts on the element underneath; as we need the force exerted on the grey element, a minus sign is added. This explains the minus signs on the left and bottom sides of the element.

To proceed we introduce two tensors which have the same components in all rectangular, Cartesian coordinate systems:

(1) the Kronecker delta tensor with elements δ_ij such that δ_ij = 1 if i = j, otherwise δ_ij = 0;
(2) the alternating tensor with elements ε_ijk such that
(a) ε_ijk = 1 if the three numbers i, j, k are a positive permutation of 1, 2, 3, that is, the result of an even number of interchanges of position (for example 3, 1, 2);
(b) ε_ijk = −1 if i, j, k are a negative permutation of 1, 2, 3 (for example 1, 3, 2);
(c) ε_ijk = 0 otherwise, that is, if two or three indices are equal.
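As a small aside, both constant tensors are easy to experiment with numerically. The sketch below (an illustration, not from the book) implements δ_ij and ε_ijk in Python; the closing function checks the claim, used later in Eq. (7.186), that the vector product can be written in the summation convention as c_i = ε_ijk a_j b_k:

```python
def delta(i, j):
    """Kronecker delta: 1 if i = j, 0 otherwise."""
    return 1 if i == j else 0

def epsilon(i, j, k):
    """Alternating (Levi-Civita) tensor for indices i, j, k in {1, 2, 3}."""
    if len({i, j, k}) < 3:
        return 0  # two or three equal indices
    # sign of the permutation (i, j, k) of (1, 2, 3)
    return (j - i) * (k - i) * (k - j) // 2

def cross(a, b):
    """Vector product in the summation convention: c_i = eps_ijk a_j b_k."""
    return [sum(epsilon(i, j, k) * a[j - 1] * b[k - 1]
                for j in (1, 2, 3) for k in (1, 2, 3))
            for i in (1, 2, 3)]
```

Here epsilon(3, 1, 2) gives 1 and epsilon(1, 3, 2) gives −1, matching the positive and negative permutations quoted above, and cross([1, 0, 0], [0, 1, 0]) reproduces e_1 × e_2 = e_3.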
Dispersion of Pollutants
307
Figure 7.17 Normal and tangential forces on a rectangular fluid element of unit depth. The minus signs on the left and bottom components indicate that the vectors point in the negative direction.
Now take again a fluid element with point O inside and consider the torque τ of the surface forces around point O. For the surface element δA one has a contribution to the total torque of

τ = r × Σ δA
(7.185)
The student may check that a vector product, defined in Eq. (B6), may be written in the summation convention with the help of the alternating tensor. In the present case one gets

τ_i = ε_ijk x_j Σ_k δA = ε_ijk x_j σ_kl n_l δA
(7.186)
For the total torque one has to integrate over the complete surface of the element, indicated by the symbol ∮. The i-component of the total torque becomes

∮ ε_ijk x_j σ_kl n_l δA = ∮ ε_ijk x_j σ_kl (δA)_l   (7.187)

This is an integral around a closed surface. The integrand may be interpreted as the scalar product of a vector ω and the surface element δA, as in ω · δA; the scalar product is represented by the repeated index l. The fact that there is a loose index i does not affect this interpretation. From Gauss's divergence theorem (B16) the surface integral (7.187) is equal to a volume integral (in which the divergence summation is taken over m)

∮ ε_ijk x_j σ_kl (δA)_l = ∫ ∂/∂x_m (ε_ijk x_j σ_km) dV = ∫ ε_ijk ( δ_mj σ_km + x_j ∂σ_km/∂x_m ) dV = ∫ ε_ijk ( σ_kj + x_j ∂σ_km/∂x_m ) dV   (7.188)

We now apply a line of argument similar to the one leading to Eq. (7.183). We represent the linear dimensions of the fluid element by a length L and note that quantities such as
σ_ij, its divergence, velocity and acceleration will be independent of L. In the last part of Eq. (7.188) the term ∫ ε_ijk σ_kj dV will behave like L³ and the other one like L⁴. The torque of the body forces, not shown in Eq. (7.188), will also behave like L⁴, for it is a distance times a volume. The torque of all forces together is equal to the time derivative of the angular momentum; this behaves like distance times momentum, which again is proportional to L⁴. For L → 0 the term behaving like L³ in Eq. (7.188) will dominate, giving rise to contradictions unless it vanishes identically. Therefore

ε_ijk σ_kj = 0
(7.189)
If we write this out for i = 1 it follows that σ_23 − σ_32 = 0, with similar relations following from i = 2, 3. Therefore σ_kj = σ_jk and the stress tensor is symmetrical. This holds in any coordinate system, for we did not specify the system. Mathematics textbooks show that for a symmetrical tensor one can always find a coordinate system in which the non-diagonal elements vanish (by a so-called principal axes transformation). Then, if one considers a rectangular fluid element with planes perpendicular to the local principal axes, only normal stresses act. Note, however, that the stress tensor σ_ij may depend on position; in such a case the orientation of the principal axes would change with position as well. It is easy to show that for a fluid at rest the stress tensor is isotropic everywhere with σ_11 = σ_22 = σ_33 = σ_ii/3 (Exercise 7.18). This leads to the definition

σ_ij = −p δ_ij
(7.190)
in which p is called the static fluid pressure. This pressure p is usually positive, implying that the normal stresses are usually negative, corresponding to compression.

7.4.2 Equations of Motion
In the equations of motion for a fluid element the acceleration of that element will appear. The total derivative du/dt defined in Eq. (7.9) should therefore be used. The mass of a volume element dV with density ρ is ρ dV, so the product of mass times acceleration becomes

ρ dV du/dt
(7.191)
This should be equal to the sum of the volume forces and the surface forces acting on the element. The volume forces are represented by F, the resultant force per unit of mass. For the fluid element this gives FρdV
(7.192)
The surface forces are represented by the stress tensor σ ij . Consider a part of the surface of the fluid element: nδA. The i-component of the surface force is given by (7.183) as σij n j δ A
(7.193)
Dispersion of Pollutants
309
The i-component of the total surface force acting on the element will be the integral over the closed surface

∮ σ_ij n_j δA = ∮ σ_ij (δA)_j   (7.194)

As in the reduction of Eq. (7.187) one may apply Gauss's divergence theorem, giving

∮ σ_ij (δA)_j = ∫ ∂σ_im/∂x_m dV → (∂σ_im/∂x_m) dV   (7.195)

The last part of this equation applies to a small fluid element, for which the integral sign may be omitted. One finds that the i-component of Eq. (7.191) should be equal to the sum of the i-components of the volume force (7.192) and the surface force (7.195). This gives

ρ dV du_i/dt = F_i ρ dV + (∂σ_ij/∂x_j) dV
(7.196)
This holds for elements of any shape. Dividing by dV gives

ρ du_i/dt = F_i ρ + ∂σ_ij/∂x_j
(7.197)
One notes that surface forces only contribute to a change of momentum if the divergence of σ_ij on the second index is nonzero. Otherwise they may change the shape of the element, but not its momentum.

7.4.3 Newtonian Fluids
In order to proceed with the equations of motion (7.197) one has to know more about the stress tensor σ ij . For a fluid at rest it was found in Eq. (7.190) that σij = − pδij
(7.198)
For a moving fluid one will usually have tangential stresses, so Eq. (7.198) will not apply. It is convenient to use an expression such as (7.198) as a reference. Therefore one defines the scalar quantity

p = −(1/3) σ_ii   (7.199)

Remember that a scalar is a number, independent of the coordinate system in which one works. For a fluid at rest Eq. (7.199) reduces to (7.198). In general the sum (7.199) is still a scalar, as the summation over i establishes its invariance under rotations. One again calls p the 'pressure' at a certain point in the fluid. One may show that, more precisely, Eq. (7.199) may be interpreted as the average over all directions of the normal components of the stress tensor (Exercise 7.19). With definition (7.199) the most general expression for the stress tensor becomes

σ_ij = −p δ_ij + d_ij
(7.200)
The tensor d_ij is entirely due to the motion of the fluid and because of (7.199) it has the property that its so-called trace vanishes: d_ii = 0. As the stress tensor is symmetrical with σ_ij = σ_ji, it follows from Eq. (7.200) that d_ij is symmetrical as well: d_ij = d_ji.
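The decomposition (7.200) can be verified numerically. In this Python sketch (the numerical stress values are invented for illustration) a symmetric stress tensor is split into a pressure part −p δ_ij, with p from Eq. (7.199), and a deviatoric part d_ij:

```python
import numpy as np

# An illustrative symmetric stress tensor sigma_ij (arbitrary units)
sigma = np.array([[-2.0,  0.3,  0.1],
                  [ 0.3, -1.5,  0.2],
                  [ 0.1,  0.2, -2.5]])

p = -np.trace(sigma) / 3.0       # Eq. (7.199): p = -sigma_ii / 3
d = sigma + p * np.eye(3)        # Eq. (7.200): d_ij = sigma_ij + p delta_ij

# d_ij inherits the symmetry of sigma_ij, and its trace d_ii vanishes
```

The resulting d is symmetric, its trace vanishes to rounding error, and −p δ_ij + d_ij reproduces σ_ij exactly.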
In order to find d_ij one needs some approximations. The main one is to recognize that the non-diagonal elements of σ_ij are due to friction between adjacent fluid elements; friction only occurs between elements which have different velocities. For a fluid with a simple shearing motion in the x_1 (= x)-direction only, with the x_1-velocity increasing in the x_2 (= y)-direction, one would expect

d_12 = d_21 = μ ∂u_1/∂x_2   (7.201)

This assumption was already made by Newton, whence the name Newtonian fluids for fluids which obey Eq. (7.201). In fact, without much discussion the same approximation was made in Eq. (3.39). The factor μ is called the dynamic viscosity of the fluid; it is a measure of the local friction between fluid elements. We generalize Eq. (7.201) by assuming that d_ij will be linear in the first-order derivatives of the velocity of the fluid u_i. Then we expect

d_ij = 2μ ( e_ij − (1/3) Δ δ_ij )   (7.202)

e_ij = (1/2) ( ∂u_i/∂x_j + ∂u_j/∂x_i )   (7.203)

Δ = div u = e_ii   (7.204)

In the example of a simple shearing motion in the x-direction one has u_1 = u_1(x_2), u_2 = u_3 = 0 and Δ = 0; Eq. (7.202) indeed reduces to Eq. (7.201). If one accepts (7.203) it is obvious that the divergence term should be subtracted in Eq. (7.202), as we should stick to d_ii = 0. It may be proven ([12]) that Eqs. (7.202), (7.203) and (7.204) hold within the assumption of linearity for fluids with an intrinsic isotropic structure. For fluids consisting of long-chain molecules they may not be valid.

7.4.4 Navier-Stokes Equation
We now tackle the equation of motion (7.197) with expressions (7.200) for σ_ij and (7.202) for d_ij. We find

ρ du_i/dt = ρ F_i − ∂p/∂x_i + ∂/∂x_j [ 2μ ( e_ij − (1/3) Δ δ_ij ) ]   (7.205)

This is the famous Navier–Stokes equation. One should realize that most quantities between brackets on the right will be a function of position. Some approximations (which, however, are not always appropriate) may be useful:

(1) The viscosity μ does not depend on position. As μ is strongly dependent on the temperature, this approximation implies that temperature gradients should be small or zero.
(2) The fluid is incompressible, that is, the density ρ is constant with position and time. The equation of mass conservation (Eqs. (7.4) and (7.111))

∂ρ/∂t + div (ρu) = 0   (7.206)

then leads to div u = Δ = 0.
With these two approximations one may write Eq. (7.205) as (Exercise 7.20)

ρ du/dt = ρF − ∇p + μ∇²u   (7.207)
We note in passing that the contribution of friction to the acceleration du/dt is essentially determined by the ratio ν = μ/ρ, which is called the kinematic viscosity, to distinguish it from the dynamic viscosity μ.

7.4.4.1 Gravity Only
Let us assume that the volume forces F in Eq. (7.207) are due to gravity only. In Eq. (7.192) F was defined as the force per unit of mass. Therefore F=g
(7.208)
which by substitution into Eq. (7.207) gives

ρ du/dt = ρg − ∇p + μ∇²u   (7.209)
For a fluid at rest (u = 0), it follows from Eq. (7.209) that one may put

∇p = gρ   (7.210)

which leads to

p = p_0 + ρ g·r   (7.211)
One may check this relation by taking the z-component: ∂p/∂z = ρ ∂(g_x x + g_y y + g_z z)/∂z = ρg_z, which agrees with the z-component of Eq. (7.210). In the usual case where gravity acts downward along the z-direction, Eq. (7.211) reduces to p = p_0 − ρgz, which corresponds to the hydrostatic equation dp = −gρ dz which we already met in Eq. (3.2). For a fluid which may be moving, one defines the modified pressure P as the deviation from Eq. (7.211)

p = p_0 + ρ g·r + P   (7.212)
leading to the equation of motion with gravity as the only body force

ρ du/dt = −∇P + μ∇²u   (7.213)
It is conventional to denote the modified pressure, which is the deviation from the hydrostatic pressure, by a lower case p. We will adopt this convention in the following.

7.4.5 Reynolds Number
Let us take the i-component of Eq. (7.213). This gives, replacing P by p,

ρ du_i/dt = −∂p/∂x_i + μ∇²u_i   (7.214)
312
Environmental Physics
On the left we write out the total derivative, using Eq. (7.8), giving with the summation convention

ρ ( ∂u_i/∂t + u_j ∂u_i/∂x_j ) = −∂p/∂x_i + μ ∂²u_i/∂x_j∂x_j   (7.215)

We summarize that in deriving Eqs. (7.213) and (7.215) the following approximations were made:

(1) μ constant
(2) an incompressible fluid, so constant ρ
(3) gravity as the only volume force.

We now write Eq. (7.215) with its boundary conditions in terms of dimensionless parameters characterizing the physical problem. There will be an essential length L, for example a distance between enclosing boundaries, and some representative velocity U, for example the steady speed of a rigid boundary. We introduce new variables to describe the problem

u′ = u/U,  r′ = r/L,  t′ = tU/L,  p′ = p/(ρU²)   (7.216)

Substituting this in Eq. (7.215) leads to

∂u′_i/∂t′ + u′_j ∂u′_i/∂x′_j = −∂p′/∂x′_i + (1/Re) ∂²u′_i/∂x′_j∂x′_j   (7.217)

Re = ρLU/μ = LU/ν   (7.218)
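Dynamic similarity is easily illustrated with a short calculation. In the sketch below (illustrative numbers; the kinematic viscosities are rounded textbook values) a scale model in water is given the speed that reproduces the Reynolds number of the full-scale flow in air:

```python
def reynolds(U, L, nu):
    """Reynolds number, Eq. (7.218): Re = U L / nu."""
    return U * L / nu

nu_air = 1.5e-5    # kinematic viscosity of air [m^2/s], approximate
nu_water = 1.0e-6  # kinematic viscosity of water [m^2/s], approximate

# Full-scale flow: U = 10 m/s past a body of size L = 2 m in air
Re_full = reynolds(10.0, 2.0, nu_air)

# A 1:20 scale model (L = 0.1 m) tested in water:
# choose the towing speed so that the Reynolds numbers match
L_model = 0.1
U_model = Re_full * nu_water / L_model
```

With these numbers Re_full is about 1.3 × 10⁶ and the model must be towed at roughly 13 [m s⁻¹]; both flows are then governed by the same dimensionless equation (7.217).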
Note that using the summation convention ∂²u_i/∂x_j∂x_j in Eq. (7.215) made the change of units from Eq. (7.215) to Eq. (7.217) transparent. One may check that the Reynolds number Re and all variables appearing in Eq. (7.217) are dimensionless. This means that flows with the same boundary conditions in terms of L and U and with the same value of Re are governed by the same equation (7.217). They are dynamically similar. In practice, dynamic similarity is widely used to find models for complicated flows.

The Reynolds number may be interpreted as a measure for the ratio of the forces of inertia, on the left in (7.214), and the viscous forces on its extreme right. This ratio may be written in terms of dimensionless quantities as

( ρ du_i/dt ) / ( μ ∂²u_i/∂x_j∂x_j ) = Re × [ ( du′_i/dt′ ) / ( ∂²u′_i/∂x′_j∂x′_j ) ]   (7.219)

We presume that U and L in Eq. (7.216) were chosen so as to make the ratio of the numerator and denominator between the right-hand brackets of the order of one. Then Re
is indeed the ratio of the inertia and the viscous forces. In practice, it appears that for large Re a flow easily creates eddies; even a small instability in a laminar flow will then cause turbulence, as the viscous forces are too weak to prevent eddies from occurring or to extinguish them.

A final remark: Eq. (7.217) is highly nonlinear, for u′_j is multiplied by its own derivative. The consequence is that it is impossible to find general analytic solutions. One cannot go further than qualitative discussions or turn to numerical approximations.

7.4.5.1 Flow through a Pipe
As a concluding example we consider the problem of flow through a circular pipe. Suppose that the flow follows the direction x. In practice, one is interested in the loss of energy by internal friction. The origin of that friction is the fact that the velocity u_x along the boundary will be smaller than it is in the middle of the pipe. For a straight pipe of circular cross section the flow appears to be laminar up to Re ≈ 2000. For coiled pipes eddies start at a higher Reynolds number, depending on the radius of curvature. When that radius equals 50 times the pipe diameter, eddies start at Re ≈ 6000; when the pipe is more strongly curved, with a radius of 15 times the pipe diameter, one finds Re ≈ 7600. A curved pipe therefore stabilizes the flow, leading to smaller losses of energy.

7.4.6 Turbulence
Anybody watching a fast-flowing stream will enjoy the ever-changing picture of eddies, and anybody looking at a smoking chimney will notice the change of the plume over time. One of the striking features is that eddies appear on all scales, large and small. It therefore seems clear that these phenomena are too complicated to reproduce all at the same time by numerical calculations, and that one will have to make approximations for certain features to get reliable results for others. Even with the most sophisticated present-day computer programs one has to make assumptions that often go back to more traditional considerations ([14], [15]).

Turbulence occurs at high Re, where the influence of viscous forces is small. Observations show that large eddies transfer their kinetic energy to smaller eddies, until eventually viscosity makes the eddies disappear, while their energy is converted into heat. This energy transport process is the essence of turbulence, rather than its random appearance. This means that one may not ignore viscosity in a complete description of turbulence. Another point to be kept in mind is that turbulence is a property of the flow and not of the fluid itself. At low velocities there will be no turbulence, whereas viscosity, albeit relatively small, is always present.

In this section two traditional approaches will be followed: dimensional analysis and order-of-magnitude estimates. More advanced methods are not presented here, but we finish with a brief discussion of the eddy viscosities which we met in Section 7.2.6.

7.4.6.1 Dimensional Analysis and Scales
The fundamental dimensions in mechanics are [kg], [m] and [s], or, more generally, mass, length and time. A physicist is trained to check that his or her equations have the same
dimensions on both sides of the equality sign. In fluid mechanics this requirement is used as a procedure to find physical quantities adapted to the scale of the particular problem. In this way the friction velocity was introduced in Eqs. (5.32) and (5.33) to describe the velocity scale in a boundary layer. Eq. (5.33) gave a simple expression for the increase of velocity u with height

∂u/∂z = u*/(kz)
(7.220)
In this case a dimensionless constant k was introduced in the equation, as dimensional analysis does not imply that such constants will be equal to one. In order to compare the scales of molecular and turbulent diffusion, we first take an imposed length scale to compare times, and in a second example an imposed time scale to compare lengths. The Reynolds number Re will appear in the ratio of time scales (7.223) and in the ratio of length scales (7.225). This scaling property of Re is a more precise interpretation than the ratio (7.219) of inertia forces to viscous forces.

In a system with an externally imposed length scale L one may compare the time scales of molecular diffusion and turbulent diffusion. Take as an example the diffusion of a gas in a room with dimension L. From the diffusion equation (7.10) one may write down an order-of-magnitude estimate

δC/T_m ≈ D δC/L²
(7.221)
where δC is a concentration difference which will be reached by molecular diffusion over a distance L after a time T_m, so T_m ≈ L²/D. If the velocity of turbulent transport for the major eddies in the room is u, the corresponding time scale T_t becomes

T_t ≈ L/u
(7.222)
The ratio of both times may be expressed as

T_t/T_m ≈ (L/u)(D/L²) = D/(uL) ≈ ν/(uL) = 1/Re
(7.223)
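A short numerical illustration of Eq. (7.223); the room size, eddy velocity and diffusion coefficient below are illustrative values, with D taken equal to ν for air:

```python
L = 5.0      # size of the room [m]
u = 0.1      # velocity of the large eddies [m/s]
D = 1.5e-5   # molecular diffusion coefficient [m^2/s], taken ~ nu for air

T_m = L**2 / D   # molecular diffusion time, from Eq. (7.221)
T_t = L / u      # turbulent transport time, Eq. (7.222)
Re = u * L / D   # with D ~ nu this is the Reynolds number of Eq. (7.218)

# Eq. (7.223): T_t / T_m ~ 1 / Re -- turbulence mixes far more quickly
```

Here T_m is of the order of weeks while T_t is under a minute, in line with the cigar-smoke observation in the text.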
In this derivation we used the experimental fact that in air D ≈ ν, which can be verified by looking in Appendix A for the value of ν and in Table 7.1 for a value of D. Then Eq. (7.218) may be used to obtain 1/Re. The interesting point is that the Reynolds number appears in the ratio of the time scales (7.223), showing that turbulent transport usually is much quicker than molecular diffusion. One may check this by lighting a cigar or incense in a lecture hall; after a few minutes it will be smelt in the farthest corner.

In the second example a time scale is imposed on the atmospheric boundary layer by the rotation of the earth. In this case the length scales of turbulence and molecular diffusion are compared. The Coriolis acceleration for a particle with velocity u is given by 2Ωu sin β, where β is the geographical latitude (cf. Eq. (3.43)). The time scale is given by T = 1/f, where f = 2Ω sin β (Eq. (3.44)). At mid latitudes one may easily check that T ≈ 10⁴ [s]. In the
absence of turbulence the relation between length and time scales would again be given by Eq. (7.221), leading to

L_m² ≈ DT
(7.224)
With D ≈ 10⁻⁵ [m² s⁻¹] from Table 7.1 this gives L_m ≈ 32 [cm]. The length scale L_t for turbulence is determined by the velocity u of the biggest eddies in the boundary layer; Eq. (7.222) then again gives L_t ≈ uT. The ratio of the length scales for turbulent diffusion and molecular diffusion becomes

L_t²/L_m² = L_t u/D ≈ L_t u/ν = Re,  so that  L_t/L_m ≈ √Re   (7.225)

Again the Reynolds number shows up; with u ≈ 0.1 [m s⁻¹] one finds Re ≈ 10⁷.

7.4.6.2 Kolmogorov Scales
The discussion so far has focused on the largest eddies. Energy dissipation, however, happens in the smallest ones. The rate of energy dissipation ε per unit of mass has the dimension [J s⁻¹ kg⁻¹], which works out as [m² s⁻³]. The dynamic factor governing this dissipation must be the kinematic viscosity ν with dimensions [m² s⁻¹]. It is now possible to derive the so-called Kolmogorov scales for length η, time τ and velocity w as expressions in ε and ν

η = (ν³/ε)^(1/4) [m]
τ = (ν/ε)^(1/2) [s]
w = (νε)^(1/4) [m s⁻¹]   (7.226)

where one may check that w = η/τ. On this scale the Reynolds number becomes, from Eq. (7.218),

Re = ηw/ν = 1   (7.227)
Its low value underlines that on this scale viscosity dominates the processes. Let us denote the length scale of the large eddies by l and the velocities of their parcels of fluid by u. The basic assumption, which agrees with observations, is that these eddies lose their energy when a parcel has travelled a distance of the order of l. The associated time is l/u. The kinetic energy of a unit of mass is of the order u², so the energy which a unit of mass loses per second becomes u³/l. On average this should be equal to the rate ε at which the smallest eddies dissipate their kinetic energy into heat. So

ε = u³/l
(7.228)
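Eqs. (7.226) and (7.228) combine into a quick estimate of the Kolmogorov scales. The large-eddy values below are illustrative:

```python
nu = 1.5e-5   # kinematic viscosity of air [m^2/s]
u = 1.0       # velocity of the large eddies [m/s]
l = 100.0     # size of the large eddies [m]

eps = u**3 / l                 # dissipation rate, Eq. (7.228) [m^2 s^-3]
eta = (nu**3 / eps) ** 0.25    # Kolmogorov length scale [m]
tau = (nu / eps) ** 0.5        # Kolmogorov time scale [s]
w = (nu * eps) ** 0.25         # Kolmogorov velocity scale [m/s]

Re = u * l / nu                # Reynolds number of the large eddies
# checks: w = eta/tau, eta*w/nu = 1 (Eq. 7.227), eta/l = Re^(-3/4) (Eq. 7.229)
```

With these numbers the smallest eddies are of the order of a millimetre, some seven orders of magnitude below the large-eddy scale, in agreement with the Re^(−3/4) scaling discussed below.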
With this relation it is possible to relate the size η of the smallest eddies to the size l of the largest. One finds

η/l = ( ν³/(εl⁴) )^(1/4) = ( ν³/(u³l³) )^(1/4) = ( ν/(ul) )^(3/4) = Re^(−3/4)   (7.229)
The Reynolds number on the right of Eq. (7.229) was found from Eq. (7.218) and clearly refers to the largest eddies. With Reynolds numbers of the order of Re ≈ 10⁴, Eq. (7.229) shows that the length scales of the largest and smallest eddies differ by an order of 10³. Numerical grids for precise calculations should describe both eddies at the same time, which is impossible and illustrates the need for physically sensible approximations. A relation similar to (7.229) holds for the time scales

τ/t = τ/(l/u) = ( νu²/(εl²) )^(1/2) = ( ν/(ul) )^(1/2) = Re^(−1/2)   (7.230)
where Eq. (7.228) was used. The angular velocities in the eddies, or more precisely the vorticities, are inversely proportional to the time scales. Therefore they will be largest for the smallest eddies, as one would imagine intuitively.

It must be mentioned that, besides the Reynolds number, quite a few other dimensionless numbers are in use for describing flows. They are applied to design model experiments in which the relevant dimensionless numbers are reproduced; from the model experiments one draws conclusions as to the behaviour of the real flows. This remains outside the scope of this book.

7.4.6.3 Turbulent Diffusion
We recall Eq. (7.69), which read

F = uC − ε_x ∂C/∂x e_x − ε_y ∂C/∂y e_y − ε_z ∂C/∂z e_z
(7.231)
Here, ε_x, ε_y, ε_z are the coefficients of turbulent diffusion in the three directions. Eq. (7.231) is an empirical relation, based on analogies between diffusion and dispersion by turbulent flows. The physical motivation for Eq. (7.231) is that turbulent diffusion is a random process like molecular diffusion, and to the extent that random or statistical processes are all one knows of the physics involved, the differential equations will be similar. It is possible to give a physical foundation for the concept of coefficients of turbulent diffusion by detailed statistical analysis. It then appears that the eddy viscosities in Eq. (7.231) will in general be complicated functions of position. Conservation of mass, div F + ∂C/∂t = 0 (Eq. (7.4)), will lead to differentiation of ε_x, ε_y, ε_z, written down for rivers in Eq. (7.91), which does not give simple results. It seems wise to restrict oneself to proven applications, like those discussed in Section 7.2 for rivers. Thoughtless application of Eq. (7.231) may lead to erroneous results (Exercise 7.23).
Table 7.3 Horizontal dispersion of pollutants in the atmosphere

Distance          Time scale
1–10 [m]          Seconds
3–30 [km]         Hours
100–1000 [km]     Days
Hemisphere        Months
Globe             Years

Reproduced by permission of SDU from [16], p. 11.
7.5 Gaussian Plumes in the Air
This section bears some resemblance to Section 7.2, where the dispersion of pollutants in rivers was discussed. Now we turn to dispersion in air and it is instructive to consider the time scales concerned. Table 7.3 shows the properties for horizontal dispersion in the atmosphere and Table 7.4 for vertical dispersion. One observes that horizontal dispersion is more rapid than vertical dispersion, because of the horizontal character of the wind velocities. However, even vertical dispersion goes much more quickly than calculated by molecular diffusion alone (Section 7.1); this is due to convection in the atmosphere. For both types of dispersion the turbulence of the atmosphere will lead to rapid mixing and dispersion of pollutants. The discussion in this section will focus on point sources like chimneys. In Figure 7.18 the plume of a chimney is shown for different temperature gradients within the atmosphere. On the left-hand side the adiabatic temperature dependence on altitude is dashed. With a drawn line the real temperature curve is given, leading to the stability and instability conditions discussed in Figure 3.4. One may assume that the air from the chimney is a little hotter than the surroundings, so it starts to rise. What happens next depends on the temperature profile as we can see on the right-hand side of Figure 7.18. The resulting plumes will be familiar from looking at a smoking chimney. It is clear that a realistic calculation of the dispersion of pollution would require knowledge of the temperature profile over the region of interest and a realistic description of its geography. One should calculate the wind field from the meteorological variables and
Table 7.4 Vertical dispersion of pollutants in the atmosphere. Time scales refer to dispersion from ground level to the layers indicated. The boundary of the layers varies with day and night. The troposphere determines the weather and the stratosphere acts as a lid.

Layer            Extension                             Time scale
Boundary layer   Ground to 100 [m]/3 [km]              Minutes to hours
Troposphere      100 [m]/3 [km] to 10 [km]/15 [km]     Days to weeks
Stratosphere     10 [km]/15 [km] to 50 [km]            Years

Reproduced by permission of SDU from [16], p. 11.
Figure 7.18 Plume shapes of a high chimney (100 [m]) for three ambient temperature profiles T(h) where h is the height above ground level (drawn curves). The adiabatic temperature function, which is supposed to be the same in the three cases, is dashed. The resulting plumes are displayed on the right with their axes dashed. The top graph shows an inversion, resulting in a high stability and a narrow plume. In the middle graph the situation is neutral up to an inversion, causing a plume which reflects against the inversion and the ground. In the bottom graph the atmosphere is unstable at ground level, causing strong mixing and a wide plume.
compute the solution of the Navier–Stokes equations for the emitted parcel of polluted air, while using some approximations to describe the turbulence. A complete Navier–Stokes calculation would be essential if, because of some industrial failure, there was a sudden emission of dangerous material. Then one should be able to calculate quickly what measures to take, what warnings to issue and so on. When one performs a series of these calculations and averages are taken, it appears that simple models are able to predict the resulting ingestion of pollutants in an average way at ground level at an arbitrary distance from the source. For many purposes it is sufficient to know an average ingestion, or, what amounts to the same thing, the accumulated ingestion over a year. For these purposes the simple models discussed below serve as a useful approximation [17]. In a simple model calculation it is assumed that there is a dominant horizontal wind velocity U, whose direction is taken as the x-direction of the coordinate frame. There
will be advection by the air at this velocity and turbulent diffusion in the three x-, y- and z-directions. As a model we take diffusion with an instantaneous point source in a uniform wind, which led to Eq. (7.29). In the present case we adapt that equation to the fact that the three directions may behave differently and obtain

C(x, y, z, t) = Q / ( (2π)^(3/2) σ_x σ_y σ_z ) × exp( −(x − Ut)²/(2σ_x²) − y²/(2σ_y²) − z²/(2σ_z²) )   (7.232)
Here C [kg m⁻³] is the concentration of guest particles (pollutants). The normalization is such that at any time t the space integral ∫C dV = Q. This implies, as before, that solution (7.232) describes a point source at x = y = z = 0 emitting at t = 0 a mass of particles Q [kg]. From Eqs. (7.231) and (7.13) one would find σ_x² = 2ε_x t, σ_y² = 2ε_y t, σ_z² = 2ε_z t (Exercise 7.23). In practice σ_x, σ_y, σ_z have a different time dependence and are found in several ways:

(1) from statistical analysis, combined with measurements, to be introduced directly below
(2) from empirical relations, to be discussed around Figure 7.20
(3) by detailed calculations of the atmosphere, not discussed here
(4) using so-called K-models, not discussed here.
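The puff solution (7.232) is easily evaluated numerically. In the sketch below the standard deviations grow linearly in time, as in Eq. (7.243) further on; all parameter values are illustrative:

```python
import math

def puff(x, y, z, t, Q=1.0, U=5.0, um=0.5, vm=0.5, wm=0.5):
    """Concentration of an instantaneous point source, Eq. (7.232),
    with sigma_x = um*t, sigma_y = vm*t, sigma_z = wm*t as in Eq. (7.243)."""
    sx, sy, sz = um * t, vm * t, wm * t
    norm = Q / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    expo = -(x - U * t) ** 2 / (2 * sx ** 2) - y ** 2 / (2 * sy ** 2) \
           - z ** 2 / (2 * sz ** 2)
    return norm * math.exp(expo)

# the centre of the puff travels with the wind: the maximum sits at x = U*t
```

At t = 100 [s], for instance, the maximum lies at x = 500 [m] and the concentration falls off as a Gaussian with σ = 50 [m] in each direction.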
7.5.1 Statistical Analysis
Consider a point source from which particles are emitted in certain puffs, each corresponding to an instantaneous point source. The actual velocities of the particles will vary around the mean flow (U, 0, 0) and are described by (U + u, v, w). The time averages of the velocities u, v and w will of course vanish, as will the averages of the velocities over all particles. Still, in a real flow there will be a correlation between the velocities at a certain time t and the velocities at a time t + τ a little later. What matters in the statistical models considered here is not the behaviour of a particular particle in a particular puff, but the average behaviour of many particles in many puffs. The correlation should therefore also be described in an average way. The concentration C discussed in this section is to be considered as an average over many emitted puffs from the source and is assumed to be proportional to the probability of finding a particle at a certain position.

Let us consider a coordinate system moving with the average flow, in which U = 0. The velocity field will then be described by (u, v, w) and overbars will denote the average over many puffs and particles. The time t is left as a parameter which indicates the time passed since the particles over which the velocity is averaged were emitted. For a stationary process, an average over many puffs will be independent of time

\overline{u}(t) = 0   (7.233)

\overline{u²}(t) = \overline{u²} = constant   (7.234)
For the v- and w-components similar relations hold. Define the autocorrelation function R(τ ) by ([1], p. 26) R(τ ) =
u(t)u(t + τ )
(7.235) u2 This function expresses how much the particle at time t + τ ‘remembers’ of its previous velocity a time τ earlier. It is assumed that the statistical character of the motion is such that after averaging over all puffs the dependence of the autocorrelation on time t disappears, so that the autocorrelation function depends on τ only. If this assumption holds, one may write down expression (7.235) for t = 0 and also for t = −τ . Both should be equal to R(τ ), but the latter expression may also be interpreted as R(−τ ). It follows that R(τ ) = R(−τ ) and the autocorrelation function R(τ ) is an even function of τ . Other properties are that R(0) = 1 and R(∞) = 0, for after a long period of time the particle will have ‘forgotten’ its early velocities, or more precisely: for long times τ the product u(t)u(t + τ ) will be as often positive as negative. For a single particle with x(0) = 0 one may write down t x(t) =
u(t )dt
(7.236)
0
and

d x²(t)/dt = 2x dx/dt = 2x(t)u(t) = 2 ∫₀ᵗ u(t′)u(t) dt′ = 2 ∫₋ₜ⁰ u(t + τ)u(t) dτ   (7.237)
where in the last equality τ was chosen as the integration variable, with t′ = t + τ. Eq. (7.237) may now be averaged over many puffs, as in Eq. (7.233), which leads to

d\overline{x²}/dt = 2 ∫₋ₜ⁰ \overline{u(t)u(t + τ)} dτ = 2 ∫₋ₜ⁰ \overline{u²} R(τ) dτ = 2\overline{u²} ∫₀ᵗ R(τ) dτ   (7.238)
where Eq. (7.235) and the even character of R(τ) were used. Eq. (7.238) is called Taylor's theorem. Next, consider a cloud of particles around x = 0. Its size will be described by the average of x² over all particles and can therefore be found by integrating Eq. (7.238), giving

\overline{x²} = 2\overline{u²} ∫₀ᵗ dt′ ∫₀^(t′) R(τ) dτ   (7.239)
Integration by parts of the integral over t′ can be performed by writing the τ-integral as one times itself: the 'one' is integrated, giving t′, and the integral over τ is differentiated, giving R(t′). This leads to

\overline{x²} = 2\overline{u²} ∫₀ᵗ (t − τ) R(τ) dτ   (7.240)
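Eq. (7.240) can be checked numerically for an exponential autocorrelation function, for which the time integration can also be done in closed form (that closed form reappears as Eq. (7.242) below); the mean square velocity is set to 1 for simplicity:

```python
import math

tL = 2.0   # Lagrangian time scale [s] (illustrative)
t = 5.0    # travel time [s] (illustrative)

# midpoint-rule evaluation of Eq. (7.240) with R(tau) = exp(-tau/tL)
N = 100000
dtau = t / N
sigma2_num = 2.0 * sum((t - (k + 0.5) * dtau) * math.exp(-(k + 0.5) * dtau / tL)
                       for k in range(N)) * dtau

# closed form of the same integral: 2 tL^2 (t/tL + exp(-t/tL) - 1)
sigma2_exact = 2.0 * tL ** 2 * (t / tL + math.exp(-t / tL) - 1.0)
```

The two values agree to the accuracy of the quadrature, confirming the integration by parts leading to Eq. (7.240).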
This shows that the autocorrelation function R(τ) determines the size of the cloud. The next step is to approximate the autocorrelation function, taking into account its even character and the properties at τ = 0 and τ = ∞ discussed above. For times that are not too small, one may approximate the autocorrelation function by

R(τ) = e^(−τ/t_L)   (τ ≫ 0)   (7.241)
This relation has the right behaviour for τ → ∞ but it is not even at τ = 0, which we ignore as we integrate over positive values of τ . We now calculate x 2 , y 2 and z 2 . For a Gauss distribution (C1) x 2 = σx2 , which is the width of the distribution (C3). Guessing that we will end up with a Gauss distribution anyway, we just define standard deviations by σx2 = x 2 , σ y2 = y 2 , σz2 = z 2 . Take the y-direction, for example. Substitution of Eq. (7.241) into Eq. (7.240) gives, after time integration t 2 2 −t/t L 2 2 +e −1 (7.242) σ y = y = 2v t L tL For the x- and z-directions one may write down similar equations. For times which are small compared to the time scale of turbulence t tL one uses the first three terms of the series expansion of e −t/tL . In this way Eq. (7.242) may be approximated by σ x = u 2 t = u m t (7.243) σ y = v2 t = vm t σz = w2 t = wm t Thus for small times the standard deviations vary linearly with time. The mean velocities um , vm , wm are defined implicitly on the right-hand side of Eqs. (7.243). Note that the derivation is somewhat sloppy because Eq. (7.241) only holds for large τ while we integrate over all τ . Eqs (7.243) are accepted as an acceptable approximation anyway. 7.5.2
Continuous Point Source
Before discussing experimental and empirical ways to determine the standard deviations, we return to the Gaussian plume (7.232). The result for a continuous point source emitting a mass q [kg s⁻¹] of particles per second may be found in a similar way as that following Eq. (7.28). One assumes the equations (7.243) to hold for all times, substitutes them in an equation like (7.29), and integrates over all times:

C(x, y, z) = (q/(2π)^(3/2)) ∫₀^∞ exp(−(x − Ut′)²/(2σ_x²) − y²/(2σ_y²) − z²/(2σ_z²)) dt′/(σ_x σ_y σ_z)    (7.244)

A somewhat cumbersome time integration, in which the time dependence (7.243) is substituted into Eq. (7.244), gives

C(x, y, z) = (q u_m/((2π)^(3/2) v_m w_m r²)) exp(−U²/(2u_m²)) [1 + √(π/2) (Ux/(u_m r)) exp(U²x²/(2u_m² r²)) erfc(−Ux/(√2 u_m r))]    (7.245)
322
Environmental Physics
with

r² = x² + (u_m²/v_m²) y² + (u_m²/w_m²) z²    (7.246)
where one should note the negative argument of the erfc function. For some distance downstream one may write r → x and, ignoring turbulent diffusion in the direction of the flow, one may take U/u_m → ∞, for u_m will then be much smaller than U. One observes that the second term in Eq. (7.245) then dominates. Using Eq. (7.244) with x = Ut then leads to

C = (q/(2π σ_y σ_z U)) exp(−y²/(2σ_y²) − z²/(2σ_z²))    (7.247)

This is very similar, if not identical, to Eq. (7.37), which was derived with σ-values behaving as the square root of the time t. The reason may be that the interpretation is similar: in both cases the x-dependence is lost because the diffusion in the direction of the main flow is neglected. Further, in both cases the integral over a cross section perpendicular to the flow may be calculated as

∫∫ C dy dz = q/U    (7.248)

The essential point both derivations have in common is that the (turbulent) diffusion takes place over a slice perpendicular to the main flow. Therefore we may say the following: consider a time dt in which a mass q dt of particles is emitted as a slice. After a time t the slice has travelled a distance x = Ut and the width of the slice will be dx = U dt. If one multiplies the integral (7.248) by the width U dt of the slice, one indeed finds the mass q dt of particles which was emitted in the time dt.

7.5.3 Gaussian Plume from a High Chimney
Consider a chimney with height h, which continuously emits a mass q [kg s⁻¹] of particles per second. Take the vertical as the z-direction; the coordinates of the source therefore are (0, 0, h). At ground level z = 0 we assume complete reflection: all particles going down bounce back. This means that, as in Eq. (7.41), we have to add a fictitious source with coordinates (0, 0, −h). The plume of Eq. (7.247) is now generalized as

C = (q/(2π σ_y σ_z U)) [exp(−y²/(2σ_y²) − (z − h)²/(2σ_z²)) + exp(−y²/(2σ_y²) − (z + h)²/(2σ_z²))]    (7.249)

From a health point of view one is interested in the mass concentration of pollutants in the air at ground level z = 0, which becomes

C = (q/(π σ_y σ_z U)) exp(−y²/(2σ_y²) − h²/(2σ_z²))    (7.250)

If the plume is completely absorbed where it hits the ground, one should not add the mirror source and the concentration in the air at ground level will be halved.
Figure 7.19 Ground-level concentration underneath the axis of a horizontal plume. Along the vertical axis the function CπUσ y h2 /(σ z q) is plotted, which is proportional to the concentration C. (Reproduced by permission of Kluwer Academic Publishers from [1], Fig. 3.6 p. 68.)
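The curve of Figure 7.19 (cf. Exercise 7.25) is easily reproduced numerically. A short Python sketch; it evaluates the ordinate function and locates its maximum:

```python
import numpy as np

# Ordinate of Figure 7.19: C*pi*U*sigma_y*h^2/(sigma_z*q) as a function of
# s = sigma_z/h, i.e. f(s) = (1/s^2) exp(-1/(2 s^2)).
def ordinate(s):
    return (1.0 / s**2) * np.exp(-1.0 / (2.0 * s**2))

s = np.linspace(0.05, 1.5, 2901)
f = ordinate(s)
i = int(np.argmax(f))
print(round(float(s[i]), 3), round(float(f[i]), 3))  # 0.707 0.736, i.e. 1/sqrt(2) and 2/e
```

The maximum at σ_z/h = 1/√2 with value 2/e ≈ 0.74 is recovered on the grid.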
The largest concentration in the plume (7.250) is found just below the axis, at y = 0:

C = (q/(π σ_y σ_z U)) exp(−h²/(2σ_z²))    (7.251)

The distance to the chimney is hidden in the parameters σ_y and σ_z. The concentration C of Eq. (7.251) is sketched in Figure 7.19 as a function of σ_z/h, which should be a measure of the distance to the chimney. The ordinate in Figure 7.19 is written as

CπUσ_y h²/(σ_z q) = (h²/σ_z²) exp(−h²/(2σ_z²))

In Figure 7.19 one observes a maximum at σ_z/h = 1/√2. The value of that maximum is CπUσ_y h²/(σ_z q) = 2/e ≈ 0.74. This implies that the maximum ground-level concentration C is inversely proportional to the square of the height h of the chimney – a good reason to build high chimneys. It must be noted that Figure 7.19 applies to the position underneath the plume axis; when the distance to this position increases, Eq. (7.250) shows an extra Gaussian exponential, which implies that the concentration goes down rapidly. The parameters σ_y and σ_z in Eqs. (7.250) and (7.251) will depend strongly on:

(1) the height: the turbulence will decrease further from its cause, which is the ground;
(2) the roughness of the ground: they will increase with the roughness, represented by a parameter like z0 defined in Eq. (5.34);
(3) the stability of the atmosphere: more stability implies lower turbulence.

We now discuss two ways to determine the parameters σ_y and σ_z: empirical and semi-empirical.

7.5.4 Empirical Determination of the Dispersion Coefficients
The empirical approach assigns stability categories A (very unstable) to F (moderately stable) to atmospheric conditions: the Pasquill categories. They are defined in Table 7.5. Empirical relations between σ_y and σ_z on the one hand and the distance x from the source on the other have been found for the atmospheric conditions A to F. For ground-level release and flat terrain one finds the curves reproduced in Figure 7.20. In order to
Table 7.5 Pasquill stability categories.

Surface wind        Day: strong   Day: moderate   Day: slight   Night: overcast or   Night: ≤3/8
velocity [m s⁻¹]    insolation    insolation      insolation    ≥4/8 low cloud       low cloud
<2                  A             A–B             B             –                    –
2–3                 A–B           B               C             E                    F
3–5                 B             B–C             C             D                    E
5–6                 C             C–D             D             D                    D
>6                  C             D               D             D                    D

Reproduced by permission of Kluwer Academic Publishers from [1], Table III.2, p. 71. A = extremely unstable; B = moderately unstable; C = slightly unstable; D = neutral; E = slightly stable; F = moderately stable.
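The daytime half of Table 7.5 translates directly into a small lookup. A sketch; the function name, the dictionary encoding and the treatment of the bin edges are our own choices, not from the text:

```python
# Daytime Pasquill categories from Table 7.5.
DAY = {  # insolation -> category per wind bin: <2, 2-3, 3-5, 5-6, >6 [m/s]
    "strong":   ["A", "A-B", "B", "C", "C"],
    "moderate": ["A-B", "B", "B-C", "C-D", "D"],
    "slight":   ["B", "C", "C", "D", "D"],
}

def pasquill_day(wind, insolation):
    """Return the Pasquill category for a surface wind speed [m/s] by day."""
    edges = [2.0, 3.0, 5.0, 6.0]       # upper edges of the first four bins
    i = sum(wind >= b for b in edges)  # bin index 0..4
    return DAY[insolation][i]

print(pasquill_day(4.0, "moderate"))  # B-C
```

Extending the lookup with the two night columns is straightforward.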
correct for the height h and the roughness z0, one has to adjust the numbers deduced from Figure 7.20 in a semi-empirical way. This complication is usually ignored. One should be aware of the many approximations made above. We just draw attention to the fact that the wind direction is not constant, as was assumed; observing any smoking chimney or any child's wind vane will confirm this. Another point is that real plumes resemble the bent shapes of Figure 7.18 rather than the horizontal plumes assumed here.

7.5.5 Semi-Empirical Determination of the Dispersion Parameters
A more physical way to determine the standard deviations σ_y and σ_z is to divide the turbulence into a quickly varying part and a more slowly varying contribution [19]. Let us first look at the quickly varying part. It originates from convection and changes in
Figure 7.20 Dispersion coefficients σ y and σ z as a function of the distance from the source for the atmospheric stability conditions defined in Table 7.5 (From [18], V1-V2).
stability. One may apply Eq. (7.242) and either measure ⟨v²⟩ and ⟨w²⟩ or calculate them from meteorological models. In order to determine σ_y² and σ_z² one still needs the parameter t_L. The parameter t_L in Eqs. (7.241) and (7.242) refers to the autocorrelation of particles when one follows the flow in the course of time. Thus, when wind velocity measurements v(t) and w(t) are performed at a fixed position, they are measured on a so-called Eulerian time scale, with a local correlation time T_e. One uses the empirical relation

t_L = 3T_e    (7.252)

to connect both time scales. It appears that to a good approximation one may write for the quickly varying part (with subscript q)

σ_zq = σ_yq    (7.253)
This holds for altitudes 0.1 z_i < z < 0.9 z_i, where z_i is the mixing height, which is the thickness of the earth's boundary layer (cf. Table 7.4). The slowly varying part of the standard deviations (with subscript s) originates from changes in the horizontal wind field. It is assumed that only σ_y is influenced by it, so

σ_zs ≈ 0    (7.254)

and for the slowly varying wind field one may write, following Eq. (7.243),

σ_ys = v_m t    (7.255)
Here and in Eq. (7.242) the time t indicates the time since the particles were emitted. As the root mean square velocity v_m can be measured or calculated, one finds σ_ys. The standard deviations σ_y and σ_z are then found from

σ_y² = σ_yq² + σ_ys²    (7.256)

σ_z² = σ_zq² ≈ σ_yq²    (7.257)
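Equations (7.242), (7.252) and (7.255)–(7.257) combine into a simple recipe for σ_y and σ_z. A Python sketch; all input numbers below are made-up illustrative values, not data from the text:

```python
from math import sqrt, exp

# Illustrative inputs: Eulerian correlation time and turbulent velocity
# variances, e.g. from mast measurements or a meteorological model.
Te = 100.0                 # Eulerian correlation time [s]
v2, w2 = 0.25, 0.25        # quickly varying variances <v^2>, <w^2> [m^2 s^-2]
vm_slow = 0.3              # rms velocity of the slow wind-field meander [m/s]

tL = 3.0 * Te              # Lagrangian time scale, Eq. (7.252)

def sigma_quick(var, t):
    # Eq. (7.242) applied to the quickly varying part
    return sqrt(2.0 * var * tL**2 * (t / tL + exp(-t / tL) - 1.0))

t = 600.0                              # travel time since emission [s]
sig_yq = sigma_quick(v2, t)
sig_ys = vm_slow * t                   # slowly varying part, Eq. (7.255)
sig_y = sqrt(sig_yq**2 + sig_ys**2)    # Eq. (7.256)
sig_z = sigma_quick(w2, t)             # Eqs. (7.253) and (7.257)
print(sig_y, sig_z)
```

For t ≪ t_L the function reduces to the linear growth of Eqs. (7.243), as it should.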
When one has found σ_y and σ_z in this way, one may use Figure 7.20 to deduce the Pasquill category with which they correspond. If one were to deduce this category directly from Table 7.5, it appears that simple application of Table 7.5 gives erroneous results. In a flat country, for example, one finds category D (neutral) in 70% of the cases, whereas from Eqs. (7.256) and (7.257) one finds that category in only 35% of the cases.

7.5.6 Building a Chimney
If one is planning to build a factory and use a chimney to get rid of pollutants, the important question is what ground-level concentrations are to be expected. Taking it the other way around: given some legal norms for the ground-level concentration, one can deduce the minimum height the chimney should have. One could measure the velocity field u(t), v(t) and w(t) at the location to be built on, using Eqs. (7.252) to (7.258). Then one should take into account that the wind velocity increases from ground level z = 0 to the chimney height h. Here one usually assumes the logarithmic velocity increase (5.34), which is applicable for neutral atmospheric situations:

U(z) = (u*/k) ln(z/z0)    (7.258)
where z0 is the ground surface roughness introduced before. One may calculate the average of U(z) by integrating from z0 to h and dividing by (h − z0). Approximating h − z0 ≈ h, this average becomes U(h/e). So, if one is planning to build a chimney of height h, one should perform the preparatory wind measurements at height h/e.
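This result is quickly verified numerically. A sketch with illustrative values for u*/k and z0 (both assumed, not from the text):

```python
from math import log, e

ustar_over_k = 1.0   # u*/k, sets the velocity scale [m/s] (illustrative)
z0, h = 0.1, 50.0    # roughness length and chimney height [m] (illustrative)

U = lambda z: ustar_over_k * log(z / z0)   # logarithmic profile, Eq. (7.258)

# trapezoidal average of U(z) between z0 and h, divided by (h - z0)
n = 200000
step = (h - z0) / n
U_avg = step * (U(z0) / 2 + sum(U(z0 + i * step) for i in range(1, n)) + U(h) / 2) / (h - z0)

print(U_avg, U(h / e))  # nearly equal: the height h/e 'represents' the average wind
```

The small residual difference comes from the approximation h − z0 ≈ h.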
7.6 Turbulent Jets and Plumes
In this last section on the dispersion of pollutants we discuss the way in which engineers have traditionally tackled the problem of flows. It is based on physical intuition and on the physical fact that equations should have the same dimensions on both sides of the equality sign. We take our examples and data from sewage disposal ([5], Chapter 9). Consider a round pipe with radius R, which discharges a volume Q [m³ s⁻¹] of water per second with a certain concentration C0 [kg m⁻³] of pollutant. The pipe enters a lake or an ocean at a depth H. The situation is sketched in Figure 7.21. The question then is how the discharge dilutes and disperses in the ambient waters and at what distance an acceptably low concentration of pollutant is reached. One may assume that the ejected flow has been
Figure 7.21 Defining the parameters of a simple jet or vertical plume.
forced to be turbulent, for in this way the mixing with the surroundings will be optimal. We do not use the Navier-Stokes equations but apply a zero-order approximation, starting from the initial conditions at the end of the pipe. The approximations to be made hold only for large Reynolds numbers and at some distance from the end of the pipe. At the point of discharge the ejected flow will have a momentum ρM [kg m s⁻²] and an average velocity W [m s⁻¹]. This leads to the equations

Q = πR²W    (7.259)
M = πR²W²    (7.260)

The student should check that ρM indeed is the momentum of the fluid passing the cross section πR² in 1 [s]. Note that the density ρ, which could have been put on both sides of Eq. (7.260), has been divided out, which results in a convenient dimension [m⁴ s⁻²] for M. The Reynolds number Re = UL/ν defined in Eq. (7.218) may be expressed in the momentum flux M by noting that a representative length in the present case is the radius R, while a representative velocity is W, both at the end of the pipe. We find

Re = UL/ν = WR/ν ≈ √(πR²W²)/ν = √M/ν    (7.261)

For turbulence one takes Re > 4000. Besides volume and momentum, the ejected flow may have a buoyancy B as well. This could be caused by a difference in density with the receiving waters, due to a difference in temperature and/or salinity. An expression for the buoyancy at the end of the pipe may be found by considering a volume Q of effluent with density ρ entering ambient water with density (ρ + Δρ0). Thus, Δρ0 is the density deficiency of the effluent at the end of the pipe, hence the subscript 0. The downward gravity force will be ρQg and the upward Archimedes force will be (ρ + Δρ0)Qg, resulting in an upward buoyancy force ρB [N s⁻¹ = kg m s⁻³], defined by

B = (Δρ0/ρ) gQ    (7.262)
where again the density ρ has been brought to the right-hand side. Outside the pipe one can still follow the flow and distinguish it from the surroundings. The flow therefore has a well-defined cross section at any distance from the source. Also, for simplicity we assume that the density ρ is constant over a cross section. Thus we define a mass flux ρμ [kg s⁻¹] by

ρμ = ∫_flow ρw dA   or   μ = ∫_flow w dA    (7.263)

where μ is the volume flux [m³ s⁻¹] of the flow, the integral runs over the cross section of the flow, and w is the local velocity perpendicular to the cross section. Turbulent velocities are ignored, as they amount to no more than 10% of the average velocities. Similarly, the momentum flux ρm [kg m s⁻²] is given by

ρm = ∫_flow ρw² dA   or   m = ∫_flow w² dA    (7.264)

and the buoyancy flux ρβ [kg m s⁻³] by

ρβ = ∫_flow gΔρ w dA   or   β = ∫_flow (Δρ/ρ) gw dA    (7.265)
where in this case Δρ is the difference between the density of the flow at a certain cross section and that of the surrounding fluid. As a matter of definition, one speaks of a pure plume if at the end of the pipe one may approximate Q = M = 0, so that the buoyancy B is the only initial value of interest; a smoke plume above a fire would be an example. Similarly, one speaks of a simple jet if the flow has an initial momentum flux M and volume flux Q but no buoyancy; the density of the jet then is the same as that of the surrounding fluid. Finally, one speaks of a buoyant jet if there is a buoyancy B as well. Below we illustrate the engineering method, which could be used as a zero-order check by physicists as well. Only a few simple cases are discussed, and the receiving fluid will be described as simply as possible, that is, without stratification or cross flows.

7.6.1 Dimensional Analysis
A physical engineer will identify the dominant physical quantities which should enter into the required equations and identify their dimensions: length [L], time [T] and mass [m]. Following the convention, we distinguish between the units [m], [kg] and [s] and their dimensions in the remainder of this section. If there are n variables in a certain physical problem, the dimensions of the equations should match, and one could in this case construct n − 3 dimensionless groups of variables which should relate to each other. In the simple cases shown below one finds essentially n − 2 dimensionless groups, as we use only two dimensions, length and time. The dimensions of the dominant variables are easily found from Eqs. (7.259) to (7.265) as

[μ] = [Q] = [L³]/[T]
[m] = [M] = [L⁴]/[T²]
[β] = [B] = [L⁴]/[T³]    (7.266)
where the mass does not occur, as it has been divided out in the definitions. Figure 7.21 showed the relevant parameters: z is the distance from the orifice and x is the distance from the axis of the jet or plume, which for simplicity has been drawn vertically. Also shown is the local velocity w, which should be taken as a time-averaged value. For both the velocity distribution w(x) and the concentration of pollutant C(x) we will assume a Gaussian shape with its maximum on the axis. This is in line with the arguments of preceding sections and certainly is the simplest symmetrical shape. Thus

w(x) = w_m e^(−(x/b_w)²)    (7.267)
and

C(x) = C_m e^(−(x/b_T)²)    (7.268)
where w_m = w(0) and C_m = C(0) are the values on the axis of the jet or plume, and b_w and b_T indicate the widths of the Gauss distributions.

7.6.2 Simple Jet
For simplicity we discuss the simple jet as a horizontal discharge with parameters Q and M, while the buoyancy B = 0 and gravity is ignored. From the three quantities Q, M and z and the two dimensions [L] and [T] it follows that here 3 − 2 = 1 dimensionless quantity determines the physics. Note that x is not an independent variable, as the widths of the plumes should follow from Q, M and z. As we want to find the dependence of the parameters w_m, C_m, b_w, b_T on the distance z to the source, we choose as dimensionless quantity a relevant length scale. This scale should depend only on the two initial parameters Q and M. They can be combined into a length scale by defining

l_Q = Q/√M    (7.269)
which, from Eqs. (7.259) and (7.260), is equal to √A, where A = πR² is the initial cross-sectional area of the jet. Because w_m has the dimension of velocity and M/Q is the only velocity following from the initial conditions, the dependence of w_m on the relevant parameters can only be of the form

w_m Q/M = f(z/l_Q)    (7.270)

where f is an unknown function of the dimensionless distance z/l_Q to the source. We may, however, deduce the asymptotic properties of f. For small z the function f should approach the value f = 1, because w_m → W = M/Q (see Eqs. (7.259) and (7.260)). More interesting is the limit where z/l_Q approaches infinity. This could happen in three ways:

(1) z → ∞ with fixed Q, M and consequently fixed l_Q;
(2) Q → 0, therefore l_Q → 0, with z, M fixed;
(3) M → ∞, therefore l_Q → 0, with z, Q fixed.

The point is that each of these three limits should give the same physical flow, as the right-hand side of Eq. (7.270) is the same in the three cases. Take a flow with a fixed, but average, value of Q and M. At large values of z (case (1)) the flow should be the same as a flow with the same value of M, a large value of z and a series of Q values approaching zero (case (2)). These flows can only be indistinguishable if for large values of z the dependence on Q disappears from Eq. (7.270). This implies

w_m Q/M = a₁ l_Q/z    (z ≫ l_Q)    (7.271)
Using Eq. (7.269) one may check that w_m is indeed independent of Q. The empirical constant has been determined as a₁ = 7.0 ± 0.1, which implies that data points fit Eq. (7.271) very nicely. In a similar vein, relations for the width parameters b_w and b_T are found. Indicating both by b, one again finds that in general

b/l_Q = f(z/l_Q)    (7.272)

Again, for large z the value of Q should disappear from the equations, which implies that b/l_Q and z/l_Q should be proportional; consequently b and z should be proportional. The experiments indicate

b_w/z = 0.107 ± 0.003    (z ≫ l_Q)    (7.273)

and

b_T/z = 0.127 ± 0.004    (z ≫ l_Q)    (7.274)

The volume flux μ at some distance should be proportional to the initial volume flux Q, which leads to

μ/Q = f(z/l_Q)    (7.275)

where f again is an unknown function. For large z the value of Q should be insignificant, as before, which leads to

μ/Q = c_j z/l_Q    (z ≫ l_Q)    (7.276)

This quantity is interpreted as the mean dilution of the jet. The value of c_j can be calculated from the earlier estimates. Substituting Eq. (7.267) into Eq. (7.263) and integrating gives

μ = π w_m b_w²    (7.277)

Using Eqs. (7.271), (7.269) and (7.273) leads to the value c_j = 0.25 in Eq. (7.276). Consider an initial concentration C₀ [kg m⁻³] of pollutant in the jet. With Q [m³ s⁻¹] the volume output per second, the initial pollutant mass per second is

Y = QC₀   [kg s⁻¹]    (7.278)

Physically we expect the pollutant concentration on the axis to be proportional to the injection Y. For large z the ratio C_m/Y may only depend on M and z. The dimension of C_m/Y is found to be [T][L]⁻³. Thus one should construct from M and z a variable with the same dimension. That leads to

C_m/Y = a₂/(z√M)    (z ≫ l_Q)    (7.279)

or

C_m/C₀ = a₂ Q/(z√M) = a₂ l_Q/z    (z ≫ l_Q)    (7.280)

It has been determined experimentally that a₂ ≈ 5.64.
Example 7.3 The Simple Jet²

Consider a simple horizontal jet with Q = 1 [m³ s⁻¹] and a velocity W = 3 [m s⁻¹] discharging into a fluid with the same density. Calculate the characteristics of the jet at a horizontal distance of 60 [m].

Answer

We have M = QW = 3 [m⁴ s⁻²]. The length scale of this problem is l_Q = Q/√M = 0.58 [m]. From Eq. (7.271) and a₁ = 7.0 it follows that w_m = 0.20 [m s⁻¹]. The decrease in concentration of a pollutant on the axis follows from Eq. (7.280) with a₂ = 5.64; this gives C_m/C₀ = 0.054. The dilution (7.276) becomes μ/Q = 26, where c_j = 0.25 was used.
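The numbers of Example 7.3 follow directly from Eqs. (7.269), (7.271), (7.276) and (7.280). A short Python sketch:

```python
from math import sqrt

a1, a2, cj = 7.0, 5.64, 0.25    # empirical jet constants quoted in the text

Q, W, z = 1.0, 3.0, 60.0        # discharge [m^3/s], exit velocity [m/s], distance [m]
M = Q * W                       # momentum flux [m^4/s^2], from Eqs. (7.259)-(7.260)
lQ = Q / sqrt(M)                # length scale, Eq. (7.269)

wm = a1 * (M / Q) * lQ / z      # centre-line velocity, Eq. (7.271)
Cm_C0 = a2 * lQ / z             # relative centre-line concentration, Eq. (7.280)
mu_Q = cj * z / lQ              # mean dilution, Eq. (7.276)

print(round(wm, 2), round(Cm_C0, 3), round(mu_Q))  # 0.2 0.054 26
```

Note that w_m = a₁√M/z here, confirming that the centre-line velocity is independent of Q.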
7.6.3 Simple Plume
The simple plume is defined by Q = M = 0, so it only has a buoyancy B, and the plume is assumed to rise vertically. The physical variables concerned are B and the distance z to the source. Again one wants to calculate the mass and momentum fluxes some distance away. The viscosity of the fluid is again ignored, on the basis of the argument that turbulence will be dominant. Looking at the dimensions of B and z, it appears that the only velocity scale is (B/z)^(1/3). For the longitudinal velocity w_m on the axis one finds

w_m = b₁ (B/z)^(1/3)    (7.281)

where experimentally b₁ = 4.7. The integrated momentum flux m can be constructed from z and B in one way only, which leads to the expression

m = b₂ B^(2/3) z^(4/3)    (7.282)

where it is found that b₂ = 0.35. Finally, the volume flux μ is written as

μ = b₃ B^(1/3) z^(5/3)    (7.283)

with experimentally b₃ = 0.15. For later use these formulas are rearranged a little by expressing μ in terms of m as

μ = (b₃/√b₂) √m z = c_p √m z    (simple plume)    (7.284)

² Taken by permission of Academic Press from [5], Example 9.1, p. 328.
where the last equality defines c_p = 0.254. For a simple jet we found a similar formula in Eq. (7.276), which may be written as

μ = c_j √M z    (simple jet)    (7.285)

The difference between the two equations (7.284) and (7.285) is the appearance of the initial momentum flux M in the equation for the jet and of the local flux m in the equation for the plume. Indeed, for a plume the flux increases as the 5/3 power of z, as can be seen from Eq. (7.283). This originates from the buoyancy, which gives extra velocities. Therefore c_p may be regarded as the growth coefficient for a plume and c_j as the growth coefficient for a jet. Eqs. (7.282) and (7.283) may be combined into a dimensionless number R_p:

μB^(1/2)/m^(5/4) = b₃/b₂^(5/4) = R_p    (7.286)

where R_p = 0.557 by virtue of the values quoted above. Finally, for environmental reasons it is important to find the decrease in the concentration of the pollutant on the plume axis, C_m, by comparing it with the initial injection flux Y. This should be a function of the buoyancy B and the distance z from the source. Looking at the dimensions of C_m/Y, the only possible relation is

C_m/Y = b₄/(B^(1/3) z^(5/3))    (7.287)

with the empirical number b₄ = 9.1.

Example 7.4 A Simple Plume³

A fresh-water discharge with Q = 1 [m³ s⁻¹] is located at a depth of 70 [m] in an ocean. The initial concentration of pollutant is C₀ = 1 [kg m⁻³]. The discharge has a temperature of 17.8 [°C], and the ocean has a temperature of 11.1 [°C] and a salinity of 3.25%. Find the buoyancy B. Calculate the pollutant concentration C_m of Eq. (7.287), the volume flux μ of (7.283) and the dilution μ/Q at 60 [m] from the source, so 10 [m] below sea level. Explain the (high) value of the dilution.

Answer

The output of pollutant is Y = QC₀ = 1 [kg s⁻¹]. Despite the fact that Q ≠ 0 for a real plume like this one, we assume that Q is small enough to describe the discharge as that of a simple plume. Using tabulated values of sea-water and fresh-water density and using Eq. (7.262), one finds a buoyancy B = 0.257 [m⁴ s⁻³]. With Eq. (7.287) one directly calculates C_m = 0.016 [kg m⁻³]. With Eq. (7.283) one finds μ = 88 [m³ s⁻¹]. The dilution follows as μ/Q = 88. The fact that the volume flux μ is so much higher than the initial flux Q originates from the increase in velocity w due to buoyancy.
³ Taken by permission of Academic Press from [5], Example 9.2, p. 332.
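The numbers of Example 7.4 can be checked with Eqs. (7.262), (7.283) and (7.287). In the sketch below the relative density deficiency is an assumed illustrative value chosen to reproduce the quoted B, not taken from density tables:

```python
Q, C0, z = 1.0, 1.0, 60.0   # discharge [m^3/s], concentration [kg/m^3], distance [m]
g = 9.81                    # [m/s^2]
b3, b4 = 0.15, 9.1          # empirical plume constants quoted in the text

# Relative density deficiency of the fresh effluent in cold saline ocean water;
# illustrative value chosen to give B close to 0.257 [m^4/s^3].
drho_over_rho = 0.0262
B = drho_over_rho * g * Q            # buoyancy flux, Eq. (7.262)

Y = Q * C0                           # pollutant output [kg/s], Eq. (7.278)
Cm = b4 * Y / (B**(1/3) * z**(5/3))  # centre-line concentration, Eq. (7.287)
mu = b3 * B**(1/3) * z**(5/3)        # volume flux, Eq. (7.283)

print(round(B, 3), round(Cm, 3), round(mu))  # 0.257 0.016 88
```

The z^(5/3) growth of μ makes the large dilution μ/Q = 88 plausible at a glance.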
It should be noted that real ocean diffusers usually consist of a long pipe with small holes releasing fluid. This should be described as a line source, and the plume subsequently may be described as a two-dimensional plume, resulting in different equations. Indeed, even the dimensions will be different, with Q in [L²T⁻¹], M in [L³T⁻²] and B in [L³T⁻³].
Exercises

7.1 Write out Eq. (7.6) in components and show that it equals Eq. (7.7).
7.2 Consider a velocity field u(x, y, z, t). Show from conservation of mass and a space-independent density that it follows that div u = 0.
7.3 Show that (7.12) is a solution of Eq. (7.11).
7.4 Consider the instantaneous diffusion of a plane of CO₂ gas in air. Calculate the times after which √⟨x²⟩ = 0.01 [m], 1 [m] and 100 [m].
7.5 Verify Eq. (7.23) by substitution in Eq. (7.10), simultaneously for n = 1, 2, 3.
7.6 Verify that the concentration given in Eq. (7.29) obeys the differential equation (7.8) of combined advection and diffusion.
7.7 Show that approximation (7.37) obeys the diffusion equation (7.38). Remember to write σ² = 2Dx/u.
7.8 Write down the differential equation corresponding to Eq. (7.39).
7.9 (a) Consider an instantaneous point source of salt in water at rest. Determine the time it takes for the root mean square distance √⟨x²⟩ = 1 [m] to be reached. (b) Consider two flat horizontal plates at a distance W and a large depth h. The upper plate moves with a velocity U/2 to the right and the lower one with the same velocity to the left. Assume a fluid between the plates, which follows the plates, with u = 0 and a linear behaviour of u(y) without turbulence. Calculate K = U²W²/(120D) from Eq. (7.62). Substitute U = 10⁻² [m s⁻¹], W = 10⁻³ [m] and D = 1.3 × 10⁻⁹ [m² s⁻¹]. Find K. Use Eqs. (7.65) and (7.23) for n = 1 to calculate the time it takes for a root mean square distance of √⟨x²⟩ = 1 [m] to be reached (inspired by [5], Section 4.1.3).
7.10 Two channels each have a flow of 2 [m³ s⁻¹]; one has a tracer concentration C₀, the other one is free of tracers. At a certain point they join and form one wider straight channel with width W = 77 [m], depth d = 0.70 [m] and slope S = 0.001. Calculate u and ε_y for the joint channel. Make an estimate of the mixing length. Mathematically inclined students may do this exactly from Eq. (7.83).
7.11 Calculate the porosity n for a sample of small spheres of radius R, (a) in a cubic array, (b) stacked as a rhombohedron.
7.12 In Figure 7.9 the depth of the top layer is d > 0, ρ is the density of water and ρ_w the density of wet soil. Derive a condition for quicksand to occur in terms of h/d, ρ and ρ_w.
7.13 Calculate Q of Eq. (7.125) with 'realistic' parameters H = 10 [m], b = 30 [m], a = 5 [m], k = 10⁻⁹ [m s⁻¹] (clay) or k = 10⁻⁴ [m s⁻¹] (sand).
7.14 Calculate h_q of Eq. (7.142) with the following data: L = 10 [m], h₁ = 5 [m], h₂ = 3 [m] and k = 10⁻⁹ [m s⁻¹] (clay) or k = 10⁻⁴ [m s⁻¹] (sand).
7.15 Use Figure 7.14 to find the stagnation point S. Determine the value of the stream function Ψ for the streamline through S. Find the intersection of this streamline with the y-axis.
7.16 In a confined aquifer with thickness T there is a source of strength Q at x = −d and a sink with strength −Q at x = +d, both on the x-axis. Write down the Φ- and Ψ-functions for each, add them, and indicate how to find the streamlines. Use Eqs. (7.155) and (7.159) and find for the streamlines circles with their centre on the y-axis. Indicate the flow direction in the streamlines. Treat this as a two-dimensional problem.
7.17 In Eq. (7.172) take the following values: h₀ = 20 [m], s₀ = 1 [m], S = 10⁻³, k = 10⁻⁴ [m s⁻¹], H = 10 [m]. Plot h(x, t = 10 [s]), h(x, t = 20 [s]), h(x, t = 30 [s]) as a function of x and find that the lowering of the groundwater head proceeds to the right.
7.18 Prove that for a fluid at rest the stress tensor σ_ij is isotropic: σ_ij = δ_ij σ_kk/3.
7.19 Show that p = −σ_ii/3 (Eq. (7.199)) may be interpreted as the average over all directions of the normal components of the stress tensor.
7.20 Derive Eq. (7.207).
7.21 Consider a vertical plane on which a thin film of fluid with thickness d is slowly streaming downwards under the influence of gravity. There are no accelerations, but the viscosity μ ≠ 0. Write down the equations of motion for a volume element with thickness (d − x) at the outside of the flow, using gravity and viscosity only, and calculate the downward velocity u_y(x) as a function of the distance x to the wall. The flow is laminar, the velocity u_y is a function of x only and u_y(x = 0) = 0.
7.22 Use equations (7.221), (7.222) and (7.223) to calculate the times for molecular diffusion in a heated room of 10 [m] × 10 [m] and the time for turbulent diffusion. To calculate the latter one may assume that air heated above a radiator by about 10 [°C] is accelerated up to some 10 [cm] above. The resulting velocity u should be reduced by a reasonable factor, as it has to work its way against the hotter air at the top of the room. Calculate the Reynolds number as well.
7.23 Apply conservation of mass to Eq. (7.231), assuming that the coefficients of turbulent diffusion are independent of location. Take the dominant flow in the x-direction and write down the solution for a point source. Compare with solution (7.232).
7.24 Show from Eq. (7.243) that one may write σ_y = i_y x and σ_z = i_z x. Rewrite Eq. (7.250) in terms of y/h and z/h. Make a graph of Eq. (7.250) as a function of x/h and y/h. Introduce K = CUh²/q and draw a contour with K = 0.10, using i_y = 0.2 and i_z = 0.1.
7.25 Reproduce Figure 7.19.
7.26 The total output of a pollutant from a simple jet is given by Y = QC₀. At a certain position z far away one may calculate the throughput of pollutant per second as the integral over a cross section of Cw, where for C and w the asymptotic expressions may be used. Do this and use the values of the experimentally determined parameters given in Section 7.6. You will find 0.83QC₀, which is smaller than the expected QC₀. Where does the difference originate from?
References

[1] Csanady, G.T. (1973) Turbulent Diffusion in the Environment, Reidel, Dordrecht, the Netherlands. A good reference for Sections 7.1 and 7.6, with much attention to random motion.
[2] Jansen, L.P.B.M. and Warmoeskerken, M.M.C.G. (1987) Transport Phenomena Data Companion, Edward Arnold, London.
[3] Cunge, J.A., Holly, F.M. Jr. and Verwey, A. (1980) Practical Aspects of Computational River Hydraulics, Pitman, London. For use in the study of Section 7.2.
[4] Vreugdenhil, C.B. (1989) Computational Hydraulics, Springer, Berlin. Useful for computational application of Sections 7.2 and 7.3.
[5] Fischer, H.B., List, E.J., Koh, R.C.Y. et al. (1979) Mixing in Inland and Coastal Waters, Academic Press, New York. We draw extensively on this book for examples and illustrations.
[6] van Mazijk, A., Wiesner, H. and Leibundgut, C. (1992) Das Alarmmodell für den Rhein – Theorie und Kalibrierung der Version 2.0-DGM 36, Civiele Techniek, Delft, Netherlands.
[7] Fetter, C.W. (1993) Contaminant Hydrogeology, MacMillan, New York.
[8] Devinny, J.S., Everett, L.G., Lu, J.C.S. and Stollar, R.L. (1990) Subsurface Migration of Hazardous Wastes, Van Nostrand Reinhold, New York.
[9] Verruyt, A. (1982) Theory of Groundwater Flow, MacMillan, London.
[10] de Marsily, G. (1986) Quantitative Hydrogeology, Groundwater Hydrology for Engineers, Academic Press. A fine text in the best French tradition. Helpful for Section 7.3.
[11] Harr, M.E. (1991) Groundwater and Seepage, Dover, New York. Reprint of a 1962 text with much attention to analytical methods and complex functions. Note the other sign convention for the stream function.
[12] Batchelor, G.K. (1970) An Introduction to Fluid Dynamics, Cambridge University Press, Cambridge, UK. A fine text for all mathematical aspects of flow discussed in Section 7.4 and the exercises.
[13] Tritton, D.J. (1988) Physical Fluid Dynamics, Clarendon Press, Oxford. Useful for Section 7.4.
[14] Tennekes, H. and Lumley, J.L. (1972) A First Course in Turbulence, MIT Press, Cambridge, MA, USA. A standard and lucid text, especially for Section 7.4.6.
[15] Frisch, U. (1995) Turbulence, Cambridge University Press, Cambridge, UK. Much attention is paid to Kolmogorov's contributions.
[16] van Dop, H. (no year) Luchtverontreiniging, Bronnen, Verspreiding, Transformatie en Dispositie, SDU, KNMI, De Bilt, Netherlands. In Dutch. [17] Zuba, G. (1991) Air pollution modelling in complex terrain (in German), in Computer Science for Environmental Protection (eds M.H. Halker and A. Jaeschke), Springer, Berlin, pp. 375–384. [18] Gifford, F.A. Jr. (1961) Use of routine meteorological observations for estimating atmospheric dispersion. Nuclear Safety, 2, 47. [19] Erbrink, J.J. (1991) A practical model for the calculation of σ y and σ z for use in an online Gaussian dispersion model for tall stacks, based on wind fluctuations. Atmospheric Environment, 25A(2), 277–283.
8
Monitoring with Light

Accurate quantitative analysis of the composition of the soil, the surface waters or the atmosphere is crucial in our assessment of the quality of the environment and our judgement of the relative success of measures taken to reduce pollution. Many of the techniques used to analyse the environment are based on spectroscopy. This is mainly due to the fact that each atom, molecule (small or large) or molecular aggregate is uniquely characterized by a set of energy levels. Transitions between atomic or molecular levels by the absorption or emission of electromagnetic radiation result in highly specific spectroscopic features. Moreover, since each atom or molecule interacts directly with its close environment, the relevant energy levels and transition intensities may be perturbed, so each atom or molecule acts as a sensitive probe of its surroundings. These properties allow both the identification and the quantification of trace amounts of specific elements or molecules and an accurate assessment of their environment.

This chapter starts with an overview of basic spectroscopy. We do not give the derivations, which one will find in quantum mechanics textbooks, and restrict ourselves to concepts and results. In the remainder of the chapter we give examples of monitoring the atmosphere by satellites and of the very widely used techniques of laser remote sensing. Methods to analyse samples for trace elements can be found on our web site.
8.1
Overview of Spectroscopy
Environmental Physics: Sustainable Energy and Climate Change, Third Edition. Egbert Boeker and Rienk van Grondelle. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd.

For atoms and molecules all states and their corresponding energies are given by the solutions of the stationary Schrödinger equation for that system. For atoms this yields a set of electronic states, each characterized by its own combination of quantum numbers. Electronic transitions may occur between these states, giving absorption (increased energy) or emission (decreased energy) of electromagnetic radiation. For molecules the states are calculated by the assumption that to first order the motion of the electrons can be
separated from the motion of the nuclei (the latter is assumed to be much slower: the Born–Oppenheimer approximation). In addition, the rotation of the molecule is separated from the relative vibrations of its nuclei. Thus, in general, the energy of a molecule is written as

EMOL = EEL + EVIB + EROT
(8.1)
Consequently, molecules may undergo transitions, not only between electronic states, but also between different vibrational and rotational states. In Figure 8.1 the energy levels of an arbitrary molecule are displayed. In a typical absorption experiment the amount of light transmitted by a sample is monitored by a detector as the frequency of the light source is swept over the desired frequency range. An atom or molecule may absorb electromagnetic radiation if the following condition is met:

Ef − Ei = ħω
(8.2)
where Ef – Ei is the difference in energy between the initial and final states involved and ω is the angular frequency of the incident light beam. The initial states Ei will be the occupied
Figure 8.1 Energy levels of an arbitrary molecule. States with energies up to kT above the lowest state are occupied. Absorption occurs from the occupied states to higher levels. For emission the molecule must first be excited to a higher state (for example E3) and then, in a few steps of radiationless decay, decrease in energy by exchanging it with the surroundings or the many internal degrees of freedom. Finally, in the region E1 emission of radiation may occur.
states, the lowest energy levels of the molecule, which by thermal motion are excited up to an energy of kT. In emission spectroscopy the electromagnetic radiation emitted by the molecules/atoms is detected as a function of the frequency of the radiation. Here a molecule is first excited to a higher energy, say E3 (see Figure 8.1), and then in a few steps returns to the lower states. It first transfers energy to the surroundings or to internal degrees of freedom by a process called radiationless decay; radiation does occur during this stage, but it is a minor effect. At low energies around E1 radiative transfer to the ground state becomes dominant. Since in principle the same energy levels are involved in the absorption and in the emission process, the same spectroscopic information is obtained from both types of experiments, and it will depend on the specific conditions which one is preferred.

The absorption and emission spectra thus obtained exhibit a number of lines or bands; some will be intense, some weak, while expected lines may be totally absent. Most lines will have a certain width. It is the purpose of this section to explain in general terms why this is so and how the measured spectra are related to atomic and molecular structures. Figure 8.2 summarizes the various types of spectroscopy that are available for these studies. We distinguish these types by the energy of the photon involved in the transition, represented by its wavelength λ or frequency ν.

We now sketch briefly the various types of spectroscopy, going from left to right in Figure 8.2. In nuclear magnetic resonance spectroscopy (NMR) the transitions between nuclear spin states in an applied magnetic field are detected. Typical frequencies are in the range of ν ≈ 100 [MHz], corresponding to roughly ν̄ = 0.0033 [cm⁻¹]. The wavenumber ν̄ is given by ν̄ = ν/c = λ⁻¹ and is for historic reasons expressed in [cm⁻¹]. In the following, according
Figure 8.2 The electromagnetic spectrum and the classification of the various spectral regions, from the radiofrequency region through the microwave, far- and near-infrared, visible and ultraviolet regions to X-rays and γ-rays. The frequencies correspond to the left-hand side of their quadrangle. Note that the energy values kT = 0.025 [eV] and E = 1 [eV] are indicated in the spectrum with their corresponding wavenumber and wavelength. (Adapted from Physical Chemistry, P.W. Atkins, fig 18.1, p 432.)
to common practice, the bar on top of the wavenumber is sometimes omitted, but in that case the added dimension [cm⁻¹] should avoid confusion. NMR spectroscopy is a powerful tool for detecting magnetic interactions of nuclei and the spectra are very sensitive to the molecular structure and chemical environment in which these nuclei are placed. Consequently, NMR can be applied to identify and even resolve three-dimensional structures of complex molecules such as proteins in solution and in the solid phase. The largest NMR machines available today use a frequency ν ≈ 700 [MHz].

In electron spin resonance spectroscopy (ESR), atomic or molecular species containing unpaired electrons, such as organic free radicals, are placed in an external magnetic field. Transitions between electron spin states can then be induced by radiation in the microwave region at about 10 [GHz]. ESR in the frequency region of 9500 [MHz] is called X-band ESR; in the region of 35 [GHz] it is called Q-band ESR. Note that in Figure 8.2 the energy kT = 0.025 [eV] is indicated, which corresponds to room temperature T = 300 [K]. By kT = hν this corresponds to a wavenumber ν̄ ≈ 200 [cm⁻¹], also indicated. It follows that in both NMR and ESR the relevant energy differences are much smaller than kT at room temperature, implying that the signals are often very weak due to the extremely small population difference between the states involved (see Section 8.1.1 below).

In rotational spectroscopy transitions between different rotational states of a molecule are observed. Most of these transitions occur in the microwave region, except those of very light molecules, which take place in the far infrared. Transitions between vibrational states in molecules occur in the infrared region of the spectrum, approximately between wavelengths of 50 [μm] and 2.5 [μm].
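The conversions between frequency, wavenumber and thermal energy used above are easy to make explicit. The following minimal sketch uses CODATA constants; the helper names are ours, not the book's.

```python
# Convert between frequency [Hz], wavenumber [cm^-1] and energy [eV],
# reproducing the order-of-magnitude values quoted in the text.
h = 6.62607015e-34      # Planck constant [J s]
c_cm = 2.99792458e10    # speed of light [cm/s], so wavenumbers come out in [cm^-1]
k = 1.380649e-23        # Boltzmann constant [J/K]
eV = 1.602176634e-19    # 1 eV in [J]

def freq_to_wavenumber(nu_hz):
    """Wavenumber nu_bar = nu/c, in [cm^-1]."""
    return nu_hz / c_cm

def thermal_wavenumber(T):
    """kT expressed as a wavenumber [cm^-1], via kT = h*c*nu_bar."""
    return k * T / (h * c_cm)

print(freq_to_wavenumber(100e6))     # 100 MHz NMR line: ~0.0033 [cm^-1]
print(thermal_wavenumber(300.0))     # kT at room temperature: ~208 [cm^-1]
print(k * 300.0 / eV)                # kT at room temperature: ~0.026 [eV]
```

The same two helpers reproduce the ν̄ ≈ 200 [cm⁻¹] marker of Figure 8.2 and the 0.0033 [cm⁻¹] quoted for NMR.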
The energy difference between different vibrational states is larger than kT (corresponding to λ = ν̄⁻¹ ≈ 50 [μm] at room temperature) and consequently transitions involving the lowest vibrational level of a molecule are easily observed. Often 'overtones', i.e. transitions to high vibrational states, are possible in molecules and the corresponding wavelengths may extend into the 2.5 [μm] to 800 [nm] region of the spectrum.

Raman spectroscopy explores vibrational and rotational energy levels of molecules by studying the frequency of light scattered by the molecules. As a consequence of the scattering process, the molecule may end up in a higher vibrational or rotational state and the scattered light emerges with a lower frequency than the original excitation beam and in a different direction. These low-energy scattered photons give rise to 'Stokes Raman scattering'. Another kind of Raman scattering occurs if molecules are present in excited vibrational/rotational states; the scattered radiation may then pick up some energy and emerge with a higher frequency than the excitation beam, producing 'anti-Stokes Raman scattering'. Note that in general most of the scattered radiation will have the same frequency as the original excitation source (Rayleigh and Mie scattering). Relative to this elastic scattering the Raman effect is very weak and requires highly sophisticated equipment to be observed accurately.

Electronic transitions between electronic states of atoms and molecules occur over a wide range of energies extending over the whole near-IR (near-infrared), visible, ultraviolet (UV) and far-UV regions. Electronic spectra give rise to the line spectra of atoms (such as the well-known spectral series of the H atom) and the complex spectra of molecules. In important biological molecules such as chlorophyll, β-carotene, aromatic amino acids or the DNA bases, the absorption arises from electronic transitions between delocalized
π-electron levels. Their wavefunctions are linear combinations of 2p atomic orbitals and these transitions are generally very intense.

X-ray spectroscopy is used to study the inner electrons of atoms, which may be part of molecules. Such electrons are much more strongly bound than the electrons involved in optical electronic spectroscopy and consequently high-energy photons are involved in absorption or emission. Examples are X-ray absorption, X-ray emission, photoelectron spectroscopy and Auger spectroscopy. X-ray spectroscopy is outside the scope of this book.

8.1.1
Population of Energy Levels and Intensity of Absorption Lines
The measured intensity of an absorption line from a level with energy E depends on the population of that level. The probability P(E1) that an atom/molecule at a temperature T is in a level with energy E1 above the lowest level is given by the Boltzmann equation

P(E1) ∝ g(E1) e^(−E1/kT)
(8.3)
where g(E1) is the degeneracy of the level E1. We met this relation in Eq. (2.25), where, in the derivation of the Einstein coefficients, for simplicity the degeneracy of the levels was ignored. If two transitions occur in the same atom/molecule, one arising from level E1, the other one from E′1, then, with all other factors being equal, the ratio I′/I of the intensities of these two transitions will be given by the ratio of the populations of both levels or

I′/I = [g(E′1)/g(E1)] e^(−(E′1 − E1)/kT)
(8.4)
The relative intensity of both lines in an absorption spectrum thus depends on the value of E′1 − E1 relative to kT. For a rotational transition in a medium-sized molecule, with kT ≈ 200 [cm⁻¹] at room temperature, we have E′1 − E1 ≪ kT, so both lines will be observed with comparable intensities. For electronic and vibrational transitions the separation between two adjacent energy levels is of the order of 10⁴ to 10⁵ [cm⁻¹] and 10² to 10³ [cm⁻¹], respectively; consequently only (or mainly) transitions from the lowest level will be observed. Because of the factor kT in Eq. (8.4), cooling will have strong effects on rotational spectra, while in vibrational spectra the already weak absorption lines from higher excited states will disappear.

8.1.2
Transition Dipole Moment: Selection Rules
As shown in Section 2.2.1, transitions between the levels Ei and Ef induced by the electric field component of the electromagnetic field are governed by the size of the transition electric dipole moment

μif = ⟨i|μ|f⟩
(8.5)
In Eq. (2.17) the complex conjugate of this matrix element was written down, but as only the absolute value of the matrix element matters, we adopt the conventional form (8.5). Transitions between levels i and f can only be induced by the electromagnetic field if the matrix element μif is different from zero. This implies that only a selection of all possible
combinations (i, f) give a transition. This selection is represented by so-called selection rules. One of the selection rules for optical transitions that immediately follows from Eq. (8.5) arises from the conservation of angular momentum in the system 'atom/molecule plus photon'. Since the photon carries one unit of angular momentum, this must be accounted for in the absorption of the photon to an excited state. For this reason the transition between the |1s⟩ and |2s⟩ states in atomic hydrogen cannot be induced by electromagnetic radiation. Indeed this transition is not observed in an absorption or emission spectrum and is 'forbidden'. On the other hand, for the |1s⟩ to |2p⟩ transition in atomic hydrogen the value of the orbital angular momentum quantum number l increases from l = 0 to l = 1 (Δl = +1). This transition is observed as a strong line and is 'allowed'. The definition μ = −Σi qi ri of the dipole operator (2.15) shows that the two states involved must have opposite parity, as the operator μ changes sign under inversion r → −r.

For complex molecules the symmetry properties of the electronic and vibrational wavefunctions play an essential role in deciding whether a particular transition is allowed or forbidden. For rotational transitions the transition electric dipole moment μif equals zero unless the molecule possesses a permanent dipole moment. The classical basis for this rule is that a rotating polar molecule represents a dipole fluctuating in time, which can interact with the oscillating electromagnetic field. This implies that for a rotational transition the molecule must be polar. For instance, a molecule like H2O will show a rotational spectrum, while N2, CO2 and CH4 will not. For vibrational transitions the transition electric dipole moment is zero unless the electric dipole of the molecule fluctuates during the excited vibration, either in size or in direction.
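The hydrogen case can be checked by direct integration of the matrix element (8.5). The sketch below (atomic units, with the standard textbook 1s, 2s and 2pz wavefunctions; scipy is assumed to be available) evaluates ⟨1s|z|2s⟩ and ⟨1s|z|2pz⟩ numerically: the first vanishes by parity, the second equals the known analytic value 128√2/243 ≈ 0.745 a0.

```python
# Numerically evaluate <1s|z|2s> and <1s|z|2p_z> for atomic hydrogen
# (atomic units, z = r*cos(theta)); parity makes the first integral vanish.
import numpy as np
from scipy.integrate import dblquad

SQPI = np.sqrt(np.pi)

def psi_1s(r, t):  return np.exp(-r) / SQPI
def psi_2s(r, t):  return (2.0 - r) * np.exp(-r / 2) / (4 * np.sqrt(2 * np.pi))
def psi_2pz(r, t): return r * np.exp(-r / 2) * np.cos(t) / (4 * np.sqrt(2 * np.pi))

def z_matrix_element(psi_a, psi_b, rmax=60.0):
    """<a| r cos(theta) |b>, volume element 2*pi*r^2*sin(theta) dr dtheta."""
    f = lambda r, t: (psi_a(r, t) * r * np.cos(t) * psi_b(r, t)
                      * 2 * np.pi * r**2 * np.sin(t))
    val, _ = dblquad(f, 0.0, np.pi, 0.0, rmax)   # t over [0, pi], r over [0, rmax]
    return val

print(z_matrix_element(psi_1s, psi_2s))    # ~0     : |1s> -> |2s> forbidden
print(z_matrix_element(psi_1s, psi_2pz))   # ~0.745 : |1s> -> |2p> allowed
```

Only the angular part decides the outcome here: ∫cos(θ)sin(θ)dθ over [0, π] vanishes for s → s, while the extra cos(θ) of the 2pz orbital makes the s → p integral finite.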
Examples of such fluctuations are the asymmetric stretching and bending vibrations of CO2, H2O, CH4 and so on. Such vibrational transitions are indeed observed. A molecule like N2 does not possess a vibration that is associated with a change in dipole moment and, consequently, does not exhibit a vibrational spectrum.

8.1.3
Linewidths
Spectral lines are not infinitely narrow, but show a certain width Δω, which corresponds to an energy ΔE = ħΔω. A large variety of phenomena and processes may contribute to the observed linewidths and here we will discuss a few, assuming a single type of atom or molecule. In principle, we distinguish 'homogeneous broadening' and 'inhomogeneous broadening'. In homogeneous broadening all the atoms/molecules that contribute to an absorption line suffer from the same broadening of the line. In inhomogeneous broadening different atoms/molecules of the sample absorb at slightly different frequencies, which results in a considerable line broadening since one is observing the sample as a whole. The latter effect can be overcome by performing 'single molecule' spectroscopy, by which a single molecule is studied as a function of time.

8.1.3.1
Homogeneous Broadening
We first discuss lifetime broadening as a form of homogeneous broadening. If the excited state produced by the absorption of electromagnetic radiation has a finite lifetime τ , due
to the possibility of spontaneous emission, the correct expression for the time-dependent excited-state wavefunction is:

Ψf(r, t) = Ψf(r, 0) e^(−i(Ef/ħ)t − t/(2τ))
(8.6)
The probability function |Ψf(r, t)|² behaves like e^(−t/τ), which corresponds to the finite lifetime τ. We may express the time-dependent part of Ψf(r, t) as a linear combination of pure oscillating functions of the type e^(−i(E/ħ)t) by means of a Fourier transformation and obtain

e^(−i(Ef/ħ)t − t/(2τ)) = ∫ L(E) e^(−i(E/ħ)t) dE    (8.7)

in which L(E) is the distribution or lineshape function. Eq. (8.7) demonstrates that the finite excited-state lifetime τ implies that we have an 'uncertainty' in the excited-state energy, reflected by the distribution function L(E), which can be calculated to have the form

L(E) = (ħ/τ) / [(Ef − E)² + (ħ/2τ)²]
(8.8)
The lineshape represented by Eq. (8.8) is called a Lorentzian shape. The full width at half maximum (FWHM) of this function is given by:

ΔE = ħ/τ    (8.9)

or

Δω = 1/τ    (8.10)
Note that Eqs. (8.9) and (8.10) are strongly reminiscent of Heisenberg's uncertainty relation and therefore lifetime broadening is often referred to as uncertainty broadening. For atoms/molecules where the excited-state lifetime is only determined by spontaneous emission, the lifetime is called the natural or radiative lifetime τR, given by

τR⁻¹ = A
(8.11)
where A [s⁻¹] is the Einstein coefficient (2.29) for the transition, representing the number of transitions per second. The resulting 'natural' or 'radiative' linewidth is the minimum linewidth. For atoms in the gas phase or for molecules at T ≈ 0, where other decay processes no longer contribute, τR⁻¹ may be the observed linewidth Δω. Note that natural linewidths depend strongly on the emission frequency ω through the Einstein coefficient (2.29). Since they increase with ω³, electronic transitions will have significant natural linewidths, while rotational transitions will be very narrow. Typically, for an allowed electronic transition the natural lifetime may be of the order of 10⁻⁷ to 10⁻⁸ [s], corresponding to a natural linewidth Δν = Δω/2π of 1.5 to 15 [MHz]. In contrast, a typical natural lifetime for a rotational transition is about 10³ [s], corresponding to a natural linewidth of only 1.5 × 10⁻⁴ [Hz]. Normally, for atoms and molecules many processes contribute to the shortening of the lifetime τ and hence increase the width of the absorption band. Let us discuss atoms in the gas phase, which will undergo collisions. One usually assumes that the phases of the excited-state wavefunctions before and after the collision are totally uncorrelated. This means that
the wavefunction no longer has the form (8.6) of a pure wave train, only disturbed by the natural lifetime. If the natural lifetime is long compared to the time scale of the collisions, one may approximate the wave train by a pure wave of one frequency only, but chopped into portions of length τcoll, which would correspond to the time between collisions. The relative phases of these portions will be completely random. Collisions in this way produce a dephasing of the wavefunction. One may show ([2], pp 84–88) that this will result in a Lorentzian lineshape of the transition with the FWHM given by

Δωcoll = 2/τcoll
(8.12)
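The orders of magnitude quoted in this subsection follow directly from Eqs. (8.10) and (8.12) with Δν = Δω/2π; a quick numerical check (the function names are ours):

```python
import math

def natural_linewidth_hz(tau_s):
    """FWHM in [Hz] from lifetime broadening, Eq. (8.10): d_omega = 1/tau."""
    return 1.0 / (2.0 * math.pi * tau_s)

def collision_linewidth_hz(tau_coll_s):
    """FWHM in [Hz] from collisional dephasing, Eq. (8.12): d_omega = 2/tau_coll."""
    return 2.0 / (2.0 * math.pi * tau_coll_s)

# Allowed electronic transition, tau ~ 1e-7 .. 1e-8 [s] -> roughly 1.6 .. 16 [MHz]
print(natural_linewidth_hz(1e-7) / 1e6, "MHz")
print(natural_linewidth_hz(1e-8) / 1e6, "MHz")
# Rotational transition, tau ~ 1e3 [s] -> ~1.6e-4 [Hz]
print(natural_linewidth_hz(1e3), "Hz")
# Collisions at atmospheric pressure, tau_coll ~ 3e-11 [s] -> ~1e10 [Hz]
print(collision_linewidth_hz(3e-11) / 1e9, "GHz")
```

The collisional width of order 10¹⁰ [Hz] indeed exceeds the few-MHz natural linewidth by three orders of magnitude, as stated in the text.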
A reasonable value of the collision time at a gas density corresponding to the atmospheric pressure of 10⁵ [Pa] and at room temperature is τcoll ≈ 3 × 10⁻¹¹ [s] or Δνcoll ≈ 10¹⁰ [Hz], which largely exceeds the natural linewidth. The width at an arbitrary pressure p and temperature T can be calculated from the one at some standard values (p0, T0) by:

Δνcoll(p, T) = Δνcoll(p0, T0) (p/p0) √(T0/T)    (8.13)

Molecules in the condensed phase (in solution, or in a host matrix) will interact with their surroundings and 'collisions' will occur between the molecule and phonons in the medium, also leading to dephasing of the excited state. The homogeneous linewidth Δνhom of a molecular excited state is often expressed as

Δνhom = 1/(2πT1) + 1/(πT2*)
(8.14)
where T1 is the excited-state lifetime and T2* is the pure dephasing time. Note that T1 is generally much smaller than the natural lifetime τR, since in molecules many processes may contribute to the decay of the excited state.

8.1.3.2
Inhomogeneous Broadening
Inhomogeneous broadening of spectra occurs when different atoms/molecules contribute to slightly different parts of the spectrum. An example is so-called Doppler broadening, caused by the fact that in the gas phase the atoms/molecules do not all have the same velocity, but their velocities follow a distribution around a certain value. For instance, in emission the frequency of the emitted light suffers from a Doppler shift, which is due to the component u of the initial atomic/molecular velocity in the direction of the emitted photon's wave vector:

ω ≈ ω0 (1 + u/c)    (8.15)

A positive value u > 0 represents a particle moving towards the detector. Because in gases atoms and small molecules may reach high speeds, the whole Doppler-shifted range of frequencies will be detected in an absorption or emission spectrum. Since, according to the Maxwell distribution, the relative probability that a particle has a velocity component u
in a particular direction has a Gaussian shape, the resulting Doppler-broadened lineshape will also be Gaussian. For a molecule/atom of mass M at temperature T the FWHM of this Gaussian lineshape function is given by:

ΔνD = 2ν0 √(2kT ln 2/(Mc²))    (8.16)

An illustrative application of Eq. (8.16) is the measurement of the surface temperature of stars from the widths of particular emission lines in their spectrum. For instance, the sun emits a spectral line at 677.4 [nm], which is due to a transition involving highly ionized ⁵⁷Fe. From the measured FWHM of 5.3 × 10⁻³ [nm] one would calculate, using Eq. (8.16), the sun's temperature as 6840 [K], which compares well with the temperature of 5800 [K] estimated from Wien's law (2.5).

For atoms or molecules in a nongaseous environment, the interaction with the host will fine-tune their absorption frequencies. For a crystalline host this may give rise to a (limited) set of sharp, well-defined lines in low-temperature spectra. For molecules in solution or in a glassy environment (such as a protein or a membrane) a broad distribution of sites may exist, each with its own transition frequency. This leads to the broad (Δν̄ ≈ 35 [cm⁻¹]) unstructured inhomogeneously broadened absorption bands observed for many molecules, amongst which are the natural pigments, such as chlorophyll and β-carotene. Specific techniques have to be applied to resolve the underlying homogeneous linewidths.

8.1.3.3
Composite Lineshapes
When two independent processes contribute to the lineshape, the resulting spectral line will be given by a so-called convolution of both lineshapes:

L(ω) = ∫₋∞^∞ L1(ω′) L2(ω + ω0 − ω′) dω′    (8.17)
where ω0 is the common central frequency of the two individual lineshape functions.
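Eq. (8.17) can be evaluated numerically. The sketch below first reproduces the Doppler FWHM of Eq. (8.16) for the solar ⁵⁷Fe example, then convolves that Gaussian with a purely illustrative Lorentzian of half its width (an arbitrary choice, not taken from the text); the result is the familiar Voigt profile, whose FWHM exceeds that of either component.

```python
import numpy as np

k, c, u = 1.380649e-23, 2.99792458e8, 1.66053907e-27   # SI constants

# Doppler (Gaussian) FWHM from Eq. (8.16) for the solar 57Fe line at 677.4 nm
lam0, T, M = 677.4e-9, 6840.0, 57 * u
nu0 = c / lam0
dnu_D = 2 * nu0 * np.sqrt(2 * k * T * np.log(2) / (M * c**2))
dlam_D = dnu_D / nu0 * lam0 * 1e9            # back to [nm]
print(dlam_D)                                # ~5.3e-3 [nm], as in the text

def fwhm(x, y):
    """FWHM of a sampled single-peaked curve."""
    above = x[y >= y.max() / 2]
    return above[-1] - above[0]

# Convolution per Eq. (8.17) on a frequency-offset grid
x = np.linspace(-20 * dnu_D, 20 * dnu_D, 20001)
gauss = np.exp(-4 * np.log(2) * x**2 / dnu_D**2)       # Gaussian, FWHM = dnu_D
lorentz = 1.0 / (x**2 + (dnu_D / 4)**2)                # Lorentzian, FWHM = dnu_D/2
voigt = np.convolve(gauss, lorentz, mode="same")

print(fwhm(x, gauss) / dnu_D, fwhm(x, voigt) / dnu_D)  # Voigt is the widest
```

For this choice of widths the Voigt FWHM comes out near 1.3 ΔνD, consistent with the standard approximation fV ≈ 0.5346 fL + √(0.2166 fL² + fG²).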
8.2
Atomic Spectra
The spectra of atoms show transitions between electronic states |i⟩ and |f⟩, which are allowed if the transition dipole moment (8.5) is sufficiently large. Atomic electronic states are denoted by the term symbols ^(2S+1)L_J, where 2S + 1 is the multiplicity, with S the total spin quantum number, L the total orbital angular momentum quantum number and J the total angular momentum quantum number. We shall first discuss the spectra of 'one-electron' atoms and then the spectra of 'many-electron' atoms.
8.2.1
One-Electron Atoms
One-electron atoms, such as H, Li, K and Na, have only one valence electron, while all their remaining electrons are in closed shells. Consequently, the quantum numbers J, L
and S refer only to the single valence electron. For one-electron atoms the selection rules are

ΔL = Δl = ±1    (8.18)
ΔJ = Δj = 0, ±1    (8.19)
in which l and j are, respectively, the orbital angular momentum quantum number and the total angular momentum quantum number of the single valence electron. These rules show immediately that in atomic hydrogen the optical |1s⟩ → |2p⟩ transition is allowed, while the transition |1s⟩ → |2s⟩ violates the selection rules. As an example consider the sodium D-lines, observed in the emission spectrum of sodium excited by an electric discharge. The yellow emission lines are observed at 16 956.2 [cm⁻¹] and at 16 973.4 [cm⁻¹] and are due to the transitions ²S_1/2 ← ²P_1/2 and ²S_1/2 ← ²P_3/2, respectively. The observed splitting of the sodium D-lines is called the fine structure and results from the magnetic interaction between the electron spin S and the orbital angular momentum L, which is also called the spin–orbit interaction. The size of this splitting increases sharply with atomic number (∼Z⁴).
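The quoted wavenumbers convert directly to the familiar yellow wavelengths near 589 [nm]; a one-line check using ν̄ = 1/λ:

```python
# The D-line wavenumbers in [cm^-1] convert to vacuum wavelengths in [nm]
# via lambda = 1/nu_bar, with 1 [cm] = 1e7 [nm].
def wavenumber_to_nm(nu_bar_cm):
    return 1e7 / nu_bar_cm

d1 = wavenumber_to_nm(16956.2)   # 2S1/2 <- 2P1/2
d2 = wavenumber_to_nm(16973.4)   # 2S1/2 <- 2P3/2
print(d1, d2)                    # ~589.76 and ~589.16 [nm]
print(d1 - d2)                   # fine-structure splitting, ~0.6 [nm]
```

The 17.2 [cm⁻¹] splitting thus corresponds to about 0.6 [nm] in wavelength.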
8.2.2
Many-Electron Atoms
For atoms with more than one electron outside a closed shell, the spectra rapidly increase in complexity. Where the spin–orbit interaction is relatively weak, the total spin quantum number S and total orbital angular momentum quantum number L remain good quantum numbers. Then we obtain the correct atomic states by coupling these total momenta to give the total angular momentum J. This is called LS coupling, although the term Russell–Saunders coupling is also used. For these many-electron atoms one has the following selection rules: S = 0; L = 0, ±1 with l = ±1; this means that the orbital angular momentum quantum number of an individual electron must change, but whether or not this affects the total orbital angular momentum, depends on the coupling J = 0, ±1 with J = 0 → J = 0 forbidden. Figure 8.3 shows all possible allowed dipole transitions between a p2 and an sp configuration. With increasing spin–orbit interaction in heavy atoms the LS coupling fails and then individual spin and orbital angular momenta have to be coupled first to individual j angular momenta, which then are combined into one large J ( jj coupling). In that case the selection rules given above break down and transitions between states of different multiplicity are observed. The most prominent example of jj coupling is the intense line observed in the emission spectrum of a high-pressure mercury lamp at λ = 253.7 [nm]. This line corresponds to a transition between the singlet 1 S0 level and one of the triplet 3 P1 levels of mercury. Here the jj coupled wavefunctions are indicated by the dominant parts of the wavefunction when that function is expanded into wavefunctions in LS coupling. Of course, a transition between pure 1 S0 and 3 P1 levels would still be forbidden because of the selection rule S = 0.
Figure 8.3 Schematic representation of allowed dipole transitions between a p² and an sp configuration.
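The selection rules above can be applied mechanically to the terms of the p² and sp configurations. The sketch below assumes pure LS coupling (Δl = ±1 is automatically satisfied, since the jumping electron goes p → s) and enumerates the allowed electric-dipole transitions:

```python
from itertools import product

# Terms as (label, S, L, J): p^2 gives 1S0, 1D2, 3P0,1,2; sp gives 1P1, 3P0,1,2
p2 = [("1S0", 0, 0, 0), ("1D2", 0, 2, 2),
      ("3P0", 1, 1, 0), ("3P1", 1, 1, 1), ("3P2", 1, 1, 2)]
sp = [("1P1", 0, 1, 1),
      ("3P0", 1, 1, 0), ("3P1", 1, 1, 1), ("3P2", 1, 1, 2)]

def allowed(a, b):
    """Electric-dipole selection rules in LS coupling."""
    _, S1, L1, J1 = a
    _, S2, L2, J2 = b
    if S1 != S2:               return False   # Delta S = 0
    if abs(L1 - L2) > 1:       return False   # Delta L = 0, +-1
    if abs(J1 - J2) > 1:       return False   # Delta J = 0, +-1
    if J1 == 0 and J2 == 0:    return False   # J = 0 -> J = 0 forbidden
    return True

lines = [(a[0], b[0]) for a, b in product(p2, sp) if allowed(a, b)]
for t in lines:
    print("%s (p2) <-> %s (sp)" % t)
print(len(lines), "allowed transitions")
```

Note how the singlet and triplet manifolds stay strictly separate (ΔS = 0), exactly the rule that breaks down in heavy atoms once jj coupling takes over.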
8.3
Molecular Spectra
Molecular spectra are generally much more complex than atomic spectra. Their complexity arises from the large number of states, rotational, vibrational and electronic, between which transitions may be observed. Moreover, the relative motion of their atomic nuclei may be coupled to the electronic transitions, further adding to the complexity. Nevertheless, the molecular spectra constitute a rich source of information about molecular structure, molecular dynamics and molecular environment, and we shall illustrate a few of their important features in the following sections.

8.3.1
Rotational Transitions
Rotational transitions can occur between the rotational energy levels of a particular electronic state, in general the electronic ground state, but only if that state possesses a permanent dipole moment. The rotational energy levels will be calculated from the general expression for a freely rotating body for a few limiting cases.

8.3.1.1
Spherical Rotors
Spherical rotors are molecules with all three moments of inertia equal: Ixx = Iyy = Izz = I. An example is CH4. The energy of a spherical rotor is given by

EJ = J(J + 1)ħ²/(2I);    J = 0, 1, 2, . . .    (8.20)

where J is the quantum number for rotational motion.
The separation between two neighbouring rotational levels is:

EJ − EJ−1 = 2hcBJ
(8.21)
with B = ħ/(4πcI) the rotational constant, by convention expressed in [cm⁻¹]. Typical values of B for small molecules are in the range 1 [cm⁻¹] to 10 [cm⁻¹]. Note that EJ − EJ−1 decreases as I increases and, consequently, large molecules have closely spaced rotational energy levels.

8.3.1.2
Symmetric Rotors
Symmetric rotors have two equal moments of inertia, with, for example, Ixx = Iyy = I⊥ and Izz = I∥. Examples of such molecules are NH3 and C6H6. The energy of such a rotor can be expressed as:

EJK = hcBJ(J + 1) + hc(A − B)K²;    J = 0, 1, 2, . . . ; K = 0, ±1, ±2, . . . , ±J    (8.22)
with A = ħ/(4πcI∥) and B = ħ/(4πcI⊥). In Eq. (8.22) the quantum number K represents the component of J along the molecular symmetry axis. It follows that with K = 0 there is no rotational angular momentum component along the symmetry axis; with K = J almost all the rotational angular momentum is along the symmetry axis.

8.3.1.3
Linear Rotors
In a linear rotor, such as CO2 and HCl, the rotation occurs only about an axis perpendicular to the line connecting the atoms. This corresponds to the situation that there is no rotational angular momentum along the connecting line, and we can use Eq. (8.22) with K = 0. We then obtain for the energy levels of a linear rotor

EJ = hcBJ(J + 1);    J = 0, 1, 2, . . .
(8.23)
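The magnitude of B and of the level spacing in Eq. (8.23) can be made concrete for a specific linear rotor. The sketch below uses CO with an assumed literature bond length of about 1.128 [Å] (an input not given in this text), and also evaluates the Boltzmann weights of Eq. (8.3) to show which J is most populated at room temperature.

```python
import numpy as np

hbar, h = 1.054571817e-34, 6.62607015e-34    # [J s]
c = 2.99792458e10                            # speed of light [cm/s] -> B in [cm^-1]
k, u = 1.380649e-23, 1.66053907e-27          # Boltzmann constant, atomic mass unit

# CO as a linear rotor: I = mu * r^2, with an assumed bond length of 1.128 A
m_C, m_O, r = 12.0 * u, 15.995 * u, 1.128e-10
mu = m_C * m_O / (m_C + m_O)                 # reduced mass
I = mu * r**2                                # moment of inertia
B = hbar / (4 * np.pi * c * I)               # rotational constant [cm^-1]
print(B, 2 * B)                              # B ~1.93 [cm^-1], line spacing 2B

# Relative populations (2J+1)*exp(-E_J/kT) at room temperature
T = 300.0
J = np.arange(0, 40)
E_J = h * c * B * J * (J + 1)                # level energies in [J]
pop = (2 * J + 1) * np.exp(-E_J / (k * T))
print("most populated J:", J[np.argmax(pop)])
```

The computed B ≈ 1.9 [cm⁻¹] sits at the lower end of the 1–10 [cm⁻¹] range quoted above, and the (2J + 1) degeneracy pushes the population maximum to J ≈ 7, which is why rotational line intensities first rise with J before tailing off.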
Note that, although K = 0, each J-level still has 2J + 1 components along some external axis fixed to the laboratory. This degeneracy may be removed, for instance by an applied electric field (Stark effect).

8.3.1.4
Selection Rules
The selection rules for rotational transitions are again determined by an analysis of the transition dipole moment (8.5). For a molecule to have a pure rotational spectrum it must possess a permanent dipole moment. Thus, diatomic molecules like O2 and N2, which consist of only one type of atom and are called homonuclear, and symmetric linear molecules are rotationally inactive. Spherical molecules can only have a rotational spectrum if they are distorted by centrifugal forces, but, in general, they are rotationally inactive. For a linear molecule the selection rule for J is

ΔJ = ±1
(8.24)
From Eq. (8.23) it follows that a pure rotational spectrum consists of a set of equidistant lines, separated by Δν̄ = Δν/c = 2B [cm⁻¹]. The relative intensities of the rotational lines increase with increasing J, pass through a maximum and eventually tail
off as J becomes large. This pattern is an immediate consequence of the Boltzmann population of the various rotational energy levels at room temperature, which for a linear molecule is determined by (cf. Eq. (8.3))

N(EJ) ∝ (2J + 1) e^(−EJ/kT)    (8.25)

8.3.2
Vibrational Transitions
Vibrational transitions can occur between the vibrational states of an electronic state, which under normal conditions is the ground state. For a vibrational transition to be allowed, the vibration must change the dipole moment of the molecule, in magnitude or in direction. For calculations one assumes the Born–Oppenheimer approximation to be valid. This approximation states that on the time scale of the motion of the electrons, the atomic nuclei may be considered at rest. On a slower time scale the nuclei move in a potential which – following the Born–Oppenheimer approximation – is calculated as follows. One assumes a certain configuration of the nuclear coordinates with respect to the equilibrium situation, indicated by R. For this configuration the electronic energies are calculated and added, giving an energy indicated by Eel(R). Together with the Coulomb repulsion between the nuclei, the resulting function of R represents the common potential in which the nuclei move. Close to the equilibrium the potential can often be approximated by a harmonic potential (a parabola), which is therefore widely used in calculations.

8.3.2.1 Diatomic Molecules
A diatomic molecule possesses one vibrational coordinate, the interatomic distance R. In regions close to the potential minimum, the molecular potential energy may be approximated by a parabola

V = (1/2)K(R − Re)²   (8.26)

with K the force constant and Re the equilibrium distance between the two atoms. Solving the Schrödinger equation with the harmonic oscillator potential (8.26) shows that the reduced mass μ of the two atoms undergoes harmonic motion. Therefore the permitted energy levels are given by

E_v = (v + 1/2)ℏω;   ω = √(K/μ);   v = 0, 1, 2, . . .   (8.27)
in which v is the vibrational quantum number. The corresponding wavefunctions that describe the harmonic motion can be found in any textbook on quantum mechanics. Note that the steepness of the potential function, characterized by the force constant K, determines the energies of vibrational transitions. Anharmonicity of the vibrational motion can be introduced by using a more complex form of the potential V that resembles the true potential more closely. The most frequently applied form is given by the Morse potential
V = De(1 − e^(−a(R−Re)))²   (8.28)
This has the advantage that V realistically approaches a constant value De for R → ∞. For R = Re we have V = 0, so De represents the depth of the potential. For small values of (R − Re) a series expansion again yields the shape (8.26) with K = 2Dea², so that a = ω√(μ/(2De)). For the complete Morse potential (8.28) the permitted energy levels are given by

E_v = (v + 1/2)ℏω − (v + 1/2)² x_e ℏω;   x_e = a²ℏ/(2μω)   (8.29)

where x_e is called the anharmonicity constant. In practice even higher-order terms in (v + 1/2) are added to Eq. (8.29) and the resulting expression may be fitted to the experimental data. The selection rule for transitions between vibrational states is again derived from the transition dipole moment (8.5). If we denote the total vibronic wavefunction by ψ_el(r, R)φ_v(R), where r represents all electron coordinates, then a vibrational transition requires that

⟨ψ_el φ_v′ | μ | ψ_el φ_v⟩ ≠ 0,   if v′ ≠ v   (8.30)
which is only the case if the dipole operator μ is a function of the internuclear coordinate R. Thus, expanding μ around the equilibrium position Re gives

μ = μ0(r, Re) + (∂μ(r, R)/∂R)|Re (R − Re) + . . .   (8.31)

Substituting Eq. (8.31) into Eq. (8.30), the first term in the expansion yields zero, but the second term is of the form

⟨ψ_el| (∂μ(r, R)/∂R)|Re |ψ_el⟩ ⟨φ_v′| R |φ_v⟩   (8.32)

The first factor in Eq. (8.32) shows that the molecule does not need to have a permanent dipole moment in order to exhibit dipole transitions; a fluctuating dipole moment suffices. The important selection rules follow from the second factor. From the properties of the harmonic oscillator wavefunctions it follows, within the harmonic approximation, that Eq. (8.32) is only nonzero for

Δv = ±1   (8.33)

or

ΔE = E_{v+1} − E_v = ℏω   (8.34)
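These relations can be checked numerically with the HCl constants quoted below (K ≈ 516 [N m⁻¹], μ ≈ 1.63 × 10⁻²⁷ [kg]):

```python
import math

K = 516.0        # force constant of HCl [N/m] (value quoted in the text)
mu = 1.63e-27    # reduced mass of HCl [kg]
c = 2.998e8      # speed of light [m/s]

omega = math.sqrt(K / mu)       # Eq. (8.27): omega = sqrt(K/mu) [rad/s]
nu = omega / (2 * math.pi)      # transition frequency [Hz]
lam = c / nu                    # transition wavelength [m]
wavenumber = 1.0 / (lam * 100)  # [cm^-1]

print(round(omega / 1e14, 2))   # in units of 10^14 rad/s
print(round(lam * 1e6, 2))      # in micrometres
print(round(wavenumber))        # in cm^-1
```

The result reproduces the infrared transition near 3.35 [μm] discussed in the example below.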
For example, HCl has a force constant K = 516 [N m⁻¹]. The reduced mass of HCl is 1.63 × 10⁻²⁷ [kg], which is very close to the mass of the proton: 1.67 × 10⁻²⁷ [kg]. This implies that in HCl the Cl atom is hardly moving. From these values it follows that ω = 5.63 × 10¹⁴ [s⁻¹], which implies that a transition is observed at λ = 3.35 [μm] or 2984 [cm⁻¹], in the infrared region of the spectrum. Since at room temperature kT ≈ 200 [cm⁻¹], only the v = 0 level is populated, and the vibrational absorption (and emission) spectrum of a diatomic molecule consists of a single line. Additional lines in emission spectra may be observed if a sample is (partially) heated, for instance by chemical reactions, which populates higher vibrational levels. In addition, anharmonicity, as expressed by the Morse potential, leads to the transitions 2 ← 0, 3 ← 0, and so on. From their appearance the dissociation energy of the molecule can be calculated, using the so-called Birge–Sponer extrapolation [1]. Molecules of the type O2, N2, i.e. homonuclear molecules, do not exhibit electric-dipole vibrational spectra. However, quadrupole- and pressure-induced transitions of homonuclear molecules can be faintly observed.

8.3.2.2 Vibrational-Rotational Spectra
The rotation of a molecule is influenced by molecular vibration; the two motions are therefore not independent, but coupled. A detailed analysis shows that the rotational quantum number J changes by ±1 when a vibrational transition occurs (in exceptional cases ΔJ = 0 is also allowed). Thus a vibrational transition of a diatomic molecule with two different atoms and vibrational transition wavenumber ν̃0 is found to consist of a large number of closely spaced components, and the observed spectrum may be understood in terms of the set of energy levels

E_{v,J} = (v + 1/2)ℏω + hcB J(J + 1);   Δv = ±1;   ΔJ = ±1   (8.35)

In a more detailed treatment the dependence of B on the vibrational quantum number is taken into account, but this is ignored here. From Eq. (8.35) one observes that the absorption spectrum consists of two branches, one with ΔJ = −1 and one with ΔJ = +1. The P-branch has ΔJ = −1 and ν̃ = ν̃0 − 2BJ [cm⁻¹], with an intensity distribution following the Boltzmann population over the various J-states. The R-branch has ΔJ = +1 and ν̃ = ν̃0 + 2B(J + 1) [cm⁻¹]. Sometimes the Q-branch (ΔJ = 0) is also visible. Figure 8.4 shows the high-resolution vibrational-rotational spectrum of gas-phase HCl, in which the ΔJ = 0 transition is absent. We finally note that the distribution of the absorption intensities over the rotational states is a sensitive function of the temperature.
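A short sketch of the two branches, using illustrative constants close to the literature values for HCl (ν̃0 ≈ 2886 [cm⁻¹], B ≈ 10.6 [cm⁻¹]; these numbers are assumptions, not taken from the text):

```python
def p_branch(nu0, B, J):
    """P-branch line position [cm^-1]; J = 1, 2, ... is the lower-state J (dJ = -1)."""
    return nu0 - 2 * B * J

def r_branch(nu0, B, J):
    """R-branch line position [cm^-1]; J = 0, 1, ... is the lower-state J (dJ = +1)."""
    return nu0 + 2 * B * (J + 1)

nu0, B = 2886.0, 10.6
p_lines = [p_branch(nu0, B, J) for J in range(1, 4)]   # below the band origin
r_lines = [r_branch(nu0, B, J) for J in range(0, 3)]   # above the band origin
print(p_lines)
print(r_lines)
# a gap of 4B remains around nu0, where the (absent) Q-branch would lie
```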
Figure 8.4 High-resolution vibrational-rotational spectrum of HCl. The lines appear in pairs because the spectrum reflects the presence of both H35Cl and H37Cl in their natural abundance ratio of 3:1. The ΔJ = 0 branch is absent in these spectra. (Reproduced from Physical Chemistry, P.W. Atkins, fig 18.14, p 452.)
8.3.2.3 Polyatomic Molecules
In a molecule with N atoms there exist, in general, 3N − 6 modes of vibration, and for a linear molecule 3N − 5 modes. The reason is that one has to subtract the three translational and three rotational degrees of freedom from the total of 3N degrees of freedom; for a linear molecule one has to subtract two rotational degrees of freedom instead of three. Thus H2O, a triatomic nonlinear molecule, has three modes of vibration, while CO2, which is linear, has four vibrational modes. The modes of vibration may be chosen such that each of them can be excited individually; they are called 'normal modes'. For instance, for the stretching modes of CO2, one should not take the individual stretching motions of the two C–O bonds, but their symmetric and antisymmetric linear combinations (see Figure 8.5). Together with the two bending modes one obtains the four normal modes of CO2. For more complex molecules a general method of dealing with molecular vibrations is to use group theory to classify the vibrational modes according to their symmetries. In essence, each normal mode behaves like an individual harmonic oscillator with E_Q = (n + 1/2)ℏω_Q, where ω_Q is the frequency of mode Q. This frequency depends on the force constant K_Q and the reduced mass μ_Q through ω_Q = √(K_Q/μ_Q).
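The mode count can be expressed as a one-line rule (the helper name is illustrative):

```python
def n_vibrational_modes(n_atoms, linear):
    """Number of normal modes: 3N - 5 for a linear molecule, 3N - 6 otherwise."""
    return 3 * n_atoms - (5 if linear else 6)

print(n_vibrational_modes(3, linear=False))  # H2O, nonlinear: 3 modes
print(n_vibrational_modes(3, linear=True))   # CO2, linear: 4 modes
```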
Figure 8.5 Representation of the two stretching modes and the two bending modes of CO2 . For the stretching modes on the right three different positions of the atoms in the molecule are indicated. The bending modes on the left correspond to two orthogonal planes which are physically equivalent and therefore have the same frequency.
The reduced mass μ_Q may be very different for the various normal modes, depending on which atoms are actually moving. For instance, in the symmetric stretch of CO2 (top right in Figure 8.5) the C atom is static, while in the antisymmetric stretch (bottom right in Figure 8.5) both the C atom and the two O atoms are moving. Similarly, the value of K_Q is made up of a combination of the various stretches and bends that contribute to that particular mode. The overall selection rule for infrared activity remains that the motion corresponding to the active normal mode should lead to a change in dipole moment. For CO2 this can be evaluated by simple inspection. The symmetric stretch with ν̃S = 1388 [cm⁻¹] leaves the dipole moment of CO2 unchanged at zero, and this mode is inactive in the infrared. The antisymmetric stretch with ν̃AS = 2349 [cm⁻¹] leads to a change in dipole parallel to the line connecting the three atoms and consequently gives rise to a parallel infrared transition. Both bending modes of CO2 with ν̃B = 667 [cm⁻¹] are active in the infrared and appear as bands polarized perpendicular to the long axis of CO2 in the IR spectrum.

8.3.3 Electronic Transitions
Electronic transitions in molecules occur in the visible part and in the near- and far-UV parts of the spectrum. For instance, H2O absorbs UV light below a wavelength of 180 [nm], chlorophyll shows strong absorption bands in the red (680 [nm]) and blue (400 [nm]), and so on. In principle the transitions between electronic states are governed by the same rules as those of atoms, and the selection rules are again determined by the outcome of the integral that defines the transition dipole moment (8.5). There are also some important differences with atoms, which complicate the electronic spectra, but at the same time provide a wealth of information concerning the interaction of the molecule with its environment.

8.3.3.1 Molecular Orbitals and Transition Dipoles
In most molecules, the optical transitions cannot be located on one or even a few atoms. Molecules are more or less complex arrangements of atoms, and their electronic wavefunctions are linear combinations of, in principle, all atomic orbitals. Therefore, like the vibrational states discussed above, the electronic states of molecules are classified according to their symmetry properties. The outcome of the integral in the transition dipole moment (8.5) is then only nonzero if it contains a totally symmetric contribution under the symmetry operations of the molecular point group. These general statements directly lead to the important conclusion that often an absorption line in a molecular electronic absorption spectrum corresponds to only one of the spatial components of the transition dipole. Thus the absorption and emission corresponding to that transition may be strongly polarized. For instance, in H2O the molecular orbitals are linear combinations of the two hydrogen 1s atomic orbitals and the oxygen 2s, 2px, 2py and 2pz atomic orbitals (the two 1s electrons of O are ignored, since they are too low in energy). The six atomic wavefunctions under consideration are denoted by (H 1sa), (H 1sb), (O 2s), (O 2px), (O 2py) and (O 2pz), where for hydrogen the subscripts a and b refer to the two different atoms. The structure of the H2O molecule is sketched in Figure 8.6. The distance of the two H atoms to the O atom is equal, but the two OH lines make an angle. The bisector of that angle is taken as the C2 symmetry axis, which is used as z-axis. The subscript 2 in C2 indicates that the two rotations of 0° and 180° about that axis leave the molecule unchanged. The y-axis lies in the HOH plane, which is called σv. The perpendicular x, z-plane is called σv′.

Figure 8.6 The triangular HOH molecule defines the y, z-plane, called σv. The z- or C2-axis is the bisector of the angle HOH. The x, z-plane is called σv′. Reflection in σv gives the same configuration as reflection in σv′.

With group theory one may classify the possible linear combinations of the atomic orbitals that form the bonding and antibonding molecular orbitals of H2O. The bonding orbitals, when occupied, keep the atoms together; the antibonding orbitals, when occupied, drive them apart. The symmetry group of H2O is C2v. It comprises the C2 axis and two mirror planes σv, σv′ containing the C2 axis, whence the subscript v in C2v. Using the atomic orbitals given above, we can construct a symmetry-adapted basis for the H2O molecule, which consists of the following set of wavefunctions

a1 = c1(H 1sa + H 1sb) + c2(O 2pz) + c3(O 2s)   with A1 symmetry
b1 = (O 2px)   with B1 symmetry
b2 = c1′(H 1sa − H 1sb) + c2′(O 2py)   with B2 symmetry   (8.36)
Here, A1 symmetry means that the wavefunction does not change under the three operations: rotation of 180° about the C2 axis, reflection in the σv-plane and reflection in the σv′-plane; B1 symmetry means that the wavefunction changes sign under the rotation or under reflection in the σv-plane (replacement x → −x); B2 symmetry means that the wavefunction changes sign under the rotation or under reflection in the σv′-plane. The coefficients c1, c2, c3, c1′, c2′ can be found from the variational principle. This results in a set of three a1 orbitals (1a1, 2a1, 3a1), one b1 orbital (1b1), which is the original (O 2px) orbital, and two b2 orbitals (1b2, 2b2). The wavefunction corresponding to the 1a1 molecular orbital (denoted MO) shows no nodes; the 1b2 MO has a nodal plane corresponding to the x, z-plane, while the 1b1 MO is a nonbonding orbital. The 2a1 MO is mainly located on the O atom, while the 2b2 and 3a1 MOs contain more than one node in their wavefunctions, characteristic for antibonding MOs. The energetic order of the H2O MOs is as follows:

E(1a1) < E(1b2) < E(1b1) < E(2a1) < E(2b2) < E(3a1)   (8.37)
Since we have to accommodate eight electrons (one from each H atom and six from the O atom), the ground state configuration of H2O is (1a1)²(1b2)²(1b1)²(2a1)². The lowest unoccupied molecular orbital has B2 symmetry (2b2), and it is not difficult to show that only the y-polarized component of the electric dipole transition moment is allowed. Similar considerations can be applied to all molecules, and these methods allow one to predict the ground and excited state configurations of molecules and the associated transition dipoles with great accuracy. Sometimes the absorption of a photon can be traced back to the presence of a particular set of atoms. For instance, molecules containing the –C=O (or carbonyl) group often show absorption around 290 [nm]. Similarly, molecules with many unsaturated –C=C bonds (benzene, retinal, chlorophyll) may show strong absorption in the near-UV or visible part of the spectrum. These groups are therefore responsible for the colour of the scattered light and are called chromophores, a term which simply means 'coloured'. In the case of a single –C=C bond, the highest occupied molecular orbital (HOMO) is a π-orbital; the lowest unoccupied molecular orbital (LUMO) is a π*-orbital. The π and π* orbitals are the symmetric and antisymmetric combinations of 2pz orbitals on the two C atoms, where the symbol π (the Greek equivalent of the Latin p) refers to the component of the total angular momentum along the axis connecting the two C atoms, which is equal to one. For a single –C=C bond the transition π → π* occurs at about 180 [nm]. In the case of the carbonyl group –C=O, the oxygen atom possesses an additional nonbonding (n) electron pair. Here 'nonbonding' means that these electrons do not directly contribute to the strength of the bond.
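The symmetry argument for the allowed polarization can be sketched in code, assuming the axis convention of Figure 8.6 (σv = molecular y, z-plane); the function name and table layout are illustrative:

```python
# Each C2v irrep is written as its characters under (E, C2, sigma_v, sigma_v').
irreps = {
    "A1": (1,  1,  1,  1),
    "A2": (1,  1, -1, -1),
    "B1": (1, -1, -1,  1),   # e.g. O 2px: sign change under C2 and sigma_v
    "B2": (1, -1,  1, -1),   # e.g. O 2py and (H 1sa - H 1sb)
}
# In this axis convention the dipole components transform as:
dipole = {"x": "B1", "y": "B2", "z": "A1"}

def allowed(excited, ground="A1"):
    """Dipole components for which <excited| mu |ground> can be nonzero:
    the triple product of characters must be totally symmetric (A1)."""
    out = []
    for comp, sym in dipole.items():
        chars = tuple(e * m * g for e, m, g in
                      zip(irreps[excited], irreps[sym], irreps[ground]))
        if chars == irreps["A1"]:
            out.append(comp)
    return out

# The excitation 2a1 -> 2b2 gives an excited state of B2 symmetry:
print(allowed("B2"))  # only the y-polarized component survives
```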
Although the n → π* transition is forbidden in first order, distortion of the –C=O chromophore, for instance by an out-of-plane vibration, gives the n → π* transition sufficient dipole strength to be observable; its absorption is found at about 230 to 290 [nm], for example in formaldehyde and in proteins. For molecules with extended conjugated chains or conjugated ring systems (e.g. benzene, β-carotene, chlorophyll, the visual pigment retinal, the aromatic amino acid tryptophan), the π → π* absorption is shifted to the near-UV and visible regions. Often these optical transitions are strongly allowed, and molecules like chlorophyll and β-carotene are amongst the strongest absorbing species in nature.

8.3.3.2 The Franck–Condon Principle
The excitation of an electron to a higher electronic state may be accompanied by the simultaneous excitation of a vibrational or rotational transition. This is called a vibronic transition and the corresponding excited state is called a vibronic state. For simplicity we restrict the discussion to vibrational excitations. Because of the simultaneous excitation of an electron and a vibration, a progression of vibrational lines may be observed, or just a broad line, which then is due to unresolved vibrational structure. These structures are explained by the Franck–Condon principle, which states that, because nuclei are so massive, they can be considered as static on the timescale of an electronic transition (the period of the light being of the order of 10⁻¹⁵ [s]). This principle was met in Section 8.3.2 as the Born–Oppenheimer approximation, but is here generalized to deal with vibronic excitations.

Figure 8.7 Illustration of the Franck–Condon principle in molecular spectroscopy. The electronic ground state is represented by S0, the first electronic excited state by S1, and higher states (not shown) by S2, S3, . . . Within each electronic state the vibrational wavefunctions are indicated. The absorption of a photon corresponds to a purely vertical transition in this diagram, because the relative positions of the nuclei do not change during the fast excitation process. The strongest transition will be the one with the highest overlap between ground-state and excited-state wavefunctions.

The Franck–Condon principle is illustrated in Figure 8.7. The electronic ground state has all electrons in their lowest orbitals. Changes in the position or orientation of the atomic nuclei are again summarized by a single variable R, which is the horizontal axis in the figure. For several configurations the lowest electronic energy is calculated, giving the potential energy curve S0 in Figure 8.7. The calculation of electronic energies as a function of R is repeated for the case that one electron moves in a higher orbital, giving the potential energy curve S1. This curve has a minimum at a higher energy because of the electronic excitation; moreover, the minimum will be at another configuration of the nuclei, for excited electrons generally are further away from the nuclei and the nuclear configuration responds to this. Even if no real calculation is done, one may sketch Figure 8.7 qualitatively. In both potential energy curves vibrational wavefunctions may be sketched, and it is clear that the transition for which the overlap between the ground-state vibrational wavefunction and an excited-state vibrational wavefunction is largest will dominate the spectrum. As photon absorption is very rapid, the transition is indicated by a vertical arrow in Figure 8.7. The arrow drawn there points to the excited wavefunction with the largest overlap with the ground-state wavefunction. More precisely, the matrix element which determines the transition strength is given by

⟨ψ_el^f φ_v′^f | μ | ψ_el^i φ_v^i⟩ ≈ μ_if ⟨φ_v′^f | φ_v^i⟩ = μ_if S_v′v   (8.38)

in which μ_if connects the electronic wavefunctions. The quantities

|S_v′v|² = |⟨φ_v′^f | φ_v^i⟩|²   (8.39)

are called the Franck–Condon factors. These factors represent the squared overlap between the ground- and excited-state vibrational wavefunctions. Often several Franck–Condon factors may be sufficiently large for a progression of vibronic lines to be observed in the spectrum. Note that Figure 8.7 bears some resemblance to Figure 5.23 for electron transfer, since in the latter case it is also an overlap of wavefunctions, as in (8.39), which determines the transition strength.

8.3.3.3 Decay of Excited States
Upon excitation to a highly excited vibronic state there are several ways in which the molecule can return to its ground state. This is indicated in Figure 8.8. Curves S0 and S1 are the same as those discussed in Figure 8.7, but here the S refers specifically to the fact that they are singlet states with total spin S = 0. The curve on the right corresponds to another excited electronic state, in this case an excited triplet state with total spin S = 1, hence the notation T1. Generally, a triplet state (T) is lower in energy than the corresponding singlet state (S), because it is no longer required that the two electrons involved occupy the same molecular orbital. After absorption of a photon the molecule will relax due to interaction with its surroundings. In general, the excess energy will be redistributed amongst other vibrational and/or rotational states of the molecule and eventually transferred to the surrounding molecules. This radiationless decay is called internal conversion and may occur within 10⁻¹² [s] (or less) in the condensed phase; for isolated small molecules these decay times may be much longer. In this way the molecule quickly ends up in the v = 0 state of S1. There is competition, however, from intersystem crossing to the triplet state T1. The time scale of intersystem crossing between singlet and triplet states is of the order of 10⁻⁸ to 10⁻⁹ [s], usually much slower than decay to the v = 0 state of S1. Once in the lowest excited singlet state the molecule may decay through a variety of pathways. Radiationless decay to the ground state may occur, again through internal conversion. The rate of radiationless decay kic is highly variable and may range from less than 10⁶ [s⁻¹] up to 10¹² [s⁻¹]. Alternatively the molecule may decay through spontaneous emission, with a rate constant given by the relevant Einstein coefficient A21 in Eq. (2.29). This process is called fluorescence.
Note that since the transition dipole moment determines both the rate of absorption and the rate of spontaneous emission, strongly absorbing molecules will also be strong emitters. The rate of radiative decay kR is about 10⁷ to 10⁸ [s⁻¹] for a strong optical transition. Note that since the emission occurs from the v = 0 level of the lowest excited state, and since the Franck–Condon factors determine the relative intensities of the vibronic lines in the emission spectrum, most of the emission occurs at longer wavelengths than the absorption.
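The role of the Franck–Condon factors (8.39) can be illustrated with a common model of two equally shaped, mutually displaced harmonic potentials, for which |⟨0|v′⟩|² follows a Poisson distribution in the displacement (Huang–Rhys) parameter S. This model is standard in molecular spectroscopy but is not derived in the text:

```python
import math

def franck_condon_0_to_v(S, v):
    """|<0|v'>|^2 for two displaced harmonic oscillators of equal frequency:
    a Poisson distribution in the Huang-Rhys factor S."""
    return math.exp(-S) * S**v / math.factorial(v)

S = 1.0   # assumed displacement parameter (illustrative)
factors = [franck_condon_0_to_v(S, v) for v in range(6)]
print([round(f, 3) for f in factors])       # a vibronic progression

total = sum(franck_condon_0_to_v(S, v) for v in range(50))
print(round(total, 6))                      # the factors sum to 1
```

For S = 1 the 0–0 and 0–1 lines are equally strong, and higher members of the progression fall off rapidly, as in a typical vibronic spectrum.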
Figure 8.8 Sequence of events after absorption, leading to fluorescence, triplet formation and phosphorescence. Note that the maximum in the fluorescence spectrum again corresponds to the largest Franck–Condon factor.
Only the transitions between the v = 0 levels occur in both spectra. After fluorescence the molecule returns to its ground state by internal conversion. In another sequence of events intersystem crossing from the singlet state S1 to the triplet state T1 does occur. After radiationless decay to its lowest state the molecule may return to the S0 state via emission of a long-wavelength photon: phosphorescence. Since the rate of intersystem crossing is slow, phosphorescence is a 'slow' process. An easy way to express the relative contribution of these different decay processes is by a quantity called the quantum yield. For instance, the quantum yield of fluorescence is given by

φF = kR / Σi ki   (8.40)

where kR denotes the rate of fluorescence and each ki in the sum in the denominator represents one of all possible decay processes.
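A minimal sketch of Eq. (8.40), with assumed illustrative rate constants (not from the text):

```python
# Competing decay channels of the lowest excited singlet state [s^-1]:
k_R   = 1e8   # radiative decay (fluorescence)
k_ic  = 5e8   # internal conversion
k_isc = 1e8   # intersystem crossing

rates = [k_R, k_ic, k_isc]
phi_F = k_R / sum(rates)    # fluorescence quantum yield, Eq. (8.40)
tau = 1.0 / sum(rates)      # excited-state lifetime [s]
print(round(phi_F, 3))
print(tau)
```

The same expression with kR replaced by any other ki gives the yield of that channel, and all yields sum to one.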
8.4 Scattering
When an atomic or molecular system is illuminated by light with a frequency which is not resonant with one of the available transitions, the light beam will not be absorbed, but may still be scattered [3]. Scattering is an 'instantaneous' process that requires two interactions of the radiation field with the atomic or molecular system. As a consequence of these two interactions a quantum with energy hν of the light beam is destroyed and a quantum with energy hνs is created. If the frequency νs of the scattered light is identical to that of the exciting beam, one speaks of elastic scattering. If the scattering molecules are much smaller than the wavelength of the light one speaks of Rayleigh scattering. For particles with dimensions of the same order as or larger than the wavelength of the scattered radiation, the process becomes, due to interference effects, a complicated function of particle size and shape and is called Mie scattering. Alternatively, scattered photons may emerge with lower or higher energy than the original illuminating beam. This is due to inelastic or Raman scattering, and the specific Raman lines which are observed are very informative about molecular structure and molecular dynamics.

8.4.1 Raman Scattering
In Raman scattering one observes frequencies that originate from molecular vibrations that have become either excited during the scattering (Stokes Raman scattering) or de-excited (anti-Stokes Raman scattering). The two processes are depicted in Figure 8.9. During the Stokes Raman scattering process the energy difference hν − hνs has been transferred to the molecule, leaving it behind in some vibrational state with energy hνvib = hν − hνs. As the vibrational states are quantized, one will observe discrete lines in the scattered spectrum. In the anti-Stokes Raman scattering process one starts from a quantized state with energy hνvib, which is occupied because of thermal excitation. The molecule transfers the energy hνvib to the scattered light, which emerges with energy hνs = hν + hνvib or frequency νs = ν + νvib.

Figure 8.9 Raman scattering. On the left Stokes Raman scattering is indicated. In this case the molecule starts in its ground state, but after scattering ends up in a vibrational state. In anti-Stokes Raman scattering, on the right, the molecule starts in a vibrational state, which is occupied because of thermal excitation; after scattering it ends up in its ground state.

The intensity of the anti-Stokes lines will be strongly dependent on the temperature, as that determines the occupation of the vibrational states (Eqs. (2.25) and (8.3)). In the following we will often refer to classical pictures, so it is also useful to explain the appearance of several discrete lines in the spectrum classically. The incident light sets up a polarization in the molecular system at frequency ν. At the same time this frequency is modulated by the vibrating nuclei with frequency νvib. Thus, the observed polarization, and therefore the emerging (scattered) radiation field, contains a term which oscillates with the incident electromagnetic field and corresponds to Rayleigh scattering, plus two side bands: one at frequency ν − νvib, which is recognized as Stokes Raman scattering, and one at frequency ν + νvib, which is anti-Stokes Raman scattering. The intensity of Raman lines in the spectrum may be between 10⁻³ and 10⁻⁶ of the intensity of the Rayleigh scattered light. Similarly, since the polarizability of a molecule may vary as it rotates, the rotational frequency νrot appears in the Raman spectrum. The specific selection rule for vibrational Raman spectra of diatomic molecules is the same as for the normal IR spectrum: Δv = ±1. Thus, both Stokes and anti-Stokes lines may be observed in a typical Raman spectrum, but the latter depend on the population of the v = 1 level, which may be relatively low. In addition to the selection rule for the vibrational quantum number, it may be shown that a vibrational Raman transition can only occur if the polarizability changes as the molecule vibrates. This implies that both homonuclear molecules (with one kind of atom only) and heteronuclear molecules (with more than one kind of atom), which swell and contract during a vibration, are Raman active.
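The temperature dependence of the anti-Stokes lines can be sketched from the Boltzmann factor alone; the ν⁴ scattering prefactor is ignored in this rough estimate, and the CO2 wavenumbers are taken from Section 8.3.2.3:

```python
import math

h = 6.626e-34   # Planck constant [J s]
c = 2.998e10    # speed of light [cm/s], so wavenumbers in cm^-1 can be used
k = 1.381e-23   # Boltzmann constant [J/K]

def anti_stokes_ratio(nu_vib, T):
    """Approximate anti-Stokes/Stokes intensity ratio from the thermal
    population of the v = 1 level; nu_vib in [cm^-1], T in [K]."""
    return math.exp(-h * c * nu_vib / (k * T))

print(anti_stokes_ratio(667.0, 298.0))    # CO2 bending mode, room temperature
print(anti_stokes_ratio(2349.0, 298.0))   # CO2 antisymmetric stretch: far weaker
```

Because the ratio depends only on νvib and T, measuring it is a standard way to determine the temperature of a sample.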
Superimposed on the Δv = ±1 transitions, one may observe a characteristic band structure arising from simultaneous rotational transitions, for which the selection rule is ΔJ = 0, ±2, as in pure rotational Raman scattering. For polyatomic molecules normal vibrational modes are Raman active only if they are accompanied by a change in polarizability. For the CO2 molecule this implies that the symmetric stretch, which was IR inactive, is Raman active. In general, the Raman activity of a particular normal vibrational mode is derived from group theory. One important rule is that for molecules with a centre of symmetry no vibrational modes exist which are both Raman and IR active. For rotational Raman transitions the overall selection rule is that the molecule must have an anisotropic polarizability. All linear molecules, homonuclear or heteronuclear, show rotational Raman spectra, making Raman spectroscopy a powerful technique. On the other hand, molecules such as CH4 and SF6, being spherical rotors, are rotationally Raman inactive. The Stokes rotational Raman lines correspond to ΔJ = +2 and the anti-Stokes rotational Raman lines to ΔJ = −2. This selection rule corresponds to the classical picture for rotational Raman scattering sketched above. Note that at room temperature many rotational levels are occupied and, in general, both branches will be observed.

8.4.2 Resonance Raman Scattering
One of the many interesting applications of Raman spectroscopy is resonance Raman scattering. When the frequency of the light used to excite a Raman spectrum approaches an allowed optical transition, the intensities of those vibrational modes that are coupled to this optical transition increase dramatically. As a consequence, the Raman spectrum is greatly simplified, and this technique is therefore used to identify specific groups in a molecular structure that otherwise would disappear in the complex nonresonant spectrum. An example of the resonance Raman spectrum of ClO2, for convenience written as OClO, is shown in Figure 8.10 [4]. This molecule has the same spatial structure as CO2 or OCO, depicted in Figure 8.5, and shows the same vibrations. In Figure 8.10 one notices frequencies corresponding to symmetric stretching (ν1), bending (ν2) and asymmetric stretching (ν3). Even overtones (e.g. 2ν3) are clearly visible in the spectrum. In atmospheric chemistry the photodecomposition of OClO by light with wavelengths of 300 [nm] to 400 [nm] into oxygen and chlorine may represent an important source for the formation of stratospheric chlorine, which participates in ozone depletion. The vibrational modes shown in Figure 8.10 are active in the structural changes that occur upon photoexcitation.

Figure 8.10 Resonance Raman spectrum of OClO in cyclohexane as solvent, obtained with 368.9 [nm] excitation. Peaks with an asterisk are due to the cyclohexane and are subtracted in the enlargement on the right. Vibrations and their overtones are clearly visible. (Reprinted with permission from The Journal of Physical Chemistry A, Excited-State Dynamics of Chlorine Dioxide in the Condensed Phase from Resonance Raman Intensities, Esposito et al., 101, 29. Copyright 1997 American Chemical Society.)

8.4.3 Rayleigh Scattering
Rayleigh scattering dominates elastic scattering of light for molecules whose size is much smaller than the wavelength of the scattered light. The intensity of Rayleigh scattering by a molecule with isotropic polarizability is given by

I = (16π⁴c / 3λ⁴) α² E₀²   (8.41)
362
Environmental Physics
where E₀ is the amplitude of the electric field vector and α = e²/c. Note the strong increase of the amount of scattered light with diminishing wavelength λ.

8.4.4 Mie Scattering
Mie scattering is the process in which the size of the scattering particles is comparable to the wavelength of the scattered light. In this case interference effects between light scattered from different parts of the same particle complicate the description of the process. It appears that the intensity of the scattered light increases towards shorter wavelengths with an approximate λ⁻² dependence.

8.4.5 Scattering in the Atmosphere
In the atmosphere Mie scattering from particles is normally more prominent than Rayleigh scattering and determines the long-distance visibility. The marked wavelength dependence of Mie and Rayleigh scattering is responsible for the blue colour of a clear sky and red sunsets.
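The λ⁻⁴ dependence of Eq. (8.41) behind the blue sky can be made concrete with a short numerical sketch (Python, not part of the original text; the wavelengths are illustrative values for blue and red light, and all other factors in (8.41) cancel in the ratio):

```python
# Relative Rayleigh-scattered intensity, I proportional to 1/lambda^4
# (Eq. (8.41)); alpha, E0 and c cancel when taking a ratio.
def rayleigh_ratio(lambda_1_nm, lambda_2_nm):
    """Ratio I(lambda_1)/I(lambda_2) for identical molecules."""
    return (lambda_2_nm / lambda_1_nm) ** 4

# Blue (450 nm) versus red (650 nm) light:
ratio = rayleigh_ratio(450.0, 650.0)
print(f"blue light is Rayleigh-scattered {ratio:.1f} times more strongly than red")
```

The factor of roughly four explains why scattered skylight is blue while the directly transmitted light of a low sun is reddened.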
8.5 Remote Sensing by Satellites
In this section we use the ENVISAT satellite designed by the European Space Agency (ESA) as an example of a scientific satellite and describe measurements by one of its instruments, called SCIAMACHY [5]. The American NASA organizes similar missions, which are reported on the Internet.

8.5.1 ENVISAT Satellite
This satellite was launched in 2002 and is expected to be operational until 2013, when it will run out of liquid fuel. Its orbit is almost circular around the poles at an altitude of about 800 [km], with a ground speed of about 7.5 [km s⁻¹]. The orbital plane is orientated such that the satellite crosses the equator going south at a local time of 10 in the morning. This fixes the angle between the earth–sun line and the orbital plane, which makes one rotation per year. Such an orbit is called sun-synchronous: every time the satellite crosses the equator the illumination of the earth is the same. A sketch of the orbit is shown in Figure 8.11. A satellite usually has several instruments on board with different applications. For ENVISAT the so-called payload consists of 10 scientific instruments, of which SCIAMACHY is one.

8.5.2 SCIAMACHY's Operation
The name SCIAMACHY is an acronym for SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY. This improbable combination of letters is chosen to evoke the Greek expression 'sciamachy', which means, literally, hunting shadows, or figuratively, performing an impossible task.
Monitoring with Light
363
Figure 8.11 Observation modes of SCIAMACHY. In positions A to C of the orbit the solar absorption by the atmosphere is measured directly (occultation mode). On the sunny side of the orbit, represented by point D, the instrument is alternately looking down (nadir viewing) and to the horizon GH (limb viewing). See also Plate 15a and [5] Figure 4-1.
SCIAMACHY's operation is indicated in Figure 8.11. Once in every orbit, in the stretch AC, the instrument views the sun; the absorption of sunlight by different slices of the atmosphere is indicated by the measuring locations A, B and C, although in fact there are more than three such points. This is called the occultation mode. It follows from Figure 8.11 that solar occultation occurs every orbit at the same latitude. Therefore moon occultation is also measured, which happens in the Southern Hemisphere, when the moon is opposite the sun.

On the sunny side of the orbit, represented by point D, the instrument alternately faces downward to EF (nadir viewing) or slantward through the atmosphere to GH (limb viewing), at a distance of about 3280 [km]. Looking at Figure 8.11 it is clear that somewhat later in its orbit (in fact, after 430 [s]) the instrument will look down on the region GH. The instrument is organized such that in limb viewing the line of sight is orientated a little against the rotation of the earth. In that way, at the time the satellite is above GH, the same air is viewed in nadir viewing as earlier in limb viewing. Limb viewing is used to study the stratosphere (Figure 8.11). Combining nadir and limb viewing allows study of the troposphere.

Note that in nadir and limb viewing the observed sunlight is due to scattering by the constituents of the atmosphere and reflection from clouds and the earth's surface. The radiation detected from the atmosphere originates from the sun. Therefore the sun is observed regularly above the atmosphere, for example from point K in Figure 8.11. Monitoring unperturbed sunlight is required to find the influence of the atmosphere on the radiation observed in nadir and limb viewing. Moreover, the moon is also observed, both through the atmosphere and above it, for calibration and scientific purposes.
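The orbital numbers quoted here are mutually consistent, as a quick check with the circular-orbit formulas shows (a sketch; the gravitational parameter and Earth radius are rounded standard values, not taken from the text):

```python
import math

# Circular-orbit check for an ENVISAT-like orbit at 800 km altitude.
mu = 3.986e14          # [m^3 s^-2] Earth's gravitational parameter GM (standard value)
R_E = 6.371e6          # [m] mean Earth radius (standard value)
a = R_E + 800e3        # [m] orbit radius

v = math.sqrt(mu / a)                    # orbital speed [m/s]
T = 2 * math.pi * math.sqrt(a**3 / mu)   # orbital period [s]

print(f"orbital speed ~ {v / 1e3:.1f} [km/s]")   # close to the 7.5 [km/s] in the text
print(f"period ~ {T / 60:.1f} [min]")            # close to the 100.6 [min] in the text
# In the ~430 [s] between limb and nadir views the satellite covers
print(f"distance in 430 [s] ~ {v * 430 / 1e3:.0f} [km]")  # of the order of the 3280 [km] limb distance
```

The distance covered in 430 [s] comes out slightly below 3280 [km] because the limb view slants through the atmosphere rather than along the ground track.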
SCIAMACHY is continuously measuring all incoming radiation in three spectral regions: a broad region between 214 [nm] and 1773 [nm] and two narrow bands, one between 1934 [nm] and 2044 [nm] and another one between 2259 [nm] and 2386 [nm]. These
Figure 8.12 Wavelength regions observed by SCIAMACHY, for wavelengths λ from 200 to 2000 [nm], are indicated in grey. The white horizontal bars indicate the spectral regions used to observe the gases, clouds or aerosols shown in the left column (O3, O2, (O2)2, H2CO, SO2, BrO, OClO, ClO, NO, NO2, NO3, H2O, CO, CO2, CH4, N2O, clouds, aerosols). (Reprinted from Physics and Chemistry of the Earth, Part C: Solar, Terrestrial & Planetary Science, Global atmospheric monitoring with SCIAMACHY, Noel et al, 24, 5, 427–434. Copyright 1999 with permission from Elsevier.)
spectral regions are required to cover a large number of gases, clouds and aerosols. For instance, for just O3 observations a narrower region would suffice. This is shown in Figure 8.12, where for all entities under observation the wavelength region which is crucial for their detection is indicated by horizontal bars. Several research groups are working in the same window, which gives a mutual check on the experimental results.

The basic experimental set-up consists of a mirror system, a telescope and a detector. In order to distinguish the gases of Figure 8.12 from each other and from the background, the instrument must have a reasonably high resolution. In order to find the precise concentration of the trace gases one needs a high signal-to-noise ratio. Finally, in one revolution of 100.6 [min] the instrument experiences both day and night; the resulting temperature differences should be controlled to ensure stable operation of the detectors and keep their thermal background constant. It is a technological achievement that the experimental apparatus and all these 'command and control' facilities are provided within a total mass of 215 [kg] and a power consumption of 140 [W].

8.5.3 Analysis
Of the solar radiation entering the atmosphere, about 5% is scattered and reflected back vertically into space; this so-called earthshine is observed in nadir by satellites like SCIAMACHY. There are two major effects. The first is Rayleigh and Mie scattering by molecules and particles; the resulting spectrum has little structure as a function of wavelength. The second effect is absorption by trace gases like those displayed in
Figure 8.13 DOAS procedure. The solar and earthshine spectra on the left are divided, giving the top right graph on a logarithmic scale. In the light grey wavelength region the tops are fitted by a smooth polynomial; this results in the absorption spectra shown in the lower figure. (Reprinted from Physics and Chemistry of the Earth, Part C: Solar, Terrestrial & Planetary Science, Global atmospheric monitoring with SCIAMACHY, Noel et al, 24, 5, 427–434. Copyright 1999 with permission from Elsevier.)
Figure 8.12; this superimposes a spectral structure from which the concentration of the trace gases can be deduced.

8.5.3.1 DOAS
To analyse nadir measurements one often uses the DOAS method, an acronym for Differential Optical Absorption Spectroscopy. The idea is illustrated in Figure 8.13. On the left, the solar spectrum and the earthshine spectrum are sketched. The earthshine spectrum becomes weaker at smaller wavelengths, since the strong Rayleigh scattering prevents much radiation from leaving the atmosphere in the vertical direction. The earthshine spectrum is divided by the solar spectrum and the resulting ratio is given in the top right graph on a logarithmic scale. The DOAS technique then selects a wavelength region (light grey in the top right figure) and draws a smooth polynomial through the spectrum, which is then subtracted from the data. This procedure results in the absorption spectra indicated in the lower graph of Figure 8.13.
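The three DOAS steps — divide, fit a smooth polynomial, keep the residual — can be sketched in a few lines on synthetic spectra (all numbers below are invented for the illustration; only the procedure follows the text):

```python
import numpy as np

# Toy DOAS retrieval on synthetic spectra.
wl = np.linspace(400.0, 450.0, 500)                   # wavelength [nm]
solar = 1.0 + 0.002 * (wl - 400.0)                    # featureless solar spectrum
broad = np.exp(-0.5 - 0.01 * (wl - 400.0))            # smooth Rayleigh/Mie-like attenuation
tau_gas = 0.05 * np.exp(-((wl - 425.0) / 2.0) ** 2)   # narrow trace-gas optical depth
earthshine = solar * broad * np.exp(-tau_gas)

# Step 1: divide the earthshine by the solar spectrum, go to a log scale.
log_ratio = np.log(earthshine / solar)

# Step 2: fit a smooth low-order polynomial (centred x for good conditioning).
x = wl - wl.mean()
smooth = np.polyval(np.polyfit(x, log_ratio, deg=3), x)

# Step 3: the residual is the differential absorption of the trace gas.
differential = log_ratio - smooth
# The dip sits near 425 nm; it is somewhat shallower than the true 0.05
# because the polynomial absorbs part of the narrow line.
print(f"retrieved differential optical depth ~ {-differential.min():.3f}")
```

In the real analysis the residual is then fitted with laboratory absorption cross sections of the candidate gases to obtain slant column densities.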
Figure 8.14 Radiative transfer in the atmosphere. Incident sunlight may be scattered in a volume element around ds in the direction of SCIAMACHY directly, or by multiple scattering, for example at K.
The absorption spectra on the right in Figure 8.13 are the result of absorption along a long path of photons through the atmosphere. Therefore, a radiative transfer model is used to convert the data to a vertical column underneath SCIAMACHY. Among the difficulties we mention the presence of clouds, which is often represented empirically by a cloud fraction.

8.5.3.2 Radiative Transfer Models
SCIAMACHY in nadir view looks down and only observes the radiation that enters from below, from volume elements like the one sketched in Figure 8.14. We follow the notation of ref. [5], which deviates a little from Chapter 2. The vertical dimension of the element is called ds. The general equation for radiative transfer is written in shorthand as ([7] Chapter 2)

dI/ds = −α(I − J)   (8.42)
Here I [J s⁻¹ m⁻² sr⁻¹] is the intensity of the radiation in the vertical direction per solid angle (hence [sr⁻¹]) and α [m⁻¹] is the extinction coefficient, indicating the fraction of the original beam which is removed from the beam by absorption or scattering. This explains the term −αI on the right-hand side of Eq. (8.42). The source term J gives the energy entering the vertical direction, originating from two effects. The first contribution to J is directly scattered sunlight, which is weakened on its way to the volume element but still enters the volume element from the original direction; this is indicated in Figure 8.14 and called direct radiation. The second contribution originates from light which has been scattered at least once, for example at point K in the figure, and is called diffuse radiation. The source term has to be multiplied by the extinction α because that gives the fraction of the energy [m⁻¹] which the incoming radiation leaves behind. Thermal emission by the volume element occurs at wavelengths much longer than those of Figure 8.12; in the analysis thermal emission does not play a role.

From this discussion it is clear that what happens in the volume element shown will depend on its location in the atmosphere, because that will determine the direct radiation
entering and the many contributions from diffuse radiation. We will keep the discussion simple and continue to use 'shorthand' in order to identify the main effects. We start with the direct radiation. It has no source except the sun and Eq. (8.42) reduces to

dI_dir/ds = −α(s) I_dir   (8.43)

where we made explicit that the extinction may depend on the position s along the trajectory, e.g. because of a change in density. This trajectory is the one entering the volume element from the top right in Figure 8.14. Along this trajectory Eq. (8.43) is rewritten as

dI_dir/I_dir = −α(s) ds   (8.44)

Integration gives ln I_dir = −∫α(s)ds + c and finally

I_dir = I_irr e^(−∫α(s)ds)   (8.45)
At the top of the atmosphere the length of the trajectory is zero (∫α(s)ds = 0) and the direct radiation becomes the solar radiation on top of the atmosphere, I_irr. Eq. (8.45) is suitable for describing occultation measurements, for which in-scattering from diffuse radiation is small with respect to the direct radiation. Eq. (8.45) is a generalized version of Lambert–Beer's law (2.36), where the exponent kz of (2.36) is replaced by ∫α(s)ds. In Eq. (2.36) one may therefore write ∫k(z)dz if the extinction depends on the variable z.

We return to Figure 8.14 and look at the radiation observed from the satellite in nadir view. All this radiation is scattered at least once and is therefore diffuse radiation. The intensity I in Eq. (8.42) is replaced by I_diff, and we rewrite (8.42) as

dI_diff/ds + α I_diff = α J   (8.46)
As indicated before, the source term J has two parts: the direct solar radiation scattered in the volume element, and all the diffuse radiation entering and then scattered. We first look at the incoming diffuse radiation. The probability that an entering photon is scattered rather than absorbed is denoted by ω, not to be confused with angular velocity. The probability that the radiation then is scattered into a solid angle dΩ about a direction forming an angle γ with the incident radiation is written as p(γ)dΩ/4π. This probability also takes into account that the path of the radiation entering the element depends on its angle with the vertical, while the extinction α [m⁻¹] is independent of the incident direction. The diffuse source function becomes

J_diff = (ω/4π) ∫ p(γ) I_diff dΩ   (8.47)

where I_diff represents diffuse radiation from all directions. The direct source function looks simpler, for there is only one angle γ₀ and the source term refers to 1 [sr] of solid angle. This gives

J_dir = (ω/4π) p(γ₀) I_dir = (ω/4π) p(γ₀) I_irr e^(−∫α(s)ds)   (8.48)
Eqs. (8.46), (8.47) and (8.48) give

dI_diff/ds + α I_diff − α(ω/4π) ∫ p(γ) I_diff dΩ = α(ω/4π) p(γ₀) I_dir = α(ω/4π) p(γ₀) I_irr e^(−∫α(s)ds)   (8.49)

8.5.3.3 Iteration Procedure
The procedure is to divide the atmosphere into 'horizontal' layers. Next one assumes a vertical concentration profile of the atmospheric gases as a function of altitude. Then one calculates, with the help of Eq. (8.49), what the input of radiation into SCIAMACHY would be, compares with the experimental results from DOAS, changes the vertical concentrations and repeats the iteration until calculation and measurement coincide within experimental error.

For precise measurements one needs to know the temperature and pressure of the observed air. The temperature determines the occupancy of the lower vibrational and rotational energy levels (Figure 8.1). It also determines the Doppler broadening of the spectral lines, see Eq. (8.16). Pressure and temperature determine the concentration and velocity of the dominant oxygen and nitrogen molecules, which give rise to collision broadening of the spectral lines. By means of Eq. (8.13) the collision width at an arbitrary p, T can be connected with the one at some standard values p₀, T₀.

8.5.4 Ozone Results
Results from SCIAMACHY can be followed on the Internet in almost real time [8]. In Plate 15b we show a colour graph of the ozone column measured during one day. One observes strips of results, because when the satellite is in the earth's shadow (the so-called eclipse side) no measurements can be made. Also, within a strip no nadir measurements are made during limb viewing. This explains the spotted picture. For complete earth coverage one needs three days of observations.

The ozone density of the stratosphere is measured in Dobson units [DU]. A layer of 100 [DU] corresponds to 1 [mm] of thickness under standard temperature and pressure. The ozone layer exhibits large variations over the globe and strong seasonal fluctuations as well. The concentration tends to be highest in late winter and early spring. The picture shown was made on 30 November 2010 and shows a low ozone density above the Antarctic.
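The Dobson unit definition above translates directly into a column density via the ideal-gas law (a back-of-the-envelope sketch; standard conditions are taken as 273.15 [K] and 1 [atm]):

```python
# Column density corresponding to 100 [DU] = 1 [mm] of pure ozone at
# standard temperature and pressure, using the ideal gas law n = p/(kT).
k = 1.380649e-23   # Boltzmann constant [J/K]
T0 = 273.15        # standard temperature [K]
p0 = 101325.0      # standard pressure [Pa]
h = 1.0e-3         # layer thickness for 100 [DU]: 1 [mm]

column = p0 * h / (k * T0)            # ozone molecules per m^2 for 100 [DU]
per_DU = column / 100.0 / 1.0e4       # molecules per cm^2 per Dobson unit
print(f"1 [DU] ~ {per_DU:.3g} ozone molecules per cm^2")  # ~2.69e16
```

A typical mid-latitude column of ~300 [DU] thus contains about 8 × 10¹⁸ ozone molecules per cm².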
8.6 Remote Sensing by Lidar
Lidar is an acronym for LIght Detection And Ranging; it is a technique to measure atmospheric properties over a large distance from some position on earth, often from the surface, occasionally also from airplanes and balloons, and even from space. In this section we start with a description of lidar and derive the lidar equation. There are many applications of lidar; we will restrict ourselves to two cases. The first is Differential Absorption Lidar (DIAL), which uses two different wavelengths of ultraviolet light to measure the distribution of ozone over a column of air from 20 [km] to 50 [km] in the stratosphere.
The attention paid to ozone in this section should illustrate that the same gas in the atmosphere is studied not only from satellites, but in many ways. We mention that similar techniques are used to obtain density profiles of NO2, SO2 and H2O, amongst others. We finish with aerosol/cloud lidars, which are used for detecting particles in the atmosphere. Here the example will be the detection of ash originating from a recent volcanic eruption in Iceland.

8.6.1 Lidar Equation and DIAL
Lidar methods have been used since the 1960s [9, 10]. Consider a pulsed laser, aimed vertically upwards. The light pulses, when travelling upwards, are gradually absorbed or scattered. The amount of absorption and scattering depends on the number and type of molecules that are encountered, and on their absorption and scattering cross sections at the wavelength of the laser. The number of backscattered photons is detected as a function of the delay time between the firing of the laser pulse and the arrival time. We will now quantify these considerations.

Suppose S(t) is the signal that monitors the amount of backscattered light at time t after the emission of the laser pulse at t = 0. A segment ΔS(t, Δt) of the signal is recorded during a time interval between t and t + Δt and corresponds to the scattering by air molecules in a volume extending from R to R + ΔR in the direction of the beam, with

R = c t/2;   ΔR = c Δt/2   (8.50)
Here c is the speed of light and t/2 the time to reach the volume under consideration. It is assumed that the laser pulse duration τ is short compared to the sampling interval: τ ≪ Δt. In almost all practical cases this is realized; one commonly used pulsed laser emits pulses of τ ≈ 10 [ns] duration, which corresponds to an extension of only 3 [m] in space. Normally, signals from a depth of at least ΔR = 150 [m] (1 [μs]) are averaged in a channel.

When a laser pulse of energy E_λ and wavelength λ is emitted into the atmosphere, the backscattered light is collected by a receiver mirror of area A, and sampled by a photodetector of efficiency η_λ. The signal S_λ(R, ΔR) at wavelength λ and originating from molecules at an altitude between R and R + ΔR can be described by the lidar equation

S_λ(R, ΔR) = E_λ ξ(R) (A/4πR²) η_λ β_λ(R) ΔR exp(−2 ∫₀ᴿ (α_λ(r) + σ_λ N_abs(r)) dr)   (8.51)

In this relation, ξ(R) is a function describing the overlap of the emitted beam and the detection cone, and β_λ(R)ΔR is the fraction of the incoming energy scattered back under 180° between R and R + ΔR. The attenuation integral in the exponential function, arising from Lambert–Beer's law (8.45), is divided into a part arising from the molecular species of interest, σ_λ N_abs(r), and a part arising from all other particles, α_λ(r). Of course, N_abs(r)dr is the number of absorbing particles between r and r + dr. The factor of 2 in the exponent originates from the round trip of the light. From experiments one finds the left-hand side of the lidar equation as a function of R, while one wants to know N_abs(r). One has to invert the equation and get rid of α_λ(r). One of the possibilities is to use the DIAL method.
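Eqs. (8.50) and (8.51) can be illustrated numerically. The sketch below converts delay times to range and generates a synthetic return for a homogeneous atmosphere; the backscatter and extinction values are invented for the example, and the instrument factors of Eq. (8.51) are lumped into one constant:

```python
import numpy as np

C = 2.998e8  # speed of light [m/s]

def delay_to_range(t):
    """Eq. (8.50): range of the scattering volume for delay time t [s]."""
    return C * t / 2.0

def lidar_signal(R, beta=1e-6, alpha=1e-4):
    """Simplified Eq. (8.51): uniform backscatter beta [m^-1 sr^-1] and
    extinction alpha [m^-1]; all instrument constants set to 1."""
    return beta / R**2 * np.exp(-2.0 * alpha * R)

# A 1 [mu s] sampling interval corresponds to 150 [m] range bins:
dt = 1.0e-6
print(f"range bin = {delay_to_range(dt):.0f} [m]")

R = delay_to_range(np.arange(1, 100) * dt)        # ranges from 150 m to ~14.8 km
S = lidar_signal(R)
# Multiplying by R^2 removes the geometric factor and isolates the
# two-way attenuation exp(-2 alpha R):
attenuation = S * R**2 / 1e-6
print(f"two-way transmission at {R[-1] / 1e3:.1f} [km]: {attenuation[-1]:.3f}")
```

The rapid 1/R² fall-off is why raw lidar signals span many decades, and why range-corrected signals are usually plotted instead.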
Figure 8.15 On the left, the ideal optical DIAL process. The extreme left graph shows the 'off' wavelength: scattering, but no absorption. The other graph shows the 'on' wavelength: the target molecules absorb and scatter. The picture on the right (STOIC, 24 July 1989) shows the measured ozone concentration [10¹² molec cm⁻³] as a function of altitude [km]. (This graph is reproduced by permission of Dr Stuart McDermid from [11], Fig 1, p. 26.)
8.6.1.1 DIAL
The acronym stands for DIfferential Absorption Lidar. To select for specific molecules in the atmosphere, two wavelengths are applied: the first wavelength is absorbed and scattered by the molecules that one wants to detect, called the target molecules; the second wavelength has a small absorption cross section for the target molecules and is only scattered. These wavelengths are called 'on' and 'off' wavelengths, respectively, as shown in Figure 8.15. If λ₁ is the 'on' wavelength (absorbed and scattered by the target molecules) and λ₂ is the 'off' wavelength (only scattered by the target molecules), then we can take the ratio Q(R) of the two signals:

Q(R) = S_λ1(R)/S_λ2(R) = [E_λ1 η_λ1 β_λ1(R)] / [E_λ2 η_λ2 β_λ2(R)] exp{−2 ∫₀ᴿ [Δ_λα(r) + Δ_λσ N_abs(r)] dr}   (8.52)
Note that the distance-dependent part of the detector efficiency, ξ(R), has dropped out, and that only the wavelength-dependent part η_λ has remained. The exponent now contains two parts: the difference in extinction coefficient Δ_λα(r) = α_λ1(r) − α_λ2(r) and, similarly, the difference in absorption cross section Δ_λσ N_abs(r). Ideally, the first part, from the 'other' particles, is much smaller than the second part, from the target molecules. This is the case when λ₁ and λ₂ are close together, but λ₁ is in resonance with a transition in the target molecules whereas λ₂ is not. To obtain accurate results, Δ_λσ must be measured
in the laboratory under conditions of temperature and pressure equivalent to those in the atmosphere, and with laser linewidths identical to those in the lidar experiment. To obtain the density N_abs(r) of the target molecules as a function of distance r, one takes the natural logarithm of Eq. (8.52) and differentiates with respect to R:

d ln Q(R)/dR = (d/dR) ln[β_λ1(R)/β_λ2(R)] − 2{Δ_λα(R) + Δ_λσ N_abs(R)}   (8.53)

Note that in Eq. (8.53) the pulse energies and spectral efficiencies no longer occur, since they are independent of R. The first term on the right-hand side can be nonzero when the wavelength-dependent backscattering β_λ(R) varies with distance as a consequence of variation in the droplet size of aerosols. If this effect is small and can be ignored one obtains

N_abs(R) = −(1/2Δ_λσ) d ln Q(R)/dR − Δ_λα(R)/Δ_λσ   (8.54)
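The inversion of Eq. (8.54) can be tested on synthetic data: forward-model the ratio Q(R) with a known density profile, then recover the profile from the logarithmic derivative (a toy sketch; the cross section, grid and profile are invented, and the differential-extinction term is set to zero):

```python
import numpy as np

# Synthetic DIAL: the 'on' wavelength is absorbed by the target gas,
# the 'off' wavelength is not; all other factors cancel in the ratio.
R = np.linspace(100.0, 10000.0, 1000)      # range grid [m]
dR = R[1] - R[0]
sigma = 1.0e-21                            # differential cross section [m^2] (invented)
N_true = 1.0e19 * np.exp(-R / 3000.0)      # target density profile [m^-3] (invented)

# Optical depth of the target gas up to range R (cumulative integral):
tau = np.cumsum(sigma * N_true) * dR
# Ratio Q(R) of Eq. (8.52), with equal backscatter and extinction otherwise:
Q = np.exp(-2.0 * tau)

# Eq. (8.54), neglecting the differential-extinction term:
N_retrieved = -np.gradient(np.log(Q), dR) / (2.0 * sigma)

# Agreement away from the grid edges:
err = np.abs(N_retrieved[10:-10] - N_true[10:-10]) / N_true[10:-10]
print(f"max relative retrieval error: {err.max():.1e}")
```

The residual error here is pure discretization; in practice noise in the derivative of ln Q(R) limits the range resolution instead.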
If λ₁ and λ₂ are sufficiently close together, the second term on the right-hand side can be neglected and N_abs(R) is found straightforwardly from the measurements. In the case of ozone the 'on' and 'off' wavelengths have to be relatively far apart, since the absorption band of ozone is wide, as inspection of Figure 2.8 will show. In the lidar experiment measuring ozone in the stratosphere, the results of which were shown on the far right in Figure 8.15, one used 308 [nm] as the 'on' wavelength and 353 [nm] as the 'off' wavelength [11]. From the enlargement shown for these wavelengths in Figure 2.8 it is clear that, although the absorption cross sections are small, their ratio will be at least a factor of 100. Still, one has to deal with the last term on the right in Eq. (8.54). For clear atmospheres Rayleigh scattering will dominate and this term can be calculated and dealt with.

The DIAL technique derives its popularity from the fact that it is self-calibrating, for the detector characteristics do not influence the measured absolute concentrations of the target molecules. As long as the few approximations made are valid and the difference cross section Δ_λσ is known, the interpretation of the lidar measurements is relatively straightforward.

8.6.2 Range-Resolved Cloud and Aerosol Optical Properties¹
¹ The text and illustrations are a slightly edited version of a text which Dr Apituley and Dr Donovan of the Royal Netherlands Meteorological Institute KNMI kindly provided.

Particles in the atmosphere and clouds may be measured and even identified by lidars. In general, lidars for cloud and aerosol sensing may be divided into two main types, namely (elastic) backscatter lidars and inelastic lidars. Backscatter lidars detect light backscattered at the same frequency as the transmitted signal, while inelastic systems detect (usually in addition to the elastic component) components of the backscattered radiation which have been shifted in frequency compared to the transmitted signal. Raman lidars are a type of inelastic lidar. In this case the laser source emits photons with wavelength λ₁; a fraction of these photons excite vibrational modes of nitrogen molecules at the measuring position between R and R + dR, after which photons are backscattered with a somewhat longer wavelength λ₂ > λ₁, with the wavelength shift corresponding to the
energy lost to the change in vibrational state of the nitrogen molecules. Thus, on the outward journey photons have the single wavelength λ₁; from the return journey two wavelengths, λ₁ and λ₂, are detected. The lidar equation (8.51) has to be written down for both signals, and by a process similar to the one described above, particles in the atmosphere can be detected [12]. We will proceed by describing the experiments and their results in a little more detail.

8.6.2.1 Backscatter Lidars
In the case of a backscatter lidar, and assuming negligible gaseous absorption at the emitted wavelength and single-scattering conditions², the lidar equation may be written as

S_λ(R, ΔR) = (C_lid/R²) [β_λ,Ray(R) + β_λ,Aer(R)] exp(−2 ∫₀ᴿ (α_Aer(r) + α_Mol(r)) dr)   (8.55)
Here the various instrument-related constants present in Eq. (8.51) have been grouped together to form the lidar calibration constant C_lid. Again the functions β(R) refer to backscattering and α(R) to extinction. The Mol and Aer subscripts are used to distinguish between molecular (Rayleigh) backscattering/extinction and aerosol (or cloud) backscattering/extinction, respectively.

In Eq. (8.55) multiplication of the lidar signals by R² removes the range dependence. In this way the so-called range-corrected signals are obtained. Range-corrected signals are largely dominated by particle backscatter and are therefore well suited for revealing aerosol layering structure and dynamics, and the presence of clouds. Such images, showing the atmospheric development with time, are displayed in Figures 8.16 and 8.18, to be discussed later.

Quantitative information about aerosol scattering properties can be obtained by applying retrieval algorithms for e.g. backscatter or extinction. Since the backscatter lidar signal depends on both the backscatter and the extinction coefficients, an assumption has to be made about their relationship in order to invert the lidar equation and obtain the profiles of backscatter and extinction. Usually, a simple relationship between extinction and backscatter is assumed,

L = α_λ,Aer / β_λ,Aer   (8.56)
which is referred to as the lidar ratio. This simplification can be used to cast Eq. (8.55) in the form of a differential equation which may be solved for the extinction and backscatter profiles. It should be noted that an appropriate boundary value has to be determined, as well as a value for the lidar ratio [14]. For clouds, the relationship between extinction and backscatter can often be usefully constrained a priori. However, for general aerosol types the extinction-to-backscatter relationship is quite variable, making it problematic to extract accurate quantitative extinction and backscatter values under many circumstances. As will be shown later, this limitation is largely overcome by using Raman lidar.

By transmitting light of a known polarization state and detecting the polarization state of the return signal, lidars can provide additional information about the shape of the particles
² For clouds multiple scattering is often important and must be taken into account [13].
scattering the emitted laser light. In order to accomplish this, an additional polarization-sensitive detection channel can be fitted to a backscatter lidar. A so-called depolarization lidar can distinguish whether scatterers are (almost) spherical (e.g. droplets), yielding low depolarization, or nonspherical (mineral dust, volcanic ash, ice crystals), giving high depolarization. For clouds, depolarization lidars are used to distinguish the cloud phase: water clouds consist of spherical water droplets, while ice clouds consist of highly nonspherical particles that are strongly depolarizing. Likewise, aerosols with a water-soluble composition that have a tendency to form droplets can be distinguished from aerosols of mineral origin that are irregularly shaped.

8.6.2.2 Raman Lidars
As opposed to backscatter lidars, Raman lidars exploit the process of Raman scattering to obtain more accurate quantitative results. In particular, Raman lidars can retrieve both the aerosol extinction profile and the aerosol backscatter profile without making critical assumptions on the scattering properties of the aerosols [12]. The lidar equation appropriate for Raman scattering can be written as

S_λRam(R, ΔR) = (C_lid/R²) β_λRam(R) exp(−∫₀ᴿ (2α_Aer(r) + α_Mol(λ_Ram, r) + α_Mol(λ, r)) dr)   (8.57)

where we have assumed that the aerosol/cloud extinction is approximately the same at the laser and Raman-shifted wavelengths and, again, that negligible gaseous absorption occurs. The retrieval of extinction uses only a Raman-scattered lidar signal; the backscatter retrieval uses two signals, an elastic one and a Raman-scattered one. The extinction profile α_Aer is retrieved using a single lidar signal at the nitrogen Raman-scattered wavelength (λ_Ram), with only the help of an atmospheric density profile (measured by a radiosonde or calculated by an atmospheric model). The extinction profile can be derived by a procedure involving taking the logarithmic derivative of the lidar signal at the Raman-scattered wavelength. Since the molecular extinction α_Mol and backscatter β_λ in Eq. (8.57) are known functions of the atmospheric density, they can be calculated using Rayleigh scattering theory. The backscatter profile β_Aer is retrieved from the ratio of the Raman-scattered signal and the elastic signal (i.e. taking the ratio of Eqs. (8.57) and (8.55) and solving for β_Aer).

Thus, with a Raman lidar, accurate range-resolved information is obtained about the optical properties of aerosols and clouds without having to make critical a priori assumptions about the nature of the scatterers themselves. In fact, since the Raman backscatter β_Aer(R) and extinction α_Aer(R) are retrieved without an a priori assumption on the lidar ratio L in Eq. (8.56), L can be unambiguously determined. Since the lidar ratio varies with aerosol type and composition (i.e. marine aerosol, urban haze, soot), it can be effectively used as an approximate determination of aerosol type and composition, or, in other words, of whether the particles originate from anthropogenic or natural sources.
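The extinction retrieval described above can be tested on synthetic data: forward-model Eq. (8.57) with a known aerosol extinction profile and recover it from the logarithmic derivative (a toy sketch; all profiles are invented, the Raman backscatter is taken proportional to the molecular density, and the molecular extinction at the two wavelengths is taken equal):

```python
import numpy as np

# Toy Raman-lidar extinction retrieval.
R = np.linspace(200.0, 8000.0, 800)            # range [m]
dR = R[1] - R[0]
N_mol = 2.5e25 * np.exp(-R / 8000.0)           # molecular density [m^-3] (invented)
alpha_mol = 1.0e-5 * np.exp(-R / 8000.0)       # molecular extinction [m^-1] (invented)
alpha_aer = 5.0e-5 * np.exp(-((R - 3000.0) / 800.0) ** 2)  # aerosol layer (invented)

# Forward model of Eq. (8.57): Raman backscatter proportional to N_mol;
# extinction acts at both the laser and the Raman wavelength (taken equal).
tau = np.cumsum(2.0 * alpha_aer + 2.0 * alpha_mol) * dR
S = N_mol / R**2 * np.exp(-tau)

# Retrieval: alpha_aer = 0.5 * d/dR ln(N_mol / (S R^2)) - alpha_mol
retrieved = 0.5 * np.gradient(np.log(N_mol / (S * R**2)), dR) - alpha_mol
err = np.max(np.abs(retrieved[5:-5] - alpha_aer[5:-5]))
print(f"max absolute error: {err:.2e} [1/m]")
```

Note that the retrieval needs no assumption on the aerosol backscatter at all, which is exactly the advantage over Eq. (8.55).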
8.6.2.3 Results for the Eruption of the Icelandic Volcano Eyjafjallajökull
The Raman lidar CAELI [15] is a high-performance, multiwavelength Raman lidar, capable of providing round-the-clock measurements. The instrument provides profiles of the volume backscatter and extinction coefficients of aerosol particles, the depolarization ratio and the water-vapour-to-dry-air mixing ratio. A high-power Nd:YAG laser transmits pulses at 355, 532 and 1064 [nm]. Lidar signals are received at the elastically scattered wavelengths λ₁, but also at the wavelengths λ₂ Raman scattered by nitrogen and water vapour. The Raman scattering happens at the wavelengths 387 [nm], 607 [nm] and 1414 [nm]. The latter signal is weak and difficult to detect. With the first two signals the extinction can be determined accurately, although the instrumental demands are very high for daytime conditions at the Raman wavelengths, since the weak Raman signals have to be filtered from the bright daytime sky.

Multiple receiving telescopes are used to collect the backscattered light. A large-aperture telescope is used to collect light scattered at high altitudes. Due to geometric and optical considerations a large telescope is essentially blind to lidar signals originating from close to the instrument. Thus, a second, small telescope is needed to cover the near range, in particular for measurements in the planetary boundary layer.

During the 2010 eruption of the Eyjafjallajökull in Iceland, the plume containing volcanic ash was transported directly towards Europe in the troposphere. This caused the aviation authorities to close the airspace for several days. Using two lidar instruments, a polarization-sensitive backscatter lidar and a Raman lidar, installed at the Cabauw Experimental Site for Atmospheric Research (CESAR) in the Netherlands, it was possible to detect the plume and distinguish its properties.
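The nitrogen Raman wavelengths quoted above follow from the N₂ vibrational Raman shift, a standard literature value of about 2331 [cm⁻¹] (the shift value is not from the text):

```python
# Stokes-shifted wavelength: 1/lambda_2 = 1/lambda_1 - delta_nu,
# with delta_nu the Raman shift in [cm^-1] and wavelengths in [nm].
def raman_shifted_nm(lambda_nm, shift_cm=2330.7):
    wavenumber = 1.0e7 / lambda_nm          # [cm^-1], since 1 nm = 1e-7 cm
    return 1.0e7 / (wavenumber - shift_cm)  # back to [nm]

for lam in (355.0, 532.0):
    print(f"{lam:.0f} [nm] -> N2 Raman line at {raman_shifted_nm(lam):.1f} [nm]")
```

This reproduces the 387 [nm] and 607 [nm] detection wavelengths of CAELI; the water-vapour channel follows the same formula with the (larger) H₂O Raman shift.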
Figure 8.16 Time–height display of the range-corrected signal at 1064 [nm] from the Raman lidar CAELI. Data is shown at 10 [s] resolution in time and 7.5[m] vertical resolution. On 17 May 2010 a layer of volcanic ash can be seen between 2.5 and 6 [km] altitude between 17:00 UTC and midnight. This is shown much better on colour Plate 16 (top), where the colours indicate the intensity of the backscattering, shown on the bar at the right. (This graph was made available by Dr Apituley, KNMI who gave kind permission to reproduce it.)
Monitoring with Light
375
Figure 8.17 Retrieved aerosol profiles from the Raman lidar CAELI from 20:15 to 20:45 UTC on 17 May 2010. The panels from left to right show respectively the backscatter profiles at three wavelengths; the extinction profiles at two wavelengths; the lidar ratio at two wavelengths; and the particle depolarization ratio. The variability in the lidar ratio (apart from fluctuations caused by instrumental noise) illustrates differences in aerosol type below 2.5 [km] and above. In the layer between 2.5 and 6 [km] the particle depolarization ratio shows high values, indicating highly nonspherical particles. (Reproduced from data with permission from Arnoud Apituley, KNMI.)
In Figure 8.16 and Plate 16 (top) the passage of the ash cloud, 17 May 2010, is shown, using Raman lidar. If you go from left to right in the picture you will see how the cloud starts small at 4 [km], then extends from 3 to 6 [km] and finally becomes smaller again and disappears around midnight. Figure 8.17 shows data products retrievable from the multiwavelength Raman lidar CAELI. The backscatter and extinction profiles are retrieved from the lidar data and subsequently, the lidar ratio is calculated. The variability in the lidar ratio (apart from fluctuations caused by instrumental noise) illustrates differences in aerosol type below 2.5 [km] and above. In the layer between 2.5 and 6 [km] the particle depolarization ratio shows high values, indicating highly nonspherical particles. Figure 8.18 and Plates 16 (middle) and 16 (bottom) show the situation on 18 April 2010 during the early days of the eruption when a cloud of ash was passing that caused the airspace to be shut down. The top graph of Figure 8.18 shows the results of the UV backscatter lidar, which operates at 355 [nm]. The bottom graph shows the simultaneously measured depolarization of the linearly polarized laser pulse. The high depolarization, shown in the bar on the right, illustrates the irregular shape of the volcanic ash.
Figure 8.18 Time–height display of the range-corrected signal at 355 [nm] from a Leosphere ALS-450 UV-backscatter lidar. The first plume of ash arrived on 16 April, and after a clear day a second ash plume arrived over the Netherlands on 18 April 2010. The top graph refers to backscattering only; the lower graph indicates the depolarization, indicated on the bar at the right. The plume is visible between 1 and 2 [km], mainly in the middle of the day. The strong depolarization, shown more clearly on Plate 16 (bottom), indicates the irregular shape of the volcanic ash. (These two graphs were made available by Dr Donovan, KNMI, who gave kind permission to reproduce them.)
Exercises

8.1 Derive Eq. (8.12) using a Fourier transformation. You may, of course, consult textbooks like [2]. The derivation is a little cumbersome, but not difficult. Start by showing that the probability that the molecule has a free flight between times τ and τ + dτ equals

p(τ)dτ = (1/τcoll) e^(−τ/τcoll) dτ
8.2 Calculate the rotational energy levels of NH3. The NH3 molecule may be described as a symmetric rotor with an NH bond length of 101.2 [pm] and an HNH bond angle of 106.7°. The expressions for the moments of inertia can be found in texts like [1], p. 555 (look up 'moments of inertia' in the index). Calculate the wavenumber ν for the K = 0, J = 0 → J = 1 transition. Check in Figure 8.2 in what region of the electromagnetic spectrum the transition will be found.
8.3 Which of the following molecules may show a pure rotational spectrum in the microwave region: H2, HCl, CH4, CH3Cl, H2O, NH3?

8.4 An experimental peak f(ω) has height H and full width at half maximum W. Assume that a Gaussian shape would fit the peak and show that its integral ∫ f(ω)dω = 1.064HW ≈ HW.

8.5 In a simple model for conjugated polyenes, such as the photosynthetic pigment β-carotene, the π-electrons are allowed to move freely along the chain of carbon atoms. In this model the 22 π-electrons of β-carotene (each carbon atom of the conjugated chain contributes one π-electron) are regarded as independent particles in a one-dimensional box and the MOs are square well wavefunctions. The dominant optical transition of β-carotene occurs at 500 [nm] as the transition between the HOMO (n = 11) and the LUMO (n = 12); what will be the effective length of the β-carotene molecule? Compare this to the true length, taking a C–C bond length of 140 [pm]. Calculate the absolute value of the transition dipole moment using square well wavefunctions. Compare the result with the experimental value, using Eq. (2.46) and taking an extinction coefficient at the maximum ε = 150 000 [L mol−1 cm−1] and a FWHM of the β-carotene absorption spectrum of 40 [nm].

8.6 In Section 8.6.2 you will find three displacements of wavelengths produced by the Raman lidar CAELI. (a) Check that all three correspond to the same excitation. (b) By using textbooks like [1], show that this indeed is a vibrational excitation in N2. Note: diatomic molecules like N2 do not have a vibrational transition dipole moment. However, in Raman spectra the vibrational transitions are induced by the incoming light ([1] Section 16.12).
References

[1] Atkins, P.W. (1994) Physical Chemistry, 5th edn, Oxford University Press, Oxford.
[2] Loudon, R. (1983) The Quantum Theory of Light, 2nd edn, Clarendon Press, Oxford.
[3] van de Hulst, H.C. (1957) Light Scattering by Small Particles, John Wiley, New York. This text contains an extensive discussion of Rayleigh and Mie scattering. The book was republished by Dover, New York, 1981 as an unabridged but corrected edition.
[4] Esposito, A.P., Foster, C.F., Beckman, R.A. and Reid, P.J. (1997) Excited state dynamics of chlorine dioxide in the condensed phase from resonance Raman intensities. Journal of Physical Chemistry A, 101, 5309–5319.
[5] Gottwald, M. (ed.) (2006) Sciamachy, Monitoring the Earth's Changing Atmosphere, Deutsches Zentrum für Luft- und Raumfahrt DLR. This book may be downloaded from http://atmos.caf.dlr.de/projects/scops/; a new print will appear as Gottwald, M. and Bovensmann, H. (eds) SCIAMACHY - Exploring the Changing Earth's Atmosphere, Springer, Dordrecht, Heidelberg, London, New York, 2010.
[6] Noël, S., Bovensmann, H., Burrows, J.P. et al. (1999) Global atmospheric monitoring with SCIAMACHY. Physics and Chemistry of the Earth (C), 24(5), 427–434.
[7] Lenoble, J. (1993) Atmospheric Radiative Transfer, DEEPAK, Hampton, Virginia.
[8] Tropospheric Emission Monitoring Internet Service, www.temis.nl; Plate 8.15b was produced by TEMIS/KNMI.
[9] Measures, R.M. (1984) Laser Remote Sensing: Fundamentals and Applications, Wiley, New York.
[10] Schanda, E. (1986) Physical Fundamentals of Remote Sensing, Springer, New York, Berlin and Heidelberg.
[11] NASA Network detection change status, NASA, Washington, 1990.
[12] Ansmann, A., Wandinger, U., Riebesell, M. et al. (1992) Independent measurement of extinction and backscatter profiles in cirrus clouds by using a combined Raman elastic-backscatter lidar. Applied Optics, 31, 7113–7131.
[13] Eloranta, E.W. (1998) Practical model for the calculation of multiply scattered lidar returns. Applied Optics, 37, 2464–2472.
[14] Fernald, F.G. (1984) Analysis of atmospheric lidar observations: some comments. Applied Optics, 23, 652–653.
[15] Apituley, A., Wilson, K.M., Potma, C. et al. (2009) Performance assessment and application of CAELI: a high-performance Raman lidar for diurnal profiling of water vapour, aerosols and clouds, in A. Apituley, H.W.J. Russchenberg and W.A.A. Monna (eds), Proceedings of the 8th International Symposium on Tropospheric Profiling, 19–23 Oct. 2009, Delft, the Netherlands, pp. SO6-O10.
9
The Context of Society

In Chapter 1 we announced that this book should help to establish a sustainable energy supply as an essential building block of sustainable development. In this final chapter we start in Section 9.1 by interpreting the data on energy consumption and estimates of cheap resources of fossil and nuclear fuels; we finish with the energy options available for a modern industrial state. In Section 9.2 we briefly review the resources of fresh water, another area where the need for sustainability is manifest. In Section 9.3 we discuss how all methods of energy conversion, including renewables, have problematic side effects. Either in the conversion process or in the manufacturing of the equipment and installations, chemical compounds or radiation may intrude on the environment with possibly harmful effects. For chemicals we discuss how one determines what concentrations in air or water will be acceptable. This resembles the discussion of the acceptable dose of radiation in Section 6.3.2. In Section 9.4 we describe international efforts to mitigate or limit the harmful consequences of industrial production. The success story is the Montreal protocol, which aims at phasing out chemicals that were harming the protective ozone layer. The second story is about climate change. There is a host of data, gathered and interpreted by the IPCC, many of which were summarized in Chapter 3. The political follow-up, however, seems to be lacking. It is too early to call international climate policy a failure, but so far it certainly is no success story. Apart from the political will to take unpopular measures, a more fundamental question is whether management of a highly nonlinear system such as the global climate is feasible at all. In Section 9.5 we will raise some questions, from which we will conclude that utmost care is called for, and the need for sustainable development is underlined. In Section 9.6 we finish with the role of science in society.
Students from the natural sciences must realize that scientific analysis is only one of the factors in political decision-making. Moreover, scientists also have different points of view, both on scientific questions and on their political implications. Some scientists are more optimistic about the possibilities of technology than others; some take a more political stance than others.

Environmental Physics: Sustainable Energy and Climate Change, Third Edition. Egbert Boeker and Rienk van Grondelle. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd.
We will finish by presenting our personal views. First we join a plea for a new contract between science and society. The aim of that contract should be to provide an acceptable standard of living for the entire world’s population, now and in the future. In this sustainable development technological fixes have their place; several of these were presented in this book. That, however, will not be enough. We support the view that transition to a sustainable society requires a change of culture, both in science and in society. The present authors are physicists. They are trained in tackling well-defined problems by experimentation and modelling. Social reality is more ambiguous, even problem definitions and data selection are the subject of extensive discussions and disagreements between experts. In writing about society in this chapter we will stick close to a physical approach, by relying on internationally agreed data. Our interpretation and judgements are, of course, our own and we welcome comments from our readers. This chapter has a few numerical exercises, but in particular is supplemented by ‘social questions’ that may be used for writing short essays. These questions do not have straightforward ‘right’ or ‘wrong’ answers. The quality of the answer depends on the arguments and the correct use of scientific knowledge. In the ‘worked out exercises’, available for teachers, we give an indication of our own answers. Also here we welcome suggestions from our readers.
9.1
Using Energy Resources
Resources of land and fossil fuels are finite. So, eventually they will be exhausted. The question is when. More than 200 years ago, in 1798, the British economist Malthus published his Essay on the principles of population. He argued that most people will remain poor as the population doubles every 25 years, while the output of agriculture and industry will increase more slowly with time. In 1972, the Club of Rome made a similar point in analysing the finiteness and exhaustion of many materials [1]. Still, in the Western world, most people are not confronted with limitations on material resources. Therefore, we will look into this point for the example of energy, which is an essential resource, and finish with a brief discussion of the energy options available.

9.1.1
Energy Consumption
In Figure 9.1 the energy consumption per person is shown for India, the USA and for the world as a whole, from the beginning of the industrial revolution around 1800 up to the year 2008 ([2], [3], [4]). These countries are chosen because India is typical of a rapidly industrializing country and the USA is a typical example of a well-developed industrial state. On the left axis in Figure 9.1 both the units [10^9 J/year] and watt [W] are indicated. The latter unit is the more illuminating of the two, for we calculated in Exercise 4.1 that manpower, maintained for some length of time, does not produce much more than about 76 [W] of mechanical work over 8 hours, which is not more than 25 [W] when averaged over a day. This has to be compared with the 1000 [W] of power which a small electric heater will use.
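The relation between the two left-axis units of Figure 9.1 is just the number of seconds in a year; a minimal sketch:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ≈ 3.156e7 [s]

def gj_per_year_to_watt(gj_per_year):
    """Convert an energy consumption in [10^9 J/year] to average power [W]."""
    return gj_per_year * 1.0e9 / SECONDS_PER_YEAR

print(gj_per_year_to_watt(1.0))    # 1 [GJ/yr] ≈ 31.7 [W] continuous
print(76 * 8 / 24)                 # 76 [W] over 8 hours ≈ 25 [W] day average
```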
Figure 9.1 Energy consumption per capita (or person) from 1800 to 2008. The energy data are given on the left-hand scale. Population data are given on the right-hand scale. The energy data for the USA have to be multiplied by 10 because they do not fit the scale [2], [3], [4].
From Figure 9.1 one observes that on a world scale the energy consumption per person is growing steadily, with a dip around 1970 due to the 'energy crisis' at that time. One may notice that after such a calamity the steady growth catches up again. In Figure 9.1 the growth of the world population is also indicated, with a projection up to 2050. The total energy use is the product of the world population and the energy consumption per person. Both are increasing rapidly and their product is growing even faster. In compiling the data presented in Figure 9.1 the electrical joules resulting from hydropower were added to the chemical joules that enter a coal-fired power station. This looks like adding apples and oranges, for we have seen in Section 4.2 that electric joules can be almost completely converted into mechanical work, while for heat the conversion will not be much higher than 40%. The effect of this adding up is that hydropower, solar PV and wind are weighted too low in the summation of total primary energy production. This could be corrected by introducing a concept like 'saved fossil fuel energy', where 1 electric joule would get a weight of 1/ε if ε is the average efficiency of fossil-fuel-fired power stations. In the present text this is avoided in order to remain consistent with the international statistics. Another reason to abstain is that fossil joules are not only used for energy production; about 10% goes to nonenergy uses in the chemical industry. The best way to proceed is to keep the warning about apples and oranges in mind. Another point to note is that the tables of the IEA, the International Energy Agency, express energy in the unit [Mtoe] = Mega tonne oil equivalent. With the help of Appendix A we converted those to the more physical joules. Finally, although joules from hydropower are just added to chemical joules, the joules from nuclear power stations are
in the statistics multiplied by three with the argument that their efficiency is about 1/3 and one is interested in the amount of thermal joules produced.

Table 9.1 World energy use by fuel/[EJ/yr] and conventional reserves/[EJ] (in the year 2008 [2], [3], [5]).

Source | Use/[EJ/yr] | Reserves/[EJ] | Comments
Oil | 169 | 7200 | [3] p 202; 7 barrels = 1 tonne
Gas | 109 | 7000 | [3] p 280
Coal/peat | 143 | 25000 | [3] p 127
Nuclear | 30 | ≈2400 | Price <130 [$/kgU] [5] p 3
Hydro | 12 | <100 | In nature 400 [EJ] [6] p 41
Combustible biomass/waste | 51 | | Competition with food
'New' renewables: PV, wind, CSP, (biosolar) | 4 | |
Total | 518 | |
9.1.2
Energy Consumption and Resources
For simplicity we assume that, on average, production and consumption of energy are the same and we break down the total number of joules in Figure 9.1 into their constituent parts: the fossil fuels, nuclear, hydro, conventional biomass combustion and other renewables (PV, wind etc.). This is done in Table 9.1, expressed in [EJ] = 10^18 [J], where we also give an estimate of the 'conventional' reserves, where applicable. Besides these conventional reserves there are many unconventional reserves, whose common factor is that they will require more effort, money and joules to extract [3]. In studying Table 9.1 one should keep in mind that the proven conventional reserves may grow over time. In fact for oil and gas they are still increasing, albeit slowly. It therefore is not realistic to divide reserves by use and conclude that there is, for example, gas for only 70 years at the present level of consumption. The same holds for nuclear, where an increase in the price of uranium ore will expand the reserves while having only a small effect on the kWh price, for 80% or more of the price of nuclear power reflects the capital cost of the nuclear installations. The reserves of hydro have stronger limitations, as expansion of hydropower may destroy large areas of valuable land or nature. Expansion of combustible biomass production will compete with agriculture. Therefore plants are being developed that grow on land unsuitable for agriculture and, as we discussed in detail in Chapter 5, cyanobacteria (blue-green algae) may be cultivated in water to supply biomass or biofuel. If the production of this 'new biomass' succeeds at acceptable prices it may become a 'technological fix' for the energy problem. As to the 'new' renewables like PV, CSP, solar heating or cooling, and wind: their present contribution is still low, but is expected to grow, as was discussed in Chapter 5.
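The 'gas for only 70 years' type of estimate that the text warns against is the static reserves-to-use quotient of Table 9.1; for the fossil fuels it comes out as follows (a naive indicator, for illustration only):

```python
# Static reserves/use ratios from Table 9.1 (2008 data); a naive indicator,
# since reserves grow over time and consumption changes
table_9_1 = {                 # fuel: (use [EJ/yr], conventional reserves [EJ])
    "oil":       (169, 7200),
    "gas":       (109, 7000),
    "coal/peat": (143, 25000),
}

for fuel, (use, reserves) in table_9_1.items():
    print(f"{fuel:9s}: {reserves / use:4.0f} years at 2008 consumption")
# gas comes out near 64 years, the order of the '70 years' quoted in the text
```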
Although the conventional fossil reserves are big enough to last for a century or so, the energy price is expected to rise due to higher demand from a growing population
([3] p 69). This will give a better market position for the new renewables and expand their market share. Reflecting on Table 9.1 the question arises why governments should stimulate a transition to a sustainable society. Could the 'market' not solve our energy problems? There are at least three reasons for government intervention:

(1) The combustion of fossil fuels releases CO2, which is a strong greenhouse gas. There are differences: at present, combustion of oil liberates 74 [g] of CO2 per [MJ], natural gas 56 [g/MJ] and coal 104 [g/MJ]. Slowing down global warming requires a faster transition to new renewables than the energy market would dictate. Note that capture and storage of CO2 would have to be enforced; it works only at power stations (not at individual homes) and will increase the kWh price.

(2) Besides CO2, burning fossil fuels emits polluting gases like SO2 and NOx, originating from the plants from which the fossil fuels formed. These emissions are smaller for natural gas, but resources of 'sour gas' contain the greenhouse gases CH4 and CO2. Of course these pollutants can be removed from the flue gases, but that has to be enforced by regulations and will increase the energy price.

(3) The resources of oil and gas are unevenly distributed over the earth [3]. For reasons of energy security, i.e. security of energy supply, governments want to be less dependent on energy imports.
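Combining the emission factors of point (1) with the 2008 use figures of Table 9.1 gives a back-of-the-envelope estimate of annual fossil CO2 emissions (a rough sketch; it ignores the roughly 10% nonenergy use of fossil fuels mentioned in Section 9.1.1):

```python
# CO2 per fuel: use [EJ/yr] from Table 9.1 × emission factor [g CO2 / MJ]
fuels = {                 # fuel: (use [EJ/yr], g CO2 per MJ)
    "oil":  (169, 74),
    "gas":  (109, 56),
    "coal": (143, 104),
}

total_gt = 0.0
for fuel, (use_ej, factor) in fuels.items():
    # 1 [EJ] = 1e12 [MJ]; grams -> gigatonnes is a factor 1e-15
    gt = use_ej * 1.0e12 * factor * 1.0e-15
    total_gt += gt
    print(f"{fuel:4s}: {gt:5.1f} Gt CO2 per year")

print(f"total: {total_gt:.1f} Gt CO2 per year")   # of the order of 30 Gt
```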
9.1.3
Energy Efficiency
One way to reduce the dependence on energy supply is to increase the energy efficiency of all parts of society: in building, in industry, in transport. That there is something to gain will become clear from Figure 9.2. There we show the energy use per person in 30 countries around the world. We also show the national income per person, corrected for differences in purchasing power relative to the US$. That correction is significant, particularly in low-income countries. One observes big differences, even between neighbouring countries, some of which are not difficult to explain. From Figure 9.2 one notices that the data points are widely scattered. Countries with the same energy consumption per dollar of purchasing power as the USA have their data points on the line which connects the USA with the origin. It will be noticed that many countries are close to it. Countries which perform better than the USA on energy efficiency have their points to the right of the line. It will be noticed that Switzerland scores very highly, possibly because the Swiss earn a considerable part of their national income from international financial services. It is remarkable that Russia and Canada score badly on energy efficiency. That may be due to their indigenous energy resources, which make the need for energy security less pressing. Industrial countries like France, Britain and Germany have had to rely on energy imports for a long time. Forced by high energy prices, they have adjusted their economic systems to produce their goods with less energy. And this process is not diminishing their prosperity. It therefore seems feasible to produce $1 of goods for half the energy of the USA. In short: more products for 1 [J]. With such a target, the increase in energy consumption will slow down.
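The comparison with the USA line can be expressed as an energy intensity, i.e. energy use per dollar of purchasing power; the two data points below are rough values read from Figure 9.2 by us, purely for illustration:

```python
# Energy intensity = energy use per $ of purchasing power (values read
# approximately from Figure 9.2; illustrative, not the plotted data)
countries = {           # country: (energy [GJ/yr per person], GDP(PPP) [$/person])
    "USA":     (330, 37000),
    "Britain": (160, 28000),
}

intensity = {c: e * 1.0e9 / gdp for c, (e, gdp) in countries.items()}  # [J/$]
for c, i in intensity.items():
    print(f"{c:8s}: {i / 1e6:4.1f} MJ per $")

# Countries to the right of the line through the USA and the origin have a
# lower intensity; with these rough numbers Britain needs about two thirds
# of the US energy per dollar of goods.
print(intensity["Britain"] / intensity["USA"])
```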
Figure 9.2 Energy consumption versus the national income (in purchasing power), both per person, for a wide range of countries around the world. The data are for 2008 and the purchasing power is expressed in US$ of the year 2000. Data were compiled from [2].
9.1.4
Comparing Energy Resources
In deciding what energy source to use for a particular application it is useful to have a checklist of their environmental consequences. Such a list is reproduced in Figure 9.3. The figure is taken from a solar energy conference [7], where the positive evaluation of solar power must have been popular. Nowadays we would point out that in the production of solar cells strongly acting chemicals have to be used and disposed of. Therefore for photovoltaics we would put 'negligible/significant' instead of 'negligible' under 'waste disposal'. In Figure 9.3 fossil fuels rank rather badly; coal and oil especially lead to a broad spectrum of emissions. Notice a few other aspects which were not mentioned earlier: almost all options lead to increased land use, some of them lead to noise, and all have a nonnegligible visual intrusion on the landscape. For a few options, waste disposal has to be organized. The fact that the figure contains implicit judgements follows from the waste disposal entry for nuclear power, which is 'significant/large'. On the basis of Chapter 6 we would put that in the highest category, 'large', comparable at least with the acid pollution of coal-fired power plants, which in Figure 9.3 was put as 'large'. On the other hand, you may as well conclude from Chapter 6 that the volume of the very long-lived radioactive nuclei produced by nuclear power is very low, even without transmutation. It has been estimated ([8], p. 170) that the 10 British nuclear power stations produce 0.84 [L] of radioactive waste per person per year, which amounts to the contents of a bottle of wine.
[Figure 9.3 rates twelve energy technologies (passive solar energy, wind power, photovoltaics, biomass, geothermal energy, hydroelectricity, tidal energy, wave power, coal, oil, natural gas, nuclear power) against environmental criteria: acid pollutants (e.g. SO2, NOx), global warming (CO2, CH4), human health and safety, particulates, heavy metals, catastrophes, waste disposal, visual intrusion, noise and land requirement. Each entry is rated on a five-step scale from 'negligible' to 'large'.]

Figure 9.3 Checklist to judge the environmental effects of energy technologies with five categories. The way in which the list is filled in reflects the judgement of the original authors. (Produced from Tenth E.C. Photovoltaic Solar Energy Conference (Eur (Series), 13807, 1991, with kind permission from Springer Science + Business Media B.V.)
Of this only 3% is very long-lived. In 1000 years and with 60 million people this would become 105 000 [m3 ]. In a layer of 1 [m] of thickness this is only 0.1 [km2 ]. The message is that it is a matter of judgement how to fill in a checklist like Figure 9.3. It would be interesting to have a last column in Figure 9.3, containing the price of the electricity produced by the various means. This would follow the price on the electricity market. Then coal would be one of the cheapest and photovoltaics (PV) one of the most expensive. The difficulty is that the market price does not include the environmental costs, as governments do not charge the power utilities or mining companies a realistic environmental
tax. Also, the cost of nuclear power would seem lower than would be realistic because of all kinds of government subsidies in the past and a government cap on the liability of power utilities for accidents with nuclear installations. On the other hand, renewables are subsidized because they contribute to energy security and sustainability. It therefore seems wise to leave the price of power out of environmental comparisons.

9.1.4.1
Technological Fixes
Figure 9.3 suggests difficulties connected with all methods of energy conversion. The question then arises whether the difficulties can be overcome by a technological or sometimes
a social fix. To study this we turn to Table 9.2, where for each method of energy conversion the most significant problem is displayed, with its effect or harm and its solution in the short and the long term. Not all 'fixes' are technological; some, like tax facilities, are social or political. Virtually all solutions bear an economic or environmental cost. For example, tax facilities for improving energy efficiency or for cleaning flue gases would imply a reallocation of public money or an increase in taxation elsewhere. To show that everything has its economic cost we point out that stabilizing the installed amount of nuclear power would have the positive effect of saving limited resources of high-grade uranium and would limit the amount of high-level waste. However, the profits of the nuclear industry would then be too low to pay for much research on transmutation and on more efficient use of natural uranium. Consequently, public money would need to be spent on this subject, which is in fact what is done in many industrialized countries. There are cases where environmentally benign solutions do not cost money, but where intelligence and perseverance are enough. Examples include the clever design of new products or installations where the synergy of many measures reduces costs in terms of resources to build them or energy to run them. It is possible to design buildings or dwellings which feel more comfortable than conventional ones; a possibly higher building cost is compensated by a lower energy consumption. Even expensive technology like PV or Graetzel cells may be installed on roofs and windows of newly designed buildings, where their electricity production may offset, at least partly, the higher investment. If these measures are put into practice, they might ease the upward trend in energy consumption, shown in Figure 9.1.

Table 9.2 Problems associated with energy conversion and their solutions.

Problem | Effect/danger | Short/medium-term solutions | Long-term solutions
Limited resources of high-grade fossil fuels | (Armed) competition; economic disruption | Improving energy efficiency | Renewable energies; nuclear power + breeding
Burning low-grade fossil fuels | Mining damage; high emissions; high costs | Cleaning flue gases | Renewables; nuclear power + breeding
Fossil fuel CO2 emissions | Climate change | Change to gas; CO2 sequestration; reforestation; improve energy efficiency | Renewables + nuclear power
Limited resources of high-grade U | Rising prices; opposition to mining | Stop growth of nuclear power; reprocessing | Renewables; breeding; fusion
Long-lived nuclear waste | Pollution | Stop growth of nuclear power; limited number of waste disposal sites | Renewables; transmutation; fusion
Nuclear proliferation | Nuclear war | Reinforce nonproliferation treaty | Nuclear disarmament; small number of reprocessing units
High kWh cost of renewables | Slow dispersion | Tax facilities | More research; 'new' biomass
Transport of liquid hydrogen | Explosions | Keep out of population centres | Absorption in metals
Transport of oil | Oil spills | Safety on tankers; special sea routes | Pipelines
Transport of liquid gas | Explosions | Keep out of population centres |
Emissions of vehicles | Deterioration of air quality | Hybrid or electric cars | Public transport
The column on the far right in Table 9.2 shows that in the long term a sustainable supply of energy would require renewable energies, including ‘new’ biomass; it might be supplemented by nuclear power, including breeding from 238 U and transmutation and, if it is ever going to work, nuclear fusion. Whether nuclear power is an acceptable way into the future and whether it may be considered as sustainable depends on how one weighs the risks connected with this form of energy conversion. Renewables also have their drawbacks (Figure 9.3) and we do not yet know all the environmental aspects of large-scale bio-solar energy production. Anyway, for a long time we will be dealing with a ‘mix’ of all the energy options displayed in Table 9.1. In Table 9.2 we did not display socio-political measures. The most obvious would be to limit population growth, which would ease the strain on all resources. However, the demographic composition of the world’s population is such that growth will continue until after the year 2025, and reducing the growth rate will take effect only in the long run. Of course it has to be encouraged, if only to reduce the tensions in future generations.
9.1.5
Energy Options
Table 9.2 shows that in the long run only renewables and, under some conditions, nuclear power are available as proven technologies for the supply of energy. For the medium term one could add 'clean coal', because coal is widely available and the emitted greenhouse gas CO2 may be captured and stored. 'New' biomass and biofuels from blue-green algae (cyanobacteria), grown with high efficiency, still have to be assessed for all their implications, but could be added to the options.
The question arises whether a modern industrial state could run on renewables alone. To discuss this question we draw on the work of MacKay [8], who studied the hypothetical energy options for Britain (England, Scotland, Wales) with the energy demand of the year 2008. The author uses 125 [kWh] per day, or 164 [GJ] per year, as the average energy consumption of a Brit. In Figure 9.2 we used 140 [GJ/year], a difference which, for the argument, may be ignored. MacKay's answer is that adding up all possible renewable energy resources for Britain could, in theory, cover the British demand. However, one would need wind farms the size of Wales, 500 [km] of coastline with wave power, 75% of the whole of Britain for energy crops, a PV area also the size of Wales and, on the roofs, both hot water collectors and PV. MacKay concludes that it is not only technological constraints, like the competition for roof space, that limit the application of renewables; all kinds of social and environmental constraints will also make it very difficult to maintain the British lifestyle with renewables alone. The reasons are the high population density and the fact that renewables use a lot of land space. Following MacKay there are three ways to reduce the energy demand in Britain ([8], p. 115):

(1) Reduce the population (e.g. emigration)
(2) Change the lifestyle (use the bicycle instead of the car)
(3) Use all possible technological fixes.

The first two points are not very realistic; as for the third, MacKay suggests increasing the share of electricity in the total supply, using electricity to drive heat pumps for home heating and electric cars. This, of course, exchanges chemical joules for high-exergy electric joules. When all technological possibilities are exhausted, one could tackle the supply side. The author ends up with renewables covering 50% of the energy demand, with nuclear power and power from solar farms in the deserts equally covering the other 50%.
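MacKay's 125 [kWh] per day and the 164 [GJ] per year quoted above are the same figure in different units; a quick check, which also expresses it as a continuous power for comparison with the [W] axis of Figure 9.1:

```python
KWH_TO_MJ = 3.6                       # 1 [kWh] = 3.6 [MJ]

per_day_kwh = 125.0
per_year_gj = per_day_kwh * KWH_TO_MJ * 365 / 1000.0
print(per_year_gj)                    # ≈ 164 [GJ/year]

# The same consumption as continuous power:
print(per_day_kwh * 3.6e6 / (24 * 3600))   # ≈ 5200 [W] per person
```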
Note that this offers far from complete energy security, because half of the energy supply is still supposed to come from abroad. Still, with all efforts directed towards renewables, the energy security would be better than at present with the import of fossil fuels. In 2004 one of the present authors (EB) made a similar, though less detailed, calculation for the then 15 countries of the European Union. It was concluded that with all possible efforts to increase energy efficiency, in 2050 renewables would supply 45%, fossil fuels 40% and the other 15% either nuclear power or imports of renewables. In this estimate real energy security would not be achieved either.

9.1.6 Conclusion
In the estimates given above, technologies which still have to be proven, like ‘new biomass’ or nuclear fusion, were not taken into account. The authors favour research and development of, in particular, new biomass/biofuels. However, it must be realized that new biomass will also require a lot of space. Exercise 5.21, for example, uses the optimistic assumption that 5% of the incoming solar power, after handling, drying and chemistry, can be converted into liquid fuel. For the Netherlands, to supply the complete energy consumption of 2008 with ‘new biomass’ at a net efficiency of 5% would require 40% of the area of the country, including the internal waters.
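The land fraction can be reproduced with round numbers. The insolation, consumption and area values below are our own illustrative assumptions, not data from the text:

```python
# Rough check of the 'new biomass' land requirement for the Netherlands.
# Assumed round numbers (illustrative, not from the text):
solar_gj_per_m2_yr = 3.6   # ~1000 kWh per m2 per year of insolation
net_efficiency = 0.05      # sunlight -> liquid fuel, as in Exercise 5.21
nl_energy_ej = 3.3         # Dutch primary energy use in 2008, ~3.3 EJ
nl_area_km2 = 41_500       # land area including internal waters

fuel_gj_per_m2_yr = solar_gj_per_m2_yr * net_efficiency   # 0.18 GJ/m2/yr
area_m2 = nl_energy_ej * 1e9 / fuel_gj_per_m2_yr          # 1 EJ = 1e9 GJ
fraction = area_m2 / (nl_area_km2 * 1e6)
print(f"fraction of the country needed: {fraction:.0%}")
```

With these round inputs one finds roughly 44%, consistent with the approximately 40% quoted above.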
The Context of Society
Table 9.3 Fresh water supply on land/[10^12 m^3 yr^−1], based on [9].

  Pouring down as rain          113
  Accessible to humans            9
    of which in rivers          between 2 and 3
  Total use 1990s                 5
    of which in agriculture       3
It seems to us that complete energy security will be hard to achieve in densely populated industrial countries. This is one reason, amongst many others, to live in peace with the rest of the world. If not, external circumstances will reduce our supply, and hence our consumption, enforcing a transition to a much less demanding lifestyle.
9.2 Fresh Water
The essential data for fresh water are summarized in Table 9.3 [9]. The water pouring down as rain is usually suitable as drinking water. Of course, it is the fraction on land that is of interest, and that is what is given in the table (Exercise 9.4). Accessible to humans is the fraction that rains on the agricultural land, fills the big lakes, seeps down to groundwater basins or flows off the land as rivers. The estimate of the latter is given in Table 9.3 as well. Important to notice is the big fraction of the accessible water which is used in agriculture: 3/5 = 60% of the present water consumption and 3/9 = 1/3 of the available fresh water. It is expected that the growing world population will need more water for animal and human food; rising wealth also puts a heavier strain on the water consumption per person. Therefore the experts believe that eventually most of the rivers will be used for irrigation. They will no longer reach the oceans, but be diverted to agricultural lands. As a rough estimate, the water used for agriculture may double with irrigation and consequently the food production may double as well. The growth curves will be similar to those in Figure 9.1. A factor of two in population growth and food consumption per person together is quickly reached. A technological fix would be the desalination of sea water or brackish water, or the purification of sewage water. All these processes cost a lot of energy, and in this way the scarce resources of water and of energy are connected. Fresh water therefore remains in strong demand. At this very moment resources of fresh water are a source of disagreement and conflict between many countries. The data in Table 9.3 suggest that fights or even wars over the scarce resources of water will increase in the future. Sustainable development should apply to water as well. And, we may add, people must use peaceful ways to settle their arguments over anything, scarce resources included.
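The two fractions quoted above follow directly from the entries of Table 9.3:

```python
# Fractions implied by Table 9.3 (all entries in units of 10^12 m^3 per year).
accessible = 9.0    # accessible to humans
total_use = 5.0     # total use in the 1990s
agriculture = 3.0   # of which in agriculture

print(f"agriculture / total use:        {agriculture / total_use:.0%}")   # 60%
print(f"agriculture / accessible water: {agriculture / accessible:.0%}")  # 33%
```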
9.3 Risks
A reasonable standard of living for the billions of people living on earth can only be reached if the utmost use is made of science and technology. In a society depending on high
technology there will always be risks of adverse events, which will be weighed against the benefits of the particular technology in a political process. In this section we concentrate on the adverse effects of a technology, which are more complicated to judge than the (usually economic) benefits, which can be calculated. We follow the definitions of a Royal Society study group [10] to discuss risk and risk estimation. Risk estimation is defined as ‘the collection and examination of scientific and technical data to identify adverse effects and to measure their probability and severity’. Risk is a number: the probability that a specified adverse event will occur during a stated period of time. The adverse effects themselves, without their probability, are called ‘harm’ or ‘danger’. When comparing the costs and benefits of a certain advanced technology one needs to include the financial costs of possible harm in the cost calculation. This is done by multiplying the probability of the adverse event (the risk) by the harm done. One defines

detriment = risk × harm    (9.1)
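As an illustration of (9.1) and of its limitation, compare an ordinary hazard with a rare catastrophe; all numbers are hypothetical:

```python
def detriment(risk_per_year, harm_eur):
    """Expected yearly loss: detriment = risk x harm, Equation (9.1)."""
    return risk_per_year * harm_eur

# An ordinary hazard: a house fire, well covered by statistics.
fire = detriment(risk_per_year=1e-3, harm_eur=2e5)
# A rare catastrophe with the same expected loss but a very different character.
catastrophe = detriment(risk_per_year=1e-8, harm_eur=2e10)

print(fire, catastrophe)  # both 200 EUR per year of expected loss
```

Both cases have the same expected yearly loss, yet an insurer can pool thousands of independent fires but not a single enormous payout; this is why definition (9.1) breaks down for very small risks with very large harm.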
This definition is used by insurance companies for calculating their rates. It is a well-established method for ordinary hazards. It does not work where the risk is very low, but the harm very large. This is the reason why insurance companies in their ‘small print’ exclude risks like war, nuclear explosions, floods and the like. It is confusing that sometimes what we call ‘detriment’ is called ‘risk’ [11]. In this section we discuss how one deals with the possible harm of a particular chemical in the environment. Once it has been determined what concentrations in air or water do not harm people, plants or animals, one may use dispersion models like those discussed in Chapter 7 to calculate acceptable emissions from industry, buildings, installations, vehicles and the like. In Section 9.3.1 we consider the harm of chemicals as a function of concentration. Next we turn to the problem of small risks with a large harm. Finally we summarize the advice of social scientists on how to deal with uncertainties.

9.3.1 Small Concentrations of Harmful Chemicals
The answer to the question of what concentration of chemicals in the environment is harmful is far from trivial and has tremendous economic implications. To start with the latter: if an industry is forced to reduce its emissions by a factor of 10, it will need extra filters at the outlets and methods to make the captured chemical compounds harmless (if possible). The cost ultimately will have to be paid by the consumer. Similarly, drinking water utilities have to push the concentrations of all kinds of chemicals down below the allowed level. In this case also, the price ultimately has to be paid by the consumer. At the end of the day, a compromise has to be found in which judgements are made about what health risk is acceptable and what is not. It is difficult to establish the difference between harmless and harmful intakes in the human body. One cup of coffee, for example, contains 100 [mg] of caffeine, which provides a stimulus for the drinker and generally is considered harmless. An intake of 10 [g] of caffeine, corresponding to 100 cups, may well be lethal to an average person. But if the 100 cups are distributed over 100 days (one cup a day) the intake again is considered harmless. In this example it is not only the dose itself that counts, but also the time factor: does the body have sufficient time to dispose of the hostile materials? Another telling example
is pharmaceuticals, which are healing in small, prescribed concentrations, but often are damaging in larger concentrations. The scientific problem is to find out what concentrations in food, air or water are harmful, harmless or perhaps even healing. The traditional method is the use of dose–effect or dose–response relations, which we already met in Section 6.3. The effect of the supplied dose must be measurable, such as harm to a specified organ, a tumour or death. The dose d must be supplied to experimental animals under controlled conditions. It is expressed in [μg] of chemical per [kg] of bodyweight: [μg kg^−1], or in [μg] per [kg] of bodyweight per day: [μg kg^−1 day^−1]. One of the most widely studied effects of a chemical is the change in mortality rate during the total possible lifetime of an experimental animal, which may be of the order of 2 years. So one has to compare a group of animals that receives a certain dose with a similar group that does not receive the dose; this has to be done in the same circumstances (food, light, air etc.). Finally, the mortality rates of both groups have to be determined and evaluated. Such a procedure gives a curve like the one on the left-hand side of Figure 9.4. The logarithm of the dose d is plotted horizontally and the mortality rate vertically. Point E has a mortality rate of 100% and point D a rate of 50%. The dose corresponding to point D is called LD50, where LD stands for lethal dose. Although practically all measured curves show the behaviour displayed on the left-hand side of Figure 9.4, the number of factors of 10 along the horizontal axis may be quite different. For example, inhaling the gas CO for a number of hours at a concentration of 100 [mg m^−3] causes a headache, but not much more, while a concentration in the air of 250 [mg m^−3] is already lethal when inhaled over a few hours. These two numbers are rather close, which suggests that the curve of Figure 9.4 will be rather steep in this case.
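The time factor in the caffeine example above can be made explicit with a simple one-compartment elimination model; the 5-hour half-life is a typical literature value, assumed here only for illustration:

```python
import math

half_life_h = 5.0                 # assumed elimination half-life of caffeine
k = math.log(2) / half_life_h     # first-order elimination constant [1/h]

# Steady-state peak body burden for one 100 mg cup every 24 hours:
dose_mg, interval_h = 100.0, 24.0
peak_steady = dose_mg / (1.0 - math.exp(-k * interval_h))
print(f"peak burden, 1 cup/day: {peak_steady:.0f} mg")  # barely above one dose

# 100 cups at once: the body starts with the full 10 g.
print(f"peak burden, 100 cups at once: {100 * dose_mg:.0f} mg")
```

With one cup a day the body disposes of nearly the whole dose before the next one arrives, so the burden never accumulates far above 100 [mg]; taken at once, the same total intake gives a hundredfold peak.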
In Figure 9.4 the curve is measured from high doses on the right, down to point C and extrapolated (dashed) down to point B. The reason for the extrapolation is that it is
[Figure 9.4: left panel, mortality rate (5%, 50%, 100%) against log(d), with points A–E and extrapolation curves 1–4; right panel, the region from A to C replotted against the dose d on a linear scale, with additional points A1, B1 and B2.]
Figure 9.4 The figure on the left shows a standard graph of induced mortality against the logarithm of the dose d, which means that the dose rate may span several factors of 10. For reasons of time or money there are no data on the left of point C. The figure on the right shows the part AC on the left graph on a linear scale. In order to decide between the four extrapolations to the left of point C one needs to supplement measurements with theories and models.
very difficult and expensive to measure at low concentrations. To illustrate this, the region from A to C is enlarged on a linear scale on the right-hand side of Figure 9.4. At dose B2 further experiments might find the hypothetical dashed data point shown there, and at point A1 measurement with many experimental animals might find another hypothetical dashed data point. The error bars in A1 and B2 will almost touch the axis, which means that these measurements cannot decide between the extrapolations 2 and 3 shown. If the chemical compound under investigation is not carcinogenic, the extrapolation to point B is taken as the starting point for regulations. A safety factor of 10 is applied to take into account possible differences between man and animal, and another factor of 10 to take into account that a fraction of the human population will be more vulnerable than the average: the old, the sick and the very young. After applying this factor of 100 the resulting dose is taken as an acceptable daily intake. These factors of 10 are chosen rather arbitrarily. Modern toxicology is investigating the mechanism by which a certain chemical may do harm to the human body and then looks into a similar mechanism for the most common experimental animals: rats, mice or other rodents. Depending on the mechanism, some animals are more like humans than others. This gives a criterion for the choice of experimental animal and may lead to better estimates of safety factors. If the chemical compound is carcinogenic, the damage presumably is caused by an attack of the chemical on the DNA of the organism. This is similar to the mechanism by which ionizing radiation may induce cancers. It is assumed, or at least regarded as possible, that any amount of the chemical under consideration, or any amount of radiation, has a certain probability of starting the development of a cancer. In other words, there is no safe threshold. In terms of Figure 9.4, the dose–effect curve will be measured from D to C.
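The two regulatory routes just described can be sketched in a few lines; all numerical values are hypothetical illustrations, not data from the text:

```python
# (a) Non-carcinogen: extrapolated no-effect dose (point B) divided by the
#     safety factor of 100 gives an acceptable daily intake.
no_effect_dose = 5.0              # hypothetical, [ug kg^-1 day^-1] from animals
adi = no_effect_dose / (10 * 10)  # man<->animal factor x vulnerable-group factor
print(f"acceptable daily intake: {adi} ug/kg/day")

# (b) Carcinogen: linear no-threshold line 4, risk taken proportional to dose.
slope = 2e-3          # hypothetical lifetime risk per [ug kg^-1 day^-1]
target_risk = 1e-6    # acceptable individual risk
max_dose = target_risk / slope
print(f"maximum dose for a 10^-6 risk: {max_dose:.1e} ug/kg/day")
```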
As conventional animal experiments cannot distinguish between the convex curve (indicated by 3) and the straight line (indicated by 4), the straight line is assumed in order to be ‘on the safe side’. This is the way in which the radiation risks quoted in Table 6.4 are found. In this case also, further toxicological research may be able to lay foundations for a choice between the different extrapolations and find out in what cases there may be a threshold and when there is not.

9.3.2 Acceptable Risks
Assuming the straight line 4 in Figure 9.4, it is possible to calculate the probability of an individual developing a tumour for a certain concentration in air, water or food, and the question has to be answered as to what probability of a particular chemical causing death is acceptable. The argument goes as follows. To begin with, it is assumed that the tumour is indeed lethal. The average lifetime of a human being is less than 100 years; therefore the average lifetime death probability is of the order of 10^−2 [yr^−1]. The extra death probability induced by society should be considerably lower. The chance of death for a healthy young male is of the order of 10^−4 [yr^−1]. The individual death risk from a certain technology should be definitely lower than this number, so 10^−6 [yr^−1] is taken to find boundary conditions for regulations. In Table 9.4 we summarize data for acceptable risks, published some years ago by the Netherlands Department of Health, which will be typical for industrial states. It shows that when there is a probability that in a single event several people are killed, say 10, the acceptable probability is put at 10^−5, while for 100 people killed in a single event it is put
Table 9.4 Acceptable risks for harm from a certain technological installation in the neighbourhood. The first two rows refer to any group, while the lower two refer to a single specified individual. The bottom line adds all risks to a person.

  Number of deaths in single event               Acceptable risk per year
  10 deaths                                      10^−5
  100 deaths                                     10^−7
  Individual risk from all events                10^−6
  Individual risk from all sources and events    10^−5
at 10^−7, whereas one might have expected 10^−6. The lower number reflects the impact of a big accident and the public outrage, although the risk for a single arbitrary individual is still very low. The acceptable risk of Table 9.4 is the basis of government regulations. On a private basis people do not always calculate risks and adjust their lifestyle accordingly. The risks of smoking, for example, and of drinking more than three glasses of wine per day are real, but do not prevent many people from using these stimulants. The point, of course, is that people decide to take these risks themselves, while they rely on the government to protect them against industrial and natural risks.

9.3.3 Small Probability for a Large Harm
Definition (9.1) put ‘detriment = risk × harm’. For common hazards like fires and car accidents the risk, or probability, can be found from statistics. There are enough of these incidents to calculate a probability and make an estimate of the hazard. To be safe, insurance companies put an upper boundary on the amount of money they will pay and calculate their premiums accordingly. For hazards with a small probability the usual statistical procedure does not work. Take the example of an accident with a nuclear power station in which a large part of its radioactive inventory enters the environment. Up to the time of writing we have experienced three of these accidents: the ones in Harrisburg, Chernobyl and Fukushima discussed in Section 6.1.6. Although the emission of the Harrisburg accident was modest, we will count three accidents. The present number of commercial nuclear power stations is 435, many of which have been operational for 30 or 40 years. Together there are about 2 × 10^4 reactor-years of experience with three accidents. It is too early to conclude that the probability of an accident equals 1.5 × 10^−4 [reactor-year^−1]. The measuring time should be 5 or 10 times longer, requiring another 100 to 200 years of experience, which is too long for present-day decision-making. A way of estimating a probability for the release of large amounts of radioactive materials is the use of an event tree. The idea is shown in Figure 9.5 for the reactor design shown in Figure 6.2. There the water acts both as coolant, taking the heat from the reactor core to the turbine, and as moderator, slowing down fast neutrons to thermal energies. The event tree looks at possible failures of parts of the system and assigns probabilities PA, PB, . . . to these failures based on experience. For a pipe break, for example, there is a lot of statistical data from industry.
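The statistical caution above, three events in 2 × 10^4 reactor-years, can be quantified with Poisson counting statistics:

```python
import math

events = 3                 # Harrisburg, Chernobyl, Fukushima
reactor_years = 2e4        # accumulated operating experience
rate = events / reactor_years

# For a Poisson count of n events the relative 1-sigma uncertainty is
# 1/sqrt(n), so three events fix the rate only very crudely.
rel_err = 1.0 / math.sqrt(events)
print(f"rate: {rate:.1e} per reactor-year, relative error about {rel_err:.0%}")

# Halving the uncertainty requires four times as many events, i.e. roughly
# four times the operating experience at the same underlying rate.
```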
[Figure 9.5: event tree. The initiating event A (pipe break) is followed by the columns B (electric power), C (emergency core cooling system), D (fission product removal) and E (containment integrity), each of which may succeed or fail. The leaf probabilities and release sizes read: PA – very small; PA·PE1 – small; PA·PD1 – small; PA·PD1·PE2 – medium; PA·PC1 – large; PA·PC1·PD2 – very large; PA·PB – very large.]
Figure 9.5 Simplified event tree showing the subsequent events in a nuclear ‘loss of coolant’ accident. After a pipe break with probability PA the bottom branch gives a small probability of failure of the electric power system PB and the upper branch the probability of nonfailure as (1 – PB ) ≈ 1. In the latter case the emergency cooling system may or may not fail, giving two other branches of the tree, and so on. (Adapted from [12] Appendix I).
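The branch multiplication of Figure 9.5 can be sketched as follows; the probability values are hypothetical, chosen only to show the mechanics:

```python
# Hypothetical per-branch failure probabilities (illustrative only, not data).
P_A = 1e-4     # pipe break, per reactor-year
P_B = 1e-2     # electric power fails, given the pipe break
P_C1 = 1e-2    # emergency core cooling fails, given power is available
P_D1 = 1e-1    # fission product removal fails

# A leaf of the tree is the product of the probabilities along its path,
# assuming the failures are independent of each other.
very_large_release = P_A * P_B                            # power lost: no cooling
large_release = P_A * (1 - P_B) * P_C1                    # cooling itself fails
medium_release = P_A * (1 - P_B) * (1 - P_C1) * P_D1      # a later barrier fails

for name, p in [("very large", very_large_release),
                ("large", large_release),
                ("medium", medium_release)]:
    print(f"{name} release: {p:.1e} per reactor-year")
```

Every leaf probability rests on the independence of the individual failures, the very assumption questioned in the text.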
Let us suppose that a pipe in the cooling system breaks with a probability PA. The loss of coolant will stop the chain reaction, because the water acts as moderator as well. However, the radioactive decay of the many reaction products will continue for some time (Figure 6.5) and their enormous decay heat could lead to a core melt-down and a release of radioactivity through the containment. To prevent core melt-down occurring, an emergency cooling system is present, providing extra water pumped by an electric system. If the electric power fails with an independent probability PB, the combined probability will be PA PB, which is small, but the release of radioactivity is very large (the lowest line in Figure 9.5). If the electric power does not fail, the emergency cooling system may break down (probability PC1) and the next line of defence is a system for removing the fission products, and so on. The numerical calculation on the basis of an event tree, multiplying probabilities, assumes that the different failures are independent of each other; this assumption has been criticized, since a common cause may trigger several failures at once. Even if this criticism were valid, the method remains useful to identify weak spots in the security system, not only for nuclear installations, but for other complicated technology as well.

9.3.4 Dealing with Uncertainties
Given uncertainties concerning risks, costs and benefits, policy-making often takes the form of ‘trial and error’. This method is only sound under the conditions that the consequences of errors are known and within acceptable limits; moreover there should be time to correct an error and improve technology and regulations. In the problem area of climate change ‘trial and error’ does not work. In Chapter 3 we found that global warming from greenhouse gases like CO2 is real, but that there are large uncertainties in the rate of temperature rise as a function of time (Figure 3.27). The method
of ‘trial and error’ will not work in preventing climate change, because the effect of policy measures will only be manifest after some decades. Correcting measures by that time will again take a long time to have an effect, and in the meantime society will have to cope with the consequences of – in hindsight – wrong decisions. The consequences of decisions one takes or does not take now, in terms of local rainfall, temperature, cloudiness and so on, only follow from model calculations, not from experience. And experience cannot distinguish between natural variations and policy-induced changes. Drastic measures therefore will have little or no popular support. Social scientists recommend the following catastrophe-aversion system for cases like the greenhouse threat and climate change ([13], p. 135):

(1) Protect against the possible hazard; do so conservatively
(2) Reduce uncertainty by modelling, testing and monitoring
(3) As uncertainty is reduced and more is learned about the nature of the danger, revise the original precautions. Strengthen precautions if new dangers are discovered, or if the possible harm appears to be worse than originally feared; weaken precautions if the reverse proves true.

It is too early to apply point (3), as uncertainties are still too big. Point (2) is being executed already by climate research and climate modelling. Main points of research are the influence of water vapour, aerosols, changes in land use, the exchange of heat and gases between atmosphere and oceans, and the influence of reforestation. Decisions on research priorities are taken within a small circle of interested scientists. Point (1) regards the mitigation of harmful effects. It is officially termed a no-regret policy: take those measures that are least costly, that have an intrinsic value and that slow down global warming. In Table 9.4 we mentioned measures like CO2 sequestration, improving energy efficiency and reforestation.
The first measure refers to the binding of CO2 in flue gases and disposing of the gas in deep oceans or old gas wells. Reforestation or more general greening of the deserts is meant to bind CO2 in trees or biomass. This indeed buys time, which is desirable, but it is no final solution, since the amount of land that can be reforested is limited. It must also be noted that these measures do not diminish the increase of greenhouse gases other than CO2 , which contribute to about half the total effect. Twelve years ago, in the second edition of this book we wrote the following lines: We doubt that governments will easily take steps to avert the threat of global warming. Besides the points already discussed, we add two. In the first place it will not be attractive for one country to take expensive and perhaps unpopular measures when neighbouring countries do not comply. In the second place developing countries with a low energy consumption per capita will want to catch up with the economic wealth of the richer countries and cannot bear the expense of complicated environmental measures. Large scale aid from richer countries, reducing the consumption there, at present seems unrealistic.
Unfortunately, these lines still hold true, but that is no reason to throw in the towel, as we shall see.
9.4 International Efforts
Thinning of the ozone layer and climate change are global threats and they have to be tackled in a global way. It does not make sense for one country to pay to protect the environment and reduce emissions when the rest of the world does not follow. Therefore only global treaties will be effective. This approach worked for the protection of the ozone layer, but up to now has failed in the prevention of climate change, as we will briefly discuss.

9.4.1 Protection of the Ozone Layer
In Section 2.3.3 we discussed how the thin ozone layer in the stratosphere protects man, animals and plants from harmful UV radiation. The ozone hole was discovered in 1979 and it was soon understood that it originated from CFCs released by refrigerators. In 1985 an international agreement was reached in Vienna, which was extended by the so-called Montreal Protocol in 1987. It entered into force in 1989 and by 2010 about 95% of all chemicals which are harmful to the ozone layer had been phased out [14]. The Montreal Protocol required parties to gradually phase out the production and use of CFCs. The developing countries were allowed a time delay until the beginning of the twenty-first century. In practice, CFCs might be replaced by less harmful, but still damaging, HCFCs. These in turn should be phased out by 2020 for developed countries and by 2040 for developing countries. The latter countries are assisted by a fund, from which between 1991 and 1999 around $10^9 was spent. Countries seem to keep to the obligations of the Montreal Protocol, perhaps because the hole is unquestionably real; it is monitored continuously and its development can be watched on the internet [15]. One would therefore expect the ozone hole to gradually disappear with the CFCs in the environment. Unfortunately, there are indications that the global warming of the lower atmosphere introduces a slight cooling of the higher ozone region at 25 [km] altitude. If correct, it may take longer before the ozone concentration returns to pre-industrial values. Nevertheless, the Montreal Protocol must be viewed as a success of international environmental policy.

9.4.2 Protection of Climate
The ozone hole can be measured accurately and the resulting increase in harmful UV radiation can be calculated precisely. Climate change is a more complicated phenomenon. There are many variables and there is nothing so simple as a dose–effect relation to calculate its consequences. Also, reducing climate change requires measures, which will interfere deeply in human society. It is no surprise that slowing down the rate of climate change is meeting much more opposition than the fight against the ozone hole. Let us review the situation. Following alerts by scientists, the World Meteorological Organization and the United Nations Environmental Program jointly established the Intergovernmental Panel on Climate Change (IPCC) in 1988. This panel published lengthy and detailed assessments of climate change in 1990, 1995, 2001 and 2007 and is continuing to do so [16]. The panel is using sophisticated models of which the concepts were discussed in Chapter 3 of this book. The results of the first assessment confirmed that there was reason to worry, which in 1992 led to
the adoption of the United Nations Framework Convention on Climate Change. Subsequent negotiations resulted in the Kyoto Protocol of December 1997.

9.4.2.1 Kyoto Protocol
With the Kyoto Protocol the industrialized world accepted reductions in greenhouse gases leading to a level of 5.2% below 1990 levels in the year 2012. The percentage differs from country to country, but is on average 8% for the EU countries, 7% for the USA and 6% for Japan and Canada. As noted in discussing Table 3.2, the Global Warming Potential of a greenhouse gas depends on the time horizon over which its effect is considered, and this differs between the gases. The Protocol leaves the determination of this time horizon to future negotiations. The Protocol accepts that developing countries have to catch up in their development, so no quantitative ceilings for these countries were determined. Countries that are undergoing the transition to a market economy, such as Russia and Ukraine, form a third category. Some of them, including the two mentioned, are allowed higher ceilings for the year 2012. The most straightforward way of reducing emissions would be to do it at home. That is not only painful but, as some believe, also suboptimal. For, with a certain investment, the reduction of emissions in a developing country will be much larger than in an industrialized country. The reason is that the installations in the developed world usually have a higher energy efficiency than those in the developing world. By virtue of this argument some mechanisms were developed to distribute the pain.

9.4.2.2 Outlook
The Kyoto Protocol was not ratified by the USA, one of the largest consumers of energy and one of the largest producers of greenhouse gases. Most of the countries that did ratify will not achieve their targets in 2012. The Kyoto Protocol is virtually dead, but following the United Nations Framework Convention on Climate Change countries will continue to negotiate. We expect that the need for energy security will give continuing support to the development of indigenous resources of renewable energy. This will moderate the increase in concentrations of greenhouse gases. To what extent this will mitigate global warming, time will tell. The lack of global agreements on curbing emissions gives rise to proposals to turn to technological fixes. Applying such fixes on a global scale is called geo-engineering. An example is the injection of iron into the Pacific Ocean. This already happens in nature, where the winds blow in iron oxides from the Gobi Desert. Putting in extra iron oxides would increase the growth of algae and extract CO2 from the atmosphere. A second example, which is more widely discussed, is putting SO2 or, more generally, aerosols into the atmosphere [17]. From Figure 3.25 we notice that the man-made aerosols from fossil-fuel burning already decrease radiative forcing. The reason is that aerosols reflect incoming sunlight back into space. One could theorize about bringing in the ‘right’ quantity of aerosols to get a negative radiative forcing and, consequently, a ‘pleasant’ global climate. Injections have to be organized continuously because aerosols will rain out. One could ask how all the SO2 raining down would acidify the oceans, but possibly a technological answer can be given to that as well.
There are, of course, practical questions as to who could legitimately decide how much geo-engineering to do, how this might be organized on an international level, and how to define a pleasant climate. What is good for one may be bad for another. And what would happen if, for some reason, global injections were to stop? The temperature would then rise very rapidly because of all the emitted greenhouse gases. But a deeper question is whether global management of climate is feasible at all.
9.5 Global Environmental Management
Climate is determined by a highly nonlinear system of many interacting compartments of the environment, displayed in Figure 3.24. The simplest way of climate management is slowing down human-induced change by reducing the emission of greenhouse gases. Here one tackles climate change at its origin; with a slow increase in concentrations one stays close to nature and gives the ecosystem time to adapt. The examples of geo-engineering given above interfere more drastically with nature. In particular, putting aerosols into the atmosphere tackles climate change at a different point from the source of the change.

9.5.1 Self-Organized Criticality
Apart from its doubtful desirability, the question of the feasibility of geo-engineering may be tackled by describing climate as a self-organized system, in which a small change may lead to dramatic consequences. We start by discussing a power law describing many natural events and then look at the sand pile as a metaphor for self-organized criticality (SOC).

9.5.1.1 The Power Law
Because of their disastrous consequences, earthquakes have been described and measured for a long time. Their magnitude m is proportional to log E, where E is the energy released during the quake. The number of quakes with energy between E and E + dE is defined as N(E)dE. There are not many earthquakes of large magnitude; similar to the plot of wind velocities (5.38), one gets better statistics by plotting the excess frequency of earthquakes

Nc = ∫_E^∞ N(E) dE    (9.2)
In Figure 9.6 such a plot was made for the southeast of the United States ([18], p. 13 and [20]). Figure 9.6 shows that the data points follow a straight line over at least five orders of magnitude on a double logarithmic scale. One therefore may write

Nc ∼ E^−τ    (9.3)

or

N ∼ E^−τ−1    (9.4)
The Context of Society
399
Figure 9.6 Earthquake power law. For the New Madrid zone in the USA the number of earthquakes Nc per year during 1816–1983 which exceed a certain magnitude m is plotted. As m is proportional to log E, the straight line represents power-law behaviour Nc ∼ E^−τ. Reproduced with permission from [20], Figure 3, p. 6741; © 1985 American Geophysical Union.
The proportionalities (9.3) and (9.4) show that the number of earthquakes as a function of energy follows a power law. The physical mechanism behind all quakes therefore must be essentially the same. Power laws occur everywhere in nature and even in everyday life. Per Bak [18] quotes many examples, of which we mention: the number of turbidite layers in a sediment deposit with a thickness bigger than a certain h; the X-ray intensity from solar flares; the number of words in any book (the Bible, Ulysses) which appear more often than a given frequency. The slopes of the plotted curves may differ, but usually they correspond to an exponent in (9.4) between −1 and −2.
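As a numerical illustration of (9.2) and (9.3), one can generate synthetic 'event energies' from a power-law distribution, count the exceedance frequency Nc for a set of thresholds and recover the exponent τ from the slope on a double logarithmic plot. A minimal sketch (the sample size, thresholds, seed and exponent τ = 1 are arbitrary choices for the illustration, not data from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw synthetic "event energies" from a power-law (Pareto) density
# N(E) ~ E^(-tau-1) with tau = 1.0, so the exceedance is N_c(E) ~ E^(-tau).
tau = 1.0
E = rng.pareto(tau, size=50_000) + 1.0   # energies >= 1 (arbitrary units)

# Exceedance frequency N_c(E): number of events with energy >= E,
# evaluated on logarithmically spaced thresholds (cf. Eq. (9.2)).
thresholds = np.logspace(0, 2, 20)
Nc = np.array([(E >= t).sum() for t in thresholds])

# On a double-logarithmic plot N_c ~ E^(-tau) is a straight line with
# slope -tau; a least-squares fit in log-log space recovers the exponent.
slope, _ = np.polyfit(np.log10(thresholds), np.log10(Nc), 1)
print(f"estimated tau = {-slope:.2f}")   # close to the input value 1.0
```

The same counting procedure applied to an earthquake catalogue would reproduce a plot like Figure 9.6.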
400
Environmental Physics
Figure 9.7 Metaphor of the sand pile. Adding grains to a pile (left) results in a self-organized critical state (right) where avalanches occasionally occur over the complete pile.
Bak concludes that there must be a single underlying mechanism behind all these phenomena, which he calls self-organized criticality ([18], [19]). The representative metaphor is the sand pile.

9.5.1.2 The Sand Pile
At a certain position on a flat beach one begins to add grains of sand. Slowly a sand pile will build up. Occasionally there may be an avalanche with grains going down, broadening the pile or leaving it. These avalanches are local and small. When the pile grows bigger, the avalanches grow in size as well. Finally the pile has become steep and cannot grow any more. This is called the critical state. Its formation is illustrated in Figure 9.7. By adding grains to the critical state, equivalent to adding potential energy, one will note that smaller and larger avalanches on average keep the size of the sand pile constant. The potential energy added leaves as kinetic energy of the grains. One could study the avalanches, count their size represented by the number E of grains in every avalanche and find the number of times N(E) that an avalanche of size E occurs. Experiments show a straight line in a double logarithmic plot, like in Figure 9.6. These experiments are often performed with rice grains, as their shape can be chosen and, more importantly, as one can mark certain grains and study their life history ([18], pp. 69–75). It appears that some grains travel all over the pile, into the interior and out again and may stay in the pile for a long time. Also, a region which is quiet when an avalanche starts may be disturbed a little while later by the many, minute interactions between the grains. It is impossible to predict what would happen to an individual sand pile if a single grain is added at a certain position. It may only cause a local disturbance with a small avalanche (if any), but it may also start a large avalanche in which the complete pile participates. What happens depends on very minor details in the composition of the pile. In the critical state the pile behaves like a single entity. The state is self-organized as all its internal interactions
are building up while the state is being formed. It is critical, because even the smallest disturbance may cause the maximal effect.

9.5.2 Conclusion
The sand pile is a metaphor. It does not give a physical proof that geo-engineering one aspect of the climate will have big and unpredictable consequences for other aspects. It is convincing, though, as self-organized criticality occurs in so many instances in nature. The message should be that the utmost care is called for in manipulating climate.
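The sand-pile dynamics of Section 9.5.1.2 can be simulated with the Bak–Tang–Wiesenfeld model: a site holding four or more grains topples and gives one grain to each of its neighbours, which may topple in turn; grains toppling over the edge leave the pile. The threshold of four and the grid size below are conventional model choices, not parameters from the text; a minimal sketch:

```python
import numpy as np

def drop_grain(pile, i, j):
    """Add one grain at (i, j), then relax the pile: any site with 4 or
    more grains topples, giving one grain to each neighbour (grains that
    topple over the edge are lost). Returns the avalanche size, i.e. the
    total number of toppling events."""
    pile[i, j] += 1
    size = 0
    while True:
        unstable = np.argwhere(pile >= 4)
        if len(unstable) == 0:
            return size
        for x, y in unstable:
            pile[x, y] -= 4
            size += 1
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < pile.shape[0] and 0 <= ny < pile.shape[1]:
                    pile[nx, ny] += 1

n = 20
rng = np.random.default_rng(1)
pile = np.zeros((n, n), dtype=int)
sizes = np.array([drop_grain(pile, rng.integers(n), rng.integers(n))
                  for _ in range(10_000)])

# In the self-organized critical state most drops cause little or nothing,
# but occasionally an avalanche involves a large part of the pile.
print(sizes.max(), np.median(sizes))
```

A histogram of `sizes` on double-logarithmic axes should show approximately the straight-line signature of Figure 9.6.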
9.6 Science and Society
Students who have studied this book will be interested in environmental problems and may find a job in which the knowledge in this book – and in more advanced texts – can be used. They will experience that a solution which may seem sound from a scientific perspective is not accepted by the organization they are working for, or even by the public at large. The position of science and the scientist was formulated very well some years ago [21]: Sound science is about the best possible way to answer a given question; to present with rigour the certainties and uncertainties of knowledge, and the assumptions underlying certain conclusions. But, crucially, it is not a method for deciding which questions should be posed, or for determining the acceptable risks and desirable benefits of technologies. The public should be involved in the formulation of strategies, rather than merely being consulted on drafted proposals.
In this final section of the book we will sketch how we view the nature of science and its control. We finish, somewhat moralizing, on the way science and society should interact.

9.6.1 Nature of Science
Science in the strict sense of traditional natural science has an undisputed body of knowledge. Its data are published in ever-expanding handbooks of physical and chemical properties, which any scientist uses without question. The conductivity of a piece of copper wire at a certain temperature and pressure is the same everywhere. The experimental methods are rigorous as well. Students are trained in careful experimental set-ups and precise control of the experimental circumstances. The experiments aim at measuring a certain property (the conductivity of copper, for example) or at testing a certain theory (the occurrence of a certain gas in the solar atmosphere, for example). Traditionally, during an experiment, natural scientists will try to carefully vary a single parameter, while keeping the others constant. They know that by applying the methods in this way they will get reproducible results, even when the experiments are repeated on the other side of the earth. The method of science becomes less straightforward when the phenomenon to be studied cannot be isolated from the rest of nature and cannot be imitated on a laboratory bench. Climate is a case in point. One can study the constituent gases under controlled circumstances in the laboratory, but one cannot reproduce an atmosphere of 100 [km] thickness
with all kinds of motions horizontally and vertically. Here one has to rely on computer models, which are validated by their accuracy in reproducing the past behaviour of the atmosphere and the oceans. Scientific judgements and considered opinions form part of the final assessment of the outcome of the modelling. Scientific judgements were encountered before, in discussing Figure 9.4 on dose–effect relations. The carcinogenic effect of very small doses may be smaller than the natural occurrence of tumours; even if the effect exists, one cannot measure it directly. So one has to rely on considered scientific opinions as to the hazards of certain chemicals or of radiation in the environment. It is the responsibility of the scientist to share his or her arguments with the public and, if necessary, accept a suboptimal outcome of the debate. Scientific analysis and technological fixes, where possible, play an increasing role in modern society. Scientists are invited to the media to share their expertise with the public. Giving a scientific judgement is therefore part of the profession of the scientist, and scientific training should pay attention to this aspect. That is one of the reasons why we have put 'social questions' at the end of this chapter.

9.6.2 Control of Science
The two examples mentioned above, climate and low doses, make clear that the considered opinions of scientists have big social consequences. If climate is indeed changing rapidly, with disastrous consequences, because of human interference, expensive measures have to be taken. And if a low dose of a chemical or radioactive material is harmless, it will save industry a lot of money on emission controls. So the question arises of the independence and objectivity of scientific judgements. More broadly speaking: who decides on the scientific questions to ask, who puts up the money to find the answers and who publishes the answers? From the seventeenth to the nineteenth century much of the scientific work was done by scientists of independent means. The nineteenth century saw the rise of university laboratories, where scientific research was funded by the government, which accepted the academic freedom of the university professors in doing their jobs. In the twentieth century the industrial and military implications of science became manifest and at the same time scientific research became increasingly expensive. The high cost of science resulted in high government expenditure on science, and in the coining of the phrase 'big science'. It resulted, towards the end of the twentieth century, in an increasing demand for social relevance; in other words, the taxpayers, represented by parliament and government, require value for money. Added to this, university science supplements its income by contract work. This happens for industry, for government and for public interest groups who want counter-expertise to combat government decisions. Science in universities and in large government-funded institutions therefore resembles industrial research. In both cases the influence is top-down: the top decides where the money flows and individual scientists do not have much choice but to follow or resign.

9.6.3 Aims of Science
Four centuries ago the English politician and philosopher Francis Bacon declared emphatically that the main objective of science was 'the relief of man's estate'. At present a large part of the world population is living at a consumption level that is much more than the
‘relief of man’s estate’. From Figure 9.2 we deduce that Western industrial countries have a personal purchasing power that was unimaginable in Bacon’s time. The same Figure 9.2 shows that there are big differences in personal purchasing power between countries. And it is common experience that many if not all people in the developing world want to reach the consumption level of the average inhabitant of Western Europe, or preferably the United States. The uneven distribution of consumptive possibilities will remain a source of conflict and, anyway, science is for everybody. The question therefore is whether science is able to deliver the energy consumption level of the Western countries to the world’s population. The discussion of energy options in Section 9.1.5 shows that with present proven technologies it is improbable if not impossible for all people to spend energy at the present level of the Western world. To speculate on ‘new biomass’, nuclear fusion or nuclear power with transmutation is a gamble with the wellbeing of future generations that we do not want to take. The only responsible way to proceed to sustainable development is to play it safe: to put all efforts into spending energy much more efficiently, to increase the share of renewables in the Western world, and to create a sustainable and affordable energy source for those 2 billion people who now have nothing. In the meantime research efforts into ‘new’ and sustainable renewables should be pursued with a ‘Manhattan project’ sense of urgency. The final question is whether the measures indicated are enough to guarantee a high standard of living for the whole world population. The arguments given above have been put forward since the 1960s by many scientists. Not much has been achieved since then in terms of renewables and a more even global distribution of wealth. One would therefore expect structural or organizational obstacles to real progress.
In the 1970s thoughtful authors blamed capitalist society [22]. They supported Marx’s view that the means of production, including energy resources, conversion and distribution, should be public property; this should guarantee a fair share for everybody. At present, social scientists who study possible ways of transition to sustainable development blame the institutions and the organization of society rather than capitalism. One study [23] argues that the intrinsic limits of pragmatic solutions in the market and civil society may be overcome by combining them with structural reform. For a transition to sustainability the study relies on the big corporations, civil society and the alliances now increasingly being formed between them. The word capitalism does not figure and even Karl Marx is conspicuously absent from the list of references. Study [23] indeed has a point, for nowadays big transnational companies publicly declare the need to produce their products sustainably. The international food producer Unilever, for example, has published a ‘sustainable living plan’ [24], with the following goals to be reached in 2020: halving the environmental footprint of their products, enhancing the livelihoods of the hundreds of thousands of people in their supply chain, and improving the health and wellbeing of 1 billion people (their customers). Of course, this plan has been inspired by the wishes of part of their consumers and shareholders. More generally, all kinds of social movements together are shaping ‘civil society’: organized consumers, organized members of investing pension funds, environmental groups, peace movements and the like. They possibly fulfil the role that Karl Marx once ascribed to the labour movement alone and may act as a counterforce against the ‘blind force of capital’. Indeed, the interaction between ‘market’ and ‘civil society’ has brought about this policy change of a big corporation towards sustainable development.
Another force driving big corporations in the direction of sustainable development might be cultural. Highly qualified employees are hard to find, and even harder to keep. If they are attracted to sustainable development, an official commitment by a company may attract them, for in this way their expertise will work directly towards this aim. Finally, these arguments are no excuse for governments to leave the field to others. They have to take care of the right boundary conditions, but without private initiatives like the one by Unilever, sustainable development will not be achieved. From this discussion one may conclude that the terms 'science' and 'society' are not specific enough. For society one should read the major players: governments, corporations producing for the market, and 'civil society', comprising consumers and all kinds of nongovernmental organizations. For science one should read the research directors, the professional societies and the individual scientists. This brings us to our final point.

9.6.4 A New Social Contract between Science and Society
We support the view that a new 'social contract' between science and society should be 'signed' [25]. The phrase 'social contract' is reminiscent of the French philosopher Rousseau, who viewed society as a contract between people aimed at the wellbeing of everyone. The aim of the 'new contract' in our view should be the relief of man's estate, equity between people and equal access to sources of energy. A contract is an agreement between groups of people, actors in society. It is a compromise in which all signatories will have to give in a little. Let us finish this book by saying, like an old-fashioned schoolmaster, what the parties to such an agreement should do within the general aim described above.

- The general public or 'civil society' has to accept that there are limits to the personal use of resources and inherent limitations to technological fixes; growth in wealth will be more in quality than in quantity.
- 'Members of the public' have to accept that scientific analysis and understanding are different from wishful thinking; the latter does not lead to sustainable development.
- Interest groups who pay for scientific expertise have to accept that scientists will give 'fair' judgements that they will not always like.
- Individual scientists have to recognize the social context of their work and accept ethical and social considerations.
- National and international funding organizations should take the responsibility to design specific contributions to solve the big problems this world is facing.
- At the same time research leaders should fight to keep part of their funding without 'strings attached'. Basic research without straightforward applications or social relevance is essential for maintaining scientific standards.
- Professional societies should not only ban and punish scientific misconduct. They should accept that scientific assessments reflect the values or preferences of the scientists. This means that political views may enter scientific assessments, which is acceptable if they are made explicit.
- Professional societies should expand their public information tasks. They should put forward a realistic picture of scientific possibilities. Optimism about scientific projects is motivating and acceptable, but difficulties should be pointed out as well.
Exercises and social questions

9.1 In Appendix A the caloric value of natural gas is given. Assume that all gas consists of methane. Consult tables to calculate the caloric value of methane. Note: the value in Appendix A is the agreed standard value. Most kinds of gas have a somewhat lower caloric value.

9.2 In 2008 about 68 000 tonnes of natural uranium were used to supply the nuclear power stations. Calculate the amount of thermal energy in [EJ] produced, using the facts that about 3/4 of the ²³⁵U nuclei will fission and that each fission delivers 188 [MeV] of thermal energy. Compare with Table 9.1.

9.3 If your country does not appear in Figure 9.2, find your data point using [3]. Find out in what respect your country is different from its neighbours and explain why.

9.4 The land mass covers a fraction f = 0.29 of the earth's surface. The annual precipitation on the globe averages r = 75 [cm yr⁻¹]. Calculate the rainfall on land and compare with Table 9.1.
Social questions

9.5 In Section 9.1.2 three arguments were given for a transition to sustainable development. Do you support these reasons or only some of them? Argue your answer. Or do you have additional arguments?

9.6 Would you agree with the way in which the checklist in Figure 9.3 was filled in? Would you make other judgements? Specifically, the present authors and MacKay have a different position as to the storage of nuclear waste. What are your views? Suggestion for teachers: project Figure 9.3 with blank spaces in the classroom and fill it in together, or individually. Then compare with the book.

9.7 In Section 9.3.1 and Figure 9.4 the extrapolation to point B was taken as a starting point for regulating the emission of noncarcinogenic chemical compounds, with a safety factor of 100. Do you find that reasonable?

9.8 Do you find the reasoning that the effects of technology should not give an individual death probability larger than 10⁻⁶ acceptable? If not, what do you propose?

9.9 In Section 9.6.3 it was briefly discussed that in the 1970s a Marxist view was popular, namely that the means of energy production, conversion and distribution should be public property. At present the dominant view has become that 'civil society' is the most appropriate option to counterbalance the power of the big corporations. Which view do you find most attractive and why?

9.10 (On climate change) In climate politics one often hears the statement: 'the temperature rise should be limited to 2 [°C]'. Comment, and use Figure 3.26 in your discussion.

9.11 (On climate change) Geo-engineering is proposed as a means to mitigate climate change, e.g. bringing extra aerosols into the air to reflect more sunlight, increasing the albedo a_a. Comment.

9.12 (On risks from nuclear power) In Section 6.3.2 three possible positions were sketched regarding risks: avoid any risk, relate the risk to the benefit, accept any dose below the norms. What would your choice be and why?
9.13 (On nuclear 'waste') In the disposal of radioactive waste (Section 6.4.5) and Example 6.1, models have to be used to judge whether a disposal site is safe for a time in the order of 1000 years or longer. Do you believe that models are reliable enough to take decisions, in this case on the use of nuclear power, which may result in danger for many generations to come?

9.14 (On generation IV nuclear reactors) Some of the designs of generation IV nuclear reactors concentrate the elements of the nuclear power cycle at one site, with the exception of the power stations, which for financial reasons have to be dispersed. Would you rely on international protection and control of these sites in order to avoid proliferation and terrorist misuse?
References

[1] Meadows, D.L. (1972) The Limits to Growth, A Report to the Club of Rome Project on the Predicament of Mankind, Universe Books, New York.
[2] International Energy Agency (2010) Key World Energy Statistics, International Energy Agency, Paris, France.
[3] International Energy Agency (2008) World Energy Outlook 2008, International Energy Agency, Paris, France.
[4] Starr, C. (1971) Energy and power. Scientific American, 244, 37.
[5] World Nuclear Association, www.world-nuclear.org; look for 'supply of uranium'.
[6] Center for International Security and Cooperation, http://iis-db.stanford.edu/pubs/10228/fetter.pdf.
[7] Baumann, A., Ferguson, R., Fells, I. and Hill, R. (1991) A methodological framework for calculating the external costs of energy technologies. Proceedings 10th European Photovoltaic Solar Energy Conference (ed. A. Luque), Kluwer, Dordrecht, pp. 834–837.
[8] MacKay, D.J.C. (2009) Sustainable Energy – Without the Hot Air, UIT, Cambridge, UK; for personal use it may be downloaded from www.withouthotair.com. It discusses all energy options for Britain, including fossil, nuclear and all renewables.
[9] Jäger, C. (2000) Water: A global responsibility, in Understanding the Earth System: Compartments, Processes and Interactions (eds E. Ehlers and T. Krafft), Springer Verlag, Heidelberg, pp. 125–135 and 118.
[10] Royal Society of London (1983) Risk Assessment, A Study Group Report, The Royal Society, London. Concepts used in Section 9.3 are clearly defined.
[11] Whyte, A.V. and Burton, I. (eds) (1980) Environmental Risk Assessment, John Wiley, Chichester. What we call 'detriment' is here defined as 'risk'.
[12] US Atomic Energy Commission (1975) Reactor Safety Study, WASH-1400 (Rasmussen report).
[13] Morone, J.G. and Woodhouse, E.J. (1986) Averting Catastrophe, Strategies for Regulating Risky Technologies, University of California Press, Berkeley, California.
[14] United Nations Environment Programme, www.unep.org/ozone.
[15] NASA, http://earthobservatory.nasa.gov; Tropospheric Emission Monitoring Internet Service, www.temis.nl.
[16] Intergovernmental Panel on Climate Change, www.ipcc.ch.
[17] Pearce, F. (2010) Polluting ships have been doing the climate a favour. New Scientist, 17 July, 22–23.
[18] Bak, P. (1997) How Nature Works, The Science of Self-Organized Criticality, Oxford University Press, Oxford.
[19] Bak, P. and Chen, K. (1991) Self-organized criticality. Scientific American, 264, 46.
[20] Johnston, A.C. and Nava, S.J. (1985) Recurrence rates and probability estimates for the New Madrid seismic zone. J. Geophys. Res., 90(B8), 6737–6753.
[21] Haerlin, B. and Parr, D. (1999) How to restore public trust in science. Nature, 400, 499.
[22] Easlea, B. (1973) Liberation and the Aims of Science, An Essay on Obstacles to the Building of a Beautiful World, Chatto & Windus, London.
[23] Grin, J., Rotmans, J. and Schot, J. (2010) Transitions to Sustainable Development, New Directions in the Study of Long Term Transformative Change, Routledge, New York, USA.
[24] Unilever, downloadable from www.unilever.com.
[25] Gibbons, M. (1999) Science's new social contract with society. Nature, 402(Supp), C81–C84.
Appendix A Physical and Numerical Constants

Planck's constant: h = 6.626 × 10⁻³⁴ [J s]
Planck's constant/2π: ℏ = 1.0546 × 10⁻³⁴ [J s]
Velocity of light: c = 2.998 × 10⁸ [m s⁻¹]
Boltzmann's constant: k = 1.381 × 10⁻²³ [J K⁻¹]
Stefan–Boltzmann constant: σ = 5.671 × 10⁻⁸ [W m⁻² K⁻⁴]
Universal gas constant: R = 8.315 [J K⁻¹ mol⁻¹]
Avogadro's number: NA = 6.022 × 10²³ [mol⁻¹]
Rydberg constant: R∞ = 109 737 [cm⁻¹]
Elementary charge: e = 1.602 × 10⁻¹⁹ [C]
Permittivity of vacuum: ε0 = 8.854 × 10⁻¹² [F m⁻¹ = C V⁻¹ m⁻¹]
Energy

Total solar irradiance = solar 'constant': S = 1366 [J s⁻¹ m⁻²]
1 [eV] (electron volt): 1.602 × 10⁻¹⁹ [J]
1 [tonne] coal equivalent: 0.0293 × 10¹² [J]
1 [tonne] oil equivalent: 0.04187 × 10¹² [J]
1 [m³] natural gas: 0.039021 × 10⁹ [J]
1 [kWh]: 3.6 × 10⁶ [J]
1 [EJ] (exajoule): 10¹⁸ [J]
1 [PJ] (petajoule): 10¹⁵ [J]
1 [TJ] (terajoule): 10¹² [J]
1 [GJ] (gigajoule): 10⁹ [J]
1 [MJ] (megajoule): 10⁶ [J]
Air

Kinematic viscosity lower troposphere: ν = 14 × 10⁻⁶ [m² s⁻¹]
Specific gas constant dry air: R = 287 [J K⁻¹ kg⁻¹]
Density (10 [°C], 1 [atm]): ρ = 1.247 [kg m⁻³]
Density (20 [°C], 1 [atm]): ρ = 1.205 [kg m⁻³]
1 [atm] pressure (= 1.01325 [bar]): p = 1.01325 × 10⁵ [Pa = N m⁻²]
Specific heat dry air: cp = 1007 [J kg⁻¹ K⁻¹]
Environmental Physics: Sustainable Energy and Climate Change, Third Edition. Egbert Boeker and Rienk van Grondelle. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd.
Water

Specific heat (27 [°C]): cp = 4183 [J kg⁻¹ K⁻¹]
Heat of evaporation (25 [°C]): H = 2.4 × 10⁶ [J kg⁻¹]
Viscosity (25 [°C]): μ = 0.89 × 10⁻³ [Pa s]
Density (10 [°C]): ρ = 999.7 [kg m⁻³]
Density (20 [°C]): ρ = 998.2 [kg m⁻³]
Earth

Radius: R = 6.38 × 10⁶ [m]
Angular velocity: ω = 7.292 × 10⁻⁵ [s⁻¹]
Gravitational acceleration: g = 9.8 [m s⁻²]
Miscellaneous

Molar concentration (mol weight M): 1 [M] = 1 [mol L⁻¹] = M × 10⁻³ [kg L⁻¹]
1 [barn]: 10⁻²⁸ [m²]
Mass of a proton: 1.67 × 10⁻²⁷ [kg]
Mass of an electron: 9.11 × 10⁻³¹ [kg]
Appendix B Vector Algebra

In this appendix the vector properties used in this book are summarized, a vector field is introduced and the differentiation of such a field is discussed. A more comprehensive account can be found in handbooks on mathematical physics.
B.1 Vectors: Definition and Properties
Vectors summarize physical entities that have a direction and a magnitude, such as a force or a velocity. They are indicated by bold printed letters A, B and so on. Also the location of a point in space with respect to another point, for example the origin of a coordinate system, may be indicated by a position vector r. In a handwritten text a vector is often indicated by an arrow above the letter. In Figure B.1 a vector A is sketched with its three components, indicated by A_x, A_y, A_z or alternatively by A_1, A_2, A_3. The length of a vector is given by the symbol itself, A. From Figure B.1 it follows that

A^2 = A_x^2 + A_y^2 + A_z^2   (B1)
The sum of two vectors C = A + B is defined by summing their components

C_x = A_x + B_x,  C_y = A_y + B_y,  C_z = A_z + B_z   (B2)

and the difference of two vectors D = A − B by subtracting their components

D_x = A_x − B_x,  D_y = A_y − B_y,  D_z = A_z − B_z   (B3)
Figure B.1 Vector A and its three components in an orthogonal x, y, z frame.
Addition and subtraction of vectors may be depicted in a graph, as in Figure B.2. If one puts in a coordinate system it is easy to see that Eqs. (B2) and (B3) hold. A scalar quantity is a quantity which is independent of the coordinate system used, like a number. The scalar product of two vectors is defined as a scalar A·B = AB cos α, where α is the angle between the two vectors. It may be shown that it can also be expressed in terms of components as

A · B = AB cos α = A_x B_x + A_y B_y + A_z B_z   (B4)

An example is the work done by a force F on a particle that moves with a velocity u. This is illustrated in Figure B.3. In a small time dt the particle moves over the distance u dt in the direction of u. The work dW is, by definition, the path times the projection of the force on the path, the scalar product

dW = F cos α u dt = F · u dt   (B5)
The vector product of two vectors is again a vector C = A × B with components

C_x = A_y B_z − A_z B_y,  C_y = A_z B_x − A_x B_z,  C_z = A_x B_y − A_y B_x   (B6)
Figure B.2 Adding two vectors (left) and subtracting two vectors (right).
Appendix B: Vector Algebra
413
Figure B.3 Work performed by force F is the product of the path times the projection of F on the path.
Because of the minus signs it follows that

A × B = −B × A   (B7)
The vector product C = A × B may also be defined geometrically, as in Figure B.4. The vector C is perpendicular to the plane defined by A and B, it has the magnitude C = AB sin α and its direction is the direction of a screwdriver turned from A to B. It may be shown that the definitions are equivalent.
B.2 The Vector Field
A vector which is defined in each point of space or part of space may be written as A(x, y, z, t) = A(r, t). An example is an air flow where, in each part of the atmosphere, parcels of air have a velocity u(r, t). In differentiating with respect to time one has to distinguish between the local derivative

∂u/∂t = lim_{Δt→0} [u(r, t + Δt) − u(r, t)] / Δt   (B8)

and the total derivative

du/dt = lim_{Δt→0} [u(r(t + Δt), t + Δt) − u(r(t), t)] / Δt   (B9)
Figure B.4 Geometrical definition of the vector product C = A × B.
Figure B.5 Geometrical interpretation of grad p: perpendicular to surfaces with constant p, in the direction of increasing p and with magnitude the rate of change along the perpendicular.
In the local case the position r at which one looks is kept constant; in (B9) a parcel of air is followed in its course, indicated by r(t). This derivative describes the acceleration of the air parcel.

B.2.1 The Gradient
A scalar field p(x, y, z, t), such as pressure in the atmosphere, is defined as a number in each point of a part of space. A vector field grad p = ∇p may be constructed from a scalar field by the definitions

(grad p)_x = (∇p)_x = ∂p/∂x
(grad p)_y = (∇p)_y = ∂p/∂y
(grad p)_z = (∇p)_z = ∂p/∂z   (B10)
It may be shown that grad p = ∇p has the following properties, illustrated in Figure B.5:

(1) grad p points perpendicular to surfaces with a constant value of p
(2) The direction of grad p is toward increasing values of p
(3) The length or magnitude of grad p may be found by taking a length parameter s along the perpendicular line and differentiating p with respect to s along the line, so

|grad p| = ∂p/∂s   (B11)
These properties may be understood by drawing a local x-axis along the perpendicular line. As the local y- and z-axes lie in the plane p = constant one has ∂p/∂y = ∂p /∂z = 0 and all three properties follow. Note: the possible time dependence in p(x, y, z, t) does not play a role in space differentiation; in other words, everything is taken at a fixed time t. For simplicity the time dependence will be omitted in the following.
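Properties (1)–(3) can be checked numerically with central differences. A sketch, using the arbitrary test function p = x^2 + y^2 + z^2, whose surfaces p = constant are spheres, so that grad p must point radially outward with magnitude ∂p/∂s = 2r along the outward normal:

```python
import numpy as np

def grad(p, r, h=1e-6):
    """Central-difference approximation of grad p at the point r (Eq. (B10))."""
    r = np.asarray(r, dtype=float)
    g = np.zeros(3)
    for k in range(3):
        e = np.zeros(3)
        e[k] = h
        g[k] = (p(r + e) - p(r - e)) / (2 * h)
    return g

# Test function p = x^2 + y^2 + z^2; its level surfaces are spheres.
p = lambda r: r @ r
g = grad(p, [1.0, 2.0, 2.0])

# grad p is parallel to r (perpendicular to the sphere through r) and
# its magnitude is 2|r| = 6 at this point.
print(g)   # ≈ [2, 4, 4], i.e. 2 * r
```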
Figure B.6 The flow through the surface element dS in time dt is determined by the volume of a parallelepiped with four sides parallel to u and with lengths u dt.
B.2.2
The Divergence
Consider a vector field A(x, y, z), where all three components of A may depend on position. One defines the divergence of the field by

div A = ∂Ax/∂x + ∂Ay/∂y + ∂Az/∂z   (B12)

The physical interpretation can be found by taking A = u, where u is the velocity field of a stationary flow. In Figure B.6 a surface element is shown, determined by the vector dS perpendicular to the element and with a magnitude equal to its surface area (its direction corresponds to a chosen direction around the surface element). The volume passing the surface element in time dt is equal to the volume of the parallelepiped with sides u dt, which is u dt dS cos α = u·dS dt.
Consider an arbitrary volume element with sides parallel to the coordinate axes and lengths Δx, Δy, Δz in a velocity field u, as in Figure B.7. The outflow through the surfaces per unit of time equals u·dS, which for the surface on the right becomes ux(x + Δx/2, y, z)ΔyΔz. The left surface has an inflow, which is a negative outflow −ux(x − Δx/2, y, z)ΔyΔz. The total outflow from the right and left surfaces becomes

ux(x + Δx/2, y, z)ΔyΔz − ux(x − Δx/2, y, z)ΔyΔz = (∂ux/∂x)ΔxΔyΔz = (∂ux/∂x)dV   (B13)
The surfaces perpendicular to the y-direction give (∂uy/∂y)dV and those perpendicular to the z-direction give (∂uz/∂z)dV. The total volume outflow therefore becomes

(∂ux/∂x + ∂uy/∂y + ∂uz/∂z) dV = div u dV   (B14)

Therefore div u may be interpreted as the outflow per unit of volume in a unit of time. The result 'divergence = outflow' is adopted for any vector field A.
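The interpretation 'divergence = outflow per unit volume' can be tested directly. The sketch below (Python, with an arbitrarily chosen velocity field) sums the volume fluxes through the six faces of a small cube and compares the result with div u:

```python
# Numerical check of 'divergence = outflow per unit volume', Eq. (B14),
# for the test field u = (x^2, x*y, z) with div u = 2x + x + 1 = 3x + 1.
def u(x, y, z):
    return (x**2, x * y, z)

def outflow(x, y, z, d=1e-3):
    # net volume flux out of a cube of side d centred at (x, y, z):
    # for each pair of faces, (outward component of u) * face area d^2
    h = d / 2
    flux = (u(x + h, y, z)[0] - u(x - h, y, z)[0]) * d * d   # x faces
    flux += (u(x, y + h, z)[1] - u(x, y - h, z)[1]) * d * d  # y faces
    flux += (u(x, y, z + h)[2] - u(x, y, z - h)[2]) * d * d  # z faces
    return flux

x, y, z = 1.0, 2.0, 3.0
dV = 1e-3 ** 3
print(outflow(x, y, z) / dV)   # close to div u = 3x + 1 = 4
```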
Figure B.7 A velocity field u(x, y, z), of which only one vector is shown, is flowing through a hypothetical volume element with sides parallel to the coordinate axes. The outflow in a unit of time becomes div u dV.
Note that algebraically the divergence can be written as the scalar product of the symbolic vector ∇ with the vector A. This is readily seen from the equalities

div A = ∂Ax/∂x + ∂Ay/∂y + ∂Az/∂z = (∂/∂x)Ax + (∂/∂y)Ay + (∂/∂z)Az = ∇·A   (B15)

B.2.3 Gauss's Law
The argument given above directly leads to Gauss's Law

∮_S u·dS = ∫_V div u dV   (B16)

The symbol ∮ on the left-hand side means that the integral runs over a complete closed surface S where, by convention, the vector dS points to the outside; the integral on the right-hand side runs over the enclosed volume V. As we saw in Figure B.6 the integrand u·dS represents the outflow through a small surface element dS. The integral therefore is equal to the total outflow, which, as we explained, equals the volume integral on the right-hand side of Eq. (B16). As any vector field A may be assumed to be a velocity field one generalizes Gauss's Law to

∮_S A·dS = ∫_V div A dV   (B17)
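Gauss's Law can be verified numerically on a simple region. The sketch below (Python, with an arbitrarily chosen field) evaluates both sides of (B17) on the unit cube with a midpoint rule:

```python
# Numerical verification of Gauss's Law (B17) on the unit cube for the
# test field A = (x*y, y*z, z*x), with div A = y + z + x.
n = 50                       # grid points per direction (midpoint rule)
h = 1.0 / n
mid = [(i + 0.5) * h for i in range(n)]

# volume integral of div A
vol = sum((x + y + z) * h**3 for x in mid for y in mid for z in mid)

# surface integral: only the faces x=1, y=1, z=1 contribute, because
# A·dS vanishes on the three faces through the origin
surf = sum(y * h**2 for y in mid for z in mid)       # face x = 1: A_x = y
surf += sum(z * h**2 for x in mid for z in mid)      # face y = 1: A_y = z
surf += sum(x * h**2 for x in mid for y in mid)      # face z = 1: A_z = x
print(vol, surf)   # both close to 3/2
```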
B.2.4 Laplace Operator
If A = grad p it follows that div A = ∇·∇p = Δp, with the Laplace operator

Δ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²   (B18)
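That the Laplacian is the divergence of the gradient can be cross-checked numerically. The sketch below (Python, with an arbitrarily chosen test function) applies second central differences in each coordinate:

```python
# Numerical check of the Laplace operator (B18) as div grad: for the
# test function p = x**3 * y + z**2 one has Delta p = 6*x*y + 2.
def p(x, y, z):
    return x**3 * y + z**2

def lap(x, y, z, h=1e-4):
    # sum of second central differences in the three coordinates
    return ((p(x+h, y, z) - 2*p(x, y, z) + p(x-h, y, z))
            + (p(x, y+h, z) - 2*p(x, y, z) + p(x, y-h, z))
            + (p(x, y, z+h) - 2*p(x, y, z) + p(x, y, z-h))) / h**2

x, y, z = 1.0, 2.0, 0.5
print(lap(x, y, z), 6*x*y + 2)   # both close to 14
```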
where we used (B4) for the scalar product of ∇ with ∇.
In cylindrical coordinates (r, ϕ, z), z is the coordinate along the z-axis of the cylinder, r the distance to the z-axis and ϕ an azimuthal angle. The effect of operating with Δ on a function f(r, ϕ, z) may be written as

Δf = (1/r) ∂/∂r (r ∂f/∂r) + (1/r²) ∂²f/∂ϕ² + ∂²f/∂z²   (B19)

In spherical polar coordinates (r, θ, ϕ) one writes

Δf = (1/r²) ∂/∂r (r² ∂f/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂f/∂θ) + (1/(r² sin²θ)) ∂²f/∂ϕ²   (B20)

or equivalently

Δf = (1/r) ∂²(rf)/∂r² + (1/(r³ sin θ)) ∂/∂θ (sin θ ∂(rf)/∂θ) + (1/(r³ sin²θ)) ∂²(rf)/∂ϕ²   (B21)

B.2.5 Gradient Operator
In spherical polar coordinates (r, θ, ϕ) one defines three orthogonal vectors of unit length at each point of space: er = r/r, eθ in the direction of a change in θ only, and finally eϕ. The gradient operator may then be written as

∇ψ = er ∂ψ/∂r + eθ (1/r) ∂ψ/∂θ + eϕ (1/(r sin θ)) ∂ψ/∂ϕ   (B22)

Similarly, in cylindrical coordinates (r, ϕ, z), with three orthogonal unit vectors er, eϕ, ez,

∇ψ = er ∂ψ/∂r + eϕ (1/r) ∂ψ/∂ϕ + ez ∂ψ/∂z   (B23)
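Since the spherical components in (B22) are just the Cartesian gradient resolved along er, eθ, eϕ, their length must equal |∇ψ|. The sketch below (Python, with an arbitrarily chosen test function) checks this with finite differences:

```python
import math

# Check of Eq. (B22): the spherical components (dpsi/dr, (1/r)dpsi/dtheta,
# (1/(r sin theta))dpsi/dphi) must have the same length as the Cartesian
# gradient. Test function (an arbitrary choice): psi = x + y**2 + z**3.
def psi(x, y, z):
    return x + y**2 + z**3

def psi_sph(r, th, ph):
    return psi(r*math.sin(th)*math.cos(ph),
               r*math.sin(th)*math.sin(ph),
               r*math.cos(th))

def d(f, args, i, h=1e-6):
    # central difference of f with respect to argument i
    a = list(args); a[i] += h
    b = list(args); b[i] -= h
    return (f(*a) - f(*b)) / (2 * h)

x, y, z = 0.3, 0.4, 1.2
r = math.sqrt(x*x + y*y + z*z)
th = math.acos(z / r)
ph = math.atan2(y, x)

gc = [d(psi, (x, y, z), i) for i in range(3)]        # Cartesian gradient
mag_cart = math.sqrt(sum(g*g for g in gc))

gr = d(psi_sph, (r, th, ph), 0)                      # components per (B22)
gt = d(psi_sph, (r, th, ph), 1) / r
gp = d(psi_sph, (r, th, ph), 2) / (r * math.sin(th))
mag_sph = math.sqrt(gr*gr + gt*gt + gp*gp)
print(mag_cart, mag_sph)   # equal within the finite-difference error
```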
Appendix C
Gauss, Delta and Error Functions

For many simple examples in mathematical physics the Gauss function f(x) turns out to be a solution. In normalized form it is defined as

f(x) = (1/(σ√(2π))) e^(−x²/(2σ²))   (C1)

with

∫_{−∞}^{+∞} f(x) dx = 1   (C2)

It is shown in many texts that σ² in Eq. (C1) equals the mean square distance to x = 0, as

σ² = ∫_{−∞}^{+∞} x² f(x) dx   (C3)
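Both the normalization (C2) and the variance (C3) are easy to confirm numerically. The sketch below (Python, with an arbitrarily chosen σ) uses a simple Riemann sum over a wide interval:

```python
import math

# Numerical check of the normalization (C2) and the variance (C3) of the
# Gauss function (C1), using a simple sum on the interval [-10s, 10s].
sigma = 1.5
def f(x):
    return math.exp(-x**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

h = 1e-3
xs = [-10 * sigma + i * h for i in range(int(20 * sigma / h) + 1)]
norm = sum(f(x) for x in xs) * h           # integral of f      ~ 1
var = sum(x * x * f(x) for x in xs) * h    # integral of x^2 f  ~ sigma^2
print(norm, var)   # close to 1 and sigma**2 = 2.25
```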
For a few values of σ the Gauss function is shown in Figure C.1. One observes that for decreasing values of σ the peak becomes narrower and higher, while the integral remains constant because of Eq. (C2). For σ → 0 one obtains a representation of the delta function δ(x) with the properties

δ(x) = δ(−x)
∫_{−∞}^{+∞} δ(x) g(x) dx = g(0)   (C4)

The last equality should hold for any physical function g(x). The representation of the delta function based on the Gauss function may be written as

δ(x) = lim_{σ→0} (1/(σ√(2π))) e^(−x²/(2σ²))   (C5)
Environmental Physics: Sustainable Energy and Climate Change, Third Edition. Egbert Boeker and Rienk van Grondelle. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd.
Figure C.1 The Gauss function (C1) is shown on the left for the three values σ = 0.5, σ = 1.0, σ = 2.0. On the right the error function (C7) is shown.
Another representation which will be used is

δ(x) = (1/π) lim_{η→∞} (sin ηx)/x   (C6)
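The defining property (C4) can be illustrated with the Gaussian representation (C5): as σ shrinks, the smeared integral of a test function tends to its value at the origin. A sketch (Python, with g(x) = cos x chosen arbitrarily, so that g(0) = 1):

```python
import math

# Numerical illustration of (C5): a narrow Gauss function picks out
# g(0) from a test function, here g(x) = cos(x) with g(0) = 1.
def smeared(sigma, h=1e-4):
    n = int(8 * sigma / h)
    return sum(math.exp(-(i * h)**2 / (2 * sigma**2))
               / (sigma * math.sqrt(2 * math.pi))
               * math.cos(i * h) * h
               for i in range(-n, n + 1))

for sigma in (0.5, 0.1, 0.02):
    print(sigma, smeared(sigma))   # approaches g(0) = 1 as sigma -> 0
```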
One often needs the integral of the Gauss function (C1) from x = 0 to a certain value x = β. For σ² = 1/2 this integral defines the error function erf(β) by

erf(β) = (2/√π) ∫_0^β e^(−x²) dx   (C7)
It follows that

erf(0) = 0,   erf(∞) = 1,   erf(β) = −erf(−β)   (C8)
The error function is a monotonically increasing function of β, shown on the right-hand side of Figure C.1. The complementary error function is given by

erfc(β) = (2/√π) ∫_β^∞ e^(−x²) dx   (C9)

and therefore

erf(β) + erfc(β) = 1   (C10)
From Figure C.1 or from tabulated values it may be found that 68% of the area of integral (C2) is found in the region −σ < x < σ. The derivative of the error function follows from the definitions:

d erf(β)/dβ = lim_{Δβ→0} [erf(β + Δβ) − erf(β)]/Δβ = (2/√π) e^(−β²)   (C11)
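The error function is available in Python's standard library, so the properties (C8) and (C10) and the derivative (C11) can be checked directly (β = 0.7 is an arbitrary choice):

```python
import math

# Direct checks of (C8), (C10) and (C11) with the standard-library
# implementations math.erf and math.erfc.
beta = 0.7
print(math.erf(0.0))                        # erf(0) = 0
print(math.erf(beta) + math.erf(-beta))     # antisymmetry (C8): 0
print(math.erf(beta) + math.erfc(beta))     # (C10): 1

db = 1e-6                                   # derivative (C11) by central difference
deriv = (math.erf(beta + db) - math.erf(beta - db)) / (2 * db)
print(deriv, 2 / math.sqrt(math.pi) * math.exp(-beta**2))   # equal
```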
Appendix D
Experiments in a Student's Lab

Below, a short description is given of a few experiments that can be done in an average student's lab. A detailed description of the experiments, including a sample experiment and background theory, may be found at http://www.few.vu.nl/environmentalphysics.
D.1 Determine the Hydraulic Conductivity of Soil
Groundwater flow can be described by Darcy’s law (Section 7.4). The hydraulic conductivity of a sample can be determined by measuring the discharge of a fluid Q [m3 s−1 ], which is needed to maintain a water level difference h across a vertically placed sample with length L and cross section A.
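For the vertical-sample geometry described here, Darcy's law takes the form Q = K A Δh / L, so K follows directly from the measured quantities. A sketch (Python; all numbers are invented for illustration, not taken from the actual experiment):

```python
# Hydraulic conductivity from Darcy's law, Q = K * A * dh / L.
# The numbers below are invented for illustration only.
Q = 2.0e-6    # measured discharge [m^3 s^-1]
A = 1.5e-3    # sample cross section [m^2]
L = 0.20      # sample length [m]
dh = 0.35     # water level difference across the sample [m]

K = Q * L / (A * dh)   # hydraulic conductivity [m s^-1]
print(K)
```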
D.2 Determine the Thermal Conductivity of Sand
Heat transfer by conduction (Section 4.1) is caused by local temperature differences in a material. The heat flow q from a higher to a lower temperature follows Fourier’s law, which relates q with the temperature gradient ∇T and the thermal conductivity k [Wm−1 K−1 ]. For a cylindrical system one finds a one-dimensional relationship. The thermal conductivity of the sample can be determined by measuring the radial temperature profile in the steady state.
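For steady radial conduction in a cylinder, Fourier's law integrates to T1 − T2 = q ln(r2/r1)/(2πkL), which is the one-dimensional relationship mentioned above; k then follows from two temperatures measured at known radii. A sketch (Python; the geometry and all numbers are invented for illustration):

```python
import math

# Steady radial conduction in a cylindrical sample: Fourier's law gives
# T1 - T2 = q * ln(r2/r1) / (2*pi*k*L), solved here for k.
# All numbers are invented for illustration only.
q = 15.0              # heat flow through the sample [W]
L = 0.30              # length of the cylinder [m]
r1, r2 = 0.01, 0.05   # radii of the two temperature sensors [m]
T1, T2 = 44.0, 26.0   # measured temperatures [deg C]

k = q * math.log(r2 / r1) / (2 * math.pi * L * (T1 - T2))
print(k)              # thermal conductivity [W m^-1 K^-1]
```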
D.3 Heat Transfer by Radiation and Convection
In this experiment heat transfer q by means of radiation and (pressure dependent) convection (Section 4.1) is investigated. Thermal emissivity ε can be determined from time dependent measurements of a constantly heated metal rod with temperature T1 in vacuum, surrounded by a surface with temperature T2. With Newton's law of cooling (4.8) the convective heat transfer coefficient h is determined from the additional heat loss of the constantly heated metal rod. Note: this experiment is difficult because it uses a vacuum control unit to regulate the pressure in the vacuum chamber between 1 and 1 × 10−6 [bar].
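In the steady state the analysis reduces to two balances: in vacuum the loss is radiative, P = εσA(T1⁴ − T2⁴), and with gas present the extra loss gives h via Newton's law of cooling. A sketch (Python; all numbers are invented for illustration):

```python
# Steady-state sketch of the radiation/convection experiment.
# In vacuum:  P_vac = eps * sigma_SB * A * (T1**4 - T2**4)  -> eps
# With gas:   P_air - P_vac = h * A * (T1 - T2)             -> h
# All numbers are invented for illustration only.
sigma_SB = 5.67e-8     # Stefan-Boltzmann constant [W m^-2 K^-4]
A = 2.0e-3             # rod surface area [m^2]
T1, T2 = 380.0, 295.0  # rod and surrounding-wall temperatures [K]

P_vac = 0.50           # electrical heating power in vacuum [W]
eps = P_vac / (sigma_SB * A * (T1**4 - T2**4))
print(eps)             # thermal emissivity

P_air = 1.40           # heating power at atmospheric pressure [W]
h = (P_air - P_vac) / (A * (T1 - T2))
print(h)               # convective heat transfer coefficient [W m^-2 K^-1]
```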
D.4 Laser Doppler Anemometry
Laser Doppler anemometry (LDA) is a technique to measure local velocities of very tiny particles in a flow. It is based on measuring scattered laser light from particles that pass through a series of interference fringes, which is a pattern of light and dark stripes. The great advantage of LDA is that there is no physical contact with the flow being measured, so no disturbances are introduced; also a high spatial resolution can be created by focusing two laser beams. This makes LDA a valuable monitoring and measuring technique with many applications. The airflow within combustion engines, for example, can be measured to improve fuel efficiency and reduce pollution and noise. The LDA experiment at the physics lab of the VU University Amsterdam can be approached by internet from anywhere on earth. Just connect to http://www.few.vu.nl/webexperiments.
D.5 Radon in the Environment
Radioactive uranium is found in soil and soil products. One of its decay products is the inert gas radon (Rn). This gas can intrude into the surrounding environment and penetrate into buildings and dwellings. For health reasons (Section 6.3) it is important to determine the amount of Rn gas in the local environment. This can be done by measuring the time dependent Rn daughter concentration C(t). These growth curves are dependent on temperature and humidity, which therefore have to be monitored. Note: this experiment requires rather long measuring times, beyond the patience and possibilities of many students.
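The measured daughter growth curves are more complicated (they involve the daughters' own decay constants), but a one-parameter growth law already shows why long measuring times are needed. A sketch (Python; the radon-222 half-life of about 3.82 days is standard, while the equilibrium activity C_inf is invented for illustration):

```python
import math

# One-parameter sketch of activity growth towards equilibrium,
# C(t) = C_inf * (1 - exp(-lam * t)). The Rn-222 half-life (~3.82 days)
# is standard; C_inf is an invented equilibrium activity. The real
# daughter growth curves are more complicated than this.
half_life = 3.82 * 24 * 3600      # [s]
lam = math.log(2) / half_life     # decay constant [s^-1]
C_inf = 100.0                     # equilibrium activity [Bq m^-3]

def C(t):
    return C_inf * (1.0 - math.exp(-lam * t))

for days in (1, 4, 10, 15):
    print(days, round(C(days * 24 * 3600), 1))   # approaches C_inf slowly
```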
D.6 Laser Remote Sensing
Laser spectroscopy (Chapter 8) is a powerful method to investigate fluorescence of the photosynthetic system of green plants. Wavelength-dependent and time-dependent fluorescence can be monitored from a distance. This results in basic information on fluorescence in general and the process of photosynthesis in particular, which could give indications about the health of green plants. Because of the very weak signals, improvement of the signal-to-noise ratio by a lock-in technique plays a very important role.
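The lock-in principle can be sketched in a few lines: a weak signal at a known reference frequency is multiplied by the reference and averaged, so noise at all other frequencies averages away. A sketch (Python; all frequencies, amplitudes and noise levels are invented for illustration):

```python
import math
import random

# Sketch of the lock-in technique: recover a weak periodic signal that
# is buried in noise by demodulating with the known reference.
# All numbers are invented for illustration only.
random.seed(1)
f_ref = 137.0      # reference (chopper) frequency [Hz]
amp = 0.01         # weak fluorescence signal amplitude
fs = 10000.0       # sampling rate [Hz]
N = 200000         # 20 s of data

t = [i / fs for i in range(N)]
signal = [amp * math.sin(2 * math.pi * f_ref * ti) + random.gauss(0, 0.2)
          for ti in t]                   # noise 20x larger than the signal

# demodulation: multiply by the reference and average
X = 2 * sum(s * math.sin(2 * math.pi * f_ref * ti)
            for s, ti in zip(signal, t)) / N
print(X)   # close to the buried amplitude amp = 0.01
```

The averaging time sets the noise bandwidth: quadrupling the record length roughly halves the residual noise on X, which is why weak fluorescence signals require long integration.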
Appendix E
Web Sites

Below we give a selection of web sites which may be useful for further study and which contain many details on the subject matter of this book. We restrict ourselves to sites from which documents may be downloaded free of charge.
E.1 General
The site connected with this book is http://www.few.vu.nl/environmentalphysics. It contains parts of the Second Edition of this book which were left out of the present edition (see Appendix F). It contains a very simple model to understand the effects of changing albedos of the atmosphere, in which the student may change the parameters. It also describes a few computer programs written in Mathematica© with which connected calculations can be performed. Finally, it describes experiments for a student's lab, summarized in Appendix D. The World Health Organization, WHO, publishes key papers on health effects of energy conversion as well as guidelines to protect the population: www.who.int
E.2 Climate (Chapter 3)
The publications of IPCC, the Intergovernmental Panel on Climate Change, may be downloaded from www.ipcc.ch. Spectral absorption lines for radiation in the atmosphere, which are regularly updated, are published at http://savi.weber.edu/hi_plot/.
E.3 Traditional Energy (Chapter 4)
All utilities and companies in the energy field have their web site; they often contain links to other sites as well. For gas turbines we used www.energy.siemens.com/hq/en/power-generation/gas-turbines; for cogeneration we consulted www.cogeneration.net.
An argument against a 'hydrogen economy' is found at www.tmgtech.com. On hybrid cars you may consult www.hybridcars.com.
E.4 Renewable Energy (Chapter 5)
The US government maintains an informative site on energy efficiency and renewable energy www.eere.energy.gov. On PV a helpful site is www.pveducation.org/pvcdrom.
E.5 Nuclear Power (Chapter 6)
General information, including on fourth generation nuclear power, can be found at www.world-nuclear.org. It gives, for example, data on all civilian nuclear power stations in the world.
E.6 The Social Context (Chapter 9)
The International Energy Agency each year publishes an Energy Outlook, which may be downloaded free of charge after two years. The same is true for their Energy Technology Perspectives. The Key World Energy Statistics of the current year are free: www.iea.org. Technology Roadmaps for various kinds of energy conversion are free as well. They contain data on the present and the way to overcome obstacles for the future. The USA publishes world statistics and an annual outlook at http://tonto.eia.doe.gov.
Appendix F
Omitted Parts of the Second Edition

In this appendix we list those major parts of the Second Edition of Environmental Physics which are not covered by this Third Edition. For personal use they may be downloaded free of charge from our site: http://www.few.vu.nl/environmentalphysics. If any extract of text or illustration from this material is to be used for republication, either personal or commercial, you must request permission from the copyright holder, John Wiley & Sons Ltd. Details on how to request permission can be found at http://eu.wiley.com/WileyCDA/Section/id-403441.html. Some parts of the Second Edition, which are out of date, were not put on our site.

From Chapter 4
- Excursion 4C. Energy savings by cogeneration
- Excursion 4E. Electromagnetic radiation and Health

From Chapter 5
- Time-averaged equations of motion and turbulent diffusion, pp. 245–252
- A vertical buoyant jet, pp. 267–270
- Section 5.8 Particle Physics

The complete Chapter 6. Noise

From Chapter 7
- Spectroscopy of molecules in a host environment, pp. 370–395
Index
Note: Page numbers with italicized f’s and t’s refer to figures and tables 5,6-dihidroxyindole-2-carboxylic acid (DHICA), 25–7 5,6-dihidroxyindole (DHI), 25 absorption, 302 absorption lines, 341 absorption refrigeration, 113 acceptable risks, 392–3 acceptor impurity, 154 acid rain, 126 AC lines, 124 actinides, 235 adenine, 20 adiabat dry, 35–6 saturated, 36 adsorption, 302 advection, 263 aerodynamics, 162–5 aerosols, 127–8 air horizontal motion of, 53–8 mass of, 34 physical and numerical constants, 409 t specific gas constant for, 34 air density, 130–4 air drag, 130–4 albedo, 3, 37, 40 t alpha-crystallin, 21 alternating tensor, 306 angle of attack, 162 anti-Stokes Raman scattering, 340, 359–60 aquifers, 283–5 river in connection with, 301 f storage coefficient, 302 storativity, 302 between two canals, 297–8
unconfined, 295 vertical flow, 290 artificial photosynthesis, 209–13, 441 f atmosphere absorption of radiation in, 41 f clouds, 36 coupling of horizontal and vertical properties in, 57–8 dry adiabat, 35–6 modeling, 67–9 ocean-atmosphere interaction, 64–5 saturated adiabat, 36 scattering in, 362 upward radiation flux, 42–3 vertical dispersion of pollutants in, 317 t vertical structure of, 33–6 atmosphere-land interaction, 65–6 Atmospheric-Ocean Coupled General Circulation Models (AOGCMs), 67 atomic spectra, 345–7 autocorrelation function, 320 automobiles, 129–34 electric, 133 fuels, 131–2 hybrid, 134 power needs, 130–1 three-way catalytic converter, 132–3 autumnal equinox, 61 f Avogadro’s number, 18, 120 backscatter lidar, 370–3 bacteriochlorophylls, 4, 186, 190, 437 f balancing costs, 169 batteries, 119–20 becquerel, 244 benzothiazine, 25, 26 f benzothiazole, 25, 26 f
biochemical feedbacks, 430 f bio energy, 175–83. See also bio solar energy biomass, 182–3 solar efficiency, 180–2 stability, 180 storage efficiency, 179 thermodynamics, 175–9 biomass, 50 f , 182–3, 204 biomolecules, 20–8 damage from solar ultraviolet, 21–2 melanins, 25–8 ozone filter protection, 22–4 spectroscopy, 20–1 bio solar energy, 203–13. See also bio energy artificial photosynthesis, 209–13 biochemistry, 207–8 biological vs. technological systems, 204–7, 440 f C3 vs. C4 plants, 208 photoprotection, 208 proton-couple electron transfer, 211–13 solar fuels, 213 black body, 3, 7, 83 black-body radiation, 7–9, 38 f , 44 black dye, 199 blade, 163 blue-green algae, 4 boiling water reactor (BWR), 224 Boltzmann’s constant, 8 Boltzmann’s distribution, 15, 177, 250 Boltzmann tail, 154 Born-Oppenheimer approximation, 338, 349 Brayton cycle, 115 break-even point, 137 Bremsstrahlung, 239 Brownian motion, 262 buckling parameter, 231–2 buoyancy flux, 328 butane, 112 C3 plants, 208 C4 plants, 208 caffeine, 390 Calvin/Benson cycle. See Calvin cycle Calvin cycle, 208, 213 Canadian deuterium-uranium reactor (CANDU), 224 capacity factor, 168 capital costs, 134–7 capital recovery factor, 136 carbon cycle, 63–6 atmosphere-land interaction, 65–6 land-use change, 64 ocean-atmosphere interaction, 64–5 steady state, 66 carbon dioxide (CO2 ) in atmosphere-land interaction, 65–6
concentrations in the atmosphere, 49 f equivalent concentration, 72 and global warming, 46–7 in ocean-atmosphere interaction, 64–5 pollution, 126–7 sequestration, 126 carbon monoxide(CO), 126–7 carcinogenic chemicals, 392 Carnot cycle, 103–4 Carnot efficiency, 95–7, 102, 104, 114, 115 carotenoids, 186, 208, 209 catalyst, 132 Cauchy-Riemann equations, 295 chaos theory, 69 charge separation, 184 f , 190–3 chemical potentials, 94 chemicals, 390–2 chemisorption, 302 Chernobyl accident, 236, 247 chimney, 322–3, 325–6 chlorofluorocarbons (CFCs), 24, 112–13, 396 chlorophyll- a, 4 chord line, 162, 163 chromophores, 21, 355 circular pond, 298 Clausius inequality, 92, 95 climate modeling, 66–7 protection of, 396–8 radiative forcing, 71 f schematic view of components, 429 f variability, 59–62 climate change. See also global warming forecasts, 70–4 human-induced, 63–70 models, 63–70 climate sensitivity parameter, 47 climate system, 51–9 components of, 32 f dynamics in, 51–9 horizontal motion of air, 53–8 horizontal motion of ocean waters, 59 vertical motion of ocean waters, 58–9 clouds, 36 albedos, 40 t cumulus, 37 f finite-size, 265–6 coal, 125–6 coefficient of performance (COP), 96–7, 111 cogeneration, 116–17 coherence, 189–90 combined cycle power systems, 115–16 combined heat power (CHP), 116–17 combustion, 101–2 complementary error function, 420–1
Index composite lineshapes, 345 concentrating solar power (CSP), 146, 150–2 condensor, 110 f conduction, 79–81 conduction band, 153 confined aquifers, 283 flow around source or sink, 298–300 simple flow in, 298–301 source or sink in uniform flow, 300–1 time dependence in, 301–2 conformal mapping, 295 conservation of mass, 289–90 contact coefficient, 80 t, 89 contact temperature, 89–90 continuous point source, 267–9, 321–2 convection, 82 convection heat transfer coefficient, 82 converters, 173–4 convolution, 267 cooling towers, 129 Coriolis acceleration, 313 Coriolis force, 55, 56 f Coulomb barrier, 238 covalent bond, 152–3 critical size, of reactor, 231–2 cumulus cloud, formation of, 37 f curie, 244 cut-in speed, 168 cut-off ratio, 109 cut-out speed, 168 cyanobacteria, 4, 182, 208, 214, 442 f cytosine, 20, 21 dams, 169–70 Darcy’s equations, 286–8, 289 Darcy’s Law, 286–8 DC lines, 124 decay heat, 235–6 decay of excited states, 357–8 deep rock, 254 delayed neutron emission, 223 density, 80 t dephasing, 344 desorption, 302–4 deterministic processes, 246 deuterium, 238 DIAL (differential absorption lidar), 370–1 diatomic molecules, 349–51 diesel engine, 108–9 differential absorption lidar (DIAL), 370–1 diffuse radiation, 366 diffusion, 262–70 continuous point source in three dimensions, 267 effect of boundaries, 269–70 equations, 262–7
finite-size cloud, 265–6 instantaneous line/point sources in three dimensions, 266 instantaneous plane source in three dimensions, 264–5 in uniform wind, 267–9 continuous point source, 267–9 instantaneous point source, 267 diffusion coefficients, 262–3 diffusion constant, 262 diffusion current, 155–6 diffusivity, 262 dimensional analysis, 166, 328–9 dinosaurs, extinction of, 40 diode current, 156 direct normal irradiation (DNI), 148 direct radiation, 366 dispersion-advection equation, 303 dispersion of pollutants, 261–333 diffusion, 262–70 continuous point source in three dimensions, 267 effect of boundaries, 269–70 equations, 262–7 finite-size cloud, 265–6 instantaneous line/point sources in three dimensions, 266 instantaneous plane source in three dimensions, 264–5 in uniform wind, 267–9 fluid dynamics, 304–16 equations of motion, 308–9 Navier-Stokes equation, 310–11 Newtonian fluids, 309–10 Reynolds number, 311–13 stress tensor, 304–8 turbulence, 313–16 Gaussian plumes in air, 317–26 building a chimney, 325–6 continuous point source, 321–2 empirical determination of dispersion coefficients, 323–4 from high chimney, 322–3 semi-empirical determination of dispersion parameters, 324–5 statistical analysis, 319–21 in groundwater, 282–302 adsorption of pollutants, 302–4 aquifer between two canals, 297–8 circular pond, 298 conservation of mass, 289–90 Darcy’s equations, 286–8 definitions, 283–5 desorption of pollutants, 302–4 Dupuit approximation, 295–8
dispersion of pollutants (Continued) flow underneath walls, 292–3 hydraulic potential, 285–6 method of complex variables, 293–5 simple flow in confined aquifers, 298–301 stationary applications, 290–5 time dependence in confined aquifer, 301–2 vertical flow, 290–2 vertical flow in unsaturated zone, 288–9 in rivers, 270–82 continuous point emission, 278–9 dilution of pollution, 280–1 improvements, 281–2 influence of turbulence, 275–7 mixing length, 281 one-dimensional approximation, 271–4 Rhine River calamity model, 277–8 turbulent jets and plumes, 326–33 divergence, 415–16 DNA, absorption spectra of, 20–1 DNA bases, 20 DOAS method, 365–6 Dobson units, 368 donor impurity, 154 Doppler broadening, 344 dose-effect relation, 391 drag coefficient, 130–4 dry adiabat, 35–6 Dupuit approximation, 295–8 dye sensitized solar cell (DSSC), 202 dynamic viscosity, 54–5, 310 earth angular velocity, 410 t eccentricity of orbit, 61–2 elliptical orbit, 62 f energy transport to poles, 52 gravitation acceleration, 410 t radius, 410 t rotation, 55 earthquakes, 398–9 eddy viscosities, 276 Einstein coefficients, 14–15, 341 elastic scattering, 359 electric car, 133 electricity, 113–14 co-generation of heat and electricity, 115–17 grid load, 114–15 storage, 117–23 batteries, 119–20 flywheels, 117–19 hydrogen fuel cell, 120–2 pumped hydro storage, 120 superconducting mechanical energy storage, 119 transmission of, 123–4
electronic excited state, 185 electronic spectra, 340–1 electronic transitions, 353–8. See also rotational transitions; vibrational transitions decay of excited states, 357–8 Franck-Condon principle, 355–7 molecular orbitals, 353–5 transition dipoles, 353–5 electron spin resonance spectroscopy, 340 emission spectroscopy, 339 emissivity, 83 energetic disorder, 193 energy, 380–8 consumption, 380–2 conventional reserves, 382 t conversion, 386 t efficiency, 383 options, 387–8 physical and numerical constants, 409 t resources, 382–3, 384–7 sustainable, 1–2 technological fixes, 386–7 world energy use by fuel, 382 t energy-break even criterion, 241 energy conversion, 134–8 break-even point, 137 building times, 136–7 capital costs, 134–7 learning curve, 138 levelized end-of-year cost, 136 rest value, 136 energy gap, 153 energy levels, 341 energy supply light absorption, 4 in society, 78 f sustainable, 1–2 enrichment, 249–51 gas centrifuge, 250 gaseous diffusion, 250 laser separation, 250 and nonproliferation, 256–7 enthalpy, 93 ENVISAT satellite, 362 equation of continuity, 263 equivalent CO2 concentration, 72 equivalent CO2 emission, 49 error function, 420 ethane, 132 ethanol, 131, 132 f eumelanin, 25–7 evaporator, 110 f event tree, 393–4 excited state intramolecular proton transfer (ESIPT), 27
Index excited states, decay of, 357–8 exciton, 188–9, 439 f exempt waste (EW), 253–5 exergy, 100–1 loss of, 101–2 Eyjafjallaj¨okull volcanic eruption, 374–5 Faraday’s Law, 113 far ultraviolet (UV-C), 21 fast fission reactor, 227 Fermi-Dirac distribution, 154 Fick’s law, 262 First Law of Thermodynamics, 91–2 fluid dynamics, 304–16 equations of motion, 308–9 Navier-Stokes equation, 310–11 Newtonian fluids, 309–10 Reynolds number, 311–13 stress tensor, 304–8 turbulence, 313–16 fluidized bed, 126 fluorescence, 357 fluxes, 8 flywheels, 117–19 forced convection, 82 formaldehyde, 131 fossil fuels, combustion of, 50 f four factor formula, 226–9 Fourier coefficient, 80 t, 88 Fourier’s law, 79 free electrons, 154 free energy, 93–4 freezer, 97 fresh water, 389 friction velocity, 166 fuel cell, 120–2 Fukushima accident, 236–7, 247 fusion product, 243 gain factor, 46 Galilei transformation, 162, 272 gas centrifuge, 250 gaseous diffusion, 250 gasohol, 131 Gauss distribution, 321 Gauss function, 264–5, 419–20 Gaussian plumes, 317–26 building a chimney, 325–6 continuous point source, 321–2 empirical determination of dispersion coefficients, 323–4 from high chimney, 322–3 semi-empirical determination of dispersion parameters, 324–5 statistical analysis, 319–21
Gauss’s Law, 416 general circulation models (GCMs), 66–7, 73 boundary conditions, 69 t structure of, 68 f generation current, 156 Generation IV International Forum (GIF), 257 Generation IV nuclear reactors, 257–8 geo-engineering, 397–8 geostrophic flow, 56–7 geothermal heat flow, 4 Gibbs free energy, 94, 119–20, 175, 180 glacial period, 59–60 Glauber salt, 90 global environmental management, 398–401 global warming, 45–8. See also climate change effects mitigating, 47 effects reinforcing, 46–7 radiative forcing, 45–6 time delay by ocean warming, 47–8 global warming potential (GWP), 48 t , 49 gradient, 414–15 gradient operator, 417 Gr¨atzel cell, 196–203 applications, 203 dye, 202 efficiency of, 199–201 electrolyte, 202–3 energetic losses, 200–1 new developments, 202–3 principle, 196–9 quantum efficiency, 199–200 gravitational storage, 120 gravity, 56 gray, 244 greenhouse effect, 3–4 human-induced, 49 and radiation balance, 36–51 greenhouse gases, 48–51 equivalent CO2 emission, 49 global warming potential, 49 human sources of, 51 t increase of concentration over time, 50 f IR absorption, 3–4, 39, 44 and upward flux, 42 warming effect, 48 t grey body, 83 grid load, 114–15 groundwater, dispersion of pollutants in, 282–302 adsorption of pollutants, 302–4 aquifer between two canals, 297–8 circular pond, 298 conservation of mass, 289–90 Darcy’s equations, 286–8 definitions, 283–5 desorption of pollutants, 302–4
groundwater, dispersion of pollutants in (Continued) Dupuit approximation, 295–8 flow underneath walls, 292–3 hydraulic potential, 285–6 method of complex variables, 293–5 simple flow in confined aquifers, 298–301 stationary applications, 290–5 time dependence in confined aquifer, 301–2 vertical flow, 290–2 vertical flow in unsaturated zone, 288–9 groundwater head, 286 groundwater table, 288 guanine, 20 Gulf Stream, 53, 59 Hadley model, 72 haemoglobin, 21 half life, 225 Hamiltonian, 12 Harrisburg accident, 236, 247, 393–4 heat, 95–7 heat current density, 79 heat diffusion equation, 87–90 heat engines, 77–8 Carnot cycle, 103–4 efficiency of, 97–8 internal combustion, 107–9 diesel engine, 108–9 Otto cycle, 107–8 pollution, 125–9 aerosols, 127–8 carbon dioxide (CO2 ), 126–7 carbon monoxide(CO), 126–7 nitrogen oxides, 125–6 regulation, 129 SO2 , 126 thermal pollution, 129 volatile organic compounds, 128–9 steam engine, 105–7 Stirling engine, 104–5 heat flow, 4 heat pipe, 83–4 heat pump, 96 heat resistance, 81 heat storage, 90–1 heat transfer in common materials, 80 t conduction, 79–81 convection, 82 experiments, 423–4 phase change, 83–4 radiation, 82–3 highest occupied molecular orbital (HOMO), 355 high-level waste (HLW), 253 high voltage direct current (HVDC) lines, 124
homogenous broadening, 342–4 homonuclear atom, 348–9 human-induced greenhouse effect, 49 hybrid car, 134 hydraulic conductivity, 285, 423 hydraulic potential, 285–6 hydrochlorofluorocarbons (HCFCs), 112–13, 396 hydrodynamic dispersion, 302–3 hydrofluorocarbons (HFCs), 50, 111 hydrogen economy, 122–3 hydrogen fuel cell, 120–2 hydrostatic equation, 34 ice, 39–40 ice age, 59–60 ICRP (International Commission on Radiological Protection), 245–7 ideal gas, 34 ignition criterion, 242 impervious layers, 283 infrared (IR) radiation, 3–4 inhomogenous broadening, 344–5 instantaneous point source, 267 Intergovernmental Panel of Climate Change (IPCC), 31–2, 70, 396 internal combustion, 107–9. See also heat engines diesel engine, 108–9 Otto cycle, 107–8 International Commission on Radiological Protection (ICRP), 245–7 International Energy Agency (IEA), 381 International Thermonuclear Experimental Reactor (ITER), 243–4 ionosphere, 33 isobutene, 112 ITER (International Thermonuclear Experimental Reactor), 243 jj coupling, 346 junction, 155 kinematic viscosity, 311 Kolmogorov scales, 315–16 Kronecker delta tensor, 306 Kuroshio, 53 Kyoto Protocol, 397 Lambert-Beer’s law, 16–19, 42, 367 Laplace equation, 290 Laplace operator, 88, 416–17 laser Dopper anemometry (LDA), 424 laser remote sensing, 424 laser separation, 250 Lawson criterion, 241–2 learning curve, 138, 152
Index learning rate, 138, 152 lethal dose, 391 levelized electricity cost, 152 levelized end-of-year cost, 136 lidar, 368–76. See also remote sensing by satellites; spectroscopy backscatter, 370–3 differential absorption, 370–1 equation, 369 Eyjafjallaj¨okull volcanic eruption, 374–5 Raman, 370–1, 373 ratio, 372 light absorption, 4, 5 f biomolecules, 20–1 Lambert-Beer’s law, 16–19 light-harvesting antennas, 185–7 limb viewing, 363 linear rotors, 348 linewidths, 342–5 composite lineshapes, 345 homogenous broadening, 342–4 inhomogenous broadening, 344–5 liposome, 210 liquefied natural gas (LNG), 131 liquefied petroleum gas (LPG), 131 lithium, 238 longitudinal dispersion coefficient, 274 Lorentzian shape, 343 low- and intermediate-level waste (LILW), 253–5 lowest occupied molecular orbital (LOMO), 355 majority carriers, 155 Malthus, Thomas Robert, 380 many-electron atoms, 346 marine biological pump, 65 mass, conservation of, 289–90 mass flux, 327 Maxwell-Boltzmann distribution, 223 mean lifetime, 225 melanins, 25–8 Meridional Overturning Circulation (MOC), 59, 67, 74 mesopause, 32, 33 f mesosphere, 32, 33 f methane (CH4 ), 46, 132 methanol, 131, 132 f mid ultraviolet (UV-B), 21 Mie scattering, 340, 359 Milankovich variations, 60–2 minority carriers, 155 mixed oxide fuel (MOX), 252 molar extinction coefficient, 18 molecular diffusion, 262 molecular recognition, 204 molecular spectra, 347–58. See also spectroscopy
electronic transitions, 353–8 decay of excited states, 357–8 Franck-Condon principle, 355–7 molecular orbitals, 353–5 transition dipoles, 353–5 rotational transitions, 347–9 linear rotors, 348 selection rules, 348–9 spherical rotors, 347–8 symmetric rotors, 348 vibrational transitions, 349–53 diatomic molecules, 349–51 polyatomic molecules, 352–3 vibrational-rotational spectra, 351 moment of inertia, 118 momentum flux, 327, 331 Montreal protocol, 50, 396 Morse potential, 349–50 motion, equations of, 308–9 multiplication factor, 227, 228 nacelle, 159 nadir viewing, 363 NAD(P)H, 204, 207 naphthoquinone, 209 natural gases, 132 Navier-Stokes equation, 310–11, 318 near ultraviolet (UV-A), 21 neutron current density, 230 neutron flux, 230 neutron production factor, 227 Newtonian fluids, 309–10 Newton’s law of cooling, 82 nitrogen oxides (NOx ), 125–6 nonleakage probability, 232 nonphotochemical quenching, 208 nonproliferation, 256–7 no-regret policy, 395 normal stresses, 306, 307 f Northern Hemisphere, 61–2 n type Si, 154 nuclear explosives, 237–8 nuclear fission, 222–38 decay heat, 235–6 four factor formula, 226–9 macroscopic cross section, 225–6 nonleakage probability, 232 nuclear explosives, 237–8 principles, 222–6 radioactive decay, 225 reactor equations, 229–31 reactor safety, 234–7 rectangular reactor, 232 stationary reactor, 231–2 time dependence of reactor, 233–4
435
436
Index
nuclear fusion, 238–44 health aspects of, 247–8 Tokamak design, 238, 239 f nuclear magnetic spectroscopy, 339–40 nuclear power, 221–58 fuel cycle, 248–57 enrichment, 249–51 fuel burnup, 252 nonproliferation, 256–7 reprocessing, 252 uranium mines, 249 waste management, 253–5 nuclear fission, 222–38 nuclear fusion, 238–44 radiation, 244–8 normal use, 247 norms on exposure, 245–6 from nuclear accidents, 247 units, 244 nuclear reactor, fourth generation, 257–8 nuclear reactors, 257 active safety, 234 Chernobyl accident, 236, 247 Fukushima accident, 236–7 Harrisburg accident, 236, 247 inherent safety, 234 safety of, 234–7 scheme of, 224 f nuclear tunneling, 192 nuclear waste management, 253–5 deep rock, 254 partitioning, 255 salt domes, 254 transmutation, 255, 256 f nuclear winter, 40 numerical constants, 409–10 t occultation mode, 363 oceans circulation, 53 f horizontal motion of waters, 59 ocean-atmosphere interaction, 64–5 vertical motion of waters, 58–9 ocean warming, 47–8 octadecane, 90 one-dimensional dispersion equation, 271 f , 274 one-electron atoms, 345–6 optical density, 7, 18, 19 optical depth, 18 orbit, eccentricity of, 60–1 organic photocells, 196–203 applications, 203 dye, 202 efficiency of, 199–201 electrolyte, 202–3
energetic losses, 200–1 new developments, 202–3 principle, 196–9 quantum efficiency, 199–200 Otto cycle, 107–8 oxidation, 198 oxidative phosphorylation, 204 ozone layer, 22–4, 368, 396 partitioning, 255, 302 Pasquill stability categories, 323, 324 t pendulon, 189–90 perfluorinated compounds (PFCs), 50 periodic heat wave, 88 phase change, 83–4 pheomelanin, 25–7 phosphorescence, 358 photoprotection, 193–5, 208 photosynthesis, 4, 183–95 artificial, 209–13, 441 f basics of, 184–5 biochemist’s view of, 433 f biologist’s view of, 431 f charge separation, 184 f , 190–3 energetic disorder, 193 energy transfer mechanism, 187–90 flexibility, 193 light-harvesting antennas, 185–7 light harvesting in, 435 f photoprotection, 193–5 physicist’s view of, 434 f pigments, 436 f reaction centre, 184, 438 f research directions, 195 thermodynamics, 175–9 photosynthetic active radiation (PAR), 181 photosynthetic membrane, 432 f photovoltaics (PV), 146, 152–9 costs, 158–9 efficiency of, 157–8 phreatic surface, 288 phycobilisome, 4 phycocyanin, 4 phycoerythrin, 4 physical constants, 409–10 t pigment, 184 f Pinatubo eruption, 62 pipes, flow through, 313 pitch control, 168 Planck energy distribution, 8 Planck’s constant, 8 plasma, 238 plastoquinol, 211–13 plastoquinone, 211–13 Poisson equation, 35
Poisson relation, 108 poles, energy transport to, 52 pollution, 125–9 aerosols, 127–8 carbon dioxide (CO2), 126–7 carbon monoxide (CO), 126–7 nitrogen oxides, 125–6 regulation, 129 SO2, 126 thermal, 129 volatile organic compounds, 128–9 polyatomic molecules, 352–3 porosity, 285 porphyrin, 209–10 power laws, 398–400 pressure gradient forces, 54 pressurized water reactor (PWR), 224 private cars. See automobiles propane, 112 protonmotive force, 204 p type Si, 154 239Pu, 237–8 pumped hydro storage, 120 pyrimidine dimer, 21 Q-band electron spin spectroscopy, 340 quantum yield, 358 quicksand, 292 quinone, 210 radiation, 82–3 black-body, 7–9, 38 f, 44 diffuse, 366 direct, 366 and health, 244–6 deterministic processes, 246 norms on exposure, 245 t stochastic processes, 246 units, 244 upper limits per year, 245 t weighing factors, 244 t thermal, 38 f radiation balance, 36–48 analytical model, 44–5 changes in, 39–40 cool sun, 40 for earth and atmosphere, 38 f extinction of dinosaurs, 40 and global warming, 45–8 nuclear winter, 40 radiation transfer, 41–4 white earth, 39–40 radiation flux, 8 radiative forcing, 45–6, 71 f radiative transfer models, 366–8
radioactive decay, 225 radon, 249, 424 Raman lidar, 370–1, 373, 444 f Raman scattering, 359–60 Raman spectroscopy, 340–1 Rankine cycle, 105 rated wind speed, 168 Rayleigh distribution, 168 Rayleigh scattering, 359, 361–2 reaction centre, 184, 187, 438 f recombination current, 155 rectangular reactor, 232 reduction, 198 refrigeration, 110–13 absorption, 113 vapour-compression cycle, 110–13 refrigerator, 96 f, 97 rem, 244 remote sensing by satellites, 362–8 analysis, 364–8 DOAS method, 365–6 iteration procedure, 368 radiative transfer models, 366–8 ENVISAT satellite, 362 ozone results, 368 SCIAMACHY, 362–4 sun-synchronous orbit, 362 renewable energy, 145–215 bio energy, 175–83 bio solar energy, 203–13 organic photocells, 196–203 solar power, 146–59 water, 169–74 wind energy, 159–69 reorganization energy, 191 reprocessing, 252, 257 resonance escape probability, 227 resonance Raman scattering, 360–1 rest value, 136 Reynolds number, 311–16, 327 Rhine River, 277–8 Rhodopseudomonas viridis, 190–3, 438 f risks, 389–90 acceptable, 392–3 estimation, 390 rivers, dispersion in, 270–82 continuous point emission, 278–9 dilution of pollution, 280–1 improvements, 281–2 influence of turbulence, 275–7 mixing length, 281 one-dimensional approximation, 271–4 Rhine River calamity model, 277–8 rivers, power from, 170 222Rn, 249
rolling resistance, 130–4 rotational spectroscopy, 340 rotational transitions, 347–9. See also electronic transitions; vibrational transitions linear rotors, 348 selection rules, 348–9 spherical rotors, 347–8 symmetric rotors, 348 RuBisCo, 208 Russell-Saunders coupling, 346 salt domes, 254 sand pile, 400–1 satellite remote sensing, 362–8. See also lidar analysis, 364–8 DOAS method, 365–6 iteration procedure, 368 radiative transfer models, 366–8 ENVISAT satellite, 362 ozone results, 368 SCIAMACHY, 362–4 sun-synchronous orbit, 362 saturated adiabat, 36 saved fossil fuel energy, 381 scalar field, 414 scalar product, 412 scalar quantity, 412 scale height, 34 scattering, 359–62. See also spectroscopy in the atmosphere, 362 Raman, 359–60 Rayleigh scattering, 361–2 resonance Raman, 360–1 Schrödinger equation, 12, 349 SCIAMACHY, 362–4 analysis, 364–8 DOAS method, 365–6 iteration procedure, 368 ozone results, 368 radiative transfer models, 366–8 science aims of, 402–4 contribution of, 5–6 control of, 402 nature of, 401–2 and society, 404 sea-level rise, 73 sea surface temperature (SST), 60 f Second Law Efficiency, 98–9 seepage face, 298 selection rules, 341–2, 348–9 self-heating criterion, 242 self-organized criticality, 398–401 sensitive technologies, 256 shearing stresses, 306
shear velocity, 166 Si crystal, 152–5 sievert, 244 skin effect, 124 smart buildings, 149–50 snow, 39–40 SO2, 126 soil moisture, 289 solar cells, 152 costs, 158–9 efficiency of, 157–8 solar collector, 84–7 solar constant, 3 solar erythemal effectiveness, 21, 22 f solar influx, 36–7 solar power concentrating solar power, 150–2 photovoltaics, 152–9 smart buildings, 149–50 varying solar input, 146–9 solar radiation, 38 f solar tracking, 148 sorption, 302 Southern Hemisphere, 61–2 specific discharge vector, 285 specific energy, 118 spectroscopy, 337–41 absorption lines, 341 atomic spectra, 345–7 many-electron atoms, 346 one-electron atoms, 345–6 electron spin resonance spectroscopy, 340 emission spectroscopy, 339 linewidths, 342–5 composite lineshapes, 345 homogeneous broadening, 342–4 inhomogeneous broadening, 344–5 molecular spectra, 347–58 nuclear magnetic spectroscopy, 339–40 population of energy levels, 341 Raman spectroscopy, 340–1 rotational spectroscopy, 340 scattering, 359–62 in the atmosphere, 362 Raman, 359–60 Rayleigh scattering, 361–2 resonance Raman, 360–1 selection rules, 341–2 transition dipole moments, 341–2 X-ray spectroscopy, 341 spherical rotors, 347–8 stall control, 168 state function, 91–2 stationary reactor, 231–2 steam engine, 105–6
Stefan-Boltzmann’s law, 3, 9, 39 Stirling engine, 104–5 stochastic processes, 246 Stokes Raman scattering, 340, 359–60 storage coefficient (aquifer), 302 storativity (aquifer), 302 stratopause, 32, 33 f stratosphere, 32, 33 f stream function, 294 stress, 304 stress tensor, 304–8 summation convention, 305 sun annual motion of, 146 f cool, 40 emission spectrum of, 9–12 sun curtains, 149 sun-synchronous orbit, 362 superconducting magnetic energy storage (SMES), 117, 119 superconductivity, 123 supergrids, 124 superposition principle, 300 sustainable development, 1 sustainable energy supply, 1–2 swept area, 159 symmetric rotors, 348 syphon, 83–4
transmissivity, 299 transmutation, 255, 256 f transuranic nuclei, 252 tritium, 238, 247–8 tropopause, 32, 33 f troposphere, 32–4 tryptophan, 21 turbulence, 313–16 dimensional analysis and scales, 313–15 and dispersion in rivers, 275–7 Kolmogorov scales, 315–16 turbulent diffusion, 316 turbulent jets and plumes, 326–33 dimensional analysis, 328–9 simple jet, 329–30 simple plume, 331–3
tangential stresses, 166, 306, 307 f Taylor’s theorem, 320 temperature contact, 89–90 sudden change in, 89 terrestrial outflux, 36–7 thermal conductivity, 79, 80 t, 423 thermal efficiency, 96, 108 thermal emissivity, 423–4 thermal neutrons, 223 thermal pollution, 129 thermal radiation, 38 f thermal utilization factor, 227 thermodynamics, 77 first law, 91–2 second law, 92–3 thermohaline circulation, 59, 67, 74 thermosphere, 32, 33 f thermosyphon, 83–4 three-way catalytic converter, 132–3 thymine, 20 tidal power, 174 Tokamak design, 238, 239 f total solar irradiance, 3 transition dipole moments, 12–14, 341–2 transmission, 123–4
vadose zone, 284 valence band, 153 vapour-compression cycle, 110–13 vector product, 412–13 vectors, 411–17 divergence, 415–16 field, 413–14 Gauss’s Law, 416 gradient, 414–15 gradient operator, 417 Laplace operator, 416–17 scalar product, 412 vector product, 412–13 vernal equinox, 61 f vertical flow, 288–9 vertical wind profile, 165–7 vibrational-rotational spectra, 351 vibrational transitions, 349–53. See also electronic transitions; rotational transitions diatomic molecules, 349–51 polyatomic molecules, 352–3 vibrational-rotational spectra, 351 vibronic state, 355 vibronic transition, 355 vibronic wavefunction, 350 viscous forces, 54–5
235U, 222–3, 237–8, 249–52 238U, 250–1 uncertainties, 394–5 uncertainty broadening, 343 unconfined aquifers, 283, 295 uniform flow, 300–1 United Nations Environmental Program, 396 United Nations Framework on Climate Change, 397 unsaturated zone, 284, 288–9 uranium mines, 249
volatile organic compounds (VOCs), 128–9 voltage, 120 volume flux, 327, 331 Von Karman constant, 166 wake, 163–5 water, 169–74, 389 fresh-water resources, 389 kinetic energy, 170 physical and numerical constants, 410 t potential energy of, 169–70 power from dams, 169–70 power from flowing rivers, 170 power from tides, 174 power from waves, 170–4 wave function, 171 wavenumber, 339 waves, 170–4 web sites, 425–6 Weibull probability distribution, 167–8 wet scrubbers, 126
wet towers, 83 white earth, 39–40 Wien’s displacement law, 9 wind energy, 159–69 aerodynamics, 162–5 blade design, 163 wake, 163–5 Betz limit, 160–2 outlook, 168–9 vertical wind profile, 165–7 wind farms, 165 wind statistics, 167–8 wind farms, 165 windmills, 159 wind tower, 150 wind turbine, 159 work, 95–7 World Health Organization (WHO), 129 World Meteorological Organization, 396 X-ray spectroscopy, 341
Plate 1 – Figure 3.1 Schematic view of the components of the climate system, their processes and interactions. (Reproduced with permission of Cambridge University Press, copyright 2007, from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the IPCC (Chapter 3, Ref. [1], FAQ 1.2, Fig. 1).)
Plate 2 – (Biochemical feedbacks used in Figure 3.22.) (Reproduced with permission of Cambridge University Press, copyright 2007, from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the IPCC (Chapter 3, Ref. [1], Fig. 7.3, p. 515)). The original caption reads as follows (for the sources consult Chapter 3, Ref. [1]): The global carbon cycle for the 1990s, showing the main annual fluxes in [GtC/yr]: pre-industrial ‘natural’ fluxes in black and ‘anthropogenic’ fluxes in red (modified from Sarmiento and Gruber, 2006, with changes in pool sizes from Sabine et al., 2004a). The net terrestrial loss of 39 [GtC] is inferred from cumulative fossil fuel emissions minus atmospheric increase minus ocean storage. The loss of 140 [GtC] from the ‘vegetation, soil and detritus’ compartment represents the cumulative emissions from land use change (Houghton, 2003), and requires a terrestrial biosphere sink of 101 [GtC] (in Sabine et al., given only as ranges of –140 to −80 [GtC] and 61 to 141 [GtC], respectively; other uncertainties given in their Table 1). Net anthropogenic exchanges with the atmosphere are from Column 5 ‘AR4’ in Table 7.1. Gross fluxes generally have uncertainties of more than ±20 % but fractional amounts have been retained to achieve overall balance when including estimates in fractions of [GtC/yr] for riverine transport, weathering, deep ocean burial, and so on. ‘GPP’ is annual gross (terrestrial) primary production. Atmospheric carbon content and all cumulative fluxes since 1750 are as of end 1994.
Plate 3 The biologist’s view of photosynthesis comprises plants and their microscopic structure: the first three pictures at ever smaller scales. The picture at bottom left refers to the reactions taking place on a scale of [nm]; these are the field of the biochemist (Plate 5). The microscope photograph of chloroplasts of Clivia miniata was kindly made for us by Dr Barzda and Dr Cisek of the University of Toronto. (With permission from Virginijus Barzda and Richard Cisek.)
Plate 4 The plant photosynthetic membrane is folded like a towel; in this case it shows the membrane organization within the chloroplast, based on atomic models of the main protein components and thin sectioning electron microscopy. In three dimensions all membranes (depicted in grey, height about 4 [nm]) are interconnected and enclose the lumen (depicted in light blue). The closer view below shows a stack of grana domains on the left, with PS2 complexes in dark green and Light Harvesting Complexes in bright green. On the right, there are some interconnecting stromal domains with PS1 in dark blue, the cytochrome b6 f complex in orange, ATP in red and PS2 in dark green. (Reproduced by permission of Wiley-VCH Germany, copyright 2008, from Chapter 5, Ref. [20], Fig 6.2, p. 143. The high resolution picture was kindly made available by Dr. Boekema, Groningen University and Dr Dekker, VU University.)
Plate 5 The biochemist’s view of photosynthesis. In the microscopic structure the relevant compounds are shown and the electron and proton transfer reactions are indicated. (Reproduced by permission of Jon Nield). Readers are referred to his site (see Chapter 5, Ref. [21]), where the picture is updated regularly.
Plate 6 – (A simplified version of Plate 6 is shown as Figure 5.18.) The physicist’s view. The photosynthetic membrane is reduced to transfer of excitations and motion of electrons and protons.
Plate 7 – (A simplified version of Plate 7 is shown as Figure 5.19.) Light harvesting in bacterial photosynthesis. LHI-RC complexes are surrounded by LHII complexes. In both LHI and LHII the bacteriochlorophylls are organized in ring structures. The RC is positioned in the centre of the LHI ring. The ‘green’ and ‘blue’ disk-like molecules are bacteriochlorophylls; the ‘yellow’ linear-shaped molecules are carotenoids. Excitations around 400–500 [nm] are mainly absorbed by the carotenoids and are transferred to the bacteriochlorophylls in less than 1 [ps]. Excitations ending up in the ‘blue’ ring of bacteriochlorophylls (named B800 after its absorption maximum around 800 [nm]) are transferred to the ‘green’ LHII or B850 ring in about 1 [ps]; within a single LHII or LHI ring excitations move around very rapidly, typically in much less than 1 [ps]. The transfer between rings is slower, between 1 and 10 [ps]. Since the LHI rings absorb more to the red than the LHII rings (875 [nm] vs. 850 [nm]), the excitations are concentrated around the RC. The transfer of excitations from the LHI antenna to the RC is the rate-limiting step and occurs in about 40 [ps]. Once the RC is excited, a fast (3 [ps]) charge separation occurs. Note: in the main text the notation LH1 and LH2 is used instead of LHI and LHII. This excitation transfer in the bacterial photosynthetic unit image was made with VMD and is owned by the Theoretical and Computational Biophysics Group, NIH Resource for Macromolecular Modelling and Bioinformatics, at the Beckman Institute, University of Illinois at Urbana-Champaign. (Reproduced with permission of the National Academy of Sciences, USA, copyright (1998), from Chapter 5, Ref. [22], Fig. 7. This version was kindly made available by Dr Schulten and his group at Urbana-Champaign.)
Plate 8 – (Right) Figure 5.20 Top view of two photosynthetic pigment proteins as determined by X-ray diffraction. Left: PS1 of plants. The core with about 100 chlorophylls is partly surrounded by four peripheral light-harvesting pigment-proteins. (With permission of MacMillan Publishers Ltd, copyright 2003, reproduced from Chapter 5, Ref. [24], (Nature), Fig. 2a. This version was kindly made available by Dr Nelson, Tel Aviv University.) Right: the RC-LH1 core complex of photosynthetic purple bacteria. A ring of 30 bacteriochlorophylls surrounds the RC. In both systems the electronic excitation is transferred within a few tens of [ps] to the RC pigments in the centre where a transmembrane charge separation is initiated. The membrane itself lies in the plane of drawing. In both structures a ring is indicated that shows the spatial separation between the light-harvesting pigments and the electron transfer chain. Fast energy transfer can still occur over a distance of 3 [nm], while electron transfer is virtually impossible, since it depends on the overlap of wavefunctions. Thus the charge-separated electron cannot escape, while excitations can still be collected. (With permission of the AAAS, copyright 2003, reproduced from Chapter 5, Ref. [25], Fig. 2C. Also see Fig. 5.20. This version was kindly made available to us by Dr Roszak and Dr Cogdell of Glasgow University.)
Plate 9 – Figure 5.21 Oscillatory dynamics in a bacterial light-harvesting complex LH2, consisting of a circle of 18 bacteriochlorophylls, as revealed by analysis of single-molecule exciton spectra and femtosecond [fs] spectroscopy. Dynamics of population of chlorophylls after impulsive excitation is shown. A colour scale is used to indicate the absolute values of the site population from zero (blue) to the maximal value (red). Coherence between the collective exciton states created by the impulsive excitation manifests itself as reversible oscillatory jumps (half-period of 350 [fs]) between small groups of molecules (encircled). Within each group, faster oscillations between individual sites (with a half-period of 60 [fs]) are discernible. A black and white version is discussed in the text as Figure 5.21. (With permission of Elsevier, copyright 2006, reproduced from Chapter 5, Ref. [26], Fig. 7. Also published in Chapter 5, Ref. [27]. This version was carefully prepared by Dr Novoderezhkin.)
Plate 10 – (Right) Figure 5.22 Left: The photosynthetic reaction centre of the purple bacterium Rhodopseudomonas viridis. The reaction centre consists of three proteins, each of which is folded a few times up and down. They noncovalently bind four bacteriochlorophylls, two bacteriopheophytins, two quinones, one Fe atom and one carotenoid. The yellow cytochrome subunit on the top binds four haem groups that serve as electron donors. In many species this subunit is replaced by a soluble cytochrome. (Kindly made available to us by Prof Cogdell and Dr Roszak of Glasgow University (Chapter 5, Ref. [28]).) Right: sequence of electron transfer events leading to transmembrane charge separation. (Kindly made available to us by Prof Cogdell and Dr Roszak of Glasgow University (Chapter 5, Ref. [28]).) The picture on the right was adapted from a picture provided by the group. The helpful suggestions of Dr Roszak on this plate are gratefully acknowledged.
Plate 11 – Figure 5.24 Left frame: Exciton structure of PS2 RC. Wavelengths corresponding to the unperturbed site energies of eight pigments and the first charge transfer (CT) intermediate PD2 + PD1 − (lines indicate participation of the pigments in the exciton states), and the 77[K] absorption with the individual exciton components. Insert on the top shows the structure of the lowest exciton state, where the circles show the pigments that on average are coherently mixed in the lowest exciton state. The area under the circle is proportional to the population of the corresponding site. Middle frames: Possible pathways for primary charge separation in the PS2-RC. Circles show the localization of the electron and hole in the CT states (i.e. PD2 + PD1 − , ChlD1 + PheD1 − and PD1 + ChlD1 − ) that can be coupled to the lowest exciton state. Right frame: The Stark spectra calculated with the same CT states as shown in the middle frame. Red dots correspond to experimental data, the Stark signal is calculated with coupling to CT (dark blue) and without coupling to the CT state (light blue). It appears that in this experiment the top pathway was identified. (With permission of Elsevier, copyright 2007, based on Chapter 5, Ref. [30], Figs 4,7,8 . This version was kindly made available by Dr Novoderezhkin.)
Plate 12 – Figure 5.29 Comparison of biological and technological systems for (a) solar energy conversion to fuels and (b) utilization of this stored chemical energy. Both biology and technology use functionally analogous steps, indicated by the horizontal arrows, during fuel production and consumption. However, a key difference is the use of molecular recognition and proton motive force (pmf) in biology, as compared to the electrical circuits and emf used in technological systems. (With permission of the Royal Society of Chemistry, copyright 2009, reproduced from Chapter 5 Ref. [37], Fig. 1. This version was kindly made available to us by Dr Hambourger and Dr Moore of Arizona State University. In the main text it is reproduced as Fig. 5.29.)
Plate 13 – (Left) Figure 5.31, (Right) Figure 5.30 Left: An artificial photosynthetic membrane converting light energy into pmf, thereby driving ATP synthesis. Molecular triad (C-P-Q) molecules are inserted into a liposome containing the lipid soluble quinone shuttle (QS ). Photoinduced charge separation gives rise to proton pumping via QS . With ATP synthase incorporated into the membrane, the resulting pmf is utilized for the synthesis of ATP. The graph shows the amount of ATP produced as a function of irradiation time for [ATP] = [ADP] = 0.2 [mM] and [Pi ] = 5 [mM], open triangles and [ATP] = 0.2 [mM], [ADP] = 0.02 [mM] and [Pi ] = 5 [mM], filled circles, as well as for control experiments. (Reproduced by permission of the Royal Society of Chemistry, copyright 2009, and MacMillan Publishers Ltd, copyright 1998. This version was kindly made available to us by Dr Hambourger and Dr Moore of Arizona State University. In the main text it is reproduced as Fig. 5.30.The figure was published in Chapter 5, Ref. [37] Fig 2, which was adapted from Chapter 5, Ref. [38] Fig. 1.) Right: RuII -Mn2 II,II complex mimicking light-induced stepwise oxidation of the Mn4 Ca cluster in Photosystem 2. Repeated photo-induced electron transfer to the acceptor [Co(NH3 )5 Cl]2+ leads to the oxidation of Mn2 II,II to Mn2 III,IV . The lower panel is a simplified scheme of the charge-compensating reactions: in 10% H2 O, acetate is the dominant ligand and charge-compensating reactions during oxidation of the Mn2 complex cannot occur. However, in aqueous solution (90% H2 O) the acetate is replaced by H2 O and now every oxidation step of the Mn2 complex is accompanied by a deprotonation facilitating one additional oxidation. The assignment of H2 O and OH ligands is tentative. (Reproduced from American Chemical Society Copyright 2009. Chapter 5 Ref. [40] Fig. 4. See also Fig 5.31 in the main text. This version was kindly made available by Dr Magnuson, Uppsala University.)
Plate 14 – Figure 5.33 Schematic representation of a cyanobacterial cell. Photosynthesis occurs in the thylakoids and uses light to produce ATP and NADPH. These drive the Calvin cycle, which is under control of a variety of gene products (purple, green, red). In the cell, enzymes are available (orange) that produce biomass which can be converted into ethanol (C2 H6 O). Alternatively, new enzymes can be added (blue) to the bacterium using synthetic biology to directly produce a fuel (C2 H2 ) that is expelled by the cell. Also the concentration and activity of the sugar- and fuel-producing enzymes is under genetic control. Understanding the operation of this proteo-genomic network is the realm of ‘systems biology’. (With permission of Elsevier, copyright 2009, adapted from Chapter 5 Ref. [41], Fig. 2. See also Fig. 5.33 in the main text.)
Plate 15 – (Left: A simplified version of Plate 15 is shown as Figure 8.18.) Left: SCIAMACHY’s scientific observation modes: 1 = nadir, 2 = limb, 3 = occultation. (With permission of DLR-IMF, copyright 2010, reproduced from Chapter 8, Ref. [5], Fig. 4-1.) Right: Worldwide ozone distribution. The graph shows the results of one day’s measurements of the ozone column. The spotted strips reflect that during limb viewing no nadir measurements are performed. (With permission of the TEMIS management, reproduced from Chapter 8, Ref. [8].)
Plate 16 – (Top) Figure 8.16, (Middle and Bottom) Figure 8.18 Top: Time–height display of the range-corrected signal at 1064 [nm] from the Raman lidar CAELI on 17 May 2010. Data is shown at 10 [s] resolution in time and 7.5 [m] vertical resolution. A layer of volcanic ash is seen between 2.5 and 6 [km] altitude between 17:00 UTC and midnight. The colours indicate the intensity of the backscattering, shown on the bar at the right. (This graph was made available by Dr Apituley, KNMI, who gave kind permission to reproduce it.) Middle/bottom: Time–height display of the range-corrected signal at 355 [nm] from a Leosphere ALS-450 UV-backscatter lidar on 18 April 2010, the arrival of the second plume of ash above the Netherlands. The middle graph refers to backscattering only; the bottom graph shows the depolarization, indicated on the bar at the right. The plume is visible between 1 and 2 [km], mainly in the middle of the day. The strong depolarization indicates the irregular shape of the volcanic ash. (These two graphs were made available by Dr Donovan, KNMI, who gave kind permission to reproduce them.)