SIMULATIONS OF THE EFFECTS OF CLIMATE CHANGE
Ashok Malik
RAJAT PUBLICATIONS NEW DELHI - 110002 (INDIA)
RAJAT PUBLICATIONS 4675/21, Ansari Road, Daryaganj, New Delhi - 110002 (India) Phones: 23267924, 22507277 E-mail:
[email protected]
Simulations of the Effects of Climate Change © Reserved First published, 2008 ISBN 978-81-7880-344-9
[The responsibility for facts stated, opinions expressed or conclusions reached, and plagiarism, if any, in this volume is entirely that of the Editor. The publisher bears no responsibility for them whatsoever.]
PRINTED IN INDIA Published by Mrs. Seema Wasan for Rajat Publications, New Delhi and Printed at Asian Offset Press, Delhi.
Contents
1. Global Climate Change and Carbon Cycling 1
2. Greenhouse Effect and Ecosystems 19
3. Prediction of Climate Change 49
4. Global Biogeochemical Carbon Cycle 65
5. Variations in Climate Change 103
6. Dynamics of Ecological Systems 125
7. Modelling Techniques 141
8. Evaluation of Terrestrial Systems 155
9. Assessment of Freshwater Quality 175
10. Ecological Risk Assessment 193
11. Environmental Monitoring 203
12. Costs of Climate Change Mitigation 235
Bibliography 271
Index 273
1
Global Climate Change and Carbon Cycling

The simultaneous changes in the chemistry of the atmosphere and the climate are expected to affect both the function and the structure of terrestrial ecosystems. Functional changes may include changes in processes such as photosynthesis, plant respiration and decomposition. Structural changes may be of various types, including changes in the distribution of carbon and nitrogen between the plant and soil pools, changes in the species composition within an ecosystem, and changes in the distribution of major vegetation groups or biomes. Process-based models have been used in regional studies to evaluate the functional responses of terrestrial ecosystems to changes in climate. One of these models, the terrestrial ecosystem model (TEM), has been used at the global scale to explore how carbon and nitrogen cycling in terrestrial ecosystems might change according to predictions of climate change made by several general circulation models (GCMs). A dominant feature of GCM-predicted climate for a doubled CO2 atmosphere is an increase in mean surface temperature of the globe; precipitation and cloudiness are expected to increase in some areas and decrease in others,
and there is disagreement among the output of GCMs about the spatial distribution of these changes. Elevated temperature may affect carbon cycling in ecosystems in a variety of ways. It may enhance decomposition of soil organic matter to increase the loss of carbon from the soil. Enhanced decomposition may also increase nitrogen availability through higher rates of nitrogen mineralisation. Uptake of this nitrogen by vegetation may enhance NPP. Decreases in NPP may result from elevated temperature by reducing soil moisture or by increasing respiration. Because the ability of vegetation to incorporate elevated CO2 into production depends on the nutrient and water status of the vegetation, climate change may influence carbon cycling in ecosystems by altering nutrient availability and soil moisture. The TEM has been developed to evaluate how climate change influences simultaneous interactions among the carbon, nitrogen, and water cycles in ecosystems.

Terrestrial Ecosystem Model (TEM)
The terrestrial ecosystem model (TEM) is a process-based ecosystem simulation model that uses spatially referenced information on climate, elevation, soils, vegetation, and water availability to make monthly estimates of important carbon and nitrogen fluxes and pool sizes. For each monthly time step in a model run, NPP is calculated as the difference between gross primary production (GPP) and plant respiration (RA). The calculation of GPP considers simultaneous interactions among temperature, light, CO2, water, and nitrogen availability. Therefore, the response of GPP to elevated CO2 is potentially constrained by the availability of light, water, and nitrogen. The calculation of RA considers both maintenance respiration and construction respiration.
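The monthly bookkeeping described above can be sketched as follows. The functional forms and all parameter values here are illustrative assumptions, not TEM's published equations; the point is only the structure: NPP = GPP − RA, with GPP constrained by interacting limitation factors and RA split into maintenance and construction terms.

```python
# Illustrative sketch of a TEM-style monthly NPP calculation.
# All functional forms and constants are placeholders, not the model's
# actual formulation.

def gpp(c_max, f_temp, f_light, f_co2, f_water, f_nitrogen):
    """GPP as a maximum rate scaled by multiplicative limitation factors (0-1)."""
    return c_max * f_temp * f_light * f_co2 * f_water * f_nitrogen

def plant_respiration(biomass, growth, r_m=0.002, r_c=0.25):
    """RA = maintenance respiration (proportional to biomass) plus
    construction respiration (a fraction of new growth)."""
    return r_m * biomass + r_c * growth

def monthly_npp(c_max, limits, biomass, growth):
    """NPP for one monthly time step: GPP minus plant respiration."""
    return gpp(c_max, *limits) - plant_respiration(biomass, growth)

# Example: a well-watered but nitrogen-limited month
npp = monthly_npp(c_max=120.0, limits=(0.8, 0.9, 0.7, 1.0, 0.5),
                  biomass=5000.0, growth=30.0)
```

Because the limitation factors multiply, a low nitrogen scalar caps the benefit of raising the CO2 scalar, which mirrors the constraint on the GPP response to elevated CO2 noted in the text.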
The data sets used to drive TEM are gridded at a resolution of 0.5° latitude by 0.5° longitude. The sources for the climate data (air temperature, precipitation, and cloudiness), elevation, vegetation, and soil texture are described elsewhere; the climate data represent long-term averages. Hydrological inputs for TEM are determined with a water balance model that uses the climate, elevation, soils, and vegetation data. The application of TEM to a grid cell requires the use of monthly climatic and hydrological data and the soil- and vegetation-specific parameters appropriate to the grid cell. Although many of the vegetation-specific parameters in the model are defined from published information, some are determined by calibrating the model to the steady-state fluxes and pool sizes of an intensively studied field site. Most of the data used to calibrate the model for the vegetation types considered in this study are documented elsewhere. In most of the analyses, TEM has been calibrated to the soil organic carbon and nitrogen found to approximately 1 m depth at the calibration site, the '1 m' calibration.

Table 1 Areal extent of grasslands and coniferous forests

                      Area (10^6 km^2)   Cells
Grasslands
  Tall                3.6                1557
  Short               4.7                2050
  Total               8.3                3607
Conifer forests
  Boreal forest       12.2               7406
  Temperate conifer   2.4                1081
  Temperate mixed     5.1                2250
  Total               19.7               10737
All ecosystems        127.3              56090
Global Grassland Communities and Conifer Forests
The global distribution of grasslands and conifer forests, which represents 22% of the area occupied by the terrestrial biosphere, is determined from a global georeferenced data base (0.5° spatial resolution) of potential vegetation developed from extant maps. The global grassland communities are aggregated into two major vegetation types, tall and short grasslands, based on the relative height of the dominant vegetation. Tall grasslands contain grasses with heights greater than 1 m and occur in more mesic sites than short grasslands. Although the northern meadow-steppe of Eurasia contains relatively 'short' species, the dynamics of this community are thought to be similar to those of the tall grass prairie of North America. Therefore, northern meadow-steppes have been classified as tall grassland. Both grassland types are found throughout the temperate and tropical zones. We do not consider grasslands found in savannas of the temperate and tropical regions.
Table 2 Estimates by the TEM of annual NPP for grasslands and conifer forests in the terrestrial biosphere at an atmospheric CO2 concentration of 355 ppmv (parameterised for 1 m soil carbon)

                      Total NPP^a   Mean NPP^b   Max. NPP^b   Min. NPP^b
Grasslands
  Tall                1.2           335          756          136
  Short               1.0           214          438          72
  Total               2.2           267          756          72
Conifer forests
  Boreal forest       2.9           238          434          124
  Temperate conifer   1.1           465          704          208
  Temperate mixed     3.4           669          1066         231
  Total               7.4           378          1066         124

a Units are Pg C (10^15 g C) yr^-1.
b Units are g C m^-2 yr^-1.
Conifer forests are aggregated into three major vegetation types based on physiognomy and climate: boreal forests, temperate mixed forests, and temperate conifer forests. Boreal forests (located mainly in Canada and Alaska, northern Europe, and the Commonwealth of Independent States) contain both conifer and deciduous dominant species and represent 62% of all potential conifer forests. The next most abundant forest type, temperate mixed forests (26%), also contains both conifer and deciduous dominant species and is found mainly in the United States, Europe, and China. Temperate conifer forests have a global distribution similar to temperate mixed forests, but are also found in many mountainous regions.

Contemporary Climate
To estimate fluxes and pools of grasslands and conifer forests for 'contemporary' conditions, we applied TEM at 355 ppmv CO2 using the long-term climate data with the 1 m calibration. For grasslands, TEM estimates an annual NPP of 2.2 Pg C (10^15 g C; Table 2). The vegetation and soil carbon estimates for grassland are 3.4 Pg C (Table 3) and 75.7 Pg C (Table 4). Although conifer forests occupy 2.4 times the area of grasslands, the estimated annual NPP is 3.4 times that of grasslands (7.4 Pg C; Table 2), vegetation carbon is 84.3 times greater (286.7 Pg C; Table 3), and soil carbon is 3.1 times greater (231.4 Pg C; Table 4). The higher vegetation carbon of conifer forests reflects the ability of forests to store carbon in woody tissue. Per unit area, TEM estimates that total carbon storage in global conifer forests is 2.8 times that in global grasslands.
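The 2.8-fold per-unit-area figure can be reproduced from the tables themselves; this consistency check uses the areal extents in Table 1 and the carbon totals in Tables 3 and 4.

```python
# Arithmetic behind the per-unit-area comparison: areas from Table 1
# (10^6 km^2) and total carbon (vegetation + soil) from Tables 3 and 4 (Pg C).
grass_area_m2 = 8.3e6 * 1e6       # 8.3 x 10^6 km^2 -> m^2
conifer_area_m2 = 19.7e6 * 1e6
grass_c_g = (3.4 + 75.7) * 1e15   # Pg C -> g C
conifer_c_g = (286.7 + 231.4) * 1e15

grass_per_m2 = grass_c_g / grass_area_m2        # ~9530 g C m^-2
conifer_per_m2 = conifer_c_g / conifer_area_m2  # ~26300 g C m^-2
ratio = conifer_per_m2 / grass_per_m2           # ~2.8, as stated in the text
```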
Table 3 Estimates by the TEM of vegetation carbon for grasslands and conifer forests in the terrestrial biosphere at an atmospheric CO2 concentration of 355 ppmv (parameterised for 1 m soil carbon)

                      Total carbon^a  Mean carbon^b  Max. carbon^b  Min. carbon^b
Grasslands
  Tall                1.8             512            1156           208
  Short               1.6             337            690            113
  Total               3.4             413            1156           113
Conifer forests
  Boreal forest       118.5           9739           17739          5033
  Temperate conifer   90.7            37803          57196          16939
  Temperate mixed     77.5            15224          24269          5261
  Total               286.7           14586          57196          5033

a Units are Pg C (10^15 g C).
b Units are g C m^-2.
Table 4 Estimates by the TEM of soil carbon for grasslands and conifer forests in the terrestrial biosphere at an atmospheric CO2 concentration of 355 ppmv (parameterised for 1 m soil carbon)

                      Total carbon^a  Mean carbon^b  Max. carbon^b  Min. carbon^b
Grasslands
  Tall                58.4            16211          23714          8039
  Short               17.3            3701           5129           1689
  Total               75.7            9156           23714          1689
Conifer forests
  Boreal forest       132.3           10878          12189          4578
  Temperate conifer   45.1            18804          28219          5097
  Temperate mixed     54.0            10612          15606          6285
  Total               231.4           11777          28219          4578

a Units are Pg C (10^15 g C).
b Units are g C m^-2.
The area-based estimates by TEM may, however, be compared with published values. The grassland NPP estimate of
267 g C m^-2 yr^-1 by TEM (Table 2) is similar to the estimate of 225 g C m^-2 yr^-1 by Whittaker and Likens for temperate grasslands. However, the vegetation carbon estimate of 413 g C m^-2 (Table 3) is substantially lower than their estimate of 700 g C m^-2. The soil carbon estimate of 9156 g C m^-2 by TEM for grasslands is similar to 10692 g C m^-2 (417 Pg/3900 × 10^6 ha) reported by Ojima et al. for soil carbon to 1 m depth in potential world grasslands. The TEM estimate of 238 g C m^-2 yr^-1 for boreal forest NPP (Table 2) is substantially lower than the Whittaker and Likens estimate of 360 g C m^-2 yr^-1. However, the estimate for boreal forest vegetation carbon (9739 g C m^-2; Table 3) is similar to their estimate of 9000 g C m^-2. The TEM estimate of 10878 g C m^-2 for boreal forest soil carbon (Table 4) is lower than the 14900 g C m^-2 estimate of Schlesinger; the TEM estimate is lower because the model does not represent anaerobic processes that cause higher carbon storage to occur in northern peatlands. The Whittaker and Likens NPP estimate of 585 g C m^-2 yr^-1 for temperate evergreen forest is intermediate between the TEM estimates for temperate conifer and temperate mixed forest. Their estimate of 16000 g C m^-2 for vegetation carbon of temperate evergreen forest is similar to the TEM estimate of 15224 g C m^-2 for temperate mixed forest; the TEM estimate for temperate conifer forest (37803 g C m^-2; Table 3) is much higher because the calibration site is an old-growth forest in the Pacific Northwest of the United States.
The Schlesinger estimate of 11800 g C m^-2 for temperate forest soil carbon is similar to the TEM estimate for temperate mixed forest (10612 g C m^-2; Table 4). Again, the TEM estimate for temperate conifer forest (18804 g C m^-2; Table 4) may be higher because the calibration site is an old-growth forest.
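The unit conversion behind the Ojima et al. comparison cited above is worth making explicit: 417 Pg C spread over 3900 × 10^6 ha works out to the quoted per-square-metre value.

```python
# Converting the Ojima et al. figure (417 Pg C over 3900 x 10^6 ha of
# potential world grasslands) to grams of carbon per square metre.
total_c_g = 417e15            # 417 Pg C, with 1 Pg = 10^15 g
area_m2 = 3900e6 * 1e4        # 3900 x 10^6 ha -> m^2 (1 ha = 10^4 m^2)
per_m2 = total_c_g / area_m2  # ~10692 g C m^-2, as quoted in the text
```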
Development of Future Climate Scenarios
We obtained the output of two general circulation models (GCMs) from the National Centre for Atmospheric Research. The simulations estimate equilibrium climates that correspond to a doubling of the atmospheric CO2 concentration and include one from the Geophysical Fluid Dynamics Laboratory (GFDL) and one from Oregon State University (OSU). The GFDL GCM represents a high temperature impact scenario and predicts increases of 4.9 °C for the grassland biome and 6.2 °C for the conifer forest biome. The OSU GCM represents a low impact scenario and predicts increases of 3.2 and 3.4 °C for the grassland and conifer forest biomes. Precipitation for grasslands is predicted to increase by both GCMs, with larger increases predicted by OSU (82.0 mm) than by GFDL (46.5 mm). Precipitation is also predicted to increase for conifer forests, with similar increases predicted by GFDL (67.7 mm) and OSU (75.9 mm). Cloudiness is predicted by GFDL and OSU to decrease 1.4 and 2.1% for grasslands, respectively. For conifer forests, cloudiness is predicted by GFDL to increase 0.6%, but by OSU to decrease 2.5%. To help separate the effects of changes in CO2 concentration from those of the GCM climates on estimates of NPP and carbon pools, we performed a factorial experiment with the 1 m calibration of TEM involving two levels of CO2 (312.5 and 625.0 ppmv) and three climate scenarios (contemporary and the two GCM climates).

Responses to Doubled Carbon Dioxide
For doubled CO2 and no climate change, TEM predicts that NPP, vegetation carbon, and soil carbon for individual grid cells of grasslands and conifer forests either do not change or increase. For global grasslands, TEM predicts that NPP increases 0.2 Pg C (9.1%; Table 5), vegetation
carbon increases 0.3 Pg C (9.1%; Table 6), and soil carbon increases 3.4 Pg C (4.3%; Table 7). For global conifer forests, TEM predicts NPP to increase 0.4 Pg C (5.5%; Table 5), vegetation carbon to increase 17.5 Pg C (6.2%; Table 6), and soil carbon to increase 9.0 Pg C (3.9%; Table 7). Thus, for doubled CO2 and no climate change, conifer forests are potentially a more responsive carbon sink than grasslands. The response of conifer forest NPP and soil carbon to elevated CO2 predicted by TEM depends on the degree to which NPP is limited by nitrogen availability. In moist regions of temperate conifer forest, where NPP is predicted by TEM to be limited by nitrogen availability more than by soil moisture, there is little response to elevated CO2. In dry regions, elevated CO2 promotes enhanced water-use efficiency in TEM, which translates into increased NPP. Because most of the conifer forest region considered in this study is moist conifer forest, i.e. in the boreal and temperate-mixed regions, the enhancement of NPP and soil carbon in response to elevated CO2 is small.

Responses to Changes in Climate
With no change in CO2 concentration, TEM predicts for both the GFDL and OSU climates that responses of NPP, vegetation carbon, and soil carbon for individual grid cells of grasslands and conifer forests can be either positive or negative. For global grasslands, TEM predicts annual NPP to increase 0.4 Pg C (18.2%) for the GFDL climate and 0.2 Pg C (9.1%) for the OSU climate (Table 5). Vegetation carbon is predicted to increase 0.7 Pg C (21.2%) for the GFDL climate and 0.6 Pg C (18.2%) for the OSU climate (Table 6). For the GFDL climate, soil carbon decreases 3.8 Pg C (5.1%; Table 7). In contrast, soil carbon is predicted to increase slightly for the OSU climate (0.7 Pg C, 0.9%; Table 7).
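The factorial design can be tabulated directly. Reading the global grassland NPP totals from Table 5 and expressing each run relative to the contemporary climate at 312 ppmv reproduces the percentage responses quoted in the text.

```python
# The 2 x 3 factorial: two CO2 levels crossed with three climate scenarios.
# Values are global grassland NPP totals (Pg C yr^-1) from Table 5; responses
# are percentage changes relative to the contemporary-312 baseline.
npp = {
    ("contemporary", 312): 2.2, ("contemporary", 625): 2.4,
    ("GFDL", 312): 2.6, ("GFDL", 625): 2.9,
    ("OSU", 312): 2.4, ("OSU", 625): 2.6,
}
baseline = npp[("contemporary", 312)]
response_pct = {run: round(100.0 * (v - baseline) / baseline, 1)
                for run, v in npp.items()}
# Climate-only effects are the 312-ppmv runs; CO2-only is ("contemporary", 625).
```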
For global conifer forests, TEM predicts annual NPP to increase 1.0 Pg C (13.7%) for the GFDL climate and 0.9 Pg C (12.3%) for the OSU climate (Table 5). Increases in vegetation carbon are slightly higher for the GFDL climate (38.1 Pg C, 13.5%; Table 6) than for the OSU climate (34.4 Pg C, 12.2%; Table 6). The decreases in soil carbon predicted for the GFDL climate (31.3 Pg C, 13.7%; Table 7) are more than three times those predicted for the OSU climate (9.2 Pg C, 4.0%; Table 7). Thus, for the climates considered with no change in CO2, grasslands are potentially either carbon sources or sinks and conifer forests are potentially sinks. The sink strength is greater for conifer forests than grasslands because of the ability of forests to store carbon in woody tissue. In grasslands, and in boreal and cool-moist temperate
regions of conifer forest, elevated temperature generally increases the NPP predicted by TEM through enhanced nitrogen availability. Schimel et al., based on 50-year simulations of climate change with the CENTURY model for sites in the Great Plains, attributed increased NPP to elevated nitrogen availability because of enhanced decomposition, but indicated that nitrogen losses related to higher decomposition could decrease NPP in the long term. Burke et al. applied the doubled-CO2 climate predicted by the Goddard Institute for Space Studies (GISS) GCM to the central Great Plains and reported that above ground NPP for the region increased less than 10% after 50 years of simulation with CENTURY. This result is similar to the equilibrium response predicted by TEM for the OSU climate applied to global grasslands. Linked models of forest productivity and soil processes have also predicted that elevated temperature enhances conifer growth through increased nitrogen availability for simulations at specific sites.
Decreases in NPP may result from elevated temperature by reducing soil moisture. This mechanism primarily affects the NPP response of TEM in dry regions of conifer forest. Other models have predicted that elevated temperature may increase evapotranspiration to decrease forest growth in both dry and wet regions of present-day conifer forest. Elevated temperature may also decrease NPP by enhancing respiration costs relative to carbon uptake. This mechanism has been observed to primarily affect the NPP response of TEM in warm moist regions of temperate conifer forest; it has also been observed to be an important factor influencing the NPP response predicted by the Forest-BGC model to elevated temperature at a site in warm-moist temperate forest. Because elevated temperature influences processes that can either enhance or decrease the NPP predicted by TEM, the NPP response to the high-temperature GFDL climate did not substantially differ from the NPP response to the low-temperature OSU climate.
Table 5 Response of NPP (10^15 g C yr^-1) by region for experiment involving two levels of atmospheric CO2 and three levels of climate (parameterised for 1 m soil carbon)

Climate:                    Contemporary     GFDL            OSU
CO2 concentration (ppmv):   312    625       312    625      312    625
Grasslands
  Tall                      1.2    1.3       1.4    1.5      1.3    1.4
  Short                     1.0    1.1       1.2    1.4      1.1    1.2
  Total                     2.2    2.4       2.6    2.9      2.4    2.6
Conifer forests
  Boreal forest             2.9    2.9       3.8    4.4      3.5    3.7
  Temperate conifer         1.1    1.2       1.1    1.3      1.1    1.3
  Temperate mixed           3.3    3.6       3.4    4.0      3.6    4.0
  Total                     7.3    7.7       8.3    9.7      8.2    9.0
The NPP response of TEM to elevated temperature influences the response of vegetation carbon; increased NPP translates into increased vegetation carbon for both the GFDL and OSU climate scenarios. Linked models of forest productivity and soil processes have also predicted increased above ground vegetation carbon in response to climate warming for conifers of both the temperate and boreal region, although the response depends on whether or not soil moisture is affected.
Table 6 Response of vegetation carbon (10^15 g C) by region for experiment involving two levels of atmospheric CO2 and three levels of climate (parameterised for 1 m soil carbon)

Climate:                    Contemporary     GFDL            OSU
CO2 concentration (ppmv):   312    625       312    625      312    625
Grasslands
  Tall                      1.8    1.9       2.1    2.3      2.1    2.2
  Short                     1.5    1.7       1.9    2.2      1.8    2.0
  Total                     3.3    3.6       4.0    4.5      3.9    4.2
Conifer forests
  Boreal forest             117.9  120.5     154.9  178.2    144.7  151.5
  Temperate conifer         88.3   98.0      88.6   108.9    89.7   104.7
  Temperate mixed           75.7   80.9      76.5   91.9     81.9   91.7
  Total                     281.9  299.4     320.0  379.0    316.3  347.9
Table 7 Response of soil carbon (10^15 g C) by region for experiment involving two levels of atmospheric CO2 and three levels of climate (parameterised for 1 m soil carbon)

Climate:                    Contemporary     GFDL            OSU
CO2 concentration (ppmv):   312    625       312    625      312    625
Grasslands
  Tall                      58.0   60.0      54.9   59.6     58.7   61.7
  Short                     17.0   18.1      16.3   18.4     17.0   18.4
  Total                     75.0   78.1      71.2   78.0     75.7   80.1
Conifer forests
  Boreal forest             131.9  134.2     114.1  129.5    128.6  133.9
  Temperate conifer         44.3   48.0      37.3   45.0     40.8   46.4
  Temperate mixed           53.1   56.1      46.7   54.7     50.7   55.6
  Total                     229.3  238.3     198.1  229.2    220.1  235.9
The response of soil carbon to elevated temperature will depend on the NPP response, which influences inputs into the soil, and on the decomposition response per unit soil carbon, which influences CO2 losses from the soil organic pool.
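The balance described here, inputs versus decomposition per unit soil carbon, implies a simple equilibrium: the pool settles where inputs equal losses, C* = I / k(T). The sketch below uses a Q10 temperature response for k(T), a common modelling convention adopted here purely for illustration; it is not TEM's published formulation, and all parameter values are assumptions.

```python
# Equilibrium soil carbon under first-order decomposition: C* = I / k(T).
# The Q10 form and all constants are illustrative assumptions.
def decay_rate_per_yr(temp_c, k_ref=0.02, t_ref=10.0, q10=2.0):
    """First-order decomposition rate (yr^-1) with a Q10 temperature response."""
    return k_ref * q10 ** ((temp_c - t_ref) / 10.0)

def equilibrium_soil_carbon(litter_input, temp_c):
    """Soil carbon pool (g C m^-2) at which inputs equal decomposition losses."""
    return litter_input / decay_rate_per_yr(temp_c)

# Warming with unchanged inputs shrinks the equilibrium pool; warming that
# also raises NPP (and hence litter input) partly offsets the loss.
c_cool = equilibrium_soil_carbon(200.0, 10.0)          # 200 / 0.02 = 10000
c_warm = equilibrium_soil_carbon(200.0, 16.0)          # smaller pool at +6 C
c_warm_more_npp = equilibrium_soil_carbon(240.0, 16.0) # input rise compensates
```

This captures the tug-of-war the text describes: soil carbon falls under warming unless the NPP-driven input increase keeps pace with the faster per-unit decomposition.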
In TEM, if elevated temperature does not substantially decrease available soil moisture, then it will increase decomposition per unit soil carbon. Elevated temperature is predicted by TEM to decrease soil carbon for the high-temperature GFDL climate, but not for the low-temperature OSU climate. Schimel et al. indicated that soil carbon levels decreased in response to elevated temperature at sites in both the northern and southern Great Plains of the United States. Similarly, Burke et al. reported that soil carbon levels of the central Great Plains decreased approximately 3% after running CENTURY for 50 years with the GISS climate. This decrease is intermediate to the responses predicted by TEM for the GFDL and OSU climates; the temperature increase predicted by the GISS climate is intermediate between the GFDL and OSU climates. Elevated temperature is predicted by TEM to decrease the soil organic pool of conifer forests for both the GFDL and OSU climates, with greater decreases predicted for the high-temperature GFDL scenario. A linked model of boreal forest productivity and soil processes also predicts that soil organic carbon of boreal conifers decreases in response to climatic warming.
Responses to Changes in Climate and Carbon Dioxide
With changes in both climate and CO2 concentration, TEM predicts that responses of NPP, vegetation carbon, and soil carbon for individual grid cells of grasslands and conifer forests may be positive or negative. For global grasslands, TEM estimates annual NPP to increase 0.6 Pg C (27.3%) for the GFDL climate (Table 5). The predicted increases for the OSU climate are slightly less (0.4 Pg C, 18.2%; Table 5). Vegetation carbon increases 1.2 Pg C (36.4%) for the GFDL climate and 0.9 Pg C (27.3%) for the OSU climate. The predicted increases in soil carbon are less for the GFDL climate (3.0 Pg C, 4.0%; Table 7) than for the OSU climate (5.1 Pg C, 6.8%; Table 7). For global conifer forests, TEM predicts annual NPP to increase 2.4 Pg C (32.9%) for the GFDL climate, which is about 40% more than the 1.7 Pg C (23.3%) increase for the OSU climate. Increases in vegetation carbon predicted for the two climates show a similar pattern; for the GFDL climate, the 97.1 Pg C increase (34.4%) is more than 40% higher than the 66.0 Pg C increase (23.4%) predicted for the OSU climate (Table 6). Soil carbon decreased slightly for the GFDL climate (0.1 Pg C; <0.1%; Table 7), but increased for the OSU climate (6.6 Pg C; 2.9%; Table 7). Thus, for the climates considered with elevated CO2 concentration, conifer forests are potentially much stronger carbon sinks than grasslands because of the ability to store carbon in woody biomass.

The response of NPP to elevated CO2 and temperature predicted by TEM is influenced by moisture availability. In moist regions of temperate forest, elevated temperature enhances decomposition to increase nitrogen availability. The increased nitrogen availability generally allows the vegetation to incorporate elevated CO2 into production, but the overall effect on NPP is sensitive to the plant respiration response. In contrast, the Forest-BGC model predicts a slight decrease in NPP for a site in warm-moist conifer forest because the enhanced respiration costs of elevated temperature more than offset the photosynthetic gains from elevated CO2. In dry regions of temperate forest, NPP is predicted by TEM to increase because enhanced carbon uptake in response to elevated CO2 generally more than compensates for decreased soil moisture or increased plant respiration caused by elevated temperature. Similarly, for a site in a dry conifer forest the Forest-BGC model predicts increased NPP in response to elevated temperature and CO2 because photosynthetic gains more than offset respiration costs. The increases in NPP predicted by TEM for conifer forests in response to elevated temperature and CO2 translate into increased vegetation carbon for both the GFDL and OSU climates. However, the enhanced NPP is able to compensate for the increased decomposition per unit soil carbon for the low-temperature OSU climate, but not for the GFDL climate. Thus, soil carbon increases are predicted for the OSU climate and decreases for the GFDL climate.

Table 8 Response of soil carbon (10^15 g C) in tall grasslands and temperate conifer forest for the 1 m and 20 cm calibrations in the experiment involving two levels of atmospheric CO2 and three levels of climate

Climate:                    Contemporary     GFDL            OSU
CO2 concentration (ppmv):   312    625       312    625      312    625
Tall grasslands
  1 m calibration           58.0   60.0      54.9   59.6     58.7   61.7
  20 cm calibration         17.4   18.0      16.6   18.0     17.6   18.6
Temperate conifer forests
  1 m calibration           44.3   48.0      37.3   45.0     40.8   46.4
  20 cm calibration         12.7   13.8      10.7   12.9     11.7   13.3
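The contrast between the two calibrations in Table 8 can be checked numerically, here for tall grasslands under the GFDL climate at 312 ppmv (climate change only; values in Pg C).

```python
# Soil-carbon responses read from Table 8 for tall grasslands, comparing
# the 1 m and 20 cm calibrations (contemporary vs GFDL, both at 312 ppmv).
deep = {"contemporary": 58.0, "gfdl": 54.9}     # 1 m calibration (Pg C)
shallow = {"contemporary": 17.4, "gfdl": 16.6}  # 20 cm calibration (Pg C)

abs_deep = deep["gfdl"] - deep["contemporary"]           # -3.1 Pg C
abs_shallow = shallow["gfdl"] - shallow["contemporary"]  # -0.8 Pg C
prop_deep = abs_deep / deep["contemporary"]              # ~ -5.3%
prop_shallow = abs_shallow / shallow["contemporary"]     # ~ -4.6%
# The absolute response is roughly four times larger for the 1 m calibration,
# while the proportional responses are similar.
```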
Sensitivity to the Calibration Depth of Soil Carbon
The responses of NPP and vegetation carbon to changed climate or CO2 do not demonstrate any sensitivity to the calibration depth of soil carbon for either tall grasslands or temperate conifer forests. However, the absolute responses of soil carbon for the 1 m calibrations of tall grasslands and temperate conifer forest are approximately three to four times larger than for the 20 cm calibrations (Table 8). Although the absolute response of soil carbon is always greater for the 1 m calibration, the proportional responses are essentially identical for both the 1 m and 20 cm calibrations. For models that make equilibrium estimates, these results indicate the importance of identifying at the calibration site the soil carbon that is likely to be actively decomposing over the time frame of interest. For climate change studies involving a doubled CO2 atmosphere, the appropriate time scale is decades to centuries. The inclusion of soil carbon that turns over on the time scale of millennia ('old carbon') will overestimate the response of soil carbon. The CENTURY model estimates soil carbon to a depth of 20 cm. This depth may be approximately appropriate for identifying the relevant soil carbon in grasslands, where most inputs are near the surface, but in forests the rooting zone may be much deeper than 20 cm. Because both recent and old carbon may occur at all depths in a forest soil, depth may not be the best metric to identify the actively decomposing soil carbon that is appropriate to doubled CO2 climate studies.

Spatial Scale for the Comparison of Model Responses
During the next century, substantial simultaneous changes are predicted to occur in several climatic variables
including CO2, temperature, precipitation, and cloudiness. To assess the influence of these changes on regional carbon cycling, it is desirable to represent how the interactive effects of climate change and elevated CO2 influence ecosystem processes in a spatially continuous manner. Although several models have been used to study the potential effects of climate changes on carbon cycling in grasslands and conifer forests, few have been used to study the interactive effects of climate with elevated CO2. Also, most investigations have focused on potential responses at specific sites rather than the responses at larger spatial scales; the results of site-specific investigations can appear contradictory, so that responses at the regional scale are difficult to assess. The spatial scale is considered at two resolutions. The fine resolution is 'site', which may include a grid cell or polygon for which the climate variables were treated in the study as having no spatial variability. The coarse spatial scale is 'regional', which we define as an aggregation of grid cells or polygons. The CENTURY model has been used to study potential responses of NPP and soil carbon in the grassland biome at both the site and regional scales. The region considered by Burke et al. is the central Great Plains of the United States, which is contained within the total grassland region considered by TEM. The climatic responses predicted by both models are consistent at both the site and the regional spatial scales, although the regions considered by CENTURY and TEM are not identical. Of the CENTURY studies, only Ojima et al. consider responses to changes in both climate and CO2; these are consistent with the TEM results at the site scale.
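The 'regional' scale defined above is an aggregation of grid cells, which in practice means area-weighted averaging, since 0.5° cells shrink toward the poles. The sketch below is hypothetical: the cosine-latitude area formula and the example values are illustrative assumptions, not taken from the study.

```python
import math

# Hypothetical sketch of aggregating per-cell values to a regional mean on
# a 0.5-degree grid. The simple cosine-latitude area approximation and the
# example NPP values are illustrative assumptions.
def cell_area_km2(lat_deg, res_deg=0.5):
    """Approximate area of a res_deg x res_deg cell centred at latitude lat_deg."""
    km_per_deg = 111.32  # ~length of one degree of latitude in km
    return (res_deg * km_per_deg) ** 2 * math.cos(math.radians(lat_deg))

def regional_mean(cells):
    """cells: iterable of (lat_deg, value). Returns the area-weighted mean."""
    weighted = [(cell_area_km2(lat), v) for lat, v in cells]
    return sum(w * v for w, v in weighted) / sum(w for w, _ in weighted)

# A high-latitude cell contributes less to the regional mean than a
# mid-latitude cell of the same nominal 0.5-degree size:
mean_npp = regional_mean([(60.0, 200.0), (40.0, 400.0)])
```

Site-scale results are single cells of such a grid; the regional response is this weighted aggregate, which is why site studies alone can appear contradictory while the regional signal is consistent.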
The application of models to study the potential responses of conifer forests has primarily focused on the effects of climate change at the site scale. The results of these studies are consistent with responses predicted by TEM at the site scale, but only one of the studies has examined the response of soil carbon.
2
Greenhouse Effect and Ecosystems

Significant changes of the marine system on a global scale are less well documented, but it has been shown clearly that man-made pollutants are invading even the deep sea. Air and water pollution are generally increasing. It is, of course, hardly surprising that the presence of almost 5000 million people on Earth is altering the natural systems significantly. Some such changes must be accepted in order to permit the exploitation of the natural resources on which man is dependent. What do the changes of the terrestrial and marine ecosystems and other ongoing changes mean to man in a long-term perspective? It is important to keep in mind throughout the following discussion that the CO2 problem, or rather the problem of a possibly changing climate due to emissions of greenhouse gases into the atmosphere, cannot be considered in isolation. It is one of many important environmental problems that must be addressed but in a long-term perspective probably the most important one. The realisation that the climate might change as a result of emissions of carbon dioxide into the atmosphere is not new. Arrhenius pointed out that the burning of fossil fuels might cause an increase of atmospheric CO2 and
thereby change the radiation balance of the Earth. During the 1930s Callendar for the first time convincingly showed that the atmospheric CO2 concentration was increasing. Earlier attempts had not been successful, primarily due to non-representative sampling. The problem was revived again by C. G. Rossby in the 1950s, who was the driving force behind the initiation of CO2 measurements in Scandinavia, and by R. Revelle, who was instrumental in getting C. D. Keeling engaged in the observational programmes on Mauna Loa, Hawaii, and at the South Pole in 1957-58. At about the same time Revelle and Suess presented the first more careful assessment of the likely future CO2 increases due to fossil fuel combustion. This was followed by a more elaborate analysis by Bolin and Eriksson. The observations begun in 1958 have clearly shown that the concentration of carbon dioxide (CO2) in the atmosphere has increased from about 315 ppmv then to about 343 ppmv in 1984. We know approximately the amounts of CO2 that have been emitted into the atmosphere by fossil fuel combustion and changing land use (deforestation and expanding agriculture) and can relate the observed increase of atmospheric CO2 to these human activities. Since a continued increase of the atmospheric CO2 concentration might lead to changes of the global climate, it is essential to be able to project the likely future concentrations that may occur due to various possible rates of CO2 emission. The reason for concern about climatic effects is the so-called 'greenhouse effect' of CO2. While CO2 is transparent to incoming short wave radiation from the Sun, it absorbs outgoing long wave radiation and re-emits this energy in all directions. Therefore, an increase of the atmospheric CO2 concentration leads to a warming of the Earth's surface and lower atmosphere. In addition, it is
becoming increasingly clear that a number of other greenhouse gases in the atmosphere similarly affect the radiation budget. Their concentrations are also changing as a result of natural and human causes. Since increased concentrations of CO2 as well as of these other greenhouse gases all lead to a warming of the Earth's surface and lower atmosphere, the estimated climatic effects and further impacts (e.g. on sea level and agriculture) must be considered as a result of a combined effect of these potential origins of the warming. However, in order to be able to make estimates of their relative contributions to the warming and associated climatic changes at any given time, their effects are studied separately. NUMEROUS ASSESSMENTS OF THE CO2 PROBLEM
The first international assessment of the CO2 issue organised by UNEP, WMO and ICSU resulted from an expert meeting held in Villach, Austria, in November 1980. The projection of future fossil fuel use made at that meeting was largely based on a scenario developed at the Institute for Energy Analysis, Oak Ridge (USA). The projection of the atmospheric carbon dioxide concentration in 2025 was made by assuming that 40-55% of the total emissions would remain in the atmosphere (the so-called airborne fraction). The globally averaged surface temperature response to a doubling of the atmospheric CO2 concentration was estimated by examining the results of numerical models of the climate system. The WCP report concludes that CO2-induced climatic change is a major environmental issue but that, because of existing uncertainties, the development of a management plan for control of CO2 levels in the
atmosphere and for preventing detrimental impacts on society would be premature. It was felt that research to place decision-making with respect to CO2 on a firm scientific basis merits high priority. The meeting further emphasised that the CO2 problem affects both developing and developed nations and calls for a special partnership of effort. The report of the Carbon Dioxide Assessment Committee of the U.S. National Research Council gives a detailed assessment of the various aspects of the CO2 problem. The model used a range of paths and uncertainties for major economic, energy and carbon dioxide variables, which allowed a 'best guess' of the future path of carbon dioxide emissions and a reasonable range of possible outcomes given present knowledge. The estimate of the net emissions of CO2 from the biota was made on the basis of information presented in the report and available in the scientific literature. The possible future atmospheric CO2 concentrations were calculated using an estimate of 0.60 ± 0.10 as the likely future airborne fraction of the projected emissions due to fossil fuel combustion. The effects of CO2-induced climatic changes on agricultural, social and economic systems were also assessed, with emphasis on the United States. It was concluded that the longer-term agricultural effects are uncertain and depend strongly on the outcome of future research, development, and new technology in agriculture. The CDAC report reached a general conclusion similar to that of the WCP report, i.e. that the evidence at hand about CO2-induced climatic change does not support steps to change current fuel-use patterns away from fossil fuels, although such steps may be necessary or desirable at some time in the future. The report pointed out that steps to control climatic change should start possibly with reductions of the emissions of other greenhouse gases,
since their control may be more easily achieved. Further, the CDAC report suggested that the CO2 problem might serve as a stimulus for increasingly effective cooperative treatment of world issues. The study of the US Environmental Protection Agency (EPA) took a different approach to the problem by examining whether specific policies aimed at limiting the use of fossil fuels would prove effective in delaying temperature increases over the next 120 years. The projections of future energy demand and supply were made using the world energy model of the Institute for Energy Analysis. A global carbon model developed at Oak Ridge National Laboratory (ORNL) was used to estimate the atmospheric CO2 concentration. The changes of the atmospheric temperature were evaluated using a simplification of a one-dimensional radiative-convective model developed at the Goddard Institute for Space Studies. The EPA analysis concluded that only a ban on coal use instituted before 2000 would effectively slow down the rate of global temperature change and delay a 2°C increase until 2055. It was concluded further that major uncertainties include the increasing concentrations of other greenhouse gases and the atmospheric temperature response, and that alternative energy futures produced only minor shifts in the calculated date of a 2°C warming. Although the results suggested that bans on coal and shale oil are most effective in reducing temperature increases in 2100, the EPA study concluded that a ban on coal is probably economically and politically infeasible. A Committee of the Health Council of the Netherlands made an assessment of the CO2 problem in 1983. The energy scenarios upon which the assessment was based were taken from the IIASA study, with CO2 emissions
from fossil fuels in 2030 ranging between 8.9 GtC/year and 16.2 GtC/year. The changes in the atmospheric carbon dioxide concentration were calculated using a model of the carbon cycle. The concentration in 2030 was estimated to be 431 ppmv and 482 ppmv for the IIASA 'low' and 'high' scenarios respectively. Considerable emissions from the further exploitation of the terrestrial ecosystems were assumed, but also more effective uptake by undisturbed forest systems and by soils due to charcoal formation in the process of burning during deforestation. The likely future temperature
present uncertainties, prudence dictates a cautious and flexible energy strategy. The group recommended a 'low climate-risk energy policy', which would promote the more efficient end use of energy, secure the expeditious development of energy sources that add little or no CO2 to the atmosphere, and keep global fossil fuel use, and hence CO2 emission, at the present level. A further working group at the same conference noted that decisions that will have to be made in the decades ahead to prepare for or avert a carbon-dioxide-induced climatic change will have to
be made before all the answers have been obtained. It was concluded that the assessment of the impacts of climatic changes will have to be made now despite the uncertainties. The net emissions from the biota (due to deforestation and land use changes) will in themselves be insufficient to bring about a significant change of climate, while fossil fuel reserves are sufficiently large that environmental disturbance would occur if the reserves are exploited at an increasing rate in the future. Although there are differences in the estimated globally averaged surface temperature response to a doubling of the atmospheric CO2 concentration, these differences are not large, but the uncertainties of these estimates are considerable. It is also generally agreed that regional differences of climatic change cannot be predicted at present, and similarly the general way a given change of climate would influence people and nations around the world cannot be predicted. This is presumably the reason why there is substantial disagreement regarding the recommendations for future action. While some assessments conclude that there is not sufficient evidence to support changes of current fossil fuel use patterns, other assessments conclude that immediate action is necessary. The implications of the existing uncertainties have been evaluated differently in the various studies. MAJOR FINDINGS OF THE PRESENT STUDY
The analysis of the 'CO2 Problem' presented in this report has been pursued along the lines usually adopted by scientists. When an assessment is to be made of the problem as a whole, it is not sufficient to proceed step by step through this sequence. There will always be
uncertainties in the successive analyses and it is difficult to assess what can be said with some degree of reliability with regard to the problem as a whole. It is, therefore, necessary to single out findings that are significant and upon which there is general agreement. At the same time, the nature of the uncertainties must be examined. It should be borne in mind that policy will ultimately be based on a judgment of the problem as a whole in relation to other societal problems, some caused by other environmental problems, some of a totally different nature. In this process the urgency of the problem is of decisive importance in the establishment of a policy for action. Long-term problems, such as the CO 2 issue, are therefore often deferred for later consideration. Both scientific progress and the adoption of a policy for action depend very much on the availability of factual information about ongoing changes. In the former case the information is essential for verification of model projections of future changes, in the latter case the evidence of observations usually is necessary to convince the general public and make action politically possible. Emissions of Carbon Dioxide into the Atmosphere
Combustion of fossil fuels - primarily oil, gas and coal - today meets about 80% of the total global energy demand. Future emissions of carbon dioxide will depend on how this global energy demand will change and what role fossil fuels will play in the future global energy supply system. The characteristics of the energy system ('the energy mix') change only slowly because of the large capital investments that are required to establish the units that supply the energy needed, and because the lifetime of existing installations is long and accordingly the time
required to develop new supply systems is long. Even so, projections of future energy demands made during the last decade or two have not been very successful, a principal reason being the rather dramatic events that have taken place in the oil market since the early 1970s. A considerable decrease of the energy use per production unit has occurred and a further decrease is technically possible. Whether this will take place or not depends primarily on the relative costs for investments in new energy supply systems on one hand and those for conservation and energy savings on the other. Many other factors, however, also play an important role in this context. It has been pointed out in a few recent analyses of
the problem and has become increasingly clear in the present analysis that the projections of energy use beyond the early decades of the next century are very uncertain. This also implies that the options for choices increase in the long-term perspective. It is therefore not very meaningful to attempt precise projections beyond 30-40 years, but instead likely upper and lower bounds for future energy demands are estimated. The estimated upper bound of possible scenarios implies an emission of about 20 GtC/year in the year 2050, i.e. about a fourfold increase of present emissions during the next 65 years. It is interesting to note that the last fourfold increase of emissions has occurred since the late 1930s, i.e. during the last 45-50 years. Although it is recognised that some studies have projected an even more rapid future increase, it is argued that this seems quite unlikely because of a number of environmental, social and logistic constraints. On the other hand it is not likely that the CO2 emissions can be reduced to a value below about 2 GtC/year in 2050, and this value
could only be achieved by sustained global efforts to limit the future energy demand and, particularly, the use of fossil fuels. Even lower values have been projected but are judged to be unrealistic on economic grounds. Low CO2 emissions would only be achieved if there were some global acceptance of a much more considerate attitude towards the natural environment, implying decreasing use of fossil fuels. Although some people already consider this as fundamental for the continued well-being of human society, a general acceptance will take a long time. At present about 5 GtC/year are emitted into the atmosphere through the combustion of fossil fuels. Although biotic emissions of carbon dioxide as a result of deforestation and land use changes have also contributed to the rise of the atmospheric carbon dioxide concentration in the past, it is clear that in the future the emissions from the biota will be comparatively small, since there is a physical limit to the extent of deforestation. If there is going to be a significant increase of the atmospheric carbon dioxide concentration during the next hundred years, it will come from the emissions of CO2 by fossil fuel use and, in a longer term perspective, particularly from the use of coal. The range of uncertainty attached to a prediction of likely carbon emissions in 2050 is not of the kind that natural scientists assign to an observation or to a model result.
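The growth rates implied by these bounds can be checked with simple compound-interest arithmetic. The sketch below (a Python illustration, using only the 5 GtC/year present-day and 20 GtC/year upper-bound figures quoted above) solves for the constant annual growth rate connecting two emission levels.

```python
def annual_growth_rate(start, end, years):
    """Constant compound rate r such that start * (1 + r)**years == end."""
    return (end / start) ** (1.0 / years) - 1.0

# Upper-bound scenario: 5 GtC/year today rising to 20 GtC/year by 2050 (~65 years).
future = annual_growth_rate(5.0, 20.0, 65)

# Historical comparison: the previous fourfold rise, from the late 1930s (~47 years).
past = annual_growth_rate(5.0 / 4.0, 5.0, 47)

print(f"implied future growth: {future:.1%} per year")  # roughly 2.2% per year
print(f"implied past growth:   {past:.1%} per year")    # roughly 3.0% per year
```

A fourfold rise over the next 65 years thus implies a slower relative growth (about 2% per year) than the roughly 3% per year sustained since the late 1930s, which is consistent with the text's judgment that the upper bound is high but not extreme.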
The future CO2 emissions due to the burning of fossil fuels as well as changing land use will depend on human decisions. In any evaluation of the problem, economic considerations will play an important role, but other environmental concerns are also being expressed, as shown by the increasing use of concepts such as 'quality of life'.
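The link between emissions and the observed concentration rise quoted earlier (315 to 343 ppmv over 1958-84) can be illustrated numerically. The sketch below assumes the standard conversion of about 2.12 GtC of carbon per ppmv of atmospheric CO2, and an illustrative round figure of 100 GtC for cumulative fossil-fuel emissions over that period; the latter is an assumption made for this example, not a number taken from the text.

```python
GTC_PER_PPMV = 2.12  # ~2.12 GtC per ppmv of atmospheric CO2 (standard conversion)

def airborne_fraction(ppmv_start, ppmv_end, cumulative_emissions_gtc):
    """Observed atmospheric carbon increase divided by cumulative emissions."""
    increase_gtc = (ppmv_end - ppmv_start) * GTC_PER_PPMV
    return increase_gtc / cumulative_emissions_gtc

# Observed rise 315 -> 343 ppmv (1958-84, from the text); cumulative fossil
# emissions over the same period taken as ~100 GtC (illustrative assumption).
f = airborne_fraction(315.0, 343.0, 100.0)
print(f"airborne fraction = {f:.2f}")  # about 0.59
```

The result, roughly 0.6, is consistent with the airborne fraction of 0.60 ± 0.10 used in the CDAC assessment cited earlier in this chapter.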
Future Atmospheric CO2 Concentrations
Future atmospheric CO2 concentrations resulting from a given scenario of CO2 emissions depend on the transfer processes whereby CO2 is partitioned between the major carbon reservoirs in nature. It is well established that the crucial questions are: How rapidly is the huge storage capacity of the world ocean (including the role of marine biota and sediments) becoming available as a sink for the CO2 emitted into the atmosphere? In which way do the terrestrial ecosystems modulate the increase of atmospheric CO2 and what are the implications of man's exploitation of the terrestrial biosphere, particularly by deforestation and changing land use? The assessment shows an improved understanding of the global carbon cycle in recent years. This is particularly due to the development of more realistic models of the carbon cycle, the simultaneous consideration of the observed changes in the global distribution of all three carbon isotopes, 12C, 13C and 14C, as well as other global data for validation of carbon cycle models. Progress in documenting past changes of the carbon cycle, particularly by analysis of the air trapped in glacial ice, has also been of great significance. It is still not clear how to balance the global carbon budget, partly because of the poor knowledge of land use changes and partly because of the uncertainty about pre-industrial atmospheric CO2 concentrations. However, the remaining uncertainties about the global carbon cycle do not seriously influence the conclusions about future levels of atmospheric CO2. The main source of uncertainty is, rather, the projections of future CO2 emissions. The general conclusions can be summarised as follows:
A low CO2 emission scenario, i.e. constant or only very slowly increasing emissions during the next 4 decades and slowly decreasing emissions thereafter, would give atmospheric CO2 concentrations below about 440 ppmv, i.e. less than about 60% above the pre-industrial level. It is accordingly quite possible that a doubling of CO2 will be reached only after the year 2100. A modest increase (1-2%/year) of CO2 emissions during the next 4 decades (and a decline thereafter) will lead to about a doubling of the CO2 concentration (i.e. 550 ppmv) towards the end of the next century. The upper bound scenario implies CO2 doubling by about 2050, and it is accordingly rather unlikely that the CO2 concentration in reality will double before the middle of the next century. These conclusions suggest a somewhat slower development than that discussed in the first WMO/ICSU/UNEP assessment and in the U.S. National Academy of Sciences study (CDAC, 1983). It is important to note that the more slowly the emissions increase, the smaller is the fraction that remains in the atmosphere. A warmer climate induced by increasing CO2 concentrations could, however, diminish the transfer of CO2 to the deep sea and thereby increase the future airborne fraction. Greenhouse Gases that May Affect the Earth's Radiation Budget
Carbon dioxide is not the only atmospheric constituent that is of importance for the heat budget of the atmosphere and, thus, the global temperature. Moreover, the concentrations of some of these other constituents are also observed to be changing. Water vapour is the major radiatively active constituent of the atmosphere, but systematic and significant changes of atmospheric water
vapour will primarily occur in association with changes of the climate and have been implicitly included in numerical models simulating climatic change. It has been recognised for a long time that ozone is
important in the Earth's radiation budget and, since changes of atmospheric ozone concentrations can be induced by human activity, attention should be paid to its importance in comparison with CO2. Some slight decrease of stratospheric ozone (less than 1%) seems to have been detected and this decrease might continue in the future. Ozone in the Northern Hemisphere troposphere has probably on average increased by more than 10%, presumably because of human activities. An additional increase by 10% during the remainder of this century and the first decades of the next may well occur. Methane (CH4, present average global concentration 1.65 ppmv) is increasing at a rate of about 1.2% per year, presumably due to more extensive use of paddy fields in cultivating rice, increasing numbers of domestic ruminants, biomass burning and leakage of natural gas when exploiting gas fields. Analyses of air trapped in glacier ice indicate pre-industrial concentrations probably only about 40% of those observed today. Nitrous oxide (N2O, present average concentration 0.30 ppmv) is increasing by about 0.3% per year. Pre-industrial concentrations may have been 5-10% below the values observed today. The increase is probably due to the use of nitrogen fertilisers in agriculture and forestry and also to combustion processes, and is expected to reach 0.35-0.40 ppmv towards the middle of the next century. Man is also producing a large number of gases that were not present naturally in the atmosphere, or only in small and insignificant amounts. Among these, the chlorofluorocarbons F11 and F12 will become of increasing significance for the radiation budget of the
Earth, while a series of other compounds probably will be of less importance combined than any of the gases considered separately above. An annual increase of F11 and F12 emissions by 4% does not seem unlikely, if no preventive measures are taken. It is fairly straightforward to compute the direct radiative effects of these greenhouse gases. Such computations can be made with adequate accuracy. They can also be included in General Circulation Models (GCMs). The uncertainties about the climatic changes that these greenhouse gases might cause are similar to those for CO2. Therefore, their role can be expressed approximately in terms of an equivalent amount of CO2. It is concluded that the temperature change due to
the changing concentrations of these greenhouse gases up to the present is about one half of the change calculated for the increase of the atmospheric CO2 alone (about 70 ppmv). The effect of these other greenhouse gases is equivalent to an additional increase of CO2 by 40-50 ppmv. The concentrations of several of these gases are increasing more rapidly than that of CO2. If the rates of increase given above continue during the next 50 years, one finds that such a scenario would be equivalent to a doubling of atmospheric CO2 concentrations well before the middle of the next century. Chlorofluorocarbons would then become the most important gases in addition to CO2, if no preventive measures are taken. On the other hand, their regulation would be easier to achieve than the limitation or reduction of CO2 emissions. Modelling of Future Global Climate
There are many factors that are known to cause changes of global climate. Global climate can be affected by changes of solar energy output, changes of the Earth's orbit around
the Sun, volcanic eruptions, changes of atmospheric composition, changes of cloudiness, the Earth's albedo and atmosphere-land-ocean interactions. These factors can act individually or together. In spite of recent theoretical advances and quite detailed understanding of many processes, it has not been possible to establish unequivocally the causes of documented past climatic fluctuations. It seems likely, however, that changes of the incident solar radiation caused by the slow variation of the characteristics of the Earth's orbit around the Sun have played a significant role on the glacial/interglacial time scale. Estimates of the effects of changes in atmospheric composition on climate are made using models of the climate system. Since the climate system is very complex, various approximations and simplifications have been made in the development of such models. Different modelling approaches have been adopted, giving rise to a range of climate models from simple zero-order models, simulating global mean temperature changes, to comprehensive three-dimensional general circulation models of the atmosphere-land-ocean system. Only the latter are capable of providing adequate information for evaluating the characteristic features of future climatic change, that is, not merely average changes for the Earth as a whole, but also some details regarding regional climatic change and changes in surface hydrological processes. Even though significant advances have been made in modelling the climate system, present models are not yet able to simulate reliably the many processes that govern the regional climate. However, comparison of model computations with observed features of the general circulation of the atmosphere, particularly the model capability to reproduce
seasonal variations of weather and climate, has given us some confidence in available results. An evaluation of the large number of results from climate models leads to the conclusion that the global equilibrium temperature change expected from increases of CO2 and other greenhouse gases equivalent to a doubling of the atmospheric CO2 concentration is likely to be in the range 1.5-5.5°C. The largest sources of uncertainty in modelling global average temperature change appear to be the levels of feedback from clouds, ice-albedo and, possibly, lapse-rate and water-vapour changes. Moreover, storage of heat in the oceans delays the warming expected for an equilibrium response to carbon-dioxide-induced warming and may also significantly modify the geographical distribution of climatic change. The role of the oceans is important but also uncertain. Model results suggest that the global warming resulting from increases in greenhouse gases to date is in the range 0.3°C to 1.1°C. Continental-scale or regional-scale climatic change cannot yet be modelled realistically but a few tentative conclusions can be drawn. All three-dimensional model results suggest that the largest temperature increases would occur in the high latitudes in the fall and winter seasons, and there is unanimous agreement that the stratosphere will cool. There is also some evidence for mid-latitude, mid-continental drying during the summer. Detecting the Climatic Effects of CO2
An analysis of surface temperature data since the middle of the last century shows that both hemispheres experienced a general warming from the late 19th century until about 1940 and a cooling until the mid-1960s. Since then the globe as a whole appears to have warmed, with
a delay in this warming trend in the Northern Hemisphere. Detection of a CO2-induced warming in the observational record has become a high priority issue. The detection problem can be viewed in terms of the concept of the signal-to-noise ratio, i.e. one can claim to have detected a change in climate once the signal (e.g. the increasing temperature) has risen appreciably above the background noise level (e.g. natural variability of temperature). Past records show that climate has varied naturally on time scales from a few years to centuries, millennia and longer. Considering also the warming due to the increase of other greenhouse gases, it seems that the observed temperature change during the last 100 years is in the lower part of the projected range. It is not possible to conclude whether this might be due to a more modest role of the positive feedback mechanisms as described in the climate models or to a delay of a warming due to the inertia of the oceans. A major problem in detecting the climatic effects of CO2 increases is to explain the medium to longer timescale (decadal or greater) fluctuations in the observed temperature record. Until the comparatively rapid global warming of 1920 to 1940 and the cooling between 1940 and the mid-1960s, in particular, have been adequately explained, claims regarding the detection of CO2 effects can be easily criticised. Effects of a Warming Atmosphere on Sea Level
Changes of global temperature affect the various components of the hydrological cycle in different ways and with different response times. For example, changes in precipitation patterns over land affect runoff from rivers
and glaciers into the sea. Ocean waters expand when warmed. Catastrophic collapse of ice sheets has also been proposed as a potential risk causing a comparatively rapid rise of sea level, i.e. 1-2 cm/year. Measurements of sea level changes since early this century show an average rate of rise for global sea level of 14 ± 5 cm per century. If the observed (modest but significant) correlation between sea level and air temperature during this time is assumed to be valid in the future, it is estimated empirically that a global warming of 1.5°C to 5.5°C would lead to a sea level rise of 25 to 165 cm. Probably the major contributing factor to such a sea level rise would be the thermal expansion of the ocean water. The small glaciers would probably decrease in extent and might contribute to a rise of sea level by 20 ± 10 cm in the course of a century for a warming of 3.5 ± 1.0°C. It seems likely that the Greenland ice sheet would decrease, but probably enhanced precipitation (snow) in the Antarctic would increase the accumulation and balance approximately the net flux of ice and water from Greenland to the sea. Better oceanographic knowledge is required to assess the influence of any climate warming on the stability of the West Antarctic ice sheet. One possibility is that global warming could warm ocean waters and change ocean circulation sufficiently to cause a catastrophic collapse of the West Antarctic ice sheet following a global warming of 3-4°C. Another possibility is that increased precipitation over the ice sheet may outweigh any effect of increased ocean temperatures, which may be confined mainly to the top 100 m, in which case the volume of West Antarctic ice may slowly increase in a stable manner. Even if a collapse started, it is likely to take several hundred years
to raise sea level by, say, 5 m, corresponding to the possible discharge of the West Antarctic ice volume into the ocean. Effects on Global Ecosystems
Global vegetation can be affected in two general ways: by the direct effects of higher CO2 concentrations on plant growth, and by changes in climate. At the scale of individual plants, higher CO2 concentrations have been shown consistently in short-term laboratory experiments to stimulate growth and yield of both C3 and C4 plants. This results principally from enhanced net photosynthesis, and from increased water use efficiency through a reduction of the stomatal aperture in the leaf and, hence, transpiration. However, there are large uncertainties involved in extrapolating these experimental results to longer time scales and larger area scales. A comparison of global vegetation and climatic changes of the distant past leaves little doubt that a climatic change of the order of magnitude indicated by climate models for a doubling of atmospheric CO2 could potentially have profound effects on global ecosystems. However, the prediction of the direction, magnitude and rate of changes in ecosystems requires reliable estimates of climatic change at the regional scale, and such estimates are currently unavailable. Despite the inability to make predictions, sensitivity analyses can produce useful information for judging the possible direction and magnitude of effects for given changes in CO2 levels or climate variables, and thus for identifying specific regions and environmental changes which may warrant policy attention in the future. There is ample opportunity to test the sensitivity of global ecosystems to changes in climate variables by using
scenarios of regional climatic change derived from GCM results or the instrumental record, or simply arbitrary climatic changes. In this volume, the emphasis is placed on agriculture and forests. From a global perspective, geographic differences in agricultural regions have important implications for assessing the effects of increased CO2 and climatic change. For example, rainfall is the principal constraint on agriculture in the tropics and sub-tropics, whereas in the temperate and higher latitudes, temperature has a relatively greater influence.
In many developing countries of the lower latitudes, advances in food production have been achieved in large part through the expanded use of marginal lands, a trend which may be increasing the sensitivity of agriculture in these regions to climatic change or variability. In other areas, notably most of the major grain-producing countries of temperate latitudes, food production has risen principally through intensification and, hence, increases in average yields within the core crop regions. An unresolved issue here is whether the technological applications, which have made these long-term yield gains possible, have increased or decreased the sensitivity of crops to short-term climatic variations. Another important trend has been the expanding volume of global grain trade during the past several decades, a trend which has increased interregional reliance for food production and distribution. Any adverse climatic change in the core crop regions of the temperate zones, where both major centres of supply and demand are located, could have large socio-economic impacts on the developing countries of the lower latitudes, whose purchasing power cannot always compete during times of scarcity. In this respect, the world has become
increasingly interconnected in the face of increasing CO2 and climatic change which, itself, is global in nature. For the forest ecosystems of the world, three major issues need to be addressed, according to scale. First, at the microscale, the issue is how changes in processes such as photosynthesis, stomatal conductance or development may modify plant growth and, ultimately, forest productivity. The effects of changes in CO2 concentrations or climate variables (like temperature) have to be analysed with respect to factors that currently limit productivity (like nutrients or moisture supply). Second, at the mesoscale, the issue is how the dynamic interaction and competition between ecosystem species would be affected, with consequences for local forest composition. Third, at the macroscale, the issue is how the size and areal extent of the world's forests may be altered. The largest spatial response could be expected from changes in temperature and precipitation at the cold or semi-arid margins of forest extent. At this scale, the lag time between climatic change and the response of forests could range from decades to hundreds, or even thousands, of years, based on evidence from the last glaciation. Approaches to Assessing Impacts on Global Agriculture
In the context of agriculture, four broad approaches to
assessing the impacts of increasing CO2 and climatic change address the major issues noted above: (a) crop impact analysis; (b) marginal-spatial analysis; (c) agricultural sector analysis; and (d) historical case studies.
Crop impact analyses are concerned specifically with the effects on plant growth and crop yields. Considering the direct effects of higher CO2 concentrations in the absence of climatic change, it is estimated from laboratory
experiments on individual plants that a doubling of CO2 from 340 to 680 ppmv could result in a 0-10% increase in the growth and yield of C4 crops (e.g. maize, sorghum, sugarcane) and a 10-50% increase for C3 crops (e.g. wheat, soybean, rice), depending on the specific crop and growing conditions. These positive effects are obtained under most environmentally stressful as well as non-stressful conditions, and would therefore benefit both the environmental margins and the core of crop regions. Greater yield benefits would be expected to accrue to those regions of the world where C3, rather than C4, crops predominate. Considering the sensitivity of crop yields to climatic change without including the direct CO2 effect, crop impact analyses have focused largely on grain yields in temperate and higher latitudes, to the neglect of the tropics and subtropics. From studies using various types of crop-climate model and climatic change scenarios, it is estimated that, with no precipitation change, a warming of 2°C might reduce average yields of wheat and maize in the midlatitudes of North America and Western Europe by 10 ± 7%, assuming instantaneous warming and no change in cultivars, technology, or management. These yield reductions would be offset by wetter conditions and exacerbated by drier conditions. These estimates pertain to core regions; in contrast, average yields at the cool margins of cereal production, for example, might well benefit from a lengthening of the growing season and a reduction of damaging frosts.
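The cited ranges can be combined in a rough bounding calculation. The sketch below is purely illustrative: the text itself stresses that the real effects are interactive and non-linear, not simply additive, so a multiplicative combination of the climate-only and CO2-only ranges gives at best an order-of-magnitude bracket.

```python
# Illustrative ranges cited in the text (not a crop model):
# a 2 degC warming with no precipitation change changes mid-latitude
# wheat/maize yields by about -10 +/- 7 per cent; a CO2 doubling
# (340 -> 680 ppmv) raises C3 crop yields by roughly +10 to +50 per cent
# and C4 crop yields by 0 to +10 per cent.

def combined_range(warming_pct, co2_pct):
    """Naively combine the two percentage ranges multiplicatively.

    This is only a bounding illustration; the real interactions are
    non-linear and cannot be captured by such a simple combination.
    """
    lo = (1 + min(warming_pct) / 100) * (1 + min(co2_pct) / 100) - 1
    hi = (1 + max(warming_pct) / 100) * (1 + max(co2_pct) / 100) - 1
    return round(lo * 100, 1), round(hi * 100, 1)

warming_effect = (-17, -3)   # -10 +/- 7 % for a 2 degC warming
c3_co2_effect = (10, 50)     # direct CO2 effect, C3 crops
c4_co2_effect = (0, 10)      # direct CO2 effect, C4 crops

print("C3 combined range (%):", combined_range(warming_effect, c3_co2_effect))
print("C4 combined range (%):", combined_range(warming_effect, c4_co2_effect))
```

Even this crude bracket shows why the sign of the net effect is uncertain: the C3 range spans both losses and substantial gains.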
Marginal-spatial analyses examine the margins of production where conversion to other crops (or genotypes) or land uses is most likely to take place. A very limited number of studies in the mid- and high-latitudes suggest potential shifts in the boundaries of cereal regions of the
order of magnitude of several hundred kilometres per °C of change (assuming that existing crop regions are largely climatically determined and optimally located). Other studies of high-altitude locations with steep environmental gradients suggest potential altitudinal shifts of more than a hundred metres per °C of change. While these estimates are highly uncertain, any such shifts at the margins would certainly modulate the effects of climatic change on regional crop yields and production.

Table 1. Relative effects of increased CO2 on growth and yield: a tentative compilation (sign of change relative to control CO2 under similar environmental constraints). The individual entries are not legible in the source; the conditions compared were:
- Under non-stressed conditions
- Under environmental stress: water (deficiency); light intensity (low); temperature (high); temperature (low)
- Under mineral nutrient constraints: nitrogen (deficiency); phosphorus (deficiency); potassium (deficiency); sodium (excess)
Key: ++ = strongly positive; + = positive; o = no effect; ? = not known or uncertain.
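The order of magnitude of the boundary-shift estimates above can be checked with simple gradient arithmetic. In the sketch below, the gradient values are illustrative assumptions, not figures from the text: a mid-latitude poleward temperature gradient of roughly 0.5-1.0°C per 100 km, and a lapse rate of about 6.5°C per km of altitude.

```python
# Back-of-envelope check of climatic-boundary shifts per degree of warming.
# Assumed gradients (illustrative): 0.5-1.0 degC per 100 km poleward,
# and a lapse rate of 6.5 degC per km of altitude.

def poleward_shift_km(warming_degc, gradient_degc_per_100km):
    """Horizontal displacement of a climatic boundary for a given warming."""
    return warming_degc / gradient_degc_per_100km * 100

def altitudinal_shift_m(warming_degc, lapse_rate_degc_per_km=6.5):
    """Vertical displacement of a climatic boundary for a given warming."""
    return warming_degc / lapse_rate_degc_per_km * 1000

for g in (0.5, 1.0):
    print(f"gradient {g} degC/100km -> {poleward_shift_km(1, g):.0f} km per degC")
print(f"altitudinal shift: {altitudinal_shift_m(1):.0f} m per degC")
```

The results (one to a few hundred kilometres, and somewhat over a hundred metres, per °C) are consistent with the order of magnitude quoted in the text.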
Agricultural sector analyses and historical case studies examine the range of environmental, agricultural and socio-economic impacts and explore the ways in which agricultural systems adjust to climatic change and variability. There are many feedback mechanisms that can enhance or diminish the potential impacts of
environmental changes on crop yields and food production. A limited number of studies, using linked regional models or global agriculture and trade models, suggest that, in many regions of the world, agriculture would readjust crop yields and food production to a changing climate. Over the long term, yields and production in such regions may be more sensitive to technology, price or policy changes than to climatic changes, and these factors are largely manipulable whereas climate is not. In past studies the pragmatic approach has been to consider the impacts of climatic change separately from the primary responses of plants to increased CO2. But it is clear that the effects are interactive and non-linear, not simply additive, and should be studied accordingly. In any case, there exist large uncertainties in extrapolating laboratory results to field conditions.
Furthermore, most studies of agricultural impacts have focused on average climatic changes. However, it has been demonstrated in several instances that small changes in average climate can result in relatively large shifts in the frequencies of climatic extremes like droughts. These shifts may be equally, if not more, important to farmers than long-term changes in mean climate, especially in the marginal lands of many developing countries where climatic extremes take heavy economic and human tolls. To the degree that agriculture in such regions can be assisted in developing adjustments that buffer the ill effects of present climatic variability, the better prepared it will be to adapt to any adverse effects of future climatic change, should they occur. In general, given the uncertainties in regional-scale estimates of climatic change and the numerous deficiencies in methodologies of impact assessment, there is presently
no firm evidence for believing that the net effects of higher CO2 and climatic change on agriculture in any specific region of the world will be adverse rather than beneficial.

The Response of Global Forests to Increasing Atmospheric CO2 and Climatic Change
The forests of the Earth constitute a complex system with many possible responses, both to the direct effects of an increase in atmospheric CO2 concentration and to the possible changes in climate. These responses may originate from phenomena that operate on very different scales of time and space. In general, formidable difficulties are encountered in the 'scaling-up' of the short-term physiological and biochemical response of leaves and individual plants to estimate the intermediate and long-term responses of forests. The difficulties arise from the large uncertainties involved in the methods of extrapolation and from the complex interactions that occur at larger scales. The two uncertainties are presently large enough to preclude meaningful estimates of the effects on forests of higher CO2 concentrations and climatic change except in the most general way.
With respect to the direct effects of CO2, these problems of scaling-up are compounded by the lack of experimental evidence for relevant forest species, particularly for plants that have been allowed to acclimate to enhanced CO2 concentrations over one or more growing cycles. Although higher concentrations of CO2 have been shown to increase the growth rates of individual trees in controlled conditions over the short term, it is highly uncertain whether such effects would be sustained and would lead to increased productivity in actual forest environments over the long term. In uncontrolled environments, the direct CO2 effects are complicated by
micrometeorological differences in the degree of coupling between forests and atmosphere (within as well as between forest systems), and by species competition and interaction. If, indeed, elevated CO2 concentrations do result in long-term growth enhancement, increases in productivity would be more likely to occur in commercial forests than in mature forests in which the capacities for increased carbon storage are more limited. Direct experimentation at this scale, however, is largely impracticable. In order, therefore, to assess the responses of forest systems to both higher CO 2 concentrations and changes in climate, experimental studies must be augmented by empirical observation and simulation modelling. With respect to the effects of climatic change, empirical climate-vegetation models and forest simulation models have been used to assess the responses of forests at scales ranging from a single point in a forest to an entire continental system. In general, the results of a limited number of such studies suggest that climatic changes of the order of magnitude predicted by climate models for a doubling of atmospheric CO2 are potentially sufficient to produce substantial intermediate and long-term changes in the composition, size, and location of the forests of the world. The natural forests of the high latitudes in general and the boreal forests in particular, appear sensitive to predicted temperature changes, and it is at these latitudes that climate models predict the largest warming to occur as a result of increased concentrations of greenhouse gases. Warmer conditions could possibly lead to large reductions in the areal extent of boreal forests and a poleward shift in their boundaries. The forests of the tropical and subtropical zones, on the other hand, would probably be
more sensitive to changes in precipitation than temperature. Because of the high uncertainty regarding future changes in precipitation in the tropics, and because of the present lack of models that can be used to simulate the responses of tropical ecosystems to changes in climate variables, little can yet be said about the likely fate of these forests. Clear priorities can be set for future research in carbon-cycle modelling, better estimation of the effects of other trace substances, sea-level projections and climate modelling. In the modelling of agricultural impacts, model validation and cross-model comparisons are required. The assessment of impacts on forest ecosystems indicates an essential lack of direct observations of whole-system performance of forests under altered environmental conditions. Finally, it is clear that there needs to be a better integration of available methods and approaches in impact assessment.

Expected Climatic Change
As far as the expected climatic change is concerned, this assessment concludes that a doubling of the CO2 concentration would lead to an increase of the globally averaged surface temperature by 1.5-5.5°C. The uncertainty is considerable, but there is almost unanimous agreement that a substantial warming would occur. Some global mean surface warming most likely has occurred, but may well be partly obscured by natural climatic fluctuations. Model estimates as well as observed changes are subject to considerable uncertainty. Further, the observed warming could be attributable to other causal factors. It is not possible to state unequivocally that a CO2 or greenhouse gas signal has been detected. The observed
general increase of mean global surface temperature during the last hundred years is, however, in general accord with model results.
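The quoted 1.5-5.5°C range for a CO2 doubling can be turned into a simple scaling rule under the standard assumption that radiative forcing grows with the logarithm of the CO2 concentration. A minimal sketch follows; the 280 ppmv pre-industrial reference value is a common assumption, not a figure from the text.

```python
import math

# Equilibrium warming for a given CO2 concentration, scaled from the
# assessed sensitivity per doubling (1.5-5.5 degC in the text), assuming
# logarithmic dependence of forcing on concentration.

def equilibrium_warming(c_ppmv, c0_ppmv=280.0, sensitivity=3.0):
    """Warming (degC) relative to the reference concentration c0_ppmv.

    `sensitivity` is the warming per CO2 doubling; 280 ppmv is a commonly
    used pre-industrial reference (an assumption, not from the text).
    """
    return sensitivity * math.log(c_ppmv / c0_ppmv, 2)

for s in (1.5, 3.0, 5.5):
    print(f"2xCO2, sensitivity {s} degC: {equilibrium_warming(560, sensitivity=s):.1f} degC")
```

By construction, a doubling (280 to 560 ppmv) reproduces the sensitivity itself, and intermediate concentrations scale logarithmically between zero and that value.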
Our knowledge about past changes of climate, as well as model computations of future changes, indicates that marked regional differences can be expected. This implies, obviously, that some parts of the globe will experience significantly larger changes than indicated by the average global change, while in other regions the changes will be smaller. The spatial patterns of future changes are, however, not known. Some of the regional climatic anomalies observed today, usually attributed to the natural variability of the climate, might be due to some extent to an ongoing man-induced change.

Future Environmental Changes
The major uncertainty with regard to future environmental changes is due to our inability to predict man's future behaviour. Some limited success in foreseeing changes of global society has been achieved by recognising its inertia and by using econometric models, but these methods are not applicable to long-term changes. The uncertainties of projections of future atmospheric CO2 concentrations from a given CO2 emission scenario are considerably less than those of the emission scenarios themselves. The major uncertainty is thus related to the difficulty of projecting the future use of fossil fuels. It is also difficult to foresee how mankind will respond if there is conclusive evidence that the increasing emission of other greenhouse gases into the atmosphere is an equally important factor in causing climatic change.

The Holistic View of Man
Different kinds of emissions contribute to the same environmental problem (i.e. the greenhouse effect), while, on the other hand, one single human activity may cause different environmental problems (e.g. fossil fuel combustion emits CO2, sulphur dioxide and nitrogen oxides into the atmosphere, leading to air pollution, acid deposition, etc.).
It is becoming increasingly clear that many of these key problems are closely related both because physical, chemical and biological processes interact and also because one single human activity can contribute to several processes. It is obvious that environmental policy must be developed with such a holistic view of man and his environment.
On the other hand, the full spectrum of policy choices has usually not been emphasised. Such choices include, for example, the improvement of energy-use efficiency, the introduction of conservation measures, the accelerated development of specific technologies, the introduction of stricter environmental regulations and the development of strategies for adapting to a changing climate. There is a wide range of possible energy futures because such choices can be made.

Man's Response to Climatic Change
With regard to the impact of climatic change on natural terrestrial ecosystems, agriculture, forestry, etc., the lack of projections of regional climatic change due to increasing concentrations of greenhouse gases in the atmosphere means that more precise assessments of likely future changes on this scale cannot be made. For quite some time, research will be restricted to examining the sensitivity of these ecosystems to given climatic change scenarios. This is a serious shortcoming, since it will be difficult to tell more specifically what the implications of a change might be for particular groups of people (e.g. farmers). Those engaged in national planning will also usually be hampered by the lack of regionally specific information. Agricultural systems analysis looks at the readjustment that would be needed on the local, national
and international scale in response to climatic changes. It is clear that these would be quite different depending on the spatial scale being considered. A shift of a climatic zone might not change the total production of a particular crop in a large and well-developed country, because of possible shifts within the country of the regions where production takes place and the possible introduction of new varieties. Such alternatives are not usually available to small countries, and considerable difficulties may be encountered at the national level. The individual farmer will also be affected very differently, depending on whether or not the cultivation of crops other than the traditional ones is feasible and on whether or not migration to regions with improved climate is possible. The seriousness of the problem further depends on the rate of the expected changes, which also warrants study.
3 Prediction of Climate Change

The 1995 IPCC assessment noted that coupled climate models simulate many aspects of the observed climate with a useful level of skill at hemispheric and continental scales. Many current models use flux adjustments to avoid serious local errors or long-term drift in simulating present climate. The comprehensive diagnosis and evaluation of models, an essential part of model development, are limited by the lack of observations and data sets. Models can reproduce some of the gross features of recent climate change, though this agreement could arise from a cancellation of errors in estimates of past radiative forcing and model sensitivity. There is broad agreement between models in the qualitative simulated response to a gradual increase in greenhouse gases, though the magnitude of the global mean temperature change varies by a factor of more than 2. The changes are physically plausible. Regional changes and changes in variability (for example storm tracks) are generally inconsistent from model to model. Simulations with a simple representation of aerosols indicate that forcings other than that due to greenhouse gases need to be taken into account in both hindcasts and projections of the future.
SCIENTIFIC PREDICTION
In order to address the requirement for reducing uncertainties in the prediction of human influence on climate, the following objectives have been accepted for A1:
- To better understand and quantify the contribution of anthropogenic factors to climate.
- To predict, up to the end of the 21st century, climate change from scenarios of concentrations of greenhouse gases and anthropogenic influences.
- To provide corresponding estimates of regional climate change to the extent possible.

As a basis for meeting these objectives, A1 requires:
- Historical records of anthropogenic emissions of greenhouse gases and aerosols, and changes in land use.
- Physically based models to convert emissions to concentrations and concentrations to radiative forcing.
- Physically based climate models which incorporate the processes necessary to simulate the response to the forcing accurately.
- Evaluation of models against observations.
- Model integrations using the historical forcing record and idealised forcings.

In addition, projections of the human influence on climate in the future require:
- Scenarios of future anthropogenic emissions.
- Extension of historical integrations into the future under these scenarios.
Improvements are being made to the representation of forcing due to greenhouse gases and sulphate aerosols, and the effect of other factors is being included. Development of climate models which include the carbon and sulphur cycles is well under way. There is a general trend to use higher resolution which, combined with other improvements, is producing more realistic simulations without flux adjustments. Modelling programmes such as AMIP and CMIP are fostering improvement in the extent and standards of model evaluation. A major limiting factor is the way subgrid-scale processes are represented, particularly clouds, and the lack of satisfactory means for evaluating the correctness of such parameterisations. The main sources of uncertainty in simulating recent and future climate are the specification of anthropogenic radiative forcing and the sensitivity of climate to change. A more complete list, based on McBean et al., is given below.

Uncertainties governing the rate and magnitude of climate change and sea-level rise:
- the factors controlling the distribution of clouds and their radiative characteristics;
- the hydrological cycle, including precipitation, evaporation and runoff;
- the distribution and time evolution of ozone and aerosols and their radiative characteristics;
- the response of the terrestrial and marine systems to climate change and their positive and negative feedbacks;
- the response of ice sheets and glaciers to climate;
- the influence of human activities on emissions;
- the coupling between the atmosphere and oceans, and ocean circulation;
- the factors controlling the atmospheric concentrations of carbon dioxide and other greenhouse gases.

Additional uncertainties relate to regional patterns of climate change:
- land surface and atmospheric processes and their linkages;
- coupling between global, regional and smaller scales.

PROBLEMS OF CLIMATE CHANGE
The problems of climate change are too large to be undertaken by any one institution or nation. While there are certain principles on which modellers are in general agreement, there are many ways of implementing these principles, and it is by no means obvious which approaches are both correct and practical. For example, there is no agreed or fail-safe method for initialising climate models: the existence of several major modelling centres around the world allows a diversity of approach which is more likely to bring progress than reliance on one or two centralised institutions. Thus, one of the roles of CLIVAR is to co-ordinate the diverse approaches taken around the world so that progress is maximised. This includes facilitating the sharing of results, jointly planning future activities to avoid unnecessary duplication, and ensuring a comprehensive approach to the problem in a way that does not stifle individual creativity. CLIVAR is also responsible for providing guidance on the assignment of priorities for funding agencies so that critical areas receive adequate resources in a timely manner.
CLIVAR, through A1, can improve the prediction of climate change by:
- Ensuring the research necessary to reduce the uncertainty in climate change is carried out, in particular by ensuring active co-operation between the JSC/CLIVAR Working Group on Coupled Modelling (WGCM) and those groups of GEWEX concerned with water vapour and cloud feedbacks.
- Ensuring the research necessary to reduce the uncertainties in radiative forcing is carried out, particularly by organising collaboration between the JSC/CLIVAR WGCM and IGAC, GAIM and SPARC.
- Making clear to funding agencies the need for long-term development and evaluation of climate models. This can be done through strong recommendations made by the JSC/CLIVAR WGCM.
- Ensuring a deeper analysis of climate experiments to understand the mechanisms of the simulated response and to identify reasons for differences in model responses. A careful analysis of experiments is necessary, and the need for a fuller analysis of climate simulations should be communicated to funding agencies to ensure that financial support for such work is available.
- Ensuring the evaluation of climate models against well-documented past climates, in conjunction with CLIVAR/PAGES.
- Defining standard forcing scenarios, including improved scenarios of future emissions, in conjunction with IPCC, and making them generally available to the modelling community.

A1 is largely a modelling programme and differs from the other principal research areas of CLIVAR in that it
depends for its success on inputs from observational and modelling programmes outside CLIVAR (for example GEWEX and IGAC). The most important task for CLIVAR is to make sure that effective collaboration occurs between A1 and these outside programmes at the working level.

Input Required
The forcing factors have to be determined now, because there is no way to measure them retrospectively. Measurements should include both anthropogenic and natural forcing factors, to allow hindcasts of recent climate change for the validation of models and for the detection and attribution of climate change. The forcing factors which need to be monitored continuously and accurately are:
- greenhouse gases;
- tropospheric aerosols (sulphate, biogenic, mineral, soot);
- CO2 sinks, etc.;
- surface albedo changes (land use, etc.);
- solar output;
- volcanic aerosols.

In the case of gases and aerosols, both emissions and concentrations are required for evaluating the models which convert emissions to concentrations. Solar and volcanic forcing are included for completeness: without knowing these it is impossible to separate anthropogenic and natural climate change.

Physical, Chemical and Biological Processes in Models
The main sources of uncertainty in models used to predict and detect anthropogenic climate change are the estimates
of radiative forcing (greenhouse gases, aerosols, changes in land use, etc.) and the modification of the climate response through certain feedbacks, e.g. clouds, water vapour, snow and ice, and vegetation. Of the feedbacks, the main uncertainties arise in cloud feedbacks (uncertain to a factor of 2 or 3) and in the feedback due to aerosol effects on clouds (estimated by IPCC to cool the current climate by between 0 and 1.5 W m-2, compared to a warming of 2.4 W m-2 by the main greenhouse gases). Improving the estimates of the indirect forcing and the magnitude of cloud feedbacks should be given the highest priority. Understanding of the physical, chemical and biological processes involved in forcing and feedbacks, and their representation in models, is urgently required. This will only be achieved with considerable help from programmes outside CLIVAR, including GEWEX, IGAC, GAIM, ACSYS and SPARC, as well as parts of CLIVAR DecCen. Seeing that there is efficient and effective collaboration between the relevant parts of these programmes is the most important task for CLIVAR in assuring the success of A1 and A2. Another major uncertainty in modelling climate change is the rate at which heat is mixed from the surface into the deep ocean. This is a governing factor in determining the rate and patterns of climate change, and will require continued and extended measurements of ocean parameters.

MODELLING AND PREDICTION OF CLIMATE CHANGE
Numerical modelling is the core activity of CLIVAR ACC. In addition to simulating future climate change, activities will include model improvement, intercomparison and evaluation, sensitivity and predictability studies, calibration of simple models for scenario development, and activities to provide guidance on observing systems. Numerical experimentation under CLIVAR for A1 and A2 (and
DecCen) is being co-ordinated and monitored by the JSC/CLIVAR WGCM. It is recognised that setting priorities for modelling experiments under A1 should be co-ordinated with the IPCC process.
Progress in understanding and predicting the response of climate to anthropogenic forcing depends on improving coupled ocean-atmosphere climate models. This includes the improvement of current parameterisations and the inclusion of new processes (notably biological and chemical feedbacks). Progress in model development will be continually reviewed to assess model improvements made through numerical experimentation, and to identify areas which require further process studies. For the latter, requirements will be specified to the appropriate WCRP or IGBP projects and co-operation with these projects will be fostered.

The evaluation and validation of models is essential both in assessing the impact of model development and in establishing the credibility of simulations of climate feedbacks and climate change. A standard approach to model validation is one of the requirements for IPCC assessment. Several groups have completed long control simulations with coupled ocean-atmosphere GCMs, both to enable analysis of mechanisms of simulated decadal-to-centennial variability and to provide estimates of internal climate variability for climate detection. The same models have been used to simulate the instrumental temperature record using estimates of the historical forcing due to increases in greenhouse gases and a simplified representation of the direct forcing by sulphate aerosols for climate detection studies, and these simulations have been extended into the future using a limited number of emission/forcing scenarios.

Simulations of paleoclimate: whereas numerical weather prediction models can be tested daily by verifying
against observations, the opportunities for testing climate models on actual changes in climate are limited. Although there are large uncertainties in estimates of past forcing and climatic reconstructions, past climates provide the only opportunity of testing models against a substantial change in climate. The most promising periods are the mid-Holocene (6000 years BP) and the last glacial maximum (21,000 years BP), which have been selected by PMIP (the Paleoclimate Modelling Intercomparison Project), an ongoing joint CLIVAR/PAGES activity. Models will need to be run with idealised forcing scenarios to allow the comparison of mechanisms of change in different models. At present, there is no accepted way of initialising coupled models for prediction experiments, or for bringing them to full equilibrium with a given set of boundary conditions. Each group uses its own method to initialise its model, and it is not clear that a method which has proved successful with one model will work for another. Hence, in the initial round of comparisons, some latitude in the method of spin-up and in the initial conditions will be allowed. As techniques for bringing coupled models to equilibrium are developed, the comparisons will be more tightly controlled. Simulations of the last hundred years or so will be run to aid the detection and attribution of climate change using, where appropriate, standard forcing data sets.

Predictions
A limited number of simulations based on standard future forcing scenarios will be encouraged as contributions to IPCC. Note that the scenarios are likely to be idealised in some respects. Simpler models may have to be used to interpolate the results to specific IPCC scenarios.
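One widely used way in which simpler models interpolate GCM results to other scenarios is pattern scaling: the GCM's regional response pattern, normalised by its global-mean warming, is rescaled by the global-mean warming a simple model computes for the scenario of interest. A minimal sketch, with all numbers invented purely for illustration:

```python
# Pattern scaling sketch. The regional warmings, the GCM global mean and
# the simple-model global mean below are illustrative values, not results
# from any actual model run.

gcm_regional_warming = {"high_latitudes": 4.2, "mid_latitudes": 2.6, "tropics": 1.8}
gcm_global_mean = 2.5           # global-mean warming in the original GCM run (degC)
simple_model_global_mean = 1.6  # simple-model warming for the target scenario (degC)

# Normalise the pattern by the GCM's global mean, then rescale it.
pattern = {region: dt / gcm_global_mean for region, dt in gcm_regional_warming.items()}
scaled = {region: p * simple_model_global_mean for region, p in pattern.items()}

for region, dt in scaled.items():
    print(f"{region}: {dt:.2f} degC")
```

The key assumption, of course, is that the spatial pattern of change is roughly independent of the scenario, which is itself only an approximation.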
The predictability of climate, and the scales on which climate is predictable, remain open questions. Currently, simulated internal variability is sufficient to warrant ensemble simulations for both detection and prediction. Multi-century simulations are also needed to investigate whether or not climate change alters shorter-term climate variability such as ENSO and the North Atlantic Oscillation. The stability of climate is to be investigated using numerical experimentation. Model studies can also provide guidance on the spatial and temporal coverage necessary for monitoring climate to the accuracy necessary for climate change detection. Such studies are analogous to the observing-system studies carried out for weather prediction purposes.

The assessment of the impacts of climate change requires details of climate change on at least a regional (country) scale. At present, the usefulness of regional techniques is limited by the quality of the driving global simulations: as noted in IPCC95 (IPCC, 1996), confidence in simulated GCM changes at regional scales is low. However, these techniques need to be, and are being, developed and evaluated so that they are ready to take advantage of future improvements in global models. Thus the development of techniques to predict small-scale changes in climate is an essential part of A1. At present, the approaches are rather diverse in terms of technique and region. CLIVAR can help by promoting two kinds of studies:
- Evaluation of regional models driven by reanalysis data, to determine the accuracy of the regional response when driven by perfect boundary conditions. Comparison with runs driven by GCM data will indicate the extent to which poor boundary conditions degrade the model.
- Comparison of the accuracy of statistical downscaling techniques with regional modelling. There is a requirement for a more systematic evaluation and intercomparison of the various techniques for estimating regional climate change. (Numerical aspects of regional models are currently the remit of WGNE.)

COMPUTING REQUIREMENTS TO INVESTIGATE CLIMATE CHANGE
Climate modelling is limited by available computing resources and hence requires state-of-the-art supercomputers. Current models running with 30 levels on a 4-degree latitude/longitude grid require about one and a half hours of computing time on a single processor of a CRAY C90. Even at this resolution, the long simulations required to investigate climate change can only be carried out at a handful of centres. The increase in computing time and data volume with resolution is given below:

Resolution (degrees)    CPU Time    Data
4 x 4                   1           1
2 x 2                   10          5
1 x 1                   100         25
0.5 x 0.5               1000        125
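The scaling in the table above can be summarised by noting that each halving of the grid spacing multiplies CPU time by about 10 (two horizontal dimensions plus a shorter timestep) and data volume by about 5. A minimal sketch of this empirical relationship (the function name is our own, not from the text):

```python
import math

def relative_cost(resolution_deg, base_deg=4.0, cpu_factor=10.0, data_factor=5.0):
    """Relative CPU and data cost of a run, taking the 4-degree grid as 1.

    Uses the empirical factors from the table: each halving of the grid
    spacing costs ~10x more CPU time and ~5x more data storage.
    """
    halvings = math.log2(base_deg / resolution_deg)  # number of grid halvings
    return cpu_factor ** halvings, data_factor ** halvings

for res in (4.0, 2.0, 1.0, 0.5):
    cpu, data = relative_cost(res)
    print(f"{res:>4} deg: CPU x{cpu:.0f}, data x{data:.0f}")
```

This reproduces the factor-of-1000 CPU and factor-of-125 data increase quoted for the step from 4 to 0.5 degrees.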
Thus increasing the resolution from 4 to 0.5 degrees (which is necessary to resolve, e.g., important aspects of the thermohaline circulation in the ocean) would require a factor of 1000 increase in computing, and over a factor of 100 in data storage, to maintain the current length of experiments (centuries to millennia). At present, there are demonstrated benefits in increasing atmospheric resolution to at least 1 degree and ocean resolution to a small fraction of a degree, with appropriate increases in vertical resolution. Even now, computing resources limit the running of hierarchies of experiments which are needed to elucidate feedbacks and explore sensitivities so that the model response is well understood and the model uncertainties are quantified, and
ensembles of experiments to enable a better definition of the anthropogenic "fingerprint" for detection studies. There is also a continuing need to maintain experienced staff to improve models, to diagnose results, and to design and carry out the appropriate field studies to guide model development.
RESEARCH PROJECTS
Perfect Sensitivity Studies
Purpose: To identify mechanisms of climate change and understand the reasons for differences in model response.

Background: The IPCC95 (IPCC, 1996) assessment produced a range of a factor of two in the response of models driven by a 1% per year increase in CO2. Much of the discrepancy can be traced to differences in cloud feedback.

Feasibility: Experiments and intercomparisons need to be carefully designed in order to help increase our understanding and to identify the reasons for the differences between models. This requires a commitment of resources from individual modelling centres to fully understand the response of their own model, and to run idealised experiments and store diagnostics which enable meaningful comparison of models. The experiments required are:
1. Experiments with prescribed changes in SST (to identify cloud and other feedbacks).
2. Equilibrium 2 x CO2 experiments with mixed-layer oceans (to calibrate the climate sensitivity of the atmospheric component of coupled models).
3. Transient experiments with a 1% per year increase in atmospheric CO2 (along with 2 above, to determine the slowing effect of the oceans).
Arrangement of Simple Models
Purpose: To estimate the climate response to a variety of forcing scenarios. At present, the only practical approach is to use simpler models. In order to support the results thus obtained, the simpler models need to be calibrated against full GCMs.

Background: Past IPCC reports have relied on energy balance models to investigate the response of global mean temperature and sea-level rise to a range of emission scenarios and model sensitivities. Only in the 1995 report were attempts made to show that energy balance models could be used to interpolate and extrapolate GCM results.
Feasibility: Calibration of simpler models has continued at a low priority and in a rather ad hoc fashion. Careful calibration is time-consuming and hence has not been readily undertaken.

Standardised Forcing Scenarios
Purpose: To enable an evaluation of the model dependence of results, and identification of the reasons for differences in model response.

Background: In IPCC95, assessment of results was hampered by the use of different forcing scenarios.
Feasibility: Attempts to standardise forcing scenarios have failed in the past as institutions have chosen different forcing scenarios, perhaps due to pressure to use the "latest" emission scenarios, or limitations on resources. In the past, certain scenarios have become established over time as standards, and this process can be facilitated by JSC/CLIVAR WGCM. Data sets of historical forcing, of forcing for idealised experiments, and of forcing for future emission scenarios are required. Collaboration with IGAC and IPCC will be
required. As climate models evolve to include chemical processes, forcing scenarios will be defined by emissions rather than by concentrations.

Coupled Model Intercomparisons (CMIP)
Purpose: To allow independent evaluation of the performance of models used to make climate projections, to identify common model errors, to identify priorities for model improvement, and to guide observational studies supporting that improvement.

Background: As part of IPCC95, an intercomparison of coupled models used later in the report had to be carried out (Gates et al., 1996). This was inevitably limited in scope by the time and resources available.

Feasibility: Setting up model intercomparisons is easy, but doing them well is difficult. For example, AMIP revealed a lot about differences in model performance and about errors which are common across models, but improvement in the models themselves has been more difficult to achieve. The first stage of CMIP, to look at model control simulations, is under way, and a second stage, to look at differences in response to transient forcing, has been set up. The second stage will be repeated if and when a satisfactory method of producing uniform initial conditions can be found. Intercomparisons are particularly effective if based on idealised experiments designed to accentuate particular aspects of the model's response (e.g. oceanic inertia).

Need for Linkages
CLIVAR ACC is mainly a modelling activity and will not organise process studies. Hence it relies critically on other activities in WCRP and IGBP. The need for linkages is discussed throughout much of the text above. One of the most important roles of CLIVAR is to co-ordinate research,
especially for A1, which is largely a modelling programme dependent on other programmes for observational input. Those processes already identified as being the most critical, and the appropriate projects, are listed below, with high-priority items given first. Clear communication of needs from CLIVAR ACC (usually through JSC/CLIVAR WGCM) to the relevant projects and frequent updating of progress in the key areas are required. This may be in the form of joint workshops, invited experts and reports.

Clouds and related processes (GEWEX). This is probably the main source of uncertainty for A1. Particular attention will be paid to those processes which govern climate sensitivity, especially to determine if they are realistic and faithfully represented in models.

Aerosol forcing (IGAC). Aerosol forcing is a major source of uncertainty both for detection and prediction of climate change. CLIVAR will rely on the work of IGAC for the conversion of emissions to concentrations and the specification of radiative properties. The appropriate links for making needs known to IGAC and keeping in touch with the relevant parts of IGAC will be established. Progress in aerosol modelling intercomparisons will be monitored.

Land surface processes (GEWEX). Most of the impacts of climate change are effected through changes in near-surface climate over land. Hence accurate prediction of land surface processes is a primary concern for CLIVAR ACC. CLIVAR ACC will rely on GEWEX for model improvements in this area. Progress in PILPS and other intercomparison projects will be monitored.
Water vapour feedback (GEWEX - GVaP). There remain considerable uncertainties in the processes governing upper-tropospheric water vapour in the tropics, in how well these processes are represented in models, and in the magnitude of the water vapour feedback expected with global warming. Reducing these uncertainties is of high priority for maintaining the credibility of climate models, and there is potential to make considerable progress before IPCC 2000.

Sea ice (ACSYS). Sea ice has a strong influence on the sensitivity of climate in high latitudes, as well as on the formation of deep water.

Deep water formation (DecCen). This is important in determining poleward heat transports. The stability of the deep ocean circulation is still unresolved.

Stratospheric processes (SPARC). Changes in ozone due to human activity, volcanic eruptions and changes in solar output need to be taken into account both in A1, for prediction and understanding of past climate, and in A2, to enable separation of natural and human effects.

Biological processes (GAIM). The correct prediction of future carbon dioxide levels will need to couple atmosphere-ocean models with biogeochemical models, including biological models of the oceans and vegetation. Coupling with dynamical vegetation models will also be needed to correctly represent the changes in land surface processes.
4
Global Biogeochemical Carbon Cycle

The availability of carbon as carbon dioxide in the air, as carbonates in the earth's crust, as carbonate ions in the sea, and in the many organic compounds in land biota, the soil, and the sea is basically dependent on the fact that gases containing carbon (primarily methane and carbon dioxide) escaped from the interior of the earth during geologic ages. The biosphere, as it exists today, has evolved in a complex interplay between carbon and many other elements: primarily hydrogen, oxygen, the basic nutrient elements nitrogen, phosphorus and sulphur, and some metals in minor quantities that are fundamental to the development of life. It is for this reason that the carbon cycle cannot be dealt with independently of the cycles of the other elements involved in the biogeochemical system. A treatment of the carbon cycle implies consideration of physical, chemical, biological and geological processes that proceed on very different time scales, from millions of years for the slow movement of the earth's crust to weeks and days for the rapidly changing scene of the surface waters of the sea. The slow processes are determined by the time required to accomplish significant changes in the earth's crust, for instance. It is of note,
however, that in geology the so-called 'slow processes' frequently represent extended periods of quiescence which are briefly interrupted by active pulses of relatively short duration. In contrast, rapid transfers basically depend on the characteristic motions of air and water. For example, the most slowly varying components of the hydrological cycle are deep ground-water, the amount of which is small, and the cryosphere, i.e. ice and snow, the time scale of which primarily depends on the evolution of the Antarctic and Greenland ice sheets, i.e. times of at least 10,000 to 100,000 years. The stability of the biosphere is to a considerable extent dependent on how the rather rapid processes interact. These problems are of course fundamental in themselves, but are of particular relevance today, as man has become a prime biogeochemical factor, perturbing the natural balance. Man's intervention consists primarily in the burning of fossil fuels and the changing of land use. The observed changes in the atmospheric concentration of carbon dioxide show directly that these interventions are significant on a global scale, and simple model computations indicate that very marked changes, with possibly far-reaching consequences for the climate of the earth, may take place during the next century. A far better understanding of the key processes that control the carbon cycle is, therefore, of great importance in assessing the consequences of man's continued interference with the carbon cycle on a global scale. The modelling approach was begun about twenty years ago, and many important developments have been published since then. The following presentation is, to a considerable extent, based on these results. A more detailed treatment of the ocean circulation and the land biota is presented by Bjorkstrom. Additional light will be cast on
the question of the relative importance of the oceans and the land biota as sinks for atmospheric carbon dioxide.

Major Reservoirs

The Atmosphere
Total Concentration
The present concentration of carbon dioxide in the atmosphere is about 329 ppm (by volume). Since accurate measurements began in 1957, an increase of about 17 ppm has been recorded. This result is based on measurements at Mauna Loa in Hawaii, at the South Pole, and from aircraft flights at middle and high latitudes in the northern hemisphere. It has long been assumed that the preindustrial CO2 level, i.e. before 1850, was between 290 and 295 ppm. This value has been determined by: extrapolation back in time, based on the facts that the observed increase of carbon dioxide in the atmosphere during 1957-75 was 54% of the emissions due to fossil fuel combustion, that this ratio was the same from 1850 to 1957, and that the use of fossil fuels was that estimated by Keeling; and an assessment of the reliability of measurements during the latter part of the last century and the determination of the most likely value. We note, however, that man has also been emitting carbon dioxide by deforestation and expansion of arable land, and that the measurements in the middle of the last century were quite unreliable. The atmospheric carbon dioxide concentration may, therefore, have been lower, and the degree of uncertainty is most likely at least 10 ppm.
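The back-extrapolation described above can be sketched numerically. Only the 54% airborne fraction and the 17 ppm rise are taken from the text; the cumulative 1850-1957 emission total below is a hypothetical round number standing in for Keeling's estimate:

```python
# Airborne fraction: 54% of fossil-fuel emissions stayed in the atmosphere.
airborne_fraction = 0.54
observed_rise_ppm = 17.0                 # 1957-75 rise, from the text

# Implied fossil-fuel emissions over 1957-75, in ppm-equivalent.
emissions_ppm = observed_rise_ppm / airborne_fraction
print(f"implied 1957-75 emissions: {emissions_ppm:.1f} ppm-equivalent")

# Extrapolating back: if cumulative 1850-1957 emissions were E ppm-equivalent
# (E below is a hypothetical stand-in, not Keeling's figure), the
# preindustrial level is the 1957 concentration minus 0.54 * E.
conc_1957 = 329.0 - 17.0                 # approximate 1957 level from the text
E = 35.0                                 # hypothetical cumulative emissions
preindustrial = conc_1957 - airborne_fraction * E
print(f"implied preindustrial level: {preindustrial:.0f} ppm")
```

With this illustrative E the result falls inside the 290-295 ppm range quoted above.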
The data from the South Pole and Mauna Loa show oscillations, probably associated with the climatic variations known as the Southern Oscillation, which is a quasi-periodic oscillation of the major wind systems over the Central and South Pacific.
Isotopic Composition of Carbon

There are three carbon isotopes in nature: 12C, 13C, and 14C, with approximate relative abundances of 1, 10^-2, and 10^-12. The former two are stable, while 14C is produced in the atmosphere by cosmic radiation reacting with nitrogen, and decays with a half-life of 5570 years. Since 1952, 14C has also been emitted into the atmosphere by nuclear bomb tests and from nuclear power plants. The 13C content of the atmosphere is 7 per mille less than the PDB 13C standard, while the corresponding 13C depletion for modern wood and fossil fuel carbon is 25 per mille. Analyses of 13C in tree rings show, however, a decrease of 1-1.5 per mille over the past 200 years. Since the fractionation between 12C and 13C during the assimilation process has remained the same, this change may be interpreted as a change of 13C in the air. The fractionation factor is slightly dependent on temperature, but this effect can be corrected for.
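The 5570-year half-life quoted above fixes the decay law used throughout radiocarbon reasoning in this chapter. A minimal sketch:

```python
# Radioactive decay of 14C with the (Libby) half-life quoted in the text.
HALF_LIFE = 5570.0  # years

def fraction_remaining(age_years):
    """Fraction of an initial 14C inventory surviving after age_years."""
    return 0.5 ** (age_years / HALF_LIFE)

print(f"after one half-life: {fraction_remaining(5570):.3f}")
print(f"after 10,000 years:  {fraction_remaining(10000):.3f}")
```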
Similarly, a decrease (the 'Suess effect') of about 2% was recorded in the 14C content of wood from the middle of the last century until 1954. This is due to the dilution caused by the CO2 emitted by fossil fuel combustion, which contains no 14C. Since 1954, large amounts of 14C have been added to the atmosphere because of nuclear bomb testing, causing a marked increase in 14C concentration until 1962, at which time atmospheric testing almost ceased. Since that time, the 14C concentration has been gradually decreasing.
The Oceans
The Ocean Circulation

On the whole, the oceans are stably stratified, and almost all the energy that maintains motions in the sea is supplied at the surface. Only when exceptionally dense water is formed at the sea surface, by cooling or by the increase in salinity when freezing occurs, can it sink to appreciable depths. This occurs principally around the Antarctic and in a limited region in the Iceland-Greenland area. These areas are the only sources for the very large volume of deep water in the world oceans. At the circumpolar convergence zone in the southern seas, and also on the polar side of the major ocean currents in the North Pacific and North Atlantic, the water sinks to intermediate depths (100-1000 m) and moves towards the equator. Similarly, the rather warm but saline Mediterranean water penetrates the Atlantic Ocean to intermediate levels. The sinking of cold water is balanced by very slow upward motions which, to a first approximation, are evenly distributed over the entire ocean. Although the stratification is stable, some vertical mixing also occurs. The vertical distributions of 14C and oxygen have been used to assess the slow upward motion (w) and the vertical diffusivity (K), yielding values of w = 1 cm/day and K = 1 cm2/sec.
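The upwelling velocity quoted above implies a turnover time for the deep ocean of order a thousand years. A rough sketch; the mean ocean depth is an assumed round figure, not from the text:

```python
# Time for the slow upward motion w = 1 cm/day to cycle water through
# an assumed mean ocean depth of ~3800 m (hypothetical round number).
w_cm_per_day = 1.0          # upwelling speed, from the text
depth_m = 3800.0            # assumed mean depth

days = depth_m * 100.0 / w_cm_per_day   # depth in cm / upwelling speed
years = days / 365.0
print(f"deep-water replacement time: about {years:.0f} years")
```

The result, about a millennium, is consistent with the 14C ages of deep water discussed below the dissolved-carbon section.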
Dissolved Carbon Compounds in the Sea

The concentration of inorganic carbon in sea water varies from about 1.95 mg-atoms/l in tropical surface water to about 2.35 mg-atoms/l in the deep water. The total amount of inorganic carbon in the sea is about 35,000 x 10^15 g. The ionic composition of sea-water is primarily determined by the carbonate and borate systems:
CO2 + H2O ⇌ H+ + HCO3-
HCO3- ⇌ H+ + CO3^2-
H3BO3 ⇌ H+ + H2BO3-

The amount of dissolved carbon dioxide in sea-water, which is the only species of the carbonate system in direct exchange with atmospheric CO2, is only about one per mille of the total amount of dissolved inorganic carbon. Because of this fact, and the necessary hydration and diffusion of the CO2 molecules through a thin boundary layer at the sea surface, the rate of transfer and exchange between the atmosphere and the sea is limited. Therefore, the turnover time between the atmosphere and the surface layers of the sea is 5-8 years. Quinn and Otto have pointed out that a more detailed treatment of the diffusive layer at the sea surface is necessary to describe the exchange processes properly, but the results of Bolin are on the whole still valid as quoted. This turnover time for atmospheric CO2, relative to the ocean, agrees well with the observed decline in the 14C content of atmospheric CO2 since 1962. For a more detailed discussion, the reader should refer to Machta. The dissociation of sea-water, its ion composition, and the amount of dissolved CO2 gas depend on the total amount of inorganic carbon in the sea-water, and the CO2 partial pressure increases relatively much more rapidly than the total amount of inorganic carbon. The ratio of these relative changes, called the buffer factor, is approximately 9 for a carbonate-borate solution such as sea-water at the present atmospheric partial pressure of 330 ppm. It approximately doubles if the partial pressure is doubled. This fact makes the oceans much less of a potential sink for CO2 emissions into the atmosphere than one would expect from the large size of the oceanic carbon pool.
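The buffer factor described above relates relative changes: a given fractional rise in total dissolved inorganic carbon (DIC) produces a roughly nine times larger fractional rise in the CO2 partial pressure. A minimal numeric sketch, assuming the factor stays constant over a small perturbation:

```python
# Effect of the buffer factor B ~ 9: a relative increase in DIC raises
# the CO2 partial pressure ~9 times as much, in relative terms.
B = 9.0
pco2 = 330.0         # ppm, present atmospheric partial pressure (from text)

dic_increase = 0.01  # suppose surface-ocean DIC rises by 1% (illustrative)
pco2_increase = B * dic_increase
print(f"1% DIC rise -> {100 * pco2_increase:.0f}% pCO2 rise "
      f"({pco2 * pco2_increase:.1f} ppm at equilibrium)")
```

This is why the ocean's enormous carbon pool absorbs excess atmospheric CO2 far less readily than its size alone would suggest.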
In addition to the inorganic carbon, the oceans contain about 1000 x 10^15 g of organic carbon. This organic material is the decay product of life in the sea and also, to some extent, of the terrestrial biota carried to the ocean by rivers. An understanding of the turnover processes requires knowledge of the chemical composition of the dissolved organic matter, and a more detailed study than has been carried out so far.
The Content of 14C in Sea-water

The radioactive carbon isotope 14C, contained in some atmospheric CO2 molecules, is transferred to the sea. Due to the different molecular weights of 12CO2 and 14CO2, the transfer rates are not the same for the two molecules. This fractionation results in a 14C/12C ratio in surface water which is 0.95 of that of atmospheric CO2. The 14C activity is, therefore, often expressed as a deviation (in per mille) from 0.95 of a standard activity, denoted by Δ14C. Due to the comparatively slow turnover of the sea and radioactive decay, the 14C/12C ratio of deep water is considerably lower. The lowest value, -230 per mille, corresponds to an age of about 1800 years. Surface waters have been significantly influenced by the transfer of bomb-produced 14C to the sea since 1954. Positive Δ14C values have been measured down to layers below 400 m, particularly in the North Atlantic, which indicates that the influence of bomb-produced 14C may extend to these deeper layers. Small changes have also been observed in the Pacific Ocean. The youngest deep water is found in the eastern North Atlantic, with the age of the water increasing southwards. Generally, Atlantic deep water is considerably younger than Pacific deep water, where the age increases northwards. These ages indicate the direction of flow of the deep water from the main formation areas in the North Atlantic and around the Antarctic continent.
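The deep-water age quoted above follows from the decay law with the 5570-year half-life given earlier. A sketch of the bare conversion, ignoring the reservoir and fractionation corrections applied in practice (which bring the figure closer to the value quoted in the text):

```python
import math

HALF_LIFE = 5570.0  # years, as quoted earlier in the text

def age_from_depletion(delta_per_mille):
    """Apparent 14C age from a depletion expressed in per mille."""
    fraction = 1.0 + delta_per_mille / 1000.0   # surviving activity ratio
    return -HALF_LIFE * math.log(fraction) / math.log(2.0)

# -230 per mille gives roughly two thousand years with this bare formula,
# the same order as the ~1800-year deep-water age quoted in the text.
print(f"{age_from_depletion(-230):.0f} years")
```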
A few 14C determinations of dissolved organic compounds are available. They indicate an average 14C age of about 3000 years. Dissolved organic matter in surface water may be considerably younger, but little data is available.
The Global Distribution of Primary Productivity in the Ocean

Phytoplankton growth is influenced by a number of factors, such as light intensity, availability of mineral nutrients, turbulent mixing, secondary production, and adaptation processes. In certain areas of the sea, the phytoplankton exhibits strong seasonal variability, the predominant feature being the outburst in spring. The global distribution of primary production is determined by the availability of nutrients (nitrogen and phosphorus) in the uppermost tens of metres, where there is sufficient light for effective photosynthesis. In areas of intense upwelling, the rate of photosynthesis reaches values of 500-1000 g C/m2 year, while in the 'desert' areas of slow sinking motion, where only small amounts of nutrients are available, the rate may be as little as 10% of this value or even less. For the following discussion, we adopt the values 45 x 10^15 g C/year for the total primary production in the oceans and 3 x 10^15 g C for the biomass. This yields a ratio of 15 turnovers per year, corresponding to a turnover time of 24 days for living organic matter in the sea. For the development of a global carbon cycle model, we need to describe in some detail photosynthesis in the oceans and its dependence on environmental parameters, while keeping the number of variables small. A first attempt of this kind was made by Riley et al., and model work has progressed ever since. Numerous field experiments have been carried out to obtain as complete descriptions as possible of the relevant processes, for example with regard to season, or diverse marine habitats. Reasonable agreement between theory and
observation has been obtained to within one order of magnitude. Progress is slow because of the intricate nature of life cycles and their reciprocal dependence on a multitude of environmental factors. As an illustration, some recent results from an experiment in the North Sea, the Fladen Ground Experiment (FLEX 76), will be summarised. In other parts of the oceans, with different climate and hydrography, the ecological interplay will, however, be quite different.
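The 24-day turnover time adopted above is simply the standing biomass divided by the annual production:

```python
# Turnover time of living organic matter in the sea, using the values
# adopted in the text: production 45e15 g C/yr, standing biomass 3e15 g C.
production = 45e15   # g C per year
biomass = 3e15       # g C

ratio = production / biomass           # turnovers per year
turnover_days = 365.0 / ratio
print(f"{ratio:.0f} turnovers/year -> turnover time {turnover_days:.0f} days")
```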
Fladen Ground Experiment, FLEX 76

To permit a detailed comparison between theory and a field experiment, a large number of environmental variables must be measured simultaneously with high spatial (in particular vertical) and temporal resolution. As a contribution in this regard, the Fladen Ground Experiment (FLEX 76) was carried out in 1976. This area was chosen because it is almost in the centre of the North Sea, where horizontal gradients of physical conditions are relatively small and, therefore, the vertical exchange and transport processes are of primary importance for the balance. The depth in the Fladen Ground area is sufficient (150 m) to permit the development of a dynamically and biologically decoupled upper layer during the time of the experiment, but is also sufficiently shallow to permit the downward particulate flux, measured near the bottom, to be related to the processes in the upper layer. An additional advantage of this site is that its general hydrography and productivity had previously been studied by Steele. Measurements of the physical, chemical, and biological processes were obtained in the entire water column. The spring plankton bloom was chosen for study. Full consideration was given to both horizontal and vertical variability. The experiment took place within a square of 100 km side and covered a three-month period from mid-March to mid-June 1976. The initial phase concentrated on the
analysis of factors which trigger the plankton bloom, and on observations of the rapid cell division which characterises the onset of the bloom. The main phase of the experiment consisted of an intensive data acquisition programme lasting approximately one month. During the end phase, secondary programmes were carried out as a supplement to the main programme. On account of the sensitivity of plankton growth rates, during the initial stages of an outburst, to small variations in the density structure and vertical exchange rates (Hasselmann, personal communication), a rather weak horizontal variability in these physical parameters can yield a pronounced patchiness of plankton and nutrient distributions. This makes it difficult to detect interactions between the horizontal variability of the physical and biological-chemical components of the system by direct spectral correlation techniques. To overcome some of these difficulties, a permanent station was occupied for the entire length of the experiment, while, at the same time, a drifting station followed a discrete 'plankton patch' across the FLEX box. The principal factor controlling the onset of a plankton bloom appears to be the establishment of a thermocline. Such a phase boundary allows planktonic cells to stay in the euphotic zone for an extended period of time. Following the first phytoplankton maximum, grazers (a principal species is Calanus finmarchicus) and bacteria living on organic substances appear in larger numbers. The FLEX experiment clearly demonstrates that phase boundaries in mid-water have a pronounced effect on the pattern of dissolved mineral species. The mechanism and rate of molecular exchange across a well-developed pycnocline have been studied thoroughly. Vertical advection and diffusion will move deep water through the density boundary to the surface layer. For instance, in the
main pycnocline in the Black Sea, at a water depth of about 200 m, the vertical eddy-diffusion coefficient is about 0.014 cm2/s. From this value, one can readily calculate the net upward flux of dissolved species such as hydrogen sulphide. Conversely, there is a downward diffusion gradient and transfer of, e.g., oxygen. Models using a constant eddy-diffusion coefficient for the vertical transfer of passive properties in a continuously layered medium have only limited applicability.
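The flux calculation mentioned above is Fick's first law, F = -K dC/dz. A sketch using the quoted diffusivity; the concentration gradient is a hypothetical illustrative value, not a measured Black Sea profile:

```python
# Upward diffusive flux of a dissolved species across the Black Sea
# pycnocline, from Fick's first law: F = K * dC/dz.
K = 0.014                 # cm^2/s, eddy diffusivity quoted in the text

# Hypothetical H2S profile: concentration rising by 3e-7 mol/cm^3 over
# the 10 m (1000 cm) just below the pycnocline (illustrative numbers).
dC = 3e-7                 # mol/cm^3
dz = 1000.0               # cm
gradient = dC / dz        # mol/cm^4, concentration increasing downward

flux = K * gradient       # mol/cm^2/s, directed upward
per_year = flux * 3.15e7  # approx. seconds per year
print(f"upward H2S flux ~ {per_year * 1e4:.1f} mol per m^2 per year")
```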
Biomineralisation Process

Photosynthesis transfers carbonate ions in the water to organic compounds and is thus an effective link in the carbon cycle. The bacterial decomposition of organic compounds returns carbon to the sea-water in dissolved form, either as inorganic carbonate ions or as dissolved organic compounds. Carbonate-forming organisms, in addition, remove carbonate from sea-water by extracting CaCO3. This process of biomineralisation is far more important than the inorganic precipitation of aragonite, calcite, and dolomite, which is globally insignificant. Only recently has light been shed on the rather intricate biochemical process of biomineralisation. In principle, carbonate deposition is an outgrowth of excretionary processes and related to the regulation of Ca2+ ions in the cytoplasm. The two main enzymes involved are an ATPase, as Ca2+ transport enzyme, and carbonic anhydrase; both have zinc ions at their active centre. The actual role of carbonic anhydrase in calcification does not depend on its ability to hydrate CO2, but to remove carbonic acid from the site of calcification:

Ca2+ + 2 HCO3- ⇌ Ca(HCO3)2 ⇌ CaCO3 + H2CO3

Thus, carbonate deposition involves interaction of Ca2+ and HCO3-, resulting in the formation of an unstable intermediary product, Ca(HCO3)2. As long as calcium is
not the limiting factor, the rate of formation of calcium carbonate will depend on the rate at which carbonic acid is removed from the calcification site. In the presence of carbonic anhydrase, calcification is significantly increased due to the formation of a complex between the high-pH form of the enzyme and a neutral H2CO3 molecule, which is the substrate for carbonic anhydrase. Respiratory CO2 is often mentioned as the sole or principal source of the carbonate ion in biological calcite and aragonite. Stable isotope data, however, argue strongly against this supposition. Only in a few species can a significant contribution from respiratory CO2 be demonstrated. Most invertebrates, and even the otoliths in fish, form their carbonate in, or almost in, isotopic equilibrium with the dissolved carbonate species present in their environment. In fact, the distribution of oxygen isotopes of the carbonates in many aquatic biological species reflects the water temperature of the habitat where the organisms live. The mediators in carbonate nucleation are organic templates with a strong affinity for calcium ions; they are composed of specific polysaccharides or proteins. Electron micrographs of a shell illustrate the kind of structural relationships that emerge, showing the intercalations between the organic substrate and the CaCO3 crystals. A crucial point in the present discussion is the observation that biogenic carbonate extraction in the sea is chemically a non-equilibrium process, as far as the carbonate system is concerned. It is governed by a species-specific regulation mechanism involving an organic template, based on solid-state principles. Inasmuch as carbonic anhydrase is the chief enzyme regulating carbonate formation, any inhibitors of carbonic anhydrase in the environment will affect rates of calcification. These inhibitors can be related to physical
parameters such as abnormal temperature or salinity, or to certain pesticides, PCBs or nutrient deficiency.
Calcium Carbonate Dissolution

The upper layers of the ocean are supersaturated relative to CaCO3, while CaCO3 starts to dissolve at greater depths due to the increasing hydrostatic pressure. The transition level is called the lysocline. At still greater depths, all calcareous remains exposed to sea-water completely dissolve at, and below, the carbonate compensation depth. This depth occurs around 3700 m. For instance, in the equatorial zone of the Pacific, the CaCO3 compensation depth may actually be positioned as much as 5000 m below the surface. On the whole, however, these transition levels lie considerably deeper in the Atlantic than in the Pacific Ocean. It is of interest to note that in areas where waters remain stratified for an extended period of time, as is the case in the Black Sea or some deep East African Rift lakes, the carbonate compensation depth is close to the thermo-halocline, i.e. at a water depth of 50 to 200 m. Carbonate particles that fall below this phase boundary dissolve, whereas those that settle in shallow areas above the pycnocline remain intact. Scanning electron micrographs of chemically precipitated calcites from the Black Sea, sedimented above and below the O2-H2S interface, show this process. In view of the above relationship, it has been difficult to explain why certain calcareous remains have escaped dissolution when settling below the critical CaCO3 compensation depth. Organic coatings around individual mineral grains and organisms have been suggested as a critical inhibitory factor in CaCO3 dissolution. Recently, Honjo has drawn attention to a transport mechanism by means of faecal pellets. In principle, predators 'package' their excretion product, which is full of calcareous remains,
in the form of pellets. These pellets may have a carbohydrate skin, a pellicle, or can be without a protective organic cover. Coccoliths may thus be rapidly transported from the euphotic zone, through the lysocline, to the deep ocean floor without dissolving. The sinking rate of a pellet is approximately 160 m per day, compared to about 0.15 m per day for a discrete coccolith. Obviously, the characteristics of the bottom sediments are greatly influenced by these facts. The ocean sediments may accordingly be divided as follows:

1. Those lying beneath the calcite compensation depth, where there are practically no calcium carbonate sediments. The only remains that are found are partly dissolved faecal pellets.
2. Those found between the calcite compensation depth and the lysocline, within which range there are some calcium carbonate sediments.
3. Those lying above the lysocline but below the continental shelves. These sediments are very rich in calcium carbonate, particularly at greater depth.
4. Those on the continental shelves, where the deposits are a mixture of calcium carbonate and detritus from the continents.

Broecker and Takahashi have tried to estimate the amount of calcium carbonate with which ocean water could exchange calcium and carbonate ions. They consider that only about 10 cm of sediments would be available, which is equal to the mean burrowing depth of benthic organisms. Sediments below the bioturbated zone are protected from further dissolution. The amounts of carbonate available are 2400 x 10^15 g C in the Atlantic and 1250 x 10^15 g C in both the Indian and Pacific Oceans. The total amounts of calcium carbonate sediments are,
Global Biogeochemical Carbon Cycle
79
however, several orders of magnitude larger and could be in exchange with the sea-water on geological time scales. For a dynamic treatment of the carbon cycle, it is necessary to understand the processes that determine the rate of dissolution of calcium carbonate from the sediment. The flux of calcium and carbonate ions from the carbonate grains in the sediment into the bulk ocean water depends on four distinct processes: (1) the dissolution at the interface between the grains and the pore water; (2) the molecular diffusion in the pore water towards the sediment surface; (3) the molecular diffusion in a thin layer of water above the sediment surface; and (4) the turbulent transfer into the interior of the ocean, which occurs primarily quasi-horizontally along density surfaces. It is difficult to distinguish between (1) and (2) above, since this can only be done with certainty by detailed measurements in the pore water. Different models for this transfer in the top layers of the sediment have been developed, emphasising one or the other process: the resaturation model and the stagnant-film model. In the resaturation model, the rate-limiting process is the dissolution of CaCO3 into the pore water. In the stagnant-film model, the molecular diffusion out of the sediment and through the stagnant water film at the sediment surface is rate-limiting. Takahashi and Broecker show that the observed decrease of sediment CaCO3 with increasing depth below the lysocline favours the former hypothesis, with a characteristic time of 5-10 min for the rate-limiting process.
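The contrast between the two transport modes described earlier is easy to quantify. A minimal sketch, using only the sinking rates quoted in the text (160 m per day for a pellet, 0.15 m per day for a discrete coccolith) and the nominal compensation depth of 3700 m:

```python
# Time for carbonate particles to reach the carbonate compensation depth,
# using the sinking rates quoted in the text. The 3700 m depth is the
# typical value given earlier; real depths vary between ocean basins.

def settling_days(depth_m, rate_m_per_day):
    """Days needed to sink from the euphotic zone to the given depth."""
    return depth_m / rate_m_per_day

DEPTH = 3700.0  # m, typical carbonate compensation depth

pellet_days = settling_days(DEPTH, 160.0)           # about 23 days
coccolith_years = settling_days(DEPTH, 0.15) / 365.0

print(f"faecal pellet:    {pellet_days:.0f} days")
print(f"single coccolith: {coccolith_years:.0f} years")
```

A pellet reaches the bottom in roughly three weeks, while a loose coccolith would spend decades sinking through undersaturated water — ample time to dissolve completely.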
The formation of deep-sea carbonate sediments is essentially due to the fact that calcareous planktonic remains, which form in the photic zone, settle to the sea floor. The deposition rate can be determined from the 14C profile in the sediment at levels above the lysocline, where no appreciable dissolution takes place. The sedimentation rates are in the range of 1 to 10 cm per 1000 years. In terms of carbon deposition, these values correspond to 0.5-5 x 10^15 g C/year if integrated over the whole ocean surface, with a most probable value of 1-2 x 10^15 g C/year.

Ecosystems on Land
Basic Characteristics

It may seem simple to divide the biosphere into two major compartments, the oceans and the continents. The distinction is, however, not always clear. There is an extensive intertidal coastal zone of salt marshes, mangrove vegetation, sand beaches, and rocky shores; these ecosystems are among the most productive in the world. Similarly, fresh waters are intimately associated with land vegetation and soil processes. Since the basic characteristics of the land vegetation are closely connected with the associated soil systems, it is necessary to deal with them simultaneously. This is particularly true when concerned with the dynamics of ecosystems over longer time periods (several decades or longer), for example, changes caused by climatic variations. The natural ecosystems on earth have been markedly influenced by man. More than 10% of the land surface is used for agriculture. The world forests, covering about 30% of the land surface, are rapidly being exploited, and grassland, savannas, and shrubland are changed by
increasing herds of cattle. It becomes, therefore, important to distinguish between natural ecosystems and derived ecosystems; the most extreme of the latter are the urban systems. A most characteristic feature of the ecosystems on land is their mosaic structure, primarily a result of basic soil features and the distribution of climatic zones. Each one of these will, therefore, still be quite inhomogeneous and will consist of regions with different characteristics. The definition of these subsystems, the description of their dynamics, and the reduction to a few variables therefore become a fundamental task, which must precede the construction of simple dynamic ecosystem models to be incorporated into a global model of the carbon cycle. The biogeocenosis, the basic ecological unit of which all more aggregated systems are composed, consists of a plant association (phytocenosis), with biomass BB, on which depends the existence of animals, bacteria and fungi. Together, they form a trophic network. The green photosynthetic plants are the producers of organic material. The total quantity of carbon fixed by photosynthesis is the gross primary production, PB. Part of this is used as an energy source for maintenance and is called respiration, RA. The organic matter stored in plant tissues in excess of respiration is the net primary production, PN (or NPP). Under very favourable conditions, RA may be merely 10% of PB, but on average RA approximately equals PN. PN serves as food for the animals, the consumers, whereby a predation trophic chain is initiated. Dead organic matter from producers and consumers (litter fall) initiates a decomposition trophic chain on, and particularly in, the soil, due to the activity of small animals and microorganisms. Even though their total mass is small, they constitute an important link in the carbon cycle because of their rapid turnover. Some parts of the dead organic matter are,
however, decomposed rather slowly, particularly in cold climates, and form humus. When concerned with the global carbon cycle, we particularly need to estimate the total living biomass and its distribution over the various subsystems. Also important is the characteristic transit time distribution of the flux of carbon molecules through the biome under consideration, i.e. what relative amounts are produced as short-lived structures such as grass and leaves, and what is stored in wood for longer periods. Similar estimates are needed for the soil system, where the transit time distribution is of prime concern, since it gives the probability distribution of the time from the incorporation of carbon as dead organic material in the soil to its oxidation and release to the atmosphere. Some general idea can be obtained, in this regard, with the aid of 14C measurements of organic soil compounds, even though usually only the 'mean transit time' or 'turnover time' is obtained in this way.
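The production terms and the turnover idea introduced above connect in a few lines. The relations PN = PB - RA and turnover time = reservoir/throughput are those of the text; the numerical values below are purely illustrative.

```python
# Net primary production and turnover time, as defined in the text.
# The numbers are illustrative, not measurements.

def net_primary_production(p_b, r_a):
    """P_N = P_B - R_A: carbon stored in excess of respiration."""
    return p_b - r_a

def turnover_time(reservoir, throughput):
    """Mean transit ('turnover') time of carbon at steady state."""
    return reservoir / throughput

p_b = 100.0                             # gross primary production, arbitrary units
r_a = 0.5 * p_b                         # average case: R_A roughly equals P_N
p_n = net_primary_production(p_b, r_a)  # -> 50.0

biomass = 1500.0                        # standing biomass, same units
print(turnover_time(biomass, p_n), "time units")  # 30.0
```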
Biomass and Net Primary Production

Estimates of the amount of carbon in the form of living organic matter (biota) are still quite uncertain. The definitions of the different biomes are by no means unique, and in estimating their geographic extent there is always room for subjective judgement. It is also clear that the estimates for separate ecosystem types are more uncertain than those for the living phytomass as a whole. There is a principal difference between the estimates by Bazilevich et al. and those by other authors. The former estimate the potential biological resources, without regard to the interference by man; their estimates, particularly those of phytomass, are therefore overestimates.
An estimate has also been made of the annual litter fall, yielding a value of about 80% of the NPP. This value
is of importance for the modelling of the dynamics of the land biota subsystem. Merely about 20% would, therefore, be accumulated each year as wood. The phytomass values quoted above are representative for the last one or two decades. Part of the discrepancies between the various estimates may be due to different assumptions about the time or time period to which an estimate is supposed to apply. Clearly, considerable changes are occurring due to man's increasing interference with the natural ecosystems.
The process of deforestation is continuing at an accelerating rate. The total area of the earth that can be used for agriculture is hardly more than 26 x 10^12 m^2, i.e. not quite twice the present agricultural area. If the present rate of deforestation and expansion of agricultural land continues, one must expect a decline of this expansion during the first half of the next century, since no further land suitable for agriculture will be available. The most likely explanation for this change is that more carbon previously contained in living or dead organic matter has been returned to the atmosphere, due to man's activities, than would have been the case under natural conditions. A quantitative interpretation requires a careful analysis of the fluxes in a transitory state of the carbon cycle with the aid of models. The analyses of Stuiver, Bohn, and Wagener yield values for the net flux of carbon to the atmosphere of about 100 x 10^15 g C, in addition to that emitted by burning fossil fuel, which is in general accord with the more direct estimates discussed previously.

Organic Carbon in the Soil
The estimates of the total amount of organic carbon in the soil, BS, vary even more than those for living land biota. Keeling has summarised earlier estimates and arrives at a
value of 1050 x 10^15 g C, which is similar to the one given by Baes et al. Table 1, on the other hand, gives values which are considerably greater. The major reason for this difference is the uncertainty in the estimates of the amount of peat. Earlier summaries include only rather small quantities, while Bohn gives a value of 860 x 10^15 g C. In the boreal zone, peat decomposes very slowly. Most of the area which constitutes the present boreal zone was covered by ice until about 10-20 thousand years ago; as a consequence, the peats encountered here are young. The high abundance of peats signals a significant net transfer of carbon to the soil.

Table 1. Soil organic matter (dry weight) in the biomes

Ecosystem type                Area         Organic matter   Total carbon BS
                              10^12 m^2    kg/m^2           10^15 g
Equatorial rain forest          11.0          8                40
Tropical seasonal forest         5.0         10                23
Savannas
  derived                        9.0         27               112
  spiny                         10.0         15                68
Warm deserts
  tropical                       5.6          2                 5
  subtrop.-medit.                4.2          2                 4
Sclerophyll forest               4.0         15                27
Semideserts                      9.2          4                17
Cold deserts                     2.5          3                 3
Temperate grasslands
  steppes                        2.4         30                33
  chernozems                     3.5         60                95
  kastanozems                    2.8         15                19
Temperate forests
  evergreen                      4.0         10                18
  deciduous                      3.0         15                20
  conif. plantations             2.0         20                18
Peatlands                        1.5       1000               882
Mixed forest                     1.5         20                14
Taiga                           15.4         20               140
Tundra                           6.7         10                30
Polar deserts                    1.5          2                 1
Mangroves                        0.3          6                 1
Swamps and marshes               2.0         30                27
Halophytes, lakes, streams       2.0          2                 2
Cultivated land                 16.0         10                73
Total biosphere                                              1672
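The total-carbon column of Table 1 can be checked against the first two: 10^12 m^2 times kg/m^2 gives 10^15 g of organic matter, and a dry-matter carbon fraction of roughly 0.45 reproduces most rows (the peat row implies a higher fraction). The 0.45 fraction is inferred from the table itself, not stated in the text.

```python
# Consistency check on Table 1: total carbon ~ area x organic matter x
# carbon fraction. The ~0.45 fraction is inferred from the table, not
# stated in the text; the peat row implies a higher fraction.

CARBON_FRACTION = 0.45

rows = [  # (ecosystem, area 10^12 m2, organic matter kg/m2, tabulated 10^15 g C)
    ("Equatorial rain forest", 11.0, 8, 40),
    ("Taiga", 15.4, 20, 140),
    ("Cultivated land", 16.0, 10, 73),
]

for name, area, om, tabulated in rows:
    # 10^12 m2 x kg/m2 = 10^12 kg = 10^15 g of organic matter
    estimate = area * om * CARBON_FRACTION
    print(f"{name}: estimated {estimate:.0f}, tabulated {tabulated} (10^15 g C)")
```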
There is ample evidence that the bacterial decomposition of organic soil compounds increases when land is taken into agricultural use. Paul has reported losses of between 2.5 and 6.5 kg/m^2 from Canadian chernozemic soils during the last 60 to 75 years. The changes of the 13C/12C ratio already mentioned are due both to a reduction of the extent of forests and to the extension of agricultural land, because of the more intense use of this land and the increased bacterial decomposition of organic soil carbon.
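Paul's figures translate into a mean annual loss rate. The pairing of the extremes below is an assumption for illustration, since the text does not say which loss corresponds to which period.

```python
# Mean annual organic-matter loss from cultivated Canadian chernozemic
# soils, from the 2.5-6.5 kg/m2 losses over 60-75 years reported by
# Paul. The pairing of extremes below brackets the possible range.

low = 2.5 / 75.0    # smallest loss over the longest period
high = 6.5 / 60.0   # largest loss over the shortest period

print(f"roughly {low:.3f} to {high:.3f} kg/m2 per year")
```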
Carbon in the Freshwater System

The following approximate values apply to the discharge by rivers to the sea: dissolved inorganic carbon, 0.4 x 10^15 g C/year; dissolved organic carbon, 0.1 x 10^15 g C/year; particulate organic carbon, 0.06 x 10^15 g C/year.
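Summed, these three components give the total river carbon discharge:

```python
# Total carbon discharge by rivers to the sea, summing the three
# components quoted above (units: 10^15 g C per year).

dissolved_inorganic = 0.40
dissolved_organic = 0.10
particulate_organic = 0.06

total = dissolved_inorganic + dissolved_organic + particulate_organic
print(f"total ~{total:.2f} x 10^15 g C/year")  # ~0.56
```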
The Dynamics of Ecosystems

An ecosystem may be in a state of progression, steady state, or regression, generally as a result of changing external environmental conditions. A naked or denuded site is gradually colonised in a succession of biomes (biogeocenoses) of increasing diversity and phytomass. Generally, the succession proceeds from semidesert to grassland, shrubland and then forest. The terminal state, the climax, represents a state of maximum phytomass, or a maximum that in some sense combines large phytomass
and high primary production. In this state there is no further increase of biomass; a balance is attained between fixation and decomposition, and between respiration and the energy supply for the maintenance of living tissues. For a local ecosystem, a forest fire is a catastrophe which initiates renewed progression towards a climax. Considering a sufficiently large area, within which there is an approximate statistical balance between the destruction by forest fires and the progression in other areas, the region as a whole is in a climax state. The spatial extension of biomes is primarily determined by climate and soil characteristics. Since, to a first approximation, the climatic zones follow latitudes, the biomes more or less form zonal bands. This is already clear from Koppen's classification of climate, and is well illustrated in the north-south cross-section from the Arctic Sea to the warm deserts in the southern parts of the U.S.S.R. given by Walter. Towards the south, the rapid decrease of precipitation and the increase of temperature and, associated with it, of potential evaporation gradually cause the forests to fade out, with a transition to steppe, semidesert and desert.

Burning of Fossil Fuel
The burning of fossil fuels (coal, oil, and gas) has been accelerating since the middle of the last century and will probably continue to do so for quite some time into the future. Keeling has estimated the annual input to the atmosphere, primarily by using UN statistics. During the interim period, the annual increase was merely 1.2%. It has varied during the twenty years of CO2 observations and has on average been about 4.5%. It should be emphasised that this value is based on the assumption that no other net transfers due to man's activities have occurred, which, as will be shown, is not a valid assumption. Since the oil
crisis in 1973, another slow-down of the expansion has occurred, however slight. It is not possible to judge whether this is a temporary situation or possibly the beginning of a slower expansion in the future; this depends on the development of alternative energy sources and on energy conservation measures. The total emission of carbon, in the form of carbon dioxide, to the atmosphere until 1976 was about 140 x 10^15 g. The estimated available resources are given by Perry and Landsberg as about 5000 x 10^15 g, of which 85% is in the form of coal and the remainder is divided between oil, oil shale, tar sand, and gas. Much larger reserves are present in the earth's crust, and some estimates indicate that as much as 7000-8000 x 10^15 g might be recoverable. The future use of fossil fuels is difficult to predict. Keeling and Bacastow and Revelle and Munk suggest the use of a logistic distribution as a function of time. To begin with, the combustion grows exponentially, but when the total consumption starts to become significant in comparison with the total resources, the annual percentage increase declines and becomes zero when half of the resources have been used. The annual use then decreases.

A Complex Model for the Carbon Cycle
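The logistic fuel-use curve described above can be put in formula form: cumulative consumption Q(t) = Q_inf / (1 + exp(-r (t - t0))), whose derivative r Q (1 - Q/Q_inf) grows nearly exponentially at first and peaks exactly when half the resource has been used. The resource total below is the 5000 x 10^15 g estimate quoted above; the growth rate and midpoint year are assumed for illustration only.

```python
import math

# Logistic fuel-use curve of the kind suggested by Keeling and Bacastow
# and by Revelle and Munk. Q_INF is the resource estimate quoted in the
# text; the growth rate R and midpoint year T0 are assumed values.

Q_INF = 5000e15   # g C, estimated recoverable fossil fuel resources
R = 0.03          # 1/year, assumed early exponential growth rate
T0 = 2100.0       # assumed year at which half the resource is used

def cumulative_use(t):
    """Cumulative consumption Q(t) = Q_inf / (1 + exp(-r (t - t0)))."""
    return Q_INF / (1.0 + math.exp(-R * (t - T0)))

def annual_use(t):
    """dQ/dt = r Q (1 - Q / Q_inf)."""
    q = cumulative_use(t)
    return R * q * (1.0 - q / Q_INF)

# Half the resource is consumed at T0, where the annual use peaks at
# r * Q_inf / 4 and declines thereafter.
print(f"Q(T0) / Q_inf    = {cumulative_use(T0) / Q_INF:.2f}")   # 0.50
print(f"peak annual use  = {annual_use(T0):.2e} g C/year")
print(f"50 yr after peak = {annual_use(T0 + 50.0):.2e} g C/year")
```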
Many models of the carbon cycle have been developed. It is hardly useful to add another, unless a very specific purpose for doing so is given. It seems most important to formulate a model that has the following characteristics: It should be simple in its basic structure, but should permit the inclusion of a more detailed treatment as knowledge about the processes improves or more data become available for verification. The simplifications of reality that are necessary for arriving at reasonably simple models should be
carefully developed from the more complete knowledge about the behaviour of the various subsystems. The carbon cycle depends on other biogeochemical cycles, e.g. those of phosphorus and nitrogen, both in the sea and on land, and of oxygen and calcium in the sea. This fact offers further possibilities for verification of the model. The uncertainties of model predictions should be carefully assessed, and the results should not merely be given as one plausible scenario; this requires the performance of a great number of sensitivity studies. The carbon cycle depends on the motions of air and water and on a number of physical, chemical and biological processes. As has already been pointed out, the atmosphere may be considered as well mixed. The following discussion will be limited to time scales for which this holds and will, therefore, disregard the seasonal variations and the more detailed problems of carbon dioxide transfer associated with concentration differences within the atmosphere. The total amount of carbon present in the atmosphere will be the single variable describing atmospheric carbon dioxide. The small amounts of methane, carbon monoxide, and other carbon-containing trace gases will not be considered in the present context, since their concentrations can be assumed to remain unchanged without serious error.

The Real Ocean Circulation
Previous models used in analysing the carbon cycle have been simple two- or three-reservoir or diffusion models. Even though they are undoubtedly useful for a first overall assessment of the role of the oceans in the carbon cycle, they suffer from serious deficiencies. A somewhat more
realistic model has been used by Broecker et al., who considered the role of the thermocline region in some detail. Few attempts have been made to deal with several tracer elements simultaneously, in order to establish more precisely the reliability of the ocean models used. Ocean surface waters are divided into two reservoirs, cold surface water and warm surface water, both extending to about 75 m depth, i.e. the depth of the seasonal thermocline. The division line is at about 40° latitude. Warm surface water is that part of the oceans where a well-defined permanent thermocline exists, and cold surface water comprises the rest of the ocean surface water, which more or less effectively communicates with deeper layers of the oceans by convection. The intermediate water is modelled with the aid of two reservoirs (75-540 m and 540-1000 m, respectively) which are in convective mutual exchange with the cold surface water. The deep sea is described with the aid of eight 500-m-deep layers, one underneath the other, all receiving water from the cold surface water, which, in turn, gives rise to a slow upward motion, increasing in strength as continuity requires.

Carbonate System of the Sea
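The reservoir structure described above — cold surface water feeding eight 500-m deep layers, with a slow upward return through the intermediate water — can be sketched as a simple overturning loop. The warm surface box is omitted for brevity, and all volumes and the flux are illustrative assumptions, not values from the text.

```python
# Sketch of the reservoir structure described above: cold surface water
# sinks to the deepest of eight 500-m layers and returns upward, as
# continuity requires. A conservative tracer pulse in the cold surface
# box is moved around the loop with an explicit upwind scheme; the loop
# only redistributes it, so the total is conserved. Volumes and the
# overturning flux are illustrative assumptions.

N_DEEP = 8
# Relative volumes: cold surface, intermediate 75-540 m, intermediate
# 540-1000 m, then the eight deep layers.
volumes = [2.0, 10.0, 12.0] + [25.0] * N_DEEP
FLUX, DT = 0.6, 0.5   # overturning flux and time step (arbitrary units)

tracer = [0.0] * len(volumes)
tracer[0] = 1.0       # unit pulse in the cold surface box

# Overturning path: cold surface (0) -> deepest layer (10) -> ... ->
# shallowest deep layer (3) -> intermediate boxes (2, 1) -> surface (0).
path = [0, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

for _ in range(200):
    moved = [FLUX * DT * tracer[src] / volumes[src] for src in path[:-1]]
    for (src, dst), m in zip(zip(path[:-1], path[1:]), moved):
        tracer[src] -= m
        tracer[dst] += m

assert abs(sum(tracer) - 1.0) < 1e-9   # the loop conserves the tracer
print("deepest layer now holds", round(tracer[10], 3), "of the pulse")
```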
There are two important aspects of the CO2 air-sea exchange that should be included in any attempt to model the global carbon cycle quantitatively: the resistance to CO2 exchange caused by the hydration-diffusion process in the top layer of the sea, and the change of the carbonate equilibrium in sea-water that is induced by the CO2 addition to the atmosphere-ocean system. The former aspect is most easily modelled by assigning a proper turnover time (5-8 years) for CO2 in the atmosphere. The latter requires the formulation of the carbonate-borate
chemical equilibrium. It should be observed that the chemistry of the carbonate system may well be more complex than given by Keeling. Some of the parameters required to describe the chemistry of the carbonate system are temperature dependent; the division of the ocean surface waters into one reservoir for cold water and another for warm water will permit experiments in which this temperature dependence is accounted for.

Detritus Sedimentation in the Sea
The rate of photosynthesis in surface waters is primarily limited by the supply of nutrients, particularly nitrogen and phosphorus. This, in turn, is determined by the ocean circulation (and to a minor extent by water pollution). A proper treatment of the biological activity therefore implies the simultaneous treatment of the phosphorus and nitrogen cycles. The phosphorus cycle is reasonably well understood chemically, while this is not so for the nitrogen cycle. Oxygen is consumed in the course of the bacterial disintegration in the intermediate and deep water. The oxygen cycle is therefore linked with the carbon cycle, and the vertical distribution of oxygen serves as an important piece of information in determining the relative roles of water renewal and detritus decomposition in maintaining both the total carbon and the oxygen distribution in the sea. Carbon is also withdrawn from the surface waters by the formation of carbonate shells. Calcium is involved in this process, and a proper treatment should therefore include consideration of the calcium cycle in the sea. The difference in calcium concentration between surface and deep water is less than 1%, i.e. less than 0.1 mg-atoms/l, while the difference in total carbon is 0.2-0.3 mg-atoms/l. The latter difference is due to the combined effect of the decomposition of organic material and the dissolution of calcium carbonate detritus. The dissolution of CaCO3 will
change the concentrations of calcium and carbon by the same amount (expressed in moles).

Carbonate Sediments in the Oceans
The transfer of carbonates from land to the oceans, and the formation of carbonate sediments on the ocean bottom, are important aspects of the global carbon cycle. For time periods of millennia or less one may, as a first approximation, assume a balance between the flux of carbonate ions carried by rivers to the sea and the net accumulation of carbonate sediments. The ocean model outlined above could be used for the study of the slow changes on the geological time scale, but this requires a careful analysis of the main physico-chemical processes occurring on these time scales. A change in the amount of carbonate ions in sea-water, as would result from a net transfer of CO2 to the sea, would change this situation. A simple formulation of the CaCO3 flux from the sediment into the water is a transfer velocity proportional to the undersaturation. In a more refined treatment, one should consider the full complexity of the transfer between sediment CaCO3, pore water, the stagnant layer of molecular diffusion, and bulk ocean water. Takahashi and Broecker have shown that the rate-limiting step for this transfer may be the resaturation time of the pore water. The transfer rate is then not linearly dependent on the concentration gradient, which could be accounted for in this rather simple model.

Modelling the Ecosystems
Drastic simplifications of the complex land ecosystems are necessary in order to include them as interacting subsystems in the global carbon cycle model. The turnover time for carbon (BB/PN) varies from about 30 years for the mixed forests and taiga, and about 15 years for the
tropical forests, to merely 1 or 2 years for grassland and cultivated fields. For the soil, this ratio is less than 10 years in the tropics, a few decades in middle latitudes, and increases further towards cold climates (taiga and tundra). According to these tables, the carbon in peat has a turnover time of about 6000 years. In the first attempts to incorporate the role of land biota into a global model, a distinction was only made between short-lived (2-3 years) and long-lived (40-60 years) land biota, and those two reservoirs contained both living and dead organic material. The reasons for describing the land ecosystems in somewhat more detail are: tropical and temperate forests have rather different characteristics; man influences the various ecosystems quite differently; and climatic changes or other alterations in the environmental factors will shift the extension of the various ecosystems, which should be described in the model. In view of these remarks, consider the following aggregated ecosystems as a suitable approximation of reality: tropical evergreen forests; deciduous and boreal forests; grassland, steppe, and swamps; cultivated land; peatland. For each of these we define the reservoirs:

- biomass;
- soil carbon mass.
Transfer takes place between the atmosphere (0) and these two reservoirs (biomass, 1; soil, 2) due to: gross primary production (0 → 1); respiration (1 → 0); forest fires (1 → 0); litter fall (1 → 2); and decomposition of soil organic matter (2 → 0). The model considered by Revelle and Munk deals with only one characteristic biome and further divides the biomass into one part which is active in photosynthesis (leaves, needles, grass, etc.) and another (wood), which behaves in a similar way to soil carbon and is combined with it. One part (about half) of the carbon fixed by photosynthesis is returned by respiration, while the other part (the net primary production) is turned into wood or soil carbon. Machta and Keeling, on the other hand, distinguish between short-lived and long-lived biota which do not influence each other, but assimilate carbon from the atmosphere separately and return it by bacterial decomposition. Oeschger et al., finally, assume that the biosphere increases its assimilation proportionally to the amount of atmospheric carbon dioxide in excess of the preindustrial amount. The behaviour of the models depends critically on the way in which the transfer rates listed above are modelled. The gross or net primary production has customarily been assumed to be proportional to a power, β, of the atmospheric CO2 concentration, where 0 < β < 0.4, and also proportional to the assimilating biomass. Machta and Keeling assume that both short-lived and long-lived organic matter assimilate,
while Revelle and Munk consider that the carbon in wood and soil is maintained by transfer from the short-lived pool. A clear distinction between the part of the annual net primary production that is transferred into wood and the part that forms humus has not been made by any of these authors, but is considered by Bjorkstrom. The return flow of carbon to the atmosphere from long-lived organic matter (Keeling), from the soil (Bjorkstrom), or from a combined reservoir (Revelle and Munk) is assumed to be proportional to the amount of carbon in the reservoir considered.

Steady-State of the Carbon Cycle
To validate the results obtained from model computations, it is important to use both data that describe the steady state of the system, particularly for the oceans, and data on the changes that have occurred due to man's interventions, especially the changes in the total amounts of CO2, 13C, and 14C in the atmosphere. On the other hand, the steady-state distributions of characteristic ocean properties are related to the behaviour of the model on longer time scales. It is appropriate, firstly, to consider the ocean system and take account of what its physico-chemical characteristics imply and, secondly, to try to determine what additional knowledge can be gained from the use of the time-dependent data.

The Role of the Oceans

Previous ocean models have been based on three different assumptions with regard to the exchange between the surface water and the deep sea:

Model A. The vertical transfer is described by exchange processes between a few (2 or 3) well-mixed reservoirs.

Model B. The transfer to the thermocline region or the deep sea is accomplished through the deep-water 'outcrops' in polar regions.
Model C. The vertical transfer is described by vertical turbulent diffusion, the magnitude of which is determined from the vertical distribution of 14C.

Model D. The transfer to the intermediate water is due to turbulent exchange and detritus flow, and the transfer to the deep water is accomplished by a slow thermo-haline circulation of the water.
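The qualitative difference between these assumptions can be illustrated with a toy calculation: a unit tracer pulse is placed in the surface box and transported either by two-way exchange with one well-mixed deep reservoir (as in Model A) or by an overturning loop that carries surface water to the bottom of a chain of layers before it returns (an extreme Model D). All volumes, fluxes and times are illustrative assumptions; the point is only that the return flow in the chain stays tracer-free for a long while, so the chain draws the pulse out of the surface faster.

```python
# Toy comparison of Model A (surface box exchanging with one well-mixed
# deep reservoir) and an extreme Model D (surface water sinks to the
# bottom of a chain of layers and upwells back). A unit tracer pulse
# starts in the surface box; volumes, flux and time are illustrative.

N = 8                       # layers in the Model D chain
V_SURF, V_LAYER = 1.0, 1.0  # relative volumes (deep total = N * V_LAYER)
F, DT, STEPS = 1.0, 0.01, 200   # flux, time step, number of steps

surf_a, deep_a = 1.0, 0.0        # Model A amounts
surf_d, layers = 1.0, [0.0] * N  # Model D amounts; layers[0] = bottom

for _ in range(STEPS):
    # Model A: two-way exchange driven by the concentration difference.
    d = F * DT * (deep_a / (N * V_LAYER) - surf_a / V_SURF)
    surf_a += d
    deep_a -= d
    # Model D: one-way overturning loop (explicit upwind scheme).
    o_s = F * DT * surf_d / V_SURF           # surface -> bottom layer
    o = [F * DT * c / V_LAYER for c in layers]
    surf_d += o[-1] - o_s                    # top layer -> surface
    layers = ([layers[0] + o_s - o[0]] +
              [layers[i] + o[i - 1] - o[i] for i in range(1, N)])

# The return flow in the chain is still almost tracer-free, so the chain
# model has drawn the pulse down faster than the exchange model.
print(f"surface tracer, Model A: {surf_a:.3f}")
print(f"surface tracer, Model D: {surf_d:.3f}")
assert surf_d < surf_a
```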
A direct comparison has been made of the ways in which these different models respond to an injection of a tracer into the atmosphere. The rates of transfer between the different reservoirs in the various models have, in all cases, been determined with the aid of a given age distribution of the real ocean, which the models are supposed to depict. Still, their behaviour in response to such an injection of a tracer is quite different. Model A reacts most slowly, while an extreme case of model D, in which all water sinking from the cold surface water reservoir sinks to the bottom before starting its upward motion, is a considerably more effective sink. The largest difference is obtained during the time interval 300-800 years after the injections began; during the first 50-100 years, on the other hand, the differences are rather small. Model D is detailed enough to determine the relative roles of water motions and settling detritus in the vertical transfer of carbon. The experiments with model D conducted so far show rather conclusively that the deep sea (below about 1000 m) cannot have served as an essential sink for the carbon dioxide emitted into the atmosphere by man during the last 50-100 years. This conclusion was already drawn by Broecker et al. and Keeling and is thus now verified by this model, which is more closely tuned to the real circulation of the oceans. The role of the intermediate waters (75-1000 m) as a sink for excess atmospheric CO2 is less clear. Keeling pointed out that if the surface waters were extended to
about 1000 m, the oceans could have served as a sink for the part of the fossil fuel CO2 that has not stayed in the atmosphere. He did not take into consideration, however, the additional release of carbon dioxide due to the reduction of land biota and soil carbon. Some preliminary experiments with model D, using 14C and oxygen data, clearly show that the intermediate water may exchange rather rapidly with the surface waters (turnover time about 40 years) and may thus serve as an important sink for fossil fuel CO2. The GEOSECS 14C and tritium measurements qualitatively verify that this may be possible, but a more detailed analysis of the penetration of the bomb-produced radioactivity is required. So far, analyses have essentially been limited to the Atlantic Ocean. Some special attention should be given to the role of the shallow seas. Most of the detritus formed in these areas is deposited. Some of it is returned to the sea-water by bacterial decomposition, but there is a steady and rapid accumulation of organic mud and thus a net withdrawal of carbon from the global cycle. The increased photosynthesis that is caused by the emission of nutrients to the water, particularly phosphorus, also contributes significantly to an increased deposition. As phosphorus is primarily used as a fertiliser in agriculture, most of it is soon locked up as inorganic, insoluble phosphorus compounds in the soil. A considerable amount still finds its way to lakes, coastal waters and the sea, possibly 2-4 x 10^12 g/year. Organic matter has a C/P ratio of approximately 40; thus 0.08-0.16 x 10^15 g C is bound annually due to the fertilising of lakes and coastal waters with phosphorus. In lakes depleted in oxygen, the chemical processes at the water-sediment interface change. Phosphorus dissolves prior to the full decomposition of the organic material. In this way, the water may repeatedly be fertilised by phosphorus. It is not possible to assess the importance of
this effect quantitatively, but it seems, at first glance, to modify the carbon cycle only moderately.

The Ocean Sediments
The dissolution of CaCO3 from sediments has only been included in model A, without detailed consideration of the relevant exchange processes. The results probably represent an upper limit to the role of the sediments as a sink for CO2 emissions due to man. Broecker and Takahashi have analysed the problem in greater detail, and their treatment should be included in model D to obtain overall consistency. The dissolution of the carbonate fraction in bottom sediments primarily takes place at depths below the lysocline. Due to the changing composition of sea-water and the associated rise of the lysocline, more sediments become exposed to undersaturated sea-water. In the Atlantic, the lysocline is found between 3500 and 4700 m, while it is shallower in the Pacific Ocean. The renewal of deep water is considerably more rapid in the Atlantic Ocean than in the Pacific Ocean and, therefore, Broecker and Takahashi limit their considerations to the former. They assume that the input of CO2 to the atmosphere due to fossil fuel combustion will increase annually by 2%, and that the increase in atmospheric CO2 of 0.9 ppm/year, as recorded during the last decade, will also increase by 2% annually until the year 2100. The formation of deep water in the North Atlantic Ocean primarily influences the water masses to the west of the mid-Atlantic ridge and above the lysocline at about 4700 m. On the basis of 14C and tritium measurements from GEOSECS, they conclude that the renewal of this water mass takes place on a time scale of about 200 years. To maximise the dissolution rate, they assume that this water mass is horizontally and vertically well mixed.
Accepting that the resaturation time of the sediment pore water is the rate-limiting step, and assuming that bioturbation effectively mixes the sediments down to about 10 cm, they conclude that the sediments could play a significant role as a sink for CO2 emitted by man. This tentative conclusion should be tested for internal consistency using model D. This requires special care. Water penetrates to the deep sea in very limited regions, and thus considerable departures from the equilibrium between the sediment and the ocean water may occur, while most other parts of the sea floor are not affected.

Relative Role of Land Biota and the Oceans as a Sink for Carbon Dioxide
Broecker et al., Machta, Keeling, Oeschger et al., Keeling and Bacastow, Revelle and Munk, Siegenthaler and Oeschger, and Bjorkstrom have all attempted to analyse the relative role of land biota and the oceans as a sink for the carbon dioxide emitted to the atmosphere by man's activities. In all cases, the ocean sediments have so far been assumed to have played an insignificant role. Model assumptions are different in several regards and have not been systematically verified by observations, which makes comparisons difficult. In more recent model experiments, much lower preindustrial atmospheric CO2 concentrations and a considerable transfer of CO2 to the atmosphere due to deforestation and increasing agriculture have been considered. Also the simultaneous increase of land biota biomass in unexploited parts of the world's forests, due to more rapid growth in a more CO2-rich atmosphere, has been simulated in such experiments. The partitioning of the emissions between the atmosphere (A), land biota and soil (B and S) and the oceans has been computed by Siegenthaler and Oeschger and Bjorkstrom with principally
the same results. These computations were made for a total output of CO2 to the atmosphere, due to man, of 185 x 10^15 g C until 1970, of which 70 x 10^15 g C was due to oxidation of land biota and soil carbon. A preindustrial atmospheric CO2 concentration of about 255 ppm is required, if the growth rate of land biota does not depend on the atmospheric CO2 concentration (β = 0). Siegenthaler and Oeschger show that this result is consistent with a decrease in the 13C content of wood by about one per mille during the last hundred years. More rapid exchange of water between intermediate levels (100-1000 m) and the cold surface water would make the oceans a somewhat more effective sink, but it would not substantially change the result. If we, on the other hand, accept a preindustrial atmospheric CO2 concentration of about 290 ppm, the partitioning of man's emissions between the atmosphere, biota and soil, and the oceans is about 35, 50, and 15% respectively. Depending on the size of the land biota and the rate of intermediate water renewal, the required value for β is between 0.1 and 0.25. There is, in this case, obviously no net flux of CO2 from land biota and the soil to the atmosphere, but rather a transfer from exploited to unexploited parts of the land. Even though available 13C data rather seem to indicate a preindustrial CO2 concentration in the atmosphere of less than 290 ppm, these data are not yet extensive and accurate enough to exclude the possibility that a significant growth of unexploited forest biomass has occurred. Virgin forests are, however, decreasing in areal extent and such a development is, therefore, bound to be a temporary one. The land biota model used by Revelle and Munk does not behave much differently for time periods until the present. Available data are inadequate for judging whether
one or the other land biota behaviour is the more realistic. Revelle and Munk point out one interesting consequence of the model assumption that photoassimilation is dependent both on the size of the biota reservoir and on the atmospheric CO2 concentration. This non-linearity implies that this reservoir will gradually accumulate much more of the fossil fuel emissions than the oceans. Whether this is an approximately correct description of the land biota behaviour or not is of considerable interest. It requires a careful analysis of the most important limiting factors for growth in natural ecosystems. Judged by the data of Goudriaan and Ajtay, an increase in CO2 partial pressure in the atmosphere will beneficially influence the water-uptake characteristics of plants rather than enhance rates of photosynthesis. The availability of mineral nutrients, such as phosphorus or nitrogen, is the more decisive factor for the net assimilation rate. It is also clear, however, that the rapidly increasing exploitation of world forest resources will imply that the possible regulating mechanism of the natural carbon cycle listed above will diminish in importance, and possibly disappear. Finally, it is of interest to point out one likely feedback mechanism of the land biota-soil system which, in the long run, may be of significance for the partitioning of fossil fuel carbon between the major reservoirs. The turnover time for carbon in land biota and the soil is dependent on the climate. It is quite short in tropical regions, considerably longer in the boreal forest, and a few thousand years for peat land. A warmer climate for the earth, as might result from an increasing amount of CO2 in the atmosphere, should, therefore, increase the rate of decomposition of soil organic matter at northerly latitudes, but possibly also increase the amount of carbon stored in the living biota of the forests.
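The climate dependence of turnover times described here is often represented with a Q10 temperature response. The following sketch uses that formulation with illustrative values; the Q10 value of 2 and the reference turnover times are assumptions for illustration, not figures from the text.

```python
# Hedged sketch: temperature dependence of soil-carbon decomposition.
# The Q10 formulation and the numerical values (Q10 = 2, reference
# turnover times) are illustrative assumptions, not figures from the text.

def turnover_time(tau_ref_years, q10, delta_t_celsius):
    """Turnover time after a warming of delta_t_celsius, assuming the
    decomposition rate k = 1/tau scales as Q10**(delta_T / 10)."""
    k_ref = 1.0 / tau_ref_years
    k_warm = k_ref * q10 ** (delta_t_celsius / 10.0)
    return 1.0 / k_warm

# Illustrative reservoirs: tropical litter (~5 yr), boreal soil (~100 yr),
# peat (~3000 yr), each warmed by 2 degrees C with Q10 = 2.
for name, tau in [("tropical", 5.0), ("boreal", 100.0), ("peat", 3000.0)]:
    print(name, round(turnover_time(tau, 2.0, 2.0), 1))
```

Under these assumptions a 2 degree warming shortens every turnover time by the same factor, which is why the absolute effect is largest for the slow boreal and peat reservoirs.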
Prediction of Future Changes in CO2 Concentration
The two most detailed analyses, i.e. those by Keeling and Bacastow and by Revelle and Munk, depict two rather extreme possibilities. The former consider the oceans as the only major sink and find that the atmospheric concentration might increase to six to seven times the preindustrial value, if the total emission were 5000 x 10^15 g C. It was assumed that no transfer from the land biota to the atmosphere occurs due to man's activities and that an increase of land biota, due to increased assimilation, will only occur until the first decades of the next century. From then onwards, the oceans are the only sink for the excess CO2. Due to the buffering of the oceans, the present value of the airborne fraction, about 50%, will increase to above 80%. Revelle and Munk, on the other hand, do not put any restrictions, in their initial computation, on the rate of increase of carbon in the land biota-soil reservoir, using a value of β = 0.05. They further assume short-lived and long-lived land biota-soil reservoirs of 200 x 10^15 g C and 2600 x 10^15 g C respectively, and the net primary production to be 53 x 10^15 g C/year. This implies preindustrial, steady-state turnover times for these two reservoirs of 4 and 49 years. The non-linear dependence of the net primary production on: (i) the size of the assimilating land biota reservoir (short-lived land biota), and (ii) the atmospheric CO2 concentration, will rapidly increase the flow of carbon to the land biota as fossil fuel emission steps up, even at the rather small value of β that was chosen. At about the time of the peak of fossil fuel combustion (around the year 2100), the growth of organic matter on land will exceed the output due to combustion, and the atmospheric CO2 concentration will again decrease.
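The steady-state turnover times quoted for the Revelle and Munk reservoirs follow directly from reservoir size divided by throughput, and can be checked in a few lines:

```python
# Consistency check of the steady-state turnover times quoted for the
# Revelle and Munk land-biota model: turnover time = reservoir / flux.

NPP = 53.0            # net primary production, 10^15 g C per year
short_lived = 200.0   # short-lived biota reservoir, 10^15 g C
long_lived = 2600.0   # long-lived biota/soil reservoir, 10^15 g C

tau_short = short_lived / NPP   # ~3.8 yr, quoted in the text as 4 years
tau_long = long_lived / NPP     # ~49.1 yr, quoted in the text as 49 years
print(round(tau_short, 1), round(tau_long, 1))
```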
About one hundred years later, atmospheric CO2 concentrations will already have returned to preindustrial values, the land biota-soil reservoir will have almost tripled in size and taken up about 90% of the emitted carbon, while merely about 10% will have escaped into the oceans during the few hundred years when the atmosphere/sea CO2 equilibrium was disturbed significantly. Revelle and Munk themselves seriously question whether such a development is at all possible and discuss, at some length, factors that may limit the land biota response to an increasing atmospheric CO2 concentration. In view of the fact that man's exploitation of forests and expanding agriculture generally leads to a net transfer of CO2 to the atmosphere, it seems, however, unlikely that a major increase of the land biota-soil reservoir will occur, as projected by Revelle and Munk. The comparison of the two model computations strongly emphasises the need for a much better understanding of land ecosystem dynamics. A critical area of research would also be a study of the carbon-uptake characteristics of shallow marine sediments for organic matter and CaCO3. Also, the flow of carbon to the floor of the deep sea via the 'faecal pellet bomb' may account for a part of the missing carbon. The changes in atmospheric CO2 concentrations during the next 30-40 years are not very different in the model computations referred to above and essentially imply that, during this time period, the airborne fraction of emitted fossil fuel CO2 will remain unchanged. A continued annual increase of fossil fuel combustion by 4% would yield an atmospheric CO2 concentration of about 380 ppm at the turn of the century.
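A minimal sketch of the closing projection, assuming a constant airborne fraction of about 50% and emissions growing 4% per year. The starting values (year 1975, 330 ppm, 5 x 10^15 g C per year emitted, 2.12 x 10^15 g C per ppm) are illustrative assumptions consistent with the period, not figures from the text:

```python
# Hedged sketch: project atmospheric CO2 to the year 2000 under 4% annual
# growth in fossil fuel combustion and a constant ~50% airborne fraction.
# Starting values are illustrative assumptions, not figures from the text.

C0_ppm = 330.0          # assumed atmospheric CO2 concentration in 1975
E0 = 5.0                # assumed emission rate in 1975, 10^15 g C per year
growth = 0.04           # 4% annual growth in combustion
airborne_fraction = 0.5
G_PER_PPM = 2.12        # assumed 10^15 g C per ppm of atmospheric CO2

years = 25              # 1975 -> 2000
# Geometric-series sum of emissions over the period.
cumulative = E0 * ((1 + growth) ** years - 1) / growth
conc = C0_ppm + airborne_fraction * cumulative / G_PER_PPM
print(round(conc))      # close to the ~380 ppm stated in the text
```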
5

Variations in Climate Change

Climate is the average state of the atmosphere and the underlying land or water, on time scales of seasons and longer. Climate is typically described by the statistics of a set of atmospheric and surface variables, such as temperature, precipitation, wind, humidity, cloudiness, soil moisture, sea surface temperature, and the concentration and thickness of sea ice. The statistics may be in terms of the long-term average, as well as other measures such as daily minimum temperature, length of the growing season, or frequency of floods. Although climate and climate change are usually presented in global mean terms, there may be large local and regional departures from these global means. These can either mitigate or exaggerate the impact of climate change in different parts of the world. A number of factors contribute to climate and climate change, and it is useful to define the terms climate forcings, climate sensitivity, and transient climate change for the discussion below.

CLIMATE FORCINGS
A climate forcing can be defined as an imposed perturbation of Earth's energy balance. Energy flows in
from the sun, much of it in the visible wavelengths, and back out again as long-wave infrared (heat) radiation. An increase in the luminosity of the sun, for example, is a positive forcing that tends to make Earth warmer. A very large volcanic eruption, on the other hand, can increase the aerosols (fine particles) in the lower stratosphere (altitudes of 10-15 miles) that reflect sunlight to space and thus reduce the solar energy delivered to Earth's surface. These examples are natural forcings. Human-made forcings result from, for example, the gases and aerosols produced by fossil fuel burning, and alterations of Earth's surface from various changes in land use, such as the conversion of forests into agricultural land. Those gases that absorb infrared radiation, i.e., the "greenhouse" gases, tend to prevent this heat radiation from escaping to space, leading eventually to a warming of Earth's surface. The observations of human-induced forcings underlie the current concerns about climate change. The common unit of measure for climatic forcing agents is the energy perturbation that they introduce into the climate system, measured in units of watts per square meter (W/m2). The consequences of such forcings are often then expressed as the change in average global temperature, and the conversion factor from forcing to temperature change is the sensitivity of Earth's climate system. Although some forcings (volcanic plumes, for example) are not global in nature and temperature change may also not be uniform, comparisons of the strengths of individual forcings, over comparable areas, are useful for estimating the relative importance of the various processes that may cause climate change.
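A widely used simplified expression for the radiative forcing of CO2, not taken from this text, is dF = 5.35 ln(C/C0) W/m2; for a doubling it gives about 3.7 W/m2, which the chapter rounds to 4 W/m2:

```python
import math

# Sketch of a common simplified expression for CO2 radiative forcing
# (an external approximation, not a formula from this text):
# dF = 5.35 * ln(C / C0) in W/m^2.

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing in W/m^2 relative to a reference concentration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(560.0), 2))   # ~3.71 W/m^2 for a doubling
```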
CLIMATE SENSITIVITY
The sensitivity of the climate system to a forcing is commonly expressed in terms of the global mean temperature change that would be expected after a time sufficiently long for both the atmosphere and ocean to come to equilibrium with the change in climate forcing. If there were no climate feedbacks, the response of Earth's mean temperature to a forcing of 4 W/m2 (the forcing for a doubled atmospheric CO2) would be an increase of about 1.2°C (about 2.2°F). However, the total climate change is affected not only by the immediate direct forcing, but also by climate "feedbacks" that come into play in response to the forcing. For example, a climate forcing that causes warming may melt some of the sea ice. This is a positive feedback because the darker ocean absorbs more sunlight than the sea ice it replaced. The responses of atmospheric water vapor amount and clouds probably generate the most important global climate feedbacks. The nature and magnitude of these hydrologic feedbacks give rise to the largest source of uncertainty about climate sensitivity, and they are an area of continuing research. As just mentioned, a doubling of the concentration of carbon dioxide (from the preindustrial value of 280 parts per million) in the global atmosphere causes a forcing of 4 W/m2. The central value of the climate sensitivity to this change is a global average temperature increase of 3°C (5.4°F), but with a range from 1.5°C to 4.5°C (2.7 to 8.1°F). The central value of 3°C is an amplification by a factor of 2.5 over the direct effect of 1.2°C (2.2°F). Well-documented climate changes during the history of Earth, especially the changes between the last major ice age
(20,000 years ago) and the current warm period, imply that the climate sensitivity is near the 3°C value. However, the true climate sensitivity remains uncertain, in part because it is difficult to model the effect of cloud feedback. In particular, the magnitude and even the sign of the feedback can differ according to the composition, thickness, and altitude of the clouds, and some studies have suggested a lesser climate sensitivity. On the other hand, evidence from paleoclimate variations indicates that climate sensitivity could be higher than the above range, although perhaps only on longer time scales.

TRANSIENT CLIMATE CHANGE
Climate fluctuates in the absence of any change in forcing, just as weather fluctuates from day to day. Climate also responds in a systematic way to climate forcings, but the response can be slow because the ocean requires time to warm (or cool) in response to the forcing. The response time depends upon the rapidity with which the ocean circulation transmits changes in surface temperature into the deep ocean. If the climate sensitivity is as high as the 3°C mid-range, then a few decades are required for just half of the full climate response to be realised, and at least several centuries for the full response. Such a long climate response time complicates the climate change issue for policy makers because it means that a discovered undesirable climate change is likely to require many decades to halt or reverse. Increases in the temperature of the ocean that are initiated in the next few decades will continue to raise sea level by ocean thermal expansion over the next several centuries. Although society might conclude that it is practical to live with substantial climate change in the
coming decades, it is also important to consider further consequences that may occur in later centuries. The climate sensitivity and the dynamics of large ice sheets become increasingly relevant on such longer time scales. It is also possible that climate could undergo a sudden large change in response to accumulated climate forcing. The paleoclimate record contains examples of sudden large climate changes, at least on regional scales.
Understanding these rapid changes is a current research challenge that is relevant to the analysis of possible anthropogenic climate effects. The time required for the full response to be realised depends, in part, on the rate of heat transfer from the ocean mixed layer to the deeper ocean. Slower transfer leads to shorter response times on Earth's surface.

EFFECTS OF INCREASED CO2 ON PHOTOSYNTHESIS
The net primary production contributes to the living biomass, which is decreased by grazing, diseases, and harvesting, and by transition to dead organic matter. Most of the dead organic matter is decomposed in the course of time. The questions to be dealt with concern the influence of increasing atmospheric CO2 on the net primary productivity and, subsequently, on the amounts of living and dead biomass. It is useful to distinguish between natural and agricultural ecosystems. Agriculture is a human activity with the explicit goal of harvesting the produced organic matter. Therefore, no accumulation of organic matter may be expected in agricultural ecosystems, even if production increases. In natural ecosystems, accumulation depends on the rate of decomposition.
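The pool structure just described, with net primary production feeding living biomass, losses to dead organic matter, and decomposition, can be sketched as a minimal two-box model. All rate constants and initial values here are illustrative assumptions, not figures from the text.

```python
def step(living, dead, npp, loss_rate, decomp_rate, dt=1.0):
    """Advance a two-pool carbon model by one time step (years).
    loss_rate: fraction of living biomass transferred to the dead pool
    per year (grazing, harvest, litter fall); decomp_rate: fraction of
    dead organic matter decomposed per year. Values are illustrative."""
    to_dead = loss_rate * living
    decomposed = decomp_rate * dead
    living += (npp - to_dead) * dt
    dead += (to_dead - decomposed) * dt
    return living, dead

# Illustrative run toward steady state: living = npp/loss_rate,
# dead = npp/decomp_rate once inflow balances outflow in each pool.
L, D = 100.0, 100.0
for _ in range(500):
    L, D = step(L, D, npp=10.0, loss_rate=0.05, decomp_rate=0.02)
print(round(L), round(D))
```

The sketch shows the point made in the text: whether organic matter accumulates depends on the decomposition rate, since the dead pool's steady-state size scales inversely with it.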
The production of organic plant material is based on the formation of glucose out of water and carbon dioxide under the action of light. In terrestrial plants, water vapour inevitably escapes when the plant takes up carbon dioxide from the air. When the plant loses too much water and is threatened by drought, it usually reacts by closing the stomatal openings, so that both water loss and CO2 assimilation are reduced. Hence, water may be a limiting factor for net primary productivity. The first product of photosynthesis, glucose, is converted to other plant material, such as starch, cellulose, lignin, fat, and proteins. Formation of proteins requires nutrients such as nitrogen, phosphate, and potassium. These plant nutrients are usually taken up from the soil by the roots. Lack of nutrients limits protein formation and, ultimately, the photosynthetic activity of the plants. Temperature influences all life processes and is, therefore, a major factor in plant production. The response of plants to their environment depends on internal factors and varies from species to species. Fortunately, most species are sufficiently alike, especially in agriculture, to allow general statements. For some purposes, grouping of species is necessary.

Plant Groups
Two important groups of plants can be distinguished according to their photosynthetic performance, the C3 and C4 plants. The names derive from intermediate products in the biochemical pathway of carbon fixation. Some characteristics of C3 and C4 plants are given in Table 1. Details on the biochemical pathways can be found in Devlin and Barker, but here the main interest concerns the differences in the net CO2 assimilation rate. C4 plants
have a considerably higher optimum temperature than C3 plants, so that it is not surprising to find them mainly in warm regions. Many tropical grasses are among the C4 plants, while almost all plants from the temperate regions are C3 plants. Other factors being optimal, the net CO2 assimilation of a single leaf increases at first linearly with the intensity of the photosynthetically active radiation (PAR) which, for all practical purposes, coincides with the visible part of the spectrum. When light intensity is further increased, the CO2 assimilation levels off because of saturation with light. In the dark, the net rate of CO2 fixation is negative, due to respiration. At the so-called light compensation point, net CO2 assimilation is zero. A similar curve can be found for the response of net assimilation to CO2 concentration in the ambient air, provided light intensity is high enough. C3 and C4 plants show a marked difference in CO2 compensation point. The higher values in C3 plants are caused by photorespiration: an additional respiratory process induced by the combined action of light and oxygen. At low O2 concentrations, the photorespiration disappears and the C3 plants have the same CO2 compensation point as C4 plants. Apparently in C3 plants, O2 and CO2 have a competitive behaviour towards the carboxylating enzyme system. In C4 plants, oxygen does not have such an effect.

Table 1. Some characteristics of C3 and C4 plants

                                          C3               C4
CO2 assimilation rate in high light       2-4 g CO2/m2 h   4-7 g CO2/m2 h
Temperature optimum                       20-25°C          30-35°C
CO2 compensation point in high light      50 ppm           10 ppm
Photorespiration                          present          not present
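The light-response behaviour described above, with a near-linear rise at low light, saturation at high light, and negative net fixation in the dark, can be sketched with a simple saturating model. The parameter values below are illustrative assumptions, not measurements from the text.

```python
import math

# Illustrative sketch (values assumed, not from the text) of the leaf
# light-response curve: net assimilation rises almost linearly at low
# light, saturates at high light, and is negative in the dark because
# dark respiration is subtracted.

def net_assimilation(par, a_max=4.0, efficiency=0.05, respiration=0.4):
    """Net CO2 assimilation (g CO2 per m^2 per h) for a given PAR level,
    using a saturating exponential minus dark respiration."""
    return a_max * (1.0 - math.exp(-efficiency * par / a_max)) - respiration

print(net_assimilation(0.0) < 0)       # dark: net fixation is negative
print(net_assimilation(800.0) > 3.0)   # high light: close to saturation
```

The light compensation point of the text corresponds to the PAR value where this function crosses zero.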
There exists a third group of plants with the so-called Crassulacean acid metabolism (CAM plants). These plants are able to absorb CO2 during the night and to fix it to organic acids. During daytime, the stored carbon dioxide is reduced photosynthetically. The stomata are open during the night and can be closed in daytime, so that water loss is very low in relation to dry matter production. A large storage capacity for carbon dioxide requires fleshy leaves. Therefore, this type of metabolism is limited to succulent plants such as Crassulaceae. The ecological advantage of CAM in dry regions is obvious, but in terms of dry matter production the quantitative importance is negligible. The pineapple is the only CAM plant used in agriculture.

Potential Annual Net Primary Productivity
The potential net primary productivity depends on which variables are assumed to be optimal. Man's ability to change climatic conditions is very limited. Therefore, it is standard procedure to take the climatic conditions as given external parameters and only to assume an optimal water and nutrient supply. Under such conditions, crop production is still limited by incoming radiation, by suboptimal temperatures, and by the length of the vegetation period. Under high light conditions, the leaves become saturated at a higher level in C4 plants than in C3 plants. De Wit calculated a potential gross rate of dry matter production of 37.5 g per m2 ground per day for a sunny day in June in the northern hemisphere. This calculation was made for a C3 plant. For C4 plants the result would probably be 30-50% higher. If respiratory losses are taken into account, the potential net dry matter production per m2 ground per day is 20 g for C3 and about 28 g for C4 plants.
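A minimal sketch of how such daily rates scale to annual totals with the length of the effective growing period; the season lengths used are those the text gives for northwestern Europe (about 100 days) and the tropics (year-round):

```python
# Scale a daily net production rate to an annual potential total.
# Growing-period lengths follow the text; the tropical figure comes out
# slightly above the quoted ~7000 g/m^2 because respiratory losses there
# reduce the daily rate.

def annual_npp(daily_net_g_per_m2, growing_days):
    """Potential annual net primary production, g dry matter per m^2."""
    return daily_net_g_per_m2 * growing_days

print(annual_npp(20.0, 100))   # NW Europe: 2000 g/m^2
print(annual_npp(20.0, 365))   # tropics: ~7300, quoted as about 7000 g/m^2
```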
The effective growing period in northwestern Europe is about 100 days, so that the potential annual net primary production is about 2000 g/m2. In the tropics, the growing season covers the whole year. Due to higher temperatures, respiratory losses are larger, so that C4 plants there produce about 20 g/m2 per day at most, the same level as for C3 plants in temperate regions. Potential net annual primary productivity in the tropics, therefore, amounts to 7000 g dry matter/m2. These potential yields have been experimentally achieved, with water and nutrients optimally available and in disease-free cultures. Under farming conditions, and especially in natural vegetation, the circumstances are not as favourable, since one or more factors are suboptimal. Although interactions between different factors must sometimes be taken into account, the most fruitful way to analyse deviations from the potential productivity is to consider only one factor as limiting. After this particular factor has been improved, another one may become limiting. Some possible limiting factors will be discussed in some detail.

Limiting Factors

Shortage of Nutrients

If man does not harvest the produced organic material, it will return to the soil as fallen leaves, seeds, etc. During microbial decomposition, part of the nutrients will again become available to the plants. They are added to the quantities released by weathering of soil minerals and to those supplied by rain and microbial nitrogen fixation.
Therefore, in mature ecosystems the net primary productivity may sometimes approach the potential NPP. However, nothing should then be harvested by man. In primitive agricultural systems, such as shifting cultivation, the stock of nutrients built up in the past is made available
to a crop by forest burning. For the first few years, the crop is reasonably well supplied with nutrients, but the level soon declines through uptake and leaching. Without fertiliser application, the annual yield will reach a rather low equilibrium level, determined by the natural sources of nutrients. The most important plant nutrients are nitrogen, phosphorus, and potassium (N, P, K), but other elements such as Ca, Mg, Fe, Cu, and Mn are also required, the latter in smaller amounts. Deficiency of an element not only shows up in a characteristic symptom, but it invariably reduces the net primary production. Each of the nutrients plays a role in specific processes. In rye grass, the minimum nitrogen content of living tissue is 1.6%. In small grains, the minimum nitrogen content of straw is 0.4% and of kernels 1%, resulting in a minimum requirement of 14 g N per 1000 g grain (grain : straw = 1 : 1). The annual natural nitrogen supply (rain + microbial activity) is about 3 g N/m2 per year so that, without additional fertiliser application, grain productions greater than 200 g/m2 per year are seldom recorded. However, nitrogen fixation by leguminous species or by blue-green algae in paddy fields can increase this level by a factor of 2 or 3. Without such additional fixation, nitrogen is the main limiting factor for natural production, even under semiarid conditions.

Climatic Factors
Mean annual temperature and mean annual precipitation can be considered as factors that determine the type of natural vegetation. The decline in NPP with decreasing mean annual temperature is mainly due to a decrease in length of the growing season. The influence of temperature on the performance of a single species may not be taken
as representative of its influence on net primary productivity as a whole. With changing climatic conditions, the species composition changes to such an extent that the net primary productivity is not limited by mean temperature, as long as its value is higher than 10°C. Because of water shortage, the actual growing period of the vegetation may be shorter than the period permitted by temperature. The potential evapotranspiration during the summer months in the Netherlands is about 500 mm, whereas the average rainfall in this period is 300 mm. To permit unrestricted growth, 200 mm of water should, therefore, be stored in the soil from excess precipitation in winter. A clay or loam soil can retain this amount of water in the potential rooting zone, but sandy soils cannot, and there water shortage may occur in summer. Sometimes the water cannot be taken up, because growth and functioning of roots are prevented by anaerobic conditions. Paradoxically, such conditions occur in poorly drained soils, suffering from water surplus. Drainage of soils and a good supply of water are, therefore, extremely important for good agricultural production. Water shortage builds up gradually, while it becomes more and more difficult for the plant to withdraw water from the soil. Some plants react by closing their stomata, but others do not. There is an important ecological difference between these two groups. Those that close their stomata are savers that wait for better times. The others are spenders that use the water as fast as they can, and hope that it will start raining in time or that they will have completed their life cycle by the time the water is finished. Still, there is no marked difference in the ratio of transpired water and dry matter formed between these types of plants. This ratio does decline with
fertilisation, because nitrogen increases dry matter production but not the transpiration rate. Since water shortage often limits production, the relation between crop transpiration and crop production is a classical problem of agricultural science. In northwestern Europe the transpiration and net production rates are approximately proportional to radiation. In semiarid regions, where bright sunshine prevails, photosynthesis is saturated with light for a large part of the growing season. A unique relation between production and water use is obtained when the latter is divided by the free water evaporation. Different species use water with varying degrees of efficiency; in particular, C4 plants use water twice as efficiently as C3 plants. Typical ratios for the amount of transpired water per unit dry matter produced are 200 for C4 plants and 400 for C3 plants. Therefore, in regions with limited irrigation facilities, such as South America, Australia, and Africa, C4 plants are more productive.

Carbon Dioxide
Measurements on individual leaves show that, in many cases, a CO2 enrichment of the ambient air increases the net assimilation rate at light saturation. In other circumstances, saturation with CO2 occurs at 300 ppm, even under light saturation. Measurements by an enclosure method over a crop surface show for rye grass that CO2 enrichment enhances the assimilation rate over the whole range of light intensities, but that for maize the assimilation is constant above 200 ppm. This saturation effect is probably caused by CO2-induced stomatal closure. In a number of plant species, stomatal conductance is regulated in such a way that the
CO2 concentration in the substomatal cavity is approximately constant. When the CO2 assimilation rate rises with increasing radiation intensity, the stomata open up further to compensate for the CO2 depletion in the cavity below them. The level at which the CO2 concentration inside the substomatal cavity is maintained is about 120 ppm in C4 plants and about 220 ppm in some C3 plants. Hence, the C4 plants maintain a CO2 concentration gradient across the stomata that is twice as large as in C3 plants. The ratio between CO2 uptake and water vapour loss, or the efficiency of water use, is therefore twice as large in C4 plants as in C3 plants. Sometimes plants simply keep their stomata wide open and do not react to increasing CO2 with closure. Goudriaan and Van Laar recorded such behaviour for the sunflower. Absence of this CO2-steered regulation of stomatal resistance results in higher transpiration coefficients (than with regulation), but also in increased CO2 assimilation rates at high light intensities. According to Raschke, CO2 regulation of stomatal resistance may be induced by water stress. Therefore, it may well be that the same species of sunflower does show regulation under field conditions, where water stress is almost inevitable. Considering the success of CO2 enrichment in the production of crops such as cucumber and lettuce in glasshouses, it is likely that such crops do not close their stomata (water and nutrient supply is optimal in glasshouses). Moreover, the experience with CO2 enrichment in glasshouses may not be extrapolated to field conditions because in glasshouses without CO2 fertilisation, the CO2
concentration in the air may drop to values as low as 100 ppm. As mainly C3 plants are grown in glasshouses, it is not surprising that such low values limit production. In the field, CO2 is usually not depleted to less than 250 ppm, because of the much better turbulent exchange with the atmosphere.

BIOCHEMICAL PATHWAY INVESTIGATIONS
The Alaska Department of Natural Resources has jurisdiction over the management of the recreation uses of the rivers (rafting, canoeing, and fishing) and mining. The U.S. Environmental Protection Agency monitors mining discharges into these watersheds as required by the National Pollutant Discharge Elimination System (NPDES) of the Clean Water Act. Finally, both sport and subsistence hunting are important in the region and are managed by Federal and State agencies. Placer gold was first discovered in the Fortymile Mining District in 1886 and has been mined there ever since. Yeend summarises the mining history of the gold placers of the Fortymile River region and provides details of the Tertiary and Quaternary deposits that support those placers. Within the past 3 or 4 years there has been a dramatic increase in exploration and claim-staking activity in this region of Alaska. This renewed interest is due mainly to the discovery of highly Au-mineralised quartz bodies, several hundred meters below the surface, on the Pogo property in the Goodpaster River watershed. It is this discovery, and the likelihood of additional ones, that is a major driver for the regional environmental geochemical studies currently being conducted within the U.S. Geological Survey Mineral Resources Programme.
Variations in Climate Change
ECOSYSTEM PROCESS
A conceptual framework for this work is taken from the state-factor model as detailed by Van Cleve and others. This approach assumes that the processes by which energy, nutrients, and trace elements flow through the ecosystems of the region are controlled by five "exogenous" conditions (state factors): regional climate, biota (microbes, plants, and animals), topography, soil parent material, and time. Ideally, in order to study ecosystem perturbations (either natural or human-caused), only one of these state factors should be varied while the other four are kept as constant as possible. Unfortunately, these factors may or may not be independent of each other and may be difficult to isolate (for example, time); however, as Van Cleve and others explain, subarctic boreal forest ecosystems are both extreme (cold, dry climate) and simple (low number and diversity of biotic species) compared to ecosystems in more temperate regions. Terrain, Vegetation, and Hydrology
The Interior Highlands Ecoregion (also known as the Yukon-Tanana Upland) is characterised by vegetated, rounded, low mountains with scattered, sparsely vegetated to barren high peaks (up to a maximum elevation of about 2,200 m).
The vegetation of the region is classified by Viereck and Little as closed spruce-hardwood forest containing tall to moderately tall white and black spruce (Picea glauca and P. mariana, respectively), paper birch (Betula papyrifera), aspen (Populus tremuloides), and balsam poplar (Populus balsamifera). Fires are common, due mainly to lightning strikes, and most of the study area shows evidence of past fire disturbance.
The area has a continental climate, and the weather station at the village of Eagle (70 km north of the study area) records the following average annual weather values for the period 1949-2000: precipitation, 302 mm (12 in.); minimum mean annual temperature, -10.9°C (12.7°F); maximum mean annual temperature, 2.1°C (36°F); snowfall, 142 cm. The Fortymile and Goodpaster Rivers drain mostly subarctic boreal forest, tundra, and muskeg having discontinuous permafrost. Even soils not underlain by permafrost are commonly frozen to various depths for much of the growing season. Over the year, discharge by the Fortymile River and its tributaries is highly variable because of (1) a rapid spring ice-break-up period (April-June), (2) runoff from storm events (the frozen nature of the terrain accentuates the runoff potential), (3) summer dry periods, and (4) freeze-up. Maximum flow on the mainstem of the river in May can exceed 340 m³ s⁻¹ (1 m³ s⁻¹ = 35.31 ft³ s⁻¹), and base flow during January, February, and March is <1 m³ s⁻¹. There is usually some flow on the mainstem in midwinter from groundwater sources. Breakup usually occurs in late April. The rivers and streams of this region are dark ("black-water") because of dissolved organic matter. The water is, however, low in conductivity (~90-220 µS cm⁻¹), has pH values that are commonly above 7.3, and has turbidity values that seldom exceed 2 ntu. Geologic Framework
Bedrock and associated surficial deposits control the primary minerals and chemical elements that are available for weathering and that ultimately enter hydrologic and biologic systems. Therefore, understanding the geologic framework of the area is important to assessing the geoavailability and bioavailability of metals in the area.
In a complementary project, new geologic maps have been compiled of both watersheds. These build on the previous geologic mapping of Foster, Foster and others, and Dusel-Bacon and others. This new effort was conducted to address two specific needs: (1) the structural characterisation of regional lithologic units that underlie the study area; and (2) the definition of any possible mineralised and (or) altered zones that might occur in the study area. This detail of information is critical to the assessment of both the subsurface hydrologic flow patterns and the chemistry of surface and subsurface water, soils, and vegetation. The supracrustal rocks are biotite schist, biotite-amphibole schist, quartzite, marble, sulphide-rich biotite schist, and pelite. The protoliths for these supracrustal rocks are, respectively, graywacke, mafic volcanic and compositionally equivalent intrusive rocks, quartz-rich sandstone, limestone, sulphide-rich siliciclastic sediments, and pelitic sediments. Late metamorphic sulphide-bearing quartz veins cut all of the supracrustal rocks. The supracrustal rocks are interpreted to have been originally deposited on a continental margin and (or) distal to an island-arc complex in a back-arc basin. Intruding the metamorphic supracrustal rocks are three main granitoid suites. The Steele Creek Dome Tonalite is a composite body of foliated biotite-hornblende tonalitic orthogneiss containing country-rock rafts of paragneiss. Two-mica, garnet-bearing leucogranite bodies locally invade the supracrustal rocks.
Study of Soils
Soils at the study sites are classified as Cryaquepts (Inceptisols) having variable amounts of undecomposed (fresh) and decomposed organic matter in the A horizon. Van Cleve and others estimate that about 78 percent of the land area of Alaska is occupied by Inceptisols. More than 70 percent of sites examined in this study thus far have been near saturation, and permafrost was commonly encountered 15 to 50 cm below the surface. Most sites have silty-loam to fine-sandy-loam A soil horizons with abundant root penetration. The A horizon is usually less than 10 cm thick and light brown in colour. Several of the sites have ash/charcoal layers within the A horizon, indicating past fire disturbance. The B horizon is usually about as thick as the A horizon, lighter in colour, and contains moderate root volume. The deepest soils (C soil horizons) commonly extend to depths greater than 20 to 40 cm and consist of fine- to coarse-sand-textured material with small blocks of angular bedrock and few roots. Because of the low temperatures, these soils experience limited chemical and biological weathering; therefore, the clay content and cation exchange capacity are low. Nevertheless, slow organic matter decomposition and nitrification does produce slightly acid soil conditions in the mature forest ecosystem. The presence of silt (micaceous glacial loess) in the upper horizons is ubiquitous, and it has an important influence on overall soil chemistry. Foster and others state that in the Big Delta 1:63,360 quadrangle, silt and sand from the Tanana River flood plain to the south dominate the composition and texture of soils within about 50 km of the river. Silt-loam-textured soils are common in interior Alaska and are derived from loess laid down during the Holocene and during the last glacial maximum.
CADMIUM PATHWAYS
Although both reconnaissance and detailed studies on the distribution of Cd in stream sediments and lithologic units have been conducted throughout Alaska, there have been no studies on the bioavailability of Cd and its biological transport mechanisms. Occurrence and Speciation of Cadmium
Biogeochemical investigations will be conducted on the pathways that control Cd transport and uptake by vegetation. The occurrence of Cd in eolian-dominated subarctic soils developed over major rock units will be examined, as well as its relative bioaccumulation in willow. The bioaccumulation of Cd by willow (Salix spp.) has been recognised for some time; however, it was Shacklette who presented some of the earliest data for Cd in willow and compared these data with other deciduous tree species. The connection of high levels of Cd in willow to adverse animal health, under natural (geogenic) conditions, has only recently been demonstrated. Gough and others present regional geochemical baselines for Cd and As in plants and soils. These values present a "snapshot" for the material sampled during this Fortymile River watershed study. In general, the soil Cd concentrations are slightly higher (two to three times) in our study area when compared to values from material collected elsewhere in Alaska, whereas Cd concentrations in plant material are similar. All geochemical data for the Fortymile River watershed available to date are published in Crock and others and include data from rock, bottom sediment, water, soil, and vegetation samples.
New studies that will be initiated in 2002 have three major objectives: (1) define the cycling of Cd and its bioaccumulation in willow and compare concentrations in willow to those in different rock types and soil horizons in mineralised and non-mineralised areas; (2) define the transient (and permanent) reservoirs for Cd (using soils and vegetation) and the probable transfer processes between these reservoirs; and (3) with the assistance of an Alaska state veterinary toxicologist, assess the relative importance of Cd concentrations in willow to the health of browsing animals (moose and ptarmigan). The importance of willow to moose is demonstrated by feeding studies that have developed the following order of browse preference: willow >> aspen > birch > alder > cottonwood. Mobilise and Redistribute Cadmium
Cadmium in soils within the study area is derived from the weathering of loess and of the primary metasedimentary and metavolcanic bedrock. In these soils, Cd is commonly adsorbed by various clay minerals and by calcium and magnesium carbonates. Cadmium does not form highly stable complexes with organic matter. In addition, in these cold boreal forest soils, where microbial decomposition rates are low, the Cd that is tied up in organic matter probably remains immobile for some time. Cadmium can form many compounds of low solubility by precipitation as carbonates, hydroxides, and phosphates. This process is less important, however, in the moderately low pH, generally low-carbonate soils of
the study area. Cadmium is readily leached from soils at pH values below about 5.0, especially if the soils have a sandy texture. Soils so far examined in the Fortymile watershed range in pH from 4.8 to 6.2. The oxic, slightly acidic soils of the region may permit a large amount of Cd to be absorbed and translocated by plants. Zinc has close geochemical similarities with Cd and competes for exchange sites in soil. However, these soils are not particularly high in Zn, and therefore Zn probably has a minimal adverse influence on Cd bioavailability. Impacts of Cadmium
In general, Cd is considered a heavy metal of major environmental concern because of its high mobility and the small concentration at which it can adversely affect plant and animal metabolism. In addition, there is growing concern over the increase in environmental Cd (mainly from industrial emissions) and its adverse impact on soil biological activity. Cadmium is a nonessential heavy metal and a powerful enzyme inhibitor. In a recent study of white-tailed ptarmigan in Colorado, Larison and others found that a diet of willow buds, with mean Cd concentrations as low as 2.1 ppm (dry weight basis), resulted in renal tubular damage and increased chick mortality. In addition, these authors hypothesised that Cd poisoning may be more widespread than previously suspected among other willow-feeding herbivores (for example, hare, beaver, and moose) in areas with high Cd in browse species. In a recent study by the Alaska State Department of Health and Social Services, soil, sediment, fish, and berries
from the Red Dog Mine area in northwest Alaska were examined for unusually high heavy metal levels. This study did not find metal levels in these foods to be excessive. In addition, the State wishes to know the level of Cd in moose muscle, liver, and kidney tissue, because these are consumed by subsistence hunters in rural Alaska. Very high levels of Cd in moose liver and kidney tissue have recently been reported as a concern for human consumption in Vermont.
6 Dynamics of Ecological Systems All ecosystems possess a characteristic trophic structure, and may therefore be studied by the investigation of that structure. In most systems, the only external source of energy is the sun, but, for many sub-systems, energy enters in the form of live or dead organisms, or in the form of decomposed organisms from another system. The energy is then used by organisms for synthesising new compounds for growth and reproduction, and also to maintain the cells in their bodies, for movement, and to maintain body temperatures. While the energy for these processes can be made available through the breakdown of organic molecules in respiration, not all of the energy released in this way is utilised by the organisms, so that a proportion is lost and dissipated as heat. As a result, there is a constant flow of energy through the ecosystem from primary producers to carnivores and decomposers, and a constant loss of energy to the atmosphere as a byproduct of respiration. ECOLOGICAL SYSTEMS
Particular interest has always been expressed in the ways in which organisms combine in communities which are characteristic of a particular type and place, and which
reflect past and present land use. Plant communities and associations over the world as a whole can be broadly classified into biomes. Biomes embrace the major vegetation types of the world and, within broad limits, have a characteristic productivity. Udvardy has extended this classification into biogeographical realms, biogeographical provinces within the realms, and major biomes or biome complexes. Biome classifications of this kind, combined with hypotheses about the effects of climatic factors, facilitate comparisons of productivity at regional and world scales. Measurement of changes in biomes provides information on the effects of major influences such as the clearing of forest, desertisation, reforestation, etc., which are usually a combination of man's activities and climate. Within communities, changes in populations of organisms reflect the responses of communities to climate, to modification by man, and to the natural processes of succession through which one complex of organisms gives way to another, leading ultimately to a climax or truncated climax community. Dynamic changes within communities occur at different rates at different localities, so that variations occur from place to place even within the same communities. Spatial and dynamic heterogeneities, therefore, have a marked effect on the patterns of change in ecological systems. It is essential to develop models or analogues of ecological systems which are capable of representing spatial and dynamic heterogeneity, so as to avoid assumptions of homogeneity which cannot be substantiated. Changes in populations are measured by comparing numbers of organisms at particular points of time and space, sometimes distinguishing between stages in the
development of the organisms, e.g. eggs, larvae, pupae, adults, etc., or between sexes and age classes. In more detailed studies, it may be possible to follow particular marked individuals at various points in time, and so determine patterns of distribution, survival, reproduction, feeding and death. An alternative to the study of an ecosystem by investigating changes in populations and communities of organisms is the tracing of flows of energy, nutrients and pollutants through the system. Radiant energy from the sun is trapped by green plants and combined with various chemical elements to form organic compounds which enable all the essential properties of life to proceed in living organisms. Such organisms can be classified according to their trophic levels, i.e. by their mode of nutrition. Green plants which obtain their energy from the sun are the primary producers and form the first of these trophic levels. The herbivorous animals which range from minute invertebrates to large mammals feed on these living plants, and are therefore described as being primary consumers; they occupy a second trophic level. Other animals prey upon the herbivores and form a third trophic level and, in turn, these carnivores are preyed upon by top or predatory carnivores to produce a fourth trophic level. The decomposers and detritus feeders form yet another trophic level, which accepts the residues from the other levels and turns them back into nutrients to be used by the primary producers, together with the sun's energy, to store new energy for the whole ecological system. Change in ecological systems may therefore be assessed by an examination of the changes in the flows of energy. Such investigations formed an important part of the International Biological Programme (IBP) and many techniques were developed during that Programme for the
measurement of biomass and the consequent changes of energy within different compartments of ecosystems. It is not, however, solely the flow of energy to the
trophic levels of an ecosystem which is of importance in the study of ecology. An almost equal interest is focused upon the flow of chemical elements, either as nutrients or as pollutants, through ecological systems. Primary producers take up mineral elements such as nitrogen and phosphorus, in the form of soluble mineral salts from surrounding soil or water. Herbivores and carnivores obtain nitrogen and phosphorus mainly as organic compounds in their food, though cattle and human beings may require supplements of raw minerals such as salt or copper. The dead organisms, together with their waste and excretory products, are broken down by decomposers (mainly bacteria and fungi) which release mineral nutrients in a form available for re-use by the primary producers. Similarly, there is a cycling of carbon released into the atmosphere as carbon dioxide as a result of respiration of plants, animals and decomposers. This carbon dioxide is taken up by green plants during photosynthesis, and the carbon passed on to animals when they eat the plants. In recent times, it has become increasingly important to study the flow of pollutant elements and compounds through ecological systems. Many of these pollutants are stored and concentrated in animal tissues, reaching their highest levels in predatory animals, after successive concentrations at lower trophic levels.
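The successive concentration of a pollutant along a food chain can be sketched numerically. The starting concentration and the per-transfer concentration factors below are invented for illustration only; they are not values from this study.

```python
# Illustrative sketch (invented numbers): a persistent pollutant is
# concentrated at each trophic transfer, so the highest tissue
# concentration appears in the top predators.

def biomagnify(base_conc, factors):
    """Return tissue concentrations at each trophic level.

    base_conc -- concentration in primary producers (ppm)
    factors   -- concentration factor applied at each trophic transfer
    """
    levels = [base_conc]
    for f in factors:
        levels.append(levels[-1] * f)
    return levels

# producers -> herbivores -> carnivores -> top carnivores
chain = biomagnify(0.01, [5.0, 4.0, 3.0])
names = ["producers", "herbivores", "carnivores", "top carnivores"]
for name, c in zip(names, chain):
    print(f"{name:>14s}: {c:.3f} ppm")
```

Even with modest concentration factors at each step, the top-carnivore tissue concentration ends up many times that of the primary producers, which is why predators are the natural place to look for pollutant accumulation.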
The mechanisms by which these substances pass through the trophic levels are therefore of particular significance, and it has become important to measure the changes taking place in such systems by the uptake and
storage of elements and compounds which are otherwise foreign to the natural system. Finally, the heritable characteristics of organisms are passed from parents to offspring by a complex series of genetic events. Successive generations of organisms are subjected to varying pressures by their environment and by competing organisms; differentiated selection of genotypes thus induces change into the ecological system through the genetic make-up of the component organisms. It is, therefore, possible to describe dynamic change in ecosystems by documenting the genetic composition of organisms, and to predict future changes by modelling heritability and selection processes within defined populations. Similarly, it may frequently be possible to predict the effects of selection pressures upon populations of organisms through genetic mechanisms. Such models require particular information about the extent to which various characteristics of organisms are associated with genetic components, together with a knowledge of the fundamental laws of genetics. In considering the dynamic change of ecosystems, we must not, of course, forget the important physical and chemical factors of the environment. Changes in these factors may require to be measured because of their impact upon organisms, or because they have themselves been altered by the organisms. Some of the more important factors may be related to the climate within which the organisms exist, either for a large area or for relatively small parts of the system, as in the soil. Some dynamic changes involve regular cycles, as in the diurnal rhythms and in the seasons, but many of the changes are relatively unpredictable, as is the climate from year to year at a particular location. Other longer-term
changes are associated with such factors as the melting of the ice in polar regions, sun spots, etc. Closely associated with climate are the factors of physiography which, particularly in Britain, have a strong influence upon the effects of climate through slope, aspect, and the drainage of soils. Similarly, the physical and chemical properties of soils are constantly changing, either through the effects of organisms themselves, or through deliberate attempts at management by man. The measurement and modelling of such changes become, therefore, an essential part of the modelling of dynamic change in ecosystems. To Measure Change in Ecological Systems
In the past, much of the measurement of change in ecological systems has been done through repeated survey. Such measurement requires an initial survey to provide a baseline against which change can be measured, followed by successive surveys to establish the direction and extent of change. There are, however, difficulties about the design of surveys to measure change, particularly when it is not clear what those changes may be. Thus, although a detailed survey may be made with the intention of providing the baseline for the monitoring or surveillance of change, and even though the precision of estimates made in the baseline survey may be high, the change itself may not be monitored with any great precision unless there is some clear hypothesis or hypotheses to direct the design of these surveys. Where, however, there are relevant hypotheses about the nature of the change, it may be possible to design repeated surveys which are capable of detecting change with reasonably high precision.
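One way to make the baseline-and-resurvey idea concrete is to count the same sample plots on both occasions and estimate change from the paired differences, which are usually much less variable than the counts themselves. The plot counts below are invented for illustration; they are not survey data from this text.

```python
# Minimal sketch (invented numbers): the same plots are counted at the
# baseline survey and again at a later survey; the paired differences
# give the change estimate and its standard error.
import math

baseline = [12, 8, 15, 10, 9, 14, 11, 13]   # plants per plot, survey 1
repeat   = [10, 7, 13, 10, 8, 12, 9, 12]    # same plots, survey 2

diffs = [b - a for a, b in zip(baseline, repeat)]
n = len(diffs)
mean_change = sum(diffs) / n
var = sum((d - mean_change) ** 2 for d in diffs) / (n - 1)  # sample variance
se = math.sqrt(var / n)                                      # SE of the mean

print(f"mean change per plot: {mean_change:.3f} (SE {se:.3f})")
```

The paired design is what gives the precision the text refers to: plot-to-plot variation cancels in the differences, so a small mean change can still be detected against a small standard error.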
Sampling with partial replacement, as developed for continuous forest inventory, is a particularly appropriate technique for use in such circumstances. A statistical checklist, highlighting some of the more important questions to be asked in the design of sample surveys, is given by Jeffers. Some authors make a distinction between monitoring and surveillance, depending on whether or not there is an attempt to correct the change taking place and bring it back to some stable state. Whether or not the ecological system is to be maintained in some preassigned state, it is necessary to determine the change which is taking place and to measure its extent. In essence, any investigation of dynamic change requires the definition and bounding of the problem to be framed as hypotheses which can be tested formally, even if that test can only be conducted after a chain of deductive reasoning from one or more hypotheses which are capable of direct verification. Three basic classes of hypotheses may be distinguished: hypotheses of relevance, identifying and defining the variables, species and sub-systems which are relevant to the problem; hypotheses of processes, linking the sub-systems within the problem and defining the impacts imposed on the system; and hypotheses of relationships, and of the formal representations of those relationships by linear, non-linear and interactive mathematical expressions. These three classes of hypotheses may well be linked within a formal chain of deductive argument, leading to processes which can be summarised by a decision table enumerating all the hypotheses, and combinations of
hypotheses, that must be specified in order to solve a particular problem. The decision table also specifies, for each combination of hypotheses, the decisions or actions that should be taken to ensure that the problem is correctly solved. Because decision tables provide a clear, concise format for specifying a complex set of hypotheses and the various consequent courses of action, they are ideal for describing the conditions for interaction between component parts of a model. These techniques can be extended to the enumeration of the necessary combinations of hypotheses for particular courses of action where uncontrolled events may intervene. The clearer definition of hypotheses about the nature and rate of change in ecological systems enables survey to be replaced by short-term and long-term experiments. The increased control over experimental areas provided by well-considered experimental design increases the precision with which change can be measured, and also offers the possibility of testing the interacting effects of various factors, including methods of management, conservation measures, and various forms of protective legislation. Indeed, by effective planning and design, it may be possible to provide a carefully assessed programme of environmental management linked to the detection of desirable or undesirable change in ecosystems. At the same time, unexpected change may be detected and included in subsequent measurements in the experimental areas. Regrettably, however, rather little emphasis has been given so far to the use of experiments as opposed to surveys taken as cross-sections in time, possibly because research workers are reluctant to commit their successors to maintaining the measurement and assessment of their experiments, or, alternatively, because research workers are reluctant to be committed to a programme of research by their predecessors.
It may well be that the academic and institutional organisation of science, in developed and developing countries alike, precludes the rational design of investigations to determine and measure change in ecological systems over anything more than half a decade. A statistical checklist of the questions to be asked when designing experiments to detect change in ecological systems is given by Jeffers. Ideal Models of Dynamic Change Vegetation
The species composition of all biological communities varies in time and space, the rate and character of the change being a function of the scale at which a community is examined, and of the external and internal factors which influence different populations in different ways. The interaction between organisms and environment that characterises the point in time and space then becomes part of a larger system when arrays of sub-systems are combined into a biological community. It follows that communities and landscapes constitute a range of possibilities. These possibilities arise from combinations of the extent of differences between small sites and their patterns of diversity, the degree of biological influence on sites and on population regulation, the behaviour and longevity of dominant populations, the relative stability of populations, the roles of disturbance and succession, the kinds of succession and the extent to which species are replaced during successions. Classical ecology has been mainly concerned with the changes reflected in vegetation, particularly in vegetation redevelopment following perturbation of some community. This emphasis arises because plants provide the energy base for all other biological activity, and
because the vegetation provides the mechanical structure of the biological environment in which other organisms exist. The classical concepts of ecological succession involve two essential assumptions: first, that species replacement during succession occurs because populations tend to modify the environment, making conditions less favourable for their own persistence and leading to progressive substitution; and second, that a final and stable system, or climax, ultimately appears which is self-perpetuating, and is in balance with the physical and biological environment. This view of ecological succession, developed over the centuries, and based largely on observed relationships between vegetation and environment, and on patterns of revegetation on agricultural and forest land, has been generally accepted by the scientific community, even though obvious exceptions to the form of succession exist. Egler was probably the first to suggest formally that the classical model of succession may not apply in all situations; he referred to this classical model as 'relay floristics' and suggested that, in many cases, the initial floristic composition following a perturbation may dominate the entire pattern of subsequent succession. He suggested that, unless species persisted throughout the perturbation, or were able to enter the perturbed site shortly afterwards, they would not subsequently be represented in the community that developed. Evidence has gradually accumulated to suggest that the concept of the initial floristic composition may have wider applicability than was originally envisaged by Egler. More recently, Connell and Slatyer have proposed a broader overall system of successional processes which incorporates the possibility of the pathways of relay floristics and initial floristic composition operating independently or in combination, and which, in addition,
includes a pathway in which succession is truncated at a point short of the expected climax, a phenomenon which has been frequently observed (but seldom explicitly recognised) in successional theory. The basic dichotomy between the classical 'relay floristics' concept (pathway 1) and the other models (pathways 2 and 3) is reflected in the immediate divergence of the pathways. In model 1, referred to by Connell and Slatyer as the 'facilitation' model, the classical replacement pattern occurs, each successive suite of species which occupies the site tending to make the environment less favourable for its own persistence and more favourable for its successors to invade and grow to maturity. In model 2, the 'tolerance' model, environmental modifications induced by earlier colonists may either increase or decrease the rates of recruitment and growth to maturity of later species. The latter appear later because they either arrived later or, if present directly after the perturbation, had their germination inhibited and their growth suppressed. In contrast to the 'facilitation' model, in model 3, termed the 'inhibition' model by Connell and Slatyer, the early occupants, rather than facilitating the progressive occupancy by other species, inhibit the invasion of other species by pre-empting available space through physical occupancy, through physical competition and the use of allelopathic substances, or through other effective means of inhibition. This inhibition has the effect of truncating the succession at a stage that would normally be regarded as being composed of non-climax species. Later-succession species may only be able to enter the site when the inhibiting species are damaged or killed. If there is a subsequent perturbation, new succession may well follow a different pathway, avoiding a repetition of successional truncation.
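Succession pathways of this kind are often caricatured as a Markov chain over community states. The sketch below is not from Connell and Slatyer; the states and transition probabilities are invented purely to illustrate how an 'inhibition' stage, given a high probability of remaining in place, stalls the chain short of the climax.

```python
# Caricature (invented states and probabilities) of succession as a
# Markov chain: the 'shrub' stage has a high self-transition probability,
# modelling inhibition that truncates succession short of climax.
import random

STATES = ["bare", "pioneer", "shrub", "climax"]
# transition[state] -> {next_state: probability}; each row sums to 1
TRANSITION = {
    "bare":    {"pioneer": 0.9, "bare": 0.1},
    "pioneer": {"shrub": 0.5, "pioneer": 0.5},
    "shrub":   {"shrub": 0.9, "climax": 0.1},   # inhibition: succession stalls
    "climax":  {"climax": 1.0},
}

def step(state, rng):
    """Draw the next community state from the transition row for `state`."""
    r = rng.random()
    cum = 0.0
    for nxt, p in TRANSITION[state].items():
        cum += p
        if r < cum:
            return nxt
    return state

rng = random.Random(42)   # seeded for a repeatable run
state = "bare"
path = [state]
for _ in range(20):
    state = step(state, rng)
    path.append(state)
print(" -> ".join(path))
```

Running this repeatedly shows long runs in the 'shrub' state before any transition to climax, which is the truncation phenomenon the text describes; removing the high self-transition recovers the classical relay pattern.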
In essence, the Connell and Slatyer concepts are based on the simple premise that the presence of a particular species in a community is dependent on the product of two probabilities. The first probability is that of a propagule being available at a site, itself a function of the ability to survive a perturbation or to reach the site by appropriate dispersal mechanisms. The second probability is that of the propagule being able to become established at the site and reach reproductive maturity, itself a function of the environmental requirements of the species, its adaptive ability and its reproductive strategy in relation to the prevailing environment. Populations
Dynamic change in ecosystems may be measured directly by assessment of the numbers of plants and animals to be found in some convenient unit of space and time. The counting of numbers of plants per unit area at a particular instant of time is usually a straightforward, if tedious, task. Where the number of plants is large, sampling methods may be used to limit the numbers that have to be counted, and counts may also be limited to the larger and more conspicuous plants. Problems of identification, especially in the younger stages of most plants, will also frequently limit the extent to which a complete count of all species of plants occurring in a defined area is attempted. The measurement of plants to determine the biomass of vegetation and the rates of growth of individual species and communities of plants is a more difficult task, and comprehensive procedures have been designed to ensure that effective standards are established for comparative measurements. The estimation of primary production of forests for the International Biological Programme is described by Newbould, and the measurement of primary production of grassland is described by Milner and Hughes. This Handbook will not attempt to repeat the information contained in texts on population assessment and measurement, but instead concentrates on the underlying mathematical models for the counts and measurements obtained. Counts of animal populations and measurements of the growth rates of animals are even more difficult, if only because of the difficulty of catching the animals. It may also be necessary to identify, count, and measure animals at different stages of their development, particularly when the purpose of the investigation is to model the population dynamics of one or more species. Dempster emphasises that a real understanding of the factors determining the abundance of animals can be obtained only by the intensive study of animal populations in the field, and describes the methods available for such studies. The practical focus for models of population change will depend entirely on the hypotheses which are formulated for any particular investigation. For some studies, interest may be concentrated on only one species, but most studies are likely to be concerned with two or more species, as in the investigation of predator-prey relationships, or in the elucidation of the complex interrelationships of competition between species for the critical factors of a habitat.
Communities
The study of ecosystems at the level of biomes has already been referred to in the discussion of the definition of ecological systems. Change of one biome to another takes place only rarely, and probably only when there is a major climatic shift, or as a result of wholesale intervention by man. Within biomes, however, there are frequently smaller changes which are detectable because of the variations in the communities of plants and animals which are to be found at particular sites. Indeed, spatial variation or heterogeneity may provide broad indications of changes which will take place in time. Greatest interest is usually centred on those changes in communities of plants and animals which are related to measurable changes in the physical and chemical factors of the environment. In this way, a biogeographical interpretation of the dynamic and spatial variations may be achieved, leading to carefully structured hypotheses about the development of ecotones, climax states and seres. However, it is important to ensure that such hypotheses are capable of being tested, if the conceptual models of dynamic change are to be regarded as scientific as opposed to philosophical.
Management
By far the greatest influence on conceptual models of dynamic change in ecosystems is that of deliberate management by man to achieve particular aims. From earliest times, man has sought to maintain and improve his standard of living by hunting animals for food, clothing, and working materials such as bone, sinew, and horn. His pursuit and capture of animals for these purposes modified ecological systems already changing for other reasons, sometimes leading to the extinction of his prey. With increasing knowledge, man was able to domesticate some animals and to adjust his rates of cropping domestic and wild animals so as to sustain human populations in the modified ecosystems in which these animals lived. Nevertheless, unexpected changes still occurred, sometimes because of diseases and pests which changed the carefully contrived balance between the animals and the rest of the ecosystem, sometimes because of economic and social pressures which induced changes in demand and supply faster than the ecological system could itself respond. Similarly, the harvesting and cropping of wild plants quickly led to methods of cultivation which have transformed whole ecosystems throughout the world. Ecologists sometimes pretend that these cultivated ecosystems are less interesting by comparison with natural or semi-natural ecosystems, but this view is an artefact of the perceptions induced by urban-orientated education systems, leading to a higher valuation of the 'wild' and 'natural' than of the modified and cultivated. From the point of view of the complex interrelationships between plants and animals, and of both with their environment, changes in cultivated ecosystems are as ecologically interesting as those in natural or semi-natural ecosystems. Indeed, the study of such changes, in response to economic, social and technical initiatives, is at the heart of the applied ecology of agriculture and forestry. While much of the management of cropped ecosystems is concerned with judicious disturbance, disease and pest control, and with regeneration, an especially important influence on dynamic changes in such ecosystems is that which is due to deliberate manipulation of the genetic composition of the crop plants or animals. Such manipulation may also be extended to weed species, and to pests and diseases. In natural and semi-natural ecosystems, genetic changes are usually relatively slow. In managed ecosystems, it is usually necessary to speed up these changes by programmes of plant or animal breeding. The new strains which result from these programmes then make new demands on the ecosystems for which they were created, inducing further modification and change.
Theory and Applications of Ecology
In particular, the changes related to the flow of energy through the various trophic levels will provide a basic concept around which many of the models can be clustered and interrelated. Closely associated with the flows of energy will be the flows of nutrient and pollutant elements and compounds, which may act as stimulants or inhibitors of the growth, reproduction and distribution of organisms. The ultimate fate of many of these substances will help to determine the nature and extent of dynamic changes. Models of population dynamics may then be used to study the processes of ecological succession, although some problems of succession will be addressed directly by appropriate models. Finally, the changes related to evolution and genetic composition will necessarily be based on genetic models which consider the patterns and mechanisms of inheritance. Obviously, they are especially relevant to many kinds of fundamental research in ecology. Indeed, many of us believe that little progress can be made in ecology unless mathematical models are used to describe the complex interrelationships between organisms, and between organisms and the physical and chemical factors of their environment. There are, however, many fields of practical application for which these models are relevant. Agriculture and forestry, as two broad fields of applied ecology, have already been mentioned. Wildlife conservation, including both plants and animals, is another field of applied ecology where the modelling and prediction of dynamic change is essential if wise decisions are to be made about the conservation of specific organisms or habitats, or the management of nature reserves.
7 Modelling Techniques
The use of a mathematical notation in the modelling of complex systems is, therefore, an attempt to provide a representational symbolic logic which simplifies, but does not markedly distort, the underlying relationships. The use of symbolic logic, because it is essentially a simplification, necessarily gives an imperfect representation of reality, and must therefore be regarded as a caricature. Nevertheless, the various mathematical rules for manipulating the relationships enable predictions to be derived of changes which may be expected to occur with time in ecological systems as various component values of these systems are changed. These predictions, in turn, enable comparisons to be made between model systems and the real systems which they are intended to represent, and, in this way, the adequacy of the model can be tested against observations and data derived from the real world. This 'appeal to nature' is an essential part of the scientific method. Indeed, manipulation of the model system may itself suggest the experiments which are necessary to test the adequacy of the system.
The advantages of formal mathematical expressions as models are:
they are precise and abstract; they transfer information in a logical way; and they act as an unambiguous medium of communication.
They are precise because they enable predictions to be made in such a way that these predictions can be checked against reality by experiment or by survey. They are abstract because the symbolic logic of mathematics extracts those elements, and only those elements, which are important to the deductive logic of the argument, thus eliminating all the extraneous meanings which may be attached to the words. Mathematical models transfer information from the whole body of knowledge of the behaviour of interrelationships to the particular problem being investigated, so that logically dependent arguments are derived without the necessity of repeating all the past research. Mathematical models also provide a valuable means of communication because of the unambiguity of the symbolic logic employed in mathematics, a medium of communication which is largely unaffected by the normal barriers of human language.
The disadvantages of mathematical models lie in the apparent complexity of the symbolic logic, at least to the non-mathematician. In part, this is a necessary complexity: if the problem under investigation is complex, it is likely, but not certain, that the mathematics needed to describe the problem will also be complex. There is also a certain opaqueness about mathematics, and the difficulty that many people have in translating from mathematical results to real life is not confined to non-mathematicians. It is, therefore, always important to ensure that the results of mathematical analysis are correctly interpreted and to translate solutions from mathematical formulae into everyday language. Perhaps the greatest disadvantage of mathematical models, however, is the distortion that may be introduced into the solution of a problem by inflexible insistence on a particular model, even when it does not really fit the facts. The pursuit of mathematical models is sometimes intoxicating, to the extent that it is relatively easy for scientists to abandon the real world and to indulge in the use of mathematical languages for what are essentially abstract art forms.
ANALYSIS OF SYSTEMS
Systems analysis provides a framework of thought designed to help decision-makers to choose a desirable course of action, or to predict the outcome of one or more courses of action that seem desirable to those who have to make decisions. In particularly favourable cases, the course of action indicated by the systems analysis will be the 'best' choice in some specified or defined way. The framework is intended to focus and to force hard thinking about complex, and usually large, problems not amenable to solution by simpler methods of investigation, for example by direct experimentation or by survey. The special contribution of systems analysis lies: (i) in the identification of unanticipated factors and interactions that may subsequently prove to be important; (ii) in the forcing of modifications to experimental and survey procedures to include these factors and interactions; and (iii) in illuminating critical weaknesses in hypotheses and assumptions.
The inherent complexity of ecological relationships, the characteristic variability of living organisms and the apparently unpredictable effects of deliberate modification of ecosystems by man necessitate an orderly and logical organisation of ecological research which goes beyond the sequential application of tests of simple hypotheses, although the 'appeal to nature' invoked by the experimental method necessarily remains at the heart of this organisation. Applied systems analysis provides one possible format for this logical organisation, a format in which the experimentation is embedded in a conscious attempt to model the system so that the complexity and the variability are retained in a form in which they are amenable to analysis. A further reason for the use of systems analysis in ecological research lies in the relatively long time scales which are required for that research. It is, therefore, necessary to ensure the greatest possible advance from each stage of the experimentation, and the models of systems analysis provide the necessary framework for such advances. Furthermore, the present state of ecology as a science, with its research effort scattered over a wide field, urgently needs a unifying concept. Not only is there a marked incompatibility between the many existing theories, but the weaknesses of the assumptions behind these theories are largely unexplored, partly because the assumptions themselves have never been stated. Systems analysis can, therefore, be used as a filter of existing ideas. Theories which have been shown to be incompatible can be tested as alternative hypotheses, and the systems analysis itself will frequently suggest the critical experiments necessary to discriminate between these hypotheses.
MATHEMATICAL MODELS
While some of the general properties of mathematical models have been touched upon above, an experienced systems analyst can recognise broad 'families' of models, in much the same way that an experienced botanist is often able to identify the genus to which a plant belongs, even when he does not know the species. The list is far from being exhaustive, and the categories are also not mutually exclusive. The classification is, however, probably sufficient to provide examples of the basic requirements of models applied to practical problems.
Functional Relationships of Systems
Many ecological models are based on studies of systems dynamics which are themselves based on servo-mechanism theory, coupled with the use of high-speed digital computers to solve large numbers of equations in a short time. These equations are more or less complex mathematical descriptions of the operation of the system being simulated, and are in the form of expressions for levels of various types which change at rates controlled by decision functions. The level equations represent accumulations within the ecological system of such variables as weight, numbers of organisms and energy, and the rate equations govern the change of these levels with time. The decision functions represent the policies or rules, explicit or implicit, which are assumed to control the operation of the system. The popularity of dynamic models of this kind arises from the flexibility of the models to describe systems operations, including non-linear responses of components to controlling variables and both positive and negative feedback. However, this flexibility has its disadvantages. It is, in any case, usually impossible to include equations for all the components of a system, as, even with modern computers, the simulation rapidly becomes too complex. It is, therefore, necessary to obtain an abstraction based on judgement and on assumptions as to which of the many components are those which control the operation of a system.
The application of systems dynamics in modelling involves three principal steps. First, it is necessary to identify the dynamic behaviour of the system of interest, and to formulate hypotheses about the interactions that create the behaviour. Second, a computer simulation model must be created in such a way that it replicates the essential elements of the behaviour and interactions identified as essential to the system. Third, when it has been confirmed that the behaviour of the model is sufficiently close to that of the real system, the model can be used to understand the causes of observed changes in the real system, and to suggest experiments to be carried out in the evaluation of potential courses of action.
Systems dynamics models have an intuitive appeal. The formulation of the models allows for considerable freedom from constraints and assumptions, and for the introduction of the non-linearity and feedback which are apparently characteristic of many ecological systems. The ecologist is thus able to mirror or mimic the behaviour of the system as he understands it, and gain some useful insight into the behaviour of the system as a result of changes in the parameters and driving variables. The power of computers to make large numbers of exact but small computations also enables the ecologist to replace the analytical techniques of integration by the less accurate, but easier, methods of difference equations. Furthermore, even when the values of parameters are unknown, relatively simple techniques exist to provide approximations for these parameters by sequential estimates, or even to use interpolations from tabulated functions. In particularly favourable cases, it may even be possible to test various hypotheses about parameters or functions.
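The level, rate and decision-function formulation described above can be sketched in a few lines of Python. The example below replaces the analytical integration of logistic growth by the Euler difference-equation method mentioned in the text; all parameter values are invented for illustration and are not drawn from any particular study.

```python
# A 'level' (biomass) updated at discrete time steps by a 'rate'
# equation, in the systems-dynamics style. All parameter values
# are invented for illustration.

def simulate(biomass=10.0, r=0.5, capacity=1000.0, dt=0.1, steps=200):
    """Euler difference-equation approximation to logistic growth."""
    history = [biomass]
    for _ in range(steps):
        rate = r * biomass * (1.0 - biomass / capacity)  # rate equation
        biomass += rate * dt                             # level update
        history.append(biomass)
    return history

trajectory = simulate()
print(round(trajectory[-1], 1))  # the level approaches the carrying capacity
```

Halving `dt` while doubling `steps` provides a simple check on the discretisation error, which is the price paid for abandoning analytical integration.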
The lack of a formal structure for such models and the freedom from constraints can, however, also be disadvantageous. For one thing, the behaviour of even quite simple dynamic models may be very difficult to predict, and it is easy to construct models whose behaviour, even within the practical limits of the input variables, is unstable or inconsistent with reality. More difficult still, determining the way in which such systems will behave frequently requires extensive and sophisticated experimentation. It is certainly always necessary to test the behaviour of such models in relation to the interaction of changes of two or more input variables, and seldom, if ever, sufficient to test the responses to changes in one variable at a time.
Matrix Models
The models considered so far usually strive for 'reality', a recognisable analogy between the mathematics and the physical, chemical or biological processes, sometimes at the expense of mathematical elegance or convenience. The price paid for this 'reality' is frequently a necessity to multiply entities to account for relatively small variations in the behaviour of the system, or some difficulty in deriving unbiased or valid estimates of the model parameters. Matrix models represent one family of models in which 'reality' is sacrificed to some extent in order to obtain the advantages of the particular mathematical properties of the formulation. The deductive logic of pure mathematics then enables the modeller to examine the consequences of his assumptions without the need for time-consuming 'experimentation' on the model, although computers may still be required for some of the computations.
Some of the most elegant of these matrix models are the Leslie deterministic models, which predict the future age structure of a population of animals from the present known age structure and assumed rates of survival and fecundity. Predator-prey systems, which sometimes show marked oscillations, can also be encompassed by matrix models, by a relatively simple exploitation of techniques for relating population size and survival. Seasonal and random environmental changes and the effects of time lags may similarly be incorporated, though the models necessarily become increasingly complex in formulation. Dynamic processes such as the cycling of nutrients and the flow of energy in ecosystems can also be modelled by the use of matrices, exploiting the natural compartmentation of the ecosystem into its species composition or into its trophic levels. Losses from the ecosystem are assumed to be the difference between the input and the sum of output from, and storage in, any one compartment. Although matrix calculations are sometimes extensive, especially in matrix inversion and in the calculation of eigenvalues and eigenvectors, and will often require the use of computers, they are usually less difficult to programme than those involved in dynamic models. Furthermore, the properties of the basic matrices of the models enable the modeller to exploit the deductive logic of pure mathematics. Matrix models, therefore, represent an important and neglected family of models in systems analysis. A particular type of matrix model which is especially useful in the modelling of dynamic change in ecological systems is the Markov model, in which the basic format is a matrix of entries expressing the probabilities of transition from one state to another at specified intervals.
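A Leslie projection of the kind described above can be written down directly. In the sketch below, the fecundity and survival rates are invented for illustration and are not taken from any real population; the dominant eigenvalue of the matrix, as the text notes, is an example of the deductive results that the matrix formulation makes available.

```python
import numpy as np

# Leslie matrix: fecundities in the first row, survival probabilities
# on the sub-diagonal. The rates below are invented for illustration.
leslie = np.array([
    [0.0, 1.5, 1.0],   # fecundity of each age class
    [0.5, 0.0, 0.0],   # survival from age class 0 to 1
    [0.0, 0.4, 0.0],   # survival from age class 1 to 2
])

population = np.array([100.0, 50.0, 20.0])  # present age structure
for _ in range(10):                          # project ten time steps
    population = leslie @ population

# The dominant eigenvalue gives the long-run growth rate per step.
growth_rate = max(abs(np.linalg.eigvals(leslie)))
print(population.round(1), round(float(growth_rate), 3))
```

Here the dominant eigenvalue is a little below one, so this hypothetical population declines slowly towards a stable age distribution.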
Statistical Models
The families of models so far considered have been mainly deterministic. That is to say, from a given starting point, the outcome of the model's response is necessarily the same and is predicted by the mathematical relationships incorporated in the model. Such models are necessarily mathematical analogues of physical processes in which there is a one-to-one correspondence between cause and effect. There is, however, a later development of mathematics which enables relationships to be expressed in terms of probabilities, and in which the outcome of a model's response is not certain. Models which incorporate probabilities are known as stochastic models, and such models are particularly valuable in simulating the variability and complexity of ecological systems.
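The deterministic-stochastic contrast can be made concrete in a few lines. The sketch below compares the deterministic expectation of simple exponential growth with repeated stochastic realisations; the initial population size and birth probability are invented for illustration.

```python
import random

# Deterministic versus stochastic versions of simple exponential
# growth. The growth probability is invented for illustration.
random.seed(42)

def stochastic_population(n0=50, p_birth=0.1, steps=20):
    """Each individual reproduces with probability p_birth per step."""
    n = n0
    for _ in range(steps):
        births = sum(1 for _ in range(n) if random.random() < p_birth)
        n += births
    return n

deterministic = 50 * (1 + 0.1) ** 20                  # expected outcome
runs = [stochastic_population() for _ in range(100)]  # repeated trials

print(round(deterministic), min(runs), max(runs))
```

The deterministic model always returns the same value; the hundred stochastic trials scatter around it, which is exactly the variability the text says such models are intended to mirror.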
Multivariate Models
A particular class of statistical models which has special relevance to the modelling of dynamic change in ecosystems is that of multivariate models, where it is necessary to consider changes in many variables, or variates, together. A variate is a quantity which may take any one of the values of a specified set with a specified relative frequency or probability. Such variates are sometimes also known as random variables, and are to be regarded as defined not merely by a set of permissible values, like any ordinary mathematical variable, but also by an associated frequency or probability function expressing how often those values appear in the situation under discussion. There are many situations in ecology and other applications of systems analysis where models have to capture the behaviour of more than one variate. These models are known collectively as 'multivariate' and are related to techniques known collectively as 'multivariate analysis', an expression which is used rather loosely to denote the analysis of data which are multivariate, in the sense that each individual under investigation bears the values of p variates. Broadly, these models may be divided into two main categories: those in which some variates are used to predict others; and those in which all the variates are of the same kind, and no attempt is made to predict one set from the other. For the latter, which may be broadly described as descriptive models, there is a further subdivision into those models in which all the inputs are quantitative, and those models in which at least some of the inputs are qualitative rather than quantitative. Predictive models, on the other hand, may first be subdivided according to the number of variates predicted, and then by whether or not all the predictors are quantitative.
Mathematical Programming
The term 'mathematical programming' describes a series of models whose aim is to find the maximum or minimum of some mathematical expression or function by setting values to certain variables which may be altered within defined limits. The underlying mathematics of these models was developed during the early application of mathematical techniques to practical problems in what has now come to be known as operational research. The simplest of these problems, in which the objective function and the constraints are all linear functions, can be solved relatively easily by standard methods. In essence, any inequalities in the constraints are first removed by introducing additional 'slack' variables. Any feasible solution to the problem is then sought and, once such a solution has been found, iterative attempts are made to
improve the solution, i.e. to move it closer to the defined optimum of the objective function by making small changes in the values of the variables. This iterative procedure continues until no further improvement can be made. Non-linearity in either the objective function or the constraints, or both, introduces disproportionate levels of difficulty in finding appropriate solutions. So, too, do problem formulations which impose limitations on the size of the lumps in which units of some particular resource can be allocated. There is, nevertheless, a well-developed theory of non-linear programming to cope with such problems. As a further extension, some large optimisation problems can be reformulated as a series of smaller problems, arranged in sequences of time or space, or both. A reformulation of this kind is frequently desirable in order to reduce the computational effort of finding a solution, although care has to be taken to ensure that the sum of the optimal solutions of the sub-problems approaches the optimum solution of the whole problem.
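As a minimal illustration of the linear case, the two-variable programme below (all coefficients invented) is solved not by the iterative simplex procedure just described, but by the equivalent device of enumerating the vertices of the feasible region, at one of which the optimum of a linear objective must lie.

```python
from itertools import combinations

# A tiny linear programme: maximise 3x + 2y subject to
#   x + y <= 4,  x <= 3,  y <= 3,  x >= 0,  y >= 0.
# Each constraint row (a, b, c) means a*x + b*y <= c; the non-
# negativity conditions are written with negated coefficients.
constraints = [
    (1.0, 1.0, 4.0),
    (1.0, 0.0, 3.0),
    (0.0, 1.0, 3.0),
    (-1.0, 0.0, 0.0),   # x >= 0
    (0.0, -1.0, 0.0),   # y >= 0
]

def intersect(c1, c2):
    """Solve the 2x2 system where two constraint boundaries meet."""
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel boundaries never meet
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])
```

Vertex enumeration is practical only for toy problems; its cost grows combinatorially with the number of constraints, which is precisely why the iterative improvement scheme described in the text is used in practice.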
Game Theory Models
Closely related to mathematical programming models are the models which are based on the theory of games. The simplest of these models is known as the two-person, zero-sum game. Such games are characterised by having two sets of interests represented, one of which may be nature or some external force, and by being 'closed' in the sense that what one player loses in the game the other must win. The theory can, however, be extended to many-person, non-zero-sum games. Such models represent an interesting, and so far little explored, alternative approach to the solution of strategic problems. The extension to the more complex non-zero-sum games and to many-person games, in which coalitions can be formed between the players, represents a field of research which deserves increased attention, particularly in ecological research related to the assessment of environmental impact and environmental planning.
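A two-person zero-sum game with a pure-strategy solution can be checked in a few lines. The payoff matrix below is invented: entries are the row player's gain and, the game being zero-sum, the column player's loss.

```python
# An invented payoff matrix for a two-person zero-sum game. When the
# maximin and minimax values coincide, the game has a saddle point
# and a pure-strategy solution.
payoffs = [
    [3, 1, 4],
    [2, 2, 5],
    [0, 1, 6],
]

# Row player: choose the row whose worst case (row minimum) is largest.
maximin = max(min(row) for row in payoffs)
# Column player: choose the column whose worst case (maximum) is smallest.
minimax = min(max(row[j] for row in payoffs) for j in range(3))

print(maximin, minimax)  # equal here, so the game has a pure-strategy value
```

When the two values differ, no saddle point exists and the solution instead involves mixed strategies, which can be found by linear programming, one of the links between game theory and mathematical programming noted above.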
Catastrophe Theory
The theory of catastrophes is an elegant development of mathematical topology, applied to systems which have four basic properties, namely bimodality, discontinuity, hysteresis and divergence. Catastrophe theory models have attracted much interest and attention since they were first proposed in 1970. The models have considerable intellectual and visual appeal, but are not easy to apply in highly multivariate situations. There are also serious difficulties to be overcome in estimating the parameters of the model from ecological data.
RELATIONSHIPS BETWEEN FAMILIES OF MODELS
It is clear that the list of families of models described above is far from being exhaustive, and that the categories are also not mutually exclusive. Thus, Markov models belong both to the family of matrix models and to the family of statistical models. Furthermore, some basic statistical models are frequently essential to the development of dynamic models. The distinction between deterministic and stochastic models is of particular importance. With a deterministic model, one will always arrive at the same predictions for given starting values, and for given values of the coefficients or parameters of the model. If, on the other hand, a stochastic model is used as a basis for simulation, the outcome of the simulation will not always be the same, even when the parameters and starting values are the same. The random elements in the model ensure variability, and the aim of such models is to mirror the variability found in living organisms and in ecological systems. As with experiments on the organisms themselves, it will usually be necessary to make repeated trials of the simulation in order to determine the ways in which the system will respond to various changes.
A further important distinction is related to the dimensionality of models, i.e. to the number of independent dimensions or variables that are included in the model system. Many variables will be inter-correlated, and the actual number of independent dimensions will, therefore, be smaller than the total number of variables.
Maynard-Smith makes a distinction between 'models' and 'simulations'. He regards a mathematical description with a practical purpose, which includes as much relevant detail as possible, as a 'simulation', and restricts the use of the word 'model' to descriptions of general ideas which include a minimum of detail. The aim of descriptive models is to provide as good a description as possible of the underlying processes or relationships on which dynamic change depends. Such models may help to weld together widely disparate theories about ecological systems, or, alternatively, they may help to show the incompatibility of commonly held beliefs or theories. Once formulated, the descriptive model will be used to obtain a further understanding of the way in which a single organism, a community of organisms, or a whole ecosystem will respond to various changes, natural or induced.
Alternatively, a model may be constructed primarily for the purpose of prediction. Such predictive models may pay relatively little attention to the physics, chemistry or biology of the underlying processes, but will be regarded as efficient only if they enable predictions of future states of the system to be made with a known degree of accuracy. Predictive models may be derived as the operational versions of descriptive models, the improved knowledge of the underlying processes being used to refine the predictive capability of the mathematical expressions. It may, nevertheless, be possible to develop predictive models directly, sometimes with drastic simplification of the mathematical assumptions about ecological processes. Efficient descriptions and efficient predictions are not necessarily closely related.
A third class of models may be distinguished as decision models. The aim of such models is neither to provide a description of the ecological system nor to predict the future state of such a system, but, instead, to guide practical decisions about the management or treatment of the system. In a sense, of course, both descriptive and predictive models can be used to guide practical decisions. Decision models are, however, specially formulated so as to provide such guidance more directly, and, in addition, by showing the consequences of particular choices about the management of an ecological system, they help to indicate a management strategy which is 'best' in some predefined way. Decision models are not usually derived from descriptive or predictive models, but are developed from families of models which have distinctive mathematical properties, like mathematical programming.
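The dual membership of Markov models, and the need for repeated trials of a stochastic simulation, can both be illustrated with a small transition-matrix simulation of vegetation change. The states and transition probabilities below are invented for illustration.

```python
import random

# A Markov model of vegetation change: invented transition
# probabilities between three states, applied at yearly intervals.
states = ["grass", "shrub", "wood"]
transition = {                # P(next state | current state)
    "grass": [0.6, 0.3, 0.1],
    "shrub": [0.1, 0.6, 0.3],
    "wood":  [0.05, 0.05, 0.9],
}

def run_chain(start="grass", steps=100, seed=1):
    """One stochastic realisation; repeated trials will differ."""
    rng = random.Random(seed)
    state = start
    for _ in range(steps):
        state = rng.choices(states, weights=transition[state])[0]
    return state

# Repeated trials, as the text recommends for stochastic models.
finals = [run_chain(seed=s) for s in range(200)]
share_wood = finals.count("wood") / len(finals)
print(share_wood)  # the persistent 'wood' state dominates in the long run
```

The same transition matrix, treated deterministically, yields the stationary distribution by matrix algebra; simulated stochastically, it yields a scatter of outcomes whose long-run frequencies approach that distribution.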
Three main classes of models are described in detail, namely dynamic or functional models, Markov models, and multivariate models. For all three classes, there is now sufficient experience of their application to the modelling of dynamic change to enable an assessment to be made of their usefulness, and of the difficulties of constructing models from the kinds of data likely to be available.
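As a concrete illustration of the Markov class mentioned above, the sketch below iterates a transition matrix for three vegetation states. The states and transition probabilities are invented for illustration and are not drawn from any study discussed here.

```python
# Minimal sketch of a Markov model of vegetation change.
# States and transition probabilities are hypothetical.

STATES = ["grass", "shrub", "forest"]

# P[i][j] = probability that a plot in state i is in state j one
# time step later; each row sums to 1.
P = [
    [0.70, 0.25, 0.05],  # grass  -> grass / shrub / forest
    [0.10, 0.60, 0.30],  # shrub  -> ...
    [0.02, 0.08, 0.90],  # forest -> ...
]

def step(dist, P):
    """Advance a state distribution by one time step."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def project(dist, P, steps):
    """Project the distribution a given number of steps ahead."""
    for _ in range(steps):
        dist = step(dist, P)
    return dist

start = [1.0, 0.0, 0.0]          # all plots begin as grass
after50 = project(start, P, 50)  # approaches the stationary distribution
```

Iterating the matrix shows the long-run behaviour implied by the transition probabilities: with these numbers the distribution settles towards a forest-dominated stationary state regardless of the starting composition.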
8
Evaluation of Terrestrial Systems

A long history exists for the experimental manipulation of whole ecosystems. Among limnologists, Juday et al. were among the earliest to manipulate a whole lake experimentally. Over a period of five years, they enriched Weber Lake, Wisconsin, with an assortment of six organic and inorganic fertilisers including phosphate, lime, potash and soybean meal. Suggesting that soybean meal was the most effective fertiliser in terms of increasing phytoplankton standing crop and stimulating fish growth, their results led them to conclude that "organic content is the limiting factor in the plankton production." Micro- and mesocosms are also unsuitable in long-term experiments, because the natural floral and faunal assemblages may become unbalanced as a result of the experimental conditions. For example, Schindler reported that for experiments lasting more than a few weeks, even the very large aquatic enclosures (limnocorrals) used in the Experimental Lakes Area (ELA), Canada, developed abnormally high populations of periphyton. Thus, the experimental manipulation of whole aquatic and terrestrial ecosystems is the most effective mechanism for assessing the fate of chemicals. Provided the time, the money, the manpower,
and, most importantly, the appropriate "system" is available, whole-system experiments can provide the opportunity to test the biological effects and responses of chemicals and to apply the scientific method in situ. Subsequent to the experiments of Juday et al., Hasler et al. added hydrated lime to two lakes in Wisconsin, in order to test whether the transparency of the water would be improved. Based on pretreatment data and also on data from relatively similar reference lakes nearby, Hasler and his co-workers found that light penetration and alkalinity were greatly increased in Cather Lake; however, in Turk Lake, which was limed in a slightly different manner, only a small effect of the treatment was observed. These results clearly demonstrated the need for improved controls during whole-lake manipulations. A now-classic experiment involving a whole-lake manipulation was conducted in the early 1950s. Peter-Paul Lake on the Michigan-Wisconsin border was a small, brown-water lake with two basins connected by a narrow, shallow channel. In 1951, Johnson and Hasler constructed an earthen dam across the narrow part of the lake and separated it into two lakes, Peter Lake and Paul Lake. Peter Lake was treated with hydrated lime (Ca(OH)2) beginning in 1951 and continuing until 1954, while Paul Lake remained as a control or reference lake. The significant feature of this experiment is that it was perhaps the first instance in which a whole-ecosystem "control" was part of the experimental design. As a result of these early experiments employing whole-lake manipulations, many other scientists have been inspired by the "ecosystem" approach for the study of the fate and behaviour of chemicals in the natural environment. The treatment of double-basin lakes as two lakes has been attempted in several other situations, e.g.,
in Lakes 226 and 302 of the Experimental Lakes Area of Canada and in Little Rock Lake, Wisconsin. In addition, the experimental manipulation of systems other than lakes has become prevalent in the literature. One example is the Reversing Acidification In Norway (RAIN) project, in which two pristine catchments were acidified by the addition of H2SO4 and of H2SO4 plus HNO3, respectively, while at an acidified catchment ambient acidic precipitation was excluded by means of a "roof" and clean precipitation was added beneath.

TYPES OF LAKE MANIPULATION
Since the first experiments of Juday et al. on Weber Lake in Wisconsin, dozens of experiments have been conducted using whole-lake manipulations. These experiments can be divided into two categories: (1) those involving only a "single" lake to which the test chemical is added and (2) those involving a "double-basin" lake which has been mechanically separated into two lakes that are then manipulated independently. The most common approach to manipulate lakes is to add the test chemical to a single lake. The major problem with this type of lake manipulation (as indeed is the case with all whole system manipulations) is in establishing a suitable "control" or "reference." As Likens has pointed out, in whole system experiments, "reference" is perhaps the more appropriate word to use because the complexity of natural ecosystems precludes the use of an absolute experimental "control." For single-lake manipulations, two ways exist to establish a control. The first is to use the lake itself as a reference. This approach requires that the lake be studied for one to several years prior to the experiment. Establishing the length of time needed before the treatment
begins can be a problem, because determining a priori the natural variation in the system is impossible. A long-term (i.e., ongoing) monitoring programme is ideal, provided it is inexpensive and simple. Also, it should include measurements of abiotic and biotic parameters that are sensitive not only to changes in the chemical currently under study, but also to changes in chemicals that might be tested in the future. Alternatively, or in addition to using the lake itself as a reference, other relatively similar lakes can be used as "controls" while the study lake is being manipulated. While this approach has the advantage that potentially fewer years of background data may be required prior to the commencement of the experiment, finding two or more lakes having identical morphometries, water renewal rates, food web structures, etc., is virtually impossible. Ideally, long-term data should be collected for the study lake prior to its manipulation, and other lakes should be monitored as references during the course of the experiment. Finally, data should continue to be collected even after the experiment has ceased. An alternative approach to the manipulation of a single lake is to mechanically divide a lake into two or more basins, and then to manipulate each basin individually. Peter-Paul Lake was separated into two lakes by an earthen dam, while at ELA and in Little Rock Lake, Wisconsin, a vinyl curtain reinforced with nylon (ELA) or Dacron (Little Rock Lake) was installed to separate the basins. In ELA Lake 226, the north basin received annual additions of C, N, and P, while the south basin received only C and N; and in Lake 302, P, N, and C were added to the hypolimnion of the north basin of the lake, and the south basin was left as a reference. In Little Rock Lake, the north basin was acidified with H2SO4 while the south basin remained as a reference.
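A minimal sketch of the baseline idea discussed above: pretreatment observations define the expected range, and post-treatment values are screened against it. The data and the mean ± 2 SD criterion are hypothetical illustrations, not the procedure of any study cited here.

```python
# Crude screen for a whole-lake manipulation: compare post-treatment
# observations against the pretreatment baseline (mean +/- k SD).
# All numbers are hypothetical, not data from any study.
from statistics import mean, stdev

# e.g. annual mean total P (ug/L) for several pretreatment years
baseline = [8.1, 7.6, 8.4, 7.9, 8.0]

def outside_baseline(value, baseline, k=2.0):
    """True if value falls outside mean +/- k sample standard deviations."""
    m, s = mean(baseline), stdev(baseline)
    return abs(value - m) > k * s

post_treatment = [8.2, 11.5, 14.0]
flags = [outside_baseline(v, baseline) for v in post_treatment]
```

A screen this simple only flags departures from pretreatment conditions; attributing them to the treatment still requires the reference lakes discussed in the text, since regional trends would shift the study lake and an untreated lake alike.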
The separation of a single lake into two provides a natural reference or "control." At ELA Lake 226, a problem developed with leakage of water between the two basins and movement of water over the surface collar. Alternatively, multibasin lakes that behave as individual lakes could be used. An example is Kennedy Lake, British Columbia, Canada, which is composed of two discrete arms separated by a very narrow, shallow sill with a depth of <10 m. Stockner and Shortreed fertilised Clayoquot Arm with P from 1978 to 1984 inclusive, while the Main Arm was fertilised only in 1979 and 1980. Similarly, at ELA, the natural sill between the two basins of Lake 302 was so effective at separating the basins that installation of the curtain in 1974 had little effect on the chemistry or the phytoplankton of the two basins. Whatever the approach selected to manipulate the whole-lake system, the method used to add the chemical to the lake depends on the chemical being tested. A great deal of information is available on the addition of nutrients to lakes, both in North America and in Europe. For example, at ELA in Canada, Lake 227 has been fertilised with N and P since 1969; Lake 304 was treated with N, P, and C in 1971 and 1972, with N and C in 1973 and 1974, and with P and N in 1975 and 1976; Lake 303 was treated during the summers of 1975 and 1976 with N and P, while Lake 230 was treated during the winters of 1974 and 1975 with N and P. In other parts of Canada, P has been added to three subarctic lakes in Schefferville, Quebec, while in British Columbia many lakes have been fertilised with both N and P. In Scandinavia, six lakes in the Telemark region of Norway were treated with P and/or N to improve fish stocks, while in Sweden N and/or P were added to Lakes Magnusjaure and Gunillajaure.
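The bookkeeping behind such fertilisation schedules is straightforward. The sketch below, with a hypothetical lake volume, target concentration, and annual load, converts a desired concentration change into an initial pulse mass and an annual load into weekly doses.

```python
# Back-of-envelope nutrient loading for a whole-lake fertilisation.
# Lake volume, concentrations, and annual load are hypothetical.

volume_m3 = 500_000       # hypothetical lake volume (m^3)
current_p_ugL = 8.0       # pretreatment total P (ug/L)
target_p_ugL = 16.0       # e.g. double the pretreatment concentration

# 1 m^3 = 1000 L, and 1e9 ug = 1 kg
litres = volume_m3 * 1000
pulse_kg = (target_p_ugL - current_p_ugL) * litres / 1e9

annual_load_kg = 17.6     # hypothetical maintenance load (kg P per year)
ice_free_weeks = 26       # weekly additions during the ice-free season
weekly_dose_kg = annual_load_kg / ice_free_weeks
```

For this hypothetical lake, doubling the in-lake concentration takes a 4 kg pulse of P, after which the annual load is split into weekly additions of roughly 0.68 kg. Real schedules, as the text notes, are then adjusted for outflow losses and in-lake concentrations.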
Phosphorus can be added to the lake in several forms. Hakanson et al. used emissions from fish-cage farms to add P (and other nutrients) to two lakes in Sweden. Sodium phosphate (Na2HPO4) was added to Lake 227 in Canada and to Lake Hymenjaure in Sweden in the first year of study; but in both studies, phosphoric acid (H3PO4) was substituted in subsequent years. Phosphoric acid would appear to be the preferred source of P for lake-water additions because of its greater solubility. If nitrogen is being added to the lake in conjunction with the P, the two nutrients can be added as ammonium phosphate ((NH4)3PO4) or as a commercial fertiliser. Nitrogen alone is added as either ammonium chloride (NH4Cl) or, more commonly, as ammonium nitrate (NH4NO3). Carbon was added to ELA Lakes 226, 227, 302N and 304 as sucrose, and to ELA Lake 224 as 14C-labelled NaHCO3. During the ice-free season, the cheapest and simplest method for adding these nutrients (with the exception of the radioisotope) is by pouring the liquid or predissolved fertiliser (in the case of dry chemicals) onto the surface of the lake through the prop-wash of the boat while crisscrossing the lake. To simulate point-source additions, the chemicals can be added using a semi-continuous trickle feed from a large (e.g., 200 L) barrel or drum situated on a raft in the middle of the lake or on the shore. If the funds are available, the fertiliser can be spread over the lake from an airplane. During the winter, nutrients can be dispensed through a hole in the ice or through the lake inflows. However, nutrients often are not added during the winter months because productivity is low at that time of year. While nutrients generally are loaded onto the surface or epilimnetic waters of the lake, Schindler et al. injected
P, N, and C into the hypolimnion of the north basin of Lake 302 to test the hypothesis that the phosphorus would be permanently transferred to the sediments before it could reach the surface waters and cause algal blooms. In the hypolimnetic experiment of Schindler et al., as in many other whole-lake fertilisation experiments, the nutrient additions commonly are made weekly during the ice-free season. Loadings can be uniform throughout the season, or they can be adjusted according to the P, N, or C concentration in the lake or the outflow discharge. If an immediate or dramatic increase in concentration is required, a large pulse of fertiliser is added at the beginning of the ice-free season. For example, after ice-out in June 1979, Smith et al. added 8 kg of P as H3PO4 to the surface of Lejeune Lake in order to double the pretreatment concentration of total P in the lake water. Thereafter, the lake was loaded at a rate of 17.6 kg P/yr at weekly intervals. A final point concerns the importance of pretreatment, post-treatment, and during-treatment sampling. The sampling regime, including the frequency of collection and the location of the sampling station(s), and the methodologies involved for both the chemical and the biological analyses must be well established prior to the commencement of the experiment; otherwise many years and much money will be wasted. Good sampling techniques are important not only for whole-lake manipulations involving nutrients, but also for those experiments in which other chemicals are added to the lake system. For chemicals such as sulphuric and nitric acid, some information is available on whole-lake manipulations as a result of the heightened concern over the effects of acidic precipitation. Pioneering work in this area was conducted at the ELA in Canada, where
experimental acidification of Lake 223 (in the form of H2SO4) was conducted beginning in 1976, acidification of Lake 114 (in the form of H2SO4) was conducted from 1979 to 1986 (Al2(SO4)3 was added in 1984), and acidification of Lake 302 (in the form of H2SO4 in one basin, HNO3 in the other) had been carried out from 1982 (Schindler 1991). At Little Rock Lake, Wisconsin, the north basin was acidified with H2SO4 from 1985 to 1990, while the south basin remained as a reference. Similar to the methods employed for nutrient additions, in each lake the acid was added by slowly pouring the concentrate from a moving boat into the prop-wash of an outboard motor. Physical mixing studies suggest that this method is sufficient to mix the acid into the epilimnion within a few hours after addition. Furthermore, despite the high density of the acid, no evidence was found of it sinking through the thermocline to the bottom of the lake. As with nutrient additions, the acidification regime for the ELA lakes was designed to reduce the pH to a predetermined value early in the ice-free season, and then to hold it at that value until the following spring. Thus, large quantities of acid were added to the lakes early in the season, followed by weekly loadings. This regime is analogous to a large pulse of acid entering the lake during spring snowmelt, followed by episodic additions from summer rains. When acid is added to lakes, it must be relatively free of impurities such as heavy metals; otherwise, the effects of the metal contaminants might be confused with the effects of the acid. When metals have been tested on whole-lake systems, radioisotopes have been used with some success. For example, the gamma-emitting isotopes 75Se, 203Hg, 134Cs, 59Fe, 65Zn, and 60Co were added to ELA
Lakes 224 and 226, and the alpha-emitting isotope 226Ra was added to Lakes 224, 226, 227, and 261. The isotopes were dispensed as either chloride salts or nitrate salts (203Hg only), with the exception of 75Se, which was supplied as Na2SeO3. In all studies, the radionuclides were combined in a metal drum containing 10 L of water (per lake or lake basin). The drum was then mounted on a raft and towed around the lake for a period of about one hour while the contents drained out. The barrel was then refilled with water, and the operation was repeated. This procedure is the same as that used to dispense 14C in the nutrient addition experiment conducted on Lake 224. Unlike the whole-lake manipulations involving nutrients and acids, no weekly additions were made of the metal radiotracers in the experiments discussed above. Malley et al. conducted an experiment in which both stable Cd (as CdCl2) and the radiotracer 109Cd were added to the epilimnion of Lake 382 in 33 weekly additions during the period 23 June to 29 October 1987. The large cost of radioisotopes and the hazard involved in using both radioisotopes and metals preclude their widespread use in many other lakes. Nonetheless, these types of whole-lake experiments provide the opportunity to monitor the pathway of metals from the water to the biotic and the abiotic compartments. Hakanson et al. and Hakanson added Se in conjunction with lime to five lakes in Sweden. The Se was added either using mixed lime, or it was encapsulated in rubber tubes placed into nets from which the selenium was successively released. However, the purpose of their work was to reduce Hg concentrations in fish by precipitating the Hg from the lake water as HgSe, and not to examine the behaviour of the Se. Similarly,
radionuclides other than metals have been added to some ELA lakes to determine diffusion coefficients and not to evaluate the effect of the chemical itself.

Methods to Manage Streams
Experiments conducted on streams are somewhat different from those conducted on lakes, because manipulation of an entire stream is virtually impossible unless the chemical is added at the source of each tributary. Consequently, the chemical being tested is usually dispensed at a certain point in the stream, and its effects are noted downstream. Thus, the control or reference site for the experiment must be upstream of the addition. Probably the greatest number of studies on stream manipulations have involved the experimental acidification of streams. Study sites have been located in New Hampshire, Maine and Colorado in the United States, in Wales, and in Norway. In many cases, the acid is added as H2SO4, although Hall et al. used HCl because the concentration of Cl- was low in both the biologic and geologic material at their study site and because Cl- is a good chemical tracer of groundwater movement. For similar reasons, McKnight and Bencala injected LiCl (in conjunction with H2SO4) as a conservative tracer in their experimental stream. Hall et al. added AlCl3 instead of HCl in some of their experiments to compare neutralisation mechanisms by the stream during acidification by a weak (AlCl3) and a strong (HCl) acid, and also to produce Al levels representative of those that occur after experimental deforestation of catchments in the Hubbard Brook Watershed. Most of the stream acidification experiments mentioned above were relatively short term (i.e., the
addition of the acid lasted less than 24 hours). First-, second-, and third-order streams were manipulated, with the size of the experimental sections (i.e., the distance to the downstream sites) ranging between about 50 and 200 m. The injection rate, the concentration of acid used and the duration of the input of acid to the streams were determined for each experiment according to the flow rate of the stream and the decrease in pH required. For example, McKnight and Bencala injected a 7.25 mol/L H2SO4 solution into a headwater stream in Colorado at a rate of 950 mL/min for a period of 3 hours to achieve a decrease in pH from about pH 4 to about pH 3. Samples must be collected as soon as possible and as frequently as possible at both the upstream (i.e., reference) site and at the downstream sites. To know a priori when steady state might be expected to occur and also to avoid unnecessary sampling, travel or mixing times and distances in these short-term experiments should be determined using a dye such as rhodamine WT, rhodamine B, or fluorescein, despite the possibility that the dyes might cause an effect independently. For acidification experiments requiring that the pH be depressed for long periods of time, frequent measurements of pH downstream of the injection site must be made so that the flow rate of the acid into the stream can be modified to account for changes in stream discharge. Hall et al. found that when the discharge was variable, monitoring of the pH was needed at short time intervals (i.e., every five min) to maintain a constant pH in the experimental section of Norris Brook (in the Hubbard Brook Watershed, New Hampshire, US). However, when the discharge was relatively constant, monitoring of the pH was needed only at 6- to 8-hour intervals. They maintained approximately pH 4 in Norris Brook for a period of five months in 1977 by manually adding dilute
(0.05-1 N) H2SO4 from a carboy, and modifying the drip rate of the acid into the stream with a Teflon-stem needle valve in borosilicate. The effects of pesticides in streams also have been studied by direct additions. For example, the effects of methoxychlor (1,1,1-trichloro-2,2-bis(p-methoxyphenyl)ethane), a replacement for DDT, were investigated in the Coweeta Basin, North Carolina, where two first-order streams were treated with methoxychlor on nine and four separate occasions, and also in the province of Quebec, Canada. Wallace et al. used two hand sprayers to apply a 25 percent emulsifiable concentrate of methoxychlor for a 2- to 4-hour period at a rate of 10 mg/L (based on discharge). Wallace and Hynes compared two methods for stream manipulations. They used a Piper Pawnee 235 fixed-wing aircraft equipped with a standard boom-and-nozzle spray rig to spread a 15 percent methoxychlor solution from an altitude of 45 m to stream M-26, whereas stream M-11 was treated from the ground at a calculated rate of 0.075 mg/L for 15 min. Regarding nutrient manipulations, some work has been conducted on streams located in the Walker Branch Watershed, Tennessee, and also in the Hubbard Brook Experimental Forest, New Hampshire. Elwood et al. conducted a long-term (95-day) P-addition experiment in which they divided a 340 m reach of Walker Branch (a second-order woodland stream) into three sections: a 70 m upstream section was kept as a control (<10 µg/L PO4-P), a 150 m section was continuously enriched to 100 µg/L PO4-P, and a 200 m section downstream was enriched to 1000 µg/L PO4-P. Phosphorus solutions were prepared weekly in 2 x 200 L drums containing H3PO4 mixed with stream water
that was siphoned through surgical tubing at a rate of ~20 mL/min to the head of each stream section and introduced at points where rapid mixing occurred. As with the acidification experiments discussed above, the phosphorus concentration in the drums (~60 and 450 µg PO4-P/L) was based on the discharge of the stream. Another similarity with the acidification experiments was the addition of a conservative tracer by Meyer, who added NaCl in conjunction with the P (as KH2PO4) to Bear Brook, New Hampshire, over a period of 27 hours. NaCl was added as an inert tracer to monitor changes in phosphorus concentration due to dilution over the reach. As in lake manipulations, the importance of a rigorous sampling regime cannot be overemphasised. Adequate abiotic and biotic samples must be collected before, during, and after the treatment at both the reference and the downstream sites so that the effect of the chemical can be properly evaluated.

MANIPULATING CATCHMENTS AND FORESTS
Two common approaches exist to manipulate whole catchments and forests. In "addition" experiments, the test chemical is added to the system (similar to whole-lake and stream manipulations), whereas in "exclusion" experiments the chemical, in conjunction with other nutrients and contaminants, is excluded from entering the system. Many whole-catchment and forest manipulation studies have dealt with the addition and/or exclusion of acids. Most notable among these is the RAIN project conducted in Norway. The project comprised two parallel large-scale experimental manipulations, representing both addition experiments (Sogndal) and exclusion (Risdalsheia) experiments. At Sogndal, four pristine headwater catchments were selected; two of the sites acted as controls
(SOG1 and SOG3), a third site was acidified with H2SO4 (SOG2), and the fourth site was acidified with a 1:1 mixture of H2SO4 + HNO3 (SOG4). Acid addition, which began in April 1984, consisted of application to the snowpack of 0.02 mm of water at pH 1.9 and four or five events of 11 mm at pH 3.2 during the snow-free months. Acid was mixed with lakewater from SOG1 and applied at a rate of 2 mm/hour using commercial irrigation equipment. Before and after each acid addition, 2 mm of unacidified lake water were added to "wet" and "wash" down the vegetation, respectively. In conjunction with this RAIN project experiment, a similar methodology was employed by Wright et al. to acidify a Sogndal catchment with sea salt, except that seawater instead of H2SO4 was mixed with the lakewater from SOG1. At Risdalsheia, an acidified area in southern Norway, three natural headwater catchments were selected; one site (ROLF) acted as a control and two sites (KIM and EGIL) were covered with transparent roofs. At KIM and EGIL, precipitation was collected by means of a gutter-and-cistern system. At KIM, the water was pumped through a filter and ion-exchange system, seawater was added to increase the concentration of salts to ambient levels, and then the water was pumped back out to a sprinkler system mounted beneath the roof. The system at the EGIL catchment was similar to that at KIM except that the water was not treated; rather, it was recycled back beneath the roof. Both catchments were watered at a rate of 2 mm/hour. During winter the systems were shut down. In 1985, artificial "acid" snow was made using commercial snow-making equipment; from 1986 onwards, ambient snow was used and added beneath the roofs with a snowblower.
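The hydrogen-ion load delivered by one such application follows directly from the depth of water applied, its pH, and the treated area. In the sketch below the catchment area is hypothetical; the depth and pH echo the event sizes quoted above.

```python
# Hydrogen-ion load delivered by a single acid application.
# Catchment area is hypothetical; depth and pH are illustrative.

area_m2 = 10_000        # 1 ha, hypothetical treated area
depth_mm = 11.0         # depth of acidified water applied
pH = 3.2                # pH of the applied water

volume_L = area_m2 * (depth_mm / 1000.0) * 1000.0  # m^3 -> L
h_mol_per_L = 10.0 ** (-pH)                        # [H+] from pH
h_load_mol = volume_L * h_mol_per_L                # total moles of H+
```

An 11 mm event at pH 3.2 over one hectare works out to roughly 69 mol of H+, which makes clear why a few such events per season dominate the acid budget compared with ambient rain at higher pH.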
Other whole-catchment experiments involving acid additions have been conducted in two subcatchments of Gardsjon Lake, as part of the Gardsjon Project in Sweden; in Bear Brook Watershed, Maine (Norton et al., in press), as part of the Watershed Manipulation Project (WMP) in the United States; and also in Hoglwald, Germany, as part of the Experimental Manipulation of Forest Ecosystems Project (EXMAN) in Europe. At Gardsjon, catchment L1 received 90 kg S/ha in October 1985 and 108 kg S/ha in October 1986 (as Na2SO4), while catchment F5 received 112 kg S/ha (as elemental S) each year. At Bear Brook, dry (NH4)2SO4 (1880 eq/ha yr) was applied by helicopter to the West Bear catchment while the East Bear catchment remained as a reference. At Hoglwald, the catchment is being irrigated with acid in the form of H2SO4. Irrigation was also the method used by Bayley et al. to experimentally acidify a fen located in ELA. The fen (MIRE 239) was irrigated with a 1:1 mixture of H2SO4 and HNO3 using a pipe distribution network to deliver water to 160 sprinklers, an irrigation pump to supply the water, a hydrant to regulate delivery of the water, an in-line meter to monitor pumping rates, and pressure gauges on the pump and on the distribution lines. Acid was added to the water in the experimental part of the fen, resulting in a spray with a pH of about 3.0. Irrigation lasted four to five hours, followed by 20-30 min of rinse at higher pH (5-7).
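Doses quoted as kilograms of sulphur per hectare must be converted into a mass of the actual salt to be spread. A sketch of that conversion for Na2SO4, with a dose echoing (but not taken from) the figures above:

```python
# Convert a sulphur dose (kg S/ha) into the mass of salt to spread.
# Molar masses in g/mol; the dose is illustrative.

M_S = 32.06
M_Na2SO4 = 2 * 22.99 + 32.06 + 4 * 16.00   # sodium sulphate, ~142.04

dose_kg_S_per_ha = 90.0
salt_kg_per_ha = dose_kg_S_per_ha * (M_Na2SO4 / M_S)
```

Since each mole of Na2SO4 carries one mole of S, the salt mass is simply the dose scaled by the ratio of molar masses: about 4.4 kg of Na2SO4 per kg of S, or roughly 400 kg/ha for a 90 kg S/ha dose.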
Evidently extensive, intensive, and expensive sampling networks are necessary to implement and maintain a whole-catchment manipulation study. At least one year (e.g., the RAIN project) and preferably several years (Norton et al., in press) of baseline data are required before the manipulation can begin. This situation has resulted in the development of several co-operative programmes in Europe, directed towards the assessment
and prediction of the impact of environmental change on whole-catchment systems. ENCORE comprises a network of 18 sites and ~40 catchments in seven countries. As part of the baseline programme, background environmental data (e.g., vegetation, soil type) and input and output flux information are being collected at each site using standard protocols. Furthermore, a more intensive process or mechanistic study or a whole-catchment manipulation experiment must be performed at each site. Past and future ENCORE whole-catchment manipulations include acidification and de-acidification experiments (i.e., the RAIN project), the addition of neutral salts and fertilisers, and also liming experiments. In addition to ENCORE, other co-operative European programmes using whole-catchment manipulations include: EXMAN, in which two manipulations (i.e., fertilisation, liming, irrigation, acidification or roof construction) must be performed at each site; NITREX (NITRogen saturation EXperiment), which involves nine separate large-scale nitrogen addition or exclusion experiments; and CLIMEX (CLIMatic change EXperiment), which is a proposed project to experimentally enrich with CO2 and raise the temperature of two entire forested headwater ecosystems.

METHODS TO ADD NEUTRALISING AGENTS
While the addition of lime or limestone (Ca(OH)2 or CaCO3) has long been practised as a means to increase lake productivity, the addition of lime and other neutralising agents as a mitigation technique for restoring or protecting biota in acidified waters has become
widespread only in recent years. Lake neutralisation has been practised on a large scale in Sweden and Norway, and to a lesser extent in other European countries such as Italy, and in Canada. While several agents have been used to neutralise lakes, lime has been widely adopted because it is readily available in different size grades, is safe to handle, and is relatively inexpensive. Although pure calcite (CaCO3) may be preferred because, upon dissolution, it does not cause the pH in the lake to increase beyond the equilibrium value as do Ca(OH)2 and CaO, limestone, dolomite, lime slags, and olivine all have been used with reasonable success. The disadvantage of using certain calcite-based materials is that they are not readily soluble. Some researchers have suggested that if the CaCO3 reaches the sediments after application to the lake's surface, it may form metal and humic complexes (especially in humic lakes) that decrease the dissolution rate of the calcite and render it ineffective. Consequently, they suggest that, when applying the lime, measures should be taken to maximise the dissolution of the sinking calcite and thus minimise losses to the sediments. These include using a small particle size, dispersing the lime over as large an area as possible in proportion to lake volume, and applying the lime as a slurry to separate the particles. If the latter method is used, a small amount of surfactant (~0.15-0.20 percent by weight), such as sodium polyacrylate, is added to facilitate the suspension of the calcite and ultimately promote its wetting and dispersion in the water column. An alternative approach is to apply a larger particle size of lime to the lake surface in the hope that it will reach the sediments and release a slow diffusive Ca2+ flux across the sediment-water interface. Gubala and Driscoll
compared the effectiveness of a water-column application of CaCO3 (i.e., mean particle size = 2 µm) with a water column-sediment application of CaCO3 (i.e., a mixture of 6-44 µm and 40-400 µm CaCO3) in Woods Lake, New York. They found that while the water column-sediment application involved a 50 percent greater dose of calcite than the water treatment alone, both treatments appeared to have a similar effect in terms of the net amount of runoff and acidic inputs that were neutralised. The lime can be applied directly onto the lake's surface either from a pontoon boat equipped with high-pressure tanks or manually from a small boat, although, recently, a boat has been developed in Sweden specifically for the purpose of spreading lime. In remote and relatively inaccessible areas, or if a large number of lakes are to be treated, helicopters or other aircraft can be used. In addition, lime has been put onto the ice of lakes and onto the snow along the edge of the lake. The successful neutralisation of lake acidity by the addition of lime directly to the lake's surface requires a knowledge of the goals for the treatment (e.g., the final acid neutralising capacity, ANC, of the water or the depth in the sediments to which neutralisation should occur), the type of lime used, the method of application, the initial water-column and sediment acidity, and the physical features of the lake including surface area, water depth, temperature regime, and hydrodynamics. While the amount of lime or the frequency of application required to achieve a desired ANC in Sweden cannot be stated with absolute precision, between 10 and 30 g of lime (as CaCO3) per m3 are needed to neutralise typical acid lake water. In addition to applying lime directly onto the lake's surface, lime has been added to streams, for example in
Canada, Sweden, and Norway. An alternative approach to liming the lake directly is to apply the lime to the entire catchment of the lake or to the wetland areas within the catchment. For example, in June 1984, 1500 kg of finely ground dolomite (0-0.2 mm) was spread onto catchment F2 as part of the Gardsjon Project in Sweden. In the "liming-mercury-cesium" project conducted in Sweden between 1986 and 1989, several types of lime, potash, Se, or fertilisers were added to many lakes, wetlands, and catchments in an attempt to reduce the Hg and 137Cs concentrations in the fish. Fertilisers were added in conjunction with the lime to help mitigate the effects of the lake acidity, a procedure that has been employed by others. While the magnitude of this project precludes a complete discussion here, Hakanson and his co-workers have provided a comprehensive cost-benefit analysis of these remedial measures, and they have compared the advantages and disadvantages of whole-lake versus wetland versus catchment lime additions. Hakanson and Andersson point out that wetland and catchment liming is superior to lake liming for several reasons: (1) the effect is prolonged; (2) the potential "liming shock" in the lake is avoided; (3) the biota in the streams and rivers of the catchment also benefit; and (4) the transport of metals such as Fe and Al from the catchment into the lake is reduced. Yet, in certain types of lakes, such as humic lakes and those with a short retention time, liming has a limited effect. Therefore, Lindmark proposed an alternative approach
for neutralising lakes. The CONTRACID method, which is based on the cation exchange properties of the lake sediment, involves injecting a sodium carbonate (soda ash) solution into the sediment (by a harrow) so that the sediment becomes loaded with sodium. During acidification, a reverse ion-exchange process occurs (i.e., the Na+ in the sediments is replaced by H+). This process provides a long-lasting neutralising capacity, in addition to biological stimulation (from P release), that may be preferable to frequent liming. Unfortunately, no consensus is apparent in the literature as to the efficacy of this technique.
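As a rough illustration of the dosing figure quoted earlier (10-30 g of lime, as CaCO3, per m³ of lake water), the total tonnage for a whole-lake treatment scales linearly with lake volume. The sketch below is illustrative only; the lake volume and dose rate used in the example are hypothetical values, not figures from the text.

```python
def lime_dose_kg(lake_volume_m3, dose_g_per_m3):
    """Total lime (as CaCO3) needed, in kg, for a whole-lake treatment."""
    if not 10 <= dose_g_per_m3 <= 30:
        raise ValueError("dose outside the 10-30 g/m3 range quoted for typical acid lakes")
    return lake_volume_m3 * dose_g_per_m3 / 1000.0

# Hypothetical lake of 2 million m3 treated at 20 g/m3:
print(lime_dose_kg(2_000_000, 20))  # 40000.0 kg, i.e. 40 tonnes
```

Even at the low end of the range, a mid-sized lake therefore requires tens of tonnes of lime, which is why aerial or boat-mounted spreading equipment figures so prominently in the discussion above.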
9

Assessment of Freshwater Quality

Water pollution often represents a very complex set of factors that degrade aquatic systems. Intervention to return a river to its prepolluted condition can itself disturb the mosaic of habitats; for example, the erection of a large dam can modify biological structures and reduce the consumer populations (invertebrates and fish) as much as industrial discharges to surface water do. Estimating the degree of overall degradation of a site due to pollutant load requires targeted objectives that depend upon different methods of remediation.

POLLUTION DAMAGE
In a lake or stream, the consequences of various discharges
depend on the flow of the river and the assimilation capacity of the receiving environment. This ability to transform and transfer allochthonous river drift depends on the physical, chemical, and biological characteristics of the system in question. Three possibilities exist:

The introduction and dissemination of toxic or inhibiting substances. When nutrients are unlikely to enter the trophic structure (due perhaps to toxic inhibitory substances), pollution causes a gradual and eventually complete disappearance of species in the biological
structure, as has been demonstrated for some pesticides, cyanides, detergents, and metals.

Excessive amounts of nutritive substances (organic matter, including nutrients). When the river drift of nutritive substances is progressive yet still within the assimilation capacity of the system, eutrophication (i.e., complexification of the biological structure) may be only temporarily accelerated.

River drift of nutritive substances that exceeds the assimilation capability of the receiving water. This condition results in the gradual development of a pollution condition characterised by the accentuated simplification of the consuming biological structure. The accumulation of unused substances implies a chemical modification of the environment in which several parameters first reach, and then exceed, the tolerance limits of an increasingly large number of species; the simplification of the consuming biological structure worsens this process, which accelerates exponentially.

Pollution of an aquatic system is manifested at the population level by three phenomena. Modification of the structure of the initial population results in the development of a few saprophagous or euryoecious populations, such as the Oligochaetes or the Hydropsychidae, and the decrease in abundance of other, more sensitive organisms, such as certain Heptageniidae or Plecoptera. Simultaneously, progressive desertion of the habitats of the lentic facies in favour of the lotic facies can be observed, whereby species of the initial population fail to appear or disappear, a process termed "imposed habitat changes". Appearance, then proliferation, of species elective of specific river drift occurs; for example, this may take the form of intense development of certain algae, bacteria, and fungi (Cladophoraceae, Spirogyra, Sphaerotilus, Leptomitus, Fusarium, Cellolobacteria, Ferrobacteria, and Sulphobacteria) downstream from organic or specific
discharges. Gradual disappearance, in a specified order, of all or part of the initial population then follows.

These phenomena emphasise the differences between eufunctional and dysfunctional systems, regardless of their potential trophic level. Eutrophication corresponds to an increase in biodiversity and is, therefore, the opposite of the adverse effect of pollution or of the general system-degradation phenomena, which produce a decrease in the biodiversity of consuming organisms. Thus, an environment can be polytrophic without being eutrophic; and, for numerous reasons, a polluted environment may become dystrophic.

SPECIALISED METHODS
Analysis of published findings reveals a contrast between the large number of biological analyses performed, whose variety justifies the statement by Bartsch and Ingram that "there are as many methods as biologists working on this topic", and the observation that few proposals include a complete, precise, and standardised protocol adopted as a standard by others. Depending on the specialisation of the analysts, the protocols show great variety in the types of organisms, sampling procedures, taxonomic units selected, and procedures for data analysis. The great variety of techniques employed in biocenotic testing requires that quantitative data be representative of the taxocenoses, and underscores the perception that each investigator develops and uses custom-tailored methods based on the biological and ecological peculiarities of the group of organisms in question, the characteristics of the investigated biotopes, the work scale, and the objectives of the investigation.
When a method of comparative analysis is proposed, defining appropriate and specific sampling protocols is problematic, because the parts of the method are interdependent. The macrobenthos sampling techniques employed jointly during seminars organised by the European Commission on biological indicators have shown the superior efficacy of a differentiated sampling protocol, in which the number of readings and the habitat categories are predefined, in contrast to techniques in which the types of habitats are either not defined or are subjected to unrestricted search for a given duration. The general sampling methods should include monitoring of: "drift" organisms directly in the water; the impact of periphyton "growth" and macrobenthos levels on the local nutrient supplies; and micro-organisms (including periphyton), macrobenthos, and plants (such as mosses) that concentrate micropollutants on artificial supports or substrates. Although the number of comparative analysis protocols is rather small, the sampling techniques are apparently subjected to successive revisions.

Virtually all organisms, from unicellular ones to fish, have been used as indicators for a given component of freshwater quality. Studies pertaining to stagnant water (lakes, ponds) especially employ planktonic organisms, and those pertaining to streams, the benthic organisms. A distinction must be made between essentially fundamental biocenotic analyses, which require a determination of the species, and practical methods in which identification is restricted to units of taxonomic groups that can be identified by non-specialised operators. Although methods using Diatoms, Mollusks,
and Oligochaetes exist, the most frequently employed practical methods involve the simplified analysis of the macrobenthos of streams. In France, the dulcicole (freshwater) macroscopic invertebrates include approximately 150 families, 700 genera, and 2200 listed species. Depending on the nature of the organisms employed and the objectives being pursued, two main trends can be distinguished in the assessment of water quality and the biogenic tendencies of systems: assessments based on the presence of organisms considered indicators of a specific type of contamination (which include analyses for bacterial contamination and those involving "biotic indices"); and global approaches, based on the investigation of all or part of the aquatic populations, for which the absence of some organisms is as significant as the development of populations of other organisms.

PROCEDURES
Based on fundamental parameters for the analysis of populations by specific variety (or taxonomic richness) and density (or abundance) of individual organisms, the criteria have evolved from a simple comparison of specific indicators to the interpretation of biological structures in relation to reference organisations (biotypology) established in the abstract space of mathematical analyses. Direct comparisons of species indices provide observations that are especially interesting when they allow specification of biocenotic evolution. No specific analytical procedure is required, and the value of the conclusions depends on the precision of the findings and on the biological and ecological knowledge of the operators.
Graphic procedures or simple formulas, such as the species deficit, allow a global or group-by-group visualisation of the changes affecting biocenoses. The advantage of such determinations is that they offer the most complete perspective of the population of a location at a given time, which can be used as a reference for subsequent studies. This situation mandates that choices regarding impact studies be made by a recognised expert if they are to have value. Furthermore, true impact studies must be predictive; for that reason, they require access to biological and parametric models, which is a very rare occurrence.

From the baseline parameters used to study populations, formed by the taxonomic variety and the abundance of individual organisms, authors have proposed numerous indices of diversity and regularity, of which the most frequently used is that of Shannon, with incorporation of a measurement of the regularity ("evenness") of the taxonomic distribution, including "equitability," "diversity," and "redundance". These techniques are discussed and critiqued in the works of Margalef, Pielou, and Legendre and Legendre. The main deficiency of these indices seems to be that similar values are obtained for situations that may be very different.

The structure of populations can be investigated through their distributions of variety and abundance. Applying the log-normal model of Preston to the analysis of populations of benthic Diatoms, Patrick et al. proposed to use the flattening of the Gaussian curve, a function of the number of species and the standard deviation, as an indicator of water quality. Setting aside some distributions that result in different curves, the examination of numerous readings shows that, according to the taxons in question and to the sampling procedures and types of environments, highly divergent structures are obtained, and they are well
suited for the adjustment of the most diverse functions (binomial, positive and negative; logarithmic; exponential; linear) when multiple peaks do not appear. However, modifications affecting the distributions may constitute a useful contribution to the formulation of interpretative hypotheses. The findings need to be compared with biological reference organisations. Whenever the environments under study have been previously investigated, with exhaustive definition of the biological organisations in abstract mathematical terms, new findings can be analysed with the initial structure used as a reference (i.e., biotypology). Whether a water flow is considered in isolation or as a hydrographic network, the replacement of one set of species by another throughout a water flow system is plotted onto the first two axes of an AFC (correspondence factor analysis) as an ecological continuum of species in the shape of a U (the "Guttman effect"), to which other specific structures can be compared. Within a biogeographic area of interest, the essential problem is loss of reference, or loss of the assurance that a sufficient number of control (sampling) stations (which might have degraded to varying degrees) exist for the establishment of a minimum reference structure. Such frameworks were established for 12 water flows of the Doubs hydrographic network on the basis of findings made between 1969 and 1972. All subsequent modifications are interpreted in relation to this structure. The species of the organisms are not determined; instead, to assess the water quality, the water-sediment interface (i.e., a holistic aquatic environment) is characterised with the aid of simple formulas or standard tables, taking into account the nature of the taxons and the taxonomic variety. Since these different methods have been synthesised, two methods will be described.
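Before turning to those methods, the Shannon index and the evenness measure mentioned earlier can be sketched with their usual definitions (H = -Σ pᵢ ln pᵢ, and Pielou's evenness J = H / ln S for S taxa). These are the standard textbook formulas, not a formulation taken from this text.

```python
import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def evenness(counts):
    """Pielou's evenness J = H / ln(S), where S is the number of taxa present."""
    s = sum(1 for c in counts if c > 0)
    return shannon_diversity(counts) / math.log(s) if s > 1 else 0.0

# Two samples with the same richness but very different structure:
balanced = [25, 25, 25, 25]  # four equally abundant taxa -> maximal H, J = 1
skewed = [97, 1, 1, 1]       # one dominant taxon -> much lower H and J
print(round(shannon_diversity(balanced), 3))  # 1.386 (= ln 4)
print(round(evenness(skewed), 3))
```

The example illustrates the deficiency noted above: two stations can share the same taxonomic richness yet differ radically in structure, and conversely quite different communities can yield similar index values.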
ANALYSIS
Based on simplified analyses of the macrobenthos, biotic indices have been designed to allow numerous operators (taxonomic non-specialists given adequate training) to establish a balance and draw a large-scale map of the general status of a national hydrographic network. A notation table offers a direct index for the sampling station (0-10) as a function of the nature and variety of the fauna, based on three benthic samples of 1/10 m² in running water (lotic facies) and three benthic samples in calm water (lentic facies). Despite its simplicity and its wide use in Europe, in forms adapted to the particular aspects of the networks, this method has low sensitivity. Moreover, the maximum index class (Ib = 9±1) represents suboptimal quality, explaining its use as a reference for, rather than an index of, the absence of pronounced degradation. Thus, the index is not a trend indicator.

Attempts to improve sensitivity and precision have shown that these objectives can be achieved for Jura-type pre-mountain streams; beyond these, however, the limitations become more pronounced. For instance, the relative quality is underestimated in systems with little slope or in warmer systems that are naturally less well suited to the development of such organisms as Plecoptera or Heptageniidae. Nevertheless, these tests have contributed to the definition of a more exact sampling protocol. In parallel, Chandler proposed a similar method, relying on a score of 0-100 obtained from a table combining the nature and abundance of the fauna (five estimated classes) with specific identification of most indicator groups. From practical experience, the family constitutes the basic unit, and the abundance criteria are discarded. Similar tests (experimental protocol Cb2) have shown that the continual presence of numerous families
is inadequate for them to be selected as indicator taxons. These processes led to the proposal of a simplified protocol. The analysis of large amounts of data defined the following characteristics: the minimum practical size of a sample, as 1/20 m²; the required and sufficient number of readings, as 8; the precise sampling protocol that circumscribes the mosaic of habitats; the listing of the taxons used (135, of which 38 served as indicators based on frequency and fidelity); and the table of standard index values (0-20) according to the nature and taxonomic variety of the benthic fauna collected by the proposed protocol.

The classification of indicators is accomplished using two series of factor analyses of the distribution of families in sampling stations with little or no degradation, and then in a Rhithron (a stream with predominantly Cyprinids) altered in various ways. The classification of the taxons according to their relative general tolerance was performed in a manner similar to that for fish. This relative ranking differs considerably from the hierarchies of the "score systems", in which the positions of some taxons seem to result from the fact that several readings were made in environments that had already been degraded considerably (i.e., "loss of reference").

STANDARDISED GLOBAL BIOLOGICAL INDEX
This standard describes a method to determine the standardised global biological index published by Verneaux et al., which was promulgated under the name
IBGN by AFNOR. IBGN assesses the biogenic tendency
of a waterstream station from the results of a macrofauna analysis that is considered to produce a comprehensive expression of the general quality of the station under otherwise constant conditions. When applied to an isolated site, the method determines the global biological quality within a range of parameters whose maximum value corresponds to the optimal combination of the variety and the nature of the benthic macrofauna. When applied comparatively (e.g., upstream and downstream from a discharge), the method evaluates the effect of a disturbance on the receiving environment within the limits of its sensitivity.

The benthic macrofauna samples (organisms > 500 μm) are taken at each station according to a sampling protocol that takes into account the different types of habitat, using a flexible net (mesh = 500 μm) with retractable panels, a removable base (1/20 m²), a "Surber"-type sampler position, rack, metal frame, and cutting blade, or a "Shrimpnet"-type sampler position, the habitats being defined by the support structure and the flow speed. A representative sample consists of eight samplings. The sorting and identification of the sampled taxons are performed to determine the taxonomic variety of the sample and its indicator fauna group. The habitats located in calm water (lentic facies) are prospected with the help of a shrimper, using traction over 50 cm or, by default, back-and-forth movement over an equivalent surface (the additional surface compared with that of the Surber compensates for the loss of a portion of the individuals). A ring-shaped illumination lens is used for the sorting, and a binocular (stereoscopic) microscope (magnification ≥ 50×) is used for the identification of the taxons.
SAMPLING
The IBGN is determined per station, which is defined as the segment of a water stream whose length is approximately equal to 10 times the width of the stream bed at the time of the sampling. The detection of disturbance is facilitated in extreme situations at the moment of low waters (minimum flow, maximum temperature) or during critical periods (discharges and seasonal human activities). The samples must be taken during a period of flow that has been stabilised for at least 10 days.

For each station, the benthic fauna sample consists of eight samplings of 1/20 m² each (volume sampled for the loose substrates: 0.5-1 L) performed separately in eight different habitats selected from among the combinations defined for each station. The eight samples together must provide a representative picture of the mosaic of habitats of the station. Each habitat is characterised by a support-speed set. In the absence of certain habitats, the samples can be obtained according to the strata. Each stratum, sampled separately, constitutes a complete sampling. For example, in the absence of a lentic habitat in a mountain stream, the surface of the grid is sampled; then, separately, the inside surface and the underlying substrate are sampled a second time. The samples are taken with the help of the sampling devices. Each sample is immediately fixed on site by the addition of a 10 percent (v/v) formaldehyde solution, placed in a plastic bag, transported packed in ice, and then stored in the refrigerator.

The surface speeds are estimated for each habitat. The support categories (S) are studied in the order of the succession (from 9 to 0). For each support category, the sampling is made in the speed class where the support is best represented. The speed classes (5 to 1) are listed in decreasing order.
When a monotonous station (straightened course, silted bed, or canal) does not include the eight different types of support, the number of samplings is extended to eight through samplings taken of the dominant support. The percentage of coverage of each habitat (SV set) can be estimated from the following:

% coverage:   >75    75-50    50-25    25-10    <10
class:          5        4        3        2      1
BIOLOGICAL ANALYSIS
Selected Taxonomic Unit
The selected taxonomic unit is the family, with the exception of some faunal groups (phyla or classes) with little representation or for which the taxonomic analysis revealed specialisations. The repertory includes 138 taxons which may be included in the overall variety (Σt), of which 38 are indicator taxons that form the nine indicator fauna groups. The Mollusks and the Achetes are also listed. The collected organisms are sorted and determined at the larval, nymph, or adult stage, provided that this latter stage is aquatic. Empty sheaths or shells are not taken into account. To facilitate the interpretation of the results, the samplings should not be mixed, and the fauna list of the station should be prepared by indicating the distribution of the taxons over the eight habitats.

Global Biological Index (IBGN)
The following must be determined sequentially: The taxonomic variety of the sample (Σt), equal to the total number of taxons collected, even if they are represented only by a single individual.
The indicator fauna group (GI), considering only the indicator taxa represented in the samples by at least three individuals or 10 individuals, depending on the taxons. The IBGN can then be derived from the Σt and GI values. For example:

GI = 8, Σt = 33  →  IBGN = 17
GI = 5, Σt = 30  →  IBGN = 13
GI = 3, Σt = 14  →  IBGN = 7
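The worked examples above are consistent with converting the variety Σt into a variety class of about ⌈Σt/4⌉ and adding it to GI, capped at 20. The sketch below encodes that inference; note that the AFNOR standard actually defines the variety classes in a lookup table, so this closed form is an approximation fitted to the examples given here, not the official rule.

```python
def ibgn(gi, taxonomic_variety):
    """Approximate IBGN score from the indicator group GI and the variety (Sigma t).

    Assumes variety class = ceil(Sigma t / 4), capped at 14, and
    IBGN = GI + variety class, capped at 20 (inferred from the text's examples).
    """
    if gi == 0:  # no indicator taxa at the required abundance thresholds
        return 0
    variety_class = min(-(-taxonomic_variety // 4), 14)  # ceiling division
    return min(gi + variety_class, 20)

# The three examples from the text:
print(ibgn(8, 33), ibgn(5, 30), ibgn(3, 14))  # 17 13 7
```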
In the absence of indicator taxa at the required abundance thresholds (3 or 10 individuals), the IBGN score equals 0.

Test Report
For each station, the test report must include: the date; the exact geographic location (Lambert coordinates); the ecological type, if known; the distance from the source; the altitude; the length of the wet bed at the time of the sampling; the water temperature; the nature of the support and the flow rate pertaining to the eight samplings performed for the station (SV set), with an indication of the dominant habitat or, preferably, the approximate coverage classes; the list of sampled taxons with their distribution over the eight habitats, with a possible indication of their relative abundance; the taxonomic variety of the sample (Σt); the indicator fauna group (sequence number of GI); and the standardised global biological index (IBGN). For cartographic representation of the results, each segment of the stream can be assigned one of the following colours, depending on the value of IBGN:
IBGN:     ≥17     16-13    12-9     8-5     ≤4
colour:   blue    green    yellow   orange  red
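The cartographic colour assignment (score bands ≥17, 16-13, 12-9, 8-5, ≤4, coloured blue through red) is a simple threshold lookup; a minimal sketch:

```python
def ibgn_colour(ibgn):
    """Map an IBGN score (0-20) to the five cartographic colour bands."""
    if ibgn >= 17:
        return "blue"
    if ibgn >= 13:
        return "green"
    if ibgn >= 9:
        return "yellow"
    if ibgn >= 5:
        return "orange"
    return "red"

print(ibgn_colour(17), ibgn_colour(13), ibgn_colour(7), ibgn_colour(4))
# blue green orange red
```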
The IBGN variations throughout a segment or a water stream in its entirety can be plotted in a graph where the distance from the source is the abscissa and the index values are the ordinate.

Example
An illustration has been prepared by the author for the Pont de Fleurey station on the Dessoubre stream (an affluent of the Doubs) in the Jura Massif in France. The Dessoubre, a mesorhithron stream with the Tadpole-Trout-Grayling-Minnow-Loach association, presents a habitat diversity and a water quality corresponding to its ecological type. The start of a trend whereby the most stenoecious fauna leave the habitats of the lentic facies should be noted. In 1981, this station was one of the stations used for the sampling of the range of index values in search of optimal values.

BIOLOGICAL QUALITY OF LAKES
Basis
Although benthic macrofauna, because of its variety and abundance, constituted the material of choice for the establishment of practical biological methods for the assessment of the general status of streams, at present no similar methods can be applied to still-water systems, even though limnology emerged from lacustrine investigations. This can be explained by several factors.

The Character of the Organisms
Whereas the benthos constitutes the core of the communities of streams, lakes are characterised by microscopic planktonic organisms (phytoplankton and zooplankton) that have very brief developmental cycles and present significant spatial and temporal variations. Thus, this
material is difficult to use to determine significance for the entire system. Therefore, employment of the zoomacrobenthos, whose integration power is much greater, has been contemplated. However, a portion of the species colonises only the littoral zones, whose habitat mosaics prove to be very different from one lacustrine basin to the next. Brinkhurst shows the general phenomenon of a decrease in faunal (here, generic) variety with depth. The main components of the macrobenthos capable of colonising lacustrine sediments down to depths of 250-300 m belong to the "difficult groups," such as the Mollusks (more specifically, Pisidium), the Oligochaetes, and especially the chironomid dipterans, for which the analysis of a great number of species associations has long offered the basis for lacustrine biotypology, beginning with the work of the great forerunners such as Thienemann or Brundin in the late 1940s. The studies of comparative biocenotics performed with this material can be conducted only by true specialists, which unfortunately is increasingly less often the case.

Interpretation Ambiguities (Simplified Methods)
While simplified methods have been proposed based on the phytoplankton alone or on the species of a single faunistic class, order, or family, the challenge lies in uncovering the meaning of the analytical findings, especially when a global qualitative perspective requires considerable integration of widely diverse information. Thus, the indices proposed by Lafont et al., based on a simplified analysis of the Oligochaete populations, objectively express the biological quality of the water-sediment interface; Saether considers the communities of chironomid dipterans of the deep zone to be indicators of the "quality of the waters," and links his results to a
"trophic level" relative to the system, in which pollutions and dysfunctions are included but not differentiated. The application of this method to the lakes of the Jura approximates "eutrophic" effects; the phytoplanktonic biomasses mark a varied range of partial primary productions, and physicochemical analyses of the sediment reveal a great variety of sedimentological types. Yet, equally apparent is the absence of relationships between the global sedimentary composition (percent carbonates and organic matter), the primary production, and the depths of the basins. Two main conclusions can be drawn from these comparisons: the need to differentiate the trophic level, which expresses a potential linked to the nutritive substance content, from the "trophic status" of a system, which expresses a functioning or dysfunctioning mode for which the sediments and fauna supply images whose interpretations must be found; and the usefulness of having available a practical biological method for the assessment of the general biogenic aptitude of a lake, which would offer sufficient synthetic significance. Apart from the recent proposals of Lafont et al. and Mouthon, which offer, respectively, simple assessment methods for the biological quality of lacustrine systems on the basis of Oligochaetes and Mollusks, all other proposals tend to define different "trophic levels," but not the resulting biogenic aptitudes.

Estimating the Biogenic Quality of Lakes
An experimental classification of lacustrine systems based on a comparative analysis of the benthic fauna has been proposed. This method is called the Lacustrine Biogenic
Index, and includes a comparative sampling protocol, original biological descriptors, and a standard table that allows the definition of the biological type and the biogenic index of a lake. Only fine sediment over 5 cm deep is collected, using a modified Ekman bucket with the addition of lateral ballast as well as a penetration limiter. Coarse substrates and hydrophytes are avoided, as are certain sites such as beaches, harbours, or substrate enclosures. Two samplings are performed at each station to form a station sample, and two depths, to which two isobaths correspond, are prospected (Z0 at 2-2.5 m, and Zf at 3/4 of the maximum relative depth). The number of stations per isobath is proportional to the length of the isobath, and should be determined using the following factors:
at Z0:  n0 = 1.8 √(10L)    (1)
at Zf:  nf = 1.4 √(10L)    (2)
where L is the length of the isobath in question, expressed in km. The stations are distributed regularly (virtually equidistantly) over each isobath. The samples are taken during a single sampling trip, during an isothermia period that follows thawing and springtime circulation. Depending on the altitude, in the Jura lakes the expeditions took place in April or in May. Each sample (consisting of two samplings) is filtered through a conic net with a mesh width of 250 μm; then a 5 percent formaldehyde solution is added, and the sample is placed in a plastic bag with the air removed. The samples are transported on ice and stored in a refrigerator. The samples are analysed separately. The
Simulations of the Effects of Climate Change
192
macroinvertebrates are identified but not counted. The selected taxonomic unit is the genus, except for the Oligochaetes, Nematodes, Hydracarians, and Ceratopogonidae. A listing shows the fauna data, and the frequency of each taxon is expressed as a percentage of occurrence in relation to the n0 and nf counts of the stations (or the samples per isobath). The quality coefficient of the fauna of the fine sediment is determined by classifying the taxa found in decreasing order of sensitivity to the physicochemical quality of the water-sediment interface. Only those indicator taxons are selected whose frequency is at least equal to 50 percent of the number of samples n0 taken at depth Z0. The descriptors include: v0 = fauna variety (generic) at depth Z0; q0 = quality of the fauna at depth Z0; and df = bathymetric distribution coefficient at depth Zf:
df = vf / (k v0)    (3)
where k = 0.047 v0 + 1, and F = relative faunistic deficit index, with F = √(df · q0). For example, if v0 = 38 and F = 0.77 for a lake of type B4, then the biogenic index/20 = 15; and if v0 = 23 and F = 0.38 for a lake of another B type, then the biogenic index/20 = 08. Qualitative levels include the eubiotic, eumerobiotic, merobiotic, merodysbiotic, and dysbiotic lake; quantitative levels include the oligobiotic, oligomesobiotic, mesobiotic, mesopolybiotic, and polybiotic lake. The combination of the two series of information is used to define the type of lake, for example as a euoligobiotic, mesomerobiotic, or dyspolybiotic lake. The variety of the endobenthic fauna sampled in the littoral zone beyond the river zone (Z0 = -2 to -2.5 m) constitutes a good indication of the biogenic potential of the system in relation to consumer organisms.
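Formulas (1) and (2) above make the number of sampling stations grow with the square root of the isobath length, as the short sketch below shows. The rounding convention is my assumption; the source does not state one.

```python
import math

def stations_per_isobath(length_km):
    """Stations on the Z0 and Zf isobaths for an isobath of length L (km),
    per formulas (1) and (2): n0 = 1.8*sqrt(10L) and nf = 1.4*sqrt(10L).
    Rounding up to whole stations is an assumption, not stated in the text."""
    root = math.sqrt(10 * length_km)
    return math.ceil(1.8 * root), math.ceil(1.4 * root)

# For a 10 km isobath, sqrt(100) = 10, so 18 stations at Z0 and 14 at Zf:
print(stations_per_isobath(10.0))  # (18, 14)
```

The square-root scaling keeps the field effort manageable for large basins: quadrupling the isobath length only doubles the number of stations.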
10
Ecological Risk Assessment
The ecosystem health metric proposed is a comprehensive, multiscale, dynamic, hierarchical measure of system resilience, organisation, and vigour that closely tracks the concept of sustainability. Assessment scale is an important issue, because it tends to define the scope of the policy options considered for mitigation. Currently, an overemphasis exists on population- and process-level analyses at the expense of the ecosystem and ecoregion levels. As with the health metric proposed, assessment of ecosystems at multiple levels is important to ensure that the cure is no worse than the disease. Finally, a somewhat different perspective for assessing ecological systems is discussed: by considering changes in an ecosystem's delivery of ecological benefits (goods and services), the assessment may be able to answer more directly the question of significance.
ECOSYSTEM HEALTH
The term "health" is commonly used in reference to ecosystems by both scientific and policy documents, but a satisfactory definition of ecosystem health remains to be developed. While the framework of human health may
provide a starting point, severe limitations are imposed on the parallel between human health and ecological health. In addition to its use in ecological assessments, a definition of ecosystem health also provides a means to aid the integration of the analyses of ecological and economic systems. As a starting point, this analysis begins with five axioms of ecological management:
The Axiom of Dynamism. Nature is more profoundly a set of processes than a collection of objects; all is in flux. Ecosystems develop and age over time.
The Axiom of Relatedness. All processes are related to all other processes.
The Axiom of Hierarchy. Processes are not related equally, but unfold in systems within systems, which differ mainly in the temporal and spatial scale on which they are organised.
The Axiom of Autopoiesis. The autonomous processes of nature are creative, and represent the basis for all biologically based productivity. The vehicle of that creativity is energy flowing through systems that in turn find stable contexts in larger systems, which provide sufficient stability to allow self-organisation within them through repetition and duplication.
The Axiom of Differential Fragility. Ecological systems, which form the context of all human activities, vary in the extent to which they can absorb and equilibrate human-caused disruptions of their autonomous processes.
These axioms recur regularly, even if implicitly, in the following discussion, and they are essential elements of the definition of ecosystem health proposed below.
Ecosystem health is often framed in terms of human health. While both are complex systems, medical science has a large body of knowledge and expert systems (in the form of doctors) available to advance diagnosis. Such analytical tools are absent for ecosystems. However, ecosystems have been studied extensively with respect to their stability and resilience. Six major concepts are most often used to describe ecosystem health: homeostasis, the absence of disease, diversity or complexity, stability or resilience, vigour or scope for growth, and balance between system components. Each concept represents a piece of the puzzle, but none is sufficiently comprehensive, especially in terms of being able to deal with many different levels of ecological systems. Homeostasis is the simplest and most popular definition of system health: any and all changes in the system represent a decrease in health. The greatest difficulty with this approach is in differentiating between naturally occurring stresses and external (including anthropogenic) stresses. This definition is best used for warm-blooded vertebrates, since they are homeostatic and since normal ranges can be more easily determined from large populations. However, for ecological systems, all changes (or even any given change) cannot be assumed to be bad. The best example of this is succession: for the initial state,
succession is an irreversible change, and one that might be necessary for the system to be sustained. Given that ecosystems are constantly changing, this definition does not deal with a fundamental characteristic of ecosystems. The definition of ecosystem health as the absence of disease has several failings. First, while various (including anthropogenic) ecosystem stresses can be described, their mere existence does not indicate that they are adverse stresses; a separate, independent definition of ecosystem health would be required. Second, this definition yields only a dichotomous result that is inadequate to characterise complex systems. The notion of basing a definition of ecosystem health on a system's diversity or complexity rests on the assumption that these characteristics are predictors of stability or resilience and that these are indicators of ecosystem health. Presently, the analytical basis is insufficient to use this concept, but network analysis may yield a more sophisticated framework for incorporating system diversity into a definition of ecosystem health. Stability and resilience are key measures of ecosystem health, since healthy organisms and systems have the ability to recover from stresses or to use the stress in some creative manner to improve their status. A failing is that these measures do not characterise the level of system organisation or the level at which the system is functioning. Odum has suggested that the level of a system's metabolism (energy flow) is an indicator of its ability to deal with stresses. The concept of ecosystem balance is rooted in Eastern traditional medicine and the notion that a healthy system is one that maintains a proper balance between its parts. However, the proper balance can only
be determined by some independent measure of ecosystem health.
Based on these framing concepts, a practical definition of ecosystem health must have four essential characteristics. First, it must integrate the definitions described above into one that combines system resilience, balance, organisation (diversity), and vigour (metabolism). Second, it must represent a comprehensive description of the ecosystem. Third, it must use weighting factors to compare and aggregate different components of the system. Finally, it must be temporally and spatially hierarchical. Such a definition would be: "An ecological system is healthy and free from distress syndrome if it is stable and sustainable-that is, if it is active, maintains its organisation and autonomy over time, and is resilient to stress". Accordingly, a diseased system is one that is not sustainable, and will eventually cease to exist-clearly illustrating the importance of the temporal and spatial aspects of the definition. Distress syndrome refers to the irreversible processes of system breakdown leading to death. Two very important tools for making this definition operational are network analysis and simulation modelling. Network analysis, in this context, refers to all variations of the analysis of ecological and economic networks. It has the potential for yielding an integrated, quantitative, hierarchical treatment of all complex systems, including ecosystems and combined ecological-economic systems. An important area of network analysis is the development of common pricing mechanisms for ecological and economic systems. In complex systems with many interdependencies, a problem of mixed units is often present. Ecological analyses have ignored this
problem by choosing a single commodity as an index; yet this ignores interactions between commodities, and is consequently unrealistic and quite limiting. Evaluating the health of complex systems demands a pluralistic approach and an ability to integrate and synthesise the many diverse perspectives that may be present. An integrated, multiscale, transdisciplinary, and pluralistic approach is required to quantitatively model systems (including organisms, ecosystems, and ecological-economic systems). Achieving such a capability requires the ability to predict the dynamics of ecosystems under stress as well as advances in high-performance computing.
ECOLOGICAL ASSESSMENT OF REGIONAL SCALE
Assessment of scale is important, because scale tends to define the scope of the policy options considered for mitigation. Currently, an overemphasis exists on population- and process-level analyses at the expense of the ecosystem and ecoregion levels. As with the health metric proposed above, assessment of ecosystems will be important at multiple levels to ensure that policy decisions do not result in undesired ecological consequences. Also, guarding against haphazard aggregation of measures across ecological levels of organisation will be important. Different scales of ecological systems may be driven by very different dynamics, so the best indicators or metrics for one level may be inadequate or misleading at another scale. While the concept of multiscale analyses is logical and desirable, it poses significant difficulties, since most ecological research deals with very small geographic areas (usually 1 m²). Recognising the need for ecological assessments to deal with much larger landscapes, some
ecologists began in the 1980s to argue that regional-level ecology was important to understanding the smaller scales and that ecological assessments must be capable of assessing at the regional level. "Ecological risk assessment, unlike human health risk assessment, must address a diverse set of ecological systems, from tropical to Arctic environments, deserts to lakes, and estuaries to alpine systems.... Ecological risk assessment may occur over much wider temporal and spatial scales than those for human health risk assessment". Meeting this need for multiscale analysis will require the same type of research required to provide the foundation for the definition of ecosystem health, namely, network analysis and simulation modelling. Analyses such as these will be much more feasible because of recent advances in high-powered computing and visualisation techniques. An important element of multiscale analysis of ecosystems, and even of single-scale analyses, is the proper characterisation of uncertainty. Even with the significant advances in modelling, large amounts of uncertainty will exist with respect to anthropogenic effects on ecosystems. Developing the means and the methods to characterise this uncertainty in a meaningful manner for policy makers should be a major research area for ecological assessment.
Ecological Benefits Patterns
Even when it is not possible to assess ecological health across scales, space, and time, often-asked questions remain, such as the significance of the findings. Even if public policy had advanced to the point where the maintenance of ecological health were considered an important goal, trade-offs and choices would still be needed by policy makers. This situation strongly suggests that simply characterising the
risk of potential outcomes will be an inadequate response. By developing the ability to characterise ecological benefits more completely and by characterising the impact of the loss of ecosystem health on the delivery of those benefits, ecological assessments will make great strides towards resolving the significance matter. Current ecological assessment documents and frameworks clearly state that policy makers must be consulted to develop the ecological risk assessment endpoints and that the assessment must ascertain the significance of observed changes. The selection of assessment endpoints relates in part to policy interests (e.g., to specified regulatory endpoints or to public concerns); thus, changes in assessment endpoints must ultimately be related to changes in parameters of the ecosystem that humans care about (anticipating the significance issue). The products that result from the process clearly will not be couched in terms with which policy makers are most comfortable, nor in metrics that they will understand and be able to communicate to their various constituencies. This tendency to describe scientific findings in terms that are, in the view of the policy-maker, either arcane or in multiple metrics has been referred to as multidimensionality. The result is that the findings are described in a manner so detailed and fragmented that no one can grasp the overall implications. Benefits are ecological goods and services, and have been compared to ecological structure and function. Ecological goods and services can be described as those benefits that humans derive from ecological systems. For example, cut trees provide lumber (an economic and ecological good), while uncut trees take up air pollutants (an economic and ecological service). The uptake benefit
will be lost when the trees are cut for lumber, and vice versa. To economists, the term "benefits" often denotes a monetised valuation of an economic good or service. However, in this context, the term "benefit" is used to refer to all ecological goods and services whether or not their value has been monetised. Since the monetisation step is often controversial, leaving it aside permits these efforts to focus on the scientific questions surrounding the identification and quantification of benefits. However, an analytical loss will exist in the absence of monetising (a common metric with which to measure and express the magnitude of the benefits). Ecological benefits occur in four groups: (1) market benefits (first wave), such as lumber, for which economic markets exist; (2) non-market use benefits (second wave), such as recreational benefits, for which no direct markets exist; (3) non-market, non-use benefits (third wave), such as existence value and bequest value; and (4) fourth wave benefits, which would fit into any of the three previous categories but which have not routinely been included in previous benefits analyses, such as pollution uptake, climate modification, habitat, and biodiversity. While a variety of graphical methods can depict the status of and change in magnitude of these benefits, a polar-type chart may be best suited to demonstrating the technique. The polar chart has some appeal because of its division into four quadrants.
By placing first wave benefits in quadrant one, second wave benefits in quadrant two, etc., the status of each wave's benefits can be clearly shown. This graphical analysis might illuminate a policy maker's choices; for instance, consider the choice to harvest the trees in a hypothetical old-growth forest.
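The quadrant bookkeeping described above can be sketched with a simple table of the four benefit waves. All benefit labels and index values below are hypothetical, chosen only to show how before/after values per quadrant could be tallied for a harvest decision; the source prescribes no such scoring.

```python
# Hypothetical four-quadrant tally for the old-growth harvest example.
# Each wave of benefits occupies one quadrant of the polar-type chart;
# values are illustrative indices on a 0-1 scale, not real data.
waves = {
    1: {"label": "market (lumber)",              "before": 0.2, "after": 0.9},
    2: {"label": "non-market use (recreation)",  "before": 0.8, "after": 0.3},
    3: {"label": "non-use (existence, bequest)", "before": 0.9, "after": 0.2},
    4: {"label": "fourth wave (uptake, habitat)","before": 0.9, "after": 0.3},
}

def net_change(waves):
    """Sum of (after - before) across the four quadrants."""
    return sum(w["after"] - w["before"] for w in waves.values())

for q, w in sorted(waves.items()):
    print(f"quadrant {q}: {w['label']}: {w['before']:.1f} -> {w['after']:.1f}")
print("net change:", round(net_change(waves), 2))
```

Even this toy tally shows the trade-off a policy maker faces: the first-wave gain from lumber is set against losses in the other three quadrants.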
11
Environmental Monitoring
The atmosphere has few mixing constraints and, therefore, this can be achieved by measurements in remote areas and in the upper atmosphere.
Carbon dioxide. The historical trend of atmospheric carbon dioxide, as well as estimates of future concentrations, is based primarily on a single set of continuous observations, which dates only from 1958. Additional and continuing baseline data are necessary to determine the global representativeness of the current trend, to verify the future estimates and to study the partitioning of CO2 between the atmosphere, oceans and biosphere.
Aerosols and particles. Aerosols and particles in the
atmosphere play a special role in the atmospheric energy balance and in physical processes important in the formation of clouds, precipitation, fogs, etc. Moreover, their role depends not only on the total particle count but also on the number of particles of various sizes and their distribution with height. The rapid development of vertically directed LIDAR as a measurement technique gives promise of an early capability for monitoring the vertical distribution of particle loadings well into the stratosphere. This technique should be utilised as soon as feasible to complement or perhaps replace periodic aircraft
sampling. By monitoring the intensity of solar radiation at selected narrow bands in the visible and ultra-violet spectral regions (e.g., 0.50 and 0.38 micrometers), direct information can be obtained on the total atmospheric loading of aerosols (atmospheric turbidity) in the optically effective size range.
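The narrow-band turbidity measurement rests on Beer-Lambert attenuation of the direct solar beam. The source does not prescribe this computation; the sketch below is the standard sun-photometry relation, given as an assumption, with illustrative rather than observed numbers.

```python
import math

# Sketch of the sun-photometer principle behind the turbidity measurement:
# by Beer-Lambert, direct-beam irradiance I = I0 * exp(-tau * m), so the
# total optical depth tau at a narrow band follows from the measured
# transmission I/I0 and the air mass m along the slant path.

def optical_depth(i_measured: float, i_top: float, air_mass: float) -> float:
    """tau = -ln(I / I0) / m for a single narrow band."""
    return -math.log(i_measured / i_top) / air_mass

# Illustrative values: 70 percent transmission at the 0.50 micrometer band
# with the sun at about 60 degrees zenith angle (air mass ~ 2).
tau = optical_depth(0.70, 1.0, 2.0)
print(round(tau, 4))
```

Comparing tau retrieved at two bands (e.g., 0.50 and 0.38 micrometers) is what constrains the aerosol loading in the optically effective size range.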
Solar radiation. Since solar radiation is the critical energy source for the earth and atmosphere, comprehensive monitoring is required for trends in the solar energy received at the surface. Instruments for the following measurements are commercially available and are used in operational programmes:
Broad-band direct and diffuse radiation (e.g., measurements of all wavelengths >0.40, >0.53, and >0.70 micrometers)
Narrow-band direct radiation (e.g., measurements between the wavelengths of 0.30 to 0.35, 0.35 to 0.40, ... 0.55 to 0.60, 0.60 to 0.70, 0.70 to 1.00 and 1.00 to 1.80 micrometers)
Net (incident minus reflected) all-wave radiation.
Meteorological data. Standard meteorological surface observations, including wind, temperature, humidity, pressure, prevailing weather, etc., should also be obtained to complement the basic measurements. In addition, vertical observations of temperature, humidity, pressure and wind velocity by rawinsonde should be made.
The earth's surface. Various land-use practices that significantly alter the earth's surface, such as deforestation and the creation of man-made lakes, can affect local climate by influencing the energy balance. To determine whether large-scale changes have occurred, global land use should be inventoried periodically, for example, every five years. Such a survey can best be carried out by satellite measurements.
Cloudiness and albedo. Since global climate is particularly sensitive to changes in cloudiness, surveys of this parameter by satellite should be encouraged even though there is no definite indication that man has as yet caused wide-spread alterations in cloudiness. Measurements of stratospheric cloudiness and water vapour may have to be made from aircraft. Another particularly useful satellite measurement is that of whole-earth albedo. Variations of the earth's reflectivity, which can be affected by land-use, cloudiness, etc., can be documented by such measurements.
Waste heat. Because of the increasing rate of energy consumption throughout the world, the amount of waste heat produced by man could become a significant regional climatic factor within several decades. Therefore, energy-use statistics should be inventoried continuously on a regional basis to determine their current importance.
Nitric oxide and ozone. Concentrations of these trace gases in the stratosphere may be affected by the operations of supersonic aircraft. Background concentrations of nitric oxide, a product of combustion, and ozone, a product of stratospheric gas reactions, should be determined before large-scale supersonic flights begin.
We recommend that the following variables be initially monitored at low-exposure (baseline) stations: atmospheric carbon dioxide content; atmospheric turbidity; solar radiation (including broad-band direct and diffuse radiation, narrow-band direct radiation, and net all-wave radiation); standard meteorological variables. We recommend that the following variables be considered for inclusion at a later date: vertical distribution of aerosols; size distribution of aerosols; rawinsonde data; surface vertical fluxes of carbon dioxide; global albedo (by satellite); ozone, water vapour and trace gases in the
stratosphere (by aircraft). We recommend that the following variables be monitored at medium-exposure (regional) stations: atmospheric turbidity; solar radiation (including broad-band direct and diffuse radiation and net all-wave radiation); standard meteorological data.
DATA FROM AIR, WATER, SOILS AND BIOTA PERTINENT
Before considering in detail the range of variables from air, water, soils and biota for possible inclusion in a monitoring system, it is useful to stress the dynamic interrelation of these media via the geophysical, geochemical and biological transport mechanisms operating in the environment. Effective analysis of any secular trend for a potentially hazardous substance is made much simpler by an understanding of these interrelations. This involves a study of the sources and rates of injection of each substance into each environmental medium and the rates of removal into other media, i.e., residence times, as well as the ultimate fate of each substance: whether it accumulates irreversibly in any one medium or whether it continues to cycle indefinitely.
Air. Residence times of substances emitted to the lower atmosphere are generally short (weeks or less) unless they enter the stratosphere, where they can remain for many months or years. The atmosphere is thus more appropriately regarded as a transport mechanism with rapid and efficient mixing, making it possible to obtain accurate representative measurements of atmospheric constituents by sampling at a few selected points only. When monitoring the quantities of the different substances it is necessary to take into consideration whether the substance occurs as a gas, as particles or attached to particles. The actual size distribution of particles is very important when considering their availability to organisms, including man.
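The residence-time idea can be illustrated with a one-box model. This is a sketch not taken from the source: with a constant source S and first-order removal characterised by residence time tau, the atmospheric burden relaxes toward the steady state S·tau; all numbers below are illustrative.

```python
import math

# One-box residence-time sketch: dM/dt = S - M/tau, whose solution is
# M(t) = S*tau + (M0 - S*tau) * exp(-t/tau). Burdens approach the
# steady state M* = S * tau on the timescale tau.

def burden(t: float, source: float, tau: float, m0: float = 0.0) -> float:
    """Burden M(t) under constant source and first-order removal."""
    m_star = source * tau
    return m_star + (m0 - m_star) * math.exp(-t / tau)

# Illustrative stratospheric constituent: tau = 2 years, S = 1 unit/year.
print(round(burden(t=2.0, source=1.0, tau=2.0), 3))  # after one residence time
```

The long tau of stratospheric constituents relative to tropospheric ones is why the text treats the stratosphere as the place where injected substances linger for months or years.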
From the budgeting point of view it is important to monitor injections of substances to the air and the transfer mechanisms from the air to water and soil. The interfaces between atmosphere and water and between the atmosphere and the continents deserve particular attention. Of special significance among the transfer mechanisms is precipitation since it has an important scrubbing action on atmospheric gases and particles. Its composition (precipitation chemistry) is a useful guide to the nature and amount of airborne substances carried to the earth's surface and available to interact with biota. Water. Residence times in water are longer than those of air and the presence of serious mixing constraints in oceans makes representative sampling much more difficult unless many more sites are involved. Despite this qualification, bodies of salt and freshwater reflect the history of surrounding land use in an informative way. Substances released into rivers etc. find their way into aquatic biota and bottom sediments which may often irreversibly accumulate many substances and thus act as a valuable historical record of previous changes and trends. The output from rivers to the oceans is not only a national and regional problem but also of concern to any global budgeting of critical substances essential for the global monitoring system. Soils. Soils like sediments are often the ultimate sinks of many important substances particularly in low rainfall areas. They are the most intensively used resource of any nation and the irreversible accumulation of substances in them is thus of critical importance. Mixing constraints are of course greater in soils than in air or water and there are large-scale geographical differences in the accumulation and loss of substances to soils, depending on local usage by man, soil chemistry, rainfall etc.
They not only receive substances by dry deposition and precipitation from the air but are also the source of dusts and gaseous exhalations which can be transported atmospherically over great distances. Volatile substances such as organochlorine compounds and dimethyl mercury are of interest in that they may evaporate from warmer soils and condense in the soils of cooler regions. Special attention should be paid to the occurrence of new technical substances in the soils of tropical regions.
Biota. The reason for monitoring certain substances in biota is twofold. They may cause adverse biological
effects and they may be in greater concentrations and therefore more readily detectable. Knowledge about the levels found may be used in risk evaluation. For substances with threshold-effects the existing levels should be used for an estimate of the safety margin before effects appear. Organisms are important as a means of transport for substances through the biosphere. They can take up and accumulate certain chemicals and transmit them through food chains, by a process of biological magnification, where an increased accumulation at higher trophic levels occurs. Therefore the effects are often most pronounced at the tops of the food chains. The transport of substances along food chains takes time and it is thus of great importance to detect any significant accumulation of substances at the lowest trophic levels. By using sophisticated chemical methods it is now possible to detect even very minute amounts of substances. Organisms at the bottom of the food chains contribute to an early warning system. Even similar substances may behave differently in the same food chain. This depends on different metabolic patterns and abilities to excrete substances. This variability
has a marked genetic component and partly explains the development of tolerance. In aquatic organisms, the direct uptake of substances from water may sometimes be more important than uptake via the food chains. When chemical methods are not sufficiently sensitive to estimate the trace amounts of a substance in the abiotic environment, accumulator organisms may be analysed instead. Bioaccumulators also often integrate the chemical environment in both time and space. A fish in a lake may integrate the conditions in that lake over a long period of time, and wide-ranging marine organisms may reflect the situation in extensive marine areas. In certain cases specific organs may give additional information on the chemical situation in the environment. For example, different amounts of mercury accumulating in the feathers of migratory birds, formed at their summer and winter quarters respectively, indicate geographical differences in mercury exposure.
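The biological magnification described above can be sketched as repeated multiplication by a biomagnification factor at each trophic step. The chain and the factors below are hypothetical, chosen only to show why effects are most pronounced at the tops of food chains.

```python
# Minimal sketch of biological magnification along a food chain: the
# concentration at each trophic level is the previous level's
# concentration times a per-step biomagnification factor.

def magnify(c0: float, factors):
    """Return concentrations at each successive trophic level."""
    levels = [c0]
    for f in factors:
        levels.append(levels[-1] * f)
    return levels

# Hypothetical chain (units e.g. mg/kg):
# water -> plankton -> small fish -> predatory fish -> fish-eating bird
chain = magnify(0.001, [100, 5, 4, 3])
print(chain)
```

Even modest per-step factors compound: here the top consumer carries a concentration thousands of times that of the water, which is why monitoring at the lowest trophic levels gives early warning.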
In certain cases organisms may be used as indicators of the presence or absence of a specific substance, or of certain levels of it, in the environment. The foregoing discussion emphasizes the need to sample and measure environmental substances in all media in such a way that their flux rates can be calculated and an "environmental balance sheet" drawn up for each substance. This will help us to gather valuable information such as, for example, whether a detected increase of a substance in one medium represents a real overall global increase or merely the appearance of the substance working its way through the environmental cycles from another medium. Since nearly all scientific competence in investigating the environment is traditionally media-oriented, it is unrealistic to erect a completely new system based on this
dynamic approach. It is thus proposed to discuss below the range of monitorable parameters in each of the media separately and to attempt a synthesis at the end which once more emphasizes the need for a dynamic overview of the whole environment.
Atmosphere
Carbon dioxide. Land plants obtain all their carbon from atmospheric carbon dioxide and from carbon dioxide released by soil respiration. All animals, including man, subsist on the carbon compounds made by plants. Changes in the amount of atmospheric carbon dioxide might have an influence on global climate, as previously indicated, or may alter primary productivity in green plants, since carbon dioxide is sometimes a limiting factor in plant growth.
Sulphur dioxide and hydrogen sulphide. Numerous epidemiological studies clearly indicate an association between sulphur dioxide and health effects of varying severity. Other studies have shown that chronic injury to plants can occur with prolonged exposure to low concentrations of these gases, as well as adverse economic and aesthetic phenomena related to atmospheric visibility and the soiling and corrosion of materials.
Carbon monoxide. This gas is known to have important physiological effects on man at the increased levels found in dense traffic. It is now known that the surface of the oceans releases substantial amounts into the atmosphere. However, its fate in the atmosphere is not known, and it is of some importance to ascertain whether the gas is accumulating there.
Nitrogen oxide and nitrogen dioxide. These gases play important roles in the formation of "photochemical smog", which is being recognised as an increasing problem to man
and plants in and about urban complexes in the temperate zone. When released into the lower stratosphere by high-flying aircraft, these gases possibly may interfere with the ozone balance.
Ozone (and ozone precursors). These substances have an important chronic and acute impact on biological systems through the impairment of performance, effects on pulmonary function, and vegetation damage.
Ammonia. Almost all of the ammonia in the atmosphere is produced by natural biological processes, although considerable increases are found over industrial cities. The ambient air concentrations are lower than those hazardous to plants and animals. However, a long-term trend would have an important biological significance.
Aerosols and particulates. The impact of these substances on biological systems covers a wide range of important physical and pathological consequences, both direct and indirect. For instance, Aitken nuclei of less than 0.1 µm radius are important in the formation of precipitation, fogs, haze, etc. The so-called "large" particles in the 0.1-2 µm range affect optical phenomena such as visibility and turbidity and can be important lung irritants in man, especially the particulate decay products of sulphur oxides, which can carry absorbed or adsorbed gases deep into the respiratory system.
Insecticides, herbicides and other biotoxins (in air and precipitation). The bio-environmental problems associated with the use of insecticides and herbicides and a number of other biotoxins, particularly those from industrial processes and fuel and refuse burning, have been well documented. Since one of the most rapid and effective routes for the distribution of these materials on a global basis is via the atmosphere, early detection of significant changes in their distribution could be achieved by
monitoring the concentrations of these materials in air and precipitation.
Chlorinated aliphatic hydrocarbons. Carbon tetrachloride, trichloroethylene and similar compounds used in cleaning may become important atmospheric constituents in the future.
Water
Although the organisational pattern required for monitoring freshwater is likely to differ substantially from marine monitoring, many of the critical variables to be studied are the same in both media. The following discussion, dealing principally with the marine environment, also applies to freshwater unless otherwise stated. There are two major groups of parameters of potential importance in water monitoring: biological stimulants, and biological toxins including radionuclides.
Biological stimulants. The effects of biostimulants on the environment are usually observed on a local scale and may result in unsightly blooms of aquatic vegetation, algae and bacteria. Unless these products are removed from the system and allowed to decay elsewhere, premature deoxygenation of the aquatic environment can occur. Chemical species known to stimulate the growth of primary and heterotrophic producers include NO3-, NH3, PO43-, K+, CO32-, HCO3- and organic matter, along with various trace metals. Other substances regarded as biostimulants are a variety of organic compounds such as vitamins, hormones and other unidentified "growth factors", present usually in trace amounts, particularly in domestic sewage.
In coastal areas, estuaries, fjords, lagoons and epicontinental seas, the increasing input of nutrients and potential nutrients from sewage and industrial outlets, as well as from dumping, is often the cause of a disturbance of the normal biological equilibrium. Because of sampling and storage problems, it seems inadvisable at present to split the different phosphorus- and nitrogen-containing nutrients into subgroups. However, total phosphorus and total nitrogen (excluding gaseous nitrogen), both in dissolved and particulate form, should be included in a monitoring system for the seas. This restriction to total P and total N may be less satisfactory for freshwater.

Because of the anticipated effects that a rise in the carbon dioxide content of the atmosphere might have on climate, it is necessary to understand the circulation of carbon dioxide not only in the atmosphere but also in the oceans, and especially the exchange of carbon dioxide between the sea surface and the atmosphere. It is expected that an increasing amount of information about carbon dioxide in the sea and components of the carbonate system will be obtained from ongoing research and survey activities within the next five years, through improved methods and instrumentation.

Continental erosion, industrial activities, sewage injection and the dumping of mass residues from chemical production (e.g. red mud), together with nutrification and overproduction, might change the turbidity of surface waters considerably. Such events might also affect offshore areas all over the world and become an international problem, especially when large quantities of the relevant material are dumped in offshore waters.

Unusual depletion of oxygen normally indicates a high organic loading of water. This is certainly not a worldwide issue, but it may be a regional problem, especially in areas where natural processes and man-induced effects both lead to stagnation and reinforce an already existing natural tendency towards oxygen deficiency, as is the case in the Baltic. It is therefore desirable to include oxygen measurements in a monitoring system.

Biological toxins. Potential toxins include almost all heavy metals and many organic compounds. Toxicity may manifest itself at any level of the food chain, or may significantly alter the species composition of biota by favouring those populations of organisms differentially tolerant of the specific toxin involved. In other cases, where substances are not directly toxic, they may concentrate in the tissues of living organisms, making them unfit for consumption by other organisms, including man. Such materials, if persistent in the environment, can increase in concentration in aquatic systems.
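The progressive concentration of persistent materials along food chains can be illustrated with a minimal biomagnification sketch. The bioconcentration and biomagnification factors used here are hypothetical, chosen only to show the multiplicative character of the process:

```python
def tissue_concentration(c_water, bcf, bmf, trophic_level):
    """Concentration of a persistent contaminant at a given trophic level.

    c_water: ambient water concentration; bcf: bioconcentration factor at
    the first trophic level; bmf: biomagnification factor applied at each
    subsequent feeding step. trophic_level=1 is the primary producer.
    """
    return c_water * bcf * bmf ** (trophic_level - 1)

# Hypothetical numbers: 1 ng/L in water, BCF of 1000, threefold
# increase per feeding step up a four-level food chain.
for level in range(1, 5):
    print(level, tissue_concentration(1e-9, 1000, 3.0, level))
```

Even with modest per-step factors, a top predator several steps up the chain carries a tissue burden orders of magnitude above the ambient level, which is why the text singles out persistent, accumulating substances.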
Many metals, including mercury, lead, cadmium, vanadium, chromium, copper, zinc, iron, arsenic and selenium, and their related inorganic and organic compounds, are considered to be potentially hazardous. The levels of mercury and lead are believed to have risen considerably in the surface layer of the oceans through man's production and use of them. Both have regional, if not global, effects on the marine ecosystem and are accumulated in food chains. The other metals mentioned here are mostly of local or regional interest only. Also considered potentially hazardous are chlorinated organic compounds such as DDT and its metabolites, aldrin, dieldrin and endrin; residues from the fabrication of polyvinyl chloride and similar chlorination products, i.e. aliphatic chlorinated hydrocarbons; polychlorinated biphenyls and residues from the fabrication of such compounds; and alicyclic chlorinated compounds such as lindane (γ-BHC).
The toxicity of various oils and oil products varies widely, depending on the combination of environmental factors and also on the biological state of the organisms at the time of contamination. Different species and different life stages of organisms have been shown to differ in their susceptibility to pollution. Natural biogenic hydrocarbons, on the other hand, may have well-defined biological functions. Methods for oil-pollution monitoring must therefore be able to deal with the entire spectrum of oils and oil products, at high and low concentrations, as well as with the natural hydrocarbons in sediments and organisms. Further, for pollution research and for law enforcement there is a need to differentiate between natural hydrocarbons and pollutants, and to recognise oils from different sources and oil products resulting from different refining processes. Existing analytical technology, using gas-liquid chromatography, is well on the way to achieving this.

Many organic compounds, whether occurring naturally or emitted by human activity, may influence biota indirectly through their capacity to complex with, or in other ways modify the chemistry of, inorganic ions. These processes may alter the biological availability not only of the toxic heavy metals but also of essential trace metals required for the normal growth of organisms. The full magnitude of such problems cannot be understood until the organic contaminants are identified and their chemical stability and affinity for metals assessed. For the present, wherever possible, metal analysis should differentiate between species in ionic solution and those in organic combination. The availability or toxicity of metal ions is also strongly dependent upon the concentrations of accompanying ions, particularly hydrogen, calcium and magnesium. Metal toxicity to fish is known to be reduced by factors of ten to a hundred in "hard" as against "soft" waters.

Soil
Soil composition. A great deal is already known about the physico-chemical composition of the world's soils, largely as a result of the long-term, painstaking surveys of surface geology and soil patterns necessary for mapping the potential mineral and agricultural resources of a nation. These are essentially national or regional problems. A recent and more sophisticated development has been the use of stream sediment analysis. Stream, lake and marine sediments average the prevailing soil chemical conditions for trace elements over a wide area, and their use as indicators is proving a valuable tool in studying the occurrence of mineral deficiencies. In future it may be possible to use this method for studying the regional build-up of aerially distributed pollutants which fall onto and accumulate in soils, often thousands of kilometres from their source. The great value of the analysis of plant and animal tissues as indicators of prevailing soil conditions is already well understood.

Non-ferrous (heavy) metals emitted to the air or deposited directly on soils can be fixed, particularly by soil organic matter. Many of these metals, including lead, arsenic, antimony, nickel, indium, mercury, cadmium, zinc, cobalt and chromium, are known or suspected to be a hazard to human and animal health, several having been linked with the occurrence of cardiovascular disease and of gastric and other cancers. Mercury can be alkylated to highly toxic forms by bacteria in certain soils. DDT, PCB and other organochlorines may be fixed in certain soil horizons and also have significant effects on soil biota. Any of these contaminants may undergo a process of biological concentration as they pass up the human food chain. Studies of the dynamics of their accumulation and movement, and of their residence times in soils, are needed, using soil, plant and animal analyses.

Macronutrients (S, N, P and C compounds) constitute major factors of soil fertility. Problems may arise in connection with the wide and intensive use of fertilisers or with improper land use.

Soil structure and cover. Under intensive grazing and/or mineral fertilising, soil structure and the vegetational cover of soils often suffer a decline. This is normally a local problem, but it may be a matter of international concern where extensive deforestation, overgrazing, overburning or other human pressure leads to a loss of soil organic matter, to bare soil, to windblown soil and even perhaps to the extension of arid zones. This can be particularly serious where plant regeneration is very slow. The extension of bare ground can be registered by satellite sensing.
Organisms

Organisms will collect, and sometimes accumulate, certain toxic substances and radionuclides from air, water and their food. The coverage of the monitoring programme should include monitoring at the four main trophic levels: primary producers (green plants); primary consumers (herbivores); secondary consumers (predators); and decomposers and scavengers. It also follows that particular attention must be paid to those organisms that show high accumulation rates. These can be used as test organisms and temporal integrators. It should also be recognised that organisms that feed over a wide area can effectively integrate geographical variation in contamination levels.
ANALYTICAL METHODS FOR DIFFERENT CRITICAL SUBSTANCES
The above general review of a wide range of variables needs to be followed by a discussion of groups of critical substances, leading to a selection of the priority substances for the initial stage of the monitoring programme. We have here taken note of the preliminary results of a special working party of SCOPE dealing with "Materials which may significantly alter the biosphere and their determination and assessment"; this working party will later report on analytical methods for different critical substances. We have considered the relevance and technical feasibility of monitoring the priority variables and are satisfied that they are appropriate to the problems and can be monitored with available techniques. Special operational manuals will have to be prepared at a later stage, when the final decisions about variables have been made.
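Since the programme's stated purpose is the assessment of secular trends, a minimal example of trend estimation may be useful. The following sketch fits an ordinary least-squares slope to a hypothetical series of annual mean concentrations; a real programme would of course also have to handle seasonality, detection limits and measurement error:

```python
def secular_trend(years, values):
    """Least-squares slope (units per year) of a monitored concentration
    series -- a minimal check for a secular trend."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Hypothetical annual mean lead concentrations at a single station.
years = [1966, 1967, 1968, 1969, 1970]
lead = [0.10, 0.12, 0.11, 0.14, 0.15]
print(round(secular_trend(years, lead), 4))  # positive slope: rising trend
```

A slope distinguishable from zero, sustained across several stations, is the kind of signal the recommended network of stations is intended to reveal.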
Pesticides and related substances. DDT and its metabolites and degradation products may serve as a valuable model for the monitoring of pesticides in general. Other persistent organochlorines are aldrin, dieldrin, BHC, endrin, methoxychlor, lindane and heptachlor. Some of these compounds are widely used, but it has not yet been proved that they have the same general global distribution as DDT. Polychlorinated biphenyls (PCBs) are very resistant to biodegradation, have a global distribution and have marked effects on biota; they should be given high priority in any initial monitoring programme. Substituted phenoxyacetic acids (herbicides) and organophosphorus compounds may be considered later for inclusion in the global system; it has yet to be established whether these compounds have a global distribution.

Non-ferrous (heavy) metals. Once liberated to the environment from mineral extraction and purification, these will always be a potential hazard, as they are never
biodegradable. It is possible that they may eventually become immobilised as very stable substances in marine sediments, but more information is needed on this. One particular difficulty is that, unlike organochlorine compounds, metals already occur in the natural environment, so the problem of establishing natural levels is much more difficult. First priority should be given to lead, mercury and cadmium, as these three metals are already significantly involved in environmental problems. Other metals, such as arsenic, zinc, vanadium, selenium, beryllium, nickel, chromium and manganese, may be included in the monitoring system at a later date.

Organic substances in the oceans. The occurrence of petroleum products in the oceans is regarded by some scientists as a very serious global problem. The compounds from crude oil may enter the oceans from oil spills, during the transport of oil products over the seas or, in the case of volatile fractions, through aerial transport. The oil problem is important and possible global effects are foreseen; pilot activities are given high priority.
Chlorinated aliphatic hydrocarbons, waste products from the plastics industry, have been found to have an extensive distribution in the North Atlantic as a result of ocean dumping. Although rather toxic, they are broken down within a comparatively short time. These substances do not have the same priority as PCBs, but they may be considered for inclusion in a global system at a later date.

Substances in relation to geochemical cycles. Human activities may change the geochemical cycles of the major macronutrient elements, at least in local areas. Much attention has been paid to the environmental problems relating to the sulphur cycle. As man-made emissions of sulphur to the atmosphere are of about the same magnitude as the natural emission, it is possible that man's activities have changed the sulphur cycle in a profound way, with resultant effects on ecosystems. The "acid rain" problem is linked to this. At present, extensive research activities are being undertaken which should contribute to a better understanding of the mechanisms involved. This work is an essential prerequisite to any future global monitoring.

Extensive emissions to air (automotive NOx), to waters (sewage) and to soils (artificial fertilisers) may have global importance. Changes in the phosphorus cycle may also be critical, as this element may play an important role in the eutrophication of water. On the other hand, phosphorus is an element which might limit agricultural productivity in the future, so resource conservation and management may be very important. Another critical substance is carbon, regarded by some as the limiting substance in eutrophication. Carbon dioxide levels in air are also of great importance for organic productivity. We are not yet prepared to recommend immediate implementation of monitoring programmes for the geochemical cycles, but research and pilot activities directed towards their inclusion at a later stage are recommended.

On the basis of the foregoing considerations, we recommend as a first priority that data be collected on the following substances in air, water, soils and biota, at a number of stations, for the purpose of assessing secular trends in relation to the pollution of the biosphere:

Mercury.
Lead.
Cadmium.
DDT, its metabolites and degradation products.
Polychlorinated biphenyls (PCBs).

We further recommend that the following substances be considered for later inclusion in this network:

Petroleum products.
Persistent organochlorine compounds other than DDT.
Chlorinated phenoxyacetic acid derivatives.
Organophosphorus compounds.
Chlorinated aliphatic hydrocarbons.
Other metals (As, V, Zn, Se, Cr, Cu, Be, Ni, Mn).
Relevant compounds in the cycles of S, N, P and C.
Oxygen in water.

PHYSICAL, CHEMICAL AND BIOLOGICAL DATA
The interest expressed in the idea of environmental monitoring by various governments and by the world scientific community stems from a basic concern with the safeguarding of human health and well-being, defined in the broadest sense, i.e. covering any phenomenon which can be detected as a significant disamenity to man. Thus, apart from the direct harm to human health arising from exposure to incipiently pathogenic agents (e.g. harmful micro-organisms, toxic substances) in air, water and food, indirect harm could arise from certain forms of climatic change; a reduction in the productivity of crops; other changes in livestock and biota; modified aesthetic values; and environmentally induced social problems. This is the total human environmental problem, and further clarification is required to obtain a more practical view of human health in the context of the present discussion.
The indirect effects referred to above, i.e. any future climatic change or future reduction in biological productivity, can influence nutritional and living-standard factors, which could predispose man to succumb more readily to pathogenic agents, and on a much wider scale in the future than at present. The action of such indirect effects is most evident in areas of the world where, because of adverse local climates or poor soils, underfed populations living in extreme poverty exhibit high morbidity and mortality rates from pathogenic agents. Such nutritionally generated health problems have been with us for many years, aggravated by bad housing, defective sanitation and pest infestations. Existing national and inter-governmental health organisations still recognise this as their basic area of involvement and continue to be very active in this field.

Apart from this more traditional area of concern, there exists a strong feeling that man may nowadays be exposed to an additional and growing burden of environmentally induced health hazards generated by his intensive agricultural and urban-industrial use of the environment. Thus, superimposed on the patterns of disease characteristic of pre-urban-industrial or pre-intensive-agricultural societies, one can discern a newer component which is either known or suspected to be induced by exposure to these 20th-century conditions. It includes: diseases of the blood and circulatory system (e.g. anaemia, hypertension, arteriosclerosis and ischaemic heart disease); certain forms of cancer (e.g. leukaemia and cancers of the kidney, liver, stomach, lung and bladder); respiratory complaints (e.g. asthma, emphysema, chronic bronchitis); impairment of nervous function (e.g. encephalopathy, mental disorders); teratogenic effects (e.g. congenital malformations); and mutagenic or allergic effects.
The possible causal agents here are generally agreed to be one or more of the following, some less certain than others: oxides of sulphur and nitrogen; ozone; carbon monoxide; non-ferrous metals (e.g. Pb, Hg, Cd, As, Be, Ni, Zn, Cr); radionuclides; nitrates and nitrites; organochlorine compounds (e.g. pesticides, chlorinated dioxins, polychlorinated biphenyls, chlorinated aliphatic compounds); and other more or less complex pharmaceutical substances and food additives with poorly understood side-effects. It is generally agreed that we need a more thorough registration of morbidity and mortality attributable to these diseases, or some form of index parameters for them (e.g. crude morbidity and mortality rates in excess of normalised data, perinatal mortality, rates of first admission to mental care as against re-admission rates). This survey may be carried out in four broad strata or critical groups:

(a) Very high exposure groups at special risk from the suspected causal agents listed above.
(b) High exposure groups below occupational exposure levels but having higher than average exposure on account of living in large cities or in intensively industrial or agricultural regions.
(c) Medium exposure groups living in rural parts of densely populated countries with a high level of technology and/or intensive agriculture, but not at risk levels (a) or (b) above.
(d) Low exposure (baseline) groups living in remote regions of the world, practising primitive agriculture, pastoralism or hunting.

Along with such studies, a simultaneous programme of exposure assessment should be conducted. This would
attempt to evaluate the levels of the suspected causal substances in the local air, food (including imported foodstuffs) and drinking water, and relate them to the levels actually present in human tissues (e.g. bone, liver, kidney, spleen, blood, skin, hair, body fat). It is important here to analyse materials for as many substances as possible at the outset; multifactorial statistical analysis will later enable the investigators to concentrate on a priority list of two or three substances for each disease category. The study of the ways in which such causal substances accumulate with age in the various categories (a)-(d) above has already proved valuable for Cd and Pb.

This kind of knowledge, acquired directly from field studies, can be effectively supported by long-term chronic toxicological studies carried out on experimental animals, in order to induce experimentally the various types of illness by the administration of trace amounts of suspected causal agents, over several generations if necessary. Another valuable experimental approach is a biochemical search for impaired enzyme activity and for the appearance of intermediate metabolites accumulating in the tissues or body fluids of affected organisms, including man, as a result of the impairment of metabolic function (e.g. δ-aminolaevulinic acid dehydratase activity and the appearance of this acid in the urine of lead-intoxicated subjects). This combined field and experimental approach helps to associate each disorder more certainly with its specific causal agents. It also provides information on the threshold levels at which each substance is harmful to human health. Without these critical values it will not be possible to assess the seriousness of current global levels of the various substances, and much time and many resources will be wasted in a costly and elaborate monitoring process which cannot be evaluated.
Many technical problems exist in obtaining representative samples, in allowing for natural ranges of genetic tolerance and for synergistic effects, and in the computation of reliable threshold dose levels. This last difficulty has been avoided for radionuclide exposure by adopting the simple concept that there is no zero-effect dose of a radionuclide, and that all exposures are cumulative and additive, with an effect proportional to the final dose received by the body or population studied. This operational concept, used by UNSCEAR and ICRP for calculating the so-called "overall harm commitment" of a population and relating it to a stochastic index of damage to a human population, merits attention for pollutants. There is already some evidence indicating that some pollutants may act as radionuclides are supposed to behave, in having no toxicity threshold, and attempts have been made to follow this radionuclide approach for contaminants such as lead or methyl mercury. Again, it is important to recognise from the outset that in assessing the "total harm commitment" for a contaminant it is necessary to take an "ecological" approach to the dynamics of the substance studied. Thus, one must know its rate of supply to the body via food webs, its absorption rates in gut and lung, and its elimination rate by the body, as well as its environmental stability or persistence. It is also important to continue to review new chemical substances for their possible long-term harm to man.
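In its simplest linear no-threshold form, the "overall harm commitment" concept reduces to a collective-dose calculation. The sketch below uses hypothetical populations, doses and a hypothetical risk coefficient; it illustrates the arithmetic, not recommended values:

```python
def harm_commitment(groups, risk_per_unit_dose):
    """Linear no-threshold 'harm commitment': the sum over exposure groups
    of population x per-capita dose (the collective dose), multiplied by a
    risk coefficient. No threshold: every dose contributes proportionally.

    groups: iterable of (population, per_capita_dose) pairs.
    """
    collective_dose = sum(pop * dose for pop, dose in groups)
    return risk_per_unit_dose * collective_dose

# Hypothetical exposure strata (a)-(d), doses in arbitrary units: the large
# moderately exposed groups can dominate the small highly exposed one.
strata = [(1e4, 5.0), (1e6, 1.0), (1e7, 0.3), (1e5, 0.05)]
print(harm_commitment(strata, risk_per_unit_dose=1e-4))
```

The instructive feature of the model is that total harm depends on the collective dose, so reducing a modest exposure shared by a very large population can matter more than eliminating a severe exposure confined to a few.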
In the light of the above remarks, we recommend that the following be periodically surveyed wherever data can be obtained:

1. Human life expectancy.
2. Population age structure.
3. Excess crude mortality.
4. Growth rate in terms of body weight and height.
5. Frequency of diseases of the blood and cardiovascular system (anaemia, ischaemic heart disease, arteriosclerosis, hypertension).
6. Frequency of certain forms of cancer (leukaemia; cancer of the stomach, liver, kidney, bladder and lung).

Surveys should be carried out in various age groups and in the following four critical groups representing various degrees of exposure:

Very high exposure (occupational).
High exposure (urban-industrial or intensive agricultural exposure).
Medium exposure (rural populations in densely populated countries).
Low exposure (populations from remote regions).

A simultaneous programme of tissue analysis (bone, blood, liver, kidney, spleen, body fat) for lead, mercury, cadmium, DDT and its metabolites, and polychlorinated biphenyls should be carried out on post-mortem and other material, carefully selected to represent the various age groups and levels of exposure. Data collected under 1-6 above should be correlated using traditional threshold-dose-level quality criteria, and attempts should also be made to use the "no zero-dose effect" method used for radionuclides. We recommend a periodic review of other potentially hazardous substances, including new chemicals, to help determine whether they have any long-term effect on human health. We also recommend research to establish biochemical monitors of disease, e.g. the accumulation of intermediate metabolites in the human body.
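The recommended correlation of tissue levels with disease frequencies across the exposure strata could begin with something as simple as a product-moment correlation. The data below are entirely hypothetical and serve only to show the computation:

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation -- one elementary way to relate
    tissue levels of a suspected causal substance to disease frequency."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical means for the four exposure groups, very high -> low:
# blood lead (ug/100 mL) against hypertension rate (%).
lead = [45.0, 25.0, 15.0, 8.0]
rate = [12.0, 8.0, 6.0, 4.5]
print(round(pearson_r(lead, rate), 3))
```

With only four strata such a coefficient is merely suggestive; the multifactorial analysis recommended above would be needed before singling out any substance as causal.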
BIOLOGICAL SYSTEMS
Relevant physical and chemical measurements of the abiotic environment will not in themselves be informative concerning actual effects on biota; the rationale for monitoring animals and plants and their associations is that only by doing this can such effects be detected. Moreover, the whole concern about the environment has evolved because, from time to time, directly adverse effects on biota and man have been observed. Biological parameters therefore constitute an indispensable, if not the most important, part of any comprehensive environmental monitoring system. Monitoring of the physical or chemical properties of the environment is relevant only in conjunction with established or strongly suspected effects on biota. The view taken in this report, that biological parameters constitute effect parameters, means that there is motivation for biological monitoring even if a direct and specific cause-and-effect relationship has not yet been established. Observed adverse changes in the living environment will provide warning signals and detection mechanisms, and will draw attention to the fact that research is needed to clarify the underlying cause as a preparation for corrective management.

Biological systems are extremely complicated, and the possible variables for monitoring are very numerous. It is thus essential to find those biological variables that most efficiently provide reliable information about effects on biota: first to establish the feasibility of performing the observations and measurements most likely to be informative, and then, by developmental research, to refine them further into workable parameters. The effects on biota that need to be monitored are caused by (1) more or less direct human impact, (2) climatic changes and (3) biologically active chemicals introduced into the environment. The variables selected should be informative regarding at least one of these three groups of causes.

The biological parameters may be sought at different levels of biological organisation, from the lowest level of molecules, via populations, to the levels of whole communities and biocoenoses. When looking into the problems at the highest levels it is necessary to take into consideration the totally integrated picture of the biotic and the abiotic environment. Therefore studies on ecosystems and biomes must be carried out, and the biological monitoring activities integrated with physical and chemical monitoring. Our approach here will be to make a broad review of the different possible areas where effects can be expected and to isolate those where monitoring is both feasible and relevant. The following list includes the kinds of biological parameters that will be considered in the evaluation of a minimum programme:

Biome studies.
Distribution of vegetation types.
Species diversity.
Primary productivity, biomass and growth rate.
Size and distribution of species populations.
Specific population characteristics: reproductive success, mortality, age structure and migrality.
Physiology, ontogeny and pathology.
Genetics.
Behavioural responses and mental performance.
Phenology.
Registration of short-lived biological phenomena.
Biome studies. The world's ecosystems are so numerous that it is an impossible task to analyse each of them separately. However, there is enough evidence to indicate that a number of basic principles are the same over large regions with roughly similar ecosystem structures, e.g. within tundra ecosystems or within tropical forest ecosystems. Regions with similar ecosystems are called biomes, and the logical approach is to call for studies in representative ecosystems within each biome, i.e. biome studies. From the monitoring point of view, biome studies should provide information on where, in the total circulation of energy and substances, the critical points are located in terms of sensitivity to environmental stresses and to human control. Information of this kind is of great importance for the determination of the most efficient biological and other parameters for monitoring. By comparing the states and processes of comparable ecosystems in low, medium and high exposure situations, it will be possible to detect effects caused by human impact without time-consuming long-term monitoring at fixed plots. The biome studies will, if properly designed, constitute indispensable parts of a global monitoring programme, serving as centres for research and analysis directed to the integrative evaluation of complex biological processes and to the isolation of specific parameters suitable for large-scale routine monitoring.
Distribution of vegetation types. The surface of the earth is continuously changing: ecosystems are disappearing and being replaced. It would be an important task to make repeated surveys of the occurrence and distribution of different ecosystems on a global scale.
Species diversity. It is well known that one of the major criteria for the health of ecosystems is their degree of stability, and a useful measure of the degree of stability is species diversity. Agents causing environmental damage generally cause decreased species diversity. Measurements of species diversity are, however, difficult to carry out except for very limited parts of the total biota.

The soil remains the most important and intensively used part of the biosphere, and soils are not renewable in the way that air or water is. Toxic substances often accumulate in soils, and changes in soil quality tend to be more or less irreversible. It is therefore particularly important to detect changes in soil quality as early as possible. The programme should include the sampling of representative soil transects from low exposure to high exposure situations in the relevant biome types. Identification of species will generally not be possible, and therefore the relative occurrence of different ecological groupings of organisms has to be used. An interim approach to soil health monitoring, pending the development of methods for a more detailed programme, is to use gross soil respiration as an index of the biological activity of soils.

Aquatic algae often show very early and characteristic reactions to changes in the physical and chemical properties of water. They are particularly sensitive to increased levels of nutrients, but also to toxic chemicals. Particular attention should be paid to changes in the algal communities of the marine environment, since changes there may mean that global effects of pollution are occurring.

Air plankton (pollen, fungal spores, bacteria, etc.) are often carried long distances by moving air masses. The quantitative and qualitative composition of air plankton may provide information on the movements of specific air masses across continents, assist in forecasting animal and plant diseases and allergies in man, and contribute to the detection of major changes in the general composition of vegetation and of microfauna and microflora.
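Species diversity as discussed here is commonly quantified by the Shannon index, although the text does not commit to any particular measure; the species counts in this sketch are hypothetical:

```python
import math

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i ln p_i) over species proportions --
    a common quantitative measure of species diversity."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# An even community versus one dominated by a single species
# (hypothetical counts of four species in two samples).
print(round(shannon_diversity([25, 25, 25, 25]), 3))  # maximal for 4 species
print(round(shannon_diversity([97, 1, 1, 1]), 3))     # heavily dominated
```

A decline in the index along a transect from low to high exposure is exactly the kind of decreased diversity that the text associates with environmental damage.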
Primary productivity, biomass and growth rate. These variables depend primarily on climate, water and soil quality but may also be affected by toxic chemicals. It seems however that the prospects for detecting effects of human impact upon these parameters are not great. Natural ecosystems have generally a high buffering capacity and modified ecosystems are generally managed with the purpose of preventing changes of any kind. However, these are a group of parameters that should be carefully analysed in connection with the biome studies for possible future inclusion in the routine programme. Size and distribution of species populations. One of the most characteristic and significant adverse effects of man's impact on the environment is that many species decrease in number or in distribution range. Some species even become extinct. These effects have been observed particularly in birds and mammals but also in other vertebrates, plants and in some invertebrates and microorganisms. The decreases in population size in many birds, particularly birds of prey, as a consequence of reproductive failure induced by mercury compounds and organochlorines have been particularly instructive. These decreases in population size, whether caused by toxic chemicals or by other forms of human impact, have been of tremendous importance in establishing the present concern about the environment. When looking further into the problems of defining suitable organisms it is recognised that a very restricted number of groups are suitable for monitoring. These groups are: (1) Vanishing or endangered vertebrates, because they are sensitive to environmental changes and because monitoring programmes already exist which have provided useful information, and (2) Birds,. because they have proved to be responsive to a wide range of environmental changes, because they are easy to monitor
(taxonomically well known and easy to count) and there are numerous reputable ornithological organisations capable of taking part in a programme at minimum cost.
Specific population characteristics: reproductive success, life expectancy, mortality, age structure and migration. Changes in certain specific population characteristics may often be detected much earlier than changes in population size or distribution, because most species reproduce at a rate much higher than necessary to keep the population level constant. It is now known that the decrease in population size of many birds of prey is caused by reproductive failure, i.e. decreased natality; however, it takes some time before this affects the population size. Increased mortality has been observed for many animal species without accompanying population decreases. Life expectancy and age structure are important for evaluating cause-and-effect relationships, and migration data are also necessary in the same context.
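The lag between reduced natality and a visible population decline can be illustrated with a deliberately simple projection (all rates invented; real populations are age-structured, which this sketch ignores):

```python
# Toy age-free population model: N' = N * (1 + natality - mortality).
# A pollutant-induced drop in natality takes several years to show up as a
# clearly reduced population, which is why reproductive success is an earlier
# warning signal than census counts. All rates below are invented.

def project(n0, natality, mortality, years):
    sizes = [n0]
    for _ in range(years):
        sizes.append(sizes[-1] * (1 + natality - mortality))
    return sizes

healthy = project(1000, natality=0.12, mortality=0.10, years=10)
impaired = project(1000, natality=0.07, mortality=0.10, years=10)  # natality nearly halved

print(round(impaired[1]))   # year 1: only 3% below the starting level
print(round(impaired[10]))  # year 10: the decline is now unmistakable
```

A monitoring scheme watching only total counts would miss the problem in the early years, while one measuring reproductive success would detect it immediately.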
Physiology, ontogeny and pathology. Organisms respond to environmental stresses in many different ways. Effects of air pollution can be detected in the blood of vertebrates, congenital malformations are known to be partly environmentally induced, and it is well known that plants react to different toxic substances with sometimes very specific pathological symptoms. We believe that the use of certain sensitive plants for the detection and monitoring of effects from air and other forms of pollution is promising. In many cases naturally occurring plants, such as mosses, liverworts and lichens, may be used, particularly in high-exposure situations. Promising results have also been obtained with specially planted species, selected either for their general sensitivity to pollutants or for their specific sensitivity to certain substances.
Genetics. A number of substances released into the environment, including the radionuclides, are known to cause genetic changes, affecting genetic variability or mutation rates. There are a number of possible ways to monitor the effects of such substances, for example studies of the genetic variability and mutation rates of natural populations and of laboratory populations of standard strains of animals, plants and micro-organisms. Suitable organisms include genetically well-known standard strains of cultivated plants, Musca, Drosophila, and laboratory mice and rats. At present, however, there are no preparations for such a programme. It is known that some organisms may rapidly become adapted to tolerate elevated levels of toxic chemicals. This property makes it possible to determine the recent history of exposure by tolerance bioassays or by following gene-linked morphological changes, as with industrial melanism.
Behavioural responses and mental performance. Extremely early effects from low levels of toxic chemicals can be detected on a laboratory scale in the behaviour and performance of animals, for example in relation to mating behaviour and learning ability. The techniques available are, however, not yet standardised.

Phenology. The biological effects of climatic change are expected to become apparent first as changes in the seasonal timing of different biological phenomena (flowering of plants, arrival of migratory birds, mating, pupation and flight periods of insects, etc.). Extensive observations have been made of regional and local variations in the time of flowering of widely distributed and genetically uniform species such as the common lilac (Syringa vulgaris). German foresters have made extensive use of phenological observations in the
planning of forest operations. Worldwide studies of phenology were proposed as a part of the International Biological Programme, but this work was carried out only in a few countries. Such a programme requires a large number of observations with a representative geographic distribution. A principal advantage of the use of phenology in environmental monitoring is that many competent amateur observers can be enlisted. It might also be possible to use remote sensing techniques for registering the flowering of certain trees.
Short-lived biological phenomena (local catastrophes). A number of short-lived biological phenomena may serve as very informative detectors of unknown environmental problems. Extensive kills of sea-birds have occurred from time to time along most coasts of the world. To some extent these may be caused by bad weather conditions, but toxic chemicals and/or heavy metals are thought to constitute an important contributory factor. Such observations deserve to be reported in a systematic way, so that research efforts can be directed to the problem immediately in order to establish its cause. If the results show that some neglected environmental factor was responsible, it should then be decided whether that factor deserves more or less permanent inclusion in the monitoring system. Other short-lived phenomena that could be mentioned as possible candidates for inclusion in the reporting system are sudden plankton blooms in the oceans, certain kinds of pest outbreak, and certain rapid and unexpected species extinctions.
12 Costs of Climate Change Mitigation

Climate change due to the enhanced greenhouse effect is likely to be the most significant environmental issue confronting the global community in the twenty-first century. Of all industrialised countries, Australia is one of the most vulnerable to the impacts of climate change. This reflects Australia's already variable climate, poor soils, vulnerable ecosystems and high proportion of population living in coastal areas. Thus the potential impacts of climate change, and the need to develop appropriate adaptation strategies, are now important considerations in the context of national, state and local government responses to the issue. The economic costs of climate change mitigation are relatively well understood, as are the sectors and industries most likely to be affected by mitigation policies and measures. By contrast, the economic costs of climate change impacts are not well understood. It is essential that economic assessments of climate change are framed in the context of a sound appreciation and understanding, by decision-makers, of all of the potential costs and benefits associated with climate change and climate change response. Once the costs of climate change impacts and net benefits of adaptation strategies
are better understood, decisions can be made about the most appropriate combination of mitigation and adaptation measures.

REASONING FOR COSTING CLIMATE CHANGE IMPACTS
Climate Change Response Rationale
Climate change due to the enhanced greenhouse effect is likely to be the most significant environmental issue confronting the global community in the twenty-first century. The Third Assessment Report (TAR) of the Intergovernmental Panel on Climate Change (IPCC), released in 2001, confirms that the global climate "has demonstrably changed on both global and regional scales since the pre-industrial era, with some of these changes attributable to human activities".
The TAR also stresses that, even with concerted international action to reduce greenhouse gas (GHG) emissions, further global warming is likely to occur over the next few decades leading to regionally and locally significant impacts. An IPCC special report, 'The Regional Impacts of Climate Change: An Assessment of Vulnerability', indicates that Australia is one of the most vulnerable of all industrialised countries to the impacts of climate change. This reflects Australia's already variable climate, poor soils, vulnerable ecosystems and high proportion of population living in coastal areas. The TAR has confirmed the vulnerability of a range of ecosystems, economic sectors and communities in Australia to climate change including, in particular: water supply and hydrology; natural ecosystems including:
coral reefs, forests and woodlands, alpine systems, wetlands and riverine environments; agriculture, forestry and fisheries; coastal settlements and the built environment; the energy sector; human health; and tourism.

Thus the potential impacts of climate change and the need to develop appropriate adaptation strategies are now important considerations in the context of national, state and local government responses to the issue. The Australian Government has recognised the importance of impacts and adaptation with the establishment of a National Climate Change Adaptation Programme in 2004. This programme aims to prepare all spheres of government, vulnerable industries, communities and ecosystems to manage the unavoidable consequences of climate change. The Adaptation Programme is closely linked with the Australian Greenhouse Science Programme, which improves the scientific understanding of the causes, nature, timing and consequences of climate change to better inform industry and government decision-makers. An overview of the potential impacts of climate change in Australia is provided in the 2002 AGO release 'Living with climate change' and in a more recent AGO publication, 'Climate Change ...'.

Economic Rationale
Any external 'shock' to the economic system (such as climate change) can be examined in terms of:
first, the (hypothetical) costs if individuals, companies and governments take no action to avoid or reduce the costs associated with that shock; or second (and more likely), the costs if action is taken to avoid at least some part of these costs, by mitigating the size of the shock itself or by adapting to the shock as efficiently as possible - the assumption being that economic agents are flexible and will act to reduce the costs of an external shock.

Both mitigation and adaptation involve investment and other costs, and both provide benefits in terms of lower impact costs. In relation to the climate change issue, the economic costs of climate change mitigation are relatively well understood, as are the sectors and industries most likely to be affected by mitigation policies and measures. By contrast, the economic costs of climate change impacts, the sectors likely to be affected and the costs and benefits (i.e. net benefits) of adaptation measures are not well understood. It is essential that economic assessments of climate
change are framed in the context of a sound appreciation and understanding by decision-makers of all of the potential costs and benefits associated with climate change and climate change response. The assessment of the costs of climate change impacts and the net benefits of adaptation strategies is most appropriately undertaken as a two-stage process, with the focus in the first stage being on identifying the concepts, issues and methodologies appropriate for a robust analysis. Once these concepts, issues and steps have been identified, decision-makers can then proceed with the more expansive and involved task of costing the impacts of climate change, under a range of scenarios, at the regional, sectoral and national levels. Once the costs of climate change impacts
and net benefits of adaptation strategies are better understood, decisions can be made about the most appropriate combination of mitigation and adaptation measures.

FRAMEWORK FOR CLIMATE CHANGE ADAPTATION
The purpose of this chapter is to examine and establish an appropriate framework for costing the impacts of climate change and climate change adaptation. This framework consists in turn of: the economic framework for the assessment - welfare economics remains the most coherent and consistent framework for valuing, in dollar terms, the impacts of climate change; the objectives and scope of the assessment; and baseline definition.

Economic Framework

Welfare Economics
The foundation of all economic analysis is that scarcity necessitates trade-offs between alternative resource uses. The trade-offs are made on the basis of the value of the resources to individuals and society. This value is determined by individual preferences, with the total value of any resource being the sum of the values that individuals place on its use. Individual preferences can be expressed in two equivalent ways: willingness to accept (WTA) - the minimum payment that the owner of a resource is willing to accept for its use - with marginal WTA being represented diagrammatically in economic analysis as a supply curve; and
willingness to pay (WTP) - the maximum amount a consumer is prepared to pay for using the resource - with marginal WTP being represented diagrammatically in economic analysis as a demand curve.

Applied in the context of costing the impacts of climate change, WTP measures the maximum people would be willing to pay to avoid a particular impact (by adopting, for example, adaptation or mitigation strategies), while WTA measures the minimum people would be prepared to accept (as compensation) for living with the impact. When measuring the costs of climate change impacts it is important that the cost assessment considers all value or welfare changes resulting from those impacts. In measuring welfare changes it is necessary to draw a distinction between economic (opportunity) cost and financial cost, and between social cost and private cost. The net economic cost of a given climate change impact (or adaptation option) is the total value that society places on the resources that have been used to produce the goods and services forgone as a result of the impact (or on the resources diverted from alternative uses to adapt to the impact). These resources are measured in terms of the value of the next best alternative to which they could have been applied (i.e. the value of the opportunity forgone, or opportunity cost - measured in turn by WTP/WTA). Thus economic costs may differ greatly from financial (accounting) costs, which are simply a measure of the financial payments made for goods and services. The net economic cost of a climate change impact on society will comprise both the private costs (benefits) of the impact and the external costs (benefits). Collectively, these are defined as the social cost (benefit). Thus:
Social cost = private costs + external costs
where: private costs are the costs internal to the individual consumer or producer arising from the impacts of climate change. These will typically be measured in terms of changes in 'consumer surplus' and 'producer surplus', which arise from the impacts of climate change on consumer demand for, or producer supply of, a good or service, where: consumer surplus is the value consumers place on a good or service over and above the purchase price; and producer surplus is the amount by which a producer's revenues from selling a good or service exceed production costs. External costs are costs that are external to the market; many of the environmental, health and social impacts of climate change will fall into this category. Within a welfare economics framework, it is the social cost of climate change impacts that represents the full economic cost of impacts and therefore determines the most economically efficient policy response.

Approaches to Economic Assessment
Within the welfare economics framework, there is a spectrum of approaches or levels of analysis that can be used to assess the costs of climate change impacts. As we move through that spectrum the level of detail and completeness provided by the analysis increases, as does the level of complexity of the analysis. The two main levels of economic assessment are as follows: Partial equilibrium analysis. The impacts of climate change or adaptation policies and measures can be
examined in terms of direct effects on the economic value of a single market for a good or service. Consumer and producer surplus are the measures typically used to estimate economic value. There are a number of economic valuation techniques that can be applied to valuing the impacts of climate change on an industry or market within a partial equilibrium framework. Economic valuations can be undertaken at a level that is detailed enough to provide analysis of the specific workings of a market or industry. However, flow-on effects and feedbacks between the target market and the rest of the economy are not captured in these valuation studies; capturing them requires the use of general equilibrium analysis.

General equilibrium analysis. Where the impacts of climate change on a market result in indirect economic impacts or economic flow-on effects throughout the economy, or where the impacts of climate change are being assessed for a number of markets or industries, then general equilibrium analysis should be used. Macroeconomic (input-output) modelling and Computable General Equilibrium (CGE) modelling are the two main approaches to general equilibrium analysis. It should be recognised that these approaches are not
mutually exclusive. Thus, in practice, the results of an economic valuation assessment or partial equilibrium analysis can be used as input into a general equilibrium analysis.

Economic Efficiency and Decision-making
Economic efficiency is only one outcome that is likely to guide decision-making on climate change and responses to the impacts of climate change. This is because other societal objectives may not be captured fully in the welfare
economics approach to costing climate change impacts. Some analysts argue that an efficient response to the impacts of climate change - in the sense that nobody can be made better off without someone being made worse off - does not guarantee an equitable outcome. Thus there may be a trade-off between an efficient policy response and an equitable one. Another issue often raised in relation to welfare economics is that, notwithstanding the use of hypothetical market techniques such as contingent valuation, it cannot adequately measure the non-use values of natural systems or the extent to which human-produced stocks of capital may not be substitutable for natural capital stocks. Regardless of these limitations, welfare economics - when extended to address externalities, uncertainties, equity and so on - provides the best framework for placing a dollar value on the impacts of climate change, encompassing a consistent and flexible set of methods and tools for costing most impacts. There are, however, alternative or complementary tools and methods for assisting decision-making on the impacts of climate change and climate change response more generally, including multi-criteria decision analysis.

Objectives and Scope of the Assessment
It is crucial that the objectives and scope of an economic assessment of the impacts of climate change are fully defined prior to any assessment being undertaken, and that these reflect the problem at hand. The objectives and scope of the assessment are a function of the decision problem, which in turn relates to: the decision-making context - and associated baseline; and the climate change impacts to be measured - their type and geographic scale.
Decision-making Contexts
There are essentially two decision-making contexts in which cost estimates of the impacts of climate change are likely to be used:

Impact assessment/prioritisation. The objective is to produce estimates of the net economic costs of climate change impacts, for the purpose of establishing the relative importance of different impacts, or possibly of establishing the significance of impact costs relative to mitigation costs.

Adaptation option appraisal. The objective is to produce estimates of the net benefits of adaptation to specific climate change impacts, for the purpose of choosing between different adaptation options.

For each of these contexts a baseline (or reference scenario) needs to be specified.
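As a toy illustration of these two contexts (all dollar figures invented), an impact cost can be expressed as a social cost - private plus external costs, as defined earlier - and an adaptation option can then be appraised by its net benefit, i.e. the avoided impact cost less the cost of adapting:

```python
# Hypothetical appraisal of one adaptation option. Figures are invented.
# Impact costs are social costs: private (surplus) losses plus external costs.

def social_cost(private_cost, external_cost):
    return private_cost + external_cost

baseline_impact = social_cost(private_cost=40e6, external_cost=25e6)  # no adaptation
residual_impact = social_cost(private_cost=15e6, external_cost=10e6)  # with adaptation
adaptation_cost = 20e6

# Net benefit = avoided impact cost minus the cost of the adaptation itself.
net_benefit = baseline_impact - residual_impact - adaptation_cost
print(net_benefit)  # 20000000.0 -> the option is worth pursuing on these figures
```

Running the same calculation for several candidate options, against the same baseline, is what allows their net benefits to be compared.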
Baseline Specification for Impact Assessment

As previously noted, the objective in this context is to estimate the net economic costs (positive or negative) of climate change impacts in the absence of adaptation measures. The baseline (or reference scenario) can be defined as the situation assumed to exist (in a system, sector, industry or region) in the absence of climate change (the 'without climate change' case). The difficulty with baseline specification is that there are potentially two different baselines which can be used to cost the impacts of climate change:

A static baseline (or fixed reference scenario) assumes that existing socioeconomic, environmental and physical conditions will continue to prevail in the study sector or region into the future. Using the impact of climate change on tourism to the Great Barrier Reef as an example, the use of a static baseline
would assume that current tourism levels will continue into the future; the costs of climate change to tourism in the region are then the difference between the current net benefits of tourism and the net benefits of tourism with climate change. For reasons of expediency, this static baseline approach has frequently been used in studies of the costs of climate impacts. As noted by Metroeconomica, however, the use of a static baseline is an unrealistic representation of the future.

A more realistic approach is one that uses a dynamic (or moving) baseline to describe the future without climate change. This requires constructing realistic projections of future environmental, socio-economic and physical conditions relevant to the study sector or region.

Baseline Assessment for Adaptation Option Appraisal
The baseline (or reference scenario) is the future impacts (and associated costs) of climate change in the absence of adaptation. The effect of the adaptation response is to reduce the impact of climate change on the system or sector, with the reduction in the cost of the impact representing the benefit of the adaptation response, against which the cost of the adaptation itself must be netted. In this way, the net benefits (or costs) of different adaptation responses can be compared.

Impacts of Climate Change on Systems
A sound understanding of the likely or simulated impacts of climate change on systems, sectors and/or regions is obviously an essential prerequisite for any assessment of the costs and benefits of those impacts. This includes an understanding of the following: The type of impacts to be costed. Climate change will generally involve a spectrum of impacts from direct
impacts, such as warmer atmospheric and ocean temperatures, through to indirect biophysical impacts on natural systems and socio-economic impacts on human systems and sectors. For each impact, 'exposure units' will be affected. Generally, the economic analyst attempting to value the impacts of climate change will, for any system or sector, seek to value all of the impacts along the spectrum. This may not always be the case, though, particularly if the decision-making context is the assessment of the costs and benefits of different adaptation responses to a specific impact. One issue confronting the analyst is that the extent to which all impacts can be quantified across all exposure units will vary considerably, depending on the level of data aggregation and the level of uncertainty associated with the impacts.

The geographic scale of the impacts. Decisions about the geographic scale of impacts to be valued (e.g. local, regional or national) will both influence and be influenced by the level of data aggregation and accuracy and the method of valuation used. Close liaison between economic analysts and the scientific community will be important, not only to understand the spectrum of impacts relevant to a particular economic assessment but also the risks and uncertainties associated with those impacts.

ECONOMIC TECHNIQUES
Applying Techniques
The economic valuation techniques can be used to cost local or regional scale climate change impacts on a single industry or market, within a partial equilibrium framework. Because they use disaggregated data, studies
drawing on these techniques are sometimes referred to as 'bottom-up' studies. The techniques have the advantages of being: flexible, in the sense that they can potentially be applied to a wide range of impacts, sectors or markets; and relatively straightforward to apply (e.g. they do not require complex economic models). However, the use of these techniques in isolation is predicated on an assumption that a climate change impact will not lead to price changes in the affected market. Where climate change is likely to lead to a change in output of, or demand for, a good or service that is significant enough to affect its price, then the measurement of changes in producer or consumer surplus will require knowledge of the demand and supply functions that exist in the particular market. This, in turn, will require information on the price elasticity of demand for the affected good or service. This additional analysis will often require the processing of significant amounts of information, although it can generally be done by means of a simple model. If the impacts of climate change are expected to have economic flow-on effects then general equilibrium analysis will be required; such analysis can draw on information derived through economic valuation studies.

Discussion of Techniques
Although there is no established categorisation of economic valuation techniques, they can generally be grouped into two major categories - methods that use 'directly observed market behaviour' and methods that draw upon 'hypothetical market behaviour'.
Direct Observation

Direct observation methods use the prices of goods and services that are traded in the market to estimate producer and consumer surplus, and thereby directly or indirectly infer the cost or value relating to the affected good or service. For example, in calculating the economic impact of climate change on commercial fisheries in the Great Barrier Reef, an observed market method might calculate how much the catch is valued in the market and use an estimate of how much the catch may decline in the future (as a result of the loss of coral reefs and other impacts of climate change) to value the lost surplus. In this case, prices of the affected 'good' (fish) are observed, and their use allows the direct estimation of the loss in value. These methods are sometimes also referred to as 'revealed willingness to pay' methods, since willingness to pay is 'revealed' through market prices. Direct observation methods can, in turn, be split into two further categories:

Direct markets cost climate change impacts using the market price of the affected good or service, as determined in a conventional market by the forces of supply and demand. Methods in this category include: estimates of the change in input/output of a market; and estimates of replacement or restoration costs.

Indirect markets cost climate change impacts by observing behaviour in surrogate markets for an affected good or service. These surrogate markets can be applied to the impacts of climate change when changes to the flows of valued 'services' are not priced in conventional markets, such as the impact of climate
change on the value of recreational fishing or diving in the Great Barrier Reef. Many environmental and social services fall into this category. Methods include: the travel cost method; and hedonic pricing.
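As a minimal sketch of how a surrogate-market method works, the hedonic approach (discussed further below) regresses property prices on their attributes; the fitted coefficient on an environmental attribute is its implicit price. All data here are invented and noiseless, purely to show the mechanics:

```python
import numpy as np

# Minimal hedonic property-price sketch (all data invented): regress price on
# floor area and a 0/1 'coastal view' attribute; the fitted coefficient on the
# view dummy is the implicit (hedonic) price of that environmental attribute.
area = np.array([120.0, 150, 90, 200, 130, 170, 110, 160])
view = np.array([1.0, 0, 0, 1, 1, 0, 0, 1])
price = 2000 * area + 50_000 * view + 100_000  # invented generating model

X = np.column_stack([np.ones_like(area), area, view])  # intercept, area, view
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print(coef[2])  # implicit price of a coastal view, ~50000
```

In a real study the data would be noisy and many more attributes would be included, but the logic - isolating the price effect of one environmental attribute - is the same.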
Hypothetical Markets

Hypothetical market methods are generally used when value is not directly observable in the market, as is the case with 'non-use' values. For example, the non-use values of coral reef ecosystems are not directly observed in any market. To estimate these values, survey questionnaires can be used to directly or indirectly elicit individual valuations in a hypothetical or constructed market for the ecosystems. Hypothetical market methods are sometimes referred to as 'expressed willingness to pay' (WTP) methods, since people are asked through surveys to 'express' their willingness to pay for a good or service based on a hypothetical scenario. Methods in this category include: contingent valuation (direct valuation); and choice modelling (indirect valuation). The different categories and types of costing techniques are discussed further below.

Change in Input or Output of Markets
In many cases climate change could have a direct impact on: the ability of an economic agent or producer (e.g. a commercial fisher, a farmer or a tourist operator) to produce a good or service; and/or
the costs that the agent incurs in producing that good or service. For example, if climate change led to a loss of coral reefs on the Great Barrier Reef, which in turn led to a decrease in fish stocks, commercial fishers could either: allocate more resources in order to maintain current harvest rates; or, more likely, reduce the size of their overall catch. Either way, the commercial fishers will suffer an economic loss. This loss can be measured as the cost of the increased resource inputs - known as the 'production cost technique' - or as the value of the decreased output - referred to as the 'change in productivity approach'. The choice of approach will depend on the anticipated response of the producer to the impact. There are a number of ways in which changes in production costs or productivity can be measured. These include: measuring the gross margin for each unit of output and multiplying by the estimated change in output as a result of climate change impacts; in the case of agriculture, estimating the changes in land values with and without climate change impacts (as rural land values are linked to the land's productive capacity); calculating the unit costs of resource inputs, such as labour or natural resources, and multiplying by the projected change in resource use; and using the total budget approach to estimate the difference between net income (the value of gross output minus the cost of gross resource inputs) with and without climate change impacts.
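The gross margin approach and the unit-input-cost approach just listed can be sketched numerically (all figures invented):

```python
# Sketch of the 'change in productivity' approach (all figures invented):
# economic loss = gross margin per unit of output x projected fall in output.

gross_margin_per_tonne = 85.0   # revenue minus variable costs, $/t
baseline_output = 12_000.0      # tonnes without climate change impacts
projected_output = 10_500.0     # tonnes with climate change impacts

loss = gross_margin_per_tonne * (baseline_output - projected_output)
print(loss)  # 127500.0 per season

# The 'production cost technique' instead costs the extra inputs needed to
# hold output constant, e.g. additional irrigation water:
extra_water_ml = 400.0          # megalitres of additional water required
water_price = 60.0              # $/ML
print(extra_water_ml * water_price)  # 24000.0
```

Which figure is the right measure depends, as the text notes, on whether the producer is expected to absorb the output loss or to spend more on inputs to avoid it.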
Examples in Australia of the application of these methods to cost the impacts of climate change include: Beare and Heaney, who, as part of a study into climate change and water resources in the Murray-Darling Basin (MDB), modelled changes in net returns (the total budget approach) to estimate the loss in agricultural returns arising from the impacts of climate change on water availability and salinity; and Howden, Reyenga and Meinke, who estimated changes (generally increases) in gross margins for wheat cropping arising from increases in yields due to a doubling in CO2 concentrations and temperature increases.

Restoration Costs
Another group of direct market methods relying on observable market behaviour that can be used to estimate the costs of climate change impacts comprises the preventative expenditure and replacement cost techniques.
The preventative expenditure technique measures the expenditure incurred in order to avert damage to the natural environment, human infrastructure or to human health. The technique can be used to measure the impacts of climate change on both marketed and non-marketed goods and services, with the exception of non-use values. In terms of costing climate change impacts, preventative expenditure should be seen as a minimum estimate of impact costs since it does not measure the consumer surplus. Preventative expenditure, if undertaken, would in reality be an adaptation cost, since it is an expenditure aimed at reducing the impacts of climate change. As such, great care needs to be taken if using the technique in the context of a cost-benefit analysis of adaptation options.
Replacement Cost Technique
The replacement/restoration cost technique can be used to measure the costs incurred in restoring or replacing productive assets, or in restoring the natural environment or human health, as a result of the impacts of climate change. As with preventative expenditure, the replacement cost technique is relatively simple to use, and it has the added advantage of providing an objective valuation of an impact - i.e. the impact has occurred, or at least is known. Use of the replacement cost method relies on replacement or restoration measures being available and the costs of those measures being known. As such, the method is unlikely to be appropriate for costing the impacts of climate change on irreplaceable assets such as biodiversity or cultural heritage or, indeed, the loss of a human life. Another shortcoming of the technique is that actual replacement or restoration costs do not necessarily bear any relationship to individuals' willingness to pay to replace or restore something. This can be seen in relation to the potential health impacts of climate change - the health service costs incurred to restore the health of someone made ill by a tropical disease may be less than that person's WTP to avoid getting the disease in the first place.

Hedonic Pricing
Hedonic pricing is an indirect or surrogate market technique that attempts to infer the value individuals place on a non-marketed good or service by observing their behaviour in related markets. The two markets most often used for hedonic pricing are the property market and the labour market. The hedonic property value approach attempts to measure the welfare effects of changes in environmental goods or services by estimating the
influence of environmental attributes on property prices. The method assumes that people buy property for different attributes such as size, quality and proximity to work, and that some of these attributes, such as views, proximity to parks and local air quality, relate to environmental values. By estimating the demand and prices of properties with different sets of attributes, it is possible to estimate how much a specific environmental attribute, such as coastal views, is valued by people. Econometric models are generally used to isolate the effects of specific environmental attributes on property prices. This 'hedonic property price function' is then used to infer how much people are willing to pay to protect that environmental attribute. In the context of costing the impacts of climate change, the method has already been used internationally to assess the costs of climate change on coastal resources. It could also potentially be used to assess individuals' WTP to protect valued scenic areas from the impacts of climate change or possibly even their WTP to live in a particular climate zone. The hedonic wage risk approach is applied to wage rates to measure the value of changes in health (morbidity and mortality) risks. It involves identifying a relationship between the risk of death in a job and the wage rate for that job. This method can potentially be applied to costing some aspects of the impacts of climate change on human health and life expectancy. The main strengths of the hedonic pricing methods outlined above are that they rely on observed markets and that they can be applied to quite a wide range of environmental and social values. One weakness of the methods is that results of hedonic studies are very sensitive to assumptions used in the econometric modelling of the price function. A further weakness is that
hedonic pricing studies, to be rigorous, will generally require the collation of a comprehensive database for the relevant market. The process of collecting this data can be expensive and time-consuming.

Travel Cost Method
The travel cost method is another method that attempts to infer the value of particular environmental or social attributes from observed behaviour in related markets. The method has been commonly used, both in Australia and overseas, to indirectly value specific sites prized for their environmental and social amenities, such as national parks, wetlands, rivers and heritage sites. The method uses information (often obtained from user surveys) on visitation rates and the travel costs involved in people visiting a site (as a proxy for an admission fee) to predict changes in demand for the site in response to changes in travel costs. This data is then used to derive a demand curve for the recreational services provided by the site, and thereby the total benefits (consumer surplus) that present visitors derive from the site. For example, the travel cost method could be used to estimate the total consumer surplus enjoyed by present users of the Great Barrier Reef. Provided information on quality has been used to derive a 'whole experience' demand curve, it may then be possible to assess the impact on the visitation rate and consumer surplus of damage to the Great Barrier Reef due to climate change.

Contingent Valuation and Choice Modelling Methods
In contrast to the methods discussed above, which all use observed data, the contingent valuation and choice modelling methods rely on surveys to elicit directly (contingent valuation) or indirectly (choice modelling) the values that respondents place on non-marketed environmental goods and services.
Contingent Valuation Method (CVM)
The contingent valuation method (CVM) involves directly asking people in a survey how much they would be willing to pay to protect from damage specific environmental or cultural services, such as a national park, biodiversity, an ecosystem or a cultural heritage site. In some surveys, people are asked how much they would be willing to accept to allow the environmental service to be damaged or lost. It is called contingent valuation because people are asked to state their WTP/WTA contingent on a specific hypothetical scenario. The CVM is probably the most widely used method to estimate non-use environmental and cultural values. It is also probably the most controversial non-market valuation method. Many CVM studies have been subject to considerable debate, especially over whether hypothetical markets can adequately measure people's willingness to pay for environmental quality. In particular, studies using an open-ended elicitation format are often criticised for strategic bias, in that respondents may understate or overstate their true WTP/WTA in order to influence the decision-making process. Another criticism is that the CVM places individuals in the role of consumers rather than citizens and therefore cannot adequately capture non-use values.

Choice Modelling Method
The choice modelling method (often referred to as 'contingent choice') is similar to contingent valuation in that it can be used to estimate non-use environmental values and that it asks people to make choices based on a hypothetical scenario. However, it differs from CVM in that it does not directly ask people to state their values in dollar terms. Instead, dollar values are inferred from the hypothetical choices or trade-offs that people make. The
method asks respondents to state a preference between one group of environmental services at a given price or cost and another group of environmental services at a different cost. Because it focuses on trade-offs among scenarios with different environmental outcomes, choice modelling is likely to be particularly suited to decisions involving a choice between different policy options or measures, such as between different options for adaptation to climate change. Choice modelling has several advantages over contingent valuation: it allows respondents to think in terms of preferences or priorities rather than directly expressing dollar values; it treats price as simply another attribute, allowing respondents to choose between attribute bundles that include price; and it reduces the likelihood of protest responses. Thus, when appropriately designed, choice modelling can minimise many of the biases that can arise in open-ended or discrete choice contingent valuation studies. A potential drawback with choice modelling is that, by providing respondents with a range of attributes and possible options, the approach can increase the complexity of the task for respondents during questioning, making it difficult for them to evaluate preferences or trade-offs. Also, the choice modelling approach requires more sophisticated statistical techniques to estimate willingness to pay than contingent valuation. A number of choice modelling studies have been undertaken in Australia recently, focussing in particular on valuations of wetlands and riverine ecosystems.
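As a sketch of how dollar values are inferred from trade-offs: once a choice model (for example, a conditional logit) has been fitted, the marginal WTP for an attribute is the ratio of its utility coefficient to the negative of the cost coefficient. The coefficients below are invented for illustration, not taken from any real study.

```python
# Inferring implicit prices from a fitted choice model. The coefficient
# values are hypothetical; a negative cost coefficient means utility falls
# as the option becomes more expensive.

fitted = {"cost": -0.04, "wetland_area_ha": 0.12, "bird_species": 0.30}

def marginal_wtp(attr, coefs):
    """Marginal willingness to pay = -(attribute coefficient / cost coefficient)."""
    return -coefs[attr] / coefs["cost"]

print(round(marginal_wtp("wetland_area_ha", fitted), 2))  # $ per extra hectare
print(round(marginal_wtp("bird_species", fitted), 2))     # $ per species protected
```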
Benefit Transfer Method
In addition to the methods described above, the 'benefit transfer' method can provide a relatively low-cost, 'shorthand' means of valuing non-market environmental and social benefits by transferring values derived in other valuation studies to the case study in question. For example, numerous overseas and Australian contingent valuation and choice modelling studies have been undertaken over the past 20 years, examining the WTP of households or visitors to preserve different species, ecosystems or natural areas from real or perceived environmental threats. It may be possible, using the benefit transfer technique, to apply the results of one or more of these studies to an assessment of the economic costs of climate change on the ecosystems and biodiversity of the Great Barrier Reef. A major shortcoming of the benefit transfer method is that it relies on values derived from another source. Thus, great caution would need to be exercised when considering applying the values from other studies to assessing, for example, the economic costs of climate change on the ecological values of the Great Barrier Reef. Problems that could affect the validity of derived values include: differences between the species/ecosystem or other environmental 'good' valued in the source study and the species or ecosystems of the Great Barrier Reef; divergence in the magnitude of impacts under consideration; disparities in socio-economic characteristics; and potential bias or other deficiencies in the source study.
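A common unit-value transfer adjustment can be sketched as follows. The source WTP, the income figures and the income elasticity of WTP are all hypothetical.

```python
# Benefit transfer sketch: a per-household WTP value from a source study,
# adjusted for income differences between the study site and the policy site
# using an assumed income elasticity of WTP. All numbers are invented.

def transfer_value(source_wtp, source_income, policy_income, elasticity=1.0):
    """Adjusted WTP = source WTP x (income ratio ** elasticity)."""
    return source_wtp * (policy_income / source_income) ** elasticity

# $85/household WTP from a source study, transferred to a higher-income region.
adjusted = transfer_value(85.0, 40_000, 50_000, elasticity=0.7)
print(round(adjusted, 2))
```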
In any case, it is rarely appropriate to directly transfer aggregate value estimates from one study to another; some reworking of the original valuations is usually required. The NSW EPA 'Envalue' database provides environmental valuation studies, including an overview of the studies and their potential application using the benefit transfer method.

Adaptation Options
Once the net costs of climate change impacts have been assessed for a given market or sector, it becomes possible for decision-makers to value the benefits of adaptation to those impacts. By assessing the resource costs of different adaptation options (policies or programmes), the decision-maker can then determine which adaptation option offers the greatest benefits relative to costs. There are two main techniques available for assessing the relative costs and benefits of alternative adaptation responses (measured solely or principally in monetary terms): cost-benefit analysis (CBA) and cost-effectiveness analysis (CEA). A third technique for assessing alternative adaptation options, in the absence of full information on monetary costs and benefits, is multi-criteria analysis.

Cost-benefit Analysis
Cost-Benefit Analysis (CBA) is an economics decision support tool designed to show whether the total benefits of a project or programme, measured in economic terms, outweigh the costs of implementing that programme. It is the most widely accepted technique for determining the economic viability of a project. In the context of climate
change, CBA can be used to determine whether the total benefits of an adaptation response (policy or programme), measured in terms of the reduced costs of climate change impacts, exceed the costs of the adaptation response. CBA methodology is well established, involving the following standard steps:
Step 1: objective definition - define the objective of the project or policy.
Step 2: option specification - specify and define project options, including the base case (without project) option.
Step 3: quantify costs and benefits - identify and quantify the negative effects (costs) and positive effects (benefits), including external effects, associated with each option.
Step 4: value costs and benefits - value (price) the cost stream and the benefit stream for each option using one or more of the methods described above.
Step 5: NPV/BCR determination - determine the present value of the net benefit stream (NPV) and/or the benefit-cost ratio (BCR).
Step 6: sensitivity analysis - conduct sensitivity analysis based on changes to assumptions affecting cost or benefit streams and the discount rate.
Step 7: distributional analysis - identify the distributional effects of each option.
Step 8: assess non-monetary costs and benefits - qualitatively assess costs and benefits which cannot be priced.
Step 9: decision-making - make a decision on the preferred option based on NPV, distributional effects and non-monetary costs and benefits.
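Steps 4 and 5 can be sketched as a discounted cash flow calculation. The benefit and cost streams and the discount rate below are illustrative only.

```python
# NPV and BCR for a single adaptation option: discount each year's benefits
# and costs to present value, then compare. Streams are in $'000 and invented.

def present_value(stream, rate):
    """Discount a yearly stream (year 0 first) to its present value."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

benefits = [0, 40, 60, 60, 60]   # benefits build up after implementation
costs = [120, 10, 10, 10, 10]    # up-front capital plus recurrent costs
r = 0.07                         # assumed discount rate

pvb = present_value(benefits, r)
pvc = present_value(costs, r)
npv = pvb - pvc
bcr = pvb / pvc
print(round(npv, 1), round(bcr, 2))
```

With these figures the option has a positive NPV and a BCR above one, so it would pass the Step 5 test before sensitivity and distributional analysis.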
Avoiding Double Counting of Benefits

An important pitfall to avoid when conducting a cost-benefit analysis is double counting of costs or benefits. This could well be an important issue when undertaking a CBA of adaptation options involving a range of impacts that are not mutually exclusive. Taking the example of the impacts of climate change on the Great Barrier Reef again, these impacts are likely to include biophysical impacts such as the loss of coral reefs and associated ecosystems, as well as socio-economic impacts on the commercial fishing industry and tourism. Double counting could occur when attempting to cost the biophysical impact (e.g. the loss of the reef system) by aggregating all of the dependent socio-economic impacts, or when assessing use and non-use values. This problem can be avoided by ensuring that the individual socio-economic impacts are counted only once in the assessment process.
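A minimal sketch of the point, with invented values: value the reef impact either as the sum of its dependent socio-economic impacts or as a single aggregate valuation, never both.

```python
# Double counting arises when an aggregate biophysical valuation is added on
# top of the socio-economic impacts that flow from the same damage.
# All values ($m per year) are hypothetical.

impacts = {
    "commercial fishing loss": 40.0,
    "tourism loss": 110.0,
    "non-use value loss": 55.0,
}
aggregate_biophysical_estimate = 230.0  # e.g. from a stated-preference study

sum_of_components = sum(impacts.values())
double_counted = sum_of_components + aggregate_biophysical_estimate  # wrong

# Correct: use one basis only (here, the component sum).
total_cost = sum_of_components
print(total_cost)  # 205.0
```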
Assessing all Costs and Benefits

Another key issue is to ensure that all costs and benefits are fully addressed in the assessment. It is likely when assessing climate change adaptation options that many impacts of climate change will not have been quantified because of uncertainties about the extent of the impacts. Even if the impacts have been quantified, it is possible that certain non-market goods and services cannot be adequately valued in monetary terms, due to difficulties with applying the surrogate or hypothetical market techniques discussed above. A lack of monetary estimates for climate change impacts, however, should not mean that those impacts can be overlooked. As outlined in Step 8 above, non-monetary costs and benefits should be qualitatively assessed as part of the decision-making process.
The treatment of non-monetary costs and benefits is one of the major criticisms of CBA - either that they are overlooked in the analysis, or that the process by which the non-monetary impacts have been assessed in relation to the monetary costs and benefits is not clearly established. One way of dealing with the issue of non-monetary costs and benefits in the context of a CBA is to make transparent the process by which these costs and benefits have been assessed. Thus it would certainly be possible and acceptable, with respect to some analyses of climate change adaptation options, to stipulate that the known benefits of an adaptation measure clearly exceed the costs of adaptation even if some of the benefits have not been valued in monetary terms. The point here is that the application of CBA will not always require perfect knowledge of the monetary costs and benefits of climate change impacts. Another possible way of dealing with the absence of information on monetary costs and benefits is to combine the assessment of monetary and non-monetary costs and benefits within a multi-criteria decision analysis framework. Proponents of multi-criteria analysis argue that this is the best way of making transparent judgements about the relative importance of monetary and non-monetary costs and benefits.

Cost-effectiveness Analysis (CEA)
Cost-effectiveness analysis (CEA) is another economic decision support tool. It is generally used to determine the least-cost way of achieving a predetermined physical or environmental goal. It can also be used to identify a means of maximising an environmental or physical benefit for a given economic cost. An advantage that CEA has over CBA is that it does not require the desired benefit of an adaptation to
be explicitly valued in monetary terms. Provided each adaptation option is likely to achieve the same or a similar level of benefit, only the costs of adaptation need to be monetised, which simplifies the analysis. CEA is likely to have widespread application to the assessment of climate change adaptation options.
In general terms, the steps taken in applying CEA to the assessment of adaptation options are as follows:
Step 1: objective definition - define the objective or goal of the adaptation project or policy.
Step 2: option specification - specify and define the adaptation options.
Step 3: value costs - identify and value the costs (capital and recurrent) of each option.
Step 4: quantify benefits - quantify (but do not value) the benefits of each option (e.g. ML of water delivered; km2 of foreshore protected; species saved).
Step 5: cost-effectiveness - calculate the present value of the net incremental cost stream of each option per unit of benefit associated with the option (e.g. $ per ML delivered). Results can be presented in the form of incremental cost (or supply) curves.
Step 6: sensitivity analysis - conduct sensitivity analysis based on changes to assumptions affecting cost streams and incremental benefits.
Step 7: decision-making - make a decision on the preferred option(s) based on cost-effectiveness.
CEA can potentially be applied to all levels of decision-making on climate change adaptation, from the assessment of individual adaptation projects to regional or national adaptation policies and strategies.
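Step 5 can be sketched as follows; the options, cost streams and water volumes are invented for illustration.

```python
# Cost-effectiveness ranking: present value of each option's cost stream per
# unit of (unvalued) benefit, here $'000 per ML of water delivered per year.
# All figures are hypothetical.

def pv(stream, rate):
    """Discount a yearly cost stream (year 0 first) to its present value."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

options = {  # name: (cost stream $'000, ML delivered per year)
    "pipeline upgrade": ([500, 20, 20, 20], 900),
    "demand management": ([150, 80, 80, 80], 700),
}
r = 0.07  # assumed discount rate

ranked = sorted((pv(costs, r) / ml, name) for name, (costs, ml) in options.items())
for cost_per_ml, name in ranked:
    print(name, round(cost_per_ml, 3))
```

The option with the lowest present-value cost per ML ranks first; with these figures, demand management is the more cost-effective of the two.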
These techniques provide flexible and generally straightforward approaches to estimating the costs of climate change impacts or for assessing the costs and benefits of alternative adaptation options. The techniques are essentially limited to assessing the impacts on individual sectors or markets, although the results from these assessments can be integrated into general equilibrium analysis. There are question marks over the use of some of these methods to estimate the costs of non-market impacts of climate change, particularly non-use values. Methods for assessing non-use values, such as contingent valuation and choice modelling, have been refined significantly in recent years and are now quite widely applied both in Australia and overseas. Nevertheless, the methods remain controversial and can be quite time-consuming and expensive to apply.

MODELLING THE COSTS OF CLIMATE CHANGE
Equilibrium Analysis
General equilibrium analysis accounts for the inter-sectoral reallocation of resources that could occur as a consequence of climate change. It accounts for the effects on the input-output structure of the economy, effects that cannot be captured through partial equilibrium analysis. Thus, it is appropriate to use general equilibrium analysis when the impacts of climate change (or adaptation policies and measures) to be modelled are likely to simultaneously affect many sectors or markets, as well as factor prices and incomes. The two main types of models that can be used to undertake general equilibrium analysis are input-output (IO) models and computable general equilibrium (CGE) models. Integrated assessment (IA) models - a generic term describing models that attempt to integrate the physical
impacts of climate change with socio-economic effects - often include CGE modelling.

Computable General Equilibrium Models

Features of CGE Models
Computable general equilibrium (CGE) models are models of the total economy, covering all sectors of the economy and the interactions between those sectors. Essentially they simulate markets for factors of production and products across the whole economy, using systems of equations specifying supply and demand behaviour across the different markets. The models are designed to examine the welfare changes (measured in terms of GDP or GNP) arising from an external 'shock' impacting on prices. The shocks examined may relate to a hypothetical or actual government policy, such as the introduction of a carbon or energy tax. Other shocks, such as those relating to the impacts of climate change, can also be examined, provided they are price-related. Common features of CGE models include that they: determine quantities (of products and factors) and prices; focus on equilibrium resource allocation; assume product and factor markets are perfectly competitive; assume all markets clear; specify household (product) demand and (factor) supply functions consistent with utility maximisation; and specify producer supply and demand functions consistent with profit maximisation.

Strengths and Weaknesses of CGE Models
A key strength of CGE models, already alluded to, is their
ability to model the effects of policies and other external shocks on the total economy, rather than just on discrete, individual markets or sectors. This is an important consideration in the context of costing the impacts of climate change, which is likely to involve simultaneous impacts across a wide number of markets and sectors. Furthermore, CGE models have now been developed at different levels of aggregation. Thus, some CGE models are essentially domestic models containing quite significant sectoral, regional and household detail. Other CGE models are global models and tend to have little domestic disaggregation, but can assess the international effects of policy shocks, such as terms of trade effects. Another important characteristic of many (although not all) recent CGE models is that they are dynamic - that is, they include relationships between variables in the model at different points in time. Thus, they do not assume a static baseline. Potential limitations of CGE models have been raised, particularly in the context of their application to the climate change issue. These include: By their nature, the models can say little about the implications of non-price policies and shocks. This means that important policy responses to the impacts of climate change relating to adaptation, such as land-use planning, public information, product and building standards, and the development of new technologies and systems in response to impacts, cannot effectively be modelled. It also raises questions about the ability of the models to effectively assess non-market impacts and catastrophic events. Most CGE models do not model the equity effects of policies for different income levels. Thus, the impacts of climate change involving significant distributional issues are unlikely to be reflected well in CGE models. In response, it is important to note that these possible
limitations are not unique to CGE models and could equally well apply to other methods of economic analysis. Furthermore, there are potential techniques and methods for dealing with equity issues and with non-price policies.

CGE Models Applied to the Climate Change Issue
The application of CGE modelling to the assessment of the costs of climate change impacts has yet to be undertaken in Australia. A number of models have been developed or adapted specifically for the purpose of evaluating the economic implications of Australian and international greenhouse gas (GHG) mitigation policies. These include MMRF-Green (Monash University), MM600+ (Econtech), GTEM (ABARE) and G-Cubed (Australian National University). These models have all assessed the economic costs, but not the benefits, of GHG mitigation. Pezzey and Lambie provide a comparative analysis of the technical specifications of the models. As Pezzey and Lambie have noted, GHG control is an issue that is suited to the application of CGE modelling since it involves "... a long time horizon, global pollution derived from a major commodity, pervasive economic effects that include impacts on trade and public finance, a case for economic instruments". Many of these characteristics apply equally to the potential impacts of climate change and adaptation to those impacts. Other key points from Pezzey and Lambie's analysis that are pertinent to the potential application of CGE models to costing the impacts of climate change include: Each model can make a valuable contribution to greenhouse policy analysis, but the choice of model will depend upon the policy question of interest.
The models' emission reference levels, representations of technical change, and substitution and demand elasticities are important influences on cost projections. The realism of policy analysis and the estimation of abatement costs using these models may be restricted by: the inability of the models to include non-price policies (for example, information campaigns, exhortation and land-use planning); and the representation of the rate of technical change (all current versions of the models treat technical change as exogenous to some degree). CGE models could potentially be used in several ways to
model the impacts of climate change. These include: examining the general equilibrium impacts of a specified or assumed change in the market price of a commodity resulting from the impacts of climate change - for example, assuming a significant increase in the price of water for domestic, industrial or agricultural users (resulting from measures taken to adapt to reduced rainfall and runoff in water supply catchments estimated for a given climate change scenario) and then tracing this change through the rest of the economy; or modelling directly the general equilibrium impacts of climate change on a market or sector by drawing on the results of 'bottom-up' studies. The latter requires demand and supply responses for the relevant industries to be built into the model. For example, 'bottom-up' studies may be used to estimate the costs of adapting to a reduction in available water supplies for a given climate change scenario; these costs are then traced through the rest of the economy.
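The core mechanics - supply and demand equations solved so that all markets clear - can be illustrated with a toy two-market system. A real CGE model does this economy-wide with utility- and profit-maximising behaviour; all coefficients here are invented.

```python
import numpy as np

# Toy illustration of 'all markets clear': two linked markets with linear
# demand (own- and cross-price effects) and linear supply, solved as a
# simultaneous system. Coefficients are hypothetical.

# Demand: qd = a + D @ p ; Supply: qs = b + S @ p ; clearing: qd = qs
a = np.array([100.0, 80.0])
D = np.array([[-2.0, 0.5],
              [0.4, -1.5]])
b = np.array([10.0, 5.0])
S = np.array([[3.0, 0.0],
              [0.0, 2.5]])

# Market clearing requires (D - S) @ p = b - a
p = np.linalg.solve(D - S, b - a)
q = b + S @ p
print(p, q)
```

A price shock (for example, to the water market coefficients) can then be re-solved and its general equilibrium effect on the other market's price and quantity read off directly.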
As indicated by the earlier point made in relation to Pezzey and Lambie's analysis, the choice of model and the way in which it is used will depend very much upon the climate change impact to be assessed or the adaptation policies or measures of interest. Internationally, a number of models have been developed to assess the costs of climate change impacts at regional or global levels, usually within an integrated assessment framework. As previously noted, some of these models have been developed or adapted to assess the costs and benefits of climate change mitigation within a cost-benefit framework. Their objective is to assess at what point the marginal benefits of climate change mitigation policies (i.e. the reduced costs of climate change impacts) are outweighed by the marginal costs of mitigation (i.e. emission abatement policies or targets), and thereby to determine an optimal level of emission reduction. Critics of some of these models have pointed to the models' treatment of non-market impacts, distributional effects and catastrophic events.

Integrated Assessment (IA)
Integrated assessment (IA) seeks to combine socio-economic and biophysical assessments of climate change. It is an "... interdisciplinary process that combines, interprets and communicates knowledge from diverse scientific disciplines in an effort to investigate and understand causal relationships within and between complicated systems" (Ahmad and Warrick).
Integrated assessment can employ a range of methods, including scenario analysis, qualitative assessment and computer modelling. Many integrated assessment models of global climate change have been developed over the past decade or so, most of which focus on mitigation responses and costs. Many of these models have
taken a 'top-down' approach, assessing damages at a regional or global level based on a global mean temperature change associated with a doubling of atmospheric CO2 concentrations. Given this level of aggregation and the widely different assumptions used about the nature of the associated impacts, it is not surprising to find that these studies have produced a very wide range of results. More recent 'bottom-up' IA models have sought to capture the individual direct effects of climate change at the local or regional level. As Mendelsohn et al. note, however: "Although these models have done a good job of capturing spatial and sectoral detail, they often contain so much detail that they are difficult to interpret. Further, they often lack sound damage estimates because they do not seek to estimate welfare effects and because they fail to account for adaptation (thus they are) far from providing clear and careful damage estimates".
The challenge therefore is to make use of the spatial and sectoral detail of 'bottom-up' models, while ensuring that: inter-regional and economy-wide effects and feedbacks are addressed; feedbacks arising from policy responses and adaptation measures are fully captured; and impacts are assessed on the basis of realistic scenarios of changes to future environmental, socio-economic and land use conditions that are unrelated to climate change. The results must then be presented in a way that is useful for decision-making purposes. In Australia, an integrated assessment of climate change in the Cairns and Great Barrier Reef (CGBR) region is currently the subject of a scoping study for the AGO. The
proposed IA is to comprise a number of elements, including: development of climate change projections specific to the region; development of regional models of land use and socio-economic change; development of spatial vulnerability and hazard maps; integrated assessment models for the key sectors in the region, incorporating models and methodologies from different disciplines within a unified systems framework; cost-benefit analysis of adaptation options; and expanded monitoring to reduce knowledge gaps. Many of the important elements of effective IA identified earlier - a strong regional and sectoral focus, projections of environmental, socio-economic and land use changes, and incorporation of feedbacks including from adaptation - appear to have been included in the proposed CGBR region study, although it is unclear how inter-regional and economy-wide effects are to be addressed.
Bibliography

Alverson, Keith D. and Thomas Pedersen, Environmental Variability and Climate Change, [Stockholm]: International Geosphere-Biosphere Programme, 2001.
Baliño, Beatriz M., Michael J.R. Fasham, and Margaret C. Bowles, eds., Ocean Biogeochemistry and Global Change: JGOFS Research Highlights, 1998-2000, Stockholm, Sweden: International Geosphere-Biosphere Programme, 2001.
Edgerton, Lynne T., The Rising Tide: Global Warming and World Sea Levels, Washington, D.C.: Island Press, 1991.
Gates, David Murray, Climate Change and its Biological Consequences, Sunderland, MA: Sinauer Associates, 1993.
International Geosphere-Biosphere Programme, A Study of Global Change, Stockholm, Sweden: IGBP Secretariat, Royal Swedish Academy of Sciences, 1998.
Knox, Joseph B. and Ann Foley Scheuring, eds., Global Climate Change and California: Potential Impacts and Responses, Berkeley, CA: University of California Press, 1991.
Minger, Terrell J., Greenhouse Glasnost: The Crisis of Global Warming: Essays, New York: Ecco Press; [Salt Lake City, Utah]: Institute for Resource Management, 1990.
Mortensen, Lynn L., ed., Global Change Education Resource Guide, [Boulder, CO]: University Corporation for Atmospheric Research, 1996.
Nunn, Patrick D., Environmental Change in the Pacific Basin: Chronologies, Causes, Consequences, Chichester, West Sussex, England; New York: Wiley, 1999.
Peters, Robert L. and Thomas E. Lovejoy, eds., Global Warming and Biological Diversity, New Haven, CT: Yale University Press, 1992.
Revkin, Andrew, Global Warming: Understanding the Forecast, New York: Abbeville Press, 1992.
Somerville, Richard, The Forgiving Air: Understanding Environmental Change, Berkeley, CA: University of California Press, 1996.
Thompson, Russell D. and Allen Perry, eds., Applied Climatology: Principles and Practice, London; New York: Routledge, 1997.
Veroustraete, F., et al., eds., Vegetation, Modelling and Climatic Change Effects, The Hague, The Netherlands: SPB Academic Pub., 1994.
Index

Aerosol forcing 63
Agricultural systems analysis 47
Atlantic deep water 71
Atmospheric CO2 concentration 25
Calcium carbonate sediments 78
Carbonate deposition 75
Carbon-cycle modelling 45
Clean Water Act 116
Climate models 50
Computable General Equilibrium (CGE) 263
Contingent Valuation Method (CVM) 255
Cost-Benefit Analysis (CBA) 258
Crop impact analysis 39
Cultivated land 92
Deep water formation 64
Dynamic changes 129
Ecological goods 200
Ecological succession 134
Euphotic zone 74
Experimental Lakes Area (ELA) 155
Faecal pellet bomb 102
Forcings-volcanic plumes 104
Forest fires 93
General Circulation Models (GCMs) 1
Global temperature change 23
Greenhouse Gas (GHG) 236
Gross Primary Production (GPP) 2
Hydrological cycle 51
Integrated Assessment (IA) 263
Intergovernmental Panel on Climate Change (IPCC) 236
International Biological Programme (IBP) 127
Land-biota soil reservoir 102
Land surface processes 63
Local ecosystem 86
Mineral nutrients 100
Modelling programmes 51
National Pollutant Discharge Elimination System (NPDES) 116
Nitrogen fertilisers 31
Oak Ridge National Laboratory (ORNL) 23
Oceanic carbon pool 70
Photosynthetically Active Radiation (PAR) 109
Production cost technique 250
Reducing Acidification In Norway (RAIN) 157
Regional climatic change 47
Soil carbon mass 93
Stratospheric processes 64
Terrestrial Ecosystem Model (TEM) 1
Third Assessment Report (TAR) 236
Tropical evergreen forests 92
Volcanic aerosols 54
Volcanic eruption 104
Watershed Manipulation Project (WMP) 169
Wildlife conservation 140
Willingness To Accept (WTA) 239