Advances in
Nuclear Science and Technology VOLUME 25
Advances in
Nuclear Science and Technology Series Editors
Jeffery Lewins Cambridge University, Cambridge, England
Martin Becker Oregon Graduate Institute of Science and Technology Portland, Oregon
Editorial Board
R. W. Albrecht
Ernest J. Henley
John D. McKean
K. Oshima
A. Sesonske
H. B. Smets
C. P. L. Zaleski
A Continuation Order Plan is available for this series. A continuation order will bring delivery of each new volume immediately upon publication. Volumes are billed only upon actual shipment. For further information please contact the publisher.
Advances in
Nuclear Science and Technology VOLUME 25
Edited by
Jeffery Lewins Cambridge University Cambridge, England
and
Martin Becker Oregon Graduate Institute of Science and Technology Portland, Oregon
KLUWER ACADEMIC PUBLISHERS NEW YORK, BOSTON, DORDRECHT, LONDON, MOSCOW
eBook ISBN: 0-306-47812-9
Print ISBN: 0-306-45604-4
©2002 Kluwer Academic Publishers, New York, Boston, Dordrecht, London, Moscow
Print ©1997 Kluwer Academic/Plenum Publishers, New York

All rights reserved. No part of this eBook may be reproduced or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, without written consent from the Publisher.

Created in the United States of America

Visit Kluwer Online at: http://kluweronline.com
and Kluwer's eBookstore at: http://ebooks.kluweronline.com
PREFACE
The present review volume not only covers a wide range of topics pertinent to nuclear science and technology, but has attracted a distinguished international authorship, for which the editors are grateful.

The opening review by Drs. Janet Tawn and Richard Wakeford addresses the difficult matter of questioning scientific hypotheses in a court of law. The United Kingdom experienced a substantial nuclear accident in the 1950s in the form of the Windscale Pile fire. This in itself had both good and bad consequences; the setting up of a licensing authority to ensure nuclear safety was one, the understandable public sentiment concerning nuclear power (despite the fire occurring in a weapons pile) the other. Windscale today is subsumed in the reprocessing plant at Sellafield operated by British Nuclear Fuels plc, and it was perhaps inevitable, when an excess cluster of childhood leukaemia was observed in the nearby village of Seascale, that public concern should be promoted by the media, leading to the hearing of a claim for compensation brought on behalf of two families of BNFL workers who had suffered that loss. The review article demonstrates the complexity of assessing such a claim against the inherent statistical fluctuations, and shows how the courts were persuaded of the need for a plausible biological mechanism if liability were to be established. The Company were undoubtedly relieved by the finding. At the same time, the concerns raised led to a deeper understanding of such matters, which our authors summarise admirably.

An analogous technique involving stochastic modelling, better known to our readers perhaps as the Monte Carlo method, is shown in the next chapter to be usefully applied to the problem of determining parameters for elementary reactor dynamics equations.
From the United States, Tim Valentine shows that this view is unnecessarily limited; Monte Carlo simulation can be used more directly to evaluate time-dependent development in nuclear reactors of considerable (and therefore realistic) complexity. It makes particular sense in extracting cross-spectral data for noise analysis, as the author shows.

Chapter 4 is also from a United Kingdom author. It may have a further link to the first through the word “gene”, but Jonathan Carter actually takes us through the biological analogy of the genetic optimisation algorithm that has proved successful in optimising reactor cores. It might be recollected that, despite the characterisation of nuclear power as capital-intensive but cheap to run, as much money is committed over the plant lifetime to supplying the fuel as is spent on the original construction. Improvements in the specification of reload cores can therefore provide dramatic returns. The genetic algorithm has proved a powerful way to optimise the highly constrained, very large, non-linear problem that emerges. The techniques, including the new developments reviewed here, have application to a wide range of problems, such as oil-well diagnostics. Perhaps the most exciting is the “tabu” algorithm, which might with advantage be made known to a wider audience than the nuclear community.
Professor Marseguerra, our Italian expert, furthers the dynamic understanding of reactor systems by reviewing the foundations of wavelet analysis. Fourier analysis and Fourier transformation have not only great practical strength in applications but have led to a deep understanding of functional analysis; readers may recollect that Fourier himself was denied the Gold Medal in three successive competitions before the deep significance of his work was appreciated, so remarkable a development was it. There can be little doubt that wavelet analysis provides a powerful technique which will be of value to the professional nuclear engineer; our Italian author can be thanked for offering just such an interpretation.

From Belgium in our next article, but by an author again of international standing, Professor Devooght brings us a further stochastic study, the application of Monte Carlo simulation to the study of system reliability. The review shows clearly why the Monte Carlo method is efficient in solving complex problems and takes us further into the problem of employing variational-adjoint and other methods to promote accuracy in the face of rare events. Our author provides a signpost to modern studies of dynamic reliability that can be expected to provide more realistic estimates of the hazards of a real and changing world.

The TMI and Chernobyl accidents emphasised the central significance of both the plant operators and the control room. Modern computers should make it easier for the operators to function, both routinely and in accident conditions, without overlooking the need for improved production efficiency. But to do this requires a blend of hardware, software, and knowledge of human behaviour. A jointly authored review of the problems in computerising control rooms is provided by Dr. Sun, from the US, and his colleague Dr. Kossilov (formerly of the IAEA), both acknowledged experts in this area combining electronic-digital technology with ergonomics.
The close of this volume, however, returns to matters arising from Chernobyl itself, the nuclear accident of epic proportions that has had such wide implications. Although much of the world may be concerned with the public image of nuclear power in the aftermath of the 1986 accident, in Russia, Ukraine, and Byelorussia (Belarus) there is an immediate, substantial, and very practical problem. Our Russian authors from the Kurchatov Institute provide an account of the consequences of Chernobyl that many will find profoundly moving in the wholesale involvement of so many workers in the clean-up process, together with a specification of the difficulties yet to be overcome. To compare these problems with those facing colleagues in the rest of the world is sobering indeed.

As the Christian millennium looms larger, it is right to seek a balanced view of nuclear power. Clearly France and many Far Eastern countries hold to a continuing dependence, in some cases an enlargement, on nuclear power to produce electricity. The stations must be operated safely; the processing industry must be operated efficiently; and above all politicians must come to grips with questions of disposal versus waste storage. But at the same time, burgeoning concerns over global warming, the need to contain carbon-dioxide release, and the finite supply of fossil fuels are examples of the reassessment that may yet see the
next century accept a balanced programme of nuclear power. Much depends on a demonstration of renewed confidence by the United States of America. We hope not only to keep our readers abreast of the changes but to contribute to their ability to bring them about. For this, as always, we thank our distinguished authors for their timely contributions.

Jeffery Lewins
Martin Becker
CONTENTS
Childhood Leukaemia and Radiation: The Sellafield Judgment
E. Janet Tawn and Richard Wakeford

1. Introduction .... 1
2. Childhood Leukaemia around Nuclear Installations .... 2
3. The Gardner Report .... 4
4. The Legal Cases .... 5
5. The Nature of Epidemiology .... 7
6. Background to the Gardner Hypothesis and Subsequent Findings .... 9
7. Radiation Genetic Risk .... 13
8. Genetic Risk from Sellafield Occupational Radiation Exposure .... 14
9. Heritability of Leukaemia .... 15
10. Etiology and Heterogeneity of Cases .... 17
11. Animal Studies .... 18
12. Dose Rate Effects .... 18
13. Unconventional Genetic Mechanisms .... 19
14. The Legal Judgment and Aftermath .... 20
References .... 22

Reactor Dynamics from Monte Carlo Calculations
Timothy E. Valentine

1. Introduction .... 31
2. Review of Statistics of Stochastic Processes .... 32
3. Reactor Transfer Functions .... 34
4. Time Delay Estimation .... 38
5. Monte Carlo Simulation .... 39
6. Application to the Advanced Neutron Source Reactor .... 43
7. Summary .... 51
References .... 51

Notes on a Simplified Tour: From the Fourier to the Wavelet Transform
Marzio Marseguerra

1. Introduction .... 53
2. Preliminaries .... 54
3. The Continuous Windowed Fourier Transform .... 58
   Frames .... 65
4. The Continuous Wavelet Transform .... 69
5. The Discrete Windowed Fourier Transform .... 74
6. The Discrete Wavelet Transform .... 78
7. The Multiresolution Analysis .... 81
8. Sub-Band Filtering .... 103
9. Conclusions .... 111
References .... 112

Genetic Algorithms for Incore Fuel Management and Other Recent Developments in Optimisation
Jonathan N. Carter

1. Introduction .... 113
2. Optimisation Problems .... 113
3. Optimisation Algorithms for Combinatorial Optimisation .... 121
4. Optimisation Methods for Continuum Problems .... 144
5. Conclusions .... 147
References .... 149
Appendix: Gray Coding .... 153

The Computerization of Nuclear Power Plant Control Rooms
Bill K. H. Sun and Andrei N. Kossilov

1. Introduction .... 155
2. Human-Machine Interface .... 156
3. Computerization in Nuclear Power Plant Control Rooms .... 158
4. Safety and Licensing .... 163
5. Implementation and Maintenance Issues .... 165
6. Future Trends .... 167
7. Conclusions .... 168
References .... 169

Consequences of Chernobyl: A View Ten Years On
A. Borovoi and S. Bogatov

1. Introduction .... 171
2. The Accident .... 172
3. Area Pollution .... 182
4. Creation of the Sarcophagus: Its Advantages and Shortcomings .... 190
5. Research Activities Associated with the Sarcophagus .... 191
6. What is the Threat from the Sarcophagus? .... 192
7. Necessity and Strategy for the Transformation of the Sarcophagus .... 199
8. Remediation of Contaminated Areas .... 200
9. Medical Consequences: Residence in the Contaminated Areas .... 203
10. Conclusion .... 211
References .... 212

Dynamic Reliability
Jacques Devooght

1. Introduction .... 215
2. Physical Setting .... 216
3. The Chapman-Kolmogorov Equations .... 218
4. Reduced Forms .... 222
5. Exit Problems .... 227
6. Semi-Markovian Generalization .... 230
7. Subdynamics .... 233
8. Application to Event Trees .... 236
9. Semi-Numerical Methods .... 242
10. The Monte Carlo Method .... 245
11. Examples of Numerical Treatment of Problems of Dynamic Reliability .... 261
12. Conclusions .... 272
References .... 274

INDEX .... 279
CONTENTS OF EARLIER VOLUMES¹
CONTENTS OF VOLUME 10
Optimal Control Applications in Nuclear Reactor Design and Operations, W. B. Terney and D. C. Wade
Extrapolation Lengths in Pulsed Neutron Diffusion Measurements, N. J. Sjöstrand
Thermodynamic Developments, R. V. Hesketh
Kinetics of Nuclear Systems: Solution Methods for the Space-Time Dependent Neutron Diffusion Equation, W. Werner
Review of Existing Codes for Loss-of-Coolant Accident Analysis, Stanislav Fabic

CONTENTS OF VOLUME 11

Nuclear Physics Data for Reactor Kinetics, J. Walker and D. R. Weaver
The Analysis of Reactor Noise: Measuring Statistical Fluctuations in Nuclear Systems, N. Pacilio, A. Colombina, R. Mosiello, F. Morelli and V. M. Jorio
On-Line Computers in Nuclear Power Plants—A Review, M. W. Jervis
Fuel for the SGHWR, D. O. Pickman, J. H. Gittus and K. M. Rose
The Nuclear Safety Research Reactor (NSRR) in Japan, M. Ishikawa and T. Inabe
Practical Usage of Plutonium in Power Reactor Systems, K. H. Peuchl
Computer Assisted Learning in Nuclear Engineering, P. R. Smith
Nuclear Energy Center, M. J. McKelly

CONTENTS OF VOLUME 12

Characteristic Ray Solutions of the Transport Equation, H. D. Brough and C. T. Chandler
Heterogeneous Core Design for Liquid Metal Fast Breeder Reactors, P. W. Dickson and R. A. Doncals
Liner Insulation for Gas Cooled Reactors, B. N. Furber and J. Davidson
Outage Trends in Light Water Reactors, E. T. Burns, R. R. Pullwood and R. C. Erdman
Synergetic Nuclear Energy Systems Concepts, A. A. Harms

¹Volumes 1–9 of the series were published by Academic Press.
Vapor Explosion Phenomena with Respect to Nuclear Reactor Safety Assessment, A. W. Cronenberg and R. Benz

CONTENTS OF VOLUME 13

Radioactive Waste Disposal, Horst Böhm and Klaus Kühn
Response Matrix Methods, Sten-Oran Linkahe and Z. J. Weiss
Finite Approximations to the Even-Parity Transport Equation, E. E. Lewis
Advances in Two-Phase Flow Instrumentation, R. T. Lahey and S. Benerjee
Bayesian Methods in Risk Assessment, George Apostolakis

CONTENTS OF VOLUME 14

Introduction: Sensitivity and Uncertainty Analysis of Reactor Performance Parameters, C. R. Weisbin
Uncertainty in the Nuclear Data used for Reactor Calculations, R. W. Peeble
Calculational Methodology and Associated Uncertainties, E. Kujawski and C. R. Weisbin
Integral Experiment Information for Fast Reactors, P. J. Collins
Sensitivity Functions for Uncertainty Analysis, Ehud Greenspan
Combination of Differential and Integral Data, J. H. Marable, C. P. Weisbin and G. de Saussure
New Developments in Sensitivity Theory, Ehud Greenspan

CONTENTS OF VOLUME 15

Eigenvalue Problems for the Boltzmann Operator, V. Protopopescu
The Definition and Computation of Average Neutron Lifetimes, Allen F. Henry
Non-Linear Stochastic Theory, K. Saito
Fusion Reactor Development: A Review, Weston M. Stacey, Jr.
Streaming in Lattices, Ely M. Gelbard

CONTENTS OF VOLUME 16

Electrical Insulation and Fusion Reactors, H. M. Bamford
Human Factors of CRT Displays for Nuclear Power Plant Control, M. M. Danchak
Nuclear Pumped Lasers, R. T. Schneider and F. Hohl
Fusion-Fission Hybrid Reactors, E. Greenspan
Radiation Protection Standards: Their Development and Current Status, G. C. Roberts and G. N. Kelly
CONTENTS OF VOLUME 17
A Methodology for the Design of Plant Analysers, T. H. E. Chambers and M. J. Whitmarsh-Everies
Models and Simulation in Nuclear Power Station Design and Operation, M. W. Jervis
Psychological Aspects of Simulation Design and Use, R. B. Stammers
The Development of Full-Scope AGR Training Simulators within the C. E. G. B., C. R. Budd
Parallel Processing for Nuclear Safety Simulation, A. Y. Allidina, M. C. Singh and B. Daniels
Developments in Full-Scope, Real-Time Nuclear Plant Simulators, J. Wiltshire

CONTENTS OF VOLUME 18

Realistic Assessment of Postulated Accidents at Light Water Reactor Nuclear Power Plants, E. A. Warman
Radioactive Source Term for Light Water Reactors, J. P. Hosemann and K. Hassman
Multidimensional Two-Phase Flow Modelling and Simulation, M. Arai and N. Hirata
Fast Breeder Reactors—The Point of View of French Safety Authorities, M. Laverie and M. Avenas
Light Water Reactor Space-Dependent Core Dynamics Computer Programs, D. J. Diamond and M. Todosow

CONTENTS OF VOLUME 19

Festschrift to Eugene Wigner
Eugene Wigner and Nuclear Energy, A. M. Weinberg
The PIUS Principle and the SECURE Reactor Concepts, Kåre Hannerz
PRISM: An Innovative Inherently Safe Modular Sodium Cooled Breeder Reactor, P. H. Pluta, R. E. Tippets, R. E. Murata, C. E. Boardman, C. S. Schatmeier, A. E. Dubberley, D. M. Switick and W. Ewant
Generalized Perturbation Theory (GPT) Methods; A Heuristic Approach, Augusto Gandini
Some Recent Developments in Finite Element Methods for Neutron Transport, R. T. Ackroyd, J. K. Fletcher, A. J. H. Goddard, J. Issa, N. Riyait, M. M. R. Williams and J. Wood
CONTENTS OF VOLUME 20
The Three-Dimensional Time and Volume Averaged Conservation Equations of Two-Phase Flow, R. T. Lahey, Jr., and D. A. Drew
Light Water Reactor Fuel Cycle Optimisation: Theory versus Practice, Thomas J. Downar and Alexander Sesonske
The Integral Fast Reactor, Charles E. Till and Yoon I. Chang
Indoor Radon, Maurice A. Robkin and David Bodansky

CONTENTS OF VOLUME 21

Nodal Methods in Transport Theory, Ahmed Badruzzaman
Expert Systems and Their Use in Nuclear Power Plants, Robert E. Uhrig
Health Effects of Low Level Radiation, Richard Doll and Sarah Darby
Advances in Optimization and Their Applicability to Problems in the Field of Nuclear Science and Technology, Geoffrey T. Parks
Radioactive Waste Storage and Disposal in the U. K., A. D. Johnson, P. R. Maul and F. H. Pasant

CONTENTS OF VOLUME 22

High Energy Electron Beam Irradiation of Water, Wastewater and Sludge, Charles N. Kurucz, Thomas D. Waite, William J. Cooper and Michael J. Nickelsen
Photon Spectroscopy Calculations, Jorge F. Fernández and Vincenzo G. Molinari
Monte Carlo Methods on Advanced Computer Architecture, William R. Martin
The Wiener-Hermite Functional Method of Representing Random Noise and its Application to Point Reactor Kinetics Driven by Random Reactivity Fluctuations, K. Behringer

CONTENTS OF VOLUME 23

Contraction of Information and Its Inverse Problems in Reactor System Identification and Stochastic Diagnosis, K. Kishida
Stochastic Perturbation Analysis Applied to Neutral Particle Transfers, Herbert Rieff
Radionuclide Transport in Fractured Rock: An Analogy with Neutron Transport, M. M. R. Williams

CONTENTS OF VOLUME 24
Chernobyl and Bhopal Ten Years on, Malcolm C. Grimston
Transport Theory in Discrete Stochastic Mixtures, G. C. Pomraning
The Role of Neural Networks in Reactor Diagnostics and Control, Imre Pázsit and Masaharu Kitamura
Data Testing of ENDF/B-VI with MCNP: Critical Experiments, Thermal-Reactor Lattices and Time-of-Flight Measurements, Russell D. Mosteller, Stephanie C. Frankle, and Phillip G. Young
System Dynamics: An Introduction and Applications to the Nuclear Industry, K. F. Hansen and M. W. Golay
Theory: Advances and New Models for Neutron Leakage Calculations, Ivan Petrovic and Pierre Benoist
Current Status of Core Degradation and Melt Progression in Severe LWR Accidents, Robert R. Wright
CHILDHOOD LEUKAEMIA AND RADIATION: THE SELLAFIELD JUDGMENT

E. Janet Tawn¹ and Richard Wakeford²

¹Westlakes Research Institute, Moor Row, Cumbria CA24 3JZ, UK
²British Nuclear Fuels plc, Risley, Warrington WA3 6AS, UK
(Advances in Nuclear Science and Technology, Volume 25, edited by Lewins and Becker, Plenum Press, New York, 1997.)

INTRODUCTION

In October 1993, a year from the commencement of court proceedings, Judgment was given in the High Court of Justice, London, for the Defendants, British Nuclear Fuels plc (BNFL), in cases brought by two individuals claiming compensation for the causation of leukaemia/lymphoma. This review will examine the events leading up to the trial and the scientific issues which were raised then, and subsequently, in addressing the question of a possible role for ionising radiation in excesses of childhood leukaemia occurring around nuclear installations.

Ten years earlier a television documentary, “Windscale - the Nuclear Laundry”, had drawn attention to an apparent excess of childhood leukaemia in the coastal village of Seascale, 3 km from the Sellafield nuclear installation in West Cumbria. The “cluster” consisted of 6 cases which had occurred since 1950, when nuclear operations commenced at Sellafield. This number of observed cases was about 10 times the number expected on the basis of national rates. The documentary suggested that exposure to radiation from radioactive discharges from Sellafield might be responsible for this excess of childhood leukaemia.

The concern generated by this programme prompted the UK Government to set up an Independent Advisory Group, chaired by Sir Douglas Black, to enquire into these claims, and in 1984 the Group confirmed the excess of childhood leukaemia.1 However, a radiological assessment carried out by the National Radiological Protection Board (NRPB) found that radiation doses to the children of Seascale from Sellafield discharges were several hundred times too small to account for the excess.2 This led to speculation that the risk of radiation-induced childhood leukaemia in Seascale had been greatly underestimated3 and a programme of scientific work was recommended to clarify
the risk of childhood leukaemia in Seascale, including a case-control study of leukaemia and lymphoma in West Cumbria. The report of the Black Advisory Group1 also recommended that an expert group with “significant health representation” should be set up to examine the effects of environmental radioactivity, and the Committee on Medical Aspects of Radiation in the Environment (COMARE) was established in 1985.

COMARE’s first task was to reassess discharges from Sellafield following suggestions that certain emissions of radioactive material had been underestimated, in particular the atmospheric releases of irradiated uranium oxide particles from the chimneys of the early air-cooled military reactors at Sellafield (the “Windscale Piles”) during the mid-1950s. A reassessment of discharges4 did indicate that particular releases had been underestimated in the original assessment carried out for the Black Advisory Group, but the revision of the discharge record had relatively little impact upon the radiological reassessment carried out by the NRPB, because the original assessment had been based, wherever possible, upon environmental measurements rather than discharge records. The First Report of COMARE5 found that the conclusion of the Black Advisory Group,1 that Sellafield radioactive discharges could not account for the excess of childhood leukaemia in Seascale, remained unchanged by the additional discharge information, but the Report expressed dissatisfaction over the way this information had come to light. The COMARE First Report5 reinforced the recommendations of the Report of the Black Advisory Group.1

Two of the recommendations of the Black Advisory Group1 concerned cohort studies of children born in Seascale and children attending schools in Seascale. These cohort studies would provide more secure information than the geographical correlation studies which had identified the excess of childhood leukaemia in Seascale.
The results of these cohort studies were published in 1987.6,7 The excess of childhood leukaemia was confirmed although the results suggested that the excess was concentrated among those born in the village. The authors considered that this finding might indicate that one or more risk factors might be acting on a “locality specific basis before birth or early in life”.
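The statistical weight carried by a small cluster can be illustrated with a simple Poisson calculation. Taking the figures quoted in the introduction (6 observed cases against roughly a tenth of that expected, i.e. about 0.6), the probability of so many cases arising by chance alone is easily computed. This sketch is purely illustrative and forms no part of the cited analyses; the expected count of 0.6 is inferred from the "about 10 times" figure above.

```python
import math

def poisson_tail(observed: int, expected: float) -> float:
    """P(X >= observed) for X ~ Poisson(expected)."""
    below = sum(math.exp(-expected) * expected**k / math.factorial(k)
                for k in range(observed))
    return 1.0 - below

# Seascale-style figures: 6 cases observed where ~0.6 were expected
p = poisson_tail(6, 0.6)
print(f"P(6 or more cases by chance) = {p:.1e}")
```

With these numbers the tail probability comes out at a few parts in a hundred thousand, which is why a cluster of six cases against an expectation below one attracts attention even before any causal mechanism is proposed; the epidemiological difficulty, discussed throughout this chapter, is that many small areas are scanned, so some extreme clusters will arise somewhere by chance.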
CHILDHOOD LEUKAEMIA AROUND NUCLEAR INSTALLATIONS

Meanwhile, several other reports of raised levels of childhood leukaemia around certain other nuclear installations in Britain were being made. In particular, researchers from the Scottish Health Service reported that a tenfold excess of leukaemia had occurred among young persons living within 12½ km of the Dounreay nuclear establishment in northern Scotland during 1979-84.8 This finding was based upon 5 cases. In addition, an excess of childhood leukaemia cases was found around the nuclear weapons facilities at Aldermaston and Burghfield in West Berkshire.9 COMARE investigated both of these reports, which became the subject of the COMARE Second Report10 and Third Report11 respectively.

The COMARE Second Report10 was published in 1988 and examined the incidence of leukaemia and non-Hodgkin’s lymphoma among young people under 25 years of age living in the vicinity of Dounreay. The Committee confirmed the excess of childhood leukaemia around Dounreay, particularly in the coastal town of Thurso about 13 km east of the site, but noted that the excess was confined to the period 1979-84 out of the entire period 1968-84 available for study. Also, the Thurso cases were restricted to the western part of the town, within 12½ km of Dounreay. There was no apparent reason for this space-time pattern, and COMARE were cautious in their interpretation. Nevertheless, the observation of an excess of childhood leukaemia around the only other nuclear fuel reprocessing plant in Britain led COMARE to suggest that some “feature” of Sellafield and Dounreay might be leading to an excess risk of childhood leukaemia around the sites.
Again, a detailed radiological assessment carried out by the NRPB demonstrated that radiation doses due to radioactive discharges were much too low (by a factor of around 1000) to account for the excess of childhood leukaemia in Thurso.12,13 COMARE recognised that conventional radiation risk assessment could not account for the raised level of childhood leukaemia around Dounreay, but the Committee suggested that research be undertaken to investigate whether there might exist unrecognised routes whereby radiation exposure could materially raise the risk of childhood leukaemia. The COMARE Second Report10 made a number of recommendations for scientific work to be carried out to investigate this possibility.

The COMARE Third Report11 was published in 1989 and dealt with the observation of a raised level of childhood leukaemia around Aldermaston and Burghfield.9 The Committee confirmed an excess of leukaemia in young children living in the area, and noted that the excess also extended to other childhood cancers, a phenomenon which had not been observed in other studies. A study of childhood cancer around the nearby nuclear establishment at Harwell which was carried out for the COMARE Third Report11 found no evidence of an excess risk of childhood leukaemia around this site. Another detailed radiological assessment carried out by the NRPB14 demonstrated yet again that radiation doses to children from discharges of radioactivity from Harwell, Aldermaston and Burghfield were far too low to account for the excess of childhood leukaemia. Indeed, the doses from the minute quantities of radioactive material released from the Burghfield facility, around which the excess of childhood leukaemia appeared to be concentrated, were about a million times too low to be capable of accounting for the additional cases.
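The "too low by a factor of X" statements rest on a simple linear scaling: under a linear no-threshold assumption, the number of radiation-induced cases predicted in a population is roughly the collective dose multiplied by a risk coefficient, and this prediction is compared with the observed excess. The numbers below are invented placeholders to show the arithmetic only; they are not values from the NRPB assessments cited.

```python
# Illustrative linear (LNT-style) risk scaling.
# Both input numbers are hypothetical placeholders, not NRPB values.

risk_per_sievert = 0.05     # assumed leukaemia risk per person-sievert (illustrative)
collective_dose_sv = 0.4    # assumed collective dose to the child population, person-Sv
predicted_cases = risk_per_sievert * collective_dose_sv

observed_excess = 5         # size of a typical reported cluster
shortfall = observed_excess / predicted_cases
print(f"predicted = {predicted_cases:.3f} cases; "
      f"observed excess is {shortfall:.0f}x larger")
```

Because the prediction scales linearly with dose, an observed excess hundreds or a million times larger than predicted cannot be explained by modest revisions to the discharge record, which is why the reassessments described above left the conclusions essentially unchanged.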
Near Aldermaston, doses from the discharge of naturally occurring radionuclides from the coal-fired boilers were greater than the doses received from the emission of nuclear material.15 COMARE recognised that there was no value in continuing to investigate individual reports of raised levels of childhood leukaemia near particular nuclear sites. They suggested that no further detailed investigations should be conducted until the background pattern of childhood leukaemia, and how this might influence childhood leukaemia near nuclear sites, was better understood. The Committee recommended a continuing programme of fundamental scientific research to better understand childhood leukaemia and the role of radiation in its induction.

Other studies of childhood leukaemia around nuclear installations in Britain had also been carried out. Of particular importance was a study undertaken by the Office of Population Censuses and Surveys and the Imperial Cancer Research Fund which examined cancer incidence and mortality in local authority areas around all the major nuclear facilities in England and Wales.16,17 The study was important because sites were not selected on the basis of prior knowledge of the data, a source of uncertainty which had affected several other studies of childhood leukaemia around nuclear installations.18,19 The investigation found no evidence for a generally raised risk of cancer among those living in areas around nuclear sites, but it did find some evidence to suggest a raised risk of childhood leukaemia near nuclear establishments which commenced operations before 1955. However, interpretation of this finding was complicated by the level of childhood leukaemia in control areas (with which the areas around nuclear sites were being compared) being especially low, rather than the level around installations being especially high.
Subsequent work which was carried out to address this problem of control areas confirmed the excess of childhood leukaemia in local authority districts around nuclear sites, but the authors noted that this excess occurred in a large area around the facilities and that there was no evidence of a trend of increase of childhood leukaemia with nearness of a district to a facility.20 A further study which examined cancer in districts around potential sites of nuclear power stations found a pattern which was very similar to that found around existing sites.21 The implication was
that the raised level of childhood leukaemia found in districts near existing nuclear installations was more to do with the nature of the area in which a nuclear facility was sited than with the facility itself.

By the end of the 1980s, it was recognised that the risk of leukaemia among children living around certain nuclear installations in Britain appeared to be raised, although the interpretation of this finding was far from straightforward.18,22 It was broadly agreed that direct exposure to ionising radiation was most unlikely to be the cause of the raised levels of childhood leukaemia, and research had not revealed any gross underestimates of the risk of radiation-induced childhood leukaemia which could account for the observed discrepancy between actual numbers of childhood leukaemia cases and the numbers predicted by radiological assessments.23,24 For example, studies of childhood leukaemia after the highest levels of atmospheric nuclear weapons testing in the early 1960s25,26 have not detected any unexpected rise in rates due to doses from radionuclides present in the fallout, and these are very similar to the radionuclides released in the effluent of nuclear fuel reprocessing plants. Therefore, these studies provide evidence against such radionuclides deposited within the body making an unexpectedly high contribution to the risk of childhood leukaemia.

The finding concerning raised levels of childhood leukaemia around certain nuclear installations in Britain led to a number of studies being carried out in other countries. No convincing evidence of a raised risk of childhood leukaemia around nuclear facilities has been found in the USA,27 France,28,29 Germany,30 Canada31 or Sweden.32 Recently, however, evidence has been published of an excess of childhood leukaemia near the La Hague nuclear reprocessing plant in Normandy,33 although the results of a detailed investigation of these cases have yet to be published.

Following the trial, Bithell et al.34 reported the results of the most detailed investigation of the distribution of childhood leukaemia and non-Hodgkin’s lymphoma in electoral wards near nuclear installations in England and Wales during 1966-87. Sites considered were the 15 major installations, 8 minor installations and 6 potential sites of power stations. In no instance was a statistically significant excess of cases found in areas within 25 km of an installation. Tests for a trend of increasing incidence with nearness of a ward to an installation were performed. The only significant results were for Sellafield (which was entirely accounted for by the 6 cases in Seascale) and for the minor facility at Burghfield. One potential site also gave a significant trend. The authors concluded that there was “virtually no convincing evidence for a geographical association of childhood leukaemia and non-Hodgkin’s lymphoma with nuclear installations in general”. This study must be regarded as the definitive geographical correlation study of childhood leukaemia and nuclear installations in England and Wales.
THE GARDNER REPORT

As a result of a recommendation made in the Report of the Black Advisory Group,1 Gardner and his colleagues carried out a case-control study of leukaemia and lymphoma in those born in West Cumbria and diagnosed during 1950-85 while under 25 years of age and resident in the district. This was published in February 1990 and became known as the Gardner Report.35 Many factors potentially influencing the risk of childhood leukaemia in the district were examined in this study, but the most striking finding was a statistically significant association between doses of radiation as recorded by film badges worn by men employed at Sellafield before the conception of their children and leukaemia in their children. An association was found with two measures of preconceptional dose: a recorded external dose of accumulated by a father before conception and a dose
CHILDHOOD LEUKAEMIA AND RADIATION
of received in the 6 months immediately preceding conception. These doses were calculated from annual summaries of film badge records, and doses for part-years were obtained pro rata from the annual doses. Relative risks of around 6 to 8 were reported for these highest dose categories. Similar associations were also found for leukaemia and non-Hodgkin’s lymphoma combined, although the results were driven by the leukaemia cases. The statistically significant associations were based on just 4 cases of leukaemia and a similarly small number of controls, and the same 4 cases were responsible for all the significant associations. Consequently, lower 95% confidence limits for the significantly raised relative risks were all between 1 and 2. Examination of the preconceptional doses received by the fathers of the 5 Seascale-born leukaemia cases in the study led the authors to suggest that the association could effectively explain the excess of cases in the village. In the absence of any other factors which could account for the excess, particularly direct exposure to radiation from radioactive effluent, the Gardner hypothesis, as it became known, was attractive and the case-control study had apparently achieved its objective. The causal hypothesis put forward to explain the association between paternal preconceptional radiation exposure and childhood leukaemia suggested that the excess cases were the result of radiation-induced sperm mutations which manifested themselves in first generation progeny, i.e. a dominantly inherited effect. The implications of this hypothesis were commented on at the time, particularly by scientists in the fields of human genetics and radiobiology.36-41 Two issues were of prime concern: the discrepancy between the genetic risks implied by the Gardner hypothesis and those generated by the International Commission on Radiological Protection (ICRP),42 and the lack of evidence to suggest that leukaemia has a strong heritable component.
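The instability of relative risk estimates based on so few cases can be illustrated with a short calculation. The counts below are hypothetical, chosen only to reproduce an odds ratio of about 7 from 4 exposed cases; they are not the actual data of the Gardner study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI (Woolf's logit method)
    for a 2x2 table: a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Illustrative only: 4 exposed cases, as in the highest dose category,
# with invented control counts giving a relative risk near 7.
or_, lo, hi = odds_ratio_ci(a=4, b=42, c=3, d=220)
print(f"OR = {or_:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```

With these invented counts the point estimate is about 7 yet the lower 95% confidence limit falls between 1 and 2, exactly the pattern described above: a large apparent relative risk whose statistical support is weak.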
Nevertheless, the Gardner Report received considerable media attention and the statistical association was translated into a causal link.
THE LEGAL CASES

It was on the publication of the COMARE Second Report10 that the possibility of legal action on behalf of individuals (or their families) who had developed leukaemia or non-Hodgkin’s lymphoma while living near nuclear installations began to be seriously considered. The London firm of solicitors, Leighs (later Leigh, Day & Co), took the unusual step of advertising in a local Cumbrian newspaper for clients wishing to pursue claims against BNFL. Writs were issued in 1989 on behalf of children who had developed leukaemia or non-Hodgkin’s lymphoma while living near Sellafield. These writs claimed personal injury resulting from exposure to radiation from Sellafield. Writs were actually served in 1990 within weeks of publication of the Gardner Report, and the focus of the claims shifted from environmental exposure to paternal preconceptional irradiation. The two cases of Reay v BNFL and Hope v BNFL were heard concurrently before a single judge between October 1992 and June 1993. The Judge arrived at his decision on the balance of probabilities, based on evidence presented by expert witnesses who prepared one or more written reports and gave oral evidence in Court. In both cases the father had received cumulative film badge doses prior to the child’s conception, although only Mr Reay had received in the 6 months preceding conception. Dorothy Reay was born in the town of Whitehaven, 13 km north of Sellafield, and died of leukaemia there in 1962 at the age of 10 months. Vivien Hope was born in the village of Drigg, 3 km south of Seascale, in 1965 and moved to Seascale at the age of 6 years. In 1988, while still living in Seascale, she was diagnosed as having non-Hodgkin’s lymphoma and to date she is in remission.
E. JANET TAWN AND RICHARD WAKEFORD
Because of the heavy reliance placed upon the Gardner Report, the Court ordered the UK Medical Research Council (MRC) and BNFL to make available to both Plaintiffs and Defendants, on a confidential basis, all relevant base data and associated dosimetry data of the West Cumbria case-control study. This enabled experts in dosimetry, statistics and epidemiology to examine and reanalyse the data used in the Gardner Report for the purposes of preparing expert evidence. In addition, the legal process of Discovery obliged both sides to make available to each other any document held by them that was of relevance to the issues in the legal action. BNFL and its predecessor, the United Kingdom Atomic Energy Authority (UKAEA), gave copies of many thousands of documents to the Plaintiffs relating to occupational radiation dose and discharges to the environment. As a consequence the Plaintiffs suggested that the use by Gardner et al.35 of annual summaries of film badge readings had underestimated, perhaps significantly, the actual dose received, for example by not including doses to the testes from internally deposited radionuclides. In addition, individual monitoring for neutrons had not been carried out in the early years of operations at Sellafield, personal protection being achieved through area monitoring. As a consequence neutron doses had to be individually assessed for a number of fathers in the West Cumbria case-control study. Further, the accuracy of early film badges under certain conditions was questioned and individual dose records were examined to determine what effect this might have upon preconceptional doses. Considerable debate took place outside the Court but in the end photon doses, assessed neutron doses and internal doses were agreed for the purposes of the litigation for Mr Reay and Mr Hope and for the relevant case and control fathers in the Gardner Report. These doses were used in the Gardner Report reanalyses.
Mr Reay’s total preconceptional dose was agreed to be 530 mSv and that for Mr Hope 233 mSv. The Plaintiffs received an enormous amount of information during the Discovery process relating to discharges of radioactive material from Sellafield but were faced with some difficulty in claiming that environmental exposure to radiation was the most likely cause of the malignant diseases since this possibility had been examined in considerable detail in the period from the broadcast of the original television documentary in 1983. Those who had examined this possibility, including the NRPB,2,4 had concluded that radiation doses were a factor of several hundred times too low to account for the excess of childhood leukaemia in Seascale, and the various pathways and mechanisms that had been suggested as possibly leading to an underestimation of risk had effectively been discounted.23 It was partly because of this that the Gardner hypothesis looked attractive as an explanation for the Seascale “cluster”. In order to pursue environmental radiation causation it was necessary, therefore, either to identify a gross underestimate of radioactivity discharged, or to demonstrate that dose to target tissue was considerably greater than had been thought, or to postulate that the risk of childhood leukaemia due to somatic exposure to radiation had been substantially underestimated. Moreover, these large discrepancies would have to have been missed by the detailed examinations that had been carried out. This was a formidable task. In the end, the Plaintiffs did not put expert evidence concerning environmental doses before the Court, and contented themselves with cross-examining the Defendants’ experts in this field. The probability that the two cancers had been caused by radioactive material discharged from Sellafield was calculated to be very low: 0.16% for Dorothy Reay and 1.4% for Vivien Hope. 
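Probabilities of causation of this kind are conventionally derived from the excess relative risk (ERR) attributable to the exposure, via PC = ERR / (1 + ERR). The sketch below illustrates only the arithmetic; the ERR value used is reverse-engineered from the quoted 0.16% figure, not taken from the actual dosimetry put before the Court.

```python
def probability_of_causation(excess_relative_risk):
    """Probability of causation given the excess relative risk (ERR)
    attributable to the exposure: PC = ERR / (1 + ERR)."""
    return excess_relative_risk / (1.0 + excess_relative_risk)

# Illustrative: an ERR of 0.0016 relative to the baseline risk gives a
# probability of causation of about 0.16%, the order of magnitude
# quoted for Dorothy Reay.
print(f"{probability_of_causation(0.0016):.2%}")
```

Note that for small excess risks PC is very nearly equal to the ERR itself, which is why environmental doses hundreds of times too low to explain the Seascale excess translate into probabilities of causation well below 1%.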
During the trial, the Plaintiffs conceded that radionuclides discharged from Sellafield could not alone be the cause of the two cases or of the excess of childhood leukaemia in Seascale.
THE NATURE OF EPIDEMIOLOGY

Epidemiology is the study of patterns of disease in groups of humans with the objective of identifying those factors influencing the risk of disease. Results are obtained through the statistical analysis of data concerning potential risk factors in such groups. Epidemiology is an observational, that is a non-experimental, science which uses data gathered under the scientifically uncontrolled conditions of everyday life. Since data have not been generated under experimental conditions, particular care has to be taken in the interpretation of the results of epidemiological studies. An epidemiological association could directly reflect an underlying cause-and-effect relationship, but other interpretations must always be considered. One such alternative explanation is that a statistical association has occurred through the play of chance, and does not represent a genuine effect. By definition, statistically significant results will occur at a low frequency by chance alone. Special care has to be taken in the interpretation of statistically significant results in a study in which many statistical tests have been performed. This was the case in the West Cumbria case-control study35 which was essentially exploring data in a search for an indication of what might be the cause of the Seascale childhood leukaemia “cluster”. Chance can act to produce statistically significant results in many insidious ways, particularly in exploratory studies, and especially when the structure of an analysis has been influenced, however unintentionally, by some prior knowledge of the data. A further explanation of an epidemiological association which is not causal is that biases or systematic effects have entered into the study, such that the association is artificial. Because epidemiological data are generated under scientifically uncontrolled conditions, biases can be introduced into studies if sufficient care is not taken.
Thus, systematic errors can occur through, for example, an unrepresentative selection of cases or controls, or through biased collection of information. Careful design and execution of an epidemiological study is required to avoid the introduction of bias. Another way in which an association found in an epidemiological study does not represent a direct cause-and-effect relationship is through confounding. In this case, unlike with chance or bias, a causal relationship is responsible for the epidemiological association, but only indirectly. The factor identified in the study is, in fact, related to a genuine causal factor and is not, in itself, the cause of the disease. In an observational study, the problems normally associated with experimental research are magnified, and this requires considerable caution in the interpretation of findings. Inferential frameworks have been proposed to assist in the scientific assessment of epidemiological associations. Probably the most famous of these was proposed by the eminent British epidemiologist, Sir Austin Bradford Hill, in 1965.43 He suggested the following nine criteria of causality as a guide to the interpretation of epidemiological findings:

Temporality. This is a necessary condition for a causal relationship and requires that a putative cause must precede the effect.

Consistency. Owing to the difficulties encountered in epidemiological research, an association should be found under different conditions if a cause-and-effect interpretation is viable. Thus the association should be found with different study designs, in different groups of people, under as wide a variety of circumstances as possible.

Strength of Association. An association which is large and has high statistical significance is less likely to have been produced by chance or bias alone.
Biological Gradient. An epidemiological association should demonstrate an appropriate dose-response relationship, that is, the association should tend to increase as the dose, or level of exposure to the supposed cause, increases.

Coherence. An association found in a limited group of individuals should be coherent with a wider body of epidemiological data. Therefore, if cigarette smoking is found to be associated with lung cancer in a study group of individuals, then national rates of lung cancer would be expected to increase (after an appropriate time lag to account for the latent period) with the increased consumption of cigarettes at a national level. They have in fact done so.

Biological Plausibility. An epidemiological association should not run counter to the established body of biological knowledge. This includes results from animal experiments. The criterion of biological plausibility is growing increasingly important in the interpretation of epidemiological associations as this knowledge expands.

Human “Experiments”. If circumstances of exposure change, mimicking experimental conditions, then the proposed consequent effect should change accordingly. Therefore, if cigarette consumption decreases at the national level as a result of public health campaigns, then lung cancer rates should decrease after an appropriate time lag, which they have done.

Analogy. It may be that a particular epidemiological association is analogous to another association for which the evidence for causality is stronger. Such an analogy would provide additional assurance that the association represented a cause-and-effect relationship.

Specificity. An association can be more likely to be causal if a specific effect is caused by a specific exposure, rather than a more general association with a spectrum of diseases with different causes.
Sir Austin Bradford Hill said of these nine criteria of causality: “None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question - is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?” “No formal test of [statistical] significance can answer those questions. Such tests can, and should, remind us of the effects that the play of chance can create, and they will instruct us in the likely magnitude of those effects. Beyond that they contribute nothing to the ‘proof’ of our hypothesis.” It will be seen that the interpretation of an epidemiological association is a complex process which has to be carried out with considerable care if erroneous inferences are not to be made. There have been many instances in epidemiology where apparently convincing associations have turned out not to be causal. Epidemiology could be seen as a statistical stopgap which is used in the absence of detailed biological mechanistic knowledge. Epidemiology would not be necessary if such mechanistic knowledge were available, but in most cases this ideal situation is far from being reality. Epidemiology does have the considerable strength of using direct observations on groups of humans; in other words it is a direct measure of health under particular circumstances. But epidemiological studies do need to be conducted, and results interpreted, with the appropriate caution.
BACKGROUND TO THE GARDNER HYPOTHESIS AND SUBSEQUENT FINDINGS
From the discussion above, it is clear that the initial interpretation of the association between paternal preconceptional radiation dose and childhood leukaemia found in the West Cumbria case-control study35 had to be made in the context of the body of scientific knowledge existing at that time. The epidemiological evidence supporting a causal interpretation was, at best, weak and there was evidence against a direct cause-and-effect relationship. Leukaemia among 7387 offspring of the Japanese atomic bomb survivors had been studied.44 The results provided no support for irradiation of fathers before conception of their children materially increasing the risk of leukaemia in these children, the observed number of cases being 5 against an expected number of 5.2, even though the doses received during the bombings were, on average, much higher than those received by the Sellafield fathers. A review of pre-1989 studies concerning the health effects of low-level preconceptional radiation by Rose for the then Department of Health and Social Security concluded that there was no reliable evidence of a causal association between childhood malignancies and low-level doses of preconception paternal or maternal irradiation.45 Although a Chinese study initially reported an association of leukaemia with paternal diagnostic X-ray dose,46 a follow-up study failed to confirm these findings, suggesting that the original association was probably due to recall bias.47 (The same group has recently reported a study in the USA which found a positive association of paternal preconceptional X-rays with infant leukaemia,48 but again there are difficulties with self-reporting of data which could lead to recall bias.) The COMARE Second and Third Reports10,11 had examined possible mechanisms by which occupational exposure, including a preconceptional effect, could be involved in the induction of childhood cancer and had concluded that such mechanisms were “highly speculative”.
The Committee did, however, advise that these needed to be explored if only to be dismissed. COMARE, in a Statement of Advice to Government (Hansard, 2 April 1990) issued two months after the publication of the West Cumbria case-control study,35 noted the statistical association between recorded external radiation dose and the incidence of childhood leukaemia but was cautious in its interpretation since the conclusions of the study were based on very small numbers and the findings were novel, not having previously been reported. The Committee further noted that “this type of study cannot provide evidence of causal relationship” and that additional evidence was required before firmer inferences could be drawn. Epidemiological studies of relevance to the scientific interpretation of the association between recorded external radiation dose and childhood leukaemia found in the Gardner Report soon followed. Yoshimoto et al.49 updated the study of cancer among the live born offspring of one or both parents who had been irradiated during the atomic bombings of Hiroshima and Nagasaki. Fourteen cases of leukaemia were found in children of parents who could be assigned the latest gonadal dose estimates against 11.2 expected from an unirradiated control group, a non-significant excess. Little compared the results of Gardner et al.35 with those from the Japanese study.
He demonstrated that the excess relative risk coefficient (the excess risk per unit dose) derived from the Japanese data, whether considering parental or paternal doses, was statistically incompatible with that derived from the West Cumbrian data, the coefficient obtained from the Gardner Report being about 50 to 80 times higher than those obtained from the offspring of the Japanese bomb survivors.50 Using data obtained from the Radiation Effects Research Foundation in Japan, Little51 also showed that the risk of childhood leukaemia among those Japanese children born to fathers irradiated during the bombings and conceived within
approximately half a year of this exposure was also statistically incompatible with the findings of Gardner et al.35 In the COMARE Second Report10 the recommendation was made that a leukaemia and lymphoma case-control study be carried out around Dounreay, and the results of this study were reported in 1991.52 After the publication of the Gardner Report the primary interest in the Caithness case-control study concerned the findings of relevance to paternal preconceptional irradiation. Urquhart et al.52 found that of 8 cases of childhood leukaemia and non-Hodgkin’s lymphoma resident within 25 km of Dounreay at diagnosis, only 2 had fathers who had received a dose of radiation in the nuclear industry prior to the child’s conception, and both of the preconceptional doses were <50 mSv. The authors observed that paternal preconceptional irradiation could not explain the excess of childhood leukaemia and non-Hodgkin’s lymphoma in the vicinity of Dounreay, and therefore any “feature” common to both Sellafield and Dounreay which was raising the risk of childhood leukaemia in the vicinity of these installations (as suggested by COMARE in 198810) could not be paternal exposure to radiation prior to conception. However, the number of cases considered by the Caithness case-control study was so small that the findings concerning paternal radiation dose were statistically compatible with both those of the Gardner Report and those of the Japanese study. A further case-control study53 of childhood leukaemia and non-Hodgkin’s lymphoma in three areas where excesses of childhood leukaemia had been reported (including West Cumbria) did find evidence of a raised incidence of childhood leukaemia and non-Hodgkin’s lymphoma associated with paternal preconceptional irradiation, but there was considerable overlap with the study by Gardner et al.35 and therefore this study did not provide independent support for the Gardner hypothesis.
The results of the first large childhood leukaemia case-control study designed to test the Gardner hypothesis became available just before the commencement of the trial in 199254 and were published subsequently. McLaughlin et al.54,55 examined 112 childhood leukaemia cases and 890 matched controls born to mothers living around operating nuclear facilities in Ontario, Canada and diagnosed during 1950-1988. No evidence was found for an increased risk associated with total cumulative paternal preconceptional dose or with paternal exposure to radiation in the 6 months immediately prior to conception. Looking specifically at the associations found in the study of Gardner et al.,35 it is notable that no case but 5 controls were associated with cumulative paternal preconceptional doses, and no case but 7 controls were associated with 6 month preconceptional doses. Paternal exposure to tritium (principally associated with operation of CANDU reactors) was also assessed. No case, but 14 controls, were associated with paternal exposure to tritium before conception. The Ontario case-control study, therefore, provided no support for the Gardner hypothesis. However, even a study of this size could not produce results which were statistically incompatible with those of the Gardner Report, as demonstrated in expert evidence given by G.R. Howe during the trial and later by Little.56 During the trial Kinlen et al.57 published the results of a further large case-control study which examined childhood leukaemia and non-Hodgkin’s lymphoma throughout Scotland in relation to radiation doses received in the nuclear industry. This study of 1369 cases and 4107 controls covering the period 1958-1990 found no association with paternal preconceptional irradiation, whether total cumulative dose or the dose received in the 6 months prior to conception.
The Scottish case-control study does not support the findings of Gardner et al.35 but again the results are not incompatible statistically with those of the West Cumbria study. However, when Little56 later combined the results of the Ontario and Scottish studies and compared these with the Gardner Report the difference was of marginal statistical significance (p = 0.10-0.15).
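Statistical compatibility between two studies is typically judged by comparing their log relative risks. The sketch below shows the standard approximate test, with purely illustrative inputs (a large relative risk with wide confidence limits, of the kind reported for the small highest dose categories, against a null result from a larger study); none of the numbers are taken from the actual studies.

```python
import math

def compare_log_rr(rr1, ci1, rr2, ci2):
    """Approximate two-sided p-value for the difference between two
    relative risks, given each point estimate and its 95% CI.
    The SE of log(RR) is recovered from the CI width: (ln hi - ln lo)/3.92."""
    se1 = (math.log(ci1[1]) - math.log(ci1[0])) / 3.92
    se2 = (math.log(ci2[1]) - math.log(ci2[0])) / 3.92
    z = (math.log(rr1) - math.log(rr2)) / math.sqrt(se1**2 + se2**2)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p

# Illustrative values only: a high RR from a small study versus a
# null result from a much larger one.
z, p = compare_log_rr(7.0, (1.5, 32.0), 1.0, (0.8, 1.3))
print(f"z = {z:.2f}, p = {p:.3f}")
```

Because the small study's confidence interval is so wide, even a sevenfold relative risk yields only modest evidence of incompatibility with a null result, which is why individual negative studies could fail to support the Gardner Report without being formally incompatible with it.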
One feature of the West Cumbria study which seemed anomalous was the concentration of leukaemia cases associated with relatively high doses of paternal preconceptional irradiation in Seascale, with 3 of the 4 cases in the highest dose groups being born in the village. This appeared to run contrary to the general knowledge that the majority of Sellafield workers resided in West Cumbria outside Seascale. Parker et al.58 investigated this peculiarity by entering onto a computer database records from the birth certificates of just over 250 000 live births registered within Cumbria during 1950-89. Using these data from birth certificates, and data on employees held at Sellafield, just over 10 000 Cumbrian-born children were linked to fathers who were employed at Sellafield before the child’s conception. Of these, 9256 were associated with paternal preconceptional radiation exposure. Only 8% of these children were born in Seascale and only 7% of the collective dose of paternal preconceptional irradiation (whether cumulative or 6 months) was associated with these Seascale births. These proportions are highly inconsistent with paternal preconceptional irradiation providing the explanation for the Seascale leukaemia cluster since, on the basis of the association seen in Seascale, many more cases of childhood leukaemia would be expected to have been found in the rest of West Cumbria. The results of this study were presented in expert evidence by one of the authors (RW) during the trial, and provided strong grounds for believing that paternal preconceptional irradiation could not be the sole explanation for the Seascale cluster. Again during the trial, Kinlen59 published the results of a study of childhood leukaemia and non-Hodgkin’s lymphoma incidence in children living in Seascale. He found a significant excess not only among those born in the village but also among those born elsewhere.
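The force of this argument can be seen with a simple proportionality calculation: if risk scaled with paternal preconceptional dose, the number of cases expected outside Seascale would follow directly from Seascale's small share of the collective dose. The function below is an illustrative model of my own construction, not the analysis used in the study; the 5 Seascale-born cases and the 7% dose share are taken from the text above.

```python
def expected_elsewhere(cases_seascale, dose_fraction_seascale):
    """Under a model in which childhood leukaemia risk is proportional
    to paternal preconceptional dose, the number of cases expected
    outside Seascale, given the observed Seascale cases and Seascale's
    share of the collective dose."""
    return cases_seascale * (1 - dose_fraction_seascale) / dose_fraction_seascale

# Illustrative: if 5 Seascale-born cases arose from only 7% of the
# workforce's collective preconceptional dose, proportionality would
# predict far more cases among the other 93%.
print(expected_elsewhere(5, 0.07))  # ~66 expected cases elsewhere
```

The prediction of dozens of cases in the rest of West Cumbria, where no comparable excess was observed, is what makes the dose proportions "highly inconsistent" with paternal preconceptional irradiation as the explanation for the cluster.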
The significant excess in those born outside the village was based on 5 cases, 4 of which were associated with no (or in one case a trivial) dose of paternal preconceptional irradiation. Kinlen concluded that the dose of radiation received by a father before the conception of his child could not explain the excess of childhood leukaemia and non-Hodgkin’s lymphoma in Seascale because it could not account for the significant excess of cases in those born outside Seascale, but diagnosed while resident in the village. G.R. Howe, in expert evidence presented during the trial, showed that the raised relative risk associated with a paternal preconceptional irradiation dose of was effectively confined to those children born in Seascale. This observation was confirmed by the comprehensive study by the UK Health and Safety Executive (HSE)60,61 which was published after Judgment in the Reay and Hope v BNFL cases had been delivered. The HSE study found that a statistically significant association between childhood leukaemia and non-Hodgkin’s lymphoma and paternal preconceptional irradiation was apparent among those born in Seascale, but that the raised relative risk in Seascale was statistically incompatible with the absence of a significantly raised relative risk among those born outside the village. No significant association between childhood leukaemia and non-Hodgkin’s lymphoma and the paternal dose of radiation received shortly before conception (based upon original dose records) was found for either children born in Seascale or children born in the rest of West Cumbria, the association originally reported by Gardner et al.35 being heavily influenced by the proportioning of annual dose summaries, which led to erroneous dose estimates. The HSE study found that the positive association for childhood leukaemia and non-Hodgkin’s lymphoma did not extend to other childhood cancers.
Indeed a negative association was found between these cancers and cumulative paternal preconceptional dose. One matter which received attention during the trial was whether the film badge doses producing the paternal preconceptional irradiation association in the Gardner Report might be acting as a surrogate for some other occupational exposure, in particular exposure to neutrons or internally incorporated radionuclides. G.R. Howe, in expert
evidence, demonstrated that although photon and neutron doses were correlated, the Gardner association appeared to be driven principally by photon doses. This finding was supported by the HSE study which, in a qualitative assessment of neutron exposures, found no association with assessed potential for exposure to neutrons. G.R. Howe also found no evidence for an independent association with internal dose. Again, this was confirmed by the HSE study and by the absence of an association with tritium in the Ontario case-control study54,55 and with the high alpha particle doses from thorium and its decay products in Danish patients injected with the contrast medium, Thorotrast, in the later study of Andersson et al.62 These results make internal irradiation of the testes an unlikely explanation for the Gardner association. Indeed, no occupational factor examined in the HSE study could explain the restriction of the significant excess of childhood leukaemia and non-Hodgkin’s lymphoma among offspring of Sellafield workers to that small fraction who were born in the village of Seascale. The HSE study provided strong confirmation of the validity of the evidence upon which Judgment in the Reay and Hope cases was based. Little et al.63,64 subsequently showed that the Seascale association was statistically incompatible not only with the lack of an association in the rest of West Cumbria but also with the negative results of all other substantive studies using objective measures of radiation dose. One recommendation of the COMARE Third Report11 was that a case-control study of childhood leukaemia should be carried out around Aldermaston and Burghfield, and the results of this study were published during the trial.65 Although an association of borderline significance was found with being monitored for exposure to radiation before conception, unlike in the Gardner study there was no convincing evidence of an association with actual recorded external dose.
The doses received were trivial in all instances, the cumulative doses being <5 mSv. The authors speculated that some unmonitored exposure to radioactive substances or chemicals could be responsible for the observed association, but it was clear that the association could not account for the excess of childhood leukaemia in the area around Aldermaston and Burghfield, only 4 fathers of affected children being employed in the nuclear industry before conception. The results of two relevant geographical correlation studies were published during the course of the trial. Draper et al.66 examined cancer incidence in the vicinity of Sellafield between 1963 and 1990. They confirmed the excess of childhood leukaemia and non-Hodgkin’s lymphoma in Seascale over the period studied by the Black Advisory Group and found that this excess appeared to persist into the period after the Seascale cluster was originally reported in the television documentary in 1983. This excess of cases did not extend to those over 25 years of age, nor did it extend generally to the remaining area of the two local authority districts around Sellafield or to the rest of the county of Cumbria. Craft et al.67 examined cancer incidence among young people living in over 1200 electoral wards in the north of England during 1968-85. They found that Seascale had the most significant excess of childhood leukaemia and non-Hodgkin’s lymphoma. Interestingly, they also identified another electoral ward (Egremont North), situated 7 km north of Sellafield, with an excess of acute lymphoblastic leukaemia which fell within the top five most significant excesses in wards in the north of England. However, these two geographical correlation studies by themselves could not identify the causes of these excesses.
The demonstration in evidence put before the Court that the association between paternal preconceptional irradiation and childhood leukaemia was effectively restricted to the village of Seascale, and did not extend to the great majority (>90%) of the offspring of the Sellafield workforce who were born outside this village, led to the Plaintiffs suggesting that the explanation for this phenomenon was an interaction between paternal radiation exposure and some factor (“Factor X”) which was essentially confined to
CHILDHOOD LEUKAEMIA AND RADIATION
Seascale. This explanation proposed that the irradiation of the testes predisposed offspring to the action of some factor which affected the individual after conception, and that “Factor X” operated predominantly, but not exclusively, in Seascale. “Factor X” was suggested to be an infectious agent such as a virus, or environmental radioactivity. Although it is not unusual in epidemiology to find two risk factors which act together more than additively (for example, radon and cigarette smoke as risk factors for lung cancer), it is most unusual to find factors acting together more than multiplicatively. G.R. Howe, in expert evidence, demonstrated that the restriction of the paternal dose association to Seascale was so extreme that any interaction between paternal preconceptional irradiation and “Factor X” would have to be substantially greater than multiplicative, and that such an interaction was implausible. The need for the interaction to be appreciably supramultiplicative in order to explain the Seascale cluster was later confirmed by Little et al.63 Further evidence against an interactive process being capable of explaining the restriction of the paternal preconceptional irradiation association to Seascale was provided by the Egremont North childhood leukaemia excess.67 One of the authors (RW) presented evidence to show that none of the four cases which occurred in Egremont North was associated with paternal preconceptional irradiation, even though the collective dose of such irradiation in this electoral ward was greater than in Seascale. If some factor in Egremont North is increasing the risk of childhood leukaemia, then it would be most remarkable that this factor did not affect those children who are supposedly most at risk, that is, the children whose fathers were exposed to radiation prior to conception.
That paternal preconceptional irradiation should take part in an exceptionally strong interaction with “Factor X” in Seascale, but not play any role in an excess some 10 km away seems highly unlikely. These findings were later reported by Wakeford and Parker.68
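The distinction drawn above between additive and multiplicative joint action can be made concrete with a small numerical sketch. The relative risks below are hypothetical, chosen purely for illustration; they are not estimates from any of the studies discussed:

```python
# Hypothetical relative risks for two independent risk factors
# (illustrative values only, not estimates from the studies discussed).
rr_a = 3.0   # relative risk from factor A alone
rr_b = 4.0   # relative risk from factor B alone

# Additive interaction: the excess relative risks add.
rr_additive = 1 + (rr_a - 1) + (rr_b - 1)

# Multiplicative interaction: the relative risks multiply.
rr_multiplicative = rr_a * rr_b

print(rr_additive)        # 6.0
print(rr_multiplicative)  # 12.0

# A supra-multiplicative interaction -- the kind that would be needed to
# confine the paternal dose association to Seascale -- would require the
# joint relative risk to exceed even rr_multiplicative, which is rarely,
# if ever, observed in epidemiology.
```

This is why the Howe evidence carried such weight: an interaction already greater than multiplicative is exceptional, and one "substantially greater" lacks any epidemiological precedent.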
RADIATION GENETIC RISK
The causal hypothesis to account for the statistical association between paternal preconceptional irradiation and childhood leukaemia proposed that irradiation of fathers had induced mutations in sperm which had been transmitted to their offspring and subsequently resulted in the development of leukaemia. The biological plausibility of this proposed mechanism came under considerable scrutiny. Although this was only one of the criteria proposed by Sir Austin Bradford Hill in 1965 for establishing causation epidemiologically, considerable advances have been made in biological knowledge since that time, and the necessity of there being an acceptable causal mechanism to explain the association was considered by both epidemiologists and geneticists to be of prime importance. Despite extensive studies on the offspring of the A-bomb survivors, no evidence has emerged of any adverse health effects attributable to inherited radiation-induced mutations. Indeed, no studies of exposed human populations have indicated unequivocally a radiation-induced heritable effect. In discussing their findings Gardner et al.35 noted that their report was “the first of its kind with human data” and this point was also made in the accompanying editorial in the British Medical Journal.69 Because of this lack of human information, genetic risk estimates have had to rely heavily on mouse data derived in controlled experiments using relatively large numbers of animals. The specific-locus test developed by Russell,70 which uses 7 recessive loci, has been used extensively for studying the qualitative and quantitative effects of radiation. Such experiments suggest a genetic doubling dose for low LET acute irradiation of 0.4 Sv.71 A significant finding to emerge from these early studies was that the genetic damage was greater by a factor of 3 when a given dose was delivered acutely rather than chronically.72 Applying a dose rate reduction factor
E. JANET TAWN AND RICHARD WAKEFORD
of 3 to the mouse data for acute exposures gives a doubling dose of 1.2 Sv for chronic low LET radiation. The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) have extrapolated this estimate to humans and use a doubling dose of 1 Sv, which is applied to incidence data on human genetic conditions to obtain risk estimates.73,74 The International Commission on Radiological Protection (ICRP) apply the risk estimates generated by UNSCEAR to derive appropriate protection practices.42 When considering first generation effects in a reproductive population, the risk for Mendelian plus chromosomal disorders from 10 mSv of low LET radiation exposure is estimated to be 18 per million live births. A further 12 cases of disease with a multifactorial origin are expected, of which >90% will be of adult onset. Although no adverse effects were seen in Japan, a recent review by Neel et al.75 has combined the analyses of a range of endpoints, each weighted for its heritable component, and shown that these combined data are consistent with a minimal doubling dose of between 1.7 and 2.2 Sv. Following examination of the Japanese parental exposures the authors have opted for a dose rate reduction factor of 2 as being most appropriate for these conditions and thus derive a doubling dose for chronic ionizing radiation of 3.4 to 4.4 Sv. Disturbed by the discrepancy between this estimate and the 1 Sv used for risk estimation, Neel and Lewis undertook a reanalysis of the mouse data for 8 different genetic endpoints and derived a doubling dose of 1.35 Gy.76 When a dose rate reduction factor of 3 is applied this gives an estimate for the doubling dose for chronic radiation of 4.05 Gy.
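The chain of doubling-dose estimates described above is simple arithmetic; the following sketch (values as cited in the text, variable names mine) reproduces it:

```python
# Doubling-dose estimates as cited in the text (variable names mine).

# Mouse 7-locus data: acute low LET doubling dose of ~0.4 Sv.
acute_dd = 0.4            # Sv
drrf = 3                  # dose rate reduction factor, acute -> chronic
print(round(acute_dd * drrf, 2))   # 1.2 Sv, rounded to 1 Sv by UNSCEAR

# Neel et al.: minimal acute doubling dose of 1.7-2.2 Sv from the
# Japanese data, with a dose rate reduction factor of 2.
print((1.7 * 2, 2.2 * 2))          # (3.4, 4.4) Sv for chronic exposure

# Neel and Lewis reanalysis of mouse data (8 endpoints): 1.35 Gy acute.
print(round(1.35 * drrf, 2))       # 4.05 Gy for chronic exposure
```

The spread between the 1 Sv figure used for protection purposes and the 3.4-4.4 Sv range is the discrepancy that motivated the Neel and Lewis reanalysis.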
This study also noted that there are good reasons to believe that the 7 specific recessive loci originally chosen by Russell are particularly mutable and more recent comparative studies have shown that they are 10 times more sensitive per locus than dominant cataract mutations.77 ICRP have noted the work of Neel and Lewis and acknowledge that the use of a doubling dose of 1 Sv for determining genetic risk from low dose rate low LET radiation in man is conservative.78
GENETIC RISK FROM SELLAFIELD OCCUPATIONAL RADIATION EXPOSURE
The genetic risk imposed by occupational radiation exposure on workers at Sellafield was the subject of expert evidence for the trial and has since been published.79,80,81 The study by Parker et al.58 revealed that during 1950-1989 a total of 9256 children were born in Cumbria to fathers who had been occupationally exposed to radiation at Sellafield. This group of men had a collective preconceptional dose of 539 person Sv, giving a mean dose of 58 mSv. If the ICRP risk estimate42,78 is applied, the expectation is of approximately 1 excess case of Mendelian plus chromosomal disorders in this population of children. Since the background frequency of such disorders is 16 300 per million live births (10 000 dominant, 2500 recessive, 3800 chromosomal73,74), this one extra case would be occurring in addition to approximately 150 spontaneous cases. A further contribution of <1 case of severe childhood disease with multifactorial origin would occur against an estimated background of about 185 births (2%) suffering from major congenital abnormalities of multifactorial etiology. Clearly any genetic effect in this population of 9256 children will not be discernible against statistical fluctuations in the background rate. Furthermore, no genetic effect is going to be detectable in the subgroup of 774 children born in Seascale to fathers with preconceptional irradiation, for whom the dose profile is similar. The collective paternal preconceptional dose for the Seascale children is 38 person Sv and if the 5 leukaemia cases in this group are attributed to this dose the implied mutation rate, assuming involvement of only one locus, is per locus per Sv. A
considerable amount of experimental data is available on the induction of mutations by radiation, both in vitro and in vivo. These indicate induced rates which range from 1 to per locus per Sv.82,83,84 The rate deduced from the Seascale data is therefore 3 to 5 orders of magnitude greater. Postulating the involvement of a number of different genes with typical radiosensitivity will not explain this difference, since this would necessitate that the majority of the functional genes in man (50 000 - 100 000) could mutate in the germline and result in the single endpoint of leukaemia in the subsequent offspring. To circumvent the difficulty of having to confine a radiation-induced heritable mechanism of leukaemia induction to Seascale, it has been suggested, as discussed earlier, that preconceptional irradiation could be interacting with a postconceptional co-factor (“Factor X”) specific to the village of Seascale. However, since the proposed initiating or predisposing event is an inherited genetic change, the incidence of leukaemia, at the very least, represents the mutation rate of a dominant gene with complete penetrance. If, however, such a mutation only confers a modest degree of predisposition and not all those who inherit a defective gene contract leukaemia, then an even greater underlying mutation rate has to be implied.

HERITABILITY OF LEUKAEMIA

The Gardner hypothesis postulates an inherited etiology for the excess of childhood leukaemia in Seascale. However, unlike such childhood malignancies as retinoblastoma and Wilms’ tumour, where inheritance of a defective tumour suppressor gene confers a high risk of malignancy, evidence has not emerged for a similar mechanism for leukaemia.
Although childhood leukaemia is 10 times more common than retinoblastoma, which exhibits well established familial patterns, trawling of registers has revealed only a few family clusters of childhood leukaemia, with only one apparent parent-to-offspring transmission.85 When considered against background rates in large populations this could be a chance finding. An increase in leukaemia in the offspring of patients who have received treatment for cancer could be indicative of genetic transmission, either as a result of an already preexisting familial predisposing gene or resulting from a new germline mutation induced by radiotherapy and/or chemotherapy. In fact, no such increase has been observed either for children of cancer patients generally or for children of survivors of leukaemia or non-Hodgkin’s lymphoma.86,87 Further evidence arguing against an inherited predisposing gene for leukaemia comes from studies of childhood cancers in different immigrant groups. Characteristic racial incidence rates are maintained for such cancers as retinoblastoma and Wilms’ tumour, whereas the incidence of leukaemia is associated with country of residence rather than ethnic origin, suggesting an environmental rather than heritable etiology.88 One observation which, when first evaluated, was thought to point to a heritable component is the high rate of concordance for leukaemia in monozygotic twins. This, in the main, is confined to leukaemias in the first year of life and is attributable not to an inherited mutation but to initiating events occurring in one twin in utero, with subsequent transplacental passage to the other.89 The molecular nature of the genetic change in the MLL (or HRX) gene which characterises infant null acute lymphoblastic leukaemia has been analysed in three pairs of twins with the disorder.
The rearrangement, which is not constitutional, was found to be unique to each twin pair, thus providing evidence for the mutational event occurring during pregnancy.90 The high rate of concordance in monozygotic twins for infant leukaemia is the result of all steps towards malignancy occurring in utero. For leukaemia in older children, although the initiating genetic change
may occur prenatally and become established in both twins, different postnatal events result in a much lower rate of concordance. Nevertheless, leukaemia is a genetic disease, and mutational changes associated with leukaemia are readily identified by the presence of chromosomal rearrangements.91 These changes are acquired somatic mutations, many of which are associated with the activation of proto-oncogenes. Chromosome rearrangements move such genes from their normal regulatory control sequences, placing them next to promoting regions, thereby causing increased activity which results in a breakdown of proliferative constraints in the cell. Activated oncogenes invariably act in a dominant manner and thus only one of a pair of proto-oncogenes in a cell needs to be altered for such an effect to occur. There is no evidence from human studies to suggest that chromosome rearrangements resulting in proto-oncogene activation can be transmitted through the germ line. Genetic engineering has produced viable transgenic mice with constructs of oncogenes plus certain specific promoters incorporated into the constitutional genome, but these have been produced for the study of the sequential steps of tumour development and the effect of various carcinogenic agents on the progression from the predisposed cell to malignancy, and not for the study of heritability.92 Indeed, attempts to introduce the bcr gene with its natural promoter failed to produce viable offspring,93 and activated oncogenes in the germ cell will most likely disrupt normal fetal development.94,95 This considerable evidence, which refutes the central tenet of the Gardner hypothesis, i.e. that leukaemia can be caused by the inheritance of a mutated dominant gene, was raised by those questioning the biological plausibility of the causative mechanism suggested by Gardner et al.35 and was addressed in detail by H.J. Evans and J.V. Neel during the trial.
There are, however, a number of rare, well defined, recessively inherited syndromes, such as ataxia telangiectasia and Fanconi’s anaemia, in which leukaemia occurs as one of a range of clinical endpoints.96 Such diseases are characterised by DNA repair deficiency, chromosome instability and reduced immunocompetence, all factors which are thought to contribute to the enhanced risk of the somatic induction and maintenance of leukaemia initiating events. Leukaemia is also one of a range of malignancies seen in the Li-Fraumeni syndrome, a familial syndrome in which enhanced cancer predisposition is associated with inheritance of a defective tumour suppressor gene.97 However, consideration of the Seascale cases in the Gardner Report, and also of the two cases brought before the Court, indicates nothing to suggest that any are part of a wider syndrome, and they appear to be indistinguishable clinically from other sporadic cases of leukaemia/lymphoma. This would seem to rule out any possibility that the effect reported by Gardner et al.35 was due to a recessive mutation in individuals already heterozygous for a cancer predisposing syndrome. In any event it would seem unlikely that the population of Seascale should contain a disproportionate number of such individuals. In the event, a review of the evidence led to the acceptance in the trial that the heritable component of childhood leukaemia is likely to be small. Neel has suggested that it is probably no more than 5%,98 and since the incidence of childhood leukaemia in the United Kingdom is about 1 in 1500, the background incidence of inherited leukaemia is therefore unlikely to be greater than 1 in 30 000. However, if the 5 cases of leukaemia amongst the 774 births in Seascale associated with paternal preconceptional irradiation are attributable to an inherited etiology, the incidence in this group is 1 in 150. This implies a 200-fold increase in a dominantly inherited disorder in these Seascale children.
Such a radiation-induced effect would not be expected to be confined to one outcome, and if the same increase was operating for other gene mutations then an epidemic of genetic disease would be expected. If the same effect was being induced in the rest of West Cumbria, which has 13 times the Seascale collective paternal preconceptional dose and 11 times the number of births associated with paternal irradiation exposure,58 such an increase is not
likely to have gone unnoticed. Although no large scale study has yet been reported, Jones and Wheater99 found no increase in abnormal obstetric outcomes in Seascale. It is also inconceivable that such an effect would not have been observed in the offspring of the A-bomb survivors; indeed, it should also be detectable in areas of high background radiation.
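The incidence comparison underlying the 200-fold figure follows directly from the numbers cited above; a sketch of the arithmetic (rounding as in the text):

```python
# Figures as cited: UK childhood leukaemia incidence ~1 in 1500, with a
# heritable component of at most ~5% (Neel's estimate).
background = 1 / 1500
heritable_fraction = 0.05

inherited_background = background * heritable_fraction
print(round(1 / inherited_background))   # 30000 -> about 1 in 30 000

# Seascale: 5 leukaemia cases among the 774 births associated with
# paternal preconceptional irradiation.
observed = 5 / 774
print(round(1 / observed))               # 155 -> roughly 1 in 150

# Implied increase in a dominantly inherited disorder:
print(round(observed / inherited_background))   # ~194, the ~200-fold figure
```

The conclusion of the text follows: an increase of this size in a dominantly inherited disorder could not plausibly be confined to one outcome in one village.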
ETIOLOGY AND HETEROGENEITY OF CASES

Diagnostic information for Reay and Hope was examined for the purposes of the trial and the diagnoses agreed. For Dorothy Reay, who died in 1962, details were sparse and only a few poor quality haematological slides were available for examination. Nevertheless it was agreed that this was a case of infant null acute lymphoblastic leukaemia. Evidence emerging around the time of the trial, and subsequently confirmed, identified a specific alteration of chromosome 11q23 involving the MLL (or HRX) gene in the vast majority of such cases.100 The mutational origin of this leukaemic event probably occurs in utero and it is not therefore an inherited genetic change of the type postulated by the Gardner hypothesis. Furthermore, Dorothy Reay was the one case of leukaemia amongst 1588 children born in West Cumbria outside Seascale to fathers who had received a preconceptional dose >100 mSv,79 not an unexpected finding. More diagnostic information was available for Vivien Hope, including pathological material. During the course of the disease there had been no bone marrow involvement and it was agreed that this was a case of non-endemic (sporadic) Burkitt’s lymphoma. The etiology of Burkitt’s lymphoma has been well characterised.101 An important event is the translocation between chromosome 8 and chromosome 14 which results in myc oncogene activation. Unfortunately chromosome analysis was not undertaken on tumour tissue from Hope, but in view of the diagnosis the expectation was that this event would have occurred. A hallmark of endemic Burkitt’s lymphoma is the association with early infection by Epstein-Barr virus (EBV). The polyclonal proliferation of EBV-infected B cells remains unchecked, particularly in areas associated with malaria, and a transformed clone arises as a consequence of myc gene activation in one cell of this increased cell population.
This somatic chromosome rearrangement is thought to arise as an accident of the normal developmental process of immunoglobulin gene rearrangement and is crucial to the pathogenesis of Burkitt’s lymphoma. Although the role of EBV in the etiology of endemic Burkitt’s lymphoma is well established, the co-factor fulfilling a similar role in the etiology of non-endemic (sporadic) Burkitt’s lymphoma is unknown. Burkitt’s lymphoma has, however, been reported in patients who are immuno-suppressed as a result of HIV infection.102 Whatever the agent that mimics the growth-promoting action of Epstein-Barr virus in non-endemic (sporadic) Burkitt’s lymphoma may be, its action will likewise be somatic. Familial cases of Burkitt’s lymphoma have been reported in males with X-linked immunodeficiency syndrome,103 a disorder characterised by a predisposition to EBV related disease. There was, however, no evidence that Hope suffered from this or any other inherited immunodeficiency disorder. Examination of the diagnostic details of the Seascale born leukaemia cases with paternal preconceptional irradiation in the Gardner Report indicates further diversity:59 two young children with an unknown subtype of ALL, one child with null AL, one young child with AML and a young adult with CML. This heterogeneity must raise questions when searching for a common etiology and distinct causative mechanism for the Seascale cases and the two cases which were the subject of the legal action.
ANIMAL STUDIES

Support for a plausible mechanism underlying the Gardner hypothesis had originally been drawn from the experimental work by Nomura on radiation-induced transgenerational carcinogenesis in mice. Nomura has recently reviewed his studies, which date back to the 1970s.104 The bulk of this work concerns the induction of lung adenomas in offspring of irradiated male mice. Unfortunately the experimental details provided give rise to queries with respect to the use of concurrent controls and the number of independent matings used for the heritability experiments. Examination of the data105 reveals small numbers in each dose and germ cell stage group, and when lung adenomas are considered separately from total tumours the significance of the results is in doubt, at least for spermatogonial exposure. Furthermore, Nomura had previously noted that tumour incidence was related to body size,106 and yet there appears to have been no recognition of the possibility, raised by P.B. Selby in evidence in the trial, that the induction of dominant lethal mutations could have influenced litter size and hence the size of the offspring. To date no studies have confirmed Nomura’s findings. A recent reanalysis of data from a lifespan study undertaken at Oak Ridge showed no increase in tumour incidence in offspring of irradiated mice,107 and an attempt to repeat Nomura’s work by Cattanach et al., published after the trial,108 found that lung tumour incidence was not related to paternal radiation dose, nor were there significant differences between the germ cell stages irradiated. This latter work did, however, find that tumour rates in experimental and control groups varied in a cyclic way, thus emphasising the necessity of concurrent controls for this type of study.
In his more recent publications Nomura has reported data on leukaemia incidence in three different mouse strains.104,109 Numbers of leukaemias were small and in only one strain was there a significant increase following spermatogonial exposure. Even if taken at face value this increase, from 1 in 244 in the control offspring to 9 in 229 following 5.04 Gy spermatogonial exposure,109 implies an induced mutation rate of per Sv which is still orders of magnitude lower than that deduced from the Seascale leukaemia cases. Furthermore, the dose in these experiments was acute, and a dose rate factor of 3 should be applied for comparison with the chronic exposure received by the Seascale fathers. Transgenerational carcinogenesis has been observed following chemical exposure110 but the interpretation of these experiments is difficult111,112 and positive effects following preconceptional maternal exposure may in some cases be attributable to transplacental transfer of the carcinogen to the fetus rather than to induction of a germ cell mutation.
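The order of magnitude of the rate implied by Nomura's figures can be estimated directly from the numbers cited above. This is a rough sketch only; the treatment of the excess as a single per-generation rate and the equating of Gy with Sv for this low LET exposure are simplifying assumptions of mine:

```python
# Nomura's leukaemia figures as cited: 1 case in 244 control offspring
# versus 9 in 229 after 5.04 Gy acute spermatogonial exposure.
control_rate = 1 / 244
exposed_rate = 9 / 229
dose = 5.04   # Gy; treated here as ~Sv for low LET exposure (assumption)

induced_rate = (exposed_rate - control_rate) / dose
print(induced_rate)   # of the order of 7e-3 per Sv

# For comparison with the chronic exposure of the Seascale fathers,
# a dose rate factor of 3 would lower this estimate further:
print(induced_rate / 3)
```

Even this face-value estimate, before any dose rate correction, falls well short of the rate deduced earlier from the Seascale leukaemia cases.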
DOSE RATE EFFECTS

The biological effect of a given dose can be greatly influenced by the dose rate. For low LET radiation, as the dose rate is lowered and the exposure time becomes more protracted, the effect of a given dose is reduced. A dose and dose rate effectiveness factor is therefore commonly applied when deriving risk estimates for both somatic and genetic (i.e. heritable) risks for low dose, low dose rate, low LET radiation from high dose acute exposure data. For high LET radiation little or no dose rate effect is observed, this being consistent with the damage caused by high LET radiation resulting from a single densely ionising track. In contrast, damage caused by low LET radiation can be the result of the interaction of more than one initial lesion, and protraction of the dose can allow the separation of these lesions in time, thus allowing repair to occur (for a review see Hall113).
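The mechanistic contrast described above is conventionally captured by the linear-quadratic dose-response model. This is a standard formulation from the radiobiology literature (see, for example, Hall113), not a formula given in this text:

```latex
% Effect E of an acute dose D of low LET radiation:
E = \alpha D + \beta D^{2}
% The \alpha D term represents damage from single tracks and is
% independent of dose rate; the \beta D^{2} term represents damage from
% the interaction of two separate lesions. Protracting the exposure
% allows repair between lesions, attenuating the quadratic term by a
% factor G (the Lea-Catcheside factor):
E = \alpha D + G \beta D^{2}, \qquad 0 < G \le 1
% For high LET radiation the single-track \alpha D term dominates, so
% little or no dose rate effect is expected.
```

On this picture, lowering the dose rate reduces the effect of a given low LET dose, which is the basis of the dose and dose rate effectiveness factor described above.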
In recent years evidence has emerged that for high LET radiations protracting the exposure may lead to an increase in biological effect. This led to speculation that the association between paternal exposure and leukaemia observed in Seascale could be due to such an inverse dose rate effect. However, the majority of these studies have involved oncogenic transformation assays on somatic cells in vitro, and in general the enhanced effect seldom exceeds a factor of 2. Brenner and Hall114 have developed a model which is consistent with these experimental data and indicates that the results are heavily influenced by cell cycle kinetics. When this model is applied to the agreed neutron doses of the two fathers in the legal cases, i.e. 11 mGy (220 mSv) and 3.5 mGy (70 mSv), dose rate effectiveness factors of 3.5 and 1.5 respectively are derived. Furthermore, examination of the doses of the fathers of the Seascale cases indicates no neutron dose greater than 5 mSv, and on the Brenner-Hall model no inverse dose rate effect would be expected. There is thus no evidence that the large discrepancy between the Seascale findings and the A-bomb data can be reconciled in this way. Perhaps more importantly, in view of the postulated germline effect, in vivo experiments on neutron irradiation of mouse spermatogonia115 demonstrated an inverse dose rate effect only at high doses, and this can be explained by a greater incidence of cell killing.
UNCONVENTIONAL GENETIC MECHANISMS

In the absence of evidence pointing to conventional mutation as an explanation for the Gardner hypothesis, consideration has been given to the possibility of other radiation-induced genetic or epigenetic events which could be responsible.116 These were examined during the trial. One such phenomenon is transposon mediated mutation. Transposable genetic elements can change their positions within the genome and have been shown to be implicated in mutations of the haemophilia A gene.117 However, in man, such events are extremely rare and there is no evidence, to date, that the transposition of DNA elements is induced by radiation. Further speculation has centred round the possible involvement of sequence elements characterised by tandem repeats. These minisatellite regions occur throughout the genome and are highly polymorphic, making them ideal markers for personal identification. The high rate of spontaneous mutation resulting in a change in repeat copy number has led to suggestions that minisatellite mutations could be used for population monitoring of germline mutations. Minisatellite mutations have been shown to occur at high frequency in the offspring of male mice following spermatid irradiation, but the evidence for spermatogonial irradiation is less persuasive.118,119 Following the Judgment, Dubrova et al.120 found the frequency of minisatellite mutations to be twice as high in families from Belarus exposed to contamination from the Chernobyl accident compared to a UK control group. Mutation rates correlated with levels of contamination, thus giving support to a causal link, although it was recognised that other environmental contaminants could also be involved. The authors acknowledge that the limited target size would rule out the possibility that the increase in mutations is due to direct damage to minisatellite DNA, a more likely explanation being that non-targeted effects stimulate minisatellite instability.
A similar study in families of the Japanese A-bomb survivors has, however, failed to show increased mutation frequencies.121 It has yet to be established if minisatellite mutations play a role in the causation of human disease. Krontiris122 has suggested that minisatellite mutations could affect gene expression, and thus cell transformation, but the role of minisatellite DNA in normal cellular function remains unclear. In any event, in the context of the Seascale cases, any proposed role in transgenerational radiation-induced leukaemia is unlikely to have been confined to one village.
The role of fragile sites in malignancy has also come under scrutiny. These regions of chromosomal instability are revealed under certain in vitro cell culture conditions and, although an association with the sites of somatic chromosome exchanges seen in leukaemia has been observed, no functional relationship has been demonstrated.123 The sequencing of the fragile site associated with X-linked mental retardation has shown the disease to be associated with an amplification of repeat DNA sequences.124 Expansion of trinucleotide repeat sequences has also been observed in other inherited diseases, e.g. Huntington’s chorea and myotonic dystrophy,124 but to date there is no evidence of an association with the haemopoietic disorders associated with leukaemia. Nor is there any suggestion that radiation can induce fragile sites in germ cells or preferentially damage pre-existing fragile sites. A recently recognised epigenetic phenomenon is genomic imprinting. This acts at the gamete level and results from parental origin causing differential expression of the two alleles of a gene. Imprinting has been implicated in a number of human diseases, the best documented being deletion of chromosome 15q11-13, which results in Prader-Willi syndrome when the origin of the deleted chromosome is paternal and Angelman syndrome when the deleted chromosome originated maternally.125 Parent-of-origin effects have been reported for the Philadelphia chromosome,126 a somatic exchange which occurs in chronic myeloid leukaemia, but this interpretation has now been refuted127 and whether imprinting plays a role in leukaemia initiation is still unknown. In any event, to date, imprinting has not been shown to be either modified or induced by radiation. Susceptibility to spontaneous and induced somatic mutations is likely to be influenced by a range of mechanisms affecting, for example, the fidelity of DNA replication and repair, genomic surveillance and chromatin configuration.
Similarly, the induction of genomic instability or a mutator phenotype, which then enhances the likelihood of further progression to malignancy, will be affected by factors involved in apoptosis (programmed cell death) and the maintenance of genomic integrity. There is no evidence to suggest, however, that inherited radiation-induced genetic changes affecting any of these processes would occur at the rate, orders of magnitude above conventional estimates, needed to explain the excess of leukaemia cases in Seascale, nor that such events would result only in haemopoietic malignancies.
THE LEGAL JUDGMENT AND AFTERMATH

Judgment was given in the cases of Hope and Reay v BNFL in October 1993.128 The Judge concluded: “In my judgment, on the evidence before me, the scales tilt decisively in favour of the Defendants and the Plaintiffs, therefore, have failed to satisfy me on the balance of probabilities that ppi [paternal preconceptional irradiation] was a material contributory cause of the Seascale excess, or, it must follow, of (a) the leukaemia of Dorothy Reay or (b) the non-Hodgkin’s lymphoma of Vivien Hope”, and “In the result, there must be judgment for the Defendants”. A review of the evidence has been published.129 Although arguably the Court is not the best place to settle scientific controversy, the legal cases focused the minds of a number of highly reputable scientists and accelerated the scientific investigation of the Gardner hypothesis. Subsequent events have not challenged the Judge’s decision but rather reinforced the view that paternal preconceptional irradiation cannot be responsible for the excess of childhood leukaemia in Seascale. In a comprehensive review, Doll et al.89 stated: “In our opinion, the hypothesis that irradiation of the testes causes detectable risk of leukaemia in subsequent offspring cannot be sustained”, and “We conclude that the association between paternal irradiation and leukaemia is largely or wholly a chance finding”. In contrast, epidemiological evidence
for a role of infection in childhood leukaemia has grown. In particular, the hypothesis put forward by Kinlen130,131 that unusual population mixing induces unusual patterns of infection which increase the risk of childhood leukaemia has obtained considerable support. That such a process might apply to Seascale, an extremely unusual community in terms of social class composition, isolation and population mixing, is highly attractive.

The 1993 report from UNSCEAR,74 which considered hereditary effects of radiation, reviewed radiation genetic risks in the context of the Gardner Report and concluded that, since no epidemic of genetic diseases has been reported around Sellafield or other nuclear sites, it is highly unlikely that the conclusions of Gardner et al.35 are correct. In 1994 UNSCEAR132 reviewed epidemiological studies concerning the incidence of leukaemia around nuclear sites, thus adding to the assessment of the genetic implications of the Gardner hypothesis, and concluded: "A tentative explanation based on an association of childhood leukaemia and paternal exposure has largely been discounted following extensive investigations of the Sellafield area and elsewhere and because there is no sound genetic basis for this effect".

The COMARE Fourth Report133 has re-examined the Seascale leukaemia cluster in the light of epidemiological and dosimetric data and the advances in radiobiology and radiation genetics that have emerged since the report of the Black Advisory Group.1 The Committee again confirmed the significantly raised level of malignancies among young people in Seascale reported in the Black Report for the period 1963-83 and noted that this has continued in the later period of study, 1984-92. This is primarily due to an excess of lymphoid leukaemia and non-Hodgkin's lymphoma.
Once more the NRPB has carried out a detailed radiological assessment for the Seascale population,134 leading COMARE to confirm that discharges from Sellafield could not account for the excess. In reviewing the hypothesis put forward by Gardner et al.,35 that paternal occupational radiation exposure had resulted in germ cell mutations leading to a material increase in the risk of childhood leukaemia, the Committee stated: "We have not found any epidemiological study elsewhere to support Gardner's findings in Seascale in relation to preconception radiation effects", "the level of risk implied by this explanation is inconsistent with the radiation doses actually received via occupational exposure and current estimates of genetic risk", and "We consider that ppi [paternal preconceptional irradiation] cannot account for the Seascale childhood leukaemia excess." Whilst recognising that questions still remain on microdosimetry and mechanisms of biological response, particularly in relation to radionuclides incorporated in the germ cells, and recommending further research in these areas, the Committee acknowledged that it is unclear how these uncertainties could apply only to the village of Seascale.

The COMARE Fourth Report133 also examined and dismissed a role for environmental exposure to chemicals. The Committee recognised that Kinlen's hypothesis of population mixing,130 which facilitates an infectious mechanism, could play a role in the excess of childhood leukaemia in Seascale, but was not convinced that this could be the sole cause. The possibility that a combination of factors might account for the increase in cancer in Seascale was also examined but, while this could not be ruled out, it was difficult to envisage a situation whereby such an interaction would be unique to Seascale. The cause of the Seascale leukaemia cluster remains unresolved.
The Yorkshire Television programme in 1983 focused attention on the nearby Sellafield installation and, although radioactive discharges were discounted as a causal factor, when Gardner et al.35 reported the association with paternal preconceptional irradiation this appeared to substantiate the link. The emotive nature of the media coverage caused considerable anxiety to the Sellafield workforce, who were suddenly informed that, in doing their daily work and being exposed to radiation at levels within the limits recommended by the ICRP and NRPB, they were apparently putting their children's health materially at risk.
Unfortunately, the media sensationalism also influenced the scientific literature, and the statistical association between paternal preconceptional irradiation and childhood leukaemia was treated by some as being causal when a more rational appraisal would have demonstrated the implausible implications of this view. As a consequence there was a considerable amount of personal anxiety, and an expensive and lengthy Court action. It could also be argued that concentration on Seascale has deflected resources from addressing the wider issues of leukaemia aetiology and causation. For the sake of science and humanity it is to be hoped that in future, when a study yields a most unusual result, it will be interpreted with the caution it deserves until the full implications have been evaluated and it receives independent confirmation.
ACKNOWLEDGEMENTS

We thank Lynda Buckland, Anne Poyner and Chris Beresford for preparation of the manuscript.
REFERENCES

1. Independent Advisory Group (Chairman: Sir Douglas Black), Investigation of the Possible Increased Incidence of Cancer in West Cumbria, HMSO, London (1984).
2. J.W. Stather, A.D. Wrixon, J.R. Simmonds, The risks of leukaemia and other cancers in Seascale from radiation exposure, NRPB-R171 (1984).
3. D. Crouch, Science and trans-science in radiation risk assessment: child cancer around the nuclear fuel reprocessing plant at Sellafield, Sci. of the Total Env., 53: 201-216 (1986).
4. J.W. Stather, J. Dionian, J. Brown, T.P. Fell, C.R. Muirhead, The risks of leukaemia and other cancers in Seascale from radiation exposure, Addendum to report R171, NRPB-R171 Addendum (1986).
5. Committee on Medical Aspects of Radiation in the Environment (COMARE), First Report, The implications of the new data on the releases from Sellafield in the 1950's for the conclusions of the Report on the Investigation of the Possible Increased Incidence of Cancer in West Cumbria, HMSO, London (1986).
6. M.J. Gardner, A.J. Hall, S. Downes, J.D. Terrell, Follow up study of children born elsewhere but attending schools in Seascale, West Cumbria (schools cohort), Br. Med. J., 295: 819-822 (1987).
7. M.J. Gardner, A.J. Hall, S. Downes, J.D. Terrell, Follow up study of children born to mothers resident in Seascale, West Cumbria (birth cohort), Br. Med. J., 295: 822-827 (1987).
8. M.A. Heasman, I.W. Kemp, J.D. Urquhart, R. Black, Childhood leukaemia in Northern Scotland, Lancet, 1: 266 (1986).
9. E. Roman, V. Beral, L. Carpenter, A. Watson, C. Barton, H. Ryder, D.L. Aston, Childhood leukaemia in the West Berkshire and Basingstoke and North Hampshire District Health Authorities in relation to nuclear establishments in the vicinity, Br. Med. J., 294: 597-602 (1987).
10. Committee on Medical Aspects of Radiation in the Environment (COMARE), Second Report, Investigation of the possible increased incidence of leukaemia in young people near the Dounreay nuclear establishment, Caithness, Scotland, HMSO, London (1988).
11. Committee on Medical Aspects of Radiation in the Environment (COMARE), Third Report, Report on the incidence of childhood cancer in the West Berkshire and North Hampshire Area, in which are situated the Atomic Weapons Research Establishment, Aldermaston and the Royal Ordnance Factory, Burghfield, HMSO, London (1989).
12. M.D. Hill, J.R. Cooper, Radiation doses to members of the population of Thurso, NRPB-R195 (1986).
13. J. Dionian, C.R. Muirhead, S.L. Wan, A.D. Wrixon, The risks of leukaemia and other cancers in Thurso from radiation exposure, NRPB-R196 (1986).
14. J. Dionian, S.L. Wan, A.D. Wrixon, Radiation doses to members of the public around AWRE Aldermaston, ROF Burghfield and AERE Harwell, NRPB-R202 (1987).
15. S.L. Wan, A.D. Wrixon, Radiation doses from coal-fired plants in Oxfordshire and Berkshire, NRPB-R203 (1988).
16. P.J. Cook-Mozaffari, F.L. Ashwood, T. Vincent, D. Forman, M. Alderson, Cancer incidence and mortality in the vicinity of nuclear installations, England and Wales 1959-80, Studies on Medical and Population Subjects No. 51, HMSO, London (1987).
17. D. Forman, P. Cook-Mozaffari, S. Darby, G. Davy, I. Stratton, R. Doll, M. Pike, Cancer near nuclear installations, Nature, 329: 499-505 (1987).
18. R. Wakeford, K. Binks, D. Wilkie, Childhood leukaemia and nuclear installations, J. R. Statist. Soc. A, 152: 61-86 (1989).
19. B. MacMahon, Leukaemia clusters around nuclear facilities in Britain, Cancer Causes Control, 3: 283-288 (1992).
20. P. Cook-Mozaffari, S.C. Darby, R. Doll, D. Forman, C. Hermon, M.C. Pike, T.J. Vincent, Geographical variation in mortality from leukaemia and other cancers in England and Wales in relation to proximity to nuclear installations, Br. J. Cancer, 59: 476-485 (1989).
21. P. Cook-Mozaffari, S. Darby, R. Doll, Cancer near potential sites of nuclear installations, Lancet, 2: 1145-1147 (1989).
22. M.J. Gardner, Review of reported increases of childhood cancer rates in the vicinity of nuclear installations in the UK, J. R. Statist. Soc. A, 152: 307-325 (1989).
23. J.W. Stather, R.H. Clarke, K.P. Duncan, The risk of childhood leukaemia near nuclear establishments, NRPB-R215, HMSO, Chilton (1988).
24. T.E. Wheldon, The assessment of risk of radiation-induced childhood leukaemia in the vicinity of nuclear installations, J. R. Statist. Soc. A, 152: 327-339 (1989).
25. S.C. Darby, R. Doll, Fallout, radiation doses near Dounreay, and childhood leukaemia, Br. Med. J., 294: 603-607 (1987).
26. S.C. Darby, J.H. Olsen, R. Doll, B. Thakrar, P. deN. Brown, H.H. Storm, L. Barlow, F. Langmark, L. Teppo, H. Tulinius, Trends in childhood leukaemia in the Nordic countries in relation to fallout from atmospheric nuclear weapons testing, Br. Med. J., 304: 1005-1009 (1992).
27. S. Jablon, Z. Hrubec, J.D. Boice, Cancer in populations living near nuclear facilities: a survey of mortality nationwide and incidence in two states, JAMA, 265: 1403-1408 (1991).
28. C. Hill, A. Laplanche, Overall mortality and cancer mortality around French nuclear sites, Nature, 347: 755-757 (1990).
29. J-N. Hattchouel, A. Laplanche, C. Hill, Leukaemia mortality around French nuclear sites, Br. J. Cancer, 71: 651-653 (1995).
30. J. Michaelis, B. Keller, G. Haaf, E. Kaatsch, Incidence of childhood malignancies in the vicinity of (West) German nuclear power plants, Cancer Causes Control, 3: 255-263 (1992).
31. J.R. McLaughlin, E.A. Clarke, E.D. Nishri, T.W. Anderson, Childhood leukaemia in the vicinity of Canadian nuclear facilities, Cancer Causes Control, 4: 51-58 (1993).
32. L.A. Walter, B. Turnbull, G. Gustafsson, U. Hjlmars, B. Anderson, Detection and assessment of clusters of a disease and application to nuclear power plant facilities and childhood leukaemia in Sweden, Stat. Med., 14: 3-16 (1995).
33. J-F. Viel, D. Pobel, A. Carre, Incidence of leukaemia in young people around the La Hague nuclear waste reprocessing plant: a sensitivity analysis, Stat. Med., 14: 2459-2472 (1995).
34. J.F. Bithell, S.J. Dutton, G.J. Draper, N.M. Neary, Distribution of childhood leukaemias and non-Hodgkin's lymphoma near nuclear installations in England and Wales, Br. Med. J., 309: 501-505 (1994).
35. M.J. Gardner, M.P. Snee, A.J. Hall, C.A. Powell, S. Downes, J.D. Terrell, Results of case-control study of leukaemia and lymphoma among young people near Sellafield nuclear plant in West Cumbria, Br. Med. J., 300: 423-429 (1990).
36. S. Abrahamson, Childhood leukaemia at Sellafield, Radiat. Res., 123: 237-238 (1990).
37. H.J. Evans, Ionising radiations from nuclear establishments and childhood leukaemias - an enigma, BioEssays, 12: 541-549 (1990).
38. S.A. Narod, Radiation genetics and childhood leukaemia, Eur. J. Cancer, 26: 661-664 (1990).
39. J.V. Neel, Update on the genetic effects of ionizing radiation, JAMA, 266: 698-701 (1991).
40. K. Sankaranarayanan, Ionising radiation and genetic risks, IV, Current methods, estimates of risk of Mendelian disease, human data and lessons from biochemical and molecular studies of mutations, Mutat. Res., 258: 75-97 (1991).
41. K.F. Baverstock, DNA instability, paternal irradiation and leukaemia in children around Sellafield, Int. J. Radiat. Biol., 60: 581-595 (1991).
42. International Commission on Radiological Protection, 1990 Recommendations of the International Commission on Radiological Protection (ICRP Publication 60), Ann. ICRP, 21: 1-3 (1991).
43. A.B. Hill, The environment and disease: association or causation? Proc. R. Soc. Med., 58: 295-300 (1965).
44. T. Ishimaru, M. Ichimaru, M. Mikami, Leukaemia incidence among individuals exposed in utero, children of atomic bomb survivors, and their controls: Hiroshima and Nagasaki 1945-79, Radiation Effects Research Foundation, Hiroshima, Tech. Rep. 11-81 (1981).
45. K.S.B. Rose, Pre-1989 epidemiological surveys of low-level dose pre-conception irradiation, J. Radiol. Prot., 10: 177-184 (1990).
46. X.O. Shu, Y.T. Gao, L.A. Brinton, M.S. Linet, J.T. Tu, W. Zheng, J.F. Fraumeni, A population-based case-control study of childhood leukaemia in Shanghai, Cancer, 62: 635-644 (1988).
47. X. Shu, F. Jin, M.S. Linet, W. Zheng, J. Clemens, J. Mills, Y.T. Gao, Diagnostic X-ray and ultrasound exposure and risk of childhood cancer, Br. J. Cancer, 70: 531-536 (1994).
48. X. Shu, G.H. Reaman, B. Lampkin, H.N. Sather, T.W. Pendergrass, L.L. Robison, for the Investigators of the Children's Cancer Group, Association of paternal diagnostic X-ray exposure with risk of infant leukaemia, Cancer Epidemiol. Biomarkers Prev., 3: 645-653 (1994).
49. Y. Yoshimoto, J.V. Neel, W.J. Schull, H. Kato, M. Soda, R. Eto, K. Mabuchi, Malignant tumours during the first 2 decades of life in the offspring of atomic bomb survivors, Am. J. Hum. Genet., 46: 1041-1052 (1990).
50. M.P. Little, A comparison between the risks of childhood leukaemia from parental exposure to radiation in the Sellafield workforce and those displayed among the Japanese bomb survivors, J. Radiol. Prot., 10: 185-198 (1990).
51. M.P. Little, A comparison of the apparent risks of childhood leukaemia from parental exposure to radiation in the six months prior to conception in the Sellafield workforce and the Japanese bomb survivors, J. Radiol. Prot., 11: 77-90 (1991).
52. J.D. Urquhart, R.J. Black, M.J. Muirhead, L. Sharp, M. Maxwell, O.B. Eden, D.A. Jones, Case-control study of leukaemia and non-Hodgkin's lymphoma in children in Caithness near the Dounreay nuclear installation, Br. Med. J., 302: 687-692 (1991).
53. P.A. McKinney, F.E. Alexander, R.A. Cartwright, L. Parker, Parental occupations of children with leukaemia in West Cumbria, North Humberside and Gateshead, Br. Med. J., 302: 681-687 (1991).
54. J.R. McLaughlin, T.W. Anderson, E.A. Clarke, W. King, Occupational exposure of fathers to ionising radiation and the risk of leukaemia in offspring - a case-control study, AECB Report INFO-0424 (Ottawa: Atomic Energy Control Board) (1992).
55. J.R. McLaughlin, W.D. King, T.W. Anderson, E.A. Clarke, J.P. Ashmore, Paternal radiation exposure and leukaemia in offspring: the Ontario case-control study, Br. Med. J., 307: 959-966 (1993).
56. M.P. Little, A comparison of the risks of leukaemia in the offspring of the Japanese bomb survivors and those of the Sellafield workforce with those in the offspring of the Ontario and Scottish workforces, J. Radiol. Prot., 13: 161-175 (1993).
57. L.J. Kinlen, K. Clarke, A. Balkwill, Paternal preconceptional radiation exposure in the nuclear industry and leukaemia and non-Hodgkin's lymphoma in young people in Scotland, Br. Med. J., 306: 1153-1158 (1993).
58. L. Parker, A.W. Craft, J. Smith, H. Dickinson, R. Wakeford, K. Binks, D. McElvenny, L. Scott, A. Slovak, Geographical distribution of preconceptional radiation doses to fathers employed at the Sellafield nuclear installation, West Cumbria, Br. Med. J., 307: 966-971 (1993).
59. L.J. Kinlen, Can paternal preconceptional radiation account for the increase of leukaemia and non-Hodgkin's lymphoma in Seascale? Br. Med. J., 306: 1718-1721 (1993).
60. Health and Safety Executive, HSE Investigation of leukaemia and other cancers in the children of male workers at Sellafield, HSE, London (1993).
61. Health and Safety Executive, HSE Investigation of leukaemia and other cancers in the children of male workers at Sellafield: Review of the results published in October 1993, HSE, London (1994).
62. M. Andersson, K. Juel, Y. Ishikawa, H.H. Storm, Effects of preconceptional irradiation on mortality and cancer incidence in the offspring of patients given injections of Thorotrast, J. Natl. Cancer Inst., 86: 1866-1870 (1994).
63. M.P. Little, R. Wakeford, M.W. Charles, A comparison of the risks of leukaemia in the offspring of the Sellafield workforce born in Seascale and those born elsewhere in West Cumbria with the risks in the offspring of the Ontario and Scottish workforces and the Japanese bomb survivors, J. Radiol. Prot., 14: 187-201 (1994).
64. M.P. Little, R. Wakeford, M.W. Charles, M. Andersson, A comparison of the risks of leukaemia and non-Hodgkin's lymphoma in the first generation offspring of the Danish Thorotrast patients with those observed in other studies of parental preconception irradiation, J. Radiol. Prot., 16: 25-36 (1996).
65. E. Roman, A. Watson, V. Beral, S. Buckle, D. Bull, K. Baker, H. Ryder, C. Barton, Case-control study of leukaemia and non-Hodgkin's lymphoma among children aged 0-4 years living in West Berkshire and North Hampshire health districts, Br. Med. J., 306: 615-621 (1993).
66. G.J. Draper, C.A. Stiller, R.A. Cartwright, A.W. Craft, T.J. Vincent, Cancer in Cumbria and in the vicinity of the Sellafield nuclear installation, 1963-90, Br. Med. J., 306: 89-94, 761 (1993).
67. A.W. Craft, L. Parker, S. Openshaw, M. Charlton, J. Newall, J.M. Birch, V. Blair, Cancer in young people in the North of England 1968-85: analysis by census wards, J. Epidemiol. Comm. Health, 47: 109-115 (1993).
68. R. Wakeford, L. Parker, Leukaemia and non-Hodgkin's lymphoma in young persons resident in small areas of West Cumbria in relation to paternal preconceptional irradiation, Br. J. Cancer, 73: 672-679 (1996).
69. V. Beral, Leukaemia and nuclear installations: occupational exposure of fathers to radiation may be the explanation, Br. Med. J., 300: 411-412 (1990).
70. W.L. Russell, X-ray induced mutations in mice, Cold Spring Harbor Symp. Quant. Biol., 16: 327-336 (1951).
71. K.G. Luning, A.G. Searle, Estimates of the genetic risks from ionising radiation, Mutat. Res., 12: 291-304 (1971).
72. W.L. Russell, L.B. Russell, E.M. Kelly, Radiation dose rate and mutation frequency, Science, 128: 1546-1550 (1958).
73. United Nations Scientific Committee on the Effects of Atomic Radiation, Sources, Effects and Risks of Ionising Radiation (UNSCEAR 1988 Report), United Nations, New York (1988).
74. United Nations Scientific Committee on the Effects of Atomic Radiation, Sources and Effects of Ionising Radiation (UNSCEAR 1993 Report), United Nations, New York (1993).
75. J.V. Neel, W.J. Schull, A.A. Awa, C. Satoh, H. Kato, M. Otake, Y. Yoshimoto, The children of parents exposed to atomic bombs: estimates of the genetic doubling dose of radiation for humans, Am. J. Hum. Genet., 46: 1053-1072 (1990).
76. J.V. Neel, S.E. Lewis, The comparative radiation genetics of humans and mice, Annu. Rev. Genet., 24: 327-362 (1990).
77. U.K. Ehling, Genetic risk assessment, Annu. Rev. Genet., 25: 255-289 (1991).
78. K. Sankaranarayanan, Genetic effects of ionising radiation in man, Ann. ICRP, 22: 75-94 (1991).
79. R. Wakeford, E.J. Tawn, D.M. McElvenny, L.E. Scott, K. Binks, L. Parker, H. Dickinson, J. Smith, The descriptive statistics and health implications of occupational radiation doses received by men at the Sellafield nuclear installation before the conception of their children, J. Radiol. Prot., 14: 3-16 (1994).
80. R. Wakeford, E.J. Tawn, D.M. McElvenny, K. Binks, L.E. Scott, L. Parker, The Seascale childhood leukaemia cases - the mutation rates implied by paternal preconceptional radiation doses, J. Radiol. Prot., 14: 17-24 (1994).
81. E.J. Tawn, Leukaemia and Sellafield: is there a heritable link?, J. Med. Genet., 32: 251-256 (1995).
82. W.L. Russell, E.M. Kelly, Mutation frequencies in male mice and the estimation of genetic hazards of radiation in man, Proc. Natl. Acad. Sci. USA, 79: 542-544 (1982).
83. K. Sankaranarayanan, Ionising radiation and genetic risks, III, Nature of spontaneous and radiation-induced mutations in mammalian in vitro systems and mechanisms of induction by radiation, Mutat. Res., 258: 75-97 (1991).
84. J. Thacker, Radiation-induced mutation in mammalian cells at low doses and dose rates, Adv. Radiat. Biol., 16: 77-124 (1992).
85. C.A. Felix, D. D'Amico, T. Mitsudomi, M.M. Nau, F.P. Li, J.F. Fraumeni Jr., D.E. Cole, J. McCalla, G.H. Reaman, J. Whang-Peng et al., Absence of hereditary p53 mutation in 10 familial leukaemia pedigrees, J. Clin. Invest., 90: 653-658 (1992).
86. G.J. Draper, General overview of studies of multigeneration carcinogenesis in man, particularly in relation to exposure to chemicals, in: Perinatal and Multigeneration Carcinogenesis, N.P. Napalkov, J.M. Rice, L. Tomatis, H. Yamasaki, eds., International Agency for Research on Cancer, Lyon, pp. 275-288 (1989).
87. M.M. Hawkins, G.J. Draper, D.L. Winter, Cancer in the offspring of survivors of childhood leukaemia and non-Hodgkin lymphomas, Br. J. Cancer, 71: 1335-1339 (1995).
88. C.A. Stiller, P.A. McKinney, K.J. Bunch, C.C. Bailey, I.J. Lewis, Childhood cancer in ethnic groups in Britain, a United Kingdom Children's Cancer Study Group (UKCCSG) study, Br. J. Cancer, 64: 543-548 (1991).
89. R. Doll, H.J. Evans, S.C. Darby, Paternal exposure not to blame, Nature, 367: 678-680 (1994).
90. A.M. Ford, S.A. Ridge, M.E. Cabrera, H. Mahmoud, C.M. Steel, L.C. Chan, M. Greaves, In utero rearrangements in the trithorax-related oncogene in infant leukaemias, Nature, 363: 358-360 (1993).
91. T.H. Rabbitts, Chromosomal translocations in cancer, Nature, 372: 143-149 (1994).
92. J.M. Adams, S. Cory, Transgenic models of tumour development, Science, 254: 1161-1166 (1991).
93. N. Heisterkamp, G. Jenster, D. Kioussis, P.K. Pattengale, J. Groffen, Human bcr-abl gene has a lethal effect on embryogenesis, Transgenic Res., 1: 45-53 (1991).
94. R. Weinberg, Tumour suppressor genes, Science, 254: 1138-1146 (1991).
95. A.G. Knudson, Hereditary cancer, oncogenes, and antioncogenes, Cancer Res., 45: 1437-1443 (1985).
96. G.M. Taylor, G.M. Birch, The hereditary basis of human leukaemia, in: Leukaemia, 6th ed., E.S. Henderson, T.A. Lister, M.F. Greaves, eds., W.B. Saunders Co., USA, pp. 210-245 (1996).
97. D. Malkin, p53 and the Li-Fraumeni syndrome [Review], Cancer Genet. Cytogenet., 66: 83-92 (1993).
98. J.V. Neel, Problem of "false positive" conclusions in genetic epidemiology: lessons from the leukaemia cluster near the Sellafield nuclear installation, Genet. Epid., 11: 213-233 (1994).
99. K.P. Jones, A.W. Wheater, Obstetric outcomes in West Cumberland Hospital: is there a risk from Sellafield? J. R. Soc. Med., 82: 524-527 (1989).
100. M. Greaves, Infant leukaemia biology, aetiology and treatment, Leukaemia, 10: 372-377 (1996).
101. G. de The, The etiology of Burkitt's lymphoma and the history of the shaken dogmas, Blood Cells, 19: 667-673 (1993).
102. M. Subar, A. Neri, G. Inghirami, D.M. Knowles, R. Dalla-Favera, Frequent c-myc oncogene activation and infrequent presence of Epstein-Barr virus genome in AIDS-associated lymphoma, Blood, 72: 667-671 (1988).
103. M. Tibebu, A. Polliack, Familial lymphomas, a review of the literature with report of cases in Jerusalem, Leukaemia and Lymphoma, 1: 195-201 (1990).
104. T. Nomura, Paternal exposure to radiation and offspring cancer in mice: reanalysis and new evidences, J. Radiat. Res. Suppl., 2: 64-72 (1991).
105. T. Nomura, Role of DNA damage and repair in carcinogenesis, in: Environmental Mutagens and Carcinogens, T. Sugimura, S. Kondo, H. Takebe, eds., Liss, New York, pp. 223-230 (1982).
106. T. Nomura, Sensitivity of a lung cell in the developing mouse embryo to tumour induction by urethan, Cancer Res., 34: 3363-3372 (1974).
107. G.E. Cosgrove, P.B. Selby, A.C. Upton, T.J. Mitchell, M.H. Stell, W.L. Russell, Lifespan and autopsy findings in the first generation offspring of X-irradiated male mice, Mutat. Res., 319: 71-79 (1993).
108. B.M. Cattanach, G. Patrick, D. Papworth, D.T. Goodhead, T. Hacker, L. Cobb, E. Whitehill, Investigation of lung tumour induction in BALB/cJ mice following paternal X-irradiation, Int. J. Radiat. Biol., 67: 607-615 (1995).
109. T. Nomura, Further studies on X-ray and chemically induced germ-line alterations causing tumours and malformations in mice, in: Genetic Toxicology of Environmental Chemicals, Part B: Genetic Effects and Applied Mutagenesis, C. Ramel, ed., Alan R. Liss, New York, pp. 13-20 (1986).
110. L. Tomatis, Transgenerational carcinogenesis: A review of the experimental and epidemiological evidence, Jpn. J. Cancer Res., 85: 443-454 (1994).
111. P.B. Selby, Experimental induction of dominant mutations in mammals by ionising radiations and chemicals, in: Issues and Reviews in Teratology 5, H. Kalter, ed., Plenum, New York, pp. 181-253 (1990).
112. V.S. Turusov, E. Cardis, Review of experiments on multigenerational carcinogenicity: of design, experimental models and analyses, in: Perinatal and Multigeneration Carcinogenesis, N.P. Napalkov, J.M. Rice, L. Tomatis, H. Yamasaki, eds., IARC, Lyon, pp. 105-120 (1989).
113. E.J. Hall, Radiobiology for the Radiologist, J.B. Lippincott Company, New York (1988).
114. D.J. Brenner, E.J. Hall, The inverse dose-rate effect for oncogenic transformation by neutrons and charged particles: a plausible interpretation consistent with published data, Int. J. Radiat. Biol., 58: 745-758 (1990).
115. A.G. Searle, Mutation induction in mice, Adv. Radiat. Biol., 4: 131-207 (1974).
116. R. Cox, Transgeneration carcinogenesis: are there genes that break the rules? NRPB Radiol. Prot. Bull., 129: 15-23 (1992).
117. B.A. Dombroski, S.L. Mathias, E. Nanthakumar, A.F. Scott, H.J. Kazazian Jr., Isolation of an active human transposable element, Science, 254: 1805-1808 (1991).
118. Y.E. Dubrova, A.J. Jeffreys, A.M. Malashenko, Mouse minisatellite mutations induced by ionising radiation, Nature Genetics, 5: 92-94 (1993).
119. Y.J. Fan, Z. Wang, S. Sadamoto, Y. Ninomiya, N. Kotomura, K. Kamiya, K. Dohi, R. Kominami, O. Niwa, Dose-response of a radiation induction of a germline mutation at a hypervariable mouse minisatellite locus, Int. J. Radiat. Biol., 68: 177-183 (1995).
120. Y.E. Dubrova, V.N. Nesterov, N.G. Krouchinsky, V.A. Ostapenko, R. Neumann, D.L. Neil, A.J. Jeffreys, Human minisatellite mutation rate after the Chernobyl accident, Nature, 380: 683-686 (1996).
121. M. Kodaira, C. Satoh, K. Hiyama, K. Toyama, Lack of effects of atomic bomb radiation and genetic instability of tandem-repetitive elements in human germ cells, Am. J. Hum. Genet., 57: 1275-1283 (1995).
122. T.G. Krontiris, Minisatellites and human disease, Science, 269: 1682-1683 (1995).
123. G.R. Sutherland, R.N. Simmers, No statistical association between common fragile sites and non-random chromosome breakpoints in cancer cells, Cancer Genet. Cytogenet., 31: 9-15 (1988).
124. R.I. Richards, G.R. Sutherland, Dynamic mutations: a new class of mutations causing human disease, Cell, 70: 709-712 (1992).
125. J.G. Hall, Genomic imprinting: Review and relevance to human diseases, Am. J. Hum. Genet., 46: 857-873 (1990).
126. O.A. Haas, A. Argyriou-Tirita, T. Lion, Parental origin of chromosomes involved in the translocation t(9;22), Nature, 359: 414-416 (1992).
127. T. Fioretos, N. Heisterkamp, J. Groffen, No evidence for genomic imprinting of the human BCR gene, Blood, 83: 3441-3444 (1994).
128. Reay v British Nuclear Fuels plc; Hope v British Nuclear Fuels plc (QBD: French J), Medical Law Reports, 5: 1-55 (1994).
129. R. Wakeford, E.J. Tawn, Childhood leukaemia and Sellafield: the legal cases, J. Radiol. Prot., 14: 293-316 (1994).
130. L.J. Kinlen, Epidemiological evidence for an infective basis in childhood leukaemia, Br. J. Cancer, 71: 1-5 (1995).
131. L.J. Kinlen, M. Dixon, C.A. Stiller, Childhood leukaemia and non-Hodgkin's lymphoma near large rural construction sites, with a comparison with Sellafield nuclear site, Br. Med. J., 310: 763-768 (1995).
132. United Nations Scientific Committee on the Effects of Atomic Radiation, Sources and Effects of Ionising Radiation (UNSCEAR 1994 Report), United Nations, New York (1994).
133. Committee on Medical Aspects of Radiation in the Environment (COMARE), Fourth Report, The incidence of cancer and leukaemia in young people in the vicinity of the Sellafield site, West Cumbria: Further studies and an update of the situation since the publication of the report of the Black Advisory Group in 1984, HMSO, London (1996).
134. J.R. Simmonds, C.A. Robinson, A. Phipps, C.R. Muirhead, F.A. Fry, Risks of leukaemia and other cancers in Seascale from all sources of ionising radiation exposure, NRPB-R276 (1995).
REACTOR DYNAMICS FROM MONTE CARLO CALCULATIONS
Timothy E. Valentine
Instrumentation and Controls Division
Oak Ridge National Laboratory
P.O. Box 2008
Oak Ridge, TN 37831-2008
I. INTRODUCTION

Point kinetics models are commonly used to evaluate certain accident scenarios for nuclear reactors and, along with other safety analyses, provide the basis for the design of the reactor control and safety systems. However, the validity of the point kinetics models is a relevant issue if the results of these safety analyses are to be credible. Rod oscillator experiments1 have been performed in the past to determine reactor kinetics parameters. Other techniques, such as varying coolant temperature or flow rate, have been employed to change the reactivity of a reactor to investigate the kinetic behavior of a reactor core.2 Although these methods provide a direct measure of the reactor transfer function, they require a reactor perturbation.

Noise techniques have been applied for over 40 years to determine reactor parameters. Analyses of the measured noise statistics (auto- or cross-power spectral densities) provide quantities that are directly related to the reactor transfer function. The early work by de Hoffmann3 and Courant and Wallace4 analyzed neutron fluctuations for reactors operating at zero power. The number of neutrons from fission fluctuates about an average value, which results in local variations in the neutron flux. The distance a neutron travels between collisions is random and results in stochastic fluctuations of the neutron population. The type of neutron event (fission, absorption, etc.) is also random and results in momentary fluctuations in the neutron flux. In 1958, Moore5 suggested that measurements of the fission rate autocorrelation of a reactor operating at steady state could be used to determine the reactor transfer function. An application of this technique was performed by Cohn6 in 1959 at Argonne National Laboratory.
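The idea behind such noise measurements can be illustrated with a small numerical sketch. The Python fragment below is illustrative only: the sampling interval, the decay constant of 200 s⁻¹ and the one-pole point-kinetics model (prompt neutrons only, no delayed groups) are assumptions, not values taken from the measurements cited here. It generates a synthetic detector signal by driving a first-order system with white noise, estimates the auto-power spectral density by averaging segment periodograms, and recovers the decay constant by a least-squares fit of the Lorentzian shape A/(α² + ω²) implied by the transfer function.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true = 200.0   # prompt neutron decay constant (1/s); assumed value
dt = 1e-4            # detector sampling interval (s); assumed value
n = 2**18

# Synthetic detector signal: white noise filtered by a one-pole system,
# the discrete analogue of point kinetics without delayed neutrons.
a = np.exp(-alpha_true * dt)
w = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for k in range(1, n):
    x[k] = a * x[k - 1] + w[k]

# Auto-power spectral density estimated by averaging periodograms
# of 256 non-overlapping segments.
nseg = 256
seglen = n // nseg
freqs = np.fft.rfftfreq(seglen, dt)
psd = np.zeros(freqs.size)
for s in range(nseg):
    seg = x[s * seglen:(s + 1) * seglen]
    psd += np.abs(np.fft.rfft(seg - seg.mean())) ** 2
psd /= nseg

# Least-squares fit of A / (alpha^2 + omega^2) over the band where the
# continuous model holds; A is solved analytically for each trial alpha,
# and alpha is chosen by a coarse grid search.
band = (freqs > 0.0) & (freqs < 1000.0)
omega2 = (2.0 * np.pi * freqs[band]) ** 2
best_sse, alpha_fit = np.inf, 0.0
for alpha in np.linspace(50.0, 500.0, 451):
    shape = 1.0 / (alpha ** 2 + omega2)
    amp = (psd[band] @ shape) / (shape @ shape)
    sse = np.sum((psd[band] - amp * shape) ** 2)
    if sse < best_sse:
        best_sse, alpha_fit = sse, alpha

print(f"fitted prompt decay constant: {alpha_fit:.0f} 1/s")
```

The fitted constant lands near the value used to generate the data; applied to a measured spectral density, the same fit yields the corresponding kinetics parameter of the reactor.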
Cohn measured the autocorrelation of an ionization chamber output to obtain the reactor transfer function and performed a least-squares fit to the transfer function to obtain an estimate of the reactor kinetics parameters. Similar analyses have also been performed at other facilities. The use of these techniques to estimate reactor parameters led to the development of stochastic models to study the fluctuations in the neutron populations.7,8 These theories modeled the probabilities of various neutron events in the reactor and obtained the moments of these probability
Advances in Nuclear Science and Technology, Volume 25 Edited by Lewins and Becker, Plenum Press, New York, 1997
TIMOTHY E. VALENTINE
distribution functions. In 1968, Zolotar9 performed the first Monte Carlo analysis using stochastic models. Zolotar developed a one-speed Monte Carlo code to simulate fluctuations in the neutron populations for a bare reactor and to estimate parameters of the probability distributions. In doing so, Zolotar demonstrated that Monte Carlo techniques could estimate parameters of the probability distributions that could be compared with point reactor models. Since Zolotar's analyses, little was done to use Monte Carlo techniques for noise analysis calculations until the development of the codes KENO-NR10 and MCNP-DSP11. This work highlights a unique synthesis of noise analysis techniques with the Monte Carlo method. The time-dependent nature of the Monte Carlo simulation is ideally suited for performing noise and/or correlation calculations. Explicit modeling of detectors in the Monte Carlo simulations allows for the direct calculation of time-dependent detector responses. With the development of the Monte Carlo codes KENO-NR and MCNP-DSP, noise analysis measurements using neutron and/or gamma ray detectors can be simulated. Although these codes were originally developed to calculate source-driven noise analysis measurements,12 they can also be applied to a number of noise analysis simulations, in particular the determination of reactor transfer functions for steady-state reactor configurations. This article describes how noise analysis techniques incorporated into a Monte Carlo neutron transport code can be used either to investigate the applicability of point reactor kinetics transfer functions or, alternatively, to develop a more sophisticated model for the reactor transfer function. This article also demonstrates how noise analysis techniques can be used to determine the time delay between core power production and external detector response, an important parameter for the design of the reactor control and safety systems.
This manuscript describes the reactivity and source transfer functions obtained from the point reactor kinetics models and how the source transfer function is estimated using noise analysis techniques; it describes how noise analysis parameters may be estimated using Monte Carlo calculational models; and it presents an application of these techniques to a conceptual design of the Advanced Neutron Source (ANS) reactor.13 Finally, some conclusions are drawn about the applicability and limitations of these methods.
II. REVIEW OF STATISTICS OF STOCHASTIC PROCESSES

Before proceeding to a detailed discussion of the estimation of noise statistics, it is necessary to briefly review the statistics of stochastic processes. A random physical phenomenon cannot be explicitly described by mathematical relationships because each observation of the phenomenon will be unique, and any given observation will only represent one of the many possible states of the process.14 The collection of all possible states of the random or stochastic process is called an ensemble. Properties of a stochastic process are estimated by computing averages over the ensemble, which is characterized by a probability density function, p(x). The probability density function describes the probability that the process will assume a value within some defined range at any instant of time.14 To determine the statistics of a stochastic process, knowledge of the probability density is required. For example, the mean value of an ensemble of random variables x(t) is the expectation value

\mu_x(t_1) = E[x(t_1)] = \int_{-\infty}^{\infty} x_1 \, p(x_1; t_1) \, dx_1 .    (2.1)
REACTOR DYNAMICS FROM MONTE CARLO CALCULATIONS

The autocorrelation of x(t) is the expectation of the product x(t_1) x(t_2),

R_{xx}(t_1, t_2) = E[x(t_1)\, x(t_2)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2 \, p(x_1, x_2; t_1, t_2) \, dx_1 \, dx_2 ,    (2.2)
where p(x_1, x_2; t_1, t_2) is a joint probability density function. The joint probability density function describes the probability that both random processes will assume a value within some defined pair of ranges at their respective times t_1 and t_2.14 Likewise, the cross-correlation between two random variables x(t) and y(t) is

R_{xy}(t_1, t_2) = E[x(t_1)\, y(t_2)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x \, y \, p(x, y; t_1, t_2) \, dx \, dy ,    (2.3)
where p(x, y; t_1, t_2) is a joint probability density function for x(t_1) and y(t_2).14 A stochastic process is stationary if the statistics are independent of time. Therefore, the mean value is a constant and the correlation functions depend only on the difference in time, \tau = t_2 - t_1. The complication of estimating the statistics of a stochastic process is greatly simplified if the process is ergodic. A stochastic process is ergodic if its ensemble averages equal appropriate time averages. In practice, random data representing stationary physical phenomena are generally assumed to be ergodic. For a stationary ergodic process, the mean value can be estimated as

\mu_x = \lim_{T \to \infty} \frac{1}{T} \int_0^T x(t) \, dt ,    (2.4)
the autocorrelation of the random variable x(t) is

R_{xx}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_0^T x(t) \, x(t + \tau) \, dt ,    (2.5)
and the cross-correlation between two random variables x(t) and y(t) is

R_{xy}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_0^T x(t) \, y(t + \tau) \, dt .    (2.6)
For stationary ergodic processes, Eqs. 2.4 through 2.6 are equivalent to Eqs. 2.1 through 2.3. References 14 and 15 provide a detailed treatment of the statistics of random processes. In practice, the mean and the correlation functions are calculated from time averages of finite length and are estimated from discrete time signatures. The mean value is estimated from

\hat{\mu}_x = \frac{1}{N} \sum_{i=1}^{N} x_i ,    (2.7)
where N is the number of points and x_i is a discrete sample of x(t) at time t_i. The autocorrelation is estimated as

\hat{R}_{xx}(n) = \frac{1}{N - n} \sum_{i=1}^{N - n} x_i \, x_{i+n} ,    (2.8)
where n is the lag index. The cross-correlation is estimated as

\hat{R}_{xy}(n) = \frac{1}{N - n} \sum_{i=1}^{N - n} x_i \, y_{i+n} .    (2.9)
By taking the expectation of Eq. 2.7, the expected value of the estimate of the mean value of x can be shown to be the same as the true mean value. Likewise, the expected values of the estimates of the correlation functions can be shown to be the same as the true values of the correlation functions. These estimates are unbiased estimates of the statistics of the random process. Although the estimates are unbiased, there is a variance associated with the estimates that depends on the number of discrete samples of the signal, N. The variance of the estimates approaches zero as N approaches infinity. The statistics of random processes are also described in terms of frequency domain quantities. Spectral density functions are the frequency domain equivalent of the correlation functions. Like the cross-correlation function, the cross-power spectral density (CPSD) between x and y represents the amount of common information between x and y. In general, the CPSD is a complex quantity because the cross-correlation function is not an even function. Likewise, the auto-power spectral density (APSD) is the Fourier transform of the autocorrelation. The CPSD may be defined as the discrete Fourier transform of the cross-correlation function

G_{xy}(k) = \sum_{n=0}^{N-1} \hat{R}_{xy}(n) \, e^{-j 2 \pi k n / N} ,    (2.10)
where k is the frequency index, n is the lag index, and N is the number of discrete samples of the correlation function.
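The discrete estimators of this section can be sketched numerically. The following is a minimal NumPy illustration of the unbiased lag-domain estimates (Eqs. 2.7 and 2.9) and of a DFT of the correlation function (Eq. 2.10); the function names and the synthetic lagged signal are my own, not from the text.

```python
import numpy as np

def mean_estimate(x):
    """Sample mean of a discrete record, Eq. 2.7."""
    return x.mean()

def cross_correlation(x, y, max_lag):
    """Unbiased cross-correlation estimate, Eq. 2.9; the autocorrelation
    (Eq. 2.8) is the special case y = x."""
    N = len(x)
    return np.array([np.dot(x[:N - n], y[n:]) / (N - n) for n in range(max_lag)])

def cpsd(rxy):
    """CPSD as the discrete Fourier transform of the cross-correlation, Eq. 2.10."""
    return np.fft.fft(rxy)

# synthetic stationary records: y lags x by 3 samples
rng = np.random.default_rng(0)
x = rng.normal(size=4096)
y = np.roll(x, 3)
rxy = cross_correlation(x, y, 16)
# the cross-correlation estimate peaks at the 3-sample lag
assert np.argmax(rxy) == 3
G = cpsd(rxy)   # complex spectrum, as noted in the text
```

Note that the estimate at lag n averages over N − n products, which is why the variance of the estimate grows with lag and shrinks as N increases, as stated above.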
III. REACTOR TRANSFER FUNCTIONS

3.1 Introduction

Classic reactor transfer function measurements attempt to relate fluctuations in reactivity to fluctuations in the neutron population. These fluctuations in reactivity are induced by oscillating control rods or otherwise perturbing the reactor to measure reactor dynamics parameters. The point reactor kinetics equations are commonly used to describe the reactor transfer function for small fluctuations about an equilibrium condition, although the use of space-dependent neutron dynamics has grown rapidly for the analysis of reactor transients. However, there are other reactor transfer functions that can be measured without perturbing the reactor and that may be compared to theoretical models for validation. The source transfer function can be obtained for a steady-state reactor configuration with time-dependent sources. In a subcritical reactor, a neutron source is needed to maintain the neutron flux at a quasi-static level. If the nuclear parameters of a subcritical reactor are assumed to remain invariant with respect to time and inherent noise
contributions are negligible, the fluctuations in the neutron production are mainly due to fluctuations in the neutron source. Uhrig16 has described how noise techniques can be used to obtain the source transfer function or impulse response of the subcritical reactor for a steady-state configuration. The nuclear parameters are then obtained by fitting the measured transfer function with an appropriate model. The general idea of this work is to incorporate noise analysis techniques in Monte Carlo calculations to obtain the source transfer function. However, because Monte Carlo calculations represent the full space, time, and energy dependence of the neutron populations, Monte-Carlo-estimated transfer functions can not only be compared to the point reactor kinetics transfer function but can also provide the basis for a more sophisticated kinetics model. This section reviews the point kinetics representation of the source transfer function. This section also describes how the Monte-Carlo-calculated transfer function is related to the point reactor kinetics equations and how the Monte Carlo transfer function is estimated from the noise spectra. Some frequency-dependent features of the noise spectra are also discussed, and their use in analyzing the applicability of point kinetics to describe the source transfer function is described.

3.2 Point Kinetics Transfer Function

The point kinetics equations used to describe the dynamic behavior of a subcritical fissile assembly are17

\frac{dn(t)}{dt} = \frac{\rho - \beta}{\Lambda} n(t) + \sum_i \lambda_i C_i(t) + s(t)    (3.1)

and

\frac{dC_i(t)}{dt} = \frac{\beta_i}{\Lambda} n(t) - \lambda_i C_i(t) ,    (3.2)
where n(t) is the neutron density, C_i(t) is the i'th precursor density, \beta is the total delayed neutron fraction, \rho is the reactivity, \Lambda is the neutron generation time, \lambda_i is the decay constant of the i'th precursor, \beta_i is the delayed neutron fraction of the i'th precursor, and s(t) is the neutron source density. For reactivity oscillation analyses, the neutron density, precursor density, and the reactivity are assumed to fluctuate about their average values,

n(t) = n_0 + \delta n(t), \quad C_i(t) = C_{i0} + \delta C_i(t), \quad \rho(t) = \rho_0 + \delta\rho(t) ,    (3.3)
where the average values are assumed to be equal to equilibrium values. By substituting these expressions into Eqs. 3.1 and 3.2 and assuming the oscillations are sufficiently small such that the product \delta\rho(t)\,\delta n(t) is negligible, one obtains the following linearized equations for the fluctuating components,

\frac{d\,\delta n(t)}{dt} = \frac{\rho_0 - \beta}{\Lambda} \delta n(t) + \frac{n_0}{\Lambda} \delta\rho(t) + \sum_i \lambda_i \, \delta C_i(t)    (3.4)
and

\frac{d\,\delta C_i(t)}{dt} = \frac{\beta_i}{\Lambda} \delta n(t) - \lambda_i \, \delta C_i(t) .    (3.5)
Taking the Laplace transform of Eqs. 3.4 and 3.5 and combining the resulting algebraic equations yields

\delta N(s) = \frac{(n_0 / \Lambda)\, \delta\rho(s)}{s - \dfrac{\rho_0}{\Lambda} + \sum_i \dfrac{\beta_i \, s}{\Lambda (s + \lambda_i)}} ,    (3.6)
where \delta N(s) is the Laplace transform of \delta n(t) and \delta\rho(s) is the Laplace transform of \delta\rho(t). Therefore, the reactivity transfer function is simply

G_\rho(s) = \frac{\delta N(s)}{\delta\rho(s)} = \frac{n_0 / \Lambda}{s - \dfrac{\rho_0}{\Lambda} + \sum_i \dfrac{\beta_i \, s}{\Lambda (s + \lambda_i)}} .    (3.7)
A similar derivation may be performed to obtain the source transfer function. For a steady-state subcritical reactor at a constant reactivity, the neutron density will fluctuate about its quasi-static level due to fluctuations in the neutron source. As before, the neutron, precursor, and source densities can be assumed to be composed of an average value and a fluctuating component while the reactivity is treated as a constant. After performing substitutions and algebraic manipulations similar to those required to obtain the reactivity transfer function, one obtains the following expression for the source transfer function

G_s(s) = \frac{\delta N(s)}{\delta S(s)} = \frac{1}{s - \dfrac{\rho_0}{\Lambda} + \sum_i \dfrac{\beta_i \, s}{\Lambda (s + \lambda_i)}} ,    (3.8)
where \delta S(s) is the Laplace transform of \delta s(t). Comparing Eqs. 3.7 and 3.8, the reactivity transfer function can be shown to be related to the source transfer function by

G_\rho(s) = \frac{n_0}{\Lambda} \, G_s(s) .    (3.9)
Because current Monte Carlo calculations do not include delayed neutrons, they are not included in comparisons of the point reactor kinetics models and the Monte Carlo calculated models for the source transfer function. The introduction of delayed neutrons is not necessary because this is a steady-state analysis, and the delayed neutron contribution is most significant for transient analyses. For transient analyses, the delayed neutron contribution could be calculated analytically and the prompt transfer function could be estimated from the Monte Carlo simulations. The source transfer function without delayed neutrons is

G_s(s) = \frac{1}{s + \dfrac{\beta - \rho_0}{\Lambda}} .    (3.10)
In some measurements and in the Monte Carlo calculations, fission detectors are used to analyze the fluctuations in the neutron population; therefore, the fluctuations in the neutron population must be converted to fluctuations in the neutron production rate. The fission rate is related to the neutron population by

F(t) = \frac{\Sigma_f}{\Sigma_a \, l} \, n(t) ,    (3.11)
where l is the neutron lifetime, \Sigma_f is the macroscopic fission cross-section, and \Sigma_a is the macroscopic absorption cross-section. The neutron production rate is \nu F(t). By substitution of the standard definitions for k and \Lambda, the neutron production rate is expressed as

\nu F(t) = \frac{\nu \Sigma_f}{\Sigma_a \, l} \, n(t) = \frac{k}{l} \, n(t) = \frac{n(t)}{\Lambda} .    (3.12)
Therefore, the transfer function relating the neutron production to the neutron source is

G_p(s) = \frac{1}{\Lambda} \, G_s(s) = \frac{1}{\Lambda s + \beta - \rho_0} .    (3.13)
This transfer function may be compared with the Monte Carlo calculated transfer function.

3.3 Reactor Transfer Functions from Noise Analysis Techniques

As previously mentioned, the idea of obtaining the reactor transfer function from steady-state operation of a reactor is not new; this idea was first proposed by Moore and demonstrated experimentally by Cohn. The early work employed correlation techniques to determine the reactor transfer function. These techniques and their frequency domain equivalents are discussed in this section. A subcritical reactor with a source driving function and a fission detector can be considered as a linear single-input single-output system. The output, o(t), is related to the input, i(t), via the convolution integral

o(t) = \int_0^\infty h(\theta) \, i(t - \theta) \, d\theta ,    (3.14)
where h(t) is the impulse response of the system. The transfer function can be obtained from the cross-correlation between the input and output. The cross-correlation function is the expectation value

R_{io}(\tau) = E[\, i(t) \, o(t + \tau) \,] .    (3.15)
Rearrangement of integration and expectation yields

R_{io}(\tau) = \int_0^\infty h(\theta) \, R_{ii}(\tau - \theta) \, d\theta ,    (3.16)
where R_{ii}(\tau) is the autocorrelation of the input.
The cross-power spectral density (CPSD) between the input and the output is

G_{io}(\omega) = H(\omega) \, G_{ii}(\omega) ,    (3.17)
where H(\omega) is the reactor transfer function and G_{ii}(\omega) is the auto-power spectral density (APSD) of the input. The APSD of a Poisson-distributed white noise source is a constant function of frequency. The CPSD between the source and a fission detector is given by

G_{so}(\omega) = \frac{A}{j\omega\Lambda + \beta - \rho_0} ,    (3.18)
where A is the value of the APSD of the source and is proportional to the source disintegration rate, and we have substituted in the expression for the transfer function, Eq. 3.13. If the point reactor kinetics models are applicable, the real part of the CPSD is positive because the reactivity is less than zero for a subcritical reactor. This property of the CPSD is useful for determining the applicability of the point reactor kinetics model. If two identical fission chambers are used to measure the fluctuations in the neutron production, the CPSD between the two detectors can be used to ascertain whether the source and detectors can be considered a single-input dual-output system. For a single-input dual-output system, the CPSD between two detectors with identical transfer functions, H(\omega), can be expressed as

G_{12}(\omega) = H(\omega) \, H^*(\omega) \, G_{ii}(\omega) = |H(\omega)|^2 \, G_{ii}(\omega) ,    (3.19)
where G_{ii}(\omega) is the APSD of the input. Note that because all the terms in the CPSD are real, the phase of the CPSD between the two detectors is zero. In experiments, zero phase in the measured CPSD between the two detectors verifies the assumption that the transfer function between the source and a detector is the same for both detectors. This is useful for determining if all symmetric modes are measured by all detectors.
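The two CPSD properties just described can be checked numerically. The sketch below evaluates a prompt transfer function of the Eq. 3.13 form with illustrative parameter values (these are not the ANS values discussed later) and verifies that the source-to-detector CPSD has a positive real part for a subcritical reactor and that the detector-to-detector CPSD has zero phase.

```python
import numpy as np

# Illustrative parameters only: beta, reactivity (subcritical), generation time, source APSD
beta, rho, Lambda, A = 0.0065, -0.02, 1e-4, 1.0
w = np.logspace(1, 6, 200)                     # angular frequency, rad/s
H = 1.0 / (1j * w * Lambda + beta - rho)       # prompt transfer function, Eq. 3.13 form

G_source_det = A * H                           # CPSD, source -> fission detector
G_det_det = H * np.conj(H) * A                 # CPSD between two identical detectors

# For rho < 0, beta - rho > 0, so Re{CPSD} > 0 at every frequency
assert np.all(G_source_det.real > 0)
# The detector-to-detector CPSD is |H|^2 * A: purely real, hence zero phase
assert np.allclose(G_det_det.imag, 0.0)
assert np.allclose(np.angle(G_det_det), 0.0)
```

The break frequency of |H| sits at (beta - rho)/Lambda rad/s; fitting that roll-off is what allows kinetics parameters to be extracted from a measured or calculated CPSD.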
IV. TIME DELAY ESTIMATION
The time delay between core power production and external detector response is one of the most important parameters in the design of control and safety systems of reactors with external detectors. In the event of a rapid core power change, the control and safety systems must respond quickly to mitigate the possibility of a severe excursion. Noise analysis algorithms incorporated into Monte Carlo calculations can also be used to estimate the time delay between core power production and external detector response. The external detector response can be considered as the output of a single-input single-output system whose input is the neutron production in the reactor core. The time delay can be estimated from the phase of the CPSD between the input and the output. Estimation of the time delay in this manner is equivalent to pulsed neutron measurements, as will be shown. As previously mentioned, the output of the system is related to the input via the convolution integral

o(t) = \int_0^\infty h(\theta) \, i(t - \theta) \, d\theta .    (4.1)
If the output is a fraction, g, of the delayed input, the impulse response is

h(t) = g \, \delta(t - D) ,    (4.2)
where D is the delay time and \delta(t) is the Dirac delta function. Therefore, the output is simply o(t) = g \, i(t - D). The cross-correlation between the input and the output is

R_{io}(\tau) = g \, R_{ii}(\tau - D) ,    (4.3)
where R_{ii}(\tau) is the input autocorrelation. The CPSD between the input and the output is

G_{io}(f) = g \, G_{ii}(f) \, e^{-j 2 \pi f D} .    (4.4)
Consequently, the phase of the CPSD is

\phi(f) = -2 \pi f D .    (4.5)
Therefore, the delay is

D = -\frac{\phi(f)}{2 \pi f} ,    (4.6)
where f is the frequency in hertz. In this manner, the phase of the CPSD between core power production and external detectors may be used to estimate the time required for neutrons produced in the core to reach the external detectors.
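The phase-slope delay estimate can be demonstrated on a synthetic delayed signal. This sketch (sampling rate, delay, and attenuation are made-up values) builds the CPSD from the product of Fourier transforms and recovers D from the slope of the unwrapped phase, exactly as in the relation D = -phi(f)/(2*pi*f) above.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000.0                        # sampling frequency, Hz (illustrative)
delay_samples = 25                 # true delay D = 25 / fs = 0.025 s
g = 0.5                           # attenuation fraction of the delayed input
i_sig = rng.normal(size=2**14)
o_sig = g * np.roll(i_sig, delay_samples)   # output = attenuated, delayed input

# CPSD between input and output: conj(I(f)) * O(f)
I = np.fft.rfft(i_sig)
O = np.fft.rfft(o_sig)
G_io = np.conj(I) * O
f = np.fft.rfftfreq(len(i_sig), d=1.0 / fs)

# unwrap the phase and estimate D from its slope, phase = -2*pi*f*D
phase = np.unwrap(np.angle(G_io))
slope = np.polyfit(f[1:], phase[1:], 1)[0]
D_est = -slope / (2 * np.pi)
assert abs(D_est - delay_samples / fs) < 1e-3
```

Fitting the slope over many frequency bins, rather than using a single bin, is what makes the estimate robust once measurement noise is present.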
V. MONTE CARLO SIMULATION

5.1 Introduction
The Monte Carlo method can be formulated in terms of an integral form of the Boltzmann transport equation or from a physical viewpoint. One integral form of the Boltzmann transport equation commonly used for describing the estimation of the average neutron behavior in an assembly is the emergent particle density equation,18

\chi(P) = S(P) + \int \chi(P') \, K(P' \to P) \, dP' ,    (5.1)
where P denotes the phase space, \chi(P) is the emergent particle density (the density of particles emerging from a collision or a source), K(P' \to P) is the kernel, and S(P) is the source term. This equation is a Volterra-type integral equation that may be solved using a Neumann series approach. The random walk procedure is implemented by representing the emergent particle density by a Neumann series,

\chi(P) = \sum_{n=0}^{\infty} \chi_n(P) ,    (5.2)
where \chi_n(P) is the density of particles emerging from the nth collision and is determined from

\chi_{n+1}(P) = \int \chi_n(P') \, K(P' \to P) \, dP' , \qquad \chi_0(P) = S(P) .    (5.3)
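The Neumann series solution can be illustrated on a toy discretization. Here the continuous kernel is replaced by a small made-up matrix K with spectral radius below one, so the series chi = S + K S + K^2 S + ... converges to the solution of (I - K) chi = S; none of these numbers come from the text.

```python
import numpy as np

# Toy discretized emergent particle density equation chi = S + K @ chi.
# The 3-point kernel and source are illustrative only.
K = np.array([[0.2, 0.1, 0.0],
              [0.1, 0.2, 0.1],
              [0.0, 0.1, 0.2]])
S = np.array([1.0, 0.5, 0.25])

chi = np.zeros_like(S)
term = S.copy()                 # chi_0 = S
for _ in range(200):            # converges because the spectral radius of K < 1
    chi += term
    term = K @ term             # chi_{n+1} = K chi_n

# The Neumann sum agrees with the direct solve of (I - K) chi = S
assert np.allclose(chi, np.linalg.solve(np.eye(3) - K, S))
```

The random walk estimates exactly this series stochastically: each collision generation of tracked particles corresponds to one term chi_n.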
Using this formulation, the transport equation is solved by first tracking the source particles, \chi_0, to their first collision sites to estimate \chi_1. The surviving first-collision particles are then transported throughout the system to estimate the next collision sites. This process is repeated for a sufficient number of particle histories to obtain an estimate of \chi(P), which is then used to estimate some desired quantity such as a neutron capture in a detector. Although the integral form of the Boltzmann equation is useful for improving or modifying the standard random walk procedure, the random walk procedure can be simulated without even referring to the transport equation. Estimates of desired quantities are obtained by observing the behavior of a large number of individual radiation particles. Because actual particle transport is a stochastic process, the simulation should also be stochastic. All that is required for simulating particle transport is a probabilistic description of the particle emissions and interactions. With a description of the geometrical boundaries and the materials of the system, the particle distance to collision and interaction events can be randomly determined from the nuclear cross section data for the materials. The Monte Carlo calculation is ideally suited for performing calculations of higher moments, such as CPSDs, of the neutron transport process in a fissile assembly because the variations of the neutron events are probabilistic in nature. Because typical biasing techniques are employed to reduce the variance of estimates of first-moment quantities, they do not preserve the higher moments; therefore, analog Monte Carlo calculations must be performed when analyzing parameters that are directly related to the higher moments of the neutron populations.
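The distinction between using average quantities and analog sampling can be made concrete for the number of neutrons per fission. The sketch below draws the multiplicity from a full discrete distribution rather than scoring the average nu-bar; the probabilities are an illustrative multiplicity distribution I made up, not evaluated nuclear data.

```python
import numpy as np

rng = np.random.default_rng(5)
# Illustrative P(nu = 0..6); an analog calculation samples from this, because
# replacing it with its mean would destroy the higher moments.
p_nu = np.array([0.03, 0.16, 0.33, 0.30, 0.13, 0.04, 0.01])
assert abs(p_nu.sum() - 1.0) < 1e-12

def sample_nu(rng, n):
    """Draw n fission multiplicities by inverting the cumulative distribution."""
    return np.searchsorted(np.cumsum(p_nu), rng.random(n), side="right")

nu = sample_nu(rng, 100_000)
nubar = np.dot(np.arange(7), p_nu)     # analytic mean of the distribution

# The analog sample reproduces nu-bar on average while keeping the full spread
assert abs(nu.mean() - nubar) < 0.02
assert nu.min() >= 0 and nu.max() <= 6
```

The sample variance of `nu` carries the second-moment information that a nu-bar-only treatment would lose, which is precisely why analog sampling is required for noise analysis.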
Because the use of average quantities reduces the statistical fluctuation of the neutron population, average quantities such as the average number of neutrons from fission are not used; instead, the appropriate probability distribution functions are sampled. The Monte Carlo codes used for this type of analysis were developed to simulate a frequency analysis measurement that uses a source, which may be treated as a Poisson-distributed white noise source, and a variety of particle detectors to characterize fissile assemblies. The source is contained in an ionization chamber to detect each spontaneous fission. The source and detector responses are accumulated and the resulting sequences segmented into data blocks. A data block is a sample of the detector response for a specified period of time. The auto- and cross-power spectral densities are accumulated and averaged over many blocks of data. This measurement technique is used to characterize fissile materials and could be used to measure the transfer function of a reactor by measuring the CPSD between the source and radiation detectors placed near the core. There are two codes available to simulate the frequency analysis measurement. The first is the Monte Carlo code KENO-NR, a modified version of KENO-Va19 that uses group-averaged neutron cross sections. This code calculates the auto- and cross-power spectral densities between a source and neutron detectors. The other code is MCNP-DSP, a modified version of MCNP4a™.20 MCNP-DSP is a continuous energy Monte Carlo code which calculates the auto- and cross-power spectral densities and auto- and cross-correlation functions between a source and radiation detectors for both neutrons and gamma rays. The Monte Carlo calculation does not impose limitations on the spatial dependence of the simulation except for the accuracy of representing physical systems.
(™MCNP is a trademark of the Regents of the University of California, Los Alamos National Laboratory.)

The only limitation on the energy dependence is that imposed by the cross section data files,
whether continuous or group averaged, and that imposed by the representation of the energy of neutrons and/or gamma rays from fission.

5.2 Monte Carlo Simulation

This is a brief review of the basic random walk of particles in the Monte Carlo simulation. The reader is referred to Lux and Koblinger21 for a complete description of the Monte Carlo method. The Monte Carlo calculation typically proceeds by tracking a specified number of source particles and their progeny. However, in the KENO-NR and MCNP-DSP Monte Carlo calculations the tracking procedure includes an additional outer loop over data blocks because the time and frequency statistics are averaged over many data blocks. The outer loop sets the current data block, and the inner loop tracks events following source fissions in the current data block. This is illustrated schematically in Fig. 5.1. The outer loop starts with nps = 1. If nps is greater than the number of specified blocks to be accumulated (bks), then the final outputs are obtained. Otherwise, the number of source events per data block (nsdpb) is sampled from a Poisson distribution. The source particles and their progeny are then tracked until the particles are either absorbed or escape from the system. After all source particles for a given data block have been tracked, the block counter is incremented and the process repeats until all blocks have been accumulated. A block diagram of the inner loop structure is given in Fig. 5.2. The inner loop begins by obtaining information about the source event. The source is treated as a point source whose directional distribution is either isotropic or determined from an appropriate distribution function. The times of the source fission events are uniformly distributed within the data block. The energies of the neutrons are sampled from a corrected Maxwellian distribution.
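The source sampling for one data block can be sketched as follows. The block length, source rate, and Maxwellian temperature are made-up illustrative values, and a plain (uncorrected) Maxwellian energy density p(E) ~ sqrt(E) exp(-E/T), i.e. a Gamma(3/2, T) distribution, stands in for the corrected spectrum used by the codes.

```python
import numpy as np

rng = np.random.default_rng(2)
block_length = 1.0e-3        # data block length, s (illustrative)
source_rate = 4.0e5          # source fission rate, 1/s (illustrative)
T_fiss = 1.42                # Maxwellian temperature parameter, MeV (illustrative)

# number of source fissions in the block is Poisson distributed
nsdpb = rng.poisson(source_rate * block_length)
# source fission times are uniformly distributed within the data block
times = rng.uniform(0.0, block_length, size=nsdpb)
# Maxwellian p(E) ~ sqrt(E) exp(-E/T) is Gamma(shape=3/2, scale=T)
energies = rng.gamma(shape=1.5, scale=T_fiss, size=nsdpb)

assert nsdpb > 0
assert times.min() >= 0.0 and times.max() <= block_length
# mean of the Maxwellian is (3/2) T
assert abs(energies.mean() - 1.5 * T_fiss) / (1.5 * T_fiss) < 0.2
```

Each sampled source event would then seed the inner-loop random walk described below.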
The number of neutrons from the spontaneous fission of 252Cf is sampled from Spencer's distribution.22 This information is stored in the bank for all but one
of the neutrons for the given source spontaneous fission. Next, the distance to the cell boundary is determined from the geometry description of the region and the direction of the source neutron, and the distance to collision is determined probabilistically from the total macroscopic cross-section of the material in which the source is located. If the distance to collision is shorter than the distance to the cell boundary, then the collision type and time are determined from the direction and velocity of the neutron; otherwise, the particle is transported to the boundary of the region and the process is repeated. The type of collision is probabilistically determined from the nuclear cross-section data. For example, the probability of a fission event is determined from the ratio of the fission cross-section to the total cross-section. If a fission event occurs, the number of neutrons from fission is sampled from the appropriate probability distributions, the fission neutron directions are sampled isotropically, and their energies are sampled from the fission spectrum of the target nucleus. The birth time of the secondary particles is the sum of the source particle birth time and the transit time to the collision site. These progeny are then stored in the bank to be tracked later. If the collision event occurred in the detector material, the detector response at the time of collision is incremented. If the particle survives the collision, the distance to the cell boundary is recalculated along with the distance to collision and the process is repeated. If the particle is absorbed or causes fission, then the next particle is retrieved from the bank and the distance to the boundary and the distance to collision are determined for this particle. If there are no particles in the bank, the inner loop counter is incremented. All particles are tracked until they are either absorbed or escape from the system.
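The two probabilistic steps at the heart of the inner loop, sampling the flight distance and selecting the collision type, can be sketched with made-up one-group macroscopic cross sections for a single material (the values below are illustrative, not data for any real material).

```python
import numpy as np

rng = np.random.default_rng(3)
# illustrative one-group macroscopic cross sections, cm^-1
sigma_f, sigma_c, sigma_s = 0.05, 0.10, 0.35     # fission, capture, scatter
sigma_t = sigma_f + sigma_c + sigma_s            # total cross section

def distance_to_collision(rng):
    """Flight distance d = -ln(xi)/Sigma_t, with xi uniform on (0, 1]."""
    return -np.log(1.0 - rng.random()) / sigma_t

def collision_type(rng):
    """Choose the event with probability Sigma_x / Sigma_t."""
    xi = rng.random() * sigma_t
    if xi < sigma_f:
        return "fission"
    elif xi < sigma_f + sigma_c:
        return "capture"
    return "scatter"

d = np.array([distance_to_collision(rng) for _ in range(20_000)])
# the mean free path is 1/Sigma_t = 2 cm
assert abs(d.mean() - 1.0 / sigma_t) < 0.1
types = [collision_type(rng) for _ in range(20_000)]
assert abs(types.count("fission") / 20_000 - sigma_f / sigma_t) < 0.02
```

In the actual codes the sampled distance is first compared against the distance to the cell boundary, exactly as described above, before a collision is allowed to occur.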
After all source and secondary particles for a given data block have been tracked, the detector responses are accumulated. This procedure is repeated for the specified number of data blocks to obtain average estimates of the detector responses. The APSDs and CPSDs are estimated from the detector responses by complex multiplication of the Fourier transforms of the data blocks and are averaged over blocks.

5.3 Detector Simulation
In these Monte Carlo calculations, the detector is specified by defining the detector material, type, and any energy thresholds. The detector response for each data block is segmented into time bins of specified width. There are three types of detectors available in these calculations: capture, scatter, and fission detectors. The response of capture detectors is due to neutron absorption followed by emission of secondary charged particles that ionize the detection medium and produce an electrical pulse proportional to the kinetic energy of the secondary charged particle. In the Monte Carlo simulation, a neutron absorption in the detector material results in a count at the appropriate time in the data block. In KENO-NR, all neutron absorptions in the detector medium result in a count. In MCNP-DSP, a specified fraction of the neutron absorptions in the detector medium lead to a count, to allow for the detector thresholds typically set in the measurements. Scattering detectors are those in which the detector response is due to neutron scattering in the detection medium. These detectors are typically used to simulate liquid and/or plastic scintillators. To observe a count in a scattering detector, the neutron must transfer enough energy to the recoil nucleus to excite electrons in the scintillation material, which produce light that is converted into an electrical pulse by a photomultiplier tube. In KENO-NR, the response of the scattering detectors is determined by the energy of the incident neutron. In MCNP-DSP, the response of the scattering detector is determined by the neutron energy deposition in the detection medium, and multiple scattering events of particles in the detectors are taken into account. The MCNP-DSP treatment of scattering detectors is more realistic than that of KENO-NR. In fission detectors, the fission fragments travel through the detection medium ionizing the atoms in the detector. The large energy release per fission
allows for easy discrimination against other events that may also produce ionized atoms in the detector. In the calculations, a count is registered each time a fission occurs in the detection medium, and the fission neutrons are stored for tracking. In MCNP-DSP, gamma rays can also contribute to the detector response for capture and scatter detectors if the calculation is a coupled neutron-photon calculation.

VI. APPLICATION TO THE ADVANCED NEUTRON SOURCE REACTOR

6.1 Advanced Neutron Source (ANS) Reactor

The ANS reactor was to be designed as an experimental facility for neutron research. Conceptual designs of the reactor included numerous facilities for neutron scattering analyses, materials irradiation, isotope production, and nuclear science studies. The reactor was designed to achieve a high flux in the reflector tank in order to perform unprecedented neutron scattering studies. To achieve the necessary flux, the conceptual designs consisted of a small compact reactor and a heavy water reflector. Figure 6.1 is a schematic of one of the ANS conceptual designs. In this design, two annular concentric fuel elements comprised the reactor core. The lower fuel element had a 200-mm i.d. and a 340-mm o.d., and the upper element had a 350-mm i.d. and a 470-mm o.d. Each fuel element consisted of involute fuel plates that were approximately 1.2 mm thick and spaced 1.2 mm apart. The plates were to be constructed of a highly enriched uranium silicide composite in an aluminum matrix. The silicide thickness varied radially. Uranium silicide was chosen because of its high density. In the following analysis, the two different diameter annular fuel elements were vertically displaced. Located in the inner region of the annular fuel
elements were three 71-mm o.d. hafnium control rods. The control rods were to be driven upward from the bottom of the reactor to the top of the reflector vessel to facilitate fuel loading and removal. In-core irradiation facilities were located between the inner radius of the upper fuel element and the control rods. A core pressure boundary tube separated the core coolant flow from the heavy water reflector. This would allow the large heavy water reflector region to be maintained at a low pressure relative to the core region pressure. Eight hafnium shutdown rods were located outside the core pressure boundary tube in the heavy water reflector. The shutdown rods would move down from the top of the reactor vessel, where they would be located during reactor operation, to the midplane of the reactor to effect shutdown of the reactor. The reflector vessel had a 3.5-m diameter and was approximately 4.3 m high. A 1.3-m-thick pool of light water surrounded the reflector vessel on the sides and was itself surrounded by a biological shield within the reactor building. In this design, eight beam tubes, two cold neutron sources, and one hot neutron source were located in the reflector vessel.

6.2 ANS Monte Carlo Model
The KENO-NR Monte Carlo model of this conceptual design of the ANS reactor was simplified significantly. The model used in these analyses consisted of the two annular fuel elements, three hafnium control rods, the heavy water reflector, and the light water pool. The tip of the inner control rods was positioned at the reactor midplane. The involute fuel plates of the annular fuel elements were not explicitly modeled. Instead, the annular fuel element models were comprised of 13 radial zones and 25 axial zones to represent the variations of the fuel density in the annuli. The beam tubes and the cold and hot neutron sources were not modeled in some calculations. The source was positioned at the midplane on the axis of the reactor core to initiate the fission process. Part of the upper fuel element and part of the lower fuel element were treated as fission detectors. Calculations were performed at various radial positions of the external detectors (from approximately 400 mm to 2500 mm, the latter being the location of the external fission detectors). To increase the precision of the estimates of the noise statistics, the external detector was modeled as an annular ring of the relevant moderator for a given radial position that was 10 mm wide and 1080 mm high. Neutron scattering was the event scored as a detection. Modeling the detectors in this fashion decreased the calculation time by increasing the detector efficiency relative to that of a point detector. The presence of the structural components in the heavy water reflector reduces the core reactivity. To account for this negative reactivity, the neutron emission probabilities were arbitrarily reduced by 2% to produce a subcritical configuration, rather than changing the control rod position. Variations of this model were also made for some calculations. In some calculations, the light water pool was replaced by a heavy water pool to determine the effects of the pool material on the calculated time delays.
To account for the presence of the experimental facilities in the heavy water reflector, an amount of aluminum equivalent to the mass of aluminum in these experimental facilities was added to the heavy water reflector for some of the calculations. In the calculations which included the effect of the experimental facilities in the heavy water reflector, the neutron emission probabilities were not reduced.

6.3 ANS Reactor Transfer Function
The ANS reactor transfer function was estimated by dividing the CPSD between the source and fuel element fission detectors by the APSD of the source. This analysis was performed using reactor models both with and without the experimental facilities in the heavy water reflector modeled. The reactivity was obtained from the Monte Carlo
REACTOR DYNAMICS FROM MONTE CARLO CALCULATIONS
calculation, and the neutron generation time was estimated by other calculations; these parameters were then used in the point kinetics transfer function, which was compared against the Monte Carlo calculated transfer function. For calculations which did not model the experimental facilities in the heavy water reflector, the reactivity was -0.0142, and the neutron generation time was 1.6 ms. This unusually large generation time is caused by the heavy water reflector. A comparison of the point reactor kinetics transfer function and the Monte Carlo calculated transfer function is shown in Fig. 6.2. There is excellent agreement between the Monte Carlo calculated transfer function and the prompt point kinetics transfer function for each annular fuel element. The presence of the experimental facilities in the heavy water reflector greatly reduces the reactivity. For the calculation which modeled the aluminum in the heavy water reflector, the reactivity was -0.0534, and the neutron generation time was 1.0 ms. In this calculation, the neutron emission probabilities were not reduced because the aluminum in the heavy water reflector significantly reduced the reactivity of the reactor. As expected, the neutron generation time decreased when the aluminum was added to the heavy water reflector. A comparison of the Monte Carlo calculated transfer function with the point kinetics transfer function for the reactor with aluminum in the heavy water reflector is shown in Fig. 6.3. Once again, there is excellent
TIMOTHY E. VALENTINE
agreement between the Monte Carlo calculation and the point reactor kinetics model. Consequently, the preceding calculations demonstrate that point kinetics was applicable for these geometrical configurations of the ANS reactor. The applicability of point kinetics can also be verified by examining known properties of the CPSDs. For point kinetics to be applicable, the real part of the CPSD between the source and the fuel element fission detector must be positive. Figure 6.4 illustrates that the real part of the CPSD between the source and the fuel element fission detector is indeed positive for all frequencies. The CPSD for the upper element has a greater value than the CPSD for the lower element because the upper fuel element has a higher concentration of ²³⁵U than the lower fuel element. If the transfer function between the source and fuel element fission detector is the same for both fuel elements, the phase of the CPSD between the fuel element fission detectors must be zero. Furthermore, if the phase is zero, then the detected neutrons represent those of the fundamental mode, and the two fuel elements behave as one element. The phase of the CPSD between the two fuel elements is approximately zero, as shown in Fig. 6.5. This was to be expected because the transfer functions between the source and the fuel element fission detectors are essentially the same. These properties of the power spectral densities have shown that point kinetics is applicable for this geometrical configuration of the ANS reactor and that the two vertically displaced annular fuel elements behave as one core.

6.4 ANS Calculated Time Delays
The time delay between core power production and the external detectors was estimated from the phase of the CPSD between the fuel elements and the external scattering detectors. These calculations were performed with and without aluminum in the heavy water reflector. The time delay was evaluated at several positions in the heavy water reflector and in the light water pool. The time delay increased as a function of distance from the core until it reached a saturation value in the light water pool. The results of the calculations are presented in Table 6.1 and are shown in Fig. 6.6. The time delay had a maximum (~ 18 ms) in the reflector and then decreased to 15 ms in the pool for the calculations that did not include the aluminum in the heavy water reflector. The decrease in the time delay is due to the fact that once a slowed-down neutron reached the light water pool it was absorbed locally due to hydrogen capture. The flight path of a neutron to a radial point is not a direct path but consists of many scattering paths in all directions while “diffusing” to the radial point. Therefore, the calculated time delay is the average flight time of thermal neutrons “diffusing” to each radial point. Because more neutron scattering occurred in the heavy water, the average flight path was longer in the heavy water reflector; hence, the average flight time will be longer. Neutrons that have numerous scattering collisions in the heavy water may be scattered back into the core and would not contribute to the detector response in the light water pool. To further enhance understanding of this decrease in the time delay, a calculation was performed with the light water replaced by heavy water. For the all-heavy-water case, the time delay increased with distance, as shown in Fig. 6.7, thus confirming the effects of neutron absorption in the light water. Because the experimental facilities in the heavy water reflector were not modeled, the time delays were overestimated.
Additional calculations were performed that included aluminum in the heavy water reflector. The aluminum in the heavy water reflector significantly reduced the time delays because neutrons that stayed in the heavy water reflector for relatively long periods could be absorbed by the aluminum. The results from these calculations for the three detector positions in the light water pool are given in Table 6.2. Including the aluminum in the heavy water reflector decreased the time delay to the external detectors by 5 ms. The time delay at the position of the external fission detectors is 10 ms.
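The phase-slope estimate underlying these time delays can be sketched on synthetic data. In the toy below, a white "core" signal is delayed by 15 ms (a value chosen near those reported above; the sampling rate, noise level, and segment lengths are assumptions) and the delay is recovered from the slope of the CPSD phase via the relation phase(f) = -2πfτ:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000.0            # sampling rate, Hz (illustrative)
delay = 15             # true delay in samples: 15 ms at 1 kHz
nseg, nsegs = 512, 200

src = rng.normal(size=nseg * nsegs + delay)
core = src[delay:]                                        # "core" signal
ext = src[:-delay] + 0.3 * rng.normal(size=nseg * nsegs)  # delayed "external" signal

def cpsd(x, y, nseg):
    """Cross power spectral density averaged over non-overlapping segments."""
    n = len(x) // nseg
    acc = 0.0
    for k in range(n):
        X = np.fft.rfft(x[k*nseg:(k+1)*nseg])
        Y = np.fft.rfft(y[k*nseg:(k+1)*nseg])
        acc = acc + np.conj(X) * Y
    return acc / n

G = cpsd(core, ext, nseg)
f = np.fft.rfftfreq(nseg, d=1.0/fs)
phase = np.unwrap(np.angle(G))

# The delay is the (negative) slope of the unwrapped phase, fitted over
# the low-frequency bins where the coherence is high.
sl = slice(1, 100)
tau = -np.polyfit(f[sl], phase[sl], 1)[0] / (2.0 * np.pi)   # seconds
```

Fitting the slope rather than dividing the phase at a single frequency averages out the statistical fluctuations of the individual CPSD bins.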
VII. SUMMARY
Noise analysis and Monte Carlo methods are widely used in characterizing nuclear reactors. The combination of these two methods in the Monte Carlo codes MCNP-DSP and KENO-NR is unique and provides a useful tool for the design of reactor safety and control systems. This article has demonstrated how Monte Carlo methods can be applied to investigate the applicability of point kinetics for steady-state reactor configurations or, alternatively, to provide a more detailed model for the reactor transfer function by computing it from power spectral densities. The relationship between the point reactor kinetics transfer function and the Monte Carlo calculated transfer function was defined so that the transfer functions obtained from the two models could be compared. Properties of the Monte Carlo calculated power spectral densities also provide additional means to investigate the applicability of point reactor kinetics. The phase of the CPSD between neutron detectors can be used to determine the time delay between core power production and external detector response, a quantity essential to the design of reactor safety systems. These Monte Carlo analyses apply to a specific steady-state configuration of the reactor and cannot be used to assess the applicability of point kinetics for other configurations or during reactor transients. Additional Monte Carlo calculations could be performed to investigate other reactor configurations. The Monte Carlo analysis was applied to a conceptual design of the ANS reactor to determine the applicability of point kinetics for the reactor operating in a steady-state configuration. This analysis has shown, for the slightly subcritical configuration of the reactor, that point kinetics is applicable and that the two vertically displaced annular fuel elements behave as one fuel element.
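As a qualitative illustration of the point kinetics transfer function used in these comparisons, the sketch below evaluates the zero-power form G(s) = Λ / (sΛ - ρ + Σᵢ βᵢ s/(s + λᵢ)) with the reactivity and generation time quoted for the no-facilities calculation. The six-group delayed neutron constants are typical ²³⁵U values assumed for illustration only; they are not taken from the article:

```python
import numpy as np

# Illustrative six-group delayed-neutron data (assumed, typical 235U values)
beta = np.array([2.15e-4, 1.424e-3, 1.274e-3, 2.568e-3, 7.48e-4, 2.73e-4])
lam = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # 1/s

rho = -0.0142       # reactivity, no-facilities calculation
Lambda = 1.6e-3     # neutron generation time, s

def point_kinetics_tf(f):
    """Zero-power point kinetics transfer function evaluated at s = j*2*pi*f."""
    s = 2j * np.pi * np.asarray(f, dtype=complex)
    denom = s * Lambda - rho + np.sum(beta[:, None] * s / (s + lam[:, None]), axis=0)
    return Lambda / denom

f = np.logspace(-2, 3, 200)     # 0.01 Hz to 1 kHz
G = point_kinetics_tf(f)
# Low-frequency plateau |G| -> Lambda/(-rho); prompt roll-off at high frequency
```

Plotting |G| and its phase against the Monte Carlo calculated transfer function is the kind of comparison shown in Figs. 6.2 and 6.3.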
If the point kinetics model for the reactor transfer function had not been applicable, the Monte Carlo calculated transfer function could have served as a model for the reactor transfer function. The reactor transfer function could be measured using a ²⁵²Cf source in an ionization chamber placed at the center of the core and a fission detector adjacent to the core. The Monte Carlo calculations also provided estimates of the time delay between core power production and the external detector response. For this conceptual design of the ANS reactor, the time delay between core power production and the external detector response is approximately 10 ms if the aluminum equivalent of the heavy water reflector components is included in the Monte Carlo model. The estimate of the time delay without aluminum in the heavy water reflector is conservative and provided a limit for the design of the reactor safety systems. The time delay could also be obtained by measuring the CPSD between a detector near the core and external detectors.
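The CPSD properties invoked earlier (positive real part, near-zero phase between detectors seeing the fundamental mode) can be checked numerically on synthetic records. The data below are hypothetical white-noise stand-ins for detector signals, not output of the Monte Carlo codes:

```python
import numpy as np

rng = np.random.default_rng(0)
nseg, nsegs = 256, 400

def cpsd(x, y, nseg):
    """Cross power spectral density averaged over non-overlapping segments."""
    n = len(x) // nseg
    acc = 0.0
    for k in range(n):
        X = np.fft.rfft(x[k*nseg:(k+1)*nseg])
        Y = np.fft.rfft(y[k*nseg:(k+1)*nseg])
        acc = acc + np.conj(X) * Y
    return acc / n

# Two "detector" records driven by one common source process plus
# independent detection noise, mimicking two detectors that both see
# the fundamental mode (hypothetical data).
common = rng.normal(size=nseg * nsegs)
d1 = common + 0.5 * rng.normal(size=common.size)
d2 = common + 0.5 * rng.normal(size=common.size)

G12 = cpsd(d1, d2, nseg)
# Fundamental-mode behaviour: real part positive, phase near zero
max_phase = np.max(np.abs(np.angle(G12)))
```

The independent noise terms cancel on averaging, so only the correlated (common-mode) component survives in the CPSD, which is why its real part stays positive and its phase stays near zero at all frequencies.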
Acknowledgment. The author is very grateful to his colleague J. T. Mihalczo for many discussions and advice regarding this work, and is indebted to J. K. Mattingly, R. B. Perez, and J. A. March-Leuba for their detailed review of this manuscript.
REFERENCES

1. Keepin, G. R., Physics of Nuclear Kinetics, Addison-Wesley Publishing Co., Reading, Massachusetts (1965).
2. Thie, J. A., Reactor Noise, Rowman and Littlefield, Inc., New York (1963).
3. de Hoffman, F., Intensity Fluctuations of a Neutron Chain Reactor, MDDC-382, LADC-256 (1946).
4. Courant, E. D. and Wallace, P. R., "Fluctuations in the Number of Neutrons in a Pile," Phys. Rev., 72, 1038 (1947).
5. Moore, M. N., "The Determination of Reactor Transfer Functions from Measurements at Steady Operation," Nucl. Sci. Eng., 3, 387 (1958).
6. Cohn, C. R., "Determination of Reactor Kinetic Parameters by Pile Noise Analysis," Nucl. Sci. Eng., 5, 331 (1959).
7. Saito, K. and Otsuka, M., "Theory of Statistical Fluctuations in Neutron Distributions," J. Nucl. Sci. Technol., 2, 304 (1965).
8. Harris, D. R., "Neutron Fluctuations in a Reactor of Finite Size," Nucl. Sci. Eng., 21, 369 (1965).
9. Zolotar, B. A., "Monte Carlo Analysis of Nuclear Reactor Fluctuation Models," Nucl. Sci. Eng., 31, 282 (1968).
10. Ficaro, E. P. and Wehe, D. K., "Monte Carlo Simulations of the Noise Analysis Measurements for Determining Subcriticality," Proceedings of the International Topical Meeting on Advances in Mathematics, Computations and Reactor Physics, Pittsburgh, Pennsylvania, April 28-May 2, 1991, Vol. 1, p. 5.22.1, American Nuclear Society (1991).
11. Valentine, T. E. and Mihalczo, J. T., "MCNP-DSP: A Neutron and Gamma Ray Monte Carlo Calculation of Source Driven Noise-Measured Parameters," Ann. Nucl. Energy, 23, 1271 (1996).
12. Mihalczo, J. T., Pare, V. K., Ragen, G. L., Mathis, M. V., and Tillet, G. C., "Determination of Reactivity from Power Spectral Density Measurements with Californium-252," Nucl. Sci. Eng., 66, 29 (1978).
13. Selby, D. L., Harrington, R. M., and Thompson, P. B., The Advanced Neutron Source Project Progress Report, FY 1991, ORNL-6695, Oak Ridge National Laboratory, Oak Ridge, Tennessee (January 1992).
14. Bendat, J. S. and Piersol, A. G., Random Data: Analysis and Measurement Procedures, John Wiley & Sons, New York (1986).
15. Papoulis, A., Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York (1984).
16. Uhrig, R. E., Random Noise Techniques in Nuclear Reactor Systems, Ronald Press Co., New York (1970).
17. Henry, A. F., Nuclear Reactor Analysis, Massachusetts Institute of Technology, Cambridge, Massachusetts (1975).
18. Carter, L. L. and Cashwell, E. D., Particle-Transport Simulation with the Monte Carlo Method, Energy Research and Development Administration, Oak Ridge, Tennessee (1977).
19. Petrie, L. M. and Landers, N. F., ORNL/NUREG/CSD-2, Oak Ridge National Laboratory (1984).
20. Briesmeister, J. F., Ed., "MCNP4A-A Monte Carlo N-Particle Transport Code," LA-12625-M, Los Alamos National Laboratory (1993).
21. Lux, I. and Koblinger, L., Monte Carlo Particle Transport Methods: Neutron and Photon Calculations, CRC Press, Boca Raton, Florida (1991).
22. Spencer, R. R., Gwin, R., and Ingle, R., "A Measurement of the Average Number of Prompt Neutrons for Spontaneous Fission of Californium-252," Nucl. Sci. Eng., 80 (1982).
NOTES ON A SIMPLIFIED TOUR: FROM THE FOURIER TO THE WAVELET TRANSFORM
Marzio Marseguerra
Dept. of Nuclear Engineering
Polytechnic of Milan
Italy
INTRODUCTION

Any signal may, in principle, be viewed as resulting from a linear superposition (possibly an integral) of assigned elementary functions. If the set of these functions is sufficiently large, the coefficients in the superposition constitute the transform of the signal. A classic example is given by Fourier analysis, in which the elementary functions are sinusoids of different frequencies which may be viewed as generated by varying the frequency of an initial mother sinusoid. The wavelet transform may be similarly viewed: here the elementary functions are generated by the dilations and translations of a mother function which may be selected with some degree of freedom, provided it has reasonably good time and frequency localization features. It turns out that this mother function should also have zero mean, so that it resembles a small wave: hence the name mother wavelet. An example of a mother wavelet is the derivative of a bell-shaped curve such as a Gaussian function. With respect to Fourier analysis, a mother wavelet with reasonably good localization both in time and frequency gives the wavelet transform the so called zoom in and zoom out capability – i.e. the capability of resolving both the details and the trend of a signal – which underlies a broad range of applications. In recent years, particularly since the second half of the 1980s, a tremendous interest has emerged in the application of wavelet analysis to different research areas, both theoretical and applied. Examples may be found in a variety of fields, ranging from functional analysis and digital multidimensional signal processing to image and sound analysis or to the biomedical domain. Important results have been achieved in specific applications such as digital video data compression, a very important subject in the telecommunication field, or turbulent flow, of interest to hydraulic, naval and aerospace engineers and to meteorologists.
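A minimal numerical sketch of such a mother wavelet, the (negated) derivative of a Gaussian, together with its dilated and translated copies, is the following; the grid and normalization choices are illustrative:

```python
import numpy as np

t = np.linspace(-10.0, 10.0, 4001)
dt = t[1] - t[0]

def mother(u):
    """Derivative of a Gaussian bell (up to sign): a small wave with zero mean."""
    return -u * np.exp(-u**2 / 2.0)

def wavelet(u, a, b):
    """Dilated (a) and translated (b) copy, scaled so the L2 norm is preserved."""
    return mother((u - b) / a) / np.sqrt(a)

psi = mother(t)
mean = float(np.sum(psi) * dt)                               # ~0: zero-mean requirement
norm = float(np.sum(np.abs(wavelet(t, 2.0, 1.0))**2) * dt)   # same L2 norm as the mother
```

The zero mean is what makes each family member a "small wave" rather than a bump, and the 1/sqrt(a) factor makes every dilated copy carry the same energy as the mother.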
Correspondingly, a great many papers and books have been published in which
Advances in Nuclear Science and Technology, Volume 25 Edited by Lewins and Becker, Plenum Press, New York, 1997
the theoretical aspects of the wavelet methodology have been deeply investigated with the aid of the whole arsenal of tools at the disposal of mathematicians. However, this scientific literature is generally written by specialists for specialists, so that the mathematical demands of these sophisticated tools scale up considerably. Non-mathematicians can hardly fully understand the matter and are often lost, since in general they do not have the clew of thread needed to get out of the labyrinth of theorems, propositions, lemmas, applicability conditions, etc., required by rigorous mathematical reasoning. The present paper has been written mainly for an audience with a mathematical background at the engineering degree level; since it is devoted to people who work in applied areas, the so called theorem-structured way has been deliberately avoided, on account of the possibility that it might obscure the matter and discourage non-mathematicians from entering this new and fascinating field. A plain expository style has therefore been preferred, sometimes at the expense of mathematical rigour, and the wavelets are presented as a succession of easily understandable analytical expressions. The present exposition is simply my understanding of a possible way of crossing the Pillars of Hercules towards the wavelet sea. On the whole I followed, with some heuristic simplifications, the line of reasoning presented by G. Kaiser (1994) in the first part of his beautiful book A Friendly Guide to Wavelets, to which I systematically refer for detailed explanations. Much of what is reported here I learned from Kaiser, and an enormous intellectual debt is owed to him. Some parts are also borrowed from two other outstanding books, namely Ten Lectures on Wavelets by I. Daubechies (1992) (a book difficult for non-mathematicians, even though it received an AMS prize for exposition) and An Introduction to Wavelets by C. Chui (1992) (less difficult for non-mathematicians).
I hope the present paper will be of some help to professionals in engineering science for understanding specialized papers or up-to-date surveys such as those contained in the special issue on wavelets (1996) co-edited by I. Daubechies. In essence, the purpose of this paper is to encourage engineers, and particularly nuclear engineers, to add the wavelet technique to the widely adopted Fourier methodology. The plan of the paper is as follows. Initially some preliminary mathematical notions are recalled; Section 2 then reviews the well established windowed Fourier transform; in Section 3 the concept of a frame is introduced; this concept is used in Section 4, where the continuous wavelet transform is introduced in a way which parallels that of the windowed Fourier transform; Sections 5 and 6 address the discrete versions of the above transforms; the multi-resolution analysis is presented in Section 7 and, finally, the link between the wavelets and sub-band filtering is given in Section 8. At the end of the paper a list of essential references is given; comprehensive lists of up to date references on specific issues may be found, for example, in IEEE (1996).
1. PRELIMINARIES

1.1 Basis of a vector space

Let us consider an M-dimensional vector space $\mathbb{C}^M$. A standard basis in $\mathbb{C}^M$ is a sequence $\{e_k\}_{k=1}^M$ of M column vectors such that all the elements of the generic vector $e_k$ are zeros except the $k$-th element, which is one, viz., $(e_k)_j = \delta_{jk}$, where $\delta_{jk}$ is the usual Kronecker delta.
A sequence $\{b_k\}_{k=1}^M$ of M linearly independent column vectors is a basis in $\mathbb{C}^M$ if any column vector $x$ can be written as $x = \sum_k x_k b_k$, where $x_k$ is the component of $x$ along the vector $b_k$. When considering any two bases $\{b_k\}$ and $\{c_j\}$ in $\mathbb{C}^M$, the vectors of one basis may be expressed as linear combinations of those of the other basis, viz., $b_k = \sum_j a_{jk} c_j$, where $a_{jk}$ is the component of $b_k$ along $c_j$.

1.2 Dual spaces and dual bases

Consider the linear operator F which maps column vectors in $\mathbb{C}^M$ to column vectors in $\mathbb{C}^N$, viz. $F: \mathbb{C}^M \to \mathbb{C}^N$, also written as $y = Fx$. Let $\{b_k\}$ and $\{c_n\}$ be any bases in $\mathbb{C}^M$ and $\mathbb{C}^N$, respectively. Then the result of applying F to a column vector $x = \sum_k x_k b_k$ is the column vector $y$ so expressed

$$ y = Fx = \sum_{k=1}^{M} x_k (F b_k) = \sum_{n=1}^{N} y_n c_n, $$

where $y_n$ is the component of the vector $y$ along the vector $c_n$. Then, in the given bases, F is represented by a matrix of order (N, M) with elements $F_{nk}$, and the application of F to $x$ is computed as a matrix product. Particular cases:

i. M = 1, i.e. $F: \mathbb{C} \to \mathbb{C}^N$. The operator F is represented by a matrix of order (N, 1), i.e. by a column vector $u$ of order N. Since the matrix product of the (column) vector $u$ times the 1-dimensional vector $c$ (a scalar) is the (column) vector $cu$, we may say that any (column) vector $u \in \mathbb{C}^N$ may be viewed as a linear operator mapping $\mathbb{C} \to \mathbb{C}^N$. Formally:

$$ u(c) = u\,c = c\,u. \qquad (1.1) $$

In words: when the N-dimensional (column) vector $u$ (regarded as an operator) operates on the 1-dimensional vector $c$, the result is the product of $c$, now viewed as a scalar in $\mathbb{C}$, times $u$.

ii. N = 1. In the following we change symbols and write N instead of M. Thus the mapping here considered will be written as $F: \mathbb{C}^N \to \mathbb{C}$. The operator F, called a linear functional, is represented by a matrix of order (1, N), i.e. by a row vector of order N. The N-dimensional space spanned by these row vectors will be denoted by $(\mathbb{C}^N)^*$. Since the matrix product of a (row) vector times a (column) vector is a 1-dimensional vector (a scalar), we may say that any (row) vector may be viewed as a linear operator mapping $\mathbb{C}^N \to \mathbb{C}$. Given any basis $\{b_k\}$ (column vectors) of $\mathbb{C}^N$, we may construct a basis $\{\tilde b^j\}$ (row vectors) of $(\mathbb{C}^N)^*$ such that when $\tilde b^j$ operates on a vector $x$ the result is the component $x_j$ of $x$ with respect to the basis $\{b_k\}$, viz., $\tilde b^j x = x_j$. Thus the row vector $\tilde b^j$ may be viewed as a linear functional. In particular, the vector composition rule (parallelogram rule) $x = \sum_k x_k b_k$ yields $\tilde b^j x = \sum_k x_k\, \tilde b^j b_k = x_j$, so that we have the so called duality relation

$$ \tilde b^j b_k = \delta_{jk}, \qquad j, k = 1, \dots, N. \qquad (1.2) $$

Correspondingly, $(\mathbb{C}^N)^*$ is called the dual space of $\mathbb{C}^N$ and $\{\tilde b^j\}$ is the dual basis of $\{b_k\}$. In practice the basis vectors $b_k$ are given by assigning their components in the standard basis. In this basis the components of each $\tilde b^j$ are the solutions to the system (1.2).
Thus, the N algebraic systems (1.2), of N equations each, must be solved in order to get the whole sequence $\{\tilde b^j\}_{j=1}^N$.
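Numerically, solving the N systems (1.2) amounts to a single matrix inversion. A small sketch in C³ with an arbitrarily chosen non-orthogonal basis (illustrative numbers):

```python
import numpy as np

# A non-orthogonal basis of C^3 (illustrative numbers), stored as columns of B
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Solving the N systems of eq.(1.2) for all j at once amounts to inverting B:
# the dual-basis row vectors are the rows of B^{-1}.
B_dual = np.linalg.inv(B)

# Row j applied to basis vector k gives the Kronecker delta (duality relation)
delta = B_dual @ B

# The dual basis extracts components: x = sum_k x_k b_k with x_k = (dual row k) x
x = np.array([2.0, -1.0, 3.0])
coeffs = B_dual @ x
```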
1.3 Inner product

Given $x, y, z \in \mathbb{C}^N$ and $c \in \mathbb{C}$, any inner product $\langle x, y \rangle$ must satisfy the following axioms:

$$ \langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle, \quad \langle x, c\,y \rangle = c\,\langle x, y \rangle, \quad \langle y, x \rangle = \overline{\langle x, y \rangle}, \quad \langle x, x \rangle > 0 \ \text{for} \ x \neq 0, $$

where the overbar denotes complex conjugation. If $x_k, y_k$ are the components of $x$ and $y$ with respect to the standard basis, the standard inner product between $x$ and $y$ and the norm of $x$ are defined as

$$ \langle x, y \rangle = \sum_k \bar{x}_k\, y_k \quad \text{and} \quad \|x\| = \langle x, x \rangle. $$

If the inner product vanishes we say that the two vectors are orthogonal. Clearly the concepts of orthogonality and length (square root of the norm) are relative to the choice of the inner product. More generally, a weighted inner product is defined as

$$ \langle x, y \rangle_w = \sum_k w_k\, \bar{x}_k\, y_k, \qquad w_k > 0. $$
1.4 Adjoints

Given any inner products in $\mathbb{C}^M$ and $\mathbb{C}^N$, consider the operator F, represented by a matrix of order (N, M), which maps column vectors from $\mathbb{C}^M$ to $\mathbb{C}^N$, viz. $F: \mathbb{C}^M \to \mathbb{C}^N$. The adjoint of F is defined as the operator $F^*: \mathbb{C}^N \to \mathbb{C}^M$ which performs the inverse mapping and satisfies the adjoint condition

$$ \langle F x, y \rangle = \langle x, F^* y \rangle \quad \text{for all } x \in \mathbb{C}^M,\ y \in \mathbb{C}^N. \qquad (1.4) $$

It can be shown that the adjoint $F^*$ of F exists and is unique. The determination of the matrix representing $F^*$ is simple in the case of orthonormal bases $\{e_k\}$ and $\{\epsilon_n\}$ in $\mathbb{C}^M$ and $\mathbb{C}^N$, respectively. In this case we have:

$$ F e_k = \sum_n F_{nk}\, \epsilon_n, \qquad F^* \epsilon_n = \sum_k (F^*)_{kn}\, e_k. $$

Multiplying the first equation by $\epsilon_n^*$ and the second one by $e_k^*$ yields

$$ F_{nk} = \langle \epsilon_n, F e_k \rangle, \qquad (F^*)_{kn} = \langle e_k, F^* \epsilon_n \rangle. $$

Application of the adjoint condition to one of the above equations, for example to the second equation, yields $(F^*)_{kn} = \overline{F_{nk}}$. Thus the matrix representing $F^*$ is the Hermitian conjugate of the one representing F. A particularly interesting case of adjointness occurs when M = 1. Then $\mathbb{C}^M$ becomes $\mathbb{C}$, in which we choose $\langle c, c' \rangle = \bar{c}\,c'$ as the inner product. We consider any $u \in \mathbb{C}^N$ as the operator (see eq.(1.1)) such that $u(c) = c\,u$. The adjoint of $u$ is the operator $u^*$ which performs the inverse mapping, i.e. the functional (row vector) $u^*: \mathbb{C}^N \to \mathbb{C}$. For any $c \in \mathbb{C}$ and $x \in \mathbb{C}^N$, the adjoint condition gives $\langle u\,c, x \rangle = \bar{c}\,\langle u, x \rangle = \langle c, u^* x \rangle$, such that $u^* x = \langle u, x \rangle$. We also have $(u^*)^* = u$, so that

$$ \langle u, x \rangle = u^* x. \qquad (1.5) $$
Thus the inner product (a scalar) of any two (column) vectors $u, x \in \mathbb{C}^N$ may be split into the (matrix) product of the adjoint $u^*$ of $u$ (a row vector) times $x$. This is a result of paramount importance and its generalization in the Hilbert space will be extensively used in the sequel.
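The two facts just established, the adjoint as Hermitian conjugate and the splitting of the inner product as a row-times-column product, can be verified numerically on randomly chosen vectors (the matrices below are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)

# F maps C^3 -> C^2; with the standard inner products its adjoint F* is the
# Hermitian conjugate (conjugate transpose) of its matrix.
F = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
F_adj = F.conj().T

x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=2) + 1j * rng.normal(size=2)

# Adjoint condition <F x, y> = <x, F* y>, with <u, v> = sum(conj(u) * v)
lhs = np.vdot(F @ x, y)
rhs = np.vdot(x, F_adj @ y)

# The inner product of two column vectors splits as <u, v> = u* v
u = rng.normal(size=3) + 1j * rng.normal(size=3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)
split = u.conj() @ v          # row vector u* times column vector v
```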
1.5 Reciprocal basis

Let us consider the space $\mathbb{C}^N$ with a basis $\{b_k\}$ and let $\{\tilde b^j\}$ be the dual basis of $\{b_k\}$. We have seen that each row vector $\tilde b^j$ may be viewed as an operator performing the mapping $\mathbb{C}^N \to \mathbb{C}$. The adjoint of $\tilde b^j$ is a column vector $b^j = (\tilde b^j)^*$ which may be viewed as an operator performing the inverse mapping, i.e. $\mathbb{C} \to \mathbb{C}^N$. It can be shown that the sequence $\{b^j\}$ forms a new basis of $\mathbb{C}^N$; this basis is called the reciprocal basis of $\{b_k\}$. The importance of the reciprocal basis stems from the fact that it allows us to compute the components of any vector in the given basis by taking an inner product, just as in the orthogonal case. Indeed from eq.(1.5) we get

$$ x_j = \tilde b^j x = \langle b^j, x \rangle. \qquad (1.6) $$

In the particular case $x = b_k$, the above equation becomes $\tilde b^j b_k = \langle b^j, b_k \rangle$. But, since $\tilde b^j b_k = \delta_{jk}$, then, finally, $\langle b^j, b_k \rangle = \delta_{jk}$. The above represents a very important result: any given basis and its reciprocal basis are (mutually) orthogonal. Given any basis $\{b_k\}$ in $\mathbb{C}^N$, we have seen that the vectors $b^j$ of the reciprocal basis are the adjoints of the vectors $\tilde b^j$ of the dual basis and that these may be computed from the $b_k$ by solving the N systems (1.2). However, we may directly compute the $b^j$ from the $b_k$ of the original basis by resorting to the metric operator¹, defined as

$$ G = \sum_k b_k\, b_k^* \qquad (1.7) $$

and represented, e.g. in the standard basis, by a square matrix of order N. Indeed, from eqs.(1.5) and (1.6), we have

$$ G\, b^j = \sum_k b_k\, b_k^*\, b^j = \sum_k b_k\, \langle b_k, b^j \rangle = \sum_k b_k\, \delta_{kj} = b_j. $$

Thus

$$ b^j = G^{-1} b_j. $$
1.6 The resolution of unity

The dual bases $\{b_k\}$ and $\{\tilde b^k\}$, linked by the duality relation (1.2), have the important property of resolution of unity. Since every $x \in \mathbb{C}^N$ can be written in the basis $\{b_k\}$, and since by definition of $\tilde b^k$ we have $x_k = \tilde b^k x$, we may write $x = \sum_k b_k (\tilde b^k x)$. Regarding $b_k$ as an operator from $\mathbb{C}$ to $\mathbb{C}^N$ and $\tilde b^k$ as an operator from $\mathbb{C}^N$ to $\mathbb{C}$, then $b_k \tilde b^k$ is an operator from $\mathbb{C}^N$ to $\mathbb{C}^N$, i.e. from $\mathbb{C}^N \to \mathbb{C} \to \mathbb{C}^N$,
¹ The expression which follows is valid provided G is not singular. To this aim, given any $x \neq 0$, note that $\langle x, G x \rangle = \sum_k \langle x, b_k \rangle \langle b_k, x \rangle = \sum_k |\langle b_k, x \rangle|^2 > 0$, since the $b_k$ form a basis. Thus G is positive definite and has, therefore, an inverse.
and we can write
$$ x = \Big( \sum_k b_k\, \tilde b^k \Big) x \quad \text{for every } x \in \mathbb{C}^N, $$

so that the operator in brackets equals the identity operator, viz.

$$ \sum_k b_k\, \tilde b^k = I \quad \text{or} \quad \sum_k b_k\, (b^k)^* = I. $$

The above expressions are an example of a resolution of unity, also called resolution of identity. We would like to stress that in order to have a resolution of unity, we need either a pair of dual bases ($\{b_k\}$ in $\mathbb{C}^N$ and $\{\tilde b^k\}$ in $(\mathbb{C}^N)^*$) or a pair of reciprocal bases $\{b_k\}$ and $\{b^k\}$, both in $\mathbb{C}^N$.
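For a concrete check of the metric operator, the reciprocal basis, and the resolution of unity, consider again a small non-orthogonal basis of C³ (illustrative numbers):

```python
import numpy as np

# A non-orthogonal basis of C^3 (illustrative), stored as the columns of B
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Metric operator G = sum_k b_k b_k^*  (i.e. B B^H), positive definite here
G = B @ B.conj().T

# Reciprocal-basis vectors b^j = G^{-1} b_j, stored as columns
B_recip = np.linalg.inv(G) @ B

# Mutual orthogonality of a basis and its reciprocal: <b^j, b_k> = delta_jk
gram = B_recip.conj().T @ B

# Resolution of unity: sum_k b_k (b^k)^* equals the identity operator
unity = B @ B_recip.conj().T
```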
2. THE CONTINUOUS WINDOWED FOURIER TRANSFORM

2.1 Introduction

We are interested in analyzing a non-stationary, square integrable function of time $x(t)$, aiming at resolving its frequency content along time. For example, with reference to a musical piece or to a speech, the problem amounts to identifying the time occurrence of a single note or phoneme. In nuclear engineering this problem occurs, for example, in the field of early detection of failures. This is an important issue since many major failures are preceded by the random occurrence of sudden, high frequency disturbances of short duration which should be detected, and detected as soon as possible. At first sight it might appear that a solution to these problems could be found within the classic Fourier approach by looking at the frequency content of the signal. However, this approach does not generally solve the specific problem here considered. Indeed, since the Fourier Transform (FT) of a signal is usually computed after the signal has been measured during a time interval of suitable length, the frequency contributions due to single notes or to phonemes or to small disturbances of short duration are hidden in the bulk of the transform; moreover, in the Fourier approach, the signal is decomposed in the sum (actually, integral) of sinusoids having all possible frequencies whose amplitudes are fixed and related to the integral of the signal over the whole measurement time. In these integrals the time occurrence of a single note or phoneme or of a single short disturbance is lost. In this view, for example, a musical piece is considered as resulting from the superposition (constructive or destructive) of infinitely many pure notes, each generated by a musician who always plays the same note with the same intensity (amplitude). In the mathematical language we say that the analyzing functions of the Fourier approach, namely the complex exponentials, suffer from lack of time localization.
A possible way to circumvent this drawback of the Fourier approach was proposed by D. Gabor (1946), who introduced the so called Windowed Fourier Transform (WFT), also named short time FT, which is a sort of moving FT. In principle, for each time $t$ we consider only a suitably weighted portion of the signal near $t$, for example contained within a (possibly small) interval T preceding $t$, and perform the usual FT. By so doing, for each $t$ we obtain a function of the frequency in which the contributions of the sinusoids due to the rest of the signal outside T are missing and in which the various harmonics thereby computed are well localized, since we know that they occurred within the time interval $(t - T, t)$. With reference to the above mentioned musical piece, each musician still plays the same note, but now changes the amplitude with time. The limits of the WFT approach will be discussed in Section 2.7.
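A crude numerical rendition of this moving FT, applied to a toy signal containing a short high-frequency burst of the kind mentioned in the failure-detection example (all signal parameters are invented for illustration):

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)

# A slow 5 Hz background plus a short 150 Hz burst near t = 1.2 s: a toy
# stand-in for a sudden short disturbance (illustrative values).
x = np.sin(2 * np.pi * 5 * t) + (np.abs(t - 1.2) < 0.02) * np.sin(2 * np.pi * 150 * t)

# Moving FT: slide a window along the signal and transform each piece
nwin, hop = 128, 32
frames = []
for start in range(0, len(x) - nwin, hop):
    seg = x[start:start + nwin] * np.hanning(nwin)
    frames.append(np.abs(np.fft.rfft(seg)))
S = np.array(frames)                      # rows = window positions, cols = frequencies
f = np.fft.rfftfreq(nwin, 1.0 / fs)

# The magnitude near 150 Hz peaks only for windows overlapping the burst,
# recovering the time of occurrence that the plain FT would hide.
row = S[:, np.argmin(np.abs(f - 150.0))]
t_peak = (np.argmax(row) * hop + nwin / 2) / fs
```

A plain FT of the full two-second record would show energy near 150 Hz but give no indication that it occurred around t = 1.2 s; the sliding window restores exactly that information.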
2.2 Notations

In the following we shall proceed heuristically, thus skipping most of the subtleties of a rigorous mathematical approach. The main assumption is that the results so far obtained with reference to the N-dimensional vector space $\mathbb{C}^N$ can be sensibly generalized to the case of an infinite dimensional Hilbert space². In this space we consider the set $L^2(\mathbb{R})$ of measurable, square integrable functions of time, e.g. $x: \mathbb{R} \to \mathbb{C}$, where $\mathbb{R}$ and $\mathbb{C}$ are the sets of all real and complex numbers, respectively. The inner product between $x$ and $y$ and the norm of $x$ are (in the following, unless explicitly stated, all integrals run from $-\infty$ to $+\infty$):

$$ \langle x, y \rangle = \int \bar{x}(t)\, y(t)\, dt, \qquad \|x\| = \langle x, x \rangle = \int |x(t)|^2\, dt. $$
In analogy with the finite dimensional case (see eq.(1.5)), the inner product will often be written in vectorial form. In these cases we view every $y \in L^2(\mathbb{R})$ as a bounded linear operator acting on $c \in \mathbb{C}$, such that $y(c) = c\,y$; correspondingly, we define a bounded linear operator $y^*$ performing the inverse mapping $L^2(\mathbb{R}) \to \mathbb{C}$ such that $y^* x = \langle y, x \rangle$. The linear functional $y^*$ represents a particular case of an adjoint operator, defined similarly to eq.(1.4): given any two Hilbert spaces $\mathcal{H}_1, \mathcal{H}_2$, in correspondence of any bounded operator $F: \mathcal{H}_1 \to \mathcal{H}_2$ there exists a unique operator $F^*: \mathcal{H}_2 \to \mathcal{H}_1$ performing the inverse mapping such that $\langle F x, y \rangle = \langle x, F^* y \rangle$.
We will also generalize the basis of a vector space by introducing the concept of frame of a vector space. For the moment it is sufficient to say that a frame of vectors in a space represents a redundant reference system with more vectors than strictly necessary to form a basis: for example, in the bidimensional plane, three non collinear vectors form a frame. The rigorous definition of a frame will be given in the next section, when we rephrase in the frame language the results here obtained for the windowed Fourier transform. Concerning the FT, we consider the space $L^2(\mathbb{R})$ of square integrable functions of one continuous variable which will be called time but which could equally be space, voltage, etc. Given a function of time $x(t)$, its analysis is performed by the FT, viz.,

$$ X(f) = \int x(t)\, e^{-i 2\pi f t}\, dt, \qquad (2.1) $$

where $f$ is the frequency (Hz). The synthesis of $x(t)$ is performed by the Inverse Fourier Transform (IFT) of $X(f)$, viz.,

$$ x(t) = \int X(f)\, e^{i 2\pi f t}\, df. \qquad (2.2) $$
² Recall that a Hilbert space is a linear vector space which is normed, complete and equipped with an inner product. The space is said to be complete if any Cauchy sequence $\{x_n\}$ converges to an $x$ in the space, i.e. $\|x_n - x\| \to 0$ when $n \to \infty$. A sequence is called a Cauchy sequence if $\|x_n - x_m\| \to 0$ when both $n, m \to \infty$.
Both the above integrals are assumed to exist. If $x(t)$ is time-limited, i.e. if it has a compact support of length T, it is sufficient to know $X(f)$ only in correspondence of the discrete sequence of frequencies $f_m = m\,\Delta f$ ($m \in \mathbb{Z}$, where $\mathbb{Z}$ denotes the set of all integers), with $\Delta f = 1/T$. Then eqs.(2.2) and (2.1) become (in the following, if no limits are indicated in a summation sign, $\sum_{m \in \mathbb{Z}}$ is understood)

$$ x(t) = \Delta f \sum_m X(m\,\Delta f)\, e^{i 2\pi m\,\Delta f\, t}, \qquad X(m\,\Delta f) = \int_T x(t)\, e^{-i 2\pi m\,\Delta f\, t}\, dt. $$

Moreover, a little algebra shows that the samples $X(m\,\Delta f)$ determine $X(f)$ at every frequency through a sinc-type interpolation formula. Analogously, if $x(t)$ is band-limited, i.e. if $X(f) = 0$ for $|f| > f_{max}$, it is sufficient to know $x(t)$ only in correspondence of the discrete sequence of times $t_n = n\,\Delta t$, where $\Delta t \le 1/(2 f_{max})$. This last condition follows from the requirement that the Nyquist frequency $1/(2\Delta t)$ should be larger than or equal to the maximum frequency $f_{max}$ in $X(f)$. Then eqs.(2.1) and (2.2) become

$$ X(f) = \Delta t \sum_n x(n\,\Delta t)\, e^{-i 2\pi f\, n\,\Delta t}, \qquad x(n\,\Delta t) = \int_{-f_{max}}^{f_{max}} X(f)\, e^{i 2\pi f\, n\,\Delta t}\, df. $$

Moreover, a little algebra shows that $x(t)$ at every time may be recovered from its samples by sinc interpolation.
2.3 The analysis of $x(t)$

The idea underlying the WFT may be formalized by introducing a suitable window function $g(u)$ which vanishes, or is negligible, outside an interval of length T near $u = 0$, for example outside $(-T, 0)$, and whose norm is $\|g\| = \int |g(u)|^2\, du$. To visualize the situation we may think of $g(u)$ as a bell shaped curve or a rectangle starting at $u = -T$ and ending at $u = 0$. For completeness, we assume that $g(u)$ may be a complex valued function, although in practice it is real. This window, suitably translated in time, will be used to localize $x(t)$ in time³. Then, for every $t$, instead of $x(u)$ we consider the new function

$$ x_t(u) = g(u - t)\, x(u), $$

and define the WFT of $x(t)$ as the usual FT of $x_t(u)$, viz.,

$$ \tilde X(t, f) = \int e^{-i 2\pi f u}\, g(u - t)\, x(u)\, du. \qquad (2.6) $$

³ Gabor (1946) selected a Gaussian curve for $g(u)$, whose FT is again a Gaussian.
In words, as a function of $u$, $g_{t,f}(u)$ may be interpreted as a pure note located near $t$ which oscillates at the frequency $f$ within the envelope constituted by the carrier $g$ translated at $t$. More formally, eq.(2.7) shows that the family $\{g_{t,f}\}$ is constituted by a continuous sequence of vectors in $L^2(\mathbb{R})$ doubly indexed by the time-frequency pair $(t, f)$, and eq.(2.6) shows that the WFT maps an $x(t)$ defined in $\mathbb{R}$ to an $\tilde X(t, f)$ defined in the plane of the indexes. Since the family is obtained by translating and modulating with a complex exponential the window $g(u)$, this window may be properly called the mother of all the notes, and these may be written as

$$ g_{t,f}(u) = e^{i 2\pi f u}\, g(u - t). \qquad (2.7) $$
Since \(g \in L^2(\mathbb{R})\), also \(g_{\omega,t} \in L^2(\mathbb{R})\), and eq.(2.6) may be written as an inner product
where the starred vector denotes the adjoint of \(g_{\omega,t}\). Another expression for \(\tilde f(\omega,t)\) may be obtained by resorting to Parseval's theorem, viz.,

The norm of \(\tilde f\) is

where we have introduced the operator \(G\). This operator will soon be recognized as being the metric operator (see eq.(1.7)) of the WFT. Application of \(G\) to any \(f\), freely interchanging the integration orders, yields

so that
Thus the operator \(G\) reduces to the squared norm \(\|g\|^2\) of the window, which is a positive constant, possibly equal to one if the window is normalized in energy. Then eq.(2.11) reads

The above expression tells us that the univariate energy density per unit time of the signal, namely \(|f(u)|^2\), is resolved into the bivariate energy density per unit time and frequency, namely \(|\tilde f(\omega,t)|^2\).

2.4 Time and frequency localization
Assume that \(g(u)\), in addition to being well localized in time, is also well localized in frequency. In other words, assume that \(\hat g(\omega)\) vanishes outside a possibly small interval of width \(\Omega\) near \(\omega = 0\). Then eqs.(2.6) and (2.10) allow understanding the time/frequency localization features of the WFT. Indeed, eq.(2.6) tells us that, in correspondence of a given \(t\) value, \(\tilde f(\omega,t)\) is related to the function of frequency resulting from Fourier transforming a portion of \(f\) near \(t\). More specifically, the frequency content of \(\tilde f(\omega,t)\) is limited to the frequency content of that portion of \(f\) (weighted by \(g\) translated at \(t\)) which belongs to the time length \(T\) of the window. The frequency content of this portion of \(f\), not immediately evident from eq.(2.6), is better clarified by the companion eq.(2.10). This equation tells us that, in correspondence of a given \(\omega\) value, \(\tilde f(\omega,t)\) is related to the function of time resulting from an inverse Fourier transform of the portion of \(\hat f\) (weighted by \(\hat g\) translated at \(\omega\)) which belongs to the frequency interval \(\Omega\) of the window.
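The localization behaviour just described can be checked numerically. The sketch below is an illustrative construction, not taken from the text: the Gaussian window width, the tone frequencies and the grids are arbitrary choices. It approximates \(\tilde f(\omega,t)\) by direct summation on a grid and verifies that the WFT taken in each half of a two-tone signal is dominated by the locally active frequency.

```python
import numpy as np

# Minimal numerical sketch of the windowed Fourier transform (WFT):
#   f_tilde(w, t) = integral f(u) g(u - t) exp(-i w u) du,
# approximated on a uniform grid. Window and signal are illustrative.

du = 0.001
u = np.arange(0.0, 2.0, du)
# Signal: a 10 Hz tone on [0, 1), then a 40 Hz tone on [1, 2)
f = np.where(u < 1.0, np.sin(2 * np.pi * 10 * u), np.sin(2 * np.pi * 40 * u))

def wft(f, u, du, freq_hz, t, sigma=0.05):
    """WFT of sampled f at angular frequency 2*pi*freq_hz, centred at time t."""
    g = np.exp(-((u - t) ** 2) / (2 * sigma ** 2))   # Gaussian window at t
    w = 2 * np.pi * freq_hz
    return np.sum(f * g * np.exp(-1j * w * u)) * du

freqs = np.arange(1.0, 60.0, 1.0)
spec_early = np.abs([wft(f, u, du, nu, t=0.5) for nu in freqs])
spec_late  = np.abs([wft(f, u, du, nu, t=1.5) for nu in freqs])

print(freqs[np.argmax(spec_early)])  # dominant frequency seen near t = 0.5
print(freqs[np.argmax(spec_late)])   # dominant frequency seen near t = 1.5
```

The narrow window resolves which tone is active at each instant, at the price of the frequency resolution dictated by the uncertainty principle discussed below.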
In other words, the two equations (2.6) and (2.10), taken together, tell us that \(\tilde f(\omega,t)\) is related not to the whole frequency content of the portion of \(f\) near \(t\), but only to those frequency components of this portion which are near \(\omega\). In conclusion, it can be said that if \(g(u)\) vanishes (or is very small) outside a small interval \(T\), and its Fourier transform vanishes (or is very small) outside a small frequency interval \(\Omega\), then the WFT of \(f\), namely \(\tilde f(\omega,t)\), provides good information on both the time and the frequency behaviour of \(f\). In this respect it should be remembered that any window obeys an uncertainty principle, which states that \(T\,\Omega \ge\) constant. This result agrees with our intuition, according to which we must observe a signal during a finite time interval in order to estimate its frequency content during that interval. If \(T \to 0\), that is, if the window support shrinks to a point, then it is impossible to infer the frequency value of the signal at \(t\) from the knowledge of the single value \(f(t)\). Analogously, if \(T \to \infty\) (in this case \(\Omega \to 0\)), that is, if the window support tends to the whole real axis, then, independent of \(t\), the WFT becomes the usual FT and any time localization is lost.

2.5 Reconstruction formula
The WFT of a function \(f\), taken with respect to a window \(g\) (eq.(2.6), reported below for convenience),
is given by

or, in vectorial form
Inverse Fourier transforming, we get
Since the window vanishes everywhere except in the (small) time interval \(T\) near \(t\), we cannot recover \(f(u)\) from the above expression. Instead, we multiply eq.(2.17) by the translated window and integrate over \(t\). Solving with respect to \(f(u)\), we obtain the inverse windowed Fourier transform (IWFT):
where \(C\) is fixed by the operator \(G\) defined by eqs.(2.12) or (2.13). Note that, from the proportionality between \(g\) and its reciprocal, it follows that the reciprocal window is also the mother of the reciprocal family. The two equations (2.15) and (2.18) represent the pair of windowed Fourier direct and inverse transforms of a function with respect to the window \(g\). Substitution of eq.(2.16) in eq.(2.18) yields the vectorial form
which originates the continuous resolution of unity, viz.,
where \(I\) is the identity operator and where the second equation is obtained by taking the adjoint of the first one. Note that, from the definition (2.12) of \(G\), eq.(2.21) may be written more compactly. Recall that in the discrete case described in §1.6 we have seen that a resolution of identity requires a pair of reciprocal bases. Therefore, in analogy with the discrete case, we may say that the notes and their reciprocals constitute a pair of reciprocal families. Moreover, we also recall from §1.5 that the reciprocal vectors can be directly obtained by applying the inverse of the metric operator to the original vectors (see eq.(1.8)). In the present case, eq.(2.19) indicates that the operator \(G\), defined by eq.(2.13), does exactly the same job, so that it can actually be recognized as being the metric operator of the WFT. The resolution of unity is a useful expression for obtaining important relations. As a first example, eq.(2.14) may be easily obtained as follows

Since the integral at the l.h.s. is the norm of \(\tilde f\) in the space \(L^2(\mathbb{R}^2)\), the equation may be written as

We shall see in §3.2 that this
expression means that the vectors form a continuous tight frame with frame bound C. A second, more important, example of application of the resolution of unity concerns the linear dependency among the vectors. Indeed multiplication of the second of eqs.(2.21) by shows that each may be expressed in terms of a linear integral operator acting on the others
where \(K\) is the so-called reproducing kernel. The same situation occurs for the transform values: taking the adjoint of eq.(2.22) and multiplying by \(f\), from eq.(2.9) we have
From eq.(2.22) it appears that the vectors \(g_{\omega,t}\) are not independent, so that they constitute a frame and not a basis. Correspondingly, the reciprocal family may be said to constitute the reciprocal frame.

2.6 Consistency condition
We have seen that the WFT of a function \(f \in L^2(\mathbb{R})\) is a function of \((\omega, t)\). We are now interested in investigating the inverse situation, namely in determining the conditions – called consistency conditions – under which a function of \((\omega, t)\) is the WFT of a function \(f\). Clearly, not every such function can satisfy the consistency conditions, since in that case we could select a function with arbitrary time–frequency localization features, thus violating the uncertainty principle. Let us call the set of all WFTs the proper subspace of \(L^2(\mathbb{R}^2)\) such that any function in it is the WFT of some \(f \in L^2(\mathbb{R})\). It can be easily proved that a function belongs to this subspace if it satisfies the following consistency condition [write the WFT and substitute (2.20) for \(f\)]:

where \(K\) is the reproducing kernel defined by eq.(2.23). In many instances we deal with a function \(F(\omega,t)\) which we only know to belong to \(L^2(\mathbb{R}^2)\). Given a window \(g\), we may equally use eq.(2.18) to perform its IWFT. In general, if we then take the WFT of the result, we do not return to \(F\): indeed, since the admissible transforms form a proper subspace of \(L^2(\mathbb{R}^2)\) (usually a small portion of the whole space), in general \(F\) does not belong to it. We arrive instead at a function
which represents the least-squares approximation to \(F\) within the subspace of admissible WFTs. This follows from a theorem which states that, among the WFTs of all functions in \(L^2(\mathbb{R})\), it is the closest to \(F\), in the sense that
The above situation is of interest, for example, when we try to reduce the noise in a given signal. To this aim we firstly perform the WFT of the signal, and then we modify the transform in correspondence of the \((\omega,t)\) pairs in a suitable domain (e.g. setting to zero the values below a certain threshold). By so doing the modified transform generally leaves the subspace of admissible WFTs; however, its IWFT represents the least-squares approximation to the modified transform, in the sense that their transforms are as close as possible in \(L^2(\mathbb{R}^2)\).

2.7 Drawbacks of the WFT
A distinct feature of the WFT is that the window always has the same size, independently of the local behaviour of the function to be analyzed. When we are interested in resolving the frequency content of the signal along time, this feature represents a drawback of the procedure, since it limits the usefulness of the approach in the case of a wide-band, non-stationary signal. In this case the short-duration details of the signal, i.e. its high-frequency content, could be better investigated by means of a narrow window, whilst the analysis of the long-term trends, that is, the low-frequency content, calls for a window with a wide support. Actually, both kinds of windows should be simultaneously available, since a narrow window loses the general trend while a wide one smears out the details of the signal. Thus the two kinds of windows complement each other, and a collection of windows with a complete distribution of widths would be necessary.

3. FRAMES
3.1 Introduction
In the present section we introduce the concept of a frame and show that the scheme thereby derived lends itself to easily obtaining the expressions relevant to transforms like that described in the preceding section. This approach is quite general, as will be seen in the next sections, where it is applied to the wavelet transform in a straightforward manner.
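The frame machinery developed in this section can be previewed in finite dimension, where the metric operator is just the matrix \(G = \sum_k h_k h_k^{\mathsf T}\) and the frame bounds \(A, B\) are its extreme eigenvalues. The sketch below is illustrative (the vectors and the truncation depth are arbitrary choices, not taken from the text): it exhibits a tight frame, a non-tight frame, and the Neumann-series estimate of \(G^{-1}\) discussed in §3.2.

```python
import numpy as np

# A tight frame in R^2: three unit vectors at 120 degrees.
mb = np.array([[0.0, 1.0],
               [-np.sqrt(3) / 2, -0.5],
               [np.sqrt(3) / 2, -0.5]])
G_tight = mb.T @ mb                    # metric operator; tight: G = (3/2) I

# A non-tight frame: append (1, 0), which unbalances the two directions.
fr = np.vstack([mb, [1.0, 0.0]])
G = fr.T @ fr
A, B = np.linalg.eigvalsh(G)[[0, -1]]  # frame bounds = extreme eigenvalues

# Neumann-series estimate of G^{-1}:
#   G = ((A+B)/2)(I - R) with ||R|| <= (B-A)/(B+A) < 1,
#   G^{-1} = (2/(A+B)) * sum_k R^k, truncated after a few terms.
I = np.eye(2)
R = I - 2.0 / (A + B) * G
G_inv_series = 2.0 / (A + B) * sum(np.linalg.matrix_power(R, k) for k in range(12))

print(np.allclose(G_tight, 1.5 * I))                       # tight: G = (3/2) I
print(round(A, 3), round(B, 3))
print(np.allclose(G_inv_series, np.linalg.inv(G), atol=1e-6))
```

The closer \(B/A\) is to 1, the faster the series converges, in agreement with the stability discussion below.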
3.2 Continuous frames: definitions and main features
In the framework of the WFT it has been shown (see eqs.(2.15) and (2.18)) that the analysis and the synthesis of a function \(f\) are carried out by resorting to two families of vectors in \(L^2(\mathbb{R})\) – or, more generally, in a Hilbert space – namely the family \(\{g_{\omega,t}\}\) for the analysis and the reciprocal family for the synthesis, both indexed by the pair \((\omega, t)\). The analysis of \(f\) (see eq.(2.15)) gives rise to a function \(\tilde f(\omega,t)\) built from the inner products with the vectors \(g_{\omega,t}\). However, eq.(2.22) shows that the vectors \(g_{\omega,t}\) are not independent and, correspondingly, from eq.(2.24) it appears that the reciprocal vectors are also not independent. Thus the family does not represent a basis
in \(L^2\). Intuitively speaking, we may say that, with respect to a basis, this family contains a redundant number of vectors, which are said to represent a frame in \(L^2\). In order to define the frame in a slightly more general scheme, in the following we shall consider two families of vectors in \(L^2\), namely the family \(\{h_\mu\}\) and its reciprocal family \(\{\tilde h_\mu\}\), both indexed by \(\mu\), where \(\mu\) represents any pair of continuous indexes belonging to a bidimensional set\(^4\). Generalizing the weighted inner product (1.3), different weights may be given to different \(\mu\) by introducing a measure \(d\mu\), so that the integral of a function\(^5\) \(F(\mu)\) is \(\int F(\mu)\,d\mu\). We also denote by \(L^2(d\mu)\) the set of square-summable functions with measure \(d\mu\) for which
Then \(L^2(d\mu)\) is also a Hilbert space, called the auxiliary Hilbert space. The link between the new notations and the old ones relating to the WFT is
With the new notations, a frame may be so defined: a family of indexed vectors \(\{h_\mu\}\) is called a frame in \(L^2\) if there exist two constants \(A > 0\) and \(B < \infty\), called frame bounds, such that for any \(f \in L^2\) the inner product \(F(\mu) = \langle h_\mu, f \rangle\) satisfies the inequality constraint

\[ A\,\|f\|^2 \;\le\; \int |\langle h_\mu, f \rangle|^2\, d\mu \;\le\; B\,\|f\|^2 \tag{3.1} \]
The above expression is called the frame condition, or also the stability condition, since it compels \(\|F\|\) to lie within fixed limits, the ratio \(B/A\) representing a stability indicator; the closer \(B/A\) is to 1, the more stable is the representation. If \(A = B\) we say that the frame is tight, and the above condition becomes
At first sight the above definition of frame might appear a mathematical subtlety; that definition is instead the key to a deep and general understanding of the matter. Indeed, the following consequences may be drawn from it:
i. The inner product \(F(\mu) = \langle h_\mu, f \rangle\), representing a mapping from the space \(L^2\) to the space \(L^2(d\mu)\), may be viewed as performed by a frame operator \(T : L^2 \to R_T\) (where \(R_T \subset L^2(d\mu)\) is the range of \(T\)), so defined
so that \(F = Tf\). We now introduce the adjoint \(T^*\) of the operator \(T\), which performs the inverse mapping\(^6\), from \(L^2(d\mu)\) back to \(L^2\). To obtain a formal expression for \(T^*\), let us

4. The generalization to a multi-dimensional case is straightforward.
5. We shall always consider honest, i.e. measurable, functions.
6. Application of \(T^*\) to an \(F \in L^2(d\mu)\) implies returning to \(L^2\), but not necessarily to a vector whose transform is \(F\); in other words, in general, \(TT^*F \ne F\).
consider any two vectors
\(f \in L^2\) and \(F \in L^2(d\mu)\). Then

and therefore, in a weak sense,
ii. In terms of the operator \(T\) and its adjoint \(T^*\), the norm of \(F = Tf\) is

\[ \|F\|^2 = \langle Tf, Tf \rangle = \langle f, T^*Tf \rangle = \langle f, Gf \rangle \]

where \(G = T^*T\) is the so-called metric operator, which plays a role analogous to that defined in §1.5 for the discrete case. From the above definition it appears that \(G\) is Hermitian, self-adjoint and positive definite, so that it possesses an inverse \(G^{-1}\). The formal expression of \(G\) in terms of the frame vectors may be obtained by substituting eqs.(3.2) and (3.3) in the definition (3.5), viz.,
The frame condition (3.1) now plays its fundamental role: it guarantees that \(G\) and \(G^{-1}\) are both bounded. To see this, (3.4) is substituted in (3.1), viz.,

\[ A\,\|f\|^2 \;\le\; \langle f, Gf \rangle \;\le\; B\,\|f\|^2 \]

so that

\[ A\,I \;\le\; G \;\le\; B\,I \]

and, finally,

\[ B^{-1} I \;\le\; G^{-1} \;\le\; A^{-1} I \]

which indicate that \(G\) and \(G^{-1}\) are both bounded\(^7\). Then, in correspondence of any \(\mu\), the vectors obtained by applying \(G^{-1}\) belong to \(L^2\) and inner products involving them are admissible.
iii. We now define the reciprocal vectors as
Then, from eq.(3.6) we get
7. In the case of a tight frame with \(A = B = 1\), the metric operator reduces to the identity operator, i.e. \(G = I\).

which is the resolution of unity for the case at hand. In vectorial form, the resolution of unity and its adjoint are
iv. Item iii above illuminates the fundamental role played by the inverse metric operator \(G^{-1}\) in obtaining the reciprocal vectors. Unfortunately, in many instances we are not able to find an explicit expression for the inverse of a given \(G\). Nevertheless, we may obtain good estimates of \(G^{-1}\) by resorting to the first inequality in (3.8). This inequality tells us that \(G\) may be expressed as the mean of the two bounds \(AI\) and \(BI\), plus a remainder. A suitable expression for \(G\) is

\[ G = \frac{A+B}{2}\,(I - R) \tag{3.11} \]

Then

\[ G^{-1} = \frac{2}{A+B}\,\sum_{k=0}^{\infty} R^k \]

Substitution of (3.11) in (3.8) yields \(\|R\| \le (B-A)/(B+A) < 1\). Since \(\|R\| < 1\), the series converges in norm to \(G^{-1}\). In practice \(A\) and \(B\) are reasonably close to each other, so that the above series converges rapidly to its limit and thus may be well approximated by a few terms. In the case of a tight frame, namely \(A = B\), the situation is quite simple: from eq.(3.8) we get \(G = AI\) and then \(G^{-1} = A^{-1}I\). In this case the reciprocal vectors are proportional to the frame vectors and both families of vectors have the same mother.
v. We are now in a position to derive the reconstruction formula for \(f\). Indeed, defining the operator \(S = G^{-1}T^*\), from eq.(3.5) one has \(ST = I\). Then the reconstruction formula reads

\[ f = S\,F \]

so that \(S\) is the left inverse of \(T\), or, written at length (see eqs.(3.3) and (3.9)),
In summary, in order to reconstruct \(f\) from its transform \(F\), we resort to the reciprocal vectors which, in turn, are immediately obtained from the frame vectors once \(G^{-1}\) is computed. The expression (3.12) shows that the operator \(S\) actually performs the inverse mapping of any vector \(F \in L^2(d\mu)\) to a vector \(f \in L^2\). However, this does not mean that, if we apply the direct mapping \(T\) to the \(f\) so obtained, we should necessarily return to \(F\). In other words, \(TSF\) does not necessarily equal \(F\), since the operator \(TS\) performs the orthogonal projection from \(L^2(d\mu)\) onto the range \(R_T\) of the operator \(T\)\(^8\). If \(F \in R_T\), then \(TSF = F\).

8. Proof: any vector \(F\) may be decomposed into the sum of a vector in \(R_T\) plus a component in the orthogonal complement \(R_T^\perp\). The operator \(TS\) projects \(R_T\) onto itself and gives rise to a zero vector when applied to \(R_T^\perp\). Indeed, consider any \(F^\perp \in R_T^\perp\) and any \(f \in L^2\): the inner product \(\langle Tf, F^\perp \rangle = \langle f, T^*F^\perp \rangle\) vanishes and, since \(f\) is any vector in \(L^2\), this implies \(T^*F^\perp = 0\) and, finally, \(TSF^\perp = 0\).
3.3 Discrete frames
We shall now consider the case in which the set of continuous indexes \(\mu\) is replaced by a discrete set constituted by a grid of points. Then the integral of a function over the index set is replaced by the sum

and the norm of \(F\) is

where \(w_j\) is the weight of the single point, possibly equal to unity. The frame condition (3.1) becomes
4. THE CONTINUOUS WAVELET TRANSFORM
4.1 Introduction
The drawbacks concerning the time and frequency localization capability of the WFT mentioned in Section 2.7 are substantially reduced by the use of the wavelet transform technique. As in the Fourier approach, one starts with a window function \(\psi(u)\), called the mother wavelet, which takes the place of the mother window \(g\); the word wavelet is added since, besides the usual requirements of a finite norm and of good time and frequency localization, the function \(\psi\) should also have zero mean. These requirements imply that \(\psi\) must exhibit an oscillatory behaviour along the small time interval in which it is localized (where it is sensibly different from zero); it must then resemble a short wave, or wavelet. In general \(\psi\) may be a complex-valued function, but in practice it is a real function. Analogously to the pure (musical) notes considered in the Fourier case, we construct a set of functions called wavelets, which are versions of the mother wavelet scaled by \(s\) and translated by \(t\), viz.\(^9\)

\[ \psi_{s,t}(u) = |s|^{-1/2}\,\psi\!\left(\frac{u-t}{s}\right) \tag{4.1} \]
whose Fourier transform is

\[ \hat\psi_{s,t}(\omega) = |s|^{1/2}\, e^{-i\omega t}\, \hat\psi(s\omega) \tag{4.2} \]

In eq.(4.1), the scale factor \(s\) in the argument of \(\psi\) represents a dilatation or a compression of the mother wavelet according to whether \(|s| > 1\) or \(|s| < 1\), respectively; negative values of \(s\) indicate also a reflection of the wavelet around the ordinate axis. The scale \(s\) bears a strong similarity to the inverse of the frequency \(\omega\) appearing in the pure notes of the WFT: an increase of \(s\) represents a dilatation of \(\psi_{s,t}\) and, analogously, a decrease in \(\omega\) represents a dilatation of the sinusoids within \(g_{\omega,t}\). The difference between the two is that when \(\omega\) is varied the support of \(g_{\omega,t}\) remains constant, whilst that of \(\psi_{s,t}\) varies with \(s\), being larger at larger
9. Since the whole family is obtained from a single function, the basis is said to be structured.
scales, and vice versa. As we shall see, this feature represents the basis for the zoom capability of the wavelet approach. Moreover, from eq.(2.8) we see that the FT of \(g_{\omega,t}\) is related to the FT of \(g\) at a frequency translated by \(\omega\); from eq.(4.2) we see that the FT of \(\psi_{s,t}\) is instead related to the FT of \(\psi\) at a frequency multiplied by \(s\). This difference is in favour of the wavelet approach, since filtering in the frequency domain is generally performed in terms of octaves, so that frequency multiplications by a fixed factor are preferable to frequency translations by a fixed amount.
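The normalization in eq.(4.1) can be illustrated numerically: the \(|s|^{-1/2}\) prefactor keeps the \(L^2\) norm of \(\psi_{s,t}\) independent of \((s, t)\), even though the support stretches with \(s\). The Mexican-hat mother wavelet and the grid below are illustrative choices, not taken from the text.

```python
import numpy as np

du = 0.001
u = np.arange(-50.0, 50.0, du)

def mexican_hat(x):
    """A classical zero-mean mother wavelet (second derivative of a Gaussian, up to sign)."""
    return (1.0 - x ** 2) * np.exp(-x ** 2 / 2.0)

def wavelet(u, s, t):
    """psi_{s,t}(u) = |s|^{-1/2} psi((u - t)/s)."""
    return abs(s) ** -0.5 * mexican_hat((u - t) / s)

norm0 = np.sqrt(np.sum(mexican_hat(u) ** 2) * du)
ratios = []
for s, t in [(0.5, -3.0), (2.0, 0.0), (8.0, 10.0)]:
    norm_st = np.sqrt(np.sum(wavelet(u, s, t) ** 2) * du)
    ratios.append(norm_st / norm0)
    print(round(ratios[-1], 4))   # the norm ratio stays 1 at every scale
```

Changing \(s\) trades time width for frequency width while leaving the energy of each analyzing vector fixed, which is exactly the zoom mechanism described above.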
4.2 The direct and inverse transforms The wavelet transforms will now be obtained as described in the general procedure given in Section 3 with the following change of notations
Firstly we find the condition under which the family \(\{\psi_{s,t}\}\) constitutes a frame. Given any \(f \in L^2\), we define the Continuous Wavelet Transform (CWT) as the inner product (this product exists since both vectors belong to \(L^2\)), so that
where \(T\) is the frame operator (see eq.(3.2)). The CWT (or the frame operator \(T\)) performs a mapping from the space \(L^2(\mathbb{R})\) to a space of functions of \((s, t)\) having measure \(w(s)\,ds\,dt\), where \(w(s)\) is a weight function to be suitably defined later. The link between the norm of \(f\) and that of its transform is given by eq.(3.4), viz.,
where G is given by eq.(3.6), which now reads
The wavelet family constitutes a frame provided \(G\) is a bounded operator: in that case \(G\) is the metric operator of the wavelet approach (see eq.(3.9)). To see that \(G\) is bounded, we apply \(G\) to any \(f\). Freely exchanging the integration orders yields
Substituting eq.(4.1), the factor in brackets becomes
so that
The second integral on the r.h.s. does not depend on \(\omega\), provided we define the weight as \(w(s) = 1/s^2\). In this case we have

and eq.(4.5) reads \(\langle f, Gf \rangle = C\,\|f\|^2\); finally, since \(f\) is any function in \(L^2\), we get \(G = C\). Thus \(G\) is actually bounded, provided the mother wavelet is chosen in such a way that it satisfies the admissibility condition:

\[ C = \int_{-\infty}^{\infty} \frac{|\hat\psi(\omega)|^2}{|\omega|}\, d\omega \;<\; \infty \tag{4.7} \]
In passing, note that the above condition also implies that \(\hat\psi(0) = 0\), i.e.

\[ \int_{-\infty}^{\infty} \psi(u)\, du = 0 \]
which is the above-mentioned zero-mean requirement for the mother wavelet. The conclusion is that the family generated by any admissible mother wavelet actually constitutes a frame in \(L^2\). Therefore, from eq.(3.9), the reciprocal wavelets are obtained by applying \(G^{-1} = C^{-1}\). Moreover, since the frame operator \(G\) is a constant, the frame is tight (see item iv of §3.2) and the vectors of the reciprocal wavelet family, required for the synthesis of \(f\), are proportional to the frame vectors, viz.,

\[ \tilde\psi_{s,t} = C^{-1}\,\psi_{s,t} \]

where \(\tilde\psi = C^{-1}\psi\) is the mother of the reciprocal wavelets, also generated via eq.(4.1). Note that the reciprocal wavelets are generated by their mother exactly as the frame vectors are generated from \(\psi\). From eq.(3.12) we finally have the reconstruction formula
It might appear that there is something wrong in the above equation: indeed, if we integrate both members over \(u\), the r.h.s. is zero since, from eq.(4.8), the integral of each wavelet is zero, whereas the integral of the l.h.s., i.e. the area under \(f\), may well have a finite value. The explanation of this apparent incongruence is that eq.(4.9) was obtained by resorting to eq.(3.12), which makes use of the operator \(T^*\), defined by (3.3) in a weak sense. Therefore eq.(4.9) also holds true in a weak sense, so that it can only be used to compute inner products. It may be shown (Daubechies, 1992, p.25) that the integral on the r.h.s. of (4.9) converges to \(f\) in \(L^2\), but not in \(L^1\). In words, if we perform the double integration over a finite area which is then let grow, the difference between the two
members of eq.(4.9) tends to a very flat, stretched function which has the same area as that of \(f\) but vanishing norm in \(L^2\); indeed this norm is an inner product and, coherently with the mentioned weak-sense convergence, (4.9) holds true not per se but when used to compute inner products. In the case at hand the resolution of identity (3.10) is written
In general we consider positive scales only, and in this case, from the expression (4.6), we get

where the upper sign refers to positive frequencies and vice versa. The condition required for the admissibility of \(\psi\) now reads
The metric operator G is now piecewise constant, being
As before, the reciprocal wavelets are obtained by resorting to eq.(3.9)
where This time we have to distinguish between negative and positive frequencies, as required by eq.(4.13); to this aim we first Fourier transform the above equation (4.15) and then we take its inverse transform, separating the two frequency ranges, viz.,
where and
In the case of a real mother wavelet we have \(\hat\psi(-\omega) = \overline{\hat\psi(\omega)}\), and from (4.11) we get \(C_+ = C_-\). In this case from eq.(4.3) we have

so that

Then eqs.(4.15) and (4.14) become

and
Finally, eqs.(4.10) and (4.3) yield
so that
in which the norm of \(\tilde f\) is taken with respect to the measure \(ds\,dt/s^2\). This expression means that the vectors \(\psi_{s,t}\) form a tight frame with frame bound C/2.
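As a numerical illustration of the CWT just constructed (and a preview of the transient analysis in the next subsection), the sketch below transforms a step signal at one small scale and checks that the transform magnitude peaks near the discontinuity. The step signal, the Mexican-hat wavelet and all parameters are illustrative choices, not taken from the text.

```python
import numpy as np

# Direct numerical evaluation of the CWT:
#   W(s, t) = |s|^{-1/2} * integral f(u) psi((u - t)/s) du

du = 0.001
u = np.arange(0.0, 2.0, du)
f = np.where(u < 1.2, 0.0, 1.0)        # a step discontinuity at u = 1.2

def mexican_hat(x):
    return (1.0 - x ** 2) * np.exp(-x ** 2 / 2.0)

def cwt(f, u, du, s, t):
    psi_st = abs(s) ** -0.5 * mexican_hat((u - t) / s)
    return np.sum(f * psi_st) * du

s = 0.02                               # a small scale
ts = np.arange(0.1, 1.9, 0.01)
w = np.abs([cwt(f, u, du, s, t) for t in ts])
print(round(ts[np.argmax(w)], 2))      # peaks within roughly a scale-width of u = 1.2
```

Away from the jump, where the signal is locally polynomial, the zero mean of the wavelet makes the transform essentially vanish; only the irregular point survives at small scales.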
4.3 A feature of the mother wavelet in transient analysis
As pointed out in §4.1, the CWT of a signal lends itself to localizing and analyzing transient phenomena such as singularities, peaks, etc. Obviously, the efficiency of this methodology strongly depends on the choice of the mother wavelet which, to a large extent, is up to the analyst. In this respect we have seen that, besides a somewhat vague requirement of good time and scale localization properties, the main requirement imposed on \(\psi\) is represented by the admissibility condition (4.7), which implies the vanishing of its zero-th order moment. We now show that the vanishing of as many higher moments as is reasonably achievable is a further feature highly desirable in signal transient analysis. To this aim let us consider (Daubechies, 1992, p.49; Cohen, 1996) a function \(f\), continuous, bounded and square integrable together with its derivatives up to the \(n\)-th order. In order to investigate the regularity of \(f\) at some point \(u_0\), the approximating polynomial \(P_n\) of degree \(n\) is introduced, viz.,
Further, assume that \(f\) is regular at \(u_0\), i.e. assume that the following inequality holds true

\[ |f(u) - P_n(u)| \;\le\; c\,|u - u_0|^{\alpha} \]

where \(n < \alpha \le n+1\) and \(c\) is a strictly positive constant. Now select a mother wavelet \(\psi\) having vanishing moments up to at least the \(n\)-th order, viz.,

\[ \int u^{m}\,\psi(u)\,du = 0, \qquad m = 0, 1, \dots, n \]

Moreover, \(\psi\) is required to decay at infinity fast enough so that

Then it turns out that

\[ |\tilde f(s, u_0)| \;\le\; c'\,|s|^{\alpha + 1/2} \tag{4.18} \]

Proof: the condition (4.16) clearly implies that the transform of the approximating polynomial vanishes, so that we may write
Finally, eq.(4.18) follows from the condition (4.17) provided it is further assumed that
The expression (4.18) tells us that, as the scale \(s \to 0\) (which implies that also \(\tilde f(s, u_0) \to 0\), by the condition (4.19)): the transform decays very fast to zero if \(f\) is an "honest" or "regular" function at \(u_0\), i.e. if its index \(\alpha\) is large; it slowly decays to zero if \(f\) is "irregular" at \(u_0\), i.e. if its index \(\alpha\) is small. Obviously these statements are valid provided \(\psi\) is such that its moments are zero up to the \(n\)-th order, with \(n < \alpha \le n+1\). In that case, the conclusion is that, wavelet transforming an \(f\) and looking at small scales, we may observe peaks in the transform in correspondence to values of \(u\) where \(f\) is irregular. A possible application of methodologies analogous to the one described may be found in image analysis and compression. Indeed, most of the information in an image is contained in the discontinuities of the graph. A deep and comprehensive investigation of this problem may be found in Mallat's works (Mallat, 1992).

5. THE DISCRETE WINDOWED FOURIER TRANSFORM
5.1 Time-limited window
We now deal with a time-localized (or time-limited) window, that is, a window with compact support having a length \(T\), as described in §2.2. Therefore the product \(f_t(u) = f(u)\,g(u-t)\) also has the same support (translated by \(t\)), whatever the support of \(f\) may be, and it can be expanded in a discrete Fourier series (eq.(2.3))
where \(\nu_0 = 1/T\) is the frequency step\(^{10}\), and

where the last equality follows from eq.(2.15). Having discretized the frequency in steps of size \(\nu_0\), we proceed by discretizing the time in steps of length \(t_0\). By so doing the set of continuous indexes \((\omega, t)\) is replaced by a discrete set constituted by the grid of points \((m\nu_0, n t_0)\), where \(m, n \in \mathbb{Z}\). Eq.(5.1) reads

where

10. Actually we are free to choose any frequency step smaller than \(1/T\) since \(g\), suitably padded with zeros, may be thought of as having a support larger than \(T\).
To obtain the reconstruction formula for \(f\), we follow a procedure completely analogous to that of the continuum case (eq.(4.4)), viz.,
where the sum runs over the discrete grid. The family constitutes a frame provided \(G\) is bounded; in this case \(G\) would then be the metric operator of the discrete windowed Fourier approach. To see that \(G\) is actually bounded, we apply \(G\) to any \(f\), viz.,
Substitution of eq.(5.2), in which \(\omega\) and \(t\) are replaced by \(m\nu_0\) and \(n t_0\), yields
so that G is
which is a function that approximates \(\|g\|^2\). Unlike the continuum case, the operator \(G\) is not a constant; however, since \(g\) has a compact support, the sum in (5.3) contains only a finite number of nonvanishing terms, so that the function is bounded above. Moreover, if the time step exceeds the window support, i.e. if \(t_0 > T\), the function vanishes within the intervals left uncovered by the successive translates of the window. Then \(G^{-1}\) is not bounded and the translated, modulated windows do not constitute a frame. In the opposite case, namely for \(t_0 < T\), the function is always positive\(^{11}\). Then it is confined between a greatest lower bound \(A > 0\) and a lowest upper bound \(B < \infty\), so that, almost everywhere,
In conclusion, provided the time step \(t_0\) is smaller than the window support \(T\), the discretized notes do constitute a frame; the operators \(G\) and \(G^{-1}\) are both bounded, so that \(G\) actually is the metric operator. We may now utilize eq.(3.9) to get the reciprocal vectors required for the synthesis of \(f\), viz.,
Remembering that \(G\) acts as multiplication by the function of eq.(5.3), so that \(G^{-1}\) acts as multiplication by its reciprocal, we get

where

and

11. Apart from an inessential set of zero measure in the limiting case \(t_0 = T\).
is the mother of the reciprocal vectors. The above expressions indicate that the reciprocal vectors can also be obtained by means of translations and modulations of a single function, which differs from the window \(g\) as much as the function of eq.(5.3) differs from the constant \(\|g\|^2\). The reconstruction formula for \(f\), i.e. the synthesis of \(f\), is finally obtained from eq.(3.12), which in the present case reads
which is the discrete counterpart of eq.(2.18).
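The frame condition of this subsection – the sum of squared window translates must stay between positive bounds when \(t_0 < T\) – can be verified numerically. The Hann window and the half-support step below are illustrative choices, not taken from the text.

```python
import numpy as np

# Frame condition for the discrete windowed Fourier transform: with a window g
# of support T and time step t0 < T, the function m(u) = sum_n |g(u - n*t0)|^2
# must stay between positive bounds A and B (here m is our illustrative name).

T = 1.0
t0 = 0.5 * T                     # time step smaller than the window support

def hann(u):
    """Hann window supported on [0, T)."""
    return np.where((u >= 0) & (u < T), 0.5 - 0.5 * np.cos(2 * np.pi * u / T), 0.0)

u = np.arange(1.0, 4.0, 0.001)   # interior region, away from truncation effects
m = sum(hann(u - n * t0) ** 2 for n in range(-2, 12))

A, B = m.min(), m.max()
print(round(A, 3), round(B, 3))  # bounds of m(u); a frame requires A > 0
```

For this overlap the sum works out to \(0.5 + 0.5\cos^2(2\pi u/T)\), so the bounds are \(A = 0.5\) and \(B = 1\); had we taken \(t_0 > T\), gaps with \(m(u) = 0\) would appear and the frame property would be lost, exactly as stated above.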
5.2 Band-limited window
The present analysis closely follows that of the preceding section, the difference being essentially that the roles of time and frequency are interchanged. A window is said to be band-limited if its FT is zero outside a specified frequency band, viz., \(\hat g(\omega) = 0\) for \(|\omega| > \Omega\). The starting point is represented by eq.(2.10), here written as
Since \(g\) is band-limited, the product in the frequency domain also has the same bandwidth (translated by \(\omega\)), whatever the band of \(f\) may be, and it can then be expanded in a discrete Fourier series
where \(t_0\) is the discrete time step\(^{12}\); the coefficients are given by the second of eqs.(2.4), from which it is seen that they coincide with the values of eq.(5.4) written for \(t = n t_0\). Then
Having discretized the time in steps of length \(t_0\), we proceed by discretizing also the frequency in steps of size \(\nu_0\). As for the case of a time-localized window, the set of continuous indexes is then replaced by the discrete set constituted by the grid of points \((m\nu_0, n t_0)\), where \(m, n \in \mathbb{Z}\). From eqs.(5.4) and (2.8) we get
To obtain the reconstruction formula for \(f\), we again proceed as in the continuum case (see eq.(4.4)), viz.,
12. Actually we are free to choose any positive time step smaller than this, since \(\hat g\), suitably padded with zeros, can be thought of as having a bandwidth larger than \(\Omega\).
From eqs.(2.8) and (5.5) we get
Substituting in (5.6) yields
where
is a non-negative function which plays a role analogous to that of the time-limited window case, the difference being that now the inner product is taken in the frequency domain, so that we have to come back to the time domain. Moreover, from eq.(5.8) it appears that a possible metric operator would act as a filter on \(\hat f\). Proceeding similarly, it can be shown that, provided the frequency step is small enough that the translated bands overlap, then
and finally, since \(\|\hat f\|\) is proportional to \(\|f\|\) by Parseval's theorem,
Thus the family does actually constitute a frame. Having ascertained that \(G^{-1}\) is bounded, the reconstruction formula for \(f\) immediately follows from eq.(5.7), viz.,
where
Inverse Fourier transforming (5.10),
Thus the sequence whose members are the inverse FTs of the above quantities is the required reciprocal family.
6. THE DISCRETE WAVELET TRANSFORM
6.1 Introduction
Given a signal \(f(u)\), let us consider its dilated version, having the same total energy, viz.

\[ f_\sigma(u) = \sigma^{-1/2}\, f(u/\sigma) \]
where \(\sigma\) is a scale factor. A feature of the CWT is that the transform of \(f_\sigma\) computed at the scale–time \((\sigma s, \sigma t)\) coincides with that of \(f\) computed in correspondence with the scale–time \((s, t)\).
Indeed, we have
so that
In the discrete wavelet approach, the continuum of scales and times is replaced by a regular grid. When sampled with a time step \(t_0\), the initial continuous signal may be thought of as taken at the initial scale \(s_0\), so that the samples are \(f(n t_0)\). The condition (6.1) implies that the CWT of this signal dilated by \(\sigma\), i.e. at the scale \(\sigma s_0\), and taken with the larger time step \(\sigma t_0\) at times \(n\sigma t_0\), coincides with the original one. If now the dilated signal takes the place of the initial one, an additional dilatation leads to the scale \(\sigma^2 s_0\) and to the time step \(\sigma^2 t_0\). Proceeding in this way, at the \(j\)-th step the scale is \(\sigma^j s_0\), the time step is \(\sigma^j t_0\), and the discrete time sequence is \(n\sigma^j t_0\). In conclusion, in the discrete approach, the condition (6.1) is satisfied provided that geometrical sequences of scales and times are selected.
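The geometric sampling of the scale–time plane described above can be generated directly; each level multiplies the scale and the time step by \(\sigma\), so over a fixed interval the number of samples shrinks by the same factor. The values of \(\sigma\), \(s_0\), \(t_0\) and the span below are illustrative choices, not taken from the text.

```python
# Geometric (here dyadic, sigma = 2) scale-time grid:
#   level j: scale sigma^j * s0, time step sigma^j * t0.

sigma, s0, t0, span = 2.0, 1.0, 1.0, 16.0

grid = []
for j in range(4):
    s_j = s0 * sigma ** j           # scale at level j
    dt_j = t0 * sigma ** j          # time step grows with the scale
    times = [n * dt_j for n in range(int(span / dt_j))]
    grid.append((s_j, times))

for s_j, times in grid:
    print(s_j, len(times))          # coarser scales carry fewer samples
```

With \(\sigma = 2\) this is the familiar dyadic grid: each octave halves the number of time samples, which is what makes the fast pyramidal algorithms of the multiresolution analysis possible.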
6.2 Band–limited mother wavelet We now consider the case of a band–limited mother wavelet, i.e. a mother wavelet for In a discrete approach, the maximum time step at the 13. Application of Parseval’s theorem to eq.(4.3) and scale is then substitution of expression (4.2) for gives
13. This condition follows from the well-known requirement that the Nyquist frequency be larger than or equal to the maximum frequency of the dilated wavelet.
Since \(\psi\) is band-limited, \(\hat\psi_{s,t}\) is also band-limited, and we may expand in a Fourier series the quantity
where the expansion step is the scale-dependent time step at the scale \(s\). The coefficients are given by the inverse Fourier transform
Having discretized the time, we now discretize the scale according to the values of the geometrical sequence, viz. \(s_j = \sigma^j s_0\). Eq.(6.2) is then written
Correspondingly, instead of the continuous wavelets we shall consider their discretized versions. To simplify the notations we write
As in the preceding cases, we must now establish whether the wavelets so discretized constitute a frame (actually, a subframe of the continuous wavelet frame). In other words, we should see whether, also in the present discrete case, we can arrive at a frame condition similar to that of eq.(5.9). To this aim we establish the link between the norm of \(f\) and that of its transform. From the general expression (4.3) and from Parseval's theorem we get
Then
From eqs.(6.4) and (6.3) we get
so that
where
which is a real function, everywhere positive apart from a possible inessential set of measure zero. Eq.(6.6) is analogous to eq.(5.8) of the discrete windowed Fourier approach; again, it turns out that a possible metric operator would act as a filter on \(\hat f\). The next step consists in investigating whether this function is constrained within fixed limits; in that case the r.h.s. of eq.(6.6) would give rise to a constraint for the norm of \(\hat f\) and then, from Parseval's theorem, also for that of \(f\). To this aim we note that:
i. the function is scale-periodic, viz.,
so possible bounds may be determined with reference to one period;
ii. the mean value over a period is

where the constants are the same as those of the continuous wavelet transform in the case of positive scales (see eq.(4.11));
iii. it may be easily shown that

where \(C\) is defined by eq.(4.11).
From the above remarks it follows that in each period the two functions are essentially positive and oscillate around their mean values, which obey the admissibility condition (4.12). Therefore in each period – and then everywhere – the function is constrained between a greatest lower bound, which is positive everywhere except at \(\omega = 0\), where it vanishes, and a lowest upper bound, viz.,
If we define and we end up with the required constraint condition for , viz.,
which plays the role of the admissibility condition (4.12). Therefore
Since
by Parseval’s theorem, eq.(6.6) then becomes
which is the required condition for the to be a frame. Having ascertained that the discrete family actually constitutes a frame, we may determine the
NOTES ON A SIMPLIFIED TOUR
reciprocal family required for reconstructing via eq.(6.5), as follows. To this end we write
The factor in brackets on the is , and we know from eq.(6.7) that its inverse exists. Then
where we have introduced the Fourier transform of the reciprocal wavelets, viz.,
Inverse Fourier transforming eq.(6.8) yields
where the , i.e. the IFT of , are the required reciprocal wavelets. The above analysis shows that, unlike the case of the continuous transform, the discrete reciprocal wavelets are not proportional to the wavelets . In the present case, starting from an admissible mother wavelet we must proceed along the following steps. Firstly, we compute the discrete wavelets (eq.(6.4)) and their Fourier transforms ; successively we filter them with the filter, thus obtaining (eq.(6.9)); finally we get the required by inverse transforming . Concerning the filtering action, we note that in the limit the function tends to , which has the constant value ; in this case the above filtering action is missing and the proportionality between wavelets and reciprocal wavelets is restored.

7. THE MULTIRESOLUTION ANALYSIS

7.1 Introduction

The multiresolution analysis (MRA) represents an important improvement in the field of the wavelet transform, essentially because it allows one to obtain recursive expressions well suited for computation.

7.2 The nested spaces

Given a function , a scale factor and a time step , we introduce the following quantities:
and
From the definitions it follows that and are unitary operators, viz.,
Thus and analogously for b - application of the adjointness condition (1.4) to eq.(7.1) yields
c - the two operators do not commute; they enjoy the following properties
The FT of
is
Within the present context of the MRA, is generally a real, bell–shaped function centered near like the window of the WFT case. If is a suitably defined width of then is centered near and has a width of It follows that is a sample of weighted over an interval of length near Clearly, the greater the interval over which is averaged, the less the details of the profile of contribute to the computation of the sample value. In particular, the above interpretation of the inner product must be valid also for whatever the translation of may be: therefore we require that should obey the following first condition
which implies continuity of the Fourier transform for all ; note that this condition is convenient but not essential for the MRA. 14
Proof: in correspondence of an arbitrary
Since
and
14
. However
consider the quantity
then we have
A second condition imposed on the scaling function is the orthonormality between translated versions, viz.,
which implies the same kind of orthonormality between translations at any scale:
For any define the vector space as the space whose orthonormal basis is the family In particular, is the space whose orthonormal basis is the family of translated scaling functions; thus by definition. Then any vector may be written as
where we have introduced the translation operator
where
In particular, from eq.(7.8) it follows that any
and we may identify with the set with The link between and
may be written as
of all square summable sequences is (see eq.(7.3)):
Taken together, the two above equations read
Since
its norm is finite, so that
This condition guarantees that the operator is bounded even if the sum in eq.(7.9) contains an infinite number of terms. In practice we approximate the vectors
in by considering a finite number of terms in the sum appearing in (7.8): correspondingly becomes a polynomial in T and (a Laurent polynomial) and therefore it is bounded in . In the following we shall assume that any operator like is a Laurent polynomial operator. These operators will also be called functions of T. Eq.(7.8) states that the vector is obtained by applying the operator to the vector , so that is the image (the range) of under the operator . In addition to the mapping performed by , we also introduce the orthogonal projection ; the relevant operator is
Indeed, given any we get a vector in Thus is a projection operator. If is already in from the orthonormality condition (7.7) it follows that the application of leaves it unchanged. Finally, since is self–adjoint and idempotent 15, viz. and then which means that the projection is orthogonal. Since and the operator may be written as
We are now in a position to fix the third condition of the MRA. We have seen that within i.e. at scale the time shift between successive basis vectors and is then successive samples and may be intuitively viewed as representing sampled with a time step Going to the next scale the corresponding samples are taken with a time step times larger, so that the family constituted by vectors in contains more information on than the family of the vectors in From this intuitive consideration, the third condition of the MRA follows, viz.,
From this condition and from eq.(7.13) it follows that must satisfy the fundamental dilatation equation
15
Proof: we have
so that
where the operator
dependent on the choice of the scaling function
is
The dilatation equation tells us that the dilated scaling function is a weighted average of translated scaling functions the coefficients being the weights. The polynomial operator acting on vectors in is called averaging operator and the weights are called filter coefficients. From eq.(7.16) we get
We shall see that this equation represents the basis for obtaining eq.(7.59), which gives an explicit expression for . Two additional conditions are imposed on the MRA. The requirement (7.15) may be written as so that the family constitutes a hierarchical structure of nested spaces, each containing those relating to scales with higher index . Then one is led to think, intuitively, that the space relating to the smallest possible index should coincide with , and that relating to the highest possible index should consist only of a constant, whose value can only be zero, due to the requirement of belonging to . These are exactly the fourth and fifth conditions imposed on the MRA, viz.,
which, with some abuse of notation, may also be written as
Let us now consider the orthogonal complements of the spaces Eq.(7.15) tells us that the space contains therefore can be split in the sum of plus the orthogonal complement viz.,
Correspondingly, in addition to the orthogonal projection operator we introduce the orthogonal projection operator any we get
The operator viz., Since
is related to
we have
in the same way as
is related to
such that, for
(see eq.(7.14)),
where, for example means that for any From eq.(7.20), after successive substitutions, we get Then the orthogonal projection of any over
then may be written
where the last equality follows from the idempotent character of When we get Since is contained in then which is contained in is contained in as well. Therefore is orthogonal not only to but also to and the family is constituted by disjoint, mutually orthogonal subspaces; the additional condition (7.18) also implies
We complete this section by considering the action of the operator on a function . In other words, we would like to know in which space the function lives. The orthogonal projection of over is
since
when
The orthogonal projection of
over
since when . Thus maps onto . This result, together with eq.(7.13), indicates that the operator performs both of the following mappings
7.3 Useful operators and their relationships

Define the operators
and
16
H and G will be identified as the low-pass and high-pass filters, respectively. From the above definition we get
16 More generally, H and G may be defined in . Here and in the following we consider their restrictions as indicated.
Analogously
17
These operators enjoy the following properties:
The products and sums of the two operators are
and
From the above definitions and from eqs.(7.21) it follows that
The above relations are pictorially described in Figure 1, in which the and spaces are drawn as segments and the vectors in a given space are represented by points in the corresponding segment. A space is contained in another one if the downward vertical projection of the segment representing the former falls within the segment representing the latter, e.g. . The null space of an operator is represented as {0}. Note, however, that this scheme of vertical projections must not be applied to points representing vectors; indeed the vertical from a point, e.g. in , falls either within the segment representing or within that representing , and one might erroneously infer that the vector corresponding to that point belongs either to or to , whilst it is clear that can be split into a component in and another in 17
Note that isometric operators: e.g. for the case
and i.e. for
are so that
We further introduce the up–sampling and down–sampling linear operators. The up–sampling operator is defined as follows
and derives its name from the fact that its application to any vector
yields
Note that is obtained from by multiplying the time shifts of all the components by . Then represents an enlarged version of in the sense that, like , it is made up of the same linear combination of basis vectors, but these are now separated by a larger distance instead of . In particular
Let us now apply to any . From eqs.(7.3) and (7.16) we get
so that actually maps onto the restriction of . Note that . The adjoint of is immediately obtained from (7.36), viz.,
where we have introduced the down–sampling operator defined as the adjoint of the up–sampling operator, namely and the adjoint of (see eq.(7.2))
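These two samplers can be sketched numerically. The following minimal illustration (function names are ours; sequences indexed from 0, dyadic case) also checks the adjointness relation between up–sampling and down–sampling:

```python
import numpy as np

def upsample(x):
    """Up-sampling: insert a zero after every sample, (Ux)[2k] = x[k]."""
    y = np.zeros(2 * len(x))
    y[::2] = x
    return y

def downsample(x):
    """Down-sampling: keep only the even-indexed samples, (Dx)[k] = x[2k]."""
    return x[::2].copy()

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0, 7.0, 8.0, 9.0])

print(upsample(x))  # [1. 0. 2. 0. 3. 0.]
# down-sampling is the adjoint of up-sampling: <Ux, y> = <x, Dy>
assert np.dot(upsample(x), y) == np.dot(x, downsample(y))
```

As stated in the text, down–sampling discards the odd components: half of the information in y is lost, and `upsample(downsample(y))` does not return y in general.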
Up to now the scaling factor was allowed to take any real number greater than one; to further proceed with the MRA we need some relationships which can only be obtained if we restrict our considerations to a scaling factor
Henceforth we shall assume that takes the above value and, correspondingly, we will have a so–called dyadic sequence of scales ; the operators and will be replaced by and , respectively. First of all, we obtain the key relationship between and . Given any two vectors , for any we have
Since
we get
and therefore The above expressions, together with eqs.(7.33) and following relations (recall eq.(7.35)) i.e. )
allow one to obtain the
Note that is obtained from by first deleting the odd components and then halving the time shifts of the remaining even components, i.e. . In this process half of the information is lost and the support of is half that of .

7.4 The MRA approach
In the preceding sections we have seen that, given a suitable scaling function and a scale factor we may construct a ladder of nested spaces such that any may be split in the sequence of its orthogonal projections onto the spaces (eq.(7.23)). We shall now see that this sequence actually represents the WT of More precisely,
90
MARZIO MARSEGUERRA
assume that we are given a signal which we view as the orthogonal projection of an unknown signal to We successively consider the orthogonal projections of to and respectively, and then the orthogonal projections of to and respectively, etc. 18. In general we get
where
By so doing, after M steps we arrive at the expression (7.22) here reported for convenience
The vectors belong to the sequence of disjoint, mutually orthogonal spaces and represent the details of at the various scales ; the vector represents the residue of at the M–th scale. In the limit and the sequence is the complete WT of . The main feature of the MRA approach is that, for any , it allows one to obtain the pairs by means of a recursive procedure, numerically easy to implement, based on the introduction of a new sequence of vectors , the wavelets, such that represents an orthonormal structured basis in . All these wavelets may be obtained from a single function (dependent on the scaling function ), the mother wavelet, exactly as the sequence is obtained from . Moreover, a similar recursive procedure allows one to go backwards, namely to obtain given the residue and the sequence .

7.5 The wavelets

We have seen that the space is made up of the sum of plus the orthogonal complement of , namely (eq.(7.19)). Any vector may then be split into the sum of its orthogonal projections on and , viz.,
Obviously, if , its projection on vanishes; analogously, if , its projection on vanishes. We are now interested in finding explicit expressions for these two projections in terms of the operators H and H*.
Projection : From eq.(7.30) or from eq.(7.36) it appears that is the image of H* acting on any vector in ; therefore any vector in – and therefore also – may be viewed as resulting from the application of H* to some . Thus . The problem of getting will be considered below. 18
18 The orthogonal projections of any to and are zero since is orthogonal to both and
Projection Therefore any vector in
From eq.(7.32) it appears that is the null space of H. – and therefore also – obeys the condition
Substitution of eq.(7.37) for H yields
Since and are both Laurent polynomials, their product is also a Laurent polynomial and, from the first of eqs.(7.39) with and from (7.11) we get
This equation is satisfied if is odd, i.e. if it is a polynomial containing odd powers of T only. A possible solution consists in factorizing as the product of a suitably assigned function times an even function to be determined, viz.,
An appropriate
function is
whose adjoint is
so that Substituting the above expressions (7.43) and (7.42) in eq.(7.41) yields
We now deal with the problem of determining the translation operators Consider the operator The contribution of the second term on is zero since the null space of H. From eq.(7.36) we get Substitution of the second of (7.25) yields where the last equality follows from the fact that for is then
Consider the operator contribution of the first term on is zero since
and
applied to eq.(7.47). belongs to
The required expression
applied to eq.(7.47). The belongs to which,
in addition to being the null space of G, is also the null space of
19
. Then
Eqs.(7.44) and (7.45) yield
Since the sum on eqs.(7.36), (7.37)
is an odd function of T, we get from eq.(7.38) and from
Substituting in (7.48) and also from the second of (7.25), yields
where the last equality follows from the fact that belongs to so that its orthogonal projection onto leaves it unchanged. In conclusion, the required expression for is We are now in a position to introduce the mother wavelet and all the wavelets thereby generated. We have seen that and are the orthogonal projections of a generic vector onto and respectively. Both these projections may also be obtained with the aid of the dilatation operator D. The first projection may be obtained by applying D onto the vector
where the last equality has been obtained making use of the dilatation equation (7.17). The second projection may be analogously obtained by applying D onto the vector
19 Proof: Any vector to a corresponding vector we get
may be obtained by applying the dilatation operator D (see eq.(7.8)). Then, from eqs.(7.3) and (7.16)
Application of to which is zero since
yields (see eq.(7.45)) is an odd function of T (see eq.(7.38)).
where the vector
is so defined 20
In §7.8 we shall see that obeys the zero–mean condition (4.8): assuming that it is also localized in time, it may be called a wavelet. Figure 2, drawn with the same symbols as the preceding one, depicts this situation. Thus, we have been able to obtain the orthogonal projections of a generic vector onto the subspace and its orthogonal complement in by applying the dilatation operator D to the vector or to the vector , respectively. Both these vectors are generated by the translation operator or , applied the former directly to the scaling function which characterizes the current MRA, and the latter to a function of , namely . The wavelet can generate the wavelets just as generates the . Indeed eq.(7.24) tells us that the dilated and translated versions of , namely , span the space . Shortly we will see that the form an orthonormal basis in ; as a consequence, since the sum of the yields (see eq.(7.23)), the family actually represents an orthonormal basis in . This family is generated by , so that the basis is structured and the wavelet rightly deserves the name of mother wavelet. To demonstrate the orthonormality of the we start from . The may be written as . From eq.(7.33) we get
20
From (7.51) it appears that may be interpreted as the component of the basis vector of the space along the vector of the subspace i.e.
From eqs.(7.49) and (7.25), and taking into account the orthonormality of the , we get
The proof of the orthonormality of the is completely analogous to that of the (see eq.(7.7)). In terms of the coefficients the may be written as
The orthonormality condition implies
Substitution of eq.(7.46) yields the corresponding condition on the coefficients in viz., (see eq.(7.60)). Moreover, since the vectors are orthogonal to the vectors From the above consideration it follows that the orthogonal projection operator may be written as 21
7.6 The recursive relations for the coefficients of the WT
Let us consider a signal , viewed as the orthogonal projection of an unknown signal onto . The recursive relations for the coefficients of the WT of are easily obtained in terms of the expressions resulting from the application of the operators H, G and their adjoints H*, G*. This is done by taking into account that the application of the operators and to vectors in yields the same results as in (eqs.(7.35) and (7.38)), viz.,
operator (7.32),
21 Proof: if Alternatively, if
(eq.(7.37)) restricted to For we get
then is already in
(indeed, from
is a vector in then
and
From eq.(7.38) it follows that the terms for which is odd do not contribute to the sum; when is even, e.g. we get
operator
operator
(eq.(7.36)) restricted to
restricted to From Figure 2 it is easily seen that for
Substituting eq.(7.50) for yields
and proceeding as in the above case of
operator (eq.(7.26)) restricted to easily seen that for
Proceeding as in the above case for
where
(indeed, from (7.31),
From Figure 2 it is
we get
From eq.(7.29) we get the identity and . Eqs.(7.52) and (7.54) yield
From the definitions
and and, analogously,
and (7.40) it turns out that Then
Proceeding in this way, we successively obtain the sequences of vectors in viz.,
in
and
Since (again from eqs.(7.27) and (7.28)) and, analogously, , from the sequence we get
Let us now consider the inverse operation of getting and From eq.(7.20) we get
In terms of the polynomials and this expression reads (see eqs.(7.56))
and then . The first term on is a vector in and the second term is a vector in . Applying to the above equation once and once , and summing the two resulting expressions (recall that and ), yields
where the last equality follows from the fact that In terms of the coefficients of the polynomials this expression reads (see eqs.(7.53) and (7.55))
so that we finally get
7.7 Computational procedures
The stage decomposition–reconstruction of a signal may be schematized as follows:
Decomposition: given
i. Compute the convolutions 22 and
22 The convolution of the two sequences and , where , is defined as ; then
ii. Apply to the above convolutions the downsampling dyadic operator , i.e. retain only the even terms of each convolution, thus obtaining and
At this point, the components and may be computed from eqs.(7.56) and (7.57). Starting from , computed in step ii, we may go further in the decomposition.
Reconstruction: given and
i. Apply to the given sequences and the upsampling dyadic operator , i.e. define two new sequences each obtained by inserting a zero after every element of the corresponding original sequence. By inserting the zeros in the odd positions of the resulting sequences, we get and .
ii. Compute the convolutions and then sum them to get .
The above computational scheme is sketched in Figure 3.
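The two stages above can be made concrete with the Haar filter pair derived in §7.8 (h = [1, 1]/√2, g = [1, −1]/√2); the use of "full" convolutions with the transient trimmed is our implementation choice, not the author's:

```python
import numpy as np

h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # Haar low-pass filter
g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # Haar high-pass filter

def decompose(x):
    """Step i: convolve with h and g; step ii: retain the even terms."""
    return np.convolve(x, h)[::2], np.convolve(x, g)[::2]

def reconstruct(a, d, n):
    """Step i: upsample (zeros in the odd positions); step ii: convolve
    with the time-reversed filters and sum; then trim the edge samples."""
    ua, ud = np.zeros(2 * len(a)), np.zeros(2 * len(d))
    ua[::2], ud[::2] = a, d
    y = np.convolve(ua, h[::-1]) + np.convolve(ud, g[::-1])
    return y[1:1 + n]

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = decompose(x)                                # coarse and detail parts
assert np.allclose(reconstruct(a, d, len(x)), x)   # perfect reconstruction
```

Iterating `decompose` on the coarse part a reproduces the full recursive scheme of eqs.(7.56)–(7.57).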
7.8 Choice of the mother wavelet

In the preceding sections we have seen that the mother wavelet is defined in terms of the scaling function (eq.(7.51)). The first and simplest scaling function suitable for generating an orthonormal wavelet basis goes back to Haar (1910), viz., where
Note that obeys the orthonormalization condition (7.6). A disadvantage of this scaling function is that it is poorly localized in frequency: indeed which slowly decays as From the definition we get
In particular
The above coincides with the dilatation equation (7.16) provided
Eq.(7.44), then, yields ; is defined as and, finally, the Haar mother wavelet is
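The Haar case can be checked numerically: the scaling function is the indicator of [0, 1), the dilatation equation reads φ(t) = φ(2t) + φ(2t − 1), and the mother wavelet ψ(t) = φ(2t) − φ(2t − 1) obeys the zero–mean condition (4.8). A sketch (the sampling grid is ours):

```python
import numpy as np

def phi(t):
    """Haar scaling function: the indicator function of [0, 1)."""
    t = np.asarray(t)
    return np.where((t >= 0) & (t < 1), 1.0, 0.0)

def psi(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1)."""
    return phi(2 * t) - phi(2 * t - 1)

t = np.linspace(-1.0, 2.0, 601)
dt = t[1] - t[0]
# dilatation equation for the Haar filter coefficients
assert np.allclose(phi(t), phi(2 * t) + phi(2 * t - 1))
# zero-mean condition (4.8), by quadrature on the grid
assert abs(np.sum(psi(t)) * dt) < 1e-12
```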
Coming back to the general problem of finding the scaling function, recall that depends, up to a normalization factor, on the translation operator through the dilatation equation (7.17). The FT of this equation reads
where
is the trigonometric polynomial
Writing (7.58) for and substituting in (7.58) yields . Doing this over and over again yields
In the limit and we finally obtain in terms only of the function, viz.,
The convergence of the above infinite product is detailed in Kaiser (1994, pp.183–190). Since and, as we shall see immediately below, the factors in the above product tend to unity and the product can be suitably truncated. Thus may be computed, at least numerically. Inverse FT, finally, yields the required From the above discussion it appears that the determination of the scaling function and then of the mother wavelet from eqs.(7.51) and (7.46), rests on the
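The same limit can be reached in the time domain by the so-called cascade algorithm: start from a unit impulse and repeatedly upsample and convolve with the dilatation-equation coefficients. A sketch (our normalization: coefficients c with Σc_k = 2, i.e. c_k = √2 h_k):

```python
import numpy as np

def cascade(c, iters):
    """Iterate p <- conv(upsample(p), c) from a unit impulse; after `iters`
    steps p holds approximate samples of phi on a grid of spacing 2**-iters."""
    p = np.array([1.0])
    for _ in range(iters):
        up = np.zeros(2 * len(p))
        up[::2] = p
        p = np.convolve(up, c)
    return p

# Haar coefficients c = [1, 1]: the iterates are exactly the box function
assert np.allclose(cascade(np.array([1.0, 1.0]), 3)[:8], 1.0)

# Daubechies N = 2 coefficients: samples of a continuous scaling function
s3 = np.sqrt(3.0)
c = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0
p = cascade(c, 8)
assert np.isclose(p.sum() * 2.0 ** -8, 1.0)   # the integral of phi is 1
```

Since Σc_k = 2 exactly, the grid-sum estimate of the integral of φ stays equal to 1 at every iteration, which is what the last assertion checks.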
determination of
or
The coefficients
of this operator obey the following
Moreover
so that
and then, from the second of eqs.(7.60)
Let us move to the frequency domain. Since the FT of and we get
is
where
so that eq.(7.61) becomes
From (7.60) we get above equation yields Thus
substitution in the and, then,
The above conditions, together with eqs.(7.51) and (7.46), allow us to obtain the zero–mean condition for the mother wavelet , viz.,
Eqs.(7.63) represent the basis for the construction of a sequence of Finite Impulse Response (FIR) filters constituted by polynomials of degree M (M = 0, 1, . . .) with real coefficients.
23 The first constraint is obtained by substituting , as obtained from the dilatation equation (7.17), in the normalization condition (7.5). The second constraint follows from the generalization of the dilatation equation (7.16). From eq.(7.3) we get . Then
Further assuming that the first and the last coefficient of each
polynomial, namely and , have non–zero values, it follows that M must be odd. Indeed, in the case of an even M, the condition (7.60) written for would become , in contrast with the above assumption. Writing M = 2N – 1 we then have
To determine the 2N coefficients in we resort to eqs.(7.60), which give us N + 1 conditions: one from the first equation and the remaining N from the second one written for . For N = 1, we have no degrees of freedom and the above–mentioned conditions then yield
which is the polynomial operator of the Haar wavelets. For N > 1, we have N – 1 degrees of freedom and, correspondingly, we need N – 1 additional conditions. These may be chosen in such a way that the low–pass characteristics of are improved. These characteristics follow from the following two features of the FT of taken in correspondence of the multiples of the fundamental frequency i. The FT of the dilatation equation, i.e. eq.(7.58), yields
For odd, even,
then
so that
Finally, for
then
Summarizing these results
ii. In correspondence of any function
where
For
we get
is the
periodic version of
we get, from Parseval’s theorem,
viz.,
The function may be expressed as a Fourier series where
In case of
then
and the coefficients
so that
and, finally,
Eqs.(7.64) and (7.65) tell us that attains its maximum at and then rapidly decreases, vanishing at all . Thus it is a low–pass filter. This feature may be further enhanced by using the N – 1 degrees of freedom in such a way that smoothly attains the zero values at the frequencies , i.e. by imposing the vanishing of the derivatives of at the . To do this, a possible recipe consists in assuming that is given by the product of a polynomial of degree N – 1 (normalized so that ) times the product of N Haar filters, viz.,
Thus as required by the first constraint (7.60), and also a zero of order N. The problem is now that of selecting a such that also obeys the orthonormality condition represented by the second of eqs.(7.60) or by eq.(7.62). To this aim, define
where
and
From the identity
we get
Since
and The above identity then reads
and then where
it follows that
and
is a polynomial of degree 2(2N – 1) in we may write
Since and then
is a real quantity,
where
is a polynomial of degree 2(N – 1) in . The above analysis shows that and share the same analytical form and obey the same analytical constraints. Then , and the determination of amounts to finding the square root of , i.e. to finding a polynomial of degree N – 1 such that
In engineering jargon, this square–root extraction is also called spectral factorization. We shall restrict ourselves to the analytical determination of for N = 2 and N = 3. A detailed discussion of these cases may be found in Kaiser (1994, pp. 180–183).
N = 2: The second–order polynomial
reads
The first–order polynomial to be determined reads . The unknown coefficients are determined by the conditions (7.66) and . It turns out that there are two possible solutions, viz., and, correspondingly, there are two polynomials
Apart from the zeros at , has a zero outside the unit circle and has a zero within the unit circle: following the usual engineering jargon, they will then be called minimum– and maximum–phase filters, respectively.
N = 3: The fourth–order polynomial
reads
The second–order polynomial to be determined reads . The unknown coefficients are determined by the condition (7.66) and . It turns out that there are two possible solutions for , namely
where and . As in the preceding case, and are the minimum– and the maximum–phase filters, respectively.
The filter coefficients for cases from N = 4 to N = 10 are given in Daubechies (1992, p.196).
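For the N = 2 case the minimum-phase factorization gives the four filter coefficients in closed form, and the constraints (7.60) can be checked numerically. A sketch, under our normalization Σh_k = √2 (other sources normalize differently, by a constant factor):

```python
import numpy as np

# minimum-phase N = 2 solution: the four Daubechies ("D4") coefficients
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

assert np.isclose(h.sum(), np.sqrt(2.0))           # first of eqs.(7.60)
assert np.isclose(np.dot(h, h), 1.0)               # orthonormality, zero shift
assert np.isclose(h[0] * h[2] + h[1] * h[3], 0.0)  # orthonormality, shift 2
# the extra degree of freedom makes H vanish smoothly at the frequency pi:
k = np.arange(4)
assert np.isclose(np.sum((-1.0) ** k * h), 0.0)        # zero of H at pi
assert np.isclose(np.sum((-1.0) ** k * k * h), 0.0)    # zero derivative at pi
```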
8. SUB–BAND FILTERING

The sub–band filtering of a signal is a decomposition–processing–reconstruction technique widely adopted by electrical engineers. In its simple, two–channel version, suitable filters are applied to the signal, which is decomposed into a low–frequency and a high–frequency component, each relating to half of the entire frequency range. Since these components have different Nyquist frequencies, correctly sampled versions are then obtained by discarding every other sampled value (sub–sampling). The decomposed signals are then suitably processed according to the specific problem at hand, and finally the reconstruction stage is carried out by means of a procedure which is the inverse of that adopted in the decomposition stage. Let us consider a band–limited function , i.e. a function such that for . As mentioned in §2.2, may be represented by a Fourier series in terms of the discrete sequence of values sampled at times , where is the time step resulting from the Nyquist criterion. From eq.(2.4) we get where . Consider the signals resulting from low–pass (LP) and high–pass (HP) filtering.

8.1 Ideal case

Low–pass filtering. Let be an ideal LP square filter centered at , having a bandwidth half of that of , viz.,
Inverse transforming, we get
From eq.(2.4) it follows that can also be expressed by its Fourier series in terms of sampled with a time step less than or equal to . Therefore we can use the same time step selected for , so that
where
Application of the LP filter to yields
In the frequency domain we get, from eqs.(8.1) and (8.3),
or, more specifically,
so that, from the first of eqs.(2.4), we get
The expressions so far obtained have been derived by sampling and with the time step . However, by definition, has the same bandwidth as ; therefore the Nyquist criterion is satisfied also if is sampled with a time step . Then, from eq.(2.4) we get
where
Knowledge of the coefficients , i.e. essentially of sampled with a time step of , allows the determination of the function (see eq.(2.5)), viz.
From a computational point of view, let us assume that we have a fast code for computing convolutions. The coefficients appearing in the expression (8.6) for may then be computed as follows: i. compute the convolution between the coefficients and viz.,
ii. retain only the even terms of the sequence (downsampling of the convolution)
Very simple and important expressions may now be obtained by resorting to the z–transform 24. We define
Retaining only the even terms in (downsampling) yields
24 For , i.e. for on the unit circle, is the Fourier transform.
The function may be expressed in terms of and as follows
Finally
High–pass filtering. Let be an ideal HP square filter having a bandwidth which is the complement of that of in the bandwidth of , viz.
Inverse transforming, we get
Following a procedure quite analogous to that given in the LP case, we end up with the following expressions
where
or, more specifically,
Moreover,
so that
Sampling with a time step yields
where
Knowledge of the coefficients allows the determination of , viz.
The recipe for computing the coefficients runs as before:
i. compute the convolution between the coefficients and , viz.,
ii. retain only the even terms of the sequence (downsampling of the convolution).
Going to the z–transform, we define
Retaining only the even terms in (downsampling) yields
Finally, the function (see eq.(2.5)) may be expressed as the in eq.(8.10), viz.
Reconstruction of f(u). From the definitions (8.2) and (8.11) of the ideal square filters, it follows that so that, from eqs.(8.5) and (8.13) we get Inverse transforming (see eqs.(8.7) and (8.15)), we get
Evaluating for
For even, i.e. , we get
so that
Since , the above expression may be formally written as
For odd, i.e. , we get
so that
Substitution of eqs.(8.4) and (8.12) yields
Taken together, eqs.(8.14) and (8.15) yield
Thus, knowledge of the coefficients and appearing in the expressions (8.7) and (8.15) of the filtered signals allows the determination of the coefficients required for the reconstruction of the original signal . From a computational point of view, the reconstruction formula (8.20) may be viewed as obtained through the following steps:
i. Starting from the sequences , introduce two new sequences each obtained by inserting a zero after every element of the corresponding original sequence. This operation is called "upsampling" of the original sequences. By inserting the zeros in the odd positions of the new sequences, their elements are
ii. Compute the convolutions of the new, "upsampled" sequences with the constants of the two filters
iii. Add the results obtained in item ii., viz.,
where and . Let us now consider the reconstruction formula (8.20) or (8.23) in terms of the z–transform. We define
and the
of the convolution (8.23) reads
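The two z-domain identities used throughout this section — upsampling maps X(z) to X(z²), while retaining the even terms maps X(z) to a sequence whose transform satisfies X_down(z²) = ½[X(z) + X(−z)] — can be verified numerically (helper name ours):

```python
import numpy as np

def X(x, z):
    """z-transform of a finite causal sequence: X(z) = sum_n x[n] z**(-n)."""
    return np.sum(x * np.asarray(z) ** (-np.arange(len(x))))

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])
x_down = x[::2]                              # retain the even terms
x_up = np.zeros(2 * len(x))
x_up[::2] = x                                # zeros in the odd positions

for z in [1.3, -0.7, 0.5 + 0.5j]:            # arbitrary nonzero test points
    assert np.isclose(X(x_up, z), X(x, z * z))                        # upsampling
    assert np.isclose(X(x_down, z * z), 0.5 * (X(x, z) + X(x, -z)))   # downsampling
```

The second identity makes the aliasing term of the next subsection explicit: the X(−z) contribution is exactly what real (non-ideal) filters must be designed to cancel.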
8.2 Real case

The choice of the ideal square filters and for sub–band filtering a given signal has the drawback that the filter constants, namely and , slowly decay with ; a large number of them is therefore required for performing the calculations pertaining to the filtering procedures. A possible remedy consists in performing the filtering action with a pair of new filters obtained by suitably smoothing , respectively. With respect to the original filters, the new ones have larger bandwidths ; therefore they are more concentrated in time, which means that the constants decay faster with and the burden of the convolution computations is lower. However, the larger bandwidths of the new filters also imply that the time step adopted in the ideal case for sampling the filtered signals should be correspondingly reduced, the maximum allowable value compatible with the Nyquist criterion being . The use of the time step then gives rise to the aliasing phenomenon. Nevertheless we substitute the ideal filters with their smoothed versions, viz.,
and and introduce the following new notation
NOTES ON A SIMPLIFIED TOUR
Correspondingly, the decomposition expressions (8.10) and (8.17) become
and the reconstruction formula (8.24) becomes
Substitution of eqs. (8.25) finally yields
On we have written instead of because of the aliasing effects which prevent the exact reconstruction of the true signal. The sub-band filtering procedure given in this section is schematically represented in Figure 4. Let us come back to the aliasing effects present in the reconstruction formula (8.26). More specifically, the aliasing effect is present in the second term on indeed, for
and i.e. is delayed by In the case of ideal filters, the filtered signals have supports so that the addition of their versions shifted by does not change their shape. Vice versa, in the case of real filters, the supports are larger than and the above-mentioned additions corrupt the filtered signals. Nevertheless the true signal can be recovered provided the filter constants are selected in such a way that the second term on of eq.(8.26) vanishes, i.e. provided
In pursuit of satisfying this condition, we report the following two schemes: a) The Esteban and Galand scheme (Esteban and Galand, 1977). The various functions involved are expressed in terms of as follows
With this choice, the condition (8.27) is satisfied and eq.(8.26) becomes
The corresponding filters are called "quadrature mirror filters" (QMF) for the following reasons: The condition implies that so that for we get
The corresponding filters are called ”quadrature minor filters” (QMF) for the following reasons: The condition implies that so that for we get
Thus equals delayed by In addition, if is symmetric,
and then
which means that is the mirror of with respect to
b) Mintzer's scheme (Mintzer, 1985). The various functions involved are expressed in terms of as follows
With this choice, the condition (8.27) is satisfied and eq.(8.26), in correspondence of and real yields
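The alias-cancellation condition satisfied by both schemes can be checked numerically. In the sketch below the prototype coefficients are arbitrary illustrative numbers (not taken from the text), and the Esteban–Galand QMF choice is written in one common convention: analysis high-pass H1(z) = H0(−z), synthesis filters F0(z) = H0(z) and F1(z) = −H1(z).

```python
import numpy as np

# arbitrary illustrative low-pass prototype; any h0 works for this check
h0 = np.array([0.48, 0.84, 0.26, -0.12])
n = np.arange(len(h0))

h1 = (-1) ** n * h0   # analysis high-pass: H1(z) = H0(-z), the QMF choice
f0 = h0               # synthesis low-pass
f1 = -h1              # synthesis high-pass

def freq_resp(c, z):
    # evaluate C(z) = sum_n c[n] z^{-n} at the given points z
    return np.polyval(c[::-1], 1.0 / z)

z = np.exp(1j * np.linspace(0.1, np.pi, 50))  # points on the unit circle
# alias term of the two-band bank: H0(-z)F0(z) + H1(-z)F1(z)
alias = (freq_resp(h0, -z) * freq_resp(f0, z)
         + freq_resp(h1, -z) * freq_resp(f1, z))
print(np.max(np.abs(alias)))  # ~0: the aliased component cancels
```

The cancellation is exact in algebra (the two products are equal and opposite), so only floating-point rounding remains; exactness of the overall reconstruction is a separate question, which is where the FIR restriction discussed below bites.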
In both the above schemes, the filters involved are functions of the basic filter; for computational reasons, this is usually of the FIR type, i.e. it has only a finite number of nonzero terms. Within the Esteban and Galand scheme, no FIR filter has been found suitable for an exact reconstruction of (i.e. such that the factor in brackets in eq.(8.28) be equal to 2), even though and may be kept very close to each other. In Mintzer's scheme, instead, it is possible to find FIR filters such that the above reconstruction is exact. We end this section by comparing a decomposition/reconstruction stage of the sub-band filtering with that of the MRA in the wavelet transform. In the following we shall assume real valued sequences and work in the domain. The two procedures, namely sub-band filtering and MRA, coincide provided that: i. The operator coincides with Since
the condition becomes ii. The operator
coincides with
or
Since (see eq.(7.46))
the condition becomes iii. The operators
and
coincide with
and
respectively. Since
the conditions become and
In terms of
the subband filtering functions are
which, apart from a change of sign in and coincide with the choice of Mintzer's scheme. The change of sign is inessential since the above definitions also satisfy the condition (8.27).
9. CONCLUSIONS
A very simple indication of the variability of a stationary signal is represented by its variance, i.e. by a single number. After Fourier (175 years ago!) we know that this single number may be resolved into a function of frequency, namely the power spectrum, which tells us how the variance is distributed among the harmonics of the signal. By so doing, a lot of additional information is gained from the data: for example, a peak in the spectrum of a signal measured on a plant indicates the proneness of the plant to resonate at the peak frequency, and it may suggest techniques for favouring or avoiding that resonance. A further important improvement in signal analysis is constituted by the Windowed Fourier Transform, by means of which the above analysis may be extended to non-stationary signals. At each time, the Fourier transform is now restricted to a portion of the signal enveloped within a window function contained in a small time interval around the given time. By so doing, the signal is more deeply investigated since, instead of the above function of one variable, we now deal with a function of two variables, namely time and frequency, which allows us to examine the frequency content of successive pieces of the time signal. However the constancy of the width of
the selected window makes the approach too rigid: indeed the window width may be too large when the signal exhibits fast transients and too small when it varies slowly. The Wavelet Transform overcomes this drawback by using windows of different widths: a mother wavelet, a little wave suitably localized in time and frequency, is selected, and at the various scales the signal is viewed by wavelets which are dilated and shifted versions of the mother wavelet. Thus, the procedure extracts the local details, e.g. the fast transients, at small scales and the signal trend at large scales. An additional feature of the wavelet transform is the possibility of selecting a mother wavelet which fits the problem at hand: in the windowed Fourier approach we always make use of complex exponentials and the freedom is limited to the choice of the window which envelopes the exponentials. In spite of the fact that wavelets have nowadays become a relatively popular mathematical technique, they are still scarcely utilized by nuclear engineers. A field of primary interest is certainly that of the early detection of system failures: indeed the wavelet transform seems the right methodology for detecting the warning spikes or fast transients of short duration randomly occurring in some plant signals before the occurrence of important failures. Another field which may be profitably tackled by the wavelet methodology is that of signal denoising, handled by suitably thresholding the wavelet coefficients. Among the standard applications of wavelets, we should mention the attempts to improve the solutions to some mathematical models of phenomena of interest to nuclear power plants: an example is the open question of fully developed turbulent flow, generally treated by means of the Navier-Stokes equations with a dominant non-linear advection term.
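The denoising idea mentioned above can be illustrated with a minimal sketch: one level of the Haar transform and soft thresholding of the detail coefficients at Donoho's "universal" threshold. The test signal, noise level and threshold constant are all invented for the example.

```python
import numpy as np

def haar_analysis(x):
    # one level of the Haar wavelet transform: pairwise averages and details
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_synthesis(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256)
clean = np.sin(2 * np.pi * 4 * t)                  # invented test signal
noisy = clean + 0.3 * rng.standard_normal(t.size)  # known noise sigma = 0.3

a, d = haar_analysis(noisy)
thr = 0.3 * np.sqrt(2 * np.log(noisy.size))        # universal threshold
d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft-threshold details
denoised = haar_synthesis(a, d)
# the mean-square error against the clean signal drops below that of the
# noisy input, since the detail band here is almost entirely noise
```

In practice one would iterate the transform over several scales and estimate the noise level from the data, but the thresholding principle is the same.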
Another standard application is data compression, which is of importance in many different nuclear engineering fields ranging from reactor operation and maintenance to safeguards. We hope that the present review/introduction will stimulate our professional community towards the use of the wavelet methodology.
Acknowledgments: I would like to thank Dr. S. Tarantola for useful discussions and suggestions.
REFERENCES
Chui, C., 1992, An Introduction to Wavelets, Academic Press, San Diego.
Cohen, A., and Kovačević, J., 1996, Wavelets: the Mathematical Background, Proc. IEEE, 84, 4.
Daubechies, I., 1992, Ten Lectures on Wavelets, Society for Industrial and Applied Mathematics, Philadelphia.
Esteban, D., and Galand, C., 1977, Application of Quadrature Mirror Filters to Split-Band Voice Coding Schemes, Proc. IEEE Int. Conf. Acoust. Speech Signal Process., Hartford, Connecticut, pp. 191–195.
Gabor, D., 1946, Theory of Communication, J. IEE (London), 93, pp. 429–457.
Haar, A., 1910, Zur Theorie der orthogonalen Funktionensysteme, Math. Ann., 69, pp. 331–371.
Kaiser, G., 1994, A Friendly Guide to Wavelets, Birkhäuser, Boston.
Mallat, S., and Hwang, W.L., 1992, Singularity Detection and Processing with Wavelets, IEEE Trans. on Information Theory, 38, 2.
Mallat, S., and Zhong, S., 1992, Characterization of Signals from Multiscale Edges, IEEE Trans. on Pattern Analysis and Machine Intelligence, 14, 7.
Mintzer, F., 1985, Filters for Distortion-Free Two-Band Multirate Filter Banks, IEEE Trans. Acoust. Speech Signal Process., 33, pp. 626–630.
Special Issue on Wavelets, Proc. of the IEEE, April 1996, 84, No. 4.
GENETIC ALGORITHMS FOR INCORE FUEL MANAGEMENT AND OTHER RECENT DEVELOPMENTS IN OPTIMISATION
Jonathan N. Carter Centre for Petroleum Studies Imperial College(ERE) Prince Consort Road South Kensington London SW7 2BP United Kingdom
1 INTRODUCTION
The number of different optimisation schemes available to the nuclear industry has continued to grow. In the last review, Parks1 identified Simulated Annealing (SA) as a useful tool and Genetic Algorithms (GAs) as tools with potential but needing further development. Since that review, two simpler derivatives of SA have been developed – the Great Deluge Algorithm (GDA) and Record to Record Travel (RRT) – and GAs have been successfully applied to fuel cycle management. This paper reviews these developments in optimisation and also describes some other new optimisation tools available: the Tabu Algorithm (TA) developed by Glover2, the Population-Based Incremental Learning Algorithm (PBIL) proposed by Baluja3 and the Multi-point Approximation Method (MpA) being developed by Frandsen4 for use in the petroleum industry. Together, this collection of techniques forms a powerful toolkit for the nuclear engineer seeking to solve difficult optimisation problems.
2 OPTIMISATION PROBLEMS
Optimisation is the process of finding the optimum (i.e. best) solution possible to a problem. This solution is often required to meet a number of additional constraints. In mathematical terms this can be formulated as:
Advances in Nuclear Science and Technology, Volume 25 Edited by Lewins and Becker, Plenum Press, New York, 1997
Subject to
are the decision variables, i.e. those things that can be changed. is the objective function, which is usually a numerical, rather than an analytical, function of the decision variables. If one has to minimise a function, this is equivalent to maximising its negative, so clearly maximisation and minimisation problems can be transformed one into the other. In numerical optimisation, an optimum, either local or global, is assumed to have been found when all points in a small region around the current point, which meet all the conditions, have function values that are worse than the function value at the current point. In practice we often cannot test all points, and so substitute ‘a sufficiently large number’ in place of ‘all’. As a consequence one can never be completely certain that an optimum has been identified. Neither can one be sure whether the optimum is the global optimum or just a local optimum. An example of such a problem might be to minimise the cost of running a power station, subject to the requirement that the power output will be 1 GW and emissions will not exceed the safe limit:
Minimise cost
Subject to: safe limit - actual emissions > 0
power output = 1 GW
The decision variables might be the number of staff at different grades, or the temperature of the core. In practice we are often satisfied with finding a ‘better’ solution, rather than the true (global) optimum solution. How much ‘better’ a solution needs to be depends on the economics of the problem. How much does it cost to find a better solution, to implement it and what is the benefit? We may not even know if a better solution exists.
2.1 Type of Problems
Many optimisation problems can be classified as one of the following:
Continuum problems having decision variables which are continuous numbers. For example, the outlet pressure of a steam turbine.
Discrete problems having decision variables which can only take discrete values. For example, the number of cooling towers to be built.
Combinatoric problems are those where the decision variable is the order of a finite set of objects, which determines the value of the objective function. For example, the Travelling Salesman Problem (TSP) or the PWR reload core design problem described below.
Optimisation methods have been developed to tackle problems in each of these categories. Table 1 gives a list of some of these methods. However, many practical optimisation problems encountered in industry do not fall neatly into one of the above categories as they involve a combination of continuous, discrete and/or combinatoric variables. These are much harder to solve with traditional methods. In these cases, the problem is often approximated into a form which can be handled by traditional methods. For example, a problem involving continuous and
discrete decision variables, might use continuous variables to approximate the discrete variables. In this paper, several different optimisation methods are described. Some are particularly suited to one category of problem, while others can be used on all three, singly or in combination.
2.2 The PWR Reload Core Design Problem
Most of the optimisation methods described in this review can be applied to combinatoric problems such as the PWR incore fuel management problem described in this section. As this problem is used to illustrate most of the optimisation methods reviewed in this paper, the problem and previous studies are described in some detail below. The Problem. For a Pressurised Water Reactor (PWR) incore fuel management involves specifying the arrangement of fresh and partially burnt fuel in the core and the burnable poison loadings in the fresh fuel assemblies. Most PWRs use a three batch loading scheme, i.e. at each refuelling, one third of the fuel is replaced. The remaining fuel assemblies, having resided in different regions of the core for one or two cycles already, are generally all different. Hence for a PWR with 193 fuel assemblies, the fuel placement problem alone (assuming there are 64 identical new fuel assemblies and the centre assembly, which is generally the assembly with the highest burn-up, has been specified) has (128!) possible solutions (since there are ways of choosing where to put the new fuel assemblies and 128! ways of arranging the others in the remaining spaces). The search space is usually reduced by assuming rotational or reflective quarter-core symmetry or one-eighth symmetry. However, even if quarter-core symmetry is assumed, fuel assemblies from equivalent locations in different quadrants of the core are not identical. A fuel assembly from one quadrant is identical to the corresponding assembly in a neighbouring quadrant rotated by 90°. Hence when working in (rotational) quarter-core symmetry, each fuel assembly can be considered to have four possible orientations each of which represents exchanging that assembly with the corresponding one from another quadrant. New fuel assemblies of course have uniform cross-section and therefore do not need to be re-orientated. 
Thus, if we consider the fuel placement problem in rotational quarter-core symmetry, the number of possible solutions is (since in rotational symmetry we consider 48 fuel assemblies, 16 of which are new). Although all the new fuel assemblies are identical, the fuel manager may choose to load Burnable Poison (BP) rods with some of the new fuel assemblies that are not in
control rod positions. It is usual to load each quadrant of the fuel assembly identically with a maximum of 24 BP rods per fuel assembly. Thus there are seven possible BP loadings for each assembly (0–6 BP rods on each fuel assembly quadrant). If we include burnable poison loading the search space contains about possible solutions. To summarise, to specify a core reload pattern, the fuel manager must specify for each location:
the fuel assembly to go at that location
the orientation of that fuel assembly (if it is a partially burnt assembly)
the burnable poison loading on that assembly (if it is a new assembly).
Objectives and Constraints. The fuel manager’s objective is to minimise the cost of the energy generated by the power plant, and early studies considered such factors as the interest on fuel, the re-processing costs and operating charges (e.g. Wall and Fenech5, Lew and Fenech6). However, because these costs may vary and there is no simple formulation of the cost function, it is general practice to work with cost-related objective functions such as:
maximum End-of-Cycle (EOC) reactivity (equivalent to maximum cycle length) (e.g. Mélice7, Colleti et al.8)
maximum burn-up of fuel (e.g. Stover and Sesonske9, Hoshino10)
minimum radial power peaking (e.g. Chen et al.11, Naft and Sesonske12, Federowicz and Stover13, Chitkara and Weisman14).
The last of these is not a true economic objective (see Downar and Sesonske15) but as it is consistent with safety considerations, is frequently used as an objective in fuel management studies. Where it is not used as the objective function, it must be incorporated into the problem as a constraint. Some early studies sought to minimise the fresh fuel consumption (e.g. Suzuki and Kiyose16, Goldschmidt17) but as this must be determined well in advance, it is properly an out-of-core decision.
The actual objective favoured in practice will depend generally on the utility, but a practical optimisation tool should be flexible enough to work with any of the above. The constraints include limits on:
maximum moderator temperature coefficient;
maximum burn-up of discharged fuel;
maximum average burn-up of each batch of discharged fuel;
maximum power peaking;
pressure vessel damage.
Previous Studies. Early fuel management studies used few-zoned one-dimensional reactor models with axial symmetry and tried to determine into which of these zones fuel assemblies should be placed (although not where exactly in the zones different fuel assemblies should be located) for a series of cycles. In the first of these, Wall and Fenech5 considered a PWR model with three equal volume zones, identified by increasing distance from the centre of the core, and three fuel batches which each occupied one
zone. At each refuelling, one of 28 options (since each batch could be replaced or moved to another zone) had to be chosen. They used dynamic programming (see Bellman18) to minimise the cost of the power produced subject to power-peaking and burn-up constraints. Stover and Sesonske9 used a similar method to identify Boiling Water Reactor (BWR) fuel management policies. Their model was similar to that of Wall and Fenech except the two interior zones were scatter-loaded and they optimised the fraction of each zone to be refuelled at the end of each cycle. Similarly, Fagan and Sesonske19 used a direct search to maximise the cycle length for a scatter-loaded PWR with fuel shuffling between zones permitted, and Suzuki and Kiyose16 used Linear Programming to place fuel in five radial zones to minimise the fresh fuel consumption of a Light Water Reactor (LWR). These studies did not address the poison management since parallel work was directed at solving this part of the problem independently for a given fuel loading (e.g. Terney and Fenech20, Motoda and Kawai21, Suzuki and Kiyose22). Indeed Suzuki and Kiyose16 used this decoupling to assign the problem of the power-peaking constraint to the poison management problem. Mélice7 took a different approach to the problem by surveying analytical techniques for determining reactivity profiles which maximised core reactivity or minimised power-peaking. The available fuel could then be used to synthesise the optimal reactivity profile as closely as possible. This approach also allowed the decoupling of the fuel placement and poison management scheme that would maintain the desired power profile. Mélice sought flat power profiles since these would enable the extraction of maximum power from the core. These led naturally to the out-in i fuel management schemes which were widely used at the time. 
Mélice’s fuel placement problem then was reduced to seeking the arrangement of partially-burned fuel in the core interior (the periphery being loaded with fresh fuel) that would best match the optimal reactivity profile. Thus, Mélice was the first to consider the detailed placement of fuel in a 2-dimensional lattice, which he did by trial and error, guided by experience. Several authors extended this work using variational techniques (e.g. Wade and Terney23, Terney and Williamson24) or even interactive graphics (Sauer25) to identify ideal power distributions and Linear Programming to automate their synthesis (Sauer25, Chen et al.11, Terney and Williamson24). These methods however either distributed fuel amongst but not within the several zones or restricted fuel to an out-in scatter-loadedii core. Meanwhile the problem of determining optimal detailed placement of fuel assemblies was being tackled by considering the effects of successive binary exchanges of fuel assemblies. Having used heuristic rules to restrict the possible exchanges, Naft and Sesonske12 proposed ranking binary exchanges by the improvement in the objective function and then performing each in order of ranking, keeping the improvements. Similarly Stout and Robinson26 formulated shuffling rules for fuel within zones of an out-in scatter-loaded PWR. These direct search approaches reduce to a hill-climbing search since the best solution is preserved after each exchange. They are therefore prone to locating local rather than global optima.iii A simple direct search on all such exchanges is clearly too time-consuming, so Chitkara and Weisman14 used a direct search to fine-tune the result obtained by linearising the objective and constraints and using Linear Programming. However, because they allowed fuel movement to neighbouring locations only, this method was also prone to convergence onto a local optimum. The method was later extended to BWR fuel management by Lin et al.27 and Yeh et al.28. Binary exchanges are one of the smallest changes that can be made to a loading pattern and are therefore useful for fine-tuning a reload core design. Small changes from a reference loading pattern also lend themselves to evaluation by perturbation theory, at the cost of being limited to certain small perturbations only. This was exploited by Federowicz and Stover13 who used integer linear programming to find the best loading pattern close to some reference pattern. This pattern could then be used as the new reference pattern and the optimisation continued. Similarly, Mingle29 and Ho and Rohach30 used perturbation theory to evaluate the effects of binary exchanges during a direct search, enabling more candidate solutions to be examined without calling a time-consuming evaluation code more frequently. The size of the fuel placement problem prompted many researchers to divide it into sub-problems which could then be solved using the most appropriate optimisation technique. Motoda et al.31 developed an optimisation scheme for BWRs using the methods of Sauer25, Suzuki and Kiyose16 and Naft and Sesonske12. The basic idea was to allocate fuel in three zones, expand this to a five zone model and finally use a direct search to shuffle the fuel within those zones. The poison management problem was decoupled by assuming the constant power Haling depletioniv,32. However, the inflexibility of PWR control methods at that time precluded the use of this method for PWR fuel management.
i Out-in refuelling is when old fuel is removed from the centre of the core, partially burnt fuel is moved from the outer edge to the centre of the core and new fuel is added to the edge. In-out refuelling is similar but the new fuel is placed in the centre. These different fuel loadings produce different performance profiles.
ii Scatter loading is when new fuel is distributed throughout the core.
iii The objective functions used are non-linear functions of the decision variables; this means that finding optima is difficult and that multiple local optima often exist.
In the early 1980s, the development of burnable poison rods for PWRs prompted interest in increasing the burn-up of the fuel to further minimise costs. Ho and Sesonske33 considered changing from a 3-batch to a 4-batch scheme while maintaining the cycle length. Clearly higher enrichment fresh fuel would be required and this could cause power-peaking problems. Reload core design would thus be dominated by the power peaking constraint. They rose to this challenge by assuming a modified out-in loading and using a direct search similar to that of Naft and Sesonske12 to place fuel assemblies in one-eighth symmetry. Burnable poisons were used where necessary to suppress power-peaking but the orientation of fuel assemblies was not considered. However, the combination of out-in loading with higher enrichment fresh fuel leads to increased neutron leakage and thus an economic penalty as well as greater irradiation damage to the pressure vessel. An alternative way to increase the burn-up of fuel is to use low-leakage fuel loadings where the fresh fuel is allowed in the core interior. To keep the power-peaking within safety constraints, burnable poisons can be loaded onto fresh fuel assemblies. The resulting centre-peaked power distributions had already been shown to imply minimum critical mass (see Goldschmidt17 or Poon and Lewins34). By 1986, interest in extending pressure vessel life by reducing neutron leakage meant that low-leakage loadings were being used in over half of all PWRs (see Downar and Kim35). However, the optimisation methods described above assume the conventional out-in loading patterns. Many of them assign fresh fuel to the periphery and shuffle only partially burned fuel in the core interior. Low-leakage loadings require fresh and partially burned fuel to be shuffled and burnable poison loadings to be determined for the fresh fuel assemblies increasing considerably the size of the problem. 
iv Haling’s principle states that the minimum power-peaking is achieved by operating the reactor so that the power shape does not change appreciably with time.
In addition, because out-in loadings deplete to give flatter power profiles, the presence of burnable poisons in low-leakage designs meant that even if the Beginning-of-Cycle (BOC) power profile was feasible, power-peaking problems could still arise later in the cycle. By this time, emphasis had shifted from optimising the arrangement of fuel over many cycles to optimising the single-cycle problem. Suzuki and Kiyose16 had shown earlier that cycle-by-cycle optimisation gave results that were within 1% of the multicycle optimum. Chang and Sesonske36 extended Ho and Sesonske’s direct method to optimise low-leakage cores by fixing the fresh fuel positions to enforce low-leakage. In a different approach, Downar and Kim35 separated the fuel placement from the burnable poison loading problem by assuming a constant power Haling depletion. A fuel loading could be chosen in the absence of poison, using a simple direct search of binary exchanges, and then the core was depleted in reverse back to BOC to find the burnable poison loading that would satisfy power-peaking constraints. However, the inflexibility of PWR poison design meant that the optimal poison management schemes and the constant power Haling depletion were often not realisable. In practice the simplified models used by researchers made nuclear utilities reluctant to use their optimisation methods and actual reload cores were designed by expert engineers using heuristic rules and experience (see Downar and Sesonske15). As early as 1972, Hoshino10 investigated the incorporation of these heuristic rules into an automated optimisation procedure by using a learning technique to deduce the relative importance of various rules in refuelling a very simple reactor model with four equal volume zones and one fuel batch in each zone. More recently, there have been several investigations of heuristic search and the use of expert systems to apply heuristic rules (e.g. Moon et al.37, Galperin et al.
38, Tahara39, Rothleder et al.40). While this approach has the potential to reduce the unpredictability and variability of human designers and to retain the expertise of these fuel managers, it does not necessarily produce optimal loading patterns, being limited by the knowledge of those same people. It also becomes unreliable when faced with situations not previously encountered. Thus the challenge to produce an optimal Loading Pattern (LP) using a realistic reactor model remained. Computer codes available today include LPOP (Loading Pattern Optimisation Program) and FORMOSA (Fuel Optimisation for Reloads: Multiple Objectives by Simulated Annealing). Following the practice of Mélice and others whose work concentrated on finding and synthesising optimal power and reactivity profiles, LPOP was developed at Westinghouse by Morita et al.41 and Chao et al.42,43. Starting with a user-specified target power profile, a backward diffusion calculation is performed to find an equivalent reactivity distribution. This is then matched as closely as possible with the available fuel and BPs using integer programming. However, in general it is not possible to match exactly this power profile, which being typically based on low-leakage cores and design experience is not necessarily optimal. FORMOSA was developed by Kropaczek44,45 at North Carolina State University (NCSU) where the search for practical optimisation tools which make a minimum of assumptions led to the use of Monte Carlo direct search techniques (e.g. Comes and Turinsky46, Hobson and Turinsky47). Any reactor model could thus be used and treated as a ‘black box’ and any combination of objective function and constraints used. Continuing along the lines of Federowicz and Stover13, perturbation theory was used to determine the effects of fuel shuffling from a reference LP. This approach tended to restrict searches to around the reference solutions but was refined by White et al.,
who introduced first-order48 and then higher-order49 Generalised Perturbation Theory (GPT). The latter is able to predict the effects of quite large perturbations including the movement of fresh fuel. FORMOSA combines GPT with the Monte Carlo search algorithm, Simulated Annealing, which is described later in this paper. Poon50,51 has since shown that Genetic Algorithms can be used to speed up the identification of near-optimal LPs. This work is described in more detail in section 3.6. Since Poon’s work, the GA has been applied to incore fuel management by DeChaine and Feltus52,53 and Tanker and Tanker54. These studies are discussed in section 3.7.
2.3 A Simplified Reactor Model
To illustrate how all of the algorithms described in this paper (except the Multi-point Approximation method) work, we will use a simplified, idealised reactor core containing a total of 61 fuel assemblies in quarter-core rotational symmetry, as shown in figure 1.v The optimisation problem is to arrange the fuel within the core to maximise some objective function while satisfying the necessary constraints. For the purpose of this study, we assume that the objective function is calculated by a ‘black box’ reactor simulator and that there are no constraints. This last assumption is not restrictive since any constraints could be incorporated into the objective function through a penalty function. We will assume that the central fuel element has already been selected (it is usually the oldest assembly, or the assembly with the highest burn-up) and we will use quarter-core rotational symmetry. We will also make the following three simplifications to the problem. Firstly, rotational orientations are neglected (i.e. we do not allow fuel elements to be shuffled from one quadrant to another). Secondly, we give each new fuel element in the same quadrant a unique identifier, as if they were not identical (in practice, around one third of the fuel elements will be new and therefore identical; by giving each new fuel element in the quadrant a unique identifier, we avoid the complications of having to avoid exchanging two new, identical fuel elements). Thirdly, we neglect burnable poisons. These simplifications reduce the problem to finding the optimal arrangement of the fifteen fuel assemblies in the quarter-core. There are then 15! different ways of rearranging the fuel in the core. Figure 2 shows one possible rearrangement.
v The use of bold characters in figure 1 is just a naming convention and does not imply anything about the properties of the fuel elements.
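Even this simplified search space is large, as a quick calculation shows; the one-millisecond evaluation time below is an invented figure, used only to give a sense of scale.

```python
from math import factorial

# fifteen distinguishable fuel assemblies in the quarter-core
arrangements = factorial(15)
print(arrangements)  # 1307674368000

# at a hypothetical one millisecond per 'black box' core evaluation,
# exhaustive search would take over forty years
years = arrangements * 1e-3 / (3600 * 24 * 365)
print(round(years, 1))  # 41.5
```

This is why the chapter turns to heuristic search methods rather than enumeration.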
GENETIC ALGORITHMS AND OTHER RECENT DEVELOPMENTS
121
This simplified incore problem is useful for the didactic purposes of this paper. In practice, it is not difficult to incorporate rotational orientations, identical new fuel elements and burnable poisons when using the optimisation schemes described in this paper. For example, both Kropaczek44 and Poon50 show how this can be done, for Simulated Annealing and Genetic Algorithms respectively. For the purposes of this paper all of the examples will assume that we are trying to maximise some function. It should be remembered that any minimisation problem can easily be converted into a maximisation problem, for example by negating the objective function.
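To make the scale of the search concrete, the quarter-core arrangement count and the minimisation-to-maximisation conversion can be checked in a few lines. This is only an illustration; the `cost` function is a hypothetical stand-in for the 'black box' reactor simulator.

```python
import math

# Number of distinct arrangements of the 15 quarter-core assemblies
# (rotational orientations, identical fresh assemblies and burnable
# poisons neglected, as in the simplifications above).
n_arrangements = math.factorial(15)
print(n_arrangements)  # 1307674368000, i.e. about 1.3e12

# Any minimisation problem can be recast as maximisation by negating
# the objective; 'cost' is an illustrative placeholder only.
def cost(loading_pattern):
    return sum(loading_pattern)  # stand-in for the black-box simulator

def objective(loading_pattern):
    return -cost(loading_pattern)  # maximise this instead
```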
3 OPTIMISATION ALGORITHMS FOR COMBINATORIAL OPTIMISATION

3.1 Hill-climbing Algorithms

Many optimisation algorithms proceed by making a small change to the 'current' solution to obtain a 'trial' solution. If this change results in an improvement, then the 'current' solution is replaced by the 'trial' solution. Otherwise the 'current' solution is retained. An algorithm of this type is known as a hill-climber. The problem with algorithms that follow this procedure is that they tend to get trapped by local, sub-optimal solutions. The two important decisions that have to be made when implementing a hill-climbing algorithm are firstly, how to define a 'small change' and secondly, how to choose the 'trial' solutions. A 'small change' will have different meanings for different problems. For the idealised incore problem that we are considering there are several possible definitions of 'small change' that define the local neighbourhoodvi relative to the 'current' solution. One choice might be to exchange two adjacent fuel elements, e.g.
An alternative might be to exchange any two fuel elements, e.g.

vi The local neighbourhood of a solution consists of all the solutions that can be obtained by making a small change to the solution. The neighbourhood is therefore dependent on the allowable 'small change'.
122
JONATHAN N. CARTER
There are other possible choicesvii, but these are the simplest. The trade-off over the size of the local neighbourhood is that for small neighbourhoods (adjacent fuel element swap), the search converges rapidly to a local optimum. For larger neighbourhoods, convergence is slower but the final solution reached is likely to be better. The method employed to choose the fuel elements to be exchanged can either be deterministic or random (stochastic). These two methods lead to either deterministic hill-climbers or random hill-climbers.
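A stochastic hill-climber of the kind just described might be sketched as follows. The two-element swap neighbourhood and the toy objective are illustrative assumptions; in practice the objective would be supplied by the black-box simulator.

```python
import random

def random_hill_climb(initial, objective, n_trials=1000, rng=None):
    """Stochastic hill-climber: propose a random two-element swap and
    accept it only if the objective (to be maximised) improves."""
    rng = rng or random.Random(0)
    current = list(initial)
    f_current = objective(current)
    for _ in range(n_trials):
        i, j = rng.sample(range(len(current)), 2)
        trial = list(current)
        trial[i], trial[j] = trial[j], trial[i]
        f_trial = objective(trial)
        if f_trial > f_current:        # keep only improving moves
            current, f_current = trial, f_trial
    return current, f_current

# Toy demonstration: the objective prefers the identity ordering.
best, f = random_hill_climb(
    list(range(8))[::-1],
    lambda p: -sum(abs(v - i) for i, v in enumerate(p)))
```

With a larger neighbourhood (any pair rather than adjacent pairs), this sketch trades convergence speed for final solution quality, exactly as discussed above.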
3.2 Simulated Annealing

The major problem with hill-climbing algorithms is that when they converge to a local optimumviii they are then unable to escape and find the global optimum. This is because, starting at the local optimum, all moves in the local neighbourhood are worse and so are rejected. Figure 3 illustrates the concept of a local optimum for two continuous decision variables. The Simulated Annealing (SA) algorithm differs from simple hill-climbers by allowing some 'trial' solutions worse than the 'current' solution to be accepted. In this way the algorithm may be able to extricate itself from one local optimum and pass into an adjacent region with a better local optimum. The algorithm is shown in figure 4. The term 'T' in the exponential acceptance probability is known as the temperature. This refers to the origin of the method in the simulation of annealing processes in solidsix. In general T is not a constant and decreases with increasing number of trials. The way that T decreases is known as the cooling schedule. A common cooling schedule is to reduce the temperature by a constant factor after 'N' trials. The temperature controls the search by making it more or less difficult to escape from one local optimum to another. At the start of the search, a high temperature will be chosen; this allows the algorithm to search widely for a good area. Later, as the temperature decreases towards zero, fewer non-improving solutions are accepted, and the algorithm behaves more like a stochastic, or random, hill-climber. The strength, and weakness, of simulated annealing is the cooling schedule. Theoretically, it can be shown that given a sufficiently high initial temperature and a slow enough cooling schedule, reaching the global optimum can be guaranteed57. However, in practice, there is insufficient time available to meet these conditions and therefore SA users are forced to select initial temperatures and cooling schedules which do not guarantee reaching the global optimum. Nevertheless, the choice of these two parameters has to be made with care, since they have been shown to affect both the quality of the solution obtained and the rate of convergence. To obtain a good result requires some experience of working on the problem under consideration. There are many references to applications of simulated annealing. For example,

vii For example, if we were using a more realistic incore fuel management problem, we might change a fuel assembly's orientation or its burnable poison loading.
viii A local optimum is any solution which has a better function value than any other solution in its local neighbourhood.
ix In 1953, Metropolis et al55 described a statistical mechanics algorithm to find the equilibrium configuration of a collection of atoms at different temperatures. Kirkpatrick et al56 'invented' SA by recognising that the same algorithm could be used as a general optimisation technique.
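The acceptance rule and cooling schedule described above can be sketched as follows. The exponential acceptance of worse moves and the constant-factor cooling follow the text; all parameter values and the toy permutation objective are illustrative assumptions, not those of the cited studies.

```python
import math
import random

def simulated_annealing(initial, objective, t0=1.0, alpha=0.95,
                        trials_per_temp=50, n_temps=40, rng=None):
    """SA for maximisation: a worse trial (delta < 0) is accepted with
    probability exp(delta / T); T is cut by a constant factor (the
    cooling schedule) after each batch of trials."""
    rng = rng or random.Random(1)
    current = list(initial)
    f_current = objective(current)
    best, f_best = list(current), f_current
    temperature = t0
    for _ in range(n_temps):
        for _ in range(trials_per_temp):
            i, j = rng.sample(range(len(current)), 2)
            trial = list(current)
            trial[i], trial[j] = trial[j], trial[i]
            delta = objective(trial) - f_current
            if delta >= 0 or rng.random() < math.exp(delta / temperature):
                current, f_current = trial, f_current + delta
                if f_current > f_best:
                    best, f_best = list(current), f_current
        temperature *= alpha   # geometric cooling schedule
    return best, f_best

# Toy demonstration on a small permutation problem.
best_lp, f_best = simulated_annealing(
    list(range(8))[::-1],
    lambda p: -sum(abs(v - i) for i, v in enumerate(p)))
```

At high T nearly every move is accepted (wide search); as T falls the loop degenerates into the stochastic hill-climber of section 3.1.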
3.3 Great Deluge Algorithm

The Great Deluge Algorithm has been proposed by Dueck60 as a simpler alternative to Simulated Annealing. Instead of simulating the annealing process, this algorithm mimics the wanderings of a hydrophobic creature during a flood. Consider a landscape consisting of hills and valleys, where the hills represent good solutions. Initially the landscape is dry and the 'current' solution can wander anywhere. It then starts to rain and the lowest areas start to fill with water. The current solution is allowed to wander, but it does not like to get its feet wet, so it avoids areas that have been flooded. As the water level rises the current solution is forced on to ever higher areas of the landscape. Eventually the rising water level will cause the appearance of islands, at which point the current solution will become trapped near a local optimum. One might expect this algorithm to work less well than Simulated Annealing, since once the solution has been trapped on an island it cannot escape to another island.x However, practical experience has shown this not to be a problem, and further that the results do not depend strongly on the 'rain' algorithm used. A good rain algorithm has

x It is not possible to construct a proof that shows that the global optimum can be reached under certain conditions (unlike Simulated Annealing).
been suggested by Dueck60, and confirmed by Carter61; under it the best known solution will always be above the current water level. The GDA has been applied to several combinatorial optimisation problems, including the travelling salesman problem60, balancing hydraulic turbines61, cutting stock problems61 and chip placement60.

3.4 Record-to-Record Travel
The basic algorithm for Record-to-Record Travel (RRT), also proposed by Dueck60, is the same as for GDA; the difference is that the water level is held a fixed distance below the best known solution, i.e. the record solution. The water level is only raised if a new best solution is found. When Dueck compared the performance of SA, GDA and RRT, he found that both GDA and RRT performed better than SA. Although it is known that, given sufficient time, an appropriate cooling schedule and initial temperature, SA is guaranteed to reach the global optimum, no such proof can be produced for GDA and RRT. Comparative tests between GDA, RRT and other algorithms by Sinclair62 and Carter61 have shown that GDA is a good choice as a 'black-box' combinatorial optimisation algorithm, and Carter61 suggests parameters for the inexperienced user.

3.5 Tabu Search
Tabu Search, proposed by Glover2, is quite different in its philosophy. Rather than making a random variation to the current solution to obtain the next point, all the neighbouring points are examined and the best, subject to some tabu, is chosen as the next current point. In figure 7 a flow diagram for Tabu Search is given. The basic ideas are explained with the example given below. Before we embark on the example, it is necessary to define four items:

the neighbourhood of the current solution is defined by all possible two-element swaps.
the 'Tabu Rule' states that a swap is tabu if either of the fuel elements has been used in either of the last two iterations.
the 'Aspiration Criteria' state that a swap is non-tabu if it would make the function value better than any previous solution.
the penalty function is given by 0.1 × (sum of frequencies for the two swapped fuel elements).

We start with an initial 'current' solution and evaluate its function value. Next we evaluate the function valuesxi for all possible 'trial' solutions in the neighbourhood

xi An arbitrary penalty has been included in the calculation of the ranking value so as to penalise the swapping of two new fuel elements. Alternatively this could have been included as part of the Tabu Rule.
of the current solution. These are then ranked according to the improvement they offer compared to the 'current' solution. We also have an empty tabu table. The tabu table records two pieces of information for each fuel element: how many iterations it is since a fuel element was last tabu (the recency) and the number of times a fuel element has been involved in a swap (the frequency). The recency and the frequency are both a form of memory. The penalised improvement is calculated from the improvement minus the penalty function. At the start, see figure 8, all swaps are acceptable, i.e. there are no tabu options. We therefore take the top option in our ranked list to obtain a new current solution. The tabu table is updated as follows: the recency line is incremented by '1' and fuel elements D and J are set to '-2', indicating that they have just been used; fuel elements D and J of the frequency line are also incremented by '1'. Finally, we evaluate and rank the new neighbourhood 'trial' solutions. At iteration 1, see figure 9, swaps involving either of the fuel elements D and J are
now tabu. So three of the top five 'trial' solutions are marked as tabu. This leaves the swap (N,K) as the best option. However, the swap (J,H) would make the current solution better than any previous solution, and so it meets the 'aspiration criteria' and has its tabu status removed. Now the best non-tabu swap is (J,H), and we can generate the next 'current' solution, update the tabu table and generate the new neighbourhood. At iteration 2, see figure 10, we find that the best of the non-tabu swaps has a negative improvement, i.e. the move would make the function value worse. None of the tabu swaps meets the aspiration criteria, so we are left with selecting a non-improving move. We calculate the penalised improvement for each of the non-tabu 'trial' solutions, and choose the option with the least bad (best!) value. In this case none of the acceptable swaps has a non-zero penalty. However, this is not generally the case. The effect of choosing a swap with a good penalised improvement is that the search is pushed into regions of parameter space not previously examined. It is clear that the choice of the penalty function is critical in getting the correct
balance between search and convergence. So in this case we select swap (N,A); the function value drops, and we proceed to iteration 3 (figure 11). For this example it has been decided that fuel elements should be tabu for two moves. So any fuel element that has a recency value of '0' or more can take part in valid swaps. So the fuel element D is no longer tabu. Swaps involving fuel elements A, H, J and N are tabu. So the best non-tabu option is (D,F), and none of the tabu moves meets the aspiration criteria. The (D,F) option is chosen, and the process proceeds to the next iteration (figure 12). When implementing Tabu Search there are a number of issues that have to be addressed. The first is the choice of neighbourhood. In the example the neighbourhood was any 'trial' solution that could be reached by swapping two fuel elements. Alternatives would be to swap adjacent fuel elements, or to allow four fuel elements to be moved. As the size of the neighbourhood is increased, so is the effort required to evaluate the function at all the points. Against this is the likely improved search pattern, with fewer local optima and faster convergence. It may be possible to reduce the effort required to search a large neighbourhood by using a fast, but less accurate, model to initially rank the trial solutions and then to use a slower, but more accurate, model on just the best trial solutions. A second issue is that of the tabu table. The table represents a form of memory. Part of the memory is short term, in that it defines moves in the local neighbourhood which are tabu. In the example the rule was that a swap involving a fuel element moved in either of the last two iterations was tabu. We could have had a longer or shorter restriction, and/or changed the rule to make tabu only the exact pairs swapped. The rule could easily have been one of many other possibilities. Then there are the aspiration criteria, which allow the tabu status of moves to be overridden.
In the example the aspiration criterion was that if a tabu move would result in a function value better than all previous 'current' solutions, then it was to be accepted. Obviously this rule is open to modification.
Finally there are the rules that govern the penalty: how the penalty is defined and what forms of memory are used. There is no particular reason for using the frequency-based memory of the example. The issues of what rules, memory and neighbourhoods should be used are widely discussed in the literature. For an expanded introduction the reader is referred to the paper by Glover63, which includes a discussion on how to apply Tabu Search and references to over 30 different applications. Tabu Search is a much more complex algorithm to implement than simulated annealing or any of its derivatives. It requires a degree of experience both in using Tabu Search and in understanding the function that is being optimised. The benefit is that, when correctly used, the performance is better than that of SA-like algorithms.64, 65, 66, 67 It should be possible to construct the rules for Tabu Search such that the global optimum can be reached; however, the author is not aware of any formal proof of this statement. In short, Tabu Search should NOT be used as a 'black-box' routine. While the author
is not aware of any current applications of this technique in the nuclear industry, Tabu Search could be applied to incore fuel management and related problems.
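As a rough illustration of the machinery discussed in this section, a simplified Tabu Search might look as follows. The two-element swap neighbourhood, the two-iteration tabu tenure, the aspiration criterion and the 0.1 frequency penalty follow the worked example; applying the penalty to every candidate (rather than only to non-improving moves) is a simplification assumed here.

```python
import itertools

def tabu_search(initial, objective, n_iters=50, tenure=2, penalty_w=0.1):
    """Tabu Search sketch: scan the whole two-element-swap
    neighbourhood, skip swaps whose elements moved within `tenure`
    iterations (unless the move would beat the best so far: the
    aspiration criterion), and rank candidates by objective minus a
    frequency-based penalty."""
    current = list(initial)
    f_current = objective(current)
    best, f_best = list(current), f_current
    last_moved = {}                       # element -> iteration last moved
    frequency = {e: 0 for e in current}   # element -> times moved
    for it in range(n_iters):
        best_move, best_score = None, None
        for i, j in itertools.combinations(range(len(current)), 2):
            trial = list(current)
            trial[i], trial[j] = trial[j], trial[i]
            f_trial = objective(trial)
            a, b = current[i], current[j]
            tabu = (it - last_moved.get(a, -tenure - 1) <= tenure or
                    it - last_moved.get(b, -tenure - 1) <= tenure)
            if tabu and f_trial <= f_best:        # aspiration criterion
                continue
            score = f_trial - penalty_w * (frequency[a] + frequency[b])
            if best_score is None or score > best_score:
                best_move, best_score = (i, j, f_trial, a, b), score
        if best_move is None:
            break                                  # everything tabu
        i, j, f_trial, a, b = best_move
        current[i], current[j] = current[j], current[i]
        f_current = f_trial
        last_moved[a] = last_moved[b] = it
        frequency[a] += 1
        frequency[b] += 1
        if f_current > f_best:
            best, f_best = list(current), f_current
    return best, f_best

# Toy demonstration on a small permutation problem.
best_lp, f_best = tabu_search(
    list(range(6))[::-1],
    lambda p: -sum(abs(v - i) for i, v in enumerate(p)), n_iters=20)
```

Note that, unlike the hill-climbers above, every iteration evaluates the full neighbourhood, which is exactly the cost trade-off discussed in the text.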
3.6 Genetic Algorithms

Genetic Algorithms is the name given to a group of optimisation algorithms that draw their inspiration from the ideas of evolutionary genetics. There are a number of significant differences between this group of algorithms and those that have been discussed so far in this review. Rather than having a single 'current' solution, genetic algorithms have a population of many current solutions. The method proceeds by 'breeding' new (offspring) solutions from two parent solutions drawn from the current population. Breeding is a way of exchanging information between the parents to produce the offspring. All of the decisions in this process are made in an essentially random way. It might seem surprising that such a scheme would work, but experience shows that it works extremely well. In his review, Parks68 gave a description of a simple genetic algorithm using binary strings. Below we discuss the basic algorithm and then expand on the details of implementation. Finally we discuss a specific application to incore fuel management. The basic algorithm is given in figure 13. To implement it a number of decisions
have to be made. In order of importance these are:

1. The problem of encoding, or representation;
2. Crossover operator;
3. Selection scheme for parents;
4. Construction of the new population;
5. The mutation operator;
6. Population size;
7. Initial population;
8. Parameter settings.
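A skeleton of the basic algorithm of figure 13 might be sketched as follows, with the decisions above appearing as open parameters. Tournament selection, generational replacement with a one-member elite, and all default values are illustrative choices, not prescriptions from the text.

```python
import random

def genetic_algorithm(init_population, objective, crossover, mutate,
                      n_generations=30, tournament_size=2, n_elite=1,
                      mutation_rate=0.05, rng=None):
    """GA skeleton: tournament selection of parents, crossover to breed
    offspring, occasional mutation, and generational replacement with a
    small elite group guaranteed to survive."""
    rng = rng or random.Random(3)
    population = [list(p) for p in init_population]

    def tournament():
        entrants = rng.sample(population, tournament_size)
        return max(entrants, key=objective)

    for _ in range(n_generations):
        ranked = sorted(population, key=objective, reverse=True)
        offspring = ranked[:n_elite]              # elitism
        while len(offspring) < len(population):
            child = crossover(tournament(), tournament(), rng)
            if rng.random() < mutation_rate:
                child = mutate(child, rng)
            offspring.append(child)
        population = offspring
    return max(population, key=objective)

# Toy demonstration: maximise the number of 1-bits in a binary string.
def _one_point(p1, p2, rng):
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def _flip_bit(c, rng):
    c = list(c)
    c[rng.randrange(len(c))] ^= 1
    return c

_rng = random.Random(9)
demo_pop = [[_rng.randint(0, 1) for _ in range(12)] for _ in range(10)]
best = genetic_algorithm(demo_pop, sum, _one_point, _flip_bit)
```

Each of the eight decisions listed above corresponds to something passed in or parameterised here, which is why the encoding and crossover arguments dominate the quality of the results.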
Encoding the Problem. Encoding your problem, choosing how your problem is represented, is probably the single most critical issue in the implementation of the GA. Get it right and the GA will perform well; get it wrong and the GA will perform badly. It is for this reason that when used as a 'black box' optimisation tool the results are often poor. There is a close relationship between the representation and the possible crossover operators, which can lead to the choice of representation being led by the wish to use a particular crossover operator. This is not the correct way to proceed. The general rule that has to be followed is that the representation should be as 'natural' and 'simple' as possible and preserve important relationships between elements. For example, in a curve fitting problem we aim to find optimum values for a list of independent parameters, (a, b, ...). There are no relationships between the parameters and the natural form is a simple list. The only outstanding decision is whether to use a binary representation for the numbers or a decimal representation. A binary representation is generally preferred as it is simpler than a decimal representation. If the problem is the travelling salesman problem (TSP), where we find the sequence in which a group of cities should be visited to minimise the total distance travelled, then the most natural representationxii is to list the cities in the order that they should be visited. It is possible to find ways that use binary or decimal notation, but they are not natural and do not preserve the relationships between elements. For a problem like that of fuel management described in this paper, the natural representation is that of a 2-D grid. It is possible to express the grid as a linear sequence, which does not preserve all the relationships, or in a decimal notation, which loses even more information about relationships between elements.
Other problems have their own representations, which fall into none of the described categories. Crossover Operators. The purpose of a crossover operator is to take information from each of the parents and to combine it to produce a viable (or feasible) individual offspring. An ideal crossover operator will take any two parents in the natural encoding and generally produce a viable offspring. If this is not possible, then you either have to xii There is no formal definition of a natural representation. The guiding principle is that the representation should be as simple and as meaningful as possible, and that under crossover and mutation, relationships between elements should be preserved.
test each offspring solution to see if it is viable, or the representation has to be changed so that guaranteed crossover can be performed. It is at this point that many of the people who use the GA as a black box algorithm go wrong. They often select a crossover scheme based on binary strings (since most introductory texts use this form) and force their problem to be represented by binary strings. They may be able to guarantee the viability of the offspring, but the encoding is not natural and does not preserve relationships. Hence, the algorithm fails to give good results, and the GA may be dismissed as ineffective. Crossover on binary strings. We start with two parent strings, P1 and P2, and randomly select a number of points along the strings. The number of cut points may be one, two, some other number, or itself randomly chosen. Then to produce an offspring we copy the binary digits from P1 until we reach the first marker, then copy from P2 until the next marker, and then copy from P1 again, switching at each successive marker.
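This copy-and-switch procedure can be sketched directly; the function name and defaults are illustrative.

```python
import random

def k_point_crossover(p1, p2, k=2, rng=None):
    """k-point crossover on equal-length binary strings: copy from P1
    until the first randomly chosen cut point, then from P2, switching
    source parent at each successive cut point."""
    rng = rng or random.Random(4)
    cuts = sorted(rng.sample(range(1, len(p1)), k))
    child, source, other = [], list(p1), list(p2)
    previous = 0
    for cut in cuts + [len(p1)]:
        child.extend(source[previous:cut])
        source, other = other, source   # switch parent at each marker
        previous = cut
    return child

# Toy demonstration with two maximally different parents.
child = k_point_crossover([0] * 8, [1] * 8, k=2)
```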
Clearly one can easily produce a second offspring by starting with P2 instead of P1, and this is often done. Crossover on ordered lists. For problems like the TSP, where the natural representation is an ordered list, simple crossover, as described above, does not lead to viable offspring, e.g.
The result is both an incomplete list and one with duplication. To overcome this problem many ingenious operators have been proposed: Partially Mapped Crossover (PMX)69, Order Crossover (OX)70, Order Crossover #271, Position Based Crossover (PBX)71, Genetic Edge Recombination (GER)72, Enhanced Edge Recombination (EER)73, Tie-Breaking Crossover #1 (TBX1) and Tie-Breaking Crossover #2 (TBX2)74, Intersection Crossover (IX) and Union Crossover (UX)75, and permutation Union Crossover (UX2)74. Figure 14 gives an example of one, known as the Partially Mapped Crossover (PMX). Crossover on grids. Many of the crossover operators can easily be extended to work on grids; see figure 15 for a version of PMX. General considerations for crossover operators. A consequence of designing a crossover that generates viable offspring is that the offspring solution tends to receive more information from one parent, and some randomisation occurs. Both of these are generally considered bad. In the example of fuel management that is discussed below we describe a representation that includes information not only on the position of fuel
elements but also other properties (their reactivities). The crossover operator then uses both positional and reactivity data to produce offspring. At first sight the offspring seem to have suffered a lot of randomisation. However, this is not the case, and the crossover operator combines the information in each parent to obtain a balanced, viable offspring. Selection scheme for parents. To produce an offspring solution, we need two parents. These parents are selected from the 'current' population. There are various ways in which the parents could be selected. All of the methods are essentially random in their nature, but each introduces 'selection pressure' (or bias). The simplest method would be to assign each individual in the population an equal probability of being selected and then to choose two randomly (this is known as random walk selection). The problem with this method is that 'bad' individuals are as likely to be selected as 'good' individuals. Hence there is no selection pressure, which should favour better individuals and improve their chances of breeding. In nature, organisms that are well suited to their environment are likely to produce many offspring for the next generation, while poorly adapted individuals may produce few, or no, offspring. There are two distinct methods for introducing selection bias: selection probabilities, based on either a fitness function or linear ranking, and tournament selection.
In the earliest applications of the GA, the probability p_i that an individual i was selected was based on some fitness function f_i:76

p_i = f_i / Σ_j f_j

The difficulties with this procedure are three: all the f_i need to be positive, with f_i having the highest value if i is the best individual. To obtain an acceptable f_i, the calculated model function F_i may need to be scaled and translated. If the variation among the f_i is too small, then the selection probabilities will become uniform, and selection pressure will be lost. If the best individual has a fitness much larger than all the others, then only that one individual will be selected as a parent. It therefore requires a degree of care in deciding how f_i is calculated from F_i. This usually requires a good understanding of the function F. An alternative scheme based on linear ranking was introduced by Baker77. In this algorithm the probability p_i that an individual is selected depends on its rank r_i, where the best solution has rank 1:

p_i = (1/μ)(S − 2(S − 1)(r_i − 1)/(μ − 1))

S is a parameter in the range [1,2] which modifies the selection pressure. S = 1 corresponds to a uniform selection probability, while S = 2 gives a value of p_i that varies linearly with the rank. In most studies that use Baker's ranking scheme, a value of S = 2 is normally taken. This method has been extended, by Bäck78 and Carter79, so that only part of the population is used to breed from. It is known as the (μ, λ) scheme.
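The two probability rules above translate directly into code. Names are illustrative; fitnesses are assumed positive for the proportional rule, and the linear-ranking formula is one standard form consistent with the description.

```python
import random

def proportional_selection(fitnesses, rng):
    """Fitness-proportional selection: p_i = f_i / sum(f); all
    fitnesses must be positive.  Returns the index selected."""
    total = sum(fitnesses)
    pick = rng.random() * total
    cumulative = 0.0
    for i, f in enumerate(fitnesses):
        cumulative += f
        if pick <= cumulative:
            return i
    return len(fitnesses) - 1

def baker_ranking_probabilities(mu, s=2.0):
    """Linear-ranking probabilities
    p_r = (1/mu) * (s - 2(s - 1)(r - 1)/(mu - 1))
    for ranks r = 1 (best) .. mu (worst); s = 1 is uniform, s = 2
    varies linearly with rank."""
    return [(1.0 / mu) * (s - 2.0 * (s - 1.0) * (r - 1) / (mu - 1))
            for r in range(1, mu + 1)]

# The probabilities sum to one and favour the best-ranked individual.
probs = baker_ranking_probabilities(10, s=2.0)
```

The scaling-and-translation pitfalls discussed above are exactly why the ranking version is often preferred: its probabilities depend only on rank, never on the raw spread of F.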
µ is the size of the population from which parents are selected. Carter79 has shown for several problems that a suitable choice of these parameters gives good results. Tournament Selection. Tournament selection for parents was introduced by Goldberg, Korb and Deb80. To select a parent we first select n_T members of the current population, using a uniform probability. Then the best of these is selected as the parent. This is repeated for each parent required. It can be shown that the probability of selecting a particular individual of rank r as the parent is given by

p_r = ((µ − r + 1)^n_T − (µ − r)^n_T) / µ^n_T
(NB at n_T = 2 this is equivalent to Baker ranking with S = 2.) It has recently been claimed, on theoretical grounds, that tournament selection is the best method81. A major advantage of tournament selection over rank-based selection when µ is large and n_T is small is that it is much quicker to compute. Construction of the New Population. Once function values for each of the offspring solutions have been calculated, we need to construct a new population from a combination of the previous population and the children. We have a total of µ + N_c individuals (parents plus children) to choose from to create a population of size µ. It is quite possible that some of these individuals are identical, so we may wish to remove most of the duplicated individuals before proceeding to create the new population. The principal methods, discussed below, are either generational replacement (with or without elitism) or steady-state replacement. Generational Replacement. In a pure generational replacement scheme, no member of one population is allowed to pass into the next population. The population is made from the best of the children produced. A variation on this is to allow an 'elite group' of E individuals to be guaranteed survival from one generation to the next. Clearly E < µ, where E is the size of the elite group. A commonly used value is E = 1, where just the best individual is kept. Another possibility is to set E = µ − N, where the worst N members of the population are discarded at each generation and replaced. A third possibility is to set E = µ, so that the previous breeding population is carried forward, but after ranking some of these individuals may not be in the new breeding population. Steady-State Replacement. In this case N is normally set to N = 1, and all members of the previous population (except the parents) are carried forward. The parents and the children then compete, either deterministically or randomly, to complete the new population. This approach ensures that parents are always selected from the best available population.
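A generational replacement step with an elite group, as described above, might be sketched as follows; the function name and the deterministic "best of the children" rule are illustrative assumptions.

```python
def next_generation(population, children, objective, n_elite=1):
    """Generational replacement with an elite group: the best n_elite
    members of the old population survive unconditionally; the rest of
    the new population is filled with the best of the children."""
    elite = sorted(population, key=objective, reverse=True)[:n_elite]
    rest = sorted(children, key=objective,
                  reverse=True)[:len(population) - n_elite]
    return elite + rest

# Toy demonstration: maximise the number of 1-bits.
new_pop = next_generation([[0, 0], [1, 0], [1, 1]],
                          [[0, 1], [1, 1], [0, 0]], sum, n_elite=1)
```

Setting n_elite = 0 recovers pure generational replacement; a steady-state scheme would instead copy forward everything except the parents and let parents and children compete for the remaining places.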
Mutation Operators. The type of mutation that is possible will depend on the representation that you are using. For the incore fuel management problem, the reader is referred to the 'small changes' of section 3.1. It should be noted that in the GA, the mutation operator is only rarely used. The author often uses one mutation per twenty offspring solutions. Population Size and Initial Population. The size of the population and its relationship to the performance of the GA is one of the least well documented areas. Initial work82 suggested that populations should be quite large and that the initial
population should be randomly chosen. However, more recent work83, 79 has suggested that it is quite possible to work with small breeding populations, but that a non-random initial population may be needed to ensure good convergence. In some problems83 it is fairly clear how to select initial populations. In others, such as the incore fuel management problem, it is not obvious how the initial population should be selected. If a random initial population is used, it is worthwhile including a best guess for the solution. However, care should be taken so as to avoid biasing the results. Parameter Settings. Having read the preceding discussion, one might think that setting up a Genetic Algorithm would be a complex task. However, most of the parameters are for fine tuning the performance, and a default set of parameters will give acceptable results for many problems. In the author's experience the following represent an acceptable initial parameter set:

population size
sets of parents
number of children per set of parents
mutation rate: one mutation per 20 offspring
elitism
selection: either (µ, λ) selection or tournament selection
initial population: 20% user specified, 80% random
problem representation: this can only be decided by the user. The aim should be to choose a 'natural' representation.
crossover: this needs to work effectively with the representation, but the author suggests the following as a starting method:
binary strings: either 2-point crossover or random k-point crossover
ordered strings (with no element information): the fast implementation of Union crossover (original Fox and McMahon75, fast version Poon and Carter (UX2)74)
ordered strings (with element information): HTBX as described below
higher-dimensional representations: generalisations of the above

Heuristic Tie-Breaking Crossover for Ordered Strings (HTBX). Many problems can be represented by an ordered string drawn from a finite set of objects. The archetypal problem of this type is the Travelling Salesman Problem, where the string represents the order in which the cities are to be visited. Many crossover operators have been suggested to tackle strings of this sort and ensure that any offspring is viable. These include PMX, OX, OX2, PBX, CX, IX, UX and UX2. In a comparative study Poon & Carter74 showed that, averaged over six problems, UX2 was the best performer. There are ordering problems where additional information is available. Such a problem is the balancing of Francis turbines. A 'Francis' hydraulic turbine is constructed from a central shaft and an outer annulus. Between these are fixed a number
(typically 20) of curved blades. As water flows from the outer edge to the centre, the turbine turns, rotating the shaft, which is connected to an electric alternator. These machines can be very large, with diameters of 10 m and blade masses of about 18 tonnes. Blade masses can vary by ±5%. As the position of the blades is fixed, the system is balanced by welding the blades in an order that minimises the distance of the centre of gravity of all the blades from the centre of the shaft axis. Final balancing is achieved by adding lead weights to the outer annulus. The additional information available is the mass of the individual blades. In a standard implementation of an ordering problem a list of objects (a, b, c, ...) has to be placed in some order, so as to optimise a function that depends on the order. In many problems the elements have some property, such as their weights, which allows them to be ranked. So if the element labelled 'e' is the eighth heaviest, then instead of referring to element 'e' we refer to the eighth heaviest element. It is also easy to find elements similar to element 'e', these being the seventh and ninth heaviest elements. The HTBX operator, described in figure 16, uses these principles. In the example included in figure 16, after the parents have been encoded using the ranks of the elements and crossover performed, the first offspring is (5,1,3,6,4,4). In this offspring the second ranked element (if ranked by weight then this would be the second heaviest) is not present and the fourth ranked element appears twice. After the offspring have been re-mapped, this first offspring becomes (5,1,2,6,4,3). The elements with ranks 1, 5 and 6 have retained their positions. The element with rank 3 has been replaced by the element with rank 2, leaving the two places occupied by the element with rank 4 to be filled by the elements with ranks 3 and 4. This final decision is made in a random way.
Comparing the offspring to its parents one might be surprised to find that what had been the element ranked 3 in both parents has been replaced by the element with rank 2. This is just a consequence of trying to meet the preferences for the element with rank 4. In a situation like this the element is always replaced with a 'similar' element. In larger problems the operator does not introduce variations as large as those seen here, since if we have 100 elements then the sixth heaviest will be much like the second heaviest, etc.

HTBX for incore fuel management. To apply HTBX to the incore fuel management problem, we need to generalise the crossover operation to two dimensions and to identify the property by which to rank the fuel elements. The ranking property that we choose to use is the fuel element reactivity. Where two or more fuel elements have the same reactivity the ties are broken randomly. The substrings used in crossover for ordered strings are replaced by 'chunks' from the two-dimensional space, as illustrated in figure 17. We start with two randomly chosen parents, P1 and P2. The fuel element identifiers are replaced by the fuel element ranks to obtain R1 and R2. An offspring, C1*, is created by copying one chunk from the first parent and another from the second parent. The chunk boundary was chosen randomly; there is no particular reason for the chunks to be simply connected as in this example. A random map is generated and combined with C1* before the resulting grid is re-ranked. Finally the ranks are replaced by fuel element identifiers to obtain the offspring C1. Comparison of the original descriptions of the parents (P1 & P2) and the final description of the offspring (C1) might make you think that the operation had been very destructive. However if one compares parents and offspring using rank labels (R1, R2 & C1**) we can see that most positions have changed their rank by at most one.
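The rank-and-re-map mechanism of HTBX for one-dimensional ordered strings can be sketched in a few lines of Python. This is an outline only: the weight-based ranking, the random tie-breaking and the re-mapping follow the description in the text, but the function names and the choice of a two-point crossover on the rank strings are illustrative assumptions for the sketch.

```python
import random

def rank_by_weight(elements, weight):
    # rank 1 = heaviest element; ties in weight broken arbitrarily
    order = sorted(elements, key=weight, reverse=True)
    return {e: r for r, e in enumerate(order, start=1)}

def remap(ranks):
    # turn a rank string with duplicates/gaps back into a permutation of
    # 1..n: sort positions by value, breaking ties randomly, then re-rank
    idx = sorted(range(len(ranks)), key=lambda i: (ranks[i], random.random()))
    out = [0] * len(ranks)
    for new_rank, pos in enumerate(idx, start=1):
        out[pos] = new_rank
    return out

def htbx(p1, p2, weight):
    # encode both parents (orderings of the same elements) as rank strings
    rank = rank_by_weight(p1, weight)
    elem = {r: e for e, r in rank.items()}
    r1 = [rank[e] for e in p1]
    r2 = [rank[e] for e in p2]
    # simple two-point crossover on the rank strings
    i, j = sorted(random.sample(range(len(p1) + 1), 2))
    child = r1[:i] + r2[i:j] + r1[j:]
    # repair to a valid permutation and decode back to elements
    return [elem[r] for r in remap(child)]
```

For the figure 16 offspring (5,1,3,6,4,4), remap returns either (5,1,2,6,4,3) or (5,1,2,6,3,4); the random tie-break decides which of the two rank-4 copies becomes rank 3.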
GENETIC ALGORITHMS AND OTHER RECENT DEVELOPMENTS
3.7 Comparison of Algorithms in PWR reload core design

In this section we compare the performance of SA and GA in a realistic PWR reload core design problem. This work is reported more fully by Poon50.

Test Problem. The problem is to find a loading pattern in rotational symmetry that minimises the power-peaking, starting from a reference loading pattern. There are 48 assemblies to shuffle, of which 20 are identical fresh fuel assemblies. These may be loaded with up to three 'Burnable Poison' rods per quadrant, so there are four possible burnable poison loadings for each fresh fuel assembly which is not in a control rod position. The 28 partially burnt fuel assemblies may be orientated in four ways.

Simulated Annealing. The Simulated Annealing (SA) results for the above problem were generated using FORMOSA84. FORMOSA is a simulator based on Generalised Perturbation Theory, coupled to some highly tuned SA algorithms.

Genetic Algorithm. The results presented here were generated using a tuned version of the GA with the HTBX crossover operator. The simulator used by FORMOSA was used to produce the reactor simulations required for the GA. The same reference loading pattern was used throughout by both algorithms.
Discussion of the Results. Figure 18 shows the average performance over ten independent runs, for the GA and two of FORMOSA's search algorithms. The GA finds solutions as good as those found by SA in approximately 1/3 of the function evaluations. The GA was stopped at 20000 function evaluations, when it seemed to have stopped converging. This is a well documented problem with the algorithm. Whilst the GA is good at finding the approximate position of an optimum solution, it is not suited to fine tuning that estimate. This has been demonstrated theoretically by several authors (e.g. Sulomon85). The advised response is to switch to a hill-climbing algorithm when the GA starts to lose performance.

Applications of the GA to Nuclear Engineering. Apart from the work by Poon et al.50, 51 on the application of the GA to incore fuel management, the author is aware of only three other papers. DeChaine and Feltus describe their CIGARO system in two papers52, 53; the third paper is by Tanker and Tanker54. DeChaine and Feltus describe how they use a binary bit string (genome) to represent the beginning-of-cycle k∞ (the ratio of neutron numbers in successive generations in a source-free nuclear reactor of infinite extent) at all the loading positions. They then try to match the required distribution with the available fuel types, before submitting the core loading to a reactor physics code. The GA is quite standard for a bit-based genome (although the crossover rate of 0.45 might be considered slightly low). They justify their choice of representation and GA by the statement: "The GA cannot work with the real optimisation variables, i.e. the assignment of fuel types to loading position."52 The author hopes that the preceding discussion has indicated how the GA can be directly applied to the assignment of fuel types to loading position. Tanker and Tanker54 work with the fuel loading pattern, but only allow three fuel types.
They use a modified version of the Partially Mapped Crossover, described in figure 14, to re-order fuel elements. They report that linear programming is about 30 times quicker than their GA, although the result is not as good. Linear programming is a very efficient way of solving problems where the objective function is a linear combination of the decision variables and all the constraints are linear as well. The linear programming algorithm is able to exploit the high degree of redundancy due to having just three fuel types. The Genetic Algorithm is unable to exploit this redundancy or all of the constraints, so it has to search a much larger solution space than the linear programming algorithm. When the objective function is linear, the GA would not be the optimisation algorithm of choice, since it performs best on non-linear functions.

General Applications of the GA. Genetic Algorithms have been an active area for research for the last two decades. Since the algorithm was first introduced by Holland76 there have been six international conferences86, 87, 88, 89, 90, 91 on the topic, and applications of the algorithm are also widely discussed in other journals and conferences. There are many introductory texts now available, although two of the better ones, Davis92 and Goldberg82, are now a little dated. Since these were published much progress has been made, particularly in the area of combinatorial optimisation.

3.8 Population-Based Incremental Learning Algorithm
The population-based incremental learning (PBIL) algorithm, proposed by Baluja3, combines a hill-climbing algorithm with evolutionary optimisation. It is different from all the other algorithms considered in this paper, in that it does not proceed from
a current solution(s) to a trial solution by some re-combination or mutation method. Instead a prototype probability vector is used to generate random solutions, which in turn are used to update the prototype vector. The PBIL algorithm operates on binary strings. A flow diagram is given in figure 19. The prototype probability vector P is a vector of numbers in the range [0.0, 1.0]. Initially every element of the vector is set to 0.5. To generate a random trial vector T, a binary string, we first generate a vector Z with random elements in the range [0.0, 1.0]. If the value of an element in the random vector Z is greater than the corresponding element in the prototype vector P, then the element in the trial vector T takes the value '0'; otherwise it takes the value '1'. For example:

Prototype vector P: (0.4, 0.2, 0.7, 0.7, 0.2)
Random vector Z: (0.76, 0.21, 0.87, 0.56, 0.32)
Trial vector T: (0, 0, 0, 1, 0)
If, after generating M trial vectors, the best trial vector is B and the worst is W, and R is another random vector, then the prototype vector P is updated according to the following rule

P ← P + λ1 (B − P) − λ2 (W − P) + λ3 (R − P)
where λ1, λ2 and λ3 are numbers in the range [0.0, 1.0]. This moves the prototype vector slightly towards the best vector, slightly away from the worst vector and adds a small random variation. The sizes of the three parameters control the search and convergence properties. For a problem that can be expressed as a binary string, this method can be applied directly. For combinatorial problems, the problem can be encoded using the following scheme. Gray coding^xiii is used to represent a number in the range [0, 2^S − 1], where 2^S is greater than or equal to the number of elements. For example, in our test problem with 15 elements, S = 4 so 2^S = 16. The length of the binary string is then S times the number of elements. For a six-element problem, as in figure 16, S = 3, so one would need a total of 6 x 3 = 18 bits. If at some stage the prototype vector P and random vector Z are:

Prototype vector P: (0.74, 0.52, 0.80, 0.30, 0.67, 0.68, 0.53, 0.51, 0.27, 0.61, 0.22, 0.67, 0.39, 0.34, 0.50, 0.84, 0.69, 0.90)

Random vector Z: (0.52, 0.73, 0.75, 0.66, 0.66, 0.51, 0.01, 0.02, 0.75, 0.35, 0.13, 0.12, 0.89, 0.02, 0.86, 0.75, 0.83, 0.91)

then this will generate the following trial vector T of Gray codes

(1,0,1,0,1,1,1,1,0,1,1,1,0,1,0,1,0,0)

which decodes to give (6,2,4,5,3,7). Clearly this needs to be re-mapped and any conflicts resolved. This could be done as in the HTBX crossover algorithm. Finally we obtain (d,c,f,b,a,e) as a trial solution. The PBIL algorithm is quite a new method and initial tests93 on some standard combinatorial optimisation problems (TSP, knapsack, bin packing, job-shop scheduling) look promising. In particular, although PBIL is much simpler than the GA, it can generate comparable results for many problems. It is therefore expected that there will be a growing interest in applications of PBIL in industry.
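The generate-and-update loop of figure 19 can be sketched in Python. This is a minimal, illustrative PBIL for a bit-string fitness function: the function and parameter names (pbil, l1, l2, l3, n_trials) and their default values are assumptions for the sketch, not Baluja's published settings, and the random perturbation is implemented in the same move-towards-a-vector style as the other two update terms.

```python
import random

def pbil(fitness, n_bits, n_trials=20, l1=0.1, l2=0.075, l3=0.02, iters=200):
    # prototype probability vector, every element initially 0.5
    p = [0.5] * n_bits
    best_ever, best_val = None, float("-inf")
    for _ in range(iters):
        # generate M trial vectors from the prototype vector
        trials = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
                  for _ in range(n_trials)]
        trials.sort(key=fitness)
        worst, best = trials[0], trials[-1]
        if fitness(best) > best_val:
            best_ever, best_val = best, fitness(best)
        for i in range(n_bits):
            # towards the best, away from the worst, plus a small
            # random variation; clamp to the probability range
            p[i] += l1 * (best[i] - p[i])
            p[i] -= l2 * (worst[i] - p[i])
            p[i] += l3 * (random.random() - p[i])
            p[i] = min(1.0, max(0.0, p[i]))
    return best_ever
```

On the "count the ones" problem (fitness = sum), the prototype vector is driven towards the all-ones string within a few hundred generations.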
4 OPTIMISATION METHODS FOR CONTINUUM PROBLEMS

The previous section concentrated on methods for combinatorial optimisation, although most of the methods can be adapted to work on problems involving continuous decision variables. In this section a new method, developed in the petroleum industry by Frandsen4, for continuum problems is described. It should find applications in many other industries.
xiii See the appendix on Gray coding.

4.1 A Trust Region Multi-point Approximation Method

In many fields of science and engineering we wish to optimise a function that depends on the results from a slow numerical simulation (e.g. simulation of a reactor core). This often results in a trade-off between the quality of the solution (as many optimisation
schemes require numerous function evaluations) and the computational resources used. This method attempts to alleviate this problem. It is often the case that the objective function f cannot be directly calculated from the decision variables x. In these cases the objective function depends on a vector y, whose elements are themselves complex functions of the decision variables. Mathematically this can be written

f = f(y(x))

The elements of y will often be the output of some comprehensive and slow numerical simulation, such as a reactor simulation. In many optimisation methods for continuous decision variables, implicit use is made of a generic function to approximate the objective function around a current point. For example the Newton-Raphson method assumes a linear model. These generic models have a number of parameters that are adjusted to achieve the approximation

f(x) ≈ g(x, α)

where α are tunable parameters. As an alternative to using a generic model one might use an approximation model ŷ to replace the comprehensive model, i.e.

y(x) ≈ ŷ(x)

The approximation ŷ will normally be quick to evaluate, at least compared with y; it should capture the relevant physics and be qualitatively correct. These models generally do not have tunable parameters. The use of Generalised Perturbation Theory (GPT) for incore fuel management would be an example.

Multi-point Approximation. The multi-point approximation (MpA) combines the generic model approach with the approximation model approach. A generic model is defined which uses an approximation model with tunable parameters:

g(x, α) = α1 + α2 f(ŷ(x, α3, ..., αL))

where α1 and α2 are linear scaling parameters, and ŷ is a quickly evaluated approximation model with (L − 2)^xiv adjustable parameters. These parameters would be physical parameters of the model such as the neutron absorption cross-section or the thermal utilisation factor. In a standard approximation model these parameters would be fixed by the design of the reactor. We allow them to change, accepting that it implies a slight modification of the reactor. It is entirely possible that by adjusting these parameters one would obtain a reactor that could not physically be built. For instance the parameters might imply moderator characteristics that can not be provided by any known moderator. However, if a moderator with those characteristics did exist, then the reactor could be constructed. The parameters α are used to adjust the generic model so that it matches f as closely as possible at a set of points at which f is known. This is achieved by minimising a weighted error squared function.
xiv The number of parameters (L − 2) in the approximation model is dependent on the user's choice of model.

E(α) = Σk wk [ f(xk) − g(xk, α) ]²

where the sum is over all the points xk at which f is known, and the wk are weights associated with each point. If xc is the current estimate of the optimum of f, and r is a given radius, then the region within the hyper-sphere |x − xc| ≤ r is used to fit the model, as shown in figure 20. r is chosen such that there are at least L points (L being the number of tunable parameters α) within the hyper-sphere. The region is known as the "trust region", i.e. the region where we trust g(x, α) to be a good approximation to f(x). Having defined a trust region and fitted the generic model, we now find the point x* within or on the boundary of the trust region that minimises the generic model function. For this point we then run the slow numerical simulator and compare the improvement predicted by the generic model, g(xc, α) − g(x*, α), with the improvement actually achieved, f(xc) − f(x*). These quantities are used to assess the success of the approximation. The multi-point approximation can be considered as a function that takes xc and r as arguments and returns x* together with the assessment quantities; the algorithm is defined in table 2. The radius of the trust region, r, is adjusted as the algorithm proceeds.
Having calculated these quantities, a step can be classified as a 'poor step', a 'good step' or an 'indifferent step'. A step is classified as poor if the actual improvement falls well short of that predicted by the generic model (or is negative); it is classified as good if the actual improvement is comparable with, or better than, the prediction; otherwise it is classified as indifferent. If the step is classified as indifferent then xc is replaced by x*, r is left unchanged, and a new multi-point approximation is made. If the step is classified as poor then the number of points C at which f has been calculated inside the trust region is counted. If C < L then a new randomly chosen point is added to the trust region and f evaluated at this point before a new multi-point approximation is made. If this is not the case then r is reduced by a factor of 2, and if the actual improvement is positive then xc is replaced by x*. If the step is classified as good then a new multi-point approximation is made with the radius of the trust region increased by a factor that grows with the number of successive 'good' multi-point approximations. This sequence is stopped if either the MpA is 'not good' or stops improving. New values for xc and r are then selected. All the details of this algorithm are given in figure 21. The success of this approach will depend critically on the choice of the fast approximation model and the parameters that are chosen to be adjustable. The MpA method is very new and is still under development for the petroleum industry. One of its advantages is that it can smooth out a noisy objective function calculated from a comprehensive simulation due to computational effects within the simulations. It has been successfully tested on the standard analytic test functions used by the optimisation research community. For this reason it has been included in this review. The difficulty for the petroleum industry has been in the selection of appropriate approximation models for the problems being tackled. This should not deter researchers from applying the technique to other problems.
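The trust-region loop described above can be caricatured in a short Python sketch. Everything named here is an illustrative assumption rather than Frandsen's implementation: the stand-in slow objective slow_f and fast model fast_model, the least-squares fit of only the two linear scaling parameters (the internal model parameters are held fixed), the crude random sampling used to minimise the generic model over a box, and the 0.25/0.75 step-classification thresholds.

```python
import numpy as np

rng = np.random.default_rng(0)

def slow_f(x):
    # stand-in for an objective computed from a slow, comprehensive simulation
    return float(np.sum(x**2) + 0.3 * np.sum(np.cos(3.0 * x)))

def fast_model(x):
    # cheap approximation model: qualitatively correct, ignores the ripples
    return float(np.sum(x**2))

def fit_alpha(points, values):
    # least-squares fit of the generic model g(x) = a1 + a2 * fast_model(x)
    A = np.column_stack([np.ones(len(points)),
                         [fast_model(x) for x in points]])
    alpha, *_ = np.linalg.lstsq(A, np.asarray(values), rcond=None)
    return alpha

def g(x, alpha):
    return alpha[0] + alpha[1] * fast_model(x)

def mpa(x0, r=1.0, iters=30, n_cand=200):
    xc = np.asarray(x0, dtype=float)
    pts, vals = [xc], [slow_f(xc)]
    for _ in range(5):                       # seed the trust region
        p = xc + r * rng.uniform(-1, 1, xc.size)
        pts.append(p); vals.append(slow_f(p))
    for _ in range(iters):
        inside = [(p, v) for p, v in zip(pts, vals)
                  if np.linalg.norm(p - xc) <= r]
        if len(inside) < 2:                  # too few points: add one
            p = xc + r * rng.uniform(-1, 1, xc.size)
            pts.append(p); vals.append(slow_f(p))
            continue
        P, V = zip(*inside)
        alpha = fit_alpha(list(P), list(V))
        # minimise the generic model in the trust region by crude sampling
        cand = [xc + r * rng.uniform(-1, 1, xc.size) for _ in range(n_cand)]
        xs = min(cand, key=lambda x: g(x, alpha))
        fs = slow_f(xs)
        pts.append(xs); vals.append(fs)
        predicted = g(xc, alpha) - g(xs, alpha)
        actual = slow_f(xc) - fs
        rho = actual / predicted if abs(predicted) > 1e-12 else 0.0
        if rho < 0.25:                       # poor step: shrink the region
            r *= 0.5
            if actual > 0.0:
                xc = xs
        elif rho > 0.75:                     # good step: accept and expand
            xc, r = xs, 2.0 * r
        else:                                # indifferent: accept, keep r
            xc = xs
    return pts[int(np.argmin(vals))]         # best point evaluated
```

Starting from (2.0, −1.5), the sketch returns the best point evaluated so far, which improves on the starting objective while calling the slow function only a few dozen times.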
5 CONCLUSIONS
In this paper six algorithms for combinatorial optimisation problems, such as PWR reload core design, have been reviewed. Five of these algorithms have been tested by several researchers on a range of problems. Of these five, GDA is the best choice for an inexperienced user wanting a "black-box" routine for occasional use. If you need to perform optimisation regularly, then the author suggests that the GA would be most efficient. It would need to be coupled with a simple hill-climbing algorithm for final optimisation. Time spent on choosing a representation and crossover operator will be repaid by better performance. PBIL is a new algorithm, which requires further testing. It is much simpler to implement than the GA, and does not suffer from the local optimum trapping problems of the GDA. Tabu Search has produced some very good results, but does seem complicated to implement. It has a definite downside in the requirement to evaluate the function for the complete local neighbourhood. The author is not aware of a good comparative test between Tabu Search and the GA. All of the methods, except Tabu Search, can be adapted for problems involving continuous decision variables. They are most useful for problems where many local optima exist. Interested readers should consult the relevant literature (see references) to find the appropriate implementation methods. The Multi-point Approximation algorithm is a new method that may make optimisation possible for problems for which optimisation has not been practical before. Most iterative search algorithms need hundreds or thousands of function calls to solve real problems. If each function call takes 2-3 hours to compute, then optimisation is
not practical. The method is allowing progress on such problems in the petroleum industry. The critical element is the selection of the approximation model for the generic function. It is expected that this algorithm will have many applications in industry.
Acknowledgements I would like to thank Drs J.D. Lewins and G.T. Parks for their valuable assistance in the preparation of this paper, and in particular for the benefit of their knowledge of the relevant nuclear technology. Any mistakes are however mine. I would also like to thank Dr P.W. Poon for the review of previous work on the reload core design problem and for permitting the use of results from her PhD thesis.
REFERENCES

1. G.T. Parks, Advances in optimisation and their application to problems in the field of nuclear science and technology, Advances in Nuclear Science and Technology 21:195 (1990).
2. F. Glover, Future paths for integer programming and links to artificial intelligence, Computers and Operations Research 5:533 (1986).
3. S. Baluja and R. Caruana, Removing the genetics from the genetic algorithm, Proceedings of the Twelfth International Conference on Machine Learning (1995).
4. P.E. Frandsen, J. Andersen and J. Reffstrup, History matching using the multi-point approximation approach, Proceedings of the Fifth European Conference on the Mathematics of Oil Recovery, Z. Heinemann and M. Kriebernegg (eds), (1996).
5. I. Wall and H. Fenech, Application of dynamic programming to fuel management optimization, Nuclear Science and Engineering 22:285 (1965).
6. B.S. Lew and H. Fenech, A non-equilibrium cycle approach for optimization of successive nuclear fuel reloads in pressurized water reactors, Annals of Nuclear Energy 5:551 (1978).
7. M. Mélice, Pressurized water reactor optimal core management and reactivity profiles, Nuclear Science and Engineering 37:451 (1969).
8. J.P. Colletti, S.H. Levine and J.B. Lewi, Iterative solution to the optimal poison management problem in pressurized water reactors, Nuclear Technology 63:415 (1983).
9. R.L. Stover and A. Sesonske, Optimization of boiling water reactor fuel management using accelerated exhaustive search technique, Journal of Nuclear Energy 23:673 (1969).
10. T. Hoshino, In-core fuel management optimization by heuristic learning technique, Nuclear Science and Engineering 49:59 (1972).
11. Y.F. Chen, J.O. Mingle and N.D. Eckhoff, Optimal power profile fuel management, Annals of Nuclear Energy 4:407 (1977).
12. B.N. Naft and A. Sesonske, Pressurized water reactor optimal fuel management, Nuclear Technology 14:123 (1972).
13. A.J. Federowicz and R.L. Stover, Optimization of pressurized water reactor loading patterns by reference design perturbation, Transactions of the American Nuclear Society 17:308 (1973).
14. K. Chitkara and J. Weisman, An equilibrium approach to optimal in-core fuel management for pressurized water reactors, Nuclear Technology 24:33 (1974).
15. T.J. Downar and A. Sesonske, Light water reactor fuel cycle optimization: Theory versus practice, Advances in Nuclear Science and Technology 20:71 (1988).
16. A. Suzuki and R. Kiyose, Application of linear programming to refuelling optimization for light water reactors, Nuclear Science and Engineering 46:112 (1971).
17. P. Goldschmidt, Minimum critical mass in intermediate reactors subject to constraints on power density and fuel enrichment, Nuclear Science and Engineering 49:263 (1972).
18. R.E. Bellman, Dynamic Programming, Princeton University Press, Princeton, (1957).
19. J.R. Fagan and A. Sesonske, Optimal fuel replacement in reactivity limited systems, Journal of Nuclear Energy 23:683 (1969).
20. W.B. Terney and H. Fenech, Control rod programming optimization using dynamic programming, Nuclear Science and Engineering 39:109 (1970).
21. H. Motoda and T. Kawai, A theory of control rod programming optimization in two-region reactors, Nuclear Science and Engineering 39:114 (1970).
22. A. Suzuki and R. Kiyose, Maximizing the average fuel burn-up over entire core: A poison management optimization problem for multizone light water reactor cores, Nuclear Science and Engineering 44:121 (1971).
23. D.C. Wade and W.B. Terney, Optimal control of nuclear reactor depletion, Nuclear Science and Engineering 45:199 (1971).
24. W.B. Terney and E.A. Williamson Jr, The design of reload cores using optimal control theory, Nuclear Science and Engineering 82:260 (1982).
25. T.O. Sauer, Application of linear programming to in-core fuel management optimization in light water reactors, Nuclear Science and Engineering 46:274 (1971).
26. R.B. Stout and A.H. Robinson, Determination of optimum fuel loadings in pressurized water reactors using dynamic programming, Nuclear Technology 20:86 (1973).
27. B. Lin, B. Zolotar and J. Weisman, An automated procedure for selection of optimal refuelling policies for light water reactors, Nuclear Technology 44:258 (1979).
28. S.H. Yeh, A.Y. Ying and J. Weisman, An automated procedure for selecting boiling water reactor refuelling policies following operational problems or changes, Nuclear Technology 61:78 (1983).
29. J.O. Mingle, In-core fuel management via perturbation theory, Nuclear Technology 27:248 (1975).
30. L.-W. Ho and A.F. Rohach, Perturbation theory in nuclear fuel management optimization, Nuclear Science and Engineering 82:151 (1982).
31. H. Motoda, J. Herczeg and A. Sesonske, Optimization of refuelling schedule for light water reactors, Nuclear Technology 25:477 (1975).
32. R.K. Haling, Operating strategy for maintaining an optimal power distribution throughout life, in ANS Topical Meeting on Nuclear Performance of Power Reactor Cores, TID 7672 (1963).
33. A.L.B. Ho and A. Sesonske, Extended burn-up fuel cycle optimization for pressurized water reactors, Nuclear Technology 58:422 (1982).
34. P.W. Poon and J.D. Lewins, Minimum fuel loading, Annals of Nuclear Energy 17:245 (1990).
35. T.J. Downar and Y.J. Kim, A reverse depletion method for pressurized water reactor core reload design, Nuclear Technology 73:42 (1986).
36. Y.C. Chang and A. Sesonske, Optimization and analysis of low-leakage core management for pressurized water reactors, Nuclear Technology 65:292 (1984).
37. H. Moon, S.H. Levine and M. Mahgerefteh, Heuristic optimization of pressurized water reactor fuel cycle design under general constraints, Nuclear Technology 88:251 (1989).
38. A. Galperin and Y. Kimhy, Application of knowledge-based methods to in-core fuel management, Nuclear Science and Engineering 109:103 (1991).
39. Y. Tahara, Computer-aided system for generating fuel-shuffling configurations, Journal of Nuclear Science and Technology 28:399 (1991).
40. B.M. Rothleder, G.R. Poetschat, W.S. Faught and W.J. Eich, The potential for expert
system support in solving the pressurized water reactor fuel shuffling problem, Nuclear Science and Engineering 100:440 (1988).
41. T. Morita, Y.A. Chao, A.J. Federowicz and P.J. Duffey, LPOP: Loading pattern optimization program, Transactions of the American Nuclear Society 52:41 (1986).
42. Y.A. Chao, C.W. Hu and C. Suo, A theory of fuel management via backward diffusion calculation, Nuclear Science and Engineering 93:78 (1986).
43. Y.A. Chao, A.L. Casadei and V.J. Esposito, Development of fuel management optimization methods at Westinghouse, Proceedings of the International Reactor Physics Conference IV:237 (1988).
44. D.J. Kropaczek, In-Core Nuclear Fuel Management Optimization Utilizing Simulated Annealing, PhD thesis, North Carolina State University, (1989).
45. D.J. Kropaczek and P.J. Turinsky, In-core nuclear fuel management optimization for pressurized water reactors utilizing simulated annealing, Nuclear Technology 95:9 (1991).
46. S.A. Comes and P.J. Turinsky, Out-of-core fuel cycle optimization for non-equilibrium cycles, Nuclear Technology 83:31 (1988).
47. G.H. Hobson and P.J. Turinsky, Automatic determination of pressurized water reactor core loading patterns that maximize beginning-of-cycle reactivity with power-peaking and burn-up constraints, Nuclear Technology 74:5 (1986).
48. J.R. White, D.M. Chapman and D. Biswas, Fuel management optimization based on generalized perturbation theory, in Proceedings of the Topical Meeting on Advances in Fuel Management 525 (1986).
49. J.R. White and G.A. Swanbon, Development of a technique for practical implementation of higher-order perturbation methods, Nuclear Science and Engineering 105:106 (1990).
50. P.W. Poon, The genetic algorithm applied to PWR reload core design, PhD thesis, Cambridge University, (1992).
51. P.W. Poon and G.T. Parks, Optimising PWR reload core designs, in R. Männer and B. Manderick (eds), Parallel Problem Solving from Nature 2, 371, North-Holland, Amsterdam, (1992).
52. M. DeChaine and M. Feltus, Comparison of genetic algorithm methods for fuel management optimisation, Proceedings of the International Conference on Mathematics and Computations, Reactor Physics and Environmental Analysis 645 (1995).
53. M. DeChaine and M. Feltus, Nuclear fuel management optimization using genetic algorithms, Nuclear Technology 111:109 (1995).
54. E. Tanker and A. Tanker, Application of a genetic algorithm to core reload pattern optimization, Proceedings of the International Conference on Reactor Physics and Reactor Computations 559 (1994).
55. N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller and E. Teller, Equations of state calculations by fast computing machines, Journal of Chemical Physics 21:1087 (1953).
56. S. Kirkpatrick, C.D. Gelatt Jr and M.P. Vecchi, Optimization by simulated annealing, Science 220:671 (1983).
57. E. Aarts and J. Korst, Simulated Annealing and Boltzmann Machines, John Wiley & Sons, (1989).
58. C. Reeves (ed), Modern Heuristic Techniques for Combinatorial Problems, Blackwell Scientific Publications, (1993).
59. G.T. Parks, Optimization of advanced gas-cooled reactor fuel performance by a stochastic method, Nuclear Energy 26:319 (1987).
60. G. Dueck, New optimisation heuristics: the great deluge algorithm and the record-to-record travel, Journal of Computational Physics 104:86 (1993).
61. J.N. Carter, Optimum performance of genetic algorithms and derivatives of simulated
annealing for combinatorial optimisation, in Intelligent Engineering Systems Through Artificial Neural Networks 6, ASME Press, (1996).
62. M. Sinclair, Comparison of the performance of modern heuristics for combinatorial optimization on real data, Computers and Operations Research 20:687 (1993).
63. F. Glover and M. Laguna, Tabu search, in Modern Heuristic Techniques for Combinatorial Problems, see reference 58, (1993).
64. A. Hertz and D. deWerra, Using tabu search techniques for graph coloring, Computing 29:345 (1987).
65. Bhasin, Carreras and Taraporevola, Global Router for Standard Cell Layout Design, Dept of Electrical and Computer Engineering, University of Texas, Austin, (1988).
66. Malek, Search Methods for Travelling Salesman Problems, Dept of Electrical and Computer Engineering, University of Texas, Austin, (1988).
67. P. Hansen and B. Jaumard, Algorithms for the Maximum Satisfiability Problem, RUTCOR Research Report RR#43-87, Rutgers, New Brunswick, (1987).
68. G.T. Parks, Advances in optimization and their applicability to problems in the field of nuclear science and technology, Advances in Nuclear Science and Technology 21:195 (1989).
69. D.E. Goldberg and R.L. Lingle, Alleles, loci and the travelling salesman problem, presented in ICGA 1 see reference 86, 154 (1985).
70. I.M. Oliver, D.J. Smith and J.R.C. Holland, A study of permutation crossover operators on the travelling salesman problem, presented in ICGA 2 see reference 87, 224 (1987).
71. G. Syswerda, Schedule optimization using genetic algorithms, in A Handbook of Genetic Algorithms see reference 92, 332 (1991).
72. D. Whitley, T. Starkweather and D. Fuquay, Scheduling problems and the travelling salesman: the genetic edge recombination operator, presented in ICGA 3 see reference 88, 133 (1989).
73. T. Starkweather, S. McDaniel, K. Mathias, D. Whitley and C. Whitley, A comparison of genetic sequencing operators, presented in ICGA 4 see reference 89, 69 (1991).
74. P.W. Poon and J.N. Carter, Genetic algorithm operators for ordering applications, Computers and Operations Research 22:135 (1995).
75. B.R. Fox and M.B. McMahon, Genetic operators for sequencing problems, in G.J.E. Rawlins (ed), Foundations of Genetic Algorithms, 284, Morgan Kaufmann, San Mateo, (1991).
76. J.H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, (1975).
77. J.E. Baker, Adaptive selection methods for genetic algorithms, presented in ICGA 1 see reference 86, 101 (1985).
78. T. Bäck and F. Hoffmeister, Extended selection mechanisms in genetic algorithms, presented in ICGA 4 see reference 89, 92 (1991).
79. J.N. Carter, Optimal performance of genetic algorithms and stochastic hill-climbers for combinatorial optimisation, in Intelligent Engineering Systems Through Artificial Neural Networks 5:399 (1995).
80. D. Goldberg, B. Korb and K. Deb, Messy genetic algorithms: motivation, analysis and first results, Complex Systems 3:496 (1989).
81. T. Blickel and L. Thiele, A mathematical analysis of tournament selection, presented in ICGA 6 see reference 91, 9 (1995).
82. D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, Reading, (1989).
83. C. Reeves, Using genetic algorithms with small populations, presented in ICGA 5 see reference 90, 92 (1993).
84. Electric Power Research Center, North Carolina State University Department of Nuclear Engineering, Raleigh NC, FORMOSA-P User's Manual, (1992).
85. R. Sulomon, Genetic algorithms and the complexity on selected test functions, Intelligent Engineering Systems Through Artificial Neural Networks 5:325 (1995).
86. J.J. Grefenstette (ed), Proceedings of the First International Conference on Genetic Algorithms and their Applications, Lawrence Erlbaum Associates, Hillsdale, (1985).
87. J.J. Grefenstette (ed), Proceedings of the Second International Conference on Genetic Algorithms and their Applications, Lawrence Erlbaum Associates, Hillsdale, (1987).
88. J.D. Schaffer (ed), Proceedings of the Third International Conference on Genetic Algorithms and their Applications, Morgan Kaufmann, San Mateo, (1989).
89. R.K. Belew and L.B. Booker (eds), Proceedings of the Fourth International Conference on Genetic Algorithms and their Applications, Morgan Kaufmann, San Mateo, (1991).
90. S. Forrest (ed), Proceedings of the Fifth International Conference on Genetic Algorithms and their Applications, Morgan Kaufmann, San Mateo, CA, (1993).
91. L. Eshelman (ed), Proceedings of the Sixth International Conference on Genetic Algorithms and their Applications, Morgan Kaufmann, San Mateo, (1995).
92. L. Davis, A Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, (1991).
93. S. Baluja, An empirical comparison of seven iterative and evolutionary function optimization heuristics, CMU-CS-95-193, School of Computer Science, Carnegie Mellon University, (1995).
94. D. Whitley, K. Mathias, S. Rana and J. Dzubera, Building better test functions, presented at ICGA 6 see reference 91, 239 (1995).
Appendix: GRAY CODING

A Gray code is a binary representation of an integer such that adding or subtracting one to the integer changes the Gray code at only one bit of the representation (unlike a standard binary number). Table 3 lists the binary and Gray codes for the integers 0-32. If one looks at the change in representation between the integers 15 and 16, one finds that the binary representation has changed at five locations, while the Gray code has changed at just one location. This is repeated at many points in the sequence. The binary coding has proved difficult for some algorithms to exploit successfully, while most algorithms have found Gray coding easy to exploit. This has been discussed for the GA by Whitley et al.94 To convert between binary and Gray coding we make use of binary matrix multiplication:
G = A_E B (mod 2)

where B is a binary coded vector, G is a Gray coded vector (both written with the least significant bit first) and A_E is an upper triangular matrix of the form

          | 1 1 0 ... 0 0 |
          | 0 1 1 ... 0 0 |
    A_E = | 0 0 1 ... 0 0 |
          | . . .     . . |
          | 0 0 0 ... 1 1 |
          | 0 0 0 ... 0 1 |

with ones on the diagonal and the first superdiagonal, and zeros elsewhere. For example, for 26 (binary 11010, Gray code 10111):

G = A_E (0 1 0 1 1)^T = (1 1 1 0 1)^T (mod 2)

To decode we use

B = A_D G (mod 2)

where A_D is an upper triangular matrix of the form

          | 1 1 1 ... 1 |
          | 0 1 1 ... 1 |
    A_D = | 0 0 1 ... 1 |
          | . . .     . |
          | 0 0 0 ... 1 |

with ones on and above the diagonal. Using the previous example:

B = A_D (1 1 1 0 1)^T = (0 1 0 1 1)^T (mod 2)
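In bitwise arithmetic the two matrix products above reduce to a shift and an exclusive-or. A minimal Python sketch (the function names are ours, not from the text):

```python
def binary_to_gray(b: int) -> int:
    # Each Gray bit is the XOR of adjacent binary bits: G = B xor (B >> 1),
    # the bitwise equivalent of multiplying by the banded matrix A_E (mod 2).
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    # Decoding XORs all higher-order Gray bits into each position,
    # the bitwise equivalent of the all-ones upper triangular A_D (mod 2).
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# The worked example from the text: 26 is 11010 in binary, 10111 in Gray code.
assert binary_to_gray(26) == 0b10111
assert gray_to_binary(0b10111) == 26
# Adjacent integers differ in exactly one Gray bit, e.g. 15 -> 16.
assert bin(binary_to_gray(15) ^ binary_to_gray(16)).count("1") == 1
```

The one-bit-per-step property that the appendix describes holds for every pair of adjacent integers, which is what makes the coding attractive for search algorithms.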
THE COMPUTERIZATION OF NUCLEAR POWER PLANT CONTROL ROOMS

Dr. Bill K.H. Sun
Sunutech, Inc.
P. O. Box 978, Los Altos, California 94023, USA

and

Dr. Andrei N. Kossilov
Exitech Corporation
Stavangergasse 4/9/4, A-1220 Vienna, Austria
1. INTRODUCTION

1.1 Operation of Nuclear Power Plants

The major goals of using computers in the operation of nuclear power plants (NPPs) are (1) to improve safety, (2) to reduce challenges to the nuclear power plant, (3) to reduce the cost of operations and maintenance, (4) to enhance power production, and (5) to increase the productivity of people. In the past decade, there has been a growing need to address obsolescence, improve human performance, and comply with increasingly stringent regulatory requirements. The needs to use computers arise in the on-line, real-time processes of control and protection, alarm detection and display in the control rooms, and in the on-line assessment of processes needed for operation of a power reactor. They arise equally in the semi on-line operation needed for daily or hourly assessment of operation; for example, the monitoring of detailed reactor flux distributions. As a result, NPPs have implemented plans to replace ageing analogue systems with digital systems and have developed comprehensive and accessible information database and management systems. These systems support operations for an overall improvement in quality assurance and productivity. Advances in information and communication technology have been proven to help utilities operate power plants more efficiently by integrating computer resources and increasing the availability of information to meet NPP staff needs and corporate business strategy [1-3].

A major difficulty with the application of computers in control rooms is that the requirements of a NPP are always stringent and specific. Much on-line software must therefore be specifically written, and the hardware configuration is generally difficult to obtain off the shelf. A major problem with the procurement of on-line systems has been, and continues to be, the need for internal redundancy to ensure high availability of the computer functions.
These problems have often prevented the design intentions from being fully achieved, and in consequence the expected economic benefits have not always been realised.
Advances in Nuclear Science and Technology, Volume 25 Edited by Lewins and Becker, Plenum Press, New York, 1997
1.2 Benefits of Computerization of Control Rooms

The increasing use of computers, and of information technology in general, in many existing and all new nuclear stations has enabled a relatively high degree of automation and allowed the dynamic plant state to be represented in digital computer memory and logic [4,5]. Exploiting this advantage and the rapid evolution of digital technology, nuclear power plants have achieved substantial safety and operational benefits. Some of the most significant potential features and benefits of computerization of control room systems are the following:

• Substantial reduction in panel complexity
• Substantial reduction in instrumentation complexity
• Elimination of error-prone tasks
• Integrated emergency response information system
• Procedure-driven displays to guide operator actions
• Critical alarms with diagnostic messages dealing with plant disturbances
• Increased intelligent information for operators to plan and execute correctly

These features have been applied to control room systems to provide operating nuclear utilities with tools that substantially reduce power plant operating, maintenance and administration costs. This has been achieved in the following ways:

• Reduction of plant forced outages
• Faster recovery from forced outages
• Avoidance of plant equipment damage, and extension of service life, due to early diagnosis of equipment malfunction
• Faster start-ups
• Automation of labour-intensive operation, maintenance and administration processes

1.3 Scope and Purpose of the Paper

The paper describes the status and philosophical background of computerization of nuclear power plant control rooms. It also provides information describing the history of operational experience, lessons learned, and a summary of major computer systems.
The state-of-the-art and future trends in the use of computers in control rooms, including man-machine interface, life-cycle management, verification and validation, etc., are also considered in view of the fast evolution of the technology. The paper provides a resource for those who are involved in researching, managing, conceptualizing, specifying, designing, manufacturing or backfitting power plant control rooms. It will also be useful to those responsible for performing reviews or evaluations of the design and facilities associated with existing power plant control rooms.
2. HUMAN-MACHINE INTERFACE
2.1 Importance of Human-Machine Interface in Computerization

Many functions in NPPs are achieved by a combination of human actions and automation. Computerization is basically an automation process that allocates certain human functions to machines. Understanding the roles of humans and machines, and their interactions, is of critical importance in the computerization of nuclear power plant control rooms. The importance of the human-machine interface for ensuring the safe and reliable operation of nuclear power plants had been recognised by the nuclear energy community long before the Three Mile Island Unit 2 (TMI) and Chernobyl accidents. The concepts of operator support and human factors have been increasingly used to better define the role of control rooms. In the late 1970s, the impact of analysis
results from the TMI accident considerably accelerated the development of recommendations and regulatory requirements governing the resources and data available to operators in NPP control rooms. One important outcome was the implementation of computer-driven safety parameter display systems in control rooms, with the objective of providing human operators with an on-line, real-time display of critical safety information to aid them in emergency management. Among the human-machine interface parameters, the ergonomics of control boards and panels, the resources and facilities to deal with abnormal situations, and the display of data from instrumentation are all important for design improvements made to computerize the control rooms of nuclear power plants.

2.2 Allocation of Functions between Automata and Humans

Increasingly, computer-based systems are used to support operations and maintenance personnel in the performance of their tasks in the control rooms. There are many benefits which can accrue from the use of computers, but it is important to ensure that the design and implementation of the support system and the human task places the human in an intellectually superior position, with the computer serving the human. In addition, consideration must be given to computer system integrity, software validation and verification, consequences of error, etc. To achieve a balance between computer and human actions, the design process must consider each operational function in regard to either computer operation, human operation, or, more commonly in nuclear plants, a combination of the two [6]. The following factors will govern the relative weighting used in allocating functions between humans and computers:

1. Existing practices
2. Operational and design experience
3. Regulatory factors
4. Feasibility
5. Cost
6. Technical climate
7. Policy matters
8. Cultural and social aspects
The various factors may differ between applications and may be affected by whether a new design or a modification of an existing process through retrofit is being considered. In the retrofit case, the implementation of computers has less flexibility, owing to existing plant designs, operating practices, the need for replication, etc.

2.3 Application of Human Factors in Control Room Design

Human factors efforts in the design of computerized control rooms have been based on a firm analytical foundation. Human factors efforts complement those of other control room design team participants, resulting in an integrated design that supports the tasks performed by control room personnel. Human factors principles and criteria, along with information resulting from analyses, are applied in selecting panels and consoles, configuring them in relation to other furnishings, and establishing ambient environmental conditions (light, sound, climate) to promote effective personnel performance. In addition, habitability features (personal conveniences, aesthetic considerations, safeguards against common hazards) are specified to promote personnel comfort, morale, and safety [5,7]. The primary human factors objective in control room design is to increase operational effectiveness by ensuring that the capabilities and needs of personnel are reflected in the coordinated development of interactive design features. Human factors recommendations are intended to ensure that the design and placement of consoles and other major items support effective task performance during all operating modes. Recommended layout alternatives facilitate visual and physical access to display and control instruments.
3. COMPUTERIZATION IN NUCLEAR POWER PLANT CONTROL ROOMS
3.1 Application of Computer Technology in Control Rooms

The use of computers in nuclear power plants dates back almost to the beginning of commercially applied nuclear power. At that time, a central computer served as a data logger. With the progress of technology, smaller dedicated computers were introduced which now serve for data acquisition, the exchange of data throughout the plant, information generation by means of simple logic or more complicated analytical functions, and the provision of the desired information to the operator in the control room, usually by means of video display units. In parallel, computers began to be used for open- and closed-loop control and are also applied to the protection of the plant. Early protection applications were often limited to calculating safety-relevant parameters such as the departure from nucleate boiling; later applications included all signal handling, trip detection and redundant majority voting functions. The application of computers in control rooms can be divided between two functions: the storage of operational data, so that historical data are available for later check-up or comparison, and real-time data management, serving all the needs of on-line monitoring and automation. The basic functions usually provided for a nuclear power plant by on-line computer systems are plant monitoring and recording, and the display of information and alarms. On a typical reactor plant there are between 3000 and 7000 analogue signals from instruments measuring temperature, pressure, flow, neutron power levels, voltage, current, and other special parameters. In addition, between 10 000 and 20 000 state signals will be used. These state signals provide information on switch gear states, valve states, and alarm states, and may include the states of control room switches.
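As a toy illustration, checking such analogue signals against alarm limits might look like the following Python sketch; the signal names and limit values are invented for the example, not taken from any plant database:

```python
import time

# Illustrative (low, high) alarm limits; a real plant database would hold
# entries for thousands of analogue signals.
LIMITS = {
    "primary_pressure_MPa": (12.0, 16.0),
    "coolant_temp_degC": (260.0, 330.0),
}

def scan(readings):
    """One scan cycle: compare each analogue reading against its limits and
    return timestamped alarm records for logging and VDU display."""
    alarms = []
    for signal, value in readings.items():
        low, high = LIMITS[signal]
        if value < low:
            alarms.append((time.time(), signal, value, "LOW"))
        elif value > high:
            alarms.append((time.time(), signal, value, "HIGH"))
    return alarms

alarms = scan({"primary_pressure_MPa": 16.4, "coolant_temp_degC": 300.0})
print(alarms)  # one HIGH alarm record, for primary pressure
```

A real system repeats such a cycle roughly once per second over the whole signal database, as described below.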
The main use of the computer system is to read in these signals at intervals, typically one second, and to form an internal database representing the plant condition. That internal database is then available to software which checks the analogue signals for high, low and other alarms, and checks the state signals for alarms. In the alarm check process, the computer system records the time of detection on printed logs and on magnetic media for off-line analysis. Video Display Units (VDUs) allow the alarms to be displayed to the operators, with clear language titles, the instrument or plant contact identity, and other information. Typical computer functions are listed in Table I. The table shows that a great variety of computerized functions exists in various nuclear power plants. These functions include information and operator aid systems, computerized control and automation, and computerized protection systems. It is worth mentioning, however, that while some sort of information and operator aid computer system exists at almost every nuclear power plant, the application of computers to closed-loop control is more recent, and their application to the protection system has been made only in very recent plants and in current designs of light water and CANDU reactor plants. Particularly for the protection systems, safety implications and regulatory concerns have been major challenges to implementation.

3.2 Evolution of Control Room Designs

In the past two decades, rapidly evolving computer and communications technology has revolutionized control room system designs. The technologies include computers,
data highways, communication devices, many different information display mechanisms, human input/output facilities, software, and voice annunciation and voice actuation systems. Control room designs consider various factors including cost, operational reliability and safety. This rapid technological development in computers and electronics is coupled with significant progress in the behavioural sciences that has greatly increased our knowledge of the cognitive strengths and weaknesses of human beings. In nuclear power stations, as in most complex industrial plants, control room systems design has progressed through three generations.

First generation systems consist entirely of fixed, discrete components (hand switches, indicator lights, strip chart recorders, annunciator windows, etc.). Human factors input was based on intuitive common sense, which varied considerably from one designer to another.

Second generation systems incorporate video display units and keyboards in the control panels. Computer information processing and display are utilized. There is systematic application of human factors standards and guidelines. The human factors are applied mainly to the physical layout of the control panels and the physical manipulations performed by the operators.

Third generation systems exploit the dramatic performance/cost improvements in the computer, electronic display and communication technologies of the 1990s. Further applications of human factors address the cognitive aspects of operator performance.

All new nuclear plants and most operating plants now utilize process computers to implement part of the control room information system. As computer costs decrease
and reliability increases, computers are being used more extensively. The increased functionality provided by computer systems yields significant benefits. The use of computers also creates problems: for example, additional costs must be justified, information overload must be avoided, and provision must be made to deal with the possibility of rapid obsolescence, because the technology is changing so rapidly.

3.3 Computer Architecture

There are three classic forms of computer hardware architecture as applied to control room system design:

1. Centralized redundant computers. Typically a dual redundant centralized computer system is specified, where one computer is in control and the other runs in "hot standby". When a fault is detected by self checking in the controlling computer, the "hot standby" takes over.

2. Distributed computing (functional distribution). Functional distribution provides for multiple computers to dynamically share the total computing load. The computing tasks are allocated among a number of separate central processing units which are interconnected in a communication network. Although the computing tasks are distributed, the processors are not geographically distributed, to achieve cost saving and simplification in wiring, cabling and termination.

3. Distributed control (geographic distribution). A distributed control architecture allows the processing to be geographically distributed. The processors are located close to the inputs and outputs of the plant. This architecture can provide substantial cost savings and reliability benefits because the conventional wired analogue and relay logic is replaced by more highly standardized self checking digital system modules. Because such configurations are relatively new and because they require greater performance and faster response, there is more technical risk in such an architecture.
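The first scheme can be caricatured in a few lines of Python; the self check here is a stub standing in for the watchdog, memory and I/O diagnostics a real controlling computer would run:

```python
class Computer:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def self_check(self):
        # Stub: a real controlling computer runs watchdog, memory and
        # I/O diagnostics here.
        return self.healthy

def in_control(primary, standby):
    """Dual redundant scheme: the hot standby takes over as soon as the
    controlling computer fails its self check."""
    return primary if primary.self_check() else standby

a, b = Computer("A"), Computer("B")
assert in_control(a, b) is a   # A controls, B runs in hot standby
a.healthy = False              # fault detected by self checking
assert in_control(a, b) is b   # B has taken over
```

The sketch deliberately omits the hard parts of a real changeover, such as state transfer to the standby and bumpless takeover of outputs.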
Because of the improvements in computer and VDU functionality, application software and development tools have become available that provide the basic software building blocks for the design, implementation and validation of the detailed control room system software. The existence of these tools makes it possible and desirable for plant staff to undertake the detailed design, implementation and validation of the control room designs. With the aid of the full scope training simulators that are now required before start-up, the plant personnel can carry out the design validation and the training of operators for the new designs.

Software quality is essential for the successful implementation and licensing of control room systems for new plant and retrofit applications. Software quality is achieved by careful attention to the following requirements:

• Properly trained and experienced software development staff;
• Comprehensive, clearly documented software requirements and specifications;
• A well organized, clearly documented and understood software development process based on a pre-established software life cycle;
• Use of proven, up to date software development tools such as compilers, editors, graphics interpreters, de-bugging facilities, and file revision control tools;
• Documented validation and verification to the level required in the software development process, in accordance with the degree of nuclear public safety functionality in the software;
• Thorough, well organized testing;
• Comprehensive software configuration control.
3.4 Control Room Operator Support Systems

Operator support systems are discrete computer systems, or functions of the plant process computers, that are based on intelligent data processing and draw their inputs from plant instrumentation and control systems. Applications are mostly real-time and on-line [8-10]. In addition to control room operators, users of the support systems include operations staff management, technical specialists (e.g. engineering reactor physicists), maintenance staff, emergency management and sometimes the safety authorities. In practice, operator support systems have been implemented as functions of plant process monitoring systems, or as stand-alone applications for monitoring and diagnosis, such as materials stress monitoring, vibration monitoring and loose part monitoring. In the following, the function and purpose of the major operator support systems are described, with examples of practical applications and their operational status.

1. Task oriented displays
The function is primarily to present relevant plant information to support operators in specific tasks such as start-up, shut-down and other transients, by optimizing the information type, form and presentation. Typical examples are operating point diagrams and curves for optimum operation in transients, indicating the operating area and possible limits and their violation.

2. Intelligent alarm handling
The function is to support operators in understanding the information given by the alarms, especially in plant transients, where alarm overflow is often a problem. This is done by logical reduction and masking of irrelevant alarms, synthesizing them, dynamic prioritization based on the process state, first alarm indication, displaying the alarm state of subsystems or functional groups of the plant, etc.

3. Fault detection and diagnosis
The function is to alert operators to problems and to aid them in diagnosis before the normal alarm limits are reached, where simple alarm monitoring is impractical or where complex situations cannot be revealed by alarms or alarm logic. Examples are:
• Fault monitoring of protection logic and associated electrical supplies; fuel pin failure detection and prediction.
• Detection and identification of leakage, e.g. mass balance in the primary circuit.
• Model-based fault detection for components (e.g. preheaters) and measurement loops.

4. Safety function monitoring
Examples include critical safety function monitoring, the safety parameter display system, etc. Their function is to alert the operators to the safety status of the plant. This is based on the monitoring of derived critical safety functions or parameters, so that operators can concentrate on maintaining those safety functions. The severity of the threat which challenges the functions, as well as guidance in the recovery, are also given in some applications. In these cases, the relevant emergency procedures are referred to and the implementation of corrective actions is supervised.

5. Computerized operational procedures presentation
The function is to complement written operating and emergency procedures by computerized operator support. For instance:
• Guiding the operator to the relevant procedure.
• Presentation of procedures dynamically and interactively on displays.
• Follow-up monitoring of actions required in the procedures.
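The logical reduction described under item 2 can be illustrated with a toy rule base; the alarm names and cause-effect rules below are invented for the example, not drawn from any plant:

```python
# Toy cause-effect rules: an alarm is masked while its known root cause
# is itself active, so operators see the originating event first.
MASKED_BY = {
    "feedwater_flow_low": "feedwater_pump_trip",
    "steam_gen_level_low": "feedwater_pump_trip",
}

def reduce_alarms(active):
    """Suppress alarms whose known cause is also active, preserving order."""
    active_set = set(active)
    return [a for a in active if MASKED_BY.get(a) not in active_set]

raw = ["feedwater_pump_trip", "feedwater_flow_low", "steam_gen_level_low"]
print(reduce_alarms(raw))  # ['feedwater_pump_trip']
```

Production systems apply far richer logic (dynamic prioritization, process-state dependence, first-alarm indication), but the masking principle is the same.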
6. Performance monitoring
The function is to calculate and monitor the efficiency and optimum operation of the main pumps, turbine, generator, condenser, steam generators, preheaters, etc. in order to detect developing anomalies. The reactor thermal energy can be calculated, as well as heat, electricity and mass balances. The computation is based on physical equations and plant measurements, which must be accurate enough to guarantee reliable results.
7. Core monitoring
The function is to calculate and monitor the operation of the reactor and fuel, for instance in order to maximize the energy output of the fuel while still keeping adequate operating margins. Examples are:
• Load following and simulation/prediction.
• Reactor power distribution and burn-up.
• Prediction of xenon and critical boron.
The computation is based on reactor physics and in-core measurements, such as neutron flux and temperature.
8. Vibration monitoring and analysis
The function is to reveal, in an early phase, failures of rotating machines such as turbines and main pumps by monitoring the shaft vibration using Fourier analysis methods. Systems are operational in most countries; they aid the technical specialists in analyzing the often voluminous data from the monitoring instrumentation. They are typically stand-alone systems or, in common with loose part monitoring, they may be connected to the plant process computers to submit information also to the control room operators.
9. Loose part monitoring
The function is to detect loose parts in the reactor circuit based on noise analysis methods. Systems are operational in most plants.
10. Materials stress monitoring
The function is to monitor and predict cracks in pipes, tanks, vessels, etc. This is based on counting the thermal transients of the critical points, on the results of special arrangements, and on the calculation of stresses and cracks using physical or empirical algorithms. They are mostly dedicated stand-alone systems.
11. Radiation release monitoring
The function is to monitor, in plant emergencies, the radiation release to the plant environment for the plant emergency staff, authorities, etc. The evaluation is based on dispersion models using radiation measurements of the plant and meteorological measurements as the source data.

3.5 Emergency Response Facilities
The TMI action plan called for improvements in emergency preparedness through the provision of three separate facilities to be utilised in support of emergency operations, namely:
1. Technical support center, a room near to but separate from the control room that will be the focus for technical and strategic support to the control room operations staff. The room must provide a plant status information system and communication facilities.
2. On-site operational support center, a marshalling area for operational support personnel (maintenance, security, auxiliary operators, etc.). This facility also must contain a plant information status system and communications facilities.

3. Near site emergency operations facility, the central focal point for planning and co-ordinating all on-site and off-site emergency activities, including evacuation, communications with news media organizations, and co-ordination with government and community organizations. A plant information status system and adequate communication systems are required.

An essential requirement of the control room system design is that the plant information system used by the main control room staff should be the same one that provides plant information in the emergency response facilities. The intent is to provide facilities for use in normal routine operations which will also be useful in emergency situations. If station staff are not accustomed to using a particular facility in routine operation, they will be unfamiliar or uncomfortable with it in emergency use.

4. SAFETY AND LICENSING

4.1 Safety Considerations

Safety considerations are critical in the design and operation of control room systems. The human-machine interface provides the media for communicating the plant state to the operators and the mechanisms for the operator to alter the state of the plant. If information is misrepresented because there is a fault in the display systems, the operator may respond incorrectly during a plant upset. Consequently, there may be situations where the correct operation of these systems is critical to ensure public safety [3, 11-13]. It is important to identify a small subset of the control room systems that are required to respond correctly to the "design basis accident" and Probabilistic Risk Analysis (PRA) scenarios that are analyzed as part of the licensing process for the plant.
There is a portion of the control room systems that is dedicated as "safety systems"; these are physically, functionally and electrically isolated from the other systems and subjected to more stringent design requirements. The challenge for the control room system design is to provide an interface to the safety and non-safety systems that alleviates any human factors problems resulting from the differences in design.

4.2 Control Room Function with Increased Complexity

From the production point of view, the economic operation of NPPs is emphasized. To maintain the high availability of the plant, the design of control room systems should support the operators in the following:

• Normal operation, including pre-analyzed transients
• Abnormal transients, especially early fault detection and diagnosis, in order to prevent the situation leading to reactor scrams or the initiation of safety systems
• Outage operation

The increased size and complexity of nuclear power plants has greatly influenced the operational requirements for the design of control rooms and their systems. Plant operation is centralized in the main control room. More extensive monitoring of the plant is needed to achieve high availability. As a consequence, the number of indicators, alarms, manual controls, etc. in the control room has grown substantially. Load following of the electrical grid is a factor in the operational requirements for utilities in geographical areas with a high percentage of nuclear power supply to the grid.
The initiatives to solve the problems of growing complexity and information overflow in control rooms are:

• Higher automation levels, i.e. automation of some operator actions.
• Utilization of computer technology, e.g. by:
  - reducing irrelevant information by means of hierarchization, prioritization, condensation, suppression, etc.;
  - supporting operators by further data processing.

4.3 Operational Experience and Lessons Learned

The operational experience of plants shows that, for the safety and productivity of nuclear power, operator action is very important. Investigations indicate that human error is the main contributing factor in the incidents which have occurred. The scenarios of the TMI accident in 1979 and the Chernobyl accident in 1986 are well known, with lessons learned. The following are worth particular attention:

At TMI, because the operators had to base their decisions on a situation which was not clear, many of the actions they took to influence the process during the accident significantly exacerbated the consequences of the initiating events. One of the factors which led to actions being taken that were both inadequate and too late was poor use of the data made available to the operators in the control room. They were unable to satisfactorily process the large amounts of data available to them and had difficulty distinguishing between significant and insignificant information.

At Chernobyl, the main cause of the accident was a combination of the physical characteristics and safety systems of the reactor and the actions and decisions taken by the operators to test at an unacceptably low power level with the automatic trips disabled. Their actions introduced unacceptable distortions in the control rod configuration, and eventually led to the destruction of the reactor.
The root cause of the human error relates to the lack of a safety culture in the station, which in turn led to, among other things, inadequate knowledge of the basic physics governing the operational behaviour of the reactor.

During the last three decades of reactor operations, the role of control room operators has been shifting from that of the traditional equipment operator to that of a modern day information manager. As such, the cognitive requirements on control room operations personnel to improve availability and reliability and to reduce safety challenges to the plant have increased. These personnel are working with more complex systems, and responding to increasing operational and regulatory demands. As the demands and requirements on the operators have intensified, diagnostic and monitoring errors have occurred in power plants, causing reductions in availability and substantial cost consequences. Plant safety has been challenged by misinterpretations of data and incorrect assumptions about the plant state. Since the Three Mile Island event, a number of diagnostic aids have been implemented, such as critical parameter displays, saturation and subcooling margins, and symptom based emergency operating procedures. These have all been useful in assisting humans in making their decisions. A number of human factors studies on human-machine interfaces have also been performed. Reliable, integrated information for operational use is therefore a critical element in protecting the nuclear plant capital investment and increasing availability and reliability. With appropriately implemented digital techniques, human capabilities have been augmented substantially in their capacity to monitor, process, interpret and apply information, thus reducing errors at all stages of information processing. Taking advantage of technological and human engineering advances will continue to help operations personnel to reduce errors, improve productivity, and reduce risk to plant and personnel [14-16].
COMPUTERIZATION OF CONTROL ROOMS
In recognition of the problems and needs revealed by operating experience, major industry efforts are underway to take advantage of that experience. One is the design and construction of new plants with modern control room systems, such as the French 1450 MW N4 plant and the Japanese 1300 MW advanced BWR plant. The other is the upgrading and backfitting of existing control room systems, including digital control and instrumentation as well as human-machine interface systems.

4.4 Challenges to Control Room Retrofitting

Design of a control room system should also involve careful consideration of training issues such as training programs and instructions. There is always a risk that instructions and training programs neglect human factors issues. For instance, if instructions are written without the participation of the users, they may reflect only technical issues; thus user participation is required. The way a control room system is organized from an ergonomic and human factors point of view affects the way the operators learn to handle the system. For instance, clear labelling, coding and demarcation facilitate the learning process, and the operator can spend more time building "mental models" of the system rather than being occupied with unnecessary cognitive activities caused by bad ergonomics. As a general remark, maintenance outages deserve special attention with respect to training: outage operations have been found to create special problems from the control room point of view because of the high level of activity in the station. When retrofitting upgrades are made, it is important that the operators are given instructions and procedures related to these changes before they are made. The economic lifetime of instrumentation and control systems is much shorter than that of the major process equipment and structures, such as turbines and pressure vessels.
The main factors which affect the useful life of instrumentation and control systems are technical obsolescence and functional obsolescence. Increased functionality is achieved mainly through software upgrades; consequently, there is an increasing need to be able to modify existing software and build in new software modules. The retrofit of control rooms in many plants around the world will be a challenge in the near future, driven not only by ageing but also by the safety modifications and operational improvements made available by new technology. In the replacement of equipment and systems, developments, technical trends and supplier policies should be considered; with digital instrumentation and control, standardization, compatibility and open system architecture make gradual upgrading possible. 5.
IMPLEMENTATION AND MAINTENANCE ISSUES
5.1
Quality Assurance
Computerization of control room systems should be developed according to a recognized Quality Assurance (QA) plan and a properly defined project plan, describing the purpose of the system, the responsibility of each member of the project team, the project segmentation, reviews, hold-points, end-user approval, etc. [17-20]. The development should be divided into defined phases (e.g. definition, implementation, configuration), including for each phase the required output (i.e. documentation and test results). In addition, in-service maintenance, development and upgrading should be considered.
BILL K. H. SUN AND ANDREI N. KOSSILOV
Standardization in development helps in obtaining compatibility with suppliers, easier maintenance and longer life. Proven methods and tools should be used especially in software development, and new methods should first be tested with prototypes. Modular design eases the management of program units. 5.2
Verification and Validation (V&V)
In the functional design phase, the correct assignment of control room functions between operator and automation should be verified. This functional assignment should be validated to demonstrate that the whole system will achieve all the functional goals. The V&V of functional assignment is related to the design of new control rooms and major retrofitting projects, where the role of the operator will change. The procedure of V&V should, however, be applied to the design of the functional requirements of all new systems or functions installed in control rooms. The output of this phase is an input to the specification of control room systems. In the specification phase, the functional specifications are verified and validated in order to make sure that they fulfil the design principles and technical requirements, and that the control room systems really support safe, reliable and economical operation. The use of flexible computerized human-machine interface techniques and simulators makes it possible to perform the final validation in the implementation phase. Even in the commissioning phase of the implementation in the real plant, it is possible to make modifications to the human-machine interface, such as display pictures or operator support systems. The process of V&V of control room systems is described in more detail in IEC 964 [14]. The main considerations are: V&V should be planned and systematic; evaluation should be based on predefined criteria and scenarios; and the evaluation team should consist of specialists with various expertise who are independent of the designers. A basic requirement for computer systems is that system documentation is verified at each stage of production. Each document which defines the design should be produced by a top-down process. For the highest safety requirements, the verification should be independent and formally conducted, using check lists and documents with recorded resolutions of any discrepancies.
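A minimal sketch of this validation-with-recorded-discrepancies workflow, in Python: predefined test cases are run against the function under validation, and every discrepancy is recorded so that it can be resolved formally under the QA procedures. The `subcooling_margin` function and its test values are hypothetical stand-ins for a real plant calculation.

```python
def run_functional_tests(system, test_cases):
    """Run predefined test cases against a function under validation
    and record every discrepancy for formal resolution."""
    discrepancies = []
    for name, inputs, expected in test_cases:
        actual = system(*inputs)
        if actual != expected:
            discrepancies.append(
                {"test": name, "expected": expected, "actual": actual})
    return discrepancies

def subcooling_margin(t_sat, t_coolant):
    # Hypothetical stand-in for a validated plant calculation.
    return t_sat - t_coolant

cases = [
    ("margin_normal", (345.0, 300.0), 45.0),
    ("margin_zero", (345.0, 345.0), 0.0),
]
issues = run_functional_tests(subcooling_margin, cases)
# An empty discrepancy list means all predefined tests passed;
# any entries would be handled through formal change notices.
```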
After completion of the design, in the form of detailed specifications of hardware and computer codes, system validation is needed to confirm that the integrated hardware and software perform correctly. The definition of functions given in the original requirement for the system must be taken and interpreted into the form of detailed functional tests, test instructions and expected test results. The computer system must then be systematically tested against those requirements, and the results recorded and analyzed. Any performance discrepancies should be formally recorded and corrected through change notices, in accordance with the QA procedures. 5.3
Configuration Management and Change Control
An important concern for control room systems is the accuracy and correctness of the data which they use as input. Techniques must be implemented which ensure that the correct data are being accessed and used. Information sources and documents, such as plant drawings, plant models, computer-aided design databases, equipment descriptions and procedures, must be kept up to date. On-line real-time data should be time stamped and checked to ensure that the correct parameter and time step are used. Similarly, plant archival and trend data should be checked to ensure that the correct data are being used. Software configuration control is also important, to assure that the proper version is being utilized.
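Such input checking can be sketched briefly in Python. The tag name, engineering limits and staleness threshold below are hypothetical; a real system would take them from the plant database.

```python
def validate_sample(tag, value, timestamp, limits, now, max_age_s=5.0):
    """Reject stale or out-of-range input before it reaches
    display, diagnostic or control functions."""
    lo, hi = limits[tag]
    if now - timestamp > max_age_s:
        return False, "stale data"
    if not lo <= value <= hi:
        return False, "out of range"
    return True, "ok"

limits = {"RCS_PRESSURE": (0.0, 18.0)}   # MPa; hypothetical bounds

ok, reason = validate_sample("RCS_PRESSURE", 15.5, timestamp=100.0,
                             limits=limits, now=102.0)
# A fresh, in-range sample is accepted; a sample older than max_age_s,
# or outside its engineering limits, would be rejected before use.
```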
The importance of supplying correct information to control room systems cannot be overstated. These systems will perform control and safety functions which affect the plant directly. They will also perform monitoring, display, diagnostic and decision aid functions, whose output will be used by the plant staff to make their decisions for operating the plant. If the input to the control room systems is not correct and accurate, then the output of these systems will be faulty and the wrong actions will be taken.

6. FUTURE TRENDS

There is a continuing trend in the nuclear power industry to apply computer technology to various engineering, maintenance and operations functions. The next generation of nuclear power programs has placed significant emphasis on the appropriate use of modern technology for the human-machine interface system. Considerable experience has been accumulated regarding computerization of control rooms. This experience provides opportunities to improve fundamentally the safety and operability characteristics of nuclear plants. Breakthroughs in information and communication technology provide a real opportunity and challenge to exploit these capabilities in a manner that will benefit the operation of nuclear power plants. Computerization in existing plants will bring powerful computing platforms and will make possible the integration of new sophisticated applications with the basic process monitoring systems in the control room.
Specifically, the integrated control room will include the following aspects: diagnostic and monitoring functions for operations staff; operator aids and advisory systems; dynamic plant monitoring information systems and information management, such as electronic document management, automated procedures and plant equipment databases; a human-machine interface environment which is common to all systems and allows the integration of all capabilities rather than using several different human-machine interfaces; incorporation of human factors engineering and human reliability in the design and development of systems for plant operation; and communication of real-time plant information to off-site personnel for dispatch or monitoring functions. Some specific trends are discussed in more detail in the following sections.

6.1 Diagnosis and Prognosis Systems

Control room operators will be equipped with diagnosis and prognosis systems which integrate a broad knowledge of the dynamic behaviour of processes and systems, disturbance sequences and failure mechanisms with recovery methods, operating procedures, etc. These systems will provide easy access to the stored knowledge and the ability to explain to the user how solutions and answers have been reached. One of the most important enhancements to diagnosis and prognosis systems is to ensure reliable and accurate signal input into the systems. This calls for the development of on-line simulation models with the ability to separate process anomalies from sensor faults, filter techniques to connect the models to the process, and the coupling of simulation with cause-consequence relationships.

6.2 Distributed Systems

The use of distributed on-line systems, with input and output equipment local to the plant and connected by local area network methods, will increase and will integrate data collection, control room controls and automatic closed-loop control functions. They
will operate autonomously, and provide their information to other systems for on-line display and operator support in the control room as well as for off-line engineering analysis. The advantage of such systems is the reduction in cabling and the provision of comprehensive information, while preserving total electrical and physical isolation through optical fibre.

6.3 Communication Systems

Communication systems will play a key role in a power plant, inter-connecting controllers, plant monitoring and data acquisition systems, and engineering, maintenance and operator workstations and diagnostic systems for plant operators. These networks can best be integrated using the widely recognized Open Systems Interconnection standards of the International Organization for Standardization.

6.4 Fault Tolerance Design

Fault-tolerant design for hardware and software will become common practice in control room systems, to satisfy the requirement of high reliability and availability. A fault-tolerant design will protect against single failures: it will be able to identify faults as they occur, isolate them, and continue to operate. Since the failure modes of software are different from those of hardware, fault-tolerant design for software must consider failures caused by common-mode effects which may be introduced during design or programming. Since the assurance of fault-free software is difficult to achieve even with formal verification and validation, software fault tolerance may require functional diversity: duplicate software designed, programmed, verified and validated independently and separately. Alternatively, fault-tolerant software may require a backup system or a fail-safe option, so that if the software fails, it fails to a safe state as the overall system requires.

7. CONCLUSIONS

In summary, computerization of nuclear power plant control rooms will increase and will bring significant advantages in safety and economy.
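The single-failure protection described in Section 6.4 often rests on majority voting among redundant channels, falling back to a safe state when the channels disagree. The sketch below is a minimal, hypothetical illustration of 2-out-of-3 voting with a fail-safe fallback; the agreement tolerance and the fail-safe value are invented for the example.

```python
def vote_2oo3(readings, tolerance=0.5):
    """2-out-of-3 voting: accept the median when at least two
    channels agree within tolerance; otherwise report no agreement."""
    a, b, c = sorted(readings)
    if b - a <= tolerance or c - b <= tolerance:
        return b              # median of the agreeing channels
    return None               # channels disagree

def control_output(readings, fail_safe=0.0):
    """Fall back to a predefined safe value when voting fails."""
    voted = vote_2oo3(readings)
    return fail_safe if voted is None else voted

healthy = control_output([10.1, 10.2, 10.15])   # -> 10.15 (median)
faulted = control_output([10.1, 55.0, 97.3])    # -> 0.0 (fail-safe)
```

A single faulty channel is thus outvoted by the two healthy ones, while a gross disagreement drives the output to the safe state, matching the fail-safe behaviour described above.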
Computerization has great importance both for existing plant upgrades and for future plants. The study leads to the following conclusions: 1. The integration of human factors knowledge and practices with new information system technology is leading to significant improvement in the nuclear power plant human-machine interface. 2. The control room computerization effort has recognized the need to accommodate station staff operating philosophy, procedure implementation principles, operations work control and organization, and personal communication among the plant operations staff. 3. The use of computer systems for on-line display of plant state to the operators is now common, and retrofit systems with improved performance are being implemented. The flexibility of color video display units and the use of structured hierarchies of displays have overcome most of the problems of early systems. Such systems have the major advantage of providing information on the complete plant, with the added ability to present summary and calculated information to operators. 4. Fault-tolerant digital control systems have significant advantages and have been successfully applied in many nuclear plants. These controllers use redundant microprocessors and signal validation methods, and provide wide-range algorithms with better optimized performance and higher reliability than the previous analogue controllers. They have been shown to reduce plant outages and trips, and to reduce safety challenges to the plant. 5. Computers are being used increasingly to provide integrated and multiplexed operation of control room controls, with connection to the plant equipment using
local area network systems. This provides significant advantages of cable reduction, space reduction and improvement of safety. 6. Computer-based systems for protection have been implemented successfully in several countries. The lessons learned from the extensive verification and validation required for assurance of software accuracy and integrity have stimulated efforts in the production of high-quality software to address regulatory and licensing concerns. 7. The flexibility of computers for information processing has resulted in their application for diagnosis and prognosis purposes, such as disturbance analysis, success path monitoring, computerized operating instructions, vibration monitoring, etc.

REFERENCES

1. Taylor, J.J., Sun, B.K.H., Application of Computers to Nuclear Power Plant Operations, Nuclear News, October (1990), pp. 38-40.
2. Computerization of Operations and Maintenance for Nuclear Power Plants, IAEA-TECDOC-808, IAEA, Vienna, July (1995).
3. Safety Implications of Computerized Process Control in Nuclear Power Plants, IAEA-TECDOC-581, IAEA, Vienna (1991).
4. Control Rooms and Man-Machine Interface in Nuclear Power Plants, IAEA-TECDOC-565, IAEA, Vienna (1990).
5. Control Room Systems Design in Nuclear Power Plants, IAEA-TECDOC-812, IAEA, Vienna (1995).
6. The Role of Automation and Humans in Nuclear Power Plants, IAEA-TECDOC-668, IAEA, Vienna (1992).
7. Human Factors Guide for Nuclear Power Plant Control Room Development, EPRI NP-3659, EPRI, Palo Alto (1984).
8. Computer Based Aids for Operator Support in Nuclear Power Plants, IAEA-TECDOC-549, IAEA, Vienna (1990).
9. Functional Design Criteria for a Safety Parameter Display System for Nuclear Power Stations, Standard IEC-960, Geneva (1988).
10. Computerized Support Systems in Nuclear Power Plants, IAEA-TECDOC-912, IAEA, Vienna, October (1996).
11. Programmed Digital Computers Important to Safety for Nuclear Power Stations, Standard IEC-987, Geneva (1989).
12.
Nuclear Power Plants - Instrumentation and Control Systems Important to Safety - Classification, Standard IEC-1226, Geneva (1993).
13. Safety Related Instrumentation and Control Systems for Nuclear Power Plants: A Safety Guide, Safety Series No. 50-SG-D8, IAEA, Vienna (1984).
14. Design for Control Rooms of Nuclear Power Plants, Standard IEC-964, Geneva (1989).
15. Guidelines for Control Room Design Reviews, NUREG-0700 (1981).
16. Nuclear Power Plants - Control Rooms - Operator Control, Standard IEC-1227, Geneva (1993).
17. Quality Assurance Organization for Nuclear Power Plants: A Safety Guide, No. 50-SG-QA7, IAEA, Vienna (1983).
18. Establishing the Quality Assurance Programme for a Nuclear Power Plant Project: A Safety Guide, No. 50-SG-QA1, IAEA, Vienna (1987).
19. Code on the Safety of Nuclear Power Plants: Quality Assurance, No. 50-C-QA, IAEA, Vienna (1988).
20. Manual on Quality Assurance for Installation and Commissioning of Instrumentation, Control and Electrical Equipment in Nuclear Power Plants, Technical Reports Series No. 301, IAEA, Vienna (1989).
21. Control Points for Reactor Shutdown with Access to Main Control Rooms, Supplementary Standard IEC-965, Geneva (1989).
22. Software for Computers in the Safety Systems of Nuclear Power Stations, Standard IEC-880, Geneva (1987).
23. Standard for Software Configuration Management Plans, IEEE-828 (1983).
CONSEQUENCES OF CHERNOBYL: A VIEW TEN YEARS ON
A. Borovoi and S. Bogatov
Russian Research Centre "Kurchatov Institute", 123182, Kurchatov Square, Moscow, Russia.

1. INTRODUCTION

On the night of April 26th, 1986, at 1:23, the mistakes of the personnel operating Unit 4 of the Chernobyl Nuclear Power Plant (ChNPP), multiplied by the mistakes of the reactor designers, resulted in the biggest accident in the history of atomic energy. The explosion completely demolished the active core and the upper part of the reactor building. Other structures were significantly damaged. The barriers and safety systems protecting the environment against the radionuclides in the nuclear fuel were destroyed. Radioactive release at a level of around 10¹⁶ Bq per day continued for ten days (26.04.1986 to 06.05.1986). After that, the release rate became a thousand times less and continued to decrease gradually. When the active phase of the accident finished, it was clear that a relatively small fraction of the nuclear fuel (~ 3.5 %) and a significantly larger (by an order of magnitude) fraction of the volatiles had been released. While fuel contamination defined the radiation situation near the ChNPP (within a 30 km zone), the volatile releases were the source of the long-term contamination for a surrounding area of many thousands of square kilometres. The Chernobyl accident has influenced, to a certain extent, the lives of millions of people. Hundreds of thousands were evacuated from the contaminated areas; a further hundred thousand participated directly in the creation of the "Sarcophagus" (Russian Ukritiye) over the damaged Unit 4. Many were involved in the decontamination work at the site adjacent to the ChNPP. Others were engaged in activities to prevent the contamination of the Pripyat and Dniepr Rivers. Ten years have elapsed since the accident. Work on the mitigation of its consequences has not stopped for a single day. Nevertheless, as the twentieth century passes into the twenty-first, many problems of Chernobyl remain unsolved.
In our opinion there remain three main problems: Sarcophagus safety; Revival of the area for residence and operation;
Advances in Nuclear Science and Technology, Volume 25 Edited by Lewins and Becker, Plenum Press, New York, 1997
A. BOROVOI AND S. BOGATOV
Long-term medical consequences for the irradiated population and for the "liquidators", the participants in post-accident mitigation activities.

About 180 te of irradiated uranium is still in the Sarcophagus, at present containing more than 750 GBq of radioactivity. This radioactivity can be hazardous to the environment. The extent of the hazard is not yet fully understood, owing to inadequate information about the properties of the Sarcophagus, but it is clear that the hazard increases with time. It is necessary to transform this temporary storage of nuclear and radioactive materials into an ecologically safe system. That is the problem of Sarcophagus safety. Huge resources have already been spent in cleaning up and remediating contaminated areas, but the success attained so far is no more than meagre. Until recently, people were still being evacuated from the contaminated areas; the evacuated area is now being widened and people are still being moved from their homes. What can be done to allow people to return and resume their normal lives is one more unsolved problem. In the framework of the International Chernobyl Project (ICP), experts from dozens of scientific institutions all over the world carried out extensive work in an attempt to clarify the medical consequences of the accident. Their modest results were contradicted by some specialists (and even more by some non-specialists) in Belarus, Ukraine and Russia. One of the arguments was that screening was carried out only on the population of the contaminated areas and not on the liquidators working at the site of the ChNPP; this group was not considered by the ICP and, consequently, was not studied. This is only one example of the medical problems still to be solved. The problems mentioned above are far from all being within the competence of the authors of this review. Sarcophagus safety is our main field of interest.
Remediation problems were looked at from time to time, and medical problems were studied only insofar as we were involved in the preparation of input data for the medical professionals working with us. With this caveat, we invite the interested reader to weigh the significance of our conclusions in this review.
2. THE ACCIDENT

Before the accident

In the 1960s, the programme of rapid development of atomic energy in the USSR encountered a significant obstacle. The pressurised water reactor (the so-called VVER-type reactor) required the production of a very large, hardened containment or pressure vessel, but Soviet industry at the time was not able to produce such vessels, or a number of other necessary components. This was the reason that another type of Nuclear Power Plant (NPP) was developed, one without the necessity for such containment. This type of reactor was called RBMK, the Russian abbreviation of "channel-type reactor of high power". The principle of this type of reactor, in which graphite is used as a moderator and boiling water as a coolant, had been well known since the first Russian NPP was put into operation at Obninsk. There was also some two dozen years of operating experience, for uranium-
graphite military reactors had been used for plutonium production. The development of a new type of power reactor was therefore completed rather quickly. The active core of the RBMK-1000 (1000 corresponds to the electric power in MW) is a cylinder 7 m high and 11.8 m in diameter, composed of graphite blocks. 1661 vertical channels of internal diameter 80 mm, made of zirconium-niobium alloy, pass through the blocks. Heat-generating fuel assemblies, composed of fuel rods, are situated within the channels. The total amount of nuclear fuel in the reactor was 190.2 te of uranium, enriched in the isotope ²³⁵U. Water comes in from below the channel, washes around the assemblies and is heated to boiling point. The steam generated, having been separated from the water, goes to the turbine. As its energy is transferred, the steam is condensed and pumped back into the reactor. To control and stop the reactor, there are 211 control rods. To stop the reactor, 187 control rods are introduced from above into the active core through special channels. Another 24 shortened control rods, intended to smooth the axial energy distribution, are introduced into the active core from below. Without considering the reactor design in more detail, let us note that a kind of "delayed-action mine" was put into the design, and that "mine" finally caused the accident. The scientific name of this mine is "Positive Reactivity Feedback". How this happens may be explained as follows. Water plays a double role in the RBMK design: it is simultaneously a neutron absorber and a moderator. But the dominant physical effect of the water is neutron absorption rather than moderation; that is why graphite is used as the basic neutron moderator. When boiled to steam, the water reduces its density, so that fewer neutrons are absorbed. Thus the rate of nuclear fission increases, together with the heat generation. The temperature increases, accelerating steam generation, and so on.
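The runaway loop just described (more power, more steam voids, more reactivity, still more power) can be illustrated with a toy numerical model. This is an illustration of the feedback principle only, with invented coefficients; it is in no way a physical calculation for the RBMK.

```python
def simulate_runaway(steps, base_gain=0.05, void_feedback=0.02):
    """Toy positive-feedback loop: each step, the reactivity added by
    steam voids grows with power, and power responds to that reactivity."""
    power = 1.0                 # relative power, 1.0 = nominal
    history = [power]
    for _ in range(steps):
        reactivity = void_feedback * (power - 1.0)  # voids grow with power
        power *= 1.0 + base_gain + reactivity       # power rises ever faster
        history.append(power)
    return history

h = simulate_runaway(20)
# Power grows faster than exponentially: each increase in power
# strengthens the very feedback that drives the next increase.
```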
Positive Feedback, or "reactivity runaway", occurs, resulting in a rapid increase in the power of the reactor. A significant feature of the Chernobyl reactor was that the Positive Feedback effect increased with the time of operation, as the fuel burnt up. Did the designers of the RBMK know about this? Yes, they did, and they tried to provide the means not to lose control of the reactor. These means were described in the Technical Regulation Rules, a document to be executed by personnel under all circumstances. In order not to allow reactivity runaway to evolve rapidly, and to allow the operator to undertake the necessary actions, at least 26-30 control rods always had to be situated within the active core. In special cases, in accordance with a special order of the NPP administration, it was permitted to operate with a smaller number of control rods; if the number of control rods fell to 15, the reactor had to be stopped immediately. The Technical Regulation Rules also considered a mode of operation in which the power level might for some time be less than 50 % of the design level. In this case the reactor enters its so-called "xenon poisoning" state, and a subsequent power increase is allowed only when at least 30 of the control rods are within the core. Otherwise the reactor must be stopped until the xenon poisoning has passed; the time necessary for the decay of the xenon-135 is some 48 hours. In an emergency, the reactor could be stopped by the same control rods, tripped into the core by the emergency protection system at a rate of 0.4 m/s, taking 18-20 s in total. The second "mine" lay here.
Inserted from above, the control rods had an absorbing part about 6 m long and a graphite displacer 4.5 m long situated at the end of the rod. When a control rod was completely withdrawn, its graphite displacer was located at the middle of the core. When the rods were first inserted into the core, the graphite displacer (a neutron moderator) first displaced the water column (a neutron absorber) located beneath it. For several seconds, therefore, the effect opposite to the one desired took place: the absorbing water was removed and moderator was inserted. As a result, the power in the lower part of the reactor began to increase until the absorbing part of the rod had come down. The effect of this positive reactivity overshoot was amplified when many control rods were lowered simultaneously. However, the temporary power increase was not considered dangerous when sufficient control rods remained in the core. This unpleasant feature of the reactor had been familiar to the designers since the first Unit of the Ignalina NPP (Lithuania) had been commissioned, 13 years before the accident. However, the effect was underestimated and no protective measures were undertaken.

Causes and development of the accident

On April 26th 1986, it was planned to stop Unit 4 of the ChNPP for maintenance. During the stoppage, some tests were planned to clarify certain questions of reactor safety. We will not spend time describing the test programme or discussing its necessity, as disputes about it continue to this day. Note simply that the fatal mistakes made by the personnel during the test resulted in the two built-in defects blowing up the reactor.
First, xenon poisoning occurred, owing to the long period of reactor operation at the 50 % power level (Fig. 1). Then, at 00:28 (less than an hour before the accident), the power dropped almost to zero as a result of operator mistake or technical failure. At this time the number of control rods in the core was less than 30 and, in accordance with the regulatory rules, the reactor should have been stopped and the test put off. Instead, the power was raised in order to perform the test. This was done by withdrawing a number of the operating control rods; subsequent calculations showed that only 7-8 control rods remained in the core. Protection against Positive Feedback was thereby reduced to impermissible levels. At the same time, the automatic emergency reactor stoppage system had been disconnected so that the tests could go ahead. At the beginning of the test, at 01:23:04, the power level of the reactor, brought to an extremely unstable condition, suddenly increased to 115 % of nominal and continued to rise. At 01:23:40 the rod drop button (EPS-5) of the emergency protection system was pressed to shut down the reactor. This was the moment when the second "mine" was fired. Graphite displacers displaced the water in the control channels, and the neutron flux, along with the heat generation in the lower part of the reactor, increased. Steam generation increased along with the power level, and the growing steam content in turn added further reactivity. Control rod lowering proceeded slowly, and the Positive Feedback had time to manifest itself to a large extent. According to different assessments, the power level increased by a factor of between 3.5 and 80. Two explosions followed each other within a few seconds; Unit 4 of the ChNPP ceased to exist. Disputes about the development of the accident continue, but for the most part scientists follow the version presented in broad outline here.1-3

Safety improvements for RBMK reactors undertaken after the accident

What was to be done after the accident?
The two major questions were whether to shut down the RBMKs immediately, and whether to try to eliminate the shortcomings of the RBMK reactors elsewhere. These questions had not only a technical but also an economic basis, so let us quote some numbers.3 There were 14 operating Units of the RBMK-1000 in the USSR (10 in Russia and 4 in the Ukraine) at the moment of the accident, and another Unit of the RBMK-1500 in Lithuania. Their power was 50 % of the total power of all the other NPPs. Several years after the accident, another Unit of the RBMK-1500 in Lithuania and one Unit of the RBMK-1000 in Russia were put into operation. Ten years after the accident, in 1996, the total energy produced by nuclear energy in Russia was more than 1800 million MW*h. In 1995, NPPs generated ~ 12 % of the total electric power in the country, the share of the RBMKs being 55 % of nuclear energy production. These reactors generated, and still generate, the cheapest electrical energy. The forecast of energy production for the RBMK reactors to the year 2020 was an additional 1700 million MW*h, with a total value of $68 billion ($40 billion from the Russian reactors) at an electricity price of 4 ¢ per kW*h. Safety improvements for these reactors were assessed to cost an order of magnitude less. That is why shutting down the RBMKs was regarded as unacceptable. Safety improvements have been made as follows: first, administrative ones, prohibiting operation in conditions similar to those at Chernobyl; second, technical modifications of the reactor, reducing the Positive Feedback, in particular during the control rod trip, and creating a rapid emergency system for reactor shutdown.
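The forecast value quoted above can be checked by simple arithmetic (1 MW*h = 1000 kW*h); the figures below are exactly those given in the text.

```python
energy_mwh = 1_700_000_000       # forecast additional RBMK generation, MW*h
energy_kwh = energy_mwh * 1000   # 1 MW*h = 1000 kW*h
value_dollars = energy_kwh * 4 // 100   # at 4 cents per kW*h
# value_dollars is 68_000_000_000, i.e. the quoted $68 billion
```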
A. BOROVOI AND S. BOGATOV
Reduction of the Positive Feedback was achieved by inserting eighty to ninety additional absorbers into the active core in place of fuel assemblies and by increasing the minimum number of control rods (43-48) required to be in the core during operation. Further reduction of the Positive Feedback was provided by an increase in the fuel enrichment (2.4 % uranium-235 instead of the previous 2 %). The control rods were modified so that the graphite column could not enter the lower part of the channel; the effect of positive reactivity due to lowering of the graphite displacers was thereby eliminated. The automation was modified to reduce the time of total control rod insertion into the core from 18 s to 12 s, and a rapid emergency system with an operation time of 2.5 s was created. Other safety improvements have been made and are being planned. In the reports submitted to the international IAEA forum One decade after Chernobyl: nuclear safety aspects2,3, it was declared that "... objective indices for NPPs with the operating RBMK reactors witness a safety level comparable with Western NPPs built at the same time."

First countermeasures undertaken and their efficacy

Let us come back to the first days after the accident. Hundreds of urgent questions had to be answered; the main problems had to be identified and formulated appropriately. As far as the nuclear fuel was concerned, there were three types of associated hazard: nuclear, thermal and radiation. The nuclear hazard is usually understood as a self-sustaining chain reaction (SCR). It can occur only if several conditions are met, the main one being a large enough accumulation of fuel in some space together with the availability, within the fuel containing material (FCM), of a neutron moderator (graphite, water, etc.). An SCR could take place, for example, in the remaining reactor structure, if that structure had been preserved after the explosion(s). As a matter of fact, an SCR is possible in approximately 1/10 of the active core if the control rods are removed.
How dangerous would the consequences of an SCR within the damaged Unit 4 have been? For a long time this danger was overestimated, and it continues to be overestimated to the present day. At first this was due to mistrust of the specialists' assertions (the Chernobyl accident itself did not foster trust); later it was due to personal interests and mass-media misinformation. The term "nuclear hazard" is associated by ordinary people with a nuclear explosion and, consequently, with a huge flash of light and a shock wave. Nothing of the kind was expected inside Unit 4. In the case of an SCR the fuel would be heated, the dangerous configuration dispersed and the reaction stopped. The main hazard in this case would be the release of the radioactivity generated during the operation of this "self-made" reactor. All estimates showed that such a release could not be comparable to (would be at least 1000 times smaller than) the one that had taken place during the initial accident. But, having been influenced by the catastrophe, the members of the Governmental Commission did not trust such predictions. That is why, on the first day after the accident, attempts were made to measure neutron fluxes near the ruins of the Unit; it was supposed that large neutron fluxes would indicate continuing reactor operation. The measurements failed, but the attempts resulted in harmful exposures for the men participating. As well as the nuclear hazard, a thermal one (the so-called China Syndrome) caused fear. This term, taken from the movie of the same name, implied that molten nuclear fuel, heated by nuclear decay energy, would flow down, burning through the floors of the building, and would reach and contaminate the subsoil water.
CONSEQUENCES OF CHERNOBYL
Finally, the radiation hazard became greater with each hour, as every puff of smoke spread the radioactivity to new areas. It was necessary to create barriers against all the menaces mentioned. Resolute measures involving thousands of people were undertaken. Over the past decade accounts of these measures have appeared in many publications, but their efficacy was nowhere estimated taking into account the material resources spent and the collective exposure of the participants. There are first estimates of the "benefit" of the countermeasures4,5 as well as a more detailed analysis6. We would like to discuss here only two of the actions taken. Dumping of different materials into the reactor compartment started on April 27, 1986. Part of the material consisted of boron compounds, mainly boron carbide, which acted as a neutron absorber to ensure nuclear safety. Another part (clay, sand, dolomite) was intended to create a filtering layer and to reduce the release of radioactivity. In addition, dolomite in the high temperature regions was able to generate carbon dioxide during its decomposition and thus provide a gas cover preventing oxygen access to the burning graphite. Finally, the last part (the lead) was to absorb the heat being generated. From the 27th of April to the 10th of May, 5000 te of different materials were dumped, including 40 te of boron carbide, 800 te of dolomite, 800 te of clay and sand and 2400 te of lead. Dumping of materials continued after the active phase of the accident had finished. According to the registers of the helicopter crews, about 14 000 te of solid materials, 140 te of liquid polymers and 2500 te of trisodium phosphate had been dropped by June 1986. In accordance with the initial plan, the active core was to be gradually covered by a loose mass; that would reduce the radioactivity release but make heat removal harder.
According to expert calculations, the simultaneous influence of these two mechanisms would at first reduce the release; the release would then increase for a short time (hot gases breaking through) and finally stop. For many reasons it was very difficult to measure the activity release adequately; the measurement errors were too high. Nevertheless, the measurements showed first a reduction in the release and then an increase, and finally, on the 6th of May, the release became a hundred times smaller. It seemed that practice had confirmed the theory. This was considered true for three years, and in some works7 it continues to be asserted today. But it became clear to the scientists working at Unit 4 in 1989-1990 that the bulk of the material had not got into the reactor at all4,5. Let us consider the facts. The first is the picture of the former Central Hall of the reactor. It is filled with the dropped material, which formed mounds many metres high. This could be seen from the helicopters before completion of the Sarcophagus and was confirmed by groups of carefully prepared scouts who reached the Hall. In fairness, though, this still does not exclude the possibility that a significant part of the materials reached the reactor aperture. Second, in the middle of 1988 it became possible, with the help of optical devices and TV cameras, to see inside the reactor shaft. None of the dropped materials were detected there. One can argue that the materials, on reaching areas of high temperature, melted and flowed down through the lower rooms of the Unit. Indeed, masses of solidified, lava-like material ("lava") containing the nuclear fuel were detected on the lower floors. Third, the presence of lead in the lava would indicate that it consisted not only of the reactor materials (concrete, uranium, steel, etc.) but of the dropped materials as well. Yet no lead was found within the reactor or beneath it, whereas 2400 te had been dropped from the helicopters.
Study of dozens of samples made it clear that the lava contains only negligibly small amounts of lead (less than 0.1 wt %), i.e. there is less than about 1 te of lead in the lava, which weighs about 1300 te.
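The lead mass balance can be checked in a few lines; the ~1300 te total lava mass is the estimate given later in this chapter, and the other numbers are from the text.

```python
# Upper bound on the lead that can be hiding in the lava, given the
# < 0.1 wt % measured in samples and the ~1300 te total lava estimate
# quoted later in this chapter.
lava_mass_te = 1300.0
lead_weight_fraction = 0.001          # < 0.1 wt %
lead_in_lava_te = lava_mass_te * lead_weight_fraction
lead_dropped_te = 2400.0              # lead dropped from helicopters
print(round(lead_in_lava_te, 2))                        # -> 1.3 te at most
print(round(lead_in_lava_te / lead_dropped_te * 100, 3))  # -> 0.054 % of what was dropped
```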
This means that virtually no lead went into the reactor. There were suppositions that the lead had evaporated, but soil samples showed no significant lead content. Therefore the dropped materials, even if some of them did get into the reactor, could not have influenced the release materially. Those are the known facts. What prevented the pilots from fulfilling their orders? It is possible that a bright spot (burning graphite) glowed near the reactor aperture; it could easily have been taken by non-specialists for the reactor mouth. This is discussed in the work of Dr. Sich6, and we simply present a figure from that work (Fig. 2). Maybe the upper biological shield of the reactor (its upper lid), thrown up by the explosion and landing in an almost vertical position, together with the hundreds of steel tubes attached to the lid, played the role of a "shield" which "reflected" the dropped material. It is difficult to judge now. However, dumping the materials into the Reactor Hall cannot be regarded as useless. Boron-containing materials have been found in the Central Hall, where many active core fragments as well as fuel dust were thrown. Landing on the fuel, the material ensured its nuclear safety. Sand, clay and dolomite created a thick layer over the radioactive debris and helped to make the subsequent work of the builders and scientists safer. Some small part of the materials could still have got into the reactor and facilitated lava formation. It took three years, however, to accumulate and appreciate these facts; no one was able to foresee them in May 1986. A second example of post-accident misunderstanding concerns the perceived need to provide a forced cooling system. To prevent the China Syndrome, a heat exchanger was built under the basement of the Unit in a very short time (by the end of June) under harsh working conditions. Forced cooling was provided for the concrete slab. It was supposed at the time that if the hot nuclear fuel got onto the concrete, it would interact with
it like a hot iron thrown onto ice. In reality the fuel, melting the surrounding materials, began to dissolve in the melt, thereby reducing the specific heat generation of the mixture - as if the iron were soluble in water, like salt. As a result, only one floor was burnt through, in the room immediately under the reactor, while three more floors remained between the melt (corium) and the earth's surface. The real hazard of the China Syndrome, however, could not be estimated with the knowledge available in May 1986.

Nuclide inventory of the reactor

The immediate questions that the Governmental Commission had to answer were as follows: what type and amount of radionuclides were stockpiled in the reactor at the time of the accident; how much nuclear fuel and how many radionuclides were thrown out of the damaged Unit, and how much remained inside? It was clear that without information on the nuclide inventory and the radiation from the nuclear fuel, it was impossible to assess either the radiation hazard of the release or the nuclear and thermal hazard of the fuel inside the Unit. Usually, the appropriate calculations are carried out during the reactor design process, but for Unit 4 they either had not been carried out or were not accessible even to the Governmental Commission. During the first days, the general calculations of the Moscow Physical Engineering Institute for uranium-graphite reactors were used. Then the crude results of our group at the Kurchatov Institute were used, as they had been made specifically for the Chernobyl reactor. At the next step, calculations were carried out in which the nuclide accumulation history was considered for every one of the 1659 fuel assemblies that had been in the reactor before the accident8. Finally, inventory calculations were carried out taking into account the neutron field inhomogeneity inside the core during the reactor operation time.
This is significant, for example, for the transuranics, which are generated by consecutive neutron captures9. Other scientific groups carried out similar calculations, of which three are worth noting10-12. The first is the work of E. Warman10, who made his assessments on the basis of the Soviet reports to the IAEA. The second is an article by G. Kirchner and C. Noak11. These authors had no detailed information on fuel burnup and neutron fields in the core; nevertheless they were able to extract the necessary information from the ratios of nuclide activities in the hot particles detected near Munich. The work of A. Sich12 is based on the same data as the Russian work9 and uses the same calculation method; that is why the results are close to each other and practically identical for the radiologically significant nuclides. A comparison of the results mentioned above is given in Table 1. Which of the radionuclides enumerated are the most hazardous, and for the longest period of time? Among the gamma emitters it is caesium-137. Its half-life is 30 years, which means its activity becomes an order of magnitude smaller only 100 years later. Activities of the main gamma emitting nuclides in the fuel are shown in Fig. 3; seven years after the accident, virtually only caesium-137 retained notable significance. Number one among the pure beta-emitters is strontium-90. The alpha-emitters have changed their leader over these years and will have changed it again within the next ten. At first it was curium-242; then the plutonium isotopes became the most intensive alpha emitters. However, one of the plutonium isotopes, plutonium-241, is transformed by beta-decay into americium-241, itself an alpha emitter. As it accumulates, this isotope surpasses the alpha-activity of the plutonium isotopes (Fig. 4).
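The plutonium-241 to americium-241 shift described above can be illustrated with the Bateman in-growth equation. The half-lives below are standard values and the initial activity is arbitrary, so this is a sketch of the mechanism, not a reconstruction of the Chernobyl inventory.

```python
import math

# Illustrative sketch of the americium-241 in-growth described above.
# Half-lives are standard values; the initial plutonium-241 activity
# is arbitrary, so this shows the mechanism, not the actual inventory.
T_HALF_PU241 = 14.35    # years (beta decay)
T_HALF_AM241 = 432.6    # years (alpha decay)
LAM_PU = math.log(2) / T_HALF_PU241
LAM_AM = math.log(2) / T_HALF_AM241

def am241_activity(a_pu0, t_years):
    """Alpha activity of americium-241 grown in from an initial
    plutonium-241 beta activity a_pu0 (Bateman equation, assuming
    no americium is present at t = 0)."""
    return a_pu0 * LAM_AM / (LAM_AM - LAM_PU) * (
        math.exp(-LAM_PU * t_years) - math.exp(-LAM_AM * t_years))

# The americium alpha activity keeps growing for decades after the
# accident (its maximum falls at roughly 70 years):
print(am241_activity(1.0, 10) < am241_activity(1.0, 50))   # -> True
```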
Active phase of the accident (26.04.86 - 05.05.86). "Lava" formation

After the explosion and the loss of all means of surveillance, the damaged reactor became for a time unobservable, and we can only examine the final result of many multi-stage physical and chemical processes. Besides the natural complexity of these processes, their understanding is complicated to a large extent by a number of post-accident activities. The following hypothesis seems to us the most comprehensive, encompassing all the observed facts. 1. The first explosion, which had a clear enough mechanism, resulted in the dispersal of the nuclear fuel and a rupture in the lower part of the fuel channels. The second explosion is supposed to be due to the loss of water in the active core. After these two explosions, the active core was a roughly homogeneous mixture of fuel fragments, zirconium and graphite. About 90 te of the nuclear fuel within the reactor shaft took part in the subsequent lava formation. 2. During the following days, three main processes took place in the core: filtration of atmospheric air from the lower rooms through the gap left after the lowering of the reactor basement plate (BP); oxidation of graphite and fuel fragments in the air, resulting in the
fuel being dispersed into micro-particles; and finally, as a result of the graphite burning up, descent and compaction of the fuel onto the reactor basement plate, which increased the specific heat generation of the residual mixture. These processes could increase the release rate both through the rise in temperature of the fuel-containing mixture and through its compaction, which thinned the filtering layer. 3. Finally, the specific heat generation of the mixture became high enough to melt through the damaged south-eastern quadrant of the BP. Serpentinite, the metal of the BP and the lower water communication tubes were involved in the melting process, as well as more than 900 te of sand and concrete. The sand probably came from the filling of the lateral biological shield tank; the concrete came partly from the floor of the sub-reactor room and partly from the wall slabs of the steam separator rooms, presumably thrown into the reactor vault by the blast wave. These three processes led to the following situation. The transformation of the fuel containing materials from a dry form into a liquid one upon melting was, presumably, the main cause of the sharp decrease in the release rate on May 6th (Fig. 5). The lava generated was a melt of silicon-containing materials from the active core environment, incorporating the active core fragments. The average fuel content of the lava is about 7 % (wt), of which 2 % is dissolved in the silicate matrix and another 5 % is associated with micro-inclusions. The total amount of lava is estimated to be about 1300 te.
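The lava figures above are mutually consistent, which is easy to verify:

```python
# ~7 wt % average fuel content in ~1300 te of lava reproduces the
# ~90-100 te of fuel quoted for the lava elsewhere in this chapter.
lava_total_te = 1300.0
fuel_weight_fraction = 0.07   # 2 % dissolved + 5 % micro-inclusions
fuel_in_lava_te = lava_total_te * fuel_weight_fraction
print(round(fuel_in_lava_te, 1))   # -> 91.0, i.e. roughly 90-100 te
```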
3. AREA POLLUTION

Estimates of total radioactive release

In principle, there were three ways to assess the release. The first was to measure the radioactivity as it left the reactor. The second was to identify and measure the radioactive fallout after the accident. The third was to determine the residual activity within the Unit.
Consider the first way. The day after the accident, the first attempts were made to take samples of the aerosols above the damaged Unit and to study their radionuclide composition. Subsequently, air samples were taken regularly, with helicopters or specially equipped aeroplanes, above the reactor as well as at the site adjacent to the Unit. For many reasons (the unstable nature of the release, changes in weather, high radiation fields, work around the damaged Unit, etc.), the accuracy of the measurements was not very good - as can be seen in Fig. 5. At present, attempts are being made to establish a data bank on the releases and to assess their validity. Despite all the intrinsic uncertainties, these first measurements of the release made it possible to obtain a significant result: besides the volatiles (noble gases, iodine, caesium, tellurium and some others), the other radionuclides in the release were bound inside small fuel particles. Ratios of observed to expected (in fuel) radionuclide activities for one of the first air filters, exposed above the damaged reactor (27.04.86), are shown in Table 2.
The ratios are similar (within methodical uncertainties) for all nuclides other than caesium. This result was considered evidence that the caesium release occurred independently of the other nuclides and was significantly more intensive. Subsequent analyses of other air filters and of soil samples confirmed this conclusion. Let us discuss the second possible approach. Large scale studies of area pollution started on the first day after the accident. While the dose rate could easily be measured by dose meters of different types, and the gamma-spectroscopy measurements of the gamma-emitting nuclide composition could be considered adequate, the identification and quantitative measurement of the pure alpha- (Pu) and beta-emitters (Sr) in the fallout needed complicated radiochemical analyses and was unacceptably slow. The method proposed was based on the fact that the nuclides that are difficult to measure are mostly bound in the fuel matrix, where the radionuclide composition is relatively constant. That is why, instead of a long and difficult chemical analysis, it was possible to determine the quantity of only one chosen gamma-emitting nuclide and then to use the known correlations between the activities in the fuel of the chosen
nuclide and the desired nuclide. For example, the following correlation was used to define the plutonium isotope content of samples:

A(Pu) = k · A(Ce-144),

where A(Pu) is the total alpha activity of the plutonium isotopes, k is the correlation coefficient at the moment of the accident, and A(Ce-144) is the activity of the cerium-144 isotope. This radionuclide was chosen for the following reasons: its strong binding to the fuel matrix due to its high evaporation temperature; its relatively long half-life; and its convenient gamma-ray energies and yields. In practice, there was a three-stage system of fallout studies. First, aero-gamma reconnaissance was carried out as a crude first approximation. Second, soil samples were taken and measured by semiconductor gamma-spectrometers to determine the cerium-144 content. Finally, radiochemical analyses were carried out intermittently to confirm the stability of the correlations. It soon became clear that the main part of the fuel fallout had occurred in the nearest zone (about 10 km) around the ChNPP. "Independent" (of the fuel matrix) caesium fallout gave an insignificant contribution to the total pollution in this nearest zone. This is illustrated in Table 3, where the averaged ratios of caesium-137 to cerium-144 activities, corrected to 26.04.86, are presented for different distances from the damaged reactor. The activity ratio deviated from the "fuel" ratio (0.066) only at distances of more than three km.
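The correlation method just described can be sketched as follows. The coefficient k below is purely illustrative (not the value actually used in 1986); the decay correction uses the known cerium-144 half-life of about 285 days.

```python
import math

CE144_HALF_LIFE_DAYS = 284.9   # cerium-144 half-life

def pu_alpha_from_ce144(a_ce144_measured, k, days_since_accident):
    """Estimate the total plutonium alpha activity of a sample from
    its measured cerium-144 activity. k is the Pu/Ce-144 activity
    ratio in the fuel at the moment of the accident, so the measured
    cerium-144 activity is first decay-corrected back to 26.04.86.
    (The value of k used here is illustrative, not the 1986 one.)"""
    decay = math.exp(-math.log(2) * days_since_accident / CE144_HALF_LIFE_DAYS)
    a_ce144_at_accident = a_ce144_measured / decay
    return k * a_ce144_at_accident

# Made-up example: 1000 Bq of cerium-144 measured one half-life
# (284.9 days) after the accident, with k = 0.01:
print(round(pu_alpha_from_ce144(1000.0, 0.01, 284.9), 6))   # -> 20.0 Bq
```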
The data on fuel fallout made it possible to give the first estimate4 of the fuel release out of Unit 4. According to this estimate, about 0.3 % of the total fuel amount was thrown out onto the site around the ChNPP, ~1.5 % within the exclusion zone, ~1.5 % onto other areas of the USSR, and less than 0.1 % fell outside the USSR. The total fuel release according to this estimate was therefore about 3.5 %. This number was reported by the head of the Soviet delegation, V. Legasov, at the IAEA meeting in August 19861. Work on the third way of estimating the release (the assessment of the fuel remaining in Unit 4) was also carried out, hindered by huge radiation fields and the ruins. The first assessment was obtained in August-September 1986, when a programme of thermal measurements on the surface of and around the reactor ruins was completed. By comparing the experimental heat generation with the calculated one, it was possible to conclude that not less than 90 % of the initial fuel amount remained inside (only this bounding assessment was available)14.
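The individual fuel release fractions quoted above do add up to the reported total:

```python
# Fuel release fractions from the text (percent of the total fuel):
on_site = 0.3          # site around the ChNPP
exclusion_zone = 1.5   # within the exclusion zone
rest_of_ussr = 1.5     # other areas of the USSR
abroad = 0.1           # outside the USSR (quoted as "less than 0.1 %")
total = on_site + exclusion_zone + rest_of_ussr + abroad
print(round(total, 1))   # -> 3.4, i.e. "about 3.5 %" as reported
```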
At the present time, on the basis of analyses of hundreds of thousands of samples, it can be regarded as established (at the 0.68 confidence level) that more than 96 % of the initial fuel amount is located within the Sarcophagus (Ukritiye)14.

Features of contamination: Caesium spots

The first release of radioactivity, immediately following the reactor explosion, was directed towards the south-west. Subsequent releases lasted for many days (chiefly the first ten) and formed a radioactive plume, whose elevation was due to the burning graphite and the heating of the active core materials. The most intensive plume was observed 2-3 days after the explosion in a northerly direction, where radiation levels were up to 10 mGy/h on April 27th and 5 mGy/h on April 28th, at an altitude of 200 m, 5-10 km away from the damaged Unit. The plume elevation on April 27th exceeded 1200 m above sea level at a distance of 30 km to the north-west. On the following days15 the plume elevation did not exceed 200-400 m. The basic method of radioactive fallout surveillance was the aero-gamma survey. Figure 6 shows schematically the dose rates (corrected to May 10th) around the ChNPP obtained by aero-gamma survey35. This map was used as the basis for decision making. In accordance with the data, borders for the different zones were defined: the evacuation zone (dose rate (DR) more than 50 µSv/h); the exclusion zone (DR > 200 µSv/h); and a zone of rigid control (DR > 30 µSv/h). To characterise deviations of the fallout composition from that of the fuel, a fractionation coefficient was used:

f(i) = [A(i)/A(Ce-144)]sample / [A(i)/A(Ce-144)]fuel,

where the numerator is the ratio of the activities of isotope (i) and of cerium-144 in the sample and the denominator is the same ratio calculated for the fuel. Clearly the fractionation coefficient equals 1 for pure fuel fallout and exceeds 1 when the sample is enriched in isotope (i). Significant enrichment in the caesium isotopes was observed in the northern direction of the radioactive trace (the Gomel, Zhitomir and Cherkassy regions), as well as on the southern periphery of the western trace (the northern districts of the Zhitomir and Rovno regions). Table 4 shows the fractionation of the radionuclides for typical areas of caesium enrichment (caesium spots)35. The characteristic feature of the radionuclide composition at the caesium spots is a significant enrichment (relative to cerium-144) in virtually all nuclides except cerium and plutonium. The caesium contamination was extremely inhomogeneous, owing both to variations in the
release rate and to the weather conditions during the transport of the radioactivity. One of the main causes of caesium spot formation was "wet" fallout (fallout accompanied by rain). It is emphasised16 that the caesium content of wet fallout was 15-20 times higher than that of dry fallout. Figure 7 shows dose rates at different distances from the ChNPP for wet and dry fallout; it can be seen how much heavier the fallout is in areas where rain occurred.

Problems of caesium fallout estimation

As mentioned above, the fuel fallout is located in the immediate vicinity of the ChNPP; outside the 30 km border it presents no danger. It was the caesium fallout that polluted vast areas far from the ChNPP. The urgent question was (and remains): what total amount of caesium, especially of its radiologically significant isotope caesium-137, was released during the accident? In the Legasov report1 to the IAEA, a release of (13±7) % of the core inventory was presented. That was a first assessment, made before a detailed survey of the contaminated areas had been completed, and it was subsequently modified considerably. Many studies have been carried out in which the caesium and other volatile releases were considered. First we would like to discuss the results of two of them (see Tables 5 & 6), whose authors argue that the volatile release was significantly larger than previously1 reported. The first (E. Warman10) was done in 1987, the second (Ilyin et al.17) in 1990. It will be recognised that the initial release had been significantly underestimated. A specially devised method of estimation was used by us18 in 1990, based on the studies carried out inside the Sarcophagus. As mentioned above, one of the fuel containing material (FCM) modifications is the solidified lava5. It was discovered that only one-third of the caesium remained in the lava, compared to the amount that would be expected from its uranium content.
Thus, having been heated, the deficient caesium was released from the FCM. The other FCM modifications showed the normal caesium-fuel correlation. The total amount of fuel in the lava is estimated to be about 100 te. An estimate of the caesium release can then be made as follows:
The fuel in the lava (about 100 te of the 215 te in the active core) corresponded to roughly 100/215 ≈ 46 % of the core's caesium-137 inventory. It is known that about 66 % of the caesium left the lava; the release was therefore about 0.66 × 100/215, or say 65/250 ~ 30 %, of the core inventory.
[Figure 7: the radiation level is measured in units of the natural background, taken as 0.1 µSv/h; 1 - dry fallout, 2 - wet fallout.] Finally, in 1995, at the conference "Radiological, medical and social consequences of the accident at the ChNPP: remediation of the areas and population", the report presented by Yu. Israel19 gave the total amounts of caesium-137 detected at the nearest trace, in the European part of the USSR outside the nearest trace, and over all of Europe11. These data were obtained by integrating the caesium fallout over all the available pollution maps.

The Iodine Problem

The iodine problem relates first of all to radiation injury of the thyroid gland, especially in children. The main source of this radiation was the isotope iodine-131, with a half-life of 8 days. It was reported by V. Legasov1 that the release of iodine was (20±10) % of the total amount. On the whole, all the assessments made later are more pessimistic, but neither the early assessments nor the subsequent ones have been properly validated. One way to obtain a valid quantitative assessment would be to determine the amount of the long-lived isotope iodine-129 in the FCM inside the Sarcophagus, especially in the lava. Knowing the residual activity of iodine-129 inside the Sarcophagus and its initial activity in the fuel, we would be able to assess the release. The method was developed, but proved to be very expensive and has not yet been realised. It is reasonable now to suggest the iodine release to have been 50-60 % of the initial amount, or 1.5 to
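The difficulty of reconstructing the iodine-131 release after the fact follows directly from its 8-day half-life: almost nothing measurable remains within months, which is what makes the long-lived iodine-129 attractive as a proxy. The sketch below uses only the standard iodine-131 half-life.

```python
import math

I131_HALF_LIFE_DAYS = 8.02   # iodine-131

def remaining_fraction(half_life_days, t_days):
    """Fraction of an isotope's initial activity left after t days."""
    return math.exp(-math.log(2) * t_days / half_life_days)

# After one month less than 10 % of the iodine-131 is left, and after
# a year essentially none, so direct measurement quickly became
# impossible:
print(remaining_fraction(I131_HALF_LIFE_DAYS, 30) < 0.10)    # -> True
print(remaining_fraction(I131_HALF_LIFE_DAYS, 365) < 1e-9)   # -> True
```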
4. CREATION OF THE SARCOPHAGUS: ITS ADVANTAGES AND SHORTCOMINGS

Work on the site

After the explosion, the area in the immediate vicinity of the damaged Unit was contaminated by dispersed active core fragments, pieces of graphite and fuel rods, and radioactive structural elements. These lay on the roof of and inside the turbine hall, on the deaerator stack, and on the metal supports and roof of Unit 3, etc. Near Unit 4 the gamma radiation was dominated by the reactor ruins. During the active phase of the accident, radioactive dust (fuel particles) was deposited on the site and on the roofs and walls of the buildings. The first measurements of dose rates around the damaged Unit gave large values - hundreds and thousands of mSv/h (see Figure 8, where the results of dose rate measurements at the site, taken on the afternoon of 26.04.8620, are presented). This circumstance gave rise to a long-lived myth that almost all the fuel had been thrown out of the Unit. In reality the large dose rates were due to the extremely high specific activity of the fuel: calculation showed that if 0.3 % of the total fuel were dispersed uniformly over the site, the dose rate on May 6th at a level of 1 m above the ground would be 0.5 Sv/h. Owing to the uneven location of the sources, the observed dose rates varied from fractions of a Sv/h to tens of Sv/h (near the active core fragments). Before mitigation activities could commence, it was necessary to provide passageways and to clean up the site. For this purpose military IMR-2 machines, equipped with additional shielding, were used. From the very beginning the work was hard: standard military machines do not have the necessary protection, and there were neither remote devices to search for local radioactive sources nor vehicles for the transportation of radioactive waste. In the course of the work, the radiation protection of the drivers was increased by a factor of between one hundred and one thousand, and remotely controlled vehicles were developed.
Clean-up work was carried out in the following manner: clearing the site of contaminated debris and equipment; decontamination of the outdoor surfaces of the buildings; excavation and removal of the upper soil layer (50-100 mm); laying of concrete slabs and "clean" materials (sand, gravel) on the ground; and covering of the surfaces with film-forming compositions. As a result of the clean-up, as well as of radioactive decay, dose rates near Unit 4 did not exceed 15 mSv/h at the time of completion of the "Ukritye" encasement (November 1986).

"Ukritye" encasement building

The necessity of erecting an encasement for Unit 4 had been clear from the first days after the accident. This construction had to prevent dispersion of radioactivity from the reactor ruins and to protect the adjacent area against gamma-radiation. Among the 18 projects considered, one solution proposed the erection around the Unit of an independent air-tight building; another proposed utilising to the maximum extent the remaining structures of the damaged Unit. The second approach was finally chosen. It had the advantage in cost and in construction time - the design and building were completed in 6 months, an unprecedented example in world practice - but it also had its negative features. There was a lack of information about the rigidity of the old structures
used as supports for the new one, a necessity to use remote concreting, the impossibility of welding in some places, etc. All these difficulties were due to the huge radiation levels near the damaged Unit, and they resulted in two significant shortcomings of the construction - uncertainty as to the strength of the supporting structures, and lack of leak-tightness: the total area of openings in the new encasement after completion of building remained considerable. During construction, significant amounts of concrete (called "fresh" concrete) flowed into the destroyed Unit building. This made it difficult or impossible to enter and observe many of the rooms. On the other hand, the partial covering of the FCM with a concrete layer improved the radiation situation and facilitated passage into other rooms.
5. RESEARCH ACTIVITIES ASSOCIATED WITH THE SARCOPHAGUS

On completion of the Sarcophagus encasement, information about fuel location was restricted to the data available from the periphery of the Unit. Penetration into the rooms located close to the reactor was complicated by high radiation levels, damaged structural elements and fresh concrete. To continue the studies and to obtain information on the nuclear and radiation hazard from the Sarcophagus, a programme was developed by the Kurchatov Institute5. In accordance with the programme, the rooms to the west and south of the Unit were cleaned up, drilling equipment was installed inside, and bore holes were drilled through the concrete and steel to the places of potential fuel location. By means of visual methods (TV cameras, periscopes) and newly developed thermal and radiation detectors, many measurements were taken. Samples were studied simultaneously. It then became possible to assess the distortions inside and to strengthen the emergency structures which, if they fell, would result in additional destruction and radioactive dust release. It had become clear in 1988 that the reactor shaft was empty. Subsequent studies showed that the fuel inside the Sarcophagus exists in the following modifications14:

Active core fragments, which are supposed to be located mainly on the upper floors of the Unit, in particular in the Central Reactor Hall, where they were found under the layer of materials dumped in 1986. To this day, there is very little information about this fuel;
A. BOROVOI AND S. BOGATOV
Finely dispersed fuel (dust), called "hot fuel particles". Their dimensions vary from fractions of a micron to hundreds of microns. This fuel modification forms all the fuel surface contamination. The total amount of fuel dust inside the Sarcophagus is assessed to be 10 te, including 1 te located immediately under the roof of the Sarcophagus (both values are known only to the order of magnitude);

Solidified lava-like fuel-containing materials (LFCM). We described above the origin of this form of the FCM during the active phase of the accident. There is relatively satisfactory information on the lava at the lower floors, but the high radiation field and fresh concrete hinder, as before, a precise estimation of the amount of fuel located inside. The range of assessments of the fuel amount in lava is now 70-150 te. This estimate is influenced by the environment, especially by water; the lava degrades quickly;

The last modification is presented by water solutions of uranium, plutonium, etc. To be fair, let us note the insignificant uranium concentrations - about 1 mg/l.

During the work many FCM locations have been identified. The physical and chemical properties of the FCM have been studied, and constant surveillance of the FCM accumulations has been established. Surveillance is organised for radioactive aerosol release out of the Sarcophagus and for the water inside it. The data on the FCM locations in the Sarcophagus are presented in Table 7.
6. WHAT IS THE THREAT FROM THE SARCOPHAGUS?
What are the hazards represented by the Sarcophagus? They are: radioactive dust release on collapse of the buildings; penetration of radioactive water out of the Sarcophagus into the environment; the onset of a self-sustaining chain reaction (SCR) inside an accumulation of fuel-containing masses; and the release of radioactivity to the outside through openings. What is the likelihood of these processes and their aftermath?

The experience of recent years. The release of radioactive aerosols out of the Sarcophagus is monitored with the help of sampling plates mounted on its roofing. The most probable paths for aerosol release were determined in order to choose suitable control points. Special attention was paid to air streams passing through the places of the main accumulations of fuel-containing masses, the reactor space and the ruins in the Central Hall. Upper estimates of the radioactive release through the openings of the Sarcophagus were obtained for 1992, 1993, 1994 and 1995. The plutonium fraction in the total activity was limited to 0.4-1.2 %. Figure 9 shows the average concentration of alpha-active nuclides in the air at the site near the Sarcophagus. A considerable decrease in radioactive aerosol concentration was caused by the operation of a dust-suppressing installation mounted in the Central Hall. It is noticeable that the air near the Unit is becoming cleaner with time. Work in 1987-1989 on strengthening the accessible inner structures, which were damaged to a large extent in the accident, has prevented further destruction. Observation of the building for subsidence did not reveal any anomalies. Seismic waves of magnitude 3.4-4 from the Romanian earthquake of the 30-31st of May 1990, which were felt in the region of the ChNPP, did not cause noticeable external damage or movement. Inside the Sarcophagus some growth of cracks in the walls was observed.
The general conclusion is that up to now the Sarcophagus has exerted no negative influence upon the neighbouring territory, and up to now it has been possible to avoid emergency situations.

The most hazardous structures of the Sarcophagus. All the main bearing structures of the Sarcophagus, such as beams B1 and B2, the roofing over the Central Hall, the steel shields of the covering, the beam "Mammoth" and others, were designed and constructed in accordance with building requirements. That is why the durability of these structures themselves does not
cause any doubt. The supports for the main structures are another matter. The question of their durability has already been discussed in many works. There is no common opinion on this question at present, and we have to consider the most pessimistic forecasts. According to these forecasts, the lasting stability of the old constructions which support beams B1 and B2 (see Fig. 10) is estimated as 10 years under usual climatic conditions (snow, wind, temperature). For comparison, the lasting stability of the other Sarcophagus constructions is estimated to be 80 years14.

Possible consequences of the building collapsing. Efforts to estimate the initiating events, the sequence of their development, the most direct effects and the further consequences of such an accident have been made in many works21-24. The most hazardous radiation accident, associated with the fall of B1 and B2 along with the upper structures of the building, was considered in these works. As a result of the roofing fall, large amounts of dust from the surfaces beneath would be entrained in the turbulent air trace behind the fallen structures. The calculations carried out23 indicated that under these accident conditions about 5 te of dust, containing about 50 kg of finely dispersed fuel, could be entrained in the turbulent trace. The altitude of the cloud elevation over the earth's surface would be 100 m (the height of the building being about 60 m); its diameter would be 20 m. According to these estimates, about 20 % of the release would be caught in a so-called "aerodynamic shadow" behind the building. The extent of this shadow is about 200 m. Even taking average optimistic estimates into consideration, the surface contamination in the region of the aerodynamic shadow at the site could reach many tens of for Cs-137 and Sr-90 and about for Pu+Am isotopes.
At low wind speeds and at short distances from the Sarcophagus (hundreds of metres), the inhalation doses due to transuranics would be very high, up to levels causing lethal effects and lung cancer induction. With increasing distance, doses drop quickly, and at a distance of 10 km they are below the permissible ones.
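The deposition arithmetic behind these estimates can be sketched roughly as follows. The 5 te dust mass, 50 kg fuel content, 20 % shadow fraction and 200 m shadow length come from the text above; the shadow width is our own assumption, so the result is illustrative only, not a reproduction of the cited calculations.

```python
# Rough sketch of the roof-collapse source-term arithmetic described in the
# text. Dust mass, fuel mass, shadow fraction and shadow length follow the
# cited estimates; the shadow width is an assumed, hypothetical value.

DUST_MASS_KG = 5_000.0      # ~5 te of dust lofted by the falling roof
FUEL_MASS_KG = 50.0         # ~50 kg of finely dispersed fuel within it
SHADOW_FRACTION = 0.20      # ~20 % deposited in the "aerodynamic shadow"
SHADOW_LENGTH_M = 200.0     # extent of the shadow behind the building
SHADOW_WIDTH_M = 60.0       # assumed width, of order the building height

def shadow_deposition_density() -> float:
    """Mean fuel deposition density (kg/m^2) inside the aerodynamic shadow."""
    deposited_fuel_kg = FUEL_MASS_KG * SHADOW_FRACTION
    shadow_area_m2 = SHADOW_LENGTH_M * SHADOW_WIDTH_M
    return deposited_fuel_kg / shadow_area_m2

print(f"fuel deposition in shadow: {shadow_deposition_density() * 1e6:.0f} mg/m^2")
# prints "fuel deposition in shadow: 833 mg/m^2"
```

Even with optimistic parameter choices, such a uniform-spread estimate makes clear why near-field contamination after a collapse would be severe.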
The model chosen was based on the supposition that there was no strong wind. In the opposite case (at very strong wind speeds, a tornado or hurricane, for example, which could itself cause the collapse of the Sarcophagus), dust clouds would be formed mainly by other mechanisms and a greater amount of dust, along with fuel particles, could be entrained. Recently, Dr. Pretch and others24 published calculations of the doses to which people working at the site could be exposed in the case of Sarcophagus collapse. The results of these estimations are presented in Fig. 11. The results23,24 cause clear anxiety. However, it should be mentioned that the available information about the source term is not good enough to make accurate quantitative forecasts, and in the works mentioned above the "white spots" are covered by maximally pessimistic suppositions.

Water in the Sarcophagus. Water is the main enemy of the safety of the Sarcophagus. It can: destroy the FCM, increasing the quantity of "loose" radioactivity inside; contribute to the destruction of the building elements; cause an increase in the criticality of the FCM and, in the course of time, as their cooling and destruction proceed, the generation of nuclear-hazardous compositions; and contaminate ground waters with radionuclides.
Apart from these direct influences upon the Sarcophagus, water has an indirect influence. It disrupts the normal operation of diagnostic systems, hinders the investigations, transforms the premises into especially hazardous ones (from the standpoint of electrical safety), and so on. That is why the most important task for Sarcophagus safety is to take measures to reduce the quantity of water penetrating the premises and, in the event of water penetration, to organise constant monitoring of its location, its radionuclide composition and the presence of fissile materials in it, and to take active counter-measures if necessary. Investigations reveal several possible routes of water penetration into the building. They include natural precipitation penetrating through the openings, and the water in the dust-suppression composition regularly sprayed over the Central Hall. There is one more source of water, the influence of which is increasing as the Sarcophagus cools. This is condensation water, originating as dew on the cold areas inside the building after penetration of moist, warm outdoor air. As a result of the sealing work, many cracks in the coverings of the Turbine Hall and deaerator stack, as well as part of the long cracks in the inclined parts of the covering of the reactor compartment, were closed. The work on roofing sealing has decreased water penetration, but not to a great extent. Radioactive water accumulates gradually in the lower storeys of the Sarcophagus (the total amount is estimated to be ) and then leaves them by routes still not well known. Systematic studies of the water masses in the Sarcophagus were started in 1991. In 1995 about 40 different areas of the Sarcophagus were monitored. The total beta-activity of the water samples ranged within in 1995. The main contribution to the total activity was due to caesium isotopes, present mainly in dissolved form (74-100 %).
Activity of the samples caused by Sr-90 ranged within . Uranium isotope content varied within 5-20 000 µg/l, mainly in soluble forms. Plutonium activity does not exceed 3000 Bq/l. Among the areas under observation, Room 009/4 (level 0.0) stands out. The concentrations of caesium and uranium in this room have increased by two orders of magnitude since 1991 (uranium from 10 to 4300 µg/l). What is the level of danger of water penetration beyond the Sarcophagus borders? To answer this question, it is necessary to remember that several hundred kg of fuel (500-700) are buried in the soil immediately under the Sarcophagus, and about 3 te of fuel is situated outside the site, inside the exclusion zone. This fuel is washed by rain and other natural waters and, in our opinion, must play a considerably more important role in the contamination of subsoil waters.

The possibility and aftermath of a further nuclear accident. The hazard of an SCR in the active core of the Unit 4 reactor worried scientists up to the middle of 1988. This was due to the fact that for the RBMK-1000 reactor a comparatively small part of its core (more than 154 fuel channels with graphite moderator, in the absence of control rods) is able to go critical. Such a part could have been left after the accident. But, as already mentioned, in May 1988 investigators managed to look inside the reactor shaft and determined that the reactor stack (the active core as it was) did not exist any more. They then had to answer the question: could an SCR start in the accumulations of fuel-containing materials mentioned previously? The answer was given in the "Technical basis of nuclear safety for the Sarcophagus encasement" (TBNS), which was published by the Kurchatov Institute towards the end of 199025. In this work, experimental and calculated data obtained up to the middle of 1990 were generalised and conclusions on nuclear safety were drawn.
The basic conclusion of the TBNS was as follows: "Since the moment of the end of the active phase of the accident, the aggregate of diagnostic measurements indicates subcriticality of all the FCM located inside the Sarcophagus." At the same time, it should be mentioned that all the measurements were made at the surface of the fuel "lava", because of the absence of a so-called "hot drilling" technique, that is, the extraction of highly radioactive core samples. Calculations also confirmed that all modifications of the FCM are deeply subcritical in every geometric combination in the absence of water. Estimates of the criticality of mixtures composed of lava-like FCM and water showed that, for the composition of lava studied, the neutron multiplication coefficient in an infinite medium, k∞, is always less than 1 (again, according to the results for surface samples). This is illustrated in Fig. 12, where the nuclear-hazardous area is depicted within the co-ordinates (content of uranium in lava) - (fuel burnup) for the most hazardous homogeneous mixture of the LFCM with water. The area of observed parameters of the LFCM lies far from the hazardous border. In addition, two more barriers prevent water penetration into the lava accumulations: a thermal barrier (large lava accumulations had surface temperatures of 60-70 °C and, according to the estimations, temperatures inside had to exceed 100 °C); and the waterproof glass-like surface of the lava. An additional safety barrier was the content of neutron absorbers (boron and gadolinium salts) in the samples of the water studied in the Sarcophagus. The salts were
dissolved in the water when it passed through the materials located in the Central Hall, or were dissolved intentionally in water used in technological operations. Over the last five years many new results have been obtained, including some on the long-term stability of the LFCM. These results revealed that many of the safety barriers have become lower (as predicted in the TBNS). Two barriers previously prevented water penetration into the lava-like FCM: its high temperature and the water-proofness of the substance itself. Calculations (Fig. 13) and experiments have shown a considerable cooling of the lava. Cracking also transforms the lava into a water-permeable structure. The amount3 of condensed water formed on the cold surfaces has increased noticeably during recent years. This water does not pass through much material and does not contain neutron absorbers. Moreover, let us remember that the so-called "nuclear safe" parameters for the lava apply to its surface. Further studies of samples taken from subreactor room 305/2 in 1992-1993 revealed the presence of active core fragments in a non-melted state. Thus it has become necessary to take into consideration a new composition, "lava + active core fragments + water", which is in some cases more hazardous than the composition "lava + water". But what is the degree of real hazard of the consequences of an SCR in the FCM inside the Sarcophagus? First, we should mention that an initial SCR is not equivalent to the explosion of a specially constructed device. Calculations and estimates show that for the existing geometry of the LFCM, neither an explosion nor a blast wave should be expected. More
probably, heating up and decay of the conglomerates, accompanied by radioactive release, would take place. Thus it is appropriate to estimate such a nuclear accident only in terms of radiological consequences. At present, the most dangerous scenario for the development of a nuclear accident is connected with a rapid pouring of water over the fuel-containing materials. Even neglecting the presence of the protective barriers, the consequence of such an accident might be personal irradiation in the immediate vicinity of the Sarcophagus up to doses of tens of millisieverts. That is many hundreds of times smaller than the corresponding values for the Sarcophagus collapse.
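The kind of criticality screening described above, in which k∞ for a homogeneous lava-water mixture stays below 1, can be illustrated (though certainly not reproduced) with a toy one-group estimate: k∞ is approximated as the reproduction factor η times the thermal utilisation f. Every cross-section value and η below is a hypothetical placeholder, not data from the TBNS.

```python
# Toy one-group illustration of a k-infinity screening for a homogeneous
# fuel/water mixture: k_inf ~= eta * f, where f is the thermal utilisation.
# Resonance and fast-fission effects are ignored; all macroscopic absorption
# cross sections (per cm) and eta are hypothetical placeholder values.

def k_infinity(sigma_a_fuel: float, sigma_a_other: float, eta: float) -> float:
    """eta times thermal utilisation for a two-component homogeneous mix."""
    thermal_utilisation = sigma_a_fuel / (sigma_a_fuel + sigma_a_other)
    return eta * thermal_utilisation

# Dilute, high-burnup fuel dispersed in lava and water (placeholder values):
k = k_infinity(sigma_a_fuel=0.02, sigma_a_other=0.05, eta=1.8)
print(f"k_inf = {k:.2f} -> {'subcritical' if k < 1.0 else 'potentially hazardous'}")
# prints "k_inf = 0.51 -> subcritical"
```

The point of the sketch is structural: with fuel dilute enough relative to the absorbers around it, the utilisation term keeps k∞ well under unity, which is the regime Fig. 12 places the observed LFCM in.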
7. NECESSITY AND STRATEGY FOR THE TRANSFORMATION OF THE SARCOPHAGUS

Our knowledge of the Sarcophagus is still very limited and our fears may be exaggerated to a great extent, but until our information is complete, the general approach in the science of safety is to adopt maximally conservative forecasts. Even if we cannot describe the real hazard of the Sarcophagus quantitatively, a tendency for it to increase is nevertheless definitely seen. What shall we do? Should we continue our investigations intensively until it is possible to estimate the hazard associated with the Sarcophagus? But our investigations become more difficult with time, and safety requirements become stricter. We must certainly refuse to allow people to work within the hazardous premises, as was allowed during the post-accident years 1986-1987. The corresponding robotics and remote methods are only now being developed, and they are very expensive. Objective factors, financial and technical, as well as a great variety of subjective ones, have led during recent years to the studies being curtailed rather than intensified. Should we take preventive measures and rebuild the Sarcophagus? Having provided ecological safety for hundreds of years, we might then, under the shield of Sarcophagus-2, without hurrying and very carefully, take the radioactive materials apart, sort them according to their activity and eventually dispose of them. However, this is a very expensive measure. As early as 1989, Academician S. Belyaev and one of the authors proposed a concept for the Sarcophagus transformation and the creation of a Sarcophagus-2. The concept was discussed and modified, but gradually everybody came to the
conclusion that the problem cannot be solved by our own forces. It is necessary to ask for help from the international community: ideological, technical, but mainly financial help. In June-July of 1993, an international tender to transform the Sarcophagus into an ecologically safe system was held in Kiev. About 400 projects, proposals and ideas were submitted. Six of them were declared winners: well-considered projects from France, Germany, England, Ukraine and Russia. None of the projects was able to satisfy the jury totally, but from the work that had been done the cost of Sarcophagus-2 was assessed at about 2 billion dollars. In the spring of 1995 the Kurchatov Institute proposed its "Concept for work at the Sarcophagus for the years 1995-2000". The main task in the Concept was considered to be the immediate undertaking of measures to stabilise the state of the Sarcophagus. It was said in the Concept: "It seems to be unjustified optimism, for such a short period as 3, 5 or even 7 years, to rely on the Sarcophagus transformation to isolate it totally from the environment with the help of Sarcophagus-2. This is so not only from the technical standpoint, but from a financial one. The solution of the Sarcophagus problems has to include for the years 1995-2000 the following tasks: to ensure the current safety of the Sarcophagus; to ensure its long-term safety (stabilisation); and to prepare for the transformation (to Sarcophagus-2). At the stabilisation phase, measures need to be undertaken to minimise the influence of the Sarcophagus on the environment for a sufficiently long time, more than 15 years. These measures will allow us to carry out the transformation safely and thoroughly." The Concept was approved in general by the Ukrainian institutions, enlarged and transformed into the document "Basic directions of assurance for Sarcophagus safety for 1995-2000".
At present the international organisations give much help in the work on Sarcophagus stabilisation and transformation. There are several projects funded by the Commission of the European Communities. But time slips by, and the Sarcophagus remains one of the most hazardous structures of atomic energy, as well as the symbol of the Chernobyl tragedy.
8. REMEDIATION OF CONTAMINATED AREAS

Measures undertaken in agriculture

A considerable part of the contaminated agricultural areas is located in a region of marshy scrub, where sandy soils with low humus content and acid pH are widespread. Under these conditions Cs-137 and Sr-90 have an increased migrational ability in the soil-plant chain, which leads to an increased content of these radionuclides in agricultural and stock-breeding production. Intensive reclamation measures were undertaken: deep ploughing (5 cm deeper than usual) over all contaminated arable lands in the autumn of 1986 and spring of 1987, along with inverting of the upper soil layer. This resulted in a significant lowering of the nuclide concentration throughout the plant root region, the cessation of dust resuspension, and a lowering by 3-4 times of the dose rates at the fields;
liming of all arable land with acid conditions. This measure, being a necessary part of crop-growing, decreases considerably the transport of radionuclides from soil to plants; annual fertilisation of the contaminated land with increased amounts of mineral fertilisers, mainly potash and phosphoric fertilisers. Practical experience in the reclamation activity over arable and fodder lands contaminated by Chernobyl fallout has made it possible to obtain stable harvests and to decrease radionuclide accumulation in plant-growing production by 1.5-10 times. The introduction of reclamative agents into the sandy soil reduces nuclide mobility and restricts their transport by 2-4 times.

Decontamination of territories and buildings

To lower the radioactive contamination of populated areas where evacuation had not been carried out, measures were undertaken to remove and bury radioactive substances, as well as to stop or reduce their migration in the environment. A brief description of some of the activities in the republics of the former USSR is given below.

Ukraine. The first clean-up was carried out over public places, such as schools, hospitals and stores, as well as the most contaminated dwelling houses. From May to October 1986, 1229 localities were cleaned up in the Kievskaya and Gitomirskaya regions, and 479 during the years 1987-1990. Some characteristics and the scope of the clean-up activities are given in Table 826. The removal of contaminated soil was a difficult operation. This was due to the lack of packaging, dust-proof loaders and means of compaction, as well as of special vehicles for the transport of the radioactive waste. During decontamination large amounts of liquid and solid municipal waste with different levels of radioactivity were generated, as well as biomass. During the first months, the radioactive waste was buried in temporary storage sites without isolating linings.
Although some populated areas were decontaminated two, three or more times, the results achieved were not satisfactory. One of the reasons was secondary transport of radionuclides by wind from forests and contaminated roads, as well as poor quality decontamination. Experience has shown that decontamination by means of: clean-up of solid surfaces with special mechanisms and chemicals; use of cheap absorbers to decrease the surface contamination; and replacement of the roofs of one-storey buildings; is of low efficacy from the standpoint of a cost-benefit analysis. Relatively more effective are: removal of contaminated soil from homesteads; and ploughing of the soil in private gardens along with the introduction of fertilisers. Figure 14 shows the results on dose rate reduction achieved during remediation activities at four settlements in the Kievskaya and Gitomirskaya regions27. The remediation included: excavation of contaminated soil in gardens and drains; concreting around houses; and removal of the contaminated ground to places of burial. The efficiency of the remediation was not high; the dose rate was reduced by 26 % on average.
Byelorussia. To reduce dose rates, more than of contaminated soil was removed from populated areas, more than 4500 ramshackle buildings were demolished and more than a million square metres of roofing were replaced. Much work has been done to asphalt streets and pavements. To supply the population with clean water, more than 3000 wells were cleaned up, additional artesian wells have been brought into the municipal water supply, and new pumping stations mounted. The productivity of the central water supply was increased by per day; more than 900 km of piping were laid. As a result the central water supply for the most contaminated areas was increased by 90-100 %. A big anxiety was the Gomel water supply from the river Soge, which flows through the contaminated areas. In this connection new artesian wells were built and the Gomel water supply was switched to underground sources.
Russian Federation. The main work has been done by military units, civil defence, volunteer detachments and inhabitants. During the years 1986-1989, 915 km of roads were cleaned up, of contaminated soil were removed, of clean soil were delivered, 74 km of water piping were laid, 5762 wells (including artesian ones) were built, and 2334 flats were supplied with natural gas. In the western districts of the Bryanskaya region, 302 localities were decontaminated; 50 of them were decontaminated two, three or even six times. These works have not given satisfactory results; the coefficient of decontamination achieved was 1.2-1.6. This was mainly due to delays in remediation (the time elapsed after contamination was long enough to allow the radionuclides to penetrate into the soil and materials), as well as to the low efficiency of the decontamination techniques available.
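The "coefficient of decontamination" quoted here is simply the ratio of the dose rate before clean-up to the dose rate after it. A minimal sketch, with illustrative dose rates of our own choosing rather than measured values:

```python
def decontamination_factor(dose_rate_before: float, dose_rate_after: float) -> float:
    """Coefficient of decontamination: dose rate before / dose rate after."""
    return dose_rate_before / dose_rate_after

# Illustrative: a settlement street at 2.0 uSv/h cleaned down to 1.4 uSv/h
df = decontamination_factor(2.0, 1.4)
print(f"decontamination factor: {df:.2f}")  # falls within the 1.2-1.6 range cited
```

A factor of 1.2-1.6 means only a 17-37 % reduction in dose rate, which is why the text judges these repeated campaigns unsatisfactory.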
9. MEDICAL CONSEQUENCES: RESIDENCE IN THE CONTAMINATED AREAS

Early consequences (acute injury)

Large groups of personnel and population were subject to screening from the first days after the accident. Patients had contaminated clothes and body surfaces; increased levels of radioactivity of the thyroid gland were registered. Because information on the external irradiation of people was absent, it was difficult for medical staff to arrive at an adequate diagnosis. All patients, without exception, who presented at hospitals in Ukraine, Byelorussia and Russia about their health after the Chernobyl accident were examined or hospitalised. The main group, who had received acute radiation disease (ARD), were treated in Moscow (the clinical hospital of the Institute of Biophysics) as well as in hospitals in Kiev. Generalised data on the patients with ARD are presented in Table 928.
Thus 134 persons who were at the site of the ChNPP at the time of the accident incurred ARD. Almost one-third of the patients had a heavy (III) or extremely heavy (IV) degree of ARD. The twenty-eight people who died were patients who had incurred external and combined irradiation (large radiation burns of the skin and underlying tissues, and whole-body irradiation) in doses incompatible with life. It was due to the efforts of the doctors that
it was possible to save several patients with a heavy degree of ARD and, in particular, one patient irradiated with an absolutely mortal dose.

"Liquidators"

The main work to clean up the industrial site adjacent to the Chernobyl Nuclear Power Plant (ChNPP) was carried out by soldiers. Within two days of the accident, the mobilisation of soldiers and officers from the reservists had been declared through the military registration and enlistment offices. These people contributed more than 80 % of the total number of military men. We have no official information on the number of military men involved in the mitigation activities, and we present here only a crude assessment28. From the end of April to November-December of 1986, more than 90 000 military men were irradiated within the 30 km zone, including the site of the ChNPP. Taking into account the work during the years 1987-1989, this number has to be increased by 3-4 times. The number of persons sent to the ChNPP from other NPPs or other facilities of the former USSR, personnel of the ChNPP, and the basic organisation of the Ministry of Medium Machine Building (US-605), was 52 778 in 1986. Generalised data for the Republics of the former USSR, based on State Statistical Committee information, were obtained in 1991 for the first time. Starting in 1990, 316 553 people taking part in the mitigation were surveyed in the USSR. Among these, 112 952 were from the Russian Federation, 148 598 from the Ukraine, and a further 37 346 persons (11.8 %) were surveyed in the other republics. According to the data of the All-Union Distributive Registry (Obninsk), by the end of 1991 the numbers of "liquidators" (the clean-up troops) involved were: 138 390 in 1986; 85 556 in 1987; 26 134 in 1988; 43 020 in 1989. Thus the approximate number of liquidators is estimated to be 300 000 persons. One of the urgent problems related to the liquidators is the level and kind of irradiation.
A deficiency, or sometimes complete absence, of emergency dosemeters did not allow the assessment of doses of external irradiation, to say nothing of the intake of radionuclides or the beta-irradiation of the skin. Fortunately, doses due to radionuclide intake usually contributed only a small part (~5 %) of the total dose absorbed. The main body of soldiers and sergeants did not have individual dosemeters, and a so-called "collective dosimetry" method was put into practice. Scout-dosimetrists measured dose rates at the workplace in advance, and then the person responsible for the work calculated the time to be spent in the zone, taking into account the permissible levels of irradiation. The accuracy of such a calculation cannot be regarded as high, because the radiation fields were extremely inhomogeneous due to the presence of local sources (active core fragments) with extremely high dose rates in their vicinity. In general, the officially registered doses to liquidators are in doubt. Figure 15 shows the distribution of documented doses received by liquidators according to the data of the Russian National Medical Dosimetric Registry (RNMDR). Sharp peaks (or drops) attract attention at doses of 25, 10 and 5 cGy, corresponding to the emergency levels in 1986, 1987 and later. The authors are not able to explain this effect from a physical standpoint and state it without comment.
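The "collective dosimetry" procedure described above reduces to dividing the permissible dose by the measured dose rate at the workplace. A minimal sketch of that stay-time rule follows; the 25 cGy (250 mSv) limit is the 1986 emergency level quoted in the text, while the workplace dose rate is an assumed illustrative value.

```python
# Sketch of the "collective dosimetry" stay-time rule described in the text:
# a scout-dosimetrist measures the field in advance, and the permitted
# working time is the emergency dose limit divided by that dose rate.
# The dose rate used below is an assumed, illustrative value.

def allowed_minutes(dose_limit_mSv: float, dose_rate_mSv_per_h: float) -> float:
    """Permitted stay time, in minutes, for a given dose limit and field."""
    return 60.0 * dose_limit_mSv / dose_rate_mSv_per_h

# 1986 emergency limit of 25 cGy (~250 mSv) in an assumed 500 mSv/h field:
print(f"permitted stay: {allowed_minutes(250.0, 500.0):.0f} minutes")
# prints "permitted stay: 30 minutes"
```

The rule's weakness is exactly the one the text names: with local sources producing fields that vary sharply over metres, a single advance measurement makes the computed stay time unreliable.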
In 1986, immediately after the Chernobyl accident, the Ministry of Health of the USSR initiated a programme to create an All-Union Distributive Registry of persons incurring irradiation. At the moment of the USSR break-up, the data base for liquidators comprised medical and dosimetric information on about 284 919 people. At present only residents of the Russian Federation are included in the RNMDR. On 01.12.94, the RNMDR encompassed 370 120 persons, including liquidators (43 %), evacuated persons (2.2 %), residents (or former residents) of the contaminated areas (50.4 %), descendants of liquidators of the years 1986-1987, and migrants from the evacuation zone (0.2 %). At present the RNMDR seems to be the most informative source for forecasting the long-term consequences of irradiation for the health of the liquidators. The forecast30 of additional mortality for liquidators 20 years later, due to malignancies caused by radiation, is shown in Table 10, taking account of the age distribution (average age 33 years). The accuracy of the mean dose assessments is about 50 %. In particular, it is seen that excess radiation-induced mortality (attributive risk) due to all malignancies reaches 2.8 %. The similar index for leukaemia is 23.6 %. Mortality among liquidators for the years 1990-1993 is shown in Fig. 16. It is seen to be undetectable against the background of the control indices. The situation seems to be much more difficult concerning the interpretation of overall disease and disability among the liquidators. It is known29 that the disease indices of the liquidators in many cases exceed similar ones for the Russian population as a whole. For example, diseases of the endocrine system are 18.4 times more frequent, mental derangements are 9.6 times more frequent, and the mean index of disease is 1.5 times higher than the corresponding indices throughout Russia.
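The "attributive risk" figures quoted (2.8 % for all malignancies, 23.6 % for leukaemia) express the radiation-induced excess as a fraction of all expected deaths. The arithmetic can be sketched as follows; the case counts below are hypothetical numbers chosen only to reproduce the quoted percentages, not data from Table 10.

```python
def attributable_risk(excess_cases: float, baseline_cases: float) -> float:
    """Radiation-induced excess cases as a fraction of all expected cases."""
    return excess_cases / (baseline_cases + excess_cases)

# Hypothetical counts chosen only to reproduce the quoted percentages:
print(f"all malignancies: {attributable_risk(28.0, 972.0):.1%}")  # 2.8%
print(f"leukaemia:        {attributable_risk(23.6, 76.4):.1%}")   # 23.6%
```

The contrast in the two figures reflects the familiar pattern that leukaemia, being rare at baseline, shows a far larger relative radiation signal than solid cancers.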
Certainly the quality and completeness of surveillance for the liquidators is much better than the usual practice in Russia; the most experienced specialists are involved in the survey. Registered diseases are, on average, several times more often revealed in special institutions than in ordinary clinics. For this reason it is extremely difficult to get an adequate control group for comparison.
A. BOROVOI AND S. BOGATOV
It has been revealed that social and psychological factors connected with the accident have a considerable influence on the diseases and mental state of the liquidators. All this, together with radiation injury, could be defined as the "Chernobyl syndrome", and attempts to extract the influence of radiation alone from this complex are very difficult. One such attempt has been made30, in which disease and disability indices were assessed for groups corresponding to received-dose ranges of 0-5 cGy, 5-20 cGy and > 20 cGy, according to the data of the RNMDR. Liquidators receiving doses of 0-5 cGy were considered as the internal control group. Within the framework of standard multi-factor
analysis, two factors influencing morbidity were analysed: the dose (with three gradations: 0-5 cGy, 5-20 cGy and more than 20 cGy) and the date of entering the zone of radiation (also with three gradations: 1986, 1987, 1988-1990). On the basis of the analysis of three kinds of disease (of the endocrine system, of the blood circulation system, and mental disorders), it was revealed that the date of entering the zone is the prevailing factor influencing morbidity. A similar situation is observed for the disability indices. The disability index for liquidators is 2.8-3.2 times the corresponding index for all Russia. It is notable that, judging by the frequency and severity of disease, persons engaged in mitigation activities at ages 18-30 and now about 40 years old correspond to the 50-55 year age group in the Russian population in general. Two conclusions can therefore be drawn from the data on liquidators: actual data for the period since the accident, as well as the forecast of total mortality obtained from the radiation risk coefficients of the ICRP, are in good agreement with the observed indices and do not exceed the corresponding control indices for the Russian Federation; and the liquidators of the years 1986-1987 represent a group at especially high risk. Population of the contaminated areas
As far as the population of the contaminated areas is concerned, the question to be asked concerns the influence of low doses of radiation. Within the dose range 20-50 mGy (according to some estimates, up to 100 mGy), radiation does not cause immediate deleterious effects on people's health. At the same time, under the concept of a threshold-free influence of radiation, any irradiation may in principle induce late consequences, experienced as an increase of malignancies among the irradiated persons as well as detrimental hereditary effects affecting their descendants. According to the recommendations of the International Commission on Radiological Protection (ICRP)31, the lifetime likelihood of occurrence of all kinds of fatal malignancy in a population subject to low-dose irradiation is estimated to be about 0.05 per sievert (i.e., 5 % per sievert). It is worthwhile comparing this value with the similar index for "usual", spontaneous mortality due to all cancers, which is about 20 % for developed countries. Let us consider the expected radiological consequences for the population residing in the "zones of rigid control" (ZRC) over the territory of the 9 regions of Russia, Ukraine and Byelorussia contaminated by radioactive fallout as a result of the Chernobyl accident. The lifetime collective dose equivalent to this population (273 000 persons) is estimated32 to be 31 000 person*Sv, unless restrictions on the manner of life are introduced. If spontaneous mortality due to malignancies is taken to be 20 %, calculation gives an increase due to expected radiation-induced cancers of 0.56 %, for a total of 20.56 %. In all, 15 617 000 persons resided in the territory of these nine regions (including the ZRC in five of them). The committed collective dose equivalent for this population due to Chernobyl fallout was assessed to be 192 000 person*Sv.
Calculation shows that in these circumstances, radiation-induced mortality due to all malignancies may be increased by 0.06 %, from 20 % to 20.06 %.
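The arithmetic behind these two figures is the product of the collective dose and the ICRP risk coefficient, expressed as a fraction of the exposed population. A short sketch reproducing the numbers quoted above:

```python
# Sketch of the lifetime-risk arithmetic used above: excess fatal-cancer
# mortality = collective dose (person-Sv) x ICRP risk coefficient (5 %/Sv),
# expressed as a percentage of the exposed population.

RISK_FATAL_CANCER = 0.05  # per sievert (ICRP Publication 60, whole population)

def excess_mortality_percent(collective_dose_person_sv, population):
    excess_deaths = collective_dose_person_sv * RISK_FATAL_CANCER
    return 100.0 * excess_deaths / population

# Zones of rigid control: 31 000 person-Sv over 273 000 people.
print(round(excess_mortality_percent(31_000, 273_000), 2))
# -> 0.57, in line with the ~0.56 % quoted in the text

# All nine regions: 192 000 person-Sv over 15 617 000 people.
print(round(excess_mortality_percent(192_000, 15_617_000), 2))  # -> 0.06
```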
In considering this excess mortality, it should be borne in mind that the annual increment in malignancies in developed countries can be up to 1.5 %, which exceeds the assessments mentioned above. Besides that, it does not seem possible to observe radiation-induced cancers (except for malignant tumours of the thyroid gland) at this expected level against all fatal cancers. Firstly, there are statistical restrictions. For example, to get statistically valid data on deleterious effects due to irradiation at doses of about 10 mSv per person, both the exposed and the reference group would have to consist of 5 million people. Secondly, the natural fluctuation in oncological diseases is about ±10 %, exceeding the expected effect. Thirdly, radiation-induced tumours do not differ from those of other origins. Among all cancers known to be radiologically inducible, leukaemia (malignant tumour of the blood-forming tissue) is of major interest. The peculiarity of leukaemias, especially the acute forms, is their short latent period. It is known that the minimum latent period of leukaemia is 2 years, with the maximum incidence falling within 5-15 years after irradiation. The lifetime risk of leukaemia induction at low-intensity irradiation is, according to the ICRP data, about 0.005 per sievert, that is, one-tenth of the total risk coefficient for all fatal cancers. Studies carried out in Russia, Ukraine and Byelorussia show the lack of statistically valid differences in leukaemia induction between the pre- and post-accident periods. Even though leukaemia is the earliest stochastic effect of irradiation to appear, the time elapsed is not sufficient for a final conclusion. However, no leukaemia epidemic is expected. Malignant tumours of the thyroid gland are rarer in humans than leukaemias, and fatality due to these tumours lies within 5-10 %.
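The first, statistical, restriction above can be reproduced as an order-of-magnitude estimate: comparing two equal groups, the excess must exceed roughly two standard errors of the difference of two proportions. The baseline, criterion and normal approximation below are assumptions chosen to illustrate the quoted figure:

```python
# Order-of-magnitude sketch of the statistical restriction discussed above:
# how large must two equal groups (exposed vs. reference) be before a 10 mSv
# excess cancer risk stands out at ~2 sigma?  Assumptions: baseline lifetime
# cancer mortality ~20 %, risk coefficient 5 %/Sv, normal approximation for
# the difference of two sample proportions.

p_baseline = 0.20          # spontaneous lifetime cancer mortality
excess = 0.05 * 0.010      # 5 %/Sv x 10 mSv = 5e-4 absolute excess risk
z = 1.96                   # ~95 % two-sided criterion

# Variance of the difference of two proportions of size n each is roughly
# 2*p*(1-p)/n; require the excess to exceed z standard errors.
n = z**2 * 2.0 * p_baseline * (1.0 - p_baseline) / excess**2
print(f"{n / 1e6:.1f} million per group")  # -> 4.9, cf. "5 million" above
```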
Taking account of the low frequency of thyroid malignancies of natural induction, it could be expected that radiation-induced cancers would exceed those occurring spontaneously. According to world statistics, the minimum latent period for thyroid tumour manifestation is about ten years after irradiation. However, only three years after the accident, a sharp increase in thyroid cancers was reported from Byelorussia. There is still no consistent opinion on this subject among scientists. During the first weeks after the accident, measurements of iodine-131 content in thyroid glands were carried out on 31 000 people in the most contaminated regions of Russia. These measurements revealed a strong dependence of dose on age: children younger than 3 years old received doses 5-8 times higher than adults under the same conditions. Mean doses to different age groups varied within 1-20 cGy, but individual doses could be 10 Gy or higher. The forecast30 of the excess number of thyroid cancers for the population of the contaminated Kaluzhskaya and Bryanskaya regions (populations 105 300 and 466 900 respectively) is given in Table 11. As seen from the Table, the attributable risk for children in these regions is 45 % and 26 % respectively; that is, every second or third cancer respectively will be radiation induced. The next category of deleterious effects attributed to radiation is hereditary effects. It is known that radiation-induced damage occurs in the gonads; such damage can manifest itself as impairment in the descendants of irradiated individuals. The likelihood of such hereditary effects was clarified by the ICRP in 1990 (Publication 60). It adopted a coefficient for
serious hereditary effects applicable to the whole population. According to this coefficient, it can be estimated that the expected number of serious hereditary ill-health effects occurring in subsequent generations of people residing in the 9 contaminated regions of Russia, Byelorussia and Ukraine will be somewhat more than 100 occurrences per 1 million. Taking account of the high spontaneous level of clinically significant hereditary disease in humans (60 000 innate anomalies and 15 000 genetic diseases per 1 million live-born children), it becomes clear that there are insuperable practical difficulties in detecting these theoretically possible extra hereditary effects. There is also a hazard of radiation injury to the embryo and foetus. Susceptibility to radiation is greatest between the 8th and the 15th week of pregnancy, when irradiation by doses above 1 Gy can result in mental retardation; the ICRP believes that these effects may have a threshold. But such doses were never reached in women as a result of the Chernobyl accident. Finally, some words about the radiological significance of the so-called "hot particles" of the Chernobyl accident. As a result of complicated physical and chemical processes of fuel destruction and condensation of volatiles, a large number of particles of high specific activity were generated. If a hot particle lodges in the human body, local doses in its immediate vicinity can be extremely high; this was regarded as a potential cause of increased cancer induction. It is worth noting that if we suppose a linear dose-response dependence for an organ as a whole, there is no hot-particle problem: the energy deposited in the organ, and the mass of the organ, are the same whether the activity is concentrated in a hot particle or uniformly distributed. An increased response is possible only for effects that are non-linear with dose.
The comparative hazard of Chernobyl fuel hot-particles and of similar uniformly distributed activity was assessed34 according to a model of non-linear dose-response dependence. Without considering the calculation in detail, it can be noted that the hot-particles proved to be several times less hazardous than would be expected from the intake of the same amount of the radionuclides in gaseous form.
Criteria of residence in contaminated areas As mentioned above, by early May 1986 a generalised map of dose rates around the ChNPP had been obtained on the basis of aerial survey and terrestrial measurements, Fig. 6. This map was regarded as the guide for the evacuation of the population. According to proposals made by the Ministry of Health and the State Committee for Meteorology, the zones adjacent to the ChNPP area were set as follows: the exclusion zone (dose rate > ...); the evacuation zone (dose rate > ...); and the zone of rigid control (dose rate > ...), where temporary evacuation of children and pregnant women was carried out. According to the assessments, this last dose rate corresponded, on the 10th of May 1986, to an annual dose equivalent of 50 mSv (5 rem). That was half the permissible dose limit set by the Ministry of Health for the first year after the accident. To facilitate planning of the protective measures, criteria were developed on the basis of surface contamination by long-lived radionuclides. Thus the urgent task was to delineate the regions where contamination exceeded the accepted limits. The idea was to set instrumentally measurable limits of contamination that would ensure permissible levels of annual intake and external irradiation. Limits were chosen as criteria for caesium-137, for strontium-90 and for the alpha-emitting isotopes of Pu. The basic criterion, 100 mSv for the population for the first year after the accident, was mainly met (average doses being 3-4 times lower). This was the reason for the National Commission on Radiological Protection (NCRP) to set annual dose limits as follows: 30 mSv/y in 1987, 25 mSv/y for 1988-1989, and 173 mSv as a total dose up to the 1st of January 1990. Theoretical assessments by the Ministry of Health indicated that the group of people whose irradiation exceeded the emergency limit (set at 173 mSv) constituted 0.4 % of the population.
The NCRP of the former USSR was given the task of developing recommendations on the substantiation of permissible limits of irradiation of the population over the long term, including the restoration period. It should be noted that international practice had not yet developed a distinct position on this question. The working group of the NCRP proposed to utilise, as a criterion for radiological protection, the individual dose to some critical group of the population; in this case, the critical group consisted of children born within the period 1986 ± 2 years. It is worth noting that the usual distribution of dose is log-normal and that, in the case of the Chernobyl accident, individual doses (within a single group) could vary by a factor of five. As a result the NCRP proposed in October 1988: to set a quantity of 350 mSv as the sum of external and internal irradiation over 70 years of life, taking children as the critical group, this limit to include the doses for the previous years since the accident; to consider this foreseeable limit as an intervention level for planned and controllable evacuation of people from areas where it would be exceeded; and to put the recommendations into practice from 01.01.1990.
According to the assessments, the dose limit of 350 mSv could be exceeded within the ZRC for a population of ~56 000 in 242 populated areas if no measures to improve the radiation situation were carried out. Mainly these were the most heavily contaminated villages. For many reasons, the implementation of the above-mentioned concept was very difficult. As a result the Academy of Sciences of the USSR created a Commission to develop a more humane concept. The objective of this new concept was mainly a social compromise, responding to radiophobia (fear of radiation) among the population. It was proposed to take, as the criterion for any necessary resettling of the population, an annual dose of 5 mSv, and as a "criterion of intervention" a dose of 1 mSv/year. For settlements where doses of irradiation ranged within 1-5 mSv/y, it was recommended to take protective measures to reach a dose of 1 mSv/y, on condition that voluntary resettlement was available along with payment of compensation at State expense. The articles of this conception were realised in the law "On the social protection of citizens incurring radiation due to the accident at the ChNPP", passed by the Supreme Council of the Russian Federation on 15th May 1991. After the break-up of the USSR, the newly created Russian National Commission on Radiological Protection (RNCRP) approved the modified project for the "Conception of protection of population and economic activities in the areas affected by radioactive contamination" and "Proposals on its practical realisation". The main difference of the new Conception from the previous one was its refusal to impose compulsory resettlement of the population away from the contaminated regions. In the new variant, areas where doses of 1-5 mSv/y are likely are referred to as zones of radiation control of the environment, agricultural production and doses to the population.
Areas where annual doses of 5-50 mSv are likely are referred to as zones of voluntary resettlement. 10. CONCLUSION In September 1996, while this article was being written, the mass media of Ukraine and other states reported on a supposed "self-sustaining chain reaction" (SCR) that had occurred in the lower part of the Ukritiye. This publicity sowed seeds of anxiety which fell on the fertile soil of fear and misunderstanding. The facts were that an increase of neutron counts by a factor of 3-5 was observed over a period of several hours. It will require a considerable effort by the scientists and engineers involved to provide the population with a real explanation of the event. Very properly, neutron monitoring had been maintained to detect a self-sustained critical reaction. Neutrons are observed even in a sub-critical system, however, owing to long-lived delayed-neutron sources and spontaneous fission in uranium-238, multiplied by the remaining dispersed fuel. The observed increase factor would not be consistent with a dangerous SCR, but could be due to more minor changes in conditions within the Sarcophagus. The event could in reality be due to two causes. Firstly, it could be due to trivial instrument failure; though all possible tests were performed, this cause cannot be ruled out entirely, making the observation spurious. Secondly, it could be due to physical effects. It had rained
heavily on those days and large amounts of water penetrated the Sarcophagus. Additional water in the melted fuel (LFCM) would result in a change of neutron spectrum by moderation and hence of the multiplication properties of the LFCM. Both of these effects would lead to an increase in count rate in the detectors. But the rise was perhaps a billion times lower than would be consistent with a self-sustaining reaction. And we might remind ourselves that even if such an SCR occurred, it would be dangerous only for those in the immediate vicinity. The hazard should not be overestimated, and the publicity smacks of taking advantage of the true Chernobyl tragedy. But the event strengthens, we believe, our argument that it is necessary to start work immediately on the stabilisation of the Sarcophagus and to improve its safety before the next century is reached. When we speak of remediation, what should our target be? In our opinion, the Chernobyl area itself should be considered as a reserve and not the focus of remediation. Money and human health should not be wasted on the removal of fuel particles; the natural environment has taken care to catch and restrain them successfully. Dissemination of radioactivity in this area is very slow and does not require the evacuation of people. This would leave it feasible, under proper care, to continue the operation of the remaining power plants with their essential supply of electricity. We cannot say the same of the caesium contamination that covers thousands of square kilometres of valuable land. The loss of useful territory is too large, and here active remediation measures are necessary. It is essential to develop economically acceptable procedures to return people to the land and to begin to re-use it. The question of the long-term consequences of the Chernobyl accident for the health of the peoples of Russia, Ukraine and Byelorussia remains difficult to answer.
Even though the expected number of additional mortalities from cancers is unlikely to be seen against the statistical background of natural death, the deleterious effect includes a wide range of factors which probably interact to strengthen each other. First amongst these we would put factors of a social and psycho-emotional nature. While less tangible, we do not deny their reality. It is difficult, however, to separate the consequences of Chernobyl from the general changes in the health and reported health of the former USSR. Finally we have to say that all the power of the former Soviet Union and its ability to concentrate such a huge force in this single endeavour have not served to overcome the problems of the accident. Most of these problems remain to be solved by future generations.
REFERENCES 1. The accident at the Chernobyl NPP and its consequences. USSR State Committee on the Utilization of Atomic Energy. IAEA Post Accident Review Meeting, 25-29 August, Vienna (1986). 2. A.A. Abagyan, E.O. Adamov, E.V. Burlakov et al. Chernobyl accident causes; overview of studies over the decade. IAEA International Forum "One Decade After Chernobyl: Nuclear Safety Aspects". April 1-3, IAEA-J4-TC972, Working Material, 46:65, Vienna (1996). 3. V.A. Sidorenko. Nuclear safety of the RBMK reactors: main results and prospects. IAEA International Forum "One Decade After Chernobyl: Nuclear Safety Aspects". April 1-3, IAEA-J4-TC972, Working Material, 435:447, Vienna (1996). 4. A.A. Borovoi. Fission product and transuranic release during the Chernobyl accident. Materials of International Conference "The Fission of Nuclei - 50 Years". Leningrad (1989).
5. A.A. Borovoi. Inside and Outside the Sarcophagus. Preprint of the Complex Expedition of the Kurchatov Institute. Chernobyl (1990). 6. A.R. Sich. Chernobyl accident management actions, Nuclear Safety, v.35(1), 1:23 (1994). 7. Nuclear accident and RBMK reactor safety, report GRS-130, Berlin (1996). 8. S.N. Begichev, A.A. Borovoi, E.V. Burlakov et al. Fuel of the Unit 4 of the ChNPP (short reference book). Preprint of Kurchatov Institute 5268/3. Moscow (1990). 9. A.A. Borovoi, A.A. Dovbenko, M.V. Smolyankina and A.A. Stroganov. Definition of nuclear physical properties for the Unit 4 of the ChNPP. Report of the NSI AS 52/11-20, Moscow (1991). 10. E.A. Warman. Soviet and far-field radiation measurements and an inferred source term from Chernobyl. Presented at the New York Chapter Health Physics Symposium, April 3, Brookhaven National Laboratory, New York (1987). 11. G. Kirchner and C. Noack. Core history and nuclide inventory of Chernobyl core at the time of the accident. Nuclear Safety, v.29(1), 1:15 (1988). 12. A.R. Sich. Chernobyl Accident Revisited, Ph.D. dissertation, MIT, Massachusetts (1994). 13. Sandia National Laboratories. Technical report on item 6 - development of a model of fuel-surrounding material interactions. Contract AL-9407. Albuquerque (1995). 14. A.A. Borovoi. Analytical report (Post-Accident Management of Destroyed Fuel from Chernobyl) IAEA Working Material, Vienna (1990). 15. Yu.A. Israel, V.N. Petrov, S.I. Avdiyushin et al. Radioactive pollution of the environment at the accident zone of the ChNPP. Meteorology and Hydrology, No 2, 5:18 (1987). 16. Ch. Hohenemser and O. Renn. Chernobyl's other Legacy. Environment. 30:4 (1988). 17. L.A. Ilyin et al. Radiocontamination patterns and possible health consequences of the accident at the ChNPP. Journal of Rad. Protection, v.10(1) (1990). 18. S. Beliayev, A. Borovoi et al. Radioactivity Releases from the Chernobyl NPP Accident. International Conference
"Comparison of Consequences of Three Accidents: Kyshtym, Chernobyl, Windscale", October 1-5, Luxembourg (1990). 19. Yu.A. Israel, E.D. Stutkin, I.M. Nazarov and A.D. Fridman. Radioactive pollution of the environment after the Chernobyl accident. Radiological, Medical and Social Consequences of the Accident at the ChNPP. Remediation of the Areas and Population. 21-25 May. Theses of the report, Moscow (1995). 20. S.V. Ilyichev, O.A. Khochetkov, V.P. Kriyuchkov et al. Retrospective Dosimetry for the Participants of Mitigation Activities after the Accident at the ChNPP. "Seda-Style", Kiev (1996). 21. M.V. Sidorenko et al. Forecast of lasting stability of the "Ukritiye" encasement of the Unit 4 of the ChNPP. Final Report on Contract No. 877. Kiev (1994). 22. V.P. Beskorovaynyi, V. Kotovich, V.G. Molodykh, V.V. Skurat, L.A. Stankevich and G.A. Sharovarov. Radiation consequences of collapse of structural elements of the Sarcophagus. "Sarcophagus Safety '94". The State of the Chernobyl Nuclear Power Plant Unit 4. Proceedings of International Symposium, 14-18 March, Zeleny Mys (1994). 23. Preparing and Expert Assessment of Input Materials for a new issue of the "Base for Radiational Safety of the "Ukritiye". Report 09/39 of State Enterprise "Expertise", Moscow (1992). 24. G. Pretzsch. Analysis of the accident "Roof Collapse" for the "Ukritiye" Encasement. Report GRS-A2241, Berlin (1995). 25. S.T. Beliayev, A.A. Borovoi, V.G. Volkhov et al. Technical Basis of Nuclear Safety for the "Ukritiye" Encasement (TBNS), Complex Expedition of Kurchatov Institute, Chernobyl (1990). 26. Chernobyl. Five Difficult Years: Collection of the Materials. IzdAT, Moscow (1992). 27. A.V. Kretinin, A.F. Landin. Efficiency analysis for countermeasures on the reduction of irradiation to populations residing in the radioactively contaminated areas. Problems of Chernobyl Exclusion Zone, Minchernobyl, Naukova Dumka, Kiev (1995). 28. L.A. Ilyin. Realities and Myths of Chernobyl.
"ALARA Limited", Moscow (1994). 29. V.K. Ivanov, A.P. Konogorov, A.F. Tsyb, Eu. M. Rastopchin, M. A. Maksyutov, A.I. Gorsky, A.P. Biryukov, S.Yu. Chekin. Planning of long-term radiation and epidemiological research on the basis of the Russian National Medical Dosimetric Registry. Nagasaki Symposium on Chernobyl Update and Future, Amsterdam (1994). 30. A.F. Tsyb, L.A. Ilyin and V.K. Ivanov. Radiational risks of Chernobyl: an assessment of mortality and disability on the basis of data of the National Radio-epidemiological Registry (1995). Radioecological, Medical and Social Consequences of the Accident at the Chernobyl NPP. Remediation of the Areas and Population. All-Russian Conference, 21-25 May, Moscow (1995). 31. Recommendations of the International Commission on Radiological Protection. Publication 60.
International Commission on Radiological Protection. 1990. Pergamon Press, Oxford and New York (1991). 32. L.A. Ilyin. Time-limits for radiational influence, irradiation of the population and medical consequences of the Chernobyl accident. Med. Radiology. No 12, 9:17 (1991). 33. V.F. Stepanenko, A.F. Tsyb, Yu.I. Gavrilin et al. Doses of internal irradiation to the thyroid for the Russian population as a result of the Chernobyl Accident. Radioecological, Medical and Social Consequences of the Accident at Chernobyl NPP. Remediation of Areas and Population. All-Russian Conference, 21-25 May, Moscow (1995). 34. S.A. Bogatov. Method of comparative assessment of the radiological significance of inhalation of "hot particles" of the Chernobyl accident. Kurchatov Institute Preprint 5601/3, Moscow (1992). 35. Yu.A. Israel, S.M. Vaculovsky, V.A. Vetrov, V.N. Petrov, F.Ya. Rovinsky and E.D. Stukhin. Chernobyl: Radioactive Pollution of the Environment. Hydrometeoizdat, Leningrad (1990).
DYNAMIC RELIABILITY
Jacques Devooght Service de Métrologie Nucléaire Université Libre de Bruxelles 50, av. F.D. Roosevelt B - 1050 Brussels
1. INTRODUCTION Dynamic reliability is a term coined for the chapter of reliability theory linked to dynamic systems. If reliability is the ability of an industrial system to perform a given task over a mission time without failure, dynamic reliability emphasizes the fact that such systems evolve dynamically, that failures (and repairs) can influence the dynamics and, reciprocally, that the dynamics (and its associated state variables) can affect failure or repair rates. Since the evolution of the system can in general branch at any time from one dynamics to another, the resulting event tree is infinitely ramified: hence the alternative term continuous event tree used for dynamic reliability. The term "probabilistic dynamics" is also used to make it clear that a system switches randomly from one dynamics to another. The industrial and scientific context in which such studies appear is clearly the field of PRA (probabilistic risk assessment) and PSA (probabilistic safety assessment) used for instance in nuclear reactor safety studies, whose backbone is the event tree/fault tree methodology1. The current use of this methodology has been reviewed recently2. Dynamic reliability as a specialized field has also been reviewed recently, by N. Siu3. The interested reader is encouraged to read this thorough review first; it is a bottom-up approach to dynamic reliability, in the sense that it shows clearly how the classical approach can sometimes lead to erroneous results and how new methods were progressively developed to remove the defects. We, on the other hand, will adopt here a complementary, top-down approach in which we try to formulate the problem in a general mathematical setting, introduce the approximations needed to cope with the extensive numerical problems met, and try to unify existing approaches.
Advances in Nuclear Science and Technology, Volume 25 Edited by Lewins and Becker, Plenum Press, New York, 1997
JACQUES DEVOOGHT
The plan of the paper is the following:

I. METHODS

1. Introduction.
2. Physical setting.
3. Chapman-Kolmogorov equations.
4. Reduced forms.
5. Exit problems.
6. Semi-Markovian generalization.
7. Subdynamics.
8. Application to event trees.

II. SOLUTION TECHNIQUES

9. Semi-numerical methods.
10. The Monte Carlo method.
11. Examples.
12. Prospects and conclusions.
2. PHYSICAL SETTING Ideally an event tree should start from a given top event (characterized by the current state of the reactor and possibly its past history) and yield probabilistic statements on the future state of the reactor: on the physical state variables (like power, temperature, isotopic concentrations, etc.) and on the component (or hardware) state of the system, taking into account the protection devices, controls and operator interventions, including all possible failures, malfunctions, operator errors, etc. These probabilistic statements are used either to determine damage to installations and the public (as in level 2 and level 3 PSA studies4) or to pinpoint defects of design, procedures, etc. Obviously no method is able today, nor is it likely in the near future, to yield such sweeping statements, essentially on two counts: (1) the considerable complexity of the problem, due to a combinatorial explosion of possible situations combined with uncertainties in most data, not to mention the complexity of human modelling; (2) the sheer volume of numerical calculation needed for any realistic appraisal. The classical event tree is essentially static in the sense that it takes into account the chronology of events, not their actual timing. More precisely, to quote N. Siu3, there are "a number of structural characteristics of this static, logic-based approach for modeling accident scenarios which are of interest. First, if variations in the ordering of the success and failure events are possible, these variations do not affect the final outcome of a scenario or its likelihood. (If ordering does make a difference, the event tree would have to be expanded in order to handle possible permutations of events). Second, variations in event timing do not affect scenario outcomes or frequencies, as long as these variations are not large enough to change "failures" to "successes", or vice versa.
Third, the effect of process variables and operator behavior on scenario development are incorporated through the success criteria defined for the event tree/fault tree top events. Fourth, the boundary conditions for the analysis of a given top event (or basic event, when dealing with cut set representations of accident scenarios) are provided in terms of top event (basic event) successes and failures; variations in parameters not explicitly modeled are not treated".
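The order- and timing-independence of the static approach described in the quotation can be made concrete: a scenario's frequency is simply the initiating-event frequency times the branch probabilities of its top-event outcomes, however they are ordered in time. A minimal sketch, in which the initiating-event frequency and top-event success probabilities are illustrative assumptions, not values from the text:

```python
# Minimal sketch of a classical *static* event tree: each scenario is a set
# of top-event successes/failures, and its frequency is the product of the
# initiating-event frequency and the branch probabilities, independent of
# event ordering or timing.  All numerical values are hypothetical.

INIT_FREQ = 1.0e-3  # initiating events per year (assumed)
TOP_EVENTS = {"scram": 0.99, "aux_feedwater": 0.95}  # success probabilities

def scenario_frequency(outcomes):
    """outcomes: dict mapping top event -> True (success) / False (failure)."""
    f = INIT_FREQ
    for event, success in outcomes.items():
        p = TOP_EVENTS[event]
        f *= p if success else (1.0 - p)
    return f

# Scenario: scram succeeds, auxiliary feedwater fails.
print(scenario_frequency({"scram": True, "aux_feedwater": False}))
# -> 1e-3 * 0.99 * 0.05 = 4.95e-05 per year
```

Dynamic reliability departs from exactly this picture: once branch probabilities depend on the evolving state variables, scenario frequency is no longer a fixed product of constants.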
However, if we plan a top-down approach, we should at least be able to list the characteristics of a realistic model. 1. Models of the system under study.
The subject of nuclear dynamics is the evolution with time of state variables such as power, temperature and pressure, etc. (as well as neutron or precursor densities) and the study of "forces" such as reactivity. The state of the reactor in a dynamics problem is described by a vector whose components are fields, such as power density and temperature, which are functions of the position and time t. A huge amount of work has been devoted to the definition of condensed descriptions of the state of the reactor. From continuous to nodal and from nodal to point descriptions, we have a whole set of approximations to which a probabilistic description can be appended. They are all characterized by the fact that all parameters are assumed to be known, and in that sense, reactor dynamics is a deterministic problem. The knowledge of the parameters amounts to the knowledge of the structure of the reactor : components function properly, or else in accident analysis, their mode of failure is given. In general, whole subsystems are compactly described by laws that do not disclose their internal structure. For instance, the safety control rods may be introduced through ramp reactivity laws, etc. 2. Time scale of the accident.
Needless to say, knowledge of the initial state of the reactor is essential for the development of the accident. However, this development depends not only on failures happening after the initiating event but also on failures or errors that developed before the initiating event, as in the case of passive safety devices which lie dormant. Usually the interval of time over which accidents develop precludes significant additional failures besides those already present at the start, except those induced by the operators, or some hardware failures whose rates increase rapidly when state variables grow out of their nominal range or when sensors are destroyed.

3. Change of state of hardware components.

These changes of state are provoked either by protection devices, by operator intervention, or by the physical influence of state variables inducing failure (e.g., rupture of a vessel by overpressure). They are either deterministic or probabilistic.

4. Human error modelling.
Recovery actions by operators are essential parts of dynamic event trees and are characterized by time distributions (see § 6).

5. Uncertainty of data and models.
The treatment of model uncertainty is sometimes reduced to the treatment of the uncertainty of parameters involved in the models - which is not the whole story5. Failure rates are uncertain and characterized by a distribution. Latin hypercube sampling is often used to treat uncertainties in parameters for models which are otherwise deterministic.
JACQUES DEVOOGHT
However, there is no difficulty in introducing parameter uncertainty - at little cost - in simulation methods of calculation6, which are essential and practically the only tools for dynamic reliability studies. Most of the reluctance met over the treatment of uncertainty is unjustified from a Bayesian point of view: if experts disagree, probabilistic statements incorporating their opinions are after all belief statements that should incorporate whatever knowledge we have. Objections have been made that failure probabilities are subject to uncertainties, and therefore that their dependence on physical variables is subject to even greater uncertainties, if not to outright ignorance; therefore any theory using this concept would be useless or premature. We do not believe the objection to be valid, on two distinct grounds7:

a. The evolution of a transient under the influence of a mix of deterministic and probabilistic laws (and where transition points are not predetermined) is a bona fide problem of nuclear safety; therefore, our objective is to give a proper rigorous mathematical setting for this problem, a starting point from which methods of solution and adequate data collections can be examined.

b. Transition laws already exist that depend on state variables, such as the probability of ignition of a hydrogen explosion as a function of its concentration, or the probability of containment break as a function of overpressure8. Many experts' opinions are expounded in this form9. Moreover, sensitivity studies can be used to bracket the uncertainties10.
6. Time delays.

Contrary to the standard event tree methodology, where time plays only an ordering role (before, after), time plays an essential role in dynamic reliability, where competing processes are at play and where the next branching is determined, for instance, by the fastest process. This is of course true in deterministic dynamic processes, but in our case we must add the fact that certain processes, like human intervention, have uncertain duration. On a shorter scale, uncertainty on time delays in breakers, relays, etc. may influence the outcome of a fast transient. Finally, the holding-time distribution is at the root of the distinction between Markovian and semi-Markovian modelling: in the former it is necessarily exponential.
3. THE CHAPMAN-KOLMOGOROV EQUATIONS

3.1. Forward and backward equations

Since our objective is the description of reactor transients where changes of state of the reactor can occur randomly, we naturally have to look at what the theory of stochastic processes has in store for us. The most developed theory is based on a Markovian hypothesis, which loosely speaking amounts to saying that the failure evolution of a system is dependent only on its present state. If a stochastic variable x has taken values x̄_1, …, x̄_n at times t_1 < … < t_n, the conditional probability

P(x̄_n, t_n | x̄_{n−1}, t_{n−1}; …; x̄_1, t_1)

can be written P(x̄_n, t_n | x̄_{n−1}, t_{n−1}), i.e. the past can be ignored. This is a strong assumption, and strictly speaking most physical systems do not obey it, if only because aging of a system obviously influences its actual behaviour. However, it is very often adopted because departures from a Markovian hypothesis are usually small; if not, there is a well known trick11 which amounts to enlarging the state space, e.g. introducing auxiliary variables to reduce the problem again to a Markovian formalism. This is a last-resort measure because it is extremely costly from a numerical point of view. However, as we shall see below, the case of "semi-Markov" systems can be handled with little more difficulty. To describe the state of a reactor, we need at each time t:

1/ A state vector x̄(t) whose components are power, fluxes, temperature, etc., i.e. whatever variables are needed to describe adequately the neutronic and thermohydraulic state of the reactor and its auxiliary equipment. The choice and definition of x̄(t) is considered to be adequate for the purpose of PRA analysis.

2/ The component (or hardware) state of the reactor, labelled by an integer i. If for instance the reactor has n lumped components, each of which can be in m different states, we have a total of M = m^n states, i.e. the index i runs from 1 to M = m^n.
If x̄ is a vector in a space E ⊂ R^N, the state of the reactor can be given as a point (x̄, i) in a space which is the Cartesian product E × {1, …, M}, i.e. M replicas of E. A Markov process can be either continuous or discontinuous. Let us assume that x̄(t) is continuous but that the hardware state index i jumps from one integer to another. Symbolically we can describe a sample trajectory in phase space as in Fig. 1.
Let us remark that for a general stochastic process, x̄(t) need not be continuous, although it is in our case. Its evolution between jumps could also be either deterministic or stochastic. In the first case we have a so-called piecewise deterministic process12 and in the second case a Wiener process, better known as a Brownian motion. In the first case, for each state i, the reactor obeys deterministic dynamics

dx̄/dt = f̄_i(x̄)   (3.1)

Standard dynamic models use fields, like the temperature or power field at each point of the reactor, and yield partial differential equations. We assume in writing (3.1) that these equations have been discretized and reduced to a set of ordinary (non)linear differential equations. Our objective is to find the conditional probability density π(x̄, i, t | x̄_0, i_0, t_0), where the initial state is (x̄_0, i_0) at time t_0, and the Chapman-Kolmogorov equation yields13 the set of partial differential equations

∂π(x̄, i, t)/∂t + ∇̄·[f̄_i(x̄) π(x̄, i, t)] = Σ_{j≠i} p(j→i | x̄) π(x̄, j, t) − λ_i(x̄) π(x̄, i, t)   (3.2)

where

λ_i(x̄) = Σ_{j≠i} p(i→j | x̄)   (3.3)

and p(i→j | x̄) is the conditional probability of transition per unit time from component state i to component state j. All the knowledge necessary to develop a PRA analysis is in fact summarized in f̄_i(x̄) and p(i→j | x̄). Let us note however that p(i→j | x̄) concerns not only spontaneous transitions but also transitions induced by controls and operators, although in the latter case we must enlarge our phase space (see § 6). From a mathematical point of view, (3.2) is a system of M linear partial differential equations, and π is, for a given t, a multivariate distribution in a space of N dimensions. Therefore no standard numerical scheme (except Monte Carlo) is ever likely to solve (3.2), and the main purpose of (3.2) is to lead to new concepts and introduce simplified models. Let us point out however that the number of distinct dynamics (3.1) is usually considerably smaller than M. Finally, if we forego the deterministic dynamics assumption, we will obtain additional second-order (e.g. diffusion-like) terms in (3.2), a situation we will meet later (§ 7) when uncertain parameters are involved in the dynamics. To the "forward" Kolmogorov equation (3.2) (so called because "future" variables are involved) we can append an equivalent equation, the "backward" Kolmogorov equation

−∂π(x̄, i, t | x̄_0, i_0, t_0)/∂t_0 = f̄_{i_0}(x̄_0)·∇̄_0 π(x̄, i, t | x̄_0, i_0, t_0) + Σ_{j≠i_0} p(i_0→j | x̄_0) [π(x̄, i, t | x̄_0, j, t_0) − π(x̄, i, t | x̄_0, i_0, t_0)]   (3.4)

which gives identical results, the initial condition for both being

π(x̄, i, t_0 | x̄_0, i_0, t_0) = δ_{i i_0} δ(x̄ − x̄_0)   (3.5)
Equation (3.4) will play an essential role in the definition of generalized reliability and exit times.

3.2. Particular cases

Let us remark first that the forward Kolmogorov equation contains two important particular cases:

a. p(i→j | x̄) ≡ 0, e.g. the reactor does not change in any way its component state. Equation (3.2) reduces to a Liouville equation with solution

π(x̄, i, t | x̄_0, i_0, t_0) = δ_{i i_0} δ(x̄ − ḡ_{i_0}(t, t_0, x̄_0))   (3.6)

where ḡ_{i_0}(t, t_0, x̄_0) is the solution of the O.D.E. system (3.1) with initial condition x̄(t_0) = x̄_0. This is the usual deterministic reactor dynamics case.

b. f̄_i(x̄) ≡ 0, i.e. the dynamics is uninfluenced by the component state of the reactor and it is initially in (and stays in) a steady state x̄_0. Then, with P(i, t) = ∫ π(x̄, i, t) dx̄,

dP(i, t)/dt = Σ_{j≠i} p(j→i | x̄_0) P(j, t) − λ_i(x̄_0) P(i, t)   (3.8)

which is the standard Markovian model used in reliability, except that the parameter x̄_0, which is here fixed, does not usually appear explicitly.

Equation (3.2) has the form of a conservation equation for a probability fluid, and taking into account case (a) we can rewrite it in integral form7. To simplify notation we shall no longer write explicitly the initial condition (x̄_0, i_0, t_0) unless it is necessary to do so, and we have:

π(x̄, i, t) = e^{−∫_0^t λ_i(ḡ_i(s, 0, x̄_0)) ds} δ_{i i_0} δ(x̄ − ḡ_i(t, 0, x̄_0))
    + Σ_{j≠i} ∫_0^t dτ ∫ dx̄′ π(x̄′, j, τ) p(j→i | x̄′) e^{−∫_τ^t λ_i(ḡ_i(s, τ, x̄′)) ds} δ(x̄ − ḡ_i(t, τ, x̄′))   (3.9)
The physical meaning of (3.9) is clear: the contribution to trajectories in phase space from (x̄_0, i_0, 0) to (x̄, i, t) is given either by trajectories of unchanged component state, with a probability e^{−∫_0^t λ_i ds} of not leaving state i, or by trajectories that start from x̄′ at some time τ in state j, have a transition from j to i, and do not change this state anymore in the remaining interval (τ, t). The analogy with neutron transport is evident: if we write the monoenergetic Boltzmann equation with discretized angular fluxes in its standard notation, we have the immediate correspondence between the component state i and the discretized flight direction, between the dynamics f̄_i and the streaming term, between the transition rates p(j→i | x̄) and the scattering kernel, and between λ_i(x̄) and the total cross section. It is even more apparent in the context of statistical mechanics, using a Hamiltonian formalism with x̄ = (q̄, p̄), where

f̄_i = (∂H_i/∂p̄, −∂H_i/∂q̄)

corresponds to (3.1) and where the Liouville theorem gives ∇̄·f̄_i = 0, the component state i being for instance a label for a type of particle (for instance its charge), each type of particle having its own dynamics. The system of integral equations (3.9) will be generalized in § 6, allowing for a semi-Markovian formulation; as such it is the point of departure for Monte Carlo methods of solution.
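The Monte Carlo approach suggested by (3.9) can be illustrated with a minimal sketch. The two-state system, its dynamics and its state-dependent transition rates below are invented for the illustration; the loop alternates the deterministic evolution (3.1) with sampled jumps of the component state index.

```python
import random

# Hypothetical two-state example: in state 0 a pump drives x toward 2.0,
# in state 1 (pump failed) x decays; the 0->1 failure rate grows with x.
def drift(i, x):
    return (2.0 - x) if i == 0 else -0.5 * x

def rate(i, x):
    # transition rate out of state i; the state dependence is an assumption
    return 0.1 + 0.2 * x if i == 0 else 0.05

def simulate(t_end=10.0, dt=0.01, seed=1):
    """One analog trajectory of the piecewise deterministic process."""
    rng = random.Random(seed)
    i, x, t = 0, 1.0, 0.0
    while t < t_end:
        x += drift(i, x) * dt                # deterministic evolution (3.1)
        if rng.random() < rate(i, x) * dt:   # sampled jump of the state index
            i = 1 - i
        t += dt
    return i, x

# Estimate P(i = 1 at t = 10), i.e. an integral of pi(x, 1, 10), by averaging
p1 = sum(simulate(seed=k)[0] for k in range(500)) / 500.0
```

Each trajectory is one sample consistent with the integral equation (3.9); scoring any functional of (x̄, i) over many trajectories estimates the corresponding moment of π.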
4. REDUCED FORMS

4.1. Discretization of physical variables

We can define two auxiliary probability densities:

π(x̄, t) = Σ_i π(x̄, i, t),   P(i, t) = ∫ π(x̄, i, t) dx̄

Summing (3.2) over all states i,

∂π(x̄, t)/∂t + ∇̄·[f̄(x̄, t) π(x̄, t)] = 0   (4.3)

with

f̄(x̄, t) = Σ_i f̄_i(x̄) π(x̄, i, t) / π(x̄, t)   (4.4)

Integrating (3.2) over x̄,

dP(i, t)/dt = Σ_{j≠i} p̃(j→i, t) P(j, t) − λ̃_i(t) P(i, t)   (4.5)

with

p̃(j→i, t) = ∫ p(j→i | x̄) π(x̄, j, t) dx̄ / P(j, t)   (4.6)

and λ̃_i(t) = Σ_{j≠i} p̃(i→j, t).
Not surprisingly, we recover the two special cases (3.6), (3.8), since we have "projected out" the variables i or x̄ without making any special assumptions. But we observe that the parameters of (4.5) are time dependent, contrary to (3.8), even if p(i→j | x̄) is time independent. As such these equations are useless, because their very definition rests on the detailed knowledge of π(x̄, i, t) through (4.4) and (4.6). However, we can partition the space E in cells X_k and integrate (3.2) over X_k. Then, with p_i(k, t) = ∫_{X_k} π(x̄, i, t) dx̄,

∂p_i(k, t)/∂t + ∮_{S_k} (n̄·f̄_i(x̄)) π(x̄, i, t) dS = Σ_{j≠i} ∫_{X_k} p(j→i | x̄) π(x̄, j, t) dx̄ − ∫_{X_k} λ_i(x̄) π(x̄, i, t) dx̄

where n̄ is the exterior normal to the surface S_k of X_k, and S_{kl} is the common surface of X_k and its neighbor X_l. To exploit this equation numerically we must make the additional assumption that π(x̄, i, t) is uniform in each cell,

π(x̄, i, t) ≈ p_i(k, t)/V_k for x̄ ∈ X_k

with V_k the volume of X_k. The surface and volume integrals then reduce to cell-to-cell transition rates, and what we obtain is a pure jump process13. The resulting equation (4.15) is the basis for the CCCMT method14, which is an outgrowth of the CCMT method developed by Aldemir15,16.

4.2. Moments and marginal distributions

We define, using eq. (3.2), the moments of π(x̄, i, t).
To obtain differential equations for the moments, we need some assumptions on the dependence of p(i→j | x̄) on the variable x̄, as well as for f̄_i(x̄). If we assume17 a quadratic dependence of these functions on x̄, the system obtained for the moments will inevitably depend on higher moments. This problem is usually solved with the introduction of an approximate closure relation. The choice made in refs. 17, 18 is the closure relation of a Gaussian distribution. The structure of the system (4.18) so obtained couples the probabilities P(i, t) with the first and second moments, where the unknowns are vectors and matrices; the explicit form of the coefficients is very complex and will not be given here. Methods to solve (4.18), and later to synthesize π(x̄, i, t) from the moments, will be examined in § 9.
Obviously the large number of variables is an important numerical obstacle, and we may be tempted to work only with marginal distributions and later try to synthesize the distribution π(x̄) from its marginals π_k(x_k). Let us point out however that marginal distributions do not uniquely determine the full distribution19. The literature on this subject yields many suggestions, each with its advantages and defects. For instance, let π_k(x_k) be a marginal distribution and F_k(x_k) its cumulative distribution function. Then

π(x̄) = Π_k π_k(x_k) [1 + Σ_{i<j} α_{ij} (2F_i(x_i) − 1)(2F_j(x_j) − 1) + …]   (4.19)

is an interpolant to a distribution with the same univariate marginals. The coefficients α_{ij}, etc. may be fitted to obtain the conservation of covariances and higher moments. An alternate solution, based on the use of Gordon's transfinite interpolation, yields20 an interpolant (4.20) built from auxiliary distributions linked by compatibility relations (4.21). The covariance matrix and some (but not all) third moments may be conserved using the free parameters. Unfortunately positivity of (4.20) cannot be guaranteed under all circumstances, although numerical experience so far (§ 9) did not show much difficulty. Substituting (4.20) in (3.2), with the same quadratic dependence assumed above, and adding an index i to each function in (4.21) to label the Markovian state i, we obtain a system (4.22) for the marginal distributions. We now have N independent systems of hyperbolic partial differential equations for the marginals. The explicit expressions for all coefficients appearing in (4.22) are given in ref. 18.
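The key property of such product-form interpolants, namely that the univariate marginals are preserved for any admissible coupling coefficient, can be checked numerically. The sketch below uses a Morgenstern-type form of this kind; the exponential marginals and the value of alpha are arbitrary illustrative choices, not taken from the chapter.

```python
import math

# Two identical marginals (standard exponentials, an arbitrary choice) and
# a Morgenstern-type coupling; |alpha| <= 1 keeps the joint density positive.
alpha = 0.5

def f(x):  return math.exp(-x)          # marginal density
def F(x):  return 1.0 - math.exp(-x)    # its c.d.f.

def joint(x, y):
    return f(x) * f(y) * (1.0 + alpha * (2*F(x) - 1) * (2*F(y) - 1))

# Numerical check that integrating y out recovers the x-marginal exactly
def marginal_x(x, h=0.01, ymax=20.0):
    s, y = 0.0, 0.5 * h                  # midpoint rule
    while y < ymax:
        s += joint(x, y) * h
        y += h
    return s

err = abs(marginal_x(1.0) - f(1.0))
```

The correction term integrates to zero against either marginal, which is why the marginals survive; only the covariance (and possibly higher moments) is adjusted by the coupling coefficient.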
4.3. Benchmarks

Finally, to close this analytical chapter, we ask ourselves if a closed-form solution of (3.2) is available to serve as a benchmark to test approximate methods of solution. For instance, the trivial example of two Markovian states with a simple state graph and simple dynamics, with initial state x̄_0 in state i = 1, yields a closed-form solution21. Other benchmarks have been solved by P.E. Labeau22.
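In the same benchmarking spirit, the cell-to-cell discretization of § 4.1 can be tested against the deterministic case (3.6). In this sketch (all values illustrative) a single dynamics dx/dt = v on [0, 1] is replaced by a pure jump process between N cells, with rate v/Δx toward the next cell; the mean of the jump process should track the deterministic trajectory, at the price of an artificial diffusion.

```python
# Hypothetical 1-D illustration of the cell-to-cell idea: deterministic drift
# dx/dt = v replaced by jumps between uniform cells of width dx.
N = 50
v = 0.1
dx = 1.0 / N
q = v / dx  # jump rate to the next cell (uniform density assumed per cell)

def step(p, dt):
    out = [0.0] * N
    for k in range(N):
        out[k] += p[k] * (1.0 - q * dt)
        if k + 1 < N:
            out[k + 1] += p[k] * q * dt
        else:
            out[k] += p[k] * q * dt  # last cell absorbs (no exit modeled)
    return out

p = [0.0] * N
p[0] = 1.0                 # deterministic start in the first cell
t, dt = 0.0, 0.001
while t < 4.0:
    p = step(p, dt)
    t += dt

mean_x = sum((k + 0.5) * dx * p[k] for k in range(N))
# The deterministic trajectory starting at x = 0.01 reaches 0.41 at t = 4;
# the jump-process mean should be close, with the cloud spread around it.
```

The agreement of the mean, together with the nonzero variance of the cloud, makes concrete the "smearing" introduced by any such reduced description.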
5. EXIT PROBLEMS13,23

5.1. The event tree as an "exit" problem
One of the fundamental problems of the theory of stochastic differential equations is the calculation of the probability that the representative point in phase space will exit (and when) from a given domain D. For instance, if we superpose a noise on a deterministic process, even if all trajectories point away from the boundary of D, a probability exists that, due to the random perturbations, or noise, the point will cross the boundary. The exit problem is concerned with the distribution of the exit point and with the exit time. The problem is of relevance for transmission over potential barriers in the field of chemical kinetics, and the reader is referred to ref. 24 for other examples. The exit problem is fully relevant to the mathematical theory of continuous event trees and its safety consequences. For instance, consider the event tree associated with a loss-of-coolant accident. Some branches are associated with a return to a safe state and others are associated with catastrophic outcomes, such as the failure of containment followed by radioactive releases. The safety analyst wants to know the probability that x̄(t) will cross the boundary ∂D of a safety domain D, usually a polyhedron determined, for instance, by maximum allowable powers, maximum allowable temperatures or pressures, etc., or by the fact that some dangerous states i are entered. The distribution of the time of passage through ∂D in a given state is not, in general, the distribution of the time of first passage, which is the one of interest in exit problems. To study exit problems, we must convert eq. (3.2) into a boundary value problem. Indeed, the initial value problem that describes reactor dynamics involves no boundaries: the values of x̄ are constrained only by laws of physics (for instance, positivity of temperature), and these constraints are embodied in certain domains of the phase space being inaccessible. The definition of D (Fig. 2) is the analyst's business and, therefore, a function of his objectives. Let us examine the sign of n̄·f̄_i, where n̄ is the exterior normal to ∂D. Let ∂D_i^+ be the part of ∂D where n̄·f̄_i > 0 and ∂D_i^− the part where n̄·f̄_i < 0. When f̄_i varies continuously, as well as n̄, the set where n̄·f̄_i = 0 is a set of lines on ∂D, i.e. to pass from ∂D_i^+ to ∂D_i^− on ∂D we cross a line. However, if D is, for instance, a box with plane faces, we can have n̄·f̄_i = 0 everywhere on a face. The exit problem involves only ∂D_i^+. The partition of ∂D may be different for each component state i. For instance, let i = 1 be the state where all components function properly. In principle, on ∂D we should have n̄·f̄_1 ≤ 0: no trajectory should cross the safety boundary, and an exit from D necessarily involves a transition of i. We transform the problem by forbidding trajectories leaving D to re-enter through ∂D_i^− by imposing

π(x̄, i, t) = 0 for x̄ ∈ ∂D_i^−   (5.1)

The time of crossing of ∂D is then the time of first crossing, because there are no second crossings due to eq. (5.1).
Condition (5.1) is equivalent to the vacuum boundary condition used in transport theory. The domain outside D is made fully absorbing, and no return is allowed. We remark that condition (5.1) is more involved because it applies only to the part ∂D_i^−, which is not necessarily the same for all i. With the boundary condition (5.1), π(x̄, i, t | x̄_0, i_0, t_0) becomes the density of the trajectories that have never left D. The probability that the escape time T > t when the component state is i is

W_i(t | x̄_0, i_0) = ∫_D π(x̄, i, t | x̄_0, i_0, 0) dx̄

Let Γ_i(t | x̄_0, i_0) be the escape rate of D in state i, i.e. the probability per unit time of crossing ∂D_i^+ at time t. The mean escape time in state i is then

T_i(x̄_0, i_0) = ∫_0^∞ t Γ_i(t | x̄_0, i_0) dt

where we introduce explicitly the initial condition through (x̄_0, i_0). We define the mean escape time irrespective of the exit state by summing over i. This average escape time T_{i_0}(x̄_0) then obeys7

f̄_{i_0}(x̄_0)·∇̄_0 T_{i_0}(x̄_0) + Σ_{j≠i_0} p(i_0→j | x̄_0) [T_j(x̄_0) − T_{i_0}(x̄_0)] = −1   (5.8)

which generalizes a well known expression for the mean time to failure (MTTF) in Markovian reliability analysis, if we write (5.8) as

λ_{i_0}(x̄_0) T_{i_0}(x̄_0) = 1 + Σ_{j≠i_0} p(i_0→j | x̄_0) T_j(x̄_0) + f̄_{i_0}(x̄_0)·∇̄_0 T_{i_0}(x̄_0)   (5.9)

where the last term in (5.9) expresses the fact that during its stay in a given state the dynamics of the system will move x̄_0 to a neighboring point, where the MTTF is modified if f̄ ≠ 0.
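A Monte Carlo view of the exit problem makes these definitions concrete. The sketch below (state graph, dynamics and rates all invented for the illustration) estimates the mean escape time from the safety domain D = {x < 3} of a two-state system in which only the failed state can drive x through the boundary.

```python
import random

# Hypothetical sketch: in state 0 the variable x relaxes toward 1.0 (safe),
# in the failed state 1 it ramps up and may cross the boundary x = 3.0.
def drift(i, x):
    return (1.0 - x) if i == 0 else 0.5

def simulate_exit(lam=0.2, mu=0.4, dt=0.01, t_max=200.0, seed=0):
    """Return the first time x crosses 3.0 (or t_max if it never does)."""
    rng = random.Random(seed)
    i, x, t = 0, 1.0, 0.0
    while t < t_max:
        x += drift(i, x) * dt
        if x >= 3.0:
            return t            # first crossing of the safety boundary
        r = rng.random()
        if i == 0 and r < lam * dt:
            i = 1               # failure
        elif i == 1 and r < mu * dt:
            i = 0               # repair, before the boundary is reached
        t += dt
    return t_max

times = [simulate_exit(seed=k) for k in range(400)]
mean_escape = sum(times) / len(times)
```

Because condition (5.1) forbids re-entry, each trajectory is scored at its first crossing only, which is exactly what the loop above does by returning immediately.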
5.2. Generalized reliability

For the case where p(i→j | x̄) is independent of t we have time translation invariance, e.g. π(x̄, i, t | x̄_0, i_0, t_0) = π(x̄, i, t − t_0 | x̄_0, i_0, 0). The rate of escape through the surface ∂D, summed over all states, is

Γ(t | x̄_0, i_0) = Σ_i ∫_{∂D_i^+} (n̄·f̄_i(x̄)) π(x̄, i, t | x̄_0, i_0, 0) dS   (5.11)

Integrating (5.11) over time gives the probability of having escaped by time t. If we evaluate (3.4) at t_0 = 0 and integrate it over D and sum over the states, we obtain an equation for

R(t, x̄_0, i_0) = Σ_i ∫_D π(x̄, i, t | x̄_0, i_0, 0) dx̄

R(t, x̄_0, i_0) is a generalized reliability function, giving the probability that at time t a system which starts at x̄_0 in state i_0 will not have "failed" by crossing the safety boundary. If n̄·f̄_i ≤ 0 for all i and all x̄ ∈ ∂D, the system can never fail, and we have the solution R ≡ 1. However, we can also introduce failed Markovian states. Following the usual definition, this amounts to partitioning the component states in X ∪ Y, where Y is the set of failed states. Then the system will fail eventually at some time, either by crossing the surface ∂D or by entering Y, and the generalized reliability function obeys

∂R(t, x̄_0, i_0)/∂t = f̄_{i_0}(x̄_0)·∇̄_0 R(t, x̄_0, i_0) + Σ_{j≠i_0} p(i_0→j | x̄_0) [R(t, x̄_0, j) − R(t, x̄_0, i_0)]   (5.14)

with the initial condition R(0, x̄_0, i_0) = 1 if x̄_0 ∈ D and i_0 ∉ Y, and R(0, x̄_0, i_0) = 0 otherwise. We can generalize this problem further by introducing a damage function: instead of scoring a failure (independent of the exit point) whenever we cross ∂D, we score instead a damage function of the exit point25,26. The use of the backward (or adjoint) equation always appears when we study the outcome of a stochastic process as a function of its initial state. We can transform (5.14) into a set of integral equations

R(t, x̄_0, i) = e^{−∫_0^t λ_i(ḡ_i(s, 0, x̄_0)) ds} χ_D(ḡ_i(t, 0, x̄_0))
    + Σ_{j∉Y, j≠i} ∫_0^t dτ e^{−∫_0^τ λ_i ds} p(i→j | ḡ_i(τ, 0, x̄_0)) R(t − τ, ḡ_i(τ, 0, x̄_0), j)   (5.15)

A formal proof is given in ref. 27, although we can justify (5.15) heuristically in the following way. The reliability of a system starting at time t = 0 in (x̄_0, i) is the sum of two terms. The first is the probability that the system does not leave state i in the interval [0, t], if the system starts in (x̄_0, i) and does not leave D (χ_D is the characteristic function of D). The second term is explained likewise as the sum over j of the probability of staying in (i, D) for a time τ, having a transition to a working state j, and surviving for the rest of the time t − τ.

6. SEMI-MARKOVIAN GENERALIZATION

6.1. Definition and formulation
Let us start from eq. (3.9) and define

Q_i(t | x̄_0) = exp(−∫_0^t λ_i(ḡ_i(s, 0, x̄_0)) ds)   (6.1)

(a) Q_i(t | x̄_0) is the conditional probability that the stochastic process will remain in state i during time t if the state vector is initially equal to x̄_0; in other words it is the holding time distribution in state i;

(b) π(i→j | x̄) = p(i→j | x̄)/λ_i(x̄) is the conditional probability that, if a transition occurs when in state (x̄, i), it will be to state j, with the normalization Σ_{j≠i} π(i→j | x̄) = 1.

Therefore the kernel of (3.9) can be rewritten in terms of Q_j and π(j→i), and we can understand it as the probability to have in j a transition to state i and to stay in that state up to a time in the interval (t, t + dt). The rate of transition out of the initial state (x̄_0, i_0) at time t is then the sum of a term

λ_{i_0}(ḡ_{i_0}(t, 0, x̄_0)) Q_{i_0}(t | x̄_0)

which is the rate of transition out of the initial state without any previous transition, and of a term accounting for histories that have already passed through an intermediate state (eq. 6.6). The physical significance of (6.6) is therefore clear, since the rate of transition is now the sum of two rates, according to whether a transition to an intermediate state occurs or not. As such, nothing prevents us from using (6.6) in the general case where the holding time law is an arbitrary cumulated distribution function (c.d.f.) of the transition time t for a given initial state, relaxing the exponential distribution (6.1) assumption. Admittedly this extension is heuristic and must be validated through the theory of piecewise deterministic stochastic processes12. In the same way we can generalize (5.15) to the semi-Markovian case.
This is a generalization to the semi-Markovian case of the standard system of equations28 for reliability functions. We obtain the classical system if we omit all references to x̄.

6.2. Human error modelling29

The generalization to non-exponential transition time distributions is particularly useful in the case of human error modelling, where the exponential distribution for the time of action (for instance) of an operator is notoriously insufficient. Although an Erlang distribution can be obtained by the addition of r fictitious states in Markov modelling, it is necessary to have a large value of r to model dead-time distributions, i.e. distributions under which no action is possible before a given delay.
From the point of view of Monte Carlo (see § 10) it is not much more difficult to treat arbitrary c.d.f.s. However, to model correctly the action of the operator we must enlarge our state space. On the one hand we have a reactor R in a state (x̄, i); on the other hand we have an operator O in a state o, defined for instance as the direct product of "diagnostic states" and "stress states" (or levels). We define the combined state k = (i, o) of the system Reactor plus Operator: the evolution of x̄ is deterministic when k is fixed, and we define transition rates p(k→k′ | x̄) to be substituted for p(i→j | x̄). We can use the same formalism as defined in § 6.1. We decompose

p(k→k′ | x̄) = p(i→i′ | x̄, o) δ_{oo′} + p(o→o′ | x̄, i) δ_{ii′}

since we have three possibilities: transition of the reactor (first term) without change of state of the operator, transition of the operator without change of the reactor (second term), or a simultaneous transition, which has a probability of second order and is neglected (third term). We can further decompose

p(i→i′ | x̄, o) = p_s(i→i′ | x̄) + p_o(i→i′ | x̄, o)

where the first term relates to spontaneous transitions independent of the action of the operator (for instance hardware failures) and the second term relates to transitions of the reactor induced by actions of the operator. Similarly, the transition rates of the operator can be decomposed, where the change of state of the operator is due in part to reaction to sensors activated by the reactor and, for instance, to changes in stress level due to his past errors, etc.
The human error modelling involved is quite heavy and it is not our purpose to dwell here on this subject. What we wanted to show here is that the difficulty of the problem lies not so much with the computational difficulty of solving this kind of problem (for instance by Monte Carlo29) as with the definition of an adequate human error model30.
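To see why the semi-Markovian extension matters for operator models, the sketch below compares an exponential response-time law with a shifted Weibull including a dead time, both sampled by inversion of the c.d.f.; all numerical values are illustrative assumptions, chosen so that the two laws have comparable means.

```python
import random
import math

# Illustrative operator response-time laws (all parameters are assumptions).
def sample_exponential(rng, mean=5.0):
    return -mean * math.log(1.0 - rng.random())

def sample_dead_time_weibull(rng, dead=2.0, scale=3.0, shape=2.0):
    # no action is possible before the dead time `dead`
    u = rng.random()
    return dead + scale * (-math.log(1.0 - u)) ** (1.0 / shape)

rng = random.Random(42)
n = 20000
deadline = 4.0   # time available before the transient becomes unrecoverable
p_exp = sum(sample_exponential(rng) < deadline for _ in range(n)) / n
p_wbl = sum(sample_dead_time_weibull(rng) < deadline for _ in range(n)) / n
# With comparable mean response times the two laws give very different
# probabilities of timely action; the exponential has no dead time at all.
```

Inside a Monte Carlo dynamic-reliability loop, sampling an arbitrary c.d.f. in this way costs no more than sampling an exponential, which is the point made above.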
7. SUBDYNAMICS

An event tree may be defined as a collection of paths. Each path c is characterized by the set of component states i_0, i_1, …, i_n and the transition times t_1 < t_2 < … < t_n, which means that the dynamics is f̄_{i_0} until t_1, when it becomes f̄_{i_1} until t_2, etc., the probability to complete the path being p_c(τ̄), with τ̄ the vector of components (t_1, …, t_n), the notation being self-evident. The trajectory in phase space follows successively the dynamics f̄_{i_0}, f̄_{i_1}, …, and we write for short x̄_c(t | τ̄) for the compound trajectory. Therefore, for a given path c and a given transition time vector τ̄, the trajectory is fully deterministic and the corresponding conditional probability density of x̄ is

π_c(x̄, t | τ̄) = δ(x̄ − x̄_c(t | τ̄))

If we know the probability density q_c(τ̄) of τ̄, we obtain the unconditional probability density

π_c(x̄, t) = ∫ δ(x̄ − x̄_c(t | τ̄)) q_c(τ̄) dτ̄   (7.5)

A classical event tree analysis is a pruning of a (hypothetical) complete tree, and, when it is done at all, the choice of an average vector τ̄* will give the probability of damage as

∫ χ_U(x̄) π_c(x̄, t | τ̄*) dx̄

where χ_U is the characteristic function of the unsafe domain U. Equation (7.5) is formally a solution of the Chapman-Kolmogorov equation, with the understanding that we have a continuous set of paths, and is probably impossible to find if we take into account all possible transitions. However this is no longer true if we recall (§ 2) that most transitions are not failures but transitions provoked by protection devices or operators, and that q_c(τ̄) is a distribution peaked around an expected value τ̄*. If the fluctuations of τ̄ are large enough, the chronological order of transitions could change, but in that case it would define another path c′. We can consider π_c(x̄, t | τ̄)
as a solution of the Liouville equation

∂π_c/∂t + ∇̄·(f̄_c(t | τ̄) π_c) = 0   (7.7)

which is technically a parametric stochastic differential equation, the parameter being the random vector τ̄. Can we find an equation obeyed by the τ̄-averaged density? To answer this question we must first define a restriction operator R (the average over τ̄) and a prolongation operator P, with RP the identity operator and PR a projection operator. What we want to find out is the equation obeyed by the projection of π_c. The problem is technically the problem met in statistical mechanics when we look for "subdynamics", i.e. when unwanted variables are "projected out" (the Zwanzig projector method)31,32. One obtains33 an equation (7.18) for the projected density, involving a convolution product in time, where we do not show the dependence on τ̄ explicitly. Equation (7.18) is the point of departure of an analysis which, after due "Markovianization" (elimination of the convolution product), leads33,34 to a Fokker-Planck equation

∂π̄/∂t + ∇̄·(v̄(x̄, t) π̄) = Σ_{kl} ∂²[D_{kl}(x̄, t) π̄]/∂x_k ∂x_l   (7.20)

where v̄ is the flow vector and D_{kl} are the components of the diffusion tensor. Another way to obtain v̄ and D in terms of the statistics of τ̄ is to use the first-order Taylor development of x̄_c(t | τ̄) around τ̄*, which can be obtained by Peano's theorem, and which gives identical results for the case of a Gaussian distribution of τ̄. Since the Fokker-Planck equation (7.20) is an advection-diffusion equation, we can easily surmise the qualitative behaviour of its solution: the probability cloud centered on the trajectory defined by τ̄* will progressively spread out. What is less easy to predict is that the diffusion tensor will suddenly grow in the vicinity of each transition time and decrease afterwards. An important conclusion of the analysis is that despite this behaviour the probability density remains highly singular. Indeed, looking at equations (7.2 and 7.3), the locus of x̄_c(t | τ̄) is a one-parameter (t) family for a fixed τ̄, a two-parameter family after one transition, and in general an n-parameter family after n − 1 transitions. This means that the support of the distribution is a hypersurface of dimension n after n transitions, where n < N in practice. This is a clue for approximation techniques that need only to operate in a space of (much) lower dimension than N. Extension of this procedure to other parameters is possible. For instance, we may consider system (3.2) as describing the transport of isotopes in a geological medium, where π(x̄, i, t) is the probability of finding isotope i at position x̄. If the dynamics depends on a number of uncertain parameters describing the variability of the medium, the ensemble average will yield diffusion-like equations. The same conclusion appears if we take into account uncertainties on failure rates or on the dynamics of the reactor transients. The application of any restriction operator blurs the image, i.e. introduces a smearing or a diffusion around the convective solution. The same result obtains also if one aggregates Markovian states into macrostates.

8. APPLICATION TO EVENT TREES

8.1. Dynamic event trees

The technique of event trees plays a cardinal role in probabilistic risk assessment studies. The reader can find up-to-date accounts of their use in PRAs in ref. 4, as well as in the reports NUREG-1150: "Severe accident risks: an assessment for five US nuclear power plants". A full analysis of the shortcomings of the current use of event trees as far as the time variable is concerned has been given by N. Siu3, as well as attempts to remedy these shortcomings. One of the acknowledged deficiencies is the existence of many uncertainties.
"The importance of considering uncertainty in the inputs ... goes beyond just accounting for the inherent imprecision in inputs. Some calculations in PRA involve thresholds which, depending on whether or not they are met, can have a large influence on subsequent results... An example is the treatment of the probability of early containment failure in an Accident Progression Event Tree". J. Helton and R. Breeding35 remark that the treatment of uncertainty involves two types: "(1) uncertainty due to physical variability, which results in many different accidents being possible, (2) uncertainty due to lack of knowledge which results in an inability to determine the properties of individual accidents". These two categories could be qualified as parametric uncertainties. Indeed, the first category involves the uncertainty on rates of failure, and the second is often related to the presence in physical models of parameters which are uncertain and determined by expert opinion. Monte Carlo (or its stratified version, latin hypercube sampling) is often used to treat these uncertainties. However, we should stress that a third category is no less important (and
usually underestimated if not ignored completely). It relates to branching time uncertainty and, generally speaking, needs explicit knowledge of the dynamics. This is particularly true of event tree sequences involving recovery operations with human intervention36, or loss of offsite power (and its eventual subsequent recovery) with the ensuing transients triggered by the protection devices. Let us examine for instance a simple example described by the following elementary APET: water vapour mixed with fission product gases pours out of the reactor vessel (sequence a); HR (Heat Removal) starts before Spray (b), which branches to (b′): Spray starts eventually, or to (b″): Spray never starts; Spray starts before HR (c), which branches to (c′): HR starts eventually, or to (c″): HR never starts. Let F_H(t) and F_S(t) be the cumulated distribution functions of the starting times of HR and Spray respectively, with p_H and p_S the probabilities that HR and S (respectively) ever start. Elementary calculations give the time dependent probabilities for each state a, b′, b″, c′, c″, as displayed on Fig. 3. The asymptotic values for b″, a, c″ are those obtained without consideration of time distributions. However, for b′ the probability that this sequence will be obtained asymptotically is dependent upon the explicit knowledge of F_H and F_S. Similarly we can obtain the distribution of the physical state vector x̄ in accordance with the method exposed in § 7. A complete exposition of the method of calculation of the unfolding of an event tree, essentially by iterative solution of the system of integral equations given in § 6, is given in ref. 37. If we limit ourselves to sequences a, b, c for simplicity, i.e. either HR or S can happen, whichever is faster, the time dependent probabilities of these states follow from F_H and F_S, and f̄_i, i = a, b, c, are the respective dynamics, which in this case involve the pressure and temperature of the water vapour, isotopic concentrations, etc. In a classical event tree, even if one takes into account the two options (H before S or S before H), the consideration of a single value of the transition time could fail to capture the probability of crossing a threshold level, a failure more likely and more sensitive if it involves the tails of distributions. While knowledge of the dynamics f̄_i can be obtained from deterministic calculations, the distributions F_H and F_S will depend on various factors:

(1) operator actions;
(2) state dependent failures;
(3) transitions on demand resulting from threshold crossings for control or protection systems.

The latter case will be examined in § 8.4.
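The asymptotic probabilities of this restricted tree (a, b, c) follow from F_H and F_S by a competing-risks integral: the probability that HR starts first is ∫ (1 − F_S(t)) dF_H(t). In the sketch below the exponential shapes and the values of p_H, p_S and the rates are illustrative assumptions, not values from the text.

```python
import math

# Defective c.d.f.s of the starting times of Heat Removal and Spray:
# F_H(inf) = p_H < 1 means HR may never start (assumed shapes and values).
p_H, lam_H = 0.9, 0.5
p_S, lam_S = 0.8, 1.0

def F_S(t): return p_S * (1.0 - math.exp(-lam_S * t))
def f_H(t): return p_H * lam_H * math.exp(-lam_H * t)
def F_H(t): return p_H * (1.0 - math.exp(-lam_H * t))
def f_S(t): return p_S * lam_S * math.exp(-lam_S * t)

def prob_first(f_first, F_other, t_end=60.0, h=0.001):
    """P(the first event precedes the other), by midpoint integration."""
    s, t = 0.0, 0.5 * h
    while t < t_end:
        s += f_first(t) * (1.0 - F_other(t)) * h
        t += h
    return s

p_b = prob_first(f_H, F_S)            # HR starts before Spray
p_c = prob_first(f_S, F_H)            # Spray starts before HR
p_a = (1.0 - p_H) * (1.0 - p_S)       # neither ever starts
# The three asymptotic probabilities account for all outcomes of the race.
```

With these assumed values, p_b and p_c each contain a contribution from the case where the competing system never starts at all, which is precisely the information lost when only asymptotic branch probabilities are used.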
Distributions F_H and F_S, in this example, embody the uncertainties on transition times. They could be treated in principle as parametric uncertainties, although very serious computational problems do arise, as we shall see in part II. If we set aside case (3) of transitions on demand, where uncertainty is limited, cases (1) and (2) can lead to a large overlapping of transients. Even if we examine only the order of appearance of transitions, the fact remains that, for a given order, we could have very different probabilities of damage according to the specific values of the transition times. The current use of adjectives in APET studies (level II event trees) like "early", "late" or "very late" is, to say the least, a very crude way of discretizing the time variable37.
8.2. Decoupling passive and active events
Let us assume that we partition the Markovian states into two groups : where labels the first group and the second. In the protection domain context, components may change their state either because they fail or malfunction, or because the change is triggered by a signal when they act as a protection device. We assume that the first group can have both types of transition and the second only failures. We therefore write
Eq. (8.4) is true in general because the probability that both groups change their state in an interval dt is of order (dt)² and not dt. Therefore only one group at a time can change its state. Although failure rates may depend on , most transients do not involve the extreme values of the state variables that would justify this dependence. If we associate with the second group, and if we assume moreover that the passive events due to the failure of the second group are uninfluenced by the states of the components of the first group, we have
We can write in general
DYNAMIC RELIABILITY

where
is a conditional probability density.
Substituting (8.6) into the differential formulation of probabilistic dynamics7
with
and
Since
integrating (8.7) over
and summing over
gives
and therefore the conditional probability density of the first-group state, given the second-group state, is the solution of
which is now the evolution equation for the first group of (active) components only. Moreover the consideration of time scales introduces further simplifications. During the preaccident period, the reactor operates in a steady state with and we have a graceful degradation of the system through random failures obtained by solving (8.11). Once the initiating event of the accident happens at time T, the evolution during the accident transient can be described by
since on a time scale much shorter than T, the only transitions to be examined are those of the first group influenced by . In practice the last term of (8.12) may very well be negligible, because protection devices, operator actions and failures strongly influenced by are much more important.
Let us remark that the influence may be strictly limited to a few states. If we assume a transition is dependent only on the availability of an auxiliary electric source, the states of the other components are irrelevant. The number of distinct contexts under which we need to compute is usually drastically reduced. We may also relate the conditional probability dependence on to the mixed structure of an event tree, where we append to a branch of the event tree an associated fault tree involving states.

8.3. Reduction of the number of Markovian states

One of the objections raised against a classical Markovian analysis of the type (8.11) is the large number of states, growing exponentially with the number of components. The first practical approach to the reduction of the number of states was introduced by Papazoglou and Gyftopoulos6, who show that the consideration of symmetries between components or groups of components allows exact aggregation. The factor of reduction can be substantial, but not large enough to make practical problems with a few dozen components tractable. Approximate methods can be applied after the first reduction due to symmetry. When components are independent, i.e. when their failure or repair rates are independent of the state of all other components, the solution of the general problem can be obtained from the state probability of each component. In practice these rates are influenced by groups of other components, for instance failure rates through common cause effects, or repair and maintenance rates through repair policies involving priorities between components, etc. The influence graph of the system synthesizes, by directed arrows between component nodes, the fact that some components influence others. Although it allows the solution of system (4.5) in block triangular form, it does not by itself significantly reduce the size of the O.D.E. systems to be solved.
However we can usually partition the state space into "contexts", or groups of states, such that the conditional probability of transition of a component is approximately the same for influencing components belonging to the same context. For technical details see refs. 38 and 39, where the technique used is similar to the one given in § 8.2.

8.4. Transitions upon demand

In the following we assume transitions upon deterministic demand. Therefore intermediate failures, other than those possibly met at setpoint surfaces, are excluded from the analysis developed below. The demand is triggered if any one of the setpoints is reached. In general the setpoint corresponds to a surface depending usually on a single variable but possibly on more than one. For instance excessive power or temperatures, or insufficient coolant flow, define setpoint surfaces. Often, however, they are hyperplanes involving several variables, as in the case of the overtemperature-overpower reactor trip for Westinghouse nuclear power plants, defined by a linear combination of the Reactor Coolant System pressure and the core inlet and outlet temperatures. We assume that the demand is instantaneous, i.e. that the transition time is negligible compared to the characteristic time constants of the dynamic transient.
Since we need to calculate expressions like
we may represent the setpoint transition by a rate given by a sum of Dirac functions
Therefore
The crossing of the surface will happen at time s, solution of
and we assume here a single crossing. Therefore the Dirac function depends on the single variable s and can be expressed by the well-known relation
where
with n=1 by our assumption.
This allows us 37 to express
where is the time necessary to go, in state j, from the initial point to the point of crossing of
Eq. (8.18) means that the probability that the system has no transition out of component state j is a decreasing step function. The number is defined by the fact that is the probability of failure of the demand, in the sense that the change of state has not occurred. These data can in principle be obtained directly from the FMEA of the protection system.
9. SEMI-NUMERICAL METHODS

9.1. Discrete dynamic event tree methods (DET)

One of the oldest and most important tools to deal with dynamic reliability problems is the discrete dynamic event tree method, as developed in various codes like DYLAM40, DETAM41, ADS42 and their subsequent offspring. Basically it amounts to discretizing the time variable, i.e. approximating (6.4) by a staircase function25,43.
We define the rate of transition out of i, given that m transitions have already occurred. Then from (6.6)
We approximate
with
by
Therefore
A similar analysis is made in ref. 37 where the transition times
are defined by the
crossing of setpoint surfaces (see § 8.4). The general structure of the trajectory can easily be obtained : transitions occur at times , etc.
The probability of a given branch is given by

where i(n) is the state reached before the nth transition.
The most important problem is the choice of the . We emphasize the fact that the are potential transition times but not necessarily actual transition times. Therefore we are facing the following dilemma: either the grid is dense, to give a good approximation, and we have a combinatorial explosion of the event tree; or the grid is sparse, to avoid this memory problem, but we expect a loss of accuracy due to the poor agreement of with .
It can be shown that for a semi-Markovian reliability problem with no physical variables, the error incurred by the choice of the is bounded by

where X is the set of working states and is substochastic (and , which means that P exists). Obviously a good criterion for the is
In the case of a Monte Carlo method, the generation of a sample of n transition times will correspond to
The stochastic variable
appears in the Kolmogorov-Smirnov non-parametric test44 and for instance
On the other hand, in DET methods the transition times are fixed at the start. The choice made in DYLAM is not identical to (9.6), because the are regularly spaced (with decreasing values of as t grows).
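One natural grid-choice criterion of the kind discussed above (the assumed form here: place the branch times at the quantiles F(t_m) = m/(n+1) of the transition-time c.d.f., so that each cell of the grid carries equal probability mass) can be sketched for an illustrative exponential holding time:

```python
from math import log, exp

# Quantile grid sketch: the n branch times are placed at the interior
# quantiles of the transition-time c.d.f. F(t) = 1 - exp(-rate*t).
# The exponential law and the numbers are purely illustrative.

def quantile_grid(rate, n):
    # inverse c.d.f. evaluated at F = m/(n+1), m = 1..n
    return [-log(1.0 - m / (n + 1.0)) / rate for m in range(1, n + 1)]

rate, n = 0.5, 9
grid = quantile_grid(rate, n)

# Each of the n+1 cells delimited by the grid carries equal mass 1/(n+1),
# unlike a regularly spaced grid, which starves the tail of the distribution.
cdf = lambda t: 1.0 - exp(-rate * t)
edges = [0.0] + grid + [float("inf")]
masses = [cdf(b) - cdf(a) for a, b in zip(edges[:-1], edges[1:])]
```

A regularly spaced grid, by contrast, concentrates its resolution where the survival probability is already small, which is one reading of the remark on DYLAM above.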
The probability of a sequence up to a branching point is the product of all probabilities of the ordered set of transitions in the sequence before the branching point. When do we need to generate another sequence at the branching point? Two parameters regulate the appearance of new descendant sequences :
(1) the time step (which is also used for the integration of the physical variables);
(2) the probability threshold, which defines whether a new branching point is generated.
If is the current probability of a sequence, branching point k included, we examine ∆t later if
where is the probability that ∆t later no transition has occurred. If (9.9) is verified, a subsequence is generated; or more precisely, the physical variables and the initial probability of the subsequence are stored in a stack to be retrieved when needed. The current sequence is explored up to and the software goes back to the last branching point and starts again, the whole search being done by a depth-first algorithm74. The dynamic tree is limited by two characteristics : (1) if the probability of the current sequence is below a threshold, it is deleted or grouped with others; (2) or a Russian roulette saves one sequence out of a given number. The growth of sequences is examined in ref. 45. The combination of DYLAM with simulation packages extends the possibilities of the method46.

9.2. Cell-to-cell methods

Cell-to-cell methods were developed by T. Aldemir et al.15,16,47 at Ohio State University for application to reliability and safety problems, as an outgrowth of a general method developed by Hsu48. They can be derived from the Chapman-Kolmogorov equations along the lines set out in § 4.1. We can rewrite the final system (4.15) in the form
As such, system (9.10) does not allow for transitions on demand, i.e. transitions induced by the control or protection devices. A simple modification14 allows such transitions if one substitutes
in the matrix of eq. (9.10), where is the probability that the protection device puts the system in state i if it was in state j while the physical variables move from cell to cell . Large linear systems of type (9.10) are commonplace in classical Markovian reliability. Objections raised concerning the practical difficulty of writing are not warranted; these matrices are very sparse and can be built as needed from a knowledge of the structure
of the system38. The stiffness of the matrix, due to the large ratio of the physical time constants to the component mean times to failure, requires a particular treatment49. Some results are discussed in § 11.2.c. A similar scheme was proposed by Aneziris and Papazoglou50.

9.3. Other methods

We may, as an alternative approach, use the techniques sketched in § 4.2. This approach is a bottom-up method using low-dimensional marginal distributions. A few problems have been studied by P.E. Labeau51, who has observed that many multivariate distributions may be associated with a given set of marginal distribution functions, which may lead to difficulties, the main one being the lack of a priori information on the support of the distribution . On the other hand, synthesis methods like (4.19),(4.20) or others can be valuable to handle Monte Carlo results in a concise manner.
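To make the cell-to-cell idea of § 9.2 concrete, here is a deliberately minimal sketch (the one-dimensional decay dynamics and all discretization numbers are illustrative): the range of the physical variable is partitioned into cells, the deterministic motion over one time step is condensed into a sparse cell-to-cell mapping, and the probability vector is propagated by repeated application of that mapping.

```python
from math import exp

# Minimal cell-to-cell sketch: deterministic decay x' = -x on [0, 1],
# so x moves to x * exp(-dt) over one step. Each cell's probability is
# moved to the cell containing the image of its center.

def build_mapping(n_cells, dt):
    # destination cell of the center of each cell under x -> x * exp(-dt)
    dest = []
    for j in range(n_cells):
        center = (j + 0.5) / n_cells
        dest.append(min(int(center * exp(-dt) * n_cells), n_cells - 1))
    return dest

def step(p, dest):
    # one application of the (very sparse) transition matrix: a single
    # non-zero entry per column, so probability mass is conserved exactly
    q = [0.0] * len(p)
    for j, pj in enumerate(p):
        q[dest[j]] += pj
    return q

n_cells, dt = 50, 1.0
dest = build_mapping(n_cells, dt)
p = [1.0 / n_cells] * n_cells        # uniform initial distribution
for _ in range(10):
    p = step(p, dest)                # mass decays towards the cell at x = 0
```

Stochastic transitions between component states would multiply this structure by the number of Markovian states, as in system (9.10); the sparsity noted above is what keeps the scheme tractable.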
10. THE MONTE CARLO METHOD

10.1. The transport analogy

Monte Carlo techniques have been much developed in neutron transport theory, where the treatment of shielding problems has shown the necessity to improve the accuracy of estimates of the transmitted fraction when this fraction is very small. A well documented and up to date analysis of the Monte Carlo technique in linear transport theory has been given recently by I. Lux and L. Koblinger52; we shall adopt the same notation and refer to its proofs in the sequel. Among other valuable references we find the older text of Spanier and Gelbard53. Let
a point in phase space; T(P’,P), the transport kernel such that T(P’,P)dP is the probability that a particle having its last collision with final characteristics P’ will have its next collision in P; the collision kernel such that is the mean number of particles resulting from a collision in and produced with final characteristics in dP’ about P’. If
is the collision density in P, it obeys the integral Boltzmann equation :
where Q(P) is the external source density. For later use we define the full kernel
A formal analogy exists with reliability if we establish a correspondence between eq. (10.1) and eq. (6.6).
Let
i.e. the rate of disappearance from state i. Then
where
is the conditional probability that a "collision", i.e. a transition in , will lead to a state . The Dirac functions express the fact that neither nor changes during the transition.
Finally
which expresses the probability of "transport" along the deterministic trajectory leaving state i. Since
we have and
The Neumann development of the solution of the transport equation
with
without
is interpreted in neutron transport as the contribution of the successive collisions until the neutron either is absorbed or escapes from the reactor. We can give the same interpretation in reliability37 as the contribution to of the successive transitions until the system fails or crosses a safety boundary (cf. (9.1)).

10.2. Sampling methods

The generation of non-uniform random variates is the key problem of Monte Carlo. A huge literature exists on this subject. The reader is referred to the excellent book of L. Devroye54 for a thorough state-of-the-art report. The most useful methods for dynamic reliability are given below :

(a) The inverse method
Let F be the c.d.f. of a random variable x with inverse
defined by
If u is a uniform [0,1] random variable, then has distribution function F. Conversely, if x has c.d.f. F, then F(x) is uniformly distributed on [0,1].
For example if
We see already that, unless , the solution of (10.14) may not be readily available.
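When the inverse is available in closed form the method is a one-liner. The sketch below assumes the classic exponential holding time F(t) = 1 - exp(-lam*t), for which the inversion gives t = -log(1 - u)/lam (the rate value is illustrative):

```python
import random
from math import log

# Inverse method for an exponential holding time:
# F(t) = 1 - exp(-lam*t), so t = -log(1 - u) / lam with u uniform on [0, 1).

def sample_exponential(lam, u=None):
    if u is None:
        u = random.random()
    return -log(1.0 - u) / lam

random.seed(0)
lam = 2.0
sample = [sample_exponential(lam) for _ in range(200_000)]
mean = sum(sample) / len(sample)     # should approach 1/lam = 0.5
```

For less convenient distribution functions (10.14) must be solved numerically, which motivates the alternatives below.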
(b) The rejection method
If the density f(x) is bounded by
for all x, and if it is easier to sample from g(x) than from f(x), we can use the following algorithm: "generate two independent random variates x (from g) and u (from a uniform density on [0,1]) and accept x if
If not, repeat". Numerous variants exist, and the quality of a rejection method depends, among other things, on the expected number of trials needed to obtain x.

(c) The discrete inversion method
To sample from a discrete distribution where state i (i=1,2,...) has probability , we generate a uniform [0,1] variate u and set
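Both algorithms can be sketched in a few lines. The densities below are purely illustrative: a triangular density f(x) = 2x on [0,1], dominated by the uniform density with constant c = 2, and a three-state discrete law.

```python
import random

# (b) Rejection method for the illustrative density f(x) = 2x on [0, 1],
# with envelope c*g(x), g uniform and c = 2.
def rejection_triangular(rng=random.random):
    while True:
        x, u = rng(), rng()          # x from g (uniform), u uniform
        if u <= (2.0 * x) / 2.0:     # accept if u <= f(x) / (c * g(x)) = x
            return x

# (c) Discrete inversion: smallest i with p_1 + ... + p_i >= u,
# found here by a plain sequential search.
def discrete_inverse(probs, u):
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if u < acc:
            return i
    return len(probs) - 1            # guard against round-off

random.seed(1)
xs = [rejection_triangular() for _ in range(100_000)]
mean = sum(xs) / len(xs)             # f(x) = 2x has mean 2/3
```

For the rejection method the expected number of trials per accepted variate is c = 2 here; a tighter envelope reduces that cost.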
The search for x can be sequential, i.e. by computing successive values of , and could therefore be time consuming. Improved methods (binary search, guide-tables, etc.) may be used54.

We should at this stage stress the important conceptual difference between the solution of Markovian problems by O.D.E. solvers and the solution of the same problems by Monte Carlo. In the first case we have for instance N components, each of which may be in two states (working or failed), and therefore we have states with a transition matrix . A Monte Carlo method will not select the next state after a transition out of state i by sampling the state from this discrete distribution. Instead, if component has a total transition rate (failure or repair), the time of transition is selected from
and the component which has failed is identified by algorithm (c) with , with the advantage that now we have generally . However we cannot shed the state concept altogether, because the may depend on the current state of the system , which means that the set must be updated after
each transition. In all fairness, the objection raised against conventional Markovian methods as to the need to store is unwarranted, since it can be computed when needed from the logic of the system and its operational rules38.

The situation is more complicated in non-Markovian problems. Let be the c.d.f. of the sojourn time of component in its present state , when the state of the system is . The hazard rate is
Semi-Markovian models can be studied with this type of c.d.f., as well as non-Markovian models, provided we suitably enlarge the phase space by the addition of new variables. This happens for instance when hazard rates are functions of the time spent before t in each state of the system. If we have N independent components, each of which will have its first transition at time , then
is the c.d.f. of the transition time of the system. It cannot be handled as in (10.18), but by the following method :

(d) The composition method54
The algorithm amounts to solving sequentially for

and retaining the minimum value of . Various improvements, like the thinning method of Lewis and Shedler54, can be used if we know a simple function for all t. If we examine the kernel C(P”,P’)T(P’,P) obtained in (10.6, 10.8)
all three transition rates can be treated by one of the algorithms given above, the last one being deterministic. We should stress the fact that in dynamic reliability the computational burden rests essentially on the evaluation of , i.e. on the solution of the dynamics of the system, a sobering fact if one is tempted to look for refined sampling methods for i and t. This problem will be examined in § 10.5.

10.3. Scoring a Monte Carlo game

Once we select an initial state :
(a) we follow the trajectory in space according to the probability law T(P’,P) until we reach the transition time obtained from the c.d.f.
i.e. we generate a uniformly distributed random variable in [0,1] and solve

(b) if we are in state , the next Markovian state will be j if

where again is uniform in [0,1], and we return to (a).
If we keep track of the number of "particles" in state i, in cell k of centered in , then we obtain an estimate

of

if N is the size of the sample.
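The complete analogue game just described can be illustrated on a deliberately trivial example with no physical variables (a hypothetical two-component parallel system with constant failure rates and no repair), where the exact unreliability at the mission time is known in closed form. Each history samples the component transition times by the composition method of § 10.2 (d) and scores 1 if the system is down at the mission time.

```python
import random
from math import log, exp

# Analogue game on an illustrative two-component parallel system:
# constant failure rates, no repair, system down only when both fail.

def one_history(lams, t_miss, rng):
    alive = list(range(len(lams)))
    t = 0.0
    while alive:
        # composition method: first transition = minimum over components
        times = [-log(1.0 - rng()) / lams[i] for i in alive]
        dt = min(times)
        k = alive[times.index(dt)]
        t += dt
        if t > t_miss:
            return 0.0               # at least one component still works
        alive.remove(k)              # update the state after each transition
    return 1.0                       # all components failed: system down

random.seed(2)
lams, t_miss, N = (0.5, 1.0), 1.0, 200_000
U_hat = sum(one_history(lams, t_miss, random.random) for _ in range(N)) / N
U_exact = (1 - exp(-0.5)) * (1 - exp(-1.0))  # independent components
```

The estimate converges to the exact value, but only at the slow rate discussed next, which is what motivates the non-analogue games of § 10.4.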
This "analogue" Monte Carlo, so-called because we strictly follow the analogy with the transport of a particle, is usually inefficient because the variance decreases slowly (as ). The purpose of a Monte Carlo game is to compute a functional of
say
Although we may optimize the estimation of one functional (for instance minimize its variance), we cannot do so for more than one, a fortiori for an uncountable infinity of them. We are essentially interested in the unreliability of (or in the damage incurred by) a system. Let us point out however that to know the unreliability of a system with a relative accuracy of, say, 1 % is considerably more complicated than to know the reliability with the same accuracy. Since most industrial systems are (very) reliable, most of the sampling time will be wasted unless one biases the sampling towards interesting events, i.e. those leading to a failure. The unreliability of a system, where the probability density of state is given by
is
is
This expression is obtained from the definition of
from the relation , where A is the event "to have left X in interval (0,t)" and B the event "to be outside D", barring the possibility of leaving X in D. If we choose to evaluate the unreliability at time
with
and
we obtain from (10.25)
Let
be the expected value of the score when the process starts from point P.
since the distribution of the starting points is Q(P). It can be shown52 that
obeys an integral equation adjoint to (10.1). Indeed
or
with
and the kernel , i.e. eq. (10.32), is the adjoint of eq. (10.1). The interpretation is familiar to reactor physicists conversant with the concept of importance55, since here the "importance" of a starting distribution is the expected score. If we modify f(P) we modify but not the kernel L(P,P"). We may now make the collision kernel explicit and write, with
where is the probability that the particle is absorbed in P’ (with any point outside of ), and where is a conditional probability.
The solution of this integral equation is uniquely determined by choose our non-negative score functions at will provided we preserve
Four possible choices, among others, are given in Table 10.1. with
Therefore we may e.g. as long as
The first has the inconvenience of a possible singularity where . The fourth is less complicated than the third, and they both avoid the singularity. With the fourth we score at each free flight, contrary to the first and second, where we score only at the last event, an inconvenience for very reliable systems. The first two are called last-event estimators and the last two free-flight estimators. For short times compared to the MTTF, free-flight estimators are more efficient since they score in any state; on the other hand, for long times compared to the MTTF, last-event estimators are to be preferred, since most histories will stop before reaching the mission time56.

10.4. Non-analogue games and zero-variance schemes

To assess the quality of a Monte Carlo estimate, it is customary to use a figure of merit or an efficiency factor S given by
Indeed for a given game the variance of an estimate varies asymptotically as the inverse of the sample size. Therefore a non-analogue game will be more efficient than another if, for a given sample size, the variance is smaller, or, for a given variance, the needed size is smaller. A non-analogue game is a Monte Carlo game with T(P,P’) and C(P’,P") altered respectively to and , with the obvious requirement that the expected value of the score remains unbiased, but with a reduced variance for the same sample size. Statistical weights are attributed to "particles", i.e. in our case to system transients, in order to compensate for the altered outcome of the game. For instance, if we want to estimate the unreliability at some time we can favor transitions towards failed states - the larger number of failures reducing the variance - but we have to multiply by a weight W < 1 to conserve the correct expected value. This problem is examined in § 10.5. A zero-variance scheme is ideally possible and will be discussed below. It means that all successive histories yield the same (correct) score, which is only possible if we know this score in advance... which means that the Monte Carlo game is unnecessary! The important practical result, however, is the fact that any approximate knowledge of the functional to be estimated can be used to obtain a scheme with at least a strong reduction of variance.
Let us specialize our results to conventional reliability. The Markovian differential system for is
which gives the integral form
with
The unreliability of a system starting at t=0 in state i can be obtained immediately as a particular case of (5.15):
which by differentiation gives
We remark that, if we take into account the fact that for , system (10.44) is adjoint to system (10.40). It is curious that conventional reliability theory makes little use of (besides ) and even less of the adjoint character. Let the unconditional reliability function R(t) of the system be
Therefore
with
is obtained. Importance sampling52 is a non-analogue game without the introduction of statistical weights. The non-analogue kernels are given by
where
yields
The weight is constant and it suffices to fix the weight W of the starter as
If one chooses the estimator
then the transformed game (10.49,50) is
unbiased, i.e. gives the same final expected score (10.24) as the analogue game52. We have however an arbitrary function U(P), or V(P). If we choose V(P) such that we increase the probability of meeting interesting events (and contributions to the score), we may expect to reduce the variance. In fact, if , the variance will be zero. Since the game is unbiased even if , it is sufficient to choose V(P) as a best approximation of . The case for reliability is given in Table 10.2. We may however proceed in a different way and ask for a zero variance whatever P. In other words, if is the second moment of the score, then a zero-variance method is defined by asking
for all P.
However the source must be modified to
The modifications of the kernels are given in Table 10.3, with the corresponding result for reliability20,27,52. Although the functions are not known, any approximation can be substituted in the right columns of Tables 10.2 and 10.3 and yield Monte Carlo games with variances which are not zero but generally strongly reduced, even for crude approximations (see for instance § 11.1). We show now (§ 10.5) that standard biasing techniques are related to the general schemes given above.

10.5. Biasing and integration techniques
10.5.1. State selection

Let be a random variable equal to 1 if the transition from state i reaches state k, and zero otherwise. We want to estimate the expected value of the random variable
where
are given weights. We partition the state space E into mutually exclusive sets and we use conditioning to write
where is the conditional probability that when a transition occurs in state the next state will be in the subset and where is the conditional probability that if the next state is in J, it will be j. Therefore
and
If we define a modified Monte Carlo game with , with a score
Since
and
then
The expected value is unchanged if . We can therefore choose the probabilities freely, provided ; we minimize the variance under constraint (10.62) if we take
with
For the particular case
Nothing prevents us from making the partition of E dependent on the initial state. The biasing of Monte Carlo games for reliability evaluation was used by Lewis et al.57, and their choice amounts to writing , where R is the set reached after a repair of a component and F the set reached after a failure. Since the objective is to evaluate the rare event (for a reliable system) of a system failure, they bias the transition towards F, i.e. and . In their case is defined implicitly. A substantial improvement was proposed by Marseguerra and Zio58, where class F was again split in two classes : say C is the set of states which are basic events for one or more cut-sets, and NC the remaining states. Probabilities are chosen in an ad-hoc manner, and set C is further split into subsets with associated probabilities which are inversely proportional to the "distance" from the current state i to the closest cut-set. This amounts to choosing inversely proportional to that distance. These heuristics are excellent, as exemplified in Table 10.4 by the figure of merit S (see eq. 10.39). The example chosen59 is a ten-component system. Although the values chosen in this example for can be considered as an optimal game for values of chosen as in refs. 57 and 59, we could turn the problem around and set an objective defined by (10.59). A natural example would be to define as the unreliability. If we put for the failed state, then (10.43) can be written
Therefore for a discrete one-step
and we observe that the Monte Carlo game defined above by (10.55) and (10.59) relates to (10.66).
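The essence of this failure-biasing idea can be sketched on a discrete one-step game (the chain below, with a small per-step failure probability p and m steps, is illustrative only): the failure branch is sampled with an inflated probability p_star, and the statistical weight of the history carries the likelihood ratio so that the expected score is unchanged.

```python
import random

# Failure-biasing sketch: at each step the (rare) failure branch of
# probability p is sampled with probability p_star instead; the weight
# accumulates the likelihood ratio, keeping the estimator unbiased.

def biased_history(p, p_star, m, rng=random.random):
    w = 1.0
    for _ in range(m):
        if rng() < p_star:
            return w * (p / p_star)          # score at the failure event
        w *= (1.0 - p) / (1.0 - p_star)      # weight update on survival
    return 0.0                               # survived the m steps

random.seed(3)
p, p_star, m, N = 1e-3, 0.3, 10, 100_000
U_hat = sum(biased_history(p, p_star, m) for _ in range(N)) / N
U_exact = 1.0 - (1.0 - p) ** m               # about 9.96e-3
```

Every history that fails now contributes a small weight instead of the rare 0/1 score of the analogue game; the estimator remains unbiased for any 0 < p_star < 1, which is the discrete analogue of the kernel modification of § 10.4.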
Note : N : number of trials; t : computer time (s); U : unreliability; : average number of transitions per trial; S : figure of merit.
A look at Table 10.2. shows that the modified transition rate
with
is given by
since
The analogy of (10.67) with (10.65) will be complete if the sets contain only one state i, in which case
10.5.2. Transition time selection

The choice usually made is
to force a transition before the mission time . The analogy with the zero-variance result is striking if we compare to the same result of Table 10.2
with the understanding that if ; the result is however different, even if one adopts an approximation of .
Special problems are met when we have transitions on demand. The discussion so far concerns standard reliability evaluation, but it is not essentially different for dynamic reliability. The choice made by Labeau20 is
with
and
close to 1, and with larger, the greater the chance that the state contributes to failure.
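The forced-transition idea behind (10.68) can be sketched for an exponential holding time (the rate and mission time below are illustrative): the transition time is sampled from the c.d.f. conditioned on the event "transition before the mission time T", and the history carries the weight F(T) so that the expectation is preserved.

```python
import random
from math import log, exp

# Forced transition sketch: sample t from F(t)/F(T) on (0, T) by inversion,
# and return the weight F(T) that the history must carry to stay unbiased.
# Exponential holding time assumed for illustration.

def forced_transition(lam, T, rng=random.random):
    FT = 1.0 - exp(-lam * T)          # probability of a transition before T
    u = rng()
    t = -log(1.0 - u * FT) / lam      # inverse of the conditioned c.d.f.
    return t, FT                      # sampled time in [0, T) and its weight

random.seed(4)
lam, T = 1e-3, 10.0
times, weights = zip(*(forced_transition(lam, T) for _ in range(10_000)))
```

For a component with mean time to failure far beyond the mission time, every history now exhibits a transition, at the price of the small constant weight F(T).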
10.5.3. Integration techniques

The greatest challenge for dynamic reliability, by far, is the cost of integration of the dynamics (3.1). Let us remember however that accuracy is not essential. Indeed, rough scoping in dynamic PRA is certainly substantial progress compared to methods which ignore dynamic variables altogether. Considering the data uncertainty, a relative accuracy of ten percent, and certainly of a few percent, is acceptable. Oddly, this is completely at variance with the culture of ODE-solver designers, and a degraded standard ODE-solver with large time steps may not be the optimal solution, considering that the numerical stability criterion should anyhow be satisfied. Limited experience shows that an RK2 solver with adaptive time step is satisfactory, but the problem remains open. However substantial improvements are needed, and one possibility is the memorization technique developed by Labeau56. Indeed, for reliable systems (which is in general the case for industrial systems) most trajectories in phase space coincide with the most probable trajectory. Memorization of that trajectory saves effort if we have the possibility of branching out at any time. Let
be the probability that a system starting in state i has no transition between the and the kth control surface, and let be the probability of transition at the kth control surface towards the most probable trajectory. Then
and
are respectively the probability to reach the kth control surface along the most probable trajectory, and the probability to have, moreover, a correct functioning of the kth control. Sampling with algorithm 10.2 (c) (eq. 10.17) gives the interval where a stochastic transition may occur, and a law of type (10.68) forces the choice of which control device will be out of work. If we have
control surfaces met along the most probable trajectory, including a fictitious one for the end of mission, a proportion of histories leads to a known score. We can therefore force (see § 10.5.1) transitions towards less probable trajectories. This corresponds to one of the rules of Monte Carlo lore, i.e. to substitute for a stochastic variable, whenever possible, its expected value. The expected score, if
is the score of the mth modified game (e.g. the unreliability at time
and
the score obtained following a most probable trajectory completed up to mission time, is given by
This amounts to an effective number of histories , which is a substantial improvement for safe systems where
Another possibility is the neural network technique of Marseguerra and Zio59,60. They use a three-layered, feedforward neural network which acts as a nonlinear mapping. The values of the connection weights are obtained through training on a set of inputs and outputs generated by running the full problem. The choice of the variables is important: for instance, the authors recommend that the output variables should be only a part of the input variables, the rest being obtained by simple algebraic relations. The reported increase of integration speed compared to a standard numerical method is a factor of 20. However a certain number of questions remain, such as the number of networks, which should in principle be equal to the number of different dynamics, and the time necessary to identify the right variables compared to the time of integration for the Monte Carlo trials. The pros and cons of each method are currently under investigation. The use of sensitivity coefficients as an alternative way to reduce variance has been examined in ref. 62. Let us point out as a final remark that a distinction between Monte Carlo and discrete event simulation (DES) seems to us completely unnecessary (see ref. 3). DES deals with objects, here components. Whenever the term Monte Carlo is used here, it is meant that the sampling is done as in DES. The use of the symbol does not signify that in an actual simulation its explicit knowledge is required.
11. EXAMPLES OF NUMERICAL TREATMENT OF PROBLEMS OF DYNAMIC RELIABILITY

11.1. Classical Markovian and semi-Markovian problems

None of the references quoted in this section deals specifically with dynamic reliability. They deal, however, with some aspects of reliability calculation through Monte Carlo, and they have a potential for generalization to dynamic problems. The pioneering work of Papazoglou and Gyftopoulos6 showed the importance of dealing with transition rate uncertainties; Monte Carlo has a decisive advantage over other methods in this respect. The technique used is "double randomization"63, which means that for every nth history a new sampling of the transition rate is made. The optimal choice of n is an open problem, and interesting results have been obtained by Lewis and Bohm57,64. Non-Markovian problems have been treated by Dubi et al.65,66 and Lewins et al.67,68,69. The treatment of non-Markovian problems, like those taking into account the age of a component or the time a component has been working, generally involves the introduction of supplementary variables11,70. The mathematical difficulties associated with non-Markovian problems show again the unmatched ability of Monte Carlo to deal with these problems. "Zero-variance" methods in reliability have been little explored so far. The general algorithm of § 10.4 has been applied recently by Delcoux et al.27 to a system of 7 components. Results for the variance and the quality factor are given in Fig. 4 and Fig. 5. Case I corresponds to analogue simulation. Cases II and III correspond to an algorithm which minimizes the unreliability variance at time
the simulation giving unbiased estimates for
Cases II and III are associated respectively with
and
as defined in Table 10.3. In order to average the unreliability, cases IV and V correspond to a minimization of the variance of
with
and
where in Table 10.3 we substitute
to
It is interesting to note that the variance has a minimum for
h and that improved sampling of the time
is costly and unnecessary.
Attempts to generalize the method to the tank problem are still inconclusive20.

11.2. Dynamic reliability

11.2.a. Fast reactor transient

The first full-scale analysis of a dynamic reliability problem by Monte Carlo was done by C. Smidts and J. Devooght71 for a reactivity transient in a fast reactor. The EUROPA model was the same as the one treated by Amendola and Reina with an earlier version of DYLAM72. The components were: two channels, one pump, four sensors, three controllers,
a scram logic and a scram actuator, leading to states. The dynamics involve 12 differential equations. The Monte Carlo simulation was run on a CRAY XMP/14. The vector structure allows histories to be run in parallel. Up to histories could be run in parallel, but the initial group of histories must be divided into a limited number of subgroups of reasonably large size that undergo the same treatment. A dozen such groups were identified, and their size was maintained constant by delaying the calculation until the size was sufficient to run a batch in parallel. Biasing techniques were of the type discussed in §§ 10.5.1 and 10.5.2. Two categories of transitions were considered for biasing: failed-safe transitions (leading to early scram and therefore to success) and pure failures. Ninety percent of the transitions were forced into the second category. The time distributions of the success and failure times obtained by Monte Carlo had respectively two peaks and one peak that could be identified with the times necessary to complete sequences identified in the classical event tree analysis. One of the most interesting conclusions is the proof that a dependence of a transition rate on the physical variables can affect the outcome of the calculation, here the failure probability. Fig. 6 shows the variation of the (hypothetical) failure rate of the scram actuator as a function of the sodium channel temperature, the results corresponding to the 4 laws being given in Table 11.1.
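The two-category biasing described above can be sketched schematically. The 90% forcing figure comes from the text; the analog transition probability and the weight bookkeeping below are illustrative assumptions:

```python
import random

def sample_category(p_pure_failure, bias=0.9, rng=random):
    """Pick 'pure failure' vs. 'failed safe' for the next transition,
    forcing pure failures with probability `bias` and correcting the
    history weight by the ratio of analog to biased probability."""
    if rng.random() < bias:
        return "pure_failure", p_pure_failure / bias
    return "failed_safe", (1.0 - p_pure_failure) / (1.0 - bias)

# Unbiasedness check: the weighted frequency of pure failures must
# reproduce the analog probability.
rng = random.Random(2)
n, acc = 50000, 0.0
for _ in range(n):
    kind, w = sample_category(0.05, rng=rng)
    if kind == "pure_failure":
        acc += w
print(acc / n)   # ~0.05, the analog probability
```

The point of the weight correction is that any estimator multiplied by these weights stays unbiased while the rare failure branch is visited on 90% of the histories instead of 5%.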
The same problem was examined recently by De Luca and Smidts73 from the point of view of "zero-variance" sampling. The method used was importance sampling (see Table 10.2).
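A minimal sketch of importance sampling of a transition time, in the spirit of (though far simpler than) the quasi-optimal strategies of ref. 73: the true failure density is replaced by a biased one and each history is weighted by the likelihood ratio. The rates and mission time are illustrative assumptions:

```python
import math
import random

def is_estimate(lam, lam_b, t_m, n, seed=3):
    """Importance-sampled estimate of P(T < t_m) for T ~ Exp(lam):
    sample from a biased Exp(lam_b) density and weight each history by
    the likelihood ratio of the true to the biased density."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t = rng.expovariate(lam_b)                 # biased sampling
        if t < t_m:
            w = (lam * math.exp(-lam * t)) / (lam_b * math.exp(-lam_b * t))
            total += w                             # weighted score
    return total / n
```

With lam = 1e-4 and t_m = 10 the analog probability is about 1e-3, so an analog run of 20000 histories sees only ~20 failures; choosing lam_b = 1/t_m concentrates the samples in the interval of interest and shrinks the variance by orders of magnitude.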
11.2.b. Comparison between Monte Carlo and DYLAM

A comparison was made between DYLAM and Monte Carlo by C. Smidts43. The comparison is meaningful if we apply it to a problem for which a ready-made solution is available. Analytical benchmarks are not very numerous, but multidimensional problems can be generated by using the direct product of one-dimensional problems. The simple example given in § 4.3 was used, and the object of the calculation was the probability that
for any
Two values
were used, the system being in two states (1,2) with a failure rate
and a repair rate
Various errors were examined, like for instance the maximum absolute error
where
are respectively the analytical and numerical (DYLAM) probabilities. The comparison between the performances of DYLAM and Monte Carlo is difficult to make because many parameters are involved in the fine tuning of each method; moreover the results are problem-dependent. The author43 reaches the following conclusion for a single component: DYLAM is superior to Monte Carlo for low failure rates and the reverse is true for high failure rates. However, for many components (and certainly for a large number) Monte Carlo proves superior if it is used with biasing, whatever the failure rates. The case of failures on demand is different: the number of sequences is limited and in general a DET method is suitable. However, Monte Carlo can be easily adapted to such situations75 (see § 8.4) and is not sensitive to the loss of events of very low probability. Its accuracy is also self-improving with the number of trials, which is not the case for DET methods, where
is common to all sequences and cannot be adapted to each state.

11.2.c. The heated holdup tank

One of the most popular problems in dynamic reliability is the heated storage tank problem treated by Aldemir76, Deoss and Siu77, Marseguerra and Zio78, Tombuyses and
Aldemir14, Cojazzi40, Labeau20, and Matsuoka and Kobayashi79. Unfortunately the problem was not defined as a benchmark: many variants were used to test algorithms which are not strictly comparable. The holdup tank system is described in Fig. 7. The level should be maintained between HLA=6 m and HLB=8 m. Failure occurs either by dry-out, when the level is below HLV=4 m, or by overflow, when the level exceeds 10 m. Control logic is exercised on two pumps (one normally in standby) and a valve, each component having 4 states. Heat is added to the liquid at rate W. The dynamics of the system are
with h the level,
the temperature, and
the Boolean variable of the ith component (1 for ON or stuck ON, 0 for OFF or stuck OFF). We have also
and
The value Q is the flow, common to the three components,
is the inlet temperature and W a constant proportional to the heat flux. All failure rates have a common temperature dependence given in Fig. 8. An analytical solution being possible, the accuracy of test algorithms can be checked. The Monte Carlo analysis of Marseguerra and Zio shows clearly the impact of the failure (and repair) rates on the probability of crossing the safety boundary. Fig. 9 shows the p.d.f. and c.d.f. for dry-out and overflow. The solid line corresponds to a reference case with no failure on demand and equal transition rates, and the dashed line to the case in which the failure rates of the components in states opposite to the initial ones have been increased by a factor of 10 for transitions towards the state "stuck-on" and 100 for transitions towards the state "stuck-off". The first case can be treated by classical fault tree analysis, and the effect appears clearly when failure rates are different, as well as when failures on demand are introduced. A solution of eq. (10.14) could be found in this case without much difficulty due to the choice made for
but this is not necessarily true in all cases.
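An analog Monte Carlo treatment of the tank can be sketched in a few lines (level dynamics only, no temperature, no repair). The level boundaries are those of the text; the flow rate, failure rates and simplified control law are assumptions made purely for illustration, not the benchmark's exact specification:

```python
import random

def tank_history(lam=0.01, q=0.6, t_end=1000.0, dt=0.1, rng=random):
    """One analog history of a simplified holdup tank: two pumps feed
    the tank, a valve drains it; each unit can get stuck ON or OFF at
    rate lam. Dry-out below 4 m, overflow above 10 m."""
    h = 7.0                        # start inside the control band [6, 8]
    stuck = [None, None, None]     # None = controllable, True/False = stuck
    t = 0.0
    while t < t_end:
        for i in range(3):
            if stuck[i] is None and rng.random() < lam * dt:
                stuck[i] = rng.random() < 0.5   # stuck ON or stuck OFF
        demand_in = h < 7.0        # crude control: fill when low
        p1 = demand_in if stuck[0] is None else stuck[0]
        p2 = demand_in if stuck[1] is None else stuck[1]
        v = (not demand_in) if stuck[2] is None else stuck[2]
        h += dt * q * (p1 + p2 - v)
        if h < 4.0:
            return "dry_out", t
        if h > 10.0:
            return "overflow", t
        t += dt
    return "success", t_end
```

Scoring many such histories gives the dry-out and overflow probabilities (and their time distributions, as in Fig. 9); the stuck-component pattern, not the nominal dynamics, decides the outcome.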
The same problem was treated by Tombuyses and Aldemir14 by the continuous cell-to-cell mapping method (CCCMT) (§ 4.1). A comparison of the results was made with a Monte Carlo calculation converged to 0.15 %. The space is divided into
cells, where
and
are respectively the number of intervals for
and
Results were compared to the CCMT method, which is a discrete-time Markov process, and on the whole the results were favorable to the continuous version. Accuracy may be insufficient if the number of cells is too low (see Fig. 10 and Fig. 11), but the error lies below one percent for
P.E. Labeau has examined the same problem20 (without temperature) to test various biasing strategies for Monte Carlo. Since the objective was to run many Monte Carlo simulations, the variable
was omitted. Some results are given in Tables 11.2 and 11.3. Two sets of stochastic failure rates (S1) and failures on demand (S2) were used, S2 being one order of magnitude lower than S1. Free-flight estimators lead to better statistical accuracy but have a longer calculation time. For S2 the efficiency factor is in favor of free-flight estimators. Table 11.4 shows the results for four methods; although the computation time for the fourth (corresponding to the scheme of eq. (10.73)) is higher, the efficiency factor defined by
is better for this scheme for reliable systems.
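The cell-to-cell idea can be sketched in one dimension: partition the state space into cells and estimate the cell-to-cell transition probabilities by propagating sample points through the dynamics. This is a toy construction under assumed dynamics, not the CCMT/CCCMT implementation of ref. 14:

```python
import numpy as np

def cell_to_cell_matrix(f, cells, dt, samples=200):
    """Build a (column-stochastic) cell-to-cell transition matrix for
    1D dynamics dx/dt = f(x): sample points in each cell, advance one
    step of length dt (explicit Euler for brevity), and histogram the
    arrival cells."""
    n = len(cells) - 1
    P = np.zeros((n, n))
    for j in range(n):
        x = np.linspace(cells[j], cells[j + 1], samples)
        y = x + dt * f(x)                        # one Euler step
        idx = np.clip(np.searchsorted(cells, y) - 1, 0, n - 1)
        for i in idx:
            P[i, j] += 1.0 / samples             # arrival frequencies
    return P
```

Iterating a cell-probability vector with this matrix is the discrete-time CCMT picture; the continuous version replaces the fixed step by the generator of the process.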
The same problem (without temperature) was treated by Cojazzi with DYLAM40, with the main purpose of showing the difference between a dynamic approach and a classical fault tree method, as well as by Matsuoka and Kobayashi with the GO-FLOW method79.

11.2.d. The pressurizer

One of the most complex systems treated so far by Monte Carlo (together with that of § 11.2.a) is the pressurizer of a PWR treated by Labeau80. The same problem has been modelled by the neural network technique by Marseguerra et al.81, but no Monte Carlo results have been published so far. The model used in ref. 80 is a system of 12 nonlinear differential equations. Safety boundaries are a lower and upper water level and a lower and upper
pressure. Failure occurs when these boundaries are crossed, when the heaters fail, or when there is a massive water discharge with the relief valve stuck open. The total number of states is 900. The mission time was equal to the 115 s necessary to reach a steady state after a few openings and closings of the relief valve. The superiority of the biased method with memorization (eq. 10.73) appears clearly in Fig. 12, where low-probability events are not even detected by the two other methods. Moreover, it is much more stable, as shown by the constancy of the standard deviation (Fig. 13). A fairly good estimate of the unreliability is obtained with only
histories, which is not true for the other two. Discontinuities in Fig. 14 are due to the occurrence of rare events. Table 11.4 shows that the single memorization technique and the biasing with memorization have comparable efficiency factors. We reach the general conclusion that this sizeable and realistic problem is treated by Monte Carlo with a small computation time (on an RS-6000 workstation) and good relative accuracy, the problem being well out of reach of the other methods discussed in this article.

11.2.e. Optimization of the Steam Generator Tube Rupture emergency procedure

An application83,84 of eq. 8.14 has been successfully implemented as a variant of the DET methods described in § 9.1, one that is free of the discretization assumption of
as a staircase function, using eq. 8.18 instead to calculate
Although restricted to setpoint
transitions, it has the important advantage that the treatment of the dynamics is made through an engineering simulator of the same capability as those traditionally used in the classical safety cases by the main US vendors, as well as those used off-line to set up the success criteria of classical static PSAs. The simulator has been coupled to the scheduler of the DYLAM code in order to be able to follow the dynamics of all the branches of a dynamic event tree in an optimum way. A branching is generated every time a setpoint for the demand of a safety system is reached, as calculated by the simulator83. This "tree simulation" package also incorporates software able to execute procedures as if they were automatic pilots, potentially inducing transitions based on alarm setpoints for entry into a procedure and/or on procedure actions. Branching at these procedure setpoints then allows the study of the impact of human errors or system malfunctions while executing procedures. In Figures 15, 16 and 17 we can see results of this type of tree simulation in the case of a steam generator tube rupture of a real PWR (see refs. 83, 84 for details). All the main control and protection systems have been included. A set of important safety systems has been selected as potential headers of a dynamic event tree, the headers becoming active as a result of the simulated crossing of setpoint branching conditions. In Figures 16 and 17 we can see the effect of procedure execution, including cycling instructions.
With the general availability of parallel processors this method seems most promising, particularly because efficient algorithms to handle larger and larger trees are being developed. Another important feature is that the calculation of the probability of each branch can be made, as the sequence is developed, with exactly the same techniques of Boolean functions that are now implemented in classical PSA packages, allowing the use of the same fault trees. Finally, the implementation of Monte Carlo roulettes at the branching setpoint conditions, both for the condition itself and for the time distribution of operator actions after an alarm setpoint is reached, seems very possible indeed, thus removing in practice the main implications of the restriction to transitions upon demand and raising new expectations for convergence with the general Monte Carlo techniques, providing at the same time natural extensions of classical PSAs.
12. CONCLUSIONS

12.1. Synthesis of available methods of solution

We have shown in this review paper that dynamic reliability can be studied from a unified point of view. Not all problems have been examined, like for instance the optimal control of piecewise deterministic processes12 or diagnostics82. Fig. 18 is a summary of the methods used in dynamic reliability (except GO-FLOW). The cross at the center of the figure shows a partition of the plane into four regions. Below the horizontal line we have differential formulations and above it integral formulations. To the right of the vertical line we have direct or forward formulations and to the left adjoint or backward formulations. Our starting point is the differential forward Chapman-Kolmogorov (DFCK) equation (1) (eq. 3.2). However, we have added a random vector parameter
which means that
is a conditional distribution (see the last paragraph of § 7). The transformation of DFCK into its integral form IFCK (2) is made as in eq. (3.9). However, we could also extend IFCK to the semi-Markovian case (see eq. 6.4). The reverse is not true, since we cannot in general give a differential formulation of a semi-Markovian IFCK. We also have the IBCK (2*) counterpart. The four equations in the dotted square are equivalent formulations in the Markovian case. If we project out
along the lines used in § 7 (subdynamics), we obtain an equation for
which in general may be approximated by a Fokker-Planck equation (3) (eq. 5.70). If
is nonexistent, then (1) and (3) are identical, as well as (2) and (4). The most general technique available for solving (2), hence (1), is Monte Carlo (12). No approximation whatsoever is needed, since we sample
from the dynamics, (i,t) from the semi-Markovian c.d.f. and the random vector
(for instance the failure rates) from a given distribution
IFCK (4) is the starting point of discrete dynamic event tree methods (6) like DYLAM, DETAM, ADS, etc., replacing
Integrating (3) (or (1) without
) over cells we obtain (see eq. 4.15) the CCCMT method (7). Projecting out Markovian states from (1) or (3) we obtain a Liouville or a Fokker-Planck equation for
(8) (see eq. 4.3). A particular case of (7), which is exact if
is independent of
is the Markovian O.D.E. system (9), which is the backbone of standard reliability theory. Another possibility is to obtain the marginal distribution (13) (eq. 4.22) and the moments (14) (see eq. 4.16,17) and to synthesize the multivariate distribution (15) (eq. 4.20). The integral backward equation IBCK (4*) leads to a Monte Carlo evaluation of the reliability (or of any damage function). The DBCK (3*) leads to the differential formulation of the generalized reliability function (eq. 5.14), leading also to generalized MTTF functions.
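As a minimal concrete instance of the Markovian O.D.E. system (9), the backbone of standard reliability theory, consider a single repairable component; the rates are illustrative, and the explicit Euler integration is a sketch, not a recommendation:

```python
import math

def two_state_probabilities(lam, mu, t, steps=20000):
    """Integrate the Markovian O.D.E. system for one repairable
    component: state 1 = working, state 2 = failed, failure rate lam,
    repair rate mu (explicit Euler, for illustration only)."""
    p1, p2 = 1.0, 0.0              # start in the working state
    dt = t / steps
    for _ in range(steps):
        d1 = -lam * p1 + mu * p2   # balance of flows into state 1
        d2 = lam * p1 - mu * p2    # balance of flows into state 2
        p1, p2 = p1 + dt * d1, p2 + dt * d2
    return p1, p2

# Analytic unavailability for comparison:
#   p2(t) = lam/(lam+mu) * (1 - exp(-(lam+mu)*t))
```

Here the state space is purely discrete; dynamic reliability enlarges this picture by letting lam and mu depend on continuously evolving physical variables.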
12.2. Prospects

What are the prospects for computational methods in dynamic reliability? Realistic problems involve m = 10 to 100 physical variables, with a number n of components involved in active events of a few dozen (see § 8.2). These problems are computationally intensive since they involve systems of a large number of partial differential or integral equations. No conventional numerical method is ever likely to solve these equations, since if we admit p intervals for each physical variable, we have p^m unknowns. The only method which has the potential to solve such problems is the Monte Carlo method. It is rather insensitive to the size of the system, at least compared to other methods. Indeed the computational burden varies linearly with the number of components n (and not with
) and with a low power of N, corresponding to the task of solving the O.D.E. equations (3.1). Some of the examples treated by Monte Carlo (see § 11.2.a, 11.2.d) show that certain problems are already reachable with workstations. The resources of biased Monte Carlo have been barely tapped, and idle workstation computing power is available almost everywhere. The obstacles to Monte Carlo algorithms are that they are less scrutable than other methods on the one hand, and that they are often foreign to engineering culture on the other. Other methods, like discrete dynamic event tree methods or cell-to-cell methods, are by no means out of consideration. First of all, DET methods provide a hands-on approach essential for a physical understanding of a dynamic reliability problem, with an obvious link to conventional event tree analysis. Monte Carlo is a last-resort method that should be used on a restricted set of problems where dynamics plays an essential role. DET methods can be used to identify these problems.
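The dimensionality argument above can be made concrete with a back-of-the-envelope count; all figures below are illustrative assumptions:

```python
# Grid-based discretization: p intervals per physical variable gives
# p**m unknowns (per component state) -- hopeless already for modest m.
p = 10
for m in (2, 10, 100):
    print(f"m = {m:3d}  ->  p**m = 1e{m} unknowns")

# Monte Carlo instead scales roughly linearly with the number of
# components n and the number of histories N (illustrative figures):
n_components, histories = 50, 10**6
print(f"Monte Carlo ~ {n_components * histories:.0e} component-updates")
```

At m = 100 the grid count exceeds the number of atoms in the observable universe, while the Monte Carlo workload stays within reach of a workstation.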
A second reason for their future use is the fact that a biased Monte Carlo is blind without a rough chart of the domain to be explored, and DET as well as cell-to-cell methods may be able to provide these charts. It is likely that in a not-too-distant future dynamic reliability software will incorporate these complementary approaches.

ACKNOWLEDGEMENTS

I would like to thank first Prof. C. Smidts, Drs. B. Tombuyses and P.-E. Labeau, with whom, in that order, I have enjoyed the fascinating and challenging problems of dynamic reliability and who have contributed many interesting solutions. I owe much to Prof. M. Marseguerra's critical acumen, and thanks are extended to Profs. T. Aldemir and A. Mosleh, Drs. J.M. Izquierdo and N. Siu, as well as others who, knowingly or not, have shaped my ideas on this subject. I am also indebted to Prof. J. Lewins for his thoughtful suggestions on a first draft. Mrs. J. Immers is to be thanked for her tireless efforts to bring the typescript into good shape.

REFERENCES

1. T. Aldemir, N.O. Siu, P.C. Cacciabue and B.G. Göktepe (editors), "Reliability and Safety Assessment of Dynamic Process Systems", NATO ASI Series, Vol. 120, Springer (1994).
2. T. Aldemir and N. Siu (Guest editors), "Special Issue on Reliability and Safety Analysis of Dynamic Process Systems", RESS, 52(3) (June 1996).
3. N. Siu, "Risk assessment for dynamic systems: an overview", RESS, 43:43 (1994).
4. "USNRC: A review of NRC staff uses of probabilistic risk assessment", NUREG-1489 (1994).
5. "Proceedings of Workshop I in Advanced Topics in Risk and Reliability Analysis: model uncertainty, its characterization and quantification", Report NUREG/CP-0138 (October 1994).
6. I.A. Papazoglou and E.P. Gyftopoulos, "Markov process for reliability analysis of large systems", IEEE Transactions on Reliability, 26(3) (1977 & supplement).
7. J. Devooght and C. Smidts, "Probabilistic reactor dynamics I. The theory of continuous event trees", Nucl. Sci. Eng., 111:229 (1992).
8. J.C. Helton, R.J. Breeding and S.C. Hora, "Probability of containment failure mode for fast pressure rise", RESS, 35:91 (1992).
9. Report NUREG/CR-4551, SAND 86-1309, Vol. 2 (December 1990).
10. G. Apostolakis and P.C. Cacciabue (Guest editors), "Special Issue on Probabilistic Safety Assessment Methodology", RESS, 45(1-2) (1994).
11. D.R. Cox, "The analysis of non-Markovian stochastic processes by the inclusion of supplementary variables", Proc. Cambridge Phil. Soc., 51:433 (1955).
12. M.H.A. Davis, "Markov models and optimization", Monographs on Statistics and Applied Probability, 49, Chapman & Hall (1993).
13. C.W. Gardiner, "Handbook of stochastic methods for physics, chemistry and the natural sciences", Springer-Verlag, Berlin (1985).
14. B. Tombuyses and T. Aldemir, "Continuous cell-to-cell mapping and dynamic PSA", Proc. of ICONE-4 (ASME), Vol. 3, pp. 431-438, New Orleans (1996).
15. M. Belhadj and T. Aldemir, "The cell-to-cell mapping technique and Chapman-Kolmogorov representation of system dynamics", J. of Sound and Vibration, 181(4):687 (1995).
16. T. Aldemir, "Utilization of the cell-to-cell mapping technique to construct Markov failure models for process control systems", PSAM I, Vol. 2, pp. 1431-1436, Elsevier (1991).
17. J. Devooght and P.E. Labeau, "Moments of the distributions in probabilistic dynamics", Ann. Nucl. En., 22(2):97 (1995).
18. P.E. Labeau and J. Devooght, "Synthesis of multivariate distributions from their moments for probabilistic dynamics", Ann. Nucl. En., 22(2):109 (1995).
19. N.L. Johnson and S. Kotz, "Distributions in statistics: Continuous multivariate distributions", J. Wiley and Sons, New York (1972).
20. P.E. Labeau, "Méthodes semi-analytiques et outils de simulation en dynamique probabiliste", PhD Thesis, Université Libre de Bruxelles (1996).
21. M. Cukier, "Détermination du temps de sortie du domaine de sécurité de l'espace des états lors d'un transitoire accidentel", MSc Thesis, Université Libre de Bruxelles (1991).
22. P.E. Labeau, "A method of benchmarking for two-state problems of probabilistic dynamics", Nucl. Sc. Eng., 119:212 (1995).
23. Z. Schuss, "Theory and applications of stochastic differential equations", Wiley (1980).
24. P. Hänggi and P. Talkner, "Reaction rate theory: fifty years after Kramers", Rev. Mod. Phys., 62(2) (April 1990).
25. J. Devooght and C. Smidts, "Probabilistic dynamics: the mathematical and computing problems ahead", in T. Aldemir, N.O. Siu, P.C. Cacciabue and B.G. Göktepe (editors), Reliability and Safety Assessment of Dynamic Process Systems, Turkey, August 24-28, 1992, pp. 85-100, NATO, Springer Verlag, Berlin (1994).
26. J. Devooght and C. Smidts, "Probabilistic dynamics as a tool for dynamic PSA", RESS, 52(3):185 (1996).
27. J.L. Delcoux, J. Devooght and P.E. Labeau, Proc. ESREL'96/PSAM-III, Vol. 1, pp. 436-442, Crete (1996).
28. A. Birolini, "On the use of stochastic processes in modeling reliability problems", Springer-Verlag, Berlin (1985).
29. C. Smidts, "Probabilistic reactor dynamics IV. An example of man-machine interaction", Nucl. Sc. Eng., 112:114 (1992).
30. E. Hollnagel, "Reliability analysis and operator modelling", RESS, 52(3):327 (1996).
31. S. Nordholm and R. Zwanzig, J. Stat. Phys., 13:347 (1975).
32. H. Grabert, "Projection operator techniques in nonequilibrium statistical mechanics", Springer Tracts in Modern Physics, Vol. 95 (1982).
33. J. Devooght, "The computation of time dependent event trees", to appear in Proc. PSA'96, Park City (September 1996).
34. P. Grigolini, in "Noise in nonlinear dynamical systems", Vol. 1 (Theory of continuous Fokker-Planck systems), edited by F. Moss and P.V.E. McClintock, Cambridge University Press (1989).
35. J.C. Helton and R.J. Breeding, "Calculation of reactor accident safety goals", RESS, 39:129 (1993).
36. Ed. M. Dougherty Jr., "Time considerations in recovery analysis", RESS, 35:107 (1992).
37. J.M. Izquierdo, E. Melendez and J. Devooght, "Relationships between probabilistic dynamics, dynamic event trees and classical event trees", RESS, 52:197 (1996).
38. B. Tombuyses, "Modélisation markovienne en fiabilité. Réduction des grands systèmes", PhD Thesis, Université Libre de Bruxelles (1994).
39. J. Devooght and B. Tombuyses, "The use of the component influence graph to reduce the size of Markovian availability problems", RESS, 46(3):237 (1994).
40. G. Cojazzi, "The Dylam approach to the dynamic reliability analysis of systems", RESS, 52:279 (1996).
41. C. Acosta and N. Siu, "Dynamic event tree in accident sequence analysis: application to steam generator tube rupture", RESS, 41:135 (1993).
42. K.-S. Hsueh and A. Mosleh, "The development and application of the accident dynamic simulator for dynamic probabilistic risk assessment of nuclear power plants", RESS, 52(3):297 (1996).
43. C. Smidts, "Probabilistic dynamics: a comparison between continuous event tree and discrete event tree model", RESS, 44:189 (1994).
44. M.G. Kendall and A. Stuart, "The advanced theory of statistics", C. Griffin, London.
45. J. Devooght and C. Smidts, "A theoretical analysis of Dylam-type event tree sequences", Proc. Int. Conf. PSAM II, pp. 011(1-6) (1994).
46. G. Cojazzi, J.M. Izquierdo, E. Melendez and M. Sanchez-Perea, "The Dylam-Treta package", Technical note N° 1.92.111, ISE/IE 2358/92, JRC Ispra (1992).
47. T. Aldemir, M. Belhadj and L. Dinca, "Process reliability and safety under uncertainties", RESS, 52(3) (1996).
48. C.S. Hsu, "Cell-to-cell mapping. A method of global analysis for nonlinear systems", Springer-Verlag, New York (1987).
49. B. Tombuyses and J. Devooght, "Solving Markovian systems of ODE for availability and reliability calculations", RESS, 48:47 (1995).
50. O. Aneziris and I. Papazoglou, "Safety analysis of probabilistic dynamic systems", Proc. of ICONE-4 (ASME), Vol. 3, pp. 171-179, New Orleans (1996).
51. P.E. Labeau, "Improvement of probabilistic dynamics calculations by the determination of the support of the distribution", Proc. of ESREL'95, Vol. 1, pp. 431-444 (1995).
52. I. Lux and L. Koblinger, "Monte Carlo particle transport methods: neutron and photon calculations", CRC Press, Boca Raton (1991).
53. J. Spanier and E.M. Gelbard, "Monte Carlo principles and neutron transport problems", Addison-Wesley, Reading (1969).
54. L. Devroye, "Non-uniform random variate generation", Springer-Verlag, New York (1986).
55. J. Lewins, "Importance: the adjoint function", Pergamon Press, Oxford (1965).
56. P.E. Labeau, "Probabilistic dynamics: estimation of generalized unreliability through efficient Monte-Carlo simulation", Ann. Nucl. Eng. (1996).
57. E.E. Lewis and F. Bohm, "Monte Carlo simulation of Markov unreliability models", Nucl. Eng. Des., 77:49 (1984).
58. M. Marseguerra and E. Zio, "Monte Carlo approach to PSA for dynamic process systems", RESS, 52(3):227 (1996).
59. M. Marseguerra and E. Zio, "Non linear Monte Carlo reliability analysis with biasing towards top event", RESS, 39:31 (1993).
60. M. Marseguerra and E. Zio, "Improving the efficiency of Monte Carlo methods in PSA by using neural networks", Proc. of PSAM II, pp. 025(1-8), San Diego (1994).
61. A.M.H. Meeuwissen, "Dependent random variables in uncertainty analysis", PhD Thesis, T.U. Delft (1991).
62. N. Siu, D. Kelly and J. Schroeder, "Variance reduction techniques for dynamic probabilistic risk assessment", Proc. Int. Conf. PSAM II, pp. 025(27-32) (1994).
63. G.A. Mikhailov, "Minimization of computational costs of non-analogue Monte Carlo methods", Series on Soviet and East European Mathematics, World Scientific, Singapore (1991).
64. E.E. Lewis and T.U. Zhuguo, "Monte Carlo reliability modeling by inhomogeneous Markov processes", Reliab. Eng., 16:277 (1986).
65. A. Dubi et al., A.N.E., 22:215 (1995).
66. A. Dubi, A. Gandini, A. Goldfeld, R. Righini and H. Simonot, "Analysis of non Markovian systems by a Monte-Carlo method", Annals Nucl. En., 18:125 (1991).
67. Yun-Fu Wu and J.D. Lewins, "System reliability perturbation studies by a Monte Carlo method", Annals Nucl. Energy, 18:141 (1991).
68. J.D. Lewins and Yun-Fu Wu, "Some results in semi Markov reliability theory", Proc. of PSAM II, Vol. 1, pp. 025(15-26), San Diego (1994).
69. Yun-Fu Wu and J.D. Lewins, "Monte Carlo studies of non-Markovian system reliability", preprint.
70. C.A. Clarotti, "The method of supplementary variables and related issues in reliability theory", in A. Serra and R.E. Barlow (editors), Proc. of the Int. School of Physics "Enrico Fermi", pp. 86-103, North-Holland, Amsterdam (1986).
71. C. Smidts and J. Devooght, "Probabilistic reactor dynamics II. A Monte Carlo study of a fast reactor transient", Nucl. Sci. Eng., 111:241 (1992).
72. A. Amendola and G. Reina, "Event sequence and consequence spectrum: a methodology for probabilistic transient analysis", Nucl. Sc. Eng., 77:297 (1981).
73. P. De Luca and C. Smidts, "Probabilistic dynamics: development of quasi-optimal Monte Carlo strategies", Proc. ESREL'96/PSAM III, Vol. 3, pp. 2111-2116, Crete (1996).
74. M. Gondran and M. Minoux, "Graphes et algorithmes", Editions Eyrolles, Paris (1979).
75. P.E. Labeau, "Improved Monte Carlo simulation of generalized unreliability in probabilistic dynamics", Proc. of ICONE-4, Vol. 3, pp. 419-426 (1996).
76. T. Aldemir, "Computer assisted Markov failure modeling of process control systems", IEEE Transactions on Reliability, 36:133 (1987).
77. D.L. Deoss and N. Siu, "A simulation model for dynamic system availability analysis", MITNE-287, MIT (1989).
78. M. Marseguerra and E. Zio, "Monte Carlo approach to PSA for dynamic process systems", RESS, 52(3):227 (1996).
79. T. Matsuoka and M. Kobayashi, "An analysis of a dynamic system by the Go-Flow methodology", Proc. ESREL'96/PSAM III, pp. 1547-1552, Crete (1996).
80. P.E. Labeau, "A Monte Carlo dynamic PSA study of a PWR pressurizer", to appear in Proc. PSA'96, Park City, USA (1996).
81. M. Marseguerra, M.E. Ricotti and E. Zio, "Neural network-based fault detection in a PWR pressurizer", accepted for publication in Nucl. Sc. Eng. (1996).
82. S. Aumeier, J.C. Lee and A.Z. Akcasu, "Probabilistic techniques using Monte Carlo sampling for multi-component system diagnostics", Proc. of the Int. Conf. Math. Comp. (ANS), Portland (USA), April 30-May 4, 1995, pp. 103-112.
83. M. Sanchez and J.M. Izquierdo, "Application of the integrated safety assessment methodology to emergency procedures of a SGTR of a PWR", RESS, 45:159 (1994).
84. M. Sanchez and J. Melara, "Extending PSA to accident management (AM). The case of the steam generator tube rupture (SGTR) emergency operating procedures (EOP) assessment", Proc. of ICONE-4 (ASME), Vol. 3, pp. 523-530, New Orleans (1996).
INDEX
A-bomb: see survivors
Accident Progression Event Tree (APET), 233
Acute doses, 13
Adjoint, 56, 248
Admissibility conditions, 71
ADS code, 239
Advanced neutron source (ANS) reactor, 32
Aldermaston, 2, 3
Analog(ue) Monte Carlo, 40, 247
Autocorrelation, 33
Auto-power spectral density (APSD), 38
Basis
  dual, 55
  reciprocal, 57
  standard, 54
Bayesian theory, 213
Beginning-of-cycle (BOC), 98
Biasing (Monte Carlo), 253, 265
Biological tank shield, 180
Black, Sir Douglas, 1
  Advisory Group, 2
Boiling water reactor (BWR) fuel management, 95
Boltzmann transport equation, 39, 216, 241
Branching time uncertainty, 233
British Nuclear Fuels (BNFL), 1
Brownian motion, 214
Burghfield, 2, 3
Burkitt's lymphoma, 17
Burnable poison (BP), 94
Caesium contamination, 183
Caithness study, 10
Causality criteria, 7
Cause-and-effect, 7
Cell-to-cell method, 219, 241, 264
Cf-252 source, 40
Change control, 167
Chapman–Kolmogorov: see Kolmogorov
Chernobyl
  control room, 156, 164
  Nuclear Power Plant (ChNPP), 170
China Syndrome, 175
Chronic doses, 13
Closure relation, Gaussian, 220
Combinatoric problems, 93
Committee on Medical Aspects of Radiation (COMARE), 2, 21
Computer architecture, 160
Continuous cell-to-cell CC(C)MT: see Cell-to-cell
Cross-correlation, 33
Crossover operators, 113
  partially mapped, 124
Cross-power spectral density (CPSD), 34, 38, 46
CWT (continuous wavelet transform), 70
Data
  management, 158
  storage, 158
Decontamination, 198
Delay times, 46
Design base accident, 163
DETAM code, 239
Detectors, 42
Deterministic dynamic event tree (DDET), 268
DFCK, 270; see also Chapman–Kolmogorov
Dilation equation, 74, 98
Discrete event simulation (DES), 259
Dose rate (DR) zones, 184
Doses, acute v chronic, 13
  doubling, 14
Dounreay nuclear establishment, 2, 10
Downsampling, 104, 105
DYLAM code, 239, 260, 265, 268
Dynamic event trees (DET), 239
Earthquake, Romanian (1990), 190
Egremont North, 13
Emergency response, 162
Encoding, 113
End-of-cycle (EOC), 95
Ensemble, 32
Epstein–Barr virus (EBV), 17
Ergodic process, 33
Erlang distribution, 218
Errors, systematic, 7
Escape times, 224
Esteban and Galand scheme, 109
EUROPA model, 260
Event tree, continuous, 210
Exit problem, 222
Expert system, in core design, 118
Factor X, 12–15
Fault tolerance, 168
Film badge, 4, 11
Filtering, 81
  FIR-type, 110
  high-pass, 104
  low-pass, 103
  sub-band, 102
Fokker–Planck equation, 231
FORMOSA (code), 99, 123
Fragile gene sites, 20
Frame, continuous, 65
  operator, 66
Francis turbine, 119
Free-flight estimator, 249
FT (Fourier transform), 58 et seq.
Fuel
  assemblies, 94
  containing material (FCM), 174
Gabor, 58, 60
Gardner report, 4
Gaussian, 53
  closure relation, 220
Generalised perturbation theory (GPT), 123
Generation time, 35, 45
Genetic algorithms (GA), 93, 111
GO-FLOW method, 265, 270
Gordon interpolation, 220
Gray coding, 126, 137
Great deluge algorithm (GDA), 91, 104
Haar wavelets, 97, 100
Haling's principle, 97
Hamiltonian, 217
Harwell, 3
Health and Safety Executive (HSE), 11
Heavy water, reactor pool, 44, 45
Helicopter drops, 176
Hereditary component, 16
Hermitian conjugate, 56
Heuristic rules, 98
HTBX crossover, 119
  tie-breaking, 120
Hilbert space, 59, 66
Hill, Sir Austin Bradford, 7
Hölder index, 74
  regular, 73
Hope v BNFL, 5
Hot particles, 199
HP filter: see Filtering
Human error, 212
  factors, 157
  modelling, 218
IBCK, 270; see also Kolmogorov
IFCK, 270; see also Kolmogorov
IFT (inverse Fourier transform), 59
Imperial Cancer Research Fund, 3
Importance (adjoint), 248
  sampling, 251
Infection, leukaemia cause of, 21
In-out loading, 96
International Chernobyl Project (ICP), 171
International Commission on Radiological Protection (ICRP), 5, 203
Iodine release, 187
IWFT (inverse windowed Fourier transform), 63
Jump process, 219
KENO code, 32
Kolmogorov equations, 213 et seq.
  -Smirnov test, 240
Last-event estimator, 249
Latin hypercubes, 212, 233
Laurent polynomials, 84, 91
Lava-material, 176 et seq.
  -like fuel containing materials (LFCM), 189
Legasov report, 185
Leukaemia incidence, 1 et seq.
  projections, 202
Linear energy transfer (LET), 18
Liouville equation, 216, 230
Liquidators, 200
LP filter: see Filtering
Maintenance outages, 165
Mapping, inverse, 68
Mammoth beam, 191
Markovian modelling, 213
Mean time to failure (MTTF), 225
Medical Research Council, UK (MRC), 6
Memorization technique, 258, 267
Mintzer scheme, 110
Monte Carlo calculations, 215, 217
Mother wavelet, 53, 69, 93, 97
Mouse data, 13–14, 18
MRA (multiresolution analysis), 81
Multiplicative factors, 13
Multi-point approximation (MpA), 128
Mutations, minisatellite, 19
National Commission on Radiological Protection (Russian) (NCRP), 200
National Radiological Protection Board (UK) (NRPB), 1
Neumann series, 39, 243
Neural networks, 258
Noise analysis, 37
Non-analogue Monte Carlo, 249
Non-Hodgkin's lymphoma, 2
NPP (nuclear power plant), 158, 171
NUREG-1150 report, 232
Nyquist frequency, 60, 70
Operator support, 161
Out-in loading, 96
Parseval's theorem, 77, 79, 80
Peano's theorem, 232
Phase filters, 102
Plutonium correlation, 182
Point kinetics, 31 et seq.
Poisson distribution, 38, 41
Population-based incremental learning algorithm (PBIL), 125
Positive feedback, 174
Pressurised water reactor (PWR), core reload, 93
Probabilistic risk assessment (PRA), 163
  safety assessment (PSA), 210, 268, 269
QMF (quadrature mirror filter), 110
Radiation Effects Research Foundation (Japan), 9
Radioactive release assessment, 186
Rain algorithm, 105
Random variates, 244
RBMK-1000 (Russian) reactor, 171
Reactivity, positive feed-back, 172
Reactor dynamics, 212
Reay v BNFL, 5
Reclamation, 197–8
Record-to-record algorithm (T), 105
Reproducing kernel, 64
Resolution of unity (identity), 57
  continuous, 63
Rose review, 9
Russian National Medical Dosimetric Registry (RNMDR), 201
Safety domain, 224
Sampling, up and down, 88
Sarcophagus, 170 et seq.
  predicted collapse, 192
  water entry, 193–2, 196
Scintillation detectors, 42
Seascale, 1
Selection schemes, 116
Self-sustaining chain reaction (SCR), 174, 194, 207
Semi-Markov: see Markov
Simulated annealing (SA), 91, 102
Smoking, 8
Source transfer function, 36
Spectral densities, 34
  factorisation, 102
Spencer's distribution (Cf-252), 41
Statistical mechanics, 217
Stochastic processes, 214
Survivors, atomic bombs, 9, 13, 17
Symmetry reduction, 237
Tabu search algorithm, 105
Technical basis of nuclear safety (TBNS) report, 194
Thinning method, 246
Three Mile Island (TMI) control room, 156, 164
Threshold crossings, 235
Thurso, 2
Time delays, 46, 51
Top event, 211
Training, 165
Transfer functions, 31
Transition times, 240, 257
Travelling salesman problem, 119
Trust region approximation, 127
Twins, monozygotic, 15
Ukritiye: see Sarcophagus
United Kingdom Atomic Energy Authority (UKAEA), 6
United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), 14, 21
Unsafe domain, 230
Variance reduction, 40
Vector computing, 261
Verification and validation (V&V), 166
Volterra equation, 39
VVER (Russian) reactor, 171
Water supply, 199
Weapons testing, 4
WFT (windowed Fourier transform), 58 et seq.
  metric operator, 63
Wiener process, 214
Window, band limited, 76, 78
Windscale, 1
WT (wavelet transform), 89, 94
Xenon poisoning, 172
Zero-variance schemes, 250 et seq.
Zone of rigid control (ZRC), 204
Zwanzig projector, 231