NEUTRON FLUCTUATIONS
A Treatise on the Physics of Branching Processes

IMRE PÁZSIT
Chalmers University of Technology, Gothenburg, Sweden

LÉNÁRD PÁL
Hungarian Academy of Sciences, Budapest, Hungary
Amsterdam • Boston • Heidelberg • London • New York • Oxford Paris • San Diego • San Francisco • Singapore • Sydney • Tokyo
Elsevier
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK
Radarweg 29, PO Box 211, 1000 AE Amsterdam, The Netherlands

First edition 2008
Copyright © 2008 Elsevier Ltd. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK; phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: [email protected]. Alternatively you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions and selecting Obtaining permission to use Elsevier material.

Notice: No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein.

British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library.
Library of Congress Cataloging-in-Publication Data: A catalog record for this book is available from the Library of Congress.

ISBN: 978-0-08-045064-3

For information on all Elsevier publications visit our web site at books.elsevier.com

Typeset by CharonTec Ltd (A Macmillan Company), Chennai, India (www.charontec.com)
Cover design: Maria Pázsit
Printed and bound in Great Britain
Dedicated to MARIA and ANGELA
‘Some books are to be tasted, others to be swallowed, and some few to be chewed and digested’ (Francis Bacon, 1564–1626)
CONTENTS

Preface
Acknowledgement
List of Most Frequently Used Notations

I. Physics of Branching Processes

1. Basic Notions
   1.1 Definitions
   1.2 Equations for the Generating Functions
       1.2.1 Intuitive solution
       1.2.2 Solution according to Kolmogorov and Dmitriev
   1.3 Investigation of the Generating Function Equations
       1.3.1 Uniqueness of the solution, regular and irregular branching processes
       1.3.2 Moments: subcritical, critical and supercritical systems
       1.3.3 Semi-invariants
   1.4 Discrete Time Branching Processes
   1.5 Random Tree as a Branching Process
   1.6 Illustrative Examples
       1.6.1 Regular processes
       1.6.2 Explosive process
       1.6.3 Modelling of branching processes

2. Generalisation of the Problem
   2.1 Joint Distribution of Particle Numbers at Different Time Instants
       2.1.1 Autocorrelation function of the particle number
   2.2 Branching Process with Two Particle Types
   2.3 Extinction and Survival Probability
       2.3.1 Asymptotic forms of the survival probability
       2.3.2 Special limit distribution theorems

3. Injection of Particles
   3.1 Introduction
   3.2 Distribution of the Number of Particles
       3.2.1 Expectation, variance and correlation
   3.3 Limit Probabilities
       3.3.1 Subcritical process
       3.3.2 Critical process
       3.3.3 Supercritical process
   3.4 Probability of the Particle Number in a Nearly Critical System
       3.4.1 Preparations
       3.4.2 Equations of semi-invariants
       3.4.3 Determination of the approximate formula

4. Special Probabilities
   4.1 Preliminaries
   4.2 The Probability of the Number of Absorptions
       4.2.1 Expectation of the number of absorptions
       4.2.2 Variance of the number of absorptions
       4.2.3 Correlation between the numbers of absorptions
       4.2.4 The probability of no absorption events occurring
   4.3 Probability of the Number of Detections
       4.3.1 One-point distribution of the number of detected particles
       4.3.2 Two-point distribution of the number of detected particles
   4.4 Probability of the Number of Renewals
       4.4.1 Expectation and variance of the number of renewals
       4.4.2 Correlation function of the number of renewals
   4.5 Probability of the Number of Multiplications
       4.5.1 Expectation and variance of the number of multiplications
       4.5.2 Correlation function of the number of multiplications

5. Other Characteristic Probabilities
   5.1 Introduction
   5.2 Distribution Function of the Survival Time
   5.3 Number of Particles Produced by a Particle and Its Progeny
       5.3.1 Quadratic process
   5.4 Delayed Multiplication of Particles
       5.4.1 Expectations and their properties
       5.4.2 The covariance and its properties
       5.4.3 Properties of the variances
       5.4.4 Probability of extinction
   5.5 Process with Prompt and Delayed Born Particles
       5.5.1 Expectations
       5.5.2 Variances

6. Branching Processes in a Randomly Varying Medium
   6.1 Characterisation of the Medium
   6.2 Description of the Process
       6.2.1 Backward equations
       6.2.2 Forward equations
   6.3 Factorial Moments, Variances
       6.3.1 The first factorial moments
       6.3.2 Properties
       6.3.3 Second factorial moments
       6.3.4 Variances
   6.4 Random Injection of the Particles
       6.4.1 Derivation of the forward equation
       6.4.2 Expectations, variances, covariances

7. One-Dimensional Branching Process
   7.1 Cell Model
       7.1.1 Description of the model
       7.1.2 Generating function equations
       7.1.3 Investigation of the expectations
   7.2 Continuous Model
       7.2.1 Generating function equations
       7.2.2 Investigation of the expectations

II. Neutron Fluctuations

8. Neutron Fluctuations in the Phase Space: The Pál–Bell Equation
   8.1 Definitions
   8.2 Derivation of the Equation
       8.2.1 The probability of no reaction
       8.2.2 Probabilities of the reactions
       8.2.3 Partial probabilities
       8.2.4 Generating function equation
       8.2.5 Distribution of neutron numbers in two disjoint phase domains
   8.3 Expectation, Variance and Covariance
       8.3.1 Expectation of the number of neutrons
       8.3.2 Variance of the number of neutrons
       8.3.3 Covariance between particle numbers
   8.4 Pál–Bell Equation in the Diffusion Approximation
       8.4.1 Derivation of the equation
       8.4.2 Expectation, variance and correlation
       8.4.3 Analysis of a one-dimensional system

9. Reactivity Measurement Methods in Traditional Systems
   9.1 Preliminaries
   9.2 Feynman-Alpha by the Forward Approach
       9.2.1 First moments
       9.2.2 Second moments
       9.2.3 The variance to mean or Feynman-alpha formula
   9.3 Feynman-Alpha by the Backward Approach
       9.3.1 Preliminaries
       9.3.2 Relationship between the single-particle and source-induced distributions
       9.3.3 Calculation of the single-particle moments
       9.3.4 Calculation of the variance to mean
       9.3.5 Feynman-alpha formula with six delayed neutron groups
   9.4 Evaluation of the Feynman-Alpha Measurement
   9.5 The Rossi-Alpha Method
   9.6 Mogilner's Zero Probability Method

10. Reactivity Measurements in Accelerator Driven Systems
   10.1 Steady Spallation Source
       10.1.1 Feynman-alpha with a steady spallation source
       10.1.2 Rossi-alpha with a steady spallation source
   10.2 Pulsed Poisson Source with Finite Pulse Width
       10.2.1 Source properties and pulsing methods
       10.2.2 Calculation of the factorial moments for arbitrary pulse shapes and pulsing methods
       10.2.3 General calculation of the variance to mean with arbitrary pulse shapes and pulsing methods
       10.2.4 Treatment of the pulse shapes and pulsing methods
       10.2.5 Rossi-alpha with pulsed Poisson source
   10.3 Pulsed Compound Poisson Source with Finite Width
   10.4 Periodic Instantaneous Pulses
       10.4.1 Feynman-alpha with deterministic pulsing
       10.4.2 Feynman-alpha with stochastic pulsing
       10.4.3 Rossi-alpha with stochastic pulsing

11. Theory of Multiplicity in Nuclear Safeguards
   11.1 Neutron and Gamma Cascades
       11.1.1 Notations
   11.2 Basic Equations
   11.3 Neutron Distributions
       11.3.1 Factorial moments
       11.3.2 Number distribution of neutrons
       11.3.3 Statistics of emitted and detected neutrons
   11.4 Gamma Photon Distributions
       11.4.1 Factorial moments
       11.4.2 Number distribution of gamma photons
       11.4.3 The statistics of detected gamma photons
   11.5 Joint Moments
   11.6 Practical Applications: Outlook

Appendices

A. Elements of the Theory of Generating Functions
   A.1 Basic Properties
       A.1.1 Continuity theorems
       A.1.2 Generating function of the sum of discrete random variables
   A.2 On the Roots of Equation g(x) = x, 0 ≤ x ≤ 1
   A.3 A Useful Inequality
   A.4 Abel Theorem for Moments
   A.5 Series Expansion Theorem
   A.6 An Important Theorem

B. Supplement to the Survival Probability
   B.1 Asymptotic Form of Survival Probability in Discrete Time Process
       B.1.1 The first step of the proof
       B.1.2 The second step of the proof

Bibliography
Index
PREFACE
Thorough descriptions of branching processes can be found in almost every book and monograph that deals with stochastic processes [1–5]. Moreover, in the monographs by T.E. Harris [6] and B.A. Sevast’yanov [7], nearly every problem of the theory is discussed with mathematical rigour. There are innumerable publications available about the applications of the theory of branching processes in the different fields of natural sciences such as physics [8], nuclear engineering [9–11], and biology [12]. In the context of nuclear chain reactions, fluctuations in branching processes are synonymous with zero power neutron noise, or neutron fluctuations in zero power systems. In this respect, as early as 1964, before the books by Stacey [9] and Williams [11] appeared in print, a remarkable general work, amounting to a monograph, was published by D.R. Harris [13] on this topic. However, it is somewhat surprising that no monograph on neutron fluctuations has been published since 1974. There appears to be a need for a self-contained monograph on the theory and principles of branching processes that are important both for the studies of neutron noise and for the applications, and which at the same time would treat the recent research problems of neutron noise by accounting for new developments. The ambition to fill this gap constitutes the motivation for writing this book. This book was thus written with two objectives in mind, and it also consists of two parts, although the objectives and parts slightly overlap. The first objective was to present the theory and mathematical tools used in describing branching processes which can be used to derive various distributions of the population with multiplication. The theory is first developed for reproducing and multiplying entities in general, and then is applied to particles in general and neutrons in particular, including the corresponding detector counts.
Hence, the text sets out by deriving the basic forward and backward forms of the master equations for the probability distributions and their generating functions induced by a single particle. Various single and joint distributions and their special cases are derived and discussed. Then the case of particle injection by an external source (immigration of entities) is considered. Attention is given to the case when some entities (particles) are born with some time delay after the branching event. Moments, covariances, correlations, extinction probabilities, survival times and other special cases and special probabilities are discussed in depth. All the above chapters concern an infinite homogeneous material. In Chapter 7 space dependence is introduced. A one-dimensional case is treated as an illustration of a simple space-dependent process, in which a number of concrete solutions can be given in closed compact form. Whereas the first part treats concepts generally applicable to a large class of branching processes, Part II of this book is specifically devoted to neutron fluctuations and their application to problems of reactor physics and nuclear material management. The emphasis is on the elaboration of neutron fluctuation based methods for the determination of the reactivity of subcritical systems with an external source. First, in Chapter 8, a detailed derivation of the Pál–Bell equation, together with its diffusion theory approximation, is given. The original publication of the Pál–Bell equation constituted the first theoretical foundation of the zero power noise methods which had been suggested earlier by empirical considerations. Thereafter, Chapters 9 and 10 deal with the applications of the general theory to the derivation of the Feynman- and Rossi-alpha methods.
Chapter 9 concerns the derivation of the classical formulae for traditional systems, whereas Chapter 10 reflects the recent developments of these methods in connection with the so-called accelerator-driven systems, i.e. subcritical cores driven with a spallation source, and/or with pulsed sources. Finally, Chapter 11 touches upon the basic problems and methods of identifying and quantifying small samples of fissile material from the statistics of spontaneous and induced neutrons and photons. This area of nuclear safeguards, i.e. nuclear
material accounting and control, is receiving rapidly increasing attention due to the general increase of safety and safeguards needs worldwide. A special new contribution of this book to the field of neutron noise is constituted by Chapter 6, in which the so-called zero power neutron noise, i.e. branching noise, is treated in systems with time-varying properties. Neutron noise in systems with temporally varying properties is called ‘power reactor noise’. Neutron fluctuations in low power steady systems and high power systems with fluctuating parameters have so far constituted two disjoint areas, which were treated with different types of mathematical tools and were assumed to be valid in non-overlapping operational areas. The results in Chapter 6 are hence the first to establish a bridge between zero power noise and power reactor noise. Due to space limitations, the Langevin technique and the theory of parametric noise are not discussed. The interested reader is referred to the excellent monographs by Van Kampen [14] and Williams [11]. Since the generating functions play a decisive role in many considerations of this book, the theorems most frequently used in the derivations are summarised in Appendix A. This book is not primarily meant for mathematicians, but rather for physicists and engineers, notably for those working with branching processes in practice, and in the first place for physicists concerned with reactor noise investigations and problems of nuclear safeguards. However, it can also be useful for researchers in the fields of biological physics and actuarial sciences. The authors are indebted to many colleagues and friends who contributed to the realisation of this book in one way or another and with whom they collaborated during the years. One of us (I.P.) is particularly indebted to M.M.R. Williams, from whom he learnt immensely about neutron noise theory and with whom his first paper on branching processes was published.
He also had, during the years, a very intensive and fruitful collaboration with several Japanese scientists, in particular with Y. Yamane and Y. Kitamura of Nagoya University. Chapters 9 and 10 are largely based on joint publications. Research contacts and discussions on stochastic processes and branching processes with H. Konno of the University of Tsukuba are acknowledged with thanks. Parts of this book were written during an inspiring visit to Nagoya and Tsukuba. The experimental results given in the book come from the Kyoto University Critical Assembly at KURRI, and contributions from the KURRI staff are gratefully acknowledged. The chapter on nuclear safeguards is largely due to a collaboration with Sara A. Pozzi of ORNL, who introduced this author to the field. Both authors are much indebted to Maria Pázsit, whose contributions by translating early versions of the chapters of Part I from Hungarian can hardly be overestimated. She has also helped with typesetting and editing the LaTeX version of the manuscript, as well as with proofreading. The authors acknowledge with thanks constructive comments on the manuscript from M.M.R. Williams and H. van Dam, and thank S. Croft for reading Chapter 11 and giving many valuable comments. Without the funding contribution of many organisations this book would not have been possible. Even though funding specifically for this book project was not dominant, it must be mentioned that the research of one of the authors (I.P.) was supported by the Swedish Nuclear Inspectorate (SKI), the Ringhals power plant, the Swedish Centre for Nuclear Technology (SKC), the Adlerbert Research Foundation, the Japan Society for the Promotion of Science (JSPS) and the Scandinavia–Japan Sasakawa Foundation. Their contribution is gratefully acknowledged. We had no ambition to cite all published work related to the problems treated in this monograph.
The books and papers listed in the ‘List of Publications’ represent merely some indications to guide the reader. One has to mention the excellent review of source papers in reactor noise by Saito [15]. This review contains practically all of the important publications until 1977 that are closely related to the topic of this book.

Imre Pázsit, Gothenburg
Lénárd Pál, Budapest
February 2007
ACKNOWLEDGEMENT
The authors are grateful to Elsevier Ltd for granting permission to reproduce the material detailed below:

• Figure 5 from the article by Y. Kitamura et al. in Progr. Nucl. Ener., 48 (2006) 569.
• Figures 3 and 4 from the article by I. Pázsit et al. in Ann. Nucl. Ener., 32 (2006) 896.
• Figures 4 and 5 from the article by Y. Kitamura et al. in Progr. Nucl. Ener., 48 (2006) 37.
• Figures 1 and 3 from the article by A. Enqvist, I. Pázsit and S. Pozzi in Nucl. Instr. Meth. A, 566 (2006) 598.
LIST OF MOST FREQUENTLY USED NOTATIONS

P{···}                          symbol of probability
E{···}                          symbol of expectation
D²{···}                         symbol of variance
Q                               intensity of a reaction
ν                               number of progeny (neutrons) in one reaction
P{ν = k} = fk                   probability of {ν = k}
q(z) = Σ_{k=0}^∞ fk z^k         basic generating function
E{ν} = q′(1) = q1               expectation of the progeny number in one reaction
E{ν(ν − 1)} = q″(1) = q2        second factorial moment of the progeny number in one reaction
Qa = Q f0                       total intensity of absorption
Qb = Q f1                       intensity of renewal
Qm = Q(1 − f0 − f1)             intensity of multiplication
n(t)                            number of particles at time t
P{n(t) = n | n(0) = 1} = pn(t)  probability of finding n particles at time t in the case of one starting particle
g(z, t) = Σ_{n=0}^∞ pn(t) z^n   generating function of pn(t)
q                               number of particles produced by one injection (spallation) event
P{q = j} = hj                   probability of {q = j}; probability that there are j emitted neutrons per spallation event
r(z) = Σ_{j=0}^∞ hj z^j         generating function of the probability hj
E{q} = r′(1) = r1               expectation of the particle number produced by one injection event; expectation of the number of neutrons emitted per spallation event
E{q(q − 1)} = r″(1) = r2        second factorial moment of the particle number produced by one injection event; second factorial moment of the number of neutrons emitted per spallation event
Dν = q2/q1², Dq = r2/r1²        Diven factors of ν and q
s(t)                            intensity of the injection process at time t
N(t)                            number of particles at time t in the case of particle injection
P{N(t) = n | n(t0) = 0} = Pn(t|t0)   probability of finding n particles at time t if the particle injection started at t0 ≤ t
G(z, t|t0) = Σ_{n=0}^∞ Pn(t|t0) z^n  generating function of Pn(t|t0)
α = Q(q1 − 1) > 0               multiplication intensity (q1 > 1)
a = −α = Q(1 − q1) > 0          decay intensity (q1 < 1)
m1(t)                           expectation of n(t)
m2(t)                           second factorial moment of n(t)
M1(t)                           expectation of N(t)
M2(t)                           second factorial moment of N(t)
na(t − u, t)                    number of absorptions in the time interval [t − u, t], u ≥ 0
P{na(t − u, t) = n | n(0) = 1} = p(n, t, u)   probability of absorbing n particles in the time interval [t − u, t] in the case of one starting particle
Na(t − u, t)                    number of absorptions in the time interval [t − u, t], u ≥ 0, in the case of particle injection
P{Na(t − u, t) = n | n(0) = 0} = P(n, t, u)   probability of absorbing n particles in the time interval [t − u, t] in the case of particle injection
m1^(a)(t, u)                    expectation of the number of absorbed particles in [t − u, t] in the case of one starting particle
m2^(a)(t, u)                    second factorial moment of the number of absorbed particles in [t − u, t] in the case of one starting particle
M1^(a)(t, u)                    expectation of the number of absorbed particles in [t − u, t] in the case of particle injection
M2^(a)(t, u)                    second factorial moment of the number of absorbed particles in [t − u, t] in the case of particle injection
D²{Na(t − u, t)}                variance of Na(t − u, t)
{S(t) = S_ℓ}, ℓ ∈ Z⁺            medium is in the state S_ℓ at time t
U                               subset of the coordinate-velocity space
u = {r, v}                      phase point in the coordinate-velocity space
n(t, U)                         number of neutrons in the subset U at time t
p[t0, u0; t, n(U)]              probability of finding n neutrons in the subset U at time t, when one neutron started from the phase point u0 at time t0 ≤ t
m1(t0, u0; t, U)                expectation of the number of neutrons in the subset U at time t, when one neutron started from the phase point u0 at time t0 ≤ t
m2(t0, u0; t, U)                second factorial moment of the number of neutrons in the subset U at time t, when one neutron started from the phase point u0 at time t0 ≤ t
C(t)                            number of the delayed neutron precursors at time t
Z(t, td)                        number of the detected neutrons in the time interval [td, t]
λc                              intensity of capture
λf                              intensity of fission
λd                              intensity of detection
λ                               decay constant
S                               source intensity
pf(n, m)                        probability of emitting n neutrons and m precursors in one fission
gf(x, y)                        generating function of pf(n, m)
∂gf(x, y)/∂x|_{x=y=1} = νp      average number of prompt neutrons per fission
∂gf(x, y)/∂y|_{x=y=1} = νd      average number of delayed neutrons per fission
ν = νp + νd                     average number of neutrons per fission
β = νd/ν                        effective delayed neutron fraction
ρ                               reactivity
Λ = 1/(ν λf)                    prompt neutron generation time
α = (β − ρ)/Λ                   prompt neutron decay constant used in Chapters 9 and 10
ε = λd/λf                       detector efficiency
P(N, C, Z, t|t0)                probability of finding N neutrons and C precursors at time t in the system driven by a source, and of counting Z neutrons in the time interval [0, t]
G(x, y, v, t|t0)                generating function of P(N, C, Z, t|t0)
Z(t)                            asymptotic expectation of the number of detected neutrons in the time interval [0, t]
μZZ(t, 0|t0)                    modified second factorial moment
lim_{t0→−∞} μZZ(t, 0|t0) = μZZ(t)   asymptotic modified second factorial moment
Y(t) = μZZ(t)/Z(t)              Y(t) in the Feynman-alpha formula
p(n, c, z, T, t)                probability that there are n neutrons and c precursors at time t in the system, induced by one initial neutron at t = 0, and that there have been z detector counts in the time interval [t − T, t]
P(N, C, Z, T, t)                probability that there are N neutrons and C precursors at time t in the system, induced by a source of intensity S, and that there have been Z detector counts in the time interval [t − T, t], provided that there were no neutrons and precursors in the system at time t = 0 and no neutron counts have been registered up to time t = 0
g(x, y, v, T, t)                generating function of p(n, c, z, T, t)
G(x, y, v, T, t)                generating function of P(N, C, Z, T, t)
ν                               total number of neutrons produced in a cascade
μ                               total number of gamma photons produced in a cascade
ν1, ν2, ν3                      neutron singles, doubles and triples
M                               leakage multiplication
φr                              average number of neutrons generated in a sample
Mγ                              gamma multiplication per one initial neutron
μ1, μ2, μ3                      gamma singles, doubles and triples
P(n)                            number distribution of neutrons generated in a sample
F(n)                            number distribution of gamma photons generated in a sample
PART ONE

Physics of Branching Processes
CHAPTER ONE

Basic Notions
Contents

1.1 Definitions
1.2 Equations for the Generating Functions
1.3 Investigation of the Generating Function Equations
1.4 Discrete Time Branching Processes
1.5 Random Tree as a Branching Process
1.6 Illustrative Examples
1.1 Definitions

First, the basic definitions will be summarised, and for the sake of easier overview, the simplest way of treatment is chosen. The medium in which certain objects are capable not only of entering reactions but also of multiplying themselves is called a multiplying medium. Suppose that this medium is homogeneous and infinite. The medium will often be referred to as a system as well. For example, the objects can be bacteria on a nourishing soil, or particles suitable for chemical or nuclear chain reactions, etc. In the following, we will use the name particle instead of object. Suppose that at a certain time instant t0, only one particle capable of multiplication exists in the multiplying medium. Denote the number of particles at the time instant t ≥ t0 by n(t). It is evident that n(t) ∈ Z, where Z is the set of non-negative integers. The event which results in either absorption, renewal or multiplication of the particle is called a reaction. Let τ be the interval between the time of appearance of the particle in the multiplying medium and that of its first reaction. Suppose that the probability distribution function

$$P\{\tau > t - t_0 \mid t_0\} = T(t_0, t), \qquad t_0 \le t,$$

in which t0 is the time instant when the particle appears in the multiplying medium, satisfies the functional equation

$$T(t_0, t) = T(t_0, t')\,T(t', t), \qquad t_0 \le t' \le t.$$

In our considerations, let T(t0, t) be the exponential distribution given by the equation

$$T(t_0, t) = e^{-Q(t - t_0)}, \tag{1.1}$$

where Q is the intensity of the reaction. Further, let ν be the number of new particles born in the reaction, replacing the particle inducing the reaction, and let

$$P\{\nu = k\} = f_k, \qquad k \in \mathbb{Z} \tag{1.2}$$

be the probability that ν = k. It is obvious that f0 is the probability of absorption, f1 that of renewal, while fk, k > 1 is the probability of multiplication. The quantities Q and fk, k ∈ Z are the parameters determining the state of the multiplying medium.¹

The first case to be treated is the determination of the conditional probability P{n(t) = n | n(t0) = 1} = p(n, t|1, t0) in the case when the process is homogeneous in time, i.e. the probability p(n, t|1, t0) depends only on the time difference t − t0. Hence, one can choose t0 = 0 and accordingly write

$$P\{n(t) = n \mid n(0) = 1\} = p(n, t|1, 0) = p_{1n}(t) = p_n(t). \tag{1.3}$$

In the sequel the notation pn(t) = p1n(t) will be used. Hence, pn(t) is the probability that exactly n particles exist in the medium at time t ∈ T, provided that at t = 0 there was only one particle in the medium. Here, T denotes the set of the non-negative real numbers. This description is usually called the one-point model, since the branching process in the homogeneous infinite medium is characterised by the number of particles in the medium at one given time instant.² For determining the probability pn(t), the generating function

$$g(z, t) = E\{z^{n(t)} \mid n(0) = 1\} = \sum_{n=0}^{\infty} p_n(t)\, z^n, \qquad |z| \le 1 \tag{1.4}$$

will be used.³
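Before turning to the generating function equations, it may help to see the process in action. The following minimal Monte Carlo sketch samples n(t) by the first-reaction decomposition; the parameters Q = 1, f0 = 0.4, f1 = 0.3, f2 = 0.3 are an assumed illustrative choice, not taken from the text. The empirical mean can be compared with the known expectation E{n(t)} = exp(Q(q1 − 1)t), where q1 = E{ν}.

```python
import math
import random

# Assumed illustrative parameters (not from the text): reaction intensity
# Q = 1 and offspring probabilities f0 = 0.4 (absorption), f1 = 0.3
# (renewal), f2 = 0.3 (multiplication), so q1 = 0.9 and the process is
# slightly subcritical.
Q = 1.0
F = [(0, 0.4), (1, 0.3), (2, 0.3)]   # (k, f_k) pairs

def sample_offspring(rng):
    """Draw nu, the number of particles replacing the reacting particle."""
    u, acc = rng.random(), 0.0
    for k, fk in F:
        acc += fk
        if u < acc:
            return k
    return F[-1][0]

def n_at(t, rng):
    """One realisation of n(t), started from a single particle at time 0.
    The particle reacts after an Exp(Q) waiting time; each of its nu
    offspring then evolves independently over the remaining time."""
    tau = rng.expovariate(Q)
    if tau > t:
        return 1                      # no reaction before t
    return sum(n_at(t - tau, rng) for _ in range(sample_offspring(rng)))

rng = random.Random(2024)
samples = 20000
mean = sum(n_at(1.0, rng) for _ in range(samples)) / samples
# For comparison: E{n(t)} = exp(Q (q1 - 1) t) = exp(-0.1) at t = 1.
```

The recursion in n_at mirrors exactly the decomposition into "no reaction before t" and "first reaction at some t′ < t, followed by independent subtrees" that underlies the intuitive derivation in Section 1.2.1.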
1.2 Equations for the Generating Functions

1.2.1 Intuitive solution

To begin with, an intuitive solution will be given, which starts by progressing backwards in the branching process n(t), considering the mutually exclusive complete set of first events following the time instant t = 0. The following theorem will be proven.

Theorem 1. The generating function $g(z, t) = \sum_{n=0}^{\infty} p_n(t) z^n$, $|z| \le 1$ of the probability pn(t) with the initial condition g(z, 0) = z satisfies the backward-type equation

$$\frac{\partial g(z, t)}{\partial t} = -Q g(z, t) + Q q[g(z, t)], \tag{1.5}$$

in which

$$q(z) = E\{z^{\nu}\} = \sum_{k=0}^{\infty} f_k z^k \tag{1.6}$$

is the so-called basic generating function.

Proof. The proof is based on the fact that the event {n(t) = n | n(0) = 1} is the sum of the following two mutually exclusive events:

1. The single particle in the medium at the time instant t0 = 0 does not enter into a reaction until time t > 0; hence the number of particles will be exactly 1 at time t.

¹ In the theory of branching processes, this process belongs to the category of the so-called age-dependent processes.
² The notion ‘one-point model’ is not to be mixed up with the point model of reactor theory, where the phrase ‘point’ refers to a spatial property.
³ A short summary of the characteristics of generating functions is found in Appendix A.
⎡
t
+Q
e −Qt ⎣f0 δn0 +
∞
0
fk
k
⎤ pnj (t − t )⎦dt ,
n1 +···+nk =n j=1
k=1
and this is the same as pn (t) = e −Qt δn1 +Q
⎡
t
e
−Q(t−t )
⎣f0 δn0 +
0
∞
fk
k
⎤ pnj (t )⎦dt .
n1 +···+nk =n j=1
k=1
From this one immediately obtains the integral equation g(z, t) = e −Qt z + Q
t
e −Q(t−t )
0
∞
fk [g(z, t )]k dt
k=0
for the generating function of (1.4) which, by taking into account the definition in (1.6), can be written in the following form: t g(z, t) = e −Qt z + Q e −Q(t−t ) q[g(z, t )]dt . (1.7) 0
By derivation of this equation with respect to t, one obtains (1.5). The initial condition g(z, 0) = z follows immediately also from (1.7). This equation derived for the generating function g(z, t) of the probability pn (t) belongs to the family of the so-called backward Kolmogorov equations.4 Equation (1.5) can also be obtained directly by considering the probabilities of the two mutually exclusive events (in first order in dt) of having a reaction or having no reaction between 0 ≤ t ≤ dt. In many cases, one may need the exponential generating function of the probability pn (t|1) which is defined by the infinite series gexp (z, t) =
∞
pn (t)e nz ,
|e z | ≤ 1,
n=0
that satisfies the equation ∂gexp (z, t) = −Qgexp (z, t) + Qqexp [ log gexp (z, t)] ∂t with the initial condition gexp (z, 0) = e z, where qexp (z) =
∞
fk e kz ,
(1.8)
|e z | ≤ 1.
k=0 4 According
to the terminology of master equations, this equation, especially in the differential equation form, is a ‘mixed’-type equation, since the variable t refers to the final (terminal) time, and not the initial time on which the backward master equation operates on.
6
Imre Pázsit & Lénárd Pál
Derive now the so-called forward Kolmogorov equation determining the probability p_n(t). In this case, the probability p_n(t + Δt) will be expressed by probabilities referring to an earlier time instant t. For the generating function (1.4) the following theorem will be proved.

Theorem 2. The generating function g(z, t) with the initial condition g(z, 0) = z satisfies the linear forward-type differential equation

\partial g(z, t)/\partial t = Q[q(z) - z] \, \partial g(z, t)/\partial z.    (1.9)

Proof. Considering that nQΔt + o(Δt) is the probability that one reaction takes place in the medium containing n particles in the interval (t, t + Δt] at the time instant t > 0, one can write that

p_n(t + Δt) = p_n(t)(1 - nQΔt) + QΔt \sum_{k=0}^{n} (n - k + 1) f_k p_{n-k+1}(t) + o(Δt).

After rearranging the equation and performing the limit Δt → 0, one obtains

dp_n(t)/dt = -Qn p_n(t) + Q \sum_{k=0}^{n} f_k (n - k + 1) p_{n-k+1}(t).    (1.10)

The corresponding initial condition is p_n(0) = δ_{n1}. From this, equation (1.9) immediately follows for the generating function

g(z, t) = \sum_{n=0}^{\infty} p_n(t) z^n,    |z| ≤ 1,

with the initial condition g(z, 0) = z. Also in this case, it is worth quoting the equation

\partial g_exp(z, t)/\partial t = Q[e^{-z} q(e^z) - 1] \, \partial g_exp(z, t)/\partial z    (1.11)

for the exponential generating function

g_exp(z, t) = \sum_{n=0}^{\infty} p_n(t) e^{nz},    |e^z| ≤ 1,

which is appended by the initial condition g_exp(z, 0) = e^z.

Remark. In many applications, it is practical and hence customary to separate the reactions leading to absorption and multiplication. Let Q_a be the intensity of absorption, whereas Q_f that of multiplication.^5 Hence, one can write Q = Q_a + Q_f. Let p_f(k) denote the probability that a number k ∈ Z of new particles are generated by the incoming particle, which disappears in the multiplying reaction (i.e. fission). In this case, one obtains for the generating function g(z, t) the integral equation

g(z, t) = e^{-Qt} z + Q_a \int_0^t e^{-Q(t-t')} dt' + Q_f \int_0^t e^{-Q(t-t')} q_f[g(z, t')] \, dt',    (1.12)

5 This separation is motivated on physical grounds, with absorption corresponding to capture, and multiplication to fission, including the possibility of zero neutrons generated in fission.
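As a hypothetical numerical illustration of the forward equation (1.10), one can truncate the state space at N particles, integrate the resulting linear system with a Runge-Kutta scheme, and compare the mean particle number with m_1(t) = e^{αt} of (1.57). All parameters below are illustrative choices.

```python
# Integrate the truncated forward (master) equation (1.10) by classical RK4.
f = [0.5, 0.2, 0.3]          # illustrative offspring probabilities f_0, f_1, f_2
Q, N = 1.0, 60               # reaction intensity and truncation level

def rhs(p):
    """Right-hand side of dp_n/dt = -Q n p_n + Q sum_k f_k (n-k+1) p_{n-k+1}."""
    dp = [0.0] * (N + 1)
    for n in range(N + 1):
        dp[n] -= Q * n * p[n]                  # a reaction removes state n
        for k, fk in enumerate(f):             # ...and feeds state n from n-k+1
            m = n - k + 1
            if 0 <= m <= N:
                dp[n] += Q * fk * m * p[m]
    return dp

def evolve(t, dt=0.01):
    p = [0.0] * (N + 1)
    p[1] = 1.0                                 # initial condition p_n(0) = delta_{n,1}
    for _ in range(int(round(t / dt))):
        k1 = rhs(p)
        k2 = rhs([x + 0.5 * dt * y for x, y in zip(p, k1)])
        k3 = rhs([x + 0.5 * dt * y for x, y in zip(p, k2)])
        k4 = rhs([x + dt * y for x, y in zip(p, k3)])
        p = [x + dt * (a + 2 * b + 2 * c + d) / 6
             for x, a, b, c, d in zip(p, k1, k2, k3, k4)]
    return p

p = evolve(2.0)
mean = sum(n * pn for n, pn in enumerate(p))   # should approximate e^{alpha t}
```

For this subcritical example α = Q(q_1 − 1) = −0.2, so the mean at t = 2 should be close to e^{−0.4}; the truncation at N = 60 loses only negligible probability mass.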
7
Basic Notions
where

q_f(z) = \sum_{k=0}^{\infty} p_f(k) z^k.

The relationship between the f_k and the p_f(k) can be written as

f_k = (Q_f/Q) p_f(k) + (Q_a/Q) δ_{k,0},    (1.13)

which can be inverted as

p_f(0) = (Q/Q_f) f_0 - Q_a/Q_f    and    p_f(k) = (Q/Q_f) f_k,    k = 1, 2, ....    (1.14)

Using (1.14) in (1.12), one regains immediately the more concise equation (1.7). It can also be seen that

\sum_{k=0}^{\infty} f_k = (Q_f/Q) \sum_{k=0}^{\infty} p_f(k) + Q_a/Q = Q_f/Q + Q_a/Q = 1,

and

\sum_{k=0}^{\infty} k \, p_f(k) = E{ν_f} = (Q/Q_f) \sum_{k=0}^{\infty} k f_k = (Q/Q_f) E{ν} = (Q/Q_f) q_1,    (1.15)

where E{ν_f} is the expectation of the number of neutrons per fission,^6 whereas q_1 ≡ q'(1) is the expectation of the number of neutrons per reaction. In a similar manner one finds that

E{ν_f(ν_f - 1)} = (Q/Q_f) q_2    (1.16)

with q_2 ≡ q''(1), and hence the important relationship

E{ν_f(ν_f - 1)}/E{ν_f} = q_2/q_1    (1.17)

holds. This identity will be very instrumental when transferring results and expressions from Part I to Part II of the book, where the processes of absorption and fission will be separated, and the formalism will be built on the use of the distribution p_f(k) and its moments.
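The relations (1.13)-(1.17) are easy to verify numerically. The sketch below uses arbitrary illustrative values for Q_a, Q_f and p_f(k), not data from the book.

```python
# Check of the relations between f_k and the fission distribution p_f(k).
Qa, Qf = 0.6, 0.4
Q = Qa + Qf
pf = [0.2, 0.1, 0.3, 0.4]        # illustrative p_f(k), k = 0..3, sums to 1

# (1.13): f_k = (Qf/Q) p_f(k) + (Qa/Q) delta_{k,0}
fk = [(Qf / Q) * p + (Qa / Q) * (1 if k == 0 else 0) for k, p in enumerate(pf)]

q1 = sum(k * p for k, p in enumerate(fk))              # E{nu} = q'(1)
q2 = sum(k * (k - 1) * p for k, p in enumerate(fk))    # E{nu(nu-1)} = q''(1)
nuf1 = sum(k * p for k, p in enumerate(pf))            # E{nu_f}
nuf2 = sum(k * (k - 1) * p for k, p in enumerate(pf))  # E{nu_f(nu_f - 1)}
```

The assertions below confirm normalisation and the identities (1.15)-(1.17) for this example.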
1.2.2 Solution according to Kolmogorov and Dmitriev

In the following, the solution of the problem will be described by using the methods of Kolmogorov and Dmitriev [16]. Let T denote the set of non-negative real numbers [0, ∞) and n(t) be an integer-valued random process, homogeneous in time, defined over the parameter space T. Let us call the set of non-negative integers Z, i.e. the values which may be assumed by n(t), the phase space of n(t). The random process n(t), t ∈ T generates a homogeneous Markov process if the transition probability

P{n(t) = j | n(0) = i} = p_{ij}(t)    (1.18)

6 In Chapters 9-11, where the probability distribution p_f(k) and its factorial moments will be used, and in Section 6.4, E{ν_f} will be simply denoted as ⟨ν⟩ and E{ν_f(ν_f - 1)} as ⟨ν(ν - 1)⟩.
fulfils the following conditions:

(a) p_{ij}(t) ≥ 0,  ∀ i, j ∈ Z and t ∈ T;

(b) \sum_{j=0}^{\infty} p_{ij}(t) = 1,  ∀ i ∈ Z and t ∈ T;

(c) p_{ij}(t) = \sum_{k=0}^{\infty} p_{ik}(u) p_{kj}(t - u),  ∀ i, j ∈ Z and 0 ≤ u ≤ t, u, t ∈ T;

(d) p_{ij}(0) = δ_{ij} = 1 if i = j, and 0 if i ≠ j.

If t varies continuously then, in addition to condition (d), we shall also suppose that

lim_{t↓0} p_{ii}(t) = 1.    (1.19)

From this and the conditions (a) and (b), it immediately follows that for every pair i, j with i ≠ j, the transition probability p_{ij}(t) converges continuously to zero if t ↓ 0, i.e.

lim_{t↓0} p_{ij}(t) = 0.    (1.20)

Further, from condition (c), it follows that the transition probabilities p_{ij}(t), i, j = 0, 1, ... are continuous at every time instant t ∈ T.

Definition 1. The Markov process n(t), t ∈ T, defined in the phase space Z, is called a branching process if

p_{kn}(t) = \sum_{n_1 + ··· + n_k = n} p_{n_1}(t) p_{n_2}(t) ··· p_{n_k}(t).    (1.21)

This equation expresses the fact that the k particles existing in the system at t = 0 initiate branching processes independently from each other.^7 Let n_i(t) denote the number of progeny created by the ith particle at t. Obviously, the number of progeny for t > 0 generated by the k particles present in the system at t = 0 is expressed by the random process

n(t) = n_1(t) + n_2(t) + ··· + n_k(t),    (1.22)

in which the n_i(t), i = 1, ..., k are independent from each other and have the same distribution

P{n_i(t) = n | n_i(0) = 1} = p_n(t),    i = 1, ..., k.

It follows then that the transition probability p_{kn}(t) is simply the k-fold convolution of the transition probability p_n(t), as expressed by (1.21). In the further considerations, the following theorem is of vital importance.

7 It has to be emphasised that this assumption is only valid in a medium whose properties do not vary in time. This question is discussed in detail in Chapter 6.
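The branching property (1.21) states that the number distribution from k independent ancestors is the k-fold convolution of the one-ancestor distribution, i.e. its generating function is [g(z)]^k. A minimal sketch with an illustrative (hypothetical) one-ancestor distribution:

```python
# Branching property (1.21): two independent ancestors give the convolution
# of the one-ancestor distribution; the generating function is squared.
p = [0.3, 0.5, 0.2]                      # illustrative p_n(t) for one ancestor

def convolve(a, b):
    """Discrete convolution of two probability sequences."""
    c = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

p2 = convolve(p, p)                      # p_{2n}(t): two independent ancestors

def g(coeffs, z):
    """Probability generating function of a coefficient list."""
    return sum(c * z**n for n, c in enumerate(coeffs))
```

The assertions confirm that p2 is normalised and that its generating function equals [g(z)]^2, as (1.21) requires.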
Theorem 3. The generating function g(z, t) of the probability p_n(t) fulfils the functional equation

g(z, t + u) = g[g(z, u), t],    (1.23)

and the initial condition g(z, 0) = z.

Proof. Equation (1.23) is a direct consequence of the fact that n(t) is a branching Markov process, i.e.

p_n(t + u) = \sum_{k=0}^{\infty} p_k(t) p_{kn}(u),

and

p_{kn}(u) = \sum_{n_1 + ··· + n_k = n} p_{n_1}(u) ··· p_{n_k}(u).

Hence, in this case one has

g(z, t + u) = \sum_{k=0}^{\infty} p_k(t) \sum_{n=0}^{\infty} p_{kn}(u) z^n = \sum_{k=0}^{\infty} p_k(t) [g(z, u)]^k = g[g(z, u), t],

and this is exactly what was to be proven. The initial condition, in its turn, follows from the fact that p_n(0) = δ_{n1}, and accordingly g(z, 0) = z. By considering the condition (1.19), one can write

lim_{t→0} p_1(t) = 1.    (1.24)

From this it follows that if t → 0 then g(z, t) → z; moreover, this is valid uniformly for every z for which the condition |z| ≤ 1 is satisfied. Also, it can easily be proven that g(z, t) is uniformly continuous in t for every t ∈ [0, ∞), provided that |z| ≤ 1. It is then assumed that for t → 0, the probabilities p_n(t), n = 0, 1, ... can be written in the following form:

p_1(t|1) = 1 + w_1 t + o(t),    (1.25)

and p_n(t|1) = w_n t + o(t) for n ≠ 1. Since 0 ≤ p_1(t) ≤ 1 must hold, the inequality w_1 < 0 has to be fulfilled. Considering that

\sum_{n=0}^{\infty} p_n(t) = 1,    ∀ t ∈ [0, ∞),    (1.26)

the equality

\sum_{n=0}^{\infty} w_n = 0    (1.27)

has to be satisfied. By introducing the notation

w_n = Q[f_n - δ_{n,1}],

where 0 ≤ f_1 < 1, one obtains \sum_{n=0}^{\infty} f_n = 1. Due to this, the quantity 0 ≤ f_n ≤ 1 can be interpreted as the probability of the event that exactly n particles are born in a reaction, hence its meaning is equal to that of the probability defined in (1.2). Moreover, the quantity Q, having the dimension [time]^{-1}, is the intensity of the reaction. After these preparations, the basic theorem of branching processes can be stated.
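Before stating the theorem, the semigroup property (1.23) can be illustrated numerically: the flow of the backward equation (1.5), dg/dt = Q[q(g) − g], composes exactly as g(z, t + u) = g[g(z, u), t]. The parameters below are illustrative assumptions, and the integrator is a plain fourth-order Runge-Kutta scheme.

```python
# Numerical check of the functional equation (1.23) via the backward ODE.
f = [0.5, 0.2, 0.3]          # illustrative offspring probabilities
Q = 1.0

def q(x):
    return sum(fk * x**k for k, fk in enumerate(f))

def s(u):                    # s(z) = Q[q(z) - z], cf. (1.28)
    return Q * (q(u) - u)

def g(z, t, dt=1e-3):
    """g(z,t) by RK4 integration of dg/dt = s(g), g(z,0) = z."""
    x = z
    for _ in range(int(round(t / dt))):
        k1 = s(x)
        k2 = s(x + 0.5 * dt * k1)
        k3 = s(x + 0.5 * dt * k2)
        k4 = s(x + dt * k3)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return x
```

The assertion checks g(z, 1.5) = g[g(z, 0.5), 1.0] at the arbitrary point z = 0.3.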
Theorem 4. Introducing the function

s(z) = \sum_{n=0}^{\infty} w_n z^n = Q \sum_{n=0}^{\infty} f_n z^n - Qz = Q[q(z) - z],    (1.28)

where q(z) was defined in (1.6), for every |z| ≤ 1 the generating function g(z, t) fulfils the backward differential equation

\partial g(z, t)/\partial t = s[g(z, t)] = -Q g(z, t) + Q q[g(z, t)],    (1.29)

and the forward linear partial differential equation

\partial g(z, t)/\partial t = s(z) \, \partial g(z, t)/\partial z = Q[q(z) - z] \, \partial g(z, t)/\partial z,    (1.30)

respectively, under the initial condition g(z, 0) = z.

For the proof, the following lemma is needed.

Lemma 1. If the conditions in (1.25)-(1.27) are fulfilled, then the asymptotic formula

g(z, t) = z + s(z) t + o(t) = z + Q[q(z) - z] t + o(t)    (1.31)

is uniformly valid for every z with |z| ≤ 1, in the case t ↓ 0.

Proof. One only has to prove that the absolute value of g(z, t) - z - s(z)t converges to zero faster than t for t ↓ 0, since the statement in (1.31) immediately follows from this. To this end, let us write the inequality

|[g(z, t) - z]/t - s(z)| ≤ |[p_1(t) - 1]/t - w_1| \, |z| + \sum_{k≠1} |p_k(t)/t - w_k| \, |z|^k
  ≤ |[p_1(t) - 1]/t - w_1| + \sum_{k≠1, k≤N} |p_k(t)/t - w_k| + \sum_{k>N} p_k(t)/t + \sum_{k>N} w_k.

The last term on the right-hand side can be made arbitrarily small if N is chosen sufficiently large. By fixing now N at this value and selecting a sufficiently small value of t, even the first and second terms on the right-hand side can be made arbitrarily small. Further, from (1.26), it is seen that in the case of t → 0,

\sum_{k>N} p_k(t)/t → \sum_{k>N} w_k,

hence even the third term can be made arbitrarily small. By virtue of the foregoing, the lemma is proved.

Proof. Now the generating function equations (1.29) and (1.30) can easily be derived. By using (1.23), one has

g(z, t + Δt) = g[g(z, t), Δt],  i.e.  g(z, t) = g[g(z, t - Δt), Δt].
With the help of the lemma proved above, one obtains from these the equations

g(z, t + Δt) = g(z, t) + s[g(z, t)] Δt + o(Δt),

and

g(z, t) = g(z, t - Δt) + s[g(z, t - Δt)] Δt + o(Δt).

Since g(z, t) is uniformly continuous for every t ∈ [0, ∞) if |z| ≤ 1, it is obvious that the above leads to

\partial g(z, t)/\partial t = s[g(z, t)] = -Q g(z, t) + Q q[g(z, t)],

which agrees exactly with (1.29). To derive (1.30), one applies equation (1.23) in an alternative way. From the relationships

g(z, t + Δt) = g[g(z, Δt), t] = g[z + s(z)Δt + o(Δt), t] = g(z, t) + [\partial g(z, t)/\partial z] s(z) Δt + o(Δt),

and

g(z, t) = g[g(z, Δt), t - Δt] = g[z + s(z)Δt + o(Δt), t - Δt] = g(z, t - Δt) + [\partial g(z, t - Δt)/\partial z] s(z) Δt + o(Δt),

after rearrangement and performing the limit Δt → 0, the generating function equation in (1.30) is immediately obtained. The initial condition g(z, 0) = z is the consequence of the relation

p_n(0) = δ_{n1},  ∀ n ≥ 0,

as was pointed out before.
1.3 Investigation of the Generating Function Equations

1.3.1 Uniqueness of the solution, regular and irregular branching processes

From the theory of differential equations, it follows that the solutions of the generating function equations (1.29) and (1.30) are identical. Hence, it is sufficient to investigate only (1.29), taking into account the initial condition g(z, 0) = z. According to the existence theorem of differential equations, equation (1.29) has for every point |z| < 1 only one solution g(z, t) which satisfies the initial condition g(z, 0) = z and equation (1.23). However, it has to be specifically investigated under what conditions this solution also satisfies the limit relationship

lim_{z↑1} g(z, t) = g(1, t) = \sum_{n=0}^{\infty} p_n(t|1) = 1,    (1.32)

i.e. under which conditions the solution g(z, t) can be considered a probability generating function. For this purpose, we shall use the integral equation

g(z, t) = Q \int_0^t q[g(z, t - u)] e^{-Qu} du + z e^{-Qt},    (1.33)

which is equivalent with the differential equation (1.29) and the integral equation (1.7).
Imre Pázsit & Lénárd Pál
Theorem 5. The integral equation (1.33) has a single solution g(z, t) which satisfies the inequality |g(z, t)| ≤ 1 in every point |z| ≤ 1 and the limit relation (1.32) if and only if
dq(z) q1 = dz
= z=1
∞
nfn < +∞.
(1.34)
n=0
Proof. For the proof, suppose the opposite of the statement. In the first step, assume that (1.33) has two solutions in the interval [0, t0 ]. Let these be g1 (z, t) and g2 (z, t). It will be shown that g1 (z, t) and g2 (z, t) cannot be different in the interval [0, t0 ), i.e. g1 (z, t) = g2 (z, t),
∀t ∈ [0, t0 ].
To prove this, one has to make use of the property of the generating function that if |u| ≤ 1 and |v| ≤ 1 then8 |q(u) − q(v)| ≤ q1 |u − v|. Hence, one has
t
|g1 (z, t) − g2 (z, t)| ≤ Qq1
(1.35)
|g1 (z, t − u) − g2 (z, t − u)|e −Qu du.
0
Define the function K (t , t) = sup |g1 (z, u) − g2 (z, u)|
(1.36)
|g1 (z, t) − g2 (z, t)| ≤ q1 (1 − e −Qt )K (0, t).
(1.37)
t ≤u≤t |z|≤1
by the use of which one obtains that
Select now a value of t0 > 0 such that the inequality 0 < q1 (1 − e −Qt0 ) < 1
(1.38)
is fulfilled. It follows from (1.37) that sup |g1 (z, t) − g2 (z, t)| ≤ q1 (1 − e −Qt0 )K (0, t0 ), 0≤t≤t0 |z|≤1
i.e. one can write that K (0, t0 ) ≤ q1 (1 − e −Qt0 )K (0, t0 )
(1.39) which, by virtue of the inequality (1.38), can only be fulfilled if K (0, t0 ) = 0. This means that in every point of the interval [0, t0 ) one has g1 (z, t) = g2 (z, t), ∀|z| ≤ 1. (1.40) In the next step, it will be shown that the equality (1.40) is valid also in the interval [t0 , 2t0 ]. Since (1.33) has only one solution in the interval [0, t0 ], therefore for every t which lies in the interval t0 < t < 2t0 , the equation t g1 (z, t) − g2 (z, t) = Q {q[g1 (z, t − u)] − q[g2 (z, t − u)]}e −Qu du 0
holds. Based on this, one can write that |g1 (z, t) − g2 (z, t)| ≤ Qq1
t
t0 8 The
proof of the inequality can be found in Section A.3.
|g1 (z, t − u) − g2 (z, t − u)|e −Qu du.
13
Basic Notions
By applying the previous procedure, one arrives at the inequality K (t0 , 2t0 ) ≤ q1 (1 − e −Qt0 )K (t0 , 2t0 ),
(1.41)
in which K (t0 , 2t0 ) =
sup |g1 (z, t) − g2 (z, t)|.
t0 ≤t≤2t0 |z|≤1
Since 0 < q1 (1 − e −Qt0 ) < 1, the relation (1.41) can only be valid if and only if the equality K (t0 , 2t0 ) = 0, i.e. g1 (z, t) = g2 (z, t),
∀ |z| ≤ 1
(1.42)
is fulfilled in every point t of the interval [t0 , 2t0 ). By continuing this procedure, it is seen that the equality (1.42) must be valid in every point t of the interval [0, +∞]. On the other hand, from this it follows that (1.33) has one and only one solution for every |z| ≤ 1 in the interval 0 ≤ t < ∞. If z = 1 then (1.33) can be written in the following form: g(1, t) = Q
t
q[g(1, t − u)]e −Qu du + e −Qt ,
0 ≤ t < ∞.
0
Because g(1, t) = 1 is a solution of this equation, it is obvious that this is the only solution in the point z = 1. Based on the foregoing, the branching process n(t) is called regular if lim g(z, t) = g(1, t) = z↑1
∞
P{n(t) = n|n(0) = 1} =
n=0
∞
pn (t|1) = 1,
(1.43)
n=0
and the condition of this is that the inequality q1 < ∞ should be fulfilled. In the case when q1 = ∞, then lim g(z, t) = g(1, t) = z↑1
∞
P{n(t) = n|n(0) = 1} =
n=0
∞
pn (t|1) < 1.
(1.44)
n=0
This process is called irregular or in other words explosive. The notation explosive is motivated by the fact that in this case P{n(t) = ∞|n(0) = 1} = 1 − g(1, t) > 0, i.e. an infinite number of progeny can be generated during a finite time interval with non-zero probability. (In the case of a regular process P{n(t) = ∞|n(0) = 1} = 0.) Naturally, this can only happen if there is a nonzero probability that an infinite number of particles can be generated in a single multiplication reaction. In reality, of course, such a process can hardly exist. It can be shown that instead of the regularity condition q1 < ∞ of the branching processes, a more general condition can also be formulated. Define the integral C =
1
1−
du , u − q(u)
(1.45)
in which > 0. Theorem 6. The branching process n(t) corresponding to the generating function q(z) is regular if C = ∞, and explosive if C < ∞.
Proof. Starting from (1.29), one has

Qt = \int_{g(z,t)}^{z} du/[u - q(u)],    (1.46)

in which z is now a real number in the interval [0, 1]. From this equation, it is seen that the right-hand side has to be bounded for every finite t. Define the function

R(z) = \int_0^z du/[u - q(u)],

and let z_0 be the smallest positive number for which z_0 - q(z_0) = 0. If z_0 < z ≤ 1, then z - q(z) ≥ 0, i.e. R(z) is a non-decreasing function of z on the interval (z_0, 1]. From (1.46) it follows that

Qt = R(z) - R[g(z, t)],  ∀ z ∈ (z_0, 1],

i.e. the inequality

R(1) - R[g(1, t)] < ∞,  ∀ t < ∞,    (1.47)

has to hold for every finite t at the point z = 1 as well. If C_ε = ∞, then

lim_{z↑1} R(z) = R(1) = ∞,

and (1.47) can only be fulfilled if

lim_{z↑1} g(z, t) = g(1, t) = 1,

i.e. when the branching process is regular. If, however, C_ε < ∞, then

lim_{z↑1} R(z) = R(1) < ∞,

and in this case the inequality

0 < R(1) - R[g(1, t)],  ∀ t > 0,    (1.48)

can only be fulfilled if

lim_{z↑1} g(z, t) = g(1, t) < 1,

i.e. when the branching process is explosive. The simple condition of regularity q_1 < ∞ arises immediately from the more general condition C_ε = ∞. Namely, if

q(u) = 1 + q_1(u - 1) + o(u - 1),

then

C_ε = \int_{1-ε}^{1} du/[(q_1 - 1)(1 - u) + o(1 - u)],

and from this it can be seen that C_ε is divergent if q_1 < ∞. In the case, however, when

q(u) = u - A(1 - u)^α + o[(1 - u)^α],

where 0 < α < 1 and 0 < A < ∞, then C_ε is finite, i.e. g(1, t) < 1, and

lim_{u↑1} q'(u) = q_1 = 1 + αA lim_{u↑1} (1 - u)^{-(1-α)} = ∞,

which is simply the condition of explosiveness.
Basic Notions
1.3.2 Moments: subcritical, critical and supercritical systems Start with a recollection of some well-known definitions and relations. The expectation E{n(t)k }
=
∞
nk pn (t) = m(k) (t),
∀ t ∈ [0, ∞)
(1.49)
n=0
is called the kth ordinary moment of the branching process n(t), whereas the expectation E{n(t)[n(t) − 1] . . . [n(t) − k + 1]} =
∞
n(n − 1) · · · (n − k + 1) pn (t) = mk (t),
∀ t ∈ [0, ∞)
(1.50)
n=k
is called the kth factorial moment. The ordinary moments can be expressed by the factorial moments, and vice versa, factorial moments by the ordinary moments as [17] m(k) (t) =
k
S(k, j)mj (t)
and
mk (t) =
j=1
k
∫ (k, j)m(j) (t),
j=1
where ∫ (k, j) denotes the Stirling numbers of the first kind, whereas S(k, j) that of the second kind. Since it may be the case that g(z, t) is not determined for the values z > 1, the kth factorial moment is given by the limit ∂k g(z, t) (1.51) z↑1 ∂zk for every positive real number k. According to the Abelian theorem concerning the summation of series [18], the expression E{n(t)[n(t) − 1] · · · [n(t) − k + 1]} = mk (t) = lim
∞
∂k g(z, t) = n(n − 1) · · · (n − k + 1) zn−k pn (t) ∂zk n=k
converges to E{n(t)[n(t) − 1] · · · [n(t) − k + 1]} =
∞
n(n − 1) · · · (n − k + 1) pn (t)
n=k
for every fixed t ∈ [0, ∞) if z ↑ 1. It the following, we will mostly need the first three ordinary or factorial moments. The variance D2 {n(t)} of the process n(t) can be easily calculated from the factorial moments, namely D2 {n(t)} = m2 (t) + m1 (t) − [m1 (t)]2 .
(1.52)
For calculating the first and second factorial moments of n(t), we will need the first and second factorial moments of the random variable ν. From the basic generating function q(z) defined in (1.6), the first factorial moment is given by
dq(z) = E{ν} = q (1) = q1 , (1.53) dz z=1 whereas the second is given by
2 d q(z) = E{ν(ν − 1)} = q (1) = q2 . (1.54) dz2 z=1 In the following, the notations q1 and q2 will be used.
The first and second factorial moments of n(t) can be determined from the integral equation (1.33). By differentiating with respect to z once and twice, as well as substituting z = 1, one obtains

m_1(t) = Q q_1 \int_0^t e^{-Q(t-u)} m_1(u) du + e^{-Qt},    (1.55)

and

m_2(t) = Q q_1 \int_0^t e^{-Q(t-u)} m_2(u) du + Q q_2 \int_0^t e^{-Q(t-u)} [m_1(u)]² du,    (1.56)

respectively. Introduce the Laplace transforms

m̃_k(s) = \int_0^∞ e^{-st} m_k(t) dt,  k = 1, 2.

It follows from (1.55) that

m̃_1(s) = 1/[s + Q(1 - q_1)],  i.e.  m_1(t) = e^{αt},    (1.57)

where

α = Q(q_1 - 1) = -a    (1.58)

is the fundamental exponent characterising the multiplying medium. If α < 0, then the expectation of the number of particles generated by one particle decreases exponentially with t from unity to zero. If α = 0, then the expectation of the particle number remains unity for every t. Finally, if α > 0, then the expectation grows exponentially to infinity. The solution of (1.56) can also be obtained by Laplace transform methods. For the Laplace transform one obtains

m̃_2(s) = Q q_2 V(s)/(s + a),

where

V(s) = \int_0^∞ e^{-st} [m_1(t)]² dt = \int_0^∞ e^{-st} e^{-2at} dt = 1/(s + 2a),

i.e.

m̃_2(s) = (Q q_2/a) [1/(s + a) - 1/(s + 2a)],  if a ≠ 0,    and    m̃_2(s) = Q q_2/s²,  if a = 0.

For the second moment one obtains the expression

m_2(t) = (Q q_2/α) e^{αt} (e^{αt} - 1),  if α ≠ 0;    m_2(t) = Q q_2 t,  if α = 0.    (1.59)

By using (1.52), the variance of the process n(t) is given as

D²{n(t)} = (Q q_2/α - 1) e^{αt} (e^{αt} - 1),  if α ≠ 0;    D²{n(t)} = Q q_2 t,  if α = 0.    (1.60)
Basic Notions
Definition 2. According to the formulae (1.57) and (1.60), the branching processes can be divided into three categories depending on in which media they occur: for α < 0, the process is called subcritical; if α = 0 and Qq2 > 0, the process is critical; and finally if α > 0, the process is supercritical. It is remarkable that the characteristics of the medium, and hence also that of the process, are exclusively determined by the quantities q1 and q2 . From the definition it follows that if q1 < 1 then the medium is subcritical; if q1 = 1 and q2 > 0 then it is critical; and finally, if q1 > 1 then it is supercritical. It is also worth remarking that, according to (1.57) and (1.60), in a critical system, while the expectation of the particle number is constant, the variance grows linearly in time, and diverges asymptotically. The implications of this fact for the operation of nuclear reactors in the critical state are sometimes discussed in the literature [11]. We will return to this question in connection with the extinction probability.
1.3.3 Semi-invariants In many cases, in addition to the ordinary and factorial moments of the n(t), knowledge of its semi-invariants ∂n log gexp (z, t) , z→0 ∂zn
κn (t) = lim
n = 1, 2, . . .
(1.61)
is also needed. An important property of the semi-invariants is expressed by the theorem below. Theorem 7. The semi-invariants of the branching process n(t) satisfy the linear differential equation system dκn (t) = dt j=1 n
n Rn−j+1 κj (t), j−1
with the initial conditions
κn (0) =
1,
if n = 1,
0,
if n > 1.
n = 1, 2, . . .
(1.62)
The coefficients Rj in the equation system are given by the formula Rj = QE{(ν − 1)j } = Q
∞
(k − 1) j fk .
(1.63)
k=1
Proof. Introduce the logarithmic generating function φ(z, t) = log gexp (z, t). From equation (1.11), it follows that at every instant t and point z where gexp (z, t) = 0, the equation ∂φ(z, t) ∂φ(z, t) = Q[q(e z )e −z − 1] ∂t ∂z holds with the initial condition φ(0, z) = z. Let Q[q(e z )e −z − 1] = R(z),
(1.64)
and notice that

R(z) = Q [\sum_{k=0}^{\infty} f_k e^{(k-1)z} - 1] = Q [\sum_{j=0}^{\infty} (\sum_{k=0}^{\infty} f_k (k - 1)^j) z^j/j! - 1] = Q \sum_{j=1}^{\infty} E{(ν - 1)^j} z^j/j! = \sum_{j=1}^{\infty} R_j z^j/j!,

where R_j is identical with (1.63). If the semi-invariants κ_k(t), k = 1, 2, ... exist, then one can write that

φ(z, t) = \sum_{k=1}^{\infty} κ_k(t) z^k/k!.    (1.65)

Substitute now the power series of φ(z, t) and R(z) with respect to z into equation (1.64). One obtains that

\sum_{n=1}^{\infty} [dκ_n(t)/dt] z^n/n! = \sum_{j=1}^{\infty} R_j z^j/j! \sum_{k=1}^{\infty} κ_k(t) z^{k-1}/(k - 1)!.

The coefficient of z^n/n! on the right-hand side is equal to

n! [R_n κ_1(t)/(n! \, 0!) + R_{n-1} κ_2(t)/((n-1)! \, 1!) + ··· + R_1 κ_n(t)/(1! \, (n-1)!)] = \sum_{i=1}^{n} \binom{n}{i-1} R_{n-i+1} κ_i(t).

Hence

dκ_n(t)/dt = \sum_{i=1}^{n} \binom{n}{i-1} R_{n-i+1} κ_i(t),

and this is identical with equation (1.62).

Determine now the semi-invariants κ_1(t) and κ_2(t). First of all, one notices that κ_1(t) = E{n(t)} = m_1(t) and κ_2(t) = D²{n(t)} = m_2(t) + m_1(t)[1 - m_1(t)]. Based on (1.62), one obtains

dκ_1(t)/dt = R_1 κ_1(t)    and    dκ_2(t)/dt = 2R_1 κ_2(t) + R_2 κ_1(t).

The initial conditions are κ_1(0) = 1 and κ_2(0) = 0. From (1.63) one has

R_1 = Q E{ν - 1} = Q(q_1 - 1) = α,
R_2 = Q E{(ν - 1)²} = Q [E{ν(ν - 1)} - E{ν - 1}] = α (Q q_2/α - 1) = Q q_2 - α,

and accordingly,

κ_1(t) = e^{R_1 t} = e^{αt}.

Further,

κ_2(t) = (Q q_2/α - 1) e^{αt} (e^{αt} - 1),  if α ≠ 0;    κ_2(t) = Q q_2 t,  if α = 0.
one can write that (s − nR1 )˜κn =
n n k=1
k
Rk κ˜ n−k+1 + δn1 ,
(1.66)
i.e. (s − R1 )˜κ1 = 1 −R2 κ˜ 1 + (s − R2 )˜κ2 = 0 3 −R3 κ˜ 1 − R2 κ˜ 2 + (s − 3R1 )˜κ3 = 0 2 .. .
−Rn κ˜ 1 −
n Rn−1 κ˜ 2 + · · · + (s − nR1 )˜κn = 0. n−1
From this, the following solution is obtained: s − R1 0 1 −R2 s − 2R2 κ˜ n = .. Dn n . −Rn − Rn−1 n−1
0 0 .. n .
−
n−2
Rn−2
· · · 1 · · · 0 ··· 0
in which Dn =
n
(s − kR1 ).
k=1
As an illustration, the Laplace-transforms of the first three semi-invariants are given as follows: κ˜ 1 (s) =
1 , s − R1
s − R1 1 κ˜ 2 (s) = (s − R1 )(s − 2R1 ) −R2 and
1 R2 , = 0 (s − R1 )(s − R2 )
s − R1 1 −R2 κ˜ 3 (s) = (s − R1 )(s − 2R1 )(s − 3R1 ) −R3 =
3R22 + R3 (s − 2R1 ) . (s − R1 )(s − 2R1 )(s − 3R1 )
0 s − 2R1 −3R2
1 0 0
Investigate now the dependence of the nth semi-invariant on t in the case when the medium is critical, i.e. when R_1 = α = 0 and R_2 = Q q_2 > 0. It is easy to confirm that in this case, if n = 1, 2, then

κ̃_1(s) = 1/s    and    κ̃_2(s) = R_2/s²,    (1.67)

whereas if n > 2, then

κ̃_n(s) = [\prod_{k=3}^{n} \binom{k}{2}] (Q q_2)^{n-1} [1 + Π̃_{n-2}(s)]/s^n,    (1.68)

where the function Π̃_k(s) is a kth order polynomial in s. From this, it obviously follows that in the critical state, if n = 1, 2, then

κ_1(t) = 1    and    κ_2(t) = Q q_2 t,

whereas if n > 2, then

κ_n(t) = [\prod_{k=3}^{n} \binom{k}{2}] (Q q_2)^{n-1} t^{n-1} [1 + Π_{n-2}(1/t)]/(n - 1)!,

where Π_k(1/t) is a kth order polynomial in 1/t. It is worth noting that the t-dependence of the semi-invariants of the branching process n(t) in the critical state is dominantly determined by the second factorial moment q_2.
1.4 Discrete Time Branching Processes

It is well known that F. Galton and H.W. Watson were the first to deal with branching processes in the 1870s, in order to determine the probability of the extinction of families. The number of articles and monographs on the discrete time branching processes named after them is exceedingly large. An excellent survey of the Galton–Watson processes is given in the by now classic monograph by T.E. Harris [6]. In this book, however, discrete time branching processes are not dealt with in detail. Only some elementary questions are discussed here that are necessary, among others, for the modelling of branching processes.

Divide the interval [0, t) into T equal and mutually non-overlapping subintervals Δt. The concept of the reaction will be defined as before, with the associated number distribution, such that f_k, k = 0, 1, 2, ... is the probability that k particles are born in a reaction. Obviously, f_0 is the probability of absorption, f_1 is that of renewal, and f_k, k = 2, 3, ... is that of actual multiplication. Suppose that in every subinterval Δt at most one reaction can occur. Moreover, let W be the probability of the occurrence of a reaction, while 1 - W is the probability of its non-occurrence. Let n(j), j = 0, 1, ..., T denote the number of particles in the multiplying medium at the jth discrete time point, i.e. in the subinterval [(j - 1)Δt, jΔt]. Determine the probability

P{n(j) = n | n(0) = 1} = p_n(j),  j = 0, 1, ..., T,    (1.69)

of the event that exactly n particles are present in the multiplying system at the jth discrete time instant, provided that there was just one particle present at the 0th time instant. From obvious considerations, one can write down the backward equation for j ≥ 1 as

p_n(j) = (1 - W) p_n(j - 1) + W f_0 δ_{n0} + W \sum_{k=1}^{\infty} f_k \sum_{n_1 + ··· + n_k = n} \prod_{i=1}^{k} p_{n_i}(j - 1).    (1.70)

By introducing the generating functions

g(z, j) = \sum_{n=0}^{\infty} p_n(j) z^n    (1.71)
and

q(z) = \sum_{k=0}^{\infty} f_k z^k,    (1.72)

after some elementary considerations one obtains the equation

g(z, j) = (1 - W) g(z, j - 1) + W q[g(z, j - 1)],    (1.73)

and since p_n(0) = δ_{n1}, one has g(z, 0) = z. Further, obviously, if

\sum_{n=0}^{\infty} p_n(j) = 1    and    \sum_{k=0}^{\infty} f_k = 1,    then    g(1, j) = 1.

With adequate rigour, based on the fundamental relation (1.23), one can discuss the discrete time homogeneous branching processes, the so-called Galton–Watson processes. Let us call the particles that are born from one particle during unit time one generation of particles. Denote their number by n(1) and introduce the notations

P{n(1) = n | n(0) = 1} = p_n(1) = p_n,

and

g(z, 1) = \sum_{n=0}^{\infty} p_n z^n = g(z),

respectively. From (1.23), by selecting t = j - 1 and u = 1, one obtains

g(z, j) = g[g(z, 1), j - 1] = g[g(z), j - 1].    (1.74)

One notes that

g(z, 0) = z,  g(z, 1) = g(z),  g(z, 2) = g(g(z)), ...,

i.e. g(z, j) is equal to the jth iterate of g(z, 1) = g(z). Accordingly,

g(z, j) = g(g(... g(z) ...)) = g_j(z),

where g_j(z) denotes the jth iterate of g(z). Equation (1.73) can also be obtained by an iteration process starting from j = 0. Then

p_n(1) = (1 - W) δ_{n1} + W \sum_{k=0}^{\infty} f_k δ_{nk},

and hence

g(z, 1) = g(z) = (1 - W) z + W q(z).

By continuing, one can write that

g(z, 2) = g(g(z, 1), 1) = (1 - W) g(z, 1) + W q[g(z, 1)],
g(z, 3) = g(g(z, 1), 2) = g(g(g(z, 1), 1), 1) = g(g(z, 2), 1) = (1 - W) g(z, 2) + W q[g(z, 2)],

and so on. This leads to the conclusion that

g(z, j) = (1 - W) g(z, j - 1) + W q[g(z, j - 1)],    (1.75)
and this is the same as equation (1.73). The expectation m_1(j) = E{n(j) | n(0) = 1} can be calculated from (1.73) by the relation

dg(z, j)/dz |_{z=1} = m_1(j).

One obtains that m_1(j) = [1 - W(1 - q_1)] m_1(j - 1), and since m_1(0) = 1,

m_1(j) = [1 - W(1 - q_1)]^j,    (1.76)

where q_1 = E{ν}.^9 It is seen that for fixed Δt,

lim_{j→∞} m_1(j) = 0, if q_1 < 1;  = 1, if q_1 = 1;  = ∞, if q_1 > 1.
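The recursion (1.73) and the expectation (1.76) can be checked by iterating the generating function directly and differentiating it numerically at z = 1. The offspring distribution and W below are illustrative assumptions.

```python
# Iterate g(z,j) = (1-W) g(z,j-1) + W q[g(z,j-1)] and check (1.76).
f = [0.5, 0.2, 0.3]          # illustrative f_0, f_1, f_2
W = 0.7                      # illustrative reaction probability per subinterval
q1 = sum(k * fk for k, fk in enumerate(f))

def q(x):
    return sum(fk * x**k for k, fk in enumerate(f))

def g(z, j):
    x = z
    for _ in range(j):
        x = (1.0 - W) * x + W * q(x)
    return x

j, h = 6, 1e-6
m1_numeric = (g(1.0 + h, j) - g(1.0 - h, j)) / (2.0 * h)   # dg/dz at z = 1
m1_exact = (1.0 - W * (1.0 - q1)) ** j                     # (1.76)
```

A central difference is used for the derivative; since g(z, j) is a polynomial, evaluating it slightly above z = 1 is unproblematic.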
Accordingly, one can state that the discrete time process n(j), j = 0, 1, ... is subcritical if q_1 < 1, critical if q_1 = 1 and supercritical if q_1 > 1. The second factorial moment m_2(j) = E{n(j)[n(j) - 1] | n(0) = 1} can be calculated by using the relation

d²g(z, j)/dz² |_{z=1} = m_2(j).

From (1.73) one obtains

m_2(j) = [1 - W(1 - q_1)] m_2(j - 1) + W q_2 [m_1(j - 1)]².

By introducing the notations

1 - W(1 - q_1) = a    and    W q_2 = b,

and by taking into consideration (1.76), from the previous equation the recursive expression

m_2(j) = a m_2(j - 1) + b a^{2(j-1)}

is obtained, which by using the generating function

γ(s) = \sum_{j=0}^{\infty} m_2(j) s^j

is simplified to

γ(s) = a s γ(s) + b s/(1 - a²s),  a²s < 1.

9 Let jΔt = t and introduce the notation W_j = Q_j Δt, where W = W(Δt). If, for a fixed t,

lim_{j→∞} Q_j = lim_{Δt→0} W/Δt = Q,

then

lim_{j→∞} m_1(j) = lim_{j→∞} [1 - Q_j(1 - q_1) t/j]^j = e^{-Q(1-q_1)t},

and this agrees exactly with the expectation (1.57) of the continuous time parameter process.
After some elementary steps, one obtains the formula

γ(s) = [b s/(1 - a)] [1/(1 - as) - a/(1 - a²s)].

From this it immediately follows that

m_2(j) = b a^{j-1} (1 - a^j)/(1 - a) = W q_2 m_1(j - 1) [1 - m_1(j)]/[1 - m_1(1)],  if q_1 ≠ 1.    (1.77)

If q_1 → 1, then m_1(j) → 1, thus

lim_{q_1→1} [1 - m_1(j)]/[1 - m_1(1)] = lim_{q_1→1} {1 - [1 - W(1 - q_1)]^j}/{1 - [1 - W(1 - q_1)]} = j,

hence in a critical system

m_2(j) = W q_2 j.    (1.78)

For the variance of the process n(j), one obtains D²{n(j) | n(0) = 1} = m_2(j) + m_1(j)[1 - m_1(j)], i.e.

D²{n(j) | n(0) = 1} = [q_2/(1 - q_1) + m_1(1)] m_1(j - 1) [1 - m_1(j)],  if q_1 ≠ 1;    D²{n(j) | n(0) = 1} = W q_2 j,  if q_1 = 1.    (1.79)
This random tree corresponds to a branching process in which the number of particles at t is equal to the number of active nodes of the tree, whereas the number of particles absorbed until t equals the number of inactive nodes. Denote the number of active nodes at t by na(t) and that of the inactive nodes by ni(t). Determine now the generating function

g^{(a,i)}(za, zi, t) = Σ_{na=0}^∞ Σ_{ni=0}^∞ p^{(a,i)}(na, ni, t|1, 0) za^{na} zi^{ni}   (1.80)

¹⁰ Results of a more detailed investigation of the problem can be found in the works [20–23].
Imre Pázsit & Lénárd Pál
Figure 1.1 The first step of the random development of the tree. The active root node becomes inactive and creates 0, 1, 2, 3, . . . new active nodes with probabilities f0, f1, f2, f3, . . . . Active nodes are marked by light circles, inactive nodes by dark ones.

Figure 1.2 A possible realisation of the tree development. The active nodes, capable of further development, are marked by light circles. The inactive nodes, which no longer take part in the development, are marked by dark circles. The horizontal dotted lines denote the random time instants where the new nodes appear after being created by the node that became inactive (i.e. the branching of the previously active node).
of the probability P{na(t) = na, ni(t) = ni|na(0) = 1, ni(0) = 0} = p^{(a,i)}(na, ni, t|1, 0). By considering the two mutually exclusive first events that can occur after the moment t = 0, one arrives at

∂g^{(a,i)}(za, zi, t)/∂t = −Q g^{(a,i)}(za, zi, t) + Q zi q[g^{(a,i)}(za, zi, t)]   (1.81)

with the initial condition g^{(a,i)}(za, zi, 0) = za. It is immediately seen that the generating function g^{(a,i)}(za = z, zi = 1, t) = g^{(a)}(z, t) satisfies equation (1.5), i.e. g^{(a)}(z, t) = g(z, t). From (1.81), one can immediately write down the expectations and variances of the numbers of active and inactive nodes. Without going into details, using the previous notations, one obtains

m1^{(a)}(t) = e^{αt}  and  m1^{(i)}(t) = (Q/α)(e^{αt} − 1),

if q1 ≠ 1. When q1 = 1, then

m1^{(a)}(t) = 1  and  m1^{(i)}(t) = Qt.

The variances are given by the expressions

D²{na(t)} = (Q/α) E{(ν − 1)²}(e^{αt} − 1) e^{αt}

and

D²{ni(t)} = (Q/α)³ [D²{ν} + E{(ν − 1)²} e^{αt}](e^{αt} − 1) − 2(Q/α)² D²{ν} Q t e^{αt},

if q1 ≠ 1. If q1 = 1, then

D²{na(t)} = q2 Qt  and  D²{ni(t)} = Qt + (1/3) q2 (Qt)³.
The variance of the number of inactive nodes for the case q1 < 1 converges to the limit D²{ν}/(1 − q1)³ as t → ∞. This means that in the case of branching processes in a subcritical medium, the variance of the number of absorbed particles converges to a finite value as the duration of the process tends to infinity.

It is worth calculating also the covariance function of na(t) and ni(t),

Cov{na(t)ni(t)} = E{na(t)ni(t)} − E{na(t)}E{ni(t)}.

After some elementary operations, for q1 ≠ 1 one obtains

Cov{na(t)ni(t)} = [1 + D²{ν}/(q1 − 1)²] e^{αt}(e^{αt} − 1) − [D²{ν}/(q1 − 1)] Q t e^{αt},   (1.82)

and for q1 = 1

Cov{na(t)ni(t)} = (1/2) q2 (Qt)².   (1.83)

For the case of q1 = 1, i.e. in a critical medium, the correlation function shows a peculiar behaviour. One obtains

Cov{na(t)ni(t)}/[D{na(t)} D{ni(t)}] = (√3/2) [1 + 3/(q2 (Qt)²)]^{−1/2},   (1.84)

and from this it follows that in a critical medium, sufficiently long after the start of the process, the correlation between the numbers of the active and inactive particles is √3/2, i.e. it is constant, independently of any parameter influencing the process. More detailed calculations can be found in [20–23]. The investigation of random trees with continuous time parameter has enriched the theory of branching processes with many valuable results; however, their full description lies somewhat outside the basic subject of this monograph.
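The moment formulas above can be checked by direct simulation of the random tree. The following sketch (function names and parameter values are ours) grows the tree with exponential lifetimes and compares the sample means of the active and inactive node numbers with e^{αt} and (Q/α)(e^{αt} − 1) for a subcritical example with q1 = 0.95:

```python
import math
import random

def sample_offspring(f, rng):
    """Sample k with probability f[k]."""
    u, acc = rng.random(), 0.0
    for k, p in enumerate(f):
        acc += p
        if u < acc:
            return k
    return len(f) - 1

def sample_tree(t_end, Q, f, rng):
    """One realisation of the random tree: returns (active, inactive) node
    counts at t_end.  Each active node lives an Exp(Q)-distributed time, then
    becomes inactive and creates k new active nodes with probability f[k]."""
    na, ni, t = 1, 0, 0.0
    while na > 0:
        t += rng.expovariate(na * Q)  # next branching event, total rate na*Q
        if t > t_end:
            break
        na += sample_offspring(f, rng) - 1
        ni += 1
    return na, ni

rng = random.Random(1)
Q, f, t_end, N = 1.0, (0.3, 0.45, 0.25), 2.0, 20000  # q1 = 0.95, subcritical
q1 = sum(k * p for k, p in enumerate(f))
alpha = Q * (q1 - 1.0)
runs = [sample_tree(t_end, Q, f, rng) for _ in range(N)]
mean_active = sum(r[0] for r in runs) / N
mean_inactive = sum(r[1] for r in runs) / N
# theory: E{na(t)} = exp(alpha*t), E{ni(t)} = (Q/alpha)*(exp(alpha*t) - 1)
```

The estimated means agree with the analytic expressions within statistical accuracy.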
1.6 Illustrative Examples

In the following, both in the illustrative examples and for the exactly solvable problems, it is assumed that the generating function q(z) of the medium is known. The use of the quadratic expression

q(z) = f0 + f1 z + f2 z² = 1 + q1(z − 1) + (1/2) q2 (z − 1)²,   f0 + f1 + f2 = 1,   (1.85)
Figure 1.3 The permitted values of q1 and q2 if the generating function q(z) of the random variable ν is a quadratic function of z.
is advantageous because it represents the effects of the generating function q(z), which is unknown but possesses three finite first factorial moments, sufficiently well. Besides, physically it describes a process in which at most two particles can be born in a reaction (collision), which is a good model of atomic collision cascades with recoil production. From (1.85) it follows that the allowable values of q1 and q2 are contained in a specific domain of the (q1, q2) plane; this domain is illustrated in Fig. 1.3. In what follows, the generating function (1.85) will be called a quadratic generating function. The probabilities fi, i = 0, 1, 2, can be expressed by the moments q1 and q2 as

f0 = 1 − q1 + q2/2,   f1 = q1 − q2,   f2 = q2/2.

The roots of the equation

q(z) − z = f2 z² − (1 − f1)z + f0 = 0

will also be needed. These are obtained as

z1 = 1  and  z2 = 1 + 2(1 − q1)/q2.   (1.86)
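The relations between (f0, f1, f2) and (q1, q2) can be coded directly; the sketch below (names ours) also tests whether a point (q1, q2) lies in the permitted region of Fig. 1.3, i.e. whether all three probabilities are non-negative:

```python
def quadratic_f(q1, q2):
    """Probabilities f0, f1, f2 that reproduce the factorial moments q1, q2
    of the quadratic generating function (1.85)."""
    return 1.0 - q1 + 0.5 * q2, q1 - q2, 0.5 * q2

def permitted(q1, q2):
    """(q1, q2) lies in the permitted region iff f0, f1, f2 are all >= 0."""
    return all(p >= 0.0 for p in quadratic_f(q1, q2))

f0, f1, f2 = quadratic_f(0.95, 0.5)
print(f0, f1, f2)  # approximately 0.3, 0.45, 0.25
```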
1.6.1 Regular processes

Yule–Furry process

One of the simplest regular processes is the Yule–Furry process, well known from the literature. In this case f0 = f1 = 0 and f2 = 1, i.e. q(z) = z², and hence q1 = q2 = 2. In other words, in this specific medium the particle is doubled in every reaction. Let us write down both the backward equation

∂g(z, t)/∂t = −Q g(z, t)[1 − g(z, t)]   (1.87)

and the forward equation

∂g(z, t)/∂t = Q z(z − 1) ∂g(z, t)/∂z   (1.88)

for the generating function. We require also the conditions

g(z, 0) = z  and  g(1, t) = 1.   (1.89)
As mentioned earlier, it follows from the theory of differential equations that the solution of these two equations is the same function,

g(z, t) = z e^{−Qt}/[1 − z(1 − e^{−Qt})].   (1.90)

However, for better insight, this result will be derived explicitly.

Backward equation. One notes that

dg/{g(z, t)[g(z, t) − 1]} = Q dt,

from which the equality

C e^{−Qt} = g(z, t)/[g(z, t) − 1]

follows. By taking into account the initial condition g(z, 0) = z, one obtains

C = z/(z − 1),

and from this, with some basic algebra, one arrives at (1.90) immediately.

Forward equation. The characteristic equation of the homogeneous, linear, first-order partial differential equation (1.88) is

dz/[z(z − 1)] + Q dt = 0,

whose integral

ψ(z, t) = Qt + log(1 − 1/z)

is at the same time the basic integral of the partial differential equation. It is known that any continuous and differentiable function H(u), in which u = ψ(z, t), is also an integral of (1.88), i.e.

g(z, t) = H[Qt + log(1 − 1/z)].

Accounting for the initial condition g(z, 0) = z, the functional equation

H[log(1 − 1/z)] = z

is obtained. By introducing the notation v = log(1 − 1/z), it is seen that

H(v) = 1/(1 − e^v).

Hence

g(z, t) = H[ψ(z, t)] = 1/[1 − e^{ψ(z,t)}] = 1/[1 − (1 − 1/z) e^{Qt}].

By multiplying the numerator and denominator by z e^{−Qt}, the solution (1.90) is obtained.
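That (1.90) indeed solves the backward equation (1.87) can be confirmed numerically with a central difference in t (parameter values ours):

```python
import math

def g(z, t, Q):
    """Generating function (1.90) of the Yule-Furry process."""
    return z * math.exp(-Q * t) / (1.0 - z * (1.0 - math.exp(-Q * t)))

Q, z, t, h = 0.4, 0.3, 2.0, 1e-6
lhs = (g(z, t + h, Q) - g(z, t - h, Q)) / (2.0 * h)  # numerical dg/dt
rhs = -Q * g(z, t, Q) * (1.0 - g(z, t, Q))           # backward equation (1.87)
print(abs(lhs - rhs))  # very small residual
```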
Quadratic process

In the following, the regular process defined by the quadratic generating function, called a quadratic process, will be dealt with. By taking (1.85) into account, the backward equation (1.29) takes the following form:

Q dt = dg/[f0 + (f1 − 1)g + f2 g²].   (1.91)

The roots of the denominator on the right-hand side,

g_{1,2} = [1 − f1 ± √((1 − f1)² − 4 f0 f2)]/(2 f2),

can be expressed by the quantities q1 = f1 + 2f2 and q2 = 2f2. One finds that

g1 = 1  and  g2 = 1 + 2(1 − q1)/q2.   (1.92)

By utilising this, one arrives at

Q(1 − q1) dt = [1/(g − 1 − 2(1 − q1)/q2) − 1/(g − 1)] dg,

and further to

g(z, t) = 1 − [2C(z)(1 − q1)/q2]/[e^{Q(1−q1)t} − C(z)].

Here C(z) is the integration constant which, from the initial condition g(z, 0) = z, can be determined from the equation

z = 1 − [2C(z)(1 − q1)/q2]/[1 − C(z)].

This yields

C(z) = (1 − z)/[1 − z + 2(1 − q1)/q2],

and hence finally one obtains

g(z, t) = 1 − (1 − z)/{e^{Q(1−q1)t} + (1 − z)[q2/(2(1 − q1))](e^{Q(1−q1)t} − 1)},   (1.93)

if q1 ≠ 1. For the case when q1 = 1, i.e. if the medium is critical, from equation (1.93) by applying l'Hospital's rule one obtains

g(z, t) = 1 − (1 − z)/[1 + (1 − z) q2 Qt/2].   (1.94)

It is worth demonstrating another procedure for the determination of the generating function g(z, t). By using the expression of the generating function q(z),

q(z) = 1 + q1(z − 1) + (1/2) q2 (z − 1)²,

one obtains from (1.29)

dg/dt = −Q(1 − q1)(g − 1) + (1/2) Q q2 (g − 1)².
By introducing the function

h(z, t) = 1/[1 − g(z, t)],

for which one has

h(z, 0) = 1/(1 − z),

one can immediately write

dh/dt = [1/(1 − g)²] dg/dt = Q(1 − q1) [1/(1 − g)] + (1/2) Q q2 = Q(1 − q1) h + (1/2) Q q2.

The solution for q1 ≠ 1, by using the formula for h(z, 0), is equal to

h(z, t) = e^{Q(1−q1)t}/(1 − z) + [q2/(2(1 − q1))](e^{Q(1−q1)t} − 1).

From this it follows that

g(z, t) = 1 − (1 − z)/{e^{Q(1−q1)t} + (1 − z)[q2/(2(1 − q1))](e^{Q(1−q1)t} − 1)},

and this agrees with formula (1.93). The expression for the case q1 = 1 can be obtained from this by a simple limiting procedure. Figure 1.4 shows the shape of the surface determined by the generating function g(z, t) in a subcritical medium (q1 = 0.95) with parameters q2 = 0.5 and Q = 0.4.¹¹

Figure 1.4 The generating function g(z, t) in a subcritical medium.

¹¹ Notations of the dimensions of parameters will be omitted both here and in the following. Units for the figures will be chosen such that they show the essential characteristics of the phenomena.

By expanding the right-hand side of (1.93) and (1.94) into a power series with respect to z and introducing the notation

U(t) = [1 − e^{−Q(1−q1)t}]/[Q(1 − q1)],   (1.95)
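The same kind of numerical check applies to the quadratic process: formula (1.93) should satisfy dg/dt = −Q(1 − q1)(g − 1) + Qq2(g − 1)²/2 together with g(z, 0) = z. A sketch, with parameter values ours:

```python
import math

def g_quad(z, t, Q, q1, q2):
    """Generating function (1.93) of the quadratic process (q1 != 1)."""
    a = math.exp(Q * (1.0 - q1) * t)
    return 1.0 - (1.0 - z) / (a + (1.0 - z) * q2 / (2.0 * (1.0 - q1)) * (a - 1.0))

Q, q1, q2, z, t, h = 0.4, 0.95, 0.5, 0.3, 5.0, 1e-6
lhs = (g_quad(z, t + h, Q, q1, q2) - g_quad(z, t - h, Q, q1, q2)) / (2.0 * h)
gv = g_quad(z, t, Q, q1, q2)
rhs = -Q * (1.0 - q1) * (gv - 1.0) + 0.5 * Q * q2 * (gv - 1.0) ** 2
print(abs(lhs - rhs))  # very small residual
```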
Table 1.1 Values of probabilities

f0      f1      f2      q1      q2
0.30    0.45    0.25    0.95    0.5
0.25    0.50    0.25    1.00    0.5
0.20    0.55    0.25    1.05    0.5
Figure 1.5 Dependence of the extinction probability p0(t) on time t in subcritical, critical and supercritical systems.
one obtains the probabilities

p0(t) = 1 − e^{−Q(1−q1)t}/[1 + Qq2 U(t)/2],  if q1 ≠ 1,
p0(t) = 1 − 1/(1 + Qq2 t/2),                 if q1 = 1,   (1.96)

and

pn(t) = e^{−Q(1−q1)t} [Qq2 U(t)/2]^{n−1}/[1 + Qq2 U(t)/2]^{n+1},  if q1 ≠ 1,
pn(t) = (Qq2 t/2)^{n−1}/(1 + Qq2 t/2)^{n+1},                      if q1 = 1,   (1.97)

for n = 1, 2, . . . . The values of the probabilities f0, f1, f2 that were used in the calculations are listed in Table 1.1. The quantity p0(t) is called the extinction probability and R(t) = 1 − p0(t) the survival probability. The extinction probability converges to 1 in both subcritical and critical media for t → ∞. In a supercritical medium one obtains

lim_{t→∞} p0(t) = 1 − 2(q1 − 1)/q2.   (1.98)

According to this, the survival probability R(t) converges to a value larger than zero only in a supercritical medium as t → ∞. Figure 1.5 displays the time-dependence of the extinction probability in the case of subcritical, critical and supercritical processes. In Fig. 1.6, the t-dependence of the probabilities p1(t) and p2(t) is shown, also for three different processes.

Calculate now the time instant tmax in a critical system (q1 = 1) for which the probability pn(t) is maximal. From

dpn(t)/dt = Q(2/q2)² [(Qt)^{n−2}/(2/q2 + Qt)^{n+2}] [2(n − 1)/q2 − 2Qt] = 0
Figure 1.6 Dependence of the probabilities p1(t) and p2(t) on time t in subcritical, critical and supercritical systems.
it follows that

tmax = (n − 1)/(Q q2)  and  pn(tmax) = [2/(n + 1)]² [(n − 1)/(n + 1)]^{n−1},

where n = 1, 2, . . . .
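The location and height of the maximum of pn(t) in a critical system can be confirmed by a simple grid scan of (1.97) (parameter values ours):

```python
def p_n(n, t, Q, q2):
    """pn(t) of the critical quadratic process, from (1.97) with q1 = 1."""
    x = 0.5 * Q * q2 * t
    return x ** (n - 1) / (1.0 + x) ** (n + 1)

Q, q2, n = 0.4, 0.5, 4
t_pred = (n - 1) / (Q * q2)                      # predicted maximum, = 15.0
grid = [t_pred * (0.5 + 0.001 * i) for i in range(1001)]
t_best = max(grid, key=lambda t: p_n(n, t, Q, q2))
print(t_best, p_n(n, t_best, Q, q2))
```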
Remark. It is worthwhile to put into context the fact that the extinction probability converges to 1 in critical systems, while the expectation remains constant. As was seen in Section 1.3.2, at the same time the variance diverges linearly with time. These facts are mutually consistent. The time-dependence of the variance shows that the branching process is not stationary, and hence not ergodic either. This means that the ensemble average is not equal to the time average, and the realisations of processes in several identical systems have different asymptotic behaviour. An illustrative explanation is that if one takes an increasing number of systems and looks at the asymptotic properties, then in all systems except a small fraction of them the population will die out asymptotically, while in the remaining systems it will diverge. Increasing the number of systems, the number of systems in which the population diverges remains finite, whereas the number in which the population dies out goes to infinity. It is this asymptotically negligible fraction (in terms of the number of systems considered) of diverging realisations that allows the expectation to remain constant while the extinction probability converges to unity.

Concerning the implications for the operation of, for example, nuclear reactors, the above facts have little relevance. Partly, both the divergence of the variance and the certainty of extinction are only asymptotic properties; and partly, due to the fact that the process is not ergodic, the asymptotic behaviour of the moments of the population does not say anything about the long-term behaviour of an individual system. An illustration of the different behaviour of the individual realisations in a critical system is shown in Fig. 1.9.
1.6.2 Explosive process

As a second example, a simple explosive process is chosen, for which it is known that its generating function is not a proper probability generating function, since g(1, t) < 1 if t > 0. Let the function

q(z) = 1 − √(1 − z)

define this simple process, for which it is easily seen that q1 = ∞. Naturally, this q(z) also satisfies the requirement for explosive processes, arising from the general condition in (1.42), according to which the integral

C(ε) = ∫_{1−ε}^{1} dz/[z − q(z)]

has to be finite for every 0 < ε < 1. One obtains that

C(ε) = −2 log(1 − √ε) < ∞,  ∀ 0 < ε < 1.
Figure 1.7 Generating function of an explosive process.
The generating function g(z, t) satisfies the following equation:

dg/dt = Q(1 − g) − Q√(1 − g),   (1.99)

from which, by taking into account the initial condition g(z, 0) = z, one can write down the solution in the form

∫_z^{g(z,t)} dx/[1 − x − √(1 − x)] = Qt.

Considering that

∫ dx/[1 − x − √(1 − x)] = −2 log(1 − √(1 − x)) + const.,

one arrives at

g(z, t) = 1 − [1 − e^{−Qt/2}(1 − √(1 − z))]²,   (1.100)

and further to

g(1, t) = 1 − (1 − e^{−Qt/2})² < 1,  if t > 0.

Figure 1.7 shows the surface defined by the generating function of equation (1.100). For illustration, the first few probabilities are

p0(t) = 0,
p1(t) = e^{−Qt/2},
p2(t) = (1/4) e^{−Qt/2}(1 − e^{−Qt/2}).

It is obvious that

P{n(t) = ∞|n(0) = 1} = 1 − g(1, t) = (1 − e^{−Qt/2})²   (1.101)

is the probability that within a finite time t an infinite number of progeny is generated.
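The stated probabilities and the mass defect (1.101) can be recovered numerically from (1.100): p1(t) is the first Taylor coefficient of g(z, t) at z = 0, and 1 − g(1, t) is the probability of an infinite number of progeny. A sketch (parameter values ours):

```python
import math

def g_exp(z, t, Q):
    """Generating function (1.100) of the explosive process."""
    return 1.0 - (1.0 - math.exp(-0.5 * Q * t) * (1.0 - math.sqrt(1.0 - z))) ** 2

Q, t, h = 0.4, 3.0, 1e-5
a = math.exp(-0.5 * Q * t)
p1 = (g_exp(h, t, Q) - g_exp(-h, t, Q)) / (2.0 * h)  # Taylor coefficient of z
mass_defect = 1.0 - g_exp(1.0, t, Q)                 # P{n(t) = infinity}
print(p1, mass_defect)
```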
1.6.3 Modelling of branching processes

The modelling is based on equation (1.70). For the sake of simplicity, suppose that in one time step a particle induces a reaction with probability W, in which the particle is either absorbed, renewed or converted into two particles. Let the probabilities of these events be f0, f1 and f2. The first step is the random generation of the numbers 0, 1, 2 with frequencies corresponding to the probabilities f0, f1, f2. The next step produces the realisations of the number of particles n(t) at the discrete time instants t = 1, 2, . . . , T, supposing that the number of particles was r at t = 0. Hence, with the help of a simple
Figure 1.8 Three particular realisations illustrating the evolution of the number of particles in a subcritical medium.
Figure 1.9 Three particular realisations illustrating the evolution of the number of particles in a critical medium.
Figure 1.10 Three realisations illustrating the evolution of the number of particles in a supercritical medium.
program, one can describe any possible history of the r particles present at t = 0 in the multiplying medium, with fixed probabilities W, f0, f1, f2. In Fig. 1.8, one can see three realisations of the discrete process n(t), defined by the probabilities W = 0.4, f0 = 0.3, f1 = 0.45, f2 = 0.25 of a subcritical medium, starting from the initial condition n(0) = 50, at t = 1, 2, . . . , 50. For the sake of illustration, the discrete points are connected by straight line segments. Figures 1.9 and 1.10 each show three realisations of the discrete process n(t) defined by the probabilities W = 0.4, f0 = 0.25, f1 = 0.5, f2 = 0.25 and W = 0.4, f0 = 0.2, f1 = 0.55, f2 = 0.25 of a critical and a supercritical medium, respectively. The curves in the figures illustrate well the fact that the characteristic behaviour in subcritical, critical and supercritical media develops through significant fluctuations. Moreover, even realisations
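A minimal version of such a program can be sketched as follows (function names are ours); as a check, the ensemble mean after j steps is compared with n(0)[1 − W(1 − q1)]^j for the subcritical row of Table 1.1:

```python
import random

def step(n, W, f, rng):
    """One time step: each of the n particles reacts with probability W and
    then yields 0, 1 or 2 particles with probabilities f[0], f[1], f[2];
    a non-reacting particle survives unchanged."""
    new_n = 0
    for _ in range(n):
        if rng.random() < W:
            u = rng.random()
            if u < f[0]:
                pass            # absorption
            elif u < f[0] + f[1]:
                new_n += 1      # renewal
            else:
                new_n += 2      # conversion into two particles
        else:
            new_n += 1
    return new_n

def realisation(n0, steps, W, f, rng):
    hist = [n0]
    for _ in range(steps):
        hist.append(step(hist[-1], W, f, rng))
    return hist

rng = random.Random(2)
W, f, n0 = 0.4, (0.30, 0.45, 0.25), 50  # subcritical row of Table 1.1
paths = [realisation(n0, 10, W, f, rng) for _ in range(2000)]
mean10 = sum(p[10] for p in paths) / 2000.0
q1 = f[1] + 2.0 * f[2]
expected = n0 * (1.0 - W * (1.0 - q1)) ** 10
print(mean10, expected)  # agree within statistical accuracy
```

Plotting individual entries of `paths` reproduces the kind of fluctuating trajectories shown in Figs 1.8 to 1.10.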
Figure 1.11 Estimation of the time-dependence of the expectation and the variance in a subcritical medium.
Figure 1.12 Estimation of the time-dependence of the expectation and the variance in a critical medium.
can occur that do not show the typical behaviour of the given medium at all. It can happen, for example, that in a supercritical medium one of the realisations leads to the extinction of the particles at some finite t, i.e. the state n(t) = 0, t > 0 is realised.

It is interesting to demonstrate the estimation of the time-dependence of the expectation E{n(t)|n(0) = 1} and the variance D²{n(t)|n(0) = 1} by using a relatively modest number of realisations of the process. The estimated values are compared to those calculated directly from formulae (1.76) and (1.79), respectively; no attempt is made to achieve high precision, since the goal is only to illustrate the random behaviour of the process. Figure 1.11 shows the time-dependence of the expectation and the variance, estimated from N = 10 000 realisations of the process n(t) in a subcritical medium, starting from n(0) = 1. The symbol ∗ corresponds to the estimated values, while the points of the continuous curves correspond to the values calculated from the corresponding formula. For a critical process, Fig. 1.12 displays the time-dependence of the expectation and the variance. One can see that the estimated expectations (S) show hardly more than 2% fluctuation around the exact values (C). The time-dependences of the estimated and the exact variances, on the other hand, are nearly identical. As seen in Fig. 1.13, in a supercritical medium the time-dependence of the estimated expectation and variance (S) agrees remarkably well with the exact time-dependence (C). We note that in both the critical and the supercritical cases altogether only N = 10 000 realisations were used for the calculations.

The realisations can also be used for the calculation of the probabilities pn(t), t = 0, 1, . . . . As an example, determine the time-dependence of the extinction probability p0(t).
In connection with the direct calculation of p0(t), it is worth noting that p0(t) = g(0, t); hence if

q(z) = f0 + f1 z + f2 z²,
Figure 1.13 Estimation of the time-dependence of the expectation and the variance in a supercritical medium.
Figure 1.14 Estimation of the time-dependence of the extinction probability in subcritical, critical and supercritical systems.
then from

g(z, t) = (1 − W) g(z, t − 1) + W q[g(z, t − 1)]

one obtains the formula

p0(t) = W f0 + [1 − W(1 − f1)] p0(t − 1) + W f2 [p0(t − 1)]².

Define the function h(x) = h0 + h1 x + h2 x², in which

h0 = W f0,   h1 = 1 − W(1 − f1)   and   h2 = W f2.

By using this function, one can write p0(t) = h[p0(t − 1)], noting that p0(0) = 0. This formula was used for the direct calculation of the probabilities p0(t). It can easily be shown that for t → ∞ the value of the extinction probability is given by the smallest positive root of the equation h(x) = x. It can be proven that this root equals unity in both subcritical and critical media, while in a supercritical medium it is less than 1 and is equal to the value

lim_{t→∞} p0(t) = p0(∞) = 1 − 2(q1 − 1)/q2.

It can be seen in Fig. 1.14 that the curve p0(t) corresponding to q1 = 1.05 indeed tends to the value 0.8 as t → ∞.
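The iteration p0(t) = h[p0(t − 1)] is a one-liner; the sketch below (names ours) reproduces the supercritical limit 1 − 2(q1 − 1)/q2 = 0.8 for the parameters of Table 1.1:

```python
def extinction_iterates(W, f, steps):
    """Iterate p0(t) = h(p0(t-1)) with h(x) = W f0 + [1 - W(1 - f1)] x + W f2 x^2,
    starting from p0(0) = 0."""
    h0, h1, h2 = W * f[0], 1.0 - W * (1.0 - f[1]), W * f[2]
    p, out = 0.0, [0.0]
    for _ in range(steps):
        p = h0 + h1 * p + h2 * p * p
        out.append(p)
    return out

# supercritical row of Table 1.1: q1 = 1.05, q2 = 0.5
ps = extinction_iterates(0.4, (0.20, 0.55, 0.25), 3000)
print(ps[-1])  # approaches 1 - 2(q1 - 1)/q2 = 0.8
```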
CHAPTER TWO

Generalisation of the Problem

Contents
2.1 Joint Distribution of Particle Numbers at Different Time Instants
2.2 Branching Process with Two Particle Types
2.3 Extinction and Survival Probability
2.1 Joint Distribution of Particle Numbers at Different Time Instants

Let t1 = t and t2 = t + u be two time instants, where u ≥ 0. Let n(t1) and n(t2) denote the numbers of particles present in a multiplying system at the time instants t1 and t2, respectively. Then

P{n(t2) = n2, n(t1) = n1|n(0) = 1} = p2(n1, n2, t1, t2)   (2.1)

is the probability that in the multiplying system n1 particles are present at t1 and n2 particles at t2 ≥ t1, provided that one particle existed in the system at t = 0. It is customary to call this description the two-point model. We will now prove the following important theorem.
Theorem 8. In the case of a homogeneous process, the generating function

g2(z1, z2, t1, t2) = Σ_{n1=0}^∞ Σ_{n2=0}^∞ p2(n1, n2, t1, t2) z1^{n1} z2^{n2}   (2.2)

satisfies the functional equation

g2(z1, z2, t1, t2) = g[z1 g(z2, t2 − t1), t1],   (2.3)

in which g(z, t) is the solution of (1.29) or of its equivalent (1.30).
Proof. Since one has

P{n(t2) = n2, n(t1) = n1|n(0) = 1} = P{n(t2) = n2|n(t1) = n1} P{n(t1) = n1|n(0) = 1},
and since each of the n1 particles found in the system at time t1 starts a branching process independently of the others, one can write

Σ_{n1=0}^∞ Σ_{n2=0}^∞ P{n(t2) = n2, n(t1) = n1|n(0) = 1} z2^{n2} z1^{n1} = Σ_{n1=0}^∞ [z1 g(z2, t2 − t1)]^{n1} P{n(t1) = n1|n(0) = 1}.

This proves the theorem, as expressed by equation (2.3).

As a generalisation of the case, let us now determine the generating function of the probability distribution

P{n(tj) = nj, j = 1, . . . , k|n(0) = 1} = pk(n1, . . . , nk, t1, . . . , tk).

Introduce the notations t1 = t and tj − tj−1 = uj−1, j = 2, . . . , k. By using these, relation (2.3) can be written in the form

g2(z1, z2, t, u1) = g[z1 g(z2, u1), t].

Defining the operation

Ĝj zj = zj g(zj+1, uj),   (2.4)

one notes that the expression g[z1 g(z2, u1), t] can be considered as the transform of the function g1(z1, t) = g(z1, t) that results from the operation Ĝ1 z1 = z1 g(z2, u1) applied directly to the variable z1. By using (2.4), one can write

Ĝj+1 Ĝj zj = zj g(Ĝj+1 zj+1, uj) = zj g[zj+1 g(zj+2, uj+1), uj],

by virtue of which one has

g3(z1, z2, z3, t, u1, u2) = g(Ĝ2 Ĝ1 z1, t) = g{z1 g[z2 g(z3, u2), u1], t}.

By induction, one arrives at

gj(z1, . . . , zj, t, u1, . . . , uj−1) = g(Ĝj−1 · · · Ĝ1 z1, t),   (2.5)

where g1(z1, t) = g(z1, t) and j = 2, . . . , k. Expression (2.5) is the generating function of the j-point model.
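The nested composition (2.5) can be implemented directly. The sketch below (names ours) uses the Yule–Furry solution (1.90) as a concrete closed-form g(z, t); setting all zj = 1 must give 1, and setting z1 = 1 in the two-point function must reduce it, by the semigroup property g(z, t + u) = g[g(z, u), t], to the one-point function at t + u:

```python
import math

def g(z, t, Q=0.4):
    """One-point generating function; the Yule-Furry solution (1.90) is used
    here as a concrete closed-form example."""
    return z * math.exp(-Q * t) / (1.0 - z * (1.0 - math.exp(-Q * t)))

def g_multi(zs, t, us):
    """j-point generating function (2.5): applies the operations
    G_j z_j = z_j * g(z_{j+1}, u_j) from the innermost variable outward,
    then wraps the result in g(., t)."""
    acc = zs[-1]
    for z, u in zip(reversed(zs[:-1]), reversed(us)):
        acc = z * g(acc, u)
    return g(acc, t)

print(g_multi([1.0, 1.0, 1.0], 1.0, (1.0, 1.0)))  # normalisation: equals 1
print(g_multi([1.0, 0.3], 1.0, (0.5,)))           # equals g(0.3, 1.5)
```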
2.1.1 Autocorrelation function of the particle number

In many cases one needs the autocorrelation function

E{[n(t) − m1(t)][n(t + u) − m1(t + u)]} = Rn,n(t + u, t),

which will now be calculated. Strictly speaking, it would be more correct to call the function Rn,n(t + u, t) the autocovariance function. However, whenever it does not lead to confusion, following common practice, Rn,n will be referred to as the autocorrelation.
First, determine the expectation

E{n(t)n(t + u)} = ∂²g2(z1, z2, t, u)/∂z1∂z2 |_{z1 = z2 = 1}.

Introducing the notation s = s(z1, z2) = z1 g(z2, u) and considering that

∂g(s, t)/∂z1 = (dg/ds)(∂s/∂z1)  as well as  ∂g(s, t)/∂z2 = (dg/ds)(∂s/∂z2),

one can write down the relation

∂²g2/∂z1∂z2 = (d²g/ds²)(∂s/∂z1)(∂s/∂z2) + (dg/ds)(∂²s/∂z1∂z2).   (2.6)

From this it follows that

E{n(t)n(t + u)} = [m2(t) + m1(t)] m1(u).

After a short calculation, one obtains

E{[n(t) − m1(t)][n(t + u) − m1(t + u)]} = Rn,n(t + u, t) = D²{n(t)} e^{αu},   (2.7)

in which, according to (1.60),

D²{n(t)} = [Q q2/α − 1] e^{αt}(e^{αt} − 1),  if α ≠ 0,
D²{n(t)} = Q q2 t,                           if α = 0.

It is seen that D²{n(0)} = 0, and if α = −a < 0, then

lim_{t→∞} D²{n(t)} = 0,

hence

lim_{t→∞} Rn,n(t + u, t) = 0.

The normalised covariance function

Cn,n(t + u, t) = Rn,n(t + u, t)/[D{n(t)} D{n(t + u)}] = [D{n(t)}/D{n(t + u)}] e^{αu}   (2.8)

is called the (traditional) autocorrelation function of n(t). In a critical system, i.e. when α = 0, one has

Cn,n(t + u, t) = √(t/(t + u)),

and further, n(t) and n(t + u) − n(t) are uncorrelated for every finite t and u. This follows from (2.7), since for α = 0

E{[n(t) − m1(t)][n(t + u) − m1(t + u)]} − E{[n(t) − m1(t)]²} = 0,

i.e.

E{[n(t) − m1(t)][n(t + u) − n(t)]} = 0.

This observation implies that in a subcritical system close to criticality, the variation of the number of particles (increase or decrease) during a time period u following a time instant t is practically uncorrelated with the number of particles at t. This statement may have important consequences regarding the characteristics of the fluctuations in quasi-critical systems.
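The uncorrelatedness of n(t) and the increment n(t + u) − n(t) in a critical system can be observed in a simulation. The sketch below (names and parameter values are ours) uses the critical quadratic medium f = (0.25, 0.5, 0.25), for which m1(t) = 1, and estimates the covariance, which should be statistically indistinguishable from zero:

```python
import random

def evolve(n, duration, Q, f, rng):
    """Advance a continuous-time branching population of n particles over a
    period `duration`; each particle reacts at rate Q and is replaced by
    0, 1 or 2 particles with probabilities f[0], f[1], f[2]."""
    t = 0.0
    while n > 0:
        t += rng.expovariate(n * Q)
        if t > duration:
            break
        u = rng.random()
        if u < f[0]:
            n -= 1              # absorption
        elif u >= f[0] + f[1]:
            n += 1              # doubling (renewal leaves n unchanged)
    return n

rng = random.Random(3)
Q, f = 0.4, (0.25, 0.5, 0.25)   # critical medium: q1 = 1, m1(t) = 1
t1, lag, N = 5.0, 5.0, 40000
acc = 0.0
for _ in range(N):
    n_t = evolve(1, t1, Q, f, rng)
    n_tu = evolve(n_t, lag, Q, f, rng)
    acc += (n_t - 1.0) * (n_tu - n_t)  # centred with the exact mean m1(t) = 1
cov = acc / N
print(cov)  # statistically consistent with zero
```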
2.2 Branching Process with Two Particle Types

Suppose that in a multiplying system two different particle types, which can be converted into each other, develop a branching process. Again, it is assumed that a particle of either type found in the multiplying system at any given time can induce a reaction and generate a number of progeny independently of its own past and of the history of the other particles present. Denote the two particle types by T1 and T2, respectively. Let n1(t) be the number of particles of type T1 and n2(t) that of the particles of type T2 in the multiplying system at t ≥ 0. Further, let Qi Δt + o(Δt), i = 1, 2, be the probability that during the time period Δt → 0, a particle of type Ti will induce a reaction. As a result of the reaction, ν particles of type T1 and μ particles of type T2 will be generated.¹ Let

P{ν = k, μ = l|Ti} = f^{(i)}_{k,l}   (2.9)

denote the probability that in a reaction induced by one particle of type Ti, the numbers of generated particles of types T1 and T2 will be k and l, respectively. The normalisation condition reads

Σ_{k=0}^∞ Σ_{l=0}^∞ f^{(i)}_{k,l} = 1,  i = 1, 2.
Define the probabilities

P{n1(t) = n1, n2(t) = n2|n1(0) = 1, n2(0) = 0} = p^{(1)}(n1, n2, t)   (2.10)

and

P{n1(t) = n1, n2(t) = n2|n1(0) = 0, n2(0) = 1} = p^{(2)}(n1, n2, t).   (2.11)

Obviously, p^{(i)}(n1, n2, t) is the probability that at time t ≥ 0 there will be n1 particles of type T1 and n2 particles of type T2 in the system, provided that at t = 0 there was one particle of type Ti and no particle of the other type in the system. Introduce the generating functions

g^{(i)}(z1, z2, t) = Σ_{n1=0}^∞ Σ_{n2=0}^∞ p^{(i)}(n1, n2, t) z1^{n1} z2^{n2},  i = 1, 2,   (2.12)

and

q^{(i)}(z1, z2) = Σ_{k=0}^∞ Σ_{l=0}^∞ f^{(i)}_{k,l} z1^k z2^l,  i = 1, 2.   (2.13)

The g^{(i)} and q^{(i)} fulfil the conditions

g^{(i)}(1, 1, t) = 1,   q^{(i)}(1, 1) = 1   and   g^{(i)}(z1, z2, 0) = zi,  i = 1, 2.   (2.14)

The backward Kolmogorov equation will now be derived for the probabilities p^{(1)}(n1, n2, t) and p^{(2)}(n1, n2, t).

¹ Naturally, ν and μ are random variables.
To be able to follow the evolution of the process closely, the intuitive method already used in Section 1.2.1 will be applied. One can write

p^{(1)}(n1, n2, t) = e^{−Q1 t} δ_{n1,1} δ_{n2,0} + Q1 ∫_0^t e^{−Q1(t−t′)} [f^{(1)}_{0,0} δ_{n1,0} δ_{n2,0} + S^{(1)}(n1, n2, t′) + S^{(2)}(n1, n2, t′) + S^{(1,2)}(n1, n2, t′)] dt′,

where

S^{(1)}(n1, n2, t′) = Σ_{k=1}^∞ f^{(1)}_{k,0} Σ_{a1+···+ak=n1} Σ_{b1+···+bk=n2} Π_{u=1}^k p^{(1)}(au, bu, t′),

S^{(2)}(n1, n2, t′) = Σ_{l=1}^∞ f^{(1)}_{0,l} Σ_{a1+···+al=n1} Σ_{b1+···+bl=n2} Π_{v=1}^l p^{(2)}(av, bv, t′),

and

S^{(1,2)}(n1, n2, t′) = Σ_{k=1}^∞ Σ_{l=1}^∞ f^{(1)}_{k,l} Σ′ Π_{u=1}^k Π_{v=1}^l p^{(1)}(a^{(1)}_u, b^{(1)}_u, t′) p^{(2)}(a^{(2)}_v, b^{(2)}_v, t′),

in which Σ′ runs over all index combinations with a^{(1)}_1 + ··· + a^{(1)}_k + a^{(2)}_1 + ··· + a^{(2)}_l = n1 and b^{(1)}_1 + ··· + b^{(1)}_k + b^{(2)}_1 + ··· + b^{(2)}_l = n2.

Based on this, for the generating function g^{(1)}(z1, z2, t), one obtains the following equation:

g^{(1)}(z1, z2, t) = e^{−Q1 t} z1 + Q1 ∫_0^t e^{−Q1(t−t′)} Σ_{k=0}^∞ Σ_{l=0}^∞ f^{(1)}_{k,l} [g^{(1)}(z1, z2, t′)]^k [g^{(2)}(z1, z2, t′)]^l dt′,

which, by considering the definition (2.13), can be written in the following form:

g^{(1)}(z1, z2, t) = e^{−Q1 t} z1 + Q1 ∫_0^t e^{−Q1(t−t′)} q^{(1)}[g^{(1)}(z1, z2, t′), g^{(2)}(z1, z2, t′)] dt′.   (2.15)

In a completely analogous way, one can derive the generating function equation

g^{(2)}(z1, z2, t) = e^{−Q2 t} z2 + Q2 ∫_0^t e^{−Q2(t−t′)} q^{(2)}[g^{(1)}(z1, z2, t′), g^{(2)}(z1, z2, t′)] dt′.   (2.16)

Differentiating with respect to t, from these equations one arrives at

∂g^{(i)}/∂t = Qi [q^{(i)}(g^{(1)}, g^{(2)}) − g^{(i)}],  i = 1, 2.   (2.17)

By introducing the notations

s^{(i)}(z1, z2) = Qi [q^{(i)}(z1, z2) − zi],  i = 1, 2,   (2.18)

the basic equations can be written in a rather simple form as

∂g^{(i)}/∂t = s^{(i)}(g^{(1)}, g^{(2)}),  i = 1, 2,   (2.19)

together with the initial conditions g^{(i)}(z1, z2, 0) = zi, i = 1, 2.
Following the method used for proving Theorem 2 of Section 1.2.1, the generating function equations corresponding to the forward Kolmogorov equation can also be derived:

∂g^{(i)}/∂t = Σ_{j=1}^2 Qj [q^{(j)}(z1, z2) − zj] ∂g^{(i)}/∂zj,  i = 1, 2.   (2.20)

By using the notation (2.18), the above can be written in the following concise form:

∂g^{(i)}/∂t = Σ_{j=1}^2 s^{(j)}(z1, z2) ∂g^{(i)}/∂zj,  i = 1, 2,   (2.21)

together with the initial conditions g^{(i)}(z1, z2, 0) = zi, i = 1, 2. These equations can also easily be derived by rigorous methods, utilising basic theorems on the properties of branching processes. An elegant example of such a derivation was given by Sevast'yanov [24].
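The coupled backward equations (2.19) are straightforward to integrate numerically. In the sketch below the reaction generating functions q^{(1)}, q^{(2)} are toy examples of our own choosing (a T1 particle is absorbed, doubled, or yields one T1 and one T2; a T2 particle always turns into a T1); integrating from z1 = z2 = 0 gives the extinction probabilities, which must converge to the smallest root of p = q^{(i)}(p, p), here 0.25:

```python
def q1_(z1, z2):
    """Toy pgf for type T1 (our assumption): absorption w.p. 0.2,
    two T1 w.p. 0.45, one T1 plus one T2 w.p. 0.35."""
    return 0.2 + 0.45 * z1 * z1 + 0.35 * z1 * z2

def q2_(z1, z2):
    """Toy pgf for type T2 (our assumption): always becomes a single T1."""
    return z1

def integrate(z1, z2, t_end, Q1=1.0, Q2=0.1, dt=0.002):
    """Euler integration of the backward equations (2.19)."""
    g1, g2 = z1, z2
    for _ in range(int(t_end / dt)):
        g1, g2 = (g1 + dt * Q1 * (q1_(g1, g2) - g1),
                  g2 + dt * Q2 * (q2_(g1, g2) - g2))
    return g1, g2

p1, p2 = integrate(0.0, 0.0, 200.0)
print(p1, p2)  # both approach 0.25, the smallest root of p = q(p, p)
```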
2.3 Extinction and Survival Probability

If a branching process starting from n(0) = 1 takes the value n(t) = 0 for some time t > 0, then it is said to have become degenerate by the time t, or in other words, the particle population has died out by t. The probability P{n(t) = 0|n(0) = 1} = p0(t) is called the probability of extinction until time t. The probability p0(t) has already been calculated for the fundamental process when q(z) is a quadratic function of its argument.²

We shall now investigate the properties of extinction in the case of an arbitrary regular process. In the case of a continuous branching process, let A_{t1} denote the event n(t1) = 0. From the event A_{t1} it follows that n(t2) = 0 for all t2 > t1, hence it is obvious that A_{t1} ⊆ A_{t2}, i.e. P(A_{t1}) ≤ P(A_{t2}). Accordingly, p0(t1) ≤ p0(t2) if t1 ≤ t2, so the probability of extinction until time t is a non-decreasing function of t. From this it follows that

max_{t>0} p0(t) = lim_{t→∞} p0(t) = p ≤ 1,

thus it is reasonable to call the limit value

lim_{t→∞} p0(t) = p   (2.22)

the probability of extinction.

In the case of a discrete branching process, also called a Galton–Watson process, let Ak be the event that the number of particles in the system is zero at the k-th discrete time point tk = kΔt, i.e. n(kΔt) = nk = 0. Let A denote the event of extinction and p = P(A) the probability of extinction. Obviously, the event A occurs if at least one of the events Ak, k = 1, 2, . . . occurs, that is,

A = ∪_{k=1}^∞ Ak.

² See (2.14) and Fig. 1.5.
42
Imre Pázsit & Lénárd Pál
Since A_k ⊆ A_{k+1}, from the so-called 'summation theorem' [17] one has
$$ \lim_{k\to\infty} P(A_k) = P(A), $$
and this is equivalent to the relationship
$$ \lim_{k\to\infty} p_0(kt) = p. $$
In the following, a branching process will be called extincting or degenerate if its extinction probability is equal to 1, and surviving or non-degenerate if its extinction probability is less than 1. The following fundamental theorem will now be proved.

Theorem 9. If q_1 ≤ 1 then, for both discrete and continuous processes, the extinction probability p is equal to the trivial solution p = 1 of the equation p = q(p); whereas if q_1 > 1, then it is equal to the only non-negative solution of the equation p = q(p) which is less than unity.

Proof. First it will be shown that for both the continuous and the discrete case the extinction probability is determined by the equation p = q(p), after which Theorem 9 will be proved.

For continuous processes, with the substitution g(0, u) = p_0(u), from equation (1.23) one obtains
$$ p_0(t+u) = g[p_0(u), t]. $$
In view of the limit relation
$$ \lim_{u\to\infty} p_0(u) = \lim_{u\to\infty} p_0(t+u) = p, $$
one has p = g(p, t) for all t; hence from equation (1.29) it follows that p = q(p).

For discrete processes, on the other hand, equation (1.73) can be written in the following form:
$$ g(z, t) = h[g(z, t-1)], \quad t = 1, 2, \ldots, $$
where g(z, 0) = z and
$$ h(z) = z + [q(z) - z]\, W. $$
Considering that g(0, t) = p_0(t), one has
$$ p_0(t) = h[p_0(t-1)], $$
where p_0(0) = 0. Since
$$ \lim_{t\to\infty} p_0(t) = \lim_{t\to\infty} p_0(t-1) = p, $$
the expression p = h(p) = p + [q(p) − p]W is obtained, which is exactly the equation p = q(p).

Since q(z) is a probability generating function, i.e. q(1) = 1, the following theorem is valid (its proof is given in Appendix A):

Theorem 10. If q_1 ≤ 1, then q(z) > z for all 0 ≤ z < 1 and the equality p = q(p) holds only at the point z = p = 1; conversely, if q_1 > 1, then there exists one and only one point z_0 = p < 1 at which q(p) = p; further, q(z) > z if 0 ≤ z < p, and q(z) < z if p < z < 1.

By the foregoing steps, together with Theorem 10, the statement on the extinction probability in Theorem 9 has been proved.
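Theorem 9 reduces the computation of the extinction probability to finding the smallest non-negative root of p = q(p). As a numerical illustration — a sketch only, with hypothetical offspring probabilities f_k chosen arbitrarily — the root can be found by iterating p ← q(p) from p = 0; by Theorem 10 the iterates increase monotonically to the smallest fixed point in [0, 1]:

```python
def extinction_probability(f, tol=1e-12, max_iter=10_000):
    """Smallest non-negative root of p = q(p), where q(z) = sum_k f[k] z^k
    is the offspring pgf, found by fixed-point iteration started at 0."""
    p = 0.0
    for _ in range(max_iter):
        p_next = sum(fk * p ** k for k, fk in enumerate(f))
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# subcritical example: q1 = 0.2 + 2*0.3 = 0.8 <= 1, so p = 1 (Theorem 9)
assert abs(extinction_probability([0.5, 0.2, 0.3]) - 1.0) < 1e-5

# supercritical example: f0 = 0.2, f2 = 0.8, q1 = 1.6 > 1;
# p = q(p) gives 0.8 p^2 - p + 0.2 = 0, whose smaller root is p = 0.25
assert abs(extinction_probability([0.2, 0.0, 0.8]) - 0.25) < 1e-6
```

The monotone convergence from below is exactly what Theorem 10 guarantees: q(z) > z on [0, p), so the iteration cannot overshoot the smallest root.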
2.3.1 Asymptotic forms of the survival probability

We investigate the behaviour of the survival probability
$$ P\{n(t) > 0 \mid n(0) = 1\} = 1 - p_0(t) = R(t) \qquad (2.23) $$
in the case when t → ∞.
Subcritical medium

The following statement appears to be nearly self-evident.

Theorem 11. In the case of a continuous process in a subcritical system, the boundedness of the expression
$$ \int_0^1 \frac{q(1-x) + q_1 x - 1}{x\,[q(1-x) + x - 1]}\, dx = -\log R_0 \qquad (2.24) $$
is a necessary and sufficient condition for the asymptotic expression
$$ R(t) = R_0\, e^{-Q(1-q_1)t}\, [1 + o(1)] \qquad (2.25) $$
to hold when t → ∞.

Proof. Substituting the expression g(0, t) = 1 − R(t) into equation (1.29), one arrives at
$$ \frac{dR}{dt} = -Q[q(1-R) + R - 1], $$
noting that R(0) = 1. The above has the implicit solution
$$ Qt = \int_{R(t)}^{1} \frac{dx}{q(1-x) + x - 1}. $$
One notes that
$$ \log \frac{e^{-Q(1-q_1)t}}{R(t)} = Q(q_1 - 1)t - \log R(t) = \int_{R(t)}^{1} \left[ \frac{q_1 - 1}{q(1-x) + x - 1} + \frac{1}{x} \right] dx = \int_{R(t)}^{1} \frac{q(1-x) + q_1 x - 1}{x\,[q(1-x) + x - 1]}\, dx = k(t). $$
From this one immediately obtains the expression
$$ R(t) = e^{-k(t)}\, e^{-Q(1-q_1)t}. $$
Considering that for sufficiently large t one has R(t) ≪ 1, one observes that e^{−k(t)} = R_0 [1 + o(1)], where
$$ R_0 = \exp\left\{ -\int_0^1 \frac{q(1-x) + q_1 x - 1}{x\,[q(1-x) + x - 1]}\, dx \right\}, \qquad (2.26) $$
from which (2.24) is immediately obtained.
Theorem 12. In a subcritical system with a discrete process, fulfilment of the inequality
$$ E\{n(1) \log n(1)\} < \infty \qquad (2.27) $$
is the necessary and sufficient condition for the asymptotic relation
$$ R(t) = R_0 \left[ 1 - W(1 - q_1) \right]^t [1 + o(1)], \quad 0 < R_0 < \infty, \qquad (2.28) $$
to hold when t → ∞, where the probability W is defined in Section 1.4.

The proof of this statement is rather long and does not involve physical considerations; hence it is given in Appendix B.
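The constant R_0 of Theorem 11 can be evaluated numerically from (2.26). The sketch below assumes a hypothetical subcritical quadratic generating function q(z) = 0.4 + 0.4z + 0.2z² with Q = 1 (so q_1 = 0.8), computes R_0 by quadrature, and cross-checks it by integrating the equation dR/dt = −Q[q(1−R) + R − 1] from the proof and forming R(t)e^{at} at a large t, as (2.25) suggests:

```python
import math

# hypothetical subcritical quadratic pgf: q(z) = 0.4 + 0.4 z + 0.2 z^2, Q = 1
f0, f1, f2, Q = 0.4, 0.4, 0.2, 1.0
q  = lambda z: f0 + f1 * z + f2 * z * z
q1 = f1 + 2 * f2            # = 0.8 < 1 (subcritical)
a  = Q * (1 - q1)           # decay constant of R(t)

def integrand(x):
    """Integrand of (2.24)/(2.26); its limit at x = 0 is q2 / (2(1 - q1))."""
    if x == 0.0:
        return f2 / (1 - q1)
    return (q(1 - x) + q1 * x - 1) / (x * (q(1 - x) + x - 1))

# R0 by midpoint quadrature of (2.26); for this q the integrand reduces
# to 1/(1+x), so the integral is log 2 and R0 = 1/2 exactly
N = 20000
R0 = math.exp(-sum(integrand((k + 0.5) / N) for k in range(N)) / N)

# independent check: Euler integration of dR/dt = -Q[q(1-R) + R - 1]
R, dt, T = 1.0, 1e-4, 50.0
for _ in range(round(T / dt)):
    R -= dt * Q * (q(1 - R) + R - 1)
assert abs(R0 - R * math.exp(a * T)) < 1e-3
assert abs(R0 - 0.5) < 1e-3
```

All parameter values here are illustrative, not taken from the text.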
Critical medium

Let us determine how the survival probability behaves in a critical medium for t → ∞.

Theorem 13. In a critical medium, for a continuous process the asymptotic form of the survival probability R(t) is given by the formula
$$ R(t) = \frac{2}{Q q_2 t}\, [1 + o(1)], \quad t \to \infty, \qquad (2.29) $$
provided that q_2 is finite.

Proof. For the proof, start again from (1.29). The substitution g(0, t) = 1 − R(t) yields
$$ \frac{dR(t)}{dt} = -Q\{q[1 - R(t)] + R(t) - 1\}. $$
According to the series expansion theorem described in Section A.5, one has
$$ q[1 - R(t)] = 1 - q_1 R(t) + \frac{1}{2} A[1 - R(t)]\, R^2(t), $$
where A[1 − R(t)] ≤ q_2. Since in a critical medium q_1 = 1, from the above equation it follows that
$$ \frac{dR(t)}{dt} = -\frac{1}{2} Q A[1 - R(t)]\, R^2(t). \qquad (2.30) $$
Considering that A[1 − R(t)] = q''[θ(t)], where 1 − R(t) ≤ θ(t) < 1, and further that R(t) → 0 as t → ∞, one has A[1 − R(t)] = q_2 + φ(t), where φ(t) → 0 as t → ∞. Thus equation (2.30) can be written as
$$ \frac{dR(t)}{dt} = -\frac{1}{2} Q [q_2 + φ(t)]\, R^2(t), $$
from which it follows that
$$ R(t) = \frac{2}{Q\left[ q_2 t + \int_0^t φ(u)\, du + C \right]}. $$
Since R(0) = 1, it follows that C = 2, and accordingly
$$ R(t) = \frac{2}{Q q_2 t} \left[ 1 + \frac{2}{q_2 t} + \frac{1}{q_2 t} \int_0^t φ(u)\, du \right]^{-1} = \frac{2}{Q q_2 t} + o(1/t), $$
which is exactly what was to be proven.

Theorem 14. In the case of a discrete process, the asymptotic form of the survival probability R(t) in a critical medium is given by the formula
$$ R(t) = \frac{2}{W q_2 t}\, [1 + o(1)], \qquad (2.31) $$
in which t tends to infinity on the set of positive integers.

Proof. The proof will again be based on the series expansion theorem of Section A.5. From the equation R(t+1) = 1 − h[1 − R(t)], one obtains
$$ R(t+1) = R(t) - \frac{1}{2} W A[1 - R(t)]\, R^2(t). $$
Since R(t) → 0 if t → ∞, one has A[1 − R(t)] = q_2 + φ(t), where φ(t) → 0 if t → ∞. Thus
$$ R(t+1) = R(t) - \frac{1}{2} W [q_2 + φ(t)]\, R^2(t), \qquad (2.32) $$
hence
$$ \frac{R(t+1)}{R(t)} = 1 - \frac{1}{2} W [q_2 + φ(t)]\, R(t) \to 1, \qquad (2.33) $$
if t → ∞. Rearrange now (2.32) in the following form:
$$ R(t+1) = R(t) - \frac{1}{2} W q_2\, R(t) R(t+1) + ψ(t), \qquad (2.34) $$
where
$$ ψ(t) = -\frac{1}{2} W φ(t) R^2(t) - \frac{1}{2} W q_2\, R(t)[R(t) - R(t+1)]. $$
Making use of equation (2.32) to substitute the difference R(t) − R(t+1) yields
$$ ψ(t) = -\frac{1}{2} W φ(t) R^2(t) - \frac{1}{4} W^2 q_2 [q_2 + φ(t)]\, R^3(t). $$
Considering (2.33), one observes that
$$ \lim_{t\to\infty} \frac{ψ(t)}{R(t)R(t+1)} = 0, $$
hence from (2.34), which can be rearranged in the form
$$ \frac{1}{R(t)} = \frac{1}{R(t+1)} - \frac{1}{2} W q_2 + \frac{ψ(t)}{R(t)R(t+1)}, $$
it follows that
$$ \frac{1}{R(t+1)} = \frac{1}{R(t)} + \frac{1}{2} W q_2 + χ(t), $$
where χ(t) → 0 if t → ∞. Performing the summation over n = 0, …, t − 1 leads to
$$ \sum_{n=0}^{t-1} \frac{1}{R(n+1)} = \sum_{n=0}^{t-1} \frac{1}{R(n)} + \frac{1}{2} W q_2 t + \sum_{n=0}^{t-1} χ(n), $$
and this is nothing else than
$$ \frac{1}{R(t)} = 1 + \frac{1}{2} W q_2 t + \sum_{n=0}^{t-1} χ(n). $$
From this the validity of the statement follows directly.

The space-dependent theory of extinction is not discussed in the present work. Interesting results on the extinction probability of neutrons in a supercritical sphere were given by Williams [25]. One should also mention that a simple space-dependent theory of extinction was published by Schrödinger [26] as early as 1945.
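The 2/(Qq_2 t) law of Theorem 13 is easy to check by simulation. The sketch below assumes a hypothetical critical binary process in which a reaction absorbs the inducing particle and yields 0 or 2 new particles with probability 1/2 each (so q_1 = 1, q_2 = 1), and exploits the fact that for a Markov branching process only the total particle number matters: with n particles present, the next reaction occurs after an Exp(nQ) time. For this particular q(z) the survival probability can be found in closed form, R(t) = 2/(2 + Qt), which both the estimate and the asymptotic formula (2.29) can be compared against:

```python
import random

def survives(t_max, Q=1.0):
    """One trial of a critical binary branching process (offspring 0 or 2,
    probability 1/2 each): True if the population is alive at t_max."""
    n, t = 1, 0.0
    while n > 0:
        t += random.expovariate(n * Q)   # next reaction among n particles
        if t > t_max:
            return True
        n += 1 if random.random() < 0.5 else -1
    return False

random.seed(1)
Q, t_max, trials = 1.0, 20.0, 10000
est = sum(survives(t_max, Q) for _ in range(trials)) / trials

exact = 2.0 / (2.0 + Q * t_max)          # closed form for this q(z)
asymptotic = 2.0 / (Q * 1.0 * t_max)     # formula (2.29) with q2 = 1
assert abs(est - exact) < 0.02
assert abs(asymptotic - exact) < 0.02
```

The offspring law and the parameter values are illustrative choices, not taken from the text.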
2.3.2 Special limit distribution theorems

It is obvious from the considerations so far that in the case of subcritical and critical systems, p_n(t) → 0 as t → ∞ for all n = 1, 2, …, and it is also clear that Σ_{n=1}^∞ p_n(t) = R(t) → 0 as t → ∞. A more subtle description of the detailed behaviour of the processes for t → ∞ can be obtained with the help of specially defined probabilities.
Asymptotic distribution of the particle number for a surviving process in a subcritical medium

Instead of the probability P{n(t) = n | n(0) = 1} of the standard process, let us introduce the probability
$$ P\{n(t) = n \mid n(t) > 0\} = \frac{P\{n(t) = n,\, n(t) > 0\}}{P\{n(t) > 0\}} = \frac{p_n(t)}{R(t)}, \quad \forall n \ge 1, \qquad (2.35) $$
which corresponds to a process surviving until time t > 0. The following important statement will be proved.

Theorem 15. For a subcritical medium, the limit values
$$ \lim_{t\to\infty} \frac{p_n(t)}{R(t)} = w_n, \quad n = 1, 2, \ldots, \qquad (2.36) $$
do exist, and the quantities w_1, w_2, … satisfy the condition
$$ \sum_{n=1}^{\infty} w_n = 1, $$
hence they can be considered as the probabilities of a regular distribution. Further, the generating function
$$ k(z) = \sum_{n=1}^{\infty} w_n z^n \qquad (2.37) $$
is given by the formula
$$ k(z) = 1 - \exp\left\{ \alpha \int_0^z \frac{dx}{s(x)} \right\}, \qquad (2.38) $$
in which
$$ \alpha = Q(q_1 - 1) \quad \text{and} \quad s(x) = Q[q(x) - x]. $$
Proof. The proof consists of a series of simple steps. From (2.35) it can immediately be seen that the generating function
$$ k(z, t) = \sum_{n=1}^{\infty} P\{n(t) = n \mid n(t) > 0\}\, z^n $$
can be written in the following form:
$$ k(z, t) = \frac{1}{R(t)} \sum_{n=1}^{\infty} p_n(t) z^n = \frac{g(z, t) - g(0, t)}{R(t)} = 1 - \frac{1 - g(z, t)}{R(t)}, $$
which, by introducing the notation
$$ 1 - g(z, t) = R(z, t), \qquad (2.39) $$
can be cast into the form
$$ k(z, t) = 1 - \frac{R(z, t)}{R(t)}. \qquad (2.40) $$
From (1.29) one has
$$ dt = \frac{dg}{Q[q(g) - g]} = \frac{dg}{s(g)}, $$
and from this one obtains
$$ t = \int_{g(z,0)}^{g(z,t)} \frac{du}{s(u)}, $$
which, by the change of the integration variable u = 1 − x, takes the form
$$ t = \int_{R(z,t)}^{1-z} \frac{dx}{s(1-x)}. \qquad (2.41) $$
It was earlier pointed out in connection with (2.26) that the equality
$$ t = \int_{R(t)}^{1} \frac{dx}{s(1-x)} $$
also holds, hence
$$ \int_{R(z,t)}^{1-z} \frac{dx}{s(1-x)} = \int_{R(t)}^{1} \frac{dx}{s(1-x)}. $$
Taking into account the equalities
$$ \int_{R(z,t)}^{1-z} \frac{dx}{s(1-x)} = \int_{R(z,t)}^{R(t)} \frac{dx}{s(1-x)} + \int_{R(t)}^{1-z} \frac{dx}{s(1-x)} = \int_{R(t)}^{1} \frac{dx}{s(1-x)}, $$
finally the formula
$$ \int_{R(z,t)}^{R(t)} \frac{dx}{s(1-x)} = \int_{1-z}^{1} \frac{dx}{s(1-x)} = \int_{0}^{z} \frac{du}{s(u)} \qquad (2.42) $$
is obtained. Since R(z, t) ≤ R(t) → 0 if t → ∞, the relation
$$ s(1-z) = -\alpha z\, [1 + \epsilon(z)] $$
is obviously true in the interval 0 ≤ z < 1, where ε(z) is continuous, and ε(z) → 0 if z → 0. Based on this, the left-hand side of (2.42) can be rearranged as follows:
$$ \int_{R(z,t)}^{R(t)} \frac{dx}{s(1-x)} = -\frac{1}{\alpha} \int_{R(z,t)}^{R(t)} \frac{dx}{x\,[1 + \epsilon(x)]} = \frac{1}{\alpha[1 + \epsilon(\theta)]}\, \log \frac{R(z,t)}{R(t)}, \qquad (2.43) $$
where R(z, t) ≤ θ ≤ R(t). From (2.42) and (2.43) one obtains
$$ \frac{R(z,t)}{R(t)} = \exp\left\{ \alpha [1 + \epsilon(\theta)] \int_{0}^{z} \frac{du}{s(u)} \right\}, $$
and if t → ∞, then
$$ \lim_{t\to\infty} \frac{R(z,t)}{R(t)} = \exp\left\{ \alpha \int_{0}^{z} \frac{du}{s(u)} \right\}. $$
Since α < 0,
$$ \int_{0}^{1} \frac{du}{s(u)} = \infty, $$
hence k(1) = 1, from which all statements of the theorem follow.

The following question arises naturally: under what condition is the expectation of the number of particles in a surviving, subcritical system finite? The answer is given by the theorem below.

Theorem 16. The expectation
$$ k'(1) = \sum_{n=1}^{\infty} n\, w_n $$
is finite if and only if the integral⁴ in
$$ R_0 = \exp\left\{ -\int_{0}^{1} \frac{\alpha u + s(1-u)}{u\, s(1-u)}\, du \right\} \qquad (2.44) $$
is convergent. In this case the expectation is supplied by the formula
$$ \lim_{z \uparrow 1} \frac{dk(z)}{dz} = \frac{1}{R_0}, \qquad (2.45) $$
and the relationship
$$ \lim_{t\to\infty} E\{n(t) \mid n(t) > 0\} = \frac{1}{R_0} \qquad (2.46) $$
also holds. From this it also follows that the order of the summation and taking the limit is interchangeable:
$$ \sum_{n=1}^{\infty} n \lim_{t\to\infty} \frac{p_n(t)}{R(t)} = \lim_{t\to\infty} \sum_{n=1}^{\infty} n\, \frac{p_n(t)}{R(t)} = \frac{1}{R_0}. $$

⁴ This integral corresponds exactly to the integral (2.24) if the latter is solved for R_0 and one introduces the notation s(z) = Q[q(z) − z].
Proof. First the statement (2.46) will be proved. It is obvious that
$$ E\{n(t) \mid n(t) > 0\} = \frac{E\{n(t)\}}{P\{n(t) > 0\}} = \frac{e^{\alpha t}}{R(t)}, $$
and since for sufficiently large t one has R(t) ≈ R_0 e^{αt}, one immediately obtains (2.46). For the proof of the first part of the statement, based on (2.38) one writes
$$ \frac{dk(z)}{dz} = -\frac{\alpha}{s(z)} \exp\left\{ \alpha \int_0^z \frac{du}{s(u)} \right\}. $$
Considering that for values of z slightly less than unity s(z) ≈ −α(1 − z), one has
$$ -\frac{\alpha}{s(z)} \approx \frac{1}{1-z} = \exp\left\{ \int_0^z \frac{du}{1-u} \right\}, $$
and accounting for
$$ \frac{dk(z)}{dz} \approx \exp\left\{ \alpha \int_0^z \frac{du}{s(u)} + \int_0^z \frac{du}{1-u} \right\} $$
yields, after some simple rearrangements, the expression
$$ \frac{dk(z)}{dz} \approx \exp\left\{ \int_{1-z}^{1} \frac{\alpha u + s(1-u)}{u\, s(1-u)}\, du \right\}. $$
From this it follows that
$$ \lim_{z\uparrow 1} \frac{dk(z)}{dz} = \exp\left\{ \int_{0}^{1} \frac{\alpha u + s(1-u)}{u\, s(1-u)}\, du \right\}. $$
Hence, if the integral on the right-hand side is finite, then the expectation derived from the generating function k(z) is also finite, which is just the theorem that was to be proved.
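Formula (2.38) is straightforward to evaluate numerically. As a sketch, assume a hypothetical subcritical quadratic generating function q(z) = 0.4 + 0.4z + 0.2z² with Q = 1, so that s(x) = 0.2(1 − x)(2 − x) and α = −0.2. For this particular choice the integral in (2.38) is elementary (partial fractions) and gives k(z) = z/(2 − z), i.e. a geometric conditional distribution w_n = 2^{−n}; the quadrature below reproduces this closed form:

```python
import math

# hypothetical subcritical pgf: q(z) = 0.4 + 0.4 z + 0.2 z^2, Q = 1
Q, f0, f1, f2 = 1.0, 0.4, 0.4, 0.2
q1    = f1 + 2 * f2                      # = 0.8
alpha = Q * (q1 - 1)                     # = -0.2
s     = lambda x: Q * (f0 + f1 * x + f2 * x * x - x)

def k(z, N=20000):
    """Generating function (2.38): k(z) = 1 - exp(alpha * int_0^z dx/s(x)),
    with the integral evaluated by the midpoint rule."""
    integral = sum(1.0 / s(z * (j + 0.5) / N) for j in range(N)) * z / N
    return 1.0 - math.exp(alpha * integral)

# closed form for this q: k(z) = z / (2 - z), hence w_n = 2^{-n}
for z in (0.3, 0.5, 0.9):
    assert abs(k(z) - z / (2 - z)) < 1e-4
```

The numbers are illustrative assumptions; the point is only that (2.38) is directly computable once q(z) and Q are given.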
Non-parametric asymptotic distribution in a critical medium

An important characteristic of the branching process in a critical medium is that its asymptotic behaviour is exclusively determined by the first two factorial moments of the generating function q(z). It is therefore of interest to construct a random process whose distribution function describes the asymptotic behaviour of the process in a critical medium without material parameters. In the following, we show that the random process
$$ \mathbf{q}(t) = \frac{n(t)}{E\{n(t) \mid n(t) > 0\}} \qquad (2.47) $$
satisfies this requirement.

Theorem 17. If q_1 = 1 and q_2 is finite, then the distribution function
$$ P\left\{ \frac{n(t)}{E\{n(t) \mid n(t) > 0\}} \le x \,\middle|\, n(t) > 0 \right\} = S(x, t) \qquad (2.48) $$
has the property that it converges, if t → ∞, to the exponential distribution function
$$ S(x) = 1 - e^{-x}, \quad x \ge 0. \qquad (2.49) $$
Proof. First of all, one notices that
$$ E\{n(t) \mid n(t) > 0\} \approx \frac{1}{2} Q q_2 t. \qquad (2.50) $$
This follows from
$$ E\{n(t) \mid n(t) > 0\} = \frac{E\{n(t)\}}{R(t)} $$
and the fact that in a critical system E{n(t)} = 1, and further that for sufficiently large t one has R(t) ≈ 2/(Qq_2 t). In order to prove (2.49), define the characteristic function
$$ \varphi(\omega, t) = \int_0^{\infty} e^{-\omega x}\, dS(x, t) = E\{e^{-\omega \mathbf{q}(t)} \mid n(t) > 0\} = E\{e^{-\omega R(t) n(t)} \mid n(t) > 0\}, \qquad (2.51) $$
which can be written in the following form:
$$ \varphi(\omega, t) = \frac{g[e^{-\omega R(t)}, t] - g(0, t)}{R(t)} = 1 - \frac{R[e^{-\omega R(t)}, t]}{R(t)}. \qquad (2.52) $$
To obtain the asymptotic form of the characteristic function φ(ω, t) for t → ∞, an expression for the function R(z, t) for large t values is needed. From (1.29),
$$ \frac{\partial R(z,t)}{\partial t} = -s[1 - R(z,t)], $$
with the initial condition R(z, 0) = 1 − z. By applying the series expansion theorem given in Section A.5, perform now the substitution
$$ s(z) = \frac{1}{2} Q A(z)(z-1)^2. $$
Here |A(z)| ≤ q_2 and A(z) → q_2 if z → 1. After some brief calculation, one arrives at
$$ \frac{\partial R(z,t)}{\partial t} = -\frac{1}{2} Q A[1 - R(z,t)]\, R^2(z,t). $$
Taking into account that
$$ R(z,t) = 1 - p_0(t) - \sum_{n=1}^{\infty} p_n(t) z^n = R(t) - \sum_{n=1}^{\infty} p_n(t) z^n, $$
the inequality
$$ |R(z,t)| \le R(t) + \left| \sum_{n=1}^{\infty} p_n(t) z^n \right| \le 2R(t) $$
holds, from which one has |R(z,t)| ≤ 2R(t) → 0 if t → ∞. In view of this, one can write A[1 − R(z,t)] = q_2 + δ(z,t), where for all |z| ≤ 1, δ(z,t) tends uniformly to zero if t → ∞. So the above equation takes the form
$$ \frac{\partial R(z,t)}{\partial t} = -\frac{1}{2} Q [q_2 + δ(z,t)]\, R^2(z,t), \qquad (2.53) $$
whose solution with the initial condition R(z, 0) = 1 − z is given by the equation
$$ \frac{1}{R(z,t)} - \frac{1}{1-z} = \frac{1}{2} Q \left[ q_2 t + \int_0^t δ(z,u)\, du \right]. $$
From this, the formula
$$ R(z,t) = \frac{2(1-z)}{2 + Qq_2 t(1-z)}\, [1 + \epsilon(z,t)] \qquad (2.54) $$
immediately follows, in which ε(z,t) converges uniformly to zero for all |z| ≤ 1 if t → ∞. By performing the substitution
$$ z = e^{-\omega R(t)} $$
in (2.54), from (2.52) one obtains the characteristic function
$$ \varphi(\omega, t) = 1 - \frac{2\,(1 - e^{-\omega R(t)})}{R(t)\left[ 2 + Qq_2 t (1 - e^{-\omega R(t)}) \right]}\, (1 + \epsilon). \qquad (2.55) $$
By utilising the asymptotic formula
$$ R(t) = \frac{2}{Qq_2 t}\, [1 + o(1)], $$
it follows that
$$ \lim_{t\to\infty} \varphi(\omega, t) = 1 - \frac{\omega}{\omega+1} = \frac{1}{\omega+1}, $$
and this is exactly the characteristic function of the exponential distribution function.
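Theorem 17 can be illustrated by simulation. The sketch below assumes a hypothetical critical binary process (a reaction yields 0 or 2 particles with probability 1/2 each, so q_1 = 1, q_2 = 1, and R(t) = 2/(2 + Qt) exactly); among the trials surviving to time t, the scaled particle number n(t)R(t) should be approximately Exp(1)-distributed:

```python
import random

def critical_population(t_max, Q=1.0):
    """Particle number at t_max for a critical binary branching process
    (offspring 0 or 2 with probability 1/2 each); 0 means extinct."""
    n, t = 1, 0.0
    while n > 0:
        t += random.expovariate(n * Q)   # next reaction among n particles
        if t > t_max:
            return n
        n += 1 if random.random() < 0.5 else -1
    return 0

random.seed(2)
Q, t_max = 1.0, 20.0
R_exact = 2.0 / (2.0 + Q * t_max)        # survival probability for this q
survivors = [m for m in (critical_population(t_max, Q)
                         for _ in range(10000)) if m > 0]

# conditional mean ~ 1/R(t), cf. (2.50)
mean_surv = sum(survivors) / len(survivors)
assert abs(mean_surv * R_exact - 1.0) < 0.15

# P{ n(t) R(t) <= 1 | survival } should be close to 1 - e^{-1} ~ 0.632
frac = sum(1 for m in survivors if m * R_exact <= 1.0) / len(survivors)
assert abs(frac - (1 - 2.718281828 ** -1)) < 0.1
```

The offspring law and parameters are illustrative; the qualitative point — an exponential limit law with no free material parameter — is the content of the theorem.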
Asymptotic distribution of the normalised particle number in a supercritical medium

In the case of a supercritical medium, when q_1 > 1 and q_2 < ∞, introduce the definition of the normalised particle number
$$ \mathbf{r}(t) = \frac{n(t)}{E\{n(t)\}} = n(t)\, e^{-\alpha t}, \qquad (2.56) $$
where α = Q(q_1 − 1) > 0.

Theorem 18. The normalised particle number r(t) satisfies, for all u > 0, the relation
$$ E\{[\mathbf{r}(t+u) - \mathbf{r}(t)]^2\} = E\{\mathbf{r}^2(t+u)\} - E\{\mathbf{r}^2(t)\} \to 0, \quad \text{if } t \to \infty. \qquad (2.57) $$
This means that r(t) converges in quadratic mean, and hence naturally also stochastically, to a random variable denoted by r*, if t → ∞.

Proof. For the proof one needs the expectations
$$ E\{n(t)\} = e^{\alpha t} \quad \text{and} \quad E\{n^2(t)\} = \frac{Qq_2}{\alpha}\, e^{\alpha t}(e^{\alpha t} - 1) + e^{\alpha t}. \qquad (2.58) $$
From these one has
$$ E\{\mathbf{r}(t)\} = 1 \quad \text{and} \quad E\{\mathbf{r}^2(t)\} = \frac{Qq_2}{\alpha}\,(1 - e^{-\alpha t}) + e^{-\alpha t}. \qquad (2.59) $$
Naturally,
$$ E\{\mathbf{r}^2(t)\} \to \frac{Qq_2}{\alpha}, \quad \text{if } t \to \infty. $$
Then, one derives the equality
$$ E\{\mathbf{r}(t)\mathbf{r}(t+u)\} = E\{\mathbf{r}^2(t)\}. \qquad (2.60) $$
To this end, consider the expectation
$$ \left[ \frac{\partial^2 g_2(z_1, z_2, t, u)}{\partial z_1 \partial z_2} \right]_{z_1=z_2=1} = E\{n(t+u)\, n(t)\}. $$
By utilising the expression
$$ E\{n(t+u)\, n(t)\} = [m_2(t) + m_1(t)]\, m_1(u), $$
derived in Section 2.1.1, and using the expectations (2.58), one obtains
$$ E\{n(t+u)\, n(t)\} = E\{n^2(t)\}\, e^{\alpha u}. $$
Multiplying by e^{−α(2t+u)} yields E{r(t+u)r(t)} = E{r²(t)}, from which it follows that
$$ E\{[\mathbf{r}(t+u) - \mathbf{r}(t)]^2\} = E\{\mathbf{r}^2(t+u)\} - E\{\mathbf{r}^2(t)\}. $$
From this, for every u > 0, the relation
$$ E\{[\mathbf{r}(t+u) - \mathbf{r}(t)]^2\} \to 0, \quad \text{if } t \to \infty, $$
holds. This shows that r(t) converges to the random variable r* in quadratic mean (accordingly, also stochastically) if t → ∞. It is known from the theory of stochastic processes that in this case the characteristic function of r(t) converges to the characteristic function of the random variable r* if t → ∞, i.e.
$$ \varphi(\omega, t) = E\{e^{-\omega \mathbf{r}(t)}\} = g(\exp\{-\omega e^{-\alpha t}\}, t) \to \varphi(\omega) = E\{e^{-\omega \mathbf{r}^*}\}. $$

Theorem 19. The characteristic function (Laplace–Stieltjes transform)
$$ \varphi(\omega) = \int_0^{\infty} e^{-\omega x}\, dS(x) $$
of the limit distribution
$$ \lim_{t\to\infty} P\{\mathbf{r}(t) \le x\} = \lim_{t\to\infty} S(x, t) = P\{\mathbf{r}^* \le x\} = S(x) $$
is determined by the differential equation
$$ \frac{d\varphi(\omega)}{d\omega} = \frac{s[\varphi(\omega)]}{\alpha\omega}, \quad \varphi(0) = 1, \qquad (2.61) $$
whose solution in implicit form is given by the equation
$$ 1 - \varphi(\omega) = \omega \exp\left\{ \int_1^{\varphi(\omega)} \frac{s(x) - \alpha(x-1)}{s(x)(x-1)}\, dx \right\}. \qquad (2.62) $$

Proof. Substitute the expression
$$ z = \exp\{-\omega e^{-\alpha(t+u)}\} $$
in the standard relationship g(z, t+u) = g[g(z, t), u]. One obtains
$$ \varphi(\omega, t+u) = g[\varphi(\omega e^{-\alpha u}, t), u], $$
and from this, for t → ∞, the relation
$$ \varphi(\omega) = g[\varphi(\omega e^{-\alpha u}), u] \qquad (2.63) $$
follows. By utilising the formula derived for the generating function g(z, t) in (1.31), one obtains from (2.63) that
$$ \varphi(\omega) = \varphi(\omega e^{-\alpha u}) + u\, s[\varphi(\omega e^{-\alpha u})] + o(u), $$
which can also be written in the form
$$ \frac{\varphi(\omega) - \varphi(\omega e^{-\alpha u})}{u} = s[\varphi(\omega e^{-\alpha u})] + \frac{o(u)}{u}. $$
By performing the transition u ↓ 0, the equation
$$ \frac{d\varphi(\omega)}{d\omega} = \frac{s[\varphi(\omega)]}{\alpha\omega} $$
is obtained, which corresponds exactly to (2.61).

The formula (2.62) will be derived by using a special form of (1.29). Namely, (1.29) can be written in the form
$$ \frac{dg}{g-1} - \frac{s(g) - \alpha(g-1)}{s(g)(g-1)}\, dg = \alpha\, dt, $$
from which, by integration with respect to t between 0 and t, one obtains
$$ 1 - g(z,t) = (1-z)\, e^{\alpha t} \exp\left\{ \int_z^{g(z,t)} \frac{s(x) - \alpha(x-1)}{s(x)(x-1)}\, dx \right\}. $$
With the substitution z = exp{−ωe^{−αt}}, one arrives at
$$ 1 - \varphi(\omega, t) = (1 - \exp\{-\omega e^{-\alpha t}\})\, e^{\alpha t} \exp\left\{ \int_{\exp\{-\omega e^{-\alpha t}\}}^{\varphi(\omega,t)} \frac{s(x) - \alpha(x-1)}{s(x)(x-1)}\, dx \right\}, $$
from which (2.62) is immediately obtained if t → ∞, since
$$ \lim_{t\to\infty} \frac{1 - \exp\{-\omega e^{-\alpha t}\}}{e^{-\alpha t}} = \omega. $$
This completes the proof of the statements on the asymptotic behaviour of supercritical processes.

As an illustration, it is instructive to determine the distribution function S(x) for a quadratic generating function q(z). It will be shown that
$$ S(x) = \left[ 1 - (1-p)\, e^{-(1-p)x} \right] \Delta(x), \qquad (2.64) $$
where p = 1 − 2(q_1 − 1)/q_2 is the probability of extinction and Δ(x) is the unit step function, continuous from the right. It can be seen from the definition that S(x) has a first-order discontinuity at the point x = 0, since S(−0) = 0, whereas S(+0) = p.
By using (2.61), the proof of the statement (2.64) goes as follows. Since s(φ) = Q f_2 (φ − 1)(φ − p), one has
$$ \frac{d\varphi}{(\varphi-1)(\varphi-p)} = \frac{1}{1-p}\, \frac{d\omega}{\omega}. $$
From here, after integration,
$$ \frac{\varphi(\omega) - 1}{\varphi(\omega) - p}\, \frac{1}{\omega} = C, \qquad (2.65) $$
in which the constant C can be determined from the condition φ(0) = 1. By applying L'Hospital's rule, it follows that
$$ \lim_{\omega\to 0} \frac{\varphi(\omega) - 1}{\omega} = \varphi'(0) = -E\{\mathbf{r}^*\} = -1, $$
and based on this, one has C = −1/(1 − p). Then, by simple rearrangement, one obtains from (2.65)
$$ \varphi(\omega) = \frac{1 - p + p\omega}{1 - p + \omega}. \qquad (2.66) $$
From this, the following expression can be deduced for the Laplace transform of the distribution function S(x):
$$ \psi(\omega) = \frac{\varphi(\omega)}{\omega} = \frac{1}{\omega} - \frac{1-p}{\omega + 1 - p}. $$
Its inverse corresponds to the formula in (2.64). We note that due to the discontinuity at x = 0 one has
$$ dS(x) = p\,\delta(x)\,dx + (1-p)^2\, e^{-(1-p)x}\,dx, \quad \forall x \ge 0. \qquad (2.67) $$
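That (2.66) indeed solves (2.61) can be verified numerically. The sketch below assumes a hypothetical supercritical quadratic generating function q(z) = 0.2 + 0.2z + 0.6z² with Q = 1 (so q_1 = 1.4, q_2 = 1.2 and p = 1/3) and compares a central-difference derivative of the candidate φ with s[φ]/(αω):

```python
Q, f0, f1, f2 = 1.0, 0.2, 0.2, 0.6       # hypothetical supercritical pgf
q1, q2 = f1 + 2 * f2, 2 * f2             # q1 = 1.4 > 1
alpha  = Q * (q1 - 1)
p      = 1 - 2 * (q1 - 1) / q2           # extinction probability = 1/3
s      = lambda x: Q * (f0 + f1 * x + f2 * x * x - x)
phi    = lambda w: (1 - p + p * w) / (1 - p + w)   # candidate (2.66)

# (2.66) must satisfy d(phi)/d(omega) = s(phi) / (alpha * omega), cf. (2.61)
h = 1e-6
for w in (0.5, 1.0, 3.0):
    deriv = (phi(w + h) - phi(w - h)) / (2 * h)
    assert abs(deriv - s(phi(w)) / (alpha * w)) < 1e-6

assert phi(0.0) == 1.0                   # phi(0) = 1
assert abs(phi(1e9) - p) < 1e-6          # atom of size p at x = 0, cf. (2.67)
```

The limit φ(ω) → p as ω → ∞ reflects the point mass p at x = 0 in (2.67): with probability p the process dies out and r* = 0.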
C H A P T E R   T H R E E

Injection of Particles

Contents
3.1 Introduction  55
3.2 Distribution of the Number of Particles  57
3.3 Limit Probabilities  69
3.4 Probability of the Particle Number in a Nearly Critical System  77
3.1 Introduction

Suppose that at t = 0 there are no particles capable of multiplication present in the multiplying medium. As time passes, however, by virtue of events occurring at random or fixed time points, particles from external or internal sources appear in the medium and start branching processes. Let us call these particles of external or internal origin injected particles.¹ Accordingly, the injection process is a series of events following each other randomly or deterministically, each of which results in the appearance of one or more particles in the multiplying medium. In the following, we shall investigate how the characteristics of the injection process affect the distribution of the number of particles generated in the multiplying medium.

¹ In the mathematical literature, the injection process is usually called immigration and the injected particles are referred to as immigrated particles.

Let η(t_0) denote the random time instant of the first occurrence of the injection event after the time t_0. In the expression of the probability
$$ P\{\eta(t_0) \le t\} = F(t_0, t) = 1 - T(t_0, t), \qquad (3.1) $$
the term T(t_0, t) stands for the probability that no injection event occurs in the interval (t_0, t]. Obviously, T(t_0, t) is a monotonic, non-increasing function of t, for which the equalities T(t_0, t_0) = 1 and T(t_0, ∞) = 0 hold. If T(t_0, t) satisfies the functional equation
$$ T(t_0, t) = T(t_0, t')\, T(t', t), \quad t_0 \le t' \le t, \qquad (3.2) $$
then
$$ T(t_0, t) = \exp\left\{ -\int_{t_0}^{t} s(u)\, du \right\}, \quad t_0 \le t, \qquad (3.3) $$
where s(u) is a non-singular, non-negative function for which the limit relation
$$ \lim_{t\to\infty} \int_{t_0}^{t} s(u)\, du = \infty $$
must hold. The function s(u) is called the intensity function of the injection. We note that, following from its definition, the function
$$ S(t_0, t) = \int_{t_0}^{t} s(u)\, du $$
has to be continuous at every point t ∈ [t_0, ∞), because if it had a discontinuity of the first order, say at time t = t_d, then the probability T(t_0, t_d) would not be uniquely determined, since
$$ \lim_{t \uparrow t_d} T(t_0, t) \ne \lim_{t \downarrow t_d} T(t_0, t). $$

It seems suitable to make a seemingly trivial observation at this point. It does happen in the literature [27, 28] that the function s(u) is identified with the generalised function
$$ s(u) = \delta(u) + \sum_{n=1}^{\infty} \left[ \delta(u - nT_0) + \delta(u + nT_0) \right], $$
the so-called Dirac comb or Dirac pulse train, presumably on the assumption that it describes a periodic particle injection with time interval T_0. This is, however, an incorrect procedure, as can be seen immediately: if s(u) equals the Dirac pulse train, then S(t_0, t) is a step function containing first-order discontinuities. One can also mention the obvious contradiction that if s(u) is a Dirac train, then the probability that no injection takes place in the interval (0, T_0] is obtained from (3.3) as e^{−1}, whereas it is self-evident that this probability is 0, since an injection occurs at every time moment nT_0, n = 1, 2, … .
(3.4)
denote the probability that this number is exactly equal to j. For the sake of later use, we define the generating function ∞ hj z j (3.5) r(z) = j=0
and the factorial moments
rk =
d k r(z) dzk
, z=1
k = 1, 2, . . . ,
from which latter the first two, i.e. r1 = E{q} and r2 = E{q(q − 1)}
(3.6)
will play an important role in our considerations. For simplicity, it seems again worthwhile to give the definition of the branching process which is started by one injected particle. Suppose that the reaction time of each of the particles that was injected into or was born in the multiplying medium follows an exponential distribution with an intensity parameter Q, i.e. the branching process is a Markov process. Let n(t) denote the number of particles at time t ≥ 0 in an infinite homogeneous multiplying medium. Consider now the probability P|{n(t) = n|n(0) = 1} = pn (t)
(3.7)
and its generating function g(z, t) = E{zn(t) } =
∞ n=0
pn (t)zn .
(3.8)
The backward equation determining the latter was already derived in Section 1.2.1 as
$$ g(z, t) = e^{-Qt} z + Q \int_0^t e^{-Q(t-t')} q[g(z, t')]\, dt'. \qquad (3.9) $$
We recall that
$$ q(z) = \sum_{k=0}^{\infty} f_k z^k, \qquad (3.10) $$
where f_k is the probability that in a reaction induced by a particle, in which the inducing particle is absorbed, ν = k new particles are generated. Because of their frequent later use, the notations
$$ q_1 = E\{\nu\} = \left[ \frac{dq(z)}{dz} \right]_{z=1} \quad \text{and} \quad q_2 = E\{\nu(\nu-1)\} = \left[ \frac{d^2 q(z)}{dz^2} \right]_{z=1} \qquad (3.11) $$
are recalled here, together with the first and second factorial moments of the particle number n(t) induced by a single starting particle. These latter were determined in Section 1.3.2 as
$$ m_1(t) = \left[ \frac{\partial g(z,t)}{\partial z} \right]_{z=1} = e^{\alpha t} \qquad (3.12) $$
and
$$ m_2(t) = \left[ \frac{\partial^2 g(z,t)}{\partial z^2} \right]_{z=1} = \begin{cases} \dfrac{Qq_2}{\alpha}\,(e^{\alpha t} - 1)\, e^{\alpha t}, & \text{if } \alpha \ne 0, \\[2mm] Qq_2 t, & \text{if } \alpha = 0, \end{cases} \qquad (3.13) $$
where α = Q(q_1 − 1). In subcritical systems, where α < 0, the notation a = −α = Q(1 − q_1) > 0 will often be used.
3.2 Distribution of the Number of Particles

Let N(t) denote the number of particles at time t ≥ t_0 in the multiplying medium with an external source, having an exponential distribution of the injection times.² Assuming that at time t_0 ≤ t the medium did not contain particles capable of multiplication, let us determine the probability
$$ P\{N(t) = n \mid n(t_0) = 0\} = P(n, t \mid t_0, 0) = P(n, t \mid t_0) \qquad (3.14) $$
that at t ≥ t_0 there are exactly n particles in the multiplying medium.

² Here and in the rest of the book the convention is adopted that quantities (distributions, moments, etc.) corresponding to single-particle-induced processes are denoted by lower-case letters, whereas those corresponding to branching processes induced by an extraneous source of particles are denoted by the same symbols in capital letters.

By using the well-known method of deriving the backward Kolmogorov equation, one can immediately write that
$$ P(n, t \mid t_0) = T(t_0, t)\,\delta_{n0} + \int_{t_0}^{t} T(t_0, t')\, s(t') \sum_{n_1+n_2=n} \sum_{j=0}^{\infty} h_j\, p_{n_1}(t \mid j, t')\, P(n_2, t \mid t')\, dt'. \qquad (3.15) $$
In this form, in contrast to the mixed backward equation (1.7), this is a true backward equation, since time homogeneity is not utilised and the operations are performed on the initial (source) time. Since each of the j particles injected at time t' will start a branching process independently from the others, the equality
$$ p_{n_1}(t \mid j, t') = \sum_{k_1+\cdots+k_j=n_1} \prod_{i=1}^{j} p_{k_i}(t - t') \qquad (3.16) $$
holds. Here, account was taken of the fact that the branching process induced by a particle is homogeneous in time. Define the generating function
$$ G(z, t \mid t_0) = \sum_{n=0}^{\infty} P(n, t \mid t_0)\, z^n. \qquad (3.17) $$
By taking (3.15) and (3.16) into account, one obtains
$$ G(z, t \mid t_0) = T(t_0, t) + \int_{t_0}^{t} T(t_0, t')\, s(t')\, r[g(z, t-t')]\, G(z, t \mid t')\, dt', $$
from which, in view of the relation
$$ \frac{dT(t_0, t)}{dt_0} = s(t_0)\, T(t_0, t), $$
the following backward differential equation is obtained:
$$ \frac{\partial G(z, t \mid t_0)}{\partial t_0} = s(t_0)\{1 - r[g(z, t-t_0)]\}\, G(z, t \mid t_0), \qquad (3.18) $$
with the initial condition lim_{t_0 ↑ t} G(z, t | t_0) = 1. The solution is given by the expression
$$ G(z, t \mid t_0) = \exp\left\{ \int_{t_0}^{t} s(t')\{r[g(z, t-t')] - 1\}\, dt' \right\}. \qquad (3.19) $$
Choosing the start t_0 of the injection equal to zero and taking the logarithm of (3.19) yields
$$ \log G(z, t \mid t_0 = 0) = \int_0^t s(t')\{r[g(z, t-t')] - 1\}\, dt' = \int_0^t s(t-t')\{r[g(z, t')] - 1\}\, dt', \qquad (3.20) $$
which is now a mixed-type backward equation, since the operations on the initial time were transferred to the final (terminal) time.

The important statement can be proved that if q_1 < 1, i.e. if the medium is subcritical, and
$$ \max_{0 \le t \le \infty} s(t) < \infty, $$
and further if s(t) is not periodic, then the limit generating function
$$ G^*(z) = \lim_{t\to\infty} G(z, t \mid t_0 = 0) = \lim_{t\to\infty} \exp\left\{ \int_0^t s(t-t')\{r[g(z, t')] - 1\}\, dt' \right\} \qquad (3.21) $$
exists. This means that N(t) is asymptotically stationary.³ If s(t) is periodic then, as will be seen, N(t) is periodically stationary, i.e. the probability that n particles are present in the system at an arbitrary moment u is exactly the same as at the time points u + kT_0, where T_0 is the duration of the period and k = ±1, ±2, … .

³ Definition of asymptotic stationarity is given in Section 3.3.

In the case when s(t) = s_0 = const., and if at each injection event one particle is injected, i.e. if h_j = δ_{j1} and hence r(z) = z, then one obtains the expression for the case of a multiplying system driven by a simple homogeneous Poisson process:
$$ G(z, t \mid t_0) = \exp\left\{ s_0 \int_{t_0}^{t} [g(z, t-t') - 1]\, dt' \right\} = \exp\left\{ s_0 \int_0^{t-t_0} [g(z, t') - 1]\, dt' \right\}, \qquad (3.22) $$
in which, of course, one has the freedom of choosing t_0 = 0 and writing
$$ G(z, t) = \exp\left\{ s_0 \int_0^t [g(z, t') - 1]\, dt' \right\}. \qquad (3.23) $$
If at each injection event a random number of particles enters the multiplying medium, i.e. if h_j is the probability that exactly j particles are injected, then from (3.19) one obtains the expression for a multiplying system driven by an external source given by a homogeneous compound Poisson process:
$$ G(z, t) = \exp\left\{ s_0 \int_0^t \{r[g(z, t')] - 1\}\, dt' \right\}. \qquad (3.24) $$
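A special case in which (3.23) can be evaluated in closed form is pure injection and absorption without multiplication (q(z) = f_0 = 1): then g(z, t) = 1 − (1 − z)e^{−Qt}, and (3.23) yields a Poisson distribution with mean s_0(1 − e^{−Qt})/Q. The simulation sketch below — with arbitrary illustrative parameters — checks the characteristic mean-equals-variance property of that Poisson law:

```python
import random

def population_at(t_end, s0, Q):
    """Particles alive at t_end when single particles are injected at
    Poisson rate s0 and each is simply absorbed after an Exp(Q) lifetime
    (no multiplication: q(z) = f0 = 1, an immigration-death process)."""
    n, t = 0, 0.0
    while True:
        t += random.expovariate(s0)             # next injection instant
        if t > t_end:
            return n
        if random.expovariate(Q) > t_end - t:   # lifetime outlasts t_end
            n += 1

random.seed(4)
s0, Q, t_end = 3.0, 1.0, 10.0
sample = [population_at(t_end, s0, Q) for _ in range(5000)]
m = sum(sample) / len(sample)
v = sum((x - m) ** 2 for x in sample) / len(sample)
assert abs(m - s0 / Q) < 0.15   # Poisson mean ~ s0 (1 - e^{-Qt})/Q ~ 3
assert abs(v - m) < 0.5         # Poisson: variance equals mean
```

For a genuinely multiplying medium the distribution is no longer Poisson, which is exactly what the higher moments derived from (3.24) quantify.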
In the case of injection according to a non-homogeneous compound Poisson process, the intensity function s(t) often happens to be periodic. A simple variant of this will be discussed later, namely when there are periodically repeated breaks between the injection periods.

Investigate now the process in which the injection events occur exactly at the time instants t_k = kT_0, k = 1, 2, …, i.e. the pulse train is not a Poisson sequence. Every injection event results in the appearance of a random number of particles in the multiplying medium. Let again h_j denote the probability that the number of injected particles is q = j. Further, let N_k(u) denote the number of particles in the multiplying medium at time u after the k-th injection, with 0 ≤ u < T_0. Then
$$ P\{N_k(u) = n\} = P_k(n, u) \qquad (3.25) $$
is the probability that at time u ≤ T_0 after the k-th injection exactly n particles are found in the multiplying system. These particles are the descendants of the particles that already existed in the system at time T_0 after the (k−1)-th injection, and of the particles entering the system at the k-th injection. Based on this, one can write
$$ P_k(n, u) = \sum_{n_1+n_2=n} \sum_{i=0}^{\infty} P_{k-1}(i, T_0)\, A(n_1, u \mid i) \sum_{j=0}^{\infty} h_j\, B(n_2, u \mid j), \qquad (3.26) $$
where
$$ A(n_1, u \mid i) = \sum_{a_1+\cdots+a_i=n_1} \prod_{\ell=1}^{i} p_{a_\ell}(u) \qquad (3.27) $$
and
$$ B(n_2, u \mid j) = \sum_{b_1+\cdots+b_j=n_2} \prod_{\ell=1}^{j} p_{b_\ell}(u) \qquad (3.28) $$
are functions of the same form, the probabilities p_{a_ℓ}(u) and p_{b_ℓ}(u) being of the type defined in (3.7). Define the generating function
$$ G_k(z, u) = \sum_{n=0}^{\infty} P_k(n, u)\, z^n. \qquad (3.29) $$
By simple considerations, from (3.26) the recursive equation
$$ G_k(z, u) = G_{k-1}[g(z, u), T_0]\, r[g(z, u)] \qquad (3.30) $$
is obtained, in which g(z, 0) = z. From this equation, the factorial moments can easily be derived. We note that the application of the above equation can be considered correct, as opposed to a generating function equation based on a non-homogeneous compound Poisson distribution whose intensity function is identified with a Dirac train, since the use of such a source intensity function in (3.23) is avoided.
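Differentiating (3.30) at z = 1 gives a one-line recursion for the expected particle number at time u after the k-th injection: since m_1(u) = e^{αu} and r(1) = 1, one obtains M_k(u) = e^{αu}[M_{k−1}(T_0) + r_1]. A sketch with arbitrary illustrative subcritical parameters shows the geometric approach to the periodically stationary level:

```python
import math

# mean recursion obtained from (3.30):
#   M_k(u) = e^{alpha u} [ M_{k-1}(T0) + r1 ],   M_0 = 0,
# evaluated at u = T0; parameter values below are hypothetical
alpha, T0, r1 = -0.5, 2.0, 3.0

M, history = 0.0, []
for k in range(60):
    M = math.exp(alpha * T0) * (M + r1)
    history.append(M)

# periodically stationary limit (fixed point of the recursion):
#   M* = r1 e^{alpha T0} / (1 - e^{alpha T0})
M_star = r1 * math.exp(alpha * T0) / (1 - math.exp(alpha * T0))
assert abs(history[-1] - M_star) < 1e-12
assert abs(history[-1] - history[-2]) < 1e-12
```

The existence of this fixed point for q_1 < 1 is the moment-level counterpart of the limit (3.31) below.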
It can be shown that if q_1 < 1, i.e. if the system is subcritical, then the limit value
$$ \lim_{k\to\infty} G_k(z, u) = G^*(z, u) \qquad (3.31) $$
exists, and thus also the functional equation
$$ G^*(z, u) = G^*[g(z, u), T_0]\, r[g(z, u)] \qquad (3.32) $$
holds. From this it follows that N_k(u) is periodically stationary.

In several cases, more needs to be known about the random process N(t) than what the probability function P(n, t|t_0) itself contains. In order to calculate the autocovariance and/or the autocorrelation function of N(t), one needs the two-point probability
$$ P\{N(t) = n_1,\, N(t+u) = n_2 \mid n(t_0) = 0\} = P_2(n_1, n_2, t, t+u \mid t_0). \qquad (3.33) $$
We shall prove that the logarithm of the generating function
$$ G_2(z_1, z_2, t, t+u \mid t_0) = \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} P_2(n_1, n_2, t, t+u \mid t_0)\, z_1^{n_1} z_2^{n_2} \qquad (3.34) $$
is given by
$$ \log G_2(z_1, z_2, t, t+u \mid t_0) = \int_{t_0}^{t} s(t')\{r[g(z_1 g(z_2, u), t-t')] - 1\}\, dt' + \int_{t}^{t+u} s(t')\{r[g(z_2, t+u-t')] - 1\}\, dt'. \qquad (3.35) $$
The proof goes as follows. Since
$$ P\{N(t) = n_1,\, N(t+u) = n_2 \mid n(t_0) = 0\} = P\{N(t+u) = n_2 \mid N(t) = n_1\}\, P\{N(t) = n_1 \mid n(t_0) = 0\}, $$
based on (3.34) this yields
$$ G_2(z_1, z_2, t, t+u \mid t_0) = \sum_{n_1=0}^{\infty} z_1^{n_1}\, G(z_2, t+u \mid n_1, t)\, P\{N(t) = n_1 \mid n(t_0) = 0\}. \qquad (3.36) $$
By considering that the n_1 particles present at time t will start branching processes in the interval [t, t+u] independently from each other, as well as that injection can also take place during the time interval u, one obtains
$$ G(z_2, t+u \mid n_1, t) = [g(z_2, u)]^{n_1}\, G(z_2, t+u \mid t). $$
Substituting this formula into expression (3.36) leads to
$$ G_2(z_1, z_2, t, t+u \mid t_0) = G[z_1 g(z_2, u), t \mid t_0]\, G(z_2, t+u \mid t), \qquad (3.37) $$
whose logarithm, taking into account the relation (3.19), is identical to (3.35). This completes the proof of the statement.

In the case when s(t) = s_0 = const. and t_0 = 0, (3.35) can be written in the form
$$ \log G_2(z_1, z_2, t, t+u \mid 0) = s_0 \int_0^t \{r[g(z_1 g(z_2, u), t')] - 1\}\, dt' + s_0 \int_0^u \{r[g(z_2, t')] - 1\}\, dt'. \qquad (3.38) $$
The second term on the right-hand side above is the result of the rearrangement
$$ \int_t^{t+u} s(t')\{r[g(z_2, t+u-t')] - 1\}\, dt' = \int_0^u s(t+u-t')\{r[g(z_2, t')] - 1\}\, dt' $$
after the substitution s(t+u−t') = s_0.
61
Injection of Particles
3.2.1 Expectation, variance and correlation Now the characteristics of the expectation, variance and autocorrelation of the particle number in a multiplying system with particle injection will be investigated. For simplicity, only injections according to homogeneous Poisson and non-homogeneous periodic compound Poisson distributions will be considered.
Homogeneous compound Poisson distribution

For the calculations, it is practical to use the logarithm of the generating function. For the expectation, from (3.24) it follows that

$$E\{N(t)|n(0) = 0\} = M_1(t, \alpha) = \left.\frac{\partial \log G(z, t)}{\partial z}\right|_{z=1} = \frac{s_0 r_1}{\alpha}\{\exp(\alpha t) - 1\}. \tag{3.39}$$

If the system is critical, i.e. if α = 0, then

$$M_1(t, \alpha = 0) = s_0 r_1 t. \tag{3.40}$$

If the system is subcritical, i.e. if α = −a < 0, then the limit value

$$\lim_{t\to\infty} M_1(t, \alpha = -a < 0) = M_1^*(a) = \frac{s_0 r_1}{a} \tag{3.41}$$

exists due to asymptotic stationarity. Further, the variance can be calculated from the formula

$$D^2\{N(t)|n(0) = 0\} = V(t, \alpha) = \left.\frac{\partial^2 \log G(z, t)}{\partial z^2}\right|_{z=1} + \left.\frac{\partial \log G(z, t)}{\partial z}\right|_{z=1}. \tag{3.42}$$

If α ≠ 0, then one obtains that

$$V(t, \alpha) = M_1(t, \alpha)\left[1 + \frac{1}{2}\frac{q_1^2}{q_1-1}D_\nu(e^{\alpha t}-1) + \frac{1}{2}r_1 D_q(e^{\alpha t}+1)\right], \tag{3.43}$$

where

$$D_\nu = \frac{q_2}{q_1^2} = \frac{E\{\nu(\nu-1)\}}{E\{\nu\}^2} \quad \text{and} \quad D_q = \frac{r_2}{r_1^2} = \frac{E\{q(q-1)\}}{E\{q\}^2}$$

are the so-called Diven factors. If α = 0, then

$$V(t, \alpha = 0) = M_1(t, \alpha = 0)\left[1 + \frac{r_2}{r_1} + \frac{1}{2}q_2 Q t\right]. \tag{3.44}$$

In a subcritical system, when α = −a < 0 and hence N(t) is asymptotically stationary, the limit value

$$\lim_{t\to\infty} V(t, \alpha = -a < 0) = V^*(a) = M_1^*(a)\left[1 + \frac{1}{2}\frac{Q}{a}q_1^2 D_\nu + \frac{1}{2}r_1 D_q\right] \tag{3.45}$$

exists. Determine the autocovariance function of the particle number N(t),

$$E\{[N(t) - M_1(t, \alpha)][N(t+u) - M_1(t+u, \alpha)]\} = R_{N,N}(t, t+u, \alpha),$$

which, according to (3.38), is equal to the expression

$$\left.\frac{\partial^2 \log G_2(z_1, z_2, t, t+u)}{\partial z_1\,\partial z_2}\right|_{z_1=z_2=1} = s_0\int_0^t \left.\frac{\partial^2 r\{g[z_1 g(z_2, u), t']\}}{\partial z_1\,\partial z_2}\right|_{z_1=z_2=1}dt'. \tag{3.46}$$

A short calculation leads to

$$R_{N,N}(t, t+u, \alpha) = V(t, \alpha)e^{\alpha u}, \qquad u \ge 0. \tag{3.47}$$

The autocorrelation function of N(t) can also be written down immediately. One obtains

$$C_{N,N}(t, t+u, \alpha) = \sqrt{\frac{V(t, \alpha)}{V(t+u, \alpha)}}\,e^{\alpha u}, \qquad u \ge 0, \tag{3.48}$$

which, in a critical system, can be given in the following form:

$$C_{N,N}(t, t+u, \alpha = 0) = \sqrt{\frac{t}{t+u}}\sqrt{\frac{1 + r_2/r_1 + q_2 Q t/2}{1 + r_2/r_1 + q_2 Q (t+u)/2}}. \tag{3.49}$$

It is seen that in a critical system the correlation between the particle numbers N(t) and N(t+u) decreases relatively slowly as a function of the time difference. In contrast, it is surprising that in the same system the particle numbers N(t) and N(t+u) − N(t) are uncorrelated, i.e. the variation of the particle number during the interval [t, t+u] is not correlated with the number of particles at time t. The statement can easily be proved. If α = 0, then from (3.47) one has

$$R_{N,N}(t, t+u, \alpha = 0) - V(t, \alpha = 0) = 0,$$

which is equal to

$$E\{[N(t) - M_1(t, \alpha = 0)][N(t+u) - N(t) - M_1(t+u, \alpha = 0) + M_1(t, \alpha = 0)]\} = E\{[N(t) - M_1(t, \alpha = 0)][N(t+u) - N(t)]\} = 0,$$

which proves the statement.

Note: In the forthcoming, the variance and autocovariance function of the particle number N(t) will be needed in the case of particle injection by a simple homogeneous Poisson process. From (3.42), if α ≠ 0, one obtains

$$D^2\{N(t)\} = V(t, \alpha) = M_1(t, \alpha)\left[1 + \frac{1}{2}\frac{q_2}{q_1-1}(e^{\alpha t}-1)\right], \tag{3.50}$$

where

$$M_1(t, \alpha) = \frac{s_0}{\alpha}(e^{\alpha t}-1).$$

If α = 0, then

$$V(t, \alpha = 0) = s_0 t\left[1 + \frac{1}{2}q_2 Q t\right].$$

In a subcritical system, when α = −a < 0 and thus N(t) is stationary, there exists the limit value

$$\lim_{t\to\infty} D^2\{N(t)\} = D^2\{N^*\} = \frac{s_0}{a}\left[1 + \frac{1}{2}\frac{Q}{a}q_1^2 D_\nu\right]. \tag{3.51}$$

The form of the autocovariance function is identical with (3.47), except that now V(t, α) is equal to (3.50).
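The closed-form moments above lend themselves to a quick numerical sanity check. The sketch below (all parameter values are illustrative assumptions, not values taken from the text) evaluates (3.39) and (3.43) for a subcritical system and confirms that they saturate at the stationary limits (3.41) and (3.45):

```python
import math

# Assumed illustrative parameters: injection intensity s0, source multiplicity
# moments r1, Dq, branching intensity Q, progeny moments q1, Dnu.
s0, r1, Dq = 1.0, 1.0, 1.2
Q, q1, Dnu = 1.0, 0.9, 2.0
alpha = Q * (q1 - 1.0)            # subcritical: alpha = -a < 0

def M1(t):
    """Expectation (3.39): M1(t) = s0*r1*(exp(alpha*t) - 1)/alpha."""
    return s0 * r1 * (math.exp(alpha * t) - 1.0) / alpha

def V(t):
    """Variance (3.43), expressed with the Diven factors."""
    e = math.exp(alpha * t)
    return M1(t) * (1.0
                    + 0.5 * q1**2 / (q1 - 1.0) * Dnu * (e - 1.0)
                    + 0.5 * r1 * Dq * (e + 1.0))

a = -alpha
M1_inf = s0 * r1 / a                                    # stationary mean (3.41)
V_inf = M1_inf * (1.0 + 0.5 * (Q / a) * q1**2 * Dnu     # stationary variance (3.45)
                  + 0.5 * r1 * Dq)

print(M1(100.0), M1_inf)   # the expectation saturates at s0*r1/a
print(V(100.0), V_inf)     # the variance saturates at (3.45)
```

Both time-dependent expressions approach their stationary limits on the time scale 1/a, as the asymptotic-stationarity argument in the text requires.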
Non-homogeneous compound Poisson distribution

Due to its practical importance, instead of the general treatment of the problem, investigate the case when the intensity function s(t) is periodic. The time instances 0, T_0, 2T_0, ..., nT_0, ... of the time parameter t ∈ [0, ∞) will be called the period points, and the time interval T_0 the period. Consider first the injection process in which the starting points of the injection are identical with the period points. Let W < T_0 denote the time duration of the injection, and suppose that the injections in the time periods W, following each other with the time interval T_0 − W, correspond to a Poisson distribution with a constant intensity s_0. The intensity function of the injection can then be given as

$$s(t) = s_0\sum_n [\Delta(t - nT_0) - \Delta(t - W - nT_0)], \tag{3.52}$$

where Δ(t) denotes the unit step function; this describes a train of square pulses.⁴ Moreover, suppose that each injection results in the occurrence of a random number of source particles (of endogenous or exogenous origin) in the multiplying medium. No source particles enter the system in the intervals T_0 − W between the time intervals W. By using (3.20), the expectation of N(t) can be given as

$$M_1(t, W) = \left.\frac{\partial \log G(z, t)}{\partial z}\right|_{z=1} = s_0 r_1\sum_{n=0}^{\infty}\int_0^t [\Delta(t' - nT_0) - \Delta(t' - W - nT_0)]e^{\alpha(t-t')}dt', \tag{3.53}$$

where α = Q(q_1 − 1). Since in the forthcoming mostly the behaviour of subcritical systems will be studied, the notation α = −a < 0 will be used. By introducing the Laplace transform

$$\tilde{M}_1(s, W) = \int_0^{\infty} e^{-st}M_1(t, W)dt = s_0 r_1\frac{1 - e^{-sW}}{s(s+a)(1 - e^{-sT_0})},$$

or by integrating (3.53) directly, it is easily seen that

$$M_1(t, W) = \frac{s_0 r_1}{a}\sum_{n=0}^{\infty}\left[(1 - e^{-a(t-nT_0)})\Delta(t - nT_0) - (1 - e^{-a(t-W-nT_0)})\Delta(t - W - nT_0)\right]. \tag{3.54}$$
Figure 3.1 illustrates the development of the periodic stationarity as a function of time in a relatively strongly subcritical system for the parameters s_0 = 1 and r_1 = 1.

Figure 3.1 Time-dependence of the expectation of the particle number in a periodically pulsed subcritical system (T_0 = 1, W = 0.4, a = 0.1, s_0 = 1, r_1 = 1).

⁴ The case of other pulse shapes, and in particular Gaussian-like pulses, will be treated in Section 10.

In order to determine the expectation of the periodically stationary particle number, introduce the notation t = kT_0 + u, where k is a non-negative integer, and u is either in the interval [0, W) or in [W, T_0]. After some elementary calculations, the expectation of the particle number at the moment u after the kth but before the (k + 1)th period point is obtained as

$$M_1(kT_0 + u, W) = M_1(k, u, W) = \frac{s_0 r_1}{a}\varphi_k(u, a, W), \tag{3.55}$$
where

$$\varphi_k(u, a, W) = \begin{cases} 1 - e^{-au} + \dfrac{e^{aW}-1}{e^{aT_0}-1}(1 - e^{-kaT_0})e^{-au}, & \text{if } 0 \le u < W,\\[2mm] \dfrac{e^{aW}-1}{e^{aT_0}-1}(e^{aT_0} - e^{-kaT_0})e^{-au}, & \text{if } W \le u \le T_0. \end{cases} \tag{3.56}$$

From this equation, by taking the limit k → ∞, one immediately obtains the periodically stationary expectation of the particle number N(t) from any period point to the consecutive period point as

$$\lim_{k\to\infty} M_1(k, u, W) = M_1^*(u, W) = \frac{s_0 r_1}{a}\varphi(u, a, W), \tag{3.57}$$

where

$$\varphi(u, a, W) = \begin{cases} 1 - \dfrac{e^{aT_0} - e^{aW}}{e^{aT_0}-1}e^{-au}, & \text{if } 0 \le u < W,\\[2mm] \dfrac{e^{aW}-1}{e^{aT_0}-1}e^{aT_0}e^{-au}, & \text{if } W \le u \le T_0. \end{cases} \tag{3.58}$$

As expected,

$$M_1^*(0, W) = M_1^*(T_0, W),$$

i.e. in the periodically stationary state, the expectation at the starting point of every period is the same as that at the end point of the period. Also, it can easily be seen that M_1^*(u, W) is continuous at the points u = W and u = T_0, but not differentiable. It is worth mentioning that

$$\lim_{W\to T_0} M_1^*(u, W) = \frac{s_0 r_1}{a},$$

which is not surprising, since in this case the injection process is not periodic, rather it becomes stationary. It can also easily be shown that the average of the periodically stationary expectation is equal to

$$\frac{1}{T_0}\int_0^{T_0} M_1^*(u, W)du = \frac{s_0 r_1}{a}\frac{W}{T_0},$$

i.e. it is the same as that of a stationary source with an intensity downscaled by the so-called duty cycle W/T_0. For the variance of the particle number N(t), one calculates

$$\left.\frac{\partial^2 \log G(z, t)}{\partial z^2}\right|_{z=1} = M_2(t, W) - [M_1(t, W)]^2 = s_0\sum_{n=0}^{\infty}\int_0^t H_n(t', W)\{r_2[m_1(t-t')]^2 + r_1 m_2(t-t')\}dt'. \tag{3.59}$$

Here

$$H_n(t', W) = \Delta(t' - nT_0) - \Delta(t' - W - nT_0), \tag{3.60}$$

and in a subcritical system

$$m_1(t-t') = e^{-a(t-t')},$$

while

$$m_2(t-t') = \frac{Q}{a}q_1^2 D_\nu[1 - e^{-a(t-t')}]e^{-a(t-t')},$$

noting that a = Q(1 − q_1) > 0. Introduce also in this case the notation t = kT_0 + u, where k is a non-negative integer, and u is a time instant either in the interval [0, W) or [W, T_0]. From (3.59),

$$M_2(kT_0+u, W) - [M_1(kT_0+u, W)]^2 = \frac{s_0 r_1}{a}\left\{\frac{Q}{a}q_1^2 D_\nu\left[\varphi_k(u, a, W) - \frac{1}{2}\varphi_k(u, 2a, W)\right] + \frac{1}{2}r_1 D_q\,\varphi_k(u, 2a, W)\right\}$$
and from this the variance is found to be

$$D^2\{N(kT_0+u)\} = V_k(u, a, W) = M_2(kT_0+u, W) - [M_1(kT_0+u, W)]^2 + M_1(kT_0+u, W). \tag{3.61}$$

In a subcritical medium where a > 0, if k → ∞, then a periodically stationary state occurs and consequently, for the variance, one obtains

$$\lim_{k\to\infty} V_k(u, a, W) = V^*(u, a, W) = \frac{s_0 r_1}{a}\left\{\frac{Q}{a}q_1^2 D_\nu\left[\varphi(u, a, W) - \frac{1}{2}\varphi(u, 2a, W)\right] + \frac{1}{2}r_1 D_q\,\varphi(u, 2a, W) + \varphi(u, a, W)\right\}. \tag{3.62}$$

Figure 3.2 illustrates the 'oscillation' of the variance of the periodically stationary particle number in the pulse train after a period point, for the 'model' Diven factors D_ν = 2 and D_q = 1.

Figure 3.2 Time-dependence of the variance of the periodically stationary particle number in a pulsed subcritical system (s_0 = 1, r_1 = 1, T_0 = 1, a = 0.1, W = 0.4, D_ν = 2, D_q = 1).

An interesting question is how the variance (3.62) changes in the case when the injection time duration W tends to zero, while the injection intensity s_0 tends to infinity such that

$$\lim_{W\to 0,\,s_0\to\infty} s_0 W = C < \infty. \tag{3.63}$$

Since

$$\lim_{W\to 0,\,s_0\to\infty} s_0\varphi(u, a, W) = Ca\frac{e^{aT_0}}{e^{aT_0}-1}e^{-au},$$

from (3.62) one obtains

$$\lim_{W\to 0,\,s_0\to\infty} V^*(u, a, W) = CM_1^*(u)\left[1 + \frac{Q}{a}q_1^2 D_\nu\left(1 - \frac{e^{-au}}{1+e^{-aT_0}}\right) + r_1 D_q\frac{e^{-au}}{1+e^{-aT_0}}\right], \tag{3.64}$$

where (for C = 1)

$$M_1^*(u) = \lim_{W\to 0,\,s_0\to\infty} M_1^*(u, W) = r_1\frac{e^{-au}}{1 - e^{-aT_0}}, \qquad 0 \le u \le T_0.$$

As it will be seen in the next section, for C = 1 this formula is identical with the one obtained for the Dirac-pulse train, which is an incorrect result for the strictly periodic, instantaneous injection. The reason for the agreement, as well as for the deviation from the correct result, is of course that the limit (3.63) violates the conditions required for an intensity function as described at the beginning of the chapter, similarly to the intensity function of the Dirac-pulse train.
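The shape function (3.58) and its limits can be verified numerically. The sketch below (the values of a, T_0, W and C are illustrative assumptions) checks that the period-average of φ equals the duty cycle W/T_0, and that s_0φ approaches the Dirac-train shape when W → 0 with s_0W = C held fixed:

```python
import math

a, T0, W = 0.1, 1.0, 0.4   # assumed decay constant, pulse period and pulse width

def phi(u, a, W):
    """Periodically stationary shape function (3.58)."""
    if 0.0 <= u < W:
        return 1.0 - (math.exp(a * T0) - math.exp(a * W)) / (math.exp(a * T0) - 1.0) * math.exp(-a * u)
    return (math.exp(a * W) - 1.0) / (math.exp(a * T0) - 1.0) * math.exp(a * T0) * math.exp(-a * u)

# The period-average of M1* equals the duty-cycle-scaled stationary value,
# i.e. the average of phi over one period is W/T0 (midpoint rule):
n = 20000
avg = sum(phi((i + 0.5) * T0 / n, a, W) for i in range(n)) / n
print(avg, W / T0)                       # both ~0.4

# The W -> 0, s0*W -> C limit of s0*phi reproduces the Dirac-train shape:
C, W_small = 1.0, 1e-7
s0 = C / W_small
u = 0.7
dirac = C * a * math.exp(-a * u) / (1.0 - math.exp(-a * T0))
print(s0 * phi(u, a, W_small), dirac)    # agree closely
```

The duty-cycle identity is exact, so the midpoint-rule average converges rapidly despite the kink of φ at u = W.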
In the case when one wants to investigate the number of particles in a subcritical system driven with a periodic pulse train of constant pulse width whose pulses are not synchronised to the period points, rather they occur at random time points ('random injection' or 'stochastic pulsing'), it is practical to define a virtual injection process in which the injection time intervals start at a random time distance from the period points, the realisations of which lie in the interval [0, T_0]. Let P{θ ≤ x} = p_θ(x) denote the probability that the random time distance θ is not larger than x, where x ∈ [0, T_0]. In the simplest case, it can be supposed that θ has a uniform distribution in the interval [0, T_0]. If x is a fixed realisation of θ, then the source intensity is given by

$$s(t|x) = s_0\sum_{n=-\infty}^{+\infty}[\Delta(t - nT_0 - x) - \Delta(t - W - nT_0 - x)].$$

Let N(t) denote the number of particles at time t ≥ 0. Obviously,

$$P\{N(t) = N|n(0) = 0, \theta = x\} = P(N, t|x) \tag{3.65}$$

is the probability that in a multiplying system with injection, there are N particles at time t ≥ 0, provided that the injection time intervals start at time x after the period points and that at t = 0 there were no particles in the system. Then, according to the theorem of total probability,

$$P(N, t) = \int_0^{T_0} P(N, t|x)\,dp_\theta(x) \tag{3.66}$$

is the probability that in the case of the so-called stochastic injection, there are exactly N particles in the multiplying system at the moment t ≥ 0. The characteristics of the probability P(N, t) will be investigated in Chapter 10 of this book.
Strongly periodic injection

In this case, as mentioned before, the injection events occur exactly at the points t_k = kT_0, k = 1, 2, ..., and every injection event results in the occurrence of a random number of particles in the multiplying system. Here, only the expectation and variance of the periodically stationary particle number will be determined. Based on (3.32), it seems practical to introduce the function⁵

$$H(z, u) = \log G^*(z, u) = \log G^*[g(z, u), T_0] + \log r[g(z, u)]. \tag{3.67}$$

Calculating

$$\left.\frac{\partial H(z, u)}{\partial z}\right|_{z=1} = M_1^*(u) = M_1^*(T_0)e^{-au} + r_1 e^{-au},$$

the relation

$$M_1^*(T_0) = r_1\frac{e^{-aT_0}}{1 - e^{-aT_0}}$$

immediately follows. Using this, one obtains

$$M_1^*(u) = r_1\frac{e^{-au}}{1 - e^{-aT_0}}, \qquad 0 \le u \le T_0, \tag{3.68}$$

which shows how the periodically stationary expectation oscillates. This oscillation is illustrated in Fig. 3.3, which also demonstrates that M_1^*(0) ≠ M_1^*(T_0), as can also be seen from (3.68): the expectation jumps by r_1 at each injection.

⁵ The H(z, u) defined here is not to be confused with the H_n(t, W) of the square pulse train (3.60).
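The fixed point behind (3.68) can be illustrated by iterating the mean recursion that follows from differentiating (3.30) at z = 1, namely M_k(T_0) = [M_{k−1}(T_0) + r_1]e^{−aT_0}. A minimal sketch, with assumed parameter values matching the figure:

```python
import math

a, T0, r1 = 0.1, 1.0, 1.0     # assumed illustrative parameters

# Iterate the mean recursion implied by (3.30)/(3.32):
# M_k(T0) = [M_{k-1}(T0) + r1] * exp(-a*T0), starting from an empty system.
M = 0.0
for _ in range(200):
    M = (M + r1) * math.exp(-a * T0)

closed_form = r1 * math.exp(-a * T0) / (1.0 - math.exp(-a * T0))  # (3.68) at u = T0
print(M, closed_form)          # both ~9.508
```

The iteration converges geometrically (contraction factor e^{−aT_0}) to the periodically stationary value, consistent with the 9.5–10.5 band shown in Fig. 3.3.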
Figure 3.3 Time-dependence of the expectation of the periodically stationary particle number in a pulsed subcritical system (T_0 = 1, a = 0.1, r_1 = 1).
The variance is calculated from

$$\left.\frac{\partial^2 H(z, u)}{\partial z^2}\right|_{z=1} = \left.\frac{\partial^2 \log G^*[g(z, u), T_0]}{\partial z^2}\right|_{z=1} + \left.\frac{\partial^2 \log r[g(z, u)]}{\partial z^2}\right|_{z=1},$$

since

$$D^2\{N^*(u)\} = V^*(u) = \left.\frac{\partial^2 H(z, u)}{\partial z^2}\right|_{z=1} + M_1^*(u). \tag{3.69}$$

It can easily be seen that

$$\left.\frac{\partial^2 H(z, u)}{\partial z^2}\right|_{z=1} = M_2^*(u) - [M_1^*(u)]^2 = [M_1^*(T_0) + r_1]m_2(u) + \{M_2^*(T_0) - [M_1^*(T_0)]^2 + r_2 - r_1^2\}m_1^2(u), \tag{3.70}$$

where

$$m_1(u) = e^{-au} \quad \text{and} \quad m_2(u) = \frac{Q}{a}q_1^2 D_\nu\,e^{-au}(1 - e^{-au}).$$

Substituting T_0 for u in (3.70) yields

$$M_2^*(T_0) = [M_1^*(T_0)]^2 + M_1^*(T_0)\left[\frac{Q}{a}q_1^2 D_\nu\frac{1}{1+e^{-aT_0}} + r_1(D_q - 1)\frac{e^{-aT_0}}{1+e^{-aT_0}}\right]. \tag{3.71}$$

From this and using (3.69), the variance is obtained as

$$V^*(u) = M_1^*(u)\left[1 + \frac{Q}{a}q_1^2 D_\nu\left(1 - \frac{e^{-au}}{1+e^{-aT_0}}\right)\right] + M_1^*(u)\,r_1(D_q - 1)\frac{e^{-au}}{1+e^{-aT_0}}. \tag{3.72}$$

It is remarkable that if D_q = 1, i.e. if the number of particles injected at the period points has a Poisson distribution, then the contribution of the fluctuation of the injected particle number to the variance is zero. Figure 3.4 shows a section of the oscillation of the variance of the periodically stationary particle number. With the parameters used in the calculations, the amplitude of the oscillation is relatively small compared to the variance.

Investigate now what result is obtained if the intensity function s(t) is taken to be equal to the Dirac-comb or Dirac-pulse train. If the first injection occurs at the moment t_1 = T_0, then, by using (3.20), one obtains

$$\log G(z, t|t_0 = 0) = \int_0^t\sum_{n=1}^{\infty}\delta(t' - nT_0)\{r[g(z, t-t')] - 1\}dt', \tag{3.73}$$
Figure 3.4 Time-dependence of the variance of the periodically stationary particle number in a pulsed subcritical system (T_0 = 1, a = 0.1, r_1 = 1, D_ν = 2, D_q = 1.2).
from which the expectation is equal to

$$M_1(t) = r_1\sum_{n=1}^{\infty}\int_0^t\delta(t' - nT_0)e^{\alpha(t-t')}dt'. \tag{3.74}$$

If k = [t/T_0] is the largest integer which does not exceed t/T_0 and 0 ≤ u < T_0, then

$$M_1(t) = M_1(kT_0 + u) = r_1 e^{-a(kT_0+u)}\sum_{n=1}^{k}e^{naT_0} = r_1\frac{e^{-au}}{1 - e^{-aT_0}}(1 - e^{-kaT_0}). \tag{3.75}$$

One notes that the limit value lim_{k→∞} M_1(kT_0 + u) = M_1^*(u) exists and is equal to

$$M_1^*(u) = r_1\frac{e^{-au}}{1 - e^{-aT_0}}, \qquad 0 \le u \le T_0. \tag{3.76}$$

This expectation is identical with the expectation deduced from the correct solution in (3.68). Such an agreement will not be obtained for the variance, as will be shown immediately. The variance is calculated again from the integral

$$\left.\frac{\partial^2 \log G(z, t|t_0 = 0)}{\partial z^2}\right|_{z=1} = \int_0^t\sum_{n=1}^{\infty}\delta(t' - nT_0)\left.\frac{\partial^2 r[g(z, t-t')]}{\partial z^2}\right|_{z=1}dt' = M_2(t) - [M_1(t)]^2.$$

Elementary operations lead to

$$\left.\frac{\partial^2 r[g(z, t-t')]}{\partial z^2}\right|_{z=1} = r_2[m_1(1, t-t')]^2 + r_1 m_2(1, t-t'),$$

where

$$m_1(1, t-t') = e^{-a(t-t')} \quad \text{and} \quad m_2(1, t-t') = \frac{Q}{a}q_1^2 D_\nu\,e^{-a(t-t')}(1 - e^{-a(t-t')}).$$

Again, by utilising the notation k = [t/T_0] and by performing the integration, one has

$$M_2(kT_0+u) - [M_1(kT_0+u)]^2 = r_1\frac{Q}{a}q_1^2 D_\nu\frac{e^{-au}}{1-e^{-aT_0}}(1 - e^{-kaT_0}) - r_1\left[\frac{Q}{a}q_1^2 D_\nu - r_1 D_q\right]\frac{e^{-2au}}{1-e^{-2aT_0}}(1 - e^{-2kaT_0}), \tag{3.77}$$

from which for k → ∞ the asymptotic formula

$$M_2^*(u) - [M_1^*(u)]^2 = r_1\frac{e^{-au}}{1-e^{-aT_0}}\left[\frac{Q}{a}q_1^2 D_\nu\left(1 - \frac{e^{-au}}{1+e^{-aT_0}}\right) + r_1 D_q\frac{e^{-au}}{1+e^{-aT_0}}\right] \tag{3.78}$$

is obtained. From here the variance is obtained as

$$D^2\{N^*(u)\} = M_1^*(u)\left[1 + \frac{Q}{a}q_1^2 D_\nu\left(1 - \frac{e^{-au}}{1+e^{-aT_0}}\right) + r_1 D_q\frac{e^{-au}}{1+e^{-aT_0}}\right], \qquad 0 \le u \le T_0. \tag{3.79}$$

Comparing this with (3.72) shows that the component corresponding to the injection differs from that in the correct formula. The contribution corresponding to the injection in the Dirac-train is larger than in (3.72). On the other hand, as was mentioned before, the treatment of the pulses with a Dirac-train intensity function (3.79) gives the same result as the one obtained in the limit of decreasing the width of finite pulses to zero while increasing the intensity to infinity, expression (3.64).
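The discrepancy between the correct variance (3.72) and the Dirac-train result (3.79) can be made explicit numerically: the two expressions differ exactly by M_1^*(u)·r_1·e^{−au}/(1 + e^{−aT_0}). A sketch, with assumed parameter values chosen to mimic Fig. 3.4 (the tie q_1 = 1 − a/Q is an illustrative assumption):

```python
import math

a, T0, r1, Q = 0.1, 1.0, 1.0, 1.0   # assumed parameters
q1 = 1.0 - a / Q                    # so that a = Q*(1 - q1)
Dnu, Dq = 2.0, 1.2

def M1_star(u):
    return r1 * math.exp(-a * u) / (1.0 - math.exp(-a * T0))   # (3.68)

def V_correct(u):
    """Variance (3.72) for strictly periodic injection."""
    w = math.exp(-a * u) / (1.0 + math.exp(-a * T0))
    return M1_star(u) * (1.0 + (Q / a) * q1**2 * Dnu * (1.0 - w)
                         + r1 * (Dq - 1.0) * w)

def V_dirac(u):
    """Variance (3.79) obtained from the Dirac-comb intensity function."""
    w = math.exp(-a * u) / (1.0 + math.exp(-a * T0))
    return M1_star(u) * (1.0 + (Q / a) * q1**2 * Dnu * (1.0 - w)
                         + r1 * Dq * w)

u = 0.3
print(V_dirac(u) - V_correct(u))    # positive: the Dirac-train overestimates
```

With these parameters V_correct(0) ≈ 92.5, in the 91–93 band of Fig. 3.4, and the Dirac-train value exceeds it by the injection term noted above.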
3.3 Limit Probabilities

In many cases, it is essential to know how the distribution P{N(t) = n} = P(n, t) of the number of particles N(t) generated in the multiplying system behaves for t → ∞. To this end, it is practical first to define the notion of the asymptotically stationary random process.

Definition 1. The random process N(t) is called asymptotically stationary if the limit relation

$$\lim_{t\to\infty} P\{N(t) = N, N(t+u_1) = N_1, \ldots, N(t+u_k) = N_k\} = P\{N^* = N, N^*(u_1) = N_1, \ldots, N^*(u_k) = N_k\}, \qquad k = 1, 2, \ldots \tag{3.80}$$

exists. In the simplest case, asymptotic stationarity means the existence of the limit probability

$$\lim_{t\to\infty} P\{N(t) = N\} = P\{N^* = N\}. \tag{3.81}$$

In the following, as long as it does not lead to misunderstanding, the asymptotically stationary random process will simply be called stationary. For simplicity, in the following the considerations will be restricted to the case when the particle injection obeys a homogeneous Poisson process, i.e. when s(t) = s_0 and r(z) = z.
3.3.1 Subcritical process When the medium is subcritical, i.e. if α = −a < 0, the following important statement holds.
Theorem 20. The limit values

$$\lim_{t\to\infty} P(n, t) = P_n^*, \qquad n = 0, 1, \ldots \tag{3.82}$$

do exist, and the generating function

$$G^*(z) = \sum_{n=0}^{\infty} P_n^* z^n, \qquad |z| \le 1, \tag{3.83}$$

can be given by the formula

$$G^*(z) = \exp\left\{\frac{s_0}{Q}\int_z^1\frac{s-1}{q(s)-s}\,ds\right\}. \tag{3.84}$$

Proof. First one has to show that the limit probability

$$\lim_{t\to\infty} G(z, t) = G^*(z) = \exp\left\{s_0\int_0^{\infty}[g(z, t) - 1]dt\right\}$$

exists if α = −a < 0. For this it suffices to show that the improper integral

$$\int_0^{\infty}[g(z, t) - 1]dt$$

is finite for all values |z| ≤ 1. In Section A.3, it is proved that if |z_1| ≤ 1 and |z_2| ≤ 1, then for the probability generating function

$$g(z, t) = \sum_{n=0}^{\infty} p_n(t)z^n, \qquad |z| \le 1,$$

the inequality

$$|g(z_1, t) - g(z_2, t)| \le |z_1 - z_2|\,g'(1, t)$$

holds. If z_1 = z and z_2 = 1, then this inequality takes the form

$$|g(z, t) - 1| \le |z - 1|\,g'(1, t),$$

where

$$g'(1, t) = e^{-at} \quad \text{and} \quad a = Q(1 - q_1) > 0.$$

Based on the above,

$$\lim_{T\to\infty}\left|\int_0^T[g(z, t) - 1]dt\right| \le |z - 1|\lim_{T\to\infty}\int_0^T e^{-at}dt.$$

Thus,

$$\left|\int_0^{\infty}[g(z, t) - 1]dt\right| \le \frac{|z - 1|}{a},$$

hence the existence of the limit probability is proved. It remains to prove (3.84). This can be achieved as follows. Define the function h(z) by the equation

$$s_0\int_0^{\infty}[g(z, t) - 1]dt = \int_z^1 h(s)ds.$$

From this, it immediately follows that

$$h(z) = -s_0\int_0^{\infty}\frac{\partial g(z, t)}{\partial z}dt. \tag{3.85}$$
By utilising (1.27),

$$\frac{\partial g(z, t)}{\partial t} = Q[q(z) - z]\frac{\partial g(z, t)}{\partial z},$$

from (3.85) one obtains

$$h(z) = -\frac{s_0}{Q}\,\frac{g(z, \infty) - g(z, 0)}{q(z) - z},$$

and since g(z, 0) = z and g(z, ∞) = 1, finally one arrives at

$$h(z) = \frac{s_0}{Q}\,\frac{z - 1}{q(z) - z},$$

whereby (3.84) is proved.
Quadratic process

In order to illustrate the behaviour of the limit probability P_n^*, select the case of the quadratic generating function, i.e. q(z) = f_0 + f_1 z + f_2 z^2 with q(1) = 1. Considering (1.86), one obtains

$$\int_z^1\frac{s-1}{q(s)-s}\,ds = \frac{2}{q_2}\int_z^1\frac{ds}{s-s_2} = \frac{2}{q_2}\log\frac{1-s_2}{z-s_2},$$

where

$$s_2 = 1 + 2\frac{1-q_1}{q_2} \quad \text{and} \quad q_2 = q_1^2 D_\nu.$$

Based on this,

$$\log G^*(z) = -\frac{2s_0}{Qq_2}\log\left[1 + (1-z)\frac{q_2}{2(1-q_1)}\right],$$

and hence

$$G^*(z) = \left[1 + (1-z)\frac{q_2}{2(1-q_1)}\right]^{-2s_0/(Qq_2)}.$$

It is seen that G^*(1) = 1, and moreover that

$$\left.\frac{dG^*(z)}{dz}\right|_{z=1} = M_1^* = \frac{s_0}{a}, \quad \text{where } a = Q(1-q_1) > 0.$$

For the variance D^2{N^*}, after a short calculation, one obtains

$$D^2\{N^*\} = \left.\frac{d^2G^*(z)}{dz^2}\right|_{z=1} + M_1^* - [M_1^*]^2 = \frac{s_0}{a}\left[1 + \frac{q_2}{2(1-q_1)}\right],$$

which is identical with (3.51), as expected. Note that the deviation from the variance characteristic of the Poisson distribution is a consequence of the fact that q_2 ≠ 0. Finally, determine the probability P_0^* that in a stationary multiplying system with injection, at an arbitrary time instant the number of particles is zero. By making use of the equation G^*(0) = P_0^*, one has

$$P_0^* = \left[1 + \frac{q_2}{2(1-q_1)}\right]^{-2s_0/(Qq_2)} = e^{-s_0 t_a},$$

where

$$t_a = \frac{2}{Qq_2}\log\left[1 + \frac{q_2}{2(1-q_1)}\right].$$

As one could expect, the probability P_0^* decreases exponentially with increasing source intensity.
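The closed-form limit generating function for the quadratic model is easy to exercise numerically. The sketch below (the values of s_0, Q, q_1 and D_ν are illustrative assumptions) checks the normalisation G^*(1) = 1, the mean s_0/a via a numerical derivative, and the identity P_0^* = e^{−s_0 t_a}:

```python
import math

s0, Q = 1.0, 1.0
q1, Dnu = 0.9, 2.0           # assumed subcritical quadratic model
q2 = q1**2 * Dnu
a = Q * (1.0 - q1)

def G_star(z):
    """Limit generating function for the quadratic q(z)."""
    return (1.0 + q2 * (1.0 - z) / (2.0 * (1.0 - q1))) ** (-2.0 * s0 / (Q * q2))

h = 1e-6
mean = (G_star(1.0) - G_star(1.0 - h)) / h    # numerical dG*/dz at z = 1
t_a = (2.0 / (Q * q2)) * math.log(1.0 + q2 / (2.0 * (1.0 - q1)))

print(G_star(1.0))                        # 1.0
print(mean, s0 / a)                       # both ~10
print(G_star(0.0), math.exp(-s0 * t_a))   # identical: P0* = exp(-s0*ta)
```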
3.3.2 Critical process

Investigate now the asymptotic behaviour of the process in a critical medium. Since both the expectation and the variance of N(t) tend to infinity if t → ∞, one obviously has to choose a random process for this analysis which is linear in N(t) and has a limit probability distribution function if t → ∞. It will be shown that the random process

$$X(t) = \frac{2N(t)}{Qq_2 t}$$

is suitable for the analysis of the asymptotic characteristics of N(t).

Theorem 21. In a critical medium, the distribution function P{X(t) ≤ x} = U(x, t) possesses a limit distribution

$$\lim_{t\to\infty} U(x, t) = U^*(x),$$

which is given by the gamma distribution defined by the formula

$$U^*(x) = \frac{1}{\Gamma(c)}\int_0^x y^{c-1}e^{-y}dy, \qquad x \ge 0, \tag{3.86}$$

in which

$$c = \frac{2s_0}{Qq_2}.$$

Since the characteristic function of U^*(x) is given as

$$\int_{-\infty}^{+\infty}e^{i\omega x}dU^*(x) = \frac{1}{\Gamma(c)}\int_0^{+\infty}x^{c-1}e^{-x(1-i\omega)}dx = \frac{1}{(1-i\omega)^c}, \tag{3.87}$$

it only needs to be shown that for the characteristic function of U(x, t),

$$E\{e^{i\omega X(t)}\} = \sum_{n=0}^{\infty}\exp\left\{i\omega\frac{2n}{Qq_2 t}\right\}P(n, t) = G\left(\exp\left\{\frac{2i\omega}{Qq_2 t}\right\}, t\right), \tag{3.88}$$

the limit relation

$$\lim_{t\to\infty}G\left(\exp\left\{\frac{2i\omega}{Qq_2 t}\right\}, t\right) = (1-i\omega)^{-2s_0/(Qq_2)}$$

holds. In other words, it is to be proved that

$$\lim_{t\to\infty}s_0\int_0^t[g(s, u) - 1]du = -\frac{2s_0}{Qq_2}\log(1-i\omega), \tag{3.89}$$

where in the function g(s, u),

$$s = \exp\left\{\frac{2i\omega}{Qq_2 t}\right\}.$$
Proof. For the proof, (2.54) will be used in the following form:

$$1 - g(s, u) = \frac{1-s}{1 + \frac{1}{2}Qq_2 u(1-s)}[1 + \epsilon(s, u)], \tag{3.90}$$

where for all |s| ≤ 1, ε(s, u) converges uniformly to zero if u → ∞. Let T < t, and for proving (3.89), perform the partitioning

$$s_0\int_0^t[g(s, u) - 1]du = I_1(t) + I_2(t) + I_3(t),$$

where

$$I_1(t) = s_0\int_0^T[g(s, u) - 1]du, \qquad I_2(t) = -s_0\int_T^t\frac{1-s}{1 + \frac{1}{2}Qq_2 u(1-s)}du$$

and

$$I_3(t) = -s_0\int_T^t\left[1 - g(s, u) - \frac{1-s}{1 + \frac{1}{2}Qq_2 u(1-s)}\right]du.$$

By utilising the already mentioned inequality |1 − g(s, u)| ≤ |s − 1|g'(1, u), proved in Section A.3, in which now g'(1, u) = 1 since α = a = 0, and further by considering the well-known inequality |e^{iϕ} − 1| ≤ |ϕ|, in which ϕ is real, one arrives at

$$|I_1(t)| \le s_0\int_0^T\left|\exp\left\{\frac{2i\omega}{Qq_2 t}\right\} - 1\right|du \le s_0 T\frac{2|\omega|}{Qq_2 t},$$

which shows that if t → ∞, then I_1(t) → 0. By performing the integration in I_2(t), one obtains

$$I_2(t) = -\frac{2s_0}{Qq_2}\log\left[1 + \frac{1}{2}Qq_2 t(1-s)\right] + \frac{2s_0}{Qq_2}\log\left[1 + \frac{1}{2}Qq_2 T(1-s)\right].$$

Since

$$\lim_{t\to\infty}t(1-s) = \lim_{t\to\infty}t\left(1 - \exp\left\{\frac{2i\omega}{Qq_2 t}\right\}\right) = -\frac{2i\omega}{Qq_2},$$

it is seen that

$$\lim_{t\to\infty}I_2(t) = -\frac{2s_0}{Qq_2}\log(1-i\omega).$$

I_3(t) can be rewritten in the following form:

$$I_3(t) = -s_0\int_T^t\frac{(1-s)\,\epsilon(s, u)}{1 + \frac{1}{2}Qq_2 u(1-s)}du,$$

and from this one has

$$|I_3(t)| \le s_0\int_T^t\frac{|\epsilon(s, u)|}{\left|1/(1-s) + \frac{1}{2}Qq_2 u\right|}du.$$

By taking into account the inequality

$$\Re\{(1 - e^{i\varphi})^{-1}\} \ge 0,$$

in which ϕ is real, one has

$$\Re\left\{\frac{1}{1-s}\right\} = \Re\left\{\left(1 - \exp\left\{\frac{2i\omega}{Qq_2 t}\right\}\right)^{-1}\right\} \ge 0.$$

By neglecting the term q_2Qu/2 > 0 in the denominator, this leads to

$$|I_3(t)| \le s_0\int_T^t\left|1 - \exp\left\{\frac{2i\omega}{Qq_2 t}\right\}\right||\epsilon(s, u)|du.$$

Again, by applying the inequality |1 − e^{iϕ}| ≤ |ϕ|, one can write that

$$|I_3(t)| \le s_0\frac{2|\omega|}{Qq_2}\frac{t-T}{t}\max_{T\le u\le t}|\epsilon(s, u)|,$$

where s invariably denotes exp{2iω/(Qq_2 t)}. Based on this, it is seen that if t ≥ T → ∞, then I_3(t) → 0, hence the statement in (3.89) is proved.
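The gamma limit law (3.86) can be exercised numerically. The sketch below (parameter values are illustrative assumptions) integrates the gamma density with shape c = 2s_0/(Qq_2) and confirms that it is normalised with mean c, i.e. E{X(t)} → c, so that E{N(t)} ~ cQq_2t/2 = s_0t, consistent with (3.40):

```python
import math

s0, Q, q2 = 1.0, 1.0, 1.6     # assumed critical-system parameters (q1 = 1)
c = 2.0 * s0 / (Q * q2)       # shape parameter of the limit law (3.86): c = 1.25

# Check by midpoint-rule integration that the gamma density integrates
# to 1 and has mean c.
n, xmax = 200000, 60.0
h = xmax / n
norm = mean = 0.0
for i in range(n):
    x = (i + 0.5) * h
    w = x ** (c - 1.0) * math.exp(-x) / math.gamma(c) * h
    norm += w
    mean += x * w
print(norm, mean)   # ~1.0 and ~1.25
```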
3.3.3 Supercritical process

The number of particles in a supercritical medium with injection tends to infinity with probability 1 with increasing time. Therefore, for the analysis of the asymptotic behaviour of the process, instead of N(t), one has to introduce a normalised random process. It will be shown that the random process

$$R(t) = \frac{N(t)}{E\{n(t)\}} = e^{-\alpha t}N(t), \tag{3.91}$$

in which α = Q(q_1 − 1) > 0, is properly normalised and is suitable for such investigations. First it will be proved that R(t) converges to a random variable R^* in the quadratic mean if t → ∞, i.e. the equality

$$\lim_{t\to\infty}E\{[R(t) - R^*]^2\} = 0 \tag{3.92}$$

holds.

Proof. It suffices to show that for every h ≥ 0, if t → ∞, then E{[R(t+h) − R(t)]²} → 0, and moreover uniformly. For this only the trivial identity

$$R(t+h) - R(t) = [R(t+h) - E\{R(t+h)\}] - [R(t) - E\{R(t)\}] + [E\{R(t+h)\} - E\{R(t)\}]$$

is needed, from which one obtains that

$$E\{[R(t+h) - R(t)]^2\} = D^2\{R(t+h)\} + D^2\{R(t)\} - 2R_{R,R}(t, t+h) + [E\{R(t+h)\} - E\{R(t)\}]^2. \tag{3.93}$$

Since D²{R(t)} = e^{−2αt}D²{N(t)}, by accounting for the formula for D²{N(t)} given by (3.50), one arrives at

$$\lim_{t\to\infty}D^2\{R(t)\} = \frac{s_0 Qq_2}{2\alpha^2}.$$

Accounting for the equality

$$\lim_{t\to\infty}D^2\{R(t+h)\} = \lim_{t\to\infty}D^2\{R(t)\},$$

the limiting value of the sum of the first two terms on the right-hand side of equation (3.93) is equal to

$$\frac{s_0 Qq_2}{\alpha^2}.$$

By using expression (3.47) for the autocovariance R_{N,N}(t, t+h), one can write

$$R_{R,R}(t, t+h) = e^{-\alpha(2t+h)}R_{N,N}(t, t+h) = e^{-2\alpha t}D^2\{N(t)\},$$

which leads to

$$-2\lim_{t\to\infty}R_{R,R}(t, t+h) = -\frac{s_0 Qq_2}{\alpha^2}.$$

That is, the asymptotic value of the sum of the first three terms on the right-hand side of (3.93) is zero. It remains to show that the asymptotic value of the fourth term is also zero. It is obvious that

$$E\{R(t+h)\} - E\{R(t)\} = e^{-\alpha(t+h)}E\{N(t+h)\} - e^{-\alpha t}E\{N(t)\} = \frac{s_0}{\alpha}e^{-\alpha t}(1 - e^{-\alpha h}),$$

and from this it follows that

$$\lim_{t\to\infty}[E\{R(t+h)\} - E\{R(t)\}]^2 = 0.$$

Hence the statement (3.92), namely that there exists a random variable R^* to which the random process R(t) converges in quadratic mean if t → ∞, is proved. Next the following important theorem will be proved.

Theorem 22. The distribution function P{R(t) ≤ x} = V(x, t) possesses the limit distribution

$$\lim_{t\to\infty}V(x, t) = V^*(x), \tag{3.94}$$
whose characteristic function (its Laplace–Stieltjes transform)

$$\Phi^*(\omega) = \int_0^{\infty}e^{-\omega x}dV^*(x) \tag{3.95}$$

is given by the expression

$$\Phi^*(\omega) = \exp\left\{\frac{s_0}{\alpha}\int_0^{\omega}\frac{\varphi(u) - 1}{u}du\right\}, \tag{3.96}$$

in which φ(u) satisfies the equation

$$\frac{d\varphi(u)}{du} = \frac{s[\varphi(u)]}{u\alpha}, \tag{3.97}$$

with the remark that s[φ(u)] = Q{q[φ(u)] − φ(u)} and φ(0) = 1.

Proof. Based on the limit relation

$$\lim_{t\to\infty}P\{R(t) \le x\} = P\{R^* \le x\},$$

which follows from the foregoing, one can claim that the characteristic function

$$E\{e^{-\omega R(t)}\} = \sum_{n=0}^{\infty}[\exp\{-\omega e^{-\alpha t}\}]^n P(n, t) = \Phi(\omega, t) \tag{3.98}$$

of the random process R(t), in the case of t → ∞, converges to the characteristic function

$$E\{e^{-\omega R^*}\} = \Phi^*(\omega). \tag{3.99}$$

From (3.98) it follows that

$$\Phi(\omega, t) = G(\exp\{-\omega e^{-\alpha t}\}, t). \tag{3.100}$$

Introduce the notation s_t = exp{−ωe^{−αt}}. Since

$$G(s_t, t) = \exp\left\{s_0\int_0^t[g(s_t, u) - 1]du\right\},$$

write

$$G(s_{t+\tau}, t+\tau) = G(s_{t+\tau}, t)\exp\left\{s_0\int_t^{t+\tau}[g(s_{t+\tau}, u) - 1]du\right\}.$$

From this, by substituting u = v + t, one arrives at the equation

$$G(s_{t+\tau}, t+\tau) = G(s_{t+\tau}, t)\exp\left\{s_0\int_0^{\tau}[g(s_{t+\tau}, v+t) - 1]dv\right\}. \tag{3.101}$$

Applying the basic relation (1.23) yields

$$g(s_{t+\tau}, v+t) = g[g(s_{t+\tau}, t), v].$$

Since g(exp{−ωe^{−αt}}, t) = φ(ω, t), this implies that

$$g(s_{t+\tau}, t) = g(\exp\{-\omega e^{-\alpha\tau}e^{-\alpha t}\}, t) = \varphi(\omega e^{-\alpha\tau}, t).$$

By performing the limit transition t → ∞, it follows from (3.101) that

$$\Phi^*(\omega) = \Phi^*(\omega e^{-\alpha\tau})\exp\left\{s_0\int_0^{\tau}\{g[\varphi(\omega e^{-\alpha\tau}), v] - 1\}dv\right\}. \tag{3.102}$$

If v ≤ τ ↓ 0 then, by taking into account (1.31), the following relationship is obtained:

$$g[\varphi(\omega e^{-\alpha\tau}), v] = \varphi(\omega e^{-\alpha\tau}) + v\,s[\varphi(\omega e^{-\alpha\tau})] + o(v).$$

By using this in (3.102), we find that

$$\Phi^*(\omega) = \Phi^*(\omega e^{-\alpha\tau})\exp\{s_0[\varphi(\omega e^{-\alpha\tau}) - 1]\tau + o(\tau)\} = \Phi^*(\omega e^{-\alpha\tau})\{1 + s_0[\varphi(\omega e^{-\alpha\tau}) - 1]\tau + o(\tau)\}.$$

Rearrangement and dividing by ω(1 − e^{−ατ}) yields

$$\frac{\Phi^*(\omega) - \Phi^*(\omega e^{-\alpha\tau})}{\omega(1 - e^{-\alpha\tau})} = s_0\Phi^*(\omega e^{-\alpha\tau})\frac{[\varphi(\omega e^{-\alpha\tau}) - 1]\tau}{\omega(1 - e^{-\alpha\tau})} + \Phi^*(\omega e^{-\alpha\tau})\frac{o(\tau)}{\omega(1 - e^{-\alpha\tau})},$$

from which, with the limit transition τ ↓ 0, the differential equation

$$\frac{1}{\Phi^*(\omega)}\frac{d\Phi^*(\omega)}{d\omega} = \frac{s_0}{\alpha}\frac{\varphi(\omega) - 1}{\omega}$$

is obtained, whose integrated form is identical with (3.96). This completes the proof of the theorem.

Calculate again the limit distribution function V^*(x) by using the quadratic basic generating function. By applying (2.66), after elementary calculations one gets

$$\Phi^*(\omega) = \left(\frac{1-p}{\omega + 1 - p}\right)^{\sigma},$$

where

$$\sigma = \frac{s_0(1-p)}{\alpha} = \frac{2s_0}{Qq_2} \quad \text{and} \quad p = 1 - 2\frac{q_1 - 1}{q_2}.$$

From this, performing the inverse Laplace transform leads to

$$V^*(x) = \frac{(1-p)^{\sigma}}{\Gamma(\sigma)}\int_0^x y^{\sigma-1}e^{-(1-p)y}dy,$$

i.e. the normalised particle number R(t) in this case also follows a gamma distribution.
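The consistency of the supercritical gamma law can be checked directly: the gamma distribution with shape σ and rate (1 − p) has mean σ/(1 − p), which must reproduce lim E{R(t)} = s_0/α. A sketch with assumed illustrative parameter values:

```python
s0, Q = 1.0, 1.0
q1, q2 = 1.1, 2.42           # assumed supercritical quadratic model
alpha = Q * (q1 - 1.0)       # > 0

p = 1.0 - 2.0 * (q1 - 1.0) / q2
sigma = s0 * (1.0 - p) / alpha

# sigma also equals 2*s0/(Q*q2), as stated in the text
print(sigma, 2.0 * s0 / (Q * q2))

# mean of the limit gamma law = sigma/(1-p) = s0/alpha = lim E{R(t)}
print(sigma / (1.0 - p), s0 / alpha)
```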
3.4 Probability of the Particle Number in a Nearly Critical System

It seems instructive to investigate the distribution function of the number of particles in stationary subcritical systems with injection whose state is very close to the critical one. These systems are called almost-critical systems. We have already proved that, in critical systems with injection, the normalised particle number follows a gamma distribution. Now, following the ideas of Harris [13], it will be shown that in almost-critical systems, the distribution of the particle number can be well approximated with a gamma distribution if t → ∞. It will be found that the limit probability defined in (3.82) can be approximated by the formula

$$P_n^* \approx \frac{\gamma[c, (n+1)d] - \gamma[c, nd]}{\Gamma(c)}, \tag{3.103}$$

in which

$$\gamma[c, jd] = \int_0^{jd}x^{c-1}e^{-x}dx, \qquad j = 0, 1, \ldots,$$

and where

$$c = \frac{2s_0}{a + Qq_2} \quad \text{and} \quad d = \frac{2a}{a + Qq_2}. \tag{3.104}$$
3.4.1 Preparations

For the proof of (3.103), it is advantageous to use the forward Kolmogorov equation which, in the case of injection according to a homogeneous Poisson process, can be derived as follows. Suppose that at time t = 0, there are no particles in the system; on the other hand, let s_0Δt + o(Δt) be the probability that one particle enters the system during the interval [t, t + Δt]. Following the method used when deriving Theorem 2, for the probability

$$P\{N(t) = n|n(0) = 0\} = P(n, t), \tag{3.105}$$

one can write

$$P(n, t+\Delta t) = P(n, t)(1 - nQ\Delta t - s_0\Delta t) + Q\Delta t\sum_{k=0}^{\infty}(n-k+1)f_k P(n-k+1, t) + s_0\Delta t\,P(n-1, t) + o(\Delta t),$$

from which the equation

$$\frac{dP(n, t)}{dt} = -(nQ + s_0)P(n, t) + Q\sum_{k=0}^{\infty}(n-k+1)f_k P(n-k+1, t) + s_0P(n-1, t)$$

follows, appended with the initial condition P(n, 0) = δ_{n0}. Introducing the exponential generating function

$$G_{\exp}(z, t) = \sum_{n=0}^{\infty}P(n, t)e^{nz}, \qquad |e^z| \le 1,$$

and taking its logarithm

$$\Psi(z, t) = \log G_{\exp}(z, t), \tag{3.106}$$

one arrives at the equation

$$\frac{\partial\Psi(z, t)}{\partial t} = Q[q(e^z)e^{-z} - 1]\frac{\partial\Psi(z, t)}{\partial z} + s_0(e^z - 1) \tag{3.107}$$

with the initial condition Ψ(z, 0) = 0, since G_exp(z, 0) = 1.
3.4.2 Equations of semi-invariants

If the semi-invariants

K_j(t) = \left.\frac{\partial^j \Phi(z,t)}{\partial z^j}\right|_{z=0}, \qquad j = 1, 2, \ldots, \qquad (3.108)

exist, then one can write

\Phi(z,t) = \sum_{j=1}^{\infty} K_j(t)\,\frac{z^j}{j!}.

By virtue of this and the considerations in Section 1.3.3, from (3.107) one obtains

\frac{dK_j(t)}{dt} = \sum_{i=1}^{j} \binom{j}{j-i+1} R_{j-i+1} K_i(t) + s_0, \qquad j = 1, 2, \ldots, \qquad (3.109)

in which R_j = Q E\{(\nu-1)^j\}, and the initial conditions are given by the relations K_j(0) = 0, j = 1, 2, \ldots. Observe that

R_1 = Q E\{\nu - 1\} = Q(q_1 - 1) = \alpha, \qquad (3.110)

and

R_2 = Q E\{(\nu-1)^2\} = Q E\{\nu(\nu-1) - (\nu-1)\} = Q(q_2 - q_1 + 1). \qquad (3.111)
If the system is in a subcritical state very close to the critical one, i.e. if R_1 < 0 but -R_1 = a \ll 1, then, as already mentioned, the limit probabilities

\lim_{t\to\infty} P(n,t) = P_n^*, \qquad n = 0, 1, \ldots,

exist, hence the semi-invariants

\lim_{t\to\infty} K_j(t) = K_j^*(a), \qquad j = 1, 2, \ldots, \qquad (3.112)

also exist. Introduce the Laplace transforms

\tilde{K}_j(z) = \int_0^{\infty} e^{-zt} K_j(t)\,dt, \qquad j = 1, 2, \ldots.

From equation (3.109) one obtains

(z - jR_1)\tilde{K}_j(z) = \sum_{i=1}^{j-1} \binom{j}{j-i+1} R_{j-i+1} \tilde{K}_i(z) + \frac{s_0}{z}.

From the Abelian-type theorems [17], using the relationship

\lim_{z\to 0} z\tilde{K}_j(z) = \lim_{t\to\infty} K_j(t) = K_j^*(a),

from the previous equation the recursive equation

K_j^*(a) = \frac{1}{ja}\left[\binom{j}{2} R_2 K_{j-1}^*(a) + \sum_{i=1}^{j-2} \binom{j}{j-i+1} R_{j-i+1} K_i^*(a) + s_0\right] \qquad (3.113)

is obtained. For the further considerations, the following simple theorem is needed.

Theorem 23. If a → 0, then the semi-invariant K_j^*(a) tends to infinity as a^{-j}.

Proof. The proof is based on induction. Let F_j denote the class of functions of a that diverge as a^{-j} when a → 0. It is obvious that if u_r(a) \in F_r and v_s(a) \in F_s, then

\lim_{a\to 0} \frac{u_r(a)}{v_s(a)} = \begin{cases} 0, & \text{if } s > r, \\ \text{const.}, & \text{if } s = r, \\ \infty, & \text{if } s < r. \end{cases} \qquad (3.114)

From (3.113) one has

K_1^*(a) = s_0\,\frac{1}{a}, \qquad K_2^*(a) = s_0\left[\frac{R_2}{2a^2} + \frac{1}{2a}\right], \qquad K_3^*(a) = s_0\left[\frac{R_2^2}{2a^3} + \left(\frac{R_2}{2} + \frac{R_3}{3}\right)\frac{1}{a^2} + \frac{1}{3a}\right],

and so on. It is seen that K_1^*(a) belongs to the function class F_1, K_2^*(a) to F_2 and K_3^*(a) to F_3. Based on this, one can assume that K_{j-1}^*(a) \in F_{j-1}, and in this case it follows from (3.113) that K_j^*(a) \in F_j.
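The recursion (3.113) is straightforward to implement. The sketch below is illustrative (the reaction-number distribution f_k and the parameter values are assumptions of this example, not taken from the text):

```python
import math

def R(j, Q, f):
    """R_j = Q * E{(nu - 1)^j} for a reaction-number distribution f_k."""
    return Q * sum(fk * (k - 1) ** j for k, fk in f.items())

def K_star(jmax, s0, Q, f):
    """Stationary semi-invariants K_j^*(a) from the recursion (3.113);
    a = -R_1 is determined by f and Q (subcritical system assumed)."""
    a = -R(1, Q, f)
    assert a > 0.0, "the system must be subcritical"
    K = {}
    for j in range(1, jmax + 1):
        # sum over i = 1 .. j-1; the term i = j-1 carries binom(j, 2) R_2
        s = s0 + sum(math.comb(j, j - i + 1) * R(j - i + 1, Q, f) * K[i]
                     for i in range(1, j))
        K[j] = s / (j * a)
    return K
```

For a binary reaction law f_0 = 0.55, f_2 = 0.45 with Q = 1 and s_0 = 2 one gets a = 0.1, K_1^* = s_0/a = 20 and K_2^* = s_0[R_2/(2a^2) + 1/(2a)] = 110, in agreement with the explicit formulae above; halving a multiplies K_3^* by almost exactly 2^3, illustrating Theorem 23.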
3.4.3 Determination of the approximate formula

It can immediately be seen that

\lim_{a\to 0} \frac{2a K_j^*(a)}{(j-1) R_2 K_{j-1}^*(a)} = 1, \qquad (3.115)

hence, in a stationary almost-critical system, between the semi-invariants K_{j-1}^*(a) and K_j^*(a) the approximate equality

K_j^*(a) \approx (j-1)\,\frac{R_2}{2a}\, K_{j-1}^*(a)

holds. From this, by iteration, one obtains the relationship

K_j^*(a) \approx \frac{s_0}{a}\,(j-1)!\left(\frac{R_2}{2a}\right)^{j-1}.

Since according to (3.111) R_2 = a + Qq_2, one can write

K_j^*(a) \approx (j-1)!\,\frac{2s_0}{a + Qq_2}\left(\frac{a + Qq_2}{2a}\right)^j. \qquad (3.116)

It is known that the logarithm of the Laplace transform of the gamma distribution function

F(c,d,x) = \frac{1}{\Gamma(c)} \int_0^{dx} y^{c-1} e^{-y}\,dy,

defined by the parameters c > 0 and d > 0, i.e. the logarithm of

\Psi(c,d,z) = \int_0^{\infty} e^{-zx}\,dF(c,d,x) = \left(\frac{d}{z+d}\right)^c,

is nothing else than

\log \Psi(c,d,z) = c \log\frac{d}{z+d},

hence one obtains for the jth semi-invariant

\kappa_j = (-1)^j \left.\frac{d^j \log \Psi(c,d,z)}{dz^j}\right|_{z=0} = c\,(j-1)!\,\frac{1}{d^j}. \qquad (3.117)

Comparing this with (3.116), one can claim that in stationary almost-critical systems the random process N(t) follows, to a good approximation, a gamma distribution whose parameters are determined by the expressions c and d in (3.104). Based on this, in the case of an almost-critical system one can write

\lim_{t\to\infty} P\{N(t) \le n\} = \sum_{k=0}^{n} P_k^* \approx \frac{1}{\Gamma(c)} \int_0^{dn} x^{c-1} e^{-x}\,dx, \qquad (3.118)

hence it follows that

P_n^* \approx \frac{1}{\Gamma(c)} \int_{dn}^{d(n+1)} x^{c-1} e^{-x}\,dx = \frac{\gamma[c,(n+1)d] - \gamma[c,nd]}{\Gamma(c)}, \qquad (3.119)
where

\gamma(c, jd) = \int_0^{jd} x^{c-1} e^{-x}\,dx.

[Figure 3.5: The dependence of the probability P_n^* on the particle number n (horizontal axis: number of particles, roughly 200–900; vertical axis: probability). Parameters: s_0 = 10, Q = 1, q_1 = 0.98, q_2 = 1.3, giving c ≈ 14.93 and d ≈ 0.03.]
With this, (3.103) is verified. It is worth mentioning that a better approximation can be attained if the parameters c and d in expression (3.119) are determined from the equation

\kappa_2 = \frac{c}{d^2} = K_2^*(a) = D^2\{N\} = \frac{s_0}{a}\left(1 + \frac{Qq_2}{2a}\right),

and accordingly the formulae

c = \frac{2s_0}{2a + Qq_2} \qquad \text{and} \qquad d = \frac{2a}{2a + Qq_2} \qquad (3.120)

are used instead of those in (3.104). The dependence of the probability P_n^* on n is illustrated in Fig. 3.5. The parameter values denoted in the figure were calculated from the data shown in the table below.

Data for Fig. 3.5:

  s_0 = 10,  Q = 1,  f_0 = 0.47,  f_1 = 0.23,  f_2 = 0.20,  f_3 = 0.05,  f_4 = 0.05
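The parameter values quoted in Fig. 3.5 can be reproduced from the table data via the improved formulae (3.120); a small sketch:

```python
# Data of Fig. 3.5 and the improved parameters (3.120); all values from the table.
f = {0: 0.47, 1: 0.23, 2: 0.20, 3: 0.05, 4: 0.05}
s0, Q = 10.0, 1.0
q1 = sum(k * fk for k, fk in f.items())              # first factorial moment: 0.98
q2 = sum(k * (k - 1) * fk for k, fk in f.items())    # second factorial moment: 1.3
a = Q * (1.0 - q1)                                   # 0.02
c = 2.0 * s0 / (2.0 * a + Q * q2)                    # ~14.93, as quoted in the figure
d = 2.0 * a / (2.0 * a + Q * q2)                     # ~0.03, as quoted in the figure
```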
CHAPTER FOUR

Special Probabilities

Contents
4.1 Preliminaries 82
4.2 The Probability of the Number of Absorptions 82
4.3 Probability of the Number of Detections 102
4.4 Probability of the Number of Renewals 107
4.5 Probability of the Number of Multiplications 113
4.1 Preliminaries

As the considerations so far have shown, three principal events take place in branching processes, namely the absorption, renewal and multiplication of particles. Let

n_a(t-u,t), \qquad n_b(t-u,t) \qquad \text{and} \qquad n_m(t-u,t)

denote the number of absorptions (a), renewals (b) and multiplications (m) in the interval [t-u, t], 0 \le u \le t. In the special case when t \le u, then n_a(t-u,t) = n_a(t), n_b(t-u,t) = n_b(t) and n_m(t-u,t) = n_m(t) denote the numbers of the corresponding events in the interval [0, t]. For the forthcoming discussion, we will primarily need to know the probabilities

P\{n_a(t-u,t) = n\,|\,n(0) = 1\} = p_a(n,t,u), \qquad (4.1)

P\{n_b(t-u,t) = n\,|\,n(0) = 1\} = p_b(n,t,u), \qquad (4.2)

P\{n_m(t-u,t) = n\,|\,n(0) = 1\} = p_m(n,t,u), \qquad (4.3)

and their properties. By using the equations derived for these probabilities, the probabilities of the numbers of the various events (absorption, renewal and multiplication) occurring in the interval [t-u, t], 0 \le u \le t, will be determined both for single-particle injection and for systems sustained by randomly injected particles.
4.2 The Probability of the Number of Absorptions

In a branching process, absorption occurs if a reaction of a particle results in its disappearance. Suppose that the branching process is homogeneous, and determine first the probability p_a(n,t,u) of the event that the number of particles absorbed in the interval [t-u,t] is exactly n, i.e. n_a(t-u,t) = n, provided that there was one particle in the multiplying system at t = 0. It is obvious that

p_a(n,t,u) = \begin{cases} X_a(n,t), & \text{if } t \le u, \\ Y_a(n,t,u), & \text{if } t \ge u, \end{cases} \qquad (4.4)

since if t \le u then p_a(n,t,u) cannot depend on u. In addition, the equality X_a(n,u) = Y_a(n,u,u) has also to be fulfilled. For determining the generating function

g_a(z,t,u) = \sum_{n=0}^{\infty} p_a(n,t,u)\, z^n \qquad (4.5)
of the probability p_a(n,t,u), let us write down the backward Kolmogorov equation. To this end, start with the integral equation

p_a(n,t,u) = e^{-Qt}\delta_{n0} + Qf_0 \int_0^t e^{-Qt'}\left[\Delta(t-u-t')\delta_{n0} + \Delta(t'+u-t)\delta_{n1}\right]dt' + Q\int_0^t e^{-Qt'} \sum_{k=1}^{\infty} f_k \sum_{n_1+\cdots+n_k=n}\ \prod_{j=1}^{k} p_a(n_j, t-t', u)\,dt',

whose right-hand side is the sum of the probabilities of three mutually exclusive events; here \Delta(x) denotes the unit step function. One immediately realises that this equation can be rearranged into the following form:

p_a(n,t,u) = e^{-Qt}\delta_{n0} + Qf_0\int_0^t e^{-Q(t-t')}\left[\Delta(t'-u)\delta_{n0} + \Delta(u-t')\delta_{n1} - \delta_{n0}\right]dt' + Q\int_0^t e^{-Q(t-t')}\left[f_0\delta_{n0} + \sum_{k=1}^{\infty} f_k \sum_{n_1+\cdots+n_k=n}\ \prod_{j=1}^{k} p_a(n_j,t',u)\right]dt'.

From this, one obtains for the generating function g_a(z,t,u) the integral equation

g_a(z,t,u) = e^{-Qt} + Qf_0\int_0^t e^{-Q(t-t')}\left[\Delta(t'-u) + \Delta(u-t')z - 1\right]dt' + Q\int_0^t e^{-Q(t-t')} q[g_a(z,t',u)]\,dt',

from which, accounting for the fact that \Delta(u-t') = 1 - \Delta(t'-u), by differentiation with respect to t the following differential equation is obtained:

\frac{\partial g_a(z,t,u)}{\partial t} = Qf_0[\Delta(t-u) - 1](1-z) - Q g_a(z,t,u) + Q q[g_a(z,t,u)] \qquad (4.6)

with the initial condition g_a(z,0,u) = 1. From equation (4.4) one has

g_a(z,t,u) = \begin{cases} h_a(z,t), & \text{if } t \le u, \\ k_a(z,t,u), & \text{if } t \ge u, \end{cases} \qquad (4.7)

where

h_a(z,t) = \sum_{n=0}^{\infty} X_a(n,t)\, z^n, \qquad (4.8)
84
Imre Pázsit & Lénárd Pál
and ka (z, t, u) =
∞
Ya (n, t, u)zn .
(4.9)
n=0
Naturally, the differential equations ∂ha (z, t) = −Q f0 (1 − z) − Qha (z, t) + Qq[ha (z, t)], ∂t
t ≤ u,
(4.10)
and ∂ka (z, t, u) = −Qka (z, t, u) + Qq[ka (z, t, u)], ∂t also hold, with the initial conditions
t ≥ u,
(4.11)
ha (z, 0) = 1 and ha (z, u) = ka (z, u, u). Theorem 24. If the solution ha (z, t) of (4.10) is known, then the solution of (4.11) can be given in the following form: ka (z, t, u) = g[ha (z, u), t − u],
(4.12)
where g(z, t) is the solution of (1.29) with the initial condition g(z, 0) = z. Proof. By using the theorem of total probability, for t ≥ u one can write that P{na (t − u, t) = n|n(0) = 1} =
∞
P{na (t − u, t) = n|n(t − u) = k} P{n(t − u) = k|n(0) = 1}.
k=0
Noticing that P{na (t − u, t) = n|n(t − u) = k} =
k
P{na (u) = nj |n(0) = 1},
n1 + ··· +nk =n j=0
i.e. that in the case of a homogeneous process, each of the k particles being present in the multiplying system at time t − u, initiates a branching process independently from the others, and these branching processes together will lead to the generation of n particles at a time period u later, at time t, it immediately follows that ka (z, t, u) =
∞
P{na (t − u, t) = n|n(0) = 1}zn
n=0
=
∞
P{n(t − u) = k|n(0) = 1}[ha (z, u)]k = g[ha (z, u), t − u],
k=0
This completes the proof of statement (4.12) of the theorem. Based on Theorem 24, an expression for (4.7) valid for any arbitrary time t ≥ 0 can be written in the following form: ga (z, t, u) = (u − t)ha (z, t) + (t − u)g[ha (z, u), t − u]. (4.13) The factorial moments of na (t − u, t), t ≥ u, and na (t), t ≤ u, respectively, can be determined either directly from this equation or from the basic differential equations that can be derived from (4.6). If equation (4.13) is used for this purpose, then the derivatives of the function h(z, t) with respect to z can be obtained from (4.10).
85
Special Probabilities
For the case of a multiplying system with injection, suppose that the system does not contain any particles at time t = 0, but during the interval [0, t] particles get into the system according to a Poisson process of intensity s0 . In this case, let Na (t − u, t) denote the number of absorbed particles in the interval [t − u, t]. According to the method discussed in Section 3.2 for the relationship between and the single-particle-induced n the source-induced distributions, for the generating function Ga (z, t, u) = ∞ n=0 Pa (n, t, u)z of the probability P{Na (t − u, t) = n|n(0) = 0} = Pa (n, t, u) one can derive the equation
log Ga (z, t, u) = s0
t
[ga (z, t , u) − 1]dt ,
(4.14)
(4.15)
0
from which the various moments of the number of absorptions occurring in the interval [t − u, t] can be calculated. Theorem 25. If 0 < q1 < 1 and q2 < ∞, i.e. the system is subcritical, then the limit generating function lim Ga (z, t, u) = Ga∗ (z, u) =
t→∞
∞
Wa (n, u) zn
(4.16)
n=0
exists and, accordingly, the limit probability lim Pa (n, t, u) = Wa (n, u)
t→∞
(4.17)
also exists. Hence one has lim Na (t − u, t) = Na∗ (u), dist
t→∞
thus Na (t − u, t) is asymptotically stationary. The limit generating function is given by the formula u 1 t−1 s0 ∗ Ga (z, u) = exp s0 [ha (z, t) − 1]dt exp dt . Q ha (z, u) q(t) − t 0 Proof. For the proof we shall use (4.15), modified by the help of (4.13), in the form u t−u log Ga (z, t, u) = s0 [ha (z, t ) − 1]dt + s0 {g[ha (z, u), t ] − 1}dt . 0
(4.18)
(4.19)
0
It is seen that the condition of existence of the limit probability Ga∗ (z, u) is the existence of the improper integral ∞ I (z, u) = {g[ha (z, u), t] − 1}dt. (4.20) 0
By utilising the inequality proved in Section A.3, one can write |g[ha (z1 , u), t] − g[ha (z2 , u), t]| ≤ |ha (z1 , u) − ha (z2 , u)|g (1, t), since ha (z, u) < 1 if |z| < 1 and ha (1, u) = 1. Obviously, |ha (z1 , u) − ha (z2 , u)| ≤ |z1 − z2 |ha (1, u), where one has max ha (1, u) < K < ∞, hence |g[ha (z1 , u), t] − g[ha (z2 , u), t]| ≤ |z1 − z2 | Kg (1, t).
86
Imre Pázsit & Lénárd Pál
q2 0.5 q1 0.95 Q 0.4, s0 1
0.5 Probability
0.4
u 0.1 u 0.2 u 0.4
0.3 0.2 0.1 0 0
2 4 6 Number of absorptions
8
Figure 4.1 The probability of the number of absorptions in the intervals u = 0.1, 0.2, 0.4.
Considering that in a subcritical system g (1, t) = e −(1−q1 )Qt = e −at , after the substitutions z1 = z and z2 = 1, one obtains |g[ha (z, u), t] − 1| ≤ |z − 1| K e −at , from which it follows that the improper integral I (z, u) is finite and hence the limit probability Ga∗ (z, u) does exist. The formula (4.18) arises directly from (4.19) if, after taking the limit t → ∞, one accounts for the equality (3.84), which holds for subcritical systems, ∞ 1 t−1 s0 [g(z, t) − 1]dt = exp dt exp s0 Q z q(t) − t 0 and performs the substitution z = ha (z, u). In the case of a quadratic generating function q(z), the probability Wa∗ (n, u) can be calculated relatively easily. One only needs to construct the power series of the generating function Ga∗ (z, u) in z. Figure 4.1 shows the dependence of the probability Wa∗ (n, u) on the number n of the absorbed particles for three time intervals u. It is notable that increasing of the time interval leads to the appearance of a maximum.
4.2.1 Expectation of the number of absorptions Investigate first the properties of the expected number of absorbed particles. From equation (4.15) we obtain
t ∂ log Ga (z, t, u) (a) (a) = M1 (t, u) = s0 E{Na (t − u, t)} = m1 (t , u)dt , (4.21) ∂z 0 z=1 where (a) m1 (t , u)
= E{na
(t
− u, t )}
∂ga (z, t , u) = ∂z
.
(4.22)
z=1
One notes that if t ≤ u then (a)
(a)
M1 (t, u) = M1 (t)
and
(a)
(a)
m1 (t, u) = m1 (t).
From equation (4.6) one can derive the differential equation (a)
dm1 (t, u) (a) = αm1 (t, u) − Q f0 [(t − u) − 1], dt
(4.23)
87
Special Probabilities
from which, after a short algebra, for the case α = Q(q1 − 1) = 0 one obtains (a)
m1 (t, u) = (u − t)
Q f0 αt Q f0 αt (e − 1) + (t − u) e (1 − e −αu ). α α
(4.24)
If α = 0, i.e. for a critical process, one has (a)
m1 (t, u) = (u − t)Q f0 t + (t − u)Q f0 u.
(4.25)
It is worth noting that, naturally, the result (4.24) can also be obtained from (4.13). If t ≤ u then from (4.10) for α = 0 one has (a) dm1 (t) (a) = αm1 (t) + Q f0 , dt (a) and from this, accounting for the initial condition m1 (0) = 0, the solution (a)
m1 (t) =
Q f0 αt (e − 1), α
t ≤ u,
is obtained. For t ≥ u, from the second term of (4.13) it follows that (a)
(a)
m1 (t, u) = m1 (t − u) m1 (u) =
Q f0 αt e (1 − e −αu ), α
t ≥ u.
It is seen that these two solutions are identical with equation (4.24). (a) Calculate now the expectation M1 (t, u). For α = 0 αt −αu e −1 Q f0 Q f0 (a) αt 1 − e (u − t) −t + (t − u) e −u , M1 (t, u) = s0 α α α α whereas for α = 0 one has (a) M1 (t, u)
1 t−u 2 2 = s0 Q f0 t 1 − (t − u) . 2 t
(4.26)
(4.27)
Figure 4.2 illustrates the time-dependence of the expectation of the absorptions in a subcritical, critical and supercritical system, respectively, with the parameter values s0 = 1, Q = 0.4, q2 = 0.5 and f0 = 0.3 for the case when u = 10.1 In the subcritical case, when α = −a < 0 then Na (t − u, t) converges to the stationary random process Na∗ (u) when t → ∞, and accordingly, (a)
lim M1 (t, u) = s0 u
Expectation
t→∞
350 300 250 200 150 100 50 0
f0 0.3 q1 0.95 q1 1.00 q1 1.05
Q f0 . a
(4.28)
Q 0.4 u 10 s0 1
q2 0.5 0
20
40
60
80
100
Time (t)
Figure 4.2 The expectation of the number of absorptions in the function of time. 1 The
dimensions of the parameters are not given, since those are unambiguous by the definitions of the parameters. The time is given here, and in the following, in suitably scaled units.
88
Imre Pázsit & Lénárd Pál
4.2.2 Variance of the number of absorptions The variance of Na (t − u, t) can also be calculated from equation (4.15), since
2 ∂ log Ga (z, t, u) (a) 2 D {Na (t − u, t)} = + M1 (t, u), ∂z2 z=1 where
(4.29)
t ∂2 log Ga (z, t, u) (a) = s m2 (t , u)dt . 0 ∂z2 0 z=1 From this it is seen that the variance to mean can be written in the following form: t D2 {Na (t − u, t)} s0 (a) = 1 + m2 (t u)dt , (a) (a) 0 M1 (t, u) M1 (t, u)
(4.30)
(4.31)
which shows that the deviation of the variance of the process Na (t − u, t) from that of a Poisson process is constituted by the second term on the right-hand side of (4.31). (a) In order to determine the variance, the second factorial moment m2 (t, u) has first to be determined. To this end the relationship
2
2 ∂ ha (z, t) ∂ ka (z, t, u) (a) m2 (t, u) = (u − t) + (t − u) (4.32) ∂z2 ∂z2 z=1 z=1 will be used. The function
∂2 ha (z, t) ∂z2
(a)
= m2 (t) z=1
can be obtained from a differential equation, derived from (4.10) as (a)
dm2 (t) (a) (a) = αm2 (t) + Qq2 [m1 (t)]2 , dt
(4.33)
(a)
where m1 (t) is equal to the first term on the right-hand side of (4.24). The initial condition of (4.33) is (a) naturally m2 (0) = 0. A brief calculation yields 3 Q (a) m2 (t) = q2 f02 e αt (e αt − 2αt − e −αt ), α = 0. (4.34) α On the other hand, for a critical process, i.e. for α = 0, from (4.34) the expression (a)
m2 (t) =
1 q2 f02 (Qt)3 , 3
t ≤ u,
(4.35)
is obtained. The second term on the right-hand side of (4.32) can be calculated by using the equality (4.12) with the result
2 ∂ ka (z, t, u) (a) (a) = m2 (t − u)[m1 (u)]2 + m1 (t − u)m2 (u), ∂z2 z=1 where q Q e αt (e αt − 1)/α, if α = 0, m2 (t) = 2 q2 Qt, if α = 0. Based on this, expression (4.32) takes the following form: (a)
(a)
(a)
(a)
m2 (t, u) = (u − t)m2 (t) + (t − u){m2 (t − u)[m1 (u)]2 + m1 (t − u)m2 (u)}.
(4.36)
89
Special Probabilities
After performing the substitutions and some rearrangement, for α = 0 one obtains 3 (a) 2 Q m2 (t, u) = q2 f0 e αt {(u − t) [e αt − e −αt − 2αt] α
whereas if α = 0 then (a) m2 (t, u)
+ (t − u) [e αu − e −αu − 2αu + (e α(t−u) − 1)(e αu + e −αu − 2)]},
(4.37)
$ u %3 1 t−u 2 3 = q2 f0 (Qt) (u − t) + (t − u) 1+3 . 3 t u
(4.38)
The variance (4.29) can now be calculated. First, determine the integral (4.30). Introduce the following notations: •
if t ≤ u then
t
s0 0 •
m2 (t , u)dt = s0 (a)
whereas if t > u then
t
0
t
m2 (t )dt = Ia (α, t), (a)
(4.39)
m2 (t , u)dt = Ja (α, t, u). (a)
s0 0
(4.40)
Elementary operations yield that if α = 0 then Ia (α, t) =
1 s0Q 3q2 f02 α−4 (e 2αt − 4αt e αt + 4e αt − 2αt − 5), 2
(4.41)
and if α = 0 then 1 s0Q 3q2 f02 t 4 . 12 For calculating Ja (α, t, u), it is more practical to use (4.36) instead of (4.37). Thus, u t t (a) (a) (a) Ja (α, t, u) = s0 m2 (t )dt + s0 [m1 (u)]2 m2 (t − u)dt + s0 m2 (u) m1 (t − u)dt. Ia (0, t) =
0
u
(4.42)
u
From this it follows that if α = 0 then Ja (α, t, u) = Ia (α, u) +
1 s0Q 3q2 f02 α−4 (e α(t−u) − 1) × [(e α(t−u) − 1)(e αu − 1)2 + e 2αu − 2αue αu − 1], (4.43) 2
and if α = 0 then
t t Ja (0, t, u) = Ia (0, u) 1 + 2 −1 3 −1 . u u It is easy to realise that the following equality holds: Ia (α, u) = Ja (α, u, u),
(4.44)
∀α.
Applying now (4.29) leads to (a)
D2 {Na (t − u, t)} = M1 (t, u) + (u − t) Ia (α, t) + (t − u) Ja (α, u, t). Figure 4.3 illustrates the variance to mean D2 {Na (t − u, t)} E{Na (t − u, t)}
(4.45)
90
Variance to mean
Imre Pázsit & Lénárd Pál
35 30 25 20 15 10 5
f0 0.3 q1 0.95 q1 1.00 q1 1.05
Q 0.4 u 10 s0 1
q2 0.5 0
20
40 60 Time (t )
80
100
Figure 4.3 Variance to mean of the number of absorptions as a function of time t. 500
Variance
400
s0 1
q1 0.95 Q 0.4
300
f0 0.3 u8
200
u 10 u 12
100 q2 = 0.5
0 0
50
100
150 200 Time (t )
250
300
Figure 4.4 Variance of the number of absorptions in the function of time t for three different values of the period u.
of the absorptions occurring during the period u = 10 as a function of t in subcritical, critical and supercritical state, for the parameter values s0 = 1, Q = 0.4, q2 = 0.5 and f0 = 0.3. Likewise, in Fig. 4.4 it is seen how the variance of the number of absorptions depends on the time for three different values of the observation interval u under unchanged values of the parameters s0 , Q, q2 , f0 . Calculating the variance in the case of a subcritical medium for t → ∞ gives
1 − e −au (a) 2 −2 2 ∗ D {Na (u)} = M1 (∞, u) 1 + Q q2 f0 a 1− , (4.46) au (a)
where a = Q(1 − q1 ) > 0, whereas M1 (∞, u) is equal to (4.28). To prove this formula it is sufficient to determine the limiting value lim Ja (−a, u, t) = Ja (−a, u, ∞),
t→∞
then to perform a rearrangement of the right-hand side of the equation D2 {Na∗ (u)} = M1 (∞, u) + Ja (−a, u, ∞). (a)
Alternatively, starting from the differential equation derived from the generating function in (4.6), (a)
dm2 (t, u) (a) (a) = −am2 (t, u) + Qq2 [m1 (t, u)]2 , dt the Laplace transform (a)
m˜ 2 (s, u) =
0
∞
e −st m2 (t, u)dt (a)
91
Special Probabilities (a)
(a)
can be calculated at s = 0, taking into account that m2 (0, u) = m2 (0) = 0. Since (u − t)(t − u) = 0, one can write that Q f0 2 (a) 2 [m1 (t, u)] = [(u − t)(1 − e −at )2 + (t − u)e −2at (e au − )2 ], a from which one arrives at
1 − e −su Q f0 2 1 − e −(s+a)u 1 − e −(s+2a)u e −(s+2a)u (a) (s + a) m˜ 2 (s, u) = Qq2 −2 + + (e au − 1)2 . a s s+a s + 2a s + 2a From the above, after some simple algebra one obtains (a) s0 m˜ 2 (0, u)
= s0
∞
0
(a) m2 (t, u)dt
= Ja (−a, u, ∞) = s0 uQ
q2 f02 a−3
3
1 − e −au 1− au
.
By taking into account the equality M1 (∞, u) = s0 uQ f0 a−1 , (a)
finally the formula (4.46) is obtained.
4.2.3 Correlation between the numbers of absorptions In the following, we will be concerned with exploring the stochastic dependence between the absorptions occurring in two different time intervals. For this purpose, the autocorrelation function of the random process Na (t − u, t) can be selected as an indicator. Actually, it would be more correct to use the term autocovariance function instead of the autocorrelation function. However, following the customs that have been adopted in the physics literature, the term ‘correlation’ will also be used for the covariance. The cross-correlation is used for studying the stochastic relationship between two different random processes. The cross-correlation between the numbers of the absorbed and the still active (still living) particles has already been discussed briefly in Section 1.5. Here, the case of the non-overlapping intervals will be discussed, and thereafter the case of the overlapping ones.
Non-overlapping intervals Let [t − u2 , t] and [t − u2 − θ − u1 , t − u2 − θ] be two mutually non-overlapping intervals and let Na (t − u1 , t ),
(t = t − u2 − θ)
and
Na (t − u2 , t)
denote the numbers of particles captured in the first and the second interval, respectively, in the case when there were no particles in the multiplying system at time t = 0, but particles were injected in the interval [0, t] according to a Poisson process with intensity s0 . The goal is the calculation of the correlation function RNa , Na (t, θ, u1 , u2 ) = E{[Na (t − u1 , t ) − M1 (t , u1 )][Na (t − u2 , t) − M1 (t, u2 )]}, (a)
(a)
(4.47)
in which t = t − u2 − θ. For determining RNa ,Na (t, θ, u1 , u2 ), we need the generating function of the probability Pa (n1 , n2 , t, θ, u1 , u2 ) = P{Na (t − u1 , t ) = n1 , Na (t − u2 , t) = n2 |n(0) = 0}
(4.48)
defined as Ga (z1 , z2 , t, θ, u1 , u2 ) =
∞ ∞ n1 =0 n2 =0
Pa (n1 , n2 , t, θ, u1 , u2 ) z1n1z2n2 .
(4.49)
92
Imre Pázsit & Lénárd Pál
u1 0
t u1
u2 t
tu2
t
Figure 4.5 Arrangement of the mutually non-overlapping time intervals (t = t − u2 − θ).
According to the considerations described in Section 3.2, the logarithm of this generating function can be given in the following form: t [ga (z1 , z2 , v, θ, u1 , u2 ) − 1]dv. log Ga (z1 , z2 , t, θ, u1 , u2 ) = s0 (4.50) 0
Here
∞ ∞
ga (z1 , z2 , t, θ, u1 , u2 ) =
pa (n1 , n2 , t, θ, u1 , u2 ) z1n1z2n2
(4.51)
n1 =0 n2 =0
is the generating function of the probability pa (n1 , n2 , t, θ, u1 , u2 ) = P{na (t − u1 , t ) = n1 , na (t − u2 , t) = n2 |n(0) = 1}.
(4.52)
The quantities na (t − u1 , t ) and na (t − u2 , t) represent the numbers of absorbed particles in the intervals [t − u1 , t ] and [t − u2 , t], respectively, provided that there was one particle at time t = 0 in the multiplying system, i.e. the condition n(0) = 1 was fulfilled. For the solution, an equation determining the generating function ga (z1 , z2 , t, θ, u1 , u2 ) has to be derived. To this end, write down the backward Kolmogorov equation determining the probability pa (n1 , n2 , t, θ, u1 , u2 ). These considerations can be illustrated by the time axis given in Fig. 4.5. The probability pa (n1 , n2 , t, θ, u1 , u2 ) is the sum of the probabilities of three mutually exclusive events. The first event is that the single starting particle in the system at time t = 0 does not induce a reaction in the interval [0, t]; the second is that the starting particle is absorbed in its first reaction during the time [0, t], while the third is that the first reaction in the interval [0, t] results in renewal or multiplication. Accordingly, pa (n1 , n2 , t, θ, u1 , u2 ) = pa(1) (n1 , n2 , t, θ, u1 , u2 ) + pa(2) (n1 , n2 , t, θ, u1 , u2 ) + pa(3) (n1 , n2 , t, θ, u1 , u2 ). These three terms are given as pa(1) (n1 , n2 , t, θ, u1 , u2 ) = e −Qt δn1 0 δn2 0 , t (2) pa (n1 , n2 , t, θ, u1 , u2 ) = Q f0 e −Qv A(n1 , n2 , t − v, θ, u1 , u2 )dv,
(4.53) (4.54)
0
where A(n1 , n2 , t − v, θ, u1 , u2 ) = (t − u2 − θ − u1 − v) δn1 0 δn2 0 + (v − t + u2 + θ + u1 ) (t − u2 − θ − v) δn1 1 δn2 0 + (v − t + u2 + θ) (t − u2 − v) δn1 0 δn2 0 +(v − t + u2 ) (t − v) δn1 0 δn2 1 , and finally
pa(3) (n1 , n2 , t, θ, u1 , u2 ) = Q
0
t
e −Qv
∞
fk ba(k) (n1 , n2 , t − v, θ, u1 , u2 )dv,
k=1
where ba(k) (n1 , n2 , t
− v, θ, u1 , u2 ) =
k
n11 +···+n1k =n1 n21 +···+n2k =n2 j=1
pa (n1j , n2j , t − v, θ, u1 , u2 ).
(4.55)
93
Special Probabilities (3)
(2)
Adding to pa and subtracting from pa the integral t e −Qv dv δn1 0 δn2 0 , Q f0 0
and by utilising the properties of the unit step function (x) that is continuous from the right, after rearranging and changing the notation, the following integral equation is obtained for the generating function defined in (4.51): t e −Q(t−v) {[(v − u2 − θ − u1 ) − (v − u2 − θ)](1 − z1 ) ga (z1 , z2 , t, θ, u1 , u2 ) = e −Qt + Q f0 0
+ [(v − u2 ) − 1](1 − z2 )}dv t +Q e −Q(t−v) q[ga (z1 , z2 , t − v, θ, u1 , u2 )]dv. (4.56) 0
This is naturally equivalent with the differential equation ∂ga (z1 , z2 , t, θ, u1 , u2 ) = −Q ga (z1 , z2 , t, θ, u1 , u2 ) + Qq[ga (z1 , z2 , t, θ, u1 , u2 )] + Q f0 [(t − u2 ) − 1](1 − z2 ) ∂t (4.57) + Q f0 [(t − u2 − θ − u1 ) − (t − u2 − θ)](1 − z1 ) subject to the initial condition ga (z1 , z2 , 0, θ, u1 , u2 ) = 1. Considering that
2 ∂ log Ga (z1 , z2 , t, θ, u1 , u2 ) , RNa , Na (t, θ, u1 , u2 ) = ∂z1∂z2 z1 =z2 =1
(4.58)
from (4.50) one obtains RNa , Na (t, θ, u1 , u2 ) = s0 0
where
(a)
m2 (t, θ, u1 , u2 ; 1, 2) =
t
(a)
m2 (v, θ, u1 , u2 ; 1, 2)dv,
∂2 ga (z1 , z2 , t, θ, u1 , u2 ) ∂z1∂z2
is equal to the mixed second moment E{na (t − u1 , t )na (t − u2 , t)}. For one can derive the differential equation
(4.59)
(4.60)
z1 =z2 =1 (a) m2 , according
to (4.60), from (4.57)
(a)
dm2 (t, θ, u1 , u2 ; 1, 2) (a) (a) (a) = αm2 (t, θ, u1 , u2 ; 1, 2) + Qq2 m1 (t, θ, u1 , u2 ; 1) m1 (t, θ, u1 , u2 ; 2). dt One also needs the moments
(a)
m1 (t, θ, u1 , u2 ; 1) = and
(a) m1 (t, θ, u1 , u2 ; 2)
∂ga (z1 , z2 , t, θ, u1 , u2 ) ∂z1
∂ga (z1 , z2 , t, θ, u1 , u2 ) = ∂z2
(4.61)
(4.62) z1 =z2 =1
.
(4.63)
z1 =z2 =1
For these moments, from the generating function equation (4.57) one can write down the simple differential equations (a)
dm1 (t, θ, u1 , u2 ; 1) (a) = αm1 (t, θ, u1 , u2 ; 1) − Q f0 [(t − u2 − θ − u1 ) − (t − u2 − θ)], dt
(4.64)
94
Imre Pázsit & Lénárd Pál
and (a)
dm1 (t, θ, u1 , u2 ; 2) (a) = αm1 (t, θ, u1 , u2 ; 2) − Q f0 [(t − u2 ) − 1] dt with the initial condition (a) (a) m1 (0, θ, u1 , u2 ; 1) = m1 (0, θ, u1 , u2 ; 2) = 0. With the above formulae the correlation function RNa , Na (t, θ, u1 , u2 ) is determined. In the following, we shall only be concerned with the calculation of the correlation function ∞ (a) ∗ lim RNa ,Na (t, θ, u1 , u2 ) = RNa ,Na (θ, u1 , u2 ) = s0 m2 (t, θ, u1 , u2 ; 1, 2)dt, t→∞
(4.65)
(4.66)
0
describing the asymptotically stationary process in a subcritical medium. Equation (4.61), which defines the function (a) (a) m2 (t, θ, u1 , u2 ; 1, 2), contains the moments m1 (t, θ, u1 , u2 ; i), i = 1, 2. For these moments, from (4.64) and (4.65), one obtains the solutions (a)
m1 (t, θ, u1 , u2 ; 1) =
Q f0 [(1 − e −a(t−θ−u2 ) )(t − θ − u2 ) − (1 − e −a(t−θ−u2 −u1 ) )(t − θ − u2 − u1 )], (4.67) a
and (a)
m1 (t, θ, u1 , u2 ; 2) = We notice that if t ≥ u1 + θ + u2 , then
Q f0 [1 − e −at − (1 − e −a(t−u2 ) )(t − u2 )]. a
(a)
m1 (t, θ, u1 , u2 ; 1) =
(4.68)
Q f0 −a(t−θ−u2 −u1 ) e (1 − e −au1 ), a
and Q f0 −a(t−u2 ) e (1 − e −au2 ). a In possession of these moments, one can start solving equation (4.61) for the generating function. After simple but lengthy rearrangements, one arrives at Q f0 2 −at au2 (a) (a) e (e − 1)(1 − e −a(t−θ−u2 ) )(t − θ − u2 ) m1 (t, θ, u1 , u2 ; 1)m1 (t, θ, u1 , u2 ; 2) = a Q f0 2 −at au2 e (e − 1)(1 − e −a(t−θ−u2 −u1 ) )(t − θ − u2 − u1 ). − a (a)
m1 (t, θ, u1 , u2 ; 2) =
With the above, from (4.61) the Laplace transform
(a)
m˜ 2 (s, θ, u1 , u2 ; 1, 2) = is obtained as
t
0
(a) m˜ 2 (s, θ, u1 , u2 ; 1, 2)
= Qq2
e −st m2 (t, θ, u1 , u2 ; 1, 2)dt (a)
f0 1 − q1
× (1 − e −(s+a)u1 ) e −(s+a)(θ+u2 )
2
1 s+a
(e au2 − 1)
1 1 − . s + a s + 2a
It is evident from (4.66) that ∗ (θ, u1 , u2 ) = s0 m˜ 2 (0, θ, u1 , u2 ; 1, 2), RN a ,Na (a)
(4.69)
(4.70)
95
Special Probabilities
so finally one arrives at ∗ (θ, u1 , u2 ) RN a ,Na
1 = s0 Qu1 u2 q2 2
f0 1 − q1
2
1 − e −au1 1 − e −au2 −aθ e , au1 au2
(4.71)
where a = Q(1 − q1 ). It is seen that the correlation function of the stationary process in a subcritical system decays exponentially with the time θ separating the intervals u1 and u2 from each other.
Overlapping intervals Let us now deal with the determination of the correlation function (a)
(a)
RNa ,Na (t, θ, u) = E{[Na (t − θ − u, t) − M1 (t, θ + u)][Na (t − u, t) − M1 (t, u)]} between the numbers of the absorbed particles Na (t − u, t) and Na (t − θ − u, t) in the overlapping intervals [t − u, t] and [t − θ − u, t]. u t
0
tu
t
Figure 4.6 Arrangement of the overlapping time intervals (t = t − u − θ).
The procedure is almost identical with the previous one. Also here the probability P{na (t − θ − u, t) = n1 , na (t − u, t) = n2 |n(0) = 1} = pa (n1 , n2 , t, θ, u)
(4.72)
(2)
is needed, in which only the component pa (n1 , n2 , t, θ, u) is modified. Considering the time axis illustrating the arrangement of intervals in Fig. 4.6, one can write t pa(2) (n1 , n2 , t, θ, u) = Q f0 e −Qv A(n1 , n2 , t − v, θ, u)dv, 0
where A(n1 , n2 , t−v, θ, u) = (t−θ−u−v)δn1 0 δn2 0 +(v−t+θ+u) (t−u−v)δn1 1 δn2 0 +(t)(v−t+u)δn1 1 δn2 1 . Performing the rearrangement and redefining the notation as was done already earlier, one obtains for the generating function ga (z1 , z2 , t, θ, u) =
∞ ∞
pa (z1 , z2 , t, θ, u)z1n1z2n2
(4.73)
n1 =0 n2 =0
the integral equation ga (z1 , z2 , t, θ, u) = e −Qt + Q f0 +Q
t
e −Q(t−v) [(v − θ − u)(1 − z1 ) + (v − u)(1 − z2 ) + z1 z2 − 1]dv
0 t
e −Q(t−v) q[ga (z1 , z2 , v, θ, u)]dv.
0
From this one can derive the differential equation ∂ga (z1 , z2 , t, θ, u) = −Qga (z1 , z2 , t, θ, u) + Qq[ga (z1 , z2 , t, θ, u)] ∂t + Q f0 [(t − θ − u) (1 − z1 ) + (t − u)(1 − z2 ) + z1 z2 − 1]
(4.74)
96
Imre Pázsit & Lénárd Pál
with the initial condition ga (z1 , z2 , 0, θ, u) = 1. The generating function Ga (z1 , z2 , t, θ, u) =
∞ ∞
Pa (n1 , n2 , t, θ, u)z1n1z2n2
(4.75)
n1 =0 n2 =0
of the probability P{Na (t − θ − u, t) = n1 , Na (t − u, t) = n2 |n(0) = 0} = Pa (n1 , n2 , t, θ, u)
(4.76)
is determined similarly as before by the equation
t
log Ga (z1 , z2 , t, θ, u) = s0
[ga (z1 , z2 , v, θ, u) − 1]dv.
(4.77)
0
Hence, one can immediately write that
∂2 log Ga (z1 , z2 , t, θ, u) RNa , Na (t, θ, u) = ∂z1 ∂z2
z1 =z2 =1
= s0 0
t
(a)
m2 (v, θ, u; 1, 2)dv.
(4.78)
From equation (4.74), in analogy with (4.60), one has
\[
\frac{\partial m_2^{(a)}(t, \theta, u; 1, 2)}{\partial t} = \alpha\, m_2^{(a)}(t, \theta, u; 1, 2) + Q q_2\, m_1^{(a)}(t, \theta, u; 1)\, m_1^{(a)}(t, \theta, u; 2) + Q f_0\left[1 - \Delta(t-u)\right],
\tag{4.79}
\]
where
\[
m_1^{(a)}(t, \theta, u; 1) = \left.\frac{\partial g_a(z_1, z_2, t, \theta, u)}{\partial z_1}\right|_{z_1=z_2=1}
\tag{4.80}
\]
and
\[
m_1^{(a)}(t, \theta, u; 2) = \left.\frac{\partial g_a(z_1, z_2, t, \theta, u)}{\partial z_2}\right|_{z_1=z_2=1}.
\tag{4.81}
\]
For these moments, from (4.74) one obtains the differential equations
\[
\frac{d m_1^{(a)}(t, \theta, u; 1)}{dt} = \alpha\, m_1^{(a)}(t, \theta, u; 1) + Q f_0\left[1 - \Delta(t-\theta-u)\right]
\tag{4.82}
\]
and
\[
\frac{d m_1^{(a)}(t, \theta, u; 2)}{dt} = \alpha\, m_1^{(a)}(t, \theta, u; 2) + Q f_0\left[1 - \Delta(t-u)\right]
\tag{4.83}
\]
with the initial conditions
\[
m_1^{(a)}(0, \theta, u; 1) = m_1^{(a)}(0, \theta, u; 2) = 0.
\]
In the following, again only the correlation function of asymptotically stationary processes in a subcritical medium will be dealt with, i.e.
\[
\lim_{t\to\infty} R_{N_a, N_a}(t, \theta, u) = R^{*}_{N_a, N_a}(\theta, u),
\tag{4.84}
\]
for the determination of which (4.79) has to be solved. For this, the functions m_1^{(a)}(t, \theta, u; 1) and m_1^{(a)}(t, \theta, u; 2) are needed. From (4.82) and (4.83) one has
\[
m_1^{(a)}(t, \theta, u; 1) = \frac{Q f_0}{a}\left[1 - e^{-at} - \Delta(t-\theta-u)\left(1 - e^{-a(t-\theta-u)}\right)\right]
\tag{4.85}
\]
Special Probabilities
and
\[
m_1^{(a)}(t, \theta, u; 2) = \frac{Q f_0}{a}\left[1 - e^{-at} - \Delta(t-u)\left(1 - e^{-a(t-u)}\right)\right].
\tag{4.86}
\]
By using these, from (4.79) the Laplace transform of the second mixed moment m_2^{(a)}(t, \theta, u; 1, 2),
\[
\tilde m_2^{(a)}(s, \theta, u; 1, 2) = \int_0^{\infty} e^{-st}\, m_2^{(a)}(t, \theta, u; 1, 2)\, dt,
\]
can be obtained after some simple but lengthy calculations as
\[
\tilde m_2^{(a)}(s, \theta, u; 1, 2) = \frac{1}{s+a}\left\{Q\left[f_0 + q_2\left(\frac{f_0}{1-q_1}\right)^{2}\right]\frac{1 - e^{-su}}{s} + Q q_2\left(\frac{f_0}{1-q_1}\right)^{2}\left[\Phi_1(s, \theta, u) + \Phi_2(s, \theta, u)\right]\right\},
\tag{4.87}
\]
where
\[
\Phi_1(s, \theta, u) = \frac{1}{s+a}\left[e^{-(s+a)(\theta+u)} - e^{-s(\theta+u)} e^{-a\theta} + e^{-(s+a)u} + e^{-su} - 2\right]
\]
and
\[
\Phi_2(s, \theta, u) = \frac{1}{s+2a}\left[1 - e^{-(s+a)(\theta+u)} + e^{-s(\theta+u)} e^{-a\theta} - e^{-(s+a)u}\right].
\]
In view of the fact that
\[
R^{*}_{N_a, N_a}(\theta, u) = s_0 \int_0^{\infty} m_2^{(a)}(t, \theta, u; 1, 2)\, dt,
\]
one has
\[
R^{*}_{N_a, N_a}(\theta, u) = s_0\, \tilde m_2^{(a)}(0, \theta, u; 1, 2).
\]
Utilising (4.87) yields
\[
R^{*}_{N_a, N_a}(\theta, u) = \frac{s_0 u f_0}{1-q_1}\left\{1 + \frac{f_0 q_2}{(1-q_1)^2}\left[1 - \frac{1 - e^{-au}}{au}\left(1 - \frac{1}{2}\left(1 - e^{-a\theta}\right)\right)\right]\right\}.
\tag{4.88}
\]
In the case when \theta = 0,
\[
R^{*}_{N_a, N_a}(0, u) = \lim_{t\to\infty} D^2\{N_a(t-u, t)\},
\]
where \lim_{t\to\infty} D^2\{N_a(t-u, t)\} is equal to (4.46), which gives the variance of the number of absorptions during the time interval u for a stationary subcritical process.
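As a numerical illustration (ours, not from the original text), the covariance formula (4.88) can be evaluated directly; the parameter values below are those used in the book's figures, and the function name is our own choice.

```python
import math

def covariance_absorptions(theta, u, s0=1.0, Q=0.4, q1=0.95, q2=0.5, f0=0.3):
    """Stationary covariance R*(theta, u) of equation (4.88) for the numbers
    of absorptions counted in the nested intervals [t-theta-u, t] and [t-u, t]."""
    a = Q * (1.0 - q1)                      # decay constant, alpha = -a < 0
    mean = s0 * u * f0 / (1.0 - q1)         # stationary mean number of absorptions in u
    amp = f0 * q2 / (1.0 - q1) ** 2         # branching-noise amplitude
    shape = 1.0 - (-math.expm1(-a * u)) / (a * u) * (1.0 - 0.5 * (-math.expm1(-a * theta)))
    return mean * (1.0 + amp * shape)
```

At theta = 0 the expression reduces to the variance of N_a(t-u, t), i.e. to the form (4.46); as theta grows the covariance increases monotonically, since the first (longer) interval contains the second one.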
4.2.4 The probability of no absorption events occurring
In many cases, it is important to know the probability that no particle absorption occurs in the interval [t-u, t], where u \ge 0, in a multiplying system with random injection, given that there were no particles present at time t = 0. If the random injection is a Poisson process, then one can infer from (4.15) that this probability is given by the formula
\[
P_a(t, u, 0) = G_a(t, u, 0) = \exp\left\{-s_0 \int_0^t \left[1 - g_a(t', u, 0)\right] dt'\right\}.
\tag{4.89}
\]
The task is therefore to determine the functions h_a(t', 0) and k_a(t', u, 0) in the equation
\[
\int_0^t \left[1 - g_a(t', u, 0)\right] dt' = \int_0^t \left[1 - \Delta(u-t')\, h_a(t', 0) - \Delta(t'-u)\, k_a(t', u, 0)\right] dt'.
\tag{4.90}
\]
Again, the calculations will be performed for the case when the basic generating function q(z) is quadratic.
Quadratic process First, the functions ha (z, t) and ka (t, u, z) will be determined from (4.10) and (4.11), since the searched functions ha (t, 0) and ka (t, u, 0) can be obtained from these by substituting z = 0. For simplicity, suppressing notations on the variables u and z as well as introducing the functions ha (z, t) = y1 (t)
and ka (z, t, u) = y2 (t),
the equations dy1 = −Q(1 − f1 ) y1 + Q f2 y21 + Qf0 z, dt
if 0 ≤ t ≤ u,
(4.91)
if u ≤ t,
(4.92)
and dy2 = −Q(1 − f1 )y2 + Q f2 y22 + Q f0 , dt can be written down together with the initial conditions y1 (0) = 1 and
y2 (u) = y1 (u).
For the solution of (4.91), determine the roots of the equation f2 y21 − (1 − f1 ) y1 + f0 z = 0. One obtains (1)
y1 = 1 + d(1 − r)
(4.93)
(2) y1
(4.94)
= 1 + d(1 + r)
where 1 − q1 d= q2 Based on this, from (4.91) it follows that ( 1 (2)
y1 − y 1
! and r =
−
1+
2f0 (1 − z). d(1 − q1 )
(4.95)
)
1 (1)
y1 − y 1
dy1 = r(1 − q1 )Qdt
which can immediately be integrated. After some rearrangements and by accounting for the initial condition y1 (0) = 1, one arrives at
1 + C1 e −art , t ≤ u, (4.96) y1 (t) = 1 + d 1 − r 1 − C1 e −art where 1−r C1 = (4.97) and a = Q(1 − q1 ). 1+r The roots needed for the solution of (4.92) can be immediately obtained from the formulae (4.93) and (4.94) by substituting z = 1, i.e. r = 1. We find that (1)
(2)
y2 = 1 and y2 = 1 + 2d, hence the equation to be solved can be written in the form 1 1 − dy2 = a dt. y2 − 1 − 2d y2 − 1
99
Special Probabilities
Taking into account the initial condition y2 (u) = y1 (u), a simple rearrangements leads to C2 e −at , 1 − C2 e −at
y2 (t) = 1 − 2d
t ≥ u,
(4.98)
where
1 − e −aru e au . 1 − C12 e −aru In possession of y1 (t) and y2 (t), the probability (4.89) can be calculated. Determine now the probability that in an asymptotically stationary subcritical medium with injection, there will be no absorptions during the time interval u. First, we shall prove that the limit generating function u ∞ lim Ga (z, t, u) = Ga∗ (z, u) = s0 [y1 (t) − 1]dt + [y2 (t) − 1]dt (4.99) C2 = −C1
t→∞
0
u
is given by the expression (1 − C1 )(1 − C2 e −au ) s0 , log Ga∗ (z, u) = s0 u d(1 − r) + 2 d log a 1 − C1 e −aru
(4.100)
in which the quantities d, r, C1 and C2 are determined by the formulae (4.95) as well as (4.97) and (4.98). By considering that lim P (0) (t, u) t→∞ a
= Wa(0) (u),
(4.101)
the sought limit probability is given by the formula Ga∗ (u, 0) = Wa(0) (u).
(4.102)
For the proof, one has to determine the integrals in (4.99). It can easily be seen that u s0 1 − C1 [y1 (t ) − 1]dt = s0 u d(1 − r) + 2 d log , s0 a 1 − C1 e −aru 0 as well as
s0 u
t
s0 1 − C2 e −au [y1 (t ) − 1]dt = 2 d log , a 1 − C2 e −art
hence if a > 0, one has ∞ s0 (1 − C1 )(1 − C2 e −au ) [ga (z, t, u) − 1]dt = s0 u d(1 − r) + 2 d log , s0 a 1 − C1 e −aru 0 which was the original statement. Figure 4.7 shows the probability that no particle absorption occurs during the time interval u in a stationary multiplying subcritical system with injection, for the case of three different source intensities. Now the probability will be determined that in a stationary subcritical system with injection, no particle absorption will occur in either of the two time intervals u1 and u2 following each other with a time lag θ . (0) This probability Wa (θ, u1 , u2 ) can be obtained from (4.49), since lim Ga (0, 0, t, θ, u1 , u2 ) = Wa(0) (θ, u1 , u2 ).
t→∞
Accordingly, the integral I (z1 , z2 , θ, u1 , u2 ) = s0 0
∞
[ga (z1 , z2 , t, θ, u1 , u2 ) − 1]dt
(4.103)
100
Imre Pázsit & Lénárd Pál
1
Probability
f0 0.3 s0 0.8 s0 1.0 s0 1.2
q2 0.5 q1 0.95 Q 0.4
0.8 0.6 0.4 0.2 0 0
0.2
0.4 0.6 Time interval (u)
0.8
1
Figure 4.7 Probability of no particle absorption occurring as a function of time period u for three different source intensities.
has to be calculated. For this, one has to solve (4.57) with a quadratic q(z). By introducing the notations ⎧ y (t), if t ≤ u2 , ⎪ ⎨ 1 y2 (t), if u2 ≤ t ≤ u2 + θ, ga (z1 , z2 , t, θ, u1 , u2 ) = ⎪ ⎩y3 (t), if u2 + θ ≤ t ≤ u2 + θ + u1 , y4 (t), if u2 + θ + u1 ≤ t, the following four equations are obtained dy1 dt dy2 dt dy3 dt dy4 dt
= −Q(1 − f1 )y1 + Q f2 y21 + Q f0 z2 , = −Q(1 − f1 )y2 + Q f2 y22 + Q f0 ,
0 ≤ t ≤ u2 , u2 ≤ t ≤ u2 + θ,
= −Q(1 − f1 )y3 + Qvf2 y23 + Q f0 z1 , = −Q(1 − f1 )y4 + Q f2 y24 + Q f0 ,
u2 + θ ≤ t ≤ u2 + θ + u1 ,
u2 + θ + u1 ≤ t,
with the initial conditions y1 (0) = 1, y2 (u2 ) = y1 (u2 ), y3 (u2 + θ) = y2 (u2 + θ), y4 (u2 + θ + u1 ) = y3 (u2 + θ + u1 ). By introducing the notations ! rk =
1+2
f0 (1 − zk ), d(1 − q1 )
k = 1, 2
based on the method described earlier, one obtains the following solutions:
1 + C1 (z2 ) e −ar2 t y1 (t) = 1 + d 1 − r2 , 0 ≤ t ≤ u2 , 1 − C1 (z2 ) e −ar2 t y2 (t) = 1 − 2d
C2 (z2 ) e −at , 1 − C2 (z2 ) e −at
u2 ≤ t ≤ u2 + θ,
(4.104)
(4.105)
(4.106)
101
Special Probabilities
1 + C3 (z1 , z2 ) e −ar1 t y3 (t) = 1 + d 1 − r1 , 1 − C3 (z1 , z2 ) e −ar1 t
u2 + θ ≤ t ≤ u2 + θ + u1 ,
(4.107)
and C4 (z1 , z2 ) e −at , u2 + θ + u1 ≤ t, (4.108) 1 − C2 (z1 , z2 ) e −at in which the quantities C1 (z2 ), C2 (z2 ), C3 (z1 , z2 ) and C4 (z1 , z2 ) are determined from the initial conditions through the following relationships: 1 − r2 (z2 ) C1 (z2 ) = , (4.109) 1 + r2 (z2 ) y4 (t) = 1 − 2d
C2 (z2 ) =
K2 (z2 ) e au2 , K2 (z2 ) − 2
(4.110)
where 1 + C1 (z2 ) e −ar2 (z2 )u2 , 1 − C1 (z2 ) e −ar2 (z2 )u2 K3 (z2 ) − r1 (z1 ) ar1 (z1 )(u2 +θ) . C3 (z1 , z2 ) = e K3 (z2 ) + r1 (z1 ) K2 (z2 ) = 1 − r2 (z2 )
(4.111)
In the above, 1 + C2 (z2 ) e −a(u2 +θ) , 1 − C2 (z2 ) e −a(u2 +θ) K4 (z1 , z2 ) C4 (z1 , z2 ) = e a(u2 +θ+u1 ) , K4 (z1 , z2 ) − 2 K3 (z2 ) =
(4.112)
with 1 + C3 (z1 , z2 ) e −ar1 (z1 )(u2 +θ+u1 ) . 1 − C3 (z1 , z2 ) e −ar1 (z1 )(u2 +θ+u1 ) With elementary, although troublesome work, the following expressions are obtained for the integral (4.103): K4 (z1 , z2 ) = 1 − r1 (z1 )
I (z1 , z2 , θ, u1 , u2 ) = I1 (z2 , u2 ) + I2 (z2 , u2 ) + I3 (z1 , z2 , θ, u1 , u2 ) + I4 (z1 , z2 , θ, u1 , u2 ), where I1 (z2 , u2 ) = s0 u2 d(1 − r2 ) + 2
s0 1 − C1 , d log a 1 − C1 e −ar2 u2
(4.113)
s0 1 − C2 e −au2 , I2 (z2 , u2 ) = 2 d log a 1 − C2 e −a(u2 +θ)
(4.114)
s0 1 − C3 e −ar1 (u2 +θ) , I3 (z1 , z2 , θ, u1 , u2 ) = s0 u1 d(1 − r1 ) + 2 d log a 1 − C3 e −ar1 (u2 +θ+u1 )
(4.115)
and I4 (z1 , z2 , θ, u1 , u2 ) = 2
s0 d log [1 − C4 e −a(u2 +θ+u1 ) ]. a
(4.116)
Finally, one has Wa(0) (θ, u1 , u2 ) = exp{I (0, 0, θ, u1 , u2 )}.
(4.117)
102
Imre Pázsit & Lénárd Pál
Probability
0.322 0.320
s0 1, Q 0.4, q1 0.95
0.318
u1 0.1, u2 0.1 q2 0.5, f0 0.3
0.316 0.314 0.312 0
50 100 150 200 250 Time-span between intervals u1 and u2
Figure 4.8 Dependence of the probability of no particle absorptions in two intervals lying a time interval θ apart from each other, on θ, in a stationary subcritical medium with injection.
Coefficient (r (0) d )
0.03 0.025
s0 1, Q 0.4, q1 0.95
0.02
u1 0.1, u2 0.1
0.015
q2 0.5, f0 0.3
0.01 0.005 0 0
Figure 4.9
50 100 150 200 250 Time-span between intervals u1 and u2
Decrease of the correlation coefficient with the increase of the time lag θ separating the intervals. (0)
Figure 4.8 illustrates that with the choice of u1 = u2 = 0.1, how the probability Wa (θ, 0.1, 0.1) depends on the time lag θ between two neighbouring intervals. According to the expectations it is seen that lim Wa(0) (θ, u1 , u2 ) = Wa(0) (u1 ) Wa(0) (u2 ),
θ→∞
which shows that if sufficiently long time passes between the intervals u1 and u2 , the probabilities of the nonoccurrences of particle absorption become practically the probabilities of independent events. This dependence can be characterised by the correlation coefficient (0)
ra(0) (θ, u1 , u2 ) =
(0)
(0)
Wa (θ, u1 , u2 ) − Wa (u1 )Wa (u2 ) (0)
.
Wa (θ, u1 , u2 )
The dependence of this coefficient on θ can be seen in Fig. 4.9 for u1 = u2 = 0.1.
4.3 Probability of the Number of Detections Experimental observation of the statistics of the number of particles is only possible through their detection. The detection process of neutrons is also an absorption reaction; a certain fraction of the absorptions, namely when the absorption takes place with the nuclei of the detector material, counts as detection. In the framework of the description, the detector must conform with the idealised model used so far, according to which the branching process takes place in an infinite homogeneous medium. Hence, it will be supposed that in the multiplying system, there exist objects – call them detecting particles – in a uniform distribution which, if they
103
Special Probabilities
absorb a particle of the branching process, give a signal suitable for observation. The counting of the signals is the process of registration or recording. Obviously, the time series of recorded signals is used as an information carrier for estimating the various parameters of the branching process. It is reasonable to select the concentration of the detecting particles at a level such that it practically does not influence the development of the observed branching process, but at the same time it supplies sufficient information such that one can draw conclusions on the process investigated at an acceptable significance level.
4.3.1 One-point distribution of the number of detected particles According to the previous notation conventions, Qf0 stands for the intensity of the reaction resulting in absorption. The intensity Qd of the detection reaction is also included in the total intensity Q. The ratio Qd = c << 1 Qf0
(4.118)
is called the detection efficiency, and c is obviously the probability that a particle absorption results in detection (registration). Investigate now a subcritical system driven by a source in an asymptotically stationary state. Let na (u) denote the number of absorbed particles during the time interval u, and out of these let nd (u) ≤ na (u) denote the number of particles detected during the same time interval u. Determine the generating function E{znd (u) } = Gd (z, u) =
n ∞
P{nd (u) = k, na (u) = n}zk .
(4.119)
n=0 k=0
Since P{nd (u) = k, na (u) = n} = P{nd (u) = k|na (u) = n} P{na (u) = n}, hence Gd (z, u) =
∞
P{na (u) = n}
n=0
n
P{nd (u) = k|na (u) = n}zk .
(4.120)
k=0
Observe that P{nd (u) = k|na (u) = n} is the probability that k absorptions out of n ≥ k absorptions result in detection, thus n k P{nd (u) = k|na (u) = n} = c (1 − c)n−k . (4.121) k Then, it follows from (4.120) that Gd (z, u) =
∞
P{na (u) = n} [c z + 1 − c]n ,
(4.122)
Gd (z, u) = Ga [c z + 1 − c, u] ≡ Ga [c(z), u],
(4.123)
n=0
i.e. where Ga (z, u) =
∞
P{na = n}zn
(4.124)
n=0
is the generating function given by (4.18), and c(z) = c z + 1 − c.
(4.125)
104
Imre Pázsit & Lénárd Pál
It is easy to confirm that c(z) is the generating function of the binary probability pc (n) that an absorption leads to detection, which can be seen in an alternative derivation of Gd (z, u). Indeed, c(z) is the generating function of the probability pc (n) = (1 − c)δn,0 + c δn,1 ,
(4.126)
and from the independence of the absorptions of the individual particles, one directly obtains that Gd (z, u) = Ga [c(z), u].
(4.127)
For the expectation of the number of detections during the time interval u,
∂Gd (z, u) (d) , M1 (u) = ∂z z=1 one obtains the trivial result
(4.128)
∂Ga (z, u) (4.129) ∂z z=1 which can also be expected from elementary considerations. The next step is to determine the variance of the number of detections during the time interval u, (d)
(a)
M1 (u) = cM1 (u) = c
(d)
(d)
(d)
D2 {nd (u)} = M2 (u) + M1 (u)[1 − M1 (u)].
(4.130)
From (4.122) one obtains (d)
(a)
M2 (u) = c 2 M2 (u), hence
D2 {nd (u)} = c 2
2 (a) (a) (a) M2 (u) − M1 (u) + cM1 (u),
(4.131)
and so finally (a)
(a)
D2 {nd (u)} M (u) − [M1 (u)]2 . =1+c 2 (a) E{nd (u)} M1 (u)
(4.132)
According to equation (4.46), (a)
(a)
M2 (u) − [M1 (u)]2 (a)
M1 (u)
=
Qq1 a
2 D ν f0
1−
1 − e −au au
,
(4.133)
where a = Q(1 − q1 ) > 0 and Dν is the Diven factor of the number of particles generated per reaction, Dν = q2 /q12 . By virtue of this, the well-known Feynman formula or Feynman alpha formula (without delayed neutrons) is obtained from (4.132) and (4.133) as 1 − e −au Qq1 2 D2 {nd (u)} Dν f0 1 − =1+c . (4.134) E{nd (u)} a au The derivation of this formula with the inclusion of delayed neutrons is given in Part II, Chapter 9, and its use for reactivity measurement in traditional and accelerator driven systems is demonstrated in Chapters 9 and 10. In reality, the conditions under which (4.134) was derived (infinite homogeneous system in one energy group) are not fulfilled. Despite of this, the formula (4.134) is widely used, with the introduction of various correction factors, for investigations of branching processes in finite inhomogeneous systems, without even caring about the circumstance whether the detection takes place inside or outside the system. Unfortunately, the value of calculations trying to account for the geometrical properties is limited, since they concern the modelling of the real conditions whose validity is hardly possible to verify (i.e. when trying to measure the reactivity of a system of unknown composition and geometry). Despite of this, the formula (4.134) is believed to be useful in general, because it accounts for the most essential factors in the fluctuations of the number of detections.
105
Special Probabilities
4.3.2 Two-point distribution of the number of detected particles Let us consider now the Rossi-alpha method, which is based on the two-point distribution of the number of detected particles. Suppose that particles generating branching processes are randomly injected into the multiplying infinite medium. Let [t − u2 , t]
and [t − u2 − θ − u1 , t − u2 − θ]
be two disjoint intervals, and denote na (t , t − u1 ),
(t = t − u2 − θ),
and na (t, t − u2 )
the number of absorbed particles in one of the intervals and in the other one, respectively. Suppose that there was no particle in the multiplying system at time t = 0, but particle injection took place according to a Poisson process with intensity s0 in the interval (0, t]. Let nd (t , t − u1 ) ≤ na (t , t − u1 )
and
nd (t, t − u2 ) ≤ na (t, t − u2 )
(4.135)
be the number of the detected particles in the intervals [t − u1 , t ] and [t − u2 , t], respectively. Define the probability P{nd (t , t − u1 ) = k1 , nd (t, t − u2 ) = k2 |n(0) = 0} = Pd (k1 , k2 , t, θ, u1 , u2 ).
(4.136)
From the foregoing discussion it follows that the generating function Gd (z1 , z2 , t, θ, u1 , u2 ) =
∞ ∞
Pd (k1 , k2 , t, θ, u1 , u2 )z1k1 z2k2
(4.137)
k1 =0 k2 =0
is given as Gd (z1 , z2 , t, θ, u1 , u2 ) = Ga [c(z1 ), c(z2 ), t, θ, u1 , u2 )],
(4.138)
where Ga [ · · · ] corresponds to the generating function (4.49). If the system is subcritical, there exists the limit value lim Gd (z1 , z2 , t, θ, u1 , u2 ) = Gd∗ (z1 , z2 , θ, u1 , u2 ) = Ga∗ [c(z1 ), c(z2 ), θ, u1 , u2 ],
t→∞
(4.139)
from which the moments characterising the basic properties of the stationary system can be determined. (1) In the following, let nd (u1 ) denote the number of particles detected during the time interval u1 , while (2) (1) (2) nd (u2 ) the same during the interval u2 following u1 with a time difference θ. Similarly, let na (u1 ) and na (u2 ) denote the number of absorbed particles during the time interval u1 and u2 , respectively. Making use of the material of subsection 4.2, based on equation (4.139), it follows that
∗ ∂Gd (z1 , z2 , θ, u1 , u2 ) Qf0 (1) (d) E{nd (u1 )} = = M1,0 (u1 ) = c (4.140) s 0 u1 , ∂z1 a z1 =z2 =1 and
(2)
E{nd (u2 )} =
∂Gd∗ (z1 , z2 , θ, u1 , u2 ) ∂z2
(d)
z1 =z2 =1
= M0,1 (u2 ) = c
Qf0 s0 u2 . a
(4.141)
For the forthcoming derivation we need the mixed second moment
(1) E{nd (u1 )
(2) nd (u2 )}
∂2 Gd∗ (z1 , z2 , θ, u1 , u2 ) = ∂z1∂z2
(d)
z1 =z2 =1
= M1,1 (θ, u1 , u2 )
(4.142)
106
Imre Pázsit & Lénárd Pál
which can be obtained from (4.139) and (4.49) in the following form: (d)
M1,1 (θ, u1 , u2 ) =
1 2 c 2
Qf0 q1 a
2
1 − e −au1 1 − e −au2 −aθ e s0 u1 Qu2 au1 au2
(d)
Dν
(4.143)
(d)
+ M1,0 (u1 ) M0,1 (u2 ). The well-known Rossi-alpha formula (for prompt neutrons only) can be derived from this expression, provided that the intervals u1 → du and u2 → dθ are infinitesimally small. It is obvious that in this case (d) M1,1 (θ, du, dθ)
=c
2
Qf0 a
2
1Q 2 1+ q Dν e −aθ 2 s0 1
s02 du dθ + o(du dθ).
(4.144)
(d)
It can be shown that by neglecting the terms o(du dθ), M1,1 is the probability that one detection occurs in the interval du, and another one in the interval dθ, following time θ later. For the proof, we assume that the detecting events are rare, i.e. the relationship Pd∗ (θ, du, dθ, k1 , k2 ) F(θ), if k1 = k2 = 1, lim (4.145) = 0, if k1 > 1, k2 > 1, du→0 du dθ dθ→0
is satisfied. Of course, the requirement lim Pd∗ (θ, du, dθ, k1 , k2 ) = δk1 ,0 δk2 ,0
du→0 dθ→0
is also valid independently from the rarity property. If the moment (d)
M1,1 (θ, du, dθ) exists, then the series ∞ ∞
k1 k2 Pd∗ (θ, du, dθ, k1 , k2 )
k1 =1 k2 =1
is absolutely convergent, thus the order of the operations of limit taking and summation is reversible. Accounting for (4.145), it follows that M1,1 (θ, du, dθ) = Pd∗ (θ, du, dθ, k1 = 1, k2 = 1) + o(du dθ) = F(θ)du dθ + o(du dθ). (d)
(4.146)
Hence, it is seen that by neglecting the terms o(du dθ) F(θ)du dθ = c
2
Qf0 a
2
1Q 2 −aθ 1+ s02 du dθ q Dν e 2 s0 1
(4.147)
is the probability that one detection occurs in the interval du, and another one in dθ at time θ later. In view of the fact that the probability that one detection occurs in the interval du is Wd (du, 1) = c
Qf0 s0 du, a
(4.148)
the probability that one detection takes place in the interval dθ provided that a detection took place exactly time θ earlier is given as F(θ)du dθ C(θ)dθ = . (4.149) Wd (du, 1)
107
Special Probabilities
Hence it follows that
Qf0 1Q 2 −aθ C(θ)dθ = c dθ. (4.150) q Dν e s0 1 + a 2 s0 1 This is the renowned Rossi-alpha formula without delayed neutrons, which can only be considered as a probability, even approximately, if both intervals dθ and du in (4.149) are infinitesimally small. The derivation of this formula with the inclusion of delayed neutrons is given in Part II, Chapter 9, and its use for reactivity measurements in accelerator driven systems is demonstrated in Chapter 10.
4.4 Probability of the Number of Renewals In a branching process, a renewal takes place if the reaction induced by a particle leads to the birth of a new particle. First, determine the probability pb (t, u, n) of the event defined already in (4.2), that exactly n renewals take place in the interval [t − u, t], provided that there was one particle in the multiplying system at time t = 0. Again, as previously (see (4.4)), one can write Xb (n, t), if t ≤ u, pb (n, t, u) = (4.151) Yb (n, t, u), if t ≥ u. Of course, the equality Xb (n, u) = Yb (n, u, u) has to be satisfied. For the generating function gb (z, t, u) =
∞
pb (n, t, u)zn
(4.152)
n=0
of the probability pb (n, t, u), from the backward Kolmogorov equation t pb (n, t, u) = e −Qt δn0 + Q f1 e −Qt [(t − u − t )pb (n, t − t , u) + (t + u − t)pb (n − 1, t − t , u)]dt 0
− Q f1
t
e
−Qt
pb (n, t − t , u)dt + Q
⎡
0
t
e
−Qt
⎣f0 δn0 +
0
∞ k=1
fk
k
⎤ pb (nj , t − t , u)⎦dt ,
n1 +···+nk =n j=1
after an appropriate rearrangement, one obtains the integral equation t t gb (z, t, u) = e −Qt + Q f1 (1 − z) e −Q(t−t ) [(t − u) − 1]gb (z, t , u)dt + Q e −Q(t−t ) q[gb (z, t , u)]dt . 0
0
From this, by derivation with respect to t, one arrives at the non-linear differential equation ∂gb (z, t, u) = −Q{1 − f1 [(t − u) − 1](1 − z)}gb (z, t, u) + Q q[gb (z, t, u)] ∂t with the initial condition gb (z, 0, u) = 1. Based on (4.151), one can write h (z, t), if t ≤ u, gb (z, t, u) = b kb (z, t, u), if t ≥ u,
(4.153)
(4.154)
where hb (z, t) =
∞ n=0
Xb (n, t)zn ,
(4.155)
108
Imre Pázsit & Lénárd Pál
and kb (z, t, u) =
∞
Yb (n, t, u)zn .
(4.156)
n=0
Of course, the differential equations ∂hb (z, t) = −Q[1 + f1 (1 − z)]hb (z, t) + Qq[hb (z, t)], ∂t
t ≤ u,
(4.157)
and ∂kb (z, t, u) = −Q kb (z, t, u) + Q q[kb (z, t, u)], ∂t also hold, for which the conditions
t≥u
(4.158)
hb (z, 0) = 1 and hb (z, u) = kb (z, u, u) are fulfilled. Note that while the (4.157) differs even in its form from (4.10), equation (4.158) has exactly the same form as (4.11). Theorem 26. Similarly to Theorem 22, it can also be easily proved that if the solution hb (z, t) of (4.157) is known, then the solution of (4.158) can be given in the following form: kb (z, t, u) = g[hb (z, u), t − u],
(4.159)
where g(z, t) is the solution of (1.29) with the initial condition g(z, 0) = z. Hence, for an arbitrary t ≥ 0 the formula (4.154) is given in the form gb (z, t, u) = (u − t) hb (z, t) + (t − u)g[hb (z, u), t − u].
(4.160)
Suppose that the multiplying system does not contain any particles at t = 0, hence n(0) = 0; moreover that particles enter the system during the interval [0, t] according to a Poisson process with intensity s0 . Let Nb (t − u, t) denote the number of particles renewed in the interval [t − u, t]. It is evident that the generating n function Gb (z, t, u) = ∞ n=0 Pb (n, t, u)z of the probability P{Nb (t − u, t) = n|n(0) = 0} = Pb (n, t, u) satisfies the equation
log Gb (z, t, u) = s0
t
(4.161)
[gb (z, t , u) − 1]dt ,
(4.162)
0
from which the moments of the number of renewals occurring in the interval [t − u, t] can be calculated.
4.4.1 Expectation and variance of the number of renewals From equation (4.162) one can immediately write
∞ ∂ log Gb (z, t, u) (b) (b) E{Nb (t − u, t)} = = M1 (t, u) = s0 m1 (t , u)dt , ∂z 0 z=1 where (b) m1 (t , u)
= E{nb
(t
− u, t )}
∂gb (z, t , u) = ∂z
(4.163)
. z=1
(4.164)
109
Special Probabilities
The variance of Nb (t − u, t) can also be determined from (4.162), since
2 ∂ log Gb (z, t, u) (b) D2 {Nb (t − u, t)} = + M1 (t, u), ∂z2 z=1 where
∂2 log Gb (z, t, u) ∂z2
and (b) m2 (t , u)
= s0
t
m2 (t , u)dt , (b)
0
z=1
(4.165)
∂2 gb (z, t , u) = ∂z2
. z=1
From this, it is seen that the variance to mean can be given in the following form: D2 {Nb (t − u, t)} (b) M1 (t, u)
=1+
s0 (b) M1 (t, u)
0
t
m2 (t , u)dt , (b)
(4.166)
which shows that the deviation of the process Nb (t, t − u) from the Poisson process is expressed by the second term of the right-hand side of (4.166). For determining the probabilities (4.163) and the variance (4.165), we (b) (b) need the factorial moments m1 (t, u) and m2 (t, u). From equation (4.153), based on (4.164), one obtains (b)
dm1 (t, u) (b) = αm1 (t, u) − Q f1 [(t − u) − 1], dt whose form agrees exactly with that of (4.23), which was discussed earlier, by substituting f1 into f0 . Thus, it is obvious that e αt − 1 1 − e −αu (b) m1 (t, u) = (u − t)Q f1 + (t − u)Q f1 e αt , (4.167) α α if α = 0 and (b)
m1 (t, u) = Q f1 [t − (t − u)(t − u)],
(4.168)
if α = 0. Based on this, (b)
M1 (t, u) = s0
αt
Q f1 e −1 Q f1 1 − e −αu (u − t) (t − u) e αt −t + −u , α α α α
if α = 0 and (b) M1 (t, u)
t−u 2 1 2 , = s0 Q f1 t 1 − (t − u) 2 t
(4.169)
(4.170)
if α = 0. In the subcritical case, when α = −a < 0, the process Nb (t − u, t) converges to a stationary process if t ⇒ ∞, since Q f1 (b) lim M (t, u) = s0 u . (4.171) t→∞ 1 a (b)
For the second factorial moment m2 (t, u), from (4.153) the following differential equation is obtained: (b)
dm2 (t, u) (b) (b) (b) = αm2 (t, u) + 2Q f1 [1 − (t − u)]m1 (t, u) + Qq2 [m1 (t, u)]2 . dt
(4.172)
110
Imre Pázsit & Lénárd Pál
Introducing the notation (b)
(b)
m1 (t, u) = m1 (t),
if 0 ≤ t ≤ u,
and solving the equation (b)
dm2 (t) (b) (b) (b) = αm2 (t) + 2Q f1 m1 (t) + Qq2 [m1 (t)]2 dt (b)
with the initial condition m2 (0) = 0 yields (b)
m2 (t) = q2
Q α
3
f12 (e 2αt − 2αt e αt − 1) − 2
if α = 0, and
Q α
2 f12 (e αt − αt e αt − 1),
(4.173)
(b) m2 (t)
=
1 1 + q2 Qt , 3
f12 (Qt)2
(4.174)
if α = 0. Using (4.160) gives (b)
(b)
(b)
(b)
m2 (t, u) = (u − t)m2 (t) + (t − u){m2 (t − u)[m1 (u)]2 + m1 (t − u)m2 (u)}, where m1 (t) = e αt , and
m2 (t) =
q2 Q e αt e q2Qt,
αt −1
α
(4.175)
if α = 0, if α = 0.
,
Based on (4.175), the variance to mean (4.166) can be calculated, although its explicit form will not be given here. However, the limit ∞ (b) (b) lim D2 {Nb (t − u, t)} = M1 (∞, u) + s0 m2 (t, u)dt, t→∞
0
which exists if α = −a < 0, i.e. if the system is subcritical, will be determined. The integral on the right-hand side can easily be calculated from the Laplace transform ∞ (b) (b) m˜ 2 (s, u) = e −st m2 (t, u)dt, 0
since
0
∞
(b)
(b)
m2 (t, u)dt = m˜ 2 (0, u).
(b)
The Laplace transform m˜ 2 (s, u) can easiest be obtained from the differential equation (4.172), since (b)
m˜ 2 (s, u) =
1 [U (s, u) + V (z, u)], s+a
where
u
U (s, u) = 2Q f1 0
and
V (s, u) = q2 Q 0
∞
e −st m1 (t)dt, (b)
e −st [m1 (t, u)]2 dt. (b)
111
Special Probabilities
Since
U (s, u) = 2
Q f1 a
one can write that
2 1 − e −su 1 − e −(s+a)u a − , s s+a
U (0, u) = 2
Q f1 a
(b)
2
1 − e −au . au 1 − au
(a)
It was seen that m1 (t, u) formally agrees with m1 (t, u), only one has to substitute f1 into f0 . Hence, based on the calculations performed in the previous subsection, one arrives at V (0, u) = q2 Qu
Q f1 a
2 1−
1 − e −au au
.
Eventually, in view of all the above, one obtains that
(b)
lim D2 {Nb (t − u, t)} = M1 (∞, u) + s0
t→∞
=
(b) M1 (∞, u)
∞
0
(b)
m2 (t, u)dt
Q f1 1+2 a
1 Q 1 − e −au 1 + q2 . 1− 2 a au
(4.176)
It is seen that the dependence of the variance lim t→∞ D2 {Nb (t − u, t)} on u agrees formally with the udependence of the stationary random process limt→∞ Na (t, t − u) of (4.46), derived in Section 4.2.2.
4.4.2 Correlation function of the number of renewals Let [t − u2 , t] and [t − u2 − θ − u1 , t − u2 − θ] be two mutually non-overlapping intervals and by using the notation t = t − u2 − θ let Nb (t − u1 , t )
and
Nb (t − u2 , t)
denote the number of renewed particles in the first and the second interval, respectively, in the case when there were no particles at time t = 0 in the multiplying system, but particles were injected in the interval [0, t] according to a Poisson process. The goal is to calculate the correlation function RNb ,Nb (t, θ, u1 , u2 ) = E{[Nb (t − u1 , t ) − M1 (t , u1 )][Nb (t − u2 , t) − M1 (t, u2 )]}, (b)
(b)
(4.177)
in which t = t − u2 − θ. The solution can be obtained as follows. Let nb (t − u1 , t )
and
nb (t − u2 , t)
be the number of renewed particles in the interval [t − u1 , t ], and [t − u2 , t], respectively, provided that there was one particle in the multiplying system at time t = 0, i.e. the condition n(0) = 1 was fulfilled. Determine first the generating function gb (z1 , z2 , t, θ, u1 , u2 ) =
∞ ∞
pb (n1 , n2 , t, θ, u1 , u2 )z1n1z2n2
n1 =0 n2 =0
of the probability P{nb (t − u1 , t ) = n1 , nb (t − u2 , t) = n2 |n(0) = 1} = pb (n1 , n2 , t, θ, u1 , u2 ).
(4.178)
112
Imre Pázsit & Lénárd Pál
By knowing this and following the considerations in Chapter 3 concerning with the injection, for the generating function ∞ ∞ Pb (n1 , n2 , t, θ, u1 , u2 )z1n1z2n2 (4.179) Gb (z1 , z2 , t, θ, u1 , u2 ) = n1 =0 n2 =0
of the probability P{Nb (t − u1 , t ) = n1 , Nb (t − u2 , t) = n2 } = Pb (n1 , n2 , t, θ, u1 , u2 ), one can write down the equation
t
log Gb (z1 , z2 , t, θ, u1 , u2 ) = s0
[gb (z1 , z2 , v, θ, u1 , u2 ) − 1]dv.
(4.180)
0
In view that
RNb ,Nb (t, θ, u1 , u2 ) =
∂2 log Gb (z1 , z2 , t, θ, u1 , u2 ) ∂z1∂z2
the relationship
RNb ,Nb (t, θ, u1 , u2 ) = s0 0
t
, z1 =z2 =1
(b)
m2 (s, θ, u1 , u2 ; 1, 2)ds,
(4.181)
immediately follows, in which
(b)
m2 (t, θ, u1 , u2 ; 1, 2) =
∂2 gb (z1 , z2 , t, θ, u1 , u2 ) ∂z1∂z2
(4.182) z1 =z2 =1
is the mixed second moment E{nb (t − u1 , t )nb (t − u2 , t}. Thus one has to write down the equation determining the generating function gb (z1 , z2 , t, θ, u1 , u2 ). Without repeating the steps used already in the previous subsection, one arrives at t −Qt gb (z1 , z2 , t, θ, u1 , u2 ) = e + Q f1 e −Q(t−s) {[(s − u2 − θ − u1 ) − (s − u2 − θ)](1 − z1 ) 0
t
+ [(s − u2 ) − 1](1 − z2 )}gb (z1 , z2 , s, θ, u1 , u2 )ds + Q
e −Q(t−s) q[gb (z1 , z2 , s, θ, u1 , u2 )]ds,
0
(4.183) The above is of course equivalent to the differential equation ∂gb (z1 , z2 , t, θ, u1 , u2 ) = −Qgb (z1 , z2 , t, θ, u1 , u2 ) + Qq[gb (z1 , z2 , t, θ, u1 , u2 )] ∂t + Q f1 {[(t − u2 − θ − u1 ) − (t − u2 − θ)](1 − z1 ) + [(t − u2 ) − 1](1 − z2 )}gb (z1 , z2 , t, θ, u1 , u2 ) (4.184) with the initial condition gb (z1 , z2 , 0, θ, u1 , u2 ) = 1. According to (4.182), one obtains from (4.184) (b)
dm2 (t, θ, u1 , u2 ; 1, 2) (b) (b) (b) = αm2 (t, θ, u1 , u2 ; 1, 2) + q2 Qm1 (t, θ, u1 , u2 ; 1)m1 (t, θ, u1 , u2 ; 2) dt (b)
(b)
+ Q f1 {[(θ + u2 + u1 − t) − (θ + u2 )]m1 (t, θ, u1 , u2 ; 2) + (u2 − t)m1 (t, θ, u1 , u2 ; 1)}, (4.185) where, regarding that t = t − u2 − θ, (b) m1 (t, θ, u1 , u2 ; 1)
= E{nb
(t
− u1
, t )}
∂gb (z1 , z2 , t, θ, u1 , u2 ) = ∂z1
z1 =z2 =1
113
Special Probabilities
and
(b) m1 (t, θ, u1 , u2 ; 2)
∂gb (z1 , z2 , t, θ, u1 , u2 ) = E{nb (t − u2 , t)} = ∂z2
. z1 =z2 =1
For these moments, in a subcritical medium (α = −a < 0), from (4.184) after a short calculation one obtains the formulae (b)
m1 (t, θ, u1 , u2 ; 1) =
Q f1 [(1 − e −a(t−θ−u2 ) )(t − θ − u2 ) − (1 − e −a(t−θ−u2 −u1 ) )(t − θ − u2 − u1 )], (4.186) a
and Q f1 [1 − e −at − (1 − e −a(t−u2 ) )(t − u2 )]. a In the following, only subcritical media will be considered, i.e. when α = −a < 0. In this case (b)
m1 (t, θ, u1 , u2 ; 2) =
Q f1 , a
(4.188)
Q f1 , a
(4.189)
lim E{Nb (t , t − u1 )} = M1 (∞, θ, u1 , u2 ; 1) = s0 u1 (b)
t→∞
(4.187)
where t = t − u2 − θ and (b)
lim E{Nb (t, t − u2 )} = M1 (∞, θ, u1 , u2 ; 2) = s0 u2
t→∞
which means that Nb (t, t − u) converges to a stationary random process if t → ∞. The limit value lim RNb ,Nb (t, θ, u1 , u2 )
t→∞
of the correlation function RNb ,Nb (t, θ, u1 , u2 ) in the case of α = −a < 0 is given by the expression (b) s0 m˜ 2 (0, θ, u1 , u2 ; 1, 2), where ∞ (b) (b) m˜ 2 (0, θ, u1 , u2 ; 1, 2) = m2 (t, θ, u1 , u2 ; 1, 2)dt. 0
After simple but laborious calculations, one obtains lim RNb ,Nb (t, θ, u1 , u2 ) = s0 Qu1 u2 f12
t→∞
Q a
1 Q 1 − e −au1 1 − e −au2 −aθ e . 1 + q2 2 a au1 au2
(4.190)
Formally, this expression depends on the variables θ, u1 , u2 exactly the same way as (4.71) does. The correlation function of the stationary subcritical renewal process exponentially decreases with the time θ separating the intervals u1 and u2 .
4.5 Probability of the Number of Multiplications

In a branching process, multiplication takes place if a reaction induced by a particle leads to the birth of more than one new particle. Determine the probability $p_m(n,t,u)$ of the event $\{n_m(t-u,t)=n\,|\,n(0)=1\}$ defined already in (4.3). The previously elaborated method can be applied here as well, hence the details of the calculations will be omitted. For the generating function

$$
g_m(z,t,u) = \sum_{n=0}^{\infty} p_m(n,t,u)\,z^n
\tag{4.191}
$$
of the probability $P\{n_m(t-u,t)=n\,|\,n(0)=1\} = p_m(n,t,u)$, from the backward Kolmogorov equation,

$$
p_m(n,t,u) = e^{-Qt}\delta_{n0} + Qf_0\int_0^t e^{-Q(t-t')}\delta_{n0}\,dt' + Qf_1\int_0^t e^{-Q(t-t')} p_m(n,t',u)\,dt'
+ Q\int_0^t e^{-Q(t-t')}\Bigl[\Delta(t'-u)\sum_{k=2}^{\infty} f_k \sum_{n_1+\cdots+n_k=n}\;\prod_{j=1}^{k} p_m(n_j,t',u)
+ \Delta(u-t')\sum_{k=2}^{\infty} f_k \sum_{n_1+\cdots+n_k=n-1}\;\prod_{j=1}^{k} p_m(n_j,t',u)\Bigr]dt',
$$

after appropriate rearrangement and differentiation with respect to $t$, one arrives at the non-linear differential equation

$$
\frac{\partial g_m(z,t,u)}{\partial t} = -Qg_m(z,t,u) + Qq[g_m(z,t,u)] + Qf_1\Delta(u-t)(1-z)\,g_m(z,t,u)
- Q\Delta(u-t)(1-z)\,q[g_m(z,t,u)] + Qf_0\Delta(u-t)(1-z)
\tag{4.192}
$$
with the initial condition $g_m(z,0,u)=1$.

Similarly as before, suppose here again that the multiplying system does not contain particles at time $t=0$, i.e. $n(0)=0$, and that particles enter the system in the interval $[0,t]$ according to a Poisson process with intensity $s_0$. Let $N_m(t-u,t)$ denote the number of multiplications in the interval $[t-u,t]$. Obviously, the generating function

$$
G_m(z,t,u) = \sum_{n=0}^{\infty} P_m(n,t,u)\,z^n
\tag{4.193}
$$

of the probability $P\{N_m(t-u,t)=n\,|\,n(0)=0\} = P_m(n,t,u)$ satisfies the equation

$$
\log G_m(z,t,u) = s_0\int_0^t \bigl[g_m(z,t',u)-1\bigr]\,dt',
\tag{4.194}
$$

from which the moments of the number of multiplications occurring in the interval $[t-u,t]$ can be calculated.
4.5.1 Expectation and variance of the number of multiplications

From equation (4.194), one has

$$
E\{N_m(t-u,t)\} = \left.\frac{\partial \log G_m(z,t,u)}{\partial z}\right|_{z=1} = M_1^{(m)}(t,u) = s_0\int_0^t m_1^{(m)}(t',u)\,dt',
\tag{4.195}
$$

where

$$
m_1^{(m)}(t',u) = E\{n_m(t'-u,t')\} = \left.\frac{\partial g_m(z,t',u)}{\partial z}\right|_{z=1}.
\tag{4.196}
$$

The variance of $N_m(t-u,t)$ can also be determined from (4.194), since

$$
D^2\{N_m(t-u,t)\} = \left.\frac{\partial^2 \log G_m(z,t,u)}{\partial z^2}\right|_{z=1} + M_1^{(m)}(t,u),
\tag{4.197}
$$

where

$$
\left.\frac{\partial^2 \log G_m(z,t,u)}{\partial z^2}\right|_{z=1} = s_0\int_0^t m_2^{(m)}(t',u)\,dt'
\tag{4.198}
$$

and

$$
m_2^{(m)}(t',u) = \left.\frac{\partial^2 g_m(z,t',u)}{\partial z^2}\right|_{z=1}.
\tag{4.199}
$$
For the calculation of the expectation (4.195) and the variance (4.197), we need the factorial moments $m_1^{(m)}(t,u)$ and $m_2^{(m)}(t,u)$. Based on (4.196), one obtains from (4.192) that

$$
\frac{dm_1^{(m)}(t,u)}{dt} = \alpha m_1^{(m)}(t,u) + Qf^{(m)}\Delta(u-t),
\tag{4.200}
$$

where

$$
f^{(m)} = 1 - f_0 - f_1 = \sum_{k=2}^{\infty} f_k > 0
\tag{4.201}
$$

is the probability that the reaction results in multiplication. The solution of (4.200) with the initial condition $m_1^{(m)}(0,u)=0$ is

$$
m_1^{(m)}(t,u) = \Delta(u-t)\,Qf^{(m)}\frac{e^{\alpha t}-1}{\alpha} + \Delta(t-u)\,Qf^{(m)}e^{\alpha t}\,\frac{1-e^{-\alpha u}}{\alpha},
\tag{4.202}
$$

if $\alpha\ne0$, and

$$
m_1^{(m)}(t,u) = Qf^{(m)}\bigl[t - (t-u)\Delta(t-u)\bigr],
\tag{4.203}
$$

if $\alpha=0$. Based on this,

$$
M_1^{(m)}(t,u) = s_0\frac{Qf^{(m)}}{\alpha}\left\{\Delta(u-t)\left[\frac{e^{\alpha t}-1}{\alpha} - t\right] + \Delta(t-u)\left[\frac{1-e^{-\alpha u}}{\alpha}\,e^{\alpha t} - u\right]\right\},
\tag{4.204}
$$

if $\alpha\ne0$, and

$$
M_1^{(m)}(t,u) = \frac{1}{2}s_0 Qf^{(m)} t^2\left[1 - \Delta(t-u)\left(\frac{t-u}{t}\right)^2\right],
\tag{4.205}
$$

if $\alpha=0$. In a subcritical medium, when $\alpha=-a<0$, the process $N_m(t-u,t)$ converges to a stationary process if $t\to\infty$, since

$$
\lim_{t\to\infty} M_1^{(m)}(t,u) = s_0 u\,\frac{Qf^{(m)}}{a}.
\tag{4.206}
$$
Based on (4.199), from (4.192) for the second factorial moment m2 (t, u) one obtains the differential equation: (m)
dm2 (t, u) (m) (m) = αmm(b) (t, u) + Qq2 [m1 (t, u)]2 + 2Q(u − t)(q1 − f1 )m1 (t, u). dt Note that q1 − f1 =
∞
kfk > 0.
k=2
In a subcritical medium when α = −a < 0, determine the stationary variance ∞ (m) (m) lim D2 {Nm (t − u, t)} = M1 (∞, u) + s0 m2 (t, u)dt. t→∞
0
(4.207)
116
Imre Pázsit & Lénárd Pál
The integral on the right-hand side can easily be calculated, leading to (m)
lim D2 {Nm (t − u, t)} = M1 (∞, u) Q 2 (m) Q 1 − e −au (m) + M1 (∞, u) q2 f + 2 (1 − f1 ) − 2 1 − . a a au
t→∞
(4.208)
It is seen that the dependence of the variance limt→∞ D2 {Nm (t, t − u)} on u formally agrees with the u-dependence of the formulae derived previously for the stationary random processes lim Na (t, t − u)
t→∞
lim Nb (t, t − u).
and
t→∞
4.5.2 Correlation function of the number of multiplications Let [t − u2 , t] and [t − u2 − θ − u1 , t − u2 − θ] be two mutually non-overlapping intervals and by using the notation t = t − u2 − θ let Nm (t − u1 , t ) and Nm (t − u2 , t) denote the number of multiplications the first and the second interval, respectively. Suppose that there was no particle in the multiplying system at time t = 0, on the other hand particle injection took place according to a Poisson process in the interval [0, t]. The task is to calculate the correlation function RNm ,Nm (t, θ, u1 , u2 ) = E{[Nm (t − u1 , t ) − M1 (t , u1 )][Nm (t − u2 , t) − M1 (t, u2 )]} (m)
(m)
(4.209)
in which t = t − u2 − θ. For the solution, one can use the procedure known from the foregoing. Let nm (t − u1 , t )
nm (t − u2 , t)
and
be the number of multiplications taking place in the intervals [t − u1 , t ] and [t − u2 , t], respectively, in a medium in which exactly one particle existed at time t = 0. Determine first the generating function gm (z1 , z2 , t, θ, u1 , u2 ) =
∞ ∞
pm (n1 , n2 , t, θ, u1 , u2 )z1n1z2n2
(4.210)
n1 =0 n2 =0
of the probability P{nm (t − u1 , t ) = n1 , nm (t − u2 , t) = n2 |n(0) = 1} = pm (n1 , n2 , t, θ, u1 , u2 ). In possession of this and following the considerations in Chapter 3, one can write for the generating function Gm (z1 , z2 , t, θ, u1 , u2 ) =
∞ ∞
Pm (n1 , n2 , t, θ, u1 , u2 )z1n1z2n2
(4.211)
n1 =0 n2 =0
of the probability P{Nm (t − u1 , t ) = n1 , Nm (t − u2 , t) = n2 } = Pm (n1 , n2 , t, θ, u1 , u2 ) the equation
log Gm (z1 , z2 , t, θ, u1 , u2 ) = s0 0
t
[gm (s, θ, u1 , u2 , z1 , z2 ) − 1]ds.
(4.212)
117
Special Probabilities
From this, by considering the relationship
∂2 log Gm (z1 , z2 , t, θ, u1 , u2 ) RNm ,Nm (t, θ, u1 , u2 ) = ∂z1 ∂z2
, z1 =z2 =1
it follows immediately that RNm ,Nm (t, θ, u1 , u2 ) = s0
t
0
where
(m)
m2 (s, θ, u1 , u2 ; 1, 2)ds,
(m) m2 (t, θ, u1 , u2 ; 1, 2)
∂2 gm (z1 , z2 , t, θ, u1 , u2 ) = ∂z1 ∂z2
(4.213)
(4.214) z1 =z2 =1
is equal to the mixed second moment E{nm (t − u1 , t )nm (t − u2 , t}. Hence, for the solution we need to know the equation determining the generating function gm (z1 , z2 , t, θ, u1 , u2 ). One can immediately write t gm (z1 , z2 , t, θ, u1 , u2 ) = e −Qt + Q e −Q(t−v) {[(v − u2 − θ) − (v − u2 − θ − u1 )](1 − z1 ) 0
+ [1 − (v − u2 )](1 − z2 )}[f0 + f1 gm (z1 , z2 , v, θ, u1 , u2 )]dv t +Q e −Q(t−v) {1 − [(v − u2 − θ) − (v − u2 − θ − u1 )](1 − z1 ) 0
− [1 − (v − u2 )](1 − z2 )}q[gm (z1 , z2 , t − v, θ, u1 , u2 )]dv,
(4.215)
which is naturally equivalent to the differential equation ∂gm (z1 , z2 , t, θ, u1 , u2 ) = −Qgm (z1 , z2 , t, θ, u1 , u2 ) + Qq[gm (z1 , z2 , t, θ, u1 , u2 )] ∂t + Q[(u2 + θ − t) − (u2 + θ + u1 − t)](1 − z1 ) − Q(u2 − t)(1 − z2 )q[gm (z1 , z2 , t, θ, u1 , u2 )] − Q(u2 − t)(1 − z2 )[ f0 + f1 gm (z1 , z2 , t, θ, u1 , u2 )]
(4.216)
with the initial condition gm (z1 , z2 , 0, θ, u1 , u2 ) = 1. Based on (4.214), from (4.216) one arrives at (m)
dm2 (t, θ, u1 , u2 ; 1, 2) (m) = αm2 (t, θ, u1 , u2 ; 1, 2) dt (m)
(m)
+ q2Qm1 (t, θ, u1 , u2 ; 1)m1 (t, θ, u1 , u2 ; 2) + Q(q1 − f1 ){[(u2 + θ + u1 − t) − (u2 + θ − t)] (m)
(m)
× m1 (t, θ, u1 , u2 ; 2) + (u2 − t)m1 (t, θ, u1 , u2 ; 1)}, where, by noting that t = t − u2 − θ, (m) m1 (t, θ, u1 , u2 ; 1)
and
= E{nm
(t
− u1
, t )}
∂gm (z1 , z2 , t, θ, u1 , u2 ) = ∂z1
(m) m1 (t, θ, u1 , u2 ; 2)
∂gm (z1 , z2 , t, θ, u1 , u2 ) = E{nm (t − u2 , t)} = ∂z2
z1 =z2 =1
. z1 =z2 =1
(4.217)
118
Imre Pázsit & Lénárd Pál
For these moments, in the subcritical case when α = −a < 0, after a short calculation from (4.216) one obtains the formulae (b)
m1 (t, θ, u1 , u2 ; 1) =
Q f (m) (1 − e −a(t−θ−u2 ) )(t − θ − u2 ) a Q f (m) − (1 − e −a(t−θ−u2 −u1 ) )(t − θ − u2 − u1 ), a
and (m)
m1 (t, θ, u1 , u2 ; 2) =
Q f (m) [1 − e −at − (1 − e −a(t−u2 ) )(t − u2 )]. a
In this case Q f (m) , a
(4.218)
Q f (m) , a
(4.219)
lim E{Nm (t − u1 , t )} = M1 (∞, θ, u1 , u2 ; 1) = s0 u1 (m)
t→∞
where t = t − u2 − θ and (m)
lim E{Nm (t − u2 , t)} = M1 (∞, θ, u1 , u2 ; 2) = s0 u2
t→∞
which means that Nm (t − u, t) converges to a stationary random process if t → ∞. The limit value lim RNm ,Nm (t, θ, u1 , u2 )
t→∞
of the correlation function RNm ,Nm (t, θ, u1 , u2 ) in the case of α = −a < 0 is given by the expression (m) s0 m˜ 2 (0, θ, u1 , u2 ; 1, 2), where ∞ (m) (m) m˜ 2 (0, θ, u1 , u2 ; 1, 2) = m2 (t, θ, u1 , u2 ; 1, 2)dt. 0
After simple calculations one obtains lim RNm ,Nm (t, θ, u1 , u2 ) = s0Qu1 u2 (q1 − f1 )2 ×
t→∞
Q a
1 Q 1 − e −au1 1 − e −au2 −aθ e . 1 + q2 2 a au1 au2
(4.220)
This expression depends formally on the variables θ, u1 , u2 exactly the same way as the formula (4.71) does.
C H A P T E R
F I V E
Other Characteristic Probabilities
Contents 5.1 5.2 5.3 5.4 5.5
Introduction Distribution function of the survival time Number of Particles Produced by a Particle and Its Progeny Delayed Multiplication of Particles Process with Prompt and Delayed Born Particles
119 119 121 127 141
5.1 Introduction In this chapter, we shall discuss the distribution function of the survival time of a branching process in a given medium. We shall also investigate the distribution of the size of the population generated by a particle and its progeny, as well as we shall study a branching process whose evolution is influenced by the randomly delayed activity of particles born in a reaction. At the same time, we shall also deal with a process in which the reactions are capable to produce particles both promptly and with a random delay. Branching processes of this type play an important role in the theory of neutron noise.
5.2 Distribution Function of the Survival Time Let n(t) be the number of particles at time t and let τ > 0 be the time instant when the event {n(τ) = 0} occurs. It is obvious that after τ > 0 no reaction will occur. The random variable τ can be considered as the survival time of the process. Let P{τ ≤ t|n(0) = 1} = L(t) (5.1) be the probability that the survival time of the process is not larger than t, where t ∈ T . Since the events {n(t) = 0|n(0) = 1} and
{τ ≤ t|n(0) = 1}
are equivalent, one can write that P{τ ≤ t|n(0) = 1} = P{n(t) = 0|n(0) = 1} = p0 (t) = L(t).
(5.2)
It is seen that the extinction probability p0 (t) is, concurrently, also the distribution function of the survival time. More detailed calculations are worth to perform in the case when L(t) can be determined exactly. Accordingly, consider the case q(z) = f0 + f1 z + f2 z2 Neutron fluctuations ISBN-13: 978-0-08-045064-3
and
f0 + f1 + f2 = 1. © 2008 Elsevier Ltd. All rights reserved.
119
120
Imre Pázsit & Lénárd Pál
From (1.96) after some rearrangement, it follows that L(t) = 1 − R(t), where R(t) =
⎧ ⎪ ⎨ exp{αt} 1 + ⎪ ⎩
1+
−1 Qq2 , 2α (exp{αt} − 1)
if α = 0, (5.3)
−1 1 , 2 Qq2 t
if α = 0,
where α = Q(q1 − 1) and q2 = 2f2 . It is seen that in the case of α = −a ≤ 0 (i.e. in case of a subcritical or a critical process), the probability that the survival time of the process is less than infinite converges to 1. In the supercritical case, the survival time is infinite with the probability 2α/Qq2 , and less than infinite with the probability 1 − 2α/Qq2 . Calculate the moments ∞ ∞ E{τ n |n(0) = 1} = t n dL(t) = − t n dR(t), n = 1, 2, . . ., (5.4) 0
0
of the survival time τ of the population. Considering that the moments E{τ n |n(0) = 1} exist only if α = −a < 0, i.e. if the process is subcritical, the calculations will be obviously performed only for the case when α = −a < 0. For determining the moments, introduce the Laplace transform ϕ(z) = E{e −zτ |n(0) = 1} ∞ −zt = e dL(t) = − 0
From the equation
∞
e
−zt
dR(t) =
0
1
e −zR
−1 (x)
dx.
(5.5)
0
−1 Qq2 = x, e −at 1 + (1 − e −at ) 2a
one obtains t=R
−1
x(1 + γ) (x) = − log x+γ
hence
ϕ(z) =
1 x(1 + γ) z/a
0
where γ=2
x+γ
1/a ,
dx,
1 − q1 . q2
(5.6)
(5.7)
For the expectation, one obtains the expression
dϕ(z) E{τ|n(0) = 1} = − dz
z=0
1 γ . = log 1 + a γ
(5.8)
The variance of the survival time is given by the formula D2 {τ|n(0) = 1} = −
γ {(1 + γ)[ log (1 + 1/γ)]2 + 2Li2 (−1/γ)} a2
(5.9)
in which Li2 (· · · ) is the so-called Jonquière dilogarithmic function. The dependence of the relative standard deviation D{τ|n(0) = 1}/E{τ|n(0) = 1} on the parameter q1 is illustrated in Fig. 5.1. It is seen that the survival
121
Relative standard deviation
Other Characteristic Probabilities
2.4 q2 0.5
2.2 2
Q 0.4
1.8 1.6 1.4 1.2 0.8
0.85
0.9
0.95
1
Values of q1
Figure 5.1
Dependence of the relative standard deviation of the survival time on the parameter q1 .
time shows significant fluctuations which grow beyond all limits when approaching the critical state, although the probability that the survival time is less than infinite converges to 1. In the supercritical state certain processes may become extinct with the probability 1 − 2α/Qq2 , but irrespective of this, the fluctuation of their survival time is infinitely large.
5.3 Number of Particles Produced by a Particle and Its Progeny Let there be exactly one particle in the multiplying system at time t = 0 and denote Np (t) the number of all particles produced by this particle and its progeny in the interval [0, t], irrespective of how many of them get absorbed in the interval [0, t]. The sum of the progenies belonging to one particle is called a population. The random process Np (t) whose possible values are non-negative integers, gives the size of the population generated by the particle and its progeny during time t. The original particle that started the multiplying process is not counted in the population. Hence, if the size of the population is zero, it means that the single particle in the system at time t = 0 did not generate any particles in the interval [0, t]. The task is now to determine the probability P{Np (t) = n|n(0) = 1} = pp (n, t).
(5.10)
According to the procedure used in the foregoing, one can write pp (n, t) = e −Qt δn0 +Q
t
⎡
e −Q(t−t ) ⎣f0 δn0 +
0
∞
fk
k=1
k
⎤ pp (nj , t )⎦ dt ,
n1 +···+nk =n−k j=1
from which, for the generating function gp (z, t) =
∞
|z| ≤ 1,
pp (n, t)zn ,
(5.11)
n=0
the following integral equation is obtained: gp (z, t) = e −Qt + Q
0
t
e −Q(t−t ) q[zgp (z, t )]dt .
(5.12)
122
Imre Pázsit & Lénárd Pál
By derivating with respect to t, one obtains the differential equation ∂gp (z, t) = −Qgp (z, t) + Qq[zgp (z, t)] ∂t
(5.13)
associated with the initial condition: gp (z, 0) = 1.
(5.14)
This equation differs from the basic equation (1.29) only in that the argument of q is not the generating function but z times the generating function, and further the initial condition is not gp (z, 0) = z, rather it is given by (5.14). This latter just expresses the fact that the single particle starting the process and being present in the multiplying system at t = 0 is not counted into the population Np (t). The factorial moments can be calculated by the formula (p) mk (t)
=
∂k gp (z, t) ∂zk
. z=1
Determine first the expectation (p)
m1 (t) = E{Np (t)|n(0) = 1}. From (5.13) it follows that (p)
dm1 (t) (p) = αm1 (t) + Qq1 , dt
(5.15)
(p)
and the initial condition is m1 (0) = 0. The solution is equal to (p)
m1 (t) = Qq1
e αt − 1 , α
if α = 0,
(5.16)
and (p)
m1 (t) = Qt,
if α = 0.
(5.17)
In the case of a subcritical process, when α = −a < 0, one obtains that (p)
lim m1 (t) =
t→∞
q1 , 1 − q1
(5.18)
which shows that the expectation of the size of the population generated by one particle and its progeny is finite. If, however, q1 ↑ 1, i.e. if the system becomes critical, then the expectation becomes infinite. As a second step, calculate the variance 2 (p) (p) (p) D2 {Np (t)|n(0) = 1} = m2 (t) + m1 (t) − m1 (t)
(5.19) (p)
of the population. For this, the equation determining the second factorial moment m2 (t) is obtained from (5.13) as (p)
dm2 (t) (p) (p) (p) = αm2 (t) + 2Qq1 m1 (t) + Qq2 [1 + m1 (t)]2 dt
(5.20)
123
Other Characteristic Probabilities (p)
with the initial condition m2 (0) = 0. The solution can be written in the following form: (p)
e αt − 1 αt Qq1 1 − e −αt +2 (q1 + q2 )Qte αt 1 − α αt 2 Qq1 sinh αt + 2q2 − 1 , if α = 0, Qte αt α αt
m2 (t) = q2 Qt
(5.21)
and 1 (p) m2 (t) = q2 Qt + (1 + q2 )(Qt)2 + q2 (Qt)3 , 3
if α = 0.
(5.22)
(p)
By virtue of (5.16) and (5.17) derived for m1 (t), the variance D2 {Np (t)|n(0) = 1} can be obtained from (5.19) which, however, will not be given here. In the case when the process is subcritical, i.e. if α = −a < 0, then the variance D2 {Np (t)|n(0) = 1} tends to a finite value if t → ∞, namely to
1 q1 q1 q2 1+ . (5.23) + lim D2 {Np (t)|n(0) = 1} = t→∞ 1 − q1 1 − q1 q1 (1 − q1 )2 It follows from this that the random process Np (t) converges in mean square to the random variable Np∗ , i.e. the relation lim Np (t) = Np∗
t→∞
is valid, if and only if q1 < 1 and q2 < ∞.
5.3.1 Quadratic process Similarly to the foregoing, we again investigate the case when q(z) = f0 + f1 z + f2 z2 , and accordingly, 1 f0 = 1 − q1 + q2 , 2 f1 = q1 − q2 , 1 f2 = q2 . 2
(5.24) (5.25) (5.26)
Is noteworthy that the conclusions one can draw from the exactly solvable equation (due to the application of the present simple, quadratic q(z)) agree remarkably well with those one can (5.13) which draw from k . The differential corresponds to the generating function q(z) defined by the infinite power series ∞ f z k k=0 equation (5.13) in the present case can be given in the form dgp (z, t) = Qf2 z2 dt, [gp (z, t) − r1 ][gp (z, t) − r2 ]
(5.27)
124
Imre Pázsit & Lénárd Pál
where r1,2 =
% √ √ 1 $ 1 − f1 z ∓ 1 − az 1 − bz 2 2f2 z
(5.28)
are the two roots of the equation f2 z2 [gp (z, t)]2 − (1 − f1 z) gp (z, t) + f0 = 0. The constants a and b in (5.28) are given by the formulae a = f1 − 2 f0 f2 , b = f1 + 2 f0 f2 .
(5.29) (5.30)
Taking into account the initial condition gp (z, 0) = 1, one obtains gp (z, t) =
r2 (r1 − 1) − r1 (r2 − 1) exp{Qs(z)t} , r1 − 1 − (r2 − 1) exp{Qs(z)t}
(5.31)
where s(z) = f2 z2 (r2 − r1 ).
(5.32)
From this it can be immediately seen that the following limit generating function exists: lim gp (z, t) = gp∗ (z) = r1 =
t→∞
√ √ 1 (1 − f1 z − 1 − az 1 − bz). 2f2 z2
(5.33)
It is obvious that b > a and it can also easily be confirmed that1 b < 1 if (1 − q1 )2 > 0. In the case when q1 = 1, i.e. if the system is critical, then b = 1. By taking into account that gp∗ (z) =
∞
pp∗ (n)zn ,
n=0
we need to construct the power series of the expression (5.33) with respect to z in order to obtain the probabilities pp∗ (n), n = 0, 1, . . ., . This is relatively straightforward. Introduce the notation ck = ( − 1)k
1/2 (1/2 − 1) · · · (1/2 − k + 1) (3/2) = ( − 1)k , k! (3/2 − k)(k + 1)
and – assuming that (bz)2 ≤ 1 – let us write ∞ √ 1 − az = 1 + ck a k z k
and
√
1 − bz = 1 +
k=1
∞
ck b k z k .
k=1
Based on this, one can immediately confirm that ∞ √ √ 1 − az 1 − bz = 1 + c1 (a + b)z + [ck (ak + bk ) + dk ]zk , k=2
where dk =
k−1
cj ck−j aj bk−j .
j=1
proof is rather simple. From the inequality b = f1 + 2 f0 f1 < 1 it follows that (1 − f1 )2 > 4(1 − f1 )f2 − 4f22 , which is equivalent to 2 2 (1 − f1 − 2f2 ) = (1 − q1 ) > 0.
1 The
125
Other Characteristic Probabilities
Finally, one arrives at gp∗ (z) = −
∞ 1 [cn+2 (an+2 + bn+2 ) + dn+2 ]zn , 2f2 n=0
(5.34)
1 [cn+2 (an+2 + bn+2 ) + dn+2 ] 2f2
(5.35)
i.e. pp∗ (n) = −
is the probability that Np∗ = n. Figure 5.2 illustrates how the probability of a population containing a given number of particles decreases by the increasing of the particle number in the case of a subcritical system ( f0 = 0.30, f1 = 0.45, f2 = 0.25) and in a supercritical one ( f0 = 0.20, f1 = 0.55, f2 = 0.25), respectively. It is important to note that ∞ if q1 < 1, q1 /(1 − q1 ), ∗ n pp (n) = does not exist, if q1 ≥ 1. n=0 The figure clearly shows that a population containing a finite number of particles can occur with a large probability even in a supercritical system, but the expectation of the number of particles in the population is infinite. Based on Fig. 5.2, one can get the impression that pp∗ (n) is a monotonically decreasing function of the non-negative integer n. A careful analysis shows though that this is not the case. To each parameter q1 there corresponds a parameter qc (1) such that for any permitted values of q2 equal to or larger than that, the ratio R2k (q1 , q2 ) =
pp∗ (2k) pp∗ (2k − 1)
is larger than 1. 0
5
10
15
20
0.3
q1, q2
Probability
0.25
0.95, 0.5
0.2
1.05, 0.5
0.15 0.1 0.05 0
Figure 5.2
0
5
10 15 Population size
20
Probabilities of the population size. 0
5
10
Probability
0.5
15
20
q1 , q 2
0.4
0.90, 0.80
0.3
1.05, 0.95
0.2 0.1 0 0
Figure 5.3
5
10 15 Population size
Non-monotonic decrease of the population size.
20
126
Imre Pázsit & Lénárd Pál
Figure 5.3 illustrates, for the case of q1 = 0.9, a probability distribution in which the probabilities of the even populations are larger than those of the preceding odd populations. (c) Figure 5.4 demonstrates for three different values of q1 that the ratio R2 is larger than one, if q2 > q2 . (c) Table 5.1 contains, for a few values of the parameter q1 the values of q2 such that for q2 values above that value, R2 and R4 are larger than unity. It is worth investigating the dependence of the probability of the population size on the multiplication parameter q1 for fixed values of q2 . It can be expected that the maximum probability is shifted towards larger populations with increasing q1 . Figure 5.5 illustrates that – for relatively small q1 values – the probability that a population of an even number of particles is larger than that of the corresponding odd population containing one particle less.2 The
1.8
f0 q1 q2 q1 0.90
1.6
q1 0.95
1.4
q1 0.99
Ratio R2
2
1.2 1 0.8 0.6
0.4
0.5
0.6
0.7
0.8
Values of q2
Figure 5.4
Dependence of the ratio R2 on the parameter q2 for the three values of q1 . Table 5.1 Values of q(c) 2 for a few q1 values above which R 2 and R 4 are larger than unity R2
R4
q(c) 2
q(c) 2
0.90
0.7025
0.7393
0.95
0.7506
0.7876
1.00
0.8000
0.8370
1.05
0.8506
0.8876
q1
0.06 Probabilities
0.05 0.04 q2 0.5 n3 n4 n5
0.03 0.02 0.01 0 0.5
0.6
0.7
0.8
0.9
1
1.1
Values of q1
Figure 5.5 The probabilities of the population size as a function of q1 for a fixed q2 value. 2 It
can be seen in the figure that for values of q1 < 0.62, pp∗ (4) > pp∗ (3).
127
Other Characteristic Probabilities
fact that in a given case the probability of every population containing an odd number of particles is zero for q1 = 0.5 is related to the fact that3 f1 = q1 − q2 = 0.
5.4 Delayed Multiplication of Particles The process that will be discussed here differs from those discussed up to now basically in the fact that in the reaction generated with an intensity Q, the (previously inactive) particle only becomes active, whereas after a random time period τ, the active particle is either absorbed or renewed or multiplied. We assume that the new particles given rise from the active particles are inactive, and therefore they are not capable for absorption, renewal or multiplication, only to become active again. In the following, the inactive particles are denoted by R1 and the active ones by R2 . It is to be stressed here that this type of branching process differs essentially from the one represented by the neutron chain reactions with delayed neutrons. We assume that the lifetime of the active particles follows an exponential distribution, i.e. P{τ ≤ t} = 1 − e −λt ,
(5.36)
where λ > 0 and λ−1 is the expectation of the lifetime of particles. It would be possible to apply even more general assumptions; however, the assumption of an exponential distribution largely simplifies the calculations. The principal scheme of the process is illustrated in Fig. 5.6. Let n1 (t) denote the number of the inactive particles and n2 (t) that of the active particles at a time t ≥ 0, respectively. Define the following probabilities: P{n1 (t) = n1 , n2 (t) = n2 |S1 } = p(1) (n1 , n2 , t)
(5.37)
P{n1 (t) = n1 , n2 (t) = n2 |S2 } = p(2) (n1 , n2 , t),
(5.38)
and where S1 = {n1 (0) = 1, n2 (0) = 0} and S2 = {n1 (0) = 0, n2 (0) = 1}. (5.39) It is obvious that p(1) (n1 , n2 , t) is the probability that at t ≥ 0 there are n1 inactive and n2 active particles in the system, provided that there was one inactive particle at t = 0. Similarly, p(2) (n1 , n2 , t) is the probability that at t ≥ 0 there are n1 inactive and n2 active particles in the system, provided that there was one active particle at t = 0. Based on the usual considerations, it is easily verified that t p(1) (n1 , n2 , t) = e −Qt δn1 ,1 δn2 ,0 + Q e −Q(t−t ) p(2) (n1 , n2 , t )dt 0
Absorption
Q R1
R2
Renewal Multiplication
Figure 5.6 The principal scheme of the process.
3 This
follows from the fact that
and from this it is seen that for
pp∗ (3) = f2 f1 f02 + f1 f2 f02 + f1 f1 f1 f0 f1 = 0, pp∗ (3) = 0.
128
Imre Pázsit & Lénárd Pál
and p(2) (n1 , n2 , t) = e −λt δn1 ,0 δn2 ,1 + λ
⎡
t
e
−λ(t−t )
⎣f0 δn1 ,0 δn2 ,0
0
+
∞
fk
k
⎤ p(1) (uj , vj , t )⎦ dt .
u1 +···+uk =n1 v1 +···+vk =n2 j=1
k=1
By introducing the generating functions g (i) (z1 , z2 , t) =
∞ ∞
p(i) (n1 , n2 , t)z1n1 z2n2 ,
i = 1, 2,
(5.40)
e −Q(t−t ) g (2) (z1 , z2 , t )dt
(5.41)
n1 =0 n2 =0
one obtains from the previous two equations g (z1 , z2 , t) = e (1)
−Qt
t
z1 + Q
0
and g (2) (z1 , z2 , t) = e −λt z2 + λ
t
e −λ(t−t ) q[g (1) (z1 , z2 , t )]dt .
(5.42)
0
From these integral equations, by derivation with respect to t, the differential equations ∂g (1) (z1 , z2 , t) = −Qg (1) (z1 , z2 , t) + Qg (2) (z1 , z2 , t) ∂t
(5.43)
and ∂g (2) (z1 , z2 , t) = −Qg (2) (z1 , z2 , t) + Qq[g (1) (z1 , z2 , t)] ∂t are obtained with the initial conditions g (1) (z1 , z2 , 0) = z1
and
(5.44)
g (2) (z1 , z2 , 0) = z2 .
The factorial moments are given by the derivatives
∂r1 +r2 g (i) ∂zr1 ∂zr2
z1 =z2 =1
= mr(i)1 ,r2 (t),
i = 1, 2,
(5.45)
in which r1 and r2 are non-negative integers. In the following, we will only concern with the determination of the moments (i)
(i)
(i)
(i)
m1,0 (t), m0,1 (t), m1,1 (t), m2,0 (t)
and
(i)
m0,2 (t),
i = 1, 2,
since the investigations will be restricted to the time-dependence of the expectations, variances, and the covariance between the numbers of the active and inactive particles.
5.4.1 Expectations and their properties For the expectations (i)
E{n1 (t)|Si } = m1,0 (t)
and
(i)
E{n2 (t)|Si } = m0,1 (t)
129
Other Characteristic Probabilities
the following equations are obtained: (1) m1,0 (t)
=e
−Qt
m1,0 (t) = λq1
e −Q(t−t ) m1,0 (t )dt ,
0
(2)
t
+Q t
0
(2)
e −λ(t−t ) m1,0 (t )dt , (1)
(5.46) (5.47)
for the inactive particles, and (1) m0,1 (t)
t
=Q 0
e −Q(t−t ) m0,1 (t )dt ,
m0,1 (t) = e −λt + λq1 (2)
0
t
(2)
(5.48)
e −λ(t−t ) m0,1 (t )dt . (1)
(5.49)
for the active particles. Introduce the Laplace transforms (i) m˜ 1,0 (s)
=
∞
e 0
−st
(i) m1,0 (t)dt
and
(i) m˜ 0,1 (s)
=
∞
0
e −st m0,1 (t)dt, i = 1, 2, (i)
(5.50)
From (5.46) and (5.47), as well as (5.48) and (5.49), one has s+λ , (s + λ)(s + Q) − q1 Qλ q1 λ (2) m˜ 1,0 (s) = , (s + λ)(s + Q) − q1 Qλ Q (1) m˜ 0,1 (s) = , (s + λ)(s + Q) − q1 Qλ s+Q (2) m˜ 0,1 (s) = . (s + λ)(s + Q) − q1 Qλ (1)
m˜ 1,0 (s) =
(5.51) (5.52) (5.53) (5.54)
The equation (s + Q)(s + λ) − q1 Qλ = s2 + (Q + λ)s + (1 − q1 )Qλ = 0 has two real roots which can be written in the following form: 1 s1 = − Q(1 + r − d) 2
and
1 s2 = − Q(1 + r + d), 2
(5.55)
where r=
λ Q
and d =
(1 − r)2 + 4q1 r.
It can be immediately verified that s2 < 0 both in subcritical, critical and supercritical systems, while s1 =
<0, if q1 < 1, 0, if q1 = 1, >0, if q1 > 1.
(5.56)
130
Imre Pázsit & Lénárd Pál
By making use of s1 and s2 , equations (5.51)–(5.54) can be written in the following form: s+λ , (s − s1 )(s − s2 ) q1 λ (2) m˜ 1,0 (s) = , (s − s1 )(s − s2 ) Q (1) m˜ 0,1 (s) = , (s − s1 )(s − s2 ) s+Q (2) m˜ 0,1 (s) = . (s − s1 )(s − s2 ) (1)
m˜ 1,0 (s) =
(5.57) (5.58) (5.59) (5.60)
If q1 = 1 then s1 = 0, s2 = −Q(1 + r) and d = 1 + r. Accordingly, one obtains from the equations above that s+λ , s[s + Q(1 + r)] λ (2) m˜ 1,0 (s|cr) = , s[s + Q(1 + r)] Q (1) m˜ 0,1 (s|cr) = , s[s + Q(1 + r)] s+Q (2) m˜ 0,1 (s|cr) = , s[s + Q(1 + r)] (1)
m˜ 1,0 (s|cr) =
(5.61) (5.62) (5.63) (5.64)
where cr indicates that the formulae refer to critical systems. From these Laplace transforms, the dependence of the expectations on the parameter t is immediately obtained as 1 r − 1 s1 t 1 r − 1 s2 t (1) (5.65) m1,0 (t) = 1+ e + 1− e , 2 d 2 d r (2) m1,0 (t) = q1 (e s1 t − e s2 t ), (5.66) d 1 (1) m0,1 (t) = (e s1 t − e s2 t ), (5.67) d 1 r − 1 s1 t 1 r − 1 s2 t (2) m0,1 (t) = (5.68) 1− e + 1+ e , 2 d 2 d if q1 = 1. If q1 = 1, then r 1 −Q(1+r)t , + e 1+r 1+r r (2) m1,0 (t|cr) = [1 − e −Q(1+r)t ], 1+r 1 (1) m0,1 (t|cr) = [1 − e −Q(1+r)t ], 1+r 1 r (2) m0,1 (t|cr) = + e −Q(1+r)t . 1+r 1+r (1)
m1,0 (t|cr) =
(5.69) (5.70) (5.71) (5.72)
Figure 5.7 illustrates the expectation of the number of inactive particles as a function of t in subcritical, critical and supercritical systems for r = 5 and Q = 0.4. In the left-hand side figure, the time-dependence of
131
Other Characteristic Probabilities
1
lQ 5 q1 0.95 q1 1 q1 1.05
0.95 0.9 0.85
Expectations
Expectations
1
0.8 0.75 0.7
0.6
2
(a)
4 6 Time (t )
8
lQ 5 q1 0.95 q1 1 q1 1.05
0.4 S2
0.2
S1 0
0.8
0
10
0
2
4
(b)
6
8
10
Time (t )
Figure 5.7 Expectation values of the number of the inactive particles as a function of time t. S1 and S2 refer to the alternatives that there was one inactive particle R1 and one active particle R2 in the system at time t = 0, respectively. 0.2
l/Q 5 q1 0.95 q1 1 q1 1.05
0.15 l/Q 5 q1 0.95 q1 1 q1 1.05
0.1 S1 0.05
0 (a)
Expectations
Expectations
0.5
2
4
6
8
0.4
S2
0.3 0.2
10
0
2
(b)
Time (t )
4
6
8
10
Time (t )
Figure 5.8 The expectation of the number of active particles as a function of time t. S1 and S2 refer to the alternative that there was one inactive particle R1 and one active particle R2 in the system at the moment t = 0, respectively.
the expectations started by an inactive particle, whereas in the right-hand side figure that of the expectations started by an active particle can be seen. It can be noted that if the starting particle is an inactive one, then the expectation of the number of inactive particles in a supercritical system starts increasing only after an initial phase of decreasing. If the starting particle is active then the expectation of the number of inactive particles in a subcritical system increases at the beginning, then it starts decreasing after having reached a maximum. The (min) (max) values of the time parameters belonging to the minimum t1 and to the maximum t1 can be determined by (5.65) and (5.66), respectively: (min)
t1
(max)
t1
1 (d + 1)2 − r 2 log , if q1 > 1, Qd (d − 1)2 − r 2 1 r +1+d = log , if q1 < 1. Qd r +1−d =
(5.73) (5.74)
Figure 5.7 also shows that the expectation of the number of inactive particles in a critical system converges to the same value r/(1 + r) both in the case when the process is started either by an inactive or by an active particle. Figure 5.8 illustrates the expectation of the number of the active particles as a function of t in subcritical, critical and supercritical systems for r = 5 and Q = 0.4. The time-dependence of the expectations of a process started by an inactive particle can be seen in the left-hand side figure, while the same for a process initiated by an active particle is shown in the right-hand side one. It can be seen that if the starting particle is not active, the expectation exhibits a maximum value in a subcritical system, whereas if the starting particle is active, it exhibits a minimum in a supercritical system. From the formulae (5.68) and (5.67) one can determine the
132
Imre Pázsit & Lénárd Pál
1
q1 1.05 (1)
m 1,0
Expectations
0.8
(1)
m 0,1
0.6
(2)
t5
0.4
m 1,0 (2)
m 0,1
0.2 0
2
4
6
8
10
Values of r l/Q
Figure 5.9 Dependence of the expectations on the parameter r = λ/Q in a supercritical system at the time instant t = 5.
values t_2^{(min)} and t_2^{(max)} of the time belonging to the minimum and the maximum, respectively:

t_2^{(min)} = (1/(Qd)) \log[((r+d)^2 - 1)/((r-d)^2 - 1)],  if q_1 > 1,   (5.75)

t_2^{(max)} = (1/(Qd)) \log[(r+1+d)/(r+1-d)],  if q_1 < 1.   (5.76)
One notes that t_1^{(max)} = t_2^{(max)}. It is also seen that in a critical system, after a relatively short time, the expectation corresponding to the process started by an inactive particle becomes identical to that corresponding to the process started by an active particle, since both converge to the value 1/(1+r) as t → ∞. Figure 5.9 illustrates the expectations of the numbers of the active and inactive particles as functions of the parameter r = λ/Q at the time instant t = 5, in a slightly supercritical system. It can be seen that if the starting particle is inactive, then the expectations of the numbers of both the active and the inactive particles increase with increasing r, while if the starting particle is active, then they decrease with increasing r. This behaviour can easily be interpreted based on the meaning of λ and Q.
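As a quick numerical cross-check of (5.73) — a hedged sketch, since it assumes the explicit form of the expectation from (5.65), m_{1,0}^{(1)}(t) = [(d+r-1)e^{s_1 t} + (d-r+1)e^{s_2 t}]/(2d) with s_{1,2} = Q(-(1+r) ± d)/2 and d = sqrt((1-r)^2 + 4rq_1), which lies outside this excerpt — one can verify that the minimum of the expectation indeed occurs at t_1^{(min)}:

```python
import math

def aux(q1, r, Q=1.0):
    # assumed auxiliary quantities of (5.65)-(5.66): d and the roots s1 >= s2
    d = math.sqrt((1.0 - r)**2 + 4.0*r*q1)
    return d, Q*(-(1.0 + r) + d)/2.0, Q*(-(1.0 + r) - d)/2.0

def m10_inactive(t, q1, r, Q=1.0):
    # assumed (5.65): expectation of the number of inactive particles,
    # process started by a single inactive particle
    d, s1, s2 = aux(q1, r, Q)
    return ((d + r - 1.0)*math.exp(s1*t) + (d - r + 1.0)*math.exp(s2*t))/(2.0*d)

def t1_min(q1, r, Q=1.0):
    # equation (5.73), valid in a supercritical system (q1 > 1)
    d, _, _ = aux(q1, r, Q)
    return math.log(((d + 1.0)**2 - r**2)/((d - 1.0)**2 - r**2))/(Q*d)
```

For q_1 = 1.05 and r = 5 this gives t_1^{(min)} ≈ 0.56, and evaluating the expectation slightly before and after that instant confirms it is a minimum.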
5.4.2 The covariance and its properties
It can be expected that a stochastic dependence, varying in time, exists between the numbers of the active and inactive particles. In order to study this dependence, let us determine the covariances

D_{1,1}^{(i)}(t) = E{n_1(t) n_2(t)|S_i} - E{n_1(t)|S_i} E{n_2(t)|S_i},  i = 1, 2.   (5.77)

By using the notations introduced earlier, one can write

D_{1,1}^{(i)}(t) = m_{1,1}^{(i)}(t) - m_{1,0}^{(i)}(t) m_{0,1}^{(i)}(t),   (5.78)

where

m_{1,1}^{(i)}(t) = ∂^2 g^{(i)}/(∂z_1 ∂z_2) |_{z_1=z_2=1},  i = 1, 2.
From (5.41) and (5.42), the equations

m_{1,1}^{(1)}(t) = Q \int_0^t e^{-Q(t-t')} m_{1,1}^{(2)}(t') dt'   (5.79)

and

m_{1,1}^{(2)}(t) = q_1 λ \int_0^t e^{-λ(t-t')} m_{1,1}^{(1)}(t') dt' + q_2 λ \int_0^t e^{-λ(t-t')} m_{1,0}^{(1)}(t') m_{0,1}^{(1)}(t') dt'   (5.80)
Other Characteristic Probabilities
can be derived. These can be solved by the Laplace transforms

m̃_{1,1}^{(i)}(s) = \int_0^∞ e^{-st} m_{1,1}^{(i)}(t) dt,  i = 1, 2.   (5.81)
Since, based on the formulae (5.65) and (5.67), one has

m_{1,0}^{(1)}(t) m_{0,1}^{(1)}(t) = (1/(2d^2)) [ (d+r-1) e^{2s_1 t} - 2(r-1) e^{(s_1+s_2)t} - (d-r+1) e^{2s_2 t} ],
the following two equations can be written down:

m̃_{1,1}^{(1)}(s) = (Q/(s+Q)) m̃_{1,1}^{(2)}(s)   (5.82)

and

m̃_{1,1}^{(2)}(s) = q_1 (λ/(s+λ)) m̃_{1,1}^{(1)}(s) + q_2 (λ/(s+λ)) (1/(2d^2)) [ (d+r-1)/(s-2s_1) - 2(r-1)/(s-s_1-s_2) - (d-r+1)/(s-2s_2) ],   (5.83)
if q_1 ≠ 1. By introducing the function

ρ(s) = (d+r-1)/(s-2s_1) - 2(r-1)/(s-s_1-s_2) - (d-r+1)/(s-2s_2),   (5.84)
we obtain that

m̃_{1,1}^{(1)}(s) = (1/2) q_2 (r/d^2) Q^2/((s-s_1)(s-s_2)) ρ(s),   (5.85)

m̃_{1,1}^{(2)}(s) = (1/2) q_2 (r/d^2) Q(s+Q)/((s-s_1)(s-s_2)) ρ(s),   (5.86)
if q_1 ≠ 1. On the other hand, if q_1 = 1, then

m̃_{1,1}^{(1)}(s|cr) = (1/2) q_2 (r/(1+r)^2) Q^2/(s[s+Q(1+r)]) ρ(s|cr),   (5.87)

m̃_{1,1}^{(2)}(s|cr) = (1/2) q_2 (r/(1+r)^2) Q(Q+s)/(s[s+Q(1+r)]) ρ(s|cr),   (5.88)

where
ρ(s|cr) = 2 [ r/s - (r-1)/(s+Q(1+r)) - 1/(s+2Q(1+r)) ].

By performing the inverse Laplace transforms, we arrive at

m_{1,1}^{(1)}(t) = (1/2) q_2 (r/d^3) Q [k(t) - ℓ(t)]   (5.89)

and

m_{1,1}^{(2)}(t) = (1/2) q_2 (r/d^3) [ (Q+s_1) k(t) - (Q+s_2) ℓ(t) ],   (5.90)
Figure 5.10 The time-dependence of the covariance between the numbers of the inactive and active particles, provided that the starting particle was an inactive R1 type.
where

k(t) = (d+r-1) (e^{2s_1 t} - e^{s_1 t})/s_1 - 2(r-1) (e^{(s_1+s_2)t} - e^{s_1 t})/s_2 - (d-r+1) (e^{2s_2 t} - e^{s_1 t})/(2s_2-s_1)

and

ℓ(t) = (d+r-1) (e^{2s_1 t} - e^{s_2 t})/(2s_1-s_2) - 2(r-1) (e^{(s_1+s_2)t} - e^{s_2 t})/s_1 - (d-r+1) (e^{2s_2 t} - e^{s_2 t})/s_2
if q_1 ≠ 1. If q_1 = 1, i.e. the system is critical, then

m_{1,1}^{(1)}(t|cr) = q_2 (r^2/(1+r)^3) Qt - (1/2) q_2 (r(4r-1)/(1+r)^4) + q_2 (r(r-1)/(1+r)^3) e^{-Q(1+r)t} Qt + 2 q_2 (r^2/(1+r)^4) e^{-Q(1+r)t} - (1/2) q_2 (r/(1+r)^4) e^{-2Q(1+r)t}   (5.91)
and

m_{1,1}^{(2)}(t|cr) = q_2 (r^2/(1+r)^3) Qt + (1/2) q_2 (r[r^2+(r-1)^2]/(1+r)^4) - q_2 (r^2(r-1)/(1+r)^3) e^{-Q(1+r)t} Qt - q_2 (r(r^2+1)/(1+r)^4) e^{-Q(1+r)t} + (1/2) q_2 (r(2r+1)/(1+r)^4) e^{-2Q(1+r)t}.   (5.92)
The curves shown in Fig. 5.10 display the initial section of the dependence of the covariance D_{1,1}^{(1)}(t) on t for the parameter values r = λ/Q = 5 and q_2 = 0.5 in a subcritical, critical and supercritical system. We notice that initially the covariance is negative, which means that an increase in the number of inactive particles is accompanied by a decrease in the number of active particles, and vice versa. After some
Figure 5.11 The time-dependence of the covariance between the numbers of the inactive and active particles, provided that the starting particle was an active R2 type.
time, however, the covariance becomes positive. From this it follows that there exists a time instant at which there is no correlation between the numbers of the inactive and active particles.4 It is obvious that if t → ∞, then D_{1,1}^{(i)}(t) → 0 in a subcritical system, whether i = 1 or 2. It is worth mentioning that in a critical system the linear dependence of the covariance on time develops relatively soon. In a supercritical system D_{1,1}^{(i)}(t) → ∞ if t → ∞.
Similar statements can also be made concerning the time-dependence of the covariance D_{1,1}^{(2)}(t). Figure 5.11 shows that the time-dependence of the covariance during the initial phase of the process, started by an active particle, is affected by the state of the system in a particular way. The conventional time-dependence (in the specific time units selected here) is attained in the domain t > 8.
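The sign change of the covariance can be checked numerically. The Python sketch below evaluates D_{1,1}^{(1)}(t) = m_{1,1}^{(1)}(t) - m_{1,0}^{(1)}(t) m_{0,1}^{(1)}(t) from (5.89) together with the product formula quoted before (5.82); it is hedged in that d and s_{1,2} are assumed from the earlier part of the chapter (d = sqrt((1-r)^2 + 4rq_1), s_{1,2} = Q(-(1+r) ± d)/2), and Q = 1 is taken as the time unit:

```python
import math

def params(q1, r, Q=1.0):
    # assumed quantities from (5.65): d and the dimensional roots s1 >= s2
    d = math.sqrt((1.0 - r)**2 + 4.0*r*q1)
    return d, Q*(-(1.0 + r) + d)/2.0, Q*(-(1.0 + r) - d)/2.0

def cov_11(t, q1, q2, r, Q=1.0):
    """Covariance D_{1,1}^{(1)}(t), eqs (5.78) and (5.89), valid for q1 != 1."""
    d, s1, s2 = params(q1, r, Q)
    e1, e2 = math.exp(s1*t), math.exp(s2*t)
    # k(t) and l(t) as defined after (5.90)
    k = ((d + r - 1.0)*(e1*e1 - e1)/s1
         - 2.0*(r - 1.0)*(e1*e2 - e1)/s2
         - (d - r + 1.0)*(e2*e2 - e1)/(2.0*s2 - s1))
    l = ((d + r - 1.0)*(e1*e1 - e2)/(2.0*s1 - s2)
         - 2.0*(r - 1.0)*(e1*e2 - e2)/s1
         - (d - r + 1.0)*(e2*e2 - e2)/s2)
    m11 = 0.5*q2*(r/d**3)*Q*(k - l)            # equation (5.89)
    # product m_{1,0}^{(1)}(t) m_{0,1}^{(1)}(t) from (5.65) and (5.67)
    prod = ((d + r - 1.0)*e1*e1 - 2.0*(r - 1.0)*e1*e2
            - (d - r + 1.0)*e2*e2)/(2.0*d*d)
    return m11 - prod
```

For the subcritical case of Fig. 5.10 (q_1 = 0.95, q_2 = 0.5, r = 5) the covariance starts at zero, dips negative, and turns positive later, as described above.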
5.4.3 Properties of the variances
The diagonal terms of the covariance matrix are the following variances:

D^2{n_1(t)|S_i} = m_{2,0}^{(i)}(t) + m_{1,0}^{(i)}(t) - [m_{1,0}^{(i)}(t)]^2,  i = 1, 2,   (5.93)

D^2{n_2(t)|S_i} = m_{0,2}^{(i)}(t) + m_{0,1}^{(i)}(t) - [m_{0,1}^{(i)}(t)]^2,  i = 1, 2.   (5.94)
In order to determine these variances, we need the factorial moments m_{2,0}^{(i)}(t) and m_{0,2}^{(i)}(t), i = 1, 2. From the generating function equations (5.41) and (5.42), one obtains for the inactive particles

m_{2,0}^{(1)}(t) = Q \int_0^t e^{-Q(t-t')} m_{2,0}^{(2)}(t') dt'   (5.95)

and

m_{2,0}^{(2)}(t) = q_1 λ \int_0^t e^{-λ(t-t')} m_{2,0}^{(1)}(t') dt' + q_2 λ \int_0^t e^{-λ(t-t')} [m_{1,0}^{(1)}(t')]^2 dt',   (5.96)
whereas for the active ones we will have

m_{0,2}^{(1)}(t) = Q \int_0^t e^{-Q(t-t')} m_{0,2}^{(2)}(t') dt'   (5.97)

and

m_{0,2}^{(2)}(t) = q_1 λ \int_0^t e^{-λ(t-t')} m_{0,2}^{(1)}(t') dt' + q_2 λ \int_0^t e^{-λ(t-t')} [m_{0,1}^{(1)}(t')]^2 dt'.   (5.98)

4 This, however, does not mean that the random variables n_1(t) and n_2(t) are also independent at this time point.
By accounting for the expression (5.65) derived for m_{1,0}^{(1)}(t) and employing the procedure known from the foregoing, one arrives at

m_{2,0}^{(1)}(t) = (1/4) q_2 (r/d^3) Q [u_{2,0}(t) - v_{2,0}(t)],   (5.99)

m_{2,0}^{(2)}(t) = (1/8) q_2 (r/d^3) Q [(d-r+1) u_{2,0}(t) + (d+r-1) v_{2,0}(t)],   (5.100)
if q_1 ≠ 1, where

u_{2,0}(t) = (d+r-1)^2 (e^{2s_1 t} - e^{s_1 t})/s_1 + 2[d^2 - (r-1)^2] (e^{(s_1+s_2)t} - e^{s_1 t})/s_2 + (d-r+1)^2 (e^{2s_2 t} - e^{s_1 t})/(2s_2-s_1),   (5.101)

and further

v_{2,0}(t) = (d+r-1)^2 (e^{2s_1 t} - e^{s_2 t})/(2s_1-s_2) + 2[d^2 - (r-1)^2] (e^{(s_1+s_2)t} - e^{s_2 t})/s_1 + (d-r+1)^2 (e^{2s_2 t} - e^{s_2 t})/s_2.   (5.102)
In the case when q_1 = 1, i.e. if the system is critical, then

m_{2,0}^{(1)}(t|cr) = q_2 (r^3/(1+r)^3) Qt - q_2 (r(r^2-2r-1/2)/(1+r)^4) - 2 q_2 (r^2/(1+r)^3) e^{-Q(1+r)t} Qt + q_2 (r(r^2-2r-1)/(1+r)^4) e^{-Q(1+r)t} + (1/2) q_2 (r/(1+r)^4) e^{-2Q(1+r)t}   (5.103)

and

m_{2,0}^{(2)}(t|cr) = q_2 (r^3/(1+r)^3) Qt + q_2 (r(r^3+2r+1/2)/(1+r)^4) + 2 q_2 (r^3/(1+r)^3) e^{-Q(1+r)t} Qt - q_2 (r^2(r^2+1)/(1+r)^4) e^{-Q(1+r)t} - (1/2) q_2 (r(2r+1)/(1+r)^4) e^{-2Q(1+r)t}.   (5.104)
Determination of the factorial moments m_{0,2}^{(i)}(t), i = 1, 2, can be achieved via a standard procedure by accounting for the formula for m_{0,1}^{(1)}(t) in (5.67). By applying the Laplace transform, after some simple but tedious calculations one arrives at

m_{0,2}^{(1)}(t) = q_2 (r/d^3) Q [u_{0,2}(t) - v_{0,2}(t)]   (5.105)

and

m_{0,2}^{(2)}(t) = (1/2) q_2 (r/d^3) Q [(d-r+1) u_{0,2}(t) + (d+r-1) v_{0,2}(t)],   (5.106)

where

u_{0,2}(t) = (e^{2s_1 t} - e^{s_1 t})/s_1 - 2 (e^{(s_1+s_2)t} - e^{s_1 t})/s_2 + (e^{2s_2 t} - e^{s_1 t})/(2s_2-s_1),   (5.107)

v_{0,2}(t) = (e^{2s_1 t} - e^{s_2 t})/(2s_1-s_2) - 2 (e^{(s_1+s_2)t} - e^{s_2 t})/s_1 + (e^{2s_2 t} - e^{s_2 t})/s_2,   (5.108)
Figure 5.12 Variance of the number of the inactive particles as a function of time t in a subcritical, critical and supercritical medium. The left-hand side figure refers to a process started by an inactive particle, and the right-hand side one to a process started by an active particle.
Figure 5.13 The variance of the number of inactive particles in the time interval directly after the start of the process. In the left-hand side figure the evolution of the variances of a process started by an inactive particle, while in the right-hand side figure the same for a process started by an active particle in subcritical, critical and supercritical media is shown.
provided that q_1 ≠ 1. When q_1 = 1, i.e. when the system is critical, after taking the appropriate limits one arrives at the following formulae:

m_{0,2}^{(1)}(t|cr) = q_2 (r/(1+r)^3) Qt - (5/2) q_2 (r/(1+r)^4) + 2 q_2 (r/(1+r)^3) e^{-Q(1+r)t} Qt + 2 q_2 (r/(1+r)^4) e^{-Q(1+r)t} + (1/2) q_2 (r/(1+r)^4) e^{-2Q(1+r)t}   (5.109)
and

m_{0,2}^{(2)}(t|cr) = q_2 (r^2/(1+r)^3) Qt + (1/2) q_2 (r(2r-3)/(1+r)^4) - 2 q_2 (r/(1+r)^3) e^{-Q(1+r)t} Qt + 2 q_2 (r/(1+r)^4) e^{-Q(1+r)t} - (1/2) q_2 (r(2r+1)/(1+r)^4) e^{-2Q(1+r)t}.   (5.110)
The time-dependence of the variances can then be determined from (5.93) and (5.94). Figure 5.12 shows the variance of the number of the inactive particles as a function of time t in a subcritical, critical and supercritical medium. An anomalous behaviour can be observed at the beginning of the process, whether started by an inactive or an active particle. In Fig. 5.13 it can be clearly seen that initially the variance is larger in a subcritical medium than in a supercritical one, and that the largest variance is observed in the critical system. As time passes, however, the expected tendency takes over: the variance of the number of inactive particles tends to zero in a subcritical medium, while it tends to infinity in critical and supercritical media when time tends to
Figure 5.14 The variance of the number of active particles as a function of time t in subcritical, critical and supercritical media. The left-hand side figure refers to a process started by an inactive particle, while the right-hand side figure to that started by an active particle.
Figure 5.15 The variance of the number of active particles in the time interval immediately after the start of the process. In the left-hand side figure the evolution of the variances of a process started by an inactive particle, while in the right-hand side figure the same in a process started by an active particle can be seen in subcritical, critical and supercritical media.
infinity. The variance of the number of active particles is illustrated as a function of time t in Figs 5.14 and 5.15. When the starting particle is inactive, the t-dependence of the variance is very similar to that of the number of inactive particles discussed in the foregoing, with the difference that the order of the variance curves does not change compared with their order at the beginning, for either of the parameter ranges q_1 < 1 or q_1 ≥ 1. The time-dependence of the variance in a subcritical medium shows a maximum in this case as well; however, the value t_max belonging to the maximum is smaller than in the case of the variance of the number of inactive particles. When the starting particle is active, the initial section of the variance curves is significantly modified, as is clearly seen in the right-hand side of Fig. 5.15. Namely, a short time after the start of the process a local maximum develops in subcritical, critical and supercritical media alike. This is a consequence of the fact that the starting particle can induce a reaction leading to multiplication, whose products can induce further reactions only after they become active. Naturally, if t → ∞, the variance of the number of active particles tends to zero in a subcritical medium, while in critical and supercritical media it tends to infinity: in a critical system linearly and in a supercritical system exponentially. The special behaviour of the variance of the number of the inactive and active particles directly after the start of the process can be important in certain (e.g. chemical or biological) processes in which retarding effects play a role.
Asymptotic properties of the variances
Let us now investigate in somewhat more detail the asymptotic properties of the variances. From the expressions derived for the second factorial moments and the expectations it immediately follows that in a subcritical medium

lim_{t→∞} D^2{n_1(t)|S_i} = lim_{t→∞} D^2{n_2(t)|S_i} = 0,  i = 1, 2.   (5.111)
Further, it can easily be confirmed that in a critical medium the following limiting values exist:

lim_{t→∞} D^2{n_1(t)|S_i}/(Qt) = q_2 (r/(1+r))^3,  i = 1, 2,   (5.112)

lim_{t→∞} D^2{n_2(t)|S_i}/(Qt) = q_2 r/(1+r)^3,  i = 1, 2.   (5.113)
By omitting the details of the calculations, the limit values are given here that concern processes in a supercritical medium:

lim_{t→∞} D^2{n_k(t)|S_i}/exp{2s_1 Qt} = A_k^{(i)},  i = 1, 2 and k = 1, 2.   (5.114)

We find that

A_1^{(1)} = ((d+r-1)/(2d))^2 [ 4 q_2 r/((1+r-3d)(1+r-d)) - 1 ]

and

A_1^{(2)} = (q_1 r/d)^2 [ q_2 (d-r)(d+r-1)^2/(q_1^2 r (1+r-3d)(1+r-d)) - 1 ],

as well as

A_2^{(1)} = (1/d^2) [ 4 q_2 r/((1+r-3d)(1+r-d)) - 1 ]

and

A_2^{(2)} = ((d-r+1)/(2d))^2 [ 16 q_2 r (d-r)/((d-r+1)^2 (1+r-3d)(1+r-d)) - 1 ].

In a critical medium, the asymptotic expressions

D^2{n_1(t)|S_i} ≈ q_2 (r/(1+r))^3 Qt,  i = 1, 2

and

D^2{n_2(t)|S_i} ≈ q_2 (r/(1+r)^3) Qt,  i = 1, 2

can be considered as very good approximations already for Qt > 5. In a supercritical medium, if Qt > 200, the asymptotic expressions

D^2{n_k(t)|S_i} ≈ A_k^{(i)} exp{2s_1 Qt},  i = 1, 2 and k = 1, 2
approximate the true values of the variances with a high accuracy.
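The supercritical asymptotics can be tested numerically. The sketch below evaluates the variance of the number of active particles from (5.94) and (5.105)–(5.108) and compares it with A_2^{(1)} exp{2s_1 t}; it assumes (from (5.65)–(5.67), outside this excerpt) d = sqrt((1-r)^2 + 4rq_1), s_{1,2} = Q(-(1+r) ± d)/2 and m_{0,1}^{(1)}(t) = (e^{s_1 t} - e^{s_2 t})/d, with Q = 1 so that t and Qt coincide:

```python
import math

def var_n2_S1(t, q1, q2, r, Q=1.0):
    """D^2{n_2(t)|S_1} = m_{0,2}^{(1)} + m_{0,1}^{(1)} - [m_{0,1}^{(1)}]^2, eqs (5.94), (5.105)."""
    d = math.sqrt((1.0 - r)**2 + 4.0*r*q1)
    s1, s2 = Q*(-(1.0 + r) + d)/2.0, Q*(-(1.0 + r) - d)/2.0
    e1, e2 = math.exp(s1*t), math.exp(s2*t)
    u = (e1*e1 - e1)/s1 - 2.0*(e1*e2 - e1)/s2 + (e2*e2 - e1)/(2.0*s2 - s1)  # (5.107)
    v = (e1*e1 - e2)/(2.0*s1 - s2) - 2.0*(e1*e2 - e2)/s1 + (e2*e2 - e2)/s2  # (5.108)
    m02 = q2*(r/d**3)*Q*(u - v)                                             # (5.105)
    m01 = (e1 - e2)/d                                                       # assumed (5.67)
    return m02 + m01 - m01*m01

def A2_1(q1, q2, r):
    # asymptotic amplitude A_2^{(1)} from (5.114)
    d = math.sqrt((1.0 - r)**2 + 4.0*r*q1)
    return (4.0*q2*r/((1.0 + r - 3.0*d)*(1.0 + r - d)) - 1.0)/d**2
```

For q_1 = 1.05, q_2 = 0.5 and r = 5 the ratio D^2{n_2(t)|S_1}/e^{2s_1 t} at Qt = 200 agrees with A_2^{(1)} to well within one per cent.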
5.4.4 Probability of extinction
Let us determine the probability that at a given time instant t > 0 there is neither an active nor an inactive particle in the system, provided that at time t = 0 the system was in state S_i, i = 1, 2, i.e. it contained either one inactive or one active particle. From the generating function in (5.40), for this probability one obtains the relationship

p^{(i)}(0, 0, t) = p_0^{(i)}(t) = g^{(i)}(0, 0, t),  i = 1, 2.   (5.115)

Further, from the generating function equations (5.41) and (5.42), one obtains

p_0^{(1)}(t) = Q \int_0^t e^{-Q(t-t')} p_0^{(2)}(t') dt'   (5.116)
and

p_0^{(2)}(t) = λ \int_0^t e^{-λ(t-t')} q[p_0^{(1)}(t')] dt'.   (5.117)
Let τ denote the random time point, counted from t = 0, at which n_1(τ) = n_2(τ) = 0. Obviously,

P{τ ≤ t|S_i} = P{n_1(t) = 0, n_2(t) = 0|S_i} = p_0^{(i)}(t),  i = 1, 2.   (5.118)

Hence, p_0^{(i)}(t) is not only the probability that at time t > 0 there are no particles in the system, but also the probability that the time to extinction is not larger than t, provided that at time t = 0 the system was in one of the states S_i, i = 1, 2. The probability p_0^{(i)}(t) can be called the probability of extinction, for whose determination the generating function q(z) needs to be known.
Quadratic process
In the case when q(z) = f_0 + f_1 z + f_2 z^2 with f_0 + f_1 + f_2 = 1, from the integral equations (5.116) and (5.117) one obtains the non-linear differential equations

dp_0^{(1)}(t)/dt = -Q p_0^{(1)}(t) + Q p_0^{(2)}(t)   (5.119)

and

dp_0^{(2)}(t)/dt = -λ p_0^{(2)}(t) + λ[f_0 + f_1 p_0^{(1)}(t) + f_2 [p_0^{(1)}(t)]^2]   (5.120)
with the initial conditions p_0^{(1)}(0) = p_0^{(2)}(0) = 0. Instead of seeking an analytical solution, the time-dependence of the probabilities p_0^{(i)}(t), i = 1, 2 was determined via a numerical procedure in a subcritical (q_1 = 0.95) and a supercritical (q_1 = 1.05) medium, assuming f_0 = 0.3 and 0.2, respectively. It can be seen in Fig. 5.16 that the influence of the initial state decays relatively quickly. The asymptotic values of the extinction probabilities are equal and independent of the initial state. That is, in a given medium the possible values of

p_0 = lim_{t→∞} p_0^{(1)}(t) = lim_{t→∞} p_0^{(2)}(t)   (5.121)

are determined by the roots

p_0 = 1, if q_1 ≤ 1;  p_0 = f_0/f_2, if q_1 > 1,

of the equation
f2 p02 − (1 − f1 )p0 + f0 = 0
Figure 5.16 The time-dependence of the extinction probability in a subcritical and a supercritical medium.
arising from the expressions (5.119) and (5.120). We notice that in subcritical and supercritical systems, the probability of the extinction is 1 (the probability of surviving is 0), while in supercritical systems, the probability of surviving is f0 q1 − 1 1− =2 , f2 q2 which, naturally, is exactly equal to the probability derived earlier for the process n(t).
5.5 Process with Prompt and Delayed Born Particles Consider a branching process in which the reaction induced by one particle called type T1 can result in not only the multiplication5 of the particle, but also in the birth of particles called type T2 , each of which, independently from the others, gives birth to one particle of type T1 through decay with a random time delay. Consequently, a particle of type T2 cannot directly induce a reaction, only the particle of type T1 born at its decay. The principal scheme of the process is shown in Fig. 5.17. The particle of type T1 born in the decay of the T2 -type particle is called a delayed born T1 particle, while the one born directly in the reaction is called a prompt T1 particle. This branching process is important not only in reactor physics, but also in biophysics, in the modelling and interpretation of certain retarded phenomena. Let ν1 and ν2 denote the number of particles of type T1 and type T2 , respectively, generated in a reaction induced by one particle of type T1 , and let P{ν1 = k1 , ν2 = k2 } = f (k1 , k2 )
(5.122)
be the probability that ν1 = k1 and ν2 = k2 . Define the basic generating function q(z1 , z2 ) =
∞ ∞
f (k1 , k2 )z1k1 z2k2
(5.123)
k1 =0 k2 =0
and the factorial moments as (1,2) qij
=
∂i+j q(z1 , z2 )
.
j
∂z1i ∂z2
(5.124)
z1 =z2 =1
Further, let n1 (t) and n2 (t) denote the number of particles of type T1 and T2 , respectively, at time t ≥ 0. Naturally, n1 (t) is the sum of the number of particles of type T1 at the time t ≥ 0 born as both prompt and k1 particles of type T1, k1 0, 1, . . .
T1
Q
k2 particles o f type T2, k2 0, 1, . . .
T2
Figure 5.17 5 The
T1
Basic scheme of the process.
renewal and absorption of the particle T1 are also included in the multiplication.
142
Imre Pázsit & Lénárd Pál
delayed. Determine the probability of the event that at time t ≥ 0, there are n1 particles of type T1 capable of inducing a reaction and n2 particles of type T2 non-capable of inducing reaction in the multiplying system, provided that at t = 0 exactly one particle of type T1 was in the system. For this probability, P{n1 (t) = n1 , n2 (t) = n2 |n1 (0) = 1, n2 (0) = 0} = p(D) (n1 , n2 , t|1, 0),
(5.125)
the backward Kolmogorov equation can be written down without any difficulty. Further, we will also need the equation determining the probability P{n1 (t) = n1 , n2 (t) = n2 |n1 (0) = 0, n2 (0) = 1} = p(D) (n1 , n2 , t|0, 1).
(5.126)
Since particles of both type T1 and type T2 can induce a branching process independently of each other, the relationship = p(D) (u1 , u2 , t|k1 , 0)p(D) (v1 , v2 , t|0, k2 ) (5.127) p(D) (n1 , n2 , t|k1 , k2 ) = u1 +v1 =n1 u2 +v2 =n2
holds, in which
p(D) (u1 , u2 , t|k1 , 0) =
k1
p(D) (i , j , t|1, 0)
(5.128)
p(D) (i , j , t|0, 1).
(5.129)
i1 +···+ik1=u1 j1 +···+jk1=u2 =1
and
p(D) (v1 , v2 , t|0, k2 ) =
k2
i1 +···+ik2 =v1 j1 +···+jk2 =v2 =1
Suppose that 1 − exp{−λt} is the probability that a particle of type T2 decays during a time interval not larger than t > 0. Based on the usual considerations, it is obvious that p(D) (n1 , n2 , t|1, 0) = e −Qt δn1 ,1 δn2 ,0 +Q
t
e
⎡
−Q(t−t )
0
⎣
∞ ∞
⎤ f (k1 , k2 )p(D) (n1 , n2 , t |k1 , k2 )⎦ dt .
(5.130)
k1 =0 k2 =0
Similarly, for the probability p(D) (n1 , n2 , t|0, 1), one has the simple integral equation t e −λ(t−t ) p(D) (n1 , n2 , t |1, 0)dt . p(D) (n1 , n2 , t|0, 1) = e −λt δn1 ,0 δn2 ,1 + λ
(5.131)
0
Introduce the generating functions g (D) (z1 , z2 , t|1, 0) =
∞ ∞
p(D) (n1 , n2 , t|1, 0)z1n1 z2n2
(5.132)
p(D) (n1 , n2 , t|0, 1)z1n1 z2n2 .
(5.133)
n1 =0 n2 =0
and g
(D)
(z1 , z2 , t|0, 1) =
∞ ∞ n1 =0 n2 =0
Accounting for the relationships (5.127) to (5.129), after elementary operations one obtains t e −Q(t−t ) q[g (D) (z1 , z2 , t |1, 0), g (D) (z1 , z2 , t |0, 1)]dt g (D) (z1 , z2 , t|1, 0) = e −Qt z1 + Q 0
(5.134)
143
Other Characteristic Probabilities
and g (D) (z1 , z2 , t|0, 1) = e −λt z2 + λ
t
e −λ(t−t ) g (D) (z1 , z2 , t |1, 0)dt .
(5.135)
0
In the following, we will suppose that the numbers ν1 and ν2 of the particles of type T1 capable of multiplication and of type T2 , non-capable for multiplication, born in the same reaction are independent, and so f (k1 , k2 ) = f (1) (k1 )f (2) (k2 ). Accordingly, q(z1 , z2 ) = q(1) (z1 )q(2) (z2 ),
(5.136)
where q (z1 ) = (1)
∞
f
(1)
(k1 )z1k1
∞
q (z2 ) = (2)
and
k1 =0
f (2) (k2 )z2k2 .
k2 =0
Equation (5.134) now takes the following form: g
(D)
(z1 , z2 , t|1, 0) = e
−Qt
z1 + Q
t
e −Q(t−t ) q(1) [g (D) (z1 , z2 , t |1, 0)]q(2) [g (D) (z1 , z2 , t |0, 1)]dt ,
(5.137)
0
while (5.135) does not change. From the generating functions, the factorial moments can easily be calculated:
∂j+k g (D) (z1 , z2 , t|1, 0)
j ∂z1 ∂z2k
= mj,k (t|1)
(D)
(5.138)
(D)
(5.139)
z1 =z2 =1
∂j+k g (D) (z1 , z2 , t|0, 1) j ∂z1 ∂z2k
= mj,k (t|2) z1 =z2 =1
in which |1) refers to the fact that the starting particle was of type T1 , while |2) that it was of type T2 . For the solution of the equations that can be derived for the moments, we will use the Laplace transforms ∞ (D) (D) m˜ j,k (s|i) = e−st mj,k (t|i)dt, i = 1, 2. (5.140) 0
In the following, we will only concern with the moments describing the behaviour of the particles of type T1 . For the sake of consistence with the previous formulae, the following notations will be used: (1)
E{ν2 } = q1 ,
(2)
(1)
E{ν2 (ν2 − 1)} = q2 ,
E{ν1 } = q1 ,
(5.141)
as well as (2)
E{ν1 (ν1 − 1)} = q2 ,
(5.142)
and (1) (2)
E{ν1 ν2 } = q1 q1 .
(5.143)
5.5.1 Expectations For the expectation of the number of particles of type T1 , from (5.137) and (5.135) one obtains m1,0 (t|1) = e−Qt + Q (D)
0
t
e −Q(t−t ) [q1 m1,0 (t |1) + q1 m1,0 (t |2)]dt , (1) (D)
(2) (D)
(5.144)
144
Imre Pázsit & Lénárd Pál
or
(D)
m1,0 (t|2) = λ
t
0
e −λ(t−t ) m1,0 (t |1)dt , (D)
(5.145)
depending on whether the starting particle was of type T1 or type T2 . For the Laplace transforms of the expectations, a short calculation yields the following formulae: s+λ
(D)
m˜ 1,0 (s|1) =
(s + Q)(s
(1) + λ) − q1 Q(s
(s + Q)(s
(1) + λ) − q1 Q(s
,
(5.146)
(2)
,
(5.147)
λ
(D)
m˜ 1,0 (s|2) =
(2)
+ λ) − q1 λ(s + Q) + λ) − q1 λ(s + Q)
where the roots of the identical denominators on the right-hand sides are given by the following expression: s1,2 =
# 1 " Q q1 − 1 − r + (r − 1)q1 β± [q1 − 1 − r + (r − 1)q1 β]2 + 4r(q1 − 1) , 2
where
(2)
(1)
(2)
q1 = q1 + q1 , One notes that s1 − s2 =
β=
q1 q1
and
r=
λ . Q
(5.148)
(5.149)
[q1 − 1 − r + (r − 1)q1 β]2 + 4r(q1 − 1) > 0,
(5.150)
that is s1 > s2 . In a subcritical system, i.e. when q1 < 1, one can easily confirm that s1 < 0 and hence naturally s2 < 0. If q1 = 1, i.e. when the system is critical, then (2)
s1 = 0
and s2 = Q[(r − 1)q1 − r] < 0.
In a supercritical system, i.e. if q1 > 1, one obtains that s1 > 0, while s2 < 0. For the sake of curiosity, it is worth giving the formulae of the roots in the case when the expectation of the lifetime of the particle of type T1 is equal to the expectation of the lifetime of the particles of type T2 , that is when r = λ/Q = 1. We find that s1 = Q(q1 − 1) and s2 = −Q. By introducing the notations s1 = Qa1
and
s2 = Qa2 ,
(5.151)
after some simple rearrangements, the Laplace transform (5.146) can be written in the form (D)
m˜ 1,0 (s|1) =
1 1 r + a1 r + a2 − , a1 − a2 s − Qa1 a1 − a2 s − Qa2
while (5.147) as (D) m˜ 1,0 (s|2)
r = a1 − a 2
1 1 − s − Qa1 s − Qa2
(5.152)
.
(5.153)
Based on these, it is obvious that (D)
m1,0 (t|1) =
r + a1 a1 Qt r + a2 a2 Qt e − e a1 − a 2 a1 − a 2
(5.154)
r (e a1 Qt − e a2 Qt ). a1 − a 2
(5.155)
and (D)
m1,0 (t|2) =
145
Other Characteristic Probabilities
1 Expectation
l/Q 5.104
b 0.01
0.8
q1 0.995
0.6
q1 1
0.4
q1 1.005
0.2 0
0
500
1000 Time (t )
1500
2000
Figure 5.18 Expectation of the number of particles of type T1 as a function of time parameter Qt in subcritical, critical and supercritical systems, provided that the starting particle was also of type T1 .
Let us write down the solutions in the case of a critical medium, i.e. when q1 = 1. We find r − 1 (2) r (D) (2) m1,0 (t|1) = 1− q exp{−[r − (r − 1)q1 ]Qt} , (2) r 1 r − (r − 1)q1 and
"
r
(D)
m1,0 (t|2) =
(2)
# (2) 1 − exp{−[r − (r − 1)q1 ]Qt} ,
r − (r − 1)q1 respectively. Note that if the medium is critical, then the limit expectation is equal to (D)
lim m1,0 (t|i) =
t→∞
r (2)
r − (r − 1)q1
,
i = 1, 2,
i.e. it is independent of whether the branching process was started by a particle of type T1 or T2 , which, of course, is a trivial statement. In the case when r = 1, i.e. if Q = λ, then m1,0 (t|1) = e (q1 −1)Qt (D)
and m1,0 (t|2) = e −Qt (D)
e q1 Qt − 1 , q1
and if in addition, q1 = 1, then m1,0 (t|1) = 1 and m1,0 (t|2) = 1 − e −Qt , (D)
(D)
thus it is obvious that (D)
lim m1,0 (t|i) = 1,
t→∞
i = 1, 2.
The curves in Fig. 5.18 illustrate the expectation of the number of particles of type T1 as a function of time parameter Qt in a subcritical (q1 = 0.995), critical (q1 = 1) and supercritical (q1 = 1.005) process, given that exactly one particle of type T1 was in the multiplying system at time t = 0. For the case shown in the figure, r = 0.0005 which means that in the process investigated, the expectation 1/λ of the lifetime of the particles of type T2 , incapable of reaction, is much larger than the expectation of the reaction time 1/Q. If, at the particular level q1 > 1 of supercriticality the fraction β of the expected number of the delayed particles is larger than β(c) = (q1 − 1)/q1 , then the expected number of the prompt particles is not sufficient to reach criticality. It is then said that the supercritical medium is not prompt critical.6 In our case q1 = 1.005, (1)
(1)
(1)
view of the relationship q1 = q1 + q1 β, one can write q1 = q1 (1 − β), and by requiring that q1 be smaller than unity, i.e. that the inequality q1 (1 − β) < 1 be fulfilled, it immediately follows that q1 − 1 β > β(c) = . q1
6 In
146
Imre Pázsit & Lénárd Pál
hence β(c) = (q1 − 1)/q1 = 0.0049751. The value β = 0.017 chosen here is much larger than this value, thus (1) q1 = 0.990495 < 1. This is the reason for the fact that the curve belonging to the value q1 = 1.005 decreases strongly at the beginning with the increase of the time parameter Qt, and the exponential increase of the expectation starts only after a certain time period, due to the influence of the delayed particles. In the case when q1 = 1 and r = 0.0005 one obtains that (D)
lim m1,0 (t|1) = 0.0476417.
t→∞
However, it has to be remarked that the expectation of the particle number n1 (t) in the critical state for large Qt values is not a characteristic data, since if t → ∞, the variance of n1 (t) tends to infinity, as it will be shown shortly.
5.5.2 Variances Depending on whether one particle of type T1 or T2 existed in the multiplying system at time t = 0, the variance of the number n1 (t) of particles of type T1 is given by the following expressions: (D)
(D)
(D)
(5.156)
(D)
(D)
(D)
(5.157)
D2 {n1 (t)|n1 (0) = 1, n2 (0) = 0} = m2,0 (t|1) + m1,0 (t|1) − [m1,0 (t|1)]2 and D2 {n1 (t)|n1 (0) = 0, n2 (0) = 1} = m2,0 (t|2) + m1,0 (t|2) − [m1,0 (t|2)]2 . (D)
It is seen that in order to determine the variances, the factorial moments m2,0 (t|i), i = 1, 2 need to be calculated. From the generating function equations (5.137) and (5.135) one obtains t (D) (1) (D) (2) (D) e −Q(t−t ) [q1 m2,0 (t |1) + q1 m2,0 (t |2) + h(t )]dt , m2,0 (t|1) = Q (5.158) 0
where h(t ) = q2 [m1,0 (t |1)]2 + 2q1 q1 m1,0 (t |1)m1,0 (t |2) + q2 [m1,0 (t |2)]2 , (1)
(D)
(1) (2) (D)
and (D)
m2,0 (t|2) = λ
t
(2)
e −λ(t−t ) m2,0 (t |1)dt ,
0
respectively. Let
(D)
χ(s) =
∞
(D)
(D)
(5.159)
e −st h(t)dt
0
denote the Laplace transform of h(t). Then the following expressions are obtained for the Laplace transforms (D) of m2,0 (t|i), i = 1, 2: Q(s + λ) χ(s), (s − s1 )(s − s2 ) Qλ (D) m˜ 2,0 (s|2) = χ(s), (s − s1 )(s − s2 ) (D)
m˜ 2,0 (s|1) =
7 In
practice, usually the value of β is given, and it is this which determines the level of the supercriticality (c)
q1 = 1/(1 − β) (1)
below which the criticality q1 = 1 cannot be reached by the prompt particles.
(5.160) (5.161)
147
Other Characteristic Probabilities
where s1 and s2 are identical with (5.148), while χ(s) =
b11 b12 b22 + + , s − 2s1 s − s 1 − s2 s − 2s2
(5.162)
in which 1 (1) (2) 2 2 2 (r + a q ) + 2q β(1 − β)r(r + a , ) + q r 1 1 1 2 2 (a1 − a2 )2 1 (1) (2) = −2 [q (r + a1 )(r + a2 ) + q12 β(1 − β)r(2r + a1 + a2 ) + q2 r 2 ] (a1 − a2 )2 2
b11 = b12 and
1 (1) (2) [q (r + a2 )2 + 2q12 β(1 − β)r(r + a2 ) + q2 r 2 ]. (a1 − a2 )2 2 In the following, we will only investigate the variance b22 =
D2 {n1 (t)|n1 (0) = 1, n2 (0) = 0}, (D)
therefore it is sufficient to determine only the factorial moment m2,0 (t|1). After simple but lengthy calculations, (D)
by using the notations (5.151), the following formula can be derived for the factorial moments m2,0 (t|1): (D)
m2,0 (t|1) = where
I11 (t) =
I12 (t) =
and
b11 b12 b22 I11 (t) + I12 (t) + I22 (t), a1 − a 2 a1 − a 2 a1 − a 2
r + a1 2a1 Qt r + a2 2a1 Qt (e − e a1 Qt ) − (e − e a2 Qt ) , a1 2a1 − a2
(5.163)
r + a1 (a1 +a2 )Qt r + a2 (a1 +a2 )Qt (e − e a1 Qt ) − (e − e a2 Qt ) a2 a1
r + a1 2a2 Qt r + a2 2a2 Qt (e − e a1 Qt ) − (e − e a2 Qt ) . 2a2 − a1 a2 In the case when q1 = 1, i.e. that of a critical process, a1 = 0 and a2 = −a = −r + (r − 1)β, thus I22 (t) =
(D)
(D)
lim m2,0 (t|1) = m˜ 2,0 (t|1) =
q1 →1
b˜ 11 b˜ 12 b˜ 22 I˜11 (t) + I˜12 (t) + I˜22 (t), a a a
(5.164)
where 1 (1) (2) b˜ 11 = 2 [q2 r 2 + 2β(1 − β)r 2 + q2 r 2 ], a 1 (1) (2) b˜ 12 = −2 2 [q2 r(r − a) + β(1 − β)r(2r − a) + q2 r 2 ] a and 1 (1) (2) b˜ 22 = 2 [q2 (r − a)2 + 2β(1 − β)r(r − a) + q2 r 2 ]. a The variance D2 {n1 (t)|n1 (0) = 1, n2 (0) = 0} can be calculated from the expression (5.156). Instead of giving an explicit solution, we show its principal properties in Fig. 5.19.
148
Imre Pázsit & Lénárd Pál
35 lQ 5.104
0.01
25
q1 0.995
20
q1 1
15
q1 1.005
40 Variances
Variances
30
10
q1 1
lQ 5.104 b 0.005
30
b 0.004 20
b 0.003
10
5 0
0 0 (a)
500
1000 Time (t)
1500
2000
0 (b)
5000
10 000
15 000
20 000
Time (t)
Figure 5.19 The variance of the number of T1 -type particles as a function of time parameter Qt, provided that the process was induced by one T1 -type particle. The right-hand side figure illustrates the time-dependence of the variance in a critical system for three values of β in a larger time interval than in the left-hand side figure.
Figure 5.19 shows that the variance of the particles of type T1 , generated in a process initiated by one T1 -type particle does not vary in a monotonic manner with the increase of the time parameter Qt. If the system is subcritical with respect to the prompt particles, but it is on the whole supercritical (q1 > 1) with respect to the sum of the prompt and delayed particles, then we observe that with the increase of the time parameter Qt, the variance first reaches a local maximum, then by passing through a local minimum, tends more and more in an exponential manner to infinity. This behaviour is displayed by the curve corresponding to the value q1 = 1.005 in the left-hand side figure. If q1 < 1, i.e. the system is subcritical then, after reaching a maximum, the variance decreases essentially exponentially to zero if Qt → ∞. If the system is critical, i.e. if q1 = 1, then in order to be able to demonstrate also the linearly increasing section of the time-dependence of the variance, a relatively large time interval has to be chosen in the case of the parameters used in the calculations. The right-hand side figure illustrates the time-dependence of the variance for three values of β in the interval 0 ≤ Qt ≤ 20 000 in a critical system.
CHAPTER SIX

Branching Processes in a Randomly Varying Medium

Contents
6.1 Characterisation of the Medium
6.2 Description of the Process
6.3 Factorial Moments, Variances
6.4 Random Injection of the Particles
So far, it has been assumed that the parameters Q, fk, k = 0, 1, 2, . . . determining the state of the multiplying system are constant. In many cases, however, these parameters are random processes or random variables themselves. Several studies in the mathematical literature [29–32] deal with discrete or continuous time branching processes taking place in a randomly varying medium. These investigations are mostly aimed at the probability of extinction or the asymptotic characteristics of the supercritical state. Our ambition here is rather to trace the effects of the simplest random variations of the multiplying medium on the expectation and the variance of the number of particles, and on the correlation between their values at two different time points. Some earlier attempts were made in the field of neutron fluctuations, but without concrete solutions of the type that will be reported in this chapter [33, 34]. It is to be mentioned here that this chapter is strongly related to the group of problems arising in the theory of neutron fluctuations in multiplying systems with parameters varying randomly in time.¹ As is known, the neutron fluctuations in temporally constant (and low power) systems, one characteristic of which is that their variance is proportional to the mean, are referred to as zero power noise; whereas the neutron fluctuations arising in randomly varying systems, whose variance is proportional to the mean squared, are referred to as power reactor noise. The latter are treated with a linearised version of the Langevin equation, the procedure often being referred to as the 'Langevin technique'. Such a treatment can only account for the effect of the random fluctuations of the system parameters on the neutron distribution, whereas the effect of the branching process is entirely missing.
By using the more fundamental master equation description, the material presented in this chapter gives an account of the effect of both the branching process as well as the fluctuations of the system parameters, i.e. it accounts for both the zero power and the power reactor noise simultaneously. Hence, it helps to better understand the properties of neutron fluctuations in randomly varying systems.

¹ In the text, for brevity, the expression 'a system varying randomly in time' will often be referred to as a 'random system' or a 'randomly varying system'.
Neutron fluctuations ISBN-13: 978-0-08-045064-3
© 2008 Elsevier Ltd. All rights reserved.
6.1 Characterisation of the Medium

Let S be a countable set² of the possible states determined by the random parameters Q, fk, k = 0, 1, 2, . . . of the multiplying system, and let S(t), t ∈ T be the random process which determines in which state the system is at time t. Accordingly, {S(t) = S_ℓ}, ℓ ∈ Z⁺ is the event that the system is in state S_ℓ at time t ≥ 0. In the following, we are concerned only with the simplest case. We suppose that the multiplying system has two possible states, i.e. S = {S1, S2}. In this case, the system is characterised by the parameters Q_i, f_k^{(i)}, k = 0, 1, . . . , i = 1, 2. For later reference, let us determine the transition probabilities

P\{S(t) = S_j \,|\, S(0) = S_i\} = w_{j,i}(t),   j, i = 1, 2,   (6.1)
of the process S(t) describing the random changes of state of the medium. (As the notation shows, the flow of time, i.e. causality, is from right to left in the above formula and in all subsequent equations in this chapter. That is, in equation (6.1) the index i stands for the initial, and j for the final variables.) Let λ_{2,1} t + o(t) be the probability that the transition S1 → S2 occurs during time t, and λ_{1,2} t + o(t) that the transition S2 → S1 occurs within t. For the sake of simplicity suppose that λ_{1,2} = λ_{2,1} = λ and write down the backward equations determining the probabilities w_{j,i}(t), j, i = 1, 2 in integral form. From obvious considerations, these are given by

w_{1,1}(t) = e^{-\lambda t} + \lambda \int_0^t e^{-\lambda(t-t')}\, w_{1,2}(t')\,dt',   (6.2)

w_{2,1}(t) = \lambda \int_0^t e^{-\lambda(t-t')}\, w_{2,2}(t')\,dt',   (6.3)

w_{1,2}(t) = \lambda \int_0^t e^{-\lambda(t-t')}\, w_{1,1}(t')\,dt',   (6.4)

w_{2,2}(t) = e^{-\lambda t} + \lambda \int_0^t e^{-\lambda(t-t')}\, w_{2,1}(t')\,dt'.   (6.5)

The solution is readily given as

w_{1,1}(t) = w_{2,2}(t) = \frac{1}{2}\,(1 + e^{-2\lambda t})   (6.6)

and

w_{2,1}(t) = w_{1,2}(t) = \frac{1}{2}\,(1 - e^{-2\lambda t}).   (6.7)

In possession of the transition probabilities w_{j,i}(t), any higher moment of the system state S(t) can be calculated.
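The pair (6.6)–(6.7) is easy to verify numerically. The sketch below (not part of the original text; the switching intensity λ = 0.5 is an arbitrary illustrative value) integrates the equivalent differential form dw₁,₁/dt = λ(w₁,₂ − w₁,₁), dw₁,₂/dt = λ(w₁,₁ − w₁,₂) with a hand-rolled RK4 stepper and compares the result with the closed forms:

```python
import math

lam = 0.5                      # illustrative switching intensity (assumed)

def rhs(y):
    w11, w12 = y               # differential form of the backward equations
    return (lam*(w12 - w11), lam*(w11 - w12))

y, t, h = (1.0, 0.0), 0.0, 0.01
for _ in range(200):           # integrate to t = 2 with classical RK4
    k1 = rhs(y)
    k2 = rhs((y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
    k3 = rhs((y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
    k4 = rhs((y[0] + h*k3[0], y[1] + h*k3[1]))
    y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
         y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    t += h

print(abs(y[0] - 0.5*(1 + math.exp(-2*lam*t))) < 1e-9)  # matches (6.6)
print(abs(y[1] - 0.5*(1 - math.exp(-2*lam*t))) < 1e-9)  # matches (6.7)
```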
6.2 Description of the Process

It is a natural idea to see whether one can construct the solution for the random medium based on the general theorems and results of Chapter 1, in particular of Sections 1.1 and 1.2. It is easy to see, however, that the results obtained therein rely heavily on the independence of the chains started by the several particles existing simultaneously in the system. Hence, those concrete results are only applicable within the time periods that are separated by the state changes of the system. Based on this observation, one can construct a solution by describing the evolution of the process in a piecewise manner and combining it with the statistics of the time instances of the state changes of the system. A similar, piecewise constructed solution technique was also applied in Chapter 3, for the calculation of the neutron distribution in a subcritical system with pulsed particle injection with a finite pulse width. Such a method can be considered as an extension of the Markov time point technique to a randomly varying system. With such a method, it is possible to derive the generating function of the one-point density function of the branching process n(t) in a random medium from the generating functions of the constant medium, as was demonstrated in [35]. As is seen in [35], for the random medium problem this method is relatively complicated, and hence will not be dealt with here.

For the treatment of the problem we shall hence take recourse to the basic methodology of deriving various stochastic properties of the particle population, i.e. the master equation formalism. Denote by n(t) the number of particles in a multiplying system, and define the probability

P\{n(t) = n, S(t) = S_j \,|\, n(0) = m, S(0) = S_i\} = p_{j,i}(n, t|m)   (6.8)

that the system contains n particles at t ≥ 0 and is in the state S_j, j = 1, 2, provided that it contained m particles at t = 0 and was then in the state S_i, i = 1, 2. Note that

\sum_j \sum_{n=0}^{\infty} p_{j,i}(n, t|m) = \sum_j w_{j,i}(t) = 1,   ∀ m ∈ Z⁺.   (6.9)

For the sake of completeness, define also the generating function

g_{j,i}(z, t|m) = \sum_{n=0}^{\infty} p_{j,i}(n, t|m)\, z^n,   (6.10)

which naturally satisfies the relationship g_{j,i}(1, t|m) = w_{j,i}(t).

As is well known, also from the earlier chapters, one can derive either a backward- or a forward-type master equation for p_{j,i}(n, t|m). For various reasons, in the case of the constant medium treated in earlier chapters, the use of the backward equation proved more practical, and hence it was used from the very beginning. For the case of a randomly varying medium, however, this practical advantage of the backward master equation completely disappears, and the solution of the higher factorial moment equations becomes extremely complex. The reasons for this will be analysed in the following subsection.

² The assumption of the system having discrete states is important for the applicability of the master equation technique. With discrete states, an exclusion principle exists for infinitesimal time periods between the change of the state of the system and the change of the number of particles, which is essential in formulating the probability balance equation.
6.2.1 Backward equations

The objective of the backward master equation is the determination of the transition probabilities

p_{j,i}(n, t|m = 1) = p_{j,i}(n, t|1),   i, j = 1, 2.

Often one is only interested in the probability that there exists a given number of particles in the system at time t ≥ 0, irrespective of whether the system is in state S1 or S2 at time t, provided that there was one particle in the system of state S_i, i = 1, 2 at time t = 0. In this case, one can use the probabilities³

p_i(n, t|m) = p_{1,i}(n, t|m) + p_{2,i}(n, t|m),   i = 1, 2.   (6.11)

It is possible to derive a backward equation directly for p_i(n, t|m), since the backward equation operates on the initial variables; hence the summation w.r.t. the final variables in (6.11) does not interfere with the operations of the equation. The same possibility does not exist for the forward equation, for obvious reasons.

³ Note that \sum_{n=0}^{\infty} p_i(n, t|m) = w_{1,i}(t) + w_{2,i}(t) = 1.
Based on the already familiar considerations, the backward equations yielding the transition probabilities p_{j,i}(n, t|1), by taking into account the random state changes of the multiplying medium, can be written as

p_{j,1}(n, t|1) = e^{-(Q_1+\lambda)t}\,\delta_{1j}\,\delta_{1n} + \lambda \int_0^t e^{-(Q_1+\lambda)(t-t')}\, p_{j,2}(n, t'|1)\,dt' + Q_1 \int_0^t e^{-(Q_1+\lambda)(t-t')} \sum_{m=0}^{\infty} f_m^{(1)}\, p_{j,1}(n, t'|m)\,dt',   j = 1, 2   (6.12)

and

p_{j,2}(n, t|1) = e^{-(Q_2+\lambda)t}\,\delta_{2j}\,\delta_{1n} + \lambda \int_0^t e^{-(Q_2+\lambda)(t-t')}\, p_{j,1}(n, t'|1)\,dt' + Q_2 \int_0^t e^{-(Q_2+\lambda)(t-t')} \sum_{m=0}^{\infty} f_m^{(2)}\, p_{j,2}(n, t'|m)\,dt',   j = 1, 2.   (6.13)
Note that here one can keep the final co-ordinate j arbitrary, again due to the fact that there is no operation on the final co-ordinates in the backward equation. Now, since the branching processes initiated by several particles found in the system at a given time instant are not independent in a randomly varying medium,⁴ the relationship

P(n, t|m) = \sum_{n_1+\cdots+n_m=n}\; \prod_{\ell=1}^{m} P(n_\ell, t|1),   (6.14)

expressing the basic property of branching processes in a constant medium, does not hold. Hence we need to keep the p_{j,i}(n, t|m) with a general m on the right-hand side of the backward equation below. For the generating functions

g_{j,i}(z, t|m) = \sum_{n=0}^{\infty} p_{j,i}(n, t|m)\, z^n,   i, j = 1, 2   and   m ∈ Z⁺,   (6.15)

one obtains the differential equations

\frac{\partial g_{j,1}(z, t|1)}{\partial t} = -(Q_1 + \lambda)\, g_{j,1}(z, t|1) + \lambda\, g_{j,2}(z, t|1) + Q_1 \sum_{k=0}^{\infty} f_k^{(1)}\, g_{j,1}(z, t|k)   (6.16)

and

\frac{\partial g_{j,2}(z, t|1)}{\partial t} = -(Q_2 + \lambda)\, g_{j,2}(z, t|1) + \lambda\, g_{j,1}(z, t|1) + Q_2 \sum_{k=0}^{\infty} f_k^{(2)}\, g_{j,2}(z, t|k),   (6.17)

respectively, with the initial conditions g_{j,i}(z, 0) = \delta_{ji}\, z, i, j = 1, 2.

The lack of validity of the factorisation property (6.14) has serious consequences for the possibility of obtaining closed form solutions from the backward equation, either for the probability distributions or even for the factorial moments. As is seen from (6.16) and (6.17), or their predecessors, the backward equations supply an infinite system of coupled differential (or integral) equations, since in addition to g_{j,i}(z, t|1), all g_{j,i}(z, t|k) with k = 0, 1, 2, . . . occur on the right-hand side. In order to attempt a solution, one has to generalise (6.16) and (6.17) to have g_{j,i}(z, t|m) with an arbitrary m on the left-hand side, and then try to solve the arising infinite system of equations with e.g. some suitable closure assumption. This is a task of formidable complication; hence this path is not practical to follow for the treatment of a random medium. In the continuation we shall therefore use the forward equation throughout.

Remark. In view of the fact that g_{j,i}(1, t|m) = w_{j,i}(t), ∀ m ∈ Z⁺, from (6.16) and (6.17) one obtains the equations

\frac{dw_{j,1}(t)}{dt} = \lambda\,[w_{j,2}(t) - w_{j,1}(t)]   (6.18)

and

\frac{dw_{j,2}(t)}{dt} = \lambda\,[w_{j,1}(t) - w_{j,2}(t)],   (6.19)

with the initial conditions w_{j,i}(0) = \delta_{ji}, i, j = 1, 2. It is immediately seen that these are exactly identical with those obtained by differentiating (6.2)–(6.5) w.r.t. t.

⁴ The dependence is a consequence of the fact that each branching process is affected by the same random series of system state changes.
6.2.2 Forward equations

Let us now derive the forward equations for the transition probabilities p_{j,i}(n, t|1) = p_{j,i}(n, t), i, j = 1, 2. By using the notations previously defined, after the well-known considerations one arrives at the following differential equations:

\frac{dp_{1,i}(n, t)}{dt} = -(nQ_1 + \lambda)\, p_{1,i}(n, t) + \lambda\, p_{2,i}(n, t) + Q_1 \sum_{k=0}^{\infty} (n - k + 1)\, f_k^{(1)}\, p_{1,i}(n - k + 1, t),   (6.20)

\frac{dp_{2,i}(n, t)}{dt} = -(nQ_2 + \lambda)\, p_{2,i}(n, t) + \lambda\, p_{1,i}(n, t) + Q_2 \sum_{k=0}^{\infty} (n - k + 1)\, f_k^{(2)}\, p_{2,i}(n - k + 1, t).   (6.21)

Based on these two equations, one can immediately show that the generating functions g_{j,i}(z, t), i, j = 1, 2 satisfy the partial differential equations

\frac{\partial g_{1,i}(z, t)}{\partial t} = Q_1\,[q_1(z) - z]\,\frac{\partial g_{1,i}(z, t)}{\partial z} + \lambda\,[g_{2,i}(z, t) - g_{1,i}(z, t)]   (6.22)

and

\frac{\partial g_{2,i}(z, t)}{\partial t} = Q_2\,[q_2(z) - z]\,\frac{\partial g_{2,i}(z, t)}{\partial z} + \lambda\,[g_{1,i}(z, t) - g_{2,i}(z, t)],   (6.23)

with the initial conditions g_{j,i}(z, 0) = z\,\delta_{ji}, i, j = 1, 2. It is also evident that

g_{j,i}(1, t) = w_{j,i}(t),   i, j = 1, 2.   (6.24)

Remark. By accounting for (6.24), from (6.22) and (6.23) one obtains the equations

\frac{dw_{1,i}(t)}{dt} = \lambda\,[w_{2,i}(t) - w_{1,i}(t)]

and

\frac{dw_{2,i}(t)}{dt} = \lambda\,[w_{1,i}(t) - w_{2,i}(t)],

whose solutions are identical with those of equations (6.18) and (6.19).
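Once the state space is truncated at some n ≤ N, the forward master equations (6.20)–(6.21) can also be integrated numerically as they stand. The sketch below is a minimal illustration of this, with hypothetical reproduction laws f^(1), f^(2) (only f0 and f2 non-zero), an arbitrary truncation N and arbitrary rate values — none of these numbers come from the book. It checks that the normalisation condition (6.9) is preserved by the dynamics:

```python
# Truncated forward master equations (6.20)-(6.21), explicit Euler steps.
N = 60
Q1, Q2, lam = 0.1, 0.1, 0.05          # hypothetical reaction/switch rates
f1 = {0: 0.45, 2: 0.55}               # state-1 reproduction law (assumed)
f2d = {0: 0.55, 2: 0.45}              # state-2 reproduction law (assumed)

# p[j][n]; initial state S1 with one particle: p[0][1] = 1
p = [[0.0]*(N + 1) for _ in range(2)]
p[0][1] = 1.0

def step(p, h):
    q = [row[:] for row in p]
    for n in range(N + 1):
        d0 = -(n*Q1 + lam)*p[0][n] + lam*p[1][n]
        d1 = -(n*Q2 + lam)*p[1][n] + lam*p[0][n]
        for k, fk in f1.items():      # gain terms: n-k+1 -> n transitions
            m = n - k + 1
            if 0 <= m <= N:
                d0 += Q1*m*fk*p[0][m]
        for k, fk in f2d.items():
            m = n - k + 1
            if 0 <= m <= N:
                d1 += Q2*m*fk*p[1][m]
        q[0][n] += h*d0
        q[1][n] += h*d1
    return q

h, t = 0.02, 0.0
while t < 30.0 - 1e-9:
    p = step(p, h)
    t += h

total = sum(p[0]) + sum(p[1])
print(abs(total - 1.0) < 1e-3)        # (6.9), up to truncation leakage
```

The tolerance is loose because probability can leak through the truncation boundary at n = N; with the mild parameters above the leakage is negligible.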
6.3 Factorial Moments, Variances

In numerous cases of applications, the information contained in the first and second moments of the number of particles is sufficient. If the generating function equations are known, then the equations for the factorial moments can easily be written down. The k'th factorial moment⁵ of the number of particles found in the system of state S_j at time t ≥ 0 is defined by the formula

m_{j,i}^{(k)}(t) = \left[\frac{d^k g_{j,i}(z, t)}{dz^k}\right]_{z=1},   i, j = 1, 2,   (6.25)

provided that the system was found in state S_i at time t = 0 and contained one particle. If the state of the system at the terminal time t is not fixed, then

m_i^{(k)}(t) = m_{1,i}^{(k)}(t) + m_{2,i}^{(k)}(t),   i = 1, 2.   (6.26)
It is interesting to note that, unlike for the backward equation, the above summation for the final state cannot be performed on the defining equations (6.22) and (6.23), since both final states occur in both equations. This is a consequence of using the forward equation, which operates on the final co-ordinates. As was shown in [36], a summation over the final states at the level of the generating functions leads to a closure problem when calculating the moments. The reasons for the occurrence of the closure problem were analysed in more detail in [37], and were shown to be related to the non-linearity of the random medium problem, in that products of two random variables (system state and particle number) occur in the master equations (6.20) and (6.21). On the other hand, first calculating the moments for a crisp final state and then summing the solutions over the final states is free of closure problems for all orders of moments; this is the method we apply in this chapter. In [37] it was also shown that, by methods of summing up infinite series, the closure problem can be dealt with in an exact way, such that also the joint moments of the system state and the number of particles can be calculated without truncating approximations.
6.3.1 The first factorial moments

Since the first moments can be derived correctly also from the backward equations, first (6.16) and (6.17) will be used to this end. One obtains

\frac{dm_{j,1}^{(1)}(t|1)}{dt} = -(Q_1 + \lambda)\, m_{j,1}^{(1)}(t|1) + \lambda\, m_{j,2}^{(1)}(t|1) + Q_1 \sum_{k=0}^{\infty} f_k^{(1)}\, m_{j,1}^{(1)}(t|k)

and

\frac{dm_{j,2}^{(1)}(t|1)}{dt} = -(Q_2 + \lambda)\, m_{j,2}^{(1)}(t|1) + \lambda\, m_{j,1}^{(1)}(t|1) + Q_2 \sum_{k=0}^{\infty} f_k^{(2)}\, m_{j,2}^{(1)}(t|k).

In view of the fact that, for the first moment only, one can use the factorisation

m_{j,i}^{(1)}(t|k) = k\, m_{j,i}^{(1)}(t|1),   i, j = 1, 2,

introducing the notations

\sum_{k=0}^{\infty} k\, f_k^{(i)} = \bar{\nu}_i,   i = 1, 2   (6.27)

⁵ Note that because of the indexing referring to the state of the system, this notation differs from that introduced in (1.50).
and

\alpha_i = Q_i(\bar{\nu}_i - 1),   i = 1, 2,   (6.28)

the above equations take the following form:

\frac{dm_{j,1}^{(1)}(t|1)}{dt} = (\alpha_1 - \lambda)\, m_{j,1}^{(1)}(t|1) + \lambda\, m_{j,2}^{(1)}(t|1)   (6.29)

and

\frac{dm_{j,2}^{(1)}(t|1)}{dt} = (\alpha_2 - \lambda)\, m_{j,2}^{(1)}(t|1) + \lambda\, m_{j,1}^{(1)}(t|1).   (6.30)

Appended with the initial conditions m_{j,i}^{(1)}(0) = \delta_{j,i}, i, j = 1, 2, the solution of these equations is identical with that of the forward equations, given below. From the forward equations (6.22) and (6.23), one can write down the equations directly as

\frac{dm_{1,i}^{(1)}(t)}{dt} = \alpha_1\, m_{1,i}^{(1)}(t) + \lambda\,\bigl[m_{2,i}^{(1)}(t) - m_{1,i}^{(1)}(t)\bigr]   (6.31)

and

\frac{dm_{2,i}^{(1)}(t)}{dt} = \alpha_2\, m_{2,i}^{(1)}(t) + \lambda\,\bigl[m_{1,i}^{(1)}(t) - m_{2,i}^{(1)}(t)\bigr].   (6.32)

By taking into account the initial conditions m_{j,i}^{(1)}(0) = \delta_{ij}, i, j = 1, 2, one obtains exactly the same solutions as from (6.29) and (6.30).
6.3.2 Properties

Calculate the expectation of the number of particles at a time t ≥ 0 when the system is found in the crisp state S_j, j = 1, 2, provided that it was in the state S_i, i = 1, 2 at time t = 0. From either the forward or the backward equations, one obtains the solutions

m_{1,1}^{(1)}(t) = \frac{s_1 + \lambda - \alpha_2}{s_1 - s_2}\, e^{s_1 t} - \frac{s_2 + \lambda - \alpha_2}{s_1 - s_2}\, e^{s_2 t},   (6.33)

m_{2,1}^{(1)}(t) = m_{1,2}^{(1)}(t) = \frac{\lambda}{s_1 - s_2}\, e^{s_1 t} - \frac{\lambda}{s_1 - s_2}\, e^{s_2 t},   (6.34)

m_{2,2}^{(1)}(t) = \frac{s_1 + \lambda - \alpha_1}{s_1 - s_2}\, e^{s_1 t} - \frac{s_2 + \lambda - \alpha_1}{s_1 - s_2}\, e^{s_2 t}.   (6.35)

Here

s_1 = \frac{1}{2}\Bigl[\alpha_1 + \alpha_2 - 2\lambda + \sqrt{(\alpha_1 - \alpha_2)^2 + 4\lambda^2}\Bigr],   (6.36)

s_2 = \frac{1}{2}\Bigl[\alpha_1 + \alpha_2 - 2\lambda - \sqrt{(\alpha_1 - \alpha_2)^2 + 4\lambda^2}\Bigr]   (6.37)

are the roots of the characteristic equation (s + \lambda - \alpha_1)(s + \lambda - \alpha_2) - \lambda^2 = 0. One can easily see that

s_1 - s_2 = \sqrt{(\alpha_1 - \alpha_2)^2 + 4\lambda^2} \equiv \delta > 0.   (6.38)
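The closed forms (6.33)–(6.35) can be cross-checked against a direct numerical integration of the forward equations (6.31)–(6.32); the parameter values in the sketch below are merely illustrative, not prescribed by the text:

```python
import math

alpha1, alpha2, lam = -0.02, 0.01, 0.015       # illustrative values
d = math.sqrt((alpha1 - alpha2)**2 + 4*lam**2)
s1 = 0.5*(alpha1 + alpha2 - 2*lam + d)         # (6.36)
s2 = 0.5*(alpha1 + alpha2 - 2*lam - d)         # (6.37)

def m11(t):   # closed form (6.33)
    return ((s1 + lam - alpha2)*math.exp(s1*t)
            - (s2 + lam - alpha2)*math.exp(s2*t)) / (s1 - s2)

def m21(t):   # closed form (6.34)
    return lam*(math.exp(s1*t) - math.exp(s2*t)) / (s1 - s2)

def rhs(y):   # forward first-moment equations (6.31)-(6.32), initial state S1
    a, b = y
    return (alpha1*a + lam*(b - a), alpha2*b + lam*(a - b))

y, t, h = (1.0, 0.0), 0.0, 0.01
for _ in range(5000):                          # integrate to t = 50 with RK4
    k1 = rhs(y)
    k2 = rhs((y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
    k3 = rhs((y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
    k4 = rhs((y[0] + h*k3[0], y[1] + h*k3[1]))
    y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
         y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    t += h

print(abs(y[0] - m11(t)) < 1e-9, abs(y[1] - m21(t)) < 1e-9)
```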
Investigate first the effect of the state of the system at the initial time t = 0 on the time-dependence of the expectation of the particle number. For this, calculate the expectations

m_i^{(1)}(t) = m_{1,i}^{(1)}(t) + m_{2,i}^{(1)}(t),   i = 1, 2.   (6.39)

Based on equations (6.33)–(6.35), one can write that

m_1^{(1)}(t) = \frac{s_1 + 2\lambda - \alpha_2}{s_1 - s_2}\, e^{s_1 t} - \frac{s_2 + 2\lambda - \alpha_2}{s_1 - s_2}\, e^{s_2 t}   (6.40)

and

m_2^{(1)}(t) = \frac{s_1 + 2\lambda - \alpha_1}{s_1 - s_2}\, e^{s_1 t} - \frac{s_2 + 2\lambda - \alpha_1}{s_1 - s_2}\, e^{s_2 t}.   (6.41)

We can now investigate the asymptotic properties and the related question of the definition of criticality. One observes that the conventional definition of the critical state of the system needs to be modified in the case of a randomly varying system. If the state of the system does not vary, then the 'convention' is that for

\sum_{k=0}^{\infty} k\, f_k - 1 = \bar{\nu} - 1 = \frac{\alpha}{Q} = 0,
the system is referred to as critical, for α < 0 as subcritical, while for α > 0 as supercritical. The expectation of the number of particles in a critical system is constant, in a subcritical system it converges to zero, whereas in a supercritical system it tends to infinity if t → ∞ (cf. (1.57) and (1.58)).⁶

In a randomly varying medium, Q and fk, where k ∈ Z⁺, are random variables. Define the random variable

\varphi(z) = Q\left(\sum_{k=0}^{\infty} f_k\, z^k - z\right).

In the mathematical literature, the branching process in a random medium is called critical if

E\left[\frac{d\varphi(z)}{dz}\right]_{z=1} = 0.

In this case, however, one cannot claim that the expectation of the number of particles in a critical state is constant, not even that it is bounded. In the simplest case, when the medium has only two states, namely S1 and S2, and each of these has a probability of 1/2, one obtains the condition of the critical state as

E\left[\frac{d\varphi(z)}{dz}\right]_{z=1} = \frac{1}{2}(\alpha_1 + \alpha_2) = 0,

where \alpha_i = Q_i(\bar{\nu}_i - 1), i = 1, 2. In this case

s_1 = -\lambda + \sqrt{\lambda^2 + \alpha^2}   and   s_2 = -\lambda - \sqrt{\lambda^2 + \alpha^2},

where α1 = −α2 = α > 0. It is seen that s1 > 0, which means that the expectations in (6.33)–(6.35) become infinite if t → ∞.

It appears practical to formulate the definition of criticality for a randomly varying medium as follows: the system is critical if the expectation of the number of particles generated by the branching process is finite and larger than zero when t → ∞. The system is subcritical if m_{j,i}^{(1)}(∞) = 0, i, j = 1, 2 and supercritical if m_{j,i}^{(1)}(∞) = ∞, i, j = 1, 2.

⁶ It is important to point out that the classification is based on the expectation.
Since in the case considered here s1 > s2, one can claim that a randomly varying binary system is critical if s1 = 0. This yields a relationship between the parameters λ and αi, i = 1, 2 from the criticality condition s1 = 0. From the equation

2\lambda - \alpha_1 - \alpha_2 = \sqrt{(\alpha_1 - \alpha_2)^2 + 4\lambda^2},

one arrives at

\lambda_{cr} = \frac{\alpha_1 \alpha_2}{\alpha_1 + \alpha_2} > 0.   (6.42)
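As a quick numeric sanity check of (6.42), one can verify that λcr makes the larger root s1 of (6.36) vanish. The parameter pair α1 = −0.02, α2 = 0.01 (with α1 + α2 < 0) is the one used repeatedly in the figures of this section:

```python
import math

alpha1, alpha2 = -0.02, 0.01                 # mixed-sign pair, sum < 0
lam_cr = alpha1*alpha2/(alpha1 + alpha2)     # (6.42)

def s_roots(lam):                            # roots (6.36)-(6.37)
    d = math.sqrt((alpha1 - alpha2)**2 + 4*lam**2)
    return (0.5*(alpha1 + alpha2 - 2*lam + d),
            0.5*(alpha1 + alpha2 - 2*lam - d))

s1, s2 = s_roots(lam_cr)
print(abs(lam_cr - 0.02) < 1e-12)            # lam_cr = 0.02 here
print(abs(s1) < 1e-12, s2 < 0)               # criticality in the mean
```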
This condition, however, is only a necessary but not sufficient condition of criticality, since, as seen, it is fulfilled also for α1 > 0 and α2 > 0, although that case corresponds to the supercritical state. For (6.42) to also be a sufficient condition of criticality, one of α1 and α2 must be negative and the other positive, and the inequality α1 + α2 < 0 must hold. This also means that in the conventional sense, the system is at times subcritical and at other times supercritical, but never exactly critical. In order that the system be critical in the mean, the frequency λ of the state change should be equal to the fixed value determined by formula (6.42). Hence the criticality of the randomly varying medium can be considered as criticality in the mean. Figure 6.1 shows the parameter plane (α1, α2). The points critical in the mean are contained in the regions CRM, in which α1 + α2 < 0 but either α1 or α2 is larger than 0, on the condition that the frequency of the change of state λ is exactly equal to the value λcr = α1α2/(α1 + α2) corresponding to the given point (α1, α2). If λ > λcr, then the points of the region CRM are subcritical in the mean; if, however, λ < λcr, then the points define a state supercritical in the mean. The points of the regions SPCRM with α1 + α2 > 0, but such that the signs of α1 and α2 are different, correspond to the state supercritical in the mean. If both α1 and α2 are larger than 0, then the system is strongly supercritical; whereas if both are smaller than 0, then it is strongly subcritical. The first set of points lies in the region SSPCR, the second in the region SSBCR. Figure 6.2 shows the time-dependence of the expectation of the particle number in a system critical in the mean, corresponding to the condition λ = λcr.
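For orientation in the parameter plane, the classification just described can be condensed into a small helper. The function below is our own illustrative construction (not from the book); the labels follow the terminology of Fig. 6.1, and the decision for mixed-sign points is made via the sign of s1 from (6.36):

```python
import math

def classify(alpha1, alpha2, lam):
    """Classify a point of the (alpha1, alpha2) plane for a given lam."""
    if alpha1 > 0 and alpha2 > 0:
        return "strongly supercritical (SSPCR)"
    if alpha1 < 0 and alpha2 < 0:
        return "strongly subcritical (SSBCR)"
    # mixed signs: the sign of the larger root s1 decides
    s1 = 0.5*(alpha1 + alpha2 - 2*lam
              + math.sqrt((alpha1 - alpha2)**2 + 4*lam**2))
    if abs(s1) < 1e-12:
        return "critical in the mean (CRM)"
    return "supercritical in the mean" if s1 > 0 else "subcritical in the mean"

print(classify(0.02, 0.01, 0.04))    # both alphas positive
print(classify(-0.02, 0.01, 0.02))   # mixed signs, lam = lam_cr
print(classify(-0.02, 0.01, 0.03))   # mixed signs, lam > lam_cr
```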
It is seen that if the system starts from a supercritical state, then the expectation of the number of particles converges to a value larger than the initial particle number (in our case n(0) = 1), while if it started from a subcritical state, it tends to a value smaller than the initial particle
Figure 6.1 The parameter plane (α1, α2) based on the time-dependence of m_i^{(1)}(t), i = 1, 2. SSPCR is the region of the strongly supercritical state, SPCRM that of the supercritical in the mean, SSBCR the region of the strongly subcritical, and finally CRM that of the critical in the mean under the condition λ = λcr.
[Figure 6.2 plot: mean value vs. time t in 0–100, with λcr = 0.02, s1 = 0, s2 = −0.05; curves for initial states S1 (α1 = −0.02) and S2 (α2 = 0.01).]

Figure 6.2 Time-dependence of the expectation of the number of particles in a critical system (s1 = 0), for the cases of initial state S1 and S2, respectively.
[Figure 6.3 plot: mean value vs. time t in 0–500, with λ = 0.025 > λcr = 0.02, s1 = −0.0008, s2 = −0.0592; curves for initial states S1 (α1 = −0.02) and S2 (α2 = 0.01).]

Figure 6.3 Time-dependence of the expectation of the number of particles in a system subcritical in the mean, λ > λcr, for the cases of initial state S1 and S2, respectively.
number for t → ∞. What is noteworthy is that in a system critical in the mean⁷

\lim_{t\to\infty} m_1^{(1)}(t|cr) = \frac{1}{2}\left[1 + \frac{\alpha_1 - \alpha_2 + 2\lambda}{s_1 - s_2}\right],

\lim_{t\to\infty} m_2^{(1)}(t|cr) = \frac{1}{2}\left[1 + \frac{\alpha_2 - \alpha_1 + 2\lambda}{s_1 - s_2}\right],

i.e.

m_1^{(1)}(\infty|cr) \neq m_2^{(1)}(\infty|cr).   (6.43)
This means that in a randomly varying system critical in the mean, the expectation of the number of particles does not forget which state of the medium the branching process started from.

Figure 6.3 shows the time-dependence of the expectation of the number of particles in the case when λ > λcr, i.e. when the system is subcritical in the mean. It is seen that if the system was supercritical at t = 0, then the expectation initially increases; thereafter, after reaching a maximum, it decreases almost linearly. If the system was subcritical at t = 0, then the expectation decreases monotonically, and rather strongly at the beginning. The difference between the two curves also decreases with increasing time and naturally disappears after infinite time. One can say that the expectation of the number of particles in a system subcritical in the mean, after a sufficiently long time, 'almost forgets' in which state the system was initially. The ratio of the two curves, on the other hand, tends to a constant value, since both decay asymptotically with the same exponent s1; hence in this sense the effect of the initial state is preserved even asymptotically.

⁷ m_i^{(1)}(t|cr), i = 1, 2 denotes the expectation of the number of particles in a system critical in the mean.

The two curves in Fig. 6.4 demonstrate the evolution of the expectation of the number of particles as a function of time for a system supercritical in the mean, corresponding to the inequality λ < λcr. If the system
Branching Processes in a Randomly Varying Medium
l 0.015 lcr 0.02
Mean value
2
S1, a1 0.02 S2, a2 0.01
1.5 s1 0.0012, s2 0.0412
1 0.5 0
100
200 300 Time (t)
400
500
m2(1)(t, l) m1(1) (t, l)
Figure 6.4 Time-dependence of the expectation of the number of particles in a system supercritical in the mean, λ < λcr , for the cases of initial state S1 and S2 , respectively. 0.8 0.7
a1 0.02, a2 0.01
0.6
t 20
0.5
t 40
0.4
lcr 0.02
0.3 0.01
0.015
0.02 l values
0.025
(1)
0.03 (1)
Figure 6.5 The dependence of the difference m2 (t, λ) − m1 (t, λ) on the λ intensity of the random state changes at time moments t = 20 and t = 40. The system is subcritical in the mean if λ > λcr = 0.02 and supercritical in the mean if λ < λcr = 0.02.
was subcritical at start, then the expectation will increase after an initial period of decreasing; if, on the other hand, the system was supercritical at time t = 0, then the expectation starts to increase immediately. (1) (1) Finally, let us calculate the asymptotic expectations mi (∞|cr) ≡ limt →∞ mi (t|cr), i = 1, 2 in the case when α1 = −a1 < 0 α2 > 0 and α2 − a1 < 0, i.e. when the subcritical and critical states alternate with such an intensity that the system will be critical in the mean. From (6.40) and (6.41) and making use of the criticality condition one has (1)
m1 (∞|cr) =
α2 (a1 + α2 ) a12 + α22
and
(1)
m2 (∞|cr) =
a1 (a1 + α2 ) , a12 + α22
i.e. a process critical in the mean remembers the initial state of the system even after infinite time. If the intensity λ of the state changes increases, i.e. during the time λ−1 only a few particle reactions can occur, then the sensitivity of the expectations to the initial state decreases considerably. This tendency is seen in Fig. 6.5. (1) For the sake of completeness, let us write down the expectations mj,i (∞|cr), i, j = 1, 2 for a system critical in the mean. From (6.33)–(6.35) one obtains (1)
(1)
m2,1 (∞|cr) = m1,2 (∞|cr) =
a1 α2 + α22
a12
and (1)
m1,1 (∞|cr) =
α22 , a12 + α22
(1)
whereas m2,2 (∞|cr) =
a12 , a12 + α22
160
Imre Pázsit & Lénárd Pál
i.e. the remembering to the initial state is provided by the transition Si → Si , i = 1, 2.
6.3.3 Second factorial moments For calculating both the variance and the covariance, we need to know the second factorial moments
(2)
mj,i (t|k) =
∂2 gj,i (z, t|k) ∂z2
i, j = 1, 2,
,
(6.44)
z=1
whose determination will be discussed here. In order to show what kind of difficulties the use of backward equations causes in the calculations of the second factorial moments, let us write from (6.16) and (6.17), based on (6.44) the equations (2)
dmj,1 (t|1) dt
(2) (2) −(Q1 + λ)mj,1 (t|1) + λmj,2 (t|1) + Q1
=
∞
(1) (2)
(6.45)
(2) (2)
(6.46)
fk mj,1 (t|k)
k=0
and (2)
dmj,2 (t|1) dt
=
(2) (2) −(Q2 + λ)mj,2 (t|1) + λmj,1 (t|1) + Q2
∞
fk mj,2 (t|k).
k=0
The fundamental difficulty is that now one cannot utilise the equality (2)
(2)
(1)
mj,i (t|k) = kmj,i (t|1) + k(k − 1)[mj,i (t|1)]2 , since the branching processes induced by the k different particles are not independent. Instead of equations (6.45) and (6.46), it appears to be more practical to determine the second factorial moments from the forward equations. From equations (6.22) and (6.23), one obtains the following equations: (2)
dm1,i (t) dt
(2)
(2)
(1)
(6.47)
(2)
(2)
(1)
(6.48)
= (2α1 − λ)m1,i (t) + λm2,i (t) + γ1 m1,i (t)
and (2)
dm2,i (t) dt
= (2α2 − λ)m2,i (t) + λm1,i (t) + γ1 m2,i (t),
where γi ≡ Qi qi (1) = Qi ν(ν − 1) i ,
i = 1, 2.
(6.49)
The parameters γi , related to the second factorial moments of the branching, will play an important role in the continuation. They are analogous to the Diven factor of traditional zero power noise theory. Equations (6.48) and (6.49) are readily solved by e.g. Laplace transform methods with the result (2)
m1,i (t) = γ1 and (2)
m2,i (t) = γ1
t
0
0
t
m1,i (t − t )F1,1 (t )dt + γ2 (1)
m1,i (t − t )F1,2 (t )dt + γ2 (1)
t
0
0
t
m2,i (t − t )F2,1 (t )dt
(6.50)
m2,i (t − t )F2,2 (t )dt ,
(6.51)
(1)
(1)
161
Branching Processes in a Randomly Varying Medium
where σ1 + λ − 2α2 σ1 t σ2 + λ − 2α2 σ2 t e − e , σ1 − σ 2 σ1 − σ 2 λ λ F2,1 (t) = F1,2 (t) = e σ1 t − e σ2 t , σ1 − σ 2 σ1 − σ2 σ1 + λ − 2α1 σ1 t σ2 + λ − 2α1 σ2 t F2,2 (t) = e − e . σ1 − σ 2 σ1 − σ 2
F1,1 (t) =
Further, σ 1 = α1 + α 2 − λ + and σ2 = α1 + α2 − λ −
(6.52) (6.53) (6.54)
(α1 − α2 )2 + λ2 ,
(6.55)
(α1 − α2 )2 + λ2 .
(6.56)
In general, σ1 = s1 and
and
σ2 = s2 ,
σ1 − σ2 = 2 (α1 − α2 )2 + λ2 ≡ > 0.
It is interesting to calculate the time-dependence of the moments to (6.54). The result, after a considerable algebra, is given as (2) mj,i (t)
=
4
(2) mj,i (t),
()
Cj,i φ (t),
(6.57) i, j = 1, 2 in detail from (6.50)
(6.58)
=1
where φ1 (t) =
e σ1 t − e s1 t σ1 − s 1
and
φ2 (t) =
e σ 1 t − e s2 t , σ1 − s 2
(6.59)
φ3 (t) =
e σ 2 t − e s1 t σ2 − s 1
and
φ4 (t) =
e σ 2 t − e s2 t . σ2 − s 2
(6.60)
as well as
()
The values of the coefficients Cj,i are not given here for simplicity; they can be found in [35, 38]. Often one is only interested in which initial state the medium was when the branching process started, but is uninterested to specify the final state. In this case, one has to study the behaviour of the factorial moments (2)
(2)
(2)
mi (t) = m1,i (t) + m2,i (t),
i = 1, 2.
(6.61)
As mentioned earlier, the summation can only be performed after that the individual moments were calculated.
6.3.4 Variances (2)
In possession of the second factorial moments mj,i (t), i, j = 1, 2, the variances can be determined by the formulae (2) (1) (1) D2 {n(t), j|n(0) = 1, i} = vj,i (t) = mj,i (t) + mj,i (t) 1 − mj,i (t) , i, j = 1, 2. (6.62) If one is only interested in the effect of the initial state of the medium, then one has to calculate the variance (2) (1) (1) D2 {n(t)|n(0) = 1, i} = vi (t) = mi (t) + mi (t) 1 − mi (t) , i = 1, 2. (6.63)
162
Imre Pázsit & Lénárd Pál
a1 0.02, a2 0.01
Variances
10 8
v11
6
v21
4
v12
l 0.04, g 1
2
v22
0 0
50
100 Time (t )
150
200
Variances
Figure 6.6 Time-dependence of the variances vj,i (t), i, j = 1,2 in a randomly varying, strongly subcritical system defined by the parameters λ = 0.04, α1 = −0.02, α2 = −0.01 and γ = 1.
[Figure 6.7 plot: variances v11, v21, v12, v22 vs. time t in 0–100.]

Figure 6.7 Time-dependence of the variances vj,i(t) in a randomly varying, strongly supercritical system defined by the parameters λ = 0.04, α1 = 0.02, α2 = 0.01 and γ = 1.
Of course, the variances in (6.62) give more information on the process. Let us first illustrate the variance of the number of particles in a randomly varying strongly subcritical and strongly supercritical medium, respectively, as a function of time. Figure 6.6 shows the time-dependence of the variances in a medium fluctuating between the subcritical states S1 = {α1 = −0.02} and S2 = {α2 = −0.01}. Here, for example, v2,1 (t) is the variance of the number of particles at time t in the medium which is in state S2 = {α2 = −0.01} at t, provided that it was in the initial state S1 = {α1 = −0.02} and contained one particle. The figure shows that the variances ‘remember’ the initial state of the medium even after a relatively long time. Figure 6.7 illustrates the time-dependence of the variances for the case when the random changes of the medium occur between the supercritical states S1 = {α1 = 0.02} and S2 = {α2 = 0.01}. The curves tend to infinity if t → ∞, although with different steepness. It is seen from (6.59) and (6.60) that in a randomly varying medium critical in the mean, for which s1 = 0, by increasing t, the variance diverges exponentially, since σ1 > 0 if s1 = 0. It is well known that in a constant critical medium, the variance of the number of particles diverges linearly in t (cf. (1.60)). The random state changes of the medium, more exactly the fact that the medium is sometimes in a supercritical, sometimes in a subcritical state, modifies radically the time-dependence of the variance, as seen in Fig. 6.8. The ‘competition’ between the final states, commencing already at the start of the process, deserves some attention. After sufficient lapse of time, the magnitude of the variances is primarily determined by the final state, although the effect of ‘remembering’ the initial state does not disappear. 
We shall now explore, in the case when the condition s1 = 0 is fulfilled, how the value of σ1 varies for different fixed values of α2 > 0 as a function of a1 − α2 > 0, considering that α1 = −a1 < 0. The condition s1 = 0 means that

(α1 + α2)/2 − λcr + √(λcr² + [(α1 − α2)/2]²) = 0,
Branching Processes in a Randomly Varying Medium
Figure 6.8 Time-dependence of the variances vj,i (t) in a randomly varying system critical in the mean, defined by the parameters λ = λcr = 0.02, α1 = −0.02, α2 = 0.01 and γ = 1.
Figure 6.9 Variation of σ1 for fixed values of α2 > 0 as a function of a1 − α2 > 0, taking into account that α1 = −a1 < 0.
and hence

σ1 = (α1 + α2)/2 + √(λcr² + (α1 − α2)²) − √(λcr² + [(α1 − α2)/2]²),

where α1 = −a1 < 0, a1 − α2 > 0 and α2 > 0.
Figure 6.9 shows the variation of σ1 for fixed values of α2 > 0 as a function of a1 − α2 > 0. In view of the fact that in a binary randomly varying medium the time-dependence of the variance of the particle number is determined by the roots σ1, σ2, s1, s2, it is interesting to investigate the dependence of the decisive roots σ1 > σ2 and s1 > s2 on the frequency λ. For the sake of simplicity, we concern ourselves only with the behaviour of σ1 and s1, corresponding to the previously investigated point (α1 = −0.02, α2 = 0.01) of the upper CRM region, the latter being seen in Fig. 6.1. In Fig. 6.10, it can be seen that at the frequency λ = λcr = 0.02 one has s1 = 0, thus the expectation of the particle number tends to a finite number larger than zero as t → ∞. On the other hand, since σ1 > 0, the variance diverges exponentially (and not linearly), as was pointed out earlier. Figure 6.10 also reveals that at the frequency λ = 2λcr = 0.04 one has σ1 = 0 and s1 < 0, with the consequence that the variance converges to a finite number larger than zero as t → ∞, as is seen in Fig. 6.11. As can be expected, the variance is larger if the final state is the supercritical one (S2 = {α2 = 0.01}) than if it is the subcritical one (S1 = {α1 = −0.02}). Besides, the effect of the initial state is retained throughout, i.e. even in this case, the process 'does not forget' which state it was started from in a medium subcritical in the mean. The graph in Fig. 6.10 also shows that the variances decay exponentially for frequency values λ > 2λcr. There exists a frequency λ = λ0 at which σ1 = s1 < 0, and in this case, among the functions defined in (6.59),
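These special frequencies can be checked numerically. The sketch below assumes the two-state eigenvalue expressions s1 = (α1 + α2)/2 − λ + √(λ² + [(α1 − α2)/2]²) and σ1 = (α1 + α2) − λ + √(λ² + (α1 − α2)²) (our reading of (6.59)–(6.60), which are not reproduced in this section); with α1 = −0.02, α2 = 0.01 it reproduces λcr = 0.02, 2λcr = 0.04 and λ0 ≈ 0.0632 quoted above.

```python
import math

def s1(lam, alpha1, alpha2):
    # larger root governing the first moments (states coupled at rate lam)
    return 0.5 * (alpha1 + alpha2) - lam + math.sqrt(0.25 * (alpha1 - alpha2) ** 2 + lam ** 2)

def sigma1(lam, alpha1, alpha2):
    # larger root governing the second moments (growth rates doubled)
    return (alpha1 + alpha2) - lam + math.sqrt((alpha1 - alpha2) ** 2 + lam ** 2)

alpha1, alpha2 = -0.02, 0.01          # the point of the CRM region used in the text
a1 = -alpha1
lam_cr = a1 * alpha2 / (a1 - alpha2)  # = 0.02, the frequency where s1 = 0

assert abs(s1(lam_cr, alpha1, alpha2)) < 1e-12
assert abs(sigma1(2.0 * lam_cr, alpha1, alpha2)) < 1e-12
# lambda_0, where sigma1 = s1 < 0, is sqrt(0.004) ~ 0.0632456 for these parameters
lam0 = math.sqrt(0.004)
assert abs(sigma1(lam0, alpha1, alpha2) - s1(lam0, alpha1, alpha2)) < 1e-12
assert s1(lam0, alpha1, alpha2) < 0.0
```

The closed form λcr = a1α2/(a1 − α2) used here is derived later in the section; the assertions confirm that the three frequencies discussed in Figs 6.10–6.12 are mutually consistent.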
Imre Pázsit & Lénárd Pál
Figure 6.10 Dependence of σ1 and s1 on the frequency λ in the point (α1 = −0.02, α2 = 0.01).
Figure 6.11 Time-dependence of the variances vj,i (t), i,j = 1, 2 at the frequency λ = 2λcr = 0.04 corresponding to the value σ1 = 0 in the point (α1 = −0.02, α2 = 0.01) of the CRM domain.
Figure 6.12 Time-dependence of the variances vj,i(t), i, j = 1, 2 at the frequency λ0 ≈ 0.063246 corresponding to the value σ1 = s1 < 0 in the point (α1 = −0.02, α2 = 0.01) of the CRM region.
one has φ1(t) = t e^{s1 t}. As a result, the variances tend to zero almost exponentially as t → ∞. This behaviour is illustrated in Fig. 6.12, which also shows the effect of the initial state on the maxima of the variances. Figure 6.13 illustrates how the variances vary at the time instant t = 100 as a function of the frequency λ. It is seen that the variances of the number of particles produced by processes starting from and arriving at different states deviate from each other less and less with increasing λ, which is self-evident, since the system stays for less and less time in a given state.
Figure 6.13 Dependence of the variances vj,i (100), i,j = 1,2 on the frequency λ in the point (α1 = −0.02, α2 = 0.01) of the CRM region.
6.4 Random Injection of Particles

6.4.1 Derivation of the forward equation

Suppose that in the random multiplying medium at time t = 0 there are n0 = 0, 1, . . . particles, and the system itself is in the state Si, i = 1, 2. We assume that during the time interval [0, t], particles that initiate branching processes are injected into the system randomly and independently of each other. This can happen, for example, in such a way that source particles, uniformly distributed in the system, randomly emit particles that initiate branching processes. Denote by 0 ≤ τk ≤ t the time point of the kth injection (note that τ0 = 0). Let us choose the simplest case, in which the random time intervals θk = τk − τk−1, k = 1, 2, . . . between two consecutive injections are independent, identically distributed random variables, and let

P{θk ≤ t} = 1 − e^{−s0 t},  k = 1, 2, . . . ,  (6.64)
where s0 is the intensity of injection. Moreover, let N(t) be the number of particles present in the source-driven system at t ≥ 0. The objective is the calculation of the generating functions

G_{j,i}(z, t|n0) = Σ_{n=0}^{∞} P_{j,i}(n, t|n0) z^n,  i, j = 1, 2,  (6.65)

of the probabilities

P{N(t) = n, S(t) = Sj | N(0) = n0, S(0) = Si} = P_{j,i}(n, t|n0),  i, j = 1, 2.  (6.66)
Based on well-known considerations, one can immediately write down the forward equations determining the generating functions:

∂G_{1,i}(z, t|n0)/∂t = (z − 1) s0 G_{1,i}(z, t|n0) + Q1[q1(z) − z] ∂G_{1,i}(z, t|n0)/∂z + λ[G_{2,i}(z, t|n0) − G_{1,i}(z, t|n0)]  (6.67)

and

∂G_{2,i}(z, t|n0)/∂t = (z − 1) s0 G_{2,i}(z, t|n0) + Q2[q2(z) − z] ∂G_{2,i}(z, t|n0)/∂z + λ[G_{1,i}(z, t|n0) − G_{2,i}(z, t|n0)].  (6.68)

The initial conditions are given by the formulae

G_{j,i}(z, 0|n0) = δ_{j,i} z^{n0},  i, j = 1, 2,  (6.69)
and, of course, the relationships

G_{j,i}(1, t|n0) = Σ_{n=0}^{∞} P_{j,i}(n, t|n0) = w_{j,i}(t),  i, j = 1, 2,  (6.70)

also have to be satisfied.
6.4.2 Expectations, variances, covariances

Expectations
The expectations of N(t), by taking account of the possible states of the multiplying medium, can be calculated by the formula

[∂G_{j,i}(z, t|n0)/∂z]_{z=1} = M^(1)_{j,i}(t|n0),  i, j = 1, 2.  (6.71)

If n0 = 0, then we use the notation M^(1)_{j,i}(t|0) = M^(1)_{j,i}(t). From (6.67) and (6.68), one obtains the equations

dM^(1)_{1,i}(t|n0)/dt = s0 w_{1,i}(t) + α1 M^(1)_{1,i}(t|n0) + λ[M^(1)_{2,i}(t|n0) − M^(1)_{1,i}(t|n0)]  (6.72)

and

dM^(1)_{2,i}(t|n0)/dt = s0 w_{2,i}(t) + α2 M^(1)_{2,i}(t|n0) + λ[M^(1)_{1,i}(t|n0) − M^(1)_{2,i}(t|n0)]  (6.73)
Figure 6.14 Initial time-dependence of the expectations M^(1)_{j,i}(t), i, j = 1, 2 in a strongly subcritical medium which contained no particles at time t = 0.
Figure 6.15 Time-dependence of the expectations M^(1)_{j,i}(t), i, j = 1, 2 in a strongly subcritical medium which did not contain any particles at time t = 0. The saturation values M^(1)_{j,i}(∞) = M^(1)_{j,i} (M^(1)_{1,i} → 32.1, M^(1)_{2,i} → 35.7) can also be seen in the figure.
with the initial conditions based on (6.69):

M^(1)_{j,i}(0|n0) = δ_{j,i} n0,  i, j = 1, 2.  (6.74)
Omitting the details of the calculations, the solution is given as

M^(1)_{1,i}(t|n0) = s0 ∫_0^t w_{1,i}(t − t′) m^(1)_{1,1}(t′) dt′ + s0 ∫_0^t w_{2,i}(t − t′) m^(1)_{1,2}(t′) dt′ + n0 [δ_{1,i} m^(1)_{1,1}(t) + δ_{2,i} m^(1)_{1,2}(t)]  (6.75)

and

M^(1)_{2,i}(t|n0) = s0 ∫_0^t w_{2,i}(t − t′) m^(1)_{2,2}(t′) dt′ + s0 ∫_0^t w_{1,i}(t − t′) m^(1)_{2,1}(t′) dt′ + n0 [δ_{2,i} m^(1)_{2,2}(t) + δ_{1,i} m^(1)_{2,1}(t)].  (6.76)

Figure 6.14 illustrates the initial time-dependence of the expectations M^(1)_{j,i}(t), i, j = 1, 2 in a strongly subcritical medium. What is remarkable is the rearrangement of the expectations as a result of the decreasing effect of the initial state in time, which eventually leads to the time-dependence seen in Fig. 6.15. Notice that M^(1)_{1,1}(∞) = M^(1)_{1,2}(∞) and M^(1)_{2,1}(∞) = M^(1)_{2,2}(∞).
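A quick numerical cross-check of (6.72)–(6.73) is possible without the explicit solutions: forward-Euler integration, with the state-occupation probabilities of the symmetric two-state Markov chain, w_{i,i}(t) = (1 + e^{−2λt})/2 (an assumption of this sketch), reproduces the saturation values quoted in Fig. 6.15.

```python
import math

def w_state(j, i, lam, t):
    # P{S(t) = Sj | S(0) = Si} for a symmetric two-state Markov chain
    # with switching intensity lam in both directions (assumed here).
    same = 0.5 * (1.0 + math.exp(-2.0 * lam * t))
    return same if i == j else 1.0 - same

def integrate_means(alpha1, alpha2, lam, s0, i, T=600.0, dt=0.01):
    # Forward-Euler integration of equations (6.72)-(6.73) with n0 = 0.
    m1 = m2 = 0.0
    t = 0.0
    while t < T:
        d1 = s0 * w_state(1, i, lam, t) + alpha1 * m1 + lam * (m2 - m1)
        d2 = s0 * w_state(2, i, lam, t) + alpha2 * m2 + lam * (m1 - m2)
        m1, m2, t = m1 + dt * d1, m2 + dt * d2, t + dt
    return m1, m2

alpha1, alpha2, lam, s0 = -0.02, -0.01, 0.04, 1.0
den = alpha1 * alpha2 - lam * (alpha1 + alpha2)
M1_inf = 0.5 * s0 * (2 * lam - alpha2) / den   # (6.79): about 32.1
M2_inf = 0.5 * s0 * (2 * lam - alpha1) / den   # (6.80): about 35.7
m1, m2 = integrate_means(alpha1, alpha2, lam, s0, i=1)
assert abs(m1 - M1_inf) < 0.1 and abs(m2 - M2_inf) < 0.1
```

The stationary values are independent of the initial state index i, in agreement with the observation M^(1)_{j,1}(∞) = M^(1)_{j,2}(∞) above.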
Figure 6.16 shows the initial time-dependence of the expectations Mj,i (t), i, j = 1, 2 in a medium critical in the mean. It is this initial ‘chaos’ from which the time-dependence seen in Fig. 6.17 develops, showing how the expectations tend to infinity if t → ∞. If at time t ≥ 0 the state of the medium can be either S1 or S2 , then (1)
(1)
(1)
Mi (t|n0 ) = M1,i (t|n0 ) + M2,i (t|n0 ),
(6.77)
a1 0.02, a2 0.01
30 Mean values
i = 1, 2.
Figure 6.16 Initial time-dependence of the expectations Mj,i (t), i, j = 1, 2 in a medium critical in the mean, which contained no particles at time t = 0.
Figure 6.17 Time-dependence of the expectations M^(1)_{j,i}(t), i, j = 1, 2 in a medium critical in the mean which contained no particles at time t = 0.
Notice that

M^(1)_i(t|n0) = s0 ∫_0^t w_{1,i}(t − t′) m^(1)_1(t′) dt′ + s0 ∫_0^t w_{2,i}(t − t′) m^(1)_2(t′) dt′ + n0 [δ_{1,i} m^(1)_1(t) + δ_{2,i} m^(1)_2(t)],  i = 1, 2.  (6.78)
In the case when s1 < 0, i.e. if the medium is subcritical, there exist the asymptotic values

lim_{t→∞} M^(1)_{j,i}(t|n0) = M^(1)_{j,i}(∞) = M^(1)_{j,i},  i, j = 1, 2,

and these can easily be determined. After a short calculation, one obtains that

M^(1)_{1,i}(∞) = (1/2) s0 (2λ − α2)/[α1α2 − λ(α1 + α2)] = M^(1)(1, ∞)  (6.79)

and

M^(1)_{2,i}(∞) = (1/2) s0 (2λ − α1)/[α1α2 − λ(α1 + α2)] = M^(1)(2, ∞).  (6.80)

It can be seen that the initial state Si has no influence on the asymptotic values; on the other hand, the state to which the medium converges when t → ∞ obviously does. Based on (6.79) and (6.80), one can write that
M^(1)_i(∞) = M^(1)(1, ∞) + M^(1)(2, ∞) = (1/2) s0 [4λ − (α1 + α2)]/[α1α2 − λ(α1 + α2)] = M^(1)(∞) = M,  (6.81)

and one can see that, naturally, the index i referring to the initial state is not needed. It is obvious that in a strongly subcritical medium, when α1 = −a1 < 0 and α2 = −a2 < 0,

M^(1)(∞) = (1/2) s0 (4λ + a1 + a2)/[a1a2 + λ(a1 + a2)],

and since a1 and a2 can always be chosen in the forms

a1 = a + δa,  a2 = a − δa,

one arrives at

M^(1)(∞) = (s0/a) [1 + (δa)²/(a² − (δa)² + 2λa)].

Here, s0/a is the stationary expectation of the particle number in a constant medium characterised by the multiplication constant α = −a.8 It can be seen that the random state changes of the medium increase this value, i.e. the expectation of the particle number in a system fluctuating around a certain static value will be higher than that in the static system. Similar results have been presented in [36, 39].

Investigate now the case when the medium is subcritical in the mean. Let α1 = −a1 < 0, α2 > 0, and let the inequality α1 + α2 < 0 hold. After some calculations, one can write that

M^(1)(∞) = [2s0/(a1 − α2)] {1 + (1/4)(a1 + α2)²/[λ(a1 − α2) − a1α2]},

and from this it can immediately be seen that the inequality

λ > a1α2/(a1 − α2) = λcr

must be fulfilled. In other words, in this case λ cannot be an arbitrarily small positive number.

8 Since the system is strongly subcritical, the frequency λ can take any positive real value.
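The equivalence of the direct form (6.81) and the (δa)-form above, and the statement that random state changes increase the stationary mean, can be verified directly (the particular parameter values are arbitrary):

```python
s0, a, da, lam = 1.0, 0.02, 0.005, 0.01
a1, a2 = a + da, a - da
alpha1, alpha2 = -a1, -a2

# direct stationary mean, equation (6.81)
M_direct = 0.5 * s0 * (4 * lam - (alpha1 + alpha2)) / (alpha1 * alpha2 - lam * (alpha1 + alpha2))
# the (delta a)-form derived from it
M_da = (s0 / a) * (1.0 + da ** 2 / (a ** 2 - da ** 2 + 2 * lam * a))

assert abs(M_direct - M_da) < 1e-9
assert M_da > s0 / a   # random state changes increase the mean above the static value s0/a
```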
Variances
In order to determine the variances, the factorial moments

[∂²G_{j,i}(z, t|n0)/∂z²]_{z=1} = M^(2)_{j,i}(t|n0),  i, j = 1, 2  (6.82)

have to be calculated. If n0 = 0, then again we use the notation M^(2)_{j,i}(t|0) = M^(2)_{j,i}(t). From equations (6.67) and (6.68) one obtains

dM^(2)_{1,i}(t|n0)/dt = 2α1 M^(2)_{1,i}(t|n0) + λ[M^(2)_{2,i}(t|n0) − M^(2)_{1,i}(t|n0)] + (2s0 + γ1) M^(1)_{1,i}(t|n0)  (6.83)

and

dM^(2)_{2,i}(t|n0)/dt = 2α2 M^(2)_{2,i}(t|n0) + λ[M^(2)_{1,i}(t|n0) − M^(2)_{2,i}(t|n0)] + (2s0 + γ2) M^(1)_{2,i}(t|n0).  (6.84)

The initial conditions

M^(2)_{j,i}(0|n0) = δ_{j,i} n0(n0 − 1),  i, j = 1, 2
can be determined from (6.69). The solutions read as

M^(2)_{1,i}(t|n0) = (γ1 + 2s0) ∫_0^t F_{1,1}(t − t′) M^(1)_{1,i}(t′|n0) dt′ + (γ2 + 2s0) ∫_0^t F_{1,2}(t − t′) M^(1)_{2,i}(t′|n0) dt′ + n0(n0 − 1)[δ_{1,i} F_{1,1}(t) + δ_{2,i} F_{1,2}(t)],  (6.85)

M^(2)_{2,i}(t|n0) = (γ1 + 2s0) ∫_0^t F_{2,1}(t − t′) M^(1)_{1,i}(t′|n0) dt′ + (γ2 + 2s0) ∫_0^t F_{2,2}(t − t′) M^(1)_{2,i}(t′|n0) dt′ + n0(n0 − 1)[δ_{1,i} F_{2,1}(t) + δ_{2,i} F_{2,2}(t)],  (6.86)
where the functions F_{j,i}(t) are given by (6.52)–(6.54). Likewise, for the case when at t > 0 the system can be in either state, one uses

M^(2)_i(t|n0) = M^(2)_{1,i}(t|n0) + M^(2)_{2,i}(t|n0),

and from the above this is given as

M^(2)_i(t|n0) = (γ1 + 2s0) ∫_0^t F_1(t − t′) M^(1)_{1,i}(t′|n0) dt′ + (γ2 + 2s0) ∫_0^t F_2(t − t′) M^(1)_{2,i}(t′|n0) dt′ + n0(n0 − 1)[δ_{1,i} F_1(t) + δ_{2,i} F_2(t)],  (6.87)

where F_i(t) = F_{1,i}(t) + F_{2,i}(t). In these integrals only known, already determined functions occur, so the integrations can easily be performed. They are, however, not needed for the numerical calculations. We are primarily interested in the characteristics of the variances, especially in the case when there is no particle in the multiplying system at time t = 0, i.e. when n0 = 0. For simplicity, using the notation

{N(t), S(t) = Sj | N(0) = 0, S(0) = Si} = N_{j,i}(t),
Figure 6.18 Initial section of the time-dependence of the variances Vj,i(t) in a randomly varying, strongly subcritical system defined by the parameters λ = 0.04, α1 = −0.02, α2 = −0.01, γ = 1 and s0 = 1.
Figure 6.19 Time-dependence of the variances Vj,i (t) in the long-time interval [0, t] in a randomly varying, strongly subcritical system defined by the parameters λ = 0.04, α1 = −0.02, α2 = −0.01, γ = 1 and s0 = 1. The values Vj,i (∞) are also shown in the figure.
one can write that

D²{N_{j,i}(t)} = M^(2)_{j,i}(t) + M^(1)_{j,i}(t)[1 − M^(1)_{j,i}(t)] = V_{j,i}(t),  i, j = 1, 2.  (6.88)
In a strongly subcritical system, when α1 < 0 and α2 < 0, the initial time-dependence of the variance of N_{j,i}(t) for some definite values of the parameters is illustrated in Fig. 6.18. A remarkable phenomenon is the rearrangement of the time-dependences due to the gradual weakening of the effect of the initial state. In Fig. 6.19 the development of the 'saturation' can be observed in the time-dependence of the variances V_{j,i}(t), i, j = 1, 2. For the sake of illustration, the limiting values belonging to t → ∞ are also given there. It is worth noticing that

lim_{t→∞} V_{1,1}(t) = lim_{t→∞} V_{1,2}(t) = V(1, ∞)

and

lim_{t→∞} V_{2,1}(t) = lim_{t→∞} V_{2,2}(t) = V(2, ∞),

which shows that the process, being weakly stationary for t → ∞, feels only the instantaneous state of the medium and obviously completely forgets which state it was in at time t = 0. In a system critical in the mean, the variances V_{j,i}(t), i, j = 1, 2 tend to infinity for t → ∞. However, the time-dependence of the variances at the beginning of the process, belonging to various initial and final states, is rather interesting.9 This is seen in Fig. 6.20, while Fig. 6.21 illustrates the rapid increase just before entering the asymptotic, divergent state.

9 One notices that the influence of the initial state decreases with the passing of time; the curves cross each other, e.g. the curve V_{1,1}(t) dips to the bottom while the curve V_{1,2}(t) moves immediately above it.
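The limiting values quoted in Fig. 6.19 follow from the stationary first moments (6.79)–(6.80) and the stationary second factorial moments given later in (6.90)–(6.91), inserted into (6.88). A minimal check (the function name is ours):

```python
def stationary_state_variances(alpha1, alpha2, lam, g1, g2, s0):
    # stationary first moments, equations (6.79)-(6.80)
    d1 = alpha1 * alpha2 - lam * (alpha1 + alpha2)
    M1 = 0.5 * s0 * (2 * lam - alpha2) / d1
    M2 = 0.5 * s0 * (2 * lam - alpha1) / d1
    # stationary second factorial moments, equations (6.90)-(6.91)
    d2 = 2 * alpha1 * alpha2 - lam * (alpha1 + alpha2)
    M2_1 = 0.5 * ((g1 + 2 * s0) * (lam - 2 * alpha2) * M1 + (g2 + 2 * s0) * lam * M2) / d2
    M2_2 = 0.5 * ((g1 + 2 * s0) * lam * M1 + (g2 + 2 * s0) * (lam - 2 * alpha1) * M2) / d2
    # per-state variances via (6.88) in the limit t -> infinity
    V1 = M2_1 + M1 * (1.0 - M1)
    V2 = M2_2 + M2 * (1.0 - M2)
    return V1, V2

V1, V2 = stationary_state_variances(-0.02, -0.01, 0.04, 1.0, 1.0, 1.0)
assert abs(V1 - 2146.3) < 1.0   # the common limit of V11 and V12 in Fig. 6.19
assert abs(V2 - 2644.1) < 1.0   # the common limit of V21 and V22
```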
Figure 6.20 Initial sections of the time-dependence of the variances Vj, i (t), i, j = 1, 2 in a multiplying system critical in the mean, whose parameters are shown in the figure.
Figure 6.21 Time-dependence of the variances Vj,i(t), i, j = 1, 2 in the time interval just before entering the asymptotically diverging state, in a multiplying system critical in the mean.
Figure 6.22 Initial sections of the time-dependence of the variances Vi (t), i = 1, 2 in a strongly subcritical system.
In the case when the system can be either in state S1 or S2 at t ≥ 0, the variance of N(t) is given by the formula

D²{N(t)|S(0) = Si} = V_i(t) = M^(2)_i(t) + M^(1)_i(t)[1 − M^(1)_i(t)],  (6.89)

provided that the system was in state Si and did not contain any particles at time t = 0.10 Figure 6.22 illustrates well that although the state of the multiplying medium at time t = 0 slightly influences the time-dependence of the variances at the beginning of the process, this influence becomes negligible with the passing

10 Although it is trivial, it is worth mentioning that V_i(t) cannot be constructed as a sum of the variances V_{1,i}(t) and V_{2,i}(t).
Figure 6.23 Time-dependence of the variances Vi (t), i = 1,2 in the long interval [0, t] in a strongly subcritical system.
of time, as shown in Fig. 6.23. Moreover, it is obvious that the asymptotic values belonging to t → ∞ are exactly identical, i.e. V1(∞) = V2(∞) = V. The values of V belonging to the given parameters are shown in Fig. 6.23. Investigate now the characteristics of the constant variance of the particle number in the case of a weakly stationary process, maintained by injection of particles into a strongly subcritical random medium.11 One obtains that

M^(2)_{1,i} = M^(2)(1, ∞) = (1/2) [(γ1 + 2s0)(λ − 2α2) M^(1)_{1,i} + (γ2 + 2s0) λ M^(1)_{2,i}] / [2α1α2 − λ(α1 + α2)]  (6.90)

and

M^(2)_{2,i} = M^(2)(2, ∞) = (1/2) [(γ1 + 2s0) λ M^(1)_{1,i} + (γ2 + 2s0)(λ − 2α1) M^(1)_{2,i}] / [2α1α2 − λ(α1 + α2)],  (6.91)

where M^(1)_{1,i} and M^(1)_{2,i} are identical with the formulae (6.79) and (6.80), for which we have seen that they do not depend on the index i, that is, on the state of the system at t = 0. As a consequence, the moments M^(2)_{1,i} and M^(2)_{2,i} are also independent of the index i. In the case when the state of the system at a certain time of the stationary process can be either S1 or S2, one can write that

M^(2) = M^(2)(1, ∞) + M^(2)(2, ∞) = [(γ1 + 2s0)(λ − α2) M^(1)(1, ∞) + (γ2 + 2s0)(λ − α1) M^(1)(2, ∞)] / [2α1α2 − λ(α1 + α2)].  (6.92)

By taking the above into consideration, in the case of a process maintained by injection into a strongly subcritical system, the variance of the particle number can be calculated from the formula

V(λ) = M^(2) + M^(1)[1 − M^(1)],  (6.93)

in which

M^(1) = M^(1)_1 + M^(1)_2 = (1/2) s0 [4λ − (α1 + α2)]/[α1α2 − λ(α1 + α2)] = M.

Figure 6.24 illustrates the dependence of the ratio V(λ)/V(0) on the frequency λ, characterising the random state changes of the strongly subcritical medium, for a fixed value α2 < 0 and different values α1 < 0. The value V(0) belongs to a static system with well-defined parameters. It is noteworthy that with increasing λ, i.e. with decreasing average time of staying in a given state, the variance decreases. This is evident, since frequent changes of state must result in a decrease of the fluctuations of the particle number.

11 In this case α1 < 0, α2 < 0, hence σ1 < 0.
Branching Processes in a Randomly Varying Medium
Ratio of variances
1 0.9
a2 0.01 a1 0.02
0.8
a1 0.03
0.7
a1 0.04
0.6 g1 g2 1, s0 1
0.5 0.4 0.3 0
0.01
0.02 0.03 Frequency (l)
0.04
0.05
Figure 6.24 Dependence of the ratio V(λ)/V(0) on the frequency λ of the random state changes of the strongly subcritical medium, with fixed values α2 < 0 and different values α1 < 0.
Figure 6.25 Dependence of the ratio V(λ)/V0 on the frequency λ of the random state changes of the strongly subcritical medium, with fixed values α2 < 0 and different values α1 < 0.
If λ = 0, i.e. the state of the medium does not change, then α1 = α2 = α and γ1 = γ2 = γ. In that case, one obtains from (6.93) the well-known formula valid for a medium of constant parameters,

V = (s0/a)[1 + γ/(2a)],

in which a = −α > 0. Define now a subcritical multiplying medium characterised by the parameters

α = 2α1α2/(α1 + α2)  and  γ = (γ1 + γ2)/2.

In this case, the stationary value of the variance of the particle number equals

V0 = [s0/(−α)][1 + γ/(2(−α))].

Compare now the variances in a strongly subcritical medium of randomly changing parameters at various frequencies λ with this value V0. The result of the comparison is shown in Fig. 6.25. The expectation of the particle number at a given time moment in a weakly stationary process, maintained by random injection in a subcritical medium of constant parameters, is given by the formula M = s0/a, in which a = −α > 0. The variance of the particle number

V = M[1 + γ/(2a)]
is linear in M. If, however, the state of the multiplying medium varies randomly in time, then a component containing M² also appears in the stationary variance. This follows immediately from (6.93). We shall now 'trace' the appearance of the term M². To this end, assume that there is only a small difference between α1 and α2, as well as between γ1 and γ2. Introduce the notations

α1 = α + δα < 0 and α2 = α − δα < 0, where δα > 0;

further, let

γ1 = γ + δγ > 0 and γ2 = γ − δγ > 0, where δγ > 0.

Investigate the case when

δα/a ≪ 1,  δγ/γ ≪ 1  and  λ/γ ≪ 1,

where a = −α > 0. By performing the expansion in (6.90) in the small parameters δα/a, δγ/γ and λ/γ up to the quadratic terms, one obtains that

V = M[1 + γ/(2a)] + M[(1 + 3γ/(2a))(δα/a)²(1 − 2λ/a + 4λ²/a²) + (δα/a)(δγ/a)(1 − 3λ/(2a) + 5λ²/(2a²))] + M²(δα/a)²(1 − 2λ/a) + ··· .  (6.94)

One can see that, due to the effect of the random state changes of the medium, not only do members containing M² appear in the variance, but the components linear in M are also modified. In this notation it is also directly seen that in the case of λ = δα = δγ = 0, the formula reverts to the one valid in a constant medium.

It seems worthwhile to make a brief remark here. Identify the particles with neutrons in nuclear reactors. In this case, it is this formula that one can compare with the traditional zero power noise and power reactor noise formulae. With regard to the case of the static system, i.e. when δα = 0, it is seen that only the first term on the right-hand side of V remains, and we know that this is exactly identical with the traditional result for the variance of the number of neutrons in a subcritical system with a source, traditionally written in the form

V = M[1 + ⟨ν(ν − 1)⟩/(2⟨ν⟩|ρ|)],  (6.95)

since the reactivity ρ in the present model is equal to α/(q1Q) [40]. Here, as is usual in the reactor physics literature, ⟨ν⟩ and ⟨ν(ν − 1)⟩ are the first and second factorial moments of the number of neutrons generated per fission event, in contrast to the q1 and q2 which occur in the first term on the right-hand side of (6.94) through

γ/a = q2/(q1|ρ|),

since these latter are the first and second factorial moments of the number of neutrons per reaction (see e.g. (6.49)). However, as was shown in equation (1.17), one has

⟨ν(ν − 1)⟩/⟨ν⟩ = q2/q1,

which shows the complete equivalence of (6.95) and the first term on the right-hand side of (6.94). It is also seen, however, that for the case of a time-varying low power system, i.e. when M → 0 but at the same time δα ≠ 0, there will remain components of the noise that are not identical with the traditional zero power reactor noise. These are represented by the second term on the right-hand side of V, i.e. the one in the square bracket. There it is seen that some of the terms are proportional only to the factor δα, which represents the
system variations, whereas there are also some terms containing the factor γ as well, which expresses the effect of branching. It can also be shown that the last term of (6.94) is identical with the result of a linearised Langevin equation, corresponding to the neutron noise induced solely by the small state changes of a system [40]. In the case of the Langevin approach, however, the conditions for the applicability of the linearisation procedure cannot be established within the approach itself.
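The quadratic character of the M²-type excess can be checked without carrying out the expansion itself: with δγ = 0, the exact stationary variance assembled from (6.79)–(6.93) must deviate from its δα = 0 value proportionally to (δα)². A sketch of this consistency check (our function name; the parameter values are arbitrary):

```python
def V_total(alpha1, alpha2, lam, g1, g2, s0):
    # stationary variance assembled from (6.79), (6.80), (6.92) and (6.93)
    d1 = alpha1 * alpha2 - lam * (alpha1 + alpha2)
    M1 = 0.5 * s0 * (2 * lam - alpha2) / d1
    M2 = 0.5 * s0 * (2 * lam - alpha1) / d1
    M = M1 + M2
    d2 = 2 * alpha1 * alpha2 - lam * (alpha1 + alpha2)
    M2tot = ((g1 + 2 * s0) * (lam - alpha2) * M1 + (g2 + 2 * s0) * (lam - alpha1) * M2) / d2
    return M2tot + M * (1.0 - M)

a, g, s0, lam = 0.02, 1.0, 1.0, 0.002
base = V_total(-a, -a, lam, g, g, s0)   # no parameter difference: constant-medium value
d1 = V_total(-a - 1e-4, -a + 1e-4, lam, g, g, s0) - base
d2 = V_total(-a - 2e-4, -a + 2e-4, lam, g, g, s0) - base
assert d1 > 0.0                          # random state changes increase the variance
assert abs(d2 / d1 - 4.0) < 0.05         # doubling delta-alpha quadruples the excess
```

By the symmetry of swapping the two states, the exact variance is an even function of δα when δγ = 0, so the leading excess term is indeed quadratic, in agreement with the structure of (6.94).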
Covariances
To calculate the covariance, one needs to know the probability

P{N(t+u) = n2, S(t+u) = Sk; N(t) = n1, S(t) = Sj | 0, i} = P_{k,j,i}(n2, t+u; n1, t|0),  (6.96)

which can be written in the following form:

P_{k,j,i}(n2, t+u; n1, t|0) = P_{k,j}(n2, t+u|n1, t) P_{j,i}(n1, t|0).  (6.97)

If the process is homogeneous in time, then

P_{k,j}(n2, t+u|n1, t) = P_{k,j}(n2, u|n1).  (6.98)
Introduce the generating function

G_{k,j,i}(z2, t+u; z1, t|0) = Σ_{n2=0}^{∞} Σ_{n1=0}^{∞} P_{k,j,i}(n2, t+u; n1, t|0) z1^{n1} z2^{n2},  (6.99)

which, by virtue of (6.97) and (6.98), assumes the following form:

G_{k,j,i}(z2, t+u; z1, t|0) = Σ_{n1=0}^{∞} G_{k,j}(z2, u|n1) P_{j,i}(n1, t|0) z1^{n1}.  (6.100)
Here G_{k,j}(z2, u|n1) is the generating function defined in (6.65).12 To calculate the covariance

R_{k,j,i}(t+u, t) = E{[N_{k,i}(t+u) − M^(1)_{k,i}(t+u)][N_{j,i}(t) − M^(1)_{j,i}(t)]},  (6.101)

one has to determine the expectations

B_{k,j,i}(t+u, t) = E{N_{k,i}(t+u) N_{j,i}(t)} = [∂²G_{k,j,i}(z2, t+u; z1, t|0)/∂z1∂z2]_{z1=z2=1},  (6.102)
i, j, k = 1, 2. By performing the assigned calculations, one can write that

B_{k,j,i}(t+u, t) = Σ_{n1=0}^{∞} n1 M^(1)_{k,j}(u|n1) P_{j,i}(n1, t|0),  (6.103)

where, according to (6.75) and (6.76),

M^(1)_{k,j}(u|n1) = M^(1)_{k,j}(u|0) + n1 m^(1)_{k,j}(u),  j, k = 1, 2.  (6.104)
12 We note also here that G_{k,j}(z2, u|n1) cannot be factorised, since the branching processes induced by the n1 particles existing at time u = 0 are not independent.
By taking these into account, one obtains

B_{k,j,i}(t+u, t) = Σ_{n1=0}^{∞} P_{j,i}(n1, t|0) n1 [M^(1)_{k,j}(u|0) + n1 m^(1)_{k,j}(u)] = M^(1)_{j,i}(t|0) M^(1)_{k,j}(u|0) + m^(1)_{k,j}(u)[M^(2)_{j,i}(t|0) + M^(1)_{j,i}(t|0)].  (6.105)
In the case when the state of the system can be either S1 or S2 at times t and t+u, provided that at t = 0 it was in Si, i = 1, 2 and contained no particles, the procedure is to sum the expectations B_{k,j,i}(t+u, t) over the indices j and k. As a result, one obtains

B_i(t+u, t) = B_{1,1,i}(t+u, t) + B_{2,1,i}(t+u, t) + B_{1,2,i}(t+u, t) + B_{2,2,i}(t+u, t)
= M^(1)_{1,i}(t|0) M^(1)_1(u|0) + m^(1)_1(u)[M^(2)_{1,i}(t|0) + M^(1)_{1,i}(t|0)]
+ M^(1)_{2,i}(t|0) M^(1)_2(u|0) + m^(1)_2(u)[M^(2)_{2,i}(t|0) + M^(1)_{2,i}(t|0)],  i = 1, 2.  (6.106)
The covariance of the particle numbers at times t and t+u is provided by the expression

R_i(t+u, t) = B_i(t+u, t) − M^(1)_i(t) M^(1)_i(t+u)  (6.107)

in the case when the state of the system at t and t+u can be either S1 or S2, provided that it was in Si, i = 1, 2 at t = 0 and did not contain any particles. We are interested here in the covariance

lim_{t→∞} R_i(t+u, t) = R(u)  (6.108)
corresponding to a weakly stationary process maintained by random injection in a strongly subcritical system. From (6.107), and by considering the relationships (6.106), (6.79) and (6.80), this covariance can be written in the following form:

R(u) = M^(1)(1, ∞) M^(1)_1(u|0) + m^(1)_1(u)[M^(2)(1, ∞) + M^(1)(1, ∞)] + M^(1)(2, ∞) M^(1)_2(u|0) + m^(1)_2(u)[M^(2)(2, ∞) + M^(1)(2, ∞)] − M².  (6.109)

We will prove that the formula (6.109) for the covariance R(u) is identical with the following expression:

R(u) = D0 e^{−2λu} + D1 e^{−μ1 u} + D2 e^{−μ2 u},  (6.110)

where

μ1 = −s1 > 0 and μ2 = −s2 > 0.

Moreover, it will be shown that

D0 + D1 + D2 = V,  (6.111)

where V, defined by (6.93), is the stationary variance of the random process N(t). In order to prove these, we need the explicit forms of the functions M^(1)_i(u|0), i = 1, 2. One can prove that

M^(1)_i(u|0) = M + A0^(i) e^{−2λu} + A1^(i) e^{−μ1 u} + A2^(i) e^{−μ2 u},  i = 1, 2,  (6.112)
where

A0^(1) = (1/2) s0 (α2 − α1)/[(2λ − μ1)(2λ − μ2)] = −A0^(2),

A1^(1) = s0 [(2λ − μ1)² − λ(α1 + α2) + α2μ1] / [μ1(μ1 − μ2)(2λ − μ1)],
A1^(2) = s0 [(2λ − μ1)² − λ(α1 + α2) + α1μ1] / [μ1(μ1 − μ2)(2λ − μ1)],
A2^(1) = s0 [(2λ − μ2)² − λ(α1 + α2) + α2μ2] / [μ2(μ2 − μ1)(2λ − μ2)],
A2^(2) = s0 [(2λ − μ2)² − λ(α1 + α2) + α1μ2] / [μ2(μ2 − μ1)(2λ − μ2)].
By considering only the non-exponential term M from (6.112), the sum of the terms in (6.109) that are independent of u,

M^(1)(1, ∞)M + M^(1)(2, ∞)M − M²,

is zero, since M^(1)(1, ∞) + M^(1)(2, ∞) = M. From (6.40) and (6.41), by a simple change of notations one obtains that

m^(1)_1(u) = h_{1,1} e^{−μ1 u} − h_{1,2} e^{−μ2 u}

and

m^(1)_2(u) = h_{2,1} e^{−μ1 u} − h_{2,2} e^{−μ2 u},

where

h_{1,1} = (2λ − μ1 − α2)/(μ2 − μ1),  h_{1,2} = (2λ − μ2 − α2)/(μ2 − μ1),
h_{2,1} = (2λ − μ1 − α1)/(μ2 − μ1),  h_{2,2} = (2λ − μ2 − α1)/(μ2 − μ1).
Hence, we have proved the statement that the covariance R(u) is a sum of three exponentials. The coefficients Dn, n = 0, 1, 2 can immediately be written down, since both M^(2)(j, ∞) and M^(1)(j, ∞) are well-known functions of the parameters for both values of the index j. Only as a reminder:

D0 = M^(1)(1, ∞) A0^(1) + M^(1)(2, ∞) A0^(2),

D1 = M^(1)(1, ∞) A1^(1) + M^(1)(2, ∞) A1^(2) + [M^(2)(1, ∞) + M^(1)(1, ∞)] h_{1,1} + [M^(2)(2, ∞) + M^(1)(2, ∞)] h_{2,1}

and

D2 = M^(1)(1, ∞) A2^(1) + M^(1)(2, ∞) A2^(2) − [M^(2)(1, ∞) + M^(1)(1, ∞)] h_{1,2} − [M^(2)(2, ∞) + M^(1)(2, ∞)] h_{2,2}.

What is left is to prove the statement in (6.111). This goes rather easily. Since M^(1)_i(0|0) = 0, i = 1, 2, one obtains from (6.109) that

R(0) = D0 + D1 + D2 = M^(2)(1, ∞) + M^(1)(1, ∞) + M^(2)(2, ∞) + M^(1)(2, ∞) − M² = M^(2)(∞) + M(1 − M) = V.
To write down the spectral density s(ω) based on (6.110) is a trivial task. One obtains

s(ω) = (1/(πV)) [2λD0/(ω² + 4λ²) + μ1D1/(ω² + μ1²) + μ2D2/(ω² + μ2²)],  (6.113)

and evidently

∫_{−∞}^{+∞} s(ω) dω = 1.
With some further analysis of equation (6.113), it can be shown that it contains both a term corresponding to the traditional noise in constant systems and a term corresponding to the noise arising in time-varying systems. Both terms have slightly modified amplitudes compared to the noise of constant and of time-varying systems, respectively. In addition, the result contains an interference term which has no equivalent in either the noise of constant systems or that of time-varying systems. Without a detailed quantitative analysis, we only note that the first term of the covariance of the noise, equation (6.110), is proportional to the covariance of the process of the state changes of the system, i.e. it shows the same correlation decay constant 2λ. This can be considered an explicit proof of the statement that, due to the state changes of the system, the different processes started by different particles existing at a given time are not independent, and hence the factorisation assumption used in the theory of constant systems is not applicable in systems whose properties change in time.
C H A P T E R  S E V E N

One-Dimensional Branching Process

Contents
7.1 Cell Model  179
7.2 Continuous Model  191
Up until now, we have supposed that the branching processes take place in an infinite homogeneous medium. To get a flavour of space-dependence, we shall now investigate the case when the process takes place on a finite interval [0, ℓ] of the one-dimensional space. Multiplication in a one-dimensional finite interval was already treated by R. Bellman and his co-authors [41] in 1958. A significantly more comprehensive analysis of the problem will now be given here.
7.1 Cell Model

7.1.1 Description of the model
In the first step, the following model will be chosen for the discussion. Suppose that the multiplying 'medium' consists of a series of ℓ cells, as illustrated in Fig. 7.1. Let us construct the time interval [0, T], 0 ≤ T ≤ ∞, as an integer multiple of a time interval ΔT, i.e. let T = tΔT, and hence define the discrete 'time points' t = 0, 1, .... Consider a particle which at time (t − 1) is found in the xth cell. The particle will be called travelling to the right if it is found in the (x + 1)th cell at the tth time point, and travelling to the left if it is in the (x − 1)th cell, provided that during the time interval ΔT between the (t − 1)th and tth time points it did not induce a reaction; the probability of this will be denoted 1 − w. Thus w is the probability that a particle staying in the xth cell at the time instant (t − 1) will induce a reaction, which results in the appearance, at the time instant t, of a particle travelling to the right in cell (x + 1) and a particle travelling to the left in cell (x − 1), while the particle generating the reaction disappears from the xth cell. In other words, the reaction consists of the event that two particles are born out of one particle: one travelling to the right and one travelling to the left, both of which can start further multiplying reactions independently from each other and from the other particles in the series of cells. We assume that at time t = 0 one particle exists in the cell series, and this particle is called the starting particle.
Figure 7.1 Schematic figure of the one-dimensional medium: a row of cells numbered 1, 2, 3, …, x − 1, x, x + 1, …, ℓ.
Let n^{(+)}(x, t) and n^{(−)}(x, t) denote the number of particles travelling to the right and to the left, respectively, in the xth cell at the time point t ≥ 0. Moreover, let

n_ℓ(t) = Σ_{x=1}^{ℓ} [n^{(+)}(x, t) + n^{(−)}(x, t)]    (7.1)

be the number of particles in the sequence of cells of length ℓ at the moment t ≥ 0. Before starting to study the random process n_ℓ(t), it seems reasonable to make some trivial remarks. It is obvious that if we place one starting particle travelling to the right into the first cell at time t = 0, then at t = 1 there will be one particle travelling to the right in the second cell with unit probability, since the particle in the first cell either gets into the second cell without reaction with probability 1 − w, or, by inducing a reaction in the first cell with probability w, it emits a progeny travelling to the right into the second cell, whereas the other, second particle born in the reaction, travelling to the left, escapes from the series of cells. Likewise, if we place a particle travelling to the left into the first cell at time t = 0, then by the time t = 1 it either will have exited at the left end of the cell series with probability 1 − w, or it will have induced a reaction with probability w, as a result of which a particle travelling to the right appears in the second cell, while the other particle born in the reaction escapes from the series of cells. In either case, there will be no particles at time t = 1 in the first cell. If at time t = 0 a particle travelling to the left is placed into the ℓth (last) cell, then the same event takes place at the end of the cell series as in the case of the particle travelling to the right at the beginning of the cell series, i.e. there will be a particle travelling to the left in the (ℓ − 1)th cell by time t = 1 with probability one, and no particle will be in the ℓth cell.
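The stochastic rule described above is straightforward to simulate directly. The sketch below is our illustration (the function names are ours, not from the text): at each time step every particle either moves one cell in its own direction (probability 1 − w) or is replaced by a right-mover in cell x + 1 and a left-mover in cell x − 1 (probability w); particles leaving the range 1..ℓ are lost.

```python
import random

def step(particles, ell, w, rng):
    """Advance the cell model by one time step.

    particles: list of (direction, cell) pairs, direction +1 (right) or -1 (left).
    A reaction (probability w) replaces the particle by a right-mover in cell
    x+1 and a left-mover in cell x-1; otherwise the particle moves one cell in
    its own direction.  Particles outside 1..ell escape and are discarded.
    """
    new = []
    for d, x in particles:
        if rng.random() < w:                       # reaction: two progeny
            offspring = [(+1, x + 1), (-1, x - 1)]
        else:                                      # free flight
            offspring = [(d, x + d)]
        new.extend((dd, xx) for dd, xx in offspring if 1 <= xx <= ell)
    return new

def simulate(ell, w, t_max, seed=0):
    """Population sizes n_ell(t), t = 0..t_max, for one right-moving
    starting particle in cell 1."""
    rng = random.Random(seed)
    particles = [(+1, 1)]
    counts = [len(particles)]
    for _ in range(t_max):
        particles = step(particles, ell, w, rng)
        counts.append(len(particles))
    return counts
```

For ℓ = 2 and w = 1 the process is in fact deterministic: the single particle bounces between the two cells forever, so n_2(t) = 1 for all t. The simulation also reproduces the remark above that a right-moving starter in cell 1 is found in cell 2 at t = 1 with unit probability, for any w.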
7.1.2 Generating function equations
Let

P{n_ℓ(t) = n | n^{(+)}(x, 0) = 1} = p_n^{(+)}(t, ℓ|x) = p_n^{(+)}(t|x),    (7.2)

1 ≤ x ≤ ℓ and t = 0, 1, …, denote the probability that there are n particles in the cell series consisting of ℓ cells¹ at the tth time point, provided that there was one starting particle travelling to the right in the xth cell at time t = 0. Similarly, let

P{n_ℓ(t) = n | n^{(−)}(x, 0) = 1} = p_n^{(−)}(t, ℓ|x) = p_n^{(−)}(t|x),    (7.3)

1 ≤ x ≤ ℓ and t = 0, 1, …, denote the same probability, with the difference that there was one starting particle travelling to the left in the xth cell at t = 0. According to the symmetry properties of the system, it is obvious that

p_n^{(+)}(t|x) = p_n^{(−)}(t|ℓ − x + 1)    (7.4)

for every 1 ≤ x ≤ ℓ and t = 0, 1, …. At time t = 0, one has

p_n^{(+)}(0|x) = p_n^{(−)}(0|ℓ − x + 1) = δ_{n,1}    (7.5)

for every 1 ≤ x ≤ ℓ.
We can derive the backward Kolmogorov equation for the probability p_n^{(+)}(t|x), 1 < x < ℓ, according to the following simple consideration:

• The initiating particle travelling to the right, placed into the xth cell² at time t = 0, does not induce a reaction (the probability of which is 1 − w); rather, it moves over into the (x + 1)th cell by time t = 1, from where it moves further on, such that at the end of the remaining time interval t − 1, i.e. at the tth time point, there are exactly n particles (travelling to the right and to the left together) in the cell series.

¹ According to the definition in (7.2), we do not denote that the cell series consists of ℓ cells.
² x ≠ 1 and x ≠ ℓ.
• The initiating particle travelling to the right, placed into the xth cell at time t = 0, induces a reaction with probability w, upon the effect of which – out of the two particles born – the one moving to the left gets into the (x − 1)th cell, while the one moving to the right gets into the (x + 1)th cell, both continuing their transport from there, such that there are n₁ particles in the cell series produced by the particle moving to the left and n₂ particles produced by the one moving to the right at the end of the remaining time section t − 1, i.e. at the tth time point, under the condition that n₁ + n₂ = n.

If at time t = 0 the initiating particle moving to the right or to the left gets into either the first or the last cell, then the possibility of leaving the cell series has to be taken into consideration. Based on all the above, one can write that

p_n^{(+)}(t|x) = (1 − w) p_n^{(+)}(t − 1|x + 1) + w Σ_{n₁+n₂=n} p_{n₁}^{(−)}(t − 1|x − 1) p_{n₂}^{(+)}(t − 1|x + 1)    (7.6)

for every x for which the inequality 1 < x < ℓ is satisfied. If x = 1, then

p_n^{(+)}(t|1) = (1 − w) p_n^{(+)}(t − 1|2) + w p_n^{(+)}(t − 1|2) = p_n^{(+)}(t − 1|2),    (7.7)

while if x = ℓ, then

p_n^{(+)}(t|ℓ) = (1 − w) δ_{n,0} + w p_n^{(−)}(t − 1|ℓ − 1).    (7.8)

Due to (7.4), there is no need to write down the equations determining the probability p_n^{(−)}(t|x). However, for the further considerations it seems useful to supply these equations as well:

p_n^{(−)}(t|x) = (1 − w) p_n^{(−)}(t − 1|x − 1) + w Σ_{n₁+n₂=n} p_{n₁}^{(−)}(t − 1|x − 1) p_{n₂}^{(+)}(t − 1|x + 1)    (7.9)

for every x for which the inequality 1 < x < ℓ is satisfied. If x = 1, then

p_n^{(−)}(t|1) = (1 − w) δ_{n,0} + w p_n^{(+)}(t − 1|2),    (7.10)

whereas if x = ℓ, then

p_n^{(−)}(t|ℓ) = (1 − w) p_n^{(−)}(t − 1|ℓ − 1) + w p_n^{(−)}(t − 1|ℓ − 1) = p_n^{(−)}(t − 1|ℓ − 1).    (7.11)
Introduce the generating functions

g^{(±)}(z, t|x) = Σ_{n=0}^{∞} p_n^{(±)}(t|x) z^n,  |z| ≤ 1,    (7.12)
which obviously satisfy the following equations:

g^{(+)}(z, t|x) = (1 − w) g^{(+)}(z, t − 1|x + 1) + w g^{(−)}(z, t − 1|x − 1) g^{(+)}(z, t − 1|x + 1)    (7.13)

and

g^{(−)}(z, t|x) = (1 − w) g^{(−)}(z, t − 1|x − 1) + w g^{(−)}(z, t − 1|x − 1) g^{(+)}(z, t − 1|x + 1),    (7.14)

if 1 < x < ℓ. If x = 1, then

g^{(+)}(z, t|1) = g^{(+)}(z, t − 1|2),    (7.15)
g^{(−)}(z, t|1) = 1 − w + w g^{(+)}(z, t − 1|2),    (7.16)

whereas if x = ℓ, then

g^{(+)}(z, t|ℓ) = 1 − w + w g^{(−)}(z, t − 1|ℓ − 1),    (7.17)
g^{(−)}(z, t|ℓ) = g^{(−)}(z, t − 1|ℓ − 1).    (7.18)
7.1.3 Investigation of the expectations
In the following, we will be concerned only with the investigation of the expectations

m^{(±)}(t|x) = ∂g^{(±)}(z, t|x)/∂z |_{z↑1}.

After a short calculation, one obtains that

m^{(+)}(t|x) = { m^{(+)}(t − 1|2),  if x = 1;
                 m^{(+)}(t − 1|x + 1) + w m^{(−)}(t − 1|x − 1),  if 1 < x < ℓ;
                 w m^{(−)}(t − 1|ℓ − 1),  if x = ℓ, }    (7.19)

noting that m^{(+)}(0|x) = 1 for every 1 ≤ x ≤ ℓ; further,

m^{(−)}(t|x) = { w m^{(+)}(t − 1|2),  if x = 1;
                 m^{(−)}(t − 1|x − 1) + w m^{(+)}(t − 1|x + 1),  if 1 < x < ℓ;
                 m^{(−)}(t − 1|ℓ − 1),  if x = ℓ, }    (7.20)
noting again that m^{(−)}(0|x) = 1 for every 1 ≤ x ≤ ℓ. With the initial values m^{(±)}(0|x) = 1, the recursive formulae (7.19) and (7.20) are suitable for numerical calculations. The results of such calculations are shown in Figs 7.2 and 7.3. Based on the figures, one can surmise that to each cell series length ℓ there belongs an exactly determined probability w = w_crt such that for w values greater than w_crt the expectation of the number of the particles in the cell series tends to infinity as t → ∞, whereas for w values smaller than w_crt it tends to zero. That is, if w > w_crt then the system is supercritical, while if w < w_crt, then it is subcritical. If w = w_crt, i.e. if the system is critical, then in a cell series consisting of an even number of cells (see Fig. 7.2 where ℓ = 8), the expectation converges to a fixed value, hence one can say that the random process n_ℓ(t) is weakly stationary; while in a cell series consisting of an odd number of cells (see Fig. 7.3 where ℓ = 7), if t → ∞ the expectation oscillates between two finite values, hence one can only claim that the random process n_ℓ(t) is periodically stationary.

Figure 7.2 Expected number of the particles in the cell series consisting of ℓ = 8 cells, as a function of t for three different values of w (w = 0.180, w ≈ 0.209 and w = 0.220), provided that there was one initiating particle moving to the right in the cell x = 1 at time t = 0.

Figure 7.3 Expected number of particles in the cell series consisting of ℓ = 7 cells, as a function of t for three different values of w (w = 0.220, w ≈ 0.241 and w = 0.260), provided that there was one initiating particle moving to the right in the cell x = 1 at time t = 0.
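The recursion (7.19)–(7.20) is easy to implement. The following sketch (function name ours) reproduces the behaviour seen in Fig. 7.2 for ℓ = 8: at the critical value w ≈ 0.20906 of Table 7.1, m(t) = m^{(+)}(t|1) settles at about 1.7157, while smaller or larger w give decay or growth.

```python
def expectations(ell, w, t_max):
    """Iterate the mean-value recursion (7.19)-(7.20).

    Returns the history of m(t) = m^+(t|1): the expected number of particles
    in the cell series at time t, for one initiating right-moving particle
    in cell x = 1.  Cells are indexed 1..ell; initial values are all 1.
    """
    mp = [1.0] * (ell + 2)   # m^+(t-1|x), indices 1..ell used
    mm = [1.0] * (ell + 2)   # m^-(t-1|x)
    history = [mp[1]]
    for _ in range(t_max):
        np_, nm = mp[:], mm[:]
        np_[1] = mp[2]                         # (7.19), x = 1
        nm[1] = w * mp[2]                      # (7.20), x = 1
        for x in range(2, ell):
            np_[x] = mp[x + 1] + w * mm[x - 1]
            nm[x] = mm[x - 1] + w * mp[x + 1]
        np_[ell] = w * mm[ell - 1]             # (7.19), x = ell
        nm[ell] = mm[ell - 1]                  # (7.20), x = ell
        mp, mm = np_, nm
        history.append(mp[1])
    return history
```

Running it for ℓ = 8 with w slightly below and above 0.20906 shows the sub- and supercritical behaviour of the figure.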
Characteristic properties
In the following, some characteristic properties of the expectations m^{(±)}(t|x) will be investigated. Define the function

r^{(±)}(t, y, ℓ) = Σ_{x=1}^{ℓ} m^{(±)}(t|x) y^x,  |y| ≤ 1,    (7.21)

and introduce the generating function

h^{(±)}(s, y, ℓ) = Σ_{t=0}^{∞} r^{(±)}(t, y, ℓ) s^t,    (7.22)
which, of course, is not a probability generating function. By taking into account equations (7.19), one obtains that

r^{(+)}(t, y, ℓ) = (1/y)[r^{(+)}(t − 1, y, ℓ) − y m^{(+)}(t − 1|1)] + wy[r^{(−)}(t − 1, y, ℓ) − y^ℓ m^{(−)}(t − 1|ℓ)].

Because of the symmetry one has

m^{(+)}(t|1) = m^{(−)}(t|ℓ) = m(t),    (7.23)

and thus

r^{(+)}(t, y, ℓ) = (1/y) r^{(+)}(t − 1, y, ℓ) + wy r^{(−)}(t − 1, y, ℓ) − m(t − 1)(1 + wy^{ℓ+1}).

Further, from the above it follows that

h^{(+)}(s, y, ℓ) − r^{(+)}(0, y, ℓ) = (s/y) h^{(+)}(s, y, ℓ) + wys h^{(−)}(s, y, ℓ) − s(1 + wy^{ℓ+1}) h(s),

where

r^{(+)}(0, y, ℓ) = Σ_{x=1}^{ℓ} m^{(+)}(0|x) y^x = Σ_{x=1}^{ℓ} y^x = y(1 − y^ℓ)/(1 − y) = r(y, ℓ),

and

h(s) = Σ_{t=0}^{∞} m(t) s^t.    (7.24)

After rearrangement, one obtains the following equation:

(s − y) h^{(+)}(s, y, ℓ) + wsy² h^{(−)}(s, y, ℓ) = sy(1 + wy^{ℓ+1}) h(s) − y r(y, ℓ).    (7.25)

Without repeating the foregoing, one can immediately write down the generating function equation

ws h^{(+)}(s, y, ℓ) + y(sy − 1) h^{(−)}(s, y, ℓ) = sy(w + y^{ℓ+1}) h(s) − y r(y, ℓ),    (7.26)
which can be derived from equations (7.20). The determinant of the equation system consisting of (7.25) and (7.26) is:

D = | s − y    wsy²
      ws       y(sy − 1) | = −sy(y² − 2λy + 1),    (7.27)

where

λ = (1/2)[1/s + (1 − w²)s].    (7.28)

The generating functions h^{(±)}(s, y, ℓ) can immediately be written down, since

h^{(+)}(s, y, ℓ) = D^{(+)}/D  and  h^{(−)}(s, y, ℓ) = D^{(−)}/D,

where

D^{(+)} = | sy(1 + wy^{ℓ+1})h(s) − y r(y, ℓ)    wsy²
            sy(w + y^{ℓ+1})h(s) − y r(y, ℓ)    y(sy − 1) |

and

D^{(−)} = | s − y    sy(1 + wy^{ℓ+1})h(s) − y r(y, ℓ)
            ws       sy(w + y^{ℓ+1})h(s) − y r(y, ℓ) |.

After a short calculation, one arrives at

h^{(+)}(s, y, ℓ) = y[1 − (1 − w²)sy + wy^{ℓ+1}]/(y² − 2λy + 1) · h(s) − [y − (1 − w)sy²]/[s(y² − 2λy + 1)] · r(y, ℓ),    (7.29)

as well as

h^{(−)}(s, y, ℓ) = y[w − (1 − w²)sy + y^{ℓ+1}]/(y² − 2λy + 1) · h(s) − [y − (1 − w)s]/[s(y² − 2λy + 1)] · r(y, ℓ).    (7.30)
The latter equation is actually unnecessary, since it follows from the symmetry properties of (7.4) that

h^{(−)}(s, y, ℓ) = y^{ℓ+1} h^{(+)}(s, y^{−1}, ℓ)  and  h^{(+)}(s, y, ℓ) = y^{ℓ+1} h^{(−)}(s, y^{−1}, ℓ).

Based on this, we will only be concerned with the function h^{(+)}(s, y, ℓ) in the forthcoming, which can be rewritten in the following form:

h^{(+)}(s, y, ℓ) = Σ_{x=1}^{ℓ} q^{(+)}(s, x) y^x,

where

q^{(+)}(s, x) = Σ_{t=0}^{∞} m^{(+)}(t|x) s^t.

Suppose that the expectation m^{(+)}(t|x) is bounded for any finite ℓ and every t, where t ∈ Z and Z is the set of non-negative integers. This means that

max_{t∈Z} m^{(+)}(t|x) ≤ M < ∞,  ∀ ℓ < ∞.
According to this,

|q^{(+)}(s, x)| ≤ M |s|/(1 − |s|),

hence h^{(+)}(s, y, ℓ) is a regular function of y for every |s| < 1. From this it also follows that the roots of the denominator of the function s h^{(+)}(s, y, ℓ),

J(s, y, w) = y² − 2λy + 1,    (7.31)

namely

y₁ = λ + √(λ² − 1)    (7.32)

and

y₂ = λ − √(λ² − 1),    (7.33)

are at the same time also roots of the numerator given by the expression

K(s, y, w) = sy[1 − (1 − w²)sy + wy^{ℓ+1}] h(s) − y[1 − (1 − w)sy] r(y, ℓ).    (7.34)

That is, the identities

K(s, y₁, w) = 0  and  K(s, y₂, w) = 0

hold. From these, it follows that

h(s) = h(s, w, ℓ) = y₁[1 − (1 − w)y₁s](1 − y₁^ℓ) / {s(1 − y₁)[1 − (1 − w²)y₁s + wy₁^{ℓ+1}]}
                  = y₂[1 − (1 − w)y₂s](1 − y₂^ℓ) / {s(1 − y₂)[1 − (1 − w²)y₂s + wy₂^{ℓ+1}]},    (7.35)

where now, with the notation h(s, w, ℓ), also the dependence of h(s) on the probability 0 < w ≤ 1 is indicated. According to (7.24), to derive the expectation m(t) = m(w, t), one has to calculate the coefficient of the term s^t in the power series of the function h(s, w, ℓ) with respect to s. To this end the following recursion appears to be well suited:

m(w, 0) = lim_{s↓0} h(s, w, ℓ),
m(w, 1) = lim_{s↓0} (1/s)[h(s, w, ℓ) − m(w, 0)],
m(w, 2) = lim_{s↓0} (1/s²)[h(s, w, ℓ) − m(w, 0) − m(w, 1)s],
· · ·
m(w, t) = lim_{s↓0} (1/s^t)[h(s, w, ℓ) − Σ_{k=0}^{t−1} m(w, k) s^k],
· · ·

where lim_{s↓0} h(s, w, ℓ) = 1. Of course, m(w, t) can be derived in other ways as well. Since h(s, w, ℓ) is an analytic function of s within the circle |s| ≤ ρ < 1 and also on the circle C_ρ itself, one can write that

m(w, t) = (1/2πi) ∮_{C_ρ} h(s, w, ℓ)/s^{t+1} ds.    (7.36)
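The closed form (7.35) can be cross-checked against the power series Σ_t m(t)s^t generated by the recursion (7.19)–(7.20). The sketch below is our illustration (function names ours): it evaluates (7.35) at the root y₂ of (7.33), with λ from (7.28). For ℓ = 2 the series can even be summed by hand — m(t) = w^{t−1} for t ≥ 1, so h(s) = 1 + s/(1 − ws) — which the closed form must reproduce.

```python
import math

def h_closed(s, w, ell):
    """h(s, w, ell) of (7.35), evaluated at the root y2 of (7.33)."""
    lam = 0.5 * (1.0 / s + (1.0 - w * w) * s)      # (7.28)
    y2 = lam - math.sqrt(lam * lam - 1.0)          # (7.33), |y2| < 1 branch
    num = y2 * (1.0 - (1.0 - w) * s * y2) * (1.0 - y2 ** ell)
    den = s * (1.0 - y2) * (1.0 - (1.0 - w * w) * s * y2 + w * y2 ** (ell + 1))
    return num / den

def h_series(s, w, ell, t_max=400):
    """Truncated sum of m(t) s^t, with m(t) = m^+(t|1) from (7.19)-(7.20)."""
    mp = [1.0] * (ell + 2)
    mm = [1.0] * (ell + 2)
    total, power = mp[1], s
    for _ in range(t_max):
        np_, nm = mp[:], mm[:]
        np_[1], nm[1] = mp[2], w * mp[2]
        for x in range(2, ell):
            np_[x] = mp[x + 1] + w * mm[x - 1]
            nm[x] = mm[x - 1] + w * mp[x + 1]
        np_[ell], nm[ell] = w * mm[ell - 1], mm[ell - 1]
        mp, mm = np_, nm
        total += mp[1] * power
        power *= s
    return total
```

For subcritical parameters the truncated series converges rapidly and agrees with the closed form to machine precision.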
Critical state
Consider now the calculation of the probability w = w_crt under which a critical state is generated in a cell series of length ℓ. In this case, the limit values

lim sup_{t→∞} m(w_crt, t) = m_sup(w_crt)    (7.37)

and

lim inf_{t→∞} m(w_crt, t) = m_inf(w_crt)    (7.38)

need to exist and thus, according to the Abel theorem, the relationship

lim_{s↑1} (1 − s) h(s, w_crt, ℓ) = (1/2)[m_sup(w_crt) + m_inf(w_crt)] = m̄(w_crt, ℓ)    (7.39)

must hold.
Introduce the notation y₂ = u(s, w) and rewrite the formula (7.35) into the following form:

h(s, w) = ϕ(s, w)/χ(s, w),    (7.40)

where

ϕ(s, w) = u(s, w)[1 − (1 − w)s u(s, w)]{1 − [u(s, w)]^ℓ} / {s[1 − u(s, w)]}    (7.41)

and

χ(s, w) = 1 − (1 − w²)s u(s, w) + w[u(s, w)]^{ℓ+1}.    (7.42)

Let us also write that

ϕ(s, w) = ϕ(1, w) + ϕ′(1, w)(s − 1) + o(s − 1)

and

χ(s, w) = χ(1, w) + χ′(1, w)(s − 1) + o(s − 1),

where

lim_{s↑1} o(s − 1)/(s − 1) = 0.

It is seen that the relationship (7.39) can be fulfilled if and only if there exists a 0 ≤ w_crt ≤ 1 which satisfies the equation

χ(1, w_crt) = 1 − (1 − w_crt²) u(1, w_crt) + w_crt [u(1, w_crt)]^{ℓ+1} = 0,

in which

u(1, w_crt) = 1 − w_crt²/2 − i w_crt √(1 − w_crt²/4) = exp{−iφ(w_crt)},

where

φ(w_crt) = arctan[ w_crt √(1 − w_crt²/4) / (1 − w_crt²/2) ],  0 < φ(w_crt) < π/2.

The objective is therefore the determination of the real root w = w_crt(ℓ) of the equation

χ(1, w) = w exp{−i(ℓ + 1)φ(w)} − (1 − w²) exp{−iφ(w)} + 1 = 0    (7.43)

for a given ℓ, which falls into the interval 0 < w ≤ 1. It is obvious that the equations

ℜ{χ(1, w_crt)} = 0  and  ℑ{χ(1, w_crt)} = 0

have to be fulfilled simultaneously. Figure 7.4 shows the dependence of the real and imaginary parts of χ(1, w) on the probability w in the case of a chain consisting of ℓ = 7 cells. The common root w_crt = 0.241 . . . , at which the system containing seven cells is critical, is marked on the figure.
Figure 7.4 Dependence of the real and imaginary parts of the function χ(1, w), ℓ = 7, on the probability w. The common root w_crt ≈ 0.241 is marked on the figure.

Table 7.1 Expectations in cell series of critical states
ℓ     w_crt      m_sup(w_crt)   m_inf(w_crt)   m̄(w_crt)
2     1          1              1              1
3     0.61803    1.61803        1              1.30902
4     0.44504    1.46556        1.46556        1.46556
5     0.34730    1.79393        1.33151        1.56272
6     0.28463    1.62947        1.62947        1.62947
7     0.24107    1.85887        1.49779        1.67833
8     0.20906    1.71570        1.71570        1.71570
9     0.18454    1.89267        1.59782        1.74525
10    0.16516    1.76920        1.76920        1.76920
11    0.14946    1.91341        1.66462        1.78902
12    0.13648    1.80569        1.80569        1.80569
13    0.12558    1.92743        1.71239        1.81991
14    0.11629    1.83219        1.83219        1.83219
15    0.10828    1.93754        1.74825        1.84290
In Table 7.1, the critical probabilities w_crt(ℓ) and the expectations m_sup(w_crt), m_inf(w_crt) and m̄(w_crt) defined by (7.39), belonging to the cell numbers ℓ = 2, 3, …, 15, are listed. The values of m_sup(w_crt) and m_inf(w_crt) were determined through the recursive expression (7.19), whereas for the calculation of m̄(w_crt) the formula

m̄(w_crt) = −ϕ(1, w_crt)/χ′(1, w_crt)

was used. According to the expectations listed above, one can see that the characteristics of the critical process in cell trains containing an even and an odd number of cells are rather different. The oscillation in a chain containing an odd number of cells is related to the fact that there is an equal number of cells both to the right and to the left of the central cell. For the sake of illustration, Fig. 7.5 displays the dependence of the critical probability w_crt on the number of cells ℓ. It is remarkable that if the cell number exceeds 10, the decrease of w_crt slows down significantly.
Figure 7.5 Dependence of the probability w_crt on the number of the cells ℓ.
Investigate now how the expectation of the number of particles in a cell series of length ℓ in the critical state depends on the cell into which the initiating particle, travelling either to the right or to the left, is placed. As mentioned, a critical state exists if the limit values

lim sup_{t→∞} m^{(+)}(t, w_crt|1) = lim sup_{t→∞} m^{(−)}(t, w_crt|ℓ) = m_sup(w_crt)

and

lim inf_{t→∞} m^{(+)}(t, w_crt|1) = lim inf_{t→∞} m^{(−)}(t, w_crt|ℓ) = m_inf(w_crt)

exist. We have observed that a non-oscillating critical state develops if m_sup(w_crt) = m_inf(w_crt) = m̄(w_crt), and this is fulfilled if ℓ = 2k, where k = 1, 2, …. Introduce the following notations:

lim sup_{t→∞} m^{(+)}(t, w_crt|x) = m_sup^{(+)}(x),    (7.44)
lim sup_{t→∞} m^{(−)}(t, w_crt|x) = m_sup^{(−)}(x),    (7.45)

where

m_sup^{(−)}(x) = m_sup^{(+)}(ℓ − x + 1).    (7.46)

Similarly, let

lim inf_{t→∞} m^{(+)}(t, w_crt|x) = m_inf^{(+)}(x),    (7.47)
lim inf_{t→∞} m^{(−)}(t, w_crt|x) = m_inf^{(−)}(x),    (7.48)

where

m_inf^{(−)}(x) = m_inf^{(+)}(ℓ − x + 1).    (7.49)

From equations (7.19) and (7.20), one obtains that

m^{(+)}(x) = { m^{(+)}(2),  if x = 1;
               m^{(+)}(x + 1) + w_crt m^{(−)}(x − 1),  if 1 < x < ℓ;
               w_crt m^{(−)}(ℓ − 1),  if x = ℓ, }    (7.50)

and

m^{(−)}(x) = { w_crt m^{(+)}(2),  if x = 1;
               m^{(−)}(x − 1) + w_crt m^{(+)}(x + 1),  if 1 < x < ℓ;
               m^{(−)}(ℓ − 1),  if x = ℓ. }    (7.51)

Since for the expressions m_sup^{(±)}(x) and m_inf^{(±)}(x) one has formally identical equations, the indices sup and inf are not indicated. Define the polynomials (generating functions)

g^{(±)}(z) = Σ_{x=1}^{ℓ} m^{(±)}(x) z^x,    (7.52)

between which, based on (7.50) and (7.51), the following relationships are fulfilled:

g^{(+)}(z) = (1/z)[g^{(+)}(z) − m^{(+)}(1)z] + w_crt z[g^{(−)}(z) − m^{(−)}(ℓ)z^ℓ]

and

g^{(−)}(z) = w_crt (1/z)[g^{(+)}(z) − m^{(+)}(1)z] + z[g^{(−)}(z) − m^{(−)}(ℓ)z^ℓ].

It is obvious from the symmetry relationships (7.46) and (7.49) that both in the case of the index sup and of the index inf one has

m^{(+)}(1) = m^{(−)}(ℓ) = m(1),

and by simple rearrangement one obtains from the previous equations that

(1 − z) g^{(+)}(z) + w_crt z² g^{(−)}(z) = m(1) z[1 + w_crt z^{ℓ+1}],    (7.53)
w_crt g^{(+)}(z) − z(1 − z) g^{(−)}(z) = m(1) z[w_crt + z^{ℓ+1}].    (7.54)

The determinant of this equation system equals

D = | 1 − z      w_crt z²
      w_crt     −z(1 − z) | = −z[1 − 2(1 − w_crt²/2)z + z²],

and the solutions can be written in the following form:

g^{(+)}(z) = m(1) z [1 − (1 − w_crt²)z + w_crt z^{ℓ+1}] / [1 − 2(1 − w_crt²/2)z + z²],    (7.55)
g^{(−)}(z) = m(1) z [w_crt − (1 − w_crt²)z + z^{ℓ+1}] / [1 − 2(1 − w_crt²/2)z + z²].    (7.56)

One observes that

1/[1 − 2(1 − w_crt²/2)z + z²] = Σ_{n=0}^{∞} U_n(1 − w_crt²/2) z^n,

where U_n(y) is the Chebyshev polynomial of second kind and order n. By utilising this relationship, one can immediately see that

m^{(+)}(x) = m(1) { 1,  if x = 1;  H^{(+)}(x),  if 1 < x ≤ ℓ, }    (7.57)

where

H^{(+)}(x) = U_{x−1}(1 − w_crt²/2) − (1 − w_crt²) U_{x−2}(1 − w_crt²/2).

Since m^{(+)}(ℓ) = w_crt m^{(−)}(ℓ − 1), and further m^{(−)}(ℓ − 1) = m^{(+)}(2) = m^{(+)}(1) = m(1), the equation

U_{ℓ−1}(1 − w_crt²/2) − (1 − w_crt²) U_{ℓ−2}(1 − w_crt²/2) − w_crt = 0

has to be fulfilled. We note that in addition to the single solution w_crt for cell number ℓ in the interval (0, 1) in (7.43), this equation can be fulfilled even for other real values w, but these roots are not 'physical' in the case of cell number ℓ. Similarly, from (7.56) one obtains that

m^{(−)}(x) = m(1) { w_crt,  if x = 1;  H^{(−)}(x),  if 1 < x ≤ ℓ, }    (7.58)

where now

H^{(−)}(x) = w_crt U_{x−1}(1 − w_crt²/2).

Considering that m^{(−)}(ℓ) = m^{(+)}(1) = m(1), the critical probability w_crt belonging to the cell number ℓ has to satisfy the equation:

w_crt U_{ℓ−1}(1 − w_crt²/2) − 1 = 0.

Of course, this equation can also be fulfilled for values of w differing from w_crt. Figure 7.6 illustrates the expectations m_sup^{(±)}(x) and m_inf^{(±)}(x) at a time infinitely long after the initial time, as functions of the cell number
x of the initiating particle. If ℓ = 7, then m_sup^{(±)}(x) > m_inf^{(±)}(x), and if ℓ = 8, then m_sup^{(±)}(x) = m_inf^{(±)}(x). One can conclude that if ℓ is odd, then the expectation is quasi-stationary, i.e. it oscillates; and if it is even, then it is weakly stationary.
Figure 7.6 Expectation of the number of particles in the row of cells of length = 7 and = 8 in the critical state at time t = ∞ as a function of the cell number 1 ≤ x ≤ of the initiating particle. For the cell number = 7 one (±) (±) (±) (±) has msup (x) > minf (x), whereas for = 8 one has msup (x) = minf (x).
191
One-Dimensional Branching Process
7.2 Continuous model 7.2.1 Generating function equations Suppose that the interval [0, ] of the one-dimensional space has the characteristics that a particle travelling either to the right or to the left with the velocity |v| = v can generate a reaction, which results in the birth of one particle travelling to the right and one particle travelling to the left, respectively. Let Qt + o(t) be the probability that a particle moving in the medium of length [0, ] will induce a reaction during the time [t, t + t]. Let us also suppose that the particles generated can induce further reactions, i.e. can multiply, independently from each other and from the other particles. Moreover, if a particle reaches the end of the section [0, ] either on the right-hand side or on the left-hand side, then it can freely escape from the interval [0, ]. Let I(t0 , x, ±v) denote the event that one particle with velocity ±v gets into the point x ∈ [0, ] of the empty interval [0, ] at t0 . Moreover, let n (t) be the number of particles in the interval [0, ] at time t ≥ t0 . Determine the probability P{n (t) = n|I(0, x, ±v)} = pn(±) (t|x).
(7.59)
Based on simple considerations, one can write that −x −x pn(+) (t|x) = e −Qt − t δn,1 + e −Q(−x)/v t − δn,0 v v t −x −Qt +Q e − t kn (t − t , x + vt )dt , 0 ≤ x ≤ , v 0 and pn(−) (t|x) = e −Qt
$x
+Q
% $ x% − t δn,1 + e −Qx/v t − δn,0 v v
t
e −Qt
$x
0
where kn (t − t , x ± vt ) =
v
n1 +n2 =n
% − t kn (t − t , x − vt )dt ,
0 ≤ x ≤ ,
pn(+) (t − t |x ± vt )pn(−) (t − t |x ± vt ), 1 2
and (t) is the unit step function. Introduce the probability generating functions g (±) (z, t|x) =
∞
pn(±) (t|x)zn ,
(7.60)
n=0
for which it can easily be confirmed that they satisfy the following integral equations: g (+) (z, t|x) = e −Qt ( − x − vt)z + e −Q(−x)/v (vt − − x) t +Q e −Q(t−t ) [ − x − v(t − t )]j[z, x + v(t − t ), t ]dt ,
(7.61)
0
and g (−) (z, t|x) = e −Qt (x − vt)z + e −Qx/v (vt − x) t e −Q(t−t ) [x − v(t − t )]j[z, x − v(t − t ), t ]dt , +Q 0
(7.62)
192
Imre Pázsit & Lénárd Pál
where j[z, x ± v(t − t ), t ] = g (+) [z, t |x ± v(t − t )]g (−) [z, t |x ± v(t − t )],
(7.63)
and 0 ≤ x ≤ . It is obvious that the equations g (±) (z, 0|x) = z
and
g (±) (1, t|t) = 1
(7.64)
hold. Moreover, because of the symmetry properties of the process, the relationship g (+) (z, t|x) = g (−) (z, t| − x)
(7.65)
∂j[z, x ± v(t − t ), t ] ∂j[z, x ± v(t − t ), t ] = ±v , ∂t ∂x
(7.66)
is also true. By considering that
from the integral equations (7.61) and (7.62) by derivation with respect to t, after elementary rearrangements one obtains the differential equations below in the interval 0 ≤ x ≤ : ∂g (+) (z, t|x) ∂g (+) (z, t|x) −v = −Qg (+) (z, t|x) + Qg (+) (z, t|x)g (−) (z, t|x) ∂t ∂x
(7.67)
and ∂g (−) (z, t|x) ∂g (−) (z, t|x) +v = −Qg (−) (z, t|x) + Qg (+) (z, t|x)g (−) (z, t|x). (7.68) ∂t ∂x The initial conditions arise immediately from the expressions (7.64). Based on the relationship (7.65), the boundary conditions can be written as g (+) (z, t|0) = g (−) (z, t|) = g(z, t), g
(+)
(z, t|) = g
(−)
(z, t|0) = 0.
(7.69) (7.70)
7.2.2 Investigation of the expectations Our further considerations will be still limited to the expectations. The equations of the expectations
∂g (±) (z, t|x) ∂z
= m(±) (t|x),
0 ≤ x ≤ ,
(7.71)
z=1
can be derived from (7.67) and (7.68). We obtain that ∂m(+) (t|x) ∂m(+) (t|x) −v = Qm(−) (t|x), ∂t ∂x
(7.72)
∂m(−) (t|x) ∂m(−) (t|x) +v = Qm(+) (t|x), ∂t ∂x
(7.73)
and
0 ≤ x ≤ . The initial conditions are:
m
(±)
(0|x) =
1,
if x ∈ [0, ],
0,
if x ∈ [0, ],
(7.74)
193
One-Dimensional Branching Process
whereas the boundary conditions can be written as m(+) (t|0) = m(−) (t|) = m(t),
(7.75)
m(−) (t|0) = m(+) (t|) = 0.
(7.76)
and Introduce the Laplace transforms ψ(±) (s|x) =
∞
e −st m(±) (t|x)dt,
0 ≤ x ≤ ,
(7.77)
0
which in the interval 0 ≤ x ≤ satisfy the differential equations dψ(+) (s|x) + Qψ(−) (s|x), dx dψ(−) (s|x) sψ(−) (s|x) − d(x) = −v + Qψ(+) (s|x), dx sψ(+) (s|x) − d(x) = v
where now s is to be considered simply as a parameter, and 1, if x ∈ [0, ], d(x) = 0, if x ∈ [0, ]
(7.78) (7.79)
(7.80)
is the rectangle function. The boundary conditions (7.75) and (7.76), on the other hand, are modified as follows: ψ(+) (s|) = ψ(−) (s|0) = 0
(7.81)
ψ(+) (s|0) = ψ(−) (s|) = ψ(s),
(7.82)
and where taking the expression (7.75) into account: ψ(s) =
∞
e −st m(t)dt.
(7.83)
0
We note that m(t) denotes the expectation of the number of particles at time t ≥ 0, which were created by the initial particle moving to the right, starting from the point x = 0, or equivalently, by a particle moving to the left and starting from the point x = , at time t = 0. Thus, the task is to solve the differential equation system dψ(+) Q 1 s − ψ(+) + ψ(−) = − d(x) dx v v v
(7.84)
dψ(−) s Q 1 + ψ(−) − ψ(+) = + d(x) (7.85) dx v v v in the interval 0 ≤ x ≤ with the boundary conditions (7.81) and (7.82). We describe the calculations below only for the sake of illustration.
Solution of the differential equation system Start with the homogeneous linear equation system dy(+) s Q − y(+) + y(−) = 0, dx v v
(7.86)
194
Imre Pázsit & Lénárd Pál
s dy(−) Q + y(−) − y(+) = 0. dx v v First, let us find the basic solutions. To this end we substitute the expressions y(+) = C (+) e λx
and
(7.87)
y(−) = C (−) e λx
into (7.86) and (7.87). We obtain that $
λ−
s % (+) Q (−) C + C = 0, v v
(7.88)
Q (+) $ s % (−) C + λ+ C = 0. (7.89) v v The necessary and sufficient condition for this equation system to have solutions different from the trivial one C (+) = C (−) = 0 is that the characteristic equation s Q λ− s2 Q2 v v 2 D= − + =0 (7.90) = λ v2 v2 s Q − λ + v v −
should be fulfilled. This is true when λ1 =
1 2 s − Q2 v
or λ2 = −
1 2 s − Q2. v
(7.91)
The following relationships are also trivially true: λ1 + λ2 = 0,
λ1 − λ2 =
2 2 s − Q2 v
and
λ1 λ2 = −
s2 − Q 2 . v2
The basic solutions are obtained by substituting the eigenvalues λ = λ1 and λ = λ2 into (7.88) and (7.89), respectively, and then determining the relationship between the constants C (+) and C (−) for both λ = λ1 and λ = λ2 . If λ = λ1 , then $ s % (+) Q (−) λ1 − C + C = 0, (7.92) v v $ Q s % (−) − C (+) + λ1 + (7.93) C = 0. v v It is easily verified that (7.93) follows from (7.92). Multiplying (7.92) by (λ1 + s/v) leads to λ21
s2 − 2 v
C (+) +
Q$ s % (−) λ1 + C = 0, v v
and since λ21 −
s2 Q2 = − , v2 v2
one can immediately write down that
Q Q (+) $ s % (−) − C + λ1 + = 0. C v v v
195
One-Dimensional Branching Process
Considering that Q/v = 0, the equation s % (−) Q (+) $ C + λ1 + C =0 v v
−
has to be fulfilled, which is exactly identical with (7.93). According to this, for the eigenvalue λ = λ1 one obtains $ Q s% C (+) = C1 and C (−) = −C1 λ1 − , v v since by replacing these formulae into (7.92), we find that $ s%Q Q$ s% C1 − λ1 − C1 = 0, λ1 − v v v v i.e. the basic solutions are constituted by the expressions (+)
y1 If λ = λ2 , then
= C1
Q λ1 x e v $
λ2 −
(−)
and y1
$ s % λ1 x = −C1 λ1 − e . v
s % (+) Q (−) C + C = 0, v v
(7.94)
(7.95)
and the equations
Q (+) $ s % (−) C + λ2 + (7.96) C =0 v v have to be fulfilled. Also in this case, it can be seen that (7.96) follows from (7.95). Not repeating the previous trivial steps, we simply write down the basic solutions $ Q s % λ2 x (+) (−) (7.97) e y2 = C2 e λ2 x and y2 = −C2 λ2 − v v −
belonging to the eigenvalue λ = λ2 . Based on the well-known theorem from the theory on differential equations, in possession of the basic solutions, the general solutions of the homogeneous equation system in (7.86) and (7.87) are given by the formulae Q Q y(+) = C1 e λ1 x + C2 e λ2 x (7.98) v v and $ $ s % λ1 x s % λ2 x y(−) = −C1 λ1 − e − C2 λ2 − e . (7.99) v v The general solution of the system of non-homogeneous equations (7.84) and (7.85) can be obtained by the the Lagrange method of the variation of the constants. Hence, let us write the solutions in the following form: ψ(+) = A1
Q λ1 x Q e + A2 e λ 2 x v v
(7.100)
and
$ $ s % λ1 x s % λ2 x ψ(−) = −A1 λ1 − e − A 2 λ2 − e , v v where A1 and A2 can be determined from the following equation system: 1 dA1 Q λ1 x dA2 Q λ2 x e + e = − d(x), dx v dx v v $ % $ % dA1 1 s λ1 x dA2 s λ2 x λ1 − e + λ2 − e = − d(x). dx v v dx v
(7.101)
196
Imre Pázsit & Lénárd Pál
By requiring that the restriction e λ1 x Q = $ % v λ − s e λ1 x 1 v
2Q 2 2Q 2Q 2 $ % s λ2 x = − v2 s − Q = − v λ1 = v λ2 = 0, λ2 − e v e λ2 x
be fulfilled for the determinant of the equation system, one can write down the equations dA1 1 d(x) s + Q λ2 x =− λ2 − e dx 2Q λ2 v and
1 d(x) s + Q λ1 x dA2 λ1 − =− e dx 2Q λ1 v
from which one obtains for the interval 0 ≤ x ≤ that 1 d(x) s + Q e λ2 x λ2 − + B1 A1 = − 2Q λ2 v λ2 and
1 d(x) s + Q e λ1 x λ1 − + B2 . A2 = − 2Q λ1 v λ1
(7.102)
(7.103)
Of course, A1 = A2 = 0 if x ∈ [0, ]. Then, after some minor rearrangements, one can give the general solution of the equation system (7.84) and (7.85).
Investigation of the properties of the solution

For the sake of simplicity it is practical to introduce the transport time

    c = ℓ/v

and the relative coordinate

    x′ = x/ℓ,    0 ≤ x′ ≤ 1,

as well as the notation

    u = √(s² − Q²).

By accounting for these, the general solution of the equation system (7.84) and (7.85) can be written in the following form:

    ψ^(+)(s|x′) = 1/(s − Q) + (Q/v)(B1 e^{cu x′} + B2 e^{−cu x′})    (7.104)

and

    ψ^(−)(s|x′) = 1/(s − Q) + B1 ((s − u)/v) e^{cu x′} + B2 ((s + u)/v) e^{−cu x′}.    (7.105)

The constants B1 and B2 can be calculated from the boundary conditions

    ψ^(+)(s|1) = 1/(s − Q) + (Q/v)(B1 e^{cu} + B2 e^{−cu}) = 0
197
One-Dimensional Branching Process
and

    ψ^(−)(s|0) = 1/(s − Q) + B1 (s − u)/v + B2 (s + u)/v = 0.

By making use of the determinant

    B = | (Q/v) e^{cu}    (Q/v) e^{−cu} |
        | (s − u)/v       (s + u)/v     |  = (Q/v²)[(s + u) e^{cu} − (s − u) e^{−cu}] ≠ 0

of the equation system, one obtains that

    B1 = −(v/Q) (1/(s − Q)) (s + u − Q e^{−cu}) / [(s + u) e^{cu} − (s − u) e^{−cu}]    (7.106)

and

    B2 = +(v/Q) (1/(s − Q)) (s − u − Q e^{cu}) / [(s + u) e^{cu} − (s − u) e^{−cu}].    (7.107)

Introducing the notation

    r = Q/(s + u) = Q/(s + √(s² − Q²))    (7.108)
and considering the trivial relationship

    1/(s − Q) = (1/u) (1 + r)/(1 − r),

after suitable rearrangements one obtains

    ψ^(+)(s|x′) = [1 − e^{−cu(1−x′)}] (1 + r)(1 − r e^{−cu x′}) / [u (1 + r e^{−cu})(1 − r)].    (7.109)
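The rearrangement can be verified numerically: substituting B1 and B2 from (7.106) and (7.107) into the general solution (7.104) must reproduce the closed form (7.109) for every x′. A minimal sketch (the values of s, Q, c and v below are arbitrary test parameters, not taken from the text):

```python
import cmath

# arbitrary test parameters with s > Q, so that u is real
s, Q, c, v = 2.0, 1.0, 1.0, 1.0
u = cmath.sqrt(s*s - Q*Q)          # u = sqrt(s^2 - Q^2)
r = Q / (s + u)                    # notation (7.108)
ecu, emcu = cmath.exp(c*u), cmath.exp(-c*u)

den = (s + u)*ecu - (s - u)*emcu   # common denominator of (7.106)-(7.107)
B1 = -(v/Q) / (s - Q) * (s + u - Q*emcu) / den
B2 =  (v/Q) / (s - Q) * (s - u - Q*ecu) / den

for xp in (0.0, 0.3, 0.7, 1.0):
    # general solution (7.104) with the boundary-fitted constants inserted
    direct = 1/(s - Q) + (Q/v)*(B1*cmath.exp(c*u*xp) + B2*cmath.exp(-c*u*xp))
    # closed form (7.109)
    closed = ((1 - cmath.exp(-c*u*(1 - xp))) * (1 + r)
              * (1 - r*cmath.exp(-c*u*xp))) / (u*(1 + r*emcu)*(1 - r))
    assert abs(direct - closed) < 1e-10
```

The agreement at x′ = 1 also confirms the boundary condition ψ^(+)(s|1) = 0 built into (7.106) and (7.107).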
Due to the symmetry relationship ψ^(−)(s|x′) = ψ^(+)(s|1 − x′), in the continuation we will not be concerned with the Laplace transform of the expected number of the particles generated in a process initiated by a particle moving to the left. It is practical to rewrite the formula (7.109) in the following form:

    ψ^(+)(s|x′) = ψ(s) [1 − e^{−cu(1−x′)}]/[1 − e^{−cu}] · [1 − r e^{−cu x′}]/[1 − r],    (7.110)

where

    ψ(s) = ∫₀^∞ e^{−st} m(t) dt = [(1 − e^{−cu})/u] · [(1 + r)/(1 + r e^{−cu})]    (7.111)

is the Laplace transform of the function m(t). As a reminder, we notice that m(t) is the expectation of the number of particles present in the interval [0, ℓ] at the moment t ≥ 0 which were generated either by a particle travelling to the right starting from the point x = 0 or by a particle moving to the left and starting from the point x = ℓ at time t = 0. In the formula (7.109) the series expansion

    1/[(1 − r)(1 + r e^{−cu})] = Σ_{k=0}^∞ d_k r^k,

in which

    d_k = Σ_{j=0}^k (−1)^j e^{−jcu},    k = 0, 1, . . .
198
Imre Pázsit & Lénárd Pál
can be executed for every s for which ℜ{s + u} > Q. By virtue of this, expression (7.109) can be rewritten in the following form:

    ψ^(+)(s|x′) = ψ0^(+)(s|x′) + Σ_{k=1}^∞ ψk^(+)(s|x′),    (7.112)

where

    ψ0^(+)(s|x′) = [(1 + r)/u][1 − e^{−cu(1−x′)} − r(e^{−cu x′} − e^{−cu})]    (7.113)

and

    ψk^(+)(s|x′) = ψ0^(+)(s|x′) d_k r^k.    (7.114)

In the infinite series of ψ^(+)(s|x′) in (7.112), there are terms of the type

    (1/u) r^ν e^{−αu}

present. Let L̂^{−1} denote the operator of the inverse Laplace transform. It can easily be shown that

    L̂^{−1}{(1/u) r^ν e^{−αu}} = Δ(t − α) [(t − α)/(t + α)]^{ν/2} Iν(Q √(t² − α²)),    (7.115)

where α > 0 and ν > −1, and Δ( · ) denotes the unit step function,
while Iν( · · · ) is the modified Bessel function of the first kind and of order ν. With the help of this formula, from (7.112) one can obtain the terms of the infinite series

    m^(+)(t|x′) = m0^(+)(t|x′) + Σ_{k=1}^∞ mk^(+)(t|x′)    (7.116)

expressed by the modified Bessel functions. For illustration, we give here the terms for x′ = 0. By introducing the notation

    t_k = √(t² − k²c²),    (7.117)

one arrives at
    m0^(+)(t|0) = I0(Qt) − Δ(t − c) I0(Qt₁) + I1(Qt) − Δ(t − c) [(t − c)/(t + c)]^{1/2} I1(Qt₁)

and

    mk^(+)(t|0) = (−1)^k { Δ(t − kc) [(t − kc)/(t + kc)]^{k/2} Ik(Qt_k)
                  − Δ(t − (k+1)c) [(t − (k+1)c)/(t + (k+1)c)]^{k/2} Ik(Qt_{k+1})
                  + Δ(t − kc) [(t − kc)/(t + kc)]^{(k+1)/2} Ik+1(Qt_k)
                  − Δ(t − (k+1)c) [(t − (k+1)c)/(t + (k+1)c)]^{(k+1)/2} Ik+1(Qt_{k+1}) }.
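The transform pair (7.115) underlying these Bessel-function terms can be spot-checked numerically. For ν = 0 and α = 0 it reduces to the classical pair L{I0(Qt)} = 1/√(s² − Q²); the sketch below (with arbitrary test values of Q and s, and the power series of I0 written out rather than taken from a library) checks this by direct quadrature:

```python
import math

def I0(z, terms=80):
    # modified Bessel function I0 via its power series: sum_k (z/2)^(2k) / (k!)^2
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (z/2.0)**2 / (k*k)
        total += term
    return total

Q, s = 0.5, 2.0                 # arbitrary test values with s > Q
h, T = 0.001, 40.0              # midpoint-rule step and truncation of the integral
n = int(T/h)
integral = sum(math.exp(-s*(i + 0.5)*h) * I0(Q*(i + 0.5)*h) * h for i in range(n))
exact = 1.0/math.sqrt(s*s - Q*Q)
print(abs(integral - exact))    # small quadrature/truncation error
```

The same quadrature idea works for the general (ν, α) case, with the step function Δ(t − α) simply shifting the lower integration limit.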
Numerical calculations can be performed relatively easily, since the infinite series converge fast. Before presenting the results of calculations, we need to determine the values of the parameters Q, ℓ and v at which the system becomes critical. Obviously, the system is critical if the limit value

    lim_{t→∞} m^(+)(t|x′) = m∞^(+)(x′)

exists, and it is a positive real number. According to the well-known Abel theorem [42], if the limit value m∞^(+)(x′) exists, then

    m∞^(+)(x′) = lim_{s→0} s ψ^(+)(s|x′).    (7.118)

It is important to note that the reverse of the statement is not true. Let us first find the values of the parameters Q and c = ℓ/v for which the function ψ^(+)(s|x′) has a pole at the point s = 0. We see immediately that

    ψ^(+)(0|x′) = (1 − i)[1 − e^{−iQc(1−x′)}][i − e^{−iQc x′}] / [iQ (1 + i)(i + e^{−iQc})],

and from this it follows that ψ^(+)(0|x′) is singular if and only if e^{−iQc} + i = 0, i.e.

    Qc = (1 + 4k)π/2,    k = 0, 1, . . .    (7.119)

By choosing the requirement Qc = π/2, the limit value lim_{s→0} s ψ(s) can be calculated as

    lim_{s→0} s ψ(s) = (1 + i)²/i = 2.    (7.120)

In view of this one can write that

    m∞^(+)(x′) = lim_{s→0} s ψ^(+)(s|x′) = 2 · [1 − e^{−i(π/2)(1−x′)}]/(1 + i) · [1 + i e^{−i(π/2)x′}]/(1 + i)
               = 2 [1 + i e^{i(π/2)x′}][1 + i e^{−i(π/2)x′}]/(1 + i)² = 2 cos(π x′/2),
that is, one finds that in a critical system the expectation of the number of particles in the interval [0, ℓ] at time t = ∞ is given by the formula

    m∞^(+)(x′) = 2 cos(π x′/2),    0 ≤ x′ ≤ 1,    (7.121)

provided that one initiating particle moving to the right was at the point with relative coordinate x′ at t = 0. Due to the symmetry, it is obvious that

    m∞^(−)(x′) = 2 sin(π x′/2).

Figure 7.7 shows the dependence of the expectations m^(+)(t|0) and m^(+)(t|0.5) on the time parameter t for a subcritical case Qc = 1.2 < π/2.

Figure 7.7 The dependence of m^(+)(t|x′) on t in the case of a subcritical process for the values x′ = 0 and 0.5 (Q = 0.4, c = 3).

Considering that, according to the assumptions, the multiplication takes
Figure 7.8 The dependence of m^(+)(t|x′) on t in the case of a critical process for the values x′ = 0 and 0.5 (Q = π/6, c = 3).
Figure 7.9 The dependence of m^(+)(t|x′) on t in the case of a supercritical process for the values x′ = 0 and 0.5 (Q = 0.55, c = 3).
place instantaneously, and no particle absorption takes place in the interval [0, ℓ], the initiating particle starting from the point x = 0 at t = 0, or its progeny moving to the right, reaches the right-hand boundary of the interval [0, ℓ] at time t = c − 0 and leaves the interval at t = c + 0. This is expressed by the unit-magnitude discontinuity seen at the value t = c = 3 on the curve. The particle travelling to the right but starting from the point with relative coordinate x′ = 0.5, or its progeny, naturally exits the interval [0, ℓ] sooner, at the time point t = c/2 = 1.5. Figure 7.8 illustrates the dependence of the expectations m^(+)(t|0) and m^(+)(t|0.5) on the time parameter in a critical process with Qc = π/2. The curves, in addition to the discontinuous behaviour discussed above, show that after a relatively short time the stationary (critical) level of the expectation is reached. If the relative coordinate of the initiating particle is x′ = 0, then m^(+)(t|0) → 2, whereas if x′ = 0.5, then m^(+)(t|0.5) → 2 cos(π/4) = 1.4142 . . . as t → ∞. Figure 7.9 illustrates the dependence of the expectations m^(+)(t|0) and m^(+)(t|0.5) on the time parameter t for the supercritical process Qc = 1.65 > π/2. The curves, in addition to the behaviour discussed earlier, show that the increase of the expectation becomes monotonic with increasing t. Figure 7.10 illustrates the expectation m^(+)(t|x′) of the particle number in the interval [0, ℓ] at time t = 5, as a function of the relative coordinate 0 ≤ x′ ≤ 1 of the initiating particle moving to the right, with a transport time c = 3, in the case of a subcritical (Q = 0.4), a critical (Q = π/6) and a supercritical (Q = 0.55) system. It is interesting to investigate the dependence on x′ for the case when the condition Qc = π/2 is fulfilled, at times before reaching the stationary state. We have shown that in the stationary state the dependence of the expectation on x′ is given by the simple formula (7.121). This dependence is shown by the solid curve in Fig. 7.11. The curve plotted with the short broken line displays this dependence at time t = 4. It is remarkable that there is hardly any difference between this curve and the one belonging to the stationary state at t = ∞. However, all graphs corresponding to a time t < c, hence also the curve drawn with the broken line, corresponding to the time t = 2.4, differ radically
Figure 7.10 Dependence of the expected number of particles in the interval [0, ℓ] on the relative coordinate x′ at time t = 0 of the initiating particle moving to the right, at time t = 5, in the case of a subcritical, critical and supercritical process, respectively (c = 3).
Figure 7.11 The dependence of the expectation m^(+)(t|x′) on the relative coordinate x′ at t = 0 of the initiating particle at three different time points (t = 2.4, t = 4 and t = ∞) when the condition Qc = π/2 is fulfilled (Q = π/6, c = 3).
from both the continuous and the dotted curves. Namely, if the particle moving to the right starts from the point x′ = 1/5 at t = 0, then there will be a particle at the right-hand end of the interval [0, ℓ] at time t = 4c/5 = 2.4. If x′ = 1/5 − 0, then the particle in question still remains within the interval, while if x′ = 1/5 + 0 then it has already left the interval, hence there will be one particle fewer. It is worth pointing out that with the procedure presented here, actually the exact solution of the one-dimensional Boltzmann equation has been determined.
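The critical asymptote (7.121) can also be confirmed directly from the Laplace-domain solution: by the Abel relation (7.118), s ψ^(+)(s|x′) evaluated at a small s with Qc = π/2 should approach 2 cos(π x′/2). A minimal numerical sketch (the complex square root automatically yields u → iQ as s → 0):

```python
import cmath, math

def psi_plus(s, xp, Q, c):
    # closed form (7.109) of the Laplace transform of the expectation
    u = cmath.sqrt(s*s - Q*Q)
    r = Q/(s + u)
    return ((1 - cmath.exp(-c*u*(1 - xp))) * (1 + r)
            * (1 - r*cmath.exp(-c*u*xp))) / (u*(1 + r*cmath.exp(-c*u))*(1 - r))

Q, c = math.pi/6, 3.0            # Qc = pi/2: the critical condition (7.119)
s = 1e-6                         # small s approximates the s -> 0 limit
for xp in (0.0, 0.25, 0.5, 0.75):
    approx = (s*psi_plus(s, xp, Q, c)).real
    exact = 2*math.cos(math.pi*xp/2)
    assert abs(approx - exact) < 1e-3
```

Repeating the evaluation with Qc slightly below π/2 makes s ψ^(+)(s|x′) tend to zero instead, in accordance with the subcritical behaviour shown in Fig. 7.7.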
PART TWO
Neutron Fluctuations
CHAPTER EIGHT
Neutron Fluctuations in the Phase Space: The Pál–Bell Equation
Contents
8.1 Definitions 206
8.2 Derivation of the Equation 207
8.3 Expectation, Variance and Covariance 213
8.4 Pál–Bell Equation in the Diffusion Approximation 217
So far, with the exception of the previous chapter, we have considered branching processes in which the statistics of the branching entities (particles) depended only on time. In the modelling of miscellaneous physical, chemical and biological phenomena, one often encounters branching processes in which the particles are characterised by a given set of continuous parameters, such as the lifetime of the particle, its position and velocity coordinates (direction and energy) and so on. In this chapter, we will deal with branching processes in which the type of each particle is defined by a given point of a suitably chosen Euclidean space, i.e. the phase space in which the particle transport takes place. In the mathematical literature, this type of process is called a general branching process. In physics terminology, one rather talks about continuous parametric branching processes. Numerous articles have been published about this process, and an excellent summary of the most important results can be found in the book of Harris [6]. We shall in this chapter describe the evolution of the distribution of neutrons in the phase space. The spatial, directional and energy distribution of the neutrons will be governed by the laws of neutron transport, whereas the branching, as before, will be represented by the fission term (which also leads to the emission of neutrons with various directions and energies at various spatial points). Statistical properties of continuous parametric particle branching processes were first studied in physics research in connection with electron–photon showers induced by cosmic radiation. High-energy light particles and protons, through atomic collisions during their passage in the atmosphere, produce a large number of secondary particles, including electron–photon cascades.
The problem of calculating the mean and the variance of the number of electrons in such a cascade that slowed down past a certain energy E has been studied in detail by Bhabha, Heitler, Jánossy, Messel and others (an excellent summary is found in the book of Harris [6], Chapter VII). In the case of the neutron fission chain in a multiplying medium, a similar description poses different problems, both conceptually and practically. The case of electron–photon showers, seen from the viewpoint of particle transport, is essentially a one-dimensional problem, due to the strongly forward-peaked character of the scattering of high-energy charged particles. Also, due to the conservation of energy in the electron–photon cascade, there is a monotonic relationship between the position (depth parameter) of an electron and its energy. Finally, the generation of photons and electrons or electron–positron pairs is an instantaneous process, leading to only prompt production of particles. In the case of neutrons, the fission neutrons are emitted isotropically. Also, although scattering is, in general, anisotropic, the straight-ahead scattering model for electron–photon showers is not applicable. Further, due to
the extra energy given to the fission neutrons, there is no energy conservation in the neutron chain. Last, but not least, the generation of delayed neutrons, especially when treated with the backward approach, constitutes a special further complication. Because of all these differences, the theory of neutron fluctuations in the space-, angle- and energy-dependent case developed independently from cascade theory, and constitutes a completely autonomous area of branching particle processes. This will be described in the following. One possible treatment of the problem would be to divide the Euclidean space F characterising the neutrons (e.g. the space and velocity coordinates) into a finite number of non-overlapping cells, and to determine the distribution of the neutrons with respect to the cells at a given time t ≥ 0 by methods of matrix theory, provided that we knew this distribution at time t = 0. For the calculations, of course, the transition probabilities between the cells need to be known. Such a method, using the forward Kolmogorov equation, is presented by Stacey [9], although he fundamentally provides an example of the application only for the simplest one-cell case. We have chosen a simpler way to solve the problem by utilising the advantages of the backward Kolmogorov equation. In essence, we will follow the treatment of the paper published by Pál [43] in 1958, whose more detailed versions can be read in the articles [44–46] published in 1962. Bell's paper [47] published in 1965 analyses in detail the behaviour of the generating function equation for subcritical and supercritical systems. In 1966, Matthes [48] also provided an equivalent, but less convenient, formalism. In our view, the foundation of the most general stochastic theory of the nuclear fission chain reaction is given by the Pál–Bell equation. Since 1958, numerous articles and monographs have dealt with its applications in reactor physics.
Since this equation is suitable for the description not only of fission chain reactions but also of many other branching processes, we will describe its derivation and its most important characteristics in a quite general form. In this chapter we will follow the notation used in the above-mentioned papers.
8.1 Definitions

The Pál–Bell equation concerns a singly connected multiplying medium of finite extension, bounded by a convex surface, and it serves to determine the probability distribution of the number of particles at a given time t ≥ 0, born in the multiplying reactions generated by particles randomly varying their locations and velocities (kinetic energies and velocity directions), whose coordinates and velocities are contained in some Borel set of the corresponding three-dimensional Euclidean space.1 Let VR denote a subset of a three-dimensional Euclidean space V, and let VR also denote the measure of this subset, i.e. the volume of the multiplying medium. Denote by SR the closed convex surface which covers the volume VR. Let Vr be that Borel subset of VR which contains the point corresponding to the position vector r and whose measure is Vr ≤ VR. Denote by Ov the set among the Borel sets of another three-dimensional Euclidean space O which includes the point corresponding to the velocity vector v. Define the direct product U = Vr ⊗ Ov, and let n(t, U) be the random process which gives the number of particles whose position coordinates are contained in the set Vr and whose velocity coordinates are contained in the set Ov at time t. Let us call the subset VR ⊗ O = UR ⊆ F of the points of the six-dimensional Euclidean space V ⊗ O = F the phase space, whose points u = {r, v} determine the types of the particles. Let u0 = {r0, v0} be a given point of the set UR, and denote by I(t0, u0) the event that a particle (e.g. a neutron) of velocity v0 was at the point r0 of the volume VR of the multiplying medium at time t0. Let U1, . . . , Uh be disjoint subsets of the set UR, and let n(t, U) be characterised by the h-dimensional probability

    P{n(t, Ui) = ni(Ui), i = 1, . . . , h | I(t0, u0)} = p(h)[t0, u0; t, n1(U1), . . . , nh(Uh)].    (8.1)
In the following, we will basically be dealing only with the determination of the probability

    P{n(t, U1) = n1(U1) | I(t0, u0)} = p(1)[t0, u0; t, n1(U1)]

1 This means that the trajectory of each particle can be differentiated almost everywhere, i.e. it makes sense to speak about the instantaneous velocities of the particles, which for instance cannot be claimed about particles undergoing non-inertial Brownian movement.
and with the investigation of its properties. We will only briefly touch upon some characteristics of the two-dimensional probability p(2)[t0, u0; t, n1(U1), n2(U2)]. For the sake of simplicity, we shall use the following notation:

    p(1)[t0, u0; t, n1(U1)] = p[t0, u0; t, n(U)].    (8.2)

The probability p[t0, u0; t, n(U)] obviously has to satisfy the initial condition

    lim_{t↓t0} p[t0, u0; t, n(U)] = Δ(u0, U) δ_{n,1} + [1 − Δ(u0, U)] δ_{n,0},    (8.3)

and the boundary condition relation

    lim_{r0→rs} p[t0, u0; t, n(U)] = δ_{n,0},    if (Ω0 · ns) > 0.    (8.4)

In (8.3)

    Δ(u0, U) = 1 if u0 ∈ U,    Δ(u0, U) = 0 if u0 ∉ U,

and Δ(u0, U) = Δ(r0, Vr) Δ(v0, Ov). Further, Ω0 is the unit vector of the vector v0, while ns is the outward-pointing normal vector at an arbitrary point rs of the convex surface SR bounding the multiplying medium.
8.2 Derivation of the Equation

For the determination of the probability p[t0, u0; t, n(U)], we start from the fact that the single neutron found in the multiplying system at time t0 will either cause a reaction up to time t ≥ t0 or it will not. In order that our considerations should easily be applicable also to the fission chain reaction, we introduce here a modification compared to the previous chapters in that we will treat separately the reactions causing absorption, scattering and fission. Let

    Q(t, r, v) = Qa(t, r, v) + Qs(t, r, v) + Qf(t, r, v)

denote the intensity of the reaction, where Qa(t, r, v) is the intensity of the absorption, Qs(t, r, v) is the intensity of the scattering, while Qf(t, r, v) is the intensity of the fission reaction. These intensities are naturally zero at each point r which does not belong to the set VR. Accordingly, Q(t, r, v)Δt + o(Δt) is obviously the probability that the neutron whose position vector is r and the absolute value of whose velocity is v will cause a reaction in the time interval [t, t + Δt]. Let A0 be the event that the single neutron of type u0, found in the multiplying medium at time t0, does not cause a reaction until time t ≥ t0. Likewise, let Aa, As and Af be the events that the first reaction of the neutron is absorption, scattering or fission, respectively. By introducing the notations

    P{n(t, U) = n(U) | I(t0, u0), A0} = p0[t0, u0; t, n(U)],
    P{n(t, U) = n(U) | I(t0, u0), Aa} = pa[t0, u0; t, n(U)],
    P{n(t, U) = n(U) | I(t0, u0), As} = ps[t0, u0; t, n(U)],
    P{n(t, U) = n(U) | I(t0, u0), Af} = pf[t0, u0; t, n(U)],

the probability p[t0, u0; t, n(U)] can be represented as a sum of the partial probabilities of four mutually exclusive events, namely

    p[t0, u0; t, n(U)] = p0[t0, u0; t, n(U)] + pa[t0, u0; t, n(U)] + ps[t0, u0; t, n(U)] + pf[t0, u0; t, n(U)].
(8.5)
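The decomposition (8.5) rests on the four events being mutually exclusive and exhaustive. For constant intensities this is easy to illustrate by simulation: the first-reaction time is exponential with rate Q, and, given that a reaction occurs, its type is absorption, scattering or fission with probabilities Qa/Q, Qs/Q and Qf/Q. A sketch with made-up intensity values (not taken from the text):

```python
import math, random

random.seed(2)
Qa, Qs, Qf = 0.2, 0.5, 0.3            # constant partial intensities (made up)
Q = Qa + Qs + Qf
t_obs, N = 1.0, 100_000
counts = {"none": 0, "a": 0, "s": 0, "f": 0}
for _ in range(N):
    if random.expovariate(Q) > t_obs:         # no reaction up to t_obs
        counts["none"] += 1
    else:                                     # reaction type with prob. Q_x/Q
        counts[random.choices("asf", weights=(Qa, Qs, Qf))[0]] += 1

p_none = math.exp(-Q*t_obs)                   # analogue of the p0 term
p_a = (Qa/Q)*(1 - p_none)                     # analogue of the pa term
assert abs(counts["none"]/N - p_none) < 0.01
assert abs(counts["a"]/N - p_a) < 0.01
```

The four relative frequencies sum to one by construction, mirroring the fact that the four partial probabilities in (8.5) exhaust all possibilities.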
8.2.1 The probability of no reaction

First, determine the probability of the event T(t0, u0, t) that the neutron with velocity v0 at the point r0 of the multiplying medium at time t0 will not cause any reaction up to time t ≥ t0. We assume that for each time point t′ with t0 ≤ t′ ≤ t, the following relationship holds:

    T(t0, u0, t) = T(t0, u0, t′) T(t′, u′0, t),    (8.6)

in which u′0 = {r′, v0} and r′ = r0 + (t′ − t0)v0. By considering the condition

    lim_{t↓t0} T(t0, u0, t) = 1,

the solution of the functional equation (8.6) can be written in the following form:

    T(t0, u0, t) = exp{ −∫_{t0}^{t} Q(t′, r′, v0) dt′ }.    (8.7)
We note that the relationship (8.6) guarantees that the process in the phase space UR will be a Markovian branching process. If the probability that an event does not occur in the interval [t0 , t] was influenced by any event before t0 , then the formula (8.7) could not be used and the description of the branching process would be much more complicated.
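The factorisation (8.6) and the exponential form (8.7) can be illustrated with a small numerical sketch: for an arbitrary (made-up) time-dependent intensity Q(t) along the flight path, the survival probability computed over [t0, t] must equal the product of the probabilities over [t0, t′] and [t′, t]:

```python
import math

def T(t0, t, Q, n=4000):
    # survival probability (8.7): exp(-integral of Q over [t0, t]), midpoint rule
    h = (t - t0)/n
    return math.exp(-h*sum(Q(t0 + (i + 0.5)*h) for i in range(n)))

Q = lambda t: 0.3 + 0.1*math.sin(t)   # an arbitrary time-varying intensity
t0, tm, t1 = 0.0, 1.3, 2.0
lhs = T(t0, t1, Q)
rhs = T(t0, tm, Q)*T(tm, t1, Q)
assert abs(lhs - rhs) < 1e-6          # the Markov factorisation (8.6) holds
```

The factorisation holds here simply because the exponent is additive over subintervals; had the survival probability depended on events before t0, no such product form would exist, as noted above.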
8.2.2 Probabilities of the reactions

We also need the probabilities characterising the scattering and fission reactions. In the scattering reaction, the velocity of the neutron can change randomly. Let the random vector ϒ0 denote the velocity of the neutron before the scattering and ϒ the velocity after the scattering, respectively. Let O′v ⊆ O be a finite domain of the velocity space whose measure is O′v. Moreover, let

    P{ϒ ∈ O′v | ϒ0 = v0} = Ws(v0, O′v)    (8.8)

denote the probability that the velocity of a neutron in the scattering reaction changes from v0 to a value v′ such that v′ ∈ O′v. If Ws(v0, O′v) is absolutely continuous, i.e.

    Ws(v0, O′v) = ∫_{O′v} ws(v0, v′) d³v′,    (8.9)

then the probability density function ws(v0, v′) exists, and one can state that ws(v0, v′)d³v′ is the probability that the velocity of a neutron with velocity v0 changes in the scattering reaction such that it will fall into the domain d³v′ around the velocity v′. Suppose that in a fission reaction ν0 neutrons are born promptly, and that ν1, ν2, . . . , νℓ delayed neutron precursors are generated. The decay times of the precursors follow exponential distributions characterised by the coefficients λ1, λ2, . . . , λℓ, respectively. The decay results in a delayed neutron which is capable of multiplication. For the sake of simplicity, we call the neutrons capable of multiplication particles of type A, while the delayed neutron precursors, not being capable of multiplication, particles of types B1, B2, . . . , Bℓ. Define

    P{ν0 = k0; νj = kj, 1 ≤ j ≤ ℓ | v0} = f(k0; kj, 1 ≤ j ≤ ℓ | v0)    (8.10)
as the probability of the event that a particle of type A with velocity v0 in a fission reaction produces k0 particles of type A, and k1, k2, . . . , kℓ particles of types B1, B2, . . . , Bℓ, respectively. We notice that if k′j ≤ kj precursors of type Bj decay, then k′j delayed particles of type A will be generated. Introduce the generating function

    q(z0; z1, . . . , zℓ | v0) = Σ_{k0,k1,...,kℓ} f(k0; k1, . . . , kℓ | v0) z0^{k0} z1^{k1} · · · zℓ^{kℓ},    (8.11)

which, by using the notations

    {k0, k1, . . . , kℓ} = k    and    {z0, z1, . . . , zℓ} = z,

can concisely be written in the following form:

    q(z|v0) = Σ_k f(k|v0) z^k,    (8.12)

where z^k = z0^{k0} z1^{k1} · · · zℓ^{kℓ}. Let the random vectors ϒ0,1, ϒ0,2, . . . , ϒ0,k0 be the velocities of the promptly born neutrons. We assume that these random vectors have identical distributions and that they are independent, and also that they are independent of the speed (energy) of the particle inducing the reaction. Hence, if in addition

    P{ϒ0,i ∈ O′v} = Wf^(0)(v ∈ O′v),    i = 1, 2, . . . , k0    (8.13)

is an absolutely continuous function of each component of v, then it can be written as

    Wf^(0)(v ∈ O′v) = ∫_{O′v} wf^(0)(v′) d³v′.    (8.14)
We will also need the velocity distribution of the particles of type A which are born with a delay from the precursors. We assume that the velocities ϒj,i of the particles of type A generated from the precursors of type Bj, where i = 1, 2, . . . , kj, are independent and have identical distributions; moreover, this statement is true for each precursor type. Let

    P{ϒj,i ∈ O′v} = Wf^(j)(v ∈ O′v),    ∀ i = 1, 2, . . . , kj    (8.15)

denote the probability that the velocity of any particle of type A, arising from the decay of precursors of type Bj, is found in the velocity domain O′v. If also the condition

    Wf^(j)(v ∈ O′v) = ∫_{O′v} wf^(j)(v′) d³v′    (8.16)

is fulfilled, then one can claim that wf^(j)(v′)d³v′ is the probability that the velocity of any particle born by the decay of a precursor of type Bj is contained in the domain d³v′ around the velocity v′.
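The generating function (8.12) can be made concrete with a toy multiplicity distribution. The numbers below are hypothetical (a single precursor type, ℓ = 1, with invented probabilities); the sketch checks the normalisation q(1|v0) = 1 and recovers the mean prompt multiplicity as ∂q/∂z0 at z = 1:

```python
# toy distribution f(k0, k1 | v0): k0 prompt neutrons, k1 precursors of one type
f = {(0, 0): 0.3, (2, 0): 0.4, (2, 1): 0.2, (3, 1): 0.1}   # hypothetical values

def q(z0, z1):
    # generating function (8.12): q(z|v0) = sum_k f(k|v0) z0^k0 z1^k1
    return sum(p * z0**k0 * z1**k1 for (k0, k1), p in f.items())

assert abs(q(1.0, 1.0) - 1.0) < 1e-12          # normalisation
h = 1e-6                                       # central difference for dq/dz0
nu0 = (q(1 + h, 1.0) - q(1 - h, 1.0))/(2*h)    # mean number of prompt neutrons
assert abs(nu0 - 1.5) < 1e-6                   # 2*0.4 + 2*0.2 + 3*0.1 = 1.5
```

The same differentiation with respect to z1 yields the mean precursor multiplicity, which is the pattern used later when expectations are extracted from the generating function equation.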
8.2.3 Partial probabilities

We can now determine successively the partial probabilities in (8.5). Obviously

    p0[t0, u0; t, n(U)] = T(t0, u0, t){Δ(u, U) δ_{n,1} + [1 − Δ(u, U)] δ_{n,0}},    (8.17)

where u = {r0 + (t − t0)v0, v0}.
Since Δ(u, U) is zero at each point that does not belong to the domain Vr, it is naturally zero also if the neutron, moving with constant velocity along a straight line, leaves the system up to time t. Also, the probability pa[t0, u0; t, n(U)] is immediately given as

    pa[t0, u0; t, n(U)] = ∫_{t0}^{t} T(t0, u0, t′) Qa(t′, r′, v0) dt′ δ_{n,0}.    (8.18)
The next step is to determine the probability ps[t0, u0; t, n(U)]. By considering that in this case the first reaction results in the scattering of the neutron, we find that

    ps[t0, u0; t, n(U)] = ∫_{t0}^{t} dt′ T(t0, u0, t′) Qs(t′, r′, v0) ∫_O ws(v0, v′) p[t′, r′, v′; t, n(U)] d³v′,    (8.19)

where2 r′ = r0 + (t′ − t0)v0. The determination of the probability pf[t0, u0; t, n(U)] is a more complicated task compared to the previous ones. We utilise the fact that each of the prompt neutrons born in the interval [t′, t′ + dt′] starts a multiplying process independently of the others. Thus if the number of prompt neutrons is k0 > 0, then
    h0[t, n0(U)|t′, r′, v0, k0] = Σ_{n0,1+···+n0,k0 = n0} Π_{i=1}^{k0} ∫_O wf^(0)(v′) p[t′, r′, v′; t, n0,i(U)] d³v′    (8.20)

is the probability that the number of neutrons generated by them, belonging to the set U at time t ≥ t′, is exactly n0(U). If k0 = 0, then obviously

    h0[t, n0(U)|t′, r′, v0, k0 = 0] = δ_{n0,0}.    (8.21)
It is implicit in the above that in the argument of h0[· · ·] the total number of neutrons generated by the k0 prompt neutrons is n0 ≤ n, since the delayed neutrons, arising from the decay of the precursors generated in the same fission, will also produce further neutrons that we need to account for. These latter can be calculated as follows. Note that the particles of type A generated by the decay of the precursors of type Bj will start multiplying processes independently from each other. Let us assume that out of the kj precursors of type Bj born in the position r′ at time t′, in the interval (t′, t] after t′, k′j ≤ kj will decay and kj − k′j will not decay. The probability that the number of neutrons in U generated by a particle of type A, originating from one particular, say from the i-th, precursor of type Bj decaying during the time interval (t′, t], will be at the time instant t exactly nj,i, is given as

    dj[t, nj,i(U)|t′, r′] = λj ∫_{t′}^{t} e^{−λj(t″−t′)} ∫_O wf^(j)(v′) p[t″, r′, v′; t, nj,i] d³v′ dt″.    (8.22)
Then, the probability that the number of neutrons belonging to the set U at time t, generated by particles of type A originating from the decay of the kj precursors of type Bj born in the position r′ and at time t′, will be exactly nj(U), can be given in the following form:

    hj[t, nj(U)|t′, r′, kj] = e^{−λj kj (t−t′)} δ_{nj,0} + Σ_{k′j=1}^{kj} (kj choose k′j) e^{−λj(kj−k′j)(t−t′)} Σ_{nj,1+···+nj,k′j = nj} Π_{i=1}^{k′j} dj[t, nj,i(U)|t′, r′],    (8.23)

2 We note that in (8.19), and in the forthcoming wherever it is used, the symbol O denotes that the integration is extended over the whole velocity space, i.e.

    ∫_O · · · d³v′ = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} · · · dv′x dv′y dv′z.
where the first term on the right-hand side is the probability of the event that none of the precursors of type Bj will decay in the interval (t′, t]. Now the determination of the probability pf[t0, u0|t, n(U)] is simple, and one obtains the result

    pf[t0, u0|t, n(U)] = ∫_{t0}^{t} T(t0, u0, t′) Qf(t′, r′, v0) Σ_{k0,k1,...,kℓ} f(k0; k1, . . . , kℓ|v0) Σ_{n0+n1+···+nℓ=n} Π_{j=0}^{ℓ} hj[t, nj(U)|t′, r′, v0, kj] dt′.    (8.24)
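The first term of (8.23), exp(−λj kj (t − t′)), is the probability that none of the kj precursors decays in (t′, t], since the decay times are independent and exponential. A quick Monte Carlo sketch with made-up values of the decay constant, window length and precursor number:

```python
import math, random

random.seed(3)
lam, dt, k = 0.4, 2.0, 5     # made-up decay constant, time window, precursors
N = 100_000
# event "no precursor decays": all k independent exponential times exceed dt
none = sum(all(random.expovariate(lam) > dt for _ in range(k)) for _ in range(N))
exact = math.exp(-lam*k*dt)  # first term of (8.23)
assert abs(none/N - exact) < 0.005
```

The remaining terms of (8.23) are just the binomial probabilities of exactly k′j decays, each decayed precursor then contributing an independent factor dj.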
8.2.4 Generating function equation

In view of the fact that each term on the right-hand side of (8.5) has been determined (i.e. related back to the searched quantity p[t0, u0; t, n(U)]), the task of setting up the master equation has been solved. The probability p[t0, u0; t, n(U)] satisfies a complicated non-linear integral equation, which becomes substantially simpler by introducing the probability generating function

    g(t0, r0, v0; t, z) = Σ_{n=0}^{∞} p[t0, u0; t, n(U)] z^n.    (8.25)
For this one obtains

    g(t0, r0, v0; t, z) = T(t0, u0, t)[1 − (1 − z)Δ(u, U)] + ∫_{t0}^{t} T(t0, u0, t′) Qa(t′, r′, v0) dt′
        + ∫_{t0}^{t} T(t0, u0, t′) { Qs(t′, r′, v0) ∫_O ws(v0, v′) g(t′, r′, v′; t, z) d³v′ + Qf(t′, r′, v0) q[s(t′, r′, v0; t, z)|v0] } dt′,    (8.26)
where the generating function q[s] was defined in (8.11) and (8.12), and the components of the vector s(t′, r′, v0; t, z) are now as follows:

    s0(t′, r′, v0; t, z) = ∫_O wf^(0)(v′) g(t′, r′, v′; t, z) d³v′    (8.27)

and

    sj(t′, r′, v0; t, z) = e^{−λj(t−t′)} + λj ∫_{t′}^{t} e^{−λj(t″−t′)} ∫_O wf^(j)(v′) g(t″, r′, v′; t, z) d³v′ dt″,    1 ≤ j ≤ ℓ.    (8.28)
1 ≤ j ≤ . One easily confirms that (8.26) gives the initial condition lim g(t0 , u0 ; t, z) = 1 − (1 − z)(u0 , U) t↓t0
(8.29)
and the boundary condition lim g(t0 , u0 ; t, z) = 1,
r0 →rs
0 · s ) > 0. if (
It is also straightforward to show that g(t0 , r0 , v0 ; t, 1) = 1 as it should be.
(8.30)
From the integral equation (8.26), by derivation with respect to t0, an integro-differential equation is obtained that is reminiscent of the Boltzmann-type kinetic equation, in the form

    ∂g(t0, u0; t, z)/∂t0 + T̂ g(t0, u0; t, z) + Qf(t0, r0, v0) q[s(t0, r0, v0; t, z)|v0] + Qa(t0, r0, v0) = 0,    (8.31)

where T̂ is the so-called transport operator, whose effect is shown by the following equation:

    T̂ g(t0, u0; t, z) = −Q(t0, r0, v0) g(t0, u0; t, z) + v0 · ∇r0 g(t0, u0; t, z) + Qs(t0, r0, v0) ∫_O ws(v0, v′) g(t0, r0, v′; t, z) d³v′.    (8.32)

The initial condition of (8.31) is supplied by (8.29), while its boundary condition by the relationship (8.30). In the latter, Ω0 is the unit vector of the vector v0, and ns is the outward-directed normal vector belonging to the point rs of the convex surface SR surrounding the multiplying medium. In the reactor physics literature, equation (8.31) is called the Pál–Bell equation, of which another derivation is also known [49].
8.2.5 Distribution of neutron numbers in two disjoint phase domains

In many cases, one needs the two-dimensional probability

    P{n(t, U1) = n1(U1), n(t, U2) = n2(U2) | I(t0, u0)} = p(2)[t0, u0; t, n1(U1), n2(U2)].    (8.33)

If U1 ∩ U2 = ∅, then one can immediately write that

    p0(2)[t0, u0; t, n1(U1), n2(U2)] = T(t0, u0, t){Δ(u, U1) δ_{n1,1} δ_{n2,0} + Δ(u, U2) δ_{n1,0} δ_{n2,1} + [1 − Δ(u, U1)][1 − Δ(u, U2)] δ_{n1,0} δ_{n2,0}}.

Since U1 and U2 do not contain a common point, it is evident that Δ(u, U1)Δ(u, U2) = 0. Without repeating the reasoning used for the derivation of the probability p(1)[t0, u0; t, n1(U1)] = p[t0, u0; t, n(U)], we simply give here the equation determining the probability generating function

    g(2)(t0, u0; t, z1, z2) = Σ_{n1=0}^{∞} Σ_{n2=0}^{∞} p(2)[t0, u0; t, n1(U1), n2(U2)] z1^{n1} z2^{n2},    (8.34)
as follows:

    ∂g(2)(t0, u0; t, z1, z2)/∂t0 + T̂ g(2)(t0, u0; t, z1, z2) + Qf(t0, r0, v0) q[s(2)(t0, r0, v0; t, z1, z2)|v0] + Qa(t0, r0, v0) = 0,    (8.35)

in which now

    T̂ g(2)(t0, u0; t, z1, z2) = −Q(t0, r0, v0) g(2)(t0, u0; t, z1, z2) + v0 · ∇r0 g(2)(t0, u0; t, z1, z2) + Qs(t0, r0, v0) ∫_O ws(v0, v′) g(2)(t0, r0, v′; t, z1, z2) d³v′,    (8.36)

and where the components of s(2) are identical in form to those of s expressed by (8.27) and (8.28), only g should everywhere be replaced by the generating function g(2) with suitable arguments. The initial condition of (8.35) is now given by

    lim_{t↓t0} g(2)(t0, u0; t, z1, z2) = 1 − (1 − z1)Δ(u0, U1) − (1 − z2)Δ(u0, U2),
Neutron Fluctuations in the Phase Space: The Pál–Bell Equation
while its boundary condition by

$$\lim_{r_0\to r_s} g^{(2)}(t_0,u_0;t,z_1,z_2) = 1, \qquad \text{if } (\Omega_0\cdot n_s) > 0,$$

in which the meaning of $r_s$ is the same as before.
8.3 Expectation, Variance and Covariance

8.3.1 Expectation of the number of neutrons

From the non-linear integro-differential equation (8.31), one can easily calculate the expectation

$$E\{n(t,U)\,|\,I(t_0,u_0)\} = m_1(t_0,u_0;t,U) = m_1(t_0,r_0,v_0;t,V_r,O_v) \qquad (8.37)$$

of the number of neutrons in the phase-space domain $U$, i.e. in the volume $V_r$ and the velocity domain $O_v$, at time $t$, that were generated by the single neutron with velocity $v_0$ found at the point $r_0$ of the multiplying medium at the time point $t_0$, and by its progeny. Since
$$m_1(t_0,u_0;t,U) = \left.\frac{\partial g(t_0,u_0;t,z)}{\partial z}\right|_{z=1},$$

one obtains

$$\frac{\partial m_1(t_0,u_0;t,U)}{\partial t_0} + \hat{T}m_1(t_0,u_0;t,U) + Q_f(t_0,r_0,v_0)\sum_{j=0}^{\ell}\bar{\nu}_j R_j^{(1)}(t_0,r_0;t,U) = 0, \qquad (8.38)$$

where

$$R_0^{(1)}(t_0,r_0;t,U) = \int_O w_f^{(0)}(v')\,m_1(t_0,r_0,v';t,U)\,d^3v' \qquad (8.39)$$

and

$$R_j^{(1)}(t_0,r_0;t,U) = \lambda_j\int_{t_0}^{t} e^{-\lambda_j(t'-t_0)}\int_O w_f^{(j)}(v')\,m_1(t',r_0,v';t,U)\,d^3v'\,dt', \qquad j=1,2,\ldots,\ell, \qquad (8.40)$$

and further

$$E\{\nu_j\} = \bar{\nu}_j, \qquad j=0,1,\ldots,\ell. \qquad (8.41)$$
It is to be mentioned that $\bar{\nu}_0$ is the average number of prompt neutrons per fission, and $\bar{\nu}_j$, $j=1,\ldots,\ell$ is the average number of precursor nuclei of type $j$ formed by fission. Since each delayed neutron precursor emits one neutron, it is obvious that $\bar{\nu}_j$, $j=1,\ldots,\ell$ is equal to the number of delayed neutrons of type $j$ per fission. The initial and boundary conditions are given by (8.29) and (8.30) as

$$\lim_{t\downarrow t_0} m_1(t_0,u_0;t,U) = \Delta(u_0,U) \qquad (8.42)$$

and

$$\lim_{r_0\to r_s} m_1(t_0,u_0;t,U) = 0, \qquad \text{if } (\Omega_0\cdot n_s) > 0. \qquad (8.43)$$

If the integral

$$m_1(t_0,u_0;t,U) = \int_U n_1(t_0,u_0;t,u)\,du = \int_{V_r}\int_{O_v} n_1(t_0,u_0;t,r,v)\,d^3r\,d^3v \qquad (8.44)$$
Imre Pázsit & Lénárd Pál
exists, then it can be shown that the average density of the neutron number, $n_1(t_0,u_0;t,r,v)$, satisfies the following integro-differential equation:

$$\frac{\partial n_1(t_0,u_0;t,r,v)}{\partial t_0} + \hat{T}n_1(t_0,u_0;t,r,v) + Q_f(t_0,r_0,v_0)\sum_{j=0}^{\ell}\bar{\nu}_j r_j^{(1)}(t_0,r_0;t,r,v) = 0, \qquad (8.45)$$

where

$$r_0^{(1)}(t_0,r_0;t,r,v) = \int_O w_f^{(0)}(v')\,n_1(t_0,r_0,v';t,r,v)\,d^3v' \qquad (8.46)$$

and

$$r_j^{(1)}(t_0,r_0;t,r,v) = \lambda_j\int_{t_0}^{t} e^{-\lambda_j(t'-t_0)}\int_O w_f^{(j)}(v')\,n_1(t',r_0,v';t,r,v)\,d^3v'\,dt', \qquad j=1,2,\ldots,\ell, \qquad (8.47)$$

with the initial condition

$$\lim_{t\downarrow t_0} n_1(t_0,r_0,v_0;t,r,v) = \delta(r_0-r)\,\delta(v_0-v)$$

and with the boundary condition

$$\lim_{r_0\to r_s} n_1(t_0,r_0,v_0;t,r,v) = 0, \qquad \text{if } (\Omega_0\cdot n_s) > 0.$$
Equation (8.45) is obviously identical with the adjoint of the integro-differential neutron transport equation well known from classical transport theory. The methods concerning the approximate solution of this equation and the investigation of its properties are dealt with in a multitude of articles and monographs; in this book we shall not be concerned with these.
8.3.2 Variance of the number of neutrons

As a next step, let us write down the equations for the second factorial moment needed for the calculation of the variance of $n(t,U)$ in the case when the random variables $\nu_j$, $j=0,1,\ldots,\ell$ are entirely independent. In this case

$$q(z_0;z_1,\ldots,z_\ell\,|\,v_0) = \prod_{j=0}^{\ell} q_j(z_j\,|\,v_0),$$

and hence

$$\left.\frac{\partial^2 q(z_0;z_1,\ldots,z_\ell\,|\,v_0)}{\partial z_j\,\partial z_{j'}}\right|_{z_0=\cdots=z_\ell=1} = \chi_{j,j'} = \begin{cases}\overline{\nu_j(\nu_j-1)}, & \text{if } j=j',\\ \bar{\nu}_j\,\bar{\nu}_{j'}, & \text{if } j\ne j'.\end{cases}$$

Since each delayed neutron precursor emits one neutron, the equations

$$\overline{\nu_j(\nu_j-1)} = 0, \qquad \forall\, j=1,\ldots,\ell$$

must hold, and accordingly

$$\overline{\nu_j^2} = \bar{\nu}_j, \qquad \text{i.e.}\qquad D^2\{\nu_j\} = \bar{\nu}_j(1-\bar{\nu}_j).$$

By taking all these into account, one obtains

$$\frac{\partial m_2(t_0,u_0;t,U)}{\partial t_0} + \hat{T}m_2(t_0,u_0;t,U) + Q_f(t_0,r_0,v_0)\sum_{j=0}^{\ell}\bar{\nu}_j R_j^{(2)}(t_0,r_0;t,U) + Q_f(t_0,r_0,v_0)\sum_{j,j'}\chi_{j,j'}\,R_j^{(1)}(t_0,r_0;t,U)\,R_{j'}^{(1)}(t_0,r_0;t,U) = 0, \qquad (8.48)$$
where

$$R_0^{(2)}(t_0,r_0;t,U) = \int_O w_f^{(0)}(v')\,m_2(t_0,r_0,v';t,U)\,d^3v' \qquad (8.49)$$

and

$$R_j^{(2)}(t_0,r_0;t,U) = \lambda_j\int_{t_0}^{t} e^{-\lambda_j(t'-t_0)}\int_O w_f^{(j)}(v')\,m_2(t',r_0,v';t,U)\,d^3v'\,dt', \qquad j=1,2,\ldots,\ell. \qquad (8.50)$$

In this case, the initial condition is given by

$$\lim_{t\downarrow t_0} m_2(t_0,u_0;t,U) = 0 \qquad (8.51)$$

and the boundary condition by

$$\lim_{r_0\to r_s} m_2(t_0,u_0;t,U) = 0, \qquad \text{if } (\Omega_0\cdot n_s) > 0. \qquad (8.52)$$

The variance of $n(t,U)$ can be determined by the formula

$$D^2\{n(t,U)\,|\,I(t_0,u_0)\} = m_2(t_0,u_0;t,U) + m_1(t_0,u_0;t,U)[1 - m_1(t_0,u_0;t,U)].$$
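The variance formula above is a purely combinatorial identity between the variance and the first two factorial moments, valid for any non-negative integer random variable. As a quick sanity check (a minimal sketch, not part of the derivation), it can be verified on an arbitrary sample:

```python
import random

random.seed(0)

# Draw a sample of non-negative integer "neutron numbers".
sample = [random.randint(0, 5) for _ in range(10000)]
N = len(sample)

m1 = sum(sample) / N                        # first moment E{n}
m2 = sum(n * (n - 1) for n in sample) / N   # second factorial moment E{n(n-1)}
var_direct = sum((n - m1) ** 2 for n in sample) / N

# Variance assembled from the factorial moments, as in the text:
var_factorial = m2 + m1 * (1.0 - m1)
```

The two expressions agree exactly, since $m_2 + m_1(1-m_1) = E\{n^2\} - m_1^2$.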
8.3.3 Covariance between particle numbers

In order to calculate the covariance function

$$\mathrm{cov}(t_0,u_0;t,U_1,U_2) = E\{[n(t,U_1)-m_1(t_0,u_0;t,U_1)][n(t,U_2)-m_1(t_0,u_0;t,U_2)]\,|\,I(t_0,u_0)\} \qquad (8.53)$$

between the numbers of neutrons in two non-intersecting domains of the phase space $U_R$ at a given moment $t\ge 0$, one needs to determine the mixed second moment

$$E\{n(t,U_1)\,n(t,U_2)\,|\,I(t_0,u_0)\} = \left.\frac{\partial^2 g^{(2)}(t_0,u_0;t,z_1,z_2)}{\partial z_1\,\partial z_2}\right|_{z_1=z_2=1} = m_{1,1}(t_0,u_0;t,U_1,U_2).$$

This is obtained from

$$\frac{\partial m_{1,1}(t_0,u_0;t,U_1,U_2)}{\partial t_0} + \hat{T}m_{1,1}(t_0,u_0;t,U_1,U_2) + Q_f(t_0,r_0,v_0)\sum_{j=0}^{\ell}\bar{\nu}_j R_j^{(1,1)}(t_0,r_0;t,U_1,U_2) + Q_f(t_0,r_0,v_0)\sum_{j,j'}\chi_{j,j'}\,R_j^{(1)}(t_0,r_0;t,U_1)\,R_{j'}^{(1)}(t_0,r_0;t,U_2) = 0, \qquad (8.54)$$

where, if $j=0$, then $R_j^{(1,1)}(t_0,r_0;t,U_1,U_2)$ is formally identical with (8.39), and if $j=1,2,\ldots,\ell$, then it is identical with (8.40), only that $m_1$ has to be replaced everywhere by $m_{1,1}$ with the proper arguments. The initial condition of (8.54) reads as

$$\lim_{t\downarrow t_0} m_{1,1}(t_0,u_0;t,U_1,U_2) = 0$$

and the boundary condition is given as

$$\lim_{r_0\to r_s} m_{1,1}(t_0,u_0;t,U_1,U_2) = 0, \qquad \text{if } (\Omega_0\cdot n_s) > 0.$$

In possession of $m_{1,1}(t_0,u_0;t,U_1,U_2)$, the covariance function (8.53) can be written in the following form:

$$\mathrm{cov}(t_0,u_0;t,U_1,U_2) = m_{1,1}(t_0,u_0;t,U_1,U_2) - m_1(t_0,u_0;t,U_1)\,m_1(t_0,u_0;t,U_2), \qquad (8.55)$$
in which $U_1$ and $U_2$ are disjoint subspaces of the phase space $U_R$. If the integrals

$$m_{1,1}(t_0,u_0;t,U_1,U_2) = \int_{U_1}\int_{U_2} n_{1,1}(t_0,u_0;t,u_1,u_2)\,du_1\,du_2, \qquad (8.56)$$

as well as

$$m_1(t_0,u_0;t,U_1) = \int_{U_1} n_1(t_0,u_0;t,u_1)\,du_1 \qquad (8.57)$$

and

$$m_1(t_0,u_0;t,U_2) = \int_{U_2} n_1(t_0,u_0;t,u_2)\,du_2 \qquad (8.58)$$

exist, one can write down the equation determining the covariance function between the densities in the phase space $U_R$ as follows:

$$\frac{\partial n_{1,1}(t_0,u_0;t,u_1,u_2)}{\partial t_0} + \hat{T}n_{1,1}(t_0,u_0;t,u_1,u_2) + Q_f(t_0,r_0,v_0)\sum_{j=0}^{\ell}\bar{\nu}_j r_j^{(1,1)}(t_0,r_0;t,u_1,u_2) + Q_f(t_0,r_0,v_0)\sum_{j,j'}\chi_{j,j'}\,r_j^{(1)}(t_0,r_0;t,u_1)\,r_{j'}^{(1)}(t_0,r_0;t,u_2) = 0. \qquad (8.59)$$

The initial and boundary conditions arise trivially from the conditions corresponding to (8.54). For the sake of an easier overview, we write

$$r_0^{(1,1)}(t_0,r_0;t,u_1,u_2) = \int_O w_f^{(0)}(v')\,n_{1,1}(t_0,r_0,v';t,u_1,u_2)\,d^3v' \qquad (8.60)$$

and

$$r_j^{(1,1)}(t_0,r_0;t,u_1,u_2) = \lambda_j\int_{t_0}^{t} e^{-\lambda_j(t'-t_0)}\int_O w_f^{(j)}(v')\,n_{1,1}(t',r_0,v';t,u_1,u_2)\,d^3v'\,dt', \qquad j=1,2,\ldots,\ell. \qquad (8.61)$$

Moreover, we note that $r_0^{(1)}(t_0,r_0;t,u)$ is formally identical with (8.46), while $r_j^{(1)}(t_0,r_0;t,u)$, $j=1,2,\ldots,\ell$ with the formula (8.47). In possession of $n_{1,1}(t_0,u_0;t,u_1,u_2)$, the formula (8.53) can be given in the following form:

$$\mathrm{cov}(t_0,u_0;t,U_1,U_2) = \int_{U_1}\int_{U_2} c(t_0,u_0;t,u_1,u_2)\,du_1\,du_2 = \int_{U_1}\int_{U_2} n_{1,1}(t_0,u_0;t,u_1,u_2)\,du_1\,du_2 - \int_{U_1}\int_{U_2} n_1(t_0,u_0;t,u_1)\,n_1(t_0,u_0;t,u_2)\,du_1\,du_2, \qquad (8.62)$$
in which $c(t_0,u_0;t,u_1,u_2)$ is the covariance function of the neutron density in the phase space for any phase point pair $u_1\ne u_2$. The Pál–Bell equation can be extended to the case with an extraneous source of neutrons, as well as to include the process of neutron detection. With such extensions, described in the publications [28, 44–46], it is possible to derive the space–energy dependent versions of the reactivity measurement methods. In the next chapter, the classical fluctuation-based reactivity measurement methods will be discussed, but the space–energy dependence will not be treated.
8.4 Pál–Bell Equation in the Diffusion Approximation

As the previous section indicated, solution of the Pál–Bell equation in the full space-energy-angle dependent case is rather complicated. For instance, the equation for the expectations is identical with the adjoint equation of ordinary deterministic transport theory (equations (8.45)–(8.47)). Already the solution of this integro-differential equation is exceedingly difficult, except for the simplest geometries and simple or no energy dependence. The difficulties of calculating the variance lie at an even higher level, in view of the fact that the corresponding equation contains the first moment solution in a non-linear (quadratic) form as an inhomogeneous term. Due to the difficulties of solving the transport equations for realistic cases, in deterministic problems recourse is often made to an approximation of the transport equation, the diffusion equation. For the same reason, it appears motivated to investigate the possibility of deriving the diffusion approximation of the Pál–Bell equations. In deterministic neutron transport, there exists a standard derivation of the diffusion equation in the form of an expansion of the angular flux of the transport equation into spherical harmonics, truncating the expansion after the second term. It is easy to see that by such a procedure the possibility of setting up a probability balance equation is lost, and the resulting equations can only be used for the calculation of first-order moments. In order to derive the diffusion approximation of the Pál–Bell equations, a more profound derivation of the diffusion equation from kinetic theory is necessary.
The main task in the first step is to replace the transparent description of particle streaming used in transport theory, represented by the concept of trajectories consisting of straight sections of random lengths, whose last section terminates at the point where the streaming neutron is multiplied or absorbed. Instead, we need to determine the diffusion kernel, which describes the probability density that a neutron existing at one point at a given time will appear, through a diffusive movement, at another point after a given time period. Having obtained the diffusion kernel, one can set up the probability balance equation with the known methods, by also accounting for the neutron reactions.
8.4.1 Derivation of the equation

We shall in the following disregard the energy dependence of the transport process, i.e. we will use one-group or one-speed theory. If the absolute value of the velocity of the neutron does not change, whereas its direction changes isotropically, and moreover if it can be assumed that the intensity of the scattering reaction is large enough that the mean free path becomes sufficiently small, then the motion of the neutron can be well described by the one-group (i.e. energy-independent) diffusion approximation. The one-group diffusion approximation describes, in an abstract sense, a neutron motion in a three-dimensional phase space to which one cannot associate a mean free path; rather, each coordinate of the neutron is a random process of the time parameter $t$ which is nowhere differentiable. Further, the trajectory can experience infinitely many changes of direction in any arbitrary interval, but at the same time it is continuous in the sense formulated below. Let the random vector function

$$\rho(t) = \begin{pmatrix}\rho^{(1)}(t)\\ \rho^{(2)}(t)\\ \rho^{(3)}(t)\end{pmatrix}$$

determining the position of the neutron at the instant $t$ be a Markovian process. Define the distribution function

$$P\{\rho(t)\in V_r\,|\,\rho(t_0)=r_0\} = B(t,V_r|t_0,r_0),$$

in which $V_r$ is a subdomain of the volume $V_R$ of the multiplying medium containing the point $r$. Suppose that $B(t,V_r|t_0,r_0)$ is absolutely continuous, i.e.

$$B(t,V_r|t_0,r_0) = \int_{V_r} b(t,r|t_0,r_0)\,d^3r, \qquad (8.63)$$
where $b(t,r|t_0,r_0)$ is the density function of the process $\rho(t)$, or in other words the diffusion kernel mentioned above. This quantity will play an important role in setting up the diffusion master equation. If the limit relation

$$\lim_{t\downarrow t_0}\frac{1}{t-t_0}\int_{|r-r_0|>\epsilon} B(t,dV_r|t_0,r_0) = 0 \qquad (8.64)$$

is fulfilled for every $\epsilon>0$, then the process $\rho(t)$ is called continuous. If also the conditions

$$\lim_{t\downarrow t_0}\frac{1}{t-t_0}\int_{|r-r_0|\le\epsilon} (r-r_0)\,B(t,dV_r|t_0,r_0) = a(t_0,r_0) \qquad (8.65)$$

and

$$\lim_{t\downarrow t_0}\frac{1}{t-t_0}\int_{|r-r_0|\le\epsilon} (r-r_0)\cdot(r-r_0)^{(tr)}\,B(t,dV_r|t_0,r_0) = \mathbf{D}(t_0,r_0) \qquad (8.66)$$

are satisfied, then the random vector function $\rho(t)$ is a diffusion process. Here $r^{(tr)}$ is the transpose of the vector $r$, i.e. a row vector. The vector $a(t_0,r_0)$ defined by (8.65) is usually called the drift vector, while the matrix

$$\mathbf{D}(t_0,r_0) = [D_{i,j}(t_0,r_0)], \qquad i,j=1,2,3,$$

determined by (8.66) is often called the diffusion matrix. This matrix is symmetric and non-negative definite. It is obvious that, by referring to infinitesimal times and volumes, the quantities $a(t_0,r_0)$ and $\mathbf{D}(t_0,r_0)$ are transition probabilities that can be given from first principles from the material properties of the system, without knowing the distribution function of the neutrons. By introducing the differential operator

$$\hat{D} = \sum_{i=1}^{3} a_i(t_0,r_0)\frac{\partial}{\partial x_0^{(i)}} + \frac{1}{2}\sum_{i=1}^{3}\sum_{j=1}^{3} D_{i,j}(t_0,r_0)\frac{\partial^2}{\partial x_0^{(i)}\partial x_0^{(j)}}, \qquad (8.67)$$
one can prove that the density function $b(t,r|t_0,r_0)$ satisfies the second-order partial differential equation

$$\left(\frac{\partial}{\partial t_0} + \hat{D}\right) b(t,r|t_0,r_0) = 0 \qquad (8.68)$$

under the initial and boundary conditions

$$\lim_{t\downarrow t_0} b(t,r|t_0,r_0) = \delta(r-r_0)$$

and

$$\lim_{r_0\to r_s} b(t,r|t_0,r_0) = 0,$$

where $r_s$ is a point of the convex surface $S_R$ surrounding the multiplying medium of volume $V_R$. It is actually an envelope of the true system and corresponds to the so-called extrapolated boundary, into which the real system is embedded. It is seen from this that a basic property of diffusion processes is that, under certain regularity conditions, the density function $b(t,r|t_0,r_0)$ is determined by the drift vector and the diffusion matrix alone. This is all the more surprising since, in general, the first two moments do not determine a probability density function; the diffusion processes are exceptions in this sense. If the drift vector is zero and the diffusion matrix has only identical constant diagonal elements, independent of $t_0$ and $r_0$, then one obtains the well-known diffusion equation

$$\left(\frac{\partial}{\partial t_0} + D\Delta_{r_0}\right) b(t,r|t_0,r_0) = 0 \qquad (8.69)$$

under the initial and boundary conditions above. In this case one has

$$b(t,r|t_0,r_0) \to b(t-t_0,r_0,r),$$
hence, after re-denoting $t-t_0\to t$, one can write that

$$\frac{\partial b(t,r_0,r)}{\partial t} = D\Delta_{r_0} b(t,r_0,r), \qquad (8.70)$$
with the initial condition $b(0,r_0,r)=\delta(r-r_0)$ and the boundary condition $b(t,r_s,r)=0$. In the latter, $r_s$ is an arbitrary point of the convex surface $S_R$ of the extrapolated boundary of the multiplying medium. Since in one-group diffusion theory one does not need to consider the scattering events, we return to the convention that there is only one type of reaction, with one corresponding intensity, which leads to branching, including the case of zero newly generated neutrons. Let hence the multiplying medium be characterised by the constant reaction intensity $Q$, which is independent of the time parameter $t$ and the position vector $r$. Thus $Q\Delta t + o(\Delta t)$ is the probability that a neutron induces a reaction in the interval $[t, t+\Delta t]$, which can result in generating $\nu_0\ge 0$ new neutrons and $\nu_j\ge 0$ delayed neutron precursors of type $B_j$, $j=1,2,\ldots,\ell$. For a constant reaction intensity, the multiplying process is homogeneous in time, which one can take into account directly from the beginning. Similarly to the full Pál–Bell equation, the basic quantity sought will be the probability $p[t,r_0;n(V_r)]$ that at time $t>0$ there will be $n$ neutrons in the domain $V_r$ containing the point $r$, provided that there was only one neutron of type $A$ at the point $r_0$ of the multiplying medium at time $t=0$. For setting up the master equation, we note that this probability is the sum of the probabilities of two mutually exclusive events. The first event is that no reaction occurs in the time interval $[0,t]$ and there will be $n$ neutrons in the volume $V_r$ at time $t>0$. The second event is that a reaction occurs in one of the subintervals $[t', t'+dt']$ of the interval $[0,t]$, and as the result of the processes taking place afterwards, there will be exactly $n$ neutrons in the volume $V_r$ at time $t\ge t'$. Unlike in the case of the full Pál–Bell equation, for diffusing neutrons we also need to account for the fact that the non-occurrence of a reaction can be due to the neutron leaving the multiplying medium before $t>0$ without inducing any reaction; the other possibility is of course that the neutron stays within the system but does not cause a reaction. Introducing the generating function
Unlike in the case of the full Pál–Bell equation, in the case of diffusing neutrons, we need also to account for the fact that non-occurrence of a reaction can also be due to the fact that until t > 0 the neutron may leave the multiplying medium without inducing any reaction; the other possibility is of course that the neutron stays within the system but does not cause a reaction. Introducing the generating function g(t, r0 ; z) =
∞
p[t, r0 ; n(Vr )]zn
(8.71)
n=0
with the usual reasoning one obtains g(t, r0 ; z) = 1 + (z − 1)e −Qt
VR
b(t, r0 , r )(r , Vr )d 3r + Q
t
e −Qt
0
b(t , r0 , r ){q[s(t − t , r ; z)] − 1}d 3r dt , VR
(8.72)
where the components of s are now as follows: s0 (t , r ; z) = g(t , r ; z), sj (t , r ; z) = λj
t
(8.73)
e −λj (t −t ) g(t , r ; z)dt + e −λj t ,
j = 1, 2, . . . , .
(8.74)
0
By differentiating this equation with respect to $t$, one arrives at

$$\frac{\partial g(t,r_0;z)}{\partial t} = D\Delta_{r_0} g(t,r_0;z) + Q\{q[s(t,r_0;z)] - g(t,r_0;z)\}. \qquad (8.75)$$

The initial condition reads as

$$\lim_{t\downarrow 0} g(t,r_0;z) = 1 + (z-1)\Delta(r_0,V_r), \qquad (8.76)$$
and the boundary condition as

$$\lim_{r_0\to r_s} g(t,r_0;z) = 1, \qquad (8.77)$$

where $r_s$ is an arbitrary point of the extrapolated boundary $S_R$. Also, one may often need the two-dimensional probability

$$P\{n(t,V_{r_1})=n_1(V_{r_1}),\,n(t,V_{r_2})=n_2(V_{r_2})\,|\,I(0,r_0)\} = p^{(2)}[t,r_0;n_1(V_{r_1}),n_2(V_{r_2})]. \qquad (8.78)$$

If the domains $V_{r_1}$ and $V_{r_2}$ do not have any common points, then the equation determining the probability generating function

$$g^{(2)}(t,r_0;z_1,z_2) = \sum_{n_1=0}^{\infty}\sum_{n_2=0}^{\infty} p^{(2)}[t,r_0;n_1(V_{r_1}),n_2(V_{r_2})]\,z_1^{n_1} z_2^{n_2} \qquad (8.79)$$

for this case reads as

$$\frac{\partial g^{(2)}(t,r_0;z_1,z_2)}{\partial t} = D\Delta_{r_0} g^{(2)}(t,r_0;z_1,z_2) + Q\{q[s^{(2)}(t,r_0;z_1,z_2)] - g^{(2)}(t,r_0;z_1,z_2)\}, \qquad (8.80)$$

with the initial condition

$$\lim_{t\downarrow 0} g^{(2)}(t,r_0;z_1,z_2) = 1 + (z_1-1)\Delta(r_0,V_{r_1}) + (z_2-1)\Delta(r_0,V_{r_2}) \qquad (8.81)$$

and the boundary condition

$$\lim_{r_0\to r_s} g^{(2)}(t,r_0;z_1,z_2) = 1. \qquad (8.82)$$

The components of $s^{(2)}$ are formally similar to those of $s$ given in (8.73) and (8.74); merely $g$ has to be replaced everywhere by $g^{(2)}$.
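To illustrate the structure of the backward equation (8.75), the following sketch integrates it numerically for a hypothetical one-dimensional, one-group system without delayed neutrons, in which case $s$ reduces to $g$ itself. All parameter values ($Q$, $D$, the offspring probabilities $f_0$, $f_1$, $f_2$ and the system length) are illustrative assumptions, not data from the text. Setting $z=0$ and taking $V_r$ as the whole system makes $g(t,x_0;0)$ the probability that the process started at $x_0$ has no neutrons left in the system at time $t$:

```python
import numpy as np

# Hypothetical one-group parameters (illustrative assumptions, not from the text)
Q, D, L = 1.0, 0.01, 1.0            # reaction intensity, diffusion coefficient, length
f0, f1, f2 = 0.4, 0.2, 0.4          # offspring probabilities: capture, 1 neutron, 2 neutrons

def q(z):                           # generating function of the offspring number
    return f0 + f1 * z + f2 * z ** 2

nx, dt = 101, 1.0e-4
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]

# Backward equation (8.75) without delayed neutrons (s -> g):
#   dg/dt = D * d2g/dx0^2 + Q * (q(g) - g)
# At z = 0 with V_r the whole system: g = 0 inside at t = 0, g = 1 on the boundary (8.77).
g = np.zeros(nx)
g[0] = g[-1] = 1.0
snapshots = {}
for step in range(1, 50001):        # explicit Euler; D*dt/dx^2 = 0.01 is well within stability
    g[1:-1] += dt * (D * (g[2:] - 2.0 * g[1:-1] + g[:-2]) / dx ** 2
                     + Q * (q(g[1:-1]) - g[1:-1]))
    if step in (25000, 50000):
        snapshots[step] = float(g[nx // 2])

p_mid_early, p_mid_late = snapshots[25000], snapshots[50000]
```

Since the extinction probability is non-decreasing in time and bounded by one, the value at the midpoint must grow monotonically between the two sampled times.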
8.4.2 Expectation, variance and correlation

The expectation of the number of neutrons generated in the subvolume $V_r < V_R$ of the multiplying system by a neutron starting to diffuse from the point $r_0$ at time $t=0$ and by its progeny, i.e.

$$E\{n(t,V_r)\,|\,I(0,r_0)\} = \left.\frac{\partial g(t,r_0;z)}{\partial z}\right|_{z=1} = m_1(t,r_0;V_r), \qquad (8.83)$$

at any time point $t>0$ can be determined from the equation

$$\frac{\partial m_1(t,r_0;V_r)}{\partial t} = D\Delta_{r_0} m_1(t,r_0;V_r) + Q(\bar{\nu}_0-1)\,m_1(t,r_0;V_r) + Q\sum_{j=1}^{\ell}\bar{\nu}_j R_j^{(1)}(t,r_0;V_r), \qquad (8.84)$$

where now

$$R_j^{(1)}(t,r_0;V_r) = \lambda_j\int_0^t e^{-\lambda_j(t-t')}\,m_1(t',r_0;V_r)\,dt', \qquad j=1,2,\ldots,\ell, \qquad (8.85)$$

with the associated initial condition

$$\lim_{t\downarrow 0} m_1(t,r_0;V_r) = \Delta(r_0,V_r) \qquad (8.86)$$

and the boundary condition

$$\lim_{r_0\to r_s} m_1(t,r_0;V_r) = 0. \qquad (8.87)$$
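The delayed-neutron terms of the type (8.85) are simple exponential convolutions of $m_1$ with the precursor decay. For an exponentially evolving $m_1(t')=e^{at'}$ they have a closed form, which the following sketch checks against midpoint-rule quadrature (the decay constant, the test growth rate and the quadrature step are assumed values for the demonstration):

```python
import math

lam, a, t = 0.0774, -0.05, 10.0     # assumed lambda_j and test growth rate

# R_j(t) = lam * int_0^t exp(-lam*(t-t')) * exp(a*t') dt'  with m1(t') = exp(a*t')
# Closed form: lam * (exp(a*t) - exp(-lam*t)) / (a + lam)
closed = lam * (math.exp(a * t) - math.exp(-lam * t)) / (a + lam)

# Composite midpoint rule for the same convolution integral
n = 100000
h = t / n
quad = lam * h * sum(
    math.exp(-lam * (t - (i + 0.5) * h)) * math.exp(a * (i + 0.5) * h)
    for i in range(n)
)
```

The quadrature reproduces the closed form to high accuracy, confirming the kernel's normalisation (the factor $\lambda_j$ in front of the integral).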
If the integral

$$m_1(t,r_0;V_r) = \int_{V_r} n_1(t,r_0;r)\,d^3r \qquad (8.88)$$

exists, then for the neutron density $n_1(t,r_0;r)$ the equation

$$\frac{\partial n_1(t,r_0;r)}{\partial t} = D\Delta_{r_0} n_1(t,r_0;r) + Q(\bar{\nu}_0-1)\,n_1(t,r_0;r) + Q\sum_{j=1}^{\ell}\bar{\nu}_j r_j^{(1)}(t,r_0;r) \qquad (8.89)$$

can be written, where

$$r_j^{(1)}(t,r_0;r) = \lambda_j\int_0^t e^{-\lambda_j(t-t')}\,n_1(t',r_0;r)\,dt', \qquad j=1,2,\ldots,\ell, \qquad (8.90)$$

with the initial condition

$$\lim_{t\downarrow 0} n_1(t,r_0;r) = \delta(r-r_0) \qquad (8.91)$$

and the boundary condition

$$\lim_{r_0\to r_s} n_1(t,r_0;r) = 0. \qquad (8.92)$$
We now turn to the equation for the second factorial moment, which is needed for the calculation of the variance of $n(t,V_r)$ at an arbitrary time $t>0$, accepting the assumption concerning the independence of the random variables $\nu_j$, $j=0,1,\ldots,\ell$. From

$$\left.\frac{\partial^2 g(t,r_0;z)}{\partial z^2}\right|_{z=1} = m_2(t,r_0;V_r),$$

it follows that

$$\frac{\partial m_2(t,r_0;V_r)}{\partial t} = D\Delta_{r_0} m_2(t,r_0;V_r) + Q(\bar{\nu}_0-1)\,m_2(t,r_0;V_r) + Q\sum_{j=1}^{\ell}\bar{\nu}_j R_j^{(2)}(t,r_0;V_r) + Q\sum_{j,j'}\chi_{j,j'}\,R_j^{(1)}(t,r_0;V_r)\,R_{j'}^{(1)}(t,r_0;V_r), \qquad (8.93)$$

where

$$R_j^{(2)}(t,r_0;V_r) = \lambda_j\int_0^t e^{-\lambda_j(t-t')}\,m_2(t',r_0;V_r)\,dt', \qquad j=1,2,\ldots,\ell,$$

with the initial condition

$$\lim_{t\downarrow 0} m_2(t,r_0;V_r) = 0 \qquad (8.94)$$

and the boundary condition

$$\lim_{r_0\to r_s} m_2(t,r_0;V_r) = 0. \qquad (8.95)$$
With these, the variance of $n(t,V_r)$ at time $t$ is determined by the formula

$$D^2\{n(t,V_r)\,|\,I(0,r_0)\} = m_2(t,r_0;V_r) + m_1(t,r_0;V_r)[1 - m_1(t,r_0;V_r)].$$

The spatial correlation between the numbers of neutrons in two non-intersecting subdomains of the multiplying system at a time $t>0$ is given by the covariance function

$$\mathrm{cov}(t,r_0;V_{r_1},V_{r_2}) = E\{[n(t,V_{r_1})-m_1(t,r_0;V_{r_1})][n(t,V_{r_2})-m_1(t,r_0;V_{r_2})]\,|\,I(0,r_0)\}, \qquad (8.96)$$
for whose calculation the mixed second moment

$$E\{n(t,V_{r_1})\,n(t,V_{r_2})\,|\,I(0,r_0)\} = \left.\frac{\partial^2 g^{(2)}(t,r_0;z_1,z_2)}{\partial z_1\,\partial z_2}\right|_{z_1=z_2=1} = m_{1,1}(t,r_0;V_{r_1},V_{r_2}) \qquad (8.97)$$

needs to be determined. This latter is determined from

$$\frac{\partial m_{1,1}(t,r_0;V_{r_1},V_{r_2})}{\partial t} = D\Delta_{r_0} m_{1,1}(t,r_0;V_{r_1},V_{r_2}) + Q(\bar{\nu}_0-1)\,m_{1,1}(t,r_0;V_{r_1},V_{r_2}) + Q\sum_{j=1}^{\ell}\bar{\nu}_j R_j^{(1,1)}(t,r_0;V_{r_1},V_{r_2}) + Q\sum_{j,j'}\chi_{j,j'}\,R_j^{(1)}(t,r_0;V_{r_1})\,R_{j'}^{(1)}(t,r_0;V_{r_2}), \qquad (8.98)$$

with the corresponding initial condition

$$\lim_{t\downarrow 0} m_{1,1}(t,r_0;V_{r_1},V_{r_2}) = 0 \qquad (8.99)$$

and the boundary condition

$$\lim_{r_0\to r_s} m_{1,1}(t,r_0;V_{r_1},V_{r_2}) = 0. \qquad (8.100)$$

The expressions $R_j^{(1)}(t,r_0;V_{r_i})$, $i=1,2$, $j=1,2,\ldots,\ell$ are identical with the formulae defined by (8.85), while

$$R_j^{(1,1)}(t,r_0;V_{r_1},V_{r_2}) = \lambda_j\int_0^t e^{-\lambda_j(t-t')}\,m_{1,1}(t',r_0;V_{r_1},V_{r_2})\,dt', \qquad j=1,2,\ldots,\ell.$$

Under the usual continuity conditions, from (8.98) it follows that

$$\frac{\partial n_{1,1}(t,r_0;r_1,r_2)}{\partial t} = D\Delta_{r_0} n_{1,1}(t,r_0;r_1,r_2) + Q(\bar{\nu}_0-1)\,n_{1,1}(t,r_0;r_1,r_2) + Q\sum_{j=1}^{\ell}\bar{\nu}_j r_j^{(1,1)}(t,r_0;r_1,r_2) + Q\sum_{j,j'}\chi_{j,j'}\,r_j^{(1)}(t,r_0;r_1)\,r_{j'}^{(1)}(t,r_0;r_2), \qquad (8.101)$$

with the associated initial condition

$$\lim_{t\downarrow 0} n_{1,1}(t,r_0;r_1,r_2) = 0, \qquad \text{if } r_1\ne r_2, \qquad (8.102)$$

and the boundary condition

$$\lim_{r_0\to r_s} n_{1,1}(t,r_0;r_1,r_2) = 0. \qquad (8.103)$$

The definition of the functions $r_j^{(1)}(t,r_0;r)$, $j=1,2,\ldots,\ell$ can be found in (8.90), whereas the functions $r_j^{(1,1)}(t,r_0;r_1,r_2)$, $j=1,2,\ldots,\ell$ are defined by

$$r_j^{(1,1)}(t,r_0;r_1,r_2) = \lambda_j\int_0^t e^{-\lambda_j(t-t')}\,n_{1,1}(t',r_0;r_1,r_2)\,dt', \qquad j=1,2,\ldots,\ell.$$
Again, from the continuity conditions, the covariance function (8.96) can be given in the following form:

$$\mathrm{cov}(t,r_0;V_{r_1},V_{r_2}) = \int_{V_{r_1}}\int_{V_{r_2}} c(t,r_0;r_1,r_2)\,d^3r_1\,d^3r_2, \qquad (8.104)$$

where $c(t,r_0;r_1,r_2)$ is the spatial covariance function of the neutron densities at the points determined by the vectors $r_1\ne r_2$.
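The quantities discussed above can also be estimated by direct simulation. The following minimal Monte Carlo sketch models branching diffusion on a one-dimensional interval under assumed parameters (binary fission or capture, no delayed neutrons, diffusive motion approximated by Gaussian steps of variance $2D\Delta t$, absorption beyond the extrapolated boundary; all numbers are illustrative) and estimates the expectations for the two halves of the system together with the covariance-type statistic $\mathrm{cov} = E\{n_1 n_2\} - E\{n_1\}E\{n_2\}$:

```python
import random, math

random.seed(2)

# Assumed illustrative parameters (not from the text)
Q, D, L, T, DT = 1.0, 0.01, 1.0, 2.0, 0.01
P2 = 0.45                       # probability that a reaction yields 2 neutrons (else capture)
NHIST = 4000

def simulate(x0):
    """One history of branching diffusion on [0, L]; counts neutrons per half at time T."""
    neutrons = [(x0, 0.0)]      # stack of (position, current time)
    n_left = n_right = 0
    while neutrons:
        x, t = neutrons.pop()
        alive = True
        while t < T:
            t += DT
            x += random.gauss(0.0, math.sqrt(2.0 * D * DT))   # diffusive step
            if x <= 0.0 or x >= L:                            # leaks out at the boundary
                alive = False
                break
            if random.random() < Q * DT:                      # a reaction occurs
                if random.random() < P2:                      # fission into 2 neutrons
                    neutrons.append((x, t))                   # one offspring queued,
                else:                                         # the other continues below
                    alive = False                             # capture: chain branch ends
                    break
        if alive and 0.0 < x < L:
            if x < 0.5 * L:
                n_left += 1
            else:
                n_right += 1
    return n_left, n_right

counts = [simulate(0.5) for _ in range(NHIST)]
m1_left = sum(c[0] for c in counts) / NHIST
m1_right = sum(c[1] for c in counts) / NHIST
m11 = sum(c[0] * c[1] for c in counts) / NHIST
cov = m11 - m1_left * m1_right          # mixed moment minus product of means, cf. (8.55)
```

Starting from the centre, the two half-system expectations should agree within statistical error, which serves as a basic consistency check of the sketch.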
8.4.3 Analysis of a one-dimensional system

In the following, the effects caused by the 'spatial confinement' of the branching process will be shown in a simple one-dimensional system. We suppose that there are no delayed neutrons and that the multiplying medium occupies the interval $[0,\ell]$, $\ell>0$, such that $x=0$ and $x=\ell$ represent the extrapolated boundaries of the system. The material parameters are the quantities $Q$, $D$ and $f_k$, $k\in\mathcal{Z}$, defined earlier.
Expectation and variance of the neutron number

Let $n(t,\ell|x_0)$ denote the number of neutrons in the system of length $\ell$ at time $t\ge 0$, which are generated by a neutron and its progeny, starting from the point $0\le x_0\le\ell$ of the system at time $t=0$. According to convention, this number characterises the importance of the point $x_0$ of the system at time $t\ge 0$. It is obvious that the expectation and the variance give fundamental information about the behaviour of $n(t,\ell|x_0)$. First, determine the expectation

$$E\{n(t,\ell|x_0)\} = m_1(t,x_0;\ell) = \int_0^{\ell} n_1(t,x_0;x)\,dx. \qquad (8.105)$$
In this formula, $n_1(t,x_0;x)$ is the neutron density defined by (8.88), which satisfies the equation

$$\frac{\partial n_1(t,x_0;x)}{\partial t} = D\frac{\partial^2 n_1(t,x_0;x)}{\partial x_0^2} + Q(q_1-1)\,n_1(t,x_0;x), \qquad (8.106)$$

where $q_1=\bar{\nu}_0$, under the initial condition $n_1(0,x_0;x)=\delta(x-x_0)$ and the boundary condition $n_1(t,0;x)=n_1(t,\ell;x)=0$. By introducing the notations

$$\alpha = Q(q_1-1) \qquad \text{and} \qquad n_1(t,x_0;x) = e^{\alpha t}\,a(t,x_0,x), \qquad (8.107)$$

one obtains from (8.106) that

$$\frac{\partial a(t,x_0;x)}{\partial t} = D\frac{\partial^2 a(t,x_0;x)}{\partial x_0^2}.$$

This equation has to be solved under the initial and boundary conditions

$$a(0,x_0;x) = \delta(x-x_0) \qquad \text{and} \qquad a(t,0;x) = a(t,\ell;x) = 0,$$

respectively. We obtain that

$$a(t,x_0;x) = \sum_{k=1}^{\infty} e^{-D\lambda_k t}\,\varphi_k(x_0)\,\varphi_k(x), \qquad (8.108)$$
[Figure 8.1: Dependence of the expectation m1(t, x0; ℓ) of the neutron number on the position coordinate x0 of the neutron starting the branching process in a subcritical system (q1 = 2, ℓ = 1, ℓcrt = 1.02), at the time points t = 2, 4, 8.]
where $\varphi_k(x)$ is the eigenfunction of the eigenvalue problem

$$\frac{d^2\varphi}{dx^2} + \lambda\varphi = 0, \qquad \varphi(0)=\varphi(\ell)=0,$$

and $0<\lambda_1<\lambda_2<\cdots<\lambda_k<\cdots$ are the eigenvalues belonging to the corresponding eigenfunctions. The solution is

$$n_1(t,x_0;x) = \frac{2}{\ell}\sum_{k=1}^{\infty} e^{\alpha_k t}\,\sin\frac{\pi k}{\ell}x_0\,\sin\frac{\pi k}{\ell}x, \qquad (8.109)$$

where

$$\alpha_k = \alpha - D\lambda_k \qquad \text{and} \qquad \lambda_k = \frac{\pi^2 k^2}{\ell^2}. \qquad (8.110)$$

The expectation of the number of neutrons generated by the single neutron starting from $x_0$ at time $t=0$ can, for $t>0$, be written in the following form based on (8.105):

$$m_1(t,x_0;\ell) = 2\sum_{k=1}^{\infty} e^{\alpha_k t}\,\frac{1-\cos\pi k}{\pi k}\,\sin\frac{\pi k}{\ell}x_0 = \sum_{j=0}^{\infty}\frac{4}{\pi(2j+1)}\,e^{\alpha_{2j+1} t}\,\sin\frac{\pi(2j+1)}{\ell}x_0. \qquad (8.111)$$
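The series (8.111) converges rapidly for $t>0$ and is straightforward to evaluate. The following sketch (the numerical parameter values are assumptions chosen to make the system slightly subcritical, with time counted in units of $Q^{-1}$) evaluates $m_1(t,x_0;\ell)$ and checks two properties: at $t\downarrow 0$ the series reproduces the initial condition $m_1=1$ inside the interval, and in a subcritical system the expectation eventually decays with time:

```python
import math

# Illustrative parameters (assumed): time in units of 1/Q, length ell = 1
Q, q1, ell = 1.0, 2.0, 1.0
ell_crt = 1.02 * ell                            # slightly subcritical: ell < ell_crt
# Diffusion coefficient back-computed from ell_crt = pi * sqrt(D / (Q*(q1-1)))
D = Q * (q1 - 1.0) * (ell_crt / math.pi) ** 2
alpha = Q * (q1 - 1.0)

def m1(t, x0, terms=2000):
    """Expectation (8.111): sum over the odd harmonics 2j+1."""
    s = 0.0
    for j in range(terms):
        k = 2 * j + 1
        alpha_k = alpha - D * (math.pi * k / ell) ** 2
        s += 4.0 / (math.pi * k) * math.exp(alpha_k * t) * math.sin(math.pi * k * x0 / ell)
    return s

early = m1(1e-6, 0.5)    # close to 1: the series reproduces the initial condition
mid = m1(2.0, 0.5)       # transient excess above 1 ("virtual supercriticality")
late = m1(8.0, 0.5)      # subcritical decay sets in
```

The transient value above one at intermediate times reproduces numerically the maximum discussed below in connection with Fig. 8.2.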
In view of the fact that $0<\lambda_1<\lambda_2<\cdots<\lambda_k<\cdots$ and hence $\alpha_1>\alpha_2>\cdots>\alpha_k>\cdots$, the system is subcritical if $\alpha_1=\alpha-D\lambda_1<0$, critical if $\alpha_1=\alpha-D\lambda_1=0$ and supercritical if $\alpha_1=\alpha-D\lambda_1>0$. From the equation $\alpha_1=0$, the expression for the critical size follows:

$$\ell_{crt} = \pi\sqrt{\frac{D}{Q(q_1-1)}}.$$

The state of the system can be characterised by its size, i.e. by the ratio of its length to the critical size: if $\ell/\ell_{crt}<1$ then the system is subcritical; if $\ell/\ell_{crt}=1$ then it is critical; and if $\ell/\ell_{crt}>1$ then it is supercritical. For displaying the quantitative results, we shall use $\ell=1$, and the time will be counted in units of $Q^{-1}$. Figure 8.1 illustrates the dependence of the expectation $m_1(t,x_0;\ell)$ of the neutron number on the position coordinate $x_0$ of the neutron starting the branching process in a subcritical multiplying system for three different values of the time parameter $t$. It is seen, as could be predicted, that with increasing time $t$, the expectation of the number of neutrons decreases at all points of the interval $[0,\ell=1]$. Figure 8.2 displays how the expectation of the number of progeny, generated by a neutron starting from the middle of the multiplying system, i.e. from $x_0=0.5$, depends on the time $t$ if the system is subcritical

[Figure 8.2: Dependence of the expectation m1(t, x0; ℓ = 1) of the neutron number on the time parameter t at the point x0 = 0.5, for q1 = 2 and ℓcrt = 1.02, 1 and 0.98.]
($\ell_{crt}>1$), critical ($\ell_{crt}=1$) and supercritical ($\ell_{crt}<1$), respectively. It is worth noticing that in a subcritical system the time dependence shows a maximum, which means that at the beginning of the process the system behaves as if it were supercritical. The reason for this is that initially the leakage at the ends does not decrease the number of neutrons sufficiently, hence a virtual supercritical state is formed, which subsequently switches over to the subcritical state when the leakage takes its stationary value. The calculation of the variance $D^2\{n(t,\ell|x_0)\}$ of the neutron number $n(t,\ell|x_0)$ goes as follows. We need the second factorial moment of $n(t,\ell|x_0)$, and to this end the differential equation that can be derived from (8.93),

$$\frac{\partial m_2(t,x_0;\ell)}{\partial t} = D\frac{\partial^2 m_2(t,x_0;\ell)}{\partial x_0^2} + \alpha\,m_2(t,x_0;\ell) + Q q_2\,[m_1(t,x_0;\ell)]^2, \qquad (8.112)$$

where $q_2=\overline{\nu_0(\nu_0-1)}$, has to be solved, taking into account the initial condition $m_2(0,x_0;\ell)=0$ and the boundary condition $m_2(t,0;\ell)=m_2(t,\ell;\ell)=0$. Again, seeking the solution of (8.112) in the form

$$m_2(t,x_0;\ell) = e^{\alpha t}\,a_2(t,x_0;\ell), \qquad (8.113)$$
one obtains

$$\frac{\partial a_2(t,x_0;\ell)}{\partial t} = D\frac{\partial^2 a_2(t,x_0;\ell)}{\partial x_0^2} + Q q_2\,e^{-\alpha t}\,[m_1(t,x_0;\ell)]^2, \qquad (8.114)$$

where

$$m_1(t,x_0;\ell) = \sum_{j=0}^{\infty}\frac{4}{\pi(2j+1)}\,e^{\alpha_{2j+1} t}\,\sin\frac{\pi(2j+1)}{\ell}x_0.$$
Define the Green's function equation

$$\frac{\partial G(t,x_0,x)}{\partial t} = D\frac{\partial^2 G(t,x_0,x)}{\partial x_0^2} + \delta(x_0-x), \qquad (8.115)$$

satisfying the limit conditions, and utilise its solution

$$G(t,x_0,x) = \frac{2}{\ell}\sum_{k=1}^{\infty} e^{-D\lambda_k t}\,\sin\frac{k\pi}{\ell}x_0\,\sin\frac{k\pi}{\ell}x, \qquad \text{in which}\quad \lambda_k = \left(\frac{k\pi}{\ell}\right)^2; \qquad (8.116)$$
[Figure 8.3: Dependence of the variance of the neutron number on the coordinate 0 ≤ x0 ≤ 1 of the neutron starting the process at time t = 0 in a subcritical system (q1 = 2, q2 = 4, ℓ = 1, ℓcrt = 1.02), at different time points: (a) t = 0.1, 0.2, 0.4; (b) t = 2, 4, 8.]
one can write that

$$a_2(t,x_0;\ell) = Q q_2\int_0^t\!\!\int_0^{\ell} G(t',x_0,x)\,[m_1(t-t',x;\ell)]^2\,dx\,dt'. \qquad (8.117)$$

This leads to

$$m_2(t,x_0;\ell) = q_2\left(\frac{2}{\pi}\right)^3\sum_{k_0=1}^{\infty} C_{k_0}(t)\,\sin\frac{k_0\pi}{\ell}x_0, \qquad (8.118)$$
where

$$C_{k_0}(t) = \sum_{k_1=0}^{\infty}\sum_{k_2=0}^{\infty}\frac{A(k_0,k_1,k_2;t)\,B(k_0,k_1,k_2)}{(2k_1+1)(2k_2+1)}.$$

Here

$$A(k_0,k_1,k_2;t) = Q\,\frac{\exp\{(\alpha_{2k_1+1}+\alpha_{2k_2+1})t\} - \exp\{\alpha_{2k_0+1}t\}}{\alpha_{2k_1+1}+\alpha_{2k_2+1}-\alpha_{2k_0+1}},$$

in which

$$\alpha_k = Q(q_1-1)\left[1 - \left(\frac{k\,\ell_{crt}}{\ell}\right)^2\right],$$

and $B(k_0,k_1,k_2)$ stands for

$$B(k_0,k_1,k_2) = I[2(k_1+k_2+1)-k_0] + I[2(k_2-k_1)+k_0] + I[2(k_1-k_2)+k_0] + I[2(k_1+k_2+1)+k_0],$$

where

$$I(k) = \frac{1-\cos k\pi}{k}.$$
The dependence of the variance

$$D^2\{n(t,\ell|x_0)\} = m_2(t,x_0;\ell) + m_1(t,x_0;\ell)[1 - m_1(t,x_0;\ell)] \qquad (8.119)$$

on the position $x_0$ where the multiplying process was started at time $t=0$ is shown in Fig. 8.3 for different time points. In Fig. 8.3a, the curves displaying the relations at the beginning of the process can be seen. It is worth noticing that the fluctuations of the number of neutrons are initially larger if the position of the neutron starting the process is peripheral. At later times, however, the variance of the neutron number of the process started from the central point of the system becomes the largest, as can be observed in Fig. 8.3b.
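The variance (8.119) can also be obtained without evaluating the triple series, by integrating equation (8.112) directly on a grid. The sketch below uses the parameter values of the figures (q1 = 2, q2 = 4, ℓ = 1, ℓcrt = 1.02); the explicit finite-difference scheme itself is our own assumption for the demonstration, not a method from the text. It computes $m_1$ from (8.111), drives (8.112) with the source $Qq_2 m_1^2$, and assembles the variance via (8.119):

```python
import math
import numpy as np

# Parameters as in the figures: subcritical system
Q, q1, q2, ell, ell_crt = 1.0, 2.0, 4.0, 1.0, 1.02
alpha = Q * (q1 - 1.0)
D = alpha * (ell_crt / math.pi) ** 2       # from the critical-size formula

nx, dt, T = 101, 2.0e-4, 2.0
x0 = np.linspace(0.0, ell, nx)
dx = x0[1] - x0[0]

# Precompute the sine table for the series (8.111)
terms = 200
k = 2 * np.arange(terms) + 1               # odd harmonics
ak = alpha - D * (math.pi * k / ell) ** 2
coef = 4.0 / (math.pi * k)
S = np.sin(math.pi * np.outer(k, x0) / ell)

def m1_profile(t):
    """Expectation (8.111) as a function of the starting point x0."""
    return (coef * np.exp(ak * t)) @ S

# Explicit time stepping of (8.112): dm2/dt = D m2'' + alpha m2 + Q q2 m1^2
m2 = np.zeros(nx)
for n in range(int(T / dt)):
    m1 = m1_profile(n * dt)
    m2[1:-1] += dt * (D * (m2[2:] - 2.0 * m2[1:-1] + m2[:-2]) / dx ** 2
                      + alpha * m2[1:-1] + Q * q2 * m1[1:-1] ** 2)

m1_T = m1_profile(T)
variance = m2 + m1_T * (1.0 - m1_T)        # formula (8.119)
```

Since the source term is non-negative and the scheme is within its stability limit, $m_2$ stays non-negative, and the assembled variance is non-negative everywhere, vanishing at the extrapolated boundaries.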
[Figure 8.4: (a) Time dependence of the variance of the neutron number in subcritical (ℓcrt = 1.02) and supercritical (ℓcrt = 0.98) systems (q1 = 2, q2 = 4, ℓ = 1, x0 = 0.5). (b) In a subcritical system the variance shows a maximum, then gradually converges to zero.]
The dependence of the variance of the number of neutrons on time, when the initial neutron started at the centre of the system, i.e. $x_0=0.5$, at the moment $t=0$, is shown in Fig. 8.4. The curves in Fig. 8.4a show the time dependence of the variance in subcritical and supercritical systems, respectively, whereas Fig. 8.4b illustrates how, after a sufficiently long time, the variance in a subcritical system converges to zero after having reached its maximum.
Covariance of neutron densities

The covariance function is defined as

$$c(t,x_0;x_1,x_2) = n_{1,1}(t,x_0;x_1,x_2) - n_1(t,x_0;x_1)\,n_1(t,x_0;x_2). \qquad (8.120)$$
To determine $n_{1,1}(t,x_0;x_1,x_2)$, we need (8.101) for the present case:

$$\frac{\partial n_{1,1}(t,x_0;x_1,x_2)}{\partial t} = D\frac{\partial^2 n_{1,1}(t,x_0;x_1,x_2)}{\partial x_0^2} + \alpha\,n_{1,1}(t,x_0;x_1,x_2) + Q q_2\,n_1(t,x_0;x_1)\,n_1(t,x_0;x_2), \qquad (8.121)$$

with the initial condition

$$\lim_{t\downarrow 0} n_{1,1}(t,x_0;x_1,x_2) = 0, \qquad \text{if } x_1\ne x_2, \qquad (8.122)$$

and the boundary condition

$$n_{1,1}(t,0;x_1,x_2) = n_{1,1}(t,\ell;x_1,x_2) = 0. \qquad (8.123)$$

Seeking the solution in the form

$$n_{1,1}(t,x_0;x_1,x_2) = e^{\alpha t}\,a_{1,1}(t,x_0;x_1,x_2), \qquad (8.124)$$

one has to solve the non-homogeneous partial differential equation

$$\frac{\partial a_{1,1}(t,x_0;x_1,x_2)}{\partial t} = D\frac{\partial^2 a_{1,1}(t,x_0;x_1,x_2)}{\partial x_0^2} + Q q_2\,\Phi(t,x_0;x_1,x_2), \qquad (8.125)$$

in which

$$\Phi(t,x_0;x_1,x_2) = e^{-\alpha t}\,n_1(t,x_0;x_1)\,n_1(t,x_0;x_2). \qquad (8.126)$$
(8.127)
Imre Pázsit & Lénárd Pál
with G(t, 0, x) = G(t, ℓ, x) = 0, then

n1,1(t, x0; x1, x2) = Q q2 e^{αt} ∫₀ᵗ ∫₀^ℓ G(t′, x0, x) φ(t − t′, x; x1, x2) dx dt′.    (8.128)
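The Green's function of (8.127) with absorbing boundaries at x = 0 and x = ℓ can be evaluated in two independent ways: by the sine-eigenfunction expansion that underlies the modal sums used in this section, and by the method of images applied to the free-space heat kernel. The two representations must agree; a short numerical cross-check (a sketch only; the values D = 1, ℓ = 1, t = 0.05 are illustrative):

```python
import math

def green_series(t, x0, x, D=1.0, ell=1.0, kmax=200):
    """Sine-eigenfunction expansion of the diffusion Green's function
    with absorbing boundaries at x = 0 and x = ell."""
    return (2.0 / ell) * sum(
        math.exp(-D * (k * math.pi / ell) ** 2 * t)
        * math.sin(k * math.pi * x0 / ell)
        * math.sin(k * math.pi * x / ell)
        for k in range(1, kmax + 1)
    )

def green_images(t, x0, x, D=1.0, ell=1.0, nmax=20):
    """The same Green's function built from the free-space heat kernel
    by the method of images (odd reflections at both boundaries)."""
    def kernel(u):
        return math.exp(-u * u / (4.0 * D * t)) / math.sqrt(4.0 * math.pi * D * t)
    g = 0.0
    for n in range(-nmax, nmax + 1):
        g += kernel(x - x0 - 2.0 * n * ell) - kernel(x + x0 - 2.0 * n * ell)
    return g

# The two representations of the same kernel must coincide.
print(green_series(0.05, 0.3, 0.6), green_images(0.05, 0.3, 0.6))
```

Both series converge very rapidly for the chosen time, so a modest number of terms suffices for agreement to near machine precision.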
After the usual steps, one arrives at

n1,1(t, x0; x1, x2) = Q q2 (2/ℓ)³ Σ_{k0=1}^∞ Σ_{k1=1}^∞ Σ_{k2=1}^∞ L(k0, k1, k2) sin(k0πx0/ℓ) sin(k1πx1/ℓ) sin(k2πx2/ℓ) ∫₀ᵗ e^{αk0 t′} e^{(αk1 + αk2)(t − t′)} dt′,    (8.129)

if αk ≠ 0, k = 1, 2, . . . , and where

L(k0, k1, k2) = B(k0, k1, k2)/(4π),

with B(k0, k1, k2) having been defined before. If the multiplying system is critical, i.e. if α1 = 0, then on the right-hand side of the mixed (joint) moment n1,1(t, x0; x1, x2) a term increasing linearly with t occurs, which is given by the following formula:

n1,1^(0)(t, x0; x1, x2) = (32 Q q2/(3π²)) t sin(πx0/ℓ) sin(πx1/ℓ) sin(πx2/ℓ).    (8.130)
It is practical to write the sum of the other terms, which are either constants or tend exponentially to zero as t → ∞, in the following grouping:

n1,1^(1)(t, x0; x1, x2) = Q q2 (2/ℓ)³ [ Σ_{k2=2}^∞ L(1, 1, k2) ((e^{αk2 t} − 1)/αk2) sin(πx0/ℓ) sin(πx1/ℓ) sin(k2πx2/ℓ)

+ Σ_{k1=2}^∞ L(1, k1, 1) ((e^{αk1 t} − 1)/αk1) sin(πx0/ℓ) sin(k1πx1/ℓ) sin(πx2/ℓ)

+ Σ_{k0=2}^∞ L(k0, 1, 1) ((e^{αk0 t} − 1)/αk0) sin(k0πx0/ℓ) sin(πx1/ℓ) sin(πx2/ℓ) ],    (8.131)
and

n1,1^(2)(t, x0; x1, x2) = Q q2 (2/ℓ)³ [ Σ_{k1=2}^∞ Σ_{k2=2}^∞ L(1, k1, k2) ((e^{(αk1 + αk2)t} − 1)/(αk1 + αk2)) sin(πx0/ℓ) sin(k1πx1/ℓ) sin(k2πx2/ℓ)

+ Σ_{k0=2}^∞ Σ_{k2=2}^∞ L(k0, 1, k2) ((e^{αk2 t} − e^{αk0 t})/(αk2 − αk0)) sin(k0πx0/ℓ) sin(πx1/ℓ) sin(k2πx2/ℓ)

+ Σ_{k0=2}^∞ Σ_{k1=2}^∞ L(k0, k1, 1) ((e^{αk1 t} − e^{αk0 t})/(αk1 − αk0)) sin(k0πx0/ℓ) sin(k1πx1/ℓ) sin(πx2/ℓ) ],    (8.132)
[Figure 8.5 plot: covariance curves for c = 1.02 and c = 0.98, with x0 = 0.5, ℓ = 1, q1 = 2, q2 = 4, t = 10; panel (a) shows the covariance against the distance from x0 (0 to 0.5), panel (b) against the distance between the points x1 and x2 (0 to 1).]
Figure 8.5 (a) The dependence of the covariance of the neutron densities in the domain 0 ≤ x ≤ 0.5 in a supercritical system. (b) The dependence of the covariance c(10, 0.5; 0.5 − x/2, 0.5 + x/2) on the distance x between the two points under the same conditions.
and finally

n1,1^(3)(t, x0; x1, x2) = Q q2 (2/ℓ)³ Σ_{k0=2}^∞ Σ_{k1=2}^∞ Σ_{k2=2}^∞ L(k0, k1, k2) ((e^{(αk1 + αk2)t} − e^{αk0 t})/(αk1 + αk2 − αk0)) sin(k0πx0/ℓ) sin(k1πx1/ℓ) sin(k2πx2/ℓ).    (8.133)
Based on all the above, one obtains that if α1 = 0, then

n1,1(t, x0; x1, x2) = n1,1^(0)(t, x0; x1, x2) + n1,1^(1)(t, x0; x1, x2) + n1,1^(2)(t, x0; x1, x2) + n1,1^(3)(t, x0; x1, x2).    (8.134)
If there exists a k > 1 such that αk = 0, then the multiplying system is obviously supercritical, since the quantities α1, . . . , αk−1 are positive. If k is an odd number, then even in this case a term linear in t will occur. A numerical investigation of the covariance function c(t, x0; x1, x2), defined in (8.120), yields the following picture when the neutron inducing the multiplication starts from the point x0 = 0.5 at t = 0. The calculations concern the covariance at a fixed time as a function of the distance between two points, one of which is x1 = x0 = 0.5, while the other is x2 = 0.5 + x with 0 ≤ x ≤ 0.5. The curves in the left-hand side of Fig. 8.5 illustrate the dependence of the covariance c(10, 0.5; 0.5, 0.5 + x) on the distance x measured from the central point x1 = x0 = 0.5 at time t = 10, for the cases of a subcritical and a supercritical system. It is seen that the covariance is zero at the boundary point, corresponding to the value x = 0.5. The right-hand side of Fig. 8.5 shows two curves giving the covariance c(10, 0.5; 0.5 − x/2, 0.5 + x/2) of the neutron densities as a function of the distance x between the points x1 = 0.5 − x/2 and x2 = 0.5 + x/2, again for a subcritical and a supercritical system. It is also of interest to calculate the time-dependence of the given spatial covariance of the neutron density. The two curves shown in Fig. 8.6 correspond again to the case when the process is started by a neutron from the point x0 = 0.5 at t = 0. The curves show how the covariance c(t, 0.5; 0.25, 0.75) between the neutron densities at the points x1 = 0.25 and x2 = 0.75 changes with time in a subcritical and a supercritical system. Already during the time interval chosen for the illustration it can be observed that in a supercritical system the covariance increases considerably with time, since it has to tend to infinity as t → ∞.
On the other hand, the chosen time interval is not sufficient for showing the specific characteristics of the time-dependence of the covariance in a subcritical system. The curve in Fig. 8.7a illustrates that in a subcritical system, after having reached a well-defined maximum value, the covariance starts decreasing and tends to zero in accordance with theory if t → ∞. This curve has,
[Figure 8.6 plot: covariance as a function of time t (0 to 10); curves for c = 1.02 and c = 0.98, with x0 = 0.5, x1 = 0.25, x2 = 0.75, ℓ = 1, q1 = 2, q2 = 4.]
Figure 8.6 Dependence of the covariance c(t, 0.5; 0.25, 0.75) between the neutron densities corresponding to the points x1 = 0.25 and x2 = 0.75 on time in a subcritical and a supercritical case.
[Figure 8.7 plot: covariance as a function of time in a subcritical system, with c = 1.02, x0 = 0.5, x1 = 0.25, x2 = 0.75, ℓ = 1; panel (a) covers 0 ≤ t ≤ 50, panel (b) the short-time range 0 ≤ t ≤ 0.5.]
Figure 8.7 Variation of the covariance function of the neutron densities with time in a subcritical system.
however, also a hidden feature. Figure 8.7b shows how the covariance function c(t, 0.5; 0.25, 0.75) varies directly after starting the process. According to the calculations, the covariance of neutron densities is initially negative, but after a relatively short time, it turns to positive. The reason for this is that at the beginning of the process, the effect of the loss of neutrons caused by the leakage dominates, as the multiplication in the centre of the system does not start immediately with its full intensity. So far it has been supposed that the neutron inducing the process is found in a given point x0 of the interval [0, ] at t = 0. Let us assume now that the starting neutron can be in any of the subintervals dx0 of the interval [0, ] at the moment t = 0 with a certain probability. For the sake of simplicity, let dx0 ,
0 ≤ x0 ≤
be the probability that the starting neutron can be found in the partial interval dx0 of the interval [0, ] at the moment t = 0. For the average covariance function c(t, x1 , x2 ) =
1
c(t, x0 ; x1 , x2 )dx0
(8.135)
0
it can be proven that the asymptotic relationship lim
t→∞
is fulfilled for critical systems.
c(t, x1 , x2 ) 64 π π = 2 Qq2 sin x1 sin x2 t 3
(8.136)
C H A P T E R
N I N E
Reactivity Measurement Methods in Traditional Systems
Contents 9.1 9.2 9.3 9.4 9.5 9.6
Preliminaries Feynman-Alpha by the Forward Approach Feynman-Alpha by the Backward Approach Evaluation of the Feynman-Alpha Measurement The Rossi-Alpha Method Mogilner’s Zero Probability Method
231 234 240 250 253 257
9.1 Preliminaries The most important practical application of the theory of neutron fluctuations is to utilise the information in the distribution of the number of neutrons or the detector counts, or in their lower order moments, to get information about the system. One type of information sought is the multiplication properties of reactor cores, such as the margin to criticality, expressed by the multiplication factor or the reactivity, as in the case of the startup of traditional reactors or during the operation of source driven subcritical systems. This possibility was observed already at the dawn of nuclear power, and some early work is found in [50–52, 56]. We have seen earlier that the presence of branching, which is due to the fission process, leads to the deviation of the variance of the number of neutrons from that of a Poisson process, in which latter the variance is equal to the mean and hence there is no additional information in the variance. In case of branching, the first and second moments carry non-identical information on the process, the utilisation of which enhances the performance of the diagnostic applications significantly. Experience shows that although already the expectations, i.e. first moments, carry information about e.g. the reactivity or the effective delayed neutron fraction, yet a combination of the first two moments leads to much more powerful and accurate ways of determining these parameters. One aspect is that some unknown factors, such as the source strength, detector efficiencies, and to some extent also spatial and geometry effects, can be eliminated by taking the ratio of the variance to the expectation. In this chapter we will derive and discuss the classical fluctuation-based reactivity measurement methods, namely the variance to mean or Feynman-alpha and the correlation or Rossi-alpha methods. In addition to these, the zero probability method of Mogilner will also be touched upon. 
In order to gain insight, the Feynman-alpha equations will be derived both by the forward and the backward methods, respectively, partly in order to demonstrate the differences in the two methodologies in a pragmatic way, and also as a complement to the general methods used in the first part of the book. It is to be noted here that in this and the subsequent chapters, unlike in Part I of the book where an integral representation of the first collision probability was used when deriving backward master equations, the forward and backward equations will be derived by considering infinitesimal time intervals around the final Neutron fluctuations ISBN-13: 978-0-08-045064-3
© 2008 Elsevier Ltd. All rights reserved.
231
232
Imre Pázsit & Lénárd Pál
and the initial times, respectively. The merit of the integral form of the master equations is that it is valid also for non-Markovian processes; however, for Markovian processes as the ones treated here, the differential form lends an alternative which in some cases may have advantages in practical work. Its use is thus demonstrated here, for the benefit of the reader. Due to this and other reasons, e.g. in order to conform with several recent publications in the field, the formalism and notations will be slightly different from that used in the first part of the book. There is also another difference in the description which is not related to the type of the equations selected, rather to the input parameters used. As it was already mentioned in Chapter 1, in the general theory of branching processes one deals only with one type of reaction, which leads to branching, including the event of zero entities produced in one branching event. The intensity of the reaction was denoted with Q, the probability distribution of the number of new entities (particles) born in the reaction was denoted by fk , and the first two factorial moments of fk were denoted as q1 and q2 . In neutron transport, for reasons related to the physics rather to the mathematics of the process, neutron capture, leading to the disappearing of the neutron without inducing fission, is distinguished from the fission process. The multiplicity distribution of the neutrons will correspond to the fission neutron distribution (with the inclusion that fission can also lead to zero produced neutrons). This concept changes the formalism somewhat; however, the differences are minor and in most cases the differences and the way of applying the results of Part I in the forthcoming formulae and calculations are self-obvious. Here we just give a summary of what is to be kept in mind. The process will be described here by the intensities of neutron capture, λc , fission, λf , and also neutron detection λd . 
Actually, detection is also a neutron capture process, in designated nuclei which belong to a detector, and in Chapter 4, Section 4.3, it was already shown how the statistics of the detected neutrons can be obtained from those of the absorbed neutrons. In the forthcoming treatment, the traditional description is chosen instead, with keeping track of the detection as a separate random process explicitly. The total reaction intensity is then called the intensity of absorption, i.e. the sum of all events in which a neutron enters a reaction: λa = λc + λf + λd ≡ Q. The probability of producing k so-called prompt fission neutrons will be denoted as pf (k):1 P{ν = k} = pf (k). When calculating the first and second moment of the neutron number or the number of detector counts, the first and second factorial moment of the number distribution of fission neutrons, calculated from pf (k), will enter the formulae. As described in equations (1.13)–(1.17), q1 and q2 are related to ν and ν(ν − 1) as ν =
λa q1 , λf
ν(ν − 1) =
λa q2 λf
where λa = Q. In particular this leads to ν(ν − 1) q2 = . ν q1 Most often the second factorial moment of the fission neutron number will be expressed by the Diven factor Dν =
ν(ν − 1) . ν 2
Here we have the slight inconvenience that the Diven factor of the fission neutron distribution will be denoted with the same symbol as the Diven factor of the total multiplicity, i.e. the Diven factor of the distribution hk , which was used in Part I throughout. On the other hand, in view of the above, one has the equality q1 Dν (fk ) = ν Dν (pf ), 1 Actually
in this chapter and the first part of Chapter 10, also the delayed neutrons will be accounted for, which means that a more general probability distribution pf (n, m) will be used. However, here it is sufficient to use pf (k) = ∞ m=0 pf (k, m) for the clarification.
233
Reactivity Measurement Methods in Traditional Systems
Table 9.1
Prompt pf (k) values [53] for 235 U
pf (k)
Boldeman and Dalton [54]
Gwin et al. [55]
Recommended
pf (0)
0.0333 ± 0.0005
0.0291 ± 0.0009
0.0317 ± 0.0015
pf (1)
0.1745 ± 0.0010
0.1660 ± 0.0024
0.1720 ± 0.0014
pf (2)
0.3349 ± 0.0020
0.3362 ± 0.0034
0.3363 ± 0.0031
pf (3)
0.3028 ± 0.0020
0.3074 ± 0.0035
0.3038 ± 0.0004
pf (4)
0.1231 ± 0.0012
0.1333 ± 0.0026
0.1268 ± 0.0036
pf (5)
0.0281 ± 0.0015
0.0259 ± 0.0014
0.0266 ± 0.0026
pf (6)
0.0032 ± 0.0012
0.0021 ± 0.0006
0.0026 ± 0.0009
pf (7)
0.0001 ± 0.0001
0.0002 ± 0.0002
0.0002 ± 0.0001
νp
2.4057
2.437 ± 0.005
2.413 ± 0.007
νp (νp − 1)
4.626
4.705
4.635
νp2
7.032
7.142
4.049
2
σ (νp )
1.245
1.207
1.226
D νp
0.7994
0.7922
0.7962
which means that the constructions of the type on the left-hand side in the formulae of Part I can be directly rewritten in terms of the right-hand side in the formalism used in the following chapters. As an illustration, Table 9.1 contains the prompt neutron emission multiplicity distribution pf (k), i.e. the probability that a given fission will result in the emission of νp = k prompt neutrons, as well as some moments, the variance and the Diven factor of the prompt neutron distribution. The important parameter α = Q(q1 − 1), describing the temporal evolution of the system, will remain unchanged, except for a sign change. It would read, with the recent notations, as λf ρ Q(q1 − 1) = λa ν − 1 = λa where the reactivity ν λf − λa ν λf and the prompt neutron generation time, = 1/ν λf were introduced. However, due to the fact that the applications all refer to neutron fluctuations in subcritical systems, where ρ < 0, the parameter α > 0 will be defined as ρ α=− hence it will correspond to the parameter a = −α of Part I, which was also used frequently in subcritical systems. On the other hand, whenever external sources with multiple emission of neutrons will be considered, the same notations will be kept for the distribution, the generating function and the moments as in Part I. Namely the source neutron multiplicity distribution is denoted by hk , its generating function with r(z), the first two factorial moments as r1 and r2 and the source Diven factor as Dq = r2 /r12 . It can be mentioned that certain aspects of reactivity measurement techniques will not be discussed here, e.g. such as the effect of dead time on the measurements. One reason for this is that with the development of instrumentation and electronics, the effect of dead time is much less noticeable on the measurements than it once was. The interested reader is advised to the literature [57–61, 63] on this subject. ρ=
234
Imre Pázsit & Lénárd Pál
Just as in the preceding parts of this book, except Chapter 8, the treatment will be carried out in infinite homogeneous systems and detectors in one energy group. Space- and energy-dependent treatments can be found in references [28, 46, 64–66].
9.2 Feynman-Alpha by the Forward Approach The Feynman-alpha or variance to mean method [50–52] requires the calculation of the first and second moments of the number of detections during a period T = t − td while the system is in a stationary state. Here td designates the fixed time instant when the detection was started. Actually, the Feynman-alpha formula without delayed neutrons was already derived in Section 4.3. With the methods of Section 5.5, it would be possible to extend the derivation such that it includes delayed neutrons. Here, however, we shall instead use the traditional derivation with the forward and backward equations. In order to calculate the moments, a master equation will be considered for the joint distribution of neutrons, one group of delayed neutron precursors and neutron counts in an interval in a subcritical system with a neutron source. The characteristics of the solution are quantitatively similar even if six different delayed neutron precursor types are taken into account, but the solution will become significantly more complicated. Hence for more transparency we shall outline the solution with one delayed neutron group.2 A space- and energy-independent model will be employed, corresponding to the case of an infinite homogeneous system in one-group theory. It is assumed that the source was switched on at t0 ≤ t and the detection of neutrons was started at td ≤ t, where td ≥ t0 . Let the random processes N(t) and C(t) represent the number of neutrons and the delayed neutron precursors at the time instant t ≥ 0, respectively. Likewise, let the random process Z(t, td ) represent the number of detected neutrons in the time interval [td , t]. It is to be noted that Z(t, td ) =
Z(t, td ), 0,
if t ≥ td , if t < td .
Define the probability P{N(t) = N , C(t) = C, Z(t, td ) = Z|N(t0 ) = 0, C(t0 ) = 0, Z(t0 , td ) = 0} ≡ P(N , C, Z, t|t0 ),
t0 ≤ td .
(9.1)
Here P(N , C, Z, t|t0 ) is the joint probability that there will be N neutrons and C delayed neutron precursors at the time instant t in the subcritical system driven by a source, and that Z neutron counts were recorded during the period t − td ≥ 0. In order to derive a probability balance equation, we need to define the transition probabilities, which in turn are related to the nuclear physics parameters and the intensity of the source. Thus λc , λf and λd will denote the intensities of capture, fission and detection, respectively (these are often referred to in the literature as ‘transition probabilities per unit time’). For example λc = v c , where c is the macroscopic cross section of capture and v the neutron speed. Further, λ is the decay constant of the delayed neutron precursors, and pf (n, m) is the probability of emitting n neutrons and m delayed neutron precursors in a fission event: P{νp = n, νd = m} = pf (n, m). S is the source intensity, such that the probability of emitting a neutron within dt is equal to S dt. In the continuation, without restricting generality it will be assumed that the start of the detection time td is equal to zero, hence notation on td , or on T = t − td will be suppressed. Following standard considerations, the forward master equation for P(N , C, Z, t|t0 ) can be written down by considering changes of the state of 2 The
results corresponding to six groups will be quoted without derivation at the end of this chapter.
235
Reactivity Measurement Methods in Traditional Systems
the system between t and t + dt, leading to dP(N , C, Z, t|t0 ) = λc P(N + 1, C, Z, t|t0 )(N + 1) dt + λd P(N + 1, C, Z − 1, t|, t0 )(N + 1) + λf P(N + 1 − n, C − m, Z, t|t0 )(N + 1 − n)pf (n, m) n
m
+ SP(N − 1, C, Z, t|t0 ) + λP(N − 1, C + 1, Z, t|t0 )(C + 1) − P(N , C, Z, t|t0 )[N (λf + λc + λd ) + λC + S].
(9.2)
The initial condition associated with this equation reads as P(N , C, Z, t = t0 |t0 ) = δN ,0 δC,0 δZ,0 .
(9.3)
By defining the generating functions G(x, y, v, t|t0 ) =
N
and gf (x, y) =
C
(9.4)
xn ym pf (n, m),
(9.5)
Z
n
xN yC vZ P(N , C, Z, t|t0 )
m
the following equation is obtained from (9.2): ∂G(x, y, v, t|t0 ) ∂G(x, y, v, t|t0 ) = {λf [gf (x, y) − x] − λd (x − v)} ∂t ∂x ∂G(x, y, v, t|t0 ) + (x − 1)SG(x, y, v, t|t0 ) + λ(x − y) ∂y
(9.6)
with the initial condition G(x, y, v, t = t0 |t0 ) = 1,
t0 ≤ 0.
(9.7)
The proof of the asymptotic stationarity of G(x, y, v, t|t0 ), i.e. the proof of the limit relation lim G(x, y, v, t|t0 ) = Gst (x, y, v, t)
t0 →−∞
is not a simple task, therefore it will not be given here. Instead, we shall use a heuristic reasoning at the level of the moments of the various random processes to determine the asymptotic forms of the moments. From (9.6) we shall derive equations for the first and second moments of the detector counts in the usual way of taking derivatives with respect to the parameters x, y and v. In the sequel, the following conventional notations are introduced. For the expectation of the random processes N(t), C(t) and Z(t, 0), the notation of the expectation value is omitted, e.g. ∂G(x, y, v, t|t0 ) . (9.8) E{N(t)} ≡ N(t) ≡ N (t) = ∂x x=y=v=1 Further, one has
∂gf (x, y) = npf (n, m) ≡ νp ≡ ν (1 − β), ∂x x=y=1 n m ∂gf (x, y) = mpf (n, m) ≡ νd ≡ ν β, ∂y x=y=1 n m
(9.9) (9.10)
236
Imre Pázsit & Lénárd Pál
where ν is the average number of neutrons per fission and β is the effective delayed-neutron fraction. The notations ρ =
ν λf − (λf + λc + λd ) , ν λf
=
1 ν λf
and
=
(9.11)
λd λf
(9.12)
are standard, i.e. ρ, and stand for reactivity, prompt neutron generation time and detector efficiency, respectively.
9.2.1 First moments The three first moment equations read as follows: dN (t|t0 ) ρ−β = N (t|t0 ) + λC(t|t0 ) + S, dt
(9.13)
dC(t|t0 ) β = N (t|t0 ) − λC(t|t0 ) dt
(9.14)
and dZ(t, 0|t0 ) = λd N (t, |t0 ) = λf N (t|t0 ), dt In a steady subcritical medium with a steady source we find that lim N (t|t0 ) = N ,
t0 →−∞
t ≥ 0.
(9.15)
lim C(t|t0 ) = C
t0 →−∞
and lim Z(t, 0|t0 ) = Z(t).
t0 →−∞
By using these limit relations one obtains the following stationary solutions: S , −ρ βN βS C = = λ λ(−ρ)
N =
(9.16) (9.17)
and Z(t) = λf Nt.
(9.18)
For obvious reasons, the solutions for the neutron population N and the detector count Z are the same as those obtained without delayed neutrons. It is important to note that the argument of Z(t) in (9.18), which looks formally as time, means actually the measurement time interval [0, t] and not a time instant.
9.2.2 Second moments For sake of simplicity, following Pluta [67] and Williams [11], we introduce the modified second moment of the random variables a and b as follows: μaa ≡ a(a − 1) − a 2 = σa2 − a
(9.19)
237
Reactivity Measurement Methods in Traditional Systems
and μab ≡ ab − a b
(9.20)
where a and b stand for either of the variables neutron population N, precursor population C and detector count Z. Then, taking the auto- and cross-derivatives, one obtains the following six equations: dμNN (t|t0 ) dt dμNC (t|t0 ) dt dμCC (t|t0 ) dt dμNZ (t, 0|t0 ) dt dμCZ (t, 0|t0 ) dt
= −2αμNN (t|t0 ) + 2λμNC (t|t0 ) + λf νp (νp − 1) N (t|t0 ), = −(α + λ)μNC (t|t0 ) + = −2λμCC (t|t0 ) + 2
β μNN (t|t0 ) + λμCC (t|t0 ) + λf νp νd N (t|t0 ),
β μNC (t|t0 ) + λf νd (νd − 1) N (t|t0 ),
= −αμNZ (t, 0|t0 ) + λμCZ (t, 0|t0 ) + λf μNN (t|t0 ), = −λμCZ (t, 0|t0 ) +
β μNZ (t, 0|t0 ) + λf μNC (t|t0 ),
(9.21) (9.22) (9.23) (9.24) (9.25)
and dμZZ (t, 0|t0 ) = 2 λf μNZ (t, 0|t0 ). dt In the above, the prompt neutron decay constant α, was introduced as α=
β−ρ .
Further, some second moment notations were introduced as follows n(n − 1)pf (n, m), νp (νp − 1) = νp νd =
n
m
n
m
and νd (νd − 1) =
nmpf (n, m) = νp νd ,
n
m(m − 1)pf (n, m).
(9.26)
(9.27)
(9.28) (9.29)
(9.30)
m
As equation (9.29) shows, it is assumed that the number of fission neutrons and delayed neutron precursors are independent random variables. There are no experimental data available on νp νd , and the assumption of the statistical independence of the prompt and delayed neutron numbers seems plausible. The term corresponding to (9.30) in this description is equal to zero. Readers interested in the formal treatment when all possible autoand cross-terms between prompt neutrons and six delayed neutron precursors are explicitly kept, are referred to a recent publication [68]. It is evident that lim μNN (t|t0 ) = μNN ,
t0 →−∞
lim μNC (t|t0 ) = μNC
t0 →−∞
and lim μCC (t|t0 ) = μCC ,
t0 →−∞
while lim μNZ (t, 0|t0 ) = μNZ (t),
t0 →−∞
lim μCZ (t, 0|t0 ) = μCZ (t)
t0 →−∞
238
Imre Pázsit & Lénárd Pál
and lim μZZ (t, 0|t0 ) = μZZ (t).
t0 →−∞
By taking into account these relationships one obtains the asymptotic solutions for μNN , μNC and μCC as % λf N $ ρ μNN = (9.31) − λ νp (νp − 1) − 2λνp νd , 2(λ + α)ρ μNC =
% − λf N $ ρ − α νp (νp − 1) − 2 ανp νd , 2(λ + α)ρ
(9.32)
and β μNC . (9.33) λ We are interested in determining the stationary modified variance of the detector counts, μZZ (t), from the coupled equation system μCC =
dμNZ (t) = −αμNZ (t) + λμCZ (t) + λf μNN , dt
(9.34)
dμCZ (t) β = −λμCZ (t) + μNZ (t) + λf μNC , dt
(9.35)
and dμZZ (t) = 2 λf μNZ (t). (9.36) dt The solution of the equations (9.34)–(9.36) goes as follows. Again, (9.34) and (9.35) can be solved separately from (9.36) because of the one-way coupling between the former and the latter. One obtains then for the Laplace transform of μNZ (t) the expression
λf λ (9.37) (μNN + μNC ) + μNN , μNZ (s) = H (s) s where the function H (s) is defined as H (s) = s2 + (λ + α)s −
λρ .
(9.38)
In arriving to (9.38) use has been made of the fact that both μNZ (t) and μCZ (t) are equal to zero at t = 0. To Laplace invert (9.37), one needs the two roots of H (s), which we shall denote as −αp and −αd , the subscripts p and d referring to ‘prompt’ and ‘delayed’, respectively. Once μNZ (t) is found, it can be put into (9.26) from where the searched modified variance μZZ (t) can be derived by quadrature. These calculations are straightforward, hence details of the derivation will not be given here.
9.2.3 The variance to mean or Feynman-alpha formula The solution can be compactly written as Y (t) ≡ where
λ2f σ 2 (t) μZZ (t) = Z −1= [W (αp )fp (t) − W (αd )fd (t)], Z(t) Z(t) (α + λ)(αp − αd )
λ2 W (s) ≡ 1 − 2 s
νp (νp − 1) −
λ2 2νp νd s2
(9.39)
(9.40)
239
Reactivity Measurement Methods in Traditional Systems
and
1 − e −αi t , i = p, d. αi t It is seen that (9.39) is of the standard Feynman-alpha form, since it can be written as fi (t) ≡ 1 −
λ2f σZ2 (t) =1+ [W (αp )fp (t) − W (αd )fd (t)] ≡ 1 + Yp fp (t) + Yd fd (t). Z(t) (α + λ)(αp − αd )
(9.41)
(9.42)
To analyse the formula, it is useful to consider the quantitative values of the roots αi and make some simplifications according to the magnitude of the various terms. We just recall that for negative reactivities |ρ| < β, to a very good approximation, the two roots are given as αp = α =
β−ρ
(9.43)
and λρ . β−ρ Since αp = α λ, the prompt term Yp = Y1 in (9.42) can be written as αd =
Y1 =
Dν p (1 − β)2
(β − ρ)2
(9.44)
(9.45)
which can be simplified with a very good approximation to Y1 =
Dνp (β
− ρ)2
=
λd λf ν(ν − 1) . α2
(9.46)
Here the so-called Diven factor Dνp of the prompt neutrons was introduced as D νp =
νp (νp − 1) . νp 2
(9.47)
The abbreviation Y1 is extensively used in the various forms of the Feynman-alpha formula in the literature. For assessing the delayed term Yd = Y2 , one notes that unlike for the prompt alpha, one has αd ≤ λ, and thus the same neglection cannot be made as in (9.45). One has
Dνp 2νp νd ρ−β 2 Y2 = 1+ −1 . (9.48) (ρ − β)2 ρ νp (νp − 1) The function (ρ − β)/ρ diverges at criticality, and it decreases asymptotically to unity at deep subcriticalities. This divergence of the coefficient of the smallest root of the characteristic (inhour) equation is a known property of the solution. In summary, in a subcritical reactor with an extraneous source with Poisson statistics, the variance to mean formula can be written as
Dνp μZZ (t) 1 − e −αt Y (t) = = 1− Z(t) (ρ − β)2 αt 2 * 2νp νd ρ−β 1 − e −αd t + , (9.49) 1+ −1 ρ αd t νp (νp − 1)
240
Imre Pázsit & Lénárd Pál
1000 4
Y value
100
10 20
10
40 1
0.001
100
0.1
10 Time length (s)
1000
Figure 9.1 Feynman-alpha Y(t) values as functions of the measurement time length for different negative reactivities, indicated in 10−4 units on the right of the curves.
where α and αd are given in (9.43) and (9.44). The above forms of the Feynman-alpha formula demonstrate its suitability for the determination of the subcritical reactivity. First of all the source intensity, usually not known well quantitatively in an experiment, disappears from the variance to mean. Formally this is a result of taking the ratios of two moments, which both depend linearly on the source intensity. The physical explanation is that the source emits particles with Poisson statistics, and thus it cannot contribute to the deviation of the statistics from Poisson variance. Also, it is seen that the variance to mean depends on the measurement time in a non-linear way. This gives an opportunity to determine the decay constant α, which contains the important parameters of the system, by fitting the time-dependence of the measurement to the form (9.49). Of course the fitting gives the prompt decay constant α = (β − ρ)/ , hence two parameters out of the reactivity ρ, effective delayed neutrons fraction β and prompt neutron generation time need to be known, so that the third, usually the reactivity, should be possible to determine. A further comment is that all geometrical and energy spectrum aspects, associated with a finite detector, reactor and external source, are comprised primarily in the parameters
and Yi , whereas the time-dependence is rather insensitive to these aspects. Some calculated Feynman-alpha Y (t) curves are shown in Fig. 9.1 for illustration. It can be noted that the prompt neutron time constant can be determined from the first part of the curve that ends at the first plateau at time values a few times 1/α, i.e. a few times the lifetime of the prompt chain. The second part of the curve, for longer measurement times corresponding to the time constant of the delayed neutrons, is usually not measured and not utilised.
9.3 Feynman-Alpha by the Backward Approach 9.3.1 Preliminaries As it was already seen in several chapters of Part I, when using the backward equations with a source, it is necessary to progress in two steps. Since the backward equation operates on initial variables, the master equation that describes the evolution of the population concerns a cascade that was started by one initial neutron. In order to calculate the distributions of a cascade induced by a source of particles with an intensity S, one needs to use a second master equation, connecting the single-particle induced and source-induced distributions, or their generating functions. This latter equation determines how the moments of the source-induced cascade can be calculated from those of a cascade initiated by a single particle. In order to facilitate keeping track of these two types of distributions, the notation conventions introduced in the previous chapters of Part I will be used. The distribution, the generating function and the moments belonging to a cascade started by a single particle will be denoted by lower case symbols. The same quantities,
Reactivity Measurement Methods in Traditional Systems
i.e. the distributions, generating functions and their moments that belong to a cascade induced by a source over a time period, will be denoted by capital letters. The notation of the previous section, referring to the case of source-induced cascades, also conforms to this convention. Similarly to the previous section, the notation of expectation values will be dropped in the formulae, as will be specified below. Accordingly, the following probability distributions are defined:

P{n(t) = n, c(t) = c, z(t, T) = z | n(0) = 1, c(0) = 0, z(0, T) = 0} ≡ p(n, c, z, T, t)    (9.50)
is the probability that there will be n neutrons and c precursors at time t in the system, induced by one initial neutron at t = 0, and that there have been z detector counts between t − T and t. Similarly, let

P{N(t) = N, C(t) = C, Z(t, T) = Z | N(0) = 0, C(0) = 0, Z(0, T) = 0} ≡ P(N, C, Z, T, t)    (9.51)

be the probability that there are N neutrons and C precursors at time t in the system, and that there have been Z detector counts between t − T and t, induced by a source of intensity S, switched on at t = 0, such that there were no neutrons and precursors in the system and no detector counts had been registered up to time t = 0. Strictly speaking, in order to use the proper backward formalism, one should keep the time of the emission of the single particle and of the switching on of the extraneous source as a variable t0 of the equation, i.e. define the distributions

p(n, c, z, T, t | t0)  and  P(N, C, Z, T, t | t0).    (9.52)
The derivation of the master equations and the moments would then lead to derivatives and integrals with respect to the variable t0. However, due to time homogeneity, all processes depend only on t − t0. Hence one can choose t0 = 0 and thereby transfer the time derivatives and the subsequent integrals in the moment calculations to the final time t. Therefore, the equations used in the forthcoming will be of a so-called 'mixed type', in which the collision operator acts on the source (initial) particles, but the time variable is the terminal time of the process. To expedite the simplification of the expressions, following similar definitions used in the foregoing, the following notations will be used:

⟨n(t)⟩ ≡ n(t),  ⟨z(t, T)⟩ ≡ z(t, T),  ⟨n(t)(n(t) − 1)⟩ ≡ mnn(t),    (9.53)
⟨z(t, T)(z(t, T) − 1)⟩ ≡ mzz(t, T)    (9.54)

for the moments of the single-particle induced distributions, and

⟨N(t)⟩ ≡ N(t),  ⟨Z(t, T)⟩ ≡ Z(t, T),    (9.55)
⟨N(t)(N(t) − 1)⟩ − N(t)² ≡ μNN(t),  ⟨Z(t, T)(Z(t, T) − 1)⟩ − Z(t, T)² ≡ μZZ(t, T)    (9.56)
for the moments of the source-induced distributions. The stationary values of the latter are denoted as

lim_{t→∞} N(t) = N,  lim_{t→∞} μNN(t) = μNN,    (9.57)
lim_{t→∞} Z(t, T) = Z(T),  lim_{t→∞} μZZ(t, T) = μZZ(T).    (9.58)
9.3.2 Relationship between the single-particle and source-induced distributions The master equation that connects these two distributions was first treated by Sevast’yanov [24]. The derivation of this equation can be made by considering the probabilities, in first order of dt, of the mutually exclusive
Imre Pázsit & Lénárd Pál
events of no source emission or of one source neutron emission within the initial time interval [0, dt]:

P(N, C, Z, T, t) = (1 − S dt) P(N, C, Z, T, t − dt) + S dt Σ_{N1+n2=N} Σ_{C1+c2=C} Σ_{Z1+z2=Z} P(N1, C1, Z1, T, t) p(n2, c2, z2, T, t).    (9.59)
Introducing the probability generating functions

g(x, y, v, T, t) = Σ_n Σ_c Σ_z x^n y^c v^z p(n, c, z, T, t)    (9.60)

and

G(x, y, v, T, t) = Σ_N Σ_C Σ_Z x^N y^C v^Z P(N, C, Z, T, t),    (9.61)

one obtains the following differential equation from (9.59):

dG(x, y, v, T, t)/dt = S G(x, y, v, T, t) {g(x, y, v, T, t) − 1}.    (9.62)
Accounting for the initial conditions

g(x, y, v, T, 0) = x  and  G(x, y, v, T, 0) = 1,

the solution of (9.62) is obtained as

G(x, y, v, T, t) = exp{ S ∫_0^t [g(x, y, v, T, t′) − 1] dt′ }.    (9.63)

It can also be shown that the asymptotic value, i.e. the limit of the generating function,

lim_{t→∞} G(x, y, v, T, t) = exp{ S ∫_0^∞ [g(x, y, v, T, t′) − 1] dt′ }    (9.64)
exists if the system is subcritical. The moments of the source-induced distribution can be expressed by the single-particle induced ones by calculating the derivatives. Here we shall use the notations in (9.53)–(9.58). Thus, the asymptotic (stationary) value of the expected number of neutrons in the source-driven system is given as

N = S ∫_0^∞ n(t) dt.    (9.65)

The asymptotic value of the modified second moment of the source-induced neutron number, μNN, is obtained as

μNN = S ∫_0^∞ mnn(t) dt.    (9.66)
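The structure of (9.63) can be checked in a special case that is not treated in the text but is easy to verify: pure absorption with rate λa and no multiplication. Then g(x, t) = 1 + (x − 1)e^{−λa t}, and (9.63) yields the generating function of a Poisson distribution with mean S(1 − e^{−λa t})/λa — consistent with the earlier remark that a Poisson source by itself causes no deviation from Poisson statistics. All numbers below are illustrative:

```python
import math

def g_single(x, t, lam_a=1.0):
    # Single-particle generating function for pure absorption (no branching):
    # the particle survives to time t with probability exp(-lam_a * t).
    return 1.0 + (x - 1.0) * math.exp(-lam_a * t)

def G_source(x, t, S=10.0, lam_a=1.0, steps=20000):
    # G(x, t) = exp( S * int_0^t [g(x, t') - 1] dt' ), eq. (9.63), trapezoid rule.
    h = t / steps
    integral = 0.0
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        integral += w * (g_single(x, i * h, lam_a) - 1.0)
    return math.exp(S * integral * h)

def poisson_pgf(x, mean):
    # Probability generating function of a Poisson distribution.
    return math.exp(mean * (x - 1.0))
```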
Actually, as will be seen in the next section, the second factorial moment of the single-particle distribution, mnn(t), can be represented as a convolution,

mnn(t) = ∫_0^t n(t − t′) qnn(t′) dt′,    (9.67)

where qnn(t) is a given function of n(t) and can be assumed known. Hence, by virtue of (9.65)–(9.67), one has

μNN = N ∫_0^∞ qnn(t) dt.    (9.68)
In a formally completely analogous manner, one obtains

Z(T) = S ∫_0^∞ z(t, T) dt    (9.69)

and

μZZ(T) = N ∫_0^∞ qzz(t, T) dt,    (9.70)

where the source function qzz(t, T) is defined by the convolution-type solution of the single-particle moment

mzz(t, T) = ∫_0^t n(t − t′) qzz(t′, T) dt′.    (9.71)
The concrete values of the source functions qnn (t) and qzz (t, T ) will be obtained in the next section.
9.3.3 Calculation of the single-particle moments

For the same reasons as mentioned in the previous subsection, i.e. utilising time homogeneity, the master equations derived here will again be of the 'mixed' type, because the time derivative will be taken w.r.t. the terminal (detection) time. However, the scattering operator will be of backward type, referring to the interactions of the initial particle, and in all other respects we shall utilise the properties of the backward formalism. According to the above, we shall derive coupled equations for the following quantities. In addition to the already defined probability

P{n(t) = n, c(t) = c, z(t, T) = z | n(0) = 1, c(0) = 0, z(0, T) = 0} ≡ p(n, c, z, T, t),

one also needs to define

P{n(t) = n, c(t) = c, z(t, T) = z | n(0) = 0, c(0) = 1, z(0, T) = 0} ≡ w(n, c, z, T, t)    (9.72)
as the probability that there are n neutrons and c precursors at time t in the system, induced by one initial precursor at t = 0, and that there have been z detector counts between t − T and t. The corresponding probability generating function is defined as

h(x, y, v, T, t) = Σ_n Σ_c Σ_z x^n y^c v^z w(n, c, z, T, t),    (9.73)
whereas the generating function g(x, y, v, T, t) of p(n, c, z, T, t) was defined in (9.60). The initial conditions for the above quantities read as

p(n, c, z, T, 0) = δ_{n,1} δ_{c,0} δ_{z,0},  g(x, y, v, T, 0) = x    (9.74)

and

w(n, c, z, T, 0) = δ_{n,0} δ_{c,1} δ_{z,0},  h(x, y, v, T, 0) = y.    (9.75)

The master equations for g and h can be obtained as follows. With the usual arguments one writes

p(n, c, z, T, t) = (1 − λa dt) p(n, c, z, T, t − dt) + λc δ_{n,0} δ_{c,0} δ_{z,0} dt
  + λf dt Σ_{k,ℓ} pf(k, ℓ) Σ_{n1+n2=n} Σ_{c1+c2=c} Σ_{z1+z2=z} A_k(n1, c1, z1, T, t) B_ℓ(n2, c2, z2, T, t)
  + λd dt δ_{n,0} δ_{c,0} [Δ(t, T) δ_{z,1} + Δ̄(t, T) δ_{z,0}],    (9.76)
where

A_k(n1, c1, z1, T, t) = Σ_{n11+···+n1k=n1} Σ_{c11+···+c1k=c1} Σ_{z11+···+z1k=z1} Π_{j=1}^{k} p(n1j, c1j, z1j, T, t),    (9.77)

while

B_ℓ(n2, c2, z2, T, t) = Σ_{n21+···+n2ℓ=n2} Σ_{c21+···+c2ℓ=c2} Σ_{z21+···+z2ℓ=z2} Π_{j=1}^{ℓ} w(n2j, c2j, z2j, T, t).    (9.78)
The function Δ(t, T) is defined as

Δ(t, T) = { 1 for 0 ≤ t ≤ T,  0 otherwise }    (9.79)

and Δ̄(t, T) = 1 − Δ(t, T). In (9.76), the various terms on the right-hand side are equal to the probabilities of the mutually exclusive events which can occur within the infinitesimal time interval (0, dt), namely:

1. the source neutron does not have a collision;
2. it is captured in the system (reactor);
3. it induces fission;
4. it is absorbed in the detector.
Furthermore, in all four cases, there will be n neutrons and c precursors at time t in the system, and z detector counts in (t − T, t). In the fission term, k prompt neutrons and ℓ precursors will be generated. Since their later development (multiplication) is independent of each other, the joint probability is a product of the individual probabilities, subject to the constraint that together they lead to n neutrons, c precursors and z detector counts. With similar arguments, for the precursor-induced cascade one obtains

w(n, c, z, T, t) = (1 − λ dt) w(n, c, z, T, t − dt) + λ dt p(n, c, z, T, t).    (9.80)
The symbols in the above equations have their usual meaning. From (9.76) and (9.80) one obtains for the generating functions g and h of (9.60) and (9.73) the following differential equations:

∂g(x, y, v, T, t)/∂t = λf Σ_{k,ℓ} pf(k, ℓ) g^k(x, y, v, T, t) h^ℓ(x, y, v, T, t) + λc − λa g(x, y, v, T, t) + λd {(v − 1)Δ(t, T) + 1}    (9.81)

and

∂h(x, y, v, T, t)/∂t = λ {g(x, y, v, T, t) − h(x, y, v, T, t)}.    (9.82)
The second of the foregoing equations can be explicitly solved. By taking into account the initial condition (9.75), one gets

h(x, y, v, T, t) = λ ∫_0^t e^{−λ(t−t′)} g(x, y, v, T, t′) dt′ + y e^{−λt}.    (9.83)

Putting this back into (9.81) yields one single equation from which all statistics can be derived:

∂g(x, y, v, T, t)/∂t = λf Σ_{k,ℓ} pf(k, ℓ) [g(x, y, v, T, t)]^k [ λ ∫_0^t e^{−λ(t−t′)} g(x, y, v, T, t′) dt′ + y e^{−λt} ]^ℓ + λc − λa g(x, y, v, T, t) + λd {(v − 1)Δ(t, T) + 1}.    (9.84)
Here one can note one significant difference compared to the forward equation. In the backward formalism, after eliminating h via (9.83), one obtains one single equation for the generating function g. Since the equation does not contain any derivatives w.r.t. the variables x, y and v, one single equation can be derived for any individual moment of any order, which can be solved separately from the other moment equations. The only technical difficulty of the solution is the calculation of certain nested integrals, as will be seen soon. In the forward formalism, there is also one single master equation as a starting point. However, that equation contains derivatives w.r.t. x and y ([9], [18]). Because of this, for any moment except the first moment of the detector count, a coupled system of differential equations arises, whose order increases with the order of the moments. This, in general, causes more difficulties in the solution than the performance of the integrals in the backward case. The moments can be calculated as follows. For the first moment

n(t) = ∂g(x, y, v, T, t)/∂x |_{x=y=v=1},    (9.85)

one obtains from (9.84) the equation

dn(t)/dt = λf ν(1 − β) n(t) + νβ λf λ ∫_0^t e^{−λ(t−t′)} n(t′) dt′ − λa n(t) + δ(t).    (9.86)
Here the parameters ν ≡ νp + νd and β were introduced as

νp = Σ_{k,ℓ} k pf(k, ℓ) ≡ ν(1 − β)    (9.87)

and

νd = Σ_{k,ℓ} ℓ pf(k, ℓ) ≡ νβ.    (9.88)
In equation (9.86) the initial condition (9.74) was added directly to the equation, such that one has

n(t) |_{t=−0} = 0.    (9.89)

This was done to help realise later that the first moment n(t) is the Green's function of the higher moments. A temporal Laplace transform of (9.86) yields for ñ(s) ≡ L[n(t)]

ñ(s) = (s + λ) / (s² + s(λ + α) − λρ/Λ),    (9.90)

where the notations α, ρ and Λ were introduced as usual. From this one gets

n(t) = [(sp + λ)/(sp − sd)] e^{sp t} − [(sd + λ)/(sp − sd)] e^{sd t},    (9.91)
where sp and sd are the two roots of the denominator of (9.90), the indices p and d referring to prompt and delayed, respectively. For the second moment

mnn(t) = ∂²g(x, y, v, T, t)/∂x² |_{x=y=v=1},    (9.92)

one obtains the equation

dmnn(t)/dt = λf ν(1 − β) mnn(t) + νβ λf λ ∫_0^t e^{−λ(t−t′)} mnn(t′) dt′ − λa mnn(t)
  + λf [ ⟨νp(νp − 1)⟩ n²(t) + 2⟨νp νd⟩ n(t) λ ∫_0^t e^{−λ(t−t′)} n(t′) dt′ ],    (9.93)
where ⟨νp(νp − 1)⟩ and ⟨νp νd⟩ were defined in (9.28) and (9.29) with the subsequent comments. Writing this equation in the form

dmnn(t)/dt = λf ν(1 − β) mnn(t) + νβ λf λ ∫_0^t e^{−λ(t−t′)} mnn(t′) dt′ − λa mnn(t) + qnn(t)    (9.94)

with

qnn(t) = λf [ ⟨νp(νp − 1)⟩ n²(t) + 2⟨νp νd⟩ n(t) λ ∫_0^t e^{−λ(t−t′)} n(t′) dt′ ],    (9.95)
a comparison with (9.86) shows that n(t) is the Green's function for mnn(t) with the source term qnn(t). Since, as seen from (9.74), the initial condition for mnn(t) is

mnn(0) = 0,    (9.96)

the solution for mnn(t) can be written as

mnn(t) = ∫_0^t n(t − t′) qnn(t′) dt′,    (9.97)
which is the same as (9.67). As mentioned earlier, to obtain μNN one does not need to evaluate (9.97), only to identify the term qnn(t) for the evaluation of (9.68). The foregoing example illustrates the previously mentioned fact that in the backward approach one obtains a single explicit expression for the moments of all orders, which can be evaluated separately from all other auto- and cross-moments. As was seen above, in the forward approach, a coupled differential equation system between various auto- and cross-moments is obtained for all higher order moments. The mean and the modified variance of the detector counts of the single-particle cascades can be calculated by taking

z(t, T) = ∂g(x, y, v, T, t)/∂v |_{x=y=v=1}    (9.98)

and

mzz(t, T) = ∂²g(x, y, v, T, t)/∂v² |_{x=y=v=1}    (9.99)
from (9.84), respectively. For the first moment one obtains the equation

dz(t, T)/dt = λf ν(1 − β) z(t, T) + νβ λf λ ∫_0^t e^{−λ(t−t′)} z(t′, T) dt′ − λa z(t, T) + λd Δ(t, T).    (9.100)

The initial condition is z(0, T) = 0. A comparison of (9.100) with (9.86), accounting also for the initial condition for z(t, T), shows that the first moment n(t) is also the Green's function for z(t, T). Thus the solution can be written as

z(t, T) = λd ∫_0^t n(t − t′) Δ(t′, T) dt′.    (9.101)
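The Green's-function structure expressed by (9.97) and (9.101) can be verified numerically in the simplest special case: prompt neutrons only, so that n(t) = e^{−αt} and the source function reduces to qnn(t) = λf⟨νp(νp − 1)⟩n²(t). The convolution then has the closed form (λf⟨νp(νp − 1)⟩/α)(e^{−αt} − e^{−2αt}). This simplification and all parameter values are hypothetical, introduced only for the check:

```python
import math

alpha = 500.0   # prompt decay constant [1/s] (illustrative)
lam_f = 100.0   # fission detection/intensity parameter [1/s] (illustrative)
nu2   = 0.8     # <nu_p (nu_p - 1)>, second factorial moment (illustrative)

def n(t):
    # prompt-only single-particle mean: n(t) = exp(-alpha * t)
    return math.exp(-alpha * t)

def q_nn(t):
    # source function (9.95) with the delayed-neutron term switched off
    return lam_f * nu2 * n(t) ** 2

def m_nn_conv(t, steps=4000):
    # m_nn(t) = int_0^t n(t - t') q_nn(t') dt', eq. (9.97), trapezoid rule
    h = t / steps
    s = 0.0
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        s += w * n(t - i * h) * q_nn(i * h)
    return s * h

def m_nn_exact(t):
    # closed form of the same convolution in the prompt-only case
    return lam_f * nu2 / alpha * (math.exp(-alpha * t) - math.exp(-2.0 * alpha * t))
```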
The second moment can be calculated by applying (9.99) to (9.84), with the result

dmzz(t, T)/dt = λf ν(1 − β) mzz(t, T) + νβ λf λ ∫_0^t e^{−λ(t−t′)} mzz(t′, T) dt′ − λa mzz(t, T) + qzz(t, T),    (9.102)

where the source term is given as

qzz(t, T) = λf ⟨νp(νp − 1)⟩ z²(t, T) + 2 λf ⟨νp νd⟩ z(t, T) λ ∫_0^t e^{−λ(t−t′)} z(t′, T) dt′.    (9.103)
Equation (9.102) has the same structure as (9.94), with the exception that the source term is different. The initial condition, similarly to that of mnn, is

mzz(0, T) = 0,    (9.104)

see (9.74). Hence the solution can be written in the form of (9.71):

mzz(t, T) = ∫_0^t n(t − t′) qzz(t′, T) dt′.    (9.105)
Again, to calculate μZZ (T ), it was sufficient to identify the source term qzz .
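Before proceeding to the variance to mean, it is instructive to evaluate (9.90) and (9.91) numerically. The sketch below computes the prompt and delayed roots sp and sd of the denominator of (9.90) and the single-particle mean n(t) of (9.91); the one-group parameter values are hypothetical:

```python
import math

# illustrative one-group point-kinetic parameters (hypothetical values)
lam   = 0.08      # precursor decay constant lambda [1/s]
beta  = 0.007     # effective delayed neutron fraction
rho   = -0.01     # reactivity (subcritical)
Lam   = 1.0e-4    # prompt neutron generation time Lambda [s]
alpha = (beta - rho) / Lam   # prompt decay constant

# roots of the denominator of (9.90): s^2 + s*(lam + alpha) - lam*rho/Lam = 0
b = lam + alpha
c = -lam * rho / Lam
disc = math.sqrt(b * b - 4.0 * c)
s_p = (-b - disc) / 2.0   # prompt root (large magnitude, close to -alpha)
s_d = (-b + disc) / 2.0   # delayed root (small magnitude)

def n(t):
    """Single-particle mean neutron number, eq. (9.91)."""
    return ((s_p + lam) * math.exp(s_p * t) - (s_d + lam) * math.exp(s_d * t)) / (s_p - s_d)
```

Both roots are negative in a subcritical system, and n(0) = 1 reproduces the initial condition of one starting neutron.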
9.3.4 Calculation of the variance to mean

From (9.90), by using (9.65), one obtains for the stationary, source-induced mean value

N = S ñ(s = 0) = SΛ/(−ρ).    (9.106)

This is a trivial result that could also have been derived directly from, e.g., a deterministic point kinetic equation. For the stationary value of the detector counts, from (9.69) and (9.101), by a twofold application of (9.65) one obtains

Z(T) = S ∫_0^∞ z(t, T) dt = ελf N T,    (9.107)

which is of course the same result as one obtains from forward theory. The second factorial moment of the neutron number is obtained by evaluating the integral in

μNN = N ∫_0^∞ qnn(t) dt
with the source term given by (9.95). This yields the result

μNN = [λf N / (2(λ + α)ρ)] { (ρ − λΛ) ⟨νp(νp − 1)⟩ − 2λΛ ⟨νp νd⟩ }.    (9.108)
This is the same as was obtained by the use of forward theory, equation (9.31). To calculate the modified variance of the detector counts in a stationary reactor with a source, one simply has to use (9.103) in (9.70), which reads

μZZ(T) = N ∫_0^∞ qzz(t, T) dt.

The integrals can be performed analytically by also using (9.101) in (9.103). The result of this integral will be given directly in the Feynman-alpha formula below. The latter is usually written in the form

Y(t) ≡ σ²ZZ(t)/Z(t) − 1 = μZZ(t)/Z(t),    (9.109)
where Z(t) and μZZ(t) are given by (9.107) and (9.70), respectively, with T re-denoted as t after the integral over t was performed. One obtains the final result as

μZZ(t)/Z(t) = [ ελf² / ((α + λ)(αp − αd)) ] { W(αp) fp(t) − W(αd) fd(t) },    (9.110)

where

W(s) ≡ (1 − λ²/s²) ⟨νp(νp − 1)⟩ − (λ²/s²) 2⟨νp νd⟩    (9.111)

and

fi(t) ≡ 1 − (1 − e^{−αi t})/(αi t),  i = p, d.    (9.112)

Here the notations

αp,d = −sp,d    (9.113)

were introduced as before. These expressions are identical with those derived from the forward approach, equations (9.40) and (9.41).
9.3.5 Feynman-alpha formula with six delayed neutron groups

For completeness, we quote here the Feynman-alpha formula accounting for six delayed neutron groups. It will be given without derivation, since the algebra involved is rather lengthy; for more detailed derivations, the reader is referred to [11, 46, 68]. To cite the formula we need some definitions. For obvious reasons, instead of the probabilities P(N, C, Z, T, t) and pf(k, ℓ), we need to consider the probabilities p(n, c1, . . . , c6, z, T, t) and P(N, C1, . . . , C6, Z, T, t), as well as pf(n, m1, . . . , m6), whose meanings should be clear from the notation. Here Ci, i = 1, . . . , 6 are the six delayed neutron precursor groups with corresponding decay constants λi, i = 1, . . . , 6. Auto- and cross-terms of the joint factorial moments of the new variables will hence occur in the formulae. The prompt neutrons will be counted as group number zero whenever indexed formulae are used.
It can easily be confirmed that the expectation n(t) of the single-particle induced distribution p(n, c1, . . . , c6, z, T, t) is still the Green's function of all higher moments of all variables. The time-dependence of n(t), which will thus determine the time-dependence of all moments, is given by the roots si of the characteristic equation, often written in a form called the 'inhour equation':

Λ s + Σ_{i=1}^{6} βi s/(s + λi) − ρ = G^{−1}(s) = 0,    (9.114)

where G(s) is the so-called zero power transfer function, and

βi = νi / Σ_{i=0}^{6} νi.

The Feynman-alpha formula can now be written in the form

Y(t) = μZZ(t)/Z(t) = Σ_{i=0}^{6} Yi [ 1 − (1 − e^{−αi t})/(αi t) ],    (9.115)
where αi = −si, i = 0, 1, . . . , 6, and, as in the previous section, the variable t now stands for the measurement time length. The actual form of the Yi will depend on which second order moments of the delayed neutron precursors, as calculated from pf, one keeps as different from zero. On physical grounds one can expect that the moments

⟨νi(νi − 1)⟩  and  ⟨νi νj⟩,  i ≠ j,  i, j = 1, . . . , 6    (9.116)

are exactly zero and negligible, respectively. For such a case the solution can be written compactly as [11]

Yi = 2 ε Dν Ai G(αi)/αi,    (9.117)
where the Ai are the residues of G at si. In a more explicit way, introducing the notations

zi = Π_{j=1}^{6}(si + λj) / Π_{j≠i}(si − sj) = [ 1 + (1/Λ) Σ_{j=1}^{6} βj λj/(si + λj)² ]^{−1}    (9.118)

and

ωi = (zi/si) Σ_{j=0}^{6} zj/(si + sj),  i = 0, 1, . . . , 6,    (9.119)

the result can be written as

Yi = 2 ε λf² ωi [ ⟨ν0(ν0 − 1)⟩ − 2 Σ_{j=1}^{6} (λj²/(si² − λj²)) ⟨ν0 νj⟩ ].    (9.120)
For the case of zi = ωi = λi = 0, i = 2, . . . , 6, equations (9.115)–(9.120) revert to (9.110)–(9.112). For the hypothetical case that all terms in (9.116) are kept as non-zero, and also the source is assumed to have compound Poisson statistics (corresponding to a spallation source, see the next chapter), the full solution is given in [68].
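The roots of the inhour equation (9.114), and hence the αi = −si entering (9.115), are easily found numerically. The sketch below uses a set of typical six-group delayed neutron constants for thermal fission of 235U (illustrative textbook values, not taken from this chapter) and simple bisection between the poles s = −λi:

```python
import math

# typical six-group delayed neutron data (illustrative values)
lam  = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]      # lambda_i [1/s]
beta = [0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273]
rho  = -0.005     # reactivity (subcritical)
Lam  = 1.0e-4     # prompt neutron generation time Lambda [s]

def inhour(s):
    """G^{-1}(s) = Lambda*s + sum_i beta_i*s/(s + lambda_i) - rho, cf. (9.114)."""
    return Lam * s + sum(b * s / (s + l) for b, l in zip(beta, lam)) - rho

def bisect(f, a, b, it=200):
    """Bisection for a root of f in [a, b]; assumes a sign change on the bracket."""
    fa = f(a)
    for _ in range(it):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# One root lies in each interval between the poles s = -lambda_i, one in
# (-lambda_1, 0), and the prompt root below the largest pole: seven in total.
eps = 1e-9
poles = sorted(-l for l in lam)                      # -3.01 ... -0.0124
brackets = [(-1.0e4, poles[0] - eps)]
brackets += [(poles[i] + eps, poles[i + 1] - eps) for i in range(5)]
brackets += [(poles[5] + eps, -eps)]
roots = sorted(bisect(inhour, a, b) for a, b in brackets)
alphas = [-s for s in roots]                         # alpha_i = -s_i, all positive
```

The largest αi is the prompt decay constant, close to (β − ρ)/Λ, while the remaining six are governed by the precursor decay constants.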
9.4 Evaluation of the Feynman-Alpha Measurement

In the Feynman-alpha measurement, the procedure is to record the number of counts in disjoint, consecutive time intervals of length T (gate width), which may be separated by a given time advancement θ. Let Z1, Z2, . . . , Zn denote the numbers of counts in n consecutive T-intervals. The prompt part of the variance of the number of neutrons detected in the gate width T, i.e.

σ²ZZ(T) = Z(T) { 1 + ε ν̄² Dν (λf/α)² [ 1 − (1 − e^{−αT})/(αT) ] },    (9.121)

is usually estimated by the empirical variance

Vn(T) = (1/(n − 1)) Σ_{i=1}^{n} ( Zi − (1/n) Σ_{i=1}^{n} Zi )²    (9.122)
calculated from the measured data. It is important to note that Vn(T) is only an asymptotically unbiased estimate of σ²ZZ(T), i.e. for finite n

E{Vn(T)} ≠ σ²ZZ(T).    (9.123)
(9.125)
1 2 Zi − < Z >2 . n − 1 i=1
(9.126)
n
< Z >= one can immediately write that n
Vn (T ) =
After some short calculations one obtains n n E{Vn (T )} = (σ 2 (T ) + [Z(T )]2 ) − E{< Z >2 }, n − 1 ZZ n−1
(9.127)
3 Naturally, there are known some calculations aiming to consider the geometry of the detector, based on the finiteness and the spatial inhomogeneity
of the multiplying media. Nevertheless, such refinements do not affect the essence of our considerations.
251
Reactivity Measurement Methods in Traditional Systems
where
⎛ ⎞ n 1 E{< Z >2 } = 2 ⎝ E{Zi2 } + E{Zi Zj }⎠ . n i=1
(9.128)
i =j
By accounting for the covariance Ri,j = E{Zi Zj } − E{Zi } E{Zj },
(9.129)
it follows from (9.128) that E{< Z >2 } =
1 2 n−1 1 Ri,j , (σZZ (T ) + [Z(T )]2 ) + [Z(T )]2 + 2 n n n
(9.130)
i =j
hence from (9.127), the equation 2 E{Vn (T )} = σZZ (T ) −
1 Ri,j n(n − 1)
(9.131)
i =j
is obtained, from which it is seen that the bias is equal to Bn (T ) =
1 Ri,j . n(n − 1)
(9.132)
i =j
The covariance Ri,j has been already determined in Section 4.2.3. In the case of the model investigated, the time interval θi,j between the intervals T belonging to the sample elements Zi and Zj is θi,j = (j − i)θ + (j − i − 1)T = (j − i)(T + θ) − T ,
(9.133)
thus applying the notations used in this chapter one can write Ri,j = R(T )e αT e −α|j−i|(T +θ) ,
(9.134)
where
1 2 Sλf λf 2 2 ν Dν (1 − e −αT )2 . R(T ) = 2 2 α α The task is thus to determine the sum e −α|j−i|(T +θ) = Hn .
(9.135)
(9.136)
i =j
By introducing χ = α(T + θ), it can be seen that ⎞ ⎛ n n n e −|j−i|χ = 2 e −(j−i)χ = 2 ⎝ e −(j−1)χ + e −(j−2)χ + · · · + e −(j−k+1)χ + · · · + e −χ ⎠ , Hn = i =j
j>i
j=2
j=3
j=k
from which by simple rearrangement one obtains Hn = 2[(n −1)e −χ +(n −2)e −2χ +· · ·+e −(n−1)χ ] = 2e −nχ [(n −1)e (n−1)χ +(n −2)e (n−2)χ +· · ·+e χ ]. (9.137) Define the geometrical series γn−1 (χ) = e χ + e 2χ + · · · + e (n−1)χ =
e nχ − e χ eχ − 1
252
Imre Pázsit & Lénárd Pál
Bias factor 0.3
u0
0.25
u 10 s
0.2 0.15 0.1 0.05 100
300
500
700
900
n
Figure 9.2 Dependence of the bias factor En (T, θ) on the number n of the replays of the gate width T = 10 μs in the case of α = 500 s−1 for values of θ = 0 and 10 μs.
and notice that Hn = 2e −nχ
dγn−1 (χ) . dχ
Since dγn−1 (χ) e nχ − e χ ne nχ − e χ , = χ − eχ χ dχ e −1 (e − 1)2 one has
2 1 − e −(n−1)χ χ −nχ Hn = χ + n−e e . e −1 eχ − 1 Based on this, from (9.132) it follows that 1 1 − e −nα(T +θ) 2e αT Bn (T ) = R(T ) 1 − . n − 1 e α(T +θ) − 1 n(e α(T +θ) − 1)
(9.138)
(9.139)
It is seen that if n → ∞, then Bn (T ) → 0, i.e. the empirical variance Vn (T ) is asymptotically unbiased even in the case when θ = 0. In reality, however, the numbers n1 , n2 , . . . , nN of sample elements in the gate widths T1 , T2 , . . . , TN are always finite, therefore the bias has to be taken into consideration. Taking into account expression (9.139), one obtains from (9.131) that *
2 λf 1 − e −αT 2 [1 − En (T , θ)] , E{Vn (T )} = Z(T ) 1 +
ν Dν 1 − (9.140) α αT where
(1 − e −αT )2 1 1 − e −nα(T +θ) En (T , θ) = 1− (9.141) n − 1 (e αθ − e −αT )(αT + e −αT − 1) n(e α(T +θ) − 1) is the bias factor characterising the distortion of the empirical variance. Figure 9.2 shows the dependence of the bias factor on the number n of the repetitions of the gate width T = 10 μs. In order to illustrate the characteristics of the dependence, the values α = 500 s−1 and θ = 0 and 10 μs have been chosen for numerical calculations. It is worth mentioning that the bias factor does not depend on either the Diven factor Dν or on the detector efficiency . For a given subcriticality, the decisive parameters are: the number n of the recorded gate widths, their length T and the separation time θ between the two adjacent gates. Figure 9.3 shows the dependence of the bias factor En (T , θ) on the gate width T for three values of n of the repetitions in the case of α = 500 s−1 when the time advancement θ is equal to 10 μs. One can see that at a fixed replay number n, the bias factor increases with the decreasing of the gate width. Since the gate widths
253
Reactivity Measurement Methods in Traditional Systems
0.1
500 s1 n 300 n 500 n 700
Bias factor
0.08 0.06 0.04 0.02
u 10 s 0
0.02
0.04 0.06 Gate width
0.08
0.1
Figure 9.3 Dependence of the bias factor En (T, θ) on the gate width T at three numbers n of replays in the case of α = 500 s−1 for value of θ = 10 μs.
smaller than α−1 are decisive in the evaluation of the measurements, the bias of the empirical variance cannot be neglected. It is known that the 'standard recipe' of the evaluation of measurement data is as follows: let Ti be a recording interval, and determine the empirical variance Vni(Ti) from the numbers of counts Zi,1, Zi,2, . . . , Zi,ni due to the ni disjoint gate widths Ti, following each other by the advancement time θ ≥ 0. Perform this procedure for the intervals T1 < · · · < Ti < · · · < TN, and then find the unknown parameters by a non-linear fit of Vn(T)/Z(T) to the function σ²ZZ(T)/Z(T).
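As a minimal sketch of this recipe (stdlib only, with synthetic data; a real evaluation would use a non-linear least-squares routine such as scipy.optimize.curve_fit), the following generates variance-to-mean values from the prompt Feynman curve and recovers α by a grid search, exploiting the fact that for fixed α the optimal amplitude enters linearly:

```python
import math

def y_model(T, alpha, y_inf):
    """Prompt Feynman curve: Y(T) = y_inf * (1 - (1 - exp(-a*T)) / (a*T))."""
    x = alpha * T
    return y_inf * (1.0 - (1.0 - math.exp(-x)) / x)

# synthetic "measured" variance-to-mean values on a set of gate widths
alpha_true, y_true = 500.0, 2.0
gates = [0.0005 * k for k in range(1, 21)]          # 0.5 ms ... 10 ms
data = [1.0 + y_model(T, alpha_true, y_true) for T in gates]

def sse(alpha):
    # for a fixed alpha, the best-fitting amplitude y_inf is a linear problem
    f = [y_model(T, alpha, 1.0) for T in gates]
    y_hat = sum(fi * (d - 1.0) for fi, d in zip(f, data)) / sum(fi * fi for fi in f)
    return sum((1.0 + y_hat * fi - d) ** 2 for fi, d in zip(f, data))

best = min((sse(a), a) for a in [50.0 * k for k in range(2, 41)])   # grid 100..2000 1/s
```

With noiseless synthetic data the grid search recovers the true decay constant exactly; with real data, the fit quality as a function of α behaves similarly but with a finite residual.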
This procedure is correct only when the bias of the empirical variance Vni(Ti), i = 1, 2, . . . , N is negligible. However, in most of the important cases the bias is rather large; therefore it seems necessary to fit the time series of the relative empirical variances to the function E{Vn(T)}/Z(T). There are many different computer codes available to perform the fitting; however, they will not be listed here.
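The bias factor is straightforward to evaluate directly; the sketch below implements the expression (9.141) as reconstructed here and reproduces the qualitative behaviour of Figs 9.2 and 9.3: the bias decreases with growing n and θ, and increases as the gate width shrinks. The value α = 500 s−1 follows the figures; all other numbers are illustrative:

```python
import math

def bias_factor(n, T, theta, alpha=500.0):
    """E_n(T, theta) of (9.141) for the prompt, infinite-medium model.

    E_n = 1/(n-1) * (1-e^{-aT})^2 / ((e^{a*theta}-e^{-aT})*(aT+e^{-aT}-1))
          * [1 - (1-e^{-n*a*(T+theta)}) / (n*(e^{a*(T+theta)}-1))]
    """
    aT = alpha * T
    chi = alpha * (T + theta)
    pref = (1.0 - math.exp(-aT)) ** 2 / (
        (math.exp(alpha * theta) - math.exp(-aT)) * (aT + math.exp(-aT) - 1.0))
    brack = 1.0 - (1.0 - math.exp(-n * chi)) / (n * (math.exp(chi) - 1.0))
    return pref * brack / (n - 1.0)

T = 10e-6   # gate width of 10 microseconds, as in Fig. 9.2
```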
9.5 The Rossi-Alpha Method

An alternative method for the determination of the parameters of the multiplying system is the autocovariance (alternatively, autocorrelation) or Rossi-alpha method [56]. The method was originally introduced by B. Rossi in his studies of cosmic-particle cascades, later termed showers, where the detection of coincidences was used to indicate the occurrence of cosmic particles. To calculate the autocovariance of the detector counts in short time intervals around the time instants t1 and t2 = t + τ, one needs to consider two-point (in time) distributions. As was shown in Part I of this book, for a prompt branching process such covariance functions can easily be composed from one-point distributions. In the case when one has to account for delayed neutrons, the calculations are somewhat more involved. For such a case the backward equation is better suited than the forward approach; hence we shall derive the Rossi-alpha formula with the backward equation only. Again, the calculations will be made by assuming one group of delayed neutrons; the case of six groups in the most general setting is found in [68]. This means that one needs to repeat similar steps as in the case of the Feynman-alpha formula, i.e. we need to consider both the single-particle and the source-induced distributions. Below we shall go through the main steps of the calculation. In contrast to the Feynman-alpha case, one now needs to define the following probability distributions:

p(n1, c1, z1, t; n2, c2, z2, t + τ),    (9.142)
which gives the probability that at times t and t + τ there will be n1 and n2 neutrons and c1 and c2 delayed neutron precursors in the system, respectively, and that there will have been z1 and z2 detections during [t − dt, t] and [t + τ − dt, t + τ], respectively, due to one single neutron in the system at time t = 0. Notation on the time interval length dt will be omitted throughout. Likewise one needs

P(N1, C1, Z1, t; N2, C2, Z2, t + τ),    (9.143)
which is the same as (9.142), but induced by an extraneous source of intensity S switched on at t = 0 when no neutrons were found in the system. Finally, one will also need the probability w(n1 , c1 , z1 , t; n2 , c2 , z2 , t + τ)
(9.144)
which is again the same as (9.142), but with the difference that there existed only one delayed neutron precursor in the system at t = 0. A simple calculation shows that the relationship between the asymptotic forms of the generating functions of the p and P of (9.142) and (9.143) is given as

G(x1, y1, v1; x2, y2, v2, τ) = lim_{t→∞} G(x1, y1, v1, t; x2, y2, v2, t + τ) = exp{ S ∫_0^∞ [g(x1, y1, v1, t; x2, y2, v2, t + τ) − 1] dt },    (9.145)
and it can also be shown that this limit exists in subcritical systems. By derivation from (9.145), formulae can be obtained for the stationary values of the autocovariance of the neutron number and of the detector counts, defined as

CNN(τ) = lim_{t→∞} CNN(t, t + τ) = lim_{t→∞} [ ⟨N(t)N(t + τ)⟩ − ⟨N(t)⟩⟨N(t + τ)⟩ ]    (9.146)
and similarly for CZZ(τ). A simple calculation yields

CNN(τ) = S ∫_0^∞ mnn(t, τ) dt = N ∫_0^∞ { qnn(t, τ) + δ(t)n(τ) } dt,    (9.147)
where, similarly to the previous section, mnn(t, τ) = ⟨n(t)n(t + τ)⟩, and use was made of the fact that the mnn(t, τ) of the single-particle induced cascade is given as a convolution of n(t) with the source function qnn(t, τ) + δ(t)n(τ). The latter will be derived below. In an analogous manner, a similar expression is obtained for the detector counts in small time intervals dt and dτ around t and t + τ, respectively:

CZZ(τ) = S ∫_0^∞ mzz(t, τ) dt = N ∫_0^∞ qzz(t, τ) dt.    (9.148)
Here the notation on the infinitesimal time intervals was dropped. One now needs to consider the master equations for p and w. For the corresponding generating functions g and h, these can be written, similarly to (9.81) and (9.82), as

dg(x1, y1, v1, t; x2, y2, v2, t + τ)/dt = λf Σ_{k,ℓ} pf(k, ℓ) g^k h^ℓ + λc − λa g + λd { v1 Δ(t, dt) + v2 Δ(t + τ, dt) + Δ̄1,2 }    (9.149)
and

dh(x1, y1, v1, t; x2, y2, v2, t + τ)/dt = λ { g(· · ·) − h(· · ·) },    (9.150)
where Δ(t, dt) was defined in (9.79), and Δ̄1,2 = 1 − Δ(t, dt) − Δ(t + τ, dt). For the sake of simplicity, the arguments of g and h are omitted on the right-hand sides of equations (9.149) and (9.150). The initial conditions are

g(x1, y1, v1, 0; x2, y2, v2, τ) = x1 g(x2, y2, v2, τ)    (9.151)
h(x1 , y1 , v1 , 0; x2 , y2 , v2 , τ) = y1 h(x2 , y2 , v2 , τ),
(9.152)
and where g(x2, y2, v2, τ) and h(x2, y2, v2, τ) are the generating functions of the one-point distributions, treated in Section 9.3. It is to be noted that on the left-hand sides of the mixed-type backward equations (9.149) and (9.150), a total time derivative appears instead of a partial derivative. This is the consequence of the fact that the time-dependence was shifted from the initial time t0, appearing in the true backward equation, to the final times t and t + τ. Due to the total derivative, in the solutions the integrals will affect both time variables in the arguments. Calculating the one-point expected values n(t) and z(t, dt) leads to the same results as before, with the slight difference that, due to the infinitesimal detection time length, one obtains

z(t, dt) = λd ∫_0^t n(t − t′) Δ(t′, dt) dt′ = ελf n(t) dt.    (9.153)
The stationary first moment of the detector counts in a system with a source is thus given as

Z(∞, dt) = ελf N dt = ελf (SΛ/(−ρ)) dt ≡ Z dt,    (9.154)
where the parameter $Z$ was defined as the constant detection rate in the stationary system. For the derivation of the second moments $m_{nn}(t, \tau)$ and $m_{zz}(t, \tau)$ we shall use condensed notations. We recast (9.86) as

$$\frac{dn(t)}{dt} = \lambda_f\langle\nu\rangle(1 - \beta)n(t) + \langle\nu\rangle\beta\lambda_f\lambda\int_0^t e^{-\lambda(t - t')}n(t')\,dt' - \lambda_a n(t) + \delta(t) \equiv \hat{L}\,n(t) + \delta(t), \tag{9.155}$$

where the integral operator $\hat{L}$ is defined by (9.155). With this notation the equation obtained for $m_{nn}(t, \tau)$ can be written as

$$\frac{dm_{nn}(t, \tau)}{dt} = \hat{L}\,m_{nn}(t, \tau) + q_{nn}(t, \tau) + \delta(t)n(\tau) \tag{9.156}$$

with

$$q_{nn}(t, \tau) = \lambda_f\langle\nu_p(\nu_p - 1)\rangle n(t)n(t + \tau) + \lambda_f\lambda\langle\nu_p\rangle\langle\nu_d\rangle\left[n(t)\int_0^{t+\tau} e^{-\lambda(t + \tau - t')}n(t')\,dt' + n(t + \tau)\int_0^t e^{-\lambda(t - t')}n(t')\,dt'\right]. \tag{9.157}$$

The term $\delta(t)n(\tau)$ in (9.156) arises from the fact that the initial condition $m_{nn}(0, \tau) = n(\tau)$, which can be derived from (9.151), was added to the right-hand side. From here, one has

$$C_{NN}(\tau) = N\int_0^{\infty} q_{nn}(t, \tau)\,dt + N n(\tau). \tag{9.158}$$
Imre Pázsit & Lénárd Pál
Calculation of the integral will be deferred until the autocovariance of the detector counts is calculated. For this one needs the equation for $m_{zz}$, which reads as

$$\frac{dm_{zz}(t, \tau)}{dt} = \hat{L}\,m_{zz}(t, \tau) + q_{zz}(t, \tau) \tag{9.159}$$
with

$$q_{zz}(t, \tau) = \lambda_f\langle\nu_p(\nu_p - 1)\rangle z(t)z(t + \tau) + \lambda_f\lambda\langle\nu_p\rangle\langle\nu_d\rangle\left[z(t, dt)\int_0^{t+\tau} e^{-\lambda(t + \tau - t')}z(t', dt)\,dt' + z(t + \tau, dt)\int_0^t e^{-\lambda(t - t')}z(t', dt)\,dt'\right]. \tag{9.160}$$

There is now no $\delta$-function term on the right-hand side of (9.159), corresponding to the initial condition $m_{zz}(0, \tau) = 0$. With the above, the autocovariance $C_{ZZ}(\tau)$ is given by the integral

$$C_{ZZ}(\tau) = N\int_0^{\infty} q_{zz}(t, \tau)\,dt. \tag{9.161}$$
Due to equation (9.153), it is easy to see that

$$q_{zz}(t, \tau) = (\lambda_f)^2 q_{nn}(t, \tau)\,dt\,d\tau. \tag{9.162}$$
The autocovariance of the random variables $n(t)$ and $z(t, dt)$ has therefore a very similar structure, except that the autocovariance of the neutron number contains an extra term, $N n(\tau)$. This term is due to the initial neutrons found in the system at the beginning of the measurement. The initial detections in a time interval $dt$ at the beginning of the measurement do not lead to a similar term, since the detections remove chains that the detected particles otherwise could have started. With these preliminaries, the Rossi-alpha formula is given as follows. The formula is usually defined in the literature [11] as

$$R(\tau)d\tau = \lim_{t\to\infty}\frac{\langle Z(t, dt)Z(t + \tau, d\tau)\rangle}{\langle Z(t, dt)\rangle} = \frac{C_{ZZ}(\tau)}{Z\,dt} + Z\,d\tau, \tag{9.163}$$

where $Z$ is the constant detection rate in the stationary system, defined in (9.154). In the literature, the first term in the last equality of (9.163) is often referred to as the 'correlated counts', and the very last term as the 'uncorrelated background'. The information on the sought parameter $\alpha$ is contained in the covariance to the mean, i.e. the first of these two terms, through its dependence on $\tau$. Thus, due to its simpler form and the fact that the information is contained in that term, in most derivations we shall refer to the Rossi-alpha formula as the covariance to the mean, $P_r(\tau)d\tau$, i.e.

$$P_r(\tau)d\tau = \frac{C_{ZZ}(\tau)}{Z\,dt}. \tag{9.164}$$
However, in some later parts, when making comparisons with experiments, the autocovariance $R(\tau)$ will also be used. Evaluation of the integral in (9.161) leads to the result

$$P_r(\tau)d\tau = \frac{\lambda_f^2\,d\tau}{2(\alpha + \lambda)(\alpha_p - \alpha_d)}\left[W(\alpha_p)f_p(\tau) - W(\alpha_d)f_d(\tau)\right], \tag{9.165}$$

where the functions $W(s)$ are identical with those in (9.111), whereas the functions $f_i(\tau)$ are now given by

$$f_i(\tau) \equiv \alpha_i e^{-\alpha_i\tau}, \qquad i = p, d. \tag{9.166}$$
Generalisation to the case of six groups of delayed neutron precursors can be made the same way as in the case of the Feynman-alpha formula. One obtains

$$P_r(\tau) = \frac{1}{2}\sum_{i=0}^{6} Y_i\alpha_i e^{-\alpha_i\tau}, \tag{9.167}$$
where the factors Yi were given in (9.118) and (9.120).
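As a numerical illustration (not taken from the text), the sketch below evaluates the six-group Rossi-alpha formula (9.167) with hypothetical amplitudes $Y_i$ and decay constants $\alpha_i$, and recovers the prompt decay constant by a log-linear fit on short time scales, where the prompt mode dominates:

```python
import numpy as np

# Hypothetical decay constants (1/s) and amplitudes for the prompt mode (i = 0)
# and six delayed-precursor modes; illustrative values only, not from the text.
alphas = np.array([250.0, 3.0, 1.2, 0.32, 0.12, 0.03, 0.01])
Y = np.array([2.0, 0.05, 0.04, 0.03, 0.02, 0.01, 0.005])

def rossi_alpha(tau):
    """P_r(tau) = (1/2) * sum_i Y_i * alpha_i * exp(-alpha_i * tau), eq. (9.167)."""
    tau = np.atleast_1d(tau)
    return 0.5 * np.sum(Y[:, None] * alphas[:, None] * np.exp(-np.outer(alphas, tau)), axis=0)

# On short time scales the prompt mode dominates: a log-linear fit of P_r(tau)
# for tau << 1/alpha_1 recovers the prompt decay constant alpha_0.
tau = np.linspace(1e-4, 5e-3, 200)
slope, _ = np.polyfit(tau, np.log(rossi_alpha(tau)), 1)
print(f"fitted prompt decay constant: {-slope:.1f} (true value {alphas[0]})")
```

The slight bias of the fit comes from the nearly constant delayed-neutron background, which is small on this time scale.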
9.6 Mogilner's Zero Probability Method

A remarkable method was suggested by Mogilner [71] for the determination of the prompt decay constant $\alpha$ in the early sixties. He assumed that the probability of detecting $Z(t) = Z$ neutrons during the time interval $t$ in a steady subcritical reactor with a steady source may be expressed by a special negative binomial probability. Denoting this probability by $P_M(Z, t)$, Mogilner introduced for the generating function

$$G_M(z, t) = \sum_{Z=0}^{\infty} P_M(Z, t)z^Z \tag{9.168}$$

the following expression:

$$G_M(z, t) = [1 + (1 - z)\psi(t)]^{-\langle Z(t)\rangle/\psi(t)}, \tag{9.169}$$

where

$$\langle Z(t)\rangle = \lambda_f\frac{S}{-\rho}\,t \tag{9.170}$$

and

$$\psi(t) = Y f(t) = \frac{\lambda_f^2\langle\nu_p\rangle^2 D_{\nu_p}}{\alpha^2}\left(1 - \frac{1 - e^{-\alpha t}}{\alpha t}\right). \tag{9.171}$$

The notations used in these formulae have already been defined. It is worth noting that the heuristic generating function (9.169) is constructed in such a way that its first two factorial moments are the same as those that can be derived from the exact equation (9.64). The probability of obtaining no counts in the time interval $t$ is then equal to

$$P_M(0, t) = G_M(0, t) = [1 + \psi(t)]^{-\langle Z(t)\rangle/\psi(t)}, \tag{9.172}$$
from which it follows that

$$\ln\frac{1}{P_M(0, t)} = \langle Z(t)\rangle\frac{\ln[1 + \psi(t)]}{\psi(t)}. \tag{9.173}$$

For the determination of $\psi(t)$, i.e. for the calculation of the decay constant $\alpha$, one has to estimate the zero probability $P_M(0, t)$ by performing measurements in $n$ time intervals $t$ separated by long enough waiting times, and then by determining the number $k \leq n$ of time intervals $t$ in which no neutron detection was observed. However, it is evident that the applicability of Mogilner's formula (9.169) needs theoretical confirmation. This was made first by Pál [46] and later in a more detailed form by Szeless [72]. The zero probability $P(0, t)$, i.e. the probability that there is no neutron detection during time $t$ in a steady subcritical reactor with a steady neutron source, has already been calculated in Section 4.2.4. By using the notations of this chapter, one can rewrite equation (4.102) in the following form:
$$\ln P(0, t) = -\frac{2\langle Z(t)\rangle}{\gamma + 1}\left\{1 + \frac{2}{(\gamma - 1)\alpha t}\ln\left[\frac{(\gamma + 1)^2 - (\gamma - 1)^2\exp\{-\gamma\alpha t\}}{4\gamma}\right]\right\}, \tag{9.174}$$

where

$$\gamma = \sqrt{1 + 2Y} \quad\text{and}\quad Y = \frac{\lambda_f^2\langle\nu_p\rangle^2 D_{\nu_p}}{\alpha^2}. \tag{9.175}$$

One has to mention that this equation is only valid when the probability of emitting $n$ neutrons per fission is different from zero for the values $n = 0, 1$ and $2$. It is clearly seen that (9.174) is entirely different from the equation

$$\ln P_M(0, t) = -\frac{\langle Z(t)\rangle}{Y f(t)}\ln[1 + Y f(t)], \tag{9.176}$$

but it can easily be shown that the first two terms of the expressions (9.174) and (9.176), when expanded in a power series in $Y$, are the same; i.e. Mogilner's formula is a good approximation if $Y \ll 1$. This condition, however, means that the variance of the counts hardly differs from that of a Poisson distribution. The determination of $\alpha$, on the other hand, is based on the deviation from a Poisson distribution; i.e. the better the condition $Y > 1$ is fulfilled, the better the method works. Therefore, one should rather use expression (9.174) instead of Mogilner's formula for the evaluation of measured data. It was already mentioned that the separation time $\theta$ between the measuring intervals $t$ should be sufficiently large. In Section 4.2.4 it was proved that the probability of the non-occurrence of neutron detections in two time intervals separated by a given time $\theta$ becomes practically the product of two probabilities when $\theta$ converges to infinity. Therefore, if $\theta$ is chosen large enough, then the events in successive intervals of length $t$ are almost independent, and the probability of finding no count in $k$ cases out of a total number of $n \geq k$ measurements can be given by

$$P(n, k, t) = \binom{n}{k}[P(0, t)]^k[1 - P(0, t)]^{n-k}. \tag{9.177}$$

By using the maximum likelihood method, from the function $L(n, k, t, \alpha, Y) = \ln P(n, k, t)$ the parameters $\alpha$ and $Y$ can be estimated. More detailed information about the estimation of the parameters can be found in [73].
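A minimal numerical sketch of this estimation procedure is given below. All parameter values are hypothetical; Mogilner's forms (9.171)–(9.172) are used for the zero probability, and idealised, noise-free zero-count fractions stand in for the measured $k/n$ of eq. (9.177):

```python
import numpy as np

# Sketch of the zero-probability method: P(0, t) is measured for several gate
# widths t, and (alpha, Y) are estimated by maximising the binomial likelihood
# of eq. (9.177).  Mogilner's forms (9.171)-(9.172) are used for P(0, t);
# all parameter values are hypothetical.
def psi(t, alpha, Y):
    return Y * (1.0 - (1.0 - np.exp(-alpha * t)) / (alpha * t))   # eq. (9.171)

def p0(t, alpha, Y, F):
    ps = psi(t, alpha, Y)
    return (1.0 + ps) ** (-(F * t) / ps)                          # eq. (9.172)

alpha_true, Y_true, F = 120.0, 0.8, 50.0      # decay constant, Y, count rate
gates = np.array([0.002, 0.005, 0.01, 0.02, 0.05])
k_over_n = p0(gates, alpha_true, Y_true, F)   # idealised zero-count fractions

# Grid-search maximum likelihood, cf. L(n, k, t, alpha, Y) = ln P(n, k, t):
A = np.linspace(60.0, 200.0, 141)             # alpha grid
Yg = np.linspace(0.3, 1.5, 121)               # Y grid
def loglik(a, y):
    p = p0(gates, a, y, F)
    return np.sum(k_over_n * np.log(p) + (1.0 - k_over_n) * np.log1p(-p))

ll = np.array([[loglik(a, y) for y in Yg] for a in A])
ia, iy = np.unravel_index(np.argmax(ll), ll.shape)
print(f"alpha_hat = {A[ia]:.1f}, Y_hat = {Yg[iy]:.2f}")
```

With real data, `k_over_n` would be replaced by the observed fractions $k/n$, and the grid search by any standard optimiser.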
C H A P T E R   T E N
Reactivity Measurements in Accelerator Driven Systems
Contents
10.1 Steady Spallation Source
10.2 Pulsed Poisson Source with Finite Pulse Width
10.3 Pulsed Compound Poisson Source with Finite Width
10.4 Periodic Instantaneous Pulses
The traditional methods of reactivity measurement with fluctuation analysis, discussed in the previous chapter, are all based on the use of a steady extraneous source with Poisson statistics. In the measurements a radioactive source is used, such as an Am–Be, Sb–Be or Pu–Be source with a constant intensity and simple Poisson statistics. The study of the statistical fluctuations in subcritical source-driven systems became interesting again recently due to the interest in so-called accelerator driven systems (ADS) [74–76, 83]. An ADS is a subcritical reactor, driven by a strong external source, which utilises e.g. the spallation reaction for neutron generation. The advantages of such a system are that it has excellent operational safety properties (a criticality accident is practically impossible), can utilise fertile nuclides such as $^{232}$Th or $^{238}$U as fuel much more easily than traditional reactors (thereby having access to more abundant fuel reserves than current reactors), and finally, in the long run and with proper design, it would produce less high-level nuclear waste. As a matter of fact, an ADS can incinerate more waste than it produces, so it can also be used primarily for transmutation of nuclear waste with energy production as a by-product. Due to the faster neutron spectrum and hence shorter neutron generation times, as well as the smaller fraction of delayed neutrons in most minor actinides whose burning is one purpose of the ADS, such a system has to be run at a relatively deep subcriticality, such as with a $k_{\mathrm{eff}} = 0.95$ or lower. Hence, to achieve sufficiently high power, a rather intense extraneous source is needed. The dominating current ADS concept is based on a source utilising spallation. A spallation source will differ from a traditional one in at least two respects. First, it emits a large, random number of neutrons on each spallation event, hence it will have so-called compound Poisson statistics.
Second, most likely it will run in a pulsed mode, i.e. it will be non-stationary. Both of these features deviate from those of the traditional sources. Hence the statistics of the neutron numbers and the detection events will also be different in subcritical systems driven with such a source. The branching processes generated in a subcritical system driven with a spallation source will be investigated in this chapter. First, the Feynman- and Rossi-alpha formulae will be derived by assuming a steady source with compound Poisson statistics. Then the same formulae will be derived for periodic pulsed sources with a Poisson distribution of neutrons in the pulse. Both deterministic and random pulsing will be considered. Just as in the preceding chapter, the treatment will be confined to infinite homogeneous systems and detectors and to one energy group. This will provide some, albeit not perfect, transparency to the derivations and results, while still being suitable for comparison with experimental results and evaluation of measurements. Interested readers will find space- and energy-dependent treatments in [28, 64–66, 77, 78].
10.1 Steady Spallation Source

In a spallation source, a large and random number of neutrons (typically several tens of neutrons) are generated by each incoming projectile, usually a high-energy proton. In a thick target, a whole shower of spallation reactions will take place. It will be assumed that the target is small enough such that all neutrons in one reaction can be assumed to be born simultaneously. Such sources will be called multiple emission sources. These were already described in Part I, Section 3.2, for the case without delayed neutrons. Again, it would be possible to extend the formalism of Chapter 3 to include delayed neutrons. However, similarly to the previous chapter, the traditional reactor physics derivation will be given here instead. It can be mentioned in passing that emission of multiple neutrons from a source takes place even with sources based on spontaneous fission, such as a $^{252}$Cf source. Likewise, in a core containing spent fuel, there is an intrinsic source from the higher actinides that were generated during the previous fuel cycle and which contain isotopes with spontaneous fission, such as even mass number Pu nuclides. So the case of sources with compound Poisson statistics is interesting even outside the ADS area, and it has already been discussed in the literature [77, 78]. However, the effect of the relatively low multiplicity of the source neutrons in the case of a $^{252}$Cf source or of other transuranic elements in spent fuel is negligible, hence this question was not given much attention before. As will be seen here, the large neutron multiplicity of spallation neutrons (or even that of the neutrons in the pulse of a neutron generator) has a much more significant effect, which warrants a study of those effects in more detail. The temporal distribution of spallation events can be assumed exponential, leading to Poisson statistics of the number of spallation events in a time interval with a constant intensity $S$.
The distribution of the number of particles emitted per event will be denoted by $p_q(n)$, satisfying the equation

$$\sum_{n=0}^{\infty} p_q(n) = 1, \tag{10.1}$$

and its generating function as

$$r(z) = \sum_{n=0}^{\infty} p_q(n)z^n. \tag{10.2}$$
Only the first two moments of the above distribution will enter the relevant final formulae, and these are

$$r_1 = \langle q\rangle = \sum_n n\,p_q(n) \quad\text{and}\quad r_2 = \langle q(q - 1)\rangle = \sum_n n(n - 1)p_q(n). \tag{10.3}$$

For consistency of notation it is practical to introduce the Diven factor of the source:

$$D_q = \frac{\langle q(q - 1)\rangle}{\langle q\rangle^2} = \frac{r_2}{r_1^2}. \tag{10.4}$$
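As a simple numerical illustration of (10.3)–(10.4), the sketch below evaluates $r_1$, $r_2$ and $D_q$ for a hypothetical Poisson emission distribution with mean 40 (a crude stand-in for a spallation multiplicity distribution); for a Poisson distribution the Diven factor equals 1 exactly:

```python
import numpy as np
from math import lgamma

# Hypothetical p_q(n): Poisson with mean 40 (illustration only); for a Poisson
# distribution r2 = r1**2, so the source Diven factor Dq of (10.4) equals 1.
q_mean = 40.0
n = np.arange(0, 400)
# Poisson pmf computed in log space to avoid overflow of 40**n / n!
pq = np.exp(-q_mean + n * np.log(q_mean) - np.array([lgamma(m + 1.0) for m in n]))

r1 = np.sum(n * pq)              # <q>, eq. (10.3)
r2 = np.sum(n * (n - 1) * pq)    # <q(q-1)>, eq. (10.3)
Dq = r2 / r1**2                  # eq. (10.4)
print(r1, r2, Dq)                # ~ 40, 1600, 1.0
```

Replacing `pq` by a measured or tabulated multiplicity distribution gives the source moments needed in the formulae below.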
10.1.1 Feynman-alpha with a steady spallation source

The assumptions of the model will also be the same as before: an infinite homogeneous system, infinite detector, one group of delayed neutrons, and one-group theory. Similarly to the derivation of the Rossi-alpha formula in the previous chapter, in the remaining part of Section 10.1 the solution will be given only by the backward formalism. The derivation used follows closely that of [79]. The derivation based on the forward equations is found in [80]. Since the difference compared to the traditional case lies entirely in the different statistics of the source, the distributions induced by a single particle will not be affected. Hence it is only the formula connecting the single-particle-induced and source-induced distributions which has to be re-calculated; the rest of the
calculations will go along a line very similar to that given in Section 9.3. With the same arguments as with (9.59), one can write, in first order of $dt$:

$$P(N, C, Z, T, t) = (1 - S\,dt)P(N, C, Z, T, t - dt) + S\,dt\sum_{n,c,z} P(N - n, C - c, Z - z, T, t)\sum_k p_q(k)A_k(n, c, z, T, t), \tag{10.5}$$

where, analogously to (9.77), the function $A_k(n, c, z, T, t)$ is defined as

$$A_k(n, c, z, T, t) = \sum_{\substack{n_1 + \cdots + n_k = n\\ c_1 + \cdots + c_k = c\\ z_1 + \cdots + z_k = z}}\;\prod_{j=1}^{k} p(n_j, c_j, z_j, T, t). \tag{10.6}$$
With the usual steps, the solution of the equation connecting the corresponding generating functions of the single-particle-induced distribution $p$,

$$g(x, y, v, T, t) = \sum_n\sum_c\sum_z x^n y^c v^z p(n, c, z, T, t),$$

and that of the source-induced distribution $P$,

$$G(x, y, v, T, t) = \sum_N\sum_C\sum_Z x^N y^C v^Z P(N, C, Z, T, t),$$

which were introduced in (9.60) and (9.61), reads as

$$G(x, y, v, T, t) = \exp\left\{S\int_0^t \{r[g(x, y, v, T, t')] - 1\}\,dt'\right\}. \tag{10.7}$$
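With no absorption or multiplication, the single-particle generating function is simply $g = x$, and (10.7) reduces to the compound Poisson form $G = \exp\{St[r(x) - 1]\}$, with mean $r_1 St$ and variance $(r_1 + r_2)St$ for the number of emitted particles. A quick Monte Carlo sketch, with a hypothetical geometric emission distribution, confirms this:

```python
import numpy as np

# Without absorption or multiplication, g(x, t) = x and (10.7) gives the compound
# Poisson generating function G = exp{S t [r(x) - 1]}; its factorial moments give
# mean = r1*S*t and variance = (r1 + r2)*S*t.  Monte Carlo check with a
# hypothetical geometric p_q(n) on n = 1, 2, ... (illustration only):
rng = np.random.default_rng(0)
S, t, trials = 30.0, 2.0, 50_000
p = 0.2
r1 = 1.0 / p                         # <q> of the geometric distribution
r2 = 2.0 * (1.0 - p) / p**2          # <q(q-1)>

events = rng.poisson(S * t, size=trials)                  # number of source events
N = np.array([rng.geometric(p, size=e).sum() for e in events])  # emitted particles

print(N.mean(), r1 * S * t)          # both ~ 300
print(N.var(), (r1 + r2) * S * t)    # both ~ 2700
```

The excess of the variance over the mean ($r_2 St$) is exactly the multiple-emission term that reappears in (10.10)–(10.11) below.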
This is formally identical with the solution (3.24) of Section 3.2. The form of the relationship is not affected by the presence of delayed neutrons or detectors in the system; those only affect $p(n, c, z, T, t)$, but not the source properties. It can also be shown that the asymptotic value, i.e. the limit of the generating function, $\lim_{t\to\infty} G(x, y, v, T, t)$, exists if the system is subcritical. The various asymptotic factorial moments, corresponding to (9.65)–(9.70), are now easy to calculate. The first moments read as

$$N = r_1 S\int_0^{\infty} n(t)\,dt = \frac{r_1 S}{-\rho}, \tag{10.8}$$

$$Z(T) = r_1 S\int_0^{\infty} z(t, T)\,dt = \lambda_f N T. \tag{10.9}$$

The second moments, and notably the modified variances, are obtained as

$$\mu_{NN} = N\int_0^{\infty} q_{nn}(t)\,dt + r_2 S\int_0^{\infty} n^2(t)\,dt \tag{10.10}$$

and

$$\mu_{ZZ}(T) = N\int_0^{\infty} q_{zz}(t, T)\,dt + r_2 S\int_0^{\infty} z^2(t, T)\,dt, \tag{10.11}$$
where the second factorial moments $q_{nn}(t)$ and $q_{zz}(t, T)$ of the single-particle-induced distributions are the same as before, given by (9.95) and (9.103). An inspection of (9.103) shows that the expression of $q_{zz}(t, T)$ contains two different quadratic forms of $z(t, T)$, of which one has the same form as the last term on the right-hand side of (10.11), which expresses the effect of the multiple source. Hence it can be envisaged that the dependence of $\mu_{ZZ}$ on the measurement time $T$ will be the same as in the traditional case of a simple Poisson source, but its amplitude enhanced with a factor depending on the various parameters of the system. Since $Z(T)$ has also the same dependence on the measurement time as in the traditional case, the structure of the Feynman-alpha formula with a steady spallation source (or any multiple emission source, e.g. a $^{252}$Cf source) will be the same as in the traditional case. Evaluating the integrals in (10.11) shows that this is indeed the case. Again, after performing the integrals, by re-denoting the measurement time $T$ as $t$, the Feynman-alpha formula is now obtained as follows:

$$\frac{\mu_{ZZ}(t)}{Z(t)} = \frac{\lambda_f^2}{(\alpha + \lambda)(\alpha_p - \alpha_d)}\left[W(\alpha_p)f_p(t) - W(\alpha_d)f_d(t)\right], \tag{10.12}$$
where the parameters $\alpha_p$, $\alpha_d$ and the functions $f_p(t)$ and $f_d(t)$ are the same as in (9.113) and (9.112), respectively, whereas the function $W(s)$ is modified to

$$W(s) = \left(1 - \frac{\lambda^2}{s^2}\right)\langle\nu_p(\nu_p - 1)\rangle + \frac{r_2\langle\nu\rangle}{r_1}(-\rho) - 2\frac{\lambda^2}{s^2}\langle\nu_p\rangle\langle\nu_d\rangle. \tag{10.13}$$

To see the effect of the extra term arising from the source multiplicity, it is worth re-writing the formula in the standard Feynman $Y$ form

$$\frac{\mu_{ZZ}(t)}{Z(t)} = Y_p f_p(t) + Y_d f_d(t)$$

and considering the prompt term $Y_p f_p(t) \equiv Y_1 f_1(t)$ only. Noting as before that $\alpha_p \gg \lambda$, the last term of (10.13) can be neglected, and one obtains

$$Y_1 f_1(t) = \frac{D_{\nu_p}}{(\rho - \beta)^2}\left(1 + \frac{r_1 D_q}{\langle\nu\rangle D_{\nu_p}}(-\rho)\right)\left(1 - \frac{1 - e^{-\alpha t}}{\alpha t}\right) = \frac{D_{\nu_p}(1 + \delta)}{(\rho - \beta)^2}\left(1 - \frac{1 - e^{-\alpha t}}{\alpha t}\right). \tag{10.14}$$

Here the parameter

$$\delta = \frac{r_1 D_q}{\langle\nu\rangle D_{\nu_p}}(-\rho) \tag{10.15}$$

was introduced, following Pázsit and Yamane [80]. In a similar manner, the effect of the source multiplicity can be quantified in the case of six delayed neutron groups with a generalisation of (9.120):

$$Y_i = 2\lambda_f^2\omega_i\left[\langle\nu_0(\nu_0 - 1)\rangle(1 + \delta) - 2\sum_{j=1}^{6}\frac{\langle\nu_0\rangle\langle\nu_j\rangle\lambda_j^2}{s_i^2 - \lambda_j^2}\right]. \tag{10.16}$$

It is thus seen that for multiple emission sources, the variance to mean formula changes only very slightly. The time-dependence of the formula, which is the basis of the reactivity determination, is the same as with a simple Poisson source. Another similarity is that the source strength disappears from the expression for the variance to mean even with multiple emission sources. It is also obvious that the presence of the multiple emission is beneficial for the application of the Feynman method for the measurement of the reactivity, since it increases the amplitude of the useful part of the variance, i.e. the factor $Y$ which stands for the deviation from the Poisson statistics in the count rate. The physical reason for the fact that the time-dependence does not change but the amplitude increases is simple. The time-dependence is determined by the rate by which the correlations between the numbers of the neutrons (or corresponding detector counts) born in the same chain will die out. This depends only on the properties of the system, hence the presence of a steady multiple emission source will not change it. The amplitude of the variance to mean, on the other hand, depends also on the number of neutrons having a common ancestor. In the traditional case all those neutrons are generated in the chain started by the individual source particles. The number of such neutrons will increase when several neutrons enter the system from the source simultaneously, hence the increase of the amplitude of the time-dependent part of the variance to mean.

The presence of the 'enhancement factor' $\delta > 0$ is beneficial for the determination of the reactivity, which can be important for deeply subcritical systems. A quantitative estimate of the parameters determining the magnitude of $\delta$ yields that the Diven factors show very little variation whether they regard fission or spallation, i.e. the factor $D_q/D_{\nu_p}$ will be of the order of unity. For a $^{252}$Cf source in a core loaded with enriched uranium, the factor $r_1/\langle\nu\rangle = \langle q\rangle/\langle\nu\rangle$ is also of the order of unity, hence for a $k_{\mathrm{eff}} = 0.95$, the correction represented by $\delta$ is below 10%. The situation is the same for an intrinsic source represented by spent fuel. For spallation sources, on the other hand, the situation is different. For spallation with protons in the GeV range and a thick Pb target, $r_1 \approx 40$ neutrons per spallation event, and accordingly, with $k_{\mathrm{eff}} = 0.95$, $\delta \approx 3$. For larger subcriticalities the value increases further, and for $k_{\mathrm{eff}} = 0.7$ one has $\delta \approx 27$. Hence for deeply subcritical systems the large source multiplicity becomes a very significant factor in measuring the reactivity of the system.

The enhancing effect of the factor $\delta$ was confirmed in an experiment performed by Kitamura et al. [86]. In that experiment, a multiple emission source with a large $\delta$ value was achieved in the form of narrow pulses of a D–T neutron generator which were triggered in a random fashion. The random triggering was induced by detecting gamma-rays from a $^{60}$Co source, and using the detections as trigger signals. The average detection rate was approximately 108 pulses per second. In addition, an Am–Be and a $^{252}$Cf source were also used. The high $\delta$ value of the randomly pulsed neutron generator was attained by the large mean number of neutrons in the pulse. The results are shown in Fig. 10.1 for a measurement in a system with $k_{\mathrm{eff}} = 0.9874$ ($\rho \approx -0.013$). It is seen that the Feynman-$Y$ curves for the multiple emission $^{252}$Cf source are indistinguishable from those of the Am–Be source, agreeing with the numerical estimates of $\delta$ for a $^{252}$Cf source and the level of subcriticality, as indicated above. However, the $Y$ function corresponding to the true randomly pulsed neutron generator has a markedly larger amplitude than the other two curves, showing the effect of large source multiplicity.

[Figure 10.1: Measured $Y$ values as a function of gate width (s) in an experiment with an Am–Be source, a $^{252}$Cf source and a randomly pulsed neutron generator (Random–PNG), with fitted curves; $k_{\mathrm{eff}} = 0.9874$ (from [81]).]
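A small sketch of how $\delta$ of eq. (10.15) scales with the source multiplicity and the subcriticality is given below. The numbers used ($\langle\nu\rangle = 2.5$, Diven factor ratio set to 1) are round illustrative placeholders, not the actual nuclear data behind the values quoted in the text:

```python
# Sketch of the enhancement factor delta of eq. (10.15),
#   delta = (r1 * Dq / (<nu> * D_nu_p)) * (-rho),  with rho = (k_eff - 1) / k_eff.
# nu = 2.5 and Dq/D_nu_p = 1 are illustrative placeholders; the delta values
# quoted in the text correspond to the actual nuclear data used there.
def delta(r1, k_eff, Dq_over_Dnu=1.0, nu=2.5):
    rho = (k_eff - 1.0) / k_eff
    return (r1 / nu) * Dq_over_Dnu * (-rho)

for label, r1 in [("252Cf-like source, r1 ~ 3.8", 3.8),
                  ("spallation source, r1 ~ 40", 40.0)]:
    for k_eff in (0.95, 0.7):
        print(f"{label}, keff = {k_eff}: delta = {delta(r1, k_eff):.2f}")
```

The qualitative trend of the text is reproduced: $\delta$ grows both with the source multiplicity $r_1$ and with the depth of the subcriticality.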
10.1.2 Rossi-alpha with a steady spallation source

Based on the foregoing, the derivation of the Rossi-alpha formula can be made quite simply. All one needs is the generalisation of the formula (9.145) to the case of multiple source emission, similarly as in (10.7). One obtains

$$G_{st}(x_1, y_1, v_1; x_2, y_2, v_2, \tau) = \lim_{t\to\infty} G(x_1, y_1, v_1, t; x_2, y_2, v_2, t + \tau) = \exp\left\{S\int_0^{\infty}\{r[g(x_1, y_1, v_1, t'; x_2, y_2, v_2, t' + \tau)] - 1\}\,dt'\right\}. \tag{10.17}$$
Again, like in the previous chapter, notation on the infinitesimal measuring time intervals $dt$ at $t$ and $t + \tau$ will be omitted. From this, one obtains for the covariance the equation

$$C_{ZZ}(\tau) = N\int_0^{\infty} q_{zz}(t, \tau)\,dt + r_2 S\int_0^{\infty} z(t)z(t + \tau)\,dt, \tag{10.18}$$
where $q_{zz}(t, \tau)$ was defined before in (9.160). The effect of the multiple emission source is contained in the last term on the right-hand side. Evaluating the integral yields the result that, similarly to the case of the Feynman-alpha formula, the time-dependence of the Rossi-alpha remains the same with multiple emission sources as in the traditional case, whereas the amplitudes are modified the same way as in the case of the Feynman-alpha in the previous section. Concretely, one has

$$P_r(\tau)d\tau = \frac{C_{ZZ}(\tau)}{Z\,dt} = \frac{\lambda_f^2\,d\tau}{2(\alpha + \lambda)(\alpha_p - \alpha_d)}\left[W(\alpha_p)f_p(\tau) - W(\alpha_d)f_d(\tau)\right], \tag{10.19}$$

where the functions $W(s)$ are equal to those in (10.13) and the functions $f_i(\tau)$ to those in (9.166). The generalisation to the case of six groups of delayed neutron precursors can be made the same way as in the case of the Feynman-alpha formula. One obtains

$$P_r(\tau) = \frac{1}{2}\sum_{i=0}^{6} Y_i\alpha_i e^{-\alpha_i\tau} = \lambda_f^2\sum_{i=0}^{6}\omega_i\left[\langle\nu_0(\nu_0 - 1)\rangle(1 + \delta) - 2\sum_{j=1}^{6}\frac{\langle\nu_0\rangle\langle\nu_j\rangle\lambda_j^2}{s_i^2 - \lambda_j^2}\right]\alpha_i e^{-\alpha_i\tau}, \tag{10.20}$$

where the factors $\omega_i$ are defined in (9.119).
10.2 Pulsed Poisson Source with Finite Pulse Width

Based on the available technology of particle accelerators, it seems likely that a future ADS will be driven by a pulsed spallation source. Likewise, in several current model ADS experiments, a traditional pulsed neutron generator was used [83]. The case of the pulsed source is hence of practical interest, and in this section the Feynman- and Rossi-alpha formulae corresponding to a pulsed source will be derived. Delayed neutrons will be neglected throughout the derivation in this section. This is mostly motivated by the fact that in cores containing minor actinides, or even in traditional cores utilising fast fission, the delayed neutron fraction is much smaller than in reactors based on thermal fission of $^{235}$U. The results with the inclusion of one group of delayed neutrons will be touched upon at the end of the section. At first we shall treat the case of a traditional neutron generator emitting finite width pulses with a simple Poisson distribution of neutrons in the pulse. Then the case of the finite width spallation source will be discussed. The derivations will mostly refer to the Feynman-alpha method. The derivation of the Rossi-alpha formula will be given without details, since it can be inferred from the case of the Feynman formula. For very narrow pulses, it is a good approximation that all neutrons in the pulse are emitted simultaneously, as is indicated by the example in Fig. 10.1 [81]. The modelling of narrow pulses was already mentioned and discussed, at the level of the particle distribution, in Chapter 3. The case of narrow pulses can be described much more simply by assuming strongly periodic processes for the injection, i.e. periodic instantaneous injection, with the methods of Chapter 3. This methodology was applied by Degweker [60, 65] for the treatment of neutron fluctuations and for the pulsed Feynman-alpha method in accelerator driven systems. The case of periodic instantaneous injection will be treated at the end of this chapter.

In the pulsed mode with finite width pulses, the distribution of the neutron injection can be considered as an inhomogeneous Poisson process, i.e. one with a time-dependent intensity $S(t)$. Such cases were already treated in Chapter 3; however, those calculations concerned the distribution of the particle number, and not the number of detections in an interval. The extension of those methods to the distribution of the detection process would be relatively complicated. In what follows a treatment will be used that builds on the backward
equation-based formalism of the preceding chapter, by which both deterministic and random pulsing as well as arbitrary pulse shapes can be treated in a general framework [87]. As was seen in the preceding section and the previous chapter, the backward approach relies on the determination of the single-particle-induced distribution and the way the single-particle- and source-induced distributions are related to each other. Similarly to the case of multiple emission sources, the pulsed characteristics of the source will not alter the single-particle-induced distributions, hence again the difference as compared to the steady sources will lie in the expression that connects the single-particle-induced distribution with those due to an external source, switched on at $t = 0$. At the level of the generating functions, this formula with an inhomogeneous Poisson source reads as

$$G(x, v, T, t) = \exp\left\{\int_0^t S(t')[g(x, v, T, t - t') - 1]\,dt'\right\}. \tag{10.21}$$
Here the generating functions g(x, v, T , t) and G(x, v, T , t) are the same as those defined in (9.60) and (9.61), except that the auxiliary variable y, corresponding to the delayed neutrons, is missing. The formula needs to be considered in its asymptotic form when t → ∞. In the traditional case of constant source, this can be made in (10.21) by extending the upper limit of the integration to infinity. However, with a periodically stationary source, this limit has to be taken in a particular way which will be discussed soon below, hence the substitution is not denoted here.
10.2.1 Source properties and pulsing methods

A pulsed neutron generator produces a periodic sequence ('train') of pulses.¹ This means that the number of injected source neutrons, as well as the number of neutrons in the system and the number of detected neutrons, will not be stationary stochastic processes in the general sense; rather, they will be periodically stationary. In addition to periodic stationarity, all moments will be oscillating quantities, in contrast to the smooth (non-oscillatory) behaviour of the moments in the case of a steady source. There exists a way of making the process stationary, although, except for the first moment, the moments cannot be made non-oscillatory. Namely, one can perform a Feynman- (or Rossi-) alpha experiment in two different ways: one can open the counting gate either synchronised with the pulsing, or randomly. The first case is referred to as 'deterministic pulsing', whereas the second as 'stochastic pulsing'. In this latter case the process will be stationary. In reality, of course, it is not the pulsing of the generator which is random, rather the starting point of the measurements. In fact, since measurements are made nowadays with a high resolution time-to-digital converter (as opposed to a shift register), such that the individual detection times are registered together with the neutron pulse trigger, a given measurement can be evaluated both by the deterministic and the stochastic pulsing method.

The periodic pulsed character of the source is formulated mathematically as follows. Assume that the pulse repetition period is $T_0$, and that the pulse train consists of a periodic sum of the same pulse shape $f(t)$ such that it is non-vanishing only within $[0, T_0)$, i.e.

$$f(t)\begin{cases}\geq 0 & \text{for } 0 \leq t < T_0,\\ = 0 & \text{otherwise.}\end{cases} \tag{10.22}$$

Here $f(t)$ is assumed to be normalised either by its integral or by the maximum value, whereas the total intensity of the source will be described with the help of an intensity factor $S_0$. Hence the time-dependent intensity of the source is given as

$$S(t) = S_0\sum_{n=0}^{\infty} f(t - nT_0) \equiv S_0\sum_{n=0}^{\infty} f_n(t). \tag{10.23}$$
Various forms of the pulse shape $f(t)$ will be specified later. For the discussion of the solution technique that follows below, $f(t)$ can be left unspecified. This will exemplify one advantage of the technique used, namely that it can be applied to various pulse shapes with roughly the same effort, and that a substantial part of the steps does not depend on the pulse shape. This means that new pulse shapes can be treated with a rather moderate extra effort, once the problem has been fully solved for one particular pulse shape.

¹ True stochastic pulsing has also been achieved, see [81] and Fig. 10.1, but at high-performance accelerators only periodic pulsing is practical.

The case of random pulsing is then described by introducing a random variable $\boldsymbol{\xi}$, taking non-negative real values $\xi \in [0, T_0)$ with a density $p(\xi)$. Hence,

$$P\{\xi \leq \boldsymbol{\xi} \leq \xi + d\xi\} \equiv p(\xi)\,d\xi \tag{10.24}$$

is the probability that the random variable $\boldsymbol{\xi}$ takes a value between $\xi$ and $\xi + d\xi$. The random variable $\boldsymbol{\xi}$ describes the random starting point of the measurement within a pulse period $T_0$. The corresponding random source is then denoted as

$$S(t|\boldsymbol{\xi} = \xi) \equiv S(t|\xi) = S(t - \xi), \tag{10.25}$$

and with (10.25) one has

$$S(t|\xi) = S_0\sum_{n=0}^{\infty} f(t - nT_0 - \xi). \tag{10.26}$$

Such a source, whose intensity itself is a random process, is called a doubly random Poisson process, or Cox process. The deterministic and stochastic pulsing methods can actually be treated in a common framework by specifying the corresponding probability densities of the random variable $\boldsymbol{\xi}$. For the stochastic pulsing, one will have

$$p(\xi) = \begin{cases}1/T_0 & \text{for } \xi \leq T_0,\\ 0 & \text{for } \xi > T_0,\end{cases} \tag{10.27}$$

whereas for the deterministic case it is given as

$$p(\xi) = \delta(\xi). \tag{10.28}$$
Expectations with respect to the random variable ξ will be denoted by brackets in this section in the definitions; wherever practical, bracket-free notations will be introduced for the first two factorial moments, as usual. Hence one has T0 S(t|ξ) = S(t|ξ)p(ξ)dξ (10.29) 0
and S(t |ξ)S(t |ξ) ≡ K (t , t ) =
T0
S(t |ξ)S(t |ξ)p(ξ)dξ.
(10.30)
0
It is easily confirmed that the covariance CovS (t , t ) = S(t |ξ)S(t |ξ) − S(t |ξ) S(t |ξ) = K (t , t ) − S(t |ξ) S(t |ξ)
(10.31)
is identically zero if and only if the density of the random variable ξ is a Dirac-delta function as in (10.28), i.e. for the case of a deterministic pulsing. For any other distribution p(ξ), CovS (t , t ) = 0, and in particular t t CovS (t , t )dt dt > 0. (10.32) 0
0
This fact will have a bearing also on the variance to mean of the neutrons and the detections in a system driven with a randomly pulsed source. Along the same lines, the generating function G(x, t|ξ) = P{N(t) = N |ξ = ξ}xN (10.33) N
267
Reactivity Measurements in Accelerator Driven Systems
of the probability that N particles will be emitted by the source, i.e. by a doubly stochastic Poisson process during the time period [0, t), (which is the same as the number of particles at time t in an infinite system without absorption and multiplications), is given as t S(t|ξ)dt . (10.34) G(x, t|ξ) = exp (x − 1) 0
The expected value of this generating function with respect to ξ is defined as 5 6 t T0 G(x, t|ξ)p(ξ)dξ = exp (x − 1) S(t|ξ)dt . G(x, t|ξ) = 0
(10.35)
0
As an illustration, let us now consider the case of stochastic pulsing with square pulses of width W < T0, i.e. with

$$ f(t) = H(t) - H(t - W) = \Delta(t, W), \tag{10.36} $$

and hence

$$ S(t|\xi) = S_0 \sum_{n=-\infty}^{\infty} \Delta(t - nT_0 - \xi,\, W), \qquad t \ge 0. \tag{10.37} $$

For the first moment of the number of neutrons emitted by the random source between (0, t), from (10.35) one obtains

$$ \langle N(t|\xi)\rangle \equiv N(t) = \left[\frac{d\langle G(x, t|\xi)\rangle}{dx}\right]_{x=1} = \frac{1}{T_0} \int_0^{T_0}\!\int_0^{t} S(t'|\xi)\,dt'\,d\xi = \frac{S_0 W}{T_0}\, t, \tag{10.38} $$

where, in line with the previous notations, the expectation sign was dropped. It is seen that the expectation N(t) is a smooth function of time, as expected. The second moment of the number of neutrons emitted by the stochastic source can be calculated from

$$ \langle N(t)(N(t) - 1)\rangle = \left[\frac{d^2\langle G(x, t|\xi)\rangle}{dx^2}\right]_{x=1} = \frac{1}{T_0} \int_0^{T_0}\!\int_0^{t}\!\int_0^{t} S(t'|\xi)\,S(t''|\xi)\,dt'\,dt''\,d\xi = \int_0^{t}\!\int_0^{t} K(t', t'')\,dt'\,dt'', \tag{10.39} $$

where the kernel K(t', t'') was introduced in (10.30). Then the variance of the number of neutrons in the system at time t is given from

$$ \sigma_N^2(t) = \langle N(t)(N(t) - 1)\rangle - N^2(t) + N(t) = N(t) + \int_0^{t}\!\int_0^{t} \mathrm{Cov}_S(t', t'')\,dt'\,dt''. \tag{10.40} $$
By using the concrete form of the stochastic pulse train (10.37) in (10.31), the integrals in (10.40) can easily be evaluated by Laplace transform methods. One notices that K(t', t'') = K(|t' − t''|) ≡ K(u) is a symmetric function of its arguments. Hence,

$$ \int_0^{t}\!\int_0^{t} K(t', t'')\,dt'\,dt'' = 2 \int_0^{t}\!\int_0^{t'} K(t' - t'')\,dt''\,dt'. \tag{10.41} $$

The inner integral of the above can be calculated by first performing a Laplace transform:

$$ \mathcal{L}\left\{ \int_0^{t} K(t - t')\,dt' \right\} = \frac{\tilde{K}(s)}{s} = \frac{\tilde{K}_1(s)}{s\,(1 - e^{-sT_0})}, $$

where

$$ \tilde{K}_1(s) \equiv \int_0^{T_0} e^{-su} K(u)\,du = \frac{S_0^2}{T_0} \left[ \frac{e^{-sW} - 1}{s^2} + \frac{e^{-sT_0}(e^{sW} - 1)}{s^2} + \frac{W(1 - e^{-sT_0})}{s} \right]. \tag{10.42} $$

The Laplace inverse of (10.42) is easily obtained by calculating the residues at s = 0 and s = ±2nπi/T0, n = 1, 2, . . . (K̃1(s) itself does not have a singularity at s = 0), which will in effect give the solution in the form of a Fourier series. The remaining integral in (10.41) can be performed term by term on the series. The result, multiplied by 2, yields the second moment quantity ⟨N(t)(N(t) − 1)⟩. This finally leads to

$$ \frac{\sigma_N^2(t)}{N(t)} = 1 + \frac{2 S_0 T_0^3}{W t \pi^4} \sum_{n=1}^{\infty} n^{-4} \sin^2\!\left(\frac{n\pi W}{T_0}\right) \sin^2\!\left(\frac{n\pi t}{T_0}\right). \tag{10.43} $$
Equations (10.40) and (10.43) show the over-Poisson variance of the number of neutrons emitted by a stochastically pulsed source. In general, doubly stochastic processes have over-Poisson variances. The over-Poisson character can be explained by the positive integral of the covariance function, and hence, ultimately, by the fact that stochastic pulsing introduces correlations between the numbers of source particles at different times.
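The over-Poisson variance predicted by (10.40) and (10.43) is easy to reproduce in a small Monte Carlo sketch: sample the random shift ξ uniformly over a period, draw Poisson counts under the shifted square-pulse train, and compare the empirical variance to mean with the series (10.43). All numerical values below are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

S0, T0, W, t = 100.0, 1.0, 0.3, 5.5    # intensity, period, pulse width, counting time (arb. units)

def on_time(xi):
    """Total time within [0, t] covered by the square-pulse train shifted by xi."""
    total = 0.0
    for k in range(-1, int(t/T0) + 2):          # pulses that can overlap [0, t]
        lo, hi = k*T0 + xi, k*T0 + xi + W
        total += max(0.0, min(hi, t) - max(lo, 0.0))
    return total

# Doubly stochastic Poisson process: N | xi is Poisson with mean S0 * on_time(xi)
m = 200_000
xi = rng.uniform(0.0, T0, size=m)
mu = S0*np.array([on_time(x) for x in xi])
N = rng.poisson(mu)
vtm_mc = N.var()/N.mean()

# Series (10.43) for the variance to mean
n = np.arange(1, 2001)
vtm_theory = 1.0 + (2.0*S0*T0**3/(W*t*np.pi**4))*np.sum(
    n**(-4.0)*np.sin(n*np.pi*W/T0)**2*np.sin(n*np.pi*t/T0)**2)

assert vtm_theory > 1.0                    # over-Poisson
assert abs(vtm_mc - vtm_theory) < 0.1      # statistical agreement
print(f"variance-to-mean: Monte Carlo {vtm_mc:.3f} vs series (10.43) {vtm_theory:.3f}")
```

The simulation confirms both the over-Poisson character (variance to mean above unity) and the quantitative value given by the Fourier series.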
10.2.2 Calculation of the factorial moments for arbitrary pulse shapes and pulsing methods
For the calculation of the factorial moments of the detector counts in a subcritical reactor with a pulsed source, driven either deterministically or stochastically, one has to use (10.21), generalised to the case of the pulsed source denoted as in (10.25):

$$ G(x, v, t|\xi) = \exp\left\{ \int_0^{t} S(t'|\xi)\,\{g(x, v, t - t') - 1\}\,dt' \right\}. \tag{10.44} $$

The factorial moments then have to be taken from the expectation

$$ \langle G(x, v, t|\xi)\rangle = \int_0^{T_0} G(x, v, t|\xi)\,p(\xi)\,d\xi, \tag{10.45} $$
where, in the calculation of the factorial moments, first the derivatives with respect to x or v are taken, after which the expectation w.r.t. ξ can be performed. The pulsing type is described by the probability density p(ξ) of ξ; furthermore, the pulse shape f(t) is only needed when the integrals w.r.t. ξ are performed. Thus, first the generic results will be derived with the dependence on the realisation of the random variable ξ retained, from which the deterministic and stochastic cases can then be obtained for various pulse shapes. Again use can be made of the fact that the single-particle-induced distributions, i.e. the factorial moments of g(x, v, t) of (10.44), are known from the foregoing. Hence one can turn immediately to the calculation of the moments of ⟨G(x, v, t|ξ)⟩ of equation (10.45). The expressions for the first moments are as follows:

$$ N(t) \equiv \langle N(t|\xi)\rangle = \left[\frac{\partial \langle G(x, v, t|\xi)\rangle}{\partial x}\right]_{x=v=1} = \left\langle \int_0^{t} S(t'|\xi)\, n(t - t')\,dt' \right\rangle, \tag{10.46} $$

where

$$ N(t|\xi) = \int_0^{t} S(t'|\xi)\, n(t - t')\,dt'. \tag{10.47} $$

Likewise,

$$ Z(t, T) = \langle Z(t, T|\xi)\rangle = \left[\frac{\partial \langle G(x, v, t|\xi)\rangle}{\partial v}\right]_{x=v=1} = \left\langle \int_0^{t} S(t'|\xi)\, z(t - t', T)\,dt' \right\rangle = \lambda_d \int_0^{t} \Delta(t', T)\, N(t - t')\,dt', \tag{10.48} $$

with

$$ Z(t, T|\xi) = \int_0^{t} S(t'|\xi)\, z(t - t', T)\,dt' = \lambda_d \int_0^{t} \Delta(t', T)\, N(t - t'|\xi)\,dt'. \tag{10.49} $$
The reason why the source-induced detector count can be written in two different ways is that it is a double convolution between three different functions. By using previous definitions and expressions, one of the two convolutions can be re-denoted as an autonomous function, and this can be done in two different ways. This dual representation gives the possibility of choosing the form that suits the actual computations better, a fact which will be utilised below. The second factorial moment of the source-induced distribution

$$ M_{ZZ}(t, T) \equiv \langle Z(Z - 1)\rangle = \left[\frac{\partial^2 \langle G(x, v, t|\xi)\rangle}{\partial v^2}\right]_{x=v=1} \tag{10.50} $$

can also be expressed in two equivalent ways, i.e.

$$ M_{ZZ}(t, T|\xi) = \int_0^{t} S(t'|\xi)\, m_{zz}(t - t', T)\,dt' + Z^2(t, T|\xi) = \int_0^{t} q_{zz}(t', T)\, N(t - t'|\xi)\,dt' + Z^2(t, T|\xi), \tag{10.51} $$

where mzz(t, T) is the second factorial moment of the single-particle-induced neutron counts, defined in (9.54), and qzz is given in (9.103). For the Feynman-alpha formula one needs the modified variance, which can be written as

$$ \mu_{ZZ}(t, T) = M_{ZZ}(t, T) - Z^2(t, T). \tag{10.52} $$

From the above this leads to

$$ \mu_{ZZ}(t, T) = \langle \mu_{ZZ}(t, T|\xi)\rangle = \left\langle \int_0^{t} S(t'|\xi)\, m_{zz}(t - t', T)\,dt' \right\rangle + \langle Z^2(t, T|\xi)\rangle - Z^2(t, T) = \int_0^{t} q_{zz}(t', T)\, N(t - t')\,dt' + \langle Z^2(t, T|\xi)\rangle - Z^2(t, T). \tag{10.53} $$

Here it is obvious that, due to the expectation with respect to the random variable ξ, the last two terms of (10.53) will be different. This is not the case for a steady source or for the deterministically pulsed case, where no such expectation has to be taken, and hence those two terms cancel each other. In that case, neglecting the notation on ξ, the convolution can be rearranged to

$$ \mu_{ZZ}(t, T) = \int_0^{t} S(t')\, m_{zz}(t - t', T)\,dt' = \int_0^{t} q_{zz}(t', T)\, N(t - t')\,dt', \tag{10.54} $$

with qzz(t, T) being the source term, given by

$$ q_{zz}(t, T) = \lambda_f \left\langle \nu_p(\nu_p - 1)\right\rangle z^2(t, T), \tag{10.55} $$

which is obtained from (9.103) by neglecting the delayed neutrons. This is what is used in principle in the deterministic method.
10.2.3 General calculation of the variance to mean with arbitrary pulse shapes and pulsing methods As was seen in the previous section, the task is to calculate the kernels N (t|ξ), Z(t, T |ξ), and with their help the second order quantities MZZ (t, T |ξ) and/or μZZ (t, T |ξ) . These need to be obtained for the case t → ∞ and then the expectation value of these quantities w.r.t. ξ needs to be taken. The formal solution for these can be given for arbitrary pulse shapes, with the pulse shape only affecting certain terms of a Fourier expansion. In this section the pulse will be described by its ‘mother function’ f (t), as defined in (10.22).
The starting point is the Laplace transform of (10.47), which reads as

$$ \tilde{N}(s|\xi) = \tilde{S}(s|\xi)\,\tilde{n}(s). \tag{10.56} $$

By virtue of (10.22)–(10.26), and the fact that without delayed neutrons one has n(t) = e^{−αt}, equation (10.56) can be written as

$$ \tilde{N}(s|\xi) = \frac{\tilde{S}(s|\xi)}{s + \alpha} = \frac{S_0\, e^{-s\xi}\, \tilde{f}(s)}{(s + \alpha)(1 - e^{-sT_0})}. \tag{10.57} $$

Here, f̃(s) is the Laplace transform of the truncated function f(t), which can be calculated as

$$ \tilde{f}(s) = \int_0^{\infty} f(t)\, e^{-st}\,dt = \int_0^{T_0} f(t)\, e^{-st}\,dt, \tag{10.58} $$

whereas the term (1 − e^{−sT0}) in the denominator arises from summing the geometric series generated by the shifted integrals of the terms f(t − nT0). Since the function f̃(s) does not have any singularities of its own, for times t > T0 the inverse transform of (10.57) is entirely determined by the residues of the function

$$ \frac{e^{-s\xi}\, e^{st}\, \tilde{f}(s)}{(s + \alpha)(1 - e^{-sT_0})}. \tag{10.59} $$

The actual form of the pulse shape only affects the numerical values of the residues, but not the residue structure, which is as follows:
1. A pole at s = −α, which describes the transient after switching on the source at t = 0.
2. A pole at s = 0, whose residue gives the asymptotic mean value of the oscillating function N(t).
3. An infinite number of complex conjugate roots on the imaginary axis at the values s = ±iωn, yielding harmonic functions in the time domain, representing a Fourier series expansion of the oscillating part of N(t) in the form sin(ωn t) and cos(ωn t). Here the notation

$$ \omega_n \equiv \frac{2n\pi}{T_0}, \qquad n = 1, 2, \cdots \tag{10.60} $$

was introduced.

The term arising from the residue at s = −α will not give a contribution to the asymptotic value, but we keep it here for completeness. The residues at s = 0 and s = −α are given as

$$ \operatorname*{Res}_{s=0} \frac{e^{-s\xi}\, e^{st}\, \tilde{f}(s)}{(s + \alpha)(1 - e^{-sT_0})} = \frac{\tilde{f}(0)}{\alpha T_0} \equiv a_0, \tag{10.61} $$

$$ \operatorname*{Res}_{s=-\alpha} \frac{e^{-s\xi}\, e^{st}\, \tilde{f}(s)}{(s + \alpha)(1 - e^{-sT_0})} = -\frac{\tilde{f}(-\alpha)}{e^{\alpha T_0} - 1}\, e^{-\alpha(t - \xi)} \equiv a_{-\alpha}\, e^{-\alpha(t - \xi)}, \tag{10.62} $$

whereas Res{iωn} + Res{−iωn} can be written as

$$ \operatorname*{Res}_{s=+i\omega_n} \frac{e^{-s\xi}\, e^{st}\, \tilde{f}(s)}{(s + \alpha)(1 - e^{-sT_0})} + \operatorname*{Res}_{s=-i\omega_n} \frac{e^{-s\xi}\, e^{st}\, \tilde{f}(s)}{(s + \alpha)(1 - e^{-sT_0})} = \left[ a_n \sin(\omega_n t) + b_n \cos(\omega_n t) \right] \cos(\omega_n \xi) + \left[ b_n \sin(\omega_n t) - a_n \cos(\omega_n t) \right] \sin(\omega_n \xi). \tag{10.63} $$
Here the parameters an and bn are defined as

$$ a_n = \frac{2}{T_0}\, \frac{\omega_n\, \Re\{\tilde{f}(i\omega_n)\} - \alpha\, \Im\{\tilde{f}(i\omega_n)\}}{\omega_n^2 + \alpha^2} \tag{10.64} $$

and

$$ b_n = \frac{2}{T_0}\, \frac{\alpha\, \Re\{\tilde{f}(i\omega_n)\} + \omega_n\, \Im\{\tilde{f}(i\omega_n)\}}{\omega_n^2 + \alpha^2}. \tag{10.65} $$

Then, N(t|ξ) is given as

$$ N(t|\xi) = N_a(t|\xi) + S_0\, a_{-\alpha}\, e^{-\alpha(t-\xi)} = S_0 \left[ a_{-\alpha}\, e^{-\alpha(t-\xi)} + a_0 \right] + S_0 \sum_{n=1}^{\infty} \left[ a_n \sin(\omega_n t) + b_n \cos(\omega_n t) \right] \cos(\omega_n \xi) + S_0 \sum_{n=1}^{\infty} \left[ b_n \sin(\omega_n t) - a_n \cos(\omega_n t) \right] \sin(\omega_n \xi), \tag{10.66} $$

where Na(t|ξ) stands for the periodically asymptotic value of N(t|ξ). The actual values of the parameters a−α, a0, an and bn will depend on the pulse form. Formally, however, all subsequent formulae, including the variance to mean, can be calculated from the above expression. Equation (10.66) shows that in the stochastic case, where averaging w.r.t. ξ incurs integration of the trigonometric functions over their period, all oscillating terms disappear in N(t), as expected. In the deterministic case the oscillating term due to the cos(ωn ξ) remains present.

For the calculation of the asymptotic value of the detector counts

$$ Z(T|\xi) = \lim_{t \to \infty} Z(t, T|\xi), $$

the last equality of (10.49) is the more suitable, by simply integrating (10.66) term by term. The execution of the limit t → ∞ requires, however, some attention. If the start of the measuring interval, t − T, always coincides with the beginning of a pulse, i.e. with time KT0 for some integer K, then of necessity one has t = KT0 + T, and it follows that letting t → ∞ is equal to letting K → ∞. This procedure is not necessary for the stochastic pulsing, where the asymptotic number of neutrons becomes constant after the averaging w.r.t. ξ; but for the sake of uniform treatment, the limit will be calculated for both cases according to the above. With that, one has:

$$ Z(T|\xi) = \lim_{K \to \infty} \int_0^{KT_0 + T} \lambda_d\, \Delta(t', T)\, N(t - t'|\xi)\,dt' = \lim_{K \to \infty} \lambda_d \int_{KT_0}^{KT_0 + T} N(t|\xi)\,dt = \lambda_d \int_0^{T} N_a(t|\xi)\,dt. \tag{10.67} $$

The last step above results from the periodic character of the asymptotic form Na(t|ξ). Performing the integral yields the result

$$ Z(T|\xi) = S_0 \lambda_d \left[ a_0 T + \sum_{n=1}^{\infty} \left\{ C_n(T) \cos(\omega_n \xi) + S_n(T) \sin(\omega_n \xi) \right\} \right], \tag{10.68} $$
where

$$ C_n(T) \equiv \frac{a_n \{1 - \cos(\omega_n T)\} + b_n \sin(\omega_n T)}{\omega_n}, \tag{10.69} $$

$$ S_n(T) \equiv \frac{b_n \{1 - \cos(\omega_n T)\} - a_n \sin(\omega_n T)}{\omega_n}. \tag{10.70} $$
This is the general expression for arbitrary pulse shapes and pulsing techniques. The case of deterministic pulsing is then obtained by substituting ξ = 0, whereas the stochastic pulsing is obtained by integrating (10.68) between zero and T0 w.r.t. ξ, which leads to the disappearance of all oscillating parts.

The calculation of the second order moments goes along similar lines. One finds again that, due to the simple explicit expression for N(t|ξ), it is more convenient to use the last identity of (10.51) or (10.54). We shall use here the last equality of (10.51) where, from the concrete form of Z(t, T) and (10.55), one has

$$ q_{zz}(t, T) \equiv \begin{cases} \dfrac{\lambda_d^2 \lambda_f \langle \nu(\nu - 1)\rangle}{\alpha^2}\,(1 - e^{-\alpha t})^2, & 0 < t \le T, \\[2mm] \dfrac{\lambda_d^2 \lambda_f \langle \nu(\nu - 1)\rangle}{\alpha^2}\,(e^{\alpha T} - 1)^2\, e^{-2\alpha t}, & T < t. \end{cases} \tag{10.71} $$

As (10.71) shows, one needs to evaluate integrals over trigonometric functions multiplied by the zeroth, first and second powers of e^{−αt}. These integrals can be jointly evaluated in a formal manner. The execution of the limit t → ∞ is performed in the same way as in connection with (10.67). After lengthy but straightforward calculations, the result is given in the following form:

$$ \mu_{ZZ}(T|\xi) = \frac{S_0 \lambda_d^2 \lambda_f \langle \nu(\nu - 1)\rangle}{\alpha^2} \Biggl\{ a_0 T \left( 1 - \frac{1 - e^{-\alpha T}}{\alpha T} \right) + \sum_{n=1}^{\infty} \left[ a_n A_n(T) + b_n B_n(T) + \frac{-\omega_n a_n + 2\alpha b_n}{\omega_n^2 + (2\alpha)^2}\,(1 - e^{-\alpha T})^2 \right] \cos(\omega_n \xi) + \sum_{n=1}^{\infty} \left[ b_n A_n(T) - a_n B_n(T) - \frac{\omega_n b_n + 2\alpha a_n}{\omega_n^2 + (2\alpha)^2}\,(1 - e^{-\alpha T})^2 \right] \sin(\omega_n \xi) \Biggr\} + \langle Z^2(T|\xi)\rangle - Z^2(T). \tag{10.72} $$

Here the following notations were introduced:

$$ A_n(T) \equiv p_n(0, T) - 2 p_n(\alpha, T) + p_n(2\alpha, T), \tag{10.73} $$

with

$$ p_n(\alpha, T) \equiv e^{-\alpha T} \int_0^{T} e^{\alpha t} \sin(\omega_n t)\,dt = \frac{\alpha \sin(\omega_n T) + \omega_n \{e^{-\alpha T} - \cos(\omega_n T)\}}{\omega_n^2 + \alpha^2}, \tag{10.74} $$

and

$$ B_n(T) \equiv q_n(0, T) - 2 q_n(\alpha, T) + q_n(2\alpha, T), \tag{10.75} $$

with

$$ q_n(\alpha, T) \equiv e^{-\alpha T} \int_0^{T} e^{\alpha t} \cos(\omega_n t)\,dt = \frac{\omega_n \sin(\omega_n T) - \alpha \{e^{-\alpha T} - \cos(\omega_n T)\}}{\omega_n^2 + \alpha^2}. \tag{10.76} $$
Equations (10.68)–(10.70) and (10.72)–(10.76) serve as the formal solution for the Feynman-alpha formulae with arbitrary pulse shapes and arbitrary pulsing statistics. In the following they will be used to derive the concrete cases of square and Gaussian pulses and deterministic and stochastic pulsing methods.
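As a numerical sanity check on the formal solution, the closed forms (10.74) and (10.76) for pn(α, T) and qn(α, T) can be compared against direct quadrature; the parameter values below are illustrative only.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the text)
T0, alpha, T = 2.0e-3, 500.0, 3.7e-3   # pulse period (s), prompt alpha (1/s), gate width (s)

def p_n(a, T, wn):
    """Closed form (10.74): e^{-aT} * integral_0^T e^{at} sin(wn t) dt."""
    return (a*np.sin(wn*T) + wn*(np.exp(-a*T) - np.cos(wn*T)))/(wn**2 + a**2)

def q_n(a, T, wn):
    """Closed form (10.76): e^{-aT} * integral_0^T e^{at} cos(wn t) dt."""
    return (wn*np.sin(wn*T) - a*(np.exp(-a*T) - np.cos(wn*T)))/(wn**2 + a**2)

def by_quadrature(a, T, wn, kind, m=200001):
    """The same integral via the composite trapezoidal rule."""
    t = np.linspace(0.0, T, m)
    g = np.sin(wn*t) if kind == "sin" else np.cos(wn*t)
    y = np.exp(a*t)*g
    return np.exp(-a*T)*np.sum(0.5*(y[1:] + y[:-1]))*(t[1] - t[0])

for n in (1, 2, 5):
    wn = 2.0*n*np.pi/T0
    for a in (0.0, alpha, 2.0*alpha):   # the three damping values entering (10.73) and (10.75)
        assert abs(p_n(a, T, wn) - by_quadrature(a, T, wn, "sin")) < 1e-8
        assert abs(q_n(a, T, wn) - by_quadrature(a, T, wn, "cos")) < 1e-8
print("closed forms (10.74) and (10.76) agree with direct quadrature")
```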
10.2.4 Treatment of the pulse shapes and pulsing methods

Square pulses
The sequence of square pulses is described by

$$ f(t) = \Delta(t, W), \tag{10.77} $$

where the square window function was defined in (9.79). From this one has

$$ \tilde{f}(s) = \frac{1 - e^{-sW}}{s}. \tag{10.78} $$

Putting this into (10.61), (10.64) and (10.65) yields the following results for a0, an and bn:

$$ a_0 = \frac{W}{\alpha T_0}, \tag{10.79} $$

$$ a_n = \frac{\omega_n \sin(\omega_n W) + \alpha \{1 - \cos(\omega_n W)\}}{n\pi\,(\omega_n^2 + \alpha^2)}, \tag{10.80} $$

and

$$ b_n = \frac{\alpha \sin(\omega_n W) - \omega_n \{1 - \cos(\omega_n W)\}}{n\pi\,(\omega_n^2 + \alpha^2)}. \tag{10.81} $$
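The equivalence of the closed forms (10.80)–(10.81) with the generic residue formulae (10.64)–(10.65) is easy to check numerically; the sketch below does so for a square pulse (all parameter values are illustrative assumptions).

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the text)
T0, W, alpha = 2.0e-3, 0.5e-3, 500.0   # pulse period (s), width (s), prompt alpha (1/s)

def f_tilde(s):
    """Laplace transform of the unit-height square pulse, Eq. (10.78)."""
    return (1.0 - np.exp(-s*W))/s

def coeffs_generic(n):
    """a_n, b_n from the generic residue formulae (10.64)-(10.65), via Re/Im of f~(i w_n)."""
    wn = 2.0*n*np.pi/T0
    ft = f_tilde(1j*wn)
    d = T0*(wn**2 + alpha**2)
    an = 2.0*(wn*ft.real - alpha*ft.imag)/d
    bn = 2.0*(alpha*ft.real + wn*ft.imag)/d
    return an, bn

def coeffs_square(n):
    """Closed forms (10.80)-(10.81), obtained by inserting (10.78)."""
    wn = 2.0*n*np.pi/T0
    d = n*np.pi*(wn**2 + alpha**2)
    an = (wn*np.sin(wn*W) + alpha*(1.0 - np.cos(wn*W)))/d
    bn = (alpha*np.sin(wn*W) - wn*(1.0 - np.cos(wn*W)))/d
    return an, bn

for n in range(1, 11):
    assert np.allclose(coeffs_generic(n), coeffs_square(n), rtol=1e-10, atol=1e-18)

# a0 = W/(alpha*T0), Eq. (10.79): f~(s) tends to W as s -> 0
assert abs(f_tilde(1e-6) - W) < 1e-5*W
print("generic residue coefficients match the square-pulse closed forms")
```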
Gaussian pulses
The reason for treating Gaussian-like pulse shapes is that there is experimental evidence that neutron generator pulses may have a bell-shaped time dependence, which can be approximated, within the region where they differ from zero, by a Gaussian fit. This was the case, for instance, with the GENEPI pulsed neutron generator, used in the European research programme MUSE with the fast research reactor core MASURCA [83]. The pulse shape of the generator is shown in Fig. 10.2. Such a pulse can be approximated by a Gaussian whose overwhelming area is embedded into a square window of width W if W ≈ 2 × FWHM (full width at half maximum). This is mostly for the purpose of easy comparison with the case of square pulses. Then one has

$$ f(t) = \exp\left\{ -\frac{(t - W/2)^2}{2\sigma^2} \right\} \Delta(t, W), \tag{10.82} $$

where the window function Δ(t, T) is the same as before. The normalisation of the pulse shape f(t) was chosen such that the maximum value of the pulse function is unity, i.e. the same as with the square shape, just for practical purposes. With this, f̃(s) is given as

$$ \tilde{f}(s) = \int_0^{\infty} e^{-st} \exp\left\{ -\frac{(t - W/2)^2}{2\sigma^2} \right\} dt \approx \int_{-\infty}^{\infty} e^{-st} \exp\left\{ -\frac{(t - W/2)^2}{2\sigma^2} \right\} dt = \sqrt{2\pi}\,\sigma\, e^{-sW/2 + (\sigma s)^2/2}. \tag{10.83} $$
Figure 10.2 Time shape of the neutron pulse of GENEPI, obtained by detection of the associated alpha particles of the T(d,n)⁴He reaction in a silicon detector; count rate (arbitrary units) versus time (μs), measurement and Gaussian fit. The Gaussian fit gives a FWHM of 400 ns (from [84]).
By choosing σ = W/8, the cut-off values of f(t) become f(0) = f(W) ≈ 3.4 · 10⁻⁴; hence the error of the approximation in (10.83) is indeed negligible. Substituting (10.83) into (10.61), (10.64) and (10.65) yields

$$ a_0 = \frac{\sqrt{2\pi}\,\sigma}{\alpha T_0}, \tag{10.84} $$

$$ a_n = \frac{2\sqrt{2\pi}\,\sigma \exp\left(-\frac{(\omega_n \sigma)^2}{2}\right)}{T_0\,(\omega_n^2 + \alpha^2)} \left[ \alpha \sin\left(\frac{\omega_n W}{2}\right) + \omega_n \cos\left(\frac{\omega_n W}{2}\right) \right], \tag{10.85} $$

$$ b_n = \frac{2\sqrt{2\pi}\,\sigma \exp\left(-\frac{(\omega_n \sigma)^2}{2}\right)}{T_0\,(\omega_n^2 + \alpha^2)} \left[ \alpha \cos\left(\frac{\omega_n W}{2}\right) - \omega_n \sin\left(\frac{\omega_n W}{2}\right) \right]. \tag{10.86} $$

The above two sets of parameters a0, an and bn can now be used in the formulae for deterministic and stochastic pulsing, derived below.
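The accuracy of the extended-limits approximation in (10.83) can be checked numerically; with σ = W/8 the truncated and untruncated transforms agree to a few parts in 10⁴ at moderate |s| (the parameter values below are illustrative assumptions).

```python
import numpy as np

W = 1.0e-3                     # pulse width (s), illustrative
sigma = W/8.0                  # gives cut-off values f(0) = f(W) ~ 3.4e-4
T0 = 2.0e-3

def f_tilde_exact(s, m=200001):
    """Numerical Laplace transform of the truncated Gaussian pulse (10.82),
    by the composite trapezoidal rule on [0, W]."""
    t = np.linspace(0.0, W, m)
    y = np.exp(-s*t)*np.exp(-(t - W/2)**2/(2*sigma**2))
    return np.sum(0.5*(y[1:] + y[:-1]))*(t[1] - t[0])

def f_tilde_approx(s):
    """Closed form of (10.83), integration limits extended to +-infinity."""
    return np.sqrt(2*np.pi)*sigma*np.exp(-s*W/2 + (sigma*s)**2/2)

for s in (0.0, 500.0, 1j*2*np.pi/T0, 1j*6*np.pi/T0):
    rel = abs(f_tilde_exact(s) - f_tilde_approx(s))/abs(f_tilde_approx(s))
    assert rel < 1e-3
print("Gaussian closed form (10.83) accurate to better than 0.1% at the tested s values")
```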
Deterministic pulsing
This case is equivalent to substituting ξ = 0 into the relevant formulae (10.68) and (10.72). All expressions containing the sin(ωn ξ) terms disappear, and likewise the last two terms of (10.72) cancel. The results for the mean and the modified variance become as follows (for simplicity, the expectation sign indicating the averaging w.r.t. ξ is omitted):

$$ Z(T) = S_0 \lambda_d \left[ a_0 T + \sum_{n=1}^{\infty} \frac{a_n \{1 - \cos(\omega_n T)\} + b_n \sin(\omega_n T)}{\omega_n} \right], \tag{10.87} $$

and

$$ \mu_{ZZ}(T) = \frac{S_0 \lambda_d^2 \lambda_f \langle \nu(\nu - 1)\rangle}{\alpha^2} \left\{ a_0 T \left( 1 - \frac{1 - e^{-\alpha T}}{\alpha T} \right) + \sum_{n=1}^{\infty} \left[ a_n A_n(T) + b_n B_n(T) \right] + \sum_{n=1}^{\infty} \frac{-\omega_n a_n + 2\alpha b_n}{\omega_n^2 + (2\alpha)^2}\,(1 - e^{-\alpha T})^2 \right\}. \tag{10.88} $$

The particular cases of square and Gaussian pulses can then be obtained by substituting (10.79)–(10.81) or (10.84)–(10.86), respectively, into (10.87) and (10.88). As seen, the source strength S0 cancels from the ratio Y(T) = μZZ(T)/Z(T), similarly to the case with continuous sources. Since both Z(T) and μZZ(T) are asymptotically linearly dependent on the measurement time T, the Y(T) curve goes into saturation, similarly to the case with continuous sources. The formulae above also show that the oscillating deviations from the continuous Feynman curve tend to zero asymptotically with the time gate T. The relative weight of the oscillations depends on the relationship between the pulse angular frequency ω = 2π/T0 and the prompt neutron time constant α. For the case ω ≫ α, i.e. high pulse repetition frequency, the oscillations are small. Furthermore, with increasing pulse width W the relative magnitude of the oscillations decreases. For short pulses with low pulse frequency, ω ≤ α, the deviations from the smooth Y(T) become quite significant. The magnitude of the oscillations depends rather weakly on the level of subcriticality, through the parameter α in the formulae, and it increases with increasing subcriticality. The above qualitative analysis is illustrated quantitatively in Fig. 10.3, which shows deterministic Feynman Y(T) functions for various pulse frequencies, pulse forms and pulse widths. As Fig. 10.3d shows, for low frequency pulsing the pulse shape does have a certain influence on the fine structure of the Y(T) function.
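The deterministic-pulsing formulae (10.87)–(10.88) are straightforward to evaluate numerically. The sketch below does so for square pulses, using the coefficients (10.79)–(10.81) and the auxiliary integrals (10.73)–(10.76); all physical parameter values are illustrative assumptions, not values from the text.

```python
import numpy as np

# Illustrative parameters (assumptions for demonstration only)
T0, W = 1.0e-3, 0.5e-3                   # pulse period and width (s)
alpha = 500.0                            # prompt neutron decay constant (1/s)
lam_d, lam_f, diven = 100.0, 2500.0, 0.8 # detection/fission intensities, <nu(nu-1)>
S0 = 1.0e3                               # source intensity factor
NMAX = 200                               # Fourier series truncation

n = np.arange(1, NMAX + 1)
wn = 2.0*n*np.pi/T0
a0 = W/(alpha*T0)                        # Eq. (10.79)
an = (wn*np.sin(wn*W) + alpha*(1.0 - np.cos(wn*W)))/(n*np.pi*(wn**2 + alpha**2))
bn = (alpha*np.sin(wn*W) - wn*(1.0 - np.cos(wn*W)))/(n*np.pi*(wn**2 + alpha**2))

def p_(a, T):  # Eq. (10.74), vectorised over n
    return (a*np.sin(wn*T) + wn*(np.exp(-a*T) - np.cos(wn*T)))/(wn**2 + a**2)

def q_(a, T):  # Eq. (10.76), vectorised over n
    return (wn*np.sin(wn*T) - a*(np.exp(-a*T) - np.cos(wn*T)))/(wn**2 + a**2)

def Z_det(T):
    """Mean detector count for deterministic pulsing, Eq. (10.87)."""
    return S0*lam_d*(a0*T + np.sum((an*(1 - np.cos(wn*T)) + bn*np.sin(wn*T))/wn))

def mu_det(T):
    """Modified variance for deterministic pulsing, Eq. (10.88)."""
    A = p_(0.0, T) - 2*p_(alpha, T) + p_(2*alpha, T)
    B = q_(0.0, T) - 2*q_(alpha, T) + q_(2*alpha, T)
    s = np.sum(an*A + bn*B) + np.sum((-wn*an + 2*alpha*bn)/(wn**2 + (2*alpha)**2))*(1 - np.exp(-alpha*T))**2
    return S0*lam_d**2*lam_f*diven/alpha**2*(a0*T*(1 - (1 - np.exp(-alpha*T))/(alpha*T)) + s)

def Y_det(T):
    return mu_det(T)/Z_det(T)

# At gate widths equal to whole multiples of T0 the oscillating part of Z vanishes:
assert abs(Z_det(10*T0) - S0*lam_d*a0*10*T0) < 1e-9*Z_det(10*T0)
print("Y(T) at T = 10*T0:", Y_det(10*T0))
```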
Figure 10.3 Theoretical Feynman Y-curves for deterministic pulsing, with square and Gaussian pulses. The four panels show Y(T) versus T (ms): panels with (W = 1 ms, T0 = 2 ms) and (W = 0.001 ms, T0 = 2 ms) compare the pulsed case with a steady source; panels (c) (W = 0.5 ms, T0 = 2 ms, σ = 0.125 ms) and (d) (W = 0.5 ms, T0 = 5 ms, σ = 0.125 ms) compare Gaussian and square pulses with a steady source.
Stochastic pulsing
In this case, as (10.27) shows, one needs to integrate (10.68) and (10.72) over the pulse period. Then all harmonic functions disappear in (10.68), and also the ones seen explicitly in (10.72). The only nontrivial case is the integration of the second last term of (10.72), ⟨Z²(T|ξ)⟩, which contains second order products of the sine and cosine functions, i.e. requires the evaluation of

$$ \frac{1}{T_0} \int_0^{T_0} \left[ \sum_{n=1}^{\infty} \left\{ C_n(T) \cos(\omega_n \xi) + S_n(T) \sin(\omega_n \xi) \right\} \right]^2 d\xi. \tag{10.89} $$

However, substituting the definitions of Cn and Sn, and making use of the orthogonality relationships between the trigonometric functions, this integral is readily shown to be equal to

$$ \sum_{n=1}^{\infty} \frac{C_n^2(T) + S_n^2(T)}{2} = 2 \sum_{n=1}^{\infty} \frac{a_n^2 + b_n^2}{\omega_n^2} \sin^2\!\left( \frac{\omega_n}{2} T \right). \tag{10.90} $$

Because the expected value of the detector counts is an especially simple function for the case of stochastic pulsing, i.e.

$$ Z(T) = S_0 \lambda_d a_0 T, \tag{10.91} $$

it is practical to give the Y(T) function directly. It has the relatively simple form

$$ Y(T) = \frac{\lambda_d \lambda_f \langle \nu(\nu - 1)\rangle}{\alpha^2} \left( 1 - \frac{1 - e^{-\alpha T}}{\alpha T} \right) + \frac{2 S_0 \lambda_d}{a_0 T} \sum_{n=1}^{\infty} \frac{a_n^2 + b_n^2}{\omega_n^2} \sin^2\!\left( \frac{\omega_n}{2} T \right). \tag{10.92} $$
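A sketch of the stochastic-pulsing Feynman Y(T) of (10.92) with square-pulse coefficients; the physical parameters are illustrative assumptions. At gate widths equal to whole multiples of T0 the oscillating term vanishes exactly, which the assertions check.

```python
import numpy as np

# Illustrative parameters (assumptions for demonstration only)
T0, W = 15.0e-3, 1.0e-3                  # pulse period and width (s), cf. Fig. 10.4
alpha = 500.0                            # prompt neutron decay constant (1/s)
lam_d, lam_f, diven = 100.0, 2500.0, 0.8
S0 = 1.0e3
NMAX = 500

n = np.arange(1, NMAX + 1)
wn = 2.0*n*np.pi/T0
a0 = W/(alpha*T0)
an = (wn*np.sin(wn*W) + alpha*(1.0 - np.cos(wn*W)))/(n*np.pi*(wn**2 + alpha**2))
bn = (alpha*np.sin(wn*W) - wn*(1.0 - np.cos(wn*W)))/(n*np.pi*(wn**2 + alpha**2))

def Y_smooth(T):
    """Continuous-source (smooth) part of Eq. (10.92)."""
    return lam_d*lam_f*diven/alpha**2*(1.0 - (1.0 - np.exp(-alpha*T))/(alpha*T))

def Y_stochastic(T):
    """Feynman Y(T) for stochastic pulsing, Eq. (10.92), square pulses."""
    osc = 2.0*S0*lam_d/(a0*T)*np.sum((an**2 + bn**2)/wn**2*np.sin(wn*T/2.0)**2)
    return Y_smooth(T) + osc

# The oscillating term is non-negative and vanishes exactly at T = m*T0:
assert abs(Y_stochastic(4*T0) - Y_smooth(4*T0)) < 1e-12
for T in (0.2*T0, 0.5*T0, 1.7*T0):
    assert Y_stochastic(T) > Y_smooth(T)
print("oscillating part of (10.92) vanishes at whole multiples of T0, positive elsewhere")
```

Note that, unlike in the deterministic case, the oscillating contribution here scales with the source strength S0, in line with the discussion below.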
[Figure 10.4: four panels showing Y(T) versus T (ms), for T up to 50 ms, with α = 300, 400, 500 and 600 s⁻¹, each comparing stochastic pulsing with a steady source.]
Figure 10.4 Theoretical Feynman Y-curves for stochastic pulsing, with square pulses; W = 1 ms, T0 = 15 ms.

Again, the particular cases of square and Gaussian pulses are obtained by substituting (10.79)–(10.81) or (10.84)–(10.86), respectively, into (10.92). As the above shows, one significant difference between the deterministic and stochastic cases concerns the relative weight of the oscillating part to the smooth part. Most notably, in the stochastic pulsing case the oscillating part is linear in the source strength, i.e. the source strength does not disappear from the variance to mean. This is a clear consequence of the 'randomising' of the pulse, which leads to a qualitatively different dispersion of the source neutrons, and in particular to the non-zero value of the auto-covariance of the source function, as discussed earlier. This property is then transferred to the statistics of the neutron chain and to that of the detector counts. It is interesting that the Diven factor of fission, which is present in all formulae of the continuous source and of deterministic pulsing, is absent from the oscillating part of the Feynman formula for stochastic pulsing. This can be expressed by saying that the oscillating part is controlled by the source statistics instead of the statistics of the multiplication in the fission chain. This also means that with a strong source the relative oscillations become large. There are indications in recent experiments that this indeed occurs [85]. A second feature is that the smooth part of (10.92) is proportional to 1/α, whereas the oscillating part, through the factor 1/a0, is proportional to α. This means that the relative weight of the oscillating part increases faster with increasing subcriticality than in the case of deterministic pulsing. Again, this can be said to be due to the fact that the oscillating part is influenced mostly by the source properties, whose significance increases in deep subcritical systems. Some representative calculated values of the stochastically pulsed Feynman-alpha curves are shown in Fig. 10.4, to illustrate the above qualitative analysis.

The above formulae and their application to the evaluation of experiments were verified in measurements. Among others, they were applied in the EU-supported project MUSE [66, 85]. Another dedicated series of measurements was made at the KUCA reactor (Kyoto University Critical Assembly), using a pulsed neutron generator. The measurements were made at various subcriticality levels, with various pulse repetition frequencies (or periods) and pulse widths. Details of these measurements are found in [82, 86]. The reference reactivities and the corresponding alpha values were determined by the rod drop and the pulsed neutron technique, respectively. The data could be evaluated both by the deterministic and the stochastic method. The results of the deterministic method for the four different subcriticality levels are shown in Fig. 10.5, and those for the
stochastic pulsing in Fig. 10.6.

Figure 10.5 Measured and fitted results for the deterministic pulsing (from [87]). Y value versus gate width T (s); panels: (a) 0.65$, α(Ref.) = 266 ± 2 s⁻¹, α(Fitted) = 241 ± 1 s⁻¹; (b) 1.30$, α(Ref.) = 369 ± 3 s⁻¹, α(Fitted) = 355 ± 1 s⁻¹; (c) 2.07$, α(Ref.) = 494 ± 3 s⁻¹, α(Fitted) = 500 ± 2 s⁻¹; (d) 2.72$, α(Ref.) = 598 ± 4 s⁻¹, α(Fitted) = 607 ± 4 s⁻¹.

The experimentally determined Y values are shown as symbols together with the experimental errors, and the fitted curves are shown as continuous lines. The reference alpha values and the ones obtained from the curve fitting are shown in all four sub-figures. The fitting was made by assuming a square pulse form, based on experimental evidence with the neutron generator. The agreement between the experimental and the fitted curves is generally good. Especially for the stochastic pulsing, the agreement of the fitted curves and alpha parameters with the measurements and the reference alpha values is excellent. There are larger deviations in the case of the deterministic method. As will be seen, these deviations remain even when the measurement is modelled with periodic, instantaneous (infinitely narrow) pulses. This indicates that some assumptions of the model do not perfectly match the physical situation. One explanation was given in [66], showing that the simultaneous existence of an intrinsic, continuous source together with the pulsed source changes the shape of the Feynman-Y curve such that the relative amplitude of the oscillating part becomes smaller, as in the measurements above. In the stochastic pulsing method, the averaging procedure corresponding to the random distribution of the start of the measurement eliminates the effect of such unaccounted-for phenomena, which explains why this latter method gives better results. In addition, in the stochastic pulsing method an extra free parameter occurs, the source intensity, which multiplies only the oscillatory part and hence adjusts the relative weight of the smooth and the oscillatory parts, making it possible to compensate for the effect of an inherent source. No similar possibility exists in the deterministic pulsing method; rather, one has to modify the formula by adding a traditional Feynman-Y curve.
At any rate, the results here and in other publications suggest that the stochastic method appears to be more suitable for evaluating measurements than the deterministic pulsing.

10.2.5 Rossi-alpha with pulsed Poisson source
Based on the previous material, the Rossi-alpha formula for pulsed sources can easily be derived. As we have seen in the previous chapter, the difference between the Feynman- and Rossi-alpha formulae lies in the equation connecting the single-particle-induced and source-induced distributions: instead of one-point distributions, one has to handle two-point ones. Then, as was seen in the previous section, the
Figure 10.6 Measured and fitted results for the stochastic pulsing (from [87]). Y value versus gate width T (s); panels: (a) 0.65$, α(Ref.) = 266 ± 2 s⁻¹, α(Fitted) = 253 ± 1 s⁻¹; (b) 1.30$, α(Ref.) = 369 ± 3 s⁻¹, α(Fitted) = 373 ± 2 s⁻¹; (c) 2.07$, α(Ref.) = 494 ± 3 s⁻¹, α(Fitted) = 495 ± 3 s⁻¹; (d) 2.72$, α(Ref.) = 598 ± 4 s⁻¹, α(Fitted) = 601 ± 4 s⁻¹.
formula has to be modified in order to account for the time-dependence of the intensity of the source in the pulsed case. On the other hand, the single-particle-induced distributions do not change. In practice, one can therefore nearly ‘taylor together’ the solution from already known elements. For brevity, for the Rossi-alpha method only the case of the stochastic pulsing will be treated. Partly because experience shows that evaluating the measurements with the method of stochastic pulsing yields more consistent results than the deterministic pulsing; and partly because it is more similar to the ‘spirit’ of the original Rossi-alpha measurement where the counting gate is opened at a detection event in the first detector, which happens randomly in relation to the pulsing. Hence we start with an equation, corresponding to (10.17), but such that the time-dependent source intensity is accounted for, and simplified in the sense that the delayed neutrons are neglected: Gst (x1 , v1 ; x2 , v2 , τ|ξ) = lim G(x1 , v1 , t; x2 , v2 , t + τ|ξ) t→∞ t = lim exp S(t |ξ)[g(x1 , v1 , t − t ; x2 , v2 , t + τ − t ) − 1]dt . (10.93) t→∞
0
The factorial moments that are sought have to be calculated from the expectation of the above generating function with respect to ξ, i.e. Gst (x1 , v1 ; x2 , v2 , τ) = lim G(x1 , v1 , t; x2 , v2 , t + τ | ξ) = lim t→∞
t→∞ 0
T0
G(x1 , v1 , t; x2 , v2 , t + τ|ξ)p(ξ)dξ.
(10.94) Again, as in the previous cases of simple Poisson source and a compound Poisson source, the notations on the infinitesimal measurement times dt1 ≡ dt and dt2 ≡ dτ are neglected. The dependence of the distributions on the random variable ξ is denoted. The notations are otherwise standard and do not need explanations. The first moments regarding the number of neutrons N (t|ξ) or the detections Z(t, dt|ξ) at t or t + τ are the same as in the previous section, with straightforward alterations of the detection time from T to dt. The
279
Reactivity Measurements in Accelerator Driven Systems
covariance function of the detector counts in the infinitesimal intervals dt at t and dτ at t + τ can be calculated from (10.93). Similarly to the modified variance of (10.53), one finds that due to the time-dependent stochastic source, the covariance will contain additional terms as compared to the steady source. The result is t CZZ (t, τ) = CZZ (t, τ|ξ)dξ = qzz (t, τ)N (t − t |ξ) dt 0
+ Z(t, dt|ξ)Z(t + τ, dτ|ξ) − Z(t, dt|ξ) Z(t + τ, dτ|ξ) .
(10.95)
Here, ⟨N(t|ξ)⟩ is given in (10.66), the single-particle-induced second factorial moment source term qzz(t, τ) is the same as in (9.160), and ⟨Z(t, dt|ξ)⟩ reads as

$$\langle Z(t,dt\mid\xi)\rangle = \frac{\lambda_d S_0\,\tilde f(s=0)}{\alpha T_0}\,dt - \frac{\lambda_d S_0\,\tilde f(s=-\alpha)}{\alpha}\,\frac{e^{-\alpha(t-\xi)}}{e^{\alpha T_0}-1}\,dt + \lambda_d S_0 \sum_{n=1}^{\infty}\bigl\{C_n(t,dt)\cos(\omega_n\xi) + S_n(t,dt)\sin(\omega_n\xi)\bigr\}, \tag{10.96}$$

where now, due to the infinitesimal length of the measurement time, the coefficients Cn(t, dt) and Sn(t, dt) are given as

$$C_n(t,dt) = \bigl[a_n\sin(\omega_n t) + b_n\cos(\omega_n t)\bigr]\,dt, \quad n = 1, 2, \ldots \tag{10.97}$$

and

$$S_n(t,dt) = \bigl[b_n\sin(\omega_n t) - a_n\cos(\omega_n t)\bigr]\,dt, \quad n = 1, 2, \ldots, \tag{10.98}$$
whereas the coefficients an and bn are the same as in (10.64) and (10.65). As mentioned there, they depend on the actual pulse shape. The calculation goes along the same lines as in the previous section. The only term whose calculation requires some effort is the second last term of (10.95). However, due to the orthogonality of the trigonometric functions over the period T0, over which the averaging in ξ is to be performed, this poses no problems. The final result, corresponding to the asymptotic case, will be written here in the alternative form of the Rossi-alpha formula, based on the function R(τ) as defined in (9.163) for the traditional case. For the pulsed source case treated here it is expressed as

$$R(\tau)\,d\tau \equiv \lim_{t\to\infty}\frac{\overline{\langle Z(t,dt\mid\xi)\,Z(t+\tau,d\tau\mid\xi)\rangle}}{\overline{\langle Z(t,dt\mid\xi)\rangle}} = \frac{\overline{C}_{ZZ}(\tau)}{\bar Z\,dt} + \bar Z\,d\tau.$$

A short calculation yields

$$R(\tau)\,d\tau = \frac{\lambda_d\lambda_f\langle\nu(\nu-1)\rangle}{2\alpha}\,e^{-\alpha\tau}\,d\tau + \frac{\lambda_d S_0\,\tilde f(s=0)}{\alpha T_0}\,d\tau + \frac{\lambda_d S_0\,\alpha T_0}{2\,\tilde f(s=0)}\sum_{n=1}^{\infty}\bigl(a_n^2+b_n^2\bigr)\cos(\omega_n\tau)\,d\tau. \tag{10.99}$$
Equation (10.99) shows that, similarly to the Feynman-alpha formula for randomly pulsed sources, the pulsed Rossi-alpha formula consists of two terms equivalent (up to an amplitude scaling) to the Rossi-alpha formula of the traditional case, and an additional non-negative oscillating term. This last term shows periodic, undamped oscillations with the pulse period T0. This is a definite difference compared to the pulsed Feynman-alpha formula, in which the oscillating part is not periodic; rather, the oscillations have a decaying amplitude. The periodic character of the oscillatory part in the pulsed Rossi-alpha formula makes it very suitable for evaluating measurements. Consideration of the various pulse shapes goes exactly as in the previous sections, and the corresponding coefficients an and bn have already been calculated. For square pulses, i.e. f (t) = (t, W ),
Imre Pázsit & Lénárd Pál
Figure 10.7 An example of a stochastically pulsed Rossi-alpha curve for square pulses, T0 = 2 ms, W = 10−3 ms. The plot shows R(τ) (s−1) against the time interval (s), for both the full expression (10.101) and its first two terms.
these are given by (10.80) and (10.81), yielding

$$a_n^2 + b_n^2 = \frac{4}{n^2\pi^2\,(\alpha^2+\omega_n^2)}\,\sin^2\!\left(\frac{\omega_n W}{2}\right), \quad n = 1, 2, \ldots. \tag{10.100}$$
Hence the Rossi-alpha formula for square pulses becomes

$$R(\tau) = \frac{\lambda_d\lambda_f\langle\nu(\nu-1)\rangle}{2\alpha}\,e^{-\alpha\tau} + \frac{\lambda_d S_0 W}{\alpha T_0} + \frac{2\lambda_d S_0\,\alpha T_0}{\pi^2 W}\sum_{n=1}^{\infty}\frac{1}{n^2\,(\alpha^2+\omega_n^2)}\,\sin^2\!\left(\frac{\omega_n W}{2}\right)\cos(\omega_n\tau). \tag{10.101}$$
For Gaussian pulse shapes, an and bn are given by (10.85) and (10.86), thus

$$a_n^2 + b_n^2 = \frac{8\pi\sigma^2}{T_0^2\,(\alpha^2+\omega_n^2)}\,e^{-\omega_n^2\sigma^2}, \quad n = 1, 2, \ldots, \tag{10.102}$$

and hence

$$R(\tau) = \frac{\lambda_d\lambda_f\langle\nu(\nu-1)\rangle}{2\alpha}\,e^{-\alpha\tau} + \frac{\lambda_d S_0\,\sqrt{2\pi}\,\sigma}{\alpha T_0} + \frac{2\lambda_d S_0\,\alpha\,\sqrt{2\pi}\,\sigma}{T_0}\sum_{n=1}^{\infty}\frac{1}{\alpha^2+\omega_n^2}\,e^{-\omega_n^2\sigma^2}\cos(\omega_n\tau). \tag{10.103}$$
Figure 10.7 shows a plot of the pulsed Rossi-alpha formula R(τ) for square pulses, i.e. (10.101). The parameters used correspond to a typical thermal reactor system [86]. The curve demonstrates the periodic oscillations around the traditional Rossi-alpha term, i.e. the first two terms of (10.101). The fact that the pulsed Rossi-alpha formula consists of a traditional Rossi-alpha expression and a periodic part can be utilised in an interesting way. Normally, one would try to fit a curve to the expression (10.101). This is, however, a somewhat complicated task, in view of the fact that (10.101) contains a series expansion in which several terms need to be kept for good accuracy, since for narrow pulses the Rossi-alpha curve has sharp edges, as is seen in Fig. 10.7. This is a definite difference compared to the stochastically pulsed Feynman method, whose curve remains smooth (edge-free) even for very narrow pulses. However, one can avoid fitting the highly oscillating terms by determining them from the experiment itself, taking the oscillatory part from the section of the covariance curve at large delay times τ.
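A short numerical sketch can make the fitting difficulty concrete. The code below evaluates the square-pulse formula with the series truncated; the parameter values and the amplitudes A1, A2 and S_AMP are illustrative stand-ins for the physical prefactors (they are not the parameters of [86]):

```python
import math

# Truncated evaluation of the square-pulse Rossi-alpha formula (10.101).
# ALPHA, T0, W and the amplitudes A1, A2, S_AMP are illustrative only.
ALPHA, T0, W = 266.0, 2e-3, 1e-4   # decay constant (1/s), period (s), width (s)

def rossi_alpha_square(tau, n_terms=2000, A1=1.0, A2=0.1, S_AMP=1.0):
    """Smooth (correlated + uncorrelated) part plus the truncated series."""
    smooth = A1 * math.exp(-ALPHA * tau) + A2
    series = 0.0
    for n in range(1, n_terms + 1):
        wn = 2.0 * math.pi * n / T0
        series += (math.sin(wn * W / 2.0) ** 2
                   / (n ** 2 * (ALPHA ** 2 + wn ** 2))) * math.cos(wn * tau)
    return smooth + S_AMP * 2.0 * ALPHA * T0 / (math.pi ** 2 * W) * series

# The oscillating part alone (A1 = A2 = 0) is strictly periodic in T0:
osc = lambda tau: rossi_alpha_square(tau, A1=0.0, A2=0.0)
```

Since every term contains cos(ωnτ) with ωn = 2πn/T0, the truncated sum repeats exactly with period T0 — which is what allows it to be read off from the tail of a measured curve — while the number of terms needed for the sharp edges grows as the pulse width W shrinks.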
Figure 10.8 Result of a pulsed Rossi-alpha experiment, R(τ) in (100 μs)−1 versus time interval (s) (−ρ = 0.65$, T0 = 20 ms). From Kitamura et al. [86].

Figure 10.9 The non-oscillating component of the Rossi-alpha curve, showing the measured points and the fitted curve (−ρ = 0.65$, T0 = 20 ms). From Kitamura et al. [86].
This has been demonstrated in an experiment performed in the KUCA using a D–T pulsed neutron source [86]. Figure 10.8 shows an experimental result of the pulsed Rossi-alpha method. In Fig. 10.9, the traditional Rossi-alpha curve is shown, which was obtained by subtracting the section of the experimental curve ranging from 0.08 s to 0.10 s, which contains only the periodic term, from the curve ranging from 0.00 s to 0.08 s in Fig. 10.8. The result of a single exponential fitting is also shown in Fig. 10.9. The resulting decay constant, α ≈ 263 ± 1 s−1, showed good agreement with that obtained by the pulsed neutron source method performed with the same core configuration. More examples are given in [82, 86]. Further applications of the pulsed Rossi-alpha method in experiments are found in [85].
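The subtraction procedure can be illustrated with synthetic data. In the sketch below the 'measured' curve is generated from a decaying exponential plus a constant and an arbitrary T0-periodic function (the numbers are invented for illustration, not the KUCA data); the last full period of the tail serves as the template that is subtracted:

```python
import math

# Synthetic demonstration of the tail-subtraction trick: the 'measured'
# curve is modelled as A*exp(-ALPHA*tau) + C + p(tau), with p periodic in
# T0. All numbers are invented for illustration (not the KUCA data).
ALPHA, A, C, T0 = 263.0, 0.04, 0.01, 0.02
n_per = 200                   # samples per pulse period
dt = T0 / n_per
p = lambda t: 0.005 * math.cos(2 * math.pi * (t % T0) / T0) ** 2

taus = [i * dt for i in range(5 * n_per)]             # 0 .. 0.10 s
R = [A * math.exp(-ALPHA * t) + C + p(t) for t in taus]

# At large tau the exponential has decayed, so the last full period
# (0.08-0.10 s) contains only C + p(tau); subtracting it phase-aligned
# from the whole curve leaves the bare exponential.
template = R[-n_per:]
bare = [r - template[i % n_per] for i, r in enumerate(R)]

# Single-exponential fit by log-linear least squares over the first period.
xs, ys = taus[:n_per], [math.log(b) for b in bare[:n_per]]
n, sx, sy = len(xs), sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
alpha_est = -(n * sxy - sx * sy) / (n * sxx - sx * sx)
print(round(alpha_est, 2))    # recovers the decay constant, ~263
```

Because the tail template carries both the constant and the periodic term, the subtracted curve is a clean exponential and a simple log-linear fit recovers α without modelling the oscillating series at all.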
10.3 Pulsed Compound Poisson Source with Finite Width The above results can be extended to more complicated models with the same formalism. One extension is to assume that the statistics of the source neutron injection within the pulse follows a compound Poisson distribution. Such a case would correspond to a spallation based accelerator with a finite width of proton pulses. Another extension is to generalise the description to the inclusion of delayed neutrons in the branching process. The first of these two extensions will be described briefly below. The case of delayed neutrons is not considered here; such results can be found in [88].
The extension of the results to a compound inhomogeneous Poisson source is straightforward, based on the treatment of the previous cases. From combining (10.7) and (10.44), one has

$$G(x,v,t\mid\xi) = \exp\left\{\int_0^t S(t'\mid\xi)\,\bigl\{r[g(x,v,t-t')]-1\bigr\}\,dt'\right\}, \tag{10.104}$$

where, as before, $r(z) = \sum_n p_q(n)\,z^n$ is the generating function of the source probability distribution pq(n). The calculations can be performed with basically the same techniques as in the foregoing, although they are more involved. Here we only quote the final results,² using the same notations as before.
Feynman-alpha, deterministic pulsing

The Feynman Y-term is obtained as

$$
\begin{aligned}
Y(T) ={}& Y_0\,\frac{\tilde f(s=0)\,T}{\alpha T_0}\,(1+\delta)\left(1-\frac{1-e^{-\alpha T}}{\alpha T}\right)
+ Y_0\sum_{n=1}^{\infty} a_n\left[1+\frac{b_n\omega_n}{a_n\alpha}\,(1-\delta)\right]A_n(T)\\
&+ Y_0\sum_{n=1}^{\infty} b_n\left[1+\frac{a_n\omega_n}{b_n\alpha}\,(1+\delta)\right]B_n(T)
+ Y_0\sum_{n=1}^{\infty}\frac{(1-e^{-\alpha T})^2}{(2\alpha)^2+\omega_n^2}\left[\omega_n a_n\left(1-\frac{\omega_n^2}{2\alpha^2}\right)+2\alpha b_n\left(1+\frac{\omega_n^2}{2\alpha^2}\right)\right]\delta,
\end{aligned}\tag{10.105}
$$
where

$$Y_0 = \frac{\lambda_d\lambda_f\langle\nu(\nu-1)\rangle}{\alpha^2}\,\frac{\lambda_d\, r_1 S_0}{\overline{Z}(T)}.$$

Here the parameter δ, defined in (10.15),

$$\delta = \frac{r_1 D_q\,(-\rho)}{\bar\nu\, D_{\nu_p}}$$
appears again. Just as in the case of the steady spallation source, the part of the variance-to-mean that exceeds unity is amplified by the appearance of the factor δ in the first term on the right-hand side. This term contains a time-dependent factor which is the same as that of the traditional method. However, the amplitude of the oscillating part is also amplified, through the occurrence of the factor r1 = ⟨q⟩. Hence, as regards the applicability of the pulsed Feynman-alpha method for the evaluation of measurements, the presence of the multiple-emission source may be slightly less advantageous, due to the amplification of the oscillating terms. In addition, since the factor δ is proportional to the absolute value of the reactivity, for systems close to critical the oscillating terms are amplified more than the smooth part.

² These calculations were performed by Y. Kitamura (unpublished work).
Feynman-alpha, stochastic pulsing

As expected, the Feynman-alpha formula for stochastic pulsing takes a much simpler form, even in the case of a multiple emission source. The result is

$$Y(T) = \frac{\lambda_d\lambda_f\langle\nu(\nu-1)\rangle}{\alpha^2}\,(1+\delta)\left(1-\frac{1-e^{-\alpha T}}{\alpha T}\right) + 2\,\frac{\lambda_d\, r_1 S_0\,\alpha T_0}{\tilde f(s=0)\,T}\sum_{n=1}^{\infty}\frac{a_n^2+b_n^2}{\omega_n^2}\,\sin^2\!\left(\frac{\omega_n}{2}T\right). \tag{10.106}$$
Here, the first term on the right-hand side, corresponding to the traditional formula, is modified in exactly the same way as in the case of the steady spallation source. Similarly to the case of deterministic pulsing, the oscillatory part is amplified too, with the factor r1. Thus for large source multiplicities r1 and small subcriticalities |ρ|, the oscillatory parts will be amplified more by the source multiplicity than the smooth part.
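A small numerical sketch shows the structure of the formula, combining (10.106) with the square-pulse coefficients (10.100) and assuming f̃(s = 0) = W for a square pulse of width W. The amplitude factors YINF and R1S0 are hypothetical placeholders for the physical prefactors:

```python
import math

# Sketch of the stochastically pulsed Feynman-Y (10.106) for square pulses,
# inserting (10.100) for a_n^2 + b_n^2 and assuming f~(s=0) = W. The
# amplitudes YINF and R1S0 and all parameter values are illustrative only.
ALPHA, T0, W = 266.0, 2e-3, 0.5e-3
YINF, R1S0, DELTA = 1.0, 1.0, 0.2

def y_smooth(T):
    return YINF * (1 + DELTA) * (1 - (1 - math.exp(-ALPHA * T)) / (ALPHA * T))

def y_osc(T, n_terms=500):
    s = 0.0
    for n in range(1, n_terms + 1):
        wn = 2 * math.pi * n / T0
        an2_bn2 = 4 / (n ** 2 * math.pi ** 2 * (ALPHA ** 2 + wn ** 2))  # (10.100)
        s += an2_bn2 / wn ** 2 * math.sin(wn * T / 2) ** 2
    return 2 * R1S0 * ALPHA * T0 / (W * T) * s

def feynman_y(T):
    return y_smooth(T) + y_osc(T)
```

Every term of the sum is non-negative, and sin²(ωnT/2) vanishes at gate widths that are integer multiples of T0 — so the oscillating part touches zero there, with the smooth term acting as a lower envelope.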
Rossi-alpha, random pulsing

As is quite predictable at this point, the formula (10.99) is modified into

$$P_r(\tau)\,d\tau = \frac{\lambda_d\lambda_f\langle\nu(\nu-1)\rangle}{2\alpha}\,(1+\delta)\,e^{-\alpha\tau}\,d\tau + \frac{\lambda_d\, r_1 S_0\,\alpha T_0}{2\,\tilde f(s=0)}\sum_{n=1}^{\infty}\bigl(a_n^2+b_n^2\bigr)\cos(\omega_n\tau)\,d\tau. \tag{10.107}$$
The same comments are valid regarding the effect of the source multiplicity as in the case of the Feynman-alpha with random pulsing.
10.4 Periodic Instantaneous Pulses

In this section the case of infinitely narrow pulses, i.e. instantaneous injection, will be treated. The Feynman-alpha formula will be calculated for both deterministic and random pulsing. For the Rossi-alpha, due to the nature of the measurement (the measurement gate is opened at random, through detection of a neutron in the system), only the random pulsing case is interesting. These cases will be treated following an elegant and effective formalism used by Degweker in two publications [60, 65]. The formalism is basically the same as the one developed in Section 3.2 for the distribution of the number of particles, but extended to the distribution of the detections.
10.4.1 Feynman-alpha with deterministic pulsing

It is supposed that pulses are emitted into the system at times t = nT0, with n running through the integers from −∞ to ∞. The measurement starts at t = +0 and lasts over a gate length T, i.e. its start coincides with the arrival of a pulse (hence this case is also called a 'pulse-triggered Feynman-alpha measurement' [65]). Since the pulsing was started at t = −∞, the system is already in a periodically stationary state at t = 0. It is supposed that in one pulse a random number of neutrons is injected with probability distribution pq(n) and corresponding generating function r(z), i.e. the same notations are used as before. The following quantities are defined:

$$p_d(n, t, T) \tag{10.108}$$

is the probability that one single neutron, emitted into the system at time t ≤ T, will lead to n detections during the interval [0, T), and

$$g_d(z, t, T) = \sum_{n=0}^{\infty} z^n\, p_d(n, t, T) \tag{10.109}$$

is its generating function. Likewise,

$$P_d(n, T) \tag{10.110}$$

is the probability that the pulse train as a whole will lead to n detections during [0, T), and

$$G_d(z, T) = \sum_{n=0}^{\infty} z^n\, P_d(n, T) \tag{10.111}$$

is the corresponding generating function. For the calculations, the first two factorial moments of gd(z, t, T) will be needed. These have already been determined in Chapter 4. Due to the independence of the progeny production of the neutrons, partly for all neutrons injected within one pulse and partly for the different pulses, one can write down directly the following equation for the generating function Gd(z, T):

$$G_d(z, T) = \prod_{n=0}^{\infty} r[g_d(z, -nT_0, T)]\;\prod_{k=1}^{[T/T_0]} r[g_d(z, kT_0, T)], \tag{10.112}$$
where [T/T0] stands for the largest integer not exceeding T/T0. Here the fact was accounted for that the measurement starts at t = +0, so the pulse arriving at t = 0 does not fall into the measurement period. The above expression could of course have been written as one single product with n running from −∞ to [T/T0]. However, the factorisation is useful for the calculations, because the one-particle-induced detection generating function gd(z, t, T), and hence also its factorial moments, have different functional forms depending on whether t is negative or positive (i.e. whether the source particles from the pulse arrived inside or outside the measuring period). To keep track of this difference, we shall use the subscripts '1' and '2' for the generating function and its factorial moments for the cases t ≤ 0 and t ≥ 0, respectively, i.e.

$$g_d(1, t, T) = \begin{cases} g_{d1}(1, t, T), & t \le 0,\\ g_{d2}(1, t, T), & t \ge 0. \end{cases} \tag{10.113}$$

For the calculation of the first moment M1(T) = G′d(1, T) one obtains from (10.112)

$$G_d'(1, T) = r_1\left[\sum_{n=0}^{\infty} g_{d1}'(1, -nT_0, T) + \sum_{k=1}^{[T/T_0]} g_{d2}'(1, kT_0, T)\right], \tag{10.114}$$

where r1 is the expectation of the number of neutrons emitted in a pulse. To evaluate (10.114), one needs the expectation g′d(1, t, T). This was given in (4.24), and by a suitable redefinition of the arguments it is obtained as

$$g_d'(1, t, T) = \begin{cases} \dfrac{\lambda_d}{\alpha}\, e^{\alpha t}\,(1 - e^{-\alpha T}) = g_{d1}'(1, t, T), & t \le 0,\\[2mm] \dfrac{\lambda_d}{\alpha}\,\bigl(1 - e^{-\alpha(T-t)}\bigr) = g_{d2}'(1, t, T), & t \ge 0. \end{cases} \tag{10.115}$$

Substituting (10.115) into (10.114) and performing the summations leads to the result

$$M_1(T) = \frac{r_1\lambda_d}{\alpha\,(1-e^{-\alpha T_0})}\left[1 + [T/T_0]\,(1 - e^{-\alpha T_0}) - e^{-\alpha(T - [T/T_0]T_0)}\right] = \frac{r_1\lambda_d}{\alpha\,(1-e^{-\alpha T_0})}\left[1 + [T/T_0]\,(1 - e^{-\alpha T_0}) - e^{-\alpha u}\right], \tag{10.116}$$
where the notation

$$u = T - [T/T_0]\,T_0 \tag{10.117}$$

was introduced, in analogy with its definition and use in Chapter 3. Obviously, 0 ≤ u ≤ T0 is a running parameter, describing the variation of the measurement time between the kth and (k + 1)th period, such that T = kT0 + u, with k = [T/T0]. The second moment M2(T) of Gd(z, T) is calculated from (10.112) by twofold differentiation. A brief calculation yields
$$
\begin{aligned}
M_2(T) ={}& \sum_{n=0}^{\infty}\left[r_2\, g_{d1}'^{\,2}(1,-nT_0,T) + r_1\, g_{d1}''(1,-nT_0,T)\right]
+ r_1^2\sum_{n=0}^{\infty} g_{d1}'(1,-nT_0,T)\sum_{n'\ne n} g_{d1}'(1,-n'T_0,T)\\
&+ \sum_{k=1}^{[T/T_0]}\left[r_2\, g_{d2}'^{\,2}(1,kT_0,T) + r_1\, g_{d2}''(1,kT_0,T)\right]
+ r_1^2\sum_{k=1}^{[T/T_0]} g_{d2}'(1,kT_0,T)\sum_{k'\ne k} g_{d2}'(1,k'T_0,T). \tag{10.118}
\end{aligned}
$$
For the Feynman-alpha one actually needs the modified variance M2(T) − M1²(T). Adding and subtracting the terms n = n′ and subtracting M1²(T) leads to

$$M_2(T) - M_1^2(T) = \sum_{n=0}^{\infty}\left[(r_2-r_1^2)\,g_{d1}'^{\,2}(1,-nT_0,T) + r_1\,g_{d1}''(1,-nT_0,T)\right] + \sum_{n=1}^{[T/T_0]}\left[(r_2-r_1^2)\,g_{d2}'^{\,2}(1,nT_0,T) + r_1\,g_{d2}''(1,nT_0,T)\right]. \tag{10.119}$$
To evaluate (10.119), the second factorial moment of the single-particle-induced detector count is needed. From (4.37), with a proper change of arguments, it is given as

$$g_d''(1,t,T) = \begin{cases} g_{d1}''(1,t,T), & t \le 0,\\ g_{d2}''(1,t,T), & t \ge 0, \end{cases} \qquad = \frac{\lambda_d^2\,\lambda_f\langle\nu(\nu-1)\rangle}{\alpha^3}\begin{cases} U^{(-)}(t,T), & t \le 0,\\ U^{(+)}(t,T), & t \ge 0, \end{cases} \tag{10.120}$$

where

$$U^{(-)}(t,T) = e^{\alpha t}\left[1 - 2\alpha T\,e^{-\alpha T} - e^{-2\alpha T} - (1-e^{-\alpha T})^2\,(e^{\alpha t}-1)\right]$$

and

$$U^{(+)}(t,T) = 1 - 2\alpha(T-t)\,e^{-\alpha(T-t)} - e^{-2\alpha(T-t)}.$$
Using (10.120) in (10.119) and performing the summations leads to the following result for the Feynman Y(T) function:

$$
\begin{aligned}
Y(T) = \frac{M_2(T)-M_1^2(T)}{M_1(T)}
={}& \frac{\lambda_d^2\,\lambda_f\, r_1\langle\nu(\nu-1)\rangle}{M_1(T)\,\alpha^3}\left\{[T/T_0] + \frac{-2\alpha u\,e^{-\alpha u} + 2\,(1-e^{-\alpha T})}{1-e^{-\alpha T_0}} - \frac{e^{-2\alpha u}-e^{-2\alpha T}+(1-e^{-\alpha T})^2}{1-e^{-2\alpha T_0}} - \frac{2\alpha T_0\,e^{-\alpha T_0}\,(e^{-\alpha u}-e^{-\alpha T})}{(1-e^{-\alpha T_0})^2}\right\}\\
&+ \frac{\lambda_d^2\,(r_2-r_1^2)}{M_1(T)\,\alpha^2}\left\{[T/T_0] + \frac{2\,(e^{-\alpha u}-e^{-\alpha T})}{1-e^{-\alpha T_0}} - \frac{e^{-2\alpha u}-e^{-2\alpha T}+(1-e^{-\alpha T})^2}{1-e^{-2\alpha T_0}}\right\}. \tag{10.121}
\end{aligned}
$$
This result can be written in a somewhat simpler form by utilising some identities between the parameters and by re-casting the first moment as

$$M_1(T) = \frac{r_1\lambda_d}{\alpha}\,M_1^*(T)$$

with

$$M_1^*(T) = \frac{1 + [T/T_0]\,(1-e^{-\alpha T_0}) - e^{-\alpha u}}{1-e^{-\alpha T_0}}.$$

One then obtains

$$Y(T) = \frac{\lambda_d\lambda_f\langle\nu(\nu-1)\rangle}{\alpha^2\,M_1^*(T)}\left\{(1+\delta^*)\,[T/T_0] + \frac{-2\alpha u\,e^{-\alpha u} + 2\,(1-e^{-\alpha T})}{1-e^{-\alpha T_0}} - \frac{2\alpha T_0\,e^{-\alpha T_0}\,(e^{-\alpha u}-e^{-\alpha T})}{(1-e^{-\alpha T_0})^2} - (1-\delta^*)\,\frac{2\,(e^{-\alpha u}-e^{-\alpha T})}{1-e^{-\alpha T_0}} - \delta^*\,\frac{e^{-2\alpha u}-e^{-2\alpha T}+(1-e^{-\alpha T})^2}{1-e^{-2\alpha T_0}}\right\}. \tag{10.122}$$
Here δ∗ is a 'source enhancement factor':

$$\delta^* \equiv \frac{r_1\,(D_q-1)}{\bar\nu\,D_\nu}\,|\rho| = \delta - \frac{r_1\,|\rho|}{\bar\nu\,D_\nu}. \tag{10.123}$$
Its form is similar to the factor δ, defined in (10.15) and found in connection with multiple emission stationary and non-stationary sources. The difference is that instead of the factor Dq, the expression Dq − 1 appears in the numerator. The fact that in systems pulsed with instantaneous injection, i.e. with strictly periodic pulses of zero width, Dq − 1 replaces Dq in the second moment expressions was already observed in Chapter 3, in connection with the variance of the particle number. Although the present expressions refer to the variance of the detected neutrons, the difference remains the same. In particular, as was also discussed in Chapter 3, this difference remains even if the width of the finite pulses is decreased to zero while the source intensity is increased accordingly. This property of the variance of the neutron number with periodic instantaneous injection is sometimes expressed by saying that the periodicity of the instantaneous pulsing reduces the variance of the neutron number and of the detector counts [60, 65].

An illustration is given in Fig. 10.10. This figure was plotted with the same data for T0 and α as those in Fig. 10.5a. A comparison of the figures shows that, qualitatively, the shape of the curve is very similar to the one obtained from the model of a narrow but finite source. In particular, the fitting of the theoretical curves to the measured ones has approximately the same performance, irrespective of which model is used. This means that although the correct parameter α can be found also with the model of instantaneous pulses, the shape of the fitted curve shows the same deviations from the measured one as in the case of the model with finite pulse width.

Figure 10.10 The same Feynman-alpha curve as in Fig. 10.5a, but with periodic instantaneous pulses: T0 = 0.02 s, α = 266 s−1 and δ∗ = 0.2. The plot shows Y(T) against the gate width T.

One would expect that the present model of instantaneous pulses is more flexible, since it contains the extra parameter δ∗, containing the first two factorial moments of the number of neutrons per injection (i.e. in one pulse of the neutron generator). However, in the fitted curve, the oscillating part still exceeds that of the measurements, indicating that the way the parameter δ∗ appears in the formula does not reduce the oscillating part with the optimum fitting. Presumably the same considerations apply to this difference as those mentioned in Section 10.2.4.
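The relation between the two enhancement factors can be made concrete with a few lines of arithmetic. All numerical values below are purely illustrative (they are not taken from any experiment), and r1, Dq, Dν, ν̄ and ρ are treated as given inputs:

```python
# Illustrative check of the relation delta* = delta - r1*|rho|/(nubar*Dnu)
# implied by (10.123); all parameter values are hypothetical.
r1 = 3.0       # mean number of source neutrons per pulse
Dq = 1.5       # Diven-type factor of the source multiplicity
Dnu = 0.8      # Diven factor of induced fission
nubar = 2.43   # mean number of fission neutrons
rho = -0.05    # reactivity (subcritical, hence negative)

delta = r1 * Dq * abs(rho) / (nubar * Dnu)
delta_star = r1 * (Dq - 1.0) * abs(rho) / (nubar * Dnu)
print(delta, delta_star)
```

Since Dq − 1 < Dq, the instantaneous-pulsing factor δ∗ is always smaller than δ, and their difference is exactly r1|ρ|/(ν̄Dν), in line with the reduced variance discussed above.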
10.4.2 Feynman-alpha with stochastic pulsing

Here it is assumed, similarly as in Section 10.2.1, that the arrival of the pulses and the start of the measurement are not synchronised. To extend the description of the previous section to this case, it is assumed that the arrival time of the last pulse that entered the system with t ≤ 0, i.e. outside the measurement period, is equal to t = −ξ, where ξ is a realisation of a random variable which is uniformly distributed in [0, T0]. Following the same arguments, the generating function of the distribution of the detector counts is now given as

$$\overline{G}_d(z, T) = \frac{1}{T_0}\int_0^{T_0} G_d(z, \xi, T)\,d\xi = \frac{1}{T_0}\int_0^{T_0}\;\prod_{n=0}^{\infty} r[g_d(z, -nT_0-\xi, T)]\;\prod_{k=1}^{[(T+\xi)/T_0]} r[g_d(z, kT_0-\xi, T)]\,d\xi. \tag{10.124}$$

The factorial moments are to be determined by differentiation from (10.124). The first moment is simply given as

$$M_1(T) = \overline{G}_d'(1, T) = \frac{r_1}{T_0}\int_0^{T_0}\left[\sum_{n=0}^{\infty} g_{d1}'(1, -nT_0-\xi, T) + \sum_{k=1}^{[(T+\xi)/T_0]} g_{d2}'(1, kT_0-\xi, T)\right] d\xi. \tag{10.125}$$

This expression can be evaluated in two different ways, and actually both will be used. For the calculation of the first moment, the simpler way of evaluation goes by recognising that, with a suitable change of variables, the sum of the two integrals can be written as two contiguous integrals in the form

$$M_1(T) = \frac{r_1}{T_0}\left[\int_{-\infty}^{0} g_{d1}'(1, t, T)\,dt + \int_0^{T} g_{d2}'(1, t, T)\,dt\right]. \tag{10.126}$$

Using the concrete forms (10.115) of g′d1 and g′d2 in (10.126), the integrals can readily be evaluated and one obtains

$$M_1(T) = \frac{r_1\lambda_d}{\alpha\, T_0}\,T. \tag{10.127}$$

This expresses the expected result that with random pulsing, the expectation of the detector counts is a smooth linear function of the measurement time, just as in the case of random pulsing with finite width pulses.
Calculation of the second factorial moment is more laborious, although still quite straightforward. Apart from the extra variable ξ and the integral, the structure of the expressions is identical with that of (10.118), including the appearance of the terms n = n′. As previously, these can be eliminated by adding and subtracting the terms n = n′, leading to

$$
\begin{aligned}
M_2(T) = \frac{1}{T_0}\int_0^{T_0} G_d''(1,\xi,T)\,d\xi
={}& \frac{(r_2-r_1^2)}{T_0}\int_0^{T_0}\sum_{n=0}^{\infty} g_{d1}'^{\,2}(1,-nT_0-\xi,T)\,d\xi
+ \frac{r_1}{T_0}\int_0^{T_0}\sum_{n=0}^{\infty} g_{d1}''(1,-nT_0-\xi,T)\,d\xi\\
&+ \frac{(r_2-r_1^2)}{T_0}\int_0^{T_0}\sum_{k=1}^{[(T+\xi)/T_0]} g_{d2}'^{\,2}(1,kT_0-\xi,T)\,d\xi
+ \frac{r_1}{T_0}\int_0^{T_0}\sum_{k=1}^{[(T+\xi)/T_0]} g_{d2}''(1,kT_0-\xi,T)\,d\xi\\
&+ \frac{r_1^2}{T_0}\int_0^{T_0}\left[\sum_{n=0}^{\infty} g_{d1}'(1,-nT_0-\xi,T) + \sum_{k=1}^{[(T+\xi)/T_0]} g_{d2}'(1,kT_0-\xi,T)\right]^2 d\xi. \tag{10.128}
\end{aligned}
$$
In the first four terms on the right-hand side, the sum of the integrals can again be converted into single contiguous integrals, but this is not possible in the last term, which contains products of sums. Recognising that the last term, with the square bracket multiplied by r1, is just G′d(1, ξ, T) squared, the modified variance can be written as

$$
\begin{aligned}
M_2(T) - M_1^2(T) ={}& \frac{(r_2-r_1^2)}{T_0}\int_{-\infty}^{0} g_{d1}'^{\,2}(1,t,T)\,dt + \frac{r_1}{T_0}\int_{-\infty}^{0} g_{d1}''(1,t,T)\,dt\\
&+ \frac{(r_2-r_1^2)}{T_0}\int_0^{T} g_{d2}'^{\,2}(1,t,T)\,dt + \frac{r_1}{T_0}\int_0^{T} g_{d2}''(1,t,T)\,dt\\
&+ \frac{1}{T_0}\int_0^{T_0} G_d'^{\,2}(1,\xi,T)\,d\xi - \left[\frac{1}{T_0}\int_0^{T_0} G_d'(1,\xi,T)\,d\xi\right]^2. \tag{10.129}
\end{aligned}
$$

The very last term of (10.129) is just the square of M1(T), which was already calculated and is available from (10.127). The reason for writing it in the above form is to show that, unlike in the case of stationary sources or of deterministic pulsing, but in agreement with the case of stochastic finite width pulses, equation (10.53), the last two terms do not cancel each other. This fact leads to some complications, but no principal difficulties. To begin with, the integrals in the first four terms on the right-hand side of (10.129) can readily be performed by using (10.115) and (10.120) for the g′d and g′′d, respectively, leading to the compact result

$$\frac{r_1\lambda_d^2\,\lambda_f\langle\nu(\nu-1)\rangle}{T_0\,\alpha^3}\,T\left(1 - \frac{1-e^{-\alpha T}}{\alpha T}\right) + \frac{(r_2 - r_1^2)\,\lambda_d^2}{T_0\,\alpha^2}\,T\left(1 - \frac{1-e^{-\alpha T}}{\alpha T}\right). \tag{10.130}$$
Calculation of the second last term, the integral of G′d²(1, ξ, T), poses some more difficulties. By recalling (10.124) and (10.125), it can be written as

$$\frac{1}{T_0}\int_0^{T_0} G_d'^{\,2}(1,\xi,T)\,d\xi = \frac{r_1^2}{T_0}\int_0^{T_0}\left[\sum_{n=0}^{\infty} g_{d1}'(1,-nT_0-\xi,T) + \sum_{k=1}^{[(T+\xi)/T_0]} g_{d2}'(1,kT_0-\xi,T)\right]^2 d\xi. \tag{10.131}$$
This expression can be evaluated by first calculating the sums and then squaring and integrating. Performing the sums leads to

$$G_d'(1,\xi,T) = r_1\left[\sum_{n=0}^{\infty} g_{d1}'(1,-nT_0-\xi,T) + \sum_{k=1}^{[(T+\xi)/T_0]} g_{d2}'(1,kT_0-\xi,T)\right] = \frac{r_1\lambda_d}{\alpha}\left\{[(T+\xi)/T_0] + \frac{e^{-\alpha\xi} - e^{-\alpha\,\Delta(T,\xi)}}{1-e^{-\alpha T_0}}\right\}, \tag{10.132}$$

where

$$\Delta(T,\xi) = T + \xi - [(T+\xi)/T_0]\,T_0. \tag{10.133}$$
The integral of the square of (10.132) can be performed by noticing that, writing again T = kT0 + u, 0 ≤ u ≤ T0, or in other words u = T − [T/T0]T0, one has

$$[(T+\xi)/T_0] = \begin{cases} k, & \xi \le T_0 - u,\\ k+1, & \xi > T_0 - u, \end{cases} \tag{10.134}$$

and in a similar way

$$\Delta(T,\xi) = \begin{cases} u+\xi, & \xi \le T_0-u,\\ u+\xi-T_0, & \xi > T_0-u. \end{cases} \tag{10.135}$$

The integration of the six terms of the square of (10.132) is rather tedious and lengthy, but the reward is a stunningly simple expression. Summing up all terms, subtracting M1²(T) and dividing by M1(T) yields the final result

$$Y(T) = \frac{\lambda_d\lambda_f\langle\nu(\nu-1)\rangle}{\alpha^2}\,\bigl(1+\delta^*\bigr)\left(1-\frac{1-e^{-\alpha T}}{\alpha T}\right) + \frac{r_1\lambda_d}{\alpha}\left[\frac{u\,(T_0-u)}{T_0\,T} + \frac{e^{\alpha(u-T_0)} + e^{-\alpha u} - e^{-\alpha T_0} - 1}{\alpha T\,(1-e^{-\alpha T_0})}\right]. \tag{10.136}$$

Similarly to the results of stochastic pulsing with finite pulses, this expression consists of two terms: a smooth term corresponding to the traditional Feynman-alpha solution with stationary sources, although with an amplitude enhanced by the factor δ∗ defined in (10.123), and a non-negative oscillating term with an oscillation period equal to T0. In both cases, the smooth, traditional term constitutes a lower envelope of the oscillatory part. This expression is in close analogy with the one obtained for the case of stochastic finite width pulses with compound Poisson statistics within the pulse, expression (10.106), which also contains a traditional term enhanced by the source multiplicity, and an oscillating part. Another, although more remote, similarity is that in the stochastically pulsed case with finite width pulses, the oscillating term contains the source intensity, which is not present in the case of stationary sources or that of deterministic pulsing. For instantaneous pulses no source intensity can be defined, but in (10.136) the expression r1/T0 occurs in the first term of the oscillating component, which plays a similar role. The difference between the two formulae is that the source enhancement factors are different. While physically this is a rather significant difference, concerning the use of the formulae for the determination of the reactivity the two solutions seem to be equally suitable. Each of them contains a separate parameter for the smooth and the oscillating parts, which can be determined from the fitting procedure.
The formula for the instantaneous injection has the clear advantage that its form is much simpler than that of the finite pulse case, (10.106), the latter containing the sought parameter α not only in the exponent of the smooth part, but also in each coefficient an and bn of the oscillating part. Because of its much simpler form, the instantaneous pulsing formula seems to be superior in applications to reactivity determination in pulsed subcritical systems. One illustration of the stochastic instantaneous pulsed Feynman-alpha curve is shown in Fig. 10.11. Again, a very good similarity is seen with the corresponding curve in Fig. 10.6d, calculated with the same T0 and α parameters.
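The structure of (10.136) can be checked numerically. In the sketch below the physical prefactors are replaced by hypothetical amplitudes Y1 and R1LD; the oscillating bracket vanishes identically whenever the gate width is an integer number of pulse periods:

```python
import math

# Sketch of the stochastic instantaneous-pulsing Feynman-Y, Eq. (10.136),
# with hypothetical amplitude factors Y1 (smooth term) and R1LD (standing
# in for r1*lambda_d); parameter values are illustrative only.
ALPHA, T0 = 598.0, 0.02
Y1, R1LD, DSTAR = 1.0, 1.0, 0.2

def y_osc(T):
    u = T - int(T // T0) * T0
    q = math.exp(-ALPHA * T0)
    return R1LD / ALPHA * (u * (T0 - u) / (T0 * T)
           + (math.exp(ALPHA * (u - T0)) + math.exp(-ALPHA * u) - q - 1)
           / (ALPHA * T * (1 - q)))

def feynman_y(T):
    smooth = Y1 * (1 + DSTAR) * (1 - (1 - math.exp(-ALPHA * T)) / (ALPHA * T))
    return smooth + y_osc(T)
```

At T = kT0 (i.e. u = 0) the first bracket term is zero and the exponential combination cancels identically, so the oscillating part vanishes and the curve touches its smooth lower envelope — the behaviour visible in Fig. 10.11.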
Figure 10.11 Feynman-Y curve for stochastic instantaneous pulsing, for the same case as in Fig. 10.6d: T0 = 0.02 s, α = 598 s−1. The plot shows Y(T) against the gate width T. The broken line shows the traditional Feynman-Y curve, corresponding to the first term of (10.136).
10.4.3 Rossi-alpha with stochastic pulsing

Here the concepts applied remain the same, but the formalism will change somewhat. It is assumed that the measurement is triggered by a detection at time t = 0, and is completed by another detector count at t = τ. The detections take place in infinitesimal intervals dt and dτ around t = 0 and t = τ, respectively; hence the intensity of the joint detection is λd² times the joint expected value of the neutron number at times t = 0 and t = τ. As in the previous case, at time t = 0 the system is already in a periodically stationary state, since the pulsing started at t = −∞. For clarity of the derivation, the intensity of the joint detections at t = 0 and t = τ will be denoted as P2(0, τ) = λd²⟨N(0)N(τ)⟩, such that P2(0, τ)dt dτ is the probability, to first order in dt dτ, of having one detection within (0, dt) and another one in (τ, dτ). Again it is practical to consider the events before and after t = 0 separately. It will be assumed that the last pulse for t ≤ 0 arrived at t = −ξ, where ξ is a random variable distributed uniformly in [0, T0]. With these preliminaries, and accounting for the additive character of the expected values, P2(0, τ) can be written as follows:

$$P_2(0,\tau) = \frac{\lambda_d^2}{T_0}\int_0^{T_0}\sum_{n=0}^{\infty} n\,Q(n,\xi)\,X(n,\tau,\xi)\,d\xi, \tag{10.137}$$

where

$$X(n,\tau,\xi) = \sum_{m=0}^{\infty} m\,\bigl[P(m,\tau\mid n-1, 0) + R(m,\tau\mid 0, 0, \xi)\bigr].$$
Here Q(n, ξ) is the probability that at time t = 0 there are n neutrons in the system, on the condition that the last pulse for t ≤ 0 arrived at t = −ξ; P(m, τ|n, 0) is the probability that in a source-free system there will be m neutrons at time t = τ, given that at time t = 0 there were n neutrons present (the argument n − 1 in (10.137) accounts for the detection of one neutron at t = 0); and finally R(m, τ|0, 0, ξ) is the probability that in a pulsed system there will be m neutrons at time t = τ, given that the last pulse for t ≤ 0 arrived at t = −ξ and that there were no neutrons (n = 0) at time t = 0 in the system. This equation can be converted into one for the corresponding generating functions on the right-hand side, by introducing

$$F_Q(z,\xi) = \sum_n Q(n,\xi)\,z^n \tag{10.138}$$

and

$$F_R(z,\tau,\xi) = \sum_m R(m,\tau\mid 0,0,\xi)\,z^m, \tag{10.139}$$
whereas the generating function of P(m, τ|n − 1, 0) is equal to g(z, τ)^{n−1}, where g(z, τ) is the basic generating function of the probability of finding n particles in the system at time τ, given that there was one particle in the system at t = 0. This generating function was defined in Chapter 1, and its first and second moments are given in (1.57) and (1.59) as

$$g'(1,\tau) = e^{-\alpha\tau} \tag{10.140}$$

and

$$g''(1,\tau) = \frac{\lambda_f\langle\nu(\nu-1)\rangle}{\alpha}\, e^{-\alpha\tau}\,\bigl(1-e^{-\alpha\tau}\bigr). \tag{10.141}$$

Here account was taken of the fact that the α used here is equal to −α of Part I, and that Q_{q2} = λf⟨ν(ν−1)⟩. With the above definitions, (10.137) can be rewritten as

$$P_2(0,\tau) = \frac{\lambda_d^2}{T_0}\int_0^{T_0}\left\{F_Q''(1,\xi)\,g'(1,\tau) + F_Q'(1,\xi)\,F_R'(1,\tau,\xi)\right\} d\xi. \tag{10.142}$$
Equation (10.142) can be evaluated by noticing that the generating functions FQ and FR can be constructed, in a similar way as in the previous section for the Feynman-alpha method, from g(z, τ) as follows:

$$F_Q(z,\xi) = \prod_{n=0}^{\infty} r[g(z, nT_0+\xi)] \tag{10.143}$$

and

$$F_R(z,\tau,\xi) = \prod_{k=1}^{[(\tau+\xi)/T_0]} r[g(z, \tau+\xi-kT_0)]. \tag{10.144}$$
The calculation goes exactly along the same lines as in the previous section. Applying the same tricks as before, i.e. adding and subtracting the terms n = n′, and converting the sums of integrals from 0 to T0 into single integrals from 0 to infinity, one arrives at

$$
\begin{aligned}
P_2(0,\tau) ={}& \left\{\frac{\lambda_d^2\,(r_2-r_1^2)}{T_0}\int_0^{\infty} g'^{\,2}(1,t)\,dt + \frac{\lambda_d^2\, r_1}{T_0}\int_0^{\infty} g''(1,t)\,dt + \frac{\lambda_d^2\, r_1^2}{T_0}\int_0^{T_0}\left[\sum_{n=0}^{\infty} g'(1,nT_0+\xi)\right]^2 d\xi\right\} e^{-\alpha\tau}\\
&+ \frac{\lambda_d^2\, r_1^2}{T_0}\int_0^{T_0}\left[\sum_{n=0}^{\infty} g'(1,nT_0+\xi)\right]\left[\sum_{k=1}^{[(\tau+\xi)/T_0]} g'(1,\tau+\xi-kT_0)\right] d\xi. \tag{10.145}
\end{aligned}
$$
The integrals in the first two terms and the summations in the third and fourth are carried out easily. One finds that the last term, containing the product of two sums, can be written as the difference of two terms, of which one cancels the third term of (10.145). The remaining term contains the construction

$$\frac{e^{-\alpha\xi}\,e^{-\alpha\,\Delta(\tau,\xi)}}{1-e^{-\alpha T_0}},$$

where the function Δ(τ, ξ) is the same as in (10.133) and (10.135), and now the running variable 0 ≤ u ≤ T0 is defined by u = τ − [τ/T0]T0. The integral of this term with respect to ξ can be evaluated with the technique
Figure 10.12 Rossi-alpha curve for stochastic instantaneous pulsing: T0 = 0.02 s, α = 266 s−1. The plot shows R(τ) against the gate delay τ. The broken line shows the traditional Rossi-alpha curve, corresponding to the first two terms of (10.148).
described in the previous section. Summing up all terms leads to the final result

$$P_2(0,\tau) = \frac{\lambda_d^2}{2\alpha T_0}\left\{\left[\frac{r_1\,\lambda_f\langle\nu(\nu-1)\rangle}{\alpha} + r_2 - r_1^2\right]e^{-\alpha\tau} + r_1^2\,\frac{e^{-\alpha u} + e^{-\alpha T_0}\, e^{\alpha u}}{1-e^{-\alpha T_0}}\right\}. \tag{10.146}$$
This expression contains the expected decaying exponential, plus an oscillating part with non-zero mean, which shows periodic stationarity. The expression can be converted into the more familiar form of the Rossi-alpha formula, expressed in the form R(τ) of (9.163), by taking into account that the detection rate Z̄ in this case reads, from (10.127), as

$$\bar Z = \frac{r_1\lambda_d}{\alpha T_0}. \tag{10.147}$$

Moreover, it is instructive to split the last, oscillating term into a constant value, expressing the time average over the oscillation period, plus a term oscillating with zero temporal mean. This leads to

$$R(\tau) = \frac{P_2(0,\tau)}{\bar Z} = \frac{\lambda_d\lambda_f\langle\nu(\nu-1)\rangle}{2\alpha}\,\bigl[1+\delta^*\bigr]\,e^{-\alpha\tau} + \frac{r_1\lambda_d}{\alpha T_0} + \frac{r_1\lambda_d}{2\,(1-e^{-\alpha T_0})}\left[e^{-\alpha u} + e^{-\alpha T_0}\,e^{\alpha u} - \frac{2\,(1-e^{-\alpha T_0})}{\alpha T_0}\right], \tag{10.148}$$

where the enhancement factor δ∗ was defined in (10.123). The result (10.148) contains a traditional smooth Rossi-alpha formula, i.e. a decaying exponential and a constant part corresponding to the so-called correlated and uncorrelated counts, plus an oscillating part. Comparison with (10.147) shows that the constant part is equal to the detection intensity in one detector, i.e. it agrees with the corresponding term in the traditional Rossi-alpha formula. The oscillating part has a discontinuous derivative, unlike in the case of the stochastically pulsed Feynman-alpha curve. Qualitatively the solution is similar to that obtained with the method of finite but narrow pulses, even if some of the factors appearing have different values. One illustration of (10.148) is given in Fig. 10.12. As both the formula and the figure show, in contrast to the pulsed Feynman-alpha method, but in agreement with the findings of the treatment with finite width pulses, the oscillations do not decay with time; rather, they are periodic. Qualitatively, a very good agreement can be seen in a comparison with the results shown in Fig. 10.7, referring to the case of finite but narrow pulses. This offers the possibility of eliminating the oscillating part from an experiment, as was demonstrated earlier, from the tail of the curve at large τ values. For an analysis of the difference between the results of finite width pulses and instantaneous injection, one has to compare (10.148) with the case of a pulsed compound Poisson source, (10.107). The comparison shows that the basic difference between the inhomogeneous compound Poisson source and the instantaneous injection is that in the former the source enhancement factor δ appears, whereas in the latter the factor δ∗.
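As a numerical cross-check of this decomposition, the oscillating bracket of (10.148) should average to zero over one pulse period, leaving the constant part equal to the uncorrelated count rate. A sketch with hypothetical amplitudes A_CORR and R1LD (standing in for the physical prefactors):

```python
import math

# Sketch of the instantaneous-pulsing Rossi-alpha, Eq. (10.148), with
# hypothetical amplitudes A_CORR (correlated term) and R1LD (= r1*lambda_d);
# the parameter values are illustrative only.
ALPHA, T0 = 266.0, 0.02
A_CORR, R1LD, DSTAR = 1.0, 1.0, 0.2
Q = math.exp(-ALPHA * T0)

def osc(tau):
    u = tau - int(tau // T0) * T0
    return R1LD / (2 * (1 - Q)) * (math.exp(-ALPHA * u)
           + Q * math.exp(ALPHA * u) - 2 * (1 - Q) / (ALPHA * T0))

def rossi_alpha(tau):
    return (A_CORR * (1 + DSTAR) * math.exp(-ALPHA * tau)
            + R1LD / (ALPHA * T0) + osc(tau))

# The oscillating part is periodic and has zero mean over one pulse period:
n = 100000
mean = sum(osc(i * T0 / n) for i in range(n)) / n
print(abs(mean))   # ~0: the constant part is the uncorrelated rate
```

The bracket is continuous across the period boundary but its derivative jumps there, reproducing the kinked, undamped oscillation around the traditional Rossi-alpha curve seen in Fig. 10.12.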
This is again the same difference as the one observed earlier in the comparisons of finite narrow pulses and instantaneous pulses. The conclusions regarding the applicability of the two formulae are also the same as before. Whether the finite width pulse or the instantaneous injection description is a better model of the actual physical situation cannot be decided from purely mathematical principles. Regarding the application of the results to the evaluation of measurements, the difference is insignificant: in that process, both δ and δ∗ are fitting parameters, hence it makes no difference which formula is used. From the practical point of view, since (10.148) is a closed-form expression, it might be more advantageous to use than (10.107) when evaluating measurements with narrow pulses.
CHAPTER ELEVEN
Theory of Multiplicity in Nuclear Safeguards
Contents
11.1 Neutron and Gamma Cascades 295
11.2 Basic Equations 298
11.3 Neutron Distributions 299
11.4 Gamma Photon Distributions 305
11.5 Joint Moments 309
11.6 Practical Applications: Outlook 311
Branching in the fission process, as the physical origin of time correlations and hence of the non-trivial statistical properties of the neutron distribution, can also be used in areas other than measuring the reactivity (or multiplication factor) of nearly critical systems. One such area, a branch of nuclear safeguards, deals with nuclear material control and accounting. The purpose is to detect, identify and quantify fissile material with non-intrusive methods [89]. This is achieved by detecting radiation, either neutrons or gamma photons, that is emitted either spontaneously (passive methods) or under neutron or photon irradiation (active methods). As a rule, such investigations concern samples of fissile material far from critical. The material, consisting of transuranic elements, emits neutrons through spontaneous fission or through (α, n) reactions, and the passive way of identification is based on detecting the emitted neutrons, and lately also the associated gamma photons. However, even in small samples, a primary neutron has a non-zero probability of inducing fission in the sample, and thereby starting a short chain, before escaping, much like in the case of fast fission in the fuel elements of thermal reactors. The difference in the number and energy distributions of spontaneous and induced fission between the different isotopes gives, theoretically, a possibility of identifying the fissile isotope, and also its mass. In practice, with present technology, only some lumped parameters can be determined. The most relevant material to be quantified is plutonium, and the measurements supply the so-called effective 240Pu mass [89, 90]. The total Pu mass is then extracted from the effective 240Pu mass by determining the isotopic composition of the sample by other means, such as gamma-ray spectroscopy.
In this monograph only the number distributions will be considered, and the energy aspects will be disregarded, just as in the previous chapters. The number distribution of the fission neutrons is usually quantified by the low order factorial moments. In the safeguards literature, the number of neutrons or gamma photons in a spontaneous fission, as well as the number of particles generated in or leaving the sample, is often referred to as multiplicities, and the various descriptors as multiplicity distribution, multiplicity moments, etc. For example, the experimental determination of the factorial moments is usually referred to as ‘multiplicity counting’. Whenever it cannot lead to confusion, any of these quantities may be referred to as just multiplicities. The factorial moments, just as the number distributions of the source neutrons and of the neutrons leaving the sample, will coincide for small samples, where all neutrons born in the sample leak out. However, for larger samples, the possibility of fission induced by the initial neutrons before leaking out, and to a much lesser extent absorption without fission, will alter the multiplicities. Due to the short lifetime of the strongly subcritical chains, all
neutrons from one source event, including those generated in the chain, can be counted as born simultaneously. Hence the detected multiplicities (factorial moments) carry information on both the spontaneous and the induced neutron generation. This gives the possibility of determining the effective mass of the sample. Recently, gamma multiplicity counting has also started to be used for non-destructive investigations [91]. Gamma photons do not develop a chain by themselves, but in each fission event multiple gamma photons are generated. Their multiplicities are larger than those of the fission neutrons, hence their detection can enhance the performance of the identification method, especially for low sample masses. In addition, they may prove useful in cases when the sample is covered by neutron absorbing or scattering material which is penetrable by gamma photons. In the literature, early work in the field used combinatorial methods [92, 93], and later generating function techniques [94–97]. In the following, the basic equations for the generating function of the joint probability distribution of neutrons and gamma photons, generated and/or emitted from a sample due to an intrinsic source, will be set up. From this generating function the factorial moments as well as the probability distributions can be derived. The equation contains, as special cases, the individual neutron and gamma distributions. First, the factorial moments and the probability distributions for neutrons and photons will be derived and discussed. The effect of detection efficiency, and for the gamma quanta also that of the internal absorption, will be discussed. Finally, the joint statistics of neutrons and gamma photons will be described.
11.1 Neutron and Gamma Cascades

11.1.1 Notations
Let Ir denote the event that the cascade is started by one neutron, and Is the event that the cascade is started by one source event, respectively.¹ The following random variables are used:

1. νs = the number of source neutrons in one emission,
2. νr = the number of neutrons in one fission reaction,²
3. μs = the number of source gamma quanta in one emission,
4. μr = the number of gamma quanta in one fission reaction,
5. ν = the total number of neutrons produced in a cascade induced by one neutron,
6. μ = the total number of gamma quanta produced in a cascade induced by one neutron,
7. ν̃ = {ν|Is} = the total number of neutrons in a cascade produced by one source event,
8. μ̃ = {μ|Is} = the total number of gamma quanta produced in a cascade by one source event.
The objective is to determine the joint distribution and the various auto- and cross-moments of the variables ν̃ and μ̃. It is to be mentioned that although the numbers of neutrons and gamma quanta originating from one fission can be assumed to be independent, this is not true for the total numbers ν, μ and ν̃, μ̃. Introduce the probabilities:

P{νs = n} = ps(n), \qquad (11.1)

P{νr = n} = pr(n), \qquad (11.2)

P{μs = n} = fs(n), \qquad (11.3)

P{μr = n} = fr(n). \qquad (11.4)
¹ The source event Is is nothing else than the appearance of a random number of source neutrons and source gamma quanta by spontaneous processes in the sample.
² The subscript ‘r’ refers to ‘reaction’ here. In the safeguards literature it is common to use the subscript ‘i’, alluding to ‘induced’ (fission, in contrast to the spontaneous fission of the source event).
Further, let P{ν = n|Ir } = p(n)
(11.5)
be the probability that the total number of neutrons produced in a cascade is exactly n, provided that the cascade was started by one neutron. Similarly, let P{ν = n|Is } = P(n)
(11.6)
be the probability that the total number of neutrons produced in a cascade is exactly n, provided that the cascade was started by one source event. For the gamma quanta P{μ = n|Ir } = f (n)
(11.7)
is the probability that the total number of gamma quanta produced in a cascade is exactly n, provided that the cascade was started by one neutron, and P{μ = n|Is } = F(n)
(11.8)
is the probability that the total number of gamma quanta produced in a cascade is exactly n, provided that the cascade was started by one source event. As seen in definitions (11.1)–(11.8), the distributions pr(n) and fr(n) are defined so as to correspond to fission reactions, i.e. the effect of neutron capture is not included. The absorption of neutrons and gamma photons in the sample will be discussed later. For the time being absorption is neglected, hence the treatment concerns the number of neutrons and gamma photons generated in the sample, as opposed to those leaving the sample. With no absorption, these two sets are equivalent. Define the following generating functions:

q_s(z) = E\{z^{\nu_s}\} = \sum_{n=0}^{\infty} p_s(n)\,z^n, \qquad (11.9)

q_r(z) = E\{z^{\nu_r}\} = \sum_{n=0}^{\infty} p_r(n)\,z^n, \qquad (11.10)

r_s(z) = E\{z^{\mu_s}\} = \sum_{n=0}^{\infty} f_s(n)\,z^n, \qquad (11.11)

r_r(z) = E\{z^{\mu_r}\} = \sum_{n=0}^{\infty} f_r(n)\,z^n \qquad (11.12)

and

h(z) = E\{z^{\nu}\,|\,I_r\} = \sum_{n=0}^{\infty} p(n)\,z^n, \qquad (11.13)

H(z) = E\{z^{\nu}\,|\,I_s\} = \sum_{n=0}^{\infty} P(n)\,z^n, \qquad (11.14)

g(z) = E\{z^{\mu}\,|\,I_r\} = \sum_{n=0}^{\infty} f(n)\,z^n, \qquad (11.15)

G(z) = E\{z^{\mu}\,|\,I_s\} = \sum_{n=0}^{\infty} F(n)\,z^n. \qquad (11.16)
297
Theory of Multiplicity in Nuclear Safeguards
As is known, for the characterisation of the sample the factorial moments of the random variables listed earlier are needed. For the sake of simplicity the following notations will be used:

\left.\frac{d^k q_s(z)}{dz^k}\right|_{z=1} = E\{\nu_s(\nu_s-1)\cdots(\nu_s-k+1)\} = \nu_{s,k}, \qquad (11.17)

\left.\frac{d^k q_r(z)}{dz^k}\right|_{z=1} = E\{\nu_r(\nu_r-1)\cdots(\nu_r-k+1)\} = \nu_{r,k}, \qquad (11.18)

\left.\frac{d^k r_s(z)}{dz^k}\right|_{z=1} = E\{\mu_s(\mu_s-1)\cdots(\mu_s-k+1)\} = \mu_{s,k}, \qquad (11.19)

\left.\frac{d^k r_r(z)}{dz^k}\right|_{z=1} = E\{\mu_r(\mu_r-1)\cdots(\mu_r-k+1)\} = \mu_{r,k} \qquad (11.20)

and

\left.\frac{d^k h(z)}{dz^k}\right|_{z=1} = E\{\nu(\nu-1)\cdots(\nu-k+1)\} = \nu_k, \qquad (11.21)

\left.\frac{d^k H(z)}{dz^k}\right|_{z=1} = E\{\tilde{\nu}(\tilde{\nu}-1)\cdots(\tilde{\nu}-k+1)\} = \tilde{\nu}_k, \qquad (11.22)

\left.\frac{d^k g(z)}{dz^k}\right|_{z=1} = E\{\mu(\mu-1)\cdots(\mu-k+1)\} = \mu_k, \qquad (11.23)

\left.\frac{d^k G(z)}{dz^k}\right|_{z=1} = E\{\tilde{\mu}(\tilde{\mu}-1)\cdots(\tilde{\mu}-k+1)\} = \tilde{\mu}_k. \qquad (11.24)
It is important to note that the cascade process itself is assumed to be instantaneous. This approach is satisfactory when the duration of the multiplication process is short compared to the detector system response time, and the backscattering of neutrons to the sample from the detector is negligible. Each neutron in the sample is characterised by a uniform probability p of not leaving the sample, but producing a fission in it. Obviously, 1 − p is the probability that a neutron leaves the sample without producing any reaction if absorption can be neglected, as it is assumed here. The case when absorption without fission (capture) can occur will be discussed later. The detection of the neutrons produced instantaneously in a source event is of course not simultaneous. Each individual neutron, after its generation and leaving the sample, will be detected according to an exponential distribution with a parameter called the ‘neutron die-away time’ [89]. This gives the practical possibility of detecting bursts of neutrons from one fission chain. Since there is no multiplication in the detector, the associated time constant is much shorter than 1/α of a subcritical multiplying system. This aspect will be returned to later. In a measurement, p is not known, hence it is one of three unknown parameters that have to be determined or eliminated for the determination of the sample mass.³ The other two unknowns are the sample mass itself, expressed in terms of the spontaneous fission rate, and the relative contribution of (α, n) reactions to the production of source neutrons. To determine these three unknowns, three independent measurement quantities are needed, and it is the first three factorial moments of the neutron distribution that are usually used for this purpose. This method is referred to as neutron multiplicity counting, in contrast to neutron coincidence counting, which only uses the first two factorial moments, and hence cannot determine all three parameters above.
This means that the detector efficiency also needs to be known, which will be assumed here, although in practice this is not always the case. In conceptual studies investigating sensitivities quantitatively, the value of the probability p should be available from calculations. The calculation of p is a rather complex problem which can be solved only approximately, for instance by suitable Monte-Carlo techniques [97, 98].

³ In practice the so-called leakage multiplication

M = \frac{1-p}{1 - p\,\nu_{r,1}}

is used instead of p in the equations (see Section 11.3.1).
With the above condensed formalism, all source events as well as the internal absorption of the neutrons in the sample can be accounted for. For instance, if the sample contains an isotope that undergoes spontaneous fission with a number distribution of neutrons psf(n), and at the same time produces neutrons through α-emission and a subsequent neutron production by the (α, n) process, then the source distribution ps(n) is given as

p_s(n) = \frac{Q_\alpha}{Q_\alpha + Q_f}\,\delta_{1,n} + \frac{Q_f}{Q_\alpha + Q_f}\,p_{sf}(n), \qquad (11.25)

where Qα and Qf are the intensities of the neutron production via the (α, n) process and the spontaneous fission events, respectively. It is customary to rewrite (11.25) in a different form by introducing the ratio α of the average neutron production between (α, n) and spontaneous fission:

\alpha = \frac{Q_\alpha}{Q_f\,\nu_{sf}}, \qquad (11.26)

where νsf ≡ νsf,1 is the first moment of psf(n). The parameter α is one of the above-mentioned unknowns which need to be determined from the measurement, hence in practical work it has to be kept explicit in the formulae (see e.g. [89]). With this, (11.25) can be rewritten as

p_s(n) = \frac{\alpha\,\nu_{sf}\,\delta_{1,n} + p_{sf}(n)}{1 + \alpha\,\nu_{sf}}. \qquad (11.27)

In a similar manner, for a mixture of isotopes, ps(n) is given as the weighted average over the distributions of the various spontaneous fission events and the (α, n) processes. From (11.27) it is seen that the moments νsf,n of the spontaneous fission source and the moments νs,n of the combined source, which will be used in all forthcoming derivations in this chapter, are related as

\nu_{s,n} = \frac{\nu_{sf,n}\,(1 + \alpha\,\delta_{1,n})}{1 + \alpha\,\nu_{sf}}. \qquad (11.28)

The dependence of the final results for the multiplicity moments on the unknown parameter α is obtained by the re-substitution of νs,n with νsf,n via (11.28) [89].
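The moment relation (11.28) can be verified numerically against a direct construction of the combined source distribution from (11.27). The sketch below uses a toy spontaneous fission distribution p_sf(n) and an assumed ratio α (illustrative numbers only, not evaluated nuclear data).

```python
import math

# Toy spontaneous-fission neutron number distribution p_sf(n), n = 0..5,
# and an assumed (alpha,n)-to-fission ratio alpha of (11.26).
# Illustrative values only, not evaluated nuclear data.
p_sf = [0.05, 0.15, 0.30, 0.28, 0.15, 0.07]
alpha = 0.4

def fact_moment(dist, k):
    """k-th factorial moment: sum_n n(n-1)...(n-k+1) * dist[n]."""
    return sum(math.perm(n, k) * pn for n, pn in enumerate(dist))

nu_sf = fact_moment(p_sf, 1)

# Combined source distribution, equation (11.27):
p_s = [(alpha * nu_sf * (n == 1) + pn) / (1 + alpha * nu_sf)
       for n, pn in enumerate(p_sf)]

# Equation (11.28): nu_{s,k} = nu_{sf,k}(1 + alpha*delta_{1,k}) / (1 + alpha*nu_sf)
for k in (1, 2, 3):
    direct = fact_moment(p_s, k)
    formula = fact_moment(p_sf, k) * (1 + alpha * (k == 1)) / (1 + alpha * nu_sf)
    print(k, direct, formula)
```

The δ₁,ₙ term contributes only to the first factorial moment, which is why only νs,1 carries the extra (1 + α) factor.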
11.2 Basic Equations Since the number distribution of neutrons and gamma quanta generated in the sample will be derived by starting with one initial event, a backward equation formalism will be used. Then, as usual, one has to proceed in two steps, first by deriving an equation for the distribution of neutrons induced by one initial particle, and then another equation connecting the source-induced and the single-particle-induced distributions. In order to investigate the joint distribution of the random variables ν and μ, define the probability P{ν = n1 , μ = n2 |Ir } = w(n1 , n2 |1)
(11.29)
that the numbers of neutrons and gamma quanta emitted from a sample are exactly n1 and n2, respectively, provided that the cascade was started by one neutron.⁴ One can write that

w(n_1, n_2|1) = (1-p)\,\delta_{n_1,1}\,\delta_{n_2,0} + p \sum_{k=0}^{\infty} p_r(k) \sum_{\ell=0}^{\infty} f_r(\ell) \sum_{\substack{n_{11}+\cdots+n_{1k}=n_1 \\ n_{21}+\cdots+n_{2k}=n_2-\ell}} \; \prod_{i=1}^{k} w(n_{1i}, n_{2i}|1). \qquad (11.30)

⁴ If there is no absorption, then the numbers of neutrons and gamma quanta emitted from the sample are equal to the numbers of neutrons and gamma quanta produced in the sample.
Introducing the generating function

u(z_1, z_2|1) \equiv u(z_1, z_2) = \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} w(n_1, n_2|1)\, z_1^{n_1} z_2^{n_2}, \qquad (11.31)

one obtains

u(z_1, z_2) = (1-p)\,z_1 + p\,r_r(z_2)\,q_r[u(z_1, z_2)]. \qquad (11.32)
Let

P\{\nu = n_1, \mu = n_2\,|\,I_s\} = W(n_1, n_2|I_s) \qquad (11.33)

be the probability that the numbers of neutrons and gamma quanta emitted from a sample are exactly n1 and n2, respectively, provided that the cascade was started by one source event Is. Since

W(n_1, n_2|I_s) = \sum_{\ell=0}^{\infty} f_s(\ell) \sum_{k=0}^{\infty} p_s(k) \sum_{\substack{n_{11}+\cdots+n_{1k}=n_1 \\ n_{21}+\cdots+n_{2k}=n_2-\ell}} \; \prod_{i=1}^{k} w(n_{1i}, n_{2i}|1), \qquad (11.34)

it can be immediately shown that the generating function

U(z_1, z_2|I_s) = \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} W(n_1, n_2|I_s)\, z_1^{n_1} z_2^{n_2} \qquad (11.35)

satisfies the equation

U(z_1, z_2|I_s) \equiv U(z_1, z_2) = r_s(z_2)\,q_s[u(z_1, z_2)]. \qquad (11.36)
From equations (11.32) and (11.36) all the joint and individual moments and probability distributions of the numbers of the generated neutrons and gamma photons can be derived. The individual distributions for neutrons and gamma quanta are contained as special cases that can be obtained by taking z2 = 1 and z1 = 1, respectively. They will be first described, before turning to the joint moments and distributions.
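As a small illustration of how moments follow from (11.32) and (11.36): differentiating (11.32) with respect to z2 at z1 = z2 = 1 gives for the mean number of gamma quanta per initial neutron g1 = p(μr,1 + νr,1 g1), i.e. g1 = p μr,1/(1 − p νr,1), and the same operation on (11.36) gives the mean per source event, μs,1 + νs,1 g1. The sketch below checks this against a numerical derivative of U; all distributions are toy illustrative assumptions, not nuclear data.

```python
p = 0.2                                    # fission (non-leakage) probability
# Toy number distributions (illustrative assumptions, not nuclear data):
p_r = [0.02, 0.10, 0.26, 0.32, 0.20, 0.08, 0.02]        # neutrons per induced fission
f_r = [0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05]  # gammas per induced fission
p_s = [0.03, 0.15, 0.30, 0.30, 0.16, 0.06]              # neutrons per source event
f_s = [0.06, 0.12, 0.18, 0.22, 0.20, 0.14, 0.08]        # gammas per source event

def gf(dist, z):
    """Probability generating function of a number distribution."""
    return sum(d * z**n for n, d in enumerate(dist))

def dgf(dist, z):
    """First derivative of the generating function."""
    return sum(n * d * z**(n - 1) for n, d in enumerate(dist) if n)

nur1, mur1 = dgf(p_r, 1.0), dgf(f_r, 1.0)   # nu_{r,1}, mu_{r,1}
nus1, mus1 = dgf(p_s, 1.0), dgf(f_s, 1.0)   # nu_{s,1}, mu_{s,1}

# d/dz2 of (11.32) at z1 = z2 = 1:
g1 = p * mur1 / (1 - p * nur1)              # mean gammas per initial neutron
# d/dz2 of (11.36) at z1 = z2 = 1:
mean_gamma = mus1 + nus1 * g1               # mean gammas per source event
print(g1, mean_gamma)
```

Even without gamma multiplication, the neutron chain amplifies the gamma production: the mean per source event exceeds the direct source contribution μs,1.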
11.3 Neutron Distributions
For the neutrons it is clear that

u(z_1 = z, z_2 = 1) = h(z) \quad \text{and} \quad U(z_1 = z, z_2 = 1) = H(z). \qquad (11.37)

From equation (11.32) it follows that

h(z) = (1-p)\,z + p\,q_r[h(z)], \qquad (11.38)

and from (11.36) that

H(z) = q_s[h(z)]. \qquad (11.39)
and from (11.36) that These equations are exactly the same as (12b) and (14) in [94]. Equations (11.38) and (11.39) are implicit equations for h(z) and H (z), and they cannot be solved explicitly. However, this is not necessary either, because both the factorial moments and the values of P(n) can be obtained analytically in a recursive manner to any order, by derivation and subsequently solving the algebraic equations that arise. The highest order derivative can always be expressed explicitly from a first order equation, in terms of the (already known) lower order derivatives.
11.3.1 Factorial moments
The factorial moments of the number distribution of neutrons generated in the sample are related to measurable quantities, although they cannot be measured themselves. Out of these, the first three have practical significance, since they are used routinely in safeguards measurements; they are called singles, doubles and triples. These will be given here, as derived from (11.38) and (11.39).

First moments (singles)

\tilde{\nu}_1 = \nu_{s,1}\,h_1, \quad \text{where} \quad h_1 = \left.\frac{dh(z)}{dz}\right|_{z=1}.

It follows from (11.38) that h₁ = 1 − p + p νr,1 h₁, and so

h_1 = \frac{1-p}{1-p\,\nu_{r,1}} \equiv M, \qquad (11.40)

where

p\,\nu_{r,1} < 1, \qquad (11.41)

and where M ≡ h₁ is called the leakage multiplication.⁵ Finally, one obtains

\tilde{\nu}_1 = \frac{1-p}{1-p\,\nu_{r,1}}\,\nu_{s,1} = M\,\nu_{s,1}. \qquad (11.42)

Often it is practical to express the probability p, as well as some of its functions, by the leakage multiplication M. It follows from (11.42) that

p = \frac{M-1}{\nu_{r,1}\,M - 1}, \qquad (11.43)

and further that

\frac{p}{1-p\,\nu_{r,1}} = \frac{M-1}{\nu_{r,1}-1}. \qquad (11.44)

Using the right-hand side of (11.44) in the forthcoming expressions instead of the left-hand side has the advantage that it is a linear function of its argument M, as opposed to the nonlinear dependence of the left-hand side on p. If the average number of induced fissions in a cascade is called ϕr, then the average number of neutrons generated in the sample can be written as ν̃₁ ≡ M νs,1 = (1 − p)(νs,1 + ϕr νr,1), and one finds that

\phi_r = \frac{p\,\nu_{s,1}}{1-p\,\nu_{r,1}} \equiv \frac{M-1}{\nu_{r,1}-1}\,\nu_{s,1}. \qquad (11.45)

Second moments (doubles)
In a similar way one obtains

\tilde{\nu}_2 = M^2\left[\nu_{s,2} + \frac{p}{1-p\,\nu_{r,1}}\,\nu_{s,1}\nu_{r,2}\right] = M^2\left[\nu_{s,2} + \frac{M-1}{\nu_{r,1}-1}\,\nu_{s,1}\nu_{r,2}\right]. \qquad (11.46)

Third moments (triples)

\tilde{\nu}_3 = M^3\left\{\nu_{s,3} + \frac{M-1}{\nu_{r,1}-1}\left(3\nu_{s,2}\nu_{r,2} + \nu_{s,1}\nu_{r,3}\right) + 3\left(\frac{M-1}{\nu_{r,1}-1}\right)^2 \nu_{s,1}\nu_{r,2}^2\right\}. \qquad (11.47)

⁵ If absorption is present and p still stands for the probability of inducing fission, then M is called the total multiplication. For a discussion on the significance of the difference between total and leakage multiplication, see [99]. In the treatment here always the leakage multiplication is used.
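The singles, doubles and triples formulas can be checked by carrying out the implicit differentiation of (11.38) and (11.39) directly in code. The sketch below (with illustrative moment values, not evaluated nuclear data) computes h₁, h₂, h₃ recursively and compares the resulting moments with the closed forms (11.42), (11.46) and (11.47).

```python
# Check the closed-form singles (11.42), doubles (11.46) and triples (11.47)
# against direct implicit differentiation of h(z) = (1-p)z + p*q_r[h(z)]
# and H(z) = q_s[h(z)] at z = 1.  All moment values are illustrative
# assumptions, not evaluated nuclear data.
p = 0.2                            # fission probability, p*nu_r1 < 1 (subcritical)
nus1, nus2, nus3 = 2.1, 3.8, 5.2   # source factorial moments nu_{s,k}
nur1, nur2, nur3 = 2.4, 4.6, 6.8   # induced-fission factorial moments nu_{r,k}

# Differentiating (11.38) once, twice, three times and solving for h_k:
h1 = (1 - p) / (1 - p * nur1)
h2 = p * nur2 * h1**2 / (1 - p * nur1)
h3 = p * (nur3 * h1**3 + 3 * nur2 * h1 * h2) / (1 - p * nur1)

# Chain rule on H(z) = q_s[h(z)] at z = 1:
singles = nus1 * h1
doubles = nus2 * h1**2 + nus1 * h2
triples = nus3 * h1**3 + 3 * nus2 * h1 * h2 + nus1 * h3

# Closed forms, with M = h1 and D = (M-1)/(nu_r1 - 1) from (11.44):
M = h1
D = (M - 1) / (nur1 - 1)
s_closed = M * nus1
d_closed = M**2 * (nus2 + D * nus1 * nur2)
t_closed = M**3 * (nus3 + D * (3 * nus2 * nur2 + nus1 * nur3)
                   + 3 * D**2 * nus1 * nur2**2)
print(singles, doubles, triples)
```

Note how the highest derivative at each order appears linearly, so it can be expressed in terms of the lower ones, exactly as stated after (11.39).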
This procedure can be continued analytically in principle to any arbitrary order. Expression (11.47) for ν˜ 3 could be made more explicit by substituting ν˜ 1 and ν˜ 2 from (11.42) and (11.46), respectively. However, when calculating the higher order moments, it is more practical to keep the lower order moments symbolic, so that the formulae do not swell rapidly with increasing order.6 Then, by invoking also symbolic computation codes [100], moments up to very high orders (approximately 100) can be calculated, as was recently demonstrated [101]. For the factorial moments this does not have a practical interest, since in general only the first three moments are used. It is more interesting for the number distribution itself, which is described below.
11.3.2 Number distribution of neutrons
The distribution P(n) can be obtained from (11.38) and (11.39) by noting that p(n) and P(n) are the Taylor expansion coefficients of h(z) and H(z), respectively, i.e.

p(n) = \frac{1}{n!}\left.\frac{d^n h(z)}{dz^n}\right|_{z=0} \quad \text{and} \quad P(n) = \frac{1}{n!}\left.\frac{d^n H(z)}{dz^n}\right|_{z=0}. \qquad (11.48)

The derivations of the p(n) and P(n) will show similarities both in the derivation and in the structure of the solutions, which is mainly due to the nested structure of (11.38) and (11.39). An obvious difference will be, however, that for the factorial moments the derivatives have to be evaluated at z = 1, for which one has h(1) = H(1) = 1, and differentiating the equations for the generating functions just leads to the appearance of the factorial moments of the numbers of neutrons generated in induced and spontaneous fission. For the probability distributions, the derivatives have to be evaluated at z = 0, for which case no similar convenience exists. In particular, as (11.38) and (11.39) show, one will need the derivatives of the generating functions qs(z) and qr(z) at z = h(0) = p(0). To simplify the notations, define the modified nth factorial moments

\hat{\nu}_{s,n} = \left.\frac{d^n q_s(h)}{dh^n}\right|_{h(z=0)=p(0)} \quad \text{and} \quad \hat{\nu}_{r,n} = \left.\frac{d^n q_r(h)}{dh^n}\right|_{h(z=0)=p(0)}. \qquad (11.49)

The probabilities P(0) and p(0) are then given from (11.38) and (11.39) as

P(0) = q_s[p(0)]
(11.50)

and

p(0) = p\,q_r[p(0)] = p \sum_{n=0}^{N} p_r(n)\,[p(0)]^n. \qquad (11.51)

Equation (11.51) is an Nth order algebraic equation, where N ≈ 8. Accordingly, this equation has N roots, out of which one and only one is a real root in the interval [0, 1]. This can be proven as follows. Define the function Φ(z) = z − p qr(z), which is an Nth order polynomial in z. Since

\Phi(0) = -p\,p_r(0) \le 0 \quad \text{and} \quad \Phi(1) = 1 - p \ge 0,

a root exists in [0, 1] by continuity; and since Φ is concave there (Φ″(z) = −p qr″(z) ≤ 0 for z ≥ 0), Φ can cross zero from below only once, hence the root is unique. It means that 0 ≤ p(0) ≤ 1 and consequently 0 ≤ P(0) ≤ 1. The root has to be found numerically. The solution will depend, in addition to the distribution pr(n), also on the value p of the probability of inducing a fission. Formally, with the notations introduced in (11.49), p(0) can be written in the form

p(0) = p\,\hat{\nu}_{r,0}. \qquad (11.52)
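The numerical root finding is straightforward; since the function z − p qr(z) changes sign exactly once on [0, 1], bisection suffices. A minimal sketch with a toy induced-fission distribution (illustrative, not evaluated nuclear data):

```python
# Find the unique root p(0) of equation (11.51) in [0, 1] by bisection on
# Phi(z) = z - p*q_r(z).  Toy induced-fission distribution, not nuclear data.
p = 0.3
p_r = [0.02, 0.10, 0.26, 0.32, 0.20, 0.08, 0.02]

def q_r(z):
    """Generating function of the induced-fission neutron number."""
    return sum(pn * z**n for n, pn in enumerate(p_r))

def phi(z):
    return z - p * q_r(z)

lo, hi = 0.0, 1.0      # phi(0) = -p*p_r(0) <= 0 and phi(1) = 1 - p >= 0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if phi(mid) <= 0.0:
        lo = mid
    else:
        hi = mid
p0 = 0.5 * (lo + hi)
print(p0)              # the unique root p(0) in [0, 1]
```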
⁶ This will be necessary already in the forthcoming sections, when calculating the factorial moments of the distributions concerning gamma photons.
This will fit formally with the solutions found for the p(n) for n ≥ 1, with the exception that (11.52) is not a solution but an equation, since the right-hand side contains the powers of p(0) as weighting factors. Similarly, one can write

P(0) = \sum_{n=0}^{N} p_s(n)\,[p(0)]^n = \hat{\nu}_{s,0}. \qquad (11.53)

The higher order terms do not require the solution of higher order algebraic equations, only linear ones. One finds that the only difference in calculating the values of the P(n), as compared to the factorial moments of the number of detected neutrons, resides in replacing the ordinary factorial moments of the source emission and induced reaction distributions with the modified ones, whereas the solution of the algebraic equations arising from the nested structure of equations (11.38) and (11.39) remains the same. Hence there is a one-to-one formal correspondence between the factorial moments ν̃n of the number of detected neutrons and the probabilities P(n) of detecting n neutrons from one source event. In particular, one obtains

P(1) = \hat{\nu}_{s,1}\,\frac{1-p}{1-p\,\hat{\nu}_{r,1}} \equiv \hat{M}\,\hat{\nu}_{s,1}, \qquad (11.54)

where, in analogy with (11.42), the modified leakage multiplication M̂ was introduced. One can see that with the introduction of the modified expected values ν̂s,n and ν̂r,n, the expression for P(1) is formally equivalent to the expected value ν̃₁ of the number of neutrons (singles) generated in one source emission event, equation (11.42), with the difference that ν̂s,1 and ν̂r,1 replace νs,1 and νr,1. The dependence of P(1) on the non-leakage probability p is, though, more complicated than that of ν̃₁ since, unlike νs,1 and νr,1, which are nuclear physics constants, ν̂s,1 and ν̂r,1 depend also on p, through the dependence of p(0) on p, cf. (11.51). In a similar manner, one will have

P(2) = \frac{1}{2!}\,\hat{M}^2\left[\hat{\nu}_{s,2} + \frac{\hat{M}-1}{\hat{\nu}_{r,1}-1}\,\hat{\nu}_{s,1}\hat{\nu}_{r,2}\right] \qquad (11.55)

and

P(3) = \frac{1}{3!}\,\hat{M}^3\left\{\hat{\nu}_{s,3} + \frac{\hat{M}-1}{\hat{\nu}_{r,1}-1}\left(3\hat{\nu}_{s,2}\hat{\nu}_{r,2} + \hat{\nu}_{s,1}\hat{\nu}_{r,3}\right) + 3\left(\frac{\hat{M}-1}{\hat{\nu}_{r,1}-1}\right)^2 \hat{\nu}_{s,1}\hat{\nu}_{r,2}^2\right\}. \qquad (11.56)
As is seen from the above, the expressions for P(n) for n ≥ 1 are identical with those of the ν̃n if the substitution {νs,k, νr,k} → {ν̂s,k, ν̂r,k} is performed and the result is divided by n!; conversely, the ν̃n, n ≥ 1, can be obtained from the P(n) by multiplying with n! and substituting {ν̂s,k, ν̂r,k} → {νs,k, νr,k}. As mentioned earlier, the higher order factorial moments can be calculated by symbolic computation to large orders. This is valid also for the calculation of the probabilities P(n) of the number of generated neutrons. Actually, due to the formal equivalence between the factorial moments and the probabilities, the same algorithm can be used for calculating both, and the calculation of these two different quantities requires the same amount of computational effort for obtaining explicit solutions. The only difference is that the factorial moments of the source and induced emissions need to be modified for the calculation of the probability distribution P(n), and an algebraic equation also needs to be solved for the probability p(0). The feasibility of calculations up to order 50 by using the symbolic code Mathematica [100] has been demonstrated [101], where quantitative results are given and compared with Monte-Carlo calculations. Such an example is given in Fig. 11.1 [101]. These calculations were performed for three different spherical samples with masses of 335, 2680 and 9047 g, and sample material consisting of 20 wt% 240Pu and 80 wt% 239Pu. The corresponding values of the non-leakage probability p were calculated with the Monte-Carlo code MCNP-PoliMi⁷ [102]. The figure shows how the number distributions develop a tail for larger values n of the neutron number with increasing sample mass, implying increasing values of the non-leakage probability p.
⁷ Note that these calculations were made with the model Pu density of 10 g/cm³.
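Besides symbolic algebra, the probabilities p(n) and P(n) can also be obtained numerically, by iterating (11.38) on truncated power series (each pass adds one further generation of the chain) and then forming H(z) = qs[h(z)] by polynomial composition. A minimal sketch with toy distributions (illustrative assumptions, not nuclear data):

```python
# Compute the number distributions p(n) and P(n) of (11.48) by fixed-point
# iteration of h(z) = (1-p)z + p*q_r[h(z)] on truncated power series, then
# H(z) = q_s[h(z)].  Toy distributions, illustrative only.
p = 0.3
p_r = [0.02, 0.10, 0.26, 0.32, 0.20, 0.08, 0.02]   # induced fission
p_s = [0.03, 0.15, 0.30, 0.30, 0.16, 0.06]          # source event
N = 25                                              # series truncation order

def pmul(a, b):
    """Product of two power series, truncated at degree N."""
    out = [0.0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > N:
                    break
                out[i + j] += ai * bj
    return out

def compose(dist, h):
    """Truncated series of sum_k dist(k) * h(z)**k."""
    out = [0.0] * (N + 1)
    power = [1.0] + [0.0] * N                       # h**0
    for dk in dist:
        for m in range(N + 1):
            out[m] += dk * power[m]
        power = pmul(power, h)
    return out

h = [0.0] * (N + 1)
for _ in range(400):                                # generation expansion
    h = [p * c for c in compose(p_r, h)]
    h[1] += 1 - p

P = compose(p_s, h)                                 # P(n) = coefficient of z^n
print(P[:4])
```

The zeroth coefficient of h converges to the root p(0) of (11.51), and the first coefficient of H reproduces the closed form (11.54) with the modified moments evaluated at p(0).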
Figure 11.1 The number distribution P(n) of neutrons generated in a sample, for three different sample masses. The symbols show the results obtained by the Monte-Carlo code MCNP-PoliMi. From Enqvist et al. [101].
The values obtained from the analytical calculations were compared with the results of Monte-Carlo calculations performed with the code MCNP-PoliMi and an excellent agreement was found as is seen in the figure.
11.3.3 Statistics of emitted and detected neutrons
In the case when the internal absorption of the neutrons in the sample is not negligible, the statistics of the number of neutrons emitted from the sample will deviate from that of the number of neutrons generated in the sample. The process of internal absorption of the neutrons can be taken into account with the formalism used so far, by redefining p as the probability of a first reaction of a neutron, and pr(n) as the probability of generating n neutrons in a reaction, rather than in a fission. That is, similarly to the treatment in Part I, pr(0) needs also to include the effect of capture. The presence of absorption will alter the factorial moments and the probability distributions of the neutrons emitted compared to the corresponding quantities of the neutrons generated. In practice, these changes are rather moderate (for some subtleties, however, see [99]). The fact that the emitted neutrons are detected with a detector efficiency ε < 1 will also alter the statistics. The process of detection can be taken into account by the (uniform) probability ε ≤ 1 that a neutron which escaped from the sample will be detected. The procedure is the same as described in Section 4.3, equations (4.126) and (4.127). The generating function of the distribution of the number of neutrons detected for one neutron leaving the sample is given as

\varepsilon(z) = \varepsilon z + (1 - \varepsilon). \qquad (11.57)
As was shown in Section 4.3, and as is readily confirmed by simple considerations, the generating functions of the detected neutrons for one initial neutron, hd(z), and of the detected neutrons per one initial source event, Hd(z), are given as

h_d(z) = h[\varepsilon(z)] \qquad (11.58)

and

H_d(z) = H[\varepsilon(z)]. \qquad (11.59)

Hence, for n > 0,

\frac{d^n h_d(z)}{dz^n} = \varepsilon^n \left.\frac{d^n h(u)}{du^n}\right|_{u=\varepsilon(z)} \qquad (11.60)

and

\frac{d^n H_d(z)}{dz^n} = \varepsilon^n \left.\frac{d^n H(u)}{du^n}\right|_{u=\varepsilon(z)}. \qquad (11.61)
Figure 11.2 The number distribution of neutrons P(n). (a) The number distribution of neutrons leaving the sample when the internal absorption is accounted for (lines with crosses), the absorption-free values shown with other symbols. (b) The same when the detection process with a detection efficiency of 50% is also taken into account.
For the factorial moments these relationships have a simple meaning. One only needs the derivatives with n ≥ 1, and the derivatives have to be evaluated at z = 1 in (11.60) and (11.61). Hence the factorial moments ν̃n,d of the number of detected neutrons are simply equal to εⁿ times the corresponding moments ν̃n for the total number of neutrons leaving the sample,

\tilde{\nu}_{n,d} = \varepsilon^n\,\tilde{\nu}_n. \qquad (11.62)
For the results in (11.42)–(11.47), this simply means that the right-hand sides have to be multiplied with the corresponding power of ε to take into account the effect of detection. For the probability distributions pd(n) and Pd(n), the situation is considerably more involved. Formally, equations (11.60) and (11.61) still hold, but now the derivatives in (11.60) and (11.61) need to be taken at z = pd(0), and this latter itself depends on the detection probability. This statement is clear when considering that, due to (11.58), equation (11.51) is modified to

p_d(0) = (1-p)(1-\varepsilon) + p\,q_r[p_d(0)]. \qquad (11.63)
This means that the derivatives have to be evaluated for different values on the left-hand and the right-hand sides of (11.60) and (11.61), respectively. Equation (11.63) still has only one real root 0 < pd(0) < 1, but it is a function of the detection probability ε. This means that the modified factorial moments ν̂s,n and ν̂r,n of the source and induced neutron numbers per fission, respectively, defined in (11.49), will also change, since there is an implicit ε dependence of the substitution value. Hence the pd(n) and Pd(n), although formally still having the same expressions as before, acquire, in addition to the explicit dependence on ε through a multiplicative term, also an implicit dependence on the detection probability. The relationship between the probabilities of the detected and the emitted neutrons can hence be written as

P_d(n) = \varepsilon^n\,P(n)\big|_{\hat{\nu}_{s,n}(\varepsilon),\,\hat{\nu}_{r,n}(\varepsilon)}. \qquad (11.64)

This means that for different detector efficiencies, the probabilities P(n) have to be fully re-calculated. On the other hand, the formal equivalence between the factorial moments and the probability distributions, with the proper substitutions described in the previous section, still holds and can be utilised in setting up symbolic algebra algorithms for the analytical calculation of the factorial moments and the probabilities. Some illustrations of the influence of the internal absorption and the detection process are shown in Fig. 11.2. In the calculations the same three sample masses and sample composition (20 wt% 240Pu and 80 wt% 239Pu) were used as in the previous case, and the first collision probabilities p were again calculated by MCNP-PoliMi. Figure 11.2a illustrates that when internal absorption in a sample containing a portion of
the non-fissile element 240 Pu is taken into account, it hardly influences the number distributions at all. These calculations were performed with real cross-section data, but with a model Pu density (10 g/cm3 ). Accounting for the detection process with an assumed 50% detection efficiency has, on the other hand, a significant effect on the number distributions, as expected, as seen in Fig. 11.2b.
11.4 Gamma Photon Distributions At each fission, either spontaneous or induced, a relatively large number of gamma photons are created (up to about 20 for certain isotopes). Excluding the possibilities for photofission or photoneutron production, which require higher photon energies than those from fission, or the presence of some materials such as beryllium that are usually not contained in the samples investigated, the photons themselves do not take part in the branching process (the presence of a gamma photon cannot lead to the occurrence of other photons).8 However, even without multiplication of the gamma photons, their production follows the entire chain of neutron branching, and on the whole, a substantially larger number of gamma photons are generated than neutrons. In practice, gamma absorption in the heavy elements (i.e. inside the sample) will constitute a strong screening factor which is absent for neutrons, and this fact counteracts the advantages of gamma multiplicity counting for large samples. Nevertheless, in certain sample mass ranges, and in particular when used in combination with neutron counting, including gamma detection into the process can enhance the possibilities of detecting and identifying fissile material. As was the case with neutrons, fs (n) can accommodate the production of gamma photons from both fission and (α, n) reactions, with a suitable weighting procedure. Regarding the gamma photons from reactions, one can also include both induced fission, and gamma photon production from other reactions, such as capture and inelastic scattering of neutrons. In that case fr (n) is the probability of emitting n gamma photons per neutron reaction. Gamma capture inside the sample, and the effect of detection will only be accounted for in Section 11.4.3. Taking into account the relationships u(z1 = 1, z2 = z) = g(z)
and

U(z_1 = 1, z_2 = z) = G(z),     (11.65)

in the case of gamma quanta the following equations are obtained from (11.32) and (11.36):

g(z) = (1 − p) + p r_r(z) q_r[g(z)]     (11.66)

and

G(z) = r_s(z) q_s[g(z)],     (11.67)

which are identical with (3) and (4) in [97].
11.4.1 Factorial moments
From the above, the factorial moments and the probabilities F(n) can be calculated with a procedure similar to the case of the neutrons, although the expressions will be somewhat more involved. In order to simplify notations and expedite the interpretation of certain frequently occurring factors, similar to the leakage multiplication of the neutrons, we will make use of the first three factorial moments of the single-neutron-induced gamma distributions as follows. The first moment

g_1 ≡ dg(z)/dz |_{z=1}     (11.68)

⁸ Such processes can be included into the formalism without difficulty, but they are not listed here, for brevity.
is obtained from (11.66) as

g_1 = p μ_{r,1}/(1 − p ν_{r,1}) ≡ M_γ = μ_{r,1} (M − 1)/(ν_{r,1} − 1).     (11.69)

The quantity M_γ, introduced here in analogy with the leakage multiplication M of the neutrons, can be called the gamma multiplication per one initial neutron, and it gives the average number of gamma photons generated in the sample by one initial neutron. The last equality of (11.69) shows clearly how this factor is related to the average number of fissions in the system. For the other two factorial moments of the single-neutron-induced gamma photon number one obtains

g_2 ≡ d²g(z)/dz² |_{z=1} = (M − 1)/(ν_{r,1} − 1) {μ_{r,2} + 2μ_{r,1}ν_{r,1}M_γ + ν_{r,2}M_γ²}     (11.70)

and

g_3 ≡ d³g(z)/dz³ |_{z=1} = (M − 1)/(ν_{r,1} − 1) {μ_{r,3} + 3μ_{r,2}ν_{r,1}M_γ + 3μ_{r,1}[ν_{r,2}M_γ² + ν_{r,1}g_2] + ν_{r,3}M_γ³ + 3ν_{r,2}M_γ g_2}.     (11.71)
With these notations, the first three factorial moments of the source-induced gamma photon numbers are given as follows.

Singles:
μ̃_1 = μ_{s,1} + ν_{s,1} p μ_{r,1}/(1 − p ν_{r,1}) = μ_{s,1} + ν_{s,1}M_γ.     (11.72)

Doubles:
μ̃_2 = μ_{s,2} + 2μ_{s,1}ν_{s,1}M_γ + ν_{s,2}M_γ² + ν_{s,1}g_2.     (11.73)

Triples:
μ̃_3 = μ_{s,3} + 3μ_{s,2}ν_{s,1}M_γ + 3μ_{s,1}{ν_{s,2}M_γ² + ν_{s,1}g_2} + ν_{s,3}M_γ³ + 3ν_{s,2}M_γ g_2 + ν_{s,1}g_3.     (11.74)

Again, higher order moments can be derived by symbolic computation. Moments up to 100th order were derived recently and verified by Monte-Carlo calculations [101].
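The moment formulae above are straightforward to evaluate numerically. The sketch below uses hypothetical moment values (not evaluated nuclear data) and also checks the algebraic identity p/(1 − p ν_{r,1}) = (M − 1)/(ν_{r,1} − 1) that connects the two equalities in (11.69):

```python
# Numerical sketch of the gamma Singles/Doubles moments (11.69)-(11.73).
# All input numbers are hypothetical illustrative values.

p = 0.3
nu_r1, nu_r2 = 2.8, 6.0      # induced-fission neutron moments (assumed)
nu_s1, nu_s2 = 2.1, 3.8      # source-event neutron moments (assumed)
mu_r1, mu_r2 = 6.9, 40.0     # induced-fission gamma moments (assumed)
mu_s1, mu_s2 = 6.5, 38.0     # source-event gamma moments (assumed)

M = (1 - p) / (1 - p * nu_r1)                  # leakage multiplication
M_gamma = p * mu_r1 / (1 - p * nu_r1)          # first equality in (11.69)
M_gamma_alt = mu_r1 * (M - 1) / (nu_r1 - 1)    # last equality in (11.69)

g2 = (M - 1) / (nu_r1 - 1) * (mu_r2 + 2 * mu_r1 * nu_r1 * M_gamma
                              + nu_r2 * M_gamma**2)          # (11.70)

singles = mu_s1 + nu_s1 * M_gamma                            # (11.72)
doubles = mu_s2 + 2 * mu_s1 * nu_s1 * M_gamma \
          + nu_s2 * M_gamma**2 + nu_s1 * g2                  # (11.73)
```

The agreement of `M_gamma` and `M_gamma_alt` confirms that (11.69) and the prefactor of (11.70) and (11.71) use the same quantity.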
11.4.2 Number distribution of gamma photons
The probabilities f(n) and F(n) are calculated as Taylor expansion coefficients of g(z) and G(z), respectively:

f(n) = (1/n!) d^n g(z)/dz^n |_{z=0}   and   F(n) = (1/n!) d^n G(z)/dz^n |_{z=0}.     (11.75)

The procedure is very much the same as in the case of neutrons, with the obvious difference that instead of h(z = 0) = p(0), the quantity g(z = 0) = f(0) will appear both as unknown and as a substitution variable. First, the substitution of z = 0 into (11.66) yields

f(0) = (1 − p) + p r_r(0) q_r[f(0)] = (1 − p) + p r_r(0) Σ_{n=0}^{N} p_r(n)[f(0)]^n.     (11.76)

This equation has exactly the same order N as (11.51) has for p(0) for the neutrons, i.e. N = 8 is a good indicative value. This is because the order of the equation is determined by the maximum degree of branching, which is only associated with the fission process, since the gamma photons do not undergo branching in the model treated here. Having solved this equation for f(0), one has

F(0) = f_s(0) q_s[f(0)] = f_s(0) Σ_{n=0}^{N} p_s(n)[f(0)]^n.     (11.77)
Here again it is suitable to introduce short-hand notations for the derivatives of r_s(z) and r_r(z) taken at z = 0 and for those of q_s(z) and q_r(z) at z = f(0), as follows:

d^n r_s(z)/dz^n |_{z=0} = n! f_s(n) ≡ μ_{s,n};   d^n r_r(z)/dz^n |_{z=0} = n! f_r(n) ≡ μ_{r,n}     (11.78)

and

d^n q_s(g)/dg^n |_{g(0)=f(0)} = ν_{s,n}^{(γ)};   d^n q_r(g)/dg^n |_{g(0)=f(0)} = ν_{r,n}^{(γ)}.     (11.79)

The superscript (γ) in (11.79) is meant to indicate that the modified factorial moments ν_{s,n}^{(γ)} and ν_{r,n}^{(γ)} are not equal to their counterparts in (11.49), because they are weighted with the probability f(0), in contrast to p(0), used in the calculation of ν_{s,n} and ν_{r,n}. With the quantities introduced here, equation (11.77) can be rewritten as

F(0) = f_s(0) ν_{s,0},     (11.80)

in analogy with (11.53). The calculation of the higher order probabilities is straightforward; one obtains

F(1) = μ_{s,1}ν_{s,0} + μ_{s,0}ν_{s,1} p μ_{r,1}ν_{r,0}/(1 − p μ_{r,0}ν_{r,1}),     (11.81)

and

F(2) = (1/2!)[μ_{s,2}ν_{s,0} + 2μ_{s,1}ν_{s,1}f(1) + μ_{s,0}ν_{s,2}f²(1) + μ_{s,0}ν_{s,1} · 2! f(2)],     (11.82)
and so on. A comparison of (11.81) with (11.72) shows that, similarly to the case of the neutrons, the factorial moments μ̃_n can be obtained from the probabilities F(n) by substituting the modified moments ν_{s,n}^{(γ)}, ν_{r,n}^{(γ)} and the μ_{s,n}, μ_{r,n} taken at z = 0 with the ordinary factorial moments ν_{s,n}, ν_{r,n} and μ_{s,n}, μ_{r,n}. This is of course valid for any order n, even if a direct confirmation between F(2) and μ̃_2 is not straightforward. However, the comparison between (11.81) and (11.72) also reveals that this conversion of the formulae for F(n) to μ̃_n is reversible only in a formal sense, but not in practice as long as the μ̃_n are given in their usual form, such as in (11.72)–(11.74). The reason is the differing structure of (11.66) and (11.67) from (11.38) and (11.39), i.e. the occurrence of the product of two generating functions in the former, which leads to the occurrence of zeroth order terms such as ν_{s,0} and μ_{s,0} in the F(n). The corresponding zeroth order ordinary factorial moments, appearing in the expressions of the μ̃_n, are identically unity and hence not seen in the expressions. The more complicated structures notwithstanding, the probabilities F(n) can be calculated to high orders (N ≈ 100) with symbolic computation, such that the whole probability distribution can be reconstructed. Such a result is shown from [101] in Fig. 11.3. Similarly to the case of the neutrons, it is seen that without absorption of the gamma photons, the distributions develop a tail towards higher photon numbers with increasing sample mass. The figure again shows both the analytical results and those obtained by the Monte-Carlo code MCNP-PoliMi, and an excellent agreement is found.
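Instead of symbolic algebra, the Taylor coefficients f(n) of g(z) can also be obtained numerically, by iterating the implicit equation (11.66) on truncated power series (the coefficient of order n of the composite series depends only on coefficients of order ≤ n, so the truncation is exact). The polynomials for q_r and r_r below are hypothetical illustrative distributions, not evaluated data:

```python
# Sketch: recovering the Taylor coefficients f(n) of g(z) from
# g(z) = (1-p) + p * r_r(z) * q_r[g(z)]   (11.66)
# by fixed-point iteration on truncated power series.

N = 40          # truncation order of the series
p = 0.3         # first-collision probability (assumed)

q_r = [0.05, 0.25, 0.40, 0.25, 0.05]              # p_r(n), neutrons/reaction
r_r = [0.02, 0.08, 0.15, 0.25, 0.25, 0.15, 0.10]  # f_r(n), gammas/reaction

def mul(a, b):
    """Truncated product of two power series (lists of coefficients)."""
    out = [0.0] * (N + 1)
    for i, ai in enumerate(a[:N + 1]):
        for j, bj in enumerate(b[:N + 1 - i]):
            out[i + j] += ai * bj
    return out

def poly_of_series(coeffs, s):
    """Evaluate sum_n coeffs[n] * s(z)**n as a truncated series."""
    out = [coeffs[0]] + [0.0] * N
    power = [1.0] + [0.0] * N
    for c in coeffs[1:]:
        power = mul(power, s)
        out = [o + c * pi for o, pi in zip(out, power)]
    return out

g = [0.0] * (N + 1)                      # initial guess for the series of g(z)
rr_series = r_r + [0.0] * (N + 1 - len(r_r))
for _ in range(200):                     # iterate g <- (1-p) + p*r_r*q_r[g]
    qr_g = poly_of_series(q_r, g)
    g = [(1 - p) * (i == 0) + p * c for i, c in enumerate(mul(rr_series, qr_g))]

f = g                                    # f(n) = n-th Taylor coefficient of g
```

The constant term f(0) obtained this way coincides with the scalar root of (11.76), and the coefficients sum to (nearly) unity for a subcritical sample, with the residual mass sitting in the truncated tail.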
11.4.3 The statistics of detected gamma photons
Similarly as in (11.57), introduce the generating functions λ_γ(z) and ε_γ(z) of the probability distributions of the number of leaked out gamma photons for an individual photon existing in the system, and of the detected photons for one photon that has already leaked out, respectively:

λ_γ(z) = l_γ(z − 1) + 1     (11.83)

and

ε_γ(z) = ε_γ(z − 1) + 1.     (11.84)
Figure 11.3 The number distribution of gamma photons F(n) generated in a sample, for three different sample masses (335 g, 2680 g and 9047 g). The symbols show the results obtained by the Monte-Carlo code MCNP-PoliMi. From Enqvist et al. [101].
Here l_γ is the probability for a photon in the system to leak out and ε_γ is the probability for a leaked out gamma to be detected. Then, similarly as for the neutrons, the generating function g_d(z) of the detected gamma photons per initial neutron and the generating function G_d(z) of the detected photons per initial source event are given as

g_d(z) = g{λ_γ[ε_γ(z)]}     (11.85)

and

G_d(z) = G{λ_γ[ε_γ(z)]}.     (11.86)

From here it follows that

d^n g_d(z)/dz^n = (l_γ ε_γ)^n d^n g(z)/dz^n     (11.87)

and

d^n G_d(z)/dz^n = (l_γ ε_γ)^n d^n G(z)/dz^n.     (11.88)

From here similar conclusions follow as for the neutrons as regards the relationships between the factorial moments and the probabilities f_d(n) and F_d(n) of detected gamma photons and those generated in the sample. The factorial moments of the detected gamma photons are obtained as multiples of those of the generated photons with the factor (l_γ ε_γ)^n, i.e.

μ̃_{n,d} = (l_γ ε_γ)^n μ̃_n.     (11.89)

For the probabilities F_d(n), one observes that due to (11.85), equation (11.76) for f(0) is changed to that for f_d(0) as

f_d(0) = (1 − p) + p r_r[1 − l_γ ε_γ] q_r[f_d(0)],     (11.90)

which means that f_d(0) becomes a function of the product of the escape and detection probabilities. Hence, as before, the derivatives on the two sides of (11.87) and (11.88) have to be taken at different arguments, and while the expressions for the derivatives remain formally identical, the included factors ν_{s,n}^{(γ)}, ν_{r,n}^{(γ)}, μ_{s,n} and μ_{r,n} all change values. In analogy with (11.62) and (11.64), one finds that

F_d(n) = (l_γ ε_γ)^n F(n)|_{l_γ ε_γ}.     (11.91)
The formal similarities between the formulae for the generated and the emitted/detected neutrons can also in this case be used to generate symbolic codes that calculate the factorial moments and the probabilities with the same algorithm but with different coefficients.
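The scaling (11.89) of the factorial moments is the familiar property of binomial thinning: each generated photon survives leakage and detection independently with probability q = l_γ ε_γ. This can be checked exactly on any finite distribution; the distribution below is a hypothetical example, not book data:

```python
# Sketch: binomial thinning with probability q scales the k-th factorial
# moment by q**k, as in (11.89). P(n) is a hypothetical distribution.
from math import comb

P = [0.1, 0.2, 0.3, 0.25, 0.1, 0.05]   # P(n), assumed distribution
q = 0.35                               # leak-out times detection probability

def fact_moment(dist, k):
    """k-th factorial moment: sum_n n(n-1)...(n-k+1) * dist[n]."""
    out = 0.0
    for n, pn in enumerate(dist):
        term = 1.0
        for i in range(k):
            term *= (n - i)
        out += term * pn
    return out

# Distribution of the detected number: binomial thinning of P
Pd = [sum(P[n] * comb(n, k) * q**k * (1 - q)**(n - k)
          for n in range(k, len(P)))
      for k in range(len(P))]
```

The same thinned distribution also illustrates why the probabilities, unlike the moments, do not simply rescale: Pd(0) collects contributions from every P(n).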
Figure 11.4 (a) The number distribution of photons F(n) leaving the sample by accounting for the internal absorption of the gamma quanta, for the same three sample masses (335 g, 2680 g and 9047 g). (b) The same when the detection process with a detection efficiency of 50% is also taken into account.
Some illustrative cases are shown in Fig. 11.4. Figure 11.4a shows the probability distribution of the number of photons escaping from the sample with internal gamma absorption. In contrast to the case of the neutrons, the internal gamma absorption has a significant influence on the number distributions. In general they are shifted to lower numbers, leading also to lower factorial moments. The structure of the curves also changes: they develop multiple intersections with increasing sample size. For lower gamma numbers the increase of the mass leads to decreasing probabilities, whereas for large numbers the trend reverses and becomes similar to the case without absorption, i.e. larger masses yield higher probabilities. Figure 11.4b shows the effect of the detection added on top of the internal absorption, with a detector efficiency of 50%. It is seen that the distributions are shifted even more to the lower photon numbers, whereas the tendency of multiple crossings is preserved. Most importantly, the discriminative power of the gamma counting method for sample mass is diminished to practically zero.
11.5 Joint Moments
From the generating function equations (11.32) and (11.36), the joint or mixed moments of the neutron and gamma quanta numbers can also be determined. Physically the most interesting of these is the covariance Cov{ν̃, μ̃}, defined as

Cov{ν̃, μ̃} = E{ν̃μ̃} − E{ν̃}E{μ̃} ≡ ∂²U(z_1, z_2)/∂z_1∂z_2 |_{z_1=z_2=1} − ∂U(z_1, z_2)/∂z_1 |_{z_1=z_2=1} · ∂U(z_1, z_2)/∂z_2 |_{z_1=z_2=1}.     (11.92)

Using the generating function (11.36) one obtains

∂U(z_1, z_2)/∂z_1 |_{z_1=z_2=1} = E{ν̃} = (1 − p)/(1 − p ν_{r,1}) ν_{s,1} = M ν_{s,1},     (11.93)

∂U(z_1, z_2)/∂z_2 |_{z_1=z_2=1} = E{μ̃} = μ_{s,1} + p μ_{r,1}/(1 − p ν_{r,1}) ν_{s,1} = μ_{s,1} + ν_{s,1}M_γ,     (11.94)

and

∂²U(z_1, z_2)/∂z_1∂z_2 |_{z_1=z_2=1} = μ_{s,1}ν_{s,1}h_1 + ν_{s,2}h_1 g_1 + ν_{s,1}c_{1,1},     (11.95)
where

h_1 = ∂u(z_1, z_2)/∂z_1 |_{z_1=z_2=1} = dh(z_1)/dz_1 |_{z_1=1} = M

and

g_1 = ∂u(z_1, z_2)/∂z_2 |_{z_1=z_2=1} = dg(z_2)/dz_2 |_{z_2=1} = M_γ,

while

c_{1,1} = ∂²u(z_1, z_2)/∂z_1∂z_2 |_{z_1=z_2=1}.
This latter can be obtained from the generating function (11.32). One finds that

c_{1,1} = p(μ_{r,1}ν_{r,1}h_1 + ν_{r,2}h_1 g_1 + ν_{r,1}c_{1,1}),

and from this it follows that

c_{1,1} = p (μ_{r,1}ν_{r,1}M + ν_{r,2}M M_γ)/(1 − p ν_{r,1}).     (11.96)

This leads to the final result for the covariance in the form

Cov{ν̃, μ̃} = p/(1 − p) (ν_{s,1}ν_{r,1} + ν_{s,2} − ν_{s,1}²) μ_{r,1} M² + [p/(1 − p)]² ν_{s,1} μ_{r,1} ν_{r,2} M³.     (11.97)

The nonlinear dependence of the covariance on the leakage multiplication gives some insight into the mechanism of the cascade process. One can also write (11.97) in the equivalent form

Cov{ν̃, μ̃} = (ν_{s,2} − ν_{s,1}²) M M_γ + ν_{s,1} (M − 1)/(ν_{r,1} − 1) {μ_{r,1}ν_{r,1}M + ν_{r,2}M M_γ}.     (11.98)
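The equivalence of the two forms (11.97) and (11.98) follows from the identities M_γ = p μ_{r,1} M/(1 − p) and (M − 1)/(ν_{r,1} − 1) = pM/(1 − p), and can be cross-checked numerically. The moment values below are hypothetical:

```python
# Sketch: numerical cross-check that the covariance forms (11.97) and
# (11.98) agree. All moment values are hypothetical illustrative inputs.

p = 0.25
nu_s1, nu_s2 = 2.16, 3.79       # source neutron moments (assumed)
nu_r1, nu_r2 = 3.16, 8.16       # induced-fission neutron moments (assumed)
mu_r1 = 6.9                     # induced-fission gamma mean (assumed)

M = (1 - p) / (1 - p * nu_r1)
M_gamma = p * mu_r1 / (1 - p * nu_r1)

cov_97 = (p / (1 - p)) * (nu_s1 * nu_r1 + nu_s2 - nu_s1**2) * mu_r1 * M**2 \
         + (p / (1 - p))**2 * nu_s1 * mu_r1 * nu_r2 * M**3

cov_98 = (nu_s2 - nu_s1**2) * M * M_gamma \
         + nu_s1 * (M - 1) / (nu_r1 - 1) * (mu_r1 * nu_r1 * M
                                            + nu_r2 * M * M_gamma)
```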
It is seen that for p > 0 the covariance is not zero, and it can be shown that it is positive and increases with increasing p. For p = 0, the covariance is naturally zero. For the calculation of the higher joint moments E{ν̃(ν̃ − 1)μ̃} and E{ν̃μ̃(μ̃ − 1)}, we need to introduce some auxiliary quantities. These are as follows:

c_{2,1} = ∂³u(z_1, z_2)/∂z_1²∂z_2 |_{z_1=z_2=1} = p/(1 − p ν_{r,1}) {μ_{r,1}[ν_{r,2}M² + ν_{r,1}h_2] + ν_{r,3}M²M_γ + ν_{r,2}[h_2 M_γ + 2M c_{1,1}]}     (11.99)

and

c_{1,2} = ∂³u(z_1, z_2)/∂z_1∂z_2² |_{z_1=z_2=1} = p/(1 − p ν_{r,1}) {μ_{r,2}ν_{r,1}M + 2μ_{r,1}[ν_{r,2}M M_γ + ν_{r,1}c_{1,1}] + ν_{r,3}M M_γ² + ν_{r,2}[2M_γ c_{1,1} + M g_2]},     (11.100)

where g_2 was defined in (11.70), and analogously,

h_2 ≡ d²h(z)/dz² |_{z=1} = (M − 1)/(ν_{r,1} − 1) ν_{r,2}M².     (11.101)

With the above, the triple joint moments are given as follows:

E{ν̃(ν̃ − 1)μ̃} = μ_{s,1}[ν_{s,2}M² + ν_{s,1}h_2] + ν_{s,3}M_γ M² + ν_{s,2}[2M c_{1,1} + M_γ h_2] + ν_{s,1}c_{2,1}     (11.102)

and

E{ν̃μ̃(μ̃ − 1)} = μ_{s,2}ν_{s,1}M + 2μ_{s,1}[ν_{s,2}M M_γ + ν_{s,1}c_{1,1}] + ν_{s,3}M M_γ² + ν_{s,2}[g_2 M + 2M_γ c_{1,1}] + ν_{s,1}c_{1,2}.     (11.103)

The mixed triples (11.102) and (11.103) express the expectations of detecting simultaneously two neutrons and one gamma photon, and one neutron and two gamma photons, respectively.
11.6 Practical Applications: Outlook
In the foregoing, the detection process was described by an efficiency parameter, without taking into account the time-dependence of the detection process. In reality, the neutrons that were born simultaneously in the 'superfission' process (source fission or (α, n) process plus internal multiplication) will not be detected with perfect coincidence, rather only within a suitable gate window. In order to increase the detection efficiency, thermal detectors are used, surrounded by a moderator. The slowing down and diffusion time of the neutrons in the moderator/detector system is a random variable, resulting in a time difference in the detection of the neutrons arising from the same source event. This is usually taken into account by a 'detector die-away time', i.e. a simple exponential distribution of the detection times of the neutrons emitted from the sample [89, 103–105]. In practice, therefore, the factorial moments of the detected particles, such as the neutron multiplicity moments (11.42), (11.46) and (11.47), are converted into the detection rates of singles (S), doubles (D) and triples (T). This is achieved by accounting for the (unknown) intensity Q_f (cf. (11.26)) of the spontaneous fission (in the safeguards literature usually denoted as F), and simulating the detection process with the concept of the detector efficiency and detector die-away time. This latter leads, among others, to the appearance of the so-called gate fraction or gate utilisation factors, associated with the measurement gate widths, in the formulae. The dependence of the moment expressions on the factor α, the ratio of the intensity of neutron production via (α, n) processes to that via spontaneous fission, needs to be made explicit by the use of (11.28). The three moment equations are then solved for the three unknowns F, M and α [89]. These methods may be extended to account for multiple die-away time constants [103, 104].
It has to be emphasised that the concept of the neutron die-away time is phenomenological, and the formulae given for the evaluation of registered pulse trains are empirical in character. A rigorous derivation of the whole process which also takes into account the finiteness of the detectors is an important aspect which has not been dealt with in this book. Another key problem, not discussed here, is the correction for dead-time losses, especially for the higher moments (triples), where so-called chance pile-up has a major impact [62]. However, the model equations derived with these limitations are remarkably useful in practice for establishing the principal functional relationships, and permitting empirical calibrations to be established and used [106].
Appendices
Appendix A
Elements of the Theory of Generating Functions
Contents
A.1 Basic Properties
A.2 On the Roots of Equation g(x) = x, 0 ≤ x ≤ 1
A.3 A Useful Inequality
A.4 Abel Theorem for Moments
A.5 Series Expansion Theorem
A.6 An Important Theorem
In this chapter, we summarise some of the elementary properties of the generating functions that play an exceptionally important role in the theory and applications of branching processes. Roughly speaking, a generating function is a polynomial or power series whose exponents are non-negative integers and whose coefficients are probabilities of discrete random variables which assume non-negative integer values.
A.1 Basic Properties Define the discrete random variables xn , n = 1, 2, . . . , taking non-negative integers and denote P{xn = k} = pnk ,
k = 0, 1, . . .
(A.1)
the probability that xn = k for every fixed n. For the probabilities pnk , k ≥ 0, n ≥ 1, the relationships 0 ≤ pnk ≤ 1 and
∞
pnk = 1, ∀n ≥ 1
(A.2)
k=0
are fulfilled. The power series E{zxn }
= gn (z) =
∞
pnk zk
(A.3)
k=0
is called the generating function of the discrete random variable xn , in which z is generally a complex quantity. Since the generating function is uniquely determined by the distribution of a random variable, we may speak about the generating function of a probability distribution on the set of non-negative integers. However, this property needs some complementary remarks which will be formulated by the following two theorems.
A.1.1 Continuity theorems
Theorem 27. If the limit values

lim_{n→∞} p_{nk} = p_k,   k = 0, 1, . . .     (A.4)
exist and the relationship

Σ_{k=0}^{∞} p_k = 1     (A.5)

holds, then the series of generating functions g_1(z), g_2(z), . . . , g_n(z), . . . in every point |z| ≤ 1 converges to the generating function

g(z) = Σ_{k=0}^{∞} p_k z^k,     (A.6)

i.e.

lim_{n→∞} g_n(z) = g(z),   ∀|z| ≤ 1.     (A.7)

If the limit values (A.4) exist, but the condition (A.5) is not fulfilled, i.e. if

Σ_{k=0}^{∞} p_k = g(1) < 1,     (A.8)

then the limit relationship (A.7) is valid only inside the unit circle, i.e.

lim_{n→∞} g_n(z) = g(z),   provided that |z| < 1.     (A.9)
Proof. To prove the theorem, first it has to be verified that if the conditions (A.4) and (A.5) are fulfilled, then

|g_n(z) − g(z)| < ε     (A.10)

for every |z| ≤ 1, where ε is an arbitrarily small positive number and n > n_0, where n_0 is a sufficiently large positive integer. Let us take at first the case of |z| < 1. Select the value of the positive integer N sufficiently large such that the inequality

|z|^N < ε/4

is fulfilled. Since p_{nk} and p_k are positive numbers not larger than 1, it is obviously true that |p_{nk} − p_k| ≤ p_{nk} + p_k, and hence one can write that

|g_n(z) − g(z)| = |Σ_{k=0}^{∞} (p_{nk} − p_k) z^k| ≤ Σ_{k=0}^{N−1} |p_{nk} − p_k| + |z|^N Σ_{k=N}^{∞} (p_{nk} + p_k).

Considering that

Σ_{k=N}^{∞} (p_{nk} + p_k) < 2,

one obtains that

|g_n(z) − g(z)| ≤ Σ_{k=0}^{N−1} |p_{nk} − p_k| + ε/2.

After fixing the value of N, due to the limit relationship (A.4), one can find such a positive integer n_0 that the inequality

|p_{nk} − p_k| < ε/(2N)

be fulfilled for every n > n_0. In this way, if |z| < 1 and n > n_0, then |g_n(z) − g(z)| < ε, i.e.

lim_{n→∞} g_n(z) = g(z),   if |z| < 1.
n→∞
is true only if the condition (A.5) is fulfilled for |z| = 1. Namely, if |z| = 1, then one can start from the inequality ∞ N −1 |gn (z) − g(z)| ≤ |pnk − pk | + (pnk − pk ) , k=0
k=N
and since ∞
pnk = 1 −
k=N
N −1
pnk
k=0
based on the relationship (A.4), one can write for every fixed N that ∞
lim
n→∞
pnk = 1 −
k=N
N −1
pk .
k=0
If the condition (A.5) is fulfilled, then ∞
lim
n→∞
pnk =
k=N
∞
pk .
k=N
From this, however, it follows that by fixing the value of N , there exists a positive real number n0 such that if n > n0 , then ∞
(pnk − pk ) < . 2 k=N
Considering that n0 can be selected such that the inequality |pnk − pk | <
2N
be fulfilled, then if n > n0 , it is immediately proved that the inequality |gn (z) − g(z)| <
is true even for |z| = 1, i.e. relationship limn→∞ gn (z) = g(z) is valid also for |z| = 1, provided that g(1) = ∞ k=0 pk = 1. 1 This
means that the function g(z) determined by the limit relationship lim gn (z) = g(z),
n→∞
if |z| < 1, does not necessarily satisfy the relationship g(1) =
∞
k=0 pk
= 1; in other words, it is not necessarily a probability generating function.
The reverse of Theorem 27 can also be stated.

Theorem 28. If the generating functions of the probabilities p_{nk}, k = 0, 1, . . . , n = 1, 2, . . . ,

g_n(z) = Σ_{k=0}^{∞} p_{nk} z^k,   n = 1, 2, . . .

converge in every point |z| ≤ 1 to the generating function

g(z) = Σ_{k=0}^{∞} p_k z^k,

i.e. if

lim_{n→∞} g_n(z) = g(z),   ∀|z| ≤ 1

is fulfilled, then

lim_{n→∞} p_{nk} = p_k   and   g(1) = Σ_{k=0}^{∞} p_k = 1.

If, however, the limit relationship is fulfilled only in the points z inside the unit circle, i.e. if

lim_{n→∞} g_n(z) = g(z),   ∀|z| < 1,

then it only follows that the function g(z), regular in the unit circle, can be given by a power series in z with non-negative coefficients, but it cannot be excluded that the inequality

g(1) = Σ_{k=0}^{∞} p_k < 1

holds.

Proof. To prove Theorem 28, we will use the method of complete induction. First of all, one can immediately realise that from the relationship

lim_{n→∞} g_n(0) = g(0)

it follows that

lim_{n→∞} p_{n0} = p_0,     (A.11)

since g_n(0) = p_{n0} and g(0) = p_0. Now, suppose that from lim_{n→∞} g_n(z) = g(z), |z| ≤ 1, the limit relationships

lim_{n→∞} p_{nk} = p_k,   k = 1, 2, . . . , r − 1     (A.12)

follow. If, based on this assumption, we prove that from the relationship

lim_{n→∞} g_n(z) = g(z),   |z| ≤ 1

the relation lim_{n→∞} p_{nr} = p_r also follows, then by complete induction we conclude that the limit relationship

lim_{n→∞} p_{nk} = p_k     (A.13)

is true for every k.
In order to show this statement, define the functions

g_n^{(r)}(z) = Σ_{k=r}^{∞} p_{nk} z^{k−r} = z^{−r} [g_n(z) − Σ_{k=0}^{r−1} p_{nk} z^k]

and

g^{(r)}(z) = Σ_{k=r}^{∞} p_k z^{k−r} = z^{−r} [g(z) − Σ_{k=0}^{r−1} p_k z^k].

By the assumption, if lim_{n→∞} g_n(z) = g(z), |z| ≤ 1, then (A.12) also follows. Therefore one can write that

lim_{n→∞} Σ_{k=0}^{r−1} p_{nk} z^k = Σ_{k=0}^{r−1} p_k z^k,

and thus it is obvious that the relationship

lim_{n→∞} g_n^{(r)}(z) = g^{(r)}(z),   |z| ≤ 1     (A.14)

is fulfilled. Hence, from

g_n^{(r)}(z) = p_{nr} + Σ_{k=r+1}^{∞} p_{nk} z^{k−r}   and   g^{(r)}(z) = p_r + Σ_{k=r+1}^{∞} p_k z^{k−r},

one obtains

p_{nr} − p_r = g_n^{(r)}(z) − g^{(r)}(z) − z Σ_{k=r+1}^{∞} (p_{nk} − p_k) z^{k−r−1},

from which the inequality

|p_{nr} − p_r| ≤ |g_n^{(r)}(z) − g^{(r)}(z)| + |z| Σ_{k=r+1}^{∞} |p_{nk} − p_k| |z|^{k−r−1}

follows. By using the trivial relationship |p_{nk} − p_k| ≤ 1, the last inequality can be given in the following form:

|p_{nr} − p_r| ≤ |g_n^{(r)}(z) − g^{(r)}(z)| + |z|/(1 − |z|),   |z| < 1.

Select a value |z| ≠ 0 such that the inequality

|z|/(1 − |z|) ≤ ε/2

be fulfilled, where ε is an arbitrarily small positive number. According to (A.14), one can always find such a positive real number n_0 that the relationship

|g_n^{(r)}(z) − g^{(r)}(z)| ≤ ε/2

is valid for n larger than n_0. Accordingly, it is evident that if n > n_0, then |p_{nr} − p_r| ≤ ε, and this is exactly what we wanted to prove.
Remark. It is still left to show that if the relationship

lim_{n→∞} g_n(z) = g(z)     (A.15)

is valid even for |z| = 1, then

Σ_{k=0}^{∞} p_k = g(1) = 1.

This can be done easily, because if (A.15) is fulfilled in the point z = 1, i.e. if lim_{n→∞} g_n(1) = g(1), then since

g_n(1) = Σ_{k=0}^{∞} p_{n,k} = 1   and   g(1) = Σ_{k=0}^{∞} p_k,

one obtains that

lim_{n→∞} Σ_{k=0}^{∞} p_{n,k} = 1 = g(1) = Σ_{k=0}^{∞} p_k.

We notice that, according to the above, if lim_{n→∞} g_n(z) = g(z) is fulfilled only for |z| < 1, then, although lim_{n→∞} p_{nk} = p_k is true for every k, one will nevertheless have Σ_{k=0}^{∞} p_k ≤ 1, since

1 ≥ lim_{n→∞} Σ_{k=0}^{N} p_{nk} = Σ_{k=0}^{N} p_k

is fulfilled for an arbitrarily large N. Consequently, if Σ_{k=0}^{∞} p_k < 1, then

P{x = +∞} = 1 − Σ_{k=0}^{∞} p_k > 0.
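A minimal numerical illustration of this remark, assuming the simple example x_n = n with probability 1 (so g_n(z) = z^n), is the following: for |z| < 1 the limit g(z) = 0 exists and every limit probability p_k is 0, yet the limit "distribution" has lost all its mass, i.e. P{x = +∞} = 1.

```python
# Sketch: probability mass escaping to infinity in the limit of
# generating functions, for the assumed example g_n(z) = z**n.

def g_n(n, z):
    return z ** n

z = 0.5
values = [g_n(n, z) for n in range(1, 30)]
# g_n(z) -> 0 for |z| < 1, while g_n(1) = 1 for every n
```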
A.1.2 Generating function of the sum of discrete random variables
In the theory of branching processes one often has to deal with the sum of independent, identically distributed random variables taking values in the non-negative integers. The following theorem is especially useful for solving many problems arising in the theory of multiplying systems involving a particle source of multiple emission.

Theorem 29. Let x_1, x_2, . . . be a sequence of identically distributed, independent random variables taking values in the non-negative integers, and let r be a random variable, independent of the x_j, j = 1, 2, . . . , which also assumes non-negative integer values. The generating function of the sum

y_r = x_1 + x_2 + · · · + x_r     (A.16)

is given by

G_y(z) = G_r[G_x(z)],     (A.17)

where

G_x(z) = E{z^x} = Σ_{n=0}^{∞} P{x = n} z^n   and   G_r(z) = E{z^r} = Σ_{r=0}^{∞} P{r = r} z^r.
Proof. In order to prove the theorem we use the properties of the conditional expectation and find that

G_y(z) = E{z^{y_r}} = E{E{z^{y_r} | r}} = Σ_{r=0}^{∞} E{z^{y_r} | r = r} P{r = r}.

Since y_r = x_1 + x_2 + · · · + x_r, one obtains the equation

G_y(z) = Σ_{r=0}^{∞} E{z^{Σ_{j=1}^{r} x_j}} P{r = r} = Σ_{r=0}^{∞} Π_{j=1}^{r} E{z^{x_j}} P{r = r} = Σ_{r=0}^{∞} P{r = r} [G_x(z)]^r = G_r[G_x(z)],

thus the theorem is proved. It is useful to mention that if P{r = r} = δ_{r,n}, then G_r(z) = z^n, and G_y(z) = [G_x(z)]^n.
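Theorem 29 can be exercised directly on polynomial coefficients: composing the two generating functions yields the distribution of the random sum, whose mean must equal E{r}E{x} by differentiating (A.17) at z = 1. The two small distributions below are hypothetical examples:

```python
# Sketch of Theorem 29: the coefficient sequence of G_r[G_x(z)] is the
# distribution of y = x_1 + ... + x_r. Both input distributions are
# hypothetical illustrative values.

px = [0.2, 0.5, 0.3]        # P{x = n}
pr = [0.1, 0.3, 0.4, 0.2]   # P{r = r}

def polymul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Coefficients of G_y(z) = sum_r P{r=r} * [G_x(z)]**r
gy = [0.0] * (len(pr) * (len(px) - 1) + 1)
power = [1.0]                               # [G_x(z)]**0
for prob_r in pr:
    for k, c in enumerate(power):
        gy[k] += prob_r * c
    power = polymul(power, px)
```

The resulting coefficients sum to unity, and the mean of y equals E{r}·E{x} = 1.7 × 1.1 for the assumed inputs.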
A.2 On the Roots of Equation g(x) = x, 0 ≤ x ≤ 1
The generating function² g(x) = Σ_{k=0}^{∞} p_k x^k, 0 ≤ x ≤ 1, is called a probability generating function, or simply a positive generating function, if

0 ≤ p_k ≤ 1, ∀k ≥ 0,   and   g(1) = Σ_{k=0}^{∞} p_k = 1.

Theorem 30. If g′(1) = q_1 ≤ 1, then

g(x) > x   if 0 ≤ x < 1,     (A.18)

and if g′(1) = q_1 > 1, then there exists a point x_0 < 1 for which g(x_0) = x_0 and

g(x) > x if 0 ≤ x < x_0,   g(x) < x if x_0 < x < 1.     (A.19)

For verification, define the function ϕ(x) = g(x) − x, whose derivative ϕ′(x) = g′(x) − 1 is a non-decreasing function of x in the interval [0, 1). If g′(1) ≤ 1 then ϕ′(1) ≤ 0; accordingly, ϕ′(x) < 0 for every point 0 ≤ x < 1. As ϕ(1) = 0, it is evident that ϕ(x) > 0 if 0 ≤ x < 1. Thus, the first statement of the theorem has been proved. If g′(1) > 1 then ϕ′(1) > 0, and ϕ(x) < 0 in all those points x < 1 that are near to the point x = 1, since ϕ(1) = 0. However, ϕ(0) = p_0 ≥ 0; consequently, there has to exist a point 0 ≤ x_0 < 1 in which ϕ(x_0) = 0. Two such points, however, cannot exist, as g(x) is convex (all of its derivatives are positive) in the interval [0, 1). With this, we have also proved the second statement of the theorem.

In Fig. A.1, two generating functions can be seen. The first one is g_1(x) = 0.1 + 0.3x + 0.6x², while the second one is g_2(x) = 0.35 + 0.4x + 0.25x². Considering that g_1′(1) = 1.5 > 1, the equation g_1(x) − x = 0 has two roots in the interval [0, 1], namely the trivial x = 1 and the cardinal x_0 = 1/6 < 1. Since g_2′(1) = 0.9 < 1, the equation g_2(x) − x = 0 has only one root in the interval [0, 1], and this is the trivial root x = 1.

² Our considerations now refer to the generating function defined by the power series of the real variable x ∈ [0, 1].
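The two example generating functions from the text can be checked numerically: g_1 has g_1′(1) = 1.5 > 1 and therefore the non-trivial fixed point x_0 = 1/6, while g_2 has g_2′(1) = 0.9 ≤ 1, so x = 1 is its only fixed point in [0, 1]. A simple bisection on ϕ(x) = g(x) − x suffices; no quantities beyond those in the text are assumed:

```python
# Sketch: locating the cardinal root x0 of g1(x) = x by bisection.

def g1(x):
    return 0.1 + 0.3 * x + 0.6 * x * x

def g2(x):
    return 0.35 + 0.4 * x + 0.25 * x * x

def bisect_fixed_point(g, lo=0.0, hi=0.999):
    """Root of g(x) - x in [lo, hi], assuming a sign change of g(x) - x."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (g(lo) - lo) * (g(mid) - mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x0 = bisect_fixed_point(g1)   # expected: the cardinal root 1/6
```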
Figure A.1 Illustration of the roots of the equation g(x) − x = 0.
A.3 A Useful Inequality
We may often need the simple inequality below, which in a proper sense is a variant of the mean-value theorem of differential calculus.

Theorem 31. Let u and v be two arbitrary points of the unit circle, hence |u| ≤ 1 and |v| ≤ 1. Furthermore, let

g(z) = Σ_{n=0}^{∞} p_n z^n,   |z| ≤ 1,   with 0 ≤ p_n ≤ 1, ∀n ≥ 0,

be a probability generating function. We will prove that

|g(u) − g(v)| ≤ |u − v| g′(1).     (A.20)

For this we only need the equality

g(u) − g(v) = Σ_{n=0}^{∞} p_n (u^n − v^n) = (u − v) Σ_{n=0}^{∞} p_n (u^n − v^n)/(u − v).

Since

(u^n − v^n)/(u − v) = u^{n−1} + u^{n−2}v + · · · + uv^{n−2} + v^{n−1},

one obtains

|u^{n−1} + u^{n−2}v + · · · + uv^{n−2} + v^{n−1}| ≤ n,

and from this it immediately follows that

|g(u) − g(v)| ≤ |u − v| Σ_{n=0}^{∞} n p_n.

By considering that Σ_{n=0}^{∞} n p_n = g′(1), the inequality (A.20) is fulfilled.
A.4 Abel Theorem for Moments
In many cases, the factorial moments

m_k = Σ_{n=k}^{∞} n(n − 1) · · · (n − k + 1) p_n = Σ_{j=0}^{∞} u_j^{(k)},   k = 1, 2, . . .     (A.21)

are needed, where

u_j^{(k)} = (k + j)(k + j − 1) · · · (j + 1) p_{k+j}.     (A.22)

Theorem 32. We shall prove that if the limit value

lim_{n→∞} Σ_{j=0}^{n} u_j^{(k)} = m_k

exists, then

lim_{x↑1} Σ_{j=0}^{∞} u_j^{(k)} x^j = m̃_k

also exists, and m̃_k = m_k. Based on this, one can state that the kth derivative of the generating function

g(x) = Σ_{n=0}^{∞} p_n x^n

converges to the factorial moment m_k if x ↑ 1, i.e.

lim_{x↑1} d^k g(x)/dx^k = m_k,   ∀k ≥ 1.     (A.23)
The proof goes as follows. Introduce the notation sn(k) =
n
(k)
uj .
(A.24)
= mk ,
(A.25)
j=0
According to the assumption lim s(k) n→∞ n (k)
and from this follows that the series {sn } is bounded, hence |sn(k) | ≤ B, where 0 < B < ∞. Based on this, ∞ n=0
|sn(k) |xn ≤
∞
Bxn =
n=0
B , 1−x
which means that the series ∞ n=0
sn(k) xn
if 0 ≤ x < 1,
324
Appendix A (k)
is convergent for every 0 ≤ x < 1. By noticing that s−1 = 0, from the trivial equation (1 − x)
∞
sn(k) xn =
n=0
∞
(k)
(sn(k) − sn−1 )xn
n=0
the following equation can be obtained by considering (A.24): (1 − x)
∞
sn(k) xn =
n=0
∞
un(k) xn .
(A.26)
n=0
Since ∞
(1 − x)
xn = 1,
if 0 ≤ x < 1,
n=0
one can write (1 − x)
∞
mk xn = mk .
(A.27)
n=0
Subtracting (A.27) from (A.26) yields ∞
un(k) xn
− mk = (1 − x)
n=0
∞
(sn(k) − mk )xn .
(A.28)
n=0
From the limit relationship (A.25), it follows that to every real number $\epsilon > 0$ there exists a real number $n_0 = n_0(\epsilon)$ such that

$$|s_n^{(k)} - m_k| < \frac{\epsilon}{2}, \qquad \text{if } n \ge n_0.$$

Divide the right-hand side of (A.28) into two parts as follows:

$$\sum_{n=0}^{\infty} u_n^{(k)} x^n - m_k = (1-x)\sum_{n=0}^{n_0-1} \bigl(s_n^{(k)} - m_k\bigr) x^n + (1-x)\sum_{n=n_0}^{\infty} \bigl(s_n^{(k)} - m_k\bigr) x^n.$$

The first term on the right-hand side can be made arbitrarily small by selecting $x$ just slightly less than 1; accordingly,

$$(1-x)\left|\sum_{n=0}^{n_0-1} \bigl(s_n^{(k)} - m_k\bigr) x^n\right| \le (1-x)\sum_{n=0}^{n_0-1} |s_n^{(k)} - m_k| \le \delta \sum_{n=0}^{n_0-1} |s_n^{(k)} - m_k| \le \frac{\epsilon}{2},$$

where $\delta$ is a sufficiently small positive number and $1 - x < \delta$. With a fixed $x$ selected this way, it can be seen that for the second term on the right-hand side one has

$$(1-x)\left|\sum_{n=n_0}^{\infty} \bigl(s_n^{(k)} - m_k\bigr) x^n\right| \le \frac{\epsilon}{2}\,(1-x)\sum_{n=n_0}^{\infty} x^n \le \frac{\epsilon}{2}\,(1-x)\sum_{n=0}^{\infty} x^n = \frac{\epsilon}{2}.$$
By considering all these, one obtains that

$$\left|\sum_{n=0}^{\infty} u_n^{(k)} x^n - m_k\right| < \epsilon,$$

and since $\epsilon$ is an arbitrarily small positive number, we have proved that

$$\lim_{x\uparrow 1} \sum_{n=0}^{\infty} u_n^{(k)} x^n = \lim_{x\uparrow 1} \frac{d^k g(x)}{dx^k} = m_k.$$
A.5 Series Expansion Theorem

The following theorem can often be used for the deduction of asymptotic relationships and for performing various estimations.

Theorem 33. Let $x$ be a non-negative integer-valued random variable and let

$$g(z) = \sum_{j=0}^{\infty} P\{x = j\}\, z^j = \sum_{j=0}^{\infty} p_j z^j$$

be its generating function, satisfying the condition $g(1) = 1$. If the $k$th factorial moment of $x$, $m_k = g^{(k)}(1)$, is finite, then the series expansion below is valid:

$$g(z) = \sum_{j=0}^{k-1} g^{(j)}(1)\, \frac{(z-1)^j}{j!} + R_k(z)\, \frac{(z-1)^k}{k!}, \tag{A.29}$$

where $R_k(z)$ is a non-decreasing function of $z$ in the interval $[0, 1]$ if $z$ is real, hence

$$0 \le R_k(z) \le g^{(k)}(1). \tag{A.30}$$

However, if $z$ is complex, then for every $|z| \le 1$

$$|R_k(z)| \le g^{(k)}(1), \tag{A.31}$$

and furthermore

$$R_k(z) \to g^{(k)}(1) \qquad \text{if } z \to 1.$$

The inequality (A.30) can easily be obtained. Let us write down the Taylor series of $g(z)$ in the following form:

$$g(z) = \sum_{j=0}^{k-1} g^{(j)}(1)\, \frac{(z-1)^j}{j!} + g^{(k)}(z\theta_z + 1 - \theta_z)\, \frac{(z-1)^k}{k!},$$

where $0 \le \theta_z \le 1$. Obviously, the last term on the right-hand side is the same as the remainder term in (A.29), i.e.

$$R_k(z) = g^{(k)}(1 + z\theta_z - \theta_z).$$

Since $g(z)$ and all its derivatives $g^{(k)}(z)$, $k = 1, 2, \ldots$, are non-negative, $g(z)$ is a non-decreasing convex function in the interval $[0, 1]$; thus it is obviously true for every $|z| \le 1$ that

$$g^{(k)}(1 + z\theta_z - \theta_z) \le g^{(k)}(1),$$

hence $R_k(z) \le g^{(k)}(1)$, which is just what was stated in (A.30).

To prove the inequality (A.31), we will need the following lemma.

Lemma 2. The $R_j(z)$ given by (A.29) can be written, for every index $j = 1, 2, \ldots, k$, in the following form:

$$\frac{R_j(z)}{j!} = Q_j(z) = \sum_{n=0}^{\infty} p_n \sum_{l=1}^{n-j+1} C(n-l, j-1)\, z^{l-1}, \tag{A.32}$$

where

$$C(n, k) = \binom{n}{k}.$$
We will prove the lemma by induction. The relationship (A.32) can be seen to be true for $j = 1$, since

$$Q_1(z) = \sum_{n=0}^{\infty} p_n \sum_{l=1}^{n} C(n-l, 0)\, z^{l-1} = \sum_{n=0}^{\infty} p_n (1 + z + \cdots + z^{n-1}),$$

and this is identical with the formula following from (A.29):

$$R_1(z) = Q_1(z) = \frac{g(z) - g(1)}{z - 1} = \sum_{n=0}^{\infty} p_n\, \frac{z^n - 1}{z - 1} = \sum_{n=0}^{\infty} p_n (1 + z + \cdots + z^{n-1}).$$
By virtue of this, suppose now that the expression (A.32) is true for the indices $j = 2, 3, \ldots, k-1$, and let us show that it is then true for the index $j = k$, too. For this, we need the relationship between the functions $Q_{k-1}(z)$ and $Q_k(z)$, which is provided by the trivial equation

$$g(z) = \sum_{j=0}^{k-1} g^{(j)}(1)\, \frac{(z-1)^j}{j!} + R_k(z)\, \frac{(z-1)^k}{k!} = \sum_{j=0}^{k-2} g^{(j)}(1)\, \frac{(z-1)^j}{j!} + R_{k-1}(z)\, \frac{(z-1)^{k-1}}{(k-1)!}.$$

After an elementary rearrangement, one can write

$$\frac{g^{(k-1)}(1)}{(k-1)!} + Q_k(z)(z-1) = Q_{k-1}(z),$$

from which one obtains the necessary recursive formula

$$Q_k(z) = \frac{Q_{k-1}(z) - g^{(k-1)}(1)/(k-1)!}{z - 1}. \tag{A.33}$$
Substitute (A.32), which is valid for the index $j = k - 1$ as supposed, into $Q_{k-1}(z)$, and let us rewrite

$$\frac{g^{(k-1)}(1)}{(k-1)!} = \sum_{n=0}^{\infty} \frac{n(n-1)\cdots(n-k+2)}{(k-1)!}\, p_n = \sum_{n=0}^{\infty} C(n, k-1)\, p_n$$

by using the well-known relationship³

$$C(n, k-1) = \sum_{l=1}^{n-k+2} C(n-l, k-2)$$

in the following form:

$$\frac{g^{(k-1)}(1)}{(k-1)!} = \sum_{n=0}^{\infty} p_n \sum_{l=1}^{n-k+2} C(n-l, k-2).$$

After executing all these steps, one obtains

$$Q_k(z) = \sum_{n=0}^{\infty} p_n \sum_{l=1}^{n-k+2} C(n-l, k-2)\, \frac{z^{l-1} - 1}{z - 1} = \sum_{n=0}^{\infty} p_n \sum_{l=2}^{n-k+2} C(n-l, k-2)\, (1 + z + \cdots + z^{l-2}),$$

³ L. Pál, Fundamentals of Probability Theory and Statistics (in Hungarian), Vol. II, p. 888, expression (F.013), Akadémiai Kiadó, Budapest, 1995 [17].
from which, by a permissible rearrangement, one arrives at the formula

$$Q_k(z) = \sum_{n=0}^{\infty} p_n \sum_{l=1}^{n-k+1} z^{l-1} \sum_{j=1+l}^{n-k+2} C(n-j, k-2).$$

By taking into account that

$$C(n-l, k-1) = \sum_{r=1}^{n-l-k+2} C(n-l-r, k-2) = \sum_{j=1+l}^{n-k+2} C(n-j, k-2),$$
one can immediately realise that

$$Q_k(z) = \sum_{n=0}^{\infty} p_n \sum_{l=1}^{n-k+1} C(n-l, k-1)\, z^{l-1},$$

and by this it is proved that (A.32) is true for every index $k \ge 1$. It follows directly from the expression of $Q_k(z)$ that, for every $|z| \le 1$,

$$|Q_k(z)| \le \sum_{n=0}^{\infty} p_n \sum_{l=1}^{n-k+1} C(n-l, k-1) = \sum_{n=0}^{\infty} p_n\, C(n, k) = \frac{g^{(k)}(1)}{k!},$$

and since $R_k(z) = k!\,Q_k(z)$, by this the inequality (A.31) has been proven, too.
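The bounds (A.30) and the limit $R_k(z) \to g^{(k)}(1)$ can be checked numerically. The following sketch assumes, as an example, the binomial generating function $g(z) = ((1+z)/2)^4$, whose first two factorial moments are $m_1 = 2$ and $m_2 = 3$; it computes the $k = 2$ remainder of (A.29) directly:

```python
def g(z):                      # PGF of Binomial(4, 1/2), an assumed example
    return ((1 + z) / 2) ** 4

m1, m2 = 2.0, 3.0              # factorial moments g'(1) and g''(1) of this PGF

def R2(z):
    """Remainder R_2(z) in the expansion (A.29) for k = 2."""
    return 2 * (g(z) - 1 - m1 * (z - 1)) / (z - 1) ** 2

vals = [R2(z / 10) for z in range(10)]       # z = 0.0, 0.1, ..., 0.9
assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))  # non-decreasing on [0,1]
assert all(0 <= v <= m2 for v in vals)                      # 0 <= R_2(z) <= g''(1), cf. (A.30)
print(R2(0.9999))              # approaches g''(1) = 3 as z -> 1
```

For this particular PGF one has, exactly, $R_2(z) = 3 + (z-1) + (z-1)^2/8$, so the monotonicity and the bounds are visible by inspection as well.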
A.6 An Important Theorem

In many applications, the following theorem can be useful.

Theorem 34. Let

$$g(z) = \sum_{n=0}^{\infty} p_n z^n$$

be a probability generating function satisfying $g(1) = 1$, and let $\delta$ be a non-negative number less than unity. It will be shown that the series

$$\sum_{n=0}^{\infty} \bigl[1 - g(1 - \delta^n)\bigr] \tag{A.34}$$

is convergent if and only if the series

$$\sum_{n=1}^{\infty} p_n \log n \tag{A.35}$$

is convergent. In other words, depending on whether the series (A.35) is convergent or not, the series (A.34) is either convergent for every $\delta$ in the interval $(0, 1)$ or for none of them.

The theorem can be proved in the following way. Since

$$1 - g(1 - \delta^{n+1}) < 1 - g(1 - \delta^n),$$
the function $U(x) = 1 - g(1 - \delta^x)$ is monotonically decreasing. From this it follows that the inequality

$$\sum_{n=j}^{k-1} U(n) \ge \int_j^k U(x)\,dx \ge \sum_{n=j+1}^{k} U(n)$$

is true for every pair of non-negative integers $j < k$, i.e. it is true that

$$0 \le \sum_{n=j}^{k-1} U(n) - \int_j^k U(x)\,dx \le \sum_{n=j}^{k} U(n) - \int_j^k U(x)\,dx \le \sum_{n=j}^{k} U(n) - \sum_{n=j+1}^{k} U(n) = U(j).$$
Let us now choose $j = 1$ and $k = \infty$, and introduce the notation $\delta = e^{-\alpha}$, $\alpha > 0$. From the previous inequality, one obtains

$$0 \le \sum_{n=1}^{\infty} \bigl[1 - g(1 - \delta^n)\bigr] - \int_1^{\infty} \bigl[1 - g(1 - e^{-\alpha x})\bigr]\,dx \le 1 - g(1 - \delta),$$

and based on this one can state that the series (A.34) converges or diverges depending on whether the integral

$$\int_1^{\infty} \bigl[1 - g(1 - e^{-\alpha x})\bigr]\,dx = \frac{1}{\alpha} \int_{1-e^{-\alpha}}^{1} \frac{1 - g(y)}{1 - y}\,dy \tag{A.36}$$
is finite or infinite. We can immediately realise that

$$\frac{1 - g(y)}{1 - y} = \sum_{n=0}^{\infty} \left(1 - \sum_{k=0}^{n} p_k\right) y^n,$$

hence

$$\int_{1-e^{-\alpha}}^{1} \frac{1 - g(y)}{1 - y}\,dy = \sum_{n=0}^{\infty} \left(1 - \sum_{k=0}^{n} p_k\right) \frac{1}{n+1}\, \bigl[1 - (1 - e^{-\alpha})^{n+1}\bigr].$$

Since $1 - \sum_{k=0}^{n} p_k = \sum_{k=n+1}^{\infty} p_k$, one can write

$$\int_{1-e^{-\alpha}}^{1} \frac{1 - g(y)}{1 - y}\,dy = \sum_{n=0}^{\infty} \frac{1}{n+1} \left(\sum_{k=n+1}^{\infty} p_k\right) \bigl[1 - (1 - e^{-\alpha})^{n+1}\bigr] \le \sum_{n=0}^{\infty} \frac{1}{n+1} \sum_{k=n+1}^{\infty} p_k.$$
It is seen from this that the integral (A.36) is finite or infinite according to whether the series

$$\sum_{n=0}^{\infty} \frac{1}{n+1} \sum_{k=n+1}^{\infty} p_k \tag{A.37}$$

is convergent or divergent. Let us write it down in detail:

$$\sum_{n=0}^{\infty} \frac{1}{n+1} \sum_{k=n+1}^{\infty} p_k = \frac{1}{0+1}(p_1 + p_2 + \cdots) + \frac{1}{1+1}(p_2 + p_3 + \cdots) + \cdots + \frac{1}{n+1}(p_{n+1} + p_{n+2} + \cdots) + \cdots,$$

and let us execute the rearrangement

$$\sum_{n=0}^{\infty} \frac{1}{n+1} \sum_{k=n+1}^{\infty} p_k = \frac{1}{0+1}\, p_1 + \left(1 + \frac{1}{2}\right) p_2 + \cdots + \left(1 + \frac{1}{2} + \cdots + \frac{1}{n}\right) p_n + \cdots = \sum_{n=1}^{\infty} p_n \sum_{k=0}^{n-1} \frac{1}{k+1} = \sum_{n=1}^{\infty} p_n \sum_{k=1}^{n} \frac{1}{k}.$$

It is known that

$$\sum_{k=1}^{n} \frac{1}{k} = \log n + O(1),$$

and so one can claim that the series (A.34) is convergent or divergent according to whether the series

$$\sum_{n=1}^{\infty} p_n \log n$$

is convergent or divergent, which is just what we wanted to prove.
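Theorem 34 can be illustrated numerically. The sketch below assumes, as an example, a geometric distribution $p_n = (1-a)a^n$, for which $\sum p_n \log n$ is evidently finite, and shows that the partial sums of (A.34) settle to a finite value:

```python
a, delta = 0.5, 0.5                  # geometric p_n = (1-a) a**n; sum p_n log n < infinity

def g(z):                            # its probability generating function
    return (1 - a) / (1 - a * z)

terms = [1 - g(1 - delta**n) for n in range(200)]
partial, s = [], 0.0
for t in terms:                      # partial sums of the series (A.34)
    s += t
    partial.append(s)

print(partial[-1])                   # settles to a finite value, as the theorem asserts
assert abs(partial[-1] - partial[50]) < 1e-10
```

For this example the terms decay geometrically, $1 - g(1-\delta^n) = \delta^n/(1+\delta^n)$, so the convergence is very fast; a distribution with $p_n \sim 1/(n^2)$, by contrast, has $\sum p_n \log n = \infty$ and the corresponding series (A.34) diverges.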
Appendix B

Supplement to the Survival Probability

Contents
B.1 Asymptotic Form of Survival Probability in Discrete Time Process
B.1 Asymptotic Form of Survival Probability in Discrete Time Process

It is not an easy task to prove the asymptotic form of the survival probability (2.28) in a discrete time branching process taking place in a subcritical system. The statement that fulfilment of the inequality $E\{n(1)\log n(1)\} < \infty$ is the necessary and sufficient condition for the asymptotic relation

$$R(t) = R_0\, [1 - (1 - q_1)W]^t\, [1 + o(1)], \qquad 0 < R_0 < \infty$$

to hold when $t \to \infty$ will be proved by using the method of Sevast'yanov [7]. Before starting the proof, we rearrange (1.73). After substituting $g(t, 0) = 1 - R(t)$ and re-denoting $j = t + 1$, one arrives at

$$1 - R(t + 1) = (1 - W)\,[1 - R(t)] + W q[1 - R(t)],$$

which, by introducing the function¹

$$h(z) = z + W[q(z) - z], \tag{B.1}$$

can be written in the form

$$R(t + 1) = 1 - h[g(t, 0)] = 1 - h[1 - R(t)]. \tag{B.2}$$

Then, in the first step of the deduction of the asymptotic formula (2.28), it will be shown that $R(t) = R_0[1 - W(1 - q_1)]^t[1 + o(1)]$ if the constant $R_0 < \infty$ exists. Thereafter, in the second step, we will prove that the necessary and sufficient condition for the existence of $R_0 < \infty$ is the inequality $E\{n(1)\log n(1)\} < \infty$.

¹ We note that by introducing the probability generating function $h(z)$, equation (2.5) can also be written in the more concise form $g(j, z) = h[g(j-1, z)]$. Further, since $g(1, z) = h(z)$, one can see that $g(j, z) = h_j(z) = h_{j-1}[h(z)] = h[h_{j-1}(z)]$, where $h_j(z)$ is the $j$th iterate of $h(z)$.
B.1.1 The first step of the proof

Proof. The first step is relatively simple. Define the function

$$I(t) = \frac{R(t)}{[h'(1)]^t}, \tag{B.3}$$

and by using the formula (B.2), write down the recursive relationship

$$I(t+1) = \frac{1 - h[g(t, 0)]}{[h'(1)]^{t+1}} = \frac{1 - h[g(t, 0)]}{h'(1)\,[1 - g(t, 0)]} \cdot \frac{1 - g(t, 0)}{[h'(1)]^t} = H[g(t, 0)]\, I(t), \tag{B.4}$$

in which

$$H(z) = \frac{1 - h(z)}{h'(1)(1 - z)}. \tag{B.5}$$

By applying L'Hospital's rule, it can directly be demonstrated that $H(1) = 1$, i.e. $H(z)$ is a probability generating function; consequently,

$$H(z) = \sum_{n=0}^{\infty} H_n z^n, \qquad |z| \le 1, \tag{B.6}$$

where $0 \le H_n \le 1$. From the recursive formula (B.4), it follows that

$$I(t+1) = \prod_{n=0}^{t} H[g(n, 0)]. \tag{B.7}$$

If the limit

$$\lim_{t\to\infty} I(t) = R_0 \tag{B.8}$$

exists, then $I(t) = R_0[1 + o(1)]$ for sufficiently large $t$, and so from the expression (B.3) the formula

$$R(t) = R_0\, [1 - W(1 - q_1)]^t\, [1 + o(1)]$$

immediately follows.
B.1.2 The second step of the proof

Proof. In the second step, one has to show that the necessary and sufficient condition for the existence of $R_0 < \infty$ is that

$$\prod_{n=0}^{\infty} H[g(n, 0)], \qquad \text{i.e.} \qquad \sum_{n=0}^{\infty} \log H[g(n, 0)]$$

be convergent. Since

$$\log(1 - x) \le -x, \qquad \text{if } 0 \le x < 1,$$

we have

$$\sum_{n=0}^{\infty} \log H[g(n, 0)] \le -\sum_{n=0}^{\infty} \bigl\{1 - H[g(n, 0)]\bigr\},$$

and by this one can state that the limit (B.8) exists if the infinite series

$$\sum_{n=0}^{\infty} \bigl\{1 - H[g(n, 0)]\bigr\}$$

is convergent. We will prove that this series is convergent if the inequality $E\{n(1)\log n(1)\} < \infty$ is fulfilled. This proof also proceeds in two steps. First it will be shown that an upper bound can be established for the survival probability $R(n) = 1 - g(n, 0)$ for every non-negative integer $n$; then, by utilising the monotonically increasing character of $H(z)$, we will confirm that the infinite sum $\sum_{n=0}^{\infty} \{1 - H[g(n, 0)]\}$ is convergent if $E\{n(1)\log n(1)\} < \infty$.
Upper bound

Start from the trivial relationship derived from the formula (B.2),

$$1 - g(2, z) = 1 - h[g(1, z)] = 1 - h[h(z)].$$

According to the mean value theorem, for every $z$ in the interval $[0, 1]$ there exists a point $\theta(z)$ in the interval $[h(z), 1]$ for which the equality

$$1 - h[h(z)] = h'[\theta(z)]\,[1 - h(z)] \tag{B.9}$$

holds. From the inequalities $h(0) \le h(z) < \theta(z) < 1$, it also follows that

$$0 < h'[h(0)] \le h'[\theta(z)] \le h'(1),$$

because $h'(z)$ is a monotonic non-decreasing function in the interval $[0, 1]$. Based on this, one can write the inequality

$$h'[h(0)]\,[1 - h(z)] \le h'[\theta(z)]\,[1 - h(z)] \le h'(1)\,[1 - h(z)],$$

which, by virtue of (B.9), can be given in the following form:

$$h'[h(0)]\,[1 - h(z)] \le 1 - h[h(z)] \le h'(1)\,[1 - h(z)].$$

Introduce the notation $\lambda = h'[h(0)]$ and substitute $h(z)$ for $z$. We obtain

$$\lambda\, \bigl\{1 - h[h(z)]\bigr\} \le 1 - h\bigl[h[h(z)]\bigr] \le h'(1)\, \bigl\{1 - h[h(z)]\bigr\},$$

which, according to (B.9), can be given in the form

$$\lambda\, h'[\theta(z)]\,[1 - h(z)] \le 1 - g(3, z) \le h'(1)\, h'[\theta(z)]\,[1 - h(z)].$$

Then, since

$$\lambda \le h'[\theta(z)] \qquad \text{and} \qquad h'[\theta(z)] \le h'(1),$$

one arrives at

$$\lambda^2\,[1 - h(z)] \le 1 - g(3, z) \le [h'(1)]^2\,[1 - h(z)].$$

By repeating this procedure, one obtains the inequality

$$\lambda^{n-1}\,[1 - h(z)] \le 1 - g(n, z) \le [h'(1)]^{n-1}\,[1 - h(z)],$$

from which for $z = 0$ it follows that

$$\lambda^{n-1} R(1) \le R(n) \le [h'(1)]^{n-1} R(1). \tag{B.10}$$

Since $h'(1) > R(1)$,² one can see that

$$R(n) \le [h'(1)]^n. \tag{B.11}$$
Convergence of the sum

By introducing the notation $\delta = h'(1) < 1$ and utilising the monotonically non-decreasing character of $H(z)$, one can write

$$1 - H[g(n, 0)] = 1 - H[1 - R(n)] \le 1 - H(1 - \delta^n).$$

According to the theorem proved in Section A.6, the infinite series $\sum_{n=0}^{\infty} [1 - H(1 - \delta^n)]$ is convergent if the infinite series $\sum_{n=1}^{\infty} H_n \log n$ is convergent. Here, $H_n$ is the coefficient of the term $z^n$ in the power series (B.6). From (B.5), one can directly calculate

$$H_n = \frac{1}{h'(1)} \sum_{k=n+1}^{\infty} h_k,$$

where

$$h_0 = W f_0, \qquad h_1 = 1 - W(1 - f_1), \qquad \text{and} \qquad h_k = W f_k \quad \text{if } k > 1.$$

For the sake of a simpler overview, introduce the notation

$$v_{n+1} = \sum_{k=n+1}^{\infty} h_k.$$

Based on the foregoing, one can state that the limit (B.8) exists if the infinite series

$$\sum_{n=1}^{\infty} v_{n+1} \log n$$

is convergent. Since $v_{n+1} \le v_n$, the inequality

$$\sum_{n=1}^{\infty} v_{n+1} \log n \le \sum_{n=1}^{\infty} v_n \log n$$

is evidently true. Hence, if the series $\sum_{n=1}^{\infty} v_n \log n$ is convergent, the series $\sum_{n=1}^{\infty} v_{n+1} \log n$ is convergent, too. By a simple rearrangement, one obtains

$$\sum_{n=1}^{\infty} v_n \log n = \sum_{n=1}^{\infty} h_n \sum_{k=1}^{n} \log k = \sum_{n=1}^{\infty} h_n \log n!.$$
² This inequality can directly be confirmed. Considering that $h'(1) = 1 - W(1 - q_1)$ and $R(1) = 1 - Wq(0)$, one has $h'(1) - R(1) = W(q_1 - 1 + q(0)) = W[f_0 + A - 1] > 0$, since $f_0 + A - 1 = \sum_{k\ge 2} (k-1) f_k$, which is positive in a multiplying medium.
By utilising the Stirling relationship $\log n! = n \log n + O(n)$, one can state that the condition for the convergence of the series

$$\sum_{n=1}^{\infty} v_n \log n,$$

i.e. the condition for the existence of the limit (B.8), is that the inequality

$$\sum_{n=1}^{\infty} h_n\, n \log n = E\{n(1) \log n(1)\} < \infty \tag{B.12}$$

be fulfilled. By this, we have proved our theorem.
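The rearrangement $\sum_{n\ge 1} v_n \log n = \sum_n h_n \log n!$ used in the last step can be verified directly on any finite distribution; the sketch below assumes example probabilities $h_0, \ldots, h_3$:

```python
import math

h = [0.2, 0.5, 0.2, 0.1]                 # assumed one-step probabilities h_0, ..., h_3
v = [sum(h[n:]) for n in range(len(h))]  # tail sums v_n = sum_{k >= n} h_k

lhs = sum(v[n] * math.log(n) for n in range(1, len(h)))
rhs = sum(h[n] * math.log(math.factorial(n)) for n in range(len(h)))
assert abs(lhs - rhs) < 1e-12            # the two rearranged sums agree
print(lhs, rhs)
```

Exchanging the order of summation turns the tail-weighted sum over $\log n$ into a probability-weighted sum over $\log n!$, which Stirling's formula then relates to $E\{n(1)\log n(1)\}$.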
BIBLIOGRAPHY
1. J.L. Doob, Stochastic Processes, John Wiley & Sons, New York; Chapman & Hall, London, 1953.
2. M.S. Bartlett, An Introduction to Stochastic Processes, Cambridge University Press, London, 1955.
3. A.T. Bharucha-Reid, Elements of the Theory of Markov Processes and Their Applications, McGraw-Hill Book Company, New York, 1960.
4. I.I. Gichman and A.V. Skorochod, Introduction to Random Processes, Nauka, Moscow, 1965.
5. S. Karlin and H.M. Taylor, A First Course in Stochastic Processes, Academic Press, New York, 1975.
6. T.E. Harris, The Theory of Branching Processes, Springer-Verlag, Berlin, 1963.
7. B.A. Sevast'yanov, Branching Processes, Nauka, Moscow, 1971.
8. S.K. Srinivasan, Stochastic Theory and Cascade Processes, American Elsevier Publishing Company, New York, 1969.
9. W.M. Stacey, Space-Time Nuclear Reactor Kinetics, Academic Press, New York, 1969.
10. R.E. Uhrig, Random Noise in Nuclear Reactor Systems, Ronald Press, New York, 1970.
11. M.M.R. Williams, Random Processes in Nuclear Reactors, Pergamon Press, Oxford, 1974.
12. P. Jagers, Branching Processes with Biological Applications, Wiley Series in Probability and Mathematical Statistics, London, 1975.
13. D.R. Harris, Naval Reactors Physics Handbook, Vol. I, pp. 1010–1142, United States Atomic Energy Commission, 1964.
14. N.G. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland, Amsterdam, 1992.
15. K. Saito, Prog. Nucl. Ener., 3 (1978) 157.
16. A.N. Kolmogorov and A.N. Dmitriev, Doklady, 56 (1947) 5.
17. L. Pál, Fundamentals of Probability Theory and Statistics, Vols. I and II, Akadémiai Kiadó, Budapest, 1995.
18. B. Szökefalvi-Nagy, Introduction to Real Functions and Orthogonal Expansions, pp. 420–422, Akadémiai Kiadó, Budapest, 1964.
19. B. Bollobás, Random Graphs, Second Edition, Cambridge Univ. Press, 2001.
20. L. Pál, Randomly Evolving Trees I, arXiv:cond-mat/0205650, 30 May 2002.
21. L. Pál, Randomly Evolving Trees II, arXiv:cond-mat/0211092, 5 Nov 2002.
22. L. Pál, Randomly Evolving Trees III, arXiv:cond-mat/0306540, 21 Jun 2003.
23. L. Pál, Phys. Rev. E, 72 (2005) 051101.
24. B.A. Sevast'yanov, The theory of branching random processes, Uspehi Math. Nauk, 6 (1951) 47.
25. M.M.R. Williams, Ann. Nucl. Ener., 31 (2004) 933.
26. E. Schroedinger, Proc. Roy. Irish Acad., Vol. LI, Section A, No. 1 (1945).
27. Y. Kitamura, H. Yamauchi and Y. Yamane, Ann. Nucl. Ener., 30 (2003) 897.
28. D. Ballester and J.L. Muñoz-Cobo, Ann. Nucl. Ener., 32 (2005) 493.
29. K.B. Athreya and S. Karlin, Ann. Math. Stat., 42 (1977) 1499.
30. K.B. Athreya and S. Karlin, Ann. Math. Stat., 42 (1977) 1843.
31. D. Tanny, Ann. Probab., 1 (1977) 100.
32. N.A. Berestova, Soviet. Math. Dokl., 26 (1982) 514.
33. M. San Miguel and M.A. Rodriguez, Proc. NATO Adv. Res. Workshop on Noise and Nonlinear Phenomena in Nucl. Systems, Plenum, NY, 1988.
34. D.C. Sahni, Ann. Nucl. Ener., 16 (1989) 397.
35. L. Pál, Branching processes in a medium randomly varying in time, CTH-RF-184, Chalmers University of Technology, Sweden, 2004.
36. I. Pázsit, Z.F. Kuang and A.K. Prinja, Ann. Nucl. Ener., 29 (2002) 169.
37. Y. Kitamura, I. Pázsit, A. Yamamoto and Y. Yamane, Ann. Nucl. Ener., 34 (2007) 385.
38. L. Pál and I. Pázsit, Proc. of SPIE, 5845 (2005) 115.
39. M.M.R. Williams, J. Nucl. Ener., 25 (1971) 563.
40. L. Pál and I. Pázsit, Nucl. Sci. Eng., 155 (2007) 425.
41. R. Bellman, R. Kalaba and G.M. Wing, J. Math. Mech., 7 (1958) 149.
42. G. Doetsch, Handbuch der Laplace-Transformation, Band I, S. 458, Verlag Birkhäuser, Basel, 1950.
43. L. Pál, Nuovo Cimento, Supplemento, 7 (1958) 25.
44. L. Pál, Acta Phys. Hung., 14 (1962) 345.
45. L. Pál, Acta Phys. Hung., 14 (1962) 357.
46. L. Pál, Acta Phys. Hung., 14 (1962) 369.
47. G.I. Bell, Nucl. Sci. Eng., 21 (1965) 390.
48. W. Matthes, Nucleonics, 8 (1966) 87.
49. I. Pázsit, Physica Scripta, 59 (1999) 344.
50. E.D. Courant and P.R. Wallace, Phys. Rev., 72 (1947) 1038.
51. F. de Hoffman, The Science and Engineering of Nuclear Power, Vol. III, Addison Wesley Press, Cambridge, MA, 1949.
52. R.P. Feynman, F. de Hoffman and R. Serber, J. Nucl. Ener., 3 (1956) 64.
53. N.E. Holden and M.S. Zucker, Nucl. Sci. Eng., 98 (1988) 174.
54. J.W. Boldeman and A.W. Dalton, Prompt Nubar Measurements for Thermal Neutron Fission, AAEC/E-172, Australian Atomic Energy Commission, 1967.
55. R. Gwin, R.R. Spencer and R.W. Ingle, Nucl. Sci. Eng., 87 (1984) 381.
56. J.D. Orndoff, Nucl. Sci. Eng., 2 (1957) 450.
57. M. Srinivasan and D.C. Sahni, Nukleonik, 9 (1967) 155.
58. E.J.M. Wallerbos and J.E. Hoogenboom, Ann. Nucl. Ener., 25 (1998) 733.
59. S.B. Degweker, Ann. Nucl. Ener., 16 (1989) 409.
60. S.B. Degweker, Ann. Nucl. Ener., 27 (2000) 1245.
61. F.C. Difilippo, Nucl. Sci. Eng., 142 (2002) 174.
62. W. Hage and D.M. Cifarelli, Nucl. Sci. Eng., 112 (1992) 136.
63. Ming-Shih Lu and T. Teichman, Nucl. Sci. Eng., 147 (2004) 56.
64. J.L. Muñoz-Cobo, Y. Rugama, T.E. Valentine, J.T. Mihalczo and R.B. Perez, Ann. Nucl. Ener., 28 (2001) 1519.
65. S.B. Degweker, Ann. Nucl. Ener., 30 (2003) 223.
66. D. Ballester, J.L. Muñoz-Cobo and J.L. Kloosterman, Ann. Nucl. Ener., 32 (2005) 1519.
67. P.R. Pluta, Reactor kinetics and control, Proceedings at the University of Arizona, AZ (1964) 136.
68. Z.F. Kuang and I. Pázsit, Proc. Roy. Soc. A, 458 (2002) 232.
69. L. Pál, React. Sci. Technol., 17 (1963) 395.
70. D. Babala, Neutron Counting Statistics in Nuclear Reactors, Kjeller Report, KR-114, 1966.
71. A.I. Mogilner and V.G. Zolotukhin, Atomnaya Ener., 10 (1961) 377.
72. A. Szeless and L. Ruby, Nucl. Sci. Eng., 45 (1971) 7.
73. A. Szeless, Atomkernenergie, 18 (1971) 209.
74. C. Rubbia et al., Report CERN/AT/95-44 (ET), 1995.
75. S. Andriamonje et al., Phys. Lett. B, 348 (1995) 697.
76. Report IAEA-TECDOC 985, 1997.
77. J.L. Muñoz-Cobo and G. Verdú, Proc. NATO Adv. Res. Workshop on Noise and Nonlinear Phenomena in Nucl. Systems, Plenum, NY, 1988.
78. J.L. Muñoz-Cobo, R.B. Perez and G. Verdú, Nucl. Sci. Eng., 95 (1988) 83.
79. I. Pázsit and Y. Yamane, Nucl. Sci. Eng., 133 (1999) 269.
80. I. Pázsit and Y. Yamane, Ann. Nucl. Ener., 25 (1998) 667.
81. Y. Kitamura, T. Misawa, A. Yamamoto, Y. Yamane, C. Ichihara and H. Nakamura, Prog. Nucl. Ener., 48 (2006).
82. Y. Kitamura, K. Taguchi, A. Yamamoto, Y. Yamane, T. Misawa, C. Ichihara, H. Nakamura and H. Oigawa, Int. J. Nucl. Ener. Sci. Technol., 2 (2006) 266.
83. R. Soule et al., Nucl. Sci. Eng., 148 (2004) 124.
84. J. Vollaire, L'expérience MUSE-4: mesure des paramètres cinétiques d'un système sous-critique, PhD thesis, Institut National Polytechnique de Grenoble, 2004.
85. Y. Rugama, J.L. Kloosterman and A. Winkelman, Prog. Nucl. Ener., 44 (2004) 1.
86. Y. Kitamura, K. Taguchi, T. Misawa, I. Pázsit, A. Yamamoto, Y. Yamane, C. Ichihara, H. Nakamura and H. Oigawa, Prog. Nucl. Ener., 48 (2006) 37.
87. I. Pázsit, Y. Kitamura, J. Wright and T. Misawa, Ann. Nucl. Ener., 32 (2005) 896.
88. Y. Kitamura, I. Pázsit, J. Wright, A. Yamamoto and Y. Yamane, Ann. Nucl. Ener., 32 (2005) 671.
89. N. Ensslin, W.C. Harker, M.S. Krick, D.G. Langner, M.M. Pickrell and J.E. Stewart, Application Guide to Neutron Multiplicity Counting, Los Alamos Report LA-13422-M, 1998.
90. S. Croft, L.C.-A. Bourva, D.R. Weaver and H. Ottmar, J. Nucl. Mat. Manage., XXX 10 (2001).
91. S.A. Pozzi, J.A. Mullens and J.T. Mihalczo, Nucl. Instr. Meth. A, 524 (2004) 92.
92. R. Dierckx and W. Hage, Nucl. Sci. Eng., 85 (1982) 325.
93. W. Hage and D.M. Cifarelli, Nucl. Instr. Meth. A, 236 (1985) 165.
94. K. Böhnel, Nucl. Sci. Eng., 90 (1985) 75.
95. W. Matthes, Proc. NATO Adv. Res. Workshop on Noise and Nonlinear Phenomena in Nucl. Systems, Plenum, NY, 1988.
96. Ming-Shih Lu and T. Teichman, Nucl. Instr. Meth. A, 313 (1992) 471.
97. I. Pázsit and S.A. Pozzi, Nucl. Instr. Meth. A, 555 (2005) 340.
98. L.C.-A. Bourva, S. Croft and D.R. Weaver, Nucl. Instr. Meth. A, 479 (2001) 640.
99. L.C.-A. Bourva, S. Croft and Ming-Shih Lu, Extension to the point model for neutron coincidence counting, Proceedings of the ESARDA 25th Annual Meeting, Symposium on Safeguards and Nuclear Material Management, Stockholm, Sweden, 2003.
100. Wolfram Research Inc., Mathematica, Version 5.2, Champaign, IL, 2005.
101. A. Enqvist, I. Pázsit and S.A. Pozzi, Nucl. Instr. Meth. A, 566 (2006) 598.
102. S.A. Pozzi, E. Padovani and M. Marseguerra, Nucl. Instr. Meth. A, 513 (2003) 550.
103. L.C.-A. Bourva and S. Croft, Nucl. Instr. Meth. A, 431 (1999) 485.
104. S. Croft and L.C.-A. Bourva, Nucl. Instr. Meth. A, 453 (2000) 553.
105. P. Baeten, Quantification of transuranic elements by neutron multiplicity counting: a new approach by time interval analysis, PhD thesis, Vrije Universiteit Brussel, 1999.
106. S. Croft and L.C.-A. Bourva, Calculation of the correction factors to be applied to plutonium reference standards when used to calibrate passive neutron counters, Proc. ESARDA 23rd Annual Meeting, Symposium on Safeguards and Nuclear Material Management, Bruges, Belgium, 2003.
INDEX
absorption, 3, 6, 82, 207, 303 accelerator driven systems, 259, 264 active particle, 127 adjoint equation, 214 ADS, 259, 264 asymptotic properties, 138 asymptotic stationarity, 58, 69, 103 autocorrelation, 37, 62, 91 autocovariance, 37, 61, 253 average neutron density, 214 backward equation, 5, 39, 152, 160, 240, 298 basic generating function, 4, 15, 77, 141 basic theorem, 9 Boltzmann equation, 201, 212 branching process, 8, 39, 206, 298 cascades, 205, 295 Cf-252 source, 260, 263 chain reaction, 3, 127, 206 closure assumption, 153 compound Poisson statistics, 61, 259, 281 condition of explosiveness, 14 condition of regularity, 14 continuous Markovian process, 218 convergence in mean square, 123 covariance, 132, 135, 175, 215, 309 covariance matrix, 185 Cox process, 375 critical in the mean, 218 critical state, 17, 72, 156 cross-correlation, 91 degenerate distribution, 41 delayed neutron precursor, 208 delayed neutrons, 141, 208, 234, 264 detected neutrons, 303 detected particles, 103 detection, 102, 232, 250, 303 detection efficiency, 103, 295, 303 die-away time, 311 diffusion approximation, 217 diffusion matrix, 218 diffusion process, 218 Diven factor, 61, 104, 233, 260 doubles, 300, 306, 311 drift vector, 218 emitted neutrons, 303 explosive branching process, 13, 14, 31
exponential generating function, 5, 78 extinction, 41, 42, 119, 139, 140 extinction probability, 30, 41, 119 factorial moments, 15, 154, 214, 297, 300, 305 Feynman-alpha, 104, 234, 240, 250, 260, 287 fission, 206, 233 forward equation, 6, 153, 161, 169, 234 frequency of the state change, 157, 159, 163 fundamental exponent, 16 Galton-Watson process, 20 gamma distribution, 72 gamma photon distributions, 305, 306 gamma quanta, 296 generating function, 4, 10, 191, 315 higher joint moments, 310 homogeneous Markov process, 7 homogeneous Poisson process, 59 homogeneous process, 4 immigration, 55 inactive particle, 127 induced fission, 294 inhour equation, 249 injected particle, 55 injection intensity, 56, 165 injection process, 55, 66 intensity of reaction, 3, 6 intensity parameter, 56 internal absorption, 294, 303 irregular branching process, 13 joint distribution, 11, 91, 116, 298 joint moments, 309 joint statistics, 295 leakage multiplication, 300 limit distribution theorems, 46 limit probability, 69 Markovian branching process, 208 Markovian process, 217 master equation, 5, 151, 211 MCNP-PoliMi, 302, 308 339
340 memory effect, 162, 163 modelling of the process, 32 modified factorial moments, 301, 307 modified leakage multiplication, 302 modified second moment, 236, 242 modified variance, 238, 261, 269, 285 multiplication, 3, 113 multiplicity, 233, 294 multiplicity counting, 295 multiplying medium, 3, 20, 55, 150, 206 negative binomial distribution, 257 neutron to gamma multiplication, 306 non-homogeneous Poisson process, 59, 62, 264 nuclear safeguards, 294 one-point distribution, 151, 253, 277 one-point-model, 4 Pál-Bell equation, 206, 212 parameter space, 8 periodically stationary process, 58, 63, 183, 265 photon number distributions, 306 Poisson statistics, 61, 259 population, 121 probability generating function, 4, 11, 23, 315 prompt critical, 145 prompt neutron, 141, 208 pulsed Poisson source, 62, 264 pulsed source, 56, 259, 264, 283 pulsing methods, 66, 265, 283 quadratic generating function, 26, 53, 71, 97, 123 quadratic process, 28, 71, 98, 123 random injection, 165 randomly varying medium, 149 reactivity measurement, 231, 259 regular branching process, 13 regular process, 36 renewal process, 3, 107 Rossi-alpha, 105, 253, 263, 277, 290
safeguards, 294 scattering reaction, 208 second factorial moment, 15, 57, 109, 115, 160, 214, 300 semi-invariants, 17, 78 simple moment, 15, 213 singles, 300, 306 size of population, 121 source event, 295 source particles, 165 spallation, 259 spallation source, 259 spatial correlation, 221, 227 spontaneous emission, 294 spontaneous fission, 294, 298 state of medium, 4, 156, 164 stationary random process, 69 statistics of detected gamma photons, 307 Stirling-numbers, 15 stochastic injection, 66, 265, 287 strongly subcritical state, 157, 176 strongly supercritical state, 157 subcritical in the mean, 157 subcritical state, 17, 69, 156 supercritical in the mean, 157 supercritical state, 17, 74, 156 survival probability, 30, 44, 330 survival time, 119 surviving process, 42 transfer function, 249 transition probability, 8, 150 transmutation, 259 transport operator, 212 triples, 300, 306 two-point distribution, 253, 277 variance, 88, 108, 114, 135, 161, 169, 214 variance to mean, 234, 238, 247, 269 Yule-Furry process, 26 zero probability, 257