Lecture Notes in Mathematics Editors: A. Dold, Heidelberg B. Eckmann, Ztirich F. Takens, Groningen
1592
Karl Wilhelm ...
38 downloads
783 Views
5MB Size
Report
This content was uploaded by our users and we assume good faith they have the permission to share this book. If you own the copyright to this book and it is wrongfully on our website, we offer a simple DMCA procedure to remove your content from our site. Start by pressing the button below!
Report copyright / DMCA form
Lecture Notes in Mathematics Editors: A. Dold, Heidelberg B. Eckmann, Ztirich F. Takens, Groningen
1592
Karl Wilhelm Breitung
Asymptotic Approximations for Probability Integrals
Springer-Verlag Berlin Heidelberg NewYork London Paris Tokyo Hong Kong Barcelona Budapest
Author Karl Wilhelm Breitung Department of Civil Engineering University of Calgary 2500 University Drive, N.W. Calgary, Alberta T2N 1N4, Canada
Mathematics Subject Classification (1991): 41A60, 41 A63, 60F 10, 60G 15, 60G70, 62N05, 62P99, 90B25
ISBN 3-540-58617-2 Springer-Verlag Berlin Heidelberg New York
CIP-Data applied for This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. 9 Springer-Verlag Berlin Heidelberg 1994 Printed in Germany Typesetting: Camera ready by author SPIN: 10130174 46/3140-543210 - Printed on acid-free paper
Preface In this lecture note asymptotic approximation methods for multivariate integrals, especially probability integrals, are developed. It is a revised version of my habilitationsschrift "Asymptotische Approximationen fiir Wahrscheinlichkeitsintegrale". The main part of this research was done when I worked at the Technical University and the University of Munich. The motivation to study these problems comes from my work in the research project "Zuverl~,ssigkeitstheorie der Bauwerke" (reliability theory of structures) at the Technical University of Munich in the department of civil engineering. For the tolerant support of the mathematical research in this project I would like to thank Prof. Dr.-Ing. H. Kupfer. I am grateful to Prof. Dr.-Ing. R. Rackwitz and Prof. Dr.-Ing. G.I. Schu$11er that they made me clear the engineering topics of reliability theory and helped me in my research. Further I would like to thank my former colleagues at the Technical University of Munich and at the University of Munich which supported me during my work at these universities: Dr.-Ing. B. FieBler, Dr. M. Hohenbichler, Dr. A. RSsch, Dr. H. Schmidbauer and Dr. C. Schneider. Additionally I would like to express my gratitude to Prof. Dr. F. Ferschl for pointing out occasional errors and misprints in the original German version. The major part of this revision was made when I stayed as visiting fellow at the University of New South Wales in 1991. I would like to thank especially Prof. A. M. Hasofer for his help and for his kind invitation to the University of New South Wales and to express my delight at having worked there. For their help and discussions I thank wholeheartedly Prof. Dr. F. Casciati, Prof. Dr. L. Faravelli (both University of Pavia), Prof. Dr. P. Filip (FH Bochum), Prof. Dr. K. Marti (University of the Federal Armed Forces at Neubiberg) and Prof. Dr. W.-D. Richter (University of Rostock). Prof. Dr. M. Maes (University of Calgary) knows what I mean. For eliminating the worst bugs in my English I thank Poul Smyth, the Irish poet of the Bermuda triangle, and for making a cover design Ringo Praetorius, the executioner of Schichtl at the Oktoberfest. Unfortunately the publisher and the series editors decided not to use this cover design. Finally a short comment about the mathematical level of this note should be made. It is intended also for mathematically interested reliability engineers. Probably, therefore, the mathematicians will complain about the low level and the inclusion of too much elementary material and the engineers will go the other way. Calgary, September 1994 Karl Wilhelm Breitung
Contents Notation 1
ix
Introduction 1.1 T h e E v a l u a t i o n of M u l t i v a r i a t e Integrals . . . . . . . . . . . . . . 1.2 S t r u c t u r a l Reliability . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Stochastic O p t i m i z a t i o n . . . . . . . . . . . . . . . . . . . . . . . 1.4 Large D e v i a t i o n s a n d E x t r e m e Values . . . . . . . . . . . . . . . 1.5 M a t h e m a t i c a l Statistics . . . . . . . . . . . . . . . . . . . . . . . 1.6 C o n t e n t s of this Lecture Note . . . . . . . . . . . . . . . . . . . .
1
M a t h e m a t i c a l Preliminaries
9
2.1 2.2 2.3 2.4 2.5 2.6
2.7
Results from Linear A l g e b r a . . . . . . . . . . . . . . . . . . . . . Results from A n a l y s i s . . . . . . . . . . . . . . . . . . . . . . . . Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Extrema under Constraints ..................... P a r a m e t e r D e p e n d e n t Integrals . . . . . . . . . . . . . . . . . . . Probability Distributions ....................... 2.6.1 U n i v a r i a t e d i s t r i b u t i o n s . . . . . . . . . . . . . . . . . . . 2.6.2 T h e n - d i m e n s i o n a l n o r m a l d i s t r i b u t i o n . . . . . . . . . . . 2.6.3 C a l c u l a t i o n of n o r m a l integrals . . . . . . . . . . . . . . . Convergence of P r o b a b i l i t y D i s t r i b u t i o n s . . . . . . . . . . . . . .
1 2 6 7 7 8
9 11 14 16 21 28 28 29 30 31
A s y m p t o t i c Analysis
34
3.1 3.2 3.3 3.4
34 35 40 44
T h e Topic of A s y m p t o t i c Analysis . . . . . . . . . . . . . . . . . T h e C o m p a r i s o n of F u n c t i o n s . . . . . . . . . . . . . . . . . . . . A s y m p t o t i c Power Series a n d Scales . . . . . . . . . . . . . . . . Deriving Asymptotic Expansions . . . . . . . . . . . . . . . . . .
Univariate Integrals
45
4.1 4.2 4.3
45 45 47
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Watson's Lemma ........................... T h e Laplace M e t h o d for U n i v a r i a t e F u n c t i o n s . . . . . . . . . . .
vii
7
Multivariate Laplace Type Integrals 5.1 I n t r o d u c t i o n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Basic Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3 Interior M a x i m a . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4 B o u n d a r y M a x i m a . . . . . . . . . . . . . . . . . . . . . . . . . .
51 51 52 55 65
Approximations for Normal Integrals 6.1 T i m e - I n v a r i a n t Reliability P r o b l e m s . . . . . . . . . . . . . . . . 6.2 Linear A p p r o x i m a t i o n s ( F O R M Concepts) . . . . . . . . . . . . . 6.2.1 T h e H a s o f e r / L i n d reliability index . . . . . . . . . . . . . 6.2.2 G e n e r a l i z a t i o n to n o n - n o r m a l r a n d o m variables . . . . . . 6.3 S O R M Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4 A s y m p t o t i c S O R M for N o r m a l Integrals . . . . . . . . . . . . . . 6.4.1 T h e generalized reliability index . . . . . . . . . . . . . . 6.5 T h e A s y m p t o t i c D i s t r i b u t i o n in the Failure D o m a i n . . . . . . . 6.6 N u m e r i c a l Procedures and I m p r o v e m e n t s . . . . . . . . . . . . . 6.7 E x a m p l e s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
85 85 86 86 88 90 91 98 99 102 103
Arbitrary Probability Integrals 7.1 7.2
7.3
P r o b l e m s of the T r a n s f o r m a t i o n M e t h o d . . . . . . . . . . . . . . A s y m p t o t i c A p p r o x i m a t i o n s in the Original Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.2.1 T h e c o n s t r u c t i o n of Laplace integrals . . . . . . . . . . . . 7.2.2 E x a m p l e s . . . . . . . . . . . . . . . . . . . . . . . . . . . Sensitivity analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 7.3.1 P a r a m e t e r d e p e n d e n t densities . . . . . . . . . . . . . . . 7.3.2 E x a m p l e . . . . . . . . . . . . . . . . . . . . . . . . . . . .
106 106 109 109 112 115 115 118
Crossing Rates of Stochastic Processes 8.1 I n t r o d u c t i o n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.2 Definition a n d Properties of Stochastic Processes . . . . . . . . . 8.3 M a x i m a a n d Crossings of a Stochastic Process . . . . . . . . . . 8.4 Crossings t h r o u g h a n Hypersurface . . . . . . . . . . . . . . . . . 8.4.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . .
121 121 121 124 127 133
A
Bibliography
135
B
Index
145
viii
Notation The set of the natural numbers is denoted by ~ and the set of complex numbers by C. The n-dimensional euclidean space is denoted by ~'*. For the set of the vectors in ~ with all components being positive we write / ~ . A vector in /~n is written as ~ and the zero vector ( 0 , . . . , 0) as o. The transpose of ~ is written as z~T. The unit vector in direction of the xl-axis is denoted by ei. For the euclidean norm of a vector 9 we write I~1 and for the scalar product of two vectors ~ and y we use (~, y). The subspace of ~'~ spanned by k vectors a l , . . . , ak is written as s p a n [ a 1 , . . . , ak]. For the orthogonM complement of a subspace U C ~ n we write U • An n x m-matrix is written with bold Roman letters: A,B,... The ndimensional unity matrix is denoted by I n and an n x k matrix consisting of zeros by o~,k. The cofactor matrix C of an n x n matrix A is the n x n matrix C = ( ( - 1 ) i+j det(Aij))i,j=l .....n with Aij being the (n - 1) x (n - 1) matrix obtained from A by deleting the i-th row and the j - t h column. The rank of a matrix B , i.e. the number of its linearly independent column vectors, is denoted by rk(B). The probability of an event A is denoted by /P(A). An one-dimensional random variable is denoted by a capital Roman letter: X, Y , . . . and for ndimensional random vectors bold capital Roman letters arc used: X , Y , . . . . For the probability density function and the cumulative distribution function of a random variable we write p.d.f, and c.d.f, respectively. The expected value of a random variable X is written as ~ : ( X ) and its variance as var(X). The covariance between X and Y is denoted by coy(X, Y). A function f : D --~ /R on an open set D C ~ n is called a Cl-function if all partial derivatives of first order exist and are continuous. Analogously by induction C~-functions (r > 1) are defined. A function f : D --~ /~ on an open set D C / R ~ is called a C~-function if all partial derivatives of order r - 1 exist and are continuously differentiable. Further a function f : D --~ /R on a closed set D C ~ is called a C~-function if there is an open set U C ~ with D C U such that f is defined on U and f is according to the definition above a C~-function. A function T : /R" --* /Rm,~ ~-* T(~) = ( t l ( x ) , . . . , t m ( ~ ) ) is called a C ~vector function if all component functions t l ( ~ ) , . . . , t,~ (~) are Cr-functions. For a function f : R " --*/~ the first partial derivatives with respect to the variables xi (i = 1, , n) at the point 9 are denoted by f i ( ~ ) or 0/(~) and the gradient of by V f ( ~ ) . The second derivatives of this function with respect to the variables O~f(~e) and x~ and xj ( i,j = 1,...,n) at the point ~ are denoted by f~J(~) or Ox~Oxj 9 .
.
0.Vi
its Hessian by H / ( ~ ) . For functions of the form f ( ~ , y) by V ~ f ( ~ , y) the gradient with respect to the vector x is denoted. In the same way V y f ( ~ , y) means the gradient with respect to the second vector y. The divergence of a Cl-vector function u ( x ) is denoted by div(u(~)).
ix
Chapter 1
Introduction 1.1
T h e Evaluation of Multivariate Integrals
In m a n y fields of applied mathematics it is necessary to evaluate multivariate integrals. In a few cases analytic solutions can be obtained, but normally approximation methods are needed. The standard methods here are numerical and Monte Carlo integration. In spite of the fact that computation time today is accessible plentiful, such procedures are sufficient for m a n y problems, but in a number of cases they do not produce satisfactory results. Such integrals appear for example in structural reliability, stochastic optimization, m a t h e m a t i c a l statistics, theoretical physics and information theory. If we consider an integral in the form f ( x ) d~
(1.1)
F
with F C ~ and f : F --~ H~, there are three main causes, which might make numerical or Monte Carlo integration difficult: 1. The dimension n of the integration domain is large. 2. The domain F has a complicated shape. 3. The variation of f in the domain F is large. Often not only the integral itself, but its derivatives with respect to parameters are of interest. The most general case is that the integrand f ( ~ ) as well as the integration domain F depend on parameters. Another important point is that often not only one integral has to be computed, but the behavior of the integral under parameter changes is of interest. Then m a n y evaluations of the integral are necessary. A similar problem occurs if some sort of optimization should be made. Further in reliability problems with r a n d o m finite elements (see for example [4]) even now the necessary computing time can be prohibitive.
Methods for analytic approximations have been developed in different fields and sometimes due to the specialization nowadays in science some of t h e m have been rediscovered at least once or twice. Therefore it is a t t e m p t e d here to give a list of the available textbooks in this field and certainly for the field of multivariate Laplace methods an overview of the relevant results. The basic idea of such methods is that instead of integrating over the whole domain F, points or subsets are identified from whose neighborhoods the main contributions to the integral come from. Therefore instead of integrating the numerical problem is then to find these sets by some optimization method. Such approximation methods are not a solution for all problems. Their efficiency and usefulness depends on the problem. If the underlying assumptions about the structure of the integral are not fulfilled to some degree, the use of other schemes might be better. In some cases then the use of asymptotic approximation methods, which are described in this book, perhaps in combination with the aforementioned methods, is advisable.
1.2
Structural Reliability
One field, in which such concepts had been very successful, is structural reliability. In the following we will give a short outline of this field. The first proposals to use probabilistic models for structural reliability are from [98], [78] and [64], but not before the sixties such problems were studied more intensively. Then in the last thirty years in mechanical and civil engineering probabilistie methods were developed for the calculation of the reliability of components and structures, since pure deterministic methods were not adequate for a number of problems. Textbooks about such methods are [10], [16], [17], [39], [84], [89], [99], [124] and [128]. In structural reliability the computation of multivariate integrals is an important problem. The studies in this lecture note were motivated by the problems in this field, since standard methods did not lead to satisfactory results. At the beginning the random influences acting on a structure were modelled simply by two time-invariant random variables, a load variable L and a resistance variable R. If L _> R, the structure failed, if L < R, it remained intact. But soon it was clear that such a model was far too simple even for components of a structure. Even if the time influence is neglected, for a sufficient description of the random influences a random vector X = ( X 1 , . . . , X,~) with a large number n of components is needed. In general one part of this vector is composed of load variables and the other of resistance variables. If now the p.d.f, f ( x ) of the random vector X which models the r a n d o m influences on a structure is known and the conditions for failure can be expressed as a function of the vector, the probability for failure can be calculated. Then the integration domain is given by a function g(~) in the form {w;g(~) _< 0}. The function g ( a ) describes the state of the structural system under consideration. If g(x) > 0, the system is intact and if g(a) _< 0, the system fails.
The failure domain is denoted by F = {x; g(x) <_ 0}, the safe domain by S -- {x;g(w) > 0} and the limit state surface, the boundary of F by G = = o}.
X2
~
Limit state surface G = {x; g(~) = O}
< o}
Safe Domain S =
{x;g(x) > 0}
XI
Figure 1.1: Failure domain, limit state surface and safe domain Then the problem is to compute the probability of failure ~(F) =
f /
f ( x ) dx.
(1.2)
g(x)<0 Often we can distinguish two types of random variables in the random vector X . Firstly random variables, which can not be changed by the design of the structure, such as loads acting on it; secondly variables, which can be influenced by the design, for example the strength of concrete. It was tried first to use the standard methods for the calculation of such integrals mentioned above to compute these multivariate probability integrals. In numerical or Monte Carlo integration then the integral is approximated in
the form N
1P(F) ~ ~ w(wi)f(z~i).
(1.3)
i----1
Here the xi's are points in F, determined by a deterministic or random mechanism and the w(a~i) are their respective weights. The main difficulties in computing this integral are that in general the probability density f ( x ) is small in the failure domain and that the shape of the domain is not explicitly known, but given implicitly by the function g(a~); therefore standard techniques have difficulties in finding a scheme for creating enough points in F. Soon it became clear that in their usual form they were not suited to this problem, since they were too time consuming and inaccurate. Certainly now with the increasing capacity of computers it is possible to solve more and more reliability problems, without spending any effort on refining methods. But the author thinks that it is absolutely wrong to say that we just have to wait until the computing capacity is large enough to make it possible to solve more complex problems. Since the main point in structural reliability and other fields is not only the computation of probabilities, but the gain of insight in the structure of the problem. If an approximate analytic solution is found, the important influences on the failure mechanism become often much clearer. The usual formulation of the problem, compute P(F), given the density f(x) and the limit state function g(x), is misleading. In reality the probability distribution is known only approximately and also the limit state function is only an appproximation. The problem of estimating such a function by experimental design is considered in [58] and [a2]. Der Kiureghian [44] and Maes [90] treat the problem of structural reliability under model uncertainties. Furthermore in a problem of structural reliability the probability of failure depends on a number of design parameters in the structure. Some of these parameters can be changed by altering the design of the structure, others not. Therefore the real problem here is more complex than it appears at the first glance. Therefore a more realistic probabilistic model is in the form P(F]~9) =
f /
f ( ~ l ~ ) dx.
(1.4)
, ]
g(xlO)<_o Here all functions depend on a parameter vector ~ = ( 0 1 , . . . , 0k).
1. g(xl~ ) is
the limit state function for the parameter value ~.
2. f(x Iv~)is
the probability density function of the random vector X for the parameter value t~.
The usual formulation is that the parameter vector ~ is assumed to be fixed and known. Then for this fixed vector t~ = ~0 the probability • ( F I 0 0 ) is computed. But in the more general setting this is only one of the interesting
quantities. If the values of the parameters are known and can be influenced by the design of the structure, one problem can be to optimize the structure in some sense. In reliability the quantity to be minimized is usually the probability of failure, but there are restrictions on the possible parameter values. The following quantities and distributions may be of interest: 1. The partial derivatives of ~ ( F I O ) with respect to the components 01, - 9 0k of the parameter vector. 2. The distribution o f / P ( F I ~ ) if there is a probability distribution of ~. 3. The asymptotic form of the conditional distribution of X under the condition g(X]O) < O. Until now only the time-invariant reliability problem is solved sufficiently. Often time-variant problems are transformed into a time-invariant form, for example by modelling a random process only by its extreme value distribution. This is related to the fact that the theory of multivariate stochastic processes and their extreme value theory is restricted mainly to Gaussian processes. In probabilistic models for structural reliability we have two different probability theory and statistical inference. With methods of probability theory and other mathematical concepts the structure of the mathematical model is studied. The other problem is the relation of the model to reality. A common feature of the majority of books and papers in structural reliability is that the statistical problems are often neglected. We will discuss shortly the problems of statistical inference in reliability modelling. Since in reliability problems data are often sparse or do not exist at all, conclusions are in many cases based on a lot of assumptions, whose correctness can not be proved. The results, which are obtained, are difficult to check. For example if scientist A computes a failure probability of 10 -~ for a single building during one year and scientist B instead a probability of 10 -6, who is right? An approach for a solution might be the development of probabilistic models, which make more specific predictions (see for example [30] and [34]). This would mean that not only failure probabilities are computed, but the probability distributions of other events, which are connected with the occurrence of failures, but can be observed more frequently. By such an approach, we get an iterative process of model building, prediction, observation and model improvement. In the book of Matheron [97] the general problem of probabilistic models in science is discussed. The importance of this book is that here it is made clear that the justification and the acceptance of these methods in science comes from the fact that they give for many problems satisfactory solutions, but not that they are correct descriptions of the reality. This might be quite unintelligible for statisticians who have never been involved in any applied work as they have lived in a pure academic environment. But anyone who works at problems which have some connection with the real world outside Academia will probably agree that this is not so wrong at all.
An alternative method in coping with uncertainty in structural reliability is for example fuzzy set theory (see [6]). In fuzzy set methods uncertainties, which are not of a probabilistic nature, can be modelled. A drawback is that there is no clear rule how to incorporate additional information as it is done for example in the Bayesian concept by Bayes' theorem. A further alternative concept is convex modelling (see [9]). In convex modelling no specific probability distribution for the random influences is assumed and only bounds for admissible influences are derived. Another problem of statistical inference in reliability is that the usual statistical estimation methods are focused on fitting distributions to the central part of the d a t a and not to the tails. Here modified estimation procedures should be used (see [40]), which give more weight to the fit in the distribution tails, since in reliability calculation the risk is mainly in underestimating the extremes of r a n d o m influences and not in making wrong estimates about their means.
1.3
Stochastic
Optimization
A m a t h e m a t i c a l field, where similar problems occur, is stochastic optimization. Here a given stochastic system, for example a network, a technical structure or a queuing system, should be optimized in its performance, which is described by a function of the design parameters, but is in general not known analytically. Basic concepts of stochastic optimization can be found in the book of Rubinstein [123]. For a given stochastic system we have an integral in the form f
I(~) = / R(xltg)f(xlO ) dx,
(1.5)
and a set of k constraint functions
= ....
gk(e) = 0.
(1.6)
Here ~ is an m-dimensional parameter vector, who can achieve values in a subset V C 1~m. f(xlt~) with x E / R n is a p.d.f., R(x]O) is a function, which describes the performance of the system for the given values ~ and x. The functions gl ( t g ) , . . . , gk (~) give restrictions for the possible values of t~ and describe usually costs or available resources. Theu I(O) is the performance of the system for the p a r a m e t e r vector ~. In stochastic optimization for fixed values of the p a r a m e t e r t9 the values of at least some of the functions are not known exactly and have to be estimated by some procedure. Usually here the problem is considered that gi(~) = E(hi(X)) and only random samples are available from which the expected value 1E(hi(X)) has to be estimated. In this formulation the task is to minimze the integral I ( ~ ) under the restrictions in equation (1.6). For this purpose it is necessary to evaluate the function I(v~) and at least some of its sensitivities (derivatives, gradients, Hessians, etc.)
with respect to changes in the parameter vector tg. To compute these, various methods can be used. Here often Monte Carlo methods are adequate. Applications of stochastic optimization methods in structural design are given in [96] and [95].
1.4
Large D e v i a t i o n s and E x t r e m e V a l u e s
Large deviation theory studies the asymptotic behavior of the probabilities /P()~A) as )~ ---* ~ ; here /P(.) is a probability ineasure on a measurable space (E,13) and A is an subset of E. If for example the underlying space is the n-dimensional Euclidean space /R", then such probabilities are given by ndimensional integrals. The case of the standard normal probability measure is considered in the papers of Richter ([114], [14], [115], [116], [118] and [107]). Similar questions appear if we study the asymptotic distribution of the sum Ein=l Xi of r a n d o m variables with mean zero (see [18], [19] and [119]). Extreme value theory is concerned with the m a x i m a (or minima) of sequences of r a n d o m variables or of a stochastic process. If we consider a sequence X], X2,... of i.i.d r a n d o m variables, the classical question of extreme value theory was to find under which conditions by a suitable scaling and shifting a non-degenerate limit distributions exist for the sequences
Yn = m a x ( X 1 , . . . , Xn) resp. Zn = r a i n ( X 1 , . . . , X,~).
(1.7)
The classical textbook in this field is the book [68] of Gumbel written 1958. The textbook of Leadbetter, Lindgren and Rootzgn [82] gives in the first part an overview of further development of the extreme value theory for sequences. In the case of a stochastic process X(t) we consider the random variables
Y(T) :
max
0
X(t) resp. Z(T) = min X(t). 0
(1.8)
Here similar results as for sequences can be obtained, see [43] and [82]. The results in these books are usually derived by approximating the process by a suitably chosen sequence of random variables. An alternative method, based on sojourn times, is outlined in [12].
1.5
Mathematical Statistics
Similar questions arise in mathematical statistics. Here asymptotic methods play an important role in investigating large sample behavior. An overview of such applications can be found in the book of Barndorff-Nielsen and Cox [7]. Here a sequence of i.i.d, random variables X1, X 2 , . . . is given and for functions f(X1,..., Xn) as n ---+ oe asymptotic approximations are sought. A standard example are the asymptotic behaviors of the sample mean f( = n -1 ~i~1 Xi and the sample variance S = (n - 1) -1 E i n = l ( X i -- X)2, which converge under some regularity conditions to the respective moments of the Xi's.
If the random variable X1 has the p.d.f, f(x) the joint p.d.f, of the first n random variables is 1-I,~1 f(xi) and the log likelihood function is ln(I-[i~l f ( x i ) ) = ~ , ~ 1 ln(f(xl)). The asymptotic behavior of these function can be studied using the Laplace method. In the case of random vectors X1, X2,... we need again results about the asymptotic structure of multivariate integrals. Additional importance have asymptotic approximation methods in Bayesian statistics, where the derivation of the posterior distribution requires often the evaluation of multivariate integrals. The first who used the Laplace method outlined in this book for such problems, i.e. the derivation of posterior distributions, was Lindley [86]. Further results are given in [131].
1.6
Contents
of this Lecture Note
In chapter 2 first some results from linear algebra and analysis are given for reference and some new results about sufficient conditions for constrained extrema and multivariate parameter dependent integrals are derived. In the third chapter a short outline of the basic concepts of asymptotic analysis is given. Then in the fourth univariate and in the fifth multivariate Laplace type integrals are studied. In the sixth these results are applied to normal random vectors. In the seventh chapter it is shown that such approximations can be made also for non-normal random vectors. In the last chapter then asymptotic approximations for crossing rates of differentiable stationary Gaussian vector processes are derived.
Chapter 2
Mathematical Preliminaries 2.1
R e s u l t s from Linear A l g e b r a
In this section s o m e results from linear algebra, which are n e e d e d in t h e following a n d are n o t a l w a y s in s t a n d a r d t e x t b o o k s , are collected. Let U C ~'~ be a linear s u b s p a c e of ~ a n d U • its o r t h o g o n a l c o m p l e m e n t . For each vector x E J7~'~ there is a unique o r t h o g o n a l p r o j e c t i o n o n t o this subspace. T h i s m e a n s t h e r e exists a vector u G U a n d a vector w C U • such t h a t = u + w . T h i s p r o j e c t i o n vector is o b t a i n e d by m u l t i p l y i n g x by the n • n p r o j e c t i o n m a t r i x P u of the s u b s p a c e U.
L e m m a 1 Let U be a k-dimensional subspace of ff~n and U • its orthogonal complement, an (n - k)-dimensional subspace. Then for the projection matrix P u the following relations hold 1. P u = P2u, i.e. P u is idempotent, 2. P u • = I,~ - P u , 3. p T = p ~ . PROOF:
Follows f r o m t h e definitions.
[]
L e m m a 2 Let x be an n-dimensional vector and a l , . . . , a k C ~ n with k < n be linearly independent vectors which are the columns of the n • k matrix A(al,...,ak). Then we have for the unique orthogonal projection P v x = k ~ i = l 7iai of a vector x onto the subspace U -- s p a n [ a 1 , . . . , ak] that 1. P u : A ( A T A ) - 1 A T , 2. 3 'T = ( 7 1 , . . . , 7 k ) T = ( A T . A ) - I A T w ,
3. P u x = ~ = l
7iai = A . ( A T . A ) - I A T x .
PROOF: We have that P u x C s p a n [ a 1 , . . . , ak] due to definition. define the projection matrix as in 1., we get (x - P u x ) T p u x
=
xT p u x -- xT p T p u z
=
xTpux
Therefore P u x is orthogonal to x - P u x given by A ( A T A ) - I A T.
-- w T p u w
If we
(2.1)
= O.
and the projection m a t r i x is in fact []
L e m m a 3 Let a l , . . . , a k
C 1Rn be k linearly independent vectors with k <_ n which are the columns o f the n x k m a t r i x A -- ( a l , . . . , a k ) . Then the volume
V ( P ) o f the k - d i m e n s i o n a l parallelepiped P -- {y; y = ~ki= 1 oqai with 0 < oq < 1 f o r i = 1 , . . . , k} is given by
V ( P ) = ~/det(A T - A) .
(2.2)
Here d e t ( A y . A ) is the Gramian of the vectors a l , . . . , ak.
PROOF: This result is proved by induction over n (see for example [103], p. 81 or [102], p. 181). []
D e f i n i t i o n I Let A be an m • n matrix. A n n • m - m a t r i x A -
with the following
properties 1. A A -
is s y m m e t r i c ,
2. A - A is s y m m e t r i c , 3. A A -
A -- A ,
~. A - A A -
= A-,
is called a generalized inverse o f A .
If A has an inverse, then A -1 = A - . Sometimes this generalized inverse is called Moore-Penrose-inverse. For every matrix A exists a unique generalized inverse (see [67], p. 24-25). If the linear equation system A x = b can be solved, then A - b is a solution of the system. A detailed discussion of generalized inverses and their properties can be found for example in [113]. For two special cases we have the following results. L e m m a 4 Let D be an n • n-diagonal m a t r i x with diagonal elements dii (i = 1,...,n). Then the generalized inverse D - o f D is an n • n diagonal m a t r i x with the i-th diagonal element equal to d~ 1 if dii ~ 0 and zero if dii = O.
PROOF:
See [67], p. 28.
[]
10
Lemma5 Let A : ( a l , . . . , a k ) be an n x k - m a t r i x ( k < n), such that f o r the k vectors ai always a T 9aj = 5ij. Then the generalized inverse of A is given by (2.3)
A - = A T.
PROOF: From the definitions follows that A A T and A T A Using the definition of A we have A T A = (bij)i,j=l,...,k
Ik.
=
are symmetric. (2.4)
From this follows that
(2.5)
AAT A = AIk = A,
and ATAA
(2.6)
T = I k A T = A T"
[]
The following l e m m a shows that under some conditions two quadratic forms associated with two symmetric matrices can be brought together in a diagonal form. L e m m a 6 Let be A a s y m m e t r i c positive definite n x n matrix and B a symmetric positive semidefinite n x n matrix. Then there exists a non-singular n x n matrix T and non-negative numbers # l , . . . , P n with det(T) = det(A) -1/2 such that 7~
(z~T)T A T x
:
~*
: Z x~,
(2.7)
i=1
(xTITBTx
=
fi#ix~.
(2.8)
i----1
PROOF:
2.2
See [i12], p. 29.
[]
Results from Analysis
To measure the sensitivity of a function with respect to changes in the arguments usually the partial derivatives are calculated. But they depend on the scale used for the various variables and are not invariant with respect to linear transformations. A better measure for the sensitivity of a function with respect to changes in the variables are its partial elasticities. D e f i n i t i o n 2 For a function f : ~:~ ~ F~ the partial elasticity e i ( f ( x ) ) with respect to xi at a point x with f ( x ) ~ 0 is defined by ~i(f(~))
if(x)
= x~. - f(~)
11
(2.9)
The partial elasticity gives approximately the change of the function value in ' percents if the value xl of the i-th component is changed by 1%. ~ Let in the ~'~ a coordinate change by a rotation is made, i.e. the new unit vectors are taken as a l , . . . , a n , where ( a i , a j ) = 6ij and A = ( a l , . . . , a n ) , For a vector x = ( x l , . . . , x . ) = ( ( x , e l ) , . . . , (x,e~)), then the new coordinates Y = ( y l , . . . , y n ) = ( ( x , a a ) , . . . , (x,a,~)) are given by y = ATze. The following result shows, how the second derivatives of a function of n variables are transformed if a coordinate transformation by a rotation is made. L e m m a 7 Let be given a twice continuously differentiable function f : ~ ---* and an orthogonal n x n-matrix A = ( a l , . . . , an). A coordinate transformation is given by y = A T x . For the second derivatives of the function in the new coordinates f ( y ) = f ( A y ) we have the following transformation formula
H f(y) = AT H ](Ay)A. PROOF:
(2.10)
The gradient of f ( y ) is
Vf(y) = ATVf(Ay).
(2.11)
By differentiating further we get the second derivatives
Hf(y) = ATH] (Ay)A.
(2.12) []
In the terminology of tensors we have that the first derivatives of a function are a covariant first-order tensor and the second derivatives a covariant secondorder tensor (see for example [83], p. 67/8). T h e o r e m 8 ( L e b e s g u e C o n v e r g e n c e T h e o r e m ) Let f ( x ) , f l ( x ) , f 2 ( x ) , . . . be measurable functions on a measurable subset D C 1Rn such that 1. limm-~o f r o ( X ) = I(X) almost everywhere.
2. There exists an integrable function g(x) such that for all x e D and for m = 1 , 2 , . . . always Ifm(x)l <_ g(x). Then Df(X PROOF:
f JD f,~(X) dx.
dx
See [62], p. 232.
(2.13) []
T h e o r e m 9 ( I m p l i c i t F u n c t i o n T h e o r e m ) Let U C 1~'~ and V C 1~ ~ be open sets and (xo,Yo) E U x V. Further f : U x V --~ 1Rk is a C~-vector function with f ( x o , Yo) = o. I f then
Oyj
i , j = l ..... k
12
then exists an open neighborhood U1 C U of ~0 and a Ck-vector function h : U1 ---+~ k such that for ~ ~ U1 always f ( ~ , h(~)) = o and h(~o) = Yo. PROOF: See for example [102], p. 74/5.
(2.15) []
T h e o r e m 10 ( M o r s e L e m m a ) Let ~o be a non-degenerate stationary point of the real-valued C ~ - f u n c t i o n f(~e). Then there is a neighborhood U of zeo and a neighborhood V o f o such that there is a C~-diffeomorphism T : U -~ V with
f(T-l(y))
= f(~eo) + l y T H ] ( x o ) y ,
(2.16)
and such that for the transformation determinant J ( v ) of the diffeomorphism J(o) = 1. PROOF:
See for example [140], p. 479.
[]
If the Hessian is positive or negative definite, by a further linear transformation it can be achieved that the function or its negative can be written as the sum of the squares of the coordinates. We will need only the following simpler form of this result. C o r o l l a r y 11 Let f : F --+ lift'~ be a twice continously differentiable function, which has a the point xo in the interior o f f a non-degenerate maximum. Then there there is a neighborhood U of ~o and a neighborhood V of o such that there is a C2-diffeomorphism T : U ---* V with
f(T-l(y))
= f(xo) -
llyl=.
(2.17)
Here the Jaeobian J ( ~ ) of the transformation has the value I det(H/(xo))] 1/2 at the point xo. PROOF: We assume that ~0 = o and f ( o ) = 0. Making a suitable rotation, we can always achieve that the unit vectors ei are the eigenvectors of H i ( o ) with negative eigenvalues ~1,. 99 ~,~. Let K be a sphere with o in its center so small that h" C F and the Hessian H / ( x ) is negative definite throughout K. We define a vector function T : V ~ ~'~ by T(m) = 12f(m)l 1/2 ~im with T(m) = ( t l ( ~ ) , . . . , t ~ ( x ) ) .
(2.18)
For m = o we set T(o) = o. Then we have
-IT(a~)12/2 = f ( ~ ) . From the definitions follows that this vector function is a C2-function. We show that T is an one-to-one mapping in a subset of I( with the origin in its interior. Since we have for the function f ( ~ ) near o the expansion
1 '~ f ( x ) = -~ E Aix2 + i----1
13
~ ~12),
(2.19)
we get for the vector function the expansion T(x) =
-
/x"~n ~ 2\ ,/2 [ ~-.,i.-:1 AiXi
~-~--~
~ + o(1~1=).
(2.20)
F r o m this we get for the first derivatives Otj
~126u + o(1~1), ~ -~ o
-
Oxi
(2.21)
and therefore we find for the aacobian J(x) = - i
~ / 2 + o(ixl) ' x -~ o.
(2.22)
i----1
So we have a ( o ) = - H , ~ , A~/2 ---- I det(Hs(~ '/2 Therefore the Jacobian is not zero at o and there exists a n e i g h b o r h o o d U C K of o t h a t the vector function is an one-to-one m a p p i n g and defines a C2-diffeomorphism. Near o we have t h a t tj ~
~1/2
-'b
x~ + o(1~1)
(m23)
and
xi ~ - A i i l 2 t i
+ o(IT(x)[).
(2.24) []
2.3
Manifolds
Here some basic results a b o u t manifolds are collected. More detailed descriptions can be found in [62], [102] and [130]. Definition3 Let 1 <_ r < n and q >_ 1. A n o n - e m p t y set M C lt7n is called an r - d i m e n s i o n a l C q - m a n i f o l d i f f o r each xo C M there exists a neighborhood U of xo and a C q - v e c t o r f u n c t i o n qt : U --+ 1#7n - r with 9 = ( q s l , . . . , ~ n - r ) such that D~(.) = (W'(.),..., W~-'(.)) has rank n - r for all ~ ~ U and M n U : {~ c U; ~ ( ~ ) = o}. A n (n - 1)-dimensional manifold in the n-dimensional space is usually called an hypersurface. D e f i n i t i o n 4 Let g be a f u n c t i o n f r o m an open set D C ~ into an r - d i m e n sional manifold M C ~ with r <_ n. Then g is called a regular t r a n s f o r m a t i o n
ff 1. g is a C l - v e c t o r function, 2. g is injective, 3. The m a t r i x D g ( t ) -- (~Tgi(t),... , XTgn(t)) has rank r f o r all t C D .
14
D e f i n i t i o n 5 Let S be a non-empty, relativly open subset of an r-dimensional Cq-manifold. A bOective transformation F of S in H:~r is called a local coordinate system for S if F = T -1 and T is a regular transformation from an open set ~C~ ~ onS. This means that each point x 6 S is uniquely F ~ ( x ) , . . . , Fr(x). For example on a sphere in 1R3 a latitude and altitude. On an ( n - 1)-dimensional manifold G = {x; g(x) g : ~'~ --+ ~ with V g ( x ) ~k o for all x 6 G, a normal by
= IVg(at)l-*Vg(
determined by the values point is determined by its = 0} defined by a function vector field can be defined
(2.25)
).
This normal field defines an orientation on G. The tangential space of a manifold M at a point x 6 M will be denoted by TM(X). The Weingarten mapping defined in the following describes how a surface is curved. A morer precise description of the Weingarten mapping can be found in [130], p. 55/6. D e f i n i t i o n 6 The Weingarten mapping of an (n - l)-dimensional Cl-manifold G in ~Vztn oriented by the normal vector field n(m) at a point at 6 G is defined by
L,~-t : TG(at) ---+TG(X), v ~-+ L,~_~(v)
On1(.) =
(2.26)
s
--sO~X/'''''--i----1 "
i=1
"
~X/
/ "
The Weingarten mapping gives a measure for the change of the direction of the normal vector on the surface G if one moves through the point x with velocity vector v at x. Since the Weingarten mapping is a linear self-adjoint mapping it has n - 1 eigenvectors v l , . . . , v , ~ - i with eigenvalues ~ 1 , . . . , n,~-i (see for example [130], p. 86). The vi's are called main (or principal) curvature directions and the ni's the main (principal) curvatures. The curvature x(v) of a curve through x in G in direction of v is given by (see [130], p. 92)
~(v)
1 -
"~, O2g(at) ,~ ,~ V i Vj
-
.
(2.27)
i,j=l
This definition of the curvature is such that a sphere around the origin has positive curvature. Since the Weingarten m a p is linear, it can be described by a matrix. The determinant and the trace of the Weingarten m a p are i m p o r t a n t in differential geometry. The determinant K(p) = det(Lp) is called the Gauss-Kronecker curvature of the surface at at. It is equal to the product of the main curvatures, i.e. n-1 K ( p ) = 1-Ii=l ~i. n _ 1 times the trace of L v is called the mean curvature H(p) of the surface at x. Therefore H(p) = n -1 ~ i =n -l- 1 gi is the average of the main curvatures. 15
The volume an(p) of an n-dimensionM sphere {x; I~1 _< p) is given by (see [62], p. 218) 2 9:rn/2 - - . p" (2.28) ~"(P) = ~ r ( n / 2 ) and the (n-1)-dimensional surface/3~ (p) of this sphere is found by differentiating this expression with respect to p giving 2 97r~ / 2
/3,(p) = r(n/2------~"p,~-i
(2.29)
Similar as in the univariate case there are relations between the derivatives of a vector function in a domain and the integral of the function over the boundary. T h e o r e m 12 ( D i v e r g e n c e T h e o r e m ) Let G be a compact (n - 1)-dimensional submanifold in the n-dimensional space, defined by a Cl-function 9 : ~'~ ~ ~ by G = {x;g(x) = O} such that the gradient ~7g(y) does not vanish on G and that F = {x;g(x ) < 0} is a compact domain. Then, if the manifold is oriented by the normal field n ( y ) = I V g ( y ) l - i V g ( y ) , for a continuously differentiable vector field u ( x ) the following equation holds
f div F
= f(n(u), u(u))
ds(u).
(2.30)
G
Here ds(y) denotes surface integration over G.
PROOF:
2.4
See for example [102], p. 319.
[]
Extrema under Constraints
Here some results about extrema under constraints are given (For more details [70]).
see
D e f i n i t i o n 7 Given is a set U C ~ and a subset M C U. A function f : U -+ ff~ has at a point ~ E M a local maximum ( m i n i m u m ) under the constraint M if there exists a 5 > 0 such that for z E M with I z - x I ~ 5 always f ( z ) <_ f ( x ) ( resp. f ( z ) > f ( ~ ) ) .
T h e o r e m 13 ( L a g r a n g e M u l t i p l i e r R u l e ) Let U C E~'~ be an open set. On this set let be defined the continuously differentiable functions f : U --~ J~ and g l , . . . , g k : U ~ iT~ such that an (n - k)-dimensional Cl-manifold M C U is defined by M = {x;gi(x) ..... gk(x) = 0}.
16
ff the function f has a local extremum under the constraint M at a point x* G M and the gradients Vg~(x*),..., Vgk(x*) are linearly independent, there exist numbers ,~1, 9.., Ak such that k
vf(~') = ~ ~,Vg~(~*). i=1
These numbers are called Lagrange multipliers. PROOF:
See [62], p. 161.
[]
If the coordinate system is changed to local coordinates on a surface, the form of the second derivatives with respect to these coordinates at a local extremum on a manifold is derived in the following lemma. L e m m a 14 Let be given a twice continuously differentiable function f : 1Rn --~ f t and k twice continuously differentiable functions gi : ~ n --+ ~ with k < n. The functions gi define an ( n - k )-dimensional manifold G = A~=l{X; gi(x) = 0}. Assume that further: 1. The function f has at the point x* C G a local maximum (minimum) with respect to G and the gradients V g l ( x * ) , . . . , Vgk(x*) are linearly independent. 2. The gradient V f ( x * ) can be written as k
Vf(x*) = E
71Vgz(x*).
(2.31)
/=1
(This follows from 1. using the Lagrange multiplier theorem 13, p. 16.) 3. In a neighborhood of x* in G there is a local coordinate system given by the coordinates u l , . . . , u,~-k defined by the inverse of the function T : U ---* a, (u,,..., u~_~) H ( x ~ ( ~ ) , . . . , x~(u)). Then the second partial derivatives of the function f ( T ( U l , . . . , u~-k)) with respect to the local coordinates u l , . . . , u , ~ - k have the form
(
02f(T(u"--':'-un-k))) OUiOUJ
= D . H ( x * ) . D T,
(2.32)
J i,j=l ....... k
where
1. v = (VXl(U),
.., vxo(
(2.33)
)) = ( a x j )
\ Oui ) i,j=l,
....
is the Jacobian of the regular transformation to local coordinates. .
H(x*) -
( fiJ( :c* ) _ ~k ~,glJ(~*) l=1
17
)
(2.34) i,j=l,...,n
PROOF: First we calculate the first partial derivatives o f f ( T ( u ~ , . . . , u,~-k)). They are Of ~-~ ~ . Ox~ = ~f (~)O~-u/for i = 1 , . . . , n - k. (2.35) 0Ui
r=l
The second derivatives with respect to the coordinates u l , . . . , u,~_~ are then
-
02f OUiOUj -
~-~ fmr(:~*) Ox~ Oxm + L r * 02xr . f (x ) Ou--Ouj OUi OUj rn,r----i r----I -
(2.36)
-
Since the u l , . . . , u ~ - k are a local coordinate system for the manifold G, g t ( T ( u ~ , . . . , un-k)) = 0 for l = 1 , . . . , k. By differentiating this identity we get Ogl(T(Ul,...,Un-k)) L Oui =
r * OXr gt ( X ) ~ / u / = 0.
(2.37)
r':--i
Differentiating further gives 02g1(T(ul,...,u,~-k)) ouiouj =
r~ gt
. Oxr Oxm ~ (~)~ o~j 4-
~
,
0 x~
= 0 . (2.38)
From the last equation we obtain then ~2
rn,r=l
r----i
Now using equation (2.31) gives finally r=l
OUiOUJ
7,g, ( x ) Ou-~uj
(2.40)
r----1 l : 1
71 =
-
/=1
2
m,r=l
,~ g~
, Ox~ Oxm (~)-5-~0~,
Inserting this result in equation (2.36) gives the final result.
[]
D e f i n i t i o n 8 Let a set F be defined by gj(x) <_ 0 for j = 1 , . . . m . A point x E F with gl(w) . . . . . gk(x) = 0 for a number k with 1 < k <_ m and gj(x) < 0 for j = k + 1 , . . . , m is called a regular point o f F if the gradients ~Tgl(x), . .., Vgk(x) are linearly independent.
18
For this general case of inequality constraints we have the following theorem. T h e o r e m 15 Let be given a continuously differentiable function f : ~'~ ~ 1R and m continuously differentiable functions gi : h~n ---+ s I f the function f has at the point x* a local m a x i m u m ( m i n i m u m ) under the constraints gj(x) < 0 for j = 1,...m
(2.41)
and x* is a regular point of F = N ~ = l { g j ( x ) <_ 0}, there exist numbers A 1 , . . . , A m with Aj >_ 0 (resp. Aj < O) for j = 1 , . . . , k such that m
Vf(x*)
Aj V g j (x*).
= E
(2.42)
j----1
Here Aj = 0 if g j ( x * ) < 0 for j -- 1 , . . . , m .
PROOF:
See [70], p. 186.
[]
A sufficient condition for local extrema under constraints can be found by studying the second derivatives of the objective function and the constraint functions at a stationary point of the Lagrangian. D e f i n i t i o n 9 Let A be a s y m m e t r i c n • n - m a t r i x and B an rn • n - m a t r i x with rn < n. The matrix A is called positive (negative) definite under the constraint B ~ = o if f o r an x E IR ~ with x 7s o and B x = o always x T A m > 0 (resp. x T A x < 0).
T h e o r e m 16 Let U C ~'~ be an open set. On this set are defined continuously differentiable functions f : U ---+ 1R and g l , . . . , gk : U --~ 1R such that an (n - k)dimensional C 1-manifold M C U is defined by M = {x; g l ( x ) . . . . .
g k ( x ) = 0}.
(2.43)
For a local m a x i m u m ( m i n i m u m ) under the constraint M at a point x* it is sufficient that the following conditions are fulfilled." 1. The gradients ~ T g l ( x * ) , . . . , ~7gk(x*) are linearly independent and there are numbers A1,...,Ak with k
~7f ( x * ) = E
Ai~Jgi(~*)"
(2.44)
i=1 k Am( ) at X* is neg2. The Hessian of the function -+ ative (resp. positive) definite under the constraint B x = o where B = (Vgl(a~*),...,Vgk(~*))
T.
19
PROOF:
See [5], p. 213-214.
[]
T h e usual criterion for the definiteness of a m a t r i x under linear constraints is the following. Here for a k x / - m a t r i x C the s y m b o l C "~i denotes the m x i - m a t r i x consisting of the first m rows and i columns of the m a t r i x C . T h e o r e m 17 Let A be a s y m m e t r i c n x n - m a t r i x and B an m x n - m a t r i x with m < n and r k ( B ) = m . Then: 1. A is positive definite under the constraint B x = o i f f
( - 1 ) m det 2. A
Aii Bm i
(Bin') T ]/ > 0 f o r i = m + l ,
0
. .. , n.
is negative definite under the constraint B x = o i f f (Aii
( - 1 ) i det PROOF:
(Bmi) T )
Bm i
0
> 0 f o r i = m + 1,.
.in,
rn
A proof can be found in [93], p. 136.
In the next theorem a new simpler condition is given. T h e idea here is to project the m a t r i x onto the subspace, which is orthogonal to the subspaee defined by B x = o. T h e o r e m 18 Let A be a s y m m e t r i c n x n - m a t r i x and B an m x n - m a t r i x with m < n a n d r k ( B ) = m . U C H~n is defined by U = { ~ ; B x = o}. Then the following s t a t e m e n t s are equivalent: a) The m a t r i x A is positive (negative) definite under the constraint B x = o. b) The n • n - m a t r z x9 A + = P uTA P u + P u T• 1 7 7 ( A _ = P uTA P u - P u • 1 7 Y7 is positive (negative) definite. PROOF: We prove only the case "positive definite". b) =~ a): If the m a t r i x A + is positive definite, we have for all x r o with B x = o that xT(pTuAPu
+ P uTx P u x ) x
=
xTP~APu
= xTA~+O
=
xTAx
x + x T P uTx P u ~ - x
(2.45)
> O.
This gives s t a t e m e n t a). a) =~ b): Let be z E ~'~, then it can be written in the form z = x + y with G U and y E U • T h e n we have that zT A+z = (x + y)T(pT Ap g + PTu•177
and since P u •
= o and P u y
= xTpTAPux
+ y),
(2.46)
ly[ 2.
(2.47)
= o, we get
+ yTpT•
= xTAx
20
+
Since x E U from this follows that x A x > 0 if x # o. Since lyl 2 > 0 iff y # o, we get therefore that
zTA+z > 0
(2.48)
iff z # o. But this means t h a t the matrix A+ is positive definite.
[]
Using this criterion it is only necessary to check, if one n • n-matrix is definite. The easiest way to do this is to make a Cholesky decomposition of the m a t r i x (or of its negative), see [129], p. 145-8 and [139], p. 13. If the decomposition algorithm breaks down, the matrix is not positive definite. Further this result can be used for calculating the determinant of matrices, which are obtained by projection onto subspaces. C o r o l l a r y 19 Given is an n • n-matrix B and a k-dimensional subspace U C J ~ defined by U = s p a n [ a l , . . . , a k ] , where the al's are orthonormal vectors. Then the determinant of the k • k matrix AT B A is equal to the determinant of the n • n matrix p T B p U + (In - p u ) T ( I , -- Pu), i.e.
det(ATBA) = det(PTuBPu + (In - p u ) T ( I , -- Pu)). PROOF:
(2.49)
We assume that the vectors ai are the first k unit vectors, i.e. Then we get
ai=eifori--1,...,k.
A T B A = (bij)i,j=l ..... k = B ,
(2.50)
and that
pu=(Ik
ok,,~-k On-k,k
).
(2.51)
On-k,n-k
Then we have
pTBPu+(I~--pu)T(I~--Pu)=
( ~o~-k,k I~-k~
).
(2.52)
But the determinant of this m a t r i x is equal to the determinant of /3 = []
(bij)i,j=l,...,k. The general case is proved by making a rotation.
2.5
Parameter Dependent Integrals
The derivative of an integral where integrand and limits depending on a parameter r is given by the Leibniz theorem. Then the Leibniz theorem gives the derivative of such an integral with respect to this p a r a m e t e r (see [1], p. 11, 3.3.7).
21
T h e o r e m 20 (Leibniz T h e o r e m ) Let I C 1R be an open interval. Given are two twice continuously differentiable functions a : I --+ lt~ and b : I ---+ 1~, and a continuous function lt~ x I ---* ~ , which is continuously partially differentiable with respect to the second argument r. Then the integral b(,)
/ .
(2.53)
I ( r ) = / f(x,7-) dx ~(~) is differentiable with respect to r and its derivative is given by
I'(r)
-
d--r
(2.54)
f ( x , r) dx
a(~) Of(x, T) O~ dx+[b'(T)f(b(r),r)-a'(T)f(a(T),T)].
=
The first summand describes the influence of r on the function f ( x , v) and the term in the square brackets the influence on the boundary points.
PROOF: We prove only the case that a(v) is identically equl to zero, i.e. a(r) = O. The general case follows easily. We have b(,) I(r+h)-I(r)
=
f
[f(x,~+h)-f(x,7")]
dz
(2.55)
0
b(r+h)
+
f
f(x,T + h) dx.
b(~)
Using the mean value theorem, this can be written as P
I ( r + h) - I(r) = /
h i ( x , r + hO(x)) dx
(2.56)
0
+ ( b ( ~ + h) - b ( r ) ) f ( x + ~*(b(r + h) - b(~)), ~ + h).
Here 0 _< v~(x) _< 1 and 0 _< ~* _< 1. This gives then for the difference quotient I ( r + h) - I(r) h
/ = J
f ( x , ~ + h ~ ( x ) ) dx
0
b(T + h) h
w
b ( r ) f ( x + O*(b(T + h) - b(v)), v + h).
22
(2.57)
For the first integral we get using the Lebesgue theorem that b(~)
/
b(~)
f(x, 7"9- h~(x)) dx ~
j
0
(2.58)
f(x, r) dx, h ---* O,
0
and for the second term we have as h --~ 0 that b(r+h)-
h
b(r)
f ( x + O * ( b ( r + h ) - b(r)),r+ h ) ~ b ' ( r ) f ( r ) .
(2.59)
This gives the result in this case.
D
This result is generalized to functions of several variables in the following theorem. Here the boundary of the integration domain is a surface in /R '~. 21 Let be given two continuously differenliable functions f : R n • I --+ ~ , (x, r) ~-~ f ( x , r ) and 9 : ~ n • I ~ 1~, (x, r) ~ g(x,r) with I an open interval. Assume further that."
Theorem
1. There exist integrable functions h, hi : ~ n --+ ~ such that for r E I always
If( ,r)l < h(ze) and IL-(x, T)P < hi(x).
(2.60)
2. For all r E I the set G ( r ) = {x;g(~, r) = O} is a compact Cl-manifold in 1Rn and for all x G G(r) always ~Txg(x, r) 5s o. (An orientation for G(v) is given by the normal field nr = IVxg(x, r)[-1V~eg(x, r) ). Then the integral F(T) -=
f ( x , T) dx
/
(2.61)
g(x,~)<0
exists for all r G I and the derivative of this integral with respect to r is given by F'(T)=
j
f~(x, T) dx - j
g(m,~)
a(~)
f ( y , T) gr
r)
IVug(u,
ds~(y).
(2.62)
Here ds~(y) denotes surface integration over G(r). PROOF: In the following a short proof for this result is given. The existence of the integrals follows from the first condition. To derive the form of the derivative, we first write the difference F(r + h) - F(r) as the sum of two terms f
f
.J
g(xr
g(xr
23
T) dx (2.63)
=
/
( f ( x , r -t- h) - f ( x , r)) dx
f
f(.,r) d.-
g(~ r
+
g(~ ,r-t-h)<0
f
f ( . , r ) dx.
g(X,r)
=:D~(h) For small enough e > 0 in a neighborhood G~(r) = {~; minyeG(r ) lY--~[ < c} of G(r) a coordinate system can be introduced where each y E G,(7") which is given in the form y = ~ + 5 "nr (~) (here nr (~) denotes the surface normal at ~) with x E G(r) has the coordinates (~, 5). In this coordinate system the difference D2(h) can be written in the form
D2(h)
l(fh)
]
f ( ~ + 5 n r ( X ) , v ) D ( x , 5 ) d6 dsr(~).
f_ g(~,r)=O
0
(2.64)
Here D(~, 5) is the transformation determinant for the change of the coordinates. Due to the definition of the coordinates we have that D(~, 0) = 1. The Cl-function l(~, h) is defined implicitly by the equation g(x + / ( ~ , h)n,-(~), r + h) = 0. The existence of such a Cl-function can be proved (for sufficiently small h) using the implicit function theorem (see theorem 9, p. 12) if always Vyg(y, r) 7~ o for y G G(r), which is assumed in condition 2. Using the mean value theorem, we get z(~,h)
J
f(x + 5. n,(~),r)D(x,5)
(2.65)
d6
o
1(,, h)f(~ + O(x)6 . nr(,), T)D(,, O(X)6). with 0 _< 0(x) _< 1, Making a Taylor expansion of l(x, 0) with respect to h gives h) =
(2.66)
O) + o(h).
Inserting this in equation (2.65) and then integrating gives in the limit as h --~ 0 that g(~,T)=0 Making a Taylor expansion of the function g(x, T) we get (2.68) =
+
24
+o(h)
and using again the expansion in equation (2.66) of l(a~, h) with respect to h yields further
= hlh(ag, 0)(nr (ag), Vagg(ag)) -+- hgr(a~, r) -~ o(h). Since g(a: + l(~, h)nr(~), r + h) = 0 and we get from the last equation that h . lh(~, O)lV~g(~, 7-)1 =
(2.69)
(nr(a~), Va~g(a~, 7-)} = ]Va~g(a~, 7")1,
- h . g,(~, 7-) + o(h).
(2.70)
As h ~ 0 this gives in the limit
zh( , o) =
7-)1-1.
(2.71)
This gives for D2(h) then lim
h--*O
hD2(h)= -
/
gr(~,T)]Vxg(x, T)[--lf(~, T)dsr(~).
(2.72)
g(X,r)=0
For the first integral Dl(h) we obtain, since g is a continuous function that {~; g(~, r + h) < 0} --- {~; g(~, v) < 0} as h --+ 0, by interchanging integration and partial differentiation, which is possible under the conditions above (see [62], p. 238) that lim 1 D l ( h ) =
h-+0
/
f~(x, r) da~.
(2.73)
g(x,~)<0 With equations (2.72) and (2.73) together the result is obtained.
[::]
A problem of the last result is that the derivative is given as the sum of a surface and a domain integral. Since it is often difficult to calculate surface integrals, another form which transforms this surface integral into a domain integral might be useful. Uryas'ev has proved such a representation formula in [136], but the proof requires that the gradient Vxg(~, v) does not vanish in the domain {~;g(~, r) < 0}. If the function g(x, 7) is bounded from below and continuously differentiable with respect to ~, there must be a point x0 in the domain, where the gradient vanishes. Therefore this assumption is to restrictive. The following theorem shows that the result remains valid even if there are points where the gradient vanishes as long as the Hessian at these points is regular.
25
T h e o r e m 22 Given is a function F(v) as in the theorem 21, p. 23 and the same conditions as in this theorem are fulfilled. Assume further that: 1. The function g(:e, 7-) is twice continuously differentiable. 2. For a given 7- E I there is a finite number of points Y z , . . . , Y k in the interior of the domain {x; g(x, 7-) < O} with V y g ( y i , 7-) = o and the Hessian H g ( y i ) is regular at all these points. 3. The dimension n is larger than two. Then the function F(7-) is differentiable at 7- and we have for the derivative F'(v) that
F'(7-)=
/:f(*, 7-)gr (*, 7-) x'-z , j [ f ~ - ( z , 7 - ) - d i v ~ ~V--~-~g(x:~))]Vv x g ( . , 7 - ) ) ] g(x :)_.So
dx.
(2.74)
PROOF: First we assume that the gradient does not vanish. Then the divergence theorem gives that div ( f ( x , 7-)g,(x, 7-) V x g ( x , 7-)
s-5~,,.5[ 2) d~
g(x:)_
J
f ( u , 7-)
g,(y,r)
, ~yg(y,7-)
(2.75)
Vyg(y,7-) }dsT(y).
IVug(u, 7-)1~IVug(u, 7-)1' IVug(u, 7-)1'
Since the scalar product is always equal to unity, we get then f = /
f(Y,7-)
g~-(u, 7-) IVug(u, 7-/I ds~(u).
(2.76)
But this is just the second term in equation (2.62), p. 23 in theorem 21 and so the result is correct in this case. Now we assume that there is one point Yl in the interior of D where the gradient vanishes and the Hessian is regular at this point. Then we can again use the divergence theorem for a slightly reshaped domain F(e) = F \ {x; Is - Yl [ -< e}, where e is chosen so small that F(e) C F. Then the gradient does not vanish througout F(c) and we get using the result we just proved that f
div ( f ( y , 7-)gr(y, 7-) V y g ( y , 7-)
1~12/
F(,)
gr(u, 7-)7-)1 dsr(u) f f(u, 7-)Ivug(u,
a(,-)
=x~[(~)
26
~.
(2.77)
/ f(y,T)gT-(y,T) Yl --Y
+
Ivug(u,r
Kr
(~
Vyg(y,7")]) dse(y).
ul' IVvg(u,r =I2(Q
Here KE is the sphere around Yl with radius ~ and ds~(y) denotes surface integration over Ke. For the second integral we have, since the absolute value of the scalar product is always less equal than unity, that
II~(r < K / IVyg(y, 7-)1-1 ds,(y)
(2.78)
K~ with K = maXyEF\F(e ) If(Y, r)gr r)l. Making a Taylor expansion of the gradient at Yl gives
Vyg(y, 7"): Hg(yl)(y -- Yl)
+ O(E).
Let ~ 1 , . . . , ) ~ be the eigenvalues of the Hessian. Since we get with 0 < )% = min{]All,..., I~n]} that
(2.79)
Hg(yl)
[vvg(v, ~)1 _> ~Ao + o(~).
is regular,
(2.80)
We have then for the integral I2(e)
112(e)l _ K / [e~Xo+
o(t:))] -1
dse(y).
(2.81)
Kr Evaluating this integral using equation (2.29), p. 16 gives then for n > 2 that
I~(r = o(r ~ ~ o.
(2.82)
Here we need the assumption that n > 2. Therefore as c ~ 0 the integral tends to zero and we have in this case lim~_o
/
F(~)
=
div
(f(y, r)gr(y, T) Vyg(y, 7") ~ dx IVug(~, ~)12/
/ f(u, ~) gr(Y' ~)
Is(c)
(2.83)
dsr(u).
, ]
This proves the result for the case of one stationary point. It is generalized in an analogous way for several points with vanishing gradients. [:] Using the same arguments as in the theorem, we get a simple result, which shows how to transform a surface integral into a domain integral. 27
C o r o l l a r y 23 Given is a Cl-function f : 1~n ~ 1f~ with n > 2 and a compact domain F C ~ n defined by a C2-function g : ~ n --+ R by F = {~;g(x) < 0}. The gradient of g does not vanish on the surface a = {*; g(.) = 0} and vanishes only at a finite number of points in the interior of F and at all these points the Hessian Hg(ze) is regular. Then
vg(~) G
,..
F
PROOF: As in the last theorem.
2.6 2.6.1
(2.84)
Probability Univariate
[]
Distributions
distributions
A univariate random variable X is called a normally distributed random variable if its p.d.f, has the form
f(x)_ ~ v1~ e _ ~
(2s5)
Here ~ ( X ) = # and var(X) = cr2. If # = 0 and c~2 = 1 the distribution is called standard normal. The c,d.f, of this distributions is denoted by ~(x), i.e.
~(~)=
1 /
(2.86)
e -y212 dy.
--CO
Its density is denoted by ~l(x), i.e. ~l(x) = (27r)-l/2exp(-x2/2). For this c.d.f, there is the following asymptotic expansion (see [1], p. 932, 26.2.12.) 9( - x )
~l(x)x ( 1 + f i
( - 1 ) ~ (x2 2 n~- 1 ) ' ' )
, x --* cx~.
(2.87)
rt----1
Here the symbol n!! denotes the product over all odd natural numbers less equal n. The expansion above is divergent. The error R~ for terminating the expansion at the n-th term co
R . = (-1)"+1(2n +
1)!! / ~~1(~1dy.
(2.88)
x
Often only the leading term in this expansion, Mill's ratio, is used r
~1(~)-, x ~ . x
28
(2.891
The square root of the sum of the squares of n independent random variables X x , . . . , X ~ with standard normal distributions X/~-]~=IX/2 has a X-distribution. This distribution has the p.d.f. 2x'~-1
x.(x)- 2./2r(n/2)
e - ~ / 2 for x > O.
(2.90)
For the complementary c.d.f, of this distribution Q,(x]n) = P ( ~ X g > x) from equation 26.4.10, p. 941 in [1] the following asymptotic expansion is derived 2_ x,-2-~/2 Q ( x l n ) " 2"12P(n/2)
_ X,(X)
7
'
x--+ ~o.
(2.91)
A random variable X has an exponential distribution with parameter .~, symbolically Exp()0, if its p.d.f, has the form f ( x ) = Ae- ~
2.6.2
The
n-dimensional
for x _> 0.
normal
(2.92)
distribution
Let X1, 9 9 Xn be an n-dimensional random vector with p.d.f.
f(x)=(27r)-~/2,det(~),-U2exp(-l(x-,)T~-l(~-,)).
(2.93)
Such a distribution is called an n-dimensional normal distribution. (We consider here only non-singular distributions.) Here /* = ( # I , - . - , P - ) is an ndimensional vector and .~ = (crij)i,j=l ...... is a positive definite n x n-matrix and we have
~,(Xi) cov(Xi,Xj)
= #i f o r i = 1 , . . . , n , = ~ij f o r i , j = 1 , . . . , n .
(2.94) (2.95)
Such a distribution with mean vector/* and covariance matrix ~7 is denoted by Nn(/*, ~ ) . The density of the Nn(o, In)-distribution is denoted by ~,~(x), i.e. -
Ex
y
= (2rr)-"/~exp
(-Ixl=/2).
(2.96)
Let the random vector X consist of two vectors X1 = ( X 1 , . . . , X r ) and X 2 = ( X r + l , . . . , Xn). The mean vector is split in the same way /'1 = ( # 1 , " ' , # r )
/*2 = (#r+l,...,]-ln)
=
(JET(X1),...,/E(Xr))
=
(-t~7(Xr+l),...,J~(Xn)).
(2.97)
Then the covariance matrix has the form
~:
( ~721 ~'11 "~'12 ~'22 ) " 29
(2.98)
Here ~Tn is the covariance matrix of the random vector X 1 , ,~22 is the covariance matrix of the random vector X2 and ~712 is the matrix of the covariances between the components of the random vectors X1 and X2. given X1 is an (n - r)-
T h e o r e m 24 The conditional distribution of X 2 dimensional normal distribution
(2.99)
Nn-r(/-t2 -4- ~21 ~'111(X1 - / 1 1 ) , ~22 - ~21~111~'12), where "~7~ is the inverse of the matrix "~11-
PROOF:
2.6.3
See [111], chapter 8, p. 552.
C a l c u l a t i o n of n o r m a l
[]
integrals
Here some integrals are evaluated, which appear in the following. More detailed discussions and results can be found in [100] and [106]. L e m m a 25
/?
=
dx
Ixl~l(~)
co
S
.
(2.100)
X~l(X) dx
(2.101)
PROOF:
F =
2
/0
Ixl~l(x) dx
=
2
d~l(x)l
=
2 . ~:::~1(0)
co
dx /
dx
= 2 1--~
v~
:
9
[]
L e m m a 26 Given are a negative definite n • n-matrix A and two n-dimensional vectors b and c. Then
f
1 ~
exp(~ A~) d~
-
(2~)"/2 x/Idet(A)l'
(2.102)
R"
/ oxp(b~ + ~ a ~ )
d~ -
R"
(271)n/2
1 T
-1
exp(~b a b), (2.103)
~/[ det(A) l I c T x [ e x p ( ~1x T A x ) d x
=
30
2(27r)(n-1)/2 c T a - l c det(A)
1/2.
(2.104)
PROOF: The first two assertions are shown in [100], p. 19. To prove the last assertion, we assume that - A is the inverse of a covariance matrix of an n-dimensional normal distribution N(o, - A - l ) . The random vector with this distribution is denoted by X . Then the coordinates are rotated by multiplying by an orthogonal matrix T = ( t l , . . . , ?~n)T w i t h tl = [c:[-1c and then a new random vector Y is defined by Y = T X . Then we have
= Jcl
~/[ det(A)[ (27r)n/2 i
leTzlexp(lxTAx)2 dx
flyll
du,
(2.105)
where the term in the brackets is the first absolute moment of a normally distributed random variable Y1 with mean zero and variance [c[-2cTA-lc; so we get for the integral = lc142var(Yx) = 4 2 1 c T A-1cl. Multiplying by
2.7
(27r)"/ul det(A)1-1/2
(2.106)
the result is obtained.
D
C o n v e r g e n c e of P r o b a b i l i t y D i s t r i b u t i o n s
D e f i n i t i o n 10 A sequence F,~ of k-dimensional distribution functions is called weakly convergent to a distribution function F iff limoo F . ( x ) = F ( x ) for all points x E ~ k at which F is continuous. F,--+ F.
(2.107) Symbolically we write then
D e f i n i t i o n 11 If X ~ and X are random vectors with distribution functions F~ and F, then the sequence X n is called to be convergent in distribution to X iff F,~ --+ F. Symbolically we write X,D-~X.
(2.108)
T h e o r e m 27 ( C r a m e r - W o l d - D e v i c e ) A sequence (Xn)~c~W of k-dimensional random vectors X n = ( X n l , . . . , X ~ k ) converges in distribution to a random vector X = ( X 1 , . . . , X k ) iff for all (tl , . . . , tk ) E ~ k k
Ej=ltjXj
9
31
=l tj X,~j
D
PROOF: A proof is given in [13], p. 335.
[]
Using this device the convergence of n-dimensional random vectors can be proved by showing the convergence of linear combinations of components of these vectors. D e f i n i t i o n 12 Let X be a random variable. The moment generating function of X is defined by O0
M ( s ) = lF,(e "x) = /
e s* dF(x)
(2.109)
--00
for all s E ~ with M ( s ) < cxz. If the c.d.f. F ( x ) has a p.d.f, f ( x ) , M(s) can be written as OO
M(s) : /
e*~f(x) d x .
(2.110)
The standard normal distribution N(0, 1) has the moment generating function (see [13], p. 242)
M(s) = e "~/~ .
(2.111)
For an arbitrary normal distribution N(p, (r~) instead we have
M(s) = e ('a)2/~+s" .
(2.112)
An exponential distribution Exp(A) has the moment generating function A M(s) - A - s
(2.113)
The convergence of random variables can be shown by proving the convergence of the corresponding moment generating functions. T h e o r e m 28 Assume that the random variables Xn (n E tV) and X have moment generating functions M~(s) and i ( s ) defined in the interval [-6, 6] with 5 > 0 such that i n ( s ) ~ M(s)
for each s E [-5, 6 ], then X~ D X,
(2.114)
i.e. the random variables X,~ converge in distribution towards X . PROOF:
See [13], p. 345.
[]
By combining the last two theorems a convergence theorems for multivariate random vectors can be derived. 32
T h e o r e m 29 A sequence (Xn)neZ~r of k-dimensional random vectors X,~ -~ ( X ~ I , . . . , Xnk) converges in distribution to a random vector X = ( X 1 , . . . , Xk)
iff for all t = ( t l , . . . ,tk) E 1f~k the moment generating functions Mn,t(s ) of the random vamables ~ j = l tjX~j converge towards the moment generaling function 9
k
k
M t ( s ) of the random variable ~ j = l t j X j for any s in an interval [-St, 6 t ] with 6t > 0 . PttOOF: From theorem 27, p. 31 follows that from the convergence of all linear combinations of the components X n l , . . . , Xnk, i.e. of the random variables ~ = 1 tiX,~i to the random variable ~ =k1 tiXi follows the convergence Of the random vectors X n to X . But the convergence of such a linear combination can be shown by proving that its moment generating function converges to the moment generating function of ~ = 1 tiXi. [] 9
33
Chapter 3
Asymptotic Analysis 3.1
The Topic of Asymptotic Analysis
The main topic of asymptotic analysis is the study of the behavior of functions when the argument approaches certain limits on the boundary of the domain of definition of the function. In a more abstract setting, for two metric spaces X and Y and a function f : X ~ --+ Y, where X ~ C X with X ~ ~ X, asymptotic analysis studies the behavior of the function f ( x ) , as x --* x0, where x0 is a limit point of X ~ with x0 ~ X ~. Certainly it is also possible to study a function in the neighborhood of points inside its domain of definition with the methods of asymptotic analysis, but the interesting and important results are concerned with boundary points. One of the basic ideas is to find a "nice" function g(x) such that as x --~ x0 we have
f(x) = g(x) + r(x) (3.1) with r(x) being "small" in comparison with g(x) for x "near" x0. "Nice " means here that g(x) should be of a simpler form as f(x), but describing the structure of f(x) near x0 "sufficiently good". This is different from the problems of classical analysis where in general the behavior of f as x --~ x0 with x C X' is studied. If f is continuous at x0, we have limx--.xo f(x) = f(xo). If f is differentiable at x0, the function can be approximated still more precisely by its Taylor expansion at this point. For a x~ we can use function f(xl,..., xn), which is differentiable at a point (x~ for example the first order Taylor expansion to describe the function near this point
f(xl,...,x,~),~ f(x~176
f i Of(X)oxi X = ~ o ( X ' - X ~
(3.2)
i=1
If this is not sufficient, for example, if we are looking for e x t r e m a of the function, we can make higher order Taylor expansions. Finally, an analytic C~176 is given by its Taylor expansion at a point. 34
W h a t makes asymptotic analysis a little bit tricky is the fact t h a t here we are interested in the behavior of the function near a point where the function is not defined. Therefore here a more pathological behavior is possible near these points on the boundary. EXAMPLE: Consider the function
f(z)
= exp(-1/z)
(3.3)
defined in the complex plane with the exception of the origin. In the region {z;Re(z) > 0} the function approaches zero as z ~ 0, but for in the region {z; Re(z) _< 0} the function shows a chaotic behavior as z ~ 0, since 0 is an essential singularity of this function. [] Standard problems for asymptotic approximations are: 9 T h e evaluation of integrals 9 The computation of solutions of equations 9 The evaluation of sums W h y use asymptotic methods in cases, where numerical solutions are available ? If there is only one number needed, it may be certainly easier to calculate it by m a y b e crude numerical methods. But in general, asymptotic expansions give much more insight in the structure of the problem and how various parameters influence the result. Deriving such results is surely more complicated, but we gain much more information by doing it this way. For example, using advanced integration methods it is possible nowadays to compute the distribution function of sums of r a n d o m variables very accurately, but a result as the central limit theorem gives much more understanding what happens in the limit. Secondly sometimes even now the numerical difficulties are such that an asymptotic result is better. Thirdly, asymptotic approximations are a good starting point for further numerical or Monte Carlo methods. Textbooks about asymptotic analysis are [127], [105], [15], [59] and [140]. The first book by Sirovich gives a broad overview about asymptotic methods, but is quite sloppy about proofs. The book of Olver [105] covers only univariate functions, but in a very detailed form. The book of Bleistein and IIandelsman [15] treats the multivariate case in chapter 8. Wong [140] considers bivariate integrals in chapter V I I I and multivariate integrals in chapter IX.
3.2
T h e C o m p a r i s o n of F u n c t i o n s
The most i m p o r t a n t cases in asymptotic analysis are when the function f(x) approaches zero or infinity as x -~ x0. But to describe this behavior more precisely, we need some ideas to measure the velocity towards infinity or zero. For example, all the following functions approach infinity as x + ec In(In(x)), In(x), v @, x, x 2 , 2 ~ , exp(x), exp(exp(x)), 35
(3.4)
but it can be seen easily, using l'Hospital's rule, that from the left to the right the velocity of convergence towards infinity increases. To find simple functions which model the asymptotic form of more complicated functions we have to introduce order relations between functions. To compare the behavior of two functions, often the following symbols o and O, introduced by Landau, are used. D e f i n i t i o n 13 A f u n c t i o n f : M ---* 1~ is o f order 0 o f the f u n c t i o n g : M ---* 1t~ as x ---* xo with x E M i f there is a constant K and a neighborhood UK o f xo such that I f ( x ) I<_ I f I g ( x ) I for all x E UK 0 M . (3.5) T h e n we say that as x --* xo, f is large " 0 " o f g and write symbolically f ( x ) = O ( g ( x ) ) , x -~ xo.
(3.6)
D e f i n i t i o n 14 A f u n c t i o n f : M --* ff~ is of order o o f the f u n c t i o n g : M --~ 1R as x --* xo with x E M i f f o r all constants I~ > 0 there is a neighborhood UK o f xo such that I f ( x ) ]< K ] g(x) I for all x e UK N M . (3.7) We write symbolically f ( x ) = o(g(x)), x -+ xo.
(3.8)
and say that, as x --* xo, f is s m a l l "o" o f g.
For these order relations there are a number of useful relations following from the definitions above. We have: 1. O ( O ( f ) ) = O ( f ) , 2. O ( o ( f ) ) = o(f), 3. o ( O ( f ) ) = o ( f ) ,
4. o(o(f)) : o(f), 5. o ( f g ) = o ( f ) o ( g ) , 6. O ( f ) . o(g) = o ( f g ) ,
7. o ( f ) + o ( f ) = o ( f ) , 8. o ( f ) + o ( f ) = o ( f ) , 9. o ( f ) + O ( f ) -- O ( f ) .
The meaning of the first is for example that if g = O ( h ) and h = O ( f ) , as x ~ x0, then g = O ( f ) . A stronger asymptotic relation between functions is the following.
36
D e f i n i t i o n 15 I f f o r two functions f : M --* ~ and g : M --~ ~ vanishing in a neighborhood of xo) as x --* xo with x E M lim f ( x ) = 1,
(both non
(3.9)
these functions are called asymplotically equivalent and we write symbolically f ( x ) ,~ g(x), x ~ xo.
(3.10)
This is an equivalence relation between functions. It gives more information t h a n the " O " - and " o ' - relations. T h e relation f ( x ) ~ g(x) as x ---* x0 means t h a t the relative error of a p p r o x i m a t i n g f ( x ) by g(x) converges to zero as x ---* x0, since lira g ( x ) - f ( x ) _
1-
lim f ( x ) =
0.
(3.11)
A n o t h e r way of expressing f ( x ) ~ g(x) as x --* xo is the equation f ( x ) = g(x) + o ( f ( x ) ) .
(3.12)
In m a n y cases the main interest is to find for a given function f ( x ) an a s y m p t o t i c a l l y equivalent function g ( x ) which is of a simpler form. A s y m p t o t i c equations can be multiplied, divided and raised to arbitrary powers. If f ,-~ r and g ~ r as x ---, x0, we have 1. f g ~ r 1 6 2 2. f / g ~ r 1 6 2 3. f ~ -.~ r ~ . A s y m p t o t i c equations can be added only if all s u m m a n d s are positive (or negative). T h e following example shows t h a t elsewhere wrong results are obtained.
EXAMPLE: x-x
2 x2
~
3x-x
2, x ~ ,
,-~
x 2, x - - , o c .
(3.13) (3.14)
By adding these a s y m p t o t i c equations we would get t h a t x is a s y m p t o t i c a l l y equivalent to 3x as x --+ c~, which is wrong. []
37
Consider now t h a t f ( x ) ,,, g ( x ) as x ~ xo. Under which conditions then follows t h a t h ( f ( z ) ) ~ h ( g ( z ) ) as x --* zo for a function h? Theorem
30 Let be given two f u n c t i o n s f ( x )
and g ( x ) on a set M with
(3.15)
f ( x ) ~ g ( x ) , x - * xo.
F u r t h e r is given a f u n c t i o n h on a set D such that f ( x ) E D and g ( x ) E D f o r all x E M . I f there is a closed set X C D (which m a y contain i n f i n i t y ) with f ( x ) E X and g ( x ) E X f o r all x C M and such that h is c o n t i n u o u s on X and
# 0 for all 9
X, then h ( f ( x ) ) ,.~ h ( g ( x ) ) , x ~ xo.
(3.16)
PROOF: (See [11], p. 14) If h ( f ( x ) ) 75 h ( g ( x ) ) as x ----, xo, there m u s t be a 5 > 0 and a sequence x~ ---, x0 with [ h ( f ( x , ~ ) ) / h ( g ( x , ~ ) ) - 1 1>_ 5 for all n E f / . The sequence f ( x , ~ ) has (at least) one limit point z in D. Let now be x,~ a subsequence ofx,~ with f ( x , ~ k ) ---* z, then due to equation (3.15) g(x,~k) --~ z. Since h ( x ) is continuous, we get therefore h ( f ( x , ~ k ) ) --~ h ( z ) and h ( g ( x , ~ , ) ) --, h ( z ) . From this follows using h ( z ) # 0 t h a t h ( f ( x , ~ ) ) / h ( g ( x , ~ ) ) --~ 1. But this contradicts the assumptions about the sequence. []
It is i m p o r t a n t to notice for which cases the theorem above is not valid. If we have a function f with f ( x ) ~ 0 as x ---* oe, then we can conclude t h a t f ( x ) ,.~ g ( x ) , x ---* oz ~ l n ( f ( x ) ) ~ ln(g(x)), x --* co.
(3.17)
But in general we have l n ( f ( x ) ) ~ ln(g(x)), x ~ oo r
f ( x ) ,,~ g ( x ) , x ---* oo.
(3.18)
Therefore it is often easier to derive an a s y m p t o t i c a p p r o x i m a t i o n for the loga r i t h m of a function t h a n for the function itself. But the a p p r o x i m a t i o n of the l o g a r i t h m gives less information as shown in the next example. EXAMPLE: Given are the functions f ( x ) = x '~ 9e - ~ und g ( x ) = e - x . T h e n lim l n ( f ( x ) ) _
ln(g(x))
lira n - l n ( x ) - x _ 1,
(3.19)
-x
but for the functions themselves we get lim f ( x ) _
g(x)
Xn
. e-X
lim - - -
lim x ' ~ = ~ .
(3.20)
T h e differentiation of order relations and a s y m p t o t i c equations is in general possible only under some restrictions. Results are given in [105], p. 8-11 for holomorphic functions and functions with m o n o t o n e derivatives. A more complete review of these results can be found in [11]. T h e integration is much easier as shown in the following lemma. 38
L e m m a 31 Is S and g are continuous functions on (a, oo) with g > O, we have for integrals the following results
. IS $ ~ g(~)d~ = oo, then f [ f ( y ) d y = 0 ( ] ~ g(y)d~), ~ -~ oo,
(~) f = O ( g ) , ~ - ~ oo
(b) S = o(g),~ ~ 0o
~ f ( y ) d y = o(f~ g(y)dy), x --, oo,
(c) f ~ c. g with c r O, x --* oo . I7 $~ g(~)d~ < oo, then f oo f ( y ) d y = O ( f ~ g ( y ) d y ) , x -~ oo,
(a) S = O(g), 9 ~ oo (b) f = o ( g ) , ~ o o
(c) S ~ c .g with c # o, x --~ oo ~
f ~ f ( y ) d y ~ c. f ~ g(y)dy, x
-~
oo.
PROOF: We prove the first three s t a t e m e n t s . If f = O(g) as x --~ oo, we can find using t h e fact t h a t S a n d g are c o n t i n u o u s a n d g > 0 a c o n s t a n t K such If(Y)l ~- Kg(y) for all y > a a n d n o t o n l y for a n e i g h b o r h o o d of oo a n d therefore
~f(y)dy
<_ K . ~ g ( y ) d y .
(3.21)
In t h e second case t h e r e exists for all c > 0 an x~ :> a such t h a t IS(Y)I -< cg(y) for all y _> x~ a n d hence for all x _> x~
<_ e
g(y)dy < c
g(y)dy.
(3.22)
Since t h e i n t e g r a l fa~ g(y)dy is divergent, we can choose an x so large t h a t
f g(y)dy >_ f' f(y)dy.
(3.23)
T h e l a s t two r e l a t i o n s t o g e t h e r give
ffff f(y)dy
<_ 2c ffff g(y)dy.
(3.24)
T h i s c o m p l e t e s t h e proof. To prove the t h i r d s t a t e m e n t we use the last result for t h e f u n c t i o n f - cg which is o(g), if S ,,~ cg. []
39
3.3
Asymptotic
Power
Series
and Scales
To describe the a s y m p t o t i c behavior of a function more precisely, one can define general expansions. D e f i n i t i o n 16 Let f be a continuous function on the set M C 1R with xo a limit point of this set. The formal power series En~=o an(x - xo) n is said to be an asymptotic power series expansion of f , as x --* xo, in M if the following equations
=o
(3.25)
hold for all m 6 zW or in an equivalent formulation f ( x ) -- ~
an(x -- x0) ~ = O ((x - xo) "~+1
(3.26)
n=0
We write symbolically f(x)~a,(x-
xo) ~, x---+xo.
(3.27)
nz0
D e f i n i t i o n 17 I f the equation (3.25) is satisfied only for rn = 0 , . . . , N - 1, but not for m >__N , we say that ~ =N-1 o a~ (x -- xo) '~ is an asymptotic power series of f as x ---* xo to N terms and write N-1
f(x) ~ E
ai(x -- xo) i, x ---+ xo.
(3.28)
i----0
ooo a , x -n is said to be an asymptotic power D e f i n i t i o n 18 The formal series ~ , = series expansion o f f as x ---* czo if for all m 6 ~ always
(3.29) n~O
An equivalent formulation is f ( x ) = ~ - ~ a n x - n + O (x - ( r e + l ) ) , x --~ ~ .
(3.30)
n-~O
for all m 6 ~ .
I f these conditions are fulfilled, we write
f(x) ~ ~
anx -~, x ~ oo.
n~-O
40
(3.31)
In the s a m e way as above we define an a s y m p t o t i c power series expansion of f as x --~ (x~ to N t e r m s if the conditions above are satisfied only for m = 0 , . . . , N - 1. If there exists an a s y m p t o t i c power series expansion of a function, it is unique (see [15], p. 16). In m a n y cases a s y m p t o t i c expansions of functions have to be m a d e with respect to more complicated functions, which are no power series. D e f i n i t i o n 19 A sequence of functions {Ca(x)} with n E tV is called an asymptotic scale or asymptotic sequence as x --+ Xo in M if for all n the function Ca(x) is a continuous function on M and further r
= o(r
~ --+ ~0.
(3.32)
In [15] the t e r m " a s y m p t o t i c sequence" is used. Here in the following we use " a s y m p t o t i c scale" as in [11]. EXAMPLE: T h e following sequences of functions are a s y m p t o t i c scales: 1. {(x - x0)7~}, z ~ z0, 7n e C, Re("/a+l ) > Re(Tn),
3. fig(x)]"}, ~--+ x0, g ( ~ 0 ) = 0, 4. {g(x)r
z ~ xo.
Here in 4. the functions {Ca} form an a s y m p t o t i c scale as x --+ x0, while g(x) in 3. and 4. is continuous and not identically zero in any neighborhood of x0. [3 Now we can define an expansion of a function with respect to such a scale. D e f i n i t i o n 20 Let f be a continuous function defined on M C ~ and let the sequence {Ca(x)} be an asymptotic scale as x --+ xo in M . Then the f o r m a l series En~=l a,r is said to be an asymptotic expansion of f(Je) as x ~ Xo with respect to {Ca(x)} if f o r all m E PC always
lim ~-+~o
f(x) -E~m=l aar
= 0,
(3.33)
Cm(X)
or in an equivalent f o r m
f(x) = f i a ~ r
+ O(r
(3.34)
Symbolically oo
f ( x ) .v E
aaCn(x), x --+ x0.
rt=l
41
(3.35)
D e f i n i t i o n 21 I f the equation (3.33) holds only f o r m = 1 , . . . , N , then we write N
f ( x ) ..~ E
a~r
x --~ x0.
(3.36)
rt:l
This is called an asymptotic expansion o f f to N terms with respect to the asymptotic scale {r or an expansion of Poincard type. If a function has an asymptotic expansion with respect to an a s y m p t o t i c scale, this expansion is unique and its coefficients are given successively by a,~ = lira x'-*Zo
[
f(x)-
air i----1
]
/era(x).
(3.37)
This is shown in the next theorem. T h e o r e m 32 Let the function f have an asymptotic expansion to N terms with respect to the asypmtotic scale {r as x ---* xo in the form N
f(x) ~ E
a,~r
x -~ x0.
(3.38)
rt~-I
Then the coefficients a l , . . . , am are uniquely determined. O
PROOF: See [15], p. 16-17.
EXAMPLE: But it is possible that different functions have the same asymptotic expansion. For example the functions 1 -
-
x+l
and
1
+ exp(-x)
x+l
(3.39)
both have the asymptotic expansion with respect to the scale {x-m}, namely E
x~
(3.40)
n--~l
as x --+ 0% since x '~ e x p ( - x ) tends to zero as x --+ oo for all n C ,W.
[]
The following two theorems about integration and differentiation are from [15]. The integration of asymptotic relations is possible under weak conditions.
42
T h e o r e m 33 Given is an asymptotic scale {r of positive functions on I = (a, xo) as x ~ xo such that for all x e (a, xo) the integrals
ICn(y)l dy
9 n(x) =
(3.41)
exist. Then the k~n(X) form again an asymptotic scale. I f f is a function such that as x --~ xo N
a=r
f(x) ~ ~
x ~ Zo
(3.42)
n=l
and if further the function ~o
g(x) = / f ( t ) dt X
exists for all x E (b, xo) with a < b < Xo, then g(x) has an asymptotic expansion to N terms as x ---* xo with respect 1o the asymptotic scale {q/,~(x)} in the form N
g(x) ~ E
anq2'~(x)' x --~ xo.
(3.43)
n=l
PRooF:
See [15], p. 30-31.
O
This shows that integrating an asymptotic expansion with respect to an asymptotic scale with positive functions gives immediately the asymptotic expansion with respect to the integrated scale functions. Differentiating them is not so easy, we must assume that an expansion of the derivative exists. T h e o r e m 34 Given is an asymptotic scale {r as x --+ xo on an interval I = (a, xo). Assume that r --- o(1) as x ---+xo and that all t n ( x ) are differentiable in I with derivative functions X~(X) = r such that these functions are again an asymptotic scale {Xn(X)} as x ---+xo in I. A differentiable function f ( x ) in I has an asymptotic expansion to N terms with respect to the asymptotic
scale {r N
a,r
f(x) - f(xo) ~ ~
x ~ x0.
(3.44)
n=l
I f then a) the function )in(x) is positive in an interval (b, xo) with a < b < xo and b) the function f ' ( x ) has an asymptotic expansion to N terms with respect to this expansion has the form N
f'(x) ~ E
a n x , ( x ) , x --+ xo.
rt=l
43
(3.45)
PROOF:
See [15], p. 32.
[]
It m a y occur that the derivative f ' ( x ) of a function f ( x ) does not have an asymptotic expansion with respect to an asymptotic scale {r even in the case that the function f ( x ) itself has an expansion with respect to the asymptotic scale {(I)n} with r = (I)',, see [15], p. 31.
3.4
Deriving Asymptotic Expansions
The derivation of an asymptotic approximation or expansion for a function f ( x ) as x ---* x0 can not be made schematically. If we consider for example the function f ( x ) = e -~: as x ~ oz and the asymtotic scale {x-n}, then an asymptotic expansion of f ( x ) with respect to this scale would be f(z)
~
O . x-",
x
--+ ~.
(3.46)
i=1
This says only that f ( x ) approaches zero faster than all functions of the asymptotic scale as x ~ cc and gives no further information about the asymptotic behavior of f ( x ) . Therefore before making an expansion we have to choose a suitable scale. This is done usually by making some rough prior estimates of the order of magnitude of the function under consideration. There are normally three (or two) steps in deriving an asymptotic expansion for a function f ( z ) as x ---* x0: 1. Choose an asymptotic scale {On(X)}, 2. Calculate the expansion coefficients a,~ for the asymptotic relation
f(x) ~ ~
a n e n ( x ) , x --~ x0,
(3.47)
n----1
3. Calculate (if possible) error estimates for the approximation errors N
f(x) - E
a,~r
(3.48)
n----1
for N = 1 , 2 , . . . . Often there is no possibility to find the error estimates in step 3. Then judgements about the quality of the approximations can be based only on numerical examples, which give some idea about the magnitude of the error. This happens especially in the multivariate case. An i m p o r t a n t new method for calculating approximations is the combination of analytic and Monte Carlo methods. Here the analytic approximations found by asymptotic analysis are used as an initial estimate of the integral and then this estimate is improved by Monte Carlo importance sampling methods which use the information about the structure of the integrand found by the analytic method. 44
Chapter 4
Univariate Integrals 4.1
Introduction
In this chapter we derive two important results about the asymptotic expansion of univariate integrals. We consider only Laplace integrals OO
f e-~'th(t) dt
(4.1)
0
and Laplace type integrals
f e~Y(t)h(t) dt.
(4.2)
D
Here D is a subset of ~ .
4.2
Watson's
Lemma
In this section an asymptotic approximation for Laplace transforms is derived. This result was proved by Watson in [138]. Given is the Laplace transform OO
(4.3)
I(~) = f e-~tf(t) dt 0
of a function f(t). Assume that for this function /(t)
=
f(t)
~
o(:), ~
t --, oo,
(4.4)
cmt a=, t --+ O+ .
(4.5)
rn=O
with - 1 < a0 < al < a2 < ... < an
---+ oo.
45
The
arn'S
are
not necessarily integers.
L e m m a 35 ( W a t s o n ' s L e m m a ) Given is a locally integrable function f ( t ) on (O,c~) bounded for finite t and (4-4) and (4.5) hold. Then, as A---*
I(A) ~
~.Cm i e-;~tt am d t = ~ c,.F(a.~+l) Aam+ z m=0
0
(4.6)
m--0
PROOF: Let R be a fixed positive number. Then we write R
oo
/(A) = / e - ~ t f ( t ) dt+ f e - ~ t f ( t ) dt. 0
(4.7)
R
From equation (4.4) follows that there is a constant K such that If(t)] _< Ke at for all t > R. Hence we get for A > a always O0
(4.8)
I1=(~)1 _< /e-~'(f(01dt R (DO
OO
_< / e - ~ t K e a t
dt
Is / e-(A-a)tdt = - -K e_(~,_~) n A-a
=
R
R
FL~ - a e-a"]J = o(e-~R)" The last relation follows, since the term in the square brackets remains bounded as
)l---+ oo.
For each natural number N we can write using equation (4.5) N
f(t) = E
c'~tam + pN(t)
(4.9)
m----0
with pn(t) = O(t aN+l) as t ~ 0+. Then we write the integral N
R
in the form
R
.,.
+ m----0
II(A)
0
(4.10)
0
For these integrals we get then R
oo
c~
(4.11) 0
0
_
R
F(1 -t- am) + O(e_AR), A ---+c~.
Al+a~ 46
Now we have that N
R
1(~) = ~ cmr(am + 1)A- ( i + a ' )
+
r n -~ O
From (4.5) and since that
/ pN(t)e-~t dt + O(e-~R).
(4.12)
o
pN(t)
is bounded on (0, R), there is a constant KN such
]pN(t)l < KNt~N+~
(4.13)
for all t E (0, R]. Then we obtain an upper bound
lgY(t)r -At
< KN
taN+'e -At
dt =
KN r(ay+l
--
-t- 1)
,~ag.l.19rl
(4.14)
o
With the equations (4.8), (4.11) and (4.14) the final result is obtained.
4.3
[]
T h e Laplace M e t h o d for U n i v a r i a t e Functions
For Laplace type functions it is possible to derive asymptotic approximations by studying the structure of the functions in the integral near the global maximum points of the function in the exponent. Given is a function I(A) of the real variable A in the form of an integral
I(A) = / h(x) exp(Af(x)) dx.
(4.15)
Here c~ E • and [(~,/3) is a finite or semi-infinite interval, h(x) and f(x) are real functions and A is a real parameter. The Laplace method gives asymptotic approximations for I(A) as )~ ---+oc. To derive this approximation, one studies the functions h(x) and f(x) in the neighborhood of the points where the function f has its global maximum max~_<~
47
The following result is from [57], p. 37. Theorem
36 Assume that the following conditions are fulfilled:
1. f ( x ) is a real function on the finite or semi-infinite interval [c~,fl) and in an interval (ct, c~+ e] with c > 0 it is continuously differentiable and sup
~+c_<x<_/~
f(x)
<_ f ( ~ ) -
5, with 6 > O.
(4.16)
For the derivative f'(x) we have that f'(x) f'(x)
< =
0 forallxC(c~,o~+c], Ol)r-1 7!- O((X 0~)r - l ) with r > O. -a(x -
-
-
2. h(x) is a real and continnous function on the interval
[~, ~)
(4.17) (4.18)
with
h(x) = b ( x - a)s-1 + o ( ( x - ~)s-1) with s > O.
(4.19)
3. The integral
f Ih(x)lexp (f(x))
dx
(4.20)
Gt
is finite. Then the integrals ~(~) =
h(x) exp ()tf(x)) dx
(4.21)
o,
with A > 1 are all finite and have the asymptotic approximation 8
I(~) ,~ b F r
r r A--~ exp(~f(c~)), a ---+ oo. \a/
(4.22)
PP~OOF: For simplicity we assume that f ( a ) = 0. To show that all integrals I(A) are finite, we get an upper bound for them
II(~)1 ~ /Ih(x){exp(~f(x)) dx t~
P
<_ f
[h(x)lexp(Af(x)) dx
+
f
Ih(x)lexp(Af(x)) dx.
ot
=/~,x)
=I~,X) 48
(4.23)
The first integral is finite. For the second we get using equation (4.16) that P
II~(&)l < /
(,X-
Ih(xllexp(f(x) +
1 ) ( f ( a ) - 5)) dx
(4.24)
o'd-e
=
Ih(x)l exp (f(x)) dx = O(exp(-)~6)).
exp(-6(,~ - 1)) f c~+e
But since due to assumption 3) the last integral is finite, we find that I I ( ~ ) 1 < for all ,~ > 0. In the first integral we make a substitution x --~ u = - f ( x ) . We have then
du dx
-
f'(x),
(4.25)
1 if(x)"
(4.26)
and
dx du
Now we can write the first integral in the form I1(/~)----
/
[h(x(u))~u]eXp(~u)du.
0
9
(4.27)
9
=k(~)
We have
u = - I ( x ) = - f f'(y) dy.
(4.28)
ot
Using the asymptotic form of f ' we get by integrating that u ~
~-(x -
~)',
x -~ ~.
(4.29)
, u~0.
/ 4.30 /
r
Therefore x - c~ --.
For the function in the square brackets in equation (4.27) we find then the expansion as u --+ 0
h(x(u))~fTa u ,,, a( _ a),-r.
(4.31)
and so we get k(u)-
b a
1
Using now Watson's lemma gives the result.
49
(4.32) []
From this result some special cases can be derived. C o r o l l a r y 37 Let f and h be continuous functions on a finite interval [c~,fl]. For some important special cases the last theorem gives : a) If the global maximum o f f ( z ) occurs only at the point xo C (~,~), h(xo) # O, f ( ~ ) is near ~o twice continuously d•rentiable and f " ( ~ 0 ) < O, then
I(A) ,.~ h(xo)exp(~f(xo))
AIf"(x0)l
' A ~
cr
(4.33)
b) If the global maximum occurs only at ~, h(~) # 0 and f ( x ) is near c~ continuously differentiable with f ' ( c 0 < O, then 1
I(A) ~ h(o~)exp(Af(cr))Alf,(oOi , A ~ oo.
(4.34)
c) If the global maximum occurs only at ~, h(o0 # 0 and f ( x ) is twice continuously differentiable near ct with f'(c~) = 0 and f " ( ~ ) < O, then Z(,~) ~ h(c~)exp(,~f(c~)) PROOF:
2,Xlf,,(cO [ , ,~ ~ ~ .
(4.35)
T h e results follow directly from the last theorem.
[]
For all these integrals it is possible to derive asymptotic expansions in the sense of Poincard if higher derivatives of f and h exist at the global m a x i m u m points (see [15], chapter 5 and [140], p. 58).
50
Chapter 5
Multivariate Laplace Type Integrals 5.1
Introduction
An important part of asymptotic analysis is the derivation of approximation methods for integrals depending on one or more parameters. In the last chapter some results about univariate integrals were given, here we will consider multivariate integrals. The two most important types of integrals are 1. Laplace type integrals
i(~) =
f
h(.)~ ~r
d.
F
2. and Fourier type integrals
J(~) = / h ( . ) e ~j(*) d.. F
Here F is a subset of ~n, h(x) and f(x) are functions defined on F and A is a real parameter. Here the asymptotic behavior of the functions I(~) or J(A) is studied as A approaches infinity or zero. In the following we will consider only Laplace type integrals. For these integrals the asymptotic behavior is dominated by the structure of the functions near the global maximum points of f with respect to F. In the following we will consider only Laplace type integrals. In the case of Fourier integrals also other stationary points of this functions may be important for the asymptotic behavior of the integrals, therefore the structure of the functions near all points with ~Tf(a) = o has to be considered. Such results can be found in [15], chap. 8.4.
51
5.2
Basic Results
In this section we consider asymptotic approximations for multivariate Laplace type integrals as/3 --, oc I(r
= / h(~) e x p ( ~ 2 f ( x ) ) dx.
(5.1)
, J
F
with F C ~ , h(~) a continuous function and f ( ~ ) a twice continuously differentiable function. Here /3 is a real parameter. We set here as parameter ~2 instead of ,~ deviating from the usual terminology, since in the applications in the following chapters for normal integrals the parameter appears in this form. In the years 1948-1966 the Laplace method was generalized to approximations for multivariate integrals. The first result is given in the article of Hsu [76] in 1948. Here the most simple case as described in theorem 41, p. 56 is treated. The most extensive monography about the Laplace method for bivariate integrals is from J. Focke [63], published 1953. Fulks/Sather [66] proved a theorem which allows that the function h is zero at the maximum point of f or has a slight singularity at this point, see theorem 43, p. 60. Jones [79] then in the year 1966 proved the theorem 46, p. 67 about boundary maxima. In the following we will consider only finite-dimensional spaces; some results about Laplace integrals in infinite-dimensional spaces are given in [53], [55] and [54]. In these papers a similar problem, i.e. the asymptotic form of functionals of probability measures, which converge towards Gaussian probability measures, is studied. The asymptotic behavior of I(/3) as /3 --* oc can be studied with similar methods as in the univariate case. But here some additional problems appear and due to this there is no complete theory for the asymptotics of these integrals until now. In this report a summary of results will be given, which should be sufficient for most problems in applications. In principle the asymptotic behavior of these integrals is determined by the structure of the functions f and h and of the integration domain F in the neighborhood of the points or sets where the function f achieves its global maximum with respect to F. In Sirovich [127] in chapter 2.8 the different types of maximum points are described: 1. points in the interior of F (Type I), 2. points at the boundary of F with a smooth boundary in the neighborhood (Type II), 3. points at the boundary of F where the boundary is non-smooth (Type III). Depending on the type of the maximum point different asymptotic approximations are obtained. In general these approximations depend on the values of the functions f, g and h and their first two derivatives at the maximum points. Only if these vanish, higher derivatives must be computed (see [140], chap. VIII, 5). Then the first non-vanishing derivatives determine the form of the asymptotics. 52
The standard results for integrals with a maximum in the interior, i.e. theorem 41, p. 56 and for a maximum at the boundary, theorem 46, p. 67, can be found in the textbooks [15], chap. 8, [140], chap. IX and [127], chap. 2.8. Additional to the standard results in this report we also treat the case that the maximum of the function is a submanifold of the boundary of the integration domain and that there is an additional function depending on the parameter/3. To avoid unnecessary technical details, we first prove a lemma that shows that under slight regularity conditions it is often possible to restrict the integration domain to a compact set. The following lemma gives conditions under which it is possible to replace an integral over a non-compact domain by an integral over a compact domain without changing its asymptotic behavior. This is a generalization of a result of M. Hohenbichler, published in [33]. L e m m a 38 ( C o m p a c t i f i c a t i o n L e m m a ) Given is a closed set F C ff~n and two continuous functions f and h : j~:~n --~ 1R. Assume further that: 1. The set M = { y E F; f ( y ) = m a x : ~ r f ( x ) } is compact,
2. f Ih(~)lJ(Y)d~ < ~, F
3. For every neighborhood V of M : s u p { f ( y ) ; y E F \ V} < m a x x ~ s f ( x ) , 4. There exists a neighborhood U of M such that for all x E U always h(x) >
o (or h(~) < 0). 5. For all neighborhoods V of M always dx > 0.
(5.2)
FnV
Then for all • > 1
/ Ih(~)le~s(~) d~ < ~,
(5.3)
F
and for all neighborhoods V of M holds that
f h(~)j's(~) d ~ - f h(~)e~'J(:~) d~, ~,-~ ~. F
FnV
53
(5.4)
PROOF: Let be m -- supwcF f ( x ) and 5, c be positive constants. For A _> 1 and f ( y ) > rn - 5 we have then
e(:~-l)](Y) >_ e (~-l)(m-5)
(5.5)
e~1(Y) >_ e(X-1)('~-6)e ](y).
(5.6)
or
In the same way for ~ _> 1 and f ( y ) < m -
~ we find
e )~](y) <_ e('~-l)(m-e)e "f(y).
(5.7)
Since f ( y ) <_ m for all y E F we get from equation (5.7) in the limit as e ~ 0 that
F
F
This proves the first part of the lemma. Now we assume that h(y) > 0 for all y E V with V being a neighborhood of M. In the following we use the notations
G(~,v)= J h(y)~~s(y) dy , G,(~,V)= f Ih(y)l~~s(y) dy, (5.8) FnV
FnV
c(~, v)= f h(~)~s(y) dy , Go(A,v) = f Ih(~)f~J(Y) dy. (5.9) F\V
F\V
First we prove the proposition (+): (+) If V is a neighborhood of M with h(y) > 0 for all y C V, then for all neighborhoods W of M with V C W always
(5.10) Let c = m - s u p { f ( y ) ; y E F \ V}. Since f is continuous, there exists a neighborhood VI C V of M such that for y C 1/1 always
f ( y ) >_ m - e/2.
(5.11)
Since we assumed that h(y) > 0 for y C V, we get with equation (5.5) that
IG(~, 171)1 = Ga(s 1/1) >_ e(:'-l)(m-~/2)Ga(1, V~).
(5.12)
Here is Ga(1, V1) > 0 due to assumption 5. From equation (5.7) follows further that
0o(~, v) < ~(~-1)(m-~)0o(1,y). with (~a(1, V) < oo due to assumption 2. 54
(5.1a)
From the relations (5.12) and (5.13) follows
0,(I, v)
e(~-l)(~-~)ao(1, v) ~- c()~-l)(m-el2)Ga(1, V)
a()~, Vl)
=
e_xe/2 Ga(1, V)
ao(1, v)
+0,
(5.14) I --* oo.
But since h(y) > 0 for all y E V, we have that
0 < a(~, vl) < a(~, V)
(5.~5)
and further also for an arbitrary neighborhood W with V C W that 0 < ao(~, W \ V) _< ao(~, ~ n \ V) = G~(~, Y).
(5.16)
From the equations (5.14), (5.15) und (5.16) we get finally that lim G~(I, W \ V) _ 0. ~,-->oo a(~, v)
(5.17)
This gives statement (+). Let now l) be an arbitrary neighborhood of M. Then there exists a neighborhood V1 C l) of M with h(y) > 0 for all y C V1 due to assumption 4. From (+) we get then G(),, V~) ~ G(~, V), ), ~ oo and G()~, V1) ~ G(A,/~n), ,~ ---, oo. From these two asymptotic equations we obtain the second part of the lemma. []
5.3
Interior
Maxima
We consider now the case that the maximum of the function in the argument of the exponent of a Laplace type integral occurs at one point in the interior of the integration domain. The first theorem is a well known standard result. First two lemmas are proved which are needed in the following to get estimates of the integrals. L e m m a 39 Let 13 be a positive real parameter. Then for all constants e, k > 0 always ] x ] e x p ( - k ] x ] 2) dx PROOF:
I
-"+ 0 ,
(5.18)
By using spherical coordinates we get that 27rn/2
1~,1 exp(-kl~l 2)
d~ - r(n/2) /. /
p~ exp(-kP2) dp -~ 0
(5.19)
p>fle
I~l>Ze
I-7
aS ~ --+ OO.
55
L e m m a 40 Let F C ~ n be a compact set with the origin in its interior. Further is given a twice continuously differentiable function f : F ---* 1R. Assume that further:
1. f ( o ) > f ( x ) for all ~ ~ F with x r o, 2. The Hessian H I(o) is negative definite. Then there exists a constant k > 0 such that for all x E F always < f ( o ) - kt l
(5.20)
PROOF: We assume that f ( o ) = 0. If we now assume that for all k > 0 there is a point x E F with - k < f ( ~ ) j ~ j - 2 < 0, then there exists a sequence (x(m)) of points in F with - m -1 < f(x(m))Jx('~)J-2 < 0. Since F is compact, this sequence has a convergent subsequence which converges towards a point ~0 G F. Since f is continuous, we have then that f ( x 0 ) =- 0. From assumption 1. follows then that x0 = o. Due to assumption 2. the Hessian H I (o) is negative definite and making a Taylor expansion around o we get with xTf(o) = o that
f(y) = f(o) + (VI(o))Ty + ~yTH/(Oy)y = ~yTHf(Oy)y
(5.21)
with0<0
limsupf(y('~))Jy('~)] -2 _< ~1 A0 < 0, rrt ~
(5.22)
cx:)
with A0 < 0 being the largest eigenvalue of the Hessian H ! ( o ) and so such a sequence (x(m)) can not exist. Therefore such a constant k > 0 exists. []
T h e o r e m 41 Given is a compact set F C 1~n, a twice continuously differentiable function f : ~'~ -+ 1~ and a continuous function h : ~ --* 1~. Let be defined I(/3) ----/ h(x) exp(/32f(x)) dx.
(5.23)
F
If the function f achieves its global maximum with respect to F only at the point x* in the interior o f F and if the Hessian H i ( x * ) is negative definite, the following asymptotic equation is valid I(4) ~
. x/J
56
(5.24)
PROOF: A proof is given in [15], p. 332-4, [59], p. 74-75 and [140], p. 495. The idea of the proof is to approximate first the functions f and h by Taylor expansions at the maximum point and then to show that the integral over these approximating Taylor expansions is asymptotically equivalent to the integral over the functions. Near z* the function f can be approximated by its second order Taylor expansion at x*. Since f is twice differentiable and at x* there is an extremum, the first derivatives vanish, i.e. V f ( z * ) = o and so we get that
1 f(~) ~ f(~*) + ~(~
x*)TH/(x*)(x
For h it is sufficient to replace the function we have near z* the approximation
h(x)eZ2:(x)
h(x*) exp
[
1 tfl2(f(x *) + ~(x
-
h(x)
x*).
(5.25)
by its value at h(z*). Hence
x*)THj(~c*)(x
--
x*))]
(5.26)
Let now U C / R ~ be a neighborhood of x*. One can show by estimating the differences that
/ h(~): ~j(~) d~
(5.27)
F
1 - x*)THf(x*)(x / h(x*)exp [fl2(f(x*) + -~(x
x*))] d~
u
1 - x*)THj(x*)(x / h(x *)exp [fl2(f(x*) + -~(x
x*))]
= h(x*)eZ2](x')/exp[~-~(x--x*)TH:(x*)(~--x*)]
dx
dx
But for the last integral we have (see lemma 26, p. 30) that
= (2~)~/~
h(~*)
~/I det( H f (x* ) ) l
9:~:(~')fl -~.
(5.28)
This gives the asymptotic relation in the theorem. We give now an exact proof. We assume that x* = o and f ( o ) -- 0. First we define the functions
hp(x)
= h(f1-1 x) exp(fl2f(fl-lx)).
By defining the functions h and f for all x C ~ the functions h e are defined on ~ . Then we have
(5.29) \ F by
flnI(fl) = fl'~ L . h(y) exp(fl2f(y)) dy.
57
f(x) = h(x) = O, (5.30)
Making the substitution y --+ z = fly gives
We show now that we can apply the Lebesgue theorem for the limit of the integrals over the functions hz(~). First we note that l e m m a 40, p. 55 ensures that there is a constant k > 0 such that Ihn(:~)l < m a x Ih(:~)l e x p ( - f 2 ( / c f - 2 l : r l 2 ) ) = m a x Ih(:~)l exp(-#cl:~12). --
~ED
(5.32)
~ED
Since the last function is integrable we have an integrable upper bound for all these functions and we can apply the Lebesgue theorem if we find the limit function. We have then, since h is continuous lim h ( f - ~ )
= h(o).
fl---+co
(5.33)
Since o is in the interior of F, for all ~ there is a /7o such that for all f >/7o the points j - l ~ lie in F and therefore then the function f ( f - l x ) is twice differentiable with respect to f for these values. For the function J2f(/3-1w) we get using l'Hospital's rule twice
lim f2f(f- 1~)
:
= lira _f-2 E,\I fi(f-l /~--*co
,) x,
=
n~oo
lim - E i ~ l p-~c~
--2f1-3
= lim
lira f ( f - 1~) f-2
p--oo
,5'~ co
--
--2f1-2
(5.34)
fi(f-lz)xi
--2fl - I
fiJ(fl-l:c)xixj
lim i,j-~l
=7
ffJ(o)xixj
=
xrHs(o)z.
i,j=l
So we find for the limit lira
~ - - - * OO
y
I(f)=
n
h(o)exp 7z Hs(o)z
dz.
(5.35)
Using l e m m a 26, p. 30 we get finally the result
lim
O~c,o
/7'~I(f)
=
(2~)'12h(o)iHf(o)[ -'1~
(5.36) []
The meaning of the approximation above is that asymptotically as f ---+co the function exp [ f 2 ( f ( x ) - f(x*))] near x* is proportional to the density of a normal distribution N ( o , - f - 2 H ] l ( ~ * ) ) . Showing that the integration domain can be replaced b y / I T ~ without changing the essential asymptotic behavior, an integral over a normal density is obtained. 58
In the following theorem the function h(~) is vanishing at the maximum point. T h e o r e m 42 Given is a multivariate integral 1(13) depending on a real parameter/3 I(/3) = / go (x)[gl(~)[ exp(f~2f(x)) d~.
(5.37)
F
Further the following conditions hold: 1. F is a compact set in J~:~'~, 2. f : 1R'~ --, 1R is a twice continuously differentiable function with (a) f ( x ) < f ( x * ) for all ~ e D with ~ 7~ x*, (b) The Hessian H I ( x * )
is negative definite,
(c) go: lt~n ~ J~ is a continuous function with go(x*) r 0, (d) gl : 1Rn ~ ~:~ is a continuously differentiable function vanishing at x*, i.e. gl(x*) = 0. Then the integral I(13) has the following asymptotic approximation as t3 --* oo (Vgl(~,))THI1(w,)Vgl(w,) det(Uy(x*))
I(13)"2(27r)(~-l)/2g~
1/2 eE~y(x. ) ' 13~+1 9(5.38)
PROOF: We assume that x* = o, f ( o ) = 0 and F is convex. We use the Lebesgue convergence theorem 8, p. 12. We define the function hE(~ ) by h E(x) = go (/3-t x)(/31g 1(/3-1 x)l) exp (1332f(13- ~x)).
(5.39)
for x E F and zero elsewhere. Since F is convex, we have that [/3gl(fl-lx)l < Kllxl with K1 = m a x ~ e r IVgl(x)]. Therefore we can find again an upper bound for all functions IhE(x)[. Then we have 1(/3) = 1 3 - ( , + 1 ) [ hE(x ) dx. J//~"
(5.40)
For the function hE(x ) we have that lim go(13-1x) = go(o).
(5.41)
E~oo
Applying l'Hospital's rule, we get lim 131g1(/3-1x)l E~
=
lim E-~
dgl(E-lx) dE -- --fl-2(Vgl(13-1z))Tx d_p-1 _~-~ dE
=
=
E-~oo
59
(5.42)
In the same way by applying l'Hospital's rule twice we get =
lim r lim --j3-2(V f ( ~ - l i f ) ) T x
=
--2~ -3
~3--c~
ifTHj(9-
= lim r162
lim
/3--c~
if)ifg-:
=
(5.43)
lim (Vf(/5- l x ) ) T z f~-*c~
2j3 - 2
dZ
dP -2 d/5 2~ -1
lim l x T H f ( ~ - l x ) i f
p---, oo 2
= lxH](o)x.
This gives for the integral over ha(if ) then lim /y~ ha(if ) dif = /1R g~176176
~ - - - + OO
n
(~ifH](o):~)
n
dx. (5.44)
Using theorem 26, p. 30 we get the final result
lim [
h e ( x ) dx = 2(2~r)('~-l)/2g0(o)
(vgl(~(~176 H71(o))
1/2"(5"45)
This gives the result. If F is not convex, we take a subset FK C F, which is convex has the origin in its interior. It can be shown easily using lemma 39, p. 55 that the integral over F \ FK is negligible compared with the integral over FK.
[]
We derive now an approximation if the function h(if) vanishes at the maximum point of f or has a slight singularity. This is a modification of the approximation by Fulks/Sather [66]. This result contains the last as a special cane.
T h e o r e m 43 Given is a compact set F C ~'~, a twice continuously differenliable function f : 1Rn ---+ 1R and a continuous function h0 : S~(1) -+ ~ with Sn(1) the n-dimensional unit sphere. Further for a point xo in the interior of F let be defined a continuous function h : F \ {xo} ---+//~, which has as x ~ ~o
the form h(if)
,~ lif - i f o ] U h o ( ] if -
i f o l - l ( if - i f o ) ) .-4- o ( l i f - ifol u)
(5.46)
wilh u > - n . Given is the integral
•
= / h(if) exp (Z f(if)) dif. F
60
(5.47)
If the function f achieves its global maximum wilh respect to F only at the point z0 and if the Hessian H I (z0) is negative definite, the following asymptotic equation is valid as ~ --+ oo I(fl) ,.~ 2(n+v)/2-ir
(~__u)
Io
i det(Hf(xo))]l/2
e~1(x*)~ -(n+v) .
9
(5.48)
Here the constant Io is defined by a surface integral /
Io
IzTH-/~(xo)zl~'12ho(z)ds,~(z)
(5.49)
S~(1)
owr S~(1). PROOF: We assume t h a t z0 = o and f ( o ) = 0. Further the eigenvectors of the Hessian H f ( o ) are the n unit vectors negative eigenvalues ,~1, . . . , - ~ . We take an e > 0 such t h a t the set K~ = {x; Ixl < e} is a the Hessian HI(w) is negative definite for all x C Kc. We split two integrals
we assume t h a t e l , . . . , e~ with subset of F and the integral into
h(x) e x p ( ~ 2 f ( x ) ) dx
(5.50)
F
= / h(x)exp(~2f(x)) dx-t- / K~
h(x)exp(fl2f(x)) dx.
F\K~
9
J
9
9
For the second integral we get with K0 = m a X x e F Ih(x)l t h a t
II~(Z)I _
exp(]32 f ( w ) ) dx.
(5.51)
F\K~
Using l e m m a 40, p. 55 gives t h a t there is a constant k > 0 with and therefore 112(Z) t
_
/
exp(-~klxl
~) dx -- o(~ ~)
f(x) <_-k]x[ 2 (5.52)
. /
F\Kr
for all r C ~ . For the first integral we use corollary 11, p. 13. We have a diffeomorphism If~ ~ V, x ~ y = T ( x ) as defined in the corollary such t h a t f ( x ) = - l y l 2 / 2 . T h e n we have I1(~)=/IT-I(y)[
~ [ho([y[-ly)
+ o(]T-l(y)]~)] exp(-~[y[2/2)J(y)dy.(5.53)
V
Here J(y) is the J a c o b i a n of the t r a n s f o r m a t i o n T. Since y points in the s a m e direction as x, we have t h a t ho(lyl-ly) = ho(Ixl-lx). 61
We take now a subset of V, a sphere with radius Cl a r o u n d the origin. Using the same a r g u m e n t s as before, it is easy to show t h a t we have a s y m p t o t i c a l l y
Ii(fl) "~ /
IT-1(y)l~ho(lyl-1y)exp(-/321y12/2)J(y)
dy, /3 --+ c~.
(5.54)
* ]
If we make a t r a n s f o r m a t i o n to spherical coordinates (p, v ) with p = lYl and
r = l y l - l y , we get for this integral
_=h*~(o) Since as p --~ 0 we have J ( p r )
r'~n
.-1/2
-P2__,i=1 "~i
~
I d e t ( H / ( o ) ) ] -1/2 and T - l ( p r )
~
ri, we obtain for the function h*(p) the a s y m p t o t i c f o r m h*(p) ~ p"X01 d e t ( H / ( o ) ) 1 - 1 / 2 ,
P ~ 0.
(5.56)
T h e n we have for the integrand in the last equation the form
iop~+~-i exp(_/32 p2/2)1 det ( / - / / ( o ) ) [- 1/2.
(5.57)
Now, we can use theorem 36, p. 48 with s = n + u, r = 2, a = 1 and b = I0 and we find a s y m p t o t i c a l l y
I~
II(Z) - V
(~-~)
2(n+v)/2/3-(~+~),det(H/(o)),-1/2, /3-~ ~ .
(5.58)
This gives the result in this case. T h e general case can be proved by m a k i n g a linear transformation, which reduces it to this case. [] T h e next t h e o r e m is a generalization of theorem 41, p. 56. Here in the exponent additional to the function f which is multiplied by/32 there is a function ./'1 multiplied by /3 (see [25]). Such integrals appear if we consider m o m e n t generating functions of normal r a n d o m variables. Theorem
4 4 Let be given."
1. A compact set F C R n with the origin in its interior. 2. A twice continuously differentiable function f : F ---* IR with f ( x ) < f ( o ) for all ~ r o. The Hessian H / ( o ) of f at o is negative definite. 3. A continuously differentiable function fl : l~n ~ 1~. 4. A continuous function h : F ---* z?~ with h(o) 7s O.
62
For the integral I(/3) -- / h ( x ) e x p ( / 3 f l ( x ) + / 3 2 f ( x ) ) dx
(5.59)
F
we have under these conditions the following asymptotic expression I(fl) = f h(~)exp(flfl(~) +/32f(x)) dx
(5.60)
F
--~ (21r) n/2 h(o) e x p ( - 89
(o))TH 71(o)~fl (o)) e~fl (~176 x/[ d e t ( H f (o))[
, fl ---~ oo.
/3n
PROOF: We assume for simplicity that f ( o ) = fl(o) = 0. Further we assume first that F is convex. Define for all x E ~ \ F the functions f, fl and h by f ( ~ ) = f~(x) = h(~) = 0. Then these functions are defined o n / / ~ and we can write the integral I(/3) in the form 1(/3) = /
h(x)exp (/3f1(;~) --~ /32f(x)) dx.
(5.61)
Making the substitution x -~ y = / 3 x gives then 1(/3) : / 3 - n
/
h(/3-1y)exp (/3fl(/3-1y) q_ /32f(/3-1y)) dy.
(5.62)
Let be defined the functions g/~(x) by
gz(y) = h(/3-1y) exp (/3fl(/3-1y) + 132f(/3-1y))
(5.63)
for x E F and zero elsewhere. These functions are bounded by [gz(Y)[ -< k0 exp (/3fl (/3-1y) + / 3 2 f ( / 3 - 1 y ) ) .
(5.64)
Here k0 = m a x x e F Ih(x)[. From lemma 40, p. 55 follows that there is a constant kl > 0 such that for vectors y E/3F always
/32f(/3-1y) <_ - k l [y[2.
(5.65)
If we define k2 = supxeF IVA(~)[, we get (since F is convex) for all y E/3F further [/3fl(/3-1y)[ < k2ly[. (5.66)
The last three inequalities give the estimate lgZ(Y)I -< ko exp(k2lyl - k~lYl2).
63
(5.67)
The function on the righthand side of the inequality is integrable over/R '~ as can be shown easily using spherical coordinates. This gives an integrable upper bound for the functions [g/31 and so we can use theorem 8, p. 12. This yields lim
/3~oo
[j g/3(y) dy = [j
lim
/3--o~
g/3(y) dy.
(5.68)
We have to find the limit lim/3-~oo g/3(y). Since h is continuous, we find lim
/3---*00
h(/3-1y)
= h(o).
(5.69)
The limit of the p r o d u c t / 3 , f1(/3-ly) is obtained using l'Hospital's rule fl (/3-1 y) /3" fl(/3-1y) -/3-1
d]l(/3-1y) --
--/3-2(V fl(/3-1y))Ty =/3--,oolim
_/3-2
=
lira Z~oc
d/3 dp-1 d/3
(5.70)
(V fl (o))Ty.
Applying l'Hospital's rule twice gives further
f(/3-1y)
_
(Vf(fl-lY))TY
=
lim /3_,~ = lim /3.00
/3-2
2/3-1
lira /3--+~
--/3-2(Vf(/3-1Y))TY
lira
--/3- 2yT H ] (/3-1y)y
/3.-+~
-2~-2
(5.71)
--2/3 -3
lyTHI(o)y. From this we get then lira
/3---+oo
g/3(Y) =
h(o) e x p
((Vfl(o))Ty
A- ~y
)
H.t(o)y .
(5.72)
Applying now the Lebesgue theorem yields lim /
/3.--.,cr
R "~
g/3(y)dy= h(o) /
exp
((Vfl(o))Ty+ lyTHf(o)y)dy
(5.73)
R"
and for this integral we find (see lemma 26, p. 30)
= (2=)./~h(o) exp(- 89(V fl (o))~ H 71 (o)Vfl (o)) ~/I det(Hj(o))l
(5.74)
This proves the theorem for a convex set F. If F is not convex, we take a convex subset Fc of F such that the origin lies in its interior. For this set we can prove the result. Then we have to show only that the integral over F \ Fc is negligible in comparison with the integral over Fc, which can be done easily. []
64
5.4
Boundary Maxima
In this subsection some new results about asymptotic approximations for integrals are proved. In the next l e m m a we give a bound for the decrease of a function near a regular m a x i m u m under constraints. This l e m m a is similar to l e m m a 40, p. 55. If a function has the origin a m a x i m u m with respect to the set {x; x~ _> 0} then due to the Lagrange multiplier rule the gradient is parallel to the x~axis and the matrix of the second derivatives with respect to the variables xl,...,x.-1 must be negative semi-definite. If the gradient does not vanish and the m a t r i x is negative definite, we can prove the following. L e m m a 45 Let be given a twice continuously differcntiable function f with F C ~ such that
:
F --+
1. F is compact with the origin in its interior. 2. f has its global m a x i m u m with respect to F* = F N {x; xn >_ 0} at o. at o does not vanish, i.e. V f ( o ) :~ o.
3. The gradient o f f
~. The ( n - l ) • ( n - l ) matri~ H * ( o ) = (gJ(o))i,~-=~ ....... ~ is negative de~nite at o.
Then there exists a constant k > 0 such that f o r x E F* with x 7s o always
f ( x ) < f ( o ) - k(lS~l2 + Ix,~l)
(5.75)
where ~ = ( x l , . . . , x n - 1 ) .
PaOOF: We assume that f ( o ) = 0. We assume that no such constant exists. Then there must be a sequence (x(m)) of points in F* such that
--m -1 < f(~(rn))([x(rn)12 + [x(nrn)l) -1 _~ O.
(5.76)
Here we set $('~) = (x~m) ~ ' ' ' , ~ n~(m) -l]" Since F* is compact there is a convergent subsequence and we assume without loss of generality that this subsequence is just the sequence itself. Since f ( x (m) --* 0 we have x (m) --* o. We split the difference
:(,<->> = [:(,<->>- :<,<->,o>] + [:(,<->,o>- :(o,...,o>].
65
Making now a second order Taylor expansion of f with respect to at the origin and a first Taylor expansion with respect to x,~ at ( ~ ( m ) 0) to estimate the differences gives
Xl,...,xn-1
f(~(r~), O) n--1
f
=
(O)Xi
-
f ( O , . . . , O) +
(5.78)
(~,(m))TH*((~l~,(m)
,
0))~, (m)
i=l
= and for the other difference using the mean value theorem
f(x(m)) _ f((f~(m), 0)) = J"~ Hz~(,n)((,02x (m)~x(~)n )1 ,
(5.79)
with 0 _< 01,02 _< 1. Now, since z(r~) ~ o, for m large enough we get
(~(m))TH*((t91(x*,O)))~(m) fnH~(m) kL , tO2 x (n m ))") x n(m)
<_ ~(s:(m))TH*(o)~ (rn) -<-
(5.8o)
~fn(o)x!~)
and so we obtain f ( x (m)) _< ~ l ( f r ~ ( o ) x ( ~ m ) + ~ ( ~ ( ' ~ ) ) T H * ( o ) ~ ( ' ~ ) ) .
(5.81)
Since f'~ (o) < 0 and H* (o) is negative definite we get with A0 < 0 being the largest eigenvalue of this matrix that for m large enough that there is a constant k0 > 0 such that
_< lk0(ix(
) I+
(5.82)
But this contradicts the properties of the chosen sequence. Therefore such a constant k must exist. [] In m a n y of the theorems in this section we will assume that the following condition is fulfilled. C o n d i t i o n A:
Given is a twice continuously differentiable function g : j~n ~ 1~ such that by F = { x ; g ( x ) _< 0} is defined a compact set and by G = { x ; g ( z ) = 0} a compact
C 2 hypersurface. The gradient ~Tg(x ) does not vanish on G and the surface G is oriented by the normal field n ( x ) = [Vg(x)]-lVg(x). In a number of the next results we have to compute the determinant of an (n - 1) • ( n - 1)-matrix A T H A with H an n • n-matrix and A = ( a l , . . . , a,~-l) an n x ( n - 1)-matrix, where the ai's are orthonormal vectors. Let aN be the unit vector, which is orthogonal to a l , . . . , a ~ - l . We can use then corollary 19, p. 21 to simplify this. We have t h a t
det(AT H A ) = det((I,~ - P)T H(I,~ -- P) + p T p ) with P = a,~a~ the projection matrix onto the subspace spanned by a,~. 66
(5.83)
In the following theorem the case is treated that the global maximum of the function is on the boundary of the integration domain and that the function g defining the boundary is smooth at the maximum point. In this case the gradient of the function f does not vanish, it is parallel to the normal vector of the surface at this point, i.e. the first partial derivatives in all directions orthogonal to this are zero. T h e o r e m 46 Let condition A be fulfilled. Further are given a twice continuously differentiable function f : ~ n --+ • and a continuous function h : 1~n --* 1R. Assume that the following conditions are satisfied 1. The function f attains its global maximum with respect to F only at the point ~* E G. 2. At x* the gradient ~Tf ( x * ) does not vanish. 3. In ~* E M the (n - 1) • (n - 1)-matrix H * ( x * ) is regular. This matrix is defined by H * ( x * ) =- A T ( x * ) H ( x * ) A ( x *) with IVg(
*)I
i , j = a .....
and A ( x * ) =- ( a l ( x * ) , . . . , a n - l ( x * ) ) . The a l ( x * ) , . . . , a , ~ _ l ( x * ) form an orthonormal basis of the tangential space of G at x*. Then as ,3 ~ oo the following asymptotic equation is valid
]
h(x)eZ~y(X)dx
1)/2
h(x*) ~/l(Vf(x,))TC(~,)Vf(x,)]
ez~/(x*) Z,~+~
(5.84)
F =
IVf(:~*)lv/I d e t ( H * ( x * ) ) I
/3n+l
Here the n x n-Matrix C(~*) is the cofactor matrix of the n • n-Matrix H ( x * ) . PROOF: A complete proof is given in [15], p. 340 und [59], p. 82. The idea of the proof is similar to the one in the last theorem. We give two proofs. The first is an outline of a proof by making Taylor expansions and comparing the integrals of the Taylor approximation with the originM integral. We assume for simplicity that x* = o, f ( o ) = 0 and that at o the tangential space of G is spanned by the vectors e l , . . . , e , - 1 . T h a t can always be achieved by a coordinate transformation and by adding a suitable constant. Since at o there is a maximum of f with respect to F and so also with respect to G, the Lagrange multiplier rule states that i f ( o ) = 0 for i = 1 , . . . , n - 1.
67
To simplify further we assume that the second derivatives of the function g at this point vanish, i.e. the main curvatures of the surface are all zero and therefore the surface G is not curved at this point. If the surface is curved at this point, we have only to replace in the final result the second derivatives of the function f by the corresponding second derivatives with respect to the local coordinates of the surface. These were calculated as in l e m m a 14, p. 17.
Outline of the first proof." We write
i(Z) = f h(~)ep~s(~) d~.
(5.85)
F
Now we approximate the function f near o by its second order Taylor expansion at o. This gives
f(~) ~ ~1 y~ f ~ ( o ) ~ j + ~1
fin (o)~ix~ + fn(o)x~.
i,j=l
Replacing
I(9) ~ h(o)
h(x) by exp
F
(5.86)
i=1
h(o) we obtain
| ~
S~j(o)x~x~ + ~ S~ n ( o ) ~ + ~Sn(o)~
\i,j=l
d~,
i=1
By estimating the changes it can be shown again that the integration domain for the variables xl,..., xn-1 can be replace by F/n-1 and the domain for x,~ by [0, c~) without changing the asymptotic behavior. This yields then
I(fl)~h(o) / ]~-1
fiJ(o)xixj
/exp 0
+~-'~fin(~176 d x ' ~ i = l With the substitution /?2Xn we get
dxl...dx,~-i
xi ---*ui = ~xi for
- ~n+l
(5.87)
\i,j=l
//
]~.-1 0
+fl--l~-~fin(O)UlUn+2fn(o)U'~)] d u i = l
68
exp
i = 1 , . . . , n - 1 and Xn ---* Un =
E fiJ(~ \i,j=l
.... dul...dun-1.
(5.88)
The term in the exponent divided by fl is asymptotically negligible and therefore
I(z)
-
/
zo+
exp
..
E r'(o)u,u,
)
du,
(5.89)
oo
• fexp(f"(o)u~)
du..
0
For the first integral the result is (see lemma 26, p. 30 ) [ 1 ,~-1
..
f exp[2i~j=Y'a(~
j~n-1
/
(2rr)("-l)/=
dul'"du"-l=ldet(H*(~
(5.90)
Here H * ( o ) = (fiJ(o))i,j=l .....,-1. Since all components of the gradient of f at o with the exception of the n-th are equal zero, IVf(o)l = Ifn(o)l and this gives for the second integral
f
oo exp(fn(~ u,~)
dun = [V/(o)1-1.
(5.91)
The final result is then
/ h ( x ) e jS''f(~') d~ ,.~ (2rr) (n-1)/2 F [Vf(~
h(o)
. ~-(n+l).
(5.92)
det(H*(~
Making a suitable rotation to change the coordinates it is always possible to transform the integral in such a form. Then the second derivatives transform as derived in lemma 7, p. 12. This gives then the general result.
Second proof."
We define the function hp(~) by
h~(a~) = h(fl-l g~,3-2 x, ) exp(/32 f(/~ -1 a~,fl-2 x,) )
(5.93)
with s = (Xl,... , x , - 1 ) for a~ E F and zero elsewhere. We have then that
/3"+1I(/~) = / ~ . ho(x ) d~.
(5.94)
Using lemma 45, p. 65 we get that there is a constant K1 > 0 with Ihz(m)l < K 1 - e x p ( - k ( l ~ l = + xn)),
(5.95)
where K~ = maxa~ee [h(a~)[. But this function is integrable over the domain F and over ~ " and therefore we can apply the Lebesgue theorem if we find limp--,~o h E (a~). We get lim h ( 3 - ~ , 3 - 2 z , ) =
/3--+oo
69
h(o).
(5.96)
For the argument of the exponent we have lira / 3 - 2 f ( / ? - l ~ . , / 3 - 2 x , ) = lira f(/3-15~'fl-2x") Z_ 2
(5.97)
Applying now l'Hospital's rule once gives lim f(/3-15~'/3-2x")
(5.98)
]3-2 E i n-1 = l f / ( f l - l ~ , / 3 - 2 a n ) d- 2 ~ - 3 f n ( f l - 1 5 : , ~ - 2 x n )
=
lim r162
=
lira ~ i ~ 1 fi(~3-1$'fl-2x'~) ~r 2/3-1 t-~lim f n ( 3 - 1 ~ , f l - 2 x n )
=
-2/3 -3
lira Ein=ll fi(fl-l~c' ~--2Xn) O-oo
2 fl- 1
+ f"(o)x,~.
To find the limit of the first summand we apply again the rule yielding then lim Ei"=-11fi(fl-15a'/3-2x'~)
=
=
lim /~.~
_~-2 E i ,n-1 j = l fij (/~-1~,, /~-2Xn)XiX j _~_2/~-3En__i fin (/~-1~,/~-2Xn)XiX n
- 2~- ~ ~--1 1 - ~ ; i m E fiJ(/3-1x'/3-2x~)xixj = ~ x T H ' ( ~ i,j=l []
This result can be proved in a modified form for surface integrals. T h e o r e m 47 Let condition A be fulfilled. Further let be given a lwice differenliable function f : 1Rn ~ 1R and a continuous function h : 1Rn ---+2R. Assume further that:
1. The function f achieves its global maximum with respect to G only at the point x* E G. 2. At x* C M the (n - 1) x (n - 1) matrix H * ( x * ) is regular. This matrix is defined by H * ( x * ) = A T ( x * ) H ( x * ) A ( x *) with IVf(~*)] i j ( x , ) ) ]Vg(x.)lg i,j=l ..... n
H(x*) = (fij(~.)_ and
A(x*) = (al(a~*),..., a n - l ( x * ) ) .
Here the a l ( x * ) , . . . , a,~-a(x*) are an orthonormal basis of the tangential space of G at ~*.
70
Then as/3 -+ ~ the following asymptotic relation is valid J
h(~)ez~J(~) ds~(~) ~ (2~)(~-1)/~
h(x*)
VI det(H*(~'))l
eZ~](x*)/3 - ( ' - 1 )
(5.99)
G
PROOF:
Analogous to the proof of the last theorem.
D
The following theorem is from the article [33] and treats the case that the maximum point is in the intersection of several manifolds. T h e o r e m 48 Given are: 1. A twice continuously differentiable function f : 1~n ---* •
.
2. rn twice continuously differenliable functions gi : j~n ---+j~. 3. A continuous function h : 1~'~ ---* 1~. The functions g l , . . . ,gin define a compact set F = N~=l{x;gi(x) _< 0}. Further the following conditions are fulfilled: 1. The function f achieves its global maximum with respect to F only at the point x*. 2. There is a k E { 1 , . . . , m } such that gi(x*) = 0 for i = 1 , . . . , k and 9j(x*) < 0 for j = k + 1 , . . . , m . 3. The gradients ~ g i ( ~ * ) with i = 1 , . . . , k are linearly independent. 4. The gradient ~Tf(x*) has a unique representation in the form k
i--1
with ~fi > 0 for i = 1 , . . . , k. (From 3 follows with theorem 15, p. 19 that the gradient has such a representation with ~fi >_ 0.) 5. The matrix H*(x*) is regular. Here H*(x.*) =
AT(x*)H(~*)A(x *)
with k
H(~*) = (fli(~,) _ ~ g ~ (
))~,~:~ ......
r=l
and
A(x*) = ( a ~ ( x * ) , . . . , a n _ ~ ( x * ) ) . The ~ ( ~ * ) , . . . , ~ , - k ( ~ * ) are an orthonormal basis of the subsp~ce of ~ ~ which it orthogonal to the subspace span[Vgl(~*),..., Vg,~(~*)].
71
Then the asymptotic relation holds
f
h(~)e ~L'(~) dx
(5.100)
F
,~
(2~.)(,~_k)/2
h(x*)
k
9e ~ f ( x * ) ~ -('~+k), ~ --~ oc.
~;=1 71x/det(G) " I det(H*(x*))l Here a = Vg~(x*),...,
( ( V g , ( ~ * ) ) r . ~gj(~*))~,~=~ ..... ~ i~ the a r ~ m i a n Vg~(~*).
o l the
vector~
PROOF: We give only a short outline of the proof. A complete proof can be found in [33]. We assume that the maximum point x* is the origin o, that f ( o ) = 0 and that the subspace spanned by the k column vectors of A is equal to the subspace spanned by the first k unit vectors el, i = 1,..., k. Then the subspace orthogonal to this space is spanned by the unit vectors ek+l, . . . , en. In [33] it is shown that the integrand can be replaced near the maximum point by its second order Taylor expansion and the function h(w) by its value at this point without changing the asymptotic behavior of the integrals as/3 --* oc. We can write therefore
I(j3)~h(o)/exp[~;((Vf(o))Tx+lxTHi(o)x)J
dx.
(5.101)
F If a coordinate change is made such that the vectors - V g l ( o ) , . . . , - V g k ( o ) become the new first k basis vectors instead of el,...,ek and the other remain unchanged the transformation determinant is equal to the volume of the k-dimensional parallelepiped spanned by the k vectors V g l ( o ) , . . . , Vgk(o) (see [62], p. 206-8). But this volume is the square root of the Gramian d e t ( a ) = ((Vgi(o)) T 9Vgj(o))i,j=~ .....k (see lemma 3, p. 10), yielding
I(fl)
~
F
+~ j=l
L i=t
.~,z=k+l
fJm(o)uium + E E fim(o)ujuk ra=k-{-1
j=l
du, (5.102)
m=l
Here V f ( o ) is replaced by the linear combination E~--1 "fiVgi(O) and the derivatives are with respect to the new coordinates. But since only the first k coordinates are changed, the second derivatives, which involve only the coordinates U k + l , 9 9 9 Un remain the same.
72
With the transformation ui --~ wi =/32ui for i = 1, .., k and uj ~ wj for j ----k + 1,..,n we get
1(/3) ,-,/3-(,~+k) h(~ e~2](~
/ l~,
.+/3--1E
exp
{
=/3uj
[
1 m,lmk+lm'(o)wmw,(5103) E "yiWi + -2
-
i=1
fJm(o)WjWm+ /3--2E E fJm(o)WjWm
j=lrn=k+l
j=l
dw.
m=l
is the transformed domain F. As /3 --+ o0 the terms in the exponent multiplied by negative powers of/3 become negligible and we obtain then
I(/3)..~
~ . e x p /3~+k Vdet (G) j
F
- E TiWi+ 1 fm'(o)wmw, i:1 rn,l=k+l
dw.(5.104)
This integral is now written as the product of two integrals. Introducing as new coordinates local parameters v k + l , . . . , v . of the (n - k)-dimensional manifold N/k=l{x; gi(x) = 0}. The integration domain is changed to ~ _ x ~ - k giving
~r(/3) ~/3-('~+~)
h(o)eZ2.f(o)
[~ exp - ~ ~ ,
dWl...e~
(5.105)
i=1 +
.~-k
rn,l=kT1
Here D(Vk+l,..., Vn) is the transformation determinant for the coordinate change to local coordinates. Without loss of generality we can choose these coordinates in such a way that D ( 0 , . . . , 0) = 1. So we get finally
h(o)
ez~j(~
1(/3),-~ (27r)(n-k)/21-I/k=1 7,~/det(G) det(fl'~(o)),,,~=k+l ...... /3~+k
. (5.106)
Due to the curvature of the manifold N~=I{T;gi(x ) = 0} at the point o we get additionally to the second derivatives of f a curvature factor appears (see lemma 14, p. 17) and the elements )~m (o) are then given by k
f'~(o) -
a2f(~
OxlOx.~
73
~-27i a~gi(~ i=1
OxlOx.~ "
(5.107)
In the last equation only the determinant of the second derivatives depends on the coordinate system. By a suitable rotation the value of this determinant can be found d e t ( H * ( o ) ) = det(A(o)TH(o)A(o)). This gives the result.
(5.108) []
In this case the asymptotic order of magnitude of the integral is exp(~2f(~*))~ -(n+k), it depends on the number k of active constraints in the point ~*. The following theorem treats the case that the maximum of the function is attained on a surface part and not only at one point. T h e o r e m 49 Let condition A be fulfilled. Further is given a twice continuously
differentiable functions f : ~'~ --+ 1~ and a continuous func!ion h : R ~ ---+R . Assume further: 1. The function f achieves its global maximum m = m a x x c F f ( x ) with respect to F exactly on the compact k-dimensional submanifold M C G.
2. vf(
) :/: o for all
x
M.
3. At all points x E M the (n - k - 1) • (n - k - 1)-matrix H*(~) is regular. This matrix is defined by =
with H(ze)= (fiJ(~)
'Vf(zc)' iJ(~)) ~ g i,j=l,...,n
and A(a~) = ( a l ( x ) , . . . , a,,-k-l(a~)).
Here the a l ( w ) , . . . , an-k-l(X) form an orlhonormal basis of the subspace of the tangential space Tc(x) which is orthogonal to TM(X). Then the following asymptotic relation holds f h(x)e ~'/(~) d~
(5.109)
F
(2~r)(n_k_l)/~ / M
h(x) dsM(~)
. eZ,ml3_(n_k+l) ~ --+ ~ .
IVf(~) IX/I det (H* (~))1
Here dsM(~) denotes surface integration over M. PROOF: We can restrict the integration domain to a compact neighborhood V of M. We have then that
/h(x)e~2/(~)dx,,, F
]
h(x)eP2/(X)d~, Z---+~.
FnV
74
(5.110)
We assume that in the set F fq V exists a global coordinate system in the form ( h l , . . . , h k , h k + l , . . . , h , _ l , g ) with ( h l , . . . , h k ) being a global cordinate coordinate system for M. If no such system exists we have to make a partition of unity to find subsets where we can define local coordinate systems (see [130], p. 150). The integral can now be written in the form
I(~) ,.~ [ h(T(u*, ~, un))eZ~/(T(U*,~,~'"))D(T(u*, ~, un)) du*d~du,~.
(5.111)
u
Here u* = ( U l , . . . , u k ) and ~ = ( u k + l , . . . , u n - 1 ) and un = - g ( ~ ) . U is an open subset of ~ and D(T(u*, ~, u,~)) is the transformation determinant. The u l , . . . , uk are a parametrization of M. We assume first that at a point x the tangential space TM(:~) is spanned by the unit vectors e l , . . . , ek and the subspace of Tc(:e) orthogonal to this space by the unit vectors e k + l , . . . , e ~ - l . Then the gradient of f at this point is parallel to the unit vector e~ due to the Lagrange multiplier rule, since here is a m a x i m u m with respect to G. We write the integral in the form
=I(U. ,~) We consider now the integral in square brackets. For it we can for all points in x E M derive an asymptotic approximation 0
I(u*,fl)= / / h(T(u*, ~, un))eZ2f(T(u*'it'~"~))D(T(u*, ~t, Un))dund~. (5.112) (1-5
Without loss of generality we assume always that D(T(u*, 0, 0)) = 1 for the point we consider. If this is not the case it can be achieved by a suitable scaling of the local coordinates. For a given u* the function f(T(u*,~,u,~)) has a global m a x i m u m at f(T(u*, o, 0)) with respect t o / _ / • [-5, 0]. We compute now the second partial derivatives of this function with respect to the variables u k + l , . . . , u,~-i and the first derivative with respect to un. The second derivatives are found using l e m m a 14, p. 17
Of(T(u*, ~, u,~)) = fn(x), 69un 02f(T(u *, ~, un)) = ~)uiuj
(5.113)
Ivf(X)lgij(x), i,j Ivg(
with x = T ( u * , ~, an). 75
)[
= k+l,.,
'
n-1
Since f has a local maximum in all points of M with respect to G, its gradient is parallel at all these points to XJg(a~). From this follows
If"O')l i-IVfO,)I .
(5.114)
So we get for the integral in square brackets with theorem 46, p. 67 the following asymptotic approximation I(u*,/3) ,-.,
(27r)("-k-i)12h(x) en~'~
Ivf(z)l~/I
d e t ( H * (x))l /3~-~+1 "
(5.115)
IVf(~)10~g(~))~ IVg(~)lOXiOXj i i,j.=k+ 1. . . . . . . 1
(5.116)
with
H'(a~) =
(02f(a~)
\o~;oxj
Integrating over the manifold M yields the result of the theorem.
D
An analogous result can be found for surface integrals. T h e o r e m 50 Let condition A be fulfilled. Further is given a twice continuously differentiable function f : n:~n ~ 1~ and a continuous function h : 1~n ---* ~ . Assume further that: i. The function f attains its global maximum m = max~e G f ( x ) with respect to G exactly on the compact k-dimensional submanifold M C G. 2. In all points x E m the (n - k - 1) x (n - k - 1)-matrix is H * ( x ) regular. This matrix is defined by
H*(~) = A T ( ~ ) H ( ~ ) A ( ~ ) with
H(x)
(
tfij(~) ~ g
t ))i,j=l,...,n
and A(a~) -- (al(a~),..., an_k_l(a~)).
Here the a l ( ~ ) , . . . , a,_~_:(~) are an orthonormal ~asis of the subspace of T v ( x * ) , which is orthogonal to TM(~*). Then we have as 13 ---+oo that ih(x)eZ2](X)dsG(re)~(27c)(~_k_:)/2 i h(x) dsM(x) en2.~CT_(n_k_l)(5.117) c M X/I d e t ( H * ( x ) ) l Here
dsM(w) denotes
surface integration over M .
PROOF: Analogous to the last theorem. Only the factor ]xTf(a~)]-l/3-2 does not appear, since we integrate only over the surface. [::]
76
The next theorem is a modification of theorem 42, p. 59 for boundary maxima. T h e o r e m 51 Let condition A be fulfilled. Further is given a twice continuously differentiable function f : 1Rn --* 1Fd, a continuously differentiable function gl : 1~n --* JFg and a continuous function go : l~n --+ 1R. Assume further that: 1. The function f attains its global maximum with respect to F only at the point x* 6 G.
2. g0(~*) r 0 and gl(x*) = O. 3. A t ~* the ( n - 1) • ( n - 1)-matrix H * ( x ) is regular. This matrix is defined by H * ( x * ) = A T ( x * ) H ( x * ) A ( x *) with
H(x*)= (fiJ(x*) I~f(x*)lgiJ(x*)) IVg(:~*)l
~,2=a ..... ,,
and
A(~*) = ( a l ( x * ) , . . . , an_l(X*)). Here the a l ( x * ) , . . . , a,~-l(x*) form an orthonormal basis of TG(X*). Then we have the following asymptotic relation
/
go(x)lgl(x)l
exp(fl2f(x)) dx (5.118)
F
~
2(2~r)(~_2)/2 Ivf(x*)lg~
det(H* (x*))dTn-(x*)d 1/2 e:~Y(w*)fl -(n+2), fl --+ oz.
Here d = Vgl(x*) - (n(x*), Vgl(x*))n(x*) is the projection of Vgl(x*)
onto Ta(~*). PROOF: The proof is similar to the proof of theorem 42, p. 59. We give a short outline. We assume first that x* = o, f ( o ) = 0 and that the tangential space of G at o is spanned by the unit vectors e l , . . . , en-1 and that the curvatures of the surface vanish at o. The gradient V f ( o ) points in the direction of the unit vector en. We write then
I(,~) = / go(:~)lgl(x)l exp(fl2f(ze)) F
77
dx
(5.119)
For the function f we have then the following second order Taylor expansion at the origin i n / R '~ 1~2-~
f(x) ~ ~
..
f " (o)xixj + f'~(o)xn.
(5.120)
i,j=l
For the function 91 we get from its first order Taylor expansion
gl('x) ~ ~ g~(o)xi.
(5.121)
i=1
Replacing go(x) by g0(o) we have approximately for the integral I ( ~ ) - - g 0 ( o ) / I Eg~(o)xilexp F
fiJ(~ xixj
i=1
+
I'~(o)x,~)
2
)
dx. (5.122)
Changing the integration domain t o / R '~-I x [0, oc) we have again approximately
~u
x exp
fiJ(o)xixj + f'~(o)x,~)
2 1
i=1
dx,~ dxl...dx,~_l.
1 Making the substitution xi H ui =/~xi for i = 1 , . . . , n - 1 and x,~ H u,~ = ~=x,~ gives then
-
.0o, / [j.lI)--~. g~ (o)u~ +~- lg~ (o)u~ I fln+2ff~._ 1
2 ~,j=l
(5.124)
i=1
t--f'~(o)u,~ dun dul...dun-1.
j=l
As/3 --~ oo we can neglect the terms multiplied by/3-1 and we get then
103) ~ /3,~+2 ff~n-1
•
I ~_, g~(o)ui I i=1
~ fi~(o)u~uj+f~(o)~ e~ dUl...e~_l i,j=l
78
(5.125)
Splitting this into two integrals gives then
- ~,~+~
/
ff~.-1
ol
I~g~(o)~,lexp
d U l . . , du,~_l (5.126) i,j = l
i=1
,]
(2,O
x /exp(y"(o)u.) d... o
For the second integral we get as value IVf(o)[ -1, since only the partial derivative in the direction of the x,-axis is not equal to zero, which follows from the Lagrange multiplier rule. The first integral can be calculated using lemma 26, p. 30 giving
f
I~g~(o)u, pexp
(11
i=1
~ f'~(o)u~
)
dul ... du,~-i
(5.127)
i,j=l
= 2(2~)(~-2).
n-1 i J ~ij 1/2 E i , j =1 gl (O)gl ( o ) f
d-~-~H~(o-g
Here the matrix 0~J)i,/=l ....... 1 is the inverse of the matrix H*(o) = (fij (o))id=l,...,,~_ 1. To find the value of the first integral we therefore need this inverse. We assumed that this matrix is regular. We have then H*(o) = A T ( o ) H ( o ) A ( o ) with A(o) = ( e l , . . . , e,~-l). Since for a regular matrix the generalized inverse is equal to the inverse we find H*-l(o) = (AT(o)H(o)A(o)) -
= =
H*-(o) A(o)-H(o)-(AT(o))
(5.128)
-.
Since the column vectors of A(o) are orthogonal, we have using lemma 5, p. 11 that A - ( o ) = A T ( o ) and we get
A ( o ) - H ( o ) - ( A T ( o ) ) - = A T ( o ) H - (o)A(o).
(5.129)
Therefore we get for the product in equation (5.127) n-1
E
g~(~176
= (Vgl(o))TA(o)TH-(o)A(o)Vgl(o).
(5.130)
i,j=l
But since A(o)Vgl(o) = Vgl(o) - (e,, Vgl(o)}e n (see lemma 2, p. 9), this is the projection of Vgl(o) onto the tangential space. This proves the theorem in this special case.
79
We get as result for the integral 1/2
I(fl) ~ 2(2~r)('*-2)/2 g0(o) ]~f(o)l
dTH-(o)d det(H* (o))
fl-(,+2), /3 --~ c~.
(5.131)
The general case is shown by transforming into local coordinates.
[]
T h e o r e m 52 Let condition A be fulfilled. Further is given a twice continuously differenliable function f : 1F~n --+ 11~, a continuously differenliable function gl : Et n ~ 1R and a continuous function go : ll~n --+ 1F~. Assume further that: 1. The function f attains its global maximum with respect to G exactly at the point z* E G.
2 go(=*) # o and g l ( = ' ) = 0. 3. At ~* the ( n -
1) •
( n - 1)-matrix H * ( ~ ) is regular. This matrix is defined
H*(=*) = A T ( = * ) H ( = * ) A ( = *) with
H(x*)=
Iv f ( ~ * )1 IVg(~*)l
fiJ(x*)
gq(~.)) ~,j=l ..... .
and
A(:e*) = ( a l ( x * ) , . . . , a,~-l(=*)). Here the a l ( m * ) , . . . ,an-l(~e*) are an orthonormal basis of Tc(=*). Then we have
/ go(=)lgl(~)lexp(/32f(=)) dsc(~)
(5.132)
G
dT H _ ( ~ , ) d 1/2
~ 2(2~)(--~)/2go(~, )
det(H* (~))
e ~ S ( x ' ) f l - " , fl ~
oo.
Here d-- ~7gl(x*)- (n(x*),Vgl(x*))n(x*) is the projection of~Jgl(x*) onto TG(=*). PROOF: Analogous to the last theorem. Only the factor ]Vf(x*)[-1/3 -2 does not appear, since we integrate only over the surface. [] The following theorems are modifications of theorem 44, p. 62.
80
T h e o r e m 53 Let condition A be fulfilled. Further is given a twice continuously differentiable function f : 1~'~ ~ 9rt, a continuously differentiable function f l : j~n ~ ~ and a continuous function h : 1Rn --~ IR. Assume further that: 1. The function f attains its global maximum with respect to F only at the point ~* E G. 2. A t ~* the ( n - 1) • ( n - 1)-matrix H* (~* ) is regular. This matrix is defined by: H*(~*) = A T ( x * ) H ( ~ * ) A ( z *) with H(~ *
IVf(:~*)l g , j ( ~ , ) ) =
f,i(:~,)
IVg(~')t
~,~-_-~..... .
and
A(~*) = (al(T*),..., an-l(X*)). Here the a l ( x * ) , . . space Tc(~*).
, a n - l ( ~ * ) form an orthonormal basis of the tangential
Then the following asymptotic relation holds
/
h(~)exp(flfl(~) +/32f(~)) dx
(5.133)
F
,., (2~r)(~_1)/2 h(~*) e x p ( - - 8 9 eZS,(~.)+Z~/(~,)/7_(~+I), /3 --+ oo. IVf(~*)]V/I det(H* (~*))l Here:
/. c = v f l ( ~ * ) - ( n ( ~ * ) , V f l ( ~ * ) ) n ( ~ * ) Ta(:~*). 2. H - ( x * )
is the projection of Vfl(~*) onto
is the generalized inverse of H ( x * ) .
PROOF: The proof is analogous to the proof of theorem 44, p. 62. We give a short outline. We assume that x* = o, f ( o ) = fl(o) = 0 and that the tangential space of G at o is spanned by the unit vectors e l , . . . , e~-l. The gradient ~7f(o) points in the direction of the unit vector e~. We write I(/3) = f h(~)exp(/3fl(~) +/32f(~)) da. F
81
(5.134)
For f and fl we have then the following second order Taylor expansion at o (for fl we need as in theorem 44 only the first derivatives) I'l
zsl(,) + z~f(~)
~
(5.135)
pZf~(o)x, i----1 n
f42 n- 1
+~-(}7_. f'J(o)~,~j + ~ f ~ ~
+ 2I~(o1~1.
k=l
i,j=l
Collecting the terms for each variable gives exp(flfl (x) +/32f(~)) ~exp
/3
fi(~
(5.136)
E fiJ(~ i,j=l
+(/~(o) + -5-)~,,~ + ~ ~=~ We have then approximately for the integral with again replacing h(e) by
h(o) I(fl)
~
( ,~-a h(o)
exp
/32 ~-1
E fiJ(~ /3~ f~(o)xi+--~-(i,j=l
(5.137)
F
Making the substitution xi --, Yi =/3xi for i = 1,..., n - 1 and x,~ --* y,~ = /32x,~ and enlarging the integration domain to ff~n--1 X [0, OO) gives
h(o) / I(9)~/3.+1 ff~.-a
[0~
n--i n--1 exp 1/~1 f~( ; o)y~ + -~1 YJ(o)v~v~ "=
q.(fn ( o ) + ) Y n *
(5.13s)
i,j=l
E fk,~(o)ykyn dy dyl. . .dyn-1.
The terms multiplied by/3-1 are asymptotically negligible giving then
h(o)
I(/3) ~/3.+1 •
exp
ff~,~-x
I~(o)y~+~ ~
\i=1
Y~(o)Y~Yi)+ f~(o)~ dy. dyi...dy.-1
i,j=l
82
Writing this now as the product of two integrals h(o)
[ ,-a 1 exp f~(o)yi +-~ E flJ(o)yiYj J j~.-1 i,j = l
dyl...dyn-1
(5.139)
J
O0
• fexp(f"(o)y.) dy.. 0
For the integral over y, we get as result IVf(o)l -~, since IVf(o)l = If"(o)l and for the integral over Yl,..., Y,-1 we obtain using lemma 26, p. 30 /
exp
.~,-1 =
fi(~
k i=1
1 + -2 E fiJ(~ i,j=l
dyl...dyn-1
(5.140)
(2zc)(n-1)/2exp(--~cTH-(olcll det(H*(o))[ -U2.
This gives finally I(fl) ,,~ (2~-)("-1)/2h(o) e x p ( - 8 9 1 7 6 iVf(o)lx/det(H,(o))/3 -('t+1), /3 ---* (x~.
(5.141)
This is the result in this special case. By making a coordinate transformation the general case can be brought in this form. If the curvatures at the maximum point do not vanish, we have to replace the second derivatives of the function f by the the corresponding second derivatives with respect to local coordinates (see lemma 14, p. 17). [] The corresponding result for surface integrals is given in the following theorem. T h e o r e m 54 Let condition A be fulfilled. Further is given a twice continuously differentiable function f : 1Rn ---* •, a continouosly differentiable function f l : 1Rn ---+1f~ and a continuous function h : 1R~ ~ 1R. Assume further that: 1. The function f attains its global maximum with respect to G only at the point ~* E G. 2. At ~ * the ( n - 1) • ( n - 1)-matrix H* ( x *) is regular. This matrix is defined by: H*(~*) --- A T ( ~ * ) H ( x * ) A ( ~ *) with:
( H(w*)=
IVf(x*)[ g q ( x * ) ) flj(~,)
IVg(
*)l
..... .
and
A(x*) = ( a l ( x * ) , . . . , a~-l(x*)). Here the a l ( x * ) , . . . , a,~-l(~*) form an orthonormal basis of the tangential
space Ta( * ). 83
Then as ~ ~ oz the following asymptotic relation holds ] h ( ~ ) exp(~fl (~) +
~2 f(~))d~
(5. 142)
G
(27r'l(n-1)/2h(~*'~exp(- 89 H - (x'*)c)~fl(:~')+~2 f(~*) [~-(n-l ) Here 1. c = W I ( ~ * ) - ( n ( ~ * ) , W l ( ~ * ) ) n ( x * )
~ the p~ojection of W l ( X * ) o~to
Tc(~*), 2. H - ( ~ * ) i~ the generalized in~er~e
of H(~*).
PROOF: Analogous to the last proof.
84
[]
Chapter 6
Approximations for Normal Integrals 6.1
Time-Invariant Reliability P r o b l e m s
As outlined in the introduction in the time-invariant model r a n d o m influences on a technical structure are modelled by an n-dimensional random vector X = ( X 1 , . . . , X~). The limit state function g : ~ n ~ /R describes the state of the system. If g(x) > 0, the structure is intact, but if the r a n d o m vector X has the realisation X = x. But if instead g(x) < 0 the structure is defect for such a realization. Therefore we have two domains: 1. The safe domain S = {x;
g(x)
> 0} and
2. the failure domain F = {x; g(~) < 0}. The boundary of the failure domain G = {a~; g(x) = 0} is called the limit state surface. Since if the random vector X has a p.d.f, the probability content of the limit surface is zero, i.e. / P ( X E G) = 0, in general it does not m a t t e r if the failure domain is defined as here by the set {x; g(z) _< 0} or {x; g(x) < 0}. For systems with components the reliability of the system depends on the the reliability of the components. We consider a system S with n components K1, 9 9 K,~. The simplest models here are parallel and series systems. A parallel system with n components fails if all components K1, 9 9 Kn fail. The probability of failure is then /P(S defect) = / P ( I Q
d e f e c t , . . . , I(,~ defect).
(6.1)
If the components are independent, this gives /P(S defect) = ~ i=1
85
1P(IQ defect).
(6.2)
Contrary to this a series system fails if at least one component fails. This gives /P(S defect) = / P ( a t least one component Ki defect). (6.3) If the components are independent, we get /P(S defect) = 1 - E ]P(Ki not defect).
(6.4)
i=1
Starting from these simple systems more complex systems can be constructed. If the failure of the component Ki is described by a limit state function gi of the random vector X , the failure probability of a parallel system can be written as
(6.5)
_< 0}) and for a series system with the same components we get
(6.6)
P(Ujm=l{gj(X) < 0}).
Further results and applications can be found for example in [49], [73], [51] and [71]. For many complex systems it is difficult to compute the system reliability as function of the component reliabilities, but all systems can be reduced to a series systems of parallel subsystems or parallel system of series subsystems.
6.2 6.2.1
Linear Approximations (FORM Concepts) The
Hasofer/Lind
reliability
index
First Freudenthal [65] proposed to use the distance of F to the mean of the distribution as a measure for the reliability and to linearize limit state functions in the point, where the p.d.f, at the boundary of the failure domain is maximal. Further proposals for a reliability index gave Cornell [42] and Rosenblueth/Esteva [121]. The problem with both definitions was that it was necessary to calculate the moments of non-linear functions of X for determining the value of the index for a failure domain. It was therefore proposed to linearize the limit state function. Ditlevsen [47] pointed out that then the result may depend on the specific form of the limit state function and not only on the shape of F. Later, in 1974 Hasofer and Lind [69] proposed to define a reliability index for failure domains, which is invariant with respect to different choices of the limit state function for a given failure domain. They considered a random vector X = ( X 1 , . . . , Xn), which was centered and standardized, i.e.
1E(Xi) cov(Xi,Xj)
=
Ofori=l,...,n,
(6.7)
=
5ij f o r i = l , . . . , n .
(6.8)
86
For an arbitrary domain F C ~'~ Hasofer and Lind defined as reliability index fl(F) of this domain (6.9) /3(F) = min I~1xEF
This idea is formulated in the original article for arbitrary random vectors. But only in the case of standard normal random vectors there is a simple relation between this index and the probability content of half spaces. The p.d.f, of the standard normal distribution i n / R " is
The level curves of this function are circles around the origin. If we now consider a domain F in //~'~, we have
maxf(~) = f(~i~ I~1)~EF
(6.11)
The maximum of the p.d.f, in F is at the points of F with minimal distance to the origin, i.e. whose euclidean norm is minimal. In the case of a linear function gz(a~) = / 3 - o t T - x
(6.12)
with ~ i =n 1 c~i2 = 1 and t3 > 0 due to the rotational symmetry of the standard normal distribution
H:~(gp(X) _< 0) = / P ( / 3 _< otT. X ) = /P(/3 < X l ) = ~(-/3).
(6.13)
If we define FZ = {z;gZ(x) _< 0}, we have for these domains, which are half-spaces i n / R n an one-to-one mapping between the probability _/P(X E F~) and the reliability index/3(Fz) = / 3 of FZ. There are also corresponding inequalities between the probability contents and reliability indices of two half-spaces F1 and /'2. If/3(F1) =/31 and/3(F2) = /32, then /31 >/32 r P(F1) < P(F2). (6.14) Here a larger reliability index corresponds to a smaller probability content and vice versa. But in general this is true only for such half-spaces. For linear limit state functions we have this one-to-one relation between the reliability index and the probability content of the failure domain. I f / 3 ( F ) is known, we get i m m e d i a t e l y / P ( F ) = ~ ( - / 3 ( F ) ) . But in the case of domains defined by non-linear functions the situation is different. Here we cannot compute the value of P ( F ) from the reliability index. As outlined for example in [89], p. 57 we can have failure domains Fa and Fb such that for the reliability indices we have/3(Fb) < /3(Fa), but for the probabilities /P(Fa) > /P(Fb). Therefore it is problematic to use this index for estimating the reliability of structures.
87
In the years 1975-1984 several a t t e m p t s were made to generalize these relations between the index and probability contents to nonlinear functions. The basic idea in this connection was to replace the nonlinear function g in the point x 1 E F with minimal distance to the origin by a linear function gL defined by
gL(x) = ( v g ( x l ) ) T . (x - x 1)
(6.15)
i.e. by its first order Taylor expansion at the point x 1. The failure domain defined by this function is a half-space FL and we have JP(FL) = (P(--Ixl) = The proposed approximation method was therefore: 1. Calculate on the limit surface G = {x; g(x) = 0} the point x I with minimal distance to the origin, i.e. the point with Ixll = mAn Ix[. XEF
2. Replace then g by the first order Taylor expansion given in equation (6.15) gL at x l . 3. Approximate the probability ~ ( F ) by 1P(FL) = qh(--Ixll). The first problem here is: W h a t is to be done if there are several points x 1, . .., x k on the limit state surface ]xl[ . . . . . [xk[ = minx~F Ix[. It may even happen that for whole surface parts M of G we have lyl = minxeF I~1 for all y E M. For example consider the n-dimensional sphere with the center in the origin with radius j3 F = { x ; ~ 2 - ~ i ~ l x ~ _< 0}. Here we have for all points y on this sphere lyl = m i n x e F Ixl. For this case an analytic solution is available, but what to do if only a part of the limit state surface is part of this sphere? To solve the problem of several minimum points it was proposed to make a Taylor expansion gk at all these points as in equation (6.15) and then to i calculate 1 -- .hgg(Nk=l{X;gL(x) > 0}). But this can be done only numerically. For the case of surface parts having minimal distance no useful approximations were proposed. The essential problem was the quality of the approximations. It is possible to find any number of examples where the method is good and also any number of counterexamples with insufficient results. But such reasoning with examples can not replace a sound general mathematical justification. In some papers (for example [61]) quadratic approximations instead of linear were considered. But it was unclear how to choose these approximations and if so an improvement is obtained. Only by applying methods of asymptotic analysis as described in the last chapter, this question can be answered. 6.2.2
Generalization
to non-normal
random
variables
The concept of linear approximation can also be used for non-normal r a n d o m vectors. This is done by transforming them into normal vectors. If an n-dimensional random vector X = ( X 1 , . . . , X,~) consists of n independent r a n d o m variables X 1 , . . . , X,~ with c.d.f.'s F~(xt),..., Fn(xn) and with positive continuous p.d.f.'s f l ( X l ) , . . . , fn(x,~), it can be transformed into a standard normal random vector U with independent components. 88
Such a transformation is given by T:/~"
--~ ~ " , x F-+ u = ((1)-1 [ F l ( X l ) ] , . . . ,
(6.16)
The transformation determinant d e t ( J T ( x ) ) is given by det(JT(X)) = f i i=1
fi(xi) l(ul)
(6.17) "
This gives with transformation theorem for multivariate densities for the density ] ( u ) of the random vector U the form
f(u)
=
:
i=l
f~(x~)l det(JT(x))l
l~l(~ti): i=1
-~ = 1-I f~(x~) r I ~l(ui) i:1
1:1
(27r)-n/2exp ( - - ~ ' U 1 2 )
(6.18)
fi(xi)
9
By such a transformation the original failure domain F = {x; g(x) < 0} is mapped into the domain T(F) = {u;g(T-lu) < 0}. For this domain we can use the approximations described above. At the point u 1 with minimal distance to the origin with respect to T(F) the function g(T-lu) is replaced by its first order Taylor expansion and then for the domain bounded by this linear function approximations are made. With this a method was developed for approximating arbitrary failure domains in the case of a random vector with indepedent components with positive continuous p.d.f's. If the p.d.f, is zero in some parts of ~,~n, it is still possible to apply this transformation, but these parts might be mapped into infinity. This can cause problems for numerical algorithms which search the minimal distance points. A further generalization was found by Hohenbichler and Rackwitz [72] in 1981 using the Rosenblatt-transformation. This transformation, which was described first by Rosenblatt in [120], is a transformation T : ~'~ ---+ J/~'~, which maps an arbitrary n-dimensional random vector X with positive p.d.f, into a standard normal random vector U with independent components. This transformation is defined by .1
=
(6.19)
[Fl(xl)]
ui = +-1 [Fi(xilxl,...,Xi_I)] it n
Here 211
.~
(~)--1 [ Y n ( X n l X l , . . . , X n _ l ) ]
"
Fi(xilXl,..., xi-1) is the conditional c.d.f, of Xi under the condition X 1 =
...~Xi_
I ~
X i _ 1.
89
The problem in this approach is the calculation of the conditional c.d.f.'s in the case of dependent components. In [72] it was proposed to find t h e m by numerical differentiation, but since such methods are numerically not very stable, no example is known to the author where this was done. Further the results m a y depend on the ordering of the random variables as was pointed out by Dolinski [52]. These methods described in this section are known as F O R M (First Order Reliability Methods), since here only a first order Taylor expansion of the limit state function is made. In the next section asymptotic approximations for normal random vectors are derived and in the next chapter it it shown that such transformations to normality are not necesssary for the derivation of asymptotic approximations.
6.3
SORM Concepts
As already said, additional to the FORM approximations based on linear Taylor expansions of the limit state function at the m a x i m u m point of the normal density, SORM (Second Order Reliability Methods) were studied in the late 70's (see [61]), where a second order Taylor expansion of the limit state function is made. T h a t means that we use also the Hessian H g ( ~ ) of the limit state function to fit an approximation. In this context it is necessary to distinguish between two questions: 1. The calculation of the c.d.f, of a quadratic form of a normally distributed n-dimensional random vector X
gq(X)
= a + bT X + 1-XT C X .
2
(6.20)
with a E / R , b an n-dimensional vector and C an n x n matrix. 2. The derivation of approximating quadratic forms gq ( X ) for arbitrary limit state functions g(X) of a normal random vector X , which give an " optimal " approximation for the failure probability. These two problems are different. In the first case a quadratic form of normal r a n d o m variables is given, i.e. it is known that the limit state function is exactly such a quadratic function, even in the regions far off from the minimal distance point. Possible improvement are given by Tvedt ([133],[134] and [135]), Wu/Wirsching [141] and T o r n g / W u [132]. They calculate instead of the asymptotic approximation for the probability content of the paraboloid the exact probability content by higher order expansions or transform methods. This gives better results if in fact the limit state surface is exactly a second order surface, elsewhere the effect of this method on the results depends on the problem. Naess [104] gives bounds for the distribution of quadratic forms.
90
In the second case the problem is to find for a limit state function which is a quadralic form of a normal random vector such a quadratic form, which gives in some sense a good approximation for the failure probability, i.e. we would like that P r ( g ( X ) <: 0) ~ Pr(gq(X) < 0). In [89] in chapter 4, p. 66-69, for example, the two different problems are merged and an approximation method, which should improve the asymptotic concept outlined in the following, is proposed. not
6.4
A s y m p t o t i c SORM for Normal Integrals
If we consider failure domains in the space of i.i.d, standard normal r a n d o m variables, failure occurs in general, if at least some of the r a n d o m influences have extreme values, for example if there are very large loads or if the resistances are small. For realizations near the mean vector o the structure normally is intact. Therefore we can assume in general the failure domain has a large distance to the origin. Let be given a failure domain F with distance/30 to the origin. The failure probability/P(/~) is then
i=1
Making the substitution ~ ~-* y = / 3 o l x , this integral over the domain /~ is transformed into an integral over the domain/3 o 1/~ such that now
(2~r)n/2
exp
-
i=1
This is an integral over a domain with distance one to the origin. Studying now the asymptotic behavior of the integral 1(/3) = (2~r)n/2
exp
--~-
,
~olP we get asymptotic approximations for ~o(~), since I(flo) = 1P(fi). The domain/3 ol/~ is now taken as fixed. So the problem of approximating an integral with large distance/30 to the origin is transformed into the problem of approximating a Laplace integral over a fixed integration domain as a p a r a m e t e r approaches infinity. This integral 1(/3o) is imbedded in a sequence of integrals depending on a parameter /3. For these integrals asymptotic approximations are derived as /3 --~ c~. If the p a r a m e t e r va]ue/3o is large at the integral of interest, the value of the asymptotic approximation for this integral will be a good approximation for this integral in general. 91
In this section the results of the last chapter are applied to normal integrals. Given is an n-dimensional integral in the form , F
d~=
i:1
F
with F C / R " . Now we can construct a sequence I(~) of integrals by defining the domains ~ F = {~;/~-ax 6 F } and setting
I(~) = (2~r)-'~/2 / exp ( - 1 , ~ , 2 ) dz.
(6.25)
•r With the substitution x ~ y = / 3 - 1 z this gives /3'~ / e x p I(/3)- (2~)~/2
(--~,z,2)
d~.
(6.26)
F
These integrals are Laplace integrals. The integration domain does not change. Studying the behavior of these integrals as/3 --+ ~ , we get information about the asymptotic probability content of/3F as/3 ---+co. Essential for the asymptotic behavior of these integrals is the structure of the domain F in this subset of F where the function Ix[ attains its global minimum with respect to F. Using concepts of asymptotic analysis it is possible to obtain approximations also for the cases where the minimum is at several points or on a whole surface part. If there are several points x l , . . . , x k where the minimum occurs, F is split up into k subsets F 1 , . . . , Fk such that in each such subset lies one point. Then for each domain an approximation is derived and then they are added. To simplify the following derivations we consider with the exception of theorem 56, p. 93 always only the case of a single minimum point or surface part. The first result is about domains i n / ~ n , which are bounded by hyperplanes. It was published 1964 by Ruben [122]. T h e o r e m 55 Given are n affin-linear functions gi(x) = aT(x -- ~*) where the ai's and x* are constant vectors with d e t ( a l , . . . , an) 7s 0 and x* = ~ i = l 7iai ~2
with 7i > 0 for i = 1 , . . . , n . For the set F defined by F = Ni=l{~,gi(~ ) < 0}, n
.
--
the followin 9 asymptotic approximation is obtained for ~(13F) ~'~(fl~*) ~-'~, ~ ---, ~ . i P ( ~ F ) ,,- t d e t ( a l , . . . , a,~)l ]-]i~1 7i
92
(6.27)
~ f ( f ) ~ (g.zs-))
(6.28)
PROOF:
See [33], corollary 1, p. 95.
[]
The first result for domains bounded by non-linear functions was published by the author in 1984 (see [23]). It is given in the form as it was published, i.e. for the case of several minimal distance points. T h e o r e m 56 Given is a twice continuously differentiable function 9 : 1t~'~ ~ 1F~ such that for all points x on the surface G = {~; g(x) = 0) always V g ( ~ ) 7~ o. The set F is defined by F = { x ; g ( ~ ) < 0}
(6.29)
min ~l = 1.
(6.30)
and further ~EF
Assume that the following conditions are fulfilled: 1. The function I~1 attains its global minimum one with respect to the hypersurface G = { ~ ; g ( ~ ) = 0} exactly at k points x l , . . . , x k. 2. At all these points x l , . . . , x ~ all main curvatures of the surface G are less than one. Then we have for 1P(/3F) the asymptotic approximation /P(C~F) ~ ~(-r
(1 - m j ) -1/2
, ~ ~ ~.
(6.31)
i=1
Here the ~ii's (j = 1 , . . . , n - 1) are lhe main curvatures of the surface G at lhe point ~i. PROOF: Proofs are given in [23] and [33]. U,fing the compactification l e m m a we can assume that F is a compact set. 1 n We define f ( ~ ) = - 7 ~ i = 1 z2 and use theorem 46, p. 67. Since o ~ F, the m a x i m a of f with respect to F are on the boundary of F. These are the points with minimal distance to the origin. We assume first t h a t there is only one such point ~1. By a suitable rotation we can always achieve that ~1 = ( 0 , . . . , 0, 1) and that the main curvature directions of the surface G at this point x I are the directions of the axes e l , . . . , e ~ - l . We have then
exp - g /3F
V? @.
(6.32)
d~.
(6.33)
i=1
Making the transformation y ---+a: = / 3 - 1 y we get
-: (27r)-n/2fl n /
exp
--
F
x i~l
93
Using the fact t h a t I V f ( x l ) l = [xll = 1, we get with t h e o r e m 46, p. 67 the following a p p r o x i m a t i o n
P(/TF) = ~
(2~)-~/~exp
- 5- ~2~,~
d~
(6.34)
i=1
F
exp(-/72/2) 1 ~v~ ,/I det(H*(~l))l" Here, since f i J ( z e l )
=
--6ij, we get
H*(~I) = (fij(~l)_
for the modified Hessian IVf(~lll_iy,xl,'~ IVg(~l)l
--(-'~'
u
~
(6.35)
J J i , j _ - i ..... . - 1
. . . . . . ~-1 " g"(")l)~,:l
I V g ( ~ 1)
Further we obtain for the d e t e r m i n a n t det
[det(H*(xl))l
((-Sij
gij(xl)
"~" "
iv-7~l~,,,-:, ....... 1)
\
act ((6ij + Ig~J(~) ' ~1:~'~=:
)
...... - 1 /
(6.36)
.
Since due to our assumptions the main curvature directions are parallel to the axes, the mixed derivatives vanish. For the second derivatives (see equation (2.27), p. 15) we get so
gss(~)lVg(~)1-1
= - ~ l j fo~ j = 1 , . . . , n -
1.
(6.37)
In equation (2.27), p. 15 the relation between the derivatives and the curvatures is given. Here ~lj is the m a i n curvature in the direction of the xj-axis at the point x 1. Finally we get
IP(13F)
e -[3~12
--
_
1
v~Z x/ru;(1
, /3 --+ co. -
(6.38)
,~j)
Using Mill's ratio (2.89), p. 28 gives further /P(,SF) ,-,.,
r
qri;:;(i
, /3 --~ oo.
(6.39)
-
This result is independent of the chosen coordinate system. This gives the result for the case of one m i n i m u m point.
94
In the case of several minimum points the asymptotic approximation is calculated for each point and then these contributions are added. [] In the last equation we get the main curvatures of the surface since they are related to the first and second derivatives (equation (2.27), p. 15). The main curvatures in the minimal points must be less or equal 1, since elsewhere it would not be a minimal point anymore. This result demonstrates the relevance of the second derivatives of the function g at the minimal points. For an useful approximation, which is exact in an asymptotic sense, i.e. the relative error approaches zero as /3 ~ oo, it is necessary to take in account the second derivatives. In the sense of approximating the given function g by a simpler function this means t h a t g is replaced by a quadratic function gq in the form n--1
gq( ) =
i7-1
~i;r?. 2 '
(6.40)
(For details see [20] and [23]). This function defines an hyperparaboloid by aq = gq( ) = 0}. This gives the following method for the computation of asymptotic approximations for normal integrals in the case of a finite number of minimal points: 1. Calculate on the limit surface G all points x l , . . . , x k with ]~1] . . . . . F kl = rain =/30. ~EG
2. Calculate at all these points ~1 . . . , x k 1 , . . . , n - 1) of the surface G. 3. If all gij < / 3 0 1, approximate
the main curvatures ~ij (j -
P(F) by
/ P ( F ) ~ ~(-/?o)
(1 - / 3 o ~ i j ) -1/2
.
(6.41)
iml
The name SORM is used for all approximation methods wich use second order Taylor expansions of the limit state function. Now mainly this form of second order approximations based on the asymptotic considerations above is used. The following theorem considers the case that the failure domain is the intersection of several domains defined by gl, 9 9 gm and the m i n i m u m point lies on the intersection of some of the boundaries. This question is of interest in system reliability.
95
T h e o r e m 57 Let be given rn twice continuously differentiable functions g l , . . . , g m : j~n ~ h~ such that gi(o) > 0 for at least one i E { 1 , . . . , m } . Let be defined F = N~n=l{y;gi(y) <_ O} and assume that there is only one point y* E F where the function lYl achieves its global minimum with respect to F. Assume that the following conditions are satisfied:
1. The gradients ai
:-
~Tgi(y*) are linearly independent.
2. F o r t = 1 , . . . , k with 1 <_ k <_ m we have gi(Y*) = 0 and for j = k + 1 , . . . , m instead gj(y*) < O. 3. The vector y* has a unique representation in the form k
y* = - E T i a l
(6.42)
i:l
with 7i > 0 for i = 1 , . . . , k. 4. The (n - k) • (n - k)-Matrix H*(y*) is regular. Here the matrix H*(y*) is defined by H*(y*) = A T ( y * ) H ( y * ) A ( y *) with
(a) H ( y * ) =
{
- E =k ,
ij
*
))
i,j=l,...,n
(b) A ( y * ) = ( a k + l ( y * ) , . . . , a n ( y * ) ) . Here the a k + l ( y * ) , . . . , a n ( y * ) are an orthonormal basis of the tangential space of the ( n - k )-dimensional manifold N = niL1 {y; gi(u) --- O} at v* 5. Let be defined hi(x) = (Vgi(y*))T(y*--x) f o r t = 1 , . . . , k. These functions define a domain F = N/k=,{x; hi(x) <_ 0}. Then the following asymptotic relation holds ~ ( ~ F ) ~ ]det(H*(y*))l-1/2p(~J?), fl ~ ~ .
(6.43)
The probability 1P(/3[') in this equation can be approximated using theorem 55, p. 92. PROOF: See corollary 3 in [33].
[]
The meaning of this result is that, if the functions gl,. 9 gk are replaced by their linearizations in the minimal distance point y*, one obtains an asymptotic approximation for the probability /P(/3F) is found by first approximating the linearized domain and then multiplying it by a factor determined by the curvatures of the manifold N~=l{x; gi(x) = 0}. In the case of a single limit state function, (I)(-j3) is obtained by linearizing the limit state surface at the minimal rt--1 distance point and then the factor YIj=l (1 - n u ) -1/2 gives the second order correction. 96
T h e o r e m 58 Given is a twice continuously differentiable function g : 1l~~ ~ 1~ with g(o) < O. A s s u m e f u r t h e r that: 1. The set M of all points at in F with Ix[ = minyeF[y[ = 1 is a k-dimensional submanifold of the (n - 1)-dimensional manifold G =
{~;g(~) = 0}. 2. For all x E M we have Vg(at) r o. 3. For all at E M the (n - k - 1) x (n - k - 1)-matrix H * ( x ) H*(at) = A T ( a t ) H ( x ) A ( a t ) with:
is regular. Here
(a) H ( ~ ) = ( - & j -IVg(at)l-lgij(at))i,j= 1..... , . (b) A(at) = ( a n - k + l ( X . ) , . . . , an-l(:r)). Here the a n - / c + l ( ~ ) , . . . , a n - l ( X ) are an orthonormal basis of the (n - k - 1)-dimensional subspace of the T v ( x ) which is orthogonal to the k-dimensional tangential space TM(~). Then the following asymptotic relation holds
ZP(ZE) ~ 0(/~lk + 1). r((k + 1)/2) ~(k+1)/2
f M
dsM(w)
~/I det(H*(x))t ' fl --+ oo.
(6.44)
Here d s M ( ~ ) denotes surface integration over M . O(xlk) is the complementary c.d.f, of the xk-distribution (see equation (2.gl), p. 29).
PROOF: Using the compactification lemma we can assume that F is compact. We use theorem 49, p. 74. Here we have f ( x ) = - 1 / 2 ~ = 1 x/~ giving
ZP(C~F) = (2,r)-'~/2r ~ f exp(~2f(.~)) d~. F Now using the theorem we get, since
(6.45)
Ivf(~)l = 1 for a: c M that dSM(X)
Z~(~F) ~ e-~2/~-l(2=)-(k+l)/~
M
",/I det(H*(x))l
(6.46)
dsM(~)
(6.47)
With equation (2.91), p. 29 we get
JP(flF) ,'~ O(fllk -4- 1) r((k A- 1)/2) / 27r(k+l)/2
M
X/I det(H*(x))l '
[]
This is the result of the theorem.
97
The last result shows the influence of the dimension k of the set M = {y; y E G, [Y[ = 1} of minimal distance points on the asymptotic magnitude of P(/3F). Ifk = 0, we have that this probability is of order O ( - f ) , for 0 < k < n - 1 instead it is of order e x p ( - f 2 / 2 ) f k-1 ~ O ( - f ) f ~ . Already in the papers Birndt/Richter [14] and Richter [117] the asymptotic order of the probability was derived, but not the exact value of the constant. 6.4.1
The
generalized
reliability
index
1979 Ditlevsen [48] defined for failure domains F a generalized reliability index BIG(F) by
fa (r) = - r
1(~(F)).
(6.48)
This index is equal to the number x on the real axis, where the standard normal integral from - o o to x has the value 1 - ~ ( F ) . Apparently this it, contrary to the Hasofer/Lind index, an one-to-one mapping between /P(F) and fa(F). It is strictly monotone /3a(F1) > fa(F2) ~ ~O(F1) < s176
(6.49)
The definition of the generalized reliability index gives Z'(fF) = ~(-fa(fF)).
(6.50)
For large values of f a ( / 3 F ) Mill's ratio (2.89), p. 28 gives
r
~
exp(- 89 (fF) 2) v~fG(ZF))
(6.51)
This yields then ln(t.(Zr))
~
ln(exp(- 89 (flF) 2)
v ~ f c (ZF)
(6.52)
Neglecting terms of lower order we get
ln(~(ZF)) 3c(ZF)
~
-~c(9F)~/2 ,/21 ln(ZP(~F))l.
(6.53)
Here we see that the generalized reliability index is asymptotically proportional to the square root of 2 ln(/P(/3F)). A problem is the use of this index in the engineering literature. In many cases, for example in optimization, we need the probability of failure and not the reliability index. But as outlined in chapter 3, example 3.2, p. 38 an asymptotic approximation for logarithm of a function does in general not give an asymptotic approximation the function if it is simply put in the exponent.
98
6.5
T h e A s y m p t o t i c D i s t r i b u t i o n in t h e Failure Domain
For further refining asymptotic approximations by importance sampling methods, it is useful to derive the asymptotic distribution of the r a n d o m vector X in the failure domain, i.e. under the condition g ( X ) <_ O. Let be defined by a twice continuously differentiable function g as usual a failure domain F and a limit state surface G. We consider the conditional distribution of X under the condition g ( f l - l X ) _< 0 for fl ~ c~. Assume that: 1. The surface G has exactly one point x* with ]x*] = rain Ix] = 1, XEG
2. At x* we have for the gradient Vg(x*) # o, 3. At x* all main curvatures ~1, . . . , ~ . - 1 of the surface G are less t h a n unity. By a suitable rotation of the coordinates we always can achieve that x* = ( 0 , . . . , 0, 1) and the main curvature directions of the surface G at x* are in the directions of the coordinate axes e l , . . . , e , ~ - t . Then the second mixed derivatives of g at x* vanish. In a sufficiently small compact neighborhood V C F of x* we can introduce a local coordinate system ( h i , . . . , h a - l , g) with the hi's the arclengths in the directions of the main curvatures on the surface G at x*. Given is so an one-to-one mapping
T : V ---* U,x ~ u = ( h l , . . . , h , - 1 , g )
(6.54)
with U C hq~~ being a neighborhood of the origin i n / R n. We define a sequence of random vectors X Z by
XI~v X ~ -- ~ ( X ~ f l Y ) ' (1a denotes the characteristic function of the set A). From theorem 56, p. 93 we get that
1P(X E flF) = ~='(g(fl-lx) < O) ,~' ~ ( X
~ flV)
(6.56)
as fl ---* 0o. Therefore the asymptotic probability content of flF is equivalent to the content of flV as fl --. oc. A further sequence YZ of random vectors is defined by
(6.57)
Yp =
This is a transformation of X Z into scaled local coordinates.
99
T h e o r e m 59 Under the conditions above we have for the the random vectors
Y Z that y p V
y
(6.58)
as /3 - ~ o o .
Here Y is an n-dimensional random vector with independent components. The distributions of these components are then Yi
v
N(0,(1 - gi) - i ) for i = 1 , . . . , n -
y~
v
Exp(iVg(.)1_1)"
1,
(6.59) (6.60)
PROOF: This theorem is proved using the Cramer-Wold-method (see theorem 29, p. 33). For this we consider the random variables exp(s ~ i = l iYz,i) with t = ( t l , . . . , t , 0. The expected value of this variable is the value of the moment generating function of ~ i = i tiYp,i a t s. If we show that this function n converges as/3 ~ oc to the moment generating function of ~ i = l tiYi for all ti's for any s in an open interval with the origin in its interior, we have shown the convergence of the distribution of YZ to Y. The open interval may be different for different vectors t. We have for the moment generating function
tiY~,i
exp(s
L
1
=
/
exp
1~22y2] dy
p~-i
siE=ltihi(/3-iy)-/32s&g(/3-1y)-- ~
( 2 ~ ) . / 2 z , ( x ~ /3v) v
=
/3n
(6.61)
J
i=1
i=1
1Ex ~
/ exp
n--1 .~(~ _/32 s t . g ( ~ ) + ~
(27r)n/21~(X ~ /3V) V
dx.
i=l
21 From theorem 56, p. 93 we get as/3 --~ oo that
z p ( x ~/3v)
~
~(-/3)
(6.62)
~/Hi2"-11 ( ] -- /~i ) e -~32/2
100
1
For the derivatives of the functions f and f l at x* we find
of(~*) OXn O2f(~ *) OxiOxj Of~(:~*) Oxi
-
~t.lVg(~*)l- 1,
-
5ij(--1 -
-
h'i(~*)sti = sti
(6.63)
st,giJ(x*))
for
i,j = 1 , . . . , n - 1,
(6.64) (6.65)
for i = 1 , . . . , n - 1.
Since the hi's are the arclengths along the main curvature directions at :e* we have h~(:~*) = 1. Since f has a global m a x i m u m at x* with respect to F , due to the Lagrange multiplier rule the first partial derivative with respect to zn is negative and all other first derivatives are zero. From this follows if we take the interval a r o u n d zero so small t h a t 1l~7g(x*)Istnl < 1 we get then
Ivf(~*)l
=
str, lVaO'*)l.
1 -
(6.66)
For the m a t r i x
)
iVg(:~.)l~ (:~*) ~3 , = 1 ....,n-
(6.67) 1
we get with these results for the elements
fij(m,)
IVf(m*)[ ij,
,,
=
ivg(**)l g ~* ) 1 - ~t,~Ivg(~*) (.)) -~,j (1 + (st. + IVg(**)l [)g~
=
-~,~
(6.68)
1 + Ivg(**)-------q
Using t h e o r e m 53, p. 81 yields then
E exp(s
t,Y~,,)
-
~
exp \
=
exp
~i)_~t?
-
tZ,~lVg(~*)l -~ t ~ ~ :
i=1
s2~(1
(6.69) 8
-
~. ,)
1 ~
1
2
1-st~llVg(x*)1-1"
i=1
This is the p r o d u c t of a m o m e n t generating function of a normal distribution rt--1 with m e a n 0 and variance ~--~-i=1 t~(1 - ~i)-1 and an exponential distribution with m e a n t,~lVg(:e*)l. T h e Cramer-Wold-device gives then the result. []
101
The meaning of this result becomes clearer if we consider some examples. If xi = 0 for i = 1 , . . . , n - 1 the surface has no curvature at z*. The arc lengths a r e then simply the distances in the corresponding coordinate directions and they have a standard normal distribution. If xi --+ 1 for i = 1 , . . . , n - 1 the local shape of the surface approaches a spherical shape. The variances of the arc length (1 - hi) -1 go to infinity. If we consider the limit case of a sphere defined by the function g~(z) = / 3 - ~ x ~ we see that the conditional distribution of X under the condition g~(X) = 0 is the uniform distribution on the sphere. If instead ni ~ -~x~ for i = 1 , . . . , n - 1, the probability mass in the failure domain is concentrated more and more around z* and the variances approach zero, since the surface is curved more from the origin into domains with smaller values of the p.d.f..
6.6
N u m e r i c a l P r o c e d u r e s and I m p r o v e m e n t s
The main numerical problem in deriving such approximations is the computation of the minimal distance points. The first method developed by Rackwitz and Fiessler [110] specially for this problem is in fact a simplified modification of an algorithm of P~eni~nyj [109]. An overview of numerical algorithms for minimizing functions under constraints is given in the article of Liu/Der Kiureghian [87]. To avoid problems with numerical instabilities Der Kiureghian et al. [46] proposed to use instead of a paraboloid determined by the second derivatives at the m i n i m u m point one fitted through points in some distance to the minimal point. Further in [45] it is shown that the main curvatures of the limit state surface can be found from the final step lengths of numerical algorithms converging towards the minimal distance point. The derived asymptotic approximations give in m a n y cases sufficient approximations for small failure probabilities. But in some cases it may be necessary to improve the estimates. For this in the articles [125] and [75] was proposed to improve the estimate by importance sampling. Other articles about importance sampling methods are [36], [77] and [80]. nigher order approximations for normal integrals are derived in [35]. Further results in this direction, where the asymptotic form of the distribution in the failure domain as derived in the last section generalized to non-normal distributions as outlined in the next chapter, is used for importance Monte Carlo methods, can be found in [92] and [91].
102
6.7
Examples
The following two examples and figures are from the paper [23] in the Journal of the Engineering Mechanics Division, ASCE, Vol. 110 and are reprinted with permission. EXAMPLE: In the two dimensional space is given a sequence of elliptical domains by the function gt~(z) = / 3 ~ - ((zl/a) 2 + z~) with a > 1. On the curve ga(~) = 0 there are two points with minimal distance to the origin zl = (0, 1) and :e2 = ( 0 , - 1 ) . Using theorem 56, p. 93 gives the following approximation Q(fl) for P(3) =
F(go(~) ___0) 2~(-~) Q(fl)
"" x / 1 -
(6.70)
a -~"
In the picture the approximation Q(3) is compared with the exact probability P(/3) for the values a = 1.25, 2 and 4. O
a c/~J
P(P)
.9
i
I
P
t
.8 1.
2.
3.
4,.
Figure 6.1: Probability content of an ellipse
103
EXAMPLE: Given are n i.i.d, exponential r a n d o m variables Y1,... ,Y,~ each having c.d.f. F(y) = 1 - e - y . Further is given a limit state function
g(y) = n+av/-n-Lyi.
(6.71)
i=1 T h e t r a n s f o r m a t i o n into s t a n d a r d normally distributed variables is given by xi = - ( I )-1 [ e x p ( - y i ) ] for i = 1 , . . . , n. In the t r a n s f o r m e d space the function g has the form n
.q(~) = ~ ln[(I)(--zi)] + n + ~v/-n. i=1
(6.72)
W i t h the Lagrange multiplier m e t h o d we find the only m i n i m a l point zl on the limit state surface G = {~; ~(:e) = 0}, which is z l = (-(I)(e - a / v ~ -
1), . . . , - ~ ( e
-~/x/-~ - 1 ) ) .
(6.73)
T h e distance of this point to the origin is x/-~(e -~/'rr 1)). T h e curvature ~l(z) of the surface at this point is in all directions constant and equal to ~(_z) - z where z -- ~ ( e -a/x/-~ - 1). T h e n we get with t h e o r e m 56, p. 93 the a p p r o x i m a t i o n
9
(1 -
l(z)
(6.74)
In the following picture this a p p r o x i m a t i o n is c o m p a r e d with the exact probability, for 0 < a < 3 and n = 10. and with the linear a p p r o x i m a t i o n ~ ( - [ z l ] ) .
[]
104
exact
quadratic approx~mahon
~ :
. hne~]l"
"~ ~ ~
.1 ~,. 10
approximation !
\
10-2
"N, \
'
"N
\
....
"~
\ \
\ \
1o-%
(X
3.
Figure 6.2: Sum of exponential random variables
105
Chapter 7
Arbitrary Probability Integrals 7.1
P r o b l e m s of t h e T r a n s f o r m a t i o n M e t h o d
In the last chapter it was described how to transform non-normal random vectors into standard normal vectors to obtain asymptotic approximations for failure probabilities. Two points of this method were unclear: 1. Why is it not possible to calculate approximations in the original space of random variables? 2. What is the meaning of the parameters of the Solution in the standard normal space in relation the the original random variables? Concerning the first point it was outlined by the author ([26] and [29]) that such methods can be used also in the original space of the random variables. This solves the second question too, then now the parameters of the solution depend on the distribution parameters of these random variables. To understand the transformation method and to find this solution it is only necessary to clarify which function is minimized in this method. An error in the transformation method appears to the author a misunderstanding of the mathematical meaning of the word transformation". There are two different types of transformations, speaking loosely, in mathematics. The first type changes the structure of the problem; for example to compute the product of two numbers a and b, by taking logarithms the problem of multiplication is transformed into a problem of addition, since, as commonly known ln(a. b) = ln(a) + ln(b). "
106
On the other hand the second type of transformation does not change the structure, but makes only a reparametrization. For example, if an integral over a domain in the two-dimensional space in cartesian coordinates is transformed into an integral in polar coordinates with the integral transformation theorem. Here only the coordinates are different. Maybe by such a reparametrization some functions appear in a simpler form, but this must correspond to some properties of the structure before the transformation. The basic misconception of the transformation method is the belief that this is a transformation of the first type, but in reality it is of the second type, since the structure is not changed. Only a reparametrization is made, the problem of integration remains the same. If we understand this, it becomes apparent that all results and methods of the transformation method must have a natural interpretation in terms of the original random variables and their parameters. To understand, what is done in the minimization procedure of the method, we have only to study the structure of the function, which is minimized. The correspondence between ui and xl is given by
ui = 42-1(Fi(xi)).
(7.1)
But the c.d.f. Fi(xi) can be written as an integral over the exponent of the log likelihood function li( xi ) = ln(fi( xi ) ) as Xi /*
Fi(xi) = / exp(li(y)) dy.
(7.2)
~00
We get so for ui that
ui = , - 1 [ ~_ff'exp(li(y)) dy] .
(7.3)
With the relation (I)-l(x) = -(I)-1(1 - x), this can be rewritten as = _e-i
1-
exp(l
(y)) dy
.
(7.4)
--(30
Since f _ ~ exp(li(y))dy = f~_~o fi(xi)dxi -- 1, this yields
ui = - O -1
[j
exp(li(y
,, ]
dy .
(7.5
,
i
As xi ---* oe this integral can be approximated by the Laplace method (see corollary 37, p. 50), giving
exp(li(y))dy ..~ ;~i
exp(l,(xi) + l~(xi)(y - xi))dy = Xi
107
ii~(xl)l
Inserting this into equation (7.5) we obtain u, ~ _ ~ - 1 [exp(~'(x'))]
L II (xi)l
J'
(7.7)
Or, in another form
exp( li( xi ) ) ... • ( - u i ).
(7.8)
IIs Using Mill's ratio f f ( - y ) .-~ ~ l ( y ) / y as y --~ oo yields then
exp(li(xi)) .~ ~l(ui)
II (x )l
(7.9)
ui
By taking logarithms and neglecting terms of lower order
exp(li(xi))
~
exp(---~-)
(7.10)
u2
,~
-2.11(xi).
(7.11)
If the ui's are positive and tend towards infinity, we get for the norm of the vector u the relation n
-2 E
li(xi) = X/~2 9l(~).
(7.12)
i=1
This can be stated as follows: Minimizing the norm I u l= IT(~)I in the standard normal space is asymptotically equivalent to maximizing the log likelihood function l(x) in the original space. If we consider the log likelihood function in the standard normal space, this becomes clear
l(u)
-
n ln(2~r)- -~ 1 ~-]u~ '~ -~
(7.13)
i----1
=
- 2 ln(27r)- ~1 ] u 12 .
This shows that in fact the minimization of the distance to the origin is equivalent to the maximization of the log likelihood. Since through the transformation into the standard normal space only the form of the failure domain is altered, but not the basic structure of the problem, it should be clear now that in general the maximization of the log likelihood function in the transformed space is in principle the same as in the original space. Knowing this it is possible to use asymptotic methods also in the original space. Here we do not have a geometric interpretation for the structure of the solution, but a probabilistic. 108
7.2 7.2.1
A s y m p t o t i c A p p r o x i m a t i o n s in t h e Original Space The
construction
of Laplace
integrals
The method described in the following is based on proposals of Freudenthal [65], Hasofer/Lind [69] and Shinozuka [126] for making analytic approximations of the limit state functions. Here with methods of asymptotic analysis a sound mathematical foundation is developed (see [26], [29] and [91]). This method is especially important for random vectors with dependent components, since in this case the Rosenblatt-transformation is not applicable in practice. Consider now a failure domain F in n-dimensional space where the log likelihood function of the distribution is given by l(x). To obtain an asymptotic approximation f o r / P ( F ) , first all points in F are located, where the log likelihood function is maximal. Since F is a failure domain, these points will lie in general on the limit state surface G, the boundary of F. For simplicity we assume that there is only one point x 1 E G with l(x l) = maxa~er 1(~) and that for all other points x C F always l(x l) > 1(~). If there are several points x l , . . . , xk on G, where the maximum is achieved, the failure domain is split up into k disjoint domains F 1 , . . . , Fk with tJFi = F such that in each Fi lies one zi- Then for each Fi with the following method an approximation for P ( F i ) is computed and the sum of the approximations gives an approximation f o r / P ( F ) . We define now
/)0 = ~ =
If - max/(x)'xer
(7.14)
Since F is a failure domain, where the p.d.f, is small in general, it can be assumed that maxm l(x) < 0. Elsewhere it would be doubtful to use asymptotic methods anyway. Then the failure probability can be written as ~(F) =/exp
(f~02~02)) dx.
(7.15)
F
We define now a scaled log likelihood function by
= l(z) N'
(7.16)
Then the integral ~ ( F ) can be written in the form P ( F ) = f exp(/~o2h(x)) dx. F
109
(7.17)
This is a Laplace type integral. If we consider the function J(/3), defined by J(/3) = / e x p ( / 3 2 h ( x ) )
dx,
(7.18)
F for this function asymptotic approximations as /3 -+ o o can be derived. The approximation value for/3o gives an approximation f o r / P ( F ) . T h e o r e m 60 Given is a p.d.f, f : ~'~ --+ Y~ with f ( x ) > The log likelihood function ln(f(~)) is denoted by l(~). A differentiable function g: R ~ -+ R defines the failure domain with boundary C : {~; g(~) : 0} and Vg(~) r o for all ~ ~ Assume further that:
0 for all at 6 J~'~. twice continuously F : { x ; g ( x ) <_ 0} G.
1. The function l(x) is twice continuously differentiable. 2. The function l(x) attains its maximum with respect to F only at the point at* 6 G. The conditions about the derivatives of l and g in theorem 46, p. 67 are fulfilled.
3. maxF l(x) < o. 4. The conditions of the compactification lemma 38, p. 53 are fulfilled for the integrals
(7.19)
exp (/32 h ( x ) ) d x
F where h(~) : l(~)//3~ with/3~ : . , / - m a x ~ F l(~,). Then as/3 --+ oo the following asymptotic relation is valid
/
efl2h(:~*) exp(fl2h(x))dx
..~
(21r)(n-1)/2
/~](Vl(at,,))Tc(x.)Vl(~c.)l
(~)n+l --
F
<~In+l.
e z%(m*) =
(7.20)
(27r)(n-1)/2iVl(x,)iV/I d e t ( A r ( x , ) H ( x , ) A ( x , ) ) [
~ r e C(=*) is the matrix of the cofactors of the matri= H(=*), i.e.
H(~*)= ("~
IVl(~*)I
O~g(~*)'~
.
(7.21)
The matrix A ( x * ) = ( a l ( x * ) , . . . , an-l(x*)) is an n x (n-1)-malrix. The vectors al(a~*),..., an-l(a~*) are an orthonormal basis of the tangential space TG(X*).
110
PROOF: The result follows immediately from theorem 46, p. 67 if we set [] f(z~) = h(x) and note that h(~) =/3o2l(x), see [26]. For the probability/P(F) we get as approximation ~ ' ( r ) --
(2~)(~-1)/2f(~*) . IVl(~*)lx/I det(H*(x*))l
(7.22)
Here H * ( x * ) = A T ( ~ * ) H ( ~ * ) A ( x * ) . In a similar way the result of theorem 57, p. 95 can be formulated for arbitrary probability densities. T h e o r e m 61 Given are m twice continuously differentiable functions gi ~:t. These functions define the set F -- Nm=l(w;gi(x) ~_ 0). We assume further that:
: j~n
--+
I. The function 1 attains its global maximum with respect to F only at the point ~* C F and maxF l(x) < O. 2. There is a k E { 1 , . . . , m } with gi(x*)
=
0 f o r t = 1 , . . . , k and
gj(w*)
<
O for j = k + l , . . . , m ,
(7.23)
and the gradients ~Tgi(x*) (i = 1 , . . . , k) are linearly independent. 3. The gradient ~71(~*) has a unique representation in the form k
(7.24) i=l
withTi >O f o r i = l , . . . , k . 4. The matrix H*(~*) is regular. Here we have U*(x*) = A T ( ~ c * ) H ( x * ) A ( x *)
(7.25)
with
(7.26) r=l
$',j'=l,...,n
and
A(x* = (al(x*),..., a~-k(x*)).
(7.27)
The al(x*),...,an_k(x*) form an orthonormal basis of the subspaee [span(gl(~*),...,Vgk(~*))] • in ~ n . 111
5. The conditions of the compactification lemma 38, p. 53 are fulfilled for the integrals
exp(/32 h(x))dx
(7.28)
F
where h(~) = l(x)//3g with/30 = X / - m a x x e F l(x). Then we have as/3 -~ c~ the following asymptotic relation
/ efl~h(X)dx,~(27r)(n_k)/2
e~2h(x.)
( ~ ) n+k
l-Iik=l 7iv/det(G)l d e t ( H * ( x * ) ) l " - -
F
. (7.29)
Here G is the Gramian of the gradients Vgl(x*), . .., Vgk(w*). PROOF:
This theorem follows from theorem 48, p. 71.
1-1
The quality of these approximations depends on the tail behavior of the p.d.f, of g ( X ) . If the tail has an exponential decay, the approximations should be good in general. If the deacy is very slow such approximations might give no sufficient results. As said already in the last chapter these approximations can be used as basis for importance sampling methods. Such a concept is developed in [91]. In a benchmark study [56] it was found that for structural reliability problems this method was more efficient than the competing ones. 7.2.2
Examples
The following examples are from the article [29], Journal of the Engineering Mechanics Division, ASCE, Vol. 117 and are reprinted with permission. EXAMPLE: Given are two independent x2-distributed random variables X1 and X2 with two degrees of freedom each having the p.d.f. 1 "xi" e_~d 2 for xi _> 0. fi(xi) = -~
(7.30)
The log likelihood function of the p.d.f, is then
li(xi) = - ln(4) + ln(xi) - xi/2.
(7.31)
The sum of these random variables is again a X2 distributed r a n d o m variable with four degrees of freedom. We take as limit state function g(xl, x2) g(Xl,X2)
=
21
112
-- X 1 -- X 2.
(7.32)
The log likelihood function has only at the point ( ~ , ~ ) a global maximum. The derivatives of the function are
Ol(z____~)_ 1 Oz~
zi
1 2'
02/(z) Ox~ -
1 x~'
02/(z~) - 0. OzlOx2
(7.33)
The exact failure probability is given by P(X1 + X2 > 21) = 7.15 • 10 -3.
(7.34)
The approximation with theorem 60, p. 110 gives instead P(X1 -4- X2 > 21) ,,~ 8.72 • 10 -3.
(7.35) []
EXAMPLE: This example is taken from [89], p. 78-79. In a similar form it appears also in [72] and [50]. Given is a two-dimensional random vector (X1, X2) with joint p.d.f.
f(xl, x2) = (xl + x2 + xlx2) exp(-(x~ + x2 + XlX2)) for xl, x2 > 0. (7.36) The log likelihood is then I(~1, ~)
= ln(xl + ~
+ ~1~)
- ~1 - x~ - ~ 1 ~ .
(7.37)
As limit state functions are taken
gl(xl,x2) g2(xl, x~)
g3(xl, X2)
18 - 3Xl - 2x2, = -x2, --- - x l .
=
(7.38) (7.39) (7.40)
The gradients are then -3)
Vgl(xl, x2)
Vg2(xl, x2)
=
(?1)
Vg3(Xl, X2) _ _ - ( O 1 ) .
(7.41) '
(7.42) (7.43)
To use theorem 61, p. 111 we need the point, where the log likelihood is maximal in the failure domain. There is only one such point ~0 = (6, 0). At the point (0, 9) there is only a local maximum and not a global.
113
T h e gradient of the log likelihood is l+x2
-1-x2
xl + x2 + xlx2 vl(
l,
.
=
l+xl
(7.44)
-1-Xl
xl + x2 + xlx2 At ~0 = (6, 0) the values are
VI(6, 0) =
= 1+6 6
(7.45)
1-6
5
For the density at (6, 0) we get at this point f ( 6 , 0 ) = 6e -6 ~ 1.49 x 10 -2 .
(7.46)
At this point g~(6, 0) = g2(6, 0) = 0 and g3(6, 0) = - 6 < 0. Therefore only the limit state functions gl and g2 are of interest here. For the G r a m i a n of the gradients of the limit state functions gl and g2 at (6, 0) we get
det(AT . A) = det ( 13 2
2) 1
=9.
(7.47)
T h e coefficients 71 and 72 are 71 :
5
5
]-~ and 72 = 5~-2.
(7.48)
As a p p r o x i m a t i o n we obtain ..~
f(6, 0)
(7.49)
71 "72 " ~ / d e t ( A T A ) 1.49 • 10 -2 1.47.3
-
3 . 3 9 x 10 - 3 .
T h e exact probability 1P(gi(X1,X2) < 0 for i = 1, 2, 3) is 2.94 • 10 -3. In [89] are two different approximations obtained by the t r a n s f o r m a t i o n m e t h o d : 2.68 • 10 -3 and 4.04 • 10 -3. T h e results are different, since if the Rosenblatt t r a n s f o r m a t i o n is used, the order of the r a n d o m variables can influence the result ([52]). But since this invariance is one of the basic requirements for useful algorithms, here is a further a r g u m e n t against the use of the transform a t i o n m e t h o d in these cases.
114
To be quite correct, in [89] still a further approximation is constructed by combining the two linear approximations to a polyhedrical, which gives as result 2.94 x 10 -3, also the exact result. But since for this no generalization, which can be used in more complicated cases, is developed, it appears to be of limited value. This example shows the essential advantage of the method of approximations in the original space. []
7.3
Sensitivity
7.3.1
Parameter
analysis dependent
densities
Often the exact value of the distribution parameters is unknown. A more general formulation of the reliability problem is then in the form f
z, to.,(r) =
j
exp(l( ,to))
(7.50)
g(x,r)<_0 Here l(x, ~9) is the log likelihood fimction. The two parameter vectors to and r describe the random influences. In reliability problems often some of these parameters are design parameters which can be changed. For sensitivity analysis it is necessary to know the derivatives of the function Pto r(F) with respect to these parameters. A problem in connection with sensitivi'ty analysis is what happens if some random variables are replaced by their mean to simplify the problem ([88]). The results of this section are based on [28]. Given is a family of n-dimensional probability densities f ( x , t9), depending on a parameter to E /R k. We assume that f(x, to) > 0 for all (x, to) C ~ " x D with D C//~k being an open subset of ~ k . The log likelihood function l(x, to) = ln(f(~, to)) is a function of z and to. For a fixed parameter value to C D the probability content //)to(F) of a subset F C / ~ is given by
~to(F)
=/exp(l(x,
to)) dx.
(7.51)
F
The partial derivative Po(F)) with respect to Oi is found if the necessary conditions are fulfilled, by interchanging differentiation and integration
OlPto(F)ooi - ff Ol(x,OOi to) exp(/(x, to)) dx. F
115
(7.52)
The partial elasticity c~(/P.0(F)) of the probability n 0 ( F ) with respect to #i is then Oi fa Ol(x, (7.53) ei(lPo(F))- Ptg(F) OOitg) exp(/(x, ~0)) dx. F
As scaling factor a suitable/30 is chosen, for example/30 = X/- maxF l(x, 0 " ) for a tg* with maxF l(x, tg*) < O. Replacing now in the integral the function l(x, ~) by h(z, ~) = l(x, t9)/~02, we obtain
O1Ptg(F)
f 20h(x,~) - J/3o ~ exp(~o~h(x,a)) ax
(7.54)
F
and in the same way for the partial elasticity with respect to fli 0i = /Po(F)
q(k~
f /32 Oh(x,,~) exp(/32h(x, 0)) O--~i
dx.
(7.55)
F
We define now the integrals/P~(F) by
Z%(F) = fexp(/3~h(~, 0)) d~
(7.56)
F
We assume now that there is only one point x* on the boundary of F, where the log likehood function achieves its global maximum with respect to F. Then using theorem 60, p. 110 we get for / P ~ ( F ) as /3 --* oo the asymptotic approximation
~ ( F ) ,--,(2~-)('~-1)/2IVl(x*)lZl det(AT (x*)H(x* )A(x*))l
9(7.57)
Here the functions and matrices are defined as in theorem 60. For the partial derivatives and elasticities then the following asymptotic approximations are obtained using theorem 46, p. 67
OlP~(F) 0~i
~
ei(1P~(F)) ~
Ol(x*,tg) rp~ tp~ ( fl ) 2 ~
Oi"
--~'-'
70
0t(x*,~) ( / 3 ) ~
-~-i
-~o
' /3-~ oo,
, /3 ~ oo.
(7.58) (7.59)
This yields for the partial derivative with respect to v~i at v~* (7.60)
116
The general case, where also the limit state function depends on the parameter vector can be treated in a similar way. We restrict our considerations to the case of one parameter v. We define the integrals Pr~(F) by /P~(F) =
f
exp(,5'~h(x, r)) dx.
(7.61)
g(x,r)_
OPt(F) _ 132 [ Oh(x,r)e~h(X,~.) dx Or Or e(x,r)<_o _ / g,(~, r) d~(~,). g(X,r)=O
(7.62)
i ]
with dsG(x) denoting surface integration over G. For these integrals asymptotic approximations can be derived using the results in chapter 4 (theorem 46, p. 67 for the domain integral and theorem 47, p. 70 for the surface integral). For the first integral the approximation is
Oh(X,Orr) eP~h(X,r)dx .~ /32Oh(x*'orr) / eZ2h(X'~)dx(7.63) g(x,O
~2 f
For the second integral, the surface integral, we get
g(x,,)=0
g~(x, ,) e~h(x.,) ds~(x) IVg~(x, r)l
(7.64)
Ivg~(x*, 01 g(~,O=o
"~ (~0) 2gT(x*'r)'Vxl(x*'r)]P~r(F)''Vgx(x*,r)' The last relation in the equation above is found by comparing the asymptotic approximations in the theorems 46, p. 67 and 47, p. 70
e~h(x'*) dsa(x),~ / g(x,O=o
-~o [Vxl(x*,r)]
117
ep~h(x'T) dx. / g(x,O<_o
(7.65)
2
Multiplied by (~2-go) IVael(a~*, r)l the surface integral is an asymptotic approximation for the probability/P~(F). Adding the two approximations gives the final result
Or
"~ \
-Or
IVga~(a~*i ~-~)~
~-0
(7.66)
An application of these results in structural reliability can be found in [41]. 7.3.2
Example
Let be given two independent random variables X1 and X2, each with a lognormal distribution with E(ln(Xi)) = p and var(ln(Xi)) = o.2 for i = 1, 2. The joint probability density function f(xl, x~) of these random variables is then
f(Xl,X2)
1
E i = I (ln(xi) - #)2
-- 27rO.2XlX2 exp
20.2
(7.67) .
The log likelihood function is 2
l(xa x2) = -ln(2~-) - 2 ln(o.) - ln(Xl) - ln(x2) - E ~ = ~ ( l n ( z , ) - ~)2 ,
2o.2
(7.68)
We consider as limit state function the function (7.69)
g T ( X l , X 2 ) = 7 2 -- X l ' X 2 .
For the derivatives of the log likelihood function we get
Ol(xl, x2)
_
.x~_1( 1 + ln(xi)
OX i
021(xl, .2)
Ox/=
- #
)
(7.70)
O.2 -
z~-2(1
+
l n ( x d - t'
o-2
(7.71)
o'-2).
The gradients of I and g are with c-r = (1 + ~ )
V/(Xl,X2)
:
( llC ) __X21C7
and
Vg(*l,X2)
=
--'1
"
(7.72)
At the point (7, 7), the only global maximum of the log likelihood function, we obtain for the norm of these vectors
IVl('r,'r)l
=
%,'~"f - I c 7 ,
Ivg('r,'r)l
=
v%.
118
(7.73)
For the Hessian at (7, 7)
@j(7,7)-Ivl(7'7)llgij(7,7) )
Hi=
[Vg(% 7)
i,~=1,2
(7.74)
we get then
H ] = ( "r-~(c'Y - 'r-z) c.y7-2
c'~7-2 7-2(c.y - o.-2) ) "
(7.75)
The gradient of 1 at (7, 7) is parallel to the vector (1, 1), therefore a unit vector orthogonal to this vector is the vector at = ( 1 / v ~ , - 1 / v f 2 ) . This gives
@aTHfalI =
(7~) -1-
(7.76)
The joint density at (7, 7) is 1 f(7, 7) - 2~.(7o.)2 exp(
(ln(7) - p)2 ). ~
(7.77)
Now, using the approximation formula (7.22), we obtain as approximation for the failure probability the following
PA(F) =
V~(I+ ~ ) - 7 - 2 o
"-1 ' f(7,7)
=
1 x/~-~x/~g(1 + ~ )
exp(
(7.78)
(ln(7) - ,): cr2 )'
The partial derivatives of the log likelihood function at (7, 7) with respect to the parameters p and cr are O1 O#
- -
Ol 0c~
9
ln(7) - # a2 ,
(7.79)
=
2
-
2. (-c~ -1 + a-a(ln(7) - #)2).
(7.80)
For the derivative of g at (% 7) with respect to 7 we find g~ = 27.
(7.81)
The exact probability content of the failure domain is given by P(F) =
+(-x/~-ln(7) - #).
(7.82)
er
By differentiating this function with respect to the parameters #, ~ and 7, we obtain the true sensitivities of the failure probability to changes in these parameters. 119
For tile partial elasticities of the approximations tion (7.66) t h a t %,(PA(F))
=
2# ln(7) - p
co(PA(F))
=
2
-1+
e-~(PA(F))
=
2
1+
~A(F)
we
get using equa-
(7.83)
0-2
ln(
-#
,
ln(7) - p'~ 0-2 i] 9
(7.84) (7.85)
T h e partial elasticities of the exact failure probability are asymptotically for "/ --+ o o
%,(P(F))
~
2#-In(7)- #
cG(P(F))
~
2-
e~(P(F))
~
2. I n ( 7 ) - p
(7.86)
0.2
In
O-2
- #
,
(7.87) (7.88)
Asymptotically the approximations of the elasticities approach the true elasticities.
120
Chapter 8
C r o s s i n g R a t e s of Stochastic Processes 8.1
Introduction
In m a n y problems of reliability analysis it is necessary to consider time variant influences modelled by stochastic processes. Here the extreme value distribution of the process is of interest. Depending on the structure of the process there are different possibilities to calculate this distribution. For Markov processes it is possible by using these Markov properties to find solutions. For stochastic processes, which are differentiable and therefore not Markovian, other methods must be used. Textbooks which treat also vector processes are [101] and [108]. The problem of extreme value distributions of differentiable processes is studied in [43] and [82].
8.2
Definition Processes
and
Properties
of
Stochastic
D e f i n i t i o n 22 Given is a probability space (~,/3, ~ ) and a parameter set T. A stochastic process is defined as a real function x ( t , w ) which for each t E T is a measurable function of w C ~. Normally T is a time interval and x(t) is the value of the process at time t. In the following an one-dimensional stochastic process is denoted by x(t). D e f i n i t i o n 23 Given is a probability space (~, 13, P ) and a parameter set T. A stochastic vector process is defined as a function ~(t, ~ ) : T • ~ -~ ~ , (t, ~) x ( T , w ) which for each t E T is a measurable function of w E O. In the following an n-dimensional vector process (xl ( t ) , . . . , x~ (t)) is denoted by x(t). 121
D e f i n i t i o n 24 The autocorrelation matrix R ( t z , t 2 ) of a vector process x ( t ) = ( x l ( t ) , . . . , x n ( t ) ) is given by
n ( t l , t2) = (E(~,(tl)x~ (t2))),,j =1 ..... ,
(8.1)
and its autocovariance matrix C ( t l , t 2 ) is given by C ( t l , t 2 ) = ( E ( x i ( t l ) x j ( t 2 ) ) - E(xi(tl)lF,(xj(t2)))i,j=l ..... n. D e f i n i t i o n 25 The cross-correlation matrix R ~ y ( t l , t 2 ) vector processes w(t) and y ( t ) is given by
(8.2)
of two n-dimensional
R ~ y ( t l , t2) ~- (J~(xi(tl)yj (t2)))i,j=l ..... n
(8.3)
and their cross-covariance matrix C ~ y ( t l , t2) is given by C ~ y ( t l , t 2 ) = ( ~ , ( x i ( t l ) y j ( t 2 ) ) - J~(xi(tl))J~(yj(t2))))i,j=l
.......
(8.4)
D e f i n i t i o n 26 A vector process ~(t) is called wide sense stationary if its mean value vector E(~(t)) = U (8.5) is equal to a constant vector it and the autocorrelation matrix depends only on the difference tl - t2
/~(tl, t2) ~- /~(tl "Jr T, t2 -~ 7-) for any T E J~.
(8.6)
D e f i n i t i o n 27 A vector process ~(t) is called strictly stationary if the two processes processes x ( t ) and x ( t + v) have the same finite dimensional distributions for any v E ~ , i.e. for all v E J~, all k G ~7V and all vectors z l , . . . , z~
P ( ~ ( t ~ ) ___ z l , . . . , ~(tk) < z~) = ~ ( ~ ( t l + ~-) < ~ 1 , . . . , ~(tk + ~-) ___ ~k). (8.7) D e f i n i t i o n 28 A stochastic process is said to be a normal process if all finite dimensional distributions of the process are normal distributions. A normal process is completely determined in terms of its means and autocovariances. Therefore a wide sense stationary normal process is also a strictly stationary process. D e f i n i t i o n 29 A stationary stochastic process x(t) is differentiable in the m.s. (mean square) sense if there exists a stationary stochastic process x'(t) such that for any t ~
E
~(t + h) - ~(t) - ~'(t)
= 0.
(8.8)
T h e o r e m 62 A stationary stochastic process x(t) is differentiable in the m.s. sense if its autocorrelation function r(t) has derivatives up to order two. PROOF:
A proof is given in [43], p. 84.
122
[]
Slightly stronger is the differentiability with probability one. D e f i n i t i o n 30 A stationary stochastic process x(t) is said to be differentiable
with probability one if with probability one each path x(t,w) has a derivative Ot
The difference between these two definitions is outlined in [43], p. 84/5. Let x(t) be a stationary normal process differentiable with probability one. Then its derivative process x'(t) is again a stationary normal process (see [43], chapter 7.8). Assume now that x(t) is a stationary normal process differentiable with probability one and such that ]E(x(t)) = 0 and var(z(t)) = 1. Its autocorrelation function is r(t). Then the variances and covariances of the derivative process are (see [43], p. 177) var(x'(t))
--
(8.9)
=
cov(x(0),x'(t))
r'(t).
Especially we have cov(x(t), x'(t)) : r'(0) : 0. (Since r(t) is an even function, its derivative at zero must be zero.) Since the random variables x(t) and x~(t) are both normally distributed, from this follows that they are independent too. D e f i n i t i o n 31 A stationary stochastic vector process is said to be a normal process if all finite-dimensional distributions are normal distributions. Given is a stationary normal vector process ~(t) = ( x l ( t ) , . . . , x . ( t ) ) . Assume that the process is centered and standardized, i.e.
lbh(xi(t)) cov(xi(t),xj(t))
=
0fori=
=
5ij f o r i , j = l , . . . , n .
1,...,n,
(8.10) (8.11)
Assume further that the process has continuously differentiable paths with probability one. The derivative process ~'(t) = ( x ] ( t ) , . . . , x'~(t)) exists therefore and is again a normal process. The autocovariance matrix of a~(t) is then (cov(x
(0),
' t )))i,j=l Xj(
...... :
t.-- t 'i j \
)]i,j=l,...,ri.
(8.12)
The cross-covariance matrix between the process and the derivative process is (see [108], p. 317). ......
......
(8.13) This gives for t = 0 (r~j(O))i,j=l
~- (--COV(X~(0), x j ( O ) ) ) i , j : l
......
-~-
(COV(Xi(0), X~(0)))i,j_--i . . . . . .
..... n
:
(--r~i(O))i,j=l
(8.14)
..... n .
The matrix (r~j(O))i,j=l .....~ is therefore skew-symmetric and so zi(t) and x~(t) are uncorrelated. In the case of a normal process they are independent. 123
We write for the autocorrelation matrix of the process It(t). This gives for the cross-correlation matrix I t ~ , ~ , ( 0 , t) between the process and the derivative process
It~,~,(o, t) = It'(t).
(8.15)
and for the autocorrelation matrix I t z , ( 0 , t) of the derivative processess I t s , ( 0 , t) = - I t " ( t ) .
(8.16)
By a suitable linear transformation (see l e m m a 6, p. 11) it can be achieved always t h a t the covariance matrix of the derivative process is a diagonal matrix. Then we have for the two covariance matrices of the process and the derivative process
It(0) = i ~ , R"(0) =
0
..
0
.
0) 0
9
9
(8.17)
-2 --O'rz
Here 8~ = var(z'i2(t)). This gives a simple structure for the process. Only the form of the crosscorrelation m a t r i x / / ' ( 0 ) between the process and the derivative process is not in a standardized form.
8.3
Maxima Stochastic
and
Crossings
of a
Process
Given is a strictly stationary stochastic process x(t) with continuous time parameter t. In the following we assume that the distribution function F(x) = 1P(x(t) < x) is continuous and that with probability one all paths are continuous functions. Conditions which ensure that these assumptions are fulfilled are given in [43]. The problem, which we will consider in the following, is the extreme value distribution of the process x(t). We want to find the distribution function of the r a n d o m variable
M(T) = sup{x(t); 0 < t < T}.
(8.18)
In general it is not possible to calculate the exact form of this function. Therefore we a t t e m p t to find approximations. If the paths of the process are continuously differentiable with probability one, this property can be used to derive such approximations. This topic was studied in the book of Cramer and Leadbetter [43] and then in Leadbetter, Lindgren and Rootz~n [82]. Solutions in an analytic form are available almost only for normal processes 9 In [22] the author has derived approximations for the m a x i m a of filtered Poisson processes converging towards a normal process. Approaches for a more general theory can be found in [2] and [3]. 124
The use of such differentiable vector processes for the reliability of structures is shown in [137] and [17]. Applications in ship construction are given in [60]. To find an approximation for the extreme value distribution of a stochastic process, approximations for the probability that the process crosses a level in some small time interval are derived. In this subsection the meaning of "crossing a level" will be defined in a precise way (see [82], p. 146). D e f i n i t i o n 32 With Gu we denote the set of all functions f : T ~ 1~ which are continuous on T and not identically to u in any subinterval. If x(t) is a stationary process with the properties described in the last subsection, its sample paths are with probability one members of G~, since O0
to(x(t) r c~) _ Z
zo(~(tj) = ~),
j=l
where {t j } is an enumeration of the rational points in T. Due to the continuity of r ( x ) we have lP(x(tj) = u) = 0 for all tj. Now we consider the functions f in G.. D e f i n i t i o n 33 A function f E Gu has at the point to E T a strict u-upcrossing if for some e > 0 f ( t ) < u in (to
-
e,to) and f ( t ) > u in (to, to + e).
(8.19)
Analogous a strict u-downcrossings is defined if there is some c > 0 with f ( t ) > u in (to - e,t0) and f ( t ) < u for (to, to + c). D e f i n i t i o n 34 A function f E Gu has at a point to E T a u-upcrossing if for some e > 0 and all ~ > 0 f ( t ) < u for all t E (to - e,to) and f ( t ) > u for some
t E (to, to + ~). In the same way u-downcrossings are defined. D e f i n i t i o n 35 A function f E Gu has at a point to E T a (strict) u-crossing if it has there either a (strict) u-upcrossing or a (strict) u-downcrossing. We write U(T) to denote the number of u-upcrossings in the interval (0, T) U ( T ) = # { t E (0, T); u-upcrossing in t}.
(8.20)
These upcrossings can be used to obtain approximations for the extreme value distribution of the stochastic process if the process is smooth enough. We have the relation ~(U(T)=O)
=
P(x(0)u,U(T):0X8.21
=
z.( max z(t) < u) + zp(x(0) > u, U(T) : 0). O
125
)
If we rewrite this equation we get
~ ( max x(t) < u) = P ( U ( T ) = O) - / P ( x ( O ) > u, U(T) -- 0). 0
(8.22)
On the other hand we get then that ~ ( max x(t) > u) = ~ ( x ( 0 ) > u, U(T) = 0) + J~D(X(0) < U, U(T) > 0). (8.23) "0
Since U(T) is a non-negative, integer-valued random variable we find OO
OO
P ( U ( T ) > O) = E 1P(U(T) = i) < E ilP(U(T) = i) = E(U(T)). i----1
(8.24)
i=1
And this gives P ( max x(t) > u) < ~ ( U ( T ) ) + ~ ( x ( 0 ) > u). 0
(8.25)
For a differentiable process the number of crossings of a level in a finite time interval is in general finite with probability one. This gives therefore a simple upper bound for the extreme value distribution. The conditions, when for a differentiable process the crossing rate over a level is given by the conditional expectation of the derivative process was first derived by E.V. Bulinskaya [37]. The following lemma is a simplified form of the theorem 2 in [81], formulated for stationary processes. We consider a stationary stochastic process Y(t) on the time interval [0, 1]. Let gr(x, z) denote the joint p.d.f, of Y(t) and (Y(t + r) - Y(t))/7-. Under some regularity conditions the second random variable will converge towards the derivative Y'(t) of the process. L e m m a 63 Suppose that Y(t) has continuous sample paths with probability one
and that 1. gr(x, z) is continuous in x for each z and v,
2. gr(x, z) -~ p(x, z) as T --+ o un@~mly in ~ fo~ each z, 3. g~(z, z) <_ h(z) for all 7", x, where f Izlh(z) dz < ~ .
Then f E(C(u)) = / Izlp(u, z) dz.
PROOF:
See the article by Leadbetter [81], theorem 2.
(8.26) []
In [94] the conditions under which such results, i.e. the representation of crossing rates by the joint conditional expectation of the derivative process, are worked out in detail. 126
If we consider an n-dimensional vector process x(t) and a function g ://~n --+ /R, we get an one-dimensional process g(~(t)). A u-crossing of the process g(~(t)) corresponds to a crossing through the hypersurface {~; g(z) = u} of the process In the case of normal processes it can be shown in m a n y cases that as u --+ c~ for a suitable scaling of the point process of crossings this point process converges towards a homogeneous Poisson point process. For stationary normal processes with independent components such results are proved in [85]. For stationary processes the expected number of u-upcrossings is under some regularity conditions equal to 1/2 of the expected number of the u-crossings. Therefore the approximations for the number of crossings give also approximations for the number of upcrossings. Extreme value distributions of functions of stochastic processes are of interest in a number of problems in reliability theory (see for example [17]). We consider here only stationary processes, but these methods are applicable also for non-stationary processes. Non-stationary processes are used in earthquake modelling. First results for such processes are given in in [27].
8.4
C r o s s i n g s t h r o u g h an H y p e r s u r f a c e
In the following we consider only stationary normal processes. In this section asymptotic approximations for the expected number ~(C(G)) of crossings through an hypersurface G in//~'~ are derived. The following result gives a formula for calculating the expected number of crossings of a differentiable normal vector process through a hypersurface. Belyaev[8] proved 1968 a theorem that the expected number of crossings of a differentiable vector process through an hypersurface is given a surface integral over this surface under some regularity conditions. We consider a stationary continuously differentiable normal process ~(t) = (xl(t),...,x,~(t)) with derivative process ~'(t) = (z](t),...,z~(t)), which is standardized and centered. Further is given by a continuously differentiable function g : ~ " ~ ~ a Cl-manifold G = {~; g(~) = 0}. Here it is assumed that V g ( x ) -~ o for all ~ E G. Then under some regularity conditions the expected number of crossings of the process ~(t) through the hypersurface G during (0, 1) is given by
=/
=
(8.27)
G
Here n ( ~ ) = IVg(~)F1Vg(~) is the surface normal at a and dsv(a) denotes surface integration, over G. • ( X ; Y = y) denotes the conditional expectation of the r a n d o m variable X under the condition Y = y.
127
To explain the equation above: The last factor ~,~(x) approximates the probability that the process is at the point z and the first factor lbT,(InT(a:)z'(t)[; x(t) = x) gives the conditional expectation of the absolute value of the projection of the derivative process orthogonal to the surface. The product is the probability flow through the surface at this point. T h e o r e m 64 Let x(t) be an one-dimensional stationary normal process with var(x(t)) = 1 whose autocorrelation function r(t) is twice continuously differentiable. Then the expected value 1E(C(u)) of the number of u-crossings in the time interval (0, 1) is finite for all u E ~ and is given by
~(c(~)) = ~ PROOF:
~,(~).
see [43], p. 194.
(8.28) rn
T h e o r e m 65 Let x(t) = (xt(t),...,x,~(t)) be a with probability one continuously differentiable stationary normal vector process with derivative process ~'(t) = ( x i ( t ) , . . . ,x'dt)). Assume that ~(t) is centered and standardized such that its autocorrelation functions have the form as in equation (8.17), p. 12~. n Further is given a function g(x) = 1 - o S ~ with ~i=~ a~ = 1. Then we have
for the expected value ~(C(ZG)) ~:(c(za))
= ~1(9)
2
~
~=~
~g~y.
(s.29)
PROOF: The crossings of the process x(t) through the hypersurface G(fl) = {x; fl - ~ i ~ 1 aixi = 0} correspond to the crossings of the one-dimensionM process y(t) = a T x ( t ) through the level ft. This process y(t) is a stationary normal process with n
var(y(t)) = 1 and var(y'(t)) = ~
a~5/2
(8.30)
i=1
The assertion follows from theorem 64, p. 128.
[]
For the case that the surface through which the process moves consists of a number of hyperplanes in [74] an asymptotic approximation is derived.
128
L e m m a 66 Given is a continuously differentiable stationary normal vector process ~(t) = ( x l ( t ) , . . . , Xn(t)) with derivative process x'(t) = ( x i ( t ) , . . . , x~n(t)). The process is in a standardized and centered form. e is an arbitrary ndimensional vector. Then we have for the conditional distribution of eTx~(t) that
1. lE(cTx'(t); x(t) = x) = --cT R'(O)x, 2. var(cTz~'(t); x(t) = x) = cT(--R"(O) -- R'(O)T R'(O))c. PROOF: p. 123.
From theorem 24, p. 30 we get the result with equations 8.13, D
T h e o r e m 67 Given is a with probability one continuously differentiable stationary normal vector process ~(t) = (=~(t),..., x=(t)) with derivative process ~ ' ( t ) = ( x i ( t ) , . . . , ='(t)). The process is standardized and centered. Further is given a twice continuously diffcrentiable function g : 1R'~ --~ 1R with the following properties: 1. g(o) > 0 and m i n x ~ a Ixl = 1,
2. G is a compact ( n-1)-dimensional C2-manifold with V g( x ) r o for all x E G and G is oriented by the normal vectorfield n ( x ) = 1~Tg(~)l-~Vg(~), 3. There are exactly k points z e l , . . . , x k on the manifold G with lx~l . . . .
= I~kl = min Ixl = 1, ~EG
(8.31)
and at all these points x 1, . .., ~k all main curvatures ~i,j of the surface G arc less than one. 4. The expected number of crossings of x(t) through the hypersurface G during (0, 1) is given by the integral in (8.27), p. 127. Under these conditions for the expected number of crossings of the process x ( t ) through the manifolds fiG during the time interval (0, 1) the following asymptotic relation is valid
~(c(Ia))
"~ ~pl
di
,
with n-1
di
=
H(1-
gi,j),
(8.33)
j=l
~1~(~)
(8.34)
~I(~)
= =
. ~ ( ~ ) ( - R " ( o ) - R'(0)~R'(0)),~(~), , ~ ( ~ ) n ' ( o ) r ( I . + a(~))n'(o),~(~),
(8.35)
a(~)
=
( I V g ( ~ ) t - l g ' J ( ~ ) ) i , j = *...... .
(8.36)
129
PROOF: A proof of this theorem using theorem 42, p. 59 is given in [24]. Another proof which uses theorem 44, p. 62 is given in [25]. Here we give a short outline of a proof. We consider the integral
Z(/~) = f E(InT(y)y'(t)l; y(t) = U)~,~(u)dszG(y).
(8.37)
ZG Define now
#(x) = E ( n T (x)x'(t); x ( t ) = z )
= --nT (x)R'(O)x
~(~)=var(,~T(~)~'(t); ~(t)=~)
=
(8.38)
,~T(~)(--U"(0)--W(0)~R'(0))n(~).
We write the conditional expectation in the form as integral in the form j(~)=
//
]w] ~1 el(Y) ((w - # ( y ) ) / c r i ( y ) ) ~ n ( y ) d w d s z c ( y ) .
(8.39)
Substituting w H z = Z - l ( w - p ( y ) ) / c q ( y ) and y ~-+ x = Z - l y yields
=z
ff
G/R
Here we use lemma 66, p. 128 which gives that #(fix) = ~ # ( x ) and al(~X) =
~(~). Define now in /R '~+~ the n-dimensional C2-manifold G by 0 = {(x, z) E /R~+l;g(x) = 0} so the last integral can be written as a surface integral over this surface j(~)
=
/~-(n+O/iC~l(~)z + #(:~)l~(~z)~,~(#~)dso(w ,z)
= r
-('~§
I~,~(,,)z + ~(~)1 exp(-
(8.40)
(1~,12+ 2))d~o(~,, z).
Define f(m, z)
91(~, z) = H(z~)
2l(l~lkz2),
-
=
r
+ ~(~),
In + G ( x ) .
(8.41) (8.42) (8.43)
The function f ( z , z ) has on this surface exactly at the points ( a l , 0 ) , . . . , (ak, 0) global maxima. Since at x z the function I~I ~ has a minimum with respect to G, we have
Vl~;I 2 = 2~; -- -2,~(~;).
130
(8.44)
From this follows, since R'(0) is a skew-symmetric matrix that #(xt) = = 0. The function gl(~,z) has therefore the value 0 at these maximum points. Therefore we can use theorem 52, p. 80 to calculate the integral. For this we need the first derivatives of the function #(z) and the gradient Vgl(x, z). We get
n(xZ)R'(O)x t
v,(~), ~(~)).
(8.45)
~(~) = _lVg(~)l-1 ~ gi(x)rij, (O)xj.
(8.46)
vg~(~, ~) = ( w ~ ( ~ ) z + We have
i,j-~l
Differentiating with respect to xk gives
o.(~) _
(OlVg(~)l -~
Oxk
~
Oxk
,,j:l~ gi(x)r:j(O)xj+
-~-]Vg(;~)]--Ii,j-:l ~ gik(~)r:J(O)xJ-~-IVg(~)l--ai,j-:l ~ gi(z)r:j(O)hJk)
(8.47)
"
(8.48)
Now we use equation (8.44), p. 130 and replace - x t by n ( z t ) . This gives
o~(~) _ (OlVg(~)1-1
-~Vg(~)l-li,j=l ~ gik(~)r:j(O)gJ(xl)]Vg(Tl)l-l-{-]Vg(~)l-li,j=l ~ gi(gg)r:j(O)~Jk)" Since R'(0) is a skew-symmetric matrix the first summand vanishes V~(~ 1) = -
g~k(~)r~(o)gi(~z)lvg(~t)l-1 i,j=l
-IVg(~)1-1 ~
+lVg(x)l-~ ~,~~=1g/(~)r:j(o)~jk) . Writing this in matrix form gives V#(x l)
= (I, + G(~l))R'(O)n(x l) = H(xl)R'(O)n(x l)
(8.49)
and so we obtain Vgl(x', 0) =
(H(x~)R'(O)n(xZ), al(xl)). 131
(8.50)
(0)
The generalized inverse of the Hessian at (z l, 0) is given by
IVg(z~)l-lg(~)
H](x,z)
H](~, z) of the function f
-H-(xz)
=
"
= f(~, z)-
(s.51)
.
0
0...0
-1
This gives for the determinant det(H*(zl)) n--1
Idet(H*(z'))l
I i-I (1 - ~,j)l = d,.
=
(S.52)
j----1
Using theorem 52, p. 80 gives the result. We obtain for the crossing rate
V~ ~(c(~a)) ~ =
V~
~1(-~) ~
k Vgl(xl,o)THj(~l,O)Vgl(~l,O ) 1/2 ~ ( - n ) ~_, i----1
d~
~ nT(.,)U,(O)~n(~,)n(.,)_n(~,)U,(O)n(~,) + ~(~) 11~ d~
i=1
From the property A A - A
= A follows then
I('*~'(~')R'(O)~H(~')R'(O)n(~") + '~('~')[ (8.53) =
~1(-9)
=
~1(-
d~ ~(~') + ~(~z)
7, [] Here a~(~t) is the conditonal variance of nT(~)~'(t) under the condition z(t) -- ~ t c~22(~z) is the variance of the mean #(x) if z is near ~'. The variance of n T ( ~ ) z ' ( t ) is decomposed into two components. The following theorem was proved by Buss [38] for the case of independent component processes and constant curvatures. The generalization is new.
132
T h e o r e m 68 Given is a twice continuously differentiable function g : 1Rn --* ~x~ with g(o) > O. The set M of all points m C G = { x ; g ( z ) = O} with [a~[ = m i n y e a [y[ = 1 is a compact k-dimensional submanifold of the (n - 1)dimensional manifold G with nonvanishing gradient V g ( x ) , which is oriented by n(a~) = [ V g ( x ) l - l V g ( x ) . Assume further that:
. The (n - k - 1) x (n - k - 1)-matrix H * ( x ) is regular for all x E M It is defined by: H*(x) = AT(x)H(x)A(a~) with H ( z ) = (-hij -
IVg(~)1-1. g~J (~) )~,~=, ..... ,
and the n x ( n - k - 1)-matrix A ( x ) = ( a l ( x ) , . . . , a n - k - l ( Z ) ) is composed of n - k - 1 vectors being an orthonormal basis of the subspace of T c ( x ) which is orthogonal to TM(X). 2. The expected number of crossings of ~(t) through the hypersurface G during (0, 1) is given by the integral in (8.27), p. 127. Then the expected number of crossings o f x ( t ) through fig during (0, 1) has the following asymptotic approximation J~(C(~G)) ~
Xk+l(~) ' •((k --~ 1)/2) f /o2(ag) 2r- o-~(fi~) 27r(k+l)/2 , V ] - ~ e t ( ~ - ~ d s M ( ~ , ) M
(8.54)
with
1. cr (ae) = nT(x)(--R"(O) -- R'(O)T R'(O))n(x), 2.
=
PROOF:
8.4.1
+
Details are given in [31].
[]
Examples
EXAMPLE: Given is a continuously differentiable two dimensional normal process a~(t) = (xl(t), x2(t)) with independent components. The variances of the process and the derivative process a~'(t) = (x~ (t), x'2(t)) are given by
var(xi(t)) = var(x~(t)) = 1 for i = 1, 2. T h e limit state function is
g(xl,x2) = 1 - (x~ a2 + x~) with O < a < 1 If a -- 0 the crossings through f~G are exactly the crossings of the process x2(t) across the level ft. 133
If 0 < a < 1 there are exactly two points on the ellipse G where the probability density of the process is m a x i m a l , the points (0, 1) and ( 0 , - 1 ) . We get as a s y m p t o t i c a p p r o x i m a t i o n for the crossing rate
1F,(C(13G)) ,,, 2~1 (/3) V f f ~ 1 - 1a -2" If a = 1 the density is m a x i m a l on the whole circle. This gives for the crossing rate
lF,(C(fla) ) ,.,, X2(~) V~. []
EXAMPLE: Given is an n-dimensional continuouly differentiable n o r m a l process x(t). Further is given a function g(x) = 1 - E ~ = I 7ix~ with 0 < 7t _< . . . _< 7n-1 < 7,~ = 1. T h e n there are exactly two points y l = ( 0 , . . . , 0 , 1) and y2 = _ y a on the surface G = {x; g(x) = 0} with m i n i m a l distance to the origin. We get for k = 1,2: n-1 1. dk = I - [ i = l (1 - 71),
2.
O.2(yk) = _r~n(O ) -
3. c~(y k)
=
V'~-I(1
-
n-1 ri~(0), '2
Ei:l
'
This gives
~(C(~G)).,,,~ ~1(~)2 / z ]
[
V; v
"2
n-1
11i=1 ( - - 7 i )
A further numerical e x a m p l e can be found in [21].
134
~2
Fnn(O)- E i = ] ~[iFin(O)
Bibliography

[1] M. Abramowitz and I.A. Stegun. Handbook of Mathematical Functions. Dover, New York, 1965.

[2] J.M.P. Albin. On Extremal Theory for Nondifferentiable Stationary Processes. PhD thesis, University of Lund, Department of Mathematical Statistics, Lund (Sweden), 1987.

[3] J.M.P. Albin. On extremal theory for stationary processes. Annals of Probability, 18:92-128, 1990.

[4] S.T. Ariaratnam, G.I. Schuëller, and I. Elishakoff, editors. Structural Dynamics. Elsevier, London, 1988.

[5] G. Aumann and O. Haupt. Einführung in die reelle Analysis, volume II. De Gruyter, 1979.

[6] B.M. Ayyub and K.-L. Lai. Structural reliability assessment with ambiguity and vagueness in failure. Naval Engineers Journal, 104:21-35, 1992.

[7] O.E. Barndorff-Nielsen and D.R. Cox. Asymptotic Techniques for Use in Statistics. Chapman and Hall, London, 1989.

[8] Yu.K. Belyaev. On the number of exits across a boundary of a region by a vector process. Theory of Probability and its Applications, 13:320-324, 1968.

[9] Y. Ben-Haim and I. Elishakoff. Convex Models of Uncertainty in Applied Mechanics. Elsevier, Amsterdam, 1990.

[10] J.R. Benjamin and C.A. Cornell. Probability, Statistics and Decision for Civil Engineers. McGraw Hill, New York, N.Y., 1970.

[11] L. Berg. Asymptotische Darstellungen und Entwicklungen. VEB Deutscher Verlag der Wissenschaften, Berlin (GDR), 1968.

[12] S.M. Berman. Sojourns and Extremes of Stochastic Processes. Wadsworth & Brooks/Cole Advanced Books & Software, Pacific Grove, CA, 1992.

[13] P. Billingsley. Probability and Measure. Wiley, New York, first edition, 1979.
[14] H. Birndt and W.-D. Richter. Vergleichende Betrachtungen zur Bestimmung des asymptotischen Verhaltens mehrdimensionaler Laplace-Gauß-Integrale. Zeitschrift für Analysis und ihre Anwendungen, 4(3):269-276, 1985.

[15] N. Bleistein and R.A. Handelsman. Asymptotic Expansions of Integrals. Dover Publications Inc., New York, 1986. Reprint of the edition by Holt, Rinehart and Winston, New York, 1975.

[16] V.V. Bolotin. Statistical Methods in Structural Mechanics. Holden-Day, San Francisco, 1969.

[17] V.V. Bolotin. Wahrscheinlichkeitsmethoden zur Berechnung von Konstruktionen. VEB Verlag für das Bauwesen, Berlin (GDR), 1981.

[18] E. Bolthausen. Laplace approximations for sums of independent random vectors. Probability Theory and Related Fields, 72:305-318, 1986.

[19] E. Bolthausen. Laplace approximations for sums of independent random vectors. Part II. Degenerate maxima and manifolds. Probability Theory and Related Fields, 76:167-206, 1987.

[20] K. Breitung. An asymptotic formula for the failure probability. In DIALOG 82-5, pages 19-45, Lyngby (Denmark), 1982. Department of Civil Engineering, Danmarks Ingeniørakademi.

[21] K. Breitung. Asymptotic approximations for multinormal domain and surface integrals. In G. Augusti et al., editor, Proceedings of the 4th International Conference on Applications of Statistics and Probability in Soil and Structural Engineering, pages 755-767 (Vol. 2), Firenze (Italy), 1983. Università di Firenze, Pitagora Editrice.

[22] K. Breitung. Edgeworthentwicklungen für die Dichten von differenzierbaren gefilterten Poissonprozessen. PhD thesis, Fakultät 10, Ludwig-Maximilians-Universität München, 1983.

[23] K. Breitung. Asymptotic approximations for multinormal integrals. Journal of the Engineering Mechanics Division ASCE, 110(3):357-366, 1984.

[24] K. Breitung. Asymptotic approximations for the crossing rates of stationary Gaussian vector processes. Technical Report 1984:1, Dept. of Math. Statistics, Lund (Sweden), 1984.

[25] K. Breitung. Asymptotic crossing rates for stationary Gaussian vector processes. Stochastic Processes and Applications, 29:195-207, 1988.

[26] K. Breitung. Asymptotic approximations for probability integrals. Journal of Probabilistic Engineering Mechanics, 4(4):187-190, 1989.
[27] K. Breitung. The extreme value distribution of non-stationary vector processes. In A. H.-S. Ang, M. Shinozuka, and G.I. Schuëller, editors, Proceedings of ICOSSAR '89, 5th Int'l Conf. on Structural Safety and Reliability, volume II, pages 1327-1332. American Society of Civil Engineers, 1990.

[28] K. Breitung. Parameter sensitivity of failure probabilities. In A. Der Kiureghian and P. Thoft-Christensen, editors, Reliability and Optimization of Structural Systems '90, Proceedings of the 3rd IFIP WG 7.5 Conference, Berkeley, California, pages 43-51, New York, 1991. Springer. Lecture Notes in Engineering 61.

[29] K. Breitung. Probability approximations by log likelihood maximization. Journal of the Engineering Mechanics Division ASCE, 117(3):457-477, 1991.

[30] K. Breitung. A criticism of statistical methods in probabilistic models in structural reliability. In Y.K. Lin, editor, Probabilistic Mechanics and Structural and Geotechnical Reliability, Proceedings of the Sixth Specialty Conference, pages 236-239. American Society of Civil Engineers, 1992.

[31] K. Breitung. Crossing rates of Gaussian vector processes. In Transactions of the 11th Prague Conference on Information Theory, Statistical Decision Functions and Random Processes 1990, volume I, pages 303-314, Prague, Czech Rep., 1992. Academia.

[32] K. Breitung and L. Faravelli. Log-likelihood maximization and response surface reliability assessment. Nonlinear Dynamics, 5(3):273-286, 1994.

[33] K. Breitung and M. Hohenbichler. Asymptotic approximations for multivariate integrals with an application to multinormal probabilities. Journal of Multivariate Analysis, 30:80-97, 1989.

[34] K. Breitung and Y. Ibrahim. Problems of statistical inference in structural reliability. In G.I. Schuëller, M. Shinozuka, and J.T.P. Yao, editors, Proceedings of ICOSSAR '93, 6th Int'l Conf. on Structural Safety and Reliability, volume II, pages 1223-1226, Rotterdam, 1994. A.A. Balkema.

[35] K. Breitung and W.-D. Richter. A geometric approach to an asymptotic expansion for large deviation probabilities of Gaussian random vectors, 1993. Submitted to the Journal of Multivariate Analysis.

[36] C.G. Bucher. Adaptive sampling - an iterative fast Monte Carlo method. Structural Safety, 5:119-126, 1988.

[37] E.V. Bulinskaya. On the mean number of crossings of a level by a stationary Gaussian process. Theory of Probability and its Applications, 6:435-438, 1961.

[38] A. Buss. Crossings of non-Gaussian processes with reliability applications. PhD thesis, Department of Structural Engineering, Cornell University, Ithaca, N.Y., 1986.
[39] F. Casciati and L. Faravelli. Fragility Analysis of Complex Structures. Research Studies Press Ltd., Taunton, UK, 1991.

[40] E. Castillo. Extreme Value Theory in Engineering. Academic Press, Boston, 1989.
[41] R.-H. Cherng and Y.K. Wen. Reliability of uncertain nonlinear trusses under random excitation. II. Journal of the Engineering Mechanics Division ASCE, 120(4):748-757, 1994.

[42] C.A. Cornell. A probability-based structural code. Journal of the American Concrete Institute, 66(12):974-985, 1969.

[43] H. Cramér and M.R. Leadbetter. Stationary and Related Stochastic Processes. Wiley, New York, 1967.

[44] A. Der Kiureghian. Measures of structural safety under imperfect states of knowledge. Journal of the Engineering Mechanics Division ASCE, 115(5):1119-1140, 1989.

[45] A. Der Kiureghian and M. De Stefano. Efficient algorithm for second-order reliability analysis. Journal of the Engineering Mechanics Division ASCE, 117(12):2904-2923, 1991.

[46] A. Der Kiureghian, H.Z. Lin, and S.-J. Hwang. Second-order reliability approximations. Journal of the Engineering Mechanics Division ASCE, 113(8):1208-1225, 1987.

[47] O. Ditlevsen. Structural reliability and the invariance problem. Technical Report Research Report No. 22, Solid Mechanics Division, University of Waterloo, Waterloo, Canada, 1973.

[48] O. Ditlevsen. Generalized second-moment reliability index. Journal of the Engineering Mechanics Division ASCE, 107(6):1191-1209, 1979.

[49] O. Ditlevsen. Narrow reliability bounds for structural systems. Journal of Structural Mechanics, 7:435-451, 1979.

[50] O. Ditlevsen. Principle of normal tail approximation. Journal of the Engineering Mechanics Division ASCE, 107(6):1191-1209, 1981.

[51] O. Ditlevsen. Taylor expansion of series system reliability. Journal of the Engineering Mechanics Division ASCE, 108:293-307, 1984.

[52] K. Dolinski. First-order second-moment approximation in reliability of systems: critical review and alternative approach. Structural Safety, 1:211-213, 1983.

[53] R.S. Ellis and J.S. Rosen. Asymptotic analysis of Gaussian integrals. I. Isolated minimum points. Transactions of the American Mathematical Society, 273(2):447-481, 1982.
[54] R.S. Ellis and J.S. Rosen. Asymptotic analysis of Gaussian integrals, II: Manifold of minimum points. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 273(2):156-181, 1982.

[55] R.S. Ellis and J.S. Rosen. Laplace method for Gaussian integrals with an application to statistical mechanics. The Annals of Probability, 10(1):47-66, 1982.

[56] S. Engelund and R. Rackwitz. A benchmark study on importance sampling techniques in structural reliability. Structural Safety, 12:255-276, 1993.

[57] A. Erdélyi. Asymptotic Expansions. Dover, New York, 1956.

[58] L. Faravelli. Response-surface approach for reliability analysis. Journal of the Engineering Mechanics Division ASCE, 115(12):2763-2781, 1989.

[59] M.V. Fedoryuk. Metod perevala (The saddlepoint method). Nauka, Moscow, USSR, 1977. In Russian.
[60] R. Ferrando, M. Dogliani, and R. Cazzulo. Maxima of mooring forces: an approach based on outcrossing methods. Technical Report RR 224, Registro italiano navale, Genova (Italy), 1989.

[61] B. Fiessler, H.-J. Neumann, and R. Rackwitz. Quadratic limit states in structural reliability. Journal of the Engineering Mechanics Division ASCE, 105(4):661-676, 1979.

[62] W. Fleming. Functions of Several Variables. Springer, New York, 1977.

[63] J. Focke. Asymptotische Entwicklungen mittels der Methode der stationären Phase. Berichte über die Verhandlungen der Sächsischen Akademie der Wissenschaften zu Leipzig, Mathematisch-naturwissenschaftliche Klasse, 101(3), 1953.

[64] A.M. Freudenthal. The safety of structures. Transactions of the ASCE, 112:1337-1397, 1947.

[65] A.M. Freudenthal. Safety and the probability of structural failure. Transactions of the ASCE, 121:1337-1397, 1956.

[66] W. Fulks and J.O. Sather. Asymptotics II. Laplace's method for multiple integrals. Pacific Journal of Mathematics, 11:185-192, 1961.

[67] F.A. Graybill. Theory and Application of the Linear Model. Wadsworth and Brooks/Cole, Pacific Grove, California, 1976.

[68] E.J. Gumbel. Statistics of Extremes. Columbia University Press, New York, 1958.
[69] A.M. Hasofer and N.C. Lind. An exact and invariant first-order reliability format. Journal of the Engineering Mechanics Division ASCE, 100(1):111-121, 1974.
[70] M.R. Hestenes. Optimization Theory, The Finite Dimensional Case. Wiley, 1975.

[71] M. Hohenbichler, S. Gollwitzer, W. Kruse, and R. Rackwitz. New light on first- and second-order reliability methods. Structural Safety, 4:267-284, 1987.

[72] M. Hohenbichler and R. Rackwitz. Non-normal dependent vectors in structural safety. Journal of the Engineering Mechanics Division ASCE, 107(6):1227-1241, 1981.

[73] M. Hohenbichler and R. Rackwitz. First-order concepts in system reliability. Structural Safety, 1:177-188, 1983.

[74] M. Hohenbichler and R. Rackwitz. Asymptotic crossing rate of Gaussian vector processes into intersections of failure domains. Probabilistic Engineering Mechanics, 1(3):177-179, 1986.

[75] M. Hohenbichler and R. Rackwitz. Improvement of second-order reliability estimates by importance sampling. Journal of the Engineering Mechanics Division ASCE, 114(12):2195-2198, 1988.

[76] L.C. Hsu. On the asymptotic evaluation of a class of multiple integrals involving a parameter. Duke Mathematical Journal, 15:625-634, 1948.

[77] Y. Ibrahim. Observations on applications of importance sampling in structural reliability analysis. Structural Safety, 9(4):269-282, 1991.

[78] A.I. Johnson. Strength, Safety and Economical Dimensions of Structures. Statens Kommitté för Byggnadsforskning, Stockholm, Sweden, 1953. Meddelanden No. 22.

[79] D.S. Jones. Generalized Functions. McGraw-Hill, London, 1966.

[80] A. Karamchandani and C.A. Cornell. Adaptive hybrid conditional expectation approaches for reliability estimation. Structural Safety, 11(1):59-74, 1991.

[81] M.R. Leadbetter. On crossings of levels and curves by a wide class of stochastic processes. Annals of Mathematical Statistics, 37:260-267, 1966.

[82] M.R. Leadbetter, G. Lindgren, and H. Rootzén. Extremes and Related Properties of Random Sequences and Processes. Springer, New York, 1983.

[83] T. Levi-Civita. The Absolute Differential Calculus. Dover, New York, 1977.
[84] Y.K. Lin. Probabilistic theory of structural dynamics. R. Krieger Publishing Co., Malabar, Florida, 1976.
[85] G. Lindgren. Extremal ranks and transformation of variables for extremes of functions of multivariate Gaussian processes. Stochastic Processes and Applications, 17:285-312, 1984.

[86] D.V. Lindley. The use of prior probability distributions in statistical inference and decisions. In Proceedings of the 4th Berkeley Symposium, volume 1, pages 453-468, 1961.

[87] P.-L. Liu and A. Der Kiureghian. Optimization algorithms for structural reliability. Structural Safety, 9(3):161-177, 1991.

[88] H. Madsen. Omission sensitivity factors. Structural Safety, 5:35-45, 1988.

[89] H. Madsen, S. Krenk, and N.C. Lind. Methods of Structural Safety. Prentice-Hall Inc., Englewood Cliffs, N.J., 1986.

[90] M.A. Maes. Codification of design load criteria subject to modeling uncertainty. Journal of the Structural Engineering Division ASCE, 117(10):2988-3007, 1991.

[91] M.A. Maes, K. Breitung, and D.J. Dupuis. Asymptotic importance sampling. Structural Safety, 12:167-186, 1993.

[92] M.A. Maes and D.J. Dupuis. Computational aspects of civil engineering reliability analysis. In Proceedings of the 1991 Annual Conference, Canadian Society of Civil Engineers, Vancouver, volume 2, pages 221-230, 1991.

[93] J.R. Magnus and H. Neudecker. Matrix Differential Calculus with Applications in Statistics and Econometrics. Wiley, New York, 1988.

[94] M.B. Marcus. Level crossings of a stochastic process with absolutely continuous sample paths. Annals of Probability, 5:52-71, 1977.

[95] K. Marti. Approximations and derivatives of probabilities in structural design. Zeitschrift für angewandte Mathematik und Mechanik, 72(6):T575-T578, 1992.

[96] K. Marti. Stochastic optimization in structural design. Zeitschrift für angewandte Mathematik und Mechanik, 72(6):T452-T464, 1992.

[97] G. Matheron. Estimating and Choosing. Springer, Berlin, 1989.

[98] M. Mayer. Die Sicherheit der Bauwerke. Springer, Berlin, 1926.

[99] R.E. Melchers. Structural Reliability, Analysis and Prediction. Wiley, New York, 1987.
[100] K.S. Miller. Multidimensional Gaussian Distributions. Wiley, New York, 1964.

[101] K.S. Miller. An Introduction to Vector Stochastic Processes. Krieger, Huntington, N.Y., 1980.
[102] J.A. Munkres. Analysis on Manifolds. Addison-Wesley, Redwood City, CA, 1991.

[103] D.C. Murdoch. Linear Algebra for Undergraduates. Wiley, New York, 1957.
[104] A. Naess. Bounding approximations to some quadratic limit states. Journal of the Engineering Mechanics Division ASCE, 113(10):1474-1492, 1987.

[105] F.W.J. Olver. Asymptotics and Special Functions. Academic Press, London, 1974.

[106] D.B. Owen. A table of normal integrals. Communications in Statistics - Simulation and Computation, B9(4):389-419, 1980.

[107] G. Pap and W.-D. Richter. Zum asymptotischen Verhalten der Verteilungen und der Dichten gewisser Funktionale Gauss'scher Zufallsvektoren. Mathematische Nachrichten, 135:119-124, 1988.

[108] A. Papoulis. Probability, Random Variables and Stochastic Processes. McGraw-Hill Kogakusha, Ltd., Tokyo, first edition, 1965.

[109] B.N. Pšeničnyj. Algorithms for general mathematical programming problems. Cybernetics, 6(5):120-125, 1970.

[110] R. Rackwitz and B. Fiessler. Structural reliability under combined random load sequences. Computers and Structures, 9:489-494, 1978.

[111] C.R. Rao. Linear Statistical Inference and Its Applications. Wiley, New York, second edition, 1973.

[112] C.R. Rao. Lineare statistische Methoden und ihre Anwendungen. Akademie-Verlag, Berlin, 1973.

[113] C.R. Rao and S.K. Mitra. Generalized Inverse of Matrices and its Applications. Wiley, New York, 1971.

[114] W.-D. Richter. Moderate deviations in special sets of $\mathbb{R}^k$. Mathematische Nachrichten, 113:339-354, 1983.

[115] W.-D. Richter. Laplace-Gauss integrals, Gaussian measure asymptotic behavior and probabilities of moderate deviations. Zeitschrift für Analysis und ihre Anwendungen, 4(3):257-267, 1985.

[116] W.-D. Richter. Remarks on moderate deviations in the multidimensional central limit theorem. Mathematische Nachrichten, 122:167-173, 1985.
[117] W.-D. Richter. Laplace integral and probabilities of moderate deviations in $\mathbb{R}^k$. In Probability Distributions and Mathematical Statistics, pages 406-420, Tashkent, 1986. "Fan". In Russian.
[118] W.-D. Richter. Moderate deviations for a certain class of sets in finite dimensional space. Theory of Probability and its Applications, 31(1):174-175, 1986.

[119] W.-D. Richter. Zur Restgliedabschätzung im mehrdimensionalen integralen Zentralen Grenzwertsatz der Wahrscheinlichkeitstheorie. Mathematische Nachrichten, 135:103-117, 1988.
[120] M. Rosenblatt. Remarks on a multivariate transformation. The Annals of Mathematical Statistics, 23:470-472, 1952.
[121] E. Rosenblueth and L. Esteva. Reliability basis for some Mexican codes. ACI Publication, SP-31:1-41, 1972.
[122] H. Ruben. An asymptotic expansion for the multivariate normal distribution and Mill's ratio. J. Res. Nat. Bur. Standards B, 68(1):3-11, 1964.
[123] R.Y. Rubinstein. Monte Carlo Optimization, Simulation and Sensitivity of Queuing Networks. Wiley, New York, 1986.

[124] G.I. Schuëller. Einführung in die Sicherheit und Zuverlässigkeit von Tragwerken. Verlag Ernst u. Sohn, Munich, 1981.

[125] G.I. Schuëller and R. Stix. A critical appraisal of methods to determine failure probabilities. Structural Safety, 4:293-309, 1987.

[126] M. Shinozuka. Basic analysis of structural safety. Journal of the Structural Division ASCE, 109(3):721-740, 1983.
[127] L. Sirovich. Techniques of Asymptotic Analysis. Springer, New York, 1971.
[128] G. Spaethe. Die Sicherheit tragender Baukonstruktionen. Springer Verlag, Wien, New York, second edition, 1992.
[129] J. Stoer. Einführung in die Numerische Mathematik I. Springer, Berlin, 1972.

[130] J.A. Thorpe. Elementary Topics in Differential Geometry. Springer, New York, 1979.

[131] L. Tierney and J.B. Kadane. Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association, 81:82-86, 1986.
[132] T.Y. Torng and Y.-T. Wu. Development of a probabilistic analysis methodology for structural reliability estimation. In AIAA/ASME/ASCE/AHS/ASC 32nd Structures, Structural Dynamics, and Materials Conference, pages 1271-1279. The American Institute of Aeronautics and Astronautics, 1991. Part 2.
[133] L. Tvedt. Two second-order approximations to the failure probability. Technical Report Veritas Report RDIV/20-004-83, Det norske Veritas, Oslo, Norway, 1983.
[134] L. Tvedt. Second-order reliability by an exact integral. In P. Thoft-Christensen, editor, Reliability and Optimization of Structural Systems '88, Proceedings of the 2nd IFIP WG 7.5 Conference, London, pages 377-384, New York, 1989. Springer, Lecture Notes in Engineering 48.

[135] L. Tvedt. Distribution of quadratic forms in normal space - application to structural reliability. Journal of the Engineering Mechanics Division ASCE, 116(6):1183-1198, 1990.

[136] S. Uryas'ev. A differentiation formula for integrals over sets given by inclusion. Numerical Functional Analysis and Optimization, 10:827-841, 1989.

[137] D. Veneziano, M. Grigoriu, and C.A. Cornell. Vector process models for system reliability. Journal of the Engineering Mechanics Division ASCE, 103(3):441-460, 1977.

[138] G.N. Watson. Harmonic functions associated with the parabolic cylinder. Proceedings of the London Mathematical Society, 17:116-148, 1918. Series 2.

[139] J.R. Westlake. A Handbook of Numerical Matrix Inversion and Solution of Linear Equations. Wiley, New York, 1968.

[140] R. Wong. Asymptotic Approximations of Integrals. Academic Press, San Diego, 1989.

[141] Y.-T. Wu and P.H. Wirsching. New algorithm for structural reliability estimation. Journal of the Engineering Mechanics Division ASCE, 113(9):1319-1336, 1987.
Index

asymptotic analysis, 34
asymptotic power series, 40
asymptotic scale, 41
asymptotic sequence, 41
asymptotically equivalent, 37
autocorrelation matrix, 122
autocovariance matrix, 122
Barndorff-Nielsen, 7
Belyaev, 127
Bulinskaya, 126
$\chi_n^2$-distribution, 29
Cholesky decomposition, 21
Compactification Lemma, 53
convergence in distribution, 31
Cornell, 86
Cox, 7
Cramér-Wold device, 31
cross-correlation matrix, 122
cross-covariance matrix, 122
curvature, 15
Definiteness under constraints, 19
Der Kiureghian, 4
Ditlevsen, 86, 98
Divergence Theorem, 16
Dolinski, 90
downcrossing, 125
downcrossing, strict, 125
Esteva, 86
Extrema under constraints, 16
failure domain, 3
Fiessler, 102
Focke, 52
FORM, 90
Freudenthal, 86, 109
Fulks, 52, 60
generalized inverse, 10
Gramian, 10
Gumbel, 7
Hasofer, 87, 109
Hasofer/Lind index, 87
Hohenbichler, 53, 89
Hsu, 52
hypersurface, 14
Implicit Function Theorem, 12
Jones, 52
Lagrange multiplier, 17
Leadbetter, 7
Lebesgue Convergence Theorem, 12
Leibniz Theorem for Differentiation of an Integral, 21
limit state function, 85
limit state surface, 3, 85
Lind, 87, 109
Lindgren, 7
Lindley, 8
local coordinate system, 15
Maes, 4
main curvature directions, 15
main curvatures, 15
manifold, 14
Matheron, 5
moment generating function, 32
Moore-Penrose inverse, 10
Morse Lemma, 13
Naess, 91
normal distribution, n-dimensional, 29
normal distribution, univariate, 28
orthogonal projection, 9
parallel system, 86
parameter-dependent integrals, 21
partial elasticity, 11
Poincaré, 42
projection matrix, 9
projection vector, 9
Pšeničnyj, 102
Rackwitz, 89, 102
regular transformation, 14
reliability index, 87
Richter, 7
Ringo and Smyth, v
Rootzén, 7
Rosenblatt, 89
Rosenblatt transformation, 89
Rosenblueth, 86
Ruben, 93
Rubinstein, 6
safe domain, 3
Sather, 52, 60
series system, 86
Shinozuka, 109
Sirovich, 52
SORM, 90
sphere, n-dimensional, 16
stationary normal vector process, 123
stochastic process, 121
stochastic vector process, 121
strictly stationary, 122
Torng, 91
Tvedt, 91
upcrossing, 125
upcrossing, strict, 125
Uryas'ev, 25
Watson, 45
Watson's lemma, 46
weakly convergent, 31
Weingarten mapping, 15
wide sense stationary, 122
Wu, 91